diff --git a/abcnetrealtimescenetextspottingwithadaptivebeziercurvenetwork/bc67d115-01c1-4a35-9229-709c2970d9bb_content_list.json b/abcnetrealtimescenetextspottingwithadaptivebeziercurvenetwork/bc67d115-01c1-4a35-9229-709c2970d9bb_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..1062b5e8ee14101dba0e992b56a1277b528105c1
--- /dev/null
+++ b/abcnetrealtimescenetextspottingwithadaptivebeziercurvenetwork/bc67d115-01c1-4a35-9229-709c2970d9bb_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bcd078b49962938b8630e185377edfa51ab599dfdddfabd3efaff9045f312eaa
+size 74041
diff --git a/abcnetrealtimescenetextspottingwithadaptivebeziercurvenetwork/bc67d115-01c1-4a35-9229-709c2970d9bb_model.json b/abcnetrealtimescenetextspottingwithadaptivebeziercurvenetwork/bc67d115-01c1-4a35-9229-709c2970d9bb_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c7ef65ba0cd569ba4634a3b06e27570eeaeb18c4
--- /dev/null
+++ b/abcnetrealtimescenetextspottingwithadaptivebeziercurvenetwork/bc67d115-01c1-4a35-9229-709c2970d9bb_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ad8d07895e8085ea61339d81ad95fd7f60bad1f26a0a2c7bb515d2ddcc7ad43c
+size 90998
diff --git a/abcnetrealtimescenetextspottingwithadaptivebeziercurvenetwork/bc67d115-01c1-4a35-9229-709c2970d9bb_origin.pdf b/abcnetrealtimescenetextspottingwithadaptivebeziercurvenetwork/bc67d115-01c1-4a35-9229-709c2970d9bb_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..388b25928287fe70e81347ad6a3945f184155f4d
--- /dev/null
+++ b/abcnetrealtimescenetextspottingwithadaptivebeziercurvenetwork/bc67d115-01c1-4a35-9229-709c2970d9bb_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:67ee52e4dfd9fe1e4fa543c16ffb7571f45dae31a5f5e9ae9372a83407c24667
+size 6321184
diff --git a/abcnetrealtimescenetextspottingwithadaptivebeziercurvenetwork/full.md b/abcnetrealtimescenetextspottingwithadaptivebeziercurvenetwork/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..52a38642e304b8f9a64047cec14a9b42560d06d1
--- /dev/null
+++ b/abcnetrealtimescenetextspottingwithadaptivebeziercurvenetwork/full.md
@@ -0,0 +1,324 @@
+# ABCNet: Real-time Scene Text Spotting with Adaptive Bezier-Curve Network
+
+Yuliang Liu¹ ², Hao Chen², Chunhua Shen², Tong He², Lianwen Jin¹, Liangwei Wang³
+¹South China University of Technology  ²University of Adelaide, Australia  ³Huawei Noah's Ark Lab
+
+# Abstract
+
+Scene text detection and recognition has received increasing research attention. Existing methods can be roughly categorized into two groups: character-based and segmentation-based. These methods either require costly character-level annotation or need to maintain a complex pipeline, which is often not suitable for real-time applications. Here we address the problem by proposing the Adaptive Bezier-Curve Network (ABCNet). Our contributions are three-fold: 1) For the first time, we adaptively fit oriented or curved text by a parameterized Bezier curve. 2) We design a novel BezierAlign layer for extracting accurate convolution features of a text instance with arbitrary shapes, significantly improving the precision compared with previous methods. 3) Compared with standard bounding box detection, our Bezier curve detection introduces negligible computation overhead, resulting in superiority of our method in both efficiency and accuracy.
+
+Experiments on oriented or curved benchmark datasets, namely Total-Text and CTW1500, demonstrate that ABCNet achieves state-of-the-art accuracy while significantly improving the speed. In particular, on Total-Text, our real-time version is over 10 times faster than recent state-of-the-art methods with competitive recognition accuracy.
+
+Code is available at https://git.io/AdelaiDet.
+
+# 1. Introduction
+
+Scene text detection and recognition has received increasing attention due to its numerous applications in computer vision. Although tremendous progress has been made recently [10, 42, 28, 36, 27, 43, 45, 41, 46, 14], detecting and recognizing text in the wild remains largely unsolved due to the diversity of text patterns in sizes, aspect ratios, font styles, perspective distortions, and shapes. Although the emergence of deep learning has significantly improved the performance of the task of scene text spotting, a considerable gap still exists in current methods for real-world
+
+
+
+
+
+
+(a) Segmentation-based method.
+
+
+
+
+
+
+(b) Our proposed ABCNet.
+Figure 1. Segmentation-based results are easily affected by nearby text. The non-parametric, unstructured segmentation results make it very difficult to align features for the subsequent recognition branch. Segmentation-based results also usually need complex post-processing, hampering efficiency. Benefiting from the parameterized Bezier curve representation, our ABCNet can produce structured detection regions, and thus the BezierAlign sampling process can be used to naturally connect the recognition branch.
+
+applications, especially in terms of efficiency.
+
+Recently, many end-to-end methods [31, 37, 34, 24, 44, 21] have significantly improved the performance of oriented or curved scene text spotting. However, these methods either use segmentation-based approaches that maintain a complex pipeline or require a large amount of expensive character-level annotations. In addition, almost all of these methods are slow in inference, hampering deployment to real-time applications. Thus, our motivation is to design a simple yet effective end-to-end framework for spotting oriented or curved scene text in images [4, 27], which ensures a fast inference time while achieving on-par or even better performance compared with state-of-the-art methods.
+
+
+Figure 2. Overview of some end-to-end scene text spotting methods that are most relevant to ours. Inside the GT (ground-truth) box, 'W', 'R', and 'C' represent word-level annotation, text content, and character-level annotation, respectively. 'H', 'Q', and 'P' represent that the method is able to detect horizontal, quadrilateral, and oriented or curved text, respectively. 'RP' means that the method can recognize the curved text inside a quadrilateral box. 'R': recognition; 'BBox': bounding box. Dashed box represents the shape of the text which the method is unable to detect.
+
+To achieve this goal, we propose the Adaptive Bezier Curve Network (ABCNet), an end-to-end trainable framework, for oriented or curved scene text spotting. ABCNet enables oriented or curved scene text detection with Bezier curve adaptation, which introduces negligible computation overhead compared with standard rectangle bounding box detection. In addition, we design a novel feature alignment layer—BezierAlign—to precisely calculate convolutional features of text instances in curved shapes, and thus high recognition accuracy can be achieved without introducing much computation cost. For the first time, we represent the oriented or curved text with parameterized Bezier curves, and the results show the effectiveness of our method. Examples of our spotting results are shown in Figure 1.
+
+Note that previous methods such as TextAlign [11] and FOTS [25] can be viewed as a special case of ABCNet because a quadrilateral bounding box can be seen as the simplest oriented or curved bounding box with 4 straight boundaries. In addition, ABCNet can avoid complicated transformation such as 2D attention [20], making the design of the recognition branch considerably simpler.
+
+We summarize our main contributions as follows.
+
+- In order to accurately localize oriented and curved scene text in images, for the first time, we introduce a new concise parametric representation of curved scene text using Bezier curves. It introduces negligible computation overhead compared with the standard bounding box representation.
+- We propose a new feature sampling method, called BezierAlign, for accurate feature alignment, and thus the recognition branch can be naturally connected to the overall structure. By sharing backbone features, the recognition branch can be designed with a light-weight structure.
+
+- The simplicity of our method allows it to perform inference in real time. ABCNet achieves state-of-the-art performance on two challenging datasets, Total-Text and CTW1500, demonstrating advantages in both effectiveness and efficiency.
+
+# 1.1. Related Work
+
+Scene text spotting requires detecting and recognizing text simultaneously instead of addressing only one of the two tasks. Recently, the emergence of deep-learning-based methods has significantly advanced the performance of text spotting. Both detection and recognition have been dramatically improved. We summarize several representative deep-learning-based scene text spotting methods into the following two categories. Figure 2 shows an overview of typical works.
+
+Regular End-to-end Scene Text Spotting Li et al. [19] propose the first deep-learning-based end-to-end trainable scene text spotting method. The method successfully uses RoI Pooling [35] to join the detection and recognition features via a two-stage framework, but it can only spot horizontal and focused text. Its improved version [20] significantly improves the performance, but the speed is limited. He et al. [11] and Liu et al. [25] adopt an anchor-free mechanism to improve both the training and inference speed. They use similar sampling strategies, i.e., Text-Align-Sampling and RoI-Rotate, respectively, to enable extracting features from quadrilateral detection results. Note that neither of these two methods is capable of spotting oriented or curved scene text.
+
+
+Figure 3. The framework of the proposed ABCNet. We use cubic Bezier curves and BezierAlign to extract curved sequence features using the Bezier curve detection results. The overall framework is end-to-end trainable with high efficiency. Purple dots represent the control points of the cubic Bezier curve.
+
+Oriented or curved End-to-end Scene Text Spotting To detect oriented or curved scene text, Liao et al. [31] propose a Mask TextSpotter which subtly refines Mask R-CNN and uses character-level supervision to simultaneously detect and recognize characters and instance masks. The method significantly improves the performance of spotting oriented or curved scene text. However, character-level ground truths are expensive, and it is hard to produce character-level ground truth for real data from free synthesized data in practice. Its improved version [21] significantly alleviates the reliance on character-level ground truth. However, the method relies on a region proposal network, which restricts the speed to some extent. Sun et al. [37] propose TextNet, which produces quadrilateral detection bounding boxes in advance and then uses a region proposal network to feed the detection features for recognition. Although the method can directly recognize oriented or curved text from a quadrilateral detection, the performance is still limited.
+
+Recently, Qin et al. [34] propose to use RoI Masking to focus on the oriented or curved text region. However, the results may easily be affected by outlier pixels. In addition, the segmentation branch increases the computation burden; the polygon fitting process also introduces extra time consumption; and the grouping results are usually jagged and not smooth. The work in [24] is the first one-stage oriented or curved scene text spotting method, but it requires character-level ground truth data for training. The authors of [44] propose a novel sampling method, RoISlide, which uses fused features from the predicted segments of the text instances, and thus it is robust to long oriented or curved text.
+
+# 2. Adaptive Bezier Curve Network (ABCNet)
+
+ABCNet is an end-to-end trainable framework for spotting oriented or curved scene text. An intuitive pipeline can be seen in Figure 3. Inspired by [49, 38, 12], we adopt a single-shot, anchor-free convolutional neural network as the detection framework. Removal of anchor boxes significantly simplifies the detection for our task. Here the detection is densely predicted on the output feature maps of the detection head, which is constructed by 4 stacked convolution layers with stride of 1, padding of 1, and $3 \times 3$ kernels. Next, we present the key components of the proposed ABCNet in two parts: 1) Bezier curve detection; and 2) BezierAlign and recognition branch.
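+
+As a concrete illustration of the detection head described above, below is a minimal PyTorch sketch of four stacked $3 \times 3$ convolution layers with stride 1 and padding 1. The channel width of 256, the ReLU activations, and the class name are assumptions for illustration, not details taken from the paper.
+
+```python
+# Minimal sketch of the dense detection head tower (assumed channel width 256).
+import torch
+import torch.nn as nn
+
+class DetectionHeadTower(nn.Module):
+    def __init__(self, in_channels: int = 256, num_convs: int = 4):
+        super().__init__()
+        layers = []
+        for _ in range(num_convs):
+            layers += [
+                nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=1, padding=1),
+                nn.ReLU(inplace=True),
+            ]
+        self.tower = nn.Sequential(*layers)
+
+    def forward(self, feature_map: torch.Tensor) -> torch.Tensor:
+        # Dense per-pixel features on which the classification scores and the
+        # Bezier control-point offsets are predicted.
+        return self.tower(feature_map)
+```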
+
+# 2.1. Bezier Curve Detection
+
+Compared to segmentation-based methods [41, 46, 1, 39, 47, 29], regression-based methods are more direct solutions to oriented or curved text detection, e.g., [27, 43]. However, previous regression-based methods require complicated predictions to fit the text boundary, which is neither efficient nor robust enough for the various text shapes encountered in practice.
+
+To simplify oriented or curved scene text detection, following the regression approach, we find that the Bezier curve, a fundamental parametric curve representation, is well suited to parameterizing curved text. The Bezier curve represents a parametric curve $c(t)$ that uses the Bernstein polynomials [30] as its basis. The definition is shown in Equation (1).
+
+$$
+c (t) = \sum_ {i = 0} ^ {n} b _ {i} B _ {i, n} (t), 0 \leq t \leq 1, \tag {1}
+$$
+
+where $n$ represents the degree, $b_{i}$ represents the $i$-th control point, and $B_{i,n}(t)$ represents the Bernstein basis polynomials, as shown in Equation (2):
+
+$$
+B _ {i, n} (t) = \binom {n} {i} t ^ {i} (1 - t) ^ {n - i}, i = 0, \dots , n, \tag {2}
+$$
+
+where $\binom{n}{i}$ is the binomial coefficient. To fit arbitrary shapes of text with Bezier curves, we comprehensively observe oriented or curved scene text in the existing datasets and the real world, and we empirically show that a cubic Bezier curve (i.e., $n$ is 3) is sufficient to fit different kinds of oriented or curved scene text in practice. An illustration of a cubic Bezier curve is shown in Figure 4.
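+
+As a small illustration of Equations (1) and (2) for the cubic case ($n = 3$), the following Python/NumPy sketch evaluates a Bezier curve at parameter values $t$ from its four control points using the Bernstein basis. The function names and example control points are illustrative only.
+
+```python
+from math import comb
+
+import numpy as np
+
+def bernstein(i: int, n: int, t: np.ndarray) -> np.ndarray:
+    """Bernstein basis polynomial B_{i,n}(t), Equation (2)."""
+    return comb(n, i) * t ** i * (1.0 - t) ** (n - i)
+
+def bezier_point(control_points: np.ndarray, t: np.ndarray) -> np.ndarray:
+    """c(t) = sum_i b_i * B_{i,n}(t), Equation (1).
+
+    control_points: (n + 1, 2) array, here (4, 2) for a cubic curve.
+    t: 1-D array of parameter values in [0, 1].
+    """
+    n = len(control_points) - 1
+    basis = np.stack([bernstein(i, n, t) for i in range(n + 1)], axis=-1)  # (len(t), n + 1)
+    return basis @ control_points  # (len(t), 2) points on the curve
+
+# Example: sample 10 points along a cubic Bezier curve.
+cps = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
+points = bezier_point(cps, np.linspace(0.0, 1.0, 10))
+```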
+
+
+Figure 4. Cubic Bezier curves. $b_{i}$ represents the control points. The green lines form the control polygon, and the black curve is the cubic Bezier curve. Note that with only the two end points $b_{1}$ and $b_{4}$, the Bezier curve degenerates to a straight line.
+
+
+
+Based on the cubic Bezier curve, we can simplify oriented or curved scene text detection to a bounding box regression with eight control points in total. Note that straight text, which has four control points (four vertices), is a typical case of oriented or curved scene text. For consistency, we interpolate two additional control points at the trisection points of each long side.
+
+To learn the coordinates of the control points, we first generate the Bezier curve ground truths as described in Section 2.1.1 and follow a regression method similar to [26] to regress the targets. For each text instance, we use
+
+$$
+\Delta_ {x} = b _ {i x} - x _ {\min }, \Delta_ {y} = b _ {i y} - y _ {\min }, \tag {3}
+$$
+
+where $x_{min}$ and $y_{min}$ represent the minimum $x$ and $y$ values of the 4 vertices, respectively. The advantage of predicting the relative distances is that it does not matter whether the Bezier curve control points lie beyond the image boundary. Inside the detection head, we only need one convolution layer with 16 output channels to learn $\Delta_x$ and $\Delta_y$, which is nearly cost-free, while the results remain accurate, as discussed in Section 3.
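+
+A minimal sketch of Equation (3) is shown below: the eight control points of a text instance are encoded as offsets $(\Delta_x, \Delta_y)$ from $(x_{min}, y_{min})$ of the instance's vertices and flattened into the 16-dimensional regression target. The helper name is hypothetical.
+
+```python
+import numpy as np
+
+def control_point_targets(control_points: np.ndarray, vertices: np.ndarray) -> np.ndarray:
+    """control_points: (8, 2) Bezier control points of the two long sides.
+    vertices: (4, 2) vertices used to compute x_min and y_min.
+    Returns a flat 16-dim regression target (delta_x, delta_y per control point)."""
+    x_min, y_min = vertices[:, 0].min(), vertices[:, 1].min()
+    deltas = control_points - np.array([x_min, y_min])  # Equation (3)
+    return deltas.reshape(-1)
+```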
+
+# 2.1.1 Bezier Ground Truth Generation
+
+In this section, we briefly introduce how to generate the Bezier curve ground truth based on the original annotations. The oriented or curved datasets, e.g., Total-Text [4] and CTW1500 [27], use polygonal annotations for the text regions. Given the annotated points $\{p_i\}_{i=1}^n$ from a curved boundary, where $p_i$ represents the $i$-th annotated point, the main goal is to obtain the optimal parameters of the cubic Bezier curve $c(t)$ in Equation (1). To achieve this, we can simply apply the standard least-squares method, as shown in Equation (4):
+
+$$
+\left[ \begin{array}{c c c} B _ {0, 3} \left(t _ {0}\right) & \dots & B _ {3, 3} \left(t _ {0}\right) \\ B _ {0, 3} \left(t _ {1}\right) & \dots & B _ {3, 3} \left(t _ {1}\right) \\ \vdots & \ddots & \vdots \\ B _ {0, 3} \left(t _ {m}\right) & \dots & B _ {3, 3} \left(t _ {m}\right) \end{array} \right] \left[ \begin{array}{l l} b _ {x _ {0}} & b _ {y _ {0}} \\ b _ {x _ {1}} & b _ {y _ {1}} \\ b _ {x _ {2}} & b _ {y _ {2}} \\ b _ {x _ {3}} & b _ {y _ {3}} \end{array} \right] = \left[ \begin{array}{l l} p _ {x _ {0}} & p _ {y _ {0}} \\ p _ {x _ {1}} & p _ {y _ {1}} \\ \vdots & \vdots \\ p _ {x _ {m}} & p _ {y _ {m}} \end{array} \right] \tag {4}
+$$
+
+Here $m$ represents the number of annotated points for a curved boundary. For Total-Text and CTW1500, $m$ is 5 and 7, respectively. $t$ is calculated as the ratio of the cumulative length to the perimeter of the polyline. According to Equation (1) and Equation (4), we convert the original polyline annotation to a parameterized Bezier curve. Note that we directly use the first and the last annotated points as the first $(b_0)$ and the last $(b_4)$ control points, respectively. A visual comparison is shown in Figure 5, which shows that the generated results can be even visually better than the original ground truth. In addition, based on the structured Bezier curve bounding box, we can easily use our BezierAlign described in Section 2.2 to warp the curved text into a horizontal format without dramatic deformation. More examples of the Bezier curve generation results are shown in Figure 6. The simplicity of our method allows it to generalize to different kinds of text in practice.
+
+(a) Original ground truth.
+
+(b) Generated results.
+
+Figure 5. Comparison of Bezier curve generation. In Figure (b), for each curved boundary, the red dashed lines form a control polygon, and the red dots represent the control points. Warping results are shown below. In Figure (a), we utilize TPS [2] and STN [15] to warp the original ground truth into a rectangular shape. In Figure (b), we use the generated Bezier curves and our BezierAlign to warp the results.
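+
+As a concrete illustration of the ground-truth generation in Equation (4), the sketch below fits the four control points of a cubic Bezier curve to an annotated polyline by least squares, with $t$ taken as the normalized cumulative length along the polyline, and then pins the first and last control points to the first and last annotated points as described above. This is a simplified, unconstrained variant for illustration, not necessarily the authors' exact implementation.
+
+```python
+from math import comb
+
+import numpy as np
+
+def fit_cubic_bezier(polyline: np.ndarray) -> np.ndarray:
+    """polyline: (m, 2) annotated points along one curved boundary.
+    Returns the (4, 2) control points of the fitted cubic Bezier curve."""
+    # t: ratio of the cumulative polyline length to the total length.
+    seg_len = np.linalg.norm(np.diff(polyline, axis=0), axis=1)
+    t = np.concatenate([[0.0], np.cumsum(seg_len)]) / max(seg_len.sum(), 1e-8)
+    # Bernstein design matrix B_{i,3}(t), shape (m, 4), as in Equation (4).
+    B = np.stack([comb(3, i) * t ** i * (1 - t) ** (3 - i) for i in range(4)], axis=1)
+    control, *_ = np.linalg.lstsq(B, polyline, rcond=None)
+    # Keep the first and last annotated points as the end control points.
+    control[0], control[3] = polyline[0], polyline[-1]
+    return control
+```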
+
+# 2.1.2 Bezier Curve Synthetic Dataset
+
+For end-to-end scene text spotting methods, a massive amount of free synthesized data is always necessary, as shown in Table 2. However, the existing 800k SynthText dataset [7] only provides quadrilateral bounding boxes for mostly straight text. To diversify and enrich the oriented or curved scene text, we make some effort to synthesize a 150k-image dataset (94,723 images containing mostly straight text, and 54,327 images containing mostly curved text) with the VGG synthetic method [7]. Specifically, we select 40k text-free background images from COCO-Text [40] and then prepare the segmentation mask and scene depth of each background image with [33] and [18] for the subsequent text rendering. To enlarge the shape diversity of the synthetic text, we modify the VGG synthetic method by synthesizing scene text with various art fonts and corpora, and generate the polygonal annotation for all the text instances. The annotations are then used to produce Bezier curve ground truth by the generation method described in
+
+
+Figure 6. Example results of Bezier curve generation. Green lines are the final Bezier curve results. Red dashed lines represent the control polygon, and the 4 red end points represent the control points. Zoom in for better visualization.
+
+
+
+
+
+
+
+
+
+
+(a) Horizontal sampling.
+
+
+(b) Quadrilateral sampling.
+
+
+(c) BezierAlign.
+Figure 7. Comparison between previous sampling methods and BezierAlign. The proposed BezierAlign can accurately sample features of the text region, which is essential for recognition training. Note that the alignment procedure operates on intermediate convolutional features.
+
+Section 2.1.1. Examples of our synthesized data are shown in Figure 8.
+
+
+Figure 8. Examples of cubic Bezier curve synthesized data.
+
+
+
+# 2.2. BezierAlign
+
+To enable end-to-end training, most of the previous methods adopt various sampling (feature alignment) methods to connect the recognition branch. Typically, a sampling method represents an in-network region cropping procedure. In other words, given a feature map and a region of interest (RoI), the sampling method selects the features of the RoI and efficiently outputs a feature map of a fixed size. However, the sampling methods of previous non-segmentation-based methods, e.g., RoI Pooling [19], RoI-Rotate [25], Text-Align-Sampling [11], or RoI Transform [37], cannot properly align the features of oriented or curved text (RoISlide [44] fuses numerous predicted segments). By exploiting the parameterized nature of a compact Bezier curve bounding box, we propose BezierAlign for feature sampling. BezierAlign is extended from RoIAlign [8]. Unlike RoIAlign, the shape of the sampling grid of BezierAlign is not rectangular. Instead, each column of the oriented or curved grid is orthogonal to the Bezier curve boundaries of the text. The sampling points are equidistantly spaced in width and height, respectively, and their values are computed by bilinear interpolation with respect to the coordinates.
+
+Formally, given an input feature map and the Bezier curve control points, we concurrently process all the output pixels of the rectangular output feature map with size $h_{out} \times w_{out}$. Taking a pixel $g_i$ at position $(g_{iw}, g_{ih})$ of the output feature map as an example, we calculate $t$ by Equation (5):
+
+$$
+t = \frac {g _ {i w}}{w _ {o u t}}. \tag {5}
+$$
+
+We then use $t$ and Equation (1) to calculate the point $tp$ on the upper Bezier curve boundary and the point $bp$ on the lower Bezier curve boundary. Using $tp$ and $bp$, we can linearly index the sampling point $op$ by Equation (6):
+
+$$
+o p = b p \cdot \frac {g _ {i h}}{h _ {o u t}} + t p \cdot \left(1 - \frac {g _ {i h}}{h _ {o u t}}\right). \tag {6}
+$$
+
+With the position of $op$ , we can easily apply bilinear interpolation to calculate the result. Comparisons among previous sampling methods and BezierAlign are shown in Figure 7.
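+
+The simplified NumPy sketch below illustrates the indexing of Equations (5) and (6): for each output pixel, the top and bottom cubic Bezier boundaries are evaluated at $t = g_{iw}/w_{out}$, the sampling point $op$ is obtained by linearly interpolating between them along the height direction, and the feature map is sampled by bilinear interpolation. It ignores batching, the orthogonality of the grid columns, and GPU parallelism; the function signature is an assumption.
+
+```python
+import numpy as np
+
+def bezier_align(feature, top_cps, bot_cps, h_out, w_out):
+    """feature: (C, H, W) array; top_cps / bot_cps: (4, 2) control points (x, y)."""
+    def cubic(cps, t):
+        # Cubic Bernstein basis evaluated at scalar t.
+        basis = np.array([(1 - t) ** 3, 3 * t * (1 - t) ** 2, 3 * t ** 2 * (1 - t), t ** 3])
+        return basis @ cps  # point (x, y) on the boundary
+
+    C, H, W = feature.shape
+    out = np.zeros((C, h_out, w_out), dtype=feature.dtype)
+    for gh in range(h_out):
+        for gw in range(w_out):
+            t = gw / w_out                                   # Equation (5)
+            tp, bp = cubic(top_cps, t), cubic(bot_cps, t)
+            op = bp * (gh / h_out) + tp * (1 - gh / h_out)   # Equation (6)
+            x = float(np.clip(op[0], 0, W - 1))
+            y = float(np.clip(op[1], 0, H - 1))
+            x0, y0 = int(x), int(y)
+            x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
+            wx, wy = x - x0, y - y0
+            out[:, gh, gw] = ((1 - wx) * (1 - wy) * feature[:, y0, x0]
+                              + wx * (1 - wy) * feature[:, y0, x1]
+                              + (1 - wx) * wy * feature[:, y1, x0]
+                              + wx * wy * feature[:, y1, x1])
+    return out
+```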
+
+Recognition branch. Benefiting from the shared backbone features and BezierAlign, we design a light-weight recognition branch, shown in Table 1, for faster execution. It consists of 6 convolutional layers, 1 bidirectional LSTM [13] layer, and 1 fully connected layer.
+
+
+| Layers (CNN - RNN) | Parameters (kernel size, stride) | Output Size (n, c, h, w) |
+| --- | --- | --- |
+| conv layers × 4 | (3, 1) | (n, 256, h, w) |
+| conv layers × 2 | (3, (2, 1)) | (n, 256, h, w) |
+| average pool for h | - | (n, 256, 1, w) |
+| Channels-Permute | - | (w, n, 256) |
+| BLSTM | - | (w, n, 512) |
+| FC | - | (w, n, nclass) |
+
+Table 1: Structure of the recognition branch, which is a simplified version of CRNN [36]. For all convolutional layers, the padding size is set to 1. $n$ represents the batch size, $c$ represents the channel size, $h$ and $w$ represent the height and width of the output feature map, and $n_{class}$ represents the number of predicted classes, which is set to 97 in this paper, including upper- and lower-case English characters, digits, symbols, one category representing all other symbols, and an "EOF" category as the last class.
+
+Based on the output classification scores, we use the classic CTC loss [6] for text string (GT) alignment. Note that during training, we directly use the generated Bezier curve GT to extract the RoI features; therefore, the detection branch does not affect the recognition branch. In the inference phase, the RoI region is replaced by the detected Bezier curve described in Section 2.1. Ablation studies in Section 3 demonstrate that the proposed BezierAlign can significantly improve the recognition performance.
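+
+Below is a hedged PyTorch sketch of the light-weight recognition branch in Table 1: six $3 \times 3$ convolutions (the last two with stride $(2, 1)$), average pooling over the height, a bidirectional LSTM, and a fully connected layer trained with CTC loss. The channel counts follow Table 1; the activation and the absence of normalization layers are assumptions.
+
+```python
+import torch
+import torch.nn as nn
+
+class RecognitionBranch(nn.Module):
+    def __init__(self, in_channels: int = 256, num_classes: int = 97):
+        super().__init__()
+        convs = []
+        for i in range(6):
+            stride = (2, 1) if i >= 4 else (1, 1)  # last two convs downsample height only
+            convs += [nn.Conv2d(in_channels, 256, 3, stride=stride, padding=1),
+                      nn.ReLU(inplace=True)]
+            in_channels = 256
+        self.convs = nn.Sequential(*convs)
+        self.blstm = nn.LSTM(256, 256, bidirectional=True)
+        self.fc = nn.Linear(512, num_classes)
+
+    def forward(self, roi_feat: torch.Tensor) -> torch.Tensor:
+        x = self.convs(roi_feat)   # (n, 256, h', w)
+        x = x.mean(dim=2)          # average pool over height -> (n, 256, w)
+        x = x.permute(2, 0, 1)     # channels-permute -> (w, n, 256)
+        x, _ = self.blstm(x)       # (w, n, 512)
+        return self.fc(x)          # (w, n, num_classes), fed to the CTC loss
+
+# Usage sketch:
+# logits = RecognitionBranch()(bezier_align_features)
+# loss = nn.CTCLoss()(logits.log_softmax(-1), targets, input_lengths, target_lengths)
+```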
+
+# 3. Experiments
+
+We evaluate our method on two recently introduced oriented or curved scene text benchmarks, Total-Text [3] and CTW1500 [27], which also contain a large amount of straight text. We also conduct ablation studies on Total-Text to verify the effectiveness of our proposed method.
+
+# 3.1. Implementation details
+
+The backbone of our model follows the common setting of most previous works, i.e., ResNet-50 [9] together with a Feature Pyramid Network (FPN) [23]. For the detection branch, we utilize RoIAlign on 5 feature maps with 1/8, 1/16, 1/32, 1/64, and 1/128 resolution of the input image, while for the recognition branch, BezierAlign is conducted on three feature maps with 1/4, 1/8, and 1/16 sizes. The pretraining data is collected from publicly available English word-level datasets, including the 150k synthesized data described in Section 2.1.2, 15k images filtered from the original COCO-Text [40], and 7k ICDAR-MLT data [32]. The pretrained model is then fine-tuned on the training set of the target datasets. In addition, we also adopt data augmentation strategies, e.g., random scale training, with the short size randomly chosen from 560 to 800 and the long size kept less than 1333; and random crop, where we make sure that the crop size is larger than half of the original size and no text is cut (for some special cases where this condition is hard to meet, we do not apply random crop).
+
+We train our model using 4 Tesla V100 GPUs with an image batch size of 32. The maximum iteration is $150\mathrm{K}$; the initial learning rate is 0.01, which is reduced to 0.001 at the $70\mathrm{K}^{\mathrm{th}}$ iteration and to 0.0001 at the $120\mathrm{K}^{\mathrm{th}}$ iteration. The whole training process takes about 3 days.
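+
+For concreteness, the sketch below mirrors the schedule above with an assumed SGD optimizer: 150K iterations, a base learning rate of 0.01, decayed by a factor of 10 at the 70K-th and 120K-th iterations. The momentum and weight-decay values, and the dummy model and loss, are placeholders.
+
+```python
+import torch
+import torch.nn as nn
+
+model = nn.Linear(10, 2)  # placeholder standing in for the full ABCNet model
+optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)
+scheduler = torch.optim.lr_scheduler.MultiStepLR(
+    optimizer, milestones=[70_000, 120_000], gamma=0.1)  # 0.01 -> 0.001 -> 0.0001
+
+for iteration in range(150_000):
+    # Compute the detection + recognition losses for one batch here.
+    loss = model(torch.randn(32, 10)).sum()  # dummy loss for the sketch
+    optimizer.zero_grad()
+    loss.backward()
+    optimizer.step()
+    scheduler.step()  # decay is per iteration, not per epoch
+```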
+
+# 3.2. Experimental Results on Total-Text
+
+Dataset. The Total-Text dataset [3] is one of the most important oriented or curved scene text benchmarks, proposed in 2017. It was collected from various scenes, including text-like scene complexity and low-contrast backgrounds. It contains 1,555 images, with 1,255 for training and 300 for testing. To resemble real-world scenarios, most of the images in this dataset contain a large amount of regular text, while it is guaranteed that each image has at least one curved text instance. The text instances are annotated with word-level polygons. Its extended version [4] improves the annotation of the training set by annotating each text instance with a fixed ten points following the text recognition sequence. The dataset contains English text only. To evaluate the end-to-end results, we follow the same metric as previous methods, which uses F-measure to measure word accuracy.
+
+Ablation studies: BezierAlign. To evaluate the effectiveness of the proposed components, we conduct ablation studies on this dataset. We first conduct a sensitivity analysis of how the number of sampling points affects the end-to-end results, which is shown in Table 4. From the results we can see that the number of sampling points can significantly affect the final performance and efficiency. We find that (7, 32) achieves the best trade-off between F-measure and FPS, which is used as the final setting in the following experiments. We further evaluate BezierAlign by comparing it with the previous sampling methods shown in Figure 7. The results shown in Table 3 demonstrate that BezierAlign can dramatically improve the end-to-end results. Qualitative examples are shown in Figure 9.
+
+Ablation studies: Bezier curve detection. Another important component is Bezier curve detection, which enables oriented or curved scene text detection. We therefore also conduct experiments to evaluate the time consumption of Bezier curve detection. The result in Table 5 shows that Bezier curve detection introduces negligible extra computation compared with standard bounding box detection.
+
+Comparison with state-of-the-art. We further compare our method with previous methods. From Table 2, we can see that our single-scale result (with the short size being 800) achieves competitive performance while maintaining a real-time inference speed, resulting in a better trade-off between speed and word accuracy. With multi-scale inference, ABCNet achieves state-of-the-art performance,
+
+| Method | Data | Backbone | F-measure (None) | F-measure (Full) | FPS |
+| --- | --- | --- | --- | --- | --- |
+| TextBoxes [22] | SynText800k, IC13, IC15, TT | ResNet-50-FPN | 36.3 | 48.9 | 1.4 |
+| Mask TextSpotter'18 [31] | SynText800k, IC13, IC15, TT | ResNet-50-FPN | 52.9 | 71.8 | 4.8 |
+| Two-stage [37] | SynText800k, IC13, IC15, TT | ResNet-50-SAM | 45.0 | - | - |
+| TextNet [37] | SynText800k, IC13, IC15, TT | ResNet-50-SAM | 54.0 | - | 2.7 |
+| Li et al. [20] | SynText840k, IC13, IC15, TT, MLT, AddF2k | ResNet-101-FPN | 57.80 | - | 1.4 |
+| Mask TextSpotter'19 [21] | SynText800k, IC13, IC15, TT, AddF2k | ResNet-50-FPN | 65.3 | 77.4 | 2.0 |
+| Qin et al. [34] | SynText200k, IC15, COCO-Text, TT, MLT; Private: 30k (manual label), 1m (partial label) | ResNet-50-MSF | 67.8 | - | 4.8 |
+| CharNet [24] | SynText800k, IC15, MLT, TT | ResNet-50-Hourglass57 | 66.2 | - | 1.2 |
+| TextDragon [44] | SynText800k, IC15, TT | VGG16 | 48.8 | 74.8 | - |
+| ABCNet-F | SynText150k, COCO-Text, TT, MLT | ResNet-50-FPN | 61.9 | 74.1 | 22.8 |
+| ABCNet | SynText150k, COCO-Text, TT, MLT | ResNet-50-FPN | 64.2 | 75.7 | 17.9 |
+| ABCNet-MS | SynText150k, COCO-Text, TT, MLT | ResNet-50-FPN | 69.5 | 78.4 | 6.9 |
+
+Table 2: Scene text spotting results on Total-Text. ABCNet-F is faster because the short size of the input image is 600. MS: multi-scale testing. "None" represents recognition without any lexicon. The "Full" lexicon contains all words in the test set. Datasets: AddF2k [48]; IC13 [17]; IC15 [16]; TT [5]; MLT [32]; COCO-Text [40].
+
+| Methods | Sampling method | F-measure (%) |
+| --- | --- | --- |
+| ABCNet | with Horizontal Sampling | 38.4 |
+| ABCNet | with Quadrilateral Sampling | 44.7 |
+| ABCNet | with BezierAlign | 61.9 |
+
+
+
+
+
+
+
+
+
+
+Figure 9. Qualitative recognition results of the quadrilateral sampling method and BezierAlign. Left: original image. Top right: results by using quadrilateral sampling. Bottom right: results by using BezierAlign.
+
+
+
+significantly outperforming all previous methods, especially in terms of running time. It is worth mentioning that our faster version can be more than 11 times faster than the previous best method [21] with on-par accuracy.
+
+Table 3: Ablation study for BezierAlign. Horizontal sampling follows [19], and quadrilateral sampling follows [11].
+
+| Method | Sampling points (nh, nw) | F-measure (%) | FPS |
+| --- | --- | --- | --- |
+| ABCNet | (6, 32) | 59.6 | 23.2 |
+| ABCNet | (7, 32) | 61.9 | 22.8 |
+| ABCNet | (14, 64) | 58.1 | 19.9 |
+| ABCNet | (21, 96) | 54.8 | 18.0 |
+| ABCNet | (28, 128) | 53.4 | 15.1 |
+| ABCNet | (30, 30) | 59.9 | 21.4 |
+
+Table 4: Ablation study of the number of sampling points of BezierAlign.
+
+| Methods | Inference time |
+| --- | --- |
+| without Bezier curve detection | 22.8 fps |
+| with Bezier curve detection | 22.5 fps |
+
+Table 5: Ablation study for time consumption of the Bezier curve detection.
+
+Qualitative Results. Some qualitative results of ABCNet are shown in Figure 10. The results show that our method can accurately detect and recognize most of the oriented or curved text. In addition, our method also handles straight text well, producing nearly quadrilateral compact bounding boxes and correct recognition results. Some errors are also visualized in the figure, which are mainly caused by mistakenly recognizing one of the characters.
+
+# 3.3. Experimental Results on CTW1500
+
+Dataset. CTW1500 [27] is another important oriented or curved scene text benchmark, proposed in 2017. Compared to Total-Text, this dataset contains both English and Chinese text. In addition, the annotation is at the text-line level, and the dataset also includes some document-like text, i.e., numerous small texts may stack together. CTW1500 contains 1k training images and 500 testing images.
+
+Experiments. Because the proportion of Chinese text in this dataset is very small, we directly regard all the Chinese
+
+
+Figure 10. Qualitative results of ABCNet on the Total-text. The detection results are shown with red bounding boxes. The float number is the predicted confidence. Zoom in for better visualization.
+
+| Methods | Data | F-measure (None) | F-measure (Strong Full) |
+| --- | --- | --- | --- |
+| FOTS [25] | SynText800k, CTW1500 | 21.1 | 39.7 |
+| Two-Stage* [44] | SynText800k, CTW1500 | 37.2 | 69.9 |
+| RoIRotate* [44] | SynText800k, CTW1500 | 38.6 | 70.9 |
+| LSTM* [44] | SynText800k, CTW1500 | 39.2 | 71.5 |
+| TextDragon [44] | SynText800k, CTW1500 | 39.7 | 72.4 |
+| ABCNet | SynText150k, CTW1500 | 45.2 | 74.1 |
+
+Table 6: End-to-end scene text spotting results on CTW1500. * indicates that the results are from [44]. "None" represents lexicon-free. "Strong Full" represents that we use all the words that appear in the test set.
+
+text as an "unseen" class during training, i.e., the 96-th class. Note that the last class, i.e., the 97-th class, is "EOF" in our implementation. We follow the same evaluation metric as [44]. The experimental results are reported in Table 6, which demonstrates that in terms of end-to-end scene text spotting, ABCNet can significantly surpass previous state-of-the-art methods. Example results on this dataset are shown in Figure 11. From the figure, we can see that some long text-line instances contain many words, which makes full-match word accuracy extremely difficult. In other words, incorrectly recognizing one character results in a zero score for the whole text instance.
+
+
+Figure 11. Qualitative end-to-end spotting results of CTW1500. Better viewed on screen.
+
+
+
+# 4. Conclusion
+
+We have proposed ABCNet, a real-time end-to-end method that uses Bezier curves for oriented or curved scene text spotting. By reformulating oriented or curved scene text using parameterized Bezier curves, ABCNet can detect oriented or curved scene text with Bezier curves, which introduces negligible computation cost compared with standard bounding box detection. With such regular Bezier curve bounding boxes, we can naturally connect a light-weight recognition branch via a new BezierAlign layer.
+
+In addition, by using our Bezier curve synthesized dataset and publicly available data, experiments on two oriented or curved scene text benchmarks (Total-Text and CTW1500) demonstrate that our ABCNet can achieve state-of-the-art performance, which is also significantly faster than previous methods.
+
+# Acknowledgements
+
+L. Jin's participation was in part supported by NSFC (Grant No. 61936003), National Key Research and Development Program of China (No. 2016YFB1001405), and GD-NSF (No. 2017A030312006). The authors would like to thank Huawei Technologies for the donation of GPU cloud computing resources.
+
+# References
+
+[1] Youngmin Baek, Bado Lee, Dongyoon Han, Sangdoo Yun, and Hwalsuk Lee. Character Region Awareness for Text Detection. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 9365-9374, 2019.
+[2] Fred L. Bookstein. Principal warps: Thin-plate splines and the decomposition of deformations. IEEE Trans. Pattern Anal. Mach. Intell., 11(6):567-585, 1989.
+[3] C.-K Chng and C.-S Chan. Total-text: A comprehensive dataset for scene text detection and recognition. In Proc. IAPR Int. Conf. Document Analysis Recog., pages 935-942, 2017.
+[4] Chee-Kheng Chng, Chee Seng Chan, and Cheng-Lin Liu. Total-text: toward orientation robustness in scene text detection. Int. J. Document Analysis Recogn., pages 1-22, 2019.
+[5] Chee-Kheng Chng, Yuliang Liu, Yipeng Sun, Chun Chet Ng, Canjie Luo, Zihan Ni, ChuanMing Fang, Shuaiqiao Zhang, Junyu Han, Errui Ding, et al. ICDAR2019 Robust Reading Challenge on Arbitrary-Shaped Text (RRC-ArT). Proc. IAPR Int. Conf. Document Analysis Recog., 2019.
+[6] Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proc. Int. Conf. Mach. Learn., pages 369-376. ACM, 2006.
+[7] Ankush Gupta, Andrea Vedaldi, and Andrew Zisserman. Synthetic data for text localisation in natural images. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 2315-2324, 2016.
+[8] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In Proc. IEEE Int. Conf. Comp. Vis., 2017.
+[9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 770-778, 2016.
+[10] Tong He, Weilin Huang, Yu Qiao, and Jian Yao. Text-attentional convolutional neural network for scene text detection. IEEE Trans. Image Process., 25(6):2529–2541, 2016.
+[11] Tong He, Zhi Tian, Weilin Huang, Chunhua Shen, Yu Qiao, and Changming Sun. An end-to-end textspotter with explicit alignment and attention. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 5020-5029, 2018.
+[12] Wenhao He, Xu-Yao Zhang, Fei Yin, and Cheng-Lin Liu. Deep direct regression for multi-oriented scene text detection. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., 2017.
+[13] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
+[14] Zhida Huang, Zhuoyao Zhong, Lei Sun, and Qiang Huo. Mask r-cnn with pyramid attention network for scene text detection. In Winter Conf. Appl. Comp. Vision, pages 764-772. IEEE, 2019.
+[15] Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In Proc. Advances in Neural Inf. Process. Syst., pages 2017-2025, 2015.
+
+[16] D. Karatzas, L. Gomez-Bigorda, et al. ICDAR 2015 competition on robust reading. In Proc. IAPR Int. Conf. Document Analysis Recog., pages 1156-1160, 2015.
+[17] D. Karatzas, F. Shafait, S. Uchida, et al. ICDAR 2013 Robust Reading Competition. In Proc. IAPR Int. Conf. Document Analysis Recog., pages 1484-1493, 2013.
+[18] Iro Laina, Christian Rupprecht, Vasileios Belagiannis, Federico Tombari, and Nassir Navab. Deeper depth prediction with fully convolutional residual networks. In Proc. Int. Conf. 3D vision (3DV), pages 239-248. IEEE, 2016.
+[19] Hui Li, Peng Wang, and Chunhua Shen. Towards end-to-end text spotting with convolutional recurrent neural networks. In Proc. IEEE Int. Conf. Comp. Vis., pages 5238-5246, 2017.
+[20] Hui Li, Peng Wang, and Chunhua Shen. Towards end-to-end text spotting in natural scenes. arXiv: Comp. Res. Repository, 2019.
+[21] Minghui Liao, Pengyuan Lyu, Minghang He, Cong Yao, Wenhao Wu, and Xiang Bai. Mask textspotter: An end-to-end trainable neural network for spotting text with arbitrary shapes. IEEE Trans. Pattern Anal. Mach. Intell., 2019.
+[22] Minghui Liao, Baoguang Shi, Xiang Bai, Xinggang Wang, and Wenyu Liu. Textboxes: A fast text detector with a single deep neural network. In Proc. AAAI Conf. Artificial Intell., 2017.
+[23] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 2117-2125, 2017.
+[24] Linjie Xing, Zhi Tian, Weilin Huang, and Matthew R. Scott. Convolutional Character Networks. In Proc. IEEE Int. Conf. Comp. Vis., 2019.
+[25] Xuebo Liu, Ding Liang, Shi Yan, Dagui Chen, Yu Qiao, and Junjie Yan. Fots: Fast oriented text spotting with a unified network. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 5676-5685, 2018.
+[26] Yuliang Liu and Lianwen Jin. Deep matching prior network: Toward tighter multi-oriented text detection. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., 2017.
+[27] Yuliang Liu, Lianwen Jin, Shuaiqiao Zhang, Canjie Luo, and Sheng Zhang. Curved scene text detection via transverse and longitudinal sequence connection. Pattern Recognition, 90:337-345, 2019.
+[28] Yuliang Liu, Sheng Zhang, Lianwen Jin, Lele Xie, Yaqiang Wu, and Zhepeng Wang. Omnidirectional scene text detection with sequential-free box discretization. Proc. Int. Joint Conf. Artificial Intell., 2019.
+[29] Shangbang Long, Jiaqiang Ruan, Wenjie Zhang, Xin He, Wenhao Wu, and Cong Yao. Textsnake: A flexible representation for detecting text of arbitrary shapes. In Proc. Eur. Conf. Comp. Vis., pages 20-36, 2018.
+[30] George G. Lorentz. Bernstein polynomials. American Mathematical Soc., 2013.
+[31] Pengyuan Lyu, Minghui Liao, Cong Yao, Wenhao Wu, and Xiang Bai. Mask textspotter: An end-to-end trainable neural network for spotting text with arbitrary shapes. In Proc. Eur. Conf. Comp. Vis., pages 67-83, 2018.
+
+[32] Nibal Nayef, Yash Patel, Michal Busta, Pinaki Nath Chowdhury, Dimosthenis Karatzas, Wafa Khlif, Jiri Matas, Umapada Pal, Jean-Christophe Burie, Cheng-lin Liu, et al. ICDAR2019 Robust Reading Challenge on Multi-lingual Scene Text Detection and Recognition—RRC-MLT-2019. Proc. IAPR Int. Conf. Document Analysis Recog., 2019.
+[33] Jordi Pont-Tuset, Pablo Arbelaez, Jonathan T Barron, Ferran Marques, and Jitendra Malik. Multiscale combinatorial grouping for image segmentation and object proposal generation. IEEE Trans. Pattern Anal. Mach. Intell., 39(1):128-140, 2016.
+[34] Siyang Qin, Alessandro Bissacco, Michalis Raptis, Yasuhisa Fujii, and Ying Xiao. Towards unconstrained end-to-end text spotting. Proc. IEEE Int. Conf. Comp. Vis., 2019.
+[35] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Proc. Advances in Neural Inf. Process. Syst., pages 91-99, 2015.
+[36] Baoguang Shi, Xiang Bai, and Cong Yao. An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition. IEEE Trans. Pattern Anal. Mach. Intell., 39(11):2298-2304, 2016.
+[37] Yipeng Sun, Chengquan Zhang, Zuming Huang, Jiaming Liu, Junyu Han, and Errui Ding. TextNet: Irregular Text Reading from Images with an End-to-End Trainable Network. In Proc. Asian Conf. Comp. Vis., pages 83–99. Springer, 2018.
+[38] Zhi Tian, Chunhua Shen, Hao Chen, and Tong He. FCOS: Fully Convolutional One-Stage Object Detection. Proc. IEEE Int. Conf. Comp. Vis., 2019.
+[39] Zhuotao Tian, Michelle Shu, Pengyuan Lyu, Ruiyu Li, Chao Zhou, Xiaoyong Shen, and Jiaya Jia. Learning Shape-Aware Embedding for Scene Text Detection. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 4234-4243, 2019.
+[40] Andreas Veit, Tomas Matera, Lukas Neumann, Jiri Matas, and Serge Belongie. Coco-text: Dataset and benchmark for text detection and recognition in natural images. arXiv: Comp. Res. Repository, 2016.
+[41] Wenhai Wang, Enze Xie, Xiang Li, Wenbo Hou, Tong Lu, Gang Yu, and Shuai Shao. Shape Robust Text Detection with Progressive Scale Expansion Network. Proc. IEEE Conf. Comp. Vis. Patt. Recogn., 2019.
+[42] Wenhai Wang, Enze Xie, Xiaoge Song, Yuhang Zang, Wenjia Wang, Tong Lu, Gang Yu, and Chunhua Shen. Efficient and Accurate Arbitrary-Shaped Text Detection with Pixel Aggregation Network. Proc. IEEE Int. Conf. Comp. Vis., 2019.
+[43] Xiaobing Wang, Yingying Jiang, Zhenbo Luo, Cheng-Lin Liu, Hyunsoo Choi, and Sungjin Kim. Arbitrary Shape Scene Text Detection with Adaptive Text Region Representation. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 6449-6458, 2019.
+[44] Wei Feng, Wenhao He, Fei Yin, Xu-Yao Zhang, and Cheng-Lin Liu. TextDragon: An end-to-end framework for arbitrary shaped text spotting. In Proc. IEEE Int. Conf. Comp. Vis., 2019.
+[45] Zecheng Xie, Yaoxiong Huang, Yuzhi Zhu, Lianwen Jin, Yuliang Liu, and Lele Xie. Aggregation cross-entropy for sequence recognition. In Proc. IEEE Conf. Comp. Vis. Patt. Recogn., pages 6538-6547, 2019.
+[46] Yongchao Xu, Yukang Wang, Wei Zhou, Yongpan Wang, Zhibo Yang, and Xiang Bai. Textfield: Learning a deep direction field for irregular scene text detection. IEEE Trans. Image Process., 2019.
+[47] Chengquan Zhang, Borong Liang, Zuming Huang, Mengyi En, Junyu Han, Errui Ding, and Xinghao Ding. Look More Than Once: An Accurate Detector for Text of Arbitrary Shapes. Proc. IEEE Conf. Comp. Vis. Patt. Recogn., 2019.
+[48] Zhuoyao Zhong, Lianwen Jin, Shuye Zhang, and Ziyong Feng. Deeptext: A unified framework for text proposal generation and text detection in natural images. arXiv: Comp. Res. Repository, 2016.
+[49] Zhuoyao Zhong, Lei Sun, and Qiang Huo. An anchor-free region proposal network for faster r-cnn-based text detection approaches. Int. J. Document Analysis Recogn., 22(3):315-327, 2019.
\ No newline at end of file
diff --git a/abcnetrealtimescenetextspottingwithadaptivebeziercurvenetwork/images.zip b/abcnetrealtimescenetextspottingwithadaptivebeziercurvenetwork/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..d92907e976c8adbda1da8b1c4b53fb36d9d1f4a1
--- /dev/null
+++ b/abcnetrealtimescenetextspottingwithadaptivebeziercurvenetwork/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4c9667281c653b8bd9f1703a3aeac9e2f2b7297742b2a6502aca30ca68c3a496
+size 1045650
diff --git a/abcnetrealtimescenetextspottingwithadaptivebeziercurvenetwork/layout.json b/abcnetrealtimescenetextspottingwithadaptivebeziercurvenetwork/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a1c322275546b61b692ba0ce772950966bf478b9
--- /dev/null
+++ b/abcnetrealtimescenetextspottingwithadaptivebeziercurvenetwork/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1c3416735dcbff61a41f4a20843b849252cd9313559a17ecef219cc56c6401b0
+size 361046
diff --git a/accurateestimationofbodyheightfromasingledepthimageviaafourstagedevelopingnetwork/d72a0b1a-eca1-44ca-822b-7bbb0b112b60_content_list.json b/accurateestimationofbodyheightfromasingledepthimageviaafourstagedevelopingnetwork/d72a0b1a-eca1-44ca-822b-7bbb0b112b60_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..7c06d4fe7097ac22def1271c6f3962cf3453acdb
--- /dev/null
+++ b/accurateestimationofbodyheightfromasingledepthimageviaafourstagedevelopingnetwork/d72a0b1a-eca1-44ca-822b-7bbb0b112b60_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0f502276886aecd63d79031c22e824eb314f6ccf10ff11602c7e2c42bec1d0f9
+size 71813
diff --git a/accurateestimationofbodyheightfromasingledepthimageviaafourstagedevelopingnetwork/d72a0b1a-eca1-44ca-822b-7bbb0b112b60_model.json b/accurateestimationofbodyheightfromasingledepthimageviaafourstagedevelopingnetwork/d72a0b1a-eca1-44ca-822b-7bbb0b112b60_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..1a4e49c462bf393fce87ea0e50a0664870cbebbb
--- /dev/null
+++ b/accurateestimationofbodyheightfromasingledepthimageviaafourstagedevelopingnetwork/d72a0b1a-eca1-44ca-822b-7bbb0b112b60_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:15df22d40ed69b52cd178d3c0f79bdae2a2dfa34fbb3a7a13a3b7e599739439a
+size 88536
diff --git a/accurateestimationofbodyheightfromasingledepthimageviaafourstagedevelopingnetwork/d72a0b1a-eca1-44ca-822b-7bbb0b112b60_origin.pdf b/accurateestimationofbodyheightfromasingledepthimageviaafourstagedevelopingnetwork/d72a0b1a-eca1-44ca-822b-7bbb0b112b60_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ad65b65aeec9c64963627e9fa61c32ec38aa2c42
--- /dev/null
+++ b/accurateestimationofbodyheightfromasingledepthimageviaafourstagedevelopingnetwork/d72a0b1a-eca1-44ca-822b-7bbb0b112b60_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2431ddc225fb684e811942886043089705a67baba334294458db4274d412f2a6
+size 475167
diff --git a/accurateestimationofbodyheightfromasingledepthimageviaafourstagedevelopingnetwork/full.md b/accurateestimationofbodyheightfromasingledepthimageviaafourstagedevelopingnetwork/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..bfc25427356eebce98614f0a2eefaf17b6b2f3f2
--- /dev/null
+++ b/accurateestimationofbodyheightfromasingledepthimageviaafourstagedevelopingnetwork/full.md
@@ -0,0 +1,323 @@
+# Accurate Estimation of Body Height from a Single Depth Image via a Four-Stage Developing Network
+
+Fukun Yin* Shizhe Zhou*
+
+College of Computer Science and Electronic Engineering,
+
+Hunan University, Changsha, China
+
+* joint first authors: yfk@hnu.edu.cn, shizhe@hnu.edu.cn (corresponding author)
+
+# Abstract
+
+Non-contact measurement of human body height can be very difficult under some circumstances. In this paper we address the problem of accurately estimating the height of a person with arbitrary postures from a single depth image. By introducing a novel part-based intermediate representation plus a four-stage, increasingly complex deep neural network, we manage to achieve significantly higher accuracy than previous methods. We first describe the human body in the form of a segmentation of the human torso into four nearly rigid parts and then predict their lengths respectively by 3 CNNs. Instead of directly adding the lengths of these parts together, we further construct another independent developing CNN that combines the intermediate representation, part lengths and depth information together to finally predict the body height. Here we develop an increasingly complex network architecture and adopt a hybrid pooling strategy to optimize the training process. To the best of our knowledge, this is the first method that estimates height only from a single depth image. In our experiments the average accuracy reaches $99.1\%$ for people in various positions and postures.
+
+# 1. Introduction
+
+In the fields of three-dimensional reconstruction, medical treatment, clothes sizing, etc., human height data is indispensable. In most cases, we require the measured person to stand up straight and then use a measuring tool to obtain the height, which consumes a lot of time and manpower. In practical applications especially, it is very hard to measure the height if we lack measuring tools, or if the measured person is a child or an injured person who cannot stand up straight. Our method can effectively solve these problems, as it only needs a single depth image and outputs reliable results with an average accuracy of $99.1\%$ in milliseconds, saving a lot of manpower and time. More importantly, we do not require the measured person to stand in a fixed place or a standard posture. They can stand in various postures, such as walking, bending, sitting, etc., anywhere within the valid range that the depth camera can capture.
+
+We propose a novel method for estimating human body height from a single depth image. We first use a Kinect [32] to capture an RGB-D image, but discard its color data and only use the depth image to estimate the height, because we find that the prediction using only the depth image is better than using an RGB or RGB-D image. Secondly, we segment the human torso from the depth image using an FCN [25]. In order to make the edge information more accurate, we enhance the depth image using high-frequency information. After obtaining the torso image, we propose a novel part-based intermediate representation. The torso image and the depth image are input to the same network architecture to obtain the body parts segmentation, in which the human torso is divided into four local parts: head, upper body, thigh, and calf. We verify that using this intermediate representation can significantly improve the accuracy of the estimation. After that, we modify the fully connected layer of VGG16 [35] to make it more adaptable to our problem. Then the body parts segmentation and the depth information are fed into this network to obtain intermediate estimates of the lengths of the four body parts. After obtaining the intermediate representation above, we design a novel network architecture for the final height prediction.
+
+Even though complex neural networks have long training times, large numbers of parameters, and are prone to overfitting [6][15][20][30], they can generally fit the data better than shallow neural networks and their predictions may be more accurate. So we propose an increasingly complex deep neural network, which we call the developing network architecture. At the beginning of training, we use only a small number of convolutional layers and output the predicted values through the fully connected layer. When the network has basically converged, we add new convolutional layers and continue training. We repeat this process until the network can estimate accurately. Experiments show that the accuracy of the network trained by this
+
+
+Figure 1. The workflow of our approach. We use a four-stage neural network to estimate human body height from a single depth image.
+
+process is higher compared with that of direct training. At the same time, for the different characteristics of the segmentation image and the depth image, we adopt different pooling strategies and use the skip-connection structure [14] to transmit this information to each block without loss (see Figure 6), which further improves the accuracy of the estimation significantly.
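+
+The schematic PyTorch sketch below illustrates the developing training strategy described above: start from a small stack of convolutional blocks with a regression head, train until the loss roughly converges, append a new block, and continue training. The block widths, the head, and the convergence test are assumptions for illustration only.
+
+```python
+import torch
+import torch.nn as nn
+
+class DevelopingNet(nn.Module):
+    def __init__(self, in_channels: int = 1, width: int = 32):
+        super().__init__()
+        self.width = width
+        self.blocks = nn.ModuleList([self._block(in_channels, width)])
+        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(width, 1))
+
+    @staticmethod
+    def _block(cin: int, cout: int) -> nn.Module:
+        return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))
+
+    def grow(self) -> None:
+        # Append one more convolutional block once the current model converges.
+        self.blocks.append(self._block(self.width, self.width))
+
+    def forward(self, x: torch.Tensor) -> torch.Tensor:
+        for blk in self.blocks:
+            x = blk(x)
+        return self.head(x)  # predicted body height (regression output)
+
+# Training outline: train, grow when the loss plateaus, repeat.
+# model = DevelopingNet()
+# for stage in range(num_stages):
+#     train_until_converged(model)  # hypothetical helper
+#     model.grow()
+```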
+
+Our main working steps are as follows: firstly, the human torso is segmented from the depth image, and then we construct an intermediate representation, the body parts segmentation image, which further divides the human torso into four local parts: head, upper body, thigh, and calf. After that, the lengths of these four parts are predicted separately. Finally, a novel developing network architecture is devised and the network is trained with a hybrid pooling strategy. The above process is shown in Figure 1.
+
+Our main contributions are as follows:
+
+1. We construct a new dataset of human body heights including 2136 RGB-D images with ten postures such as standing, walking, sitting, bending, etc. The human body can be located at any position within the range that can be captured by a depth camera; Figure 2 shows some examples from our dataset.
+2. To the best of our knowledge, this is the first method to predict human height from a single depth image. We only need a commodity depth camera without extra equipment, thus reducing the overall equipment expenditure for practical applications. By using only depth data, we achieve higher accuracy than by using depth and RGB (see Section 4.4), which is an interesting fact, contrary to the case of 3D reconstruction, where depth+RGB has been shown to be the better input [9] [11].
+3. We verify that the intermediate representation makes the final network easier to train. We propose to construct the human body parts segmentation image and estimate the lengths of the head, upper body, thigh, and calf respectively. Compared with the method without the intermediate representation, the prediction accuracy is greatly improved.
+
+4. We put forward a network architecture whose complexity grows gradually over iterations, which solves the difficult problem of network initialization and significantly improves the accuracy.
+
+# 2. Related Work
+
+In the field of human height estimation using images or videos, there is not much previous work, especially in recent years. We divide these methods into the following three groups.
+
+# 2.1. Single-view Based Prediction
+
+The single-view-based height prediction methods are the relatively simpler methods in current height measurement. They commonly make the prediction based on camera calibration and reference objects, with an RGB or RGB-D image acquisition device capturing images from a certain angle. Penders et al. [28] propose a reference-based method: the camera is fixed at a chosen position and the distance between the camera and the subject, who is required to stand close to a wall, is kept constant; the measured distance between the head and the feet is then converted into the actual height with a reference measurement. Criminisi et al. [8] propose using the vanishing-line and vanishing-point method to calibrate the camera, thus eliminating the need for the camera's intrinsic parameters. Based on [8], Lee [19] adds a cube to the image and uses genetic algorithms to improve the robustness. In [2], [10] and [12], a heterogeneous method is presented that does not use any calibration or reference, but adopts proportional relationships between body parts for estimation.
+
+# 2.2. Multi-view Based Prediction
+
+The multi-view-based prediction methods can be roughly categorized into shooting from multiple angles with a single camera, taking multiple photos with a fixed camera, or shooting with multiple cameras at different angles. Three-dimensional reconstruction is used to estimate human body data in [21] and [24], which build a three-dimensional model from multi-angle photographs and then estimate the human body data through a cubic spline. Li et al. [23] use a home camera and a simple rotating disc to collect body images from different angles, then perform a 3D reconstruction and refine the above model to estimate the human body data. Hung et al. [16] collect the front view, back view and side view of the human body, and then calculate the height of the human body with the help of a standard reference.
+
+# 2.3. Video-based prediction
+
+The video-based method is also a common means of height prediction. Compared with the two approaches above, it can automatically and accurately segment the human body from the background and then estimate the height. Collected video sequences are used to separate the background in [4], [17] and [18]; the feature points of the head and feet are then extracted, and finally the actual height is calculated by camera calibration or a reference method. Li et al. [22] adopt a non-linear model to estimate the focal length, inclination and height of the camera, which removes noise interference during camera calibration. Shao et al. [33] use the moving objects in the tracked scene to recover a minimal calibration of the scene, then adopt the single-frame prediction method proposed in [8] to predict on each frame, and finally combine all the single-frame results to obtain the final multi-frame prediction.
+
+Although there have been many studies on estimating height from images or videos in recent years, the previous methods have some limitations. Most of them can only handle postures such as walking and standing, or require the subject to stand at a specified position. Some methods need manual labeling of the head and feet, which is not fully automated and requires considerable manpower. Other methods use multiple photos or multiple devices, which cost more time and money. We are committed to developing a fully automated height measurement method that requires only one depth camera and works for various postures, including extreme ones such as sitting and walking.
+
+# 3. Method
+
+This paper investigates how to estimate the height of a human body from a single depth image. In this section, we show how to create the dataset, how to establish the intermediate representation, and how to design an effective network architecture.
+
+# 3.1. Data Set
+
+Our first problem is to create a depth image dataset with height information. There are already some datasets containing human body information, such as W8-400 [27] and RGB-D-T [29], but in these datasets the human body is located in the center of the image, or adopts only a simple posture such as standing upright, walking, or standing straight towards the camera. However, such simple data cannot meet the needs of real-life scenes: when we perform 3D reconstruction of the human body, the subject may be located anywhere in the image and pose in various postures, and during medical height measurement the patient may not be able to stand. Methods based on these datasets therefore cannot be applied effectively. These problems are the foci and difficulties of the height prediction field, and also the motivation of our research.
+
+Figure 2. Some examples of our dataset. Our dataset consists of RGB images, depth images and the corresponding human heights (e.g., 180 cm, 175 cm, 177 cm, 159 cm).
+
+We create a human body dataset with 2136 RGB-D images using a Kinect camera [32]; we only use the depth images, while the RGB images can be used by related research in the future. The dataset covers 10 postures, including walking, bending, sitting, etc.; see Figure 8. There are 14 volunteers in our dataset. They can stand anywhere and wear arbitrary clothes. Their heights range from $158\mathrm{cm}$ to $184\mathrm{cm}$, which covers a wide range of heights [7]. Figure 2 shows some examples from our dataset.
+
+Next, we need to consider how to organize the training data to ensure the network really establishes a connection between depth information and height, instead of a connection between identity and height. To verify this, we hold out one man and one woman from the 14 people and put all of their 369 images into a test set, called the Strange-test, to prevent the network from learning their identity information. For the other 12 people, we randomly select 5 images per person, 60 in total, into a second test set, called the Familiar-test.
+
+In summary, our dataset contains 2136 depth images paired with the corresponding body height values, divided into a training set of 1707 images and a test set of 429 images.
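+
+The split described above can be reproduced with a few lines of code. The following minimal sketch is only illustrative; the sample tuple layout, subject identifiers and random seed are assumptions rather than part of our released tooling.
+
+```python
+import random
+from collections import defaultdict
+
+def split_dataset(samples, strange_subjects, familiar_per_subject=5, seed=0):
+    """samples: iterable of (image_path, subject_id, height_cm) tuples (hypothetical layout).
+    Returns (train, strange_test, familiar_test)."""
+    random.seed(seed)
+    by_subject = defaultdict(list)
+    for sample in samples:
+        by_subject[sample[1]].append(sample)
+
+    train, strange_test, familiar_test = [], [], []
+    for subject, items in by_subject.items():
+        if subject in strange_subjects:
+            strange_test.extend(items)            # all images of the held-out man and woman
+        else:
+            random.shuffle(items)
+            familiar_test.extend(items[:familiar_per_subject])   # 5 images per subject
+            train.extend(items[familiar_per_subject:])
+    return train, strange_test, familiar_test
+```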
+
+# 3.2. Intermediate Representation
+
+In this section, we consider how to establish an intermediate representation of height information from depth images, making it easier and more efficient for the network to estimate height.
+
+Segmentation of human body parts from depth images as an intermediate representation has been widely applied in pose estimation [26, 34].
+
+
+Figure 3. The network architecture of $f^1(X)$ . The number below each block or layer represents its input size. The network takes a depth image $X^D$ and an edge image $E$ as input and outputs the corresponding torso image $T$ .
+
+Similarly, in this paper, we use a human body parts segmentation image as part of our intermediate representation.
+
+In order to make the segmentation more suitable for our problem, we observe that the human head, upper body, thigh and calf are nearly rigid, and the height can be expressed as the sum of these four parts. More importantly, their relative positions reflect the overall posture. Therefore, we segment the human torso into these four nearly rigid parts using the depth images and torso images. In order to eliminate interference from the hairstyle, we define the head part as extending from the eyebrows to the neck. We show some of our intermediate representation images in Figure 4.
+
+Experiments show that using our intermediate representation can significantly increase the accuracy of height estimation, see Section 4.2. It provides three advantages:
+
+1. We decompose the problem of human height estimation into estimating the height of four nearly rigid parts as the length of a rigid object is more predictable.
+2. The topological structures and length proportion relationships between these four parts contain the posture information, which provides a powerful clue for height prediction.
+3. Convolutional neural networks (CNNs) perform well at local perception [1]. Decomposing the problem into small local problems takes advantage of this property and reduces complexity, which makes the final estimation more accurate and stable, see Table 1.
+
+# 3.3. Network Architecture
+
+In this section we will describe our network architecture for obtaining the intermediate representation.
+
+In the first step, we need to segment the human torso image from the depth image. FCN [25], UNet++ [36], Mask R-CNN [13], and other methods [5, 31] have great performance in pixel-to-pixel image segmentation. We slightly modify the input so that the network accepts a depth image and outputs the human torso segmentation image. However, although these methods can generally segment the torso, they do not perform well at the edges of the human body, especially around the head and the feet.
+
+
+Figure 4. Some examples of our part-based intermediate representation. The depth image $X^D$ and the torso image $T$ are input to $f^2(X)$ to obtain the label image $L$ , our intermediate representation in the form of a segmentation of the human torso into four parts.
+
+We know that in the field of distance prediction, determining the starting point and the ending point is of paramount importance, see Figure 10. We therefore consider feeding the high-frequency information of the depth image to the network to improve the prediction quality at the edges. Let the original depth image be $X^{D}$ and the high-frequency image be $E$ . We use the Canny operator [3] to extract the edge information:
+
+$$
+E = \operatorname{canny}\left(X^{D}\right) \tag{1}
+$$
+
+Then we input the depth image $X^D$ and the high-frequency information $E$ into the convolutional neural network $f^{1}(X)$ as shown in Figure 3, and define the loss function as:
+
+$$
+\mathfrak{L} = \frac{1}{N} \sum_{i \in N} \left\| f_{i}^{1}\left(X^{D}, E\right) - T_{i} \right\|^{2} \tag{2}
+$$
+
+where $T$ is the ground-truth human torso image, $i$ indexes the pixels, and $N$ is the total number of pixels in the torso image.
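+
+As a concrete illustration of Eqs. (1) and (2), the sketch below extracts the edge image with OpenCV's Canny operator and evaluates the per-pixel squared loss of $f^1$ in PyTorch. The Canny thresholds and the normalization of the depth map are assumptions; `f1` stands for any network with a two-channel input.
+
+```python
+import cv2
+import numpy as np
+import torch
+import torch.nn.functional as F
+
+def edge_image(depth):
+    """Depth map (H, W), float -> edge map E in [0, 1] via the Canny operator (Eq. 1)."""
+    depth_8u = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
+    return cv2.Canny(depth_8u, 50, 150).astype(np.float32) / 255.0  # thresholds are assumptions
+
+def f1_loss(f1, depth_batch, edge_batch, torso_batch):
+    """Mean squared error between predicted and ground-truth torso images (Eq. 2)."""
+    pred = f1(torch.cat([depth_batch, edge_batch], dim=1))  # input: depth and edge channels
+    return F.mse_loss(pred, torso_batch)
+```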
+
+We adopt the same convolutional network architecture as $f^1(X)$ to design a new network $f^{2}(X)$ that takes the depth image $X^{D}$ and the torso image $T$ as input and outputs the corresponding label image $L$ as our intermediate representation. Similarly, we define the loss function of this network as:
+
+$$
+\mathfrak{L} = \frac{1}{N} \sum_{i \in N} \left\| f_{i}^{2}\left(X^{D}, T\right) - L_{i} \right\|^{2} \tag{3}
+$$
+
+where $i$ indexes the pixels and $N$ is the total number of pixels in the label image $L$ .
+
+
+Figure 5. The network architecture of $f^3(X)$ . Based on VGG16, we modify part of the convolutional layers and add three fully connected layers; the network takes a depth image $X^D$ and a label image $L$ as input and outputs the estimated lengths of the four parts.
+
+After dividing the torso into four parts, we need to predict these parts separately and obtain the lengths $H^{head}$ , $H^{upperbody}$ , $H^{thigh}$ and $H^{calf}$ of the head, upper body, thigh and calf in the image, respectively. We design a new network architecture $f^3(X)$ similar to VGG16 [35], but modify the fully convolutional layers and add three fully connected layers at the end of the network to make it more suitable for our problem. The network structure is shown in Figure 5. We input the depth image $X^D$ and the label image $L$ into the network, and obtain the estimated lengths of these four parts, namely:
+
+$$
+\left[ H^{head}, H^{upperbody}, H^{thigh}, H^{calf} \right]^{1 \times 4} = f^{3}\left(X^{D}, L\right) \tag{4}
+$$
+
+In this way, we construct our intermediate representation for height prediction: a part-based segmentation image and the length of each part. In the next section, we discuss how to use this intermediate representation for the final prediction, and we verify in Section 4 that the proposed representation is effective.
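+
+To make the shape of Eq. (4) concrete, the following sketch shows one possible way to build $f^3$ on top of a VGG16-style backbone with three fully connected layers that output the four part lengths. The two-channel input (depth image plus label image), the layer widths and the use of torchvision are assumptions, not the exact configuration of our network.
+
+```python
+import torch.nn as nn
+from torchvision.models import vgg16
+
+class PartLengthRegressor(nn.Module):
+    """Sketch of f^3: VGG16-style features followed by three fully connected layers
+    predicting [H_head, H_upperbody, H_thigh, H_calf] (Eq. 4)."""
+    def __init__(self):
+        super().__init__()
+        backbone = vgg16()                                     # randomly initialized backbone
+        backbone.features[0] = nn.Conv2d(2, 64, 3, padding=1)  # 2 input channels: X^D and L
+        self.features = backbone.features
+        self.head = nn.Sequential(
+            nn.Flatten(),
+            nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True),
+            nn.Linear(4096, 1024), nn.ReLU(inplace=True),
+            nn.Linear(1024, 4),                                # the four part lengths
+        )
+
+    def forward(self, x):                                      # x: (N, 2, 224, 224)
+        return self.head(self.features(x))
+```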
+
+# 3.4. Developing Network
+
+In this section, we will discuss how to use our intermediate representation to estimate body height. We design a developing network architecture with a hybrid pooling approach to solve this problem. In the large-scale and medium-scale convolutional blocks, it is still very important to retain accurate depth information $X^{D}$ and label information $L$ , so we add a skip connection structure [14] that directly adds this information to the output of the previous convolutional layer as the input of the next convolutional layer. Initially, we simply let the depth information $X^{D}$ and the label information $L$ share the same pooling strategy. However, we find that even with the skip connection structure, the network accuracy does not increase significantly. We then observe that there is a lot of noise in the depth image $X^{D}$ ; max pooling is easily disturbed by this noise, so we change the pooling mode for the depth information to average pooling, which brings a certain smoothness and alleviates the interference from local extreme pixels. For the label image, however, we find that max pooling is more suitable. We therefore propose a new network architecture based on VGG16 [35] with a skip connection structure and a hybrid pooling strategy. The network structure $f^{4}(X)$ is shown in Figure 6.
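+
+The hybrid pooling idea can be sketched as a single block that downsamples the re-injected depth input with average pooling and the label input with max pooling; all channel widths and the pooling of the feature maps themselves are illustrative assumptions rather than the exact layer configuration of $f^4$.
+
+```python
+import torch
+import torch.nn as nn
+
+class HybridPoolBlock(nn.Module):
+    """One block of a VGG16-style network with skip-connected X^D and L inputs."""
+    def __init__(self, feat_ch, out_ch):
+        super().__init__()
+        self.conv = nn.Sequential(                 # features + 1 depth + 1 label channel
+            nn.Conv2d(feat_ch + 2, out_ch, 3, padding=1), nn.ReLU(inplace=True),
+            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
+        )
+        self.avg_pool = nn.AvgPool2d(2)            # smooths noisy depth values
+        self.max_pool = nn.MaxPool2d(2)            # keeps crisp part boundaries in L
+
+    def forward(self, feat, depth, label):
+        out = self.conv(torch.cat([feat, depth, label], dim=1))
+        # Features and the depth skip input use average pooling, the label skip input max pooling.
+        return self.avg_pool(out), self.avg_pool(depth), self.max_pool(label)
+```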
+
+During the training process, the network is prone to overfitting. So we propose an increasingly complex network
+
+
+Figure 7. The change of our developing network structure over different iteration intervals. The blue layers are layers that are already active, and the yellow layers are newly added as the number of iterations increases.
+
+
+Figure 6. The network architecture of $f^4 (X)$ . We adopt different pooling strategies for the depth image $X^{D}$ and the label image $L$ to estimate human height.
+
+
+Figure 8. Some estimation results. The first and second rows show a female and a male volunteer from the Strange-test, respectively. The third row shows volunteers from the Familiar-test. All subjects can be located anywhere and pose in various postures. In any case, our method can make an accurate estimation from only a single depth image.
+
+structure, i.e., a developing network, to avoid overfitting, as shown in Figure 7. We first pre-train with all convolutional layers until the number of iterations exceeds $N_{1}$ . Then the model is saved, and only the first layer of each block is retained. When the iterations exceed $N_{2}$ , the second layer of the first and second blocks is added to training. Similarly, a new layer of the third, and then of the fourth and fifth blocks, is added when the iterations exceed $N_{3}$ and $N_{4}$ , respectively. Training continues until the iteration number reaches $N_{5}$ .
+
+We train these four subnetworks, i.e., $f^1(X)$ , $f^2(X)$ , $f^3(X)$ and $f^4(X)$ , one at a time, see Figure 1. Our proposed network architecture can effectively use our intermediate representation to predict the height of the human body, and performs well at preventing over-fitting, see Figure 9 and Table 3.
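+
+The growth schedule of the developing network can be summarized by a small helper that maps the iteration count to the number of active convolutional layers per block. The milestone values follow Section 4.3; the (2, 2, 3, 3, 3) block sizes of VGG16 and the exact order in which layers are re-introduced are assumptions where the text leaves room for interpretation.
+
+```python
+MILESTONES = [40000, 60000, 80000, 100000, 160000]   # N1 .. N5 (values from Section 4.3)
+BLOCK_SIZES = (2, 2, 3, 3, 3)                         # conv layers per VGG16 block
+
+def active_layers(iteration):
+    """Number of active convolutional layers in each of the five blocks."""
+    if iteration <= MILESTONES[0]:
+        return BLOCK_SIZES                            # pre-training with all layers
+    active = [1, 1, 1, 1, 1]                          # after N1: first layer of every block
+    if iteration > MILESTONES[1]:                     # after N2: grow blocks 1 and 2
+        active[0], active[1] = 2, 2
+    if iteration > MILESTONES[2]:                     # after N3: grow block 3
+        active[2] = 2
+    if iteration > MILESTONES[3]:                     # after N4: grow blocks 4 and 5
+        active[3], active[4] = 2, 2
+    return tuple(min(a, b) for a, b in zip(active, BLOCK_SIZES))
+```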
+
+# 4. Experiment
+
+We conduct a series of experiments to validate our method, covering whether to use the intermediate representation, whether to adopt our developing network, and the choice of input type. We also compare the accuracy with other methods. Experiments show that using the intermediate representation together with our network architecture achieves the highest accuracy among all compared methods. We show some results of our method in Figure 8.
+
+# 4.1. Data Preparation and Parameter Settings
+
+We resize the 2136 depth images to $256\times256$ and $224\times224$ . In the networks $f^{1}(X)$ and $f^{2}(X)$ we use images of size $256\times256$ , and in the networks $f^{3}(X)$ and $f^{4}(X)$ we use the size $224\times224$ . We train our networks on the training set of 1707 images and test on the test set of 429 images.
+
+The depths of the networks $f^1(X)$ and $f^2(X)$ are 21, and those of the other two are 19. The initial learning rate of the networks $f^1(X)$ and $f^2(X)$ is set to 0.0001, and every 5 epochs we decrease the learning rate by a factor of 0.8. The initial learning rate of the other two is also set to 0.0001, and every 50 epochs we decrease it by a factor of 0.5. The batch size is set to 8.
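+
+The two step-decay schedules above can be written compactly as follows; this is a direct transcription of the stated factors and intervals, with the choice of an epoch-indexed function being an assumption.
+
+```python
+def learning_rate(epoch, base_lr=1e-4, segmentation_net=True):
+    """f^1/f^2: multiply by 0.8 every 5 epochs; f^3/f^4: multiply by 0.5 every 50 epochs."""
+    if segmentation_net:
+        return base_lr * (0.8 ** (epoch // 5))
+    return base_lr * (0.5 ** (epoch // 50))
+```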
+
+We use the average relative error as the evaluation metric, which is defined as:
+
+$$
+\text{Average Relative Error} = \frac{1}{n} \sum_{i=1}^{n} \left| H_{e}^{(i)} - H_{a}^{(i)} \right| / H_{a}^{(i)} \tag{5}
+$$
+
+where $n$ is the number of samples in the test set, $H_{e}^{(i)}$ is the estimated height of sample $i$ , and $H_{a}^{(i)}$ is its real height.
+
+Correspondingly, we define the accuracy as:
+
+$$
+\text{Accuracy} = 1 - \text{Average Relative Error} \tag{6}
+$$
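+
+The two evaluation quantities of Eqs. (5) and (6) translate directly into code; the sketch below assumes heights are given as plain Python lists of equal length.
+
+```python
+def average_relative_error(estimated, actual):
+    """Eq. (5): mean of |H_e - H_a| / H_a over the test set."""
+    return sum(abs(e - a) / a for e, a in zip(estimated, actual)) / len(actual)
+
+def accuracy(estimated, actual):
+    """Eq. (6): 1 - average relative error."""
+    return 1.0 - average_relative_error(estimated, actual)
+```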
+
+# 4.2. Validity of the Intermediate Representation
+
+In this section, we verify the validity of the intermediate representation proposed in Section 3.2. We conduct three sets of experiments: our method, a method without the intermediate representation (denoted M1), and a method with a partial intermediate representation (denoted M2).
+
+In our method, we completely follow the steps described in Section 3.2 to obtain the intermediate representation. In M1, we only use $f^1(X)$ to segment the image, and then input the obtained torso image $T$ and the depth image $X^D$ into $f^4(X)$ to estimate height. In M2, we input the label image $L$ , obtained by applying $f^1(X)$ and $f^2(X)$ sequentially, together with the depth image $X^D$ into $f^4(X)$ to get the result. Table 1 shows the relative errors of the three methods.
+
+| Method Name | Error |
+| --- | --- |
+| Ours | 0.90% |
+| M1 | 1.71% |
+| M2 | 1.20% |
+
+Table 1. Average Relative Error of Our Method, M1, M2.
+
+It can be seen from Table 1 that our error is $43\%$ less than that of M1, and $19\%$ less than that of M2. At the same time, we count the number of images of the three methods in different error intervals, as shown in Table 2.
+
+| Error Interval | Ours | M1 | M2 |
+| --- | --- | --- | --- |
+| 0<Error≤1% | 273 | 172 | 228 |
+| 1%<Error≤2% | 101 | 113 | 128 |
+| 2%<Error≤3% | 35 | 65 | 45 |
+| 3%<Error≤4% | 17 | 45 | 16 |
+| 4%<Error≤5% | 2 | 21 | 3 |
+| 5%<Error≤6% | 1 | 7 | 6 |
+| 6%<Error≤7% | 0 | 3 | 1 |
+| 7%<Error≤8% | 0 | 0 | 1 |
+| 8%<Error≤9% | 0 | 2 | 1 |
+| 9%<Error≤10% | 0 | 1 | 0 |
+
+Table 2. Number of Samples within Different Error Intervals in Our Method, M1 and M2.
+
+It is clear from Table 2 that our method performs well at reducing extreme values: $99.3\%$ of all results have an error lower than $4\%$ , and $87.2\%$ lower than $2\%$ .
+
+The results demonstrate that using our intermediate representation can significantly improve accuracy and reduce extremely incorrect estimations.
+
+# 4.3. Effectiveness of the Network Architecture
+
+In order to verify the validity of the network architecture proposed in Section 3.3, we conduct two comparison experiments (denoted M3 and M4). In M3, rather than using the increasingly complex network architecture, we train the network with all convolutional layers from the start. In M4, we do not pre-train the network, but gradually grow the network architecture from the simplest one. Table 3 shows the relative errors of the three methods.
+
+| Method Name | Error |
+| --- | --- |
+| Ours | 0.90% |
+| M3 | 0.97% |
+| M4 | 1.07% |
+
+Table 3. Average Relative Error of Our Method, M3 and M4.
+
+As can be seen from Table 3, our architecture outperforms M3 and M4, reducing the error by $7.22\%$ and $15.89\%$ , respectively.
+
+We plot the accuracy curves during the training process in Figure 9. In our implementation, we set $N_1 = 40000$ , $N_2 = 60000$ , $N_3 = 80000$ , $N_4 = 100000$ and $N_5 = 160000$ . Before the $N_1$ point, M3 adopts the same training method as ours, and we observe that the two methods have similar accuracy. After reaching $N_1$ , although the accuracy of our method drops at first, it gradually increases as the network structure is restored, and the final accuracy even exceeds that of M3 and M4, which indicates that our architecture is effective.
+
+
+Figure 9. The accuracy curves versus the number of iterations for the four methods. In order to make the figure clearer, we stretch the data within the range [98, 99.2] by a factor of 10.
+
+# 4.4. Comparison of Different Input Types
+
+In this section we show which input type is best for our height estimation network. We conduct three sets of experiments using different inputs: RGB, RGB-D and depth only. For the RGB and RGB-D types, we only change the depth image in the input of the networks $f^{1}(X)$ , $f^{2}(X)$ , $f^{3}(X)$ and $f^{4}(X)$ to the corresponding image. The relative errors of these three input types are listed in Table 4.
+
+| Input Type | Test Set | Strange-test | Familiar-test |
+| --- | --- | --- | --- |
+| RGB | 1.35% | 1.47% | 0.64% |
+| RGB-D | 1.05% | 1.13% | 0.56% |
+| Depth (Ours) | 0.90% | 0.95% | 0.60% |
+
+Table 4. Average Relative Error of These Three Input Types on Test Set, Strange-test and Familiar-test.
+
+Intuitively, RGB-D images contain more information than the other two types, so one might expect RGB-D input to yield the best result. However, we find that using depth images alone generates the best result. We suspect that the reason is that RGB data in the input leads the network to establish a connection between identity and height, rather than estimating height from the image itself.
+
+In order to verify this conclusion, we conduct experiments on the two test sets proposed in Section 3.1: the Strange-test and the Familiar-test. The average relative error of each input type on these two test sets is also shown in Table 4.
+
+From Table 4 we find that, for the RGB input type, the relative error on the Strange-test is more than twice that on the Familiar-test. This confirms our previous conclusion that using RGB images encourages the network to learn a connection between identity and height.
+
+| Posture Name | Ours | Deák et al. [10] | Criminisi et al. [8] | Camera Calibration |
+| --- | --- | --- | --- | --- |
+| Upright | 0.59% | 4.10% | 1.63% | 1.35% |
+| Walking | 0.97% | 2.95% | 7.41% | 2.79% |
+| Sitting | 1.00% | 3.58% | 4.82% | 25.50% |
+| Bending | 2.19% | 9.92% | 5.38% | 28.53% |
+| Arms Raising Slightly | 0.78% | 4.38% | 1.45% | 1.20% |
+| Unrolling Arms | 0.77% | 3.93% | 1.52% | 1.51% |
+| Arms over Head | 0.91% | 4.86% | 1.52% | 1.48% |
+| Waving Hands | 0.75% | 4.45% | 1.54% | 1.53% |
+| Clapping | 0.69% | 5.50% | 1.58% | 2.91% |
+| Having a Waist Line | 0.73% | 4.39% | 1.59% | 2.97% |
+| Total Average Error | 0.90% | 4.80% | 6.44% | 2.69% |
+
+Table 5. The Average Relative Error between Our method and Other Methods in Different Postures.
+
+# 4.5. Comparison to Other Methods
+
+This section compares our method with other methods on our test set, including Deák et al. [10], based on the proportional relationship between body parts, Criminisi et al. [8], based on vanishing points and vanishing lines, and the Kinect camera calibration method [32]. Table 5 shows the comparison results.
+
+It can be seen from the comparison that the average error of our method on the whole test set is significantly better than that of the other methods. Our average relative error is approximately $0.90\%$ , which means the human body height can be accurately extracted from the image. Since the methods of Criminisi et al. [8] and camera calibration cannot cope well with non-upright postures, we separately report in Table 5 the error of the four methods for different postures. It is noticeable that ours is the best among the four methods across the various postures.
+
+At the same time, we analyze the errors of the other methods. Deák et al. [10] rely only on the ratio between the pixel length of the head and the interpupillary distance, so their sensitivity to posture is small. However, this method requires the face to be as perpendicular as possible to the camera's optical axis; otherwise the ratio between the interpupillary distance and the head length cannot be extracted correctly. Besides, due to individual differences, it is not convincingly accurate to recover a person's height from such statistical proportions. In the methods of Criminisi et al. [8] and camera calibration, when a person walks, sits or bends, the straight-line distance between the top of the head and the bottom of the feet is not a good representation of the person's height.
+
+Although these two methods perform better in the upright pose than in the aforementioned postures, they have an inherent problem in predicting body height from a single-view image. As shown in Figure 10, the distance $h_2$ between the dashed lines is the real height of the human body, while $h_1$ is the height predicted from the image. Therefore, there is still about $1.5\%$ error in the upright posture for both methods.
+
+
+Figure 10. A failure case of estimating height using only a single-view image, as in [10] and [8].
+
+# 5. Conclusion and Future Works
+
+We present a method for accurately and quickly estimating body height from a single depth image based on an increasingly complex network architecture. We propose an intermediate representation based on an effective body torso segmentation, which is obtained automatically by adding high-frequency information of the depth image to an FCN. We first predict the lengths of the body parts separately and then construct a developing network for the final estimation, which effectively suppresses model over-fitting. Our method can cope with sitting, bending, walking and many other postures, and the accuracy reaches $99.1\%$ .
+
+In the future, we may enrich our current dataset with more subjects and refine the human body segmentation based on semantic bio-information, which we believe will further improve the accuracy. As a further direction, we would like to explore more non-contact measuring techniques for geometric and physical quantities such as weight and density using optimized deep learning with various inputs.
+
+# 6. Acknowledgements
+
+This work was supported by the Science Foundation of Hunan Province (No. 2018JJ3064) and the National Science Foundation of China (No. 61303147). We gratefully acknowledge NVIDIA for a GPU donation. We thank Dan Yin, Wei Cai and Zeyu Liu for their help with dataset preparation.
+
+# References
+
+[1] Angelos Amanatiadis, Vasileios G Kaburlasos, and Elias B Kosmatopoulos. Understanding deep convolutional networks through gestalt theory. In 2018 IEEE International Conference on Imaging Systems and Techniques (IST), pages 1-6. IEEE, 2018. 4
+[2] Chiraz BenAbdelkader and Yaser Yacoob. Statistical body height estimation from a single image. In 2008 8th IEEE International Conference on Automatic Face & Gesture Recognition, pages 1-7. IEEE, 2008. 2
+[3] John Canny. A computational approach to edge detection. IEEE Transactions on pattern analysis and machine intelligence, (6):679-698, 1986. 4
+[4] Yu Chai and Xiaojing Cao. A real-time human height measurement algorithm based on monocular vision. In 2018 2nd IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), pages 293-297. IEEE, 2018. 3
+[5] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European conference on computer vision (ECCV), pages 801-818, 2018. 4
+[6] Michael Cogswell, Faruk Ahmed, Ross Girshick, Larry Zitnick, and Dhruv Batra. Reducing overfitting in deep networks by decorrelating representations. arXiv preprint arXiv:1511.06068, 2015. 1
+[7] NCD Risk Factor Collaboration et al. A century of trends in adult human height. *Elife*, 5:e13410, 2016. 3
+[8] Antonio Criminisi, Ian Reid, and Andrew Zisserman. Single view metrology. International Journal of Computer Vision, 40(2):123-148, 2000. 2, 3, 8
+[9] Angela Dai, Matthias Nießner, Michael Zollhöfer, Shahram Izadi, and Christian Theobalt. Bundlefusion: Real-time globally consistent 3d reconstruction using on-the-fly surface reintegration. ACM Transactions on Graphics (ToG), 36(3):24, 2017. 2
+[10] A Deak, O Kainz, M Michalko, and F Jakab. Estimation of human body height from uncalibrated image. In 2017 15th International Conference on Emerging eLearning Technologies and Applications (ICETA), pages 1-4. IEEE, 2017. 2, 8
+[11] Yanping Fu, Qingan Yan, Long Yang, Jie Liao, and Chunxia Xiao. Texture mapping for 3d reconstruction with rgb-d sensor. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4645-4653, 2018. 2
+[12] Ye-Peng Guan. Unsupervised human height estimation from a single image. Journal of Biomedical Science and Engineering, 2(06):425, 2009. 2
+[13] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961-2969, 2017. 4
+[14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 2, 5
+
+[15] Yihui He, Xiangyu Zhang, and Jian Sun. Channel pruning for accelerating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 1389-1397, 2017. 1
+[16] Patrick Chi-Yuen Hung, Channa P Witana, and Ravindra S Goonetilleke. Anthropometric measurements from photographic images. Computing Systems, 29:764-769, 2004. 3
+[17] Erno Jeges, Istvan Kispal, and Zoltan Hornak. Measuring human height using calibrated cameras. In 2008 Conference on Human System Interactions, pages 755-760. IEEE, 2008. 3
+[18] István Kispál and Erno Jeges. Human height estimation using a calibrated camera. In Proc. CVPR, 2008. 3
+[19] Kual-Zheng Lee. A simple calibration approach to single view height estimation. In 2012 Ninth Conference on Computer and Robot Vision, pages 161-166. IEEE, 2012. 2
+[20] Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710, 2016. 1
+[21] Jie Li, Mingui Sun, Hsin-Chen Chen, Zhaoxin Li, and Wenyan Jia. Anthropometric measurements from multi-view images. In 2012 38th Annual Northeast Bioengineering Conference (NEBEC), pages 426-427. IEEE, 2012. 2
+[22] Shengzhe Li, Van Huan Nguyen, Mingjie Ma, Cheng-Bin Jin, Trung Dung Do, and Hakil Kim. A simplified nonlinear regression method for human height estimation in video surveillance. EURASIP Journal on Image and Video Processing, 2015(1):32, 2015. 3
+[23] Zhaoxin Li, Wenyan Jia, Zhi-Hong Mao, Jie Li, Hsin-Chen Chen, Wangmeng Zuo, Kuanquan Wang, and Mingui Sun. Anthropometric body measurements based on multi-view stereo image reconstruction. In 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pages 366-369. IEEE, 2013. 3
+[24] Yingying Liu, Arcot Sowmya, and Heba Khamis. Single camera multi-view anthropometric measurement of human height and mid-upper arm circumference using linear regression. *PloS one*, 13(4):e0195600, 2018. 2
+[25] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3431-3440, 2015. 1, 4
+[26] Natalia Neverova, Christian Wolf, Florian Nebout, and Graham W Taylor. Hand pose estimation through weakly-supervised learning of a rich intermediate representation. arXiv preprint arXiv:1511.06728, 2015. 4
+[27] Tam V Nguyen, Jiashi Feng, and Shuicheng Yan. Seeing human weight from a single rgb-d image. Journal of Computer Science and Technology, 29(5):777-784, 2014. 3
+[28] Bas Penders, Ralph Brecheisen, Angele Gerver, Geertjan van Zonneveld, and Willem-Jan Gerver. Validating paediatric morphometrics: body proportion measurement using photogrammetric anthropometry. Journal of pediatric endocrinology and metabolism, 28(11-12):1357-1362, 2015. 2
+[29] Christian Pfitzner, Stefan May, and Andreas Nüchter. Body weight estimation for dose-finding and health monitoring of lying, standing and walking patients based on rgb-d data. Sensors, 18(5):1311, 2018. 3
+[30] Aaditya Prakash, James Storer, Dinei Florencio, and Cha Zhang. Repr: Improved training of convolutional filters. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10666-10675, 2019. 1
+[31] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234-241. Springer, 2015. 4
+[32] Microsoft Kinect sensors for Windows SDK. Available online: https://docs.microsoft.com/en-us/previousversions/windows/kinect. 1, 3, 8
+[33] Jie Shao, Shaohua Kevin Zhou, and Rama Chellappa. Robust height estimation of moving objects from uncalibrated videos. IEEE Transactions on Image Processing, 19(8):2221-2232, 2010. 3
+[34] Jamie Shotton, Andrew Fitzgibbon, Mat Cook, Toby Sharp, Mark Finocchio, Richard Moore, Alex Kipman, and Andrew Blake. Real-time human pose recognition in parts from single depth images. In CVPR 2011, pages 1297-1304. IEEE, 2011. 4
+[35] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 1, 5
+[36] Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang. Unet++: A nested u-net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pages 3-11. Springer, 2018. 4
\ No newline at end of file
diff --git a/accurateestimationofbodyheightfromasingledepthimageviaafourstagedevelopingnetwork/images.zip b/accurateestimationofbodyheightfromasingledepthimageviaafourstagedevelopingnetwork/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..e7f497922121c37a82dfe6ab4cff26192e761fcf
--- /dev/null
+++ b/accurateestimationofbodyheightfromasingledepthimageviaafourstagedevelopingnetwork/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a589099ee8d216aa5447bd2501de6def211f61eb3a01256227a5793aa4bdd2f1
+size 526765
diff --git a/accurateestimationofbodyheightfromasingledepthimageviaafourstagedevelopingnetwork/layout.json b/accurateestimationofbodyheightfromasingledepthimageviaafourstagedevelopingnetwork/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..fa4f9b3562bb2bd1d661e39f7c76b869a6622112
--- /dev/null
+++ b/accurateestimationofbodyheightfromasingledepthimageviaafourstagedevelopingnetwork/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:05cd355d8d64deed3db901b97c66f119033398552389d13cb06f02099572c548
+size 390858
diff --git a/achievingrobustnessinthewildviaadversarialmixingwithdisentangledrepresentations/31e82398-e561-4b4c-a670-94e39e04ec8f_content_list.json b/achievingrobustnessinthewildviaadversarialmixingwithdisentangledrepresentations/31e82398-e561-4b4c-a670-94e39e04ec8f_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..0afb1352c8248045e6530d8318b421abfcf84021
--- /dev/null
+++ b/achievingrobustnessinthewildviaadversarialmixingwithdisentangledrepresentations/31e82398-e561-4b4c-a670-94e39e04ec8f_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:58991e783cf560d4d3d59c64ac908982ea944308e0b8f0aad7a62705e9e46737
+size 91635
diff --git a/achievingrobustnessinthewildviaadversarialmixingwithdisentangledrepresentations/31e82398-e561-4b4c-a670-94e39e04ec8f_model.json b/achievingrobustnessinthewildviaadversarialmixingwithdisentangledrepresentations/31e82398-e561-4b4c-a670-94e39e04ec8f_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c3553ddb26c104866550692a632caead25425069
--- /dev/null
+++ b/achievingrobustnessinthewildviaadversarialmixingwithdisentangledrepresentations/31e82398-e561-4b4c-a670-94e39e04ec8f_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4ea80cd120f0547e11c98d5316622949b334250ef16298320ae942944ed1f2c6
+size 120040
diff --git a/achievingrobustnessinthewildviaadversarialmixingwithdisentangledrepresentations/31e82398-e561-4b4c-a670-94e39e04ec8f_origin.pdf b/achievingrobustnessinthewildviaadversarialmixingwithdisentangledrepresentations/31e82398-e561-4b4c-a670-94e39e04ec8f_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..41024c0111dea76859ade4f20d875eb620ba8840
--- /dev/null
+++ b/achievingrobustnessinthewildviaadversarialmixingwithdisentangledrepresentations/31e82398-e561-4b4c-a670-94e39e04ec8f_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:caf6abede42cc3dbd484939da6b389d31afb203f6c44a81f75c4a27d5c0beb4d
+size 1293223
diff --git a/achievingrobustnessinthewildviaadversarialmixingwithdisentangledrepresentations/full.md b/achievingrobustnessinthewildviaadversarialmixingwithdisentangledrepresentations/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..4eaee168d4b67bec08caee3d5fa4565613b5b1eb
--- /dev/null
+++ b/achievingrobustnessinthewildviaadversarialmixingwithdisentangledrepresentations/full.md
@@ -0,0 +1,435 @@
+# Achieving Robustness in the Wild via Adversarial Mixing with Disentangled Representations
+
+Sven Gowal* (sgowal@google.com), Chongli Qin* (chongliqin@google.com), Po-Sen Huang (posenhuang@google.com), Taylan Cemgil (taylancemgil@google.com), Krishnamurthy (Dj) Dvijotham (dvij@google.com), Timothy Mann (timothymann@google.com), Pushmeet Kohli (pushmeet@google.com)
+
+DeepMind
+
+# Abstract
+
+Recent research has made the surprising finding that state-of-the-art deep learning models sometimes fail to generalize to small variations of the input. Adversarial training has been shown to be an effective approach to overcome this problem. However, its application has been limited to enforcing invariance to analytically defined transformations like $\ell_p$ -norm bounded perturbations. Such perturbations do not necessarily cover plausible real-world variations that preserve the semantics of the input (such as a change in lighting conditions). In this paper, we propose a novel approach to express and formalize robustness to these kinds of real-world transformations of the input. The two key ideas underlying our formulation are (1) leveraging disentangled representations of the input to define different factors of variations, and (2) generating new input images by adversarially composing the representations of different images. We use a StyleGAN model to demonstrate the efficacy of this framework. Specifically, we leverage the disentangled latent representations computed by a StyleGAN model to generate perturbations of an image that are similar to real-world variations (like adding make-up, or changing the skin-tone of a person) and train models to be invariant to these perturbations. Extensive experiments show that our method improves generalization and reduces the effect of spurious correlations (reducing the error rate of a "smile" detector by $21\%$ for example).
+
+# 1. Introduction
+
+The principle by which neural networks are trained to minimize their average error on the training data is known as Empirical Risk Minimization (ERM) [1]. ERM has, for the most part, enabled breakthroughs in a wide variety of fields [2-4], and this success has led to the use of neural networks in applications that are safety-critical [5]. ERM, however, is only guaranteed to produce meaningful models
+
+
+Figure 1. Variations of the same faces. A model obtained through classical training classifies the same face as both "smiling" and "not smiling" (depending on the variations). Our model remains consistent in terms of classification. Note that these persons "do not exist" and have been generated using a StyleGAN model.
+
+when the data encountered during training and deployment is drawn independently from the same distribution. When a mismatch between training and testing data occurs, models can fail in catastrophic ways; and, unfortunately, such occurrences are commonplace: training data is often collected through a biased process that highlights confounding factors and spurious correlations [6, 7], which can lead to undesirable consequences (e.g., http://gendershades.org).
+
+The effects of such data shifts are largely detailed in the literature. For example, both Recht et al. [8] and Hendrycks et al. [9] show that the accuracy of IMAGENET models is severely impacted by changes in the data collection process. Methods to counteract such effects, which mainly consist of data augmentation techniques, also struggle. Training against corrupted data only forces the memorization of such corruptions and, as a result, these models fail to generalize to new corruptions [10, 11]. Works such as mixup [12] or AutoAugment [13] pave the way to further improvements, but still require intricate fine-tuning to succeed in practice.
+
+Another parallel and important line of work uncovered that the addition of small but carefully chosen deviations to the input, called adversarial perturbations, can cause the neural network to make incorrect predictions with high confidence [14-18]. Techniques to build models that are robust to adversarially perturbed examples, such as adversarial training [19], have received a significant amount of attention in recent years [16, 20-22]. The existence of imperceptible perturbations that alter a model's output demonstrates that supervised learning algorithms still fail to capture the true causal relationships between signal and label. The degradation of performance that occurs when shifting between training and adversarial (or otherwise corrupted) distributions indicates that neural networks pick up on correlations that are not necessarily robust to small input perturbations [23]. The existence of imperceptible adversarial perturbations highlights just one form of spurious correlation that causes undesirable behaviors in the networks we train.
+
+This paper focuses on training models that are robust to plausible real-world perturbations that preserve semantic content (such as those presented in Figure 1). We go beyond conventional data augmentation and adversarial training on $\ell_p$ -norm bounded perturbations by leveraging high-quality generative models that can describe such perturbations. In particular, we address the question: "Given a generative model with a sufficiently good disentangled representation that aligns well with the perturbations of interest, can we train neural networks that are resistant to bias and spurious correlations present in the training data?" More specifically, we consider StyleGAN [24] as our underlying generative model. Our contributions are as follows:
+
+1. We develop a framework dubbed Adversarial Mixing with Disentangled Representations (AdvMix) which leverages the disentangled latents of a generative model to train networks that are robust to real-world variations.
+2. We demonstrate how to leverage StyleGAN's mixing property to systematically transfer image attributes likely to be misclassified across image instances, thus allowing us to generate realistic worst-case semantic variations. This enables us to define semantic perturbations in a purely data-driven fashion, as opposed to methods that require data collection under different conditions [25].
+3. We conduct extensive experiments on a controlled Color-MNIST dataset that compare Adversarial Mixing with Disentangled Representations with random data augmentation and demonstrate under which conditions AdvMix achieves higher accuracy.
+4. Finally, we demonstrate empirically on CELEBA that accuracy is not necessarily at odds with robustness [26], once we consider semantic variations other than $\ell_p$ -norm bounded variations.
+
+
+Figure 2. Comparison of different data augmentation techniques. These transformations tend to destroy the image semantics.
+
+# 2. Related work
+
+Robustness to $\ell_{p}$ -norm perturbations. Generating pixel-level adversarial perturbations has been and remains extensively studied [16, 18-20, 27, 28]. Most works focus on the robustness of classifiers under $\ell_{p}$ -norm bounded perturbations. In particular, it is expected that a robust classifier be invariant to small perturbations in the pixel space (as defined by the $\ell_{p}$ -norm). Goodfellow et al. [16] and Madry et al. [19] laid down foundational principles to train robust networks, and recent works [29, 30] continue to find novel approaches to enhance robustness. While existing work is able to train models that are robust to imperceptible pixel-level variations, the study of robustness against semantically meaningful perturbations is largely under-explored.
+
+Adversarial robustness beyond $\ell_p$ -norm. Engstrom et al. [31] and Kanbak et al. [32] explored geometric transformations such as rotations and translation of images. Early works (e.g., Baluja and Fischer [33]) also demonstrated that it is possible to go beyond analytically defined variations by using generative models to create perturbations. Song et al. [34] and Xiao et al. [35] used a pre-trained AC-GAN [36] to generate perturbations; and they demonstrated that it is possible to generate semantically relevant perturbations for tasks such as MNIST, SVHN and CELEBA. Lastly, Qiu et al. [37] have attempted to generate adversarial examples by interpolating through the attribute space defined by a generative model. With the exception of [38], in which the authors strongly limit semantic variations by keeping the perturbed image close to its original counterpart, there has been little to no work demonstrating robustness to large semantically plausible variations. As such the effect of training models robust to such variations is unclear. To the best of our knowledge, this paper is the first to analyze the difference between adversarial training and data augmentation in the space of semantically meaningful variations.
+
+Data augmentation. Data augmentation can reduce generalization error. For image classification tasks, random flips, rotations and crops are commonly used [39]. More sophisticated techniques such as Cutout [40] (which produces random occlusions), CutMix [41] (which replaces parts of an image with another) and mixup [12] (which linearly interpolates between two images) all demonstrate extremely compelling and surprising results. Indeed, while these methods often result in images that are visibly corrupted and void of semantic meaning (even to the human eye), the resulting models often achieve state-of-the-art accuracy across a wide range of datasets. Figure 2 shows a comparison of these different techniques. Some of these data augmentation techniques have been applied to latent representations of the input (rather than the input itself) [42]. However, these do not focus on the effect of data bias.
+
+Causal reasoning using additional data. Heinze-Deml and Meinshausen [43] use grouped observations (e.g., the same object under different conditions) to discover variations that should not explain the classification label. More recently, Arjovsky et al. [25] developed a method called Invariant Risk Minimization (IRM) which tries to find an invariant predictor across different environments (or groups of data points). Both methods were able to build classifiers that are less sensitive to spurious correlations, which, in turn, leads to classifiers that are less biased than those trained purely on the original biased training set. However, they require explicitly annotated data collected under different environmental conditions.
+
+# 3. Adversarial Mixing with Disentangled Representations
+
+In this paper, we consider a model $f_{\theta}$ parametrized by $\theta$ . We would like our model to be robust or invariant to a set of transformations $\mathcal{T}$ . Formally, our goal is to find the model parameters $\theta$ that minimize the semantic adversarial risk
+
+$$
+\underset {(x, y) \sim \mathcal {D}} {\mathbb {E}} \left[ \max _ {t \in \mathcal {T}} L \left(f _ {\theta} (t (x)), y\right) \right], \tag {1}
+$$
+
+where $\mathcal{D} \subset \mathcal{X} \times \mathcal{Y}$ is a data distribution over pairs of examples $x$ and corresponding labels $y$ , and $L$ is a suitable loss function (such as the $0 - 1$ loss in the context of classification tasks). The set of semantic transformations $\mathcal{T}$ contains functions of the form $t: \mathcal{X} \to \mathcal{X}$ . Each element $t \in \mathcal{T}$ is irreducible and, crucially, for the optimal classifier $f_{\theta}: \mathcal{X} \to \mathcal{Y}$ , we would like that $f_{\theta}(t(x)) = f_{\theta}(x)$ for all $t \in \mathcal{T}$ . For example, an MNIST classifier should not be affected by changes in the digit color. In the following, we define a set of transformations $\mathcal{T}$ via a decoder that leverages a disentangled latent representation and explain how to evaluate the resulting risk in Equation (1).
+
+Invariant latent factors. Disentanglement is perceived as a desirable property of representations. Often, one hopes to obtain a representation of the observed data $x \in \mathcal{X}$ in terms of separate and conditionally independent factors $z \in \mathcal{Z}$ given $x$ under a certain class of input transformations [44]. In our particular setting, we will assume a task-specific disentangled representation. Formally, we assume that we have an ideal generator (or decoder), $\operatorname{dec} : \mathcal{Z} \to \mathcal{X}$ , where the latent space $\mathcal{Z}$ is a product space of the form $\mathcal{Z} = \mathcal{Z}_{\parallel} \times \mathcal{Z}_{\perp}$ . For a given classification task that predicts the label $y$ , only the coordinates corresponding to $\mathcal{Z}_{\parallel}$ are relevant, while $\mathcal{Z}_{\perp}$ is irrelevant. We formalize the above notions using conditional independence: given an example $x = \operatorname{dec}(z_{\parallel}, z_{\perp})$ with $z_{\perp} \in \mathcal{Z}_{\perp}$ , $z_{\parallel} \in \mathcal{Z}_{\parallel}$ and corresponding label $y \in \mathcal{Y}$ , we have
+
+$$
+\mathbb {P} (y | z _ {\parallel}, z _ {\perp}) = \mathbb {P} (y | z _ {\parallel}). \tag {2}
+$$
+
+Hence, the ideal invariant classifier $f^{\star}$ that outputs a probability distribution over $\mathcal{Y}$ should be consistent with the invariance assumption
+
+$$
+f^{\star}\left(\operatorname{dec}\left(z_{\parallel}, z_{\perp}\right)\right) = f^{\star}\left(\operatorname{dec}\left(z_{\parallel}, \tilde{z}_{\perp}\right)\right) \tag{3}
+$$
+
+for all $\tilde{z}_{\perp}\in \mathcal{Z}_{\perp}$ , and should output the correct label:
+
+$$
+\underset{y' \in \mathcal{Y}}{\operatorname{argmax}} \; f^{\star}\left(\operatorname{dec}\left(z_{\parallel}, z_{\perp}\right)\right) = y. \tag{4}
+$$
+
+Finally, referring back to Equation (1), we define the set of transforms $\mathcal{T}$ that induce semantically irrelevant perturbations as:
+
+$$
+\mathcal{T} = \left\{ t \;\middle|\; t(x) = \operatorname{dec}\left(z_{\parallel}, \tilde{z}_{\perp}\right) \text{ with } \tilde{z}_{\perp} \in \mathcal{Z}_{\perp} \text{ s.t. } \exists z_{\perp}: x = \operatorname{dec}\left(z_{\parallel}, z_{\perp}\right) \right\}. \tag{5}
+$$
+
+Adversarial training. Given a model $f_{\theta}$ with enough capacity, minimizing the semantic adversarial risk in Equation (1) results in parameters $\theta^{\star}$
+
+$$
+\theta^{\star} = \underset{\theta}{\operatorname{argmin}} \; \underset{x = \operatorname{dec}(z_{\parallel}, z_{\perp})}{\mathbb{E}} \left[ \max_{\tilde{z}_{\perp} \in \mathcal{Z}_{\perp}} L\left(f_{\theta}\left(\operatorname{dec}\left(z_{\parallel}, \tilde{z}_{\perp}\right)\right), y\right) \right] \tag{6}
+$$
+
+that satisfy Equations (3) and (4). In other words, there exists no transformation $t \in \mathcal{T}$ that, when applied to $x$ , would result in a misclassification of the optimal classifier $f^{\star} = f_{\theta^{\star}}$ . Solving the saddle point problem in Equation (6) requires solving the corresponding inner-maximization problem
+
+$$
+\tilde{z}_{\perp}^{\star} = \underset{\tilde{z}_{\perp} \in \mathcal{Z}_{\perp}}{\operatorname{argmax}} \; L\left(f_{\theta}\left(\operatorname{dec}\left(z_{\parallel}, \tilde{z}_{\perp}\right)\right), y\right). \tag{7}
+$$
+
+As enumerating all possible latents $\tilde{z}_{\perp} \in \mathcal{Z}_{\perp}$ is often intractable, we resort to a technique popularized by Madry et al. [19] in the context of adversarial training, which consists of using projected gradient ascent on a differentiable surrogate loss. For a classification task, the $0 - 1$ loss is replaced with the cross-entropy loss:
+
+$$
+\hat {L} \left(f _ {\theta} (x), y\right) = - \log \left(\left[ f _ {\theta} (x) \right] _ {y}\right) \tag {8}
+$$
+
+
+Figure 3. Illustration of the maximization process in Equation (9).
+
+where $[a]_i$ returns the $i$ -th coordinate of $a$ . Gradient ascent steps are then interleaved with projection steps for a given number of iterations $K$ . Formally, we find an estimate $\tilde{z}_{\perp}^{(K)}$ of $\tilde{z}_{\perp}^{\star}$ using the following recursion:
+
+$$
+\tilde{z}_{\perp}^{(k+1)} = \operatorname{proj}_{\mathcal{Z}_{\perp}}\left(\tilde{z}_{\perp}^{(k)} + \alpha \nabla_{\tilde{z}_{\perp}^{(k)}} \hat{L}\left(f_{\theta}\left(\operatorname{dec}\left(z_{\parallel}, \tilde{z}_{\perp}^{(k)}\right)\right), y\right)\right) \tag{9}
+$$
+
+where $\tilde{z}_{\perp}^{(0)}$ is chosen at random within $\mathcal{Z}_{\perp}$ , $\alpha$ is a constant step-size and $\operatorname{proj}_{\mathcal{A}}(a)$ is a projection operator that projects $a$ onto $\mathcal{A}$ . Figure 3 illustrates the process.
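+
+A minimal sketch of the recursion in Eq. (9) is shown below, assuming a differentiable decoder `dec`, a classifier `f_theta` that returns logits, and a box-shaped $\mathcal{Z}_{\perp}$ so that the projection reduces to clipping; all function names and the box bounds are placeholders rather than the exact implementation.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def adversarial_z_perp(f_theta, dec, z_par, y, z_low, z_high, steps=10, alpha=0.1):
+    """Projected gradient ascent on the label-independent latents (Eq. 9)."""
+    z_perp = z_low + (z_high - z_low) * torch.rand_like(z_low)       # random start in Z_perp
+    for _ in range(steps):
+        z_perp = z_perp.detach().requires_grad_(True)
+        loss = F.cross_entropy(f_theta(dec(z_par, z_perp)), y)       # surrogate loss, Eq. (8)
+        grad, = torch.autograd.grad(loss, z_perp)
+        z_perp = torch.min(torch.max(z_perp + alpha * grad, z_low), z_high)  # ascent + projection
+    return z_perp.detach()
+```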
+
+Ultimately, Adversarial Mixing with Disentangled Representations (shortened as AdvMix) tries to find parameters that minimize the worst-case loss that could arise from altering the input examples through plausible transformations. It guarantees that transformations of the input are meaningful by using a disentangled latent representation that encodes independent controllable factors, where some of these factors are known to be independent from the label. Finding such a disentangled representation is rarely possible, as it is not always known which variations of the input should or should not affect the label. In some cases, however, it is possible to train generative models such that we expect some subset of the latents to not affect the label. Section 4 implements AdvMix using a StyleGAN model.
+
+Data with low density regions. The motivation behind AdvMix stems from the manifold hypothesis [45]. It states that high-dimensional data present in the real world, such as images, often lies on a low-dimensional manifold. As a consequence, there exist large regions in the input space that are outside the support of the data distribution. Hence, for maximal efficiency, data augmentation and adversarial training should be done carefully to make sure that the augmented data is still within the support of the original data distribution. Data augmentation techniques presented in Figure 2 clearly violate this condition, and despite their success, we cannot expect that they perform well across all datasets (in fact, mixup performs poorly on Color-MNIST). Similarly,
+
+
+Figure 4. Comparison of mixup and AdvMix on a toy example. In this example, we are given 200 datapoints. Each data point $(x_{1}, x_{2})$ is sampled according to $x_{1} \sim \mathcal{N}(z_{\perp}, \sqrt{3})$ where $z_{\perp} \in \mathcal{Z}_{\perp} = \{0., 10.\}$ and $x_{2} \sim \mathcal{N}(z_{\parallel}, 1)$ where $z_{\parallel} \in \mathcal{Z}_{\parallel} = \{0., 20.\}$ . The colors represent the label. Note that the latent variable $z_{\parallel} = 20y$ is dependent on the label while $z_{\perp}$ is independent of the label. Panel (a) shows the original set of 200 datapoints; panel (b) shows the effect of sampling additional data using AdvMix; and panel (c) shows the effect of mixup. Of course, we should point out that our method, AdvMix, is aware of the underlying latent representation, while mixup is not.
+
+adversarial training targeting $\ell_p$ -norm bounded perturbations tends to trade off accuracy for robustness [23]. Figure 4 compares mixup and AdvMix on a toy example. In this example, we artificially construct a dataset with two classes and an underlying disentangled latent representation. We observe that by exploiting knowledge of the disentangled latent representation, AdvMix is capable of generating additional datapoints that are consistent with the original dataset, while mixup generates additional datapoints that are unlikely.
+
+Relationship to mixup. mixup augments data in the input space. Given two pairs of inputs $(x_A, y_A)$ , $(x_B, y_B)$ and a linear interpolation factor sampled from a Beta distribution $\lambda \sim \mathrm{Beta}(\alpha, \alpha)$ , mixup generates a new input pair as follows:
+
+$$
+\tilde {x} = \lambda x _ {A} + (1 - \lambda) x _ {B}
+$$
+
+$$
+\tilde {y} = \lambda y _ {A} + (1 - \lambda) y _ {B}. \tag {10}
+$$
+
+Our methodology combines inputs $(x_A, y_A)$ and $(x_B, y_B)$ in the latent space. If $x_A = \mathrm{dec}(z_{A\parallel}, z_{A\perp})$ and $x_B = \mathrm{dec}(z_{B\parallel}, z_{B\perp})$ , we obtain
+
+$$
+\tilde{x} = \operatorname{dec}\left(z_{A_{\parallel}}, z_{B_{\perp}}\right)
+$$
+
+$$
+\tilde {y} = y _ {A}. \tag {11}
+$$
+
+Crucially, this combination only affects the latent sub-space that is independent from the label, thus the label remains unchanged. We also note that, unlike [42], no interpolation occurs in the latent space (i.e., $\lambda z_{A\perp} + (1 - \lambda)z_{B\perp}$ ) as this could result in points that are outside $\mathcal{Z}_{\perp}$ when $\mathcal{Z}_{\perp}$ is not convex.
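+
+The contrast between the two combination rules can be made explicit in code. The sketch below assumes one-hot label tensors for mixup and a decoder `dec` taking the two latent groups as separate arguments; both are notational conveniences rather than a reference implementation.
+
+```python
+import torch
+
+def mixup_combine(x_a, y_a, x_b, y_b, alpha=0.2):
+    """mixup (Eq. 10): interpolate inputs and (one-hot) labels in input space."""
+    lam = torch.distributions.Beta(alpha, alpha).sample()
+    return lam * x_a + (1 - lam) * x_b, lam * y_a + (1 - lam) * y_b
+
+def advmix_combine(dec, z_a_par, z_b_perp, y_a):
+    """AdvMix-style combination (Eq. 11): swap only the label-independent latents,
+    so the label of example A is kept unchanged."""
+    return dec(z_a_par, z_b_perp), y_a
+```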
+
+Relationship to Invariant Risk Minimization. Arjovsky et al. [25] consider the case where we have multiple datasets $D_{e} = \{x_{i},y_{i}\}_{i = 1}^{n}$ drawn from different training environments $e\in \mathcal{E}$ . As explained in [25], the motivation behind IRM is to minimize the worst-case risk
+
+$$
+\max _ {e \in \mathcal {E}} \underset {(x, y) \in D _ {e}} {\mathbb {E}} [ L (f _ {\theta} (x), y) ]. \tag {12}
+$$
+
+In this paper, the environments are defined by the different instances of $z_{\perp} \in \mathcal{Z}_{\perp}$ . Given a dataset $\{\mathrm{dec}(z_{i_{\parallel}}, z_{i_{\perp}}), y_i\}_{i=1}^n$ , we can rewrite the semantic adversarial risk shown in Equation (1) as Equation (12) by setting the environment set $\mathcal{E}$ to
+
+$$
+\mathcal{E} = \left\{ \left\{ \operatorname{dec}\left(z_{i_{\parallel}}, z_{\perp}\right), y_{i} \right\}_{i=1}^{n} \;\middle|\; z_{\perp} \in \mathcal{Z}_{\perp} \right\}. \tag{13}
+$$
+
+This effectively creates an ensemble of datasets covering all possible combinations of $z_{\perp} \in \mathcal{Z}_{\perp}$ for all examples.
+
+The crucial difference between IRM and AdvMix is in the formulation of the risk. While IRM computes the risk by enumerating over a countable set of environments and picking the worst-case, AdvMix attempts to compute the worst-case risk by finding the combination of variations that maximize the risk over all examples.
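+
+The distinction can also be phrased computationally: IRM-style evaluation enumerates environments and keeps the worst one, as in the sketch below, where `loss` is any per-example loss and `environments` is an iterable of (inputs, labels) pairs built, for instance, as in Eq. (13).
+
+```python
+def worst_case_environment_risk(f_theta, loss, environments):
+    """Eq. (12): empirical risk in each environment; report the maximum."""
+    risks = [loss(f_theta(x), y).mean() for x, y in environments]
+    return max(risks)
+```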
+
+# 4. Implementation using StyleGAN
+
+So far, we have assumed the presence of a generator (or decoder) that is capable of using a perfectly disentangled latent representation: we have assumed that this representation is partitioned into two subsets, one of which is known to be independent from the target label. In practice, the methodology is often reversed: generative models are trained in the hope of obtaining some level of disentanglement. If a partition of the trained latent space does not influence the label, we can use the corresponding trained generator within AdvMix. This section explains why StyleGAN is a good candidate and details how to implement AdvMix using StyleGAN. In particular, as we rely on StyleGAN's mixing property to enforce a partitioning of the latents, only three elements are needed: $(i)$ a transformation set $\mathcal{Z}_{\perp}$ from which label-independent variants $\tilde{z}_{\perp}$ can be chosen; $(ii)$ a dataset $\mathcal{D} = \{z_{i\parallel},y_i\}_{i = 1}^n$ of latents and labels; and $(iii)$ an efficient method to find a worst-case variation $\tilde{z}_{\perp}\in \mathcal{Z}_{\perp}$ .
+
+StyleGAN. StyleGAN is a generator architecture for generative adversarial networks proposed by Karras et al. [24]. It borrows interesting properties from the style transfer literature [46]. In this work, we rely on the style mixing property. Formally, the StyleGAN architecture is composed of two stages. The first stage takes a latent variable $\mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{1})$ that is not necessarily disentangled and projects it into a disentangled latent space $z = \mathrm{map}(\mathbf{z})$ . The second stage synthesizes an image $x$ from the disentangled latents $z$ using a decoder $x = \mathrm{dec}(z)$ . Overall, the process of generating an image $x$ using a StyleGAN network is defined as
+
+$$
+x = \operatorname{dec} \circ \operatorname{map}(\mathbf{z}) \quad \text{where} \quad \mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{1}). \tag{14}
+$$
+
+The intermediate latent variables $z$ provide some level of disentanglement that affects image generation at different spatial resolutions, which allows us to control the synthesis of an image. In particular, we can apply the "style" of an image to another by mixing the disentangled latents of these images together. In the context of face generation, the styles corresponding to coarse spatial resolutions affect high-level aspects such as pose, while styles of fine resolutions mainly affect the color scheme. In the rest of this manuscript, we focus on variations of the finer style. Concretely, our experiments in Section 5 assume that the fine attributes $z_{\perp}$ are label-independent, while the coarse attributes $z_{\parallel}$ may be label-dependent. Consequently, the finer style $z_{B\perp}$ of an image $x_B$ can be applied to another image $x_A = \mathrm{dec}(z_{A\parallel},z_{A\perp})$ via $\mathrm{dec}(z_{A\parallel},z_{B\perp})$ . Figure 5b shows a nominal image and two variations of that image obtained by mixing in the finer style of two other images.
+
+Definition of the transformation set. For completeness, we now define the set of transforms $\mathcal{T}$ in Equation (5) by defining $\mathcal{Z}_{\perp}$ . While the formulation of StyleGAN allows $z$ to be sampled within an infinite support, our formulation requires $\mathcal{Z}_{\perp}$ to be bounded. Additionally, as explained by Nalisnick et al. [47], due to concentration of measure, a generative model usually draws samples from its typical set [48] (a subset of the model's full support) rather than regions of high probability density. As such, if $z \in \mathbb{R}^d$ , we wish to define $\mathcal{Z}_{\perp}$ as follows:
+
+$$
+\mathcal{Z}_{\perp} = \left\{ \operatorname{map}(\mathbf{z})_{\perp} \;\middle|\; \sqrt{d} - \delta d^{\frac{1}{4}} \leq \| \mathbf{z} \|_{2} \leq \sqrt{d} + \delta d^{\frac{1}{4}} \right\} \tag{15}
+$$
+
+where $\delta$ is a small tunable positive constant. In practice, however, we do not want to backpropagate through the map operation as it is inefficient. Instead, a small collection of latents is sampled, passed through the map operation, and $\mathcal{Z}_{\perp}$ is limited to a neighborhood of the points in this collection. This collection is re-sampled for each example and in expectation covers the typical set well (more details are given in Algorithm 2).
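+
+The shell constraint in Equation (15) is easy to check numerically. The sketch below rejection-samples latents whose norm falls inside that shell; the identity placeholder for the map network, the dimensionality `d`, and the value of `delta` are illustrative assumptions.
+
+```python
+# Minimal sketch of drawing candidate latents from the typical-set shell of
+# Equation (15). The map network is an identity placeholder; d, delta and the
+# rejection strategy are illustrative assumptions.
+import numpy as np
+
+def sample_typical_latents(n, d, delta=1.0, map_fn=lambda z: z, seed=0):
+    """Rejection-sample z ~ N(0, I_d) whose norm lies in the typical-set shell."""
+    rng = np.random.default_rng(seed)
+    lo, hi = np.sqrt(d) - delta * d ** 0.25, np.sqrt(d) + delta * d ** 0.25
+    accepted = []
+    while len(accepted) < n:
+        z = rng.normal(size=d)
+        if lo <= np.linalg.norm(z) <= hi:      # the shell of Equation (15)
+            accepted.append(map_fn(z))         # stand-in for map(z)_perp
+    return np.stack(accepted)
+
+collection = sample_typical_latents(n=16, d=512)
+print(collection.shape)   # (16, 512): candidates whose neighborhoods define Z_perp
+```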
+
+**Construction of a dataset of disentangled latents.** Constructing a dataset of labelled latents $\mathcal{D} = \{z_{i\parallel},y_i\}_{i = 1}^n$ requires finding the latents $z_{i}$ that decode into each example $x_{i}$ of an original labelled dataset $\{x_{i},y_{i}\}_{i = 1}^{n}$ . Hence, we need to find a mapping between the image space and the latent space. This mapping, which can be computed offline, is used to construct the dataset $\mathcal{D}$ , and is only required once for each new dataset. Specifically, this mapping is denoted as $\mathrm{enc}: \mathcal{X} \mapsto \mathcal{Z}$ and finds $z_{i}$ such that $x_{i} \approx \mathrm{dec}(z_{i})$ .
+
+Figure 5. Panel (a) shows how the latents are progressively able to match a target image (on the far right). Panel (b) shows two different variations of the obtained image.
+
+# Algorithm 1 Encoder enc
+
+Input: Target image $x$ , trained StyleGAN model $\mathrm{dec}\circ \mathrm{map}$ , and trained VGG network vgg. $\alpha_{i}$ and $\beta_{i}$ are hyperparameters, all set to 1 and 1/5 respectively. $\gamma^{(k)}$ is a step-size schedule.
+Output: Disentangled latents $\hat{z}$ such that $\mathrm{dec}(\hat{z})\approx x$
+1: $\hat{z} \gets \frac{1}{M} \sum_{i=1}^{M} \mathbf{map}(\mathbf{z}^{(i)})$ with $\mathbf{z}^{(i)} \sim \mathcal{N}(\mathbf{0}, \mathbf{1})$ ▷ Average latents
+2: for $k \in \{1, \dots, N\}$ do
+3: $\hat{x} = \operatorname{dec}(\hat{z})$
+4: $\hat{\mathcal{A}} = \mathrm{vgg}(\hat{x})$ $\triangleright \hat{\mathcal{A}}$ is a list of activations (after the $2^{\mathrm{nd}}$ convolution of the $1^{\mathrm{st}}$ , $2^{\mathrm{nd}}$ and $3^{\mathrm{rd}}$ blocks)
+5: $\mathcal{A} = \mathrm{vgg}(x)$
+6: $\mathcal{A}_{\mathrm{mix}} = \mathrm{vgg}(\mathrm{dec}(\hat{z}_\parallel ,\mathrm{map}(\mathbf{z})_\perp))$ with $\mathbf{z}\sim \mathcal{N}(\mathbf{0},\mathbf{1})$
+7: $L_{\mathrm{reconstruct}} = \alpha_0\| \hat{x} -x\| _2^2 +\sum_{i = 1}^{|\mathcal{A}|}\alpha_i\| \hat{\mathcal{A}}_i - \mathcal{A}_i\| _2^2$ $\triangleright$ Reconstruction loss
+8: $L_{\mathrm{mix}} = \sum_{i=1}^{|\mathcal{A}|} \beta_i \| \mathcal{A}_{\mathrm{mix},i} - \mathcal{A}_i\|_2^2$ $\triangleright$ Mixing loss
+9: $\hat{z} \gets \hat{z} - \gamma^{(k)}\nabla_{\hat{z}}(L_{\mathrm{reconstruct}} + L_{\mathrm{mix}})$
+10: end for
+
+Algorithm 1 defines this mapping through an optimization process. Inspired by [50], and rather than relying solely on the distance between pixel values to define the loss of that optimization, we use the perceptual loss [51, 52] – which helps steer the optimization process. The perceptual loss is defined on the intermediate activations of a trained VGG-16 network [53] (see line 7). We also found that the StyleGAN generator, dec, is a many-to-one mapping between its disentangled latent space and the image space (i.e., multiple latents can decode into the same image). Hence, since we heavily rely on the mixing property of StyleGAN, and unlike [50], we propose to add an additional component to the loss that steers the latents towards a subset of latents that can be mixed. In particular, we add a perceptual loss between the synthesized image and a mixed version of the same image (see lines 6 and 8). Figure 5 shows the evolution of the optimization process as well as mixed variants of the resulting image.
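+
+As a rough illustration of the structure of this inversion loop only, the sketch below inverts a toy linear decoder by gradient descent on a plain reconstruction loss; the StyleGAN decoder, the VGG perceptual loss, and the mixing loss of Algorithm 1 are all replaced by simple stand-ins.
+
+```python
+# Minimal sketch of the inversion idea in Algorithm 1, with a toy linear decoder
+# in place of StyleGAN and a plain pixel loss in place of the VGG perceptual and
+# mixing losses. Everything here is an illustrative stand-in.
+import numpy as np
+
+rng = np.random.default_rng(1)
+A = rng.normal(size=(16, 6))                  # toy decoder: dec(z) = A @ z
+
+def dec(z):
+    return A @ z
+
+x_target = dec(rng.normal(size=6))            # "image" we want to invert
+
+# Line 1 of Algorithm 1: initialize from an average of random latents.
+z_hat = np.mean(rng.normal(size=(32, 6)), axis=0)
+
+lr = 1.0 / np.linalg.norm(A, 2) ** 2          # safe step size for this quadratic loss
+for _ in range(500):                          # lines 2-10: gradient descent on the loss
+    residual = dec(z_hat) - x_target
+    grad = A.T @ residual                     # gradient of 0.5 * ||dec(z) - x||^2
+    z_hat = z_hat - lr * grad
+
+print(np.linalg.norm(dec(z_hat) - x_target))  # near zero for this toy problem
+```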
+
+**Generating worst-case examples to train robust models.**
+
+As explained in Section 3, minimizing the semantic adversarial risk requires solving an inner-maximization problem. We rely on projected gradient ascent on the cross-entropy loss $\hat{L}$ to efficiently find perturbed latents $\tilde{z}_{\perp} \in \mathcal{Z}_{\perp}$ such that, when mixed with $z_{\parallel}$ , they make the classifier output a label other than the true label. Algorithm 2 illustrates the process. This algorithm approximates the typical set in Equation (15) by randomly sampling initial latents $\tilde{z}_{\perp}^{(0)}$ $N_{\mathrm{r}}$ times and projecting intermediate solutions $\tilde{z}_{\perp}^{(k)}$ back onto a neighborhood of $\tilde{z}_{\perp}^{(0)}$.
+
+# Algorithm 2 Solution to Equation (7)
+
+Input: A nominal input $x$ and label $y$ , a model $f_{\theta}$ , a StyleGAN model $\mathrm{dec}\circ \mathrm{map}$ and an encoder enc. $L$ is the 0-1 loss and $\hat{L}$ is the cross-entropy loss.
+
+Output: Possibly misclassified example $\tilde{x}$
+
+1: $\tilde{x}\gets x$
+2: $[z_{\parallel},z_{\perp}] = \mathsf{enc}(x)$ $\triangleright$ See Algorithm 1
+3: for $r\in \{1,\dots ,N_{\mathrm{r}}\}$ do $\triangleright$ Repeat $N_{\mathrm{r}}$ times
+4: $\tilde{z}_{\perp}^{(0)}\gets \mathsf{map}(\mathbf{z})_{\perp}$ with $\mathbf{z}\sim \mathcal{N}(\mathbf{0},\mathbf{1})$ ▷ Initial latents
+5: $\tilde{x}^{(0)} = \mathsf{dec}(z_{\parallel},\tilde{z}_{\perp}^{(0)})$
+6: for $k\in \{1,\ldots ,K\}$ do $\triangleright$ K is the number of optimization steps
+7: $\tilde{z}_{\perp}^{(k)} \gets \mathbf{proj}\left(\tilde{z}_{\perp}^{(k-1)} + \alpha \nabla_{\tilde{z}_{\perp}^{(k-1)}} \hat{L}(f_{\theta}(\tilde{x}^{(k-1)}), y)\right)$
+8: $\tilde{x}^{(k)} = \mathsf{dec}(z_{\parallel},\tilde{z}_{\perp}^{(k)})$
+9: if $L(f_{\theta}(\tilde{x}^{(k)}),y) > L(f_{\theta}(\tilde{x}),y)$ then
+10: $\tilde{x}\gets \tilde{x}^{(k)}$
+11: return $\triangleright$ Since $L$ is the $0 - 1$ loss, the procedure can terminate early
+12: end if
+13: end for
+14: end for
+
+The algorithm then refines the initial latents using gradient ascent with the goal of finding latents $\tilde{z}_{\perp}^{(K)}$ that, when mixed with the original image latents $z_{\parallel}$ , generate an image $\mathrm{dec}(z_{\parallel}, \tilde{z}_{\perp}^{(K)})$ that is misclassified. Figure 1 shows the result of this optimization procedure where the original image (on the top-left) is classified as "not smiling" and the optimized image (on the bottom-left) is classified as "smiling". Once perturbed latents $\tilde{z}_{\perp} = \tilde{z}_{\perp}^{(K)}$ are found, we can compute the cross-entropy loss on the image generated by $\mathrm{dec}(z_{i\parallel}, \tilde{z}_{\perp})$ . Formally, for a classifier $f_{\theta}$ and a dataset $\mathcal{D} = \{z_{i\parallel}, y_i\}_{i=1}^n$ , we want to solve
+
+$$
+\underset{\theta}{\operatorname{argmin}} \; \mathbb{E}_{z_{i\parallel}, y_{i} \sim \mathcal{D}} \left[ L\left(f_{\theta}\left(\operatorname{dec}\left(z_{i\parallel}, \tilde{z}_{\perp}\right)\right), y_{i}\right) \right] \tag{16}
+$$
+
+$$
+\text{and} \quad \tilde{z}_{\perp} = \underset{z_{\perp} \in \mathcal{Z}_{\perp}}{\operatorname{argmax}} \, L\left(f_{\theta}\left(\operatorname{dec}\left(z_{i\parallel}, z_{\perp}\right)\right), y_{i}\right).
+$$
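+
+A sketch of this inner maximization under toy assumptions is shown below: a linear decoder and a logistic classifier stand in for StyleGAN and $f_{\theta}$ , and the step size, projection radius, and restart count are illustrative, not the paper's settings.
+
+```python
+# Minimal sketch of the projected gradient ascent of Algorithm 2, with a toy
+# linear decoder and a logistic classifier standing in for StyleGAN and f_theta.
+# The step size alpha, radius eps, and restart count N_r are illustrative.
+import numpy as np
+
+rng = np.random.default_rng(2)
+A = rng.normal(size=(8, 4))                        # dec(z) = A @ [z_par, z_perp]
+w_clf, b_clf = rng.normal(size=8), 0.0             # toy classifier f_theta
+
+def dec(z_par, z_perp):
+    return A @ np.concatenate([z_par, z_perp])
+
+def predict_prob(x):
+    return 1.0 / (1.0 + np.exp(-(w_clf @ x + b_clf)))
+
+def project(z, z0, eps=0.5):
+    """Project z back onto a ball of radius eps around the initial latents z0."""
+    d = z - z0
+    n = np.linalg.norm(d)
+    return z0 + (eps / n) * d if n > eps else z
+
+z_par, y = rng.normal(size=2), 1                   # nominal label-dependent latents
+alpha, K, N_r = 0.1, 20, 5                         # step size, inner steps, restarts
+
+x_adv = None
+for _ in range(N_r):                               # random restarts (outer loop)
+    z_perp0 = rng.normal(size=2)                   # initial label-independent latents
+    z_perp = z_perp0.copy()
+    for _ in range(K):                             # gradient ascent on cross-entropy
+        p = predict_prob(dec(z_par, z_perp))
+        grad = (p - y) * (A[:, 2:].T @ w_clf)      # d CE / d z_perp (analytic, toy case)
+        z_perp = project(z_perp + alpha * grad, z_perp0)
+        if (predict_prob(dec(z_par, z_perp)) > 0.5) != bool(y):   # 0-1 loss triggered
+            x_adv = dec(z_par, z_perp)             # misclassified variant found
+            break
+    if x_adv is not None:
+        break
+
+print("misclassified variant found:", x_adv is not None)
+```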
+
+Random mixing with disentangled representations. While this section describes an instantiation of AdvMix using StyleGAN, it is possible to formulate an equivalent random data augmentation baseline. For an input $x$ , we generate a random variation as follows:
+
+$$
+\tilde{x} = \operatorname{dec}\left(\operatorname{enc}(x)_{\parallel}, \operatorname{map}(\mathbf{z})_{\perp}\right) \quad \text{with} \quad \mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{1}) \tag{17}
+$$
+
+Figure 6. Mean colors given to each digit in the training set of our Color-MNIST case-study.
+
+# 5. Results
+
+In this section, we compare AdvMix to (i) nominal training which minimizes the empirical risk, (ii) Adversarial Training (AT) which minimizes the adversarial risk over $\ell_{\infty}$ -norm bounded perturbations of size $\epsilon$ in input space [19], and (iii) Random Mixing with Disentangled Representations (RandMix) which minimizes the vicinal risk by randomly sampling latents from $\mathcal{Z}_{\perp}$ (rather than systematically finding the worst-case variations). We perform two experiments to assess the generalization abilities of AdvMix. The first experiment is done on an artificially constructed dataset called Color-MNIST (it bears resemblance to the Color-MNIST experiments present in [25]). The second experiment uses CELEBA. Both experiments demonstrate that methods using semantic variations as expressed by a trained StyleGAN model achieve higher accuracy. They also demonstrate that, when the distribution of variations is skewed (i.e., some variations $z_{\perp}$ appear more often than others in the dataset used to train the StyleGAN model), AdvMix obtains higher accuracy than RandMix. For both experiments, we train a truncated VGG network with 5 layers using 5 epochs on Color-MNIST and 20 epochs on CELEBA. We use the Adam [54] optimizer with a learning rate of $10^{-3}$ . AdvMix is trained with $N_{\mathrm{r}}$ set to 5.
+
+# 5.1. Color-MNIST
+
+Color-MNIST consists of a dataset of MNIST [55] digits that are artificially colored to emphasize bias. On the training set, we color each pair $(x,y)$ of the original MNIST dataset with a color drawn randomly from a normal distribution with mean $\mu_y$ and standard deviation $\sigma$ (means $\mu_y$ for $y\in \{0,\dots ,9\}$ are shown in Figure 6). On the test set, we color digits uniformly at random. In other words, the colors present in the training set spuriously correlate with the label. We can use $\sigma$ to affect this correlation: by progressively increasing $\sigma$ the dataset becomes less biased. For all techniques (including mixup), we vary the level of bias and train models using 5 epochs. The StyleGAN model is trained on the training set only, once for each setting of $\sigma$ . The disentangled latents defining the finer style correspond to the final resolution of $32\times 32$ .
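+
+A minimal sketch of this coloring procedure is given below, assuming placeholder per-class mean colors (the actual colors of Figure 6 are not reproduced here).
+
+```python
+# Minimal sketch of the Color-MNIST construction: each digit is tinted with a
+# color drawn from N(mu_y, sigma) at training time and uniformly at random at
+# test time. The per-class mean colors MU are placeholders, not those of Fig. 6.
+import numpy as np
+
+rng = np.random.default_rng(3)
+MU = rng.uniform(0.2, 1.0, size=(10, 3))           # placeholder per-class mean colors
+
+def colorize(digit, label, sigma, train=True):
+    """digit: (28, 28) grayscale in [0, 1]; returns a (28, 28, 3) colored image."""
+    if train:                                       # biased: color correlates with label
+        color = np.clip(MU[label] + sigma * rng.normal(size=3), 0.0, 1.0)
+    else:                                           # unbiased test set: random color
+        color = rng.uniform(0.0, 1.0, size=3)
+    return digit[..., None] * color                 # tint the strokes
+
+fake_digit = (rng.uniform(size=(28, 28)) > 0.8).astype(float)
+biased = colorize(fake_digit, label=3, sigma=0.05, train=True)
+unbiased = colorize(fake_digit, label=3, sigma=0.05, train=False)
+print(biased.shape, unbiased.shape)
+```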
+
+
+Figure 7. Accuracy of different training methods on images from our unbiased Color-MNIST test set. The training set is progressively debiased by increasing the standard deviation of the colors present.
+
+Table 1. Effect of bias when training a StyleGAN model on our Color-MNIST dataset. We report test accuracy on clean images.
+
+| Method | Unbiased | Less biased | More biased |
+| --- | --- | --- | --- |
+| RandMix | 99.11% | 98.87% | 97.63% |
+| AdvMix | 99.19% | 99.07% | 98.79% |
+
+Figure 7 shows the results. Across all settings, RandMix and AdvMix outperform the other methods. As expected, the gap between all methods decreases as the training set becomes less biased. It is also worth noting that AT is useful (compared to nominal training and mixup) as on this dataset $\ell_{\infty}$ -norm bounded perturbations allow the exploration of slight variations in colors. RandMix and AdvMix are both expected to do well as all variations $z_{\perp}$ (that correspond to applications of different colors) are equally likely to be drawn from the StyleGAN model (since they are uniformly distributed in the training set).
+
+To further emphasize the difference between RandMix and AdvMix, we purposefully bias the training of the StyleGAN model. We create two additional datasets (with $\sigma = 0$ ). With the first dataset (named "more biased"), the StyleGAN model is trained on a large fraction of zeros (and few other digits), while on the second dataset (named "less biased"), the StyleGAN model is trained on a large fraction of zeros and ones. As a result, rarely occurring variations (colors of digits from 1 to 9 for the first dataset and colors of digits from 2 to 9 for the second) are less likely to be randomly selected by RandMix. Table 1 shows the results. We observe that AdvMix performs better. However, we note that the gap is not large, as all color variations contain red, green and blue components (which allows the network to implicitly learn about other color combinations).
+
+Finally, to create a stronger effect, we limit the digits to red, green and blue colors only (resulting in new datasets), and use a linear classifier (instead of a truncated VGG network).
+
+Table 2. Effect of bias when training a StyleGAN model on our RGB Color-MNIST dataset (limited to red, blue or green colors). The classifier is a linear model (instead of a convolutional network). We report test accuracy on clean images.
+
+| Method | Unbiased | 99% red (less biased) | 99.9% red (more biased) |
+| --- | --- | --- | --- |
+| RandMix | 88.55% | 83.18% | 53.56% |
+| AdvMix | 85.07% | 85.02% | 85.00% |
+
+Table 2 demonstrates that, when the StyleGAN model is trained with a significant proportion of red digits, AdvMix does much better. Indeed, AdvMix is able to systematically find the corner cases (i.e., green and blue variations) that are currently misclassified rather than relying on the random sampling of such cases. We note that adversarial training can result in unstable learning, which can explain why RandMix does slightly better when the StyleGAN model is unbiased.
+
+# 5.2. CELEBA
+
+CELEBA [56] is a large-scale public dataset with forty different face attribute annotations including whether a person smiles or wears a hat. We make no modifications to the dataset and use a pretrained StyleGAN model. For all techniques, we train models using 20 epochs. We evaluate all methods on their ability to classify the "smiling" attribute, as well as three other attributes. In this experiment, the disentangled latents defining the finer style correspond to resolutions ranging from $128 \times 128$ to $1024 \times 1024$ .
+
+In Table 3, we observe that AdvMix is the only method that systematically achieves high accuracy. This clearly demonstrates that AdvMix can lead to a lower generalization error. It is also interesting to see that RandMix does not always improve on nominal training and that AT consistently trades off clean accuracy for $\ell_{\infty}$ -robustness (as seen in [23]). Finally, Figure 8 shows qualitative examples of images that are all correctly classified by the nominal model, but for which we can find plausible variants that are misclassified. Appendix B shows more results and includes other data augmentation schemes.
+
+Overall, these results confirm the observations made on the Color-MNIST dataset. They seem to indicate that there is a slight distributional shift between CELEBA's train and test sets (at least when it comes to the finer image style). By systematically probing variations that are difficult to classify, AdvMix is able to overcome this shift and reach better classification accuracy (contrary to RandMix, which can only stumble on difficult variants by chance).
+
+Table 3. Test accuracy on different classification tasks of the CELEBA dataset.
+
+| Method | Attribute #1 | #2 (smiling) | #3 | #4 |
+| --- | --- | --- | --- | --- |
+| Nominal | 96.49% | 90.22% | 83.52% | 78.05% |
+| AT (ε = 4/255) | 95.34% | 91.11% | 81.43% | 76.61% |
+| AT (ε = 8/255) | 95.22% | 89.29% | 79.46% | 74.39% |
+| RandMix | 96.70% | 90.36% | 84.49% | 76.41% |
+| AdvMix | 97.56% | 92.29% | 85.65% | 79.47% |
+
+Figure 8. The top row shows examples of clean images from CELEBA that are all classified correctly by the nominal model. The bottom row shows semantically plausible variants of these images that are all misclassified.
+
+# 6. Conclusion
+
+We have demonstrated a novel approach to achieving robustness to input variations encountered in the real world by generating adversarial instances that compose disentangled representations. We have shown how this framework can be realized by leveraging the StyleGAN architecture – resulting in models that are not only robust under a systematic evaluation of their insensitivity to variations but also exhibit better generalization, demonstrating that accuracy is not necessarily at odds with robustness. Our formulation relies on good generative models that can learn a disentangled representation in which some directions are orthogonal to the label we are trying to predict. Methods such as AdvMix are intended to be used to reduce the effect of bias and spurious correlations on classifiers. We hope the promising results shown in this paper encourage the development of more effective disentangled representations that cover most factors of variations encountered in the real world. Finally, we hope this work leads to the exploration of this paradigm in the context of other Computer Vision applications and to the development of robust perception systems that can be safely used in the real world.
+
+# References
+
+[1] V. Vapnik, "Statistical learning theory," 1998. 1
+[2] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016. [Online]. Available: http://www.deeplearningbook.org 1
+[3] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," in Advances in neural information processing systems, 2012, pp. 1097-1105.
+[4] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and others, "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups," IEEE Signal processing magazine, vol. 29, no. 6, pp. 82-97, 2012. 1
+[5] K. D. Julian, J. Lopez, J. S. Brush, M. P. Owen, and M. J. Kochenderfer, "Policy compression for aircraft collision avoidance systems," in IEEE/AIAA 35th Digital Avionics Systems Conference (DASC). IEEE, 2016, pp. 1-10. 1
+[6] A. Torralba, A. A. Efros et al., "Unbiased look at dataset bias." in CVPR, vol. 1, no. 2. Citeseer, 2011, p. 7. 1
+[7] A. Kuehlkamp, B. Becker, and K. Bowyer, "Gender-from-iris or gender-from-mascara?" in 2017 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2017, pp. 1151-1159. 1
+[8] B. Recht, R. Roelofs, L. Schmidt, and V. Shankar, “Do imagenet classifiers generalize to imagenet?” arXiv preprint arXiv:1902.10811, 2019. 1
+[9] D. Hendrycks, K. Zhao, S. Basart, J. Steinhardt, and D. Song, “Natural adversarial examples,” arXiv preprint arXiv:1907.07174, 2019. 1
+[10] I. Vasiljevic, A. Chakrabarti, and G. Shakhnarovich, "Examining the impact of blur on recognition by convolutional networks," arXiv preprint arXiv:1611.05760, 2016. 1
+[11] R. Geirhos, C. R. Temme, J. Rauber, H. H. Schütt, M. Bethge, and F. A. Wichmann, “Generalisation in humans and deep neural networks,” in Advances in Neural Information Processing Systems, 2018, pp. 7538–7550. 1
+[12] H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz, "mixup: Beyond empirical risk minimization," arXiv preprint arXiv:1710.09412, 2017. 1, 3
+[13] E. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, and Q. V. Le, "Autoaugment: Learning augmentation policies from data," arXiv preprint arXiv:1805.09501, 2018. 1
+[14] N. Carlini and D. Wagner, "Adversarial examples are not easily detected: Bypassing ten detection methods," in Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. ACM, 2017, pp. 3-14. 2
+[15] ——, “Towards evaluating the robustness of neural networks,” in 2017 IEEE Symposium on Security and Privacy. IEEE, 2017, pp. 39-57.
+[16] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014. 2
+[17] A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial machine learning at scale,” arXiv preprint arXiv:1611.01236, 2016.
+
+[18] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, "Intriguing properties of neural networks," arXiv preprint arXiv:1312.6199, 2013. 2
+[19] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, "Towards deep learning models resistant to adversarial attacks," arXiv preprint arXiv:1706.06083, 2017. 2, 3, 7
+[20] N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami, "Distillation as a defense to adversarial perturbations against deep neural networks," arXiv preprint arXiv:1511.04508, 2015. 2
+[21] H. Kannan, A. Kurakin, and I. Goodfellow, "Adversarial Logit Pairing," arXiv preprint arXiv:1803.06373, 2018.
+[22] C. Xie, Y. Wu, L. van der Maaten, A. Yuille, and K. He, “Feature denoising for improving adversarial robustness,” arXiv preprint arXiv:1812.03411, 2018. 2
+[23] A. Ilyas, S. Santurkar, D. Tsipras, L. Engstrom, B. Tran, and A. Madry, "Adversarial examples are not bugs, they are features," arXiv preprint arXiv:1905.02175, 2019. 2, 4, 8
+[24] T. Karras, S. Laine, and T. Aila, “A style-based generator architecture for generative adversarial networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 4401–4410. 2, 5
+[25] M. Arjovsky, L. Bottou, I. Gulrajani, and D. Lopez-Paz, "Invariant risk minimization," arXiv preprint arXiv:1907.02893, 2019. 2, 3, 4, 5, 7
+[26] D. Tsipras, S. Santurkar, L. Engstrom, A. Turner, and A. Madry, "Robustness may be at odds with accuracy," arXiv preprint arXiv:1805.12152, 2018. 2
+[27] A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial examples in the physical world,” arXiv preprint arXiv:1607.02533, 2016. 2
+[28] S.-M. Moosavi-Dezfooli, A. Fawzi, J. Uesato, and P. Frossard, "Robustness via curvature regularization, and vice versa," arXiv preprint arXiv:1811.09716, 2018. 2
+[29] H. Zhang, Y. Yu, J. Jiao, E. P. Xing, L. E. Ghaoui, and M. I. Jordan, "Theoretically principled trade-off between robustness and accuracy," arXiv preprint arXiv:1901.08573, 2019. 2
+[30] C. Qin, J. Martens, S. Gowal, D. Krishnan, A. Fawzi, S. De, R. Stanforth, P. Kohli et al., "Adversarial robustness through local linearization," arXiv preprint arXiv:1907.02610, 2019. 2
+[31] L. Engstrom, B. Tran, D. Tsipras, L. Schmidt, and A. Madry, "A rotation and a translation suffice: Fooling cnns with simple transformations," arXiv preprint arXiv:1712.02779, 2017. 2
+[32] C. Kanbak, S.-M. Moosavi-Dezfooli, and P. Frossard, "Geometric robustness of deep networks: analysis and improvement," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4441-4449. 2
+[33] S. Baluja and I. Fischer, “Adversarial transformation networks: Learning to generate adversarial examples,” arXiv preprint arXiv:1703.09387, 2017. 2
+[34] Y. Song, R. Shu, N. Kushman, and S. Ermon, "Constructing unrestricted adversarial examples with generative models," in Advances in Neural Information Processing Systems, 2018, pp. 8312-8323. 2
+[35] C. Xiao, B. Li, J.-Y. Zhu, W. He, M. Liu, and D. Song, "Generating adversarial examples with adversarial networks," arXiv preprint arXiv:1801.02610, 2018. 2
+
+[36] A. Odena, C. Olah, and J. Shlens, "Conditional image synthesis with auxiliary classifier gans," in Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR.org, 2017, pp. 2642-2651. 2
+[37] H. Qiu, C. Xiao, L. Yang, X. Yan, H. Lee, and B. Li, "Semanticadv: Generating adversarial examples via attribute-conditional image editing," arXiv preprint arXiv:1906.07927, 2019. 2
+[38] A. Jalal, A. Ilyas, C. Daskalakis, and A. G. Dimakis, “The robust manifold defense: Adversarial training using generative models,” arXiv preprint arXiv:1712.09196, 2017. 2
+[39] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778. 2
+[40] T. DeVries and G. W. Taylor, "Improved regularization of convolutional neural networks with cutout," arXiv preprint arXiv:1708.04552, 2017. 2
+[41] S. Yun, D. Han, S. J. Oh, S. Chun, J. Choe, and Y. Yoo, "Cutmix: Regularization strategy to train strong classifiers with localizable features," arXiv preprint arXiv:1905.04899, 2019. 2
+[42] V. Verma, A. Lamb, C. Beckham, A. Najafi, I. Mitliagkas, A. Courville, D. Lopez-Paz, and Y. Bengio, "Manifold mixup: Better representations by interpolating hidden states," arXiv preprint arXiv:1806.05236, 2018. 3, 4
+[43] C. Heinze-Deml and N. Meinshausen, “Conditional variance penalties and domain shift robustness,” arXiv preprint arXiv:1710.11469, 2017. 3
+[44] I. Higgins, D. Amos, D. Pfau, S. Racaniere, L. Matthey, D. Rezende, and A. Lerchner, “Towards a Definition of Disentangled Representations,” arXiv e-prints, p. arXiv:1812.02230, Dec 2018. 3
+[45] C. Fefferman, S. Mitter, and H. Narayanan, “Testing the manifold hypothesis,” Journal of the American Mathematical Society, vol. 29, no. 4, pp. 983–1049, 2016. 4
+[46] X. Huang and S. Belongie, “Arbitrary style transfer in real-time with adaptive instance normalization,” in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 1501–1510. 5
+[47] E. Nalisnick, A. Matsukawa, Y. W. Teh, and B. Lakshminarayanan, "Detecting out-of-distribution inputs to deep generative models using a test for typicality," arXiv preprint arXiv:1906.02994, 2019. 5
+[48] T. M. Cover and J. A. Thomas, Elements of information theory. John Wiley & Sons, 2012. 5
+[49] R. Vershynin, High-dimensional probability: An introduction with applications in data science. Cambridge University Press, 2018, vol. 47. 5
+[50] R. Abdal, Y. Qin, and P. Wonka, “Image2stylegan: How to embed images into the stylegan latent space?” arXiv preprint arXiv:1904.03189, 2019. 6
+[51] J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in European conference on computer vision. Springer, 2016, pp. 694–711. 6
+[52] A. Dosovitskiy and T. Brox, "Generating images with perceptual similarity metrics based on deep networks," in Advances in neural information processing systems, 2016, pp. 658-666. 6
+[53] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014. 6
+[54] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014. 7
+[55] Y. LeCun and C. Cortes, “MNIST handwritten digit database,” 2010. [Online]. Available: http://yann.lecun.com/exdb/mnist/ 7
+[56] Z. Liu, P. Luo, X. Wang, and X. Tang, “Deep learning face attributes in the wild,” in Proceedings of the IEEE international conference on computer vision, 2015, pp. 3730–3738. 8
+[57] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, and others, "Tensorflow: a system for large-scale machine learning," in OSDI, vol. 16, 2016, pp. 265-283. 15
\ No newline at end of file
diff --git a/achievingrobustnessinthewildviaadversarialmixingwithdisentangledrepresentations/images.zip b/achievingrobustnessinthewildviaadversarialmixingwithdisentangledrepresentations/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..3d01c3a3bf64fcf52cd8b923d4cb96a2d8e7643e
--- /dev/null
+++ b/achievingrobustnessinthewildviaadversarialmixingwithdisentangledrepresentations/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:23db475651f81162ff3546a0ea0eda3e815a2d5bce1473dac7e57ed9028c2c62
+size 342934
diff --git a/achievingrobustnessinthewildviaadversarialmixingwithdisentangledrepresentations/layout.json b/achievingrobustnessinthewildviaadversarialmixingwithdisentangledrepresentations/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..991aabd99564e78e61fd85cbb559a15a9846f85b
--- /dev/null
+++ b/achievingrobustnessinthewildviaadversarialmixingwithdisentangledrepresentations/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:232d1aad80fc53cdfb82441b39f8ca835236dedbd6500d91490bb546e86e7bb5
+size 605609
diff --git a/acneattentivecontextnormalizationforrobustpermutationequivariantlearning/a7dc8fab-d5ca-4973-8a26-4820902b74f7_content_list.json b/acneattentivecontextnormalizationforrobustpermutationequivariantlearning/a7dc8fab-d5ca-4973-8a26-4820902b74f7_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..f4d95073b3fc03bdd2b3f44981a38360ead2da90
--- /dev/null
+++ b/acneattentivecontextnormalizationforrobustpermutationequivariantlearning/a7dc8fab-d5ca-4973-8a26-4820902b74f7_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:df4413be24e56b7c6bdcae83200228f4f7e9cd911d136ac70eae77e25643ad5d
+size 86171
diff --git a/acneattentivecontextnormalizationforrobustpermutationequivariantlearning/a7dc8fab-d5ca-4973-8a26-4820902b74f7_model.json b/acneattentivecontextnormalizationforrobustpermutationequivariantlearning/a7dc8fab-d5ca-4973-8a26-4820902b74f7_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..0cda94477ec68953433e0b43c5fb799aacdbf7f5
--- /dev/null
+++ b/acneattentivecontextnormalizationforrobustpermutationequivariantlearning/a7dc8fab-d5ca-4973-8a26-4820902b74f7_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1e0c9aad5d2681fea0090c7c852ad4d191be565cf762d2cb651d7af2ae9278ec
+size 106446
diff --git a/acneattentivecontextnormalizationforrobustpermutationequivariantlearning/a7dc8fab-d5ca-4973-8a26-4820902b74f7_origin.pdf b/acneattentivecontextnormalizationforrobustpermutationequivariantlearning/a7dc8fab-d5ca-4973-8a26-4820902b74f7_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..97817ce856db5cb83d24e24246e7c906e19f78d9
--- /dev/null
+++ b/acneattentivecontextnormalizationforrobustpermutationequivariantlearning/a7dc8fab-d5ca-4973-8a26-4820902b74f7_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a7cf9dfb9bf9cd016bcdd0c1aa6d747611463548f885c3eecdeddda34b47ca8d
+size 640786
diff --git a/acneattentivecontextnormalizationforrobustpermutationequivariantlearning/full.md b/acneattentivecontextnormalizationforrobustpermutationequivariantlearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..722417a311b4c8b09e49108a488eb9d4695c9bf9
--- /dev/null
+++ b/acneattentivecontextnormalizationforrobustpermutationequivariantlearning/full.md
@@ -0,0 +1,342 @@
+# ACNe: Attentive Context Normalization for Robust Permutation-Equivariant Learning
+
+Weiwei Sun1 Wei Jiang1 Eduard Trulls2 Andrea Tagliasacchi3 Kwang Moo Yi1
+1University of Victoria 2Google Research, Zurich 3Google Research, Toronto
+{weiweisun, jiangwei, kyi}@uvic.ca {trulls, taglia}@google.com
+
+# Abstract
+
+Many problems in computer vision require dealing with sparse, unordered data in the form of point clouds. Permutation-equivariant networks have become a popular solution - they operate on individual data points with simple perceptrons and extract contextual information with global pooling. This can be achieved with a simple normalization of the feature maps, a global operation that is unaffected by the order. In this paper, we propose Attentive Context Normalization (ACN), a simple yet effective technique to build permutation-equivariant networks robust to outliers. Specifically, we show how to normalize the feature maps with weights that are estimated within the network, excluding outliers from this normalization. We use this mechanism to leverage two types of attention: local and global - by combining them, our method is able to find the essential data points in high-dimensional space to solve a given task. We demonstrate through extensive experiments that our approach, which we call Attentive Context Networks (ACNe), provides a significant leap in performance compared to the state-of-the-art on camera pose estimation, robust fitting, and point cloud classification under noise and outliers. Source code: https://github.com/vcg-uvic/acne.
+
+# 1. Introduction
+
+Several problems in computer vision require processing sparse, unordered collections of vectors $\mathcal{P} = \{\mathbf{p}_n\in \mathbb{R}^D\}$ , commonly called clouds. Examples include pixel locations $(D = 2)$ , point clouds from depth sensors $(D = 3)$ , and sparse correspondences across a pair of images $(D = 4)$ . The latter includes wide-baseline stereo, one of the fundamental problems in computer vision. It lies at the core of Structure-from-Motion (SfM), which, in turn, is the building block of applications such as 3D reconstruction [1], image-based rendering [43] and time-lapse smoothing [28].
+
+Wide-baseline stereo has been traditionally solved by extracting small collections of discrete keypoints [31] and finding correspondences among them with robust estimators [16], a reliable approach used for well over two decades. This has changed over the past few years, with the arrival of deep learning and an abundance of new dense [57, 47, 60] and sparse [55, 12, 39, 59, 27] methods. Here, we focus on sparse methods, which have seen many recent developments made possible by the introduction of PointNets [36, 37] – neural networks that rely on multi-layer perceptrons and global pooling to process unordered data in a permutation-equivariant manner – something which is not feasible with either convolutional or fully-connected layers.
+
+Networks of this type - hereafter referred to as permutation-equivariant networks - have pioneered the application of deep learning to point clouds. The original PointNet relied on the concatenation of point-wise (context-agnostic) and global (point-agnostic) features to achieve permutation equivariance. Yi et al. [55] proposed Context Normalization (CN) as a simple, yet effective alternative to global feature pooling: all it requires is a non-parametric normalization of the feature maps to zero mean and unit variance. Contrary to other normalization techniques utilized by neural networks [22, 2, 46, 51], whose primary objective is to improve convergence, context normalization is used to generate contextual information while preserving permutation equivariance. Despite its simplicity, it proved more effective than the PointNet approach on wide-baseline stereo, contributing to a relative increase in pose estimation accuracy of $50 - 100\%$ ; see [55, Fig. 5].
+
+Note that CN normalizes the feature maps according to first- (mean) and second- (variance) order moments. Interestingly, these two quantities can be expressed as the solution of a least-squares problem:
+
+$$
+\hat{\boldsymbol{\mu}} = \underset{\boldsymbol{\mu}}{\operatorname{argmin}} \sum_{n} \| \mathbf{p}_{n} - \boldsymbol{\mu} \|_{2}^{2} \tag{1}
+$$
+
+$$
+\hat{\boldsymbol{\sigma}} = \underset{\boldsymbol{\sigma}}{\operatorname{argmin}} \sum_{n} \left\| \left( \mathbf{p}_{n} - \hat{\boldsymbol{\mu}} \right)^{\circ 2} - \boldsymbol{\sigma}^{\circ 2} \right\|_{2}^{2} \tag{2}
+$$
+
+However, it is well known that least-squares optimization is not robust to outliers [6, Sec. 3], a problem that also afflicts CN. We illustrate this limitation in Fig. 1, where the toy task is to fit a line to data corrupted by outliers.
+
+Figure 1. Robust neural line fitting - We learn to fit lines with outliers (80%) via our ACNe, as well as CNe [55]. We visualize the ground truth and the network estimates. We color-code the weights learned by the k-th residual layer of ACNe and used to normalize the feature maps - notice that our method, which mimics Iterative Re-weighted Least Squares (IRLS), learns to progressively focus its attention on the inliers. This allows ACNe to find the correct solution where CNe fails.
+
+Note that this is a critical weakness, as the application CN was originally devised for, wide-baseline stereo, is a problem plagued by outliers: outlier ratios above $80\%$ are typical in standard public datasets; see Section 4.3.
+
+To address this issue, we take inspiration from a classical technique used in robust optimization: Iteratively Re-weighted Least Squares (IRLS) [8]. As an example, let us consider the computation of the first-order moment (1). Rather than using the square of the residuals, we can optimize with respect to a robust kernel $\kappa$ that allows outliers to be ignored:
+
+$$
+\underset{\boldsymbol{\mu}}{\operatorname{argmin}} \sum_{n} \kappa\left( \left\| \mathbf{p}_{n} - \boldsymbol{\mu} \right\|_{2} \right), \tag{3}
+$$
+
+which can then be converted back into an iterative least-squares optimization ( $t$ indexes iterations):
+
+$$
+\underset{\boldsymbol{\mu}^{t}}{\operatorname{argmin}} \sum_{n} \underbrace{\psi\left( \left\| \mathbf{p}_{n} - \boldsymbol{\mu}^{t-1} \right\|_{2} \right)^{-1}}_{\text{attention } w_{n}^{t}} \left\| \mathbf{p}_{n} - \boldsymbol{\mu}^{t} \right\|_{2}^{2}, \tag{4}
+$$
+
+where $\psi(\cdot)$ is the penalty function associated with the kernel $\kappa(\cdot)$ ; see [33, 17]. Inspired by this, we design a network that learns to progressively focus its attention on the inliers, operating analogously to $\psi(\cdot)$ over the IRLS iterations.
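+
+For reference, the following sketch runs this IRLS scheme for the first-order moment, using Welsch-style weights $w_n^t = \exp(-r_n^2/c^2)$ as one possible choice of the weight function $\psi(\cdot)^{-1}$ ; the kernel and its constant are illustrative assumptions.
+
+```python
+# Minimal sketch of IRLS for the first-order moment (Equations 3-4), using the
+# Welsch weight w_n = exp(-r_n^2 / c^2) as one possible choice of psi(.)^{-1}.
+# The kernel and its constant c are illustrative assumptions.
+import numpy as np
+
+rng = np.random.default_rng(4)
+inliers = rng.normal(loc=[2.0, -1.0], scale=0.1, size=(80, 2))
+outliers = rng.uniform(-10.0, 10.0, size=(20, 2))
+P = np.vstack([inliers, outliers])                 # 80% inliers, 20% outliers
+
+mu = P.mean(axis=0)                                # least-squares estimate (Eq. 1)
+c = 1.0
+for _ in range(20):                                # IRLS iterations (Eq. 4)
+    r = np.linalg.norm(P - mu, axis=1)             # residuals w.r.t. current estimate
+    w = np.exp(-(r / c) ** 2)                      # attention-like weights w_n^t
+    mu = (w[:, None] * P).sum(axis=0) / w.sum()    # weighted least-squares update
+
+print("plain mean:", P.mean(axis=0))
+print("robust mean:", mu)                          # should land near [2, -1]
+```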
+
+Specifically, we propose to train a perceptron that translates the (intermediate) feature maps into their corresponding attention weights, and normalizes them accordingly. We denote this approach as Attentive Context Normalization (ACN), and the networks that rely on this mechanism Attentive Context Networks (ACNe). We consider two types of attention, one that operates on each data point individually (local), and one that estimates the relative importance of data points (global), and demonstrate that using them together yields the best performance. We also evaluate the effect of supervising this attention mechanism when possible. We verify the effectiveness of our method on (1) robust line fitting, (2) classification of 2D and 3D point clouds, and (3) wide-baseline stereo on real-world datasets (outdoors and indoors), showing significant improvements over the state of the art. Our work is, to the best of our knowledge, the first to apply attentive mechanisms to the normalization of feature maps. One can also apply a more common form of attention by operating directly on feature maps [49, 15], but we demonstrate that this does not perform as effectively.
+
+# 2. Related work
+
+We discuss recent works on deep networks operating on point clouds, review various normalization methods for deep networks, and briefly discuss attention mechanisms.
+
+Deep networks for point clouds. Several methods have been proposed to process point cloud data with neural networks. These include graph convolutional networks [13, 26], VoxelNets [61], tangent convolutions [44], and many others. A simpler strategy was introduced by PointNets [36, 37], which has since become a popular solution due to its simplicity. At their core, they are convolutional neural networks with $1 \times 1$ kernels and global pooling operations. Enhancements to the PointNet architecture include incorporating locality information with kernel correlation [42], and contextual information with LSTMs [30]. Another relevant work is Deep Sets [56], which derives neural network parameterizations that guarantee permutation-equivariance.
+
+Permutation-equivariant networks for stereo. While PointNets were originally introduced for segmentation and classification of 3D point clouds, Yi et al. [55] demonstrated that they can also be highly effective for robust matching in stereo, showing a drastic leap in performance against hand-crafted methods [16, 45, 5]. The core ingredient of Yi et al. [55] is Context Normalization (CN), an alternative to global feature pooling from PointNets. While similar to other normalization techniques for deep networks [22, 2, 46, 51], CN has a different role – to aggregate point-wise feature maps and generate contextual information. Follow-ups to CN include the use of architectures similar to Yi et al. [55] to iteratively estimate fundamental matrices [39], novel loss formulations [12], and the modeling of locality [59]. In OANet [58], order-aware filtering was utilized to incorporate context and spatial correlation. While all of these works rely on “vanilla” CN, we show how to improve its performance by embedding an attention mechanism therein. Our improvements are compatible with any of these techniques.
+
+Normalization in deep networks. In addition to CN, different strategies have been proposed to normalize feature maps in a deep network, starting with the seminal work of Batch Normalization [22], which proposed to normalize the feature maps over a mini-batch. Layer Normalization [2] transposed this operation by looking at all channels for a single sample in the batch, whereas Group Normalization [51] applied it over subsets of channels. Further efforts have proposed to normalize the weights instead of the activations [41], or their eigenvalues [34]. The main use of all these normalization techniques is to stabilize the optimization process and speed up convergence. By contrast, Instance Normalization [46] proposed to normalize individual image samples for style transfer, and was improved upon in [21] by aligning the mean and standard deviation of content and style. Regardless of the specifics, all of these normalization techniques operate on the entire sample – in other words, they do not consider the presence of outliers or their statistics. While this is not critical in image-based pipelines, it can be extremely harmful for point clouds; see Fig. 1.
+
+Attentional methods. The core idea behind attention mechanisms is to focus on the crucial parts of the input. There are different forms of attention, and they have been applied to a wide range of machine learning problems, from natural language processing to images. Vaswani et al. [48] proposed an attentional model for machine translation eschewing recurrent architectures. Luong et al. [32] blended two forms of attention on sequential inputs, demonstrating performance improvements in text translation. Xu et al. [54] showed how to employ soft and hard attention to gaze on salient objects and generate automated image captions. Local response normalization has been used to find salient responses in feature maps [24, 29], and can be interpreted as a form of lateral inhibition [19]. The use of attention in convolutional deep networks was pioneered by Spatial Transformer Networks [23], which introduced a differentiable sampler that allows for spatial manipulation of the image. In [53], attention is directly applied to the feature map, given by a PointNet-style network operating on point clouds. However, this strategy does not work as well as ours for wide-baseline stereo; see Section B in the supplementary material.
+
+# 3. Attentive Context Normalization
+
+Given a feature map $\mathbf{f} \in \mathbb{R}^{N \times C}$ , where $N$ is the number of features (or data points at layer zero), $C$ is the number of channels, and each row corresponds to a data point, we recall that Context Normalization [55] is a non-parametric operation that can be written as
+
+$$
+\mathcal{N}_{\mathrm{CN}}(\mathbf{f}) = \left( \mathbf{f} - \mu(\mathbf{f}) \right) \oslash \sigma(\mathbf{f}), \tag{5}
+$$
+
+where $\mu (\mathbf{f}) = \mathbb{E}[\mathbf{f}]$ is the arithmetic mean, $\sigma (\mathbf{f}) = \sqrt{\mathbb{E}[(\mathbf{f} - \mathbb{E}[\mathbf{f}])^{\circ 2}]}$ is the standard deviation of the features across $N$ , and $\oslash$ denotes element-wise division. Here we assume a single cloud, but generalizing to multiple clouds (i.e. a batch) is straightforward. Note that to preserve the properties of unstructured clouds, the information in the feature maps needs to be normalized in a permutation-equivariant way. We extend CN by introducing a weight vector $\mathbf{w} \in [0, 1]^N$ , and indicate with $\mu_{\mathbf{w}}(\cdot)$ and $\sigma_{\mathbf{w}}(\cdot)$ the corresponding weighted mean and standard deviation. In contrast to Context Normalization, we compute the weights $\mathbf{w}$ with a parametric function $\mathcal{W}_{\omega}(\cdot)$ with trainable parameters $\omega$ that takes as input the feature map, and returns a unit-norm vector of weights:
+
+$$
+\mathbf {w} = \eta (\mathcal {W} _ {\omega} (\mathbf {f})) , \quad \eta (\mathbf {x}) = \mathbf {x} / \| \mathbf {x} \| _ {1}. \tag {6}
+$$
+
+We then define Attentive Context Normalization as
+
+$$
+\mathcal{N}_{\mathrm{ACN}}(\mathbf{f}; \mathbf{w}) = \left( \mathbf{f} - \mu_{\mathbf{w}}(\mathbf{f}) \right) \oslash \sigma_{\mathbf{w}}(\mathbf{f}). \tag{7}
+$$
+
+The purpose of the attention network $\mathcal{W}_{\omega}(\cdot)$ is to compute a weight function that focuses the normalization of the feature maps on a subset of the input features - the inliers. As a result, the network can learn to effectively cluster the features, and therefore separate inliers from outliers.
+
+There are multiple attention functions that we can design, and multiple ways to combine them into a single attention vector $\mathbf{w}$ . We will now describe those that we found effective for finding correspondences in wide-baseline stereo, and how to combine and supervise them effectively.
+
+Generating attention. We leverage two different types of attention mechanisms, local and global:
+
+$$
+\mathbf{w}_{i}^{\text{local}} = \mathcal{W}_{\omega}^{\text{local}}(\mathbf{f}_{i}) = \operatorname{sigmoid}\left( \mathbf{W} \mathbf{f}_{i}^{\top} + \mathbf{b} \right), \tag{8}
+$$
+
+$$
+\mathbf{w}_{i}^{\text{global}} = \mathcal{W}_{\omega}^{\text{global}}(\mathbf{f}_{i}) = \frac{\exp\left( \mathbf{W} \mathbf{f}_{i}^{\top} + \mathbf{b} \right)}{\sum_{j=1}^{N} \exp\left( \mathbf{W} \mathbf{f}_{j}^{\top} + \mathbf{b} \right)}, \tag{9}
+$$
+
+where $\mathbf{W}$ and $\mathbf{b}$ are the parameters of a perceptron, and $\mathbf{f}_i$ denotes the feature vector for data point $i$ - the $i$ -th row of the feature map $\mathbf{f}$ . Observe that the local attention mechanism (8) acts on each feature vector independently, whereas the global attention mechanism (9) relates the feature vector for each data point to the collection through softmax.
+
+Blending attention. Note that the product does not change the scale of the normalization applied in (7). Therefore, to take into account multiple types of attention simultaneously, we simply merge them through element-wise multiplication. One could use a parametric form of attention blending instead; however, it is non-trivial to combine the weights in a permutation-equivariant way, and we found this simple strategy effective.
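+
+The following sketch spells out a single ACN forward pass under these definitions, with toy perceptron parameters standing in for the trained attention networks; Group Normalization and the residual structure of Fig. 2 are omitted.
+
+```python
+# Minimal sketch of one ACN forward pass: local (Eq. 8) and global (Eq. 9)
+# attention, blended by element-wise multiplication, L1-normalized as in Eq. (6),
+# and used for the weighted normalization of Eq. (7). W_l, b_l, W_g, b_g are toy
+# stand-ins for the trained attention perceptrons.
+import numpy as np
+
+rng = np.random.default_rng(5)
+N, C = 64, 128
+f = rng.normal(size=(N, C))                        # feature map, one row per point
+
+W_l, b_l = 0.01 * rng.normal(size=C), 0.0          # toy local-attention perceptron
+W_g, b_g = 0.01 * rng.normal(size=C), 0.0          # toy global-attention perceptron
+
+w_local = 1.0 / (1.0 + np.exp(-(f @ W_l + b_l)))   # Eq. (8): per-point sigmoid
+logits = f @ W_g + b_g
+w_global = np.exp(logits - logits.max())
+w_global /= w_global.sum()                         # Eq. (9): softmax over the cloud
+
+w = w_local * w_global                             # blend the two attentions
+w /= np.abs(w).sum()                               # Eq. (6): unit L1 norm
+
+mu_w = (w[:, None] * f).sum(axis=0)                # weighted mean over the N points
+var_w = (w[:, None] * (f - mu_w) ** 2).sum(axis=0)
+f_acn = (f - mu_w) / np.sqrt(var_w + 1e-8)         # Eq. (7): weighted normalization
+
+print(f_acn.shape)                                 # (64, 128), permutation-equivariant
+```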
+
+
+Figure 2. ACNe architecture – (Left) Our permutation-equivariant network receives an input tensor $\mathbf{P}$ of size $N\times D$ , which is processed by a series of $K$ Attentive Residual Blocks (ARB). The output of the network is a tensor $\mathbf{O}$ of size $N\times C$ , which is then converted to a representation appropriate for the task at hand. Note that the first perceptron $\mathcal{F}_{\varphi}^{\mathrm{in}}$ changes the dimensionality from $\mathbf{P}$ of size $N\times D$ (input dimensions) to features $\mathbf{f}$ of size $N\times C$ . (Middle) Within each residual path of the ARB, we manipulate the feature map with perceptrons $\mathcal{F}_{\varphi}$ with parameters $\varphi$ , followed by Attentive Context Normalization (ACN) – we repeat this structure twice. (Right) An ACN module computes local/global attention with two trainable networks, combines them via element-wise multiplication, and normalizes the feature maps with said weights – the $\mathcal{N}_{\mathrm{ACN}}$ block – followed by Group Normalization. Note that all features are processed in the same way, individually, and the ACN block is the only place where they interact with each other – this architecture guarantees permutation-equivariance.
+
+Supervising attention. In some problems, the class for each data point is known a priori and explicit supervision can be performed. In this case, adding a supervised loss on the attention signals can be beneficial. For instance, when finding good correspondences for stereo we can apply binary-cross entropy using the epipolar distance to generate labels for each putative correspondence, as in [55]. Our experiments in Section 6.4 show that while this type of supervision can provide a small boost in performance (1-2%), our approach performs nearly as well without this supervision.
+
+# 4. Network architecture and applications
+
+Our network receives as input $\mathbf{P} \in \mathbb{R}^{N \times D}$ , the tensor representation of $\mathcal{P}$ , and produces an output tensor $\mathbf{O} \in \mathbb{R}^{N \times C}$ . Note that as $\mathcal{P}$ is unstructured, $\mathbf{O}$ must be equivariant with respect to permutations of the $N$ rows of $\mathbf{P}$ . This output tensor is then used in different ways according to the task at hand. We model our architecture after [55], which we refer to as Context Network (CNe). It features a series of residual blocks [20] with Context Normalization (CN). Our architecture, which we call Attentive Context Network, or ACNe, is pictured in Fig. 2. A key distinction is that within each normalization block (Fig. 2; right) we link the individual outputs of each perceptron $\mathcal{F}_{\varphi}$ to our ACN layer. We also replace the Batch Normalization layers [22] used in [55] with Group Normalization [51], as we found it performs better; see Section 6.4 for ablation tests.
+
+We demonstrate that ACNe can be used to solve multiple applications, ranging from classical problems such as robust line fitting (Section 4.1) and point cloud classification on MNIST and ModelNet40 (Section 4.2), to robust camera pose estimation for wide-baseline stereo (Section 4.3).
+
+# 4.1. Robust line fitting
+
+We consider the problem of fitting a line to a collection of points $\mathbf{P} \in \mathbb{R}^{N \times 2}$ that is riddled with noise and outliers; see Fig. 1. This problem can be addressed via smooth (IRLS) or combinatorial (RANSAC) optimization - both methods can be interpreted in terms of sparse optimization, such that inliers and outliers are clustered separately; see [7]. Let us parameterize a line as the locus of points $[x,y]$ such that $\pmb{\theta} \cdot [x,y,1] = 0$ . We can then score each row of $\mathbf{P}$ (i.e. each 2D point) by passing the output tensor $\mathbf{O} = \mathrm{ACNe}(\mathbf{P})$ to an additional weight network - with local and global components - following (6), yielding weights $\mathbf{w} = \eta (\mathcal{W}_{\omega}(\mathbf{O}))$ . Given $\mathbf{w}$ , and expressing our points in homogeneous coordinates as $\bar{\mathbf{P}} = [\mathbf{P},1] \in \mathbb{R}^{N \times 3}$ , we can compute our covariance matrix as $\mathbf{C_{w}}(\mathbf{P}) = \bar{\mathbf{P}}^{\top} \mathrm{diag}(\mathbf{w})^{2} \bar{\mathbf{P}} \in \mathbb{R}^{3 \times 3}$ . Then, denoting $\nu_0[\mathbf{C}]$ as the eigenvector of $\mathbf{C}$ corresponding to its smallest eigenvalue, $\nu_0[\mathbf{C_w}(\mathbf{P})]$ is the estimated line that we seek to find. We, therefore, minimize the difference between this eigenvector and the ground truth, with additional guidance to $\mathbf{w}^{\mathrm{local}}$ to help convergence:
+
+$$
+\mathcal{L}(\boldsymbol{\omega}) = \alpha \min_{+/-} \left\{ \left\| \nu_{0}[\mathbf{C}_{\mathbf{w}}(\mathbf{P})] \pm \boldsymbol{\theta} \right\|_{2}^{2} \right\} + \beta\, \mathbb{E}\left[ H(\mathbf{y}, \mathbf{w}^{\text{local}}) \right], \tag{10}
+$$
+
+where $\mathbb{E}\left[H(\mathbf{a},\mathbf{b})\right]$ is the average binary cross entropy between $\mathbf{a}$ and $\mathbf{b}$ , $\mathbf{y}$ is the ground-truth inlier label, and hyperparameters $\alpha$ and $\beta$ control the influence of these losses. The $\min_{+/-}$ resolves the issue that $-\theta$ and $\theta$ are the same line.
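+
+A minimal sketch of this weighted fitting step is shown below; the weights come from a hand-made inlier mask rather than from ACNe, and the data is synthetic.
+
+```python
+# Minimal sketch of the weighted line-fitting step described above: build the
+# weighted covariance C_w(P) on homogeneous coordinates and take the eigenvector
+# of its smallest eigenvalue as the line. The weights here are a hand-made
+# inlier mask standing in for eta(W_omega(O)).
+import numpy as np
+
+rng = np.random.default_rng(6)
+theta = np.array([1.0, -1.0, 0.5])                  # ground-truth line: x - y + 0.5 = 0
+xs = rng.uniform(-1, 1, size=100)
+inliers = np.stack([xs, xs + 0.5 + 0.01 * rng.normal(size=100)], axis=1)
+outliers = rng.uniform(-1, 1, size=(100, 2))
+P = np.vstack([inliers, outliers])
+
+w = np.concatenate([np.ones(100), np.zeros(100)])   # stand-in for the learned weights
+w = w / w.sum()                                     # unit L1 norm as in Eq. (6)
+
+P_h = np.hstack([P, np.ones((P.shape[0], 1))])      # homogeneous coordinates P_bar
+C_w = P_h.T @ np.diag(w) ** 2 @ P_h                 # weighted covariance, 3 x 3
+eigvals, eigvecs = np.linalg.eigh(C_w)
+line = eigvecs[:, 0]                                # nu_0: smallest-eigenvalue eigenvector
+
+# Up to sign and scale, `line` should match theta.
+print(line / np.linalg.norm(line), theta / np.linalg.norm(theta))
+```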
+
+# 4.2. Point cloud classification
+
+We can also apply ACNe to point cloud classification rather than reasoning about individual points. As in the previous application, we consider a set of 2D or 3D locations $\mathbf{P} \in \mathbb{R}^{N \times D}$ as input, where $D$ is the number of dimensions. In order to classify each point set, we transform the output tensor $\mathbf{O} = \mathrm{ACNe}(\mathbf{P})$ into a single vector $\mathbf{v} = \mu_{\mathbf{w}}(\mathbf{O})$ and associate it with a ground-truth one-hot vector $\mathbf{y}$ through softmax. Additional weight networks to generate $\mathbf{w}$ are trained for this task. We train with the cross entropy loss. Thus, the loss that we optimize is:
+
+$$
+\mathcal{L}(\boldsymbol{\omega}) = H\left( \mathbf{y}, \operatorname{softmax}(\mathbf{v}) \right). \tag{11}
+$$
+
+Figure 3. Classification - We add salt-and-pepper noise to MNIST images, and then convert the digits to an unstructured point cloud. Panels correspond to the input and noise levels from $10\%$ to $60\%$ , where the $\%$ reports the outlier-to-inlier ratio.
+
+# 4.3. Wide-baseline stereo
+
+In stereo we are given correspondences as input, which is thus $\mathbf{P} \in \mathbb{R}^{N \times 4}$ , where $N$ is the number of correspondences and each row contains two pixel locations on different images. In order to remain comparable with traditional methods, we aim to solve for the Fundamental matrix, instead of the Essential matrix, i.e., without assuming known camera intrinsics. Thus, differently from [55, 12, 59], we simply normalize the image coordinates with the image size instead. This makes our method more broadly applicable, and directly comparable with most robust estimation methods for stereo [16, 45, 9, 10, 4].
+
+We obtain $\mathbf{w}$ from the output tensor $\mathbf{O} = \mathrm{ACNe}(\mathbf{P})$ via (6), as in Section 4.1. The weights $\mathbf{w}$ indicate which correspondences are considered to be inliers and their relative importance. We then apply a weighted variant of the 8-point algorithm [18] to retrieve the Fundamental matrix $\hat{\mathbf{F}}$ , which parameterizes the relative camera motion between the two cameras. To do so we adopt the differentiable, non-parametric form proposed by [55], and denote this operation as $\hat{\mathbf{F}} = g(\mathbf{X},\mathbf{w})$ . We then train our network to regress the ground-truth Fundamental matrix, as well as providing auxiliary guidance to $\mathbf{w}^{\mathrm{local}}$ – the final local attention used to construct the output of the network – with per-correspondence labels obtained by thresholding over the symmetric epipolar distance [18], as in [55]. In addition, we also perform auxiliary supervision on $\mathbf{w}_k^{\mathrm{local}}$ – the intermediate local attentions within the network – as discussed in Section 3. Note that this loss is not necessary, but helps training and provides a small boost in performance; see Section 6.4. We do not supervise global attention and leave it for the network to learn. We therefore write:
+
+$$
+\mathcal{L}(\boldsymbol{\omega}) = \alpha \min_{+/-} \left\{ \left\| \hat{\mathbf{F}} \pm \mathbf{F}^{*} \right\|_{F}^{2} \right\} + \beta\, \mathbb{E}\left[ H\left(\mathbf{y}, \mathbf{w}^{\text{local}}\right) \right] + \gamma\, \mathbb{E}_{k}\left[ H\left(\mathbf{y}, \mathbf{w}_{k}^{\text{local}}\right) \right], \tag{12}
+$$
+
+where $\| \cdot \|_F$ is the Frobenius norm, $H$ is the binary cross entropy, and $\mathbf{y}$ denotes ground truth inlier labels. Again, the hyper-parameters $\alpha$ , $\beta$ , and $\gamma$ control the influence of each loss. Similarly to the line-fitting case, the $\min_{+/-}$ resolves the issue that $-\mathbf{F}^*$ and $\mathbf{F}^*$ express the same solution.
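+
+As a reference for the geometric step, below is a minimal NumPy sketch of a weighted 8-point solve; the paper uses the differentiable form of [55], so this is only an illustration, and the rank-2 projection at the end is a standard post-processing assumption:
+
+```python
+import numpy as np
+
+def weighted_eight_point(x1, x2, w):
+    """Estimate F from correspondences x1, x2 (N x 2 each, coordinates already normalized)
+    and per-correspondence weights w (N,)."""
+    N = len(w)
+    # Each correspondence yields one row of the epipolar constraint x2^T F x1 = 0.
+    A = np.stack([x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
+                  x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
+                  x1[:, 0], x1[:, 1], np.ones(N)], axis=1)      # N x 9
+    A = w[:, None] * A                                          # weight each equation
+    _, _, Vt = np.linalg.svd(A)
+    F = Vt[-1].reshape(3, 3)                                    # null-space solution
+    # Enforce rank 2 by zeroing the smallest singular value.
+    U, S, Vt = np.linalg.svd(F)
+    S[-1] = 0.0
+    return U @ np.diag(S) @ Vt
+```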
+
+# 5. Implementation details
+
+We employ a K-layer structure for ACNe (excluding the first linear layer, whose size changes with the number of input channels), that is, K ARB units with two perceptron layers in each ARB. We set K = 3 for 2D point cloud classification, K = 6 for robust line fitting, and K = 12 for stereo. For 3D point cloud classification, we instead add ACN normalization to an existing architecture. We also use 32 groups for Group Normalization, as suggested in [51]. Similarly to [55], we use $C = 128$ channels per perceptron.
+
+Training setup. For all applications we use the ADAM optimizer [25] with default parameters and a learning rate of $10^{-3}$ . Except for robust line fitting, we use a validation set to perform early stopping. For robust line fitting, the data is purely synthetic and thus infinite, and we train for 50k iterations. For MNIST, we use 70k samples with an 8:1:1 split for training, validation, and testing. For stereo, we use the splits from [58]. For the loss terms involving eigen-decomposition (the terms multiplied by $\alpha$ in (10) and (12)), we use $\alpha = 0.1$ , following [55]. All other loss terms have a weight of 1, that is, $\beta = 1$ and $\gamma = 1$ . For stereo, we follow [55] and enable the term involving the Fundamental matrix - the first term in (12) - after 20k iterations.
+
+Robust estimators for stereo inference. As a special case, we evaluate the possibility of applying standard robust estimators for outlier rejection, such as RANSAC, after training the model, to potentially maximize its performance, as previously done in [55, 12, 58]. To do so, we modify our architecture by changing the final layer to output only the local attention with the $\mathrm{ReLU} + \mathrm{Tanh}$ activation, as in Yi et al. [55]. We then simply threshold $\mathbf{w}$ at zero, select the data points that survive this process as inliers, and feed them to different RANSAC methods to process them further. We compare these results with those obtained directly from the weighted 8-point formulation.
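+
+A sketch of this inference-time variant, assuming OpenCV's RANSAC as the robust estimator (the threshold and confidence values below are illustrative, not the exact settings used):
+
+```python
+import cv2
+import numpy as np
+
+def estimate_with_ransac(x1, x2, w_local):
+    """Keep correspondences whose learned local attention is positive, then hand them
+    to a standard robust estimator (here OpenCV RANSAC; MAGSAC etc. work the same way)."""
+    keep = w_local > 0                                    # threshold the ReLU+Tanh scores at zero
+    pts1 = x1[keep].astype(np.float64)
+    pts2 = x2[keep].astype(np.float64)
+    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
+    return F, inlier_mask
+```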
+
+# 6. Results
+
+We first consider a toy example on fitting 2D lines with a large ratio of outliers. We then apply our method to point cloud classification, following [36, 37], which includes 2D digit classification on MNIST and 3D object classification on ModelNet40 [52]. These three experiments illustrate that our attentional method performs better than vanilla Context Normalization in the presence of outliers. We then apply our solution to wide-baseline stereo, and demonstrate that this increase in performance holds on challenging real-world applications, and against state-of-the-art methods for robust pose estimation. Finally, we perform an ablation study and evaluate the effect of supervising the weights used for attention in stereo.
+
+| Outlier ratio | 60% | 70% | 80% | 85% | 90% |
+| --- | --- | --- | --- | --- | --- |
+| CNe [55] | .00019 | .0038 | .056 | .162 | .425 |
+| ACNe (Ours) | 1e-6 | .0008 | .024 | .130 | .383 |
+
+Table 1. Robust line fitting - Line fitting results over the test set in terms of the $\ell_2$ distance between ground-truth and the estimates.
+
+| Outlier ratio | 0% | 10% | 20% | 30% | 40% | 50% | 60% |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| PointNet [36] | 98.1 | 95.1 | 93.2 | 79.5 | 67.7 | 70.0 | 54.8 |
+| CNe [55] | 98.0 | 95.8 | 94.0 | 91.0 | 90.1 | 87.7 | 87.2 |
+| ACNe (Ours) | 98.3 | 97.2 | 96.5 | 95.3 | 94.7 | 94.3 | 93.7 |
+
+Table 2. 2D Point cloud classification - Classification accuracy on MNIST, under different outlier ratios (%). Our method performs best in all cases, and the gap becomes wider with more outliers.
+
+# 6.1. Robust line fitting - Fig. 1 and Table 1
+
+To generate 2D points on a random line, as well as outliers, we first sample 2D points uniformly within the range $[-1, +1]$ . We then select two points randomly and fit a line that goes through them. Each point is then projected onto this line with a probability equal to the desired inlier ratio, forming the inliers. We measure the error in terms of the $\ell_2$ distance between the estimated and ground truth values for the line parameters. The results are summarized in Table 1, with qualitative examples in Fig. 1. ACNe consistently outperforms CNe [55]. Both methods break down at an $85 - 90\%$ outlier ratio, while the performance of ACNe degrades more gracefully. As illustrated in Fig. 1, our method learns to progressively focus on the inliers throughout the different layers of the network and weeds out the outliers.
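+
+A minimal sketch of this synthetic data generation (the exact sampling protocol is an assumption of this illustration):
+
+```python
+import numpy as np
+
+def make_line_data(n_points=1000, outlier_ratio=0.8, seed=0):
+    """Sample 2D points in [-1, 1]^2, fit a line through two random points, and project a
+    fraction of the points onto it to create inliers; the remaining points are outliers."""
+    rng = np.random.default_rng(seed)
+    P = rng.uniform(-1.0, 1.0, size=(n_points, 2))
+    a, b = P[rng.choice(n_points, 2, replace=False)]
+    d = (b - a) / np.linalg.norm(b - a)                   # line direction
+    inlier = rng.uniform(size=n_points) > outlier_ratio
+    P[inlier] = a + ((P[inlier] - a) @ d)[:, None] * d    # orthogonal projection onto the line
+    n = np.array([-d[1], d[0]])                           # line normal
+    theta = np.array([n[0], n[1], -n @ a])                # theta . [x, y, 1] = 0
+    return P, theta, inlier
+```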
+
+# 6.2. Classifying digits - Fig. 3 and Table 2
+
+We evaluate our approach on handwritten digit classification on MNIST, which consists of $28 \times 28$ grayscale images. We create a point cloud from these images following the procedure of [36]: we threshold each image at 128 and use the coordinates - normalized to a unit bounding box - of the surviving pixel locations as data samples. We subsample 512 points with replacement, in order to have the same number of points for all training examples. We also add a small Gaussian noise of 0.01 to the pixel coordinates after sampling following [36]. Outliers are generated by sampling from a uniform random distribution. We compare our method against vanilla PointNet [36] and CNe [55]. For PointNet, we re-implemented their method under our framework to have an identical training setup.
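+
+A minimal sketch of the MNIST point-cloud construction described above (the normalization and outlier-injection details are assumptions of this illustration):
+
+```python
+import numpy as np
+
+def image_to_point_cloud(img, n_samples=512, outlier_ratio=0.2, seed=0):
+    """Convert a 28x28 grayscale digit into a point cloud: threshold at 128, normalize the
+    coordinates, resample with replacement, jitter, and inject uniform outliers."""
+    rng = np.random.default_rng(seed)
+    ys, xs = np.nonzero(img >= 128)
+    pts = np.stack([xs, ys], axis=1) / 27.0               # normalize to a unit bounding box
+    idx = rng.choice(len(pts), n_samples, replace=True)   # sample 512 points with replacement
+    pts = pts[idx] + rng.normal(scale=0.01, size=(n_samples, 2))
+    n_out = int(outlier_ratio * n_samples)                # replace a subset with uniform outliers
+    pts[:n_out] = rng.uniform(0.0, 1.0, size=(n_out, 2))
+    return pts
+```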
+
+| Outlier ratio | 0% | 10% | 20% | 30% | 40% | 50% |
+| --- | --- | --- | --- | --- | --- | --- |
+| PointNet | 85.8 | 81.7 | 81.7 | 80.1 | 78.2 | 78.5 |
+| PointNet w/ CN | 87.2 | 84.3 | 84.5 | 83.4 | 81.7 | 81.6 |
+| PointNet w/ ACN | 87.7 | 84.6 | 85.0 | 84.6 | 83.3 | 84.2 |
+
+Table 3. 3D Point cloud classification – We replicate the 3D point classification experiment on ModelNet40 from [36], with vanilla PointNet. We then add outliers with Gaussian noise. Our approach performs best with and without outliers.
+
+Table 2 summarizes the results in terms of classification accuracy. Our method performs best, with the gap widening as the outlier ratio increases – while CNe shows some robustness to noise, PointNet quickly breaks down. Note that the results for PointNet are slightly different from the ones reported in [36], as we use a validation split to perform early stopping. In addition, to reduce randomness, we train 10 different models and report the average results.
+
+# 6.3. Classifying 3D objects - Table 3
+
+We apply our method to the problem of 3D object (point cloud) classification. We use the ModelNet40 dataset [52], and compare with PointNet [37]. Similarly to the MNIST case, we contaminate the dataset with outliers to test the robustness of each method. Specifically, we add a predetermined ratio of outliers to the point clouds, sampled uniformly within the range $[-1, 1]$ . We also add small Gaussian perturbations to the locations of the points, with a standard deviation of 0.01. We then sample 1024 points from the point cloud to perform classification. Again, to simply test if ACN can improve existing pipelines, we plug our normalization into the vanilla PointNet architecture. Note that the original PointNet includes an affine estimation step which provides a small performance boost – we omit it from our implementation, in order to isolate the architectural differences between the methods. We report the results in Table 3. Our method performs best, with the gap becoming wider as outliers become prevalent.
+
+# 6.4. Wide-baseline stereo - Fig. 4 and Table 4
+
+Wide-baseline stereo is an extremely challenging problem, due to the large number of variables to account for - viewpoint, scale, illumination, occlusions, and properties of the imaging device - see Fig. 4 for some examples. We benchmark our approach on a real-world dataset against multiple state-of-the-art baselines, following the data [58] and protocols provided by [55]. Their ground truth camera poses are obtained from Structure-from-Motion with VisualSfM [50], from large image collections of publicly available, challenging photo-tourism data.
+
+Figure 4. Wide-baseline stereo - We show the results of different matching algorithms on the dataset of [55]. We draw the inliers produced by them, in green if the match is below the epipolar distance threshold (in red otherwise). Note that this may include some false positives, as epipolar constraints map points to lines - perfect ground truth would require dense pixel-to-pixel correspondences.
+
+We evaluate performance in terms of the reconstructed poses. Since the stereo matching problem is defined only up to a scale factor [18], it is not possible to compute absolute (metric) errors for translation. Instead, we follow the methodology of [55] and measure the error between the ground truth and estimated vectors between both cameras, for both rotation and translation, and combine them by taking the maximum of the two. We then evaluate the accuracy over all image pairs at multiple error thresholds, accumulate it up to a limit (either $10^{\circ}$ or $20^{\circ}$), and summarize performance by its mean - which we call the mean Average Precision (mAP); see [55]. This means that methods that perform better at lower error thresholds are rewarded. We use their data mostly as is, using the pre-extracted correspondences and splits from OANet [58], but adapt it to the Fundamental matrix problem. In contrast to previous works [55, 58], which report results on the scene the models were trained on, we focus on unknown scenes, in order to determine each method's actual performance.
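+
+A sketch of the per-pair error used for this metric, assuming the usual angular errors for rotation and for the (up-to-scale, sign-ambiguous) translation direction:
+
+```python
+import numpy as np
+
+def pose_error_deg(R_est, t_est, R_gt, t_gt):
+    """Maximum of the rotation error and the angle between translation directions, in degrees."""
+    cos_r = np.clip((np.trace(R_est.T @ R_gt) - 1.0) / 2.0, -1.0, 1.0)
+    err_r = np.degrees(np.arccos(cos_r))
+    cos_t = np.clip(abs(t_est @ t_gt) /
+                    (np.linalg.norm(t_est) * np.linalg.norm(t_gt)), 0.0, 1.0)
+    err_t = np.degrees(np.arccos(cos_t))      # abs() handles the sign ambiguity of translation
+    return max(err_r, err_t)
+```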
+
+As we discussed in Section 4.3, both CNe [55] and OANet [58] assume known camera intrinsics and estimate the Essential matrix, instead of the Fundamental matrix - this is a significantly easier problem, as the number of free parameters drops from 7 to 5. However, most research papers on this topic focus on estimating the Fundamental matrix [9, 10, 38, 3, 4], which is why we focus on this problem instead. For completeness, we also report results for the Essential matrix in the supplementary appendix, for which we also achieve state-of-the-art results.
+
+In more detail, given an image pair, we extract 2k keypoints per image with SIFT [31]. Matches are then formed from one image to the other, in both directions. As is typical for image matching, we then filter out non-discriminative correspondences via a bi-directional check, enforcing one-to-one matching. For RANSAC variants we found it critical to further apply Lowe's ratio test [31]; without it, RANSAC variants provide worse results. We apply it with a ratio threshold of 0.8. We do not apply this test for learned methods, as it throws out too many inliers for learned methods to bring any benefit. Also, when training learned methods, we train without the bi-directional check, so that the network sees as many correspondences as possible in the training set.
+
+We consider the following methods: LMedS [40], RANSAC [16, 9], MLESAC [45], DegenSAC [10], MAGSAC [4], CNe [55], DFE [39], OANet [58], and ACNe (ours). We consider the pose estimated with the weighted 8-point algorithm directly, as well as the poses obtained when each learned method is combined with a robust outlier rejection method, as outlined in Section 5.
+
+Quantitative results. We report quantitative results in Table 4, for two different error thresholds $(10^{\circ} / 20^{\circ})$ . We make three fundamental observations:
+
+(1) Our method consistently outperforms all of the baselines, including CNe and OANet. The difference in performance between ACNe and its closest competitor, OANet, is $14.1 / 9.8\%$ (relative) for Outdoors and $8.6 / 9.2\%$ (relative) for Indoors when used without any additional post-processing. The gap for Outdoors is reduced to $1\%$ when both are combined with MAGSAC, but ACNe still outperforms OANet. For Indoors, we observe a drop in performance for both OANet and ACNe when combining them with RANSAC or MAGSAC. The margin between learned and traditional methods is significant, with ACNe performing $30.1 / 39.6\%$ better (relative) on Outdoors and $45.5 / 47.1\%$ better (relative) on Indoors than the best performing traditional baseline – including a very recent method, MAGSAC.
+(2) Different from the findings of [55], we observe that RANSAC variants may harm performance, particularly with ACNe. This is because through its global attention - $\mathbf{w}^{\mathrm{global}}$ - ACNe can infer the relative importance of each correspondence, which is not easily taken into account when passing samples to a robust estimator. In this manner, ACNe goes beyond simple outlier rejection. The best performance is typically achieved by using ACNe in its pure form, directly feeding its weights to the weighted 8-point algorithm. Given that all our experiments are on unseen sequences, this further shows that ACNe generalizes very well, even without being followed by an additional robust estimator.
+(3) Contrary to the results of Yi et al. [55] and Zhang et al. [58], we find that traditional baselines perform better than reported in either work. This is because their experimental setup did not consider Lowe's ratio test, nor the bi-directional check. Without these, the performance of traditional baselines drops drastically – RANSAC and MAGSAC drop $79.2 / 73.0\%$ and $92.0 / 85.1\%$ in relative performance, respectively, for Outdoors, and $66.9 / 59.2\%$ and $82.2 / 74.1\%$ for Indoors.
+
+| | Method | Outdoors | Indoors |
+| --- | --- | --- | --- |
+| Traditional | LMedS | .296/.383 | .142/.235 |
+| | RANSAC | .356/.437 | .172/.272 |
+| | MLESAC | .148/.216 | .135/.230 |
+| | DegenSAC | .328/.394 | .191/.291 |
+| | MAGSAC | .385/.457 | .185/.282 |
+| Learned | CNe (weighted-8pt) | .323/.469 | .189/.331 |
+| | CNe+RANSAC | .449/.554 | .201/.315 |
+| | CNe+MAGSAC | .500/.598 | .213/.326 |
+| | DFE (weighted-8pt) | .319/.470 | .167/.294 |
+| | DFE+RANSAC | .414/.508 | .193/.303 |
+| | DFE+MAGSAC | .452/.541 | .211/.320 |
+| | OANet (weighted-8pt) | .439/.581 | .256/.392 |
+| | OANet+RANSAC | .482/.592 | .211/.331 |
+| | OANet+MAGSAC | .514/.615 | .230/.346 |
+| Ours | ACNe (weighted-8pt) | .501/.638 | .278/.428 |
+| | ACNe+RANSAC | .478/.590 | .209/.329 |
+| | ACNe+MAGSAC | .518/.621 | .226/.343 |
+
+Table 4. Pose estimation accuracy - mAP at $10^{\circ}/20^{\circ}$ error threshold. Similarly to [55], we consider multiple baselines, as well as pairing different methods with state-of-the-art RANSAC variants. Our method consistently outperforms all others by a significant margin, even without an additional robust estimator, in some cases.
+
+Ablation study – Table 5. We perform an ablation study to evaluate the effect of the different types of attention, as well as the supervision on the local component of the attentive mechanism. We also compare with CNe, as its architecture is the most similar to ours. We use the train and validation splits for the Saint Peter's Square sequence for this study, as it is the primary sequence used for training in [55] and has many images within the set. (1) We confirm that CNe [55] performs better with Batch Normalization (BN) [22] than with Group Normalization (GN) [51] – we use GN for ACNe, as it seems to perform marginally better with our attention mechanism. (2) We observe that our attentive mechanisms allow ACNe to outperform CNe, and that their combination outperforms their separate use. (3) Applying supervision on the weights further boosts performance.
+
+With learned features – Table 6. Finally, we report that our method also works well with two state-of-the-art, learned local feature methods – SuperPoint [14] and LF-Net [35]. They are learned end-to-end – their characteristics are thus different from those of SIFT keypoints. We test again on Saint Peter's Square, as our primary focus is to show that it is possible to use other feature types. In Table 6 we report that both methods improve performance over SIFT with OANet and ACNe, but with the ratio test and MAGSAC they perform worse. It is interesting how SuperPoint, without the ratio test, performs better than SIFT with MAGSAC, but the order is reversed when the ratio test is introduced, highlighting its importance. Regardless of feature type, we demonstrate that our approach provides improved performance over other methods, and that it pairs best with SuperPoint.
+
+| Methods | CNe [55] w/ BN | CNe [55] w/ GN | ACNe L | ACNe G | ACNe L+G | ACNe L+G+S |
+| --- | --- | --- | --- | --- | --- | --- |
+| Weighted-8pt | .435 | .414 | .531 | .593 | .597 | .602 |
+
+Table 5. Ablation study - mAP at $20^{\circ}$ with different CNe [55] and ACNe (ours) variants on stereo. The labels indicate: $L$ - local attention; $G$ - global attention; $S$ - local attention supervision.
+
+| Method | SIFT | SuperPoint | LF-Net |
+| --- | --- | --- | --- |
+| MAGSAC (w/o ratio test) | .146 | .205 | .134 |
+| MAGSAC | .264 | .230 | .157 |
+| OANet (weighted 8pt) | .488 | .547 | .543 |
+| OANet+MAGSAC | .479 | .442 | .452 |
+| ACNe (weighted 8pt) | .602 | .637 | .619 |
+
+Table 6. With learned local features - mAP at $20^{\circ}$ with learned local features and different methods. Our method outperforms other methods and performs best with SuperPoint [14].
+
+# 7. Conclusion
+
+We have proposed Attentive Context Normalization (ACN), and used it to build Attentive Context Networks (ACNe) to solve problems on permutation-equivariant data. Our solution is inspired by IRLS, where one iteratively re-weighs the importance of each sample, via a soft inlier/outlier assignment. We demonstrated that by learning both local and global attention we are able to outperform state-of-the-art solutions on line fitting, classification of point clouds in 2D (digits) and 3D (objects), and challenging wide-baseline stereo problems. Notably, our method thrives under large outlier ratios. For future research directions, we consider incorporating ACN into general normalization techniques for deep learning. We believe that this is an interesting direction to pursue, as all existing techniques make use of statistical moments.
+
+# Acknowledgements
+
+This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant, NSERC Collaborative Research and Development Grant (Google), and by Compute Canada.
+
+# References
+
+[1] Sameer Agarwal, Noah Snavely, Ian Simon, Steven M. Seitz, and Richard Szeliski. Building Rome in One Day. In Int. Conf. on Comput. Vis., 2009. 1
+[2] Jimmy L. Ba, Jamie R. Kiros, and Geoffrey E. Hinton. Layer Normalization. arXiv Preprint, 2016. 1, 2, 3
+[3] Daniel Barath and Jiří Matas. Graph-cut RANSAC. In Conf. on Comput. Vis. Pattern Recognit., 2018. 7
+[4] Daniel Barath, Jana Noskova, and Jiri Matas. MAGSAC: Marginalizing Sample Consensus. In Conf. on Comput. Vis. Pattern Recognit., 2019. 5, 7
+[5] JiaWang Bian, Wen-Yan Lin, Yasuyuki Matsushita, Sai-Kit Yeung, Tan-Dat Nguyen, and Ming-Ming Cheng. GMS: Grid-Based Motion Statistics for Fast, Ultra-Robust Feature Correspondence. In Conf. on Comput. Vis. Pattern Recognit., 2017. 2
+[6] Sofien Bouaziz, Andrea Tagliasacchi, Hao Li, and Mark Pauly. Modern Techniques and Applications for Real-Time Non-rigid Registration. In SIGGRAPH Asia (Technical Course Notes), 2016. 1
+[7] Sofien Bouaziz, Andrea Tagliasacchi, and Mark Pauly. Sparse Iterative Closest Point. In Comput. Graphics Forum, 2013. 4
+[8] Rick Chartrand and Wotao Yin. Iteratively Reweighted Algorithms for Compressive Sensing. In Int. Conf. on Acoustics, Speech, Signal Process., 2008. 2
+[9] Ondrej Chum, Jiří Matas, and Josef Kittler. Locally Optimized RANSAC. In Joint Pattern Recognition Symposium, 2003. 5, 7
+[10] Ondrej Chum, Tomas Werner, and Jiri Matas. Two-view Geometry Estimation Unaffected by a Dominant Plane. In Conf. on Comput. Vis. Pattern Recognit., 2005. 5, 7
+[11] Andrew Cotter, Maya Gupta, Heinrich Jiang, Erez Louidor, James Muller, Tamann Narayan, Serena Wang, and Tao Zhu. Shape constraints for set functions. In Int. Conf. on Mach. Learn., pages 1388-1396, 2019. 2
+[12] Zheng Dang, Kwang Moo Yi, Yinlin Hu, Fei Wang, Pascal Fua, and Mathieu Salzmann. Eigendecomposition-Free Training of Deep Networks with Zero Eigenvalue-Based Losses. In European Conf. on Comput. Vis., 2018. 1, 2, 5
+[13] Michael Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering. In Adv. Neural Inf. Process. Syst., 2016. 2
+[14] Daniel Detone, Tomasz Malisiewicz, and Andrew Rabinovich. Superpoint: Self-Supervised Interest Point Detection and Description. CVPR Workshop on Deep Learning for Visual SLAM, 2018. 8
+[15] Mihai Dusmanu, Ignacio Rocco, Tomas Pajdla, Marc Pollefeys, Josef Sivic, Akihiko Torii, and Torsten Sattler. D2-Net: A Trainable CNN for Joint Detection and Description of Local Features. Conf. on Comput. Vis. Pattern Recognit., 2019. 2, 11
+[16] Martin A. Fischler and Robert C. Bolles. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Communications ACM, 24(6):381-395, 1981. 1, 2, 5, 7
+
+[17] John Fox. An R and S-Plus Companion to Applied Regression. Sage, 2002. 2
+[18] Richard Hartley and Andrew Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2000. 5, 6
+[19] H. K. Hartline, Henry G. Wagner, and Floyd Ratliff. Inhibition in the Eye of Limulus. Journal of General Physiology, 1956. 3
+[20] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In Conf. on Comput. Vis. Pattern Recognit., 2016. 4
+[21] Xun Huang and Serge Belongie. Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization. In Int. Conf. on Comput. Vis., 2017. 3
+[22] Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Int. Conf. on Mach. Learn., 2015. 1, 2, 3, 4, 8
+[23] Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial Transformer Networks. In Adv. Neural Inf. Process. Syst., 2015. 3
+[24] Kevin Jarrett, Koray Kavukcuoglu, Marc A. Ranzato, and Yann LeCun. What is the Best Multi-Stage Architecture for Object Recognition? In Int. Conf. on Comput. Vis., 2009. 3
+[25] Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimisation. In Int. Conf. on Learn. Representations, 2015. 5
+[26] Thomas N. Kipf and Max Welling. Semi-Supervised Classification with Graph Convolutional Networks. Int. Conf. on Learn. Representations, 2017. 2
+[27] Florian Kluger, Eric Brachmann, Hanno Ackermann, Carsten Rother, Michael Ying Yang, and Bodo Rosenhahn. CON-SAC: Robust Multi-Model Fitting by Conditional Sample Consensus. CVPR, 2020. 1
+[28] Johannes Kopf, Michael F Cohen, and Richard Szeliski. First-Person Hyper-Lapse Videos. ACM Trans. on Graphics, 33(4):78, 2014. 1
+[29] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In Adv. Neural Inf. Process. Syst., 2012. 3
+[30] Xinhai Liu, Zhizhong Han, Yu-Shen Liu, and Matthias Zwicker. Point2Sequence: Learning the Shape Representation of 3D Point Clouds with an Attention-Based Sequence to Sequence Network. Amer. Assoc. for Artif. Intell. Conf., 2019. 2
+[31] David G. Lowe. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis., 60(2):91-110, 2004. 1, 7
+[32] Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective Approaches to Attention-based Neural Machine Translation. Empirical Methods in Nat. Language Process., 2015. 3
+[33] Muhammad J Mirza and Kim L Boyer. Performance Evaluation of a Class of M-estimators for Surface Parameter Estimation in Noisy Range Data. IEEE Transactions on Robotics and Automation, 9(1):75–85, 1993. 2
+
+[34] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral Normalization for Generative Adversarial Networks. In Int. Conf. on Learn. Representations, 2018. 3
+[35] Yuki Ono, Eduard Trulls, Pascal Fua, and Kwang Moo Yi. Lf-Net: Learning Local Features from Images. In Adv. Neural Inf. Process. Syst., 2018. 8
+[36] Charles R. Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas. Pointnet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Conf. on Comput. Vis. Pattern Recognit., 2017. 1, 2, 5, 6
+[37] Charles R. Qi, Li Yi, Hao Su, and Leonidas J. Guibas. Point-net++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In Adv. Neural Inf. Process. Syst., 2017. 1, 2, 5, 6
+[38] Rahul Raguram, Ondrej Chum, Marc Pollefeys, Jiri Matas, and Jan-Michael Frahm. USAC: a Universal Framework for Random Sample Consensus. IEEE Trans. on Pattern Anal. Mach. Intell., 35(8):2022-2038, 2012. 7
+[39] René Ranftl and Vladlen Koltun. Deep Fundamental Matrix Estimation. In European Conf. on Comput. Vis., 2018. 1, 2, 7
+[40] Peter J. Rousseeuw. Least Median of Squares Regression. Journal of the American Statistical Association, 1984. 7
+[41] Tim Salimans and Diederik P. Kingma. Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks. In Adv. Neural Inf. Process. Syst., 2016. 3
+[42] Yiru Shen, Chen Feng, Yaoqing Yang, and Dong Tian. Mining Point Cloud Local Structures by Kernel Correlation and Graph Pooling. In Conf. on Comput. Vis. Pattern Recognit., pages 4548-4557, 2018. 2
+[43] Noah Snavely, Rahul Garg, Steven M Seitz, and Richard Szeliski. Finding Paths Through the World's Photos. ACM Trans. on Graphics, 2008. 1
+[44] Maxim Tatarchenko, Jaesik Park, Vladlen Koltun, and Qian-Yi Zhou. Tangent Convolutions for Dense Prediction in 3D. In Conf. on Comput. Vis. Pattern Recognit., 2018. 2
+[45] Philip H.S. Torr and Andrew Zisserman. MLESAC: A New Robust Estimator with Application to Estimating Image Geometry. Comput. Vis. Image Understanding, 78:138-156, 2000. 2, 5, 7
+[46] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance Normalization: The Missing Ingredient for Fast Stylization. arXiv Preprint, 2016. 1, 2, 3
+[47] Benjamin Ummenhofer, Huizhong Zhou, Jonas Uhrig, Nikolaus Mayer, Eddy Ilg, Alexey Dosovitskiy, and Thomas Brox. DeMoN: Depth and Motion Network for Learning Monocular Stereo. In Conf. on Comput. Vis. Pattern Recognit., 2017. 1
+[48] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention Is All You Need. In Adv. Neural Inf. Process. Syst., 2017. 3
+[49] Fei Wang, Mengqing Jiang, Chen Qian, Shuo Yang, Cheng Li, Honggang Zhang, Xiaogang Wang, and Xiaou Tang. Residual Attention Network for Image Classification. In Conf. on Comput. Vis. Pattern Recognit., 2017. 2, 11
+[50] Changchang Wu. Towards Linear-Time Incremental Structure from Motion. In 3DV, 2013. 6
+[51] Yuxin Wu and Kaiming He. Group Normalization. In European Conf. on Comput. Vis., 2018. 1, 2, 3, 4, 5, 8
+[52] Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaou Tang, and Jianxiong Xiao. 3D ShapeNets: A Deep Representation for Volumetric Shapes. In Conf. on Comput. Vis. Pattern Recognit., 2015. 5, 6
+[53] Saining Xie, Sainan Liu, Zeyu Chen, and Zhuowen Tu. Attentional ShapeContextNet for Point Cloud Recognition. In Conf. on Comput. Vis. Pattern Recognit., 2018. 3
+[54] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. Show, Attend and Tell: Neural Image Caption Generation with Visual Attention. Int. Conf. on Mach. Learn., 2015. 3
+[55] Kwang Moo Yi, Eduard Trulls, Yuki Ono, Vincent Lepetit, Mathieu Salzmann, and Pascal Fua. Learning to Find Good Correspondences. In Conf. on Comput. Vis. Pattern Recognit., 2018. 1, 2, 3, 4, 5, 6, 7, 8, 11, 12
+[56] Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Ruslan R Salakhutdinov, and Alexander J Smola. Deep Sets. In Adv. Neural Inf. Process. Syst., 2017. 2
+[57] Amir R. Zamir, Tilman Wekel, Pulkit Argrawal, Colin Weil, Jitendra Malik, and Silvio Savarese. Generic 3D Representation via Pose Estimation and Matching. In European Conf. on Comput. Vis., 2016. 1
+[58] Jiahui Zhang, Dawei Sun, Zixin Luo, Anbang Yao, Lei Zhou, Tianwei Shen, Yurong Chen, Long Quan, and Hongen Liao. Learning Two-View Correspondences and Geometry Using Order-Aware Network. In Int. Conf. on Comput. Vis., 2019. 2, 5, 6, 7, 12
+[59] Chen Zhao, Zhiguo Cao, Chi Li, Xin Li, and Jiaqi Yang. NM-Net: Mining Reliable Neighbors for Robust Feature Correspondences. Conf. on Comput. Vis. Pattern Recognit., 2019. 1, 2, 5
+[60] Tinghui Zhou, Matthew Brown, Noah Snavely, and David Lowe. Unsupervised Learning of Depth and Ego-Motion from Video. In Conf. on Comput. Vis. Pattern Recognit., 2017. 1
+[61] Yin Zhou and Oncel Tuzel. VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection. In Conf. on Comput. Vis. Pattern Recognit., 2018. 2
\ No newline at end of file
diff --git a/acneattentivecontextnormalizationforrobustpermutationequivariantlearning/images.zip b/acneattentivecontextnormalizationforrobustpermutationequivariantlearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..ffbc33c5e8d06736710209b9eb9549dafd2ca031
--- /dev/null
+++ b/acneattentivecontextnormalizationforrobustpermutationequivariantlearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d835c4771e97759d79c92080af4a0a4c5efb944574e2c9fc939f9e9513f9dae3
+size 435126
diff --git a/acneattentivecontextnormalizationforrobustpermutationequivariantlearning/layout.json b/acneattentivecontextnormalizationforrobustpermutationequivariantlearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d0e413a35cc4d2bdc1cb02ab329fee9410626a01
--- /dev/null
+++ b/acneattentivecontextnormalizationforrobustpermutationequivariantlearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ad4a398d766bcb31a11f7b832eb06d07517d25ddb8f797c5adbd859e0d887cbb
+size 462067
diff --git a/actbertlearninggloballocalvideotextrepresentations/54957bfe-d7fd-4f02-b231-53cc8e938b38_content_list.json b/actbertlearninggloballocalvideotextrepresentations/54957bfe-d7fd-4f02-b231-53cc8e938b38_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..0d50bd2a5f8308f71f6fdbedc4daf8273fe9ee78
--- /dev/null
+++ b/actbertlearninggloballocalvideotextrepresentations/54957bfe-d7fd-4f02-b231-53cc8e938b38_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2c509473b989e12210695480226052433df867d2044d96d0da3c9511521be722
+size 72959
diff --git a/actbertlearninggloballocalvideotextrepresentations/54957bfe-d7fd-4f02-b231-53cc8e938b38_model.json b/actbertlearninggloballocalvideotextrepresentations/54957bfe-d7fd-4f02-b231-53cc8e938b38_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..415477c3879bcea00dcd515b5ad21b83b5cec887
--- /dev/null
+++ b/actbertlearninggloballocalvideotextrepresentations/54957bfe-d7fd-4f02-b231-53cc8e938b38_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6b86df699f4ece7b749ffac4f3446e428ebbc786acc3f4ef53dce8bb12cd207c
+size 90502
diff --git a/actbertlearninggloballocalvideotextrepresentations/54957bfe-d7fd-4f02-b231-53cc8e938b38_origin.pdf b/actbertlearninggloballocalvideotextrepresentations/54957bfe-d7fd-4f02-b231-53cc8e938b38_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a9de019b953506e04d28db2c9611c2508a15075d
--- /dev/null
+++ b/actbertlearninggloballocalvideotextrepresentations/54957bfe-d7fd-4f02-b231-53cc8e938b38_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5816976afeda790a3061bc7e67aacf79a6d7889f93b21591427e83c7333e7948
+size 924479
diff --git a/actbertlearninggloballocalvideotextrepresentations/full.md b/actbertlearninggloballocalvideotextrepresentations/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..3a638c06566990bbe1f502c3daa82167a8c3a710
--- /dev/null
+++ b/actbertlearninggloballocalvideotextrepresentations/full.md
@@ -0,0 +1,255 @@
+# ActBERT: Learning Global-Local Video-Text Representations
+
+Linchao Zhu$^{1,2}$ and Yi Yang$^{2*}$
+
+$^{1}$Baidu Research $\quad$ $^{2}$ReLER, University of Technology Sydney
+
+{linchao.zhu, yi.yang}@uts.edu.au
+
+# Abstract
+
+In this paper, we introduce ActBERT for self-supervised learning of joint video-text representations from unlabeled data. First, we leverage global action information to catalyze mutual interactions between linguistic texts and local regional objects. It uncovers global and local visual clues from paired video sequences and text descriptions for detailed visual and text relation modeling. Second, we introduce a TaNgled Transformer block (TNT) to encode three sources of information, i.e., global actions, local regional objects, and linguistic descriptions. Global-local correspondences are discovered via judicious clue extraction from contextual information. It enforces the joint video-text representation to be aware of fine-grained objects as well as global human intention. We validate the generalization capability of ActBERT on downstream video-and-language tasks, i.e., text-video clip retrieval, video captioning, video question answering, action segmentation, and action step localization. ActBERT significantly outperforms the state-of-the-art, demonstrating its superiority in video-text representation learning.
+
+# 1. Introduction
+
+While supervised learning has been successful in a variety of computer vision tasks [17, 9, 38, 29], self-supervised representation learning from unlabeled data has attracted increasing attention in recent years [4, 27]. In self-supervised learning, a model is first pre-trained on a large amount of unlabeled data with a surrogate loss. The fine-tuning process further helps the pre-trained model to be specialized in downstream tasks. Recently, there has been rapid progress in self-supervised representation learning for texts [7, 45], where the Bidirectional Encoder Representations from Transformers (BERT) model [7] generalizes remarkably to many natural language tasks, e.g., question answering [2].
+
+Motivated by BERT's success in self-supervised training, we aim to learn an analogous model for video and text joint modeling. We exploit video-text relations based on narrated instructional videos, where the aligned texts are detected by off-the-shelf automatic speech recognition (ASR) models. These instructional videos serve as natural sources for video-text relationship studies. First, they are widely available and freely accessible on YouTube and other platforms [26, 33]. Second, the visual frames are aligned with the instructional narrations. The text narrations not only explicitly mention the objects in the scene but also identify the salient action in the video clip.
+
+To generalize BERT to video-and-language tasks, Sun et al. [33] extended the BERT model by learning from quantized video frame features. The original BERT takes discrete elements as inputs and predicts the corresponding tokens as the output. In contrast, visual features are real-valued distributed representations that cannot be directly categorized into discrete labels for "visual token" prediction. Sun et al. [33] discretized visual features into visual words via clustering. These visual tokens can be directly passed to the original BERT model. However, detailed local information, e.g., interacting objects and human actions, may be lost during clustering. It prevents the model from uncovering fine-grained relations between video and text. In this paper, we propose ActBERT to learn a joint video-text representation that uncovers global and local visual clues from paired video sequences and text descriptions. Both the global and the local visual signals interact with the semantic stream mutually. ActBERT leverages profound contextual information and exploits fine-grained relations for video-text joint modeling.
+
+First, ActBERT incorporates global actions, local regional objects and text descriptions in a joint framework. Actions, e.g., "cut", "rotate", "slice", are essential to various video-related downstream tasks. The recognition of human actions can demonstrate the model's capacity in motion understanding and complex human intention reasoning. It could be beneficial to explicitly model human actions during model pre-training. Long-term action sequences furthermore offer temporal dependencies about an instructional task. Though action clues are important, they are largely ignored in previous self-supervised video-text training [33, 26], where actions are treated identically to objects. To model human actions, we first extract verbs from the text descriptions and construct an action classification dataset from the original dataset. Then, a 3D convolutional network is trained to predict the action labels. The features from the optimized network are used as the action embedding. In this way, clip-level actions are represented, and the corresponding action label is inserted. Besides global action information, we incorporate local regional information to provide fine-grained visual cues [21, 34, 32, 19, 5]. Object regions provide detailed visual clues about the whole scene, including the regional object features and the positions of the objects. The language model can benefit from the regional information for better language-and-visual alignment.
+
+Second, we introduce a TaNgled Transformer block (TNT) to encode features from three sources, i.e., global actions, local regional objects, and linguistic tokens. Previous studies [21, 34] consider two modalities when designing the new transformer layers, i.e., fine-grained object information from images and natural language. Lu et al. [21] introduced a co-attentional transformer layer, where the key-value pairs from one modality are passed to the other modality's attention block to act as the new key-value pairs. However, in our scenario, there are three sources of inputs. Two of the sources, i.e., local regional features and linguistic texts, offer detailed descriptions of the event occurring in the clip. The remaining source, the global action feature, provides the human intention over time as well as a straightforward clue for contextual inference. We design a new tangled transformer block for cross-modality feature learning from three sources. To enhance the interactions between the two visual cues and the linguistic features, we use a separate transformer block [40] to encode each modality. The mutual cross-modal communication is later enhanced with two additional multi-head attention blocks. The action feature catalyzes mutual interactions. With the guidance from the action features, we inject visual information into the linguistic transformer, and incorporate linguistic information into the visual transformers. The tangled transformer dynamically selects judicious cues from its context to facilitate the target prediction.
+
+Furthermore, we design four surrogate tasks to train ActBERT, i.e., masked language modeling with global and local visual cues, masked action classification, masked object classification and cross-modal matching. The pre-trained ActBERT is transferred to five video-related downstream tasks, i.e., video captioning, action segmentation, text-video clip retrieval, action step localization, and video question answering. We quantitatively show that ActBERT achieves state-of-the-art performance by a clear margin.
+
+# 2. Related Work
+
+Video and language. There are many existing video-and-language tasks to evaluate a model's capacity in joint video-text representation learning, e.g., video question answering [36, 10, 18, 54], video captioning [46, 52], text-video retrieval [47, 41, 25], video grounding [50]. In video and language modeling, it can be difficult to learn relations between ordered video frames and their corresponding descriptions, as video temporal information and the spatio-temporal interactions between multiple objects need to be incorporated. The dominant approach for multi-modal modeling is to leverage Recurrent Neural Networks (RNNs) and their variants, e.g., Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), to model sequence relations, e.g., [28, 53]. Zhou et al. [52] leveraged masked transformers in both the encoder and the decoder for dense video captioning. Most of these works are conducted on well-annotated datasets where the descriptions are manually generated, requiring considerable human intervention. There are other works that learn video representations from limited annotated data [55]. Video data is a natural source for learning cross-modal representations. The text descriptions are automatically generated by off-the-shelf automatic speech recognition (ASR) models. This is more scalable and more general for deploying the model in real-world applications. In this paper, we focus on learning joint video-text representations in a self-supervised way.
+
+Cross-modal pre-training. In the past year, many works extended BERT to model cross-modal data [21, 32, 34, 5, 19, 33]. The recent BERT model for video-text modeling [33] introduces visual words for video frame encoding, where local regional information is largely ignored. The synchronized video-audio signal is also a good test-bed for cross-modal representation learning [3, 15]. However, these works leverage low-level audio signals and only consider the synchronized nature of video data. In this work, we focus on video-text joint representation learning. Our ActBERT leverages multi-source information and achieves remarkable performance in many downstream video-text tasks.
+
+Instructional videos. Learning from instructional videos is challenging due to the data complexity across various tasks [6, 1, 51, 26]. These videos are collected from many domains, e.g., cooking, sports, gardening. Many works also regard the transcriptions generated from instructional videos as a source of supervision [1, 51, 26]. However, we employ ActBERT to explicitly model human actions and local regions in a unified framework. We improve [26] with more specific relation modeling between videos and their descriptions. We quantitatively demonstrate that ActBERT is more suitable for unsupervised video-text modeling.
+
+# 3. Model Architecture
+
+# 3.1. Preliminary
+
+We first illustrate the original BERT [7] model. BERT [7] pre-trains a language model on large corpora in an unsupervised way. The pre-trained model is found to be flexible and beneficial to a variety of downstream tasks, e.g., question answering [2].
+
+In BERT [7], the input entities are processed by a multilayer bidirectional transformer [40]. The embeddings of each input are processed with stacked self-attention layers to aggregate contextual features. The attention weights are adaptively generated. The output features contain contextual information about the original input sequence. In self-attention, the generated features do not depend on the input sequence order, which makes the output representation permutation-invariant: the output representation is not affected when the input sequence is shuffled. A position embedding is therefore commonly applied to each input entity for the incorporation of sequential order clues.
+
+In the original BERT, Devlin et al. introduced two tasks for pre-training. In the task of masked language modeling (MLM), a portion of input words are randomly masked out. These masked-out words are replaced by a special token "[MASK]". The task is to predict the masked words based on the observations from the contextual contents. The contextual contents are unmasked elements that provide useful relevant cues for the prediction of the masked word.
+
+The other task, i.e., Next Sentence Prediction (NSP), models order information between two sentences. Two sentences are sampled from a document, and NSP aims to identify whether the second sentence directly follows the first sentence in the correct order. The two sentences are concatenated via a token "[SEP]", so that the model is aware that the inputs are separate sentences. The prediction is made upon the output features of the first token "[CLS]". This is a binary classification problem, and a simple sigmoid classifier is used. A prediction of "1" indicates the sentences are consecutive, and the second sentence is right after the first sentence.
+
+# 3.2. ActBERT
+
+# 3.2.1 Input Embeddings
+
+There are four types of input elements in ActBERT. They are actions, image regions, linguistic descriptions and special tokens. Special tokens are used to distinguish different inputs.
+
+Each input sequence starts with a special token "[CLS]" and ends with another token "[SEP]". We put the linguistic descriptions after "[CLS]", followed by the action inputs and then the local regional features. We denote the action features as $a_1, \ldots, a_L$ and the frame region features as $r_1, \ldots, r_M$ . The sequential text descriptions are denoted as $w_1, \ldots, w_N$ . The whole sequence is denoted as $\{[\mathrm{CLS}], w_1, \ldots, w_N, [\mathrm{SEP}], a_1, \ldots, a_L, [\mathrm{SEP}], r_1, \ldots, r_M, [\mathrm{SEP}]\}$ . "[SEP]" is also inserted between different sentences. We also insert "[SEP]" between regions that are from different clips, which can help the model to identify the clip boundaries. For each input step, the final embedding feature consists of four different embeddings: the position embedding, segment embedding, token embedding, and visual feature embedding. We add a few new tokens to distinguish action features and regional object features. The visual embedding is introduced to extract visual and action information. These embeddings are summed to form the final input feature of ActBERT. We explain them in detail as follows.
+
+Position embedding. Following [7], we incorporate a learnable position embedding to every input in the sequence. Since self-attention does not consider order information, position encoding offers a flexible way to embed a sequence when the sequence order matters. For the actions in different clips, the position embeddings are different as the video clips are ordered. For the regions extracted from the same frame, we use the same position embedding. To distinguish regions from the same frame, we consider spatial position embedding for different spatial positions. The details will be described in "Visual (action) embedding".
+
+Segment embedding. We consider multiple video clips for long-term video context modeling. Each video clip or video segment has a corresponding segment embedding. The elements, i.e., action inputs, regional object inputs, linguistic descriptions, have the same segment embedding in the same video clip.
+
+Token embedding. Each word is embedded with WordPiece embeddings [42] with a 30,000 vocabulary. In addition to the special tokens mentioned above ("[CLS]", "[MASK]", "[SEP]"), we introduce "[ACT]" and "[REGION]" to represent the action features and the region features extracted from video frames, respectively. Note that all action inputs have the identical token embedding, which reveals the modality of the inputs.
+
+Visual (action) embedding. We now explain the visual (action) embedding in detail. We first illustrate the procedure to obtain the action embedding. For each video clip, we extract verbs from its corresponding descriptions. For simplicity, we remove clips that do not have any verbs. We then build a vocabulary from all the extracted verbs. After verb vocabulary construction, each video clip has one or multiple category labels. We train a 3D convolutional neural network on this constructed dataset. The input to the 3D network is a tensor that contains an additional temporal dimension. We leverage a softmax classifier on top of the convolutional neural network. For clips with multiple labels, we normalize the multi-label target with the $\ell_1$ -norm, so that the scores for all labels sum to 1. After the model is trained, we extract the features after global average pooling as the action features. This feature can well represent the actions that occur in the video clip.
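+
+A minimal sketch of this label construction, assuming the verbs have already been extracted from the ASR text (the verb extraction step itself, e.g., with a POS tagger, is not shown):
+
+```python
+import numpy as np
+
+def build_action_targets(clip_verbs):
+    """Build a verb vocabulary from per-clip verb lists and turn each clip's verbs into an
+    l1-normalized multi-label target whose entries sum to 1."""
+    vocab = sorted({v for verbs in clip_verbs for v in verbs})
+    index = {v: i for i, v in enumerate(vocab)}
+    targets = np.zeros((len(clip_verbs), len(vocab)), dtype=np.float32)
+    for row, verbs in zip(targets, clip_verbs):
+        for v in verbs:
+            row[index[v]] = 1.0
+        if row.sum() > 0:
+            row /= row.sum()                     # l1-normalize the multi-label target
+    return vocab, targets
+
+# e.g. build_action_targets([["cut", "slice"], ["rotate"]]) -> 3-verb vocab, soft labels
+```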
+
+To obtain regional object features, we extract bounding boxes and the corresponding visual features from a pre-trained object detection network. Similar to Lu et al. [21], we utilize a pre-trained Faster R-CNN network [29] to extract the categorical distribution under the COCO vocabulary [20]. The image region features offer detailed visual information for visual and text relation modeling. For each region, the visual feature embeddings are the feature vectors before the output layer in the pre-trained network. Following [21], we incorporate spatial position embeddings to represent region locations with a 5-D vector. This vector consists of four box coordinates and the fraction of the region area. Specifically, we denote the vector as $\left(\frac{x_1}{W},\frac{y_1}{H},\frac{x_2}{W},\frac{y_2}{H},\frac{(x_2 - x_1)*(y_2 - y_1)}{W*H}\right)$ , where $W$ is the frame width, $H$ is the frame height, and $(x_{1},y_{1})$ and $(x_{2},y_{2})$ are the top-left and bottom-right coordinates, respectively.
+
+This vector is then embedded to match the dimension of the visual feature. The final regional object feature is the summation of the spatial position embedding and the object detection feature.
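+
+For concreteness, a small sketch of the 5-D spatial vector defined above (the learned projection to the visual feature dimension is only indicated in the comment):
+
+```python
+import numpy as np
+
+def region_position_vector(box, frame_w, frame_h):
+    """5-D spatial vector for a region box (x1, y1, x2, y2): normalized corner coordinates
+    plus the fraction of the frame area covered by the region."""
+    x1, y1, x2, y2 = box
+    return np.array([x1 / frame_w, y1 / frame_h,
+                     x2 / frame_w, y2 / frame_h,
+                     (x2 - x1) * (y2 - y1) / (frame_w * frame_h)], dtype=np.float32)
+
+# This 5-D vector is linearly embedded to the visual feature dimension (learned weights)
+# and summed with the RoI feature to form the final regional object feature.
+```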
+
+# 3.2.2 Tangled Transformer
+
+We design a TaNgled Transformer (TNT) to better encode three sources of information, i.e., action features, regional object features and linguistic features.
+
+Instead of using only one transformer that treats the visual and text features equally, our tangled transformer consists of three transformers. The three transformers take three sources of features, respectively. To enhance the interactions between visual and linguistic features, we propose to inject visual information to the linguistic transformer and incorporate linguistic information to the visual transformers. With cross-modal interactions, the tangled transformer can dynamically select judicious cues for target prediction.
+
+Figure 1: Our tangled transformer takes three sources of information as inputs, which enhances the interactions between linguistic features and visual features.
+
+We denote the intermediate representations at transformer block $l$ as $h^l = \{(h_{w_0}^l, \dots, h_{w_N}^l), (h_{a_0}^l, \dots, h_{a_L}^l), (h_{r_0}^l, \dots, h_{r_M}^l)\}$ . For simplicity, we denote $h_w^l = \{h_{w_0}^l, \dots, h_{w_N}^l\}$ , $h_a^l = \{h_{a_0}^l, \dots, h_{a_L}^l\}$ , and $h_r^l = \{h_{r_0}^l, \dots, h_{r_M}^l\}$ , which are processed by the $w$ -transformer, $a$ -transformer, and $r$ -transformer, respectively (Figure 1). Besides the standard multi-head attention encoding features from the same modality, we leverage two additional multi-head attention blocks to enhance mutual interactions between the transformer blocks. Specifically, we utilize $h_a^l$ to catalyze mutual interactions. We denote the multi-head attention as output $= \text{Multihead}(Q, K, V)$ , where $Q$ is the query, $K$ is the key, and $V$ is the value; the details of multi-head attention can be found in [40]. We use $h_a^l$ as a query to attend judicious cues from $h_w^l$ and $h_r^l$ :
+
+$$
+c_{w} = \text{Multihead}\left(W_{q}^{1} h_{a}^{l},\, W_{k}^{w} h_{w}^{l},\, W_{v}^{w} h_{w}^{l}\right), \tag{1}
+$$
+
+$$
+c_{r} = \text{Multihead}\left(W_{q}^{2} h_{a}^{l},\, W_{k}^{r} h_{r}^{l},\, W_{v}^{r} h_{r}^{l}\right), \tag{2}
+$$
+
+where $W_{*}^{*}$ are learnable weights. $c_{w}$ is the blended feature from the linguistic representations, while $c_{r}$ is the guided feature from the regional object representations. We then generate a new key-value pair from $c_{w}$ using a linear layer. This generated key-value pair is stacked with the key-value pairs of the original $a$ -transformer and $r$ -transformer. Similarly, we generate a new key-value pair from $c_{r}$ , which is stacked with the key-value pairs of the $w$ -transformer. With this tangled form of the transformer, visual and linguistic features are further associated; a schematic sketch is given below.
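+
+A schematic PyTorch sketch of one tangled block; layer sizes, feed-forward sub-layers, residual connections, and normalization are omitted or assumed, so this only illustrates how the action-guided summaries become extra key-value pairs for the other streams:
+
+```python
+import torch
+import torch.nn as nn
+
+class TangledBlock(nn.Module):
+    """Sketch of a TNT block: h_a queries h_w and h_r (Eqs. 1-2), and the resulting summaries
+    are appended as additional key-value pairs to the other streams."""
+
+    def __init__(self, dim=768, heads=12):   # illustrative sizes, not the paper's exact config
+        super().__init__()
+        self.attn_aw = nn.MultiheadAttention(dim, heads, batch_first=True)  # h_a attends h_w
+        self.attn_ar = nn.MultiheadAttention(dim, heads, batch_first=True)  # h_a attends h_r
+        self.to_kv_w = nn.Linear(dim, 2 * dim)   # new key-value pair from c_w
+        self.to_kv_r = nn.Linear(dim, 2 * dim)   # new key-value pair from c_r
+        self.self_w = nn.MultiheadAttention(dim, heads, batch_first=True)   # w-transformer
+        self.self_a = nn.MultiheadAttention(dim, heads, batch_first=True)   # a-transformer
+        self.self_r = nn.MultiheadAttention(dim, heads, batch_first=True)   # r-transformer
+
+    def forward(self, h_w, h_a, h_r):
+        c_w, _ = self.attn_aw(h_a, h_w, h_w)     # Eq. (1): action-guided linguistic summary
+        c_r, _ = self.attn_ar(h_a, h_r, h_r)     # Eq. (2): action-guided regional summary
+        k_w, v_w = self.to_kv_w(c_w).chunk(2, dim=-1)
+        k_r, v_r = self.to_kv_r(c_r).chunk(2, dim=-1)
+        # Stack the generated key-value pairs with the original ones of each stream.
+        h_w2, _ = self.self_w(h_w, torch.cat([h_w, k_r], 1), torch.cat([h_w, v_r], 1))
+        h_a2, _ = self.self_a(h_a, torch.cat([h_a, k_w], 1), torch.cat([h_a, v_w], 1))
+        h_r2, _ = self.self_r(h_r, torch.cat([h_r, k_w], 1), torch.cat([h_r, v_w], 1))
+        return h_w2, h_a2, h_r2
+```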
+
+Note that our tangled transformer is different from the co-attentional transformer block in [21] in several ways. First, the co-attentional transformer block simply passes the keys and values from one modality to the other modality's attention block, without further pre-processing. Second, [21] treats the two modalities equally, while our tangled block utilizes a global cue to guide the selection of local hints from linguistic and visual features. Third, the keys and values from different modalities replace the original key-values in [21], while our tangled transformer stacks the new key-value pairs with the original ones. In this way, both the linguistic and visual features are incorporated during transformer encoding.
+
+# 3.2.3 ActBERT Training
+
+Figure 2: Our ActBERT framework. We incorporate three sources of information during pre-training, i.e., global actions, local regional objects, and text descriptions. The yellow grid indicates that the action or the regional object is masked out.
+
+We introduce four tasks for ActBERT pre-training. Our framework is presented in Figure 2. We naturally extend the Masked Language Modeling task to our cross-modal setting. There are existing extensions for image-and-language pre-training [21, 33] and video-and-language pre-training [33]. Compared to [33], we explicitly model actions and regional information in a unified framework.
+
+Masked Language Modeling with Global and Local Visual Cues. We extend the Masked Language Modeling (MLM) task in BERT to our setting. We leverage visual cues from local regional objects and global actions to uncover the relationships between visual and linguistic entities. As described in Section 3.1, each word in the input sentence is randomly masked with a fixed probability. The task forces the model to learn from contextual descriptions and, at the same time, extract relevant visual features to facilitate prediction. When a verb word is masked out, the model should exploit the action features for a more accurate prediction. When a description of an object is masked out, local regional features can provide more contextual information. Thus, a strong model needs to align visual and linguistic inputs both locally and globally. The output feature is then fed to a softmax classifier over the whole linguistic vocabulary.
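+
+A minimal sketch of the word-masking step (the 80/10/10 replacement split follows the original BERT recipe and is an assumption here):
+
+```python
+import random
+
+def mask_tokens(tokens, vocab, mask_prob=0.15, mask_token="[MASK]"):
+    """Randomly select words as prediction targets; the model must recover them from the
+    remaining text plus the action and region features."""
+    masked, labels = [], []
+    for tok in tokens:
+        if random.random() < mask_prob:
+            labels.append(tok)                       # this position becomes a prediction target
+            r = random.random()
+            if r < 0.8:
+                masked.append(mask_token)            # replace with [MASK]
+            elif r < 0.9:
+                masked.append(random.choice(vocab))  # replace with a random word
+            else:
+                masked.append(tok)                   # keep the original word
+        else:
+            masked.append(tok)
+            labels.append(None)                      # not predicted
+    return masked, labels
+```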
+
+Masked Action Classification. Similarly, in Masked Action Classification, the action features are masked out. The task is to predict the masked action label based on linguistic features and object features. Explicit action prediction can be beneficial from two perspectives. First, sequential action cues can be exploited over the long term. For example, for a video with the action sequence "get into", "rotate", "add", this task can better exploit the temporal order of the steps in the instructional task. Second, the regional objects and linguistic texts are leveraged for better cross-modality modeling. Note that in Masked Action Classification, the goal is to predict the categorical label of the masked-out action feature. This task can enhance the action recognition capability of the pre-trained model, which can be further generalized to many downstream tasks, e.g., video question answering.
+
+Masked Object Classification. In Masked Object Classification, the regional object features are randomly masked out. We follow [21] to predict a distribution over a fixed vocabulary for the masked-out image region. The target distribution of the masked-out region is calculated as the softmax activation obtained by forwarding the region through the same pre-trained detection model used in the feature extraction stage. The KL divergence between the two distributions is minimized.
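+
+A minimal sketch of this objective, assuming logits are available both from the model (for the masked region) and from the pre-trained detector:
+
+```python
+import torch
+import torch.nn.functional as F
+
+def masked_object_loss(pred_logits, detector_logits):
+    """Match the model's predicted distribution over the detector vocabulary to
+    the detector's softmax for the masked-out region, via KL divergence."""
+    target = F.softmax(detector_logits, dim=-1)    # target distribution from the detector
+    log_pred = F.log_softmax(pred_logits, dim=-1)  # model prediction for the masked region
+    return F.kl_div(log_pred, target, reduction="batchmean")
+
+# e.g. a batch of 4 masked regions over a 1600-class detection vocabulary
+loss = masked_object_loss(torch.randn(4, 1600), torch.randn(4, 1600))
+```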
+
+Cross-modal matching. Similar to the Next Sentence Prediction (NSP) task, we apply a linear layer on top of the output of the first token "[CLS]". It is followed by a sigmoid classifier indicating the relevance score between the linguistic sentence and the visual features; a high score indicates that the text describes the video clip well. The model is optimized via a binary cross-entropy loss. To train this cross-modal matching task, we sample negative video-text pairs from the unlabeled dataset. We follow [26] for sampling positive and negative pairs.
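+
+A minimal sketch of the matching head; the hidden size of 768 matches the transformer blocks described later in Section 4.1, and everything else is our simplification:
+
+```python
+import torch
+import torch.nn as nn
+
+class CrossModalMatchingHead(nn.Module):
+    """Linear layer on the [CLS] output followed by a sigmoid relevance score."""
+    def __init__(self, dim=768):
+        super().__init__()
+        self.fc = nn.Linear(dim, 1)
+
+    def forward(self, cls_feature):            # (batch, dim), output of the first token
+        return torch.sigmoid(self.fc(cls_feature)).squeeze(-1)
+
+head = CrossModalMatchingHead()
+scores = head(torch.randn(8, 768))                        # relevance of text to video clip
+labels = torch.tensor([1., 0., 1., 1., 0., 0., 1., 0.])   # 1 = true pair, 0 = sampled negative
+loss = nn.functional.binary_cross_entropy(scores, labels)
+```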
+
+# 4. Experiments
+
+In this section, we evaluate ActBERT in multiple downstream video-and-language tasks. We quantitatively evaluate the generalization capability of ActBERT on five challenging tasks, i.e., text-video clip retrieval, video captioning, video question answering, action segmentation, and action step localization.
+
+# 4.1. ActBERT implementation details
+
+HowTo100M. We pre-train ActBERT on the HowTo100M dataset [26]. The HowTo100M dataset is constructed by querying the YouTube API, keeping the top 200 search results per query. This dataset covers a total of 23,611 tasks, e.g., maintenance and repair, animal rescue, and food preparation. The dataset is biased towards actions, with verbs like "go", "make", and "come" being the most frequent. The nouns are also distributed in a long-tailed way, with objects like "water" and "cup" ranked at the top. Each video has a corresponding narration extracted from the video subtitles. As the association between video clips and texts is not manually annotated, the video-text connection can sometimes be weak. There are cases of noisy correspondences, where the actors sometimes talk about unrelated things. Though noisy, we found that pre-training on HowTo100M still significantly improves the performance of downstream tasks.
+
+Pre-training details. To construct video-text inputs for ActBERT pre-training, we sample video clips from the HowTo100M dataset. Instead of only using one clip for video-text joint training, we leverage multiple adjacent clips to cover a longer context. This enables ActBERT to model relations in different segments. We sample 10 adjacent video clips, and the temporal-aligned linguistic tokens are extracted to form a video-text pair.
+
+To obtain the local regional features, we use a Faster R-CNN pre-trained on the Visual Genome [16] dataset, following [21]. The backbone is ResNet-101 [9]. We use a frame rate of 1 FPS to extract the regional features. Each region feature is RoI-pooled from the convolutional feature map of that region. We set the detection confidence threshold to 0.4, and each frame contains at most five boxes. The transformer and co-attentional transformer blocks in the visual stream have a hidden state size of 1024 and 8 attention heads.
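+
+The region selection rule above (confidence threshold 0.4, at most five boxes per frame) can be sketched as follows, with illustrative array shapes:
+
+```python
+import numpy as np
+
+def select_regions(boxes, scores, threshold=0.4, max_regions=5):
+    """Keep detections above the confidence threshold, at most max_regions per
+    frame, highest-scoring first."""
+    keep = np.where(scores >= threshold)[0]
+    keep = keep[np.argsort(-scores[keep])][:max_regions]
+    return boxes[keep], scores[keep]
+
+boxes = np.random.rand(12, 4)    # (x1, y1, x2, y2) boxes from the detector for one frame
+scores = np.random.rand(12)      # detection confidences
+sel_boxes, sel_scores = select_regions(boxes, scores)
+```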
+
+To obtain the action features, we first construct an action classification dataset. We sample frames at 8 FPS. For each clip, we extract the verb from its text descriptions. Then, we train a ResNet-3D [39] network with a softmax classification loss. We initialize the weights of the ResNet-3D model from a model pre-trained on Kinetics [12]. The Kinetics dataset covers 400 actions from YouTube videos. The 3D convolutional network converges faster when it is pre-trained on Kinetics. The input clip length to ResNet-3D is 32 frames, covering a 4-second video duration. The spatial shape of the input frame is $224 \times 224$. The initial learning rate is set to 0.001 and the batch size is 16. We decay the learning rate by 0.1 at iteration 100,000, and the total number of training iterations is 1,000,000. We keep other training settings unchanged following [39]. During feature extraction, we sample the central clip, and each frame is center-cropped. We use the feature after global average pooling as the clip representation.
+
+During ActBERT pre-training, $15\%$ of input features are randomly masked out. ActBERT has 12 layers of transformer blocks. Each transformer block has a hidden unit size of 768. We initialize the linguistic transformer with the BERT model pre-trained on the BookCorpus [56] and English Wikipedia. The other two transformers are randomly initialized. The network is optimized by Adam optimizer. We set the learning rate to be $10^{-5}$ . We trained the model for five epochs due to the large-scale data. We use four NVIDIA Tesla V100 GPUs for model training.
+
+# 4.2. Results on video-and-text tasks
+
+We evaluate ActBERT on five downstream tasks, i.e., action step localization, action segmentation, text-video clip retrieval, video captioning, and video question answering. We evaluate the five tasks on CrossTask [57], COIN [35], YouCook2 [51], and MSR-VTT [44]. Videos from the test sets of these datasets are removed during pre-training on HowTo100M.
+
+# 4.2.1 Datasets
+
+CrossTask: We evaluate action step localization on the CrossTask [57] dataset. CrossTask [57] contains 83 tasks and $4.7\mathrm{k}$ videos related to cooking, car maintenance, crafting, etc. We use the recall metric described in [57], which is defined by the number of step assignments that fall into the ground-truth interval, divided by the total number of steps in the video. COIN: We evaluate the action segmentation task on the recent COIN [35] dataset. COIN [35] contains 180 tasks and 11,827 videos. This dataset consists of 46,354 annotated segments. The videos are collected from YouTube. YouCook2: We evaluate text-video clip retrieval and video captioning on YouCook2. YouCook2 is a cooking video dataset collected from YouTube, covering a large variety of
+
+| Method | BLEU-3 | BLEU-4 | METEOR | ROUGE-L | CIDEr |
| Zhou et al. [52] | 7.53 | 3.84 | 11.55 | 27.44 | 0.38 |
| S3D [43] | 6.12 | 3.24 | 9.52 | 26.09 | 0.31 |
| VideoBert [33] | 6.80 | 4.04 | 11.01 | 27.50 | 0.49 |
| VideoBert + S3D [33] | 7.59 | 4.33 | 11.94 | 28.80 | 0.55 |
| ActBERT | 8.66 | 5.41 | 13.30 | 30.56 | 0.65 |
+
+cooking styles, methods, ingredients, and cookware [51]. In YouCook2, there are 89 types of recipes and in total 14k clips described with linguistic texts. Following [26], we evaluate the text-video clip retrieval task on the validation clips of YouCook2. MSR-VTT: We evaluate text-video clip retrieval and video question answering on MSR-VTT. The MSR-VTT dataset [44] is a general video dataset collected from YouTube with text descriptions. For the video question answering task, we evaluate the multiple-choice VideoQA following [47]. There are 2,990 questions in total for testing. Each test video is associated with a ground-truth caption, a correct answer, and four mismatched descriptions. For text-video clip retrieval, following [47], we use 1,000 text-video pairs for evaluation.
+
+# 4.2.2 Video captioning
+
+We compare our ActBERT to VideoBERT [33] on the video captioning task. We take the pre-trained action transformer as the video encoder. We follow the setup from [52] that takes the video clips from YouCook2 [51] as input, and a transformer decoder is used to decode videos to captions. We do not use the regional object transformer to fairly compare to [33]. Similar to [33], we cross-validate the hyperparameters on the training set. We report the standard evaluation metrics for captioning, i.e., BLEU, METEOR, and ROUGE, on the validation set. The model is optimized by Adam optimizer for 40k iterations. We set the initial learning rate to $1.0 \times 10^{-3}$ , and the batch size is 128. The results are shown in Table 1. We outperform VideoBERT [33] across all metrics, achieving a 1.36 improvement on METEOR. It demonstrates that our pre-trained transformer learns a better video representation. It also indicates the effectiveness of ActBERT in modeling video sequences by considering both global and local video cues. Our transformer generalizes better in video captioning.
+
+# 4.2.3 Action segmentation
+
+The action segmentation task in COIN is to assign an action label to a video at the frame level. To apply ActBERT to action segmentation, we fine-tune ActBERT by adding a linear classifier upon the output features for dense frame labeling.
+
+Table 1: Video captioning results on YouCook2. We outperform VideoBERT [33] across all the metrics.
+
+| Method | Frame Accuracy (%) |
| NN-Viterbi [30] | 21.17 |
| VGG [31] | 25.79 |
| TCFPN-ISBA [8] | 34.30 |
| ActBERT w/o region cues | 52.10 |
| ActBERT | 56.95 |
+
+Table 2: Action segmentation results on COIN.
+
+We do not feed the text descriptions during the fine-tuning process. The results are shown in Table 2. The baseline results are taken from [35]. Notably, ActBERT significantly outperforms the baselines by more than $20\%$. This shows that the pre-trained ActBERT can handle purely visual inputs when linguistic descriptions are absent. When we remove the regional information, we observe a performance drop compared to our full model, which shows that detailed local cues are important for the dense frame labeling task.
+
+# 4.2.4 Action step localization
+
+We evaluate action step localization on CrossTask. To compare fairly to [26], we do not fine-tune on the target dataset. We regard the step action label as the text description and directly feed the text-video pair to ActBERT. We regard the prediction for the first token "[CLS]" as the relevance score of this clip belonging to the label, and choose the action with the maximum relevance score as the final prediction. The results are shown in Table 3. ActBERT outperforms TVJE [26] by a large margin, i.e., the average improvement is $7\%$. We even achieve better results than the supervised baseline. We remove the region cues to have a fair comparison with [26], as [26] does not use object detection features for video and text matching. The results of "ActBERT w/o region cues" also substantially outperform [26], demonstrating the effectiveness of ActBERT pre-training. Our full ActBERT model further improves performance by $4\%$. This validates that regional information is an important source of detailed local object features for text-and-video matching.
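+
+A sketch of this evaluation protocol is given below; score_fn is a stand-in for the [CLS]-based relevance score produced by the pre-trained model, whose exact interface is an assumption here:
+
+```python
+def localize_step(clip, step_labels, score_fn):
+    """Score each candidate step label against the clip and return the label
+    with the maximum relevance score (no fine-tuning involved)."""
+    scores = [score_fn(label, clip) for label in step_labels]
+    best = max(range(len(step_labels)), key=lambda i: scores[i])
+    return step_labels[best], scores[best]
+
+# toy example with a dummy scoring function standing in for the model
+label, score = localize_step("clip", ["add oil", "jack up car"], lambda text, clip: len(text))
+```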
+
+# 4.2.5 Text-video clip retrieval
+
+We evaluate ActBERT on the task of video clip retrieval with natural language queries. Given a linguistic query, it aims to rank the video clips from a gallery video set. We use the following metrics for evaluation [26], i.e., Recall@1 (R@1), Recall@5 (R@5), Recall@10 (R@10) and the median rank (Median R). We evaluate ActBERT on YouCook2 and MSR-VTT. We followed [26] to conduct the YouCook2 evaluation. The results are shown in Table 4. ActBERT
+
| Method | Make Kimchi Rice | Pickle Cucumber | Make Banana Ice Cream | Grill Steak | Jack Up Car | Make Jello Shots | Change Tire | Make Lemonade | Add Oil to Car | Make Latte | Build Shelves | Make Taco Salad | Make French Toast | Make Irish Coffee | Make Strawberry Cake | Make Pancakes | Make Meringue | Make Fish Curry | Average |
| Alayrac et al. [1] | 15.6 | 10.6 | 7.5 | 14.2 | 9.3 | 11.8 | 17.3 | 13.1 | 6.4 | 12.9 | 27.2 | 9.2 | 15.7 | 8.6 | 16.3 | 13.0 | 23.2 | 7.4 | 13.3 |
| Zhukov et al. [57] | 13.3 | 18.0 | 23.4 | 23.1 | 16.9 | 16.5 | 30.7 | 21.6 | 4.6 | 19.5 | 35.3 | 10.0 | 32.3 | 13.8 | 29.5 | 37.6 | 43.0 | 13.3 | 22.4 |
| Supervised [57] | 19.1 | 25.3 | 38.0 | 37.5 | 25.7 | 28.2 | 54.3 | 25.8 | 18.3 | 31.2 | 47.7 | 12.0 | 39.5 | 23.4 | 30.9 | 41.1 | 53.4 | 17.3 | 31.6 |
| TVJE [26] | 33.5 | 27.1 | 36.6 | 37.9 | 24.1 | 35.6 | 32.7 | 35.1 | 30.7 | 28.5 | 43.2 | 19.8 | 34.7 | 33.6 | 40.4 | 41.6 | 41.9 | 27.4 | 33.6 |
| ActBERT w/o region cues | 37.4 | 29.5 | 39.0 | 42.2 | 29.8 | 37.5 | 35.5 | 37.8 | 33.2 | 32.8 | 48.4 | 25.2 | 37.4 | 35.6 | 42.4 | 47.0 | 46.1 | 30.4 | 37.1 |
| ActBERT | 41.8 | 33.6 | 42.7 | 46.8 | 33.4 | 43.0 | 40.8 | 41.8 | 38.3 | 37.4 | 52.5 | 30.1 | 41.2 | 40.4 | 46.1 | 51.0 | 49.7 | 35.1 | 41.4 |
+
+Table 3: Action step localization results on CrossTask [57].
+
+| Method | Dataset | R@1 | R@5 | R@10 | Median R |
| HGLMM [14] | YouCook2 | 4.6 | 14.3 | 21.6 | 75 |
| TVJE [26] | YouCook2 | 4.2 | 13.7 | 21.5 | 65 |
| TVJE +FT [26] | YouCook2 | 8.2 | 24.5 | 35.3 | 24 |
| ActBERT | YouCook2 | 9.6 | 26.7 | 38.0 | 19 |
| C+LSTM+SA [37] | MSR-VTT | 4.2 | 12.9 | 19.9 | 55 |
| VSE-LSTM [13] | MSR-VTT | 3.8 | 12.7 | 17.1 | 66 |
| SNUVL [48] | MSR-VTT | 3.5 | 15.9 | 23.8 | 44 |
| Kaufman et al. [11] | MSR-VTT | 4.7 | 16.6 | 24.1 | 41 |
| CT-SAN [49] | MSR-VTT | 4.4 | 16.6 | 22.3 | 35 |
| JSFusion [47] | MSR-VTT | 10.2 | 31.2 | 43.2 | 13 |
| TVJE [26] | MSR-VTT | 7.5 | 21.2 | 29.6 | 38 |
| ActBERT | MSR-VTT | 8.6 | 23.4 | 33.1 | 36 |
+
+significantly outperforms TVJE [26] and other baselines. TVJE trains a ranking loss on the HowTo100M dataset. This shows that ActBERT is a better pre-training framework for video-text joint representation learning. Notably, our pre-trained model achieves better retrieval performance than the fine-tuned TVJE model ("TVJE +FT") on YouCook2, which shows the superiority of ActBERT in self-supervised video-text representation learning. On MSR-VTT, ActBERT outperforms TVJE by $1.1\%$ on R@1 when no labeled data is accessed. Note that JSFusion [47] is a supervised method that leverages labeled video and text pairs for training.
+
+# 4.2.6 Video question answering
+
+We evaluate ActBERT on the multiple-choice VideoQA task. We fine-tune the pre-trained ActBERT on the MSR-VTT training set. The video-text pairs are fed to ActBERT, and a linear classifier is applied upon the output feature. We use a small learning rate of 0.0001 and the Adam optimizer for training. At inference time, we feed each candidate answer together with the video clip to ActBERT. The final choice is made by selecting the candidate with the maximum matching score.
+
+Table 4: Text-video clip retrieval results on YouCook2 and MSR-VTT. “FT” denotes fine-tuning on the training set.
+
+| Method | Accuracy |
| Text-only BLSTM [22] | 32.0 |
| Text-only Human [22] | 30.2 |
| GoogleNet-2D + C3D [22] | 35.7 |
| Merging-LSTM [23] | 34.2 |
| SNUVL [48] | 38.0 |
| CT-SAN [49] | 41.9 |
| LR/RL LSTMs [24] | 40.9 |
| JSFusion [47] | 45.5 |
| ActBERT | 48.6 |
+
+Table 5: Video question answering (multiple-choices) results on MSR-VTT.
+
+The results are shown in Table 5. We compare against many baselines on this task. Without elaborate joint modeling, ActBERT significantly outperforms JSFusion [47] by $3\%$, which shows ActBERT's strong generalization from a large-scale dataset.
+
+# 5. Conclusion
+
+In this paper, we introduce ActBERT for joint video-text modeling in a self-supervised way. We directly model both global and local visual cues for fine-grained visual and linguistic relation learning. ActBERT takes three sources of information as input, i.e., global actions, local regional objects, and linguistic descriptions. The novel tangled transformer further enhances the communications between the three sources. Quantitative results on five video-text benchmarks demonstrate the effectiveness of ActBERT. In the future, we will consider evaluating ActBERT on video action recognition and detection. We will also improve ActBERT by designing more powerful modules for video and text modeling.
+
+Acknowledgements. This work is supported by ARC DP200100938.
+
+# References
+
+[1] Jean-Baptiste Alayrac, Piotr Bojanowski, Nishant Agrawal, Josef Sivic, Ivan Laptev, and Simon Lacoste-Julien. Unsupervised learning from narrated instruction videos. In CVPR, 2016. 2, 8
+[2] Chris Alberti, Kenton Lee, and Michael Collins. A bert baseline for the natural questions. arXiv preprint arXiv:1901.08634, 2019. 1, 3
+[3] Relja Arandjelovic and Andrew Zisserman. Objects that sound. In ECCV, 2018. 2
+[4] Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In ECCV, 2018. 1
+[5] Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. Uniter: Learning universal image-text representations. arXiv preprint arXiv:1909.11740, 2019. 2
+[6] Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, et al. Scaling egocentric vision: The epic-kitchens dataset. In ECCV, 2018. 2
+[7] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. 1, 3
+[8] Li Ding and Chenliang Xu. Weakly-supervised action segmentation with iterative soft boundary assignment. In CVPR, pages 6508-6516, 2018. 7
+[9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016. 1, 6
+[10] Yunseok Jang, Yale Song, Youngjae Yu, Youngjin Kim, and Gunhee Kim. Tgif-qa: Toward spatio-temporal reasoning in visual question answering. In CVPR, 2017. 2
+[11] Dotan Kaufman, Gil Levi, Tal Hassner, and Lior Wolf. Temporal tessellation: A unified approach for video analysis. In ICCV, 2017. 8
+[12] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017. 6
+[13] Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539, 2014. 8
+[14] Benjamin Klein, Guy Lev, Gil Sadeh, and Lior Wolf. Associating neural word embeddings with deep image representations using fisher vectors. In CVPR, 2015. 8
+[15] Bruno Korbar, Du Tran, and Lorenzo Torresani. Cooperative learning of audio and video models from self-supervised synchronization. In NeurIPS, 2018. 2
+[16] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome:
+
+Connecting language and vision using crowdsourced dense image annotations. IJCV, 123(1):32-73, 2017. 6
+[17] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NeurIPS, 2012. 1
+[18] Jie Lei, Licheng Yu, Mohit Bansal, and Tamara L Berg. Tvqa: Localized, compositional video question answering. arXiv preprint arXiv:1809.01696, 2018. 2
+[19] Gen Li, Nan Duan, Yuejian Fang, Daxin Jiang, and Ming Zhou. Unicoder-vl: A universal encoder for vision and language by cross-modal pre-training. arXiv preprint arXiv:1908.06066, 2019. 2
+[20] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014. 4
+[21] Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In NeurIPS, 2019. 2, 4, 5, 6
+[22] Tegan Maharaj, Nicolas Ballas, Anna Rohrbach, Aaron Courville, and Christopher Pal. A dataset and exploration of models for understanding video data through fill-in-the-blank question-answering. In CVPR, 2017. 8
+[23] Amir Mazaheri, Dong Zhang, and Mubarak Shah. Video fill in the blank with merging lstms. arXiv preprint arXiv:1610.04062, 2016. 8
+[24] Amir Mazaheri, Dong Zhang, and Mubarak Shah. Video fill in the blank using lr/rl lstms with spatial-temporal attentions. In ICCV, 2017. 8
+[25] Antoine Miech, Ivan Laptev, and Josef Sivic. Learning a text-video embedding from incomplete and heterogeneous data. arXiv preprint arXiv:1804.02516, 2018. 2
+[26] Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. Howto100m: Learning a text-video embedding by watching hundred million narrated video clips. In ICCV, 2019. 1, 2, 6, 7, 8
+[27] Ishan Misra, C Lawrence Zitnick, and Martial Hebert. Shuffle and learn: unsupervised learning using temporal order verification. In ECCV, 2016. 1
+[28] Yingwei Pan, Tao Mei, Ting Yao, Houqiang Li, and Yong Rui. Jointly modeling embedding and translation to bridge video and language. In CVPR, 2016. 2
+[29] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In NeurIPS, 2015. 1, 4
+[30] Alexander Richard, Hilde Kuehne, Ahsan Iqbal, and Juergen Gall. Neuralnetwork-viterbi: A framework for weakly supervised video learning. In CVPR, 2018. 7
+[31] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 7
+[32] Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. Vl-bert: Pre-training of generic visual-linguistic representations. arXiv preprint arXiv:1908.08530, 2019. 2
+
+[33] Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. Videobert: A joint model for video and language representation learning. In ICCV, 2019. 1, 2, 5, 7
+[34] Hao Tan and Mohit Bansal. Lxmert: Learning cross-modality encoder representations from transformers. arXiv preprint arXiv:1908.07490, 2019. 2
+[35] Yansong Tang, Dajun Ding, Yongming Rao, Yu Zheng, Danyang Zhang, Lili Zhao, Jiwen Lu, and Jie Zhou. Coin: A large-scale dataset for comprehensive instructional video analysis. In CVPR, 2019. 6, 7
+[36] Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. Movieqa: Understanding stories in movies through question-answering. In CVPR, 2016. 2
+[37] Atousa Torabi, Niket Tandon, and Leonid Sigal. Learning language-visual embedding for movie understanding with natural-language. arXiv preprint arXiv:1609.08124, 2016. 8
+[38] Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3D convolutional networks. In ICCV, 2015. 1
+[39] Du Tran, Heng Wang, Lorenzo Torresani, Jamie Ray, Yann LeCun, and Manohar Paluri. A closer look at spatiotemporal convolutions for action recognition. In CVPR, 2018. 6
+[40] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017. 2, 3, 4
+[41] Xin Wang, Jiawei Wu, Da Zhang, Yu Su, and William Yang Wang. Learning to compose topic-aware mixture of experts for zero-shot video captioning. In AAAI, 2019. 2
+[42] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016. 3
+[43] Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu, and Kevin Murphy. Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. In ECCV, 2018. 7
+[44] Jun Xu, Tao Mei, Ting Yao, and Yong Rui. Msr-vtt: A large video description dataset for bridging video and language. In CVPR, 2016. 6, 7
+[45] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237, 2019. 1
+[46] Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, and Aaron Courville. Describing videos by exploiting temporal structure. In ICCV, pages 4507-4515, 2015. 2
+[47] Youngjae Yu, Jongseok Kim, and Gunhee Kim. A joint sequence fusion model for video question answering and retrieval. In ECCV, 2018. 2, 7, 8
+[48] Youngjae Yu, Hyungjin Ko, Jongwook Choi, and Gunhee Kim. Video captioning and retrieval models with semantic attention. arXiv preprint arXiv:1610.02947, 6(7), 2016. 8
+
+[49] Youngjae Yu, Hyungjin Ko, Jongwook Choi, and Gunhee Kim. End-to-end concept word detection for video captioning, retrieval, and question answering. In CVPR, 2017. 8
+[50] Luowei Zhou, Yannis Kalantidis, Xinlei Chen, Jason J Corso, and Marcus Rohrbach. Grounded video description. In CVPR, 2019. 2
+[51] Luowei Zhou, Chenliang Xu, and Jason J Corso. Towards automatic learning of procedures from web instructional videos. In AAAI, 2018. 2, 6, 7
+[52] Luowei Zhou, Yingbo Zhou, Jason J Corso, Richard Socher, and Caiming Xiong. End-to-end dense video captioning with masked transformer. In CVPR, 2018. 2, 7
+[53] Linchao Zhu, Zhongwen Xu, and Yi Yang. Bidirectional multirate reconstruction for temporal modeling in videos. In CVPR, 2017. 2
+[54] Linchao Zhu, Zhongwen Xu, Yi Yang, and Alexander G Hauptmann. Uncovering the temporal context for video question answering. IJCV, 124(3):409-421, 2017. 2
+[55] Linchao Zhu and Yi Yang. Compound memory networks for few-shot video classification. In ECCV, 2018. 2
+[56] Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In ICCV, 2015. 6
+[57] Dimitri Zhukov, Jean-Baptiste Alayrac, Ramadan Gokberk Cinbis, David Fouhey, Ivan Laptev, and Josef Sivic. Cross-task weakly supervised learning from instructional videos. In CVPR, 2019. 6, 8
\ No newline at end of file
diff --git a/actbertlearninggloballocalvideotextrepresentations/images.zip b/actbertlearninggloballocalvideotextrepresentations/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..39b9153b4a92084568cc5a1018d3ff704aeab846
--- /dev/null
+++ b/actbertlearninggloballocalvideotextrepresentations/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:53b0ef7742a09379d254e3a2cf8acbbb7b4ad6c058cb28102e18559706dfe500
+size 356459
diff --git a/actbertlearninggloballocalvideotextrepresentations/layout.json b/actbertlearninggloballocalvideotextrepresentations/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..b2098d6407ab9b2d62304725a0e852b3be27e831
--- /dev/null
+++ b/actbertlearninggloballocalvideotextrepresentations/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f893d7b40ac482959e4373d5db3534e945450d7977414c8d1a9c694b8d02902c
+size 307139
diff --git a/actionbyteslearningfromtrimmedvideostolocalizeactions/1ce4d7f8-88e8-47ce-bc2a-d47ca88948f2_content_list.json b/actionbyteslearningfromtrimmedvideostolocalizeactions/1ce4d7f8-88e8-47ce-bc2a-d47ca88948f2_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..08deb0ac9f62b2d8186d5056e2fc57b73af2ecc8
--- /dev/null
+++ b/actionbyteslearningfromtrimmedvideostolocalizeactions/1ce4d7f8-88e8-47ce-bc2a-d47ca88948f2_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b695384591aea973f7292ff2fe108bdea25369f2fbc0dde04b4c64f6aeabbb5d
+size 68925
diff --git a/actionbyteslearningfromtrimmedvideostolocalizeactions/1ce4d7f8-88e8-47ce-bc2a-d47ca88948f2_model.json b/actionbyteslearningfromtrimmedvideostolocalizeactions/1ce4d7f8-88e8-47ce-bc2a-d47ca88948f2_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..b04323e396387291a8d5cbabcdf0d0b14c327ace
--- /dev/null
+++ b/actionbyteslearningfromtrimmedvideostolocalizeactions/1ce4d7f8-88e8-47ce-bc2a-d47ca88948f2_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:412b202814ebe033ce854efc14adc923ce8595a2446fc953ec93e7fa99db56b0
+size 84586
diff --git a/actionbyteslearningfromtrimmedvideostolocalizeactions/1ce4d7f8-88e8-47ce-bc2a-d47ca88948f2_origin.pdf b/actionbyteslearningfromtrimmedvideostolocalizeactions/1ce4d7f8-88e8-47ce-bc2a-d47ca88948f2_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2d76bdef80e5baa2ea34810acb31cad01f575028
--- /dev/null
+++ b/actionbyteslearningfromtrimmedvideostolocalizeactions/1ce4d7f8-88e8-47ce-bc2a-d47ca88948f2_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:41dcf5cae15e1c3b5ab512dc14ac0fe096f5c750ce22716b991f0c68b37b7045
+size 1558878
diff --git a/actionbyteslearningfromtrimmedvideostolocalizeactions/full.md b/actionbyteslearningfromtrimmedvideostolocalizeactions/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..699ae58b40080abc203c6a5c71206c77fe33796c
--- /dev/null
+++ b/actionbyteslearningfromtrimmedvideostolocalizeactions/full.md
@@ -0,0 +1,260 @@
+# ActionBytes: Learning from Trimmed Videos to Localize Actions
+
+Mihir Jain $^{1*}$ , Amir Ghodrati $^{1*}$ , Cees G. M. Snoek $^{2}$
+
+$^{1}$ Qualcomm AI Research†, Qualcomm Technologies Netherlands B.V.
+
+$^{2}$ QUVA Lab, University of Amsterdam
+
+mijain@qti.qualcomm.com, ghodrati@qti.qualcomm.com, cgmsnoek@uva.nl
+
+# Abstract
+
+This paper tackles the problem of localizing actions in long untrimmed videos. Different from existing works, which all use annotated untrimmed videos during training, we learn only from short trimmed videos. This enables learning from large-scale datasets originally designed for action classification. We propose a method to train an action localization network that segments a video into interpretable fragments, which we call ActionBytes. Our method jointly learns to cluster ActionBytes and trains the localization network using the cluster assignments as pseudo-labels. By doing so, we train on short trimmed videos that become untrimmed with respect to the ActionBytes. In isolation, or when merged, the ActionBytes also serve as effective action proposals. Experiments demonstrate that our boundary-guided training generalizes to unknown action classes and localizes actions in long videos of Thumos14, MultiThumos, and ActivityNet1.2. Furthermore, we show the advantage of ActionBytes for zero-shot localization as well as for traditional weakly supervised localization trained on long videos, achieving state-of-the-art results.
+
+# 1. Introduction
+
+The goal of this paper is to determine the start, the end and the class of each action instance in a long untrimmed video. State-of-the-art approaches for action localization slide a trained model over an untrimmed video to produce classification score sequences over time [5, 18, 40]. They depend on start, end, and action class labels at training time. Weakly-supervised approaches [28, 32, 34] have demonstrated this even works when the long untrimmed training videos come with action class labels only. Different from all these works, we will localize action instances in long untrimmed videos by learning from short trimmed videos labeled with just their action class.
+
+
+Figure 1: From short, trimmed videos we learn to localize actions in long, untrimmed video. During training, our method jointly learns to generate pseudo-labels from ActionBytes, and to localize them in the short video. During testing, our localization model detects the instances of the query action class in the untrimmed video.
+
+Short trimmed videos are highly popular and easy to access for action classification. Datasets in this domain come with a large number of samples and labels [3, 4, 8, 24]. Kinetics-700 [3], for example, has nearly 650k short trimmed video clips categorized into as many as 700 action classes. In this work, we leverage datasets commonly used for action classification, the what, for tackling the task of action localization, the when. This opens up opportunities for 1) learning from larger datasets with more action classes, and 2) localizing unknown classes by transferring knowledge between trimmed and untrimmed video datasets.
+
+However, having only short trimmed videos during training provides virtually no scope to learn about action boundaries. To overcome this limitation, we adopt a self-supervised approach that regularizes our network to learn boundary-aware models. Specifically, we use intermediate layers of a CNN model to decompose a trimmed video into multiple atomic actions called ActionBytes. From these we generate pseudo-labels to train a CNN to localize ActionBytes within videos. This model can be used to extract a new set of ActionBytes, so we iterate between updating the ActionBytes and training the localization model using the new pseudo-labels. Given a long test video, we slide our trained model over it to generate a classification score sequence for the query action, and thus localize its instances, see Figure 1.
+
+We make three contributions in this paper. First, we define What2When, the task of localizing actions in long untrimmed videos using short trimmed videos commonly used for action classification. Second, we introduce ActionBytes: interpretable, temporally scale-invariant fragments of videos capable of spotting parts of an action. Third, we propose an iterative approach for training boundary-aware models from short videos. We experimentally show the effectiveness of our method on Thumos14 [10], MultiThumos [38] and ActivityNet1.2 [1]. Since our approach transfers action class knowledge from trimmed videos to untrimmed videos with unseen classes, it is a natural fit for zero-shot applications. We evaluate our model in a zero-shot scenario where the label sets of the short trimmed training videos and the long untrimmed test videos are disjoint. Finally, we conduct experiments on the task of weakly supervised action localization. Although our method is not designed for learning from long videos, we show the benefit of ActionBytes as action proposals, obtaining favorable performance compared to the state-of-the-art.
+
+# 2. Related work
+
+The problem of learning from short videos to localize action in long videos relates to multiple recognition tasks in videos.
+
+Mid-level representation. Several works have proposed methods to automatically discover mid-level representations by segmenting an action into atomic actions [9, 15, 33]. Lan et al. [15] discover mid-level action elements by clustering spatio-temporal segments, which is done on a per-class basis. In [6, 9] the authors automatically obtain meaningful action fragments, but they require temporal action annotations to do so. Alternatively, [39] also uses parts of actions but exploits their temporal order. Unlike all the above methods, our ActionBytes are class-agnostic, which makes them suitable for knowledge transfer to videos of unseen classes.
+
+Pseudo-labeling. Recently, self-supervised approaches have been proposed that pseudo-label data for representation learning [2], label propagation in semi-supervised learning [11], and semantic segmentation [16]. This line of work relies on clustering to create pseudo-labels from unlabelled data. We also generate pseudo-labels per video during training, but for a different purpose: we use them to regularize our localization model to be sensitive to boundaries.
+
+Self-training. Our approach can also be considered as a self-training procedure applied to the video domain, and adapted for localization in the What2When task. It differs from other self-training approaches [17, 29, 42] in many ways, but mainly because the pseudo-labels are generated at the sub-video level and are regularized for localization.
+
+Weakly supervised. In recent times, there has been increased interest in developing models that can be trained with weaker forms of supervision, such as video-level labels. UntrimmedNets [34] and STPN [26] formulated weakly supervised action localization as a multiple instance learning problem, with attention to locate the actions in videos. AutoLoc [32] introduced a boundary predictor built on an Outer-Inner-Contrastive loss. W-TALC [28] introduced a co-activity similarity loss that looks for similar temporal regions in a pair of videos containing a common action class. Nguyen et al. [27] propose to model both foreground and background, while [39] exploits temporal relations among video segments. All these methods depend on the presence of multiple actions in long videos to learn to discriminate foreground action from background. Differently, we propose a method to learn action boundaries from short videos through our ActionByte mining.
+
+Zero-shot learning. Many approaches for zero-shot and few-shot learning focus on intra-dataset splits between seen and unseen classes [7, 14, 19, 37]. Others attempt cross-dataset action recognition [12, 20, 41], and some of those learn from the image domain only to recognize actions in videos [12, 20]. To avoid the use of common classes across datasets, Roitberg et al. [30] present an evaluation that filters out very similar classes between source and target. The common practice in zero-shot learning is to transfer action class knowledge through a semantic embedding space, such as attributes, word vectors or visual features. Among these, word vectors have been preferred, as only category names are required for constructing the semantic embedding space. In this paper, we also employ word embeddings to map source classes to target classes while precisely following the zero-shot regime.
+
+# 3. Method
+
+In this section, we explain our proposed method that learns from short trimmed videos to temporally localize actions in long untrimmed videos. We first formally define the problem of What2When action localization. Then, we explain our method, illustrated in Figure 2, and its components. We start by introducing ActionBytes, the basic building block of our method, and explain how to extract them from videos. Next, we explain our two-step iterative pipeline that leverages ActionBytes to train localization models on short videos in a self-training fashion. Finally, we discuss the potential of ActionBytes by themselves as action proposals in the video localization context.
+
+Problem statement. Given a long test video, we aim to predict the set of action categories present in that video, together with their start and end times. During training, a set of $n$ short, single-action videos $\chi^{short} = \{x_i\}_{i=1}^n$ is given, where each video $x$ has a single label $c$ belonging to the label set $C_{short} = \{c_i\}_{i=1}^{n_c}$. During testing, a set of long untrimmed videos $\chi^{long} = \{x_i'\}_{i=1}^{n'}$ is given, where for each video $x'$ the goal is to find the boundaries of all action instances and predict their category labels, $c'$, from the label set $C_{long} = \{c_i'\}_{i=1}^{n_c'}$.
+
+
+Figure 2: The proposed mining pipeline segments a video into ActionBytes. These are then clustered and assigned pseudolabels, which are used as a supervision signal to train the localization model. Action classes labels are from $C_{short}$ .
+
+In this paper, unless explicitly stated otherwise, we train on $\chi^{short}$ and evaluate on $\chi^{long}$.
+
+# 3.1. ActionBytes
+
+It is well-known that high-level features of consecutive frames, extracted from a CNN, usually vary smoothly over time [13, 35]. Therefore, any abrupt change in feature space can represent a high-level change in the pixel space. We leverage this property to segment videos into interpretable fragments, we call ActionBytes.
+
+Suppose $F = \{\pmb{f}_t\}_{t=1}^T$ are $d$ -dimensional features extracted using a deep model for each time instant $t$ , where $T$ is the temporal sequence length. We learn to map these features to a latent space using a latent projection module. The output of the latent projection module, $L \in \mathbf{R}^{l \times T}$ , keeps the affinity to $l$ latent concepts for each time instant (Figure 3). For a given video, we find ActionByte boundaries $B$ by looking for time instants where affinities to latent concepts change abruptly compared to the previous time instant:
+
+$$
+B = \left\{ t \;\middle|\; \sum_{i=1}^{l} \left| L[i, t] - L[i, t-1] \right| > \tau \right\} \tag{1}
+$$
+
+where $\tau$ is set to the $p^{th}$ percentile of these change values, so the number of ActionBytes in a video is directly proportional to its length $T$. In general, the $p^{th}$ percentile leads to $T \times \frac{100 - p}{100}$ ActionBytes. The length of each of them varies with the video content, with an average length equal to $\frac{100}{100 - p}$.
+
+Each boundary in the set $B$ starts an ActionByte, $A_{i} = (B_{i}, B_{i+1} - 1)$, resulting in $|B| - 1$ ActionBytes. Such boundaries are obtained in a class-agnostic way, but they segment a video into interpretable fragments. These ActionBytes are temporally scale-invariant as their lengths are adapted to the video content. For example, a single ActionByte can capture an atomic action regardless of the action speed. Some ActionByte examples are shown in Figure 4.
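+
+A minimal NumPy sketch of Eq. (1) and of how consecutive boundaries delimit ActionBytes; the shapes and values are illustrative:
+
+```python
+import numpy as np
+
+def actionbytes(L, p=50):
+    """L: (l, T) latent affinities. A boundary is placed at every time instant
+    where the L1 change w.r.t. the previous instant exceeds the p-th percentile
+    of all such changes; consecutive boundaries delimit one ActionByte."""
+    change = np.abs(np.diff(L, axis=1)).sum(axis=0)          # change at t vs. t-1, length T-1
+    tau = np.percentile(change, p)
+    boundaries = [t for t in range(1, L.shape[1]) if change[t - 1] > tau]
+    return [(b0, b1 - 1) for b0, b1 in zip(boundaries[:-1], boundaries[1:])]
+
+L = np.random.rand(64, 40)   # 64 latent concepts over 40 snippets
+print(actionbytes(L, p=50))  # roughly T * (100 - p) / 100 fragments
+```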
+
+
+Figure 3: Localization model and ActionByte extraction. The localization model is trained with classification and localization losses on pseudo-labels. The latent output $L$ is used to extract ActionBytes. Classes labels are from $C_{short}$ .
+
+# 3.2. Mining ActionBytes
+
+Next, we discuss how we learn a model from short videos. One can train a classification model on short videos and slide it on long test videos. However, such a model is agnostic to boundaries within the short videos, and might not be able to generate good class activation scores for localization. Here, we leverage ActionBytes, to train a discriminative, boundary-aware model from short videos. This is done by decomposing a video into multiple ActionBytes, from which we generate pseudo-labels to train our model.
+
+The proposed pipeline for mining ActionBytes is shown in Figure 2. It has two steps that iterate between generating pseudo-labels from ActionBytes and training the localization model with the pseudo-labels. For the creation of the pseudo-labels we take inspiration from Caron et al. [2]. We first extract $N$ ActionBytes from a set of training videos and represent each of them by averaging the latent features within its boundaries. Next, we group all the ActionBytes into $K$ clusters using the $k$-means algorithm by solving
+
+$$
+\min _ {C \in \mathbb {R} ^ {l \times K}} \frac {1}{N} \sum_ {n = 1} ^ {N} \min _ {y _ {n} \in \{0, 1 \} ^ {K}} \| a _ {n} - C y _ {n} \| _ {2} ^ {2}
+$$
+
+
+Figure 4: Extracted ActionBytes, highlighted in different colors, for two examples of Baseball Pitch. The ActionBytes capture the action in four parts that are interpretable as (1) 'get into wind-up position' (red), (2) 'loading to deliver' (blue), (3) 'delivery' (pink) and (4) 'follow-through' (green). ActionBytes are scale-invariant and can adapt to varying temporal scale, e.g., the 'follow-through' extends to different number of snippets in the two examples.
+
+where $a_{n}$ is the feature vector obtained from ActionByte $n$. Solving this problem provides a centroid matrix $C$ that is used to assign a cluster id to each ActionByte in a video. Finally, the pseudo-label vector for a video is defined as the set of all cluster ids assigned to the ActionBytes of that video.
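+
+A sketch of this pseudo-labeling step using scikit-learn's k-means; the cluster count of 500 follows the ablation in Section 4.1, while variable names and shapes are ours:
+
+```python
+import numpy as np
+from sklearn.cluster import KMeans
+
+def video_pseudo_labels(actionbyte_feats, video_ids, n_clusters=500):
+    """Cluster all ActionByte features (averaged latent features within each
+    ActionByte); a video's pseudo-label vector is the set of cluster ids of its
+    ActionBytes."""
+    kmeans = KMeans(n_clusters=n_clusters, n_init=10).fit(actionbyte_feats)
+    pseudo = {}
+    for vid, cid in zip(video_ids, kmeans.labels_):
+        pseudo.setdefault(vid, set()).add(int(cid))
+    return pseudo
+
+feats = np.random.rand(2000, 64)             # N ActionBytes, l-dimensional latent features
+vids = np.random.randint(0, 100, size=2000)  # which training video each ActionByte comes from
+pseudo = video_pseudo_labels(feats, vids)
+```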
+
+Having obtained the multiple pseudo-labels for each training video, we update the parameters of the localization network in the second step, classifying and localizing ActionBytes in the video (shown in Figure 3). Such training leads to a better representation of the latent concepts, $L$, of the model, which in turn results in a better set of ActionBytes. Therefore, we iterate over these two steps of extracting ActionBytes and training the localization model. This approach can be seen as a regularization technique: by training the model with pseudo-labels, we avoid the risk of overfitting the model to the class labels.
+
+Localization model. Our full localization model, used in the second step of our pipeline, is shown in Figure 3. The role of this model is to learn to classify and localize ActionBytes according to the assigned pseudo-labels. This is reminiscent of a model for weakly-supervised temporal localization, where each video has multiple instances of actions and temporal annotations are not available. With this motivation, we now describe our localization model.
+
+We first extract features $F = \{\pmb{f}_t\}_{t=1}^T$ from a pretrained deep network, where each $\pmb{f}_t$ is $d$-dimensional and $T$ is the temporal sequence length. We pass the extracted features to a latent projection module to map them to a set of latent concepts, from which we extract ActionBytes. For the latent projection module, we simply use a fully connected layer followed by a ReLU [25]:
+
+$$
+L = \mathrm{ReLU}\left(W_{proj} F\right)
+$$
+
+where $W_{proj} \in \mathbf{R}^{l \times d}$ is the latent projection matrix and $l$ is the number of latent concepts. The output of the latent projection layer, $L$, is passed through a linear classifier to obtain activation scores over time for the pseudo-classes. On these activation sequences, following [28], we apply a $k$-max multiple-instance learning (MIL) loss for classification and a co-activity similarity loss for localization.
+
+For the $k$-max MIL loss, the prediction score corresponding to a class is computed as the average of its $k$ highest activations over the temporal dimension. The co-activity similarity loss is computed over the class activation sequences and $L$. For a given video and a class, a vector of similarities between the class activation sequence and each row of $L$ (one row per latent concept) is computed. A pair of videos with a common class label will have higher similarities with the same latent concepts. This is what is enforced by this loss, which makes it a suitable localization loss in our method.
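+
+A minimal sketch of the $k$-max MIL pooling for one video; the multi-hot pseudo-label target with binary cross-entropy is our simplification of the classification loss, and the choice of $k$ as 1/8 of the video length follows the implementation details in Section 4:
+
+```python
+import torch
+
+def k_max_mil_scores(cas, k):
+    """cas: (T, num_classes) class activation sequence. The video-level score of
+    each class is the mean of its k highest temporal activations."""
+    topk, _ = cas.topk(k, dim=0)
+    return topk.mean(dim=0)
+
+cas = torch.randn(80, 500)                         # T = 80 snippets, 500 pseudo-classes
+k = max(1, cas.shape[0] // 8)                      # k set to 1/8 of the video length
+video_scores = k_max_mil_scores(cas, k)
+targets = torch.zeros(500)
+targets[[3, 17, 42]] = 1.0                         # pseudo-labels assigned to this video
+loss = torch.nn.functional.binary_cross_entropy_with_logits(video_scores, targets)
+```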
+
+Using this model in our mining, we get predictions for the pseudo-labels. In order to translate this into predictions for the training classes, $C_{short}$ , we add a transfer layer on top of the linear classifier. This is an FC layer learned again with a $k$ -max MIL loss, but using class labels (see Figure 3). For localization at test time, we follow the two-stage thresholding scheme of [28] on the output of the transfer layer.
+
+Knowledge transfer. In cross-dataset evaluation, the label set of seen short videos, $C_{short}$ can be different from the label set of unseen long videos, $C_{long}$ . For knowledge transfer in such cases, we follow Objects2Action [12]. We employ the skip-gram model of word2vec [21, 22] as a semantic embedding function to embed each word of a given class label as a vector. For multi-word class labels, we take the average vector of the embedded words [12, 23] to represent the label. The affinities between class labels from $C_{short}$ and $C_{long}$ are computed by cosine similarity between their embeddings. Thus, the class activation score for $C_{short}$ is transferred to that for $C_{long}$ .
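+
+A sketch of the score transfer via cosine similarity between word-embedding class prototypes; the dimensions and the random data are for illustration only:
+
+```python
+import numpy as np
+
+def transfer_scores(scores_short, emb_short, emb_long):
+    """Re-weight class activation scores from the seen label set C_short to the
+    unseen label set C_long through cosine similarity of their class embeddings."""
+    def l2norm(x):
+        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
+    affinity = l2norm(emb_short) @ l2norm(emb_long).T   # (|C_short|, |C_long|)
+    return scores_short @ affinity                      # (T, |C_long|)
+
+emb_short = np.random.rand(400, 300)  # e.g. 400 source classes, 300-d word2vec vectors
+emb_long = np.random.rand(20, 300)    # e.g. 20 target classes (multi-word labels: mean of word vectors)
+scores_long = transfer_scores(np.random.rand(80, 400), emb_short, emb_long)
+```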
+
+The two sets of class-labels, though different, may have some overlap. To evaluate in a pure zero-shot localization set-up, we also conduct an experiment where training is done on a subset of $C_{short}$ , such that this subset does not overlap with test label set $C_{long}$ .
+
+# 3.3. Action proposals from ActionBytes
+
+Segmenting a video into ActionBytes is critical to learn a reliable localization model from short videos. In addition to this, an ActionByte by itself is also suited for action localization as an informative action unit. We show how they can be used to form action proposals in long videos during testing. Consequently, we also demonstrate that the utility of ActionBytes is not limited to the What2When set-up but also extends to the weakly-supervised set-up.
+
+Since an ActionByte represents an interpretable part of an action, one or more ActionBytes together form a good action proposal. For a given test video, we generate action proposals, $P_{AB}$ , by merging $m \in M$ ActionBytes, where set $M$ contains the numbers of ActionBytes to be merged.
+
+$$
+P _ {A B} = \bigcup_ {m \in M} \bigcup_ {i = 1} ^ {| B | - m} \left(B _ {i}, B _ {i + m} - 1\right) \tag {2}
+$$
+
+where $B_{i}$ is the start of ActionByte $i$ . $(B_{i}, B_{i+m} - 1)$ is an action proposal from $B_{i}$ to $B_{i+m} - 1$ . Each of these proposals is temporally jittered to include up to one neighboring time-step. This is to make sure the immediate neighborhood of boundaries is included in the action proposals.
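+
+A sketch of Eq. (2) including the temporal jitter; extending the start and/or the end by one time-step is our interpretation of the jittering described above:
+
+```python
+def actionbyte_proposals(B, M=(1, 2), T=None, jitter=1):
+    """B: sorted ActionByte start boundaries. Merge m consecutive ActionBytes
+    into a proposal (B_i, B_{i+m} - 1) for every m in M, then jitter each
+    proposal by up to one neighboring time-step."""
+    proposals = set()
+    for m in M:
+        for i in range(len(B) - m):
+            start, end = B[i], B[i + m] - 1
+            for ds in (-jitter, 0):
+                for de in (0, jitter):
+                    s = max(0, start + ds)
+                    e = end + de if T is None else min(end + de, T - 1)
+                    proposals.add((s, e))
+    return sorted(proposals)
+
+print(actionbyte_proposals([0, 3, 7, 12, 20], M=(1, 2), T=25))
+```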
+
+ActionBytes for weakly-supervised localization. Weakly-supervised action localization is a popular task where training and testing are done on long videos, i.e., $C_{short} = C_{long}$. The ActionByte mining explained in Section 3.2 is critical to learn from short videos. But when learning on long videos in a weakly-supervised set-up, generating pseudo-labels is not needed, as the long videos are already untrimmed w.r.t. the actual action labels. Therefore, the localization model alone, without the transfer layer, is enough to learn good quality classification score sequences and ActionBytes.
+
+# 4. Experiments
+
+In this section, we first describe the datasets we train and evaluate our proposed method on, followed by the implementation details. Then we present an ablation study of our method, and next we compare our model with baselines in the What2When setup. We also conduct an experiment in a zero-shot setup and compare our model with state-of-the-art models in the weakly-supervised regime.
+
+Datasets. We use the validation set of Kinetics-400 [4] for training our model. It contains 17,281 single trimmed action videos belonging to 400 action classes with a maximum length of 10 seconds. For evaluation, we report on the untrimmed Thumos14 [10], MultiThumos [38] and ActivityNet1.2 [1]. Thumos14 contains 200 validation videos and 212 test videos with temporal annotations belonging to 20 action classes, with about 15.5 action instances per video on average. The length of the videos in this dataset is on average 212 seconds. MultiThumos has the same set of videos as in Thumos14, but it extends the latter from 20 action classes with 0.3 labels per frame to 65 classes with 1.5 labels per frame. Also, the average number of distinct action
+
+classes in a video is 10.5 (compared to 1.1 in Thumos14), making it a more challenging multi-label dataset. ActivityNet1.2 has 4,819 videos for training and 2,383 videos for validation, which in the literature is used for evaluation. It has 100 classes, with on an average 1.5 action instances per video. The average length of the videos in this dataset is 115 seconds.
+
+Implementation details. As a base network we use I3D [4] pretrained on Kinetics-400. We extract RGB and flow features from the last average-pooled layer (1024 dimensions for each stream). We use TVL1 to compute optical flow. Features are extracted from non-overlapping 16-frame chunks of video. We do not finetune the feature extractors. The network is implemented in PyTorch and trained with the Adam optimizer with a learning rate of 0.001. We initialize the localization model by training on the validation set of the Kinetics-400 dataset. For the $k$-max MIL loss, we set $k$ to $1/8$ of the length of the video. In all the experiments, we iterate over our pipeline for 3 iterations. The value of the $p^{th}$ percentile (which sets $\tau$ in Eq. 1) determines how many ActionBytes are extracted from a given video. For Thumos14 and MultiThumos we set $p = 50$, and for ActivityNet1.2 we use $p \in \{92, 95, 97.5, 99, 99.5\}$. In all the experiments we set $M = \{1, 2\}$ in Eq. 2. We report the commonly used mean Average Precision (mAP) metric at snippet-level granularity for evaluating detections. For the weakly-supervised setup, experiment settings are kept similar to [28].
+
+Localization at test time. For localization at test time, we use our trained model to generate class-activation sequences over the untrimmed test video. We follow the two-stage thresholding scheme of [28] for localizing actions. The first threshold filters out classes whose confidence score is less than the mean confidence score. The second threshold is applied along the temporal axis to obtain the detections. When ActionByte proposals are added, non-maximum suppression is also applied.
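+
+A sketch of the two-stage scheme; the second-stage activation threshold of 0 is an assumed placeholder rather than the value used in [28]:
+
+```python
+import numpy as np
+
+def localize(cas, class_scores, act_threshold=0.0):
+    """cas: (T, C) class activation sequence; class_scores: (C,) video-level
+    confidences. Stage 1 keeps classes above the mean confidence; stage 2
+    thresholds their activations along time into contiguous detections."""
+    detections = []
+    for c in np.where(class_scores > class_scores.mean())[0]:   # stage 1
+        above = cas[:, c] > act_threshold                        # stage 2
+        t = 0
+        while t < len(above):
+            if above[t]:
+                start = t
+                while t < len(above) and above[t]:
+                    t += 1
+                detections.append((int(c), start, t - 1, float(cas[start:t, c].mean())))
+            else:
+                t += 1
+    return detections   # list of (class, start, end, score) segments
+
+dets = localize(np.random.randn(80, 20), np.random.randn(20))
+```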
+
+# 4.1. Ablation study
+
+In the ablation, we test on untrimmed Thumos14, and train on the validation set of trimmed Kinetics-400 dataset.
+
+Fixed length versus scale-invariant ActionBytes. First, we evaluate the effect of ActionBytes. We run two setups: the first uses fixed-size segments, uniformly sampled along the video, and the second uses our automatically-extracted ActionByte boundaries. For the first setup we uniformly segment the video into chunks of two snippets, in order to make it comparable with the average length of ActionBytes. The final localization performance at $IoU = 0.5$ is $14.1\%$ for fixed-size segments and $15.5\%$ for ActionBytes. Automatically extracted ActionByte boundaries are preferred over uniformly sampled boundaries.
+
+Influence of number of clusters. Next, we evaluate the
+
+
+Figure 5: Influence of the number of clusters on localization performance. The performance increases up to 500 and decreases afterward, as over-granular clusters might not be able to represent a single ActionByte.
+
+influence of the number of clusters for generating pseudolabels on the final localization performance. Figure 5 shows that the performance increases by increasing the number of clusters up to 500 and then decreases. This makes sense as with a large number of clusters, an ActionByte might not be represented by a single cluster centroid. Therefore, during all the experiments, we fix the number of clusters to 500.
+
+Number of mining iterations. In Figure 6 (Left), we show how performance changes over training iterations. It increases up to a point, and then decreases slightly. This is mainly because, after a few epochs, our iterative mining reaches an equilibrium point where the clustering loss stops decreasing (see Figure 6 (Right)) and the model converges to an optimum.
+
+
+Figure 6: Iterative mining. (Left) Action localization mAP over mining iterations. Performance increases as long as the clustering loss (Right) decreases, then both get saturated.
+
+
+
+ActionByte as proposals. As explained in Section 3.3, ActionBytes, when merged together, can act as action proposals. In this ablation, we show how the number of merged ActionBytes influences localization performance. As shown in Figure 7, using single ActionByte proposals $(M = \{1\})$ can improve the performance by more than $3\%$ compared to not using ActionByte proposals. This shows the effectiveness of ActionBytes as proposals. Merging up to 4 ActionBytes $(M = \{1,2,3,4\})$ can improve localization performance further. However, it comes with the cost
+
+
+Figure 7: ActionByte as proposals for localization. Single ActionByte proposals ( $M = \{1\}$ ) improve mAP compared to not using ActionByte proposals. We set $M = \{1,2\}$ in all the experiments as adding more proposals increases the computational cost while bringing marginal improvement.
+
+of processing more proposals. To keep the balance between computational cost and performance, we set $M = \{1,2\}$ in the remaining experiments. Since the ActionBytes vary in length, the proposal length also varies. This is reminiscent of commonly used anchor lengths [32]. The proposal length, for chosen $M$ and $p$ , ranges from 1 to 70 for Thumos14/MultiThumos and from 6 to 369 for ActivityNet.
+
+# 4.2. What2When action localization
+
+In the What2When action localization experiments, we show the benefit of our mined ActionBytes compared to the baseline. For training, we use the validation set of the Kinetics-400 dataset. For evaluation, we follow the common protocol from the literature and evaluate on the test sets of Thumos14 and MultiThumos, and the validation set of ActivityNet1.2. Baseline is the localization model trained on the Kinetics-400 validation set, without ActionBytes and iterative training. This model generates confidence scores for 400 classes over untrimmed long videos. We then transfer the class scores to the target classes as explained in Section 3.2, and localize actions using the two-stage thresholding. Ours is our proposed deep mining method, which is similar to the baseline (and trained on the same dataset) except that we use pseudo-labels during training to regularize the model. To have a fair comparison, we keep all the hyper-parameters fixed during evaluation. Finally, for Ours (+ Proposals) we add ActionByte proposals to the pool of proposals during localization.
+
+As shown in Table 1, the baseline performance on the Thumos14 dataset at $IoU = 0.5$ is $8.4\%$, which shows the difficulty of the task. Using our model, the performance increases to $11.3\%$. This is interesting, considering that the state-of-the-art performance on this dataset in the weakly-supervised regime, where training and testing are done on the same dataset, is just $26.5\%$ [27] (see Table 3). Finally, by
+
+Table 1: What2When action localization performance on Thumos14, ActivityNet1.2 and MultiThumos.
+
+| Method | Thumos14 (IoU 0.3) | 0.4 | 0.5 | 0.7 | ActivityNet1.2 (IoU 0.3) | 0.4 | 0.5 | 0.7 | MultiThumos (IoU 0.3) | 0.4 | 0.5 | 0.7 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Baseline | 18.8 | 12.7 | 8.4 | 1.7 | 24.0 | 21.7 | 19.4 | 8.0 | 7.5 | 4.9 | 3.2 | 0.6 |
+| Ours | 21.1 | 15.6 | 11.3 | 2.8 | 24.4 | 22.4 | 20.1 | 8.2 | 8.1 | 5.7 | 4.1 | 1.0 |
+| Ours (+ Proposals) | 26.1 | 20.3 | 15.5 | 3.7 | 24.7 | 22.7 | 20.3 | 8.3 | 10.8 | 8.1 | 6.1 | 1.4 |
+
+Table 2: Zero-shot action localization performance on Thumos14 and MultiThumos in What2When setup.
+
+| Method | IoU 0.1 | 0.2 | 0.3 | 0.4 | 0.5 |
+| --- | --- | --- | --- | --- | --- |
+| *Thumos14* | | | | | |
+| Baseline | 13.8 | 11.1 | 7.1 | 4.7 | 3.1 |
+| Ours | 14.9 | 12.6 | 8.5 | 6.1 | 4.1 |
+| Ours (+ Proposals) | 17.8 | 15.5 | 11.3 | 8.7 | 6.3 |
+| *MultiThumos* | | | | | |
+| Baseline | 6.4 | 5.14 | 3.1 | 2.0 | 1.3 |
+| Ours | 7.0 | 5.7 | 3.7 | 2.5 | 1.7 |
+| Ours (+ Proposals) | 9.4 | 8.0 | 5.6 | 4.1 | 3.0 |
+
+adding ActionByte proposals, the performance increases to $15.5\%$, i.e., an $84\%$ relative improvement overall. This also shows the effectiveness of our ActionBytes as proposals, which is mainly due to their complementary nature to the baseline proposals. The improvements hold across the IoUs, especially for the higher ones.
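+
+The relative improvement follows from the Table 1 entries at $IoU = 0.5$:
+
+$$
+\frac{15.5 - 8.4}{8.4} = \frac{7.1}{8.4} \approx 0.84,
+$$
+
+i.e. roughly an $84\%$ relative gain of the full model over the baseline.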
+
+For ActivityNet1.2 the baseline obtains an mAP of $19.4\%$ at $IoU = 0.5$, while our full model reaches $20.3\%$. The gains are smaller than on Thumos14 but consistent across the IoUs. The reduced gains can be attributed to the nature of the temporal annotations, which merge several nearby action instances and in-between pauses into one instance. This leads to extra false positives, since ActionByte proposals are good at separating actions from their temporal context.
+
+On MultiThumos the trend is similar to Thumos14: mining and then ActionByte proposals consistently improve performance across the IoU thresholds. It is promising that the proposed method maintains its gain on this more challenging multi-label dataset.
+
+# 4.3. Zero-shot action localization
+
+For this set of experiments, we have a similar setup to the previous What2When experiment, except that we adhere to a zero-shot premise and exclude common classes between the source Kinetics-400 dataset and the target datasets. Thus, during training, we exclude 18 classes of Kinetics-400 for Thumos14/MultiThumos. Similarly, 72 classes of Kinetics-400 are excluded for ActivityNet1.2.
+
+Table 3: Weakly-supervised localization on the Thumos14 dataset. (*) indicates I3D features.
+
+| Method | IoU 0.3 | 0.4 | 0.5 | 0.7 |
+| --- | --- | --- | --- | --- |
+| *Strong supervision* | | | | |
+| Shou et al. [31] | 40.1 | 29.4 | 23.3 | 7.9 |
+| Xu et al. [36] | 44.8 | 35.6 | 28.9 | - |
+| Zhao et al. [40] | 50.6 | 40.8 | 29.1 | - |
+| Chao et al. * [5] | 53.2 | 48.5 | 42.8 | 20.8 |
+| *Weak supervision* | | | | |
+| Nguyen et al. * [26] | 35.5 | 25.8 | 16.9 | 4.3 |
+| Shou et al. [32] | 35.8 | 29.0 | 21.2 | 5.8 |
+| Paul et al. * [28] | 40.1 | 31.1 | 22.8 | 7.6 |
+| Yu et al. * [39] | 39.5 | - | 24.5 | 7.1 |
+| Nguyen et al. * [27] | 46.6 | 37.5 | 26.5 | 9.0 |
+| Ours* (Proposals) | 43.0 | 35.8 | 29.0 | 9.5 |
+
+The classes that remain for ActivityNet1.2 are semantically very different from its target classes, resulting in a much lower baseline mAP of $2.6\%$ at $IoU = 0.3$ compared to $24.0\%$ in the What2When experiment. As ActivityNet1.2 is therefore not suitable for zero-shot transfer from Kinetics-400, we evaluate on the other two datasets in Table 2. Compared to the What2When results there is a drop in performance, which is expected given the difficulty of the task. However, the same trend is maintained: our mining model performs better than the baseline, and adding ActionByte proposals further improves localization performance. Again, we observe considerable gains over the baseline for both Thumos14 and MultiThumos, with consistent improvement across the IoUs. We believe that these are the first zero-shot temporal localization results reported on Thumos14 and MultiThumos.
+
+# 4.4. Comparison with the state-of-the-art
+
+Here, we demonstrate the effectiveness of our ActionByte proposals in a weakly-supervised setup as explained in Section 3.3. We employ the off-the-shelf model of Paul et al. [28] as the baseline and add ActionByte proposals on top of it. For the Thumos14 dataset, we train the model on the validation set and evaluate on the test set. As before, we use the IoU between detections and ground truth as the evaluation metric.
+
+
+Figure 8: Qualitative results showing top localizations on sample videos from Soccer Penalty and Basketball Dunk. Frames representing action instances are highlighted by orange boxes and background frames by blue boxes. Below these frames, ground truth is plotted in red against time in seconds. Localization boundaries are shown in other colors for the baseline detections as well as for the detections using the ActionByte proposals. In the Soccer Penalty example, there is only one true positive; it is missed by the baseline but covered by several of our proposals, one of which detects it. Both methods have false positives. The second example, Basketball Dunk, is a video longer than 10 minutes with many action instances. Of the 16 instances shown, our approach localizes 6 while producing 3 false positives at $IoU = 0.5$. Two of these false positives are duplicate detections (in cyan near 620s and 650s). The baseline localizes two action instances with one false positive. Our approach has a few false positives and missed detections, but it localizes some very difficult action instances. Figure best viewed in color.
+
+As shown in Table 3, our method outperforms the state-of-the-art for higher overlap thresholds. Our improvement is particularly notable at $IoU = 0.5$, where we improve the state-of-the-art by a margin of $2.4\%$. This validates that our ActionByte proposals are suitable for both the What2When and weakly-supervised tasks. In Table 4, results on ActivityNet1.2 are reported; we outperform the state-of-the-art for all $IoUs$ except 0.7. In Table 5, we report results for MultiThumos. To our knowledge, the only video-level localization results reported on MultiThumos are by Yeung et al. [38]. While they report $32.4\%$ at $IoU = 0.1$ with frame-level supervision, we reach the same mAP with weak supervision only. To the best of our knowledge, this is the first weakly-supervised evaluation on MultiThumos. We also evaluate our baseline [28] on this dataset and consistently improve over it across the $IoU$ thresholds. In summary, our method improves over the baselines and achieves promising results on all three datasets, which shows the effectiveness of the ActionByte proposals. We show some qualitative results of our detections in Figure 8.
+
+# 5. Conclusions
+
+We introduced the new task of learning from short trimmed videos to localize actions in long untrimmed videos. To tackle this task, our proposed pipeline is jointly trained to segment videos into ActionBytes and to localize them in the short videos. Our method can be
+
+Table 4: Weakly-supervised localization on ActivityNet1.2 dataset. (*) indicates I3D features.
+
+| Method | IoU 0.3 | 0.4 | 0.5 | 0.7 |
+| --- | --- | --- | --- | --- |
+| Wang et al. [34] | - | - | 7.4 | 3.9 |
+| Shou et al. [32] | - | - | 27.3 | 17.5 |
+| Paul et al. * [28] | 45.5 | 41.6 | 37.0 | 14.6 |
+| Yu et al. * [39] | - | - | 28.3 | 18.9 |
+| Ours* (Proposals) | 47.8 | 44.0 | 39.4 | 15.4 |
+
+Table 5: Weakly-supervised localization on MultiThumos dataset. (*) indicates I3D features. †Our evaluation of [28].
+
+| Method | IoU 0.1 | 0.2 | 0.3 | 0.4 | 0.5 |
+| --- | --- | --- | --- | --- | --- |
+| *Strong supervision* | | | | | |
+| Yeung et al. [38] | 32.4 | - | - | - | - |
+| *Weak supervision* | | | | | |
+| Paul et al. *† | 30.7 | 24.0 | 17.1 | 12.6 | 8.9 |
+| Ours* (Proposals) | 32.4 | 26.8 | 20.5 | 15.7 | 12.1 |
+
+considered a technique that regularizes action boundaries during training. Experiments on the three datasets show the effectiveness of our method not only for the proposed task, but also for zero-shot and weakly-supervised action localization. This demonstrates the adaptability of the models trained with our method, as we considerably improve over the baselines and achieve state-of-the-art results.
+
+# References
+
+[1] Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. Activitynet: A large-scale video benchmark for human activity understanding. In CVPR, 2015. 2, 5
+[2] Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In ECCV, 2018. 2, 3
+[3] Joao Carreira, Eric Noland, Chloe Hillier, and Andrew Zisserman. A short note on the kinetics-700 human action dataset. arXiv preprint arXiv:1907.06987, 2019. 1
+[4] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In CVPR, 2017. 1, 5
+[5] Yu-Wei Chao, Sudheendra Vijayanarasimhan, Bryan Seybold, David A Ross, Jia Deng, and Rahul Sukthankar. Rethinking the faster r-cnn architecture for temporal action localization. In CVPR, 2018. 1, 7
+[6] Adrien Gaidon, Zaid Harchaoui, and Cordelia Schmid. Temporal localization of actions with actoms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(11):2782-2795, 2013. 2
+[7] Chuang Gan, Tianbao Yang, and Boqing Gong. Learning attributes equals multi-source domain generalization. In CVPR, 2016. 2
+[8] Raghav Goyal, Samira Ebrahimi Kahou, Vincent Michalski, Joanna Materzynska, Susanne Westphal, Heuna Kim, Valentin Haenel, Ingo Fruend, Peter Yianilos, Moritz Mueller-Freitag, Florian Hoppe, Christian Thurau, Ingo Bax, and Roland Memisevic. The “something something” video database for learning and evaluating visual common sense. In ICCV, 2017. 1
+[9] Rui Hou, Mubarak Shah, and Rahul Sukthankar. Real-time temporal action localization in untrimmed videos by sub-action discovery. In BMVC, 2017. 2
+[10] Haroon Idrees, Amir R Zamir, Yu-Gang Jiang, Alex Gorban, Ivan Laptev, Rahul Sukthankar, and Mubarak Shah. The thumos challenge on action recognition for videos “in the wild”. Computer Vision and Image Understanding, 155:1-23, 2017. 2, 5
+[11] Ahmet Iscen, Giorgos Tolias, Yannis Avrithis, and Ondrej Chum. Label propagation for deep semi-supervised learning. In CVPR, 2019. 2
+[12] Mihir Jain, Jan C van Gemert, Thomas Mensink, and Cees GM Snoek. Objects2action: Classifying and localizing actions without any video example. In ICCV, 2015. 2, 4
+[13] Dinesh Jayaraman and Kristen Grauman. Slow and steady feature analysis: higher order temporal coherence in video. In CVPR, 2016. 3
+[14] Elyor Kodirov, Tao Xiang, Zhenyong Fu, and Shaogang Gong. Unsupervised domain adaptation for zero-shot learning. In ICCV, 2015. 2
+[15] Tian Lan, Yuke Zhu, Amir Roshan Zamir, and Silvio Savarese. Action recognition by hierarchical mid-level action elements. In ICCV, 2015. 2
+
+[16] Mans Larsson, Erik Stenborg, Carl Toft, Lars Hammarstrand, Torsten Sattler, and Fredrik Kahl. Fine-grained segmentation networks: Self-supervised segmentation for improved long-term visual localization. In ICCV, 2019. 2
+[17] D. Lee. Pseudo-label: the simple and efficient semi-supervised learning method for deep neural networks. In ICML, 2013. 2
+[18] Tianwei Lin, Xu Zhao, Haisheng Su, Chongjing Wang, and Ming Yang. Bsn: Boundary sensitive network for temporal action proposal generation. In ECCV, 2018. 1
+[19] Jingen Liu, Benjamin Kuipers, and Silvio Savarese. Recognizing human actions by attributes. In CVPR, 2011. 2
+[20] Pascal Mettes and Cees GM Snoek. Spatial-aware object embeddings for zero-shot localization and classification of actions. In ICCV, 2017. 2
+[21] Tomas Mikolov, Kai Chen, Greg S. Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In ICLR, 2013. 4
+[22] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. In NIPS, 2013. 4
+[23] Dmitrijs Milajevs, Dimitri Kartsaklis, Mehrnoosh Sadrzadeh, and Matthew Purver. Evaluating neural word representations in tensor-based compositional settings. In EMNLP, 2014. 4
+[24] Mathew Monfort, Alex Andonian, Bolei Zhou, Kandan Ramakrishnan, Sarah Adel Bargal, Yan Yan, Lisa Brown, Quanfu Fan, Dan Gutfreund, and Carl Vondrick. Moments in time dataset: one million videos for event understanding. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019. 1
+[25] Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. In ICML, 2010. 4
+[26] Phuc Nguyen, Ting Liu, Gautam Prasad, and Bohyung Han. Weakly supervised action localization by sparse temporal pooling network. In CVPR, 2018. 2, 7
+[27] Phuc Xuan Nguyen, Deva Ramanan, and Charless C Fowlkes. Weakly-supervised action localization with background modeling. In ICCV, 2019. 2, 6, 7
+[28] Sujoy Paul, Sourya Roy, and Amit K Roy-Chowdhury. W-talc: Weakly-supervised temporal activity localization and classification. In ECCV, 2018. 1, 2, 4, 5, 7, 8
+[29] S. Reed, H. Lee, D. Anguelov, C. Szegedy, D. Erhan, and A. Rabinovich. Training deep neural networks on noisy labels with bootstrapping. In ICLR, 2015. 2
+[30] Alina Roitberg, Manuel Martinez, Monica Haurilet, and Rainer Stiefelhagen. Towards a fair evaluation of zero-shot action recognition using external data. In ECCV, 2018. 2
+[31] Zheng Shou, Jonathan Chan, Alireza Zareian, Kazuyuki Miyazawa, and Shih-Fu Chang. CDC: Convolutional-deconvolutional networks for precise temporal action localization in untrimmed videos. In CVPR, 2017. 7
+[32] Zheng Shou, Hang Gao, Lei Zhang, Kazuyuki Miyazawa, and Shih-Fu Chang. AutoLoc: Weakly-supervised temporal action localization in untrimmed videos. In ECCV, 2018. 1, 2, 6, 7, 8
+
+[33] Kevin Tang, Li Fei-Fei, and Daphne Koller. Learning latent temporal structure for complex event detection. In CVPR, 2012. 2
+[34] Limin Wang, Yuanjun Xiong, Dahua Lin, and Luc Van Gool. Untrimmednets for weakly supervised action recognition and detection. In CVPR, 2017. 1, 2, 8
+[35] Laurenz Wiskott and Terrence J Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural computation, 14(4):715-770, 2002. 3
+[36] Huijuan Xu, Abir Das, and Kate Saenko. R-C3D: Region convolutional 3d network for temporal activity detection. In ICCV, 2017. 7
+[37] Xun Xu, Timothy M Hospedales, and Shaogang Gong. Multi-task zero-shot action recognition with prioritised data augmentation. In ECCV, 2016. 2
+[38] Serena Yeung, Olga Russakovsky, Ning Jin, Mykhaylo Andriluka, Greg Mori, and Li Fei-Fei. Every moment counts: Dense detailed labeling of actions in complex videos. International Journal of Computer Vision, 126(2-4):375-389, 2018. 2, 5, 8
+[39] Tan Yu, Zhou Ren, Yuncheng Li, Enxu Yan, Ning Xu, and Junsong Yuan. Temporal structure mining for weakly supervised action detection. In ICCV, 2019. 2, 7, 8
+[40] Yue Zhao, Yuanjun Xiong, Limin Wang, Zhirong Wu, Xiaoou Tang, and Dahua Lin. Temporal action detection with structured segment networks. In ICCV, 2017. 1, 7
+[41] Yi Zhu, Yang Long, Yu Guan, Shawn Newsam, and Ling Shao. Towards universal representation for unseen action recognition. In CVPR, 2018. 2
+[42] Y. Zou, Z. Yu, X. Liu, B.V.K. V. Kumar, and J. Wang. Confidence regularized self-training. In ICCV, 2019. 2
\ No newline at end of file
diff --git a/actionbyteslearningfromtrimmedvideostolocalizeactions/images.zip b/actionbyteslearningfromtrimmedvideostolocalizeactions/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..817188dc8eeb6ec076c42efaafe7b7632c4584f6
--- /dev/null
+++ b/actionbyteslearningfromtrimmedvideostolocalizeactions/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7db6eca0ddd92888aa6fc022729eb7b75b90fe8b3f6155e0caa9bc4eff9b505f
+size 500620
diff --git a/actionbyteslearningfromtrimmedvideostolocalizeactions/layout.json b/actionbyteslearningfromtrimmedvideostolocalizeactions/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..e225c8063890421772e8f8580ac4846d26fe684b
--- /dev/null
+++ b/actionbyteslearningfromtrimmedvideostolocalizeactions/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:00911fcee1c1bb51cec96ed0003e3cc6ad4d6f151a54e0f8df279eaf59356885
+size 356405
diff --git a/actiongenomeactionsascompositionsofspatiotemporalscenegraphs/5fd86401-96cc-4db0-97d1-3c9526ebc529_content_list.json b/actiongenomeactionsascompositionsofspatiotemporalscenegraphs/5fd86401-96cc-4db0-97d1-3c9526ebc529_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..ba9d1960031f201665f59eeb084cd11e9a9eca1f
--- /dev/null
+++ b/actiongenomeactionsascompositionsofspatiotemporalscenegraphs/5fd86401-96cc-4db0-97d1-3c9526ebc529_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:17428177971a57ba18b9355802ab36f4ea6d6806ade7b329c71750e05c1d2454
+size 83300
diff --git a/actiongenomeactionsascompositionsofspatiotemporalscenegraphs/5fd86401-96cc-4db0-97d1-3c9526ebc529_model.json b/actiongenomeactionsascompositionsofspatiotemporalscenegraphs/5fd86401-96cc-4db0-97d1-3c9526ebc529_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..bd60c044c419509e771bc6272501651a7d058159
--- /dev/null
+++ b/actiongenomeactionsascompositionsofspatiotemporalscenegraphs/5fd86401-96cc-4db0-97d1-3c9526ebc529_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:862376b428fbac733b86bf29176063528a40bb76375b833c826538a1fa77f0d4
+size 109644
diff --git a/actiongenomeactionsascompositionsofspatiotemporalscenegraphs/5fd86401-96cc-4db0-97d1-3c9526ebc529_origin.pdf b/actiongenomeactionsascompositionsofspatiotemporalscenegraphs/5fd86401-96cc-4db0-97d1-3c9526ebc529_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d8fe5daaa3022e060f632d44a126ee19af1d1531
--- /dev/null
+++ b/actiongenomeactionsascompositionsofspatiotemporalscenegraphs/5fd86401-96cc-4db0-97d1-3c9526ebc529_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:94bd6c5a7c5e9fb94c3ec661ce4e97b5ca10af443e0ab190e70ac9250361d256
+size 1216898
diff --git a/actiongenomeactionsascompositionsofspatiotemporalscenegraphs/full.md b/actiongenomeactionsascompositionsofspatiotemporalscenegraphs/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..398c9debf457675d511cbce26cacfd3df8818bcd
--- /dev/null
+++ b/actiongenomeactionsascompositionsofspatiotemporalscenegraphs/full.md
@@ -0,0 +1,288 @@
+# Action Genome: Actions as Compositions of Spatio-temporal Scene Graphs
+
+Jingwei Ji Ranjay Krishna Li Fei-Fei Juan Carlos Niebles Stanford University
+
+{jingweij, ranjaykrishna, feifeili, jniebles}@cs.stanford.edu
+
+# Abstract
+
+Action recognition has typically treated actions and activities as monolithic events that occur in videos. However, there is evidence from Cognitive Science and Neuroscience that people actively encode activities into consistent hierarchical part structures. In Computer Vision, however, few explorations of representations that encode event partonomies have been made. Inspired by evidence that the prototypical unit of an event is an action-object interaction, we introduce Action Genome, a representation that decomposes actions into spatio-temporal scene graphs. Action Genome captures changes between objects and their pairwise relationships while an action occurs. It contains 10K videos with 0.4M objects and 1.7M visual relationships annotated. With Action Genome, we extend an existing action recognition model by incorporating scene graphs as spatio-temporal feature banks to achieve better performance on the Charades dataset. Next, by decomposing and learning the temporal changes in visual relationships that result in an action, we demonstrate the utility of a hierarchical event decomposition by enabling few-shot action recognition, achieving $42.7\%$ mAP using as few as 10 examples. Finally, we benchmark existing scene graph models on the new task of spatio-temporal scene graph prediction.
+
+# 1. Introduction
+
+Video understanding tasks, such as action recognition, have, for the most part, treated actions and activities as monolithic events [8, 38, 66, 87]. Most recent models proposed have resorted to end-to-end predictions that produce a single label for a long sequence of a video [10, 23, 31, 69, 72] and do not explicitly decompose events into a series of interactions between objects. On the other hand, image-based structured representations like scene graphs have cascaded improvements across multiple image tasks, including image captioning [2], image retrieval [36, 64], visual question answering [35], relationship modeling [41] and image generation [34]. The scene graph representation, introduced in Visual Genome [43], provides a scaffold that
+
+
+Figure 1: We present Action Genome: a representation that decomposes actions into spatio-temporal scene graphs. Inspired by hierarchical bias theory [84] and event segmentation theory [44], Action Genome provides the scaffold to study the dynamics of actions as relationships between people and objects. This decomposition also allows us to improve action recognition, enable few-shot action detection, and introduce spatio-temporal scene graph prediction.
+
+allows vision models to tackle complex inference tasks by breaking scenes into their corresponding objects and visual relationships. However, decompositions for temporal events have not been explored much [50], even though representing events with structured representations could lead to more accurate and grounded action understanding.
+
+Meanwhile, in Cognitive Science and Neuroscience, it has been postulated that people segment events into consistent groups [5, 6, 55]. Furthermore, people actively encode those ongoing activities in a hierarchical part structure — a phenomenon referred to as hierarchical bias hypothesis [84] or event segmentation theory [44]. Let's consider the action of "sitting on a sofa". The person initially starts off next to the sofa, moves in front of it, and finally sits atop it. Such decompositions can enable machines to predict future and past scene graphs with objects and relationships as an action occurs: we can predict that the person is about to sit on
+
+Table 1: A comparison of Action Genome with existing video datasets. Built upon Charades [66], Action Genome is the first large-scale video database providing both action labels and spatio-temporal scene graph labels.
+
+| Dataset | Video hours | # videos | # action categories | Objects: annotated | Objects: localized | Objects: # categories | Objects: # instances | Relationships: annotated | Relationships: localized | Relationships: # categories | Relationships: # instances |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ActivityNet [8] | 648 | 28K | 200 | | | - | - | | | - | - |
+| HACS Clips [87] | 833 | 0.4K | 200 | | | - | - | | | - | - |
+| Kinetics-700 [9] | 1794 | 650K | 700 | | | - | - | | | - | - |
+| AVA [26] | 108 | 504K | 80 | | | - | - | ✓ | | 49 | - |
+| Charades [66] | 82 | 10K | 157 | ✓ | | 37 | - | | | - | - |
+| EPIC-Kitchen [15] | 55 | - | 125 | ✓ | | 331 | - | | | - | - |
+| DALY [75] | 31 | 8K | 10 | ✓ | ✓ | 41 | 3.6K | | | - | - |
+| CAD120++ [91] | 0.57 | 0.5K | 10 | ✓ | ✓ | 13 | 64K | ✓ | ✓ | 6 | 32K |
+| Action Genome | 82 | 10K | 157 | ✓ | ✓ | 35 | 0.4M | ✓ | ✓ | 25 | 1.7M |
+
+the sofa when we see them move in front of it. Similarly, such decomposition can also enable machines to learn from few examples: we can recognize the same action when we see a different person move towards a different chair. While that was a relatively simple decomposition, other events like "playing football", with its multiple rules and actors, can involve multifaceted decompositions. So while such decompositions can provide the scaffolds to improve vision models, how is it possible to correctly create representative hierarchies for a wide variety of complex actions?
+
+In this paper, we introduce Action Genome, a representation that decomposes actions into spatio-temporal scene graphs. Object detection faced a similar challenge of large variation within any object category. So, just as progress in 2D perception was catalyzed by taxonomies [56], partonomies [57], and ontologies [43, 79], we aim to improve temporal understanding with Action Genome's partonomy. Going back to the example of "person sitting on a sofa", Action Genome breaks down such actions by annotating frames within that action with scene graphs. The graphs capture both the objects, person and sofa, and how their relationships evolve as the action progresses from $\langle$ person - next to - sofa $\rangle$ to $\langle$ person - in front of - sofa $\rangle$ to finally $\langle$ person - sitting on - sofa $\rangle$ . Built upon Charades [66], Action Genome provides 476K object bounding boxes with 1.72M relationships across 234K video frames with 157 action categories.
+
+Most perspectives on action decomposition converge on the prototypical unit of action-object couplets [44, 50, 63, 84]. Action-object couplets refer to transitive actions performed on objects (e.g. "moving a chair" or "throwing a ball") and intransitive self-actions (e.g. "moving towards the sofa"). Action Genome's dynamic scene graph representations capture both such types of events and as such, represent the prototypical unit. With this representation, we enable the study for tasks such as spatio-temporal scene graph prediction — a task where we estimate the decomposition of action dynamics given a video. We can also improve existing tasks like action recognition and few-shot action detection by jointly studying how those actions change visual relationships between objects in scene graphs.
+
+To demonstrate the utility of Action Genome's event decomposition, we introduce a method that extends a state-of-the-art action recognition model [76] by incorporating spatio-temporal scene graphs as feature banks that can be used to both predict the action as well as the objects and relationships involved. First, we demonstrate that predicting scene graphs can benefit the popular task of action recognition by improving the state-of-the-art on the Charades dataset [66] from $42.5\%$ to $44.3\%$ and to $60.3\%$ when using oracle scene graphs. Second, we show that the compositional understanding of actions induces better generalization by showcasing few-shot action recognition experiments, achieving $42.7\%$ mAP using as few as 10 training examples. Third, we introduce the task of spatio-temporal scene graph prediction and benchmark existing scene graph models with new evaluation metrics designed specifically for videos. With a better understanding of the dynamics of human-object interactions via spatio-temporal scene graphs, we aim to inspire a new line of research in more decomposable and generalizable action understanding.
+
+# 2. Related work
+
+We derive inspiration from Cognitive Science, compare our representation with static scene graphs, and survey methods in action recognition and few-shot prediction.
+
+Cognitive Science. Early work in Cognitive Science provides evidence for the regularities with which people identify event boundaries [5, 6, 55]. Remarkably, people consistently, both within and between subjects, carve out video streams into events, actions, and activities [11, 28, 83]. Such findings hint that it is possible to predict when actions begin and end, and have inspired hundreds of Computer Vision datasets, models, and algorithms to study tasks like action recognition [19, 37, 71, 80, 81, 82]. Subsequent Cognitive and Neuroscience research, using the same paradigm, has also shown that event categories form partonomies [28, 60, 83]. However, Computer Vision has done little work in explicitly representing the hierarchical structures of actions [50], even though understanding event partonomies can improve tasks like action recognition.
+
+Action recognition in videos. Many research projects have tackled the task of action recognition. A major line of work has focused on developing powerful neural architectures to extract useful representations from videos [10, 23, 31, 69, 72]. Pre-trained on large-scale databases for action classification [8, 9], these architectures serve as cornerstones for downstream video tasks and action recognition on other datasets. To support more complex action understanding, another growing body of research explores structural information in videos, including temporal ordering [51, 88], object localization [4, 25, 32, 53, 74, 76], and implicit interactions between objects [4, 53]. In our work, we contrast with these methods by explicitly using a structured decomposition of actions into objects and relationships.
+
+Table 1 lists some of the most popular datasets used for action recognition. One major trend in video datasets is to provide very large numbers of video clips with single action labels [8, 9, 87]. Although these databases have driven the progress of video feature representation for many downstream tasks, the provided annotations treat actions as monolithic events and do not study how objects and their relationships change during actions/activities. Meanwhile, other databases have provided a greater variety of annotations: AVA [26] localizes the actors of actions, Charades [66] contains multiple actions happening at the same time, EPIC-Kitchen [15] localizes the objects interacted with in ego-centric kitchen videos, and DALY [75] provides object bounding boxes and upper body poses for 10 daily activities. Still, the scene graph, a comprehensive structural abstraction of images, has not yet been studied in any large-scale video database as a potential representation for action recognition. In this work, we present Action Genome, the first large-scale database to jointly boost research in scene graphs and action understanding. Compared to existing datasets, we provide orders of magnitude more object and relationship labels grounded in actions.
+
+Scene graph prediction. Scene graphs are a formal representation for image information [36, 43] in a form of a graph, which is widely used in knowledge bases [13, 27, 89]. Each scene graph encodes objects as nodes connected together by pairwise relationships as edges. Scene graphs have led to many state of the art models in image captioning [2], image retrieval [36, 64], visual question answering [35], relationship modeling [41], and image generation [34]. Given its versatile utility, the task of scene graph prediction has resulted in a series of publications [14, 30, 43, 46, 48, 49, 59, 77, 78, 85] that have explored reinforcement learning [49], structured prediction [16, 40, 70], utilizing object attributes [20, 61], sequential prediction [59], few-shot prediction [12, 17], and graph-based [47, 77, 78] approaches. However, all of these approaches have restricted their application to static images and have not modelled visual concepts spatio-temporally.
+
+
+Figure 2: Action Genome's annotation pipeline: For every action, we uniformly sample 5 frames across the action and annotate the person performing the action along with the objects they interact with. We also annotate the pairwise relationships between the person and those objects. Here, we show a video with 4 actions labelled, resulting in 20 $(= 4 \times 5)$ frames annotated with scene graphs. The objects are grounded back in the video as bounding boxes.
+
+Few-shot prediction. The few-shot literature is broadly divided into two main frameworks. The first strategy learns a classifier for a set of frequent categories and then uses them to learn the few-shot categories [21, 22, 58]. For example, ZSL uses attributes of actions to enable few-shot recognition [58]. The second strategy learns invariances or decompositions that enable few-shot classification [7, 18, 39, 90]. OSS and TARN propose similarity or distance measures between video pairs [7, 39], CMN uses a multi-saliency algorithm to encode videos [90], and ProtoGAN creates a prototype vector for each class [18]. Our framework resembles the first strategy because we use the object and visual relationship representations learned using the frequent actions to identify few-shot actions.
+
+# 3. Action Genome
+
+Inspired by Cognitive Science, we decompose events into prototypical action-object units [44, 63, 84]. Each action in Action Genome is represented as changes to objects and their pairwise interactions with the actor/person performing the action. We derive our representation as a temporally changing version of Visual Genome's scene graphs [43]. However, unlike Visual Genome, whose goal was to densely represent a scene with objects and visual
+
+
+Figure 3: Distribution of (a) relationship and (b) object occurrences. The relationships are color coded to represent attention, spatial, and contact relationships. Most relationships have at least 1k instances and objects have at least 10k instances.
+
+
+
+Table 2: There are three types of relationships in Action Genome: attention relationships report which objects people are looking at, spatial relationships indicate how objects are laid out spatially, and contact relationships are semantic relationships involving people manipulating objects.
+
+| attention | spatial | contact | contact (cont.) |
+| --- | --- | --- | --- |
+| looking at | in front of | carrying | covered by |
+| not looking at | behind | drinking from | eating |
+| unsure | on the side of | have it on the back | holding |
+| | above | leaning on | lying on |
+| | beneath | not contacting | sitting on |
+| | in | standing on | touching |
+| | | twisting | wearing |
+| | | wiping | writing on |
+
+relationships, Action Genome's goal is to decompose actions and as such, focuses on annotating only those segments of the video where the action occurs and only those objects that are involved in the action.
+
+Annotation framework. Action Genome is built upon the videos and temporal action annotations available in the Charades dataset [66], which contains 157 action classes, 144 of which are human-object activities. In Charades, there are multiple actions that might be occurring at the same time. We do not annotate every single frame in a video; it would be redundant as the changes between objects and relationships occur at longer time scales.
+
+Figure 2 visualizes the pipeline of our annotation. We uniformly sample 5 frames to annotate across the range of each action interval. With this action-oriented sampling strategy, we provide more labels where more actions occur. For instance, in the example, actions "sitting on a chair" and "drinking from a cup" occur together and therefore result in more annotated frames, 5 from each action. When annotating each sampled frame, the hired annotators were prompted with action labels and clips of the neighboring
+
+
+Figure 4: A weighted bipartite mapping between objects and relationships shows that they are densely interconnected in Action Genome. The weights represent percentage of occurrences in which a specific object occurs in a relationship. There are three colors in the graph and they represent the three kinds of relationships: attention (in orange), spatial (in green) and contact (in purple).
+
+video frames for context. The annotators first draw bounding boxes around the objects involved in these actions, then choose the relationship labels from the label set. The clips are used to disambiguate between the objects that are actually involved in an action when multiple instances of a given category are present. For example, if multiple "cups" are present, the context disambiguates which "cup" to annotate for the action "drinking from a cup".
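+
+For concreteness, a minimal sketch of the uniform 5-frame sampling per action interval is given below; the exact frame-index convention used during annotation is an assumption.
+
+```python
+import numpy as np
+
+def sample_annotation_frames(action_start, action_end, fps, num_frames=5):
+    """Uniformly pick `num_frames` frame indices across one action interval."""
+    start_idx = int(round(action_start * fps))
+    end_idx = int(round(action_end * fps))
+    return np.linspace(start_idx, end_idx, num_frames).round().astype(int)
+
+# e.g. a "drinking from a cup" action from 4.0s to 9.0s in a 24-fps video
+print(sample_annotation_frames(4.0, 9.0, 24))  # -> [ 96 126 156 186 216]
+```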
+
+Action Genome contains three different kinds of human-object relationships: attention, spatial and contact relationships (see Table 2). Attention relationships indicate if a person is looking at an object or not, and serve as indicators of which object the person is interacting with or will interact with.
+
+Spatial relationships describe where objects are relative to one another. Contact relationships describe the different ways the person is contacting an object. A change in contact often indicates the occurrence of an action: for example, changing from $\langle$ person - not contacting - book $\rangle$ to $\langle$ person - holding - book $\rangle$ may show an action of "picking up a book".
+
+It is worth noting that while Charades provides an injective mapping from each action to a verb, it is different from the relationship labels we provide. Charades' verbs are clip-level labels, such as "awaken", while we decompose them into frame-level human-object relationships, such as a sequence of $\langle$ person - lying on - bed $\rangle$ , $\langle$ person - sitting on - bed $\rangle$ and $\langle$ person - not contacting - bed $\rangle$ .
+
+Database statistics. Action Genome provides frame-level scene graph labels for the components of each action. Overall, we provide annotations for 234,253 frames with a total of 476,229 bounding boxes of 35 object classes (excluding "person"), and 1,715,568 instances of 25 relationship classes. Figure 3 visualizes the log-distribution of object and relationship categories in the dataset. Like most concepts in vision, some objects (e.g. table and chair) and relationships (e.g. in front of and not looking at) occur frequently while others (e.g. twisting and doorknob) only occur a handful of times. However, even with such a distribution, almost all objects have at least 10K instances and every relationship has at least 1K instances.
+
+Additionally, Figure 4 visualizes how frequently objects occur in which relationships. We see that most objects are pretty evenly involved in all three types of relationships. Unlike Visual Genome, where dataset bias provides a strong baseline for predicting relationships given the object categories, Action Genome does not suffer the same bias.
+
+# 4. Method
+
+We validate the utility of Action Genome's action decomposition by studying the effect of combining learning spatio-temporal scene graphs with learning to recognize actions. We propose a method, named Scene Graph Feature Banks (SGFB), to incorporate spatio-temporal scene graphs into action recognition. Our method is inspired by recent work in computer vision that uses information "banks" [1, 45, 76]. Information banks are feature representations that have been used to represent, for example, object categories that occur in the video [45], or even include where the objects are [1]. Our model is most directly related to the recent long-term feature banks [76], which accumulate features of a long video into a fixed-size representation for action recognition.
+
+Overall, our SGFB model contains two components: the first component generates spatio-temporal scene graphs while the second component encodes the graphs to predict
+
+
+Figure 5: Overview of our proposed model, SGFB, for action recognition using spatio-temporal scene graphs. SGFB predicts scene graphs for every frame in a video. These scene graphs are converted into feature representations that are then combined using methods similar to long-term feature banks [76]. The final representation is merged with 3D CNN features and used to predict action labels.
+
+action labels. Given a video sequence $v = \{i_1, i_2, \dots, i_N\}$ , the aim of traditional multi-class action recognition is to assign multiple action labels to this video. Here, $v$ represents the video sequence made up of image frames $i_j, \forall j \in [1, N]$ . SGFB generates a spatio-temporal scene graph for every frame in the given video sequence. The scene graphs are encoded to formulate a spatio-temporal scene graph feature bank for the final task of action recognition. We describe the scene graph prediction and the scene graph feature bank components in more detail below. See Figure 5 for a high-level visualization of the model's forward pass.
+
+# 4.1. Scene graph prediction
+
+Previous research has proposed plenty of methods for predicting scene graphs on static images [48, 52, 77, 78, 85, 86]. We employ a state-of-the-art scene graph predictor as the first step of our method. Given a video sequence $v$ , the scene graph predictor $SG$ generates all the objects and connects each object with the actor through their relationships in each frame, i.e. $SG: I \to G$ . On each frame, the scene graph $G = (O, R)$ consists of a set of objects $O = \{o_1, o_2, \ldots\}$ that a person is interacting with and a set of relationships $R = \{\{r_{11}, r_{12}, \ldots\}, \{r_{21}, r_{22}, \ldots\}, \ldots\}$ . Here $r_{pq}$ denotes the $q$ -th relationship between the person and the object $o_p$ . Note that there can be multiple relationships between the person and each object, including attention, spatial, and contact relationships.
+
+Besides the graph labels, the scene graph predictor $SG$ also outputs confidence scores for all predicted objects, $\{s_{o_1}, s_{o_2}, \ldots\}$ , and relationships, $\{\{s_{r_{11}}, s_{r_{12}}, \ldots\}, \{s_{r_{21}}, s_{r_{22}}, \ldots\}, \ldots\}$ . We experiment with various choices of $SG$ and benchmark their performance on Action Genome in Section 5.3.
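+
+One way to hold the per-frame output $G = (O, R)$ together with its confidence scores is sketched below; the class and field names are illustrative, and any scene graph predictor producing this information can be plugged in.
+
+```python
+from dataclasses import dataclass, field
+from typing import List
+
+@dataclass
+class DetectedObject:
+    category: str        # one of the 35 Action Genome object classes
+    box: List[float]     # [x1, y1, x2, y2]
+    score: float         # s_o, the object confidence
+
+@dataclass
+class DetectedRelationship:
+    predicate: str       # one of the 25 relationship classes
+    score: float         # s_r, the relationship confidence
+
+@dataclass
+class FrameSceneGraph:
+    """Per-frame output G = (O, R) of the scene graph predictor SG."""
+    objects: List[DetectedObject] = field(default_factory=list)
+    # relationships[p] holds the person-object relationships for objects[p]
+    relationships: List[List[DetectedRelationship]] = field(default_factory=list)
+```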
+
+# 4.2. Scene graph feature banks
+
+After obtaining the scene graph $G$ on each frame, we formulate a feature vector $f$ by aggregating the information across all the scene graphs into a feature bank. Let's assume there are $|O|$ classes of objects and $|R|$ classes of relationships. In Action Genome, $|O| = 35$ and $|R| = 25$ . We first construct a confidence matrix $C$ with dimension $|O| \times |R|$ , where each entry corresponds to an object-relationship category pair. We compute every entry of this matrix using the scores output by the scene graph predictor $SG$ . $C_{ij} = s_{o_i} \times s_{r_{ij}}$ . Intuitively, $C_{ij}$ is a high value when $SG$ is confident that there is an object $o_i$ in the current frame and its relationship with the actor is $r_{ij}$ . We flatten the confidence matrix as the feature vector $f$ for each image.
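+
+Continuing the sketch above, the per-frame feature $f$ can be computed as follows; how multiple detections of the same object class are aggregated is our assumption.
+
+```python
+import numpy as np
+
+NUM_OBJ, NUM_REL = 35, 25   # |O| and |R| in Action Genome
+
+def frame_feature(graph, obj_to_idx, rel_to_idx):
+    """Build the |O| x |R| confidence matrix C and flatten it into f.
+
+    `graph` follows the FrameSceneGraph sketch above; `obj_to_idx` and
+    `rel_to_idx` map category names to row/column indices. C[i, j] is large
+    only when the predictor is confident that object class i is present and
+    related to the person by relationship class j.
+    """
+    C = np.zeros((NUM_OBJ, NUM_REL), dtype=np.float32)
+    for obj, rels in zip(graph.objects, graph.relationships):
+        i = obj_to_idx[obj.category]
+        for rel in rels:
+            j = rel_to_idx[rel.predicate]
+            C[i, j] = max(C[i, j], obj.score * rel.score)  # s_o * s_r
+    return C.reshape(-1)    # f, a 35 * 25 = 875-dimensional vector
+```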
+
+Formally, $F_{SG} = [f_1, f_2, \dots, f_T]$ is a sequence of scene graph features extracted from a subsample of frames $i_1, i_2, \dots, i_N$ . We aggregate the features across the frames using methods similar to long-term feature banks [76], i.e. $F_{SG}$ are combined with 3D CNN features $S$ extracted from a short-term clip using feature bank operators (FBO), which can be instantiated as mean/max pooling or non-local blocks [73]. The 3D CNN embeds short-term information into $S$ while $F_{SG}$ provides contextual information, critical in modeling the dynamics of complex actions with long time span. The final aggregated feature is then used to predict action labels for the video.
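+
+A minimal sketch of the feature bank and a mean-pooling feature bank operator is shown below, reusing `frame_feature` from the previous sketch; the concatenation-based fusion with the clip feature is a simplification, and mean/max pooling or non-local blocks remain the FBO choices named above.
+
+```python
+import numpy as np
+
+def scene_graph_feature_bank(frame_graphs, obj_to_idx, rel_to_idx):
+    """Stack per-frame features f_1 ... f_T into F_SG with shape (T, 875)."""
+    return np.stack([frame_feature(g, obj_to_idx, rel_to_idx) for g in frame_graphs])
+
+def fuse_with_clip_feature(F_SG, S):
+    """Mean-pooling instantiation of the FBO.
+
+    F_SG carries long-term scene graph context; S is the 3D CNN feature of the
+    short-term clip. The concatenated vector would feed the action classifier.
+    """
+    bank = F_SG.mean(axis=0)              # (875,)
+    return np.concatenate([S, bank])
+```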
+
+# 5. Experiments
+
+Action Genome's representation enables us to study few-shot action recognition by decomposing actions into temporally changing visual relationships between objects. It also allows us to benchmark whether understanding the decomposition helps improve performance in action recognition or scene graph prediction individually. To study these benefits afforded by Action Genome, we design three experiments: action recognition, few-shot action recognition, and finally, spatio-temporal scene graph prediction.
+
+# 5.1. Action recognition on Charades
+
+We expect that grounding the components that compose an action — the objects and their relationships — will improve our ability to predict which actions are occurring in a video sequence. So, we evaluate the utility of Action Genome's scene graphs on the task of action recognition.
+
+Problem formulation. We specifically study multi-class action recognition on the Charades dataset [66].
+
+Table 3: Action recognition on Charades validation set in mAP (\%). We outperform all existing methods when we simultaneously predict scene graphs while performing action recognition. We also find that utilizing ground truth scene graphs can significantly boost performance.
+
+| Method | Backbone | Pre-train | mAP |
+| --- | --- | --- | --- |
+| I3D + NL [10, 73] | R101-I3D-NL | Kinetics-400 | 37.5 |
+| STRG [74] | R101-I3D-NL | Kinetics-400 | 39.7 |
+| Timeception [31] | R101 | Kinetics-400 | 41.1 |
+| SlowFast [23] | R101 | Kinetics-400 | 42.1 |
+| SlowFast+NL [23, 73] | R101-NL | Kinetics-400 | 42.5 |
+| LFB [76] | R101-I3D-NL | Kinetics-400 | 42.5 |
+| SGFB (ours) | R101-I3D-NL | Kinetics-400 | 44.3 |
+| SGFB Oracle (ours) | R101-I3D-NL | Kinetics-400 | 60.3 |
+
+The Charades dataset contains 9,848 crowdsourced videos with an average length of 30 seconds. At any frame, a person can perform multiple actions out of a nomenclature of 157 classes. This multi-label classification task provides a video sequence as input and expects multiple action labels as output. We train our SGFB model to predict Charades action labels at test time; during training, we additionally supervise SGFB with spatio-temporal scene graphs.
+
+Baselines. Previous work has proposed methods for multiclass action recognition and benchmarked on Charades. Recent state-of-the-art methods include applying I3D [10] and non-local blocks [73] as video feature extractors (I3D+NL), spatio-temporal region graphs (STRG) [74], Timeception convolutional layers (Timeception) [31], SlowFast networks (SlowFast) [23], and long-term feature banks (LFB) [76]. All the baseline methods are pre-trained on Kinetics-400 [38] and the input modality is RGB.
+
+Implementation details. SGFB first predicts a scene graph on each frame, then constructs a spatio-temporal scene graph feature bank for action recognition. We use Faster R-CNN [62] with ResNet-101 [29] as the backbone for region proposals and object detection. We leverage RelDN [86] to predict the visual relationships. Scene graph prediction is trained on Action Genome, where we follow the same train/val splits of videos as the Charades dataset. Action recognition uses the same video feature extractor, hyper-parameters, and solver schedulers as long-term feature banks (LFB) [76] for a fair comparison.
+
+Results. We report performance of all models using mean average precision (mAP) on the Charades validation set in Table 3. By replacing the feature banks with spatio-temporal scene graph features, we outperform the state-of-the-art LFB by $1.8\%$ mAP. Our features are smaller in size ( $35 \times 25 = 875$ in SGFB versus 2048 in LFB) but concisely capture the information needed for recognizing actions.
+
+We also find that improving object detectors designed for videos can further improve action recognition results. To quantitatively demonstrate the potential of better
+
+Table 4: Few-shot experiments. With its compositional action understanding, our SGFB demonstrates better generalizability than LFB. The SGFB oracle shows how much the scene graph representation could benefit action recognition.
+
+| Method | 1-shot | 5-shot | 10-shot |
+| --- | --- | --- | --- |
+| LFB [76] | 28.3 | 36.3 | 39.6 |
+| SGFB (ours) | 28.8 | 37.9 | 42.7 |
+| SGFB oracle (ours) | 30.4 | 40.2 | 50.5 |
+
+scene graphs on action recognition, we designed an SGFB Oracle setup. The SGFB Oracle assumes that a perfect scene graph prediction method is available. The spatio-temporal scene graph feature bank therefore directly encodes a feature vector from ground truth objects and visual relationships for the annotated frames. Feeding such feature banks into the SGFB model, we observe a significant improvement in action recognition: a $16\%$ increase in mAP. Such a boost in performance shows the potential of Action Genome and compositional action understanding when video-based scene graph models are utilized to improve scene graph prediction. It is important to note that the performance of the SGFB Oracle is not an upper bound, since we only utilize ground truth scene graphs for the few frames that have ground truth annotations.
+
+# 5.2. Few-shot action recognition
+
+Intuitively, predicting actions should be easier from a symbolic embedding of scene graphs than from pixels. When trained with very few examples, compositional action understanding with additional knowledge of scene graphs should outperform methods that treat actions as monolithic concepts. We showcase the capability and potential of spatio-temporal scene graphs to generalize to rare actions.
+
+Problem formulation. In our few-shot action recognition experiments on Charades, we split the 157 action classes into a base set of 137 classes and a novel set of 20 classes. We first train a backbone feature extractor (R101-I3D-NL) on all video examples of the base classes, which is shared by the baseline LFB, our SGFB, and SGFB oracle. Next, we train each model with only $k$ examples from each novel class, where $k = 1, 5, 10$ , for 50 epochs. Finally, we evaluate the trained models on all examples of novel classes in the Charades validation set.
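+
+A small sketch of the data split used in these experiments is given below; the concrete choice of the 20 novel classes is not specified here, so it is passed in as an argument.
+
+```python
+def few_shot_splits(all_classes, novel_classes, examples_by_class, k):
+    """Split Charades classes into base and novel sets and keep k videos per novel class.
+
+    `examples_by_class` maps a class name to its list of training videos;
+    names are illustrative.
+    """
+    novel = set(novel_classes)
+    base_classes = [c for c in all_classes if c not in novel]
+    base_videos = {c: examples_by_class[c] for c in base_classes}    # backbone training
+    few_shot_videos = {c: examples_by_class[c][:k] for c in novel}   # k = 1, 5, or 10
+    return base_videos, few_shot_videos
+```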
+
+Results. We report few-shot experiment performance in Table 4. SGFB achieves better performance than LFB on all 1-, 5-, and 10-shot experiments. Furthermore, with ground truth scene graphs, the SGFB Oracle shows a further $10.9\%$ 10-shot mAP improvement over LFB. We visualize the comparison between SGFB and LFB in Figure 6. With the knowledge of spatio-temporal scene graphs, SGFB better captures action concepts involving the dynamics of objects and relationships.
+
+
+Ground truth: Awakening in bed, Lying on a bed, Snuggling with a pillow
+Figure 6: Qualitative results of 10-shot experiments. We compare the predictions of our SGFB against LFB [76]. Since SGFB uses scene graph knowledge and explicitly captures the dynamics of human-object relationships, it easily learns the concept of "awakening in bed" even when only trained with 10 examples of this label. Also, since SGFB is trained to detect and ground objects, it avoids misclassifying objects, such as television, which then results in more robust action recognition.
+
+# 5.3. Spatio-temporal scene graph prediction
+
+Progress in image-based scene graph prediction has cascaded to improvements across multiple Computer Vision tasks, including image captioning [2], image retrieval [36, 64], visual question answering [35], relationship modeling [41] and image generation [34]. In order to promote similar progress in video-based tasks, we introduce the complementary task of spatio-temporal scene graph prediction. Unlike image-based scene graph prediction, which only has a single image as input, this task expects a video as input and can therefore utilize temporal information from neighboring frames to strengthen its predictions. In this section, we define the task and its evaluation metrics, and report benchmark results from numerous recently proposed image-based scene graph models applied to this new task.
+
+Problem formulation. The task expects as input a video sequence $v = \{i_1, i_2, \ldots, i_n\}$ , where $i_j, \forall j \in [1, n]$ , represents an image frame from the video. The task requires the model to generate a spatio-temporal scene graph $G = (O, R)$ per frame. Each $o_k \in O$ is an object with a category label and bounding box location, and each $r_{kl} \in R$ represents a relationship between objects $o_k$ and $o_l$ .
+
+Evaluation metrics. We borrow the three standard evaluation modes for image-based scene graph prediction [52]: (i) scene graph detection (SGDET) which expects input images and predicts bounding box locations, object categories, and predicate labels, (ii) scene graph classification (SGCLS) which expects ground truth boxes and predicts object categories and predicate labels, and (iii) predicate classification (PREDCLS), which expects ground truth bounding boxes
+
+Table 5: We evaluate recently proposed image-based scene graph prediction models and provide a benchmark for the new task of spatio-temporal scene graph prediction. We find that there is significant room for improvement, especially since these existing methods were designed to be conditioned on a single frame and do not consider the entire video sequence as a whole.
+
+| Method | PredCls image R@20 | PredCls image R@50 | PredCls video R@20 | PredCls video R@50 | SGCls image R@20 | SGCls image R@50 | SGCls video R@20 | SGCls video R@50 | SGGen image R@20 | SGGen image R@50 | SGGen video R@20 | SGGen video R@50 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| VRD [52] | 14.75 | 14.85 | 14.51 | 14.60 | 13.65 | 14.69 | 13.41 | 14.44 | 10.28 | 10.94 | 10.04 | 10.70 |
+| Freq Prior [85] | 32.70 | 32.84 | 32.25 | 32.37 | 31.52 | 32.78 | 31.08 | 32.32 | 24.03 | 24.87 | 23.49 | 24.31 |
+| IMP [77] | 35.15 | 35.56 | 34.50 | 34.86 | 31.73 | 34.85 | 31.09 | 34.16 | 23.88 | 25.52 | 23.23 | 24.82 |
+| MSDN [48] | 35.27 | 35.64 | 34.61 | 34.93 | 31.89 | 34.98 | 31.28 | 34.28 | 24.00 | 25.64 | 23.39 | 24.95 |
+| Graph R-CNN [78] | 35.36 | 35.74 | 34.80 | 35.12 | 31.94 | 35.07 | 31.43 | 34.46 | 24.12 | 25.77 | 23.59 | 25.15 |
+| RelDN [86] | 35.89 | 36.09 | 35.36 | 35.51 | 33.47 | 35.84 | 32.96 | 35.27 | 25.00 | 26.21 | 24.45 | 25.63 |
+
+and object categories to predict predicate labels. We refer the reader to the paper that introduced these tasks for more details [52]. We adapt these metrics to video: the per-frame measurements are first averaged within each video to obtain a per-video score, and the per-video scores are then averaged over the test set to obtain the final result.
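+
+A minimal sketch of this video-level adaptation of the recall metrics (names are illustrative):
+
+```python
+import numpy as np
+
+def video_level_metric(per_frame_values_by_video):
+    """Average per-frame Recall@K within each video, then average over videos."""
+    per_video = [float(np.mean(v)) for v in per_frame_values_by_video.values()]
+    return float(np.mean(per_video))
+
+# e.g. two videos with per-frame Recall@20 values
+print(video_level_metric({"v1": [0.4, 0.6], "v2": [0.2, 0.2, 0.2]}))  # -> 0.35
+```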
+
+Baselines. We benchmark the following image-based scene graph models for the spatio-temporal scene graph prediction task: VRD's visual module (VRD) [52], neural motif's frequency prior (Freq-prior) [85], iterative message passing (IMP) [77], multi-level scene description network (MSDN) [48], graph R-CNN (Graph R-CNN) [78], and relationship detection network (RelDN) [86].
+
+Results. To our surprise, we find that IMP, one of the earliest scene graph prediction models, actually outperforms numerous more recently proposed methods. The most recently proposed scene graph model, RelDN, marginally outperforms IMP, suggesting that modeling similarities between object and relationship classes improves performance in our task as well. The small gap in performance between PredCls and SGCls suggests that these models suffer from not being able to accurately detect the objects in the video frames; object detectors designed specifically for videos could improve performance. The models were trained only using Action Genome's data and not finetuned on Visual Genome [43], which contains image-based scene graphs, or on ActivityNet Captions [42], which contains dense captioning of actions in videos with natural language paragraphs. We expect that finetuning models with such datasets would result in further improvements.
+
+# 6. Future work
+
+With the rich hierarchy of events, Action Genome not only enables research on spatio-temporal scene graph prediction and compositional action recognition, but also promises various research directions. We hope future work will develop methods for the following:
+
+Spatio-temporal action localization. The majority of
+
+spatio-temporal action localization methods [24, 25, 33, 68] focus on localizing the person performing the action but ignore the objects that the person interacts with, which are also involved in the action. Action Genome can enable research on localization of both actors and objects, formulating a more comprehensive grounded action localization task. Furthermore, other variants of this task can also be explored; for example, a weakly-supervised localization task where a model is trained with only action labels but tasked with localizing the actors and objects.
+
+Explainable action models. Explainable visual models are an emerging field of research. Amongst numerous techniques, saliency prediction has emerged as a key mechanism to interpret machine learning models [54, 65, 67]. Action Genome provides frame-level labels of attention in the form of objects that a person performing the action is either looking at or interacting with. These labels can be used to further train explainable models.
+
+Video generation from spatio-temporal scene graphs. Recent studies have explored image generation from scene graphs [3, 34]. Similarly, with a structured video representation, Action Genome enables research on video generation from spatio-temporal scene graphs.
+
+# 7. Conclusion
+
+We introduce Action Genome, a representation that decomposes actions into spatio-temporal scene graphs. Scene graphs explain how objects and their relationships change as an action occurs. We demonstrated the utility of Action Genome by collecting a large dataset of spatio-temporal scene graphs and used it to improve state of the art results for action recognition as well as few-shot action recognition. Finally, we benchmarked results for the new task of spatio-temporal scene graph prediction. We hope that Action Genome will inspire a new line of research in more decomposable and generalizable video understanding.
+
+Acknowledgement. We would like to thank Panasonic for their support.
+
+# References
+
+[1] Tim Althoff, Hyun Oh Song, and Trevor Darrell. Detection bank: an object detection based video representation for multimedia event recognition. In Proceedings of the 20th ACM international conference on Multimedia, pages 1065-1068. ACM, 2012. 5
+[2] Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. Spice: Semantic propositional image caption evaluation. In European Conference on Computer Vision, pages 382-398. Springer, 2016. 1, 3, 7
+[3] Oron Ashual and Lior Wolf. Specifying object attributes and relations in interactive scene generation. In Proceedings of the IEEE International Conference on Computer Vision, pages 4561-4569, 2019. 8
+[4] Fabien Baradel, Natalia Neverova, Christian Wolf, Julien Mille, and Greg Mori. Object level visual reasoning in videos. In Proceedings of the European Conference on Computer Vision (ECCV), pages 105-121, 2018. 3
+[5] Roger G Barker and Herbert F Wright. One boy's day; a specimen record of behavior. 1951. 1, 2
+[6] Roger G Barker and Herbert F Wright. Midwest and its children: The psychological ecology of an american town. 1955. 1, 2
+[7] Mina Bishay, Georgios Zoumpourlis, and Ioannis Pa-tras. Tarn: Temporal attentive relation network for few-shot and zero-shot action recognition. arXiv preprint arXiv:1907.09021, 2019. 3
+[8] Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. Activitynet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 961-970, 2015. 1, 2, 3
+[9] Joao Carreira, Eric Noland, Chloe Hillier, and Andrew Zisserman. A short note on the kinetics-700 human action dataset. arXiv preprint arXiv:1907.06987, 2019. 2, 3
+[10] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6299-6308, 2017. 1, 3, 6
+[11] Roberto Casati and A Varzi. Events, volume 15 of the international research library of philosophy, 1996. 2
+[12] Vincent S Chen, Paroma Varma, Ranjay Krishna, Michael Bernstein, Christopher Re, and Li Fei-Fei. Scene graph prediction with limited labels. arXiv preprint arXiv:1904.11622, 2019. 3
+[13] Aron Culotta and Jeffrey Sorensen. Dependency tree kernels for relation extraction. In Proceedings of the 42nd annual meeting on association for computational linguistics, page 423. Association for Computational Linguistics, 2004. 3
+[14] Bo Dai, Yuqi Zhang, and Dahua Lin. Detecting visual relationships with deep relational networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3298-3308. IEEE, 2017. 3
+[15] Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, et al. Scaling egocentric vision: The epic-kitchens dataset. In Proceedings of the European Conference on Computer Vision (ECCV), pages 720-736, 2018. 2, 3
+[16] Chaitanya Desai, Deva Ramanan, and Charless C Fowlkes. Discriminative models for multi-class object layout. International journal of computer vision, 95(1):1-12, 2011. 3
+[17] Apoorva Dornadula, Austin Narcomey, Ranjay Krishna, Michael Bernstein, and Li Fei-Fei. Visual relationships as functions: Enabling few-shot scene graph prediction. arXiv preprint arXiv:1906.04876, 2019. 3
+[18] Sai Kumar Dwivedi, Vikram Gupta, Rahul Mitra, Shuaib Ahmed, and Arjun Jain. Protagan: Towards few shot learning for action recognition. arXiv preprint arXiv:1909.07945, 2019. 3
+[19] Victor Escorcia, Fabian Caba Heilbron, Juan Carlos Niebles, and Bernard Ghanem. Daps: Deep action proposals for action understanding. In European Conference on Computer Vision, pages 768-784. Springer, 2016. 2
+[20] Ali Farhadi, Ian Endres, Derek Hoiem, and David Forsyth. Describing objects by their attributes. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 1778-1785. IEEE, 2009. 3
+[21] Li Fei-Fei, Rob Fergus, and Pietro Perona. A bayesian approach to unsupervised one-shot learning of object categories. In Proceedings Ninth IEEE International Conference on Computer Vision, pages 1134–1141. IEEE, 2003. 3
+[22] Li Fei-Fei, Rob Fergus, and Pietro Perona. One-shot learning of object categories. IEEE transactions on pattern analysis and machine intelligence, 28(4):594–611, 2006. 3
+[23] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. Slowfast networks for video recognition. In Proceedings of the IEEE International Conference on Computer Vision, pages 6202-6211, 2019. 1, 3, 6
+[24] Rohit Girdhar, João Carreira, Carl Doersch, and Andrew Zisserman. A better baseline for ava. arXiv preprint arXiv:1807.10066, 2018. 8
+[25] Rohit Girdhar, Joao Carreira, Carl Doersch, and Andrew Zisserman. Video action transformer network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 244-253, 2019. 3, 8
+[26] Chunhui Gu, Chen Sun, David A Ross, Carl Vondrick, Caroline Pantofaru, Yeqing Li, Sudheendra Vijayanarasimhan, George Toderici, Susanna Ricco, Rahul Sukthankar, et al. Ava: A video dataset of spatio-temporally localized atomic visual actions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6047-6056, 2018. 2, 3
+[27] Zhou GuoDong, Su Jian, Zhang Jie, and Zhang Min. Exploring various knowledge in relation extraction. In Proceedings of the 43rd annual meeting on association for computational linguistics, pages 427-434. Association for Computational Linguistics, 2005. 3
+[28] Bridgette M Hard, Barbara Tversky, and David S Lang. Making sense of abstract events: Building event schemas. Memory & cognition, 34(6):1221-1235, 2006. 2
+[29] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 6
+[30] Roei Herzig, Moshiko Raboh, Gal Chechik, Jonathan Berant, and Amir Globerson. Mapping images to scene graphs with permutation-invariant structured prediction. In Advances in Neural Information Processing Systems, pages 7211-7221, 2018. 3
+[31] Noureldien Hussein, Efstratios Gavves, and Arnold WM Smeulders. Timeception for complex action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 254-263, 2019. 1, 3, 6
+[32] Ashesh Jain, Amir R Zamir, Silvio Savarese, and Ashutosh Saxena. Structural-rnn: Deep learning on spatio-temporal graphs. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5308-5317, 2016. 3
+[33] Jianwen Jiang, Yu Cao, Lin Song, Shiwei Zhang, Yunkai Li, Ziyao Xu, Qian Wu, Chuang Gan, Chi Zhang, and Gang Yu. Human centric spatio-temporal action localization. In ActivityNet Workshop on CVPR, 2018. 8
+[34] Justin Johnson, Agrim Gupta, and Li Fei-Fei. Image generation from scene graphs. arXiv preprint arXiv:1804.01622, 2018. 1, 3, 7, 8
+[35] Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Judy Hoffman, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Inferring and executing programs for visual reasoning. arXiv preprint arXiv:1705.03633, 2017. 1, 3, 7
+[36] Justin Johnson, Ranjay Krishna, Michael Stark, Li-Jia Li, David Shamma, Michael Bernstein, and Li Fei-Fei. Image retrieval using scene graphs. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3668-3678, 2015. 1, 3, 7
+[37] Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 1725–1732, 2014. 2
+[38] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017. 1, 6
+[39] Orit Kliper-Gross, Tal Hassner, and Lior Wolf. One shot similarity metric learning for action recognition. In International Workshop on Similarity-Based Pattern Recognition, pages 31-45. Springer, 2011. 3
+[40] Philipp Krahenbuhl and Vladlen Koltun. Efficient inference in fully connected crfs with gaussian edge potentials. In Advances in neural information processing systems, pages 109-117, 2011. 3
+[41] Ranjay Krishna, Ines Chami, Michael Bernstein, and Li Fei-Fei. Referring relationships. In Computer Vision and Pattern Recognition, 2018. 1, 3, 7
+[42] Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. Dense-captioning events in videos. In Proceedings of the IEEE international conference on computer vision, pages 706–715, 2017. 8
+
+[43] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, Michael Bernstein, and Li Fei-Fei. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32-73, 2017. 1, 2, 3, 8
+[44] Christopher A Kurby and Jeffrey M Zacks. Segmentation in the perception and memory of events. Trends in cognitive sciences, 12(2):72-79, 2008. 1, 2, 3
+[45] Li-Jia Li, Hao Su, Li Fei-Fei, and Eric P Xing. Object bank: A high-level image representation for scene classification & semantic feature sparsification. In Advances in neural information processing systems, pages 1378–1386, 2010. 5
+[46] Yikang Li, Wanli Ouyang, Xiaogang Wang, and Xiao'Ou Tang. Vip-cnn: Visual phrase guided convolutional neural network. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pages 7244-7253. IEEE, 2017. 3
+[47] Yikang Li, Wanli Ouyang, Bolei Zhou, Jianping Shi, Chao Zhang, and Xiaogang Wang. Factorizable net: an efficient subgraph-based framework for scene graph generation. In European Conference on Computer Vision, pages 346-363. Springer, 2018. 3
+[48] Yikang Li, Wanli Ouyang, Bolei Zhou, Kun Wang, and Xiaogang Wang. Scene graph generation from objects, phrases and region captions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1261-1270, 2017. 3, 5, 8
+[49] Xiaodan Liang, Lisa Lee, and Eric P Xing. Deep variation-structured reinforcement learning for visual relationship and attribute detection. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pages 4408–4417. IEEE, 2017. 3
+[50] Ivan Lillo, Alvaro Soto, and Juan Carlos Niebles. Discriminative hierarchical modeling of spatio-temporally composable human activities. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 812-819, 2014. 1, 2
+[51] Ji Lin, Chuang Gan, and Song Han. Tsm: Temporal shift module for efficient video understanding. In Proceedings of the IEEE International Conference on Computer Vision, pages 7083-7093, 2019. 3
+[52] Cewu Lu, Ranjay Krishna, Michael Bernstein, and Li Fei-Fei. Visual relationship detection with language priors. In European Conference on Computer Vision, pages 852–869. Springer, 2016. 5, 7, 8
+[53] Chih-Yao Ma, Asim Kadav, Iain Melvin, Zsolt Kira, Ghassan AlRegib, and Hans Peter Graf. Attend and interact: Higher-order object interactions for video understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6790-6800, 2018. 3
+[54] Aravindh Mahendran and Andrea Vedaldi. Salient deconvolutional networks. In European Conference on Computer Vision, pages 120-135. Springer, 2016. 8
+[55] Albert Michotte. The perception of causality. Routledge, 1963. 1, 2
+[56] George A Miller. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39-41, 1995. 2
+
+[57] George A Miller and Philip N Johnson-Laird. Language and perception. Belknap Press, 1976. 2
+[58] Ashish Mishra, Vinay Kumar Verma, M Shiva Krishna Reddy, S Arulkumar, Piyush Rai, and Anurag Mittal. A generative approach to zero-shot and few-shot action recognition. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 372-380. IEEE, 2018. 3
+[59] Alejandro Newell and Jia Deng. Pixels to graphs by associative embedding. In Advances in Neural Information Processing Systems, pages 2168-2177, 2017. 3
+[60] Darren Newtson. Attribution and the unit of perception of ongoing behavior. Journal of Personality and Social Psychology, 28(1):28, 1973. 2
+[61] Devi Parikh and Kristen Grauman. Relative attributes. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 503-510. IEEE, 2011. 3
+[62] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91-99, 2015. 6
+[63] Jeremy R Reynolds, Jeffrey M Zacks, and Todd S Braver. A computational model of event segmentation from perceptual prediction. Cognitive science, 31(4):613-643, 2007. 2, 3
+[64] Sebastian Schuster, Ranjay Krishna, Angel Chang, Li Fei-Fei, and Christopher D Manning. Generating semantically precise scene graphs from textual descriptions for improved image retrieval. In Proceedings of the fourth workshop on vision and language, pages 70–80, 2015. 1, 3, 7
+[65] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pages 618-626, 2017. 8
+[66] Gunnar A Sigurdsson, Gül Varol, Xiaolong Wang, Ali Farhadi, Ivan Laptev, and Abhinav Gupta. Hollywood in homes: Crowdsourcing data collection for activity understanding. In European Conference on Computer Vision, pages 510-526. Springer, 2016. 1, 2, 3, 4, 6
+[67] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013. 8
+[68] Chen Sun, Abhinav Shrivastava, Carl Vondrick, Kevin Murphy, Rahul Sukthankar, and Cordelia Schmid. Actor-centric relation network. In Proceedings of the European Conference on Computer Vision (ECCV), pages 318-334, 2018. 8
+[69] Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3d convolutional networks. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 4489-4497. IEEE, 2015. 1, 3
+[70] Zhuowen Tu and Xiang Bai. Auto-context and its application to high-level vision tasks and 3d brain image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(10):1744-1757, 2010. 3
+[71] Gül Varol, Ivan Laptev, and Cordelia Schmid. Long-term temporal convolutions for action recognition. IEEE transactions on pattern analysis and machine intelligence, 40(6):1510-1517, 2017. 2
+[72] Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaou Tang, and Luc Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In European conference on computer vision, pages 20-36. Springer, 2016. 1, 3
+[73] Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7794-7803, 2018. 6
+[74] Xiaolong Wang and Abhinav Gupta. Videos as space-time region graphs. In Proceedings of the European Conference on Computer Vision (ECCV), pages 399-417, 2018. 3, 6
+[75] Philippe Weinzaepfel, Xavier Martin, and Cordelia Schmid. Human action localization with sparse spatial supervision. arXiv preprint arXiv:1605.05197, 2016. 2, 3
+[76] Chao-Yuan Wu, Christoph Feichtenhofer, Haoqi Fan, Kaiming He, Philipp Krahenbuhl, and Ross Girshick. Long-term feature banks for detailed video understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 284-293, 2019. 2, 3, 5, 6, 7
+[77] Danfei Xu, Yuke Zhu, Christopher B Choy, and Li Fei-Fei. Scene graph generation by iterative message passing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 2, 2017. 3, 5, 8
+[78] Jianwei Yang, Jiasen Lu, Stefan Lee, Dhruv Batra, and Devi Parikh. Graph r-cnn for scene graph generation. arXiv preprint arXiv:1808.00191, 2018. 3, 5, 8
+[79] Benjamin Yao, Xiong Yang, and Song-Chun Zhu. Introduction to a large-scale general purpose ground truth database: methodology, annotation tool and benchmarks. In International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, pages 169-183. Springer, 2007. 2
+[80] Serena Yeung, Olga Russakovsky, Ning Jin, Mykhaylo Andriluka, Greg Mori, and Li Fei-Fei. Every moment counts: Dense detailed labeling of actions in complex videos. International Journal of Computer Vision, 126(2-4):375-389, 2018. 2
+[81] Serena Yeung, Olga Russakovsky, Greg Mori, and Li Fei-Fei. End-to-end learning of action detection from frame glimpses in videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2678–2687, 2016. 2
+[82] Joe Yue-Hei Ng, Matthew Hausknecht, Sudheendra Vijayanarasimhan, Oriol Vinyals, Rajat Monga, and George Toderici. Beyond short snippets: Deep networks for video classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4694–4702, 2015. 2
+[83] Jeffrey M Zacks, Todd S Braver, Margaret A Sheridan, David I Donaldson, Abraham Z Snyder, John M Ollinger, Randy L Buckner, and Marcus E Raichle. Human brain activity time-locked to perceptual event boundaries. Nature neuroscience, 4(6):651, 2001. 2
+[84] Jeffrey M Zacks, Barbara Tversky, and Gowri Iyer. Perceiving, remembering, and communicating structure in events. Journal of experimental psychology: General, 130(1):29, 2001. 1, 2, 3
+[85] Rowan Zellers, Mark Yatskar, Sam Thomson, and Yejin Choi. Neural motifs: Scene graph parsing with global context. arXiv preprint arXiv:1711.06640, 2017. 3, 5, 8
+[86] Ji Zhang, Kevin J Shih, Ahmed Elgammal, Andrew Tao, and Bryan Catanzaro. Graphical contrastive losses for scene graph parsing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 11535-11543, 2019. 5, 6, 8
+[87] Hang Zhao, Antonio Torralba, Lorenzo Torresani, and Zhicheng Yan. Hacs: Human action clips and segments dataset for recognition and temporal localization. In Proceedings of the IEEE International Conference on Computer Vision, pages 8668-8678, 2019. 1, 2, 3
+[88] Bolei Zhou, Alex Andonian, Aude Oliva, and Antonio Torralba. Temporal relational reasoning in videos. In Proceedings of the European Conference on Computer Vision (ECCV), pages 803-818, 2018. 3
+[89] Guodong Zhou, Min Zhang, DongHong Ji, and Qiaoming Zhu. Tree kernel-based relation extraction with context-sensitive structured parse tree information. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), 2007. 3
+[90] Linchao Zhu and Yi Yang. Compound memory networks for few-shot video classification. In Proceedings of the European Conference on Computer Vision (ECCV), pages 751-766, 2018. 3
+[91] Tao Zhuo, Zhiyong Cheng, Peng Zhang, Yongkang Wong, and Mohan Kankanhalli. Explainable video action reasoning via prior knowledge and state transitions. In Proceedings of the 27th ACM International Conference on Multimedia, pages 521-529. ACM, 2019. 2
\ No newline at end of file
diff --git a/actiongenomeactionsascompositionsofspatiotemporalscenegraphs/images.zip b/actiongenomeactionsascompositionsofspatiotemporalscenegraphs/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b685d3820dab8f51b51a12cf3c52cf0f07731ecc
--- /dev/null
+++ b/actiongenomeactionsascompositionsofspatiotemporalscenegraphs/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c9b217e52aebc6b81d42d1987a161835cddebe37796bc006413df614951531f
+size 513359
diff --git a/actiongenomeactionsascompositionsofspatiotemporalscenegraphs/layout.json b/actiongenomeactionsascompositionsofspatiotemporalscenegraphs/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..9ec22af285834bd2c25aa51817ef26ff809875dd
--- /dev/null
+++ b/actiongenomeactionsascompositionsofspatiotemporalscenegraphs/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:be289d49a1679631141302b4b5c54f685c3304c28982da01f09f39c01e0a2a64
+size 398540
diff --git a/actionmodifierslearningfromadverbsininstructionalvideos/8c8a9e3d-94bb-4a0c-abcf-3812587bf4a4_content_list.json b/actionmodifierslearningfromadverbsininstructionalvideos/8c8a9e3d-94bb-4a0c-abcf-3812587bf4a4_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..4fdd9644bd83f2c53ca2bd855822bb4e712b723c
--- /dev/null
+++ b/actionmodifierslearningfromadverbsininstructionalvideos/8c8a9e3d-94bb-4a0c-abcf-3812587bf4a4_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b85826b4020d23ae582da1afe4078f4245952a54f8cd9716642ede20939d42e5
+size 80911
diff --git a/actionmodifierslearningfromadverbsininstructionalvideos/8c8a9e3d-94bb-4a0c-abcf-3812587bf4a4_model.json b/actionmodifierslearningfromadverbsininstructionalvideos/8c8a9e3d-94bb-4a0c-abcf-3812587bf4a4_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..779fd65bb8a1623f3b6667463e2c98ed76867f8f
--- /dev/null
+++ b/actionmodifierslearningfromadverbsininstructionalvideos/8c8a9e3d-94bb-4a0c-abcf-3812587bf4a4_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b0430ebca4c034364eb335d1ab71d563f6d43bbd3abf942f7949a7aadb16787d
+size 103535
diff --git a/actionmodifierslearningfromadverbsininstructionalvideos/8c8a9e3d-94bb-4a0c-abcf-3812587bf4a4_origin.pdf b/actionmodifierslearningfromadverbsininstructionalvideos/8c8a9e3d-94bb-4a0c-abcf-3812587bf4a4_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..41798febb99d87a7d41cf12e3eb8f15a865d8b1b
--- /dev/null
+++ b/actionmodifierslearningfromadverbsininstructionalvideos/8c8a9e3d-94bb-4a0c-abcf-3812587bf4a4_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a13324b48dfd1fe71f78654c3df5355aabd2dbd017cd77070619d288569853b2
+size 1137991
diff --git a/actionmodifierslearningfromadverbsininstructionalvideos/full.md b/actionmodifierslearningfromadverbsininstructionalvideos/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..dd1f48243702581862fa72fd85e2395311c9197d
--- /dev/null
+++ b/actionmodifierslearningfromadverbsininstructionalvideos/full.md
@@ -0,0 +1,344 @@
+# Action Modifiers: Learning from Adverbs in Instructional Videos
+
+Hazel Doughty¹, Ivan Laptev², Walterio Mayol-Cuevas¹,³, Dima Damen¹
+
+¹University of Bristol   ²Inria, École Normale Supérieure   ³Amazon
+
+# Abstract
+
+We present a method to learn a representation for adverbs from instructional videos using weak supervision from the accompanying narrations. Key to our method is the fact that the visual representation of the adverb is highly dependent on the action to which it applies, although the same adverb will modify multiple actions in a similar way. For instance, while 'spread quickly' and 'mix quickly' will look dissimilar, we can learn a common representation that allows us to recognize both, among other actions.
+
+We formulate this as an embedding problem, and use scaled dot-product attention to learn from weakly-supervised video narrations. We jointly learn adverbs as invertible transformations operating on the embedding space, so as to add or remove the effect of the adverb. As there is no prior work on weakly supervised learning of adverbs, we gather paired action-adverb annotations from a subset of the HowTo100M dataset for 6 adverbs: quickly/slowly, finely/coarsely, and partially/completely. Our method outperforms all baselines for video-to-adverb retrieval with a performance of 0.719 mAP. We also demonstrate our model's ability to attend to the relevant video parts in order to determine the adverb for a given action.
+
+# 1. Introduction
+
+Instructional videos are a popular type of media watched by millions of people around the world to learn new skills. Several previous works aimed to learn the key steps necessary to complete the task from these videos [1, 30, 45, 62]. However, identifying the steps, or their order, is not all one needs to perform the task well; some steps need to be performed in a certain way to achieve the desired outcome. Take for example the task of making a meringue. An expert would assure you it is critical to add the sugar gradually and avoid over-beating by folding the mixture gently.
+
+This is related to recent efforts on assessing the performance of daily tasks [10, 11, 26]; however, these works do not assess individual actions or identify whether they have been performed as recommended by, say, a recipe.
+
+
+Figure 1. We learn a joint video-text embedding space from instructional videos and accompanying action-adverb pairs in the narration. Within this space, we learn adverbs as action modifiers — that is transformations which modify the action's embedding.
+
+As in the example before, steps with such caveats are often indicated by adverbs describing how actions should be performed. These adverbs (e.g. quickly, gently, ...) generalize to different actions and modify the manner of an action. We thus learn these as action modifiers (Fig. 1).
+
+To learn action modifiers for a variety of tasks and actions, we utilize the online resource of instructional videos with accompanying narrations. However, this form of supervision is weak and noisy. Not only are the narrations just roughly aligned with the actions in the video, but often the narrated actions may not be captured in the video altogether. For example, a YouTube instructional video might be narrated as "pour in the cream quickly" but the visuals only show the cream already added. In this case the video would not be useful to learn the adverb 'quickly'.
+
+As the main contribution of this paper, we propose the
+first method for weakly supervised learning from adverbs, in which we embed relevant video segments in a latent space and learn adverbs as transformations in this space. We collect action-adverb labels from narrations of a subset of tasks in the HowTo100M dataset [33]. The method is evaluated for video-to-adverb retrieval, as well as adverb-to-video retrieval and shows significant improvements over baselines. Additionally, we present a comprehensive ablation study demonstrating that jointly learning a good action embedding is key to learning action modifiers.
+
+# 2. Related Work
+
+We review works which learn from instructional videos, followed by works using parts-of-speech in video. We then review the related task of object attributes in images and methods which learn embeddings under weak supervision.
+
+Instructional Videos. Movies accompanied by subtitles and scripts have been used for learning from video [12, 13, 25, 47]. However, movies typically focus on talking heads with few object interactions. More recently, instructional videos are a popular source of datasets [1, 33, 44, 60] with hundreds of online videos of the same task. Narrations are used to learn steps of complex tasks [1, 18, 30, 42, 45, 62], and more recently for video retrieval [33], visual grounding [17, 19], action segmentation [60] and learning actions through object state changes [2, 14].
+
+In this work, we offer a novel insight into how these instructional videos can be used beyond step identification. Our work utilizes videos from the recently released HowTo100M dataset [33], learning adverbs and their relevance to critical steps in these tasks.
+
+Learning from Parts-of-Speech in Video. Several problems are at the intersection between language and video: captioning [24, 38, 55, 59], retrieval [9, 16, 21, 31, 33, 52, 54] and visual question answering [15, 56, 57, 61]. The majority of these works use LSTMs or GRUs to combine words into sentence-level features. While some works use learned pooling [32] or attention [55, 56, 57], they do not use knowledge of the words' parts-of-speech (PoS).
+
+A few recent works differentiate words by their PoS tags. Xu et al. [54] learn a joint video-text embedding space after detecting (subject, verb, object) triplets in the input caption. Wray et al. [52] perform fine-grained action retrieval by learning a separate embedding for each PoS before combining these embeddings. Both works focus on verb and noun PoS, as they target action recognition. Alayrac et al. [1] also use verb-noun pairs; the authors use direct object relations to learn unsupervised clusterings of key steps in instructional videos.
+
+While some adverbs are contained in video captioning datasets [24, 59], no prior captioning work models or recognizes these adverbs. The only prior work to utilize adverbs
+is that of Pang et al. [39] where many adverbs in the ADHA dataset model moods and facial expressions (e.g. 'happily', 'proudly'). The work uses full supervision including action bounding boxes. Instead, in this work we target adverbs that represent the manner in which an action is performed, using only weak supervision from narrations.
+
+Object Attributes in Images. Adverbs of actions are analogous with adjectives of objects. Learning adjectives for nouns has been investigated in the context of recognizing object-attribute pairs [4, 7, 20, 34, 36, 37, 50, 51] from images. Both [7, 34] tackle the problem of contextuality of attributes, where the appearance of an attribute can vastly differ depending on the object it applies to. Chen and Grauman [7] formulate this as transfer learning to recognize unseen object-attribute compositions. Misra et al. [34] learn how to compose separate object and attributes classifiers for novel combinations. Instead of using classifiers to recognize attributes, Nagarajan and Grauman [36] model attributes as a transformation of an object's embedding. Our work is inspired by this approach.
+
+While some works learn attributes for actions [28, 43, 58], these detect combinations of specific attributes (e.g. 'outdoor', 'uses toothbrush') to perform zero shot recognition and do not consider adverbs as attributes.
+
+Weakly Supervised Embedding. Learned embeddings are commonly used for retrieval tasks, however few works have attempted to learn embeddings under weak supervision [3, 35, 46, 53]. In [3], weak supervision is overcome using a triplet loss that only optimizes distances to the definite negatives and identifies the best matching positive. Two works [35, 46] perform video moment retrieval from text queries without temporal bounds in training. Similar to our approach, both use a form of text-guided attention to find the relevant portion of the video, however these use the full sentence. In our work, we simultaneously embed the relevant portion of the video while learning how adverbs modify actions. We detail our method next.
+
+# 3. Learning Action Modifiers
+
+The inputs to our model are action-adverb narrations and the accompanying instructional videos. Fig. 2(a) shows a sample instructional video, narrated with "...start by quickly rolling our lemons...", from which we identify the action roll and the adverb quickly (see Sec. 4 for NLP details). After training, our model is able to assess whether videos in the test set, of the same or different action, have been achieved quickly, among other learned adverbs.
+
+We present an overview of our method in Fig. 2. We learn a joint video-text embedding shown in Fig. 2(b), where the relevant video parts are embedded (blue dot) close to the text representation of the adverb-modified action 'roll quickly' (yellow dot).
+
+
+Figure 2. (a) Our input is a video $x$ with the weak label $(a, m)$ for the action and adverb respectively. (b) We aim to learn a joint video-text embedding space for adverb and video retrieval where the embedded video (blue) and action-adverb text representation (yellow) are close. (c) We learn adverbs as action modifiers which are transformations in the embedding space. (d) We embed $f'(x, a)$ , a visual representation of the relevant video parts using multi-head scaled dot-product attention where the query is a projection of the action embedding $g(a)$ .
+
+We review how joint video-text embeddings are typically learned in Sec. 3.1. This section also introduces the notations for the rest of the paper.
+
+Two prime challenges exist in learning the embedding for our problem, i.e. learning from adverbs in instructional videos. The first is disentangling the representation of the action from the adverb, allowing us to learn how the same adverb applies across different actions. We propose to learn adverbs as action modifiers, one per adverb, as in Fig. 2(c). In Sec. 3.2 we introduce these action modifiers, which we represent as transformations in the embedding space.
+
+The second challenge is learning the visual representation from the relevant parts of the video in a weakly-supervised manner, i.e. without annotations of temporal bounds. In Sec. 3.3, we propose a weakly-supervised embedding function that utilizes multi-head scaled dot-product attention. This uses the text embedding of the action as the query to attend to relevant video parts, as shown in Fig. 2(d).
+
+# 3.1. Learning an Action Embedding
+
+Our base model is a joint video-text embedding, as in [32, 52, 54]. Specifically, given a set of video clips $x \in X$ with corresponding action labels $a \in A$ , our goal is to obtain two embedding functions, one visual and one textual, $f: X \to E$ and $g: A \to E$ such that $f(x)$ and $g(a)$ are close in the embedding space $E$ and $f(x)$ is distant from other action embeddings $g(a')$ . These functions $f$ and $g$ can be optimized with a standard cross-modal triplet loss:
+
+$$
+\mathcal{L}_{\text{triplet}} = \max\left(0,\; d(f(x), g(a)) - d(f(x), g(a')) + \beta\right) \quad \text{s.t.}\;\; a' \neq a \tag{1}
+$$
+
+where $a'$ is an action different to $a$, $d$ is the Euclidean distance and $\beta$ is the margin, set to 1 in all experiments. We use $g(a)$ as the GloVe [41] embedding of the action's verb, and explain how we replace $f(x)$ by $f'(x, a)$ in Sec. 3.3.
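+
+For concreteness, a minimal PyTorch-style sketch of this cross-modal triplet loss is given below; the function and argument names are illustrative assumptions rather than the authors' code.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def cross_modal_triplet_loss(video_emb, action_emb, neg_action_emb, margin=1.0):
+    """Eq. (1): pull f(x) towards g(a) and push it away from g(a') by a margin.
+
+    video_emb:      f(x),  shape (B, D) -- embedded video clips
+    action_emb:     g(a),  shape (B, D) -- GloVe embedding of the labelled verb
+    neg_action_emb: g(a'), shape (B, D) -- embedding of a different action
+    """
+    d_pos = F.pairwise_distance(video_emb, action_emb)      # d(f(x), g(a))
+    d_neg = F.pairwise_distance(video_emb, neg_action_emb)  # d(f(x), g(a'))
+    return torch.clamp(d_pos - d_neg + margin, min=0).mean()
+```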
+
+# 3.2. Modeling Adverbs as Action Modifiers
+
+While actions exist without adverbs, adverbs are by definition tied to the action, and only gain visual representation when attached to one. Although adverbs have a similar effect on different actions, the visual representation is highly dependent on the action. Therefore, we follow prior work from [36] on object-attribute pairs and model adverbs as learned transformations in the video-text embedding space $E$ (Sec. 3.1). As these transformations modify the embedding of the action, we call them action modifiers. We learn an action modifier $O_{m}$ for each adverb $m \in M$ , such that
+
+$$
+O _ {m} (z) = W _ {m} z \tag {2}
+$$
+
+where $z$ is any point in the embedding space $E$ and $O_{m}: E \to E$ is a learned linear transform by a weight matrix $W_{m}$ . In Sec. 5, we test other geometric transformations: fixed translation, learned translation and nonlinear transformation. Each transformation $O_{m}$ can be applied to any text representation $O_{m}(g(a))$ or video representation $O_{m}(f(x))$ in $E$ to add the effect of the adverb $m$ .
+
+A video $x \in X$ , labeled with action-adverb pair $(a, m)$ , contains a visual representation of the adverb-modified action. We thus aim to embed $f(x)$ close to $O_m(g(a))$ . This is equivalent to embedding the inverse of the transformation $O_m^{-1}(f(x))$ near the action $g(a)$ . We thus jointly learn our embedding, with the action modifiers $O_m$ , using the sum of two triplet losses. The first focuses on the action:
+
+$$
+\mathcal{L}_{act} = \max\left(0,\; d(f(x), O_{m}(g(a))) - d(f(x), O_{m}(g(a'))) + \beta\right) \quad \text{s.t.}\;\; a' \neq a \tag{3}
+$$
+
+where $a^\prime$ is a different action and $d$ and $\beta$ are the distance function and margin as in Sec. 3.1. Similarly, we have a
+triplet loss that focuses on the adverb, such that:
+
+$$
+\mathcal{L}_{adv} = \max\left(0,\; d(f(x), O_{m}(g(a))) - d(f(x), O_{\overline{m}}(g(a))) + \beta\right) \tag{4}
+$$
+
+where $\overline{m}$ is the antonym of the labeled adverb $m$ (e.g. when $m = \text{'quickly'}$ , the antonym $\overline{m} = \text{'slowly'}$ ). We restrict the negative in $\mathcal{L}_{adv}$ to only the antonym to deal with adverbs not being mutually exclusive. For instance, a video labeled 'slice quickly' does not preclude the slicing being also done 'finely'. However, it surely has not been done 'slowly'. We demonstrate the effect of this choice in Sec. 5.
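+
+The following sketch illustrates one way Eqs. 2-4 could be implemented in PyTorch; it is a minimal interpretation, not the released code, and the class and function names are ours.
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class ActionModifier(nn.Module):
+    """One learned linear transform O_m per adverb (Eq. 2), initialised to the identity."""
+    def __init__(self, dim=300):
+        super().__init__()
+        self.W = nn.Parameter(torch.eye(dim))  # starts as a no-op transform
+
+    def forward(self, z):      # z: points in the embedding space E, shape (B, D)
+        return z @ self.W.t()  # O_m(z) = W_m z
+
+def modifier_losses(video_emb, act_emb, neg_act_emb, O_m, O_antonym, margin=1.0):
+    """L_act (Eq. 3) plus L_adv (Eq. 4); the adverb negative is restricted to the antonym."""
+    d_pos = F.pairwise_distance(video_emb, O_m(act_emb))
+    l_act = torch.clamp(d_pos - F.pairwise_distance(video_emb, O_m(neg_act_emb)) + margin, min=0)
+    l_adv = torch.clamp(d_pos - F.pairwise_distance(video_emb, O_antonym(act_emb)) + margin, min=0)
+    return l_act.mean() + l_adv.mean()
+```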
+
+# 3.3. Weakly Supervised Embedding
+
+All prior works that learn attributes of objects from images [7, 20, 34, 36, 37] utilize fully annotated datasets, where the object the attributes relate to is the only object of interest in the image. In contrast, we aim to learn action modifiers from video in a weakly supervised manner. Our input is untrimmed videos containing multiple consecutive actions. To learn adverbs, we need the visual representation to be only from the video parts relevant to the action (e.g. 'roll' in our Fig. 2 example). We propose using scaled dot-product attention [49], where the embedded action of interest acts as a query to identify relevant video parts.
+
+For each video $x$ , we use a temporal window of size $T$ , centered around the timestamp of the narrated action-adverb pair, containing video segments $\{x_1, x_2, \dots, x_T\}$ . We start from the visual representation of all segments $f(x) = \{f(x_1), \dots, f(x_T)\}$ , where $f(\cdot)$ is an I3D network. From this, we wish to learn an embedding of the visual features relevant to the action $a$ , which we call $f'(x, a)$ . Inspired by [49], we project $f(x)$ into keys $K$ and values $V$ :
+
+$$
+K = W ^ {K} f (x); \quad V = W ^ {V} f (x) \tag {5}
+$$
+
+We then set the query $Q = W^{Q}g(a)$ to be the projection of the action embedding, to weight video segments by their relevance to that action. The attention weights are obtained from the dot product of the keys $K$ and the action query $Q$ . These then pool the values $V$ . Specifically:
+
+$$
+H (x, a) = \sigma \left(\frac {\left(W ^ {Q} g (a)\right) ^ {\top} W ^ {K} f (x)}{\sqrt {T}}\right) W ^ {V} f (x) \tag {6}
+$$
+
+where $H(x,a)$ is a single attention head and $\sigma$ is the softmax function. We train multiple attention heads such that,
+
+$$
+f ^ {\prime} (x, a) = W ^ {H} \left[ H _ {1} (x, a), \dots , H _ {h} (x, a) \right] \tag {7}
+$$
+
+where $W^{H}$ projects the concatenation of the multiple attention heads $H_{i}(x,a)$ into the embedding space. We thus learn $h$ sets of attention-head weights $W_{i}^{Q}, W_{i}^{K}, W_{i}^{V}$, together with $W^{H}$, as the parameters of our weakly-supervised embedding.
+
+It is important to highlight that these weights are jointly trained with the embedding space $E$ , so that $f'(x, a)$ is used instead of $f(x)$ in Equations 3 and 4. We opted to explain our embedding space before detailing how it can be learned in a weakly-supervised manner, to simplify the explanation.
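+
+A possible PyTorch sketch of this action-queried, multi-head scaled dot-product attention (Eqs. 5-7) is shown below; the dimensions follow Sec. 5 (2048D I3D features, 300D embedding, 4 heads of size 75), but the module itself is an illustrative reconstruction.
+
+```python
+import torch
+import torch.nn as nn
+
+class ActionQueryAttention(nn.Module):
+    """The embedded action g(a) queries the T per-second video features f(x)."""
+    def __init__(self, feat_dim=2048, text_dim=300, emb_dim=300, head_dim=75, heads=4):
+        super().__init__()
+        self.W_Q = nn.ModuleList([nn.Linear(text_dim, head_dim, bias=False) for _ in range(heads)])
+        self.W_K = nn.ModuleList([nn.Linear(feat_dim, head_dim, bias=False) for _ in range(heads)])
+        self.W_V = nn.ModuleList([nn.Linear(feat_dim, head_dim, bias=False) for _ in range(heads)])
+        self.W_H = nn.Linear(heads * head_dim, emb_dim, bias=False)
+
+    def forward(self, fx, ga):
+        # fx: (B, T, feat_dim) I3D segment features; ga: (B, text_dim) action embedding
+        T = fx.size(1)
+        heads = []
+        for W_Q, W_K, W_V in zip(self.W_Q, self.W_K, self.W_V):
+            q = W_Q(ga).unsqueeze(1)                     # query: (B, 1, head_dim)
+            k, v = W_K(fx), W_V(fx)                      # keys/values: (B, T, head_dim)
+            attn = torch.softmax((q @ k.transpose(1, 2)) / T ** 0.5, dim=-1)  # (B, 1, T)
+            heads.append((attn @ v).squeeze(1))          # attention-pooled values
+        return self.W_H(torch.cat(heads, dim=-1))        # f'(x, a): (B, emb_dim)
+```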
+
+# 3.4. Weakly Supervised Inference
+
+Once trained, our model can be used to evaluate cross-modal retrieval of videos and adverbs. For video-to-adverb retrieval, we consider a video query $x$ and the narrated action $a$ , and we wish to estimate the adverb $m$ . For example, we have a video and wish to find the manner in which the action 'slice' was performed. We use the learned function $f'(x, a)$ to embed the relevant visual representation for action $a$ in $E$ . We then rank adverbs by the distance from this embedding to all modified actions $\forall m: O_m(g(a))$ .
+
+For adverb-to-video retrieval, we consider an action-adverb pair $(a,m)$ as a query, embed $O_{m}(g(a))$, e.g. 'slice finely', and calculate the distance from this text representation to all relevant video segments $\forall x:f'(x,a)$. In both cases, this allows us to use $a$ to query the weakly supervised embedding, so as to attend to the relevant video parts.
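+
+Video-to-adverb retrieval then reduces to a nearest-neighbour search over the modified actions; a small sketch (reusing the hypothetical `ActionModifier` modules above) is:
+
+```python
+import torch
+
+def rank_adverbs(video_emb, action_emb, modifiers):
+    """Rank adverbs m by the distance between f'(x, a) and O_m(g(a)) (Sec. 3.4).
+
+    modifiers: dict mapping adverb name -> ActionModifier (one O_m per adverb)
+    """
+    dists = {m: torch.dist(video_emb, O_m(action_emb)).item() for m, O_m in modifiers.items()}
+    return sorted(dists, key=dists.get)  # closest modified action first
+```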
+
+# 4. Dataset
+
+HowTo100M [33] is a large scale dataset of instructional videos collected from YouTube. Each video has a corresponding narration from manually-entered subtitles or Automatic Speech Recognition (ASR). No ground-truth is available in terms of correct actions or temporal extents.
+
+To test cross-task generalization, we use the same 83 tasks previously used in [62]. These come from cooking, DIY and car maintenance, and are divided into 65 tasks for training and a disjoint set of 18 tasks for testing. However, in [62], only 30 videos per task were used in training. Instead, we use all videos available for these 65 training tasks, where each task consists of 100-500 videos. In total, we have 24,558 videos in training and 1,280 videos in the test set. For these we find action-adverb pairs as follows.
+
+We use the accompanying narrations to discover action-adverb pairs, for both training and testing. First we employ T-BRNN [48] to punctuate the subtitles1, then perform Part-of-Speech (POS) tagging with SpaCy's English core web model. We search for verb→adverb relationships with the advmod dependency, indicating the adverb modifies the verb. We exclude verbs with VBD (past tense) and VBZ (third person singular) tags as these correlate with actions not being shown in the video. For example, in 'sprinkle some finely chopped coriander', 'chopped' is tagged with VBD. Similarly, in 'everything fits together neatly', the verb 'fits' is tagged as VBZ.
+
+
+Figure 3. Log-scaled y-axis shows instances of each adverb plotted per action. We display adverbs against their paired antonym (+/- axis).
+
+
+Figure 4 narration examples: '... if you turn the bowl upside down slowly they won't come out ...', '... mix it well until it is completely dissolved ...', '... you want to make sure you fill it up partially ...', '... you want to dice it finely ...'.
+
+Figure 4. Example videos and narrations, highlighting the action and adverb discovered with our NLP pipeline. In some cases the weak timestamp is a good localization of the action (top), however in others the action is long (second), the timestamp is a poor match (third), or the action is not captured in the video (bottom).
+
+Examples of the (action, adverb) pairs obtained from the pipeline with the corresponding video snippets are shown in Fig. 4. Additionally, we manually filter actions and adverbs that are not visual, e.g. 'recommend' and 'normally', respectively. We explored automatic approaches such as word concreteness scores [5], but found these approaches to be unreliable. We also group verbs into clusters to avoid synonyms as in [8], i.e. we consider 'put' and 'place' as the same action. From this process, we obtain 15,266 instances of action-adverb pairs.
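+
+The verb-adverb extraction can be reproduced approximately with spaCy's dependency parser; the sketch below is our own illustration (the model name `en_core_web_sm` is an assumption; the paper only states that SpaCy's English core web model is used), and the T-BRNN punctuation step is omitted.
+
+```python
+import spacy
+
+nlp = spacy.load("en_core_web_sm")  # assumed English core web model
+
+def extract_action_adverb_pairs(sentence):
+    """Return (verb lemma, adverb) pairs linked by the advmod dependency,
+    skipping past-tense (VBD) and third-person-singular (VBZ) verbs (Sec. 4)."""
+    pairs = []
+    for token in nlp(sentence):
+        if token.dep_ == "advmod" and token.head.pos_ == "VERB":
+            verb = token.head
+            if verb.tag_ not in {"VBD", "VBZ"}:
+                pairs.append((verb.lemma_, token.text.lower()))
+    return pairs
+
+# expected output along the lines of [('roll', 'quickly')]
+print(extract_action_adverb_pairs("start by quickly rolling our lemons"))
+```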
+
+However, these have a long tail of adverbs that are mentioned only a handful of times. We restrict our learning to 6 commonly used adverbs, that come in 3 pairs of antonyms: 'partially'/'completely', 'quickly'/'slowly' and 'finely'/'coarsely'. These adverbs appear in 263 unique action-adverb pairs with 72 different actions. We show the distribution of adverbs per action in Fig. 3. While our training is noisy, i.e. actions might not appear in the video (refer to Fig. 4 bottom), we clean the test set for accurate evaluation of the method. We only consider test set videos where
+the action-adverb is present in the video and appears within the 20 seconds around the narration timestamp. These correspond to $44\%$ of the original test set, which is comparable to the $50\%$ level of noise reported by the authors in [33].
+
+This results in 5,475 action-adverb pairs in training and 349 in testing. We consider the mean timestamp between the verb and adverb narrations as the weak supervision for the action's location. These action-adverb weak timestamp annotations and accompanying code are publicly available2.
+
+# 5. Experiments
+
+We first describe the implementation details of our method, followed by the metrics we use for evaluation. We then present our results against those of baselines and evaluate the contribution of the different components.
+
+Implementation Details. We sample all videos at 25fps and scale to a height of 256 pixels. We use I3D [6] with 16 frame segments, pre-trained on Kinetics [22], for both RGB and optical flow. We concatenate these to create 2048D features, extracted once per second as in [62], for $T = 20$ seconds around the narration timestamp.
+
+In all experiments, our embedding space $E$ is 300D, the same as the GloVe word representation [41]. We initialize the action embeddings with the verb's GloVe vector, pretrained on the Wikipedia and Gigaword corpora. The action modifiers $O_{m}$ are initialized with the identity matrix such that they have no effect at first. For our scaled dot-product attention, $Q$ is of size $75 \times 1$ and $K$ and $V$ are of size $75 \times T$ . We use 4 attention heads in $f'(x, a)$ .
+
+All our models are trained with the Adam optimizer [23] for 1000 epochs with a batch size of 512 and a learning rate of $10^{-4}$ . To aid disentangling the actions and adverbs, we first let the model learn only actions (optimized by $\mathcal{L}_{\text {triplet }}$ ) for 200 epochs before introducing the action modifiers. The weights of the action modifiers $W_{m}$ (Eq. 2) are then learned at a slower rate of $10^{-5}$ .
+
+Evaluation Metric. We report mean Average Precision (mAP) for video-to-adverb and adverb-to-video retrieval. For video-to-adverb, given a video and the narrated action, we rank the 6 adverbs' relevance. For adverb-to-video, given an adverb query (e.g. 'slowly'), we rank videos by the distance of each video labelled with its associated action (e.g. 'put') to the text embedding of the verb-adverb (e.g. 'put slowly'), and calculate mAP across the 6 adverbs.
+
+We also report mAP where we restrict the retrieval to the adverb and its antonym, which we refer to as the Antonym setting. This 'Antonym' metric better represents the given labels; we therefore use it for the ablation study. To clarify, we may have a video narrated 'cut coarsely': we are confident the cut was not performed 'finely', but we cannot judge its speed ('quickly' or 'slowly'). In Antonym video-to-adverb, there are only two possible adverbs to retrieve, thus we report Precision@1 (P@1), which is the same as binary classification accuracy. Similarly, we report mAP Antonym for adverb-to-video retrieval, where we only rank videos labeled with the adverb or its antonym.
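+
+As a concrete illustration of the Antonym setting, the snippet below (again reusing the hypothetical modules from Sec. 3) scores a single video; averaging over the test set gives the reported P@1.
+
+```python
+import torch
+
+def antonym_p_at_1(video_emb, action_emb, modifiers, gt_adverb, antonym):
+    """With only the labelled adverb and its antonym as candidates,
+    P@1 is simply binary classification accuracy."""
+    d_gt = torch.dist(video_emb, modifiers[gt_adverb](action_emb))
+    d_ant = torch.dist(video_emb, modifiers[antonym](action_emb))
+    return 1.0 if d_gt < d_ant else 0.0  # 1 if the correct adverb is retrieved first
+```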
+
+# 5.1. Comparative Results
+
+We first compare our work to baselines. Since ours is the first work to learn from adverbs in videos, we adapt methods that learn attributes of objects in images for comparison, as this is the most similar existing task to ours. In this adaptation, actions replace objects, and adverbs replace attributes/adjectives.
+
+We compare to RedWine [34] and AttributeOp [36] as well as the LabelEmbed baseline proposed in [34] which uses GloVe features in place of SVM classifier weights. We replace the image representation by a uniformly weighted visual representation of video segments. Similar to our evaluation, we report results when the action is given in testing, referred to as the 'oracle' evaluation in [36]. Furthermore, for a fair comparison, we use only the antonym as the negative in each method's loss, as we do in Eq. 4. AttributeOp proposes several linguistic inspired regularizers; we report the best combination of regularizers for our dataset — the auxiliary and commutative regularizers. We also compare to random chance and a naive binary classifier per adverb pair. This classifier is analogous to the Visual Product baseline used in [34, 36]. We report on both versions of this baseline, a Linear SVM which trains a binary one-vs-all classifier per adverb (Classifier-SVM) and a 6-way MLP of two fully connected layers (Classifier-MLP). In video-to-adverb, we rank adverbs by classifiers' confidence scores, as in [36]. In adverb-to-video, we use the confidence of the corresponding classifier or MLP output to rank videos.
+
+Comparative results are presented in Table 1. Our method outperforms all baselines for video-to-adverb retrieval, both when comparing against all adverbs and when restricting the evaluation to antonym pairs. We see that AttributeOp is the best baseline method, generally performing better than both RedWine and LabelEmbed.
+
+| Method | video-to-adverb (Antonym) | video-to-adverb (All) | adverb-to-video (Antonym) | adverb-to-video (All) |
+| --- | --- | --- | --- | --- |
+| Chance | 0.500 | 0.408 | 0.511 | 0.170 |
+| Classifier-SVM | 0.605 | 0.532 | 0.563 | 0.264 |
+| Classifier-MLP | 0.685 | 0.602 | 0.603 | 0.304 |
+| RedWine [34] | 0.693 | 0.594 | 0.595 | 0.290 |
+| LabelEmbed [34] | 0.717 | 0.621 | 0.618 | 0.297 |
+| AttributeOp [36] | 0.728 | 0.612 | 0.597 | 0.350 |
+| Ours | 0.808 | 0.719 | 0.657 | 0.329 |
+
+Table 1. Comparative Evaluation. Best performance in **bold** and second best **underlined**. We report results for both video-to-adverb and adverb-to-video retrieval with results restricted to the adverb and its antonym (Antonym) and when unrestricted (All).
+
+The two latter methods work on a fixed visual feature space and are thus prone to errors when the features are non-separable in that space. We can also see that LabelEmbed performs better than RedWine across all metrics, demonstrating that GloVe features are better representations than SVM classifier weights. While AttributeOp marginally outperforms our approach on adverb-to-video 'All', it underperforms on all other metrics, including our main objective, estimating the correct adverb over its antonym for a video query.
+
+# 5.2. Qualitative Results
+
+Fig. 5 presents video examples. For each, we demonstrate attention weights for several action queries. Our method is able to successfully attend to segments relevant to various query actions. The figure also shows predicted actions, and the predicted adverb when using the ground-truth action as the query. Our method is able to predict the correct adverb. In the last example, predicted actions are incorrect, but the method correctly identifies a relevant segment and that the action was done 'slowly'. We provide further insights into the learned embedding space in the supplementary material.
+
+# 5.3. Ablation Study
+
+We report 4 ablation studies on the various aspects of the method: the choice of action modifier transformation $O_{m}(\cdot)$ , our scaled dot-product attention, the contributions of the loss functions, and the length of the video ( $T$ ). We focus on video-to-adverb retrieval in the ablation using the Antonym P@1 metric, as this allows us to answer questions like: "was the 'cut' performed 'quickly' or 'slowly'?"
+
+Action Modifier Representation. In Table 2 we examine different representations for the action modifiers $O_{m}(\cdot)$ (Eq. 2). We compare a fixed translation by the GloVe representation of the adverb $m$, which is not learned, against three learned representations.
+
+
+Figure 5. Qualitative Results. Temporal attention values from several action queries. The intensity of the color indicates the attention value. Recall that we use the narrated action to weight the relevance of video segments. Using that, we display the top-5 predicted actions, as well as the correctly predicted adverb for all cases.
+
+| $O_{m}(z)=$ | Dimension | Learned | P@1 |
+| --- | --- | --- | --- |
+| $z+\mathrm{GloVe}(m)$ | 1D | | 0.735 |
+| $z+b_{m}$ | 1D | ✓ | 0.749 |
+| $W_{m}z$ | 2D | ✓ | 0.808 |
+| $W_{m_{2}}\mathrm{ReLU}(W_{m_{1}}z+b_{m})$ | 2D | ✓ | 0.742 |
+
+Table 2. Comparison of action modifier representation ${O}_{m}\left( \cdot \right)$ . The linear transformation choice clearly improves results.
+
+First, a learned translation vector $b_{m}$ initialized from the GloVe embedding. Second, our chosen representation: a 2D linear transformation with matrix $W_{m}$ as in Eq. 2. Third, we learn a nonlinear transformation implemented as two fully connected layers, the first with a ReLU activation.
+
+Results show the linear transformation clearly outperforms a vector translation or the non-linear transformation. The translation vector does not have enough capacity to represent the complexity of the adverb, while the nonlinear transform is prone to over-fitting.
+
+Temporal Attention. In Table 3, we compare our proposed multi-head scaled dot-product attention (Sec. 3.3) with alternative approaches to temporal aggregation and attention. In this comparison, we also report action retrieval results, with video-to-action mAP. That is, given the embedding of
+the video $f'(x, a)$ queried by the ground-truth action, we rank all actions in the embedding $\forall a : g(a)$ by their distances to the visual query and evaluate the rank of the correct action. Our method does not aim for action retrieval as it assumes knowledge of the ground-truth action, however this metric evaluates the quality of the weakly supervised embedding space. Results are compared to:
+
+- Single: uses only a one-second clip at the timestamp.
+- Average: uniformly weights the $T$ features.
+- Attention from [29]: widely used class agnostic attention, calculating attention with two fully connected layers, $f'(x, a) = \sigma(w_1 \tanh(W_2 f(x))) W_3 f(x)$ .
+- Class-specific Attention: a version of the above with one attention filter per action class.
+- Ours w/o two-stage optimization: our attention without the initial 200-epoch stage in which only action triplets are learned (i.e. adverbs/modifiers are introduced from the start).
+- Ours: our attention as described in Sec. 3.3.
+
+Table 3 demonstrates superior performance of our method for the learning of action embeddings and, as a consequence, better learning of action modifiers. These results also demonstrate the challenge of weak-supervision, with video-to-action only performing at $0.246\mathrm{mAP}$ when considering only one second surrounding the narrated action. This improves to 0.692 with our method.
+
+| Method | Action | Adverb |
+| --- | --- | --- |
+| Single | 0.246 | 0.705 |
+| Average | 0.257 | 0.716 |
+| Attention from [29] | 0.235 | 0.708 |
+| Class-specific Attention | 0.401 | 0.728 |
+| Ours w/o two-stage optimization | 0.586 | 0.774 |
+| Ours | 0.692 | 0.808 |
+
+Table 3. Comparison of temporal attention methods. We report video-to-action retrieval mAP and video-to-adverb retrieval P@1.
+
+Figure 6. Performance as $T$ increases. Blue (axis and plot) shows video-to-action retrieval mAP while red shows video-to-adverb retrieval with Antonym P@1.
+
+Loss Functions. We also evaluate the need for two separate loss functions (Eqs. 3 and 4). As an alternative approach we use a single loss where the negative contains a different action, a different adverb or both. This performs worse by $0.03\mathrm{P@1}$ . Using both losses, but with another adverb as opposed to only the antonym $\overline{m}$ in Equation 4 also results in worse performance by $0.04\mathrm{P@1}$ .
+
+Effect of $T$ . In Fig. 6, we evaluate how the length of the video ( $T$ ) extracted around the weak timestamp affects the model (Sec. 3.3). For larger $T$ , videos are more likely to contain the relevant action, but also other actions. Our embedding function $f'(x, a)$ is able to ignore other actions in the video, up to a point, and successfully learn to attend to the relevant parts given the query action, resulting in better performance with $T \in \{20 \ldots 30\}$ .
+
+Comparison with Action Localization. In this work, we perform weakly supervised embedding to learn action modifiers by attending to action relevant segments. Here, we test whether weakly supervised action localization can be used instead of our proposed attention, to locate key segments before learning action modifiers.
+
+We use published code of two state-of-the-art weakly supervised action localization methods: W-TALC [40] and CMCS [27]. First, we test the output of these methods with a binary adverb-antonym classifier (Classifier-MLP as in Sec. 5.1). We also test these methods in combination with our embedding and action modifier transformations. For this, we use the methods' predicted action-relevant segments, and average their representation to replace $f'(x, a)$ (Avg). Finally, we combine these relevant segments with our scaled dot-product attention (SDP).
+
+| Method | Attention | Adverb Rep | P@1 |
+| --- | --- | --- | --- |
+| W-TALC [40] | Avg | Classifier-MLP | 0.705 |
+| W-TALC [40] | Avg | Action Modifiers | 0.739 |
+| W-TALC [40] | SDP | Action Modifiers | 0.768 |
+| CMCS [27] | Avg | Classifier-MLP | 0.696 |
+| CMCS [27] | Avg | Action Modifiers | 0.699 |
+| CMCS [27] | SDP | Action Modifiers | 0.705 |
+| Ours | SDP | Action Modifiers | 0.808 |
+
+Table 4. Comparison of our method (Ours) to weakly supervised action localization methods, with and without our scaled dot-product (SDP) and action modifier representations.
+
+For this, we use the methods' predicted action-relevant segments, and average their representation to replace $f'(x, a)$ (Avg). Finally, we combine these relevant segments with our scaled dot-product attention (SDP).
+
+From Table 4 we can conclude that using the output of a weakly-supervised localization method is insufficient, and our joint optimization performs best. Worth noting, localizing the action using W-TALC followed by averaging relevant segments outperforms averaging all segments (0.739 vs. 0.716 from Table 3). This shows that W-TALC is capable of finding some relevant segments. This is further improved by our scaled dot-product attention.
+
+# 6. Conclusion
+
+This paper presents a weakly supervised method to learn from adverbs in instructional videos. Our method learns to obtain and embed the relevant part of the video with scaled dot-product attention, using the narrated action as a query. The method then learns action modifiers as linear transformations on the embedded actions, shared between actions. We train and evaluate our method on parsed action-adverb pairs sourced from YouTube videos of 83 tasks. Results demonstrate that our method outperforms all baselines, achieving $0.808$ mAP for video-to-adverb retrieval, when considering the adverb versus its antonym.
+
+Future work will involve learning from few shot examples in order to represent a greater variety of adverbs as well as exploring applications to give feedback to people guided by instructional videos or written instructions.
+
+Acknowledgements: Work is supported by an EPSRC DTP, EPSRC GLANCE (EP/N013964/1), the Louis Vuitton ENS Chair on Artificial Intelligence, the MSR-Inria joint lab and the French government program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute). Part of this work was conducted during H. Doughty's internship at the INRIA Willow team. This work uses publicly available datasets.
+
+# References
+
+[1] Jean-Baptiste Alayrac, Piotr Bojanowski, Nishant Agrawal, Josef Sivic, Ivan Laptev, and Simon Lacoste-Julien. Unsupervised learning from narrated instruction videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4575-4583, 2016. 1, 2
+[2] Jean-Baptiste Alayrac, Ivan Laptev, Josef Sivic, and Simon Lacoste-Julien. Joint discovery of object states and manipulation actions. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 2127-2136, 2017. 2
+[3] Relja Arandjelovic, Petr Gronat, Akihiko Torii, Tomas Pajdla, and Josef Sivic. Netvlad: Cnn architecture for weakly supervised place recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5297-5307, 2016. 2
+[4] Damian Borth, Rongrong Ji, Tao Chen, Thomas Breuel, and Shih-Fu Chang. Large-scale visual sentiment ontology and detectors using adjective noun pairs. In Proceedings of the 21st ACM international conference on Multimedia, pages 223-232, 2013. 2
+[5] Marc Brysbaert, Amy Beth Warriner, and Victor Kuperman. Concreteness ratings for 40 thousand generally known english word lemmas. Behavior research methods, 46(3):904-911, 2014. 5
+[6] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6299-6308, 2017. 5
+[7] Chao-Yeh Chen and Kristen Grauman. Inferring analogous attributes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 200-207, 2014. 2, 4
+[8] Dima Damen, Hazel Doughty, Giovanni Maria Farinella, Sanja Fidler, Antonino Furnari, Evangelos Kazakos, Davide Moltisanti, Jonathan Munro, Toby Perrett, Will Price, and Michael Wray. Scaling egocentric vision: The epic-kitchens dataset. In Proceedings of the European Conference on Computer Vision (ECCV), 2018. 5
+[9] Jianfeng Dong, Xirong Li, Chaoxi Xu, Shouling Ji, Yuan He, Gang Yang, and Xun Wang. Dual encoding for zeroexample video retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 9346-9355, 2019. 2
+[10] Hazel Doughty, Dima Damen, and Walterio Mayol-Cuevas. Who's better? who's best? pairwise deep ranking for skill determination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6057-6066, 2018. 1
+[11] Hazel Doughty, Walterio Mayol-Cuevas, and Dima Damen. The pros and cons: Rank-aware temporal attention for skill determination in long videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7862-7871, 2019. 1
+[12] Olivier Duchenne, Ivan Laptev, Josef Sivic, Francis R Bach, and Jean Ponce. Automatic annotation of human actions in video. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), volume 1, pages 1491-1498, 2009. 2
+[13] Mark Everingham, Josef Sivic, and Andrew Zisserman. "Hello! My name is... Buffy" - automatic naming of characters in TV video. In British Machine Vision Conference (BMVC), volume 2, page 6, 2006. 2
+[14] Alireza Fathi and James M Rehg. Modeling actions through state changes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2579-2586, 2013. 2
+[15] Jiyang Gao, Runzhou Ge, Kan Chen, and Ram Nevatia. Motion-appearance co-memory networks for video question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 2
+[16] Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. Localizing moments in video with natural language. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 5803-5812, 2017. 2
+[17] De-An Huang, Shyamal Buch, Lucio Dery, Animesh Garg, Li Fei-Fei, and Juan Carlos Niebles. Finding "it": Weakly-supervised reference-aware visual grounding in instructional videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 2
+[18] De-An Huang, Li Fei-Fei, and Juan Carlos Niebles. Connectionist temporal modeling for weakly supervised action labeling. In Proceedings of the European Conference on Computer Vision (ECCV), pages 137–153. Springer, 2016. 2
+[19] De-An Huang, Joseph J Lim, Li Fei-Fei, and Juan Carlos Niebles. Unsupervised visual-linguistic reference resolution in instructional videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2183–2192, 2017. 2
+[20] Phillip Isola, Joseph J Lim, and Edward H Adelson. Discovering states and transformations in image collections. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1383-1391, 2015. 2, 4
+[21] Mihir Jain, Jan C van Gemert, Thomas Mensink, and Cees GM Snoek. Objects2action: Classifying and localizing actions without any video example. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 4588-4596, 2015. 2
+[22] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017. 5
+[23] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations (ICLR), 2015. 5
+[24] Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. Dense-captioning events in videos. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017. 2
+[25] Ivan Laptev, Marcin Marszalek, Cordelia Schmid, and Benjamin Rozenfeld. Learning realistic human actions from movies. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1-8. IEEE, 2008. 2
+[26] Zhenqiang Li, Yifei Huang, Minjie Cai, and Yoichi Sato. Manipulation-skill assessment from videos with spatial attention network. Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCVW), 2019. 1
+[27] Daochang Liu, Tingting Jiang, and Yizhou Wang. Completeness modeling and context separation for weakly supervised temporal action localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1298-1307, 2019. 8
+[28] Jingen Liu, Benjamin Kuipers, and Silvio Savarese. Recognizing human actions by attributes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3337-3344. IEEE, 2011. 2
+[29] Xiang Long, Chuang Gan, Gerard De Melo, Jiajun Wu, Xiao Liu, and Shilei Wen. Attention clusters: Purely attention based local feature integration for video classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7834-7843, 2018. 7, 8
+[30] Jonathan Malmaud, Jonathan Huang, Vivek Rathod, Nick Johnston, Andrew Rabinovich, and Kevin Murphy. What's cookin'? interpreting cooking videos using text, speech and vision. Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2015. 1, 2
+[31] Pascal Mettes and Cees GM Snoek. Spatial-aware object embeddings for zero-shot localization and classification of actions. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 4443-4452, 2017. 2
+[32] Antoine Miech, Ivan Laptev, and Josef Sivic. Learnable pooling with context gating for video classification. arXiv preprint arXiv:1706.06905, 2017. 2, 3
+[33] Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. Howto100m: Learning a text-video embedding by watching hundred million narrated video clips. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2019. 2, 4, 5
+[34] Ishan Misra, Abhinav Gupta, and Martial Hebert. From red wine to red tomato: Composition with context. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1792-1801, 2017. 2, 4, 6
+[35] Niluthpol Chowdhury Mithun, Sujoy Paul, and Amit K Roy-Chowdhury. Weakly supervised video moment retrieval from text queries. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 11592-11601, 2019. 2
+[36] Tushar Nagarajan and Kristen Grauman. Attributes as operators: factorizing unseen attribute-object compositions. In Proceedings of the European Conference on Computer Vision (ECCV), pages 169-185, 2018. 2, 3, 4, 6
+[37] Zhixiong Nan, Yang Liu, Nanning Zheng, and Song-Chun Zhu. Recognizing unseen attribute-object pair with generative model. In The Thirty-Third AAAI Conference on Artificial Intelligence, 2019. 2, 4
+[38] Yingwei Pan, Ting Yao, Houqiang Li, and Tao Mei. Video captioning with transferred semantic attributes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6504-6512, 2017. 2
+[39] Bo Pang, Kaiwen Zha, and Cewu Lu. Human action adverb recognition: Adha dataset and a three-stream hybrid model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 2325-2334, 2018. 2
+[40] Sujoy Paul, Sourya Roy, and Amit K Roy-Chowdhury. W-talc: Weakly-supervised temporal activity localization and classification. In Proceedings of the European Conference on Computer Vision (ECCV), pages 563-579, 2018. 8
+[41] Jeffrey Pennington, Richard Socher, and Christopher Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, 2014. 3, 5
+[42] Alexander Richard, Hilde Kuehne, and Juergen Gall. Action sets: Weakly supervised action segmentation without ordering constraints. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5987-5996, 2018. 2
+[43] Amir Rosenfeld and Shimon Ullman. Action classification via concepts and attributes. In 2018 24th International Conference on Pattern Recognition (ICPR), pages 1499-1505. IEEE, 2018. 2
+[44] Fadime Sener and Angela Yao. Zero-shot anticipation for instructional activities. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 862-871, 2019. 2
+[45] Ozan Sener, Amir R Zamir, Silvio Savarese, and Ashutosh Saxena. Unsupervised semantic parsing of video collections. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 4480-4488, 2015. 1, 2
+[46] Reuben Tan, Huijuan Xu, Kate Saenko, and Bryan A Plummer. wman: Weakly-supervised moment alignment network for text-based video segment retrieval. arXiv preprint arXiv:1909.13784, 2019. 2
+[47] Makarand Tapaswi, Martin Bäuml, and Rainer Stiefelhagen. Book2Movie: Aligning video scenes with book chapters. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015. 2
+[48] Ottokar Tilk and Tanel Alumäe. Bidirectional recurrent neural network with attention mechanism for punctuation restoration. In Interspeech 2016, 2016. 4
+[49] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems (NeurIPS), pages 5998-6008, 2017. 4
+[50] Xiaoyang Wang and Qiang Ji. A unified probabilistic approach modeling relationships between attributes and objects. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 2120-2127, 2013. 2
+
+[51] Yang Wang and Greg Mori. A discriminative latent model of object classes and attributes. In Proceedings of the European Conference on Computer Vision (ECCV), pages 155-168. Springer, 2010. 2
+[52] Michael Wray, Diane Larlus, Gabriela Csurka, and Dima Damen. Fine-grained action retrieval through multiple parts-of-speech embeddings. In The IEEE International Conference on Computer Vision (ICCV), 2019. 2, 3
+[53] Jian Xu, Chunheng Wang, Cunzhao Shi, and Baihua Xiao. Weakly supervised soft-detection-based aggregation method for image retrieval. arXiv preprint arXiv:1811.07619, 2018. 2
+[54] Ran Xu, Caiming Xiong, Wei Chen, and Jason J Corso. Jointly modeling deep video and compositional text to bridge vision and language in a unified framework. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015. 2, 3
+[55] Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, and Aaron Courville. Describing videos by exploiting temporal structure. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 4507-4515, 2015. 2
+[56] Dongfei Yu, Jianlong Fu, Tao Mei, and Yong Rui. Multi-level attention networks for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4709-4717, 2017. 2
+[57] Youngjae Yu, Hyungjin Ko, Jongwook Choi, and Gunhee Kim. End-to-end concept word detection for video captioning, retrieval, and question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3165-3173, 2017. 2
+[58] Rowan Zellers and Yejin Choi. Zero-shot activity recognition with verb attribute induction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 946-958, 2017. 2
+[59] Kuo-Hao Zeng, Tseng-Hung Chen, Juan Carlos Niebles, and Min Sun. Generation for user generated videos. In Proceedings of the European Conference on Computer Vision (ECCV), pages 609-625. Springer, 2016. 2
+[60] Luowei Zhou, Chenliang Xu, and Jason J Corso. Towards automatic learning of procedures from web instructional videos. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. 2
+[61] Linchao Zhu, Zhongwen Xu, Yi Yang, and Alexander G Hauptmann. Uncovering the temporal context for video question answering. International Journal of Computer Vision, 124(3):409-421, 2017. 2
+[62] Dimitri Zhukov, Jean-Baptiste Alayrac, Ramadan Gokberk Cinbis, David Fouhey, Ivan Laptev, and Josef Sivic. Cross-task weakly supervised learning from instructional videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3537-3545, 2019. 1, 2, 4, 5
\ No newline at end of file
diff --git a/actionmodifierslearningfromadverbsininstructionalvideos/images.zip b/actionmodifierslearningfromadverbsininstructionalvideos/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b0eae03994a10749897fc3b753b2952024e3542c
--- /dev/null
+++ b/actionmodifierslearningfromadverbsininstructionalvideos/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5d97c9aeaf88c6c9890bbc18b844a21e888b2461659257a2a52ba9a7b80f3cf1
+size 653848
diff --git a/actionmodifierslearningfromadverbsininstructionalvideos/layout.json b/actionmodifierslearningfromadverbsininstructionalvideos/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..db3bca570fbd7720b955cb74e5ceaf5dc94670c0
--- /dev/null
+++ b/actionmodifierslearningfromadverbsininstructionalvideos/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:364b887255ad7ce4a85f320242ed96794bab224e0e391928f10acf7916328759
+size 442615
diff --git a/actionsegmentationwithjointselfsupervisedtemporaldomainadaptation/2e7f2736-36f2-4099-8fc8-db04ec170c4f_content_list.json b/actionsegmentationwithjointselfsupervisedtemporaldomainadaptation/2e7f2736-36f2-4099-8fc8-db04ec170c4f_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..9460b73f915bfacc3bdb2de7850366632b89acbc
--- /dev/null
+++ b/actionsegmentationwithjointselfsupervisedtemporaldomainadaptation/2e7f2736-36f2-4099-8fc8-db04ec170c4f_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2d6f6e8fa1485594225449717f8fbdc41192b34d84423391d43c040bf4d93efd
+size 73712
diff --git a/actionsegmentationwithjointselfsupervisedtemporaldomainadaptation/2e7f2736-36f2-4099-8fc8-db04ec170c4f_model.json b/actionsegmentationwithjointselfsupervisedtemporaldomainadaptation/2e7f2736-36f2-4099-8fc8-db04ec170c4f_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..90aa820b8ecdc249efcdbb3c677ee140e4f350dd
--- /dev/null
+++ b/actionsegmentationwithjointselfsupervisedtemporaldomainadaptation/2e7f2736-36f2-4099-8fc8-db04ec170c4f_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:336ea6d92f8c800259d9edb1d8744989acd78ab18ab9116b7ece5f1d59058da0
+size 90098
diff --git a/actionsegmentationwithjointselfsupervisedtemporaldomainadaptation/2e7f2736-36f2-4099-8fc8-db04ec170c4f_origin.pdf b/actionsegmentationwithjointselfsupervisedtemporaldomainadaptation/2e7f2736-36f2-4099-8fc8-db04ec170c4f_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2b169d4e98d62a4ee6aa118003d0b812eb0e7985
--- /dev/null
+++ b/actionsegmentationwithjointselfsupervisedtemporaldomainadaptation/2e7f2736-36f2-4099-8fc8-db04ec170c4f_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fb830d1c3b6ba94c933c507416be833d4046f373e8c694186977dcf3c0f57b9e
+size 792728
diff --git a/actionsegmentationwithjointselfsupervisedtemporaldomainadaptation/full.md b/actionsegmentationwithjointselfsupervisedtemporaldomainadaptation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..2c72d5da595cdd4916e0c885dd0f14e2aa80433f
--- /dev/null
+++ b/actionsegmentationwithjointselfsupervisedtemporaldomainadaptation/full.md
@@ -0,0 +1,267 @@
+# Action Segmentation with Joint Self-Supervised Temporal Domain Adaptation
+
+Min-Hung Chen $^{1*}$ Baopu Li $^{2}$ Yingze Bao $^{2}$ Ghassan AlRegib $^{1}$ Zsolt Kira $^{1}$ $^{1}$ Georgia Institute of Technology $^{2}$ Baidu USA
+
+# Abstract
+
+Despite the recent progress of fully-supervised action segmentation techniques, the performance is still not fully satisfactory. One main challenge is the problem of spatiotemporal variations (e.g. different people may perform the same activity in various ways). Therefore, we exploit unlabeled videos to address this problem by reformulating the action segmentation task as a cross-domain problem with domain discrepancy caused by spatio-temporal variations. To reduce the discrepancy, we propose Self-Supervised Temporal Domain Adaptation (SSTDA), which contains two self-supervised auxiliary tasks (binary and sequential domain prediction) to jointly align cross-domain feature spaces embedded with local and global temporal dynamics, achieving better performance than other Domain Adaptation (DA) approaches. On three challenging benchmark datasets (GTEA, 50Salads, and Breakfast), SSTDA outperforms the current state-of-the-art method by large margins (e.g. for the F1@25 score, from $59.6\%$ to $69.1\%$ on Breakfast, from $73.4\%$ to $81.5\%$ on 50Salads, and from $83.6\%$ to $89.1\%$ on GTEA), and requires only $65\%$ of the labeled training data for comparable performance, demonstrating the usefulness of adapting to unlabeled target videos across variations. The source code is available at https://github.com/cmhungsteve/SSTDA.
+
+# 1. Introduction
+
+The goal of action segmentation is to simultaneously segment videos by time and predict an action class for each segment, leading to various applications (e.g. human activity analyses). While action classification has shown great progress given the recent success of deep neural networks [38, 28, 27], temporally locating and recognizing action segments in long videos is still challenging. One main challenge is the problem of spatio-temporal variations of human actions across videos [16]. For example, different people may make tea in different personalized styles even if the given recipe is the same. The intra-class variations
+
+
+Figure 1: An overview of the proposed Self-Supervised Temporal Domain Adaptation (SSTDA) for action segmentation. "Source" refers to the data with labels, and "Target" refers to the data without access to labels. SSTDA can effectively adapt the source model trained with standard fully-supervised learning to a target domain by diminishing the discrepancy of embedded feature spaces between the two domains caused by spatio-temporal variations. SSTDA only requires unlabeled videos from both domains with the standard transductive setting, which eliminates the need for additional labels to obtain the final target model.
+
+cause degraded performance by directly deploying a model trained with different groups of people.
+
+Despite significant progress made by recent methods based on temporal convolution with fully-supervised learning [20, 6, 23, 8], the performance is still not fully satisfactory (e.g. the best accuracy on the Breakfast dataset is still lower than $70\%$ ). One method to improve the performance is to exploit knowledge from larger-scale labeled data [2]. However, manually annotating precise frame-by-frame actions is time-consuming and challenging. Another way is to design more complicated architectures but with higher costs of model complexity. Thus, we aim to address the spatiotemporal variation problem with unlabeled data, which are comparatively easy to obtain. To achieve this goal, we propose to diminish the distributional discrepancy caused by spatio-temporal variations by exploiting auxiliary unlabeled videos with the same types of human activities performed by different people. More specifically, to extend the framework of the main video task for exploiting auxiliary
+
+data [45, 19], we reformulate our main task as an unsupervised domain adaptation (DA) problem with the transductive setting [31, 5], which aims to reduce the discrepancy between source and target domains without access to the target labels.
+
+Recently, adversarial-based DA approaches [10, 11, 37, 44] show progress in reducing the discrepancy for images using a domain discriminator equipped with adversarial training. However, videos also suffer from domain discrepancy along the temporal direction [4], so using image-based domain discriminators is not sufficient for action segmentation. Therefore, we propose Self-Supervised Temporal Domain Adaptation (SSTDA), containing two self-supervised auxiliary tasks: 1) binary domain prediction, which predicts a single domain for each frame-level feature, and 2) sequential domain prediction, which predicts the permutation of domains for an untrimmed video. Through adversarial training with both auxiliary tasks, SSTDA can jointly align cross-domain feature spaces that embed local and global temporal dynamics, to address the spatiotemporal variation problem for action segmentation, as shown in Figure 1. To support our claims, we compare our method with other popular DA approaches and show better performance, demonstrating the effectiveness for aligning temporal dynamics by SSTDA. Finally, we evaluate our approaches on three datasets with high spatio-temporal variations: GTEA [9], 50Salads [35], and the Breakfast dataset [17]. By exploiting unlabeled target videos with SSTDA, our approach outperforms the current state-of-the-art methods by large margins and achieve comparable performance using only $65\%$ of labeled training data.
+
+In summary, our contributions are three-fold:
+
+1. Self-Supervised Sequential Domain Prediction: We propose a novel self-supervised auxiliary task, which predicts the permutation of domains for long videos, to facilitate video domain adaptation. To the best of our knowledge, this is the first self-supervised method designed for cross-domain action segmentation.
+2. Self-Supervised Temporal Domain Adaptation (SSTDA): By integrating two self-supervised auxiliary tasks, binary and sequential domain prediction, our proposed SSTDA can jointly align local and global embedded feature spaces across domains, outperforming other DA methods.
+3. Action Segmentation with SSTDA: By integrating SSTDA for action segmentation, our approach outperforms the current state-of-the-art approach by large margins, and achieves comparable performance using only $65\%$ of the labeled training data. Moreover, different design choices are analyzed to identify the key contributions of each component.
+
+# 2. Related Works
+
+Action Segmentation methods proposed recently are built upon temporal convolution networks (TCN) [20, 6, 23, 8] because of their ability to capture long-range dependencies across frames and their faster training compared to RNN-based methods. With the multi-stage pipeline, MS-TCN [8] performs hierarchical temporal convolutions to effectively extract temporal features and achieves state-of-the-art performance for action segmentation. In this work, we utilize MS-TCN as the baseline model and integrate the proposed self-supervised modules to further boost the performance without extra labeled data.
+
+Domain Adaptation (DA) has been popular recently especially with the integration of deep learning. With the two-branch (source and target) framework for most DA works, finding a common feature space between source and target domains is the ultimate goal, and the key is to design the domain loss to achieve this goal [5].
+
+Discrepancy-based DA [24, 25, 26] is one of the major classes of methods where the main goal is to reduce the distribution distance between the two domains. Adversarial-based DA [10, 11] is also popular with similar concepts as GANs [12] by using domain discriminators. With carefully designed adversarial objectives, the domain discriminator and the feature extractor are optimized through min-max training. Some works further improve the performance by assigning pseudo-labels to target data [32, 41]. Furthermore, Ensemble-based DA [34, 21] incorporates multiple target branches to build an ensemble model. Recently, Attention-based DA [39, 18] assigns attention weights to different regions of images for more effective DA.
+
+Unlike images, video-based DA is still under-explored. Most works concentrate on small-scale video DA datasets [36, 43, 14]. Recently, two larger-scale cross-domain video classification datasets along with the state-of-the-art approach are proposed [3, 4]. Moreover, some authors also proposed novel frameworks to utilize auxiliary data for other video tasks, including object detection [19] and action localization [45]. These works differ from our work by either different video tasks [19, 3, 4] or access to the labels of auxiliary data [45].
+
+Self-Supervised Learning has become popular in recent years for images and videos given the ability to learn informative feature representations without human supervision. The key is to design an auxiliary task (or pretext task) that is related to the main task and the labels can be self-annotated. Most of the recent works for videos design auxiliary tasks based on spatio-temporal orders of videos [22, 40, 15, 1, 42]. Different from these works, our proposed auxiliary task predicts temporal permutation for cross-domain videos, aiming to address the problem of spatio-temporal variations for action segmentation.
+
+
+Figure 2: Illustration of the baseline model and the integration with our proposed SSTDA. The frame-level features $f$ are obtained by applying the temporal convolution network $G_{f}$ to the inputs, and converted to the corresponding predictions $\hat{y}$ using a fully-connected layer $G_{y}$ to calculate the prediction loss $\mathcal{L}_{y}$ . The SSTDA module is integrated with $f$ to calculate the local and global domain losses, $\mathcal{L}_{ld}$ and $\mathcal{L}_{gd}$ for optimizing $f$ during training (see details in Section 3.2). Here we only show one stage in our multi-stage model.
+
+# 3. Technical Approach
+
+In this section, the baseline model which is the current state-of-the-art for action segmentation, MS-TCN [8], is reviewed first (Section 3.1). Then the novel temporal domain adaptation scheme consisting of two self-supervised auxiliary tasks, binary domain prediction (Section 3.2.1) and sequential domain prediction (Section 3.2.2), is proposed, followed by the final action segmentation model.
+
+# 3.1. Baseline Model
+
+Our work is built on the current state-of-the-art model for action segmentation, multi-stage temporal convolutional network (MS-TCN) [8]. For each stage, a single-stage TCN (SS-TCN) applies a multi-layer TCN, $G_{f}$ , to derive the frame-level features $f = \{f_{1}, f_{2}, \dots, f_{T}\}$ , and makes the corresponding predictions $\hat{\mathbf{y}} = \{\hat{y}_{1}, \hat{y}_{2}, \dots, \hat{y}_{T}\}$ using a fully-connected layer $G_{y}$ . By following [8], the prediction loss $\mathcal{L}_y$ is calculated based on the predictions $\hat{\mathbf{y}}$ , as shown in the left part of Figure 2. Finally, multiple stages of SS-TCNs are stacked to enhance the temporal receptive fields, constructing the final baseline model, MS-TCN, where each stage takes the predictions from the previous stage as inputs, and makes predictions for the next stage.
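+
+For reference, below is a minimal PyTorch sketch of one such stage, in the spirit of a single SS-TCN: a 1x1 input convolution, a stack of dilated residual temporal convolutions ($G_{f}$), and a frame-wise classifier ($G_{y}$). The layer count, channel width, input feature dimension, and class number are illustrative assumptions, not the authors' exact configuration.
+
+```python
+import torch
+import torch.nn as nn
+
+class DilatedResidualLayer(nn.Module):
+    """One dilated temporal convolution block over a (batch, channels, T) sequence."""
+    def __init__(self, channels, dilation):
+        super().__init__()
+        self.conv_dilated = nn.Conv1d(channels, channels, kernel_size=3,
+                                      padding=dilation, dilation=dilation)
+        self.conv_1x1 = nn.Conv1d(channels, channels, kernel_size=1)
+
+    def forward(self, x):
+        out = torch.relu(self.conv_dilated(x))
+        return x + self.conv_1x1(out)            # residual connection keeps length T
+
+class SingleStageTCN(nn.Module):
+    """Sketch of G_f (frame-level features) followed by G_y (frame-wise classifier)."""
+    def __init__(self, in_dim=2048, channels=64, num_layers=10, num_classes=11):
+        super().__init__()
+        self.conv_in = nn.Conv1d(in_dim, channels, kernel_size=1)
+        self.layers = nn.ModuleList(
+            [DilatedResidualLayer(channels, dilation=2 ** i) for i in range(num_layers)])
+        self.classifier = nn.Conv1d(channels, num_classes, kernel_size=1)   # G_y
+
+    def forward(self, x):                         # x: (batch, in_dim, T) frame features
+        f = self.conv_in(x)
+        for layer in self.layers:
+            f = layer(f)                          # frame-level features f from G_f
+        return f, self.classifier(f)              # f and per-frame predictions y-hat
+```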
+
+# 3.2. Self-Supervised Temporal Domain Adaptation
+
+Despite the promising performance of MS-TCN on action segmentation over previous methods, there is still a large room for improvement. One main challenge is
+
+the problem of spatio-temporal variations of human actions [16], causing the distributional discrepancy across domains [5]. For example, different subjects may perform the same action completely differently due to personalized spatio-temporal styles. Moreover, collecting annotated data for action segmentation is challenging and time-consuming. Thus, such challenges motivate the need to learn domain-invariant feature representations without full supervision. Inspired by the recent progress of self-supervised learning, which learns informative features that can be transferred to the main target tasks without external supervision (e.g. human annotation), we propose Self-Supervised Temporal Domain Adaptation (SSTDA) to diminish cross-domain discrepancy by designing self-supervised auxiliary tasks using unlabeled videos.
+
+To effectively transfer knowledge, the self-supervised auxiliary tasks should be closely related to the main task, which is cross-domain action segmentation in this paper. Recently, adversarial-based DA approaches [10, 11] show progress in addressing cross-domain image problems using a domain discriminator with adversarial training where domain discrimination can be regarded as a self-supervised auxiliary task since domain labels are self-annotated. However, directly applying image-based DA for video tasks results in sub-optimal performance due to the temporal information being ignored [4]. Therefore, the question becomes: How should we design the self-supervised auxiliary tasks to benefit cross-domain action segmentation? More specifically, the answer should address both cross-domain and action segmentation problems.
+
+To address this question, we first apply an auxiliary task binary domain prediction to predict the domain for each frame where the frame-level features are embedded with local temporal dynamics, aiming to address the cross-domain problems for videos in local scales. Then we propose a novel auxiliary task sequential domain prediction to temporally segment domains for untrimmed videos where the video-level features are embedded with global temporal dynamics, aiming to fully address the above question. Finally, SSTDA is achieved locally and globally by jointly applying these two auxiliary tasks, as illustrated in Figure 3.
+
+In practice, since the key for effective video DA is to simultaneously align and learn temporal dynamics, instead of separating the two processes [4], we integrate SSTDA modules to multiple stages instead of the last stage only, and the single-stage integration is illustrated in Figure 2.
+
+# 3.2.1 Local SSTDA
+
+The main goal of action segmentation is to learn frame-level feature representations that encode spatio-temporal information so that the model can exploit information from multiple frames to predict the action for each frame.
+
+
+Figure 3: The two self-supervised auxiliary tasks in SSTDA: 1) binary domain prediction: discriminate the domain of a single frame, 2) sequential domain prediction: predict a sequence of domains for an untrimmed video. These two tasks contribute to local and global SSTDA, respectively.
+
+Therefore, we first learn domain-invariant frame-level features with the auxiliary task binary domain prediction (Figure 3, left).
+
+Binary Domain Prediction: For a single stage, we feed the frame-level features from source and target domains $\pmb{f}^S$ and $\pmb{f}^T$ , respectively, to an additional shallow binary domain classifier $G_{ld}$ , to discriminate which domain the features come from. Since temporal convolution from previous layers encodes information from multiple adjacent frames to each frame-level feature, those frames contribute to the binary domain prediction for each frame. Through adversarial training with a gradient reversal layer (GRL) [10, 11], which reverses the gradient signs during back-propagation, $G_f$ will be optimized to gradually align the feature distributions between the two domains. Here we note $\hat{G}_{ld}$ as $G_{ld}$ equipped with GRL, as shown in Figure 4.
+
+Since this work is built on MS-TCN, integrating $\hat{G}_{ld}$ into the proper stages is critical for effective DA. From our investigation, the best performance is obtained when $\hat{G}_{ld}$ is integrated into the middle stages. See Section 4.3 for details.
+
+The overall loss function becomes a combination of the baseline prediction loss $\mathcal{L}_y$ and the local domain loss $\mathcal{L}_{ld}$ with reverse sign, which can be expressed as follows:
+
+$$
+\mathcal{L} = \sum^{N_s} \mathcal{L}_y - \sum^{\widetilde{N}_s} \beta_l \mathcal{L}_{ld} \tag{1}
+$$
+
+$$
+\mathcal{L}_{ld} = \frac{1}{T} \sum_{j=1}^{T} L_{ld}\left(G_{ld}(f_j), d_j\right) \tag{2}
+$$
+
+where $N_{s}$ is the total stage number in MS-TCN, $\widetilde{N_s}$ is the number of stages integrated with $\hat{G}_{ld}$ , and $T$ is the total frame number of a video. $L_{ld}$ is a binary cross-entropy loss function, and $\beta_{l}$ is the trade-off weight for local domain loss $\mathcal{L}_{ld}$ , obtained by following the common strategy as [10, 11].
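+
+Below is a compact sketch of the gradient reversal layer and the per-frame binary domain prediction of Eq. (2), assuming frame-level features of shape (batch, channels, T) from each domain; the classifier depth and width are illustrative, and the adversarial weight $\beta_{l}$ is passed in as a plain number.
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class GradReverse(torch.autograd.Function):
+    """Identity in the forward pass; multiplies gradients by -beta on the way back."""
+    @staticmethod
+    def forward(ctx, x, beta):
+        ctx.beta = beta
+        return x.view_as(x)
+
+    @staticmethod
+    def backward(ctx, grad_output):
+        return -ctx.beta * grad_output, None
+
+class BinaryDomainClassifier(nn.Module):
+    """G_ld with GRL: per-frame source/target prediction on gradient-reversed features."""
+    def __init__(self, channels=64, hidden=64):
+        super().__init__()
+        self.net = nn.Sequential(nn.Conv1d(channels, hidden, 1), nn.ReLU(),
+                                 nn.Conv1d(hidden, 2, 1))
+
+    def forward(self, f, beta):                   # f: (batch, channels, T)
+        return self.net(GradReverse.apply(f, beta))   # logits: (batch, 2, T)
+
+def local_domain_loss(logits_src, logits_tgt):
+    """Eq. (2)-style loss: average binary cross-entropy over frames of both domains."""
+    d_src = torch.zeros(logits_src.shape[0], logits_src.shape[2], dtype=torch.long)
+    d_tgt = torch.ones(logits_tgt.shape[0], logits_tgt.shape[2], dtype=torch.long)
+    return 0.5 * (F.cross_entropy(logits_src, d_src) + F.cross_entropy(logits_tgt, d_tgt))
+```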
+
+# 3.2.2 Global SSTDA
+
+Although frame-level features $f$ is learned using the context and dependencies from neighbor frames, the temporal receptive fields of $f$ are still limited, unable to represent full videos. Solely integrating DA into $f$ cannot fully address spatio-temporal variations for untrimmed long videos. Therefore, in addition to binary domain prediction for frame-level features, we propose the second self-supervised auxiliary task for video-level features: sequential domain prediction, which predicts a sequence of domains for video clips, as shown in the right part of Figure 3. This task is a temporal domain segmentation problem, aiming to predict the correct permutation of domains for long videos consisting of shuffled video clips from both source and target domains. Since this goal is related to both cross-domain and action segmentation problems, sequential domain prediction can effectively benefit our main task.
+
+More specifically, we first divide $f^S$ and $f^T$ into two sets of segments $F^S = \{f_a^S, f_b^S, \ldots\}$ and $F^T = \{f_a^T, f_b^T, \ldots\}$ , respectively, and then learn the corresponding two sets of segment-level feature representations $V^S = \{v_a^S, v_b^S, \ldots\}$ and $V^T = \{v_a^T, v_b^T, \ldots\}$ with Domain Attentive Temporal Pooling (DATP). All features $v$ are then shuffled and combined in random order and fed to a sequential domain classifier $G_{gd}$ equipped with GRL (noted as $\hat{G}_{gd}$ ) to predict the permutation of domains, as shown in Figure 4.
+
+Domain Attentive Temporal Pooling (DATP): The most straightforward method to obtain a video-level feature is to aggregate frame-level features using temporal pooling. However, not all frame-level features contribute equally to the overall domain discrepancy, as mentioned in [4]. Hence, we assign larger attention weights $w_{j}$ (calculated using $\hat{G}_{ld}$ in local SSTDA) to the features which have larger domain discrepancy, so that we can focus more on aligning those features. Finally, the attended frame-level features are aggregated with temporal pooling to generate the video-level feature $v$, which can be expressed as:
+
+$$
+v = \frac{1}{T^{\prime}} \sum_{j=1}^{T^{\prime}} w_j \cdot f_j \tag{3}
+$$
+
+where $T^{\prime}$ is the number of frames in a video segment. For more details, please refer to the supplementary.
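+
+A one-function sketch of Eq. (3): the per-frame attention weights are assumed to be precomputed from the outputs of the local domain classifier, with the exact weighting scheme left to the supplementary rather than reproduced here.
+
+```python
+import torch
+
+def domain_attentive_temporal_pooling(f_segment, w):
+    """Eq. (3) sketch: v = (1 / T') * sum_j (w_j * f_j) for one video segment.
+
+    f_segment: (T', C) frame-level features of the segment.
+    w:         (T',)   attention weights, assumed precomputed elsewhere.
+    """
+    return (w.unsqueeze(1) * f_segment).mean(dim=0)   # (C,) segment-level feature v
+```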
+
+Sequential Domain Prediction: By separately applying DATP to both source and target segments, respectively, a set of segment-level feature representations $\mathbf{V} = \{v_{a}^{S}, v_{b}^{S}, \dots, v_{a}^{T}, v_{b}^{T}, \dots\}$ are obtained. We then shuffle all the features in $\mathbf{V}$ and concatenate them into a feature to represent a long and untrimmed video $\mathbf{V}'$ , which contains video segments from both domains in random order. Finally, $\mathbf{V}'$ is fed into a sequential domain classifier $G_{gd}$ to predict the permutation of domains for the video segments. For example, if $\mathbf{V}' = [v_{a}^{S}, v_{a}^{T}, v_{b}^{T}, v_{b}^{S}]$ , the goal of $G_{gd}$ is to predict
+
+
+Figure 4: The overview of the proposed Self-Supervised Temporal Domain Adaptation (SSTDA). The inputs from the two domains are first encoded with local temporal dynamics using $G_{f}$ to obtain the frame-level features $\pmb{f}^{S}$ and $\pmb{f}^{T}$ , respectively. We apply local SSTDA on all $\pmb{f}$ using binary domain prediction $\hat{G}_{ld}$ . Besides, $\pmb{f}^{S}$ and $\pmb{f}^{T}$ are evenly divided into multiple segments to learn segment-level features $\pmb{V}^{S}$ and $\pmb{V}^{T}$ by DATP, respectively. Finally, the global SSTDA is applied on $\pmb{V}^{\prime}$ which is generated by concatenating shuffled $\pmb{V}^{S}$ and $\pmb{V}^{T}$ , using sequential domain prediction $\hat{G}_{gd}$ . $\mathcal{L}_{ld}$ and $\mathcal{L}_{gd}$ are the domain losses from $\hat{G}_{ld}$ and $\hat{G}_{gd}$ , respectively. $w$ corresponds to the attention weights for DATP, which are calculated from the outputs of $\hat{G}_{ld}$ . Here we use 8-frame videos and 2 segments as an example for this figure. Best views in colors.
+
+the permutation as $[0,1,1,0]$ . $G_{gd}$ is a multi-class classifier where the class number corresponds to the total number of all possible permutations of domains, and the complexity of $G_{gd}$ is determined by the segment number for each video (more analyses in Section 4.3). The outputs of $G_{gd}$ are used to calculate the global domain loss $\mathcal{L}_{gd}$ as below:
+
+$$
+\mathcal{L}_{gd} = L_{gd}\left(G_{gd}\left(\boldsymbol{V}^{\prime}\right), y_d\right) \tag{4}
+$$
+
+where $L_{gd}$ is also a standard cross-entropy loss function where the class number is determined by the segment number. Through adversarial training with GRL, sequential domain prediction also contributes to optimizing $G_{f}$ to align the feature distributions between the two domains.
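+
+The sketch below illustrates this auxiliary task for the two-segments-per-domain case used in the example above: segment-level features from both domains are shuffled, concatenated into $\mathbf{V}'$, and classified into one of the $(2m)! / (m!)^2$ possible domain orderings. The classifier shape is an illustrative assumption, and the GRL is omitted for brevity.
+
+```python
+import itertools
+import random
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+def domain_orderings(m=2):
+    """All distinct orderings of m source (0) and m target (1) segments."""
+    return sorted(set(itertools.permutations([0] * m + [1] * m)))   # 6 classes for m = 2
+
+def sequential_domain_loss(v_src, v_tgt, classifier):
+    """v_src, v_tgt: lists of m segment-level features (each of shape (C,)) from DATP."""
+    segments = [(v, 0) for v in v_src] + [(v, 1) for v in v_tgt]
+    random.shuffle(segments)                          # shuffle segments across domains
+    feats, domains = zip(*segments)
+    v_prime = torch.cat(feats)                        # concatenated video-level feature V'
+    classes = domain_orderings(len(v_src))
+    y_d = classes.index(tuple(domains))               # permutation label, e.g. [0, 1, 1, 0]
+    logits = classifier(v_prime.unsqueeze(0))         # G_gd (GRL omitted for brevity)
+    return F.cross_entropy(logits, torch.tensor([y_d]))   # Eq. (4)
+
+# Illustrative classifier: the input is 2m concatenated C-dimensional segment features.
+C, m = 64, 2
+classifier = nn.Linear(2 * m * C, len(domain_orderings(m)))
+```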
+
+There are some self-supervised learning works also proposing the concept of temporal shuffling [22, 42]. However, they predict temporal orders within one domain, aiming to learn general temporal information for video features. Instead, our method predicts the temporal permutation of cross-domain videos, which is shown with a dual-branch pipeline in Figure 4, and integrates with binary domain prediction to effectively address both cross-domain and action segmentation problems.
+
+# 3.2.3 Local-Global Joint Training
+
+Finally, we also adopt a strategy from [39] to minimize the class entropy for the frames that are similar across domains by adding a domain attentive entropy (DAE) loss $\mathcal{L}_{ae}$ . Please refer to the supplementary for more details.
+
+By adding the global domain loss $\mathcal{L}_{gd}$ (Equation (4)) and the attentive entropy loss $\mathcal{L}_{ae}$ into Equation (1), the overall loss of our final proposed Self-Supervised Temporal Domain Adaptation (SSTDA) can be expressed as follows:
+
+$$
+\mathcal{L} = \sum^{N_s} \mathcal{L}_y - \sum^{\widetilde{N}_s} \left(\beta_l \mathcal{L}_{ld} + \beta_g \mathcal{L}_{gd} - \mu \mathcal{L}_{ae}\right) \tag{5}
+$$
+
+where $\beta_{g}$ and $\mu$ are the weights for $\mathcal{L}_{gd}$ and $\mathcal{L}_{ae}$ , respectively.
+
+| | GTEA | 50Salads | Breakfast |
+| --- | --- | --- | --- |
+| subject # | 4 | 25 | 52 |
+| class # | 11 | 17 | 48 |
+| video # | 28 | 50 | 1712 |
+| leave-#-subject-out | 1 | 5 | 13 |
+
+# 4. Experiments
+
+To validate the effectiveness of the proposed methods in reducing spatio-temporal discrepancy for action segmentation, we choose three challenging datasets: GTEA [9], 50Salads [35], and Breakfast [17], which separate the training and validation sets by different people (noted as subjects) with leave-subjects-out cross-validation for evaluation, resulting in a large domain shift problem due to spatio-temporal variations. Therefore, we regard the training set as the Source domain and the validation set as the Target domain with the standard transductive unsupervised DA protocol [31, 5]. See the supplementary for more implementation details.
+
+# 4.1. Datasets and Evaluation Metrics
+
+The overall statistics of the three datasets are listed in Table 1. Three widely used evaluation metrics are chosen as follows [20]: frame-wise accuracy (Acc), segmental edit score, and segmental F1 score at the IoU threshold $k\%$ , denoted as $F1@k$ ( $k = \{10, 25, 50\}$ ). While Acc is the most common metric, edit and F1 score both consider the temporal relation between predictions and ground truths, better reflecting the performance for action segmentation.
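+
+For reference, below is a sketch of two of these metrics, frame-wise accuracy and segmental F1@k, following the widely used evaluation protocol for action segmentation [20]; the exact evaluation scripts used by the authors may differ in minor details such as tie-breaking.
+
+```python
+import numpy as np
+
+def frame_accuracy(pred, gt):
+    """Frame-wise accuracy over two equal-length label sequences."""
+    return float(np.mean(np.asarray(pred) == np.asarray(gt)))
+
+def get_segments(labels):
+    """Collapse a frame-wise label sequence into (label, start, end) segments."""
+    segments, start = [], 0
+    for t in range(1, len(labels) + 1):
+        if t == len(labels) or labels[t] != labels[start]:
+            segments.append((labels[start], start, t))
+            start = t
+    return segments
+
+def f1_at_k(pred, gt, k=0.25):
+    """Segmental F1@k: a predicted segment is a true positive if its IoU with an
+    unmatched ground-truth segment of the same class is at least k."""
+    pred_segs, gt_segs = get_segments(pred), get_segments(gt)
+    used = [False] * len(gt_segs)
+    tp = 0
+    for p_label, p_s, p_e in pred_segs:
+        best_iou, best_j = 0.0, -1
+        for j, (g_label, g_s, g_e) in enumerate(gt_segs):
+            if used[j] or g_label != p_label:
+                continue
+            inter = max(0, min(p_e, g_e) - max(p_s, g_s))
+            union = max(p_e, g_e) - min(p_s, g_s)
+            if inter / union > best_iou:
+                best_iou, best_j = inter / union, j
+        if best_iou >= k and best_j >= 0:
+            tp += 1
+            used[best_j] = True
+    fp, fn = len(pred_segs) - tp, len(gt_segs) - tp
+    precision = tp / (tp + fp) if tp + fp else 0.0
+    recall = tp / (tp + fn) if tp + fn else 0.0
+    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
+```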
+
+# 4.2. Experimental Results
+
+We first investigate the effectiveness of our approaches in utilizing unlabeled target videos for action segmentation. We choose MS-TCN [8] as the backbone model since it is the current state of the art for this task. "Source only" means the model is trained only with source labeled videos, i.e., the baseline model. And then our approach is compared to other methods with the same transductive protocol. Finally, we compare our method to the most recent action segmentation methods on all three datasets, and investigate how our method can reduce the reliance on source labeled data.
+
+Self-Supervised Temporal Domain Adaptation: First we investigate the performance of local SSTDA by integrating the auxiliary task binary domain prediction with the baseline model. The results on all three datasets are improved significantly, as shown in Table 2. For example, on the GTEA dataset, our approach outperforms the baseline by $4.3\%$ for F1@25, $3.2\%$ for the edit score and $3.6\%$ for the frame-wise accuracy. Although local SSTDA mainly works on the frame-level features, the temporal information is still encoded using the context from neighbor frames, helping address the variation problem for videos across domains.
+
+Table 1: The statistics of action segmentation datasets.
+
+| GTEA | F1@10 | F1@25 | F1@50 | Edit | Acc |
+| --- | --- | --- | --- | --- | --- |
+| Source only (MS-TCN)† | 86.5 | 83.6 | 71.9 | 81.3 | 76.5 |
+| Local SSTDA | 89.6 | 87.9 | 74.4 | 84.5 | 80.1 |
+| SSTDA‡ | 90.0 | 89.1 | 78.0 | 86.2 | 79.8 |
+| 50Salads | F1@10 | F1@25 | F1@50 | Edit | Acc |
+| Source only (MS-TCN)† | 75.4 | 73.4 | 65.2 | 68.9 | 82.1 |
+| Local SSTDA | 79.2 | 77.8 | 70.3 | 72.0 | 82.8 |
+| SSTDA‡ | 83.0 | 81.5 | 73.8 | 75.8 | 83.2 |
+| Breakfast | F1@10 | F1@25 | F1@50 | Edit | Acc |
+| Source only (MS-TCN)† | 65.3 | 59.6 | 47.2 | 65.7 | 64.7 |
+| Local SSTDA | 72.8 | 67.8 | 55.1 | 71.7 | 70.3 |
+| SSTDA‡ | 75.0 | 69.1 | 55.2 | 73.7 | 70.2 |
+
+Table 2: The experimental results for our approaches on three benchmark datasets. "SSTDA" refers to the full model, while "Local SSTDA" only contains binary domain prediction. $\dagger$ We achieve higher performance than reported in [8] when using the released code, so we use that as the baseline performance throughout the paper. $\ddagger$ Global SSTDA requires outputs from local SSTDA, so it is not evaluated alone.
+
+Despite the improvement from local SSTDA, integrating DA into frame-level features cannot fully address the problem of spatio-temporal variations for long videos. Therefore, we integrate our second proposed auxiliary task, sequential domain prediction, for untrimmed long videos. By jointly training with both auxiliary tasks, SSTDA can jointly align cross-domain feature spaces embedded with local and global temporal dynamics, and further improves over local SSTDA by significant margins. For example, on the 50Salads dataset, it outperforms local SSTDA by $3.8\%$ for F1@10, $3.7\%$ for F1@25, $3.5\%$ for F1@50, and $3.8\%$ for the edit score, as shown in Table 2.
+
+One interesting finding is that local SSTDA contributes to most of the frame-wise accuracy improvement for SSTDA because it focuses on aligning frame-level feature spaces. On the other hand, sequential domain prediction benefits aligning video-level feature spaces, contributing to further improvement for the other two metrics, which consider temporal relation for evaluation.
+
+Learning from Unlabeled Target Videos: We also compare SSTDA with other popular approaches [11, 26, 32, 41, 34, 21, 42] to validate the effectiveness of reducing spatiotemporal discrepancy with the same amount of unlabeled target videos. For the fair comparison, we integrate all these methods with the same baseline model, MS-TCN. For more implementation details, please refer to the supplementary.
+
+Table 3 shows that our proposed SSTDA outperforms all the other investigated DA methods in terms of the two metrics that consider temporal relation. We conjecture the main reason is that all these DA approaches are designed for cross-domain image problems.
+
+| Method | F1@10 | F1@25 | F1@50 | Edit |
+| --- | --- | --- | --- | --- |
+| Source only (MS-TCN) | 86.5 | 83.6 | 71.9 | 81.3 |
+| VCOP [42] | 87.3 | 85.9 | 70.1 | 82.2 |
+| DANN [11] | 89.6 | 87.9 | 74.4 | 84.5 |
+| JAN [26] | 88.7 | 87.6 | 73.1 | 83.1 |
+| MADA [32] | 88.6 | 86.7 | 75.8 | 83.5 |
+| MSTN [41] | 89.9 | 88.2 | 75.9 | 84.7 |
+| MCD [34] | 88.1 | 86.3 | 73.4 | 82.7 |
+| SWD [21] | 89.0 | 87.3 | 73.8 | 84.4 |
+| SSTDA | 90.0 | 89.1 | 78.0 | 86.2 |
+
+Although they are integrated with frame-level features which encode local temporal dynamics, the limited temporal receptive fields prevent them from fully addressing the temporal domain discrepancy. Instead, the sequential domain prediction in SSTDA is directly applied to the whole untrimmed video, helping to globally align the cross-domain feature spaces that embed longer temporal dynamics, so that spatio-temporal variations can be reduced more effectively.
+
+We also compare with the most recent video-based self-supervised learning method, [42], which can also learn temporal dynamics from unlabeled target videos. However, its performance is even worse than that of other DA methods, implying that temporal shuffling within a single domain does not effectively benefit cross-domain action segmentation.
+
+Comparison with Action Segmentation Methods: Here we compare recent methods to SSTDA trained under two settings: 1) full source labels, and 2) weak source labels.
+
+The first setting means we have labels for all the frames in source videos, and SSTDA outperforms all the previous methods on the three datasets with respect to all evaluation metrics. For example, SSTDA outperforms the current state-of-the-art fully-supervised method, MS-TCN [8], by large margins (e.g. $8.1\%$ for F1@25, $8.6\%$ for F1@50, and $6.9\%$ for the edit score on 50Salads; $9.5\%$ for F1@25, $8.0\%$ for F1@50, and $8.0\%$ for the edit score on Breakfast), as demonstrated in Table 4. Since no additional labeled data is used, these results indicate how our proposed SSTDA addresses the spatio-temporal variation problem with unlabeled videos to improve action segmentation performance.
+
+Given the significant improvement from exploiting unlabeled target videos, this implies the potential to train with a smaller number of labeled frames using SSTDA, which is our second setting. In this setting, we drop labeled frames from source domains with uniform sampling for training, and evaluate on the same length of validation data.
+
+Table 3: The comparison of different methods that can learn information from unlabeled target videos (on GTEA). All the methods are integrated with the same baseline model MS-TCN for fair comparison. Please refer to the supplementary for the results on other datasets.
+
+| GTEA | F1@10 | F1@25 | F1@50 | Edit | Acc |
+| --- | --- | --- | --- | --- | --- |
+| LCDC [29] | 75.4 | - | - | 72.8 | 65.3 |
+| TDRN [23] | 79.2 | 74.4 | 62.7 | 74.1 | 70.1 |
+| MS-TCN [8]† | 86.5 | 83.6 | 71.9 | 81.3 | 76.5 |
+| SSTDA (65%) | 85.2 | 82.6 | 69.3 | 79.6 | 75.7 |
+| SSTDA | 90.0 | 89.1 | 78.0 | 86.2 | 79.8 |
+| 50Salads | F1@10 | F1@25 | F1@50 | Edit | Acc |
+| TDRN [23] | 72.9 | 68.5 | 57.2 | 66.0 | 68.1 |
+| LCDC [29] | 73.8 | - | - | 66.9 | 72.1 |
+| MS-TCN [8]† | 75.4 | 73.4 | 65.2 | 68.9 | 82.1 |
+| SSTDA (65%) | 77.7 | 75.0 | 66.2 | 69.3 | 80.7 |
+| SSTDA | 83.0 | 81.5 | 73.8 | 75.8 | 83.2 |
+| Breakfast | F1@10 | F1@25 | F1@50 | Edit | Acc |
+| TCFPN [7] | - | - | - | - | 52.0 |
+| GRU [33] | - | - | - | - | 60.6 |
+| MS-TCN [8]† | 65.3 | 59.6 | 47.2 | 65.7 | 64.7 |
+| SSTDA (65%) | 69.3 | 62.9 | 49.4 | 69.0 | 65.8 |
+| SSTDA | 75.0 | 69.1 | 55.2 | 73.7 | 70.2 |
+
+Table 4: Comparison with the most recent action segmentation methods on all three datasets. SSTDA (65%) means training with 65% of total labeled training data. †Results from running the official code, as explained in Table 2.
+
+| Stage(s) | F1@10 | F1@25 | F1@50 | Edit | Acc |
+| --- | --- | --- | --- | --- | --- |
+| Source only | 86.5 | 83.6 | 71.9 | 81.3 | 76.5 |
+| {S1} | 88.6 | 86.2 | 73.6 | 84.2 | 78.7 |
+| {S2} | 89.1 | 87.2 | 74.4 | 84.3 | 79.1 |
+| {S3} | 89.2 | 87.3 | 72.3 | 83.8 | 78.9 |
+| {S4} | 88.1 | 86.4 | 73.0 | 83.0 | 78.8 |
+| {S1, S2} | 89.0 | 85.8 | 73.5 | 84.8 | 79.5 |
+| {S2, S3} | 89.6 | 87.9 | 74.4 | 84.5 | 80.1 |
+| {S3, S4} | 88.3 | 86.8 | 73.9 | 83.6 | 78.6 |
+
+Table 5: The experimental results of the design choice for local SSTDA (on GTEA). $\{S_n\}$: add $\hat{G}_{ld}$ to the $n$-th stage of MS-TCN, where smaller $n$ implies closer to the inputs.
+
+Our experiments indicate that, by integrating with SSTDA, only $65\%$ of the labeled training data is required to achieve comparable performance with MS-TCN, as shown in the "SSTDA $(65\%)$" row in Table 4. For the full experiments on labeled-data reduction, please refer to the supplementary.
+
+# 4.3. Ablation Study and Analysis
+
+Design Choice for Local SSTDA: Since we build our approach upon MS-TCN [8], a natural question arises: how can binary domain prediction be effectively integrated into a multi-stage architecture? To answer this, we first integrate $\hat{G}_{ld}$ into each stage, and the results show that the best performance is obtained when $\hat{G}_{ld}$ is integrated into the middle stages, such as $S2$ or $S3$, as shown in Table 5. $S1$ is not a good choice for DA because it corresponds to low-level features with less discriminability, where DA shows limited effects [24], and has smaller temporal receptive fields for videos.
+
+| Segment # | F1@10 | F1@25 | F1@50 | Edit | Acc |
+| --- | --- | --- | --- | --- | --- |
+| 1 | 89.4 | 87.7 | 75.4 | 85.3 | 79.2 |
+| 2 | 90.0 | 89.1 | 78.0 | 86.2 | 79.8 |
+| 3 | 89.7 | 87.6 | 75.4 | 85.2 | 79.2 |
+
+Table 6: The experimental results for different segment numbers of sequential domain prediction (on GTEA).
+
+
+Figure 5: The visualization of temporal action segmentation for our methods with color-coding (input example: make coffee). "MS-TCN" is the baseline model without any DA methods. We only highlight the action segments that are different from the ground truth for clear comparison.
+
+However, higher stages (e.g. $S4$) are not always better. We conjecture that this is because the model fits more to the source data, causing difficulty for DA. In our case, integrating $\hat{G}_{ld}$ into $S2$ provides the best overall performance.
+
+We also integrate binary domain prediction with multiple stages. However, multi-stage DA does not always guarantee improved performance. For example, $\{S1, S2\}$ has worse results than $\{S2\}$ in terms of F1@{10, 25, 50}. Since $\{S2\}$ and $\{S3\}$ provide the best single-stage DA performance, we use $\{S2, S3\}$ , which performs the best, as the final model for all our approaches in all the experiments.
+
+Design Choice for Global SSTDA: The most critical design decision for sequential domain prediction is the segment number for each video. In our implementation, we divide one source video into $m$ segments and do the same for one target video, and then apply $G_{gd}$ to predict the permutation of domains for these $2m$ video segments. Therefore, the category number of $G_{gd}$ equals the number of all permutations, $(2m)! / (m!)^2$. In other words, the segment number $m$ determines the complexity of the self-supervised auxiliary task. For example, $m = 3$ leads to a 20-way classifier, and $m = 4$ results in a 70-way classifier. Since a good self-supervised task should be neither naive nor overly complicated [30], we choose $m = 2$ as our final decision, which is supported by our experiments, as shown in Table 6.
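+
+The permutation counts quoted above can be checked directly:
+
+```python
+from math import factorial
+
+def num_domain_orderings(m):
+    # Orderings of m source and m target segments: (2m)! / (m!)^2
+    return factorial(2 * m) // (factorial(m) ** 2)
+
+print([num_domain_orderings(m) for m in (2, 3, 4)])   # [6, 20, 70]
+```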
+
+Segmentation Visualization: It is also common to evaluate the qualitative performance to ensure that the prediction results are aligned with human vision. First, we compare our approaches with the baseline model MS-TCN [8] and the ground truth, as shown in Figure 5.
+
+
+Figure 6: The visualization of temporal action segmentation for different DA methods (same input as Figure 5). "Source only" represents the baseline model, MS-TCN. Only the segments different from the ground truth are highlighted.
+
+MS-TCN fails to detect some pour actions in the first half of the video, and falsely classifies close as take in the latter part of the video. With local SSTDA, our approach can detect close in the latter part of the video. Finally, with full SSTDA, our proposed method also detects all pour action segments in the first half of the video. We then compare SSTDA with other DA methods, and Figure 6 shows that our result is the closest to the ground truth. The others either miss some actions or classify them incorrectly. For more qualitative results, please refer to the supplementary.
+
+# 5. Conclusions and Future Work
+
+In this work, we propose a novel approach to effectively exploit unlabeled target videos to boost action segmentation performance without target labels. To address the problem of spatio-temporal variations for videos across domains, we propose Self-Supervised Temporal Domain Adaptation (SSTDA) to jointly align cross-domain feature spaces embedded with local and global temporal dynamics by two self-supervised auxiliary tasks, binary and sequential domain prediction. Our experiments indicate that SSTDA outperforms other DA approaches by aligning temporal dynamics more effectively. We also validate the proposed SSTDA on three challenging datasets (GTEA, 50Salads, and Breakfast), and show that SSTDA outperforms the current state-of-the-art method by large margins and requires only $65\%$ of the labeled training data to achieve comparable performance, demonstrating the usefulness of adapting to unlabeled videos across variations. For future work, we plan to apply SSTDA to more challenging video tasks (e.g., spatio-temporal action localization [13]).
+
+# References
+
+[1] Unaiza Ahsan, Rishi Madhok, and Irfan Essa. Video jigsaw: Unsupervised learning of spatiotemporal context for video action recognition. In IEEE Winter Conference on Applications of Computer Vision (WACV), 2019. 2
+[2] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In IEEE conference on Computer Vision and Pattern Recognition (CVPR), 2017. 1
+[3] Min-Hung Chen, Zsolt Kira, and Ghassan AlRegib. Temporal attentive alignment for video domain adaptation. CVPR Workshop on Learning from Unlabeled Videos, 2019. 2
+[4] Min-Hung Chen, Zsolt Kira, Ghassan AlRegib, Jaekwon Woo, Ruxin Chen, and Jian Zheng. Temporal attentive alignment for large-scale video domain adaptation. In IEEE International Conference on Computer Vision (ICCV), 2019. 2, 3, 4
+[5] Gabriela Csurka. A comprehensive survey on domain adaptation for visual applications. In Domain Adaptation in Computer Vision Applications, pages 1-35. Springer, 2017. 2, 3, 6
+[6] Li Ding and Chenliang Xu. Tricornet: A hybrid temporal convolutional and recurrent network for video action segmentation. arXiv preprint arXiv:1705.07818, 2017. 1, 2
+[7] Li Ding and Chenliang Xu. Weakly-supervised action segmentation with iterative soft boundary assignment. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 7
+[8] Yazan Abu Farha and Jurgen Gall. Ms-tcn: Multi-stage temporal convolutional network for action segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 1, 2, 3, 6, 7, 8
+[9] Alireza Fathi, Xiaofeng Ren, and James M Rehg. Learning to recognize objects in egocentric activities. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011. 2, 6
+[10] Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning (ICML), 2015. 2, 3, 4
+[11] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The Journal of Machine Learning Research (JMLR), 17(1):2096-2030, 2016. 2, 3, 4, 6, 7
+[12] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NeurIPS), 2014. 2
+[13] Chunhui Gu, Chen Sun, David A Ross, Carl Vondrick, Caroline Pantofaru, Yeqing Li, Sudheendra Vijayanarasimhan, George Toderici, Susanna Ricco, Rahul Sukthankar, et al. Ava: A video dataset of spatio-temporally localized atomic visual actions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 8
+[14] Arshad Jamal, Vinay P Namboodiri, Dipti Deodhare, and KS Venkatesh. Deep domain adaptation in action space. In British Machine Vision Conference (BMVC), 2018. 2
+
+[15] Dahun Kim, Donghyeon Cho, and In So Kweon. Self-supervised video representation learning with space-time cubic puzzles. In AAAI Conference on Artificial Intelligence (AAAI), 2019. 2
+[16] Yu Kong and Yun Fu. Human action recognition and prediction: A survey. arXiv preprint arXiv:1806.11230, 2018. 1, 3
+[17] Hilde Kuehne, Ali Arslan, and Thomas Serre. The language of actions: Recovering the syntax and semantics of goal-directed human activities. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014. 2, 6
+[18] Vinod Kumar Kurmi, Shanu Kumar, and Vinay P Namboodiri. Attending to discriminative certainty for domain adaptation. In IEEE conference on Computer Vision and Pattern Recognition (CVPR), 2019. 2
+[19] Avisek Lahiri, Sri Charan Ragireddy, Prabir Biswas, and Pabitra Mitra. Unsupervised adversarial visual level domain adaptation for learning video object detectors from images. In IEEE Winter Conference on Applications of Computer Vision (WACV), 2019. 2
+[20] Colin Lea, Michael D Flynn, Rene Vidal, Austin Reiter, and Gregory D Hager. Temporal convolutional networks for action segmentation and detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. 1, 2, 6
+[21] Chen-Yu Lee, Tanmay Batra, Mohammad Haris Baig, and Daniel Ulbricht. Sliced Wasserstein discrepancy for unsupervised domain adaptation. In IEEE conference on Computer Vision and Pattern Recognition (CVPR), 2019. 2, 6, 7
+[22] Hsin-Ying Lee, Jia-Bin Huang, Maneesh Singh, and Ming-Hsuan Yang. Unsupervised representation learning by sorting sequences. In IEEE International Conference on Computer Vision (ICCV), 2017. 2, 5
+[23] Peng Lei and Sinisa Todorovic. Temporal deformable residual networks for action segmentation in videos. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 1, 2, 7
+[24] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. Learning transferable features with deep adaptation networks. In International Conference on Machine Learning (ICML), 2015. 2, 7
+[25] Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. Unsupervised domain adaptation with residual transfer networks. In Advances in Neural Information Processing Systems (NeurIPS), 2016. 2
+[26] Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. Deep transfer learning with joint adaptation networks. In International Conference on Machine Learning (ICML), 2017. 2, 6, 7
+[27] Chih-Yao Ma, Min-Hung Chen, Zsolt Kira, and Ghassan AlRegib. Ts-lstm and temporal-inception: Exploiting spatiotemporal dynamics for activity recognition. Signal Processing: Image Communication, 71:76-87, 2019. 1
+[28] Chih-Yao Ma, Asim Kadav, Iain Melvin, Zsolt Kira, Ghassan AlRegib, and Hans Peter Graf. Attend and interact: higher-order object interactions for video understanding. In IEEE conference on Computer Vision and Pattern Recognition (CVPR), 2018. 1
+
+[29] Khoi-Nguyen C Mac, Dhiraj Joshi, Raymond A Yeh, Jinjun Xiong, Rogerio S Feris, and Minh N Do. Learning motion in feature space: Locally-consistent deformable convolution networks for fine-grained action detection. In IEEE International Conference on Computer Vision (ICCV), 2019. 7
+[30] Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In European Conference on Computer Vision (ECCV), 2016. 8
+[31] Sinno Jialin Pan, Qiang Yang, et al. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering (TKDE), 22(10):1345-1359, 2010. 2, 6
+[32] Zhongyi Pei, Zhangjie Cao, Mingsheng Long, and Jianmin Wang. Multi-adversarial domain adaptation. In AAAI Conference on Artificial Intelligence (AAAI), 2018. 2, 6, 7
+[33] Alexander Richard, Hilde Kuehne, and Juergen Gall. Weakly supervised action learning with RNN based fine-to-coarse modeling. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. 7
+[34] Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tatsuya Harada. Maximum classifier discrepancy for unsupervised domain adaptation. In IEEE conference on Computer Vision and Pattern Recognition (CVPR), 2018. 2, 6, 7
+[35] Sebastian Stein and Stephen J McKenna. Combining embedded accelerometers with computer vision for recognizing food preparation activities. In ACM international joint conference on Pervasive and ubiquitous computing (UbiComp), 2013. 2, 6
+[36] Waqas Sultani and Imran Saleemi. Human action recognition across datasets by foreground-weighted histogram decomposition. In IEEE conference on Computer Vision and Pattern Recognition (CVPR), 2014. 2
+[37] Eric Tzeng, Judy Hoffman, Kate Saenko, and Trevor Darrell. Adversarial discriminative domain adaptation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. 2
+[38] Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In IEEE conference on Computer Vision and Pattern Recognition (CVPR), 2018. 1
+[39] Ximei Wang, Liang Li, Weirui Ye, Mingsheng Long, and Jianmin Wang. Transferable attention for domain adaptation. In AAAI Conference on Artificial Intelligence (AAAI), 2019. 2, 5
+[40] Donglai Wei, Joseph J Lim, Andrew Zisserman, and William T Freeman. Learning and using the arrow of time. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 2
+[41] Shaoan Xie, Zibin Zheng, Liang Chen, and Chuan Chen. Learning semantic representations for unsupervised domain adaptation. In International Conference on Machine Learning (ICML), 2018. 2, 6, 7
+[42] Dejing Xu, Jun Xiao, Zhou Zhao, Jian Shao, Di Xie, and Yueting Zhuang. Self-supervised spatiotemporal learning via video clip order prediction. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 2, 5, 6, 7
+[43] Tiantian Xu, Fan Zhu, Edward K Wong, and Yi Fang. Dual many-to-one-encoder-based transfer learning for cross-dataset human action recognition. Image and Vision Computing, 55:127-137, 2016. 2
+[44] Weichen Zhang, Wanli Ouyang, Wen Li, and Dong Xu. Collaborative and adversarial network for unsupervised domain adaptation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 2
+[45] Xiao-Yu Zhang, Haichao Shi, Changsheng Li, Kai Zheng, Xiaobin Zhu, and Lixin Duan. Learning transferable self-attentive representations for action recognition in untrimmed videos with weak supervision. In AAAI Conference on Artificial Intelligence (AAAI), 2019. 2
\ No newline at end of file
diff --git a/actionsegmentationwithjointselfsupervisedtemporaldomainadaptation/images.zip b/actionsegmentationwithjointselfsupervisedtemporaldomainadaptation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..d0123c2b3fd65165395c5ce0f97375db335a7de8
--- /dev/null
+++ b/actionsegmentationwithjointselfsupervisedtemporaldomainadaptation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dd256f51e0f02ef1a5deb0e92cbe756b5605ac5e77648efd6b884e5fcec1abb6
+size 502840
diff --git a/actionsegmentationwithjointselfsupervisedtemporaldomainadaptation/layout.json b/actionsegmentationwithjointselfsupervisedtemporaldomainadaptation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c6828267e8a799ef9ff6814764c334ca1c304fea
--- /dev/null
+++ b/actionsegmentationwithjointselfsupervisedtemporaldomainadaptation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dbf2674dbf73550dd861d2c86ff86597711e3cc03ea4431adabcef88513c3cf0
+size 404246
diff --git a/active3dmotionvisualizationbasedonspatiotemporallightrayintegration/e1df8bcf-9bf0-439f-ac65-311b81e44d47_content_list.json b/active3dmotionvisualizationbasedonspatiotemporallightrayintegration/e1df8bcf-9bf0-439f-ac65-311b81e44d47_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..1f9f0598f145166d44bc9a55d9e8078e8b3e032b
--- /dev/null
+++ b/active3dmotionvisualizationbasedonspatiotemporallightrayintegration/e1df8bcf-9bf0-439f-ac65-311b81e44d47_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c83e385cde3978521e32b3b54ff0247e4be59628ea26d8c17f7145a823ce1c8b
+size 79780
diff --git a/active3dmotionvisualizationbasedonspatiotemporallightrayintegration/e1df8bcf-9bf0-439f-ac65-311b81e44d47_model.json b/active3dmotionvisualizationbasedonspatiotemporallightrayintegration/e1df8bcf-9bf0-439f-ac65-311b81e44d47_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..90a7e8d4adc723dc1440790e41b667f057006be3
--- /dev/null
+++ b/active3dmotionvisualizationbasedonspatiotemporallightrayintegration/e1df8bcf-9bf0-439f-ac65-311b81e44d47_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:984f2f5f24f57cfba5a6b5bf8f2b0bc048b150e278d72dd3a659349f36d63d4c
+size 96879
diff --git a/active3dmotionvisualizationbasedonspatiotemporallightrayintegration/e1df8bcf-9bf0-439f-ac65-311b81e44d47_origin.pdf b/active3dmotionvisualizationbasedonspatiotemporallightrayintegration/e1df8bcf-9bf0-439f-ac65-311b81e44d47_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..6c8a2f0bfb447a3212cab247814be4dae522253a
--- /dev/null
+++ b/active3dmotionvisualizationbasedonspatiotemporallightrayintegration/e1df8bcf-9bf0-439f-ac65-311b81e44d47_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cf3423d14b198d5313c3a3b5093fd44a8509aa15692cabf76d4a510f76a721ce
+size 3594263
diff --git a/active3dmotionvisualizationbasedonspatiotemporallightrayintegration/full.md b/active3dmotionvisualizationbasedonspatiotemporallightrayintegration/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e11bb4005e7945b0860b30ba3d75397098937d6f
--- /dev/null
+++ b/active3dmotionvisualizationbasedonspatiotemporallightrayintegration/full.md
@@ -0,0 +1,445 @@
+# Active 3D Motion Visualization Based on Spatiotemporal Light-Ray Integration
+
+Fumihiko Sakaue Jun Sato Nagoya Institute of Technology Gokiso Showa Nagoya, 466-8555, Japan {sakaue, junsato}@nitech.ac.jp
+
+# Abstract
+
+In this paper, we propose a method of visualizing 3D motion with zero latency. This method achieves motion visualization by projecting special high-frequency light patterns on moving objects without using any feedback mechanisms. For this objective, we focus on the time integration of light rays in the sensing system of observers. It is known that the visual system of human observers integrates light rays in a certain period. Similarly, the image sensor in a camera integrates light rays during the exposure time. Thus, our method embeds multiple images into a time-varying light field, such that the observer of the time-varying light field observes completely different images according to the dynamic motion of the scene. Based on this concept, we propose a method of generating special high-frequency patterns of projector lights. After projection onto target objects with projectors, the image observed on the target changes automatically depending on the motion of the objects and without any scene sensing and data analysis. In other words, we achieve motion visualization without the time delay incurred during sensing and computing.
+
+# 1. Introduction
+
+In computer vision, 3D motion estimation has a long history, and many efficient methods have been proposed under various conditions [17, 4, 13, 14, 6]. Standard methods first extract point correspondences and optic flows in sequential images [7, 2, 10, 3]. The extracted point correspondences and optic flows are then used to compute 3D motion and the structures of the scene [17, 4, 14]. Active sensors, such as 3D range sensors, are also used to measure the 3D distance, which is then used to compute 3D motion [15]. The direct motion estimation method is also proposed based on the Doppler shift of the reflected light on the moving object [6].
+
+Although 3D motion estimation methods have become very advanced in recent years, all existing methods have an essential and unavoidable problem caused by their system
+
+structure: an inevitable time delay in measurements from real dynamic scenes. Indeed, the estimated 3D motions are not the current motions, but rather the past motions in the scene.
+
+In all existing methods, 3D motion is estimated in two steps: image data or 3D data is first obtained using cameras or 3D sensors; and 3D motion is then computed from the observed data based on changes in the observations during a specific time interval. As a result, these existing methods require a certain amount of time to estimate 3D motion, because they need to obtain at least two observations by sensors at different instants in time before computing the motion based on these multiple observations. Furthermore, estimating 3D motion becomes unstable with a shorter sampling interval of observations. This is because 3D motion is computed from the change in observations, and the signal-to-noise ratio of the change in observation degrades drastically in small intervals of observations. Hence, we cannot shorten this interval for stable motion estimations. Observation and computational costs are consequently never zero, even when expensive sensors and powerful computers are used.
+
+Observation and computation delays are severe problems in real-time computer vision applications, such as driver assistance systems for vehicles [16]. In these real-time systems, delayed motion estimations impose a delay on the driver's decisions and actions, risking serious accidents in traffic environments.
+
+In this paper, we propose a novel method of visualizing the dynamic information in a scene, by projecting images with projectors without using any sensors. With the proposed method, the appearance of target objects changes drastically according to their dynamic motion, as shown in Fig. 1. Furthermore, the proposed method can visualize motion information using complex images—for example, when an angry face appears on a forward-moving surface, whereas a smiling face appears on a backward-moving surface. All the processes can be done merely by projecting lights from projectors. As a result, the proposed method can achieve 3D motion estimation and visualization exclusively
+
+
+Figure 1. Motion visualization from image projection: the observed image (color in this figure) on a target surface changes spontaneously according to its motions without using any sensor-feedback system such as a camera or 3D sensor. In this example, the projected pattern (the string) changes according to the relative motion of the vehicle.
+
+by projecting lights and without any time delay.
+
+For this objective, we focus on the time integration of light rays in the sensing system of observers. It is known that the visual system of human observers integrates light rays in a certain period. Similarly, the image sensor in a camera integrates light rays during the exposure time. Thus, our method embeds multiple images into a time-varying light field, such that the observer of the time-varying light field observes completely different images according to the dynamic motion of the scene.
+
+The proposed method is a new framework for 3D motion estimation and representation. As such, it can be implemented for various applications. For example, to control vehicle headlights using the hardware described in [16], we can visualize the relative speed of other vehicles and determine the danger of collision without latency, merely by projecting lights from the vehicle's headlights.
+
+To our knowledge, this is the first paper to achieve motion visualization with zero latency, and we believe that this paper opens a new research field for 3D motion estimation in computer vision.
+
+# 2. Related Works
+
+Our method is closely related to coded light projection and light-field displays. In order to measure the 3D shape, coded light projections (i.e., structured light projections) have been studied for decades [1]. More recently, the first version of the Kinect sensor [18] used spatially coded lights to identify the corresponding points between projected lights and observed image points and to recover the 3D structure of a scene. Coded light projections have also been used for other objectives. Nayar et al. proposed a method of separating direct and global components using spatial high-frequency illumination [12]. The temporal coding of projected lights was also used for 3D measurements, among other applications [11]. Although many methods have been developed with coded light projections, coded lights are designed to be observed by sensors and analyzed only subsequently. Therefore, existing methods with coded light projections incur a computational cost to obtain final results. By contrast, the proposed method visualizes 3D motion without any computation, and hence without any computational delay. Thus, our framework is completely different from that of existing 3D measurements from coded light projections.
+
+Light-field displays have also been studied extensively in recent years [9]. The light field is a subspace of the 7D plenoptic function. The plenoptic function is a 7D function of 3D position, 2D orientation, 1D wavelength, and 1D time. In most cases, however, the light field is considered a 4D space (2D position and 2D orientation), assuming that there is no degradation of the light power as light travels, and neglecting variations in wavelength and time [9, 19, 8]. Thus, light-field displays are typically considered 4D devices. Wetzstein et al. [19] proposed a method of organizing a 3D display using multiple layered 2D display panels. Huang et al. [8] used a 4D light-field display to correct the visual aberrations of observers, showing deblurred images to near- or far-sighted observers without corrective lenses. Because existing light-field display techniques consider only the spatial position and orientation of lights, these techniques cannot encode visual information in the time domain. By contrast, we here consider light-ray integration in the time domain. We show that it is possible to encode visual information into the time domain of the light field and that the encoded visual information can be decoded by object motion. Thus, we consider the 3D motion of an object as a decoder of coded light.
+
+# 3. Observation Model
+
+We first consider an intensity observation model of observers such as humans and cameras. The image observed by them can be considered to be a process of sensing the light rays in the light field of the scene.
+
+Let $E(x,y,t)$ be a light ray from a point $(x,y)$ at time $t$ toward an orientation of an observer. If the observer observes the light ray at a particular moment, the ray, i.e., the light ray $E(x,y,t)$, is observed directly. However, general observers, e.g., humans and cameras, do not observe a moment of light rays, but rather only light rays integrated in the time domain. Indeed, the effect of this integration appears as motion blur when an observed object is in motion. Thus, to observe dynamic scenes, we need to consider integration with respect to the time domain as follows:
+
+$$
+I (x, y, T_{0}) = \int_{T_{0} - T}^{T_{0}} E (x, y, t) \, dt \tag{1}
+$$
+
+where $T_{0}$ is the current time and $T$ is the exposure time. In ordinary observers, such as cameras and humans, time integration of the observed light rays occurs in general. For example, although fluorescent lights blink at high frequency, humans do not perceive this. Similarly, video projectors with a micro-mirror array, such as digital light processing (DLP) projectors, can represent varying brightness and colors by using the flicker of light.
+
+Here, Eq. (1) has an ambiguity in $E$. That is, the input light $E$ that satisfies Eq. (1) is not unique. In other words, the observed intensity $I(x,y,T_0)$ is identical even when the input light $E$ changes, provided that Eq. (1) is satisfied. This is an important property of the observed intensity in this paper. In the proposed method, we embed multiple images into this ambiguity of the observed intensity, such that these images appear adaptively according to the motion of objects.
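+
+The following NumPy sketch (ours, with illustrative values) demonstrates this ambiguity: a constant illumination and a high-frequency flicker with the same integral produce the same observed intensity over one exposure, so an integrating observer cannot tell them apart.
+
+```python
+import numpy as np
+
+T = 12                                  # discrete time samples within one exposure
+rng = np.random.default_rng(0)
+
+E_constant = np.full(T, 0.5)            # constant illumination
+E_flicker = rng.uniform(0.0, 1.0, T)    # high-frequency flicker
+E_flicker *= E_constant.sum() / E_flicker.sum()   # rescale so both have the same integral
+
+I_constant = E_constant.sum()           # discrete approximation of the time integral in Eq. (1)
+I_flicker = E_flicker.sum()
+print(np.isclose(I_constant, I_flicker))  # True: the observed intensities are identical
+```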
+
+# 4. Motion Visualization Based on Light-Ray Integration
+
+We next consider image embedding based on the observation model described in the previous section. We first consider the geometric relationship between a projector and an observer to derive a method of embedding images. In this paper, we assume that the relative position between the projector and the observer is fixed. Then, we consider the epipolar geometry between the projector and the observer.
+
+# 4.1. Epipolar Geometry for Projectors and Observers
+
+Let $\mathbf{P}$ and $\mathbf{P}'$ be projection matrices of a projector and an observer, respectively. Suppose a 3D point $\mathbf{X}$ in the scene is projected to a point $\mathbf{x}$ in a projected image and is observed as $\mathbf{x}'$ by an observer such as a camera. Then, the relationship among them can be described as follows:
+
+$$
+\lambda \tilde {\mathbf {x}} = \mathbf {P} \tilde {\mathbf {X}} \tag {2}
+$$
+
+$$
+\lambda^ {\prime} \tilde {\mathbf {x}} ^ {\prime} = \mathbf {P} ^ {\prime} \tilde {\mathbf {X}} \tag {3}
+$$
+
+where $\lambda$ and $\lambda'$ are scale factors and $(\tilde{\cdot})$ denotes the homogeneous representation of a vector. Then, a geometric constraint, the so-called epipolar constraint [5], holds between $\mathbf{x}$ and $\mathbf{x}'$ as follows:
+
+$$
+\tilde {\mathbf {x}} ^ {\prime \top} \mathbf {F} \tilde {\mathbf {x}} = 0 \tag {4}
+$$
+
+where $\mathbf{F}$ denotes a $3\times 3$ fundamental matrix whose rank is 2.
+
+The epipolar constraint stipulates that a pair of corresponding points, $\mathbf{x}$ and $\mathbf{x}'$ , lie on epipolar lines $l$ and $l'$ , respectively, as shown in Fig. 2. These epipolar lines are computed as follows:
+
+$$
+\mathbf {l} ^ {\prime} = \mathbf {F x} \tag {5}
+$$
+
+$$
+\mathbf {l} = \mathbf {F} ^ {\top} \mathbf {x} ^ {\prime} \tag {6}
+$$
+
+The epipolar constraint shows that a 3D point observed at $\mathbf{x}'$ on the epipolar line $l'$ is illuminated by a projector pixel on the corresponding epipolar line $l$. This means that we only need to consider corresponding epipolar lines to analyze the relationship between the projector and the observer. Therefore, we restrict the following derivation to the epipolar lines.
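+
+For concreteness, the short NumPy sketch below (with an illustrative fundamental matrix) computes the epipolar lines of Eqs. (5) and (6) and verifies the constraint of Eq. (4) for a constructed correspondence.
+
+```python
+import numpy as np
+
+# An illustrative rank-2 fundamental matrix (here skew-symmetric for simplicity)
+F = np.array([[0.0,   -0.001,  0.1],
+              [0.001,  0.0,   -0.2],
+              [-0.1,   0.2,    0.0]])
+
+x = np.array([120.0, 80.0, 1.0])     # projector pixel (homogeneous coordinates)
+l_prime = F @ x                       # epipolar line in the observer image, Eq. (5)
+
+# Construct an observed point lying on l_prime; it then satisfies Eq. (4).
+x_prime = np.array([200.0, 0.0, 1.0])
+x_prime[1] = -(l_prime[0] * x_prime[0] + l_prime[2]) / l_prime[1]
+print(np.isclose(x_prime @ F @ x, 0.0))   # True
+
+l = F.T @ x_prime                     # corresponding epipolar line in the projector image, Eq. (6)
+```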
+
+
+Figure 2. Epipolar plane and epipolar lines: the epipolar plane is defined by a 3D point $\mathbf{X}$ and the optical centers of an observer and a projector. The epipolar lines are the intersections of image planes and the epipolar plane.
+
+
+Figure 3. Spatiotemporal image projected by a projector. The horizontal axis shows the epipolar line and the vertical axis shows time.
+
+
+(a) No motion
+
+
+(b) Forward motion and backward motion
+Figure 4. Change in integration in the spatiotemporal image: the black box in (a), blue box and red box in (b) show regions integrated in the observed intensity under each object motion.
+
+# 4.2. Motion Visualization by Light-Ray Integration
+
+We next consider the changes of observation when a target object moves. As described in the previous section, at point $\mathbf{x}^{\prime}$, an observer observes pixels on the corresponding epipolar line $l$. In addition, epipolar lines in an image do not intersect with each other except at the epipole. Therefore, we consider only a single epipolar line in this section.
+
+Let us now consider the case where the projector projects dynamic images. If we focus on an epipolar line, these dynamic images can be regarded as a spatiotemporal image, as shown in Fig.3. In this figure, the horizontal axis shows the epipolar line and the vertical axis shows changes over time. In the spatiotemporal image, we consider the change in observation caused by the motion of the target object.
+
+Let us consider the case where a projector illuminates a planar screen that is observed by a camera, as shown in Fig.4. The exposure time of the camera is equivalent to the time length of the spatiotemporal image. We first consider an observation where the planar target screen does not move, as shown in Fig.4(a). In this case, the corresponding pixel on an epipolar line does not change, and thus the observer observes the same point. Consequently, the observed image is the integration of the black box in the figure.
+
+We next consider the case where the planar screen moves forward, as shown in Fig.4(b). In this case, the corresponding point on the epipolar line moves to the left. Thus, the integration of the spatiotemporal image also changes, to the blue box in this figure. Therefore, the observed image changes drastically from that in the case without motion. Similarly, if the planar screen moves backward, the corresponding point moves to the right, and thus the integration of the spatiotemporal image changes to the red box. As a result, the observed image also changes.
+
+These three examples show that the observation changes drastically according to the target motion. That is, object motion can be visualized using the proposed method. Note that although a planar surface is used for this discussion, there is no constraint on the object shape, since the method is based on the change in depth of the observed target.
+
+# 4.3. Spatiotemporal Image Observation
+
+We next consider observational details in order to control the observed image under object motion.
+
+Let us consider the case where a 3D point $\mathbf{X}$ is illuminated by a point $\mathbf{x}$ on the projected image, and the 3D point $\mathbf{X}$ is observed at a point $\mathbf{x}'$ on the camera image. When the 3D point $\mathbf{X}$ moves toward the optical center of the camera with speed $v$ , the observed point $\mathbf{x}'$ does not change, whereas the illumination point $\mathbf{x}_t$ on the projected image at time $t$ changes as follows:
+
+$$
+\mathbf {x} _ {t} = \mathbf {x} + \alpha (v t) \mathbf {d} \tag {7}
+$$
+
+where $\mathbf{d}$ is a unit vector in the direction of the epipolar line, and $\alpha$ is a map from the change in depth to the change in point $\mathbf{x}$ . On the epipolar line, this can be rewritten in 1D notation as follows:
+
+$$
+x _ {t} = x + \alpha (v t) \tag {8}
+$$
+
+Assuming that the function $\alpha(x)$ is linear, the equation can be rewritten as follows:
+
+$$
+x _ {t} = x + t \alpha (v) \tag {9}
+$$
+
+Let $E(x,t)$ be the illumination value at point $x$ and time $t$. If the exposure time of the observer is $T$, the observed intensity $I(x')$ at $x'$ on the camera image can be computed as follows:
+
+$$
+I \left(x ^ {\prime}\right) = \int_ {0} ^ {T} E (x + t \alpha (v), t) d t \tag {10}
+$$
+
+This continuous notation can be approximated by discrete notation as follows:
+
+$$
+I \left(x ^ {\prime}\right) = \sum_ {t = 0} ^ {T - 1} E (x + t \alpha (v), t) \tag {11}
+$$
+
+Here, Eq.(11) indicates that the observed intensity changes according to the motion $v$ of the object. That is, the observed image for each motion can be controlled by the spatiotemporal image.
+
+Note that the function $\alpha$ describes the change of disparity caused by the change of depth. Although disparity is inversely proportional to depth and thus nonlinear, it can be approximated by a linear function within a small range.
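+
+The following NumPy sketch (ours, with an illustrative pattern and an integer per-frame pixel shift standing in for $\alpha(v)$) implements the discrete observation model of Eq. (11): the observed 1D image along an epipolar line is a velocity-dependent sheared sum over the spatiotemporal pattern, so one pattern yields different images for different motions.
+
+```python
+import numpy as np
+
+def observe(E, shift_per_frame):
+    """E: (T, N) spatiotemporal pattern along one epipolar line;
+    shift_per_frame: integer disparity shift alpha(v) in pixels per time sample."""
+    T, N = E.shape
+    I = np.zeros(N)
+    for t in range(T):
+        for x in range(N):
+            src = x + t * shift_per_frame     # projector pixel illuminating x' at time t, Eq. (11)
+            if 0 <= src < N:
+                I[x] += E[t, src]
+    return I
+
+if __name__ == "__main__":
+    rng = np.random.default_rng(1)
+    E = rng.uniform(0.0, 1.0, size=(12, 64))  # 12 time samples integrated per exposure
+    I_static = observe(E, 0)                   # no motion
+    I_moving = observe(E, 1)                   # motion shifting the corresponding pixel by 1 px per frame
+    print(np.abs(I_static - I_moving).mean())  # the two observed images differ
+```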
+
+# 4.4. Projection Pattern Estimation
+
+We next consider a method of estimating projection patterns for motion visualization.
+
+Let us consider the case where we want to show an objective image $\hat{\mathbf{I}}_1$ to the observer when the target object moves with speed $v_{1}$ . Let $\hat{I}_1(x)$ be the intensity of $\hat{\mathbf{I}}_1$ at point $x$ . Then, the projected image for presenting this objective image can be obtained by minimizing the following evaluation value $\epsilon_{1}$ :
+
+$$
+\epsilon_ {1} = \sum_ {x = 1} ^ {N} \left(\hat {I} _ {1} (x) - \sum_ {t = 0} ^ {T - 1} E \left(x + t \alpha \left(v _ {1}\right), t\right)\right) ^ {2} \tag {12}
+$$
+
+where $N$ is the number of pixels on the epipolar line.
+
+This evaluation value can be minimized with an ordinary least-mean-squares (LMS) method. However, projectors cannot project negative intensities in general, and they have limited intensity:
+
+$$
+0 \leq I (x) \leq I _ {\max } \tag {13}
+$$
+
+where $I_{\mathrm{max}}$ is the maximum intensity value. A constrained LMS can solve this problem. In addition, we degrade the contrast of the objective image; by this degradation, the range of the projection intensity is virtually enhanced relative to the objective image.
+
+We next consider the case where the target object moves with $M$ different speeds $v_{i}(i = 1,\dots ,M)$ , and the observer observes $M$ different images $\hat{\mathbf{I}}_i(i = 1,\dots ,M)$ according to the motion. The projected images for such an observation can be derived by minimizing the following $\epsilon$ :
+
+$$
+\epsilon = \sum_ {i = 1} ^ {M} \sum_ {x = 1} ^ {N} \left(\hat {I} _ {i} (x) - \sum_ {t = 0} ^ {T - 1} E \left(x + t \alpha \left(v _ {i}\right), t\right)\right) ^ {2} \tag {14}
+$$
+
+
+Figure 5. Experimental environment in the outdoor scene: (a) camera and projector in a vehicle; (b) experimental scene.
+
+Note that the number of time samples $T$ within the exposure should be no smaller than $M$, because the equations do not have an exact solution if $T < M$. The number of samples can be increased by using a high-frequency projector. By projecting the derived spatiotemporal patterns, different motions can be visualized as different images.
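+
+As one possible realization of this constrained minimization (the constrained LMS mentioned above), the sketch below (ours) builds the linear system relating the flattened spatiotemporal pattern to the $M$ objective images and solves it with bounded linear least squares via SciPy's `lsq_linear`; the target images, speeds, and the linear disparity map are illustrative assumptions, not the authors' settings.
+
+```python
+import numpy as np
+from scipy.optimize import lsq_linear
+
+T, N = 12, 64                       # time samples per exposure, pixels per epipolar line
+speeds_px = [-1, 0, 1]              # alpha(v_i): per-frame pixel shifts for the M target motions
+targets = [np.full(N, 0.2), np.full(N, 0.5), np.full(N, 0.8)]   # objective images (illustrative)
+I_max = 1.0
+
+# Build A so that A @ vec(E) stacks the observed images of Eq. (11) for all M speeds,
+# where vec(E) is the flattened (T, N) projection pattern.
+rows, cols, b = [], [], []
+row = 0
+for shift, target in zip(speeds_px, targets):
+    for x in range(N):
+        for t in range(T):
+            src = x + t * shift
+            if 0 <= src < N:
+                rows.append(row)
+                cols.append(t * N + src)
+        b.append(target[x])
+        row += 1
+
+A = np.zeros((row, T * N))
+A[rows, cols] = 1.0                 # each valid (speed, pixel, time) term contributes once
+res = lsq_linear(A, np.array(b), bounds=(0.0, I_max))   # minimize Eq. (14) with intensities in [0, I_max]
+E = res.x.reshape(T, N)             # the spatiotemporal pattern to project over one exposure
+print(res.cost)
+```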
+
+# 5. Experimental Results
+
+# 5.1. Results in Outdoor Scene
+
+In this section, we show the efficiency of the proposed method by providing results from real image experiments. We first show the experimental results from an outdoor scene. In this experiment, a camera and a projector were mounted on a vehicle, as shown in Fig.5. The relative position between the camera and the projector was fixed in the vehicle. Unfortunately, the projector could project only 60 images per second. Therefore, we decreased the frame rate (fps) of the camera, thus virtually increasing the fps of the projector. The camera observed five images per second; therefore, 12 images were integrated into each observation.
+
+The projector pattern was generated such that the observer could see the three different images shown in Fig.6 according to the motion of the vehicle: Image (a) for a static scene, Image (b) for forward motion, and Image (c) for backward motion. We degraded the contrast of each image to enhance the image representation ability, since the projectable range of intensity was limited by Eq.(13). We derived the projected images using the proposed method. These images were transformed by projective transformations such that their horizontal axes were parallel to the epipolar lines shared with the camera. After the transformation, the projected images were observed by the camera. The vehicle moved forward and backward in front of a wall, and the camera observed images projected onto the wall. The wall had a slight 3D structure, as shown in Fig.5(b) and Fig.8.
+
+Figure 8 shows the results observed by the camera. In these results, three different images were observed corresponding to the vehicle's motion. The results indicate that our proposed method can visualize 3D motion without any sensing.
+
+Figure 6. Objective images for (a) a static scene, (b) forward motion, and (c) backward motion.
+
+Figure 7. Projected images generated by our proposed method.
+
+Figure 8. Images observed by a camera: (a) an observed image when the vehicle stopped, (b) an observed image when the vehicle moved forward, and (c) an image observed with backward motion.
+
+In addition, the results indicate that our method is robust to changes in speed, since the speed of the vehicle could not be controlled accurately. Furthermore, the scene included various types of noisy light rays from street lights, buildings, and so on. Despite this noise, our method worked well. This further indicates that our method is robust to noise. All the results indicate that our method can visualize relative 3D motion in the real world without any sensing or computing.
+
+# 5.2. Results in Indoor Scene
+
+We next show the experimental results for an indoor scene. In this experiment, a camera and a projector were placed as shown in Fig.9. The camera was used exclusively as an observer. The projector projected dynamic patterns generated by the proposed method.
+
+
+Figure 9. Experimental environment.
+
+Figure 10. Objective images for (a) a static object, (b) forward motion, and (c) backward motion.
+
+Figure 11. Images observed by a camera: (a) an observed image when the screen is static, (b) an observed image when the screen moved forward, and (c) an image observed with backward motion.
+
+The planar screen was placed on a moving stage and moved forward and backward iteratively. The speed of the motion was approximately $1\mathrm{cm / s}$. The camera and the projector were fixed in the scene, and only the screen was moved.
+
+The projector pattern was generated, such that the camera could see the three different images shown in Fig.6 according to the motion of the object: Image (a) for static object, Image (b) for forward motion, and Image (c) for backward motion.
+
+The observed results under three different types of object motion are shown in Fig.11. Although the observed images differ slightly from the objective images, we find that the proposed method provides us with entirely different images for forward and backward motion and that the difference between these motions is visible from the images. In addition, the camera could observe almost the same image even when the camera position moved slightly. This indicates that our proposed method is robust against movement of the camera and the projector. Details of this result are given in the supplemental material.
+
+We next show the result when the target screen is more complicated.
+
+
+Figure 12. Observed images on the colored screen: (a) and (b) are the images observed when the screen moved forward and backward, respectively.
+
+First, we projected the proposed patterns onto a colored screen, as shown in Fig.12. The color patterns were printed on the screen, and our proposed patterns were projected onto it. In this case, reflected light rays were attenuated by the albedo of the screen, such that the observed patterns could change accordingly. Figure 12 shows the observed results when the screen moved (a) forward and (b) backward. In these results, the color pattern on the screen and the projected images were observed simultaneously. This is because the number of integrated pixels for each observed pixel was not very large; consequently, the observed pixels did not change drastically, even when the albedo of the screen was not white. Note that the observed results differ from the target images because the observed images include not only the projected patterns but also the patterns printed on the screen. However, human eyes can perceive these projected images, even when the screen is not white, owing to their high adaptation abilities. Thus, our proposed method can appropriately project images even when the target objects are colored.
+
+In addition, the colored patterns on the screen were slightly blurred because the target screen moved slowly. However, the projected patterns could be clearly observed because they were computed to yield clear observations when the screen moved. Therefore, our proposed method provides clearly observable images even when the target screen moves.
+
+We next show the case where the target object is not planar, as shown in Fig. 13. In this case, reflected light rays are attenuated by changes in the angle between the light ray and the surface. Figure 13 shows the observed results for forward motion and backward motion. In these results, we can observe the target images on the screen object for the same reasons as those mentioned above. Although the non-planar screen distorted the observed images, the target images could be recognized appropriately. This indicates that our method can achieve motion visualization even if the target object is not planar.
+
+Because these results can be obtained merely by projecting images toward moving objects, the proposed method is efficient at visualizing object motion without latency.
+
+# 6. Evaluation
+
+# 6.1. Speed Change
+
+We evaluated the accuracy of our proposed method. For a quantitative evaluation, we used a synthetic environment. In the synthetic environment, we simulated a projector, a camera, and a planar object, as shown in Fig.15. The planar object was moved toward the camera at various speeds. The epipolar lines in the projected image and the observed image were parallel to the horizontal axis of the images, and the image point in the observed image moved by 1 pixel when the target object moved approximately $1\mathrm{mm}$ toward the camera.
+
+We first examined how the observed image changes with the motion of the target object. As shown in Eq. (14), our proposed method essentially visualizes discrete 3D motions. However, general objects move at continuous speeds in the real world. Therefore, we examined how the projected image was observed when the target object moved at speeds different from the target speeds. In this experiment, three images were used as objective images, as shown in Fig.11: backward $(-5\mathrm{mm / s})$, static, and forward $(5\mathrm{mm / s})$ motion. These were observed at several speeds differing from the target speeds. Figure 14 shows the observed images at each speed.
+
+The results show that clear images could be observed when the target object moved at the same speed as a target speed. In addition, images similar to the objective images could be observed even when the speed of the target differed slightly from the target speed. This indicates that our proposed method is robust to changes in the target speed. Furthermore, the observed results changed gradually when the speed of the target changed. For example, a morphed image of the parrots and Lena was observed when the target moved at $6.0\mathrm{mm / s}$. This indicates that our proposed method can represent not only discrete 3D motion but also continuous motion. That is, users will perceive morphed motion from a morphed observed image.
+
+Figure 13. Observed images on the non-planar surface: (a) and (b) are the images observed when the screen moved forward and backward, respectively.
+
+Figure 14. Observed results at each speed (from $-7.0\mathrm{mm/s}$ to $7.0\mathrm{mm/s}$).
+
+Figure 15. Synthetic environment.
+
+
+Figure 16. Objective images for five different motion speeds: (a) 0.00, (b) 1.25, (c) 2.50, (d) 3.75, and (e) 5.00 mm/s.
+
+# 6.2. Resolution
+
+We next evaluated the resolution of the visualized speed. In this experiment, the speed of the object motion was changed from $0\mathrm{mm / s}$ to $5\mathrm{mm / s}$ at intervals of $1.25\mathrm{mm / s}$. Thus, five different motion speeds were visualized by the proposed method. The objective images for these five motion speeds are shown in Fig. 16. The projected images for visualizing these motions were generated and projected by the proposed method. Figure 17 shows the observed images under these five different motion speeds. In this environment, we evaluated how the observed images varied with the target speed.
+
+
+Figure 17. Observed images for the five different motion speeds: (a) 0.00, (b) 1.25, (c) 2.50, (d) 3.75, and (e) 5.00 mm/s.
+
+Figure 18. Errors between the observed image and the objective image at five different speeds. The horizontal axis shows the speed of the object, and each line shows the root-mean-square error between the observed image and each objective image under different object speeds.
+
+Figure 18 shows the error of the observed image with respect to each objective image. For example, the green line shows the root-mean-square error (RMSE) between the observed image and the objective image for $1.25\mathrm{mm / s}$; this error reaches its minimum at a speed of $1.25\mathrm{mm / s}$, as we expected. From this graph, we find that the RMSE for the true motion is very small compared to those for the other motions. Thus, the proposed method can visualize several types of motion.
+
+# 6.3. Frame Rate
+
+Finally, we evaluated the relationship between the frame rate of the projector and the accuracy of motion representation. The frame rate of the projector was varied from 5 fps to 50 fps, while the frame rate of the observer was fixed at 1 fps. The number of motions distinguished by the proposed method was also varied, from two to five. Under these conditions, the RMSE of the observed image with respect to the objective image was evaluated. Figure 19 shows the RMSE of the observed images.
+
+
+Figure 19. Relationship between the frame rate of the projector and the accuracy of motion representation. The horizontal axis shows the frame rate of the projector, and the vertical axis shows the root-mean-square error of the observed images. The number of motions distinguished by the proposed method was varied from two to five.
+
+From this figure, we find that the accuracy of motion representation improves as the frame rate of the projector increases. We also find that the accuracy of motion representation degrades as the number of motions increases. However, the accuracy of motion representation can be recovered by increasing the frame rate of the projector, even with a large number of motions. Thus, it is important to use high-frequency projectors to represent several types of object motion with the proposed method.
+
+# 7. Conclusions
+
+In this paper, we proposed a novel method of visualizing object motion using image projection. The proposed method does not require any sensing devices, such as cameras, and does not require any computation. With the proposed method, the appearance of objects changes according to their motion, and feedback from sensors is unnecessary. Consequently, there is no delay when visualizing real object motion. The proposed method is also robust: it does not require sensing information, it is free from observation errors, and it is free from the problem of incorrect correspondences.
+
+These features do not exist in conventional motion estimation methods, and we believe that this paper opens a new research field for 3D motion estimation in computer vision. In particular, our proposed concept of changing observed images based on light integration, without any sensing or computation, is highly useful. This concept can visualize observer motion as well as the motion of the targets, and it has various applications, for example, signboards that change spontaneously according to the observer's motion.
+
+# References
+
+[1] K.L. Boyer and A.C. Kak. Color-encoded structured light for rapid active ranging. IEEE transactions on Pattern Analysis and Machine Intelligence, 9(1):14-28, 1987.
+[2] T. Brox and J. Malik. Large displacement optical flow: descriptor matching in variational motion estimation. IEEE transactions on Pattern Analysis and Machine Intelligence, 33(3):500-513, 2011.
+[3] A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Smagt, D. Cremers, and T. Brox. Flownet: Learning optical flow with convolutional networks. In Proc. IEEE International Conference on Computer Vision, pages 2758-2766, 2015.
+[4] O.D. Faugeras. What can be seen in three dimensions with an uncalibrated stereo rig? In Proc. European Conference on Computer Vision, pages 563-578, 1992.
+[5] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2000.
+[6] Felix Heide, Wolfgang Heidrich, Matthias Hullin, and Gordon Wetzstein. Doppler time-of-flight imaging. ACM Trans. Graph., 34(4):36:1-36:11, 2015.
+[7] B.K.P. Horn and B.G. Schunck. Determining optical flow. Artificial Intelligence, 17:185-203, 1981.
+[8] F. Huang, G. Wetzstein, B.A. Barsky, and R. Raskar. Eyeglasses-free display: Towards correcting visual aberrations with computational light field displays. ACM Transaction on Graphics, 33(4), 2014.
+[9] M. Levoy and P. Hanrahan. Light field rendering. In Proc. SIGGRAPH, pages 31-42, 1996.
+[10] C. Liu, J. Yuen, A. Torralba, J. Sivic, and W.T. Freeman. Sift flow: Dense correspondence across different scenes. In Proc. European Conference on Computer Vision, volume 3, pages 28-42, 2008.
+[11] S.G. Narasimhan, S.J. Koppal, and S. Yamazaki. Temporal dithering of illumination for fast active vision. In Proc. European Conference on Computer Vision, pages 830-844, 2008.
+[12] S.K. Nayar, G. Krishnan, M.D. Grossberg, and R. Raskar. Fast separation of direct and global components of a scene using high frequency illumination. In Proc. SIGGRAPH, pages 935-944, 2006.
+[13] M. Pollefeys, M. Vergauwen, K. Cornelis, J. Tops, F. Verbiest, and L. Van Gool. Structure and motion from image sequences. In Proc. Conference on Optical 3-D Measurement Techniques, pages 251-258, 2001.
+[14] R.A. Newcombe, S.J. Lovegrove, and A.J. Davison. Dtam: Dense tracking and mapping in real-time. In IEEE International Conference on Computer Vision, 2011.
+[15] J. Shotton, A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman, and A. Blake. Real-time human pose recognition in parts from single depth images. In IEEE Conference on Computer Vision and Pattern Recognition, 2011.
+[16] Robert Tamburo, Eriko Nurvitadhi, Abhishek Chugh, Mei Chen, Anthony Rowe, Takeo Kanade, and Srinivasa G. Narasimhan. Programmable automotive headlights. In Proc. ECCV2014, pages 750-765, 2014.
+
+[17] C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: a factorization method. International Journal of Computer Vision, 9, 1992.
+[18] J. Webb and J. Ashley. Beginning Kinect Programming with the Microsoft Kinect SDK. Apress, 2012.
+[19] G. Wetzstein, D. Lanman, W. Heidrich, and R. Raskar. Layered 3d: Tomographic image synthesis for attenuation-based light field and high dynamic range displays. In Proc. SIGGRAPH, 2011.
\ No newline at end of file
diff --git a/active3dmotionvisualizationbasedonspatiotemporallightrayintegration/images.zip b/active3dmotionvisualizationbasedonspatiotemporallightrayintegration/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b76d2baf59bc861433d22246f93ec142697a5cb1
--- /dev/null
+++ b/active3dmotionvisualizationbasedonspatiotemporallightrayintegration/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0a7dc3987a68aa188ded59c63c4aa9ab9d38be32003e684f0f866793634065d3
+size 552682
diff --git a/active3dmotionvisualizationbasedonspatiotemporallightrayintegration/layout.json b/active3dmotionvisualizationbasedonspatiotemporallightrayintegration/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..bda8427aaee49aa405f866db02a0ffdf9d2faa37
--- /dev/null
+++ b/active3dmotionvisualizationbasedonspatiotemporallightrayintegration/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8c90355ee75f0dd483c6ab5556a69587ce76eee60dab1121d08ca6dc1d2d4bec
+size 464902
diff --git a/activemocapoptimizedviewpointselectionforactivehumanmotioncapture/ae3bd18f-285a-41e1-954d-eb344345772c_content_list.json b/activemocapoptimizedviewpointselectionforactivehumanmotioncapture/ae3bd18f-285a-41e1-954d-eb344345772c_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e760b8a139484a93464cec61f23e9c8267643eac
--- /dev/null
+++ b/activemocapoptimizedviewpointselectionforactivehumanmotioncapture/ae3bd18f-285a-41e1-954d-eb344345772c_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0c64fcb3992191c01a74d9e47db1bb31425d2c5ed51b21223c732de972aecd0b
+size 77843
diff --git a/activemocapoptimizedviewpointselectionforactivehumanmotioncapture/ae3bd18f-285a-41e1-954d-eb344345772c_model.json b/activemocapoptimizedviewpointselectionforactivehumanmotioncapture/ae3bd18f-285a-41e1-954d-eb344345772c_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..bf4e8b47814d8529feaee5e87869779c26409348
--- /dev/null
+++ b/activemocapoptimizedviewpointselectionforactivehumanmotioncapture/ae3bd18f-285a-41e1-954d-eb344345772c_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ed675cdf4a2278f9c10fa19c54a439a0440089eae223677a549adfb6e12dce6d
+size 95510
diff --git a/activemocapoptimizedviewpointselectionforactivehumanmotioncapture/ae3bd18f-285a-41e1-954d-eb344345772c_origin.pdf b/activemocapoptimizedviewpointselectionforactivehumanmotioncapture/ae3bd18f-285a-41e1-954d-eb344345772c_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f404cd6fd8106e80f3645796a656a67cccfccd3e
--- /dev/null
+++ b/activemocapoptimizedviewpointselectionforactivehumanmotioncapture/ae3bd18f-285a-41e1-954d-eb344345772c_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:83e6bde42d083b8af56719d867a862cff3abf4c50da8d92a95149346560168dc
+size 1959188
diff --git a/activemocapoptimizedviewpointselectionforactivehumanmotioncapture/full.md b/activemocapoptimizedviewpointselectionforactivehumanmotioncapture/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..b19abe937bfdce9009ad24eb94d317a1718257b6
--- /dev/null
+++ b/activemocapoptimizedviewpointselectionforactivehumanmotioncapture/full.md
@@ -0,0 +1,320 @@
+# ActiveMoCap: Optimized Viewpoint Selection for Active Human Motion Capture
+
+Sena Kiciroglu1 Helge Rhodin1,2 Sudipta N. Sinha3 Mathieu Salzmann1 Pascal Fua1
+1 CVLAB, EPFL 2 Imager Lab, UBC 3 Microsoft
+
+# Abstract
+
+The accuracy of monocular 3D human pose estimation depends on the viewpoint from which the image is captured. While freely moving cameras, such as on drones, provide control over this viewpoint, automatically positioning them at the location which will yield the highest accuracy remains an open problem. This is the problem that we address in this paper. Specifically, given a short video sequence, we introduce an algorithm that predicts which viewpoints should be chosen to capture future frames so as to maximize 3D human pose estimation accuracy. The key idea underlying our approach is a method to estimate the uncertainty of the 3D body pose estimates. We integrate several sources of uncertainty, originating from deep learning based regressors and temporal smoothness. Our motion planner yields improved 3D body pose estimates and outperforms or matches existing ones that are based on person following and orbiting.
+
+# 1. Introduction
+
+Monocular approaches for 3D human pose estimation have improved significantly in recent years, but their accuracy remains relatively low. In this paper, we explore the use of a moving camera whose motion we can control to resolve ambiguities inherent to monocular 3D reconstruction and to increase pose estimation accuracy. This is known as active vision and has received surprisingly little attention in the context of using modern approaches to body pose estimation. An active motion capture system, such as one based on a personal drone, would allow one to film themselves performing a physical activity and analyze their motion, for example to get feedback on their performance. When using only one camera, the quality of such feedback will strongly depend on selecting the most beneficial viewpoints for pose estimation. Fig. 1 depicts an overview of our approach based on a drone-based monocular camera.
+
+In this paper, we introduce an algorithm designed to continuously position a moving camera at optimal viewpoints
+
+
+Figure 1. Method overview. The 2D and 3D human pose is inferred from the current frame of the drone footage, using off the shelf CNNs. The 2D pose and relative 3D pose of the last $k$ frames is then used to optimize for the global 3D human motion. The next view of the drone is chosen so that the uncertainty of the human pose estimation from that view is minimized, which improves reconstruction accuracy.
+
+to maximize the 3D pose estimation accuracy for a freely moving subject. We achieve this by moving the camera in 6D pose space to viewpoints that maximize a utility function designed to predict reconstruction accuracy. However, the utility function cannot be defined in terms of reconstruction accuracy because doing so would require knowing the true person and camera position, leading to a chicken and egg problem. Instead we use prediction uncertainty as a surrogate for accuracy. This is a common strategy used in robot navigation systems for unknown scenes where the robot explores areas that are most incomplete in its internal map representation [20]. However, in our situation, estimating uncertainty is much more difficult since multiple sources of uncertainty need to be considered. These include uncertainties about what the subject will do next, the reliability of the pose estimation algorithm, and the accuracy of distance estimation along the camera's line of sight.
+
+Our key contribution is therefore a formal model that provides an estimate of the posterior variance and probabilistically fuses these sources of uncertainty with appropriate prior distributions. This has enabled us to develop an active motion capture technique that takes raw video footage as input from a moving aerial camera and continuously computes future target viewpoints for positioning the camera, in a way that is optimized for human motion capture. We demonstrate our algorithm in two different scenarios and compare it against standard heuristics, such as constantly rotating around the subject and maintaining a constant angle with respect to the subject. We find that when allowed to choose the next viewpoint without physical constraints, our algorithm outperforms the baselines consistently. For simulated drone flight, our results are on par with constant rotation, which we conclude is the best trajectory to choose in the case of no obstacles blocking the circular flight path. Our code is available at https://github.com/senakicir/ActiveMoCap
+
+# 2. Related work
+
+Most recent approaches to markerless motion capture rely on deep networks that regress 3D pose from monocular images [16, 17, 21, 38, 25, 31, 22, 44, 36, 34, 41, 39, 15]. While a few of these methods improve robustness by enforcing temporal consistency [23], none considers the effect that actively controlling the camera may have on accuracy. The methods most closely related to ours are therefore those that optimize camera placement in multi-camera setups and those that guide robots in a previously-unknown environment.
+
+Optimal Camera Placement for Motion Capture. Optimal camera placement is a well-studied problem in the context of static multi-view setups. Existing solutions rely on maximizing image resolution while minimizing self-occlusion of body parts [5, 2] or target point occlusion and triangulation errors [27]. However, these methods operate offline and on pre-recorded exemplar motions. This makes them unsuitable for motion capture using a single moving camera that films a priori unknown motions in a much larger scene where estimation noise can be high.
+
+In [24], multiple camera poses are optimized for the triangulation of joints in a dome environment using a self-supervised reinforcement learning approach. In our case, we consider the monocular problem. Moreover, our method is not learning-based; we obtain the next best view directly from the loss function itself.
+
+View Planning for Static and People Reconstruction. There has been much robotics work on active reconstruction and view planning. This usually involves moving so as to maximize information gain while minimizing motion cost, for example by discretizing space into a volumetric grid
+
+and counting previously unseen voxels [14, 8] or by accumulating estimation uncertainty [20]. When a coarse scene model is available, an optimal trajectory can be found using offline optimization [30, 13]. This has also been done to achieve desired aesthetic properties in cinematography [11]. Another approach is to use reinforcement learning to define policies [7] or to learn a metric [12] for later online path planning. These methods deal with rigid unchanging scenes, except the one in [6] that performs volumetric scanning of people during information gain maximization. However, this approach can only deal with very slowly moving people who stay where they are.
+
+Human Motion Capture on Drones. Drones can be viewed as flying cameras and are therefore natural targets for our approach. One problem, however, is that the drone must keep the person in its field of view. To achieve this, the algorithm of [45] uses 2D human pose estimation in a monocular video and non-rigid structure from motion to reconstruct the articulated 3D pose of a subject, while that of [18] reacts online to the subject's motion to keep them in view and to optimize for screen-space framing objectives. AirCap [32] calculates trajectories of multiple drones that aim to keep the person in view while simultaneously performing obstacle avoidance. This was extended in [35] so as to optimize multiple MAV trajectories by minimizing the uncertainty of the 3D human joint positions being tracked, but treating the 3D human pose estimation as an offline step. In [19], this was integrated into an autonomous system that actively directs a swarm of drones and simultaneously reconstructs 3D human and drone poses from onboard cameras. This strategy implements a pre-defined policy to stay at a constant distance from the subject and uses pre-defined view angles ($90^{\circ}$ between two drones) to maximize triangulation accuracy. This enables mobile large-scale motion capture, but relies on markers for accurate 2D pose estimation. In [40], three drones are used for markerless motion capture, using an RGBD video input for tracking the subject.
+
+In short, existing methods either optimize drone placement, but mostly for rigid scenes, or estimate 3D human pose, but without optimizing the camera placement. [24] performs optimal camera placement for multiple cameras. Here, we propose an approach that aims to find the best next drone location for a monocular view so as to maximize 3D human pose estimation accuracy.
+
+# 3. Active Human Motion Capture
+
+Our goal is to continuously position the camera in 6D pose space so that the images acquired by the camera can be used to achieve the best overall human pose estimation accuracy. What makes this problem challenging is that, when we decide where to send the camera, we do not yet know where
+
+the subject will be and in what position exactly. We therefore have to guess. To this end, we propose the following three-step approach depicted by Fig. 1:
+
+1. Estimate the 3D pose up to the current time instant.
+2. Predict the person's future location and 3D pose at the time the camera acquires the next image, including an uncertainty estimate.
+3. Select the optimal camera pose based on the uncertainty estimate and move the camera to that viewpoint.
+
+We will consider two ways the camera can move. In the first case, the camera can teleport from one location to the next without restriction, allowing us to explore the theoretical limits of our approach. Such a teleportation mode can be simulated using a multi-camera setup, enabling us to evaluate our model on both simulated data and real image datasets acquired from multiple viewpoints. In the second, more realistic scenario, the camera is carried by a simulated drone, and we must take into account physical limits about the motion it can undertake.
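+For illustration, the control loop below is a minimal sketch of this three-step procedure; the helper names (`estimate_pose`, `forecast_pose`, `select_next_view`) are hypothetical placeholders for the components detailed in the remainder of this section, not functions from our released code.
+
+```python
+# Minimal sketch of the active motion-capture loop (Sec. 3).
+# The helper functions are hypothetical placeholders for the components
+# described in Sections 3.1-3.3.
+
+def active_mocap_loop(camera, num_steps, k_past=6, m_future=3):
+    history = []                              # past frames and drone poses
+    for t in range(num_steps):
+        frame = camera.capture()              # acquire image at the current viewpoint
+        history.append(frame)
+
+        # 1. Estimate the 3D pose up to the current time instant.
+        theta_past = estimate_pose(history[-k_past:])
+
+        # 2. Forecast the subject's future location and pose (with uncertainty).
+        theta_future = forecast_pose(theta_past, m_future)
+
+        # 3. Select the candidate viewpoint with the lowest predicted
+        #    uncertainty and move the camera there.
+        best_view = select_next_view(theta_past, theta_future, camera.pose)
+        camera.move_to(best_view)
+```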
+
+# 3.1. 3D Pose Estimation
+
+The 3D pose estimation step takes as input the video feed from the on-board camera over the past $N$ frames and outputs, for each frame $t \in (1, \dots, N)$, the 3D human pose, represented as 15 3D points $\Theta^t \in \mathbb{R}^{15 \times 3}$, and the drone pose, as 3D position and rotation angles $\mathbf{D}^t \in \mathbb{R}^{2 \times 3}$. We estimate the 3D human pose using the real-time method of [3], which detects the 2D locations of the human's major joints in the image plane, $\mathbf{M}^t \in \mathbb{R}^{15 \times 2}$, followed by [36], which lifts these 2D predictions to a 3D pose, $\mathbf{L}^t \in \mathbb{R}^{15 \times 3}$. However, these per-frame estimates are error-prone and relative to the camera. To remedy this, we fuse 2D and 3D predictions with temporal smoothness and bone-length constraints in a space-time optimization. This exploits the fact that the drone is constantly moving, which helps disambiguate the individual estimates. The bone lengths, $\mathbf{b}_{\mathrm{calib}}$, of the subject's skeleton are computed during an a priori calibration stage, where the subject has to stand still for 20 seconds. This is performed only once for each subject. Formally, we optimize for the global 3D human pose by minimizing an objective function $E_{\mathrm{pose}}$, which we detail below.
+
+# 3.1.1 Formulation
+
+Our primary goal is to improve the global 3D human pose estimation of a subject changing position and pose. We optimize the time-varying pose trajectories across the last $k$ frames. Let $t$ be the last observed frame. We capture the trajectory of poses $\Theta^{t - k}$ to $\Theta^t$ in the pose matrix $\Theta$ . We then write an energy function
+
+$$
+E_{\mathrm{pose}} = E_{\mathrm{proj}}(\boldsymbol{\Theta}, \mathbf{M}, \mathbf{D}) + E_{\mathrm{lift}}(\boldsymbol{\Theta}, \mathbf{L}) + E_{\mathrm{smooth}}(\boldsymbol{\Theta}) + E_{\mathrm{bone}}(\boldsymbol{\Theta}, \mathbf{b}). \tag{1}
+$$
+
+The individual terms are defined as follows. The lift term, $E_{\mathrm{lift}}$ , leverages the 3D pose estimates, $\mathbf{L}$ , from LiftNet [36]. Because these are relative to the hip and without absolute scale, we subtract the hip position from our absolute 3D pose, $\Theta^t$ , and apply a scale factor $m$ to $\mathbf{L}$ to match the bone lengths $\mathbf{b}_{\mathrm{calib}}$ in the least-square sense. We write
+
+$$
+E_{\mathrm{lift}}(\boldsymbol{\Theta}, \mathbf{L}) = \omega_{l} \sum_{i = t - k}^{t} \left\| m \cdot \mathbf{L}^{i} - \left(\boldsymbol{\Theta}^{i} - \boldsymbol{\Theta}_{\mathrm{hip}}^{i}\right) \right\|_{2}^{2}, \tag{2}
+$$
+
+with $\omega_{l}$ its relative weight.
+
+The projection term measures the difference between the detected 2D joint locations and the projection of the estimated 3D pose in the least-square sense. We write it as
+
+$$
+E_{\mathrm{proj}}(\boldsymbol{\Theta}, \mathbf{M}, \mathbf{D}) = \omega_{p} \sum_{i = t - k}^{t} \left\| \mathbf{M}^{i} - \Pi\left(\boldsymbol{\Theta}^{i}, \mathbf{D}^{i}, \mathbf{K}\right) \right\|_{2}^{2}, \tag{3}
+$$
+
+where $\Pi$ is the perspective projection function, $\mathbf{K}$ is the matrix of camera intrinsic parameters, and $\omega_{p}$ is a weight that controls the influence of this term.
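+As an illustration, $\Pi$ can be realized as a standard pinhole projection. The sketch below assumes the joints have already been transformed into the camera frame defined by $\mathbf{D}^i$; that world-to-camera transform is omitted for brevity.
+
+```python
+import torch
+
+def project(theta_cam, K):
+    """Pinhole projection of 3D joints (N x 3, camera frame) with intrinsics K (3 x 3)."""
+    homog = theta_cam @ K.T              # rows: [fx*X + cx*Z, fy*Y + cy*Z, Z]
+    return homog[:, :2] / homog[:, 2:3]  # divide by depth to get (N, 2) pixel coordinates
+```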
+
+The smoothness term exploits that we are using a continuous video feed and that the motion is smooth by penalizing velocity computed by finite differences as
+
+$$
+E_{\mathrm{smooth}}(\boldsymbol{\Theta}) = \omega_{s} \sum_{i = t - k + 1}^{t} \left\| \boldsymbol{\Theta}^{i + 1} - \boldsymbol{\Theta}^{i} \right\|_{2}^{2}, \tag{4}
+$$
+
+with $\omega_{s}$ as its weight.
+
+To further constrain the solution space, we use our knowledge of the bone lengths $\mathbf{b}_{\mathrm{calib}}$ found during calibration and penalize deviations in length. The length of each bone $b$ in the set of all bones $b_{\mathrm{all}}$ is computed for frame $t$ as $\mathbf{b}_b^t = \left\| \boldsymbol{\Theta}_{b_1}^t - \boldsymbol{\Theta}_{b_2}^t \right\|_2$, where $b_1$ and $b_2$ denote the two joints connected by bone $b$. The bone length term is then defined as
+
+$$
+E_{\mathrm{bone}}(\boldsymbol{\Theta}) = \omega_{b} \sum_{i = t - k}^{t} \sum_{b \in b_{\mathrm{all}}} d\left(\mathbf{b}_{b}^{i}, \mathbf{b}_{\mathrm{calib}, b}\right), \tag{5}
+$$
+
+with $\omega_{b}$ as its weight.
+
+The complete energy $E_{\mathrm{pose}}$ is minimized by gradient descent at the beginning of each control cycle, to get a pose estimate for control. The resulting pose estimate $\hat{\Theta}$ is the maximum a posteriori estimate in a probabilistic view.
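+For concreteness, the snippet below is a minimal PyTorch sketch of this minimization. The tensor shapes, the `project` helper, the omission of the scale factor $m$ in the lift term, the squared-difference bone penalty, and the use of Adam are our own simplifications for illustration rather than the exact implementation.
+
+```python
+import torch
+
+def e_pose(theta, M, L, weights, b_calib, bones, project):
+    """theta: (k, 15, 3) pose trajectory; M: (k, 15, 2) 2D detections;
+    L: (k, 15, 3) hip-relative 3D detections; bones: (B, 2) joint index pairs;
+    project: callable mapping the trajectory to 2D (camera poses baked in)."""
+    wp, wl, ws, wb = weights
+    # Projection term (Eq. 3): reprojection error against the 2D detections.
+    e_proj = wp * ((M - project(theta)) ** 2).sum()
+    # Lift term (Eq. 2): hip-centred pose should match LiftNet (scale factor m omitted).
+    hip = theta[:, :1, :]                                # hip assumed to be joint 0
+    e_lift = wl * ((L - (theta - hip)) ** 2).sum()
+    # Smoothness term (Eq. 4): penalise finite-difference velocities.
+    e_smooth = ws * ((theta[1:] - theta[:-1]) ** 2).sum()
+    # Bone-length term (Eq. 5): keep limb lengths close to the calibrated ones.
+    lengths = (theta[:, bones[:, 0]] - theta[:, bones[:, 1]]).norm(dim=-1)
+    e_bone = wb * ((lengths - b_calib) ** 2).sum()
+    return e_proj + e_lift + e_smooth + e_bone
+
+def minimize_e_pose(theta_init, *args, steps=100, lr=1e-2):
+    theta = theta_init.clone().requires_grad_(True)
+    optimizer = torch.optim.Adam([theta], lr=lr)
+    for _ in range(steps):
+        optimizer.zero_grad()
+        loss = e_pose(theta, *args)
+        loss.backward()
+        optimizer.step()
+    return theta.detach()                                # MAP estimate of the trajectory
+```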
+
+# 3.1.2 Calibration Mode
+
+Calibration mode only has to be run once for each subject to find the bone lengths, $\mathbf{b}_{\mathrm{calib}}$ . In this mode, the subject is assumed to be stationary. The situation is equivalent to having the scene observed from multiple stationary cameras, such as in [29]. We find the single static pose $\Theta^{\mathrm{c}}$ that minimizes
+
+$$
+E_{\mathrm{calib}} = E_{\mathrm{proj}}\left(\boldsymbol{\Theta}^{\mathrm{c}}, \mathbf{M}, \mathbf{D}\right) + E_{\mathrm{symmetry}}\left(\boldsymbol{\Theta}^{\mathrm{c}}\right). \tag{6}
+$$
+
+
+Figure 2. Probabilistic interpretation. Left: A quadratic energy function and its associated Gaussian error distribution. Right: A complex energy function, which is locally approximated with a Gaussian (blue) near the minimum. The curvature of the energy function is a measure of the confidence in the estimate and the variance of the associated error distribution. The energy on the right is more constrained and its error distribution has a lower variance.
+
+In this objective, the projection term, $E_{\mathrm{proj}}$ , is akin to the one in our main formulation but acts on all calibration frames. It can be written as
+
+$$
+E_{\mathrm{proj}}\left(\boldsymbol{\Theta}^{\mathrm{c}}, \mathbf{M}, \mathbf{D}\right) = \omega_{p} \sum_{i = 0}^{t} \left\| \mathbf{M}^{i} - \Pi\left(\boldsymbol{\Theta}^{\mathrm{c}}, \mathbf{D}^{i}, \mathbf{K}\right) \right\|_{2}^{2}, \tag{7}
+$$
+
+with $\omega_{\mathrm{p}}$ controlling its influence. The symmetry term, $E_{\mathrm{symmetry}}$ , ensures that the left and right limbs of the estimated skeleton have the same lengths by penalizing the squared difference of their lengths.
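+A minimal sketch of such a symmetry penalty is given below; the joint index pairs and the 15-joint skeleton layout are assumptions made for illustration.
+
+```python
+import torch
+
+def e_symmetry(theta_c, left_bones, right_bones, weight=1.0):
+    """Penalise squared differences between corresponding left/right limb lengths.
+    theta_c: (15, 3) calibration pose; left_bones/right_bones: (P, 2) joint index
+    pairs, where pair p of left_bones mirrors pair p of right_bones."""
+    left = (theta_c[left_bones[:, 0]] - theta_c[left_bones[:, 1]]).norm(dim=-1)
+    right = (theta_c[right_bones[:, 0]] - theta_c[right_bones[:, 1]]).norm(dim=-1)
+    return weight * ((left - right) ** 2).sum()
+```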
+
+# 3.2. Next Best View Selection
+
+Our goal is to find the next best view for the drone at the future time step $t + 1$ , $\mathbf{D}^{t + 1}$ . We will model the uncertainty of the pose estimate in a probabilistic setting. Let $p(\Theta | \mathbf{M}, \mathbf{D}, \mathbf{L}, \mathbf{b})$ be the posterior distribution of poses. Then, $E_{\mathrm{pose}}$ is its negative logarithm and its minimization corresponds to maximum a posteriori (MAP) estimation. In this formalism, the sum of the individual terms in $E_{\mathrm{pose}}$ models that our posterior distribution is composed of independent likelihood and prior distributions. For a purely quadratic term, $E(x) = \omega (x - \mu)^2$ , the corresponding distribution $p_E = \exp (-E)$ is a Gaussian with mean $\mu$ and standard deviation $\sigma = \frac{1}{\sqrt{2\omega}}$ . Notably, $\sigma$ is directly linked to the weight $\omega$ of the energy. Most of our energy terms involve non-linear operations, such as perspective projection in $E_{\mathrm{proj}}$ , and therefore induce non-Gaussian distributions, as visualized in Fig. 2. Nevertheless, as for the simple quadratic case, the weights $\omega_p$ and $\omega_l$ of $E_{\mathrm{proj}}$ and $E_{\mathrm{lift}}$ can be interpreted as surrogates for the amount of measurement noise in the 2D and 3D pose estimates.
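+Matching the exponent of $p_E = \exp(-E)$ with that of a Gaussian density makes this correspondence explicit:
+
+$$
+\exp\left(-\omega (x - \mu)^2\right) \propto \exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right) \quad \Rightarrow \quad 2\sigma^2 \omega = 1 \quad \Rightarrow \quad \sigma = \frac{1}{\sqrt{2\omega}}.
+$$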
+
+A good measure of uncertainty is the sum of the eigenvalues of the covariance $\Sigma_{p}$ of the underlying distribution
+
+
+Figure 3. Uncertainty estimates for each candidate drone position, visualized on the left as 3D ellipsoids and on the right from a 2D top-down view. Each ellipse visualizes the eigenvalues of the hip location when incorporating an additional view from its displayed position. Here, the previous image was taken from the top (position 16) and uncertainty is minimized by moving to an orthogonal view. The complete distribution has more than three eigenvectors and cannot straightforwardly be visualized in 3D.
+
+$p$. The sum of the eigenvalues captures the spread of a multivariate distribution in a single number, similarly to the variance in the univariate case. To exploit this uncertainty estimation for our problem, we now extend $E_{\mathrm{pose}}$ to model not only the current and past poses but also the future ones, and condition it on the choice of the future drone position. To determine the best next drone pose, we sample candidate positions and choose the one with the lowest uncertainty. This process is illustrated in Figure 3.
+
+Future pose forecasting. In our setting, accounting for the dynamic motion of the person is key to successfully positioning the camera. We model the motion of the person from the current frame $t$ to the next $M$ future frames $t + i$, $i \in (1, \ldots, M)$, linearly, i.e. we aim to keep the velocity of the joints constant across our window of frames. The future pose vectors $\Theta^{t + i}$ are thus constrained by the smoothness and bone length terms, but for now not by any image-based term, since the future images are not yet available at time $t$. Minimizing this extended $E_{\mathrm{pose}}$ for the future poses gives the MAP poses $\hat{\Theta}^{t + i}$, which continue the motion $\hat{\Theta}^{t - k, \dots, t + M}$ smoothly while maintaining the bone lengths. As we predict only the near future, we have found this simple extrapolation to be sufficient. We leave as future work the use of more advanced methods [10, 42] to forecast further.
+
+Future measurement forecasting. We aim to find the future drone position, $\mathbf{D}^{t + 1}$, that reduces the posterior uncertainty, but we do not have footage from future viewpoints to condition the posterior on. Instead, we use the predicted future human poses $\hat{\Theta}^{t + i}$, $i \in (1, \dots, M)$, as a proxy for $\mathbf{L}^{t + i}$ and approximate $\mathbf{M}^{t + i}$ with the projection
+
+$$
+\hat{\mathbf{M}}^{t + 1} = \Pi\left(\hat{\boldsymbol{\Theta}}^{t + 1}, \mathbf{D}^{t + 1}, \mathbf{K}\right). \tag{8}
+$$
+
+At first glance, constraining the future pose on these virtual estimates in $E_{\mathrm{pose}}$ does not add anything since the terms $E_{\mathrm{proj}}$ and $E_{\mathrm{lift}}$ are zero at $\hat{\Theta}^{t + 1}$ by this construction. However, it changes the energy landscape and models how strongly a future observation would constrain the pose posterior. In particular, the projection term, $E_{\mathrm{proj}}$, narrows down the solution space in the direction of the image plane but cannot constrain it in the depth direction, creating an elliptical uncertainty as visualized in Fig. 3. The combined influence of all terms is conveniently modeled as the energy landscape of $E_{\mathrm{pose}}$ and its corresponding posterior.
+
+In our current implementation we assume that the 2D and 3D detections are affected by pose-independent noise, and their variance is captured by $\omega_{p}$ and $\omega_{l}$ , respectively. These factors could, in principle, be view dependent and in relation to the person's pose. For instance, [4] may be more accurate at reconstructing a front view than a side view. However, while estimating the uncertainty in deep networks is an active research field [26], predicting the expected uncertainty for an unobserved view has not yet been attempted for pose estimation. It is an interesting future work direction.
+
+Variance estimator. $E_{\mathrm{pose}}$ and its corresponding posterior have a complex form due to the projection and prior terms. Hence, the sought-after covariance $\Sigma_p$ cannot be expressed in closed form, and approximating it by sampling the space of all possible poses would be expensive. Instead, for the sake of uncertainty estimation, we approximate $p(\Theta | \mathbf{D}, \mathbf{M}, \mathbf{L}, \mathbf{b})$ locally with a Gaussian distribution $q$, such that
+
+$$
+\Sigma_{p(\boldsymbol{\Theta} | \mathbf{D}, \mathbf{M}, \mathbf{L})} \approx \Sigma_{q} \quad \text{where} \quad q = \mathcal{N}(\boldsymbol{\Theta} \,|\, \hat{\boldsymbol{\Theta}}, \Sigma_{q}), \tag{9}
+$$
+
+with $\hat{\Theta}$ and $\Sigma_q$ the Gaussian's mean and covariance matrix, respectively. Such an approximation is exemplified in Figure 2. For a Gaussian, the covariance of $q$ can be computed in closed form as the inverse of the Hessian of the negative log likelihood, $\Sigma_q = H_{-\log q}^{-1}$, where $H_{-\log q} = \left.\frac{\partial^{2}\,(-\log q(\boldsymbol{\Theta}))}{\partial \boldsymbol{\Theta}^{2}}\right|_{\boldsymbol{\Theta} = \hat{\boldsymbol{\Theta}}}$. Under the Gaussian assumption, $\Sigma_p$ is thereby well approximated by the inverse of the matrix of second-order gradients, $H_{E_{\mathrm{pose}}}^{-1}$, of $E_{\mathrm{pose}}$. Our experiments show that this simplification holds well for the introduced error terms.
+
+To select the view with minimum uncertainty among a set of $K$ candidate drone trajectories, we therefore
+
+1. optimize $E_{\mathrm{pose}}$ once to forecast the $M$ future human poses $\hat{\Theta}^{t + i}$, $1 \leq i \leq M$;
+2. use these forecasted poses to set the virtual measurements $\hat{\mathbf{L}}^{t + i}$ and $\hat{\mathbf{M}}^{t + i}$, $1 \leq i \leq M$, for each candidate trajectory $c$;
+3. compute the second-order derivatives of $E_{\mathrm{pose}}$ for each $c$, which form the Hessian $H_{c}$; and
+4. compute and sum up the respective eigenvalues to select the candidate with the least uncertainty, as sketched below.
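+The following PyTorch sketch illustrates this selection loop under the Gaussian approximation; `make_energy` is a hypothetical factory that builds the extended $E_{\mathrm{pose}}$ (with the virtual future observations) for a given candidate trajectory, and is not part of the formulation above.
+
+```python
+import torch
+from torch.autograd.functional import hessian
+
+def uncertainty(theta_hat, energy_fn):
+    """Sum of eigenvalues of the covariance, approximated as the inverse Hessian
+    of the energy evaluated at the MAP trajectory theta_hat."""
+    flat = theta_hat.reshape(-1)
+    H = hessian(lambda x: energy_fn(x.reshape(theta_hat.shape)), flat)
+    cov = torch.linalg.inv(H)                  # Sigma_q = H^{-1} (Gaussian assumption)
+    return torch.linalg.eigvalsh(cov).sum().item()
+
+def select_candidate(theta_hat, candidates, make_energy):
+    """Return the index of the candidate drone trajectory with the lowest uncertainty."""
+    scores = [uncertainty(theta_hat, make_energy(c)) for c in candidates]
+    return min(range(len(scores)), key=scores.__getitem__)
+```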
+
+Discussion. In principle, $p(\Theta | \mathbf{M}, \mathbf{D}, \mathbf{L}, \mathbf{b})$, i.e. the probability of the most likely pose, could also act as a measure of certainty, as implicitly used in [27] on a known motion trajectory to minimize triangulation error. However, the term $E_{\mathrm{proj}}(\hat{\Theta}, \hat{\mathbf{M}})$ of $E_{\mathrm{pose}}$ is zero for the future time step $t + i$, because the projection of $\hat{\Theta}^{t + i}$ is by construction equal to $\hat{\mathbf{M}}^{t + i}$ and therefore uninformative. Another alternative that has been proposed in the literature is to approximate the covariance through first-order estimates [37], as a function of the Jacobian matrix. However, since the first-order gradients of $E_{\mathrm{proj}}$ also vanish at the MAP estimate, this approximation is not possible in our case.
+
+# 3.3. Drone Control Policies and Flight Model
+
+In the experiments where we simulate drone flight, the algorithm decides between 9 candidate trajectories in the directions up, down, left, right, up-right, up-left, down-right, down-left and center. To ensure that the drone stays a fixed distance away from the person, the direction vector is normalized by the fixed-distance value.
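+The sketch below illustrates one way such candidates could be generated; the world z-up convention and the re-projection onto a sphere of fixed radius around the person are our own assumptions for illustration.
+
+```python
+import numpy as np
+
+def candidate_positions(drone_pos, person_pos, radius=7.0):
+    """Nine candidate positions: 8 compass directions in the plane facing the
+    person (left/right x up/down) plus staying put, kept at a fixed radius."""
+    to_person = person_pos - drone_pos
+    up = np.array([0.0, 0.0, 1.0])                       # assumed world z-up
+    side = np.cross(to_person, up)
+    side /= np.linalg.norm(side) + 1e-8                  # left/right axis
+    candidates = []
+    for dx in (-1, 0, 1):                                # left / center / right
+        for dz in (-1, 0, 1):                            # down / center / up
+            step = dx * side + dz * up
+            offset = (drone_pos + step) - person_pos
+            # Re-normalise so the drone stays `radius` metres from the person.
+            new_pos = person_pos + radius * offset / (np.linalg.norm(offset) + 1e-8)
+            candidates.append(new_pos)
+    return candidates
+```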
+
+In the remainder of this section, we describe how we model the flight of the drone so that we can predict the position of the drone along a potential trajectory in future time steps. By forecasting the future $M$ locations of the drone on a potential trajectory $c$, we can predict the 2D pose estimations $\hat{\mathbf{M}}^{t + i}$ for each $i \in \{1, \dots, M\}$ more accurately.
+
+We control the flight of our drone by passing it the desired velocity vector and the desired yaw rotation amount with the maximum speed kept constant at $5\mathrm{m / s}$ . The drone is sent new commands once every $\Delta t = 0.2$ seconds.
+
+We model the drone flight in the following manner. We assume that the drone moves with constant acceleration during a time step $\Delta t$. If the drone has current position $x_{\mathrm{current}}$ and velocity $V_{\mathrm{current}}$, then with a current acceleration $a_{\mathrm{current}}$, its next position $x_{\mathrm{goal}}$ in $\Delta t$ time will be
+
+$$
+x_{\mathrm{goal}} = x_{\mathrm{current}} + V_{\mathrm{current}} \Delta t + 0.5\, a_{\mathrm{current}} \Delta t^{2}. \tag{10}
+$$
+
+The current acceleration at time $t$ is found as a weighted average of the input acceleration $a_{\mathrm{input}}$ and the acceleration of the previous step $a_{\mathrm{previous}}$ . This can be written as
+
+$$
+a_{\mathrm{current}} = \alpha\, a_{\mathrm{input}} + (1 - \alpha)\, a_{\mathrm{previous}}. \tag{11}
+$$
+
+$a_{\mathrm{input}}$ is determined according to the candidate trajectory being evaluated. The direction of the acceleration vector is set to the direction of the candidate trajectory. We determine the magnitude of the input acceleration through least-square minimization of the difference between the predicted $x_{\mathrm{goal}}$ and the actual drone position. $\alpha$ is found by line search.
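+Rolling this model out for a few steps gives the predicted drone positions along a candidate trajectory; the velocity update in the sketch below is an assumption implied by the constant-acceleration model rather than a quantity stated above.
+
+```python
+def forecast_drone_positions(x0, v0, a_prev, a_input, alpha, dt=0.2, steps=3):
+    """Roll out the constant-acceleration flight model (Eqs. 10-11) for a few steps."""
+    x, v, a_previous = x0, v0, a_prev
+    positions = []
+    for _ in range(steps):
+        a_current = alpha * a_input + (1.0 - alpha) * a_previous   # Eq. (11)
+        x = x + v * dt + 0.5 * a_current * dt ** 2                 # Eq. (10)
+        v = v + a_current * dt    # assumed velocity update under constant acceleration
+        a_previous = a_current
+        positions.append(x)
+    return positions
+```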
+
+By estimating the future positions of the drone, we are able to forecast more accurate future 2D pose estimations,
+
+
+Figure 4. Predicted trajectories as the drone is circling the subject. The future drone positions are predicted for the future 3 steps, represented by triangle markers on the trajectories. Red depicts the chosen trajectory.
+
+
+
+
+
+leading to more accurate decision making. Examples of predicted trajectories are shown in Figure 4. Further details are provided in the supplementary material.
+
+# 4. Evaluation
+
+In this section we evaluate the improvement on 3D human pose estimation that is achieved through optimization of the drone flight.
+
+Simulation environment. Although [28, 3, 36] run in real time, and online SLAM from a monocular camera [9] is possible, we use a drone simulator since the integration of all components onto constrained drone hardware is difficult and beyond our expertise. We make simulation realistic by driving our characters with real motion capture data from the CMU Graphics Lab Motion Capture Database [1] and using the AirSim [33] drone simulator that builds upon the Unreal game engine and therefore produces realistic images of natural environments. Simulation also has the advantage that the same experiment can be repeated with different parameters and be directly compared to baseline methods and ground-truth motion.
+
+Simulated test set. We test our approach on three CMU motions of increasing difficulty: Walking straight (subject 2, trial 1), Dance with twirling (subject 5, trial 8), and Running in a circle (subject 38, trial 3). Additionally, we use a validation set consisting of Basketball dribble (subject 6, trial 13), and Sitting on a stool (subject 13, trial 6), to conduct a grid search for hyperparameters.
+
+Real test set. To show that our planner also works outside the simulator, we evaluate our approach on a section of the MPI-INF-3DHP dataset, which includes motions such as running around in a circle and waving arms in the air. The dataset provides 14 fixed viewpoints that are at varying distances from one another and from the subject, as depicted in Figure 6. In this case, the best next view is restricted to one of the 14 fixed viewpoints. This dataset lets us evaluate whether the object detector of [28], the 2D pose estimation method of [4], and the 3D pose regression technique of [36]
+
+
+Figure 5. Uncertainty estimates across potential viewpoints (left) compared with the average error we would obtain if we were to visit these locations (right). The star represents the location of the subject and the large circle depicts the chosen viewpoint according to the lowest uncertainty.
+
+
+
+
+Figure 6. The MPI-INF-3DHP dataset, which has images taken from 14 viewpoints at various distances to the subject. We use this dataset to evaluate our performance on data with realistic camera positioning and real images.
+
+are reliable enough in real environments. Since we cannot control the camera in this setting, we remove from the candidate locations those cameras for which we predict that the subject will be out of view.
+
+Baselines. Existing drone-based pose estimation methods use predefined policies to control the drone position relative to the human. Either the human is followed from a constant angle and the angle is set externally by the user [19] or the drone undergoes a constant rotation around the human [45]. As another baseline, we use a random decision policy, where the drone picks uniformly randomly among the proposed viewpoints. Finally, the oracle is obtained by moving the drone to the viewpoint where the reconstruction in the next time step will have the lowest average error, which is achieved by exhaustively trying all viewpoints with the corresponding image in the next time frame.
+
+Hyperparameters. We set the weights of the loss terms for the reconstruction as follows:
+
+| Method | CMU-Walk | CMU-Dance | CMU-Run | MPI-INF-3DHP (noisy GT) | MPI-INF-3DHP (networks) | Total |
+| --- | --- | --- | --- | --- | --- | --- |
+| Oracle | 0.101±0.001 | 0.101±0.001 | 0.109±0.001 | 0.136±0.002 | 0.17±0.0005 | 0.142±0.027 |
+| Ours (Active) | 0.113±0.001 | 0.116±0.003 | 0.19±0.001 | 0.145±0.006 | 0.21±0.0008 | 0.155±0.39 |
+| Random | 0.123±0.002 | 0.125±0.003 | 0.159±0.003 | 0.286±0.027 | 0.28±0.03 | 0.195±0.07 |
+| Constant Rotation | 0.157±0.002 | 0.146±0.004 | 0.223±0.003 | 0.265±0.010 | 0.29±0.03 | 0.216±0.06 |
+| Constant Angle | 0.895±0.54 | 0.683±0.31 | 0.985±0.24 | 1.73±0.61 | 1.26±0.53 | 1.11±0.36 |
+
+Table 1. 3D pose accuracy on the teleportation experiment, using noisy ground truth to estimate M and L in the first four data columns, and using the networks of [3, 36] in the fifth column. We outperform all predefined baseline trajectories and approach the accuracy of the oracle that has access to the average error of each candidate position.
+
+$\omega_{p} = 0.0001$ (projection), $\omega_{s} = 1$ (smoothness), $\omega_{l} = 0.1$ (lift term), and $\omega_{b} = 1$ (bone length), which were found by grid search. We set the weights for the decision making as $\omega_{p} = 0.001$, $\omega_{s} = 1$, $\omega_{l} = 0.1$, $\omega_{b} = 1$. Our reasoning is that the weights of the projection and lift terms need to be set slightly lower for the reconstruction because these terms are estimated with large noise, introduced by the neural networks or as additive noise; however, they do not need to be as low for the uncertainty estimation.
+
+# 4.1. Analyzing Reconstruction Accuracy
+
+We report the mean Euclidean distance per joint in meters in the middle frame of the temporal window we optimize over. For teleportation mode, the size of the temporal window is set to $k = 2$ past frames and 1 future frame, and for the drone flight simulations, to $k = 6$ for past frames and 3 future frames.
+
+Simulation Initialization. The frames are initialized by back-projecting the 2D joint locations estimated in the first frame, $\mathbf{M}^{t = 0}$ , to a distance $d$ from the camera that is chosen such that the back-projected bone lengths match with the average human height. We then refine this initialization by running the optimization without the smoothness term, as there is only one frame. All the sequences are evaluated for 120 frames, with the animation sequences played at $5\mathrm{Hz}$ .
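+For reference, back-projecting the 2D detections to a fixed depth can be sketched as below; choosing the depth so that the back-projected bone lengths match the average human height is a separate search step and is not shown.
+
+```python
+import torch
+
+def backproject(M0, K, depth):
+    """Back-project 2D joints M0 (15 x 2, pixels) to 3D camera coordinates at a fixed depth."""
+    ones = torch.ones(M0.shape[0], 1)
+    pixels_h = torch.cat([M0, ones], dim=1)          # homogeneous pixel coordinates
+    rays = pixels_h @ torch.linalg.inv(K).T          # viewing rays at unit depth
+    return rays * depth                              # scale every ray to the chosen depth
+```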
+
+Teleportation Mode. To understand whether our uncertainty predictions for potential viewpoints coincide with the actual 3D pose errors we will have at these locations, we run the following simulation: We sample a total of 18 points on a ring around the person, as shown in Fig. 5, and allow the drone to teleport to any of these points. We optimize over a total of $k = 2$ past frames and forecast 1 frame into the future. We chose this window size to emphasize the importance of the next choice of frame.
+
+We perform two variants of this experiment. In the first one, we simulate the 2D and 3D pose estimates, $\mathbf{M},\mathbf{L}$ , by adding Gaussian noise to the ground-truth data. The mean and standard deviation of this noise is set as the error of [3] and [36], run on the validation set of animations. Figure 7 shows a comparison between the ground truth values, noisy ground truth values and the network results. The results of this experiment are reported in Table 1, where we also provide the standard deviations across 5 trials with varying noise and starting from different viewpoints. On the MPI-INF-3DHP dataset, we also provide results using [3]
+
+
+Figure 7. Example image from the MPI-INF-3DHP dataset along with the 2D pose detections $\mathbf{M}$ and 3D relative pose detections $\mathbf{L}$ obtained using ground truth, noisy ground truth or the networks of [3] and [36]. The noise we add on the ground truth poses is determined according to the statistics of [3] and [36], measured on our validation set.
+
+and [36] on the simulator images to obtain the 2D and 3D pose estimates. Further results are in the supplementary material.
+
+Altogether, the results show that our active motion planner achieves consistently lower error values than the baselines and we come the closest to achieving the best possible error for these sequences and viewpoints, despite having no access to the true error. The random baseline also performs quite well in these experiments, as it takes advantage of the drone teleporting to a varied set of viewpoints. The trajectories generated by our active planner and the baselines are depicted in Figure 8. Importantly, Figure 5 evidences that our predicted uncertainties accurately reflect the true pose errors, thus making them well suited to our goal.
+
+Simulating Drone Flight. To evaluate more realistic cases where the drone is actively controlled and constrained to only move to nearby locations, we simulate the drone flight using the AirSim environment. While simulating drone flight, we target a fixed radius of $7\mathrm{m}$ from the subject and therefore provide direction candidates that lead to preserving this distance. We do not provide samples at different distances, as moving closer is unsafe and moving farther leads to more concentrated image projections and thus higher 3D errors. We also restrict the drone from flying outside the altitude range $0.25\mathrm{m} - 3.5\mathrm{m}$ , so as to avoid crashing into the ground and flying above the subject.
+
+In this set of experiments, we fly the drone using the
+
+
+Figure 8. Trajectories found by our active planner along with random and constant rotation baselines. The first row depicts the trajectories for the MPI-INF-3DHP dataset, and the second row shows the trajectories for the dancing motion. The trajectories obtained with our algorithm are regular and look different from the random trajectories, especially for the dancing motion. Our algorithm prefers trajectories resulting in large angular variance with respect to the subject between viewpoints.
+
+| Method | CMU-Walk | CMU-Dance | CMU-Run | Total |
+| --- | --- | --- | --- | --- |
+| Ours (Active) | 0.26±0.03 | 0.22±0.04 | 0.44±0.04 | 0.31±0.10 |
+| Constant Rotation | 0.28±0.06 | 0.21±0.04 | 0.41±0.02 | 0.30±0.08 |
+| Random | 0.60±0.13 | 0.44±0.19 | 0.81±0.16 | 0.62±0.15 |
+| Constant Angle | 0.41±0.07 | 0.63±0.06 | 1.26±0.17 | 0.77±0.36 |
+
+Table 2. Results of the full drone flight simulation, using noisy ground truth as input to estimate M and L. The results of constant rotation are the average of 10 runs, with 5 runs rotating clockwise and 5 counter-clockwise. Our approach yields results comparable to those of constant rotation, outperforming the other baselines. The trajectory our algorithm produces also results in a constant rotation, the only difference being the rotation direction.
+
+simulator's realistic physics engine. To this end, we sample 9 candidate directions towards up, down, left, right, up-right, up-left, down-right, down-left and center. We then predict the 3 consecutive future locations using our simplified (closed-form) physics model, to get an estimate of where the drone will be when continuing in each of the 9 directions. We then estimate the uncertainty at these sampled viewpoints and choose the minimum.
+
+We achieve comparable results to constant rotation on simulated drone flight. In fact, except for the first few frames where the drone starts flying, we observe the same trajectory as constant rotation, only the rotation direction varies. Constant rotation being optimal in this setting is not counter-intuitive, as constant rotation is very useful for preserving momentum. This allows the drone to sample viewpoints as far apart from one another as possible, while keeping the subject in view. Figure 9 depicts the different baseline trajectories and the active trajectory.
+
+
+Figure 9. Trajectories found during flight by our active planner and the baselines: a) Active, b) Random, c) Constant Rotation. Our algorithm also chose to perform constant rotation. Because of the drone's momentum, the random baseline cannot increase the distance between its camera viewpoints.
+
+# 5. Conclusion and Future Work
+
+We have proposed a theoretical framework for estimating the uncertainty of future measurements from a viewpoint. This permits us to improve 3D human pose estimation by optimizing the viewpoint selection to visit those locations with the lowest expected uncertainty. We have demonstrated with increasingly complex examples, in simulation with synthetic and real footage, that this theory translates to closed-loop drone control and improves pose estimation accuracy. We envision our approach being developed further for improving the performance of athletes and performance artists. It is important to preserve the subjects' privacy in such autonomous systems. We encourage researchers to be sensitive to this issue.
+
+Key to the success of our approach is the integration of several sources of uncertainty. Our primary goal was to make uncertainty estimation tractable, but further improvements are needed to run it on an embedded drone system. The current implementation runs at $0.1\mathrm{Hz}$ , but the optimization is implemented in Python using the convenient but slow automatic differentiation of PyTorch to obtain second derivatives. Furthermore, we have considered a physically plausible drone model but neglected physical obstacles and virtual no-go areas that would restrict the possible flight trajectories. In the case of complex scenes with dynamic obstacles, we expect our algorithm to outperform any simple, predefined policy. Currently, we assume a constant error for the 2D and 3D pose estimates. In future work, we will investigate how to derive situation-dependent noise models of deep neural networks. Furthermore, we plan to study new ways of estimating the uncertainty of the deployed deep learning methods and extend our work to optimize drone trajectories for different computer vision tasks.
+
+# 6. Acknowledgements
+
+This work was supported in part by the Swiss National Science Foundation and by a Microsoft Joint Research Project.
+
+# References
+
+[1] CMU Graphics Lab Motion Capture Database. mocap.cs.cmu.edu.
+[2] A. Aissaoui, A. Ouafi, P. Pudlo, C. Gillet, Z.-E. Baarir, and A. Taleb-Ahmed. Designing a Camera Placement Assistance System for Human Motion Capture Based on a Guided Genetic Algorithm. Virtual reality, 22(1):13-23, 2018.
+[3] Z. Cao, T. Simon, S. Wei, and Y. Sheikh. Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields. In Conference on Computer Vision and Pattern Recognition, pages 1302-1310, 2017.
+[4] Y. Chao, J. Yang, B. Price, S. Cohen, and J. Deng. Forecasting Human Dynamics from Static Images. In Conference on Computer Vision and Pattern Recognition, 2017.
+[5] X. Chen and J. Davis. Camera Placement Considering Occlusion for Robust Motion Capture. Computer Graphics Laboratory, Stanford University, Tech. Rep, 2(2.2):2, 2000.
+[6] W. Cheng, L. Xu, L. Han, Y. Guo, and L. Fang. ihuman3d: Intelligent human body 3d reconstruction using a single flying camera. In 2018 ACM Multimedia Conference on Multimedia Conference, pages 1733-1741. ACM, 2018.
+[7] S. Choudhury, A. Kapoor, G. Ranade, and D. Dey. Learning to gather information via imitation. In International Conference on Robotics and Automation, 2017.
+[8] J. Daudelin and M. Campbell. An adaptable, probabilistic, next-best view algorithm for reconstruction of unknown 3-d objects. IEEE Robotics and Automation Letters, 2(3):1540-1547, 2017.
+[9] A. J. Davison, I. Reid, N. Molton, and O. Stasse. Monoslam: Real-Time Single Camera Slam. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(6):1052-1067, June 2007.
+[10] K. Fragkiadaki, S. Levine, P. Felsen, and J. Malik. Recurrent Network Models for Human Dynamics. In International Conference on Computer Vision, 2015.
+[11] C. Gebhardt, S. Stevsic, and O. Hilliges. Optimizing for Aesthetically Pleasing Quadrotor Camera Motion. ACM Transactions on Graphics, 37(4):90:1-90:11, 2018.
+[12] B. Hepp, D. Dey, S. Sinha, A. Kapoor, N. Joshi, and O. Hilliges. Learn-To-Score: Efficient 3D Scene Exploration by Predicting View Utility. In European Conference on Computer Vision, 2018.
+[13] B. Hepp, M. Nießner, and O. Hilliges. Plan3D: Viewpoint and Trajectory Optimization for Aerial Multi-View Stereo Reconstruction. ACM Transactions on Graphics, 38(1):4, 2018.
+[14] S. Isler, R. Sabzevari, J. Delmerico, and D. Scaramuzza. An information gain formulation for active volumetric 3d reconstruction. In International Conference on Robotics and Automation, 2016.
+[15] A. Kanazawa, M. J. Black, D. W. Jacobs, and J. Malik. End-to-end recovery of human shape and pose. In Conference on Computer Vision and Pattern Recognition, 2018.
+[16] J. Martinez, R. Hossain, J. Romero, and J. Little. A Simple Yet Effective Baseline for 3D Human Pose Estimation. In International Conference on Computer Vision, 2017.
+
+[17] D. Mehta, H. Rhodin, D. Casas, P. Fua, O. Sotnychenko, W. Xu, and C. Theobalt. Monocular 3D Human Pose Estimation in the Wild Using Improved CNN Supervision. In International Conference on 3D Vision, 2017.
+[18] T. Nageli, L. Meier, A. Domahidi, J. Alonso-Mora, and O. Hilliges. Real-time planning for automated multi-view drone cinematography. ACM Transactions on Graphics, 2017.
+[19] T. Nageli, S. Oberholzer, S. Plüss, J. Alonso-Mora, and O. Hilliges. Flycon: Real-time environment-independent multi-view human pose estimation with aerial vehicles. ACM Transactions on Graphics, 2018.
+[20] E. Palazzolo and C. Stachniss. Information-driven autonomous exploration for a vision-based MAV. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 4:59, 2017.
+[21] G. Pavlakos, X. Zhou, K. G. Derpanis, and K. Daniilidis. Coarse-To-Fine Volumetric Prediction for Single-Image 3D Human Pose. In Conference on Computer Vision and Pattern Recognition, 2017.
+[22] G. Pavlakos, X. Zhou, K. G. Derpanis, and K. Daniilidis. Harvesting Multiple Views for Marker-Less 3D Human Pose Annotations. In Conference on Computer Vision and Pattern Recognition, 2017.
+[23] D. Pavllo, C. Feichtenhofer, D. Grangier, and M. Auli. 3D Human Pose Estimation in Video with Temporal Convolutions and Semi-Supervised Training. In Conference on Computer Vision and Pattern Recognition, 2019.
+[24] A. Pirinen, E. Gartner, and C. Sminchisescu. Domes to drones: Self-supervised active triangulation for 3d human pose reconstruction. In Advances in Neural Information Processing Systems 32, pages 3907-3917. 2019.
+[25] A.-I. Popa, M. Zanfir, and C. Sminchisescu. Deep Multitask Architecture for Integrated 2D and 3D Human Sensing. In Conference on Computer Vision and Pattern Recognition, 2017.
+[26] S. Prokudin, P. Gehler, and S. Nowozin. Deep Directional Statistics: Pose Estimation with Uncertainty Quantification. In European Conference on Computer Vision, pages 534-551, 2018.
+[27] P. Rahimian and J. K. Kearney. Optimal Camera Placement for Motion Capture Systems. IEEE Transactions on Visualization and Computer Graphics, 23(3):1209-1221, 2016.
+[28] J. Redmon and A. Farhadi. YOLOv3: An Incremental Improvement. In arXiv Preprint, 2018.
+[29] H. Rhodin, C. Richardt, D. Casas, E. Insafutdinov, M. Shafiei, H.-P. Seidel, B. Schiele, and C. Theobalt. Egocap: Egocentric Marker-Less Motion Capture with Two Fisheye Cameras. ACM SIGGRAPH Asia, 35(6), 2016.
+[30] M. Roberts, D. Dey, A. Truong, S. Sinha, S. Shah, A. Kapoor, P. Hanrahan, and N. Joshi. Submodular Trajectory Optimization for Aerial 3D Scanning. In International Conference on Computer Vision, 2017.
+[31] G. Rogez, P. Weinzaepfel, and C. Schmid. Lcr-Net: Localization-Classification-Regression for Human Pose. In Conference on Computer Vision and Pattern Recognition, 2017.
+
+[32] N. Saini, E. Price, R. Tallamraju, R. Enficiaud, R. Ludwig, I. Martinović, A. Ahmad, and M. Black. Markerless outdoor human motion capture using multiple autonomous micro aerial vehicles. In International Conference on Computer Vision, Oct. 2019.
+[33] S. Shah, D. Dey, C. Lovett, and A. Kapoor. Airsim: High-fidelity visual and physical simulation for autonomous vehicles. In Field and Service Robotics, 2017.
+[34] X. Sun, J. Shang, S. Liang, and Y. Wei. Compositional Human Pose Regression. In International Conference on Computer Vision, 2017.
+[35] R. Tallamraju, E. Price, R. Ludwig, K. Karlapalem, H. Bülthoff, M. Black, and A. Ahmad. Active perception based formation control for multiple aerial vehicles. IEEE Robotics and Automation Letters, PP:1-1, 08 2019.
+[36] B. Tekin, P. Marquez-Neila, M. Salzmann, and P. Fua. Learning to Fuse 2D and 3D Image Cues for Monocular Body Pose Estimation. In International Conference on Computer Vision, 2017.
+[37] A. Tkach, A. Tagliasacchi, E. Remelli, M. Pauly, and A. Fitzgibbon. Online generative model personalization for hand tracking. ACM Transactions on Graphics, 36(6):243, 2017.
+[38] D. Tome, C. Russell, and L. Agapito. Lifting from the Deep: Convolutional 3D Pose Estimation from a Single Image. In arXiv Preprint, 2017.
+[39] D. Xiang, H. Joo, and Y. Sheikh. Monocular total capture: Posing face, body, and hands in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.
+[40] L. Xu, L. Fang, W. Cheng, K. Guo, G. Zhou, Q. Dai, and Y. Liu. Flycap: Markerless motion capture using multiple autonomous flying cameras. IEEE Transactions on Visualization and Computer Graphics, PP, 10 2016.
+[41] A. Zanfir, E. Marinoiu, and C. Sminchisescu. Monocular 3D Pose and Shape Estimation of Multiple People in Natural Scenes - the Importance of Multiple Scene Constraints. In Conference on Computer Vision and Pattern Recognition, June 2018.
+[42] J. Y. Zhang, P. Felsen, A. Kanazawa, and J. Malik. Predicting 3d human dynamics from video. In International Conference on Computer Vision, 2019.
+[43] B. Zhao, X. Wu, Z.-Q. Cheng, H. Liu, and J. Feng. Multi-View Image Generation from a Single-View. In arXiv Preprint, 2017.
+[44] X. Zhou, Q. Huang, X. Sun, X. Xue, and Y. Wei. Weakly-Supervised Transfer for 3D Human Pose Estimation in the Wild. In arXiv Preprint, 2017.
+[45] X. Zhou, S. Liu, G. Pavlakos, V. Kumar, and K. Daniilidis. Human Motion Capture Using a Drone. In International Conference on Robotics and Automation, 2018.
\ No newline at end of file
diff --git a/activemocapoptimizedviewpointselectionforactivehumanmotioncapture/images.zip b/activemocapoptimizedviewpointselectionforactivehumanmotioncapture/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..85bc5c291380eba774f45eb5571a2ed34af14a8b
--- /dev/null
+++ b/activemocapoptimizedviewpointselectionforactivehumanmotioncapture/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a94cae6492aa291a140d90ea09f40f4270590649a148af41a95212a0902d3e65
+size 392812
diff --git a/activemocapoptimizedviewpointselectionforactivehumanmotioncapture/layout.json b/activemocapoptimizedviewpointselectionforactivehumanmotioncapture/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..db250d152acab0650bee31eb98a9fee6b8c359ec
--- /dev/null
+++ b/activemocapoptimizedviewpointselectionforactivehumanmotioncapture/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3291b4a54e3edb9b7b7aa747cdf615dc4e40dbc614dcb75896757626d21a3e0d
+size 428646
diff --git a/activespeakersincontext/6a7a2272-3f1e-4149-85c0-98365d6e93a5_content_list.json b/activespeakersincontext/6a7a2272-3f1e-4149-85c0-98365d6e93a5_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..d4db9af2fd18542a5cb606502be7ad6f2bc83f89
--- /dev/null
+++ b/activespeakersincontext/6a7a2272-3f1e-4149-85c0-98365d6e93a5_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:786b37b744a24315fcdf733c4dbc8257c2d251c287986adb23c31adf435fd0e8
+size 73344
diff --git a/activespeakersincontext/6a7a2272-3f1e-4149-85c0-98365d6e93a5_model.json b/activespeakersincontext/6a7a2272-3f1e-4149-85c0-98365d6e93a5_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..bbbcde99858e787b300a7bf6d731f95e823454f2
--- /dev/null
+++ b/activespeakersincontext/6a7a2272-3f1e-4149-85c0-98365d6e93a5_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a4c9810aa18298501308b50323861896a4e3a869d5eeee65843ceba20746c809
+size 88844
diff --git a/activespeakersincontext/6a7a2272-3f1e-4149-85c0-98365d6e93a5_origin.pdf b/activespeakersincontext/6a7a2272-3f1e-4149-85c0-98365d6e93a5_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..45e4f6c9968c59910c39ade4b4fe7930c7d7b585
--- /dev/null
+++ b/activespeakersincontext/6a7a2272-3f1e-4149-85c0-98365d6e93a5_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:afb87bb7da53bb6bc89b8eddf1e95973390a434b2bcf6ad0abdd086d6ac715aa
+size 1287819
diff --git a/activespeakersincontext/full.md b/activespeakersincontext/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..7705675b0ef1fb709d8b9bb8ab97c917f400e66b
--- /dev/null
+++ b/activespeakersincontext/full.md
@@ -0,0 +1,260 @@
+# Active Speakers in Context
+
+Juan León Alcázar $^{1*}$ , Fabian Caba Heilbron $^{2}$ , Long Mai $^{2}$ , Federico Perazzi $^{2}$ , Joon-Young Lee $^{2}$ , Pablo Arbeláez $^{1}$ , and Bernard Ghanem $^{3}$
+
+1Universidad de los Andes, 2Adobe Research, 3King Abdullah University of Science and Technology (KAUST), 1{jc.leon,pa.arbelaez}@uniandes.edu.co; 2{caba,malong,perazzi,jolee}@adobe.com; 3{bernard.ghanem}@kaust.edu.sa
+
+# Abstract
+
+Current methods for active speaker detection focus on modeling audiovisual information from a single speaker. This strategy can be adequate for addressing single-speaker scenarios, but it prevents accurate detection when the task is to identify which of many candidate speakers are talking. This paper introduces the Active Speaker Context, a novel representation that models relationships between multiple speakers over long time horizons. Our new model learns pairwise and temporal relations from a structured ensemble of audiovisual observations. Our experiments show that a structured feature ensemble already benefits active speaker detection performance. We also find that the proposed Active Speaker Context improves the state-of-the-art on the AVA-ActiveSpeaker dataset, achieving an mAP of $87.1\%$. Moreover, ablation studies verify that this result is a direct consequence of our long-term multi-speaker analysis.
+
+# 1. Introduction
+
+Active speaker detection is a multi-modal task that relies on the careful integration of audiovisual information. It aims at identifying active speakers, among a set of possible candidates, by analyzing subtle facial motion patterns and carefully aligning their characteristic speech wave-forms. Although it has a long history in computer vision [11], and despite its many applications such as speaker diarization or video re-framing, detecting active speakers in-the-wild remains an open problem. Towards that goal, the recently released AVA Active-Speaker benchmark [31] provides an adequate experimental framework to study the problem.
+
+Recent approaches for active speaker detection [5, 39] have focused on developing sophisticated 3D convolutional models to fuse local audiovisual patterns that estimate binary labels over short-term sequences. These methods perform well on scenarios with a single speaker, but they meet their limits when multiple speakers are present. We argue that this limitation stems from the insufficiency of audio
+
+
+(Figure 1 panels: (a) short-term analysis of a single speaker, showing Speaker A and Speaker B; (b) Active Speakers in Context, long-term multi-speaker analysis.)
+
+Figure 1. Active Speakers in Context. Our goal is to identify the active speaker at a reference time. Let us assume we only have access to a short audiovisual sample from a single speaker (a). By looking at the lips of the speaker, it is hard to tell if he is talking, but the audio indicates that someone at that moment is talking. We have no other option than to provide an educated guess. To increase our chances of a correct prediction, let us leverage multi-speaker context (b). We now observe all speakers in the scene over a long time span. From this enriched observation, we can infer two things. First, Speaker B is not talking over the whole sequence; instead, he is listening to Speaker A. Second, looking at Speaker A (e.g. his lips) over the long term helps us to smooth out local uncertainties. We propose a new representation, the Active Speaker Context, which learns long-term relationships between multiple speakers to make accurate active speaker detections.
+
+cues to fully solve the problem and from the high ambiguity of visual cues when considered in isolation [31].
+
+In a multi-speaker scenario, an appropriate disambiguation strategy would exploit rich, long-term, contextual information extracted from each candidate speaker. Figure 1 illustrates the challenges in active speaker detection when there is more than one candidate speaker. Intuitively, we can fuse information from multiple speakers to disambiguate single speaker predictions. For instance, by analyzing a speaker for an extended period, we can smooth out wrong speech activity predictions coming from short filler words. Likewise, observing multiple candidate speakers, jointly, enables us to understand conversational patterns, e.g. that a natural two-speaker conversation consists of an interleaved sequence of the speakers' utterances.
+
+In this paper, we introduce the Active Speaker Context, a novel representation that models long-term interactions between multiple speakers for in-the-wild videos. Our method estimates active speaker scores by integrating audiovisual cues from every speaker present in a conversation (or scene). It leverages two-stream architectures [6, 9, 10] to encode short-term audiovisual observations, sampled from the speakers in the conversation, thus creating a rich context ensemble. Our experiments indicate that this context, by itself, helps improve accuracy in active speaker detection. Furthermore, we propose to refine the computed context representation by learning pairwise relationships via self-attention [33] and by modeling the temporal structure with a sequence-to-sequence model [17]. Our model not only improves the state-of-the-art but also exhibits robust performance for challenging scenarios that contain multiple speakers in the scene.
+
+Contributions. In this work we design and validate a model that learns audiovisual relationships among multiple speakers. To this end, our work brings two contributions.1
+
+(1) We develop a model that learns non-local relationships between multiple speakers over long timespans (Section 3).
+(2) We observe that this model improves the state-of-the-art in the AVA-ActiveSpeaker dataset by $1.6\%$ , and that this improvement is a direct result of modeling long-term multi-speaker context (Section 4).
+
+# 2. Related Work
+
+Multi-modal learning aims at fusing multiple sources of information to establish a joint representation, which models the problem better than any single source in isolation [27]. In the video domain, a form of modality fusion with growing interest in the scientific community involves the learning of joint audiovisual representations [3, 7, 19, 25, 28, 34]. This setting includes problems such as person re-identification [20, 24, 37], audio-visual synchronization [8, 9], speaker diarization [38], bio-metrics [25, 30], and audio-visual source separation [3, 7, 19, 25, 28, 34]. Active speaker detection, the problem studied in this paper, is a specific instance of audiovisual source separation, in which the sources are persons in a video (candidate speakers), and the goal is to assign a segment of speech to an active speaker, or none of the available sources.
+
+Several studies have paved the way for enabling active speaker detection using audiovisual cues [3, 4, 9, 11]. Cutler and Davis pioneered the research [11] in the early 2000s. Their work proposed a time-delayed neural network to learn the audiovisual correlations from speech activity. Alternatively, other methods [13, 32] opted for using visual information only, especially lip motion, to address the task.
+
+Recently, rich alignment between audio and visual information has been re-explored with methods that leverage audio as supervision [3], or jointly train an audiovisual embedding [7, 9, 26], that enables more accurate active speaker detection. Although these previous approaches were seminal to the field, the lack of large-scale data for training and benchmark limited their application to in-the-wild active speaker detection in movies or consumer videos.
+
+To overcome the lack of diverse and in-the-wild data, Roth et al. [31] introduced AVA-ActiveSpeaker, a large-scale video dataset devised for the active speaker detection task. With the release of the dataset and its baseline (a two-stream network that learns to detect active speakers within a multi-task setting), a few novel approaches have started to emerge. In the AVA-ActiveSpeaker challenge of 2019, Chung et al. [5] improved the core architecture of their previous work [9] by adding 3D convolutions and leveraging large-scale audiovisual pre-training. The submission of Zhang et al. [39] also relied on a hybrid 3D-2D architecture, with large-scale pre-training on two multi-modal datasets [9, 10]. Their method achieved the best performance when the feature embedding was refined using a contrastive loss [15]. Both approaches improved the representation of a single speaker, but ignored the rich contextual information from co-occurring speaker relationships, and intrinsic temporal structures that emerge from dialogues.
+
+Our approach starts from the baseline of a two-stream modality fusion but explores an orthogonal research direction. Instead of improving the performance of a short-term architecture, we aim at modeling the conversational context of speakers, i.e. to leverage active speaker context from long-term inter-speaker relations. Context modeling has been widely studied in computer vision tasks such as object classification [23], video question answering [40], person re-identification [22], or action detection [14, 36]. Despite the existence of many works harnessing context to improve computer vision systems, our model is unique and tailored to detect active speakers accurately. To the best of our knowledge, our work is the first to address the task of active speaker detection in-the-wild using contextual information from multiple speakers.
+
+# 3. Active Speakers in Context
+
+This section describes our approach to active speaker detection, which focuses on learning long-term and inter-speaker relationships. At its core, our strategy estimates an active speaker score for an individual face (target face) by analyzing the target itself, the current audio input, and multiple faces detected at the current timestamp.
+
+Instead of holistically encoding long time horizons and multi-speaker interactions, our model learns relationships following a bottom-up strategy where it first aggregates fine-grained observations (audiovisual clips), and then maps
+
+
+Figure 2. Active Speaker Context. Our approach first splits the video data into short clips ( $\tau$ seconds) composed by a stack of face crops and their associated audio. It encodes each of these clips using a two-stream architecture (Short-Term Encoder) to generate a low-dimensional audiovisual encoding. Then, it stacks the high-level audiovisual features from all the clips and all the speakers sampled within a window of size $T(T > \tau)$ centered at a reference time $t$ . We denote this stack of features as $\mathbf{C}_t$ . Then, using self-attention, our approach refines the representation by learning a pairwise attention between all elements. Finally, an LSTM mines temporal relationships across the refined features. This final output is our Active Speaker Context, which we use to classify speech activity of a candidate at time $t$ .
+
+these observations into an embedding that allows the analysis of global relations between clips. Once the individual embeddings have been estimated, we aggregate them into a context-rich representation which we denote as the Active Speaker Ensemble. This ensemble is then refined to explicitly model pairwise relationships and long-term structures over the clips; we name this refined ensemble the Active Speaker Context. Figure 2 presents an overview of our approach.
+
+# 3.1. Aggregating Local Video Information
+
+Our proposal begins by analyzing audiovisual information from short video clips. The visual information is a stack of $k$ consecutive face crops sampled from a time interval $\tau$ . The audio information is the raw wave-form sampled over the same $\tau$ interval. We refer to these clips as tuples $c_{s,\tau} = \{v_s, a_\tau\}$ , where $v_s$ is a crop stack of a speaker $s$ , and $a_\tau$ is the corresponding audio. For every clip $c_{s,\tau}$ in a video sequence, we compute an embedding $\mathbf{u}_{s,\tau}$ using a short-term encoder $\Phi(c_{s,\tau})$ whose role is twofold. First, it creates a low-dimensional representation that fuses the audiovisual information. Second, it ensures that the embedded representation is discriminative enough for the active speaker detection task.
+
+Short-term Encoder $(\Phi)$ . Following recent works [6, 31, 39], we approximate $\Phi$ by means of a two-stream convolutional architecture. Instead of using compute-intensive 3D convolutions as in [5, 39], we opt for 2D convolutions in both streams. The visual stream takes as input a tensor $\mathbf{v} \in \mathbb{R}^{H \times W \times (3k)}$ , where $H$ and $W$ are the height and width of the $k$ face crops. On the audio stream, we convert the
+
+raw audio waveform into a Mel-spectrogram represented as $\mathbf{a} \in \mathbb{R}^{Q \times P}$ , where $Q$ and $P$ depend on the length of the interval $\tau$ . On a forward pass the visual sub-network estimates a visual embedding $\mathbf{u}_{\mathbf{v}} \in \mathbb{R}^{d_v}$ , while the audio sub-network computes an audio embedding $\mathbf{u}_{\mathbf{a}} \in \mathbb{R}^{d_a}$ . We compose an audiovisual feature embedding $\mathbf{u} \in \mathbb{R}^d$ by concatenating the output embedding of each stream.
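+
+A minimal sketch of this fusion, assuming torchvision's ResNet-18 as the 2D backbone for both streams (the layer surgery, default dimensions, and class name below are illustrative assumptions, not the exact configuration of the paper):
+
+```python
+import torch
+import torch.nn as nn
+from torchvision.models import resnet18
+
+class ShortTermEncoder(nn.Module):
+    """Two-stream 2D encoder: stacked face crops + Mel-spectrogram -> fused embedding u."""
+    def __init__(self, k=5, d_v=512, d_a=512):
+        super().__init__()
+        # Visual stream: accept k RGB crops stacked along the channel axis (3k input channels).
+        self.visual = resnet18(num_classes=d_v)
+        self.visual.conv1 = nn.Conv2d(3 * k, 64, kernel_size=7, stride=2, padding=3, bias=False)
+        # Audio stream: accept a single-channel Mel-spectrogram.
+        self.audio = resnet18(num_classes=d_a)
+        self.audio.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
+
+    def forward(self, v, a):
+        u_v = self.visual(v)                  # (B, d_v) visual embedding
+        u_a = self.audio(a)                   # (B, d_a) audio embedding
+        return torch.cat([u_v, u_a], dim=1)   # audiovisual embedding u of dimension d_v + d_a
+```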
+
+Structured Context Ensemble. Once the clip features $\mathbf{u} \in \mathbb{R}^d$ have been estimated, we proceed to assemble these features into a set that encodes contextual information. We denote this set as the Active Speaker Ensemble. To construct this ensemble, we first define a long interval $T$ ( $T > \tau$ ) centered at a reference time $t$ , and designate one of the speakers present at $t$ as the reference speaker; every other speaker is designated as a context speaker.
+
+We proceed to compute $\mathbf{u}_{s,\tau}$ for every speaker $s = 1, \ldots, S$ present at $t$ over $L$ different $\tau$ intervals throughout temporal window $T$ . This sampling scheme yields a tensor $\mathbf{C}_t$ with dimensions $L \times S \times d$ , where $S$ is the total number of speakers analyzed. Figure 3 contains a detailed example on the sampling process.
+
+We assemble $\mathbf{C}_t$ for every possible $t$ in a video. Since temporal structures are critical in the active speaker problem, we strictly preserve the temporal order of the sampled features. As $\mathbf{C}_t$ is defined for a reference speaker, we can generate as many ensembles $\mathbf{C}_t$ as speakers are present at time $t$ . In practice, we always locate the feature set of the reference speaker as the first element along the $S$ axis of $\mathbf{C}_t$ . Context speakers are randomly stacked along the remaining positions on the $S$ axis. This enables us to directly supervise the label of the reference speaker regardless of the number or order of the context speakers.
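+
+For illustration, such an ensemble could be assembled from per-speaker clip features as in the sketch below (NumPy; the function and variable names are ours, and for brevity it pads only on the left with the first observed feature, whereas Figure 3 also pads on the right with the last one):
+
+```python
+import numpy as np
+
+def build_context_tensor(features_by_speaker, reference_id, L, d):
+    """Stack clip features into C_t of shape (L, S, d), reference speaker first along the S axis.
+
+    features_by_speaker: dict mapping speaker id -> array of shape (n_clips, d),
+    restricted to the window T and kept in temporal order.
+    """
+    order = [reference_id] + [s for s in features_by_speaker if s != reference_id]
+    C_t = np.zeros((L, len(order), d), dtype=np.float32)
+    for s_idx, sid in enumerate(order):
+        feats = np.asarray(features_by_speaker[sid], dtype=np.float32)
+        pad = L - len(feats)
+        if pad > 0:  # speaker absent for part of T: repeat its first observed feature
+            feats = np.concatenate([np.repeat(feats[:1], pad, axis=0), feats], axis=0)
+        C_t[:, s_idx, :] = feats[:L]
+    return C_t
+```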
+
+
+Figure 3. Building Context Tensors. We build a context ensemble given a reference speaker (Speaker 1 in this example), and a reference time $t$ . First, we define a long-term sampling window $T$ containing $L + 1$ clips centered at time $t$ , $T = \{0, 1, \dots, t, \dots, L - 1, L\}$ . We select as context speakers those that overlap with the reference speaker at $t$ (speakers 2 and 3). Finally, we sample clip-level features $u_{l}$ throughout the whole sampling window $T$ from the reference speaker and all the speakers designated as context. If the temporal span of the speaker does not entirely match the interval $T$ , we pad it with the initial or final speaker features. For instance, Speaker 2 is absent between 0 and $i$ , so we pad left with $u_{i}$ . Similarly, for speaker 3, we pad right with $u_{k}$ . Notice that, by our definition, Speakers 2 and 3 could switch positions, but Speaker 1 must remain at the bottom of the stack.
+
+# 3.2. Context Refinement
+
+After constructing the context ensemble $\mathbf{C}_t$ , we are left with the task of classifying the speaking activity of the designated reference speaker. A naive approach would fine-tune a fully-connected layer over $\mathbf{C}_t$ with binary output classes, i.e. speaking and silent. Although such a model already leverages global information beyond clips, we found that it tends not to encode useful relationships between speakers and their temporal patterns, which emerge from conversational structures. This limitation inspires us to design our novel Active Speaker Context (ASC) model. ASC consists of two core components. First, it implements a multi-modal self-attention mechanism to establish pairwise interactions between the audiovisual observations in $\mathbf{C}_t$ . Second, it incorporates a long-term temporal encoder, which exploits temporal structures in conversations.
+
+Pairwise Refinement. We start from the multi-modal context ensemble $\mathbf{C}_t$ , and model pairwise affinities between observations in $\mathbf{C}_t$ regardless of their temporal order or the speaker they belong to. We do this refinement by following a strategy similar to Vaswani et al. [33]. We compute self-attention over long-term sequences and across an arbitrary number of candidate speakers.
+
+In practice, we adapt the core idea of pair-wise attention from the non-local framework [35] to work over multimodal high-level features, thereby estimating a dense attention map over the full set of clips contained in the sampling
+
+window $T$ . We avoid using this strategy over low- or mid-level features as there is no need to relate distributed information on the spatial or temporal domains of a clip, i.e., in the active speaker detection task, meaningful information is tightly localized on the visual (lips region) and audio (speech snippets) domains.
+
+We implement a self-attention module that first estimates a pairwise affinity matrix $\mathbf{B}$ with dimension $LS\times LS$ and then uses its normalized representation as weights for the input $\mathbf{C}_t$ :
+
+$$
+\mathbf{B} = \sigma\!\left((W_{\alpha} * \mathbf{C}_{t}) \cdot (W_{\beta} * \mathbf{C}_{t})^{\top}\right) \tag{1}
+$$
+
+$$
+\mathbf{C}_{t}^{\dagger} = W_{\delta} * \left(\mathbf{B} \cdot (W_{\gamma} * \mathbf{C}_{t})\right) + \mathbf{C}_{t} \tag{2}
+$$
+
+where $\sigma$ is a softmax operation, $\{W_{\alpha}, W_{\beta}, W_{\gamma}, W_{\delta}\}$ are learnable $1 \times 1 \times 1$ weights that adapt the channel dimensions as needed, and the second term in Equation 2 $(+\mathbf{C}_t)$ denotes a residual connection. The output $\mathbf{C}_t^\dagger$ is a tensor with the same dimensions as the input $\mathbf{C}_t$ ( $L \times S \times d$ ), but it now encodes the pairwise relationships.
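+
+A minimal PyTorch sketch of this refinement, assuming the $1 \times 1 \times 1$ projections act as pointwise linear layers over the flattened $L \times S$ sequence (class and parameter names are ours):
+
+```python
+import torch
+import torch.nn as nn
+
+class PairwiseRefinement(nn.Module):
+    """Self-attention over the L*S clip features in C_t (Eqs. 1-2)."""
+    def __init__(self, d, d_k=128):
+        super().__init__()
+        self.W_alpha = nn.Linear(d, d_k)   # query projection
+        self.W_beta = nn.Linear(d, d_k)    # key projection
+        self.W_gamma = nn.Linear(d, d_k)   # value projection
+        self.W_delta = nn.Linear(d_k, d)   # restore the channel dimension
+
+    def forward(self, C_t):                                   # C_t: (B, L, S, d)
+        B, L, S, d = C_t.shape
+        x = C_t.reshape(B, L * S, d)
+        B_attn = torch.softmax(self.W_alpha(x) @ self.W_beta(x).transpose(1, 2), dim=-1)  # (B, LS, LS)
+        out = self.W_delta(B_attn @ self.W_gamma(x))           # weighted values, back to d channels
+        return (out + x).reshape(B, L, S, d)                   # residual connection, same shape as input
+```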
+
+Temporal Refinement. The goal of this long-term pooling step is two-fold. First, to refine the weighted features in $\mathbf{C}_t^\dagger$ by directly attending to their temporal structure. Second, to reduce the dimensionality of the final embedding to $d^{\prime}$ ( $d > d^{\prime}$ ), allowing us to use a smaller fully-connected prediction layer. Given the inherent sequential structure of the task, we implement this refinement using an LSTM model [17]. We cast its input by squeezing the speaker and time dimensions of $\mathbf{C}_t^\dagger$ into $(L\times S)\times d$ ; thus the LSTM processes time steps $t_i\in \{1,\dots ,L\times S\}$ , each with a feature vector $\mathbf{z}_i\in \mathbb{R}^d$ . In practice, we use a single uni-directional LSTM unit with $d^{\prime} = 128$ , and keep the LSTM memory as it passes over the sequence. Thus, we create a sequence-to-sequence mapping between the tensor $\mathbf{C}_t^\dagger \in \mathbb{R}^{(L\times S)\times d}$ and our final Active Speaker Context representation $\mathbf{A}\mathbf{S}\mathbf{C}_t\in \mathbb{R}^{(L\times S)\times d'}$ .
+
+Our final step consists of estimating the presence of an active speaker given $\mathbf{A}\mathbf{S}\mathbf{C}_t$ . We resort to a simple fully-connected layer with binary output (active speaker and silent). We establish the final confidence score using a softmax operator over the outputs and select the value of the speaking class.
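+
+Continuing the sketch above, the temporal refinement and prediction head could look as follows (a simplification: reading the score from the last LSTM step is our assumption, since the paper does not spell out which sequence position feeds the classifier):
+
+```python
+import torch
+import torch.nn as nn
+
+class TemporalRefinement(nn.Module):
+    """Uni-directional LSTM over the flattened (L*S) sequence plus a binary prediction head."""
+    def __init__(self, d, d_prime=128, num_classes=2):
+        super().__init__()
+        self.lstm = nn.LSTM(input_size=d, hidden_size=d_prime, batch_first=True)
+        self.fc = nn.Linear(d_prime, num_classes)
+
+    def forward(self, C_t_refined):                            # (B, L, S, d)
+        B, L, S, d = C_t_refined.shape
+        asc, _ = self.lstm(C_t_refined.reshape(B, L * S, d))   # ASC_t: (B, L*S, d')
+        logits = self.fc(asc[:, -1])                           # simplifying assumption: score the last step
+        return torch.softmax(logits, dim=-1)[:, 1]             # confidence of the "speaking" class
+```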
+
+# 3.3. Training and Implementation Details
+
+We use a two-stream (visual and audio) convolutional encoder based on the ResNet-18 architecture [16] for the Short-term Encoder (STE). Following [31], we re-purpose the video stream to accept a stack of $N$ face crops by replicating the weights on the input layer $N$ times. The audio stream input is a Mel-spectrogram calculated from an audio snippet, which exactly matches the time interval covered by the visual stack. Since Mel-spectrograms are 2D tensors, we re-purpose the input of the audio stream
+
+to accept a $Q \times P \times 1$ tensor by averaging channel-specific weights at the input layer.
+
+Training the Short-term Encoder. We train the STE using the PyTorch library [29] for 100 epochs. We choose the Adam optimizer [21] with an initial learning rate of $3 \times 10^{-4}$ and learning rate annealing $\gamma = 0.1$ every 40 epochs. We resize every face crop to $124 \times 124$ and perform random flipping and corner cropping uniformly along the visual input stack. We drop the large-scale multi-modal pre-training of [5] in favor of standard ImageNet [12] pre-training for the initialization.
+
+Since we want to favor the estimation of discriminative features on both streams, we follow the strategy presented by Roth et al. [31] and add two auxiliary supervision sources, placing them on top of both streams before the feature fusion operation; this creates two auxiliary loss functions $\mathcal{L}_a,\mathcal{L}_v$ . Our final loss function is $\mathcal{L} = \mathcal{L}_{av} + \mathcal{L}_a + \mathcal{L}_v$ . We use the standard cross-entropy loss for all three terms.
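+
+As a sketch, the three-term loss could be computed as below, assuming each stream exposes its own auxiliary classification logits (the function and argument names are ours):
+
+```python
+import torch.nn.functional as F
+
+def ste_loss(logits_av, logits_a, logits_v, labels):
+    """Total STE loss: fused audiovisual term plus two auxiliary per-stream terms."""
+    loss_av = F.cross_entropy(logits_av, labels)   # supervision on the fused embedding
+    loss_a = F.cross_entropy(logits_a, labels)     # auxiliary head on the audio stream
+    loss_v = F.cross_entropy(logits_v, labels)     # auxiliary head on the visual stream
+    return loss_av + loss_a + loss_v
+```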
+
+Training the Active Speaker Context Model. We also optimize the ASC using the PyTorch library and the Adam optimizer with an initial learning rate of $3 \times 10^{-6}$ and learning rate annealing $\gamma = 0.1$ every 10 epochs. We train the full ASC module from scratch and include batch normalization layers to favor faster convergence [18]. Similar to the STE, we use the cross-entropy loss to train the ASC, but in this scenario the loss consists of a single term $\mathcal{L}_{av}$ .
+
+The ASC processes a fixed number of speakers $S$ to construct $\mathbf{C}_t$ . Given that not every reference time $t$ contains the same number of speaker detections, there are three scenarios for $J$ overlapping speakers and an ensemble of size $S$ . If $J \geq S$ , we randomly sample $S - 1$ context speakers (one is already assigned as reference). If $J < S$ , we select a reference, and randomly sample (with replacement) $S - 1$ context speakers from the remaining $J - 1$ speakers. In the extreme case where $J = 1$ , the reference speaker is replicated $S - 1$ times.
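+
+These three cases could be handled as in the following sketch (a hypothetical helper; names are ours):
+
+```python
+import random
+
+def sample_context_speakers(overlapping, reference, S):
+    """Return S speaker ids for the ensemble; the reference always occupies the first slot."""
+    candidates = [s for s in overlapping if s != reference]
+    if len(candidates) >= S - 1:
+        context = random.sample(candidates, S - 1)        # J >= S: sample without replacement
+    elif candidates:
+        context = random.choices(candidates, k=S - 1)     # J < S: sample with replacement
+    else:
+        context = [reference] * (S - 1)                   # J == 1: replicate the reference speaker
+    return [reference] + context
+```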
+
+# 4. Experiments
+
+This section evaluates our method's ability to detect active speakers in untrimmed videos. We conduct the experiments using the large-scale AVA-ActiveSpeaker dataset [31]. We divide the experiment analyses into three parts. First, we compare our approach with the existing state-of-the-art approaches. Then, we ablate our method and inspect the contributions of each of its core components. Finally, we do a performance breakdown and analyze success and failure modes.
+
+AVA-ActiveSpeaker dataset. The AVA-ActiveSpeaker dataset [31] contains 297 Hollywood movies, with 133 of those for training, 33 for validation and 131 for testing. The dataset provides normalized bounding boxes for 5.3 million
+
+faces (2.6M training, 0.76M validation, and 2.0M testing) detected over 15-minute segments from each movie. These detections occur at an approximate rate of 20 fps and are manually linked over time to produce face tracks depicting a single identity (actor). Each face detection in the dataset is augmented with a speaking or non-speaking attribute. Thus, the task at inference time is to produce a confidence score that indicates the chance of speaking for each given face detection. In our experiments, we use the dataset's official evaluation tool, which computes the mean average precision (mAP) metric over the validation (ground-truth available) and test (ground-truth withheld) sets. Unless mentioned otherwise, we evaluate active speaker detection on the AVA-ActiveSpeaker validation subset.
+
+Dataset sampling at training time. As noted by Roth et al. [31], AVA-ActiveSpeaker has much smaller variability in comparison to natural image datasets of comparable size. For the training of the STE, we prevent over-fitting by randomly sampling a single clip with $k$ time-contiguous crops from every face track, instead of densely sampling every possible time-contiguous clip of size $k$ in the tracklet. Therefore, our epoch size correlates with the number of face tracks rather than the number of face detections. To train our context refinement models, we use standard dense sampling over the training set, as we do not observe any significant over-fitting in this stage.
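+
+A sketch of this per-track sampling (assuming each track is an ordered list of face crops; names are ours):
+
+```python
+import random
+
+def sample_training_clips(face_tracks, k):
+    """One random window of k contiguous crops per face track, so epoch size ~ number of tracks."""
+    clips = []
+    for track in face_tracks:
+        if len(track) >= k:
+            start = random.randrange(len(track) - k + 1)
+            clips.append(track[start:start + k])
+    return clips
+```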
+
+# 4.1. Comparison with the State-of-the-art
+
+We compare our method's performance to the state-of-the-art and summarize these results in Table 1. We set $L = 11$ and $S = 3$ for the experiment. We report results on the validation and testing subsets. The latter is withheld for the AVA-ActiveSpeaker task in the ActivityNet challenge [2].
+
+We observe that our method outperforms all existing approaches in the validation subset. This result is especially notable as the other methods rely on 3D convolutions and large-scale pre-training, while our model relies exclusively on contextual information built from 2D models. The best existing approach, Chung et al. [5], obtains $85.5\%$ . Even though their method uses a large-scale multi-modal dataset for pre-training, our context modeling outperforms their solution by $1.6\%$ .
+
+As Table 1 shows, our method achieves competitive results in the testing subset. Even though our model discards 3D convolutions and model ensembles [5], we rank 2nd in the AVA-ActiveSpeaker 2019 Leaderboard. The overall results on the AVA-ActiveSpeaker validation and testing subsets validate the effectiveness of our approach. We empirically demonstrate that it improves the state-of-the-art, but a question remains: what makes our approach strong? We answer that question next via ablation studies.
+
+| Method | mAP |
+| --- | --- |
+| Validation subset | |
+| Active Speakers Context (Ours) | 87.1 |
+| Chung et al. (Temporal Convolutions) [5] | 85.5 |
+| Chung et al. (LSTM) [5] | 85.1 |
+| Zhang et al. [39] | 84.0 |
+| ActivityNet Challenge Leaderboard 2019 | |
+| Naver Corporation [5] | 87.8 |
+| Active Speakers Context (Ours) | 86.7 |
+| University of Chinese Academy of Sciences [39] | 83.5 |
+| Google Baseline [31] | 82.1 |
+
+Table 1. Comparison with the State-of-the-art. We report the performance of state-of-the-art methods in the AVA Active Speakers validation and testing subsets. Results in the validation set are obtained using the official evaluation tool published by [31], test set metrics are obtained using the ActivityNet challenge evaluation server. In the validation subset, we improve the performance of previous approaches by $1.6\%$ , without using large-scale multimodal pre-training. In the test subset, we achieve $86.7\%$ and rank second in the leaderboard, without using 3D convolutions, sophisticated post-processing heuristics or assembling multiple models.
+
+| Context & Refinement | mAP |
+| --- | --- |
+| No Context | 79.5 |
+| Context + No Refinement | 84.4 |
+| Context + Pairwise Refinement | 85.2 |
+| Context + Pairwise Refinement + MLP | 85.3 |
+| Context + Temporal Refinement | 85.7 |
+| ASC | 87.1 |
+
+# 4.2. Ablation Analysis
+
+Does context refinement help? We first assess the effectiveness of the core components of our approach. Table 2 compares the performance of the baseline, a two-stream network (No Context) that encodes a single speaker over a short period; a naive context prediction using a single linear layer (Context + No Refinement); and three ablated variants of our method. Two of these variants verify the individual contributions of the two ASC refinement steps (Context + Pairwise Refinement and Context + Temporal Refinement). The third (Context + Pairwise Refinement + MLP) uses a two-layer perceptron with about the same number of parameters as the ASC, and is useful to test whether the improved performance derives simply from the increased size of the network.
+
+While the initial assembly of the context tensor already improves the baseline performance, our results show that
+
+context refinement brings complementary gains. That is, the active speaker detection task benefits not only from the presence of additional clip information in the context, but also profits from directly modeling speaker relationships and temporal structures. We observe that our whole context refinement process leads to an average of $4.73\%$ mAP increase over the context tensor and a naive prediction. These results validate our design choice of distilling context via the pairwise and temporal refinement modules.
+
+Are there alternatives for temporal refinement? We now compare our temporal refinement strategy against a simpler smoothing baseline. During the recent ActivityNet challenge, Chung et al. [5] explored the moving average strategy, reporting an increase of $1.3\%$ mAP using a median filter over prediction scores. A key difference is that [5] processes short-term windows (0.5s), whereas we consider windows of 2.25s. We found that smoothing long temporal windows negatively impacts the performance of our method. Table 3 shows that there is a negligible increase $(+0.02\%)$ using short temporal averages, and a drastic drop $(-11.64\%)$ using long averages.
+
+Table 2. Effect of context refinement. We ablate the contributions of our method's core components. We begin with a baseline that does not include any context, which achieves $79.5\%$ . Then, by simply leveraging context with a linear prediction layer, we observe a significant boost of $4.9\%$ . Additionally, we find that adding pairwise and temporal refinement further improves the performance by $0.8\%$ and $1.3\%$ respectively. The ASC best performance is achieved only if both refinement steps are included.
+
+| w/o temporal refinement | + moving average (0.5s) | + moving average (2.25s) | + temporal refinement |
+| --- | --- | --- | --- |
+| 85.21% | +0.02% | -11.64% | +1.9% |
+
+Table 3. Moving average vs. temporal refinement (mAP). We observe only marginal benefits when replacing the proposed temporal refinement step with a moving average; in fact, this operation incurs a large penalty when smoothing longer sampling windows.
+
+Does context size matter? We continue the ablation by analyzing the influence of context size on the final performance of our method. Table 4 summarizes the two dimensions of this analysis, where we vary the temporal support (i.e. vary $L$ from 1 to 11 clips), or alter the number of context speakers (i.e. vary $S$ from 1 to 3 speakers).
+
+Overall, extended temporal contexts and more co-occurring speakers at training time favor the performance of our method. These results indicate that the proposed approach utilizes both types of context to disambiguate predictions for a single speaker. We observe a larger gap in performance when going from one to two speakers (1.8% on average) than when going from two to three (0.15% on average). This behavior might be due to the relative scarcity of samples containing more than three speakers at training time. Regarding temporal support, we observe gradual improvements by increasing $L$ . However, as soon as $L$ reaches 11, we see diminishing returns that seem to be correlated with the average length of face tracks in the training subset. The context size analysis performed here supports our central hypothesis that context from long time horizons and multiple speakers is crucial for making accurate active speaker detections.
+
+| Temporal Support (L) ↓ / Number of Speakers (S) → | S = 1 | S = 2 | S = 3 |
+| --- | --- | --- | --- |
+| L = 1 | 79.5 | 83.1 | 82.9 |
+| L = 3 | 83.1 | 84.6 | 85.0 |
+| L = 5 | 84.3 | 85.8 | 85.9 |
+| L = 7 | 84.9 | 86.4 | 86.6 |
+| L = 9 | 85.5 | 86.7 | 86.9 |
+| L = 11 | 85.6 | 87.0 | 87.1 |
+
+Table 4. Impact of context size. We investigate the effect of different sizes of temporal support and the number of speakers used to construct our context representation. To that end, we report the mAP obtained by different context size configurations. We observe that both types of context play a crucial role in boosting performance. Using our longest temporal support, $L = 11$ (2.25 seconds), our method improves the baseline $(L = 1 / S = 1)$ by $6.1\%$ . Moreover, when combined with context from multiple speakers, i.e. $L = 11 / S = 3$ , we achieve an additional boost of $1.5\%$ resulting in our best performance of $87.1\%$ . In short, our findings reveal the importance of sampling context from long time horizons and multiple speakers.
+
+| Sampling Distortion Type | Temporal Order | Surrounding | None |
+| --- | --- | --- | --- |
+| mAP | 77.8 | 84.5 | 87.1 |
+
+Table 5. Effect of context sampling distortion. We observe that our method loses $2.6\%$ mAP when the context speakers are randomly sampled across the video. It also drops drastically $(-9.3\%)$ when the context temporal order is scrambled. These results validate the importance of sampling context for the target face within the right surrounding and preserving its temporal order.
+
+Does context sampling matter? We now evaluate the effect of tampering with the temporal structure when constructing $\mathbf{C}_t$ . We also assess the effectiveness of 'in-context' speaker information, i.e. we study whether sampling 'out-of-context' speakers degrades the performance of our approach. For the first experiment, we build $\mathbf{C}_t$ exactly as outlined in Section 3.3, but randomly shuffle the temporal sequence of all speakers except the clips at reference time $t$ . For the second experiment, we replace the context speakers with a set of speakers sampled from a random time $t'$ such that $t' \neq t$ . We report the results in Table 5.
+
+Let us analyze the two sampling distortions one at a time. First, the ablation results highlight the importance of the temporal structure. If such a structure is altered, the effectiveness of our method drops below that of the baseline to $77.8\%$ . Second, it is also important to highlight that incorporating out-of-context speakers in our pipeline is worse than using only the reference speaker (84.5% vs. 87.1%). In other words, temporal structure and surrounding speakers provide unique contextual cues that are difficult to replace with random information sampled from a video.
+
+
+
+
+Figure 4. Performance breakdown. We analyze the performance of the baseline approach (w/o context) and our proposed method (Active Speaker Context) under two different visual characteristics of the samples at inference time: number of faces (left) and face size (right). For the number of faces, we split the dataset into three exclusive buckets: one, two, and three faces, which altogether cover $>90\%$ of the dataset. Similarly, we split the dataset into three face sizes: Small (S), Medium (M), Large (L), corresponding to face crops of width $\leq 64$ , $>64$ but $\leq 128$ , and $>128$ pixels, respectively. In all scenarios, we observe that our approach outperforms the baseline, with those gains being more pronounced in challenging scenarios. For instance, when we compare their performance for three (3) faces, our method offers a significant boost of $13.2\%$ . Moreover, for the hard case of small faces (S), we achieve an improvement of $11.3\%$ over the baseline.
+
+
+
+# 4.3. Results Analysis
+
+Performance Breakdown. Following recent works [1], we break down our model's and the baseline's performance in terms of relevant characteristics of the AVA-ActiveSpeaker dataset, namely the number of faces and face size, which we present in Figure 4. We also analyze the impact of noise in speech and find that both our method and the baseline are fairly robust to altered speech quality.
+
+The performance breakdown for the number of faces in Figure 4 (left) reveals the drawbacks of the baseline approach, and the benefits of ASC. We split the validation frames into three mutually exclusive groups according to the number of faces in the frame. For each group, we compute the mAP of the baseline and our approach. Although both follow a similar trend with performance decreasing as the number of faces increases, our method is more resilient. For instance, in the challenging case of three faces, our method outperforms the baseline by $13.2\%$ . This gain could be because our method leverages information from multiple speakers at training time, making it aware of conversational patterns and temporal structures unseen by the baseline.
+
+Dealing with small faces is a challenge for active speaker detection methods [31]. Figure 4 (right) presents how the baseline and our ASC method are affected by face size.
+
+
+Figure 5. Qualitative results. The attention within the pairwise refinement step has some characteristic activation patterns. We highlight the reference speaker in a yellow bounding box and represent the attention score with a heat-map growing from light-blue (no attention) to red (the highest-attention). The first row shows a typical activation pattern for two silent speakers. The attention model focuses exclusively on the reference speaker (highlighted in yellow) at the reference time. In the cases where there is an active speaker (second row), the attention concentrates on the reference speaker over an extended time interval. In the third row, the reference speaker is also active, but in this case, his facial gestures are ambiguous; thus, the attention also looks at the context speaker.
+
+We divide the validation set into three splits: (S) small faces with width less than 64 pixels, (M) medium faces with width between 64 and 128 pixels, and (L) large faces with width more than 128 pixels. There is a correlation between the performance of active speaker detection and face size. Smaller faces are usually harder to label as active speakers. However, our approach exhibits less performance degradation than the baseline as face size decreases. In the most challenging case, i.e. small faces, our method outperforms the baseline by $11.3\%$ . We hypothesize that our method aggregates information from larger faces via temporal context, which enhances predictions for small faces.
+
+Qualitative results. We analyze the pairwise relations built over the tensor $\mathbf{C}_t$ on a model trained with only two speakers. Figure 5 showcases three sample sequences centered at a reference time $t$ , each containing two candidate speakers. We highlight the reference speaker in yellow and represent the attention score with a heat-map growing from light-blue (no attention) to red (the highest attention).
+
+Overall, we notice three interesting patterns. First, sequences labeled as silent generate very sparse activations focusing on a specific timestamp (see top row). We hypothesize that identifying the presence of speech is a much simpler task than detecting the actual active speaker. Therefore, our model reliably decides by attending to only a short time span. Second, for sequences with an active speaker, our pairwise refinement tends to distribute the attention towards a single speaker throughout the temporal window (see the
+
+second row). Besides, the attention score tends to have a higher value near the reference time and slowly decays as it approaches the limit of the time interval. Third, we find many cases in which our model attends to multiple speakers in the scene. This behavior often happens when the facial features of the reference speaker are difficult to observe or highly ambiguous. For example, the reference speaker in the third row is hard to see due to insufficient lighting and face orientation in the scene. Hence, the network attends simultaneously to both the reference and the context speaker.
+
+# 5. Conclusion
+
+We have introduced a context-aware model for active speaker detection that leverages cues from co-occurring speakers and long-time horizons. We have shown that our method outperforms the state-of-the-art in active speaker detection, and works remarkably well in challenging scenarios when many candidate speakers or only small faces are on-screen. We have mitigated existing drawbacks, and hope our method paves the way towards more accurate active speaker detection. Future explorations include using speaker identities as a supervision source as well as learning to detect faces and their speech attribute jointly.
+
+Acknowledgments. This publication is based on work supported by the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research (OSR) under Award No. OSR-CRG2017-3405, and by Uniandes-DFG Grant No. P17.853122
+
+# References
+
+[1] Humam Alwassel, Fabian Caba Heilbron, Victor Escorcia, and Bernard Ghanem. Diagnosing error in temporal action detectors. In ECCV, 2018.
+[2] Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. Activitynet: A large-scale video benchmark for human activity understanding. In CVPR, 2015.
+[3] Punarjay Chakravarty, Sayeh Mirzaei, Tinne Tuytelaars, and Hugo Van hamme. Who's speaking? audio-supervised classification of active speakers in video. In International Conference on Multimodal Interaction (ICMI), 2015.
+[4] Punarjay Chakravarty, Jeroen Zegers, Tinne Tuytelaars, et al. Active speaker detection with audio-visual co-training. In International Conference on Multimodal Interaction (ICMI), 2016.
+[5] Joon Son Chung. Naver at activitynet challenge 2019-task b active speaker detection (ava). arXiv preprint arXiv:1906.10555, 2019.
+[6] Joon Son Chung, Amir Jamaludin, and Andrew Zisserman. You said that? arXiv preprint arXiv:1705.02966, 2017.
+[7] Joon Son Chung, Arsha Nagrani, and Andrew Zisserman. Voxceleb2: Deep speaker recognition. arXiv preprint arXiv:1806.05622, 2018.
+[8] Joon Son Chung, Andrew Senior, Oriol Vinyals, and Andrew Zisserman. Lip reading sentences in the wild. In CVPR, 2017.
+[9] Joon Son Chung and Andrew Zisserman. Out of time: automated lip sync in the wild. In ACCV, 2016.
+[10] Soo-Whan Chung, Joon Son Chung, and Hong-Goo Kang. Perfect match: Improved cross-modal embeddings for audiovisual synchronisation. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019.
+[11] Ross Cutler and Larry Davis. Look who's talking: Speaker detection using video and audio correlation. In International Conference on Multimedia and Expo, 2000.
+[12] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
+[13] Mark Everingham, Josef Sivic, and Andrew Zisserman. Taking the bite out of automated naming of characters in tv video. Image and Vision Computing, 27(5):545-559, 2009.
+[14] Rohit Girdhar, Joao Carreira, Carl Doersch, and Andrew Zisserman. Video action transformer network. In CVPR, 2019.
+[15] Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In CVPR, 2006.
+[16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
+[17] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.
+[18] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
+[19] Arindam Jati and Panayiotis Georgiou. Neural predictive coding using convolutional neural networks toward unsupervised learning of speaker characteristics. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 27(10):1577-1589, 2019.
+[20] Changil Kim, Hijung Valentina Shin, Tae-Hyun Oh, Alexandre Kaspar, Mohamed Elgharib, and Wojciech Matusik. On learning associations of faces and voices. In ACCV, 2018.
+[21] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
+[22] Jianing Li, Jingdong Wang, Qi Tian, Wen Gao, and Shiliang Zhang. Global-local temporal representations for video person re-identification. In ICCV, 2019.
+[23] Kevin P Murphy, Antonio Torralba, and William T Freeman. Using the forest to see the trees: A graphical model relating features, objects, and scenes. In NeurIPS, 2004.
+[24] Arsha Nagrani, Samuel Albanie, and Andrew Zisserman. Learnable pins: Cross-modal embeddings for person identity. In ECCV, 2018.
+[25] Arsha Nagrani, Samuel Albanie, and Andrew Zisserman. Seeing voices and hearing faces: Cross-modal biometric matching. In CVPR, 2018.
+[26] Arsha Nagrani, Joon Son Chung, and Andrew Zisserman. Voxceleb: a large-scale speaker identification dataset. arXiv preprint arXiv:1706.08612, 2017.
+[27] Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y Ng. Multimodal deep learning. In ICML, 2011.
+[28] Andrew Owens and Alexei A Efros. Audio-visual scene analysis with self-supervised multisensory features. In ECCV, 2018.
+[29] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. In NeurIPS-Workshop, 2017.
+[30] Mirco Ravanelli and Yoshua Bengio. Speaker recognition from raw waveform with sincnet. In IEEE Spoken Language Technology Workshop (SLT), 2018.
+[31] Joseph Roth, Sourish Chaudhuri, Ondrej Klejch, Radhika Marvin, Andrew Gallagher, Liat Kaver, Sharadh Ramaswamy, Arkadiusz Stopczynski, Cordelia Schmid, Zhonghua Xi, et al. AVA-ActiveSpeaker: An audio-visual dataset for active speaker detection. arXiv preprint arXiv:1901.01342, 2019.
+[32] Kate Saenko, Karen Livescu, Michael Siracusa, Kevin Wilson, James Glass, and Trevor Darrell. Visual speech recognition with loosely synchronized feature streams. In ICCV, 2005.
+[33] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017.
+[34] Quan Wang, Hannah Muckenhirn, Kevin Wilson, Prashant Sridhar, Zelin Wu, John Hershey, Rif A Saurous, Ron J Weiss, Ye Jia, and Ignacio Lopez Moreno. Voicefilter: Targeted voice separation by speaker-conditioned spectrogram masking. arXiv preprint arXiv:1810.04826, 2018.
+[35] Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In CVPR, 2018.
+[36] Chao-Yuan Wu, Christoph Feichtenhofer, Haoqi Fan, Kaiming He, Philipp Krahenbuhl, and Ross Girshick. Long-term feature banks for detailed video understanding. In CVPR, 2019.
+[37] Sarthak Yadav and Atul Rai. Learning discriminative features for speaker identification and verification. In Interspeech, 2018.
+
+[38] Aonan Zhang, Quan Wang, Zhenyao Zhu, John Paisley, and Chong Wang. Fully supervised speaker diarization. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019.
+[39] Yuan-Hang Zhang, Jingyun Xiao, Shuang Yang, and Shiguang Shan. Multi-task learning for audio-visual active speaker detection.
+[40] Linchao Zhu, Zhongwen Xu, Yi Yang, and Alexander G Hauptmann. Uncovering the temporal context for video question answering. International Journal of Computer Vision, 124(3):409-421, 2017.
\ No newline at end of file
diff --git a/activespeakersincontext/images.zip b/activespeakersincontext/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..2fa9a1ed3935a2a392cde7e8002f4def0056cced
--- /dev/null
+++ b/activespeakersincontext/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ce7dc73bca51a25fe8d551be3b9c6c7bd4ef60776111a246b4f25cac80e0d21e
+size 333511
diff --git a/activespeakersincontext/layout.json b/activespeakersincontext/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c223b0e6135703b279a53e3dd7ddb0ca74f562c3
--- /dev/null
+++ b/activespeakersincontext/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:04ed4badc7939cf027abd74f1076fa25cd2dfbe6620ef9dd6fcd9d178c2cb3d4
+size 417006
diff --git a/activevisionforearlyrecognitionofhumanactions/4add1777-e88b-483e-b686-f5e8f552c6af_content_list.json b/activevisionforearlyrecognitionofhumanactions/4add1777-e88b-483e-b686-f5e8f552c6af_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e3f7c2c042c85de5ac50f8b851b400eaa838ff50
--- /dev/null
+++ b/activevisionforearlyrecognitionofhumanactions/4add1777-e88b-483e-b686-f5e8f552c6af_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:accaef0d9ea60680016dcf16bd03d6c1156f084727dc0b3b2bc3b7f4a4b546e5
+size 81097
diff --git a/activevisionforearlyrecognitionofhumanactions/4add1777-e88b-483e-b686-f5e8f552c6af_model.json b/activevisionforearlyrecognitionofhumanactions/4add1777-e88b-483e-b686-f5e8f552c6af_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..99082cab9947a1060216be68b868cbdf432989f7
--- /dev/null
+++ b/activevisionforearlyrecognitionofhumanactions/4add1777-e88b-483e-b686-f5e8f552c6af_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c8e22a3af6585bb5329c071ddf3632cac5ca3e6d81647dca18da44d6fa823164
+size 100099
diff --git a/activevisionforearlyrecognitionofhumanactions/4add1777-e88b-483e-b686-f5e8f552c6af_origin.pdf b/activevisionforearlyrecognitionofhumanactions/4add1777-e88b-483e-b686-f5e8f552c6af_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..99d30e8bfe509ef6af3d77878e3f7bb43ad7b733
--- /dev/null
+++ b/activevisionforearlyrecognitionofhumanactions/4add1777-e88b-483e-b686-f5e8f552c6af_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:94d67c581c48069cfc4509350a6dd4978b8b4f3139717b6a447da2aa4c7d6571
+size 304743
diff --git a/activevisionforearlyrecognitionofhumanactions/full.md b/activevisionforearlyrecognitionofhumanactions/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..b280001d60301d1b512724627a322cf95da436ac
--- /dev/null
+++ b/activevisionforearlyrecognitionofhumanactions/full.md
@@ -0,0 +1,278 @@
+# Active Vision for Early Recognition of Human Actions
+
+Boyu Wang $^{1}$ , Lihan Huang $^{1}$ , Minh Hoai $^{1,2}$
+
+1Stony Brook University, 2VinAI Research
+
+{boywang, lihahuang, minhhoai}@cs.stonybrook.edu
+
+# Abstract
+
+We propose a method for early recognition of human actions, one that can take advantage of multiple cameras while satisfying the constraints due to limited communication bandwidth and processing power. Our method considers multiple cameras, and at each time step, it decides the best camera to use so that a confident recognition decision can be reached as soon as possible. We formulate the camera selection problem as a sequential decision process, and learn a view selection policy based on reinforcement learning. We also develop a novel recurrent neural network architecture to account for the unobserved video frames and the irregular intervals between the observed frames. Experiments on three datasets demonstrate the effectiveness of our approach for early recognition of human actions.
+
+# 1. Introduction
+
+We propose a method for early recognition of human actions using multiple video cameras. Early recognition aims to recognize an action as soon as possible. This task arises in many situations, and the ability to make early and reliable decisions will enable a wide range of applications, from robotics to surveillance and health care.
+
+Early recognition is an emerging research area, and several methods have been proposed in the last few years [1, 3, 6, 10, 13, 14, 16, 17, 21, 24, 30, 41, 42, 49-51, 53, 57]. However, existing methods only consider scenarios where temporal events can be wholly observed by a single camera at a fixed viewpoint. Unfortunately, human actions usually spread out in time and space, and a single camera with a fixed viewpoint cannot fully capture the progression of action due to limited view and coverage of the camera.
+
+The limitations of a single camera can be overcome by using multiple cameras. Compared to systems with a single camera, the advantages of having multiple cameras are obvious: wider coverage and multiple perspectives. Unfortunately, these obvious benefits of multiple cameras might be difficult to realize in practice due to the lack of a method that can handle the constraints of the physical infrastruc
+
+
+Figure 1: Early recognition of human actions with multiple cameras. Camera 1 is frontal but occluded by a sofa and a coffee table. Camera 2 shows a top-down view from far away. Camera 3 does not observe the action. Camera 4 only sees from the side. Due to the limited network and processing bandwidths, only one camera can be analyzed at a time. Which camera should be used at each time step, if no camera is always superior to the others?
+
+ture. Multiple cameras will require more communication bandwidth and processing power. In many situations, communication bandwidth and processing power cannot be increased, putting an upper limit on the processing throughput. Although many cameras can be installed and the cameras might have high frame rates, not all the frames captured by the cameras can be transmitted and analyzed. Thus, temporal downsampling, i.e., frame dropping, is unavoidable. A simple approach is to apply the same downsampling factor to all cameras to satisfy the overall throughput limit. However, this uniform resource allocation strategy is unlikely to be optimal because some cameras might provide more information about the ongoing human action than other cameras, as illustrated in Fig. 1.
+
+If processing power is not constrained and limited communication bandwidth is the only issue that we need to address, one may wonder if we can use low-resolution images or a high compression factor instead. However, spatial downsampling and compression can reduce the accuracy of the system, especially when the camera is far away from the scene of human action. Furthermore, to save com
+
+munication bandwidth, compression must be done prior to transmission using extra hardware equipment. This might be bulky, expensive, and not applicable to all cameras.
+
+In this paper, we consider a scenario where there are $k$ cameras, each with the maximum frame rate of $l$ frames per second. However, the total processing throughput of the entire system is limited to $l$ frames per second. Thus, the duration between two time steps is $1 / l$ second; and at each time step, only one frame from a camera can be analyzed. Considering the importance of this scenario, we propose a framework for camera selection and early recognition of human actions. Our framework is developed based on reinforcement learning, treating camera selection as the decision of an artificial agent that is interested in maximizing its ability to recognize human actions as early as possible. The agent maintains a belief state based on the history of its observations from multiple cameras. We use a set of Recurrent Neural Networks (RNNs) [40] to model the belief state of the agent, from which the policy of the agent is parameterized and learned.
+
+Reinforcement learning is a well-established framework for sequential decision making. But this framework is very general with numerous design choices, and defining the right state space or choosing the right reward function is not trivial. Our first contribution is a well-developed solution for camera selection and early recognition. We explicitly address the need for integrating observations over time for early recognition of human actions. We use recurrent networks for modeling the dynamics of human actions, but our situation requires a network architecture that is robust to the variable frame rate of an input video sequence (due to unobserved video frames). To this end, the second contribution of our work is the development of a novel recurrent network architecture that can estimate the missing values based on the last observed values and the elapsed time from the last observation. Another contribution is the approach for integrating information from multiple cameras for view selection and action recognition. Our framework and its components have been empirically validated on three multi-view or multi-modality datasets. The experiments show that the proposed recurrent network can robustly account for unobserved frames, and the learned policy for camera selection improves the early recognition ability of the camera system.
+
+# 2. Related Work
+
+Using multiple views for human action and activity recognition has been studied before [5, 18, 19, 33, 35, 45, 48, 54, 56]. These studies can be divided into two broad categories: explicitly building 3D models [48, 54] and integrating different 2D views. However, most prior studies were for offline recognition, and they assumed the cameras could be simultaneously used. They neither considered the early recognition problem nor addressed the need for cam-
+
+era selection. View-invariant features have been proposed in [2, 25, 44] for cross-view action recognition. But having a view-invariant representation is insufficient. Due to factors such as distance and occlusion, some views provide little information, and it is important not to select bad views.
+
+Active sensing is an important research area in robotics. However, most works in robotics are for search-and-rescue or static object recognition, e.g., [4, 12, 27, 37, 59], and they are not applicable to early recognition of human actions.
+
+Most similar to our work are the view selection methods [11, 46, 47]. Spurlock et al. [47] used a keyframe classifier for view selection, while Darrell and Pentland [11], Spurlock and Souvenir [46] also used reinforcement learning. However, these methods considered simpler types of human actions, where the classifier and the view selection policy can be defined based on individual frames. These methods are unsuitable for complex human actions, where the dynamics of the actions cannot be recognized accurately without integrating multiple observations over time. Empirically, these methods do not work well for complex human actions, as will be seen in Sec. 5. In this paper, we explicitly address the need for integrating observations over time for early decision making. This, in turn, requires a novel RNN architecture that can handle missing frames. This RNN architecture is a technical novelty by itself, and the whole framework is significantly different from the existing view selection methods.
+
+Another related work is by Possas et al. [34] for human activity recognition. Their setting, however, is different from ours. In their setting, an ego-centric camera and a motion sensor can both record data about human activity. The data from the motion sensor is always analyzed, while the data from the camera might not be analyzed due to the need to reserve power. Possas et al. [34] learn a power-efficient policy to confidently recognize what already happened instead of what is happening.
+
+Our work should not be confused with camera selection in TV broadcast [9, 38], where the selection policy sees all views before making the selection (no communication bandwidth problem). Meanwhile, we need to select a view without seeing the content of the views. Furthermore, the view selection policy for TV broadcast [9, 38] is mainly based on the visibility and changes of the human silhouette; it is not designed for action recognition.
+
+# 3. Overview of Proposed Framework
+
+Our framework for camera selection and early recognition is based on reinforcement learning. The core of the framework is an artificial agent that represents the system of multiple cameras. Camera selection is a sequential decision process of the agent, where the goal is to maximize the accuracy and minimize the latency in recognizing human actions. At each time step, the agent acquires an image
+
+from one camera, analyzes the image, and predicts the probabilities of the action classes. The goal of the agent is to maximize its sum of rewards, where the rewards depend on how accurately and early human actions can be detected and recognized.
+
+Belief state. The belief state will integrate information from multiple sequences of images from multiple cameras. We will model the belief state based on RNNs [40] (including LSTMs [15] and IndRNNs [29]), but we extend them to account for missing observations and variable frame rates. How the belief state is modeled and the novelty of our architecture will be described in the next section.
+
+Action and policy. At each time step, the agent will: 1) predict the class of the ongoing action, and 2) select a camera to acquire an image and analyze the scene. We will learn a probabilistic policy for both action recognition and view selection. For action recognition, the output at time $t$ is the probability vector $\mathbf{p}_t$ for the action classes. For view selection, the output at time $t$ is the probability vector $\pi_t$ for camera selection: the $i^{th}$ camera will be selected with the probability $\pi_t(i)$ .
+
+Reward. The reward is calculated based on the agreement between the recognition output and what action is occurring. Suppose there are $m$ action classes. The recognition output of the agent at time $t$ is an $m \times 1$ probability vector $\mathbf{p}_t$ where $\mathbf{p}_t(c)$ is the estimated probability for class $c$ occurring at time $t$ . Let $\mathbf{g}_t \in \{0,1\}^m$ be the ground truth binary indicator vector for what action classes occur at time $t$ ; $\mathbf{g}_t(c) = 1$ if class $c$ is occurring and 0 otherwise. We will consider the reward function that is the sum of log likelihood: $r_t = \sum_{c=1}^{m} \mathbf{g}_t(c) \log \mathbf{p}_t(c)$ . The goal of the agent is to maximize the average running reward: $\lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} r_t$ . By maximizing the average running reward, the agent will maximize the agreement between the predicted probabilities and the ground truth category vector at multiple time steps, leading to early recognition ability.
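+
+As a simple illustration, the per-step reward and its empirical running average could be computed as follows (NumPy; names are ours):
+
+```python
+import numpy as np
+
+def running_reward(p, g):
+    """Average of r_t = sum_c g_t(c) * log p_t(c) over T time steps.
+
+    p: (T, m) predicted class probabilities; g: (T, m) binary ground-truth indicators.
+    """
+    eps = 1e-8                               # numerical safety for log(0)
+    r = (g * np.log(p + eps)).sum(axis=1)    # per-step log-likelihood reward
+    return r.mean()                          # empirical estimate of the average running reward
+```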
+
+# 4. Belief state modeling and policy learning
+
+Modeling the belief state and learning the policy are the two most crucial steps for the success of the proposed method. The belief state must encode relevant information, and the policy must be appropriately parameterized for two tasks: 1) early recognition of human actions; and 2) camera selection. Unlike previous view selection methods [11, 46] that define the belief state based on individual frames, ours can integrate observations over time to capture the dynamics of human actions. Our work is based on the recurrent network architecture [40], including LSTM [15] and IndRNN [29], and we will commonly refer to them as RNNs for brevity. We introduce novel extensions to RNNs for handling missing observations and integrating information from multiple cameras. We use RNNs instead of other
+
+
+Figure 2: Overview of our system with three main modules: mfRNN, view integration, and view selection. $\mathbf{x}_t^j$ represents the input from view $j$ at time $t$ . A solid box means the view is selected, while a dashed box means it is not. View integration and selection are performed at every step. This figure shows one step only. See Sec. 4 for more details.
+
+action classification methods, e.g., non-local network [52] due to the requirement of early recognition, i.e., to output the action probability at every time step without looking into the future.
+
+We will model the belief state with multiple RNNs, one for each camera. Each RNN is a recurrent network that integrates observations over time, and it outputs the probability vector for the action classes in consideration. Each RNN analyzes its data stream independently of the other RNNs, and each has its own ability to deal with unobserved frames. The combined recognition output is the weighted average of the outputs of the individual RNNs, where the weights are learned. For camera selection, we learn a policy that makes decisions based on the concatenated hidden state vectors of the RNNs. Fig. 2 provides an overview of the system. We will describe below how to handle unobserved video frames, how to integrate recognition outputs from multiple RNNs, and how to learn the policy for camera selection.
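+
+A minimal sketch of this integration, assuming the fusion weights are a learned per-camera softmax and the selection policy is a single linear layer over the concatenated hidden states (both are simplifying assumptions; the paper only states that the weights and the policy are learned):
+
+```python
+import torch
+import torch.nn as nn
+
+class ViewIntegration(nn.Module):
+    """Weighted average of per-camera class probabilities and a selection policy over cameras."""
+    def __init__(self, num_cameras, hidden_dim):
+        super().__init__()
+        self.view_logits = nn.Parameter(torch.zeros(num_cameras))        # learned fusion weights
+        self.policy = nn.Linear(num_cameras * hidden_dim, num_cameras)   # camera-selection head
+
+    def forward(self, per_view_probs, per_view_hidden):
+        # per_view_probs: (num_cameras, m); per_view_hidden: (num_cameras, hidden_dim)
+        w = torch.softmax(self.view_logits, dim=0)
+        p_t = (w.unsqueeze(1) * per_view_probs).sum(dim=0)                   # fused class probabilities p_t
+        pi_t = torch.softmax(self.policy(per_view_hidden.flatten()), dim=0)  # camera probabilities pi_t
+        return p_t, pi_t
+```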
+
+# 4.1. Analyzing observations from a single camera
+
+For each camera, we will train and use an RNN to integrate the observed video frames from the camera to obtain the predicted probability vector for multiple action classes. Using RNNs for integrating a sequence of observations is a powerful and popular approach for recognition, but most existing works analyze and process observations at a regular interval, assuming the video frames are always observed. In our case, not all video frames can be observed and analyzed at the same time, due to limited system throughput. In this section, we briefly review RNNs and then propose a novel extension to address missing observations.
+
+RNN and its limitation. An RNN (including LSTM and IndRNN) is a recurrent network that integrates observations sequentially. Consider a camera, and let $\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_t$ be
+
+the sequence of image frames (or their feature representations) that are supposedly observed by the camera. At time $t$ , the RNN analyzes an input $\mathbf{x}_t$ , updates its internal state $\mathbf{h}_t$ , and computes the output $\mathbf{p}_t$ . Here $\mathbf{p}_t$ is the predicted probabilities for the action classes in consideration. The recurrent updates are:
+
+$$
+\mathbf{h}_t = \text{updateState}\left(\mathbf{h}_{t-1}, \mathbf{x}_t\right); \quad \mathbf{p}_t = \text{computeOutput}\left(\mathbf{h}_t\right).
+$$
+
+Both updateState and computeOutput are parametric functions with learnable parameters. These functions have specific forms, such as the ones proposed in [15, 29, 40]. What is important to note here is that the RNN expects the input $\mathbf{x}_t$ at every step. Without the input, the state vector cannot be updated and the output cannot be produced. One can possibly skip the update procedure until the next observation, but this approach performs poorly in practice.
+
+RNN for Missing Frames (mfRNN). We now describe an RNN extension that can account for missing video frames. Consider a camera, and let $\mathbf{x}_t$ be the video frame that is supposedly observed by the camera at time $t$ , but the actual $\mathbf{x}_t$ might or might not be observed. If $\mathbf{x}_t$ is not observed, let $t' < t$ be the last time step where the video frame $\mathbf{x}_{t'}$ is observed. Let $\Delta_t$ be the elapsed time from the last observation: $\Delta_t = t - t'$ . For an unobserved frame $\mathbf{x}_t$ , we estimate the missing values based on the last observation and the elapsed duration $\Delta_t$ :
+
+$$
+\hat{\mathbf{x}}_t = \mathbf{w}_t \odot \mathbf{x}_{t'} + (1 - \mathbf{w}_t) \odot \mathbf{x}_{\text{null}}, \tag{1}
+$$
+
+$$
+\mathbf{w}_t = \exp(-\max(0, \Delta_t \mathbf{u} + \mathbf{v})). \tag{2}
+$$
+
+We assume two nearby video frames are similar and the level of similarity depends on the time difference between the two frames. In the above, the symbol $\odot$ denotes the element-wise product between two vectors. The missing values are estimated based on the last observed values, taking into account the elapsed time from the last observation. We parameterize $\hat{\mathbf{x}}_t$ as a weighted linear combination of $\mathbf{x}_{t'}$ and a default observation vector $\mathbf{x}_{null}$ . The linear combination is controlled by the weight vector $\mathbf{w}_t$ , which is a function of the elapsed time $\Delta_t$ . The entries of $\mathbf{w}_t$ are decay functions of time; they are approximately one if the last observation is recent, and close to zero if the last observation is a distant past. The parameters of the decay functions are $\mathbf{u}$ , $\mathbf{v}$ , and the default vector for filling missing values is $\mathbf{x}_{null}$ ; all these vectors have the same dimensionality as $\mathbf{x}_t$ .
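+
+A minimal PyTorch sketch of Eqs. (1) and (2) is given below. The parameter names $\mathbf{u}$, $\mathbf{v}$, and $\mathbf{x}_{null}$ follow the text; their zero initialization is an assumption, not a detail from the paper.
+
+```python
+import torch
+import torch.nn as nn
+
+class MissingFrameEstimator(nn.Module):
+    """Fill an unobserved frame from the last observation (Eqs. (1)-(2))."""
+    def __init__(self, dim: int):
+        super().__init__()
+        self.u = nn.Parameter(torch.zeros(dim))       # decay slope
+        self.v = nn.Parameter(torch.zeros(dim))       # decay offset
+        self.x_null = nn.Parameter(torch.zeros(dim))  # default observation vector
+
+    def forward(self, x_last: torch.Tensor, delta_t: float) -> torch.Tensor:
+        # w_t = exp(-max(0, Delta_t * u + v)); entries decay toward 0 as Delta_t grows
+        w = torch.exp(-torch.clamp(delta_t * self.u + self.v, min=0.0))
+        return w * x_last + (1.0 - w) * self.x_null   # Eq. (1)
+```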
+
+The learnable parameters of an mfRNN are $\mathbf{u}$, $\mathbf{v}$, $\mathbf{x}_{null}$, and the normal parameters of an RNN. At time $t$, the input to the mfRNN is either $\mathbf{x}_t$ or $\hat{\mathbf{x}}_t$, depending on whether the frame at time $t$ is observed or not. The output of the mfRNN is the class probability vector $\mathbf{p}_t$ of $m$ dimensions, where $m$ is the number of action classes. To learn the parameters of the mfRNN, we minimize the negative log probability of the correct class at each time step $t$: $l_t = -\log \mathbf{p}_t(c)$, where $c$ is the ground truth action class, and $\mathbf{p}_t(c)$ is the predicted probability for this class. The loss for a particular training sequence is the sum of the losses at all time steps: $\sum_{t}l_{t}$. By optimizing the total loss over multiple time steps, we force the mfRNN to make a correct prediction for partial actions, enabling early recognition ability. This loss function is consistent with the reward function defined in Sec. 3.
+
+We can learn the parameters of an mfRNN for each camera using multiple training video sequences from the camera. From each video sequence, we can generate multiple training sequences by randomly dropping some frames. This yields an augmented set of training data, proactively preparing the mfRNN for missing observations and also increasing its generalization ability for cases with no missing observations.
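+
+A simple way to generate such augmented sequences is sketched below; the exact dropping scheme is an assumption consistent with the dropping rates reported in Sec. 5.2.
+
+```python
+import random
+
+def drop_frames(frames, drop_rate):
+    """Randomly drop frames and record the elapsed time since the last observation."""
+    kept, elapsed, since_last = [], [], 0
+    for f in frames:
+        if random.random() < drop_rate:
+            since_last += 1
+            kept.append(None)      # unobserved; the mfRNN imputes it via Eq. (1)
+        else:
+            since_last = 0
+            kept.append(f)
+        elapsed.append(since_last)
+    return kept, elapsed
+```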
+
+The extension proposed here is not specific to any recurrent network architecture. In our experiments, we use both LSTMs [15] and IndRNNs [29], recurrent network architectures that have achieved state-of-the-art performance on many sequence modeling tasks. For brevity, we will refer to all recurrent networks equipped with the missing-frame estimation extension as mfRNNs. When it is necessary to specify the underlying architecture, we will refer to them as either mfLSTM or mfIndRNN.
+
+# 4.2. Integrating information from multiple cameras
+
+To integrate information from multiple cameras, we compute a weighted average of the mfRNNs' outputs. Recall from the previous subsection that there is one mfRNN for each camera, and the output of each mfRNN is a vector of class probabilities. Let $\mathbf{p}_t^i$ be the output at time $t$ of the mfRNN for the $i^{th}$ camera. We propose to aggregate multiple outputs by computing their weighted average, where the weights are determined based on the elapsed times from the last observations. The intuition is that for a certain view, if the elapsed time from last observation is large, the output of the corresponding mfRNN is unreliable, so it should contribute less to the consolidated output. Let $\Delta_t^i$ denote the elapsed time from the last observation for the $i^{th}$ camera. We combine the outputs of the mfRNNs as follows: $\mathbf{p}_t = \sum_{i=1}^k \omega^i (\Delta_t^1, \dots, \Delta_t^k) \mathbf{p}_t^i$ . Here $\omega^i (\Delta_t^1, \dots, \Delta_t^k)$ is the weight for the $i^{th}$ view; and it is a function of the elapsed times $\Delta_t^1, \dots, \Delta_t^k$ . The set of weight functions can be seen as a network with $k$ inputs and $k$ outputs, mapping from the elapsed times $\Delta_t^1, \dots, \Delta_t^k$ to the contribution weights. In this work, we use a simple network with two linear layers, a Leaky ReLU of 0.2 as the activation layer, and one soft-max layer. The parameters of this network are learned during the training phase.
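+
+The weight network described above is small; a sketch is shown below. The hidden layer size is an assumption, as it is not specified in the text.
+
+```python
+import torch
+import torch.nn as nn
+
+class ViewIntegration(nn.Module):
+    """Map the k elapsed times to k contribution weights and combine per-view outputs."""
+    def __init__(self, k: int, hidden: int = 32):
+        super().__init__()
+        self.net = nn.Sequential(
+            nn.Linear(k, hidden),      # first linear layer
+            nn.LeakyReLU(0.2),         # Leaky ReLU of 0.2
+            nn.Linear(hidden, k),      # second linear layer
+            nn.Softmax(dim=-1),        # soft-max over views
+        )
+
+    def forward(self, deltas: torch.Tensor, probs: torch.Tensor) -> torch.Tensor:
+        # deltas: (k,) elapsed times; probs: (k, m) per-view class probabilities
+        w = self.net(deltas)                      # (k,) contribution weights
+        return (w.unsqueeze(-1) * probs).sum(0)   # (m,) consolidated probabilities
+```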
+
+# 4.3. Learning the view selection policy
+
+To decide which camera to analyze at every step, a policy needs to be learned for camera selection based on the history of previous observations. The input to the policy needs to contain information about what has been observed and what has been selected for observation. We therefore parameterize the policy function so that the input to the function has two parts: 1) the hidden states of the mfRNNs, and 2) the elapsed times from the last observations. Formally, the input of the policy function is $\mathbf{s}_t = [\mathbf{h}_t^1,\dots ,\mathbf{h}_t^k,\Delta_t^1,\dots ,\Delta_t^k]$, where $\mathbf{h}_t^i$ is the hidden state of the $i^{th}$ mfRNN and $\Delta_t^i$ is the elapsed time from the last observation for the $i^{th}$ camera. This input vector $\mathbf{s}_t$ is essentially the belief state of the reinforcement learning agent for the view selection policy. We parameterize the policy function as a multi-layer perceptron that takes $\mathbf{s}_t$ as input and outputs the selection probability for each camera.
+
+Let $\pi$ denote this multi-layer perceptron policy function and $\pmb{\theta}$ the parameters of the policy. Let $\pi (a|\mathbf{s}_t,\pmb {\theta})$ be an output of the policy function, specifying the probability for selecting camera $a$. We use the advantage actor-critic [23, 31] (A2C) method to learn the parameters $\pmb{\theta}$. Each training instance is a set of video sequences from all cameras. Following the current policy $\pi (\cdot |\cdot ,\pmb {\theta})$ on the training instance, we obtain a training episode of belief states, actions, and rewards: $\mathbf{s}_1,a_1,r_1,\mathbf{s}_2,a_2,r_2,\dots$. For each step $t = 0, 1, \dots$ of the episode, the return $G_{t}$ is computed as a discounted sum of future rewards: $G_{t} = \sum_{j = t}^{T}\gamma^{j - t}r_{j}$, where $\gamma$ is the discount factor. We then update the parameters of the policy function using the formula: $\pmb{\theta} := \pmb{\theta} + \alpha G_{t}\nabla_{\pmb{\theta}}\log \pi (a_{t}|\mathbf{s}_{t},\pmb{\theta})$.
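+
+The update rule above can be implemented as follows. This is a minimal sketch of the stated rule in its REINFORCE-like form; the advantage actor-critic variant additionally subtracts a learned critic baseline from $G_t$, which is omitted here. The policy is assumed to return a torch Categorical distribution over cameras.
+
+```python
+import torch
+
+def policy_update(policy, optimizer, states, actions, rewards, gamma=0.9):
+    """One episode update: theta <- theta + alpha * G_t * grad log pi(a_t | s_t)."""
+    # Discounted returns G_t = sum_{j >= t} gamma^(j - t) * r_j, computed backwards.
+    returns, G = [], 0.0
+    for r in reversed(rewards):
+        G = r + gamma * G
+        returns.insert(0, G)
+
+    loss = torch.tensor(0.0)
+    for s, a, G_t in zip(states, actions, returns):
+        log_prob = policy(s).log_prob(torch.tensor(a))
+        loss = loss - G_t * log_prob     # negated for gradient ascent
+
+    optimizer.zero_grad()
+    loss.backward()
+    optimizer.step()
+```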
+
+# 5. Experiments
+
+# 5.1. Datasets and implementation details
+
+We perform experiments on three multi-view or multi-modality datasets: the NTU RGB-D dataset [43], the IXMAS dataset [54], and the nvGesture dataset [32].
+
+NTU RGB-D dataset is the largest multi-view dataset for action recognition. This dataset is collected by three Microsoft Kinects, capturing human actions from different views simultaneously. The dataset contains 60 different action classes: 40 daily actions, 9 health-related actions, and 11 mutual actions of two people. There are 40 distinct subjects. We used cross-subject evaluation as in [43], i.e., 20 subjects for training and 20 other subjects for testing. We excluded a small portion of data that does not have three views. In total, there are 39,984 training and 16,395 testing sequences from all three views.
+
+The dataset contains RGB images and skeleton information, but we only use the skeleton information in our experiments. Following [39], we use the distance matrix between skeleton joints to represent a skeleton. At time $t$, the original skeleton information is a vector of the 3D locations of 25 body joints, and we replace it with the pairwise distance matrix between the body joints. The distance matrix is symmetric, so only the upper triangular matrix is kept, vectorized, and used as the feature representation for a human performer. For a video frame and an action class with two performers, we concatenate the representation vectors of the two performers. For a video frame with only one performer, the feature representation vector of the performer is duplicated. The length of the feature vectors is 600.
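+
+A sketch of this feature construction is given below. Keeping only the strict upper triangle (excluding the zero diagonal) gives 25 x 24 / 2 = 300 values per performer and 600 after concatenation, matching the stated feature length; this interpretation of "upper triangular" is inferred from that length.
+
+```python
+import numpy as np
+
+def skeleton_feature(joints_3d: np.ndarray) -> np.ndarray:
+    """Vectorized pairwise joint-distance feature for one performer.
+
+    joints_3d : (25, 3) array of 3D joint locations; returns a (300,) vector.
+    """
+    diff = joints_3d[:, None, :] - joints_3d[None, :, :]   # (25, 25, 3)
+    dist = np.linalg.norm(diff, axis=-1)                    # (25, 25), symmetric
+    iu = np.triu_indices(dist.shape[0], k=1)                # strict upper triangle
+    return dist[iu]
+
+def frame_feature(p1: np.ndarray, p2: np.ndarray = None) -> np.ndarray:
+    """Concatenate two performers; duplicate the single performer if needed."""
+    f1 = skeleton_feature(p1)
+    f2 = skeleton_feature(p1 if p2 is None else p2)
+    return np.concatenate([f1, f2])                          # (600,)
+```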
+
+IXMAS dataset contains 11 action classes and 10 actors. Each action was performed three times by each actor. The actions were recorded by four side-view cameras and one top-view camera. This dataset only contains RGB frames. We used the pre-trained I3D model [7] to densely extract features for each frame.
+
+nvGesture dataset contains 25 different gesture classes, intended for human-computer interaction. A total of 20 subjects participated in data collection. The data was captured using two different cameras: one SoftKinetic depth camera recording depth and RGB frames in the front and another top-mounted DUO 3D camera capturing stereo IR. In addition, optical flow can be computed from the color stream and an IR disparity map can be computed from the IR-stereo pair, resulting in five different modalities. For each modality, there are 1,050 training and 482 test videos. As shown in [32], among all five modalities, the IR disparity performs worst and barely provides any benefit. In our experiments, we excluded the IR disparity modality. For this dataset, all sequences in the same modality have the same temporal length. We uniformly split each sequence into 20 segments and used the pre-trained I3D model to extract features for each segment. The I3D model was trained on three-channel RGB images and one-channel flow images. To apply these models to one-channel depth or IR images, we inflated the one-channel images into three-channel images and finetuned the I3D model on these modalities. This is a weakly-labeled dataset, and some parts of the videos do not contain gestures. We therefore discarded the first three and last three segments. We only used the center segments, within which most gestures occur.
+
+Overlapping views. Although the NTU and IXMAS datasets contain partially overlapping camera views, they remain valid for comparing different view selection policies, a setting also adopted by others [38, 46, 47]. Despite the overlap, camera perspectives are drastically different and some views are more informative than others.
+
+Implementation details. The mfRNNs for different cameras are trained separately. For the NTU dataset, we adopted IndRNN [29], which is an RNN architecture that achieved state-of-the-art performance on this dataset (for recognition, not early recognition). We used the same network architecture: a 6-layer IndRNN with hidden layers of 512 units and a dropout rate of 0.25. Note that our results are not directly comparable to [20, 28, 29, 39, 60] because: 1) we
+
+| Method | View 1 | View 2 | View 3 |
+| --- | --- | --- | --- |
+| Test Scenario 1 | | | |
+| IndRNN | 63.84 | 59.83 | 56.34 |
+| IndRNN + data augmentation | 65.62 | 61.08 | 58.66 |
+| IndRNN + input interpolation | 66.03 | 62.33 | 59.75 |
+| mfIndRNN (proposed) | 70.53 | 65.38 | 60.99 |
+| Test Scenario 2 | | | |
+| IndRNN | 73.27 | 69.49 | 64.61 |
+| IndRNN + data augmentation | 68.72 | 65.61 | 60.82 |
+| IndRNN + input interpolation | 68.86 | 65.25 | 62.09 |
+| mfIndRNN (proposed) | 75.40 | 71.04 | 66.02 |
+
+trained an mfIndRNN on each view separately, while these methods trained on all three views at once; and 2) some methods [20, 28, 39, 60] use a CNN for sequence classification, which requires seeing the full video and is therefore not applicable to early recognition. For the IXMAS dataset, we used a one-layer LSTM with a hidden state size of 100 and a dropout rate of 0.5. For the nvGesture dataset, we used a one-layer LSTM with a hidden state size of 512 and a dropout rate of 0.25.
+
+Both the actor and critic networks for the view selection policy were two-layer perceptrons (hidden sizes of 512 and 128) with Leaky ReLU of 0.2 as the activation function. The reward discount factor $\gamma$ was 0.9. All models were trained with the Adam optimizer [22] with an initial learning rate of $10^{-4}$, which was decreased by a factor of 10 when training performance plateaued. Training was stopped when the learning rate was smaller than $10^{-8}$.
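+
+The optimization schedule can be reproduced roughly as follows; `train_one_epoch` is a placeholder for the task-specific training loop and is not part of the paper.
+
+```python
+import torch
+
+def fit(model, train_one_epoch, init_lr=1e-4, min_lr=1e-8):
+    """Adam with the lr reduced 10x on plateau; stop once the lr falls below 1e-8."""
+    optimizer = torch.optim.Adam(model.parameters(), lr=init_lr)
+    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
+        optimizer, mode='max', factor=0.1)
+    while optimizer.param_groups[0]['lr'] >= min_lr:
+        score = train_one_epoch(model, optimizer)  # returns a training-performance score
+        scheduler.step(score)
+```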
+
+# 5.2. Handling missing frames
+
+One contribution of this paper is the development of mfRNNs, novel recurrent network architectures for handling missing observations. In this section, we analyze the ability of mfRNNs and competing approaches for handling unobserved video frames. We analyze each camera separately, without camera selection and integration.
+
+Methods for comparison. Our approach for handling missing frames has two notable steps: 1) train an RNN classifier with data augmented by random frame dropping; and 2) extend the normal RNN architecture to include the null and time-decay parameters for filling in missing values. We consider three alternative methods for comparison: 1) use a normal RNN classifier without any modification and without augmented training data; 2) perform data augmentation, but use a normal RNN classifier; and 3) use a normal RNN classifier, perform data augmentation by random frame dropping, and
+
+Table 1: Handling missing frames. This shows the classification accuracies of several methods for two different test scenarios. The proposed mfIndRNN achieves the best performance in both test scenarios.
+
+| Integration method | Test Scenario 1 | Test Scenario 2 |
+| --- | --- | --- |
+| Uniform weights | 75.35 | 79.76 |
+| Learned weights (proposed) | 76.66 | 81.28 |
+
+Table 2: Comparison of two integration methods. The first method is based on uniform averaging, and the second is based on weighted averaging with learned weights.
+
+use linear interpolation to fill the missing values (we assume future frames are available for interpolation, which is not true in practice). There are works [8, 26, 36, 58] that fill missing observations by either interpolation or imputation. Interpolation uses the temporal relation within the data stream to fill in missing values. However, this usually requires knowing the frames before and after the missing frame, and the frame after the current time is unavailable in an online setting. Imputation usually results in a two-step process, and missing patterns are not effectively explored [55].
+
+Test scenarios. We measure the classification performance of these methods for two test scenarios. In Test Scenario 1, the test sequences have variable frame rates due to the random frame dropping; the dropping rate ranges from $20\%$ to $70\%$ . In Test Scenario 2, the test sequences are the original test sequences; every frame is observed.
+
+Results. Tab. 1 shows the experiment results on the NTU dataset. The actual RNN architecture used in this experiment is IndRNN [29], so the resulting network for handling missing frames is called mfIndRNN. Tab. 1 reports the action classification accuracy of four methods on three camera views separately. The reported numbers are averaged over 5 experiment runs. The proposed mfIndRNN achieves the best performance. In Test Scenario 1, using augmented training data or input interpolation also improves the performance of IndRNN. This is expected, as the classifier is trained to anticipate missing video frames, and this scenario does occur during testing. However, the augmented training data may hurt the performance of an IndRNN classifier, as can be seen in the Test Scenario 2 results in Tab. 1, which correspond to test sequences without missing frames. This is due to having the wrong type of augmented data: the test data has no missing frames, while the generated training data is severely different. On the other hand, the proposed mfIndRNN with learnable decay parameters has the right architecture to take advantage of the augmented training data, regardless of the test scenario.
+
+# 5.3. Integrating information from multiple cameras
+
+To integrate the outputs of multiple mfRNNs, we perform the following training process. Each training instance is a set of sequences from all camera views. We randomly select a view at every time step. At every time step, each mfRNN will output a probability vector. As described in Sec. 4.2, we learn a weighted combination of these outputs.
+
+| Policy/Method | acc@40 | acc@100 | $\overline{\text{acc}}$ |
+| --- | --- | --- | --- |
+| Use All Views | 59.90 | 81.28 | 61.77 |
+| Use View-invariant Features [39] | 44.74 | 70.38 | 50.17 |
+| Use Keyframe Classifier [47] | 16.22 | 43.57 | 26.01 |
+| Frame-based Q-Learning [46] | 27.23 | 53.71 | 35.47 |
+| Always Select View 1 | 52.80 | 74.18 | 54.94 |
+| Always Select View 2 | 47.71 | 68.92 | 50.52 |
+| Always Select View 3 | 41.52 | 62.01 | 45.24 |
+| Select Random View | 49.86 | 76.66 | 54.81 |
+| Cycle Thru All Views | 51.69 | 78.02 | 56.31 |
+| The learned policy (proposed) | 54.55 | 79.62 | 58.01 |
+
+Table 3: Comparison of different view selection policies on the NTU dataset. This experiment assumes only one view can be analyzed at a time. acc@R is the classification accuracy when only R% of the action has occurred. $\overline{acc}$ is the average accuracy taken over all observation ratios.
+
+During training, the parameters of the weight computation function are learned, and the parameters of the mfRNNs are finetuned.
+
+We compare the proposed view combination method with a baseline where the outputs of multiple mfRNNs are averaged. Tab. 2 shows the performance of these methods under two test scenarios on the NTU dataset. In Test Scenario 1, only one view is available at a time, while in Test Scenario 2, all views are available at every step. In both cases, the proposed method outperforms the uniform pooling approach.
+
+# 5.4. Evaluating the learned policy for view selection
+
+Evaluation metrics. We consider three evaluation metrics: 1) recognition accuracy when only the first $40\%$ of the action has been observed (denoted as acc@40), 2) recognition accuracy when the full action has been observed (denoted as acc@100), and 3) the average early recognition performance (denoted as $\overline{\mathrm{acc}}$). The third metric is based on the average accuracy over different observational ratios. The observational ratio is the proportion of an action that has been observed when the recognition decision is made.
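+
+For clarity, the three metrics can be computed from per-ratio accuracies as sketched below; the exact set of observation ratios evaluated is an assumption, since it is not enumerated here.
+
+```python
+import numpy as np
+
+def early_recognition_metrics(correct_by_ratio):
+    """correct_by_ratio maps an observation ratio (e.g. 0.1, ..., 1.0) to a list of
+    0/1 correctness flags over the test set."""
+    acc = {r: float(np.mean(v)) for r, v in correct_by_ratio.items()}
+    return {
+        'acc@40': acc.get(0.4),
+        'acc@100': acc.get(1.0),
+        'mean_acc': float(np.mean(list(acc.values()))),  # average over all ratios
+    }
+```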
+
+Comparison with direct baseline methods. We compare the learned policy with some baseline methods: i) keep using the same view; ii) select a random view at every step; and iii) select views in a cycle, such as 1, 2, 3, 1, 2, 3, etc. Experimental results are shown in Tab. 3, and there are several notable observations. First, consider the five baseline policies shown in the table. Always Select View 1 is better than the policies that keep selecting either View 2 or View 3. This is understandable because View 1 is the frontal view. Always Select View 1 is better than Select Random View and Cycle Thru All Views when the observation
+
+ratio is $40\%$ (acc@40), but not when the observation ratio is $100\%$ (acc@100). This suggests that View 1 is much more informative than the other views at the beginning of the action, and it is important to observe and analyze more video frames from View 1. However, the additional information provided by View 1 will diminish as the observation ratio increases. Toward the end of the action sequence, it is better to observe the action from multiple cameras for better recognition accuracy. The reinforcement learning policy automatically learns which view to select at each time step, and it outperforms all other direct baseline policies.
+
+Comparison with other view-invariant and view-selection methods. We also implemented three other methods for comparison, and the results are shown in Tab. 3. 1) Use View-Invariant Features: we use the vectorized distance matrix as the feature representation, which is a view-invariant feature representation for skeleton data, as described in Sec. 5.1. Since the features are view invariant, we can use a single RNN for all views. Specifically, an IndRNN is trained with sequences that are generated by randomly selecting one view at each time step. However, only using view-invariant features is not enough, as can be seen in Tab. 3. Due to occlusion and other factors, one view might still be better than other views even though we use view-invariant features. 2) Use Keyframe Classifier for view selection [47]. This method first uses iterative clustering to learn a set of keyframes that are discriminative for recognition. It then uses supervised learning to learn to select the discriminative frames at each time step. 3) Use frame-based $Q$-learning for view selection [46]. However, this method, and also the second method, only considers the current frame for choosing the next view. The performance of these two methods is relatively poor, possibly due to the lack of a recurrent network to integrate information from multiple observations. As can be seen from Tab. 3, the learned view selection policy outperforms the other methods by a wide margin. This policy has an average early recognition accuracy $\overline{\mathrm{acc}}$ of $58.01\%$. This is not too far below $61.77\%$, which is the average early recognition accuracy of the method that analyzes all views from all cameras at each time step. This unfair comparison is reported here for reference only.
+
+Selection behavior of the learned policy. Due to space limits, the selection behavior of the learned policy at test time is shown in the supplementary materials. This policy does not stick to any particular view, and it does not cycle through the views in any fixed order. However, the learned policy outperforms the random policy in our experiments, so it must take into account what is occurring and what has been observed to make its decisions. On average, less informative views are selected less frequently than other views. The selection frequencies for Views 1, 2, and 3 of the NTU dataset are $40.3\%$, $38.9\%$, and $20.9\%$, respectively.
+
+| Policy/Method | acc@40 | acc@100 | $\overline{\text{acc}}$ |
+| --- | --- | --- | --- |
+| Always Select View 1 | 18.85 | 30.80 | 22.22 |
+| Always Select View 2 | 46.49 | 67.25 | 49.45 |
+| Always Select View 3 | 41.17 | 59.96 | 43.87 |
+| Select Random View | 40.55 | 68.34 | 47.17 |
+| Cycle Thru All Views | 40.02 | 71.32 | 48.85 |
+| The learned policy (proposed) | 49.88 | 73.74 | 53.30 |
+
+
+Figure 3: Early recognition performance on the NTU dataset when View 1 suffers from severe occlusions. This shows the recognition accuracy against the observational ratio, which is the proportion of an action that has been observed when the recognition decision is made.
+
+Effect of severe occlusion. We further study the performance of the learned policy in the presence of severe occlusions. To simulate a scenario where View 1 is blocked by a table and only the upper body of a person is visible, we set the values of the leg joints in View 1 to 0. Tab. 4 shows the performance of different view selection policies under this condition. The learned policy learns to avoid View 1, and it outperforms other view selection policies by a wide margin. The overall selection frequencies for Views 1, 2, and 3 are $0.1\%$, $56.0\%$, and $43.9\%$, respectively. Fig. 3 shows the entire performance curves of the last three policies on the NTU dataset, plotting the recognition accuracy as a function of the observational ratio. To reduce clutter, we only plot the top-performing policies in this figure.
+
+Results on the IXMAS dataset. Tab. 5 shows the performance of different view selection policies on the IXMAS dataset. Following [46], we compute and report the leave-one-actor-out cross-validation performance of the methods. The results shown for [38, 46, 47] are copied from the original papers. As can be seen, the learned policy achieves the best performance on all three metrics.
+
+Table 4: Performance of view selection policies when View 1 suffers from severe occlusions.
+
+| Method | acc@40 | acc@100 | $\overline{\text{acc}}$ |
+| --- | --- | --- | --- |
+| Use All Views | 98.79 | 99.39 | 97.52 |
+| Select Random View | 95.94 | 97.15 | 92.21 |
+| Cycle Thru All Views | 96.06 | 97.57 | 93.09 |
+| Use Keyframe Classifier [47] | n/a | 84.0 | n/a |
+| Visibility-based Selection [38] | n/a | 89.0 | n/a |
+| Frame-based Q-Learning [46] | 85 | 94.24 | 83 |
+| The learned policy (proposed) | 97.64 | 97.87 | 94.32 |
+
+Table 5: Performance on IXMAS dataset.
+
+| Method | acc@40 | acc@100 | $\overline{\text{acc}}$ |
+| --- | --- | --- | --- |
+| Use All 4 Modalities (our impl.) | 65.35 | 82.37 | 68.20 |
+| Use All 4 Modalities (in [32]) | n/a | 83.4 | n/a |
+| Select Random Modality | 58.22 | 78.59 | 62.29 |
+| Cycle Thru All Modalities | 58.09 | 81.95 | 64.00 |
+| The learned policy (proposed) | 61.82 | 82.36 | 65.37 |
+
+Table 6: Performance on nvGesture dataset. The first two methods are shown for reference only; they use all four modalities at each time step, so they have unfair advantages over other methods.
+
+Results on the nvGesture dataset. Tab. 6 shows the performance of view selection policies on the nvGesture dataset. The learned policy outperforms the other view selection policies that can only analyze one view at each time step. For acc@100, the learned policy performs as well as the method that uses all four modalities, even though the learned policy only uses one modality at a time. Note that the additional post-processing step used in [32] is not applicable to early recognition (due to the need for all frames). This explains the $1\%$ accuracy gap between our implementation and [32] when using all four modalities.
+
+# 6. Conclusions
+
+In this paper, we study the problem of early recognition using multiple cameras. Our objective is to recognize human actions as early as possible under the constraint that only one camera can be accessed and analyzed at a time. We formulate this problem as a sequential decision process and develop our method based on reinforcement learning. Using reinforcement learning, we optimize a policy for selecting the best camera to use at each step and integrate information from multiple cameras. We also propose mfRNN, a novel recurrent neural network architecture that can deal with unobserved video frames, improving the overall performance of our method in recognizing human actions.
+
+Acknowledgement. This project is partially supported by Brookhaven National Lab, the US National Science Foundation Award IIS-1763981, and VinAI Research.
+
+# References
+
+[1] Yazan Abu Farha, Alexander Richard, and Juergen Gall. When will you do what?-anticipating temporal occurrences of activities. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018. 1
+[2] Saad Ali and Mubarak Shah. Human action recognition in videos using kinematic features and multiple instance learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(2):288-303, 2010. 2
+[3] Mohammad Sadegh Aliakbarian, Fatemeh Sadat Saleh, Mathieu Salzmann, Basura Fernando, Lars Petersson, and Lars Andersson. Encouraging lstms to anticipate actions very early. In Proceedings of the International Conference on Computer Vision, 2017. 1
+[4] Nikolay Atanasov, Bharath Sankaran, Jerome Le Ny, Thomas Koletschka, George J Pappas, and Kostas Daniilidis. Hypothesis testing framework for active object detection. In Proceedings of the IEEE Conference Robotics and Automation. IEEE, 2013. 2
+[5] Zhuowei Cai, Limin Wang, Xiaojiang Peng, and Yu Qiao. Multi-view super vector for action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014. 2
+[6] Yu Cao, Daniel Barrett, Andrei Barbu, Siddharth Narayanaswamy, Haonan Yu, Aaron Michaux, Yuewei Lin, Sven Dickinson, Jeffrey Mark Siskind, and Song Wang. Recognize human activities from partially observed videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013. 1
+[7] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. 5
+[8] Zhengping Che, Sanjay Purushotham, Kyunghyun Cho, David Sontag, and Yan Liu. Recurrent neural networks for multivariate time series with missing values. Nature Scientific reports, 8(1):6085, 2018. 6
+[9] Jianhui Chen, Hoang M Le, Peter Carr, Yisong Yue, and James J Little. Learning online smooth predictors for real-time camera planning using recurrent decision trees. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016. 2
+[10] Lei Chen, Jiwen Lu, Zhanjie Song, and Jie Zhou. Part-activated deep reinforcement learning for action prediction. In Proceedings of the European Conference on Computer Vision, 2018. 1
+[11] Trevor Darrell and Alex Pentland. Active gesture recognition using partially observable markov decision processes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1996. 2, 3
+[12] Enrique Dunn and Jan-Michael Frahm. Next best view planning for active model improvement. In Proceedings of the British Machine Vision Conference, 2009. 2
+[13] Harshala Gammulle, Simon Denman, Sridha Sridharan, and Clinton Fookes. Predicting the future: A jointly learnt model for action anticipation. In Proceedings of the International Conference on Computer Vision, 2019. 1
+
+[14] Minh Hoai and Fernando De la Torre. Max-margin early event detectors. International Journal of Computer Vision, 107(2):191-202, 2014. 1
+[15] Sepp Hochreiter and Jurgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997. 3, 4
+[16] Jian-Fang Hu, Wei-Shi Zheng, Lianyang Ma, Gang Wang, Jian-Huang Lai, and Jianguo Zhang. Early action prediction by soft regression. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018. 1
+[17] Dong Huang, Shitong Yao, Yi Wang, and Fernando De La Torre. Sequential max-margin event detectors. In Proceedings of the European Conference on Computer Vision, 2014. 1
+[18] Karim Iskakov, Egor Burkov, Victor Lempitsky, and Yury Malkov. Learnable triangulation of human pose. In Proceedings of the International Conference on Computer Vision, 2019. 2
+[19] Abhishek Kar, Christian Hane, and Jitendra Malik. Learning a multi-view stereo machine. In Advances in Neural Information Processing Systems, 2017. 2
+[20] Qiuhong Ke, Mohammed Bennamoun, Hossein Rahmani, Senjian An, Ferdous Sohel, and Farid Boussaid. Learning latent global network for skeleton-based action prediction. IEEE Transactions on Image Processing, 2019. 5, 6
+[21] Qiuhong Ke, Mario Fritz, and Bernt Schiele. Time-conditioned action anticipation in one shot. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019. 1
+[22] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In arXiv preprint arXiv:1412.6980, 2014. 6
+[23] Vijay R Konda and John N Tsitsiklis. Actor-critic algorithms. In Advances in Neural Information Processing Systems, 2000. 5
+[24] Yu Kong, Dmitry Kit, and Yun Fu. A discriminative model with multiple temporal scales for action prediction. In Proceedings of the European Conference on Computer Vision, 2014. 1
+[25] Yu Kong, Zhengming Ding, Jun Li, and Yun Fu. Deeply learned view-invariant features for cross-view action recognition. IEEE Transactions on Image Processing, 26(6): 3028-3037, 2017. 2
+[26] David M Kreindler and Charles J Lumsden. The effects of the irregular sample and missing data in time series analysis. Nonlinear Dynamical Systems Analysis for the Behavioral Sciences Using Real Data, page 135, 2012. 6
+[27] Catherine Laporte and Tal Arbel. Efficient discriminant viewpoint selection for active bayesian recognition. International Journal of Computer Vision, 68(3):267-287, 2006. 2
+[28] Chao Li, Qiaoyong Zhong, Di Xie, and Shiliang Pu. Cooccurrence feature learning from skeleton data for action recognition and detection with hierarchical aggregation. Proceedings of the International Joint Conference on Artificial Intelligence, 2018. 5, 6
+[29] Shuai Li, Wanqing Li, Chris Cook, Ce Zhu, and Yanbo Gao. Independently recurrent neural network (indrnn): Building a longer and deeper rnn. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018. 3, 4, 5, 6
+[30] Shugao Ma, Leonid Sigal, and Stan Sclaroff. Learning activity progression in lstms for activity detection and early detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016. 1
+[31] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In Proceedings of the International Conference on Machine Learning, 2016. 5
+[32] Pavlo Molchanov, Xiaodong Yang, Shalini Gupta, Kihwan Kim, Stephen Tyree, and Jan Kautz. Online detection and classification of dynamic hand gestures with recurrent 3d convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016. 5, 8
+[33] Ehsan Adeli Mosabbeb, Kaamran Raahemifar, and Mahmood Fathy. Multi-view human activity recognition in distributed camera sensor networks. Sensors, 13(8750-8770), 2013. 2
+[34] Rafael Possas, Sheila Pinto Caceres, and Fabio Ramos. Ego-centric activity recognition on a budget. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018. 2
+[35] S. Ramagiri, R. Kavi, and V. Kulathumani. Real-time multiview human action recognition using a wireless camera network. In Proceedings of the International Conference on Distributed Smart Cameras, 2011. 2
+[36] Kira Rehfeld, Norbert Marwan, Jobst Heitzig, and Jürgen Kurths. Comparison of correlation analysis techniques for irregularly sampled time series. Nonlinear Processes in Geophysics, 18(3):389–404, 2011. 6
+[37] Sumantra Dutta Roy, Santanu Chaudhury, and Subhashis Banerjee. Active recognition through next view planning: a survey. Pattern Recognition, 37(3):429-446, 2004. 2
+[38] Dmitry Rudoy and Lihi Zelnik-Manor. Viewpoint selection for human actions. International Journal of Computer Vision, 97(3):243-254, 2012. 2, 5, 8
+[39] Alejandro Hernandez Ruiz, Lorenzo Porzi, Samuel Rota Bulò, and Francesc Moreno-Noguer. 3d cnns on distance matrices for human action recognition. In ACM Multimedia, 2017. 5, 6, 7
+[40] D. Rumelhart, G. Hinton, and R. Williams. Learning internal representations by error propagation. In Parallel Distributed Processing, volume 1, chapter 8, pages 318-362. MIT Press, Cambridge, MA, 1986. 2, 3, 4
+[41] M. S. Ryoo, Thomas J. Fuchs, Lu Xia, J. K. Aggarwal, and Larry Matthies. Robot-centric activity prediction from first-person videos: What will they do to me? In International Conference on Human-Robot Interaction, 2015. 1
+[42] M.S. Ryoo. Human activity prediction: Early recognition of ongoing activities from streaming videos. In Proceedings of the International Conference on Computer Vision, 2011. 1
+[43] Amir Shahroudy, Jun Liu, Tian-Tsong Ng, and Gang Wang. Ntu rgb+d: A large scale dataset for 3d human activity analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016. 5
+
+[44] Abhishek Sharma, Abhishek Kumar, Hal Daume, and David W Jacobs. Generalized multiview analysis: A discriminative latent space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2012. 2
+[45] Richard Souvenir and Justin Babbs. Learning the viewpoint manifold for action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2008. 2
+[46] Scott Spurlock and Richard Souvenir. Multi-view action recognition one camera at a time. In Proceedings of the IEEE Workshop on Applications of Computer Vision, 2014. 2, 3, 5, 7, 8
+[47] Scott Spurlock, Junjie Shan, and Richard Souvenir. Discriminative poses for early recognition in multi-camera networks. In Proceedings of the 9th International Conference on Distributed Smart Cameras. ACM, 2015. 2, 5, 7, 8
+[48] Pavan Turaga, Ashok Veeraraghavan, and Rama Chellappa. Statistical analysis on stiefel and grassmann manifolds with applications in computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2008. 2
+[49] Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Anticipating the future by watching unlabeled video. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016. 1
+[50] Boyu Wang and Minh Hoai. Predicting body movement and recognizing actions: an integrated framework for mutual benefits. In Proceedings of the International Conference on Automatic Face and Gesture Recognition, 2018.
+[51] Boyu Wang and Minh Hoai. Back to the beginning: Starting point detection for early recognition of ongoing human actions. Computer Vision and Image Understanding, 175: 24-31, 2018. 1
+[52] Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018. 3
+[53] Xionghui Wang, Jian-Fang Hu, Jian-Huang Lai, Jianguo Zhang, and Wei-Shi Zheng. Progressive teacher-student learning for early action prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019. 1
+[54] Daniel Weinland, Remi Ronfard, and Edmond Boyer. Free viewpoint action recognition using motion history volumes. Computer Vision and Image Understanding, 104(2):249-257, 2006. 2, 5
+[55] Brian J Wells, Kevin M Chagin, Amy S Nowacki, and Michael W Kattan. Strategies for handling missing data in electronic health record derived data. eGEMs, 1(3), 2013. 6
+[56] Chen Wu, Amir Hossein Khalili, and Hamid Aghajan. Multiview activity recognition in smart homes with spatiotemporal features. In Proceedings of the International Conference on Distributed Smart Cameras, 2010. 2
+[57] Zhen Xu, Laiyun Qing, and Jun Miao. Activity autocompletion: Predicting human activities from partial videos. In Proceedings of the International Conference on Computer Vision, 2015. 1
+
+[58] Jinsung Yoon, William R Zame, and Mihaela van der Schaar. Deep sensing: Active sensing using multi-directional recurrent neural networks. In ICLR, 2018. 6
+[59] Mabel M Zhang, Nikolay Atanasov, and Kostas Daniilidis. Active end-effector pose selection for tactile object recognition through monte carlo tree search. In Proceedings of the IEEE/RSJ Conference on Intelligent Robots and Systems. IEEE, 2017. 2
+[60] Pengfei Zhang, Cuiling Lan, Junliang Xing, Wenjun Zeng, Jianru Xue, and Nanning Zheng. View adaptive neural networks for high performance skeleton-based human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019. 5, 6
\ No newline at end of file
diff --git a/activevisionforearlyrecognitionofhumanactions/images.zip b/activevisionforearlyrecognitionofhumanactions/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..2cdd44e132487a8643542dc39eeef4fa2907b8e3
--- /dev/null
+++ b/activevisionforearlyrecognitionofhumanactions/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0b2bfd7afa3eb627e6852fc433bb33c7a91859064bec3f4c772b87738c89314d
+size 330588
diff --git a/activevisionforearlyrecognitionofhumanactions/layout.json b/activevisionforearlyrecognitionofhumanactions/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ea0e7a7f4dff8fc1b98406847444f7e60504d5e5
--- /dev/null
+++ b/activevisionforearlyrecognitionofhumanactions/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cc5217e6215660a01a7282b08e1b617bbafd2521db8f6fbee346728d1694feff
+size 400845
diff --git a/actortransformersforgroupactivityrecognition/f7a85502-95dc-4fa4-87d5-8df7fb452518_content_list.json b/actortransformersforgroupactivityrecognition/f7a85502-95dc-4fa4-87d5-8df7fb452518_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..45477c593835279f5aad170d7f1ef2c751929e75
--- /dev/null
+++ b/actortransformersforgroupactivityrecognition/f7a85502-95dc-4fa4-87d5-8df7fb452518_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fd8db6228b661e0e3d946caf2094b3c7343627ff9c7a763e989fee689943b223
+size 72257
diff --git a/actortransformersforgroupactivityrecognition/f7a85502-95dc-4fa4-87d5-8df7fb452518_model.json b/actortransformersforgroupactivityrecognition/f7a85502-95dc-4fa4-87d5-8df7fb452518_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..a49b6187dd0f1dd1e8ba1f4b597db2ab2f147f55
--- /dev/null
+++ b/actortransformersforgroupactivityrecognition/f7a85502-95dc-4fa4-87d5-8df7fb452518_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9288bc5201fa131da6c0a67346c35e9773c4fe9411e56ac2f111cc9a123ecafc
+size 90982
diff --git a/actortransformersforgroupactivityrecognition/f7a85502-95dc-4fa4-87d5-8df7fb452518_origin.pdf b/actortransformersforgroupactivityrecognition/f7a85502-95dc-4fa4-87d5-8df7fb452518_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a20ad3939b7b9de9ec9c083a71288139f47b983e
--- /dev/null
+++ b/actortransformersforgroupactivityrecognition/f7a85502-95dc-4fa4-87d5-8df7fb452518_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9e5453f76c02ebac43a90260b6294653ffb46ecd6ecfb4a54963e7c71dd6ac9d
+size 1392008
diff --git a/actortransformersforgroupactivityrecognition/full.md b/actortransformersforgroupactivityrecognition/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..1d21cd20f3f24e40ad612360f96a3357387187b7
--- /dev/null
+++ b/actortransformersforgroupactivityrecognition/full.md
@@ -0,0 +1,270 @@
+# Actor-Transformers for Group Activity Recognition
+
+Kirill Gavrilyuk $^{1*}$ Ryan Sanford $^{2}$ Mehrsan Javan $^{2}$ Cees G. M. Snoek $^{1}$ $^{1}$ University of Amsterdam $^{2}$ Sportlogiq
+{kgavrilyuk, cgmsnoek}@uva.nl {ryan.sanford, mehrsan}@sportlogiq.com
+
+# Abstract
+
+This paper strives to recognize individual actions and group activities from videos. While existing solutions for this challenging problem explicitly model spatial and temporal relationships based on location of individual actors, we propose an actor-transformer model able to learn and selectively extract information relevant for group activity recognition. We feed the transformer with rich actor-specific static and dynamic representations expressed by features from a 2D pose network and 3D CNN, respectively. We empirically study different ways to combine these representations and show their complementary benefits. Experiments show what is important to transform and how it should be transformed. What is more, actor-transformers achieve state-of-the-art results on two publicly available benchmarks for group activity recognition, outperforming the previous best published results by a considerable margin.
+
+# 1. Introduction
+
+The goal of this paper is to recognize the activity of an individual and the group that it belongs to [11]. Consider for example a volleyball game where an individual player jumps and the group is performing a spike. Besides sports, such group activity recognition has several applications including crowd monitoring, surveillance and human behavior analysis. Common tactics to recognize group activities exploit representations that model spatial graph relations between individual actors (e.g. [27, 45, 60]) and follow actors and their movements over time (e.g. [28, 45, 48]). The majority of previous works explicitly model these spatial and temporal relationships based on the location of the actors. We propose an implicit spatio-temporal model for recognizing group activities.
+
+We are inspired by progress in natural language processing (NLP) tasks, which also require temporal modeling to capture the relationship between words over time.
+
+
+Figure 1: We explore two complementary static and dynamic actor representations for group activity recognition. The static representation is captured by 2D pose features from a single frame while the dynamic representation is obtained from multiple RGB or optical flow frames. These representations are processed by a transformer that infers group activity.
+
+Typically, recurrent neural networks (RNN) and their variants (long short-term memory (LSTM) and gated recurrent unit (GRU)) were the first choices for NLP tasks [8, 41, 52]. While designed to model a sequence of words over time, they experience difficulty modeling long sequences [14]. More recently, the transformer network [55] has emerged as a superior method for NLP tasks [15, 17, 33, 62] since it relies on a self-attention mechanism that enables it to better model dependencies across words over time without a recurrent or recursive component. This mechanism allows the network to selectively extract the most relevant information and relationships. We hypothesize a transformer network can also better model relations between actors and combine actor-level information for group activity recognition compared to models that require explicit spatial and temporal constraints. A key enabler is the transformer's self-attention mechanism, which learns interactions between the actors and selectively extracts information that is important for activity recognition. Therefore, we do not rely on any a priori spatial or temporal structure like graphs [45, 60] or models based on RNNs [16, 28]. We propose transformers for recognizing group activities.
+
+Besides introducing the transformer in group activity recognition, we also pay attention to the encoding of individual actors. First, by incorporating simple yet effective positional encoding [55]. Second, by explicit modeling of static and dynamic representations of the actor, which is illustrated in Figure 1. The static representation is captured by pose features that are obtained by a 2D pose network from a single frame. The dynamic representation is achieved by a 3D CNN taking as input the stacked RGB or optical flow frames, similarly to [2]. This representation enables the model to capture the motion of each actor without explicit temporal modeling via RNN or graphical models. Meanwhile, the pose network can easily discriminate between actions with subtle motion differences. Both types of features are passed into a transformer network where relations are learned between the actors, enabling better recognition of the activity of the group. We refer to our approach as actor-transformers. Finally, given that static and dynamic representations capture unique but complementary information, we explore the benefit of aggregating this information through different fusion strategies.
+
+We make three contributions in this paper. First, we introduce the transformer network for group activity recognition. It refines and aggregates actor-level features, without the need for any explicit spatial and temporal modeling. Second, we feed the transformer with a rich static and dynamic actor-specific representation, expressed by features from a 2D pose network and 3D CNN. We empirically study different ways to combine these representations and show their complementary benefits. Third, our actor-transformers achieve state-of-the-art results on two publicly available benchmarks for group activity recognition, the Collective [11] and Volleyball [28] datasets, outperforming the previous best published results [2, 60] by a considerable margin.
+
+# 2. Related Work
+
+# 2.1. Video action recognition
+
+CNNs for video action recognition. While 2D convolutional neural networks (CNN) have experienced enormous success in image recognition, initially they could not be directly applied to video action recognition, because they do not account for time, which is vital information in videos. Karpathy et al. [31] proposed 2D CNNs to process individual frames and explored different fusion methods in an effort to include temporal information. Simonyan and Zisserman [49] employed a two-stream CNN architecture that independently learns representations from input RGB image and optical flow stacked frames. Wang et al. [57] proposed to divide the video into several segments and used a multi-stream approach to model each segment with their combination in a learnable way. Many leveraged LSTMs to model long-term dependencies across frames [18, 37, 42, 47]. Ji et al. [30] were the first to extend 2D CNN to 3D, where time was the third dimension. Tran et al. [53] demonstrated the effectiveness of 3D CNNs by training on a large collection of noisy labeled videos [31]. Carreira and Zisserman [7] inflated 2D convolutional filters to 3D, exploiting training on large collections of labeled images and videos. Recent works explored leveraging the feature representation of the video learned by 3D CNNs and building models on top of that representation [26, 59]. Wang and Gupta [59] explored spatio-temporal graphs while Hussein et al. [26] suggested multi-scale temporal convolutions to reason over minute-long videos. Similarly, we also rely on the representation learned by a 3D CNN [7] to capture the motion and temporal features of the actors. Moreover, we propose to fuse this representation with the static representation of the actor pose to better capture exact positions of the actor's body joints.
+
+Attention for video action recognition. Originally proposed for NLP tasks [4] attention mechanisms have also been applied to image caption generation [61]. Several studies explored attention for video action recognition by incorporating attention via LSTM models [37, 47], pooling methods [22, 40] or graphs [59]. Attention can also be guided through different modalities, such as pose [5, 19] and motion [37]. More recently, transformer networks [55] have received special recognition due to the self-attention mechanism that can better capture long-term dependencies, compared to RNNs. Integrating the transformer network for visual tasks has also emerged [21, 44]. Parmar et al. [44] generalized the transformer to an image generation task, while Girdhar et al. [21] created a video action transformer network on top of a 3D CNN representation [7] for action localization and action classification. Similarly, we explore the transformer network as an approach to refine and aggregate actor-level information to recognize the activity of the whole group. However, we use representations of all actors to create query, key and values to refine each individual actor representation and to infer group activity, while [21] used only one person box proposal for query and clip around the person for key and values to predict the person's action.
+
+Pose for video action recognition. Most human actions are highly related to the position and motion of body joints. This has been extensively explored in the literature, including hand-crafted pose features [29, 43, 56], skeleton data [20, 25, 39, 46, 50], body joint representation [6, 8] and attention guided by pose [5, 19]. However, these approaches were only trained to recognize an action for one individual actor, which does not generalize well to inferring group activity. In our work we explore the fusion of the pose features with dynamic representations, following the multi-stream approach [13, 54, 63] for action recognition, but we leverage it to infer group activity.
+
+# 2.2. Group activity recognition
+
+Group activity recognition has recently received more attention largely due to the introduction of the public Collective dataset [11] and Volleyball dataset [28]. Initially, methods relied on hand-crafted features extracted for each actor, which were then processed by probabilistic graphical models [1, 9, 10, 12, 23, 34, 35]. With the emergence of deep learning, the performance of group activity recognition has steadily increased. Some of the more successful approaches utilized RNN-type networks. Ibrahim et al. [28] used LSTM to model the action dynamics of individual actors and aggregate the information to predict group activity. Deng et al. [16] integrated graphical models with RNN. Shu et al. [48] used a two-level hierarchy of LSTMs that simultaneously minimized the energy of the predictions while maximizing the confidence. Bagautdinov et al. [3] jointly detected every actor in a video, predicted their actions and the group activity by maintaining temporal consistency of box proposals with the help of RNN. Wang et al. [58] utilizes single person dynamics, intra-group and inter-group interactions with LSTM-based model. Li and Chuah [36] took an alternative approach, where captions were generated for every video frame and then were used to infer group activity. Ibrahim and Mori [27] created a relational representation of each person which is then used for multi-person activity recognition. Qi et al. [45] proposed an attentive semantic RNN that utilized spatio-temporal attention and semantic graphs to capture inter-group relationships. Lately, studies have been moving away from RNNs. Azar et al. [2] used intermediate representations called activity maps, generated by a CNN, to iteratively refine group activity predictions. Wu et al. [60] built an actor relation graph using a 2D CNN and graph convolutional networks to capture both the appearance and position relations between actors. Like Wu et al. [60] we also rely on actor-level representations but differently, we utilize the self-attention mechanism that has the ability to selectively highlight actors and group relations, without explicitly building any graph. Moreover, we enrich actor features by using static and dynamic representations. Similarly to [2] we build our dynamic representation with a 3D CNN.
+
+# 3. Model
+
+The goal of our method is to recognize group activity in a multi-actor scene through enhancement and aggregation of individual actor features. We hypothesize that the self-attention mechanism provided by transformer networks is a flexible enough model that can be successfully used out-of-the-box, without additional tricks or tweaks, for the inference of the activity of the whole group given the representation of each actor.
+
+Our approach consists of three main stages presented in Figure 2: actor feature extractor, group activity aggregation and fusion. In brief, the input to our model is a sequence of video frames $F_{t}, t = 1, \dots, T$ with $N$ actor bounding boxes provided for each frame where $T$ is the number of frames. We obtain the static and the dynamic representation of each actor by applying a 2D pose network on a single frame and a 3D CNN on all input frames. The dynamic representation can be built from RGB or optical flow frames, which are processed by a 3D CNN followed by a RoIAlign [24] layer. Next, actor representations are embedded into a subspace such that each actor is represented by a 1-dimensional vector. In the second stage, we apply a transformer network on top of these representations to obtain the action-level features. These features are max pooled to capture the activity-level features. A linear classifier is used to predict individual actions and group activity using the action-level and group activity-level features, respectively. In the final stage we introduce fusion strategies before and after the transformer network to explore the benefit of fusing information across different representations. We describe each stage in more details in the following subsections.
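+
+A minimal sketch of the aggregation stage is shown below, using a standard transformer encoder over the $N$ actor embeddings followed by max pooling and linear classifiers. The embedding size, number of heads, and class counts are placeholders, not the settings used in the paper.
+
+```python
+import torch
+import torch.nn as nn
+
+class ActorTransformerHead(nn.Module):
+    """Refine actor features with a transformer encoder, then classify actions and activity."""
+    def __init__(self, dim=256, heads=8, num_actions=9, num_activities=8):
+        super().__init__()
+        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads)
+        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
+        self.action_cls = nn.Linear(dim, num_actions)
+        self.activity_cls = nn.Linear(dim, num_activities)
+
+    def forward(self, actor_feats):
+        # actor_feats: (N, B, dim) -- the N actors play the role of the sequence.
+        refined = self.encoder(actor_feats)        # action-level features per actor
+        group = refined.max(dim=0).values          # max pool over actors
+        return self.action_cls(refined), self.activity_cls(group)
+```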
+
+# 3.1. Actor feature extractor
+
+All human actions involve the motion of body joints, such as hands and legs. This applies not only to the fine-grained actions performed in sports activities (e.g. spike and set in volleyball) but also to everyday actions such as walking and talking. This means that it is important to capture not only the position of joints but their temporal dynamics as well. For this purpose, we utilize two distinct backbone models to capture both the position and the motion of joints and actors themselves.
+
+To obtain joint positions, a pose estimation model is applied. It receives as input a bounding box around the actor and predicts the location of key joints. Our approach is independent of the particular choice of the pose estimation model. We select the recently published HRNet [51] as our pose network as it has a relatively simple design, while achieving state-of-the-art results on pose estimation benchmarks. We use the features from the last layer of the network, right before the final classification layer, in all our experiments. Specifically, we use the smallest network pose_hrnet_w32 trained on COCO key points [38], which shows good enough performance for our task as well.
+
+The second backbone network is responsible for modeling the temporal dynamics. Several studies have demonstrated that 3D CNNs, with enough available data for training [53, 7], can build strong spatio-temporal representations for action recognition. Accordingly, we utilize the I3D [7] network in our framework, since the pose network alone cannot capture the motion of the joints from a single frame. The I3D network processes the stacked frames $F_{t}, t = 1, \dots, T$ with inflated 3D convolutions. We consider RGB and optical flow representations as they can capture different motion aspects. As 3D CNNs are computationally expensive, we employ a RoIAlign [24] layer to extract features for each actor given $N$ bounding boxes around actors, while processing the whole input frames with the network only once.
+
+Figure 2: Overview of the proposed model. An input video with $T$ frames and $N$ actor bounding boxes is processed by two branches: static and dynamic. The static branch outputs an HRNet [51] pose representation for each actor bounding box. The dynamic branch relies on I3D [7], which receives as input either stacked RGB or optical flow frames. To extract actor-level features after I3D we apply a RoIAlign [24] layer. A transformer encoder $(E)$ refines and aggregates actor-level features, followed by individual action and group activity classifiers. Two fusion strategies are supported. For early fusion we combine actor-level features of the two branches before $E$; in the late fusion we combine the classifier prediction scores.
+
+# 3.2. Transformer
+
+Transformer networks were originally introduced for machine translation in [55]. The transformer network consists of two parts: an encoder and a decoder. The encoder receives an input sequence of words (source) that is processed by a stack of identical layers consisting of a multi-head self-attention layer and a fully-connected feed-forward network. Then, a decoder generates an output sequence (target) through the representation generated by the encoder. The decoder is built in a similar way to the encoder and has access to the encoded sequence. The self-attention mechanism is the vital component of the transformer network, and it can also be successfully used to reason about actors' relations and interactions. In the following, we describe the self-attention mechanism itself and how the transformer architecture can be applied to the challenging task of group activity recognition in video.
+
+Attention $A$ is a function that represents a weighted sum of the values $V$. The weights are computed by matching a query $Q$ with the set of keys $K$. The matching function can have different forms; the most popular is the scaled dot-product [55]. Formally, attention with the scaled dot-product matching function can be written as:
+
+$$
+A(Q, K, V) = \operatorname{softmax}\left(\frac{Q K^{T}}{\sqrt{d}}\right) V \tag{1}
+$$
+
+where $d$ is the dimension of both queries and keys. In the self-attention module all three representations $(Q, K, V)$ are computed from the input sequence $S$ via linear projections so $A(S) = A(Q(S), K(S), V(S))$ .
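+
+To make the formulation concrete, a minimal sketch of this self-attention over a set of actor features could look as follows (assuming PyTorch; the class name and shapes are illustrative and not part of the paper):
+
+```python
+import torch
+import torch.nn as nn
+
+class SelfAttention(nn.Module):
+    """Scaled dot-product self-attention (Eq. (1)) over N actor features of dimension d."""
+    def __init__(self, d):
+        super().__init__()
+        self.q, self.k, self.v = nn.Linear(d, d), nn.Linear(d, d), nn.Linear(d, d)
+        self.d = d
+
+    def forward(self, S):                                     # S: (N, d) actor features
+        Q, K, V = self.q(S), self.k(S), self.v(S)             # linear projections of S
+        W = torch.softmax(Q @ K.t() / self.d ** 0.5, dim=-1)  # (N, N) attention weights
+        return W @ V                                          # weighted sum of the values
+```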
+
+Since attention is a weighted sum of all values it overcomes the problem of forgetfulness over time, which is well-studied for RNNs and LSTMs [14]. In sequence-to-sequence modeling this mechanism gives more importance to the most relevant words in the source sequence. This is a desirable property for group activity recognition as well because we can enhance the information of each actor's features based on the other actors in the scene without any spatial constraints. Multi-head attention $A_{h}$ is an extension of attention with several parallel attention functions using separate linear projections $h_{i}$ of $(Q, K, V)$ :
+
+$$
+A_{h}(Q, K, V) = \operatorname{concat}\left(h_{1}, \dots, h_{m}\right) W, \tag{2}
+$$
+
+$$
+h_{i} = A\left(Q W_{i}^{Q}, K W_{i}^{K}, V W_{i}^{V}\right) \tag{3}
+$$
+
+The transformer encoder layer $E$ consists of multi-head attention combined with a feed-forward neural network $L$:
+
+$$
+L(X) = \operatorname{Linear}(\operatorname{Dropout}(\operatorname{ReLU}(\operatorname{Linear}(X)))) \tag{4}
+$$
+
+$$
+\hat{E}(S) = \operatorname{LayerNorm}\left(S + \operatorname{Dropout}\left(A_{h}(S)\right)\right) \tag{5}
+$$
+
+$$
+E(S) = \operatorname{LayerNorm}\left(\hat{E}(S) + \operatorname{Dropout}\left(L(\hat{E}(S))\right)\right) \tag{6}
+$$
+
+The transformer encoder can contain several such layers, which sequentially process an input $S$.
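+
+As an illustration of Eqs. (4)-(6), a minimal single-head encoder layer could be sketched as follows (assuming PyTorch; $d = 128$ and a feed-forward size of 256 follow the implementation details in Section 4.2, the rest is illustrative):
+
+```python
+import torch.nn as nn
+
+class EncoderLayer(nn.Module):
+    """One transformer encoder layer (Eqs. (4)-(6)) applied to actor features S of shape (N, d)."""
+    def __init__(self, d=128, heads=1, d_ff=256, p=0.1):
+        super().__init__()
+        self.attn = nn.MultiheadAttention(d, heads, dropout=p)
+        self.ff = nn.Sequential(nn.Linear(d, d_ff), nn.ReLU(),
+                                nn.Dropout(p), nn.Linear(d_ff, d))  # Eq. (4)
+        self.norm1, self.norm2 = nn.LayerNorm(d), nn.LayerNorm(d)
+        self.drop = nn.Dropout(p)
+
+    def forward(self, S):
+        S = S.unsqueeze(1)                                          # (N, 1, d): N actors, batch of 1
+        E_hat = self.norm1(S + self.drop(self.attn(S, S, S)[0]))    # Eq. (5)
+        E = self.norm2(E_hat + self.drop(self.ff(E_hat)))           # Eq. (6)
+        return E.squeeze(1)
+```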
+
+In our case $S$ is a set of actors' features $S = \{s_i | i = 1, \dots, N\}$ obtained by the actor feature extractors. As the features $s_i$ do not follow any particular order, the self-attention mechanism is a more suitable model than RNNs and CNNs for the refinement and aggregation of these features. An alternative approach is to incorporate a graph representation as in [60], which also does not rely on the order of the $s_i$. However, the graph representation requires explicit modeling of connections between nodes through appearance and position relations. The transformer encoder mitigates this requirement by relying solely on the self-attention mechanism. Nevertheless, we show that the transformer encoder can benefit from implicitly employing spatial relations between actors via positional encoding of $s_i$. We do so by representing each bounding box $b_i$ of the respective actor's features $s_i$ with its center point $(x_i, y_i)$ and encoding the center point with the same function $PE$ as in [55]. To handle 2D space we encode $x_i$ with the first half of the dimensions of $s_i$ and $y_i$ with the second half. In this work we consider only the encoder part of the transformer architecture, leaving the decoder part for future work.
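+
+A sketch of this 2D positional encoding, assuming the sinusoidal $PE$ of [55] applied to the box center coordinates (the function names and the exact way the encoding is combined with $s_i$ are our assumptions), might look like:
+
+```python
+import torch
+
+def pe(coord, d):
+    """Sinusoidal encoding of a scalar coordinate into a d-dimensional vector (PE of [55])."""
+    i = torch.arange(d // 2, dtype=torch.float)
+    freqs = 10000 ** (2 * i / d)
+    return torch.cat([torch.sin(coord / freqs), torch.cos(coord / freqs)])
+
+def encode_center(x, y, d=128):
+    """Encode a box center (x, y): x fills the first d/2 dimensions, y the second d/2."""
+    return torch.cat([pe(x, d // 2), pe(y, d // 2)])
+
+# s_i = s_i + encode_center(x_i, y_i)   # combined with the actor features before the encoder
+```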
+
+# 3.3. Fusion
+
+The work by Simonyan and Zisserman [49] demonstrated the improvements in performance that can be obtained by fusing different modalities that contain complementary information. Following their example, we also incorporate several modalities into one framework. We explore two branches, static and dynamic. The static branch is represented by the pose network, which captures the static position of body joints, while the dynamic branch is represented by I3D and is responsible for the temporal features of each actor in the scene. As RGB and optical flow can capture different aspects of motion, we study dynamic branches with both representations of the input video. To fuse the static and dynamic branches we explore two fusion strategies: early fusion of actors' features before the transformer network, and late fusion which aggregates the predictions of the classifiers, similar to [49]. Early fusion enables access to both static and dynamic features before inference of group activity. Late fusion processes the static and dynamic features separately for group activity recognition and can thus concentrate on each representation individually.
+
+# 3.4. Training objective
+
+Our model is trained in an end-to-end fashion to simultaneously predict individual actions of each actor and group activity. For both tasks we use a standard cross-entropy loss for classification and combine two losses in a weighted sum:
+
+$$
+\mathcal {L} = \lambda_ {g} \mathcal {L} _ {g} \left(y _ {g}, \tilde {y} _ {g}\right) + \lambda_ {a} \mathcal {L} _ {a} \left(y _ {a}, \tilde {y} _ {a}\right) \tag {7}
+$$
+
+where $\mathcal{L}_g, \mathcal{L}_a$ are cross-entropy losses, $y_g$ and $y_a$ are ground truth labels, $\tilde{y}_g$ and $\tilde{y}_a$ are model predictions for group activity and individual actions, respectively. $\lambda_g$ and $\lambda_a$ are scalar weights of the two losses. We find that equal weights for individual actions and group activity perform best so we set $\lambda_g = \lambda_a = 1$ in all our experiments, which we detail next.
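+
+In code, the objective of Eq. (7) amounts to a weighted sum of two cross-entropy terms; a minimal sketch (assuming PyTorch logits and integer labels) is:
+
+```python
+import torch.nn.functional as F
+
+def total_loss(group_logits, group_label, action_logits, action_labels,
+               lambda_g=1.0, lambda_a=1.0):
+    """Combined group activity and individual action loss (Eq. (7)), equal weights by default."""
+    loss_g = F.cross_entropy(group_logits, group_label)      # one group activity label per clip
+    loss_a = F.cross_entropy(action_logits, action_labels)   # one action label per actor
+    return lambda_g * loss_g + lambda_a * loss_a
+```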
+
+# 4. Experiments
+
+In this section, we present experiments with our proposed model. First, we introduce the two publicly available group activity datasets, the Volleyball dataset [28] and the Collective dataset [11], on which we evaluate our approach. Then we describe implementation details, followed by an ablation study of the model. Lastly, we compare our approach with the state-of-the-art and provide a deeper analysis of the results. For simplicity, we refer to our static branch as "Pose", the dynamic branch with RGB frames as "RGB" and the dynamic branch with optical flow frames as "Flow" in the following sections.
+
+# 4.1. Datasets
+
+The Volleyball dataset [28] consists of clips from 55 videos of volleyball games, which are split into two sets: 39 training videos and 16 testing videos. There are 4830 clips in total, 3493 training clips and 1337 clips for testing. Each clip is 41 frames in length. Available annotations include group activity label, individual players' bounding boxes and their respective actions, which are provided only for the middle frame of the clip. Bagautdinov et al. [3] extended the dataset with ground truth bounding boxes for the rest of the frames in clips which we are also using in our experiments. The list of group activity labels contains four main activities (set, spike, pass, winpoint) which are divided into two subgroups, left and right, having eight group activity labels in total. Each player can perform one of nine individual actions: blocking, digging, falling, jumping, moving, setting, spiking, standing and waiting.
+
+The Collective dataset [11] consists of 44 clips with lengths varying from 193 to around 1800 frames per clip. Every 10th frame has annotations of persons' bounding boxes with one of five individual actions: crossing, waiting, queueing, walking and talking. The group activity is determined by the action that most people perform in the clip. Following [45] we use 32 videos for training and 12 videos for testing.
+
+# 4.2. Implementation details
+
+To make a fair comparison with related works we use $T = 10$ frames as the input to our model on both datasets: the middle frame, 5 frames before and 4 frames after. For the Volleyball dataset we resize each frame to $720 \times 1280$ resolution, for the Collective to $480 \times 720$. During training we randomly sample one frame $F_{t_p}$ from the $T$ input frames for the pose network. During testing we use the middle frame of the input sequence. Following the conventional approach, we also use ground truth person bounding boxes for fair comparison with related work. We crop person bounding boxes from the frame $F_{t_p}$ and resize them to $256 \times 192$, which we process with the pose network to obtain actor-level feature maps. For the I3D network, we use feature maps obtained from the Mixed_4f layer after additional average pooling over the temporal dimension. Then we resize the feature maps to $90 \times 160$ and use the RoIAlign [24] layer to extract features of size $5 \times 5$ for each person bounding box in the middle frame of the input video. We then embed both pose and I3D features into a vector space with the same dimension $d = 128$. The transformer encoder uses a dropout of 0.1 and the size of the linear layer in the feed-forward network $L$ is set to 256.
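+
+The dynamic-branch feature extraction described above can be sketched as follows (assuming torchvision's RoIAlign; the function and variable names are ours, and the I3D feature tensor is taken as given):
+
+```python
+import torch
+import torch.nn.functional as F
+from torchvision.ops import roi_align
+
+def extract_actor_features(feat_4f, boxes, embed):
+    """feat_4f: (1, C, T', H', W') I3D Mixed_4f features of one clip;
+    boxes: (N, 4) actor boxes of the middle frame in (x1, y1, x2, y2) pixel coordinates."""
+    feat = feat_4f.mean(dim=2)                               # average pool over the temporal dimension
+    feat = F.interpolate(feat, size=(90, 160))               # resize feature maps to 90 x 160
+    rois = torch.cat([boxes.new_zeros(boxes.size(0), 1), boxes], dim=1)  # prepend batch index 0
+    pooled = roi_align(feat, rois, output_size=(5, 5),
+                       spatial_scale=90 / 720)               # map 720 x 1280 frame coords to 90 x 160
+    return embed(pooled.flatten(1))                          # linear embedding to d = 128 per actor
+```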
+
+For the training of the static branch we use a batch size of 16 samples and for the dynamic branch we use a batch size of 8 samples. We train the model for 20,000 iterations on both datasets. On the Volleyball dataset we use an SGD optimizer with momentum 0.9. For the first 10,000 iterations we train with the learning rate 0.01 and for the last 10,000 iterations with the learning rate 0.001. On the Collective dataset, the ADAM [32] optimizer with hyper-parameters $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ and $\epsilon = e^{-10}$ is used. Initially, we set the learning rate to 0.0001 and decrease it by a factor of ten after 5,000 and 10,000 iterations. The code of our model will be available upon publication.
+
+# 4.3. Ablation study
+
+We first perform an ablation study of our approach on the Volleyball dataset [28] to show the influence of all three stages of the model. We use group activity accuracy as an evaluation metric in all ablations.
+
+Actor-Transformer. We start with the exploration of the parameters of the actor-transformer. We experiment with the number of layers, the number of heads and positional encoding. Only the static branch, represented by the pose network, is considered in this experiment. The results are reported in Table 1.
+
+| # Layers | # Heads | Positional Encoding | Group Activity |
+| --- | --- | --- | --- |
+| 1 | 1 | × | 91.0 |
+| 1 | 1 | ✓ | 92.3 |
+| 1 | 2 | ✓ | 91.4 |
+| 2 | 1 | ✓ | 92.1 |
+
+Table 1: Actor-Transformer ablation on the Volleyball dataset using the static actor representation. Positional encoding improves the strength of the representation. Adding more heads and layers did not help, likely due to the limited number of available training samples.
+
+| Method | Static (Pose) | Dynamic (RGB) | Dynamic (Flow) |
+| --- | --- | --- | --- |
+| Base Model | 89.9 | 89.0 | 87.8 |
+| Graph [60] | 92.0 | 91.1 | 89.5 |
+| Activity Maps [2] | - | 92.0 | 91.5 |
+| Actor-Transformer (ours) | 92.3 | 91.4 | 91.5 |
+
+Table 2: Actor Aggregation ablation of person-level features for group activity recognition on the Volleyball dataset. Our actor-transformer outperforms a graph while matching the results of activity maps.
+
+Positional encoding is a valuable component, giving around a $1.3\%$ improvement. This is expected as group activity classes of the Volleyball dataset are divided into two subcategories according to the side on which the activity is performed: left or right. Therefore, explicitly adding information about actors' positions helps the transformer better reason about this part of the group activity. Typically, transformer-based language models benefit from using more layers and/or heads due to the availability of large datasets. However, the Volleyball dataset has a relatively small size and the transformer cannot fully reach its potential with a larger model. Therefore, we use one layer with one head in the rest of the experiments.
+
+Actor Aggregation. Next, we compare the actor-transformer with two recent approaches that combine information across actors to infer group activity. We use static single-frame (pose) and dynamic multi-frame (I3D) models as baselines. Each follows our single branch model without the actor-transformer part, by directly applying action and activity classifiers on actor-level features from the pose and the I3D networks. The first related method uses a relational graph representation to aggregate information across actors [60]. We use the authors' publicly available code for the implementation of the graph model. We also use an embedded dot-product function for the appearance relation and distance masking for the position relation, which performed best in [60]. For fair comparison, we replace the actor-transformer with a graph and keep the other parts of our single branch models untouched. The second related method is based on multiple refinement stages using spatial activity maps [2]. As we are using the same backbone I3D network, we directly compare with the results obtained in [2]. The comparisons are reported in Table 2. Our actor-transformer outperforms the graph for all backbone networks, with a good improvement for optical flow features, without explicitly building any relationship representation. We match the results of activity maps [2] on optical flow while having slightly worse results on RGB. However, we achieve these results without the need to convert bounding box annotations into segmentation masks and without multiple stages of refinement.
+
+| Method | Pose + RGB | Pose + Flow |
+| --- | --- | --- |
+| Early - summation | 91.2 | 88.5 |
+| Early - concatenation | 91.8 | 89.7 |
+| Late | 93.5 | 94.4 |
+
+Table 3: Fusion ablation of static and dynamic representations on the Volleyball dataset. The late fusion outperforms the early fusion approaches.
+
+Fusion. In the last ablation, we compare different fusion strategies to combine the static and dynamic representations of our model. For the late fusion, we set the weight for the static representation to be twice as large as the weight for the dynamic representation. The results are presented in Table 3. The early fusion is not beneficial for our model, performing similar or even worse than the single branch models. Early fusion strategies require the actor-transformer to reason about both static and dynamic features. Due to the small size of the Volleyball dataset, our model can not fully exploit this type of fusion. Concentrating on each of the two representations separately helps the model to better use the potential of static and dynamic features. Despite Flow only slightly outperforming RGB (91.5% vs. 91.4%), its fusion with the static representation has a bigger impact (93.9% vs. 93.1%), showing that Flow captures information that is more complementary to Pose than RGB does.
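+
+For reference, the late fusion used here simply combines the two classifiers' scores, with the static branch weighted twice as much (a sketch; the exact normalization of the scores is an assumption):
+
+```python
+def late_fusion(static_scores, dynamic_scores, w_static=2.0, w_dynamic=1.0):
+    """Late fusion of group activity scores from the static (Pose) and dynamic (RGB/Flow) branches."""
+    return w_static * static_scores + w_dynamic * dynamic_scores
+```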
+
+# 4.4. Comparison with the state-of-the-art
+
+Volleyball dataset. Next, we compare our approach with the state-of-the-art models on the Volleyball dataset in Table 4, using the accuracy metrics for group activity and individual action predictions. We present two variations of our model, late fusion of Pose with RGB (Pose + RGB) and Pose with optical flow (Pose + Flow). Both variations surpass all existing methods by a considerable margin: $0.5\%$ and $1.4\%$ for group activity, and $2.7\%$ and $2.9\%$ for individual action recognition. This supports our hypothesis that the transformer-based model with static and dynamic actor representations is beneficial for the group activity task. Moreover, we also compare the late fusion of RGB with the optical flow representation (RGB + Flow) and achieve the same group activity accuracy as [2], which also uses an I3D backbone. However, we achieve these results with a much simpler approach and without requiring any segmentation annotation. Combining all three representations gives the same performance as Pose + Flow, showing that using only one dynamic representation is sufficient.
+
+| Method | Backbone | Group Activity | Individual Action |
+| --- | --- | --- | --- |
+| Ibrahim et al. [28] | AlexNet | 81.9 | - |
+| Shu et al. [48] | VGG16 | 83.3 | - |
+| Qi et al. [45] | VGG16 | 89.3 | - |
+| Ibrahim and Mori [27] | VGG19 | 89.5 | - |
+| Bagautdinov et al. [3] | Inception-v3 | 90.6 | 81.8 |
+| Wu et al. [60] | Inception-v3 | 92.5 | 83.0 |
+| Azar et al. [2] | I3D | 93.0 | - |
+| Ours (RGB + Flow) | I3D | 93.0 | 83.7 |
+| Ours (Pose + RGB) | HRNet + I3D | 93.5 | 85.7 |
+| Ours (Pose + Flow) | HRNet + I3D | 94.4 | 85.9 |
+
+Table 4: Volleyball dataset comparison for individual action prediction and group activity recognition. Our Pose + Flow model outperforms the state-of-the-art.
+
+| Method | Backbone | Group Activity |
+| --- | --- | --- |
+| Lan et al. [35] | None | 79.7 |
+| Choi and Savarese [9] | None | 80.4 |
+| Deng et al. [16] | AlexNet | 81.2 |
+| Ibrahim et al. [28] | AlexNet | 81.5 |
+| Hajimirsadeghi et al. [23] | None | 83.4 |
+| Azar et al. [2] | I3D | 85.8 |
+| Li and Chuah [36] | Inception-v3 | 86.1 |
+| Shu et al. [48] | VGG16 | 87.2 |
+| Qi et al. [45] | VGG16 | 89.1 |
+| Wu et al. [60] | Inception-v3 | 91.0 |
+| Ours (RGB + Flow) | I3D | 92.8 |
+| Ours (Pose + RGB) | HRNet + I3D | 91.0 |
+| Ours (Pose + Flow) | HRNet + I3D | 91.2 |
+
+Table 5: Collective dataset comparison for group activity recognition. Our Pose + RGB and Pose + Flow models achieve the state-of-the-art results.
+
+Collective dataset. We further evaluate our model on the Collective dataset and provide comparisons with previous methods in Table 5. We use only group activity accuracy as a metric, following the same approach as the related work. Interestingly, our individual branches on the Collective dataset show much more variation in their performance than on the Volleyball dataset: Flow - $83.8\%$, Pose - $87.9\%$, RGB - $90.8\%$. However, with both fused models, Pose + RGB and Pose + Flow, we achieve state-of-the-art results, slightly outperforming the best published results of [60]. We also explore the fusion of RGB and Flow representations and find that this combination performs best on the Collective dataset, reaching $92.8\%$ accuracy. We hypothesize that the Pose and RGB representations capture similar information that is complementary to the optical flow representation, as supported by the results of the Pose + RGB model, which is only slightly better than the RGB representation alone. We also try to combine all three representations without receiving any additional improvement over RGB + Flow. It is worth noting that with the same backbone I3D network, Azar et al. [2] achieve $85.8\%$ accuracy, which is $7.0\%$ lower than our results, showing the benefits of the transformer-based model over their activity maps approach.
+
+
+Figure 3: Example of per-actor attention obtained by the actor-transformer. Most attention is concentrated on the key actor (5), who performs a setting action, which helps to correctly predict the left set group activity. Best viewed in the digital version.
+
+
+Figure 4: Volleyball dataset confusion matrix for group activity recognition. Our model achieves over $90\%$ accuracy for each group activity.
+
+# 4.5. Analysis
+
+To analyze the benefits of our actor-transformer we illustrate the attention of the transformer in Figure 3. Each row of the matrix on the right represents the distribution of attention $A_{h}$ in Equation 2, using the representation of the actor with the number of the row as a query. For most actors the transformer concentrates mostly on the key actor with number 5 of the left set group activity, who performs a setting action. To further understand the performance of our model we also present confusion matrices for group activity recognition on the Volleyball dataset in Figure 4 and the Collective dataset in Figure 5. For every group activity on the Volleyball dataset our model achieves accuracy over $90\%$, with the lowest accuracy for the right set class $(90.6\%)$. The most confusion arises from discriminating set, spike and pass from each other, regardless of their spatial location, left or right. Also, the model struggles to distinguish between right winpoint and left winpoint. On the Collective dataset, our approach reaches perfect recognition for the queueing and talking classes. However, two activities, crossing and walking, lead to the most confusion for our model. Several works [58, 2] argue that crossing and walking are naturally the same activity as they only differ by the relation between person and street. Integrating global scene-level information could potentially help to distinguish these two activities, which we leave for future work.
+
+Figure 5: Collective dataset confusion matrix for group activity recognition. Most confusion comes from distinguishing crossing and walking.
+
+# 5. Conclusion
+
+We proposed a transformer-based network as a refinement and aggregation module of actor-level features for the task of group activity recognition. We show that without any task-specific modifications the transformer matches or outperforms related approaches optimized for group activity recognition. Furthermore, we studied static and dynamic representations of the actor, including several ways to combine these representations in an actor-transformer. We achieve the state-of-the-art on two publicly available benchmarks surpassing previously published results by a considerable margin.
+
+# References
+
+[1] Mohammed Abdel Rahman Amer, Peng Lei, and Sinisa Todorovic. HiRF: Hierarchical random field for collective activity recognition in videos. In ECCV, 2014. 3
+[2] Sina Mokhtarzadeh Azar, Mina Ghadimi Atigh, Ahmad Nickabadi, and Alexandre Alahi. Convolutional relational machine for group activity recognition. In CVPR, 2019. 2, 3, 6, 7, 8
+[3] Timur M. Bagautdinov, Alexandre Alahi, François Fleuret, Pascal Fua, and Silvio Savarese. Social scene understanding: End-to-end multi-person action localization and collective activity recognition. In CVPR, 2017. 3, 5, 7
+[4] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2014. 2
+[5] Fabien Baradel, Christian Wolf, and Julien Mille. Human activity recognition with pose-driven attention to rgb. In BMVC, 2018. 2
+[6] Congqi Cao, Yifan Zhang, Chunjie Zhang, and Hanqing Lu. Action recognition with joints-pooled 3d deep convolutional descriptors. In IJCAI, 2016. 2
+[7] João Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In CVPR, 2017. 2, 3, 4
+[8] Guilhem Chéron, Ivan Laptev, and Cordelia Schmid. P-cnn: Pose-based cnn features for action recognition. In ICCV, 2015. 1, 2
+[9] Wongun Choi and Silvio Savarese. A unified framework for multi-target tracking and collective activity recognition. In ECCV, 2012. 3, 7
+[10] Wongun Choi and Silvio Savarese. Understanding collective activities of people from videos. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36:1242-1257, 2014. 3
+[11] Wongun Choi, Khuram Shahid, and Silvio Savarese. What are they doing? : Collective activity classification using spatio-temporal relationship among people. In ICCV Workshops, 2009. 1, 2, 3, 5
+[12] Wongun Choi, Khuram Shahid, and Silvio Savarese. Learning context for collective activity recognition. In CVPR, 2011. 3
+[13] Vasileios Choutas, Philippe Weinzaepfel, Jérôme Revaud, and Cordelia Schmid. Potion: Pose motion representation for action recognition. In CVPR, 2018. 3
+[14] Jasmine Collins, Jascha Sohl-Dickstein, and David Sussillo. Capacity and trainability in recurrent neural networks. arXiv preprint arXiv:1611.09913, 2016. 1, 4
+[15] Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a fixed-length context. In ACL, 2019. 1
+[16] Zhiwei Deng, Arash Vahdat, Hexiang Hu, and Greg Mori. Structure inference machines: Recurrent neural networks for analyzing relations in group activity recognition. In CVPR, 2016. 2, 3, 7
+
+[17] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, 2019. 1
+[18] Jeff Donahue, Lisa Anne Hendricks, Marcus Rohrbach, Subhashini Venugopalan, Sergio Guadarrama, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and description. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39:677-691, 2014. 2
+[19] Wenbin Du, Yali Wang, and Yu Qiao. Rpan: An end-to-end recurrent pose-attention network for action recognition in videos. In ICCV, 2017. 2
+[20] Yong Du, Wei Wang, and Liang Wang. Hierarchical recurrent neural network for skeleton based action recognition. In CVPR, 2015. 2
+[21] Rohit Girdhar, João Carreira, Carl Doersch, and Andrew Zisserman. Video action transformer network. In CVPR, 2019. 2
+[22] Rohit Girdhar and Deva Ramanan. Attentional pooling for action recognition. In NIPS, 2017. 2
+[23] Hossein Hajimirsadeghi, Wang Yan, Arash Vahdat, and Greg Mori. Visual recognition by counting instances: A multi-instance cardinality potential kernel. In CVPR, 2015. 3, 7
+[24] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross B. Girshick. Mask r-cnn. In ICCV, 2017. 3, 4, 6
+[25] Yonghong Hou, Zhaoyang Li, Pichao Wang, and Wanqing Li. Skeleton optical spectra-based action recognition using convolutional neural networks. IEEE Transactions on Circuits and Systems for Video Technology, 28:807-811, 2018. 2
+[26] Noureldien Hussein, Efstratios Gavves, and Arnold WM Smeulders. Timeception for complex action recognition. In CVPR, 2019. 2
+[27] Mostafa S. Ibrahim and Greg Mori. Hierarchical relational networks for group activity recognition and retrieval. In ECCV, 2018. 1, 3, 7
+[28] Mostafa S. Ibrahim, Srikanth Muralidharan, Zhiwei Deng, Arash Vahdat, and Greg Mori. A hierarchical deep temporal model for group activity recognition. In CVPR, 2016. 1, 2, 3, 5, 6, 7
+[29] Hueihan Jhuang, Juergen Gall, Silvia Zuffi, Cordelia Schmid, and Michael J. Black. Towards understanding action recognition. In ICCV, 2013. 2
+[30] Shuiwang Ji, Wei Xu, Ming Yang, and Kai Yu. 3d convolutional neural networks for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35:221-231, 2010. 2
+[31] Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In CVPR, 2014. 2
+[32] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015. 6
+[33] Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. ArXiv, abs/1901.07291, 2019. 1
+
+[34] Tian Lan, Leonid Sigal, and Greg Mori. Social roles in hierarchical models for human activity recognition. In CVPR, 2012. 3
+[35] Tian Lan, Yang Wang, Weilong Yang, Stephen N. Robinovitch, and Greg Mori. Discriminative latent models for recognizing contextual group activities. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34:1549-1562, 2012. 3, 7
+[36] Xin Li and Mooi Choo Chuah. Sbgar: Semantics based group activity recognition. In ICCV, 2017. 3, 7
+[37] Zhenyang Li, Kirill Gavrilyuk, Efstratios Gavves, Mihir Jain, and Cees GM Snoek. Videolstm convolves, attends and flows for action recognition. Computer Vision and Image Understanding, 166:41-50, 2018. 2
+[38] Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir D. Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, C. Lawrence Zitnick, and Piotr Dollár. Microsoft coco: Common objects in context. In ECCV, 2014. 3
+[39] Jun Liu, Amir Shahroudy, Dong Xu, and Gang Wang. Spatio-temporal LSTM with trust gates for 3d human action recognition. In ECCV, 2016. 2
+[40] Xiang Long, Chuang Gan, Gerard de Melo, Jiajun Wu, Xiao Liu, and Shilei Wen. Attention clusters: Purely attention based local feature integration for video classification. In CVPR, 2018. 2
+[41] Tomas Mikolov, Stefan Kombrink, Lukás Burget, Jan Černocký, and Sanjeev Khudanpur. Extensions of recurrent neural network language model. In ICASSP, 2011. 1
+[42] Joe Yue-Hei Ng, Matthew J. Hausknecht, Sudheendra Vijayanarasimhan, Oriol Vinyals, Rajat Monga, and George Toderici. Beyond short snippets: Deep networks for video classification. In CVPR, 2015. 2
+[43] Xiaohan Nie, Caiming Xiong, and Song-Chun Zhu. Joint action recognition and pose estimation from video. In CVPR, 2015. 2
+[44] Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Łukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. Image transformer. In ICML, 2018. 2
+[45] Mengshi Qi, Jie Qin, Annan Li, Yunhong Wang, Jiebo Luo, and Luc Van Gool. stagnet: An attentive semantic rnn for group activity recognition. In ECCV, 2018. 1, 3, 6, 7
+[46] Amir Shahroudy, Jun Liu, Tian-Tsong Ng, and Gang Wang. Ntu rgb+d: A large scale dataset for 3d human activity analysis. In CVPR, 2016. 2
+[47] Shikhar Sharma, Ryan Kiros, and Ruslan Salakhutdinov. Action recognition using visual attention. In ICLR Workshops, 2016. 2
+[48] Tianmin Shu, Sinisa Todorovic, and Song-Chun Zhu. Cern: Confidence-energy recurrent network for group activity recognition. In CVPR, 2017. 1, 3, 7
+[49] Karen Simonyan and Andrew Zisserman. Two-stream convolutional networks for action recognition in videos. In NIPS, 2014. 2, 5
+[50] Sijie Song, Cuiling Lan, Junliang Xing, Wenjun Zeng, and Jiaying Liu. An end-to-end spatio-temporal attention model for human action recognition from skeleton data. In AAAI, 2017. 2
+
+[51] Ke Sun, Bin Xiao, Dong Liu, and Jingdong Wang. Deep high-resolution representation learning for human pose estimation. In CVPR, 2019. 3, 4
+[52] Ilya Sutskever, James Martens, and Geoffrey E. Hinton. Generating text with recurrent neural networks. In ICML, 2011. 1
+[53] Du Tran, Lubomir D. Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3d convolutional networks. In ICCV, 2015. 2, 3
+[54] Zhigang Tu, Wei Xie, Qianqing Qin, Ronald Poppe, Remco C. Veltkamp, Baoxin Li, and Junsong Yuan. Multistream cnn: Learning representations based on human-related regions for action recognition. Pattern Recognition, 79:32-43, 2018. 3
+[55] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017. 1, 2, 4, 5
+[56] Chunyu Wang, Yizhou Wang, and Alan L. Yuille. An approach to pose-based action recognition. In CVPR, 2013. 2
+[57] Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaou Tang, and Luc Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In ECCV, 2016. 2
+[58] Minsi Wang, Bingbing Ni, and Xiaokang Yang. Recurrent modeling of interaction context for collective activity recognition. In CVPR, 2017. 3, 8
+[59] Xiaolong Wang and Abhinav Gupta. Videos as space-time region graphs. In ECCV, 2018. 2
+[60] Jianchao Wu, Limin Wang, Li Wang, Jie Guo, and Gangshan Wu. Learning actor relation graphs for group activity recognition. In CVPR, 2019. 1, 2, 3, 5, 6, 7, 8
+[61] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015. 2
+[62] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. Xlnet: Generalized autoregressive pretraining for language understanding. ArXiv, abs/1906.08237, 2019. 1
+[63] Mohammadreza Zolfaghari, Gabriel L. Oliveira, Nima Sedaghat, and Thomas Brox. Chained multi-stream networks exploiting pose, motion, and appearance for action classification and detection. In ICCV, 2017. 3
\ No newline at end of file
diff --git a/actortransformersforgroupactivityrecognition/images.zip b/actortransformersforgroupactivityrecognition/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..3c385b9e316a7980f95365e2f29b70efafcc06eb
--- /dev/null
+++ b/actortransformersforgroupactivityrecognition/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ee6d3b1edfbd5144570e37da9b1142207728a47c81e9ca7c41774c8e7e63e3fd
+size 356019
diff --git a/actortransformersforgroupactivityrecognition/layout.json b/actortransformersforgroupactivityrecognition/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d4f3ce23dddd62da4c9acf40dbc0e618a2f376ef
--- /dev/null
+++ b/actortransformersforgroupactivityrecognition/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f2d827d1a15d7edc4b6c1a73634b0ec27329d02dad2ec8e222fb686a00e9bb99
+size 344748
diff --git a/adabitsneuralnetworkquantizationwithadaptivebitwidths/eabd1cb9-ec67-4d1a-a7bc-b496cb0b8948_content_list.json b/adabitsneuralnetworkquantizationwithadaptivebitwidths/eabd1cb9-ec67-4d1a-a7bc-b496cb0b8948_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..5fdfadc1f963e1b4976f583bb855d57a11a4d60e
--- /dev/null
+++ b/adabitsneuralnetworkquantizationwithadaptivebitwidths/eabd1cb9-ec67-4d1a-a7bc-b496cb0b8948_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cc4aa03cd27b8c54c243aa90c6e8aae2834a88f9af0e14fa57aa9ea29e270271
+size 75301
diff --git a/adabitsneuralnetworkquantizationwithadaptivebitwidths/eabd1cb9-ec67-4d1a-a7bc-b496cb0b8948_model.json b/adabitsneuralnetworkquantizationwithadaptivebitwidths/eabd1cb9-ec67-4d1a-a7bc-b496cb0b8948_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..f59e62235f0020c7b25ab66bf146f8279251bae4
--- /dev/null
+++ b/adabitsneuralnetworkquantizationwithadaptivebitwidths/eabd1cb9-ec67-4d1a-a7bc-b496cb0b8948_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:539c6afcccbd3e6ae581080682056f981df4f5cc0cead7b9bbe8d1b71d02a45d
+size 92942
diff --git a/adabitsneuralnetworkquantizationwithadaptivebitwidths/eabd1cb9-ec67-4d1a-a7bc-b496cb0b8948_origin.pdf b/adabitsneuralnetworkquantizationwithadaptivebitwidths/eabd1cb9-ec67-4d1a-a7bc-b496cb0b8948_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..548cf0500866bbf29509e8c61043d4c52b9c0d5b
--- /dev/null
+++ b/adabitsneuralnetworkquantizationwithadaptivebitwidths/eabd1cb9-ec67-4d1a-a7bc-b496cb0b8948_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4703eb5120dc0712660f3dc3d7c65d6ba2e4d79bf7f3f923bc5acf8e05bba9b9
+size 390247
diff --git a/adabitsneuralnetworkquantizationwithadaptivebitwidths/full.md b/adabitsneuralnetworkquantizationwithadaptivebitwidths/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..1d38906d29067723a0e289e2534e594b00746819
--- /dev/null
+++ b/adabitsneuralnetworkquantizationwithadaptivebitwidths/full.md
@@ -0,0 +1,277 @@
+# AdaBits: Neural Network Quantization with Adaptive Bit-Widths
+
+Qing Jin*
+
+ByteDance Inc.
+
+jinqingking@gmail.com
+
+Linjie Yang*
+
+ByteDance Inc.
+
+linjie.yang@bytedance.com
+
+Zhenyu Liao
+
+Kwai Inc.
+
+liaozhenyu2004@gmail.com
+
+# Abstract
+
+Deep neural networks with adaptive configurations have gained increasing attention due to the instant and flexible deployment of these models on platforms with different resource budgets. In this paper, we investigate a novel option to achieve this goal by enabling adaptive bit-widths of weights and activations in the model. We first examine the benefits and challenges of training a quantized model with adaptive bit-widths, and then experiment with several approaches including direct adaptation, progressive training and joint training. We discover that joint training is able to produce an adaptive model with performance comparable to that of individually trained models. We also propose a new technique named Switchable Clipping Level (S-CL) to further improve quantized models at the lowest bit-width. With our proposed techniques applied to a range of models including MobileNet V1/V2 and ResNet50, we demonstrate that the bit-width of weights and activations is a new option for adaptively executable deep neural networks, offering a distinct opportunity for improved accuracy-efficiency trade-off as well as instant adaptation according to the platform constraints in real-world applications.
+
+# 1. Introduction
+
+Recent development of deep learning enables application of deep neural networks across a wide range of platforms that present different resource constraints. For example, popular mobile apps such as TikTok and Snapchat on portable devices pose stringent requirements on response latency and energy consumption, while a visual recognition system embedded in a self-driving vehicle [13, 40] is more demanding on fast and accurate prediction. Besides, for medical applications [22, 53] deployed on portable testing systems, an efficient model will accelerate the diagnostic process and save time for doctors and patients. The problem is more serious if other factors are taken into account, such as aging of hardware, battery conditions, as well as different versions of software systems. To serve applications under all these scenarios with drastically different requirements, different models tailored for different resource budgets can be devised either manually [14, 15, 16, 37] or automatically through neural architecture search [39, 56, 57]. This strategy is beneficial for optimal trade-offs with a fixed combination of constraints, but is not economical, because it requires time-consuming training and benchmarking for each of these models, which prohibits instant adaptation to different scenarios. To tackle this problem, recent work focuses on training a single model that is flexible and scalable. For example, [49] proposes a method where the number of channels can be adjusted through changing the width-multiplier in each layer. Inspired by this work, [3] integrates adaptation of depth, width and kernel size altogether, and achieves better trade-offs between performance and efficiency through progressive training. [48] adopts the same strategy with scaling-up factors, but uses a simultaneous training algorithm to achieve improved predictive accuracy.
+
+Surprisingly, although the above-mentioned methods achieve the desired flexibility of adaptive deployment, the bit-width of weights and intermediate activations, as another degree of freedom, is almost overlooked in previous work. If we can adaptively choose the bit-width of a neural network during inference without further training, it will provide a distinct opportunity for more powerful model compression and acceleration. As an example, compared with the full-precision model, quantizing MobileNet V2 to 6-bit compresses the model size by roughly $4.74 \times$ and reduces the BitOPs by $14.25 \times^1$, while scaling the model's channel numbers by a width-multiplier of $0.35 \times$ only shrinks the model size by $2.06 \times$ and cuts down the FLOPs by $5.10 \times$. Moreover, as presented in [18], 6-bit MobileNet V2 demonstrates better predictive capability than the full-precision counterpart, while reducing channel numbers to $0.35 \times$ will significantly impair its performance [37]. The contrast becomes even larger if other constraints are taken into account, such as memory cost, latency and energy consumption. Additionally, adaptive bit-widths are generally applicable to most key building blocks of deep neural networks, including time-consuming convolutional and fully-connected layers. Meanwhile, adaptive deployment will also introduce negligible computation, as discussed in [49]. Figure 1 illustrates the basic concept of quantization with adaptive bit-widths.
+
+At first glance, adaptive bit-widths might seem trivial and handy, as weights and activations with different precisions may not differ from each other very much. If so, a model trained under some specific precision would be able to directly provide good performance under other bit-widths. However, as we will see in the following, such a naive method is not applicable, because important information is lost during shrinkage or enlargement of bit-widths in the neural network. Even the more deliberate method of progressively training quantized models with different bit-widths fails to achieve optimal performance, as the finetuning process sabotages important properties of the model, thus significantly diminishing the validation accuracy of the model when quantized back to the original bit-width.
+
+All the above evidence indicates that quantization with adaptive bit-widths does not come as a free lunch as it might appear, but is more subtle and involves new mechanisms that require meticulously designed techniques. In this work, we investigate this topic, and study specific methods to train quantized neural networks adaptive to different requirements. We utilize the state-of-the-art learning-based quantization method of Scale-Adjusted Training [18] as a baseline scheme for individual-precision quantization. We find that an adaptive model produced by a joint quantization approach with a key treatment of the clipping level parameters [6] is able to achieve comparable performance with individual-precision models on several bit-widths. The treatment of the clipping levels is named Switchable Clipping Level (S-CL). S-CL accommodates large activation values for high-precision quantization, and prevents undesired increasing of clipping levels for low-precision cases. Through empirical analysis, we find that unnecessarily large clipping levels may cause large quantization error and impact the performance of the quantized model, especially at the lowest precision. To the best of our knowledge, this work is the first to tackle the problem of producing quantized models with adaptive bit-widths.
+
+This paper is organized as follows. After summarizing related work in Section 2, we first revisit the recent work on scale-adjusted training (SAT) [18], which is adopted throughout our study. In Section 4, we illustrate the potential benefits and challenges of quantization with adaptive bit-widths. Then we propose a joint training approach with a new technique named switchable clipping level, based on the analysis of some baseline results. In Section 5, we show that with the proposed techniques, the adaptive models achieve comparable accuracies to individual ones on different bit-widths, for a wide range of models including MobileNet V1/V2 and ResNet50.
+
+# 2. Related Work
+
+Neural Network Quantization Neural network quantization has long been studied since the very beginning of the recent blooming era of deep learning, including binarization [1, 7, 8, 36], quantization [20, 51, 54] and ensemble method [55]. Initially, uniform precision quantization is adopted inside the whole network, where all layers share the same bit-width [17, 19, 28, 31, 32, 33, 46, 52]. Recent work employs neural architecture search methods for model quantization, which implements mixed-precision strategy where different bit-widths are assigned to different layers or even channels [10, 26, 41, 42, 44]. [18] analyzes the problem of efficient training for neural network quantization, and proposes a scale-adjusted training (SAT) technique, achieving state-of-the-art performance. However, the possibility of developing a single model applicable at different bit-widths is still not well-examined, and it remains unclear how to achieve this purpose.
+
+Neural Architecture Search Neural architecture search (NAS) has gained increasing popularity in recent studies [4, 21, 24, 27, 34, 43, 45, 56]. Specifically, the searching strategy has been adopted in other aspects of optimizing neural networks, such as automatic tuning of various training hyper-parameters including the activation function [35] and data augmentation [9]. NAS algorithms also benefit other tasks, such as generative adversarial networks [11], object detection [5] and segmentation [23]. As mentioned above, neural architecture search methods for quantization are also actively studied in recent literature. However, NAS is computationally expensive, and usually requires time-consuming re-training or finetuning. Recent work has reduced the searching time to a large extent through one-shot architecture search [2, 38]. However, the resulting models are still inflexible, prohibiting their application in adaptive scenarios. Generally, conventional NAS methods are more suitable for optimizing a single model under specific resource constraints.
+
+Adaptive neural networks Different from but related to NAS, [49] proposes to simultaneously train a single model with different width multipliers, to achieve instant adaptation to different application requirements. Following this line, [3] explores adjustment of width, depth and kernel sizes simultaneously, achieving better predictive accuracy under the same computational constraints through progressive training. [48] extends a similar strategy to large-size models, and further employs a NAS algorithm to discover better models. However, these methods neglect the option of quantization with different bit-widths in their strategies, leaving quantization with adaptive bit-widths an open problem.
+
+
+Figure 1. Deployment of neural networks with different bit-widths according to the computational budget. Left: Individually train several quantized models with different bit-widths for each scenario. Right: Train a single model quantized with adaptive bit-widths and switch to the proper bit-width in real application based on the device condition.
+
+
+
+# 3. Revisiting Scale-Adjusted Training (SAT)
+
+Quantization usually comes with performance degeneration, as the model capacity is significantly reduced in comparison with the full-precision counterpart. However, a recent study [18] demonstrates that a large portion of the accuracy degradation is caused by inefficient training, and that learning-based quantization, potentially acting as a regularization, actually provides an opportunity to improve generalization capability. The key idea is that quantized models usually enforce large variance in their weights, which brings about an over-fitting issue during training. Based on this finding, [18] proposes a simple yet effective method, called scale-adjusted training (SAT), which scales the weights down to a healthy level for network optimization. Specifically, constant scaling is applied to the quantized weights of linear layers without BN by
+
+$$
+Q_{ij}^{*} = \frac{1}{\sqrt{n_{\mathrm{out}} \mathrm{VAR}[Q_{rs}]}} Q_{ij} \tag{1}
+$$
+
+where $Q_{ij}$ is a quantized weight and $n_{\mathrm{out}}$ is the number of output neurons in this layer. By combining with a quantization approach named parameterized clipping activation (PACT) [6], SAT facilitates more efficient training, enabling quantized models to perform consistently and significantly better than conventional quantization techniques, sometimes even surpassing their full-precision counterparts. Due to the numerous algorithms for neural network quantization, it is difficult, if not impossible, to experiment with different quantization algorithms for the adaptive bit-widths problem. To this end, we adopt the PACT algorithm with the SAT technique, which gives the state-of-the-art performance for neural network quantization, throughout all of our experiments. For brevity, we refer to this approach as SAT.
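+
+As a minimal sketch of Eq. (1) (assuming PyTorch and that Q holds the quantized weights of a linear layer without BN):
+
+```python
+import torch
+
+def scale_adjusted(Q, n_out):
+    """Scale quantized weights Q by 1 / sqrt(n_out * Var[Q]) as in Eq. (1) of SAT [18]."""
+    return Q / torch.sqrt(n_out * Q.var())
+```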
+
+# 4. Quantization with Adaptive Bit-widths
+
+In this section, we first examine the benefits and challenges of quantization with adaptive bit-widths. We explore direct adaptation and progressive quantization as two straightforward methods towards this goal, with unsatisfying results. We then propose a novel joint quantization approach to deal with the challenge and achieve the same level of performance with the adaptive model as with the individual models.
+
+# 4.1. Benefit and Challenges
+
+Neural network quantization provides significant reductions in model size, latency, and energy consumption. Training a single quantized model executable at different bit-widths poses a great opportunity for flexible and adaptive deployment, since models with a larger bit-width are still consistently better than those with a smaller bit-width. Actually, for MobileNet V1/V2, changing the bit-width from 4 bit to 8 bit can enlarge the model size by $1.7 \times$ and the BitOPs by $3.2 \times$, while the predictive accuracy can change by $1.5\%$ on the ImageNet dataset with SAT [18]. From this we can see that there is a noticeable trade-off between accuracy and efficiency for quantized models. In the following, we first investigate two straightforward methods for adaptive bit-widths, which will reveal some key challenges of this problem.
+
+# 4.1.1 Modified DoReFa Scheme
+
+Figure 2. Comparison of two quantization schemes: the original scheme (Eq. (2)) and the modified scheme (Eq. (3)).
+
+Before more detailed analysis, we would like to emphasize a distinct difficulty encountered in quantized models with adaptive bit-widths. The DoReFa scheme [51] is adopted in the original SAT method for weight quantization, where weights are quantized with
+
+$$
+q _ {k} (x) = \frac {1}{a} \left\lfloor a x \right\rceil \tag {2}
+$$
+
+Here, $\lfloor \cdot \rceil$ indicates rounding to the nearest integer, and $a$ equals $2^{k} - 1$ where $k$ is the number of quantization bits. However, as illustrated in Figure 2, such a scheme is not practical for quantization with adaptive bit-widths, as there is no direct mapping between weights quantized to different bit-widths, disabling direct conversion of a quantized model from one bit-width to lower bit-widths. It necessitates storage of the full-precision weights, and the quantization procedure needs to be repeated for different bit-widths during model deployment. This significantly increases the size of the stored model, and greatly limits the applications of the model. To accommodate simple conversion of quantized models, we modify the scheme to use a quantization function given by
+
+$$
+q_{k}(x) = \frac{1}{\widehat{a}} \min\left(\left\lfloor \widehat{a} x \right\rfloor, \widehat{a} - 1\right) \tag{3}
+$$
+
+Here, $\lfloor \cdot \rfloor$ indicates the floor function, and $\widehat{a}$ equals $2^{k}$ where $k$ is the number of quantization bits. This quantization function does not differ much from that of the original DoReFa scheme, and should give similar performance for the quantized model. Moreover, as shown in Figure 2, it enables direct adaptation from a higher bit-width to a lower bit-width by directly discarding the lower bits of the weights. We formalize this property with the following theorem, which can be easily proved.
+
+Theorem 1 For any $x$ in $[0,1]$ and any two positive integers $a > b$ ,
+
+$$
+\lfloor 2^{a} x \rfloor \gg (a - b) = \lfloor 2^{b} x \rfloor \tag{4}
+$$
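+
+As a quick numerical check of Theorem 1 (a sketch assuming weights already mapped into $[0, 1]$; only the integer codes of the modified quantizer are compared):
+
+```python
+import numpy as np
+
+def int_code(x, k):
+    """Integer code of the modified quantizer (Eq. (3)): min(floor(2^k * x), 2^k - 1)."""
+    a_hat = 2 ** k
+    return np.minimum(np.floor(a_hat * x), a_hat - 1).astype(np.int64)
+
+x = np.random.rand(1000)              # values in [0, 1)
+q8, q4 = int_code(x, 8), int_code(x, 4)
+assert np.all(q8 >> 4 == q4)          # dropping the 4 lower bits recovers the 4-bit code (Theorem 1)
+```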
+
+In the following, we first utilize the original DoReFa scheme to explore quantization with adaptive bit-widths, and compare with the SAT method [18]. More experiment results will be provided using both the original scheme and the modified scheme in Section 5.
+
+# 4.1.2 Direct Adaptation
+
+We first investigate whether quantized models trained on one bit-width can be directly used on other bit-widths. This cheap approach could be viable since weights with different bit-widths might be close to each other in value. To check if this method is practical, we evaluate the validation accuracy of ResNet50 on ImageNet by adjusting the bit-width to several different settings, where the original weights are trained under either the lowest or the highest bit-width (2 bit and 4 bit in this case, respectively). As indicated in previous research [18], quantization with different bit-widths entails differences in the variances of weights and activations, as illustrated in Figure 3. Thus, networks trained on one bit-width suffer from a mismatch of the layer statistics when evaluated on another bit-width. To alleviate this problem, we apply the batch norm (BN) calibration introduced in [47] to calibrate the statistics in the batch normalization layers for a reasonable comparison.
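+
+BN calibration here means re-estimating the running statistics of every batch normalization layer under the new bit-width while keeping all weights fixed; a minimal sketch (assuming PyTorch and a calibration data loader; [47] describes the original procedure) could be:
+
+```python
+import torch
+
+def bn_calibrate(model, loader, num_batches=100):
+    """Re-estimate BatchNorm running statistics for the currently selected bit-width."""
+    for m in model.modules():
+        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
+            m.reset_running_stats()   # forget statistics from the training bit-width
+            m.momentum = None         # use a cumulative moving average over calibration batches
+    model.train()                     # BN updates running stats only in train mode
+    with torch.no_grad():             # weights stay fixed; only BN statistics change
+        for i, (images, _) in enumerate(loader):
+            if i >= num_batches:
+                break
+            model(images)
+    model.eval()
+```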
+
+The results with and without BN calibration are listed in Table 1, together with the performance of models trained and evaluated under the same bit-width using SAT. Without BN calibration, models trained on one bit-width degenerate significantly on another. With BN calibration, the model trained on 2 bit successfully preserves its performance on larger bit-widths, but is still inferior to the results achieved by directly training on those bit-widths; also, the model trained on 4 bit degenerates severely on smaller bit-widths. In summary, models trained and evaluated on different bit-widths are not suitable for adaptive deployment of quantized models due to the difference in training and evaluation settings. In particular, models trained with a larger bit-width suffer from more serious performance degeneration when quantized to lower precision, while training with a smaller bit-width limits the potential of models deployed at higher precision.
+
+# 4.1.3 Progressive Quantization
+
+The above analysis demonstrates that quantization with adaptive bit-widths is not directly available from models
+
+
+Figure 3. Impact of quantization on variances of weights and activation under different bit-widths. The variance of both quantized weights and activations gets larger when the bit-width gets smaller.
+
+
+
+| Model | 4 bit | 3 bit | 2 bit |
+| --- | --- | --- | --- |
+| 4 bit Trained (w/o BN calib) | 76.3 | 67.1 | 0.3 |
+| 2 bit Trained (w/o BN calib) | 41.1 | 48.7 | 73.3 |
+| 4 bit Trained (w/ BN calib) | 76.3 | 73.2 | 20.3 |
+| 2 bit Trained (w/ BN calib) | 73.4 | 73.2 | 73.3 |
+| SAT [18] | 76.3 | 75.9 | 73.3 |
+
+trained at a single bit-width. In this section, we investigate the possibility of progressive training, where a quantized model is trained on multiple bit-widths sequentially. This can be done in two ways: the bit-width for training can be gradually increased or decreased. We experiment with both for ResNet50 on ImageNet. In either case, we use the model trained individually at the highest (lowest) bit-width as the initial point, finetune it at the second highest (lowest) bit-width, and then finetune it at the next bit-width, continuing until all bit-widths under consideration are processed. For each phase of finetuning, we adopt the same hyper-parameters as for training individual quantization. BN calibration is again applied to the final model at the different bit-widths for a fair comparison. The results are shown in Table 2.
+
+In Table 2, the model first trained at 2 bit and finetuned with ascending bit-widths achieves a good result at the final 4 bit, but collapses at the lower bit-widths. The model first trained at 4 bit and finetuned with descending bit-widths performs only slightly better than directly applying the 2-bit model to multiple bit-widths in Table 1, and does not preserve the performance of the higher 3- and 4-bit settings. These results indicate that progressive training introduces undesired perturbations to the previously trained model and impairs its original performance. Therefore, progressive training is also not suitable for models with adaptive bit-widths.
+
+Table 1. Direct adaptation of models trained on 2 and 4 bits on different bit-widths, with and without batch norm calibration. Results are top-1 validation accuracy (\%) of ResNet50 on ImageNet.
+
+| Model | 4 bit | 3 bit | 2 bit |
+| --- | --- | --- | --- |
+| Ascending Bit-width | 76.3 | 73.4 | 29.5 |
+| Descending Bit-width | 73.9 | 73.6 | 73.5 |
+| SAT [18] | 76.3 | 75.9 | 73.3 |
+
+Table 2. Results of progressive quantization with ascending/descending bit-widths of ResNet50 on ImageNet. Results are top-1 validation accuracy (\%).
+
+| Model | 8 bit | 6 bit | 5 bit | 4 bit |
+| --- | --- | --- | --- | --- |
+| Vanilla AdaBits | 72.4 | 72.5 | 72.1 | 70.8 |
+| SAT [18] | 72.6 | 72.3 | 71.9 | 71.3 |
+
+Table 3. Results of Vanilla AdaBits with MobileNet V1 on ImageNet with four bit-widths. Results are top-1 validation accuracy $(\%)$ .
+
+# 4.2. Joint Quantization
+
+The above results show that sequential training does not preserve the model characteristics at previously trained bit-widths, which indicates that the model weights for different bit-widths should be jointly optimized. Specifically, we adopt a joint training approach similar to slimmable neural networks [49]: instead of training models with different channel numbers, we simultaneously train models under different bit-widths with shared weights. Also, as mentioned above, quantization with different bit-widths leads to different variances of quantized weights and activations, so we adopt the switchable batch normalization technique introduced in [49]. We call this method Vanilla AdaBits, and its performance is listed in Table 3. Models trained with all of the bit-widths achieve performance comparable to those trained individually, which validates the effectiveness of this approach. However, there is still a performance gap at the lowest bit-width of 4 bit, which is $0.5\%$ lower than the individually trained model. This is undesirable, and further improvement is needed.
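+
+Operationally, joint training iterates over the bit-width list within each optimization step, switching the quantizers and the corresponding batch normalization branch before each forward pass and accumulating gradients into the shared weights. A simplified sketch; the `set_bit_width()` hook is hypothetical and stands in for whatever mechanism selects the quantizer and switchable-BN branch:
+
+```python
+import torch
+
+bit_widths = [8, 6, 5, 4]            # bit-widths optimized jointly
+
+def train_step(model, images, labels, optimizer, criterion):
+    optimizer.zero_grad()
+    for k in bit_widths:
+        # Hypothetical hook: selects the k-bit quantizers and the matching
+        # switchable-BN branch in every layer of the model.
+        model.set_bit_width(k)
+        loss = criterion(model(images), labels)
+        loss.backward()              # gradients from every bit-width accumulate
+    optimizer.step()                 # one update of the shared weights
+```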
+
+In the PACT algorithm adopted by SAT, the activations of each layer will be first clipped by a learned parameter $\alpha$
+
+
+Figure 4. Clipping levels in different layers for models trained individually with different bit-widths (solid lines) or trained with Vanilla AdaBits (dashed line). Note that the clipping level of a layer refers to the clipping level for the output of this layer. The outputs of the last layer are not clipped.
+
+named clipping level, and then quantized to discrete numbers. Specifically, an activation value $x$ is first clipped to the interval $[0, \alpha]$ , then scaled, quantized and rescaled to produce the quantized value $q$ as
+
+$$
+\widetilde {x} = \frac {1}{2} \left[ | x | - | x - \alpha | + \alpha \right] \tag {5a}
+$$
+
+$$
+q = \alpha q _ {k} \left(\frac {\widetilde {x}}{\alpha}\right) \tag {5b}
+$$
+
+Note that in the original PACT paper [6], the authors found that different bit-widths result in different clipping levels. In Vanilla AdaBits, however, the clipping levels are shared across bit-widths, which may disturb the optimization of the network.
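+
+For reference, Eq. (5) can be written as a small differentiable PyTorch module. The straight-through estimator for the rounding step and the initial value of $\alpha$ are our assumptions; the text only specifies the clipping and quantization themselves:
+
+```python
+import torch
+
+class PACTActivation(torch.nn.Module):
+    """Eq. (5a)/(5b): clip to [0, alpha], quantize to k bits, rescale."""
+    def __init__(self, k, alpha_init=6.0):
+        super().__init__()
+        self.k = k
+        self.alpha = torch.nn.Parameter(torch.tensor(alpha_init))
+
+    def forward(self, x):
+        # Eq. (5a): clip x into [0, alpha]
+        x_clip = 0.5 * (x.abs() - (x - self.alpha).abs() + self.alpha)
+        # Eq. (5b): scale to [0, 1], apply q_k, rescale by alpha
+        a = 2 ** self.k - 1                         # original DoReFa levels
+        x_scaled = x_clip / self.alpha
+        x_q = torch.round(a * x_scaled) / a
+        x_q = x_scaled + (x_q - x_scaled).detach()  # straight-through estimator
+        return self.alpha * x_q
+```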
+
+To understand the underlying mechanism of the degeneration at the lowest bit-width, we plot the clipping levels of different layers in models trained individually at different bit-widths. As shown in Figure 4, the clipping levels strongly correlate with the bit-width: for the individually trained models, higher bit-widths result in larger clipping levels. In the Vanilla AdaBits model, the learned clipping levels tend to be smaller than those of the high-precision cases, but larger than those of the model with the lowest bit-width. To understand the relationship between quantization error and clipping levels, we study a synthetic linear layer with 1000 input neurons, where the weights are sampled from $\mathcal{N}(0,1/1000)$ and the activations are sampled from a uniform distribution on the interval [0, 1]. For each bit-width, the products of the weights and the activations are fed to the ReLU function and then quantized with different clipping levels, and the relative error between the full-precision outputs and the quantized outputs is calculated. We plot this quantization error against the clipping level in Figure 5. Different bit-widths behave differently: the quantization error increases only slowly with the clipping level at higher bit-widths, but increases sharply with the clipping level at lower bit-widths. Combined with the results in Figure 4, the clipping levels learned by Vanilla AdaBits may substantially increase the quantization error at the lowest 4 bit, while hardly affecting the other bit-widths. Note that this is only a qualitative analysis, and the quantization errors shown in Figure 5 are not proportional to the quantization errors in the trained networks.
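+
+The synthetic experiment can be reproduced in a few lines. The sketch below follows the description above (1000 input neurons, Gaussian weights, uniform activations); the exact sample count, the quantizer, and the sweep over $\alpha$ are our assumptions:
+
+```python
+import numpy as np
+
+def relative_quant_error(k, alpha, n_in=1000, n_samples=2000, seed=0):
+    rng = np.random.default_rng(seed)
+    w = rng.normal(0.0, np.sqrt(1.0 / n_in), size=n_in)       # N(0, 1/1000) weights
+    x = rng.uniform(0.0, 1.0, size=(n_samples, n_in))          # uniform activations
+    y = np.maximum(x @ w, 0.0)                                  # full-precision ReLU output
+    a = 2 ** k - 1
+    y_q = alpha * np.round(a * np.clip(y, 0.0, alpha) / alpha) / a   # clip + quantize
+    return np.linalg.norm(y - y_q) / np.linalg.norm(y)
+
+for k in (4, 6, 8):
+    errs = [(alpha, relative_quant_error(k, alpha)) for alpha in np.linspace(0.1, 2.0, 20)]
+    alpha_best, err_best = min(errs, key=lambda t: t[1])
+    print(f"{k} bit: best alpha ~ {alpha_best:.2f}, relative error {err_best:.4f}")
+```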
+
+
+Figure 5. Relationship between relative quantization error and the clipping level $\alpha$ for different bit-widths with a synthetic layer. The dots denote the optimal values of $\alpha$ for least quantization error at different bit-widths.
+
+# 4.2.1 Switchable Clipping Level
+
+The above observation indicates that providing a proper clipping level for each bit-width could be a key factor for optimal performance of AdaBits models: a single set of shared clipping levels can hardly satisfy the requirements of all bit-widths. To this end, we propose a simple treatment of the clipping levels, named Switchable Clipping Level (S-CL), which employs independent clipping levels for different bit-widths in each layer. During training of quantized models with adaptive bit-widths, S-CL switches to the corresponding set of clipping levels for each bit-width in all layers. This prevents the clipping-level parameters from being interfered with by other bit-widths, and in particular avoids the extra quantization error introduced when a clipping level is too large or too small for the current bit-width. In this way, the performance degeneration at the lowest bit-width observed with Vanilla AdaBits can be alleviated.
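+
+S-CL only needs one scalar clipping level per supported bit-width per layer, and switching bit-widths selects the matching parameter. A minimal sketch of such a module (class and method names are ours):
+
+```python
+import torch
+
+class SwitchableClippingLevel(torch.nn.Module):
+    """One learnable clipping level alpha per supported bit-width (S-CL)."""
+    def __init__(self, bit_widths=(8, 6, 5, 4), alpha_init=6.0):
+        super().__init__()
+        self.alphas = torch.nn.ParameterDict({
+            str(k): torch.nn.Parameter(torch.tensor(alpha_init)) for k in bit_widths})
+        self.current = str(bit_widths[0])
+
+    def set_bit_width(self, k):
+        self.current = str(k)          # select the clipping level of bit-width k
+
+    def forward(self):
+        return self.alphas[self.current]
+```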
+
+The model size is almost unchanged with S-CL: the extra clipping levels account for less than $0.1 \text{‰}$ of the trainable parameters. For instance, the ratio of the byte size of the clipping levels to that of the other trainable parameters is $0.0246 \text{‰}$ for MobileNet V1, $0.0588 \text{‰}$ for MobileNet V2, and $0.0084 \text{‰}$ for ResNet50. Meanwhile, S-CL introduces almost no runtime overhead: after the model is re-configured to the desired bit-width, it runs as a normal network without additional latency or memory cost. These advantages make it a practical and economical solution to the adaptive bit-width problem.
+
+# 5. Experiments
+
+We evaluate our AdaBits algorithm on the ImageNet classification task and compare the resulting models with those quantized individually at different bit-widths. After that, we analyze the clipping levels in different layers of an AdaBits model. Finally, we discuss the results and present some future work.
+
+# 5.1. ImageNet Classification
+
+To examine our proposed methods, we quantize several representative models with adaptive bit-widths, including MobileNet V1/V2 and ResNet50, using the AdaBits algorithm, and evaluate them on the ImageNet dataset.
+
+We follow the same quantization strategy as SAT [18], which first trains a full-precision model and then uses it as the initialization for training the quantized model. The same training hyperparameters and settings are shared between pretraining and finetuning, including the initial learning rate, learning rate scheduler, weight decay, number of epochs, optimizer, and batch size. The input images are kept as unsigned 8-bit integers (uint8), and no standardization (neither mean subtraction nor normalization) is applied. For the first and last layers, weights are quantized with a bit-width of 8 [6], while the input to the last layer is quantized with the same precision as the other layers. The bias in the last fully-connected layer(s) and the batch normalization layers are not quantized.
+
+To make a fair comparison, we adopt the same hyperparameters as SAT [18]. The learning rate is initialized to 0.05 and updated every iteration for a total of 150 epochs with a cosine learning rate scheduler [25] without restarts. Parameters are updated by an SGD optimizer with Nesterov momentum of 0.9 and no dampening. Weight decay is set to $4 \times 10^{-5}$ . For MobileNet V1/V2, the batch size is 2048, while for ResNet50 it is 1024. The warmup strategy suggested in [12] is adopted: the learning rate is increased linearly every iteration up to batch size $/256 \times 0.05$ over the first five epochs before the cosine annealing scheduler takes over. The input image is randomly cropped to $224 \times 224$ and randomly flipped horizontally, and is kept as an 8-bit unsigned integer with no standardization applied. Besides, we use full-precision models with clamped weights as the initial points for finetuning the quantized models.
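+
+The optimizer and schedule described above can be assembled roughly as follows; composing warmup and cosine schedulers this way, and the warmup start factor, are our construction rather than released training code:
+
+```python
+import torch
+
+def build_optimizer_and_scheduler(model, batch_size, iters_per_epoch, epochs=150):
+    # Peak learning rate reached after warmup: batch_size / 256 * 0.05.
+    peak_lr = batch_size / 256 * 0.05
+    optimizer = torch.optim.SGD(model.parameters(), lr=peak_lr, momentum=0.9,
+                                nesterov=True, weight_decay=4e-5)
+    warmup_iters = 5 * iters_per_epoch
+    warmup = torch.optim.lr_scheduler.LinearLR(
+        optimizer, start_factor=1e-3, end_factor=1.0, total_iters=warmup_iters)
+    cosine = torch.optim.lr_scheduler.CosineAnnealingLR(
+        optimizer, T_max=epochs * iters_per_epoch - warmup_iters)
+    scheduler = torch.optim.lr_scheduler.SequentialLR(
+        optimizer, schedulers=[warmup, cosine], milestones=[warmup_iters])
+    return optimizer, scheduler  # call scheduler.step() every iteration
+```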
+
+The results for these models are summarized in Table 4, where we list the top-1 accuracy on the ImageNet classification task together with model size and BitOPs. We show results with the original DoReFa scheme for MobileNet V1/V2 and ResNet50, and results with the modified scheme described in Section 4.1.1 for MobileNet V1/V2. We do not include results with the modified scheme for ResNet50, since we found that the 2-bit model does not converge in this setting. We use the prefix AB- to indicate models quantized with AdaBits. Results of the SAT approach [18] are also reported as a reference,
+
+which to our knowledge represents the state-of-the-art performance in model quantization. Our method achieves almost the same performance as individual quantization for all models across all bit-widths using the original scheme. Compared to progressive quantization with ascending bit-widths in Table 2, the AdaBits approach on ResNet50 significantly boosts the performance at the lowest 2 bit. Compared to progressive quantization with descending bit-widths, AdaBits boosts accuracy by $2.2\%$ on 4-bit ResNet50 and by $2.2\%$ on 3-bit ResNet50, respectively. Compared to Vanilla AdaBits, our final approach with S-CL increases performance at the lowest 4 bit by $0.3\%$ on MobileNet V1 with the original scheme. With the modified scheme, AdaBits also achieves performance similar to the individual models. The benefit of the modified scheme is that it allows direct adaptation from higher to lower bit-widths, so only the quantized weights for the highest bit-width need to be stored, greatly reducing the model size. AdaBits models with the original scheme still need to store the full-precision weights in order to produce the quantized weights for each bit-width. Our results demonstrate that adaptive bit-width is an additional option for adaptive models, able to further improve the trade-off between efficiency and accuracy of deep neural networks.
+
+# 5.2. Illustration of clipping levels
+
+To understand the impact of S-CL, we visualize the clipping levels of different layers in AB-MobileNet V1 with the original scheme in Figure 6. Different bit-widths indeed lead to different clipping levels, which generally follow the same ordering as in the individually trained models: larger bit-widths have relatively larger clipping levels. By privatizing the clipping levels to each bit-width, a different optimal clipping level can be selected for each bit-width and the optimization of the model is improved.
+
+
+Figure 6. Clipping levels in different layers from AB-MobileNet V1.
+
+| Scheme | Model | Bit-width | Size (SAT) | Top-1 Acc. (SAT) | AdaBits Model | Size (AdaBits) | Top-1 Acc. (AdaBits) | BitOPs |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Original | MobileNet V1 | 8 bit | 4.10 MB | 72.6 | AB-MobileNet V1 [8, 6, 5, 4] bits | FP | 72.4 (-0.2) | 36.40 B |
+| | MobileNet V1 | 6 bit | 3.34 MB | 72.3 | | | 72.4 (0.1) | 20.81 B |
+| | MobileNet V1 | 5 bit | 2.96 MB | 71.9 | | | 72.1 (0.2) | 14.68 B |
+| | MobileNet V1 | 4 bit | 2.58 MB | 71.3 | | | 71.1 (-0.2) | 9.67 B |
+| | MobileNet V2 | 8 bit | 3.44 MB | 72.5 | AB-MobileNet V2 [8, 6, 5, 4] bits | FP | 72.6 (0.1) | 19.25 B |
+| | MobileNet V2 | 6 bit | 2.92 MB | 72.3 | | | 72.4 (0.1) | 11.17 B |
+| | MobileNet V2 | 5 bit | 2.66 MB | 72.0 | | | 72.1 (0.1) | 7.99 B |
+| | MobileNet V2 | 4 bit | 2.40 MB | 71.1 | | | 70.8 (-0.3) | 5.39 B |
+| | ResNet50 | 4 bit | 13.34 MB | 76.3 | AB-ResNet50 [4, 3, 2] bits | FP | 76.1 (-0.2) | 71.81 B |
+| | ResNet50 | 3 bit | 10.55 MB | 75.9 | | | 75.8 (-0.1) | 43.75 B |
+| | ResNet50 | 2 bit | 7.75 MB | 73.3 | | | 73.2 (-0.1) | 23.71 B |
+| Modified | MobileNet V1 | 8 bit | 4.10 MB | 72.6 | AB-MobileNet V1 [8, 6, 5, 4] bits | 4.35 MB | 72.3 (-0.3) | 36.40 B |
+| | MobileNet V1 | 6 bit | 3.34 MB | 72.4 | | | 72.3 (-0.1) | 20.81 B |
+| | MobileNet V1 | 5 bit | 2.96 MB | 72.2 | | | 72.0 (-0.2) | 14.68 B |
+| | MobileNet V1 | 4 bit | 2.58 MB | 70.5 | | | 70.4 (-0.1) | 9.67 B |
+| | MobileNet V2 | 8 bit | 3.44 MB | 72.7 | AB-MobileNet V2 [8, 6, 5, 4] bits | 3.83 MB | 72.3 (-0.4) | 19.25 B |
+| | MobileNet V2 | 6 bit | 2.92 MB | 72.5 | | | 72.3 (-0.2) | 11.17 B |
+| | MobileNet V2 | 5 bit | 2.66 MB | 72.1 | | | 72.0 (-0.1) | 7.99 B |
+| | MobileNet V2 | 4 bit | 2.40 MB | 70.3 | | | 70.3 (0.0) | 5.39 B |
+
+Table 4. Comparison between individual quantization and AdaBits quantization in terms of top-1 validation accuracy $(\%)$ of MobileNet V1/V2 and ResNet50 on ImageNet. We compare AdaBits with the SAT baseline under two quantization schemes: "original" denotes the original DoReFa scheme and "modified" denotes the modified scheme in Eq. (3), which enables producing weights for lower bit-widths from the 8-bit model. "FP" denotes that the full-precision model is needed to recover the weights at the different bit-widths.
+
+# 6. Discussion and Future Work
+
+Our approach for adaptive bit-width indicates that the bit-width of quantized models is an additional degree of freedom besides channel number, depth, kernel size, and resolution for adaptive models. Previous work [18, 47] demonstrates the possibility of utilizing trained adaptive models for neural architecture search algorithms, based on which improved architectures can be discovered under predefined resource constraints. This suggests that quantized models with adaptive bit-widths could be used to search for the bit-width of each layer or channel, i.e., for mixed-precision quantization [10, 44, 42, 41, 26]. On the other hand, adding bit-width to the list of channel numbers, depth, kernel size, and resolution enlarges the design space for adaptive models, which could enable more powerful adaptive models and facilitate more real-world applications, such as face alignment [30, 50] and compressive imaging systems [29].
+
+Evaluating AdaBits with other quantization methods is another direction for future work. Among the numerous algorithms for neural network quantization, we only select the state-of-the-art SAT algorithm to validate the effectiveness of adaptive bit-widths. Since our joint training approach is general and can be combined with any quantization algorithm based on quantization-aware training, we believe similar results can be achieved by combining other quantization approaches
+
+with our AdaBits algorithm.
+
+# 7. Conclusion
+
+In this paper, we investigate the possibility of adaptively configuring bit-widths for deep neural networks. After examining several baseline methods, we propose a joint training approach to optimize all selected bit-widths in the quantized model. A further treatment, named Switchable Clipping Level, is proposed to privatize the clipping-level parameters to each bit-width and eliminate undesired interference between different bit-widths. The final AdaBits approach achieves accuracies similar to models quantized individually at each bit-width, for a wide range of models including MobileNet V1/V2 and ResNet50 on the ImageNet dataset. This new kind of adaptive model widens the choices for designing dynamic models that can instantly adapt to different hardware and resource constraints.
+
+# 8. Acknowledgement
+
+The authors would like to thank Professor Hao Chen from the University of California, Davis and Professor Yi Ma from the University of California, Berkeley for invaluable discussions. They would also like to thank Hongyi Zhang, Yangyue Wan, Xiaochen Lian and Xiaojie Jin from ByteDance Inc., Yingwei Li and Jieru Mei from Johns Hopkins University, and Chaosheng Dong from the University of Pittsburgh for technical discussions.
+
+# References
+
+[1] Yu Bai, Yu-Xiang Wang, and Edo Liberty. Proxquant: Quantized neural networks via proximal operators. arXiv preprint arXiv:1810.00861, 2018. 2
+[2] Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, and Quoc Le. Understanding and simplifying one-shot architecture search. In International Conference on Machine Learning, pages 549-558, 2018. 2
+[3] Han Cai, Chuang Gan, and Song Han. Once for all: Train one network and specialize it for efficient deployment. arXiv preprint arXiv:1908.09791, 2019. 1, 2
+[4] Han Cai, Ligeng Zhu, and Song Han. Proxylessnas: Direct neural architecture search on target task and hardware. arXiv preprint arXiv:1812.00332, 2018. 2
+[5] Yukang Chen, Tong Yang, Xiangyu Zhang, Gaofeng Meng, Chunhong Pan, and Jian Sun. Detnas: Backbone search for object detection. arXiv preprint arXiv:1903.10979, 2019. 2
+[6] Jungwook Choi, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srinivasan, and Kailash Gopalakrishnan. Pact: Parameterized clipping activation for quantized neural networks. arXiv preprint arXiv:1805.06085, 2018. 2, 3, 6, 7
+[7] Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in neural information processing systems, pages 3123-3131, 2015. 2
+[8] Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks: Training deep neural networks with weights and activations constrained to + 1 or-1. arXiv preprint arXiv:1602.02830, 2016. 2
+[9] Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. Autoaugment: Learning augmentation policies from data. arXiv preprint arXiv:1805.09501, 2018. 2
+[10] Ahmed T. Elthakeb, Prannoy Pilligundla, FatemehSadat Mireshghallah, Amir Yazdanbakhsh, Sicun Gao, and Hadi Esmaeilzadeh. Releg: an automatic reinforcement learning approach for deep quantization of neural networks. arXiv preprint arXiv:1811.01704, 2018. 2, 8
+[11] Xinyu Gong, Shiyu Chang, Yifan Jiang, and Zhangyang Wang. Autogan: Neural architecture search for generative adversarial networks. arXiv preprint arXiv:1908.03835, 2019. 2
+[12] Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677, 2017. 7
+[13] Sorin Grigorescu, Bogdan Trasnea, Tiberiu Cocias, and Gigel Macesanu. A survey of deep learning techniques for autonomous driving. Journal of Field Robotics, 2019. 1
+[14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 1
+
+[15] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, pages 630-645. Springer, 2016. 1
+[16] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017. 1
+[17] Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. Quantization and training of neural networks for efficient integer-arithmetic-only inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2704-2713, 2018. 2
+[18] Qing Jin, Linjie Yang, and Zhenyu Liao. Towards efficient training for neural network quantization. arXiv preprint arXiv:1912.10207, 2019. 1, 2, 3, 4, 5, 7, 8
+[19] Cong Leng, Zesheng Dou, Hao Li, Shenghuo Zhu, and Rong Jin. Extremely low bit neural network: Squeeze the last bit out with admm. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. 2
+[20] Fengfu Li, Bo Zhang, and Bin Liu. Ternary weight networks. arXiv preprint arXiv:1605.04711, 2016. 2
+[21] Yingwei Li, Xiaojie Jin, Jieru Mei, Xiaochen Lian, Linjie Yang, Cihang Xie, Qihang Yu, Yuyin Zhou, Song Bai, and Alan Yuille. Autonl: Neural architecture search for lightweight non-local networks in mobile vision. In submission, 2020. 2
+[22] Yingwei Li, Zhuotun Zhu, Yuyin Zhou, Yingda Xia, Wei Shen, Elliot K. Fishman, and Alan L. Yuille. Volumetric Medical Image Segmentation: A 3D Deep Coarse-to-Fine Framework and Its Adversarial Examples, pages 69-91. Springer International Publishing, Cham, 2019. 1
+[23] Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan L Yuille, and Li Fei-Fei. Autodeplab: Hierarchical neural architecture search for semantic image segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 82–92, 2019. 2
+[24] Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive neural architecture search. In Proceedings of the European Conference on Computer Vision (ECCV), pages 19–34, 2018. 2
+[25] Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.7
+[26] Qian Lou, Lantao Liu, Minje Kim, and Lei Jiang. Autoqb: Automl for network quantization and binarization on mobile devices. arXiv preprint arXiv:1902.05690, 2019. 2, 8
+[27] Jieru Mei, Yingwei Li, Xiaochen Lian, Xiaojie Jin, Linjie Yang, Alan Yuille, and Jianchao Yang. Atom{nas}: Fine-grained end-to-end neural architecture search. In International Conference on Learning Representations, 2020. 2
+[28] Naveen Mellempudi, Abhisek Kundu, Dheevatsa Mudigere, Dipankar Das, Bharat Kaul, and Pradeep Dubey. Ternary neu
+
+ral networks with fine-grained quantization. arXiv preprint arXiv:1705.01462, 2017. 2
+[29] Xin Miao, Xin Yuan, Yunchen Pu, and Vassilis Athitsos. λ-net: Reconstruct hyperspectral images from a snapshot measurement. In IEEE/CVF Conference on Computer Vision (ICCV), volume 1, 2019. 8
+[30] Xin Miao, Xiantong Zhen, Xianglong Liu, Cheng Deng, Vassilis Athitsos, and Heng Huang. Direct shape regression networks for end-to-end face alignment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5040-5049, 2018. 8
+[31] Asit Mishra and Debbie Marr. Apprentice: Using knowledge distillation techniques to improve low-precision network accuracy. arXiv preprint arXiv:1711.05852, 2017. 2
+[32] Asit Mishra, Eriko Nurvitadhi, Jeffrey J Cook, and Debbie Marr. Wprn: wide reduced-precision networks. arXiv preprint arXiv:1709.01134, 2017. 2
+[33] Eunhyeok Park, Junwhan Ahn, and Sungjoo Yoo. Weighted-entropy-based quantization for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5456–5464, 2017. 2
+[34] Hieu Pham, Melody Y Guan, Barret Zoph, Quoc V Le, and Jeff Dean. Efficient neural architecture search via parameter sharing. arXiv preprint arXiv:1802.03268, 2018. 2
+[35] Prajit Ramachandran, Barret Zoph, and Quoc V. Le. Searching for activation functions, 2018. 2
+[36] Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. In European Conference on Computer Vision, pages 525-542. Springer, 2016. 2
+[37] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zh-moginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4510-4520, 2018. 1
+[38] Dimitrios Stamoulis, Ruizhou Ding, Di Wang, Dimitrios Lymberopoulos, Bodhi Priyantha, Jie Liu, and Diana Marculescu. Single-path nas: Designing hardware-efficient convnets in less than 4 hours. arXiv preprint arXiv:1904.02877, 2019. 2
+[39] Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V Le. Mnasnet: Platform-aware neural architecture search for mobile. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2820-2828, 2019. 1
+[40] Alex Teichman and Sebastian Thrun. Practical object recognition in autonomous driving and beyond. In Advanced Robotics and its Social Impacts, pages 35-38. IEEE, 2011. 1
+[41] Stefan Uhlich, Lukas Mauch, Kazuki Yoshiyama, Fabien Cardinaux, Javier Alonso Garcia, Stephen Tiedemann, Thomas Kemp, and Akira Nakamura. Differentiable quantization of deep neural networks. arXiv preprint arXiv:1905.11452, 2019. 2, 8
+[42] Kuan Wang, Zhijian Liu, Yujun Lin, Ji Lin, and Song Han. Haq: Hardware-aware automated quantization with mixed precision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8612-8620, 2019. 2, 8
+
+[43] Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, and Kurt Keutzer. Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10734-10742, 2019. 2
+[44] Bichen Wu, Yanghan Wang, Peizhao Zhang, Yuandong Tian, Peter Vajda, and Kurt Keutzer. Mixed precision quantization of convnets via differentiable neural architecture search. arXiv preprint arXiv:1812.00090, 2018. 2, 8
+[45] Sirui Xie, Hehui Zheng, Chunxiao Liu, and Liang Lin. Snas: stochastic neural architecture search. arXiv preprint arXiv:1812.09926, 2018. 2
+[46] Chen Xu, Jianqiang Yao, Zhouchen Lin, Wenwu Ou, Yuanbin Cao, Zhirong Wang, and Hongbin Zha. Alternating multi-bit quantization for recurrent neural networks. arXiv preprint arXiv:1802.00150, 2018. 2
+[47] Jiahui Yu and Thomas S. Huang. Network slimming by slimmable networks: Towards one-shot architecture search for channel numbers. CoRR, abs/1903.11728, 2019. 4, 8
+[48] Jiahui Yu, Pengchong Jin, Hanxiao Liu, Gabriel Bender Pieter-Jan Kindermans, Mingxing Tan, Thomas S. Huang, Xiaodan Song, Ruoming Pang, and Quoc V Le. Bignas: Scaling up neural architecture search with big single-stage models. arXiv preprint arXiv:2003.11142, 2020. 1, 2
+[49] Jiahui Yu, Linjie Yang, Ning Xu, Jianchao Yang, and Thomas Huang. Slimmable neural networks. arXiv preprint arXiv:1812.08928, 2018. 1, 2, 5
+[50] Lei Yue, Xin Miao, Pengbo Wang, Baochang Zhang, Xiantong Zhen, and Xianbin Cao. Attentional alignment networks. In BMVC, volume 2, page 7, 2018. 8
+[51] Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016. 2, 3
+[52] Shu-Chang Zhou, Yu-Zhi Wang, He Wen, Qin-Yao He, and Yu-Heng Zou. Balanced quantization: An effective and efficient approach to quantized neural networks. Journal of Computer Science and Technology, 32(4):667-682, 2017. 2
+[53] Yuyin Zhou, Yingwei Li, Zhishuai Zhang, Yan Wang, Angtian Wang, Elliot K Fishman, Alan L Yuille, and Seyoun Park. Hyper-pairing network for multi-phase pancreatic ductal adenocarcinoma segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 155–163. Springer, 2019. 1
+[54] Chenzhuo Zhu, Song Han, Huizi Mao, and William J Dally. Trained ternary quantization. arXiv preprint arXiv:1612.01064, 2016. 2
+[55] Shilin Zhu, Xin Dong, and Hao Su. Binary ensemble neural network: More bits per network or more networks per bit? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4923-4932, 2019. 2
+[56] Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016. 1, 2
+[57] Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable architectures for scalable image
+
+recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8697-8710, 2018.
\ No newline at end of file
diff --git a/adabitsneuralnetworkquantizationwithadaptivebitwidths/images.zip b/adabitsneuralnetworkquantizationwithadaptivebitwidths/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..9b8c9e364cd8cbb35b8eda9c6051198ca7bdf324
--- /dev/null
+++ b/adabitsneuralnetworkquantizationwithadaptivebitwidths/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c694ce715443f72dc2b4ddb47b5b9f52a0cbcfb024a74c7409959b23be8f0297
+size 380012
diff --git a/adabitsneuralnetworkquantizationwithadaptivebitwidths/layout.json b/adabitsneuralnetworkquantizationwithadaptivebitwidths/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..880f2bfe69214407ac8253792d4ad86c584eed34
--- /dev/null
+++ b/adabitsneuralnetworkquantizationwithadaptivebitwidths/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d6b7b068399d2d10f96c813829a42d69716c56b1b2b4ba6e7c8c0088286fc9ab
+size 319273
diff --git a/adacofadaptivecollaborationofflowsforvideoframeinterpolation/e2e1a401-7356-40b3-8ce1-f8b2e211ed85_content_list.json b/adacofadaptivecollaborationofflowsforvideoframeinterpolation/e2e1a401-7356-40b3-8ce1-f8b2e211ed85_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e7222422c739a46c9f4de3278313cd10205c4418
--- /dev/null
+++ b/adacofadaptivecollaborationofflowsforvideoframeinterpolation/e2e1a401-7356-40b3-8ce1-f8b2e211ed85_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:54f69ea76a8f5f2957e83980a943d1e2e779bc84e45b877f180620db16dd4101
+size 95716
diff --git a/adacofadaptivecollaborationofflowsforvideoframeinterpolation/e2e1a401-7356-40b3-8ce1-f8b2e211ed85_model.json b/adacofadaptivecollaborationofflowsforvideoframeinterpolation/e2e1a401-7356-40b3-8ce1-f8b2e211ed85_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c2f338dcc756115dd962be15f6531e670ca59735
--- /dev/null
+++ b/adacofadaptivecollaborationofflowsforvideoframeinterpolation/e2e1a401-7356-40b3-8ce1-f8b2e211ed85_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:27aceaafab4f19505af432963df7f3b6a2d50bc18ce5647d16bf6b980127d713
+size 114614
diff --git a/adacofadaptivecollaborationofflowsforvideoframeinterpolation/e2e1a401-7356-40b3-8ce1-f8b2e211ed85_origin.pdf b/adacofadaptivecollaborationofflowsforvideoframeinterpolation/e2e1a401-7356-40b3-8ce1-f8b2e211ed85_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..540115eefad149267465544a8abeba1588009527
--- /dev/null
+++ b/adacofadaptivecollaborationofflowsforvideoframeinterpolation/e2e1a401-7356-40b3-8ce1-f8b2e211ed85_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7d4098c1242de1e201663e95620482cbaf9a139547fc784944519bca30012d19
+size 772194
diff --git a/adacofadaptivecollaborationofflowsforvideoframeinterpolation/full.md b/adacofadaptivecollaborationofflowsforvideoframeinterpolation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f0419fa8ad1526a7d6abb540203bf3a7c96774c9
--- /dev/null
+++ b/adacofadaptivecollaborationofflowsforvideoframeinterpolation/full.md
@@ -0,0 +1,462 @@
+# AdaCoF: Adaptive Collaboration of Flows for Video Frame Interpolation
+
+Hyeongmin Lee $^{1}$ Taeoh Kim $^{1}$ Tae-young Chung $^{1}$ Daehyun Pak $^{1}$ Yuseok Ban $^{2}$ Sangyoun Lee $^{1*}$
+
+$^{1}$ Yonsei University
+
+{minimonia,kto,tato0220,koasing,sylee}@yonsei.ac.kr
+
+$^{2}$ Agency for Defense Development
+
+ban@add.re.kr
+
+# Abstract
+
+Video frame interpolation is one of the most challenging tasks in video processing research. Recently, many studies based on deep learning have been suggested. Most of these methods focus on finding locations with useful information to estimate each output pixel using their own frame warping operations. However, many of them have Degrees of Freedom (DoF) limitations and fail to deal with the complex motions found in real world videos. To solve this problem, we propose a new warping module named Adaptive Collaboration of Flows (AdaCoF). Our method estimates both kernel weights and offset vectors for each target pixel to synthesize the output frame. AdaCoF is one of the most generalized warping modules compared to other approaches, and covers most of them as special cases of it. Therefore, it can deal with a significantly wide domain of complex motions. To further improve our framework and synthesize more realistic outputs, we introduce dual-frame adversarial loss which is applicable only to video frame interpolation tasks. The experimental results show that our method outperforms the state-of-the-art methods for both fixed training set environments and the Middlebury benchmark. Our source code is available at https://github.com/HyeongminLEE/AdaCoF-pytorch.
+
+# 1. Introduction
+
+Synthesizing the intermediate frame between consecutive frames is one of the main research topics in the video processing area. Using a frame interpolation algorithm, we can obtain slow-motion videos from ordinary videos without professional high-speed cameras. In addition, we can freely convert the frame rate of a video, so frame interpolation can also be applied to video coding systems. Interpolating the intermediate frame of a video requires an understanding of motion, unlike image pixel interpolation. Unfortunately, real-world videos contain not only simple motions but also large and complex ones, making the task
+
+
+(a) The Kernel-Based Approach
+
+
+(b) The Flow-Based Approach
+
+
+(c) Kernel and Flow Combined
+
+
+(d) Ours
+Figure 1: Overall description of the main streams and our method. The blue parts of each figure represent the reference points for generating the target pixel.
+
+significantly more difficult.
+
+Most of the approaches define video frame interpolation as a problem of finding reference locations in input frames which include information for estimating each output pixel value. This can be seen as a motion estimation process, because the task involves tracking the path of the target pixel. Therefore, each algorithm covers its own motion domain, and this area is directly related to the performance. To handle motion in real world videos, we need a generalized operation that can refer to any number of pixels in any location in the input frames. However, most of the existing approaches have a variety of limitations in Degrees of Freedom (DoF).
+
+One is the kernel-based approach (Figure 1 (a)) [34, 35], which adaptively estimates the large-sized kernel for each pixel and synthesizes the intermediate frame by convolving the kernels with the input. This approach finds the proper reference location by assigning large weights to the pixels
+
+of interest. However, it cannot refer to arbitrary locations, since it cannot deal with motions larger than the kernel size, and it is inefficient to keep a large kernel even when the motion is small. The second approach is the flow-based approach (Figure 1 (b)) [20, 27], which estimates a flow vector directly pointing to the reference location for each output pixel. However, it cannot refer to an arbitrary number of pixels because only one location is referred to in each input frame. Therefore, it is not suitable for complex motions, and the result may suffer from a lack of information when the input frames are of low quality. Recently, methods combining the kernel-based and flow-based approaches have been proposed to compensate for each other's limitations (Figure 1 (c)) [49, 3]. They multiply the kernels with the location pointed to by the flow vector, so they can refer to any location plus some additional neighboring pixels. However, this approach is not much different from the flow-based approach, as it uses significantly fewer reference points than the kernel-based one. In addition, there is room for improvement in terms of DoF because the shape of the kernel remains a fixed square.
+
+In this paper, we propose an operation, called Adaptive Collaboration of Flows (AdaCoF), that can refer to any number of pixels at any locations. To synthesize a target pixel, we estimate multiple flows, called offset vectors, pointing to the reference locations, sample them, and obtain the target pixel as a linear combination of the sampled values. Our method is inspired by deformable convolution (DefConv) [8], but AdaCoF differs from it in several respects. First, DefConv shares its weights over all positions, which is not suitable for video because different positions in a frame exhibit different motions; we therefore allow the weights to be spatially adaptive. Second, AdaCoF is used as an independent module for frame warping, not for feature extraction as in DefConv, so we obtain the weights as outputs of a neural network instead of training them as learnable parameters. Third, we add dilation to the starting points of the offset vectors to enforce them to search a wider area. Lastly, we add an occlusion mask to utilize only one of the two input frames when a reference pixel is occluded. As shown in Figure 1 (d), AdaCoF can refer to any number of pixels at any locations in the input frames, because the sizes and shapes of the kernels are not fixed. Therefore, our method has the highest DoF among the competing algorithms and can deal with various complex motions in real-world videos. To make the synthesized frames more realistic, we further train a discriminator to detect the generated frame given the output and one of the input frames, and train the generator to maximize the entropy of the discriminator using a dual-frame adversarial loss. Experimental results on various benchmarks show the effectiveness of AdaCoF over the latest state-of-the-art approaches.
+
+# 2. Related Work
+
+Most of the classic video frame interpolation methods estimate the dense flow maps using optical flow algorithms [12, 19, 44, 46] and warp the input frames [1, 4, 47, 50]. Therefore, the performance of these approaches largely depends on optical flow algorithms. Also, optical flow based approaches have limitations in many cases, such as occlusions, large motion, and brightness changes. Although there are some approaches without using external optical flow modules [25, 29], they still have difficulty in dealing with these problems. Meyer et al. [32] regard video frames as linear combinations of wavelets with different directions and frequencies. This approach interpolates each wavelet's phase and magnitude. This method makes notable progress in both performance and running time. Their recent work also applies deep learning to this approach [31]. However, it still has limitations for large motions of high frequency components.
+
+Recent work has demonstrated the success of applying deep learning in the field of computer vision [10, 14, 18, 21, 23, 41], which, in turn, inspires various deep learning based frame interpolation methods. As all we require for training the neural networks are three consecutive video frames, learning based approaches are appropriate for this task. Long et al. [28] propose a CNN architecture that takes two input frames and directly estimates the intermediate frame. However, this type of approach often leads to blurry results. Some other methods focus on where to find the output pixel in the input frames, instead of directly estimating the image. This paradigm is based on the fact that at least one input frame contains the output pixel, even in the case of occlusion. Niklaus et al. [34] estimate a kernel for each location and obtain the output pixel by convolving it over input patches. Each kernel samples the proper input pixels by combining them selectively. However, this requires a lot of memory, and estimating large kernels for every pixel is computationally expensive. Niklaus et al. [35] solve this problem by estimating each kernel as the outer product of two vectors. However, this approach cannot handle motions larger than the kernel size, and it is still wasteful to estimate large kernels for small motions. Liu et al. [27] estimate a flow map that consists of vectors directly pointing to reference locations and sample the proper pixels according to the flow map. However, as they assume that the forward and backward flows are the same, it is difficult to handle complex motions. Jiang et al. [20] propose a similar algorithm, but estimate the forward and backward flows separately; they also improve the flow computation stage by defining a warping loss. However, it could be risky to take only one pixel value from each frame, especially when the input patches are of poor quality. To solve these problems, Reda et al. [38] and Bao et al. [3] combine kernel-based and flow-map-based approaches. They multiply
+
+small-sized kernels with the locations pointed to by the flow vectors. However, the reference points are still limited to a small area because the kernels maintain their square shape, which results in low DoF.
+
+Some approaches use additional information to solve problems in video frame interpolation. Niklaus et al. [33] exploit context information extracted from ResNet-18 [18] to enable informative interpolation and obtain high-quality results. In addition, Bao et al. [2] use depth maps estimated with an hourglass architecture [6] to solve occlusion problems. Lastly, Liu et al. [26] obtain better performance with a cycle consistency loss and additional edge maps. These techniques can be applied independently to many other algorithms, including ours.
+
+# 3. Proposed Approach
+
+# 3.1. Video Frame Interpolation
+
+Given consecutive video frames $I_{n}$ and $I_{n + 1}$ , where $n \in \mathbb{Z}$ is a frame index, our goal is to find the intermediate frame $I_{out}$ . All the information required to produce $I_{out}$ can be obtained from $I_{n}$ and $I_{n + 1}$ . Therefore, all we have to do is find the relations between them. We regard the relation as a warping operation $\mathcal{T}$ from $I_{n}$ and $I_{n + 1}$ to $I_{out}$ . For the forward and backward warping operations $\mathcal{T}_f$ and $\mathcal{T}_b$ , we can consider $I_{out}$ as a combination of $\mathcal{T}_f(I_n)$ and $\mathcal{T}_b(I_{n + 1})$ as follows.
+
+$$
+I _ {o u t} = \mathcal {T} _ {f} (I _ {n}) + \mathcal {T} _ {b} (I _ {n + 1}) \tag {1}
+$$
+
+The frame interpolation task thus reduces to the problem of finding the spatial transform $\mathcal{T}$ . We employ a new operation called Adaptive Collaboration of Flows (AdaCoF) for $\mathcal{T}$ , which convolves the input image with adaptive kernel weights and offset vectors for each output pixel.
+
+Occlusion reasoning. Let both the input and output image sizes be $M \times N$ . In the case of occlusion, the target pixel will not be visible in one of the input images. Therefore we define occlusion map $V \in [0,1]^{M \times N}$ and modify Equation (1) as follows.
+
+$$
+I _ {o u t} = V \odot \mathcal {T} _ {f} (I _ {n}) + (J - V) \odot \mathcal {T} _ {b} (I _ {n + 1}), \tag {2}
+$$
+
+where $\odot$ is a pixel-wise multiplication and $J$ is an $M\times N$ matrix of ones. For the target pixel $(i,j)$ , $V(i,j) = 1$ implies that the pixel is visible only in $I_{n}$ and $V(i,j) = 0$ implies that it is visible only in $I_{n + 1}$ .
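+
+Equation (2) is a per-pixel convex combination of the two warped frames. A small PyTorch sketch (the warped inputs are produced by the AdaCoF operation described in the next subsection):
+
+```python
+import torch
+
+def blend_with_occlusion(warped_fwd, warped_bwd, v):
+    """Eq. (2): I_out = V * T_f(I_n) + (1 - V) * T_b(I_{n+1}).
+
+    warped_fwd, warped_bwd: (B, 3, M, N) warped input frames
+    v: (B, 1, M, N) occlusion map V with values in [0, 1]
+    """
+    return v * warped_fwd + (1.0 - v) * warped_bwd
+```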
+
+
+(a) $d = 0$
+
+
+(b) $d = 1$
+
+
+(c) $d = 2$
+Figure 2: Illustration of the offset vectors of AdaCoF under various dilations.
+
+# 3.2. Adaptive Collaboration of Flows
+
+Let the frame warped from $I$ be $\hat{I}$ . When we define $\mathcal{T}$ as a classic convolution, we can write $\hat{I}$ as follows.
+
+$$
+\hat {I} (i, j) = \sum_ {k = 0} ^ {F - 1} \sum_ {l = 0} ^ {F - 1} W _ {k, l} I (i + k, j + l), \tag {3}
+$$
+
+where $F$ is the kernel size and $W_{k,l}$ are the kernel weights. The input image $I$ is considered to be padded so that the original input and output size are equal. Deformable convolution [8] adds offset vectors $\Delta p_{k,l} = (\alpha_{k,l},\beta_{k,l})$ to the classic convolution as follows.
+
+$$
+\hat {I} (i, j) = \sum_ {k = 0} ^ {F - 1} \sum_ {l = 0} ^ {F - 1} W _ {k, l} I (i + k + \alpha_ {k, l}, j + l + \beta_ {k, l}) \tag {4}
+$$
+
+AdaCoF, unlike classic deformable convolution, does not share the kernel weights across pixels. The kernel weights therefore become position-dependent and are written as $W_{k,l}(i,j)$:
+
+$$
+\hat {I} (i, j) = \sum_ {k = 0} ^ {F - 1} \sum_ {l = 0} ^ {F - 1} W _ {k, l} (i, j) I (i + k + \alpha_ {k, l}, j + l + \beta_ {k, l}) \tag {5}
+$$
+
+The offset values $\alpha_{k,l}$ and $\beta_{k,l}$ need not be integers. In other words, $(\alpha_{k,l},\beta_{k,l})$ can point to an arbitrary location, not only grid points. Therefore, the pixel value of $I$ at any location has to be defined. We use bilinear interpolation to obtain the values at non-grid locations, as in DCNs [8]. This also makes the module differentiable, so the whole network can be trained end-to-end.
+
+Dilation. We found that dilating the starting points of the offset vectors helps AdaCoF explore a wider area, as shown in Figure 2. Therefore, we add a dilation term $d \in \{0,1,2,\ldots\}$ to the operation as follows.
+
+$$
+\begin{array}{l} \hat {I} (i, j) = \\ \sum_ {k = 0} ^ {F - 1} \sum_ {l = 0} ^ {F - 1} W _ {k, l} (i, j) I (i + d k + \alpha_ {k, l}, j + d l + \beta_ {k, l}) \tag {6} \\ \end{array}
+$$
+
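+A straightforward (unoptimized) PyTorch sketch of the warping in Eq. (6), using `grid_sample` for the bilinear sampling; the authors implement the AdaCoF layer with a custom CUDA kernel, so this reference version is only meant to make the operation concrete:
+
+```python
+import torch
+import torch.nn.functional as F
+
+def adacof_warp(img, weights, alpha, beta, kernel=5, dilation=1):
+    """Eq. (6). img: (B,3,H,W); weights, alpha, beta: (B, kernel*kernel, H, W)."""
+    b, _, h, w = img.shape
+    ys, xs = torch.meshgrid(
+        torch.arange(h, dtype=img.dtype, device=img.device),
+        torch.arange(w, dtype=img.dtype, device=img.device), indexing="ij")
+    out = torch.zeros_like(img)
+    for idx in range(kernel * kernel):
+        k, l = idx // kernel, idx % kernel
+        # reference location of this tap: (i + d*k + alpha, j + d*l + beta)
+        ry = ys + dilation * k + alpha[:, idx]
+        rx = xs + dilation * l + beta[:, idx]
+        # normalize coordinates to [-1, 1] and sample bilinearly
+        grid = torch.stack((2.0 * rx / (w - 1) - 1.0,
+                            2.0 * ry / (h - 1) - 1.0), dim=-1)
+        sampled = F.grid_sample(img, grid, mode="bilinear",
+                                padding_mode="border", align_corners=True)
+        out = out + weights[:, idx:idx + 1] * sampled
+    return out
+```
+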
+
+Figure 3: The neural network architecture. The model consists of three main parts: the U-Net, the sub-networks, and Adaptive Collaboration of Flows (AdaCoF). The U-Net extracts features from the input frames. The sub-networks then estimate the parameters needed for AdaCoF from the extracted features; the output height and width of each sub-network are the same as those of the input, and each parameter group for an output pixel is obtained as a 1D vector along the channel axis. The AdaCoF part synthesizes the intermediate frame from the input frames and the estimated parameters.
+
+# 3.3. Network Architecture
+
+We design a fully convolutional neural network that estimates the kernel weights $W_{k,l}$ , offset vectors $(\alpha_{k,l},\beta_{k,l})$ , and occlusion map $V$ . Therefore, any video frame size can be used as input. Furthermore, because each module of the network is differentiable, it is end-to-end trainable. Our network starts with the U-Net architecture, which consists of an encoder, a decoder, and skip connections [39]. Each processing unit basically contains a $3\times 3$ convolution and ReLU activation. The encoder uses average pooling to downsample the features, and the decoder uses bilinear interpolation for upsampling. After the U-Net, seven sub-networks estimate the outputs: $W_{k,l}$ , $\alpha_{k,l}$ , and $\beta_{k,l}$ for each input frame, and $V$ . We use a sigmoid activation for $V$ to satisfy $V\in [0,1]^{M\times N}$ . Moreover, as the weights $W_{k,l}$ for each pixel have to be non-negative and sum to 1, softmax layers are used to enforce these constraints. The detailed architecture is shown in Figure 3.
+
+# 3.4. Objective Functions
+
+Loss Function. First, we have to reduce the difference between the model output $I_{out}$ and the ground truth $I_{gt}$ . We use the $\ell_1$ norm for this loss as follows.
+
+$$
+\mathcal {L} _ {1} = \left\| I _ {\text {out}} - I _ {g t} \right\| _ {1} \tag {7}
+$$
+
+The $\ell_2$ norm can be used instead, but it is known that $\ell_2$-based optimization leads to blurry results in most image synthesis tasks [16, 28, 30, 43]. Following Liu et al. [27], we use the Charbonnier function $\Phi(x) = (x^2 + \epsilon^2)^{1/2}$ to optimize the $\ell_1$ norm, where $\epsilon = 0.001$ .
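+
+A one-line PyTorch version of the Charbonnier penalty; averaging over pixels (rather than summing) is our choice here:
+
+```python
+import torch
+
+def charbonnier_loss(pred, target, eps=1e-3):
+    """Phi(x) = sqrt(x^2 + eps^2), averaged over all pixels."""
+    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()
+```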
+
+Perceptual Loss. Perceptual loss has been found to be effective in producing visually more realistic outputs [11, 21, 51]. We add the perceptual loss with the feature extractor $\mathcal{F}$ from conv4_3 of ImageNet pretrained VGG16 network.
+
+$$
+\mathcal {L} _ {v g g} = \left\| \mathcal {F} \left(I _ {\text {out}}\right) - \mathcal {F} \left(I _ {g t}\right) \right\| _ {2} \tag {8}
+$$
+
+Dual-Frame Adversarial Loss. It is known that training networks with an adversarial loss [15] can lead to results of higher quality and sharpness than purely minimizing the mean squared error [24, 5]. This also applies to video frame interpolation. However, simply applying an adversarial loss to the single output frame ignores temporal consistency and can lead to a result that looks out of place among the input frames. What we want is for the synthesized frame to appear natural among the adjacent frames, not merely among other real images. Therefore, we concatenate the generated frame and one of the input frames in temporal order and train the discriminator $C$ to distinguish which of the two is the generated frame, using the following loss.
+
+$$
+- \mathcal {L} _ {C} = \log (C ([ I _ {n}, I _ {\text {out}} ])) + \log (1 - C ([ I _ {\text {out}}, I _ {n + 1} ])), \tag {9}
+$$
+
+| Method | Middlebury PSNR | Middlebury SSIM | UCF101 PSNR | UCF101 SSIM | DAVIS PSNR | DAVIS SSIM |
+| --- | --- | --- | --- | --- | --- | --- |
+| Ours-fb | 32.879 | 0.956 | 33.449 | 0.967 | 24.787 | 0.828 |
+| Ours-kb | 34.762 | 0.972 | 34.689 | 0.973 | 25.802 | 0.854 |
+| Ours-ws | 35.412 | 0.976 | 34.901 | 0.973 | 26.623 | 0.866 |
+| Ours-woocc | 35.471 | 0.975 | 34.907 | 0.973 | 26.482 | 0.863 |
+| Ours-sdc | 34.973 | 0.972 | 34.673 | 0.974 | 26.367 | 0.866 |
+| Ours-vgg | 35.694 | 0.977 | 34.973 | 0.973 | 26.773 | 0.869 |
+| Ours | 35.715 | 0.978 | 35.063 | 0.974 | 26.636 | 0.868 |
+
+where $[\cdot ]$ is concatenation. Then we train the main network to maximize the uncertainty, i.e., entropy, of the discriminator with the following loss. This idea is inspired by some prior works [9, 13].
+
+$$
+\begin{array}{l} \mathcal {L} _ {a d v} = C ([ I _ {n}, I _ {\text {out}} ]) \log (C ([ I _ {n}, I _ {\text {out}} ])) \tag {10} \\ + C ([ I _ {\text {out}}, I _ {n + 1} ]) \log (C ([ I _ {\text {out}}, I _ {n + 1} ])) \\ \end{array}
+$$
+
+Thus, the network is intended to generate an output that is realistic compared to the adjacent input frames.
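+
+Both objectives can be written compactly as below; the discriminator is assumed to output a probability per concatenated pair, and its architecture is not specified in the text:
+
+```python
+import torch
+
+def discriminator_loss(disc, i_prev, i_out, i_next):
+    """Eq. (9): C learns which frame of each concatenated pair is generated."""
+    p_a = disc(torch.cat([i_prev, i_out], dim=1))   # generated frame in second slot
+    p_b = disc(torch.cat([i_out, i_next], dim=1))   # generated frame in first slot
+    return -(torch.log(p_a) + torch.log(1.0 - p_b)).mean()
+
+def dual_frame_adversarial_loss(disc, i_prev, i_out, i_next):
+    """Eq. (10): push the discriminator towards maximal uncertainty."""
+    p_a = disc(torch.cat([i_prev, i_out], dim=1))
+    p_b = disc(torch.cat([i_out, i_next], dim=1))
+    return (p_a * torch.log(p_a) + p_b * torch.log(p_b)).mean()
+```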
+
+We finally combine above losses to compose two versions of objective function: distortion-oriented loss $(\mathcal{L}_d)$ and perception-oriented loss $(\mathcal{L}_p)$ as follows.
+
+$$
+\mathcal {L} _ {d} = \mathcal {L} _ {1}, \tag {11}
+$$
+
+$$
+\mathcal {L} _ {p} = \lambda_ {1} \mathcal {L} _ {1} + \lambda_ {v g g} \mathcal {L} _ {v g g} + \lambda_ {a d v} \mathcal {L} _ {a d v}, \tag {12}
+$$
+
+For the perception-oriented version, we first train the network with $\mathcal{L}_d$ then fine-tune it with $\mathcal{L}_p$ .
+
+# 4. Experiments
+
+# 4.1. Experimental Settings
+
+Learning Strategy. We train our neural network using the AdaMax optimizer [22] with $\beta_{1} = 0.9, \beta_{2} = 0.999$ . The learning rate is initially 0.001 and is halved every 20 epochs. The batch size is 4 and the network is trained for 50 epochs.
+
+Training Dataset. We use the Vimeo90K [49] dataset for training. It contains 51,312 triplets of $256 \times 448$ video frames. To augment the dataset, we randomly crop $256 \times 256$ patches from the original images. We also reduce prior-induced biases by flipping horizontally, flipping vertically, and swapping the temporal order of the frames, each with probability 0.5.
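+
+A sketch of this triplet augmentation (torchvision-based; the helper name is ours):
+
+```python
+import random
+import torchvision.transforms.functional as TF
+
+def augment_triplet(frame0, frame1, frame2, patch=256):
+    """Random 256x256 crop, horizontal/vertical flips, and temporal order swap."""
+    _, h, w = frame0.shape                      # tensors of shape (C, H, W)
+    top = random.randint(0, h - patch)
+    left = random.randint(0, w - patch)
+    frames = [TF.crop(f, top, left, patch, patch) for f in (frame0, frame1, frame2)]
+    if random.random() < 0.5:
+        frames = [TF.hflip(f) for f in frames]
+    if random.random() < 0.5:
+        frames = [TF.vflip(f) for f in frames]
+    if random.random() < 0.5:
+        frames = frames[::-1]                   # swap the temporal order of the triplet
+    return frames  # (first input, ground-truth middle frame, second input)
+```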
+
+Computational issue. Our approach is implemented using PyTorch [36]. To implement the AdaCoF layer, we use CUDA and cuDNN [7] for parallel processing. We set the kernel size to $5 \times 5$ ; all the weights, offsets, and the occlusion map together require 0.94 GB of memory for a 1080p video frame, which is about $70\%$ of the memory demand of
+
+Table 1: Result of ablation study on warping operations.
+
+| Kernel size | Middlebury PSNR | Middlebury SSIM | UCF101 PSNR | UCF101 SSIM | DAVIS PSNR | DAVIS SSIM |
+| --- | --- | --- | --- | --- | --- | --- |
+| F = 1 | 32.879 | 0.956 | 33.449 | 0.967 | 24.787 | 0.828 |
+| F = 3 | 35.212 | 0.975 | 34.728 | 0.973 | 26.535 | 0.867 |
+| F = 5 | 35.715 | 0.978 | 35.063 | 0.974 | 26.636 | 0.868 |
+| F = 7 | 35.927 | 0.979 | 34.974 | 0.974 | 26.987 | 0.873 |
+| F = 9 | 36.019 | 0.980 | 35.012 | 0.973 | 27.029 | 0.875 |
+| F = 11 | 36.094 | 0.981 | 35.024 | 0.974 | 26.941 | 0.873 |
+
+Table 2: Experimental result on kernel size $F$
+
+| Dilation | Middlebury PSNR | Middlebury SSIM | UCF101 PSNR | UCF101 SSIM | DAVIS PSNR | DAVIS SSIM |
+| --- | --- | --- | --- | --- | --- | --- |
+| d = 0 | 35.489 | 0.977 | 35.032 | 0.974 | 26.710 | 0.870 |
+| d = 1 | 35.715 | 0.978 | 35.063 | 0.974 | 26.636 | 0.868 |
+| d = 2 | 35.876 | 0.980 | 35.099 | 0.974 | 26.910 | 0.870 |
+
+Table 3: Experimental result on dilation $d$
+
+Niklaus et al. [35]. Using an RTX 2080 Ti GPU, it takes 0.21 seconds to synthesize a $1280 \times 720$ frame.
+
+Evaluation settings. The test datasets used for the experiments are the Middlebury dataset [1], some randomly sampled sequences from UCF101 [42] and the DAVIS dataset [37]. We evaluate each algorithm by measuring PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity) [45] for all test datasets. For all the tables in this section, the red numbers mean the best performance and the blue numbers mean the second best performance.
+
+# 4.2. Ablation Study
+
+We analyze the contributions of each module in terms of five keywords: warping operation, perceptual loss, kernel size, dilation and adversarial loss.
+
+Warping Operation. To verify that higher DoF leads to better performance, we fix the backbone network and replace AdaCoF with some other warping operations of lower DoF. We train all versions of warping operation with $\mathcal{L}_d$ and the kernel sizes are fixed to be 5 except for Ours-fb.
+
+- Ours-fb: To compare AdaCoF with flow-based approaches, we set the kernel size to be 1.
+- Ours-kb: SepConv [35] is one of the most representative kernel-based approaches. However, because it does not contain an occlusion map, the comparison is not fair. Therefore, we train a new network of SepConv with an occlusion map.
+- Ours-sdc: To compare our algorithm with kernel and flow combined approaches, we exploit Spatially Displaced Convolution (SDC) [38] instead of AdaCoF.
+- Ours-ws: One of the differences between deformable convolution and AdaCoF is that our algorithm does not share the weights over all locations of images. Therefore, we compare it with the weight shared version.
+
+| Method | AVERAGE IE | AVERAGE NIE | Mequon IE | Mequon NIE | Schefflera IE | Schefflera NIE | Urban IE | Urban NIE | Teddy IE | Teddy NIE | Backyard IE | Backyard NIE | Basketball IE | Basketball NIE | Dumptruck IE | Dumptruck NIE | Evergreen IE | Evergreen NIE |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| MDP-Flow2 [48] | 5.83 | 0.87 | 2.89 | 0.59 | 3.47 | 0.62 | 3.66 | 1.24 | 5.20 | 0.94 | 10.20 | 0.98 | 6.13 | 1.09 | 7.36 | 0.70 | 7.75 | 0.78 |
+| DeepFlow [46] | 5.97 | 0.86 | 2.98 | 0.62 | 3.88 | 0.74 | 3.62 | 0.86 | 5.39 | 0.99 | 11.00 | 1.04 | 5.91 | 1.02 | 7.14 | 0.63 | 7.80 | 0.96 |
+| SepConv [35] | 5.61 | 0.83 | 2.52 | 0.54 | 3.56 | 0.67 | 4.17 | 1.07 | 5.41 | 1.03 | 10.20 | 0.99 | 5.47 | 0.96 | 6.88 | 0.68 | 6.63 | 0.70 |
+| SuperSlomo [20] | 5.31 | 0.78 | 2.51 | 0.59 | 3.66 | 0.72 | 2.91 | 0.74 | 5.05 | 0.98 | 9.56 | 0.94 | 5.37 | 0.96 | 6.69 | 0.60 | 6.73 | 0.69 |
+| CtxSyn [33] | 5.28 | 0.82 | 2.24 | 0.50 | 2.96 | 0.55 | 4.32 | 1.42 | 4.21 | 0.87 | 9.59 | 0.95 | 5.22 | 0.94 | 7.02 | 0.68 | 6.66 | 0.67 |
+| CyclicGen [26] | 4.20 | 0.73 | 2.26 | 0.64 | 3.19 | 0.67 | 2.76 | 0.72 | 4.97 | 0.95 | 8.00 | 0.91 | 3.36 | 0.87 | 4.55 | 0.53 | 4.48 | 0.52 |
+| TOF-M [49] | 5.49 | 0.84 | 2.54 | 0.55 | 3.70 | 0.72 | 3.43 | 0.92 | 5.05 | 0.96 | 9.84 | 0.97 | 5.34 | 0.98 | 6.88 | 0.72 | 7.14 | 0.90 |
+| DAIN [2] | 4.86 | 0.71 | 2.38 | 0.58 | 3.28 | 0.60 | 3.32 | 0.69 | 4.65 | 0.86 | 7.88 | 0.87 | 4.73 | 0.85 | 6.36 | 0.59 | 6.25 | 0.66 |
+| MEMC-Net [3] | 5.00 | 0.74 | 2.39 | 0.59 | 3.36 | 0.64 | 3.37 | 0.80 | 4.84 | 0.88 | 8.55 | 0.88 | 4.70 | 0.85 | 6.40 | 0.64 | 6.37 | 0.63 |
+| AdaCoF (Ours) | 4.75 | 0.73 | 2.41 | 0.60 | 3.10 | 0.59 | 3.48 | 0.84 | 4.84 | 0.92 | 8.68 | 0.90 | 4.13 | 0.84 | 5.77 | 0.58 | 5.60 | 0.57 |
+
+Table 4: Evaluation results on the Middlebury benchmark.
+
+| Method | Middlebury PSNR | Middlebury SSIM | UCF101 PSNR | UCF101 SSIM | DAVIS PSNR | DAVIS SSIM |
+| --- | --- | --- | --- | --- | --- | --- |
+| Overlapping | 27.968 | 0.879 | 30.445 | 0.935 | 21.922 | 0.740 |
+| Phase Based [32] | 31.117 | 0.933 | 32.454 | 0.953 | 23.465 | 0.800 |
+| MIND [28] | 31.346 | 0.943 | 32.437 | 0.963 | 25.570 | 0.852 |
+| SepConv [35] | 35.521 | 0.977 | 34.735 | 0.973 | 26.258 | 0.861 |
+| DVF [27] | 34.340 | 0.971 | 34.465 | 0.972 | 25.880 | 0.858 |
+| SuperSlomo [20] | 34.234 | 0.972 | 34.055 | 0.970 | 25.699 | 0.858 |
+| Ours | 35.715 | 0.978 | 35.063 | 0.974 | 26.636 | 0.868 |
+| Ours+ | 36.139 | 0.981 | 35.048 | 0.974 | 27.070 | 0.874 |
+
+Table 5: Evaluation results with a fixed training dataset.
+
+- Ours-woocc: AdaCoF without an occlusion map. The intermediate frame is obtained by simply averaging the outputs of the forward and backward warping.
+
+As shown in Table 1, our warping operation outperforms the others with lower DoF. In particular, the PSNR gap between Ours-sdc and Ours is larger than the gap between Ours-kb and Ours-sdc, which indicates that allowing the kernels to take arbitrary shapes is more crucial than allowing them to move freely.
+
+Perceptual Loss. We add the perceptual loss $\mathcal{L}_{vgg}$ introduced in Section 3.4, without the adversarial loss, and set $\lambda_{vgg} = 0.01$. The Ours-vgg row of Table 1 shows that the PSNR generally decreases, increasing only on the DAVIS dataset. This implies that the perceptual loss improves robustness on hard sequences with large and complex motions.
+
+Kernel Size. We train the network with various kernel sizes $F \in \{1, 3, 5, 7, 9, 11\}$, which means that $F^2$ offset vectors are used. As shown in Table 2, a larger kernel size generally leads to better performance, and the PSNR saturates as $F$ increases. In particular, the saturation occurs earlier on the UCF101 dataset because it contains relatively small motions and low-resolution sequences, leaving little room for further improvement.
+
+Dilation. In Section 3.3, we add dilation to the AdaCoF operation to encourage the offset vectors to start from a wider area. We check the effect of dilation by training the network with $F = 5$ and $d \in \{0,1,2\}$, where $d = 0$ means that all offset vectors start from the same location. Table 3 shows that larger dilation generally leads to better results. As can be seen from the $4^{\text{th}}$-$7^{\text{th}}$ columns of Figure 6, the offset vectors tend to spread more in the case of large motion, so dilation provides a better initialization for them. Figure 6 is discussed in more detail in Section 4.5.
+
+Figure 4: The result of adding adversarial losses. Panels: (a) Ours-$\mathcal{L}_d$, (b) Ours-$\mathcal{L}_p$, (c) WGAN-GP, (d) TGAN.
+
+
+Adversarial Loss. To obtain visually more convincing results, we first train the network with $\mathcal{L}_d$ for 50 epochs and fine-tune it for 10 epochs with $\mathcal{L}_p$, introduced in Section 3.4. We set $\lambda_1 = 0.01$, $\lambda_{vgg} = 1$, and $\lambda_{adv} = 0.005$. For comparison, we also train versions in which $\mathcal{L}_{adv}$ is replaced by the WGAN-GP loss [17] and the TGAN loss [40], and visually compare them with the result of the proposed dual-frame adversarial loss (Ours-$\mathcal{L}_p$). According to Figure 4, fine-tuning the network with adversarial losses increases the sharpness of the results. However, the WGAN-GP and TGAN losses introduce artifacts into the output image, while our loss preserves the structures of the frames.
+
+
+Figure 5: Visual comparison of sample sequences with large motions (1$^{\text{st}}$-2$^{\text{nd}}$ rows) and of sample sequences with occlusion (3$^{\text{rd}}$-4$^{\text{th}}$ rows). There are occluded areas in front of and behind the car. Columns: Ground Truth, Overlap, Phase Based, MIND, SepConv, DVF, SuperSlomo, Ours-$\mathcal{L}_d$, Ours-$\mathcal{L}_p$.
+
+# 4.3. Quantitative Evaluation
+
+We compare our method with simple frame overlapping and several competing algorithms, including Phase Based [32], MIND [28], SepConv [35], DVF [27], and SuperSlomo [20]. We evaluate two versions of our algorithm: the basic version with $F = 5$, $d = 1$ (Ours) and a larger version with $F = 11$, $d = 2$ (Ours+). For a fair comparison, we fix the training environment: we implement the competing algorithms and train them on the training dataset introduced in Section 4.1 for 50 epochs each. We measure PSNR and SSIM of each algorithm on the three test datasets; the results are shown in Table 5. According to the table, the kernel-based approach (SepConv) generally performs better than the flow-based ones (DVF, SuperSlomo), and our method outperforms the other algorithms on all test datasets by a large margin. We also upload our result to the Middlebury benchmark [1] and compare it with other recent state-of-the-art algorithms. As reported in Table 4, AdaCoF ranks $2^{\text{nd}}$ in both IE (Interpolation Error) and NIE (Normalized Interpolation Error) among all published methods on the Middlebury website. CyclicGen [26], which ranks $1^{\text{st}}$ in IE, uses additional edge maps for sharper results, and its cycle consistency loss is orthogonally applicable to our method. DAIN [2], which ranks $1^{\text{st}}$ in NIE, uses a pre-trained optical flow estimator and depth maps, while our method does not require any additional information. Lastly, our approach performs better on data with dynamic motions, such as Basketball, Dumptruck, and Evergreen.
+
+# 4.4. Visual Comparison
+
+Because the video frame interpolation task does not have a single fixed answer, evaluations based on PSNR and SSIM are not sufficient by themselves. Therefore, we also evaluate the methods qualitatively by comparing their results. In particular, we check how our method and other state-of-the-art algorithms handle the two main obstacles that make motions complex in real-world videos: large motion and occlusion.
+
+Large motion. When the reference point is located far away, the search area has to be expanded accordingly, which makes large motion one of the most challenging obstacles in video frame interpolation research. The first and second rows of Figure 5 show the results of various approaches including our method. The results of MIND and SepConv tend to be blurry, while DVF and SuperSlomo suffer from artifacts. Compared to the competing algorithms, our approach better synthesizes fast-moving objects. In addition, the perception-oriented AdaCoF (Ours-$\mathcal{L}_p$) mitigates the motion blur of the objects.
+
+Occlusion. Most objects in the intermediate frame appear in both adjacent frames. In the case of occlusion, however, an object is missing from one of the frames, so the appropriate frame has to be selected for each pixel, which makes the problem more difficult. In the third and fourth rows of Figure 5, a car causes occlusion in front of and behind it. Comparing the estimated images in the occluded areas, our method handles occlusion better than the other approaches.
+
+
+Figure 6: Various visualizations of the network outputs. Columns: Frame1, Frame2, Occlusion map, MeanFlow1, MeanFlow2, varFlow1, varFlow2.
+
+# 4.5. Offset Visualization
+
+Our method estimates several parameters from the input images: the kernel weights $W_{k,l}$, the offset vectors $(\alpha_{k,l},\beta_{k,l})$, and the occlusion map $V$. To check whether these parameters behave as intended, we visualize them in various ways. Because the network is trained in a self-supervised manner, the visualizations are obtained without any supervision, so they can also be used for other tasks in motion estimation research.
+
+Occlusion map. The third column of Figure 6 shows the occlusion map $V$. To handle occlusion, the proper frame has to be selected in each case. For example, the pixels in the red area cannot be found in the second frame, so the network decides to consider only the first frame. The blue area is the analogous case for the second frame, and the green area indicates that there is no occlusion.
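+
+In the synthesis, the occlusion map acts as a per-pixel blending weight between the two warped frames. The following is only a schematic of that blending step, assuming `warp1` and `warp2` are the frames warped from the first and second inputs and `V` lies in $[0, 1]$; the exact formulation is given in the method section of the paper.
+
+```python
+def blend_with_occlusion(warp1, warp2, V):
+    """Blend two warped frames using the occlusion map V in [0, 1].
+
+    V close to 1 keeps only the first frame (red area in Figure 6),
+    V close to 0 keeps only the second frame (blue area),
+    and intermediate values mix both (green area, no occlusion).
+    """
+    return V * warp1 + (1.0 - V) * warp2
+```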
+
+Mean Flow map. The fourth and fifth columns of Figure 6 show the weighted sum of the backward and forward offset vectors for each pixel. We call these maps the Mean Flow $F_{m}$; they can be computed by the following equations.
+
+$$
+\Delta p _ {k, l} = \left(\alpha_ {k, l}, \beta_ {k, l}\right) \tag {13}
+$$
+
+$$
+F _ {m} (i, j) = \sum_ {k = 0} ^ {F - 1} \sum_ {l = 0} ^ {F - 1} W _ {k, l} (i, j) \Delta p _ {k, l} \tag {14}
+$$
+
+The Mean Flow captures the overall tendency of the offset vectors. It therefore behaves like a forward/backward optical flow, as the figures confirm. It can be used as a dense optical flow, and it can also be obtained from other flow-based algorithms such as DVF and SuperSlomo.
+
+Variance Flow map. The sixth and seventh columns of Figure 6 show the weighted variance of the backward and forward offset vectors. We call these maps the Variance Flow $F_{v}$; they can be computed by the following equation.
+
+$$
+F _ {v} (i, j) = \sum_ {k = 0} ^ {F - 1} \sum_ {l = 0} ^ {F - 1} W _ {k, l} (i, j) \left(F _ {m} (i, j) - \Delta p _ {k, l}\right) ^ {2} \tag {15}
+$$
+
+A large value in this map means that the offset vectors for that pixel are more spread out, so the pixel can refer to more locations. According to the figure, more challenging regions, such as those with large motions and occluded areas, have larger variance values. Therefore, this map can serve as a kind of uncertainty map for motion estimation tasks. Unlike the Mean Flow map, it can only be obtained with our method.
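+
+A small PyTorch sketch of Eqs. (13)-(15), assuming the network outputs per-pixel kernel weights `W` of shape (F*F, H, W), normalized over the kernel dimension, and offset components `alpha`, `beta` of the same shape; the tensor layout is an assumption for illustration.
+
+```python
+import torch
+
+def mean_and_variance_flow(W, alpha, beta):
+    """Compute the Mean Flow (Eq. 14) and Variance Flow (Eq. 15) maps.
+
+    W, alpha, beta: tensors of shape (F*F, H, W) holding the kernel weights
+    and the offset vectors Delta p_{k,l} for every pixel.
+    """
+    offsets = torch.stack([alpha, beta], dim=0)          # (2, F*F, H, W)
+    mean_flow = (W.unsqueeze(0) * offsets).sum(dim=1)    # (2, H, W), Eq. (14)
+    diff = mean_flow.unsqueeze(1) - offsets              # (2, F*F, H, W)
+    var_flow = (W.unsqueeze(0) * diff ** 2).sum(dim=1)   # (2, H, W), Eq. (15)
+    return mean_flow, var_flow
+```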
+
+# 5. Conclusion
+
+In this paper, we point out that the DoF of the warping operation for handling various complex motions is one of the most critical factors in video frame interpolation. We then propose a new operation called Adaptive Collaboration of Flows (AdaCoF), which generalizes previous approaches: all of them can be regarded as special cases of AdaCoF. The parameters needed for the AdaCoF operation are produced by a fully convolutional network that is end-to-end trainable. Our experiments show that our method outperforms most of the competing algorithms, even in challenging cases such as large motion and occlusion. We visualize the network outputs to verify that they behave as intended and that the visualized maps are meaningful, so they can be used for other motion estimation tasks.
+
+Acknowledgement This research was supported by the R&D program for Advanced Integrated-intelligence for Identification (AIID) through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (NRF-2018M3E3A1057289).
+
+# References
+
+[1] Simon Baker, Daniel Scharstein, JP Lewis, Stefan Roth, Michael J Black, and Richard Szeliski. A database and evaluation methodology for optical flow. International Journal of Computer Vision, 92(1):1-31, 2011. 2, 5, 7
+[2] Wenbo Bao, Wei-Sheng Lai, Chao Ma, Xiaoyun Zhang, Zhiyong Gao, and Ming-Hsuan Yang. Depth-aware video frame interpolation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3703-3712, 2019. 3, 6, 7
+[3] Wenbo Bao, Wei-Sheng Lai, Xiaoyun Zhang, Zhiyong Gao, and Ming-Hsuan Yang. Memc-net: Motion estimation and motion compensation driven neural network for video interpolation and enhancement. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019. 2, 6
+[4] John L Barron, David J Fleet, and Steven S Beauchemin. Performance of optical flow techniques. International journal of computer vision, 12(1):43-77, 1994. 2
+[5] Yochai Blau and Tomer Michaeli. The perception-distortion tradeoff. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6228-6237, 2018. 4
+[6] Weifeng Chen, Zhao Fu, Dawei Yang, and Jia Deng. Single-image depth perception in the wild. In Advances in neural information processing systems, pages 730-738, 2016. 3
+[7] Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: Efficient primitives for deep learning. arXiv preprint arXiv:1410.0759, 2014. 5
+[8] Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017. 2, 3
+[9] Emily L Denton et al. Unsupervised learning of disentangled representations from video. In Advances in neural information processing systems, pages 4414-4423, 2017. 5
+[10] Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep convolutional networks. IEEE transactions on pattern analysis and machine intelligence, 38(2):295-307, 2016. 2
+[11] Alexey Dosovitskiy and Thomas Brox. Generating images with perceptual similarity metrics based on deep networks. In Advances in neural information processing systems, pages 658-666, 2016. 4
+[12] Alexey Dosovitskiy, Philipp Fischer, Eddy Ilg, Philip Hausser, Caner Hazirbas, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, and Thomas Brox. Flownet: Learning optical flow with convolutional networks. In The IEEE International Conference on Computer Vision (ICCV), December 2015. 2
+[13] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096-2030, 2016. 5
+[14] Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks.
+
+In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. 2
+[15] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014. 4
+[16] Ross Goroshin, Michael F Mathieu, and Yann LeCun. Learning to linearize under uncertainty. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1234-1242. Curran Associates, Inc., 2015. 4
+[17] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. In Advances in neural information processing systems, pages 5767-5777, 2017. 6
+[18] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. 2, 3
+[19] Eddy Ilg, Nikolaus Mayer, Tonmoy Saikia, Margret Keuper, Alexey Dosovitskiy, and Thomas Brox. Flownet 2.0: Evolution of optical flow estimation with deep networks. In IEEE conference on computer vision and pattern recognition (CVPR), volume 2, page 6, 2017. 2
+[20] Huaizu Jiang, Deqing Sun, Varun Jampani, Ming-Hsuan Yang, Erik Learned-Miller, and Jan Kautz. Super slomo: High quality estimation of multiple intermediate frames for video interpolation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. 2, 6, 7
+[21] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pages 694–711. Springer, 2016. 2, 4
+[22] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 5
+[23] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097-1105, 2012. 2
+[24] Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photorealistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4681-4690, 2017. 4
+[25] Hongbin Liu, Ruiqin Xiong, Debin Zhao, Siwei Ma, and Wen Gao. Multiple hypotheses bayesian frame rate upconversion by adaptive fusion of motion-compensated interpolations. IEEE transactions on circuits and systems for video technology, 22(8):1188-1198, 2012. 2
+[26] Yu-Lun Liu, Yi-Tung Liao, Yen-Yu Lin, and Yung-Yu Chuang. Deep video frame interpolation using cyclic frame generation. In AAAI Conference on Artificial Intelligence, 2019. 3, 6, 7
+
+[27] Ziwei Liu, Raymond A. Yeh, Xiaoou Tang, Yiming Liu, and Aseem Agarwala. Video frame synthesis using deep voxel flow. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017. 2, 4, 6, 7
+[28] Gucan Long, Laurent Kneip, Jose M Alvarez, Hongdong Li, Xiaohu Zhang, and Qifeng Yu. Learning image matching by simply watching video. In European Conference on Computer Vision, pages 434-450. Springer, 2016. 2, 4, 6, 7
+[29] Dhruv Mahajan, Fu-Chung Huang, Wojciech Matusik, Ravi Ramamoorthi, and Peter Belhumeur. Moving gradients: a path-based method for plausible image interpolation. In ACM Transactions on Graphics (TOG), volume 28, page 42. ACM, 2009. 2
+[30] Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. In International Conference on Learning Representations (ICLR), 2016. 4
+[31] Simone Meyer, Abdelaziz Djelouah, Brian McWilliams, Alexander Sorkine-Hornung, Markus Gross, and Christopher Schroers. Phasenet for video frame interpolation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. 2
+[32] Simone Meyer, Oliver Wang, Henning Zimmer, Max Grosse, and Alexander Sorkine-Hornung. Phase-based frame interpolation for video. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015. 2, 6, 7
+[33] Simon Niklaus and Feng Liu. Context-aware synthesis for video frame interpolation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. 3, 6
+[34] Simon Niklaus, Long Mai, and Feng Liu. Video frame interpolation via adaptive convolution. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017. 1, 2
+[35] Simon Niklaus, Long Mai, and Feng Liu. Video frame interpolation via adaptive separable convolution. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017. 1, 2, 5, 6, 7
+[36] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. In NIPS 2017 Autodiff Workshop: The Future of Gradient-based Machine Learning Software and Techniques, 2017. 5
+[37] F. Perazzi, J. Pont-Tuset, B. McWilliams, L. Van Gool, M. Gross, and A. Sorkine-Hornung. A benchmark dataset and evaluation methodology for video object segmentation. In Computer Vision and Pattern Recognition, 2016. 5
+[38] Fitsum A Reda, Guilin Liu, Kevin J Shih, Robert Kirby, Jon Barker, David Tarjan, Andrew Tao, and Bryan Catanzaro. Sdc-net: Video prediction using spatially-displaced convolution. In Proceedings of the European Conference on Computer Vision (ECCV), pages 718-733, 2018. 2, 5
+[39] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Nassir Navab, Joachim Hornegger, William M. Wells, and Alejandro F. Frangi, editors, Medical Image Computing and
+
+Computer-Assisted Intervention - MICCAI 2015, pages 234-241, Cham, 2015. Springer International Publishing. 4
+[40] Masaki Saito, Eiichi Matsumoto, and Shunta Saito. Temporal generative adversarial nets with singular value clipping. In Proceedings of the IEEE International Conference on Computer Vision, pages 2830-2839, 2017. 6
+[41] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 2
+[42] Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402, 2012. 5
+[43] Nitish Srivastava, Elman Mansimov, and Ruslan Salakhudinov. Unsupervised learning of video representations using lstms. In International conference on machine learning, pages 843-852, 2015. 4
+[44] Deqing Sun, Xiaodong Yang, Ming-Yu Liu, and Jan Kautz. Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. 2
+[45] Zhou Wang, Alan C Bovik, Hamid R Sheikh, Eero P Simoncelli, et al. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600-612, 2004. 5
+[46] Philippe Weinzaepfel, Jerome Revaud, Zaid Harchaoui, and Cordelia Schmid. Deepflow: Large displacement optical flow with deep matching. In The IEEE International Conference on Computer Vision (ICCV), December 2013. 2, 6
+[47] Manuel Werlberger, Thomas Pock, Markus Unger, and Horst Bischof. Optical flow guided TV-L1 video interpolation and restoration. In International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, pages 273–286. Springer, 2011. 2
+[48] Li Xu, Jiaya Jia, and Yasuyuki Matsushita. Motion detail preserving optical flow estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(9):1744-1757, 2012. 6
+[49] Tianfan Xue, Baian Chen, Jiajun Wu, Donglai Wei, and William T Freeman. Video enhancement with task-oriented flow. International Journal of Computer Vision, 127(8):1106-1125, 2019. 2, 5, 6
+[50] Zhefei Yu, Houqiang Li, Zhangyang Wang, Zeng Hu, and Chang Wen Chen. Multi-level video frame interpolation: Exploiting the interaction among different levels. IEEE Transactions on Circuits and Systems for Video Technology, 23(7):1235-1248, 2013. 2
+[51] Jun-Yan Zhu, Philipp Krahenbuhl, Eli Shechtman, and Alexei A Efros. Generative visual manipulation on the natural image manifold. In European Conference on Computer Vision (ECCV), pages 597-613. Springer, 2016. 4
\ No newline at end of file
diff --git a/adacofadaptivecollaborationofflowsforvideoframeinterpolation/images.zip b/adacofadaptivecollaborationofflowsforvideoframeinterpolation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..addc97cdaeb6306eb961567558965ffddf2d5b9e
--- /dev/null
+++ b/adacofadaptivecollaborationofflowsforvideoframeinterpolation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c8dccfd8cc8ec3badfeb455143834a30e1c378a549289f9b8c6dd8c21ed99cd
+size 814625
diff --git a/adacofadaptivecollaborationofflowsforvideoframeinterpolation/layout.json b/adacofadaptivecollaborationofflowsforvideoframeinterpolation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a2d31ba3905c7fc2b9f42c88325de08e2ce12783
--- /dev/null
+++ b/adacofadaptivecollaborationofflowsforvideoframeinterpolation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:65dfad2710c608ad237a56b889c13cc1f927f0e15859a1fa5702b0fec0236f20
+size 543964
diff --git a/adacosegadaptiveshapecosegmentationwithgroupconsistencyloss/e5c9c2c0-16d7-4107-81c8-ede7735c5ed9_content_list.json b/adacosegadaptiveshapecosegmentationwithgroupconsistencyloss/e5c9c2c0-16d7-4107-81c8-ede7735c5ed9_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..101fe5c1e10121abc7b228fc6bd75992220462b2
--- /dev/null
+++ b/adacosegadaptiveshapecosegmentationwithgroupconsistencyloss/e5c9c2c0-16d7-4107-81c8-ede7735c5ed9_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a095283f1893234614402e6b7c556451ffb4c16dd2ccc0c523a935ab37c07330
+size 65869
diff --git a/adacosegadaptiveshapecosegmentationwithgroupconsistencyloss/e5c9c2c0-16d7-4107-81c8-ede7735c5ed9_model.json b/adacosegadaptiveshapecosegmentationwithgroupconsistencyloss/e5c9c2c0-16d7-4107-81c8-ede7735c5ed9_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..71445d2971a426d7281004184109d6da348775fa
--- /dev/null
+++ b/adacosegadaptiveshapecosegmentationwithgroupconsistencyloss/e5c9c2c0-16d7-4107-81c8-ede7735c5ed9_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8fc9708761ac8401f798464f309a570253c56decdb44cb50773de16969384aac
+size 80123
diff --git a/adacosegadaptiveshapecosegmentationwithgroupconsistencyloss/e5c9c2c0-16d7-4107-81c8-ede7735c5ed9_origin.pdf b/adacosegadaptiveshapecosegmentationwithgroupconsistencyloss/e5c9c2c0-16d7-4107-81c8-ede7735c5ed9_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..63e169e3adef252cf88ba3d003c2c34d5bb17260
--- /dev/null
+++ b/adacosegadaptiveshapecosegmentationwithgroupconsistencyloss/e5c9c2c0-16d7-4107-81c8-ede7735c5ed9_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:14eb7409747d2a3a740bf4cf2519dde954dcedb30b9723dbdb1d4d7622e5f4e1
+size 7564982
diff --git a/adacosegadaptiveshapecosegmentationwithgroupconsistencyloss/full.md b/adacosegadaptiveshapecosegmentationwithgroupconsistencyloss/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..5b6c5e8603cba04060fdf6905d9c29165b9b45a4
--- /dev/null
+++ b/adacosegadaptiveshapecosegmentationwithgroupconsistencyloss/full.md
@@ -0,0 +1,253 @@
+# AdaCoSeg: Adaptive Shape Co-Segmentation with Group Consistency Loss
+
+Chenyang Zhu $^{1,2}$ Kai Xu $^{2}$ Siddhartha Chaudhuri $^{3,4}$ Li Yi $^{5}$ Leonidas Guibas $^{6}$ Hao Zhang $^{1}$ $^{1}$ Simon Fraser University $^{2}$ National University of Defense Technology
+ $^{3}$ Adobe Research $^{4}$ IIT Bombay $^{5}$ Google Research $^{6}$ Stanford University
+
+# Abstract
+
+We introduce AdaCoSeg, a deep neural network architecture for adaptive co-segmentation of a set of 3D shapes represented as point clouds. Differently from the familiar single-instance segmentation problem, co-segmentation is intrinsically contextual: how a shape is segmented can vary depending on the set it is in. Hence, our network features an adaptive learning module to produce a consistent shape segmentation which adapts to a set. Specifically, given an input set of unsegmented shapes, we first employ an offline pre-trained part prior network to propose per-shape parts. Then, the co-segmentation network iteratively and jointly optimizes the part labelings across the set subjected to a novel group consistency loss defined by matrix ranks. While the part prior network can be trained with noisy and inconsistently segmented shapes, the final output of AdaCoSeg is a consistent part labeling for the input set, with each shape segmented into up to (a user-specified) $K$ parts. Overall, our method is weakly supervised, producing segmentations tailored to the test set, without consistent ground-truth segmentations. We show qualitative and quantitative results from AdaCoSeg and evaluate it via ablation studies and comparisons to state-of-the-art co-segmentation methods.
+
+# 1. Introduction
+
+With the proliferation of data-driven and deep learning techniques in computer vision and computer graphics, remarkable progress has been made on supervised image [1,3] and shape segmentations [11,33]. Co-segmentation is an instance of the segmentation problem where the input consists of a collection, rather than one piece, of data and the collection shares certain common characteristics. Typically, for shape co-segmentation, the commonality is that the shapes all belong to the same category, e.g., chairs or airplanes.
+
+The goal of co-segmentation is to compute a consistent segmentation for all shapes in the input collection. The consistency of the segmentation implies a correspondence between all the segmented parts, which is a critical requirement for knowledge and attribute transfer, collecting
+
+
+
+
+Figure 1. Our adaptive shape co-segmentation network, AdaCoSeg, produces structurally different segmentations (here up to 4 parts) for two sets of chairs — one with armrests, one without. For each set, the segmentations are semantically consistent, allowing shape generation via part reshuffling. However, the same shape can be segmented differently depending on its containing set (see the circled chair), showing the method's adaptivity.
+
+
+
+
+
+statistics over a dataset, and structure-aware shape modeling [18]. Figure 1 shows such a modeling example based on part reshuffling induced by a co-segmentation.
+
+In contrast to the familiar single-instance segmentation problem, a distinctive feature of co-segmentation is that it is inherently contextual. As dictated by the consistency criterion, the same shape may be segmented differently depending on which input set it belongs to; see Figure 1. From this perspective, the input shape collection serves both as the test set and the training set. Ideally, the co-segmentation network can quickly adapt to a new input set without expensive retraining. Such an adaptive network would change its behavior, i.e., the network weights, at the time it is run. This is different from the traditional label learning paradigm, where the trained model strives to generalize to new inputs without changing the network weights, either under the supervised [11, 20] or weakly supervised settings [5, 19, 26].
+
+In this paper, we introduce a deep neural network for shape co-segmentation, coined AdaCoSeg, which is designed to be adaptive. AdaCoSeg takes as input a set of unsegmented shapes represented as point clouds, proposes per-shape parts in the first stage, and then jointly optimizes the parts subject to a novel group consistency loss defined by matrix rank estimates for the specific input set. The
+
+output is a $K$ -way consistent part labeling for each shape, where $K$ is a user-specified hyperparameter for the network. The network weights are initialized randomly and iteratively optimized via backpropagation based on the group loss.
+
+While the co-segmentation component is unsupervised, guided by the group consistency loss, we found that the results can be improved by adding a weak regularizing prior to boost the part proposal. Specifically, we pre-train a part prior network which takes as input a possibly noisy proposed part, represented by an indicator function over the complete point cloud, and denoises or "snaps" it to a more plausible and clean part. The part prior network is similar to the pairwise potential of a conditional random field (CRF) in traditional segmentation [12]: while it is not a general prior, as it is trained to remove only a small amount of noise, it suffices for boundary optimization. It is trained on a large collection of segmented 3D shapes, e.g., ShapeNet [2], where part counts and part compositions within the same object category can be highly inconsistent. No segment label is necessary: the model is label-agnostic.
+
+Overall, our method is weakly supervised, since it produces consistent segmentations without consistent ground-truth segmentations. It consists of an offline, supervised part prior network, which is trained once on inconsistently segmented, unlabeled shapes, and a "runtime", adaptive co-segmentation network which is unsupervised and executed for each input set of shapes. It is important to note that consistency of the segmentations is not tied to the part count $K$ , but to the geometric and structural features of the shape parts in the set, with $K$ serving as an upper bound for the part counts; see Figure 1. On the other hand, adjusting $K$ allows AdaCoSeg to produce consistent co-segmentations at varying levels of granularity; see Figure 7.
+
+Our part prior network is trained using the dataset from ComplementMe [25]; the adaptive co-segmentation is unsupervised. For evaluation only, we also adopt two datasets [30, 32] containing ground truth co-segmentations. While offline training required up to 20 hours to complete, it takes about 7 minutes to co-segment 20 shapes at a resolution of 2,048 points per shape. We show qualitative and quantitative results from AdaCoSeg and evaluate it through ablation studies and comparisons with state-of-the-art co-segmentation methods. Our main contributions include:
+
+- The first DNN for adaptive shape co-segmentation.
+- A novel and effective group consistency loss based on low-rank approximations.
+- A co-segmentation training framework that needs no ground-truth consistent segmentation labels.
+
+# 2. Related work
+
+Deep learning for shape segmentation. Deep models for supervised shape segmentation have been developed for
+
+
+
+
+Figure 2. AdaCoSeg consists of a part prior network (top) and a co-segmentation network (bottom). The part feature encoder and part prior module in the first network learn a weak regularizing prior to denoise proposed part shapes. The co-segmentation network is trained with a novel group consistency loss, defined on a set of shapes, based on the ranks of part similarity matrices.
+
+various representations, such as voxel grids [21, 29], point clouds [9, 15, 20], multi-view projections [11], and surface meshes [28, 33]. The key is to replace hand-crafted features employed in traditional methods by features learned from data. However, these models are mostly trained to target a fixed set of semantic labels. The resulting segmentation for a given shape is also fixed and cannot be adaptive to the context of a shape set, a key feature of co-segmentation. Relatively few works study deep learning for unsupervised shape segmentation [5, 23].
+
+Image co-segmentation. The co-segmentation of a pair or a group of 2D images has been studied for many years in the field of computer vision, where the main goal is to segment out a common object from multiple images [27]. Most works formulate this problem as a multi-image Markov Random Field (MRF), with a foreground consistency constraint. Recently, Li et al. [16] proposed a deep Siamese network to achieve object co-extraction from a pair of images. The general problem setting for all of these image co-segmentation works is significantly different from ours.
+
+Shape co-segmentation. Extensive research has been devoted to the co-analysis of sets of shapes [6, 7, 8, 24, 30, 31]. These methods often start with an over-segmentation and perform feature embedding and clustering of the over-segmented patches to obtain a consistent segmentation. While most of these methods are unsupervised, their analysis pipelines all adopt hand-crafted features and heuristic-based clustering, often leading to unnatural results amid complex part or structure variations.
+
+Recently, deep learning-based approaches have been emerging. Shu et al. [23] use deep auto-encoders for per-part feature learning. However, their co-segmentation module does not use a deep network, and it strictly constrains the final
+
+segmentations to parts learned in the first stage. In contrast, AdaCoSeg does not strictly adhere to proposals by the part prior network, as the consistency loss can impact and adjust the part labeling. Muralikrishnan et al. [19] propose a weakly-supervised method for tag-driven 3D shape co-segmentation, but their model is trained to target a predefined label set. Sung et al. [26] attempt to relate a set of shapes with deep functional dictionaries, resulting in a co-segmentation. However, these dictionaries are learned offline, for individual shapes, so their model cannot adaptively co-segment a set of shapes. In contrast, AdaCoSeg is split into an offline part prior network, which is transferable across different shape sets, and an online, adaptive co-segmentation network, which is learned for a specific input set.
+
+In concurrent work, Chen et al. [5] present a branched autoencoder for weakly supervised shape co-segmentation. The key difference is that BAE-NET is essentially a more advanced part prior network, with each branch tasked to learn a simple representation for one universal part of an input shape collection; there is no explicit optimization for group consistency. As a result, BAE-NET tends to underperform compared to AdaCoSeg on small input sets and in the presence of large part discrepancies; see Figure 11.
+
+# 3. Overview
+
+Our method works with point-set 3D shapes and formulates shape segmentation as a point labeling problem. The network has a two-stage architecture; see Figure 2.
+
+Part prior network. The network takes as input a point cloud with noisy binary labeling, where the foreground represents an imperfect part, and outputs a regularized labeling leading to a refined part. To train the network, we employ the ComplementMe dataset [25], a subset of ShapeNet [2], which provides semantic part segmentation. The 3D shapes are point sampled, with each shape part implying a binary labeling. For each binary labeling, some random noise is added; the part prior network is trained to denoise these binary labelings. Essentially, the part prior network learns what a valid part looks like through training on a labeling denoising task. Meanwhile, it also learns a multi-scale and part-aware shape feature at each point, which can be used later in the co-segmentation network.
+
+Co-segmentation network. Given an input set of 3D shapes represented by point clouds, our co-segmentation network learns the optimal network weights through backpropagation based on a group consistency loss defined over the input set. The network outputs a $K$ -way labeling for each shape, with semantic consistency, where $K$ is a user prescribed network parameter specifying an upper bound of part counts; the final part counts are determined based on the input shape set and network optimization.
+
+The co-segmentation network is unsupervised, without any ground-truth consistent segmentations. For each part generated by the $K$ -way classification, a binary segmentation is formed and fed into the pre-trained part prior network: (1) to compute a refined $K$ -part segmentation, and (2) to extract a part-aware feature for each point. These together form a part feature for each segment. The corresponding part features with the same label for all shapes in the set constitute a part feature matrix. Then, weights of the co-segmentation network are optimized with the objective to maximize the part feature similarity within one label and minimize the similarity across different labels. This amounts to minimizing the rank of the part feature matrix for each semantic label while maximizing the rank of the joint part feature matrix for two semantic labels.
+
+# 4. Method
+
+The offline stage of AdaCoSeg learns a weak regularizing prior for plausible shape parts, where a part prior network is trained on a large, diverse shape repository with generally inconsistent, unlabeled segmentations. The network serves to refine any proposed parts to better resemble observed ones. The runtime stage jointly analyzes a set of test shapes using a co-segmentation network that iteratively proposes (at most) $K$ -way segmentations of each shape to optimize a group consistency score over the test set.
+
+# 4.1. Part Prior Network
+
+Dataset. In offline pre-training, we want to learn a general model to denoise all plausible part shapes at all granularities, using off-the-shelf data available in large quantities. This weak prior will be used to regularize any consistent segmentation of test shapes. Repositories with standard labeled segmentations [30, 32] are both limited in size and fixed at single pre-decided granularities. Instead, we use the 3D part dataset developed for ComplementMe [25].
+
+This dataset, a subset of ShapeNet [2], exploits the fact that shapes in existing 3D repositories already have basic component structure, since artists designed them modularly. However, the segmentations are inconsistent: while a chair back may be an isolated part in one shape, the back and seat may be combined into a single part in another. ComplementMe does some basic heuristic-based merging of adjacent parts to eliminate very small parts from the collection, but otherwise leaves noisy part structures untouched. Further, the parts lack labels - while some tags may be present in the input shapes, we ignore them since the text is generally inconsistent and often semantically meaningless. Hence, this dataset is an excellent example of the weakly-supervised training data we can expect in a real-life situation. Our method trains a denoising prior on this noisy dataset, which will be used to refine consistent segmentations proposed in our co-segmentation stage.
+
+
+Figure 3. The architecture of the part prior network. The network encodes a shape with noisy part labeling and the whole shape, using the MSG and MRG feature encoders from PointNet++ [20], respectively. It is trained to denoise the input binary labeling and output a clean labeling, indicating a plausible part.
+
+Network architecture. The part prior network learns to denoise an imperfectly segmented part, using an architecture based on components from PointNet++ [20]. The input to the network is a 3D point cloud shape $S$ . Points belonging to the proposed part constitute the foreground $F \subset S$ while the remaining points are the background $B = S \setminus F$ . The output of the network is a probability for each point $q \in S$ , such that the high probability points collectively define the ideal, "clean" part that best matches the proposed part, thereby denoising the noisy foreground.
+
+The architecture of our network is shown in Figure 3. The point cloud is processed by the multi-scale grouping (MSG) and multi-resolution grouping (MRG) modules of PointNet++, to produce two context-sensitive 128-D feature vectors $f_{\mathrm{MSG}}(q)$ and $f_{\mathrm{MRG}}(q)$ for each point $q \in S$ . The MSG module captures the context of a point at multiple scales, by concatenating features over larger and larger neighborhoods. The MRG module computes a similar multi-scale feature, but (half of) the features of a large neighborhood are computed recursively, from the features of the next smaller neighborhood; see [20] for details.
+
+We average the MSG features of foreground points to obtain a robust descriptor $f_{\mathrm{fg}}$ , which is concatenated with the MRG feature of each point to produce $[f_{\mathrm{MRG}}(q), f_{\mathrm{fg}}]$ pairs. The pairs are fed to a binary classifier with ReLU activation, where the output of the classifier indicates the "cleaned" foreground and background.
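+
+A schematic forward pass of this design is sketched below; `msg_encoder`, `mrg_encoder`, and the classifier sizes are placeholders standing in for the PointNet++ modules and the point-wise binary MLP, not the authors' implementation.
+
+```python
+import torch
+import torch.nn as nn
+
+class PartPriorNet(nn.Module):
+    def __init__(self, msg_encoder, mrg_encoder, feat_dim=128):
+        super().__init__()
+        self.msg_encoder = msg_encoder    # per-point multi-scale grouping (MSG) features
+        self.mrg_encoder = mrg_encoder    # per-point multi-resolution grouping (MRG) features
+        self.classifier = nn.Sequential(  # point-wise binary foreground/background classifier
+            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
+            nn.Linear(feat_dim, 2),
+        )
+
+    def forward(self, points, fg_mask):
+        """points: (N, 3) point cloud; fg_mask: (N,) noisy 0/1 part indicator."""
+        f_msg = self.msg_encoder(points)              # (N, feat_dim)
+        f_mrg = self.mrg_encoder(points)              # (N, feat_dim)
+        f_fg = f_msg[fg_mask.bool()].mean(dim=0)      # average MSG feature of the foreground
+        f_fg = f_fg.expand(points.shape[0], -1)       # broadcast to every point
+        return self.classifier(torch.cat([f_mrg, f_fg], dim=1))  # (N, 2) cleaned fg/bg logits
+```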
+
+Training. The part prior network is trained with single parts from the inconsistently segmented dataset. We add noise to each part (foreground) by randomly inserting some background points and excluding some foreground points ( $\sim 20 - 30\%$ ). The network takes noisy parts as input and tries to output clean part indicator functions, using a negative log-likelihood loss and Adam [14] optimizer.
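+
+The label corruption used for this denoising task could look like the sketch below; the exact sampling scheme is our assumption, with the 20-30% rate taken from the text.
+
+```python
+import torch
+
+def corrupt_part_mask(fg_mask, flip_frac=0.25):
+    """Randomly exclude foreground points and insert background points."""
+    fg_idx = torch.nonzero(fg_mask, as_tuple=False).squeeze(1)
+    bg_idx = torch.nonzero(fg_mask == 0, as_tuple=False).squeeze(1)
+    n_flip = int(flip_frac * fg_idx.numel())
+    noisy = fg_mask.clone()
+    noisy[fg_idx[torch.randperm(fg_idx.numel())[:n_flip]]] = 0   # drop some foreground points
+    noisy[bg_idx[torch.randperm(bg_idx.numel())[:n_flip]]] = 1   # add some background points
+    return noisy
+```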
+
+# 4.2. Co-segmentation Network
+
+The runtime stage of our pipeline jointly segments a set of unsegmented test shapes $T = \{S_{1}, S_{2}, \ldots, S_{N}\}$ to maximize consistency between the segmented parts. To this end,
+
+we design a deep neural network that takes a shape's point cloud as input and outputs a $K$ -way segmentation; $K$ is a user-specified hyperparameter specifying the part count. These outputs are compared across the test set to ensure geometric consistency of corresponding segments: our quantitative metric for this is a group consistency energy, which is used as a loss function to iteratively refine the output of the network using back-propagation.
+
+Note that although we use a deep network to output per-shape segmentation maps, the trained network is not expected to generalize to new shape sets. Hence, the network performs essentially an unsupervised $K$-way clustering of the input points across all test shapes. Apart from the consistency loss, the network is guided by the offline prior that has learned to denoise plausible parts of various sizes, but has no notion of consistency or desired granularity.
+
+Network architecture. Our co-segmentation architecture is shown in Figure 4. The network takes a minibatch of test shapes as input. The first part of the network is a classifier that independently assigns one of $K$ abstract labels $\{L_1, L_2, \dots, L_K\}$ to each point in each shape, with shared weights: the set of points in a shape with label $L_i$ defines a single part with that label. Since the classifier output may be noisy, we pass the binary foreground/background map corresponding to each such part through the pre-trained (and frozen) offline denoising network (Section 4.1) and then recompose these maps into a $K$ -way map using a $K$ -way softmax at each point to resolve overlaps. The recomposed output is the final (eventually consistent) segmentation.
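+
+A sketch of this propose-denoise-recompose step, where `kway_classifier` is the trainable point classifier and `part_prior` is the frozen denoising network from Section 4.1, both treated here as black boxes returning per-point scores; the names are placeholders.
+
+```python
+import torch
+
+def propose_and_refine(points, kway_classifier, part_prior, K):
+    """K-way segmentation of one shape, refined by the frozen part prior."""
+    probs = kway_classifier(points)              # (N, K) per-point label scores
+    refined = []
+    for k in range(K):
+        fg = probs[:, k]                         # treat label L_k as the foreground map
+        refined.append(part_prior(points, fg))   # (N,) denoised foreground score for L_k
+    refined = torch.stack(refined, dim=1)        # (N, K)
+    return torch.softmax(refined, dim=1)         # resolve overlaps into the final K-way map
+```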
+
+The subsequent stages of the network are deterministic and have no trainable parameters: they are used to compute the group consistency energy. First, the MSG features [20] of the foreground points for each part are max-pooled to yield a part descriptor (we found max pooling to work better than average pooling). If the segmentation is consistent across shapes, all parts with a given label $L_{i}$ should have similar descriptors. Therefore, we stack the descriptors for all parts with this label from all shapes in a matrix $M_{i}$ , one per row, and try to minimize its second singular value, a proxy for its rank (low rank = more consistent). Also, parts with different labels should be distinct, so the union of the rows of matrices $M_{i}$ and $M_{j\neq i}$ should have high rank. This time, we want to maximize the second singular value of $\text{concat}(M_i,M_j)$ , where the concat function constructs a new matrix with the union of the rows of its inputs. The overall energy function is:
+
+$$
+\mathcal{E}_{\text{coseg}} = 1 + \max_{i \in \{1, 2, \ldots, K\}} \operatorname{rank}(M_{i}) - \min_{\substack{i, j \in \{1, 2, \ldots, K\} \\ i \neq j}} \operatorname{rank}\left(\operatorname{concat}\left(M_{i}, M_{j}\right)\right),
+$$
+
+where the rank function is the second singular value, computed by a (rather expensive) SVD [34].
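+
+A minimal PyTorch sketch of this energy, using the second-largest singular value as the continuous rank proxy; `part_feats[i]` is assumed to be the matrix $M_i$ with one part descriptor per row.
+
+```python
+import torch
+
+def second_singular_value(M):
+    """Continuous rank proxy: the second-largest singular value of M."""
+    s = torch.linalg.svdvals(M)          # singular values in descending order
+    return s[1]
+
+def group_consistency_energy(part_feats):
+    """part_feats: list of K matrices M_i, each of shape (num_shapes, feat_dim)."""
+    K = len(part_feats)
+    within = max(second_singular_value(M) for M in part_feats)      # same-label spread (minimize)
+    across = min(second_singular_value(torch.cat([part_feats[i], part_feats[j]], dim=0))
+                 for i in range(K) for j in range(K) if i != j)     # cross-label spread (maximize)
+    return 1.0 + within - across
+```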
+
+
+Figure 4. Left: Given an input point cloud, the $K$ -way classifier segments it into $K$ parts. These parts are then refined by the part prior module, resulting in a refined $K$ -way segmentation of the input point cloud. After that, the part feature encoder is used to extract features for each refined part. Right: Given a set of input point clouds, we construct a part similarity matrix for each abstract part label, based on the part features extracted for all shapes.
+
+As this energy is optimized by gradient descent, the initial layers of the network learn to propose more and more consistent segmentations across the test dataset. Additionally, we found that gaps between segments of a shape appeared frequently and noticeably before re-composition, and were resolved arbitrarily with the subsequent softmax. Hence, we add a second energy term that penalizes such gaps; see more details in the supplementary material.
+
+Because the co-segmentation network has no access to ground truth and relies only on a weak geometry denoising prior, the consistency energy is the principal high-level influence on the final segmentation. We experimented with different ways to define this energy, and settled on SVD-based rank approximation as the best one. Note that the SVD operation makes this a technically non-decomposable loss, which usually needs special care to optimize [13]. However, consistency is in general a transitive property (even though its converse, inconsistency, is not). Hence, enforcing consistency over each of several overlapping batches is sufficient to ensure consistency over their union, and we can refine the segmentation maps iteratively using standard stochastic gradient descent.
+
+# 5. Results and Evaluations
+
+We validate the two stages of AdaCoSeg through qualitative and quantitative evaluation, and compare to state-of-the-art methods. We train our part prior network on the shape part dataset from ComplementMe [25], which is a subset of ShapeNet [2], and test our method on the ShapeNet [32] and COSEG [30] semantic part datasets. We also manually labeled some small groups (6-12 shapes per group) of shapes from ShapeNet [32] to form a co-segmentation benchmark for quantitative evaluation.
+
+
+Figure 5. High degrees of inconsistencies exist in the shape segmentations available in the ComplementMe dataset [25]. The left figure charts the distribution of part counts in each object category, showing their diversity. The right figure shows several shapes, within the same category and having the same part counts (3 parts for airplanes, 4 parts for chairs), that exhibit much structural and geometric variation in their segmentations.
+
+Table 1. Dataset for training the part prior network. For each category, we list the shape count (#S) and part count (#P).
+
+| | Airplane | Bicycle | Car | Chair | Lamp | Table |
+| --- | --- | --- | --- | --- | --- | --- |
+| #S | 2,410 | 49 | 976 | 2,096 | 862 | 1,976 |
+| #P | 9,134 | 299 | 5,119 | 9,433 | 3,296 | 6,608 |
+
+Discriminative power of matrix ranks. Our network design makes a low-rank assumption for the features of corresponding shape parts: the MSG feature vectors of similar parts form a low-rank matrix, while those of dissimilar parts form a higher-rank matrix, where rank is estimated in a continuous way as the magnitude of the second singular value. To show that matrix ranks provide a discriminative metric, we use the ShapeNet semantic part dataset [32], which has a consistent label for each part, as test data. The chair category of this dataset has four labels: back, seat, arm and leg. From each of the 14 $(= \binom{4}{1} + \binom{4}{2} + \binom{4}{3})$ non-empty
+
+
+Figure 6. Number of distinct labels in a collection of parts (Y axis) vs increasing feature variation for that collection (X axis). The plot on the right uses the more discriminative matrix rank-based score, whereas the plot on the left uses MSE which cannot tell 2 and 3-label collections apart.
+
+
+
+proper subsets of labels, we randomly sample a collection of 200 labeled parts. Our hypothesis is that matrix rank should make it easy to distinguish between collections with few distinct labels, and collections with many distinct labels. Figure 6 (right) plots the number of distinct labels in the part collection, vs increasing rank estimates. As we can see, all part collections with a single label have a lower score than those with two labels, which in turn are all lower than those with 3 labels. In contrast, a naive variance metric such as mean squared error, as shown in Figure 6 (left), cannot correctly discriminate between part collections with 2 and 3 labels. We conclude that our rank-based metric accurately reflects consistency of a part collection.
+
+Control, adaptivity, and generalization. AdaCoSeg is not strongly supervised with consistently segmented and labeled training data, unlike most prior deep networks for shape segmentation. Instead, the weakly-supervised part prior allows a fair amount of input-dependent flexibility in what the actual co-segmentation looks like.
+
+First, we can generate test set segmentations with different granularities, controlled by the cardinality bound $K$ . Figure 7 shows co-segmentation of the same shapes for different values of $K$ . In these examples, our method fortuitously produces coarse-to-fine part hierarchies. However, this nesting structure is not guaranteed by the method, and we leave this as future work.
+
+Further, even for a fixed $K$, different test shape collections can induce different co-segmentations. Figure 1 shows co-segmentations of two different chair collections, both with $K = 4$. The collection on the left has several chairs with arms: hence, the optimization detects arms as one of the prominent parts and groups all chair legs into a single segment. The other collection has no arms, so the four part types are assigned to the back, seat, front legs, and back legs.
+
+Quantitative evaluation. Since AdaCoSeg produces segmentations with varying granularity, it is difficult to compare its results to a fixed ground truth segmentation, e.g., [30]. We adopt the following strategy. First, we set $K$ to be the total number of ground truth labels for a shape category. Second, after segmentation, we manually map our abstract labels $\{L_1, L_2, \ldots, L_K\}$ to the semantic labels
+
+
+Figure 7. Coarse-to-fine co-segmentations of the same input shapes, generated by setting $K = 2,3,4$ . The actual part count discovered per shape is adaptively selected and need not be exactly $K$ , as shown in the examples bounded in red.
+
+(arm, back, wing, etc.) present in the ground truth, using visual inspection of a few example shapes (this step could be automated, but it would not affect the overall argument). Now we can apply the standard Rand Index metric [4] for segmentation accuracy:
+
+$$
+RI = 1 - \binom{N}{2}^{-1} \sum_{i < j} \left(C_{ij} P_{ij} + \left(1 - C_{ij}\right)\left(1 - P_{ij}\right)\right)
+$$
+
+where $i, j$ are different points of the input point cloud. $C_{ij} = 1$ iff $i$ and $j$ have the same predicted label, and $P_{ij} = 1$ iff they have the same ground truth label. A lower Rand Index implies a better match with the ground truth. Note that the main advantage of RI over IOU is that it computes segmentation overlap without needing segment correspondence. This makes it particularly suited for evaluating co-segmentation where the focus is on segmentation consistency without knowing part labeling or correspondence.
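+
+A direct sketch of this Rand Index computation from per-point label arrays; `pred` and `gt` are assumed to be integer labels of length $N$ (memory is $O(N^2)$, which is manageable at 2,048 points per shape).
+
+```python
+import numpy as np
+
+def rand_index(pred, gt):
+    """Pairwise label disagreement; lower means a better match with the ground truth."""
+    pred = np.asarray(pred).reshape(-1, 1)
+    gt = np.asarray(gt).reshape(-1, 1)
+    C = pred == pred.T                         # C_ij: same predicted label
+    P = gt == gt.T                             # P_ij: same ground-truth label
+    iu = np.triu_indices(pred.shape[0], k=1)   # all pairs i < j
+    agree = (C[iu] & P[iu]) | (~C[iu] & ~P[iu])
+    return 1.0 - agree.mean()
+```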
+
+In Table 2, we compare the Rand Index scores of our method vs prior work [7,23,24]. Since our method trains category-specific weak priors by default, we evaluate on those categories of COSEG that are also present in the ComplementMe component dataset. Our method works natively with point clouds, whereas the three prior methods all have access to the original mesh data. Even so, we demonstrate the greatest overall accuracy (lowest RI).
+
+To demonstrate that AdaCoSeg does not rely on the initial training segmentations for the part prior network, we present a quantitative consistency evaluation between the initial segmentations and our co-segmentation results on a subset of our training data; the ground truth of this evaluation is labeled by experts. Table 3 shows that AdaCoSeg
+
+
+Figure 8. A gallery of co-segmentation results obtained by AdaCoSeg, for all the six object categories from the ComplementMe dataset. The input sets vary in size from 7 to 10. More results can be found in the supplementary material.
+
+
+Figure 9. Co-segmentation results obtained by AdaCoSeg when using inconsistent training data. First and third rows show segmentations from the training data. Second and fourth rows show the co-segmentation results obtained by our network.
+
+can even improve the segmentation quality of its own training data. Figure 9 demonstrates a significant improvement by our co-segmentation over the noisy training data. More results can be found in supplemental material.
+
+Ablation study. We explore the effect of our design choices via several ablation studies and show some results in Figure 10. These design choices include:
+
+- No part prior: Remove the part prior network and connect the $K$ -way classifier directly to the point feature encoder.
+
+- No de-noise: No random noise is added when training our part prior network.
+- No segmentation completeness loss: Optimize Ada-CoSeg by using only the group consistency loss.
+- No contrastive term in group consistency loss: Only keep the second term in our loss function.
+- MSG vs. MRG for part feature encoder: Using MRG instead of MSG for encoding each shape part.
+
+We found that the loss cannot decrease significantly without the part prior module and the contrastive term during training. Refer to the supplemental material for visual segmentation results without the part prior. Further, denoising is also important for training our co-segmentation network. Finally, we found that the MSG feature for the part encoder, which focuses more on local than global contexts, achieves better performance than MRG in our task.
+
+Comparison to BAE-NET. Figure 11 visually compares AdaCoSeg with one-shot learning of BAE-NET [5] using one perfect exemplar, on a small test set of 9 chairs; more comparison results can be found in the supplementary material. Both methods can be regarded as weakly supervised but with different supervision strategies. Our experiments show that with explicit optimization adapted to input sets, using the group consistency loss, AdaCoSeg generally outperforms BAE-NET over small test sets and in the presence of strong part discrepancies.
+
+| Category | AdaCoSeg | Shu | Hu | Sidi |
+| --- | --- | --- | --- | --- |
+| Chair | 0.055 | 0.076 | 0.121 | 0.135 |
+| Lamp | 0.059 | 0.069 | 0.103 | 0.092 |
+| Vase | 0.189 | 0.198 | 0.230 | 0.102 |
+| Guitar | 0.032 | 0.041 | 0.037 | 0.081 |
+
+Table 2. Rand Index scores for AdaCoSeg vs. prior works. With the exception of the vases, AdaCoSeg performs the best. The hand-crafted features from Sidi et al. [24] prove to be best suited to the vase category.
+
+| | Chair | Table | Bicycle | Lamp | Car | Plane |
+| --- | --- | --- | --- | --- | --- | --- |
+| GT | 0.21 | 0.27 | 0.31 | 0.18 | 0.38 | 0.24 |
+| Ours | 0.09 | 0.14 | 0.22 | 0.16 | 0.27 | 0.13 |
+
+Table 3. Rand Index score comparison between segmentations in training data (GT) and AdaCoSeg results. AdaCoSeg improves consistency even in its own training data. Visual results can be found in supplemental material.
+
+
+Figure 10. Training rank loss for ablation study on significant features. See supplemental material for more evaluation.
+
+
+Figure 11. Comparing AdaCoSeg with BAE-NET on a small test set. AdaCoSeg, without needing any exemplars, leads to improved accuracy over BAE-NET with one exemplar.
+
+
+
+# 6. Conclusion, limitation, and future work
+
+We present AdaCoSeg, an adaptive deep learning framework for shape co-segmentation. A novel feature of our method is that beyond offline training of the part prior network, the online co-segmentation network is adaptive to the input set of shapes, producing a consistent co-segmentation by iteratively minimizing a group consistency loss via backpropagation over a deep network. Experiments demonstrate the robustness of AdaCoSeg to large degrees of geometric and structural variations in the input sets, where it is superior to the state of the art.
+
+No ground-truth consistent co-segmentations are needed to train AdaCoSeg. The offline and online stages are trained on different datasets and for different tasks. The only supervision is at the first stage, to denoise part proposals on an individual shape basis, where the training can be carried out using existing datasets composed of inconsistent segmentations, e.g., [25]. The second stage optimizes a consistent segmentation on a specific test set, with the part prior as a regularizer. Our two-stage pipeline conserves computation by training the weak prior only once and reusing it across different co-segmentation tasks.
+
+We reiterate that our online co-segmentation network does not generalize to new inputs, which is by design: the network weights are derived to minimize the loss function for the current input set and they are recomputed for each new set. Also, AdaCoSeg is not trained end-to-end. While an end-to-end deep co-segmentation network is desirable, the challenges of developing such networks for an unsupervised problem are well known [17]. Another limitation is that our part prior network is not trained across different object categories. This would have been ideal, but per-category training is typical for most existing segmentation models [9, 15, 21, 29]. Our current network appears capable of handling some intra-category variations, but learning parts and their feature descriptions with all categories mixed together is significantly more challenging.
+
+In future work, we plan to extend our weakly supervised learning framework for cross-category part learning. We would also like to explore co-segmentation via online learning, which represents a family of machine learning algorithms that learn to update models incrementally from sequentially input data streams [10, 22]. In contrast, our current co-segmentation network does not really learn a generalizable model, and the learned network weights cannot be continuously updated as new shapes come in. An online learned model for unsupervised co-segmentation may need to create and maintain multiple segmentation templates.
+
+# Acknowledgement
+
+We thank all the anonymous reviewers for their valuable comments and suggestions. This work was supported in part by an NSERC grant (No. 611370), NSFC (61572507, 61532003, 61622212), NUDT Research Grants (No. ZK19-30), National Key Research and Development Program of China (No. 2018AAA0102200), NSF grants CHS-1528025 and IIS-1763268, a Vannevar Bush Faculty Fellowship, a grant from the Dassault Foundation, a Natural Science Foundation grant for Distinguished Young Scientists (2017JJ1002) from the Hunan Province, and gift funds from Adobe.
+
+# References
+
+[1] Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. TPAMI, 39(12):2481-2495, 2017. 1
+[2] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. ShapeNet: An information-rich 3D model repository. arXiv preprint arXiv:1512.03012, 2015. 2, 3, 5
+[3] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. CoRR, abs/1606.00915, 2016. 1
+[4] Xiaobai Chen, Aleksey Golovinskiy, and Thomas Funkhouser. A benchmark for 3D mesh segmentation. In Trans. Graph., volume 28, 2009. 6
+[5] Zhiqin Chen, Kangxue Yin, Matt Fisher, Siddhartha Chaudhuri, and Hao Zhang. BAE-NET: Branched autoencoder for shape co-segmentation. In ICCV, 2019. 1, 2, 3, 7
+[6] Aleksey Golovinskiy and Thomas Funkhouser. Consistent segmentation of 3D models. Computers & Graphics, 33(3):262-269, 2009. 2
+[7] Ruizhen Hu, Lubin Fan, and Ligang Liu. Co-segmentation of 3D shapes via subspace clustering. Computer Graphics Forum, 31(5):1703-1713, 2012. 2, 6
+[8] Qixing Huang, Vladlen Koltun, and Leonidas Guibas. Joint shape segmentation with linear programming. Trans. Graph., 30(6), 2011. 2
+[9] Qiangui Huang, Weiyue Wang, and Ulrich Neumann. Recurrent slice networks for 3D segmentation of point clouds. In CVPR, 2018. 2, 8
+[10] Rong Jin, Steven CH Hoi, and Tianbao Yang. Online multiple kernel learning: Algorithms and mistake bounds. In Int'l Conf. on Algorithmic Learning Theory, 2010. 8
+[11] Evangelos Kalogerakis, Melinos Averkiou, Subhransu Maji, and Siddhartha Chaudhuri. 3D shape segmentation with projective convolutional networks. In CVPR, 2017. 1, 2
+[12] Evangelos Kalogerakis, Aaron Hertzmann, and Karan Singh. Learning 3D mesh segmentation and labeling. Trans. Graph. (SIGGRAPH), 29(3), 2010. 2
+[13] Purushottam Kar, Harikrishna Narasimhan, and Prateek Jain. Online and stochastic gradient methods for non-decomposable loss functions. In NeurIPS, 2014. 5
+
+[14] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. 4
+[15] Roman Klokov and Victor Lempitsky. Escape from cells: Deep kd-networks for the recognition of 3D point cloud models. In ICCV, 2017. 2, 8
+[16] Weihao Li, Omid Hosseini Jafari, and Carsten Rother. Deep object co-segmentation. In ACCV, 2018. 2
+[17] Francesco Locatello, Stefan Bauer, Mario Lucic, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. In ICML, 2019. 8
+[18] Niloy Mitra, Michael Wand, Hao Richard Zhang, Daniel Cohen-Or, Vladimir Kim, and Qi-Xing Huang. Structure-aware shape processing. In SIGGRAPH Asia 2013 Courses, 2013. 1
+[19] Sanjeev Muralikrishnan, Vladimir G Kim, and Siddhartha Chaudhuri. Tags2Parts: Discovering semantic regions from shape tags. In CVPR, 2018. 1, 3
+[20] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In NeurIPS, 2017. 1, 2, 4
+[21] Gernot Riegler, Ali Osman Ulusoy, and Andreas Geiger. OctNet: Learning deep 3D representations at high resolutions. In CVPR, 2017. 2, 8
+[22] Shai Shalev-Shwartz and Yoram Singer. Online learning: Theory, algorithms, and applications. 2007. 8
+[23] Zhenyu Shu, Chengwu Qi, Shiqing Xin, Chao Hu, Li Wang, Yu Zhang, and Ligang Liu. Unsupervised 3D shape segmentation and co-segmentation via deep learning. Computer Aided Geometric Design, 43:39-52, 2016. 2, 6
+[24] Oana Sidi, Oliver van Kaick, Yanir Kleiman, Hao Zhang, and Daniel Cohen-Or. Unsupervised co-segmentation of a set of shapes via descriptor-space spectral clustering. Trans. Graph. (SIGGRAPH Asia), 30(6), 2011. 2, 6, 8
+[25] Minhyuk Sung, Hao Su, Vladimir G. Kim, Siddhartha Chaudhuri, and Leonidas Guibas. ComplementMe: Weakly-supervised component suggestions for 3D modeling. Trans. Graph. (SIGGRAPH Asia), 2017. 2, 3, 5, 8
+[26] Minhyuk Sung, Hao Su, Ronald Yu, and Leonidas Guibas. Deep functional dictionaries: Learning consistent semantic structures on 3D models from functions. In NeurIPS, 2018. 1, 3
+[27] Sara Vicente, Carsten Rother, and Vladimir Kolmogorov. Object cosegmentation. In CVPR, 2011. 2
+
+[28] Pengyu Wang, Yuan Gan, Panpan Shui, Fenggen Yu, Yan Zhang, Songle Chen, and Zhengxing Sun. 3D shape segmentation via shape fully convolutional networks. Computers & Graphics, 70:128-139, 2018. 2
+[29] Peng-Shuai Wang, Yang Liu, Yu-Xiao Guo, Chun-Yu Sun, and Xin Tong. O-CNN: Octree-based convolutional neural networks for 3D shape analysis. ACM Transactions on Graphics, 36(4), 2017. 2, 8
+[30] Yunhai Wang, Shmulik Asafi, Oliver Van Kaick, Hao Zhang, Daniel Cohen-Or, and Baoquan Chen. Active co-analysis of a set of shapes. Trans. Graph. (SIGGRAPH Asia), 31(6), 2012. 2, 3, 5, 6
+[31] Kai Xu, Honghua Li, Hao Zhang, Daniel Cohen-Or, Yueshan Xiong, and Zhi-Quan Cheng. Style-content separation by anisotropic part scales. Trans. Graph. (SIGGRAPH Asia), 29(6), 2010. 2
+[32] Li Yi, Vladimir G Kim, Duygu Ceylan, I Shen, Mengyan Yan, Hao Su, Cewu Lu, Qixing Huang, Alla Sheffer, Leonidas Guibas, et al. A scalable active framework for region annotation in 3D shape collections. Trans. Graph. (SIGGRAPH Asia), 35(6), 2016. 2, 3, 5
+[33] Li Yi, Hao Su, Xingwen Guo, and Leonidas J Guibas. SyncSpecCNN: Synchronized spectral CNN for 3D shape segmentation. In CVPR, 2017. 1, 2
+[34] Renjiao Yi, Chenyang Zhu, Ping Tan, and Stephen Lin. Faces as lighting probes via unsupervised deep highlight extraction. In ECCV, 2018. 4
\ No newline at end of file
diff --git a/adacosegadaptiveshapecosegmentationwithgroupconsistencyloss/images.zip b/adacosegadaptiveshapecosegmentationwithgroupconsistencyloss/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..23a0c6f7fee858711d1d197e0f7b97281ee6db2c
--- /dev/null
+++ b/adacosegadaptiveshapecosegmentationwithgroupconsistencyloss/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:360e093e62f55ea3f4097aef136c83fdb1b97b9d3ff2cc1267b1952aeabfaed2
+size 648342
diff --git a/adacosegadaptiveshapecosegmentationwithgroupconsistencyloss/layout.json b/adacosegadaptiveshapecosegmentationwithgroupconsistencyloss/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c8435ded327363d5e34e4bb1471ade31da8b5f24
--- /dev/null
+++ b/adacosegadaptiveshapecosegmentationwithgroupconsistencyloss/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:48b9dcf98dd763edd439628fa05636729ec62ac1842162827f5abdfff33eaec0
+size 312375
diff --git a/adaptivedilatednetworkwithselfcorrectionsupervisionforcounting/4094ae63-9167-497a-bd46-74d2c09b77c0_content_list.json b/adaptivedilatednetworkwithselfcorrectionsupervisionforcounting/4094ae63-9167-497a-bd46-74d2c09b77c0_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..383abbbeb31387b62490bb507e575ecf0a87b875
--- /dev/null
+++ b/adaptivedilatednetworkwithselfcorrectionsupervisionforcounting/4094ae63-9167-497a-bd46-74d2c09b77c0_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:13c33c55528935d54fcd963be7f440c0f4f1cad1d5b59858abfe43e5a9cea87b
+size 79049
diff --git a/adaptivedilatednetworkwithselfcorrectionsupervisionforcounting/4094ae63-9167-497a-bd46-74d2c09b77c0_model.json b/adaptivedilatednetworkwithselfcorrectionsupervisionforcounting/4094ae63-9167-497a-bd46-74d2c09b77c0_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..06d9ae7c7f117c79c89a4fc68d443375370ede60
--- /dev/null
+++ b/adaptivedilatednetworkwithselfcorrectionsupervisionforcounting/4094ae63-9167-497a-bd46-74d2c09b77c0_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:95e7a687c96ae96e2a2eef1d46d1d7f758f6a9696102612956be184ed2e54367
+size 97807
diff --git a/adaptivedilatednetworkwithselfcorrectionsupervisionforcounting/4094ae63-9167-497a-bd46-74d2c09b77c0_origin.pdf b/adaptivedilatednetworkwithselfcorrectionsupervisionforcounting/4094ae63-9167-497a-bd46-74d2c09b77c0_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..cfff9b0fa6d1bc4e08771660a9f8769f574bd7f0
--- /dev/null
+++ b/adaptivedilatednetworkwithselfcorrectionsupervisionforcounting/4094ae63-9167-497a-bd46-74d2c09b77c0_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bb07f31f8a945b8a271db1d5c4d6f47034de32d7d509aecab96a09eead5de396
+size 2449671
diff --git a/adaptivedilatednetworkwithselfcorrectionsupervisionforcounting/full.md b/adaptivedilatednetworkwithselfcorrectionsupervisionforcounting/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..41ed03b165d0d60a7111dbae784dcbbff38eb9f4
--- /dev/null
+++ b/adaptivedilatednetworkwithselfcorrectionsupervisionforcounting/full.md
@@ -0,0 +1,333 @@
+# Adaptive Dilated Network with Self-Correction Supervision for Counting
+
+Shuai Bai1, Zhiqun He2, Yu Qiao3, Hanzhe Hu4, Wei Wu2, Junjie Yan2
+1Beijing University of Posts and Telecommunications, 2SenseTime Group Limited,
+3Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, 4Peking University
+
+baishuai@bupt.edu.cn {hezhiqun, wuwei, yanjunjie}@sensetime.com
+
+yu.qiao@siat.ac.cn huhz@pku.edu.cn
+
+# Abstract
+
+The counting problem aims to estimate the number of objects in images. Due to large scale variation and labeling deviations, it remains a challenging task. The static density map supervised learning framework is widely used in existing methods: a Gaussian kernel is used to generate a density map as the learning target, and the Euclidean distance is used to optimize the model. However, this framework is intolerant of labeling deviations and cannot reflect the scale variation. In this paper, we propose an adaptive dilated convolution and a novel supervised learning framework named self-correction (SC) supervision. At the supervision level, the SC supervision utilizes the outputs of the model to iteratively correct the annotations and employs the SC loss to simultaneously optimize the model from both the whole and the individuals. At the feature level, the proposed adaptive dilated convolution predicts a continuous value as the specific dilation rate for each location, which adapts to the scale variation better than a discrete and static dilation rate. Extensive experiments illustrate that our approach achieves consistent improvements on four challenging benchmarks. In particular, our approach achieves better performance than the state-of-the-art methods on all benchmark datasets.
+
+# 1. Introduction
+
+The counting task is an important topic in computer vision, with many practical applications such as traffic management and congestion estimation under video surveillance. In recent years, methods using convolutional neural networks (CNNs) have achieved remarkable progress. However, the task remains challenging, mainly due to two issues: how to effectively supervise the learning process, and how to address the large scale variation problem.
+
+Firstly, compared to the bounding-box annotation, the dotted annotation is less labour-intensive, which is widely
+
+
+Figure 1. Two challenges for the counting problem. a) The location of the dotted annotation (yellow points) is inconsistent, whether it is a vehicle or a person. b) There is large scale variation in the same scene and different scenes.
+
+used in most of the counting datasets [11, 50, 15, 14, 43]. However, as shown in Fig. 1 (a), the dotted annotations are not consistent on different targets because of subjective deviation. Most of the existing state-of-the-art methods [48, 20, 23, 37, 26, 30] use a Gaussian distribution to generate a density map as the learning target. The model is optimized by comparing the Euclidean $(L_{2})$ distance between the target density map and the model estimation. However, there are three limitations in this supervised method: 1) The labeling deviation makes the target density map inaccurate. 2) The variance of the Gaussian density map does not match the scale of the target. 3) $L_{2}$ loss is sensitive to the deviation of position and the change of variance. As a result, the model cannot learn consistent mapping relationships between density maps and features, which greatly limits the upper bound of the performance. Recently, some works [51, 36, 39, 22] have been proposed to alleviate the inconsistency, including introducing additional loss functions (e.g., adversarial loss [34] and structural similarity (SSIM) loss [2]) and fusing the density maps of different variances [42]. These methods mainly focus on the loss function while ignoring the deviation of the target density map.
+
+Secondly, as shown in Fig.1(b), there is large scale variation in different scenes. Even in the same scene, the scale still changes dramatically due to perspective phenomenon.
+
+In order to address large scale variation problem, some previous methods use multi-column network [50,33,17,39,40, 41], stacked multi-branch blocks [2,25], or multi-dilated decoders [12,24] to extract features with different receptive fields. Other methods [29,13,35,45] apply different resolution features to estimate the density maps, and then fuse the density maps with the attention maps [13] or perspective maps [35,8] to obtain the final result. In fact, the value of the scale is continuous, but the aforementioned methods only consider several discrete scales or receptive fields, and there is no way to adapt to a wider range of continuous scale variation. At the same time, extracting multi-scale features will bring more computation load.
+
+Towards the aforementioned issues, we propose a novel supervised learning framework. The framework utilizes the model estimation to correct the annotation in an expectation-maximization (EM) manner, which effectively alleviates the effect of labeling deviations. We consider the density map as a Gaussian mixture model consisting of $K$ two-dimensional Gaussian distributions, where $K$ is the number of objects in the image. In this setting, the dotted annotations are used to initialize the Gaussian mixture model. The expectation (E) step corrects and estimates the responsibility between each position and each Gaussian distribution. The maximization (M) step then updates the parameters (e.g., the means, covariances, and mixing coefficients) of the Gaussian mixture by maximizing the complete-data likelihood. The E step and the M step execute alternately. Instead of using the $L_{2}$ loss to optimize the model, the self-correction (SC) loss is proposed to optimize both the whole and the individuals. For the whole, we generate a new density map with the re-estimated parameters as the GT. For the individuals, we introduce supervision on the mixing coefficient of each Gaussian.
+
+Furthermore, the scale variation is continuous, which means that a continuous receptive field matches the target scale better than a discrete one. To distinguish different regions, specific receptive fields at different locations are more effective than a shared one. Based on the analysis above, we design an adaptive dilated convolution module. Instead of a static and discrete dilation rate, each location has a specific dilation rate to match the scale variation. Moreover, the range of dilation is continuous and is learned from the preceding feature maps, which costs less computation than extracting multi-scale features. The main contributions of this work are summarized as follows:
+
+- We propose a novel supervised learning framework, which effectively utilizes results of model learning to progressively correct the labeling deviations. Besides, the self-correction loss is proposed to simultaneously optimize the model from both the whole and the individuals perspective.
+- We propose an adaptive dilated convolution, which learns a specific continuous dilation rate to effectively match the scale variation at different locations. Moreover, it gains better performance than multi-scale feature fusion or multi-column networks with less computation load.
+
+- Extensive experiments illustrate that our approach has achieved a consistent improvement on four challenging benchmarks. Especially, our approach achieves better performance than the state-of-the-art methods on all benchmark datasets.
+
+# 2. Related Works
+
+As a crucial topic in computer vision, the counting problem has been researched for many years. The early methods [1, 9, 5, 2, 10, 19, 21] regard it as a detection problem, but it is difficult to detect all targets in congested areas. To improve the counting accuracy in some extremely dense cases, methods of direct regression [3, 4, 5, 32] were proposed. Recently, CNN-based methods have achieved remarkable progress. These methods mainly concentrate on solving two challenging problems: large scale variation and lack of effective supervision. We review related works on the counting problem from these two aspects.
+
+Methods of alleviating large scale variation. One way to cope with large scale variation is to obtain richer feature representations. MCNN [50] designs a multi-column convolutional neural network, in which different branches use different kernel sizes to control the size of the receptive fields. Switch-CNN [33] introduces a switch classifier that is trained to relay the crowd scene patch to the best branch. SANet [2] applies stacked multi-branch blocks to extract features with different receptive fields. Using a single CNN, CSRNet [20] employs dilated convolution to expand the receptive field, which improves accuracy and proves the effectiveness of dilated convolution. DADNet [12] applies multi-dilated convolution to capture rich spatial context and utilizes deformable convolution to generate a high-quality density map. Another way is to effectively combine features of different resolutions. Hydra-CNN [29] uses a pyramid of image patches extracted at multiple scales to perform the final density prediction. With only one scale, SPN [44] extracts multi-scale features from different layers by the scale pyramid module. SAAN [28] employs the attention mechanism to fuse the density maps estimated by multi-resolution features. PACNN [35] introduces an additional branch to predict the perspective map, which is used to fuse the multi-resolution density maps. Obviously, these methods use discrete receptive fields, which limits their ability to better adapt to continuous scale changes. Moreover, extracting multi-scale features adds more computation load.
+
+Methods of effective supervision. The mainstream state-of-the-art approaches are based on density map super
+
+
+Figure 2. Overview of our counting framework. The input image is first fed into the backbone network to obtain feature representation. The decoder consists of six adaptive dilated convolutions and outputs the estimated density map. Each adaptive dilated convolution estimates a specific dilation rate for each location over the input feature. Then, the sampled locations are determined with the dilation rates and the feature is sampled by bilinear interpolation. The current network estimation is used to correct the annotation with an EM-like manner. Furthermore, the SC loss simultaneously optimizes the model from both the whole and the individuals perspective.
+
+vision. Lempitsky et al. [18] were the first to use a Gaussian distribution to generate a density map as the learning target, which has been widely adopted by subsequent methods [15, 23, 48, 37]. $L_{2}$ loss is commonly used in these CNN-based methods. However, this kind of supervised learning framework is intolerant of the inconsistent mapping relationships between density maps and features caused by labeling deviations. Some methods [49, 48, 15, 51, 36, 39] introduce the supervision of additional tasks (e.g., depth maps, segmentation graphs, quantity estimation) to mitigate the effects of inconsistency. SANet [2] adds the local pattern consistency loss to reduce the sensitivity of $L_{2}$ loss. CODA [34] introduces an adversarial loss to attenuate the blurry effects of the density map. ADMG [42] uses a learned refinement network to fuse the density maps of different variances as a new density map. DSSINet [22] utilizes a dilated multi-scale structural similarity loss to learn the consistency within regions of various sizes. However, most of these methods [2, 34, 47] primarily focus on the design of the loss function and employ hand-crafted variance and scale settings.
+
+# 3. Methodology
+
+We propose a framework for object counting, which is shown in Fig. 2. It consists of the adaptive dilated convolution network and the self-correction supervision. In this section, we first revisit the conventional target density map from the Gaussian Mixture Model (GMM) perspective. Then we present a novel supervised learning framework, which utilizes the network estimation to correct the annotation in an expectation-maximization manner. Furthermore, we describe the architecture and the operation details of the adaptive dilated convolution.
+
+# 3.1. Self-Correction Supervision
+
+Gaussian density maps are widely used as the learning target in CNN-based methods. It is formulated as:
+
+$$
+\mathbf {D} _ {g t} \left(x _ {n}\right) = \sum_ {k = 1} ^ {K} \mathcal {N} \left(x _ {n} \mid \mu_ {k}, \Sigma_ {k}\right), \tag {1}
+$$
+
+where $\mathbf{D}$ represents the density map of size $H\times W$ . $\mathbf{D}_{est}$ and $\mathbf{D}_{gt}$ denote the estimated and target density maps. $x_{n}$ denotes the $n_{th}$ two-dimensional location $(h_n,w_n)$ in the image, and $X$ represents the 2D location map of size $2\times N$ . $N = H\times W$ is the number of locations, and $K$ is the number of objects in the image. $\mathcal{N}(x|\mu_k,\Sigma_k)$ denotes the $k_{th}$ 2D Gaussian distribution. We use $\mu_{k}$ to indicate the $k_{th}$ dotted annotation. $\Sigma_{k}$ represents the variance of the $k_{th}$ Gaussian. Commonly, the variance is pre-defined or calculated by the $K$ -means algorithm [50,15].
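+
+As a concrete illustration, the target density map of Eq. (1) can be rendered from the dotted annotations as in the following minimal NumPy sketch. The function name is ours, and the scalar `sigma` stands in for the pre-defined or $K$ -means-estimated variance (its value here is arbitrary).
+
+```python
+import numpy as np
+
+def make_density_map(points, H, W, sigma=2.0):
+    """Render Eq. (1): one isotropic 2D Gaussian per dotted annotation."""
+    hs, ws = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
+    D_gt = np.zeros((H, W), dtype=np.float64)
+    for mh, mw in points:                          # points: (K, 2) array of (h, w) centers mu_k
+        d2 = (hs - mh) ** 2 + (ws - mw) ** 2
+        D_gt += np.exp(-d2 / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)
+    return D_gt                                    # integrates to roughly K
+
+D_gt = make_density_map(np.array([[10.0, 12.0], [30.0, 40.0]]), H=64, W=64)
+print(D_gt.sum())                                  # close to 2
+```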
+
+Considering the density map divided by $K$ , it can be regarded as a Gaussian Mixture Model:
+
+$$
+\begin{array}{l} p \left(x _ {n}\right) = \frac {\mathbf {D} _ {g t} \left(x _ {n}\right)}{K} = \sum_ {k = 1} ^ {K} \pi_ {k} \mathcal {N} \left(x _ {n} \mid \mu_ {k}, \Sigma_ {k}\right). \tag {2} \\ s. t. \sum_ {k = 1} ^ {K} \pi_ {k} = 1, 0 \leq \pi_ {k} \leq 1 \\ \end{array}
+$$
+
+The Gaussian mixture model has $K$ hidden variables and
+
+
+Figure 3. Comparison of different supervision methods. Here are four common situations in density map estimation (jitter, reshuffle, missing and the change of Gaussian kernel). In the top row, we visualize the model estimation. In the second row, the initial GT and the corrected GT are shown. The residual between the GT and model estimation is compared in the third row. In the bottom, we compare the sum of the per-pixel $L_{1}$ distance without self-correction, the absolute difference of overall counts, and the SC loss. The SC supervision has a unique property that it tolerates the local bias but reacts strongly to the change of the number of objects.
+
+the mixing coefficient $\pi_{k} = \frac{1}{K}$ , and the dotted annotation $\mu_{k}$ is used as the mean. Due to the subjective bias and the scale matching problem mentioned in Section 1, this Gaussian mixture model is not an optimal probability distribution. As training proceeds, the model predicts more accurate density maps, but inaccurate annotations limit the upper bound of the performance. Benefitting from image features, the density map $D_{est}$ estimated by the current network could be better than the annotation, or at least yield complementary information, in terms of the consistency of response locations and how well the response range matches the target scale.
+
+This fact inspires us to utilize the current network estimation $D_{est}$ to correct the annotation $D_{gt}$ , generating a more reliable density map for training the network in the next iteration. We propose an EM-like iterative algorithm for this goal. Specifically, the annotations are used as the initial parameters of the GMM. In the E step, we introduce a responsibility estimation setup, in which the responsibility between each position and each latent distribution is estimated from the current parameters and corrected by the network estimation. In the M step, the parameters of the GMM are re-estimated with the current responsibilities by maximizing the complete-data likelihood. The E step and the M step execute alternately. At each training iteration, the corrected GT is regenerated. Furthermore, supervision on the estimated mixing coefficients is introduced to balance the individuals.
+
+Overall, the proposed SC supervision has three key parts: responsibility estimation, likelihood maximization and the self-correction (SC) loss. To simplify the notation, we reshape $\mathbf{D}_{est}$ into $1\times N$ . Since the probability is non-negative, we add a ReLU layer after the output layer. Firstly, the probability matrix $\mathbf{Z}$ of size $K\times H\times W$ is initialized, which represents the conditional probability of $x_{n}$ belonging to the $k_{th}$ Gaussian (object).
+
+The $k_{th}$ matrix of size $H\times W$ in $\mathbf{Z}$ is initialized with a Gaussian distribution whose mean is the dotted annotation and whose variance is pre-defined. Similar to $\mathbf{D}$ , we reshape $\mathbf{Z}$ into $K\times N$ . Here, we use 0.5 as the initial value of the variance. As the iteration proceeds, $\mathbf{Z}^t$ is re-generated with new Gaussian distributions based on the re-estimated parameters (e.g., $\mu^{(t - 1)},\Sigma^{(t - 1)}$ ).
+
+Responsibility estimation. Responsibility estimation works as the E step in the EM algorithm. From the view of dotted annotations, we use the posterior probability to evaluate the responsibility of the $n_{th}$ position and the $k_{th}$ Gaussian distribution. It is formulated as:
+
+$$
+\boldsymbol{\Gamma}_{kn}^{(t)} = \frac{\mathbf{Z}_{kn}^{(t-1)}}{\sum_{j=1}^{K} \mathbf{Z}_{jn}^{(t-1)}}, \tag {3}
+$$
+
+where $t$ denotes the $t_{th}$ iteration, and $\mathbf{Z}_{kn}$ denotes the value of $\mathbf{Z}$ at the position of $(k,n)$ . However, the aforementioned responsibility can not reflect the actual data distribution. The corrected responsibility is given by:
+
+$$
+\mathbf {R} _ {k n} ^ {(t)} = \boldsymbol {\Gamma} _ {k n} ^ {(t)} \times \mathbf {D} _ {e s t} (x _ {n}). \tag {4}
+$$
+
+Finally, using Eq. (4), the responsibility matrix $\mathbf{R}$ of size $K\times N$ is obtained.
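+
+A minimal NumPy sketch of this E step follows (our illustration, not the authors' code; array names follow the notation above, and a small epsilon is added for numerical stability, which the paper does not specify).
+
+```python
+import numpy as np
+
+def responsibility_estimation(Z_prev, D_est, eps=1e-12):
+    """E step of the SC supervision, Eqs. (3)-(4).
+
+    Z_prev: (K, N) per-object Gaussian maps from the previous iteration.
+    D_est:  (N,)  non-negative density map predicted by the current network.
+    Returns the corrected responsibility matrix R of shape (K, N).
+    """
+    Gamma = Z_prev / (Z_prev.sum(axis=0, keepdims=True) + eps)  # Eq. (3): normalize over the K Gaussians
+    R = Gamma * D_est[None, :]                                  # Eq. (4): correct with the network estimate
+    return R
+```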
+
+Likelihood maximization. Likelihood maximization works as the M step of EM algorithm. The parameters are re-estimated with the responsibility matrix:
+
+$$
+\mu_{k}^{(t)} = \frac{1}{N_{k}^{(t)}} \mathbf{R}_{k}^{(t)} \times \mathbf{X}^{T}, \tag {5}
+$$
+
+$$
+\Sigma_ {k} ^ {(t)} = \frac {1}{N _ {k} ^ {(t)}} \mathbf {R} _ {k} ^ {(t)} \times \left(\left(\mathbf {X} - \mu_ {k} ^ {(t)}\right) \cdot \left(\mathbf {X} - \mu_ {k} ^ {(t)}\right)\right) ^ {T}, \tag {6}
+$$
+
+$$
+\pi_ {k} ^ {(t)} = \frac {N _ {k} ^ {(t)}}{\sum_ {n = 1} ^ {N} \mathbf {D} _ {e s t} (x _ {n})}, \tag {7}
+$$
+
+where $N_{k} = \sum_{n = 1}^{N}\mathbf{R}_{kn}^{(t)}$ . Specifically, in the counting problem, we only know the number of objects in the image and the fact that each target has the same prior probability, which means the dimension of the hidden variable is $K$ and $\pi_k = \frac{1}{K}$ . Therefore, we only update the means and variances, and fix $\pi_{k}$ as $\frac{1}{K}$ . In addition, if we consider the limit $\Sigma \to 0$ , the log-likelihood function goes to infinity when $K > 1$ , which causes a pathological solution, so a constraint is necessary. Here, we introduce the constraint $0.5\leq \Sigma \leq 5$ . As responsibility estimation and likelihood maximization execute alternately, $\mathbf{D}_{gt}^{(t)}$ becomes more compatible and reasonable than the initial density map.
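+
+A corresponding sketch of the M step (again our illustration, not the authors' code), assuming the per-axis diagonal covariance implied by Eq. (6) and keeping $\pi_k$ fixed at $\frac{1}{K}$ as described above:
+
+```python
+import numpy as np
+
+def likelihood_maximization(R, X, sigma_min=0.5, sigma_max=5.0, eps=1e-12):
+    """M step of the SC supervision, Eqs. (5)-(6), with the variance constraint.
+
+    R: (K, N) corrected responsibilities from the E step.
+    X: (2, N) pixel coordinates (h, w) of the N locations.
+    Returns re-estimated means (K, 2) and per-axis variances (K, 2).
+    """
+    N_k = R.sum(axis=1) + eps                        # effective mass per Gaussian
+    mu = (R @ X.T) / N_k[:, None]                    # Eq. (5)
+    diff2 = (X[None, :, :] - mu[:, :, None]) ** 2    # (K, 2, N) squared deviations
+    Sigma = (R[:, None, :] * diff2).sum(axis=2) / N_k[:, None]   # Eq. (6)
+    Sigma = np.clip(Sigma, sigma_min, sigma_max)     # constraint 0.5 <= Sigma <= 5
+    return mu, Sigma
+```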
+
+Self-correction loss. In general, a more reasonable density map is obtained through online updating. Most methods use the Euclidean $(L_2)$ distance to optimize the model:
+
+$$
+\mathcal {L} _ {L _ {2}} = \sum_ {n = 1} ^ {N} \left| \mathbf {D} _ {e s t} \left(x _ {n}\right) - \mathbf {D} _ {g t} \left(x _ {n}\right) \right| ^ {2}. \tag {8}
+$$
+
+Here, we use the $L_{1}$ distance of pixel-level subregion to supervise the density map:
+
+$$
+\mathcal {L} _ {\text {d e n s i t y m a p}} = \frac {1}{N} \sum_ {n = 1} ^ {N} \left| \mathbf {D} _ {\text {e s t}} \left(x _ {n}\right) - \mathbf {D} _ {g t} ^ {(t)} \left(x _ {n}\right) \right|. \tag {9}
+$$
+
+However, the mixing coefficients $\pi$ are not learned in the aforementioned process (Eqs. (5) and (6)) since we fix them all the time. From another point of view, written as $\frac{\sum_{n=1}^{N} \Gamma_{kn}^{(t)} \times \mathbf{D}_{est}(x_n)}{\sum_{n=1}^{N} \mathbf{D}_{est}(x_n)}$ , the mixing coefficients represent the proportions of the targets assigned to the whole distribution. As mentioned above, the proportion of each target should be the same, i.e., $\pi_k = \frac{1}{K}$ . But the re-estimated $\pi_k$ in Eq. (7) is not the constant $\frac{1}{K}$ . Besides, the sum of the estimated map is not accurate. Here, we set $\widetilde{\pi}_k = \frac{\sum_{n=1}^{N} \mathbf{R}_{kn}}{\sum_{n=1}^{N} \mathbf{D}_{gt}(x_n)} = \frac{1}{K} \sum_{n=1}^{N} \mathbf{R}_{kn}$ . To balance the individuals, we introduce a loss function as:
+
+$$
+\mathcal {L} _ {\text {c o e f f i c i e n t}} = \sum_ {k = 1} ^ {K} \left| \tilde {\pi} _ {k} ^ {(t)} - \frac {1}{K} \right|. \tag {10}
+$$
+
+Finally, the proposed SC loss is formulated as:
+
+$$
+\mathcal {L} = \lambda_ {1} \mathcal {L} _ {\text {d e n s i t y m a p}} + \lambda_ {2} \mathcal {L} _ {\text {c o e f f i c i e n t}}. \tag {11}
+$$
+
+Here, we simply set $\lambda_1 = \lambda_2 = 1$ . Overall, the proposed SC supervision has a number of desirable properties. Firstly, it tolerates labeling deviation: dynamically updating the target density map corrects some of the labeling deviation and helps the model learn a consistent feature representation. Secondly, it is robust to scale variation: the response area reflects the scale of the targets in the image, and the variance is iteratively adjusted to adapt to the response area. Thirdly, it is sensitive to changes in the number of objects: the fluctuation of the mixing coefficients effectively reflects missed and false detections. These properties are illustrated in Fig. 3.
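+
+Putting Eqs. (9)-(11) together, a minimal PyTorch sketch of the SC loss is given below (our illustration; tensor shapes follow the notation above, and the function name is ours).
+
+```python
+import torch
+
+def sc_loss(D_est, D_gt_t, R, lambda1=1.0, lambda2=1.0):
+    """Self-correction loss of Eqs. (9)-(11).
+
+    D_est:  (N,) estimated density map (flattened).
+    D_gt_t: (N,) corrected target density map regenerated from mu^(t), Sigma^(t).
+    R:      (K, N) corrected responsibility matrix.
+    """
+    K = R.shape[0]
+    loss_density = torch.mean(torch.abs(D_est - D_gt_t))   # Eq. (9): pixel-wise L1
+    pi_tilde = R.sum(dim=1) / K                             # \tilde{pi}_k = (1/K) sum_n R_kn
+    loss_coeff = torch.sum(torch.abs(pi_tilde - 1.0 / K))   # Eq. (10): balance the individuals
+    return lambda1 * loss_density + lambda2 * loss_coeff    # Eq. (11)
+```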
+
+# 3.2. Adaptive Dilated Convolution
+
+In order to address large scale variation, many network structures with rich receptive fields have been proposed. There is no doubt that a reasonable receptive field plays an important role in the counting problem. Here, we introduce two designs to the proposed adaptive dilated convolution. 1) From the aspect of scale variation, we use a continuous range of receptive fields to match the continuous scale variation. 2) To learn specific awareness, a specific receptive field is learned for each location.
+
+In detail, a standard 2D convolution with the kernel $3 \times 3$ uses a regular grid to sample the input feature map, the grid $\mathbf{G}$ is defined as:
+
+$$
+\mathbf {G} = \left\{(- 1, - 1), (- 1, 0), \dots , (0, 1), (1, 1) \right\}. \tag {12}
+$$
+
+The output feature $\mathbf{F}_{\mathbf{o}}(x_n)$ is calculated as:
+
+$$
+\mathbf {F} _ {\mathbf {o}} \left(x _ {n}\right) = \sum_ {\Delta x _ {i} \in \mathbf {G}} w \left(\Delta x _ {i}\right) \mathbf {F} _ {i} \left(x _ {n} + \Delta x _ {i} \times d\right), \tag {13}
+$$
+
+where $d$ represents the static dilation rate. $w$ denotes the parameters of the convolution.
+
+Commonly, the dilation rate $d$ is a pre-set integer value (e.g., 1, 2 or 3) and static. In adaptive dilated convolution, the dilation is adjusted with $\widetilde{d}$ , which is dynamic. Then, Eq. (13) becomes
+
+$$
+\mathbf {F} _ {\mathbf {o}} \left(x _ {n}\right) = \sum_ {\Delta x _ {i} \in \mathbf {G}} w \left(\Delta x _ {i}\right) \mathbf {F} _ {\mathbf {i}} \left(x _ {n} + \Delta x _ {i} \times \widetilde {d} _ {n}\right). \tag {14}
+$$
+
+For the $n_{th}$ location, the dilation rate is defined as $\widetilde{d}_n$ , which is typically fractional. The value of $\mathbf{F_i}(x_n + \Delta x_i\times \widetilde{d}_n)$ is computed by bilinear interpolation.
+
+As shown in the red box of Fig. 2, the specific dilations $\widetilde{d}$ are estimated through a standard convolutional layer with kernel size $3 \times 3$ and dilation 1 over the same input feature map. In particular, we add a ReLU layer to guarantee that the dilations are non-negative. The output dilation maps have the same spatial resolution as the input feature maps, and the channel dimension becomes 1. The gradients of the dilations are back-propagated through the bilinear operations.
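+
+A minimal PyTorch sketch of this operation follows the description above (our illustration, not the released implementation; the module and variable names are ours).
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class AdaptiveDilatedConv(nn.Module):
+    """Sketch of the adaptive dilated convolution of Eq. (14): a 3x3 conv + ReLU
+    predicts one non-negative dilation rate per location, the 3x3 neighbourhood is
+    sampled at that fractional dilation with bilinear interpolation, and a learned
+    weighted sum over the nine taps produces the output feature."""
+    def __init__(self, in_ch, out_ch):
+        super().__init__()
+        self.dilation_pred = nn.Sequential(nn.Conv2d(in_ch, 1, 3, padding=1), nn.ReLU())
+        self.weight = nn.Conv2d(in_ch * 9, out_ch, 1)   # the 3x3 kernel weights, applied to gathered taps
+
+    def forward(self, x):
+        B, C, H, W = x.shape
+        d = self.dilation_pred(x)[:, 0]                 # (B, H, W) continuous dilation per location
+        ys, xs = torch.meshgrid(torch.arange(H, device=x.device, dtype=x.dtype),
+                                torch.arange(W, device=x.device, dtype=x.dtype), indexing="ij")
+        samples = []
+        for dy in (-1, 0, 1):                           # regular 3x3 grid G of Eq. (12)
+            for dx in (-1, 0, 1):
+                sy = ys + dy * d                        # fractional sample coordinates
+                sx = xs + dx * d
+                grid = torch.stack((2 * sx / (W - 1) - 1, 2 * sy / (H - 1) - 1), dim=-1)
+                samples.append(F.grid_sample(x, grid, mode="bilinear", align_corners=True))
+        return self.weight(torch.cat(samples, dim=1))   # weighted sum over the 9 sampled taps
+```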
+
+Why is deformable convolution not used here? Deformable convolution [7] introduces unsymmetrical offsets for every position in the sampling grid, which causes the extracted features to have spatial deviations. In the task of object detection, the estimated boxes are corrected by regression to alleviate the deviations. However, the counting problem is a position-sensitive task, in which the density and feature of each location need strong consistency. Features with
+
+| Layer | Size | Type |
+| --- | --- | --- |
+| 1 | 3 × 3 × 512 | adconv. + bn + relu |
+| 2 | 3 × 3 × 512 | adconv. + bn + relu |
+| 3 | 3 × 3 × 512 | adconv. + bn + relu |
+| 4 | 3 × 3 × 256 | adconv. + bn + relu |
+| 5 | 3 × 3 × 128 | adconv. + bn + relu |
+| 6 | 3 × 3 × 64 | adconv. + bn + relu |
+| 7 | 1 × 1 × 1 | adconv. + relu |
+
+spatial deviations will lead to erroneous learning, so adaptive dilated convolution is more reasonable than deformable convolution for the counting problem. Moreover, compared with predicting offsets for every kernel weight, predicting only one value as the dilation rate is more lightweight.
+
+# 4. Experiments
+
+# 4.1. Implementation Details
+
+Network structure. The first ten convolutional layers of VGG16_bn [38, 16] (pretrained on ImageNet [31]) are used as our backbone. The decoder structure is shown in Table 1. The stochastic gradient descent optimizer with an initial learning rate of 0.005 is used to update the parameters, and the learning rate is decayed by a factor of 0.2 once the number of epochs reaches one of the milestones. ADNet denotes our adaptive dilated network with the conventional supervision. ADSCNet denotes our adaptive dilated network with the SC supervision.
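+
+The decoder of Table 1 can be sketched as follows (our illustration; `adconv(cin, cout)` stands for any module implementing the adaptive dilated convolution, such as the sketch earlier, and we assume the VGG backbone feature has 512 channels; a plain $1 \times 1$ convolution is used for the last layer since a $1 \times 1$ kernel has no neighbours to dilate).
+
+```python
+import torch.nn as nn
+
+def build_decoder(adconv):
+    """Six 3x3 adaptive dilated conv + BN + ReLU blocks (Table 1), then a
+    1x1 conv + ReLU emitting the one-channel density map."""
+    chans = [512, 512, 512, 512, 256, 128, 64]   # layer 1 takes the 512-channel backbone feature
+    layers = []
+    for cin, cout in zip(chans[:-1], chans[1:]):
+        layers += [adconv(cin, cout), nn.BatchNorm2d(cout), nn.ReLU(inplace=True)]
+    layers += [nn.Conv2d(64, 1, kernel_size=1), nn.ReLU(inplace=True)]
+    return nn.Sequential(*layers)
+```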
+
+Training details. We augment the training data using horizontal flipping, random cropping and resizing. Without upsampling, the size of the density map is $\frac{1}{8}$ of the original image, and the batch size of each iteration is 32. Specifically, to get a better initialization, we pre-train our model with the conventional $L_{2}$ supervision method for 20 epochs. In the SC iterative process, the number of iterations is set to 2, and we set the initial variance of each Gaussian to 0.5 at the output resolution.
+
+Evaluation details. The mean absolute error (MAE) and the root mean squared error (MSE) are commonly used as evaluation metrics, defined as follows:
+
+$$
+MAE = \frac{1}{M} \sum_{i=1}^{M} \left| C_{i}^{est} - C_{i}^{gt} \right|, \quad MSE = \sqrt{\frac{1}{M} \sum_{i=1}^{M} \left( C_{i}^{est} - C_{i}^{gt} \right)^{2}}, \tag {15}
+$$
+
+where $M$ is the number of test images. $C_i^{est}$ and $C_i^{gt}$ are the estimated and ground-truth counts of the $i_{th}$ image. The lower the MAE and MSE, the better the performance.
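+
+These metrics reduce to a few lines of NumPy (our illustration; the example counts are made up, and the predicted count of an image is typically the sum of its estimated density map).
+
+```python
+import numpy as np
+
+def mae_mse(counts_est, counts_gt):
+    """MAE and MSE of Eq. (15) over M test images."""
+    err = np.asarray(counts_est, dtype=float) - np.asarray(counts_gt, dtype=float)
+    return np.mean(np.abs(err)), np.sqrt(np.mean(err ** 2))
+
+print(mae_mse([102.3, 51.0, 8.7], [100, 55, 9]))
+```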
+
+Strong baseline. Due to the small datasets and dramatic scene changes, many state-of-the-art methods [20, 25] still train the model with batch size 1, which is time-consuming. As illustrated in Fig. 4, the performance of CSRNet [20]
+
+Table 1. The architecture of the decoder of ADNet. The adaptive dilated convolution is represented as "adconv".
+
+| | ADNet | 1 | 2 | 3 | 4 |
+| --- | --- | --- | --- | --- | --- |
+| MAE (↓) | 61.3 | 56.2 | 55.4 | 57.6 | 58.3 |
+| MSE (↓) | 103.9 | 94.8 | 97.7 | 98.7 | 101.3 |
+
+Table 2. Performance of models with different iteration numbers on ShanghaiTech A.
+
+
+Figure 4. The effect of batch size and batch normalization layer.
+
+declines as the batch size increases on ShanghaiTech A with 400 training images, but $\mathrm{CSRNet^{*}}$ achieves an effective boost with larger batch sizes after we introduce batch normalization layers and data augmentation (random cropping and resizing). Therefore, our baseline uses VGG16_bn [38, 16] as the backbone and the same decoder as $\mathrm{CSRNet^{*}}$ but with dilation 1. In particular, our baseline achieves an MAE of 66.5 on ShanghaiTech A, which outperforms the best performance of 2018 [2].
+
+# 4.2. Ablation Studies
+
+# 4.2.1 Self-Correction Supervision
+
+Iteration Number. As illustrated in Table 2, when SC supervision is introduced, MAE achieves a significant decline. With the increase of the iteration number, MAE first declines and then increases. It reaches the best performance when the iteration number is 2. Besides, the overall fluctuations are not very large. With the increase of the iteration number, the generated density map will over-fit the estimated result. In particular, the variance $\Sigma$ will become too large and exceed the target scale, and interference from background information is introduced. So an appropriate iteration number of 2 is used in our experiments.
+
+Robustness to annotation error. As mentioned in Section 1, subjective label deviation is a very common phenomenon in dotted annotation. To further evaluate the robustness of our method to label deviation, we introduce uniform random noise to the original labeling results. In Fig. 6, as the proportion of noised annotations increases, the MAE of SC supervision is not significantly affected, whereas the performance of the conventional supervision method continuously declines. This proves the robustness of SC supervision to annotation errors.
+
+Expansion capability. In order to verify the expansion capability of the proposed method, we introduce our SC supervision to boost MCNN, CSRNet and our VGG_Baseline.
+
+
+Figure 5. Visualization of estimated density maps and dilation maps: (1) input image, (2) conventional supervision, (3) self-correction supervision, (4) dilation map. Compared with the baseline, the results of SC supervision have consistent response locations (the upper left contours of the head) and uniform response intensity for each person, whether in dense or sparse regions. From sparse regions to dense regions, the estimated dilation in (4) shows an obvious decline.
+
+
+Figure 6. Robustness evaluations to annotation error.
+
+| Methods | Baseline MAE (↓) | Baseline MSE (↓) | Baseline+SC MAE (↓) | Baseline+SC MSE (↓) |
+| --- | --- | --- | --- | --- |
+| MCNN* | 108.2 | 167.5 | 101.5 | 152.4 |
+| CSRNet* | 64.2 | 100.6 | 58.7 | 98.9 |
+| VGG | 66.5 | 106.9 | 60.7 | 100.6 |
+
+Table 3. The effect of SC supervision on three different methods on the ShanghaiTechA.
+
+To exclude interference from other factors, we use the same experimental environment and introduce the normalization layer to implement MCNN (denoted as $\mathrm{MCNN^{*}}$ ) and CSRNet (denoted as $\mathrm{CSRNet^{*}}$ ). As shown in Table 3, our SC supervision boosts all three baselines with consistent improvements. They gain relative MAE improvements of $6.19\%$ , $8.57\%$ , and $8.72\%$ , which verifies the effectiveness of our SC supervision method.
+
+Visualization of the estimated density maps. We visualize the density maps with different supervision methods in Fig. 5. Firstly, compared with the traditional supervision method, the response positions of SC supervision are more consistent, and mainly concentrate on the upper left contours of the head. It means that the upper left contour of the head is an easily discernible annotation location for crowd counting. The response positions of the conventional results are more random (e.g., face, eyes, or head). This shows that SC supervision enables the model to correct the human annotations itself. Secondly, in estimated density maps of the conventional method, the dense-crowd regions are usually underestimated, while sparse-crowd regions are usually overestimated. But the results of SC supervision have uniform response intensity whether in dense-crowd or sparse-crowd regions. It means that the SC loss effectively balances the proportion of the individuals. Thirdly, the density map has different response ranges for different objects, which reflects the scale variation.
+
+# 4.2.2 Adaptive Dilated Convolution
+
+Effect of dilation rate. In this section, we evaluate the effectiveness of adaptive dilated convolution. For the purpose of comparison, we train multiple variants of the baseline. "Dilation-$m$" indicates the baseline with a static dilation rate $m$ in the decoder. "Adaptive-Dila." denotes that the adaptive dilated convolution is introduced. The multi-branch decoder with dilation rates $(1,3,5)$ is given as "Multi-Dila.(1,3,5)", and DADNet [12] is a multi-dilated method. "Deformable-Dila." indicates that deformable convolution is introduced into the decoder. As illustrated in Table 5, the size of the receptive field influences the performance greatly. The model with dilation 2 achieves the best performance among single static dilated networks. The concatenation of multiple dilated features brings a slight improvement but a heavy computation load. Our ADNet only replaces the dilated convolution in CSRNet [20]
+
+| Methods | UCF_QNRF MAE(↓) | UCF_QNRF MSE(↓) | ShanghaiTech A MAE(↓) | ShanghaiTech A MSE(↓) | ShanghaiTech B MAE(↓) | ShanghaiTech B MSE(↓) | UCF_CC_50 MAE(↓) | UCF_CC_50 MSE(↓) | TRANCOS MAE(↓) | TRANCOS MSE(↓) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| MCNN(2016) [50] | 277 | 426 | 110.2 | 173.2 | 26.4 | 41.3 | 377.6 | 509.1 | - | - |
+| Switch-CNN(2017) [33] | 228 | 445 | 90.4 | 135.0 | 21.6 | 33.4 | 318.1 | 439.2 | - | - |
+| ACSCP(2018) [34] | - | - | 75.7 | 102.7 | 17.2 | 27.4 | 291.0 | 404.6 | - | - |
+| CSRNet(2018) [20] | - | - | 68.2 | 115.0 | 10.6 | 16.0 | 266.1 | 397.5 | 3.56 | - |
+| SANet(2018) [2] | - | - | 67.0 | 104.5 | 8.4 | 13.6 | 258.4 | 334.9 | - | - |
+| CAN(2019) [25] | 107 | 183 | 62.3 | 100.0 | 7.8 | 12.2 | 212.2 | 243.7 | - | - |
+| DSSINet(2019) [22] | 99.1 | 159.2 | 60.6 | 96.0 | 6.9 | 10.3 | 216.9 | 302.4 | - | - |
+| BL(2019) [27] | 88.7 | 154.8 | 62.8 | 101.8 | 7.7 | 12.7 | 229.3 | 308.2 | - | - |
+| SPN(2019) [44] | - | - | 61.7 | 99.5 | 9.4 | 14.4 | 259.2 | 335.9 | 3.35 | - |
+| SPANet+SANet(2019) [6] | - | - | 59.4 | 92.5 | 6.5 | 9.9 | 232.6 | 311.7 | - | - |
+| PGCNet(2019) [46] | - | - | 57.0 | 86.0 | 8.8 | 13.7 | - | - | - | - |
+| Baseline | 99.7 | 161.3 | 66.5 | 106.9 | 8.1 | 12.5 | 273.5 | 357.7 | 3.21 | 4.52 |
+| Our ADNet | 90.1 | 147.1 | 61.3 | 103.9 | 7.6 | 12.1 | 245.4 | 327.3 | 2.99 | 4.28 |
+| Our ADSCNet | 71.3 | 132.5 | 55.4 | 97.7 | 6.4 | 11.3 | 198.4 | 267.3 | 2.60 | 3.89 |
+
+Table 4. Comparisons with State-of-the-art methods on four datasets.
+
+| Methods | MAE(↓) | MSE(↓) |
+| --- | --- | --- |
+| Dilation-1 | 66.5 | 106.9 |
+| Dilation-2 | 64.2 | 100.6 |
+| Dilation-3 | 65.7 | 100.5 |
+| Multi-Dila.(1,3,5) | 63.6 | 98.8 |
+| DADNet [12] | 64.2 | 99.9 |
+| Deformable-Dila. | 62.6 | 97.0 |
+| Adaptive-Dila. | 61.3 | 103.9 |
+
+Table 5. The effect of different dilation rates on ShanghaiTech A.
+
+with adaptive dilated convolution and adds BN layers. The efficiency of the adaptive dilated convolution lies between that of dilated convolution and deformable convolution. Our ADNet improves CSRNet [20] considerably with a small extra computational burden.
+
+Visualization of the dilation maps. As illustrated in Fig. 5, large-scale targets and large-area backgrounds have larger receptive fields, while small-scale targets have smaller receptive fields. In particular, from the center of a large-scale target to its edge, the value of the dilation varies continuously from high to low, which effectively reflects the scale variation. For the background, a large receptive field is necessary to effectively distinguish it. However, it is difficult for a static dilated network to address the scale variation problem and distinguish the background.
+
+# 4.3. Comparisons with State-of-the-art
+
+We evaluate our method on four datasets, including the crowd datasets ShanghaiTech [50], UCF_CC_50 [14], and UCF_QNRF [43], and the vehicle dataset TRANCOS [11]. The ShanghaiTech crowd counting dataset consists of two parts: Part A and Part B. Part A is more congested than Part B. UCF_CC_50 is a tiny crowd counting dataset with only 50 images, but it has extremely congested scenes with heavy background noise. The UCF_QNRF dataset is a large and high-resolution crowd counting dataset with 1.25 million head annotations. As an extension, TRANCOS is a vehicle counting dataset with various perspectives.
+
+Table 4 reports the results on four challenging datasets. The proposed method achieves consistent improvements. Furthermore, it performs better than existing state-of-the-art methods on all four benchmark datasets. On UCF_QNRF, ADNet and ADSCNet gain relative MAE improvements of $9.6\%$ and $28.5\%$ ; the EM supervision is beneficial to high-resolution images. On the ShanghaiTech dataset, ADNet and ADSCNet improve the Baseline with relative MAE improvements of $7.8\%$ and $16.7\%$ on Part A, and $6.2\%$ and $21.0\%$ on Part B. Since the labeling deviation in sparse scenes is more serious, ADSCNet gains more improvement in sparse scenes than in crowded ones. In addition, the adaptive dilated convolution brings similar improvements in both sparse and crowded scenes. ADNet and ADSCNet improve the Baseline with relative MAE improvements of $10.3\%$ and $27.5\%$ on UCF_CC_50, and $6.85\%$ and $19.0\%$ on TRANCOS, which indicates that our method extends to more congested scenes and to counting other kinds of objects.
+
+# 5. Conclusion
+
+In this paper, we present a novel supervised learning framework for the counting problem. It utilizes the model estimation to iteratively correct the annotation and introduces the SC loss to supervise the whole and the individuals, and it can be integrated into any CNN-based method. To adapt to the large scale variation, the adaptive dilated convolution is proposed, which learns a dynamic and continuous dilation rate for each location. Experiments on four datasets demonstrate that it significantly improves the performance of the baseline. Furthermore, the estimated density maps show consistent response positions and uniform intensity, which illustrates that using the model estimation to correct the annotation is an efficient way to obtain a suitable annotation for network learning.
+
+# References
+
+[1] Gabriel J Brostow and Roberto Cipolla. Unsupervised bayesian detection of independent motion in crowds. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), volume 1, pages 594-601. IEEE, 2006.
+[2] Xinkun Cao, Zhipeng Wang, Yanyun Zhao, and Fei Su. Scale aggregation network for accurate and efficient crowd counting. In Proceedings of the European Conference on Computer Vision (ECCV), pages 734-750, 2018.
+[3] Antoni B Chan, Zhang-Sheng John Liang, and Nuno Vasconcelos. Privacy preserving crowd monitoring: Counting people without people models or tracking. In 2008 IEEE Conference on Computer Vision and Pattern Recognition, pages 1-7. IEEE, 2008.
+[4] Prithvijit Chattopadhyay, Ramakrishna Vedantam, Ramprasaath R Selvaraju, Dhruv Batra, and Devi Parikh. Counting everyday objects in everyday scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1135-1144, 2017.
+[5] Ke Chen, Shaogang Gong, Tao Xiang, and Chen Change Loy. Cumulative attribute space for age and crowd density estimation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2467-2474, 2013.
+[6] Zhi-Qi Cheng, Jun-Xiu Li, Qi Dai, Xiao Wu, and Alexander G. Hauptmann. Learning spatial awareness to improve crowd counting. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
+[7] Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. In Proceedings of the IEEE international conference on computer vision, pages 764-773, 2017.
+[8] Diptodip Deb and Jonathan Ventura. An aggregated multicolumn dilated convolution network for perspective-free counting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 195-204, 2018.
+[9] Piotr Dollár, Boris Babenko, Serge Belongie, Pietro Perona, and Zhuowen Tu. Multiple component learning for object detection. In European conference on computer vision, pages 211-224. Springer, 2008.
+[10] Weina Ge and Robert T Collins. Marked point processes for crowd counting. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 2913-2920. IEEE, 2009.
+[11] Ricardo Guerrero-Gómez-Olmedo, Beatz Torre-Jiménez, Roberto López-Sastre, Saturnino Maldonado-Bascon, and Daniel Onoro-Rubio. Extremely overlapping vehicle counting. In Iberian Conference on Pattern Recognition and Image Analysis, pages 423-431. Springer, 2015.
+[12] Dan Guo, Kun Li, Zheng-Jun Zha, and Meng Wang. Dadnet: Dilated-attention-deformable convnet for crowd counting. In Proceedings of the 27th ACM International Conference on Multimedia, MM '19, 2019.
+[13] Mohammad Hossain, Mehrdad Hosseinzadeh, Omit Chanda, and Yang Wang. Crowd counting using scale-aware attention
+
+networks. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1280-1288. IEEE, 2019.
+[14] Haroon Idrees, Imran Saleemi, Cody Seibert, and Mubarak Shah. Multi-source multi-scale counting in extremely dense crowd images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2547-2554, 2013.
+[15] Haroon Idrees, Muhmmad Tayyab, Kishan Athrey, Dong Zhang, Somaya Al-Maadeed, Nasir Rajpoot, and Mubarak Shah. Composition loss for counting, density map estimation and localization in dense crowds. In Proceedings of the European Conference on Computer Vision (ECCV), pages 532-546, 2018.
+[16] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
+[17] Di Kang and Antoni B. Chan. Crowd counting by adaptively fusing predictions from an image pyramid. In *British Machine Vision Conference* 2018, BMVC 2018, Northumbria University, Newcastle, UK, September 3-6, 2018, page 89, 2018.
+[18] Victor Lempitsky and Andrew Zisserman. Learning to count objects in images. In Advances in neural information processing systems, pages 1324-1332, 2010.
+[19] Min Li, Zhaoxiang Zhang, Kaiqi Huang, and Tieniu Tan. Estimating the number of people in crowded scenes by mid based foreground segmentation and head-shoulder detection. In 2008 19th International Conference on Pattern Recognition, pages 1-4. IEEE, 2008.
+[20] Yuhong Li, Xiaofan Zhang, and Deming Chen. CSRNet: Dilated convolutional neural networks for understanding the highly congested scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1091-1100, 2018.
+[21] Sheng-Fuu Lin, Jaw-Yeh Chen, and Hung-Xin Chao. Estimation of number of people in crowded scenes using perspective transformation. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 31(6):645-654, 2001.
+[22] Lingbo Liu, Zhilin Qiu, Guanbin Li, Shufan Liu, Wanli Ouyang, and Liang Lin. Crowd counting with deep structured scale integration network. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
+[23] Lingbo Liu, Hongjun Wang, Guanbin Li, Wanli Ouyang, and Liang Lin. Crowd counting using deep recurrent spatial-aware network. arXiv preprint arXiv:1807.00601, 2018.
+[24] Ning Liu, Yongchao Long, Changqing Zou, Qun Niu, Li Pan, and Hefeng Wu. Adcrowdnet: An attention-injective deformable convolutional network for crowd understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3225-3234, 2019.
+[25] Weizhe Liu, Mathieu Salzmann, and Pascal Fua. Context-aware crowd counting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5099-5108, 2019.
+[26] Xialei Liu, Joost van de Weijer, and Andrew D Bagdanov. Leveraging unlabeled data for crowd counting by learning to rank. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7661-7669, 2018.
+[27] Zhiheng Ma, Xing Wei, Xiaopeng Hong, and Yihong Gong. Bayesian loss for crowd count estimation with point supervision. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
+[28] Hossain Mohammad, Hosseinzadeh Mehrdad, Chanda Omit, and Wang Yang. Crowd counting using scale-aware attention networks. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1280-1288, Jan 2019.
+[29] Daniel Onoro-Rubio and Roberto J López-Sastre. Towards perspective-free object counting with deep learning. In European Conference on Computer Vision, pages 615-629. Springer, 2016.
+[30] Viresh Ranjan, Hieu Le, and Minh Hoai. Iterative crowd counting. In Proceedings of the European Conference on Computer Vision (ECCV), pages 270-285, 2018.
+[31] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211-252, 2015.
+[32] David Ryan, Simon Denman, Clinton Fookes, and Sridha Sridharan. Crowd counting using multiple local features. In 2009 Digital Image Computing: Techniques and Applications, pages 81-88. IEEE, 2009.
+[33] Deepak Babu Sam, Shiv Surya, and R Venkatesh Babu. Switching convolutional neural network for crowd counting. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4031-4039. IEEE, 2017.
+[34] Zan Shen, Yi Xu, Bingbing Ni, Minsi Wang, Jianguo Hu, and Xiaokang Yang. Crowd counting via adversarial cross-scale consistency pursuit. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5245-5254, 2018.
+[35] Miaojing Shi, Zhaohui Yang, Chao Xu, and Qijun Chen. Revisiting perspective information for efficient crowd counting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7279-7288, 2019.
+[36] Zenglin Shi, Pascal Mettes, and Cees G. M. Snoek. Counting with focus for free. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
+[37] Zenglin Shi, Le Zhang, Yun Liu, Xiaofeng Cao, Yangdong Ye, Ming-Ming Cheng, and Guoyan Zheng. Crowd counting with deep negative correlation learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5382-5390, 2018.
+[38] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
+[39] Vishwanath A Sindagi and Vishal M Patel. Cnn-based cascaded multi-task learning of high-level prior and density estimation for crowd counting. In 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pages 1-6. IEEE, 2017.
+[40] Vishwanath A Sindagi and Vishal M Patel. Generating high-quality crowd density maps using contextual pyramid cnns. In Proceedings of the IEEE International Conference on Computer Vision, pages 1861-1870, 2017.
+[41] Elad Walach and Lior Wolf. Learning to count with cnn boosting. In European Conference on Computer Vision, pages 660-676. Springer, 2016.
+[42] Jia Wan and Antoni Chan. Adaptive density map generation for crowd counting. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
+[43] Xinlong Wang, Tete Xiao, Yuning Jiang, Shuai Shao, Jian Sun, and Chunhua Shen. Repulsion loss: Detecting pedestrians in a crowd. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7774-7783, 2018.
+[44] Chen Xinya, Yanrui Bin, Nong Sang, and Changxin Gao. Scale pyramid network for crowd counting. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1941-1950, Jan 2019.
+[45] Haipeng Xiong, Hao Lu, Chengxin Liu, Liang Liu, Zhiguo Cao, and Chunhua Shen. From open set to closed set: Counting objects by spatial divide-and-conquer. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
+[46] Zhaoyi Yan, Yuchen Yuan, Wangmeng Zuo, Xiao Tan, Yezhen Wang, Shilei Wen, and Errui Ding. Perspective-guided convolution networks for crowd counting. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
+[47] Jie Yang, Jiarou Fan, Yiru Wang, Yige Wang, Weihao Gan, Lin Liu, and Wei Wu. Hierarchical feature embedding for attribute recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
+[48] Cong Zhang, Hongsheng Li, Xiaogang Wang, and Xiaokang Yang. Cross-scene crowd counting via deep convolutional neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 833-841, 2015.
+[49] Lu Zhang, Miaojing Shi, and Qiaobo Chen. Crowd counting via scale-adaptive convolutional neural network. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1113-1121, March 2018.
+[50] Yingying Zhang, Desen Zhou, Siqin Chen, Shenghua Gao, and Yi Ma. Single-image crowd counting via multi-column convolutional neural network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 589-597, 2016.
+[51] Muming Zhao, Jian Zhang, Chongyang Zhang, and Wenjun Zhang. Leveraging heterogeneous auxiliary tasks to assist crowd counting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 12736-12745, 2019.
+[52] Tao Zhao and Ramakant Nevatia. Bayesian human segmentation in crowded situations. In 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003. Proceedings., volume 2, pages II-459. IEEE, 2003.
\ No newline at end of file
diff --git a/adaptivedilatednetworkwithselfcorrectionsupervisionforcounting/images.zip b/adaptivedilatednetworkwithselfcorrectionsupervisionforcounting/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..fa3ced847444593f3d41ad4760ec14cb69997b5a
--- /dev/null
+++ b/adaptivedilatednetworkwithselfcorrectionsupervisionforcounting/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0b2047a9ff08bb03a22285f8fbcfcfa4818017eb9a7d3bb4b899e4724fe827d5
+size 641637
diff --git a/adaptivedilatednetworkwithselfcorrectionsupervisionforcounting/layout.json b/adaptivedilatednetworkwithselfcorrectionsupervisionforcounting/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..228028c4eab4e41f8547cc460fdf039b591d95d2
--- /dev/null
+++ b/adaptivedilatednetworkwithselfcorrectionsupervisionforcounting/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c28e83086429a061737b93f375b611110305379d187e5debb71b8f1f2c157975
+size 404084
diff --git a/adaptivefractionaldilatedconvolutionnetworkforimageaestheticsassessment/42cc3e84-b5e9-4c30-8c5e-259448fd2dda_content_list.json b/adaptivefractionaldilatedconvolutionnetworkforimageaestheticsassessment/42cc3e84-b5e9-4c30-8c5e-259448fd2dda_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..c10f3845139afeecda8a73724790b60f6a972b69
--- /dev/null
+++ b/adaptivefractionaldilatedconvolutionnetworkforimageaestheticsassessment/42cc3e84-b5e9-4c30-8c5e-259448fd2dda_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f5764ed53b755fb85643b4559b78b31664867353fa89e8292bd2a1d1ef386acf
+size 75974
diff --git a/adaptivefractionaldilatedconvolutionnetworkforimageaestheticsassessment/42cc3e84-b5e9-4c30-8c5e-259448fd2dda_model.json b/adaptivefractionaldilatedconvolutionnetworkforimageaestheticsassessment/42cc3e84-b5e9-4c30-8c5e-259448fd2dda_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..d39ed4a85dc67931fa686f7c509e5162ac3d1efa
--- /dev/null
+++ b/adaptivefractionaldilatedconvolutionnetworkforimageaestheticsassessment/42cc3e84-b5e9-4c30-8c5e-259448fd2dda_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:599839611b284a22b7d18f66005cce9ce97b0bcf7629d9b128c0e3fe44dfad93
+size 90192
diff --git a/adaptivefractionaldilatedconvolutionnetworkforimageaestheticsassessment/42cc3e84-b5e9-4c30-8c5e-259448fd2dda_origin.pdf b/adaptivefractionaldilatedconvolutionnetworkforimageaestheticsassessment/42cc3e84-b5e9-4c30-8c5e-259448fd2dda_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..75db582a07427d91e380a8f43566bb949dbb40cc
--- /dev/null
+++ b/adaptivefractionaldilatedconvolutionnetworkforimageaestheticsassessment/42cc3e84-b5e9-4c30-8c5e-259448fd2dda_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:241bf4ff9a44776a6dcc1ae82a3ed674ed28d90a3ceddee5c9c940e06583147f
+size 1310060
diff --git a/adaptivefractionaldilatedconvolutionnetworkforimageaestheticsassessment/full.md b/adaptivefractionaldilatedconvolutionnetworkforimageaestheticsassessment/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f5ee82836c73ffe31ae6cd15d4267efa6442e736
--- /dev/null
+++ b/adaptivefractionaldilatedconvolutionnetworkforimageaestheticsassessment/full.md
@@ -0,0 +1,293 @@
+# Adaptive Fractional Dilated Convolution Network for Image Aesthetics Assessment
+
+Qiuyu Chen $^{1}$ , Wei Zhang $^{2}$ , Ning Zhou $^{3}$ , Peng Lei $^{3}$ , Yi Xu $^{2}$ , Yu Zheng $^{4}$ , Jianping Fan $^{1}$
+
+$^{1}$ Department of Computer Science, UNC Charlotte
+
+$^{2}$ Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University
+
+$^{3}$ Amazon Lab126
+
+$^{4}$ School of Cyber Engineering, Xidian University
+
+{qchen12,jfan}@uncc.edu,{weizh,yxu17}@fudan.edu.cn
+
+{ningzho,leipeng}@amazon.com,yuzheng.xidian@gmail.com
+
+# Abstract
+
+To leverage deep learning for image aesthetics assessment, one critical but unsolved issue is how to seamlessly incorporate the information of image aspect ratios to learn more robust models. In this paper, an adaptive fractional dilated convolution (AFDC), which is aspect-ratio-embedded, composition-preserving and parameter-free, is developed to tackle this issue natively at the convolutional kernel level. Specifically, the fractional dilated kernel is adaptively constructed according to the image aspect ratio, where the interpolation of the two nearest integer dilated kernels is used to cope with the misalignment of fractional sampling. Moreover, we provide a concise formulation for mini-batch training and utilize a grouping strategy to reduce computational overhead. As a result, it can be easily implemented by common deep learning libraries and plugged into popular CNN architectures in a computation-efficient manner. Our experimental results demonstrate that our proposed method achieves state-of-the-art performance on image aesthetics assessment over the AVA dataset [18].
+
+# 1. Introduction
+
+This paper addresses image aesthetics assessment, where the goal is to predict an aesthetic score for a given image. Automatic image aesthetics assessment has many applications such as album photo recommendation, auxiliary photo editing, and multi-shot photo selection. The task is challenging because it entails computations of both global cues (e.g., scene, exposure control, color combination) and localization information (e.g., composition, photographic angle).
+
+Figure 1: Image warping and cropping are widely used for data augmentation, but they alter the object aspect ratios and composition, causing different aesthetic perceptions. Assigning the ground-truth aesthetic score of the original image to the altered image may introduce label noise and deteriorate the discriminative ability.
+
+Early approaches extract aesthetic features according to photographic rules (lighting, contrast) and global image composition (symmetry, rule of thirds), which require extensive manual design [3, 5, 13, 19, 25, 28]. However, manual design of such aesthetic features is not a trivial task even for experienced photographers. Recent work adopts deep convolutional neural networks for image aesthetics assessment by learning models in an end-to-end fashion. These models mainly use three types of formulations: binary classification labels [12, 15, 16, 29, 20, 33, 23], scores [17, 27, 8], and rankings [14, 22].
+
+In the aforementioned methods, the backbone networks are usually adopted from an image classification network. Data augmentation methods, i.e., image cropping and warping, are widely used to prevent overfitting in the image recognition task. However, a shortcoming is that the compositions and object aspect ratios are altered, which may introduce label noise and harm the task of aesthetics assessment (Fig. 1). A succinct solution proposed in MNA-CNN [16] is to feed one original-size image into the network at a time during training and test (bottom stream in Fig. 2). A major constraint of this approach is that images with different aspect ratios cannot be concatenated into batches, because the aspect ratio of each image should be preserved. This slows down both training and inference.
+
+In this paper, we aim to develop a novel adaptive fractional dilated convolution that is mini-batch compatible. As shown in the top row of Fig. 2, our network adaptively dilates the convolution kernels applied to the composition-preserving warped images according to the image aspect ratios, such that the effective receptive field of each dilated convolution kernel is the same as that of the regular one. Specifically, as illustrated in Fig. 3, the fractional dilated convolution kernel is adaptively interpolated from the two nearest integer dilated kernels with the same kernel parameters. Thus no extra learning parameters are introduced.
+
+The benefits of our method can be summarized as follows: (a) By embedding the information of aspect ratios to construct the convolution layers adaptively, it can explicitly relate the aesthetic perception to the image aspect ratios while preserving the composition; (b) It is parameter-free and thus can be easily plugged into popular network architectures; (c) Through the derivation, we show that our proposed method is mini-batch compatible and easily implemented by common deep learning libraries (e.g., PyTorch, TensorFlow); (d) A grouping strategy is introduced to reduce the computational overhead for efficient training/inference; (e) We achieve state-of-the-art performance for image aesthetics assessment on the AVA dataset [18].
+
+# 2. Related Work
+
+In this section, we provide a brief review of some of the most relevant works on: (a) image aesthetics assessment; (b) preserving image aspect ratios and compositions; (c) dilated convolution; (d) dynamic kernels.
+
+Image Aesthetics Assessment. The existing methods on image aesthetics assessment can be mainly categorized into three formulations: (1) Binary (or mean) aesthetic label: Kao et al. [12] propose a multi-task CNN, A&C CNN, which jointly learns both the category classification and the aesthetic perception. Mai et al. [16] address the composition problem in image aesthetics assessment and aggregate multiple sub-networks with different sizes of adaptive pooling layers. Ma et al. [15] feed the patches sampled from the saliency map of the original image into VGG16 [24] with an aggregation layer, where a layout-aware subnet considering patch localizations is leveraged to get the final prediction. Sheng et al. [23] adaptively assign larger weights to meaningful training cropping patches according to the prediction errors during training and aggregate the multi-patch predictions during test. Hosu et al. [8] propose to incorporate multi-level spatially pooled features from the intermediate blocks in a computation-efficient manner. (2) Ranking score: Instead of classification or regression formulations, a joint loss of Euclidean and ranking terms [14] is proposed and
+
+a triplet ranking loss [22] is developed. (3) Score distribution: To address the ordered score distribution, Hossein Talebi and Peyman Milanfar [27] introduce the Earth Mover's Distance as a loss function to train a 10-scale score distribution. Since image aesthetics is a subjective property and outlier opinions may appear, Naila Murray and Albert Gordo [17] introduce the Huber loss to train a 10-scale score distribution. Besides using the mean score of multiple raters, Ren et al. [20] propose a sub-network to learn a personal rating offset along with the generic aesthetic network and output a personalized score prediction.
+
+Preserving Image Aspect Ratios and Compositions. Multi-patch sampling over the original images is used to preserve the aspect ratios and proves to be effective [15, 23, 8]. A major concern is that sampling patches from the original image may alter essential aesthetic factors (color histogram, object-background ratio) of the original image, so the complete aesthetic features are lost. In contrast, our proposed method adaptively restores the original receptive fields from the composition-preserving warped images in an end-to-end fashion. The approach of MNA-CNN [16] is the most related to ours, as it preserves image aspect ratios and compositions by feeding the original image into the network, one at a time. A major constraint of this approach is that images with different aspect ratios cannot be concatenated into batches because the aspect ratio of each image should be preserved, which tends to slow down training and inference. Our proposed method, on the other hand, is mini-batch compatible and can be easily implemented by common deep learning libraries.
+
+Dilated Convolution. Our adaptive fractional dilated convolution is motivated by the dilated convolution [31] and atrous convolution [1] in semantic segmentation, but it differs from them in several aspects: (1) Our adaptive fractional dilated convolution restores the receptive fields on warped images to match those of regular convolution on the original images, while dilated convolution is proposed to retain a large receptive field without down-sampling. (2) The dilation rate can be fractional in our method. (3) The construction of the fractional dilated kernel is dynamic with respect to the aspect ratios.
+
+Dynamic Kernels. Deformable convolution [2] is proposed to construct the receptive fields dynamically and adaptively by learning better sampling in the convolutional layer. Our proposed method differs from deformable convolution in two respects: (a) Deformable convolution learns better sampling locations in the convolutional layers, whereas our method adapts the receptive fields to the original aspect ratios. Therefore, our method is parameter-free, while deformable convolution requires parameterized layers to predict the sampling indices. (b) Our method provides a concise formulation for mini-batch training and can be easily implemented by common deep learning frameworks. In contrast, deformable convolution needs to rewrite the convolution operation in CUDA and tends to be slow due to the indexing operation.
+
+Figure 2: Overview of the adaptive fractional dilated CNN (top) and comparison with a vanilla CNN (bottom): each fractional dilated Conv (top), operated on the warped input, adaptively dilates to the same receptive field as the vanilla Conv (bottom) operated on the original image. This (a) makes the network mini-batch compatible by composition-preserving warping instead of feeding original-size images, and (b) preserves aesthetic features related to aspect ratios by adaptive kernel dilation.
+
+# 3. Adaptive Fractional Dilated Convolution
+
+In this section, we first introduce the adaptive kernel interpolation to tackle the misalignment caused by fractional sampling in the proposed method. We then derive a concise formulation for it in the mini-batch setting and discuss its computational overhead. Finally, we describe the loss function and an additional composition-aware structure for the composition-preserving warped batch.
+
+# 3.1. Adaptive Kernel Interpolation
+
+As stated in Section 1, cropping modifies the composition of the original image and causes the loss of some critical aesthetic information. As a result, image cropping introduces some label noise in the training stage. To preserve the composition, we first warp the image into a fixed size. For network training, such a simple image warping approach suffers from overfitting due to the absence of data augmentation. Motivated by SPP [6], we adopt random-size warping during the training stage and feed the mini-batch into the networks with global pooling or SPP modules, which can naturally handle arbitrary-size batch inputs. Overall, the random-size warping provides effective data augmentation for training scale-invariant networks while preserving the image compositions.
+
+To cope with the distortion induced by warping, the receptive field of the convolution kernel should be consistent with that of a kernel operated on the image at its original aspect ratio. Our proposed approach tackles the distortion issue by adaptively dilating the kernels to the original aspect ratio, as illustrated in Fig. 2. Since the aspect ratio can be fractional, the dilation rate can be fractional as well. To tackle the misalignment of feature sampling, we use the linear interpolation of the two nearest integer dilation rates to construct the fractional dilated kernel.
+
+Figure 3: Illustration of kernel interpolation: linear interpolation of the two nearest integer dilated kernels, sharing the same kernel parameters, is used to tackle the sampling misalignment caused by fractional dilation rates.
+
+Suppose that $w$ and $h$ represent the width and height of the original image, respectively. If $h > w$ and $\frac{h}{w}$ is not an integer, as illustrated in Fig. 3, the AFDC (adaptive fractional dilated convolution) kernel $k_{AFDC}^{n}$ in the $n$-th layer is constructed as:
+
+$$
+k _ {A F D C} ^ {n} = \left(\lceil r \rceil - r\right) k _ {\left(1, \lfloor r \rfloor\right)} ^ {n} + \left(r - \lfloor r \rfloor\right) k _ {\left(1, \lceil r \rceil\right)} ^ {n} \tag {1}
+$$
+
+where $r = \frac{h}{w}$. For any non-integer $r$, it lies in the interval $\left[\lfloor r\rfloor ,\lceil r\rceil \right]$ whose length is equal to 1. $\lfloor r\rfloor$ and $\lceil r\rceil$ are the two integers nearest to $r$. $k_{(1,\lfloor r\rfloor)}^n$ and $k_{(1,\lceil r\rceil)}^n$ are the two dilated kernels with the nearest integer dilation rates $\lfloor r\rfloor$ and $\lceil r\rceil$ for the $n$-th layer, respectively. More specifically, as shown in Fig. 3, for $r\in [1,2]$, $\lfloor r\rfloor = 1$ and $\lceil r\rceil = 2$. We note that both $k_{(1,1)}^n$ and $k_{(1,2)}^n$ inherit the same learning parameters from the original kernel.
+
+
+Figure 4: Illustration of mini-batch compatibility: the distributive property of the convolution operation (c.f. Eq. (3)) makes the fractional dilated Conv easy to implement and compatible with mini-batch computation using a zero-padded weight vector/matrix (c.f. Eq. (5)).
+
+Likewise, if $w > h$ and $\frac{w}{h}$ is not an integer, then with $r = \frac{w}{h}$ we choose:
+
+$$
+k _ {A F D C} ^ {n} = (\lceil r \rceil - r) k _ {(\lfloor r \rfloor , 1)} ^ {n} + (r - \lfloor r \rfloor) k _ {(\lceil r \rceil , 1)} ^ {n} \tag {2}
+$$
+
+If the aspect ratio $r$ is an integer, it is enough to employ the corresponding integer dilated kernel.
+
+Therefore, the fractional dilated kernel is adaptively constructed for each image with respect to $w$ and $h$ as shown in Fig. 3. In addition, all the integer dilation kernels share the same kernel parameters and thus no extra learning parameters are introduced.
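+
+As a concrete illustration of Eq. (1) and Eq. (2), the following Python sketch (our own illustration, not the authors' released code; `afdc_interp_weights` and `max_rate` are hypothetical names) computes the zero-padded interpolation weights over a fixed set of integer dilation rates for a single image:
+
+```python
+import math
+
+def afdc_interp_weights(h, w, max_rate=4):
+    """Interpolation weights of Eq. (1)/(2) for one image of height h, width w.
+
+    Returns a dict mapping integer dilation rates (rate_i, rate_j) to weights;
+    only the two rates nearest to the aspect ratio are non-zero.
+    Assumes the aspect ratio does not exceed max_rate.
+    """
+    r = h / w if h >= w else w / h                 # aspect ratio >= 1
+    lo, hi = math.floor(r), math.ceil(r)
+    weights = {}
+    for rate in range(1, max_rate + 1):
+        # dilate along the longer image side: (1, rate) if h >= w, else (rate, 1)
+        key = (1, rate) if h >= w else (rate, 1)
+        if lo == hi == rate:                       # r is an integer
+            weights[key] = 1.0
+        elif rate == lo:
+            weights[key] = float(hi - r)           # (ceil(r) - r)
+        elif rate == hi:
+            weights[key] = float(r - lo)           # (r - floor(r))
+        else:
+            weights[key] = 0.0                     # zero-padded entries of w
+    return weights
+```
+
+For example, an image with $h = 3$ and $w = 2$ (so $r = 1.5$) yields weight 0.5 on the $(1,1)$ and $(1,2)$ dilated kernels and zero elsewhere, matching Fig. 3.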
+
+# 3.2. Mini-Batch Computation and Implementation
+
+To implement the dynamic kernel interpolation in Eq. (1) and Eq. (2) directly, we would need to rewrite kernel-level code because the kernels differ across images in a mini-batch. However, through the following derivation, we show that the proposed method can be easily implemented by common deep learning libraries, e.g., PyTorch and TensorFlow.
+
+Using the distributive property of convolution operation, the transformation of the feature maps generated by the adaptive fractional dilated Conv kernels in Eq. (1) can be formulated as:
+
+$$
+\begin{aligned} f_{n+1} &= k_{AFDC}^{n} * f_{n} \\ &= \left[ \left(\lceil r \rceil - r\right) k_{(1,\lfloor r \rfloor)}^{n} + \left(r - \lfloor r \rfloor\right) k_{(1,\lceil r \rceil)}^{n} \right] * f_{n} \\ &= \left(\lceil r \rceil - r\right) k_{(1,\lfloor r \rfloor)}^{n} * f_{n} + \left(r - \lfloor r \rfloor\right) k_{(1,\lceil r \rceil)}^{n} * f_{n} \end{aligned} \tag{3}
+$$
+
+where $f_{n}$ denotes the feature maps of the $n$-th layer and $*$ denotes convolution.
+
+In mini-batch training and inference, we can construct multiple kernels with different dilation rates $(rate_k^i, rate_k^j)$ from the same kernel parameters and then use a zero-padded interpolation weight vector $\mathbf{w}$ to compute the operation adaptively for each image as:
+
+$$
+\begin{array}{l} f _ {n + 1} = k _ {A F D C} ^ {n} * f _ {n} \\ = \sum_ {k} w _ {\left(r a t e _ {k} ^ {i}, r a t e _ {k} ^ {j}\right)} k _ {\left(r a t e _ {k} ^ {i}, r a t e _ {k} ^ {j}\right)} ^ {n} * f _ {n} \tag {4} \\ = \mathbf {w} \widetilde {\mathbf {f}} _ {n} \\ \end{array}
+$$
+
+which is just the inner product of two vectors:
+
+$$
+\mathbf {w} = \left[ w _ {\left(r a t e _ {1} ^ {i}, r a t e _ {1} ^ {j}\right)}, \dots , w _ {\left(r a t e _ {K} ^ {i}, r a t e _ {K} ^ {j}\right)} \right] \tag {5}
+$$
+
+and
+
+$$
+\widetilde {\mathbf {f}} _ {n} = \left[ k _ {\left(r a t e _ {1} ^ {i}, r a t e _ {1} ^ {j}\right)} ^ {n} * f _ {n}, \dots , k _ {\left(r a t e _ {K} ^ {i}, r a t e _ {K} ^ {j}\right)} ^ {n} * f _ {n} \right] ^ {\top} \tag {6}
+$$
+
+where the number of dilation kernels is $K$ . As shown in Fig. 4, the interpolation weight $w_{(rate_k^i, rate_k^j)}$ for each instance is either $w_{(rate_k^i, 1)}$ or $w_{(1, rate_k^j)}$ , defined as follows:
+
+$$
+\begin{aligned} w_{(rate^{i},1)} &= \begin{cases} r - (rate^{i} - 1), & \text{if } rate^{i} - r \in [0, 1) \\ (rate^{i} + 1) - r, & \text{if } rate^{i} - r \in (-1, 0) \\ 0, & \text{otherwise} \end{cases} \\ w_{(1,rate^{j})} &= \begin{cases} r - (rate^{j} - 1), & \text{if } rate^{j} - r \in [0, 1) \\ (rate^{j} + 1) - r, & \text{if } rate^{j} - r \in (-1, 0) \\ 0, & \text{otherwise} \end{cases} \end{aligned} \tag{7}
+$$
+
+In a mini-batch, suppose that the batch size is $B$; then the $(n+1)$-th feature maps $\mathbf{F}_{n + 1}$ can be formulated as:
+
+$$
+\mathbf {F} _ {n + 1} = \left[ \mathbf {f} _ {n + 1} ^ {1}, \dots , \mathbf {f} _ {n + 1} ^ {B} \right] = \left[ \mathbf {w} ^ {1} \widetilde {\mathbf {f}} _ {n} ^ {1}, \dots , \mathbf {w} ^ {B} \widetilde {\mathbf {f}} _ {n} ^ {B} \right] \tag {8}
+$$
+
+The computation of the above $[\widetilde{\mathbf{f}}_n^1,\dots,\widetilde{\mathbf{f}}_n^B ]$ can be done efficiently in the mini-batch as:
+
+$$
+\left[ k _ {\left(r a t e _ {1} ^ {i}, r a t e _ {1} ^ {j}\right)} ^ {n} * \mathbf {F} _ {n}, \quad k _ {\left(r a t e _ {2} ^ {i}, r a t e _ {2} ^ {j}\right)} ^ {n} * \mathbf {F} _ {n}, \quad \dots , \quad k _ {\left(r a t e _ {K} ^ {i}, r a t e _ {K} ^ {j}\right)} ^ {n} * \mathbf {F} _ {n} \right] ^ {\top} \tag {9}
+$$
+
+We note that the activation function and batch normalization are omitted in the formulas for concise illustration.
+
+The formula in Eq. (8) can be interpreted as a dot product followed by a sum reduction between the interpolation weight matrix $\mathbf{W}$ and Eq. (9), which can thus be efficiently implemented by common deep learning frameworks (PyTorch, TensorFlow, etc.). Each integer dilated Conv, $k_{(rate_k^i, rate_k^j)}^n * \mathbf{F}_n$ in Eq. (9), is computed as a normal dilated Conv layer with the shared learning parameters.
+
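+For reference, a minimal PyTorch sketch of Eq. (8)/(9) is given below (the module name `AFDConv2d` and its interface are our own assumptions, not the paper's released code). Each integer dilation reuses the same $3 \times 3$ weight tensor, and the zero-padded per-image weights of Eq. (5)/(7) mix the resulting feature maps:
+
+```python
+import torch
+import torch.nn.functional as F
+from torch import nn
+
+class AFDConv2d(nn.Module):
+    """Adaptive fractional dilated 3x3 convolution, mini-batch version (sketch)."""
+
+    def __init__(self, in_ch, out_ch, rates=((1, 2), (1, 1), (2, 1))):
+        super().__init__()
+        self.rates = rates                                  # K integer dilation rates
+        self.weight = nn.Parameter(torch.empty(out_ch, in_ch, 3, 3))
+        nn.init.kaiming_normal_(self.weight)
+
+    def forward(self, x, interp_w):
+        # x: (B, C_in, H, W); interp_w: (B, K) zero-padded weights, one row per image.
+        outs = []
+        for (di, dj) in self.rates:
+            # Eq. (9): the same shared weight applied with each integer dilation;
+            # padding = dilation keeps the spatial size for a 3x3 kernel.
+            outs.append(F.conv2d(x, self.weight, dilation=(di, dj), padding=(di, dj)))
+        stacked = torch.stack(outs, dim=1)                  # (B, K, C_out, H, W)
+        w = interp_w.view(x.size(0), len(self.rates), 1, 1, 1)
+        return (w * stacked).sum(dim=1)                     # Eq. (8): per-image weighted sum
+```
+
+Only the two kernels with non-zero weights actually contribute for each image; the grouping strategy below keeps the number of integer-dilated convolutions that are computed per batch small.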
+
+Figure 5: Grouping strategy to reduce computational overhead: The integer dilated Convs can be shared by properly grouped images according to aspect ratios.
+
+| Network | #Params | #Mult-Adds | Speed (train) | Speed (test) |
+| --- | --- | --- | --- | --- |
+| VGG16 | 138M | 15.3G | 8.14 it/s | 12.91 it/s |
+| 2-dilation | 138M | 30.7G | 2.70 it/s | 3.85 it/s |
+| 7-dilation | 138M | 109.1G | 0.73 it/s | 0.93 it/s |
+| ResNet50 | 25.6M | 3.5G | 12.49 it/s | 22.80 it/s |
+| 2-dilation | 25.6M | 5.6G | 8.32 it/s | 14.81 it/s |
+| 2-dilation* | 25.6M | 6.5G | 6.20 it/s | 9.88 it/s |
+| 7-dilation | 25.6M | 10.6G | 3.22 it/s | 5.28 it/s |
+| 7-dilation* | 25.6M | 18.8G | 2.08 it/s | 3.12 it/s |
+
+Table 1: Computation comparison: the training batch size is set to 16 and the test batch size to 32. The speed is the average over 100 iterations on a single GTX 1080Ti. The fractional dilated Conv is embedded in all BottleNets of ResNet50, while * denotes additionally embedding dilation in the first $7 \times 7$ Conv layer as well.
+
+Computational overhead The computational overhead is determined by the number of integer dilated kernels and the number of convolutional layers whose kernel sizes are not $1 \times 1$. As shown in Table 1, the BottleNet in ResNet50 [7] contains two $1 \times 1$ kernels and one $3 \times 3$ kernel. Since only the $3 \times 3$ kernel introduces computational overhead, the computational cost for 2 integer dilations is roughly 1.5 times that of the original model, while VGG16 [24] consists mostly of $3 \times 3$ kernels and thus the computational cost is approximately 2 times. Some additional computational overhead is caused by the interpolation operation over different dilation kernels.
+
+Reducing overhead with a grouping strategy In practice, the aspect ratios $\frac{w}{h}$ of most images fall into $[\frac{1}{2}, 2]$, e.g., $97.8\%$ of the training and testing images in the AVA [18] dataset. Training efficiency can be optimized by grouping batches, e.g., training with three dilation kernels, $DilationRates = \{(2, 1), (1, 1), (1, 2)\}$, for most batches, namely those whose images have aspect ratios in $[\frac{1}{2}, 2]$. For datasets with more diverse aspect ratios, a more fine-grained grouping strategy can be applied, as sketched below. As illustrated in Fig. 5, images with aspect ratio range [4, 3] (above) and $[\frac{1}{2}, 1]$ (below) share the valid integer dilated Convs in the grouped batches.
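+
+A possible batching helper in this spirit (the function name and the exact grouping thresholds are our own assumptions, guided by the statistics above) would look like:
+
+```python
+def dilation_group(h, w):
+    """Pick the shared set of integer dilation rates for an image so that
+    images placed in the same batch reuse the same integer-dilated Convs
+    (cf. Fig. 5)."""
+    r = max(h / w, w / h)
+    if r <= 2:
+        # covers roughly 97.8% of AVA images
+        return ((2, 1), (1, 1), (1, 2))
+    # a finer-grained group for more elongated images
+    return ((4, 1), (3, 1), (2, 1), (1, 1), (1, 2), (1, 3), (1, 4))
+```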
+
+Parallel optimization The calculation of multiple integer dilated kernels in each convolutional layer is equivalent to broadening the output channel size by the number of dilation kernels. In other words, the computation of the dilated Conv group, $\{k_{(rate_k^i, rate_k^j)}^n * \mathbf{F}_n\}$, can be optimized through parallel computing. WideResNet [32] claims that increasing the width of Conv layers is more accommodating to the nature of GPU computation and helps balance computations more effectively. However, as shown in Table 1, the actual training and testing speeds are approximately linearly correlated with #Mult-Adds, which could be attributed to the current implementation of the framework (TensorFlow) and can be improved by further parallel optimization.
+
+We note that many base networks are stacked mainly with a mix of $1 \times 1$ and $3 \times 3$ kernels, so they are suitable for embedding AFDC in terms of training and inference speed, i.e., [7, 11, 10, 32, 30] in the ResNet family and [9, 21, 34] in the MobileNet family. Besides, the adaptation is easy because our method is parameter-free. Overall, the random-size warping preserves the composition of the original image and also provides data augmentation to train the network with scale invariance. AFDC can adaptively construct fractional dilated kernels according to the spatial distortion information in a computation-efficient manner.
+
+# 3.3. Composition-Aware Structure and Loss
+
+The commonly-used network structures for image classification usually incorporate global pooling before the fully connected layers [30, 7, 11, 26, 10]. Global pooling eliminates spatial variance, which is helpful for image recognition by training the networks to be spatially invariant, but it causes the loss of localization information for image aesthetics assessment. Motivated by spatial pyramid pooling [6] and MNA-CNN-Scene [16], we make several efforts to learn spatial image composition information, as sketched in the code below. First, we use multiple adaptive pooling modules [6] to output $g_{i} \times g_{i}$ grids and feed them into fully-connected layers (c.f. Fig. 2). The localization factors for image aesthetics assessment are highly correlated with the image symmetry and the overall image structure. Then, we aggregate the outputs after the fully-connected layers by concatenation. To limit the number of model parameters and prevent overfitting, the module of each adaptive pooling layer outputs $\frac{\text{num\_features}}{\text{num\_grids}}$ channels.
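+
+One plausible reading of this head is sketched below in PyTorch (class and argument names such as `SPPHead` are our own; the paper does not provide this code), with each pooling scale followed by its own fully-connected layer whose output width is `num_features // len(grids)`:
+
+```python
+import torch
+from torch import nn
+
+class SPPHead(nn.Module):
+    """Sketch of a composition-aware head: adaptive pooling at several grid
+    sizes, a per-scale linear layer, and concatenation."""
+
+    def __init__(self, num_features, grids=(1, 2, 3), num_scores=10):
+        super().__init__()
+        self.pools = nn.ModuleList([nn.AdaptiveAvgPool2d(g) for g in grids])
+        per_scale = num_features // len(grids)   # limit parameters per scale
+        self.fcs = nn.ModuleList(
+            [nn.Linear(num_features * g * g, per_scale) for g in grids])
+        self.classifier = nn.Linear(per_scale * len(grids), num_scores)
+
+    def forward(self, feat):                     # feat: (B, C, H, W)
+        parts = [fc(pool(feat).flatten(1)) for pool, fc in zip(self.pools, self.fcs)]
+        # concatenate the multi-scale descriptors and predict a 10-scale distribution
+        return torch.softmax(self.classifier(torch.cat(parts, dim=1)), dim=1)
+```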
+
+Following the work in [27], we train our network to predict a 10-scale score distribution with a softmax function on top of the network. To get both the mean score prediction and the binary classification prediction, we calculate the weighted sum of the score distribution, $\sum_{i=1}^{10} i \cdot p_i$. We use the ordered distribution distance, the Earth Mover's Distance [27], as our loss function:
+
+$$
+E M D (p, \hat {p}) = \left(\frac {1}{N} \sum_ {k = 1} ^ {N} \left| C D F _ {p} (k) - C D F _ {\hat {p}} (k) \right| ^ {r}\right) ^ {1 / r} \tag {10}
+$$
+
+where $CDF_{p}(k) = \sum_{i=1}^{k} p_i$ is the cumulative distribution function.
+
+| network | cls. acc. | MSE | EMD | SRCC | LCC |
+| --- | --- | --- | --- | --- | --- |
+| NIMA (VGG16) [27] | 0.8060 | - | 0.052 | 0.592 | 0.610 |
+| NIMA (Inception-v2) [27] | 0.8151 | - | 0.050 | 0.612 | 0.636 |
+| NIMA (ResNet50, our implementation) | 0.8164 | 0.3169 | 0.0492 | 0.6166 | 0.6388 |
+| Vanilla Conv (ResNet50) | 0.8172 | 0.3101 | 0.0481 | 0.6002 | 0.6234 |
+| AFDC (random-size cropping pretrain) | 0.8145 | 0.3212 | 0.0520 | 0.6134 | 0.6354 |
+| AFDC (aspect-ratio-preserving pretrain) | 0.8295 | 0.2743 | 0.0445 | 0.6410 | 0.6653 |
+| AFDC + SPP | 0.8324 | 0.2706 | 0.0447 | 0.6489 | 0.6711 |
+
+Table 2: Test result comparison on AVA [18]: The evaluation metrics follow [27]. Reported accuracy values (cls. acc.) are based on binary image classification. MSE (mean squared error), LCC (linear correlation coefficient) and SRCC (Spearman's rank correlation coefficient) are computed between predicted and ground-truth mean scores. EMD measures the closeness of the predicted and ground-truth rating distributions with $r = 1$ in Eq. (10). AFDC (random-size cropping pretrain) transfers the model trained with the widely used data augmentation method on ImageNet, while AFDC (aspect-ratio-preserving pretrain) transfers the model trained with aspect-ratio-preserving data augmentation.
+
+As stated in Section 1 and shown by the results in [27], predicting the score distribution can provide more information about image aesthetics than the mean score or binary classification labels.
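+
+A compact PyTorch sketch of the loss in Eq. (10) (our illustration; `emd_loss` is a hypothetical name and the released implementation may differ) is:
+
+```python
+import torch
+
+def emd_loss(p, p_hat, r=2):
+    """EMD loss of Eq. (10) for 10-scale score distributions.
+
+    p, p_hat: (B, N) ground-truth and predicted distributions (softmax outputs);
+    r=2 follows the training setup described in Sec. 4.1.
+    """
+    cdf_p = torch.cumsum(p, dim=1)        # CDF_p(k)
+    cdf_q = torch.cumsum(p_hat, dim=1)    # CDF_p_hat(k)
+    n = p.size(1)
+    per_sample = ((cdf_p - cdf_q).abs().pow(r).sum(dim=1) / n).pow(1.0 / r)
+    return per_sample.mean()
+```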
+
+# 4. Experimental Results
+
+Following [27, 17, 15, 16, 12], we have evaluated our proposed method on the AVA dataset [18]. AVA contains around 250,000 images, and each image has a 10-scale score distribution rated by roughly 200 people. For a fair comparison, we use the same random split strategy as [27, 22, 17, 15, 16, 18] to generate 235,528 images for training and 20,000 images for testing.
+
+# 4.1. Implementation Details
+
+We use ResNet-50 [7] as the backbone network due to its efficiency in computation and graphics memory, as discussed in Section 3.2. We replace all the $3 \times 3$ Conv layers in each BottleNet with our proposed adaptive fractional dilated Conv layers. It is easy to plug AFDC into common CNN architectures since it does not introduce any extra model parameters. We use the same EMD loss as in Eq. (10) with $r = 2$ for better back-propagation. To accelerate training, we use the grouping strategy discussed in Section 3.2. For the first 12 epochs, we train the model with three dilation kernels, $1 \times 2$, $1 \times 1$, $2 \times 1$, on the grouped images, since the aspect ratios of $97.8\%$ of the training and validation images fall within $\left[\frac{1}{2}, 2\right]$. Then we train the model with seven dilation kernels, $1 \times 4$, $1 \times 3$, $1 \times 2$, $1 \times 1$, $2 \times 1$, $3 \times 1$, $4 \times 1$, for the remaining 6 epochs and select the best model according to the results on the validation set. We note that training and test speed could be further accelerated by a more fine-grained grouping strategy. We transfer the network parameters (pre-trained on ImageNet) before the fully connected layer and set the initial learning rate to 0.01 for the first 6 epochs. Then we dampen the learning rate to 0.001 for the rest of the training epochs. We find that setting the initial learning rate to 0.001 with a decay rate of 0.95 after every 10 epochs produces comparable results but converges more slowly. The weight and bias momentums are set to 0.9.
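+
+The schedule above can be summarized as data (the dictionary structure and names below are our own; the epoch boundaries, dilation sets, and learning rates are taken from the text):
+
+```python
+# Hypothetical encoding of the training schedule described in Sec. 4.1.
+TRAIN_SCHEDULE = [
+    # epochs 1-6: three dilation kernels, initial learning rate 0.01
+    {"epochs": (1, 6),   "rates": [(1, 2), (1, 1), (2, 1)], "lr": 0.01},
+    # epochs 7-12: same dilation kernels, learning rate dampened to 0.001
+    {"epochs": (7, 12),  "rates": [(1, 2), (1, 1), (2, 1)], "lr": 0.001},
+    # remaining 6 epochs: seven dilation kernels
+    {"epochs": (13, 18), "rates": [(1, 4), (1, 3), (1, 2), (1, 1),
+                                   (2, 1), (3, 1), (4, 1)], "lr": 0.001},
+]
+MOMENTUM = 0.9
+```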
+
+# 4.2. Ablation Study
+
+In this section, we introduce the steps to build the final model and analyze the effects of each module step by step: (1) Replacing random cropping with composition-preserving random warping; (2) Replacing vanilla Conv with AFDC in the aspect-ratio-preserving pre-trained model on ImageNet; (3) Adding SPP modules to learn image composition.
+
+Random Warping. For data augmentation, input images in NIMA [27] are rescaled to $256 \times 256$, and then a crop of size $224 \times 224$ is randomly extracted. They also report that training with random crops without rescaling produces results that are not compelling, due to the inevitable changes in image compositions. In order to preserve the complete composition, we replace random cropping with random-size warping by warping each batch to a square size in [224, 320] at each iteration, as sketched below. The network suffers from overfitting without random warping. We note that non-square-size warping may further help with generalization and potentially train AFDC more robustly.
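+
+A minimal sketch of this augmentation (the function name is ours; the paper only specifies the size range [224, 320]) might be:
+
+```python
+import random
+import torch.nn.functional as F
+
+def random_square_warp(batch, low=224, high=320):
+    """Warp every image in the batch to the same randomly chosen square size,
+    preserving composition while still providing scale augmentation."""
+    size = random.randint(low, high)          # one size per iteration
+    return F.interpolate(batch, size=(size, size), mode="bilinear",
+                         align_corners=False)
+```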
+
+From Table 2, we obtain slightly better results (Vanilla Conv (ResNet50)) compared with NIMA [27], using the same loss (EMD loss) and network (ResNet50, our implementation) as NIMA [27]. The comparable results show that random warping is an effective data augmentation alternative that preserves the image composition.
+
+Aspect-Ratio-Preserving Pretrain. We replace the vanilla convolution layers with AFDC in ResNet50. In our experiments, we find that fine-tuning the fractional dilated convolution network results in similar validation accuracy to the original network (c.f. AFDC (random-size cropping pretrain) in Table 2).
+
+
+Figure 6: Cropping results for the model trained with global pooling (left) and SPP (right). The two cropping samples are obtained by using the sliding window with the lowest score (green) and the highest score (red). The image is first resized to 256, and a sliding window search with size 224 and stride 10 is applied.
+
+
+Figure 7: The comparison of learning curves: the backbone networks here are all ResNet-50 [7].
+
+The comparable validation results might be attributed to the pre-trained model, which has a distortion-invariant ability. The widely used data augmentation [26] for network training on ImageNet contains random cropping of a window whose size is distributed evenly between $8\%$ and $100\%$ of the original image area, with the aspect ratio constrained to $\left[\frac{3}{4}, \frac{4}{3}\right]$. The model is thus trained with distortion invariance, which runs counter to our method's goal of preserving the original aspect ratio.
+
+For better transfer learning, we pre-train ResNet50 [7] on ImageNet [4] without distortion augmentation. Specifically, we sample a square window covering $8\%$ to $100\%$ of the image area, a slight modification of the data augmentation method in [26]. As shown in Table 2, transferring the model from the aspect-ratio-preserving pre-training improves the overall test results (AFDC (aspect-ratio-preserving pretrain)) by a clear margin over the vanilla Conv counterpart.
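+
+Our reading of this square-window sampling is sketched below (the helper name and the exact sampling details are assumptions on our part):
+
+```python
+import math
+import random
+
+def square_area_crop_params(h, w, min_area=0.08, max_area=1.0):
+    """Sample a square crop window covering 8%-100% of the image area.
+
+    Because the window is square, resizing it to the square network input
+    introduces no aspect-ratio distortion during pre-training.
+    """
+    area = random.uniform(min_area, max_area) * h * w
+    side = min(int(round(math.sqrt(area))), h, w)   # keep the window inside the image
+    top = random.randint(0, h - side)
+    left = random.randint(0, w - side)
+    return top, left, side, side
+```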
+
+Composition-Aware Structure. For better representation learning of composition, we use three different scales for SPP, $\{1 \times 1, 2 \times 2, 3 \times 3\}$. The network with a global pooling layer is equivalent to using only one scale, $1 \times 1$. From Table 2, the network with SPP modules (AFDC+SPP) generates better results compared to the network with the global pooling layer (AFDC). The experimental results show that incorporating the localization information benefits the learning of image compositions. In Fig. 6, the automatic cropping example demonstrates that the ability of localization/composition discrimination is important for finding a good cropping result when the global cues in each cropping box have similar distributions (color, lighting, etc.). The model learned with SPP modules can infer croppings respecting the image compositions, e.g., the relative position of the eye and the face in the example. We also tried $num_{grids} = 5$ and found that the results were not compelling, due to overfitting caused by the extra model parameters. The three different scales are quite consistent with common aesthetic rules (global information, symmetrical composition in the horizontal and vertical directions, and the rule of thirds).
+
+# 4.3. Effectiveness of AFDC
+
+Learning Representation and Generalization From the experiments in Fig. 7, we argue that preserving aspect ratio information is essential for learning photo aesthetics, since our method improves not only the validation results but also the training results. Without extra learning parameters, AFDC improves both the learned representation and the generalization ability. As discussed in Section 1, completely preserving the image aesthetic information removes the label noise caused by random warping and thus facilitates the learning process. The additional aesthetic features related to the aspect ratios allow the model to be more robust and discriminative. To further probe the effects of embedding the aspect ratio, we compare different ways to incorporate the dilated convolution; the results are reported in Table 3. When trained with vanilla Conv (top rows in Table 3), AFDC is superior to the other dilated Conv methods during the test, which implies a potential optimum between the two nearest integer dilated kernels. Training with AFDC (bottom rows in Table 3) further validates the effectiveness of AFDC, which is guided by the helpful supervision of aspect ratios. We note that such experiments are possible because our method is parameter-free.
+
+Overall, our proposed AFDC can learn more discriminative and accurate representations related to aesthetic perception, resulting in better generalization by leveraging extra supervision from the information of image aspect ratios.
+
+Discriminative to Aspect Ratios. To further investigate the response to aspect ratios, we resize the same image into different aspect ratios and test the results with different trained models. As shown in Fig. 8, AFDC (blue line) is discriminative to the change of aspect ratios. The small fluctuation of vanilla Conv (green line) is attributed to the sampling change from the resizing process. The model with random-size cropping pretrain on ImageNet (orange line) is less discriminative in capturing the aesthetic perception related to aspect ratio, due to its distortion-invariant pretraining. Moreover, the proposed method produces a multi-modal score distribution, which reflects that it learns a complex relation between the aspect ratio and the aesthetic perception. This is in line with the notion that designing better aspect ratios or finding aesthetically pleasing photography angles is not trivial.
+
+| Train | Test | cls. acc. | MSE | EMD |
+| --- | --- | --- | --- | --- |
+| vanilla | vanilla | 0.8172 | 0.3101 | 0.0481 |
+| vanilla | constant dilation rate = [2,1] | 0.8072 | 0.5163 | 0.0610 |
+| vanilla | second nearest integer dilation | 0.8091 | 0.5368 | 0.0620 |
+| vanilla | mean of nearest two integer dilations | 0.8117 | 0.4558 | 0.0576 |
+| vanilla | nearest integer dilation | 0.8114 | 0.4322 | 0.0562 |
+| vanilla | adaptive fractional dilation | 0.8132 | 0.4133 | 0.0553 |
+| AFDC | vanilla | 0.8085 | 0.3210 | 0.0581 |
+| AFDC | constant dilation rate = [2,1] | 0.8132 | 0.3182 | 0.0576 |
+| AFDC | second nearest integer dilation | 0.8156 | 0.3003 | 0.0476 |
+| AFDC | mean of nearest two integer dilations | 0.8274 | 0.2771 | 0.0457 |
+| AFDC | nearest integer dilation | 0.8277 | 0.2757 | 0.0457 |
+| AFDC | adaptive fractional dilation | 0.8295 | 0.2743 | 0.0445 |
+
+
+Figure 8: Comparison of discrimination to the change of aspect ratios.
+
+Due to the constraints of the training dataset, we admit that the learned perception related to aspect ratios is not yet satisfactory, even though the model learns from different aspect ratios. Nevertheless, this learning ability is available to our proposed method when training on a more specifically targeted dataset. It could be utilized in automatic or auxiliary photo enhancement involving not only color space transformation but also spatial transformation, e.g., profile editing, multi-shot selection, and automatic resizing.
+
+# 4.4. Comparison With the State-of-the-Art Results
+
+We have compared our adaptive fractional dilated CNN with the state-of-the-art methods in Table 4. The results of these methods are directly obtained from the corresponding papers. As shown in Table 4, our proposed AFDC outperforms the other methods in terms of cls. acc. and MSE, which are the most widely targeted metrics. Compared with NIMA (Inception-v2) [27], which uses the same EMD loss, our experimental results show that completely preserving the image aesthetic information results in better performance on image aesthetics assessment. We follow the same motivation as MNA-CNN-Scene [16].
+
+Table 3: Test result comparison of different convolutions: The results are obtained with parameters trained by vanilla Conv (above) and AFDC (below). Test processes are conducted with different calculation methods for the interpolation weights $\mathbf{w}$ in Eq. (5). Vanilla Conv, constant dilation, nearest integer dilation and second nearest integer dilation can be interpreted as feeding a one-hot interpolation weight vector into the networks.
+
+| Method | cls. acc. | MSE | SRCC |
+| --- | --- | --- | --- |
+| MNA-CNN-Scene [16] | 76.5% | - | - |
+| Kong et al. [14] | 77.3% | - | 0.558 |
+| AMP [17] | 80.3% | 0.279 | 0.709 |
+| Zeng et al. (resnet101) [33] | 80.8% | 0.275 | 0.719 |
+| NIMA (Inception-v2) [27] | 81.5% | - | 0.612 |
+| MP-Net [15] (50 cropping patches) | 81.7% | - | - |
+| Hosu et al. [8] (20 cropping patches) | 81.7% | - | 0.756 |
+| A-Lamp [15] (50 cropping patches) | 82.5% | - | - |
+| $MP_{ada}$ [23] (≥ 32 cropping patches) | 83.0% | - | - |
+| ours (single warping patch) | 82.98% | 0.273 | 0.648 |
+| ours (4 warping patches) | 83.24% | 0.271 | 0.649 |
+
+Table 4: Comparison with the SOTA methods: The four patches are of warping sizes $\{224, 256, 288, 320\}$. The single patch is of warping size 320, selected from the best results.
+
+Our proposed method, however, is applicable to mini-batch training containing images with different aspect ratios. The experimental results show that adaptive embedding at the kernel level is an effective way to learn a more accurate aesthetic perception. Compared with multi-patch based methods [15, 8, 23], our unified model, which learns the image aesthetic features directly from complete images in an end-to-end manner, can better preserve the original aesthetic information and alleviate the effort of aggregating sampled predictions, e.g., the complicated patch sampling strategy and manually designed aggregation structure in [15]. Moreover, our method is much more efficient since it does not feed multiple cropping patches sampled from the original images, making it more applicable in practice. Furthermore, it is succinct due to its parameter-free design and can be easily adapted to popular CNN architectures.
+
+# 5. Conclusion
+
+In this paper, an adaptive fractional dilated convolution network is developed to explicitly model aspect ratios for image aesthetics assessment. Our proposed method does not introduce extra model parameters and can be plugged into popular CNN architectures. Besides, a grouping strategy has been introduced to reduce computational overhead. Our experimental results have demonstrated the effectiveness of the proposed approach. Although our adaptive dilated convolution network was proposed for image aesthetics assessment, it can also be applied in other scenarios where image cropping or warping may introduce label noise. Moreover, adaptive kernel construction in a parameter-free manner provides an intuitive approach to designing dynamic embedding at the kernel level, aiming at better representation learning and generalization.
+
+# Acknowledgment
+
+We would like to thank the anonymous reviewers for their helpful comments. This work was supported in part by NSFC under Grant (No. 61906143 and No.61473091).
+
+# References
+
+[1] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence, 40(4):834-848, 2018.
+[2] Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. In Proceedings of the IEEE international conference on computer vision, pages 764-773, 2017.
+[3] Ritendra Datta, Dhiraj Joshi, Jia Li, and James Ze Wang. Studying aesthetics in photographic images using a computational approach. In Computer Vision - ECCV 2006, 9th European Conference on Computer Vision, Graz, Austria, May 7-13, 2006, Proceedings, Part III, pages 288-301, 2006.
+[4] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248-255. IEEE, 2009.
+[5] Sagnik Dhar, Vicente Ordonez, and Tamara L. Berg. High level describable attributes for predicting aesthetics and interestingness. In The 24th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2011, Colorado Springs, CO, USA, 20-25 June 2011, pages 1657-1664, 2011.
+[6] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In European conference on computer vision, pages 346-361. Springer, 2014.
+[7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016.
+[8] Vlad Hosu, Bastian Goldlucke, and Dietmar Saupe. Effective aesthetics prediction with multi-level spatially pooled features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9375-9383, 2019.
+[9] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. CoRR, abs/1704.04861, 2017.
+[10] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7132-7141, 2018.
+[11] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In CVPR, volume 1, page 3, 2017.
+[12] Yueying Kao, Kaiqi Huang, and Steve Maybank. Hierarchical aesthetic quality assessment using deep convolutional neural networks. Signal Processing: Image Communication, 47:500-510, 2016.
+[13] Yan Ke, Xiaou Tang, and Feng Jing. The design of high-level features for photo quality assessment. In 2006 IEEE Computer Society Conference on Computer Vision and Pat
+
+tern Recognition (CVPR 2006), 17-22 June 2006, New York, NY, USA, pages 419-426, 2006.
+[14] Shu Kong, Xiaohui Shen, Zhe L. Lin, Radomír Mech, and Charless C. Fowlkes. Photo aesthetics ranking network with attributes and content adaptation. In Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part I, pages 662-679, 2016.
+[15] Shuang Ma, Jing Liu, and Chang Wen Chen. A-lamp: Adaptive layout-aware multi-patch deep convolutional neural network for photo aesthetic assessment. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 722-731, 2017.
+[16] Long Mai, Hailin Jin, and Feng Liu. Composition-preserving deep photo aesthetics assessment. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 497-506, 2016.
+[17] Naila Murray and Albert Gordo. A deep architecture for unified aesthetic prediction. CoRR, abs/1708.04890, 2017.
+[18] Naila Murray, Luca Marchesotti, and Florent Perronnin. Ava: A large-scale database for aesthetic visual analysis. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2408-2415. IEEE, 2012.
+[19] Masashi Nishiyama, Takahiro Okabe, Imari Sato, and Yoichi Sato. Aesthetic quality classification of photographs based on color harmony. In The 24th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2011, Colorado Springs, CO, USA, 20-25 June 2011, pages 33-40, 2011.
+[20] Jian Ren, Xiaohui Shen, Zhe L. Lin, Radomír Mech, and David J. Foran. Personalized image aesthetics. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 638-647, 2017.
+[21] Mark Sandler, Andrew G. Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018 [10], pages 4510-4520.
+[22] Katharina Schwarz, Patrick Wieschollek, and Hendrik P. A. Lensch. Will people like your image? learning the aesthetic space. In 2018 IEEE Winter Conference on Applications of Computer Vision, WACV 2018, Lake Tahoe, NV, USA, March 12-15, 2018, pages 2048-2057, 2018.
+[23] Kekai Sheng, Weiming Dong, Chongyang Ma, Xing Mei, Feiyue Huang, and Bao-Gang Hu. Attention-based multipatch aggregation for image aesthetic assessment. In Proceedings of the 26th ACM international conference on Multimedia, pages 879-886, 2018.
+[24] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
+[25] Xiaoshuai Sun, Hongxun Yao, Rongrong Ji, and Shaohui Liu. Photo assessment based on computational visual attention model. In Proceedings of the 17th International Conference on Multimedia 2009, Vancouver, British Columbia, Canada, October 19-24, 2009, pages 541-544, 2009.
+[26] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1-9, 2015.
+[27] Hossein Talebi and Peyman Milanfar. NIMA: neural image assessment. IEEE Trans. Image Processing, 27(8):3998-4011, 2018.
+[28] Hanghang Tong, Mingjing Li, HongJiang Zhang, Jingrui He, and Changshui Zhang. Classification of digital photos taken by photographers or home users. In Advances in Multimedia Information Processing - PCM 2004, 5th Pacific Rim Conference on Multimedia, Tokyo, Japan, November 30 - December 3, 2004, Proceedings, Part I, pages 198-205, 2004.
+[29] Weining Wang, Mingquan Zhao, Li Wang, Jixiong Huang, Chengjia Cai, and Xiangmin Xu. A multi-scene deep learning model for image aesthetic evaluation. *Sig. Proc.: Image Comm.*, 47:511-518, 2016.
+[30] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pages 5987-5995. IEEE, 2017.
+[31] Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016.
+[32] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In Proceedings of the British Machine Vision Conference 2016, BMVC 2016, York, UK, September 19-22, 2016, 2016.
+[33] Hui Zeng, Zisheng Cao, Lei Zhang, and Alan C Bovik. A unified probabilistic formulation of image aesthetic assessment. IEEE Transactions on Image Processing, 29:1548-1561, 2019.
+[34] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018 [10], pages 6848-6856.
\ No newline at end of file
diff --git a/adaptivefractionaldilatedconvolutionnetworkforimageaestheticsassessment/images.zip b/adaptivefractionaldilatedconvolutionnetworkforimageaestheticsassessment/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..434dfc470b45b24e95b1527f7ec15f81b50232c1
--- /dev/null
+++ b/adaptivefractionaldilatedconvolutionnetworkforimageaestheticsassessment/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1dd3ae3ae88679d05134c920a53a5d6fdf18edb66e0dc14f699c1ceebc9537fa
+size 566518
diff --git a/adaptivefractionaldilatedconvolutionnetworkforimageaestheticsassessment/layout.json b/adaptivefractionaldilatedconvolutionnetworkforimageaestheticsassessment/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c4103636ff2c9d30207073dff41502ad80fb5a7e
--- /dev/null
+++ b/adaptivefractionaldilatedconvolutionnetworkforimageaestheticsassessment/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ba638ad15105deda7df08e983c5cafd966d401a4a6c0a99b1e2294465cfe8b11
+size 348985
diff --git a/adaptivegraphconvolutionalnetworkwithattentiongraphclusteringforcosaliencydetection/83d11bf6-3120-4257-922a-59de1a43a0ea_content_list.json b/adaptivegraphconvolutionalnetworkwithattentiongraphclusteringforcosaliencydetection/83d11bf6-3120-4257-922a-59de1a43a0ea_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a5852a4afc616b73677414ff39bfec2e1a62bfaa
--- /dev/null
+++ b/adaptivegraphconvolutionalnetworkwithattentiongraphclusteringforcosaliencydetection/83d11bf6-3120-4257-922a-59de1a43a0ea_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:78f0d238dc07f4ea3675914960f73476e5851049e8e14860e3f8a350bb333030
+size 76957
diff --git a/adaptivegraphconvolutionalnetworkwithattentiongraphclusteringforcosaliencydetection/83d11bf6-3120-4257-922a-59de1a43a0ea_model.json b/adaptivegraphconvolutionalnetworkwithattentiongraphclusteringforcosaliencydetection/83d11bf6-3120-4257-922a-59de1a43a0ea_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..96f56379645f2624901ed750a0141494e08795f4
--- /dev/null
+++ b/adaptivegraphconvolutionalnetworkwithattentiongraphclusteringforcosaliencydetection/83d11bf6-3120-4257-922a-59de1a43a0ea_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fd6f0685c169d1549f6b815fa5ba9730be90d269f84680e086cf200e3cfd68a9
+size 100049
diff --git a/adaptivegraphconvolutionalnetworkwithattentiongraphclusteringforcosaliencydetection/83d11bf6-3120-4257-922a-59de1a43a0ea_origin.pdf b/adaptivegraphconvolutionalnetworkwithattentiongraphclusteringforcosaliencydetection/83d11bf6-3120-4257-922a-59de1a43a0ea_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2fcda7621862436ce5d0a1d969e6d012212765b8
--- /dev/null
+++ b/adaptivegraphconvolutionalnetworkwithattentiongraphclusteringforcosaliencydetection/83d11bf6-3120-4257-922a-59de1a43a0ea_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d3b2e9a84c076d07aae15730cdf6e71d1ec5a81538e7e6e386d25fce45246ffd
+size 4718876
diff --git a/adaptivegraphconvolutionalnetworkwithattentiongraphclusteringforcosaliencydetection/full.md b/adaptivegraphconvolutionalnetworkwithattentiongraphclusteringforcosaliencydetection/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..591cd7a0de5df63bf20f322346c3fca604b012e9
--- /dev/null
+++ b/adaptivegraphconvolutionalnetworkwithattentiongraphclusteringforcosaliencydetection/full.md
@@ -0,0 +1,337 @@
+# Adaptive Graph Convolutional Network with Attention Graph Clustering for Co-saliency Detection
+
+Kaihua Zhang$^{1}$, Tengpeng Li$^{1}$, Shiwen Shen$^{2}$, Bo Liu$^{2*}$, Jin Chen$^{1}$, Qingshan Liu$^{1}$
+
+$^{1}$B-DAT and CICAEET, Nanjing University of Information Science and Technology, Nanjing, China
+ $^{2}$ JD Digits, Mountain View, CA, USA
+
+{zhkhhua, kfliubo}@gmail.com
+
+# Abstract
+
+Co-saliency detection aims to discover the common and salient foregrounds from a group of relevant images. For this task, we present a novel adaptive graph convolutional network with attention graph clustering (GCAGC). Three major contributions have been made, and are experimentally shown to have substantial practical merits. First, we propose a graph convolutional network design to extract information cues to characterize the intra- and interimage correspondence. Second, we develop an attention graph clustering algorithm to discriminate the common objects from all the salient foreground objects in an unsupervised fashion. Third, we present a unified framework with encoder-decoder structure to jointly train and optimize the graph convolutional network, attention graph cluster, and co-saliency detection decoder in an end-to-end manner. We evaluate our proposed GCAGC method on three co-saliency detection benchmark datasets (iCoseg, Cosal2015 and COCO-SEG). Our GCAGC method obtains significant improvements over the state-of-the-arts on most of them.
+
+# 1. Introduction
+
+Humans exhibit visual fixation, attending to attractive and interesting regions and objects for further processing [7]. A co-saliency detection model simulates the human visual system to perceive the scene, and searches for the common and salient foregrounds in an image group. Co-saliency has been used in various applications to improve the understanding of image/video content, such as image/video co-segmentation [55, 13, 14, 59], object co-localization [25, 53], and image retrieval [47, 68].
+
+In co-saliency detection, the semantic categories of the
+common salient objects are unknown. Thus, the designed algorithm needs to infer such information from the specific content of the input image group. Therefore, co-saliency detection algorithm design usually focuses on addressing two key challenges: (1) extracting informative image feature representations to robustly describe the image foregrounds; and (2) designing effective computational frameworks to formulate and detect the co-saliency. Conventional hand-engineered features, such as Gabor filters, color histograms and SIFT descriptors [43], have been widely used in many co-saliency detection methods [12, 41, 69]. However, hand-crafted shallow features usually lack the ability to fully capture the large variations of common object appearances and complicated background textures [57]. Recently, researchers have improved co-saliency detection using deep-learning-based high-level feature representations, and have shown promising results [75, 73, 76]. Nonetheless, these approaches separate the representation extraction from co-saliency detection as two distinct steps, and lose the ability to tailor the image features towards inferring co-salient regions [24]. End-to-end algorithms adopting convolutional neural networks (CNNs) [24, 57, 64] have been developed to overcome this problem, and have demonstrated state-of-the-art performance. Although a CNN is able to extract image representations in a data-driven way, it is a sub-optimal solution for modeling long-range dependencies [61]. CNNs capture long-range dependencies by deeply stacking convolutional operations to enlarge the receptive fields. However, the repeated convolutional operations cause optimization difficulties [61, 21] and make multi-hop dependency modeling difficult [61]. Moreover, it becomes even more challenging for a CNN to accurately model the inter-image non-local dependencies of the co-salient regions in the image group.
+
+To address the aforementioned challenges, we develop a novel adaptive graph convolutional network with attention graph clustering (GCAGC) for co-saliency detection. We first utilize a CNN encoder to extract multi-scale feature representations from the image group, and generate combined dense feature node graphs. We then process the dense
+graphs with the proposed adaptive graph convolutional network (AGCN). Compared with only depending on the progressive behavior of the CNN, the AGCN is able to capture the non-local and long-range correspondence directly by computing the interactions between any two positions of the image group, regardless of their intra- and inter-image positional distance. The output from AGCN is further refined by an attention graph clustering module (AGCM) through generated co-attention maps. A CNN decoder is employed in the end to output the finally predicted co-saliency maps. A unified framework is designed to jointly optimize all the components together.
+
+The main contributions of this paper are threefold:
+
+- We provide an adaptive graph convolutional network design to simultaneously capture the intra- and interimage correspondence of an image group. Compared with conventional approaches, this AGCN directly computes the long-range interactions between any two image positions, thus providing more accurate measurements.
+- We develop an attention graph clustering module to differentiate the common objects from salient foregrounds. This AGCM is trained in an unsupervised fashion, and generates co-attention maps to further refine the estimated co-salient foregrounds.
+- We present an end-to-end computational framework with encoder-decoder CNN structure to jointly optimize the graph clustering task and the co-saliency detection objective, while learning adaptive graph dependencies.
+
+# 2. Related Work
+
+Image Co-saliency Detection. This task identifies common distinct foregrounds and segments them from multiple images. Various strategies have been developed for this task. Bottom-up approaches first score each pixel/sub-region in the image group, and then combine similar regions in a bottom-up fashion. Hand-crafted features [12, 15, 35, 41, 69] or deep-learning-based features [75, 74] are usually employed to score such sub-regions. Fu et al. [12] utilize three visual attention priors in a cluster-based framework. Liu et al. [41] define background and foreground cues to capture the intra- and inter-image similarities. Pretrained CNN and restricted Boltzmann machine are used in [75] and [74] to extract information cues to detect common salient objects, respectively. In contrast, fusion-based algorithms [54, 5, 26] are designed to discover useful information from the predicted results generated by several existing saliency or co-saliency detection methods. These methods fuse the detected region proposals by region-wise
+adaptive fusion [26], adaptive weight fusion [5] or stacked-autoencoder-enabled fusion [54]. Learning-based methods are the third category of co-saliency detection algorithms, and are developed to learn the co-salient pattern directly from the image group. In [24], an unsupervised CNN with two graph-based losses is proposed to learn the intra-image saliency and cross-image concurrency, respectively. Zhang et al. [76] design a hierarchical framework to capture the co-salient areas with a mask-guided fully convolutional network. Wei et al. [64] design a multi-branch architecture to discover the interaction across images and the salient region in a single image simultaneously. A semantic-guided feature aggregation architecture is proposed in [57] to capture concurrent and fine-grained information. Although many methods have been developed, this field still lacks research on addressing the limitations of CNNs in capturing long-range intra- and inter-image dependencies.
+
+Graph Neural Networks (GNNs). GNNs [18, 49] are models for capturing graph dependencies via message passing between the nodes of graphs. Different from standard neural networks, GNNs retain a state that can represent information from a node's neighborhood with arbitrary depth [78]. Graph convolutional networks (GCNs) [4, 8, 29, 31, 1, 45, 16] are a variant of GNNs that aim to generalize convolution to the graph domain. Algorithms in this direction are often categorized as spectral-based approaches [4, 8, 29, 31] and spatial-based approaches [1, 45, 16]. The former work with a spectral representation of the graphs, while the latter define the operation directly on the graph and extract information from groups of spatially connected neighbours. Recently, GNNs and GCNs have demonstrated promising results in various computer vision tasks, including scene graph generation [67, 36, 19], point cloud classification and segmentation [30, 63], semantic segmentation [58, 48], action recognition [66] and visual reasoning and question answering [6, 44]. A more comprehensive review of GNNs can be found in [78, 65].
+
+Graph Clustering. This task divides the graph nodes into related groups. Early works [17, 52] develop shallow approaches for graph clustering. Girvan et al. [17] use centrality indices to discover boundaries of different node groups. Wang et al. [60] develop a modularized non-negative matrix factorization approach to incorporate the community structure into the graph embedding, and then perform traditional clustering methods on the embedded features. The limitations of these works are that they only handle partial graph structure or shallow relationships between the content and the structure data [56]. In contrast, deep-learning-based approaches [46, 56] have recently been developed to improve graph clustering. Pan et al. [46] present an adversarially regularized framework to extract the graph representation to perform graph clustering. Wang et al. [56] develop a goal-directed deep learning approach to jointly learn graph embedding and graph clustering together. More detailed review of graph clustering is provided in [2].
+
+
+Figure 1. Pipeline of the proposed GCAGC for co-saliency detection. Given a group of images as input, we first leverage a backbone CNN as encoder (a) to extract the multi-scale features of each image, and then we adopt the feature pyramid network (FPN) [38] to fuse all the image features from top to down. Next, the lateral output features as node representations are fed into the AGCN (b). The output features of AGCN via two-layer GCNs are then fed into the AGCM (c), generating a set of object co-attention maps. Finally, the co-attention maps and the output features of AGCN are concatenated and fed into the decoder (d), producing corresponding co-saliency maps. $\oplus$ : element-wise addition; $\odot$ : concatenation; $\mathcal{G}(\mathcal{V},\mathcal{E},\mathbf{A})$ : graph of nodes $\mathcal{V}$ , edges $\mathcal{E}$ and adjacency matrix $\mathbf{A}$ ; $\mathbf{P}_1^k$ , $\mathbf{P}_2^k$ : learnable projection matrices for graph learning; $\mathbf{W}_1^k$ and $\mathbf{W}_2^k$ : learnable weight matrices in the adopted two-layer GCNs.
+
+
+# 3. Proposed Approach
+
+# 3.1. Method Overview
+
+Given a group of $N$ relevant images $\mathcal{I} = \{\pmb{I}^n\}_{n=1}^N$ , the task of co-saliency detection aims to highlight the shared salient foregrounds against backgrounds, predicting the corresponding response maps $\mathcal{M} = \{\mathbf{M}^n\}_{n=1}^N$ . To achieve this goal, we learn a deep GCAGC model to predict $\mathcal{M}$ in an end-to-end fashion.
+
+Figure 1 illustrates the pipeline of our approach, which consists of four key components: (a) Encoder, (b) AGCN, (c) AGCM and (d) Decoder. Specifically, given input $\mathcal{I}$, we first adopt the VGG16 backbone network [51] as the encoder to extract their features by removing the fully-connected layers and softmax layer. Afterwards, we leverage the FPN [38] to fuse the features of the pool3, pool4 and pool5 layers, generating three lateral intermediate feature maps $\mathcal{X} = \{\mathbf{X}^k\}_{k=1}^3$ as the multi-scale feature representations of $\mathcal{I}$. Then, for each $\mathbf{X}^k \in \mathcal{X}$, we design a sub-graph $\mathcal{G}^k$ with a learnable structure that is adaptive to our co-saliency detection task, which is able to well capture the long-range intra- and inter-image correspondence while preserving the spatial consistency of the saliency. Meanwhile, to fully capture multi-scale information for feature enhancement, the sub-graphs are combined into a multi-graph $\mathcal{G} = \cup_k \mathcal{G}^k$. Then, $\mathcal{G}$ is integrated into a simple two-layer GCN $\mathcal{F}_{gcn}$ [29], generating the projected GC filtered features $\mathcal{F}_{gcn}(\mathcal{X}) = \{\mathcal{F}_{gcn}(\mathbf{X}^k)\}_{k=1}^3$. Recent works [32, 33] show that the GC filtering of GCNs [29] is a Laplacian smoothing process, and hence it
+makes the salient foreground features of the same category similar, thereby well preserving spatial consistency of the foreground saliency, which facilitates the subsequent intra- and inter-image correspondence. Afterwards, $\mathcal{F}_{gcn}(\mathcal{X})$ are fed into a graph clustering module $\mathcal{F}_{gcm}$ , producing a group of co-attention maps $M_{catt}$ , which help to further refine the predicted co-salient foregrounds while suppressing the noisy backgrounds. Finally, the concatenated features $M_{catt} \odot \mathcal{F}_{gcn}(\mathcal{X})$ are fed into a decoder layer, producing the finally predicted co-saliency maps.
+
+# 3.2. Adaptive Graph Convolution Network
+
+As mentioned above, the AGCN processes features via Laplacian smoothing [32], which benefits long-range intra- and inter-image correspondence while preserving spatial consistency. Numerous graph-based works for co-saliency detection [24, 55, 77, 23, 27, 37] have been developed to better preserve spatial consistency, but they perform intra-saliency detection and inter-image correspondence independently, which cannot well capture the interactions between co-salient regions across images that are essential to co-saliency detection, thereby leading to sub-optimal performance. In contrast, our AGCN constructs a dense graph that takes all input image features as the node representations. Meanwhile, each edge of the graph models the interactions between any pair-wise nodes regardless of their positional distance, thereby well capturing long-range dependencies. Hence, both intra-saliency detection and inter-image correspondence can be jointly implemented via feature propagation on the graph under a unified framework without any post-processing, leading to a more accurate co-saliency estimation than those individually processing each part [24, 55, 77, 23, 27, 37].
+
+
+Figure 2. Illustration of the effect of GC filtering. The GC filtered signal projections $\mathbf{Z}^k$ preserve better spatial consistency of the salient foregrounds than the input graph signals $\mathbf{X}^k$ that highlight more noisy backgrounds. Afterwards, the co-attention maps $M_{catt}$ generated by our AGCM in § 3.3 further reduce the noisy backgrounds existing in $\mathbf{Z}^k$ .
+
+Notations of Graph. We construct a multi-graph $\mathcal{G}(\mathcal{V},\mathcal{E},\mathbf{A}) = \cup_{k = 1}^{3}\mathcal{G}^{k}(\mathcal{V}^{k},\mathcal{E}^{k},\mathbf{A}^{k})$ that is composed of three sub-graphs $\mathcal{G}^k$, where the node set $\mathcal{V} = \{\mathcal{V}^k\}$, the edge set $\mathcal{E} = \{\mathcal{E}^k\}$, and the adjacency matrix $\mathbf{A} = \sum_{k}\mathbf{A}^{k}$. Here $\mathcal{V}^k = \{v_i^k\}$ denotes the node set of $\mathcal{G}^k$ with node $v_{i}^{k}$, $\mathcal{E}^k = \{e_{ij}^k\}$ denotes its edge set with edge $e_{ij}^k$, and $\mathbf{A}^k$ denotes its adjacency matrix, whose entry $\mathbf{A}^k(i,j)$ is the weight of edge $e_{ij}^k$. $\mathbf{X}^k = [\pmb{x}_1^k,\dots,\pmb{x}_{Nwh}^k]^\top$ denotes the feature matrix of $\mathcal{G}^k$, where $\pmb{x}_i^k\in \mathbb{R}^{d^k}$ is the feature of node $v_{i}^{k}$ with dimension $d^{k}$.
+
+Adjacency Matrix A. The vanilla GCNs [29] construct a fixed graph without training, which cannot be guaranteed to best suit a specific task [22]. Recently, some works [22, 34, 27] have investigated adaptive graph learning techniques that learn a parameterized adjacency matrix tailored to a specific task. Inspired by this and the self-attention mechanism in [61], for sub-graph $k$, to learn a task-specific graph structure we define a learnable adjacency matrix as
+
+$$
+\mathbf {A} ^ {k} = \sigma \left(\mathbf {X} ^ {k} \mathbf {P} _ {1} ^ {k} \left(\mathbf {X} ^ {k} \mathbf {P} _ {2} ^ {k}\right) ^ {\top}\right), \tag {1}
+$$
+
+where $\sigma(x) = \frac{1}{1 + e^{-x}}$ denotes the sigmoid function, $\mathbf{P}_1^k, \mathbf{P}_2^k \in \mathbb{R}^{d^k \times r}$ are two learnable projection matrices that reduce the dimension of the node features from $d^k$ to $r < d^k$ .
+
+To combine multiple graphs in GCNs, as in [62], we simply element-wisely add the adjacency matrices of all $\mathcal{G}^k$ to construct the adjacency matrix of $\mathcal{G}$ as
+
+$$
+\mathbf {A} = \mathbf {A} ^ {1} + \mathbf {A} ^ {2} + \mathbf {A} ^ {3}. \tag {2}
+$$
+
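+To make Eqs. (1)-(2) concrete, the following PyTorch-style sketch builds one learnable sub-graph adjacency matrix and sums the three sub-graphs. It assumes each $\mathbf{X}^k$ is given as an $(Nwh) \times d^k$ tensor; the module name `AdaptiveAdjacency`, the toy sizes and the use of `nn.Linear` for $\mathbf{P}_1^k$ and $\mathbf{P}_2^k$ are illustrative choices, not the authors' released implementation.
+
+```python
+import torch
+import torch.nn as nn
+
+class AdaptiveAdjacency(nn.Module):
+    """Learnable adjacency of one sub-graph, Eq. (1): A^k = sigmoid((X^k P1)(X^k P2)^T)."""
+    def __init__(self, d_k: int, r: int):
+        super().__init__()
+        self.P1 = nn.Linear(d_k, r, bias=False)  # projection P_1^k, d_k -> r
+        self.P2 = nn.Linear(d_k, r, bias=False)  # projection P_2^k, d_k -> r
+
+    def forward(self, x_k: torch.Tensor) -> torch.Tensor:
+        return torch.sigmoid(self.P1(x_k) @ self.P2(x_k).t())
+
+# Eq. (2): the multi-graph adjacency is the element-wise sum of the sub-graph matrices.
+x1, x2, x3 = (torch.randn(64, 256) for _ in range(3))   # toy node features, Nwh = 64, d_k = 256
+adjs = [AdaptiveAdjacency(256, 32)(x) for x in (x1, x2, x3)]
+A = adjs[0] + adjs[1] + adjs[2]
+```
+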
+Graph Convolutional Filtering. We employ the two-layer GCN proposed by [29] to perform graph convolutions as
+
+
+Figure 3. The schematic diagram of our AGCM $\mathcal{F}_{gcm}$ . Please refer to the text part for details.
+
+
+$$
+\mathbf{Z}^{k} = \mathcal{F}_{gcn}(\mathbf{X}^{k}) = \mathcal{F}_{\mathrm{softmax}}\left(\hat{\mathbf{A}}\,\mathrm{ReLU}\left(\mathcal{F}_{gcf}(\hat{\mathbf{A}},\mathbf{X}^{k})\,\mathbf{W}_{1}^{k}\right)\mathbf{W}_{2}^{k}\right), \tag{3}
+$$
+
+where the GC filtering function is defined as [33]
+
+$$
+\mathcal {F} _ {g c f} (\hat {\mathbf {A}}, \mathbf {X} ^ {k}) = \hat {\mathbf {A}} \mathbf {X} ^ {k}, \tag {4}
+$$
+
+where $\mathbf{W}_1^k \in \mathbb{R}^{d^k \times c_1^k}$ and $\mathbf{W}_2^k \in \mathbb{R}^{c_1^k \times c^k}$ denote the learnable weight matrices of two fully-connected layers for feature projection, $\hat{\mathbf{A}} = \tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-\frac{1}{2}}$, where $\tilde{\mathbf{A}} = \mathbf{A} + \mathbf{I}$, $\mathbf{A}$ is defined by (2), $\mathbf{I}$ denotes the identity matrix, and $\tilde{\mathbf{D}}(i,i) = \sum_{j}\tilde{\mathbf{A}}(i,j)$ is the diagonal degree matrix of $\tilde{\mathbf{A}}$.
+
+Recent work [33] has shown that the GC filtering $\mathcal{F}_{gcf}$ (4) is low-pass, and hence it makes the output signal projections $\mathbf{Z}^k$ smoother within the same cluster, so as to well preserve the spatial consistency of the salient foregrounds across images, as illustrated by Figure 2. However, some intra-consistent but non-salient regions have also been highlighted. To overcome this issue, in the following section we present an attention graph clustering technique to further refine $\mathbf{Z}^k$ to focus on co-salient regions.
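+
+A minimal sketch of the two-layer GC filtering in Eqs. (3)-(4) is given below, assuming the combined adjacency $\mathbf{A}$ from Eq. (2) and the node features are dense tensors; the layer widths and function names are assumptions made for illustration rather than the released code.
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+def normalize_adjacency(adj: torch.Tensor) -> torch.Tensor:
+    """A_hat = D~^{-1/2} (A + I) D~^{-1/2}, the normalization used in Eq. (3)."""
+    a_tilde = adj + torch.eye(adj.size(0), device=adj.device)
+    d_inv_sqrt = a_tilde.sum(dim=1).clamp(min=1e-12).pow(-0.5)
+    return d_inv_sqrt.unsqueeze(1) * a_tilde * d_inv_sqrt.unsqueeze(0)
+
+class TwoLayerGCN(nn.Module):
+    """F_gcn of Eq. (3): GC filtering (Eq. (4)) followed by W1 (ReLU) and W2 (softmax)."""
+    def __init__(self, d_k: int, c1: int, c_out: int):
+        super().__init__()
+        self.W1 = nn.Linear(d_k, c1, bias=False)
+        self.W2 = nn.Linear(c1, c_out, bias=False)
+
+    def forward(self, x_k: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
+        h = F.relu(self.W1(a_hat @ x_k))              # ReLU(F_gcf(A_hat, X^k) W1)
+        return F.softmax(a_hat @ self.W2(h), dim=-1)  # Z^k, Eq. (3)
+```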
+
+# 3.3. Attention Graph Clustering Module
+
+Figure 3 shows the schematic diagram of our AGCM $\mathcal{F}_{gcm}$. Specifically, given the GC filtering projections $\mathbf{Z}^k\in \mathbb{R}^{Nwh\times c^k}, k = 1,2,3$ in (3), we obtain a multi-scale feature matrix by concatenating them as $\mathbf{Z} = [\mathbf{Z}^1,\mathbf{Z}^2,\mathbf{Z}^3] = [z_1,\dots,z_{Nwh}]^\top \in \mathbb{R}^{Nwh\times d}$, where the multi-scale node features $z_{i}\in \mathbb{R}^{d}$ and $d = \sum_{k}c^{k}$. Next, we reshape $\mathbf{Z}$ to a tensor $\boldsymbol{Z}\in \mathbb{R}^{N\times w\times h\times d}$ as the input of $\mathcal{F}_{gcm}$. Then, we define a group global average pooling (gGAP) function $\mathcal{F}_{gGAP}$ as
+
+$$
+\boldsymbol {u} = \mathcal {F} _ {g G A P} (\boldsymbol {Z}) = \frac {1}{N w h} \sum_ {n, i, j} \boldsymbol {Z} (n, i, j,:), \tag {5}
+$$
+
+which outputs a global statistic feature $\pmb{u} \in \mathbb{R}^d$ as the multi-scale semantic saliency representation that encodes the global useful group-wise context information. Afterwards, we correlate $\pmb{u}$ and $\pmb{Z}$ to generate a group of attention maps that can fully highlight the intra-saliency:
+
+$$
+\boldsymbol{M}_{att} = \boldsymbol{u} \otimes \boldsymbol{Z}, \tag{6}
+$$
+
+where $M_{att} \in \mathbb{R}^{N \times w \times h}$ , $\otimes$ denotes correlation operator. Then, we use sigmoid function $\sigma$ to re-scale the values of $M_{att}$ to $[0,1]$ as
+
+$$
+\boldsymbol{W} = \sigma(\boldsymbol{M}_{att}). \tag{7}
+$$
+
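+The gGAP statistic and attention maps of Eqs. (5)-(7) reduce to a mean over the group and spatial dimensions followed by an inner product at every position. A small sketch, assuming the tensor layout $(N, w, h, d)$ used in the text (the function name is an illustrative choice):
+
+```python
+import torch
+
+def group_attention_maps(z: torch.Tensor) -> torch.Tensor:
+    """z: (N, w, h, d) reshaped GC-filtered features. Returns W of shape (N, w, h)."""
+    u = z.mean(dim=(0, 1, 2))                  # gGAP, Eq. (5): global group statistic u in R^d
+    m_att = torch.einsum('nwhd,d->nwh', z, u)  # correlate u with every position, Eq. (6)
+    return torch.sigmoid(m_att)                # Eq. (7): rescale to [0, 1]
+```
+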
+From Figure 3, we can observe that $M_{att}$ discovers intra-saliency that preserves spatial consistency, but some noisy non-co-salient foregrounds have also been highlighted. To alleviate this issue, we exploit an attention graph clustering technique to further refine the attention maps, which is able to better differentiate the common objects from the salient foregrounds. Motivated by the weighted kernel $k$-means approach in [10], we define the objective function of AGCM as
+
+$$
+\mathcal {L} _ {g c} = \sum_ {\boldsymbol {z} _ {i} \in \pi_ {f}} w _ {i} \| \boldsymbol {z} _ {i} - \boldsymbol {m} _ {f} \| ^ {2} + \sum_ {\boldsymbol {z} _ {i} \in \pi_ {b}} w _ {i} \| \boldsymbol {z} _ {i} - \boldsymbol {m} _ {b} \| ^ {2}, \tag {8}
+$$
+
+where $\pi_f$ and $\pi_b$ denote the clusters of foreground and background respectively, $m_f = \frac{\sum_{z_i\in\pi_f}z_iw_i}{\sum_{z_i\in\pi_f}w_i}$ (and similarly for $m_b$), and $w_i$ denotes the $i$-th element of $\pmb{W}$ in (7).
+
+Following [10], we can readily show that the minimization of the objective $\mathcal{L}_{gc}$ in (8) is equivalent to
+
+$$
+\min_{\mathbf{Y}} \left\{ \mathcal{L}_{gc} = -\operatorname{trace}\left(\mathbf{Y}^{\top}\mathbf{K}\mathbf{Y}\right) \right\}, \tag{9}
+$$
+
+where $\mathbf{K} = \mathbf{D}^{\frac{1}{2}}\mathbf{Z}\mathbf{Z}^{\top}\mathbf{D}^{\frac{1}{2}}$ , $\mathbf{D} = \mathrm{diag}(w_1, \ldots, w_{Nwh})$ , $\mathbf{Y} \in \mathbb{R}^{Nwh \times 2}$ satisfies $\mathbf{Y}^{\top}\mathbf{Y} = \mathbf{I}$ .
+
+Let $\mathbf{y} \in \{0,1\}^{Nwh}$ denote the indicator vector of the clusters, with $\mathbf{y}(i) = 1$ if $i \in \pi_f$ and $\mathbf{y}(i) = 0$ otherwise. We choose $\mathbf{Y} = [\mathbf{y} / \sqrt{|\pi_f|}, (\mathbf{1} - \mathbf{y}) / \sqrt{|\pi_b|}]$, which satisfies $\mathbf{Y}^\top \mathbf{Y} = \mathbf{I}$, and substitute it into (9), yielding the loss function of our AGCM
+
+$$
+\mathcal {L} _ {g c} = - \left(\frac {\mathbf {y} ^ {\top} \mathbf {K} \mathbf {y}}{\mathbf {y} ^ {\top} \mathbf {y}} + \frac {(\mathbf {1} - \mathbf {y}) ^ {\top} \mathbf {K} (\mathbf {1} - \mathbf {y})}{(\mathbf {1} - \mathbf {y}) ^ {\top} (\mathbf {1} - \mathbf {y})}\right). \tag {10}
+$$
+
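+Using the identity $\mathbf{y}^\top\mathbf{K}\mathbf{y} = \|\mathbf{Z}^\top\mathbf{D}^{1/2}\mathbf{y}\|^2$, the loss in Eq. (10) can be evaluated without materializing the $Nwh \times Nwh$ matrix $\mathbf{K}$. The sketch below assumes flattened features $\mathbf{Z}$, weights $w_i$ from Eq. (7) and a relaxed indicator $\mathbf{y} \in [0,1]^{Nwh}$; it is an illustrative rewrite, not the authors' implementation.
+
+```python
+import torch
+
+def agcm_loss(z: torch.Tensor, w: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
+    """z: (Nwh, d) node features, w: (Nwh,) attention weights, y: (Nwh,) relaxed indicator."""
+    d_sqrt = w.clamp(min=1e-12).sqrt()
+    v_f = z.t() @ (d_sqrt * y)             # Z^T D^{1/2} y
+    v_b = z.t() @ (d_sqrt * (1.0 - y))     # Z^T D^{1/2} (1 - y)
+    fg = v_f.dot(v_f) / y.dot(y).clamp(min=1e-12)
+    bg = v_b.dot(v_b) / (1.0 - y).dot(1.0 - y).clamp(min=1e-12)
+    return -(fg + bg)                      # Eq. (10)
+```
+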
+Now, we show the relationship between the above loss $\mathcal{L}_{gc}$ and graph clustering. We first construct the graph of GC as $\mathcal{G}_{gc}(\mathcal{V}_{gc},\mathcal{E}_{gc},\mathbf{K})$, which is made up of the node set $\mathcal{V}_{gc} = \mathcal{V}_f\cup \mathcal{V}_b$, where $\mathcal{V}_f$ is the set of foreground nodes and $\mathcal{V}_b$ is the set of background nodes, and the edge set $\mathcal{E}_{gc}$ such that the weight of the edge between nodes $i$ and $j$ equals $\mathbf{K}(i,j)$, where $\mathbf{K}$ is the adjacency matrix defined in (9). Let us denote $\mathrm{links}(\mathcal{V}_l,\mathcal{V}_l) = \sum_{i\in \mathcal{V}_l,j\in \mathcal{V}_l}\mathbf{K}(i,j), l = f,b$; then it is easy to show that minimizing $\mathcal{L}_{gc}$ (10) is equivalent to maximizing the ratio association objective [50] for the graph clustering task
+
+$$
+\max \left\{ \sum_{l = f, b} \frac{\operatorname{links}\left(\mathcal{V}_{l}, \mathcal{V}_{l}\right)}{\left| \mathcal{V}_{l} \right|} \right\}, \tag{11}
+$$
+
+where $|\mathcal{V}_l|$ denotes the cardinality of set $\mathcal{V}_l$ .
+
+Directly optimizing $\mathcal{L}_{gc}$ (10) yields its continuous relaxed solution $\hat{\pmb{y}}$ . Then, we reshape $\hat{\pmb{y}}$ into a group of
+$N$ co-attention maps $M_{catt} \in \mathbb{R}^{N \times w \times h}$ . Finally, the learned co-attention maps $M_{catt}$ and the input features $\mathbf{Z} \in \mathbb{R}^{N \times w \times h \times d}$ of the AGCM are concatenated, yielding the enhanced features $\mathbf{F} \in \mathbb{R}^{N \times w \times h \times (d + 1)}$ :
+
+$$
+\boldsymbol{F} = \boldsymbol{M}_{catt} \odot \boldsymbol{Z}, \tag{12}
+$$
+
+where $\odot$ denotes the concatenation operator; $\boldsymbol{F}$ serves as the input of the following decoder network.
+
+# 3.4. Decoder Network
+
+Our decoder network has an up-sampling module that consists of a $3 \times 3$ convolutional layer to decrease the number of feature channels, a ReLU layer and a deconvolutional layer with stride $= 2$ to enlarge the resolution. We repeat this module three times until reaching the finest resolution for accurate co-saliency map estimation, followed by a $1 \times 1$ convolutional layer and a sigmoid layer to produce a group of co-saliency map estimations.
+
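+A sketch of this decoder is shown below; the three up-sampling modules follow the description above, while the concrete channel widths, the input channel count (taken here as $d+1$ after concatenation) and the deconvolution kernel size are assumptions made for illustration.
+
+```python
+import torch.nn as nn
+
+def make_decoder(in_ch: int = 513, mid_chs=(256, 128, 64)) -> nn.Sequential:
+    """Three (3x3 conv -> ReLU -> stride-2 deconv) modules, then 1x1 conv + sigmoid."""
+    layers, ch = [], in_ch
+    for out_ch in mid_chs:
+        layers += [
+            nn.Conv2d(ch, out_ch, kernel_size=3, padding=1),                         # reduce channels
+            nn.ReLU(inplace=True),
+            nn.ConvTranspose2d(out_ch, out_ch, kernel_size=4, stride=2, padding=1),  # double resolution
+        ]
+        ch = out_ch
+    layers += [nn.Conv2d(ch, 1, kernel_size=1), nn.Sigmoid()]                        # co-saliency map in [0, 1]
+    return nn.Sequential(*layers)
+```
+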
+Given the features $\mathbf{F}$ computed by (12) as input, the decoder network generates a group of co-saliency maps $\mathcal{M} = \{\mathbf{M}^n \in \mathbb{R}^{w \times h}\}_{n=1}^N$ . We then leverage a weighted cross-entropy loss for pixel-wise classification
+
+$$
+\mathcal{L}_{cls} = -\frac{1}{P \times N} \sum_{n=1}^{N} \sum_{i=1}^{P} \left\{ \rho^{n} \mathbf{M}^{n}(i) \log\left(\mathbf{M}_{gt}^{n}(i)\right) + (1 - \rho^{n})\left(1 - \mathbf{M}^{n}(i)\right) \log\left(1 - \mathbf{M}_{gt}^{n}(i)\right) \right\}, \tag{13}
+$$
+
+where $\mathbf{M}_{gt}^n$ denotes the ground-truth mask of image $I^n\in \mathcal{I}$, $P$ denotes the number of pixels in image $I^n$, and $\rho^n$ denotes the ratio of positive pixels over all pixels in image $I^n$.
+
+All the network parameters are jointly learned by minimizing the following multi-task loss function
+
+$$
+\mathcal {L} = \mathcal {L} _ {c l s} + \lambda \mathcal {L} _ {g c}, \tag {14}
+$$
+
+where $\mathcal{L}_{gc}$ is the attention graph clustering loss defined by (10) and $\lambda > 0$ is a trade-off parameter. We train our network by minimizing $\mathcal{L}$ in an end-to-end manner, and the learned GCAGC model is directly applied to the input image group to predict the corresponding co-saliency maps without any post-processing.
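+
+A sketch of Eqs. (13)-(14) is given below, written in the standard weighted binary cross-entropy form (ground truth weighting the log of the prediction) with an assumed value for $\lambda$; shapes follow the notation above, and the function name is illustrative.
+
+```python
+import torch
+
+def total_loss(pred: torch.Tensor, gt: torch.Tensor, rho: torch.Tensor,
+               l_gc: torch.Tensor, lam: float = 0.1) -> torch.Tensor:
+    """pred, gt: (N, w, h) predicted and ground-truth maps; rho: (N,) positive-pixel
+    ratios per image; l_gc: scalar AGCM loss from Eq. (10); lam: trade-off (assumed value)."""
+    eps = 1e-7
+    rho = rho.view(-1, 1, 1)
+    l_cls = -(rho * gt * torch.log(pred.clamp(min=eps))
+              + (1.0 - rho) * (1.0 - gt) * torch.log((1.0 - pred).clamp(min=eps))).mean()
+    return l_cls + lam * l_gc  # Eq. (14)
+```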
+
+# 4. Results and Analysis
+
+# 4.1. Implementation Details
+
+The training of our GCAGC model includes two stages:
+
+Stage 1. For a fair comparison, we adopt the VGG16 network [51] as the backbone network, which is pre-trained on the ImageNet classification task [9]. Following the input settings in [64, 57], we randomly select $N = 5$ images from one category as one group and then select a mini-batch of groups from all categories in the COCO dataset [39], which are sent into the network at the same time during training.
+
+
+Figure 4. Visual comparisons of our GCAGC method compared with other state-of-the-arts, including CBCS [12], ESMG [35], CSMG [76] and RCGS [57].
+
+All the images are resized to the same size of $224 \times 224$ for easy processing. The model is optimized by the Adam algorithm [28] with a weight decay of 5e-4 and an initial learning rate of 1e-4, which is halved every 25,000 iterations. The training process converges within 100,000 iterations.
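+
+Mapped to PyTorch, the Stage-1 optimization settings above could be set up roughly as follows; the placeholder module and the use of `StepLR` stepped once per iteration are assumptions that merely mirror the described schedule.
+
+```python
+import torch
+import torch.nn as nn
+
+model = nn.Linear(8, 1)  # placeholder standing in for the full GCAGC network
+optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=5e-4)
+scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=25_000, gamma=0.5)  # halve lr every 25k iters
+
+for iteration in range(100_000):
+    optimizer.zero_grad()
+    loss = model(torch.randn(4, 8)).mean()  # stand-in loss; the real objective is Eq. (14)
+    loss.backward()
+    optimizer.step()
+    scheduler.step()  # one scheduler step per training iteration
+```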
+
+Stage 2. We further fine-tune our model on the MSRA-B dataset [40] to better focus on the salient areas. All the parameter settings are the same as those in Stage 1, except that the number of training iterations is 10,000. Note that during training, to match the size of the input group, we augment each single salient image into a group of $N = 5$ different images using affine transformation, horizontal flipping and left-right flipping.
+
+During testing, we divide all images into several mini-groups to produce the final co-saliency map estimations. The network is implemented in PyTorch with an RTX 2080Ti GPU for acceleration.
+
+# 4.2. Datasets and Evaluation Metrics
+
+We conduct extensive evaluations on three popular datasets: iCoseg [3], Cosal2015 [72] and COCO-SEG [57]. Among them, iCoseg is the most widely used dataset, with 38 groups of 643 images in total, in which the common objects in one group share similar appearance or semantic characteristics but have various pose or color changes. Cosal2015 is a large-scale dataset which consists of 2,015 images of 50 categories, and each group suffers from various challenging factors such as complex environments, occlusion issues, target appearance variations and background clutters, etc. All these increase the difficulty of accurate co-saliency detection. Recently, to meet the urgent requirement for a large-scale training set for deep-learning-based co-saliency detection approaches, COCO-SEG has been proposed, which is selected from the COCO2017 dataset [39], of which 200,000 and 8,000 images from all 78 categories are used for training and testing respectively.
+
+We compare our GCAGC method with existing state-of-the-art algorithms in terms of 6 metrics including the precision-recall (PR) curve [70], the receiver operating characteristic (ROC) curve [70], the average precision (AP) score [70], the F-measure score $F_{\beta}$ [70], the S-measure score $S_{m}$ [11] and the Mean Absolute Error (MAE) [57].
+
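+For reference, MAE and the F-measure can be computed from a predicted map and its binary ground truth as sketched below; the adaptive threshold (twice the mean saliency) and $\beta^2 = 0.3$ are common conventions in the saliency literature and are assumptions here, not settings stated in this paper.
+
+```python
+import numpy as np
+
+def mae(pred: np.ndarray, gt: np.ndarray) -> float:
+    """Mean Absolute Error between a predicted map in [0, 1] and a binary ground truth."""
+    return float(np.mean(np.abs(pred - gt)))
+
+def f_measure(pred: np.ndarray, gt: np.ndarray, beta2: float = 0.3) -> float:
+    """F-measure with an adaptive threshold at twice the mean saliency value."""
+    binary = pred >= min(2.0 * float(pred.mean()), 1.0)
+    tp = float(np.logical_and(binary, gt > 0.5).sum())
+    precision = tp / max(float(binary.sum()), 1.0)
+    recall = tp / max(float((gt > 0.5).sum()), 1.0)
+    denom = beta2 * precision + recall
+    return (1.0 + beta2) * precision * recall / denom if denom > 0 else 0.0
+```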
+
+Figure 5. Comparisons with state-of-the-art methods in terms of PR and ROC curves on three benchmark datasets
+
+Table 1. Statistic comparisons of our GCAGC with the other state-of-the-arts. Red and blue bold fonts indicate the best and second best performance, respectively.
+
+| Methods | iCoseg AP↑ | iCoseg Fβ↑ | iCoseg Sm↑ | iCoseg MAE↓ | Cosal2015 AP↑ | Cosal2015 Fβ↑ | Cosal2015 Sm↑ | Cosal2015 MAE↓ | COCO-SEG AP↑ | COCO-SEG Fβ↑ | COCO-SEG Sm↑ | COCO-SEG MAE↓ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| CBCS [12] | 0.7965 | 0.7408 | 0.6580 | 0.1659 | 0.5859 | 0.5579 | 0.5439 | 0.2329 | 0.3043 | 0.3050 | 0.4710 | 0.2585 |
+| CSHS [42] | 0.8454 | 0.7549 | 0.7502 | 0.1774 | 0.6198 | 0.6210 | 0.5909 | 0.3108 | - | - | - | - |
+| ESMG [35] | 0.8336 | 0.7773 | 0.7677 | 0.1261 | 0.5116 | 0.5120 | 0.5446 | 0.2581 | 0.3387 | 0.3592 | 0.4931 | 0.2349 |
+| SACS [5] | 0.8399 | 0.7978 | 0.7523 | 0.1516 | 0.7076 | 0.6927 | 0.6938 | 0.1920 | 0.4176 | 0.4234 | 0.5229 | 0.3271 |
+| CODW [72] | 0.8766 | 0.7990 | 0.7500 | 0.1782 | 0.7437 | 0.7051 | 0.6473 | 0.2733 | - | - | - | - |
+| DIM [71] | 0.8773 | 0.7919 | 0.7583 | 0.1739 | 0.6305 | 0.6162 | 0.5907 | 0.3123 | 0.3043 | 0.3353 | 0.4572 | 0.3871 |
+| UMLF [20] | 0.7881 | 0.7148 | 0.7033 | 0.2389 | 0.7444 | 0.7016 | 0.6604 | 0.2687 | 0.4347 | 0.4309 | 0.4872 | 0.3953 |
+| UCSG [24] | 0.9112 | 0.8503 | 0.8200 | 0.1182 | 0.8149 | 0.7589 | 0.7506 | 0.1581 | - | - | - | - |
+| RCGS [57] | 0.8269 | 0.7730 | 0.7810 | 0.0976 | 0.8573 | 0.8097 | 0.7959 | 0.0999 | 0.7309 | 0.6814 | 0.7185 | 0.1239 |
+| CSMG [76] | 0.9097 | 0.8517 | 0.8208 | 0.1050 | 0.8569 | 0.8216 | 0.7738 | 0.1292 | 0.6309 | 0.6208 | 0.6517 | 0.1461 |
+| GCAGC | 0.8867 | 0.8532 | 0.8205 | 0.0757 | 0.8799 | 0.8428 | 0.8224 | 0.0890 | 0.7323 | 0.7092 | 0.7294 | 0.1097 |
+
+
+# 4.3. Comparisons with State-of-the-arts
+
+We compare our GCAGC approach with 10 state-of-the-art co-saliency detection methods including CBCS [12], CSHS [42], ESMG [35], SACS [5], CODW [72], DIM [71], UMLF [20], UCSG [24], RCGS [57], CSMG [76]. For fair comparisons, we directly report available results released by authors or reproduce experimental results by the public source code for each compared method.
+
+Qualitative Results. Figure 4 shows some visual comparison results with 4 state-of-the-art methods including CBCS [12], ESMG [35], CSMG [76] and RCGS [57].
+
+Our GCAGC can achieve better co-saliency results than the other methods when the co-salient targets suffer from significant appearance variations, strong semantic interference and complex background clutters. In Figure 4, the two left groups of images are selected from iCoseg. Among them, for the group of Red Sox Players, the audience in the background share the same semantics with those foreground co-salient players, which makes it very difficult to accurately differentiate them. Notwithstanding, our GCAGC can
+
+Table 2. Ablative studies of our model on iCoseg and Cosal2015. Here GCAGC-N, GCAGC-M, GCAGC-P denote our GCAGC in absence of AGCN, AGCM and the projection matrices $\mathbf{P}$ in (1), respectively. Red bold font indicates the best performance.
+
+| Datasets | Metric | GCAGC-N | GCAGC-M | GCAGC-P | GCAGC |
+| --- | --- | --- | --- | --- | --- |
+| iCoseg | AP↑ | 0.8799 | 0.8606 | 0.8796 | 0.8867 |
+| iCoseg | Fβ↑ | 0.8504 | 0.8123 | 0.8463 | 0.8532 |
+| iCoseg | Sm↑ | 0.8175 | 0.8203 | 0.8122 | 0.8205 |
+| iCoseg | MAE↓ | 0.0831 | 0.0796 | 0.0790 | 0.0757 |
+| Cosal2015 | AP↑ | 0.8577 | 0.8779 | 0.8737 | 0.8799 |
+| Cosal2015 | Fβ↑ | 0.8156 | 0.8373 | 0.8375 | 0.8428 |
+| Cosal2015 | Sm↑ | 0.8167 | 0.8145 | 0.8156 | 0.8224 |
+| Cosal2015 | MAE↓ | 0.0967 | 0.0901 | 0.0851 | 0.0890 |
+
+accurately highlight the co-salient players due to its two-step filtering process, from GC filtering to graph clustering, which can well preserve spatial consistency while effectively reducing noisy backgrounds. However, the other compared methods cannot achieve satisfactory results, which contain either some noisy backgrounds (see the middle columns of RCGS, ESMG, CBCS) or the whole intra-salient areas including non-co-salient regions (see the left-most column of RCGS, the left-fourth columns of ESMG and CBCS). The co-saliency maps in the middle groups (Apple and Monkey) are generated from the image groups selected from Cosal2015. The Apple group suffers from the interferences of other foreground semantic objects such as hand and lemon, while the Monkey group undergoes complex background clutters. It is obvious that our GCAGC can generate better spatially coherent co-saliency maps than the other methods (see the two bottom rows of ESMG and CBCS, the left-most columns of RCGS and CSMG). The two right-most groups are selected from COCO-SEG, which contain a variety of challenging images with targets suffering from the interferences of various different categories and complicated background clutters. Notwithstanding, our GCAGC can accurately discover the co-salient targets even when they suffer from extremely complicated background clutters (see the Broccoli group). The experimental results show that our GCAGC can achieve favorable performance against various challenging factors, validating the effectiveness of our GCAGC model, which can adapt well to a variety of complicated scenarios.
+
+Quantitative Results. Figure 5 shows the PR and ROC curves of all compared methods on the three benchmark datasets. We can observe that our GCAGC outperforms the other state-of-the-art methods on all three datasets. In particular, all the curves on the largest and most challenging Cosal2015 and COCO-SEG are much higher than those of the other methods. Meanwhile, Table 1 lists the statistical analysis, in which RCGS is a representative end-to-end deep-learning-based method that achieves state-of-the-art performance on both Cosal2015 and COCO-SEG with the
+F-scores of 0.8097 and 0.6814, respectively. Our GCAGC achieves the best F-scores of 0.8428 and 0.7092 on Cosal2015 and COCO-SEG, respectively, outperforming the second best-performing CSMG by $3.31\%$ on Cosal2015 and RCGS by $2.78\%$ on COCO-SEG. All the quantitative results further demonstrate the effectiveness of jointly learning the GCAGC model, which is essential to accurate co-saliency detection.
+
+# 4.4. Ablative Studies
+
+Here, we conduct ablative studies to validate the effectiveness of the two proposed modules (AGCN and AGCM) and the adaptive graph learning strategy in the AGCN. Table 2 lists the corresponding quantitative results in terms of AP, $F_{\beta}$, $S_{m}$ and MAE.
+
+First, without the AGCN, GCAGC-N shows an obvious performance drop on Cosal2015 in terms of all metrics, especially for both AP and $F_{\beta}$, where the former drops from 0.8799 to 0.8577 by $2.22\%$ and the latter drops from 0.8428 to 0.8156 by $2.72\%$. Besides, the performance of GCAGC-N on iCoseg also suffers from a drop in terms of all metrics.
+
+Second, without the AGCM, GCAGC-M suffers from an obvious performance drop in terms of all metrics on both datasets, especially for AP and $F_{\beta}$ on iCoseg, where the AP score and the $F_{\beta}$ decline from 0.8867 to 0.8606 by $2.61\%$ and from 0.8532 to 0.8123 by $4.09\%$, respectively. The results validate the effectiveness of the proposed AGCM, which can well discriminate the co-objects from all the salient foreground objects to further boost the performance.
+
+Finally, without adaptive graph learning in the AGCN, all metrics of GCAGC-P show an obvious decline on both datasets, further demonstrating the superiority of the proposed AGCN, which learns an adaptive graph structure tailored to the co-saliency detection task, over the fixed graph design in the vanilla GCNs [29].
+
+# 5. Conclusion
+
+This paper has presented an adaptive graph convolutional network with attention graph clustering for co-saliency detection, mainly including two key designs: an AGCN and an AGCM. The AGCN has been developed to extract long-range dependency cues to characterize the intra- and inter-image correspondence. Meanwhile, to further refine the results of the AGCN, the AGCM has been designed to discriminate the co-objects from all the salient foreground objects in an unsupervised fashion. Finally, a unified framework with an encoder-decoder structure has been implemented to jointly optimize the AGCN and the AGCM in an end-to-end manner. Extensive evaluations on three of the largest and most challenging benchmark datasets, iCoseg, Cosal2015 and COCO-SEG, have demonstrated the superior performance of the proposed method over the state-of-the-art methods in terms of most metrics.
+
+# References
+
+[1] J. Atwood and D. Towsley. Diffusion-convolutional neural networks. In NeurIPS, pages 1993–2001, 2016.
+[2] D. A. Bader, H. Meyerhenke, P. Sanders, and D. Wagner. Graph partitioning and graph clustering, volume 588. American Mathematical Soc., 2013.
+[3] D. Batra, A. Kowdle, D. Parikh, J. Luo, and T. Chen. iCoseg: Interactive co-segmentation with intelligent scribble guidance. In CVPR, pages 3169-3176, 2010.
+[4] J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203, 2013.
+[5] X. Cao, Z. Tao, B. Zhang, H. Fu, and W. Feng. Self-adaptively weighted co-saliency detection via rank constraint. TIP, 23(9):4175-4186, 2014.
+[6] X. Chen, L.-J. Li, L. Fei-Fei, and A. Gupta. Iterative visual reasoning beyond convolutions. In CVPR, pages 7239-7248, 2018.
+[7] R. Cong, J. Lei, H. Fu, M.-M. Cheng, W. Lin, and Q. Huang. Review of visual saliency detection with comprehensive information. TCSVT, 2018.
+[8] M. Defferrard, X. Bresson, and P. Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In NeurIPS, pages 3844-3852, 2016.
+[9] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, pages 248–255, 2009.
+[10] I. S. Dhillon, Y. Guan, and B. Kulis. Weighted graph cuts without eigenvectors a multilevel approach. TPAMI, 29(11):1944-1957, 2007.
+[11] D.-P. Fan, M.-M. Cheng, Y. Liu, T. Li, and A. Borji. Structure-measure: A new way to evaluate foreground maps. In ICCV, pages 4548-4557, 2017.
+[12] H. Fu, X. Cao, and Z. Tu. Cluster-based co-saliency detection. TIP, 22(10):3766-3778, 2013.
+[13] H. Fu, D. Xu, S. Lin, and J. Liu. Object-based rgbd image co-segmentation with mutex constraint. In CVPR, pages 4428-4436, 2015.
+[14] H. Fu, D. Xu, B. Zhang, and S. Lin. Object-based multiple foreground video co-segmentation. In CVPR, pages 3166-3173, 2014.
+[15] C. Ge, K. Fu, F. Liu, L. Bai, and J. Yang. Co-saliency detection via inter and intra saliency propagation. SPIC, 2016.
+[16] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl. Neural message passing for quantum chemistry. In ICML, pages 1263-1272, 2017.
+[17] M. Girvan and M. E. Newman. Community structure in social and biological networks. PNAS, 99(12):7821-7826, 2002.
+[18] M. Gori, G. Monfardini, and F. Scarselli. A new model for learning in graph domains. In IJCNN, volume 2, pages 729-734, 2005.
+[19] J. Gu, H. Zhao, Z. Lin, S. Li, J. Cai, and M. Ling. Scene graph generation with external knowledge and image reconstruction. In CVPR, pages 1969-1978, 2019.
+
+[20] J. Han, G. Cheng, Z. Li, and D. Zhang. A unified metric learning-based framework for co-saliency detection. TCSVT, 2017.
+[21] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, pages 770-778, 2016.
+[22] M. Henaff, J. Bruna, and Y. LeCun. Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163, 2015.
+[23] K.-J. Hsu, Y.-Y. Lin, and Y.-Y. Chuang. Deepco3: Deep instance co-segmentation by co-peak search and co-saliency detection. In CVPR, pages 8846-8855, 2019.
+[24] K.-J. Hsu, C.-C. Tsai, Y.-Y. Lin, X. Qian, and Y.-Y. Chuang. Unsupervised cnn-based co-saliency detection with graphical optimization. In ECCV, 2018.
+[25] K. R. Jerripothula, J. Cai, and J. Yuan. Cats: Co-saliency activated tracklet selection for video co-localization. In ECCV, pages 187-202, 2016.
+[26] K. R. Jerripothula, J. Cai, and J. Yuan. Image co-segmentation via saliency co-fusion. TMM, 18(9):1896-1909, 2016.
+[27] B. Jiang, X. Jiang, A. Zhou, J. Tang, and B. Luo. A unified multiple graph learning and convolutional network model for co-saliency estimation. In MM, pages 1375-1382, 2019.
+[28] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+[29] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
+[30] L. Landrieu and M. Simonovsky. Large-scale point cloud semantic segmentation with superpoint graphs. In CVPR, pages 4558-4567, 2018.
+[31] R. Levie, F. Monti, X. Bresson, and M. M. Bronstein. Cayleynets: Graph convolutional neural networks with complex rational spectral filters. TSP, 67(1):97-109, 2018.
+[32] Q. Li, Z. Han, and X.-M. Wu. Deeper insights into graph convolutional networks for semi-supervised learning. In AAAI, 2018.
+[33] Q. Li, X.-M. Wu, H. Liu, X. Zhang, and Z. Guan. Label efficient semi-supervised learning via graph filtering. In CVPR, pages 9582-9591, 2019.
+[34] R. Li, S. Wang, F. Zhu, and J. Huang. Adaptive graph convolutional neural networks. In AAAI, 2018.
+[35] Y. Li, K. Fu, Z. Liu, and J. Yang. Efficient saliency-model-guided visual co-saliency detection. SPL, 2015.
+[36] Y. Li, W. Ouyang, B. Zhou, J. Shi, C. Zhang, and X. Wang. Factorizable net: an efficient subgraph-based framework for scene graph generation. In ECCV, pages 335-351, 2018.
+[37] Z. Li, C. Lang, J. Feng, Y. Li, T. Wang, and S. Feng. Cosaliency detection with graph matching. TIST, 10(3):22, 2019.
+[38] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. In CVPR, pages 2117-2125, 2017.
+[39] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollar, and C. L. Zitnick. Microsoft coco: Common objects in context. In ECCV, pages 740-755, 2014.
+
+[40] T. Liu, Z. Yuan, J. Sun, J. Wang, N. Zheng, X. Tang, and H.-Y. Shum. Learning to detect a salient object. TPAMI, 2011.
+[41] Z. Liu, W. Zou, L. Li, L. Shen, and O. Le Meur. Cosaliency detection based on hierarchical segmentation. SPL, 21(1):88-92, 2013.
+[42] Z. Liu, W. Zou, L. Li, L. Shen, and O. Le Meur. Co-saliency detection based on hierarchical segmentation. SPL, 2014.
+[43] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91-110, 2004.
+[44] M. Narasimhan, S. Lazebnik, and A. Schwing. Out of the box: Reasoning with graph convolution nets for factual visual question answering. In NeurIPS, pages 2654-2665, 2018.
+[45] M. Niepert, M. Ahmed, and K. Kutzkov. Learning convolutional neural networks for graphs. In ICML, pages 2014-2023, 2016.
+[46] S. Pan, R. Hu, S.-f. Fung, G. Long, J. Jiang, and C. Zhang. Learning graph embedding with adversarial training methods. arXiv preprint arXiv:1901.01250, 2019.
+[47] A. Papushoy and A. G. Bors. Image retrieval based on query by saliency content. DSP, 2015.
+[48] X. Qi, R. Liao, J. Jia, S. Fidler, and R. Urtasun. 3d graph neural networks for rgbd semantic segmentation. In ICCV, pages 5199-5208, 2017.
+[49] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural network model. TNN, 20(1):61-80, 2008.
+[50] J. Shi and J. Malik. Normalized cuts and image segmentation. TPAMI, 22(8):888-905, 2000.
+[51] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
+[52] Y. Sun, J. Han, J. Gao, and Y. Yu. itopicmodel: Information network-integrated topic modeling. In ICMD, pages 493-502, 2009.
+[53] K. Tang, A. Joulin, L.-J. Li, and L. Fei-Fei. Co-localization in real-world images. In CVPR, 2014.
+[54] C.-C. Tsai, K.-J. Hsu, Y.-Y. Lin, X. Qian, and Y.-Y. Chuang. Deep co-saliency detection via stacked autoencoder-enabled fusion and self-trained cnns. TMM, 2019.
+[55] C.-C. Tsai, W. Li, K.-J. Hsu, X. Qian, and Y.-Y. Lin. Image co-saliency detection and co-segmentation via progressive joint optimization. TIP, 28(1):56-71, 2018.
+[56] C. Wang, S. Pan, R. Hu, G. Long, J. Jiang, and C. Zhang. Attributed graph clustering: A deep attentional embedding approach. In IJCAI, 2019.
+[57] C. Wang, Z.-J. Zha, D. Liu, and H. Xie. Robust deep co-saliency detection with group semantic. In AAAI, 2019.
+[58] W. Wang, X. Lu, J. Shen, D. J. Crandall, and L. Shao. Zero-shot video object segmentation via attentive graph neural networks. In ICCV, pages 9236-9245, 2019.
+[59] W. Wang, J. Shen, H. Sun, and L. Shao. Video co-saliency guided co-segmentation. TCSVT, 28(8):1727-1736, 2017.
+[60] X. Wang, P. Cui, J. Wang, J. Pei, W. Zhu, and S. Yang. Community preserving network embedding. In AAAI, 2017.
+[61] X. Wang, R. Girshick, A. Gupta, and K. He. Non-local neural networks. In CVPR, pages 7794-7803, 2018.
+
+[62] X. Wang and A. Gupta. Videos as space-time region graphs. In ECCV, pages 399-417, 2018.
+[63] Y. Wang, Y. Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon. Dynamic graph cnn for learning on point clouds. TOG, 38(5):146, 2019.
+[64] L. Wei, S. Zhao, O. E. F. Bourahla, X. Li, and F. Wu. Group-wise deep co-saliency detection. arXiv preprint arXiv:1707.07381, 2017.
+[65] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu. A comprehensive survey on graph neural networks. arXiv preprint arXiv:1901.00596, 2019.
+[66] S. Yan, Y. Xiong, and D. Lin. Spatial temporal graph convolutional networks for skeleton-based action recognition. In AAAI, 2018.
+[67] J. Yang, J. Lu, S. Lee, D. Batra, and D. Parikh. Graph r-cnn for scene graph generation. In ECCV, pages 670-685, 2018.
+[68] L. Yang, B. Geng, Y. Cai, A. Hanjalic, and X.-S. Hua. Object retrieval using visual query context. TMM, 2011.
+[69] L. Ye, Z. Liu, J. Li, W.-L. Zhao, and L. Shen. Co-saliency detection via co-salient object discovery and recovery. SPL, 2015.
+[70] D. Zhang, H. Fu, J. Han, A. Borji, and X. Li. A review of co-saliency detection algorithms: fundamentals, applications, and challenges. TIST, 9(4):38, 2018.
+[71] D. Zhang, J. Han, J. Han, and L. Shao. Cosaliency detection based on intrasaliency prior transfer and deep intersaliency mining. TNNLS, 27(6):1163-1176, 2015.
+[72] D. Zhang, J. Han, C. Li, and J. Wang. Co-saliency detection via looking deep and wide. In CVPR, 2015.
+[73] D. Zhang, J. Han, C. Li, J. Wang, and X. Li. Detection of cosalient objects by looking deep and wide. In CVPR, 2015.
+[74] D. Zhang, J. Han, C. Li, J. Wang, and X. Li. Detection of cosalient objects by looking deep and wide. IJCV, 120(2):215-232, 2016.
+[75] D. Zhang, D. Meng, and J. Han. Co-saliency detection via a self-paced multiple-instance learning framework. TPAMI, 39(5):865-878, 2016.
+[76] K. Zhang, T. Li, B. Liu, and Q. Liu. Co-saliency detection via a mask-guided fully convolutional networks with multi-scale label smoothing. In CVPR, pages 3095-3104, 2019.
+[77] X. Zheng, Z.-J. Zha, and L. Zhuang. A feature-adaptive semi-supervised framework for co-saliency detection. In MM, pages 959-966, 2018.
+[78] J. Zhou, G. Cui, Z. Zhang, C. Yang, Z. Liu, and M. Sun. Graph neural networks: A review of methods and applications. arXiv preprint arXiv:1812.08434, 2018.
\ No newline at end of file
diff --git a/adaptivegraphconvolutionalnetworkwithattentiongraphclusteringforcosaliencydetection/images.zip b/adaptivegraphconvolutionalnetworkwithattentiongraphclusteringforcosaliencydetection/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..99908b877625f770f4047bffa45b2f40030a1b8b
--- /dev/null
+++ b/adaptivegraphconvolutionalnetworkwithattentiongraphclusteringforcosaliencydetection/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:97cbbae6646fa57120b21160ad9c492ca0e71d57ddfbd724a5e290e3e2188b27
+size 712010
diff --git a/adaptivegraphconvolutionalnetworkwithattentiongraphclusteringforcosaliencydetection/layout.json b/adaptivegraphconvolutionalnetworkwithattentiongraphclusteringforcosaliencydetection/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..64c17efe80c3a69a3bc66361948c67094cf30451
--- /dev/null
+++ b/adaptivegraphconvolutionalnetworkwithattentiongraphclusteringforcosaliencydetection/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dd6718e6e38fc66419577f39772ecac39988a2f48aefd5f709cef2bbd934d62d
+size 470873
diff --git a/adaptivehierarchicaldownsamplingforpointcloudclassification/2bf23d6a-91e7-46de-bfd9-b97d6b652773_content_list.json b/adaptivehierarchicaldownsamplingforpointcloudclassification/2bf23d6a-91e7-46de-bfd9-b97d6b652773_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..7e9b47d8abb885fb39775147b98283845abc6b59
--- /dev/null
+++ b/adaptivehierarchicaldownsamplingforpointcloudclassification/2bf23d6a-91e7-46de-bfd9-b97d6b652773_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3c2ce987cdc76e041190109b486a98331e1b912083777b11233714f3a474b7b9
+size 63423
diff --git a/adaptivehierarchicaldownsamplingforpointcloudclassification/2bf23d6a-91e7-46de-bfd9-b97d6b652773_model.json b/adaptivehierarchicaldownsamplingforpointcloudclassification/2bf23d6a-91e7-46de-bfd9-b97d6b652773_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..bb6283dc10f5864a56d730a24c2c8c55708aef7c
--- /dev/null
+++ b/adaptivehierarchicaldownsamplingforpointcloudclassification/2bf23d6a-91e7-46de-bfd9-b97d6b652773_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d834dd4c442426d669e621f1e8348ae1ee768260f6f652fa35da69078ebb5651
+size 79355
diff --git a/adaptivehierarchicaldownsamplingforpointcloudclassification/2bf23d6a-91e7-46de-bfd9-b97d6b652773_origin.pdf b/adaptivehierarchicaldownsamplingforpointcloudclassification/2bf23d6a-91e7-46de-bfd9-b97d6b652773_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..362d15499614bf330c8ae11ac8a8eb0719d4e402
--- /dev/null
+++ b/adaptivehierarchicaldownsamplingforpointcloudclassification/2bf23d6a-91e7-46de-bfd9-b97d6b652773_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0e605a46135dad5100e303f3bf2d2ca90a54070682f9d308d9cdf392454ea71e
+size 1258321
diff --git a/adaptivehierarchicaldownsamplingforpointcloudclassification/full.md b/adaptivehierarchicaldownsamplingforpointcloudclassification/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..4fd69d86f02b784f1c5127b786ca70bff8fb7c13
--- /dev/null
+++ b/adaptivehierarchicaldownsamplingforpointcloudclassification/full.md
@@ -0,0 +1,285 @@
+# Adaptive Hierarchical Down-Sampling for Point Cloud Classification
+
+Ehsan Nezhadarya, Ehsan Taghavi, Ryan Razani, Bingbing Liu and Jun Luo
+Noah's Ark Lab, Huawei Technologies Inc.
+Toronto, Canada
+
+{ehsan.nezhadarya, ehsan.taghavi, ryan.razani, liu.bingbing, jun.luol}@huawei.com
+
+# Abstract
+
+Deterministic down-sampling of an unordered point cloud in a deep neural network has not been rigorously studied so far. Existing methods down-sample the points regardless of their importance to the network output. As a result, some important points in the point cloud may be removed, while less valuable points may be passed to next layers. In contrast, the proposed adaptive down-sampling method samples the points by taking into account the importance of each point, which varies according to application, task and training data. In this paper, we propose a novel deterministic, adaptive, permutation-invariant down-sampling layer, called Critical Points Layer (CPL), which learns to reduce the number of points in an unordered point cloud while retaining the important (critical) ones. Unlike most graph-based point cloud down-sampling methods that use $k$ -NN to find the neighboring points, CPL is a global down-sampling method, rendering it computationally very efficient. The proposed layer can be used along with a graph-based point cloud convolution layer to form a convolutional neural network, dubbed CP-Net in this paper. We introduce a CP-Net for 3D object classification task that achieves high accuracy for the ModelNet40 dataset among point cloud based methods, which validates the effectiveness of the CPL.
+
+# 1. Introduction
+
+In most robotic applications, laser point cloud data plays a key role in perception of the surrounding environment. Autonomous mobile robotic systems in particular use point cloud data to train deep models to solve different problems, such as dynamic object detection, Simultaneous Localization and Mapping (SLAM), path planning, etc. With the introduction of many new methods, such as PointNet [19], PointNet++ [20], DGCNN [26], PointCNN [11], and SO-Net [10], extracting features from unordered point cloud with deep neural networks has become a highly active field of research. These methods are shown to be quite successful in point cloud classification benchmarks, such as ModelNet40 [27].
+
+In practical scenarios, the number of points in the point cloud associated with an object may be quite large, especially as a result of using high density sensors such as Velodyne-64 [16]. One possible way to reduce computation is to down-sample the points in the point cloud as it gets passed through the network. A class of methods are proposed in which $k$ -NN search [18] is used to find the neighbourhood for each point and down-sample according to these neighbourhoods. Such methods, however, trade one kind of expensive computation (neighbourhood search) for another one (processing large point cloud).
+
+In another family of works such as [19] and [20], the 3D point cloud is processed directly, while other works transform the point cloud to regular voxels, such as the methods in [27, 14, 6]. Transforming to regular voxels, however, leads to loss of geometric information and high computational complexity. Recently, RS-CNN [12] attempts to learn irregular CNN-like filters to capture local point cloud features, which achieves state-of-the-art accuracy in classification. In addition to these papers, a deep learning network for point cloud sampling is presented in [5]. This method produces a down-sampled point cloud from a raw, unordered input point cloud, which is not guaranteed to be a subset of the original one. Therefore, a post-processing matching step is required, leading to a more complex system.
+
+In order to fully leverage a down-sampling method, what is highly needed is a deterministic, content-sensitive yet fast way of down-sampling an unordered point cloud that can be integrated into a deep neural network – a technique as effective and efficient as max pooling in conventional CNNs. In this paper, we introduce the Critical Points Layer (CPL), which meets these requirements.
+
+Unlike previous down-sampling methods that generate a set of points different from the input points, CPL not only selects the output points from the input, but also down-samples the points within the network in a way that the critical ones are not lost in this process. Unlike random-sampling layers, which generate a different set of points at each inference run, CPL is a deterministic layer, producing the same set of points after each run. It is invariant to permutation of input points, i.e. order-agnostic. It is adaptive in that it learns to down-sample the points during training. Last but not least, it is a global method not limited to neighbourhood search, which makes it quite efficient.
+
+# 2. Related Work
+
+# 2.1. Deep Learning on Point Clouds
+
+To cope with the sparsity of point clouds, deep learning methods tend to voxelize space and then apply 3D CNNs to the voxels [14, 28]. A problem with this approach is that network size and computational complexity grow quickly with spatial resolution. On the flip side, lower spatial resolution means larger voxels and higher quantization error. One way to alleviate this problem is to use octrees [25] or kd-trees [8] instead of voxels. In [8], for example, a kd-tree of the point cloud is built and then traversed by a hierarchical feature extractor, exploiting invariance of the point cloud at different spatial scales. However, such methods still rely on subdividing a bounding volume and fail to exploit the local geometric structure of the points themselves. In contrast, point-based neural networks do not require converting point clouds to another format. Resolution loss is thus avoided [29].
+
+# 2.2. CNNs on Point Cloud as Graphs
+
+A promising way to exploit local geometric information is to model the point cloud as a graph of unordered (non-Euclidean) points and then apply CNNs to it. This important research direction [23] has two main variations.
+
+Spectral methods redefine spatial graph convolution as a multiplication in the spectral domain [23, 22]. The first methods proposed along this line lack spatial locality of filters. Parameterizing convolution filters as Chebyshev polynomials of eigenvalues and evaluating them approximately leads to a computationally efficient way to create localized spatial filters [4]. Unfortunately, these filters are learnt in the spectral domain [1] and thus have to be the same for all graphs in the dataset. This means that where graph structure varies across the dataset, as with point clouds, a filter learnt on one shape cannot generalize to others.
+
+Local spatial filtering [23, 17, 9, 24, 30, 3, 13, 15], in contrast, employs spatial filters. A notion of local patch on graphs is employed to allow an operator similar to convolution to be applied to each local patch. Depending on the specific correspondence between filter weights and nodes in each local patch, we have variants such as MoNet [15], GCNN [7] and DCNN [2]. Although much work has been done to apply spatial filtering for deep learning on general graphs, only a few methods, such as KCNet [21], FoldingNet [29], ECC [23] and DGCNN [26] use deep learning on point cloud graphs.
+
+# 2.3. Point Cloud Down-Sampling in Deep Networks
+
+While graph convolution on point clouds has recently received great attention, point cloud down-sampling has not been properly explored in the literature. Such down-sampling is highly desirable for a few reasons:
+
+- Most graph convolution methods on point cloud use $k$ -NN search to find the neighbourhood of each point. Thus, down-sampling the points cuts the computational cost for subsequent convolution layers.
+- Reducing the number of points in the network results in lower runtime memory usage.
+- Down-sampling can boost robustness to certain perturbations in the input data.
+
+In typical point-based neural networks, such as PointNet and DGCNN, the number of points in the point cloud is fixed throughout the network. PointNet++ [20] does down-sample the point cloud using farthest point sampling (FPS). However, since it generates overlapping partitions by finding the $k$ -NN points around each sample point, it needs much more computational power than PointNet, due to the required search in the high-dimensional feature space.
+
+KCNet [21] and FoldingNet [29] down-sample the graph using a graph-based max-pooling that takes maximum features over the neighbourhood of each node using a pre-built $k$ -NN graph (KNNG). However, these methods provide no guarantee that the most important points, which we call critical points, will be passed downstream. A point with less relevant features may be selected or generated, while an important one may be removed or devalued.
+
+Moreover, the down-sampling used in some of these networks is static, i.e., the sampling is based only on the spatial locations of points in the input point cloud, not on their corresponding learnt features. On the other hand, methods that do use feature-space distance between points, such as PointNet++ [20], are computationally prohibitive as explained earlier. In another prominent method, [5] introduces a deep learning approach for optimized point cloud sampling in which a neural network model is trained to generate a down-sampled point cloud from the original dataset. Finally, all these methods generate a set of new points, instead of selecting a subset of the input points. This makes it difficult to track the contribution of each input point to the output.
+
+In this paper, we introduce a computationally efficient Critical Points Layer (CPL), which down-samples the points adaptively based on the learnt features. CPL globally filters out unimportant points while keeping the important ones, according to a point's level of contribution to the global max-pooling (max-reduced feature vector). The CPL is computationally very efficient because it does not need local nearest neighbour search. Moreover, since the feature vectors obtained from a graph convolution layer already contain the important local neighbourhood information of each point, the CPL yields a smaller subset without losing relevant information. In the next two sections, we explain the CPL in detail and report our experimental results.
+
+# 3. Proposed Solution
+
+In this section, we propose two new adaptive down-sampling methods that can be used in deep neural networks. The focus of the proposed methods is to introduce a systemic solution to down-sample the points (or the feature vectors associated with them) in an arbitrary neural network architecture. This differs from methods such as [5], in which a novel method is proposed to down-sample a specific dataset. Our proposed layers, named the Critical Points Layer (CPL) and the Weighted Critical Points Layer (WCPL), can efficiently down-sample the features related to an unordered point cloud while being permutation invariant. In the following, CPL, WCPL and a systemic approach to using these two layers in deep neural networks, and more specifically classification networks, are explained in detail.
+
+# 3.1. Critical Points Layer (CPL)
+
+Let us assume the input to the CPL is an unordered point cloud with $n$ points, each represented as a feature vector $\mathbf{x} \in \mathbb{R}^d$ , where $\mathbb{R}$ is the set of real numbers and $d$ is the dimension of the feature vector. The goal of CPL is to generate a subset of the input points, called Critical Points (CP), with $m \leq n$ points, each represented as a feature vector $\mathbf{y} \in \mathbb{R}^l$ , where $l$ is the dimension of the new feature vector. The critical points of a point cloud are the points with maximum information that need to be preserved in a down-sampling (or pooling) process. These points may change depending on the task and application.
+
+The block diagram of the proposed Critical Points Layer (CPL) is illustrated in Figure 1a. In order to elaborate more on the functionality of CPL, its pseudo-code is also provided in Algorithm 1. The steps of the algorithm are explained in more detail as follows:
+
+1. The input point cloud $\mathbf{F_S}$ is a matrix with $n$ rows (corresponding to $n$ input points) and $d$ columns (corresponding to $d$ -dimensional feature vectors).
+2. In the first step (Operation 3), the maximum feature value is obtained for each column of the matrix $\mathbf{F}_S$ . This is the same as the max-pooling operation in PointNet [19]. The resulting $d$ -dimensional feature vector, denoted by $\mathbf{f}_{\mathrm{max}}$ , has the same dimension as the input feature vectors and can be independently used for classification and segmentation tasks. However, we are interested in down-sampling the input points rather than generating a single feature vector out of them. To this aim, the index of each row with a maximum feature value is also saved in the index vector idx. Vector idx contains the indices of all the points that have contributed to the feature vector $\mathbf{f}_{\mathrm{max}}$ . By definition, we call these points the Critical Points (CP). These are the important points that should be preserved in the down-sampling process.
+
+
+(a) Critical Points Layer (CPL)
+
+
+(b) Weighted Critical Points Layer (WCPL)
+Figure 1: Illustration of the proposed CPL and WCPL.
+
+3. Index vector idx may contain multiple instances of the same point. To avoid these repetitions, unique indices are extracted from idx using the "set (unique)" function (Operation 6). The output set, which has the unique indices, is called the Critical Set (CS) and is denoted by uidx. Besides finding the unique indices, we also add up the feature values from $\mathbf{f}_{\mathrm{max}}$ that correspond to the same point or index (Operation 7). The resulting feature vector $\mathbf{f}_{\mathrm{S}}$ will later be used to sort the input points.
+4. Next, feature vector $\mathbf{f}_{\mathrm{S}}$ is sorted (in ascending order). The corresponding indices in uidx are also rearranged based on the sorting output (Operation 12), resulting in an index vector denoted by suidx. This step is necessary for the following sampling (resizing) operation. It also makes CPL invariant to the order of the input points.
+5. Number of elements in suidx may differ for different point clouds in the input batch. For batch processing however, these numbers need to be the same. To address this, for each point cloud in the input batch, the index vector suidx is up-sampled to a fixed size vector rsuidx using an up-sampling method for integer arrays, such as nearest neighbor resizing (Operation 18).
+
+Algorithm 1 (Weighted) Critical Points Layer (CPL/WCPL)
+1: function $(\mathbf{F_O},\mathbf{f_O},\mathbf{suidx}) = \mathrm{CPL}(\mathbf{F_S},\mathbf{F_I},k)$
+2: for $i = 0$ to $\mathrm{ncols}(\mathbf{F_S}) - 1$ do $\triangleright$ max pooling
+3: $\mathbf{f}_{\mathrm{max}}[i] = \max (\mathbf{F_S}[:,i])$
+4: $\mathbf{idx}[i] = \operatorname{argmax}(\mathbf{F_S}[:,i])$
+5: end for
+6: $\mathbf{uidx} = \mathrm{unique}(\mathbf{idx})$ $\triangleright$ set operation
+7: $\mathbf{f_S}[j] = \sum_{\mathbf{idx}[i] = \mathbf{uidx}[j]} \mathbf{f}_{\mathrm{max}}[i]$
+8: if WCPL then
+9: $\mathbf{fr}[j] = |\{i \mid \mathbf{idx}[i] = \mathbf{uidx}[j]\}|$ $\triangleright$ frequency of $\mathbf{uidx}[j]$ in idx
+10: end if
+11: $\sim, \mathbf{I} = \mathrm{sort}(\mathbf{f_S})$ $\triangleright$ sorting
+12: $\mathbf{suidx} = \mathbf{uidx}[\mathbf{I}]$
+13: if WCPL then
+14: $\mathbf{midx} = \mathrm{repeat}(\mathbf{suidx},\mathbf{fr})$
+15: $\mathbf{rmidx} = \mathrm{resize}(\mathbf{midx},k)$ $\triangleright$ nearest-neighbor resizing
+16: $\mathbf{F_O} = \mathbf{F_I}[\mathbf{rmidx},:]$ $\triangleright$ point collection
+17: else
+18: $\mathbf{rsuidx} = \mathrm{resize}(\mathbf{suidx},k)$ $\triangleright$ nearest-neighbor resizing
+19: $\mathbf{F_O} = \mathbf{F_I}[\mathbf{rsuidx},:]$ $\triangleright$ point collection
+20: end if
+21: for $i = 0$ to $\mathrm{ncols}(\mathbf{F_O}) - 1$ do $\triangleright$ max pooling
+22: $\mathbf{f_O}[i] = \max (\mathbf{F_O}[:,i])$
+23: end for
+24: if WCPL then return $(\mathbf{F_O},\mathbf{f_O},\mathbf{rmidx})$
+25: else return $(\mathbf{F_O},\mathbf{f_O},\mathbf{rsuidx})$
+26: end if
+27: end function
+
+6. As the final step, the up-sampled index vector rsuidx, which contains the indices of all the critical points, is used to gather the points and their corresponding feature vectors. Since different feature vectors may correspond to a single point, and because information is filtered in hidden NN layers, we may want to gather the features from layers (denoted by $\mathbf{F_I}$ ) other than the one used for selecting the points (denoted by $\mathbf{F_S}$ ). However, critical points are defined based on the contribution of each point to the maximum feature vector obtained from $\mathbf{F_S}$ , thus here we use $\mathbf{F_I} = \mathbf{F_S}$ . A minimal code sketch of these steps is given below.
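+
+For concreteness, the following is a minimal NumPy sketch of the non-weighted CPL branch of Algorithm 1. The variable names, the use of NumPy, and the nearest-neighbour resize via `np.linspace` are our own illustrative choices, not the authors' released implementation.
+
+```python
+import numpy as np
+
+def cpl_forward(F_S, F_I, k):
+    """Sketch of the CPL forward pass (Algorithm 1, non-weighted branch).
+    F_S: (n, d) features used to select critical points; F_I: features
+    gathered for the selected points (here F_I = F_S); k: fixed output size."""
+    # Column-wise max pooling and the index of the winning point per feature.
+    f_max = F_S.max(axis=0)                      # (d,)
+    idx = F_S.argmax(axis=0)                     # (d,) critical point indices
+    # Unique critical-point indices and the summed contribution of each.
+    uidx = np.unique(idx)                        # critical set
+    f_s = np.array([f_max[idx == u].sum() for u in uidx])
+    # Sort by summed contribution (ascending); this gives order invariance.
+    suidx = uidx[np.argsort(f_s)]
+    # Nearest-neighbour resize of the index vector to a fixed length k.
+    pos = np.round(np.linspace(0, len(suidx) - 1, k)).astype(int)
+    rsuidx = suidx[pos]
+    # Gather the selected points and max-pool their features.
+    F_O = F_I[rsuidx, :]                         # (k, d)
+    f_O = F_O.max(axis=0)                        # (d,)
+    return F_O, f_O, rsuidx
+
+# Toy usage: 1024 points with 64-dimensional features, down-sampled to 256.
+F = np.random.randn(1024, 64).astype(np.float32)
+F_O, f_O, rsuidx = cpl_forward(F, F, k=256)
+```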
+
+One of the main requirements of any layer designed for point cloud processing is its invariance to point cloud permutation. The proposed CPL fulfills this requirement via the following properties:
+
+- Sorting the feature vector $\mathbf{f}_{\mathrm{S}}$ in step 4 is order independent, because sorting is based on feature values and not based on indices of the points.
+- Nearest-neighbor resizing in step 5 is invariant to swapping the index of the input points, i.e.
+
+$$
+\mathrm{resize}(\mathrm{sort}(\mathrm{swap}(\mathbf{uidx}))) = \mathrm{swap}(\mathrm{resize}(\mathrm{sort}(\mathbf{uidx}))), \tag{1}
+$$
+
+where $sort$ is applied based on feature values, and swap is applied on index only.
+
+# 3.2. Weighted Critical Points Layer (WCPL)
+
+In CPL, a point in the point cloud is counted as a critical point if any of its features contributes to the output maximum feature vector $\mathbf{f}_{\mathrm{max}}$ , regardless of the number of its contributing features. For example, if a point contributes with two of its features, while another point has ten contributing features, both are treated the same in CPL. In other words, in CPL the "importance" of a point has a binary value: a given point is either important (critical) or unimportant (uncritical). In this section, we introduce a modified version of CPL, called Weighted Critical Points Layer (WCPL). The proposed WCPL (Figure 1b) assigns weights to points based on their level of contribution to $\mathbf{f}_{\mathrm{max}}$ .
+
+In this context, to increase the weight of a point by a factor of $C$ , we repeat the point index $C$ times. By increasing the repetition frequency, the probability of selecting the point in the down-sampling process will also increase. From another point of view, in WCPL, the probability of missing a critical point in the output is lower than that in CPL. The pseudo-code of WCPL is given in Algorithm 1 using the if statements.
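+
+A compact sketch of the index weighting used by WCPL (again with assumed, illustrative names) is given below: the only difference from CPL is that each critical index is repeated according to how many feature columns it wins before the fixed-size resize.
+
+```python
+import numpy as np
+
+def wcpl_indices(F_S, k):
+    """Sketch of WCPL index selection: points that win more feature columns
+    are repeated more often, so they are more likely to survive the resize."""
+    f_max = F_S.max(axis=0)
+    idx = F_S.argmax(axis=0)
+    uidx, fr = np.unique(idx, return_counts=True)      # critical set + frequencies
+    f_s = np.array([f_max[idx == u].sum() for u in uidx])
+    order = np.argsort(f_s)                            # ascending, as in CPL
+    midx = np.repeat(uidx[order], fr[order])           # weight C = repetition count
+    pos = np.round(np.linspace(0, len(midx) - 1, k)).astype(int)
+    return midx[pos]                                   # indices used to gather F_I
+```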
+
+# 3.3. Critical Points Net (CP-Net)
+
+In this section, we propose a hierarchical architecture to apply deep convolutional neural networks to point clouds, by systematically reducing the number of points using the proposed CPL/WCPL. In the proposed network model, named Critical Points Net (CP-Net), any graph convolution method, such as DCNN [2], GCNN [13], MoNet [15] or EdgeConv (from DGCNN [26]) can be used in convolution layers. The block diagram of CP-Net using EdgeConv as an example is shown in Figure 2.
+
+
+Figure 2: General block diagram of the proposed CP-Net.
+
+
+Figure 3: The proposed point cloud classification network using CP-Net.
+
+The input in Figure 2 is an unordered point cloud of size $n$ . In the first step, the point cloud is passed into a convolution layer of choice to filter the input into a richer set of features. The filtered output point cloud $F_{S_0}$ is then used as an input to the first CPL/WCPL. Using a CPL/WCPL with down-sampling factor $k_{0}$ , the number of points in $F_{S_0}$ is reduced to $n / k_{0}$ points. These steps are repeated as many times as necessary to achieve a desired size of the point cloud (both in terms of the number of points and the feature vector size). Note that at the $j$ -th CPL/WCPL block, one can also benefit from using or concatenating features from all or some of the previous layers, i.e., $\{F_{I_{j0}}, F_{I_{j1}}, \ldots, F_{I_{jj}}\}$ , as long as they correspond to the same points. As a result, the number of output points will be $\frac{n}{k_0 k_1 \cdots k_j}$ .
+
+# 3.4. CP-Net for 3D Object Classification
+
+Here we give an example of CP-Net application in the 3D object classification problem. Block diagram of the proposed network is illustrated in Figure 3. The network is composed of three subnets: 1) $n$ -point feature extraction subnet, 2) $(n/4)$ -point subnet and 3) classification subnet. The detailed steps of the proposed network are as follows:
+
+- Network input is an unordered point cloud of size $n \times 3$ , where each point is a 3D vector.
+- The input data goes through a spatial transformer network as explained in [19], to make it robust against any rigid transformation, including rotation and translation. It is worth noting that instead of using the original input, a modified version of EdgeConv [26] edge feature is used for spatial transformation, as explained in the next step.
+
+- The output of the spatial transform goes into a filtering CNN, here EdgeConv [26], to produce richer features. Unlike the original EdgeConv [26] operator, which uses two kernels in the edge feature function, we use the triple-kernel version $h_{\Theta}(x_i, x_j - x_i, (x_j - x_i)^2)$ , where $(x_j - x_i)^2$ is an element-wise square operation between each point $x_i$ and its neighbouring point $x_j$ (a small code sketch is given after this list). In the proposed network, applying EdgeConv with 128 filters to the input point cloud of size $n \times 3$ results in a point cloud of size $n \times 128$ .
+- A multi-layer perceptron (MLP) layer expands the feature dimension from 128 to 1024 features, resulting in a point cloud of size $n \times 1024$ .
+- Next, CPL/WCPL is applied to find the critical points and to reduce the number of input points. As shown in Section 4, this step reduces the computational complexity without any loss in the classification accuracy. A down-sampling factor of $1/4$ is chosen to reduce the number of points from $n$ to $n/4$ .
+- Another EdgeConv layer is used to filter the point cloud, this time preserving the depth and size, to further process the received point cloud. Note that reducing the number of points in the previous layer greatly reduces the computational complexity of this layer.
+
+- A reduce-max layer is used to generate a vector of size 1024, out of the point cloud of size $n \times 1024$ .
+- Finally, fully connected layers of size 512, 256 and 40 are applied to transform the feature vector of size 1024 to the number of classes in the ModelNet40 dataset [27], which is 40.
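+
+As referenced in the list above, the following is a minimal NumPy sketch of the triple-kernel edge feature. A single shared linear layer with ReLU stands in for the MLP $h_{\Theta}$ , and the $k$ -NN search in coordinate space, the layer sizes and all names are our own assumptions rather than the authors' implementation.
+
+```python
+import numpy as np
+
+def edgeconv_triple(X, W, k=20):
+    """EdgeConv-style layer with edge feature h(x_i, x_j - x_i, (x_j - x_i)^2).
+    X: (n, c) point features; W: (3c, c_out) shared linear kernel."""
+    n, c = X.shape
+    # Pairwise squared distances -> k nearest neighbours per point.
+    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)          # (n, n)
+    nbrs = np.argsort(d2, axis=1)[:, 1:k + 1]                    # skip the point itself
+    out = np.empty((n, W.shape[1]), dtype=X.dtype)
+    for i in range(n):
+        diff = X[nbrs[i]] - X[i]                                 # (k, c)
+        edge = np.concatenate([np.repeat(X[i][None], k, 0), diff, diff ** 2], axis=1)
+        out[i] = np.maximum(0, edge @ W).max(axis=0)             # ReLU, max over edges
+    return out
+
+X = np.random.randn(1024, 3).astype(np.float32)                  # n x 3 input
+W = (np.random.randn(9, 128) * 0.1).astype(np.float32)           # 128 filters
+features = edgeconv_triple(X, W)                                 # (1024, 128)
+```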
+
+In the proposed 3D classification method, standard softmax cross entropy is used as the loss function. In addition, all layers include a ReLU activation function and batch normalization.
+
+# 4. Experiments
+
+# 4.1. Data preprocessing
+
+We evaluate our model on ModelNet40 3D object classification dataset [27]. The dataset contains 12,311 meshed CAD models from 40 different object categories out of which 9,843 models are used for training and 2,468 models for testing. From each model mesh surface, 1024 points are uniformly sampled and normalized to the unit sphere. For data augmentation, we randomly scale, rotate and shift each object point cloud in the 3D space.
+
+# 4.2. Training Details
+
+To train the model, we use the Adam optimizer with an initial learning rate of 0.001 and decay it exponentially with a rate of 0.5 every 200,000 steps. The decay rate of batch normalization starts from 0.5 and is increased to 0.99. Dropout with probability 0.5 is used in the last two fully-connected layers. Training the network with TensorFlow on an Nvidia P100 GPU with batch size 32 takes 9-10 hours for 400 epochs.
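+
+The exact training code is not reproduced here; the snippet below is only an assumed tf.keras equivalent of the reported schedule, for orientation.
+
+```python
+import tensorflow as tf
+
+# Initial learning rate 0.001, exponentially decayed by 0.5 every 200k steps.
+lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
+    initial_learning_rate=1e-3,
+    decay_steps=200_000,
+    decay_rate=0.5,
+    staircase=True)
+optimizer = tf.keras.optimizers.Adam(lr_schedule)
+# Batch size 32; dropout 0.5 on the last two fully connected layers;
+# batch-normalization decay annealed from 0.5 to 0.99 during training.
+```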
+
+# 4.3. Statistical Results
+
+To evaluate the performance of a 3D point cloud classification method, we use both overall accuracy and per-class average accuracy, calculated over all the test samples.
+
+The classification accuracy results for our proposed CP-Net/WCP-Net are shown in Table 1 with comparisons against previously proposed methods. As illustrated, our CP-Net/WCP-Net methods rank as the runner-up to [12] and surpass the accuracy of all other methods in Table 1 as well as those on the ModelNet40 benchmark leaderboard.
+
+# 4.4. Qualitative Results
+
+Figure 4 shows how the proposed CPL learns to down-sample different point clouds. The original point clouds of size 1024 for the object classes lamp, airplane, flower-pot, laptop and car are shown in Figure 4(a). Figures 4(b-e) correspond to the outputs obtained using the down-sampling ratio of 0.25, at epochs 1, 100, 200 and 300, respectively. As seen in Figure 4(b), at the beginning of the training,
+
+| Algorithm | Overall Accuracy (%) | Mean Class Accuracy (%) |
+| --- | --- | --- |
+| Vox-Net [14] | 83.00 | 85.9 |
+| ECC [23] | 83.2 | - |
+| SO-Net [10] | 89.16 | - |
+| PointNet [19] | 89.20 | 86.0 |
+| PointNet++ [20] | 90.70 | - |
+| KCNN [21] | 91.0 | - |
+| KD-Net [8] | 91.8 | - |
+| DGCNN [30] (1 vote) | 91.84 | 89.40 |
+| RS-CNN [12] | 93.6 | - |
+| Ours (CP-Net) | 92.33 | 89.90 |
+| Ours (WCP-Net) | 92.41 | 90.53 |
+
+Table 1: Classification accuracy results on ModelNet40 dataset [27], for input size ${1024} \times 3$ .
+
+| Method | Double Kernel | Triple Kernel |
+| --- | --- | --- |
+| DGCNN | 91.84 (135ms) | 89.26 (141ms) |
+| CP-Net | 91.88 (115ms) | 92.33 (119ms) |
+| WCP-Net | 91.76 (116ms) | 92.41 (120ms) |
+
+Table 2: Effect of edge feature kernels on overall classification accuracy (\%) and execution time (in ms).
+
+some important parts of the objects, such as the lamp column, flower leaves and laptop screen, are partially lost as a result of down-sampling. After 300 epochs of training, however, CPL learns to down-sample the point cloud such that the critical points of the object are mostly retained. In the context of point cloud classification, by important points of an object we mean those points that contain the necessary information to discriminate between different objects in the dataset.
+
+Figure 4(f) shows the corresponding point clouds downsampled by the ratio of $\frac{1}{16}$ , after 300 training epochs. As seen, the important points of each object for our classification task are still preserved even in such small 64-point point clouds. The lamp column, airplane wings and six corners of the laptop are some examples of the preserved important object parts.
+
+# 4.5. Ablation studies
+
+EdgeConv Kernels The effect of using two and three kernels (in the EdgeConv operator used in DGCNN [26]) on the overall classification accuracy and execution time is shown in Table 2. For the double-kernel version, we use the one used in [26]. The triple-kernel version is defined in Section 3.4. As seen, the triple-kernel version is computationally more complex than the double-kernel version. In both cases, the proposed CP-Net/WCP-Net not only outperforms DGCNN in classification accuracy but is also computationally less complex, due to CPL/WCPL point cloud down-sampling.
+
+Effect of Bottleneck Dimension The effect of bottleneck layer size (number of features in the output feature vector) on classification accuracy is shown in Table 3.
+
+
+Figure 4: (From top to bottom and left to right) The original point clouds for the laptop and car object categories in ModelNet40 dataset [27], and their down-sampled versions (with ratio $\frac{1}{4}$ ) obtained after training the classification CP-Net, shown in Figure 3, for 1, 100, 200 and 300 epochs. (f) The result of training for 300 epochs with down-sampling ratio $\frac{1}{16}$ . The images are color coded to reflect the depth information.
+
+Clearly, increasing the bottleneck layer size improves the accuracy; however, it almost saturates at around 1024 features. Note that even with a bottleneck size of 64, the accuracy of the proposed CP-Net (89.35\%) is higher than that of PointNet with a bottleneck size of 1024 (89.20\%).
+
+| | 64 | 128 | 256 | 512 | 1024 |
+| --- | --- | --- | --- | --- | --- |
+| CP-Net | 89.35 | 89.83 | 90.85 | 91.94 | 92.33 |
+| WCP-Net | 89.16 | 89.71 | 90.73 | 91.54 | 92.41 |
+
+Table 3: Effect of bottleneck dimension on accuracy $(\%)$
+
+Effect of Down-Sampling Ratio Table 4 shows the effect of down-sampling ratio on classification accuracy. As expected, the more the point cloud is shrunk, the lower the accuracy. This is because some of the important information about the object is lost as a result of down-sampling. The proposed CPL and WCPL layers, however, preserve the important object points as much as possible. This can be verified from Table 4, where the difference between the accuracy values at down-sampling ratios 1 and $1/16$ (corresponding to point clouds of size 1024 and 64 points) is only $0.73\%$ . This means that in the down-sampling process, CPL preserves the most important information of each object, so that with such a small number of points, objects are still classified with high accuracy.
+
+| | 1 | 1/2 | 1/4 | 1/8 | 1/16 |
+| --- | --- | --- | --- | --- | --- |
+| CP-Net | 92.25 | 92.24 | 92.33 | 92.29 | 91.52 |
+| WCP-Net | 92.09 | 92.15 | 92.41 | 92.03 | 91.81 |
+
+Table 4: Effect of down-sampling ratio on accuracy $(\%)$
+
+Considering the effect of down-sampling for a ratio of 4, i.e., 256 points, it is worth comparing the accuracy of CPL with a random down-sampler. If we use random down-sampling of ratio 4 instead of CPL/WCPL, the accuracy of the CP-Net classification network drops to 91.47 from 92.33 and 92.41 for CPL and WCPL, respectively, which shows the effectiveness of CPL in obtaining the critical points and features in the architecture. It is worth noting that although using random down-sampling is viable in training, it is not mathematically and statistically sound at inference. The reason is that every time the inference is run, the random sampler produces a new set of indices for down-sampling, resulting in a non-deterministic output/classification for the same input. Besides, our experiments showed that unlike CPL, random down-sampling is unable to find the critical points of the input point cloud.
+
+| Method | Model Size (MB) | Inference Time (ms) | Accuracy (%) |
+| --- | --- | --- | --- |
+| PointNet | 40 | 36.1 | 89.20 |
+| PointNet++ | 12 | 232.2 | 90.70 |
+| KCNN | 11 | 26.4 | 91.0 |
+| DGCNN | 21 | 134.5 | 91.84 |
+| CP-Net/WCP-Net | 57 | 118.9 | 92.33/92.41 |
+
+Table 5: Model size (in MB), inference mode execution time on P100 GPU (in ms) and classification accuracy $(\%)$ .
+
+# Time and Space Complexity
+
+Table 5 compares the execution time of the proposed CP-Net/WCP-Net with other state-of-the-art object classification methods, on an Ubuntu machine with a single P100 GPU and an Intel 1.2 GHz CPU. A batch of 32 point clouds, each with 1024 points, is used in all the experiments.
+
+In terms of model size, the smallest models are generated by KCNet. CP-Net and WCP-Net generate the largest models due to their larger number of network parameters. The model size can be reduced by decreasing the bottleneck dimension.
+
+In terms of computational complexity, CP-Net and WCP-Net run faster than both DGCNN and PointNet++. However, they are slower than PointNet and KCNet. The lower computational complexity of CP-Net in comparison with DGCNN is due to the employment of CP layer in its network. Similarly, by a proper network design, the proposed CPL/WCPL is expected to accelerate other classification deep networks, such as KCNet and PointNet++.
+
+# 5. Conclusion
+
+In this paper we propose a new deterministic adaptive down-sampling method that can be trained to pass on the most important points (critical points) to the next layers in a neural network. Two down-sampling layers, the critical points layer (CPL) and its weighted version, the weighted CPL (WCPL), are proposed. As a systemic approach for using CPL/WCPL in deep neural networks, CP-Net, a hierarchical implementation of these layers, is also introduced. Finally, a deep classification network is designed based on the proposed CP-Net for 3D object classification. Experimental results on a common dataset show the competitiveness of the proposed method in terms of the classification accuracy vs. computational complexity trade-off, in comparison with previous state-of-the-art methods.
+
+Last but not least, the effectiveness of CPL in dynamically down-sampling unordered data inspires the design of a new type of auto-encoder that can handle unordered data such as point clouds. This approach is already under investigation and results will be reported in future work.
+
+# References
+
+[1] William N Anderson Jr and Thomas D Morley. Eigenvalues of the laplacian of a graph. Linear and multilinear algebra, 18(2):141-145, 1985.
+[2] James Atwood and Don Towsley. Search-convolutional neural networks. CoRR, 2015.
+[3] Davide Boscaini, Jonathan Masci, Emanuele Rodola, and Michael Bronstein. Learning shape correspondence with anisotropic convolutional neural networks. In Advances in Neural Information Processing Systems, pages 3189-3197, 2016.
+[4] Michael Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in Neural Information Processing Systems, pages 3844-3852, 2016.
+[5] Oren Dovrat, Itai Lang, and Shai Avidan. Learning to sample. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2760-2769, 2019.
+[6] Matheus Gadelha, Rui Wang, and Subhransu Maji. Multiresolution tree networks for 3d point cloud processing. In Proceedings of the European Conference on Computer Vision (ECCV), pages 103-118, 2018.
+[7] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
+[8] Roman Klokov and Victor Lempitsky. Escape from cells: Deep kd-networks for the recognition of 3d point cloud models. In Computer Vision (ICCV), 2017 IEEE International Conference on, pages 863-872. IEEE, 2017.
+[9] Ron Levie, Federico Monti, Xavier Bresson, and Michael M Bronstein. Cayleynets: Graph convolutional neural networks with complex rational spectral filters. arXiv preprint arXiv:1705.07664, 2017.
+[10] Jiaxin Li, Ben M Chen, and Gim Hee Lee. So-net: Self-organizing network for point cloud analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9397–9406, 2018.
+[11] Yangyan Li, Rui Bu, Mingchao Sun, and Baoquan Chen. Pointcnn. arXiv preprint arXiv:1801.07791, 2018.
+[12] Yongcheng Liu, Bin Fan, Shiming Xiang, and Chunhong Pan. Relation-shape convolutional neural network for point cloud analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8895-8904, 2019.
+[13] Jonathan Masci, Davide Boscaini, Michael Bronstein, and Pierre Vandergheynst. Geodesic convolutional neural networks on riemannian manifolds. In Proceedings of the IEEE international conference on computer vision workshops, pages 37-45, 2015.
+
+[14] Daniel Maturana and Sebastian Scherer. Voxnet: A 3d convolutional neural network for real-time object recognition. In Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, pages 922-928. IEEE, 2015.
+[15] Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodola, Jan Svoboda, and Michael M Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. In Proc. CVPR, number 2, page 3, 2017.
+[16] Frank Moosmann and Christoph Stiller. Velodyne slam. In Intelligent Vehicles Symposium (IV), 2011 IEEE, pages 393-398. IEEE, 2011.
+[17] Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning convolutional neural networks for graphs. In International conference on machine learning, pages 2014-2023, 2016.
+[18] Leif E Peterson. K-nearest neighbor. Scholarpedia, 4(2):1883, 2009.
+[19] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. Proc. Computer Vision and Pattern Recognition (CVPR), IEEE, 1(2):4, 2017.
+[20] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In Advances in Neural Information Processing Systems, pages 5099-5108, 2017.
+[21] Yiru Shen, Chen Feng, Yaoqing Yang, and Dong Tian. Mining point cloud local structures by kernel correlation and graph pooling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 4, 2018.
+[22] David I Shuman, Benjamin Ricaud, and Pierre Vandergheynst. A windowed graph fourier transform. In Statistical Signal Processing Workshop (SSP), 2012 IEEE, pages 133-136. IEEE, 2012.
+[23] Martin Simonovsky and Nikos Komodakis. Dynamic edge-conditioned filters in convolutional neural networks on graphs. In Proc. CVPR, 2017.
+[24] Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 1(2), 2017.
+[25] Peng-Shuai Wang, Yang Liu, Yu-Xiao Guo, Chun-Yu Sun, and Xin Tong. O-cnn: Octree-based convolutional neural networks for 3d shape analysis. ACM Transactions on Graphics (TOG), 36(4):72, 2017.
+[26] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. Dynamic graph cnn for learning on point clouds. arXiv preprint arXiv:1801.07829, 2018.
+[27] Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1912-1920, 2015.
+[28] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3d shapenets: A deep representation for volumetric shapes. In Computer Vision and Pattern Recognition, 2015.
+
+[29] Yaoqing Yang, Chen Feng, Yiru Shen, and Dong Tian. Foldingnet: Point cloud auto-encoder via deep grid deformation. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), volume 3, 2018.
+[30] Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In Proceedings of AAAI Conference on Artificial Intelligence, 2018.
\ No newline at end of file
diff --git a/adaptivehierarchicaldownsamplingforpointcloudclassification/images.zip b/adaptivehierarchicaldownsamplingforpointcloudclassification/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..426395f0970c73ec7991c92da08f94ff8de48a37
--- /dev/null
+++ b/adaptivehierarchicaldownsamplingforpointcloudclassification/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8d79c19147b4f1af2800c2e79c737c36008194c9e210652daa2abf7dc888ac43
+size 332553
diff --git a/adaptivehierarchicaldownsamplingforpointcloudclassification/layout.json b/adaptivehierarchicaldownsamplingforpointcloudclassification/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..97b8007a6e4a6c371174c4e7260d170e7f13a47b
--- /dev/null
+++ b/adaptivehierarchicaldownsamplingforpointcloudclassification/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:450182f43c502d1e2c068344a5b40aaf4dbba553c9a6ba849377334e887e3a64
+size 336437
diff --git a/adaptiveinteractionmodelingviagraphoperationssearch/5d407956-031c-4261-8cc0-1a9fd0a10fbe_content_list.json b/adaptiveinteractionmodelingviagraphoperationssearch/5d407956-031c-4261-8cc0-1a9fd0a10fbe_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..f60c11bebc5d1a14a8fdc384779e904ee341333a
--- /dev/null
+++ b/adaptiveinteractionmodelingviagraphoperationssearch/5d407956-031c-4261-8cc0-1a9fd0a10fbe_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:59aadb3e3198e2a83a90feb3b6555e9062f076517a891271984012f1315e9394
+size 83990
diff --git a/adaptiveinteractionmodelingviagraphoperationssearch/5d407956-031c-4261-8cc0-1a9fd0a10fbe_model.json b/adaptiveinteractionmodelingviagraphoperationssearch/5d407956-031c-4261-8cc0-1a9fd0a10fbe_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..f900ac1f7a2fbbc6beae1399c34bd5466187c7fe
--- /dev/null
+++ b/adaptiveinteractionmodelingviagraphoperationssearch/5d407956-031c-4261-8cc0-1a9fd0a10fbe_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4baac1b87c7b961218ac4d92496ce62362c918e4401ee4658813c9cfe2eb308a
+size 104342
diff --git a/adaptiveinteractionmodelingviagraphoperationssearch/5d407956-031c-4261-8cc0-1a9fd0a10fbe_origin.pdf b/adaptiveinteractionmodelingviagraphoperationssearch/5d407956-031c-4261-8cc0-1a9fd0a10fbe_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..1ed33203ed44f0b335c8ddd19a056da0ea59e3b1
--- /dev/null
+++ b/adaptiveinteractionmodelingviagraphoperationssearch/5d407956-031c-4261-8cc0-1a9fd0a10fbe_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dd6e20aacbc4b713a0bbf8799b2a8010aeea4c9a8ca330c237c0ca459808d61d
+size 1741341
diff --git a/adaptiveinteractionmodelingviagraphoperationssearch/full.md b/adaptiveinteractionmodelingviagraphoperationssearch/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..8ea8be03cf24d84b68598505d12407b3f6d1ce95
--- /dev/null
+++ b/adaptiveinteractionmodelingviagraphoperationssearch/full.md
@@ -0,0 +1,412 @@
+# Adaptive Interaction Modeling via Graph Operations Search
+
+Haoxin Li $^{1}$ , Wei-Shi Zheng $^{2,3,5,*}$ , Yu Tao $^{2,4}$ , Haifeng Hu $^{1,*}$ , Jian-Huang Lai $^{2}$
+
+$^{1}$ School of Electronics and Information Technology, Sun Yat-sen University, China
+
+$^{2}$ School of Data and Computer Science, Sun Yat-sen University, China
+
+$^{3}$ Peng Cheng Laboratory, Shenzhen 518005, China
+
+$^{4}$ Accuvision Technology Co. Ltd.
+
+$^{5}$ Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, China
+
+lihaoxin05@gmail.com, wzheng@ieee.org, gytaoyu@hotmail.com
+
+huhaif@mail.sysu.edu.cn, stsljh@mail.sysu.edu.cn
+
+# Abstract
+
+Interaction modeling is important for video action analysis. Recently, several works design specific structures to model interactions in videos. However, these structures are manually designed and non-adaptive, which requires structure design effort and, more importantly, means they cannot model interactions adaptively. In this paper, we automate the process of structure design to learn adaptive structures for interaction modeling. We propose to search the network structures with a differentiable architecture search mechanism, which learns to construct adaptive structures for different videos to facilitate adaptive interaction modeling. To this end, we first design the search space with several basic graph operations that explicitly capture different relations in videos. We experimentally demonstrate that our architecture search framework learns to construct adaptive interaction modeling structures, which provides more understanding about the relations between the structures and some interaction characteristics, and also relieves the burden of structure design. Additionally, we show that the designed basic graph operations in the search space are able to model different interactions in videos. The experiments on two interaction datasets show that our method achieves performance competitive with the state of the art.
+
+# 1. Introduction
+
+Video classification is one of the basic research topics in computer vision. Existing video classification solutions can be mainly divided into two groups. The first one is the two-stream network based methods [28, 33, 8], which model appearance and motion features with RGB and optical flow streams respectively; the second type is the 3D convolutional neural network (CNN) based methods [29, 4, 32, 26, 31, 23], which model spatiotemporal features with stacked 3D convolutions or their decomposed variants. While these methods work well on scene-based action classification, most of them obtain unsatisfactory performance on recognizing interactions, since they have not effectively or explicitly modeled the relations.
+
+
+Figure 1. Illustration of our method. We search adaptive network structures to model the interactions in different videos, in which the candidate basic operations (dashed arrows) are selected (solid arrows) to construct adaptive structures for different videos.
+
+To model the interactions in videos, some methods employ specific structures [40, 14, 16] to capture temporal relations. Others model the relations between entities. Non-local network [34] and GloRe [7] design networks with self-attention and graph convolution to reason about the relations between semantic entities. CPNet [22] aggregates features from potential correspondences for representation learning. Space-time region graphs [35] are developed to model the interactions between detected objects with a graph convolution network (GCN).
+
+However, existing methods have to manually design network structures for interaction modeling, which requires considerable architecture engineering effort. More importantly, the designed structures are fixed, so they cannot adaptively model different interactions. For example, the two videos in Figure 1 contain interactions with greatly different complexities and properties: the upper one mainly concerns the motion of the background, while the lower one involves complicated relations among objects. Which kind of structure should be used to adequately model the interactions is not completely known in advance, so adaptive structures are required for more effective interaction modeling.
+
+Instead of designing fixed network structures manually, we propose to automatically search adaptive network structures directly from training data, which not only reduces structure design effort but also enables adaptive interaction modeling for different videos. As briefly illustrated in Figure 1, different operations are adaptively selected to construct the network structures for adaptive interaction modeling of different videos, which is implemented by differentiable architecture search. To construct the architecture search space, we first design several basic graph operations which explicitly capture different relations in videos, such as the temporal changes of objects and relations with the background. Our experiments show that the architecture search framework automatically constructs adaptive network structures for different videos according to some interaction characteristics, and that the designed graph operations in the search space explicitly model different relations in videos. Our method obtains performance competitive with the state of the art on two interaction recognition datasets.
+
+In summary, the contribution of this paper is two-fold. (1) We propose to automatically search adaptive network structures for different videos for interaction modeling, which enables adaptive interaction modeling and reduces structure design effort. (2) We design the search space with several basic graph operations, which explicitly model different relations in videos.
+
+# 2. Related Work
+
+# 2.1. Action and Interaction Recognition
+
+In the deep learning era, action recognition obtains impressive improvements with 2D [28, 33, 8] or 3D [15, 29, 26, 4, 32, 31, 23] CNNs. 2D CNNs use RGB frames and optical flows as separate streams to learn appearance and motion representations respectively, while 3D CNNs learn spatiotemporal features with 3D convolutions or the decomposed counterparts. Some other works [19, 16] learn spatiotemporal representations by shifting feature channels or encoding motion features together with spatiotemporal features, which achieve high performance and efficiency. As for temporal-based actions, TRN [40] and Timeception [14] design specific structures to model the temporal relations.
+
+To model interactions, Gupta et al. [11] apply spatial and functional constraints with several integrated tasks to recognize interactions. InteractNet [9] and Dual Attention Network [37] are proposed to model the interactions between humans and objects. Some other works model the relations between entities for interaction recognition. Non-local network [34] models the relations between features with self-attention. CPNet [22] aggregates correspondences for representation learning. GCNs are employed to model the interactions between nodes [35, 7]. The specific structures in the above methods are non-adaptive. In practice, however, we do not know what kinds of interactions are contained in videos, and non-adaptive structures cannot sufficiently model various interactions, so adaptive structures are required for effective modeling.
+
+In this work, we propose to automatically search adaptive network structures with differentiable architecture search mechanism for interaction recognition.
+
+# 2.2. Graph-based Reasoning
+
+Graph-based methods are widely used for relation reasoning in many computer vision tasks. For example, in image segmentation, CRFs and random walk networks are used to model the relations between pixels [5, 3, 18]. GCNs [12, 17] are proposed to collectively aggregate information from graph structures and applied in many tasks including neural machine translation, relation extraction and image classification [1, 2, 25, 36]. Recently, GCNs are used to model the relations between objects or regions for interaction recognition. For example, Chen et al. [7] adopt GCN to build a reasoning module to model the relations between semantic nodes, and Wang et al. [35] employ GCN to capture the relations between detected objects.
+
+In this paper, we design the search space with basic operations based on graph. We propose several new graph operations that explicitly model different relations in videos.
+
+# 2.3. Network Architecture Search
+
+Network architecture search aims to discover optimal architectures automatically. The automatically searched architectures obtain competitive performance in many tasks [42, 20, 43]. Due to the computational demands of discrete-domain optimization [43, 27], Liu et al. [21] propose DARTS, which relaxes the search space to be continuous and optimizes the architecture by gradient descent.
+
+Inspired by DARTS, we employ a differentiable architecture search mechanism to automatically search adaptive structures directly from training data, which facilitates adaptive interaction modeling for different videos and relieves the burden of structure design.
+
+
+Figure 2. Overall framework. Some frames are sampled from a video as the input to our model. We extract basic features of the sampled frames with a backbone CNN, and extract class-agnostic bounding box proposals with RPN model. Then we apply RoIAlign to obtain the features of proposals and regard them as node features. In the graph operations search stage, we search for a computation cell, where the supernodes are transformed by the selected graph operations on the superedges (see Section 3.2 and 3.3 for details), to construct adaptive structures. The searched structures are used to model the interactions in the corresponding videos. Finally, the node features are pooled into a video representation for interaction recognition.
+
+# 3. Proposed Method
+
+In order to learn an adaptive interaction modeling structure for each video, we elaborate the graph operations search method in this section. We design the architecture search space with several basic graph operations, where, in addition to graph convolution, the candidate operations are enriched with several proposed new graph operations modeling different relations, e.g. the temporal changes and relations with the background. We further develop the search framework based on differentiable architecture search to search an adaptive structure for each video, which enables adaptive interaction modeling for different videos.
+
+# 3.1. Overall Framework
+
+We first present our overall framework for interaction recognition in Figure 2. Given a video, we sample some frames as the input to our model. We extract basic features of the sampled frames with a backbone CNN. At the same time, we extract class-agnostic RoIs for each frame with a Region Proposal Network (RPN) [13]. Then we apply RoIAlign [13] to obtain features for each RoI. All the RoIs together constitute the graph for relation modeling. The nodes are exactly the RoIs, and edges are defined depending on the specific graph operations introduced in Section 3.2, in which different graph operations would indicate different connections and result in different edge weights. To obtain adaptive network structures, we employ a differentiable architecture search mechanism to search adaptive structures in which graph operations are combined hierarchically. The interactions are modeled with the searched structures by transforming the node features with the selected graph operations. Finally, the output node features are pooled into a video representation for interaction classification.
+
+In the following subsections, we describe the search space with basic graph operations and the architecture search framework in details.
+
+# 3.2. Search Space with Graph Operations
+
+To search the network structures, we firstly need to construct a search space. We search for a computation cell to construct the network structures, as illustrated in Figure 2. A computation cell is a directed acyclic computation graph with $N$ ordered supernodes ("supernode" is renamed from "node" to avoid confusion with the nodes in the graphs constructed from RoIs). Each supernode contains all the nodes and each superedge indicates the candidate graph operations transforming the node features. In the computation cell, the input supernode is the output of the previous one, and the output is the channel-wise concatenated node features of all the intermediate supernodes.
+
+Each intermediate supernode can be obtained by summing all the transformed predecessors (the ordering is denoted as "N-1", "N-2", "N-3" in Figure 2) as follows,
+
+$$
+\boldsymbol {X} ^ {(j)} = \sum_ {i < j} o ^ {i j} \left(\boldsymbol {X} ^ {(i)}\right), \tag {1}
+$$
+
+where $X^{(i)}, X^{(j)}$ are the node features of the $i$ -th and $j$ -th supernode, and $o^{ij}$ is the operation on superedge $(i,j)$ . Thus the learning of the cell structure reduces to learning the operations on each superedge, so we design the candidate operations in the following.
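+
+As a minimal illustration of Eq. (1), the sketch below evaluates one cell given the operation already selected for each superedge; the dict-of-callables layout and all names are our own, and the differentiable selection of operations during search is not shown here.
+
+```python
+import numpy as np
+
+def run_cell(X_in, ops):
+    """Evaluate one computation cell: supernode 0 is the cell input, each
+    intermediate supernode sums the transformed predecessors (Eq. (1)), and
+    the cell output concatenates all intermediate supernodes channel-wise.
+    ops[(i, j)] is the graph operation selected for superedge (i, j)."""
+    num_supernodes = 1 + max(j for (_, j) in ops)
+    states = [X_in]                                         # supernode 0
+    for j in range(1, num_supernodes):
+        states.append(sum(ops[(i, j)](states[i]) for i in range(j)))
+    return np.concatenate(states[1:], axis=-1)              # concat intermediates
+
+# Toy usage with placeholder operations on (num_nodes, channels) node features.
+X = np.random.randn(30, 64)
+ops = {(0, 1): lambda x: x, (0, 2): lambda x: 0.5 * x, (1, 2): lambda x: x}
+Y = run_cell(X, ops)                                        # (30, 128)
+```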
+
+We design the basic operations based on graph for explicit relation modeling. In addition to graph convolution, we propose several new operations, i.e. difference propagation, temporal convolution, background incorporation and node attention, which explicitly model different relations in videos and serve as basic operations in the search space.
+
+# 3.2.1 Feature Aggregation
+
+Graph convolution network (GCN) [17] is commonly used to model relations. It employs feature aggregation for relation reasoning, in which each node aggregates features from its neighboring nodes as follows,
+
+$$
+\boldsymbol {z} _ {i} = \delta \left(\sum_ {j} a _ {i j} ^ {f} \cdot \boldsymbol {W} _ {f} \boldsymbol {x} _ {j}\right), \tag {2}
+$$
+
+where $\pmb{x}_j\in \mathbb{R}^{C_{in}}$ is the feature of node- $j$ with $C_{in}$ dimensions, $\pmb {W}_f\in \mathbb{R}^{C_{out}\times C_{in}}$ is the feature transform matrix applied to each node, $a_{ij}^{f} = \pmb{x}_{i}^{\mathrm{T}}\pmb{U}_{f}\pmb{x}_{j}$ is the affinity between node- $i$ and node- $j$ with learnable weights $\pmb{U}_{f}$ , $\delta$ is a nonlinear activation function and the $\pmb{z}_i\in \mathbb{R}^{C_{out}}$ is the updated feature of node- $i$ with $C_{out}$ dimensions. Through information aggregation on the graph, each node enhances its features by modeling the dependencies between nodes.
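+
+A minimal NumPy sketch of this feature aggregation operation follows. The row-wise softmax normalisation of the affinities and all variable names are our own assumptions; the text only specifies the bilinear affinity $a_{ij}^{f}$ and the aggregation itself.
+
+```python
+import numpy as np
+
+def feature_aggregation(X, U_f, W_f):
+    """Eq. (2): aggregate transformed neighbour features with learned
+    affinities a_ij = x_i^T U_f x_j; ReLU plays the role of delta."""
+    A = X @ U_f @ X.T                               # (n, n) affinities
+    A = np.exp(A - A.max(axis=1, keepdims=True))
+    A = A / A.sum(axis=1, keepdims=True)            # row-normalise (assumption)
+    Z = A @ (X @ W_f.T)                             # sum_j a_ij * W_f x_j
+    return np.maximum(0, Z)                         # (n, C_out)
+
+X = np.random.randn(30, 256)                        # 30 RoI nodes, 256-d features
+U_f = np.random.randn(256, 256) * 0.01
+W_f = np.random.randn(128, 256) * 0.01
+Z = feature_aggregation(X, U_f, W_f)                # (30, 128)
+```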
+
+# 3.2.2 Difference Propagation
+
+In videos, the differences between objects are important for recognizing interactions. However, GCN only aggregates features with a weighted sum, which makes it hard to explicitly capture these differences. Therefore, we design a difference propagation operation to explicitly model the differences.
+
+By slightly modifying Equation (2), the differences can be explicitly modeled as follows,
+
+$$
+\boldsymbol {z} _ {i} = \delta \left(\sum_ {j, j \neq i} a _ {i j} ^ {d} \cdot \boldsymbol {W} _ {d} \left(\boldsymbol {x} _ {i} - \boldsymbol {x} _ {j}\right)\right), \tag {3}
+$$
+
+where the symbols share similar meanings with those in Equation (2). The term $(x_{i} - x_{j})$ in Equation (3) explicitly models the differences between node- $i$ and node- $j$ , and these differences are then propagated on the graph, as shown in Figure 3(a). Difference propagation focuses on the differences between nodes to model the changes or differences of objects, which benefits the recognition of interactions involving such changes or differences.
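+A minimal NumPy sketch of Equation (3) follows, again with ReLU standing in for $\delta$; the einsum-based pairwise differences are just one way to write the sum over $j \neq i$.
+
+```python
+# Minimal sketch of difference propagation (Eq. 3).
+import numpy as np
+
+def difference_propagation(X, W_d, U_d):
+    """X: (N, C_in) node features; W_d: (C_out, C_in); U_d: (C_in, C_in)."""
+    A = X @ U_d @ X.T                          # a^d_ij = x_i^T U_d x_j
+    np.fill_diagonal(A, 0.0)                   # Eq. 3 sums over j != i
+    diffs = X[:, None, :] - X[None, :, :]      # diffs[i, j] = x_i - x_j, shape (N, N, C_in)
+    Z = np.einsum('ij,ijc->ic', A, diffs) @ W_d.T
+    return np.maximum(Z, 0.0)
+```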
+
+# 3.2.3 Temporal Convolution
+
+Nodes in videos are inherently in temporal order. However, both feature aggregation and difference propagation model the features in an unordered manner and ignore the temporal relations. Here we employ temporal convolution to explicitly learn temporal representations.
+
+In temporal convolution, we first obtain node sequences in temporal order. Given node- $i$ in the $t$ -th frame, we find its nearest node (not required to represent the same object) in each frame, measured by the inner product of node features, and arrange these nodes in temporal order to form a sequence,
+
+$$
+\boldsymbol {X} _ {i} = \left[ \boldsymbol {x} _ {i} ^ {0}, \dots , \boldsymbol {x} _ {i} ^ {t}, \dots , \boldsymbol {x} _ {i} ^ {T - 1} \right], \tag {4}
+$$
+
+Figure 3. Illustration of proposed graph operations. (a) Difference Propagation, each node propagates the differences to its neighboring nodes. (b) Temporal Convolution, each node learns temporal features with convolution over node sequences along the video. (c) Background Incorporation, each node aggregates the relations with the background. (d) Node Attention, each node learns attention weights to indicate its importance.
+
+where $\boldsymbol{x}_i^0, \dots, \boldsymbol{x}_i^{T-1}$ denote the nearest nodes in frame $0, \dots, T-1$ with reference to the given node $\boldsymbol{x}_i^t$ .
+
+Then we conduct temporal convolutions over the node sequence as shown in Figure 3(b),
+
+$$
+\boldsymbol {z} _ {i} = \delta \left(\boldsymbol {W} _ {t} * \boldsymbol {X} _ {i}\right), \tag {5}
+$$
+
+where $*$ denotes temporal convolution and $W_{t}$ is the convolution kernel. The temporal convolution explicitly learns the temporal representations to model the significant appearance changes of the node sequence, which is essential for identifying interactions with temporal relations.
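+The sketch below illustrates Equations (4) and (5) for a single reference node, assuming a 'valid'-padded 1-D convolution and ReLU; the kernel layout is an assumption for illustration.
+
+```python
+# Minimal sketch of temporal convolution (Eq. 4-5) for one reference node.
+import numpy as np
+
+def temporal_convolution(x_ref, nodes_per_frame, W_t):
+    """x_ref: (C,) reference node feature.
+    nodes_per_frame: list of T arrays, each (N_t, C), the nodes of each frame.
+    W_t: (C_out, K, C) temporal convolution kernel of width K."""
+    # Eq. 4: nearest node in each frame by inner-product similarity
+    seq = np.stack([frame[np.argmax(frame @ x_ref)] for frame in nodes_per_frame])
+    T, C = seq.shape
+    C_out, K, _ = W_t.shape
+    # Eq. 5: 1-D convolution along time ('valid' padding for brevity)
+    out = np.stack([np.tensordot(W_t, seq[t:t + K], axes=([1, 2], [0, 1]))
+                    for t in range(T - K + 1)])
+    return np.maximum(out, 0.0)
+```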
+
+# 3.2.4 Background Incorporation
+
+The node features derived from RoIAlign exclude the background information. However, the background is useful since objects often interact with it. This inspires us to design the background incorporation operation.
+
+In each frame, the detected objects have different affinities with different regions in the background, as illustrated in Figure 3(c). Denote the feature of node- $i$ in the $t$ -th frame as $\pmb{x}_i^t \in \mathbb{R}^{C_{in}}$ and the background feature map corresponding to the $t$ -th frame as $\pmb{y}^t \in \mathbb{R}^{h \times w \times C_{in}}$ . The affinity between $\pmb{x}_i^t$ and $\pmb{y}_j^t$ ( $j = 1, \dots, h \times w$ ) can be calculated as $a_{ij}^b = \pmb{x}_i^{t^\top} \pmb{U}_b \pmb{y}_j^t$ with learnable $\pmb{U}_b$ . The $a_{ij}^b$ indicates the relations between the node and the background with spatial structure, which could be transformed into node features,
+
+$$
+\boldsymbol {z} _ {i} ^ {r} = \boldsymbol {V} _ {b} \boldsymbol {a} _ {i} ^ {b}, \tag {6}
+$$
+
+where $\pmb{a}_{i}^{b} = [a_{i1}^{b};a_{i2}^{b};\dots ;a_{i(h\cdot w)}^{b}] \in \mathbb{R}^{h\cdot w}$ is the affinity vector, and $\pmb{V}_b \in \mathbb{R}^{C_{out} \times (h\cdot w)}$ is the transform matrix transforming the affinity vector into node features.
+
+In addition, the background features can be aggregated according to the affinity $a_{ij}^{b}$ to model the dependencies between detected objects and the background,
+
+$$
+\boldsymbol {z} _ {i} ^ {a} = \sum_ {j = 1, \dots , h \times w} a _ {i j} ^ {b} \cdot \boldsymbol {W} _ {b} \boldsymbol {y} _ {j}. \tag {7}
+$$
+
+Finally, the updated node features are the combination of the two features above followed by a nonlinear activation,
+
+$$
+\boldsymbol {z} _ {i} = \delta \left(\boldsymbol {z} _ {i} ^ {r} + \boldsymbol {z} _ {i} ^ {a}\right). \tag {8}
+$$
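+The following sketch ties Equations (6)-(8) together for a single node and frame, with the background feature map flattened to $h \cdot w$ locations; shapes and names are illustrative.
+
+```python
+# Minimal sketch of background incorporation (Eq. 6-8) for one node.
+import numpy as np
+
+def background_incorporation(x, y_flat, U_b, V_b, W_b):
+    """x: (C_in,) node feature; y_flat: (h*w, C_in) background feature map;
+    U_b: (C_in, C_in); V_b: (C_out, h*w); W_b: (C_out, C_in)."""
+    a = y_flat @ (U_b.T @ x)                            # a^b_ij = x^T U_b y_j, shape (h*w,)
+    z_r = V_b @ a                                       # Eq. 6: affinities -> node feature
+    z_a = (a[:, None] * (y_flat @ W_b.T)).sum(axis=0)   # Eq. 7: weighted background aggregation
+    return np.maximum(z_r + z_a, 0.0)                   # Eq. 8
+```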
+
+# 3.2.5 Node Attention
+
+The graph contains hundreds of nodes, but they contribute differently to recognizing interactions. Some nodes irrelevant to the interaction serve as outliers that interfere with the interaction modeling, so it is reasonable to weaken the outliers with an attention scheme.
+
+The outliers are often nodes wrongly detected by the RPN; they usually have few similar nodes, and their similar nodes are not regularly located at specific regions or along the video, as briefly illustrated in Figure 3(d). Therefore, we calculate the attention weights according to the similarities and relative positions to the top- $M$ similar nodes.
+
+$$
+\boldsymbol {z} _ {i} = w _ {i} \cdot \boldsymbol {x} _ {i},
+$$
+
+$$
+w _ {i} = \sigma \left(\boldsymbol {W} _ {n} \left[ \boldsymbol {a} _ {i} ^ {n}; \Delta \boldsymbol {s} _ {i} \right]\right),
+$$
+
+$$
+\boldsymbol {a} _ {i} ^ {n} = \left[ \begin{array}{l} a _ {i j _ {1}} ^ {n}; a _ {i j _ {2}} ^ {n}; \dots ; a _ {i j _ {M}} ^ {n} \end{array} \right], \tag {9}
+$$
+
+$$
+\Delta \boldsymbol {s} _ {i} = \left[ \begin{array}{c} \boldsymbol {s} _ {i} - \boldsymbol {s} _ {j _ {1}} \\ \boldsymbol {s} _ {i} - \boldsymbol {s} _ {j _ {2}} \\ \dots \\ \boldsymbol {s} _ {i} - \boldsymbol {s} _ {j _ {M}} \end{array} \right],
+$$
+
+where $w_{i}$ is the attention weight of $\pmb{x}_i$, which is calculated from the similarity vector $\pmb{a}_i^n$ and the relative positions $\Delta s_i$; $\sigma$ is the sigmoid function; $j_m$ is the index of node- $i$ 's $m$ -th most similar node measured by inner product; $a_{ij_m}^n$ is the inner product of the node features of node- $i$ and node- $j_m$; and $\pmb{s}_i = [x_i; y_i; t_i]$ denotes the normalized spatial and temporal positions of node- $i$ . With the attention weights, we are able to focus on informative nodes and neglect the outliers.
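+A minimal sketch of the node attention in Equation (9) is given below; the loop over nodes and the choice of $M$ are purely illustrative.
+
+```python
+# Minimal sketch of node attention (Eq. 9): re-weight each node by a sigmoid
+# score computed from similarities and relative positions to its top-M peers.
+import numpy as np
+
+def node_attention(X, S, W_n, M=5):
+    """X: (N, C) node features; S: (N, 3) normalized (x, y, t) positions;
+    W_n: (4*M,) weights mapping [a_i^n; delta_s_i] to a scalar."""
+    sims = X @ X.T
+    np.fill_diagonal(sims, -np.inf)                  # exclude the node itself
+    Z = np.empty_like(X)
+    for i in range(X.shape[0]):
+        top = np.argsort(sims[i])[::-1][:M]          # indices j_1, ..., j_M
+        feats = np.concatenate([sims[i, top], (S[i] - S[top]).reshape(-1)])
+        w_i = 1.0 / (1.0 + np.exp(-W_n @ feats))     # sigmoid attention weight
+        Z[i] = w_i * X[i]
+    return Z
+```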
+
+The graph operations above explicitly capture different relations in videos and serve as the basic operations in the architecture search space, which facilitates structure search in Section 3.3.
+
+# 3.3. Searching Adaptive Structures
+
+With the constructed search space, we are able to search adaptive structures for interaction modeling. We employ the differentiable architecture search mechanism in DARTS [21] to develop our search framework, and revise the learning of operation weights to facilitate the search of adaptive interaction modeling structures.
+
+DARTS. DARTS utilizes continuous relaxation to learn the specific operations ( $o^{ij}$ in Equation (1)) on the superedges. The softmax combination of all the candidate operations is calculated as the representation of each supernode,
+
+$$
+\bar {\sigma} ^ {i j} \left(\boldsymbol {X} ^ {(i)}\right) = \sum_ {o \in \mathbb {O}} \frac {\exp \left(\alpha_ {o} ^ {i j}\right)}{\sum_ {o ^ {\prime} \in \mathbb {O}} \exp \left(\alpha_ {o ^ {\prime}} ^ {i j}\right)} o \left(\boldsymbol {X} ^ {(i)}\right), \tag {10}
+$$
+
+where $\mathbb{O}$ is the set of candidate operations, $o$ represents a specific operation, $\alpha_{o}^{ij}$ is the weight of operation $o$ on superedge $(i,j)$ , and $\bar{\sigma}^{ij}(\pmb{X}^{(i)})$ is the mixed output. In this way, the cell structure learning reduces to the learning of the operation weights $\alpha_{o}^{ij}$ .
+
+To derive the discrete structure after the search procedure converges, the operation with the strongest weight is selected as the final operation on superedge $(i,j)$ ,
+
+$$
+o ^ {i j} = \underset {o \in \mathbb {O}} {\arg \max } \alpha_ {o} ^ {i j}. \tag {11}
+$$
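+The two equations above amount to a softmax-weighted mixture during search and an argmax at the end; a minimal sketch for one superedge, with illustrative names, is:
+
+```python
+# Minimal sketch of continuous relaxation (Eq. 10) and discretization (Eq. 11).
+import numpy as np
+
+def mixed_op(X, candidate_ops, alpha):
+    """candidate_ops: list of callables; alpha: (len(candidate_ops),) weights."""
+    w = np.exp(alpha - alpha.max())
+    w = w / w.sum()                                  # softmax over operation weights
+    return sum(w_k * op(X) for w_k, op in zip(w, candidate_ops))   # Eq. 10
+
+def discretize(candidate_ops, alpha):
+    return candidate_ops[int(np.argmax(alpha))]      # Eq. 11: keep the strongest operation
+```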
+
+Adaptive Structures. Since the interactions differ from video to video, we attempt to learn adaptive structures for automatic interaction modeling. However, the operation weights $\alpha_{o}^{ij}$ in Equation (10) are non-adaptive. Therefore, we make $\alpha_{o}^{ij}$ adaptive by connecting them to the input video through a fully-connected (FC) layer,
+
+$$
+\alpha_ {o} ^ {i j} = \boldsymbol {A} _ {o} ^ {i j} \boldsymbol {X}, \tag {12}
+$$
+
+in which $\mathbf{X}$ is the global feature of the input video (global average pooling of the backbone feature) and $A_{o}^{ij}$ are the learnable structure weights corresponding to operation $o$ on superedge $(i,j)$ . In this way, adaptive structures are constructed for different videos to model the interactions.
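+Concretely, Equation (12) is a per-superedge linear map from the pooled video feature to the operation weights; the sketch below assumes the backbone feature is a (T, H, W, C) tensor and the weights are stored per superedge.
+
+```python
+# Minimal sketch of adaptive operation weights (Eq. 12).
+import numpy as np
+
+def adaptive_operation_weights(backbone_feat, A):
+    """backbone_feat: (T, H, W, C) backbone feature of one video.
+    A: dict mapping superedge (i, j) -> (num_ops, C) FC weights."""
+    x_global = backbone_feat.mean(axis=(0, 1, 2))    # global average pooling, shape (C,)
+    return {edge: A_edge @ x_global for edge, A_edge in A.items()}   # alpha_o^{ij} per edge
+```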
+
+Unlike DARTS, which alternately optimizes the model on the training and validation sets to approximate the architecture gradients, we jointly optimize the structure weights and the weights of all graph operations on the training set to learn adaptive structures.
+
+Fixing Substructures. It is time consuming to search stable structures with too many candidate operations. We attempt to reduce the number of basic operations by combining several operations into fixed substructures and regarding the fixed substructures as basic operations in the search space. For example, we connect feature aggregation and node attention sequentially into a fixed combination, and put it after the other 3 graph operations to construct 3 fixed substructures for search (as shown on the superedges in Figure 4).
+
+By this means, we accelerate search by simplifying the search space and also deepen the structures because each superedge contains multiple graph operations.
+
+Diversity Regularization. We find that the search framework easily selects only one or two operations to construct structures, because these operations are easier to optimize. However, other operations are also effective for interaction modeling, so we hope to keep more operations activated in the searched structures. We introduce the variance of the operation weights as an auxiliary loss to encourage all the operations to be selected equally,
+
+$$
+L _ {v a r} = \frac {1}{| \mathbb {O} | - 1} \sum_ {o \in \mathbb {O}} \left(\alpha_ {o} - \bar {\alpha}\right) ^ {2}, \tag {13}
+$$
+
+where $\alpha_{o} = \sum_{(i,j)}\alpha_{o}^{ij}$ and $\bar{\alpha}$ is the mean of the $\alpha_{o}$'s. The variance loss is added to the classification loss for optimization.
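+A minimal sketch of this regularizer, assuming the operation weights of a cell are collected in a matrix, is shown below; the 0.1 weighting follows Sec. 4.2.
+
+```python
+# Minimal sketch of the diversity regularization (Eq. 13).
+import numpy as np
+
+def diversity_loss(alpha):
+    """alpha: (num_superedges, num_ops) operation weights of one cell."""
+    per_op = alpha.sum(axis=0)       # alpha_o = sum over superedges (i, j)
+    return per_op.var(ddof=1)        # unbiased variance, i.e. divide by |O| - 1
+
+# total_loss = classification_loss + 0.1 * diversity_loss(alpha)
+```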
+
+# 4. Experiments
+
+# 4.1. Datasets
+
+We conduct experiments on two large interaction datasets, Something-Something-V1 (Sth-V1) and Something-Something-V2 (Sth-V2) [10] (see Figures 7 and 8 for some example frames). Sth-V1 contains 108,499 short videos across 174 categories. Recognizing them requires interaction reasoning and common-sense understanding. Sth-V2 is an extended version of Sth-V1 that reduces the label noise.
+
+# 4.2. Implementation Details
+
+During training, we employ stagewise training of the backbone and the graph operation search for easier convergence. We alternately optimize the weights in all graph operations and the structure weights ($A_{o}^{ij}$ in Equation (12)) to search adaptive structures.
+
+In the structures search stage, we include the zero and identity as additional candidate operations. Following [6], we add dropout after identity to avoid its domination in the searched structures. We use 3 intermediate supernodes in each computation cell. The weight for auxiliary variance loss $L_{var}$ (Equation (13)) is set to 0.1.
+
+More details about the model, training procedure and data augmentation are included in supplementary materials.
+
+# 4.3. Analysis of Architecture Search Framework
+
+In this section, we analyze our architecture search framework. First we compare the interaction recognition accuracy of our searched structures with our baselines, and the results are shown in Table 1. It is observed that our searched structures obtain about $3\%$ improvements over the baselines, i.e. global pooling (global average pooling of the backbone feature) and pooling over RoIs (average pooling over all the RoI features), indicating that the searched structures are effective to model interactions and improve recognition performance. In the following, we show the searched structures and analyze the effects of adaptive structures.
+
+| Search schemes | V1 Val Acc$^1$ | V2 Val Acc$^1$ |
+| global pooling | 48.1 | 60.3 |
+| pooling over RoIs | 48.3 | 60.3 |
+| non-adaptive (only testing)$^2$ | 50.2 | 62.4 |
+| non-adaptive (training and testing)$^3$ | 50.8 | 63.1 |
+| adaptive | 51.4 | 63.5 |
+
+1 Something-Something-V1 validation set and Something-Something-V2 validation set.
+2 Only one searched structure (corresponding to most training videos) is used for testing.
+3 The structure is non-adaptive both in training and testing.
+
+Table 1. Interaction recognition accuracy (%) comparison of different search schemes.
+
+
+Figure 4. Two example videos and their corresponding structures. In the figure, "feat_aggr", "diff_prop", "temp_conv", "back_incor", "node_att" represent feature aggregation, difference propagation, temporal convolution, background incorporation and node attention, respectively.
+
+# 4.3.1 Searched Structures
+
+Figure 4 shows two examples of input videos and the corresponding searched structures. From the searched structures we observe that our architecture search framework learns adaptive structures for different input videos. The main differences between the two structures are the superedges entering "N-3", where case 1 learns a simple structure but case 2 selects a complicated structure with more graph operations. Perhaps case 2 is easily confused with other interactions and requires a complicated structure to capture detailed relations for effective interaction modeling.
+
+Mismatch of videos and structures. To validate the specificity of adaptive structures, we swap the two searched structures in Figure 4 to mismatch the input videos, and use them to recognize the interactions. The results are compared in Figure 5. We observe that the mismatch of videos and structures leads to misclassification, which reveals that different videos require different structures for effective interaction modeling, since different interactions of different complexities are involved.
+
+# 4.3.2 Analysis of Adaptive Structures
+
+To understand the relations between the adaptive structures and the interaction categories, we statistically analyze the proportion of videos per class corresponding to the different searched structures on the validation set. Figure 6 compares the results of two searched structures, indicated with different colors. We observe that the searched structures are strongly correlated with the interaction categories, where each structure corresponds to some specific interaction categories. For example, in the Something-Something-V1 dataset, the structure indicated with orange bars mainly corresponds to the interactions with indexes $\{2, 4, 6, 12, 15, et al.\}$, which are about the motions of the camera, while the structure indicated with blue bars includes the interactions about moving/pushing objects (with indexes $\{8, 26, 29, 30, 41, et al.\}$). This reveals that our architecture search framework learns to roughly divide the videos into several groups according to some characteristics of the interactions, and searches specialized structures for the different groups for adaptive interaction modeling. In other words, the adaptive structures automatically model interactions in a coarse (groups) to fine (specialized structure for each group) manner.
+
+Figure 5. Top 5 classification score comparison of match and mismatch of videos and structures. (a) and (b) show the results of the two cases in Figure 4. The red bars indicate the groundtruth categories.
+
+Figure 6. The proportion of videos per class corresponding to different structures. (a) Something-Something-V1 and (b) Something-Something-V2. The bars with different colors indicate different structures.
+
+We further quantitatively compare the interaction recognition accuracy of the non-adaptive and adaptive search schemes in Table 1. We make the following observations. On the one hand, the adaptive scheme achieves better performance than the non-adaptive schemes. On the other hand, using only one searched structure for testing leads to an obvious performance degradation, since different structures are searched to match different groups during training but only one structure is used for testing, which is insufficient to model the interactions in all groups. These observations further indicate the effectiveness of the adaptive structures.
+
+| Operations | V1 Val Acc | V2 Val Acc |
+| global pooling | 48.1 | 60.3 |
+| pooling over RoIs | 48.3 | 60.3 |
+| feature aggregation | 49.9 | 62.0 |
+| difference propagation | 49.5 | 61.8 |
+| temporal convolution | 48.7 | 61.0 |
+| background incorporation | 49.7 | 62.4 |
+| node attention | 49.8 | 61.8 |
+
+Table 2. Interaction recognition accuracy $(\%)$ comparison of different graph operations.
+
+We also validate that learning with fixed substructures gains slight improvements, diversity regularization helps to learn structures with multiple operations, and the adaptive structures can transfer across datasets. For more details, please refer to our supplementary materials.
+
+# 4.4. Analysis of Graph Operations
+
+In this section, we analyze the role of each graph operation in interaction modeling. First, we compare the recognition accuracy of the different operations by placing each of them on top of the backbone; the results are shown in Table 2. All the operations improve the performance over the baselines, indicating that explicitly modeling the relations with graph operations benefits interaction recognition. Different graph operations gain different improvements, depending on the significance of the corresponding relations in the datasets. In the following, we visualize some nodes and cases to demonstrate the different effects of different graph operations in interaction modeling.
+
+Top activated nodes. We visualize the nodes with top affinity values of some operations for the same video in Figure 7. The feature aggregation focuses on the apparently similar nodes to model the dependencies among them as shown in Figure 7(a). On the contrary, the difference propagation models the significant changes of some obviously different nodes in Figure 7(b). In Figure 7(c), the nodes with high attention weights are the hand or the bag, and the nodes with low attention weights are some outliers, which indicates that the node attention helps to concentrate on important nodes and eliminate the interference of outliers.
+
+Successful and failed cases. We show some successful and failed cases to illustrate the effects of different operations in Figure 8. In Figure 8(a), feature aggregation successfully recognizes the interaction due to the obvious dependencies between the paper and the mug. However, it fails in Figure 8(b) and 8(c), where more detailed relations are required. In Figure 8(b), difference propagation and temporal convolution can capture that the lid is rotating, so they correctly recognize the interaction. In Figure 8(c), background incorporation is able to capture the relations between the towel and the water in the background, so it makes the correct prediction, whereas the other operations, which ignore the background information, can hardly recognize such an interaction with the background.
+
+Figure 7. Top activated nodes of different operations on the same interaction "Pulling something out of something": (a) Feature Aggregation, (b) Difference Propagation, (c) Node Attention. In (a) and (b), the red node is the reference node and the blue nodes are the top activated nodes. In (c), the red nodes have the highest attention weights while the blue ones have the lowest attention weights.
+
+Figure 8. Successful and failed cases of different graph operations: (a) Stuffing something into something, (b) Twisting something, (c) Twisting something wet until water comes out. The green bounding boxes are RoIs extracted from the RPN.
+
+More case studies and analysis of the graph operations are included in the supplementary materials.
+
+# 4.5. Comparison with State-of-the-arts
+
+We compare the interaction recognition accuracy with recent state-of-the-art methods; the results are shown in Table 3. Except for STM [16], our method outperforms all other methods, which indicates its effectiveness. We model the interactions with adaptive structures, which enhances the ability of interaction modeling and boosts the performance.
+
+Among the recent state-of-the-art methods, I3D+GCN [35] also uses graph operations over object proposals to recognize interactions. Our method surpasses it by a margin of about $7\%$, perhaps because we have trained a better backbone with our data augmentation techniques (see Section 4.2 for details), and our adaptive structures with multiple graph operations learn better interaction representations.
+
+| Methods | V1 Val Acc | V2 Val Acc |
+| I3D+GCN [35] (ECCV'18) | 43.3 | - |
+| NonLocal3D+GCN [35] (ECCV'18) | 46.1 | - |
+| CPNet [22] (CVPR'19) | - | 57.6 |
+| TSM [19] (ICCV'19) | 44.8$^1$ | 58.7$^1$ |
+| ECO [41] (ECCV'18) | 46.4 | - |
+| TrajectoryNet [39] (NeurIPS'18) | 47.8 | - |
+| S3D [38] (ECCV'18) | 48.2 | - |
+| ir-CSN-152 [30] (ICCV'19) | 48.4 | - |
+| GST [23] (ICCV'19) | 48.6 | 62.6 |
+| discriminative filters [24] (ICCV'19) | 50.1$^2$ | - |
+| STM [16] (ICCV'19) | 50.7 | 64.2 |
+| adaptive structures search (Ours) | 51.4 | 63.5 |
+
+1 Only RGB results are reported for fair comparison.
+2 Only the results with the same backbone (ResNet50) as ours are reported.
+
+Table 3. Interaction recognition accuracy $(\%)$ comparison with state-of-the-art methods.
+
+STM [16] proposes a block to encode spatiotemporal and motion features, and stacks it into a deep network, which obtains better performance on the Something-Something-V2 dataset than ours. However, we adaptively model interactions with different structures, which provides more understanding about the relations between the interactions and the corresponding structures, rather than only encoding features as in STM. In addition, our structures are automatically searched, which reduces the structure design effort.
+
+# 5. Conclusion
+
+In this paper, we propose to automatically search adaptive network structures for interaction recognition, which enables adaptive interaction modeling and reduces the structure design effort. We design the search space with several proposed graph operations, and employ a differentiable architecture search mechanism to search adaptive interaction modeling structures. Our experiments show that the architecture search framework learns adaptive structures for different videos, helping us understand the relations between structures and interactions. In addition, the designed basic graph operations model different relations in videos. The searched adaptive structures obtain interaction recognition performance competitive with the state of the art.
+
+# Acknowledgement
+
+This work was supported partially by the National Key Research and Development Program of China (2018YFB1004903), NSFC(U1911401,U1811461), Guangdong Province Science and Technology Innovation Leading Talents (2016TX03X157), Guangdong NSF Project (No. 2018B030312002), Guangzhou Research Project (201902010037), and Research Projects of Zhejiang Lab (No. 2019KD0AB03). The principal investigator for this work is Wei-Shi Zheng.
+
+# References
+
+[1] Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Simaan. Graph convolutional encoders for syntax-aware neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1957-1967, 2017.
+[2] Daniel Beck, Gholamreza Haffari, and Trevor Cohn. Graph-to-sequence learning using gated graph neural networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 273-283, 2018.
+[3] Gedas Bertasius, Lorenzo Torresani, Stella X Yu, and Jianbo Shi. Convolutional random walk networks for semantic image segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 858-866, 2017.
+[4] J. Carreira and A. Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4724-4733, 2017.
+[5] Siddhartha Chandra, Nicolas Usunier, and Iasonas Kokkinos. Dense and low-rank gaussian crfs using deep embeddings. In The IEEE International Conference on Computer Vision (ICCV), pages 5103-5112, 2017.
+[6] Xin Chen, Lingxi Xie, Jun Wu, and Qi Tian. Progressive differentiable architecture search: Bridging the depth gap between search and evaluation. arXiv preprint arXiv:1904.12760, 2019.
+[7] Yunpeng Chen, Marcus Rohrbach, Zhicheng Yan, Yan Shuicheng, Jiashi Feng, and Yannis Kalantidis. Graph-based global reasoning networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 433-442, 2019.
+[8] C. Feichtenhofer, A. Pinz, and A. Zisserman. Convolutional two-stream network fusion for video action recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1933-1941, 2016.
+[9] Georgia Gkioxari, Ross Girshick, Piotr Dollár, and Kaiming He. Detecting and recognizing human-object interactions. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 8359-8367, 2018.
+[10] R. Goyal, S. E. Kahou, V. Michalski, J. Materzynska, S. Westphal, H. Kim, V. Haenel, I. Fruend, P. Yianilos, M. Mueller-Freitag, F. Hoppe, C. Thurau, I. Bax, and R. Memisevic. The "something something" video database for learning and evaluating visual common sense. In *The IEEE International Conference on Computer Vision (ICCV)*, pages 5843-5851, 2017.
+[11] A. Gupta, A. Kembhavi, and L. S. Davis. Observing human-object interactions: Using spatial and functional compatibility for recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(10):1775-1789, 2009.
+[12] David K Hammond, Pierre Vandergheynst, and Rémi Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129-150, 2011.
+
+[13] Kaiming He, Georgia Gkioxari, Piotr Dollar, and Ross Girshick. Mask r-cnn. In The IEEE International Conference on Computer Vision (ICCV), pages 2961-2969, 2017.
+[14] Noureldien Hussein, Efstratios Gavves, and Arnold W.M. Smeulders. Timeception for complex action recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 254-263, 2019.
+[15] S. Ji, W. Xu, M. Yang, and K. Yu. 3d convolutional neural networks for human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):221-231, 2013.
+[16] Boyuan Jiang, MengMeng Wang, Weihao Gan, Wei Wu, and Junjie Yan. Stm: Spatiotemporal and motion encoding for action recognition. In The IEEE International Conference on Computer Vision (ICCV), pages 2000-2009, 2019.
+[17] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017.
+[18] Xiaodan Liang, Zhiting Hu, Hao Zhang, Liang Lin, and Eric P Xing. Symbolic graph reasoning meets convolutions. In Advances in Neural Information Processing Systems, pages 1858-1868, 2018.
+[19] Ji Lin, Chuang Gan, and Song Han. Tsm: Temporal shift module for efficient video understanding. In The IEEE International Conference on Computer Vision (ICCV), pages 7083-7093, 2019.
+[20] Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, and Koray Kavukcuoglu. Hierarchical representations for efficient architecture search. In International Conference on Learning Representations, 2018.
+[21] Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. In International Conference on Learning Representations, 2019.
+[22] Xingyu Liu, Joon-Young Lee, and Hailin Jin. Learning video representations from correspondence proposals. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4273–4281, 2019.
+[23] Chenxu Luo and Alan L. Yuille. Grouped spatial-temporal aggregation for efficient action recognition. In The IEEE International Conference on Computer Vision (ICCV), pages 5512-5521, 2019.
+[24] Brais Martinez, Davide Modolo, Yuanjun Xiong, and Joseph Tighe. Action recognition with spatial-temporal discriminative filter banks. In The IEEE International Conference on Computer Vision (ICCV), pages 5482-5491, 2019.
+[25] Makoto Miwa and Mohit Bansal. End-to-end relation extraction using lstms on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1105-1116, 2016.
+[26] Z. Qiu, T. Yao, and T. Mei. Learning spatio-temporal representation with pseudo-3d residual networks. In The IEEE International Conference on Computer Vision (ICCV), pages 5534–5542, 2017.
+[27] Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V Le. Regularized evolution for image classifier architecture search. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 4780-4789, 2019.
+
+[28] Karen Simonyan and Andrew Zisserman. Two-stream convolutional networks for action recognition in videos. In Advances in Neural Information Processing Systems, pages 568-576, 2014.
+[29] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3d convolutional networks. In The IEEE International Conference on Computer Vision (ICCV), pages 4489-4497, 2015.
+[30] Du Tran, Heng Wang, Lorenzo Torresani, and Matt Feiszli. Video classification with channel-separated convolutional networks. In The IEEE International Conference on Computer Vision (ICCV), pages 5552-5561, 2019.
+[31] Du Tran, Heng Wang, Lorenzo Torresani, Jamie Ray, Yann LeCun, and Manohar Paluri. A closer look at spatiotemporal convolutions for action recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6450-6459, 2018.
+[32] G. Varol, I. Laptev, and C. Schmid. Long-term temporal convolutions for action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(6):1510-1517, 2018.
+[33] L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool. Temporal segment networks for action recognition in videos. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.
+[34] X. Wang, R. Girshick, A. Gupta, and K. He. Non-local neural networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7794-7803, 2018.
+[35] Xiaolong Wang and Abhinav Gupta. Videos as space-time region graphs. In The European Conference on Computer Vision (ECCV), pages 413-431, 2018.
+[36] Xiaolong Wang, Yufei Ye, and Abhinav Gupta. Zero-shot recognition via semantic embeddings and knowledge graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6857-6866, 2018.
+[37] Tete Xiao, Quanfu Fan, Dan Gutfreund, Mathew Monfort, Aude Oliva, and Bolei Zhou. Reasoning about human-object interactions through dual attention networks. In The IEEE International Conference on Computer Vision (ICCV), pages 3919-3928, 2019.
+[38] Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu, and Kevin Murphy. Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. In The European Conference on Computer Vision (ECCV), pages 318–335, 2018.
+[39] Yue Zhao, Yuanjun Xiong, and Dahua Lin. Trajectory convolution for action recognition. In Advances in Neural Information Processing Systems, pages 2204-2215, 2018.
+[40] Bolei Zhou, Alex Andonian, Aude Oliva, and Antonio Torralba. Temporal relational reasoning in videos. In The European Conference on Computer Vision (ECCV), pages 831-846, 2018.
+[41] Mohammadreza Zolfaghari, Kamaljeet Singh, and Thomas Brox. Eco: Efficient convolutional network for online video understanding. In The European Conference on Computer Vision (ECCV), pages 713-730, 2018.
+
+[42] Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. In International Conference on Learning Representations, 2017.
+[43] Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable architectures for scalable image recognition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 8697-8710, 2018.
\ No newline at end of file
diff --git a/adaptiveinteractionmodelingviagraphoperationssearch/images.zip b/adaptiveinteractionmodelingviagraphoperationssearch/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..13508a6d871c8e8f0dc6d062bf6c8423056500f3
--- /dev/null
+++ b/adaptiveinteractionmodelingviagraphoperationssearch/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:be8a2fe2ab83a04b0e1613e8519a09d1d182935096ae50b58be13631a6b1a252
+size 466629
diff --git a/adaptiveinteractionmodelingviagraphoperationssearch/layout.json b/adaptiveinteractionmodelingviagraphoperationssearch/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..2e30652a5ae5028c027d4b6f5a4fc8969e865878
--- /dev/null
+++ b/adaptiveinteractionmodelingviagraphoperationssearch/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e3976f1328ae8430c226cf9a5a7680ca2d76582980d24eec221d5ca236f38f6c
+size 432255
diff --git a/adaptivelossawarequantizationformultibitnetworks/f57faed3-d1bd-4973-83c5-4b3e35208572_content_list.json b/adaptivelossawarequantizationformultibitnetworks/f57faed3-d1bd-4973-83c5-4b3e35208572_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..b3caaa632906ed75f540dd7c62081c5a7f60ffc5
--- /dev/null
+++ b/adaptivelossawarequantizationformultibitnetworks/f57faed3-d1bd-4973-83c5-4b3e35208572_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b9bf7fc7eb4445f901bd042e24810dbb83699ebee8ee17c8bc969ded7b99eb18
+size 90235
diff --git a/adaptivelossawarequantizationformultibitnetworks/f57faed3-d1bd-4973-83c5-4b3e35208572_model.json b/adaptivelossawarequantizationformultibitnetworks/f57faed3-d1bd-4973-83c5-4b3e35208572_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..926008f4d701d781a525f0f375a513771ac2f85e
--- /dev/null
+++ b/adaptivelossawarequantizationformultibitnetworks/f57faed3-d1bd-4973-83c5-4b3e35208572_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5c35fec31ea41c13ba3d0909f5c4ef1bb4b3c3acbc9224317192105620331ac9
+size 110205
diff --git a/adaptivelossawarequantizationformultibitnetworks/f57faed3-d1bd-4973-83c5-4b3e35208572_origin.pdf b/adaptivelossawarequantizationformultibitnetworks/f57faed3-d1bd-4973-83c5-4b3e35208572_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..fc158359dfebb308f7f1b0a4b9e1963db0059006
--- /dev/null
+++ b/adaptivelossawarequantizationformultibitnetworks/f57faed3-d1bd-4973-83c5-4b3e35208572_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:daefe8760dc99b7805fa6a8e8ce583ab35b9cdb6f2da45cd53ed2ee6067a9b17
+size 216284
diff --git a/adaptivelossawarequantizationformultibitnetworks/full.md b/adaptivelossawarequantizationformultibitnetworks/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..a878e0cde9157fbd8196f591df3e332d7af1d19d
--- /dev/null
+++ b/adaptivelossawarequantizationformultibitnetworks/full.md
@@ -0,0 +1,416 @@
+# Adaptive Loss-aware Quantization for Multi-bit Networks
+
+Zhongnan Qu$^1$, Zimu Zhou$^2$, Yun Cheng$^1$, and Lothar Thiele$^1$
+
+$^1$Computer Engineering Group, ETH Zurich, Switzerland
+
+{quiz, chengyu, thiele}@ethz.ch
+
+$^{2}$ School of Information Systems, Singapore Management University, Singapore
+
+zimuzhou@smu.edu.sg
+
+# Abstract
+
+We investigate the compression of deep neural networks by quantizing their weights and activations into multiple binary bases, known as multi-bit networks (MBNs), which accelerate the inference and reduce the storage for the deployment on low-resource mobile and embedded platforms. We propose Adaptive Loss-aware Quantization (ALQ), a new MBN quantization pipeline that is able to achieve an average bitwidth below one-bit without notable loss in inference accuracy. Unlike previous MBN quantization solutions that train a quantizer by minimizing the error to reconstruct full precision weights, ALQ directly minimizes the quantization-induced error on the loss function involving neither gradient approximation nor full precision maintenance. ALQ also exploits strategies including adaptive bitwidth, smooth bitwidth reduction, and iterative trained quantization to allow a smaller network size without loss in accuracy. Experiment results on popular image datasets show that ALQ outperforms state-of-the-art compressed networks in terms of both storage and accuracy.
+
+# 1. Introduction
+
+There is a growing interest to deploy deep neural networks on resource-constrained devices to enable new intelligent services such as mobile assistants, augmented reality, and autonomous cars. However, deep neural networks are notoriously resource-intensive. Their complexity needs to be trimmed down to fit in mobile and embedded devices.
+
+To take advantage of the various pretrained models for efficient inference on resource-constrained devices, it is common to compress the pretrained models via pruning [10], quantization [8, 9, 26, 42, 43], distillation [12], among others. We focus on quantization, especially quantizing both the full precision weights and activations of a deep neural network into binary encodes and the corresponding scaling factors [4, 36], which are also interpreted as binary basis vectors and floating-point coordinates in a geometry viewpoint [9]. Neural networks quantized with binary encodes replace expensive floating-point operations by bitwise operations, which are supported even by microprocessors and often result in small memory footprints [29]. Since the space spanned by only a one-bit binary basis and one coordinate is too sparse to optimize, many researchers suggest a multi-bit network (MBN) [8, 9, 15, 26, 42, 43], which allows obtaining a small size without notable accuracy loss while still leveraging bitwise operations. An MBN is usually obtained via trained quantization. Recent studies [31] leverage bit-packing and bitwise computations for efficiently deploying binary networks on a wide range of general devices, which also provides more flexibility to design multi-bit/binary networks.
+
+Most MBN quantization schemes [8, 9, 15, 26, 42, 43] predetermine a global bitwidth, and learn a quantizer to transform the full precision parameters into binary bases and coordinates such that the quantized models do not incur a significant accuracy loss. However, these approaches have the following drawbacks:
+
+- A global bitwidth may be sub-optimal. Recent studies on fixed-point quantization [18, 25] show that the optimal bitwidth varies across layers.
+- Previous efforts [26, 42, 43] retain inference accuracy by minimizing the weight reconstruction error rather than the loss function. Such an indirect optimization objective may lead to a notable loss in accuracy. Furthermore, they rely on approximated gradients, e.g. straight-through estimators (STE) to propagate gradients through quantization functions during training.
+- Many quantization schemes [36, 43] keep the first and last layer in full precision, because quantizing these layers to low bitwidth tends to dramatically decrease the inference accuracy [41, 28]. However, these two full precision layers can be a significant storage overhead compared to other low-bit layers (see Sec. 5.4.3). Also, floating-point operations in both layers can take up the majority of computation in quantized networks [27].
+
+We overcome the above drawbacks via a novel Adaptive Loss-aware Quantization scheme (ALQ). Instead of using a uniform bitwidth, ALQ assigns a different bitwidth to each group of weights. More importantly, ALQ directly minimizes the loss function w.r.t. the quantized weights, by iteratively learning a quantizer that $(i)$ smoothly reduces the number of binary bases and $(ii)$ alternately optimizes the remaining binary bases and the corresponding coordinates. Although loss-aware quantization has been proposed for binary and ternary networks [14, 13, 46], those schemes are inapplicable to MBNs due to the extended optimization space. They also need approximated gradients during training. ALQ is the first loss-aware quantization scheme for MBNs and eliminates the need for approximating gradients and retaining full precision weights. ALQ is also able to quantize the first and last layers without incurring a notable accuracy loss. The main contributions of this work are as follows.
+
+- We design ALQ, the first loss-aware quantization scheme for multi-bit networks. It is also the first trained quantizer without gradient approximation, and realizes an adaptive bitwidth w.r.t. the loss for MBNs (including the first and last layers).
+- ALQ enables extremely low-bit (yet dense tensor form) binary networks with an average bitwidth below 1-bit. Experiments on CIFAR10 show that ALQ can compress VGG to an average bitwidth of 0.4-bit, while yielding a higher accuracy than other binary networks [36, 4].
+
+# 2. Related Work
+
+ALQ follows the trend to quantize deep neural networks using discrete bases to reduce expensive floating-point operations. Commonly used bases include fixed-point [47], power of two [16, 45], and $\{-1,0, + 1\}$ [4, 36]. We focus on quantization with binary bases i.e. $\{-1, + 1\}$ among others for the following considerations. (i) If both weights and activations are quantized with the same binary basis, it is possible to evaluate 32 multiply-accumulate operations (MACs) with only 3 instructions on a 32-bit microprocessor, i.e. bitwise xnor, popcount, and accumulation. This will significantly speed up the convolution operations [16]. (ii) A network quantized to fixed-point requires specialized integer arithmetic units (with various bitwidth) for efficient computing [1, 18], whereas a network quantized with multiple binary bases adopts the same operations mentioned before as binary networks. Popular networks quantized with binary bases include Binary Networks and Multi-bit Networks.
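+The claim that a 32-element binary MAC reduces to xnor, popcount, and an accumulation can be sketched as follows; the bit-packing convention (bit 1 for $+1$, bit 0 for $-1$) is an assumption for illustration and not tied to any particular library.
+
+```python
+# Minimal sketch of a binary dot product via xnor + popcount + accumulate.
+def binary_dot(a_bits: int, b_bits: int, n: int = 32) -> int:
+    """a_bits, b_bits: n-bit words where bit 1 encodes +1 and bit 0 encodes -1."""
+    agree = bin(~(a_bits ^ b_bits) & ((1 << n) - 1)).count("1")   # xnor, then popcount
+    return 2 * agree - n                                          # accumulate
+
+assert binary_dot(0xFFFFFFFF, 0xFFFFFFFF) == 32    # identical words
+assert binary_dot(0xFFFFFFFF, 0x00000000) == -32   # complementary words
+```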
+
+# 2.1. Quantization for Binary Networks
+
+BNN [4] is the first network with both binarized weights and activations. It dramatically reduces the memory and computation but often with notable accuracy loss. To resume the accuracy degradation from binarization, XNOR-Net [36] introduces a layer-wise full precision scaling factor into BNN. However, XNOR-Net leaves the first and last layers unquantized, which consumes more memory. SYQ [6] studies the efficiency of different structures during binarization/ternarization. LAB [14] is the first loss-aware quantization scheme which optimizes the weights by directly minimizing the loss function.
+
+ALQ is inspired by recent loss-aware binary networks such as LAB [14]. Loss-aware quantization has also been extended to fixed-point networks in [13]. However, existing loss-aware quantization schemes [14, 13] are inapplicable for MBNs. This is because multiple binary bases dramatically extend the optimization space with the same bitwidth (i.e., an optimal set of binary bases rather than a single basis), which may be intractable. Some proposals [14, 13, 46] still require full-precision weights and gradient approximation (backward STE and forward loss-aware projection), introducing undesirable errors when minimizing the loss. In contrast, ALQ is free from gradient approximation.
+
+# 2.2. Quantization for Multi-bit Networks
+
+MBNs denote networks that use multiple binary bases to trade off storage and accuracy. Gong et al. propose a residual quantization process, which greedily searches the next binary basis by minimizing the residual reconstruction error [8]. Guo et al. improve the greedy search with a least squares refinement [9]. Xu et al. [42] separate this search into two alternating steps: fixing the coordinates and exhaustively searching for the optimal bases, then fixing the bases and refining the coordinates using the method in [9]. LQ-Net [43] extends the scheme of [42] with a moving average update, which jointly quantizes weights and activations. However, similar to XNOR-Net [36], LQ-Net [43] does not quantize the first and last layers. ABC-Net [26] leverages the statistical information of all weights to construct the binary bases as a whole for all layers.
+
+All the state-of-the-art MBN quantization schemes minimize the weight reconstruction error rather than the loss function of the network. They also rely on gradient approximations such as STE when back-propagating through the quantization function. In addition, they all predetermine a uniform bitwidth for all parameters. The indirect objective, the approximated gradient, and the global bitwidth lead to a sub-optimal quantization. ALQ is the first scheme to explicitly optimize the loss function and incrementally train an adaptive bitwidth without gradient approximation.
+
+# 3. Adaptive Loss-Aware Quantization
+
+# 3.1. Weight Quantization Overview
+
+Notations. We aim at MBN quantization with an adaptive bitwidth. To allow an adaptive bitwidth, we structure the weights in disjoint groups. Specifically, for the vectorized weights $\pmb{w}$ of a given layer $l$ , where $\pmb{w} \in \mathbb{R}^{N \times 1}$ , we divide $\pmb{w}$ into $G$ disjoint groups. For simplicity, we omit the subscript $l$ . Each group of weights is denoted by $\pmb{w}_g$ , where $\pmb{w}_g \in \mathbb{R}^{n \times 1}$ and $N = n \times G$ . The quantized weights of each group are $\hat{\pmb{w}}_g = \sum_{i=1}^{I_g} \alpha_i \beta_i = B_g \pmb{\alpha}_g$ , where $\beta_i \in \{-1, +1\}^{n \times 1}$ and $\alpha_i \in \mathbb{R}_+$ are the $i^{\text{th}}$ binary basis and the corresponding coordinate, and $I_g$ represents the bitwidth, i.e. the number of binary bases, of group $g$ . $B_g \in \{-1, +1\}^{n \times I_g}$ and $\alpha_g \in \mathbb{R}_+^{I_g \times 1}$ are the matrix forms of the binary bases and the coordinates. We further denote $\alpha$ as the vectorized coordinates $\{\pmb{\alpha}_g\}_{g=1}^G$ , and $\pmb{B}$ as the concatenated binary bases $\{B_g\}_{g=1}^G$ of all weight groups in layer $l$ . A layer $l$ quantized as above yields an average bitwidth $I = \frac{1}{G} \sum_{g=1}^{G} I_g$ . We discuss the choice of the group size $n$ , and the initial $B_g$ , $\alpha_g$ , $I_g$ in Sec. 5.1.
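+The sketch below illustrates this notation: reconstructing a layer from its per-group bases and coordinates and reading off the average bitwidth; names and shapes are illustrative.
+
+```python
+# Minimal sketch of the multi-bit representation: w_hat_g = B_g @ alpha_g.
+import numpy as np
+
+def reconstruct_layer(bases, coords):
+    """bases: list of G arrays, each (n, I_g) in {-1, +1};
+    coords: list of G arrays, each (I_g,) with positive coordinates."""
+    w_hat = np.concatenate([B_g @ a_g for B_g, a_g in zip(bases, coords)])
+    avg_bitwidth = np.mean([a_g.size for a_g in coords])   # I = (1/G) * sum_g I_g
+    return w_hat, avg_bitwidth
+```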
+
+Problem Formulation. ALQ quantizes weights by directly minimizing the loss function rather than the reconstruction error. For layer $l$ , the process can be formulated as the following optimization problem.
+
+$$
+\min _ {\hat {\boldsymbol {w}} _ {g}} \quad \ell (\hat {\boldsymbol {w}} _ {g}) \tag {1}
+$$
+
+$$
+\text {s . t .} \quad \hat {\boldsymbol {w}} _ {g} = \sum_ {i = 1} ^ {I _ {g}} \alpha_ {i} \boldsymbol {\beta} _ {i} = \boldsymbol {B} _ {g} \boldsymbol {\alpha} _ {g} \tag {2}
+$$
+
+$$
+\operatorname {c a r d} (\boldsymbol {\alpha}) = I \times G \leq \operatorname {I} _ {\min } \times G \tag {3}
+$$
+
+where $\ell$ is the loss; $\operatorname{card}(.)$ denotes the cardinality of the set, i.e. the total number of elements in $\alpha$ ; $\mathrm{I}_{\min}$ is the desirable average bitwidth. Since the group size $n$ is the same in one layer, $\operatorname{card}(\alpha)$ is proportional to the storage consumption.
+
+ALQ tries to solve the optimization problem in Eq.(1)-Eq.(3) by iteratively solving two sub-problems as below. The overall pseudocode is illustrated in Alg. 5 in Appendix B.3.
+
+- Step 1: Pruning in $\alpha$ Domain (Sec. 3.2). In this step, we progressively reduce the average bitwidth $I$ for a layer $l$ by pruning the least important (w.r.t. the loss) coordinates in the $\alpha$ domain. Note that removing an element $\alpha_{i}$ will also lead to the removal of the binary basis $\beta_{i}$ , which in effect results in a smaller bitwidth $I_{g}$ for group $g$ . This way, no sparse tensor is introduced. Sparse tensors could lead to detrimental irregular computation. Since the importance of each weight group differs, the resulting $I_{g}$ varies across groups, and thus contributes to an adaptive bitwidth $I_{g}$ for each group. In this step, we only set some elements of $\alpha$ to zero (also removing them from $\alpha$ , leading to a reduced $I_{g}$ ) without changing the others. The optimization problem for Step 1 is:
+
+$$
+\min _ {\boldsymbol {\alpha}} \quad \ell (\boldsymbol {\alpha}) \tag {4}
+$$
+
+$$
+\text {s . t .} \quad \operatorname {c a r d} (\boldsymbol {\alpha}) \leq \mathrm {I} _ {\min } \times G \tag {5}
+$$
+
+- Step 2: Optimizing Binary Bases $B_{g}$ and Coordinates $\alpha_{g}$ (Sec. 3.3). In this step, we retrain the remaining binary bases and coordinates to recover the accuracy degradation induced by the bitwidth reduction. Similar to [42], we take an alternative approach for better accuracy recovery. Specifically, we first search for a new set of binary bases w.r.t. the loss given fixed coordinates. Then we optimize the coordinates by fixing the binary bases. The optimization problem for Step 2 is:
+
+$$
+\min _ {\hat {\boldsymbol {w}} _ {g}} \quad \ell (\hat {\boldsymbol {w}} _ {g}) \tag {6}
+$$
+
+$$
+\text {s . t .} \quad \hat {\boldsymbol {w}} _ {g} = \sum_ {i = 1} ^ {I _ {g}} \alpha_ {i} \beta_ {i} = \boldsymbol {B} _ {g} \boldsymbol {\alpha} _ {g} \tag {7}
+$$
+
+Optimizer Framework. We consider both sub-problems above as an optimization problem with domain constraints, and solve them using the same optimization framework: subgradient methods with projection update [5].
+
+The optimization problem in Eq.(6)-Eq.(7) imposes domain constraints on $B_{g}$ because they can only be discrete binary bases. The optimization problem in Eq.(4)-Eq.(5) can be considered as with a trivial domain constraint: the output $\alpha$ should be a subset (subvector) of the input $\alpha$ . Furthermore, the feasible sets for both $B_{g}$ and $\alpha$ are bounded.
+
+Subgradient methods with projection update are effective to solve problems in the form of $\min_{\boldsymbol{x}}(\ell(\boldsymbol{x}))$ s.t. $\boldsymbol{x} \in \mathbb{X}$ [5]. We apply AMSGrad [37], an adaptive stochastic subgradient method with projection update, as the common optimizer framework in the two steps. At iteration $s$ , AMSGrad generates the next update as,
+
+$$
+\begin{array}{l} \boldsymbol {x} ^ {s + 1} = \Pi_ {\mathbb {X}, \sqrt {\hat {\boldsymbol {V}} ^ {s}}} \left(\boldsymbol {x} ^ {s} - a ^ {s} \boldsymbol {m} ^ {s} / \sqrt {\hat {\boldsymbol {v}} ^ {s}}\right) \\ = \underset {\boldsymbol {x} \in \mathbb {X}} {\operatorname {a r g m i n}} \| \left(\sqrt {\hat {\boldsymbol {V}} ^ {s}}\right) ^ {1 / 2} \left(\boldsymbol {x} - \left(\boldsymbol {x} ^ {s} - \frac {a ^ {s} \boldsymbol {m} ^ {s}}{\sqrt {\hat {\boldsymbol {v}} ^ {s}}}\right)\right) \| \tag {8} \\ \end{array}
+$$
+
+where $\Pi$ is a projection operator; $\mathbb{X}$ is the feasible domain of $\pmb{x}$ ; $a^s$ is the learning rate; $\pmb{m}^{s}$ is the (unbiased) first momentum; $\hat{\pmb{v}}^s$ is the (unbiased) maximum second momentum; and $\hat{\pmb{V}}^s$ is the diagonal matrix of $\hat{\pmb{v}}^s$ .
+
+In our context, Eq.(8) can be written as,
+
+$$
+\hat {\boldsymbol {w}} _ {g} ^ {s + 1} = \underset {\hat {\boldsymbol {w}} _ {g} \in \mathbb {F}} {\operatorname {a r g m i n}} f ^ {s} (\hat {\boldsymbol {w}} _ {g}) \tag {9}
+$$
+
+$$
+f ^ {s} = \left(a ^ {s} \boldsymbol {m} ^ {s}\right) ^ {\mathrm {T}} \left(\hat {\boldsymbol {w}} _ {g} - \hat {\boldsymbol {w}} _ {g} ^ {s}\right) + \frac {1}{2} \left(\hat {\boldsymbol {w}} _ {g} - \hat {\boldsymbol {w}} _ {g} ^ {s}\right) ^ {\mathrm {T}} \sqrt {\hat {\boldsymbol {V}} ^ {s}} \left(\hat {\boldsymbol {w}} _ {g} - \hat {\boldsymbol {w}} _ {g} ^ {s}\right) \tag {10}
+$$
+
+where $\mathbb{F}$ is the feasible domain of $\hat{\boldsymbol{w}}_g$ .
+
+Step 1 and Step 2 have different feasible domains $\mathbb{F}$ according to their objectives (details in Sec. 3.2 and Sec. 3.3). Eq.(10) approximates the loss increment incurred by $\hat{\boldsymbol{w}}_g$ around the current point $\hat{\boldsymbol{w}}_g^s$ as a quadratic model function under domain constraints [5, 37]. For simplicity, we replace $a^s m^s$ with $g^s$ and replace $\sqrt{\hat{V}^s}$ with $H^s$ . $g^s$ and $H^s$ are iteratively updated by the loss gradient of $\hat{\boldsymbol{w}}_g^s$ . Thus, the required input of each AMSGrad step is $\frac{\partial\ell^s}{\partial\hat{\boldsymbol{w}}_g^s}$ . Since $\hat{\boldsymbol{w}}_g^s$ is used as an intermediate value during the forward pass, this gradient can be directly obtained during the backward pass.
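+For reference, a minimal sketch of the AMSGrad statistics that define $g^s$ and the diagonal of $H^s$ is given below; bias correction of the momenta is omitted for brevity, and the hyper-parameter names are the usual assumptions.
+
+```python
+# Minimal sketch of the AMSGrad terms used in the quadratic model (Eq. 10).
+import numpy as np
+
+def amsgrad_model_terms(grad, m, v, v_hat, lr=1e-3, beta1=0.9, beta2=0.999):
+    """grad: loss gradient w.r.t. the quantized weights of one group."""
+    m = beta1 * m + (1.0 - beta1) * grad           # first momentum
+    v = beta2 * v + (1.0 - beta2) * grad**2        # second momentum
+    v_hat = np.maximum(v_hat, v)                   # AMSGrad keeps the running maximum
+    g_s = lr * m                                   # g^s = a^s m^s
+    H_s = np.sqrt(v_hat)                           # diagonal entries of H^s
+    return g_s, H_s, m, v, v_hat
+```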
+
+# 3.2. Pruning in $\alpha$ Domain
+
+As introduced in Sec. 3.1, we reduce the bitwidth $I$ by pruning the elements in $\alpha$ w.r.t. the resulting loss. If one element $\alpha_{i}$ in $\alpha$ is pruned, the corresponding dimension $\beta_{i}$ is also removed from $B$ . Now we explain how to instantiate the optimizer in Eq.(9) to solve Eq.(4)-Eq.(5) of Step 1.
+
+The cardinality of the chosen subset (i.e. the average bitwidth) is uniformly reduced over the iterations. For example, assume there are $T$ iterations in total, the initial average bitwidth is $I^0$ , and the desired average bitwidth after $T$ iterations, $I^T$ , is $\mathrm{I}_{\mathrm{min}}$ . Then at each iteration $t$ , $M_p = \mathrm{round}((I^0 - \mathrm{I}_{\mathrm{min}}) \times G / T)$ of the $\alpha_i^t$ 's are pruned in this layer. This way, the cardinality after $T$ iterations will be smaller than $\mathrm{I}_{\mathrm{min}} \times G$ . See Alg. 2 in Appendix B.1 for the pseudocode.
+
+When pruning in the $\alpha$ domain, $B$ is considered as invariant. Hence Eq.(9) and Eq.(10) become,
+
+$$
+\boldsymbol {\alpha} ^ {t + 1} = \underset {\boldsymbol {\alpha} \in \mathbb {P}} {\operatorname {a r g m i n}} f _ {\boldsymbol {\alpha}} ^ {t} (\boldsymbol {\alpha}) \tag {11}
+$$
+
+$$
+f _ {\boldsymbol {\alpha}} ^ {t} = \left(\boldsymbol {g} _ {\boldsymbol {\alpha}} ^ {t}\right) ^ {\mathrm {T}} \left(\boldsymbol {\alpha} - \boldsymbol {\alpha} ^ {t}\right) + \frac {1}{2} \left(\boldsymbol {\alpha} - \boldsymbol {\alpha} ^ {t}\right) ^ {\mathrm {T}} \boldsymbol {H} _ {\boldsymbol {\alpha}} ^ {t} \left(\boldsymbol {\alpha} - \boldsymbol {\alpha} ^ {t}\right) \tag {12}
+$$
+
+where $g_{\alpha}^{t}$ and $H_{\alpha}^{t}$ are similar to those in Eq.(10) but are in the $\alpha$ domain. If $\alpha_{i}^{t}$ is pruned, the $i^{\text{th}}$ element in $\alpha$ is set to 0 in Eq.(11) and Eq.(12). Thus, the constrained domain $\mathbb{P}$ is taken as all possible vectors with $M_{p}$ zero elements in $\alpha^{t}$ .
+
+AMSGrad uses a diagonal matrix $H_{\alpha}^{t}$ in the quadratic model function, which decouples the elements in $\alpha^t$ . This means that the loss increment caused by pruning several $\alpha_i^t$ 's equals the sum of the increments caused by pruning them individually, each of which is calculated as,
+
+$$
+f _ {\boldsymbol {\alpha}, i} ^ {t} = - g _ {\boldsymbol {\alpha}, i} ^ {t} \alpha_ {i} ^ {t} + \frac {1}{2} H _ {\boldsymbol {\alpha}, i i} ^ {t} \left(\alpha_ {i} ^ {t}\right) ^ {2} \tag {13}
+$$
+
+All items $f_{\alpha, i}^{t}$ are sorted in ascending order. Then the first $M_{p}$ items ( $\alpha_{i}^{t}$ 's) in the sorted list are removed from $\alpha^{t}$ , resulting in a smaller cardinality $I^{t} \times G$ . The input of the AMSGrad step in the $\alpha$ domain is the loss gradient of $\alpha_{g}^{t}$ , which can be computed with the chain rule,
+
+$$
+\frac {\partial \ell^ {t}}{\partial \boldsymbol {\alpha} _ {g} ^ {t}} = \left(\boldsymbol {B} _ {g} ^ {t}\right) ^ {\mathrm {T}} \frac {\partial \ell^ {t}}{\partial \hat {\boldsymbol {w}} _ {g} ^ {t}} \tag {14}
+$$
+
+$$
+\hat {\boldsymbol {w}} _ {g} ^ {t} = \boldsymbol {B} _ {g} ^ {t} \boldsymbol {\alpha} _ {g} ^ {t} \tag {15}
+$$
+
+Our pipeline allows the bitwidth to be reduced smoothly, since the average bitwidth can be a floating-point number. In ALQ, since different layers have a similar group size (see Sec. 5.1), the loss increments caused by pruning are sorted among all layers, such that only a global pruning number needs to be determined. The global pruning number is defined as the total number of pruned $\alpha_{i}$ 's, i.e. the difference of $\sum_{l} \mathrm{card}(\alpha_{l})$ before and after pruning. More details are explained in Appendix B.1 and B.3. This pruning step not only provides a loss-aware adaptive bitwidth, but also seeks a better initialization for training the following lower bitwidth quantization, since the quantized weights may be relatively far from their original full precision values.
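+
+A minimal sketch of this pruning step is shown below: the per-coordinate loss increments of Eq.(13) are collected over all groups, sorted in ascending order, and the globally cheapest $M_p$ coordinates are removed. The function and variable names are illustrative, and the bookkeeping of the matching columns of $B_g$ is omitted.
+
+```python
+import torch
+
+def prune_alpha_domain(alphas, grads, hessians, num_prune):
+    """Loss-aware pruning in the alpha domain (sketch of Eq.(13) and the global sort).
+    alphas, grads, hessians: lists of 1-D tensors per group with alpha_g, g_alpha and
+    the diagonal of H_alpha; num_prune: global number M_p of coordinates to remove."""
+    scores, index = [], []
+    for k, (a, g, h) in enumerate(zip(alphas, grads, hessians)):
+        f = -g * a + 0.5 * h * a ** 2            # Eq.(13): increment of setting alpha_i to 0
+        scores.append(f)
+        index.extend((k, i) for i in range(a.numel()))
+    order = torch.argsort(torch.cat(scores))     # ascending: cheapest removals first
+    keep = [torch.ones_like(a, dtype=torch.bool) for a in alphas]
+    for pos in order[:num_prune].tolist():
+        k, i = index[pos]
+        keep[k][i] = False
+    # The matching columns of B_g would be dropped alongside the pruned alpha_i's.
+    return [a[mask] for a, mask in zip(alphas, keep)]
+```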
+
+# 3.3. Optimizing Binary Bases and Coordinates
+
+After pruning, the loss degradation needs to be recovered. Following Eq.(9), the objective in Step 2 is
+
+$$
+\hat {\boldsymbol {w}} _ {g} ^ {s + 1} = \underset {\hat {\boldsymbol {w}} _ {g} \in \mathbb {F}} {\operatorname {a r g m i n}} f ^ {s} (\hat {\boldsymbol {w}} _ {g}) \tag {16}
+$$
+
+The constrained domain $\mathbb{F}$ is decided by both the binary bases and the full precision coordinates. Hence, directly searching for the optimal $\hat{w}_g$ is NP-hard. Instead, we optimize $B_{g}$ and $\alpha_{g}$ in an alternating manner, as in prior MBN quantization w.r.t. the reconstruction error [42, 43].
+
+Optimizing $B_{g}$ . We directly search for the optimal bases with AMSGrad. In each optimizing iteration $q$ , we fix $\alpha_{g}^{q}$ and update $B_{g}^{q}$ . We find the optimal increment for each group of weights, such that it converts to a new set of binary bases $B_{g}^{q + 1}$ . This optimization step searches a new space spanned by $B_{g}^{q + 1}$ based on the loss reduction, which prevents the pruned space from always being a subspace of the previous one. See Alg. 3 in Appendix B.2.1 for the detailed pseudocode.
+
+According to Eq.(9) and Eq.(10), the optimal $B_{g}$ w.r.t. the loss is updated by,
+
+$$
+\boldsymbol {B} _ {g} ^ {q + 1} = \underset {\boldsymbol {B} _ {g} \in \{- 1, + 1 \} ^ {n \times I _ {g}}} {\operatorname {a r g m i n}} f ^ {q} (\boldsymbol {B} _ {g}) \tag {17}
+$$
+
+$$
+\begin{array}{l} f ^ {q} = \left(\boldsymbol {g} ^ {q}\right) ^ {\mathrm {T}} \left(\boldsymbol {B} _ {g} \boldsymbol {\alpha} _ {g} ^ {q} - \hat {\boldsymbol {w}} _ {g} ^ {q}\right) + \\ \frac {1}{2} \left(\boldsymbol {B} _ {g} \boldsymbol {\alpha} _ {g} ^ {q} - \hat {\boldsymbol {w}} _ {g} ^ {q}\right) ^ {\mathrm {T}} \boldsymbol {H} ^ {q} \left(\boldsymbol {B} _ {g} \boldsymbol {\alpha} _ {g} ^ {q} - \hat {\boldsymbol {w}} _ {g} ^ {q}\right) \tag {18} \\ \end{array}
+$$
+
+where $\hat{\boldsymbol{w}}_g^q = \boldsymbol{B}_g^q\boldsymbol{\alpha}_g^q$ .
+
+Since $H^q$ is diagonal in AMSGrad, each row vector in $B_g^{q+1}$ can be independently determined. For example, the $j^{\text{th}}$ row is computed as,
+
+$$
+\boldsymbol {B} _ {g, j} ^ {q + 1} = \underset {\boldsymbol {B} _ {g, j}} {\operatorname {a r g m i n}} \| \boldsymbol {B} _ {g, j} \boldsymbol {\alpha} _ {g} ^ {q} - \left(\hat {w} _ {g, j} ^ {q} - g _ {j} ^ {q} / H _ {j j} ^ {q}\right) \| \tag {19}
+$$
+
+In general, $n \gg I_g$ . For each group, we first compute all $2^{I_g}$ possible values of
+
+$$
+\boldsymbol {b} ^ {\mathrm {T}} \boldsymbol {\alpha} _ {g} ^ {q}, \quad \boldsymbol {b} ^ {\mathrm {T}} \in \{- 1, + 1 \} ^ {1 \times I _ {g}} \tag {20}
+$$
+
+Then each row vector $B_{g,j}^{q+1}$ can be directly assigned by the optimal $b^{\mathrm{T}}$ through exhaustive search.
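+
+The row-wise exhaustive search of Eq.(19)-(20) can be sketched as follows; this is an illustrative reconstruction with our own variable names, not the authors' released implementation.
+
+```python
+import itertools
+import torch
+
+def search_binary_bases(alpha, w_hat, g, H_diag):
+    """Find B_g^{q+1} row by row (Eq.(19)), given the fixed coordinates alpha (I_g,),
+    the current quantized weights w_hat (n,), and the AMSGrad statistics g, H_diag (n,)."""
+    I_g = alpha.numel()
+    # Eq.(20): all 2^{I_g} candidate rows b^T in {-1,+1}^{1 x I_g} and their values b^T alpha
+    cand = torch.tensor(list(itertools.product([-1.0, 1.0], repeat=I_g)))
+    cand_vals = cand @ alpha                                     # (2^{I_g},)
+    target = w_hat - g / H_diag                                  # per-row target of Eq.(19)
+    dist = (cand_vals.unsqueeze(0) - target.unsqueeze(1)).abs()  # (n, 2^{I_g})
+    best = dist.argmin(dim=1)                                    # closest candidate per row
+    return cand[best]                                            # (n, I_g) new binary rows
+```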
+
+Optimizing $\alpha_{g}$ . The above obtained set of binary bases $B_{g}$ spans a new linear space. The current $\alpha_{g}$ is unlikely to be a (local) optimum w.r.t. the loss in this space, so we now optimize $\alpha_{g}$ . Since $\alpha_{g}$ is full precision, i.e. $\alpha_{g} \in \mathbb{R}^{I_{g} \times 1}$ , there is no domain constraint and thus no need for projection updating. Optimizing the full precision $\boldsymbol{w}_{g}$ takes incremental steps in the original $n$ -dim full space (spanned by orthonormal bases). Similarly, optimizing $\alpha_{g}$ takes steps in an $I_{g}$ -dim subspace (spanned by $B_{g}$ ). Hence conventional training strategies can be directly used to optimize $\alpha_{g}$ . See Alg. 4 in Appendix B.2.2 for the detailed pseudocode.
+
+Similarly to Eq.(11) and Eq.(12), we construct an AMSGrad optimizer in the $\alpha$ domain, but without projection updating, for each group in the $p^{\mathrm{th}}$ iteration as,
+
+$$
+\boldsymbol {\alpha} _ {g} ^ {p + 1} = \boldsymbol {\alpha} _ {g} ^ {p} - a _ {\boldsymbol {\alpha}} ^ {p} \boldsymbol {m} _ {\boldsymbol {\alpha}} ^ {p} / \sqrt {\hat {\boldsymbol {v}} _ {\boldsymbol {\alpha}} ^ {p}} \tag {21}
+$$
+
+We also add an L2-norm regularization on $\alpha_{g}$ to push unimportant coordinates towards zero. If there is a negative value in $\alpha_{g}$ , the corresponding basis is replaced by its negation, so that all entries of $\alpha_{g}$ remain nonnegative. Optimizing $B_{g}$ and $\alpha_{g}$ does not influence the number of binary bases $I_{g}$ .
+
+Optimization Speedup. Since $\alpha_{g}$ is full precision, updating $\alpha_{g}^{q}$ is much cheaper than exhaustively searching for $B_{g}^{q + 1}$ . Although the main purpose of the first step in Sec. 3.3 is optimizing the bases, we also add an update of $\alpha_{g}^{q}$ in each optimizing iteration $q$ .
+
+We fix $B_g^{q + 1}$ , and update $\alpha_{g}^{q}$ . The overall increment of quantized weights from both updating processes is,
+
+$$
+\hat {\boldsymbol {w}} _ {g} ^ {q + 1} - \hat {\boldsymbol {w}} _ {g} ^ {q} = B _ {g} ^ {q + 1} \boldsymbol {\alpha} _ {g} ^ {q + 1} - B _ {g} ^ {q} \boldsymbol {\alpha} _ {g} ^ {q} \tag {22}
+$$
+
+Substituting Eq.(22) into Eq.(9) and Eq.(10), we have,
+
+$$
+\begin{array}{l} \boldsymbol {\alpha} _ {g} ^ {q + 1} = - \left(\left(\boldsymbol {B} _ {g} ^ {q + 1}\right) ^ {\mathrm {T}} \boldsymbol {H} ^ {q} \boldsymbol {B} _ {g} ^ {q + 1}\right) ^ {- 1} \times \tag {23} \\ \left(\left(\boldsymbol {B} _ {g} ^ {q + 1}\right) ^ {\mathrm {T}} \left(\boldsymbol {g} ^ {q} - \boldsymbol {H} ^ {q} \boldsymbol {B} _ {g} ^ {q} \boldsymbol {\alpha} _ {g} ^ {q}\right)\right) \\ \end{array}
+$$
+
+To ensure the inverse in Eq.(23) exists, we add $\lambda \mathbf{I}$ onto $(\pmb{B}_{g}^{q + 1})^{\mathrm{T}}\pmb{H}^{q}\pmb{B}_{g}^{q + 1}$ , where $\lambda = 10^{-6}$ .
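+
+The closed-form coordinate update of Eq.(23), including the $\lambda \mathbf{I}$ regularization, could look as follows; this is a sketch under the assumption that $H^q$ is stored as a diagonal vector, and the names are ours.
+
+```python
+import torch
+
+def speedup_alpha_update(B_new, B_old, alpha_old, g, H_diag, lam=1e-6):
+    """Coordinate update of Eq.(23): B_new/B_old are (n, I_g) binary bases,
+    alpha_old is (I_g,), g and H_diag are the (n,) AMSGrad gradient and curvature."""
+    HB = H_diag.unsqueeze(1) * B_new                    # H^q B_g^{q+1} (H^q is diagonal)
+    A = B_new.t() @ HB + lam * torch.eye(B_new.shape[1])
+    rhs = B_new.t() @ (g - H_diag * (B_old @ alpha_old))
+    return -torch.linalg.solve(A, rhs)                  # alpha_g^{q+1}
+```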
+
+# 4. Activation Quantization
+
+To leverage bitwise operations for speedup, the inputs of each layer (i.e. the activation output of the previous layer) also need to be quantized into the multi-bit form. Unlike previous works [43] that quantize activations with a binary basis $(\{0, +1\})$ different from that of the weights, we also quantize activations with $\{-1, +1\}$ . This way, we only need 3 instructions rather than 5 instructions to replace the original 32 MACs (see Sec. 2).
+
+Our activation quantization follows the idea proposed in [2], i.e. a parameterized clipping for fixed-point activation quantization, but it is adapted to the multi-bit form. Specifically, we replace ReLU with a step activation function. The vectorized activation $\pmb{x}$ of the $l^{\mathrm{th}}$ layer is quantized as,
+
+$$
+\boldsymbol {x} \doteq \hat {\boldsymbol {x}} = x _ {r e f} + D \boldsymbol {\gamma} = D ^ {\prime} \boldsymbol {\gamma} ^ {\prime} \tag {24}
+$$
+
+where $D \in \{-1, +1\}^{N_x \times I_x}$ and $\gamma \in \mathbb{R}_+^{I_x \times 1}$ . $\gamma'$ is a column vector formed by $[x_{ref}, \gamma^{\mathrm{T}}]^{\mathrm{T}}$ ; $D'$ is a matrix formed by $[\mathbf{1}^{N_x \times 1}, D]$ . $N_x$ is the dimension of $x$ , and $I_x$ is the quantization bitwidth for activations. $x_{ref}$ is an introduced layerwise (positive floating-point) reference that fits the output range of ReLU. During inference, $x_{ref}$ is convolved with the weights of the next layer and added to the bias; hence the introduction of $x_{ref}$ does not lead to extra computations. The output of the last layer is not quantized, as it is not involved in any further computation. Other settings are directly adopted from [43]. $\gamma$ and $x_{ref}$ are updated during the forward propagation with a running average to minimize the squared reconstruction error as,
+
+$$
+\gamma_ {n e w} ^ {\prime} = \left(\boldsymbol {D} ^ {\prime \mathrm {T}} \boldsymbol {D} ^ {\prime}\right) ^ {- 1} \boldsymbol {D} ^ {\prime \mathrm {T}} \boldsymbol {x} \tag {25}
+$$
+
+$$
+\gamma^ {\prime} = 0. 9 \gamma^ {\prime} + (1 - 0. 9) \gamma_ {n e w} ^ {\prime} \tag {26}
+$$
+
+The (quantized) weights are also further fine-tuned with our optimizer to recover the accuracy drop. Here, we only set a single global bitwidth for all layers in activation quantization.
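+
+The sketch below illustrates one possible realization of Eq.(24)-(26): the binary codes $D$ are assigned greedily (an assumption on our part, as the paper does not spell out the encoding here), and $\gamma'$ is refit by least squares and smoothed with the running average of Eq.(26). All names are illustrative.
+
+```python
+import torch
+
+def quantize_activations(x, x_ref, gamma, momentum=0.9):
+    """Multi-bit activation quantization sketch (Eq.(24)-(26)).
+    x: (N_x,) activations; x_ref: 0-dim tensor reference; gamma: (I_x,) positive coordinates."""
+    N_x, I_x = x.numel(), gamma.numel()
+    residual = x - x_ref
+    D = torch.empty(N_x, I_x)
+    for i in range(I_x):
+        # greedy sign assignment of each bit (illustrative choice, not the paper's exact rule)
+        D[:, i] = torch.where(residual >= 0, torch.ones_like(residual), -torch.ones_like(residual))
+        residual = residual - D[:, i] * gamma[i]
+    D_prime = torch.cat([torch.ones(N_x, 1), D], dim=1)              # D' = [1, D]
+    gamma_old = torch.cat([x_ref.reshape(1), gamma])                 # gamma' = [x_ref, gamma^T]^T
+    gamma_new = torch.linalg.lstsq(D_prime, x.unsqueeze(1)).solution.squeeze(1)   # Eq.(25)
+    gamma_prime = momentum * gamma_old + (1 - momentum) * gamma_new  # Eq.(26)
+    x_hat = D_prime @ gamma_prime                                    # Eq.(24)
+    return x_hat, gamma_prime[0], gamma_prime[1:]
+```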
+
+# 5. Experiments
+
+We implement ALQ with Pytorch [30], and evaluate its performance on MNIST [22], CIFAR10 [19], and ILSVRC12 (ImageNet) [38] using LeNet5 [21], VGG [14, 36], and ResNet18/34 [11], respectively. More implementation details are provided in Appendix C.
+
+# 5.1. ALQ Initialization
+
+We adapt the network sketching proposed in [9] for $\hat{w}_g$ initialization, and realize a structured sketching (see Alg. 1 in Appendix A.1). Some important parameters in Alg. 1 are chosen as below.
+
+Group Size $n$ . We empirically decide a range for the group size $n$ by trading off between the weight reconstruction error and the storage compression rate. A group size from 32 to 512 achieves a good balance. Accordingly, for a convolution layer, grouping channel-wise $(w_{c,:,:,:})$ , kernel-wise $(w_{c,d,:,:})$ , and pixel-wise $(w_{c,:,h,w})$ appears to be appropriate. Channel-wise $(w_{c,:})$ and subchannel-wise $(w_{c,d:d+n})$ grouping are suited for a fully connected layer. In addition, the most frequently used structures for current popular networks are pixel-wise (convolution layers) and (sub)channel-wise (fully connected layers), which align with the bit-packing approach in [31]. See Appendix A.2 for more details on grouping.
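+
+For illustration, the two most frequently used grouping structures can be realized by simple reshapes of the weight tensors; the helper names and the example group size below are ours, not part of the paper.
+
+```python
+import torch
+
+def pixel_wise_groups(conv_weight):
+    """Group a conv weight (C_out, C_in, H, W) into pixel-wise groups w_{c,:,h,w}:
+    one group of size n = C_in per output channel and kernel position."""
+    C_out, C_in, H, W = conv_weight.shape
+    return conv_weight.permute(0, 2, 3, 1).reshape(C_out * H * W, C_in)
+
+def subchannel_wise_groups(fc_weight, n=256):
+    """Split each row of a fully connected weight (C_out, C_in) into sub-channel
+    groups of size n (C_in is assumed divisible by n for simplicity)."""
+    C_out, C_in = fc_weight.shape
+    return fc_weight.reshape(C_out * (C_in // n), n)
+```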
+
+Maximum Bitwidth $\mathrm{I}_{\mathrm{max}}$ for Group $g$ . The initial $I_{g}$ is set by a predefined initial reconstruction precision or a maximum bitwidth. We notice that the accuracy degradation caused by the initialization can be fully recovered after several epochs of the optimization proposed in Sec. 3.3, if the maximum bitwidth is 8. For example, ResNet18 on ILSVRC12 after such an initialization can be retrained to a Top-1/5 accuracy of $70.3\% / 89.4\%$ , even higher than its full precision counterpart $(69.8\% / 89.1\%)$ . For smaller networks, e.g. VGG on CIFAR10, a maximum bitwidth of 6 is sufficient.
+
+# 5.2. Convergence Analysis
+
+Settings. This experiment is an ablation study of our optimization step in Sec. 3.3. We show the advantages of our optimizer in terms of convergence on networks quantized with a uniform bitwidth. We compare optimizing $B_{g}$ with speedup (Alg. 3), since it takes an alternating step similar to previous works [42, 43]. Recall that our optimizer $(i)$ has no gradient approximation and $(ii)$ directly minimizes the loss. We use AMSGrad with a learning rate of 0.001, and compare with the following baselines.
+
+- STE with rec. error: This baseline quantizes the maintained full precision weights by minimizing the reconstruction error (rather than the loss) during forward and approximates gradients via STE during backward. This approach is adopted in some of the best-performing quantization schemes such as LQ-Net [43].
+- STE with loss-aware: This baseline approximates gradients via STE but performs a loss-aware projection updating (adapted from our ALQ). It can be considered as a multi-bit extension of prior loss-aware quantizers for binary and ternary networks [14, 13]. See Alg. 6 in Appendix B.4 for the detailed pseudocode.
+
+
+
+
+
+
+Figure 1. Validation accuracy trained with ALQ/baselines.
+
+
+
+Results. Fig. 1 shows the Top-1 validation accuracy of different optimizers over increasing epochs on uniform bitwidth MBNs. ALQ exhibits not only a more stable and faster convergence, but also a higher accuracy. The exception is 2-bit ResNet18: ALQ converges faster, but the validation accuracy trained with STE gradually exceeds that of ALQ after about 20 epochs. For training a large network with a bitwidth of $\leq 2$ , the positive effect brought by the high precision trace may compensate for certain negative effects caused by gradient approximation. In this case, keeping full precision parameters helps calibrate some aggressive quantization steps, resulting in a slowly oscillating convergence to a better local optimum. This also encourages us to add several epochs of STE-based optimization (e.g. STE with loss-aware) after low bitwidth quantization to further regain accuracy.
+
+# 5.3. Effectiveness of Adaptive Bitwidth
+
+Settings. This experiment demonstrates the performance of incrementally trained adaptive bitwidth in ALQ, i.e. our pruning step in Sec. 3.2. Uniform bitwidth quantization (an equal bitwidth allocation across all groups in all layers) is taken as the baseline. The baseline is trained with the same number of epochs as the sum of all epochs during the bitwidth reduction. Both ALQ and the baseline are trained with the same learning rate decay schedule.
+
+Results. Table 1 shows that there is a large Top-1 accuracy gap between an adaptive bitwidth trained with ALQ and a uniform bitwidth (baseline). In addition to the overall average bitwidth $(I_W)$ , we also plot the distribution of the average bitwidth and the number of weights across layers (for both models in Table 1) in Fig. 2. Generally, the first several layers and the last layer are more sensitive to the loss and thus require a higher bitwidth. The shortcut layers in the ResNet architecture (e.g. the $8^{\text{th}}$ , $13^{\text{th}}$ , and $18^{\text{th}}$ layers in ResNet18) also need a higher bitwidth. We attribute this to the fact that the shortcut pass helps the information propagate forward/backward through the blocks. Since the average of an adaptive bitwidth can have a decimal part, ALQ can achieve a compression rate with a much higher resolution than a uniform bitwidth, which not only enables a more precise trade-off between storage and accuracy, but also benefits our incremental bitwidth reduction (pruning) scheme.
+
+Table 1. Comparison between Baseline (Uniform Bitwidth) and ALQ (Adaptive Bitwidth)
+
+| Method | $I_W$ | Top-1 |
+| --- | --- | --- |
+| Baseline VGG (uniform) | 1 | 91.8% |
+| ALQ VGG | 0.66 | 92.0% |
+| Baseline ResNet18 (uniform) | 2 | 66.2% |
+| ALQ ResNet18 | 2.00 | 68.9% |
+
+It is worth noting that both the optimization step and the pruning step in ALQ follow the same metric, i.e. the loss increment modeled by a quadratic function, allowing them to work in synergy. We replace the step of optimizing $B_{g}$ in ALQ with an STE step (with the reconstruction-error-based forward pass, see Sec. 5.2), and keep the other steps of the pipeline unchanged.
+
+
+
+
+Figure 2. Distribution of the average bitwidth and the number of weights across layers.
+
+When the VGG model is reduced to an average bitwidth of 0.66-bit, the simple combination of an STE step with our pruning step can only reach $90.7\%$ Top-1 accuracy, which is significantly worse than ALQ's $92.0\%$ .
+
+# 5.4. Comparison with State-of-the-Art Methods
+
+# 5.4.1 Non-structured Pruning on MNIST
+
+Settings. Since ALQ can be considered as a (structured) pruning scheme in the $\alpha$ domain, we first compare ALQ with two widely used non-structured pruning schemes, i.e. pruning in the original $w$ domain: Deep Compression (DC) [10] and ADMM-Pruning (ADMM) [44]. For a fair comparison, we implement a modified LeNet5 model as in [10, 44] on the MNIST dataset [22] and compare the Top-1 prediction accuracy and the compression rate. Note that the storage consumption only counts the weights, since the weights take up the vast majority of the storage (even after quantization) compared to other components, e.g. biases, activation quantizers, batch normalization, etc. The storage consumption of the weights in ALQ includes the look-up table for the resulting $I_{g}$ of each group (the same goes for the following experiments).
+
+Table 2. Comparison with State-of-the-Art Non-structured Pruning Methods (LeNet5 on MNIST).
+
+| Method | Weights (CR) | Top-1 |
+| --- | --- | --- |
+| FP | 1720KB (1×) | 99.19% |
+| DC [10] | 44.0KB (39×) | 99.26% |
+| ADMM [44] | 24.2KB (71×) | 99.20% |
+| ALQ | 22.7KB (76×) | 99.12% |
+
+Results. ALQ shows the highest compression rate $(76\times)$ while keeping an acceptable Top-1 accuracy compared to the two other pruning methods (see Table 2). FP stands for full precision, and the weights of the original full precision LeNet5 consume 1720KB [10]. CR denotes the compression rate of storing the weights.
+
+It is worth mentioning that both DC [10] and ADMM [44] rely on sparse tensors, which need special libraries or hardware for execution [24]. Their operands (the shared quantized values) are still floating-point. Hence they can hardly utilize bitwise operations for speedup. In contrast, ALQ achieves a higher compression rate without sparse tensors, which makes it better suited for general off-the-shelf platforms.
+
+The average bitwidth of ALQ is below 1.0-bit (1.0-bit corresponds to a compression rate slightly below 32), indicating that some groups are fully removed. In fact, this process leads to a new network architecture containing fewer output channels in each layer, so the corresponding input channels of the following layers can be safely removed as well. The original configuration $20 - 50 - 500 - 10$ is now $18 - 45 - 231 - 10$ .
+
+# 5.4.2 Binary Networks on CIFAR10
+
+Settings. In this experiment, we compare the performance of ALQ with state-of-the-art binary networks [3, 36, 14]. A binary network is an MBN with the lowest bitwidth, i.e. single-bit. Thus, the storage consumption of a binary network can be regarded as the lower bound for a (multi-bit) binary network. For a fair comparison, we implement a small version of VGG from [40] on the CIFAR10 dataset [19], as in many state-of-the-art binary networks [3, 14, 36].
+
+Table 3. Comparison with State-of-the-Art Binary Networks (VGG on CIFAR10).
+
+| Method | $I_W$ | Weights (CR) | Top-1 |
+| --- | --- | --- | --- |
+| FP | 32 | 56.09MB (1×) | 92.8% |
+| BC [3] | 1 | 1.75MB (32×) | 90.1% |
+| BWN [36]* | 1 | 1.82MB (31×) | 90.1% |
+| LAB [14] | 1 | 1.77MB (32×) | 89.5% |
+| AQ [18] | 0.27 | 1.60MB (35×) | 90.9% |
+| ALQ | 0.66 | 1.29MB (43×) | 92.0% |
+| ALQ | 0.40 | 0.82MB (68×) | 90.9% |
+
+*: both first and last layers are unquantized.
+
+Results. Table 3 shows the performance comparison with popular binary networks. $I_W$ stands for the quantization bitwidth of the weights. Since ALQ has an adaptive quantization bitwidth, the reported bitwidth of ALQ is the average bitwidth over all weights. For statistical information, we plot multiple training loss curves in Appendix C.2.
+
+ALQ allows the network to be compressed to under 1 bit, which remarkably reduces the storage and computation. ALQ achieves the smallest weight storage and the highest accuracy compared to all weight binarization methods, BC [3], BWN [36], and LAB [14]. Similar to the results on LeNet5, ALQ generates a new network architecture with fewer output channels per layer, which further reduces our models in Table 3 to 1.01MB (0.66-bit) or even 0.62MB (0.40-bit). The computation and the run-time memory also decrease accordingly.
+
+Furthermore, we also compare with AQ [18], the state-of-the-art adaptive fixed-point quantizer. It assigns a different bitwidth to each parameter based on its sensitivity, and also realizes pruning for 0-bit parameters. Our ALQ not only consumes less storage, but also achieves a higher accuracy than AQ [18]. Besides, due to its irregularity, the non-standard quantization bitwidth of AQ cannot run efficiently on general hardware [18], which is not the case for ALQ.
+
+# 5.4.3 MBNs on ILSVRC12
+
+Settings. We quantize both the weights and the activations of ResNet18/34 [11] with a low bitwidth ( $\leq$ 2-bit) on the ILSVRC12 dataset [38], and compare our results with state-of-the-art multi-bit networks. The results for the full precision version are provided by Pytorch [30]. We choose ResNet18, as it is a popular model on ILSVRC12 used in previous quantization schemes. ResNet34 is a deeper network used more often in recent quantization papers.
+
+Results. Table 4 shows that ALQ obtains the highest accuracy with the smallest network size on ResNet18/34, in comparison with other weight and weight+activation quantization approaches. $I_W$ and $I_A$ are the quantization bitwidth for weights and activations respectively.
+
+Several schemes (marked with *) are not able to quantize the first and last layers, since quantizing these two layers in the same way as the others causes a huge accuracy degradation [41, 28]. It is worth noting that the first and last layers with floating-point values occupy 2.09MB of storage in ResNet18/34, which is still a significant storage consumption for such a low-bit network. This enormous difference can be observed, for example, between TWN [23] and LQ-Net [43] in Table 4. The involved floating-point computations in both layers can hardly be accelerated with bitwise operations either.
+
+For the reported ALQ models in Table 4, as several layers have already been pruned to an average bitwidth below 1.0-bit (e.g. in Fig. 2), we add an extra 50 epochs of our STE with loss-aware at the end (see Sec. 5.2). The final accuracy is further boosted (see the results marked with ${}^{\mathrm{e}}$ ). ALQ can quantize ResNet18/34 with 2.00-bit (across all layers) without any accuracy loss. To the best of our knowledge, this is the first time that a 2-bit weight-quantized ResNet18/34 achieves the accuracy level of its full precision counterpart, even though some prior schemes keep the first and last layers unquantized. These results further demonstrate the high performance of the ALQ pipeline.
+
+Table 4. Comparison with State-of-the-Art Multi-bit Networks (ResNet18/34 on ILSVRC12).
+
+| Method | $I_W$/$I_A$ | Weights | Top-1 |
+| --- | --- | --- | --- |
+| ResNet18 | | | |
+| FP [30] | 32/32 | 46.72MB | 69.8% |
+| TWN [23] | 2/32 | 2.97MB | 61.8% |
+| LR [39] | 2/32 | 4.84MB | 63.5% |
+| LQ [43]* | 2/32 | 4.91MB | 68.0% |
+| QIL [17]* | 2/32 | 4.88MB | 68.1% |
+| INQ [45] | 3/32 | 4.38MB | 68.1% |
+| ABC [26] | 5/32 | 7.41MB | 68.3% |
+| ALQ | 2.00/32 | 3.44MB | 68.9% |
+| ALQ$^{\mathrm{e}}$ | 2.00/32 | 3.44MB | 70.0% |
+| BWN [36]* | 1/32 | 3.50MB | 60.8% |
+| LR [39]* | 1/32 | 3.48MB | 59.9% |
+| DSQ [7]* | 1/32 | 3.48MB | 63.7% |
+| ALQ | 1.01/32 | 1.77MB | 65.6% |
+| ALQ$^{\mathrm{e}}$ | 1.01/32 | 1.77MB | 67.7% |
+| LQ [43]* | 2/2 | 4.91MB | 64.9% |
+| QIL [17]* | 2/2 | 4.88MB | 65.7% |
+| DSQ [7]* | 2/2 | 4.88MB | 65.2% |
+| GroupNet [48]* | 4/1 | 7.67MB | 66.3% |
+| RQ [27] | 4/4 | 5.93MB | 62.5% |
+| ABC [26] | 5/5 | 7.41MB | 65.0% |
+| ALQ | 2.00/2 | 3.44MB | 66.4% |
+| SYQ [6]* | 1/8 | 3.48MB | 62.9% |
+| LQ [43]* | 1/2 | 3.50MB | 62.6% |
+| PACT [2]* | 1/2 | 3.48MB | 62.9% |
+| ALQ | 1.01/2 | 1.77MB | 63.2% |
+| ResNet34 | | | |
+| FP [30] | 32/32 | 87.12MB | 73.3% |
+| ALQ$^{\mathrm{e}}$ | 2.00/32 | 6.37MB | 73.6% |
+| ALQ$^{\mathrm{e}}$ | 1.00/32 | 3.29MB | 72.5% |
+| LQ [43]* | 2/2 | 7.47MB | 69.8% |
+| QIL [17]* | 2/2 | 7.40MB | 70.6% |
+| DSQ [7]* | 2/2 | 7.40MB | 70.0% |
+| GroupNet [48]* | 5/1 | 12.71MB | 70.5% |
+| ABC [26] | 5/5 | 13.80MB | 68.4% |
+| ALQ | 2.00/2 | 6.37MB | 71.0% |
+| TBN [41]* | 1/2 | 4.78MB | 58.2% |
+| LQ [43]* | 1/2 | 4.78MB | 66.6% |
+| ALQ | 1.00/2 | 3.29MB | 67.4% |
+
+*: both first and last layers are unquantized.
+$^{\mathrm{e}}$: adding extra epochs of STE with loss-aware in the end.
+
+# 6. Conclusion
+
+In this paper, we propose a novel loss-aware trained quantizer for multi-bit networks, which realizes an adaptive bitwidth for all layers (w.r.t. the loss). The experiments on current open datasets reveal that ALQ outperforms state-of-the-art multi-bit/binary networks in both accuracy and storage. Currently, we are deploying ALQ on a mobile platform to measure the inference efficiency.
+
+# Acknowledgement
+
+We are grateful for the anonymous reviewers and area chairs for their valuable comments and suggestions. This research was supported in part by the Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 1 grant. Zimu Zhou is the corresponding author.
+
+# References
+
+[1] Jorge Albericio, Alberto Delmás, Patrick Judd, Sayeh Sharify, Gerard O'Leary, Roman Genov, and Andreas Moshovos. Bitpragmatic deep neural network computing. In Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture, pages 382-394, 2017.
+[2] Jungwook Choi, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srinivasan, and Kailash Gopalakrishnan. PACT: parameterized clipping activation for quantized neural networks. arXiv preprint arXiv:1805.06085, abs/1805.06085, 2018.
+[3] Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Proceedings of Advances in Neural Information Processing Systems, pages 3123-3131, 2015.
+[4] Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.
+[5] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.
+[6] Julian Faraone, Nicholas J. Fraser, Michaela Blott, and Philip Heng Wai Leong. SYQ: learning symmetric quantization for efficient deep neural networks. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 4300-4309, 2018.
+[7] Ruihao Gong, Xianglong Liu, Shenghu Jiang, Tianxiang Li, Peng Hu, Jiazhen Lin, Fengwei Yu, and Junjie Yan. Differentiable soft quantization: Bridging full-precision and low-bit neural networks. In Proceedings of International Conference in Computer Vision, 2019.
+[8] Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
+[9] Yiwen Guo, Anbang Yao, Hao Zhao, and Yurong Chen. Network sketching: exploiting binary structure in deep cnns. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 5955-5963, 2017.
+[10] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. In Proceedings of International Conference on Learning Representations, 2016.
+[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
+[12] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. In Proceedings of NIPS Deep Learning Workshop, 2014.
+[13] Lu Hou and James T Kwok. Loss-aware weight quantization of deep networks. In Proceedings of International Conference on Learning Representations, 2018.
+
+[14] Lu Hou, Quanming Yao, and James T Kwok. Loss-aware binarization of deep networks. In Proceedings of International Conference on Learning Representations, 2017.
+[15] Qinghao Hu, Peisong Wang, and Jian Cheng. From hashing to cnns: Training binary weight networks via hashing. In Proceedings of AAAI Conference on Artificial Intelligence, 2018.
+[16] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Quantized neural networks: Training neural networks with low precision weights and activations. Journal of Machine Learning Research, 18(187):1-30, 2017.
+[17] Sangil Jung, Changyong Son, Seohyung Lee, Jinwoo Son, Jae-Joon Han, Youngjun Kwak, Sung Ju Hwang, and Changkyu Choi. Learning to quantize deep networks by optimizing quantization intervals with task loss. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 4350-4359, 2019.
+[18] Soroosh Khoram and Jing Li. Adaptive quantization of neural networks. In Proceedings of International Conference on Learning Representations, 2018.
+[19] Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. Cifar-10 (canadian institute for advanced research). http://www.cs.toronto.edu/~kriz/cifar.html.
+[20] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Proceedings of Advances in Neural Information Processing Systems, pages 1097-1105, 2012.
+[21] Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
+[22] Yann LeCun and Corinna Cortes. MNIST handwritten digit database. http://yann.lecun.com/exdb/mnist/, 2010.
+[23] Fengfu Li, Bo Zhang, and Bin Liu. Ternary weight networks. In Proceedings of Advances in Neural Information Processing Systems, 2016.
+[24] Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. In Proceedings of International Conference on Learning Representations, 2017.
+[25] Darryl Lin, Sachin Talathi, and Sreekanth Annapureddy. Fixed point quantization of deep convolutional networks. In Proceedings of International Conference on Machine Learning, pages 2849-2858, 2016.
+[26] Xiaofan Lin, Cong Zhao, and Wei Pan. Towards accurate binary convolutional neural network. In Proceedings of Advances in Neural Information Processing Systems, pages 345-353, 2017.
+[27] Christos Louizos, Matthias Reisser, Tijmen Blankevoort, Efstratios Gavves, and Max Welling. Relaxed quantization for discretized neural networks. In Proceedings of International Conference on Learning Representations, 2019.
+[28] Asit Mishra and Debbie Marr. Apprentice: Using knowledge distillation techniques to improve low-precision network accuracy. In Proceedings of International Conference on Learning Representations, 2018.
+
+[29] Asit Mishra, Eriko Nurvitadhi, Jeffrey J Cook, and Debbie Marr. WRPN: Wide reduced-precision networks. In Proceedings of International Conference on Learning Representations, 2018.
+[30] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. In Proceedings of NIPS Autodiff Workshop: The Future of Gradient-based Machine Learning Software and Techniques, 2017.
+[31] Fabrizio Pedersoli, George Tzanetakis, and Andrea Tagliasacchi. Espresso: Efficient forward propagation for binary deep neural networks. In Proceedings of International Conference on Learning Representations, 2018.
+[32] Pytorch. Pytorch example of lenet-5 on mnist. https://github.com/pytorch/examples/blob/master/mnist/main.py. Accessed: 2019-09-28.
+[33] Pytorch. Pytorch example on cifar10. https://github.com/kuangliu/pytorch-cifar/blob/master/main.py. Accessed: 2019-10-08.
+[34] Pytorch. Pytorch example on ImageNet. https://github.com/pytorch/examples/blob/master/imagenet/main.py. Accessed: 2019-09-24.
+[35] Pytorch. Pytorch example on resnet. https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py. Accessed: 2019-10-15.
+[36] Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. In Proceedings of European Conference on Computer Vision, pages 525-542, 2016.
+[37] Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of adam and beyond. In Proceedings of International Conference on Learning Representations, 2018.
+[38] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, 115(3):211-252, 2015.
+[39] Oran Shayer, Dan Levi, and Ethan Fetaya. Learning discrete weights using the local reparameterization trick. In Proceedings of International Conference on Learning Representations, 2018.
+[40] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In Proceedings of International Conference on Learning Representations, 2015.
+[41] Diwen Wan, Fumin Shen, Li Liu, Fan Zhu, Jie Qin, Ling Shao, and Heng Tao Shen. Tbn: Convolutional neural network with ternary inputs and binary weights. In Proceedings of European Conference on Computer Vision, 2018.
+[42] Chen Xu, Jianqiang Yao, Zhouchen Lin, Wenwu Ou, Yuanbin Cao, Zhirong Wang, and Hongbin Zha. Alternating multi-bit quantization for recurrent neural networks. In Proceedings of International Conference on Learning Representations, 2018.
+
+[43] Dongqing Zhang, Jiaolong Yang, Dongqiangzi Ye, and Gang Hua. Lq-nets: Learned quantization for highly accurate and compact deep neural networks. In Proceedings of European Conference on Computer Vision, pages 365-382, 2018.
+[44] Tianyun Zhang, Shaokai Ye, Kaiqi Zhang, Jian Tang, Wujie Wen, Makan Fardad, and Yanzhi Wang. A systematic dnn weight pruning framework using alternating direction method of multipliers. In Proceedings of European Conference on Computer Vision, pages 184-199, 2018.
+[45] Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen. Incremental network quantization: Towards lossless cnns with low-precision weights. In Proceedings of International Conference on Learning Representations, 2017.
+[46] Aojun Zhou, Anbang Yao, Kuan Wang, and Yurong Chen. Explicit loss-error-aware quantization for low-bit deep neural networks. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pages 9426-9435, 2018.
+[47] Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016.
+[48] Bohan Zhuang, Chunhua Shen, Mingkui Tan, Lingqiao Liu, and Ian Reid. Structured binary neural networks for accurate image classification and semantic segmentation. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2019.
\ No newline at end of file
diff --git a/adaptivelossawarequantizationformultibitnetworks/images.zip b/adaptivelossawarequantizationformultibitnetworks/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..49488170ec4204eec9034ff7b9b566e635a0c423
--- /dev/null
+++ b/adaptivelossawarequantizationformultibitnetworks/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:55a6e9ccbd15389d5b5a0c73055d8dcc7b31ff755266f3e93b93ea78e6d24fd2
+size 386766
diff --git a/adaptivelossawarequantizationformultibitnetworks/layout.json b/adaptivelossawarequantizationformultibitnetworks/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..73cdf816d8e0950157bef24864c392f1cb324b89
--- /dev/null
+++ b/adaptivelossawarequantizationformultibitnetworks/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c7f1a461acec9e60c113d62bcf92b7ed99745f8b995ada764d80d3eefb9d0a79
+size 540153
diff --git a/adaptivesubspacesforfewshotlearning/08d6f6f4-49e1-4801-989e-2364ed80c8d7_content_list.json b/adaptivesubspacesforfewshotlearning/08d6f6f4-49e1-4801-989e-2364ed80c8d7_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..b0b32ed9bbc3796ff031078332410294a3778456
--- /dev/null
+++ b/adaptivesubspacesforfewshotlearning/08d6f6f4-49e1-4801-989e-2364ed80c8d7_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3d18cbe459c9a29bc091864737349225d2ab7c2118ebc221118961a2515d8c2c
+size 79164
diff --git a/adaptivesubspacesforfewshotlearning/08d6f6f4-49e1-4801-989e-2364ed80c8d7_model.json b/adaptivesubspacesforfewshotlearning/08d6f6f4-49e1-4801-989e-2364ed80c8d7_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..4bcb3011713ef5d3c515834ba1ecd4f361f4d813
--- /dev/null
+++ b/adaptivesubspacesforfewshotlearning/08d6f6f4-49e1-4801-989e-2364ed80c8d7_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2dc1a0f19c1bad5feed569cd829fcd8138d064fafe99a80f8fe3ffbdf471625e
+size 98526
diff --git a/adaptivesubspacesforfewshotlearning/08d6f6f4-49e1-4801-989e-2364ed80c8d7_origin.pdf b/adaptivesubspacesforfewshotlearning/08d6f6f4-49e1-4801-989e-2364ed80c8d7_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..6f3556972f3bb89763343580493c9edd73cf17ad
--- /dev/null
+++ b/adaptivesubspacesforfewshotlearning/08d6f6f4-49e1-4801-989e-2364ed80c8d7_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1aa5843fd1d8f6da76008f6fcc216c536f373b95cb060dad4a2e9f80ab4f5e53
+size 358516
diff --git a/adaptivesubspacesforfewshotlearning/full.md b/adaptivesubspacesforfewshotlearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..84c24239a918c728a773d1c120556e4717e53920
--- /dev/null
+++ b/adaptivesubspacesforfewshotlearning/full.md
@@ -0,0 +1,325 @@
+# Adaptive Subspaces for Few-Shot Learning
+
+Christian Simon†,§ Piotr Koniusz†,§ Richard Nock†,‡,§ Mehrtash Harandi¶,§
+
+†The Australian National University, ‡Monash University,
+
+†The University of Sydney, § Data61-CSIRO
+
+first.last@{anu.edu.au, monash.edu, data61.csiro.au}
+
+# Abstract
+
+Object recognition requires a generalization capability to avoid overfitting, especially when the samples are extremely few. Generalization from limited samples, usually studied under the umbrella of meta-learning, equips learning techniques with the ability to adapt quickly in dynamical environments and proves to be an essential aspect of lifelong learning. In this paper, we provide a framework for few-shot learning by introducing dynamic classifiers that are constructed from few samples. A subspace method is exploited as the central block of a dynamic classifier. We will empirically show that such modelling leads to robustness against perturbations (e.g., outliers) and yields competitive results on the task of supervised and semi-supervised few-shot classification. We also develop a discriminative form which can boost the accuracy even further. Our code is available at https://github.com/chrysts/dsn_fewshot.
+
+# 1. Introduction
+
+Various studies show that many deep learning techniques in computer vision, speech recognition and natural language understanding, to name but a few, will fail to produce reliable models that generalize well if limited annotations are available. Apart from the labor associated with annotating data, precise annotation can become ill-posed in some cases. One prime example of such a difficulty is object detection labeling, which requires annotating bounding boxes of objects, as explained in [1]. In some other cases, the labeling process may require expert knowledge (e.g. sign language recognition [2]).
+
+In contrast to the current trend in deep learning, humans can learn new objects from only a few examples. This in turn provides humans with lifelong learning abilities. Inspired by such learning abilities, several approaches have been developed to study learning from limited samples [3-12]. This type of learning, known as Few-Shot Learning (FSL), has been tackled by a diverse set of ideas, from embedding learning [4, 13, 14], to adaptation techniques [7, 8] and even generative models [3, 15].
+
+Figure 1: The accuracy of prototype and subspace classifiers evaluated with few (2-5) images. The feature extractor is ResNet-34 trained on ImageNet. A prototype is an average pooling of a few images within the same class, and a subspace is the class-specific set of basis vectors. The prototypes and subspaces are constructed directly from the generated features without additional learnable parameters.
+
+In this work, we first formulate FSL as a two-stage learning paradigm, namely, 1. learning a universal feature extractor followed by 2. learning to generate a classifier dynamically from limited data. We will demonstrate that many state-of-the-art FSL techniques fit nicely into such a learning paradigm. Furthermore, we will show that viewing the FSL as the above paradigm will be beneficial and provide us with tools to formalize FSL.
+
+Once we establish the two-stage learning paradigm, we will turn our attention to how one can reliably generate a classifier from limited data. Aside from limited annotation, we will show that a requirement in many challenging FSL problems is to learn the classifier from high-dimensional data. This ultimately boils down to learning a symmetric function from high-dimensional data. To this end, we make another contribution and propose to construct the symmetric function using subspaces, which have a long history in modeling visual data [16-19]. This differs largely from previous studies where the symmetric function is realized through a form of pooling (e.g., averaging as in [20]).
+
+As a motivating example, we compare and contrast the state-of-the-art prototypical networks [20] against our proposed subspace method using the CUB dataset [21]. To this end, for the universal feature extractor, we used a ResNet-34 trained on ImageNet [22]. We considered four FSL problems with various shots (two to five, to be specific) and report the accuracy of the prototypical networks and our subspace method in Fig. 1. As we will detail shortly, in prototypical networks one constructs low-shot classifiers by averaging all the samples within each class. Aside from being a natural choice, averaging is supported by two observations: 1) in [23], it is shown that all symmetric functions over a set $\mathcal{X}$ can be written as $\rho (\sum_{x\in \mathcal{X}}\phi (x))$ for suitable transformations $\rho$ and $\phi$ ; 2) in [11], the authors note that the average of samples within a class is highly correlated with the parameters of classifiers learned by the softmax, hence one hopes that averaging reflects the true parameters of a class in FSL as well. Nevertheless, we observe that our subspace solution consistently and comfortably outperforms the prototypical networks. This compelling result, along with our thorough set of experiments on supervised and semi-supervised FSL (see, for example, Tables 1 and 5), suggests that in few-shot regimes there exist better ways to build classifiers from limited observations, with subspace-based ones being our recommendation.
+
+Contributions. In summary, we make the following contributions in this work:
+
+i. Few-shot learning solutions are formulated within a framework of generating dynamic classifiers.
+ii. We propose an extension of existing dynamic classifiers by using subspaces. We rely on a well-established concept stating that a second-order method generalizes better for classification tasks.
+iii. We also introduce a discriminative formulation where maximum discrimination between subspaces is encouraged during training. This solution boosts the performance even further.
+iv. We show that our method can make use of unlabeled data and hence it lends itself to the problem of semi-supervised few-shot learning and transductive setting. The robustness of such a variant is assessed in our experiments.
+
+# 2. Related Work
+
+In this section, we review the literature on few-shot learning and subspace methods for classification tasks. Few-shot learning was originally introduced to imitate the human learning ability. Some of the early works use generative models and similarity learning to capture the variation within parts and geometric configurations of objects [3, 15, 24]. These works use hand-crafted features to perform few-shot classification. The constellation model proposed in [15] takes the object parts into account for inference. The geometric structure of these parts helps discriminate between different objects. Furthermore, Torralba et al. [24] exploit similar features on visual objects, but the model does not exploit the geometric structure. Another non-deep solution is the work by Lake et al. [3], which uses a set of primitives (strokes) to model few-shot classification. The above few-shot classification methods are not trained end-to-end and the given tasks are non-episodic.
+
+Deep learning has been very successful in learning discriminative features from images. Santoro et al. [25] and Vinyals et al. [4] attempted to solve few-shot classification with end-to-end deep neural networks. In the majority of cases, the network, trained from episodes, aims to infer the underlying discriminative model of specific tasks from limited data. Meta-learning can also be used to obtain fast adaptive networks. A prominent idea is to learn initial values for the parameters (weights) of the neural network. With proper initialization, one can expect the network to adapt to different tasks using backpropagation from limited samples. The work in [8] uses long short-term memory (LSTM) to embed the gradients w.r.t. a given task to train the network. MAML [7] does not use an LSTM to encode the gradients but can still perform meta-learning, usually with better performance. As an extension, MAML++ [26] uses an importance scheme to weigh the loss during the gradient updates. MetaNets [27] is another fast adaptive network with a mix of so-called fast and fixed weights. The fast weights change through backpropagation while the fixed weights do not. Thus, one can see this method as an optimization applied to selected weights only.
+
+FSL based on metric-learning is the closest direction to our work. Matching networks [4] and Siamese networks [13] learn a sample-wise metric, meaning that distances to samples are used to determine the label of the query. In prototypical networks [20], Snell et al. extended the idea from a sample-wise to a class-wise metric. The descriptors from all the samples of a specific class are grouped and considered as the class prototypes. The prototypes are subsequently used for inference. Learning a non-linear relationship between class representations and queries can be modeled by neural networks, as shown for example in Relation Networks [14]. The underlying metric is learnt to preserve small distances between feature vectors sharing the same class label. Qiao et al. [11] observed that the activation of a network is correlated with the weights of its classifier (final layer) and advocate that a prototype made of the activation is sufficient for classification. Other works use feature attention modules [28, 29] to modulate features for few-shot learning [30, 31].
+
+Figure 2: Various classifiers for few-shot classification. (a) Matching networks create pairwise classifiers. (b) Prototypical networks create mean classifiers based on the samples in the same class. (c) Relation networks produce non-linear classifiers. (d) Our proposed method creates classifiers using subspaces.
+
+Several recent works target few-shot semi-supervised learning (FS-SSL). Garcia et al. [32] exploit graph neural networks (GNN) for the semi-supervised setting, where unlabeled data is connected with the labeled data via the graph. Then, the features extracted from the GNN are employed to classify the query. Another protocol for FS-SSL, proposed by Ren et al. [33], shows that unlabeled images help samples from the support set to increase the performance of few-shot classification. The method proposed in [33] is based on the prototypical networks [20], with prototypes refined by the use of unlabeled images.
+
+# 3. Problem Setting
+
+We start by defining the terminology used in few-shot learning. A few samples are used for training in every iteration, in a meta-learning fashion. To obtain a trained model, so-called episodes are used to sample the data. An episode $\mathcal{T}_i$ consists of two sets, the support set $S$ and the query set $Q$ . This learning paradigm depicts how a machine can improve its ability given fragmented data in each iteration. Specifically, the deep embeddings are learned with a limited amount of labels and inputs per episode. This learning paradigm is well known as $N$ -way $K$ -shot classification (e.g., 20-way 1-shot and 5-way 5-shot). We introduce our notation for $(N$ -way, $K$ -shot) few-shot learning. Each episode or task $\mathcal{T}_i$ is composed of the support set $S = \{(x_{1,1}, c_{1,1}), (x_{1,2}, c_{1,2}), \dots, (x_{N,K}, c_{N,K})\}$ and the query set $Q = \{q_1, \dots, q_{N \times M}\}$ , where $x_{i,j}$ denotes the $j$ -th sample from class $i$ and $c_{i,j} \in \{1, \dots, N\}$ . In the semi-supervised setting, there additionally exists an unlabeled set $\mathcal{R} = \{r_1, \dots, r_U\}$ within an episode.
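+
+The episode construction described above can be summarized by the following sketch of an $(N$ -way, $K$ -shot$)$ sampler; the dataset format and helper name are illustrative assumptions, not part of the paper.
+
+```python
+import random
+from collections import defaultdict
+
+def sample_episode(dataset, n_way=5, k_shot=5, m_query=15):
+    """Sample one episode T_i = (S, Q) from a list of (image, label) pairs."""
+    by_class = defaultdict(list)
+    for x, y in dataset:
+        by_class[y].append(x)
+    classes = random.sample(list(by_class), n_way)                  # pick N classes
+    support, query = [], []
+    for episode_label, c in enumerate(classes):
+        chosen = random.sample(by_class[c], k_shot + m_query)
+        support += [(x, episode_label) for x in chosen[:k_shot]]    # K shots per class
+        query += [(x, episode_label) for x in chosen[k_shot:]]      # M query images per class
+    return support, query
+```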
+
+A related problem is semi-supervised few-shot learning, where unlabeled data is provided to the model. In the literature, various configurations are considered for semi-supervised few-shot learning, e.g., [32-34]. In this work, we follow the challenging protocol in [33] where so-called distractors are introduced. Thus, an episode includes the support set $S$ , the query set $Q$ , and the unlabeled set $\mathcal{R}$ . The support (labeled) set $S$ and query set $Q$ are configured as in few-shot learning. Additionally, the unlabeled set $\mathcal{R}$ is provided to assist the classification task within an episode. In the unlabeled set, there are samples from two different sources: the support classes and the distractor classes. As the name implies, samples from distractor classes are irrelevant to the classification task and represent classes outside the support set.
+
+# 4. Proposed Method
+
+# 4.1. Preliminary
+
+We consider a few-shot learning problem in two stages: the feature extractor and the dynamic classifier. Let $f_{\Theta}: \mathcal{X} \to \mathbb{R}^{D}$ be a mapping from the input space $\mathcal{X}$ to a $D$ -dimensional representation, realized by a neural network, and let $X_{c} = \{\pmb{x}_{c,1}, \dots, \pmb{x}_{c,K}\}$ be a class-specific set. We formulate the problem of few-shot learning as generating dynamic classifiers. To this end, the final layer of a neural network along with the softmax layer implements:
+
+$$
+p (c | \boldsymbol {q}) = \frac {\exp \left(\boldsymbol {W} _ {c} ^ {\top} f _ {\Theta} (\boldsymbol {q})\right)}{\sum_ {c ^ {\prime}} \exp \left(\boldsymbol {W} _ {c ^ {\prime}} ^ {\top} f _ {\Theta} (\boldsymbol {q})\right)} = \frac {\exp \left(d _ {c} (\boldsymbol {q})\right)}{\sum_ {c ^ {\prime}} \exp \left(d _ {c ^ {\prime}} (\boldsymbol {q})\right)}, \tag {1}
+$$
+
+where $\mathbf{W}_c$ is the weight vector of class $c$ . Then, the problem of FSL can be understood as how $\mathbf{W}$ can be generated once a new task is provided. To showcase this setup, we discuss the pairwise classifier, the prototype classifier, and the non-linear classifier below.
+
+Figure 3: The overall pipeline of our approach. The subspace classifier replaces a classifier built from a single vector per class. A discriminative method is then applied to maximize the margin between subspaces.
+
+Pair-Wise Classifier. It is possible to build a classifier directly from samples by calculating the similarity between them, as shown in Fig. 2 (a). One seminal work using this classifier is Matching Networks [4]. The samples are embedded through LSTMs and attention modules. However, this method is not invariant w.r.t. the order of the input images, which affects the accuracy. The classifier weight $W_{c}$ is substituted with a function $g(\cdot)$ (e.g. LSTMs) to encode the samples. Then the class-specific samples are summarized and a cosine similarity is used for prediction.
+
+Prototype Classifier. Based on the observation made for few-shot classification in [11], the parameters of the last fully-connected layer and the prototypes correlate. Thus, the classifier is generated from the prototypes. By introducing a simple multi-layer perceptron, the average of the feature vectors from the final activation layer is used to perform few-shot classification. This observation is also confirmed by prototypical networks [20], which directly learn feature embeddings. Some follow-up works also use prototypes as dynamic classifiers, e.g. [35, 36]. Thus, $W_{c}$ is substituted with $\frac{1}{K}\sum_{\boldsymbol{x}_i\in \boldsymbol{X}_c}f_{\Theta}(\boldsymbol{x}_i)$ . Furthermore, this approach preserves the symmetry property (invariance to the order of images) because an average operation is performed to generate the classifier. The illustration is depicted in Fig. 2 (b).
+
+Non-Linear Binary Classifier. This approach exploits the non-linearity of the decision boundaries. Relation networks use a non-linear binary classifier to calculate the similarity, as shown in Fig. 2 (c). Let $\mathbf{z} = (f_{\Theta}(\mathbf{x}_i), f_{\Theta}(\mathbf{q}))$ and let $M \in \mathbb{R}^{2D}$ be the learnable classifier (comparator). We can redefine Eq. 1 as $p(c|q) = \sigma(z^\top M)$ , where $\sigma$ is a non-linear function (e.g., sigmoid). Even though this classifier does not use a softmax function, it follows the principle of generating a classifier that learns a comparison of data-point pairs.
+
+# 4.2. Subspaces for Few-Shot Classification
+
+We propose to model points by subspaces $\{Z_i\}_{i=1}^N$ . Each subspace $Z_i$ has a basis represented by $\mathbb{R}^{D \times n} \ni B_i = [b_1, \dots, b_n]$ ; $n \leq D$ , with $B_i^\top B_i = \mathbf{I}_n$ . Our goal is to learn the feature extractor $\Theta$ , i.e., the mapping $f_\Theta$ , in such a way that the resulting feature space is suitable for subspace classifiers.
+
+A basis for the subspace representing class $c$ can be obtained by a matrix decomposition e.g., singular value decomposition (SVD). We emphasize that more involved techniques to obtain robust subspaces can potentially improve the algorithm. Nevertheless, our goal is to assess whether the concept of subspace modeling for few-shot learning is well justified, thus we opt for truncated SVD in our implementation.
+
+# 4.3. Subspace Classifiers
+
+High-order information is preferred over low-order information to improve the capability of the classifier. A subspace method can form a robust classifier. Below, we describe how to create a subspace and classify based on it. A new set of samples encoded by $\Theta$ can be expressed as $\tilde{\pmb{X}}_c = [f_{\Theta}(\pmb{x}_{c,1}) - \pmb{\mu}_c,\dots ,f_{\Theta}(\pmb{x}_{c,K}) - \pmb{\mu}_c]$ , where $\pmb{\mu}_c = \frac{1}{K}\sum_{\pmb{x}_i\in \pmb{X}_c}f_{\Theta}(\pmb{x}_i)$ . One classification method on a subspace is to measure the distance between a data point and its projection onto the subspace. To this end, a class-specific projection matrix $P_{c}$ is calculated from $\tilde{\pmb{X}}_c$ . A query $\pmb{q}_j$ can then be projected onto $P_{c}$ , and classification is performed based on the shortest distance (in the original space) from the query to its projection onto $P_{c}$ . Our general subspace classifier is defined as:
+
+$$
+d _ {c} (\boldsymbol {q}) = - \left\| (\mathbf {I} - \boldsymbol {M} _ {c}) \left(f _ {\Theta} (\boldsymbol {q}) - \boldsymbol {\mu} _ {c}\right) \right\| ^ {2}, \tag {2}
+$$
+
+where $M_{c} = P_{c}P_{c}^{\top}$ and $\mu_c$ can be interpreted as the offset between a point and the subspace. Thus, $P_{c}$ is a truncated version of a matrix $B_{c}$ whose columns form an orthonormal basis of the linear subspace spanned by $\mathbb{X}_c = \{f_\Theta (\pmb {x}_i); y_i = c\}$ (hence, $B_{c}^{\top}B_{c} = \mathbf{I}$ ).
+
+We define the probability of the query assigned to class $c$ using a softmax function as:
+
+$$
+p _ {c, q} = p (c | \boldsymbol {q}) = \frac {\exp \left(d _ {c} (\boldsymbol {q})\right)}{\sum_ {c ^ {\prime}} \exp \left(d _ {c ^ {\prime}} (\boldsymbol {q})\right)}. \tag {3}
+$$
+
+Now, we can minimize the negative log of Eq. 3 and update $\Theta$ . To train the whole framework, backpropagation through the SVD is required, which is available in modern deep learning packages such as PyTorch [37]. Hereafter, we call our proposed method deep subspace networks (DSN).
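+
+As a concrete illustration, the following PyTorch sketch computes the classifier of Eq. 2 and the logits fed to the softmax of Eq. 3; the shapes and names are ours and the official code referenced above may differ.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def subspace_logits(support, query, n_way, k_shot, subspace_dim):
+    """Sketch of the subspace classifier (Eq. 2-3). support: (N*K, D) features grouped
+    class by class, query: (M, D) features; returns (M, N) logits d_c(q)."""
+    D = support.shape[1]
+    support = support.view(n_way, k_shot, D)
+    logits = []
+    for c in range(n_way):
+        X_c = support[c]                          # (K, D) class-specific features
+        mu_c = X_c.mean(dim=0)                    # class mean
+        X_tilde = (X_c - mu_c).t()                # (D, K) centered features
+        # Truncated SVD yields an orthonormal basis P_c of the class subspace
+        U, _, _ = torch.linalg.svd(X_tilde, full_matrices=False)
+        P_c = U[:, :subspace_dim]                 # (D, n)
+        diff = query - mu_c                       # (M, D)
+        proj = diff @ P_c @ P_c.t()               # projection onto the subspace
+        logits.append(-((diff - proj) ** 2).sum(dim=1))   # Eq. 2
+    return torch.stack(logits, dim=1)             # softmax over these gives Eq. 3
+
+# Usage: loss = F.cross_entropy(subspace_logits(s, q, 5, 5, 4), query_labels)
+```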
+
+# 4.4. Discriminative Deep Subspace Networks
+
+Our goal in this part is to enhance DSN by learning representations that lead to more discriminative subspaces. In doing so, we make use of Grassmannian geometry [38] and propose to maximize the distance between subspaces during training. This can be achieved with ease using the projection metric on the Grassmannian, which enjoys several useful properties (see [39]). To be more specific, given the bases of two subspaces $P_i$ and $P_j$ , the projection metric is defined as:
+
+$$
+\delta_ {p} ^ {2} \left(\boldsymbol {P} _ {i}, \boldsymbol {P} _ {j}\right) = \left\| \boldsymbol {P} _ {i} \boldsymbol {P} _ {i} ^ {\top} - \boldsymbol {P} _ {j} \boldsymbol {P} _ {j} ^ {\top} \right\| _ {F} ^ {2} = 2 n - 2 \| \boldsymbol {P} _ {i} ^ {\top} \boldsymbol {P} _ {j} \| _ {F} ^ {2}. \tag {4}
+$$
+
+Maximizing the projection metric is achieved by minimizing $\| P_i^\top P_j\| _F^2$ , yielding the following loss:
+
+$$
+- \frac {1}{N M} \sum_ {c} \log \left(p _ {c, q}\right) + \lambda \sum_ {i \neq j} \| \boldsymbol {P} _ {i} ^ {\top} \boldsymbol {P} _ {j} \| _ {F} ^ {2}. \tag {5}
+$$
+
+Algorithm 1 explains the steps of training DSN. Our overall pipeline is depicted in Fig. 3.
+
+Algorithm 1 Train Deep Subspace Networks
+Input: Each episode $\mathcal{T}_i$ with $S$ and $Q$
+1: $\Theta_0\gets$ random initialization
+2: for $t$ in $\{\mathcal{T}_1,\dots,\mathcal{T}_{N_T}\}$ do
+3: for $c$ in $\{1,\dots,N\}$ do
+4: $\tilde{\boldsymbol{X}}_c\gets \boldsymbol {S}_c$
+5: Calculate the class mean $\boldsymbol{\mu}_c$
+6: Calculate mean refinement (MR) using Eq. 6
+7: Subtract the (refined) mean from $\tilde{\boldsymbol{X}}_c$
+8: $[\mathcal{U},\Sigma ,\mathcal{V}^{\top}]\gets$ Decompose $(\tilde{\boldsymbol{X}}_c)$
+9: $P_{c}\gets$ Truncate $\mathcal{U}_{1,\dots,n}$
+10: for $q$ in $Q$ do
+11: Compute $d_c(q)$ using Eq. 2
+12: end for
+13: end for
+14: Compute final loss $\mathcal{L}_t$ using Eq. 5
+15: Update $\Theta$ using $\nabla \mathcal{L}_t$
+16: end for
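+
+A compact sketch of one episode following Algorithm 1 and the loss of Eq. 5 is given below, assuming the `class_subspace_basis` and `dsn_logits` helpers sketched earlier; it is an illustrative reading of the algorithm rather than the reference code.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def dsn_episode_loss(support_feats_per_class, query_feats, query_labels,
+                     n_dim, lam=0.03):
+    """Cross-entropy of Eq. 3 plus the discriminative term of Eq. 5."""
+    bases, means = [], []
+    for feats in support_feats_per_class:            # one K x D tensor per class
+        means.append(feats.mean(dim=0))
+        bases.append(class_subspace_basis(feats, n_dim))
+    means = torch.stack(means, dim=0)
+
+    ce = F.cross_entropy(dsn_logits(query_feats, bases, means), query_labels)
+
+    # lambda * sum_{i != j} ||P_i^T P_j||_F^2; each unordered pair is counted
+    # once here, and the factor of two is absorbed into lambda.
+    disc = query_feats.new_zeros(())
+    for i in range(len(bases)):
+        for j in range(i + 1, len(bases)):
+            disc = disc + (bases[i].t() @ bases[j]).pow(2).sum()
+    return ce + lam * disc
+```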
+
+# 4.5. DSN for Semi-Supervised Few-Shot Learning
+
+In what follows, we extend the model developed in § 4.2 to address semi-supervised few-shot learning. In doing so, we need to take advantage of the unlabeled data to fit better subspaces to our data. We achieve this by refining the center of each class (mean-refinement) according to
+
+$$
+\tilde {\boldsymbol {\mu}} _ {c} = \frac {K \boldsymbol {\mu} _ {c} + \sum_ {i} m _ {i} f _ {\Theta} (\boldsymbol {r} _ {i})}{K + \sum_ {i} m _ {i}}, \tag {6}
+$$
+
+where,
+
+$$
+m _ {i} = \frac {\exp \left(- \left\| f _ {\Theta} (\boldsymbol {r} _ {i}) - \boldsymbol {\mu} _ {c} \right\| ^ {2}\right)}{\sum_ {c ^ {\prime}} \exp \left(- \left\| f _ {\Theta} (\boldsymbol {r} _ {i}) - \boldsymbol {\mu} _ {c ^ {\prime}} \right\| ^ {2}\right)}, \tag {7}
+$$
+
+where $m_{i}$ is the soft-assignment score for unlabeled samples. To cope with the presence of distractors, we use a fake class with zero mean as in [33]. We empirically observed that such a simple modification to the means can improve the results without the need to refine the matrix decomposition step. Moreover, this technique is also applicable to the transductive setting, where the query set is used as unlabeled data to refine the class means.
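+
+The mean refinement of Eqs. 6-7 can be sketched as follows (our own illustration; the zero-mean distractor class follows [33], and the function name is hypothetical).
+
+```python
+import torch
+
+def refine_means(means, unlabeled_feats, shots, use_distractor=True):
+    """Soft mean refinement of Eqs. 6-7.
+
+    means:           C x D class means mu_c from the labelled support set.
+    unlabeled_feats: U x D embeddings f_Theta(r_i) of unlabeled samples.
+    shots:           K, number of labelled samples per class.
+    """
+    centers = means
+    if use_distractor:  # fake zero-mean class absorbing distractors, as in [33]
+        centers = torch.cat([means, torch.zeros(1, means.size(1),
+                                                device=means.device)], dim=0)
+
+    dists = torch.cdist(unlabeled_feats, centers) ** 2   # U x C(+1)
+    m = torch.softmax(-dists, dim=1)                     # soft assignments, Eq. 7
+
+    c = means.size(0)
+    weighted = m[:, :c].t() @ unlabeled_feats            # C x D
+    denom = shots + m[:, :c].sum(dim=0).unsqueeze(1)     # C x 1
+    return (shots * means + weighted) / denom            # refined means, Eq. 6
+```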
+
+Remark 1. To the best of our knowledge, subspaces have been used to address FSL in [40, 41] and in our preliminary study [42]. A major difference between this work and TapNet [40] is that the projection in our method is class-specific, while TapNet makes use of task-specific projections. Our preliminary work [42], which precedes the work of Devos and Grossglauser [41] by $\sim 8$ months, shares the same spirit and can be considered a class-specific subspace method for FSL.
+
+# 5. Experiments
+
+Below we assess our method against state-of-the-art techniques on four challenging datasets, namely mini-ImageNet [8], tiered-ImageNet [33], CIFAR [43], and Open MIC [44]. Throughout the standard few-shot classification experiments we use several CNN backbones, namely a 4-convolutional-layer network (Conv-4) as implemented in [20] and ResNet-12 as employed in [45]. We follow the general practice of evaluating the model with $N$-way $K$-shot episodes and 15 query images. For the perturbation analysis and semi-supervised few-shot (SS-FSL) classification, Conv-4 is adopted. Results of deep subspace networks (DSN) are reported on all datasets.
+
+mini-ImageNet. The mini-ImageNet [8] contains 60,000 images from the ImageNet [46] dataset. Images in mini-ImageNet are of size $84 \times 84$ and represent 100 classes, with 64, 16, and 20 classes used for training, validation, and testing, respectively. Every class has 600 images, following the image list from [8]. Previous work (e.g., [47]) clearly shows that the CNN backbone affects the performance. Thus, we employ the 4-convolutional-layer network (Conv-4) and ResNet-12 to make fair comparisons. We also use mini-ImageNet for semi-supervised classification with $40\%$ of the data labeled.
+
+tiered-ImageNet. This dataset is also derived from ImageNet but contains a broader set of classes than mini-ImageNet. There are 351 classes from 20 different categories for training, 97 classes from 6 different categories for validation, and 160 classes from 8 different categories for testing. We use the same Conv-4 and ResNet-12 backbones and the same image size of $84 \times 84$ as on mini-ImageNet.
+
+| Model | Backbone | 1-shot | 5-shot |
+| Matching Nets [4] | Conv-4 | 43.56 ± 0.84 | 55.31 ± 0.73 |
+| MAML [7] | Conv-4 | 48.70 ± 1.84 | 63.11 ± 0.92 |
+| Reptile [48] | Conv-4 | 49.97 ± 0.32 | 65.99 ± 0.58 |
+| R2-D2 [49] | Conv-4 | 48.70 ± 0.60 | 65.50 ± 0.60 |
+| Prototypical Nets [20] | Conv-4 | 44.53 ± 0.76 | 65.77 ± 0.66 |
+| Relation Nets [14] | Conv-4 | 50.44 ± 0.82 | 65.32 ± 0.70 |
+| DSN | Conv-4 | 51.78 ± 0.96 | 68.99 ± 0.69 |
+| DSN-MR | Conv-4 | 55.88 ± 0.90 | 70.50 ± 0.68 |
+| Meta-Nets [27] | ResNet-12 | 57.10 ± 0.70 | 70.04 ± 0.63 |
+| SNAIL [10] | ResNet-12 | 55.71 ± 0.99 | 68.88 ± 0.92 |
+| AdaResNet [50] | ResNet-12 | 56.88 ± 0.62 | 71.94 ± 0.57 |
+| TADAM [51] | ResNet-12 | 58.50 ± 0.30 | 76.70 ± 0.30 |
+| Prototypical Nets [20] | ResNet-12 | 59.25 ± 0.64 | 75.60 ± 0.48 |
+| FEAT [30] | ResNet-12 | 61.72 ± 0.11 | 78.32 ± 0.16 |
+| CTM [52] | ResNet-18 | 62.05 ± 0.55 | 78.63 ± 0.06 |
+| Qiao et al.‡ [11] | WRN-28-10 | 59.60 ± 0.41 | 73.74 ± 0.19 |
+| LwoF [36] | WRN-28-10 | 60.06 ± 0.14 | 76.39 ± 0.11 |
+| LEO‡ [53] | WRN-28-10 | 61.76 ± 0.08 | 77.59 ± 0.12 |
+| wDAE-GNN‡ [54] | WRN-28-10 | 62.96 ± 0.15 | 78.85 ± 0.10 |
+| MetaOpt-SVM‡ [45] | ResNet-12 | 64.09 ± 0.62 | 80.00 ± 0.45 |
+| DSN | ResNet-12 | 62.64 ± 0.66 | 78.83 ± 0.45 |
+| DSN-MR | ResNet-12 | 64.60 ± 0.72 | 79.51 ± 0.50 |
+| DSN‡ | ResNet-12 | 65.38 ± 0.63 | 81.25 ± 0.45 |
+| DSN-MR‡ | ResNet-12 | 67.09 ± 0.68 | 81.65 ± 0.69 |
+
+Table 1: Comparison with the state of the art. 5-way few-shot classification results with $95\%$ confidence interval on mini-ImageNet dataset with various backbones for 1-shot and 5-shot. Methods with $\ddagger$ include training and validation sets for training the models.
+
+CIFAR-100. We evaluate on the CIFAR-FS split. All images in this dataset are $32 \times 32$ and each class has 600 samples. The CIFAR-FS dataset [49] is a few-shot learning benchmark containing all 100 classes from CIFAR-100 [43]. The classes are divided into 64, 16, and 20 for training, validation, and testing, respectively.
+
+Open MIC. This dataset [44] contains images from 10 museum exhibition spaces. There are 866 classes with 1-20 images per class. The images undergo various photometric and geometric distortions, and the classes are often fine-grained in nature, which makes the few-shot learning problem challenging. We use the protocols and baselines proposed in [55], but we exclude the easiest-to-classify classes so that testing with more than 1 shot is possible, and we rerun the SoSN [55] method accordingly. The dataset is divided into four subsets: $p1 = (shn + hon + clv)$, $p2 = (clk + gls + scl)$, $p3 = (sci + nat)$, $p4 = (shx + rlc)$. The protocol of [55] assumes evaluations on $p1 \to p2$, $p2 \to p3$, $p3 \to p4$, and $p4 \to p1$, where $x \to y$ denotes training on subset $x$ and testing on subset $y$. Training on one subset and testing on another constitutes a few-shot learning problem because the objects in every museum are distinct and appear against different backgrounds. Note that we eliminate classes with fewer than 3 examples and rerun all algorithms in our experiments.
+
+| Model | Backbone | 1-Shot | 5-Shot |
+| Prototypical Nets [20] | ResNet-12 | 61.74 ± 0.77 | 80.00 ± 0.55 |
+| CTM [52] | ResNet-18 | 64.78 ± 0.11 | 81.05 ± 0.52 |
+| LEO‡ [53] | WRN-28-10 | 66.33 ± 0.05 | 81.44 ± 0.09 |
+| MetaOpt-SVM‡ [45] | ResNet-12 | 65.81 ± 0.74 | 81.75 ± 0.53 |
+| DSN | ResNet-12 | 66.22 ± 0.75 | 82.79 ± 0.48 |
+| DSN-MR | ResNet-12 | 67.39 ± 0.82 | 82.85 ± 0.56 |
+| DSN‡ | ResNet-12 | 66.83 ± 0.73 | 83.31 ± 0.64 |
+| DSN-MR‡ | ResNet-12 | 68.44 ± 0.77 | 83.32 ± 0.66 |
+
+Table 2: 5-way few-shot classification results on tiered-ImageNet with $95\%$ confidence intervals. Methods with $\ddagger$ include training and validation sets for training the models.
+
+| Model | 1-Shot | 5-Shot |
+| Prototypical Nets [20] | 72.2 ± 0.7 | 83.5 ± 0.5 |
+| MetaOpt-RR [45] | 72.6 ± 0.7 | 84.3 ± 0.5 |
+| MetaOpt-SVM [45] | 72.0 ± 0.7 | 84.2 ± 0.5 |
+| MetaOpt-SVM‡ [45] | 72.8 ± 0.7 | 85.0 ± 0.5 |
+| DSN | 72.3 ± 0.8 | 85.1 ± 0.6 |
+| DSN-MR | 75.6 ± 0.9 | 86.2 ± 0.6 |
+| DSN‡ | 73.6 ± 0.9 | 86.3 ± 0.6 |
+| DSN-MR‡ | 78.0 ± 0.9 | 87.3 ± 0.6 |
+
+Table 3: 5-way few-shot classification results on the CIFAR-FS dataset using ResNet-12 with $95\%$ confidence intervals. Methods with $\ddagger$ include training and validation sets for training the models.
+
+# 5.1. Few-shot Learning
+
+We follow the general practice and evaluate our method on mini-ImageNet, tiered-ImageNet, CIFAR-FS, and Open MIC for few-shot learning and classification. The CNN architectures for mini-ImageNet are the same as those used in [47], with 4 convolutional layers (Conv-4) and ResNet-12 [56], while only ResNet-12 is used for CIFAR-FS and tiered-ImageNet. We use ADAM [57] for optimizing Conv-4 and SGD for optimizing ResNet-12. For a fair comparison, we follow similar experimental setups. The Conv-4 backbone is trained without data augmentation, following the other methods, and the learning rate is halved every 5K episodes. We train on 5-way 1-shot and 5-shot and apply the same classification task setup during testing for Conv-4. Note that Prototypical Nets [20] using Conv-4 were also trained and tested on 5-way. The training for ResNet-12 is performed with data augmentation; the learning rate is set to 0.1 initially and then adjusted to 0.003, 0.00032, and 0.00014 at epochs 12, 30, and 45, respectively. Moreover, the training strategy in [45] is used with 15-shot, 10 query images, and 8 episodes per batch. We cross-validated $\lambda$ on the validation set and set $\lambda = 0.03$ for all experiments. The accuracy is evaluated over 1000 episodes.
+
+| Model | 5-way 1-shot | 5-way 3-shot |
+| | p1 → p2 | p2 → p3 | p3 → p4 | p4 → p1 | Avg | p1 → p2 | p2 → p3 | p3 → p4 | p4 → p1 | Avg |
+| Matching Nets [4] | 69.40 | 57.30 | 76.35 | 53.68 | 64.18 | 84.10 | 74.20 | 87.47 | 70.83 | 79.15 |
+| Relation Nets [14] | 70.10 | 49.70 | 66.90 | 46.90 | 58.40 | 80.90 | 61.90 | 78.50 | 58.90 | 70.05 |
+| Prototypical Nets [20] | 66.33 | 52.03 | 74.28 | 54.30 | 61.74 | 81.60 | 73.55 | 83.55 | 69.15 | 76.96 |
+| SoSN [55] | 78.00 | 60.10 | 75.50 | 57.80 | 67.85 | 87.10 | 72.60 | 85.90 | 72.80 | 79.60 |
+| DSN | 75.87 | 62.13 | 78.25 | 62.11 | 69.59 | 87.93 | 75.78 | 88.42 | 76.59 | 82.18 |
+
+Table 4: Few-shot classification results using Conv-4 on the Open MIC dataset for 5-way 1-shot and 3-shot.
+
+| Dataset | Model | 1-shot | 5-shot |
+| | | w/o D | w/ D | w/o D | w/ D |
+| mini-ImageNet | PN-SSL, Non-Masked [33] | 50.09 ± 0.45 | 48.70 ± 0.32 | 64.59 ± 0.28 | 63.55 ± 0.28 |
+| | PN-SSL, Masked [33] | 50.41 ± 0.31 | 49.04 ± 0.31 | 64.39 ± 0.24 | 62.96 ± 0.14 |
+| | Semi DSN | 53.01 ± 0.82 | 51.01 ± 0.78 | 69.12 ± 0.62 | 67.12 ± 0.81 |
+| tiered-ImageNet | PN-SSL, Non-Masked [33] | 51.85 ± 0.25 | 51.36 ± 0.31 | 70.25 ± 0.31 | 68.32 ± 0.22 |
+| | PN-SSL, Masked [33] | 52.39 ± 0.44 | 51.38 ± 0.38 | 69.88 ± 0.20 | 69.08 ± 0.25 |
+| | Semi DSN | 54.06 ± 0.96 | 53.89 ± 0.83 | 72.07 ± 0.69 | 70.15 ± 0.81 |
+
+Table 5: 5-way semi-supervised few-shot classification results using Conv-4 on mini-ImageNet and tiered-ImageNet with $40\%$ and $10\%$ labeled data, respectively. We show the classification results with distractors (w/ D) and without distractors (w/o D).
+
+By design, our method needs more than one sample to identify the span of a subspace. Thus, for the 1-shot case, we generate an additional sample per class by horizontally flipping the support image.
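+
+For illustration only, such a flipped pair could be formed as follows (a hypothetical helper, not the authors' code):
+
+```python
+import torch
+
+def one_shot_pair(support_image: torch.Tensor) -> torch.Tensor:
+    """Pair a single support image (C x H x W) with its horizontal flip so
+    that a subspace can still be spanned in the 1-shot case."""
+    flipped = torch.flip(support_image, dims=[-1])   # flip along the width axis
+    return torch.stack([support_image, flipped], dim=0)
+```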
+
+Results. Below, we provide our results based on Conv-4 and ResNet-12 for a comprehensive comparison. Note that different backbones can affect few-shot learning performance. For mini-ImageNet, Table 1 shows that our method outperforms state-of-the-art methods across CNN backbones for both 5-way 1-shot and 5-way 5-shot. Our method further benefits from mean refinement (MR) using the query set. The gains are even larger with a deeper CNN with more parameters such as ResNet-12 [56]: our performance is $1.3\%$ better than MetaOpt-SVM [45] on 5-way 1-shot and 5-shot. Our method also consistently outperforms the other methods on the tiered-ImageNet and CIFAR-FS datasets (see Tables 2 and 3).
+
+On the Open MIC dataset (see Table 4), a similar trend can be observed. Our method outperforms state-of-the-art embedding methods for few-shot learning (i.e., Matching Nets [4], Prototypical Nets [20], and the Second-order Similarity Network (SoSN) [55]). The results show that our subspace representation is robust to the various photometric and geometric distortions posed by the Open MIC dataset, and that it can model the fine-grained concepts contained in this dataset well. Open MIC contains different exhibitions with different types of objects. Our model generalizes to the different subsets of objects in Open MIC with a gain of around $2\%$ over the other methods.
+
+# 5.2. Semi-Supervised Few-shot Learning
+
+For the experiments in this section, we used the 4-convolutional-layer embedding architecture as in [33] and followed the experimental setup proposed therein. The composition of the labeled support and query sets in each episode is similar to the few-shot classification task; however, an additional unlabeled set is provided in each episode. Our model was trained on 100K episodes on mini-ImageNet and tiered-ImageNet with $40\%$ and $10\%$ of labeled data, respectively. We used the ADAM solver [57] with a learning rate of 0.001 and weight decay, halving the rate every 10K episodes.
+
+The training was performed in the semi-supervised setting, for which the unlabeled set was also used. The unlabeled set was composed of samples from the classes in the support set and from distractor classes. The numbers of support classes and distractor classes were both set to five for training and testing. In the training stage, the number of samples in the unlabeled set was 50 (five samples from each class). In the testing stage, the unlabeled set consisted of 20 samples from each class. The query set had 20 samples per class for testing purposes. $\lambda$ was set to 0.03 and 0.005 for semi-supervised few-shot learning on mini-ImageNet and tiered-ImageNet, respectively.
+
+Results. The accuracy is evaluated over 600 episodes. The results are averaged over 10 random splits of labeled and unlabeled sets. The semi-supervised experiment detailed in Table 5 shows that our method improves the performance by exploiting unlabeled data. Our results are compared to proto-
+
+
+Figure 4: Experiments in the presence of outliers and additive noise on mini-ImageNet for 5-way 5-shot using Conv-4. The results of DSN, DSN without discriminative term, and prototypical networks are shown (see the legend). The first column shows the impact of introducing outliers among support samples (the classes of outliers are disjoint with the support classes of samples). The second, third and fourth columns show the impact of introducing noisy samples generated randomly according to the Gaussian distribution with random means and variance of $\sigma = \{0.15, 0.3, 0.4\}$ , respectively. The performance is measured w.r.t. the increasing number of outliers and noisy samples (x-axis).
+
+| Approach | 5-way 1-shot | 5-way 5-shot |
+| Without Disc. Term | 50.44 ± 0.88 | 67.22 ± 0.69 |
+| With Disc. Term | 51.78 ± 0.96 | 68.99 ± 0.69 |
+
+Table 6: Few-shot classification accuracy for DSN using Conv-4 with and without the discriminative term on mini-ImageNet.
+
+typical networks on semi-supervised learning (SS-FSL) with soft $K$ -means (non-masked) and masked $K$ -means (masked), as proposed by [33].
+
+# 5.3. Ablation Study
+
+Discriminative Term. Below, we perform an ablation study w.r.t. the discriminative term. The discriminative term in Eq. 5 encourages orthogonality between the subspaces of different classes. Table 6 reports its effect with the Conv-4 backbone. From the results we conclude that the network learns discriminative subspaces that are pushed away from each other, and that the discriminative term yields a consistent performance boost on few-shot classification.
+
+Subspace Dimensionality. In comparison to other models such as matching networks, prototypical networks, and relation networks, our DSN comes with an additional hyperparameter: the dimensionality of the subspaces (i.e., $n$). As a rule of thumb, we recommend using $n = K - 1$ to train and test our model. In fact, DSN exhibits a large degree of robustness to $n$, which, in turn, makes training of our model simple. We observe that choosing $n$ between 2 and $K - 1$ does not affect the performance significantly ($\pm 0.5\%$) on mini-ImageNet using the Conv-4 backbone.
+
+# 6. Discussion
+
+Robustness to Perturbations. One may argue whether noise poses problems in few-shot learning. However, some noise patterns might not be obvious when collecting the data, so the data cannot be guaranteed to be noise-free. We observed in our experiments that the performance of standard methods degrades significantly when a small degree of perturbation is added to the signal, as depicted in Fig. 4. However, our subspace-based model handles such noise well.
+
+Computational Complexity. The computational complexity of our DSN approach is $\mathcal{O}(\min (ND^2 K, NDK^2))$, where $K$, $N$, and $D$ are the number of shots, the number of ways, and the feature dimensionality, respectively. Compared to the complexity of the prototypical networks approach, i.e., $\mathcal{O}(NDK)$, our method is somewhat slower due to the SVD step. However, to address the complexity of SVD, fast approximate SVD algorithms can be used [58].
+
+# 7. Conclusions
+
+This paper presents DSN, a novel few-shot learning approach that models each class via an affine subspace. Empirically, we showed that the representations learned via DSN are expressive across a wide range of supervised and semi-supervised few-shot problems. In both settings the model is trained by meta-learning, and the test classes are never seen during training. The subspace model improves on existing models by a large margin owing to its ability to represent a few datapoints by a subspace.
+
+In DSN, each class classifier is represented by the subspace formed by all its samples, meaning that each class is modeled by the span of its training datapoints. We showed that DSN is robust to noise in few-shot learning. Our experiments demonstrated that a higher classification accuracy can be obtained by simply encouraging subspaces to be separated from each other.
+
+# References
+
+[1] B. Alexe, T. Deselaers, and V. Ferrari, "Measuring the objectness of image windows," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, pp. 2189-2202, 2012.
+[2] D. Li, C. Rodriguez, X. Yu, and H. Li, "Word-level deep sign language recognition from video: A new large-scale dataset and methods comparison," in The IEEE Winter Conference on Applications of Computer Vision, 2020, pp. 1459-1469.
+[3] B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum, “Human-level concept learning through probabilistic program induction,” Science, vol. 350, pp. 1332–1338, 2015.
+[4] O. Vinyals, C. Blundell, T. Lillicrap, K. Kavukcuoglu, and D. Wierstra, “Matching networks for one shot learning,” in Advances in Neural Information Processing Systems, 2016.
+[5] E. Triantafillou, R. Zemel, and R. Urtasun, “Few-shot learning through an information retrieval lens,” in Advances in Neural Information Processing Systems, 2017.
+[6] Z. Xu, L. Zhu, and Y. Yang, “Few-shot object recognition from machine-labeled web images,” in IEEE Conference on Computer Vision and Pattern Recognition, 2017.
+[7] C. Finn, P. Abbeel, and S. Levine, "Model-agnostic meta-learning for fast adaptation of deep networks," in International Conference on Machine Learning, 2017.
+[8] S. Ravi and H. Larochelle, "Optimization as a model for few-shot learning," in International Conference on Learning Representations, 2017.
+[9] Y.-X. Wang, R. Girshick, M. Herbert, and B. Hariharan, “Low-shot learning from imaginary data,” in IEEE Conference on Computer Vision and Pattern Recognition, 2018.
+[10] N. Mishra, M. Rohaninejad, X. Chen, and P. Abbeel, "A simple neural attentive meta-learner," in International Conference on Learning Representations, 2018.
+[11] S. Qiao, C. Liu, W. Shen, and A. L. Yuille, “Few-shot image recognition by predicting parameters from activations,” in IEEE Conference on Computer Vision and Pattern Recognition, 2018.
+[12] J. O. Neill and P. Buitelaar, "Few shot transfer learning between word relatedness and similarity tasks using a gated recurrent siamese network," in AAAI Conference on Artificial Intelligence, 2018.
+[13] G. Koch, R. Zemel, and R. Salakhutdinov, "Siamese neural networks for one-shot image recognition," in International Conference on Machine Learning Deep Learning 2015 Workshop, 2015.
+[14] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. Torr, and T. M. Hospedales, “Learning to compare: Relation network for few-shot learning,” in IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 1199–1208.
+[15] L. Fei-Fei, R. Fergus, and P. Perona, “One-shot learning of object categories,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, pp. 594–611, 2006.
+
+[16] M. A. Turk and A. P. Pentland, “Face recognition using eigenfaces,” in Proceedings. 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1991, pp. 586–591.
+[17] R. Basri and D. W. Jacobs, “Lambertian reflectance and linear subspaces,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, pp. 218–233, 2003.
+[18] P. Zhou, Y. Hou, and J. Feng, “Deep adversarial subspace clustering,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 1596–1604.
+[19] J. Wang and A. Cherian, “Gods: Generalized one-class discriminative subspaces for anomaly detection,” in The IEEE International Conference on Computer Vision, October 2019.
+[20] J. Snell, K. Swersky, and Z. Richard, “Prototypical networks for few-shot learning,” in Advances in Neural Information Processing Systems, 2017.
+[21] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona, “Caltech-UCSD Birds 200,” California Institute of Technology, Tech. Rep. CNS-TR-2010-001, 2010.
+[22] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," in Advances in neural information processing systems, 2012.
+[23] M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Poczos, R. R. Salakhutdinov, and A. J. Smola, "Deep sets," in Advances in neural information processing systems, 2017, pp. 3391-3401.
+[24] A. Torralba, K. P. Murphy, and W. T. Freeman, "Sharing visual features for multiclass and multiview object detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 5, pp. 854-869, 2007.
+[25] A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap, “Meta-learning with memory-augmented neural networks,” in International conference on machine learning, 2016, pp. 1842–1850.
+[26] A. Antoniou, H. Edwards, and A. Storkey, "How to train your maml," in International Conference on Learning Representations, 2019.
+[27] T. Munkhdalai and H. Yu, “Meta networks,” in Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR.org, 2017, pp. 2554–2563.
+[28] Y. Shi, L. Liu, X. Yu, and H. Li, "Spatial-aware feature aggregation for image based cross-view geo-localization," in Advances in Neural Information Processing Systems, 2019, pp. 10090-10100.
+[29] P. Fang, J. Zhou, S. K. Roy, L. Petersson, and M. Harandi, "Bilinear attention networks for person retrieval," in Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 8030-8039.
+[30] H.-J. Ye, H. Hu, D.-C. Zhan, and F. Sha, “Learning embedding adaptation for few-shot learning,” arXiv preprint arXiv:1812.03664, 2018.
+[31] R. Hou, H. Chang, M. Bingpeng, S. Shan, and X. Chen, "Cross attention network for few-shot classification," in Advances in Neural Information Processing Systems, 2019, pp. 4005-4016.
+
+[32] V. Garcia and J. Bruna, “Few-shot learning with graph neural networks,” in International Conference on Learning Representations, 2018.
+[33] M. Ren, E. Triantafillou, S. Ravi, J. Snell, K. Swersky, J. B. Tenenbaum, H. Larochelle, and R. S. Zemel, “Meta-learning for semi-supervised few-shot classification,” in International Conference on Learning Representations, 2018.
+[34] R. Boney and A. Ilin, “Semi-supervised few-shot learning with prototypical networks,” arXiv preprint arXiv:1711.10856, 2017.
+[35] Y.-X. Wang, R. Girshick, M. Hebert, and B. Hariharan, “Low-shot learning from imaginary data,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018.
+[36] S. Gidaris and N. Komodakis, "Dynamic few-shot visual learning without forgetting," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4367-4375.
+[37] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. Devito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, "Automatic differentiation in pytorch," in NIPS Autodiff Workshop, 2017.
+[38] A. Edelman, T. A. Arias, and S. T. Smith, “The geometry of algorithms with orthogonality constraints,” SIAM journal on Matrix Analysis and Applications, vol. 20, pp. 303–353, 1998.
+[39] M. Harandi, R. Hartley, C. Shen, B. Lovell, and C. Sanderson, "Extrinsic methods for coding and dictionary learning on grassmann manifolds," International Journal of Computer Vision, vol. 114, pp. 113-136, 2015.
+[40] S. W. Yoon, J. Seo, and J. Moon, “Tapnet: Neural network augmented with task-adaptive projection for few-shot learning,” in International Conference on Machine Learning, 2019.
+[41] A. Devos and M. Grossglauser, "Subspace networks for few-shot classification," arXiv:1905.13613, 2019.
+[42] C. Simon, P. Koniusz, and M. Harandi, “Projective subspace networks for few-shot learning,” OpenReview, https://openreview.net/forum?id=rkzfuiA9F7, 2018.
+[43] A. Krizhevsky et al., "Learning multiple layers of features from tiny images," CiteSeer, Tech. Rep., 2009.
+[44] P. Koniusz, Y. Tas, H. Zhang, M. Harandi, F. Porikli, and R. Zhang, “Museum exhibit identification challenge for the supervised domain adaptation and beyond,” in The European Conference on Computer Vision, 2018.
+[45] K. Lee, S. Maji, A. Ravichandran, and S. Soatto, “Meta-learning with differentiable convex optimization,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 10657–10665.
+[46] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al., "Imagenet large scale visual recognition challenge," International Journal of Computer Vision, vol. 115, pp. 211-252, 2015.
+
+[47] W.-Y. Chen, Y.-C. Liu, Z. Kira, Y.-C. F. Wang, and J.-B. Huang, “A closer look at few-shot classification,” in International Conference on Learning Representations, 2019.
+[48] A. Nichol, J. Achiam, and J. Schulman, “On first-order meta-learning algorithms,” arXiv preprint arXiv:1803.02999, 2018.
+[49] L. Bertinetto, J. F. Henriques, P. Torr, and A. Vedaldi, “Meta-learning with differentiable closed-form solvers,” in International Conference on Learning Representations, 2019.
+[50] T. Munkhdalai, X. Yuan, S. Mehri, and A. Trischler, "Rapid adaptation with conditionally shifted neurons," in International Conference on Machine Learning, 2018, pp. 3661-3670.
+[51] B. Oreshkin, P. Rodríguez López, and A. Lacoste, “Tadam: Task dependent adaptive metric for improved few-shot learning,” in Advances in Neural Information Processing Systems, 2018, pp. 719–728.
+[52] H. Li, D. Eigen, S. Dodge, M. Zeiler, and X. Wang, "Finding task-relevant features for few-shot learning by category traversal," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 1-10.
+[53] A. A. Rusu, D. Rao, J. Sygnowski, O. Vinyals, R. Pascanu, S. Osindero, and R. Hadsell, "Meta-learning with latent embedding optimization," in International Conference on Learning Representations, 2019.
+[54] S. Gidaris and N. Komodakis, "Generating classification weights with GNN denoising autoencoders for few-shot learning," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.
+[55] H. Zhang and P. Koniusz, "Power normalizing second-order similarity network for few-shot learning," in Winter Conference on Applications of Computer Vision, 2019.
+[56] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in IEEE Conference on Computer Vision and Pattern Recognition, 2016.
+[57] D. P. Kingma and J. L. Ba, "Adam: A method for stochastic optimization," in International Conference on Learning Representations, 2015.
+[58] A. K. Menon and C. Elkan, "Fast algorithms for approximating the singular value decomposition," ACM Trans. Knowl. Discov. Data, vol. 5, pp. 13:1-13:36, 2011.
\ No newline at end of file
diff --git a/adaptivesubspacesforfewshotlearning/images.zip b/adaptivesubspacesforfewshotlearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..f3c5f27586c600205107d8502f2af5997907360e
--- /dev/null
+++ b/adaptivesubspacesforfewshotlearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:149c41cb0bd0c67a93153c63ffbe9e2a6f9a844f5903989447a94d699887c5f3
+size 495899
diff --git a/adaptivesubspacesforfewshotlearning/layout.json b/adaptivesubspacesforfewshotlearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..952fbc0033120efdbd5bc3795806ac4eb05493d1
--- /dev/null
+++ b/adaptivesubspacesforfewshotlearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:221ca5fe3e954ef3397c96953ac4e94e25db8472f2e6ead4afa71748763871cd
+size 429143
diff --git a/adclusteraugmenteddiscriminativeclusteringfordomainadaptivepersonreidentification/8e5a9ff9-345d-4cfb-83d4-dde41771662c_content_list.json b/adclusteraugmenteddiscriminativeclusteringfordomainadaptivepersonreidentification/8e5a9ff9-345d-4cfb-83d4-dde41771662c_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..bac1869a577972012fd1e4aeca530247af226c34
--- /dev/null
+++ b/adclusteraugmenteddiscriminativeclusteringfordomainadaptivepersonreidentification/8e5a9ff9-345d-4cfb-83d4-dde41771662c_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fa573df9d209fba21406742885f8aa081593662718e50b934e54f2469217488f
+size 80008
diff --git a/adclusteraugmenteddiscriminativeclusteringfordomainadaptivepersonreidentification/8e5a9ff9-345d-4cfb-83d4-dde41771662c_model.json b/adclusteraugmenteddiscriminativeclusteringfordomainadaptivepersonreidentification/8e5a9ff9-345d-4cfb-83d4-dde41771662c_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..1220939e255dd29b7d55bb46868ebb9ab4b78b90
--- /dev/null
+++ b/adclusteraugmenteddiscriminativeclusteringfordomainadaptivepersonreidentification/8e5a9ff9-345d-4cfb-83d4-dde41771662c_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:97cbafd28740406fb108b38060dc4c55d9f63e850a5ba1f49b09a8cd3349141c
+size 101359
diff --git a/adclusteraugmenteddiscriminativeclusteringfordomainadaptivepersonreidentification/8e5a9ff9-345d-4cfb-83d4-dde41771662c_origin.pdf b/adclusteraugmenteddiscriminativeclusteringfordomainadaptivepersonreidentification/8e5a9ff9-345d-4cfb-83d4-dde41771662c_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..781d57ad3d19b6cba81f6c17a924112ddc76b281
--- /dev/null
+++ b/adclusteraugmenteddiscriminativeclusteringfordomainadaptivepersonreidentification/8e5a9ff9-345d-4cfb-83d4-dde41771662c_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6580ba78383c50bc1fd9f40e179aa0b94e894cff78198e9a693cb00e3392d58b
+size 2198163
diff --git a/adclusteraugmenteddiscriminativeclusteringfordomainadaptivepersonreidentification/full.md b/adclusteraugmenteddiscriminativeclusteringfordomainadaptivepersonreidentification/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..b9ccf752cbf81f842744f176407edf22efab7fc3
--- /dev/null
+++ b/adclusteraugmenteddiscriminativeclusteringfordomainadaptivepersonreidentification/full.md
@@ -0,0 +1,365 @@
+# AD-Cluster: Augmented Discriminative Clustering for Domain Adaptive Person Re-identification
+
+Yunpeng Zhai $^{1,2}$ , Shijian Lu $^{3}$ , Qixiang Ye $^{4,6}$ , Xuebo Shan $^{1,2}$ , Jie Chen $^{1,6}$ , Rongrong Ji $^{5,6}$ , Yonghong Tian $^{1,2,6*}$
+
+$^{1}$ School of Electronic and Computer Engineering, Peking University, China
+
+$^{2}$ NELVT, School of EE&CS, Peking University, Beijing, China
+
+$^{3}$ Nanyang Technological University, Singapore, $^{4}$ University of Chinese Academy of Sciences, China
+
+$^{5}$ Xiamen University, China, $^{6}$ Peng Cheng Laboratory, China
+
+{ypzhai, shanxb, yhtian}@pku.edu.cn, shijian.lu@ntu.edu.sg, qxye@ucas.ac.cn,
+
+chenj@pcl.ac.cn, rrji@xmu.edu.cn
+
+# Abstract
+
+Domain adaptive person re-identification (re-ID) is a challenging task, especially when person identities in target domains are unknown. Existing methods attempt to address this challenge by transferring image styles or aligning feature distributions across domains, whereas the rich unlabeled samples in target domains are not sufficiently exploited. This paper presents a novel augmented discriminative clustering (AD-Cluster) technique that estimates and augments person clusters in target domains and enforces the discrimination ability of re-ID models with the augmented clusters. AD-Cluster is trained by iterative density-based clustering, adaptive sample augmentation, and discriminative feature learning. It learns an image generator and a feature encoder which aim to maximize the intra-cluster diversity in the sample space and minimize the intra-cluster distance in the feature space in an adversarial min-max manner. Finally, AD-Cluster increases the diversity of sample clusters and improves the discrimination capability of re-ID models greatly. Extensive experiments over Market-1501 and DukeMTMC-reID show that AD-Cluster outperforms the state-of-the-art with large margins.
+
+# 1. Introduction
+
+Person re-identification (re-ID) aims to match persons in an image gallery collected from non-overlapping camera networks. Despite the impressive progress of supervised methods in person re-ID [5] [50], models trained in one domain often fail to generalize well to others due to the change of camera configurations, lighting conditions, person views,
+
+
+Figure 1. AD-Cluster alternately trains an image generator and a feature encoder, which respectively maximize intra-cluster distance in the sample space (i.e., increase sample diversity) and minimize intra-cluster distance in the feature space. This enforces the discrimination ability of re-ID models in an adversarial min-max manner. (Best viewed in color)
+
+etc. Domain adaptive re-ID methods that can work across domains remain a very open research challenge.
+
+To implement domain adaptive re-ID, unsupervised domain adaptation (UDA) methods have been widely explored [44], [26], [27], [10], [45], [61], [32], [14], [52], [29]. One major line of UDA methods attempts to align the feature distributions of source and target domains [44], [26]. Another line of methods utilizes adversarial generative models as a style transformer to convert pedestrian images (with identity annotations) of a source domain into a target domain [27], [10], [45], [32]. The style-transferred images are then used to train a re-ID model in the target domain. Many UDA methods preserve discriminative information across domains or camera styles, but they largely ignore the unlabeled samples and thus the underlying sample distributions in target domains. Recent approaches [14], [47] alleviate this problem by predicting pseudo-labels in target domains. They leverage the cluster (pseudo) labels for model fine-tuning directly but are often susceptible to noise and hard samples. This prevents them from maximizing model discrimination capacity in target domains.
+
+In this paper, we propose an innovative augmented discriminative clustering (AD-Cluster) technique for domain adaptive person re-ID. AD-Cluster aims to maximize model discrimination capacity in the target domain by alternating discriminative clustering and sample generation, as illustrated in Fig. 1. Specifically, density-based clustering first predicts sample clusters in the target domain, where sample features are extracted by a re-ID model pre-trained in the source domain. AD-Cluster then learns through two iterative processes. First, an image generator keeps translating the clustered images to other cameras to augment the training samples while retaining the original pseudo identity labels (i.e., cluster labels). Second, a feature encoder keeps learning to maximize the inter-cluster distance while minimizing the intra-cluster distance in feature space. The image generator and the feature encoder thus compete in an adversarial min-max manner, iteratively estimating cluster labels and optimizing the re-ID model. Finally, AD-Cluster strengthens the discrimination ability of re-ID models through this adversarial learning and optimization.
+
+The main contributions of this paper can be summarized in three aspects. First, it proposes a novel discriminative clustering method that addresses domain adaptive person re-ID by density-based clustering, adaptive sample augmentation, and discriminative feature learning. Second, it designs an adversarial min-max optimization strategy that increases the intra-cluster diversity and enforces the discrimination ability of re-ID models in target domains simultaneously. Third, it achieves a significant performance gain over the state-of-the-art on two widely used re-ID datasets: Market-1501 and DukeMTMC-reID.
+
+# 2. Related Works
+
+While person re-ID has been extensively investigated from various perspectives, we mainly review the domain adaptive person re-ID approaches, which are largely driven by unsupervised domain adaptation (UDA) methods.
+
+# 2.1. Unsupervised Domain Adaptation (UDA)
+
+Domain alignment. UDA defines a learning problem where source domains are fully labeled while sample labels in target domains are totally unknown. To learn discriminative models in target domains, early methods focus on learning feature/sample mappings between source and target domains [38], [42]. As a representative method, correlation alignment (CORAL) [42] minimizes domain shift by aligning the mean and covariance of the source and target distributions. Recent methods [22], [2], [28] attempt to reduce the domain shift by using generative adversarial networks (GANs) to learn a pixel-level transformation. The most representative, CyCADA [22], transfers samples across domains at both the pixel and feature levels.
+
+Domain-invariant features. The second line of UDA methods focuses on finding domain-invariant feature spaces [33], [31], [16], [30], [43], [17], [1]. To fulfill this purpose, Long et al. [30], [19] proposed the Maximum Mean Discrepancy (MMD), which maps features of both domains into the same Hilbert space. Ganin et al. [17] and Ajakan et al. [1] designed domain confusion loss to learn domain-invariant features. Saito et al. [39] proposed aligning distributions of source and target domains by maximizing the discrepancy of classifiers' outputs.
+
+Pseudo-label prediction. Another line of UDA methods learns representations in target domains by using predicted pseudo-labels. In general, this approach uses an alternating estimation strategy: pseudo-labels are predicted with the current model, and the model is then optimized using the predicted pseudo-labels [4], [37], [40], [54]. In the deep learning era, clustering losses have been designed for CNNs so that features, image clusters, and re-ID models are learned jointly in an alternating manner [8], [51], [49], [11], [24], [3], [18].
+
+# 2.2. UDA for Person re-ID
+
+To implement domain adaptive person re-ID, researchers largely referred to the above reviewed UDA methods by incorporating the characteristics of person images.
+
+Domain alignment. In [26], Lin et al. proposed minimizing the distribution variation of the source's and the target's mid-level features based on the Maximum Mean Discrepancy (MMD) distance. Wang et al. [44] utilized additional attribute annotations to align the feature distributions of source and target domains in a common space. Other works enforced camera invariance by learning consistent pairwise similarity distributions [46] or by reducing the discrepancy between both domains and cameras [35].
+
+GAN-based methods have been extensively explored for domain adaptive person re-ID [32], [61], [45], [10], [27]. HHL [61] simultaneously enforced camera invariance and domain connectedness to improve the generalization ability of models on the target set. PTGAN [45], SPGAN [10],
+
+
+Figure 2. The flowchart of the proposed AD-Cluster: AD-Cluster consists of three components, including density-based clustering, adaptive sample augmentation, and discriminative feature learning. Density-based clustering estimates sample pseudo-labels in the target domain. Adaptive sample augmentation maximizes the sample diversity across cameras while retaining the original pseudo-labels. Discriminative learning drives the feature extractor to minimize the intra-cluster distance. $L_{div}$ denotes the diversity loss and $L_{tri}$ indicates the triplet loss. (Best viewed in color)
+
+ATNet [27], CR-GAN [6] and PDA-Net [23] transferred images with identity labels from source into target domains to learn discriminative models.
+
+By aligning features and/or appearance, the above methods can preserve the discriminative information of source domains well; however, they largely ignore the unlabeled samples in target domains, which hinders them from maximizing the model's discrimination capacity.
+
+Pseudo-label prediction. Recently, the problem of how to leverage the large number of unlabeled samples in target domains has attracted increasing attention [14], [52], [29], [47], [48], [62]. Clustering [14], [57], [55], [15] and graph matching [52] methods have been explored to predict pseudo-labels in target domains for discriminative model learning. Reciprocal search [29] and exemplar-invariance approaches [48] were proposed to refine pseudo-labels, taking camera invariance into account concurrently.
+
+Existing approaches have explored cluster distributions in the target domain. However, they still face the challenge of precisely predicting the labels of hard samples. Hard samples are crucial for a discriminative re-ID model, but they often confuse clustering algorithms. We address this issue by iteratively generating and including diverse and representative samples in the target domain, which effectively strengthens the discrimination capability of re-ID models.
+
+# 3. The Proposed Approach
+
+Under the context of unsupervised domain adaptation (UDA) for person re-ID, we have a fully labeled source domain $\{X_s,Y_s\}$ that contains $N_{s}$ person images of $M$ identities in total. $X_{s}$ and $Y_{s}$ denote the sample images and identities in the source domain, respectively, where each image $x_{s,i}$ is associated with an identity $y_{s,i}$. In addition, we have an unlabeled target domain $\{X_t\}$ that contains $N_{t}$ person images. The identities of images in the target domain are unavailable. The goal of AD-Cluster is to learn a re-ID model that generalizes well in the target domain by leveraging labeled samples in the source domain and unlabeled samples in the target domain.
+
+# 3.1. Overview
+
+AD-Cluster consists of two networks: a CNN as the feature encoder $f$ and a Generative Adversarial Network (GAN) as the image generator $g$, as shown in Fig. 2. The encoder $f$ is first trained using labeled samples in the source domain with cross-entropy loss and triplet loss [21]. In the target domain, unlabelled samples are represented by features extracted by $f$; density-based clustering groups them into clusters and uses the cluster IDs as the pseudo-labels of the clustered samples. With each camera treated as a new domain with a different style, $g$ translates each sample of the target domain to the other cameras, which generates identity-preserving samples with increased diversity. After that, all samples in the target domain, together with the generated ones, are fed to re-train the feature encoder $f$. The generator $g$ and encoder $f$ thus learn in an adversarial min-max manner iteratively, where $g$ keeps generating identity-preserving samples to maximize the intra-cluster variations in the sample space whereas $f$ learns discriminative representations to minimize the intra-cluster variations in the feature space, as illustrated in Fig. 1.
+
+# 3.2. UDA Procedure
+
+Supervised learning in source domain: In the source domain, the CNN-based person re-ID model is trained by optimizing classification and ranking loss [21]:
+
+$$
+\mathcal {L} _ {s r c} = \mathcal {L} _ {c l s} + \mathcal {L} _ {t r i}. \tag {1}
+$$
+
+For a batch of samples, the classification loss is defined by
+
+$$
+\mathcal {L} _ {c l s} = - \frac {1}{n _ {s}} \sum_ {i = 1} ^ {n _ {s}} \log p \left(y _ {s, i} \mid x _ {s, i}\right), \tag {2}
+$$
+
+where $n_s$ , $i$ and $s$ denote the number of images in a batch, image index and source domain, respectively. $p(y_{s,i}|x_{s,i})$ is the predicted probability of image $x_{s,i}$ belonging to $y_{s,i}$ .
+
+The ranking triplet loss is defined as
+
+$$
+\begin{array}{l} \mathcal {L} _ {t r i} = \sum_ {i = 1} ^ {n _ {s}} [ m + \| f (x _ {s, i}) - f (x _ {s, i ^ {+}}) \| _ {2} \tag {3} \\ - \| f \left(x _ {s, i}\right) - f \left(x _ {s, i ^ {-}}\right) \| _ {2} ], \\ \end{array}
+$$
+
+where $x_{s,i^{+}}$ denotes the samples belonging to the same person as $x_{s,i}$, $x_{s,i^{-}}$ denotes the samples belonging to different persons from $x_{s,i}$, and $m$ is a margin parameter [21].
+
+Density-based clustering in target domain: In each learning iteration, density-based clustering [12] is employed in the target domain for pseudo-label prediction. The clustering procedure includes three steps: (1) Extracting convolutional features for all person images. (2) Computing a distance matrix with k-reciprocal encoding [60] for all training samples and then performing density-based clustering to assign samples into different groups. (3) Assigning pseudo-labels $Y_{t}^{\prime}$ to the training samples $X_{t}$ according to the groups they belong to.
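+
+A minimal sketch of steps (2)-(3) is given below, assuming the re-ranked distance matrix of [60] has already been computed elsewhere; the clustering hyper-parameters are placeholders, not the values used in the paper.
+
+```python
+import numpy as np
+from sklearn.cluster import DBSCAN
+
+def assign_pseudo_labels(dist_matrix: np.ndarray, eps: float, min_samples: int = 4):
+    """Cluster target-domain samples on a precomputed (re-ranked) distance
+    matrix and use the cluster IDs as pseudo-labels; -1 marks unclustered
+    samples, which can simply be discarded from training."""
+    labels = DBSCAN(eps=eps, min_samples=min_samples,
+                    metric="precomputed").fit_predict(dist_matrix)
+    return labels
+```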
+
+Adaptive sample augmentation across cameras: Due to the domain gap, the pseudo-labels predicted by density-based clustering suffer from noise. In addition, the limited number of training samples in the target domain often leads to low diversity of samples in each cluster. These two factors make it difficult to learn discriminative representations in the target domain.
+
+To address these issues, we propose to augment samples in the target domain with a GAN to increase sample diversity. The GAN used should possess the following two properties: (1) it generates new person images from existing ones while preserving the original identities; (2) it provides additional invariance to factors such as camera configurations, lighting conditions, and person views.
+
+To fulfill these purposes, we employ StarGAN [7] to augment person images, which preserves person identities while generating new images in multiple camera styles. The image generation procedure is rooted in the
+
+Figure 3. The proposed adversarial min-max learning: With a fixed feature encoder $f$ , the generator $g$ learns to generate samples that maximize the intra-cluster distance. With a fixed generator $g$ , the feature encoder $f$ learns to minimize the intra-cluster distance and maximize the inter-cluster distance under the guidance of the triplet loss.
+
+results of density-based clustering. Suppose there are $K$ cameras in total in the target domain. A StarGAN model is first trained to enable image-to-image translation between each camera pair. Using the learned StarGAN model, for an image $x_{t,i}$ with pseudo-label $y_{t,i}$, we generate $K$ augmented images $\{x_{t,i}^{(1)}, y_{t,i}\}, \{x_{t,i}^{(2)}, y_{t,i}\}, \ldots, \{x_{t,i}^{(K)}, y_{t,i}\}$, which share the pseudo-label $y_{t,i}$ with $x_{t,i}$ and have styles similar to the images of cameras 1, 2, ..., $K$, respectively. In this way, the number of samples in each cluster increases by a factor of $K - 1$. The augmented images, together with the original images in the target domain, are used for discriminative feature learning according to Eq. 3.
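+
+The augmentation step can be pictured as the following loop; the `generator(image, cam_id)` interface is a hypothetical stand-in for the trained StarGAN translator.
+
+```python
+import torch
+
+@torch.no_grad()
+def augment_across_cameras(generator, image, pseudo_label, num_cameras):
+    """Translate one target-domain image into every camera style while
+    keeping its pseudo-label, yielding (augmented_image, pseudo_label) pairs."""
+    augmented = []
+    for cam_id in range(num_cameras):
+        styled = generator(image, cam_id)   # camera-style transfer
+        augmented.append((styled, pseudo_label))
+    return augmented
+```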
+
+# 3.3. Min-Max Optimization
+
+Although the adaptive sample augmentation enforces the discrimination ability of re-ID models, the sample generation procedure is completely independent of the clustering and feature learning, which could lead to insufficient sample diversity across cameras.
+
+To fuse the adaptive data augmentation with discriminative feature learning, we propose an adversarial min-max optimization strategy as illustrated in Fig. 3. Specifically, we alternately train an image generator and a feature encoder that, for each mini-batch, maximize sample diversity and minimize intra-cluster distance, respectively.
+
+Max-Step: StarGAN [7] is employed as an image generator $(g)$ for a given feature encoder $(f)$ . In this procedure, the summation of Euclidean distances between samples and their cluster centers is defined as the cluster diversity $\mathcal{D}_{div}$ . For
+
+each sample, the diversity is defined as
+
+$$
+\mathcal {D} _ {d i v} \left(x _ {t, i}\right) = \left\| f \left(g \left(x _ {t, i}\right)\right) - \frac {1}{\sum_ {j = 1} ^ {n _ {t}} a (i , j)} \sum_ {j = 1} ^ {n _ {t}} a (i, j) f \left(x _ {t, j}\right) \right\| _ {2}, \tag {4}
+$$
+
+where $a(i,j)$ indicates whether sample $x_{t,i}$ and $x_{t,j}$ belong to the same person or not. $a(i,j) = 1$ when $y_{t,i} = y_{t,j}$ , otherwise $a(i,j) = 0$ .
+
+For a batch of samples, the diversity loss is defined as
+
+$$
+\mathcal {L} _ {d i v} = \frac {1}{n _ {t}} \sum_ {i = 1} ^ {n _ {t}} e ^ {- \lambda \mathcal {D} _ {d i v} \left(x _ {t, i}\right)}, \tag {5}
+$$
+
+where $\lambda$ is a hyper-parameter. We use a negative exponential function to prevent $\mathcal{D}_{div}$ from growing too large, so as to preserve the identity of the augmented person images. According to Eq. 4 and Eq. 5, maximizing the sample diversity $\mathcal{D}_{div}$ in a cluster is equivalent to minimizing the loss:
+
+$$
+\underset {g} {\arg \max } \mathcal {D} _ {d i v} \Leftrightarrow \underset {g} {\arg \min } \mathcal {L} _ {d i v}. \tag {6}
+$$
+
+$\mathcal{L}_{div}$ is combined with the StarGAN loss to optimize the generator $g$ while augmenting samples.
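+
+A sketch of the diversity loss of Eqs. 4-5 used in the Max-step is shown below (our own reading; the tensor names are assumptions).
+
+```python
+import torch
+
+def diversity_loss(gen_feats, orig_feats, pseudo_labels, lam):
+    """L_div of Eq. 5, computed over a mini-batch.
+
+    gen_feats:     n x D features f(g(x_i)) of the style-translated images.
+    orig_feats:    n x D features f(x_i) used to form cluster centers.
+    pseudo_labels: length-n cluster IDs y_{t,i}.
+    """
+    same = (pseudo_labels.unsqueeze(0) == pseudo_labels.unsqueeze(1)).float()  # a(i, j)
+    centers = (same @ orig_feats) / same.sum(dim=1, keepdim=True)  # per-sample cluster mean
+    d_div = torch.norm(gen_feats - centers, dim=1)                 # Eq. 4
+    return torch.exp(-lam * d_div).mean()                          # Eq. 5, minimised over g
+```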
+
+Min-Step: Given a fixed generator $g$ , the feature encoder $f$ learns to minimize the intra-cluster distance while maximizing inter-cluster distance in feature space under the constraint of triplet loss, which is defined as
+
+$$
+\begin{array}{l} \mathcal {L} _ {t r i} = \sum_ {i = 1} ^ {n _ {t}} [ m + \| f (x _ {t, i}) - f (x _ {t, i ^ {+}}) \| _ {2} \tag {7} \\ \left. - \left\| f \left(x _ {t, i}\right) - f \left(x _ {t, i ^ {-}}\right) \right\| _ {2} \right], \\ \end{array}
+$$
+
+where $x_{t,i^{+}}$ denotes the samples belonging to the same cluster as $x_{t,i}$, $x_{t,i^{-}}$ denotes the samples belonging to different clusters from $x_{t,i}$, and $m$ is a margin parameter. Specifically, within a mini-batch of both original and generated images, we choose all positive samples and the hardest negative sample to construct the triplets for each anchor. The objective function is defined by
+
+$$
+\underset {f} {\arg \min } \mathcal {D} _ {d i v} \Leftrightarrow \underset {f} {\arg \min } \mathcal {L} _ {t r i}. \tag {8}
+$$
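+
+For the Min-step, the batch-hard triplet construction described above (all positives, hardest negative per anchor) could be sketched as follows; the usual hinge $[\cdot]_+$ is made explicit.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def target_triplet_loss(feats, pseudo_labels, margin=0.5):
+    """Triplet loss of Eq. 7 over original plus generated samples."""
+    dist = torch.cdist(feats, feats)                                  # pairwise distances
+    same = pseudo_labels.unsqueeze(0) == pseudo_labels.unsqueeze(1)   # positive mask
+
+    # Hardest negative per anchor: closest sample from a different cluster.
+    neg_dist = dist.masked_fill(same, float("inf")).min(dim=1).values
+
+    losses = []
+    for i in range(feats.size(0)):
+        pos_idx = same[i].nonzero(as_tuple=True)[0]
+        pos_idx = pos_idx[pos_idx != i]                # drop the anchor itself
+        if pos_idx.numel() == 0:
+            continue
+        losses.append(F.relu(margin + dist[i, pos_idx] - neg_dist[i]))
+    return torch.cat(losses).mean() if losses else feats.new_zeros(())
+```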
+
+When $g$ keeps producing more diverse samples with features far away from the cluster centers, $f$ will be equipped with stronger discrimination ability in the target domain, as illustrated in Fig. 4. Algorithm 1 shows the detailed training procedure of the proposed AD-Cluster.
+
+# 4. Experiments
+
+We detail the implementation and evaluation of AD-Cluster. During the evaluation, ablation studies, parameter analysis, and comparisons with other methods are provided.
+
+Algorithm 1 Training procedure of AD-Cluster
+Input: Source domain dataset S, target domain dataset T
+Output: Feature encoder f
+1: Pre-train feature encoder f on S by optimizing Eq. 1.
+2: for each clustering iteration do
+3: Extract features $\mathbf{F} = f(\mathbf{T})$ .
+4: Cluster training samples in target domain using F.
+5: for each mini-batch $\mathcal{B} \subset \mathbf{T}$ do
+6: Max-step: train image generator g by $\mathcal{B}$ .
+7: Min-step: train feature encoder f by $\{\mathcal{B}, g(\mathcal{B})\}$ .
+8: end for
+9: end for
+10: return Feature encoder f
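+
+Read as code, the alternation in Algorithm 1 looks roughly like the sketch below, a schematic that reuses the helpers sketched earlier; optimizer steps and the StarGAN reconstruction losses are omitted, and all names are assumptions.
+
+```python
+import torch
+
+def ad_cluster_step(f, g, images, target_cams, pseudo_labels, lam, margin):
+    """One mini-batch of the min-max alternation of Algorithm 1 (schematic).
+
+    images:      B x C x H x W original target-domain images.
+    target_cams: length-B camera IDs to translate each image into.
+    """
+    # Max-step: the generator g is updated to increase intra-cluster diversity.
+    gen_images = torch.stack([g(x, c) for x, c in zip(images, target_cams)])
+    loss_g = diversity_loss(f(gen_images), f(images), pseudo_labels, lam)
+    # (in practice combined with the StarGAN losses before the optimizer step)
+
+    # Min-step: the encoder f is updated on original plus generated samples.
+    feats = torch.cat([f(images), f(gen_images)], dim=0)
+    labels = torch.cat([pseudo_labels, pseudo_labels], dim=0)
+    loss_f = target_triplet_loss(feats, labels, margin)
+    return loss_g, loss_f
+```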
+
+(a) Iteration 0 (b) Iteration 15 (c) Iteration 30
+
+Figure 4. The sparsely and incorrectly distributed person image features of different identities are grouped to more compact and correct clusters through the iterative clustering process. (Best viewed in color with zoom in.)
+
+# 4.1. Datasets and Evaluation Metrics
+
+The experiments were conducted on two public datasets, Market-1501 [58] and DukeMTMC-reID [36] [59], using the Cumulative Matching Characteristic (CMC) curve and mean average precision (mAP) as evaluation metrics.
+
+Market1501 [58]: This dataset contains 32,668 images of 1,501 identities from 6 disjoint surveillance cameras. Of the 32,668 person images, 12,936 images from 751 identities form a training set, 19,732 images from 750 identities (plus a number of distractors) form a gallery set, and 3,368 images from 750 identities form a query set.
+
+DukeMTMC-ReID [36] [59]: This dataset is a subset of DukeMTMC. It consists of 16,522 training images, 2,228 query images, and 17,661 gallery images of 1,812 identities captured using 8 cameras. Of the 1,812 identities, 1,404 appear in at least two cameras and the remaining 408 (considered distractors) appear in only one camera.
+
+# 4.2. Implementation Details
+
+We adopt the ResNet-50 [20] as the backbone network and initialize it by using parameters pre-trained on the ImageNet [9]. During training, the input image is uniformly resized to $256 \times 128$ and traditional image augmentation is performed via random flipping and random erasing. For each identity from the training set, a mini-batch of size 256
+
+| Methods | DukeMTMC-reID → Market-1501 | Market-1501 → DukeMTMC-reID |
+| | R-1 | R-5 | R-10 | mAP | R-1 | R-5 | R-10 | mAP |
+| LOMO [25] | 27.2 | 41.6 | 49.1 | 8.0 | 12.3 | 21.3 | 26.6 | 4.8 |
+| BoW [58] | 35.8 | 52.4 | 60.3 | 14.8 | 17.1 | 28.8 | 34.9 | 8.3 |
+| UMDL [34] | 34.5 | 52.6 | 59.6 | 12.4 | 18.5 | 31.4 | 37.6 | 7.3 |
+| PTGAN [45] | 38.6 | - | 66.1 | - | 27.4 | - | 50.7 | - |
+| PUL [13] | 45.5 | 60.7 | 66.7 | 20.5 | 30.0 | 43.4 | 48.5 | 16.4 |
+| SPGAN [10] | 51.5 | 70.1 | 76.8 | 22.8 | 41.1 | 56.6 | 63.0 | 22.3 |
+| CAMEL [53] | 54.5 | - | - | 26.3 | - | - | - | - |
+| ATNet [27] | 55.7 | 73.2 | 79.4 | 25.6 | 45.1 | 59.5 | 64.2 | 24.9 |
+| MMFA [26] | 56.7 | 75.0 | 81.8 | 27.4 | 45.3 | 59.8 | 66.3 | 24.7 |
+| SPGAN+LMP [10] | 57.7 | 75.8 | 82.4 | 26.7 | 46.4 | 62.3 | 68.0 | 26.2 |
+| TJ-AIDL [44] | 58.2 | 74.8 | 81.1 | 26.5 | 44.3 | 59.6 | 65.0 | 23.0 |
+| CamStyle [63] | 58.8 | 78.2 | 84.3 | 27.4 | 48.4 | 62.5 | 68.9 | 25.1 |
+| HHL [61] | 62.2 | 78.8 | 84.0 | 31.4 | 46.9 | 61.0 | 66.7 | 27.2 |
+| ECN [62] | 75.1 | 87.6 | 91.6 | 43.0 | 63.3 | 75.8 | 80.4 | 40.4 |
+| UDAP [41] | 75.8 | 89.5 | 93.2 | 53.7 | 68.4 | 80.1 | 83.5 | 49.0 |
+| AD-Cluster (Ours) | 86.7 | 94.4 | 96.5 | 68.3 | 72.6 | 82.5 | 85.5 | 54.1 |
+
+Table 1. Comparison of the proposed AD-Cluster with state-of-the-art methods: For the transfers DukeMTMC-reID $\rightarrow$ Market-1501 and Market-1501 $\rightarrow$ DukeMTMC-reID, the proposed AD-Cluster significantly outperforms all state-of-the-art methods over all evaluation metrics. The top-three results are highlighted with bold, italic, and underline fonts, respectively.
+
+is sampled with $\mathrm{P} = 32$ randomly selected identities and $\mathrm{K} = 8$ (original to augmented samples ratio $= 3:1$ ) randomly sampled images for computing the hard batch triplet loss.
+
+In addition, we set the margin parameter to 0.5 and use the SGD optimizer to train the model. The learning rate is set to $6 \times 10^{-5}$ and the momentum to 0.9. The whole training process consists of 30 iterative min-max clustering rounds, each of which consists of 70 training epochs.
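+
+A minimal sketch of this optimizer and loss configuration is given below; the $P \times K$ identity-balanced batch sampler and the batch-hard triplet mining are assumed to be provided elsewhere and are not shown.
+
+```python
+# Sketch of the training configuration: P*K sampling, hard-batch triplet loss,
+# SGD with lr 6e-5 and momentum 0.9.
+import torch
+import torch.nn as nn
+import torchvision
+
+P, K = 32, 8                     # identities per mini-batch, images per identity
+assert P * K == 256              # mini-batch size
+
+model = torchvision.models.resnet50(weights="IMAGENET1K_V1")  # ImageNet-pretrained backbone
+triplet_loss = nn.TripletMarginLoss(margin=0.5)               # applied to mined hard triplets
+optimizer = torch.optim.SGD(model.parameters(), lr=6e-5, momentum=0.9)
+```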
+
+Our network was implemented on a PyTorch platform and trained using 4 NVIDIA Tesla K80 GPUs (each with 12GB VRAM).
+
+# 4.3. Comparisons with State-of-the-Arts
+
+We compare AD-Cluster with state-of-the-art unsupervised person Re-ID methods including: 1) LOMO [25] and BOW [58] that used hand-crafted features; 2) UMDL [34], PUL [13] and CAMEL [53] that employed unsupervised learning; and 3) nine UDA-based methods including PTGAN [45], SPGAN [10], ATNet [27], CamStyle [63], HHL [61], and ECN [62] that used GANs; MMFA [26] and TJ-AIDL [44] that used image attributes; and UDAP [41] that employed clustering. Table 1 shows the person Re-ID performance while adapting from Market1501 to DukeMTMC-reID and vice versa.
+
+As Table 1 shows, LOMO and BOW using hand-crafted features do not perform well. UMDL [34], PUL [13] and CAMEL [53] derive image features through unsupervised learning, and they perform clearly better than LOMO and BOW under most evaluation metrics. The UDA-based methods further improve the person Re-ID performance in most cases. Specifically, UDAP performs much better than the other methods as it exploits the distribution of clusters in the target domain. The performance of the GAN-based UDA methods varies. In particular, ECN performs better than most methods using GANs because it enforces camera invariance and domain connectedness.
+
+In addition, AD-Cluster performs significantly better than all compared methods. As Table 1 shows, AD-Cluster achieves a rank-1 accuracy of $86.7\%$ and an mAP of $68.3\%$ for the unsupervised adaptation DukeMTMC-reID $\rightarrow$ Market1501, which outperforms the state-of-the-art (by UDAP) by $10.9\%$ and $14.6\%$ , respectively. For Market1501 $\rightarrow$ DukeMTMC-reID, AD-Cluster obtains a rank-1 accuracy of $72.6\%$ and an mAP of $54.1\%$ which outperforms the state-of-the-art (by UDAP) by $4.2\%$ and $5.1\%$ , respectively.
+
+Note that AD-Cluster improves differently for the two adaptations in reverse directions between the two datasets. This can also be observed for most existing methods as shown in Table 1. We conjecture that this is because the large variance of samples in DukeMTMC-reID causes more clustering noise, which reduces the effectiveness of pseudo-label prediction and hinders model adaptation.
+
+# 4.4. Ablation Studies
+
+Extensive ablation studies are performed to evaluate each component of AD-Cluster as shown in Table 2.
+
+Baseline, the Upper and Lower Bounds: We first derive the upper and lower performance bounds for the ablation studies as shown in Table 2. Specifically, the upper bounds of Re-ID performance are derived by the Supervised
+
+| Methods | DukeMTMC-reID → Market-1501 |  |  |  | Market-1501 → DukeMTMC-reID |  |  |  |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+|  | R-1 | R-5 | R-10 | mAP | R-1 | R-5 | R-10 | mAP |
+| Supervised Model (upper bound) | 91.9 | 97.4 | 98.4 | 81.4 | 82.8 | 92.2 | 94.9 | 69.8 |
+| Direct Transfer | 46.3 | 63.8 | 71.2 | 21.3 | 28.0 | 42.9 | 49.4 | 14.2 |
+| Baseline | 73.8 | 85.7 | 89.0 | 51.0 | 68.6 | 79.3 | 82.2 | 49.0 |
+| Baseline+ASA | 83.3 | 93.6 | 95.7 | 62.8 | 71.5 | 81.1 | 84.2 | 52.7 |
+| Baseline+ASA+DL | 86.7 | 94.4 | 96.5 | 68.3 | 72.6 | 82.5 | 85.5 | 54.1 |
+
+Table 2. Ablation studies of AD-Cluster: Supervised Models: Re-ID models trained by using the labelled training images of the target domain; Direct Transfer: Re-ID models trained by using the labelled training images of the source domain; Baseline: Baseline Re-ID models trained via Density-based Clustering [12]; Baseline+ASA: Baseline model plus the proposed Adaptive Sample Augmentation; Baseline+ASA+DL: Baseline model plus the proposed Sample Augmentation and Discriminative Feature Learning.
+
+
+(a) Direct Transfer $(J = 0.0556)$ (b) Density-based Clustering $(J = 0.1674)$ (c) Sample Augmentation $(J = 0.2205)$ (d) Discriminative Learning $(J = 0.2555)$
+
+Figure 5. Comparison of sample distributions on Market-1501 dataset with different transfer techniques: $J$ denotes the ratio between inter-class scatter and intra-class scatter and a larger $J$ means better transfer. (Best viewed in color)
+
+Models which are trained by using labelled target-domain training images and evaluated over the target-domain test images. The lower performance bounds are derived by the Direct Transfer models which are trained by using the labelled source-domain training images and evaluated over the target-domain test images. We can observe huge performance gaps between the Direct Transfer models and the Supervised Models due to the domain shift. Take the Market-1501 as an example. The rank-1 accuracy of the supervised model reaches up to $91.9\%$ but it drops significantly to $46.3\%$ for the directly transferred model which is trained by using the DukeMTMC-reID training images.
+
+In addition, Table 2 gives the performance of the Baseline models, which are transfer models trained by iterative density-based clustering as described in [41]. As Table 2 shows, the Baseline model outperforms the Direct Transfer model by a large margin. For example, the rank-1 accuracy improves from $46.3\%$ to $73.8\%$ and from $28.0\%$ to $68.6\%$ , respectively, when evaluated on the Market1501 and DukeMTMC-reID datasets. This shows that the density-based clustering in the Baseline can group samples of the same identity into irregular distributions by utilizing density correlations. At the same time, we can observe that there are still large performance gaps between the Baseline models and the Supervised Models, e.g., a drop of $30\%$ in mAP when transferring from DukeMTMC-reID to Market1501.
+
+Adaptive Sample Augmentation: We first evaluated the adaptive sample augmentation as described in Section
+
+3.2. For this experiment, we designed a network Baseline+ASA that simply incorporates the adaptive sample augmentation into the Baseline, which performs transfer via iterative density-based clustering. As shown in Table 2, adaptive sample augmentation improves the re-ID performance significantly. For DukeMTMC-reID $\rightarrow$ Market1501, the Baseline+ASA achieves a rank-1 accuracy of $83.3\%$ and an mAP of $62.8\%$ , which are higher than the Baseline by $9.5\%$ and $11.8\%$ , respectively. The contribution of the proposed sample augmentation can also be observed from the perspective of sample distributions in the feature space as illustrated in Fig. 5(c), where incorporating the proposed sample augmentation greatly improves the sample distribution compared with density-based clustering alone, as shown in Fig. 5(b).
+
+The large performance improvements can be explained by the effectiveness of the augmented samples. Specifically, the iterative injection of ID-preserving cross-camera images helps to reduce the feature distances of person images within the same cluster (i.e., the intra-cluster distances) and increase that of different clusters (i.e., the inter-cluster distances) simultaneously.
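+
+As a rough companion to the scatter statistics reported in Fig. 5, the sketch below computes a $J$ -style ratio of inter-class to intra-class scatter for a labelled feature matrix; the exact definition used for Fig. 5 is not given here, so this formulation is an illustrative assumption.
+
+```python
+# Illustrative inter-/intra-class scatter ratio (a stand-in for J in Fig. 5).
+import numpy as np
+
+def scatter_ratio(features, labels):
+    features, labels = np.asarray(features), np.asarray(labels)
+    mu = features.mean(axis=0)                       # global mean
+    intra, inter = 0.0, 0.0
+    for c in np.unique(labels):
+        fc = features[labels == c]
+        mu_c = fc.mean(axis=0)
+        intra += ((fc - mu_c) ** 2).sum()            # within-class scatter
+        inter += len(fc) * ((mu_c - mu) ** 2).sum()  # between-class scatter
+    return inter / intra
+
+# Toy usage: well-separated clusters yield a larger ratio than overlapping ones.
+rng = np.random.default_rng(0)
+x = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
+y = np.array([0] * 50 + [1] * 50)
+print(scatter_ratio(x, y))
+```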
+
+Discriminative Learning: We evaluated the discriminative learning component as described in Section 3.3. For this experiment, we designed a new network Baseline+ASA+DL that further incorporates discriminative learning into the Baseline+ASA network as described in the previous subsection. As shown in Table 2, the incorporation of discriminative learning consistently improves the
+
+
+Figure 6. The min-max attenuation coefficient $\lambda$ in Eq. 5 affects both mAP and rank-1 accuracy (evaluated on Market-1501).
+
+
+Figure 7. Iterative min-max clustering outperforms density-based clustering consistently for both accuracy of pseudo-label prediction on the left and mAP & rank-1 accuracy of person Re-ID on the right (for DukeMTMC-reID $\rightarrow$ Market1501).
+
+person Re-ID performance beyond the Baseline+ASA. Take the transfer DukeMTMC-reID $\rightarrow$ Market1501 as an example. The Baseline+ASA+DL achieves a rank-1 accuracy of $86.7\%$ and an mAP of $68.3\%$ , which outperforms the corresponding Baseline+ASA by $3.4\%$ and $5.5\%$ , respectively. The superior performance of the proposed discriminative learning can also be observed intuitively from the perspective of sample distributions in the feature space as shown in Fig. 5(d). The effectiveness of the discriminative learning can be largely attributed to the min-max clustering optimization, which alternately trains the image generator to generate more diverse samples for maximizing the sample diversity and the feature encoder for minimizing the intra-class distance.
+
+From another perspective, it can be seen that Baseline+ASA+DL (i.e., the complete AD-Cluster model) outperforms the Baseline by up to $13\%$ in rank-1 accuracy and $17\%$ in mAP, respectively. This demonstrates the effectiveness of the proposed ID-preserving cross-camera sample augmentation and discriminative learning in UDA-based person Re-ID. In addition, we can observe that the performance of Baseline+ASA+DL comes close to that of the Supervised Models. For example, the Baseline+ASA+DL achieves a rank-1 accuracy of $86.7\%$ for the transfer DukeMTMC-reID $\rightarrow$ Market-1501, which is only $5.2\%$ lower than the corresponding Supervised Model.
+
+Specificity of AD-Cluster. The performance of AD-Cluster is related to the sample generation method. In this work, we generate cross-camera images by using StarGAN, which theoretically can be replaced by any other ID-preserving generator. The key is how well the re-ID model can learn camera-style invariance from the generated samples. AD-Cluster is thus influenced by two factors: the quality of the generated samples and the strength of camera-style invariance in the sample distribution of the target domain. These factors explain the different improvements obtained by AD-Cluster over different adaptation tasks.
+
+# 4.5. Discussion
+
+The min-max attenuation coefficient $\lambda$ in Eq. 5 affects the ID-preserving min-max clustering and hence the person Re-ID performance. We studied this parameter by setting it to different values and checking the person Re-ID performance. Fig. 6 shows experimental results on Market-1501. Using a smaller $\lambda$ usually leads to a higher cluster diversity, which further leads to better Re-ID performance. On the other hand, $\lambda$ should not be too small for the sake of identity preservation. Experiments show that AD-Cluster performs best when $\lambda = 0.03$ . We also evaluate the accuracy of the pseudo-labels that are predicted during the iterative min-max clustering, as well as how the person Re-ID performance evolves during this process. Fig. 7 (left) shows that the f-score of the predicted pseudo-labels keeps improving during the iterative clustering process. Additionally, the proposed min-max clustering outperforms the density-based clustering [12] significantly in both mAP and rank-1 accuracy, as shown in the right graph of Fig. 7.
+
+# 5. Conclusion
+
+This paper presents an augmented discriminative clustering (AD-Cluster) method for domain adaptive person re-ID. On top of density-based clustering, we introduce adaptive sample augmentation to generate more diverse samples and a min-max optimization scheme to learn a more discriminative re-ID model. Experiments demonstrate the effectiveness of adaptive sample augmentation and min-max optimization for improving the discrimination ability of the deep re-ID model. Our approach not only produces a new state-of-the-art in UDA accuracy on two large-scale benchmarks but also provides a fresh insight for general UDA problems. We expect that the proposed AD-Cluster will inspire new insights and attract more interest in better UDA-based recognition [15] and detection [56] in the near future.
+
+# Acknowledgement
+
+This work is partially supported by grants from the National Key R&D Program of China under grant 2017YFB1002400, the National Natural Science Foundation of China under contract No. 61825101, No. U1611461, No. 61836012 and No. 61972217.
+
+# References
+
+[1] Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, and Mario Marchand. Domain-adversarial neural networks. CoRR, abs/1412.4446, 2014.
+[2] Konstantinos Bousmalis, Nathan Silberman, David Dohan, Dumitru Erhan, and Dilip Krishnan. Unsupervised pixel-level domain adaptation with generative adversarial networks. In IEEE CVPR, pages 95-104, 2017.
+[3] Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In ECCV, 2018.
+[4] Minmin Chen, Kilian Q. Weinberger, and John Blitzer. Cotraining for domain adaptation. In NeurIPS, pages 2456-2464, 2011.
+[5] Tianlong Chen, Shaojin Ding, Jingyi Xie, Ye Yuan, Wuyang Chen, Yang Yang, Zhou Ren, and Zhangyang Wang. Abdnet: Attentive but diverse person re-identification. In IEEE ICCV, pages 8351–8361, 2019.
+[6] Yanbei Chen, Xiatian Zhu, and Shaogang Gong. Instance-guided context rendering for cross-domain person re-identification. In IEEE ICCV, 2019.
+[7] Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In IEEE CVPR, 2018.
+[8] Adam Coates and Andrew Y. Ng. Learning feature representations with k-means. In Neural Networks: Tricks of the Trade - Second Edition, volume 7700, pages 561-580. Springer, 2012.
+[9] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li. Imagenet: A large-scale hierarchical image database. In IEEE CVPR, 2009.
+[10] Weijian Deng, Liang Zheng, Qixiang Ye, Guoliang Kang, Yi Yang, and Jianbin Jiao. Image-image domain adaptation with preserved self-similarity and domain-dissimilarity for person re-identification. In IEEE CVPR, 2018.
+[11] Alexey Dosovitskiy, Jost Tobias Springenberg, Martin A. Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with convolutional neural networks. In NeurIPS, pages 766-774, 2014.
+[12] Martin Ester, Hans-Peter Kriegel, Jörg Sander, and Xiaowei Xu. A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD, pages 226-231, 1996.
+[13] Hehe Fan, Liang Zheng, Chenggang Yan, and Yi Yang. Unsupervised person re-identification: Clustering and finetuning. TOMCCAP, 14(4):83:1-83:18, 2018.
+[14] Hehe Fan, Liang Zheng, and Yi Yang. Unsupervised person re-identification: Clustering and fine-tuning. CoRR, abs/1705.10444, 2017.
+[15] Yang Fu, Yunchao Wei, Guanshuo Wang, Yuqian Zhou, Honghui Shi, and Thomas S. Huang. Self-similarity grouping: A simple unsupervised cross domain adaptation approach for person re-identification. In IEEE ICCV, 2019.
+[16] Yaroslav Ganin and Victor S. Lempitsky. Unsupervised domain adaptation by backpropagation. In Francis R. Bach and David M. Blei, editors, ICML, volume 37, pages 1180-1189, 2015.
+[17] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor S. Lempitsky. Domain-adversarial training of neural networks. J. Mach. Learn. Res., 17:59:1-59:35, 2016.
+[18] Kamran Ghasedi, Xiaoqian Wang, Cheng Deng, and Heng Huang. Balanced self-paced learning for generative adversarial clustering network. In IEEE CVPR, 2019.
+[19] Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander J. Smola. A kernel two-sample test. J. Mach. Learn. Res., 13:723-773, 2012.
+[20] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE CVPR, June 2016.
+[21] Alexander Hermans, Lucas Beyer, and Bastian Leibe. In defense of the triplet loss for person re-identification. CoRR, abs/1703.07737, 2017.
+[22] Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei A. Efros, and Trevor Darrell. Cycada: Cycle-consistent adversarial domain adaptation. In ICML, pages 1994-2003, 2018.
+[23] Yu-Jhe Li, Ci-Siang Lin, Yan-Bo Lin, and Yu-Chiang Frank Wang. Cross-dataset person re-identification via unsupervised pose disentanglement and adaptation. In IEEE ICCV, 2019.
+[24] Renjie Liao, Alexander G. Schwing, Richard S. Zemel, and Raquel Urtasun. Learning deep parsimonious representations. In NeurIPS, pages 5076-5084, 2016.
+[25] Shengcai Liao, Yang Hu, Xiangyu Zhu, and Stan Z. Li. Person re-identification by local maximal occurrence representation and metric learning. In IEEE CVPR, June 2015.
+[26] Shan Lin, Haoliang Li, Chang-Tsun Li, and Alex C. Kot. Multi-task mid-level feature alignment network for unsupervised cross-dataset person re-identification. In BMVC, 2018.
+[27] Jiawei Liu, Zheng-Jun Zha, Di Chen, Richang Hong, and Meng Wang. Adaptive transfer network for cross-domain person re-identification. In IEEE CVPR, 2019.
+[28] Ming-Yu Liu and Oncel Tuzel. Coupled generative adversarial networks. In NeurIPS, pages 469-477, 2016.
+[29] Zimo Liu, Dong Wang, and Huchuan Lu. Stepwise metric promotion for unsupervised video person re-identification. In IEEE ICCV, pages 2448–2457, 2017.
+[30] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I. Jordan. Learning transferable features with deep adaptation networks. In Francis R. Bach and David M. Blei, editors, ICML, volume 37, pages 97-105, 2015.
+[31] Mingsheng Long, Guiguang Ding, Jianmin Wang, Jiaguang Sun, Yuchen Guo, and Philip S. Yu. Transfer sparse coding for robust image representation. In IEEE CVPR, pages 407-414, 2013.
+[32] Jianming Lv and Xintong Wang. Cross-dataset person re-identification using similarity preserved generative adversarial networks. In Weiru Liu, Fausto Giunchiglia, and Bo Yang, editors, KSEM, pages 171–183, 2018.
+
+[33] Saeid Motiian, Marco Piccirilli, Donald A. Adjeroh, and Gianfranco Doretto. Unified deep supervised domain adaptation and generalization. In IEEE ICCV, pages 5716-5726, 2017.
+[34] Peixi Peng, Tao Xiang, Yaowei Wang, Massimiliano Pontil, Shaogang Gong, Tiejun Huang, and Yonghong Tian. Unsupervised cross-dataset transfer learning for person re-identification. In IEEE CVPR, June 2016.
+[35] Lei Qi, Lei Wang, Jing Huo, Luping Zhou, Yinghuan Shi, and Yang Gao. A novel unsupervised camera-aware domain adaptation framework for person re-identification. In IEEE ICCV, 2019.
+[36] Ergys Ristani, Francesco Solera, Roger S. Zou, Rita Cucchiara, and Carlo Tomasi. Performance measures and a data set for multi-target, multi-camera tracking. In IEEE ECCV Workshops, 2016.
+[37] Marcus Rohrbach, Sandra Ebert, and Bernt Schiele. Transfer learning in a transductive setting. In NeurIPS, pages 46-54, 2013.
+[38] Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains. In ECCV, 2010.
+[39] Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tatsuya Harada. Maximum classifier discrepancy for unsupervised domain adaptation. In IEEE CVPR, 2018.
+[40] Ozan Sener, Hyun Oh Song, Ashutosh Saxena, and Silvio Savarese. Learning transferrable representations for unsupervised domain adaptation. In NeurIPS, pages 2110-2118, 2016.
+[41] Liangchen Song, Cheng Wang, Lefei Zhang, Bo Du, Qian Zhang, Chang Huang, and Xinggang Wang. Unsupervised domain adaptive re-identification: Theory and practice. CoRR, abs/1807.11334, 2018.
+[42] Baochen Sun, Jiashi Feng, and Kate Saenko. Return of frustratingly easy domain adaptation. In AAAI, pages 2058-2065, 2016.
+[43] Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. Deep domain confusion: Maximizing for domain invariance. CoRR, abs/1412.3474, 2014.
+[44] Jingya Wang, Xiatian Zhu, Shaogang Gong, and Wei Li. Transferable joint attribute-identity deep learning for unsupervised person re-identification. In IEEE CVPR, 2018.
+[45] Longhui Wei, Shiliang Zhang, Wen Gao, and Qi Tian. Person transfer gan to bridge domain gap for person re-identification. In IEEE CVPR, 2018.
+[46] Ancong Wu, Wei-Shi Zheng, and Jian-Huang Lai. Unsupervised person re-identification by camera-aware similarity consistency learning. In IEEE ICCV, 2019.
+[47] Jinlin Wu, Shengcai Liao, Zhen Lei, Xiaobo Wang, Yang Yang, and Stan Z. Li. Clustering and dynamic sampling based unsupervised domain adaptation for person re-identification. In IEEE ICME, pages 886–891, 2019.
+[48] Yu Wu, Yutian Lin, Xuanyi Dong, Yan Yan, Wanli Ouyang, and Yi Yang. Exploit the unknown gradually: One-shot video-based person re-identification by stepwise learning. In IEEE CVPR, 2018.
+
+[49] Junyuan Xie, Ross B. Girshick, and Ali Farhadi. Unsupervised deep embedding for clustering analysis. In MariaFlorina Balcan and Kilian Q. Weinberger, editors, ICML, volume 48, pages 478-487, 2016.
+[50] Fan Yang, Ke Yan, Shijian Lu, Huizhu Jia, Xiaodong Xie, and Wen Gao. Attention driven person re-identification. Pattern Recognition, 86:143-155, 2019.
+[51] Jianwei Yang, Devi Parikh, and Dhruv Batra. Joint unsupervised learning of deep representations and image clusters. In IEEE CVPR, pages 5147-5156, 2016.
+[52] Mang Ye, Andy Jinhua Ma, Liang Zheng, Jiawei Li, and Pong C. Yuen. Dynamic label graph matching for unsupervised video re-identification. In IEEE ICCV, pages 5152–5160, 2017.
+[53] Hong-Xing Yu, Ancong Wu, and Wei-Shi Zheng. Cross-view asymmetric metric learning for unsupervised person re-identification. In IEEE ICCV, 2017.
+[54] Weichen Zhang, Wanli Ouyang, Wen Li, and Dong Xu. Collaborative and adversarial network for unsupervised domain adaptation. In IEEE CVPR, pages 3801-3809, 2018.
+[55] Xinyu Zhang, Jiewei Cao, Chunhua Shen, and Mingyu You. Self-training with progressive augmentation for unsupervised cross-domain person re-identification. In IEEE ICCV, 2019.
+[56] Xiaosong Zhang, Fang Wan, Chang Liu, Rongrong Ji, and Qixiang Ye. Freeanchor: Learning to match anchors for visual object detection. In Advances in Neural Information Processing Systems, pages 147-155, 2019.
+[57] Liang Zheng, Zhi Bie, Yifan Sun, Jingdong Wang, Chi Su, Shengjin Wang, and Qi Tian. MARS: A video benchmark for large-scale person re-identification. In ECCV, pages 868–884, 2016.
+[58] Liang Zheng, Liyue Shen, Lu Tian, Shengjin Wang, Jingdong Wang, and Qi Tian. Scalable person re-identification: A benchmark. In IEEE ICCV, 2015.
+[59] Zhedong Zheng, Liang Zheng, and Yi Yang. Unlabeled samples generated by gan improve the person re-identification baseline in vitro. In IEEE ICCV, 2017.
+[60] Zhun Zhong, Liang Zheng, Donglin Cao, and Shaozi Li. Re-ranking person re-identification with k-reciprocal encoding. In IEEE CVPR, 2017.
+[61] Zhun Zhong, Liang Zheng, Shaozi Li, and Yi Yang. Generalizing a person retrieval model hetero- and homogeneously. In ECCV, pages 176–192, 2018.
+[62] Zhun Zhong, Liang Zheng, Zhiming Luo, Shaozi Li, and Yi Yang. Invariance matters: Exemplar memory for domain adaptive person re-identification. In IEEE CVPR, 2019.
+[63] Zhun Zhong, Liang Zheng, Zhedong Zheng, Shaozi Li, and Yi Yang. Camstyle: A novel data augmentation method for person re-identification. IEEE TIP, 28(3):1176-1190, 2019.
\ No newline at end of file
diff --git a/adclusteraugmenteddiscriminativeclusteringfordomainadaptivepersonreidentification/images.zip b/adclusteraugmenteddiscriminativeclusteringfordomainadaptivepersonreidentification/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..88b6394861b3ea6de5e11538e2757eff600a1abb
--- /dev/null
+++ b/adclusteraugmenteddiscriminativeclusteringfordomainadaptivepersonreidentification/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c0890a4d7c4a64e502893fe877bef202ad91f7901cd49776abb51b338558e7c4
+size 504287
diff --git a/adclusteraugmenteddiscriminativeclusteringfordomainadaptivepersonreidentification/layout.json b/adclusteraugmenteddiscriminativeclusteringfordomainadaptivepersonreidentification/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..e6ac686ba461944988411c24466eec620d83bf90
--- /dev/null
+++ b/adclusteraugmenteddiscriminativeclusteringfordomainadaptivepersonreidentification/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d1880f71f43e649aeb8d4201326d360b1740a7f9d48f5c70c0a7e2507c1a8584
+size 466874
diff --git a/addernetdowereallyneedmultiplicationsindeeplearning/51c32f20-0a9d-4ebe-bd38-be4b21444160_content_list.json b/addernetdowereallyneedmultiplicationsindeeplearning/51c32f20-0a9d-4ebe-bd38-be4b21444160_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..7ea201290a162e13ffa339acfc572824403906f8
--- /dev/null
+++ b/addernetdowereallyneedmultiplicationsindeeplearning/51c32f20-0a9d-4ebe-bd38-be4b21444160_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0bd361df864d9cad3838dc99f7df6a2ea9ab68ef454e98fbb6893fbcd8fbb4f9
+size 77005
diff --git a/addernetdowereallyneedmultiplicationsindeeplearning/51c32f20-0a9d-4ebe-bd38-be4b21444160_model.json b/addernetdowereallyneedmultiplicationsindeeplearning/51c32f20-0a9d-4ebe-bd38-be4b21444160_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..f8640120ae768dbbd780ed65ec7769a69b93134d
--- /dev/null
+++ b/addernetdowereallyneedmultiplicationsindeeplearning/51c32f20-0a9d-4ebe-bd38-be4b21444160_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6c6fdb17d63c11766ede506e91fe94632f3dc627886ac56f522b27682b444cc6
+size 93783
diff --git a/addernetdowereallyneedmultiplicationsindeeplearning/51c32f20-0a9d-4ebe-bd38-be4b21444160_origin.pdf b/addernetdowereallyneedmultiplicationsindeeplearning/51c32f20-0a9d-4ebe-bd38-be4b21444160_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..de587a09da4097f499e90504f64bbae17dc78a43
--- /dev/null
+++ b/addernetdowereallyneedmultiplicationsindeeplearning/51c32f20-0a9d-4ebe-bd38-be4b21444160_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ff23e2a2ba075c128f0227427f053753afc0e1d6c42e2e3d87c580a20b61dd09
+size 559335
diff --git a/addernetdowereallyneedmultiplicationsindeeplearning/full.md b/addernetdowereallyneedmultiplicationsindeeplearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..94d4c6a99af7c8549e7c70aa46b3a639e7fff5af
--- /dev/null
+++ b/addernetdowereallyneedmultiplicationsindeeplearning/full.md
@@ -0,0 +1,338 @@
+# AdderNet: Do We Really Need Multiplications in Deep Learning?
+
+Hanting Chen $^{1,2*}$ , Yunhe Wang $^{2*}$ , Chunjing Xu $^{2\dagger}$ , Boxin Shi $^{3,4}$ , Chao Xu $^{1}$ , Qi Tian $^{2}$ , Chang Xu $^{5}$ $^{1}$ Key Lab of Machine Perception (MOE), Dept. of Machine Intelligence, Peking University.
+
+$^{2}$ Noah's Ark Lab, Huawei Technologies. $^{3}$ NELVT, Dept. of CS, Peking University. $^{4}$ Peng Cheng Laboratory.
+
+$^{5}$ School of Computer Science, Faculty of Engineering, The University of Sydney.
+
+{htchen, shiboxin}@pku.edu.cn, xuchao@cis.pku.edu.cn, c.xu@sydney.edu.au
+
+{yunhe.wang, xuchunjing, tian.qil}@huawei.com
+
+# Abstract
+
+Compared with cheap addition operation, multiplication operation is of much higher computation complexity. The widely-used convolutions in deep neural networks are exactly cross-correlation to measure the similarity between input feature and convolution filters, which involves massive multiplications between float values. In this paper, we present adder networks (AdderNets) to trade these massive multiplications in deep neural networks, especially convolutional neural networks (CNNs), for much cheaper additions to reduce computation costs. In AdderNets, we take the $\ell_1$ -norm distance between filters and input feature as the output response. The influence of this new similarity measure on the optimization of the neural network has been thoroughly analyzed. To achieve a better performance, we develop a special back-propagation approach for AdderNets by investigating the full-precision gradient. We then propose an adaptive learning rate strategy to enhance the training procedure of AdderNets according to the magnitude of each neuron's gradient. As a result, the proposed AdderNets can achieve $74.9\%$ Top-1 accuracy and $91.7\%$ Top-5 accuracy using ResNet-50 on the ImageNet dataset without any multiplication in the convolutional layers. The codes are publicly available at: https://github.com/huawei/noah/AdderNet.
+
+# 1. Introduction
+
+Given the advent of Graphics Processing Units (GPUs), deep convolutional neural networks (CNNs) with billions of floating number multiplications could receive speed-ups and make important strides in a large variety of computer vision tasks, e.g. image classification [26, 17], object detection [23], segmentation [19], and human face verification [32]. However, the high-power consumption of these high-end GPU cards (e.g. $250\mathrm{W}+$ for GeForce RTX 2080 Ti) has blocked modern deep learning systems from being deployed on mobile devices, e.g. smart phones, cameras, and watches. Existing GPU cards are far from svelte and cannot be easily mounted on mobile devices. Though the GPU itself only takes up a small part of the card, much other hardware is needed for support, e.g. memory chips, power circuitry, voltage regulators and other controller chips. It is therefore necessary to study efficient deep neural networks that can run with affordable computation resources on mobile devices.
+
+Addition, subtraction, multiplication and division are the four most basic operations in mathematics. It is widely known that multiplication is slower than addition, but most of the computations in deep neural networks are multiplications between float-valued weights and float-valued activations during the forward inference. There are thus many papers on how to trade multiplications for additions, to speed up deep learning. The seminal work [5] proposed BinaryConnect to force the network weights to be binary (e.g. -1 or 1), so that many multiply-accumulate operations can be replaced by simple accumulations. After that, Hubara et al. [15] proposed BNNs, which binarized not only weights but also activations in convolutional neural networks at runtime. Moreover, Rastegari et al. [22] introduced scale factors to approximate convolutions using binary operations and outperform [15, 22] by large margins. Zhou et al. [39] utilized low bit-width gradients to accelerate the training of binarized networks. Cai et al. [4] proposed a half-wave Gaussian quantizer for forward approximation, which achieved performance much closer to that of full-precision networks.
+
+Though binarizing filters of deep neural networks significantly reduces the computation cost, the original recognition accuracy often cannot be preserved. In addition, the training procedure of binary networks is not stable and usually suffers from slow convergence with a small
+
+
+(a) Visualization of features in AdderNets (b) Visualization of features in CNNs
+
+Figure 1. Visualization of features in AdderNets and CNNs. Features of CNNs in different classes are divided by their angles. In contrast, features of AdderNets tend to be clustered towards different class centers, since AdderNets use the $\ell_1$ -norm to distinguish different classes. The visualization results suggest that the $\ell_1$ -distance can serve as a similarity measure between the filter and the input feature in deep neural networks.
+
+learning rate. Convolutions in classical CNNs are actually cross-correlation to measure the similarity of two inputs. Researchers and developers are used to taking convolution as a default operation to extract features from visual data, and introduce various methods to accelerate the convolution, even if there is a risk of sacrificing network capability. But there has been hardly any attempt to replace convolution with another, more efficient similarity measure that preferably only involves additions. In fact, additions are of much lower computational complexity than multiplications. Thus, we are motivated to investigate the feasibility of replacing multiplications by additions in convolutional neural networks.
+
+In this paper, we propose adder networks that maximize the use of addition while abandoning convolution operations. Given a series of small templates as "filters" in the neural network, $\ell_1$ -distance could be an efficient measure to summarize absolute differences between the input signal and the template as shown in Figure 1. Since subtraction can be easily implemented through addition by using its complement code, $\ell_1$ -distance could be a hardware-friendly measure that only has additions, and naturally becomes an efficient alternative to the convolution to construct neural networks. An improved back-propagation scheme with regularized gradients is designed to ensure sufficient updates of the templates and a better network convergence. The proposed AdderNets are deployed on several benchmarks, and experimental results demonstrate that AdderNets can achieve comparable recognition accuracy to conventional CNNs.
+
+This paper is organized as follows. Section 2 investigates related works on network compression. Section 3 proposes Adder Networks which replace the multiplication in
+
+the conventional convolution filters with addition. Section 4 evaluates the proposed AdderNets on various benchmark datasets and models and Section 5 concludes this paper.
+
+# 2. Related works
+
+To reduce the computational complexity of convolutional neural networks, a number of works have been proposed for eliminating useless calculations.
+
+Pruning-based methods aim to remove redundant weights to compress and accelerate the original network. Denton et al. [6] decomposed weight matrices of fully-connected layers into simple calculations by exploiting singular value decomposition (SVD). Han et al. [9] proposed discarding subtle weights in pre-trained deep networks to omit their original calculations without affecting the performance. Wang et al. [31] further converted convolution filters into the DCT frequency domain and eliminated more floating number multiplications. In addition, Hu et al. [13] discarded redundant filters with less impact to directly reduce the computations brought by these filters. Luo et al. [21] discarded redundant filters according to the reconstruction error. Hu et al. [14] proposed the dubbed Robust Dynamic Inference Networks (RDI-Nets), which allow each input to adaptively choose one of the multiple output layers to output its prediction. Wang et al. [29] proposed an E2-Training method, which can train deep neural networks with over $80\%$ energy savings.
+
+Instead of directly reducing the computational complexity of a pre-trained heavy neural network, many works have focused on designing novel blocks or operations to replace the conventional convolution filters. Howard et al. [12] designed MobileNet, which decomposes the conventional convolution filters into point-wise and depth-wise convolution filters with much fewer FLOPs. Zhang et al. [38] combined group convolutions [35] and a channel shuffle operation to build efficient neural networks with fewer computations. Wu et al. [34] presented a parameter-free "shift" operation with zero FLOPs and zero parameters to replace conventional filters and largely reduce the computational and storage cost of CNNs. Wang et al. [30] developed versatile convolution filters to generate more useful features utilizing fewer calculations and parameters. Xu et al. [16] proposed perturbative neural networks to replace convolution, computing the response as a weighted linear combination of non-linearly activated additive noise perturbed inputs. Han et al. [8] proposed GhostNet to generate more features from cheap operations and achieve state-of-the-art performance on lightweight architectures.
+
+Besides eliminating redundant weights or filters in deep convolutional neural networks, Hinton et al. [11] proposed the knowledge distillation (KD) scheme, which transfers useful information from a heavy teacher network to a portable student network by minimizing the Kullback-Leibler divergence between their outputs. Besides mimicking the final outputs of the teacher networks, Romero et al. [25] exploited a hint layer to distill the information in the features of the teacher network to the student network. You et al. [37] utilized multiple teachers to guide the training of the student network and achieved better performance. Yim et al. [36] regarded the relationship between features from two layers in the teacher network as a novel kind of knowledge and introduced the FSP (Flow of Solution Procedure) matrix to transfer this kind of information to the student network.
+
+Nevertheless, the compressed networks obtained with these algorithms still contain massive multiplications, which cost enormous computation resources. In contrast, subtractions and additions are of much lower computational complexity than multiplications. However, they have not been widely investigated in deep neural networks, especially in the widely used convolutional networks. Therefore, we propose to minimize the number of multiplications in deep neural networks by replacing them with subtractions or additions.
+
+# 3. Networks without Multiplication
+
+Consider a filter $F \in \mathbb{R}^{d \times d \times c_{in} \times c_{out}}$ in an intermediate layer of the deep neural network, where kernel size is $d$ , input channel is $c_{in}$ and output channel is $c_{out}$ . The input feature is defined as $X \in \mathbb{R}^{H \times W \times c_{in}}$ , where $H$ and $W$ are the height and width of the feature, respectively. The output feature $Y$ indicates the similarity between the filter and the
+
+input feature,
+
+$$
+Y (m, n, t) = \sum_ {i = 0} ^ {d} \sum_ {j = 0} ^ {d} \sum_ {k = 0} ^ {c _ {i n}} S \left(X (m + i, n + j, k), F (i, j, k, t)\right), \tag {1}
+$$
+
+where $S(\cdot, \cdot)$ is a pre-defined similarity measure. If cross-correlation is taken as the metric of distance, i.e. $S(x, y) = x \times y$ , Eq. (1) becomes the convolution operation. Eq. (1) can also imply the calculation of a fully-connected layer when $d = 1$ . In fact, there are many other metrics to measure the distance between the filter and the input feature. However, most of these metrics involve multiplications, which bring in more computational cost than additions.
+
+# 3.1. Adder Networks
+
+We are therefore interested in deploying distance metrics that maximize the use of additions. $\ell_1$ distance calculates the sum of the absolute differences of two points' vector representations, which contains no multiplication. Hence, by calculating $\ell_1$ distance between the filter and the input feature, Eq. (1) can be reformulated as:
+
+$$
+Y (m, n, t) = - \sum_ {i = 0} ^ {d} \sum_ {j = 0} ^ {d} \sum_ {k = 0} ^ {c _ {i n}} | X (m + i, n + j, k) - F (i, j, k, t) |. \tag {2}
+$$
+
+Addition is the major operation in $\ell_1$ distance measure, since subtraction can be easily reduced to addition by using complement code. With the help of $\ell_1$ distance, similarity between the filters and features can be efficiently computed.
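+
+To make Eq. (2) concrete, the following is a naive sketch of the adder operation built from standard PyTorch primitives (im2col-style unfolding plus absolute differences); it is for illustration only and ignores the dedicated kernels one would use in practice.
+
+```python
+# Naive l1-distance "adder convolution" of Eq. (2) via unfold.
+import torch
+import torch.nn.functional as F
+
+def adder2d(x, weight, stride=1, padding=0):
+    # x: (N, C_in, H, W); weight: (C_out, C_in, d, d)
+    n, _, h, w = x.shape
+    c_out, _, d, _ = weight.shape
+    cols = F.unfold(x, d, stride=stride, padding=padding)   # (N, C_in*d*d, L)
+    flat = weight.view(c_out, -1)                            # (C_out, C_in*d*d)
+    # Y = -sum_k |x_k - f_k| for every output location and output channel.
+    out = -(cols.unsqueeze(1) - flat.unsqueeze(0).unsqueeze(-1)).abs().sum(dim=2)
+    h_out = (h + 2 * padding - d) // stride + 1
+    w_out = (w + 2 * padding - d) // stride + 1
+    return out.view(n, c_out, h_out, w_out)
+
+y = adder2d(torch.randn(2, 3, 8, 8), torch.randn(4, 3, 3, 3), padding=1)
+print(y.shape)  # torch.Size([2, 4, 8, 8])
+```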
+
+Although both the $\ell_1$ distance in Eq. (2) and the cross-correlation in Eq. (1) can measure the similarity between filters and inputs, there are some differences in their outputs. The output of a convolution filter, as a weighted summation of values in the input feature map, can be positive or negative, but the output of an adder filter is always negative. Hence, we resort to batch normalization for help, and the output of adder layers will be normalized to an appropriate range so that all the activation functions used in conventional CNNs can then be used in the proposed AdderNets. Although the batch normalization layer involves multiplications, its computational cost is significantly lower than that of the convolutional layers and can be omitted. Considering a convolutional layer with a filter $F\in \mathbb{R}^{d\times d\times c_{in}\times c_{out}}$ , an input $X\in \mathbb{R}^{H\times W\times c_{in}}$ and an output $Y\in \mathbb{R}^{H^{\prime}\times W^{\prime}\times c_{out}}$ , the computation complexity of convolution and batch normalization is $\mathcal{O}(d^2c_{in}c_{out}HW)$ and $\mathcal{O}(c_{out}H'W')$ , respectively. In practice, given an input channel number $c_{in} = 512$ and a kernel size $d = 3$ in ResNet [10], we have $\frac{d^2c_{in}c_{out}HW}{c_{out}H'W'}\approx 4068$ . Since the batch normalization layer has been widely used in state-of-the-art convolutional neural networks, we can simply upgrade these networks into AdderNets by replacing their convolutional layers with adder layers to speed up the inference and reduce the energy cost.
+
+Intuitively, Eq. (1) has a connection with template matching [3] in computer vision, which aims to find the parts of an image that match the template. $F$ in Eq. (1) actually works as a template, and we calculate its matching scores with different regions of the input feature $X$ . Since various metrics can be utilized in template matching, it is natural that the $\ell_1$ distance can be utilized to replace the cross-correlation in Eq. (1). Note that Wang et al. [28] also discussed different metrics in deep networks. However, they focused on achieving high performance by employing complex metrics, while we focus on the $\ell_1$ distance to minimize the energy consumption.
+
+# 3.2. Optimization
+
+Neural networks utilize back-propagation to compute the gradients of filters and stochastic gradient descent to update the parameters. In CNNs, the partial derivative of output features $Y$ with respect to the filters $F$ is calculated as:
+
+$$
+\frac {\partial Y (m , n , t)}{\partial F (i , j , k , t)} = X (m + i, n + j, k), \tag {3}
+$$
+
+where $i \in [m, m + d]$ and $j \in [n, n + d]$ . To achieve a better update of the parameters, it is necessary to derive informative gradients for SGD. In AdderNets, the partial derivative of $Y$ with respect to the filters $F$ is:
+
+$$
+\frac {\partial Y (m , n , t)}{\partial F (i , j , k , t)} = \operatorname {s g n} (X (m + i, n + j, k) - F (i, j, k, t)), \tag {4}
+$$
+
+where $\operatorname{sgn}(\cdot)$ denotes the sign function and the value of the gradient can only take $+1, 0$ , or $-1$ .
+
+Considering the derivative of $\ell_2$ -norm:
+
+$$
+\frac {\partial Y (m , n , t)}{\partial F (i , j , k , t)} = X (m + i, n + j, k) - F (i, j, k, t), \tag {5}
+$$
+
+Eq. (4) therefore leads to a signSGD [2] update of the $\ell_2$ -norm. However, signSGD almost never takes the direction of steepest descent, and the direction only gets worse as dimensionality grows [1]. It is thus unsuitable to optimize neural networks with a huge number of parameters using signSGD. Therefore, we propose using Eq. (5) to update the gradients in our AdderNets. The convergence of taking these two kinds of gradient will be further investigated in the supplementary material. By utilizing the full-precision gradient, the filters can be updated precisely.
+
+Besides the gradient of the filters, the gradient of the input features $X$ is also important for the update of parameters. Therefore, we also use the full-precision gradient (Eq. (5)) to calculate the gradient of $X$ . However, the magnitude of the full-precision gradient may be larger than $+1$ or $-1$ . Denote the filters and inputs in layer $i$ as $F_{i}$ and $X_{i}$ . Different from $\frac{\partial Y}{\partial F_i}$ which only affects the gradient of $F_{i}$ itself, the change of $\frac{\partial Y}{\partial X_i}$ would influence the gradient in not only
+
+layer $i$ but also layers before layer $i$ according to the gradient chain rule. If we use the full-precision gradient instead of the sign gradient of $\frac{\partial Y}{\partial X}$ for each layer, the magnitude of the gradient in the layers before this layer would be increased, and the discrepancy brought by using full-precision gradient would be magnified. To this end, we clip the gradient of $X$ to $[-1, 1]$ to prevent gradients from exploding. Then the partial derivative of output features $Y$ with respect to the input features $X$ is calculated as:
+
+$$
+\frac {\partial Y (m , n , t)}{\partial X (m + i , n + j , k)} = \mathrm {H T} (F (i, j, k, t) - X (m + i, n + j, k)). \tag {6}
+$$
+
+where $\mathrm{HT}(\cdot)$ denotes the HardTanh function:
+
+$$
+\mathrm {H T} (x) = \left\{ \begin{array}{l l} x & \text {i f} - 1 < x < 1, \\ 1 & x > 1, \\ - 1 & x < - 1. \end{array} \right. \tag {7}
+$$
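+
+The two gradient rules above can be packaged as a custom autograd function; the sketch below does this for a single flattened window and a single filter, which is a simplification for illustration.
+
+```python
+# Full-precision gradient for the filter (Eq. (5)) and HardTanh-clipped
+# gradient for the input (Eq. (6)), for one window/filter pair.
+import torch
+
+class AdderSimilarity(torch.autograd.Function):
+    """y = -sum(|x - f|) for one flattened sliding window x and filter f."""
+
+    @staticmethod
+    def forward(ctx, x, f):
+        ctx.save_for_backward(x, f)
+        return -(x - f).abs().sum()
+
+    @staticmethod
+    def backward(ctx, grad_out):
+        x, f = ctx.saved_tensors
+        grad_f = grad_out * (x - f)                         # Eq. (5)
+        grad_x = grad_out * torch.clamp(f - x, -1.0, 1.0)   # Eq. (6)
+        return grad_x, grad_f
+
+x = torch.randn(27, requires_grad=True)   # one 3x3x3 window, flattened
+f = torch.randn(27, requires_grad=True)   # one flattened adder filter
+AdderSimilarity.apply(x, f).backward()
+print(bool((x.grad.abs() <= 1.0).all()))  # input gradient is clipped to [-1, 1]
+```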
+
+# 3.3. Adaptive Learning Rate Scaling
+
+In conventional CNNs, assuming that the weights and the input features are independent and identically distributed following normal distribution, the variance of the output can be roughly estimated as:
+
+$$
+\begin{aligned} \operatorname{Var}[Y_{CNN}] &= \sum_{i=0}^{d}\sum_{j=0}^{d}\sum_{k=0}^{c_{in}} \operatorname{Var}[X \times F] \\ &= d^{2}c_{in}\operatorname{Var}[X]\operatorname{Var}[F]. \end{aligned} \tag{8}
+$$
+
+If variance of the weight is $Var[F] = \frac{1}{d^2c_{in}}$ , the variance of output would be consistent with that of the input, which will be beneficial for the information flow in the neural network. In contrast, for AdderNets, the variance of the output can be approximated as:
+
+$$
+\begin{aligned} \operatorname{Var}[Y_{AdderNet}] &= \sum_{i=0}^{d}\sum_{j=0}^{d}\sum_{k=0}^{c_{in}} \operatorname{Var}[|X - F|] \\ &= \sqrt{\frac{\pi}{2}} d^{2}c_{in}(\operatorname{Var}[X] + \operatorname{Var}[F]), \end{aligned} \tag{9}
+$$
+
+when $F$ and $X$ follow normal distributions. In practice, the variance of weights $Var[F]$ is usually very small [7], e.g. $10^{-3}$ or $10^{-4}$ in an ordinary CNN. Hence, compared with multiplying $Var[X]$ with a small value in Eq. (8), the addition operation in Eq. (9) tends to bring in a much larger variance of outputs in AdderNets.
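+
+A quick Monte-Carlo check of this variance gap, under the illustrative assumptions $d = 3$ , $c_{in} = 512$ , $Var[X] = 1$ and $Var[F] = 10^{-3}$ , can be written as follows; the empirical constants will not match Eq. (9) exactly, but the orders-of-magnitude gap is visible.
+
+```python
+# Empirical check: l1-based outputs have a far larger variance than
+# cross-correlation outputs when the filter variance is small.
+import numpy as np
+
+rng = np.random.default_rng(0)
+d, c_in, var_f = 3, 512, 1e-3
+n = d * d * c_in                                       # elements per output response
+X = rng.normal(0.0, 1.0, size=(1000, n))               # Var[X] = 1
+F = rng.normal(0.0, np.sqrt(var_f), size=(1000, n))    # Var[F] = 1e-3
+
+y_cnn = (X * F).sum(axis=1)          # cross-correlation response, Eq. (1)
+y_add = -np.abs(X - F).sum(axis=1)   # adder response, Eq. (2)
+print(y_cnn.var(), y_add.var())      # e.g. ~4.6 vs. ~1.7e3
+```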
+
+We next proceed to show the influence of this larger variance of outputs on the update of AdderNets. To promote the effectiveness of activation functions, we introduce batch normalization after each adder layer. Given input $x$ over a mini-batch $\mathcal{B} = \{x_1, \dots, x_m\}$ , the batch normalization layer can be denoted as:
+
+$$
+y = \gamma \frac {x - \mu_ {\mathcal {B}}}{\sigma_ {\mathcal {B}}} + \beta , \tag {10}
+$$
+
+Algorithm 1 The feed forward and back propagation of adder neural networks.
+
+Input: An initialized adder network $\mathcal{N}$ , its training set $\mathcal{X}$ and the corresponding labels $\mathcal{Y}$ , the global learning rate $\gamma$ and the hyper-parameter $\eta$ .
+
+1: repeat
+2: Randomly select a batch $\{(x,y)\}$ from $\mathcal{X}$ and $\mathcal{Y}$ ;
+3: Employ the AdderNet $\mathcal{N}$ on the mini-batch: $\mathrm{x} \rightarrow \mathcal{N}(\mathrm{x})$ ;
+4: Calculate the full-precision derivative $\frac{\partial Y}{\partial F}$ and $\frac{\partial Y}{\partial X}$ for adder filters using Eq. (5) and Eq. (6);
+5: Exploit the chain rule to generate the gradient of parameters in $\mathcal{N}$ ;
+6: Calculate the adaptive learning rate $\alpha_{l}$ for each adder layer according to Eq. (13).
+7: Update the parameters in $\mathcal{N}$ using stochastic gradient descent.
+8: until convergence
+
+Output: A well-trained adder network $\mathcal{N}$ with almost no multiplications.
+
+where $\gamma$ and $\beta$ are parameters to be learned, and $\mu_{\mathcal{B}} = \frac{1}{m}\sum_{i}x_{i}$ and $\sigma_{\mathcal{B}}^{2} = \frac{1}{m}\sum_{i}(x_{i} - \mu_{\mathcal{B}})^{2}$ are the mean and variance over the mini-batch, respectively. The gradient of loss $\ell$ with respect to $x$ is then calculated as:
+
+$$
+\frac {\partial \ell}{\partial x _ {i}} = \sum_ {j = 1} ^ {m} \frac {\gamma}{m ^ {2} \sigma_ {\mathcal {B}}} \left\{\frac {\partial \ell}{\partial y _ {i}} - \frac {\partial \ell}{\partial y _ {j}} [ 1 + \frac {(x _ {i} - x _ {j}) (x _ {j} - \mu_ {\mathcal {B}})}{\sigma_ {\mathcal {B}}} ] \right\}. \tag {11}
+$$
+
+Given a much larger variance $Var[Y] = \sigma_{\mathcal{B}}^{2}$ in Eq. (9), the magnitude of the gradient w.r.t. $X$ in AdderNets would be much smaller than that in CNNs according to Eq. (11), and then the magnitude of the gradient w.r.t. the filters in AdderNets would be decreased as a result of the gradient chain rule.
+
+Table 1. The $\ell_2$ -norm of gradient of weight in each layer using different networks at 1st iteration.
+
+| Model | Layer 1 | Layer 2 | Layer 3 |
+| --- | --- | --- | --- |
+| AdderNet | 0.0009 | 0.0012 | 0.0146 |
+| CNN | 0.2261 | 0.2990 | 0.4646 |
+
+Table 1 reports the $\ell_2$ -norm of the gradients of the filters $\|F\|_2$ in LeNet-5-BN using CNNs and AdderNets on the MNIST dataset during the 1st iteration. LeNet-5-BN denotes LeNet-5 [18] with a batch normalization layer added after each convolutional layer. As shown in this table, the norms of the gradients of filters in AdderNets are much smaller than those in CNNs, which could slow down the update of filters in AdderNets.
+
+A straightforward idea is to directly adopt a larger learning rate for filters in AdderNets. However, it is worth noticing that the norm of the gradient differs greatly across different layers of AdderNets as shown in Table 1, which requires special consideration of filters in different layers. To this end, we propose an adaptive learning rate for different layers in AdderNets. Specifically, the update for each adder layer $l$ is calculated by:
+
+$$
+\Delta F _ {l} = \gamma \times \alpha_ {l} \times \Delta L (F _ {l}), \tag {12}
+$$
+
+where $\gamma$ is the global learning rate of the whole neural network (e.g. for adder and BN layers), $\Delta L(F_l)$ is the gradient of the filter in layer $l$ and $\alpha_{l}$ is its corresponding local learning rate. As filters in AdderNets perform subtraction with the inputs, the magnitudes of the filters and inputs should be similar in order to extract meaningful information from the inputs. Because of the batch normalization layer, the magnitudes of the inputs in different layers have been normalized, which then suggests normalizing the magnitudes of the filters in different layers. The local learning rate can therefore be defined as:
+
+$$
+\alpha_ {l} = \frac {\eta \sqrt {k}}{\| \Delta L (F _ {l}) \| _ {2}}, \tag {13}
+$$
+
+where $k$ denotes the number of elements in $F_{l}$ , and $\eta$ is a hyper-parameter to control the learning rate of adder filters. By using the proposed adaptive learning rate scaling, the adder filters in different layers can be updated with nearly the same step. The training procedure of the proposed AdderNet is summarized in Algorithm 1.
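+
+A sketch of Eqs. (12)-(13) as a gradient-rescaling step applied before a standard SGD update is given below; treating the adder weights as a plain list and the choice $\eta = 0.1$ are assumptions for illustration.
+
+```python
+# Adaptive learning-rate scaling (Eq. (13)) applied to adder-filter gradients.
+import torch
+
+def scale_adder_gradients(adder_weights, eta=0.1, eps=1e-12):
+    """Rescale each filter gradient by alpha_l = eta * sqrt(k) / ||grad||_2."""
+    for w in adder_weights:
+        if w.grad is None:
+            continue
+        k = w.grad.numel()                              # number of elements in F_l
+        alpha = eta * (k ** 0.5) / (w.grad.norm(p=2) + eps)
+        w.grad.mul_(alpha)                              # gamma is applied by the optimizer
+
+# Toy usage: the global learning rate gamma is the optimizer's lr.
+w = torch.randn(16, 3, 3, 3, requires_grad=True)
+(w ** 2).sum().backward()
+scale_adder_gradients([w], eta=0.1)
+torch.optim.SGD([w], lr=0.1).step()
+```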
+
+# 4. Experiment
+
+In this section, we implement experiments to validate the effectiveness of the proposed AdderNets on several benchmark datasets, including MNIST, CIFAR and ImageNet. Ablation study and visualization of features are provided to further investigate the proposed method. The experiments are conducted on NVIDIA Tesla V100 GPU in PyTorch.
+
+# 4.1. Experiments on MNIST
+
+To illustrate the effectiveness of the proposed AdderNets, we first train a LeNet-5-BN [18] on the MNIST dataset. The images are resized to $32 \times 32$ and are pre-processed following [18]. The networks are optimized using Nesterov Accelerated Gradient (NAG), and the weight decay and the momentum were set as $5 \times 10^{-4}$ and 0.9, respectively. We train the networks for 50 epochs using the cosine learning rate decay [20] with an initial learning rate of 0.1. The batch size is set as 256. For the proposed AdderNets, we replace the convolutional filters in LeNet-5-BN with our adder filters. Since the fully connected layer can be regarded as a convolutional layer, we also replace the multiplications in the fully connected layers with subtractions. We set the hyper-parameter in Eq. (13) to $\eta = 0.1$ , which achieves the best performance among the values in the pool $\{1, \frac{1}{2}, \frac{1}{5}, \frac{1}{10}, \frac{1}{20}\}$ .
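+
+The training configuration above corresponds roughly to the following PyTorch setup; the placeholder model stands in for the LeNet-5-BN variant with adder filters, which is not shown here.
+
+```python
+# Sketch of the MNIST training setup: NAG, weight decay 5e-4, cosine decay.
+import torch
+
+model = torch.nn.Linear(32 * 32, 10)            # placeholder for LeNet-5-BN with adder filters
+optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
+                            weight_decay=5e-4, nesterov=True)
+scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)
+eta = 0.1                                        # hyper-parameter of Eq. (13)
+```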
+
+Table 2. Classification results on the CIFAR-10 and CIFAR-100 datasets.
+
+| Model | Method | #Mul. | #Add. | XNOR | CIFAR-10 | CIFAR-100 |
+| --- | --- | --- | --- | --- | --- | --- |
+| VGG-small | BNN | 0 | 0.65G | 0.65G | 89.80% | 65.41% |
+| VGG-small | AddNN | 0 | 1.30G | 0 | 93.72% | 72.64% |
+| VGG-small | CNN | 0.65G | 0.65G | 0 | 93.80% | 72.73% |
+| ResNet-20 | BNN | 0 | 41.17M | 41.17M | 84.87% | 54.14% |
+| ResNet-20 | AddNN | 0 | 82.34M | 0 | 91.84% | 67.60% |
+| ResNet-20 | CNN | 41.17M | 41.17M | 0 | 92.25% | 68.14% |
+| ResNet-32 | BNN | 0 | 69.12M | 69.12M | 86.74% | 56.21% |
+| ResNet-32 | AddNN | 0 | 138.24M | 0 | 93.01% | 69.02% |
+| ResNet-32 | CNN | 69.12M | 69.12M | 0 | 93.29% | 69.74% |
+
+The convolutional neural network achieves a $99.4\%$ accuracy with $\sim 435\mathrm{K}$ multiplications and $\sim 435\mathrm{K}$ additions. By replacing the multiplications in convolution with additions, the proposed AdderNet achieves a $99.4\%$ accuracy, which is the same as that of the CNN, with $\sim 870\mathrm{K}$ additions and almost no multiplications. In fact, the theoretical latency of multiplications in CPUs is also larger than that of additions and subtractions. There is an instruction table which lists the instruction latencies, throughputs and micro-operation breakdowns for Intel, AMD and VIA CPUs. For example, in the VIA Nano 2000 series, the latency of a float multiplication and a float addition is 4 and 2 cycles, respectively. The AdderNet using the LeNet-5 model would thus have a latency of $\sim 1.7\mathrm{M}$ cycles while the CNN would have $\sim 2.6\mathrm{M}$ cycles on this CPU. In conclusion, AdderNet can achieve accuracy similar to that of the CNN but with lower computational cost and latency. Note that since CUDA- and cuDNN-optimized adder convolutions are not yet available, we do not compare the actual inference time.
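+
+The latency estimate above can be reproduced with simple arithmetic, assuming 4 cycles per float multiplication and 2 cycles per float addition as listed for the cited CPU.
+
+```python
+# Back-of-the-envelope latency estimate for LeNet-5 on the cited CPU.
+muls, adds = 435_000, 435_000          # approximate op counts for the CNN
+cnn_latency = muls * 4 + adds * 2      # = 2,610,000 cycles (~2.6M)
+adder_latency = (2 * adds) * 2         # ~870K additions at 2 cycles -> 1,740,000 (~1.7M)
+print(cnn_latency, adder_latency)
+```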
+
+# 4.2. Experiments on CIFAR
+
+We then evaluate our method on the CIFAR datasets, which consist of $32 \times 32$ pixel RGB color images. Since binary networks [39] can use XNOR operations to replace multiplications, we also compare the results of binary neural networks (BNNs). We use the same data augmentation and pre-processing as He et al. [10] for training and testing. Following Zhou et al. [39], the learning rate is set to 0.1 in the beginning and then follows a polynomial learning rate schedule. The models are trained for 400 epochs with a batch size of 256. We follow the general setting in binary networks and set the first and last layers as full-precision convolutional layers. In AdderNets, we use the same setting for a fair comparison. The hyper-parameter $\eta$ is set to 0.1 following the experiments on the MNIST dataset.
+
+The classification results are reported in Table 2. Since the computational costs of the batch normalization layer, the first layer and the last layer are significantly lower than those of the other layers, we omit these layers when counting FLOPs. We first evaluate the VGG-small model [4] on the CIFAR-10 and CIFAR-100 datasets. As a result, the AdderNets
+
+achieve nearly the same results (93.72% on CIFAR-10 and 72.64% on CIFAR-100) as CNNs (93.80% on CIFAR-10 and 72.73% on CIFAR-100) with no multiplications. Although the model size of the BNN is much smaller than those of the AdderNet and CNN, its accuracies are much lower (89.80% on CIFAR-10 and 65.41% on CIFAR-100). We then turn to the widely used ResNet models (ResNet-20 and ResNet-32) to further investigate the performance of different networks. As for ResNet-20, the convolutional neural networks achieve the highest accuracy (i.e. 92.25% on CIFAR-10 and 68.14% on CIFAR-100) but with a large number of multiplications (41.17M). The proposed AdderNets achieve a 91.84% accuracy on CIFAR-10 and a 67.60% accuracy on CIFAR-100 without multiplications, which is comparable with CNNs. In contrast, the BNNs only achieve 84.87% and 54.14% accuracies on CIFAR-10 and CIFAR-100. The results on ResNet-32 also suggest that the proposed AdderNets can achieve results similar to those of conventional CNNs.
+
+# 4.3. Experiments on ImageNet
+
+We next conduct experiments on the ImageNet dataset [17], which consists of $224 \times 224$ pixel RGB color images. We use the ResNet-18 model to evaluate the proposed AdderNets, following the same data augmentation and pre-processing as He et al. [10]. We train the AdderNets for 150 epochs utilizing the cosine learning rate decay [20]. These networks are optimized using Nesterov Accelerated Gradient (NAG), and the weight decay and the momentum are set as $10^{-4}$ and 0.9, respectively. The batch size is set as 256 and the hyper-parameter of AdderNets is the same as that in the CIFAR experiments.
+
+Table 3 shows the classification results on the ImageNet dataset by exploiting different neural networks. The convolutional neural network achieves a $69.8\%$ top-1 accuracy and an $89.1\%$ top-5 accuracy in ResNet-18. However, there are 1.8G multiplications in this model, which bring enormous computational complexity. Since the addition operation has a smaller computational cost than multiplication, we propose AdderNets to replace the multiplications in CNNs with subtractions. As a result, our AdderNet achieves a $67.0\%$ top-1 accuracy and an $87.6\%$ top-5 accuracy in
+
+Table 3. Classification results on the ImageNet dataset.
+
+| Model | Method | #Mul. | #Add. | #XNOR | Top-1 Acc. | Top-5 Acc. |
+| --- | --- | --- | --- | --- | --- | --- |
+| ResNet-18 | BNN | 0 | 1.8G | 1.8G | 51.2% | 73.2% |
+| ResNet-18 | AddNN | 0 | 3.6G | 0 | 67.0% | 87.6% |
+| ResNet-18 | CNN | 1.8G | 1.8G | 0 | 69.8% | 89.1% |
+| ResNet-50 | BNN | 0 | 3.9G | 3.9G | 55.8% | 78.4% |
+| ResNet-50 | AddNN | 0 | 7.7G | 0 | 74.9% | 91.7% |
+| ResNet-50 | CNN | 3.9G | 3.9G | 0 | 76.2% | 92.9% |
+
+
+Figure 2. Visualization of filters in the first layer of LeNet-5-BN on the MNIST dataset: (a) filters of AdderNets; (b) filters of CNNs. Both can extract useful features for image classification.
+
+These results demonstrate that the adder filters can extract useful information from images. Rastegari et al. [22] proposed XNOR-Net to replace the multiplications in neural networks with XNOR operations. Although the BNN can achieve a high speed-up and compression ratio, it achieves only a $51.2\%$ top-1 accuracy and a $73.2\%$ top-5 accuracy with ResNet-18, which is much lower than the proposed AdderNet. We then conduct experiments on a deeper architecture (ResNet-50). The BNN only achieves a $55.8\%$ top-1 accuracy and a $78.4\%$ top-5 accuracy with ResNet-50. In contrast, the proposed AdderNets achieve a $74.9\%$ top-1 accuracy and a $91.7\%$ top-5 accuracy, which is close to that of the CNN ($76.2\%$ top-1 accuracy and $92.9\%$ top-5 accuracy).
+
+# 4.4. Visualization Results
+
+Visualization on features. The AdderNets utilize the $\ell_1$ -distance to measure the relationship between filters and input features instead of cross correlation in CNNs. Therefore, it is important to further investigate the difference of the feature space in AdderNets and CNNs. We train a LeNet++ on the MNIST dataset following [33], which has six convolutional layers and a fully-connected layer for extracting powerful 3D features. Numbers of neurons in each convolutional layer are 32, 32, 64, 64, 128, 128, and 2, respectively. For the proposed AdderNets, the last fully connected layers are replaced with the proposed add filters.
+
+The visualization results are shown in Figure 1. The convolutional neural network calculates the cross-correlation between filters and inputs. If the filters and inputs are approximately normalized, the convolution operation is equivalent to computing the cosine distance between two vectors. This is probably why features of different classes are separated by their angles in Figure 1. In contrast, AdderNets utilize the $\ell_1$-norm to distinguish different classes, so features tend to be clustered towards different class centers. The visualization results demonstrate that the proposed AdderNets can have a discrimination ability for classifying images similar to that of CNNs.
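+
+To make the contrast concrete, the sketch below computes an adder-style layer output as the negative $\ell_1$ distance between each filter and each input patch, using `torch.nn.functional.unfold`. This is an illustrative re-implementation written for clarity, not the paper's optimized layer, and the helper name `adder2d` is ours.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def adder2d(x, weight, stride=1, padding=0):
+    """out[n, c_out, i, j] = -sum |weight[c_out] - patch[n, i, j]|  (negative l1 distance)."""
+    n, c_in, h, w = x.shape
+    c_out, _, kh, kw = weight.shape
+    # Extract sliding patches: (n, c_in*kh*kw, L) with L = h_out * w_out.
+    patches = F.unfold(x, (kh, kw), stride=stride, padding=padding)
+    w_flat = weight.view(c_out, -1)                                # (c_out, c_in*kh*kw)
+    # Negative l1 distance between every filter and every patch.
+    out = -(patches.unsqueeze(1) - w_flat[None, :, :, None]).abs().sum(dim=2)
+    h_out = (h + 2 * padding - kh) // stride + 1
+    w_out = (w + 2 * padding - kw) // stride + 1
+    return out.view(n, c_out, h_out, w_out)
+
+x = torch.randn(2, 3, 8, 8)
+w = torch.randn(16, 3, 3, 3)
+y = adder2d(x, w, padding=1)      # shape: (2, 16, 8, 8)
+```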
+
+Visualization on filters. We visualize the filters of the LeNet-5-BN network in Figure 2. Although the AdderNets and CNNs utilize different distance metrics, filters of the proposed adder networks (see Figure 2 (a)) still share some similar patterns with convolution filters (see Figure 2 (b)). The visualization experiments further demonstrate that the filters of AdderNets can effectively extract useful information from the input images and features.
+
+Visualization on distribution of weights. We then visualize the distribution of the weights of the 3rd convolutional layer of LeNet-5-BN. As shown in Figure 4, the distribution of weights in AdderNets is close to a Laplace distribution, while that in CNNs looks more like a Gaussian distribution. In fact, the prior distribution corresponding to the $\ell_1$-norm is the Laplace distribution [27] and that corresponding to the $\ell_2$-norm is the Gaussian distribution [24], and the $\ell_2$-norm is equivalent to the cross-correlation, which will be analyzed in the supplementary material.
+
+# 4.5. Ablation Study
+
+We propose to use the full-precision gradient to update the adder filters and design an adaptive learning rate scaling to deal with different layers in AdderNets. It is essential to evaluate the effectiveness of these components. We first train LeNet-5-BN without changing its learning rate, which results in $54.91\%$ and $29.26\%$ accuracies using the full-precision gradient and the sign gradient, respectively. The networks can hardly be trained since the gradients are very small. Therefore, it is necessary to increase the learning rate of the adder filters.
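+
+As a minimal sketch of such per-layer scaling, the snippet below assumes the adaptive factor has the form $\eta\sqrt{k}/\lVert g_l\rVert_2$ (with $k$ the number of elements in the layer's filter gradient $g_l$), which is how the adaptive learning rate is defined in the method section of the paper and is not restated in this excerpt; treat that form, and the function names, as assumptions.
+
+```python
+import torch
+
+def adaptive_lr_scale(grad, eta=0.1, eps=1e-12):
+    """Per-layer factor alpha_l = eta * sqrt(k) / ||g_l||_2 (assumed form, see text)."""
+    k = grad.numel()
+    return eta * (k ** 0.5) / (grad.norm(p=2) + eps)
+
+def apply_adder_update(adder_filters, global_lr=0.1, eta=0.1):
+    """Scale each adder filter's full-precision gradient before the SGD step.
+    `adder_filters` is a hypothetical list of the network's adder filter tensors."""
+    with torch.no_grad():
+        for filt in adder_filters:
+            if filt.grad is None:
+                continue
+            alpha = adaptive_lr_scale(filt.grad, eta)
+            filt -= global_lr * alpha * filt.grad
+```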
+
+We directly increase the learning rate for the filters in AdderNets by a factor of 100, which achieves the best performance with the full-precision gradient among the values in the pool $\{10, 50, 100, 200, 500\}$. As shown in Figure 3, the AdderNets using the adaptive learning rate (ALR) and the increased learning rate (ILR) achieve $97.99\%$ and $97.72\%$ accuracy with the sign gradient, which is much lower than the accuracy of the CNN (99.40%). Therefore, we propose the full-precision gradient to precisely update the weights in AdderNets. As a result, the AdderNet with ILR achieves a 98.99% accuracy using the full-precision gradient. By using the adaptive learning rate (ALR), the AdderNet achieves a 99.40% accuracy, which demonstrates the effectiveness of the proposed ALR method.
+
+Figure 3. Learning curves of AdderNets using different optimization schemes: (a) accuracy; (b) loss. FP and Sgn gradient denote the full-precision and sign gradients, respectively. The proposed adaptive learning rate scaling with the full-precision gradient achieves the highest accuracy (99.40%) with the smallest loss.
+
+Figure 4. Histograms of the weights of AdderNet (left) and CNN (right). The weights of AdderNets follow a Laplace distribution while those of CNNs follow a Gaussian distribution.
+
+Table 4. The impact of parameter $\eta$ using LeNet-5-BN on the MNIST dataset
+
+| η | 1 | 0.5 | 0.2 | 0.1 | 0.05 |
+| --- | --- | --- | --- | --- | --- |
+| Acc. (%) | 99.26 | 99.30 | 99.35 | 99.40 | 99.32 |
+
+Impact of parameters. As discussed above, the proposed adaptive learning rate scaling has a hyper-parameter $\eta$. We test its impact on the accuracy of AdderNet by conducting experiments on the MNIST dataset, using LeNet-5-BN as the backbone; other experimental settings are the same as those in Sec. 4.1. As shown in Table 4, the AdderNet trained with the adaptive learning rate scaling achieves the highest accuracy (99.40%) when $\eta = 0.1$. Based on the above analysis, we keep this hyper-parameter setting for the proposed method.
+
+# 5. Conclusions
+
+The role of the classical convolutions used in deep CNNs is to measure the similarity between features and filters, and we are motivated to replace convolutions with a more efficient similarity measure. In this work we investigate the feasibility of replacing multiplications with additions. An AdderNet is explored to effectively use addition to build deep neural networks with low computational costs; these networks calculate the $\ell_1$-norm distance between features and filters. A corresponding optimization method is developed using regularized full-precision gradients. Experiments conducted on benchmark datasets show that AdderNets can well approximate the performance of CNNs with the same architectures, which could have a huge impact on future hardware design. Visualization results also demonstrate that the adder filters are promising replacements for the original convolution filters in computer vision tasks. In future work, we will investigate quantization of AdderNets to achieve higher speed-up and lower energy consumption, as well as the generality of AdderNets beyond image classification, e.g., for detection and segmentation tasks.
+
+# Acknowledgement
+
+We thank anonymous reviewers for their helpful comments. This work is supported by National Natural Science Foundation of China under Grant No. 61876007, 61872012, National Key R&D Program of China (2019YFF0302902), Beijing Academy of Artificial Intelligence (BAAI), and Australian Research Council under Project DE-180101438.
+
+# References
+
+[1] Jeremy Bernstein, Kamyar Azizzadenesheli, Yu-Xiang Wang, and Anima Anandkumar. Convergence rate of sign stochastic gradient descent for non-convex functions. 2018. 4
+[2] Jeremy Bernstein, Yu-Xiang Wang, Kamyar Azizzadenesheli, and Anima Anandkumar. signsgd: Compressed optimisation for non-convex problems. arXiv preprint arXiv:1802.04434, 2018. 4
+[3] Roberto Brunelli. Template matching techniques in computer vision: theory and practice. John Wiley & Sons, 2009. 4
+[4] Zhaowei Cai, Xiaodong He, Jian Sun, and Nuno Vasconcelos. Deep learning with low precision by half-wave gaussian quantization. In CVPR, pages 5918-5926, 2017. 1, 6
+[5] Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In NeurIPS, pages 3123-3131, 2015. 1
+[6] Emily L Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In NeurIPS, 2014. 2
+[7] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249-256, 2010. 4
+[8] Kai Han, Yunhe Wang, Qi Tian, Jianyuan Guo, Chunjing Xu, and Chang Xu. Ghostnet: More features from cheap operations. arXiv preprint arXiv:1911.11907, 2019. 3
+[9] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015. 2
+[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770-778, 2016. 3, 6
+[11] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. 3
+[12] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017. 2
+[13] Hengyuan Hu, Rui Peng, Yu-Wing Tai, and Chi-Keung Tang. Network trimming: A data-driven neuron pruning approach towards efficient deep architectures. arXiv preprint arXiv:1607.03250, 2016. 2
+[14] Ting-Kuei Hu, Tianlong Chen, Haotao Wang, and Zhangyang Wang. Triple wins: Boosting accuracy, robustness and efficiency together by enabling input-adaptive inference. arXiv preprint arXiv:2002.10025, 2020. 2
+[15] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks. In NeurIPS, pages 4107-4115, 2016. 1
+
+[16] Felix Juefei-Xu, Vishnu Naresh Boddeti, and Marios Savvides. Perturbative neural networks. In CVPR, pages 3310-3318, 2018. 3
+[17] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NeurIPS, pages 1097-1105, 2012. 1, 6
+[18] Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. 5
+[19] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, pages 3431-3440, 2015. 1
+[20] Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016. 5, 6
+[21] Jian-Hao Luo, Jianxin Wu, and Weiyao Lin. Thinet: A filter level pruning method for deep neural network compression. In ICCV, pages 5058-5066, 2017. 2
+[22] Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. In ECCV, pages 525-542. Springer, 2016. 1, 7
+[23] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In NeurIPS, pages 91-99, 2015. 1
+[24] Jason Rennie. On L2-norm regularization and the Gaussian prior. 2003. 7
+[25] Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014. 3
+[26] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. 1
+[27] Stephen M Stigler. The history of statistics: The measurement of uncertainty before 1900. Harvard University Press, 1986. 7
+[28] Chen Wang, Jianfei Yang, Lihua Xie, and Junsong Yuan. Kervolutional neural networks. In CVPR, pages 31-40, 2019. 4
+[29] Yue Wang, Ziyu Jiang, Xiaohan Chen, Pengfei Xu, Yang Zhao, Yingyan Lin, and Zhangyang Wang. E2-train: Training state-of-the-art cnns with over $80\%$ energy savings. In NeurIPS, pages 5139-5151, 2019. 2
+[30] Yunhe Wang, Chang Xu, Chunjing Xu, Chao Xu, and Dacheng Tao. Learning versatile filters for efficient convolutional neural networks. In NeurIPS, pages 1608-1618, 2018. 3
+[31] Yunhe Wang, Chang Xu, Shan You, Dacheng Tao, and Chao Xu. Cnnpack: Packing convolutional neural networks in the frequency domain. In NeurIPS, pages 253-261, 2016. 2
+[32] Yandong Wen, Kaipeng Zhang, Zhifeng Li, and Yu Qiao. A discriminative feature learning approach for deep face recognition. In ECCV, pages 499-515. Springer, 2016. 1
+[33] Yandong Wen, Kaipeng Zhang, Zhifeng Li, and Yu Qiao. A discriminative feature learning approach for deep face recognition. In ECCV, 2016. 7
+
+[34] Bichen Wu, Alvin Wan, Xiangyu Yue, Peter Jin, Sicheng Zhao, Noah Golmant, Amir Gholaminejad, Joseph Gonzalez, and Kurt Keutzer. Shift: A zero flop, zero parameter alternative to spatial convolutions. In CVPR, pages 9127-9135, 2018. 3
+[35] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In CVPR, pages 1492-1500, 2017. 3
+[36] Junho Yim, Donggyu Joo, Jihoon Bae, and Junmo Kim. A gift from knowledge distillation: Fast optimization, network minimization and transfer learning. In CVPR, pages 4133-4141, 2017. 3
+[37] Shan You, Chang Xu, Chao Xu, and Dacheng Tao. Learning from multiple teacher networks. In SIGKDD, pages 1285-1294. ACM, 2017. 3
+[38] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In CVPR, pages 6848-6856, 2018. 3
+[39] Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016. 1, 6
\ No newline at end of file
diff --git a/addernetdowereallyneedmultiplicationsindeeplearning/images.zip b/addernetdowereallyneedmultiplicationsindeeplearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..16722e1ea5c5d3e70f8cc1853e749e126159da4d
--- /dev/null
+++ b/addernetdowereallyneedmultiplicationsindeeplearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a5ec5beae0f5d6fb88a8a2251c71a5a056a95244b548cffa6b8568d0d8967252
+size 368790
diff --git a/addernetdowereallyneedmultiplicationsindeeplearning/layout.json b/addernetdowereallyneedmultiplicationsindeeplearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..99e44dcf0367ff9804133750381091af9b855188
--- /dev/null
+++ b/addernetdowereallyneedmultiplicationsindeeplearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5a4d35a5302ef48e850102969fd358f27ca8b0af4a180b6d05c4c99b373d781f
+size 451486
diff --git a/adinetattributedrivenincrementalnetworkforretinalimageclassification/dd52ad85-3b11-4777-b5c2-d8165195ee7e_content_list.json b/adinetattributedrivenincrementalnetworkforretinalimageclassification/dd52ad85-3b11-4777-b5c2-d8165195ee7e_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a69293efb1e54ee990462e534d0816f664552991
--- /dev/null
+++ b/adinetattributedrivenincrementalnetworkforretinalimageclassification/dd52ad85-3b11-4777-b5c2-d8165195ee7e_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7550e07fa773918e08835a5749be6f8043140501edc93d4a1750a7486b8b142d
+size 73673
diff --git a/adinetattributedrivenincrementalnetworkforretinalimageclassification/dd52ad85-3b11-4777-b5c2-d8165195ee7e_model.json b/adinetattributedrivenincrementalnetworkforretinalimageclassification/dd52ad85-3b11-4777-b5c2-d8165195ee7e_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..1e871235ec13e707c783fea296fdf83074347ca8
--- /dev/null
+++ b/adinetattributedrivenincrementalnetworkforretinalimageclassification/dd52ad85-3b11-4777-b5c2-d8165195ee7e_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9b500c2465cab655736ca3b526f17c345aa76b48508f09355bd632a34830e2fc
+size 90854
diff --git a/adinetattributedrivenincrementalnetworkforretinalimageclassification/dd52ad85-3b11-4777-b5c2-d8165195ee7e_origin.pdf b/adinetattributedrivenincrementalnetworkforretinalimageclassification/dd52ad85-3b11-4777-b5c2-d8165195ee7e_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..08c1591cc218be82f720080c808e7c8dd124fb70
--- /dev/null
+++ b/adinetattributedrivenincrementalnetworkforretinalimageclassification/dd52ad85-3b11-4777-b5c2-d8165195ee7e_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e73656e608c486da8de29bb31bc904db63cdbb5c129ebb69cd804e9e7827ac8b
+size 1035553
diff --git a/adinetattributedrivenincrementalnetworkforretinalimageclassification/full.md b/adinetattributedrivenincrementalnetworkforretinalimageclassification/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f754f48fc6b9430c57ce6544de7e7ffc0cc2a35a
--- /dev/null
+++ b/adinetattributedrivenincrementalnetworkforretinalimageclassification/full.md
@@ -0,0 +1,345 @@
+# ADINet: Attribute driven incremental network for retinal image classification
+
+Qier Meng
+Research Center for Medical Bigdata, National Institute of Informatics
+qiermeng@nii.ac.jp
+
+# Abstract
+
+Retinal diseases encompass a variety of types, including different diseases and severity levels. Training a model with all possible types of disease is impractical. Dynamically training a model is necessary when a patient with a new disease appears. Deep learning techniques have stood out in recent years, but they suffer from catastrophic forgetting, i.e., a dramatic decrease in performance when new training classes appear. We found that keeping the feature distribution of a teacher model helps maintain the performance of incremental learning. In this paper, we design a framework named "Attribute Driven Incremental Network" (ADINet), a new architecture that integrates class label prediction and attribute prediction into an incremental learning framework to boost the classification performance. With image-level classification, we apply knowledge distillation (KD) to retain the knowledge of base classes. With attribute prediction, we calculate the weight of each attribute of an image and use these weights for more precise attribute prediction. We designed attribute distillation (AD) loss to retain the information of base class attributes as new classes appear. This incremental learning can be performed multiple times with a moderate drop in performance. The results of an experiment on our private retinal fundus image dataset demonstrate that our proposed method outperforms existing state-of-the-art methods. For demonstrating the generalization of our proposed method, we test it on the ImageNet-150K-sub dataset and show good performance.
+
+# 1. Introduction
+
+Retinal diseases encompass a variety of types, including different diseases and severities; there are typically dozens of types of retinal diseases [29]. Since diseases also change across different stages, it is difficult to collect all of the disease types at the same time to train a model, especially for diseases such as retinal vein occlusion or central serous chorioretinopathy. Dynamically training a model is necessary when a patient with a new
+
+Satoh Shin'ichi
+Research Center for Medical Bigdata, National Institute of Informatics
+satoh@nii.ac.jp
+
+
+Figure 1. Examples of several retinal diseases. The figure illustrates our basic idea: we learn from images with image-level labels and the corresponding attributes of the base classes, and we predict the labels and attributes of new classes with the teacher model.
+
+disease appears. In medical imaging applications, there is an increasing demand for systems that can perform incremental learning over a series of tasks, because it is often difficult to obtain all of the old data due to the privacy of disease datasets. Deep convolutional networks have achieved great performance in classification tasks in computer vision. However, the incremental learning paradigm still suffers from catastrophic forgetting, i.e., a performance decrease on base classes when datasets for new classes appear [27]. In real-world object classification, systems must be continuously upgraded by learning new knowledge. However, always retraining a model with old data and new data together is impractical [30]. When a new type of data appears, a natural way of performing incremental learning is to fine-tune a pre-trained model on the new data. Figure 1 shows the basic idea of our proposed method and example images from a retinal disease dataset.
+
+Visual attributes are an important research area in computer vision because they can be a powerful mid-level representation that bridges low-level features and high-level human recognition. Attributes used as mid-level representations have been investigated for many years in a variety of computer vision tasks, including recognition, classification, and retrieval. In fundus imaging, attributes are summarized from the case histories of the disease symptoms of each patient. For image classification, we found that good attribute prediction helps boost classification performance.
+
+Given the scenarios described above, there are two straightforward solutions for learning new classes without forgetting the base classes: (1) preserving the parameters of the original model, namely adding newly initialized output layers to the original model and tuning the whole network [34, 14], and (2) preserving the knowledge of the base classes in the original model with techniques such as knowledge distillation (KD) [11, 25]. However, while these methods can alleviate catastrophic forgetting to some extent, the overall classification performance remains significantly worse than that of classical joint learning.
+
+The main contribution of our work is to provide an attribute-based incremental learning approach. We hypothesize that attribute prediction for each class can be used to encode representations of models, and attribute prediction for teacher and student models can be constrained by using attribute distillation (AD) loss, as explained in Sec. 3.5. AD loss helps a model remember some visual knowledge. By integrating attribute prediction, we boost the performance of image classification.
+
+Another contribution is that our model can classify different classes and predict the attributes of each image. We asked ophthalmologists to provide attribute annotations for each class instead of each image. To predict attributes precisely, we calculate the weight of each attribute in each image based on the information entropy of each attribute. Then, the weight of each attribute is integrated into a fully-connected layer to predict the attributes, and the predicted attribute information is integrated into an incremental learning framework. Our paper is the first to use the feature distribution to retain the knowledge of base classes and thereby boost incremental learning performance. The resulting framework outperforms existing state-of-the-art methods on our private retinal fundus image dataset. We found that the proposed method generalizes to other domains, so we also experiment on a public dataset, ImageNet-150K [22], which contains attribute annotations made by experts.
+
+# 2. Related work
+
+Work related to the proposed method can be summarized into two categories: incremental learning and attribute learning. The following is an explanation of the connections and differences between our work and these methods in terms of corresponding aspects.
+
+# 2.1. Incremental learning
+
+Incremental learning has been a long-standing problem in machine learning [3, 15]. Because the training procedure is incremental, the main problem is overcoming catastrophic forgetting. Building on the success of deep learning, existing works can be categorized into two types: parameter-based and distillation-based. Parameter-based methods estimate the weight parameters of a teacher model and a student model according to the importance of the network weights. MAS [1] also studies the importance of the weights of a network in an unsupervised and continuous manner; when new data appears, changes to important parameters can be penalized to prevent forgetting of previous knowledge. Distillation-based methods mainly rely on knowledge distillation. Knowledge distillation [11] is an effective way to transfer knowledge from one network to another. The first application of KD to incremental learning is Learning without Forgetting (LwF) [20], where a modified cross-entropy loss is used to preserve the knowledge in a teacher model. Hou et al. [12] propose a framework for distilling previous knowledge from the base classes via distillation and retrospection. Castro et al. [2] propose an end-to-end incremental framework that uses KD loss to retain knowledge while cross-entropy loss is used to classify the new types. Rebuffi et al. [26] select exemplars near the class mean and use KD loss to distill knowledge from them.
+
+# 2.2. Attribute learning
+
+Attribute learning has attracted much attention for image classification on large-scale datasets [16, 7]. Learning visual attributes is beneficial for boosting classification performance [17]. The attribute descriptions of an instance or category are useful as a semantically meaningful intermediate representation that bridges the gap between low-level features and high-level class concepts. [31] proposed a joint learning architecture for face recognition and attribute prediction. [32] proposed a multi-task learning mechanism for increasing the discrepancy between different classes. [19] addressed the large-scale content-based face image retrieval problem by learning a binary code composed of different attributes. Incremental learning has also been applied to the study of attributes in several works [13, 5, 33].
+
+# 3. Proposed method
+
+# 3.1. Motivation
+
+Our approach is motivated by recent works on KD. In the incremental learning procedure, we use KD to retain the knowledge of the base classes. We also design the AD loss to preserve the knowledge of the attributes of the base classes. For attribute prediction, we estimate the weight of each attribute in each image. The weight is used to calculate the representations used for predicting the attributes of each image, which helps predict the attributes precisely and boosts the classification performance.
+
+
+Figure 2. Framework of proposed method. We perform image-level classification and attribute prediction at same time. In attribute prediction, we estimate weight of each attribute for predicting attributes precisely. For distillation, we use KD loss for image-level classification and AD loss for attribute prediction.
+
+Our loss function is specially designed to make use of partially labelled attributes, which is the more common situation in the real world. All of these procedures proceed incrementally. Our proposed method aims to improve the performance of incremental classification and to predict the attributes of each image using only per-class attribute annotations instead of per-image attribute annotations. A flowchart is shown in Fig. 2.
+
+# 3.2. Problem description
+
+In this section, we explain the symbols used in our proposed method. Assume that we have $N$ training images expressed as $X = [x_{1}, x_{2}, \ldots, x_{N}] \in \mathbb{R}^{d \times N}$, where $x_{i} \in \mathbb{R}^{d}$ is the $i^{th}$ image with a $d$-dimensional representation. We also have the ground-truth class label $l \in [1, 2, \ldots, P]$, where $P$ is the number of labels, and the ground-truth annotation of the attributes of all categories in the form of a class-attribute annotation matrix $A \in \{0, 1, 2\}^{P \times T}$, where $T$ is the number of attributes per class. $a_{l,j}$ is the element of matrix $A$ for the $j^{th}$ attribute of the $l^{th}$ category: $a_{l,j} = 1$ or $0$ indicates that the attribute is present or absent, respectively, and $a_{l,j} = 2$ denotes that the $j^{th}$ attribute is unrelated to the $l^{th}$ category. In the latter case the attribute label is missing, i.e., the attribute cannot provide useful information for classifying this category. In our proposed method, we perform image-level classification and attribute prediction at the same time. We also calculate the weight of each attribute $w_{l,j}$ to specify the contribution of the $j^{th}$ attribute to the $l^{th}$ class. We denote $p_l$ as the prediction of each class label $l$, $p_{l,j}$ as the prediction of each attribute label, and $w_{l,j}$ as the weight of each attribute.
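+
+For concreteness, the small sketch below builds such a class-attribute annotation matrix and the mask used later to ignore missing attributes; the dimensions match our fundus dataset (20 classes, 24 attributes), while the individual entries are purely illustrative.
+
+```python
+import numpy as np
+
+P, T = 20, 24                      # 20 disease classes, 24 attributes (fundus dataset)
+# Class-attribute annotation matrix A in {0, 1, 2}^{P x T}:
+#   1 = attribute present for the class, 0 = absent, 2 = unrelated / label missing.
+A = np.full((P, T), 2, dtype=np.int64)
+A[1, 0] = 1                        # illustrative: class 1 has attribute 0
+A[0, 3] = 0                        # illustrative: class 0 lacks attribute 3
+
+mask = (A != 2)                    # I(a_{l,j} != 2): only these entries enter the attribute loss
+```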
+
+# 3.3. Image-level classification
+
+We used ResNet50 [10] as the backbone of our proposed method since the ResNet architecture performs well in image classification. The global feature, i.e., a fully-connected layer $fc$ after avg-pooling of ResidualBlock4, is fed to $T + 1$ classifiers as $[fc^1, fc^2, \dots, fc^{T + 1}]$ . $fc^1$ is used for image-level classification. $fc^2$ to $fc^{T + 1}$ are used for classifying the attributes of each category.
+
+In image-level classification, we use a softmax layer to obtain the class prediction after the layer of $fc^{1}$ . This classification is multi-class classification.
+
+# 3.4. Weight estimation and attribute prediction
+
+Even within the same class, the same attribute contributes differently in different images, so treating all attributes as equally informative degrades the prediction performance. To precisely reflect the amount of information each attribute carries in each image, we estimate the weight of each attribute and perform attribute prediction as described in this section. We adopt ResNet50 as the feature extractor and then apply $T$ attribute predictors, each consisting of a fully-connected layer and a sigmoid layer, to perform attribute prediction. Figure 3 shows the procedure of weight estimation.
+
+We first feed an image into ResNet50 and obtain the initial prediction $p_{l,j}$ of the $j^{th}$ attribute of the $l^{th}$ category. Then, we calculate the entropy of this attribute to represent its information amount. The entropy is calculated as:
+
+$$
+\mathrm{Entropy}(p_{l,j}) = -\frac{1}{2}\left(p_{l,j}\log(p_{l,j}) + (1 - p_{l,j})\log(1 - p_{l,j})\right). \tag{1}
+$$
+
+After we calculate the entropy of each attribute in each class, we calculate the exponent of each entropy as
+
+
+Figure 3. Framework of weight estimation and attribute prediction
+
+$\operatorname{Conf}(p_{l,j})$ and then normalize these exponents to obtain the weights:
+
+$$
+\mathrm{Conf}(p_{l,j}) = \begin{cases} e^{\frac{\mathrm{Entropy}(p_{l,j})}{\sigma^{2}}}, & a_{l,j} \neq 2, \\ 0, & a_{l,j} = 2, \end{cases} \tag{2}
+$$
+
+$$
+w_{l,j} = \frac{\mathrm{Conf}(p_{l,j})}{\sum_{j=1}^{T} \mathrm{Conf}(p_{l,j})}, \tag{3}
+$$
+
+This weight indicates how much an attribute contributes to distinguishing classes. When $a_{l,j} = 2$, the attribute contributes nothing to distinguishing this class, so we set $\mathrm{Conf}(p_{l,j})$ to 0. We multiply these weights with the fully-connected layer outputs to obtain the new fully-connected layer output $fc_{l,j}^{\mathrm{new}}$ of attribute $j$ for class $l$:
+
+$$
+fc_{l,j}^{\mathrm{new}} = w_{l,j}\, fc_{l,j}, \tag{4}
+$$
+
+$fc_{l,j}^{\mathrm{new}}$ is then sent to the attribute predictor, which consists of a fully-connected layer and a sigmoid layer. The attribute predictor is trained with a modified binary cross-entropy loss on the attribute labels.
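+
+A minimal sketch of Eqs. (1)-(4) is given below, assuming `p` holds the initial sigmoid predictions for one image's $T$ attributes and `a` holds the {0, 1, 2} class-attribute annotations of that image's class; the value of $\sigma$ is not given in this excerpt, and 1.0 is only a placeholder.
+
+```python
+import torch
+
+def attribute_weights(p, a, sigma=1.0, eps=1e-12):
+    """Eqs. (1)-(3): entropy per attribute, confidence with missing entries zeroed, normalization."""
+    p = p.clamp(eps, 1 - eps)
+    entropy = -0.5 * (p * p.log() + (1 - p) * (1 - p).log())            # Eq. (1)
+    conf = torch.where(a != 2, torch.exp(entropy / sigma ** 2),
+                       torch.zeros_like(entropy))                        # Eq. (2)
+    return conf / conf.sum().clamp_min(eps)                              # Eq. (3)
+
+# Eq. (4): reweight the per-attribute fully-connected outputs before the predictors.
+p_init = torch.rand(24)                 # initial attribute predictions (illustrative)
+a_label = torch.randint(0, 3, (24,))    # class-attribute annotations (illustrative)
+w = attribute_weights(p_init, a_label, sigma=1.0)
+fc = torch.randn(24)                    # per-attribute fc outputs (illustrative)
+fc_new = w * fc                         # fc^new_{l,j} = w_{l,j} * fc_{l,j}
+```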
+
+# 3.5. Loss function
+
+The whole framework is trained in an incremental manner; namely, at incremental learning step $t$, we define the teacher model as $N_{t}$ and the student model as $N_{t+1}$. $N_{t}$ is trained on the $n$ base classes, and $N_{t+1}$ is trained on the $n$ base classes and the $m$ new classes by adding $m$ neurons to the output layer of $N_{t}$. The weight parameters of the student model are initialized with the parameters of the teacher model, except for the newly added neurons in the output layer, which are randomly initialized.
+
+To alleviate the catastrophic forgetting of base classes while training the data of new classes, we leverage KD in the loss function [11]. Instead of using hard labels to train the loss function, KD loss uses the teacher model's output as the ground-truth labels to train the student model. For the image-level classification, we jointly optimize KD loss on the base classes and cross-entropy loss on the new classes to achieve good classification performance.
+
+For the classification part, we back-propagate the loss of image-level classification and attribute classification. The loss function for classification $L_{cls}$ is defined as:
+
+$$
+L_{cls} = L_{\mathrm{category}} + \alpha L_{\mathrm{attribute}}, \tag{5}
+$$
+
+$$
+L_{\mathrm{category}} = -\sum_{l=1}^{n+m} l_{l} \log(p_{l}), \tag{6}
+$$
+
+$$
+L_{l,j} = -\mathbb{I}(a_{l,j} \neq 2)\left(a_{l,j}\log(p_{l,j}) + (1 - a_{l,j})\log(1 - p_{l,j})\right), \tag{7}
+$$
+
+$$
+L_{\mathrm{attribute}} = \sum_{l=1}^{n+m}\sum_{j=1}^{T} L_{l,j}, \tag{8}
+$$
+
+where $L_{\mathrm{category}}$ is the loss function for image-level classification and $l_{l}$ is the ground-truth disease label. $L_{\mathrm{attribute}}$ is the loss function for attribute classification, $L_{l,j}$ is a modified cross-entropy loss, and $a_{l,j}$ is the ground-truth attribute label. In $L_{l,j}$, $\mathbb{I}(\mathrm{cond.})$ is 1 when the condition is true and 0 otherwise. When an attribute label is missing (i.e., $a_{l,j} = 2$), we have $\mathbb{I}(a_{l,j} \neq 2) = 0$, so the corresponding term contributes nothing to the loss. $\alpha$ is a trade-off parameter set to 0.5. $L_{cls}$ is the loss function for the classification part of our work.
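+
+The sketch below spells out Eqs. (5)-(8) for a batch, assuming `attr_pred` holds sigmoid probabilities and `attr_target` holds the {0, 1, 2} class-attribute annotations expanded per image; the batch reduction is an assumption, since the equations are written per class.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def classification_loss(class_logits, label, attr_pred, attr_target, alpha=0.5, eps=1e-12):
+    """Eq. (5): L_cls = L_category + alpha * L_attribute (alpha = 0.5)."""
+    l_category = F.cross_entropy(class_logits, label)                       # Eq. (6)
+    mask = (attr_target != 2).float()                                       # I(a_{l,j} != 2)
+    p = attr_pred.clamp(eps, 1 - eps)
+    t = attr_target.float() * mask                                          # masked {0, 1} targets
+    l_attribute = -(mask * (t * p.log() + (1 - t) * (1 - p).log())).sum()   # Eqs. (7)-(8)
+    return l_category + alpha * l_attribute
+```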
+
+After we obtain the teacher model, we want to keep the knowledge of the base classes. We use the knowledge distillation loss in image-level classification [20]. The loss function $L_{dis}$ for distillation is defined as:
+
+$$
+L_{dis} = L_{D} + \alpha L_{AD}, \tag{9}
+$$
+
+$$
+L_{D} = -\sum_{l=1}^{n} \hat{l}_{l}^{t} \log\left(p_{l}^{t+1}\right), \tag{10}
+$$
+
+$$
+\hat{l}_{l}^{t} = \mathrm{softmax}\left(fc_{1}^{t} / T_{dis}\right), \tag{11}
+$$
+
+where $T_{dis}$ is a temperature scalar. $L_{D}$ is the distillation loss for image-level classification (the KD loss in Fig. 2), $\hat{l}_l^t$ is the temperature-softened output probability of the $l^{th}$ class of the teacher model $N_{t}$, and $p_l^{t+1}$ is the prediction probability of the $l^{th}$ class of the student model $N_{t+1}$ [20].
+
+To retain the attribute knowledge of the base classes, we design the attribute distillation loss $L_{AD}$. For any given input image $x$, let $b$ be the top base class predicted by $N_{t}$, and denote the attribute prediction vectors of models $N_{t}$ and $N_{t+1}$ as $A_{t}^{x,b}$ and $A_{t+1}^{x,b}$. We use the sum of the element-wise $L_{1}$ differences of these two attribute prediction vectors as $L_{AD}$:
+
+$$
+L_{AD} = \sum_{j=1}^{T} \left\| A_{t,j}^{x,b} - A_{t+1,j}^{x,b} \right\|_{1}. \tag{12}
+$$
+
+Essentially, the attribute prediction for image $x$ represents a feature distribution that reflects what the teacher model has learned about the base classes. If $N_{t}$ and $N_{t + 1}$ have equivalent knowledge of the base classes, they should predict attributes similarly; therefore, $A_{t}^{x,b}$ and $A_{t + 1}^{x,b}$ should be similar.
+
+The overall loss combines the distillation loss and the classification loss:
+
+$$
+L = \lambda L_{dis} + (1 - \lambda) L_{cls}, \tag{13}
+$$
+
+where the scalar $\lambda$ is used to balance between the two terms. The scalar $\lambda$ is set to $\frac{n}{n + m}$ , where $n$ and $m$ are the number of base and new classes.
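+
+Below is a minimal sketch of Eqs. (9)-(13), assuming batched logits with the $n$ base classes first and attribute prediction vectors already gathered for the teacher's top base class $b$; the temperature value (2.0) and the batch averaging are placeholders not specified in the text, and, following Eq. (11) literally, only the teacher logits are temperature-scaled.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def distillation_loss(student_logits, teacher_logits, student_attr, teacher_attr,
+                      n_base, t_dis=2.0, alpha=0.5):
+    """Eqs. (9)-(12): KD loss over the old classes plus the attribute distillation loss."""
+    soft_teacher = F.softmax(teacher_logits[:, :n_base] / t_dis, dim=1)   # Eq. (11)
+    log_student = F.log_softmax(student_logits[:, :n_base], dim=1)
+    l_d = -(soft_teacher * log_student).sum(dim=1).mean()                 # Eq. (10)
+    # Eq. (12): element-wise L1 between the attribute vectors for the top base class b.
+    l_ad = (student_attr - teacher_attr).abs().sum(dim=1).mean()
+    return l_d + alpha * l_ad                                             # Eq. (9)
+
+def total_loss(l_dis, l_cls, n_base, n_new):
+    lam = n_base / (n_base + n_new)                                       # lambda = n / (n + m)
+    return lam * l_dis + (1 - lam) * l_cls                                # Eq. (13)
+```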
+
+# 3.6. Using exemplars of base classes
+
+In our pipeline, we keep a small number of exemplars from the base classes in the training dataset, because using only the new classes for the next round of training would cause a large part of the information on the base classes to be lost. To reduce the training time and storage cost of incremental learning, we select only a part of the base-class data instead of the full dataset.
+
+Exemplar image selection is usually performed in one of two ways. The first, random selection, randomly selects a fixed number of images from each base class. The second, the exemplar management strategy proposed by iCaRL [26], selects images such that the average feature vector of the exemplars is closest to the class mean feature vector. In our pipeline, we adopt the second strategy, sketched below.
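+
+The following is a minimal sketch of that greedy, herding-style selection under the assumption that per-image feature vectors for one class have already been extracted (iCaRL additionally L2-normalizes them, which is omitted here); the function and variable names are ours.
+
+```python
+import numpy as np
+
+def select_exemplars(features, m):
+    """Pick m samples so that the running mean of the selected features stays
+    close to the class mean.  features: (N, D) array for one class."""
+    class_mean = features.mean(axis=0)
+    selected, running_sum = [], np.zeros_like(class_mean)
+    for k in range(1, m + 1):
+        # Distance of each candidate's resulting exemplar mean to the class mean.
+        candidate_means = (running_sum + features) / k
+        dists = np.linalg.norm(class_mean - candidate_means, axis=1)
+        dists[selected] = np.inf              # never pick the same sample twice
+        idx = int(dists.argmin())
+        selected.append(idx)
+        running_sum += features[idx]
+    return selected
+
+feats = np.random.randn(200, 512)             # e.g. ResNet50 features of one class (illustrative)
+exemplar_idx = select_exemplars(feats, m=10)  # 10 exemplars per base class (fundus setting)
+```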
+
+# 4. Experiment
+
+# 4.1. Experimental settings
+
+Dataset. We conducted experiments on two datasets. The first is our private fundus image dataset. It contains 6,000 images covering 20 different types of diseases. Twenty-four attributes were annotated for each class by multiple expert ophthalmologists. Table 1 shows part of the disease labels and attribute labels$^{1}$.
+
+Table 1. Part of the disease labels and semantic attributes in the fundus image dataset
+
+| Disease label | Attributes |
+| --- | --- |
+| Normal | central reflux of macula |
+| Age-related macular degeneration-early [AMD (early)] | hemorrhage, ..., macular edema |
+| Age-related macular degeneration-atrophic [AMD (atrophic)] | hemorrhage, ..., macular edema |
+| Age-related macular degeneration-exudative [AMD (exudative)] | hemorrhage, ..., drusen, atrophy |
+| Central serous chorioretinopathy | macular edema, ..., intraretinal fluid |
+| Retinal vein occlusion branch (RVO(B)) | hemorrhage, ..., vitreous hemorrhage |
+| Diabetic retinopathy (DR) | hemorrhage, ..., neovascularization |
+| Glaucoma | pale optic disc, enlarged cupping |
+| Myopic maculopathy | disc change, macular hole |
+| Myopic choroidal neovascularization | geographic atrophy, ..., hemorrhage |
+
+Thirteen universities compiled this dataset, and multiple expert ophthalmologists annotated the images with image-level labels. $60\%$ of the dataset was used for training, $20\%$ for validation, and the remaining $20\%$ for testing. The second dataset is ImageNet-150K-sub, a subset of ImageNet-150K [22]. ImageNet-150K is a subset of the ILSVRC2012 dataset [28] with 150,000 images: for each of the 1,000 categories, 148 images from the training set and 2 images from the validation set were selected. Twenty-five attributes were annotated for each image by multiple experts. Each attribute has three kinds of labels: -1 means absent, 0 means uncertain (treated as missing in [22]), and 1 means present. We treat an attribute labelled 1 as present and -1 as absent; because "uncertain" provides no useful information about an attribute, we treat an attribute labelled 0 as missing. We randomly select a subset of 100 classes to conduct our experiment and call this dataset "ImageNet-150K-sub."
+
+Here, we experimented with one dataset that has attribute annotations per class and one that has attribute annotations per image, in order to compare the performance and to see whether the simpler per-class annotation could still improve performance.
+
+# 4.2. Implementation details
+
+Before training, we preprocessed the fundus images by cropping them to remove the black regions, because these regions contain no information. After that, we resized the images to $512 \times 512$ pixels. During the training phase, we augmented the dataset with randomly rotated and horizontally and vertically flipped images. The CNN model was implemented in PyTorch and trained on a Quadro GV100. We initialized ResNet50 [10] from weights pre-trained on ImageNet [4]. A gradient descent optimizer was used with a momentum of 0.9, and we trained our CNN model with a batch size of 16 and an initial learning rate of 0.01. We chose iCaRL [26] as the baseline of our work. For the experiment on fundus images, 10 images per base class were stored as exemplars; for the experiment on ImageNet-150K-sub, 20 images per base class were stored as exemplars.
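+
+A minimal sketch of this preprocessing and augmentation pipeline is given below, assuming PIL images as input; the border-cropping threshold and the rotation range are assumptions, since the text only states that black regions are removed and that random rotations and flips are applied.
+
+```python
+import numpy as np
+import torchvision.transforms as T
+
+def crop_black_border(img, threshold=10):
+    """Crop the uninformative black border of a fundus photograph (img is a PIL image).
+    The intensity threshold is an assumption, not a value from the paper."""
+    arr = np.asarray(img.convert("L"))
+    ys, xs = np.where(arr > threshold)
+    if len(xs) == 0:                          # completely dark image: keep as-is
+        return img
+    return img.crop((xs.min(), ys.min(), xs.max() + 1, ys.max() + 1))
+
+train_transform = T.Compose([
+    T.Lambda(crop_black_border),
+    T.Resize((512, 512)),
+    T.RandomRotation(degrees=180),            # random rotation (angle range assumed)
+    T.RandomHorizontalFlip(),
+    T.RandomVerticalFlip(),
+    T.ToTensor(),
+])
+```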
+
+# 4.3. Evaluation on incremental learning
+
+Incremental learning was evaluated using the curve of classification accuracies measured after each phase. We also calculated the average of all of these accuracies, i.e., the average incremental accuracy.
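+
+As a small illustration of this metric (the per-phase accuracies below are made-up numbers, not results from the paper):
+
+```python
+# Average incremental accuracy: the mean of the classification accuracies
+# measured after each incremental phase.
+phase_accuracies = [0.91, 0.87, 0.84, 0.82, 0.80]     # illustrative values only
+avg_incremental_accuracy = sum(phase_accuracies) / len(phase_accuracies)
+print(f"{avg_incremental_accuracy:.3f}")
+```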
+
+We compared our proposed method, ADINet, with the state-of-the-art on our private fundus image dataset: learning without forgetting (LwF) [20], incremental classifier and representation learning (iCaRL) [26], and end-to-end incremental learning (EEIL) [2]. Figure 4 shows the results with incremental settings of 10, 5, and 2 phases. The average incremental accuracies are shown in brackets next to each method. As shown in Fig. 4, ADINet outperformed the state-of-the-art both in the trend of the classification accuracy curve and in the average incremental accuracy. In particular, with 5 phases, ADINet improved by more than $10\%$ over iCaRL [26]. Figure 4 also shows that ADINet could retain the knowledge of the base classes, which alleviated the imbalance between the base and new classes. The average incremental accuracies of ADINet with 10, 5, and 2 phases are $82.7\%$, $83.2\%$, and $83.1\%$, respectively. As the number of incremental phases increased, performance dropped due to catastrophic forgetting on our fundus image dataset.
+
+We also evaluated ADINet on the ImageNet-150K-sub dataset, reporting the performance of the proposed method on the validation set. Figure 5 shows the comparison results with 10, 5, and 2 phases. The average incremental accuracies of ADINet with 10, 5, and 2 phases were $87.4\%$, $87.3\%$, and $82.5\%$, respectively.
+
+The average incremental accuracy of ADINet on the fundus dataset was not as good as that on ImageNet-150K-sub, because the classes in ImageNet-150K-sub show larger variance and are therefore easier to classify. ImageNet-150K-sub provides attribute labels per image, whereas our fundus dataset provides attribute labels only per class. However, observing the result curves of these two datasets, we found that using only per-class attribute labels can also effectively alleviate the catastrophic forgetting of incremental learning in each phase. Based on these results, our method can still boost the performance of incremental learning with only per-class attribute labels.
+
+Table 2. Classification results on fundus image dataset
+
+| Method | top-1 accuracy |
+| --- | --- |
+| Bilinear-CNN [21] | 76.1% |
+| PDFR [35] | 77.0% |
+| FV-CNN [8] | 77.2% |
+| FCAN [23] | 78.5% |
+| RA-CNN [6] | 78.1% |
+| Boost-CNN [24] | 79.5% |
+| MA-CNN [36] | 81.2% |
+| A3M [9] | 80.5% |
+| ADINet | 82.7% |
+
+Table 3. Classification results for ImageNet-150K-sub
+
+| Method | top-1 accuracy |
+| --- | --- |
+| Bilinear-CNN [21] | 81.2% |
+| PDFR [35] | 83.0% |
+| FV-CNN [8] | 83.4% |
+| FCAN [23] | 83.9% |
+| RA-CNN [6] | 84.2% |
+| Boost-CNN [24] | 84.9% |
+| MA-CNN [36] | 85.4% |
+| A3M [9] | 85.7% |
+| ADINet | 87.4% |
+
+# 4.4. Evaluation on image classification
+
+We compared ADINet with several classical image classification methods on our private fundus image dataset and on ImageNet-150K-sub, using top-1 classification accuracy for measurement. The results are summarized in Table 2 and Table 3. We used the average incremental accuracy over 10 phases as the classification performance for comparison with the state-of-the-art. The results show that our model achieves competitive performance, indicating that integrating attribute prediction and weight estimation boosts incremental learning performance and is effective for image classification.
+
+# 4.5. Evaluation on attribute prediction
+
+We also evaluated our proposed method in terms of attribute prediction on the ImageNet-150K-sub dataset, using top-1 classification accuracy for measurement. Table 4 compares the results for 10 attributes in the ImageNet-150K-sub dataset with two previous methods${}^{2}$: one uses only ResNet50 for attribute recognition, and the other is DeepMAR [18], which performs attribute recognition by using the correlations between attributes to further improve overall recognition performance. According to Table 4, ADINet showed a more significant improvement on low-ratio attributes than the other two methods. Using only ResNet50, the overall performance of attribute recognition is relatively low; considering the correlations between attributes increases the overall performance; and calculating the weight prediction and integrating the weights with attribute recognition boosts the performance significantly.
+
+
+Figure 4. Performance on our private fundus image dataset with incremental settings of 10, 5, and 2 phases (panels (a), (b), and (c)).
+
+Figure 5. Performance on ImageNet-150K-sub with incremental settings of 10, 5, and 2 phases (panels (a), (b), and (c)).
+
+Figure 6. Attribute recognition on the fundus image dataset. The first row shows attribute prediction results for three different diseases, and the second row shows attribute prediction results for three different severities of AMD. Blue arrows indicate the image regions corresponding to the attributes; attributes with the top-2 prediction scores are shown.
+
+Figure 6 shows examples of attribute prediction on the fundus image dataset. It can be seen that our approach predicted the attributes of each fundus image effectively, which would help ophthalmologists in making diagnoses.
+
+Table 4. Attribute recognition comparison for ImageNet-150K-sub
+
+| Attribute | Ratio | ResNet50 | DeepMAR | ADINet |
+| --- | --- | --- | --- | --- |
+| Black | 0.12 | 57.2 | 65.2 | 75.2 |
+| Blue | 0.0235 | 66.3 | 76.1 | 81.4 |
+| Brown | 0.0895 | 62.8 | 71.6 | 78.2 |
+| Gray | 0.0265 | 61.8 | 67.7 | 73.6 |
+| Green | 0.0315 | 52.9 | 65.2 | 78.9 |
+| Orange | 0.012 | 57.3 | 69.0 | 73.4 |
+| Pink | 0.0075 | 57.4 | 67.8 | 72.1 |
+| Red | 0.0435 | 59.3 | 76.2 | 75.1 |
+| Purple | 0.003 | 64.1 | 81.7 | 83.2 |
+| White | 0.111 | 67.6 | 78.7 | 82.3 |
+| Average | * | 60.7 | 71.9 | 77.3 |
+
+# 4.6. Ablation study
+
+We now analyze the components of our approach and demonstrate their contribution to the overall performance. We evaluated our approach with an incremental setting of 10 phases and performed two different experiments: ADINet without attribute distillation and ADINet without weight estimation. In the first experiment, we ran our method on the fundus image dataset and ImageNet-150K-sub, reporting the classification accuracy (top-1 accuracy) for each incremental step and comparing the average incremental accuracy of the different variants. In the second experiment, we also ran our method on both datasets, comparing the classification accuracy (top-1 accuracy) of each incremental step on our private fundus image dataset and ImageNet-150K-sub, as well as the average attribute recognition accuracy on ImageNet-150K-sub.
+
+
+Figure 7. Ablation study on fundus images. Results in (a), (b), and (c) are for 10, 5, and 2 phases, respectively.
+
+Figure 8. Ablation study on ImageNet-150K-sub. Results in (a), (b), and (c) are for 10, 5, and 2 phases, respectively.
+
+# 4.6.1 Evaluation on attribute distillation
+
+As mentioned above, to estimate the effect of attributes on image classification, we add attribute distillation to boost classification performance. Comprehensive experiments were performed, and the results are displayed in Fig. 7 and Fig. 8, comparing ADINet with ADINet without attribute distillation. Without attribute distillation, the classification accuracy of each incremental step drops significantly; in particular, as the number of incremental steps increases, the discrepancy between ADINet and ADINet without attribute distillation grows. This demonstrates that distilling the attribute information of the base classes helps preserve the learned features and boosts the classification performance.
+
+# 4.6.2 Evaluation on weight estimation
+
+We added weight estimation to predict the attributes precisely for each class. According to Fig. 7 and Fig. 8, after removing the weight estimation, the classification accuracy of each incremental step degraded. That is because different attributes contribute differently in each image, and a more informative attribute should be given more weight. We also compared the average accuracy of attribute recognition between ADINet and ADINet without weight estimation in Table 5. According to this table, our proposed method produced more precise recognition results on the ImageNet-150K-sub dataset. We only show the comparison with iCaRL here; comparisons with other baselines can be found in the supplementary material.
+
+Table 5. Comparison of attribute recognition accuracy for two datasets
+
+| Dataset | ADINet w/o weight estimation | ADINet |
+| --- | --- | --- |
+| ImageNet-150K-sub | 73.6 | 76.6 |
+
+
+# 5. Conclusion
+
+We explored the incremental learning problem for the task of image classification and proposed a method based on attribute distillation and attribute weight estimation. By integrating attribute information to transfer the knowledge of the base classes from a teacher to a student model, the proposed method boosts the classification performance. At the same time, our proposed method can also predict the attributes of each image. This approach outperforms the state-of-the-art. Regarding future work, the proposed method could be applied to scenarios with only a few attribute labels, or even no attribute labels at all. Incremental attribute recognition is a challenging problem due to the absence of ground-truth attributes for each image, and we intend to extend our work in this direction.
+
+Acknowledgement The fundus image dataset was provided by the Japan Ocular Imaging Registry Research Group. This research is supported by the ICT infrastructure establishment and implementation of artificial intelligence for clinical and medical research program of the Japan Agency for Medical Research and Development (AMED).
+
+# References
+
+[1] Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget. Proceedings of the European Conference on Computer Vision (ECCV), 11 2017. 2
+[2] Francisco M. Castro, Manuel J. Marin-Jimenez, Nicolas Guil, Cordelia Schmid, and Karteek Alahari. End-to-end incremental learning. In Proceedings of the European Conference on Computer Vision (ECCV), September 2018. 2, 6
+[3] Gert Cauwenberghs and Tomaso Poggio. Incremental and decremental support vector machine learning. In Proceedings of the 13th International Conference on Neural Information Processing Systems, NIPS'00, pages 388-394, 2000. 2
+[4] J. Deng, W. Dong, R. Socher, and L. Li. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, June 2009. 5
+[5] Emrah Ergul. Relative attribute based incremental learning for image recognition. CAAI Transactions on Intelligence Technology, 2(1):1-11, 2017. 2
+[6] J. Fu, H. Zheng, and T. Mei. Look closer to see better: Recurrent attention convolutional neural network for fine-grained image recognition. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4476-4484, July 2017. 6
+[7] Gang Wang and D. Forsyth. Joint learning of visual attributes, object classes and visual saliency. In 2009 IEEE 12th International Conference on Computer Vision, pages 537-544, Sep. 2009. 2
+[8] Philippe Henri Gosselin, Naila Murray, Hervé Jégou, and Florent Perronnin. Revisiting the fisher vector for fine-grained classification. Pattern Recognition Letters, 49:92-98, 2014. 6
+[9] Kai Han, Jianyuan Guo, Chao Zhang, and Mingjian Zhu. Attribute-aware attention model for fine-grained representation learning. In Proceedings of the 26th ACM International Conference on Multimedia, MM '18, pages 2040-2048, New York, NY, USA, 2018. ACM. 6
+[10] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, June 2016. 3, 5
+[11] Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop, 2015. 2, 4
+[12] Saihui Hou, Xinyu Pan, Chen Change Loy, Zilei Wang, and Dahua Lin. Lifelong learning via progressive distillation and retrospection. In Proceedings of the European Conference on Computer Vision (ECCV), 2018. 2
+[13] P. Kankuekul, A. Kawewong, S. Tangruamsub, and O. Hasegawa. Online incremental attribute-based zero-shot learning. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 3657-3664, June 2012. 2
+[14] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei Rusu, Kieran Milan,
+
+John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114, 12 2016. 2
+[15] Ilja Kuzborskij, Francesco Orabona, and Barbara Caputo. From N to N+1: Multiclass transfer incremental learning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2013. 2
+[16] C. H. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 951-958, June 2009. 2
+[17] Ryan Layne, Timothy Hospedales, and Shaogang Gong. Person re-identification by attributes. In Proceedings of the British Machine Vision Conference (BMVC), volume 2, 01 2012. 2
+[18] D. Li, X. Chen, and K. Huang. Multi-attribute learning for pedestrian attribute recognition in surveillance scenarios. In 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR), pages 111–115, Nov 2015. 6
+[19] Yan Li, Ruiping Wang, Haomiao Liu, Huajie Jiang, Shiguang Shan, and Xilin Chen. Two birds, one stone: Jointly learning binary code for large-scale face image retrieval and attributes prediction. In The IEEE International Conference on Computer Vision (ICCV), December 2015. 2
+[20] Zhizhong Li and Derek Hoiem. Learning without forgetting. In IEEE Transactions on Pattern Analysis and Machine Intelligence, volume 40, pages 2935-2947, 2016. 2, 4, 5, 6
+[21] T. Lin, A. RoyChowdhury, and S. Maji. Bilinear CNN models for fine-grained visual recognition. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 1449-1457, Dec 2015. 6
+[22] H. Liu, R. Wang, S. Shan, and X. Chen. Learning multifunctional binary codes for both category and attribute oriented retrieval tasks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6259-6268, July 2017. 2, 5
+[23] Xiao Liu, Tian Xia, Jiang Wang, and Yuanqing Lin. Fully convolutional attention localization networks: Efficient attention localization for fine-grained recognition. CoRR, abs/1603.06765, 2016. 6
+[24] Mohammad Moghimi, Serge Belongie, Mohammad Saberian, Jian Yang, Nuno Vasconcelos, and Li-Jia Li. Boosted convolutional neural networks. In Richard C. Wilson, Edwin R. Hancock, and William A. P. Smith, editors, Proceedings of the British Machine Vision Conference (BMVC), pages 24.1-24.13. BMVA Press, September 2016. 6
+[25] Amal Rannen, Rahaf Aljundi, Matthew B. Blaschko, and Tinne Tuytelaars. Encoder based lifelong learning. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017. 2
+[26] Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl, and Christoph H. Lampert. iCaRL: Incremental classifier and representation learning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017. 2, 5, 6
+
+[27] Anthony Robins. Catastrophic forgetting, rehearsal and pseudorehearsal. Connection Science, 7(2):123-146, 1995. 1
+[28] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015. 5
+[29] Steffen Schmitz-Valckenberg, Frank Holz, Alan Bird, and Richard Spaide. Fundus autofluorescence imaging: Review and perspectives. Retina (Philadelphia, Pa.), 28:385–409, 04 2008. 1
+[30] Konstantin Shmelkov, Cordelia Schmid, and Karteek Alahari. Incremental learning of object detectors without catastrophic forgetting. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017. 1
+[31] Fariborz Taherkhani, Nasser M. Nasrabadi, and Jeremy Dawson. A deep face identification network enhanced by facial attributes prediction. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2018. 2
+[32] Zhanxiong Wang, Keke He, Yanwei Fu, Rui Feng, Yu-Gang Jiang, and Xiangyang Xue. Multi-task deep neural network for joint face recognition and facial attribute prediction. In Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval, ICMR '17, pages 365-374, New York, NY, USA, 2017. ACM. 2
+[33] Liuyu Xiang, Xiaoming Jin, Guiguang Ding, Jungong Han, and Leida Li. Incremental few-shot learning for pedestrian attribute recognition. pages 3912-3918, 08 2019. 2
+[34] Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic intelligence. In ICML, 2017. 2
+[35] X. Zhang, H. Xiong, W. Zhou, W. Lin, and Q. Tian. Picking deep filter responses for fine-grained image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1134-1142, June 2016. 6
+[36] Heliang Zheng, Jianlong Fu, Tao Mei, and Jiebo Luo. Learning multi-attention convolutional neural network for fine-grained image recognition. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017. 6
\ No newline at end of file
diff --git a/adinetattributedrivenincrementalnetworkforretinalimageclassification/images.zip b/adinetattributedrivenincrementalnetworkforretinalimageclassification/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..e2fc2faab5608ec879545163b5130505b05cb921
--- /dev/null
+++ b/adinetattributedrivenincrementalnetworkforretinalimageclassification/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:65ec39ce7af62878a36a88bc935a395d6bc3a025d69c2c15b58977dfef4f787f
+size 594976
diff --git a/adinetattributedrivenincrementalnetworkforretinalimageclassification/layout.json b/adinetattributedrivenincrementalnetworkforretinalimageclassification/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..fd8ff1ec532654c9a3ee82c596b481bfe63b511e
--- /dev/null
+++ b/adinetattributedrivenincrementalnetworkforretinalimageclassification/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a3eef21b652df930000bf4581ec453b9390c4f179f938e04135faf4a6559c93d
+size 402341
diff --git a/advancinghighfidelityidentityswappingforforgerydetection/572888b7-02a5-4762-838f-697d54a50346_content_list.json b/advancinghighfidelityidentityswappingforforgerydetection/572888b7-02a5-4762-838f-697d54a50346_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..9f6f2f60d990695d6632f7ecec83ff5595a6cc60
--- /dev/null
+++ b/advancinghighfidelityidentityswappingforforgerydetection/572888b7-02a5-4762-838f-697d54a50346_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3deb3f5b43edda56d44ee74a79cef9447d99fc040f618809000cbdb2efbba7c8
+size 76303
diff --git a/advancinghighfidelityidentityswappingforforgerydetection/572888b7-02a5-4762-838f-697d54a50346_model.json b/advancinghighfidelityidentityswappingforforgerydetection/572888b7-02a5-4762-838f-697d54a50346_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..940aa27153cfb90d674f70a9d336538e06d5d4b8
--- /dev/null
+++ b/advancinghighfidelityidentityswappingforforgerydetection/572888b7-02a5-4762-838f-697d54a50346_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6912a3c363db4b0324a067ff89eebf0dc34d2167fe539cd17f08c9643428d579
+size 97988
diff --git a/advancinghighfidelityidentityswappingforforgerydetection/572888b7-02a5-4762-838f-697d54a50346_origin.pdf b/advancinghighfidelityidentityswappingforforgerydetection/572888b7-02a5-4762-838f-697d54a50346_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..fd636a5085a8cbbe4c045dbbc2fd0913a5bfb836
--- /dev/null
+++ b/advancinghighfidelityidentityswappingforforgerydetection/572888b7-02a5-4762-838f-697d54a50346_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fc876a36a260f3e19e318d0711146083d4c29dec50e7dcc6fba50604037a0f45
+size 3895175
diff --git a/advancinghighfidelityidentityswappingforforgerydetection/full.md b/advancinghighfidelityidentityswappingforforgerydetection/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..520504c3403ab0f10d8795caa5ff1e349fc8fbc9
--- /dev/null
+++ b/advancinghighfidelityidentityswappingforforgerydetection/full.md
@@ -0,0 +1,369 @@
+# Advancing High Fidelity Identity Swapping for Forgery Detection
+
+Lingzhi Li$^{1*}$, Jianmin Bao$^{2\dagger}$, Hao Yang$^{2}$, Dong Chen$^{2}$, Fang Wen$^{2}$
+
+$^{1}$ Peking University $\quad$ $^{2}$ Microsoft Research
+
+lilingzhi@pku.edu.cn
+
+{jianbao, haya, doch, fangwen}@microsoft.com
+
+
+Figure 1: The face in the source image is taken to replace the face in the target image. Results of FaceShifter appear on the right.
+
+# Abstract
+
+In this work, we study various existing benchmarks for deepfake detection research. In particular, we examine a novel two-stage face swapping algorithm, called FaceShifter, for high-fidelity and occlusion-aware face swapping. Unlike many existing face swapping works that leverage only limited information from the target image when synthesizing the swapped face, FaceShifter generates the swapped face with high fidelity by exploiting and integrating the target attributes thoroughly and adaptively. FaceShifter handles facial occlusions with a second synthesis stage consisting of a Heuristic Error Acknowledging Refinement Network (HEAR-Net), which is trained to recover anomaly regions in a self-supervised way without any manual annotations. Experiments show that existing deepfake detection algorithms perform poorly on FaceShifter, since it achieves superior quality over all existing benchmarks. However, our newly developed Face X-Ray [23] method can reliably detect forged images created by FaceShifter.
+
+# 1. Introduction
+
+Face swapping has attracted great interest in the vision and graphics community because of its potentially wide applications in movie composition, computer games, and privacy protection [35]. It is also worth noting that better face swapping technologies will help build better face forgery detection technologies.
+
+Recent research [36] shows that the previous face swapping algorithms can be easily detected by a binary classifier.
+
+This is because the quality of the synthetic faces from these algorithms is usually unsatisfactory. Early replacement-based works [6, 42] simply replace the pixels of the inner face region. Thus, they are sensitive to variations in posture and perspective. 3D-based works [7, 12, 26, 31] use a 3D model to deal with posture or perspective differences. However, the accuracy and robustness of 3D face reconstruction remain unsatisfactory. Recently, GAN-based works [21, 28, 29, 30, 4] have illustrated impressive results. But it remains challenging to synthesize both realistic and high-fidelity results.
+
+In this work, we focus on improving the fidelity of face swapping, and examine face forgery detection algorithms on this new face swapping algorithm. In order to make the results more perceptually appealing, it is important that the synthesized swapped face not only shares the pose and expression of the target face, but can also be seamlessly fitted into the target image without inconsistency: the rendering of the swapped face should be faithful to the lighting (e.g., direction, intensity, color) of the target scene, and the pixel resolution of the swapped face should be consistent with the target image resolution. Neither of these can be well handled by a simple alpha or Poisson blending. Instead, we need a thorough and adaptive integration of target image attributes during the synthesis of the swapped face, so that the attributes from the target image, including scene lighting and image resolution, can help make the swapped face more realistic.
+
+However, previous face swapping methods either neglect this integration, or lack the ability to perform it in a thorough and adaptive way. Specifically, many previous methods use only pose and expression guidance from the target image to synthesize the swapped face; the face is then blended into the target image using masks of
+
+
+Figure 2: Failure cases of a previous method on FaceForensics++ [36] dataset. From left to right we show the input source images, the input target images, the results of FaceSwap [2], and the results of our method. FaceSwap follows the strategy that, first synthesizes the inner face region, then blends it into the target face. Such strategy causes artifacts, such as the defective lighting effect on the nose (row 1), failing to preserve the face shape of the source identity (row 2) and the mismatched image resolutions (row 3). While our method addresses all these issues.
+
+the target faces. This process easily causes artifacts, because: 1) besides pose and expression, it leverages little knowledge about the target image when synthesizing the swapped face, and thus can hardly respect target attributes like the scene lighting or the image resolution; 2) such a blending discards all of the peripheral area of the source face that lies outside the target face mask. Thus these methods cannot preserve the face shape of the source identity. We show some typical failure cases in Figure 2.
+
+In order to achieve high-fidelity face swapping results, in the first stage of our framework, we design a GAN-based network, named Adaptive Embedding Integration Network (AEI-Net), for a thorough and adaptive integration of target attributes. We make two improvements to the network structure: 1) we propose a novel multi-level attributes encoder for extracting target attributes at various spatial resolutions, instead of compressing them into a single vector as in RSGAN [29] and IPGAN [5]; 2) we present a novel generator with carefully designed Adaptive Attentional Denormalization (AAD) layers, which adaptively learn where to integrate the attributes or identity embeddings. Such an adaptive integration brings considerable improvements over the single-level integration used by RSGAN [29], FSNet [28] and IPGAN [5]. With these two improvements, the proposed AEI-Net can solve the problem of inconsistent illumination and face shape, as shown in Figure 2.
+
+Moreover, handling facial occlusions is always challenging in face swapping. Unlike Nirkin et al. [30, 31], who train a face segmentation network to obtain occlusion-aware face masks,
+
+our method learns to recover face anomaly regions in a self-supervised way without any manual annotations. We observe that when feeding the same face image as both the target and source into a well-trained AEI-Net, the reconstructed face image deviates from the input in multiple areas; these deviations strongly hint at the locations of face occlusions. Thus, we propose a novel Heuristic Error Acknowledging Refinement Network (HEAR-Net) to further refine the result under the guidance of such reconstruction errors. The proposed method is more general and thus identifies more anomaly types, such as glasses, shadow and reflection effects, and other uncommon occlusions.
+
+The proposed two-stage face swapping framework, FaceShifter, is subject agnostic. Once trained, the model can be applied to any new face pair without requiring subject-specific training, unlike DeepFakes [1] and Korshunova et al. [21]. Experiments demonstrate that our method achieves results that are considerably more realistic and more faithful to the inputs than other state-of-the-art methods.
+
+# 2. Related Works
+
+Face swapping has a long history in vision and graphics research. Early efforts [6, 42] only swap faces with similar poses. This limitation is addressed by recent algorithms, which can be roughly divided into two categories: 3D-based approaches and GAN-based approaches.
+
+3D-Based Approaches. Blanz et al. [7] consider a 3D transform between two faces with different poses, but their method requires user interaction and does not handle expressions. Thies et al. [39] capture head actions from an RGB-D image using 3DMM, turning a static face into a controllable avatar; this is extended to RGB references in Face2Face [40]. Olszewski et al. [32] dynamically infer 3D face textures for improved manipulation quality. Kim et al. [20] separately model different videos using 3DMM to make the portraits controllable, while Nagano et al. [27] need only one image to reenact the portrait within it. Recently, Thies et al. [38] adopt neural textures, which better disentangle geometry in face reenactment. However, when applied to face swapping, these methods hardly leverage target attributes like occlusions, lighting or photo styles. To preserve the target facial occlusions, Nirkin et al. [31, 30] collected data to train an occlusion-aware face segmentation network in a supervised way, which helps predict a visible target face mask for blending in the swapped face. In contrast, our method finds the occlusions in a self-supervised way without any manual annotations.
+
+GAN-Based Approaches. Among the GAN-based face swapping methods, Korshunova et al. [22] swap faces in a manner similar to style transfer. Their approach separately models different source identities, such as a CageNet for Nicolas Cage and a SwiftNet for Taylor Swift. The recently popular DeepFakes [1] is another example of such subject-aware face swapping: for each new input, a new model has to be trained on two video sequences, one for the source and one for the target.
+
+This limitation has been addressed by subject-agnostic face swapping research: RSGAN [29] learns to extract vectorized embeddings for the face and hair regions separately, and recombines them to synthesize a swapped face. FSNet [28] represents the face region of the source image as a vector, which is combined with a non-face target image to generate the swapped face. IPGAN [5] disentangles the identity and attributes of faces as vectors. By introducing supervisions directly from the source identity and the target image, IPGAN supports face swapping with better identity preservation. However, due to the information loss caused by the compressed representation, and the lack of more adaptive information integration, these three methods are incapable of generating high-quality face images. Recently, FSGAN [30] performs face reenactment and face swapping together, following a reenact-and-blend strategy similar to [32, 27]. Although FSGAN utilizes an occlusion-aware face segmentation network for preserving target occlusions, it hardly respects target attributes like the lighting or image resolution, nor can it preserve the face shape of the source identity.
+
+# 3. Methods
+
+Our method requires two input images, i.e., a source image $X_{s}$ to provide identity and a target image $X_{t}$ to provide attributes, e.g., pose, expression, scene lighting and background. The swapped face image is generated through a two-stage framework, called FaceShifter. In the first stage, we use an Adaptive Embedding Integration Network (AEI-Net) to generate a high fidelity face swapping result $\hat{Y}_{s,t}$ based on information integration. In the second stage, we use the Heuristic Error Acknowledging Refinement Network (HEAR-Net) to handle facial occlusions and refine the result; the final result is denoted by $Y_{s,t}$.
+
+# 3.1. Adaptive Embedding Integration Network
+
+In the first stage, we aim to generate a high fidelity face image $\hat{Y}_{s,t}$, which should preserve the identity of the source $X_{s}$ and the attributes (e.g. pose, expression, lighting, background) of the target $X_{t}$. To achieve this goal, our method consists of 3 components: i) the Identity Encoder $\mathbf{z}_{id}(X_s)$, which extracts identity from the source image $X_{s}$; ii) the Multi-level Attributes Encoder $\mathbf{z}_{att}(X_t)$, which extracts attributes of the target image $X_{t}$; iii) the Adaptive Attentional Denormalization (AAD) Generator, which generates the swapped face image. Figure 3(a) shows the whole network structure.
+
+Identity Encoder: We use a pretrained state-of-the-art face recognition model [13] as the identity encoder. The identity embedding $z_{id}(X_s)$ is defined to be the last feature vector generated before the final FC layer. We believe that by training on a large quantity of 2D face data, such a face recognition model can provide more representative identity embeddings than 3D-based models like 3DMM [7, 8].
+
+Multi-level Attributes Encoder: Face attributes, such as pose, expression, lighting and background, require more spatial information than identity. In order to preserve such details, we propose to represent the attributes embedding as multi-level feature maps, instead of compressing it into a single vector as in previous methods [5, 29]. Specifically, we feed the target image $X_{t}$ into a U-Net-like structure. Then we define the attributes embedding as the feature maps generated from the U-Net decoder. More formally, we define
+
+$$
+\boldsymbol{z}_{att}(X_t) = \left\{\boldsymbol{z}_{att}^{1}(X_t), \boldsymbol{z}_{att}^{2}(X_t), \dots, \boldsymbol{z}_{att}^{n}(X_t)\right\}, \tag{1}
+$$
+
+where $\mathbf{z}_{att}^{k}(X_{t})$ represents the $k$ -th level feature map from the U-Net decoder, $n$ is the number of feature levels.
+
+Our attributes embedding network does not require any attribute annotations; it extracts the attributes using self-supervised training: we require that the generated swapped face $\hat{Y}_{s,t}$ and the target image $X_{t}$ have the same attributes embedding. The loss function will be introduced in Equation 7. In the experimental part (Section 4.2), we also study what the attributes embedding has learned.
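+
+The following is a minimal sketch (not the authors' code) of how such a multi-level attributes encoder could be organized: an encoder downsamples the target image, and the feature map produced at every level of the U-Net-like decoder is returned as $z_{att}^{k}$. The channel widths, kernel sizes and number of levels here are illustrative assumptions.
+
+```python
+import torch
+import torch.nn as nn
+
+class AttributeEncoder(nn.Module):
+    """U-Net-like encoder/decoder returning multi-level attribute maps."""
+    def __init__(self, n_levels=4, base_ch=32):
+        super().__init__()
+        chans = [3] + [base_ch * 2 ** i for i in range(n_levels)]
+        # encoder: stride-2 convolutions that halve the resolution each level
+        self.down = nn.ModuleList([
+            nn.Conv2d(chans[i], chans[i + 1], 4, stride=2, padding=1)
+            for i in range(n_levels)])
+        # decoder: transposed convolutions; inputs are doubled by skip concat
+        self.up = nn.ModuleList([
+            nn.ConvTranspose2d(chans[i + 1] * (1 if i == n_levels - 1 else 2),
+                               chans[i], 4, stride=2, padding=1)
+            for i in reversed(range(n_levels))])
+
+    def forward(self, x_t):
+        skips, h = [], x_t
+        for conv in self.down:
+            h = torch.relu(conv(h))
+            skips.append(h)
+        z_att = []
+        for i, deconv in enumerate(self.up):
+            h = torch.relu(deconv(h))
+            z_att.append(h)                      # z_att^k, coarse to fine
+            if i + 2 <= len(skips):              # U-Net skip connection
+                h = torch.cat([h, skips[-(i + 2)]], dim=1)
+        return z_att
+```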
+
+Adaptive Attentional Denormalization Generator: We then integrate these two embeddings $\mathbf{z}_{id}(X_s)$ and $\mathbf{z}_{att}(X_t)$ to generate a raw swapped face $\hat{Y}_{s,t}$. Previous methods [5, 29] simply integrate them through feature concatenation, which leads to relatively blurry results. Instead, we propose a novel Adaptive Attentional Denormalization (AAD) layer to accomplish this task in a more adaptive fashion. Inspired by the mechanisms of SPADE [33] and AdaIN [14, 16], the proposed AAD layers leverage denormalization for feature integration at multiple feature levels.
+
+As shown in Figure 3(c), in the $k$ -th feature level, let $\pmb{h}_{in}^{k}$ denote the activation map that is fed into an AAD layer, which should be a 3D tensor of size $C^k\times H^k\times W^k$ with $C^k$ being the number of channels and $H^{k}\times W^{k}$ being the spatial dimensions. Before integration, we perform batch normalization [17] on $\pmb{h}_{in}^{k}$ :
+
+$$
+\bar{\boldsymbol{h}}^{k} = \frac{\boldsymbol{h}_{in}^{k} - \boldsymbol{\mu}^{k}}{\boldsymbol{\sigma}^{k}}. \tag{2}
+$$
+
+Here $\pmb{\mu}^{k}\in \mathbb{R}^{C^{k}}$ and $\sigma^k\in \mathbb{R}^{C^k}$ are the means and standard deviations of the channel-wise activations within the mini-batch of $\boldsymbol{h}_{in}^{k}$. Then, we design 3 parallel branches from $\bar{h}^k$ for 1) attributes integration, 2) identity integration, and 3) an adaptive attention mask.
+
+For attributes embedding integration, let $\mathbf{z}_{att}^{k}$ be the attributes embedding on this feature level, which should be a 3D tensor of size $C_{att}^{k} \times H^{k} \times W^{k}$ . In order to integrate $\mathbf{z}_{att}^{k}$ into the activation, we compute an attribute activation $A^{k}$ by denormalizing the normalized $\bar{h}^{k}$ according to the attributes embedding, formulated as
+
+$$
+\boldsymbol{A}^{k} = \gamma_{att}^{k} \otimes \bar{\boldsymbol{h}}^{k} + \beta_{att}^{k}, \tag{3}
+$$
+
+
+Figure 3: AEI-Net for the first stage. AEI-Net is composed of an Identity Encoder, a Multi-level Attributes Encoder, and an AAD-Generator. The AAD-Generator integrates identity and attribute information at multiple feature levels using cascaded AAD ResBlks, which are built on AAD layers.
+
+where $\gamma_{att}^{k}$ and $\beta_{att}^{k}$ are two modulation parameters, both convolved from $z_{att}^{k}$. They share the same tensor dimensions as $\bar{h}^{k}$. The computed $\gamma_{att}^{k}$ and $\beta_{att}^{k}$ are multiplied and added to $\bar{\boldsymbol{h}}^{k}$ element-wise.
+
+For identity embedding integration, let $z_{id}^{k}$ be the identity embedding, which should be a 1D vector of size $C_{id}$ . We also integrate $z_{id}^{k}$ by computing an identity activation $I^{k}$ in a similar way to integrating attributes. It is formulated as
+
+$$
+\boldsymbol{I}^{k} = \gamma_{id}^{k} \otimes \bar{\boldsymbol{h}}^{k} + \beta_{id}^{k}, \tag{4}
+$$
+
+where $\gamma_{id}^{k}\in \mathbb{R}^{C^{k}}$ and $\beta_{id}^{k}\in \mathbb{R}^{C^{k}}$ are another two modulation parameters generated from $\pmb{z}_{id}$ through FC layers.
+
+One key design of the AAD layer is to adaptively adjust the effective regions of the identity embedding and the attributes embedding, so that they can participate in synthesizing different parts of the face. For example, the identity embedding should focus relatively more on synthesizing the face parts that are most discriminative for distinguishing identities, e.g. eyes, mouth and face contour. Therefore, we adopt an attention mechanism into the AAD layer. Specifically, we generate an attentional mask $M^k$ using $\bar{h}^k$ through convolutions and a sigmoid operation. The values of $M^k$ are between 0 and 1.
+
+Finally, the output of this AAD layer $\pmb{h}_{out}^{k}$ can be obtained as an element-wise combination of the two activations $\pmb{A}^{k}$ and $\pmb{I}^{k}$, weighted by the mask $M^{k}$, as shown in Figure 3(c). It is formulated as
+
+$$
+\boldsymbol{h}_{out}^{k} = \left(1 - \boldsymbol{M}^{k}\right) \otimes \boldsymbol{A}^{k} + \boldsymbol{M}^{k} \otimes \boldsymbol{I}^{k}. \tag{5}
+$$
+
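+Putting Equations (2)-(5) together, the sketch below illustrates one possible AAD layer in PyTorch. The kernel sizes and the use of a non-affine BatchNorm are assumptions for illustration; only the overall structure (attribute denormalization, identity denormalization, and a learned attention mask) follows the text.
+
+```python
+import torch
+import torch.nn as nn
+
+class AADLayer(nn.Module):
+    def __init__(self, c_h, c_att, c_id):
+        super().__init__()
+        self.norm = nn.BatchNorm2d(c_h, affine=False)          # Eq. (2)
+        self.conv_gamma_att = nn.Conv2d(c_att, c_h, 3, padding=1)
+        self.conv_beta_att = nn.Conv2d(c_att, c_h, 3, padding=1)
+        self.fc_gamma_id = nn.Linear(c_id, c_h)
+        self.fc_beta_id = nn.Linear(c_id, c_h)
+        self.conv_mask = nn.Conv2d(c_h, 1, 3, padding=1)
+
+    def forward(self, h_in, z_att_k, z_id):
+        # z_att_k is assumed to already match h_in spatially (H^k x W^k)
+        h_bar = self.norm(h_in)                                 # Eq. (2)
+        # attribute integration, Eq. (3)
+        A = self.conv_gamma_att(z_att_k) * h_bar + self.conv_beta_att(z_att_k)
+        # identity integration, Eq. (4); per-channel modulation broadcast
+        gamma_id = self.fc_gamma_id(z_id).unsqueeze(-1).unsqueeze(-1)
+        beta_id = self.fc_beta_id(z_id).unsqueeze(-1).unsqueeze(-1)
+        I = gamma_id * h_bar + beta_id
+        # adaptive attention mask and fusion, Eq. (5)
+        M = torch.sigmoid(self.conv_mask(h_bar))
+        return (1 - M) * A + M * I
+```
+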
+The AAD-Generator is then built with multiple AAD layers. As illustrated in Figure 3(a), after extracting the identity embedding $\mathbf{z}_{id}$ from source $X_{s}$ , and the attributes embedding $\mathbf{z}_{att}$ from target $X_{t}$ , we cascade AAD Residual Blocks (AAD ResBlks) to generate the swapped face $\hat{Y}_{s,t}$ , the structure of the AAD ResBlks is shown in Figure 3(b). For the AAD ResBlk on the $k$ -th feature level, it first takes the up-sampled activation from the previous level as input, then integrates this input with $\mathbf{z}_{id}$ and $\mathbf{z}_{att}^{k}$ . The final output image $\hat{Y}_{s,t}$ is convolved from the last activation.
+
+Training Losses: We utilize adversarial training for AEI-Net. Let $\mathcal{L}_{adv}$ be the adversarial loss for making $\hat{Y}_{s,t}$ realistic. It is implemented as a multi-scale discriminator [33] on the downsampled output images. In addition, an identity preservation loss is used to preserve the identity of the source. It is formulated as
+
+$$
+\mathcal{L}_{id} = 1 - \cos\left(\boldsymbol{z}_{id}\left(\hat{Y}_{s,t}\right), \boldsymbol{z}_{id}\left(X_{s}\right)\right), \tag{6}
+$$
+
+where $\cos(\cdot, \cdot)$ represents the cosine similarity of two vectors. We also define the attributes preservation loss as the $L_2$ distances between the multi-level attributes embeddings of $X_{t}$ and $\hat{Y}_{s,t}$. It is formulated as
+
+$$
+\mathcal{L}_{att} = \frac{1}{2} \sum_{k=1}^{n} \left\| \boldsymbol{z}_{att}^{k}\left(\hat{Y}_{s,t}\right) - \boldsymbol{z}_{att}^{k}\left(X_{t}\right) \right\|_{2}^{2}. \tag{7}
+$$
+
+When the source and target images are the same in a training sample, we define a reconstruction loss as the pixel-level $L_2$ distance between the target image $X_{t}$ and $\hat{Y}_{s,t}$:
+
+$$
+\mathcal{L}_{rec} = \begin{cases} \frac{1}{2} \left\| \hat{Y}_{s,t} - X_{t} \right\|_{2}^{2} & \text{if } X_{t} = X_{s} \\ 0 & \text{otherwise} \end{cases} \tag{8}
+$$
+
+The AEI-Net is finally trained with a weighted sum of above losses as
+
+$$
+\mathcal{L}_{\text{AEI-Net}} = \mathcal{L}_{adv} + \lambda_{att} \mathcal{L}_{att} + \lambda_{id} \mathcal{L}_{id} + \lambda_{rec} \mathcal{L}_{rec}, \tag{9}
+$$
+
+with $\lambda_{att} = \lambda_{rec} = 10, \lambda_{id} = 5$. The trainable modules of AEI-Net include the Multi-level Attributes Encoder and the AAD-Generator.
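+
+As an illustration, the training objective of Equations (6)-(9) could be assembled as in the following sketch, where `adv_loss`, `id_encoder`, `att_encoder` and `same_identity` are hypothetical placeholders for the multi-scale adversarial term, the two encoders, and a per-sample flag marking whether $X_t = X_s$.
+
+```python
+import torch.nn.functional as F
+
+def aei_loss(Y_hat, X_s, X_t, same_identity, adv_loss, id_encoder, att_encoder,
+             lam_att=10.0, lam_id=5.0, lam_rec=10.0):
+    # Eq. (6): identity preservation via cosine similarity of embeddings
+    L_id = 1 - F.cosine_similarity(id_encoder(Y_hat), id_encoder(X_s), dim=1).mean()
+    # Eq. (7): multi-level attribute preservation (lists of feature maps)
+    z_hat, z_t = att_encoder(Y_hat), att_encoder(X_t)
+    L_att = 0.5 * sum(((a - b) ** 2).sum(dim=(1, 2, 3)).mean()
+                      for a, b in zip(z_hat, z_t))
+    # Eq. (8): pixel reconstruction, applied only where X_t == X_s
+    L_rec = (0.5 * ((Y_hat - X_t) ** 2).sum(dim=(1, 2, 3))
+             * same_identity.float()).mean()
+    # Eq. (9): weighted sum with the adversarial term
+    return adv_loss + lam_att * L_att + lam_id * L_id + lam_rec * L_rec
+```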
+
+# 3.2. Heuristic Error Acknowledging Refinement Network
+
+Although the face swap result $\hat{Y}_{s,t}$ generated with AEI-Net in the first stage can well retain target attributes like pose, expression and scene lighting, it often fails to preserve the occlusions that appear on the target face $X_{t}$. Previous methods [31, 30] address face occlusions with an additional face segmentation network. It is trained on face data containing occlusion-aware face masks, which require lots of manual annotations. Besides, such a supervised approach can hardly recognize unseen occlusion types.
+
+
+Figure 4: HEAR-Net for the second stage. $\hat{Y}_{t,t}$ is the reconstruction of the target image $X_{t}$, i.e., $\hat{Y}_{t,t} = \text{AEI-Net}(X_t, X_t)$. $\hat{Y}_{s,t}$ is the swapped face from the first stage.
+
+We propose a heuristic method to handle facial occlusions. As shown in Figure 4(a), when the target face is occluded, some occlusions may disappear in the swapped face, e.g., the hair covering the face or the chains hanging from the turban. Meanwhile, we observe that if we feed the same image as both the source and target images into a well-trained AEI-Net, these occlusions also disappear in the reconstructed image. Thus, the error between the reconstructed image and its input can be leveraged to locate face occlusions. We call it the heuristic error of the input image, since it heuristically indicates where anomalies happen.
+
+Inspired by the above observation, we make use of a novel HEAR-Net to generate a refined face image. We first get the heuristic error of the target image as
+
+$$
+\Delta Y_{t} = X_{t} - \text{AEI-Net}\left(X_{t}, X_{t}\right). \tag{10}
+$$
+
+Then we feed the heuristic error $\Delta Y_{t}$ and the result of the first stage $\hat{Y}_{s,t}$ into a U-Net structure, and obtain the refined image $Y_{s,t}$ :
+
+$$
+Y_{s,t} = \text{HEAR-Net}\left(\hat{Y}_{s,t}, \Delta Y_{t}\right). \tag{11}
+$$
+
+The pipeline of HEAR-Net is illustrated in Figure 4(b).
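+
+A minimal sketch of this two-stage inference is given below; `aei_net` and `hear_net` are placeholders for the trained networks, and concatenating $\hat{Y}_{s,t}$ and $\Delta Y_t$ along the channel dimension is an assumption about how HEAR-Net receives its two inputs.
+
+```python
+import torch
+
+@torch.no_grad()
+def face_shift(aei_net, hear_net, X_s, X_t):
+    Y_hat = aei_net(X_s, X_t)                 # first-stage swap
+    Y_tt = aei_net(X_t, X_t)                  # self-reconstruction of the target
+    delta_Y = X_t - Y_tt                      # heuristic error, Eq. (10)
+    Y = hear_net(torch.cat([Y_hat, delta_Y], dim=1))   # refinement, Eq. (11)
+    return Y
+```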
+
+We train HEAR-Net in a fully self-supervised way, without using any manual annotations. Given any target face image $X_{t}$ , with or without occlusion regions, we utilize the following losses for training HEAR-Net. The first is an identity preservation loss to preserve the identity of the source. Similar as stage one, it is formulated as
+
+$$
+\mathcal{L}_{id}^{\prime} = 1 - \cos\left(\boldsymbol{z}_{id}\left(Y_{s,t}\right), \boldsymbol{z}_{id}\left(X_{s}\right)\right). \tag{12}
+$$
+
+The change loss $\mathcal{L}_{chg}^{\prime}$ guarantees the consistency between the results of the first stage and the second stage:
+
+$$
+\mathcal{L}_{chg}^{\prime} = \left| \hat{Y}_{s,t} - Y_{s,t} \right|. \tag{13}
+$$
+
+The reconstruction loss $\mathcal{L}_{rec}^{\prime}$ restricts that the second stage is able to reconstruct the input when the source and target images are the same:
+
+$$
+\mathcal{L}_{rec}^{\prime} = \begin{cases} \frac{1}{2} \left\| Y_{s,t} - X_{t} \right\|_{2}^{2} & \text{if } X_{t} = X_{s} \\ 0 & \text{otherwise} \end{cases} \tag{14}
+$$
+
+Since the number of occluded faces is very limited in most face datasets, we propose to augment data with synthetic occlusions. The occlusions are randomly sampled from a variety of datasets, including the EgoHands [3], GTEA Hand2K [15, 25, 24] and ShapeNet [9]. They are blended onto existing face images after random rotations, rescaling and color matching. Note that we do not utilize any occlusion mask supervision during training, even from these synthetic occlusions.
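+
+The sketch below illustrates one possible implementation of this occlusion augmentation (random rotation and rescaling, naive mean/std color matching, and alpha blending); the value ranges and the use of SciPy are assumptions, not the authors' exact recipe.
+
+```python
+import numpy as np
+from scipy import ndimage
+
+def paste_occlusion(face, occluder, alpha, rng):
+    """face: HxWx3, occluder: hxwx3, alpha: hxw; all float arrays in [0, 1]."""
+    # random rotation and rescaling of the occluder and its alpha mask
+    angle = rng.uniform(-45, 45)
+    scale = rng.uniform(0.3, 0.8)
+    occ = ndimage.rotate(occluder, angle, reshape=False, order=1)
+    msk = ndimage.rotate(alpha, angle, reshape=False, order=1)
+    occ = ndimage.zoom(occ, (scale, scale, 1), order=1)
+    msk = ndimage.zoom(msk, (scale, scale), order=1)
+    # naive color matching: align occluder statistics with the face image
+    occ = (occ - occ.mean()) / (occ.std() + 1e-6) * face.std() + face.mean()
+    occ = np.clip(occ, 0.0, 1.0)
+    # paste at a random location and alpha-blend; the mask itself is discarded,
+    # i.e. no occlusion-mask supervision is kept for training
+    h, w = msk.shape
+    y = rng.integers(0, face.shape[0] - h)
+    x = rng.integers(0, face.shape[1] - w)
+    out = face.copy()
+    region = out[y:y + h, x:x + w]
+    out[y:y + h, x:x + w] = msk[..., None] * occ + (1 - msk[..., None]) * region
+    return out
+```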
+
+Finally, HEAR-Net is trained with a sum of above losses:
+
+$$
+\mathcal{L}_{\text{HEAR-Net}} = \mathcal{L}_{rec}^{\prime} + \mathcal{L}_{id}^{\prime} + \mathcal{L}_{chg}^{\prime}. \tag{15}
+$$
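+
+For completeness, the second-stage objective of Equations (12)-(15) can be sketched analogously to the first-stage loss, again with `id_encoder` and `same_identity` as hypothetical placeholders.
+
+```python
+import torch.nn.functional as F
+
+def hear_loss(Y, Y_hat, X_s, X_t, same_identity, id_encoder):
+    # Eq. (12): identity preservation of the refined result
+    L_id = 1 - F.cosine_similarity(id_encoder(Y), id_encoder(X_s), dim=1).mean()
+    # Eq. (13): change loss between the two stages (L1)
+    L_chg = (Y_hat - Y).abs().mean()
+    # Eq. (14): reconstruction, applied only where X_t == X_s
+    L_rec = (0.5 * ((Y - X_t) ** 2).sum(dim=(1, 2, 3))
+             * same_identity.float()).mean()
+    return L_rec + L_id + L_chg               # Eq. (15)
+```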
+
+# 4. Experiments
+
+Implementation Details: For each face image, we first align and crop the face using five-point landmarks extracted with [11]; the cropped image is of size $256 \times 256$, covering the whole face as well as some background regions. The number of attribute embeddings in AEI-Net is set to $n = 8$ (Equation 1). The number of downsamples/upsamples in HEAR-Net is set to 5. Please refer to the supplemental material for more details concerning the network structure and training strategies.
+
+The AEI-Net is trained using CelebA-HQ [18], FFHQ [19] and VGGFace [34]. The HEAR-Net is trained using only the portion of faces whose heuristic errors fall in the top $10\%$ of these datasets, with additional augmentation using synthetic occlusions. Occlusion images are randomly sampled from EgoHands [3], GTEA Hand2K [15, 25, 24] and object renderings from ShapeNet [9].
+
+# 4.1. Comparison with Previous Methods
+
+Qualitative Comparison: We compare our method with FaceSwap [2], Nirkin et al. [31], DeepFakes [1] and IPGAN [5] on the FaceForensics++ [36] test images in Figure 5. A comparison with the latest work FSGAN [30] is shown in Figure 6. We can see that, since FaceSwap, Nirkin et al., DeepFakes, and FSGAN all follow the strategy of first synthesizing the inner face region and then blending it into the target face, they suffer, as expected, from blending inconsistency. All faces generated by these methods share exactly the same face contours with their target faces, and ignore the source face shapes (Figure 5 rows 1-4, Figure 6 rows 1-2). Besides, their results cannot faithfully respect critical information from the target image, such as the lighting (Figure 5 row 3, Figure 6 rows 3-5) and the image resolutions (Figure 5 rows 2 and 4). IPGAN [5] suffers from decreased resolutions in all samples, due to its single-level attributes representation. IPGAN also cannot well preserve the expression of the target face, e.g., the closed eyes (Figure 5 row 2).
+
+
+Figure 5: Comparison with FaceSwap [2], Nirkin et al. [31], DeepFakes [1], IPGAN [5] on FaceForensics++ [36] face images. Our results better preserve the face shapes of the source identities, and are also more faithful to the target attributes (e.g. lightings, image resolutions).
+
+| method | ID retrieval ↑ | pose ↓ | expression ↓ |
+| --- | --- | --- | --- |
+| DeepFakes [1] | 81.96 | 4.14 | 2.57 |
+| FaceSwap [2] | 54.19 | 2.51 | 2.14 |
+| Nirkin et al. [31] | 76.57 | 3.29 | 2.33 |
+| IPGAN [5] | 82.41 | 4.04 | 2.50 |
+| Ours | 97.38 | 2.96 | 2.06 |
+
+Our method addresses all these issues well. We achieve higher fidelity by well preserving the face shapes of the source (instead of the target), and faithfully respecting the lighting and image resolution of the target (instead of the source). Our method also has the ability to go beyond FSGAN [30] to handle occlusions.
+
+Quantitative Comparison: The experiment is conducted on the FaceForensics++ [36] dataset. For FaceSwap [2] and DeepFakes [1], the test set consists of 10K face images for each method, obtained by evenly sampling 10 frames from each video clip. For IPGAN [5], Nirkin et al. [31] and our method, 10K face images are generated with the same source and target image pairs as the other methods. Then we conduct a quantitative comparison with respect to three metrics: ID retrieval, pose error and expression error.
+
+We extract the identity vector using a different face recognition model [41] and adopt the cosine similarity to measure the identity distance. For each swapped face from the test set, we search for the nearest face among all FaceForensics++ original video frames and check whether it belongs to the correct source video. The averaged accuracy of all such retrievals is reported as the ID retrieval in Table 1, serving to measure identity preservation ability. Our method achieves a higher ID retrieval score by a large margin.
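+
+A minimal sketch of this metric is shown below, assuming the swapped faces and all gallery frames have already been embedded and L2-normalized by the face recognition model.
+
+```python
+import numpy as np
+
+def id_retrieval(swap_emb, swap_src_ids, gallery_emb, gallery_ids):
+    """swap_emb: (N, D), gallery_emb: (M, D); both rows L2-normalized."""
+    sims = swap_emb @ gallery_emb.T              # cosine similarity matrix
+    nearest = sims.argmax(axis=1)                # nearest gallery frame per swap
+    hits = gallery_ids[nearest] == swap_src_ids  # retrieved the source video?
+    return 100.0 * hits.mean()                   # averaged retrieval accuracy
+```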
+
+We use a pose estimator [37] to estimate head pose and a 3D face model [10] to retrieve expression vectors. We report the $L_2$ distances of pose and expression vectors
+
+
+Figure 6: Comparison with FSGAN [30]. Besides the advantages in face quality and fidelity to inputs, our results preserve common occlusions as good as FSGAN. Please also refer to Figures 1, 10 and 11 for more challenging cases.
+
+Table 1: Comparison on FaceForensics++ videos.
+
+| method | id. | attr. | realism |
+| --- | --- | --- | --- |
+| DeepFakes [1] | 13.7 | 6.8 | 6.1 |
+| FaceSwap [2] | 12.1 | 23.7 | 6.8 |
+| Nirkin et al. [31] | 21.3 | 7.4 | 4.2 |
+| Ours | 52.9 | 62.1 | 82.9 |
+
+Table 2: User study results. We show the averaged selection percentages of each method.
+
+between the swapped face and its target face in Table 1 as the pose and expression errors. Our method is advantageous in expression preservation while comparable with others in pose preservation. We do not use the face landmark comparison as in [30], since face landmarks involve identity information, which should be inconsistent between the swapped face and the target face.
+
+Human Evaluation: Three user studies are conducted to evaluate the performance of the proposed model. We let the users select: i) the one having the most similar identity with the source face; ii) the one sharing the most similar head pose, face expression and scene lighting with the target image; iii) the most realistic one. In each study unit, two real face images, the source and the target, and four reshuffled face swapping results generated by FaceSwap [2], Nirkin et al. [31], DeepFakes [1] and ours, are presented. We ask users to select one face that best matches our description.
+
+
+Figure 7: Comparing AEI-Net with three baseline models. The two models Add and Cat are for ablation studies of the adaptive embedding integration. The model Compressed is for ablating multi-level attributes representation.
+
+For each user, 20 face pairs are randomly drawn from the 1K FaceForensics++ test set without duplication. Finally, we collect answers from 100 human evaluators. The averaged selection percentage for each method on each study is presented in Table 2. It shows that our model surpasses the other three methods by large margins.
+
+# 4.2. Analysis of the Framework
+
+Adaptive Embedding Integration: To verify the necessity of adaptive integration using attentional masks, we compare AEI-Net with two baseline models: i) Add: an element-wise addition is adopted in the AAD layers instead of using the masks $M^k$ as in Equation 5; the output activation of this model is directly calculated as $h_{out}^k = A^k + I^k$. ii) Cat: feature concatenation is adopted without using the masks $M^k$; the output activation becomes $h_{out}^k = \text{Concat}[A^k, I^k]$. Results of the two baseline models, as well as AEI-Net, are compared in Figure 7. Without a soft mask for fusing embeddings adaptively, the faces generated by the baseline models are relatively blurry and contain lots of ghosting artifacts.
+
+We also visualize the masks $M^k$ of AAD layers on different levels in Figure 8, where a brighter pixel indicates a higher weight for the identity embedding in Equation 5. It shows that the identity embedding has more effect in low-level layers. Its effective region becomes sparser in middle levels, where it activates only in some key regions that strongly relate to the face identity, such as the locations of eyes, mouth and face contours.
+
+Multi-level Attributes: To verify whether it is necessary to extract multi-level attributes, we compare with another baseline model called Compressed, which shares the same network structure as AEI-Net, but only utilizes the first three levels of embeddings $\mathbf{z}_{att}^{k}$, $k = 1,2,3$. Its last embedding
+
+
+
+
+Figure 8: Visualizing attentional masks $M^k$ of AAD layers on different feature levels. These visualizations reflect that identity embeddings are mostly effective in low and middle feature levels.
+Figure 9: Query results using attributes embedding.
+
+$z_{att}^{3}$ is fed into all higher-level AAD integrations. Its results are also compared in Figure 7. Similar to IPGAN [5], its results suffer from artifacts like blurriness, since much of the attribute information from the target images is lost.
+
+To understand what is encoded in the attributes embedding, we concatenate the embeddings $z_{att}^{k}$ (bilinearly upsampled to $256 \times 256$ and vectorized) from all levels as a unified attribute representation. We apply PCA to reduce the vector dimension to 512. We then perform tests querying faces from the training set with the nearest $L_2$ distances of such vectors. The three results illustrated in Figure 9 verify our intention that the attributes embeddings can well reflect face attributes, such as the head pose, hair color, expression and even the existence of sunglasses on the face. This also explains why our AEI-Net can sometimes preserve occlusions like sunglasses on the target face even without a second stage (Figure 10(8)).
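+
+The query experiment can be sketched as follows; the helper names are hypothetical, and scikit-learn's PCA stands in for whatever dimensionality reduction the authors used.
+
+```python
+import numpy as np
+import torch
+import torch.nn.functional as F
+from sklearn.decomposition import PCA
+
+def flatten_attributes(z_att):
+    """Upsample every level to 256x256, concatenate and vectorize per image."""
+    ups = [F.interpolate(z, size=(256, 256), mode='bilinear', align_corners=False)
+           for z in z_att]
+    return torch.cat(ups, dim=1).flatten(start_dim=1).cpu().numpy()
+
+def build_index(train_vectors):
+    pca = PCA(n_components=512)                  # reduce to 512 dimensions
+    return pca, pca.fit_transform(train_vectors)
+
+def query(pca, index, query_vector, k=3):
+    """query_vector: shape (1, D); returns indices of the k nearest faces."""
+    q = pca.transform(query_vector)
+    dists = np.linalg.norm(index - q, axis=1)    # L2 distance to every face
+    return np.argsort(dists)[:k]
+```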
+
+Second Stage Refinement: Multiple samples are displayed with both one-stage results $\hat{Y}_{s,t}$ and two-stage results $Y_{s,t}$ in Figure 10. It shows that the AEI-Net is able to generate high-fidelity face swapping results, but sometimes its output $\hat{Y}_{s,t}$ does not preserve occlusions in the target. Fortunately, the HEAR-Net in the second stage is able to recover them.
+
+The HEAR-Net can handle occlusions of various kinds, such as the medal (1), hand (2), hair (3), face painting (4), mask (5), translucent object (6), eyeglasses (7), headscarf (8) and floating text (9). Besides, it is also able to correct the color-shift that might occasionally happen in $\hat{Y}_{s,t}$ (10). Moreover, the HEAR-Net can help rectify the face shape when the target face has a very large pose (6).
+
+# 4.3. More Results on Wild Faces
+
+Furthermore, we demonstrate the strong capability of FaceShifter by testing wild face images downloaded from the Internet. As shown in Figure 11, our method can handle face images under various conditions, including large
+
+
+Figure 10: Second-stage refining results presenting the strong adaptability of HEAR-Net to various kinds of errors, including occlusions, reflections, slightly shifted pose and color, etc.
+
+Figure 11: Our face swapping results on wild face images under various challenging conditions. All results are generated using a single well-trained two-stage model.
+
+poses, uncommon lighting and occlusions of very challenging kinds.
+
+# 4.4. Examining Forged Face Detection Algorithms
+
+Finally, we examine the performance of different face forgery detection algorithms on our face swapping results. First, we randomly generate 5,000 face swapping images and 5,000 real images. Then we apply the detection models from $\mathrm{FF}++$ [36] and Face X-Ray [23] and show the detection results in Table 3. We notice that Face X-Ray achieves impressive results on our generated images.
+
+# 5. Conclusions
+
+In this paper, we proposed a novel framework named FaceShifter for high-fidelity and occlusion-aware face swapping.
+
+| Methods | AUC | AP | EER |
+| --- | --- | --- | --- |
+| FF++ [36] | 52.22 | 52.87 | 0.4805 |
+| Face X-Ray [23] | 96.82 | 90.53 | 0.0956 |
+
+Table 3: Results in terms of AUC, AP and EER for FF++ [36] and Face X-ray [23] on our generated faces.
+
+The AEI-Net in the first stage adaptively integrates the identity and the attributes for synthesizing high fidelity results. The HEAR-Net in the second stage recovers anomaly regions in a self-supervised way without any manual annotations. The proposed framework shows superior performance in generating realistic face images given any face pairs without subject-specific training. Extensive experiments demonstrate that the proposed framework significantly outperforms previous face swapping methods, setting up a new benchmark for face forensics research.
+
+# References
+
+[1] DeepFakes. https://github.com/ondyari/FaceForensics/tree/master/dataset/DeepFakes. Accessed: 2019-09-30. 2,5,6
+[2] FaceSwap. https://github.com/ondyari/FaceForensics/tree/master/dataset/FaceSwapKowalski. Accessed: 2019-09-30. 2, 5, 6
+[3] Sven Bambach, Stefan Lee, David J Crandall, and Chen Yu. Lending a hand: Detecting hands and recognizing activities in complex egocentric interactions. In Proceedings of the IEEE International Conference on Computer Vision, pages 1949-1957, 2015. 5
+[4] Jianmin Bao, Dong Chen, Fang Wen, Houqiang Li, and Gang Hua. Cvae-gan: fine-grained image generation through asymmetric training. In Proceedings of the IEEE International Conference on Computer Vision, pages 2745-2754, 2017. 1
+[5] Jianmin Bao, Dong Chen, Fang Wen, Houqiang Li, and Gang Hua. Towards open-set identity preserving face synthesis. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. 2, 3, 5, 6, 7
+[6] Dmitri Bitouk, Neeraj Kumar, Samreen Dhillon, Peter Belhumeur, and Shree K Nayar. Face swapping: automatically replacing faces in photographs. In ACM Transactions on Graphics (TOG), volume 27, page 39. ACM, 2008. 1, 2
+[7] Volker Blanz, Kristina Scherbaum, Thomas Vetter, and Hans-Peter Seidel. Exchanging faces in images. In Computer Graphics Forum, volume 23, pages 669-676. Wiley Online Library, 2004. 1, 2, 3
+[8] Volker Blanz, Thomas Vetter, et al. A morphable model for the synthesis of 3d faces. In Siggraph, volume 99, pages 187-194, 1999. 3
+[9] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015. 5
+[10] Bindita Chaudhuri, Noranart Vesdapunt, and Baoyuan Wang. Joint face detection and facial motion retargeting for multiple faces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9719-9728, 2019. 6
+[11] Dong Chen, Shaoqing Ren, Yichen Wei, Xudong Cao, and Jian Sun. Joint cascade face detection and alignment. In European Conference on Computer Vision, pages 109-122. Springer, 2014. 5
+[12] Yi-Ting Cheng, Virginia Tzeng, Yu Liang, Chuan-Chang Wang, Bing-Yu Chen, Yung-Yu Chuang, and Ming Ouhyoung. 3d-model-based face replacement in video. In SIGGRAPH'09: Posters, page 29. ACM, 2009. 1
+[13] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4690-4699, 2019. 3
+
+[14] Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur. A learned representation for artistic style. arXiv preprint arXiv:1610.07629, 2016. 3
+[15] Alireza Fathi, Xiaofeng Ren, and James M Rehg. Learning to recognize objects in egocentric activities. In CVPR 2011, pages 3281-3288. IEEE, 2011. 5
+[16] Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision, pages 1501-1510, 2017. 3
+[17] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. 3
+[18] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017. 5
+[19] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4401-4410, 2019. 5
+[20] Hyeongwoo Kim, Pablo Garrido, Ayush Tewari, Weipeng Xu, Justus Thies, Matthias Niessner, Patrick Pérez, Christian Richardt, Michael Zollhöfer, and Christian Theobalt. Deep video portraits. ACM Transactions on Graphics (TOG), 37(4):163, 2018. 2
+[21] Iryna Korshunova, Wenzhe Shi, Joni Dambre, and Lucas Theis. Fast face-swap using convolutional neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 3677-3685, 2017. 1, 2
+[22] Iryna Korshunova, Wenzhe Shi, Joni Dambre, and Lucas Theis. Fast face-swap using convolutional neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 3677-3685, 2017. 2
+[23] Lingzhi Li, Jianmin Bao, Ting Zhang, Hao Yang, Dong Chen, Fang Wen, and Baining Guo. Face x-ray for more general face forgery detection. arXiv preprint arXiv:1912.13458, 2019. 1, 8
+[24] Yin Li, Alireza Fathi, and James M Rehg. Learning to predict gaze in egocentric video. In Proceedings of the IEEE International Conference on Computer Vision, pages 3216-3223, 2013. 5
+[25] Yin Li, Zhefan Ye, and James M Rehg. Delving into egocentric actions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 287-295, 2015. 5
+[26] Yuan Lin, Shengjin Wang, Qian Lin, and Feng Tang. Face swapping under large pose variations: A 3d model based approach. In 2012 IEEE International Conference on Multimedia and Expo, pages 333-338. IEEE, 2012. 1
+[27] Koki Nagano, Jaewoo Seo, Jun Xing, Lingyu Wei, Zimo Li, Shunsuke Saito, Aviral Agarwal, Jens Fursund, and Hao Li. pagan: real-time avatars using dynamic textures. In SIGGRAPH Asia 2018 Technical Papers, page 258. ACM, 2018. 2, 3
+[28] Ryota Natsume, Tatsuya Yatagawa, and Shigeo Morishima. Fsnet: An identity-aware generative model for image-based face swapping. In Asian Conference on Computer Vision, pages 117-132. Springer, 2018. 1, 2, 3
+
+[29] Ryota Natsume, Tatsuya Yatagawa, and Shigeo Morishima. Rsgan: face swapping and editing using face and hair representation in latent spaces. arXiv preprint arXiv:1804.03447, 2018. 1, 2, 3
+[30] Yuval Nirkin, Yosi Keller, and Tal Hassner. Fsgan: Subject agnostic face swapping and reenactment. In Proceedings of the IEEE International Conference on Computer Vision, pages 7184-7193, 2019. 1, 2, 3, 4, 5, 6
+[31] Yuval Nirkin, Iacopo Masi, Anh Tran Tuan, Tal Hassner, and Gerard Medioni. On face segmentation, face swapping, and face perception. In 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), pages 98-105. IEEE, 2018. 1, 2, 4, 5, 6
+[32] Kyle Olszewski, Zimo Li, Chao Yang, Yi Zhou, Ronald Yu, Zeng Huang, Sitao Xiang, Shunsuke Saito, Pushmeet Kohli, and Hao Li. Realistic dynamic facial textures from a single image using gans. In Proceedings of the IEEE International Conference on Computer Vision, pages 5429-5438, 2017. 2, 3
+[33] Taesung Park, Ming-Yu Liu, Ting-Chun Wang, and Jun-Yan Zhu. Semantic image synthesis with spatially-adaptive normalization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2337-2346, 2019. 3, 4
+[34] Omkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, et al. Deep face recognition. In BMVC, volume 1, page 6, 2015. 5
+[35] Arun Ross and Asem Othman. Visual cryptography for biometric privacy. IEEE transactions on information forensics and security, 6(1):70-81, 2010. 1
+[36] Andreas Rössler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, and Matthias Nießner. Faceforensics++: Learning to detect manipulated facial images. arXiv preprint arXiv:1901.08971, 2019. 1, 2, 5, 6, 8
+[37] Nataniel Ruiz, Eunji Chong, and James M Rehg. Fine-grained head pose estimation without keypoints. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 2074–2083, 2018. 6
+[38] Justus Thies, Michael Zollhöfer, and Matthias Nießner. Deferred neural rendering: Image synthesis using neural textures. arXiv preprint arXiv:1904.12356, 2019. 2
+[39] Justus Thies, Michael Zollhöfer, Matthias Nießner, Levi Valgaerts, Marc Stamminger, and Christian Theobalt. Real-time expression transfer for facial reenactment. ACM Trans. Graph., 34(6):183-1, 2015. 2
+[40] Justus Thies, Michael Zollhofer, Marc Stamminger, Christian Theobalt, and Matthias Nießner. Face2face: Real-time face capture and reenactment of rgb videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2387-2395, 2016. 2
+[41] Hao Wang, Yitong Wang, Zheng Zhou, Xing Ji, Dihong Gong, Jingchao Zhou, Zhifeng Li, and Wei Liu. Cosface: Large margin cosine loss for deep face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5265-5274, 2018. 6
+[42] Hong-Xia Wang, Chunhong Pan, Haifeng Gong, and Huai-Yu Wu. Facial image composition based on active appearance model. In 2008 IEEE International Conference on
+
+Acoustics, Speech and Signal Processing, pages 893-896. IEEE, 2008. 1, 2
\ No newline at end of file
diff --git a/advancinghighfidelityidentityswappingforforgerydetection/images.zip b/advancinghighfidelityidentityswappingforforgerydetection/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..1e8c42e0d9f978a5428a351122eb3af43ba46b0c
--- /dev/null
+++ b/advancinghighfidelityidentityswappingforforgerydetection/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0a9e95fce333b8de3b8f0cb16b9b2dc0a780be89583c56649a150473d41eca8f
+size 822184
diff --git a/advancinghighfidelityidentityswappingforforgerydetection/layout.json b/advancinghighfidelityidentityswappingforforgerydetection/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..18fb25dca52f2d38c14e79c65ca7d70810603a86
--- /dev/null
+++ b/advancinghighfidelityidentityswappingforforgerydetection/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dbbe7f3e4b008a78ef29ff545bda3a51b002497526a36e210b49b1f0d44a7e7c
+size 435691
diff --git a/adversarialcamouflagehidingphysicalworldattackswithnaturalstyles/995c5f3d-db23-4367-adfb-7fd2e1e3db82_content_list.json b/adversarialcamouflagehidingphysicalworldattackswithnaturalstyles/995c5f3d-db23-4367-adfb-7fd2e1e3db82_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..81c8cb5a286c8a1ec9b08a3ad71d5c9d2e946100
--- /dev/null
+++ b/adversarialcamouflagehidingphysicalworldattackswithnaturalstyles/995c5f3d-db23-4367-adfb-7fd2e1e3db82_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f2ac0cf586bf13f9d70ccd0cba64a8398666a7cb994c6edef01a3a1678500e57
+size 77688
diff --git a/adversarialcamouflagehidingphysicalworldattackswithnaturalstyles/995c5f3d-db23-4367-adfb-7fd2e1e3db82_model.json b/adversarialcamouflagehidingphysicalworldattackswithnaturalstyles/995c5f3d-db23-4367-adfb-7fd2e1e3db82_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..4de566934e81736cd75162b55c5e98f6cec3cc5a
--- /dev/null
+++ b/adversarialcamouflagehidingphysicalworldattackswithnaturalstyles/995c5f3d-db23-4367-adfb-7fd2e1e3db82_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4e39aae34a41d320f1004c8aabda6c30c69b4cb07acbf4db259fde5c4f6fe24a
+size 93618
diff --git a/adversarialcamouflagehidingphysicalworldattackswithnaturalstyles/995c5f3d-db23-4367-adfb-7fd2e1e3db82_origin.pdf b/adversarialcamouflagehidingphysicalworldattackswithnaturalstyles/995c5f3d-db23-4367-adfb-7fd2e1e3db82_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..75347114b688df68aa3d3dc2c409dc4cb145a16f
--- /dev/null
+++ b/adversarialcamouflagehidingphysicalworldattackswithnaturalstyles/995c5f3d-db23-4367-adfb-7fd2e1e3db82_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:355f32df8b96b5d4b8d34443327d4e2a7f03fe4752614f551fbaa9d482d49c6c
+size 2040806
diff --git a/adversarialcamouflagehidingphysicalworldattackswithnaturalstyles/full.md b/adversarialcamouflagehidingphysicalworldattackswithnaturalstyles/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e58c24fd6a9979c20150c7aed898216dec6b78e7
--- /dev/null
+++ b/adversarialcamouflagehidingphysicalworldattackswithnaturalstyles/full.md
@@ -0,0 +1,374 @@
+# Adversarial Camouflage: Hiding Physical-World Attacks with Natural Styles
+
+Ranjie Duan $^{1}$ Xingjun Ma $^{2\dagger}$ Yisen Wang $^{3}$ James Bailey $^{2}$ A. K. Qin $^{1}$ Yun Yang $^{1}$ $^{1}$ Swinburne University of Technology $^{2}$ The University of Melbourne $^{3}$ Shanghai Jiao Tong University
+
+# Abstract
+
+Deep neural networks (DNNs) are known to be vulnerable to adversarial examples. Existing works have mostly focused on either digital adversarial examples created via small and imperceptible perturbations, or physical-world adversarial examples created with large and less realistic distortions that are easily identified by human observers. In this paper, we propose a novel approach, called Adversarial Camouflage (AdvCam), to craft and camouflage physical-world adversarial examples into natural styles that appear legitimate to human observers. Specifically, AdvCam transfers large adversarial perturbations into customized styles, which are then "hidden" on the target object or in the off-target background. Experimental evaluation shows that, in both digital and physical-world scenarios, adversarial examples crafted by AdvCam are well camouflaged and highly stealthy, while remaining effective in fooling state-of-the-art DNN image classifiers. Hence, AdvCam is a flexible approach that can help craft stealthy attacks to evaluate the robustness of DNNs. AdvCam can also be used to protect private information from being detected by deep learning systems.
+
+# 1. Introduction
+
+Deep neural networks (DNNs) are a family of powerful models that have been widely used in various AI systems, and they have achieved great success across many applications, such as image classification [13], speech recognition [26], natural language processing [33], and autonomous driving [6]. However, DNNs are known to be vulnerable to adversarial examples (or attacks) that are crafted by adding carefully designed perturbations to normal examples [25, 12, 20, 2, 27, 28]. This raises serious concerns for security-critical applications [23, 8, 10, 15, 21]. For example, as shown in Figure 1, a carefully crafted perturbation that resembles stains, snow and discoloration can be added to the surface of a stop sign. A self-driving car equipped with a state-of-the-art classifier then detects the modified stop sign as other objects with very high confidence (we will examine how this perturbation was created later).
+
+
+Figure 1: Camouflaged adversarial examples crafted by proposed AdvCam attack. Given a target image in (a), an adversary can choose different camouflage styles from (b) to craft adversarial examples in (c) that appear naturally occurring, yet can fool a DNN classifier to make incorrect predictions with high confidence.
+
+Adversarial attacks can be applied in two different settings: 1) the digital setting, where the attacker can feed input digital images directly into the DNN classifier; and 2) the physical-world setting, where the DNN classifier only accepts inputs from a camera and the attacker can only present adversarial images to the camera. Three properties may be used to characterize an adversarial attack: 1) adversarial strength, the ability to fool DNNs; 2) adversarial stealthiness, whether the adversarial perturbations can be detected by human observers; and 3) camouflage flexibility, the degree to which the attacker can control the appearance of the adversarial image.
+
+Most attacking methods have been developed for the digital setting, such as Projected Gradient Descent (PGD) [22], the Carlini and Wagner (CW) attack [4] and adversarial examples crafted using generative adversarial networks (AdvGAN) [30]. For digital attacks, small perturbations are often sufficient. However, physical-world attacks require large or even unrestricted perturbations [17, 1], since small perturbations are too subtle to be captured by cameras in complex physical-world environments.
+
+Figure 2: Examples of successful physical-world attacks: (a) $RP_{2}$, (b) AdvCam, (c) AdvPatch, (d) AdvCam. AdvCam refers to our proposed adversarial camouflage.
+
+There already exist several attacking methods that go beyond small perturbations, such as adversarial patch (AdvPatch) [3] and robust physical perturbations $(RP_{2})$ [10]. The properties of existing attacks are summarized in Table 1, where $\star \star$ means better than $\star$. To summarize, stealthiness can be achieved with small perturbations, which are only useful in the digital setting. Also, existing attacks require an exact perturbation size to achieve stealthiness, yet it is difficult to decide a proper perturbation size that gives both visual imperceptibility and adversarial strength, especially in the physical setting. Besides, the generation process of current methods is difficult to control, e.g., an attacker cannot decide the appearance of the adversarial examples. As such, the camouflage flexibility of these methods is rather limited. A flexible (yet strong and stealthy) camouflage mechanism for large perturbations is still an open problem that needs to be addressed.
+
+Table 1: Summary of existing attacks and our AdvCam.
+
+| Attack | Digital | Physical | Stealthiness | Flexibility |
+| --- | --- | --- | --- | --- |
+| PGD | ✓ | × | ★★ | ★ |
+| AdvPatch | × | ✓ | ★ | ★ |
+| RP2 | × | ✓ | ★ | ★ |
+| AdvCam | ✓ | ✓ | ★★ | ★★ |
+
+To address this gap, in this paper, we propose a novel adversarial camouflage approach (AdvCam) to craft and camouflage adversarial examples into natural styles using style transfer techniques. The style of an image is an abstract concept that generally refers to its visual appearance such as color and texture, in contrast to its structural information [16]. In AdvCam, the camouflage style and attack region can be customized by the attacker according to different attacking scenarios. For example, Figure 1 shows several adversarial traffic signs crafted by our AdvCam attack. A quick visual comparison of AdvCam to existing physical-world attacks can be found in Figure 2. While all the adversarial examples in Figure 2 attack DNNs successfully, we can see that AdvCam generates an adversarial perturbation on the stop sign that resembles natural stains, compared to the artificial graffiti created by $RP_{2}$, and a camouflaged product label, compared to the obtrusive patch generated by AdvPatch. Our proposed AdvCam is capable of generating highly stealthy adversarial examples that are at the same time robust to various physical-world conditions.
+
+In summary, AdvCam is not a perturbation-restricted attack and therefore is not inherently subject to the limited amount of perturbation typically required by existing perturbation-restricted techniques. We define a flexible mechanism that induces perturbations with a natural-looking appearance, which is a fundamentally different paradigm from previous attacks. It is this intrinsic difference in working principle that allows AdvCam to produce more realistic images than existing methods.
+
+Our key contributions in this paper are:
+
+- We propose a flexible adversarial camouflage approach, AdvCam, to craft and camouflage adversarial examples.
+- AdvCam allows the generation of large perturbations, customizable attack regions and camouflage styles. It is very flexible and useful for vulnerability evaluation of DNNs against large perturbations for physical-world attacks.
+- Experiments on both digital and physical-world scenarios show that adversarial examples camouflaged by AdvCam are highly stealthy, while remaining effective in fooling state-of-the-art DNN image classifiers.
+
+# 2. Related Work
+
+# 2.1. Adversarial Attack
+
+An adversarial attack generates adversarial examples by maximizing the classification error of the target model (the model to attack) [25]. Attacks can be targeted or untargeted. A targeted attack fools the network into misclassifying the adversarial example as a class that the attacker specifies, while an untargeted attack fools the network into misclassifying the adversarial example as any incorrect class [12]. Adversarial attacks can be applied either in a digital setting directly on the target model, or in a physical-world setting, where recaptured photos of adversarial examples are fed to the target model [17].
+
+# 2.1.1 Digital attacks
+
+Adversarial examples can be crafted by one or more steps of perturbation following the direction of adversarial gradients. This includes the classic Fast Gradient Sign Method (FGSM) [12], the Basic Iterative Method (BIM) [17], the strongest first-order method Projected Gradient Descent (PGD) [22], and the Skip Gradient Method (SGM) [29] for transferable attacks. These attacks can be either targeted or untargeted, and their perturbations are bounded by a small norm-ball $\| \cdot \| _p\leq \epsilon$, with $L_{2}$ and $L_{\infty}$ being the most commonly used norms. Optimization-based attacks, such as the Carlini and Wagner (CW) attack [4] and the elastic-net (EAD) attack [7], directly minimize the perturbations as part of the adversarial loss.
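+
+To make the norm-bounded setting concrete, the sketch below shows a minimal $L_{\infty}$ PGD loop in PyTorch. The model interface, loss choice and hyperparameter values are illustrative assumptions, not the exact implementation of any cited work.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
+    """Minimal L_inf PGD sketch: step along the gradient sign, then project
+    back into the eps-ball around the clean input and the valid pixel range."""
+    x_adv = x.clone().detach()
+    for _ in range(steps):
+        x_adv.requires_grad_(True)
+        loss = F.cross_entropy(model(x_adv), y)
+        grad = torch.autograd.grad(loss, x_adv)[0]
+        with torch.no_grad():
+            x_adv = x_adv + alpha * grad.sign()            # ascend the loss
+            x_adv = x + (x_adv - x).clamp(-eps, eps)       # project to the eps-ball
+            x_adv = x_adv.clamp(0.0, 1.0)                  # keep valid pixel values
+    return x_adv.detach()
+```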
+
+There also exist several unrestricted adversarial attacks. These attacks search for modifications on substitutable components (attributes) of an image such as color [14], texture and physical parameters [34, 18] while preserving critical components of images. However, these attacks either produce large unnatural distortions or rely on training data that has semantic annotations. Moreover, these attacks cannot generate complex adversarial patterns and thus are quite limited for complicated real scenarios. Adversarial examples can also be generated by generative adversarial networks (GANs) [30, 24]. However, it is difficult to craft targeted attacks for a given test image with GANs, since the generation process is hard to control.
+
+Most existing attacks achieve stealthiness by either crafting small perturbations or modifying semantic attributes of the target image. However, a flexible camouflage mechanism that can effectively hide adversarial examples with natural styles is still missing from the literature. In this paper, we address this gap by proposing one such approach.
+
+# 2.1.2 Physical-world attacks
+
+A study has shown that, by printing adversarial images and recapturing them with a cell-phone camera, digital adversarial examples can still be effective [17]. However, follow-up works have found that such attacks are not easy to realize under physical-world conditions, due to viewpoint shifts, camera noise, and other natural transformations [1]. Thus, strong physical-world attacks require large perturbations, as well as specific adaptations over a distribution of transformations including lighting, rotation, perspective projection, etc. The AdvPatch attack allows large perturbations and is immune to scaling or rotation, and is thus directly applicable as a physical-world attack [3]. Adversarial stickers and graffiti have also been used to attack traffic sign classifiers and ImageNet classifiers in physical-world scenarios [10]. Other physical-world attacks include adversarial eye-glass frames [23], vehicles [35], or t-shirts [32] that can fool face recognition systems or object detectors. All these physical-world attacks generate large perturbations to increase adversarial strength, which inevitably results in large and unrealistic distortions. This greatly reduces their stealthiness, as shown in Figure 2.
+
+# 2.2. Neural Style Transfer
+
+Neural style transfer evolves from the problem of texture transfer, for which the goal is to transfer the texture of a source image to a target image while preserving the structural information of the target image. Traditional texture transfer methods mainly focus on non-parametric algorithms that resample pixels from the source texture [9]. However, these methods suffer from the fundamental limitation of pixel replacement, i.e., they cannot handle complex styles. Neural style transfer demonstrates remarkable results for image stylization [11]. In neural style transfer, the content and style information of an image can be separated from its feature representations learned by a convolutional neural network (CNN). The style information of a style image can then be recombined into the target image to achieve style transfer. This technique has attracted several follow-up works for different aspects of improvement [5, 19]. In this paper, we will exploit these techniques for the camouflage of adversarial examples.
+
+# 3. Our Adversarial Camouflage Approach
+
+In this section, we first give an overview of the adversarial attack problem and our proposed camouflage approach. We then introduce the loss functions used by our proposed approach, and adaptations for physical world conditions.
+
+# 3.1. Overview
+
+Given a test image $x \in \mathbb{R}^m$ with class label $y$ , a DNN classifier $F: \mathbb{R}^m \to \{1, \dots, k\}$ mapping image pixels to a discrete label set, and a target class $y_{adv} \neq y$ , an adversarial attack finds an adversarial example $x'$ for the target image $x$ by solving the following optimization problem:
+
+$$
+\underset{x^{\prime}}{\text{minimize}} \quad \mathcal{D}\left(x, x^{\prime}\right) + \lambda \cdot \mathcal{L}_{adv}\left(x^{\prime}\right) \tag{1}
+$$
+
+$$
+\text{such that} \quad x^{\prime} \in [0, 255]^{m},
+$$
+
+where $\mathcal{D}(x,x^{\prime})$ is a distance metric that defines the stealthiness of the adversarial example, $\mathcal{L}_{adv}$ is the adversarial loss, $[0, 255]$ indicates the valid pixel values, and $\lambda$ is a parameter that adjusts the adversarial strength. Note that there is a trade-off between stealthiness and adversarial strength. Throughout the experiments, we fix all other parameters as constants [19] and only adjust the adversarial strength parameter $\lambda$ .
+
+Our goal is to develop a mechanism that crafts and camouflages adversarial examples with large perturbations into customized styles, where both the attack region and the style can be flexibly defined by the attacker. We use style transfer techniques to achieve the goal of camouflage and adversarial attack techniques to achieve adversarial strength. The final loss is a combination of an adversarial loss $\mathcal{L}_{adv}$ for adversarial strength, a style loss $\mathcal{L}_s$ for style generation, a content loss $\mathcal{L}_c$ to preserve the content of the source image, and a smoothness loss $\mathcal{L}_m$ to generate locally smooth regions. We denote this final loss as the adversarial camouflage loss:
+
+$$
+\mathcal {L} = \left(\mathcal {L} _ {s} + \mathcal {L} _ {c} + \mathcal {L} _ {m}\right) + \lambda \cdot \mathcal {L} _ {a d v}, \tag {2}
+$$
+
+where the three loss functions in brackets together serve the purpose of camouflage. The overview of our approach is illustrated in Figure 3. The attacker defines the target image, the target attack region and the expected target style. Our proposed AdvCam then generates an adversarial perturbation with the expected style in the expected region, as shown on the right of Figure 3.
+
+Figure 3: Overview of the proposed approach.
+
+To make the adversarial example robust to various environmental conditions, including lighting, rotation, etc., we add an extra physical adaptation step for the generated $x'$ at each iteration.
+
+# 3.2. Adversarial Camouflage Loss
+
+# 3.2.1 Style loss
+
+For traditional attacks, the stealthiness metric is defined by $\mathcal{D}(x,x^{\prime}) = \| x - x^{\prime}\|_{p}$ , where $\| \cdot \| _p$ is the $L_{p}$ norm and $L_{2}$ and $L_{\infty}$ are typically used. This is to constrain the perturbations to be "small". For our proposed camouflage, the stealthiness is defined by a style metric between adversarial example $x^{\prime}$ and a style reference image $x^{s}$ . The style distance between two images can be defined by their differences in style representations:
+
+$$
+\mathcal {D} _ {s} = \sum_ {l \in \mathcal {S} _ {l}} \left\| \mathcal {G} \left(\widetilde {F} _ {l} \left(x ^ {s}\right)\right) - \mathcal {G} \left(\widetilde {F} _ {l} \left(x ^ {\prime}\right)\right) \right\| _ {2} ^ {2}, \tag {3}
+$$
+
+where $\widetilde{F}$ is a feature extractor (such as a public DNN model) that can be different from the target model, and $\mathcal{G}$ is the Gram matrix of deep features extracted at a set of style layers of $\widetilde{F}$ [11]. As different styles can be learned at different layers, we use all convolutional layers of the network as the style layers. To generate a stylized perturbation in the expected area of the target image, we denote the masks that define the attack and non-attack regions by $M$ and $\overline{M}$ respectively. After each step of the generation process, we mask the adversarial image $x'$ by $M$ so that only the attack region is modifiable while the non-attack area remains unchanged ( $x$ masked by $\overline{M}$ ).
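+
+As a concrete illustration of Eq. (3), the sketch below computes a Gram-matrix style distance between two lists of feature maps. The feature extractor and the choice of layers are placeholders (any pretrained CNN could serve), not necessarily the configuration used in the paper.
+
+```python
+import torch
+
+def gram_matrix(feat):
+    """feat: (B, C, H, W) feature map -> (B, C, C) Gram matrix of channel correlations."""
+    b, c, h, w = feat.shape
+    f = feat.reshape(b, c, h * w)
+    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)
+
+def style_loss(feats_style, feats_adv):
+    """Sum of squared Gram-matrix differences over the chosen style layers (Eq. 3)."""
+    loss = 0.0
+    for fs, fa in zip(feats_style, feats_adv):
+        loss = loss + ((gram_matrix(fs) - gram_matrix(fa)) ** 2).sum()
+    return loss
+```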
+
+# 3.2.2 Content loss
+
+The above style loss can craft an adversarial image in the reference style; however, the content of the adversarial image may then appear very different from that of the original image. The content of the original image can be preserved by a content preservation loss:
+
+$$
+\mathcal {L} _ {c} = \sum_ {l \in \mathcal {C} _ {l}} \left\| \widetilde {F} _ {l} (x) - \widetilde {F} _ {l} \left(x ^ {\prime}\right) \right\| _ {2} ^ {2}, \tag {4}
+$$
+
+where $\mathcal{C}_l$ represents the set of content layers used for extracting content representations. This is to ensure that the adversarial image has very similar content to the original image in the deep representation space. We use deeper layers of the feature extractor network as the content layers.
+
+Note that the content loss is optional when the attack only occurs in a small region that does not contain any particular content. However, if the attack region contains semantic content, the content loss can help reduce the semantic difference between the adversarial image and its original version.
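+
+For completeness, a matching sketch of the content loss in Eq. (4): a squared $L_2$ distance between deep features of the original and adversarial images at the chosen content layers. As above, the layer choice is an assumption.
+
+```python
+def content_loss(feats_orig, feats_adv):
+    """Squared L2 distance between deep features at the content layers (Eq. 4)."""
+    loss = 0.0
+    for fo, fa in zip(feats_orig, feats_adv):
+        loss = loss + ((fo - fa) ** 2).sum()
+    return loss
+```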
+
+# 3.2.3 Smoothness loss
+
+The smoothness of the adversarial image can be improved by reducing the variations between adjacent pixels. For an adversarial image $x'$ , the smoothness loss is defined as:
+
+$$
+\mathcal{L}_{m} = \sum_{i, j} \left( \left(x_{i, j}^{\prime} - x_{i + 1, j}^{\prime}\right)^{2} + \left(x_{i, j}^{\prime} - x_{i, j + 1}^{\prime}\right)^{2} \right)^{\frac{1}{2}}, \tag{5}
+$$
+
+where $x_{i,j}^{\prime}$ is the pixel at coordinate $(i,j)$ of image $x^{\prime}$ . Intuitively, this encourages the image to have low-variance (i.e., smooth) local patches. We note that the smoothness loss brings only limited improvement in stealthiness if the surfaces of both the target image and the style image are already smooth. However, we still recommend adding it in the physical setting: as Sharif et al. [23] pointed out, the smoothness term is useful for improving the robustness of adversarial examples in the physical environment.
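+
+A minimal sketch of the smoothness term in Eq. (5), implemented as a total-variation-style penalty over adjacent pixel differences (the small epsilon for numerical stability is an added assumption):
+
+```python
+import torch
+
+def smoothness_loss(x_adv, eps=1e-8):
+    """Eq. (5): square root of summed squared differences between each pixel
+    and its bottom / right neighbours, summed over the image."""
+    dh = x_adv[..., 1:, :] - x_adv[..., :-1, :]    # differences along rows (i direction)
+    dw = x_adv[..., :, 1:] - x_adv[..., :, :-1]    # differences along columns (j direction)
+    dh = dh[..., :, :-1]                           # crop so both maps cover the same pixels
+    dw = dw[..., :-1, :]
+    return torch.sqrt(dh ** 2 + dw ** 2 + eps).sum()
+```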
+
+# 3.2.4 Adversarial loss
+
+For the adversarial loss, we use the following cross-entropy loss:
+
+$$
+\mathcal{L}_{adv} = \left\{ \begin{array}{ll} -\log \left(p_{y_{adv}}\left(x^{\prime}\right)\right), & \text{for targeted attack} \\ \log \left(p_{y}\left(x^{\prime}\right)\right), & \text{for untargeted attack}, \end{array} \right. \tag{6}
+$$
+
+where $p_{y_{adv}}()$ is the probability output (softmax on logits) of the target model $F$ with respect to class $y_{adv}$ . We note that the proposed camouflage attack is not restricted to the particular form of the adversarial loss, and can be used in combination with existing attacking methods.
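+
+A sketch of the adversarial term in Eq. (6), written so that the overall objective is minimized; the classifier is assumed to output logits and the helper names are illustrative.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def adversarial_loss(logits, y, y_target=None):
+    """Adversarial term of Eq. (6), to be minimized together with the camouflage
+    losses. Targeted: raise p(y_target); untargeted: lower p(y)."""
+    log_probs = F.log_softmax(logits, dim=1)
+    idx = torch.arange(logits.shape[0])
+    if y_target is not None:                    # targeted attack
+        return -log_probs[idx, y_target].sum()
+    return log_probs[idx, y].sum()              # untargeted attack
+```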
+
+# 3.3. Adaptation for Physical-world Conditions
+
+To make the adversarial examples generated by AdvCam physically realizable, we model physical conditions in the process of generating camouflaged adversarial examples. As the physical-world environment often involves condition fluctuations such as viewpoint shifts, camera noise, and other natural transformations [1], we use a series of adaptations to accommodate such varying conditions. In particular, we adopt a technique similar to Expectation Over Transformation (EOT) [1], but without the expectation. Xie et al. [31] also adapted EOT, in their case to improve the transferability of adversarial examples; in contrast, we aim to improve the adaptation of adversarial examples to various physical conditions. Thus we consider transformations that simulate physical-world condition fluctuations, including rotation, scale change, color shift (to simulate lighting change) and random backgrounds:
+
+$$
+\min_{x^{\prime}} \left( \left(\mathcal{L}_{s} + \mathcal{L}_{c} + \mathcal{L}_{m}\right) + \max_{T \in \mathcal{T}} \lambda \cdot \mathcal{L}_{adv}\left(o + T\left(x^{\prime}\right)\right) \right), \tag{7}
+$$
+
+where $o$ represents a random background image that we sample in the physical world, and $T$ represents a random transformation of rotation, resize and color shift.
+
+In principle, "vision" is the main sense of perception for both human observers and DNN models. By adjusting the style reference image $x^{s}$ according to the original image $x$ and the background image $o$ , the proposed camouflage attack can craft highly stealthy adversarial examples that can deceive both human observers and DNN models.
+
+# 4. Experimental Evaluation
+
+In this section, we first outline the experimental setup. We then analyze our AdvCam attack via an ablation study. Afterwards, we evaluate the camouflage performance of AdvCam through a human perception study for digital attacks, and also present several highly stealthy adversarial examples crafted by AdvCam. Finally, we perform physical-world attacks.
+
+# 4.1. Experimental Setup
+
+# 4.1.1 Baseline attacks
+
+We compare our AdvCam attack with two existing representative methods: PGD [22] and adversarial patch (AdvPatch) [3]. PGD represents digital attacks and is the strongest first-order attack, while AdvPatch represents unrestricted attacks that can be directly applied in the physical-world setting. Some other physical-world attacks such as $RP_{2}$ require case-by-case manual design and are thus limited for mass production. We compare the methods in terms of attack success rate and visual effect. For the AdvCam attack, we use the same layers of the network as in [19] to extract style and content (where necessary) representations.
+
+# 4.1.2 Threat model
+
+We test both targeted and untargeted attacks in both digital and physical-world settings. The threat model adopts a gray-box setting: the source and target networks are both VGG-19 networks but were trained separately on ImageNet. For a physical-world test, we first use a Google Pixel 2 smartphone to take a photo of the target object, then craft an adversarial image in the digital setting by attacking the source network, after which we print out the adversarial image to replace, or place it on, the target object; we then re-photograph the object using the same smartphone from various viewing angles and distances. The physical-world attack success rate is measured by the prediction accuracy of the target network on the re-photographed adversarial images.
+
+# 4.2. Ablation Study
+
+Here, we conduct a series of experiments on ImageNet to analyze the following aspects of our AdvCam attack: 1) shape and location of camouflage region, 2) camouflage losses (e.g. style loss, content loss and smoothness loss), and 3) adversarial strength parameter $\lambda$ and region size.
+
+# 4.2.1 Camouflage region: shape and location
+
+Here, we show how the selection of the camouflage region, in terms of shape and location, impacts the adversarial strength of the crafted adversarial example. Given a selected attack region of a certain shape and size, we increase the strength parameter $\lambda$ from 1000 to 10000 in intervals of 1000 until the attack succeeds. The range is selected according to extensive experiments: [1000, 10000] is an effective range for finding adversarial examples with both high adversarial strength and stealthiness. Figure 4 shows camouflaged adversarial examples crafted with different regions. We find that whether the camouflage region is at the center of the target object or away from it, attacks can succeed with high confidence. We will show in the last part of this subsection that attacks can even be camouflaged into an area off the target object, secretly hidden in the background.
+
+Figure 4: Ablation of region shape and size, on two targeted attacks: "backpack" $\rightarrow$ "tank" (top row) and "poncho" $\rightarrow$ "traffic light". (a): original clean image with intended style (bottom left corner). (b) - (e): left: selected attack region, right: crafted camouflage adversarial example with top-1 prediction ("predicted class, confidence") given at the bottom of the image.
+
+Figure 5: Ablation of the 3 camouflage losses: (a) original images with the intended camouflage style at the bottom right corner; (b) $\mathcal{L}_s$; (c) $\mathcal{L}_s + \mathcal{L}_c$; (d) all three losses.
+
+# 4.2.2 Camouflage losses $(\mathcal{L}_s,\mathcal{L}_c,\mathcal{L}_m)$
+
+Figure 5 illustrates three groups of adversarial examples camouflaged with or without the two optional enhancements (content preservation $\mathcal{L}_c$ and smoothness enhancement $\mathcal{L}_m$ ). When incorporating an enhancement, its loss function is directly added to the final objective following Eq. (2) ( $\lambda$ for $\mathcal{L}_{adv}$ was set to 2000). As can be observed, content preservation helps preserve the original content, as in the "traffic sign" example (third column), while the smoothness enhancement helps produce a smooth object surface. These enhancements are optional because they improve the visual appearance only slightly for some adversarial examples, for example, content preservation in the "table lamp" example (third row) or smoothness enhancement in all examples.
+
+# 4.2.3 Adversarial strength $(\lambda)$ and region size
+
+We craft both targeted and untargeted camouflage attacks on 2000 randomly selected ImageNet test images from 50 categories with varying $\lambda \in [1000, 10000]$. For targeted attacks, the target class is randomly selected and is different from the true class. To also test the influence of $\lambda$ under different attack regions, we further vary the size of the region from $40 \times 40$ to $120 \times 120$.
+
+Figure 6: Ablation of adversarial strength $\lambda$ and region size: success rate of untargeted (left) and targeted attack (right).
+
+Figure 6 illustrates the top-1/5 success rates, which are measured by whether the target class is in the top-1/5 predicted classes. As shown in the figure, when the region is fixed, a larger adversarial strength $\lambda$ can increase the success rate by up to $20\%$; and when $\lambda$ is fixed, a larger attack region can improve the success rate by up to $40\%$. Compared to targeted attacks, untargeted attacks succeed much more easily. Between top-1 and top-5 success rates, top-1 targeted attacks are more difficult to achieve (solid lines are lower than dashed lines). The standard errors with respect to different adversarial strengths and region sizes are between $0.07\%$ and $1.13\%$. Note that in these experiments, the camouflage styles and locations of the attack region are selected randomly, which may decrease the camouflage effect and success rate. However, this does not affect the conclusion that a larger $\lambda$ and region size help craft stronger attacks.
+
+# 4.3. Digital Attacks
+
+# 4.3.1 Attack setting
+
+We randomly select 150 clean images from 5 categories of the ImageNet ILSVRC2012 test set. We then apply the three methods (PGD, AdvPatch and our AdvCam) to craft a targeted adversarial example for each clean image. The pairs of selected source and target classes are: "ashcan" $\rightarrow$ "robin", "backpack" $\rightarrow$ "water jug", "cannon" $\rightarrow$ "forklift", "jersey" $\rightarrow$ "horizontal bar", "mailbox" $\rightarrow$ "entertainment center". For PGD and AdvCam, we attack the main object region obtained via manual selection, while for AdvPatch, we further select a circular attack area inside the object region. For PGD, we use a maximum perturbation of $\epsilon = 16 / 255$ (denoted as PGD-16). For AdvCam, we randomly select a second image from the same category as the style image, and gradually increase $\lambda$ from 1000 to 10000 until an adversarial example is found. For fair comparison, we filter out the failed adversarial examples. Finally, we collect 132, 101 and 122 adversarial examples for PGD, AdvPatch and AdvCam respectively. Figure 7 shows a few of the crafted adversarial examples from the three methods that we used to perform the human perception study.
+
+Figure 7: (a) The original images and adversarial images crafted by (b) PGD-16, (c) AdvPatch and (d) our AdvCam attack.
+
+# 4.3.2 Human perception study results
+
+We set up a human perception study on Amazon Mechanical Turk (AMT) asking human evaluators to choose whether a shown image is "natural and realistic" or "not natural or realistic". To simulate adversarial examples in a real-world scenario, we present users with the three types of adversarial images in random order and individually rather than in pairs. In total, we collected 1953 selections from 130 participants. AdvPatch was chosen as "natural and realistic" $19.0 \pm 1.68\%$ of the time, PGD was chosen $77.3 \pm 1.53\%$ of the time, while our AdvCam was chosen $80.7 \pm 1.53\%$ of the time. We summarize these statistics as stealthiness scores for the three methods and show them in Figure 8. This confirms that our proposed AdvCam attack is capable of crafting adversarial examples that are as stealthy as those of the small-perturbation PGD-16 method, even though the perturbations of AdvCam attacks are unrestricted in size.
+
+
+Figure 8: Stealthiness of AdvPatch, PGD-16 and AdvCam.
+
+# 4.3.3 Customized examples
+
+Here, we show how AdvCam can craft extremely stealthy camouflages, especially off-target ones. Figure 9 illustrates a few such examples. The first row shows on-target camouflage examples, and the second row shows off-target camouflages, which are crafted by attacking a carefully chosen background area. For the on-target ones, AdvCam generates natural and stealthy perturbations on the surface of the target objects. For the off-target ones, in the first example (left two images in the second row), we hide the attack in the price tag to fool a VGG-19 classifier into misclassifying a revolver as a toilet tissue. In the second example (middle two images in the second row), AdvCam successfully camouflages a blenheim spaniel into a bearskin by adding flowers in the background. In the third example (right two images in the second row), we camouflage the attack into the wall posters in the background, which causes a minivan parked in front of the wall to be misrecognized as a traffic light. These examples not only demonstrate the stealthiness and flexibility of our AdvCam attack, but also indicate that threats to deep learning systems are ubiquitous and, in many cases, may hardly be noticeable even to human observers.
+
+# 4.4. Physical-world Attacks
+
+We further design three physical-world attacking scenarios to test the camouflage power of our AdvCam attack. We also perform AdvPatch and PGD attacks for comparison. For the PGD attack, we test $\epsilon \in \{16/255, 32/255, 64/255, 128/255\}$ and show successful adversarial examples with the smallest $\epsilon$. We print out the adversarial patterns on A3 or A4 paper, then take 20 photos at various viewing angles and distances.
+
+The first scenario is to camouflage a wild pattern into a street sign, which could cause problems for self-driving cars. The top row in Figure 10 illustrates some successful patterns crafted by PGD-128 ( $\epsilon = 128/255$ ), AdvPatch and AdvCam. As can be seen, the attack is perfectly camouflaged by AdvCam into the texture of the tree. Although PGD is highly stealthy in the digital setting, it requires a large perturbation ( $\epsilon = 128/255$ ) in the physical environment, so its adversarial pattern is much less stealthy than that of AdvCam, as is AdvPatch's. The second scenario is to protect the identity of a person wearing a jersey. We simulate such a
+scenario by attacking the jersey using a camouflaged fashion logo "pikachu" (see the bottom row in Figure 10). All three attacks perform the "jersey" to "Irish terrier" attack. Note that PGD failed the attack even with the largest perturbation tested. This shows the high flexibility of AdvCam with customized camouflage styles, providing a flexible means of achieving stealthiness in various attacking scenarios.
+
+Figure 9: Camouflaged adversarial images crafted by our AdvCam attack and their original versions.
+
+Figure 10: Top: adversarial wood texture recognized as a street sign. Bottom: adversarial logo on a t-shirt. Panels: (a) AdvPatch, (b) PGD-128, (c) AdvCam.
+
+Figure 11: Adversarial traffic sign with 3 styles of stains.
+
+We also perform a "street sign" to "barbershop" attack using AdvCam with three different natural styles (see Figure
+
+11). The patterns of AdvCam are smooth and natural that can hardly be detected by human observers, but deceive the classifier with high confidence successfully. To summarize, with high stealthiness of adversaries generated by AdvCam, it poses ubiquitous threats for current DNNs-based systems. Thus AdvCam can be a useful tool to evaluate the robustness of DNNs employed in the physical-world.
+
+# 5. Conclusion and Future Work
+
+In this paper, we have investigated the stealthiness of adversarial examples and proposed a novel approach called adversarial camouflage (AdvCam), which combines neural style transfer and adversarial attack techniques to craft and camouflage adversarial examples into stealthy, natural-looking styles. AdvCam is a flexible approach that can help craft stealthy attacks for robustness evaluation of DNN models. Beyond attacks, the proposed AdvCam can also serve as a meaningful camouflage technique to protect objects or humans from being detected by both human observers and DNN-based systems.
+
+The proposed AdvCam currently still requires the attacker to manually specify the attack region and target style; we plan to explore semantic segmentation techniques to automate this in future work. We will also explore applying AdvCam to other computer vision tasks, including object detection and segmentation. Moreover, effective defense strategies against camouflaged attacks will be another crucial and promising direction.
+
+# Acknowledgement
+
+Yun Yang is supported by Australian Research Council Discovery Project under Grant DP180100212. We are also grateful for early stage discussions with Dr Sheng Wen from Swinburne University of Technology.
+
+# References
+
+[1] Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok. Synthesizing robust adversarial examples. In ICLR, 2017. 1, 3, 5
+[2] Yang Bai, Yan Feng, Yisen Wang, Tao Dai, Shu-Tao Xia, and Yong Jiang. Hilbert-based generative defense for adversarial examples. In ICCV, 2019. 1
+[3] Tom B Brown, Dandelion Mane, Aurko Roy, Martin Abadi, and Justin Gilmer. Adversarial patch. In NIPS Workshop, 2017. 2, 3, 5
+[4] Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In IEEE S&P, 2017. 1, 2
+[5] Alex J Champandard. Semantic style transfer and turning two-bit doodles into fine artworks. In ICLR, 2016. 3
+[6] Chenyi Chen, Ari Seff, Alain Kornhauser, and Jianxiong Xiao. Deepdriving: Learning affordance for direct perception in autonomous driving. In ICCV, 2015. 1
+[7] Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, and Cho-Jui Hsieh. Ead: elastic-net attacks to deep neural networks via adversarial examples. In AAAI, 2018. 2
+[8] Yinpeng Dong, Hang Su, Baoyuan Wu, Zhifeng Li, Wei Liu, Tong Zhang, and Jun Zhu. Efficient decision-based black-box adversarial attacks on face recognition. In CVPR, pages 7714-7722, 2019. 1
+[9] Alexei A Efros and William T Freeman. Image quilting for texture synthesis and transfer. In PACMCGIT, 2001. 3
+[10] Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li, Atul Prakash, Amir Rahmati, and Dawn Song. Robust physical-world attacks on deep learning models. In CVPR, 2018. 1, 2, 3
+[11] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016. 3, 4
+[12] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2014. 1, 2
+[13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016. 1
+[14] Hossein Hosseini and Radha Poovendran. Semantic adversarial examples. In CVPR Workshop, 2018. 3
+[15] Linxi Jiang, Xingjun Ma, Shaoxiang Chen, James Bailey, and Yu-Gang Jiang. Black-box adversarial attacks on video recognition models. In ACM MM, 2019. 1
+[16] Sergey Karayev, Matthew Trentacoste, Helen Han, Aseem Agarwala, Trevor Darrell, Aaron Hertzmann, and Holger Winnemoeller. Recognizing image style. arXiv preprint arXiv:1311.3715, 2013. 2
+[17] Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. In ICLR, 2016. 1, 2, 3
+[18] Hsueh-Ti Derek Liu, Michael Tao, Chun-Liang Li, Derek Nowrouzezahrai, and Alec Jacobson. Beyond pixel normballs: Parametric adversaries using an analytically differentiable renderer. In ICLR, 2018. 3
+[19] Fujun Luan, Sylvain Paris, Eli Shechtman, and Kavita Bala. Deep photo style transfer. In CVPR, 2017. 3, 5
+
+[20] Xingjun Ma, Bo Li, Yisen Wang, Sarah M Erfani, Sudanthi Wijewickrema, Grant Schoenebeck, Dawn Song, Michael E Houle, and James Bailey. Characterizing adversarial subspaces using local intrinsic dimensionality. In ICLR, 2018. 1
+[21] Xingjun Ma, Yuhao Niu, Lin Gu, Yisen Wang, Yitian Zhao, James Bailey, and Feng Lu. Understanding adversarial attacks on deep learning based medical image analysis systems. In arXiv:1907.10456, 2019. 1
+[22] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In ICLR, 2018. 1, 2, 5
+[23] Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K Reiter. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In CCS, 2016. 1, 3, 4
+[24] Yang Song, Rui Shu, Nate Kushman, and Stefano Ermon. Constructing unrestricted adversarial examples with generative models. In NIPS, 2018. 3
+[25] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In ICLR, 2013. 1, 2
+[26] Yisen Wang, Xuejiao Deng, Songbai Pu, and Zhiheng Huang. Residual convolutional ctc networks for automatic speech recognition. arXiv preprint arXiv:1702.07793, 2017. 1
+[27] Yisen Wang, Xingjun Ma, James Bailey, Jinfeng Yi, Bowen Zhou, and Quanquan Gu. On the convergence and robustness of adversarial training. In ICML, 2019. 1
+[28] Yisen Wang, Difan Zou, Jinfeng Yi, James Bailey, Xingjun Ma, and Quanquan Gu. Improving adversarial robustness requires revisiting misclassified examples. In ICLR, 2020. 1
+[29] Dongxian Wu, Yisen Wang, Shu-Tao Xia, James Bailey, and Xingjun Ma. Skip connections matter: On the transferability of adversarial examples generated with resnets. In ICLR, 2020. 2
+[30] Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, and Dawn Song. Generating adversarial examples with adversarial networks. In IJCAI, 2018. 1, 3
+[31] Cihang Xie, Zhishuai Zhang, Yuyin Zhou, Song Bai, Jianyu Wang, Zhou Ren, and Alan L Yuille. Improving transferability of adversarial examples with input diversity. In CVPR, 2019. 5
+[32] Kaidi Xu, Gaoyuan Zhang, Sijia Liu, Quanfu Fan, Mengshu Sun, Hongge Chen, Pin-Yu Chen, Yanzhi Wang, and Xue Lin. Adversarial t-shirt! evading person detectors in a physical world. arXiv preprint arXiv:1910.11099, 2019. 3
+[33] Min Zeng, Yisen Wang, and Yuan Luo. Dirichlet latent variable hierarchical recurrent encoder-decoder in dialogue generation. In EMNLP, 2019. 1
+[34] Xiaohui Zeng, Chenxi Liu, Yu-Siang Wang, Weichao Qiu, Lingxi Xie, Yu-Wing Tai, Chi-Keung Tang, and Alan L Yuille. Adversarial attacks beyond the image space. In CVPR, 2019. 3
+[35] Yang Zhang, Hassan Foroosh, Philip David, and Boqing Gong. Camou: Learning physical vehicle camouflages to adversarially attack detectors in the wild. In ICLR, 2019. 3
\ No newline at end of file
diff --git a/adversarialcamouflagehidingphysicalworldattackswithnaturalstyles/images.zip b/adversarialcamouflagehidingphysicalworldattackswithnaturalstyles/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..56ca08c65bc2d4c56f8f279e3015d64fa16b1901
--- /dev/null
+++ b/adversarialcamouflagehidingphysicalworldattackswithnaturalstyles/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:59cf69b0849b5b7a49a7757ba6cca849447d192d5f4d3a5969869fbd837e0899
+size 586482
diff --git a/adversarialcamouflagehidingphysicalworldattackswithnaturalstyles/layout.json b/adversarialcamouflagehidingphysicalworldattackswithnaturalstyles/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a15e977cccef4778535d0403d34c872d594a4b54
--- /dev/null
+++ b/adversarialcamouflagehidingphysicalworldattackswithnaturalstyles/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3b787973e48f26e0dfc7b4bce2f7df994b0b0ea8252b927f7fe766893432bc0a
+size 437858
diff --git a/adversarialexamplesimproveimagerecognition/3dc146ac-4ab9-4927-9e98-585e6d50de0a_content_list.json b/adversarialexamplesimproveimagerecognition/3dc146ac-4ab9-4927-9e98-585e6d50de0a_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..494ccbd2da83b97d1cf9a2279628f3fbd3888dde
--- /dev/null
+++ b/adversarialexamplesimproveimagerecognition/3dc146ac-4ab9-4927-9e98-585e6d50de0a_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a16c7406b355eb173a080a2ea775fa2a679daf10cfb28a4abfbd644ffb69ed6a
+size 82090
diff --git a/adversarialexamplesimproveimagerecognition/3dc146ac-4ab9-4927-9e98-585e6d50de0a_model.json b/adversarialexamplesimproveimagerecognition/3dc146ac-4ab9-4927-9e98-585e6d50de0a_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..2a2fb5898233303c25e7d203166e30d6a2a3421b
--- /dev/null
+++ b/adversarialexamplesimproveimagerecognition/3dc146ac-4ab9-4927-9e98-585e6d50de0a_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:18084881cf3efd392d61100741d914785a8337e48da8ec1d355e35a4ae2d5faa
+size 101686
diff --git a/adversarialexamplesimproveimagerecognition/3dc146ac-4ab9-4927-9e98-585e6d50de0a_origin.pdf b/adversarialexamplesimproveimagerecognition/3dc146ac-4ab9-4927-9e98-585e6d50de0a_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..4f10b47945549e04a6ffb0d93d90f185f1505260
--- /dev/null
+++ b/adversarialexamplesimproveimagerecognition/3dc146ac-4ab9-4927-9e98-585e6d50de0a_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0e2b63764ddf98604bc68de1d074d8ead5b16016cfd791d6f5836e925ad47b27
+size 434225
diff --git a/adversarialexamplesimproveimagerecognition/full.md b/adversarialexamplesimproveimagerecognition/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..fed31247f47f5a1adbceb4fa614dfc8091f8ab3e
--- /dev/null
+++ b/adversarialexamplesimproveimagerecognition/full.md
@@ -0,0 +1,333 @@
+# Adversarial Examples Improve Image Recognition
+
+Cihang Xie $^{1,2*}$ Mingxing Tan $^{1}$ Boqing Gong $^{1}$ Jiang Wang $^{1}$ Alan Yuille $^{2}$ Quoc V. Le $^{1}$ $^{1}$ Google $^{2}$ Johns Hopkins University
+
+# Abstract
+
+Adversarial examples are commonly viewed as a threat to ConvNets. Here we present an opposite perspective: adversarial examples can be used to improve image recognition models if harnessed in the right manner. We propose AdvProp, an enhanced adversarial training scheme which treats adversarial examples as additional examples in order to prevent overfitting. Key to our method is the use of a separate auxiliary batch norm for adversarial examples, as they have different underlying distributions from normal examples.
+
+We show that AdvProp improves a wide range of models on various image recognition tasks and performs better when the models are bigger. For instance, by applying AdvProp to the latest EfficientNet-B7 [41] on ImageNet, we achieve significant improvements on ImageNet (+0.7%), ImageNet-C (+6.5%), ImageNet-A (+7.0%) and Stylized-ImageNet (+4.8%). With an enhanced EfficientNet-B8, our method achieves the state-of-the-art 85.5% ImageNet top-1 accuracy without extra data. This result even surpasses the best model in [24] which is trained with 3.5B Instagram images (~3000× more than ImageNet) and ~9.4× more parameters. Models are available at https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet.
+
+# 1. Introduction
+
+Adversarial examples crafted by adding imperceptible perturbations to images, can lead Convolutional Neural Networks (ConvNets) to make wrong predictions. The existence of adversarial examples not only reveals the limited generalization ability of ConvNets, but also poses security threats on the real-world deployment of these models. Since the first discovery of the vulnerability of ConvNets to adversarial attacks [40], many efforts [2, 7, 15, 16, 18, 23, 29, 36, 42, 45, 50] have been made to improve network robustness.
+
+In this paper, rather than focusing on defending against adversarial examples, we shift our attention to leveraging adversarial examples to improve accuracy. Previous works show that training with adversarial examples can
+
+
+Figure 1. AdvProp improves image recognition. By training models on ImageNet, AdvProp helps EfficientNet-B7 [41] to achieve $85.2\%$ accuracy on ImageNet [33], $52.9\%$ mCE (mean corruption error, lower is better) on ImageNet-C [9], $44.7\%$ accuracy on ImageNet-A [10] and $26.6\%$ accuracy on Stylized-ImageNet [6], beating its vanilla counterpart by $0.7\%$ , $6.5\%$ , $7.0\%$ and $4.8\%$ , respectively. These sample images are randomly selected from the category "goldfinch".
+
+
+
+enhance model generalization but are restricted to certain situations—the improvement is only observed either on small datasets (e.g., MNIST) in the fully-supervised setting [7, 20], or on larger datasets but in the semi-supervised setting [26, 30]. Meanwhile, recent works [18, 16, 45] also suggest that training with adversarial examples on large datasets, e.g., ImageNet [33], with supervised learning results in performance degradation on clean images. To summarize, it remains an open question of how adversarial examples can be used effectively to help vision models.
+
+We observe that all previous methods jointly train on clean images and adversarial examples without distinction, even though the two should be drawn from different underlying distributions. We hypothesize that this distribution mismatch between clean examples and adversarial examples is a key factor causing the performance degradation observed in previous works [16, 18, 45].
+
+In this paper, we propose AdvProp, short for Adversarial Propagation, a new training scheme that bridges the distribution mismatch with a simple yet highly effective two-batchnorm approach. Specifically, we propose to use two batch norm statistics, one for clean images and one auxiliary for adversarial examples. The two batchnorms properly disentangle the two distributions at normalization layers for accurate statistics estimation. We show this distribution disentangling is crucial, enabling us to successfully improve, rather than degrade, model performance with adversarial examples.
+
+To our best knowledge, our work is the first to show adversarial examples can improve model performance in the fully-supervised setting on the large-scale ImageNet dataset. For example, an EfficientNet-B7 [41] trained with AdvProp achieves $85.2\%$ top-1 accuracy, beating its vanilla counterpart by $0.8\%$ . The improvement by AdvProp is more notable when testing models on distorted images. As shown in Fig. 1, AdvProp helps EfficientNet-B7 to gain an absolute improvement of $9.0\%$ , $7.0\%$ and $5.0\%$ on ImageNet-C [9], ImageNet-A [10] and Stylized-ImageNet [6], respectively.
+
+As AdvProp effectively prevents overfitting and performs better with larger networks, we develop a larger network, named EfficientNet-B8, by following similar compound scaling rules in [41]. With our proposed AdvProp, EfficientNet-B8 achieves the state-of-the-art $85.5\%$ top-1 accuracy on ImageNet without any extra data. This result even surpasses the best model reported in [24], which is pretrained on 3.5B extra Instagram images ( $\sim 3000 \times$ more than ImageNet) and requires $\sim 9.4 \times$ more parameters than our EfficientNet-B8.
+
+# 2. Related Work
+
+Adversarial Training. Adversarial training, which trains networks with adversarial examples, constitutes the current foundation of state-of-the-art methods for defending against adversarial attacks [7, 18, 23, 45]. Although adversarial training significantly improves model robustness, how to improve clean image accuracy with adversarial training is still under-explored. VAT [26] and deep co-training [30] attempt to utilize adversarial examples in semi-supervised settings, but they require enormous amounts of extra unlabeled images. Under supervised learning settings, adversarial training is typically considered to hurt accuracy on clean images [32], e.g., a $\sim 10\%$ drop on CIFAR-10 [23] and a $\sim 15\%$ drop on ImageNet [45]. Tsipras et al. [43] argue that the performance tradeoff between adversarial robustness and standard accuracy is provably inevitable, and attribute this phenomenon to robust classifiers learning fundamentally different feature representations than standard classifiers. Other works try to explain this tradeoff from the perspective of the increased sample complexity of adversarially robust learning [37, 25, 28], the limited amount of training data [1, 27, 34, 44, 48], or network overparameterization [31].
+
+This paper focuses on standard supervised learning without extra data. Although using similar adversarial training techniques, we stand on an opposite perspective to previous works—we aim at using adversarial examples to improve clean image recognition accuracy.
+
+Benefits of Learning Adversarial Features. Many works corroborate that training with adversarial examples brings additional features to ConvNets. For example, compared with clean images, adversarial examples make network representations align better with salient data characteristics and human perception [43]. Moreover, such trained models are much more robust to high-frequency noise [47]. Zhang et al. [51] further suggest that these adversarially learned feature representations are less sensitive to texture distortions and focus more on shape information.
+
+Our proposed AdvProp can be characterized as a training paradigm which fully exploits the complementarity between clean images and their corresponding adversarial examples. The results further suggest that adversarial features are indeed beneficial for recognition models, which agree with the conclusions drawn from these aforementioned studies.
+
+Data augmentation. Data augmentation, which applies a set of label-preserving transformations to images, plays an important and effective role in preventing networks from overfitting [17, 35, 8]. Besides traditional methods like horizontal flipping and random cropping, different augmentation techniques have been proposed, e.g., masking out regions of an image [5], adding Gaussian noise to image regions [22], or mixing up pairs of images and their labels in a convex manner [49]. Recent works also demonstrate that it is possible to learn data augmentation policies automatically to achieve better performance on image classification [3, 4, 19, 21, 52] and object detection [53, 4].
+
+Our work can be regarded as one type of data augmentation: creating additional training samples by injecting noise. However, all previous attempts, by augmenting either with random noise (e.g., Tab. 5 in [18] shows the result of training with random normal perturbations) or adversarial noise [16, 18, 42], fail to improve accuracy on clean images.
+
+# 3. A Preliminary Way to Boost Performance
+
+Madry et al. [23] formulate adversarial training as a min-max game and train models exclusively on adversarial examples to effectively boost model robustness. However, such trained models usually cannot generalize well to clean images, as shown in [23, 45]. We validate this result by training a medium-scale model (EfficientNet-B3) and a large-scale model (EfficientNet-B7) on ImageNet using the PGD attacker [23]; both adversarially trained models obtain much lower accuracy on clean images compared to their vanilla counterparts. For instance, the adversarially trained EfficientNet-B3 only obtains an accuracy of $78.2\%$ on clean images, whereas the vanilla trained EfficientNet-B3 achieves $81.7\%$ (see Fig. 2).
+
+Figure 2. Two take-home messages from the experiments on ImageNet: (1) training exclusively on adversarial examples results in performance degradation; and (2) simply training with adversarial examples and clean images in turn can improve network performance on clean images. Fine-tuning details: we train networks with adversarial examples in the first 175 epochs, and then fine-tune with clean images in the remaining epochs.
+
+We hypothesize that such performance degradation is mainly caused by a distribution mismatch: adversarial examples and clean images are drawn from two different domains, so training exclusively on one domain does not transfer well to the other. If this distribution mismatch can be properly bridged, then performance degradation on clean images should be mitigated even if adversarial examples are used for training. To validate our hypothesis, we examine a simple strategy: pre-train networks with adversarial examples first, and then fine-tune with clean images.
+
+The results are summarized in Fig. 2. As expected, this simple fine-tuning strategy (marked in light orange) always yields much higher accuracy than Madry's adversarial training baseline (marked in grey), e.g., it increases accuracy by $3.3\%$ for EfficientNet-B3. Interestingly, when compared to the standard vanilla training setting where only clean images are used (marked in blue), this fine-tuning strategy sometimes even helps networks achieve superior performance, e.g., it increases EfficientNet-B7 accuracy by $0.3\%$ , achieving $84.8\%$ top-1 accuracy on ImageNet.
+
+The observation above delivers a promising signal: adversarial examples can be beneficial for model performance if harnessed properly. Nonetheless, we note that this approach fails to improve performance in general, e.g., although the resulting EfficientNet-B3 significantly outperforms Madry's adversarial training baseline, it is still slightly below $(-0.2\%)$ the vanilla training setting. Therefore, a natural question arises: is it possible to distill valuable features from adversarial examples in a more effective manner and boost model performance more generally?
+
+# 4. Methodology
+
+The results in Sec. 3 suggest that properly integrating information from both adversarial examples and clean images, even in a simple manner, improves model performance. However, such a fine-tuning strategy may partially override features learned from adversarial examples, leading to a sub-optimal solution. To address this issue, we propose a more elegant approach, named AdvProp, to jointly learn from clean images and adversarial examples. Our method handles the distribution mismatch by explicitly decoupling the batch statistics of the normalization layers, thereby enabling better absorption of both adversarial and clean features. In this section, we first revisit the adversarial training regime in Sec. 4.1, and then introduce how to enable disentangled learning for a mixture of distributions via auxiliary BNs in Sec. 4.2. Finally, we summarize the training and testing pipeline in Sec. 4.3.
+
+# 4.1. Adversarial Training
+
+We first recall the vanilla training setting, and the objective function is
+
+$$
+\underset {\theta} {\arg \min } \mathbb {E} _ {(x, y) \sim \mathbb {D}} \left[ L (\theta , x, y) \right], \tag {1}
+$$
+
+where $\mathbb{D}$ is the underlying data distribution, $L(\cdot ,\cdot ,\cdot)$ is the loss function, $\theta$ is the network parameter, and $x$ is a training sample with ground-truth label $y$ .
+
+Madry's adversarial training framework [23], instead of training with the original samples, trains networks with maliciously perturbed samples,
+
+$$
+\underset {\theta} {\arg \min } \mathbb {E} _ {(x, y) \sim \mathbb {D}} \left[ \underset {\epsilon \in \mathbb {S}} {\max } L (\theta , x + \epsilon , y) \right], \tag {2}
+$$
+
+where $\epsilon$ is an adversarial perturbation and $\mathbb{S}$ is the allowed perturbation range. Though such trained models have several nice properties, as described in [51, 47, 43], they cannot generalize well to clean images [23, 45].
+
+Unlike Madry's adversarial training, our main goal is to improve network performance on clean images by leveraging the regularization power of adversarial examples. Therefore we treat adversarial images as additional training samples and train networks with a mixture of adversarial examples and clean images, as suggested in [7, 18],
+
+$$
+\underset {\theta} {\arg \min } \left[ \mathbb {E} _ {(x, y) \sim \mathbb {D}} \left(L (\theta , x, y) + \underset {\epsilon \in \mathbb {S}} {\max } L (\theta , x + \epsilon , y)\right) \right]. \tag {3}
+$$
+
+Ideally, such trained models should enjoy the benefits from both adversarial and clean domains. However, as observed in former studies [7, 18], directly optimizing Eq. (3) generally yields lower performance than the vanilla training setting on clean images. We hypothesize that the distribution mismatch between adversarial examples and clean images prevents networks from accurately and effectively distilling valuable features from both domains. Next, we will introduce how to properly disentangle different distributions via our auxiliary batch norm design.
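+
+As a point of reference, here is a minimal sketch of the mixed objective in Eq. (3), where a single network (and hence a single set of BN statistics) processes both the clean and the adversarial mini-batch; the `attacker` callable is a placeholder for any inner-maximization method such as PGD.
+
+```python
+import torch.nn.functional as F
+
+def mixed_training_step(model, x_clean, y, attacker, optimizer):
+    """One step of Eq. (3): clean loss plus adversarial loss through the same BNs."""
+    x_adv = attacker(model, x_clean, y)           # approximate the inner maximization
+    loss = F.cross_entropy(model(x_clean), y) + F.cross_entropy(model(x_adv), y)
+    optimizer.zero_grad()
+    loss.backward()
+    optimizer.step()
+    return loss.item()
+```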
+
+# 4.2. Disentangled Learning via An Auxiliary BN
+
+Batch normalization (BN) [14] serves as an essential component for many state-of-the-art computer vision models [8, 12, 39]. Specifically, BN normalizes input features by the mean and variance computed within each mini-batch. One intrinsic assumption of utilizing BN is that the input features should come from a single distribution or from similar distributions. This normalization behavior can be problematic if the mini-batch contains data from different distributions, resulting in inaccurate statistics estimation.
+
+
+Figure 3. Comparison between (a) traditional BN usage and (b) the utilization of auxiliary BN. The left and right panels illustrate the information flow in the corresponding network architectures and the estimated normalization statistics when facing a mixture of adversarial and clean images, respectively.
+
+We argue that adversarial examples and clean images have different underlying distributions, and the adversarial training framework in Eq. (3) essentially involves a two-component mixture distribution. To disentangle this mixture distribution into two simpler ones, respectively for the clean and adversarial images, we hereby propose an auxiliary BN whose normalization statistics are exclusively computed on the adversarial examples. Specifically, as illustrated in Fig. 3(b), our proposed auxiliary BN helps to disentangle the mixed distributions by keeping separate BNs for features that belong to different domains. Otherwise, as illustrated in Fig. 3(a), simply maintaining one set of BN statistics results in incorrect statistics estimation, which could possibly lead to performance degradation.
+
+Note that we can generalize this concept to multiple auxiliary BNs, where the number of auxiliary BNs is determined by the number of training sample sources. For example, if the training data contains clean images, distorted images and adversarial images, then two auxiliary BNs should be maintained. Ablation studies in Sec. 5.4 demonstrate that such fine-grained disentangled learning with multiple BNs can improve performance further. A more general usage of multiple BNs will be explored in future work.
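+
+The sketch below is one way such a design could be implemented in PyTorch; the module and attribute names are hypothetical and only meant to illustrate the routing idea (the paper does not prescribe an implementation).
+
+```python
+# Minimal sketch of a BN layer with auxiliary statistics (names are hypothetical).
+import torch
+import torch.nn as nn
+
+class AuxBatchNorm2d(nn.Module):
+    """Keeps one main BN plus `num_aux` auxiliary BNs and routes each
+    mini-batch to the BN that matches its source distribution."""
+
+    def __init__(self, num_features, num_aux=1):
+        super().__init__()
+        self.bns = nn.ModuleList(
+            nn.BatchNorm2d(num_features) for _ in range(1 + num_aux)
+        )
+        self.route = 0  # 0 = main BN (clean images); 1.. = auxiliary BNs
+
+    def forward(self, x):
+        return self.bns[self.route](x)
+
+# Usage: set `route` before each forward pass, e.g. 0 for the clean mini-batch
+# and 1 for its adversarial counterpart; at test time the route stays at 0,
+# so only the main BN is used.
+```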
+
+# 4.3. AdvProp
+
+We formally propose AdvProp in Algorithm 1 to accurately acquire clean and adversarial features during training. For each clean mini-batch, we first attack the network using the auxiliary BNs to generate its adversarial counterpart; next we feed the clean mini-batch and the adversarial mini-batch to the same network but with different BNs applied for loss calculation, i.e., the main BNs for the clean mini-batch and the auxiliary BNs for the adversarial mini-batch; finally we minimize the total loss w.r.t. the network parameters for gradient updates. In other words, except for the BNs, convolutional and other layers are jointly optimized for both adversarial examples and clean images.
+
+Note that the introduction of auxiliary BNs in AdvProp adds only a negligible number of extra parameters for network training, e.g., $0.5\%$ more parameters than the baseline on EfficientNet-B7. At test time, these extra auxiliary BNs are all dropped, and we only use the main BNs for inference.
+
+# Algorithm 1: Pseudo code of AdvProp
+
+Data: A set of clean images with labels;
+
+Result: Network parameter $\theta$
+
+for each training step do
+
+- Sample a clean image mini-batch $x^{c}$ with label $y$;
+- Generate the corresponding adversarial mini-batch $x^{a}$ using the auxiliary BNs;
+- Compute loss $L^{c}(\theta, x^{c}, y)$ on the clean mini-batch $x^{c}$ using the main BNs;
+- Compute loss $L^{a}(\theta, x^{a}, y)$ on the adversarial mini-batch $x^{a}$ using the auxiliary BNs;
+- Minimize the total loss w.r.t. the network parameter: $\arg \min_{\theta} L^{a}(\theta, x^{a}, y) + L^{c}(\theta, x^{c}, y)$;
+
+end
+
+return $\theta$
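+
+Below is a minimal PyTorch-style sketch of one training step following Algorithm 1; it assumes a model whose normalization layers expose a `route` switch (as in the earlier auxiliary-BN sketch) and a `pgd_attack` helper like the one sketched in Sec. 5.1. Both are illustrative assumptions, not the authors' released implementation.
+
+```python
+# Sketch of one AdvProp training step (clean loss via main BNs,
+# adversarial loss via auxiliary BNs, joint update of shared weights).
+import torch
+
+def set_bn_route(model, route):
+    for m in model.modules():
+        if hasattr(m, "route"):
+            m.route = route
+
+def advprop_step(model, x_clean, y, optimizer, criterion, pgd_attack):
+    # 1) craft adversarial examples using the auxiliary BNs
+    set_bn_route(model, 1)
+    x_adv = pgd_attack(model, x_clean, y)
+
+    # 2) clean loss with the main BNs
+    set_bn_route(model, 0)
+    loss_clean = criterion(model(x_clean), y)
+
+    # 3) adversarial loss with the auxiliary BNs
+    set_bn_route(model, 1)
+    loss_adv = criterion(model(x_adv), y)
+
+    # 4) joint update of all shared weights (only BN statistics differ)
+    loss = loss_clean + loss_adv
+    optimizer.zero_grad()
+    loss.backward()
+    optimizer.step()
+    return loss.item()
+```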
+
+Experiments show that such a disentangled learning framework enables networks to reach much stronger performance than the adversarial training baseline [7, 18]. Besides, compared to the fine-tuning strategy in Sec. 3, AdvProp also demonstrates superior performance as it enables networks to jointly learn useful features from adversarial examples and clean images at the same time.
+
+# 5. Experiments
+
+# 5.1. Experiments Setup
+
+Architectures. We choose EfficientNets [41] at different computation regimes as our default architectures, ranging from the light-weight EfficientNet-B0 to the large EfficientNet-B7. Compared to other ConvNets, EfficientNet achieves much better accuracy and efficiency. We follow the settings in [41] to train these networks: RMSProp optimizer with decay 0.9 and momentum 0.9; batch norm momentum 0.99; weight decay 1e-5; initial learning rate 0.256 that decays by 0.97 every 2.4 epochs; a fixed AutoAugment policy [3] is applied to augment training images.
+
+Adversarial Attackers. We train networks with a mixture of adversarial examples and clean images as in Eq. (3). We choose Projected Gradient Descent (PGD) [23] under the $L_{\infty}$ norm as the default attacker for generating adversarial examples on-the-fly. We try PGD attackers with different perturbation sizes $\epsilon$, ranging from 1 to 4. We set the number of attack iterations to $n = \epsilon + 1$, except for the case $\epsilon = 1$ where $n$ is set to 1. The attack step size is fixed to $\alpha = 1$.
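+
+For reference, a minimal $L_{\infty}$ PGD sketch is given below; it is illustrative only, and input scaling and normalization details (e.g., whether $\epsilon$ is measured on a 0-255 or 0-1 scale) must match the actual training pipeline.
+
+```python
+# Minimal L_inf PGD sketch (illustrative; scaling and normalization omitted).
+import torch
+import torch.nn.functional as F
+
+def pgd_attack(model, x, y, eps=4/255, alpha=1/255, n_iter=5):
+    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()  # random start
+    for _ in range(n_iter):
+        x_adv.requires_grad_(True)
+        loss = F.cross_entropy(model(x_adv), y)
+        grad = torch.autograd.grad(loss, x_adv)[0]
+        x_adv = x_adv.detach() + alpha * grad.sign()            # gradient ascent step
+        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # project into the eps-ball
+        x_adv = x_adv.clamp(0, 1)                               # keep a valid image range
+    return x_adv.detach()
+```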
+
+Datasets. We use the standard ImageNet dataset [33] to train all models. In addition to reporting performance on the original ImageNet validation set, we go beyond by testing the models on the following test sets:
+
+- ImageNet-C [9]. The ImageNet-C dataset is designed for measuring the network robustness to common image corruptions. It consists of 15 diverse corruption types and each type of corruption has five levels of severity, resulting in 75 distinct corruptions.
+- ImageNet-A [10]. The ImageNet-A dataset adversarially collects 7,500 natural, unmodified but "hard" real-world images. These images are drawn from some challenging scenarios (e.g., occlusion and fog scene) which are difficult for recognition.
+- Stylized-ImageNet [6]. The Stylized-ImageNet dataset is created by removing local texture cues while retaining global shape information on natural images via AdaIN style transfer [13]. As suggested in [6], networks are required to learn more shape-based representations to improve accuracy on Stylized-ImageNet.
+
+Compared to ImageNet, images from ImageNet-C, ImageNet-A and Stylized-ImageNet are much more challenging, even for human observers.
+
+# 5.2. ImageNet Results and Beyond
+
+ImageNet Results. Fig. 4 shows the results on the ImageNet validation set. We compare our method with the vanilla training setting. The family of EfficientNets provides a strong baseline, e.g., EfficientNet-B7's $84.5\%$ top-1 accuracy is the prior art on ImageNet [41].
+
+
+Figure 4. AdvProp boosts model performance over the vanilla training baseline on ImageNet. This improvement becomes more significant if trained with larger networks. Our strongest result is reported by the EfficientNet-B7 trained with AdvProp, i.e., $85.2\%$ top-1 accuracy on ImageNet.
+
+As different networks favor different attacker strengths when trained with AdvProp (which we ablate next), we first report the best result for each network in Fig. 4. Our proposed AdvProp substantially outperforms the vanilla training baseline on all networks. This performance improvement grows with network capacity: larger networks tend to benefit more from AdvProp. For example, the performance gain is at most $0.4\%$ for networks smaller than EfficientNet-B4, but is at least $0.6\%$ for networks larger than EfficientNet-B4.
+
+Compared to the prior art, i.e., $84.5\%$ top-1 accuracy, an EfficientNet-B6 trained with AdvProp (with $\sim 2\times$ less FLOPs than EfficientNet-B7) already surpasses it by $0.3\%$ . Our strongest result is obtained by the EfficientNet-B7 trained with AdvProp which achieves $85.2\%$ top-1 accuracy on ImageNet, beating the prior art by $0.7\%$ .
+
+Generalization on Distorted ImageNet Datasets. Next, we evaluate models on distorted ImageNet datasets, which are much more difficult than the original ImageNet. For instance, though ResNet-50 demonstrates reasonable performance on ImageNet (76.7% accuracy), it only achieves 74.8% mCE (mean corruption error, lower is better) on ImageNet-C, 3.1% top-1 accuracy on ImageNet-A and 8.0% top-1 accuracy on Stylized-ImageNet.
+
+The results are summarized in Tab. 1. Again, our proposed AdvProp consistently outperforms the vanilla training baseline for all models on all distorted datasets.
+
+| Model | ImageNet-C* [9] mCE ↓ | ImageNet-A [10] Top-1 Acc. ↑ | Stylized-ImageNet* [6] Top-1 Acc. ↑ |
+| --- | --- | --- | --- |
+| ResNet-50 | 74.8 | 3.1 | 8.0 |
+| EfficientNet-B0 | 70.7 | 6.7 | 13.1 |
+| + AdvProp (ours) | 66.2 (-4.5) | 7.1 (+0.4) | 14.6 (+1.5) |
+| EfficientNet-B1 | 65.1 | 9.0 | 15.0 |
+| + AdvProp (ours) | 60.2 (-4.9) | 10.1 (+1.1) | 16.7 (+1.7) |
+| EfficientNet-B2 | 64.1 | 10.8 | 16.8 |
+| + AdvProp (ours) | 61.4 (-2.7) | 11.8 (+1.0) | 17.8 (+1.0) |
+| EfficientNet-B3 | 62.9 | 17.9 | 17.8 |
+| + AdvProp (ours) | 57.8 (-5.1) | 18.0 (+0.1) | 21.4 (+3.6) |
+| EfficientNet-B4 | 60.7 | 26.4 | 20.2 |
+| + AdvProp (ours) | 58.6 (-2.1) | 27.9 (+1.5) | 22.5 (+1.7) |
+| EfficientNet-B5 | 62.3 | 29.4 | 20.8 |
+| + AdvProp (ours) | 56.2 (-6.1) | 34.4 (+5.0) | 24.4 (+3.6) |
+| EfficientNet-B6 | 60.6 | 34.5 | 20.9 |
+| + AdvProp (ours) | 53.6 (-7.0) | 40.6 (+6.1) | 25.9 (+4.0) |
+| EfficientNet-B7 | 59.4 | 37.7 | 21.8 |
+| + AdvProp (ours) | 52.9 (-6.5) | 44.7 (+7.0) | 26.6 (+4.8) |
+
+Table 1. AdvProp significantly boosts models' generalization ability on ImageNet-C, ImageNet-A and Stylized-ImageNet. The best result on each dataset is $52.9\%$, $44.7\%$ and $26.6\%$ respectively, all achieved by the EfficientNet-B7 trained with AdvProp. *For ImageNet-C and Stylized-ImageNet, as distortions are specifically designed for images of size $224 \times 224 \times 3$, we follow the previous setup [6, 9] and always fix the testing image size at $224 \times 224 \times 3$ for a fair comparison.
+
+The improvement here is much more significant than that on the original ImageNet. For example, AdvProp improves EfficientNet-B3 by $0.2\%$ on ImageNet, and substantially boosts the performance by $5.1\%$ on ImageNet-C and $3.6\%$ on Stylized-ImageNet.
+
+The EfficientNet-B7 trained with AdvProp reports the strongest results on these datasets—it obtains $52.9\%$ mCE on ImageNet-C, $44.7\%$ top-1 accuracy on ImageNet-A and $26.6\%$ top-1 accuracy on Stylized-ImageNet. These are the best results so far if models are not allowed to train with corresponding distortions [6] or extra data [24, 46].
+
+To summarize, the results suggest that AdvProp significantly boosts generalization ability by allowing models to learn much richer internal representations than vanilla training. The richer representations not only provide models with global shape information for better classifying the Stylized-ImageNet dataset, but also increase model robustness against common image corruptions.
+
+Ablation on Adversarial Attacker Strength. We now ablate the effects of attacker strength used in AdvProp on network performance. Specifically, the attacker strength here is determined by perturbation size $\epsilon$ , where larger perturbation size indicates stronger attacker. We try with different $\epsilon$ ranging from 1 to 4, and report the corresponding accuracy on the ImageNet validation set in Tab. 2.
+
+| | B0 | B1 | B2 | B3 | B4 | B5 | B6 | B7 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| PGD5 (ε=4) | 77.1 | 79.2 | 80.3 | 81.8 | 83.3 | 84.3 | 84.8 | 85.2 |
+| PGD4 (ε=3) | 77.3 | 79.4 | 80.4 | 81.9 | 83.3 | 84.3 | 84.7 | 85.1 |
+| PGD3 (ε=2) | 77.4 | 79.4 | 80.4 | 81.9 | 83.1 | 84.3 | 84.7 | 85.0 |
+| PGD1 (ε=1) | 77.6 | 79.6 | 80.5 | 81.8 | 83.1 | 84.3 | 84.6 | 85.0 |
+
+Table 2. ImageNet performance of models trained with AdvProp and different attack strengths. In general, smaller networks favor weaker attackers, while larger networks favor stronger attackers.
+
+With AdvProp, we observe that smaller networks generally favor weaker attackers. For example, the lightweight EfficientNet-B0 achieves the best performance with a 1-step PGD attacker with perturbation size 1 (denoted as PGD1 $(\epsilon = 1)$), significantly outperforming the counterpart trained with a 5-step PGD attacker with perturbation size 4 (denoted as PGD5 $(\epsilon = 4)$), i.e., $77.6\%$ v.s. $77.1\%$. This phenomenon is possibly because small networks are limited by their capacity and cannot effectively distill information from strong adversarial examples, even when the mixture of distributions is well disentangled via auxiliary BNs.
+
+Meanwhile, networks with enough capacity tend to favor stronger attackers. By increasing attacker strength from PGD1 $(\epsilon = 1)$ to PGD5 $(\epsilon = 4)$, AdvProp boosts EfficientNet-B7's accuracy by $0.2\%$. This observation motivates our later ablation on further increasing attacker strength to fully exploit the potential of large networks.
+
+# 5.3. Comparisons to Adversarial Training
+
+As shown in Fig. 4 and Tab. 1, AdvProp improves models for better recognition than the vanilla training baseline. These results contradict previous conclusions [18, 42, 16] that performance degradation is always observed if adversarial examples are used for training. We hereby provide a set of ablations to explain this inconsistency. We choose PGD5 ($\epsilon = 4$) as the default attacker to generate adversarial examples during training.
+
+
+Figure 5. AdvProp substantially outperforms adversarial training [7] on ImageNet, especially for small models.
+
+Comparison Results. We compare AdvProp to traditional adversarial training [7], and report evaluation results on ImageNet validation set in Fig. 5. Compared to the traditional adversarial training, our method consistently achieves better accuracy on all models. This result suggests that carefully handling BN statistics estimation is important for training better models with adversarial examples.
+
+The biggest improvement is observed on EfficientNet-B0, where our method beats the traditional adversarial training by $0.9\%$. With larger models, this improvement becomes smaller: it stays at $\sim 0.5\%$ until scaling to EfficientNet-B5, then drops to $0.3\%$ for EfficientNet-B6 and $0.1\%$ for EfficientNet-B7.
+
+Quantifying Domain Differences. One possible hypothesis for the observation above is that more powerful networks have a stronger ability to learn unified internal representations of the mixed distributions, thereby mitigating the issue of distribution mismatch at normalization layers even without the help of auxiliary BNs. To support this hypothesis, we take models trained with AdvProp, and compare the performance difference between the settings that use either the main BNs or the auxiliary BNs. As the resulting networks share all layers except BNs, the corresponding performance gap empirically captures the degree of distribution mismatch between adversarial examples and clean images. We use the ImageNet validation set for evaluation, and summarize the results in Tab. 3.
+
+| | B0 | B1 | B2 | B3 | B4 | B5 | B6 | B7 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| BN | 77.1 | 79.2 | 80.3 | 81.8 | 83.3 | 84.3 | 84.8 | 85.2 |
+| Auxiliary BN | 73.7 | 75.9 | 77.0 | 78.6 | 80.5 | 82.1 | 82.7 | 83.3 |
+| △ | +3.4 | +3.3 | +3.3 | +3.2 | +2.8 | +2.2 | +2.1 | +1.9 |
+
+Table 3. Performance comparison between settings that use either the main BNs or the auxiliary BNs on ImageNet. This performance difference captures the degree of distribution mismatch between adversarial examples and clean images.
+
+When training larger networks, we observe that this performance difference gets smaller. The gap for EfficientNet-B0 is $3.4\%$, but it is reduced to $1.9\%$ for EfficientNet-B7. This suggests that the internal representations of adversarial examples and clean images learned by large networks are much more similar than those learned by small networks. Therefore, with a strong enough network, it is possible to accurately and effectively learn a mixture of distributions even without careful handling at the normalization layers.
+
+Why AdvProp? For small networks, our comparison shows that AdvProp substantially outperforms the adversarial training baseline. We attribute this performance improvement mainly to the successful disentangled learning via auxiliary BNs.
+
+For larger networks, though the improvement is relatively small on ImageNet, AdvProp consistently outperforms the adversarial training baseline by a large margin on distorted ImageNet datasets. As shown in Tab. 4, AdvProp improves EfficientNet-B7 by $3.1\%$ on ImageNet-C, $4.3\%$ on ImageNet-A and $1.5\%$ on Stylized-ImageNet over the adversarial training baseline.
+
+| Model | ImageNet-C [9] mCE ↓ | ImageNet-A [10] Top-1 Acc. ↑ | Stylized-ImageNet [6] Top-1 Acc. ↑ |
+| --- | --- | --- | --- |
+| B6 + Adv. Training | 55.8 | 37.0 | 24.7 |
+| B6 + AdvProp (ours) | 53.6 | 40.6 | 25.9 |
+| B7 + Adv. Training | 56.0 | 40.4 | 25.1 |
+| B7 + AdvProp (ours) | 52.9 | 44.7 | 26.6 |
+
+Table 4. AdvProp demonstrates much stronger generalization ability on distorted ImageNet datasets (e.g., ImageNet-C) than the adversarial training baseline for larger models.
+
+Moreover, AdvProp enables large networks to perform better if trained with stronger attackers. For example, by slightly increasing the attacker strength from PGD5 ($\epsilon = 4$) to PGD7 ($\epsilon = 6$), AdvProp further helps EfficientNet-B7 to achieve $85.3\%$ top-1 accuracy on ImageNet. Conversely, applying such an attacker to traditional adversarial training decreases EfficientNet-B7's accuracy to $85.0\%$, possibly due to a more severe distribution mismatch between adversarial examples and clean images.
+
+In summary, AdvProp enables networks to enjoy the benefits of adversarial examples even with limited capacity. For networks with enough capacity, compared to adversarial training, AdvProp demonstrates much stronger generalization ability and is better at exploiting model capacity to improve performance further.
+
+Missing Pieces in Traditional Adversarial Training. In our reproduced adversarial training, we note that it is already better than the vanilla training setting on large networks. For example, our adversarially trained EfficientNet-B7 reaches $85.1\%$ top-1 accuracy on ImageNet, which beats the vanilla training baseline by $0.6\%$. However, previous works [18, 16] show adversarial training always degrades performance.
+
+Compared to [18, 16], we make two changes in our re-implementation: (1) using stronger networks; and (2) training with weaker attackers. For example, previous works use networks like Inception or ResNet for training, and set the perturbation size $\epsilon = 16$; we instead use the much stronger EfficientNet for training, and limit the perturbation size to a much smaller value $\epsilon = 4$. Intuitively, weaker attackers push the distribution of adversarial examples less far away from the distribution of clean images, and larger networks are better at bridging domain differences. Both factors mitigate the issue of distribution mismatch, thus making it much easier for networks to learn valuable features from both domains.
+
+# 5.4. Ablations
+
+Fine-grained Disentangled Learning via Multiple Auxiliary BNs. Following [41], our networks are trained with AutoAugment [3] by default, which includes operations like rotation and shearing. We hypothesize these operations (slightly) shift the original data distribution and propose to add an extra auxiliary BN to further disentangle the augmented data for fine-grained learning. In total, we keep one main BN for clean images without AutoAugment, and two auxiliary BNs for clean images with AutoAugment and adversarial examples, respectively.
+
+We try PGD attackers with perturbation size ranging from 1 to 4, and report the best result on ImageNet in Tab. 5. Compared to the default AdvProp, this fine-grained strategy further improves performance. It helps EfficientNet-B0 to achieve $77.9\%$ accuracy with just 5.3M parameters, which is the state-of-the-art performance for mobile networks. As a comparison, MobileNetV3 has 5.4M parameters with $75.2\%$ accuracy [11]. These results encourage future investigation of more fine-grained disentangled learning with mixture distributions in general, not just for adversarial training.
+
+| | B0 | B1 | B2 | B3 | B4 | B5 | B6 | B7 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| AdvProp | 77.6 | 79.6 | 80.5 | 81.9 | 83.3 | 84.3 | 84.8 | 85.2 |
+| Fine-Grained AdvProp | 77.9 | 79.8 | 80.7 | 82.0 | 83.5 | 84.4 | 84.8 | 85.2 |
+
+Table 5. Fine-grained AdvProp substantially boosts model accuracy on ImageNet, especially for small models. We perform fine-grained disentangled learning by keeping an additional auxiliary BN for AutoAugment images.
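+
+A small sketch of how this three-way routing could look, reusing the hypothetical `set_bn_route` helper and a model built with two auxiliary BNs from the earlier sketches (illustrative only):
+
+```python
+# Fine-grained AdvProp loss sketch: route 0 = clean, 1 = AutoAugment, 2 = adversarial.
+# Assumes BN layers built with two auxiliary BNs (e.g. AuxBatchNorm2d(..., num_aux=2)).
+def fine_grained_advprop_loss(model, x_clean, x_autoaug, x_adv, y, criterion):
+    set_bn_route(model, 0)
+    loss_clean = criterion(model(x_clean), y)
+    set_bn_route(model, 1)
+    loss_aug = criterion(model(x_autoaug), y)
+    set_bn_route(model, 2)
+    loss_adv = criterion(model(x_adv), y)
+    return loss_clean + loss_aug + loss_adv
+```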
+
+Comparison to AutoAugment. Training with adversarial examples is a form of data augmentation. We choose the standard Inception-style pre-processing [38] as baseline, and compare the benefits of additionally applying AutoAugment or AdvProp. We train networks with PGD5 $(\epsilon = 4)$ and evaluate performance on ImageNet.
+
+Results are summarized in Tab. 6. For small models, AutoAugment is slightly better than AdvProp although we argue this gap can be addressed by adjusting the attacker strength. For large models, AdvProp significantly outperforms AutoAugment. Training with AutoAugment and AdvProp in combination is better than using AdvProp alone.
+
+| | B0 | B1 | B2 | B3 | B4 | B5 | B6 | B7 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Inception Pre-process [38] | 76.8 | 78.8 | 79.8 | 81.0 | 82.6 | 83.2 | 83.7 | 84.0 |
+| + AutoAugment [3] | +0.5 | +0.4 | +0.5 | +0.7 | +0.4 | +0.5 | +0.5 | +0.5 |
+| + AdvProp (ours) | +0.3 | +0.3 | +0.2 | +0.4 | +0.3 | +0.8 | +0.9 | +0.9 |
+| + Both (ours) | +0.3 | +0.4 | +0.5 | +0.8 | +0.7 | +1.1 | +1.1 | +1.2 |
+
+Table 6. Both AutoAugment and AdvProp improve model performance over the Inception-style pre-processing baseline on ImageNet. Large models generally perform better with AdvProp than with AutoAugment. Training with a combination of both is better than using AdvProp alone on all networks.
+
+Attackers Other Than PGD. We hereby study the effects of applying different attackers in AdvProp on model performance. Specifically, we try two modifications of PGD: (1) we no longer limit the perturbation size to be within the $\epsilon$-ball, and name this attacker Gradient Descent (GD) as it removes the projection step in PGD; or (2) we skip the random noise initialization step in PGD, turning it into I-FGSM [18]. Other attack hyper-parameters are unchanged: the maximum perturbation size $\epsilon = 4$ (if applicable), the number of attack iterations $n = 5$ and the attack step size $\alpha = 1.0$.
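+
+The two modifications can be sketched as small edits of the PGD loop shown earlier (illustrative; scaling details omitted as before):
+
+```python
+# Variants of the PGD loop above: I-FGSM drops the random start,
+# GD drops the projection onto the eps-ball. Both are illustrative sketches.
+import torch
+import torch.nn.functional as F
+
+def ifgsm_attack(model, x, y, eps=4/255, alpha=1/255, n_iter=5):
+    x_adv = x.clone().detach()                                  # no random initialization
+    for _ in range(n_iter):
+        x_adv.requires_grad_(True)
+        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
+        x_adv = x_adv.detach() + alpha * grad.sign()
+        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)   # projection kept
+        x_adv = x_adv.clamp(0, 1)
+    return x_adv.detach()
+
+def gd_attack(model, x, y, alpha=1/255, n_iter=5):
+    x_adv = (x + torch.empty_like(x).uniform_(-alpha, alpha)).clamp(0, 1).detach()  # random start kept
+    for _ in range(n_iter):
+        x_adv.requires_grad_(True)
+        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
+        x_adv = (x_adv.detach() + alpha * grad.sign()).clamp(0, 1)  # no eps-ball projection
+    return x_adv.detach()
+```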
+
+For simplicity, we only experiment with EfficientNet-B3, EfficientNet-B5 and EfficientNet-B7, and report the ImageNet performance in Tab. 7. We observe that all attackers substantially improve model performance over the vanilla training baseline. This result suggests that our AdvProp is not designed for a specific attacker (e.g., PGD), but a general mechanism for improving image recognition models with different adversarial attackers.
+
+| | B3 | B5 | B7 |
+| --- | --- | --- | --- |
+| Vanilla Training | 81.7 | 83.7 | 84.5 |
+| PGD [23] | 81.8 | 84.3 | 85.2 |
+| I-FGSM [18] | 81.9 | 84.3 | 85.2 |
+| GD | 81.7 | 84.3 | 85.3 |
+
+Table 7. ImageNet performance when trained with different attackers. With AdvProp, all attackers successfully improve model performance over the vanilla training baseline.
+
+ResNet Results. Besides EfficientNets, we also experiment with ResNet [8]. We compare AdvProp against two baselines: vanilla training and adversarial training. We apply PGD5 ( $\epsilon = 4$ ) to generate adversarial examples, and follow the settings in [8] to train all networks.
+
+We report model performance on ImageNet in Tab. 8. Compared to vanilla training, adversarial training always degrades model performance while AdvProp consistently leads to better accuracy on all ResNet models. Take ResNet-152 for example, adversarial training decreases the baseline performance by $2.0\%$ , but our AdvProp further boosts the baseline performance by $0.8\%$ .
+
+| | ResNet-50 | ResNet-101 | ResNet-152 | ResNet-200 |
+| --- | --- | --- | --- | --- |
+| Vanilla Training | 76.7 | 78.3 | 79.0 | 79.3 |
+| Adversarial Training | -3.2 | -1.8 | -2.0 | -1.4 |
+| AdvProp (ours) | +0.4 | +0.6 | +0.8 | +0.8 |
+
+Table 8. Performance comparison among vanilla training, adversarial training and AdvProp on ImageNet. AdvProp reports the best result on all ResNet models.
+
+In Sec. 5.3, we show that adversarial training can improve performance if large EfficientNets are used for training. However, this phenomenon is not observed on ResNet, e.g., adversarial training still leads to inferior accuracy even with the large ResNet-200. This may suggest that architecture design also plays an important role when training with adversarial examples, and we leave it as future work.
+
+Pushing The Envelope with a Larger Model. Previous results suggest AdvProp performs better with larger networks. To push the envelope, we train a larger network, EfficientNet-B8, by scaling up EfficientNet-B7 further according to the compound scaling rule in [41].
+
+Our AdvProp improves the accuracy of EfficientNet-B8 from $84.8\%$ to $85.5\%$ , achieving a new state-of-the-art accuracy on ImageNet without using extra data. This result even surpasses the best model reported in [24], which is pretrained on 3.5B extra Instagram images ( $\sim 3000 \times$ more than ImageNet) and requires $\sim 9.4 \times$ more parameters (829M vs. 88M) than our EfficientNet-B8.
+
+# 6. Conclusion
+
+Previous works commonly view adversarial examples as a threat to ConvNets, and suggest that training with adversarial examples leads to an accuracy drop on clean images. Here we offer a different perspective: using adversarial examples to improve the accuracy of ConvNets. As adversarial examples have different underlying distributions from normal examples, we propose to use an auxiliary batch norm for disentangled learning by processing adversarial examples and clean images separately at normalization layers. Our method, AdvProp, significantly improves the accuracy of all ConvNets in our experiments. Our best model reports the state-of-the-art $85.5\%$ top-1 accuracy on ImageNet without any extra data.
+
+Acknowledgement: This work was partially supported by ONR N00014-15-1-2356 to Cihang Xie and Alan Yuille.
+
+# References
+
+[1] Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, John C Duchi, and Percy S Liang. Unlabeled data improves adversarial robustness. In NeurIPS, 2019. 2
+[2] Minhao Cheng, Qi Lei, Pin-Yu Chen, Inderjit Dhillon, and Cho-Jui Hsieh. Cat: Customized adversarial training for improved robustness. arXiv preprint arXiv:2002.06789, 2020. 1
+[3] Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. Autoaugment: Learning augmentation policies from data. In CVPR, 2019. 2, 5, 7, 8
+[4] Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical data augmentation with no separate search. arXiv preprint arXiv:1909.13719, 2019. 2
+[5] Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017. 2
+[6] Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A Wichmann, and Wieland Brendel. Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness. In ICLR, 2018. 1, 2, 5, 6, 7
+[7] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2015. 1, 2, 3, 4, 6
+[8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016. 2, 4, 8
+[9] Dan Hendrycks and Thomas G Dietterich. Benchmarking neural network robustness to common corruptions and surface variations. arXiv preprint arXiv:1807.01697, 2018. 1, 2, 5, 6, 7
+[10] Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. Natural adversarial examples. arXiv preprint arXiv:1907.07174, 2019. 1, 2, 5, 6, 7
+[11] Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, et al. Searching for mobilenetv3. In ICCV, 2019. 7
+[12] Gao Huang, Zhuang Liu, and Kilian Q Weinberger. Densely connected convolutional networks. In CVPR, 2017. 4
+[13] Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In ICCV, 2017. 5
+[14] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015. 4
+[15] Charles Jin and Martin Rinard. Manifold regularization for adversarial robustness. arXiv preprint arXiv:2003.04286, 2020. 1
+[16] Harini Kannan, Alexey Kurakin, and Ian Goodfellow. Adversarial logit pairing. arXiv preprint arXiv:1803.06373, 2018. 1, 2, 6, 7
+[17] Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012. 2
+
+[18] Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. In ICLR, 2017. 1, 2, 3, 4, 6, 7, 8
+[19] Joseph Lemley, Shabab Bazrafkan, and Peter Corcoran. Smart augmentation learning an optimal data augmentation strategy. IEEE Access, 2017. 2
+[20] Yan Li, Ethan X Fang, Huan Xu, and Tuo Zhao. Inductive bias of gradient descent based adversarial training on separable data. arXiv preprint arXiv:1906.02931, 2019. 1
+[21] Sungbin Lim, Ildoo Kim, Taesup Kim, Chiheon Kim, and Sungwoong Kim. Fast autoaugment. arXiv preprint arXiv:1905.00397, 2019. 2
+[22] Raphael Gontijo Lopes, Dong Yin, Ben Poole, Justin Gilmer, and Ekin D Cubuk. Improving robustness without sacrificing accuracy with patch gaussian augmentation. arXiv preprint arXiv:1906.02611, 2019. 2
+[23] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In ICLR, 2018. 1, 2, 3, 5, 8
+[24] Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens van der Maaten. Exploring the limits of weakly supervised pretraining. In ECCV, 2018. 1, 2, 6, 8
+[25] Yifei Min, Lin Chen, and Amin Karbasi. The curious case of adversarially robust models: More data can help, double descend, or hurt generalization. arXiv preprint arXiv:2002.11080, 2020. 2
+[26] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. TPAMI, 2018. 1, 2
+[27] Amir Najafi, Shin-ichi Maeda, Masanori Koyama, and Takeru Miyato. Robustness to adversarial perturbations in learning from incomplete data. In NeurIPS, 2019. 2
+[28] Preetum Nakkiran. Adversarial robustness may be at odds with simplicity. arXiv preprint arXiv:1901.00532, 2019. 2
+[29] Tianyu Pang, Xiao Yang, Yinpeng Dong, Kun Xu, Hang Su, and Jun Zhu. Boosting adversarial training with hypersphere embedding. arXiv preprint arXiv:2002.08619, 2020. 1
+[30] Siyuan Qiao, Wei Shen, Zhishuai Zhang, Bo Wang, and Alan Yuille. Deep co-training for semi-supervised image recognition. In ECCV, 2018. 1, 2
+[31] Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John Duchi, and Percy Liang. Understanding and mitigating the tradeoff between robustness and accuracy. arXiv preprint arXiv:2002.10716, 2020. 2
+[32] Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John C Duchi, and Percy Liang. Adversarial training can hurt generalization. arXiv preprint arXiv:1906.06032, 2019. 2
+[33] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015. 1, 5
+[34] Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, and Aleksander Madry. Adversarially robust generalization requires more data. In NeurIPS, 2018. 2
+
+[35] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. 2
+[36] David Stutz, Matthias Hein, and Bernt Schiele. Confidence-calibrated adversarial training and detection: More robust models generalizing beyond the attack used during training. arXiv preprint arXiv:1910.06259, 2019. 1
+[37] David Stutz, Matthias Hein, and Bernt Schiele. Disentangling adversarial robustness and generalization. In CVPR, 2019. 2
+[38] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR, 2015. 8
+[39] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In CVPR, 2016. 4
+[40] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In ICLR, 2014. 1
+[41] Mingxing Tan and Quoc Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In ICML, 2019. 1, 2, 5, 7, 8
+[42] Florian Tramér, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. In ICLR, 2018. 1, 2, 6
+[43] Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. There is no free lunch in adversarial robustness (but there are unexpected benefits). arXiv:1805.12152, 2018. 2, 3
+[44] Jonathan Uesato, Jean-Baptiste Alayrac, Po-Sen Huang, Alhussein Fawzi, Robert Stanforth, and Pushmeet Kohli. Are labels required for improving adversarial robustness? In NeurIPS, 2019. 2
+[45] Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan Yuille, and Kaiming He. Feature denoising for improving adversarial robustness. In CVPR, 2019. 1, 2, 3
+[46] Qizhe Xie, Eduard Hovy, Minh-Thang Luong, and Quoc Le. Self-training with noisy student improves imagenet classification. arXiv preprint arXiv:1911.04252, 2019. 6
+[47] Dong Yin, Raphael Gontijo Lopes, Jonathon Shlens, Ekin D Cubuk, and Justin Gilmer. A fourier perspective on model robustness in computer vision. arXiv preprint arXiv:1906.08988, 2019. 2, 3
+[48] Runtian Zhai, Tianle Cai, Di He, Chen Dan, Kun He, John Hopcroft, and Liwei Wang. Adversarially robust generalization just requires more unlabeled data. arXiv preprint arXiv:1906.00555, 2019. 2
+[49] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In ICLR, 2018. 2
+[50] Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P Xing, Laurent El Ghaoui, and Michael I Jordan. Theoretically principled trade-off between robustness and accuracy. In ICML, 2019. 1
+
+[51] Tianyuan Zhang and Zhanxing Zhu. Interpreting adversarially trained convolutional neural networks. In ICML, 2019. 2, 3
+[52] Xinyu Zhang, Qiang Wang, Jian Zhang, and Zhao Zhong. Adversarial autoaugment. In ICLR, 2020. 2
+[53] Barret Zoph, Ekin D Cubuk, Golnaz Ghiasi, Tsung-Yi Lin, Jonathon Shlens, and Quoc V Le. Learning data augmentation strategies for object detection. arXiv preprint arXiv:1906.11172, 2019. 2
\ No newline at end of file
diff --git a/adversarialexamplesimproveimagerecognition/images.zip b/adversarialexamplesimproveimagerecognition/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..624781373ef1fc0ff8a40ab8b045f7aa0a40ae86
--- /dev/null
+++ b/adversarialexamplesimproveimagerecognition/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5a8f458d174c42389beebd5e6e2e42b2dbd381386a17c2375e3944c5f70123e2
+size 365605
diff --git a/adversarialexamplesimproveimagerecognition/layout.json b/adversarialexamplesimproveimagerecognition/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..8fb07288022179ba1938b2f788294d5d77e5d2b5
--- /dev/null
+++ b/adversarialexamplesimproveimagerecognition/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3f77784b1b5d4393cc778b1ceb663f8f102237b83e070ac00ca17a07d3b752d8
+size 425754
diff --git a/adversarialfeaturehallucinationnetworksforfewshotlearning/672b338c-4637-446f-936c-df0d066ef6c0_content_list.json b/adversarialfeaturehallucinationnetworksforfewshotlearning/672b338c-4637-446f-936c-df0d066ef6c0_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..f0bf4be69c9e18f26b7e368b7f72db00284adf59
--- /dev/null
+++ b/adversarialfeaturehallucinationnetworksforfewshotlearning/672b338c-4637-446f-936c-df0d066ef6c0_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:39d117f14524680bd9f276346271e13b5e244c42f5f17b7da1850787c69aa5ef
+size 75549
diff --git a/adversarialfeaturehallucinationnetworksforfewshotlearning/672b338c-4637-446f-936c-df0d066ef6c0_model.json b/adversarialfeaturehallucinationnetworksforfewshotlearning/672b338c-4637-446f-936c-df0d066ef6c0_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..615243c50f28c0a740eab663d79fb0c5697caa81
--- /dev/null
+++ b/adversarialfeaturehallucinationnetworksforfewshotlearning/672b338c-4637-446f-936c-df0d066ef6c0_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:397720a3b189d576114807b3423da195143b1dfbd488d3df6a39eeb834a5dedd
+size 91479
diff --git a/adversarialfeaturehallucinationnetworksforfewshotlearning/672b338c-4637-446f-936c-df0d066ef6c0_origin.pdf b/adversarialfeaturehallucinationnetworksforfewshotlearning/672b338c-4637-446f-936c-df0d066ef6c0_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..6b3659fe9451b1255a1e0b9b35237bd877b362c9
--- /dev/null
+++ b/adversarialfeaturehallucinationnetworksforfewshotlearning/672b338c-4637-446f-936c-df0d066ef6c0_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a766dcbc04ec48e32ce054acf40ef51d31115f8fd94845ddc1f698ff0beecc2f
+size 2636508
diff --git a/adversarialfeaturehallucinationnetworksforfewshotlearning/full.md b/adversarialfeaturehallucinationnetworksforfewshotlearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..aef94e69bd112c7f200493a9cff4d293805aa168
--- /dev/null
+++ b/adversarialfeaturehallucinationnetworksforfewshotlearning/full.md
@@ -0,0 +1,298 @@
+# Adversarial Feature Hallucination Networks for Few-Shot Learning
+
+Kai Li $^{1}$ , Yulun Zhang $^{1}$ , Kunpeng Li $^{1}$ , Yun Fu $^{1,2}$
+
+$^{1}$ Department of Electrical and Computer Engineering, Northeastern University, Boston, USA $^{2}$ Khoury College of Computer Science, Northeastern University, Boston, USA
+
+{kaili, kunpengli, yunfu}@ece.neu.edu, yulun100@gmail.com
+
+# Abstract
+
+The recent flourishing of deep learning in various tasks is largely accredited to the rich and accessible labeled data. Nonetheless, massive supervision remains a luxury for many real applications, boosting great interest in label-scarce techniques such as few-shot learning (FSL), which aims to learn the concept of new classes with a few labeled samples. A natural approach to FSL is data augmentation, and many recent works have proved its feasibility by proposing various data synthesis models. However, these models fail to well secure the discriminability and diversity of the synthesized data and thus often produce undesirable results. In this paper, we propose Adversarial Feature Hallucination Networks (AFHN), which is based on a conditional Wasserstein Generative Adversarial Network (cWGAN) and hallucinates diverse and discriminative features conditioned on the few labeled samples. Two novel regularizers, i.e., the classification regularizer and the anti-collapse regularizer, are incorporated into AFHN to encourage discriminability and diversity of the synthesized features, respectively. Ablation study verifies the effectiveness of the proposed cWGAN based feature hallucination framework and the proposed regularizers. Comparative results on three common benchmark datasets substantiate the superiority of AFHN to existing data augmentation based FSL approaches and other state-of-the-art ones.
+
+# 1. Introduction
+
+The rich and accessible labeled data fuel the revolutionary success of deep learning [7, 46, 20]. However, in many specific real applications, only limited labeled data are available. This motivates the investigation of few-shot learning (FSL), where we need to learn the concept of new classes based on a few labeled samples. To combat the deficiency of labeled data, some FSL methods resort to enhancing the discriminability of the feature representations such that a simple linear classifier learned from a few labeled samples can reach satisfactory classification results [39, 36, 38]. Another category of methods investigates techniques for quickly and effectively updating a deep neural network with a few labeled data, either by learning a meta-network and the corresponding updating rules [9, 24, 32, 28], or by learning a meta-learner model that generates some components of a classification network directly from the labeled samples [21, 12, 34]. Alternatively, the third group of methods addresses this problem with data augmentation by distorting the labeled images or synthesizing new images/features based on the labeled ones [4, 10, 35, 5].
+
+Our proposed method falls into the data augmentation based category. The basic assumption of approaches in this category is that the intra-class cross-sample relationship learned from seen (training) classes can be applied to unseen (test) classes. Once the cross-sample relationship is modeled and learned from seen classes, it can be applied to the few labeled samples of unseen classes to hallucinate new ones. It is believed that the augmented samples can diversify the intra-class variance and thus help reach sharper classification boundaries [45]. Whatever data augmentation technique is used, it is critical to secure discriminability of the augmented samples, as otherwise they would cast a catastrophic impact on the classifier. On the other hand, the decision boundary of a classifier can be determined precisely only when labeled samples exhibit sufficient intra-class variance. Thus, diversity of the augmented samples also plays a crucial role. This is in fact the essential motivation of investigating data augmentation for FSL, as a few labeled samples encapsulate limited intra-class variance.
+
+Though various data augmentation based FSL methods have been proposed recently, they fail to simultaneously guarantee discriminability and diversity of the synthesized samples. Some methods learn a finite set of transformation mappings between samples in each base (label-rich) class and directly apply them to seed samples of novel (label-scarce) classes. However, the arbitrary mapping may destroy discriminability of the synthesized samples [6, 15, 35]. Other methods synthesize samples specifically for certain tasks which regularize the synthesis process [41, 28]. Thus, these methods can guarantee discriminability of the synthesized samples. But the task constrains the synthesis process, and consequently the synthesized samples tend to collapse into certain modes, thus failing to secure diversity.
+
+To avoid limitations of the existing methods, we propose Adversarial Feature Hallucination Networks (AFHN) which consists of a novel conditional Wasserstein Generative Adversarial Networks (cWGAN) [13] based feature synthesis framework and two novel regularizers. Unlike many other data augmentation based FSL approaches that perform data augmentation in the image space [3, 6, 4], our cWGAN based framework hallucinates new features by using the features of the seed labeled samples as the conditional context. To secure discriminability of the synthesized features, AFHN incorporates a novel classification regularizer that constrains the synthesized features being of high correlation with features of real samples from the same class while of low correlation with those from the different classes. With this constraint, the generator is encouraged to generate features encapsulating discriminative information of the class used as the conditional context.
+
+It is more complicated to ensure diversity of the synthesized features, as conditional GANs are notoriously susceptible to the mode collapse problem, where only samples from limited distribution modes are synthesized. This is because using high-dimensional and structured data as the condition tends to make the generator ignore the latent code, which controls diversity. To avoid this problem, we propose a novel anti-collapse regularizer which assigns a high penalty to the case where mode collapse is likely to occur. It is derived from the observation that noise vectors that are closer in the latent code space are more likely to be collapsed into the same mode when mapped to the feature space. We directly penalize the ratio of the dissimilarity of the two synthesized feature vectors to the dissimilarity of the two noise vectors generating them. With this constraint, the generator is forced to explore minor distribution modes, thus encouraging diversity of the synthesized features.
+
+With discriminative and diverse features synthesized, we can get highly effective classifiers and accordingly appealing recognition results. In summary, the contributions of this paper are as follows: (1) We propose a novel cWGAN based FSL framework which synthesizes fake features by taking those of the few labeled samples as the conditional context. (2) We propose two novel regularizers that guarantee discriminability and diversity of the synthesized features. (3) The proposed method reaches the state-of-the-art performance on three common benchmark datasets.
+
+# 2. Related Work
+
+Regarding the perspective of addressing FSL, existing algorithms can generally be divided into three categories. The first category of methods aims to enhance the discriminability of the feature representations extracted from images. To this goal, a number of methods resort to deep metric learning and learn deep embedding models that produce discriminative features for any given image [33, 39, 36, 38]. The difference lies in the loss functions used. Other methods following this line focus on improving the deep metric learning results by learning a separate similarity metric network [37], task dependent adaptive metric [30], patch-wise similarity weighted metric [14], neural graph based metric [18, 25], etc.
+
+A more common category of algorithms addresses FSL by enhancing the flexibility of a model such that it can be readily updated using a few labeled samples. These methods utilize meta-learning, also called learning to learn, which learns an algorithm (meta-learner) that outputs a model (the learner) that can be applied to a new task when given some information (meta-data) about that task. Following this line, some approaches aim to optimize a meta-learned classification model such that it can be easily fine-tuned using a few labeled data [32, 9, 24, 28, 29]. Other approaches adopt neural network generation and train a meta-learning network which can adaptively generate an entire classification neural network, or some of its components, from a few labeled samples of novel classes [31, 12, 22, 21]. The generated neural network is supposed to be more effective at classifying unlabeled samples from the novel classes, as it is generated from the labeled samples and encapsulates discriminative information about these classes.
+
+The last category of methods combat deficiency of the labeled data directly with data augmentation. Some methods try to employ additional samples by some forms of transfer learning from external data [33, 42]. More popular approaches perform data augmentation internally by applying transformations on the labeled images or the corresponding feature representations. Naively distorting the images with common transformation techniques (e.g., adding Gaussian perturbation, color jittering, etc.) is particularly risky as it likely jeopardizes the discriminative content in the images. This is undesirable for FSL as we only have a very limited number of images to be utilized; quality control of the synthesizing results for any single image is crucial as otherwise the classifier could be ruined by the low-quality images. Chen et al. propose a series of methods of performing quality-controlled image distortions by applying perturbation in the semantic feature space [6], shuffling image patches [3] and explicitly learning an image transformation network [4]. Performing data augmentation in the feature space seems more promising as the feature variance directly affects the classifier. Many approaches with this idea have been proposed by hallucinating new samples for novel class based on seen classes [35, 15], composing synthesized representations [5, 44], and using GANs [10, 45].
+
+This paper proposes Adversarial Feature Hallucination Networks (AFHN), a new GAN-based FSL model that augments labeled samples by synthesizing fake features conditioned on those of the labeled ones. AFHN significantly differs from the two existing GAN based models [45, 10] in the following aspects. First, AFHN builds upon the Wasserstein GAN (WGAN) model, which is known for more stable performance, while [45, 10] adopt the conventional GAN framework. Second, neither [45] nor [10] has a classification regularizer. The most similar optimization objective in [10] is the one which optimizes the synthesized features as the outlier class (relative to the real class), while that in [45] is a cycle-consistency objective. We instead regularize the synthesized features to have high correlation with real features from the same classes and low correlation with those from the different classes. Third, after training the generator, we learn a standard Softmax classifier using the synthesized features, while [45, 10] utilize them to enhance existing FSL methods. Last, we further propose the novel anti-collapse regularizer to encourage diversity of the synthesized features, while [45, 10] do not.
+
+AFHN also bears some similarity to an existing feature hallucination based FSL method [41]. However, we adopt the GAN framework, which has a discriminator to regularize the features produced by the generator, while [41] uses a simple generative model. Besides, AFHN synthesizes new features to learn a standard Softmax classifier for new classes, while [41] utilizes them to enhance an existing FSL classifier. Moreover, we aim to hallucinate diverse features with the novel anti-collapse regularizer, while [41] does not have such an objective.
+
+# 3. Algorithm
+
+In this section, we first briefly introduce Wasserstein GAN and then elaborate the details of how we build the proposed AFHN model upon it.
+
+# 3.1. Wasserstein GAN
+
+GAN is a recently proposed generative model that has shown impressive performance on synthesizing realistic images. The generative process in GAN is modeled as a game between two competitive models, the generator and the discriminator. The generator aims to generate from noise fake samples that are as realistic as possible, such that the discriminator cannot tell whether they are real or fake. The discriminator instead tries its best to make the correct judgment. This adversarial game pushes the generator to extensively explore the data distribution and consequently produce more visually appealing samples than conventional generative models. However, it is known that GAN is highly unstable in training. [1] analyzes the convergence properties of the objective function of GAN and proposes the Wasserstein GAN (WGAN), which utilizes the Wasserstein distance in the objective function and is shown to have better theoretical properties than the vanilla GAN. We adopt the improved variant of WGAN [13], which optimizes the following min-max problem,
+
+$$
+\begin{array}{l l} \underset {G} {\min } \underset {D} {\max } & \underset {\tilde {\mathbf {x}} \sim \mathbb {P} _ {g}} {\mathbb {E}} [ D (\tilde {\mathbf {x}}) ] - \underset {\mathbf {x} \sim \mathbb {P} _ {r}} {\mathbb {E}} [ D (\mathbf {x}) ] \\ & + \lambda \underset {\hat {\mathbf {x}} \sim \mathbb {P} _ {\hat {\mathbf {x}}}} {\mathbb {E}} [ (\| \nabla_ {\hat {\mathbf {x}}} D (\hat {\mathbf {x}}) \| _ {2} - 1) ^ {2} ], \end{array} \tag {1}
+$$
+
+where $\mathbb{P}_r$ is the data distribution and $\mathbb{P}_g$ is the model distribution defined by $\tilde{\mathbf{x}}\sim G(\mathbf{z})$ , with $\mathbf{z}\sim p(\mathbf{z})$ randomly sampled from noise distribution $p$ . $\mathbb{P}_{\hat{\mathbf{x}}}$ is defined by sampling uniformly along straight lines between pairs of points sampled from the data distribution $\mathbb{P}_r$ and the generator distribution $\mathbb{P}_g$ , i.e., $\hat{\mathbf{x}} = \alpha \mathbf{x} + (1 - \alpha)\tilde{\mathbf{x}}$ with $\alpha \sim U(0,1)$ . The first two terms approximate the Wasserstein distance and the third term penalizes the gradient norm of $\hat{\mathbf{x}}$ .
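+
+A minimal PyTorch sketch of the gradient-penalty term in Eq. (1) is shown below, assuming the real and fake inputs are feature batches of shape (batch, dim); this is illustrative, not the authors' code.
+
+```python
+# WGAN-GP gradient penalty sketch for feature vectors of shape (batch, dim).
+import torch
+
+def gradient_penalty(D, real, fake, lam=10.0):
+    alpha = torch.rand(real.size(0), 1, device=real.device)        # alpha ~ U(0, 1), per sample
+    x_hat = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
+    d_hat = D(x_hat)
+    grads = torch.autograd.grad(outputs=d_hat, inputs=x_hat,
+                                grad_outputs=torch.ones_like(d_hat),
+                                create_graph=True)[0]
+    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
+    return lam * ((grad_norm - 1) ** 2).mean()
+```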
+
+# 3.2. Adversarial Feature Hallucination Networks
+
+Following the literature, we formally define FSL as follows: Given a distribution of tasks $P(\mathcal{T})$ , a sample task $\mathcal{T} \sim P(\mathcal{T})$ is a tuple $\mathcal{T} = (S_{\mathcal{T}}, Q_{\mathcal{T}})$ where the support set $S_{\mathcal{T}} = \{\{\mathbf{x}_{i,j}\}_{i=1}^{K}, y_j\}_{j=1}^{N}$ contains $K$ labeled samples from each of the $N$ classes. This is usually known as $K$ -shot $N$ -way classification. $Q_{\mathcal{T}} = \{(\mathbf{x}_q, y_q)\}_{q=1}^Q$ is the query set where the samples come from the same $N$ classes as the support set $S_{\mathcal{T}}$ . The learning objective is to minimize the classification prediction risk of $Q_{\mathcal{T}}$ , according to $S_{\mathcal{T}}$ .
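+
+For concreteness, a minimal sketch of sampling one K-shot N-way episode is shown below; the `data_by_class` layout (class label mapped to a list of samples) is an assumption made for illustration only.
+
+```python
+# Sketch of sampling one K-shot N-way episode (support + query).
+import random
+
+def sample_episode(data_by_class, n_way=5, k_shot=1, n_query=15):
+    classes = random.sample(list(data_by_class), n_way)
+    support, query = [], []
+    for label, c in enumerate(classes):
+        samples = random.sample(data_by_class[c], k_shot + n_query)
+        support += [(x, label) for x in samples[:k_shot]]
+        query += [(x, label) for x in samples[k_shot:]]
+    return support, query
+```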
+
+The proposed AFHN approaches this problem by proposing a general conditional WGAN based FSL framework and two novel regularization terms. Figure 1 illustrates the training pipeline.
+
+FSL framework with conditional WGAN. For a typical FSL task $\mathcal{T} = (S_{\mathcal{T}},Q_{\mathcal{T}})$ , the feature extraction network $F$ produces a representation vector for each image. Specifically for an image from the support set $(\mathbf{x},y)\in S_{\mathcal{T}}$ , $F$ generates
+
+$$
+\mathbf {s} = F (\mathbf {x}). \tag {2}
+$$
+
+When there are multiple samples for class $y$ , i.e., $K > 1$ , we simply average the feature vectors and take the averaged vector as the prototype of class $y$ [36]. Conditioned on $\mathbf{s}$ , we synthesize fake features for the class.
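+
+A tiny PyTorch sketch of this prototype step (illustrative):
+
+```python
+# Average the K support features of each class into one prototype.
+import torch
+
+def class_prototypes(support_features, support_labels, n_way):
+    # support_features: (N*K, d) tensor; support_labels: (N*K,) tensor with values in [0, n_way)
+    return torch.stack([support_features[support_labels == c].mean(dim=0)
+                        for c in range(n_way)])
+```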
+
+Unlike previous GAN models which sample a single random noise variable from some distribution, we sample two noise variables $\mathbf{z}_1$ and $\mathbf{z}_2 \sim N(0,1)$ . The generator $G$ synthesizes fake feature $\tilde{\mathbf{s}}_1(\tilde{\mathbf{s}}_2)$ taking as input $\mathbf{z}_1(\mathbf{z}_2)$ and the class prototype $\mathbf{s}$ ,
+
+$$
+\tilde {\mathbf {s}} _ {i} = G (\mathbf {s}, \mathbf {z} _ {i}), i = 1, 2. \tag {3}
+$$
+
+The generator $G$ aims to synthesize $\tilde{\mathbf{s}}_i$ to be as similar as possible to $\mathbf{s}$ . The discriminator $D$ , taking $\mathbf{z}_i$ and $\mathbf{s}$ as input, tries to discern $\tilde{\mathbf{s}}_i$ as fake and $\mathbf{s}$ as real. Within the WGAN framework, the adversarial training objective is as follows,
+
+$$
+\begin{array}{l l} L _ {GAN _ {i}} = & \underset {(\mathbf {x}, y) \sim S _ {\mathcal {T}}} {\mathbb {E}} [ D (\tilde {\mathbf {s}} _ {i}, \mathbf {z} _ {i}) ] - \underset {(\mathbf {x}, y) \sim S _ {\mathcal {T}}} {\mathbb {E}} [ D (\mathbf {s}, \mathbf {z} _ {i}) ] \\ & + \lambda \underset {(\mathbf {x}, y) \sim S _ {\mathcal {T}}} {\mathbb {E}} \left[ \left(\| \nabla_ {\hat {\mathbf {s}} _ {i}} D (\hat {\mathbf {s}} _ {i}, \mathbf {z} _ {i}) \| _ {2} - 1\right) ^ {2} \right], \quad i = 1, 2, \end{array} \tag {4}
+$$
+
+
+Figure 1. Framework of the proposed AFHN. AFHN takes as input a support set and a query set, where images in the query set belong to the sampled classes in the support set. Each image in the support set is fed to the feature extraction network $F$, resulting in the feature embedding $\mathbf{s}$. With $\mathbf{s}$, the feature generator $G$ synthesizes two fake features $\tilde{\mathbf{s}}_1$ and $\tilde{\mathbf{s}}_2$, by combining $\mathbf{s}$ with two randomly sampled variables $\mathbf{z}_1$ and $\mathbf{z}_2$. The discriminator $D$ discriminates the real feature $\mathbf{s}$ from the fake features $\tilde{\mathbf{s}}_1$ and $\tilde{\mathbf{s}}_2$, resulting in the GAN loss $L_{GAN}$. By analyzing the relationship between $(\mathbf{z}_1, \mathbf{z}_2)$ and $(\tilde{\mathbf{s}}_1, \tilde{\mathbf{s}}_2)$, we get the anti-collapse loss $L_{ar}$. The proposed few-shot classifier classifies the features of the query images based on the fake features $\tilde{\mathbf{s}}_1$ and $\tilde{\mathbf{s}}_2$. This results in the classification loss $L_{cr}$.
+
+Simply training the model with the above GAN loss does not guarantee the generated features are well suited for learning a discriminative classifier because it neglects the inter-class competing information among different classes. Moreover, since the conditioned feature vectors are of high dimension and structured, it is likely that the generator will neglect the noise vectors and all synthesized features collapse to a single or few points in the feature space, i.e., the so-called mode collapse problem. To avoid these problems, we append the objective function with a classification regularization term and an anti-collapse regularization term, aiming to encourage both diversity and discriminability of the synthesized features.
+
+Classification regularizer. As our training objective is to correctly classify samples in the query set $Q_{\mathcal{T}}$ given the support set $S_{\mathcal{T}}$ , we encourage discriminability of the synthesized features by requiring them to serve the classification task as well as the real features. Inspired by [36], we define a non-parametric FSL classifier which calculates the probability that a query image $(\mathbf{x}_q, y_q) \in Q_{\mathcal{T}}$ belongs to the same class as the synthesized feature $\tilde{\mathbf{s}}_i$ as
+
+$$
+P (y _ {q} = y | \mathbf {x} _ {q}) = \frac {\exp (\cos (\tilde {\mathbf {s}} _ {i} ^ {y} , \mathbf {q}))}{\sum_ {j = 1} ^ {N} \exp (\cos (\tilde {\mathbf {s}} _ {i} ^ {j} , \mathbf {q}))}, \tag {5}
+$$
+
+where $\mathbf{q} = F(\mathbf{x}_q)$ , $\tilde{\mathbf{s}}_i^j$ is the synthesized feature for the $j$ -th class, and $\cos (\mathbf{a},\mathbf{b})$ is the cosine similarity of two vectors. The adoption of cosine similarity, rather than Euclidean distance as in [36], is inspired by a recent FSL algorithm [12] which shows that cosine similarity can bound and reduce the variance of the features, resulting in models with better generalization.
+
+With the proposed FSL classifier, the classification regularizer in a typical FSL task is defined as follows:
+
+$$
+L _ {c r _ {i}} = \underset {(\mathbf {x} _ {q}, y _ {q}) \sim Q _ {\mathcal {T}}} {\mathbb {E}} \left[ - \frac {1}{N} \sum_ {y = 1} ^ {N} \mathbf {1} [ y = y _ {q} ] \log P (y _ {q} = y \mid \mathbf {x} _ {q}) \right], \tag {6}
+$$
+
+for $i = 1,2$ . We can see that this regularizer explicitly encourages the synthesized features to have high correlation with features from the same class (the conditional context), while having low correlation with features from different classes. To achieve this, the synthesized features must encapsulate discriminative information about the conditioned class, and thus secure discriminability.
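+
+As a concrete illustration, a minimal PyTorch sketch of this cosine-softmax classifier and the resulting cross-entropy regularizer (Eqs. (5)-(6)) could look as follows; the function and argument names are illustrative, and the averaging over the query batch is assumed.
+
+```python
+import torch.nn.functional as nnF
+
+def classification_regularizer(fake_feats, query_feats, query_labels):
+    """Sketch of Eqs. (5)-(6): cosine-softmax over one set of synthesized class features.
+
+    fake_feats: (N, d) tensor, one synthesized feature per class.
+    query_feats: (M, d) tensor of query embeddings q = F(x_q).
+    query_labels: (M,) tensor of class indices in [0, N).
+    """
+    # Eq. (5): softmax over cosine similarities between each query and the N fake features.
+    cos = nnF.normalize(query_feats, dim=1) @ nnF.normalize(fake_feats, dim=1).t()  # (M, N)
+    log_p = nnF.log_softmax(cos, dim=1)
+    # Eq. (6): negative log-likelihood of the query labels under this classifier.
+    return nnF.nll_loss(log_p, query_labels)
+```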
+
+Anti-collapse regularizer. GAN models are known to suffer from the notorious mode collapse problem, especially conditional GANs where structured and high-dimensional data (e.g., images) are usually used as the conditional contexts. As a consequence, the generator likely ignores the latent code (noise) that accounts for diversity and focuses only on the conditional contexts, which is undesirable. Specific to our case, our goal is to augment the few labeled samples in the feature space; when mode collapse occurs, all synthesized features may collapse to a single or a few points in the feature space, failing to diversify the labeled samples. Observing that noise vectors that are closer in the latent code space are more likely to be collapsed into the same mode when mapped to the feature space, we directly penalize the ratio between the dissimilarity of the two synthesized feature vectors and the dissimilarity of the two noise vectors that generate them.
+
+Remember that we sample two random variables $\mathbf{z}_1$ and $\mathbf{z}_2$ , from which we generate two fake feature vectors $\tilde{\mathbf{s}}_1$ and $\tilde{\mathbf{s}}_2$ . When $\mathbf{z}_1$ and $\mathbf{z}_2$ are closer, $\tilde{\mathbf{s}}_1$ and $\tilde{\mathbf{s}}_2$ are more likely to be collapsed into the same mode. To mitigate this, we
+
+Algorithm 1. Proposed FSL algorithm
+Input: Training set $\mathcal{D}_t = \{\mathcal{X}_t, \mathcal{Y}_t\}$ , parameters $\lambda$ , $\alpha$ , and $\beta$ .
+Output: Feature extractor $F$ , generator $G$ , discriminator $D$ .
+1. Train $F$ as a standard classification task using $\mathcal{D}_t$ .
+while not done do
+// Fix $G$ and update $D$ .
+2. Sample from $\mathcal{D}_t$ a batch of FSL tasks $\mathcal{T}_i^d \sim p(\mathcal{D}_t)$ .
+For each $\mathcal{T}_i^d$ do
+3. Sample a support set $S_{\mathcal{T}} = \{\{\mathbf{x}_{i,j}\}_{i=1}^{K}, y_j\}_{j=1}^{N}$ and query set $Q_{\mathcal{T}} = \{\{\mathbf{x}_{k,j}\}_{k=1}^{Q}, y_j\}_{j=1}^{N}$ .
+4. Compute prototypes of the $N$ classes $\mathcal{P} = \{\mathbf{s}_j\}_{j=1}^{N}$ , where $\mathbf{s}_j = \frac{1}{K} \sum_{i=1}^{K} F(\mathbf{x}_{i,j})$ .
+5. Sample $N$ noise variables $\mathcal{Z}_1 = \{\mathbf{z}_1^j\}_{j=1}^{N}$ and $N$ noise variables $\mathcal{Z}_2 = \{\mathbf{z}_2^j\}_{j=1}^{N}$ .
+6. Generate fake feature sets $\tilde{\mathcal{S}}_1 = \{\tilde{\mathbf{s}}_1^j\}_{j=1}^{N}$ and $\tilde{\mathcal{S}}_2 = \{\tilde{\mathbf{s}}_2^j\}_{j=1}^{N}$ according to Eq. (3).
+7. Update $D$ by maximizing Eq. (8).
+end For
+// Fix $D$ and update $G$ .
+8. Sample from $\mathcal{D}_t$ a batch of FSL tasks $\mathcal{T}_i^g \sim p(\mathcal{D}_t)$ .
+For each $\mathcal{T}_i^g$ do
+9. Execute steps 3-7.
+10. Update $G$ by minimizing Eq. (8).
+end For
+end while
+
+define the anti-collapse regularization term as
+
+$$
+\mathcal {L} _ {a r} = \underset {(\mathbf {x}, y) \sim S _ {\mathcal {T}}} {\mathbb {E}} \left[ \frac {1 - \cos (\tilde {\mathbf {s}} _ {1} , \tilde {\mathbf {s}} _ {2})}{1 - \cos (\mathbf {z} _ {1} , \mathbf {z} _ {2})} \right]. \tag {7}
+$$
+
+We can observe that this term amplifies the dissimilarity of the two fake feature vectors when the latent codes generating them are highly similar. With the cases where mode collapse is more likely to occur assigned a higher penalty, the generator is forced to mine minor modes in the feature space during training. The discriminator will also handle fake features from these minor modes. Thus, it is expected that more diverse features can be synthesized when applying the generator to novel classes.
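+
+A minimal sketch of this term (Eq. (7)), assuming batched inputs and a small constant to guard against division by zero when the two noise vectors are nearly identical (the guard is our addition, not part of the paper's formulation):
+
+```python
+import torch.nn.functional as nnF
+
+def anti_collapse(s_fake1, s_fake2, z1, z2, eps=1e-8):
+    """Sketch of Eq. (7): ratio of feature dissimilarity to noise dissimilarity."""
+    num = 1.0 - nnF.cosine_similarity(s_fake1, s_fake2, dim=1)
+    den = 1.0 - nnF.cosine_similarity(z1, z2, dim=1)
+    return (num / (den + eps)).mean()
+```
+
+Note that in the final objective of Eq. (8) the generator minimizes $\beta / L_{ar}$ , i.e., it is encouraged to make this ratio large.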
+
+With the above two regularization terms, we reach our final training objective as
+
+$$
+\min _ {G} \max _ {D} \sum_ {i = 1} ^ {2} L _ {G A N _ {i}} + \alpha \sum_ {i = 1} ^ {2} L _ {c r _ {i}} + \beta \frac {1}{L _ {a r}}, \tag {8}
+$$
+
+where $\alpha$ and $\beta$ are two hyper-parameters. Algorithm 1 outlines the main training steps of the proposed method.
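+
+The following is a compact PyTorch-style sketch of one such alternating update, using the loss sketches given above as stand-ins; the function signature, the number of critic steps, and the optimizer handling are illustrative and not the authors' implementation.
+
+```python
+import torch
+
+def afhn_episode_update(G, D, protos, query_feats, query_labels, opt_D, opt_G,
+                        gan_loss, cls_reg, anti_collapse, alpha=1.0, beta=1.0, d_steps=5):
+    """One alternating update in the spirit of Algorithm 1 and Eq. (8) (illustrative sketch)."""
+    N, d = protos.shape  # protos: (N, d) class prototypes s from the support set
+
+    # Several critic updates per generator update.
+    for _ in range(d_steps):
+        z1 = torch.randn(N, d, device=protos.device)
+        z2 = torch.randn(N, d, device=protos.device)
+        fake1, fake2 = G(protos, z1), G(protos, z2)
+        d_loss = gan_loss(D, protos, fake1.detach(), z1) + gan_loss(D, protos, fake2.detach(), z2)
+        opt_D.zero_grad()
+        d_loss.backward()
+        opt_D.step()
+
+    # Generator update: adversarial term + classification regularizer + anti-collapse term.
+    z1 = torch.randn(N, d, device=protos.device)
+    z2 = torch.randn(N, d, device=protos.device)
+    fake1, fake2 = G(protos, z1), G(protos, z2)
+    g_adv = -(D(fake1, z1).mean() + D(fake2, z2).mean())
+    g_cls = cls_reg(fake1, query_feats, query_labels) + cls_reg(fake2, query_feats, query_labels)
+    g_loss = g_adv + alpha * g_cls + beta / anti_collapse(fake1, fake2, z1, z2)
+    opt_G.zero_grad()
+    g_loss.backward()
+    opt_G.step()
+```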
+
+# 3.3. Classification with Synthesized Samples
+
+In the test stage, given an FSL task $\mathcal{T}' = (S_{\mathcal{T}}', Q_{\mathcal{T}}')$ randomly sampled from the test set, whose classes have no overlap with those in the training set, we first augment the labeled support set $S_{\mathcal{T}}'$ with the learned generator $G$ . Then, we train a classifier with the augmented support set. The classifier is used to classify samples from the query set $Q_{\mathcal{T}}^{\prime}$ . Specifically, suppose that after data augmentation we get an enlarged support set $\hat{S}_{\mathcal{T}}^{\prime} = \{(\mathbf{s}_1,y_1),(\mathbf{s}_2,y_2),\dots ,(\mathbf{s}_{N\times K^{\prime}},y_{N\times K^{\prime}})\}$ , where $K^{\prime}$ is the number of samples synthesized for each class. With $\hat{S}_{\mathcal{T}}^{\prime}$ we train a standard Softmax classifier $\mathbf{f}_c$ as
+
+$$
+\min _ {\theta} \mathbb {E} _ {(\mathbf {s}, y) \sim \hat {S} _ {\mathcal {T}} ^ {\prime}} \left[ - \log P (y | \mathbf {s}; \theta) \right], \tag {9}
+$$
+
+where $\theta$ is the parameter of $\mathbf{f}_c$ . With $\mathbf{f}_c$ , we classify samples from $Q_{\mathcal{T}}'$ .
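+
+A minimal sketch of this test-time procedure, assuming 300 synthesized features per class as specified later in the implementation details; the classifier hyper-parameters (epochs, learning rate) and helper names are illustrative.
+
+```python
+import torch
+import torch.nn as nn
+
+def classify_with_synthesized(G, support_protos, query_feats, n_way=5, k_syn=300,
+                              epochs=100, lr=1e-3):
+    """Sketch of Sec. 3.3: augment the support set with G, fit Eq. (9), predict queries."""
+    d = support_protos.size(1)
+    feats, labels = [], []
+    with torch.no_grad():
+        for c in range(n_way):
+            z = torch.randn(k_syn, d)
+            s = support_protos[c].unsqueeze(0).expand(k_syn, d)
+            feats.append(G(s, z))
+            labels.append(torch.full((k_syn,), c, dtype=torch.long))
+    feats, labels = torch.cat(feats), torch.cat(labels)
+
+    clf = nn.Linear(d, n_way)                       # standard Softmax classifier f_c
+    opt = torch.optim.Adam(clf.parameters(), lr=lr)
+    ce = nn.CrossEntropyLoss()                      # negative log-likelihood of Eq. (9)
+    for _ in range(epochs):
+        opt.zero_grad()
+        ce(clf(feats), labels).backward()
+        opt.step()
+    return clf(query_feats).argmax(dim=1)
+```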
+
+# 4. Experiments
+
+We evaluate AFHN on three common benchmark datasets, namely, Mini-ImageNet [39], CUB [40] and CIFAR100 [19]. The Mini-ImageNet dataset is a subset of ImageNet. It has 60,000 images from 100 classes, with 600 images for each class. We follow previous methods and use the splits in [32] for evaluation, i.e., 64, 16, and 20 classes as the training, validation, and testing sets, respectively. The CUB dataset is a fine-grained dataset with 11,788 images in total from 200 bird categories. We use the split in [17], with 100, 50, and 50 classes for training, validation, and testing, respectively. The CIFAR100 dataset contains 60,000 images from 100 categories. We use the same data split as in [47]. In particular, 64, 16, and 20 classes are used for training, validation, and testing, respectively.
+
+Following previous methods, we evaluate 5-way 1-shot and 5-way 5-shot classification tasks where each task instance involves classifying test images from 5 sampled classes with 1 or 5 randomly sampled images for each class as the support set. In order to reduce variance, we repeat the evaluation task 600 times and report the mean of the accuracy with a $95\%$ confidence interval.
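+
+For completeness, the reported interval can be computed with a normal approximation over the per-task accuracies; this is a small illustrative helper, not taken from the authors' code.
+
+```python
+import numpy as np
+
+def mean_and_ci95(task_accuracies):
+    """Mean accuracy and 95% confidence interval over the sampled test tasks."""
+    acc = np.asarray(task_accuracies, dtype=np.float64)
+    half_width = 1.96 * acc.std(ddof=1) / np.sqrt(len(acc))
+    return acc.mean(), half_width
+```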
+
+# 4.1. Implementation Details
+
+Following the previous data augmentation based methods [35, 6, 4], we use ResNet18 [16] as our feature extraction network $F$ . We implement the generator $G$ as a two-layer MLP, with LeakyReLU activation for the first layer and ReLU activation for the second one. The dimension of the hidden layer is 1024. The discriminator is also a two-layer MLP, with LeakyReLU as the activation function for the first layer and Sigmoid for the second layer. The dimension of the hidden layer is also 1024. The noise vectors $\mathbf{z}_1$ and $\mathbf{z}_2$ are drawn from a unit Gaussian with the same dimensionality as the feature embeddings.
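+
+A sketch of the two MLPs described above is given below; the concatenation of the prototype (or feature) with the noise vector and the LeakyReLU slope are our assumptions, since the paper only specifies the layer count, hidden width, and activations.
+
+```python
+import torch
+import torch.nn as nn
+
+class Generator(nn.Module):
+    """Two-layer MLP G(s, z): LeakyReLU then ReLU, hidden width 1024 (sketch)."""
+    def __init__(self, feat_dim, hidden=1024):
+        super().__init__()
+        self.net = nn.Sequential(
+            nn.Linear(feat_dim * 2, hidden), nn.LeakyReLU(0.2),
+            nn.Linear(hidden, feat_dim), nn.ReLU())
+
+    def forward(self, s, z):
+        return self.net(torch.cat([s, z], dim=1))
+
+class Discriminator(nn.Module):
+    """Two-layer MLP D(feature, z): LeakyReLU then Sigmoid, hidden width 1024 (sketch)."""
+    def __init__(self, feat_dim, hidden=1024):
+        super().__init__()
+        self.net = nn.Sequential(
+            nn.Linear(feat_dim * 2, hidden), nn.LeakyReLU(0.2),
+            nn.Linear(hidden, 1), nn.Sigmoid())
+
+    def forward(self, feat, z):
+        return self.net(torch.cat([feat, z], dim=1))
+```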
+
+Following the data augmentation based FSL methods [35, 6], we adopt a two-step training procedure. In the first step, we train only the feature extraction network $F$ , as a multi-class classification task using only the training split. We use the Adam optimizer with an initial learning rate of $10^{-3}$ , which decays by half every 10 epochs. We train $F$ for 100 epochs with a batch size of 128. In the second training stage, we train the generator and discriminator alternately, using features extracted by $F$ , and update $G$ after every 5 updates of $D$ . We also use the Adam optimizer, with an initial learning rate of $10^{-5}$ that decays by half every 20 epochs, for both $F$ and $G$ . We train the whole network for 100 epochs, with 600 randomly sampled FSL tasks in each epoch. For the hyper-parameters, we set $\lambda = 10$ as suggested by [13], and $\alpha = \beta = 1$ for all three datasets. During the test stage, we synthesize 300 fake features for each class.
+
+| cWGAN | ✘ | ✘ | ✓ | ✓ | ✓ |
+| CR | ✘ | ✓ | ✘ | ✓ | ✓ |
+| AR | ✘ | ✘ | ✘ | ✘ | ✓ |
+| Accuracy (%) | 52.73 | 55.65 | 57.58 | 60.56 | 62.38 |
+
+Table 1. Ablation study on the Mini-ImageNet dataset for the 5-way 1-shot setting. cWGAN, CR, and AR represent the conditional WGAN framework, the classification regularizer, and the anti-collapse regularizer, respectively. The baseline result (52.73) is obtained by applying the SVM classifier directly on ResNet18 features without data augmentation. The result (55.65) with only CR added is obtained from the synthesized features produced by the generator without the discriminator and AR during training.
+
+The code is developed based on PyTorch.
+
+# 4.2. Ablation Study
+
+The proposed AFHN consists of the novel conditional WGAN (cWGAN) based feature synthesis framework and the two regularizers that encourage diversity and discriminability of the synthesized features, i.e., the Classification Regularizer (CR) and the Anti-collapse Regularizer (AR). To evaluate the effectiveness and impact of these components, we conduct an ablation study on the Mini-ImageNet dataset for the 5-way 1-shot setting. The results are shown in Table 1.
+
+CR. This regularizer constrains the synthesized features to have desirable classification properties such that a discriminative classifier can be trained from them. We can see that when it is used as the only regularization for the generator, it raises the baseline result from 52.73 to 55.65. On the other hand, when it is used along with cWGAN (where the discriminator regularizes the generated features, resulting in the GAN loss), it helps further boost the performance from 57.58 to 60.56. Therefore, in both cases (with and without cWGAN), CR helps enhance the discriminability of the synthesized features and leads to a performance boost.
+
+cWGAN. Compared with the baseline (without data augmentation), cWGAN helps raise the accuracy from 52.73 to 57.58. This is because the synthesized features enhance the intra-class variance, which makes the classification decision boundaries much sharper. Moreover, with CR as the regularizer, our cWGAN based generative model boosts the performance of the naive generative model from 55.65 to 60.56. This further substantiates the effectiveness of the proposed cWGAN framework. The performance gain is due to the adversarial game between the generator and the discriminator, which enhances the generator's capability of modeling the complex data distribution of the training data. The enhanced generator is therefore able to synthesize features of both higher diversity and discriminability.
+
+As mentioned in the related work, one of the major differences of the proposed AFHN from the other feature hallucination based FSL method [41] is that AFHN is an adversarial generative model while [41] uses a naive generative model. This study thus evidences the advantage of AFHN over [41].
+
+AR. AR aims to encourage the diversity of the synthesized features by explicitly penalizing the cases where mode collapse is more likely to occur. Table 1 shows that it brings a further gain of about $2\%$ , thus proving its effectiveness.
+
+# 4.3. Comparative Results
+
+Mini-ImageNet. Mini-ImageNet is the most extensively evaluated dataset. From Table 2 we can observe that AFHN attains the new state-of-the-art for both the 1-shot and 5-shot settings. Compared with the other four data augmentation based methods, AFHN reaches significant improvements: it beats $\Delta$ -encoder [35] by more than $8\%$ for the 5-shot setting and Dual TriNet [6] by more than $3\%$ for the 1-shot setting. Compared with MetaGAN [45], which is also based on GAN, AFHN achieves about $10\%$ improvements for both the 1-shot and 5-shot settings. Besides the significant advantages over the peer data augmentation based methods, AFHN also exhibits remarkable advantages over the other two categories of methods. It beats the best metric learning based method DCEM [8] by about $3.5\%$ for the 1-shot setting. It also performs better than the state-of-the-art meta-learning based algorithms. Compared with the baseline method, "ResNet18+SVM", AFHN reaches about $10\%$ and $5\%$ improvements for the 1-shot and 5-shot settings, respectively. This substantiates the effectiveness of our proposed data augmentation techniques.
+
+CUB. This is a fine-grained bird dataset widely used for fine-grained classification. Only recently has it been employed for few-shot classification evaluation, so relatively few results are reported on it. From Table 3 we can see that AFHN reaches results comparable with the other two data augmentation based methods, Dual TriNet and $\Delta$ -encoder. It beats the best metric learning based method SAML [14] by $2.4\%$ for the 5-shot setting, and performs significantly better than the meta-learning based methods. Compared with the baseline, we only have a moderate improvement in the 1-shot setting and a marginal boost for the 5-shot setting. We speculate the reason is that this dataset is relatively small, with fewer than 60 images per class
+
+| | Method | Backbone | Reference | 1-shot | 5-shot |
+| | ResNet18 + SVM (baseline) | ResNet18 | | 52.73±1.44 | 73.31±0.81 |
+| MetricL | Matching Net [39] | Conv-64F | NeurIPS'16 | 43.56±0.84 | 55.31±0.73 |
+| | PROTO Net [36] | Conv-64F | NeurIPS'17 | 49.42±0.78 | 68.20±0.66 |
+| | MM-Net [2] | Conv-64F | CVPR'18 | 53.37±0.48 | 66.97±0.35 |
+| | GNN [11] | Conv-256F | Arxiv'17 | 50.33±0.36 | 66.41±0.63 |
+| | RELATION NET [37] | Conv-64F | CVPR'18 | 50.44±0.82 | 65.32±0.70 |
+| | DN4 [23] | Conv-64F | CVPR'19 | 51.24±0.74 | 71.02±0.64 |
+| | TPN [25] | ResNet8 | ICLR'19 | 55.51±0.86 | 69.86±0.65 |
+| | PARN [43] | Conv-64F | ICCV'19 | 55.22±0.84 | 71.55±0.66 |
+| | SAML [14] | Conv-64F | ICCV'19 | 57.69±0.20 | 73.03±0.16 |
+| | DCEM [8] | ResNet18 | ICCV'19 | 58.71±0.62 | 77.28±0.46 |
+| MetaL | MAML [9] | Conv-32F | ICML'17 | 48.70±1.84 | 63.11±0.92 |
+| | META-LSTM [32] | Conv-32F | ICLR'17 | 43.44±0.77 | 60.60±0.71 |
+| | SNAIL [27] | ResNet-256F | ICLR'18 | 55.71±0.99 | 68.88±0.92 |
+| | MACO [17] | Conv-32F | Arxiv'18 | 41.09±0.32 | 58.32±0.21 |
+| | DFSVL [12] | Conv-64F | CVPR'18 | 55.95±0.89 | 73.00±0.68 |
+| | META-SGD [24] | Conv-32F | Arxiv'17 | 50.47±1.87 | 64.03±0.94 |
+| | PPA [31] | WRN-28-10 | CVPR'18 | 59.60±0.41 | 73.74±0.19 |
+| | UFDA [21] | ResNet18 | CIKM'19 | 60.51 | 77.08 |
+| | LEO [34] | WRN-28-10 | ICLR'19 | 61.76±0.08 | 77.59±0.12 |
+| DataAug | MetaGAN [45] | Conv-32F | NeurIPS'18 | 52.71±0.64 | 68.63±0.67 |
+| | Dual TriNet [6] | ResNet18 | TIP'19 | 58.80±1.37 | 76.71±0.69 |
+| | Δ-encoder [35] | ResNet18 | NeurIPS'18 | 59.90 | 69.70 |
+| | IDeMe-Net [4] | ResNet18 | CVPR'19 | 59.14±0.86 | 74.63±0.74 |
+| | AFHN (Proposed) | ResNet18 | | 62.38±0.72 | 78.16±0.56 |
+
+Table 2. Few-shot classification accuracy on Mini-Imagenet. "MetricL", "MetaL" and "DataAug" represent metric learning based category, meta-learning based category and data augmentation based category, respectively. The $\pm$ indicates $95\%$ confidence intervals over tasks. The best results are in bold.
+
+| | Method | Backbone | Reference | CUB 1-shot | CUB 5-shot | CIFAR100 1-shot | CIFAR100 5-shot |
+| | ResNet18 + SVM (baseline) | ResNet18 | | 66.54±0.53 | 82.38±0.43 | 59.65±0.78 | 76.75±0.73 |
+| MetricL | Matching Net [39] | Conv-64F | NeurIPS'16 | 49.34 | 59.31 | 50.53±0.87 | 60.30±0.82 |
+| | PROTO Net [36] | Conv-64F | NeurIPS'17 | 45.27 | 56.35 | - | - |
+| | DN4 [23] | Conv-64F | CVPR'19 | 53.15±0.84 | 81.90±0.60 | - | - |
+| | SAML [14] | Conv-64F | ICCV'19 | 69.33±0.22 | 81.56±0.15 | - | - |
+| MetaL | MAML [9] | Conv-32F | ICML'17 | 38.43 | 59.15 | 49.28±0.90 | 58.30±0.80 |
+| | META-LSTM [32] | Conv-32F | ICLR'17 | 40.43 | 49.65 | - | - |
+| | MACO [17] | Conv-32F | Arxiv'18 | 60.76 | 74.96 | - | - |
+| | META-SGD [24] | Conv-32F | Arxiv'17 | 66.90 | 77.10 | 61.60 | 77.90 |
+| DataAug | Dual TriNet [6] | ResNet18 | TIP'19 | 69.61 | 84.10 | 63.41±0.64 | 78.43±0.64 |
+| | Δ-encoder [35] | ResNet18 | NeurIPS'18 | 69.80±0.46 | 82.60±0.35 | 66.70 | 79.80 |
+| | AFHN (Proposed) | ResNet18 | | 70.53±1.01 | 83.95±0.63 | 68.32±0.93 | 81.45±0.87 |
+
+Table 3. Few-shot classification accuracy on CUB and CIFAR100. Please refer to Table 2 for details.
+
+on average; a large number of classes only have about 30 images. Due to the small scale of this dataset, the intra-class variance is less significant than that of the Mini-ImageNet dataset, such that 5 labeled samples are sufficient to capture most of the intra-class variance. Performing data augmentation is thus less crucial than for the other datasets.
+
+CIFAR100. This dataset has the same structure as the Mini-ImageNet dataset. Table 3 shows that AFHN performs the best among all the existing methods, and the advantages are sometimes significant. AFHN beats Dual TriNet by $5\%$ and $3\%$ for 1-shot and 5-shot, respectively. Compared with the best meta-learning based method, we get $7\%$ and $4\%$ improvements for 1-shot and 5-shot, respectively. Compared with the baseline method, AFHN also reaches remarkable gains of about $10\%$ and $5\%$ for 1-shot and 5-shot, respectively. This large improvement convincingly substantiates the effectiveness of our GAN based data augmentation method for solving the FSL problem.
+
+
+Figure 2. t-SNE [26] visualization of synthesized feature embeddings. The real features are indicated by $\star$ . Different colors represent different classes.
+
+
+
+
+
+
+Figure 3. Impact of the number of synthesized samples for each class on the Mini-ImageNet dataset.
+
+In summary, on two of the three datasets we reach significant improvements over existing state-of-the-art methods, while being comparable on the remaining one. For all the datasets, our method brings a significant boost over the baseline method without data augmentation. These experiments substantiate the effectiveness and superiority of the proposed method.
+
+# 4.4. Further Analysis
+
+Impact of the number of synthesized features. Figure 3 analyzes, on Mini-ImageNet, the recognition accuracy with respect to the number of features synthesized for each class during test. We can observe that the classification accuracy keeps improving as more features are synthesized at the beginning, and then remains stable as even more samples are synthesized. This is reasonable because the class variance encapsulated by the few labeled samples has an upper bound; data augmentation based on these labeled samples can enlarge the variance to some extent, but it is still bounded by the few labeled samples themselves. Once this bound is reached, the performance reasonably turns stable.
+
+Visualization of synthesized features. We showed quantitatively in the ablation study that, owing to the CR and AR regularizers, we can generate diverse and discriminative features which bring significant performance gains. Here we further study the effect of the two regularizers by showing the t-SNE visualization of the synthesized features. As shown in Figure 2, the synthesized features of different classes mix up together when using only cWGAN for augmentation. As analyzed before, cWGAN does not guarantee synthesizing semantically meaningful features. The problem is substantially resolved when we train cWGAN with CR: the synthesized features exhibit a clear clustering structure, which helps train a discriminative classifier. Furthermore, with AR added, the synthesized features still exhibit a favorable clustering structure. But taking a closer look at the visualization, we can find that the features synthesized with AR added are more diverse than those without it: the clusters are less compact, stretched to larger regions, and even contain some noise. This shows that AR indeed helps diversify the synthesized features.
+
+# 5. Conclusions
+
+We introduce the Adversarial Feature Hallucination Networks (AFHN), a new data augmentation based few-shot learning approach. AFHN consists of a novel conditional Wasserstein GAN (cWGAN) based feature synthesis framework, the classification regularizer (CR), and the anti-collapse regularizer (AR). Based on cWGAN, our framework synthesizes fake features for new classes by using the features of the few labeled samples as the conditional context. CR secures feature discriminability by requiring the synthesized features to be of high similarity with features of the samples from the same classes, while of low similarity with those from different classes. AR aims to enhance the diversity of the synthesized features by directly penalizing the cases where the mode collapse problem is likely to occur. The ablation study shows the effectiveness of the cWGAN based feature synthesis framework, as well as of the two regularizers. Comparative results verify the superiority of AFHN over the existing data augmentation based FSL approaches as well as other state-of-the-art ones.
+
+Acknowledgement: This research is supported by the U.S. Army Research Office Award W911NF-17-1-0367.
+
+# References
+
+[1] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In ICML, 2017.
+[2] Qi Cai, Yingwei Pan, Ting Yao, Chenggang Yan, and Tao Mei. Memory matching networks for one-shot image recognition. In CVPR, 2018.
+[3] Zitian Chen, Yanwei Fu, Kaiyu Chen, and Yu-Gang Jiang. Image block augmentation for one-shot learning. In AAAI, 2019.
+[4] Zitian Chen, Yanwei Fu, Yu-Xiong Wang, Lin Ma, Wei Liu, and Martial Hebert. Image deformation meta-networks for one-shot learning. In CVPR, 2019.
+[5] Zitian Chen, Yanwei Fu, Yinda Zhang, Yu-Gang Jiang, Xiangyang Xue, and Leonid Sigal. Semantic feature augmentation in few-shot learning. arXiv preprint arXiv:1804.05298, 2018.
+[6] Zitian Chen, Yanwei Fu, Yinda Zhang, Yu-Gang Jiang, Xiangyang Xue, and Leonid Sigal. Multi-level semantic feature augmentation for one-shot learning. IEEE Transactions on Image Processing, 2019.
+[7] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. 2009.
+[8] Nikita Dvornik, Cordelia Schmid, and Julien Mairal. Diversity with cooperation: Ensemble methods for few-shot classification. In ICCV, 2019.
+[9] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In ICML, 2017.
+[10] Hang Gao, Zheng Shou, Alireza Zareian, Hanwang Zhang, and Shih-Fu Chang. Low-shot learning via covariance-preserving adversarial augmentation networks. In NeurIPS, 2018.
+[11] Victor Garcia and Joan Bruna. Few-shot learning with graph neural networks. arXiv preprint arXiv:1711.04043, 2017.
+[12] Spyros Gidaris and Nikos Komodakis. Dynamic few-shot visual learning without forgetting. In CVPR, 2018.
+[13] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. In NeurIPS, 2017.
+[14] Fusheng Hao, Fengxiang He, Jun Cheng, Lei Wang, Jianzhong Cao, and Dacheng Tao. Collect and select: Semantic alignment metric learning for few-shot learning. In ICCV, 2019.
+[15] Bharath Hariharan and Ross Girshick. Low-shot visual recognition by shrinking and hallucinating features. In CVPR, 2017.
+[16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
+[17] Nathan Hilliard, Lawrence Phillips, Scott Howland, Artem Yankov, Courtney D Corley, and Nathan O Hodas. Few-shot learning with metric-agnostic conditional embeddings. arXiv preprint arXiv:1802.04376, 2018.
+[18] Jongmin Kim, Taesup Kim, Sungwoong Kim, and Chang D Yoo. Edge-labeling graph neural network for few-shot learning. In CVPR, 2019.
+
+[19] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Cite-seer, 2009.
+[20] Kai Li, Zhengming Ding, Kunpeng Li, Yulun Zhang, and Yun Fu. Support neighbor loss for person re-identification. In ACM MM, 2018.
+[21] Kai Li, Martin Renqiang Min, Bing Bai, Yun Fu, and Hans Peter Graf. On novel object recognition: A unified framework for discriminability and adaptability. In CIKM, 2019.
+[22] Kai Li, Martin Renqiang Min, and Yun Fu. Rethinking zero-shot learning: A conditional visual classification perspective. In ICCV, 2019.
+[23] Wenbin Li, Lei Wang, Jinglin Xu, Jing Huo, Yang Gao, and Jiebo Luo. Revisiting local descriptor based image-to-class measure for few-shot learning. In CVPR, 2019.
+[24] Zhenguo Li, Fengwei Zhou, Fei Chen, and Hang Li. Metasgd: Learning to learn quickly for few shot learning. arXiv preprint arXiv:1707.09835, 2017.
+[25] Yanbin Liu, Juho Lee, Minseop Park, Saehoon Kim, Eunho Yang, Sung Ju Hwang, and Yi Yang. Learning to propagate labels: Transductive propagation network for few-shot learning. In ICLR, 2019.
+[26] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579-2605, 2008.
+[27] Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. A simple neural attentive meta-learner. arXiv preprint arXiv:1707.03141, 2017.
+[28] Tsendsuren Munkhdalai and Hong Yu. Meta networks. In ICML, 2017.
+[29] Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms. CoRR, abs/1803.02999, 2018.
+[30] Boris Oreshkin, Pau Rodríguez López, and Alexandre Lacoste. Tadam: Task dependent adaptive metric for improved few-shot learning. In NeurIPS, 2018.
+[31] Siyuan Qiao, Chenxi Liu, Wei Shen, and Alan L Yuille. Few-shot image recognition by predicting parameters from activations. In CVPR, 2018.
+[32] Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017.
+[33] Mengye Ren, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Joshua B Tenenbaum, Hugo Larochelle, and Richard S Zemel. Meta-learning for semi-supervised few-shot classification. arXiv preprint arXiv:1803.00676, 2018.
+[34] Andrei A Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. Meta-learning with latent embedding optimization. In ICLR, 2019.
+[35] Eli Schwartz, Leonid Karlinsky, Joseph Shtok, Sivan Harary, Mattias Marder, Rogerio Feris, Abhishek Kumar, Raja Giryes, and Alex M Bronstein. Delta-encoder: an effective sample synthesis method for few-shot object recognition. arXiv preprint arXiv:1806.04734, 2018.
+[36] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In NeurIPS, 2017.
+
+[37] Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. Learning to compare: Relation network for few-shot learning. In CVPR, 2018.
+[38] Eleni Triantafillou, Richard Zemel, and Raquel Urtasun. Few-shot learning through an information retrieval lens. In NeurIPS, 2017.
+[39] Oriol Vinyals, Charles Blundell, Tim Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In NeurIPS, 2016.
+[40] Catherine Wah, Steve Branson, Pietro Perona, and Serge Belongie. Multiclass recognition and part localization with humans in the loop. In ICCV, 2011.
+[41] Yu-Xiong Wang, Ross Girshick, Martial Hebert, and Bharath Hariharan. Low-shot learning from imaginary data. In CVPR, 2018.
+[42] Yu-Xiong Wang and Martial Hebert. Learning to learn: Model regression networks for easy small sample learning. In ECCV, 2016.
+[43] Ziyang Wu, Yuwei Li, Lihua Guo, and Kui Jia. Parn: Position-aware relation networks for few-shot learning. In ICCV, 2019.
+[44] Aron Yu and Kristen Grauman. Low-shot learning via covariance-preserving adversarial augmentation networks. In ICCV, 2017.
+[45] Ruixiang Zhang, Tong Che, Zoubin Ghahramani, Yoshua Bengio, and Yangqiu Song. Metagan: An adversarial approach to few-shot learning. In NeurIPS, 2018.
+[46] Handong Zhao, Zhengming Ding, and Yun Fu. Multi-view clustering via deep matrix factorization. In AAAI, 2017.
+[47] Fengwei Zhou, Bin Wu, and Zhenguo Li. Deep meta-learning: Learning to learn in the concept space. arXiv preprint arXiv:1802.03596, 2018.
\ No newline at end of file
diff --git a/adversarialfeaturehallucinationnetworksforfewshotlearning/images.zip b/adversarialfeaturehallucinationnetworksforfewshotlearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..d719b8a1c54ea61e7f3799d5735fe92913851efb
--- /dev/null
+++ b/adversarialfeaturehallucinationnetworksforfewshotlearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0a8ae11c5730b57b65c9941538f76c00b771a070cad5be69f1158b35714b5d0e
+size 475174
diff --git a/adversarialfeaturehallucinationnetworksforfewshotlearning/layout.json b/adversarialfeaturehallucinationnetworksforfewshotlearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..9f78902cbb1614af00ae2e18eb476660026a3fa8
--- /dev/null
+++ b/adversarialfeaturehallucinationnetworksforfewshotlearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:65c4e2896ae6580ca1a5d4b908a49cd5231eb336e4c0faa0635de34c20c0faa1
+size 420122
diff --git a/adversariallatentautoencoders/5c758fd2-d874-4d5f-83a5-fe03434f0aba_content_list.json b/adversariallatentautoencoders/5c758fd2-d874-4d5f-83a5-fe03434f0aba_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..d7724a65894aadc386d449b2d618595545814462
--- /dev/null
+++ b/adversariallatentautoencoders/5c758fd2-d874-4d5f-83a5-fe03434f0aba_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f82073f031a112e6020242ed2ef186893bdd0fb9abaa9d30eb375764cf36e470
+size 75077
diff --git a/adversariallatentautoencoders/5c758fd2-d874-4d5f-83a5-fe03434f0aba_model.json b/adversariallatentautoencoders/5c758fd2-d874-4d5f-83a5-fe03434f0aba_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..5e4c4c954c7490fe19031418707e61c0ad33d75c
--- /dev/null
+++ b/adversariallatentautoencoders/5c758fd2-d874-4d5f-83a5-fe03434f0aba_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a8a550268dba913033cefa230b56d74014a5f62186864819f4c5b40dbf791b4c
+size 97325
diff --git a/adversariallatentautoencoders/5c758fd2-d874-4d5f-83a5-fe03434f0aba_origin.pdf b/adversariallatentautoencoders/5c758fd2-d874-4d5f-83a5-fe03434f0aba_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d9f852f22367e5bbe1fa5477e212f60d99e2ce7c
--- /dev/null
+++ b/adversariallatentautoencoders/5c758fd2-d874-4d5f-83a5-fe03434f0aba_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b5c079417686da7e8d0dae8a4bb33f2907aced7598d57c27049e39429e11cc5f
+size 9255480
diff --git a/adversariallatentautoencoders/full.md b/adversariallatentautoencoders/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..d41c566f3a62d3149ac4e473258cbe230ad1dfc2
--- /dev/null
+++ b/adversariallatentautoencoders/full.md
@@ -0,0 +1,339 @@
+# Adversarial Latent Autoencoders
+
+Stanislav Pidhorskyi, Donald A. Adjeroh, Gianfranco Doretto
+Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV 26506
+{ stpidhorskyi, daadjeroh, gadoretto }@mix.wvu.edu
+
+# Abstract
+
+Autoencoder networks are unsupervised approaches aiming at combining generative and representational properties by learning simultaneously an encoder-generator map. Although studied extensively, the issues of whether they have the same generative power as GANs, or learn disentangled representations, have not been fully addressed. We introduce an autoencoder that tackles these issues jointly, which we call Adversarial Latent Autoencoder (ALAE). It is a general architecture that can leverage recent improvements on GAN training procedures. We designed two autoencoders: one based on an MLP encoder, and another based on a StyleGAN generator, which we call StyleALAE. We verify the disentanglement properties of both architectures. We show that StyleALAE can not only generate $1024 \times 1024$ face images with quality comparable to that of StyleGAN, but at the same resolution can also produce face reconstructions and manipulations based on real images. This makes ALAE the first autoencoder able to compare with, and go beyond, the capabilities of a generator-only type of architecture.
+
+# 1. Introduction
+
+Generative Adversarial Networks (GAN) [13] have emerged as one of the dominant unsupervised approaches for computer vision and beyond. Their strength relates to their remarkable ability to represent complex probability distributions, like the face manifold [33], or the bedroom images manifold [53], which they do by learning a generator map from a known distribution onto the data space. Just as important are the approaches that aim at learning an encoder map from the data to a latent space. They allow learning suitable representations of the data for the task at hand, either in a supervised [29, 46, 40, 14, 52], or unsupervised [37, 58, 19, 25, 4, 3] manner.
+
+Autoencoder (AE) [28, 41] networks are unsupervised approaches aiming at combining the "generative" as well as the "representational" properties by learning simultaneously an encoder-generator map. General issues subject of
+
+investigation in AE structures are whether they can: (a) have the same generative power as GANs; and, (b) learn disentangled representations [1]. Several works have addressed (a) [35, 31, 6, 9, 20]. An important testbed for success has been the ability for an AE to generate face images as rich and sharp as those produced by a GAN [23]. Progress has been made but victory has not been declared. A sizable amount of work has addressed also (b) [19, 25, 10], but not jointly with (a).
+
+We introduce an AE architecture that is general, and has generative power comparable to GANs while learning a less entangled representation. We observed that every AE approach makes the same assumption: the latent space should have a probability distribution that is fixed a priori and the autoencoder should match it. On the other hand, it has been shown in [24], the state-of-the-art for synthetic image generation with GANs, that an intermediate latent space, far enough from the imposed input space, tends to have improved disentanglement properties.
+
+The observation above has inspired the proposed approach. We designed an AE architecture where we allow the latent distribution to be learned from data to address entanglement (A). The output data distribution is learned with an adversarial strategy (B). Thus, we retain the generative properties of GANs, as well as the ability to build on the recent advances in this area. For instance, we can seamlessly include independent sources of stochasticity, which have proven essential for generating image details, or can leverage recent improvements on GAN loss functions, regularization, and hyperparameters tuning [2, 30, 38, 34, 36, 3]. Finally, to implement (A) and (B) we impose the AE reciprocity in the latent space (C). Therefore, we can avoid using reconstruction losses based on simple $\ell_2$ norm that operate in data space, where they are often suboptimal, like for the image space. We regard the unique combination of (A), (B), and (C) as the major technical novelty and strength of the approach. Since it works on the latent space, rather than autoencoding the data space, we named it Adversarial Latent Autoencoder (ALAE).
+
+We designed two ALAEs, one with a multilayer perceptron (MLP) as encoder with a symmetric generator, and
+
+another with the generator derived from a StyleGAN [24], which we call StyleALAE. For this one, we designed a companion encoder and a progressively growing architecture. We verified qualitatively and quantitatively that both architectures learn a latent space that is more disentangled than the imposed one. In addition, we show qualitative and quantitative results about face and bedroom image generation that are comparable with StyleGAN at the highest resolution of $1024 \times 1024$ . Since StyleALAE learns also an encoder network, we are able to show at the highest resolution, face reconstructions as well as several image manipulations based on real images rather than generated.
+
+# 2. Related Work
+
+Our approach builds directly on the vanilla GAN architecture [12]. Since then, a lot of progress has been made in the area of synthetic image generation. LAPGAN [5] and StackGAN [55, 56] train a stack of GANs organized in a multi-resolution pyramid to generate high-resolution images. HDGAN [57] improves by incorporating hierarchically-nested adversarial objectives inside the network hierarchy. In [51] they use a multi-scale generator and discriminator architecture to synthesize high-resolution images with a GAN conditioned on semantic label maps, while in BigGAN [3] they improve the synthesis by applying better regularization techniques. In PGGAN [23] it is shown how high-resolution images can be synthesized by progressively growing the generator and the discriminator of a GAN. The same principle was used in StyleGAN [24], the current state-of-the-art for face image generation, which we adapt here for our StyleALAE architecture. Other recent work on GANs has focussed on improving the stability and robustness of the training [44]. New loss functions have been introduced [2], along with gradient regularization methods [39, 36], weight normalization techniques [38], and learning rate equalization [23]. Our framework is amenable to these improvements, as we explain in later sections.
+
+Variational AE architectures [28, 41] have not only been appreciated for their theoretical foundation, but also for their stability during training, and the ability to provide insightful representations. Indeed, they stimulated research in the area of disentanglement [1], allowing learning representations with controlled degree of disentanglement between factors of variation in [19], and subsequent improvements in [25], leading to more elaborate metrics for disentanglement quantification [10, 4, 24], which we also use to analyze the properties of our approach. VAEs have also been extended to learn a latent prior different than a normal distribution, thus achieving significantly better models [48].
+
+A lot of progress has been made towards combining the benefits of GANs and VAEs. AAE [35] has been the precursor of those approaches, followed by VAE/GAN [31]
+
+with a more direct approach. BiGAN [6] and ALI [9] provide an elegant, fully adversarial framework, whereas VEEGAN [47] and AGE [49] pioneered the use of the latent space for autoencoding and advocated reducing the architecture complexity. PIONEER [15] and IntroVAE [20] followed this line, with the latter providing the best generation results in this category. Section 4.1 describes how the proposed approach compares with those listed here.
+
+Finally, we quickly mention other approaches that have shown promising results in representing image data distributions. These include autoregressive [50] and flow-based methods [27]. The former forgo the use of a latent representation, while the latter do not.
+
+# 3. Preliminaries
+
+A Generative Adversarial Network (GAN) [13] is composed of a generator network $\mathsf{G}$ mapping from a space $\mathcal{Z}$ onto a data space $\mathcal{X}$ , and a discriminator network $\mathsf{D}$ mapping from $\mathcal{X}$ onto $\mathbb{R}$ . The $\mathcal{Z}$ space is characterized by a known distribution $p(z)$ . By sampling from $p(z)$ , the generator $\mathsf{G}$ produces data representing a synthetic distribution $q(x)$ . Given training data $\mathcal{D}$ drawn from a real distribution $p_{\mathcal{D}}(x)$ , a GAN network aims at learning $\mathsf{G}$ so that $q(x)$ is as close to $p_{\mathcal{D}}(x)$ as possible. This is achieved by setting up a zero-sum two-players game with the discriminator $\mathsf{D}$ . The role of $\mathsf{D}$ is to distinguish in the most accurate way data coming from the real versus the synthetic distribution, while $\mathsf{G}$ tries to fool $\mathsf{D}$ by generating synthetic data that looks more and more like real.
+
+Following the more general formulation introduced in [39], the GAN learning problem entails finding the minimax (i.e., the Nash equilibrium) with respect to the pair $(\mathsf{G}, \mathsf{D})$ of the value function defined as
+
+$$
+V (\mathsf {G}, \mathsf {D}) = E _ {p _ {\mathcal {D}} (x)} [ f (\mathsf {D} (x)) ] + E _ {p (z)} [ f (- \mathsf {D} (\mathsf {G} (z))) ], \tag {1}
+$$
+
+where $E[\cdot]$ denotes expectation, and $f: \mathbb{R} \to \mathbb{R}$ is a concave function. By setting $f(t) = -\log(1 + \exp(-t))$ we obtain the original GAN formulation [13]; instead, if $f(t) = t$ we obtain the Wasserstein GAN [2].
+
+# 4. Adversarial Latent Autoencoders
+
+We introduce a novel autoencoder architecture by modifying the original GAN paradigm. We begin by decomposing the generator $\mathsf{G}$ and the discriminator $\mathsf{D}$ in two networks: $F, G$ , and $E, D$ , respectively. This means that
+
+$$
+\mathsf {G} = G \circ F, \quad \text {and} \quad \mathsf {D} = D \circ E, \tag {2}
+$$
+
+see Figure 1. In addition, we assume that the latent spaces at the interface between $F$ and $G$ , and between $E$ and $D$ are the same, and we indicate them as $\mathcal{W}$ . In the most general case we assume that $F$ is a deterministic map, whereas we
+
+
+Figure 1: ALAE Architecture. Architecture of an Adversarial Latent Autoencoder.
+
+allow $E$ and $G$ to be stochastic. In particular, we assume that $G$ might optionally depend on an independent noisy input $\eta$ , with a known fixed distribution $p_{\eta}(\eta)$ . We indicate with $G(w,\eta)$ this more general stochastic generator.
+
+Under the above conditions we now consider the distributions at the output of every network. The network $F$ simply maps $p(z)$ onto $q_{F}(w)$ . At the output of $G$ the distribution can be written as
+
+$$
+q (x) = \int_ {w} \int_ {\eta} q _ {G} (x | w, \eta) q _ {F} (w) p _ {\eta} (\eta) \mathrm {d} \eta \mathrm {d} w, \tag {3}
+$$
+
+where $q_{G}(x|w,\eta)$ is the conditional distribution representing $G$ . Similarly, for the output of $E$ the distribution becomes
+
+$$
+q _ {E} (w) = \int_ {x} q _ {E} (w | x) q (x) \mathrm {d} x, \tag {4}
+$$
+
+where $q_{E}(w|x)$ is the conditional distribution representing $E$ . In (4) if we replace $q(x)$ with $p_{\mathcal{D}}(x)$ we obtain the distribution $q_{E,\mathcal{D}}(w)$ , which describes the output of $E$ when the real data distribution is its input.
+
+Since optimizing (1) leads toward the synthetic distribution matching the real one, i.e., $q(x) = p_{\mathcal{D}}(x)$ , it is obvious from (4) that doing so also leads toward having $q_{E}(w) = q_{E,\mathcal{D}}(w)$ . In addition to that, we propose to ensure that the distribution of the output of $E$ be the same as the distribution at the input of $G$ . This means that we set up an additional goal, which requires that
+
+$$
+q _ {F} (w) = q _ {E} (w). \tag {5}
+$$
+
+In this way we could interpret the pair of networks $(G, E)$ as a generator-encoder network that autoencodes the latent space $\mathcal{W}$ .
+
+If we indicate with $\Delta(p\|q)$ a measure of discrepancy between two distributions $p$ and $q$ , we propose to achieve the goal (5) via regularizing the GAN loss (1) by alternating the following two optimizations
+
+$$
+\min _ {F, G} \max _ {E, D} V (G \circ F, D \circ E) \tag {6}
+$$
+
+$$
+\min _ {E, G} \Delta (F \| E \circ G \circ F) \tag {7}
+$$
+
+where the left and right arguments of $\Delta$ indicate the distributions generated by the networks mapping $p(z)$ , which
+
+| Autoencoder | (a) Data Distribution | (b) Latent Distribution | (c) Reciprocity Space |
+| VAE [28, 41] | similarity | imposed/divergence | data |
+| AAE [35] | similarity | imposed/adversarial | data |
+| VAE/GAN [31] | similarity | imposed/divergence | data |
+| VampPrior [48] | similarity | learned/divergence | data |
+| BiGAN [6] | adversarial | imposed/adversarial | adversarial |
+| ALI [9] | adversarial | imposed/adversarial | adversarial |
+| VEEGAN [47] | adversarial | imposed/divergence | latent |
+| AGE [49] | adversarial | imposed/adversarial | latent |
+| IntroVAE [20] | adversarial | imposed/adversarial | data |
+| ALAE (ours) | adversarial | learned/divergence | latent |
+
+Table 1: Autoencoder criteria used: (a) for matching the real to the synthetic data distribution; (b) for setting/learning the latent distribution; (c) for which space reciprocity is achieved.
+
+correspond to $q_{F}(w)$ and $q_{E}(w)$ , respectively. We refer to a network optimized according to (6) and (7) as an Adversarial Latent Autoencoder (ALAE). The building blocks of an ALAE architecture are depicted in Figure 1.
+
+# 4.1. Relation with other autoencoders
+
+Data distribution. In architectures composed by an encoder network and a generator network, the task of the encoder is to map input data onto a space characterized by a latent distribution, whereas the generator is tasked to map latent codes onto a space described by a data distribution. Different strategies are used to learn the data distribution. For instance, some approaches impose a similarity criterion on the output of the generator [28, 41, 35, 48], or even learn a similarity metric [31]. Other techniques instead, set up an adversarial game to ensure the generator output matches the training data distribution [6, 9, 47, 49, 20]. This latter approach is what we use for ALAE.
+
+Latent distribution. For the latent space instead, the common practice is to set a desired target latent distribution, and then the encoder is trained to match it either by minimizing a divergence type of similarity [28, 41, 31, 47, 48], or by setting up an adversarial game [35, 6, 9, 49, 20]. Here is where ALAE takes a fundamentally different approach. Indeed, we do not impose the latent distribution, i.e., $q_{E}(w)$ , to match a target distribution. The only condition we set, is given by (5). In other words, we do not want $F$ to be the identity map, and are very much interested in letting the learning process decide what $F$ should be.
+
+Reciprocity. Another aspect of autoencoders is whether and how they achieve reciprocity. This property relates to the ability of the architecture to reconstruct a data sample $x$ from its code $w$ , and vice versa. Clearly, this requires that $x = G(E(x))$ , or equivalently that $w = E(G(w))$ . In the first case, the network must contain a reconstruction term that operates in the data space. In the latter one, the term operates in the latent space. While most approaches follow the first strategy [28, 41, 35, 31, 20, 48], there are some that implement the second [47, 49], including ALAE. Indeed, this can be achieved by choosing the divergence in (7) to be
+
+
+Figure 2: StyleALAE Architecture. The StyleALAE encoder has Instance Normalization (IN) layers to extract multiscale style information that is combined into a latent code $w$ via a learnable multilinear map.
+
+the expected coding reconstruction error, as follows
+
+$$
+\Delta (F \| E \circ G \circ F) = E _ {p (z)} \left[ \| F (z) - E \circ G \circ F (z) \| _ {2} ^ {2} \right] \tag {8}
+$$
+
+Imposing reciprocity in the latent space gives the significant advantage that simple $\ell_2$ , $\ell_1$ or other norms can be used effectively, even when they would be inappropriate for the data space. For instance, it is well known that the element-wise $\ell_2$ norm on image pixel differences does not reflect human visual perception. On the other hand, when used in latent space its meaning is different. For instance, an image translation by one pixel could lead to a large $\ell_2$ discrepancy in image space, while in latent space its representation would hardly change at all. Ultimately, using $\ell_2$ in image space has been regarded as one of the reasons why autoencoders have not been as successful as GANs in reconstructing/generating sharp images [31]. Another way to address the same issue is by imposing reciprocity adversarially, as shown in [6, 9]. Table 1 reports a summary of the main characteristics of most of the recent generator-encoder architectures.
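+
+Concretely, the reciprocity term of Eq. (8) reduces to an ordinary $\ell_2$ error computed on latent codes; a minimal sketch (omitting the optional noise input $\eta$ of $G$ , with illustrative names) is:
+
+```python
+def latent_reconstruction_loss(F_net, G, E, z):
+    """Sketch of Eq. (8): reciprocity is enforced in latent space, not in data space."""
+    w = F_net(z)                                   # w ~ q_F(w)
+    return ((w - E(G(w))) ** 2).sum(dim=1).mean()  # E_p(z)[ ||F(z) - E(G(F(z)))||_2^2 ]
+```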
+
+# 5. StyleALAE
+
+We use ALAE to build an autoencoder that uses a StyleGAN based generator. For this we make our latent space $\mathcal{W}$ play the same role as the intermediate latent space in [24]. Therefore, our $G$ network becomes the part of StyleGAN depicted on the right side of Figure 2. The left side is a novel architecture that we designed to be the encoder $E$ .
+
+Since at every layer $G$ is driven by a style input, we design $E$ symmetrically, so that from the corresponding layer we extract style information. We do so by inserting Instance Normalization (IN) layers [21], which provide instance averages and standard deviations for every channel. Specifically, if $y_{i}^{E}$ is the output of the $i$ -th layer of $E$ , the IN module extracts the statistics $\mu (y_i^E)$ and $\sigma (y_i^E)$ representing the style at that level. The IN module also provides as output the normalized version of the input, which continues down the pipeline with no more style information from that level. Given the information flow between $E$ and $G$ , the architecture effectively mimics a multiscale style transfer from $E$ to $G$ , with the difference that there is no extra input image providing the content [21, 22].
+
+The set of styles that are inputs to the Adaptive Instance Normalization (AdaIN) layers [21] in $G$ are related linearly to the latent variable $w$ . Therefore, we propose to combine the styles output by the encoder, and to map them onto the latent space, via the following multilinear map
+
+$$
+w = \sum_ {i = 1} ^ {N} C _ {i} \left[ \begin{array}{c} \mu \left(y _ {i} ^ {E}\right) \\ \sigma \left(y _ {i} ^ {E}\right) \end{array} \right] \tag {9}
+$$
+
+where the $C_i$ 's are learnable parameters, and $N$ is the number of layers.
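+
+A sketch of how this multilinear map can be realized is shown below: each $C_i$ is modelled as a learnable linear layer acting on the concatenated channel-wise mean and standard deviation of the $i$ -th encoder feature map, and the per-layer contributions are summed into $w$ . The module and argument names, and the use of `nn.Linear` for each $C_i$ , are illustrative assumptions.
+
+```python
+import torch
+import torch.nn as nn
+
+class StyleToLatent(nn.Module):
+    """Sketch of Eq. (9): combine per-layer IN statistics into a single latent code w."""
+    def __init__(self, channels_per_layer, latent_dim):
+        super().__init__()
+        self.maps = nn.ModuleList(
+            [nn.Linear(2 * c, latent_dim, bias=False) for c in channels_per_layer])
+
+    def forward(self, encoder_feats):               # list of (B, C_i, H, W) feature maps
+        w = 0
+        for C_i, y in zip(self.maps, encoder_feats):
+            b, c = y.shape[:2]
+            y_flat = y.reshape(b, c, -1)
+            mu = y_flat.mean(dim=2)                  # instance mean per channel
+            sigma = y_flat.std(dim=2)                # instance standard deviation per channel
+            w = w + C_i(torch.cat([mu, sigma], dim=1))
+        return w
+```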
+
+Similarly to [23, 24], we use progressive growing. We start from low-resolution images ( $4 \times 4$ pixels) and progressively increase the resolution by smoothly blending in new blocks to $E$ and $G$ . We implement the $F$ and $D$ networks using MLPs. The $\mathcal{Z}$ and $\mathcal{W}$ spaces, and all layers of $F$ and $D$ , have the same dimensionality in all our experiments. Moreover, for StyleALAE we follow [24] and choose $F$ to have 8 layers, while we set $D$ to have 3 layers.
+
+# 6. Implementation
+
+Adversarial losses and regularization. We use a non-saturating loss [13, 36], which in (1) we introduce by setting $f(\cdot)$ to be a SoftPlus function [11]. This is a smooth version of the rectifier activation function, defined as $f(t) = \mathrm{softplus}(t) = \log(1 + \exp(t))$ . In addition, we use gradient regularization techniques [8, 36, 43]. We utilize $R_1$ [44,
+
+# Algorithm 1 ALAE Training
+
+1: $\theta_F, \theta_G, \theta_E, \theta_D \gets$ Initialize network parameters
+2: while not converged do
+3: Step I. Update $E$ , and $D$
+4: $x\gets$ Random mini-batch from dataset
+5: $z\gets$ Samples from prior $\mathcal{N}(0,I)$
+6. $L_{adv}^{E,D}\gets \mathrm{softplus}(D\circ E\circ G\circ F(z)) + \mathrm{softplus}(-D\circ E(x)) + \frac{\gamma}{2}\mathrm{E}_{p_{\mathcal{D}}(x)}\left[\| \nabla D\circ E(x)\| ^2\right]$
+7: $\theta_{E},\theta_{D}\gets \mathrm{ADAM}(\nabla_{\theta_{D},\theta_{E}}L_{adv}^{E,D},\theta_{D},\theta_{E},\alpha ,\beta_{1},\beta_{2})$
+8: Step II. Update $F$ , and $G$
+9: $z\gets$ Samples from prior $\mathcal{N}(0,I)$
+10. $L_{adv}^{F,G}\gets \mathrm{softplus}(-D\circ E\circ G\circ F(z))$
+11. $\theta_F,\theta_G\gets \mathrm{ADAM}(\nabla_{\theta_F,\theta_G}L_{adv}^{F,G},\theta_F,\theta_G,\alpha ,\beta_1,\beta_2)$
+12: Step III. Update $E$ , and $G$
+13: $z\gets$ Samples from prior $\mathcal{N}(0,I)$
+14: $L_{error}^{E,G}\gets \| F(z) - E\circ G\circ F(z)\| _2^2$
+15: $\theta_{E},\theta_{G}\gets \mathrm{ADAM}(\nabla_{\theta_{E},\theta_{G}}L_{error}^{E,G},\theta_{E},\theta_{G},\alpha ,\beta_{1},\beta_{2})$
+16: end while
+
+Figure 3: MNIST reconstruction. Reconstructions of the permutation-invariant MNIST. Top row: real images. Middle row: BiGAN reconstructions. Bottom row: ALAE reconstructions. The same MLP architecture is used in both methods.
+
+36], a zero-centered gradient penalty term which acts only on real data, and is defined as $\frac{\gamma}{2}\mathrm{E}_{p_{\mathcal{D}}(x)}\left[\| \nabla D\circ E(x)\| ^2\right]$ , where the gradient is taken with respect to the parameters $\theta_{E}$ and $\theta_{D}$ of the networks $E$ and $D$ , respectively.
+
+Training. In order to optimize (6) and (7) we use alternating updates. One iteration is composed of three updating steps: two for (6) and one for (7). Step I updates the discriminator (i.e., networks $E$ and $D$ ). Step II updates the generator (i.e., networks $F$ and $G$ ). Step III updates the latent space autoencoder (i.e., networks $G$ and $E$ ). The procedural details are summarized in Algorithm 1. For updating the weights we use the Adam optimizer [26] with $\beta_{1} = 0.0$ and $\beta_{2} = 0.99$ , coupled with the learning rate equalization technique [23] described below. For non-growing architectures (i.e., MLPs) we use a learning rate of 0.002 and a batch size of 128. For growing architectures (i.e., StyleALAE) the learning rate and batch size depend on the resolution.
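+
+A compact PyTorch-style sketch of one such iteration is given below. It assumes image batches `x_real` of shape (B, C, H, W), one optimizer per step, and computes the $R_1$ penalty with respect to the real input (one common choice; see the discussion above for the exact formulation used by the paper). All names are illustrative.
+
+```python
+import torch
+import torch.nn.functional as nnF
+
+def alae_iteration(F_net, G, E, D, x_real, z, opt_ED, opt_FG, opt_EG, gamma=10.0):
+    """One training iteration following the three steps of Algorithm 1 (sketch)."""
+    # Step I: update E and D with the non-saturating loss plus the R1 gradient penalty.
+    x_real.requires_grad_(True)
+    d_real = D(E(x_real))
+    d_fake = D(E(G(F_net(z)).detach()))
+    grad = torch.autograd.grad(d_real.sum(), x_real, create_graph=True)[0]
+    r1 = grad.pow(2).reshape(grad.size(0), -1).sum(dim=1).mean()
+    loss_ed = nnF.softplus(d_fake).mean() + nnF.softplus(-d_real).mean() + 0.5 * gamma * r1
+    opt_ED.zero_grad()
+    loss_ed.backward()
+    opt_ED.step()
+
+    # Step II: update F and G adversarially.
+    loss_fg = nnF.softplus(-D(E(G(F_net(z))))).mean()
+    opt_FG.zero_grad()
+    loss_fg.backward()
+    opt_FG.step()
+
+    # Step III: update E and G with the latent-space reconstruction error of Eq. (8).
+    w = F_net(z).detach()
+    loss_eg = ((w - E(G(w))) ** 2).sum(dim=1).mean()
+    opt_EG.zero_grad()
+    loss_eg.backward()
+    opt_EG.step()
+```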
+
+# 7. Experiments
+
+Code and uncompressed images are available at https://github.com/podgorskiy/ALAE.
+
+# 7.1. Representation learning with MLP
+
+We train ALAE with MNIST [32], and then use the feature representation for classification, reconstruction, and analyzing disentanglement. We use the permutation-invariant setting, where each $28 \times 28$ MNIST image is treated as a 784D vector without spatial structure, which requires using an MLP instead of a CNN. We follow [7] and use a three-layer MLP with a latent space size of 50D. Both networks, $E$ and $G$ , have two hidden layers with 1024 units each. In [7] the features used are the activations of the layer before the last of the encoder, which are 1024D vectors. We
+
+Figure 4: MNIST traversal. Reconstructions of interpolations in the $\mathcal{Z}$ space and in the $\mathcal{W}$ space, between the same digits. The latter transition appears to be smoother.
+
+Table 2: MNIST classification. Classification accuracy $(\%)$ on the permutation-invariant MNIST [32] using 1NN and linear SVM, with same writers (SW) and different writers (DW) settings, and short features (sf) vs. long features (lf), indicated as sf/lf.
+
+| Method | 1NN SW | Linear SVM SW | 1NN DW | Linear SVM DW |
+| AE(ℓ1) | 97.15/97.43 | 88.71/97.27 | 96.84/96.80 | 89.78/97.72 |
+| AE(ℓ2) | 97.52/97.37 | 88.78/97.23 | 97.05/96.77 | 89.78/97.72 |
+| LR | 92.79/97.28 | 89.74/97.56 | 91.90/96.69 | 90.03/97.80 |
+| JLR | 92.54/97.02 | 89.23/97.19 | 91.97/96.45 | 90.82/97.62 |
+| BiGAN [7] | 95.83/97.14 | 90.52/97.59 | 95.38/96.81 | 91.34/97.74 |
+| ALAE (ours) | 93.79/97.61 | 93.47/98.20 | 94.59/97.47 | 94.23/98.64 |
+
+refer to those as long features. We also use, as features, the 50D vectors taken from the latent space, $\mathcal{W}$ . We refer to those as short features.
+
+MNIST has an official split into training and testing sets of sizes 60000 and 10000, respectively. We refer to it as the different writers (DW) setting, since the human writers of the digits for the training set are different from those who wrote the testing digits. We also consider a same writers (SW) setting, which uses only the official training split by further splitting it into two parts: a train split of size 50000 and a test split of size 10000, while the official testing split is ignored. In SW the pools of writers in the train and test splits overlap, whereas in DW they do not. This makes SW an easier setting than DW.
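+
+For concreteness, the two splits can be built as in the following sketch; the use of torchvision and Subset is an illustrative assumption, and only the split sizes come from the text above.
+
+```python
+from torch.utils.data import Subset
+from torchvision import datasets
+
+mnist_train = datasets.MNIST(root="data", train=True, download=True)   # 60000 images
+mnist_test = datasets.MNIST(root="data", train=False, download=True)   # 10000 images
+
+# DW (different writers): the official train/test split.
+dw_train, dw_test = mnist_train, mnist_test
+
+# SW (same writers): re-split the official training set into 50000 / 10000
+# and ignore the official test set.
+sw_train = Subset(mnist_train, range(50000))
+sw_test = Subset(mnist_train, range(50000, 60000))
+```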
+
+Results. We report the accuracy with the 1NN classifier as in [7], and extend those results by reporting also the accuracy with the linear SVM, because it allows a more direct analysis of disentanglement. Indeed, we recall that a disentangled representation [45, 42, 1] refers to a space consisting of linear subspaces, each of which is responsible for one factor of variation. Therefore, a linear classifier based on a disentangled feature space should lead to better performance compared to one working on an entangled space. Table 2 summarizes the average accuracy over five trials for ALAE, BiGAN, as well as the following baselines proposed in [7]: Latent Regressor (LR), Joint Latent Regressor (JLR), Autoencoders trained to minimize the $\ell_2$ ( $\mathrm{AE}(\ell_2)$ ) or the $\ell_1$ ( $\mathrm{AE}(\ell_1)$ ) reconstruction error.
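+
+The evaluation protocol can be summarized by the following hedged sketch: extract frozen features (either the 50D short features from $\mathcal{W}$ or the 1024D long features) and fit a 1NN classifier and a linear SVM on top of them. The `encode` function and the scikit-learn hyper-parameters are assumptions for illustration.
+
+```python
+from sklearn.neighbors import KNeighborsClassifier
+from sklearn.svm import LinearSVC
+
+def evaluate_features(encode, x_train, y_train, x_test, y_test):
+    # `encode` maps raw 784D digits to the chosen feature space (short or long).
+    f_train, f_test = encode(x_train), encode(x_test)
+    knn = KNeighborsClassifier(n_neighbors=1).fit(f_train, y_train)
+    svm = LinearSVC(C=1.0, max_iter=10000).fit(f_train, y_train)
+    return {"1NN": knn.score(f_test, y_test),
+            "Linear SVM": svm.score(f_test, y_test)}
+```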
+
+The most significant result of Table 2 is drawn by comparing the 1NN with the corresponding linear SVM columns. Since 1NN does not presume disentanglement in order to be effective, but linear SVM does, larger performance drops signal stronger entanglement. ALAE is the approach that remains more stable when switching from 1NN to linear SVM, suggesting a greater disentanglement of the space. This is true especially for short features, whereas for long features this effect fades away because linear separability grows.
+
+We also note that ALAE does not always provide the best accuracy, and the baseline AE (especially $\mathrm{AE}(\ell_2)$ ) does well with 1NN, and more so with short features. This might
+
+Table 3: FID scores. FID scores (lower is better) measured on FFHQ [24] and LSUN Bedroom [54].
+
+| Method | FFHQ | LSUN Bedroom |
+| StyleGAN [24] | 4.40 | 2.65 |
+| PGGAN [23] | - | 8.34 |
+| IntroVAE [20] | - | 8.84 |
+| Pioneer [16] | - | 18.39 |
+| Balanced Pioneer [17] | - | 17.89 |
+| StyleALAE Generation | 13.09 | 17.13 |
+| StyleALAE Reconstruction | 16.52 | 15.92 |
+
+be explained by the baseline AE learning a representation that is closer to a discriminative one. Other approaches instead focus more on learning representations for drawing synthetic random samples, which are likely richer, but less discriminative. This effect also fades for longer features.
+
+Another observation is about SW vs. DW. 1NN generalizes less effectively for DW, as expected, but linear SVM provides a small improvement. The reason is unclear, but we speculate that the DW test set might contain fewer distinct writers, which would make it slightly less challenging.
+
+Figure 3 shows qualitative reconstruction results. It can be seen that BiGAN reconstructions are subject to semantic label flipping much more often than ALAE. Finally, Figure 4 shows two traversals: one obtained by interpolating in the $\mathcal{Z}$ space, and the other by interpolating in the $\mathcal{W}$ space. The second shows a smoother image space transition, suggesting a lesser degree of entanglement.
+
+# 7.2. Learning style representations
+
+FFHQ. We evaluate StyleALAE with the FFHQ [24] dataset. It is very recent and consists of 70000 images of human faces, aligned and cropped at a resolution of $1024 \times 1024$ . In contrast to [24], we split FFHQ into a training set of 60000 images and a testing set of 10000 images. We do so in order to measure the reconstruction quality, for which we need images that were not used during training.
+
+We implemented our approach with PyTorch. Most of the experiments were conducted on a machine with $4 \times$ GPU Titan X, but for training the models at resolution $1024 \times 1024$ we used a server with $8 \times$ GPU Titan RTX. We trained StyleALAE for 147 epochs, 18 of which were spent at resolution $1024 \times 1024$ . Starting from resolution $4 \times 4$ we grew StyleALAE up to $1024 \times 1024$ . When growing to a new resolution level we used $500k$ training samples during the transition, and another $500k$ samples for training stabilization. Once the maximum resolution of $1024 \times 1024$ was reached, we continued training for 1M images. Thus, the total training time measured in images was $10M$ . In contrast, the total training time for StyleGAN [24] was $25M$ images, and $15M$ of them were used at resolution $1024 \times 1024$ . At the same resolution we trained StyleALAE with only 1M images, that is, 15 times fewer.
+
+Table 3 reports the FID score [18] for generations and reconstructions. Source images for reconstructions are from the test set and were not used during training. The scores of StyleALAE are higher, and we regard the large training time difference between StyleALAE and StyleGAN (1M vs 15M) as the likely cause of the discrepancy.
+
+Table 4 reports the perceptual path length (PPL) [24] of StyleALAE. This is a measurement of the degree of disentanglement of representations. We compute the values for representations in the $\mathcal{W}$ and the $\mathcal{Z}$ space, where StyleALAE is trained with style mixing in both cases. The StyleGAN score measured in $\mathcal{Z}$ corresponds to a traditional network, and the one measured in $\mathcal{W}$ to a style-based one. We see that the PPL drops from $\mathcal{Z}$ to $\mathcal{W}$ , indicating that $\mathcal{W}$ is perceptually more linear than $\mathcal{Z}$ , thus less entangled. Also, note that for our models the PPL is lower, despite the higher FID scores.
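+
+For reference, PPL follows [24]: the expected LPIPS distance between images generated from two nearby points along an interpolation path, scaled by the squared step size. The sketch below measures the full-path PPL in the $\mathcal{W}$ space; the lpips package, the 512D latent size, linear interpolation in $\mathcal{W}$, the value of epsilon, and the omission of the face-cropping step are all simplifying assumptions.
+
+```python
+import torch
+import lpips  # perceptual distance; assumed dependency
+
+@torch.no_grad()
+def ppl_w(F, G, n_pairs=10_000, eps=1e-4, batch=8, device="cuda"):
+    dist_fn = lpips.LPIPS(net="vgg").to(device)
+    scores = []
+    for _ in range(n_pairs // batch):
+        z0, z1 = torch.randn(2, batch, 512, device=device)
+        w0, w1 = F(z0), F(z1)
+        t = torch.rand(batch, 1, device=device)      # "full" path; use t in {0, 1} for "end"
+        img_a = G(torch.lerp(w0, w1, t))             # generated images assumed in [-1, 1]
+        img_b = G(torch.lerp(w0, w1, t + eps))
+        scores.append((dist_fn(img_a, img_b) / eps ** 2).flatten())
+    return torch.cat(scores).mean().item()
+```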
+
+Figure 6 shows a random collection of generations obtained from StyleALAE, and Figure 5 shows a collection of reconstructions. In Figure 9 we repeat the style mixing experiment of [24], but with real images as sources and destinations for style combinations. We note that the original images are faces of celebrities that we downloaded from the internet. Therefore, they are not part of FFHQ and come from a different distribution. Indeed, FFHQ is made of face images obtained from Flickr.com depicting non-celebrity people. Often the faces do not wear any makeup, nor have the images been altered (e.g., with Photoshop). Moreover, the imaging conditions of the FFHQ acquisitions are very different from typical photoshoot settings, where professional equipment is used. Despite this change of image statistics, we observe that StyleALAE works effectively on both reconstruction and mixing.
+
+LSUN. We evaluated StyleALAE with LSUN Bedroom [54]. Figure 7 shows generations, as well as reconstructions of images that were not seen during training. Table 3 reports the FID scores on the generations and the reconstructions.
+
+Table 4: PPL. Perceptual path lengths on FFHQ measured in the $\mathcal{Z}$ and the $\mathcal{W}$ spaces (lower is better).
+
+| Method | Path length (full) | Path length (end) |
+| StyleGAN | 412.0 | 415.3 |
+| StyleGAN no mixing | 200.5 | 160.6 |
+| StyleGAN | 231.5 | 182.1 |
+| StyleALAE | 300.5 | 292.0 |
+| StyleALAE | 134.5 | 103.4 |
+
+Table 5: Comparison of FID and PPL scores for CelebA-HQ images at $256\times 256$ (lower is better). FID is based on 50,000 generated samples compared to training samples.
+
+| Method | FID | PPL (full) |
+| PGGAN [23] | 8.03 | 229.2 |
+| GLOW [27] | 68.93 | 219.6 |
+| PIONEER [16] | 39.17 | 155.2 |
+| Balanced PIONEER [17] | 25.25 | 146.2 |
+| StyleALAE (ours) | 19.21 | 33.29 |
+
+
+Figure 5: FFHQ reconstructions. Reconstructions of unseen images with StyleALAE trained on FFHQ [24] at $1024 \times 1024$ .
+
+
+Figure 6: FFHQ generations. Generations with StyleALAE trained on FFHQ [24] at $1024 \times 1024$ .
+
+
+Figure 7: LSUN generations and reconstructions. Generations (first row), and reconstructions using StyleALAE trained on LSUN Bedroom [54] at resolution $256 \times 256$ .
+
+CelebA-HQ. CelebA-HQ [23] is an improved subset of CelebA [33] consisting of 30000 images at resolution $1024 \times 1024$ . We follow [16, 17, 27, 23] and use CelebA-HQ downscaled to $256 \times 256$ with a training/testing split of 27000/3000. Table 5 reports the FID and PPL scores, and Figure 8 compares StyleALAE reconstructions of unseen faces with two other approaches.
+
+# 8. Conclusions
+
+We introduced ALAE, a novel autoencoder architecture that is simple, flexible, and general, and which we have shown to be effective with two very different backbone generator-encoder networks. Differently from previous work, it allows learning the probability distribution of the latent space while the data distribution is learned in an adversarial setting. Our experiments confirm that this enables learning representations that are likely less entangled. This allows us to extend StyleGAN to StyleALAE, the first autoencoder capable of generating and manipulating images in ways not possible with StyleGAN alone, while maintaining the same level of visual detail.
+
+Figure 8: CelebA-HQ reconstructions. CelebA-HQ reconstructions of unseen samples at resolution $256 \times 256$ . Top row: real images. Second row: StyleALAE. Third row: Balanced PIONEER [17]. Last row: PIONEER [16]. StyleALAE reconstructions look sharper and less distorted.
+
+# Acknowledgments
+
+This material is based upon work supported by the National Science Foundation under Grants No. OIA-1920920, and OAC-1761792.
+
+
+Figure 9: Two sets of real images were picked to form the Source set and the Destination set. The rest of the images were generated by copying a specified subset of styles from the Source set into the Destination set. This experiment repeats the one from [24], but with real images. Copying the coarse styles brings high-level aspects such as pose, general hair style, and face shape from the Source set, while all colors (eyes, hair, lighting) and finer facial features resemble the Destination set. Instead, if we copy the middle styles from the Source set, we inherit smaller-scale facial features such as hair style and eyes open/closed from Source, while the pose and general face shape from Destination are preserved. Finally, copying the fine styles from the Source set brings mainly the color scheme and microstructure.
+
+# References
+
+[1] Alessandro Achille and Stefano Soatto. Emergence of invariance and disentanglement in deep representations. The Journal of Machine Learning Research, 19(1):1947-1980, 2018. 1, 2, 5
+[2] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. In arXiv:1701.07875, 2017. 1, 2
+[3] A. Brock, J. Donahue, and K. Simonyan. Large scale GAN training for high fidelity natural image synthesis. In ICLR, 2019. 1, 2
+[4] R. T. Q. Chen, X. Li, R. Grosse, and R. Duvenaud. Isolating sources of disentanglement in variational autoencoders. In NeurIPS, 2018. 1, 2
+[5] Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a laplacian pyramid of adversarial networks. In Advances in neural information processing systems (NIPS), pages 1486-1494, 2015. 2
+[6] J. Donahue, P. Krahenbuhl, and T. Darrell. Adversarial feature learning. In ICLR, 2016. 1, 2, 3, 4
+[7] Jeff Donahue, Philipp Krahenbuhl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016. 5
+[8] Harris Drucker and Yann Le Cun. Improving generalization performance using double backpropagation. IEEE Transactions on Neural Networks, 3(6):991-997, 1992. 4
+[9] V. Dumoulin, I. Belghazi, B. Poole, O. Mastropietro, A. Lamb, M. Arjovsky, and A. Courville. Adversarially learned inference. In ICLR, 2016. 1, 2, 3, 4
+[10] Cian Eastwood and Christopher KI Williams. A framework for the quantitative evaluation of disentangled representations. In ICLR, 2018. 1, 2
+[11] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 315-323, 2011. 4
+[12] Ian Goodfellow. Nips 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160, 2016. 2
+[13] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), pages 2672–2680, 2014. 1, 2, 4
+[14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 1
+[15] Ari Heljakka, Arno Solin, and Juho Kannala. Pioneer networks: Progressively growing generative autoencoder. In Asian Conference on Computer Vision (ACCV), pages 22-38. Springer, 2018. 2
+[16] Ari Heljakka, Arno Solin, and Juho Kannala. Pioneer networks: Progressively growing generative autoencoder. In Asian Conference on Computer Vision, pages 22-38. Springer, 2018. 6, 7
+[17] Ari Heljakka, Arno Solin, and Juho Kannala. Towards photographic image manipulation with balanced growing of generative autoencoders. arXiv preprint arXiv:1904.06145, 2019. 6, 7
+
+[18] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems, pages 6626-6637, 2017. 6
+[19] I. Higgins, L. Matthew, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. In ICLR, 2017. 1, 2
+[20] H. Huang, Z. Li, R. He, Z. Sun, and T. Tan. Introvae: Introspective variational autoencoders for photographic image synthesis. In NIPS, 2018. 1, 2, 3, 6
+[21] Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision, pages 1501-1510, 2017. 4
+[22] Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. Multimodal unsupervised image-to-image translation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 172-189, 2018. 4
+[23] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017. 1, 2, 4, 5, 6, 7
+[24] T. Karras, S. Laine, and T. Aila. A style-based generator architecture for generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019. 1, 2, 4, 6, 7, 8
+[25] H. Kim and A. Mnih. Disentangling by factorising. In ICML, 2018. 1, 2
+[26] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 5
+[27] Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, pages 10215-10224, 2018. 2, 6, 7
+[28] D. P. Kingma and M. Welling. Auto-encoding variational bayes. In International Conference on Learning Representations (ICLR), 2014. 1, 2, 3
+[29] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In International Conference on Neural Information Processing Systems (NIPS), pages 1097-1105, 2012. 1
+[30] Karol Kurach, Mario Lucic, Xiaohua Zhai, Marcin Michalski, and Sylvain Gelly. The gan landscape: Losses, architectures, regularization, and normalization. In arXiv:1807.04720, 2018. 1
+[31] A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. In International Conference on Machine Learning (ICML), pages 1558-1566, 2016. 1, 2, 3, 4
+[32] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998. 5
+
+[33] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, pages 3730-3738, 2015. 1, 7
+[34] Mario Lucic, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet. Are gans created equal? a large-scale study. In Advances in neural information processing systems (NeurIPS), pages 700-709, 2018. 1
+[35] A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey. Adversarial autoencoders. In arXiv:1511.05644, 2015. 1, 2, 3
+[36] Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for gans do actually converge? arXiv:1801.04406, 2018. 1, 2, 4, 5
+[37] M. Mirza and S. Osindero. Conditional generative adversarial nets. In arXiv:1411.1784, 2014. 1
+[38] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In arXiv:1802.05957, 2018. 1, 2
+[39] V. Nagarajan and J. Z. Kolter. Gradient descent GAN optimization is locally stable. In International Conference on Neural Information Processing Systems (NIPS), pages 5591-5600, 2017. 2
+[40] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: towards real-time object detection with region proposal networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems (NIPS), pages 91-99, 2015. 1
+[41] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning (ICML), 2014. 1, 2, 3
+[42] Karl Ridgeway. A survey of inductive biases for factorial representation-learning. arXiv preprint arXiv:1612.05299, 2016. 5
+[43] Andrew Slavin Ross and Finale Doshi-Velez. Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. In Thirty-second AAAI conference on artificial intelligence, 2018. 4
+[44] Kevin Roth, Aurelien Lucchi, Sebastian Nowozin, and Thomas Hofmann. Stabilizing training of generative adversarial networks through regularization. In Advances in neural information processing systems, pages 2018-2028, 2017. 2, 5
+[45] Jürgen Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863-879, 1992. 5
+[46] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. 1
+[47] A. Srivastava, L. Valkov, C. Russell, M. U. Gutmann, and C. Sutton. Veegan: Reducing mode collapse in gans using implicit variational learning. In NIPS, 2017. 2, 3
+[48] Jakub M Tomczak and Max Welling. VAE with a VampPrior. In AISTATS, 2018. 2, 3
+
+[49] D. Ulyanov, A. Vedaldi, and V. Lempitsky. It takes (only) two: Adversarial generator-encoder networks. In AAAI, 2018. 2, 3
+[50] Aaron Van Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In International Conference on Machine Learning, pages 1747-1756, 2016. 2
+[51] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional gans. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8798-8807, 2018. 2
+[52] Sanghyun Woo, Jongchan Park, Joon-Young Lee, and In So Kweon. Cbam: Convolutional block attention module. In The European Conference on Computer Vision (ECCV), September 2018. 1
+[53] F. Yu, A. Seff, Y. Zhang, S. Song, T. Funkhouser, and J. Xiao. LSUN: construction of a large-scale image dataset using deep learning with humans in the loop. In arXiv:1506.03365, 2015. 1
+[54] Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong Xiao. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365, 2015. 6, 7
+[55] Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 5907-5915, 2017. 2
+[56] Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N Metaxas. Stack-gan++: Realistic image synthesis with stacked generative adversarial networks. IEEE transactions on pattern analysis and machine intelligence, 41(8):1947-1962, 2018. 2
+[57] Zizhao Zhang, Yuanpu Xie, and Lin Yang. Photographic text-to-image synthesis with a hierarchically-nested adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6199-6208, 2018. 2
+[58] Jun Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. IEEE International Conference on Computer Vision (ICCV), 2017-Octob:2242-2251, 2017. 1
\ No newline at end of file
diff --git a/adversariallatentautoencoders/images.zip b/adversariallatentautoencoders/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..bf24b0075d43e6d87cda988f7e6dc764161b631d
--- /dev/null
+++ b/adversariallatentautoencoders/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:26da86ee89f0653674303ca56add3b80e7a1114a998d776751c496f03a72b2b1
+size 873885
diff --git a/adversariallatentautoencoders/layout.json b/adversariallatentautoencoders/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..7cf94a116d864d1515d369e7c5ac4db1456c46ac
--- /dev/null
+++ b/adversariallatentautoencoders/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e52d519e97fd488f72a83cbc747a34316433de457841c293f127ac25ddca2ada
+size 497302
diff --git a/adversarialnasadversarialneuralarchitecturesearchforgans/be2b4076-d328-48f8-a665-778028f67b73_content_list.json b/adversarialnasadversarialneuralarchitecturesearchforgans/be2b4076-d328-48f8-a665-778028f67b73_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..6f4794d23aeb0711157ed42e91c452b3c93451e6
--- /dev/null
+++ b/adversarialnasadversarialneuralarchitecturesearchforgans/be2b4076-d328-48f8-a665-778028f67b73_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:48c96381ff7ce4183f3bf236c53ec7434591da27e6050ddc59752b470b589787
+size 76635
diff --git a/adversarialnasadversarialneuralarchitecturesearchforgans/be2b4076-d328-48f8-a665-778028f67b73_model.json b/adversarialnasadversarialneuralarchitecturesearchforgans/be2b4076-d328-48f8-a665-778028f67b73_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..12d3e4c5c4954032ac29bb589b1d04960b829525
--- /dev/null
+++ b/adversarialnasadversarialneuralarchitecturesearchforgans/be2b4076-d328-48f8-a665-778028f67b73_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:576cca3df3f827dad3b92c98cd1c0a6e482429fe5c2930125171808cf15de3d5
+size 99039
diff --git a/adversarialnasadversarialneuralarchitecturesearchforgans/be2b4076-d328-48f8-a665-778028f67b73_origin.pdf b/adversarialnasadversarialneuralarchitecturesearchforgans/be2b4076-d328-48f8-a665-778028f67b73_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d30f019e01054c2d14f95ac1cbd9f2335fab3741
--- /dev/null
+++ b/adversarialnasadversarialneuralarchitecturesearchforgans/be2b4076-d328-48f8-a665-778028f67b73_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:369934e4a9c012bdc85d96266bb9de80e650a17f3ea401a172798e24a98d7ebb
+size 360831
diff --git a/adversarialnasadversarialneuralarchitecturesearchforgans/full.md b/adversarialnasadversarialneuralarchitecturesearchforgans/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..43de8411e998ba76bcc0fa79c3197c243f15cd85
--- /dev/null
+++ b/adversarialnasadversarialneuralarchitecturesearchforgans/full.md
@@ -0,0 +1,336 @@
+# AdversarialNAS: Adversarial Neural Architecture Search for GANs
+
+Chen Gao $^{1,2,4}$ , Yunpeng Chen $^{4}$ , Si Liu $^{3*}$ , Zhenxiong Tan $^{5}$ , Shuicheng Yan $^{4}$
+
+$^{1}$ Institute of Information Engineering, Chinese Academy of Sciences $^{2}$ University of Chinese Academy of Sciences $^{3}$ School of Computer Science and Engineering, Beihang University $^{4}$ Yitu Technology $^{5}$ Beijing Forestry University
+
+gaochen@iie.ac.cn, liusi@buaa.edu.cn, yunpeng.chen@yitu-inc.com
+
+# Abstract
+
+Neural Architecture Search (NAS) that aims to automate the procedure of architecture design has achieved promising results in many computer vision fields. In this paper, we propose an AdversarialNAS method specially tailored for Generative Adversarial Networks (GANs) to search for a superior generative model on the task of unconditional image generation. The AdversarialNAS is the first method that can search the architectures of generator and discriminator simultaneously in a differentiable manner. During searching, the designed adversarial search algorithm does not need to compute any extra metric to evaluate the performance of the searched architecture, and the search paradigm considers the relevance between the two network architectures and improves their mutual balance. Therefore, AdversarialNAS is very efficient and only takes 1 GPU day to search for a superior generative model in the proposed large search space $(10^{38})$ . Experiments demonstrate the effectiveness and superiority of our method. The discovered generative model sets a new state-of-the-art FID score of 10.87 and highly competitive Inception Score of 8.74 on CIFAR-10. Its transferability is also proven by setting new state-of-the-art FID score of 26.98 and Inception score of 9.63 on STL-10. Code is at: https://github.com/chengaopro/AdversarialNAS.
+
+# 1. Introduction
+
+Image generation is a fundamental task in the field of computer vision. Recently, GANs [10] have attracted much attention due to their remarkable performance for generating realistic images. Previous architectures of GANs are designed by human experts with laborious trial-and-error testing (Fig. 1 a)), and the instability of GAN training greatly increases the difficulty of architecture design. Therefore, the architecture of the generative model in GAN
+
+
+Figure 1. Comparisons of different ways of designing GAN architectures. a) The previous hand-crafted GAN architectures depend on the experience of human experts. b) AutoGAN [9] adopts IS or FID as reward to update the architecture controller via reinforcement learning. c) The proposed AdversarialNAS searches architecture in a differentiable way with an adversarial search mechanism, which achieves better performance with higher efficiency.
+
+literature has very few types and can be simply divided into two styles: DCGANs-based [32] and ResNet-based [14]. On the other hand, the benefits of specially designing the network architecture have been proven by many discriminative networks, such as ResNet [14], DenseNet [17], MobileNet [34], ShuffleNet [46], EfficientNet [36] and HRNet [35]. Therefore, research on the backbone architecture of GANs needs more attention to further improve the performance of the generative model.
+
+Recently, Neural Architecture Search (NAS) has been studied intensively owing to its ability to automatically discover the optimal network architecture, which significantly reduces human labor. However, on generation tasks, specifically GAN-based generation, only AutoGAN [9] and AGAN [38] have explored the application of NAS.
+
+To design a NAS algorithm specially tailored for GANs on the unconditional image generation task, there are two main challenges. First, it is expected to utilize an efficient supervision signal to guide the search process in this unsupervised task. However, the existing works [9, 38] both adopt the Inception Score (IS) [33] or FID to evaluate the architecture performance and take IS or FID as a reward to update the architecture controller via a reinforcement learning strategy. Obtaining IS or FID needs to generate hundreds of images and use the statistics produced by an Inception network to calculate the final score. Thus it is extremely time-consuming, e.g., 200 GPUs over 6 days [38]. Second, the relevance and balance between generator and discriminator need to be considered during searching since the training process of GANs is a unique competition. However, AutoGAN searches for a generator with a pre-defined growing discriminator (Fig. 1 b)), where the architecture of the discriminator can be regarded as fixed and may prevent the algorithm from finding an optimal generator architecture.
+
+In this work, we propose an Adversarial Neural Architecture Search (AdversarialNAS) method to address the above challenges (Fig. 1 c)). First, we design a large search space $(10^{38})$ for fragile GAN and relax the search space to be continuous. Thus the architecture can be represented by a set of continuous variables obeying certain probability distribution and searched in a differentiable manner. Second, we propose to directly utilize the existing discriminator to evaluate the architecture of generator in each search iteration. Specifically, when searching for the generator architecture, the discriminator provides the supervision signal to guide the search direction, which is technically utilized to update the architecture distribution of generator through gradient descent. Therefore, our method is much more efficient since the extra computation cost for calculating evaluation metric is eliminated. Third, in order to consider the relevance and balance between the generator and discriminator, we propose to dynamically change the architecture of discriminator simultaneously during searching. Accordingly, we adopt the generator to evaluate the architecture of discriminator and compute the loss to update the discriminator architecture through ascending the stochastic gradient. The two architectures play against each other in a competition to continually improve their performance, which is essentially an adversarial searching mechanism. Therefore, the AdversarialNAS gets rid of calculating extra evaluation metric and solves the unsupervised task through an adversarial mechanism. It adequately considers the mutual balance between the two architectures, which benefits for searching a superior generative model.
+
+To sum up, our main contributions are three-fold.
+
+- We propose a novel AdversarialNAS method, which is the first gradient-based NAS method in the GAN field and achieves state-of-the-art performance with much higher efficiency. We design a large architecture search space $(10^{38})$ for GAN and make it feasible to search in. Our AdversarialNAS takes only 1 GPU day to search for an optimal architecture in this large space.
+
+- Considering that GAN training is a unique competition between two networks, the proposed AdversarialNAS alternately searches the architectures of both of them under an adversarial searching strategy to improve their mutual balance, which is specifically tailored for GAN.
+- The searched architecture has more advanced transferability and scalability while achieving state-of-the-art performance on both CIFAR-10 and STL-10 datasets.
+
+# 2. Related Work
+
+# 2.1. Generative Adversarial Networks
+
+Although Restricted Boltzmann Machines [15] and flow-based generative models [6] are both capable of generating natural images, GANs [10] are still the most widely used methods in recent years due to their impressive generation ability. GAN-based approaches have achieved advanced results in various generation tasks, such as image-to-image translation [18, 5, 19, 48], text-to-image translation [43, 45] and image inpainting [29]. However, the potential of GANs has not been fully explored since there is little work [32] studying the impact of architecture design on the performance of GANs. In this work, we aim to search for a powerful and effective network structure specifically for the generative model in an automatic manner.
+
+# 2.2. Neural Architecture Search
+
+Automatic Machine Learning (AutoML) has attracted lots of attention recently, and Neural Architecture Search (NAS) is one of its most important directions. The goal of NAS is to automatically search for an effective architecture that satisfies certain demands. The NAS technique has been applied to many computer vision tasks such as image classification [2, 25, 26, 31, 49], dense image prediction [24, 47, 3] and object detection [8, 30].
+
+Early works of NAS adopt heuristic methods such as reinforcement learning [49] and evolutionary algorithms [41]. Obtaining an architecture with remarkable performance using such methods requires huge computational resources, e.g., 2000 GPU days [41]. Therefore, many works design various strategies to reduce the expensive costs, including weight sharing [31], performance prediction [1], a progressive manner [25] and a one-shot mechanism [26, 42]. DARTS [26], in the one-shot literature, is the first approach that relaxes the search space to be continuous and conducts searching in a differentiable way. The architecture parameters and network weights can be trained simultaneously in an end-to-end fashion by gradient descent, which drastically compresses the search time.
+
+However, all of these methods are designed for recognition and supervised tasks. To the best of our knowledge, there have been few works [9] exploring the application of NAS to unsupervised or weakly supervised tasks. In this work, we present the first gradient-based NAS method in the GAN field and achieve state-of-the-art performance with much higher efficiency on the unsupervised image generation task.
+
+# 2.3. NAS in GANs
+
+Recently, a few works have attempted to incorporate neural architecture search with GANs. AutoGAN [9] adopts the reinforcement learning strategy to discover the architecture of generative models automatically. However, it only searches for the generator with a fixed discriminator architecture. This mechanism limits the performance of the searched generator since the stability of GAN training is highly dependent on the balance between these two players. Besides, the search space is relatively small $(10^{5})$ , thus its randomly searched architecture can achieve acceptable results, e.g., FID (lower is better): 21.39 (random) and 12.42 (search) on the CIFAR-10 dataset. AGAN [38] enlarges the search space specifically for the generative model, but the computational cost is expensive (1200 GPU days) under the reinforcement learning framework. The performance of the discovered model is slightly worse, e.g., FID: 30.5 on CIFAR-10. Moreover, the reward used to update the weights of the network controller during the evaluation stage is the Inception Score, which is not a suitable supervision signal to guide the architecture search since it is time-consuming.
+
+Instead, we search the architecture in a differentiable way and discard the evaluation stage. The reward of previous methods is obtained after a prolonged training and evaluation process, while our signal (loss) for guiding the search direction is given instantly in each iteration. Thus our method is more efficient. The designed adversarial search algorithm improves the mutual balance of the two networks for stabilizing and optimizing the search process.
+
+# 3. Method
+
+In this section, we first introduce the proposed search space of GANs and the way for relaxing it to be continuous. Then we describe the AdversarialNAS method.
+
+# 3.1. Search Space for GANs
+
+The goal of the proposed AdversarialNAS is to automatically search for a superior architecture of the generative model through an adversarial searching manner. Specifically, we aim to search for a series of cells, including Up-Cells and Down-Cells, as the building blocks to construct the final architecture of GAN. Three Up-Cells and four Down-Cells are stacked to form the generator and the discriminator, respectively. Since a convolutional neural network has a natural hierarchical structure and each layer has a unique function, we search for the cells each with a different architecture.
+
+We represent a cell as a Directed Acyclic Graph (DAG) consisting of an ordered sequence of N nodes (Fig. 2). The cell takes image features as input and outputs processed features, where each node $x_{i}$ in the DAG indicates an intermediate feature and each edge $f_{i,j}$ between two nodes $x_{i}, x_{j}$ is a specific operation. Since we aim to search for an optimal architecture of the generator, which is essentially an upsampling network, we design a search space for a specific Up-Cell with an almost fully connected topology, as shown on the left of Fig. 2. The Up-Cell consists of 4 nodes, and each node can be obtained from its previous nodes by selecting an operation from a candidate set according to the search algorithm. The search space of the generator $\mathbb{F}_G$ includes a candidate set of normal operations, which is designed as below.
+
+- None
+- Identity
+- Convolution 1x1, Dilation=1
+- Convolution 3x3, Dilation=1
+- Convolution 5x5, Dilation=1
+- Convolution 3x3, Dilation=2
+- Convolution 5x5, Dilation=2
+
+The 'None' means there is no operation between two corresponding nodes, which is used to change the topology of the cell. The 'Identity' denotes the skip connection operation that provides multi-scale features. The stride of these operations is 1 so that they will keep the spatial resolution. The search space of generator also contains a subset of upsampling operations, which is devised as below.
+
+- Transposed Convolution 3x3
+- Nearest Neighbor Interpolation
+- Bilinear Interpolation
+
+Note that these operations can only be searched on edges $0 \to 1$ and $0 \to 2$ in a specific Up-Cell. To search for the generator in an adversarial way, we simply invert the Up-Cell to form a Down-Cell (shown on the right of Fig. 2), ensuring their balance. The search space of the discriminator $\mathbb{F}_D$ also contains a candidate set of normal operations, which is the same as that of the Up-Cell. However, the candidate set of downsampling operations consists of
+
+- Average Pooling
+- Max Pooling
+- Convolution 3x3, Dilation=1
+- Convolution 5x5, Dilation=1
+- Convolution 3x3, Dilation=2
+- Convolution 5x5, Dilation=2
+
+With a stride of 2, the downsampling operations can only be searched on edges $2 \to 4$ and $3 \to 4$ . Therefore, during searching, there are in total $10^{38}$ different network architectures for GANs.
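+
+To make the cell structure concrete, a derived cell can be viewed as a small module that, at every node, sums the outputs of the selected operations applied to all of its predecessors; edges whose searched operation is 'None' are simply absent. The following sketch is only illustrative: the class name, key format, and the assumption that every node keeps at least one incoming edge are ours, not the released implementation.
+
+```python
+import torch.nn as nn
+
+class DerivedCell(nn.Module):
+    """A DAG cell: node j is the sum over i < j of op_{i->j}(node i)."""
+    def __init__(self, ops, num_nodes=5):
+        # `ops` maps an edge (i, j) to the nn.Module selected for that edge;
+        # edges pruned by the 'None' operation are simply not present.
+        super().__init__()
+        self.num_nodes = num_nodes
+        self.ops = nn.ModuleDict({f"{i}_{j}": op for (i, j), op in ops.items()})
+
+    def forward(self, x0):
+        nodes = [x0]  # node 0 is the cell input
+        for j in range(1, self.num_nodes):
+            incoming = [self.ops[f"{i}_{j}"](nodes[i])
+                        for i in range(j) if f"{i}_{j}" in self.ops]
+            nodes.append(sum(incoming))  # assumes at least one incoming edge per node
+        return nodes[-1]
+```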
+
+# 3.2. Continuous Relaxation of Architectures
+
+The goal of the search algorithm is to select a specific operation from the pre-defined candidate set for each edge. Therefore, the intermediate node $x_{n,j}$ in the n-th cell can be calculated through the selected functions and its previous connected nodes as $x_{n,j} = \sum_{i < j} f_{n,i,j}(x_{n,i})$ . For RL-based NAS algorithms, the function $f_{n,i,j}$ is directly sampled from the candidate set according to the learnable architecture controller. Inspired by the gradient-based NAS algorithm [26], we relax the function $f_{n,i,j}$ to a soft version through the Gumbel-Max trick [27]:
+
+$$
+f_{n,i,j}^{\text{soft}}(x) = \sum_{f \in \mathbb{F}_G} \frac{\exp\left(\left(p_{n,i,j}^{f} + o^{f}\right) / \tau\right)}{\sum_{f' \in \mathbb{F}_G} \exp\left(\left(p_{n,i,j}^{f'} + o^{f'}\right) / \tau\right)} f(x), \tag{1}
+$$
+
+where $o^f$ is noise sampled from the Gumbel $(0,1)$ distribution and $\tau$ is the softmax temperature. The $p_{n,i,j}^{f}$ is the probability of selecting a specific function $f$ on edge $i\rightarrow j$ of the n-th cell. The Gumbel version of the softmax is applied so that the soft weights follow the learned probability distribution more closely. Therefore, each edge contains a probability vector $[p^{f_1},\dots,p^{f_m}], m = |\mathbb{F}_G|$ . This discrete probability distribution is calculated through a simple softmax function as $p^f = \frac{\exp(\alpha^f)}{\sum_{f'\in\mathbb{F}_G}\exp(\alpha^{f'})}$ , where $\alpha$ is the learnable parameter. Therefore, the goal of searching for an architecture is converted to learning an optimal set of probability vectors for every edge, and the architecture can be derived from the learned probability distribution. Besides, in order to dynamically change the architecture of the discriminator simultaneously, we also introduce a set of continuous parameters $\beta$ for calculating the probability of each function in the discriminator as $q^{f} = \frac{\exp(\beta^{f})}{\sum_{f'\in\mathbb{F}_{D}}\exp(\beta^{f'})}$ . Therefore, the soft version of the function can be obtained analogously to the generator as
+
+$$
+f_{n,i,j}^{\text{soft}}(x) = \sum_{f \in \mathbb{F}_D} \frac{\exp\left(\left(q_{n,i,j}^{f} + o^{f}\right) / \tau\right)}{\sum_{f' \in \mathbb{F}_D} \exp\left(\left(q_{n,i,j}^{f'} + o^{f'}\right) / \tau\right)} f(x). \tag{2}
+$$
+
+Then, the proposed AdversarialNAS aims to learn a set of continuous parameters $\alpha$ and $\beta$ in a differentiable manner and obtain the final architecture of generator by simply preserving the most likely operations in the search space. Note that, we term the networks with all operations softly combined by the architecture parameters as Super-G and Super-D. The topology of the network would be changed by the learned high probability 'None' operation, and the 'Identity' operation would provide multi-scale fusion.
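+
+A minimal sketch of this relaxation on a single edge is given below: the edge holds one learnable parameter per candidate operation, converts the parameters to probabilities, perturbs them with Gumbel noise as in Eq. (1), and returns the softly weighted sum of all candidate outputs. Class and variable names are illustrative, not the released implementation.
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class MixedOp(nn.Module):
+    """One edge of the Super-network: a soft mixture of all candidate operations."""
+    def __init__(self, candidate_ops):
+        super().__init__()
+        self.ops = nn.ModuleList(candidate_ops)
+        # learnable architecture parameters (alpha or beta) for this edge
+        self.arch_param = nn.Parameter(torch.zeros(len(candidate_ops)))
+
+    def forward(self, x, tau=1.0):
+        p = F.softmax(self.arch_param, dim=0)       # p^f = exp(a^f) / sum_f' exp(a^f')
+        o = -torch.log(-torch.log(torch.rand_like(p) + 1e-20) + 1e-20)  # Gumbel(0,1) noise
+        weights = F.softmax((p + o) / tau, dim=0)   # soft selection as in Eq. (1)
+        return sum(w * op(x) for w, op in zip(weights, self.ops))
+```
+
+At the end of the search, the discrete architecture is obtained by keeping, on each edge, the operation with the largest probability.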
+
+# 3.3. Adversarial Architecture Search
+
+Before introducing the optimization strategy of the proposed AdversarialNAS, we first briefly revisit the optimization function in the classification literature. The searching process is formulated as a bilevel optimization problem:
+
+$$
+\begin{array}{l} \min_{\alpha} \; L_{val}(w^{*}(\alpha), \alpha) \\ \text{s.t.} \;\; w^{*}(\alpha) = \underset{w}{\arg\min} \; L_{train}(w, \alpha), \end{array} \tag{3}
+$$
+
+where $L_{val}$ and $L_{train}$ denote the loss functions on the validation and training sets, respectively. The goal of the search
+
+
+Figure 2. The search space of Up-cell and Down-Cell. The architectures of both Up-Cell and Down-Cell will continuously promote each other in an adversarial manner.
+
+algorithm is to discover an optimal architecture $\alpha^{*}$ by minimizing the validation loss $L_{val}(w^{*},\alpha)$ , where $w^{*}$ denotes the optimal weights of the current architecture $\alpha$ and is obtained by minimizing the training loss $L_{train}(w,\alpha)$ . Both the weights and the architecture are optimized via gradient descent.
+
+However, in the task of unconditional image generation, there are no labels to supervise the searching procedure. AutoGAN [9] and AGAN [38] apply IS to evaluate the architecture performance and optimize the architecture by an RL strategy. Computing IS requires generating hundreds of images and adopting an Inception model to infer the result offline after a prolonged training trajectory for each discrete architecture, which is extremely time-consuming. Therefore, we propose to make the architectures of the generator and discriminator compete with each other to improve both of their performance, i.e., utilizing the discriminator to guide the generator search and vice versa. AdversarialNAS leverages an adversarial optimization strategy that is inspired by the formulation of the original GANs [10] for optimizing the architecture in a differentiable way. Thus the optimization process is defined as a two-player min-max game with value function $V(\alpha, \beta)$ where the weights of each network must be currently optimal. The formulation of the introduced algorithm is given in Eqn. (4):
+
+$$
+\begin{array}{l} \min_{\alpha} \max_{\beta} V(\alpha, \beta) = \mathbb{E}_{x \sim p_{data}(x)} \left[ \log D(x \mid \beta, W_{D}^{*}(\beta)) \right] \\ \quad + \mathbb{E}_{z \sim p_{z}(z)} \left[ \log \left( 1 - D(G(z \mid \alpha, W_{G}^{*}(\alpha)) \mid \beta, W_{D}^{*}(\beta)) \right) \right] \end{array}
+$$
+
+s.t.
+
+$$
+\begin{array}{l} W_{D}^{*}(\beta) = \underset{W_{D}(\beta)}{\arg\max} \; \mathbb{E}_{x \sim p_{data}(x)} \left[ \log D(x \mid \beta, W_{D}(\beta)) \right] \\ \quad + \mathbb{E}_{z \sim p_{z}(z)} \left[ \log \left( 1 - D(G_{D_{\beta}}^{*}(z) \mid \beta, W_{D}(\beta)) \right) \right] \end{array}
+$$
+
+$$
+W_{G}^{*}(\alpha) = \underset{W_{G}(\alpha)}{\arg\min} \; \mathbb{E}_{z \sim p_{z}(z)} \left[ \log \left( 1 - D_{G_{\alpha}}^{*}(G(z \mid \alpha, W_{G}(\alpha))) \right) \right], \tag{4}
+$$
+
+where $p_{data}$ denotes the true data distribution and $p_z$ is a prior distribution. In the upper-level stage, $W_D^*(\beta)$ denotes the optimal weights of the discriminator under the specific architecture $\beta$ and $W_{G}^{*}(\alpha)$ represents the optimal weights of the generator under the architecture $\alpha$ . In the lower-level stage, the two optimal weights $\{W_{G}^{*}(\alpha), W_{D}^{*}(\beta)\}$ for any particular pair of architectures $\{\alpha, \beta\}$ can be obtained through another min-max game between $W_{G}$ and $W_{D}$ :
+
+$$
+\begin{array}{l} \min_{W_{G}(\alpha)} \max_{W_{D}(\beta)} V(W_{G}(\alpha), W_{D}(\beta)) = \mathbb{E}_{x \sim p_{data}(x)} \left[ \log D(x \mid \beta, W_{D}(\beta)) \right] \\ \quad + \mathbb{E}_{z \sim p_{z}(z)} \left[ \log \left( 1 - D(G(z \mid \alpha, W_{G}(\alpha)) \mid \beta, W_{D}(\beta)) \right) \right]. \end{array} \tag{5}
+$$
+
+However, this inner optimization (Eq. 5) is time-consuming. For NAS in the classification task [26, 7, 4], the inner optimization (Eq. 3) is normally approximated by one step of training as $\nabla_{\alpha}L_{val}(w^{*}(\alpha),\alpha)\approx \nabla_{\alpha}L_{val}(w-\xi \nabla_{w}L_{train}(w,\alpha),\alpha)$ . Inspired by this technique, for a given pair of architectures $\{\alpha ,\beta \}$ , the corresponding optimal weights $\{W_G^* (\alpha),W_D^* (\beta)\}$ can be obtained by a single step of adversarial training (Eq. 5), as in vanilla GANs.
+
+Algorithm 1 Minibatch stochastic gradient descent training of Adversarial Neural Architecture Search.
+
+1: for number of training iterations do
+2: for $k$ steps do
+3: Sample minibatch of $2m$ noise samples $\{z^{(1)},\dots,z^{(2m)}\}$ from noise prior.
+4: Sample minibatch of 2m examples $\left\{x^{(1)},\dots,x^{(2m)}\right\}$ from real data distribution.
+5: Update the architecture of discriminator by ascending its stochastic gradient:
+
+$$
+\nabla_{\beta} \frac{1}{m} \sum_{i = 1}^{m} \left[ \log D\left(x^{i}\right) + \log \left(1 - D\left(G\left(z^{i}\right)\right)\right) \right]
+$$
+
+6: Update the weights of discriminator by ascending its stochastic gradient:
+
+$$
+\nabla_{W_{D}} \frac{1}{m} \sum_{i = m + 1}^{2m} \left[ \log D(x^{i}) + \log (1 - D(G(z^{i}))) \right]
+$$
+
+7: end for
+
+8: Sample minibatch of $2m$ noise samples $\{z^{(1)},\dots,z^{(2m)}\}$ from noise prior.
+9: Update the architecture of generator by descending its stochastic gradient:
+
+$$
+\nabla_ {\alpha} \frac {1}{m} \sum_ {i = 1} ^ {m} \left[ \log \left(1 - D \left(G \left(z ^ {i}\right)\right)\right) \right]
+$$
+
+10: Update the weights of generator by descending its stochastic gradient:
+
+$$
+\nabla_ {W _ {G}} \frac {1}{m} \sum_ {i = m + 1} ^ {2 m} [ \log (1 - D (G (z ^ {i}))) ]
+$$
+
+11: end for
+
+Moreover, the min-max game between the two architectures can also be searched in an alternating way. Specifically, the currently optimal architecture of the generator for the given discriminator can be achieved through a single step of adversarial training, which has been proven by Goodfellow in [10]. The proposed AdversarialNAS algorithm is shown in Alg. 1, and the optimal architectures or weights in each iteration can be achieved by ascending or descending the corresponding stochastic gradient. Note that the architecture is updated first in each training iteration, which guarantees that the weights used for updating the corresponding architecture are currently optimal. For example, the discriminator used in the ninth line of Alg. 1 is $\mathrm{D}^*$ with the optimal architecture and weights for the current generator.
+
+The proposed AdversarialNAS method can be plugged into the original training procedure of GANs to search the architecture more naturally, which makes it specifically tailored for GANs.
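+
+In code, one search iteration of Algorithm 1 alternates four gradient steps: the discriminator architecture $\beta$ and weights $W_D$ (repeated $k$ times), then the generator architecture $\alpha$ and weights $W_G$. The sketch below assumes `super_g` / `super_d` are the Super-G / Super-D networks, that each optimizer holds the corresponding parameter group, and that `d_loss` / `g_loss` are defined so that minimizing them corresponds to the ascent/descent directions of Algorithm 1 (e.g., the hinge loss used in the experiments); fresh minibatches are drawn for each step instead of splitting one $2m$ batch, and all names are illustrative.
+
+```python
+def search_iteration(super_g, super_d, opt_beta, opt_wd, opt_alpha, opt_wg,
+                     sample_z, sample_real, d_loss, g_loss, k=1):
+    """One iteration of the adversarial architecture search (Algorithm 1 sketch)."""
+    for _ in range(k):
+        # update the discriminator architecture beta first (its weights are
+        # currently optimal for the present generator), then its weights W_D
+        z, x = sample_z(), sample_real()
+        opt_beta.zero_grad(); d_loss(super_d, super_g, x, z).backward(); opt_beta.step()
+        z, x = sample_z(), sample_real()
+        opt_wd.zero_grad(); d_loss(super_d, super_g, x, z).backward(); opt_wd.step()
+    # then the generator architecture alpha, followed by its weights W_G
+    z = sample_z()
+    opt_alpha.zero_grad(); g_loss(super_d, super_g, z).backward(); opt_alpha.step()
+    z = sample_z()
+    opt_wg.zero_grad(); g_loss(super_d, super_g, z).backward(); opt_wg.step()
+```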
+
+# 4. Experiments
+
+# 4.1. Experimental Setup
+
+Datasets. Following [9, 38], we adopt CIFAR-10 [22] and STL-10 to evaluate the effectiveness of our approach. CIFAR-10 contains 60,000 natural images covering 10 different classes at $32 \times 32$ spatial resolution. Concretely, we use its training set, which consists of 50,000 images, without any data augmentation technique to search for the optimal architecture of the generator. We also use this training set to train the discovered architecture. To further evaluate the transferability of the architecture, we also adopt a total of 105,000 images from the STL-10 dataset to directly train the searched architecture without any data augmentation, for a fair comparison with previous works.
+
+Implementation. We use Adam optimizer [21] and hinge loss to train the shared weights of Super-GAN and provide the supervision signal for updating the architectures. Specifically, the hyper-parameters of optimizers for training the weights of both generator and discriminator are set to $\beta_{1} = 0.0$ , $\beta_{2} = 0.9$ and learning rate is set to 0.0002. The hyper-parameters of optimizers for optimizing both architectures are set to $\beta_{1} = 0.5$ , $\beta_{2} = 0.9$ and the learning rate is 0.0003 with the weight decay of 0.0001. When searching, the batch size is set to 100 for both generator and discriminator, and we search for about 2,500 iterations. When training the derived generator, we directly adopt the discriminator used in AutoGAN [9] for a fair comparison, which is similar to the one in SNGAN [28]. The batch size is set to 40 for generator and 20 for discriminator, respectively. We train the network for about 500 epochs, and the hyper-parameters of the optimizer are the same as the ones in searching. Besides, the same as all other methods, we randomly generate 50,000 images for calculating the Inception Score and FID to evaluate the network performance.
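+
+The optimizer settings quoted above could be instantiated as follows; the parameter-group names are placeholders and the function is only a sketch of the stated hyper-parameters.
+
+```python
+import torch
+
+def build_optimizers(g_weight_params, d_weight_params, g_arch_params, d_arch_params):
+    # Weight optimizers: Adam with beta1 = 0.0, beta2 = 0.9, lr = 0.0002.
+    opt_w_g = torch.optim.Adam(g_weight_params, lr=2e-4, betas=(0.0, 0.9))
+    opt_w_d = torch.optim.Adam(d_weight_params, lr=2e-4, betas=(0.0, 0.9))
+    # Architecture optimizers: Adam with beta1 = 0.5, beta2 = 0.9,
+    # lr = 0.0003 and weight decay 0.0001.
+    opt_alpha = torch.optim.Adam(g_arch_params, lr=3e-4, betas=(0.5, 0.9), weight_decay=1e-4)
+    opt_beta = torch.optim.Adam(d_arch_params, lr=3e-4, betas=(0.5, 0.9), weight_decay=1e-4)
+    return opt_w_g, opt_w_d, opt_alpha, opt_beta
+```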
+
+Computational Costs. The proposed AdversarialNAS takes about 12 hours to converge when searching for an optimal architecture on two NVIDIA RTX 2080Ti GPUs. It requires only 1 GPU day to obtain the final architecture in a large search space (about $10^{38}$ ), while AutoGAN [9] requires 2 GPU days in a much smaller search space (about $10^{5}$ ) and AGAN [38] needs as many as 1200 GPU days for searching in a comparable space. Note that we directly use the released code of AutoGAN to search on the same hardware (a 2080Ti GPU), and the searching time of AGAN is taken from their original paper (running on an NVIDIA Titan X GPU).
+
+# 4.2. Compared with State-of-the-Art Approaches
+
+In this section, we discuss the searched architecture and compare its performance with state-of-the-art approaches including hand-crafted and auto-discovered ones. To explore the transferability of the discovered architecture, we directly apply it to another dataset and retrain its weights for comparing with other methods. Besides, we further study the scalability of the searched architecture and prove its superiority to other methods.
+
+# 4.2.1 Results on CIFAR-10
+
+At the end of the search, we directly derive the architecture from the search space by picking the operations with the maximum weights $\alpha$ . The optimal architecture searched on CIFAR-10 is shown in Tab. 1, and several valuable observations can be drawn from this table.
+
+- The searched generator prefers the 'Bilinear' operation for upsampling features although it has no learnable parameters. Besides, 'Bilinear Interpolation' provides more accurate upsampled features than the simple 'Nearest' operation, which is discovered by the searching algorithm.
+- Surprisingly, there is no dilated convolution in this architecture. It seems that, for low-resolution images $(32 \times 32)$ , simply stacking normal convolutions may already satisfy and achieve the optimal Effective Receptive Field (ERF) of the generator.
+- We can also observe that deeper cells tend to be shallower since more 'None' operations are preferred. Shallower cells have more multi-scale feature fusion operations, represented by the discovered 'Identity' connections parallel to convolutions.
+
+The quantitative comparisons with previous state-of-the-art methods are given in Tab. 2. From the table, we can see that the proposed AdversarialNAS is the first gradient-based approach that can search in a large search space with affordable cost. The designed search space has $10^{38}$ different architectures of GANs, which is several orders of magnitude larger than the search space $(10^{5})$ of AutoGAN [9]. Moreover, the proposed method only takes about 1 GPU day for searching for an optimal architecture while the AGAN [38] spends 1200 GPU days under a comparable search space. In the CIFAR-10 dataset, our discovered 'AdversarialNAS-GAN' achieves new state-of-the-art FID score (10.87), which is quite encouraging. It also obtains
+
+| Up-Cell | Edge | Operation | Num | Resolution |
+| Cell-1 | 0 → 1 | Bilinear | 1 | 4 → 8 |
+| | 0 → 2 | Bilinear | 1 | 4 → 8 |
+| | 1 → 3 | Identity | 1 | 8 → 8 |
+| | 1 → 4 | Conv 3 × 3 | 256 | 8 → 8 |
+| | 2 → 3 | None | - | - |
+| | 2 → 4 | Conv 3 × 3 | 256 | 8 → 8 |
+| | 3 → 4 | Identity | 1 | 8 → 8 |
+| | 3 → c2 | Bilinear | 1 | 8 → 16 |
+| | 3 → c3 | Nearest | 1 | 8 → 32 |
+| Cell-2 | 0 → 1 | Bilinear | 1 | 8 → 16 |
+| | 0 → 2 | Bilinear | 1 | 8 → 16 |
+| | 1 → 3 | None | - | - |
+| | 1 → 4 | Conv 3 × 3 | 256 | 16 → 16 |
+| | 2 → 3 | Identity | 1 | 16 → 16 |
+| | 2 → 4 | Conv 3 × 3 | 256 | 16 → 16 |
+| | 3 → 4 | Conv 3 × 3 | 256 | 16 → 16 |
+| | 3 → c3 | Nearest | 1 | 16 → 32 |
+| Cell-3 | 0 → 1 | Nearest | 1 | 16 → 32 |
+| | 0 → 2 | Bilinear | 1 | 16 → 32 |
+| | 1 → 3 | None | - | - |
+| | 1 → 4 | Conv 3 × 3 | 256 | 32 → 32 |
+| | 2 → 3 | Conv 3 × 3 | 256 | 32 → 32 |
+| | 2 → 4 | None | - | - |
+| | 3 → 4 | Conv 3 × 3 | 256 | 32 → 32 |
+
+Table 1. The searched optimal architecture of generator by the proposed AdversarialNAS on CIFAR-10 with no category labels used. The 'Num' indicates the number of operations.
+
+an Inception Score $(8.74 \pm 0.07)$ that is highly competitive with state-of-the-art Progressive GAN [20] $(8.80 \pm 0.05)$ and superior to AutoGAN [9] $(8.55 \pm 0.10)$ . It is worth noting that the Progressive GAN applies a well-designed progressive training strategy that is time-consuming, while we directly train the discovered generator as vanilla GANs.
+
+Besides, we randomly generate 50 images without cherry-picking, which are given in Fig. 3. These qualitative results demonstrate that our searched generator can create diverse images with realistic appearance and natural texture, without any sign of mode collapse.
+
+
+Figure 3. The CIFAR-10 images randomly generated by the discovered generator without cherry-picking.
+
+| Method | Search Method | Search Space | Search Cost (GPU days) | Size (MB) | IS↑ on C-10 | FID↓ on C-10 | IS↑ on S-10 | FID↓ on S-10 |
+| DCGANs [32] | | | | | 6.64 ± 0.14 | - | - | - |
+| Improved GAN [33] | | | | | 6.86 ± 0.06 | - | - | - |
+| LRGAN [44] | | | | | 7.17 ± 0.17 | - | - | - |
+| DFM [40] | | | | | 7.72 ± 0.13 | - | 8.51 ± 0.13 | - |
+| ProbGAN [13] | | | | | 7.75 | 24.6 | 8.87 ± 0.09 | 46.74 |
+| WGAN-GP, ResNet [12] | Manual | - | - | - | 7.86 ± 0.07 | - | - | - |
+| Splitting GAN [11] | | | | | 7.90 ± 0.09 | - | - | - |
+| MGAN [16] | | | | | 8.33 ± 0.10 | 26.7 | - | - |
+| Dist-GAN [37] | | | | | | 17.61 | - | 36.19 |
+| Progressive GAN [20] | | | | | 8.80 ± 0.05 | - | - | - |
+| Improving MMD-GAN [39] | | | | | 8.29 | 16.21 | 9.23 ± 0.08 | 37.64 |
+| SN-GAN [28] | | | | 4.3 | 8.22 ± 0.05 | 21.7 | 9.16 ± 0.12 | 40.1 |
+| AGAN [38] | RL | - | 1200 | 20.1 | 8.29 ± 0.09 | 30.5 | 9.23 ± 0.08 | 52.7 |
+| Random Search [23]† | Random | $10^{5}$ | 2 | - | 8.09 | 17.34 | - | - |
+| AutoGAN [9] | RL | $10^{5}$ | 2 | 4.4 | 8.55 ± 0.10 | 12.42 | 9.16 ± 0.12 | 31.01 |
+| Random Search [23]†† | Random | $10^{38}$ | 1 | 12.5 | 6.74 ± 0.07 | 38.32 | 7.66 ± 0.08 | 53.45 |
+| AdversarialNAS-GAN | Gradient | $10^{38}$ | 1 | 8.8 | 8.74 ± 0.07 | 10.87 | 9.63 ± 0.19 | 26.98 |
+
+Table 2. The quantitative comparisons with state-of-the-art approaches. $\dagger$ indicates the results are achieved in the search space of AutoGAN and †† denotes the results in our search space.
+
+# 4.2.2 Transferability of the Architectures
+
+Following the setting of AutoGAN [9] and AGAN [38], we directly apply the generator searched on CIFAR-10 to the STL-10 dataset to evaluate the transferability of the architecture. Specifically, we train this network on all 105,000 images without using any labels. The number of training epochs is the same as on CIFAR-10, and we again randomly generate 50,000 images for calculating the Inception Score and FID. We change the resolution of the input noise to $6 \times 6$ to generate images of size $48 \times 48$, as AutoGAN and AGAN do.
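+
+Since each of the three up-sampling cells in Table 1 doubles the spatial resolution, the output image is simply $8\times$ the latent resolution, which is why a $4 \times 4$ noise tensor yields $32 \times 32$ CIFAR-10 images and a $6 \times 6$ one yields $48 \times 48$ STL-10 images. The toy sketch below is a hypothetical stand-in (not the searched architecture) that only illustrates this resolution arithmetic:
+
+```python
+import torch
+import torch.nn as nn
+
+class ToyUpGenerator(nn.Module):
+    """Hypothetical 3-cell generator: each cell upsamples by 2x."""
+    def __init__(self, channels=256):
+        super().__init__()
+        self.cells = nn.ModuleList([
+            nn.Sequential(
+                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
+                nn.Conv2d(channels, channels, 3, padding=1),
+                nn.ReLU(inplace=True),
+            ) for _ in range(3)
+        ])
+        self.to_rgb = nn.Conv2d(channels, 3, 3, padding=1)
+
+    def forward(self, z):
+        for cell in self.cells:
+            z = cell(z)
+        return torch.tanh(self.to_rgb(z))
+
+g = ToyUpGenerator()
+print(g(torch.randn(1, 256, 4, 4)).shape)  # -> (1, 3, 32, 32), CIFAR-10 size
+print(g(torch.randn(1, 256, 6, 6)).shape)  # -> (1, 3, 48, 48), STL-10 size
+```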
+
+The quantitative results are shown in Tab. 2. We observe that our network does not overfit to the CIFAR-10 dataset and generalizes well. Specifically, it achieves state-of-the-art Inception Score (9.63) and FID (26.98) on STL-10, far better than all hand-crafted and automatically discovered methods. Qualitative results are given in Fig. 4 to demonstrate its ability to generate diverse and realistic images.
+
+# 4.2.3 Scalability of the Architectures
+
+In this section, we further explore the scalability of the discovered architecture on the CIFAR-10 dataset.
+
+We compare our searched generator with two representative works: the manually designed SNGAN [28] and the automatically discovered AutoGAN [9]. We scale the parameter size of these generators from 1 MB to 25 MB by varying the channel dimension, which covers a wide range. Note that, for a fair comparison, we use the same discriminator with a fixed size in
+
+
+Figure 4. The STL-10 images randomly generated without cherry-picking by the generator discovered on CIFAR-10.
+
+all experiments to observe the impact of changing the generator capacity. The comparisons are illustrated in Fig. 5 and Fig. 6. The x-axis in both figures denotes the parameter size (MB) of the generator; the y-axis is IS in Fig. 5 and FID in Fig. 6. These experiments demonstrate that our searched architecture is more stable and almost unaffected when the model size is scaled. When the size is compressed to only 1 MB, both SNGAN and AutoGAN suffer severe performance degradation, while the performance of 'AdversarialNAS-GAN' is almost unchanged. One would expect performance to drop when the generator is enlarged, because the enlarged generator is no longer balanced with the fixed-size discriminator. However, both Fig. 5 and Fig. 6 show that our discovered architecture does not suffer from such a performance drop, which means it is more robust and more tolerant of the discriminator it is paired with.
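+
+As a rough illustration of what sweeping the generator size through the channel dimension means, the standalone snippet below counts the parameters of a small conv stack at several widths; since $3 \times 3$ convolutions dominate, the size grows roughly quadratically with the channel count (the stack is a hypothetical stand-in, not the actual generators compared in Fig. 5 and Fig. 6):
+
+```python
+import torch.nn as nn
+
+def conv_stack_size_mb(width, depth=3):
+    # Hypothetical stand-in: `depth` 3x3 conv layers at a given channel width.
+    stack = nn.Sequential(*[nn.Conv2d(width, width, 3, padding=1) for _ in range(depth)])
+    n_params = sum(p.numel() for p in stack.parameters())
+    return n_params * 4 / 2**20        # float32 bytes -> MB
+
+for w in (64, 128, 256, 512):
+    print(f"width={w:4d}  ~{conv_stack_size_mb(w):6.2f} MB")
+```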
+
+| Method | Discriminator Type | Discriminator Architecture | IS↑ (CIFAR-10) | FID↓ (CIFAR-10) | IS↑ (STL-10) | FID↓ (STL-10) |
| Random Search | Fixed | AutoGAN-D | 6.74 ± 0.13 | 38.32 | 7.66 ± 0.11 | 53.45 |
| SingalNAS | Fixed | SNGAN-D | 7.72 ± 0.03 | 27.79 | 6.56 ± 0.12 | 84.19 |
| SingalNAS | Fixed | AutoGAN-D | 7.86 ± 0.08 | 24.04 | 8.52 ± 0.05 | 38.85 |
| SingalNAS | Fixed | Super-D | 7.77 ± 0.05 | 23.01 | 8.62 ± 0.03 | 41.57 |
| AdversarialNAS | Dynamic | Searched-D | 8.74 ± 0.07 | 10.87 | 9.63 ± 0.19 | 26.98 |
+
+Table 3. We search the generative model on CIFAR-10 with different methods and retrain the weights of the searched architectures to evaluate their performance on both CIFAR-10 and STL-10.
+
+# 4.3. Ablation Studies
+
+To further evaluate the effectiveness of the proposed AdversarialNAS, we conduct a series of ablation studies.
+
+First, we apply a random search strategy [23] to search for the generator, where we adopt the fixed-architecture discriminator of AutoGAN for a fair comparison. The performance of the searched generative model is shown in Tab. 3. Second, we propose 'SingalNAS' to search for the optimal generator against different fixed discriminator architectures, whose weights can still be trained. Accordingly, the supervision signal for updating the generator architecture comes from a discriminator with a fixed architecture, which does not change dynamically with the generator during searching. We adopt the discriminator architectures of SNGAN and AutoGAN, respectively. In addition, to verify the influence of our search space, we also conduct 'SingalNAS' with the fixed Super-D. Third, we use the proposed 'AdversarialNAS' to search the generator and discriminator simultaneously. Note that the time spent on both searching and training is constrained to be the same across all experiments.
+
+The effectiveness of our adversarial searching strategy can be observed from the comparisons in Tab. 3.
+
+
+Figure 5. Inception Score curves of different methods.
+
+
+Figure 6. FID curves of different methods.
+
+# 5. Conclusion
+
+In this work, we propose a large search space for GANs and a novel AdversarialNAS method to search for a superior generative model automatically. The proposed search algorithm can be directly inserted into the standard GAN training procedure and searches the generator architecture in a differentiable manner through an adversarial mechanism, which greatly reduces the search cost. The discovered network achieves state-of-the-art performance on both the CIFAR-10 and STL-10 datasets, and it also exhibits strong transferability and scalability.
+
+Furthermore, we believe the idea behind AdversarialNAS is not specific to GAN architecture search and may benefit other fields in which multiple network architectures influence each other, such as network architecture distillation, pruning, and mutual learning.
+
+Acknowledgement This work is partially supported by the National Natural Science Foundation of China (Grant 61572493, Grant 61876177), Beijing Natural Science Foundation (L182013, 4202034) and Fundamental Research Funds for the Central Universities. This work is also sponsored by Zhejiang Lab (No.2019KD0AB04).
+
+# References
+
+[1] Andrew Brock, Theodore Lim, James Millar Ritchie, and Nicholas J Weston. Smash: One-shot model architecture search through hypernetworks. In ICLR, 2018.
+[2] Han Cai, Ligeng Zhu, and Song Han. Proxylessnas: Direct neural architecture search on target task and hardware. arXiv preprint arXiv:1812.00332, 2018.
+[3] Liang-Chieh Chen, Maxwell Collins, Yukun Zhu, George Papandreou, Barret Zoph, Florian Schroff, Hartwig Adam, and Jon Shlens. Searching for efficient multi-scale architectures for dense image prediction. In NIPS, 2018.
+[4] Xin Chen, Lingxi Xie, Jun Wu, and Qi Tian. Progressive differentiable architecture search: Bridging the depth gap between search and evaluation. arXiv preprint arXiv:1904.12760, 2019.
+[5] Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. In CVPR, 2018.
+[6] Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014.
+[7] Xuanyi Dong and Yi Yang. One-shot neural architecture search via self-evaluated template network. In ICCV, 2019.
+[8] Golnaz Ghiasi, Tsung-Yi Lin, and Quoc V Le. Nas-fpn: Learning scalable feature pyramid architecture for object detection. In CVPR, 2019.
+[9] Xinyu Gong, Shiyu Chang, Yifan Jiang, and Zhangyang Wang. Autogan: Neural architecture search for generative adversarial networks. In ICCV, 2019.
+[10] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
+[11] Guillermo L. Grinblat, Lucas C. Uzal, and Pablo M. Granitto. Class-splitting generative adversarial networks. ArXiv, abs/1709.07359, 2017.
+[12] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C. Courville. Improved training of wasserstein gans. In NIPS, 2017.
+[13] Hao He, Hao Wang, Guang-He Lee, and Yonglong Tian. Probgan: Towards probabilistic gan with theoretical guarantees. In ICLR, 2019.
+[14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
+[15] Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. science, 313(5786):504-507, 2006.
+[16] Quan Hoang, Tu Dinh Nguyen, Trung Le, and Dinh Q. Phung. Mgan: Training generative adversarial nets with multiple generators. In ICLR, 2018.
+[17] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In CVPR, 2017.
+[18] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
+
+[19] Wentao Jiang, Si Liu, Chen Gao, Jie Cao, Ran He, Jiashi Feng, and Shuicheng Yan. Psgan: Pose and expression robust spatial-aware gan for customizable makeup transfer. arXiv preprint arXiv:1909.06956, 2019.
+[20] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017.
+[21] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+[22] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
+[23] Liam Li and Ameet Talwalkar. Random search and reproducibility for neural architecture search. arXiv preprint arXiv:1902.07638, 2019.
+[24] Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan L Yuille, and Li Fei-Fei. Auto-deeplab: Hierarchical neural architecture search for semantic image segmentation. In CVPR, pages 82–92, 2019.
+[25] Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive neural architecture search. In ECCV, 2018.
+[26] Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055, 2018.
+[27] Chris J Maddison, Daniel Tarlow, and Tom Minka. A* sampling. In NIPS, 2014.
+[28] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957, 2018.
+[29] Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016.
+[30] Junran Peng, Ming Sun, Zhaoxiang Zhang, Tieniu Tan, and Junjie Yan. Efficient neural architecture transformation search in channel-level for object detection. arXiv preprint arXiv:1909.02293, 2019.
+[31] Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, and Jeff Dean. Efficient neural architecture search via parameter sharing. In ICML, 2018.
+[32] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
+[33] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In NIPS, 2016.
+[34] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In CVPR, 2018.
+[35] Ke Sun, Bin Xiao, Dong Liu, and Jingdong Wang. Deep high-resolution representation learning for human pose estimation. In CVPR, 2019.
+[36] Mingxing Tan and Quoc V Le. Efficientnet: Rethinking model scaling for convolutional neural networks. arXiv preprint arXiv:1905.11946, 2019.
+
+[37] Ngoc-Trung Tran, Tuan-Anh Bui, and Ngai-Man Cheung. Dist-gan: An improved gan using distance constraints. In ECCV, 2018.
+[38] Hanchao Wang and Jun Huan. Agan: Towards automated design of generative adversarial networks. arXiv preprint arXiv:1906.11080, 2019.
+[39] Wei Wang, Yuan Sun, and Saman K. Halgamuge. Improving mmd-gan training with repulsive loss function. ArXiv, abs/1812.09916, 2018.
+[40] David Warde-Farley and Yoshua Bengio. Improving generative adversarial networks with denoising feature matching. In ICLR, 2017.
+[41] Lingxi Xie and Alan Yuille. Genetic cnn. In ICCV, 2017.
+[42] Sirui Xie, Hehui Zheng, Chunxiao Liu, and Liang Lin. Snas: stochastic neural architecture search. arXiv preprint arXiv:1812.09926, 2018.
+[43] Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, and Xiaodong He. Attngan: Fine-grained text to image generation with attentional generative adversarial networks. In CVPR, 2018.
+[44] Jianwei Yang, Anitha Kannan, Dhruv Batra, and Devi Parikh. Lr-gan: Layered recursive generative adversarial networks for image generation. arXiv preprint arXiv:1703.01560, 2017.
+[45] Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. In ICCV, 2017.
+[46] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In CVPR, 2018.
+[47] Yiheng Zhang, Zhaofan Qiu, Jingen Liu, Ting Yao, Dong Liu, and Tao Mei. Customizable architecture search for semantic segmentation. In CVPR, 2019.
+[48] Defa Zhu, Si Liu, Wentao Jiang, Chen Gao, Tianyi Wu, and Guodong Guo. Ugan: Untraceable gan for multi-domain face translation. arXiv preprint arXiv:1907.11418, 2019.
+[49] Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.
\ No newline at end of file
diff --git a/adversarialnasadversarialneuralarchitecturesearchforgans/images.zip b/adversarialnasadversarialneuralarchitecturesearchforgans/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..e8e37783ca0e0a6b0880cf74853f69055649e67d
--- /dev/null
+++ b/adversarialnasadversarialneuralarchitecturesearchforgans/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5189fddd8571269bd2d91d40a89bcb9f20bbcb7138079910180847cb1e993578
+size 582756
diff --git a/adversarialnasadversarialneuralarchitecturesearchforgans/layout.json b/adversarialnasadversarialneuralarchitecturesearchforgans/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..33604784e2d2d71fc6fe25350426992e4539fa56
--- /dev/null
+++ b/adversarialnasadversarialneuralarchitecturesearchforgans/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a3454c13a291a77d77a4e2677941f09123cb6f72fd195b0e51adb9ec15a69e37
+size 395683
diff --git a/adversarialrobustnessfromselfsupervisedpretrainingtofinetuning/722b658f-a3ad-4775-923e-5bdedf801b86_content_list.json b/adversarialrobustnessfromselfsupervisedpretrainingtofinetuning/722b658f-a3ad-4775-923e-5bdedf801b86_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..d323608c6ffb65cd27c69ca10d6fb92b4691ebbf
--- /dev/null
+++ b/adversarialrobustnessfromselfsupervisedpretrainingtofinetuning/722b658f-a3ad-4775-923e-5bdedf801b86_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e4ec8cf4abe67e1599d3b699d5be6b3eed025d4a6a1c60a4bcb443320786ba8d
+size 79355
diff --git a/adversarialrobustnessfromselfsupervisedpretrainingtofinetuning/722b658f-a3ad-4775-923e-5bdedf801b86_model.json b/adversarialrobustnessfromselfsupervisedpretrainingtofinetuning/722b658f-a3ad-4775-923e-5bdedf801b86_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..f7ee3b24d8969fa5c90f8b3df3571370869a19ab
--- /dev/null
+++ b/adversarialrobustnessfromselfsupervisedpretrainingtofinetuning/722b658f-a3ad-4775-923e-5bdedf801b86_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b3ea219304dd8d255cc2b0702ddfa69785c1fa582a0101cb2d57b97b2e72ec88
+size 98357
diff --git a/adversarialrobustnessfromselfsupervisedpretrainingtofinetuning/722b658f-a3ad-4775-923e-5bdedf801b86_origin.pdf b/adversarialrobustnessfromselfsupervisedpretrainingtofinetuning/722b658f-a3ad-4775-923e-5bdedf801b86_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..6f9dcee840b4cb4413195822f97c6fad10a1c760
--- /dev/null
+++ b/adversarialrobustnessfromselfsupervisedpretrainingtofinetuning/722b658f-a3ad-4775-923e-5bdedf801b86_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b7e122c1194b4fba918748a00e89b1b5b4b93f2979e400f1c98a2ceeeaa8dafc
+size 714144
diff --git a/adversarialrobustnessfromselfsupervisedpretrainingtofinetuning/full.md b/adversarialrobustnessfromselfsupervisedpretrainingtofinetuning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..28352eda3eb2add162490c46a5253373586fecfd
--- /dev/null
+++ b/adversarialrobustnessfromselfsupervisedpretrainingtofinetuning/full.md
@@ -0,0 +1,295 @@
+# Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning
+
+Tianlong Chen1, Sijia Liu2, Shiyu Chang2, Yu Cheng3, Lisa Amini2, Zhangyang Wang1
+1Texas A&M University, 2MIT-IBM Watson AI Lab, IBM Research 3Microsoft Dynamics 365 AI Research
+
+{wiwjp619,atlaswang}@tamu.edu, {sijia.liu,shiyu.chang,lisa.amini}@ibm.com, yu.cheng@microsoft.com
+
+# Abstract
+
+Pretrained models from self-supervision are prevalently used to fine-tune downstream tasks faster or for better accuracy. However, gaining robustness from pretraining is left unexplored. We introduce adversarial training into self-supervision, to provide general-purpose robust pretrained models for the first time. We find these robust pretrained models can benefit the subsequent fine-tuning in two ways: i) boosting final model robustness; ii) saving the computation cost, if proceeding towards adversarial fine-tuning. We conduct extensive experiments to demonstrate that the proposed framework achieves large performance margins (e.g., $3.83\%$ on robust accuracy and $1.3\%$ on standard accuracy, on the CIFAR-10 dataset), compared with the conventional end-to-end adversarial training baseline. Moreover, we find that different self-supervised pretrained models have diverse adversarial vulnerability. It inspires us to ensemble several pretraining tasks, which boosts robustness more. Our ensemble strategy contributes to a further improvement of $3.59\%$ on robust accuracy, while maintaining a slightly higher standard accuracy on CIFAR-10. Our codes are available at https://github.com/TAMU-VITA/Adv-SS-Pretraining.
+
+# 1. Introduction
+
+Supervised training of deep neural networks requires massive, labeled datasets, which may be unavailable and costly to assemble [15, 2, 28, 36]. Self-supervised and unsupervised training techniques attempt to address this challenge by eliminating the need for manually labeled data. Representations pretrained through self-supervised techniques enable fast fine-tuning to multiple downstream tasks, and lead to better generalization and calibration [20, 23]. Examples of tasks proven to attain high accuracy through self-supervised pretraining include position predicting tasks (Selfie [35], Jigsaw [25, 3]), rotation predicting tasks (Rotation [9]), and a variety of other perception tasks [6, 41, 8].
+
+The labeling and sample efficiency challenges of deep learning are further exacerbated by vulnerability to adversarial attacks. For example, Convolutional Neural Networks
+
+
+Figure 1: Summary of our achieved performance (CIFAR-10). The upper right corner indicates the best performance in terms of both standard and robust accuracy. The size of markers represents the number of training epochs to achieve the best robust accuracy. Black circle $(\bullet)$ is the baseline method: end-to-end adversarial training. Blue circles $(\bullet)$ are fine-tuned models that inherit robust models from different self-supervised pretraining tasks. Orange circle $(\bullet)$ is the ensemble of three self-supervised pretraining tasks. Red Star $(\star)$ is the ensemble of three fine-tuned models. The correspondence between the marker size and # epochs is given by, Ensemble Fine-tune $(\star, 144$ epochs) $>$ Baseline $(\bullet, 99$ epochs) $>$ Ensemble Pretrain $(\bullet, 56$ epochs) $>$ Selfie $(\bullet, 50$ epochs) $>$ Jigsaw $(\bullet, 48$ epochs) $>$ Rotation $(\bullet, 46$ epochs)
+
+(CNNs) are widely leveraged for perception tasks, due to high predictive accuracy. However, even a well-trained CNN suffers from high misclassification rates when imperceivable perturbations are applied to the input [18, 24]. As suggested by [30], the sample complexity of learning an adversarially robust model with current methods is significantly higher than that of standard learning. Adversarial training (AT) [21], the state-of-the-art model defense approach, is also known to be computationally more expensive than standard training (ST). The above facts make it especially meaningful to explore:
+
+Can appropriately pretrained models play a similar role for adversarial training as they have for ST? That is, can they lead to more efficient fine-tuning and better, adversarially robust generalization?
+
+Self-supervision has only recently been linked to the study of robustness. An approach is offered in [14], by incorporating the self-supervised task as a complementary objective, which is co-optimized with the conventional classification loss through the method of AT [21]. Their co-optimization approach presents scalability challenges, and does not enjoy the benefits of pretrained embeddings. Further, it leaves many unanswered questions, especially with respect to efficient tuning, which we tackle in this paper.
+
+Contributions. This paper introduces a framework for self-supervised pretraining and fine-tuning into the adversarial robustness field. We motivate our study with the following three scientific questions:
+
+Q1: Is an adversarially pretrained model effective in boosting the robustness of subsequent fine-tuning?
+Q2: Which provides the better accuracy and efficiency: adversarial pretraining or adversarial fine-tuning?
+Q3: How does the type of self-supervised pretraining task affect the final model's robustness?
+
+Our contributions address the above questions and can be summarized as follows:
+
+A1: We demonstrate for the first time that robust pretrained models leveraged for adversarial fine-tuning result in a large performance gain. As illustrated by Figure 1, the best pretrained model from a single self-supervised task (Selfie) leads to a $3.83\%$ gain in robust accuracy$^{1}$ and a $1.3\%$ gain in standard accuracy on CIFAR-10 when adversarially fine-tuned, compared with the strong AT baseline. Even performing standard fine-tuning (which consumes fewer resources) with the robust pretrained models improves the resulting model's robustness.
+A2: We systematically study all possible combinations between pretraining and fine-tuning. Our extensive results reveal that adversarial fine-tuning contributes to the dominant portion of robustness improvement, while robust pretraining mainly speeds up adversarial fine-tuning. That can also be read from Figure 1 (smaller marker sizes denote less training epochs needed).
+A3: We experimentally show that the pretrained models resulting from different self-supervised tasks have diverse adversarial vulnerabilities. In view of that, we propose to pretrain with an ensemble of self-supervised tasks, in order to leverage their complementary strengths. On CIFAR-10, our ensemble strategy further contributes to an improvement of $3.59\%$ on robust accuracy, while maintaining a slightly higher standard accuracy. Our
+
+approach establishes a new benchmark result on standard accuracy $(86.04\%)$ and robust accuracy $(54.64\%)$ in the setting of AT.
+
+# 2. Related Work
+
+Self-supervised pretraining. Numerous self-supervised learning methods have been developed in recent years, including: region/component filling (e.g. inpainting [6] and colorization [41]); rotation prediction [9]; category prediction [8]; and patch-based spatial composition prediction (e.g., Jigsaw [25, 3] and Selfie [35]). All perform standard training, and do not tackle adversarial robustness. For example, Selfie [35] generalizes BERT to image domains: it masks out a few patches in an image and then attempts to classify the right patch to reconstruct the original image. Selfie is first pretrained on unlabeled data and then fine-tuned towards the downstream classification task.
+
+Adversarial robustness. Many defense methods have been proposed to improve model robustness against adversarial attacks. Approaches range from adding stochasticity [7], to label smoothening and feature squeezing [27, 38], to denoising and training on adversarial examples [22, 19]. A handful of recent works point out that those empirical defenses could still be easily compromised [1]. Adversarial training (AT) [21] provides one of the strongest current defenses, by training the model over the adversarially perturbed training data, and has not yet been fully compromised by new attacks. [10, 16] showed AT is also effective in compressing or accelerating models [42] while preserving learned robustness.
+
+Several works have demonstrated model ensembles [32, 34] to boost adversarial robustness, as the ensemble diversity can challenge the transferability of adversarial examples. Recent proposals [26, 37] formulate the diversity as a training regularizer for improved ensemble defense. Their success inspires our ensembled self-supervised pretraining.
+
+Unlabeled data for adversarial robustness. Self-supervised training learns effective representations for improving performance on downstream tasks, without requiring labels. Because robust training methods have higher sample complexity, there has been significant recent attention on how to effectively utilize unlabeled data to train robust models.
+
+Recent results [31, 4] show that unlabeled data can be a competitive alternative to labeled data for training adversarially robust models. These findings are corroborated by [39], which also finds that learning with more unlabeled data can lead to better adversarially robust generalization. Both works [31, 4] use unlabeled data to form an unsupervised auxiliary loss (e.g., a label-independent robust regularizer or a pseudo-label loss).
+
+To the best of our knowledge, [14] is the only work so far that utilizes unlabeled data via self-supervision to train a robust model given a target supervised classification task. It improves AT by leveraging the rotation prediction self-supervision as an auxiliary task, which is co-optimized with the conventional AT loss. Our self-supervised pretraining and fine-tuning differ from all above settings.
+
+# 3. Our Proposal
+
+In this section, we introduce self-supervised pretraining to learn feature representations from unlabeled data, followed by fine-tuning on a target supervised task. We then generalize adversarial training (AT) to different self-supervised pretraining and fine-tuning schemes.
+
+# 3.1. Setup
+
+Self-Supervised Pretraining Let $\mathcal{T}_{\mathrm{p}}$ denote a pretraining task and $\mathcal{D}_{\mathrm{p}}$ denote the corresponding (unlabeled) pretraining dataset. The goal of self-supervised pretraining is to learn a model from $\mathcal{D}_{\mathrm{p}}$ itself without explicit manual supervision. This is often cast as an optimization problem, in which a proposed pretraining loss $\ell_{\mathrm{p}}(\theta_{\mathrm{p}},\theta_{\mathrm{pc}};\mathcal{D}_{\mathrm{p}})$ is minimized to determine a model parameterized by $\theta_{\mathrm{p}}$ . Here $\theta_{\mathrm{pc}}$ signifies additional parameters customized for a given $\mathcal{T}_{\mathrm{p}}$ . In the rest of the paper, we focus on the following self-supervised pretraining tasks (details on each pretraining task are provided in the supplement):
+
+Selfie [35]: By masking out select patches in an image, Selfie constructs a classification problem to determine the correct patch to be filled in the masked location.
+
+Rotation [9]: By rotating an image by a random multiple of 90 degrees, Rotation constructs a classification problem to determine the degree of rotation applied to an input image.
+
+Jigsaw [25, 3]: By dividing an image into different patches, Jigsaw trains a classifier to predict the correct permutation of these patches.
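+
+As a concrete illustration of one of these pretext tasks, the sketch below builds the labels for Rotation self-supervision: every unlabeled image is rotated by $k \times 90$ degrees and the pretraining loss is a standard 4-way cross-entropy on the predicted rotation (a minimal sketch only; the actual heads and training protocol follow [9] and the supplement):
+
+```python
+import torch
+
+def make_rotation_batch(images):
+    # images: (N, C, H, W) unlabeled batch. Returns 4N rotated images and
+    # rotation labels in {0, 1, 2, 3} corresponding to k * 90 degrees.
+    rotated, labels = [], []
+    for k in range(4):
+        rotated.append(torch.rot90(images, k, dims=(2, 3)))
+        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
+    return torch.cat(rotated), torch.cat(labels)
+
+x = torch.randn(8, 3, 32, 32)            # stand-in for unlabeled CIFAR-10 images
+x_rot, y_rot = make_rotation_batch(x)    # shapes: (32, 3, 32, 32), (32,)
+```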
+
+Supervised Fine-tuning Let $\mathbf{r}(\mathbf{x};\boldsymbol{\theta}_{\mathrm{p}})$ denote the mapping (parameterized by $\boldsymbol{\theta}_{\mathrm{p}}$ ) from an input sample $\mathbf{x}$ to its embedding space learnt from the self-supervised pretraining task $\mathcal{T}_{\mathrm{p}}$ . Given a target finetuning task $\mathcal{T}_{\mathrm{f}}$ with the labeled dataset $\mathcal{D}_{\mathrm{f}}$ , the goal of fine-tuning is to determine a classifier, parameterized by $\boldsymbol{\theta}_{\mathrm{f}}$ , which maps the representation $\mathbf{r}(\mathbf{x};\boldsymbol{\theta}_{\mathrm{p}})$ to the label space. To learn the classifier, one can minimize a common supervised training loss $\ell_{\mathrm{f}}(\boldsymbol{\theta}_{\mathrm{p}},\boldsymbol{\theta}_{\mathrm{f}};\mathcal{D}_{\mathrm{f}})$ with a fixed or re-trainable model $\boldsymbol{\theta}_{\mathrm{p}}$ , corresponding to partial fine-tuning and full fine-tuning, respectively.
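+
+In code, the difference between partial and full fine-tuning amounts to whether $\boldsymbol{\theta}_{\mathrm{p}}$ is frozen or updated. A minimal PyTorch sketch follows, with a hypothetical `backbone`/`classifier` split standing in for $\mathbf{r}(\cdot;\boldsymbol{\theta}_{\mathrm{p}})$ and $\boldsymbol{\theta}_{\mathrm{f}}$:
+
+```python
+import torch.nn as nn
+from torch.optim import SGD
+
+# Hypothetical stand-ins: `backbone` plays the role of r(.; theta_p) from
+# self-supervised pretraining, `classifier` plays the role of theta_f.
+backbone = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
+                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
+classifier = nn.Linear(64, 10)
+
+def build_finetune_optimizer(full_finetune):
+    if full_finetune:
+        # Full fine-tuning: both theta_p and theta_f are retrained.
+        params = list(backbone.parameters()) + list(classifier.parameters())
+    else:
+        # Partial fine-tuning: theta_p stays fixed, only theta_f is learnt.
+        for p in backbone.parameters():
+            p.requires_grad_(False)
+        params = list(classifier.parameters())
+    return SGD(params, lr=0.1, momentum=0.9)
+```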
+
+AT versus standard training (ST) AT is known as one of the most powerful methods to train a robust classifier against adversarial attacks [21, 1]. Considering an $\epsilon$ -tolerant $\ell_{\infty}$ attack $\delta$ subject to $\| \delta \|_{\infty} \leq \epsilon$ , an adversarial example of a benign input $\mathbf{x}$ is given by $\mathbf{x} + \boldsymbol{\delta}$ . With the aid of adversarial examples, AT solves a min-max optimization problem of the generic form
+
+$$
+\underset{\boldsymbol{\theta}}{\text{minimize}}\; \mathbb{E}_{\mathbf{x} \in \mathcal{D}} \left[ \underset{\|\boldsymbol{\delta}\|_{\infty} \leq \epsilon}{\text{maximize}}\; \ell(\boldsymbol{\theta}, \mathbf{x} + \boldsymbol{\delta}) \right], \tag{1}
+$$
+
+where $\theta$ denotes the parameters of an ML/DL model, $\mathcal{D}$ is a given dataset, and $\ell$ signifies a classification loss evaluated at the model $\theta$ and the perturbed input $\mathbf{x} + \boldsymbol{\delta}$ . By fixing $\boldsymbol{\delta} = \mathbf{0}$, problem (1) simplifies to the ST framework $\underset{\boldsymbol{\theta}}{\text{minimize}}\; \mathbb{E}_{\mathbf{x} \in \mathcal{D}}[\ell(\boldsymbol{\theta}, \mathbf{x})]$.
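+
+In practice, the inner maximization of (1) is approximated with a few steps of $\ell_{\infty}$ PGD and the outer minimization with a gradient step on the perturbed batch. Below is a minimal single-batch sketch (assuming a generic `model` and cross-entropy loss; the defaults mirror the $\epsilon = 8/255$, $\alpha = 2/255$, 10-step setting used later in Sec. 4.2, and image-range clipping is omitted for brevity):
+
+```python
+import torch
+import torch.nn.functional as F
+
+def pgd_perturb(model, x, y, eps=8/255, alpha=2/255, steps=10):
+    # Approximate the inner maximization of (1) with l_inf PGD.
+    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
+    for _ in range(steps):
+        loss = F.cross_entropy(model(x + delta), y)
+        grad = torch.autograd.grad(loss, delta)[0]
+        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
+        delta = delta.detach().requires_grad_(True)
+    return delta.detach()
+
+def adversarial_training_step(model, optimizer, x, y):
+    # One outer-minimization step of (1) on a single batch.
+    delta = pgd_perturb(model, x, y)
+    optimizer.zero_grad()
+    loss = F.cross_entropy(model(x + delta), y)
+    loss.backward()
+    optimizer.step()
+    return loss.item()
+```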
+
+# 3.2. AT meets self-supervised pretraining and finetuning
+
+AT given by (1) can be specified for either self-supervised pretraining or supervised fine-tuning. For example, AT for self-supervised pretraining can be cast as problem (1) by letting $\pmb{\theta} \coloneqq [\pmb{\theta}_{\mathrm{p}}^{T}, \pmb{\theta}_{\mathrm{pc}}^{T}]^{T}$ and $\mathcal{D} \coloneqq \mathcal{D}_{\mathrm{p}}$ , and specifying $\ell$ as $\ell_{\mathrm{p}}$ . In Table 1, we summarize all the possible scenarios when AT meets self-supervised pretraining.
+
+Table 1: Summary of self-supervised pretraining scenarios.
+
+| Scenario | Pretraining method | Loss $\ell$ in (1) | Variables $\boldsymbol{\theta}$ in (1) | Dataset $\mathcal{D}$ in (1) |
| $\mathcal{P}_1$ | None¹ | NA² | NA | NA |
| $\mathcal{P}_2$ | ST³ | $\ell_{\mathrm{p}}$ | $[\boldsymbol{\theta}_{\mathrm{p}}^{T}, \boldsymbol{\theta}_{\mathrm{pc}}^{T}]^{T}$ | $\mathcal{D}_{\mathrm{p}}$ |
| $\mathcal{P}_3$ | AT | $\ell_{\mathrm{p}}$ | $[\boldsymbol{\theta}_{\mathrm{p}}^{T}, \boldsymbol{\theta}_{\mathrm{pc}}^{T}]^{T}$ | $\mathcal{D}_{\mathrm{p}}$ |
+
+1 None: the model form of $\theta_{\mathrm{p}}$ is known in advance.
+2 NA: Not applicable.
+3 ST: A special case of (1) with $\delta = 0$.
+
+Table 2: Summary of fine-tuning scenarios.
+
+| Scenario | Fine-tuning type | Fine-tuning method | Loss $\ell$ in (1) | Variables $\boldsymbol{\theta}$ in (1) | Dataset $\mathcal{D}$ in (1) |
| $\mathcal{F}_1$ | Partial (with fixed $\boldsymbol{\theta}_{\mathrm{p}}$)¹ | ST | $\ell_{\mathrm{f}}$ | $\boldsymbol{\theta}_{\mathrm{f}}$ | $\mathcal{D}_{\mathrm{f}}$ |
| $\mathcal{F}_2$ | Partial (with fixed $\boldsymbol{\theta}_{\mathrm{p}}$) | AT | $\ell_{\mathrm{f}}$ | $\boldsymbol{\theta}_{\mathrm{f}}$ | $\mathcal{D}_{\mathrm{f}}$ |
| $\mathcal{F}_3$ | Full² | ST | $\ell_{\mathrm{f}}$ | $[\boldsymbol{\theta}_{\mathrm{p}}^{T}, \boldsymbol{\theta}_{\mathrm{f}}^{T}]^{T}$ | $\mathcal{D}_{\mathrm{f}}$ |
| $\mathcal{F}_4$ | Full | AT | $\ell_{\mathrm{f}}$ | $[\boldsymbol{\theta}_{\mathrm{p}}^{T}, \boldsymbol{\theta}_{\mathrm{f}}^{T}]^{T}$ | $\mathcal{D}_{\mathrm{f}}$ |
+
+1 Fixed $\theta_{\mathrm{p}}^{*}$ signifies the model learnt in a given pretraining scenario.
+2 Full fine-tuning retrains $\theta_{\mathrm{p}}$.
+
+Given a pretrained model $\theta_{\mathrm{p}}$ , adversarial fine-tuning could have two forms: a) AT for partial fine-tuning and b) AT for full fine-tuning. Here the former case a) solves a supervised fine-tuning task under the fixed model $(\theta_{\mathrm{p}})$ , and the latter case b) solves a supervised fine-tuning task by retraining $\theta_{\mathrm{p}}$ . In Table 2, we summarize different scenarios when AT meets supervised fine-tuning.
+
+It is worth noting that our study on the integration of AT with a pretraining+fine-tuning scheme $(\mathcal{P}_i, \mathcal{F}_j)$ provided by Tables 1-2 is different from [14], which conducted one-shot AT over a supervised classification task integrated with a rotation self-supervision task.
+
+In order to explore the network robustness against different configurations $\{(\mathcal{P}_i,\mathcal{F}_j)\}$ , we ask: is AT for robust pretraining sufficient to boost the adversarial robustness of fine-tuning? What is the influence of fine-tuning strategies (partial or full) on the adversarial robustness of image classification? How does the type of self-supervised pretraining task affect the classifier's robustness?
+
+We provide detailed answers to the above questions in Sec. 4.3, Sec. 4.4 and Sec. 4.5. In a nutshell, we find that robust representation learnt from adversarial pretraining is transferable to down-stream fine-tuning tasks to some extent. However, a more significant robustness improvement is obtained by adversarial fine-tuning. Moreover, AT for full fine-tuning outperforms that for partial fine-tuning in terms of both robust accuracy and standard accuracy (except the Jigsaw-specified self-supervision task). Furthermore, different self-supervised tasks demonstrate diverse adversarial vulnerability. As will be evident later, such diversified tasks provide complementary benefits to model robustness and therefore can be combined.
+
+# 3.3. AT by leveraging ensemble of multiple self-supervised learning tasks
+
+In what follows, we generalize AT to learn a robust pretrained model by leveraging the diversified pretraining tasks. More specifically, consider $M$ self-supervised pretraining tasks $\{\mathcal{T}_{\mathrm{P}}^{(i)}\}_{i = 1}^{M}$ , each of which obeys the formulation in Section 3.1. We generalize problem (1) to
+
+$$
+\underset{\boldsymbol{\theta}_{\mathrm{p}}, \{\boldsymbol{\theta}_{\mathrm{pc}}^{(i)}\}}{\text{minimize}}\; \mathbb{E}_{\mathbf{x} \sim \mathcal{D}_{\mathrm{p}}} \left[ \mathcal{L}_{\mathrm{adv}}\left(\boldsymbol{\theta}_{\mathrm{p}}, \{\boldsymbol{\theta}_{\mathrm{pc}}^{(i)}\}, \mathbf{x}\right) \right], \tag{2}
+$$
+
+where $\mathcal{L}_{\mathrm{adv}}$ denotes the adversarial loss given by
+
+$$
+\begin{aligned} \mathcal{L}_{\mathrm{adv}}\left(\boldsymbol{\theta}_{\mathrm{p}}, \{\boldsymbol{\theta}_{\mathrm{pc}}^{(i)}\}, \mathbf{x}\right) := \underset{\{\|\boldsymbol{\delta}^{(i)}\|_{\infty} \leq \epsilon\}}{\text{maximize}}\; & \sum_{i=1}^{M} \ell_{\mathrm{p}}^{(i)}\left(\boldsymbol{\theta}_{\mathrm{p}}, \boldsymbol{\theta}_{\mathrm{pc}}^{(i)}, \mathbf{x} + \boldsymbol{\delta}^{(i)}\right) \\ & + \lambda\, g\left(\boldsymbol{\theta}_{\mathrm{p}}, \{\boldsymbol{\theta}_{\mathrm{pc}}^{(i)}\}, \{\boldsymbol{\delta}^{(i)}\}\right). \end{aligned} \tag{3}
+$$
+
+In (2), for ease of notation, we replace $\{\cdot\}_{i=1}^{M}$ with $\{\cdot\}$ , $\theta_{\mathrm{p}}$ denotes the common network shared among different self-supervised tasks, and $\theta_{\mathrm{pc}}^{(i)}$ denotes a sub-network customized for the $i$ th task. We refer readers to Figure 2 for an overview of our proposed model architecture. In (3), $\ell_{\mathrm{p}}^{(i)}$ denotes the $i$ th pretraining loss, $g$ denotes a diversity-promoting regularizer, and $\lambda \geq 0$ is a regularization parameter. Note that $\lambda = 0$ gives the averaging ensemble strategy. In our case, we perform grid search to tune $\lambda$ around the value chosen in [26]. Details are referred to the supplement.
+
+Spurred by [26, 37], we quantify the diversity-promoting regularizer $g$ through the orthogonality of input gradients of different self-supervised pretraining losses,
+
+$$
+g\left(\boldsymbol{\theta}_{\mathrm{p}}, \{\boldsymbol{\theta}_{\mathrm{pc}}^{(i)}\}, \{\boldsymbol{\delta}^{(i)}\}\right) := \log \det \left(\mathbf{G}^{T} \mathbf{G}\right), \tag{4}
+$$
+
+where each column of $\mathbf{G}$ is a normalized input gradient $\nabla_{\boldsymbol{\delta}^{(i)}}\ell_{\mathrm{p}}^{(i)}(\boldsymbol{\theta}_{\mathrm{p}},\boldsymbol{\theta}_{\mathrm{pc}}^{(i)},\mathbf{x} + \boldsymbol{\delta}^{(i)})$, and $g$ attains its maximum value 0 when the input gradients are mutually orthogonal and is negative otherwise. The rationale behind the diversity-promoting adversarial loss (3) is that we aim to obtain a robust model $\boldsymbol{\theta}_{\mathrm{p}}$ by defending against attacks from diversified perturbation directions.
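+
+A minimal sketch of the regularizer in (4), assuming the $M$ per-task input gradients have already been obtained (e.g., with `torch.autograd.grad` inside the inner maximization); a small jitter is added to the Gram matrix purely for numerical stability:
+
+```python
+import torch
+
+def diversity_regularizer(grads, jitter=1e-6):
+    # grads: list of M tensors, the gradient of the i-th pretraining loss
+    # w.r.t. its perturbation delta^(i). Each is flattened and l2-normalized
+    # to form a column of G, so log det(G^T G) <= 0, with equality when the
+    # input gradients are mutually orthogonal.
+    cols = [g.flatten() / g.flatten().norm().clamp_min(1e-12) for g in grads]
+    G = torch.stack(cols, dim=1)                              # shape (D, M)
+    gram = G.t() @ G                                          # shape (M, M)
+    return torch.logdet(gram + jitter * torch.eye(G.size(1), device=G.device))
+```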
+
+# 4. Experiments and Results
+
+In this section, we design and conduct extensive experiments to examine the network robustness against different configurations $\{(\mathcal{P}_i,\mathcal{F}_j)\}$ for image classification. First, we show adversarial self-supervised pretraining (namely, $\mathcal{P}_3$ in Table 1) improves the performance of downstream tasks. We also discuss the influence of different fine-tuning strategies $\mathcal{F}_j$ on the adversarial robustness. Second, we show the diverse impacts of different self-supervised tasks on their resulting pretrained models. Third, we ensemble those self-supervised tasks to perform adversarial pretraining. At the fine-tuning phase, we also ensemble three best models with the configuration $(\mathcal{P}_3,\mathcal{F}_4)$ and show its performance superiority. Last, we report extensive ablation studies to reveal the influence of the size of the datasets $\mathcal{D}_{\mathrm{p}}$ and the resolution of images in $\mathcal{D}_{\mathrm{p}}$ , as well as other defense options beyond AT.
+
+# 4.1. Datasets
+
+Dataset Details We consider four different datasets in our experiments: CIFAR-10, CIFAR-10-C [13], CIFAR-100 and R-ImageNet-224 (a specifically constructed "restricted" version of ImageNet, with resolution $224 \times 224$ ). With the last one, we intend to demonstrate our approach on high-resolution data despite the computational challenge. We follow [29] to choose 10 super classes, which contain a total of 190 ImageNet classes. The detailed class distribution of each super class can be found in our supplement.
+
+For the ablation study of different pretraining dataset sizes, we sample more training images from the 80 Million Tiny Images dataset [33], from which CIFAR-10 was originally selected. Using the same 10 super classes, we form CIFAR-30K (i.e., 30,000 images), CIFAR-50K, and CIFAR-150K for training, and keep another 10,000 images for hold-out testing.
+
+Dataset Usage In Sec. 4.3, Sec. 4.4 and Sec. 4.5, for all results, we use CIFAR-10 training set for both pretraining and fine-tuning. We evaluate our models on the CIFAR-10 testing set and CIFAR-10-C. In Sec. 4.6, we use CIFAR-10, CIFAR-30K, CIFAR-50K, CIFAR-150K and R-ImageNet-224 for pretraining, and CIFAR-10 training set for fine-tuning, while evaluating on CIFAR-10 testing set. We also validate our approaches on CIFAR-100 in the supplement. In all of our experiments, we randomly split the original training set into a training set and a validation set (the ratio is 9:1).
+
+
+Figure 2: The overall framework of ensemble adversarial pretraining. The pretrained weights $\theta_{\mathrm{p}}$ are the first three blocks of ResNet-50v2 [11]; Green arrows $(\rightarrow)$ , Blue arrows $(\rightarrow)$ and Red arrows $(\rightarrow)$ represent the feed forward paths of Selfie, Jigsaw and Rotation, respectively.
+
+# 4.2. Implementation Details
+
+Model Architecture: For pretraining with the Selfie task, we identically follow the setting in [35]. For Rotation and Jigsaw pretraining tasks, we use ResNet-50v2 [12]. For the fine-tuning, we use ResNet-50v2 for all. Each fine-tuning network will inherit the corresponding robust pretrained weights to initialize the first three blocks of ResNet-50v2, while leaving the remaining blocks randomly initialized.
+
+Training & Evaluation Details: All pretraining and fine-tuning tasks are trained using SGD with 0.9 momentum. We use batch sizes of 256 for CIFAR-10 and ImageNet-32, and 64 for R-ImageNet-224. All pretraining tasks adopt cosine learning rate schedules. The maximum and minimum learning rates are 0.1 and $10^{-6}$ for Rotation and Jigsaw pretraining; 0.025 and $10^{-6}$ for Selfie pretraining; and 0.001 and $10^{-8}$ for ensemble pretraining. All fine-tuning phases follow a multi-step learning rate schedule, starting from 0.1 and decayed by a factor of 10 at epochs 30 and 50 over 100 training epochs.
+
+We use 10-step and 20-step $\ell_{\infty}$ PGD attacks [21] for adversarial training and evaluation, respectively. Unless otherwise specified, we follow [14]'s setting with $\epsilon = \frac{8}{255}$ and $\alpha = \frac{2}{255}$ . For all adversarial evaluations, we use the full testing datasets (i.e., 10,000 images for CIFAR-10) to generate adversarial images. We also consider unforeseen attacks [17, 13].
+
+Evaluation Metrics & Model Picking Criteria: We follow [40] to use: i) Standard Testing Accuracy (TA): the classification accuracy on the clean test dataset; ii) Robust Testing Accuracy (RA): the classification accuracy on the attacked test dataset. In our experiments, we use TA to pick models for a better trade-off with RA. Results of models picked using the RA criterion are included in the supplement.
+
+# 4.3. Adversarial self-supervised pretraining & fine-tuning helps classification robustness
+
+We systematically study all possible configurations of pretraining and fine-tuning considered in Table 1 and Table 2, where recall that the expression $(\mathcal{P}_i,\mathcal{F}_j)$ denotes a specified pretraining+fine-tuning scheme. The baseline schemes are given by the end-to-end standard training (ST), namely, $(\mathcal{P}_1,\mathcal{F}_3)$ and the end-to-end adversarial training (AT), namely, $(\mathcal{P}_1,\mathcal{F}_4)$ . Table 3 shows TA, RA, and iteration complexity of fine-tuning (in terms of number of epochs) under different pretraining+fine-tuning strategies involving different self-supervised pretraining tasks, Selfie, Rotation and Jigsaw. In what follows, we analyze the results of Table 3 and provide additional insights.
+
+We begin by focusing on the scenario of integrating the standard pretraining strategy $\mathcal{P}_2$ with fine-tuning schemes $\mathcal{F}_3$ and $\mathcal{F}_4$ used in baseline methods. Several observations can be made from the comparison $(\mathcal{P}_2, \mathcal{F}_3)$ vs. $(\mathcal{P}_1, \mathcal{F}_3)$ and $(\mathcal{P}_2, \mathcal{F}_4)$ vs. $(\mathcal{P}_1, \mathcal{F}_4)$ in Table 3. 1) The use of self-supervised pretraining consistently improves TA and/or RA even if only standard pretraining is conducted; 2) The use of adversarial fine-tuning $\mathcal{F}_4$ (against standard fine-tuning $\mathcal{F}_3$ ) is crucial, leading to significantly improved RA under both $\mathcal{P}_1$ and $\mathcal{P}_2$ ; 3) Comparing $(\mathcal{P}_1, \mathcal{F}_4)$ with $(\mathcal{P}_2, \mathcal{F}_4)$ , the use of self-supervised pretraining offers better eventual model robustness (around a $3\%$ improvement) and faster fine-tuning (almost halving the number of epochs).
+
+Next, we investigate how adversarial pretraining (namely, $\mathcal{P}_3$ ) affects the eventual model robustness. It is shown by $(\mathcal{P}_3,\mathcal{F}_1)$ and $(\mathcal{P}_3,\mathcal{F}_2)$ in Table 3 that the robust feature representation learnt from $\mathcal{P}_3$ benefits adversarial robustness even in the case of partial fine-tuning, but the use of adversarial partial fine-tuning, namely, $(\mathcal{P}_3,\mathcal{F}_2)$ , yields around $30\%$ higher RA. We also observe from the case of $(\mathcal{P}_3,\mathcal{F}_3)$ that the standard full fine-tuning harms the robust
+
+Table 3: Evaluation Results of Eight Different $(\mathcal{P}_i,\mathcal{F}_j)$ Scenarios. Table 1 and Table 2 provide detailed definitions for $\mathcal{P}_1$ (without pre-training), $\mathcal{P}_2$ (standard self-supervision pre-training), $\mathcal{P}_3$ (adversarial self-supervision pre-training), $\mathcal{F}_1$ (partial standard fine-tuning), $\mathcal{F}_2$ (partial adversarial fine-tuning), $\mathcal{F}_3$ (full standard fine-tuning), and $\mathcal{F}_4$ (full adversarial fine-tuning). The best results are highlighted $(1^{\mathrm{st}},2^{\mathrm{nd}})$ under each column of different self-supervised pretraining tasks.
+
+| Scenario | Selfie TA (%) | Selfie RA (%) | Selfie Epochs | Rotation TA (%) | Rotation RA (%) | Rotation Epochs | Jigsaw TA (%) | Jigsaw RA (%) | Jigsaw Epochs |
| (P1,F3) | 94.24 | 0.00 | 92 | 94.24 | 0.00 | 92 | 94.24 | 0.00 | 92 |
| (P1,F4) | 84.72 | 47.22 | 99 | 84.72 | 47.22 | 99 | 84.72 | 47.22 | 99 |
| (P2,F3) | 95.09 | 0.00 | 97 | 95.45 | 0.00 | 92 | 93.93 | 0.00 | 89 |
| (P2,F4) | 85.56 | 50.42 | 60 | 86.66 | 50.95 | 45 | 85.18 | 50.94 | 46 |
| (P3,F1) | 78.93 | 6.30 | 82 | 86.83 | 18.22 | 99 | 80.47 | 2.68 | 87 |
| (P3,F2) | 74.30 | 37.65 | 64 | 82.32 | 45.10 | 47 | 72.76 | 32.59 | 51 |
| (P3,F3) | 94.69 | 0.00 | 86 | 94.79 | 0.00 | 92 | 93.06 | 0.00 | 93 |
| (P3,F4) | 86.02 | 51.05 | 50 | 85.66 | 50.40 | 46 | 84.50 | 49.61 | 48 |
+
+feature representation learnt from $\mathcal{P}_3$ , leading to $0\%$ RA. Furthermore, when adversarial full fine-tuning is adopted, namely, $(\mathcal{P}_3,\mathcal{F}_4)$ , the most significant robustness improvement is obtained. This observation is consistent with $(\mathcal{P}_2,\mathcal{F}_4)$ against $(\mathcal{P}_2,\mathcal{F}_3)$ .
+
+Third, at first glance, adversarial full fine-tuning (namely, $\mathcal{F}_4$ ) is the most important step for improving the final model robustness. However, adversarial pretraining is also key, particularly for reducing the computation cost of fine-tuning; for example, at most 50 epochs in $(\mathcal{P}_3,\mathcal{F}_4)$ vs. 99 epochs in the end-to-end AT $(\mathcal{P}_1,\mathcal{F}_4)$ .
+
+Last but not least, we note that the aforementioned results are consistent across different self-supervised prediction tasks. However, Selfie and Rotation are more effective than Jigsaw at improving the final model robustness. For example, in the cases of adversarial pretraining followed by standard and adversarial partial fine-tuning, namely, $(\mathcal{P}_3,\mathcal{F}_1)$ and $(\mathcal{P}_3,\mathcal{F}_2)$ , Selfie and Rotation yield at least a $3.5\%$ improvement in RA over Jigsaw. When adversarial full fine-tuning is used, namely, $(\mathcal{P}_3,\mathcal{F}_4)$ , Selfie and Rotation outperform Jigsaw in both TA and RA, with Selfie yielding the largest improvement, around $2.5\%$ in both TA and RA.
+
+# 4.4. Comparison with one-shot AT regularized by self-supervised prediction task
+
+In what follows, we compare our proposed adversarial pretraining followed by adversarial fine-tuning, namely, $(\mathcal{P}_3,\mathcal{F}_4)$ in Table 3, with the one-shot AT that optimizes a classification task regularized by the self-supervised rotation prediction task [14]. In addition to evaluating this comparison in terms of TA and RA (under the $\ell_{\infty}$ PGD attack [21]), we also measure the robustness of the eventual classifier against 12 unforeseen attacks that are not used in AT [17]. More results can be found in the supplement.
+
+Figure 3 presents the multi-dimensional performance comparison of our approach vs. the baseline method in [14].
+
+As we can see, our approach yields a $1.97\%$ improvement in TA with a $0.74\%$ degradation in RA. However, our approach yields consistent robustness improvements in defending against all 12 unforeseen attacks, with improvements ranging from $1.03\%$ to $6.53\%$ . Moreover, our approach separates pretraining and fine-tuning so that the target image classifier can be learnt from a warm start, namely, the adversarially pretrained representation network. This mitigates the computational drawback of the one-shot AT in [14]; recall that our advantage in saving computation cost was shown in Table 3. Next, Figure 4 presents the performance of our approach under different types of self-supervised prediction tasks. As we can see, Selfie provides consistently better performance than the others, while Jigsaw performs the worst.
+
+
+Figure 3: Summary of the accuracy against unforeseen adversarial attacks. Our models are obtained by adversarial fine-tuning with adversarial Rotation pretraining. The baseline is the co-optimized model with the Rotation auxiliary task [14].
+
+
+Figure 4: Summary of the accuracy against unforeseen adversarial attacks. Comparison among adversarially fine-tuned models with Selfie, Rotation and Jigsaw adversarial pretraining.
+
+# 4.5. Diversity vs. Task Ensemble
+
+In what follows, we show that different self-supervised prediction tasks demonstrate a diverse adversarial vulnerability even if their corresponding RAs remain similar. We evaluate such a diversity through the transferability of adversarial examples generated from robust classifiers fine-tuned from the adversarially pretrained models using different self-supervised prediction tasks. We then demonstrate the performance of our proposed adversarial pretraining method (2) by leveraging an ensemble of Selfie, Rotation, and Jigsaw.
+
+In Table 4, we present the transferability of PGD attacks generated from the final models trained using adversarial pretraining followed by adversarial full fine-tuning, namely, $(\mathcal{P}_3,\mathcal{F}_4)$ , where for ease of presentation, we let Model(t) denote the classifier learnt using the self-supervised pretraining task $t\in \{\text{Selfie},\text{Rotation},\text{Jigsaw}\}$ . Given the PGD attacks from Model(t), we evaluate their transferability, in terms of attack success rate (ASR)$^{2}$, against Model(t'). If $t' = t$ , then ASR reduces to 1 - RA. If $t'\neq t$ , then ASR reflects the attack transferability from Model(t) to Model(t'). As we can see, the diagonal entries of Table 4 correspond to the largest ASR in each column. This is not surprising, since transferring to another model makes the attack weaker. One interesting observation is that ASR suffers a larger drop when transferring attacks from Model(Jigsaw) to other target models. This implies that Model(Selfie) and Model(Rotation) yield better robustness, consistent with our previous results such as Figure 4.
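+
+As a small illustration of how the transfer ASR in Table 4 can be measured, the sketch below counts how often adversarial examples crafted on a source model are misclassified by a target model (hypothetical names; when source and target coincide this is simply $1-\mathrm{RA}$):
+
+```python
+import torch
+
+@torch.no_grad()
+def attack_success_rate(target_model, x_adv, y):
+    # Fraction of adversarial examples (crafted on some source model,
+    # e.g. via PGD as sketched after Eq. (1)) that fool target_model.
+    pred = target_model(x_adv).argmax(dim=1)
+    return (pred != y).float().mean().item()
+```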
+
+At first glance, the ASR values of transfer attacks from Model(t) to Model $(t^{\prime})$ $(t^{\prime}\neq t)$ appear similar, e.g., the first column of Table 4 where $t =$ Selfie and $t^\prime =$ Rotation (38.92% ASR) or $t^\prime =$ Jigsaw (38.96% ASR). However,
+
+Figure 5 shows that this seemingly similar transferability is built on rather different sets of adversarial examples that succeed in attacking Model(Rotation) and Model(Jigsaw), respectively. As we can see, at least $14\%$ of the successful transfer examples against Model(Rotation) and Model(Jigsaw) do not overlap. This diverse distribution of transferred adversarial examples against models using different self-supervised pretraining tasks motivates us to further improve robustness by leveraging an ensemble of diversified pretraining tasks.
+
+We next demonstrate the effectiveness of our proposed adversarial pretraining via diversity-promoted ensemble (AP + DPE) given in (2), whose architecture is shown in Figure 2. Here we consider 4 baseline methods: 3 single-task adversarial pretrainings, and adversarial pretraining via standard ensemble (AP + SE), corresponding to $\lambda = 0$ in (2). As we can see in Table 5, AP + DPE yields at least a $1.17\%$ improvement in RA with at most a $3.02\%$ degradation in TA, compared with the best single fine-tuned model. In addition to the ensemble at the pretraining stage, we consider a simple but the most computationally intensive ensemble strategy: averaging the predictions of the three final robust models learnt using adversarial pretraining $\mathcal{P}_3$ followed by adversarial fine-tuning $\mathcal{F}_4$ over Selfie, Rotation, and Jigsaw. As we can see in Table 6, the best combination, the ensemble of all three fine-tuned models, yields at least a $3.59\%$ gain in RA while maintaining a slightly higher TA. More results of other ensemble configurations can be found in the supplement.
+
+Table 4: The vulnerability diversity among fine-tuned models with Selfie, Rotation and Jigsaw self-supervised adversarial pretraining. All results use full adversarial fine-tuning. The highest ASRs are highlighted $(1^{\mathrm{st}},2^{\mathrm{nd}})$ under each column of PGD attacks from different fine-tuned models. Ensemble model results against different PGD attacks can be found in our supplement.
+
+| Evaluation model $(\mathcal{P}_3, \mathcal{F}_4)$ | PGD attacks from Model(Selfie) | PGD attacks from Model(Rotation) | PGD attacks from Model(Jigsaw) |
| Model(Selfie) | 48.95% | 37.75% | 36.65% |
| Model(Rotation) | 38.92% | 49.60% | 38.12% |
| Model(Jigsaw) | 38.96% | 39.56% | 51.17% |
+
+# 4.6. Ablation Study and Analysis
+
+For a fair comparison, we fine-tune all models on the same CIFAR-10 dataset. In each ablation, we show results under scenarios $(\mathcal{P}_3,\mathcal{F}_2)$ and $(\mathcal{P}_3,\mathcal{F}_4)$ , where $\mathcal{P}_3$ represents adversarial pretraining, $\mathcal{F}_2$ represents partial adversarial fine-tuning and $\mathcal{F}_4$ represents full adversarial fine-tuning. More ablation results can be found in the supplement.
+
+Ablation of the pretraining data size As shown in Table 7, as the pretraining dataset grows larger, the standard
+
+
+Figure 5: Venn diagram of the sets of successful transfer adversarial examples from Model(Selfie) to Model(Rotation) and from Model(Selfie) to Model(Jigsaw). The overlapping brown area (■) represents the transfer attacks from Model(Selfie) that succeed on both Model(Rotation) and Model(Jigsaw). The pink area (■) represents the transfer attacks from Model(Selfie) that succeed only on Model(Jigsaw). The green area (■) represents the transfer attacks from Model(Selfie) that succeed only on Model(Rotation).
+
+Table 5: Comparison between fine-tuned models from single-task pretraining and from task-ensemble pretraining. AP + SE represents adversarial pretraining via standard ensemble. AP + DPE represents adversarial pretraining via diversity-promoted ensemble. The best results are highlighted $(1^{\mathrm{st}},2^{\mathrm{nd}})$ under each column of evaluation metrics.
+
+| Models | TA (%) | RA (%) | Epochs |
| Selfie Pretraining | 86.02 | 51.05 | 50 |
| Rotation Pretraining | 85.66 | 50.40 | 46 |
| Jigsaw Pretraining | 83.74 | 48.83 | 48 |
| AP + SE | 84.44 | 49.53 | 47 |
| AP + DPE | 83.00 | 52.22 | 56 |
+
+Table 6: Ensemble results of fine-tuned models with different adversarial pretrainings. The best results are highlighted $(1^{\mathrm{st}},2^{\mathrm{nd}})$ under each column of evaluation metrics.
+
+| Fine-tuned Models (P3, F4) | TA (%) | RA (%) |
| Jigsaw + Rotation | 85.36 | 53.08 |
| Jigsaw + Selfie | 85.64 | 53.32 |
| Rotation + Selfie | 86.51 | 53.83 |
| Jigsaw + Rotation + Selfie | 86.04 | 54.64 |
+
+and robust accuracies both demonstrate steady growth. Under the $(\mathcal{P}_3, \mathcal{F}_4)$ scenario, when the pretraining data size increases from 30K to 150K, we observe a $0.97\%$ gain in robust accuracy with nearly the same standard accuracy. This aligns with the existing theory [30]. Since self-supervised pretraining requires no labels, we could in the future grow the unlabeled data size almost for free to continuously boost pretraining performance.
+
+Table 7: Ablation results of the size of pretraining datasets. All pretraining datasets have $32 \times 32$ resolution and 10 classes.
+
+| Pretraining dataset | Scenario | TA (%) | RA (%) | Epochs |
| CIFAR-30K | (P3, F2) | 65.65 | 30.00 | 70 |
| CIFAR-30K | (P3, F4) | 85.29 | 49.64 | 42 |
| CIFAR-50K | (P3, F2) | 66.87 | 30.42 | 87 |
| CIFAR-50K | (P3, F4) | 85.26 | 49.66 | 61 |
| CIFAR-150K | (P3, F2) | 67.73 | 30.24 | 95 |
| CIFAR-150K | (P3, F4) | 85.18 | 50.61 | 55 |
+
+Table 8: Ablation results of defense approaches. Instead of adversarial training, we perform random smoothing [5] for pretraining.
+
+| Random Smoothing | Selfie Pretraining TA (%) | RA (%) | Epochs | Rotation Pretraining TA (%) | RA (%) | Epochs | Jigsaw Pretraining TA (%) | RA (%) | Epochs |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| F2 | 71.9 | 30.57 | 61 | 74.7 | 34.23 | 78 | 74.66 | 33.84 | 68 |
+| F4 | 85.14 | 50.23 | 48 | 85.62 | 51.25 | 46 | 85.18 | 50.94 | 46 |
+
+Ablation of defense approaches in pretraining In Table 8, we use random smoothing [5] in place of AT to robustify pretraining, while all other protocols remain unchanged. We obtain results consistent with adversarial pretraining: robust pretraining speeds up adversarial fine-tuning and helps final model robustness, while full adversarial fine-tuning contributes the most to the robustness boost.
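+
+For reference, the randomized smoothing baseline of Cohen et al. [5] replaces the PGD inner loop with Gaussian-noise augmentation and a noisy majority-vote prediction. The sketch below illustrates only the prediction side of that idea; it is a simplified reading of "random smoothing in place of AT", not the certified-radius computation of [5] nor the exact pretraining protocol used here.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def smoothed_predict(model, x, sigma=0.25, n_samples=100, num_classes=10):
+    """Monte-Carlo prediction of a randomized-smoothing classifier:
+    add i.i.d. Gaussian noise to the input and take a majority vote."""
+    counts = torch.zeros(x.size(0), num_classes, device=x.device)
+    with torch.no_grad():
+        for _ in range(n_samples):
+            noisy = x + sigma * torch.randn_like(x)
+            preds = model(noisy).argmax(dim=1)
+            counts += F.one_hot(preds, num_classes).float()
+    return counts.argmax(dim=1)  # majority-vote class per input
+```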
+
+# 5. Conclusions
+
+In this paper, we combine adversarial training with self-supervision to obtain robust pretrained models that can be readily applied to downstream tasks through fine-tuning. We find that adversarial pretraining can not only boost final model robustness but also speed up the subsequent adversarial fine-tuning. We also find that adversarial fine-tuning contributes the most to the final robustness improvement. Further motivated by the observed diversity among different self-supervised tasks in pretraining, we propose an ensemble pretraining strategy that boosts robustness further. Our results show consistent gains over state-of-the-art AT in terms of both standard and robust accuracy, leading to new benchmark numbers on CIFAR-10. In the future, we are interested in exploring several promising directions revealed by our experiments and ablation studies, including incorporating more self-supervised tasks, extending the pretraining dataset size, and scaling up to high-resolution data.
+
+# References
+
+[1] Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. 2018 ICML, arXiv preprint arXiv:1802.00420, 2018. 2, 3
+[2] Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. Greedy layer-wise training of deep networks. In Advances in neural information processing systems, pages 153-160, 2007. 1
+[3] Fabio M Carlucci, Antonio D'Innocente, Silvia Bucci, Barbara Caputo, and Tatiana Tommasi. Domain generalization by solving jigsaw puzzles. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2229-2238, 2019. 1, 2, 3
+[4] Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, Percy Liang, and John C Duchi. Unlabeled data improves adversarial robustness. arXiv preprint arXiv:1905.13736, 2019. 2
+[5] Jeremy M Cohen, Elan Rosenfeld, and J Zico Kolter. Certified adversarial robustness via randomized smoothing. arXiv preprint arXiv:1902.02918, 2019. 8
+[6] Antonio Criminisi, Patrick Pérez, and Kentaro Toyama. Region filling and object removal by exemplar-based image inpainting. IEEE Transactions on image processing, 13(9):1200-1212, 2004. 1, 2
+[7] Guneet S Dhillon, Kamyar Azizzadenesheli, Zachary C Lipton, Jeremy Bernstein, Jean Kossaifi, Aran Khanna, and Anima Anandkumar. Stochastic activation pruning for robust adversarial defense. arXiv preprint arXiv:1803.01442, 2018. 2
+[8] Alexey Dosovitskiy, Philipp Fischer, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with exemplar convolutional neural networks. IEEE transactions on pattern analysis and machine intelligence, 38(9):1734-1747, 2015. 1, 2
+[9] Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. arXiv preprint arXiv:1803.07728, 2018. 1, 2, 3
+[10] Shupeng Gui, Haotao N Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, and Ji Liu. Model compression with adversarial robustness: A unified optimization framework. In Advances in Neural Information Processing Systems, pages 1283-1294, 2019. 2
+[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 5
+[12] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, pages 630-645. Springer, 2016. 5
+[13] Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. Proceedings of the International Conference on Learning Representations, 2019. 4, 5
+[14] Dan Hendrycks, Mantas Mazeika, Saurav Kadavath, and Dawn Song. Using self-supervised learning can improve model robustness and uncertainty. arXiv preprint arXiv:1906.12340, 2019. 1, 3, 5, 6
+[15] Geoffrey E Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural computation, 18(7):1527–1554, 2006. 1
+[16] Ting-Kuei Hu, Tianlong Chen, Haotao Wang, and Zhangyang Wang. Triple wins: Boosting accuracy, robustness and efficiency together by enabling input-adaptive inference. In International Conference on Learning Representations, 2020. 2
+[17] Daniel Kang, Yi Sun, Dan Hendrycks, Tom Brown, and Jacob Steinhardt. Testing robustness against unforeseen adversaries. arXiv preprint arXiv:1908.08016, 2019. 5, 6
+[18] Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. Adversarial machine learning at scale. CoRR, abs/1611.01236, 2016. 1
+[19] Fangzhou Liao, Ming Liang, Yinpeng Dong, Tianyu Pang, Xiaolin Hu, and Jun Zhu. Defense against adversarial attacks using high-level representation guided denoiser. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1778-1787, 2018. 2
+[20] Hong Liu, Mingsheng Long, Jianmin Wang, and Michael I Jordan. Towards understanding the transferability of deep representations. arXiv preprint arXiv:1909.12031, 2019. 1
+[21] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. 2018 ICLR, arXiv preprint arXiv:1706.06083, 2018. 1, 2, 3, 5, 6
+[22] Dongyu Meng and Hao Chen. Magnet: a two-pronged defense against adversarial examples. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pages 135-147. ACM, 2017. 2
+[23] Sina Mohseni, Mandar Pitale, JBS Yadawa, and Zhangyang Wang. Self-supervised learning for generalizable out-of-distribution detection. AAAI, 2020. 1
+[24] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2574–2582, 2016. 1
+[25] Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In European Conference on Computer Vision, pages 69-84. Springer, 2016. 1, 2, 3
+[26] Tianyu Pang, Kun Xu, Chao Du, Ning Chen, and Jun Zhu. Improving adversarial robustness via promoting ensemble diversity. arXiv preprint arXiv:1901.08846, 2019. 2, 4
+[27] Nicolas Papernot and Patrick McDaniel. Extending defensive distillation. arXiv preprint arXiv:1705.05264, 2017. 2
+[28] Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y Ng. Self-taught learning: transfer learning from unlabeled data. In Proceedings of the 24th international conference on Machine learning, pages 759-766. ACM, 2007. 1
+[29] Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Andrew Ilyas, Logan Engstrom, and Aleksander Madry. Computer vision with a single (robust) classifier. arXiv preprint arXiv:1906.09453, 2019. 4
+
+[30] Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, and Aleksander Madry. Adversarially robust generalization requires more data. In Advances in Neural Information Processing Systems, pages 5014-5026, 2018. 1, 8
+[31] Robert Stanforth, Alhussein Fawzi, Pushmeet Kohli, et al. Are labels required for improving adversarial robustness? arXiv preprint arXiv:1905.13725, 2019. 2
+[32] Thilo Strauss, Markus Hanselmann, Andrej Junginger, and Holger Ulmer. Ensemble methods as a defense to adversarial perturbations against deep neural networks. arXiv preprint arXiv:1709.03423, 2017. 2
+[33] Antonio Torralba, Rob Fergus, and William T Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE transactions on pattern analysis and machine intelligence, 30(11):1958-1970, 2008. 4
+[34] Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204, 2017. 2
+[35] Trieu H Trinh, Minh-Thang Luong, and Quoc V Le. Self-supervised pretraining for image embedding. arXiv preprint arXiv:1906.02940, 2019. 1, 2, 3, 5
+[36] Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of machine learning research, 11(Dec):3371-3408, 2010. 1
+[37] Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, Makan Fardad, and Bo Li. Towards a unified min-max framework for adversarial exploration and robustness, 2019. 2, 4
+[38] Weilin Xu, David Evans, and Yanjun Qi. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155, 2017. 2
+[39] Runtian Zhai, Tianle Cai, Di He, Chen Dan, Kun He, John Hopcroft, and Liwei Wang. Adversarially robust generalization just requires more unlabeled data. arXiv preprint arXiv:1906.00555, 2019. 2
+[40] Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P Xing, Laurent El Ghaoui, and Michael I Jordan. Theoretically principled trade-off between robustness and accuracy. arXiv preprint arXiv:1901.08573, 2019. 2, 5
+[41] Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In European conference on computer vision, pages 649-666. Springer, 2016. 1, 2
+[42] Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Goldstein, and Jingjing Liu. Freelb: Enhanced adversarial training for natural language understanding. In International Conference on Learning Representations, 2020. 2
\ No newline at end of file
diff --git a/adversarialrobustnessfromselfsupervisedpretrainingtofinetuning/images.zip b/adversarialrobustnessfromselfsupervisedpretrainingtofinetuning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..40a4d6893becd23b572b15df27399ace1cc2f111
--- /dev/null
+++ b/adversarialrobustnessfromselfsupervisedpretrainingtofinetuning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5ead6a29961697db11bb5030027f56466bb392032f26374f2590d54966e60ebc
+size 450744
diff --git a/adversarialrobustnessfromselfsupervisedpretrainingtofinetuning/layout.json b/adversarialrobustnessfromselfsupervisedpretrainingtofinetuning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..88cab1100afe0580f1888dfba2ca7ce5425be247
--- /dev/null
+++ b/adversarialrobustnessfromselfsupervisedpretrainingtofinetuning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2ea2c939a4db47a7921ddce53ac5e90c169369547ec2baf8c82a8b31db686f39
+size 461986
diff --git a/adversarialtextureoptimizationfromrgbdscans/844b4e2a-6831-435e-82a1-a2b6d9ffde09_content_list.json b/adversarialtextureoptimizationfromrgbdscans/844b4e2a-6831-435e-82a1-a2b6d9ffde09_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..60b186865a45d7723845bb39d426b2c1d0c81616
--- /dev/null
+++ b/adversarialtextureoptimizationfromrgbdscans/844b4e2a-6831-435e-82a1-a2b6d9ffde09_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:13474b975c7bfda3f5e86a249c3fb006bbe743c94b0adef98d7bc530d65ccbf4
+size 66428
diff --git a/adversarialtextureoptimizationfromrgbdscans/844b4e2a-6831-435e-82a1-a2b6d9ffde09_model.json b/adversarialtextureoptimizationfromrgbdscans/844b4e2a-6831-435e-82a1-a2b6d9ffde09_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..7317fb9ecb13d0a82fb3fcd80fbe5ae95f52ded9
--- /dev/null
+++ b/adversarialtextureoptimizationfromrgbdscans/844b4e2a-6831-435e-82a1-a2b6d9ffde09_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fdeaa4bf5683a77e1cc2d326964e3f1e1adabf3ad0c02ba057e6d35cd2e12951
+size 81486
diff --git a/adversarialtextureoptimizationfromrgbdscans/844b4e2a-6831-435e-82a1-a2b6d9ffde09_origin.pdf b/adversarialtextureoptimizationfromrgbdscans/844b4e2a-6831-435e-82a1-a2b6d9ffde09_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d30e957b3be7ff27bd8245a62534daf216399c15
--- /dev/null
+++ b/adversarialtextureoptimizationfromrgbdscans/844b4e2a-6831-435e-82a1-a2b6d9ffde09_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9f4e909831657e314ef5b198d2b21b753c3097a06df2735b0a08e0137bd21a3d
+size 3396092
diff --git a/adversarialtextureoptimizationfromrgbdscans/full.md b/adversarialtextureoptimizationfromrgbdscans/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..b918d21b540935c13cebd8d86dbafecba667aba1
--- /dev/null
+++ b/adversarialtextureoptimizationfromrgbdscans/full.md
@@ -0,0 +1,265 @@
+# Adversarial Texture Optimization from RGB-D Scans
+
+Jingwei Huang $^{1,3}$ , Justus Thies $^{2}$ , Angela Dai $^{2}$ , Abhijit Kundu $^{3}$ , Chiyu “Max” Jiang $^{3,4}$ , Leonidas Guibas $^{1}$ , Matthias Nießner $^{2}$ , and Thomas Funkhouser $^{3}$
+
+$^{1}$ Stanford University
+
+$^{2}$ Technical University of Munich
+
+$^{3}$ Google Research
+
+$^{4}$ UC Berkeley
+
+
+Figure 1. Our goal is to reconstruct high-quality textures from an RGB-D scan. Unlike traditional methods which optimize for a parametric color map to reduce misalignment error (Zhou and Koltun [36]), we learn a misalignment-tolerant discriminator, producing sharper textures.
+
+# Abstract
+
+Realistic color texture generation is an important step in RGB-D surface reconstruction, but remains challenging in practice due to inaccuracies in reconstructed geometry, misaligned camera poses, and view-dependent imaging artifacts. In this work, we present a novel approach for color texture generation using a conditional adversarial loss obtained from weakly-supervised views. Specifically, we propose an approach to produce photorealistic textures for approximate surfaces, even from misaligned images, by learning an objective function that is robust to these errors. The key idea of our approach is to learn a patch-based conditional discriminator which guides the texture optimization to be tolerant to misalignments. Our discriminator takes a synthesized view and a real image, and evaluates whether the synthesized one is realistic, under a broadened definition of realism. We train the discriminator by providing as 'real' examples pairs of input views and their misaligned versions, so that the learned adversarial loss will tolerate errors from the scans. Quantitative and qualitative experiments on synthetic and real data demonstrate the advantage of our approach in comparison to the state of the art (see Figure 1, right). Our code is publicly available with a video demonstration.
+
+# 1. Introduction
+
+The wide availability of consumer range cameras has spurred extensive research in geometric reconstruction of real-world objects and scenes, with state-of-the-art 3D reconstruction approaches now providing robust camera tracking and 3D surface reconstruction [22, 19, 33, 9]. However, producing photorealistic models of real-world environments requires not only geometric reconstruction but also high-quality color texturing. Unfortunately, due to noisy input data, poorly estimated surface geometry, misaligned camera poses, unmodeled optical distortions, and view-dependent lighting effects, aggregating multiple real-world images into high-quality, realistic surface textures is still a challenging problem. In order to overcome these problems, various approaches have been developed to optimize color textures using models to adjust camera poses [36, 15], distort images [4, 15, 36], and balance colors [15, 36]. However, these prior approaches are not expressive enough and/or their optimization algorithms are not robust enough to handle the complex distortions and misalignments commonly found in scans with commodity cameras – and therefore they fail to produce high-quality results for typical scans, as shown in the results from Zhou and Koltun [36] in Figure 1.
+
+To address these issues, we propose a flexible texture optimization framework based on a learned metric that is robust to common scanning errors (right side of Figure 1).
+
+
+Figure 2. All methods aim to optimize a texture solution. Existing methods optimize the texture jointly with camera parameters [36, 15], image mapping [36, 4], or color balance [15]. Instead, we jointly solve for the texture and an adversarial evaluation metric that tolerates the errors.
+
+The key idea behind our approach is to account for misalignments in a learned objective function of the texture optimization. Rather than using a traditional objective function, such as $L1$ or $L2$, we learn a new objective function (adversarial loss) that is robust to the types of misalignment present in the input data. This novel approach eliminates the need for hand-crafted parametric models for fixing the camera parameters [36, 15], image mapping [4, 36], or color balance [15] (bottom row of Figure 2) and replaces them all with a learned evaluation metric (green box in Figure 2). As such, it adapts to the input data.
+
+Inspired by the success of adversarial networks in image synthesis [14], we propose to use a learned conditional discriminator to serve as our objective function, and jointly optimize the color texture of a reconstructed surface with this discriminator. The condition is a captured image $I_A$ from the source view $V_A$, and the query is either (i) "real:" a second captured image $I_B$ (from an auxiliary view $V_B$) projected onto the surface and then rendered back to $V_A$, or (ii) "fake:" an image of the optimized synthetic texture rendered to view $V_A$. By optimizing the surface texture while jointly training this conditional discriminator, we aim to produce a texture that is indistinguishable from reprojections of captured images from all other views. During the optimization, the discriminator learns invariance to the misalignments and distortions present in the input dataset, while recognizing synthetic artifacts that do not appear in the real images (local blurs and seams). Therefore, the textures optimized to fool the discriminator (ours in Figure 1) appear more realistic than in previous approaches.
+
+Our experiments show that this adversarial optimization framework produces notably improved performance compared to state-of-the-art methods, both quantitatively on synthetic data and qualitatively on real data. Moreover, since it tolerates gross misalignments, we are able to generate realistic textures on CAD models which have been only roughly aligned to 3D scans, in spite of large mismatches in surface geometry. This opens up the potential to produce CAD models with realistic textures for content creation.
+
+# 2. Related Work
+
+RGB-D scanning has a rich history, and many approaches have been proposed for color texture generation.
+
+View aggregation. Common texture generation methods [19, 36] average projected input images to generate textures. To reduce blurriness artifacts, some approaches select a single or a few candidate views for each region [11]. Others formulate a multi-label selection energy minimization problem to minimize seam artifacts [21, 27, 30, 32, 15]. For instance, [15] aims at selecting the best view for each region to balance the visual sharpness and color consistency of boundaries between neighboring regions with different views selected, which is modeled as a multi-label graph-cut problem [5]. Our method does not explicitly define the aggregation method, but implicitly aggregates colors from different views based on a learned adversarial metric.
+
+Parametric color optimization. Several approaches have been proposed to improve the mapping of input images to textures with parametric models, leveraging both human supervision [12, 23, 24, 34], as well as automatic optimization [3, 25]. Zhou et al. [36] propose to optimize a parametric model comprising camera poses and non-rigid grid deformations of input images to minimize an L2 color consistency metric. While these methods are able to fix small misalignments, their deformation models are often not expressive enough to handle many real-world distortions, particularly those due to largely approximate surface geometry. In contrast to a hand-crafted deformation model, we learn a distortion-tolerant adversarial loss.
+
+Patch-based color optimization. Patch-based image synthesis strategies have been proposed for color texture optimization [4]. Rather than non-rigid image warping, they re-synthesize the input image with the nearest patch [26] to handle misalignments. However, general misalignment cannot be accurately modeled by translating patches, and the L2 loss is not robust to color, lighting or sharpness differences. Our method optimizes the discriminator to cover all these problems without requiring explicit re-synthesis.
+
+Neural textures. Recently, neural rendering approaches have been proposed to synthesize a feature map on a surface that can be interpreted by a deep network to produce novel image views. For instance, [29] stores appearance information as high-dimensional features in a neural texture map associated with the coarse geometry proxy and decodes to color when projected to novel views. [28] stores the appearance information as high-dimensional features in volumes, and [2] uses features stored with points. These methods rely on the representation power of generative networks at rendering times to obtain novel viewpoints, which limits their applicability in standard graphics pipelines.
+
+
+Figure 3. Texture Generation. From an input RGB-D scan, we optimize for both its texture image and a learned texture objective function characterized by a discriminator network. The discriminator operates on reprojections of input color images in order to maintain robustness to various misalignments. We randomly pick a pair of input images, source and auxiliary, and synthesize the fake and real examples from the source view, conditioned on the re-projected source image. The texture image and discriminator are trained in an alternating process.
+
+# 3. Method
+
+Our goal is to optimize for a color texture that can be used to render a scanned scene using a classical computer graphics pipeline. During the scanning procedure, we obtain color images and their estimated camera poses. These views, along with the reconstructed geometry, are input to our method. To optimize for a texture, we must specify an objective function; in this case, we must account for misalignments of the color images and the reconstructed model. Thus we propose to learn the loss function in conjunction with the texture (see Figure 3). The function is modeled as an adversarial loss using a discriminator network to identify 'real' and 'fake' imagery, and is designed to provide a misalignment-tolerant metric for our texture optimization.
+
+# 3.1. Misalignment-Tolerant Metric
+
+Our key insight is to learn a conditional discriminator as a misalignment-tolerant metric adaptive to the error distribution of the input data. Figure 4(a) shows a 2D example where two observations (b) and (c) are misaligned by 2 units in the horizontal direction, and an L2 loss results in a blurry appearance. Ultimately, we aim to synthesize a texture that appears as realistic as either observation. To achieve this, we ask the discriminator to consider both (b) and (c) as real conditioned on either observation. With such a discriminator, the blurred (d) results in a large loss and the texture will instead converge to either (b) or (c).
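+
+A one-line derivation makes the blurring explicit. If the two shifted observations, denoted $I_1$ and $I_2$ here, are both treated as targets under an L2 loss, the optimal texture is their pixel-wise mean:
+
+$$
+T^{*}_{L2} = \arg\min_{T}\left( \|T - I_1\|_2^2 + \|T - I_2\|_2^2 \right) = \frac{I_1 + I_2}{2},
+$$
+
+which averages the misaligned observations and therefore blurs, as in Figure 4(d). Under the proposed discriminator, this average is penalized as unrealistic, so the texture instead converges to one of the observations.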
+
+We extend this intuition to 3D where the geometry is observed from different viewpoints. We then aim to optimize a texture such that local patches of the texture rendered to various views look realistic. Therefore, conditioned on any arbitrary view, we generate real examples by a re-projection from any other view to this view, as shown in Figure 3. Such re-projection can be achieved by projecting the color image
+
+onto the surface and then rendering it back to another view. Unlike the simple 2D example, given camera and geometry errors it is quite possible that no texture exists for which every local patch perfectly matches one of the input views. However, the proposed approach is expected to push such inconsistencies into smoothly textured regions, hiding artifacts that the discriminator could easily identify, and thereby produce a locally consistent, realistic texture.
+
+For each optimization iteration, we randomly select two input images, $I_A$ (source image) and $I_B$ (auxiliary image), with corresponding camera poses $V_A$ and $V_B$. The condition is $I_A$ from the viewpoint $V_A$; the 'real' image is $I_B$ projected onto the scan geometry and rendered from $V_A$, while the 'fake' image is the synthesized texture rendered from $V_A$. We alternately optimize the texture and the discriminator. During texture optimization, we adjust the texture pixel colors so that the rendered texture looks more realistic under the current discriminator's scoring. During discriminator optimization, we update the discriminator so that it better distinguishes real from fake examples. We linearly combine the adversarial loss with an L1 loss that decays exponentially as the optimization proceeds, which helps the optimizer find a good initial texture.
+
+Network Architecture Our framework is adapted from the PatchGAN discriminator architecture proposed by Isola et al. [18]. We choose that framework because it is designed to produce local details that look as realistic as a given set of input images. We use three convolutional layers, resulting in a patch size of $70 \times 70$, which we find suitable for our input images of resolution $640 \times 480$. The PatchGAN evaluates local $70 \times 70$ patches of images rather than the entire image.
+
+Figure 4. 2D example of a misalignment. (a) shows the ground truth pattern, which is observed with misalignment in (b) and (c); an L2 loss results in blurring (d). We train a discriminator which only accepts (b) and (c) as real examples conditioned on each other, and use it to optimize the texture, which converges to either (b) or (c).
+
+Patches are selected for discriminator training if more than half of the patch is not occluded; thus, patches used for training have sufficient overlap. Unlike the original architecture, we remove all batch normalization layers and feed a single view example at each optimization iteration, which we empirically found to improve performance. Conditioned on the input view, we ask the discriminator to evaluate the residual of the synthesized example minus the conditioning input. Finally, since we focus on evaluating foreground regions (pixels corresponding to input geometry), we remove the loss terms for regions where the background comprises more than $90\%$ of the receptive field.
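+
+For concreteness, below is a minimal PyTorch sketch of a PatchGAN-style conditional discriminator in the spirit described above: strided convolutions without batch normalization, scoring the residual between the query image and the conditioning view, with one real/fake logit per local patch. The layer widths and exact depth are illustrative assumptions rather than the exact released architecture.
+
+```python
+import torch
+import torch.nn as nn
+
+class PatchDiscriminator(nn.Module):
+    """PatchGAN-style conditional discriminator: one real/fake logit per local patch."""
+    def __init__(self, in_channels=3, width=64):
+        super().__init__()
+        self.net = nn.Sequential(
+            nn.Conv2d(in_channels, width, 4, stride=2, padding=1),
+            nn.LeakyReLU(0.2, inplace=True),
+            nn.Conv2d(width, width * 2, 4, stride=2, padding=1),
+            nn.LeakyReLU(0.2, inplace=True),
+            nn.Conv2d(width * 2, width * 4, 4, stride=2, padding=1),
+            nn.LeakyReLU(0.2, inplace=True),
+            nn.Conv2d(width * 4, 1, 4, stride=1, padding=1),  # patch-wise logits
+        )
+
+    def forward(self, query, condition):
+        # The discriminator evaluates the residual of the query w.r.t. the conditioning view.
+        return self.net(query - condition)
+```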
+
+# 3.2. Texture Optimization
+
+To retrieve a texture, we jointly optimize the texture and the misalignment-tolerant metric. Inspired by the adversarial loss used in Pix2Pix [18], we express our view-conditioned adversarial loss as:
+
+$$
+\mathcal{L}_{c}(T, D) = \mathbb{E}_{x, y}\left[\log D(x, y)\right] + \mathbb{E}_{x, M_{x}}\left[\log\left(1 - D(x, M_{x}(T))\right)\right], \tag{1}
+$$
+
+where $T$ and $D$ represent the target texture image and the discriminator parameters we are optimizing for. $x$ is the condition, a reprojected color image from the input sequence of captured images. $M_x$ is the fixed texture-to-image mapping given the camera pose associated with $x$ . Here, a real example is an image $y$ re-projected to the view of $x$ . We optimize $D$ with the objective to correctly identify real examples, misaligned real imagery, and fake examples rendered from the texture as $M_x(T)$ . Simultaneously, we optimize the texture $T$ such that it is difficult to be identified as fake when mapped to view of $x$ .
+
+Since the adversarial loss alone can be difficult to train, we additionally add an L1 loss to the texture optimization
+
+to provide initial guidance for the optimization:
+
+$$
+\mathcal{L}_{L1}(T) = \mathbb{E}_{x, y, M_{x}}\left\| y - M_{x}(T) \right\|_{1}. \tag{2}
+$$
+
+Our objective texture solution is:
+
+$$
+T^{*} = \arg\min_{T} \max_{D} \mathcal{L}_{c}(T, D) + \lambda \mathcal{L}_{L1}(T). \tag{3}
+$$
+
+During training, we initialize all pixels in the texture image to zero and set $\lambda = 10$. The high $\lambda$ allows the L1 loss to provide an initial texture; every 1000 steps we decay $\lambda$ exponentially by a factor of 0.8. We optimize the texture and the discriminator in an alternating fashion at each step, using the Adam optimizer for both with learning rates of $10^{-3}$ and $10^{-4}$, respectively. For each object or scene, we optimize for 50,000 steps to obtain the final texture.
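+
+Putting Eqs. (1)-(3) together, a minimal PyTorch-style sketch of the alternating optimization might look as follows. Here `sample_views` and `render_texture` (the differentiable mapping $M_x$) are placeholders for components described in Sections 3.1 and 3.3, and the adversarial loss is written in its standard binary cross-entropy form; this is an illustrative sketch, not the released implementation.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def optimize_texture(texture, disc, sample_views, render_texture,
+                     steps=50000, lam=10.0, decay=0.8, decay_every=1000):
+    """Alternating optimization of the texture image (Eq. 3) and the
+    misalignment-tolerant discriminator (Eq. 1), with a decaying L1 term (Eq. 2).
+    `texture` is assumed to be a tensor with requires_grad=True."""
+    opt_tex = torch.optim.Adam([texture], lr=1e-3)
+    opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)
+    for step in range(steps):
+        # condition x, misaligned 'real' y (auxiliary view reprojected to x's view),
+        # and the texture-to-image mapping M_x for the source view
+        x, y, M_x = sample_views()
+        fake = render_texture(texture, M_x)
+
+        # discriminator step: classify real vs. fake, conditioned on x
+        d_real = disc(y, x)
+        d_fake = disc(fake.detach(), x)
+        loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
+                  + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
+        opt_disc.zero_grad()
+        loss_d.backward()
+        opt_disc.step()
+
+        # texture step: fool the discriminator, plus decaying L1 guidance
+        d_fake = disc(fake, x)
+        loss_t = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
+                  + lam * (fake - y).abs().mean())
+        opt_tex.zero_grad()
+        loss_t.backward()
+        opt_tex.step()
+
+        if (step + 1) % decay_every == 0:
+            lam *= decay  # exponential decay of the L1 weight
+    return texture
+```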
+
+# 3.3. Differentiable Rendering and Projection
+
+To enable the optimization of the RGB texture of a 3D model, we leverage differentiable rendering to generate the synthesized 'fake' views. We pre-compute a view-to-texture mapping using pyRender [17], and then implement the rendering with differentiable bilinear sampling.
+
+To create the misaligned 'real' images ($I_B$ seen from $V_{A}$), we compute a reprojection; note that here we do not need to maintain gradient information. For each pixel $\mathbf{P}_A$ in the source image, we determine the corresponding pixel $\mathbf{P}_B$ in the auxiliary image, so that bilinear sampling can be applied to warp the image from $V_{B}$ to $V_{A}$. Specifically, for $\mathbf{P}_A$ with depth value $d_A$ from the source depth map, its 3D location in the source view's space is $\mathbf{p}_{\mathbf{A}} = d_{A}\mathbf{K}^{-1}\mathbf{P}_{\mathbf{A}}$, where $\mathbf{K}$ is the intrinsic camera matrix. Given the camera-to-world transformations $\mathbf{T}_A$ and $\mathbf{T}_B$ of the source and auxiliary views, the corresponding 3D and pixel locations in the auxiliary view are $\mathbf{p}_B = \mathbf{T}_B^{-1}\mathbf{T}_A\mathbf{p}_{\mathbf{A}}$ and $\mathbf{P}_B = \mathbf{K}\mathbf{p}_B$. The pixel is visible in the auxiliary view if $\mathbf{P}_B$ lies within the image bounds and the difference between the z-coordinate of $\mathbf{p}_B$ and the value $d_B$ from the auxiliary depth map is $< \theta_z$. We use $\theta_z = 0.1$ meters for scenes and $\theta_z = 0.03$ meters for object-level scanning.
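+
+The reprojection and visibility test described above can be sketched in a few lines of NumPy; the formulas follow the text, while the nearest-neighbor depth lookup and variable names are illustrative assumptions.
+
+```python
+import numpy as np
+
+def reproject_pixel(u, v, d_A, K, T_A, T_B, depth_B, theta_z=0.03):
+    """Map a source pixel (u, v) with depth d_A into the auxiliary view and
+    test visibility against the auxiliary depth map depth_B (H x W)."""
+    K_inv = np.linalg.inv(K)
+    # 3D point in the source camera frame: p_A = d_A * K^{-1} * P_A
+    p_A = d_A * (K_inv @ np.array([u, v, 1.0]))
+    # Transform to the auxiliary camera frame: p_B = T_B^{-1} T_A p_A
+    p_A_h = np.append(p_A, 1.0)                       # homogeneous coordinates
+    p_B = (np.linalg.inv(T_B) @ T_A @ p_A_h)[:3]
+    # Project into the auxiliary image: P_B = K p_B (with perspective division)
+    P_B = K @ p_B
+    u_B, v_B = P_B[0] / P_B[2], P_B[1] / P_B[2]
+    H, W = depth_B.shape
+    inside = 0 <= u_B < W - 1 and 0 <= v_B < H - 1 and p_B[2] > 0
+    if not inside:
+        return None, False
+    d_B = depth_B[int(round(v_B)), int(round(u_B))]   # nearest-neighbor depth lookup
+    visible = abs(p_B[2] - d_B) < theta_z
+    return (u_B, v_B), visible
+```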
+
+# 4. Experiments
+
+Evaluation Metric For evaluation, we adopt several different metrics to measure the quality of the generated texture compared to the ground truth. First, we propose the nearest patch loss as an indicator of how close the patch appearance of the texture is to the ground truth. Specifically, for each pixel $\mathbf{u}$ we extract a $7\times 7$ patch centered around it in the generated texture and find the L2 distance $d(\mathbf{u})$ between it and the nearest neighbor patch in the ground truth texture. We define the nearest patch loss as the average of all $d(\mathbf{u})$. Second, we adopt the perceptual metric [35] to evaluate perceptual quality.
+
+
+Figure 5. Texture Generation on 2D. The texture provided by our approach is visually closer to the ground truth image while avoiding blurring artifacts such as those introduced by an L1 loss. An exact patch loss favors alignment over perceptual similarity, while the nearest patch loss is a more robust metric.
+
+Finally, we propose to measure the difference between generated textures and the ground truth according to sharpness [31] and the average intensity of image gradients, in order to evaluate how robust the generated textures are to blurring artifacts without introducing noise artifacts. Note that standard image quality metrics such as mean squared error, PSNR [10], or SSIM [6] are ill-suited, as they assume perfect alignment between the target and the ground truth [35].
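+
+A deliberately brute-force sketch of the nearest patch loss is shown below, assuming the images are NumPy arrays; an efficient implementation would use a k-d tree or GPU batching, and the patch stride used for sampling here is a simplification for tractability rather than part of the metric's definition.
+
+```python
+import numpy as np
+
+def nearest_patch_loss(generated, ground_truth, patch=7, stride=4):
+    """Average, over sampled patches of the generated texture, of the L2 distance
+    to the nearest-neighbor patch in the ground-truth texture."""
+    def extract(img):
+        H, W = img.shape[:2]
+        return np.stack([
+            img[i:i + patch, j:j + patch].ravel()
+            for i in range(0, H - patch + 1, stride)
+            for j in range(0, W - patch + 1, stride)
+        ])
+
+    gen_p, gt_p = extract(generated), extract(ground_truth)
+    dists = np.empty(len(gen_p))
+    for k, p in enumerate(gen_p):              # brute-force nearest-neighbor search
+        dists[k] = np.sqrt(((gt_p - p) ** 2).sum(axis=1).min())
+    return dists.mean()
+```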
+
+Synthetic 2D Example We first verify the effectiveness of our method with a synthesized 2D example. We aim to optimize for a 2D image, given input observations with 2D micro-translation errors. We use an image resolution of $512 \times 512$ and translation error $\in [-16,16]^2$ . During texture optimization, we randomly select one observation as the source and another observation as the auxiliary, and optimize the target image to be more realistic under the current discriminator. Figure 5 shows the resulting image optimized with our approach in comparison to a naive L1 loss.
+
+Visually, our optimized image is sharper and perceptually closer to the ground truth, while the L1 loss yields a blurry result from aggregating multiple misaligned observations. In this simple setting, we quantitatively evaluate the exact patch loss for each pixel as the L2 distance between the patch centered at that pixel in the generated image and the corresponding patch in the ground truth. The overall exact patch loss is the L2 norm of the exact patch losses over all pixels. We additionally evaluate the nearest patch loss. Optimization with the L1 loss achieves an exact patch loss of 10.7, while ours is 11.3. However, we achieve a nearest patch loss of 1.53, which is much smaller than the 7.33 of the L1 loss. This suggests that our method prefers realistic misalignment to blur: we successfully derive an image in which every local patch is nearly identical to a misaligned version of a patch in the ground truth image.
+
+Synthetic 3D Example In order to quantitatively evaluate our 3D texture generation, we create a synthetic dataset of 16 models randomly selected from ShapeNet [7] across different categories.
+
+| Method (camera / geometry errors) | Patch | Perceptual | Gradient | Sharpness |
+| --- | --- | --- | --- | --- |
+| L1 | 6.01 / 6.65 | 0.256 / 0.261 | 0.10 / 0.065 | 0.041 / 0.037 |
+| ColorMap | 6.34 / 6.49 | 0.252 / 0.287 | 0.14 / 0.061 | 0.036 / 0.036 |
+| Sharpest | 6.06 / 6.06 | 0.385 / 0.407 | 0.06 / 0.036 | 0.087 / 0.069 |
+| 3DLite | 5.96 / 5.92 | 0.236 / 0.281 | 0.06 / 0.058 | 0.027 / 0.025 |
+| VGG | 6.97 / 7.09 | 0.390 / 0.388 | 0.15 / 0.094 | 0.061 / 0.065 |
+| Ours | 5.81 / 5.29 | 0.193 / 0.181 | 0.03 / 0.028 | 0.022 / 0.018 |
+
+Table 1. Evaluation of different methods on our 3D synthetic dataset averaged across different levels of camera pose and geometry errors.
+
+These shapes typically contain sharp edges and self-occlusion boundaries, complexities reflecting those of real-world objects. Since we aim to address arbitrary texturing, we enrich the appearance of these shapes by using 16 random color images from the internet as texture images. To create virtual scans of the objects, we uniformly sample $>900$ views on a unit hemisphere by subdividing an icosahedron, from which we render the textured geometry as observed color images. To simulate misalignment, we associate each rendered image with a slightly perturbed camera pose, and to simulate geometry errors, we apply random perturbations to the geometric model. We use a set of errors increasing from $n = 1$ to $n = 4.5$, and refer to the supplemental material for additional details on generating the camera and geometry perturbations.
+
+In Table 1, we study the effect of varying camera and geometry errors in this synthetic 3D setting. We report evaluation metrics for our approach as well as several state-of-the-art texture optimization methods, including methods based on an L1 loss and texturing using sharpest frame selection [31]. Our approach outperforms all other methods, as it avoids blurring effects often seen with L1 and ColorMap [36], and it avoids seams and over-sharpness introduced by methods relying on sharpness selection (3DLite [15] and sharpest frame selection). VGG [20] aggregates views by blending deep features, which is insufficient for handling misalignment artifacts. Two example scenes with increasing errors in camera and geometry are shown in Figure 6.
+
+We additionally study the behavior of all methods in this experiment using the perceptual metric [35] in Figure 8. Although the performance of all methods drops as the camera/geometry errors increase, our approach maintains the best perceptual quality. Figure 7 shows a qualitative comparison; our approach maintains a sharp result while ColorMap produces increasingly blurry results as the error increases.
+
+Alternative Discriminators? We analyze the design choices for our misalignment-tolerant conditional discriminator in Figure 9. Removing the auxiliary view (b), and thus relying only on the source view to provide 'real' examples to the discriminator (similar to pix2pix [18]), renders the metric unable to handle misalignments.
+
+
+Figure 6. Texture generation in the case of high camera or geometry errors. ColorMap [36] suffers from blurring, and Sharpest or 3DLite [15] selection leads to inconsistent boundaries or broken structures. VGG [20] aggregates views by blending deep features with noise, which is not sufficient for handling misalignment artifacts. Ours is visually closest to the ground truth.
+
+
+Figure 7. Texture generation under increasing camera or geometry errors. ColorMap [36] produces more blurry results under camera/geometry error while ours maintains sharp textures.
+
+We also evaluate a general discriminator that classifies whether a generated patch is real or fake across the entire set of input views without any condition (c), resulting in ambiguity as to where real patches come from.
+
+Figure 8. Perceptual loss under increasing camera or geometry errors; we outperform existing methods at various levels of error.
+
+Figure 9. Comparing different discriminator options. (b) removes the auxiliary view from the discriminator, resulting in a lack of robustness to misalignments. (c) removes the condition from the discriminator, resulting in ambiguity in local regions. (d) Our conditional discriminator leveraging auxiliary views to provide examples of realistic misalignments enables tolerance to misalignment and generation of textures reflecting input image characteristics.
+
+Our conditional discriminator, which leverages reprojected auxiliary views, enables robustness to misalignment, resulting in realistic texturing.
+
+Real Object Scans We compare our method to state-of-the-art texturing methods on scanned objects from real environments. We use a Structure Sensor [1] along with its SLAM system to scan 35 chairs, producing scanned geometry, RGB-D frames, and camera poses ($\approx$ 500 frames per scan). The foreground/background of the object in the RGB frames is determined by whether a ray intersects the reconstructed geometry. Figure 11 (rows 1-4) shows qualitative comparisons. With an L1 loss or ColorMap [36], blur artifacts are induced by misalignment errors. Sharpest selection and 3DLite [15] use sharp region selection, resulting in seams and inconsistent global structures, as shown in the flower, leaf, and chair arms. A VGG loss [20] produces excess noise artifacts. Our approach produces sharp and consistent texturing, including detailed patterns such as the leaves in row 1 and the woven structures in rows 2 and 3.
+
+Additionally, we show a quantitative evaluation in Table 2 (first column) by evaluating the perceptual metric [35] for rendered textures against input observed views; our approach achieves the most realistic texturing.
+
+| Method | Object | ScanNet | CAD |
+| --- | --- | --- | --- |
+| L1 | 0.197 | 0.470 | 0.199 |
+| ColorMap | 0.186 | 0.461 | 0.234 |
+| Sharpest | 0.222 | 0.510 | 0.260 |
+| 3DLite | 0.185 | 0.445 | 0.238 |
+| VGG | 0.272 | 0.534 | 0.289 |
+| Ours | 0.175 | 0.395 | 0.176 |
+
+Table 2. Mean perceptual loss comparing the input images and rendered textures from different methods. Our method achieves the best performance on both the real and CAD datasets.
+
+Real Scene Scans To demonstrate the capability of our approach to optimize texture at a larger scale, we run our algorithm on the ScanNet dataset [8], which provides RGB-D sequences and reconstructed geometry of indoor scenes. We evaluate our approach on scenes with $\mathrm{ID} \leq 20$ ($\approx 2000-3000$ frames per scan) and compare it with the existing state of the art. Figure 11 (rows 5-9) and Table 2 (middle column) show qualitative and quantitative comparisons. Our method produces texturing that is most perceptually similar to the observed images; our misalignment-tolerant metric helps avoid the blur, over-sharpening, and excess noise produced by other methods due to camera and geometry errors in real-world scans.
+
+Real to CAD Models Since our method can better handle errors from approximate surface geometry, it is possible to consider texturing CAD models using real-world images to attain realistic appearances. While large datasets of 3D CAD models are now available [7], they are often un-textured or textured simplistically, resulting in a notably different appearance from real-world objects. To test whether our method can be applied in this challenging scenario, we use our collected dataset of real object scans, retrieve similar CAD models from ShapeNet manifold [16], and rigidly align them to the scanned objects. We then replace the scanned geometry with the CAD model and use the captured color images and estimated poses from the scan to optimize the CAD texture. Qualitative and quantitative evaluations of our approach in comparison to existing state-of-the-art methods are shown in Figure 11 (rows 10-13) and Table 2 (right column), respectively. Our approach is able to handle both camera pose errors and the synthetic-real geometry differences to produce texturing perceptually very similar to the observed imagery, whereas other methods suffer strong blur, noise, and seam artifacts under these errors.
+
+Perceptual Quality Although we lack ground-truth texturing for objects in real environments, we can compare the perceptual loss [35] of the rendered textured geometry against the corresponding input viewpoint. We select 10 views uniformly distributed across the scanning video and render the textured model to compute the mean perceptual loss.
+
+
+Figure 10. User study. We ask people to vote for the rendered textures from different methods that look closest to the input image.
+
+Table 2 shows the performance of the different methods on the object scans, scene scans, and CAD models; our method achieves the best performance in all three scenarios.
+
+Additionally, we perform a user study to evaluate the quality of the texture, shown in Figure 10. Our user study comprised 63 participants who were asked to vote for the texture which produced a rendering closest to the input image. For some views, it can be difficult for users to differentiate between methods when regions are largely uniform in color. Nevertheless, our method is still notably preferred over the other texturing approaches. We provide additional comparisons with [32] and [13] in Supplement C, and describe the influence of sparse views for training discriminators in Supplement D.
+
+Runtime. On average, our released implementation takes 7.3 minutes per object and 33.4 minutes per scene on a single Titan X GPU.
+
+# 5. Conclusion
+
+We have proposed a misalignment-tolerant metric for texture optimization of RGB-D scans, introducing a learned texturing objective function for maintaining robustness to misalignment errors in camera poses and geometry. We represent the learned function as a conditional discriminator trained with an adversarial loss where 'real' examples characterize the various misalignment errors seen in the input data. This avoids explicit parametric modeling of scanning errors, and enables our optimization to produce realistic texturing. Our approach opens up the potential for texturing synthetic CAD models with real-world imagery. It also marks an important step towards creating digital content from real-world scans, for instance in the context of AR and VR applications.
+
+# Acknowledgements
+
+This work was supported by the ZD.B and ERC Starting Grant Scan2CAD (804724), NSF grants CHS-1528025 and IIS-1763268, a Vannevar Bush Faculty Fellowship, and grants from the Samsung GRO program and the Stanford SAIL Toyota Research Center.
+
+
+Figure 11. Visual comparison on object scans, ScanNet [8] scans of scenes, and CAD models aligned with object scans. Due to misalignment errors in both camera pose and geometry, both L1 loss and ColorMap [36] produce blurry artifacts, sharpest selection and 3DLite [15] result in inconsistent regions or breaks in texture structure, and VGG [20] blends learned features resulting in structural artifacts and noise. Our misalignment-tolerant approach produces sharp and consistent textures.
+
+# References
+
+[1] Structure sensor. https://structure.io. 6
+[2] Kara-Ali Aliev, Dmitry Ulyanov, and Victor Lempitsky. Neural point-based graphics. arXiv preprint arXiv:1906.08240, 2019. 2
+[3] Fausto Bernardini, Ioana M. Martin, and Holly Rushmeier. High-quality texture reconstruction from multiple scans. IEEE Transactions on Visualization and Computer Graphics, 7(4):318-332, 2001. 2
+[4] Sai Bi, Nima Khademi Kalantari, and Ravi Ramamoorthi. Patch-based optimization for image-based texture mapping. ACM Trans. Graph., 36(4):106-1, 2017. 1, 2
+[5] Yuri Boykov, Olga Veksler, and Ramin Zabih. Fast approximate energy minimization via graph cuts. IEEE Transactions on pattern analysis and machine intelligence, 23(11):1222-1239, 2001. 2
+[6] Dominique Brunet, Edward R Vrscay, and Zhou Wang. On the mathematical properties of the structural similarity index. IEEE Transactions on Image Processing, 21(4):1488-1499, 2011. 5
+[7] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015. 5, 7
+[8] Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5828-5839, 2017. 7, 8
+[9] Angela Dai, Matthias Nießner, Michael Zollhöfer, Shahram Izadi, and Christian Theobalt. Bundlefusion: Real-time globally consistent 3d reconstruction using on-the-fly surface reintegration. ACM Transactions on Graphics (TOG), 36(3):24, 2017. 1
+[10] Johannes F De Boer, Barry Cense, B Hyle Park, Mark C Pierce, Guillermo J Tearney, and Brett E Bouma. Improved signal-to-noise ratio in spectral-domain compared with time-domain optical coherence tomography. Optics letters, 28(21):2067-2069, 2003. 5
+[11] Arnaud Dessein, William AP Smith, Richard C Wilson, and Edwin R Hancock. Seamless texture stitching on a 3d mesh by poisson blending in patches. In 2014 IEEE International Conference on Image Processing (ICIP), pages 2031-2035. IEEE, 2014. 2
+[12] Thomas Franken, Matteo Dellepiane, Fabio Ganovelli, Paolo Cignoni, Claudio Montani, and Roberto Scopigno. Minimizing user intervention in registering 2d images to 3d models. The Visual Computer, 21(8-10):619-628, 2005. 2
+[13] Yanping Fu, Qingan Yan, Long Yang, Jie Liao, and Chunxia Xiao. Texture mapping for 3d reconstruction with rgb-d sensor. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4645-4653, 2018. 7, 12
+[14] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances
+
+in neural information processing systems, pages 2672-2680, 2014. 2
+[15] Jingwei Huang, Angela Dai, Leonidas J Guibas, and Matthias Nießner. 3dlite: towards commodity 3d scanning for content creation. ACM Trans. Graph., 36(6):203-1, 2017. 1, 2, 5, 6, 8
+[16] Jingwei Huang, Hao Su, and Leonidas Guibas. Robust watertight manifold surface generation method for shapenet models. arXiv preprint arXiv:1802.01698, 2018. 7
+[17] Jingwei Huang, Yichao Zhou, Thomas Funkhouser, and Leonidas J Guibas. Framenet: Learning local canonical frames of 3d surfaces from a single rgb image. In Proceedings of the IEEE International Conference on Computer Vision, pages 8638-8647, 2019. 4
+[18] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1125-1134, 2017. 3, 4, 5
+[19] Shahram Izadi, David Kim, Otmar Hilliges, David Molyneaux, Richard Newcombe, Pushmeet Kohli, Jamie Shotton, Steve Hodges, Dustin Freeman, Andrew Davison, et al. Kinectfusion: real-time 3d reconstruction and interaction using a moving depth camera. In Proceedings of the 24th annual ACM symposium on User interface software and technology, pages 559-568. ACM, 2011. 1, 2
+[20] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European conference on computer vision, pages 694–711. Springer, 2016. 5, 6, 8
+[21] Victor Lempitsky and Denis Ivanov. Seamless mosaicing of image-based texture maps. In 2007 IEEE Conference on Computer Vision and Pattern Recognition, pages 1-6. IEEE, 2007. 2
+[22] Richard A Newcombe, Shahram Izadi, Otmar Hilliges, David Molyneaux, David Kim, Andrew J Davison, Pushmeet Kohi, Jamie Shotton, Steve Hodges, and Andrew Fitzgibbon. Kinectfusion: Real-time dense surface mapping and tracking. In Mixed and augmented reality (ISMAR), 2011 10th IEEE international symposium on, pages 127-136. IEEE, 2011. 1
+[23] Eyal Ofek, Erez Shilat, Ari Rappoport, and Michael Werman. Multiresolution textures from image sequences. IEEE Computer Graphics and Applications, (2):18-29, 1997. 2
+[24] Frédéric Pighin, Jamie Hecker, Dani Lischinski, Richard Szeliski, and David H Salesin. Synthesizing realistic facial expressions from photographs. In ACM SIGGRAPH 2006 Courses, page 19. ACM, 2006. 2
+[25] Kari Pulli and Linda G Shapiro. Surface reconstruction and display from range and color data. Graphical Models, 62(3):165-201, 2000. 2
+[26] Denis Simakov, Yaron Caspi, Eli Shechtman, and Michal Irani. Summarizing visual data using bidirectional similarity. In 2008 IEEE Conference on Computer Vision and Pattern Recognition, pages 1-8. IEEE, 2008. 2
+[27] Sudipta N Sinha, Drew Steedly, Richard Szeliski, Maneesh Agrawala, and Marc Pollefeys. Interactive 3d architectural
+
+modeling from unordered photo collections. In ACM Transactions on Graphics (TOG), volume 27, page 159. ACM, 2008. 2
+[28] Vincent Sitzmann, Justus Thies, Felix Heide, Matthias Nießner, Gordon Wetzstein, and Michael Zollhofer. Deepvoxels: Learning persistent 3d feature embeddings. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2437-2446, 2019. 2
+[29] Justus Thies, Michael Zollhöfer, and Matthias Nießner. Deferred neural rendering: Image synthesis using neural textures. arXiv preprint arXiv:1904.12356, 2019. 2
+[30] Luiz Velho and Jonas Sossai Jr. Projective texture atlas construction for 3d photography. The Visual Computer, 23(9-11):621-629, 2007. 2
+[31] Cuong T Vu, Thien D Phan, and Damon M Chandler. S3: A spectral and spatial measure of local perceived sharpness in natural images. IEEE transactions on image processing, 21(3):934-945, 2011. 5
+[32] Michael Waechter, Nils Moehrle, and Michael Goesele. Let there be color! large-scale texturing of 3d reconstructions. In European Conference on Computer Vision, pages 836-850. Springer, 2014. 2, 7, 12
+[33] Thomas Whelan, Stefan Leutenegger, Renato F Salas-Moreno, Ben Glocker, and Andrew J Davison. Elasticfusion: Dense slam without a pose graph. Proc. Robotics: Science and Systems, Rome, Italy, 2015. 1
+[34] Zexiang Xu, Sai Bi, Kalyan Sunkavalli, Sunil Hadap, Hao Su, and Ravi Ramamoorthi. Deep view synthesis from sparse photometric images. ACM Transactions on Graphics (TOG), 38(4):76, 2019. 2
+[35] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 586-595, 2018. 4, 5, 6, 7
+[36] Qian-Yi Zhou and Vladlen Koltun. Color map optimization for 3d reconstruction with consumer depth cameras. ACM Transactions on Graphics (TOG), 33(4):155, 2014. 1, 2, 5, 6, 8
\ No newline at end of file
diff --git a/adversarialtextureoptimizationfromrgbdscans/images.zip b/adversarialtextureoptimizationfromrgbdscans/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..35e79f56bc7a3451a36a300252fe5bea1a80bf6d
--- /dev/null
+++ b/adversarialtextureoptimizationfromrgbdscans/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4711f0f66207efef03b9a08769da3a6118477f88f4b5714de2e98c37748664cc
+size 1003627
diff --git a/adversarialtextureoptimizationfromrgbdscans/layout.json b/adversarialtextureoptimizationfromrgbdscans/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a38026043aa49c4b8dc15a07e5dfd24247d35dbc
--- /dev/null
+++ b/adversarialtextureoptimizationfromrgbdscans/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:64d072f77f1269d54ba0813c1b8d0a2105388d8b687641f1940ed7d4dc433bd0
+size 342994
diff --git a/adversarialvertexmixuptowardbetteradversariallyrobustgeneralization/72c0a83a-43ef-424a-9a9d-8b912eced2fc_content_list.json b/adversarialvertexmixuptowardbetteradversariallyrobustgeneralization/72c0a83a-43ef-424a-9a9d-8b912eced2fc_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..c1c7dd9a160a42710a348a3b379ea2b18f4635ae
--- /dev/null
+++ b/adversarialvertexmixuptowardbetteradversariallyrobustgeneralization/72c0a83a-43ef-424a-9a9d-8b912eced2fc_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4fd8e8ae0b1827a14220dba612a4d57b2c5bab4f40c5321cc1c4b0ebd6b054ec
+size 81078
diff --git a/adversarialvertexmixuptowardbetteradversariallyrobustgeneralization/72c0a83a-43ef-424a-9a9d-8b912eced2fc_model.json b/adversarialvertexmixuptowardbetteradversariallyrobustgeneralization/72c0a83a-43ef-424a-9a9d-8b912eced2fc_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..99639437d41bb7b29343f1ad074158afd2955ec3
--- /dev/null
+++ b/adversarialvertexmixuptowardbetteradversariallyrobustgeneralization/72c0a83a-43ef-424a-9a9d-8b912eced2fc_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4992bbf4b7707b5646a66a0b3de74517fe327aa349778bd1f6c5ea0287aca4b0
+size 99208
diff --git a/adversarialvertexmixuptowardbetteradversariallyrobustgeneralization/72c0a83a-43ef-424a-9a9d-8b912eced2fc_origin.pdf b/adversarialvertexmixuptowardbetteradversariallyrobustgeneralization/72c0a83a-43ef-424a-9a9d-8b912eced2fc_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..48b155d0a85ae7a6121ee3cb5d8f60a34290617f
--- /dev/null
+++ b/adversarialvertexmixuptowardbetteradversariallyrobustgeneralization/72c0a83a-43ef-424a-9a9d-8b912eced2fc_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:907214ecc1f239b7c20b23e893619263492bdd1c48bc40077330171a5be4e208
+size 235895
diff --git a/adversarialvertexmixuptowardbetteradversariallyrobustgeneralization/full.md b/adversarialvertexmixuptowardbetteradversariallyrobustgeneralization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..eea1dffe53a14ea42b266c0ae3f5786f555664da
--- /dev/null
+++ b/adversarialvertexmixuptowardbetteradversariallyrobustgeneralization/full.md
@@ -0,0 +1,371 @@
+# Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization
+
+Saehyung Lee, Hyungyu Lee, Sungroh Yoon*
+Electrical and Computer Engineering, ASRI, INMC, and Institute of Engineering Research, Seoul National University, Seoul 08826, South Korea
+
+{halo8218, rucy74, sryoon}@snu.ac.kr
+
+# Abstract
+
+Adversarial examples cause neural networks to produce incorrect outputs with high confidence. Although adversarial training is one of the most effective forms of defense against adversarial examples, unfortunately, a large gap exists between test accuracy and training accuracy in adversarial training. In this paper, we identify Adversarial Feature Overfitting (AFO), which may cause poor adversarially robust generalization, and we show that adversarial training can overshoot the optimal point in terms of robust generalization, leading to AFO in our simple Gaussian model. Considering these theoretical results, we present soft labeling as a solution to the AFO problem. Furthermore, we propose Adversarial Vertex mixup (AVmixup), a soft-labeled data augmentation approach for improving adversarially robust generalization. We complement our theoretical analysis with experiments on CIFAR10, CIFAR100, SVHN, and Tiny ImageNet, and show that AVmixup significantly improves the robust generalization performance and that it reduces the trade-off between standard accuracy and adversarial robustness.
+
+# 1. Introduction
+
+Deep neural networks (DNNs) have produced impressive results for various machine learning tasks, including computer vision [15] and natural language processing [10]. Neural networks, however, can be easily fooled by small adversarial perturbations of their input with a high degree of confidence [34]. This vulnerability of DNNs has led to the proposal of several methods to defend against adversarial attacks [27, 21, 30, 41]. Despite these attempts, many of these defenses have been defeated by strong adversarial attacks [16, 18, 3], or were eventually found to rely on obfuscated gradients [1].
+
+Adversarial training [18] is one of the most effective adversarial defense methods, which substitutes adversarial examples for the training samples. Given a dataset $D = \{(\pmb{x}_i, y_i)\}_{i=1}^n$ with $\pmb{x}_i \in \mathbb{R}^d$ as an example in the $d$-dimensional input space and $y_i$ as its associated label, the goal of adversarial training is to train models by using adversarial empirical risk minimization [18]:
+
+$$
+\min _ {\theta} \underset {(\boldsymbol {x}, y) \sim D} {\mathbb {E}} \left[ \max _ {\delta \in S} \mathcal {L} (\boldsymbol {x} + \boldsymbol {\delta}, y; \theta) \right]. \tag {1}
+$$
+
+Here, $\mathcal{L}(\pmb{x} + \delta, y; \theta)$ is the loss function on adversarial examples, and $S$ represents the set of perturbations an adversary can apply to deceive the model, which is normally the set of $\ell_p$ -bounded perturbations.
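+
+The following is a minimal PyTorch-style sketch of one step of this min-max procedure (not the authors' code; the function name and hyperparameter values are illustrative), where the inner maximization over $S$ is approximated by a few steps of projected gradient ascent inside an $\ell_{\infty}$ ball:
+
+```python
+import torch
+import torch.nn.functional as F
+
+def adversarial_training_step(model, optimizer, x, y,
+                              eps=8 / 255, step_size=2 / 255, num_steps=10):
+    # eps corresponds to a budget of 8/255 for inputs scaled to [0, 1];
+    # clipping x + delta to the valid input range is omitted for brevity.
+    # Inner maximization: approximate max_{||delta||_inf <= eps} L(x + delta, y; theta).
+    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
+    for _ in range(num_steps):
+        loss = F.cross_entropy(model(x + delta), y)
+        grad, = torch.autograd.grad(loss, delta)
+        delta = (delta + step_size * grad.sign()).clamp(-eps, eps)
+        delta = delta.detach().requires_grad_(True)
+    # Outer minimization: an ordinary gradient step on the adversarial examples.
+    optimizer.zero_grad()
+    F.cross_entropy(model(x + delta.detach()), y).backward()
+    optimizer.step()
+```
+
+Here `model`, `optimizer`, and the mini-batch `(x, y)` are assumed to be set up elsewhere; the loop mirrors Eq. (1) with $S$ taken to be the $\ell_{\infty}$ ball of radius `eps`.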
+
+Many studies of the properties of these adversarial perturbations have been reported. Gilmer et al. [6] noted that the phenomenon of adversarial examples appears because most high dimensional data points in the data distribution are very close to the points that could be adversarial examples. Schmidt et al. [31] proved that robust training requires significantly larger sample complexity than that of standard training, postulating that the difficulty of robust training originates from the large sample complexity. Tsipras et al. [35] showed that a trade-off may exist between adversarial robustness and standard accuracy. They argued that the features learned during adversarial training differ from those learned during standard training, and attributed the trade-off to this difference.
+
+Recently, Ilyas et al. [12] demonstrated that the features used to train deep learning models can be divided into adversarially robust features and non-robust features, and the problem of adversarial examples may arise from these non-robust features. Then, if adversarial examples are features, rather than bugs, it is natural to wonder: Could we take into account the generalization between "adversarial features" in our adversarial training? If so, is the large gap between test accuracy and training accuracy under adversarial perturbations during adversarial training caused by the failure of adversarial feature generalization?
+
+Motivated by these questions, we present a theoretical model which demonstrates how the robust generalization performance changes during adversarial training. Specifically, we identify a generalization problem of adversarial training and show that our proposed method can alleviate this problem. In summary, our paper makes the following contributions:
+
+- We present a theoretical analysis which demonstrates the extent to which the change in the variance of the feature representations affects the robust generalization.
+- We uncover Adversarial Feature Overfitting (AFO), the phenomenon of the model overfitting to the adversarial features during adversarial training, which leads to poor robust generalization.
+- We propose Adversarial Vertex mixup (AVmixup), a soft-labeled data augmentation approach for adversarial training in a collaborative fashion.
+- We analyze our proposed method with the results of experiments on CIFAR10, CIFAR100, SVHN, and Tiny ImageNet, and show that AVmixup substantially increases the effectiveness of state-of-the-art adversarial training methods.
+
+# 2. Background
+
+# 2.1. Adversarially Robust Generalization
+
+Schmidt et al. [31] showed that the sample complexity for robust generalization can be much larger than the sample complexity for standard generalization by constructing a toy example as follows:
+
+Example 1. (Schmidt et al.) Let $\theta^{\star} \in \mathbb{R}^{d}$ be the per-class mean vector and let $\sigma > 0$ be the variance parameter. Then the $(\theta^{\star}, \sigma)$ -Gaussian model is defined by the following distribution over $(x, y) \in \mathbb{R}^{d} \times \{\pm 1\}$
+
+$$
+y \stackrel{u.a.r.}{\sim} \{-1, +1\}, \quad x \stackrel{i.i.d.}{\sim} \mathcal{N}(y \cdot \theta^{\star}, \sigma^{2} I). \tag{2}
+$$
+
+Here, the difficulty of the binary classification task is controlled by adjusting the variance parameter $\sigma$, which determines the amount of overlap between the two classes.
+
+To characterize robust generalization, the standard and robust classification errors are defined as follows (Schmidt et al.):
+
+Definition 1. Let $Q:\mathbb{R}^d\times \{\pm 1\} \to \mathbb{R}$ be a distribution. Then the standard classification error $\beta$ of a classifier $f:\mathbb{R}^d\rightarrow \{\pm 1\}$ is defined as $\beta = \mathbb{P}_{(\boldsymbol {x},y)\sim Q}[f(\boldsymbol {x})\neq y]$.
+
+Definition 2. Let $Q: \mathbb{R}^d \times \{\pm 1\} \to \mathbb{R}$ be a distribution and let $S \subseteq \mathbb{R}^d$ be a perturbation set that the adversary could apply to fool the model. Then the $S$ -robust classification error $\beta$ of a classifier $f: \mathbb{R}^d \to \{\pm 1\}$ is defined as $\beta = \mathbb{P}_{(\boldsymbol{x},y) \sim Q}[\exists \boldsymbol{\delta} \in S : f(\boldsymbol{x} + \boldsymbol{\delta}) \neq y]$.
+
+Hence, the $\ell_p^\epsilon$ -robustness is defined as robustness with respect to the perturbation set $S = \{\delta \in \mathbb{R}^d \mid \| \delta \|_p \leq \epsilon\}$. In our work, we focus on $\ell_{\infty}$ -bounded perturbations, because this is the most common type in the context of adversarial perturbations [18, 16, 41, 40].
+
+To calculate the sample complexities for robust and standard generalization, Schmidt et al. [31] used the following linear classifier model:
+
+Definition 3. (Schmidt et al.) Let $(\pmb{x}_1, y_1), \dots, (\pmb{x}_n, y_n) \in \mathbb{R}^d \times \{\pm 1\}$ be drawn i.i.d. from a $(\theta^\star, \sigma)$ -Gaussian model with $\| \theta^\star \|_2 = \sqrt{d}$ . Let the weight vector $\pmb{w} \in \mathbb{R}^d$ be the unit vector in the direction of $\bar{z} = \frac{1}{n} \sum_{i=1}^{n} y_i \pmb{x}_i$ . Then the linear classifier $f_{n,\sigma}$ is defined as
+
+$$
+f_{n, \sigma}(\boldsymbol{x}) = \operatorname{sgn}\left(\boldsymbol{w}^{\top} \boldsymbol{x}\right). \tag{3}
+$$
+
+It was shown that the linear classifier can achieve satisfactory generalization performance even with a single sample when the variance of the data distribution is small. The upper $\ell_{\infty}$ -bound on adversarial perturbations was also derived for a certain $\ell_{\infty}^{\epsilon}$ -robust classification error under the same conditions as standard classification.
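+
+As a small numerical illustration of Example 1 and Definition 3 (a sketch only; the dimension, sample size, and variance below are arbitrary choices, not values from the paper):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+d, n, sigma = 100, 10, 1.0
+
+# (theta*, sigma)-Gaussian model of Example 1, with ||theta*||_2 = sqrt(d).
+theta_star = np.ones(d)
+y = rng.choice([-1, 1], size=n)
+x = y[:, None] * theta_star + sigma * rng.standard_normal((n, d))
+
+# Definition 3: w is the unit vector in the direction of z_bar = (1/n) sum_i y_i x_i.
+z_bar = (y[:, None] * x).mean(axis=0)
+w = z_bar / np.linalg.norm(z_bar)
+
+# Monte-Carlo estimate of the standard classification error (Definition 1).
+y_test = rng.choice([-1, 1], size=100_000)
+x_test = y_test[:, None] * theta_star + sigma * rng.standard_normal((100_000, d))
+print("standard error:", np.mean(np.sign(x_test @ w) != y_test))
+```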
+
+# 2.2. Robust and Non-robust Features
+
+Recent studies [35, 12] considered the adversarial robustness in the existence of a distinction between robust features and non-robust features. They noted that adversarial examples can arise from the non-robust features of input data which are useful for standard classification but have an adverse effect on robust classification [12]. They provided evidence to support the hypothesis by showing that non-robust features alone are sufficient for standard classification but not for robust classification. They also demonstrated that standard training on the set of robust features yields a fairly small robust classification error.
+
+Tsipras et al. [35] indicated the existence of a provable trade-off between standard accuracy and adversarial robustness. They theoretically showed the possibility that adversarial robustness is incompatible with standard accuracy in a simple setting using a Gaussian model. In addition, they emphasized that adversarial training may reduce the contribution of non-robust features to zero with the following lemma:
+
+Lemma 1. (Tsipras et al.) Minimizing the adversarial empirical risk results in a classifier that assigns 0 weight to non-robust features.
+
+# 2.3. Soft Labeling
+
+Szegedy et al. [33] proposed label-smoothing as a mechanism to regularize the classifier. They argued that maximizing the log-likelihood of the correct label may result in overfitting, and label-smoothing can alleviate the overfitting problem.
+
+Zhang et al. [39] introduced a novel data augmentation method named Mixup. Mixup constructs virtual training examples as follows:
+
+$$
+\tilde {x} = \alpha x _ {i} + (1 - \alpha) x _ {j}, \quad \tilde {y} = \alpha y _ {i} + (1 - \alpha) y _ {j}. \tag {4}
+$$
+
+$(x_{i},y_{i})$ and $(x_{j},y_{j})$ are two examples drawn at random from the training data, and $\alpha \in [0,1]$ . They showed that Mixup improves generalization on various tasks.
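+
+A minimal sketch of Eq. (4) follows (illustrative only; in the original Mixup implementation the mixing coefficient is drawn from a Beta distribution, and the labels are one-hot vectors so that their convex combination is well defined):
+
+```python
+import torch
+
+def mixup(x_i, y_i, x_j, y_j, alpha):
+    # Eq. (4): convex combination of two training examples and of their labels.
+    x_tilde = alpha * x_i + (1 - alpha) * x_j
+    y_tilde = alpha * y_i + (1 - alpha) * y_j
+    return x_tilde, y_tilde
+```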
+
+# 3. Methods
+
+# 3.1. Theoretical Motivation
+
+In this section, we theoretically analyze the statistical aspects of robust generalization. First, a simple Gaussian data model is used to demonstrate the need to minimize feature representation variance for robust generalization. It is then shown that the optimal model parameter in terms of robust generalization differs from the model parameter which minimizes the adversarial empirical risk using data which consist of robust and non-robust features. Ultimately, we provide evidence that most deep neural networks are not free from AFO by showing that even in our simple Gaussian data model, the robust generalization performance is degraded as the model is overly trained on adversarial examples.
+
+Based on Example 1 and the linear classifier defined in Definition 3, we prove the following theorem:
+
+Theorem 1. For the variance parameters $\sigma_r$ and $\sigma_s$ (subscript $r$ for robust and $s$ for standard), let $\sigma_r = \nu \sigma_s$ where $\nu \in [0,1]$ . Then, the upper bound on the standard classification error of $f_{n,\sigma_s}$ and the upper bound on the $\ell_{\infty}^{\epsilon}$ -robust classification error of $f_{n,\sigma_r}$ are equal with probability at least $\left(1 - 2\exp \left(-\frac{d}{8(\sigma_s^2 + 1)}\right)\right) \cdot \left(1 - 2\exp \left(-\frac{d}{8(\sigma_r^2 + 1)}\right)\right)$ if
+
+$$
+\epsilon \leq \frac {(2 \sqrt {n} - 1) (1 - \nu)}{2 \sqrt {n} + 4 \sigma_ {s}}. \tag {5}
+$$
+
+(All the proofs of the theorems and corollaries in our work can be found in the supplementary material.) We can see that the theorem is consistent with our intuition. For example, when $\nu = 1$ , i.e., when both variances are equal, the probability that the robust generalization ability for $\epsilon > 0$ is the same as the standard generalization ability effectively becomes zero. Thus, to ensure that our model shows robust generalization at the same level as standard generalization, a smaller variance of feature representations is required than that of standard learning.
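+
+As a quick arithmetic check of the $\nu = 1$ case discussed above, substituting $\nu = 1$ into (5) gives
+
+$$
+\epsilon \leq \frac{(2\sqrt{n} - 1)(1 - 1)}{2\sqrt{n} + 4\sigma_s} = 0,
+$$
+
+so the two upper bounds are only guaranteed to coincide when no adversarial perturbation is allowed.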
+
+Corollary 1. For the variance parameters $\sigma_r$ and $\sigma_s$ , let $\sigma_r = \nu \sigma_s$ where $\nu \in [0,1]$ . Let the upper bound on the standard classification error of $f_{n,\sigma_s}$ and the upper bound on the $\ell_{\infty}^{\epsilon}$ -robust classification error of $f_{n,\sigma_r}$ be equal. Then, as $\sigma_r$ decreases, the upper bound of $\epsilon$ increases in proportion to $\pi_{n,\sigma_s}$ , which is given by
+
+$$
+\pi_ {n, \sigma_ {s}} = \frac {2 \sqrt {n} - 1}{\sigma_ {s} (2 \sqrt {n} + 4 \sigma_ {s})}. \tag {6}
+$$
+
+Hence, the smaller the variance of the feature representations, the better the robust generalization performance of the model.
+
+Next, we show the change in the variance of feature representations as we train the model to minimize the adversarial empirical risk. Specifically, we utilize the concept of robust and non-robust features, and show the way in which adversarial training results in AFO in a model similar to that used before [35].
+
+Example 2. Let $0 < \sigma_{A} \ll \sigma_{B}$ . Then, the distribution $\Psi_{\text{true}}$ is defined by the following distribution over $(x,y) \in \mathbb{R}^{d+1} \times \{\pm 1\}$ :
+
+$$
+y \stackrel{u.a.r.}{\sim} \{-1, +1\} \quad \text{and} \quad x_1 \sim \mathcal{N}\left(y, \sigma_A^{2}\right), \quad x_2, \dots, x_{d+1} \stackrel{i.i.d.}{\sim} \mathcal{N}\left(\eta y, \sigma_B^{2}\right). \tag{7}
+$$
+
+Here, $x_{1}$ is a robust feature that is strongly correlated with the label, and the other features $x_{2},\ldots ,x_{d + 1}$ are non-robust features that are weakly correlated with the label. Here, $\eta < 1$ is a non-negative constant, which is small but sufficiently large such that a simple classifier attains a small standard classification error.
+
+The difficulty associated with robust learning is that a significantly large sample complexity is required [31]. Given this postulation, we extend Example 2 to Example 3 with the following assumption:
+
+Assumption 1. Assume the number of non-robust features in our data is $N$ . Then, because of the lack of data samples in robust learning, $M$ features out of $N$ non-robust features form a sample distribution which is far from the true distribution.
+
+In Assumption 1, we refer to the $M$ non-robust features as "insufficient" non-robust features. Conversely, the other non-robust features are referred to as "sufficient" non-robust features.
+
+Example 3. Let $0 < c < d$ . Then the sample distribution $\Psi_{\text{sample},c}$ which is formed by the sampled input-label pairs $(x,y)$ $\stackrel{i.i.d.}{\sim}\Psi_{\text{true}}$ is defined as follows:
+
+$$
+\begin{array}{l} y \stackrel {u. a. r.} {\sim} \{- 1, + 1 \}, x _ {1} \sim \mathcal {N} (y, \sigma_ {A} ^ {2}), \\ x _ {2}, \dots , x _ {c + 1} \stackrel {i. i. d.} {\sim} \mathcal {N} (y, \sigma_ {A} ^ {2}), \tag {8} \\ x _ {c + 2}, \ldots , x _ {d + 1} \stackrel {i. i. d.} {\sim} \mathcal {N} (\eta y, \sigma_ {B} ^ {2}). \\ \end{array}
+$$
+
+In Example 3, our data has a true distribution as in Example 2. However, the Gaussian distribution is changed for the insufficient non-robust features $x_{2}, \ldots, x_{c+1}$ in our sampled data according to Assumption 1. For simplicity, in this example, we suppose that the insufficient non-robust features form the same sample distribution as that of the robust features.
+
+We show the variance of feature representations during adversarial training on $\Psi_{\text{sample},c}$ by using the following linear classifier:
+
+Definition 4. Let $Z$ be a function set. Let $w$ be the weight vector of the classifier. Let $\zeta_f$ be the objective function of the linear classifier $f_w$ . Then our linear classifier $f_Z$ is defined as
+
+$$
+f _ {Z} \in \left\{f _ {\boldsymbol {w}} \mid \zeta_ {f} \in Z, \boldsymbol {w} \in \mathbb {R} _ {+} ^ {d + 1}, \| \boldsymbol {w} \| _ {1} = 1 \right\}. \tag {9}
+$$
+
+In our model, it is reasonable to investigate the variance of $\boldsymbol{w}^{\top}\boldsymbol{x}$ to show the extent to which adversarial training affects robust generalization. Based on Example 3 and the linear classifier defined in Definition 4, we can prove the following theorem:
+
+Theorem 2. Let $\boldsymbol{w}_B \in \mathbb{R}^{d - c}$ be the weight vector for the sufficient non-robust features of $\Psi_{\text{sample},c}$ . Let $Z_{sc}$ be a set of strictly convex functions. Then, when the classifier $f_{Z_{sc}}$ is trained on $\Psi_{\text{sample},c}$ , the $\boldsymbol{w}_B^\star$ which minimizes the variance of $\boldsymbol{w}^\top \boldsymbol{x}$ with respect to $\Psi_{\text{sample},c}$ is
+
+$$
+\boldsymbol {w} _ {B} ^ {\star} = \vec {0}. \tag {10}
+$$
+
+This result is consistent with that of [35], which presumed that the number of samples for all the non-robust features is sufficiently large. However, we have a limited number of samples for the non-robust features in Example 3, and this may cause the result to differ from that of Theorem 2. Therefore, we need to find $\boldsymbol{w}_B^\star$ with respect to the true distribution $\Psi_{\text{true}}$ in order to characterize the robust generalization ability of our model.
+
+Theorem 3. Let $\boldsymbol{w}_B \in \mathbb{R}^{d - c}$ be the weight vector for the sufficient non-robust features of $\Psi_{\text{sample},c}$ . Let $Z_{sc}$ be a set of strictly convex functions. Then, when the classifier $f_{Z_{sc}}$ is trained on $\Psi_{\text{sample},c}$ , the $\boldsymbol{w}_B^\star$ that minimizes the variance of $\boldsymbol{w}^\top \boldsymbol{x}$ with respect to $\Psi_{\text{true}}$ is
+
+$$
+\boldsymbol {w} _ {B} ^ {\star} = \frac {c}{c d + 2 c + 1} \cdot \vec {1}. \tag {11}
+$$
+
+For simplicity, we assume in Theorem 3 that the classifier assigns the same weight value to features with the same distribution, and that the limited feasible set does not change the optimal weight of the classifier. As a result, we can predict the robust generalization performance of the classifier by observing $\boldsymbol{w}_B$ in the robust learning procedure. Note that Lemma 1 also applies to our classifier. Therefore, if our sampled data have insufficient non-robust features, $\boldsymbol{w}_B$ approaches $\vec{0}$ during adversarial training, even though the optimal $\boldsymbol{w}_B^\star$ is not $\vec{0}$ in terms of robust generalization. We refer to this phenomenon as Adversarial Feature Overfitting (AFO).
+
+AFO is caused by the relation between the weight values for the features in our data. In this regard, most deep neural networks involve intertwined features, suggesting that they are also adversely affected by the problem we point out in the example.
+
+
+Figure 1: Adversarial vertex. The adversarial vertex is located in the same direction as the adversarial example but $\gamma$ times farther away.
+
+# 3.2. Adversarial Vertex Mixup
+
+AFO arises when the model is overly optimized only for the sufficient non-robust features while the training data have many types of insufficient non-robust features. From this point of view, we can think of several methods to address AFO. First, the diversity of the algorithm that constructs adversarial examples during training could be increased. This may be a fundamental solution to overcome the poor robust generalization caused by the large sample complexity. Second, when the large sample complexity of robust learning cannot be satisfied, label-smoothing can directly regularize the overfitting problem as in [33]. Essentially, soft labeling can be employed to prevent the weights for the sufficient non-robust features from becoming zero. In this paper, we present a method to improve the robust generalization using soft labeling.
+
+Several algorithms that use soft-labeled data to improve the generalization performance have been proposed [33, 39, 28]. Among them, Mixup [39] trains a model by utilizing linear interpolation between training data. This method can be seen as a variant of the label-smoothing method, because it linearly interpolates both input vectors and their labels. Inspired by Mixup, we propose Adversarial Vertex mixup (AVmixup), which is a soft-labeled data augmentation method designed to improve robust generalization.
+
+AVmixup, similar to Mixup, also extends the training distribution by using linear interpolation. Unlike Mixup, however, for each raw input vector, AVmixup defines a virtual vector in the adversarial direction and extends the training distribution via linear interpolation of the virtual vector and the raw input vector. We refer to the virtual vector as an adversarial vertex (see Figure 1). Formally, the adversarial vertex is defined as follows:
+
+Definition 5. Let $\delta_{\pmb{x}} \in \mathbb{R}^{d}$ be the adversarial perturbation for the raw input vector $\pmb{x} \in \mathbb{R}^{d}$ . Then, for a scaling factor $\gamma \geq 1$ , adversarial vertex $\pmb{x}_{av}$ is defined as
+
+$$
+\boldsymbol {x} _ {a v} = \boldsymbol {x} + \gamma \cdot \delta_ {\boldsymbol {x}}. \tag {12}
+$$
+
+Figure 1 shows how the adversarial vertex is found. After we obtain the adversarial vertex, AVmixup constructs virtual training examples as follows:
+
+Definition 6. Let $(\pmb{x},\pmb{y})$ be the raw input-label pair. Let $\phi$ be a label-smoothing function. Then, for the real value $\alpha$ sampled from a uniform distribution $\mathcal{U}(0,1)$ and the label-smoothing parameters $\lambda_1\in \mathbb{R}$ and $\lambda_{2}\in \mathbb{R}$ , the virtual input vector $\hat{\pmb{x}}\in \mathbb{R}^d$ and its associated label $\hat{\pmb{y}}\in \mathbb{R}^{k}$ are constructed by
+
+$$
+\begin{array}{l} \hat {\boldsymbol {x}} = \alpha \boldsymbol {x} + (1 - \alpha) \boldsymbol {x} _ {a v}, \\ \hat {\boldsymbol {y}} = \alpha \phi (\boldsymbol {y}, \lambda_ {1}) + (1 - \alpha) \phi (\boldsymbol {y}, \lambda_ {2}). \tag {13} \\ \end{array}
+$$
+
+For the label-smoothing function $\phi$ , we use an existing label-smoothing method [33]. Specifically, in the case of $k$ classes, the algorithm assigns $\lambda \in (0,1)$ to the true class and equally distributes $\frac{1 - \lambda}{k - 1}$ to the other classes.
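+
+A minimal sketch of this label-smoothing function follows (illustrative names; `y` is assumed to be a batch of integer class indices):
+
+```python
+import torch
+
+def phi(y, lam, num_classes):
+    # Assign lam to the true class and distribute (1 - lam) / (k - 1)
+    # equally over the remaining k - 1 classes.
+    smooth = torch.full((y.size(0), num_classes),
+                        (1.0 - lam) / (num_classes - 1), device=y.device)
+    smooth.scatter_(1, y.unsqueeze(1), lam)
+    return smooth
+```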
+
+In summary, the overall procedure of adversarial training with AVmixup is described in Algorithm 1.
+
+Algorithm 1 Adversarial Training with AVmixup
+Require: Dataset $D$ , batch size $n$ , training epochs $T$ , learning rate $\tau$ , scaling factor $\gamma$ , label-smoothing factors $\lambda_1,\lambda_2$
+Require: Label-smoothing function $\phi$
+Require: Adversarial perturbation function $\mathcal{G}$
+1: for $t = 1$ to $T$ do
+2: for mini-batch $\{\pmb {x}_i,\pmb {y}_i\}_{i = 1}^n\sim D$ do
+3: $\delta_{i}\gets \mathcal{G}(\pmb {x}_{i},\pmb {y}_{i};\pmb {\theta})$
+4: AVmixup:
+5: $\bar{\pmb{x}}_i\gets \pmb {x}_i + \gamma \cdot \pmb {\delta}_i$;  $\alpha_{i}\sim \mathcal{U}(0,1)$
+6: $\hat{\pmb{x}}_i\gets \alpha_i\pmb {x}_i + (1 - \alpha_i)\bar{\pmb{x}}_i$
+7: $\hat{\pmb{y}}_i\gets \alpha_i\phi (\pmb {y}_i,\lambda_1) + (1 - \alpha_i)\phi (\pmb {y}_i,\lambda_2)$
+8: model update:
+9: $\pmb {\theta}\gets \pmb {\theta} - \tau \cdot \frac{1}{n}\sum_{i = 1}^{n}\nabla_{\theta}\mathcal{L}(\hat{\pmb{x}}_i,\hat{\pmb{y}}_i;\pmb {\theta})$
+10: end for
+11: end for
+12: Output: robust model parameter $\theta$
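+
+A compact PyTorch-style sketch of one mini-batch update of Algorithm 1 follows (illustrative, not the authors' implementation). It assumes an adversarial-perturbation function `G` (e.g., a PGD attack), the label-smoothing function `phi` sketched above, image-shaped inputs, and the CIFAR10 hyperparameters $\gamma = 2.0$, $\lambda_1 = 1.0$, $\lambda_2 = 0.1$ reported in Section 5.2:
+
+```python
+import torch
+import torch.nn.functional as F
+
+def avmixup_step(model, optimizer, x, y, G, phi, num_classes,
+                 gamma=2.0, lam1=1.0, lam2=0.1):
+    delta = G(model, x, y).detach()                 # line 3: adversarial perturbation
+    x_av = x + gamma * delta                        # line 5: adversarial vertex
+    alpha = torch.rand(x.size(0), device=x.device)  # line 5: alpha_i ~ U(0, 1)
+    a_x = alpha.view(-1, 1, 1, 1)                   # broadcast over (C, H, W)
+    x_hat = a_x * x + (1 - a_x) * x_av              # line 6: virtual input
+    a_y = alpha.view(-1, 1)
+    y_hat = (a_y * phi(y, lam1, num_classes)
+             + (1 - a_y) * phi(y, lam2, num_classes))  # line 7: soft label
+    optimizer.zero_grad()
+    log_p = F.log_softmax(model(x_hat), dim=1)
+    loss = -(y_hat * log_p).sum(dim=1).mean()       # cross-entropy with soft labels
+    loss.backward()                                 # line 9: model update
+    optimizer.step()
+    return loss.item()
+```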
+
+# 4. Related Work
+
+# 4.1. Adversarial Attack Methods
+
+Adversarial attacks confuse the trained deep neural networks with adversarial examples. The Fast Gradient Sign Method (FGSM) [7] is an efficient one-step attack method.
+
+Projected Gradient Descent (PGD) [18] constructs adversarial examples by applying a multi-step variant of FGSM. The Carlini & Wagner (CW) attack [3] uses a specific objective function to create adversarial examples under various conditions. Apart from these attacks, many adversarial attacks exist in white-box settings [26, 23, 16, 22]. In black-box settings, adversarial attacks are conducted using substitute models, according to which adversarial examples are generated from the substitute models [25]. Additionally, black-box attacks which only rely on the prediction score or the decision of the model have been proposed [4, 2, 32, 11, 8].
+
+# 4.2. Adversarial Defense Methods
+
+Various adversarial defense methods have been employed to make DNNs robust to adversarial attacks. Adversarial training [7] uses adversarial examples as training data to train the robust network. Many approaches [13, 21, 29, 41, 40] improve the model robustness through regularizers or variants of adversarial training. Various techniques [20, 19, 37, 30, 36] defend against adversarial attacks by denoising adversarial perturbations from the input data or by detecting adversarial examples among the inputs. We cover further related works in the supplementary material.
+
+# 5. Experimental Results And Discussion
+
+In this section, we show with extensive experiments across several benchmark datasets, including CIFAR10 [14], CIFAR100 [14], SVHN [24] and Tiny Imagenet [5], that label-smoothing [33] and AVmixup improve the robust generalization. In particular, we note that combining AVmixup with the state-of-the-art adversarial defense method [40] enables us to significantly outperform existing defense methods. A description of the datasets used in the experiments is summarized in the supplementary material.
+
+# 5.1. Implementation Details
+
+We use WRN-34-10 [38] for the experiments on CIFAR, WRN-16-8 [38] for the experiments on SVHN, and PreActResNet18 [9] for the experiments on Tiny Imagenet. We run 80k training steps on CIFAR and SVHN and 50k training steps on Tiny Imagenet. The initial learning rate is set to 0.1 for CIFAR and Tiny Imagenet, and to 0.01 for SVHN. The learning rate is decayed by a factor of 0.1 at $50\%$ and $75\%$ of the total training steps, and the weight decay factor is set to $2\times 10^{-4}$. We use the same adversarial perturbation budget $\epsilon = 8$ as in [18]. To evaluate adversarial defense methods, we apply several adversarial attacks including FGSM [7], PGD [18], CW [3] (PGD approach with CW loss) and a transfer-based black-box attack [25]. We mainly compare the following settings in our experiments:
+
+1. Standard: The model which is trained with the original dataset.
+
+Table 1: Comparison of the accuracy of our proposed approach AVmixup with that of PGD [18] and LSλ ( $\lambda \in \{0.8, 0.9\}$ ) [33] against white-box attacks on CIFAR10.
+
+| Model | Clean | FGSM | PGD10 | PGD20 | CW20 |
+| --- | --- | --- | --- | --- | --- |
+| Standard | 95.48 | 7.25 | 0.0 | 0.0 | 0.0 |
+| PGD | 86.88 | 62.68 | 47.69 | 46.34 | 47.35 |
+| LS0.8 | 87.28 | 66.09 | 53.49 | 50.87 | 50.60 |
+| LS0.9 | 87.64 | 65.96 | 52.82 | 50.29 | 50.30 |
+| AVmixup | 93.24 | 78.25 | 62.67 | 58.23 | 53.63 |
+
+
+Figure 2: CIFAR10 accuracy curves. The robustness of the PGD model (blue line) overfits after 40k steps. The AVmixup model, on the other hand, shows a steady increase in robustness (red line).
+
+2. PGD: The model trained using adversarial examples from PGD [18] with step size $= 2$ and $10$ iterative steps.
+3. LS $\lambda$: On top of the PGD-based approach [18], we apply the label-smoothing method [33] with label-smoothing factor $\lambda$.
+4. AVmixup: We apply our proposed method on top of the PGD-based approach [18].
+
+Note that PGD and CW attacks with $T$ iterative steps are denoted as PGDT and CWT, respectively, and the original test set is denoted as Clean.
+
+# 5.2. CIFAR10
+
+Because the CIFAR10 dataset is the most commonly used dataset for adversarial robustness studies [18, 37, 41, 40], we analyze our method in both white-box and black-box settings, and compare our method to a state-of-the-art defense method, TRADES [41], on CIFAR10. We set the scaling factor $\gamma = 2.0$ and label-smoothing factors $\lambda_{1} = 1.0$ and $\lambda_{2} = 0.1$ in the following experiments.
+
+Empirical evidence for AFO We provide Figure 2 in support of our theoretical analysis and the effectiveness of AVmixup. In Figure 2, the validation accuracy curve against PGD10 of the PGD model shows that the model starts to overfit from about 40k steps, while the AVmixup model continues to improve.
+
+White-box setting We conduct white-box attacks on the models trained with baseline methods and our proposed method AVmixup. We set the step size $= 2.0$ for PGD and CW attacks. We first evaluate the models on Clean to compare the trade-off between accuracy and robustness of the models. Then, we evaluate the models against FGSM, PGD10, PGD20, and CW20. The results are summarized in Table 1.
+
+The results in Table 1 indicate that the models trained with soft labels are more accurate than the model trained with one-hot labels under all attacks as well as on clean data, which is consistent with our theoretical analysis. In particular, the accuracy on PGD20 of the AVmixup model is $11.89\%$ higher than that of the PGD model, with a decrease of only $2.24\%$ in accuracy on Clean compared to the Standard model.
+
+Black-box setting Athalye et al. [1] indicated that obfuscated gradients, a phenomenon that gives a false sense of robustness in adversarial defenses, can be identified in several ways. One such way is black-box attack evaluation.
+
+In black-box settings, we apply transfer-based black-box attacks to the models [25]. After constructing adversarial examples from each of the trained models, we apply these adversarial examples to the other models and evaluate the performances. The results are summarized in Table 2, and more results can be found in the supplementary material. The columns represent the attack models of the transfer-based black-box attacks, and the rows represent the defense models which are evaluated. The results in Table 2 indicate that the AVmixup model is the most robust against black-box attacks from all of the attack models with significant margins. We also observe that the model trained with AVmixup shows higher accuracy against black-box attacks than against white-box attacks. Thus, we confirm that our proposed method improves the adversarial defense performance as a result of an increase in the robustness of the model rather than with obfuscated gradients [1].
+
+Comparison We compare our method with a recently proposed defense method, TRADES [41], which uses a regularization-based adversarial training approach. TRADES requires approximately twice as much GPU memory as conventional adversarial training to calculate the additional regularization term, which processes both natural examples and adversarial examples simultaneously. In contrast, AVmixup hardly incurs additional cost and can be implemented with only a few lines of code. In this experiment, we implement AVmixup based on the official
+
+Table 2: Accuracy comparisons against transfer-based black-box attacks (PGD20).
+
+| Defense model | Standard | PGD | LS0.8 | LS0.9 |
+| --- | --- | --- | --- | --- |
+| PGD | 85.6 | - | 65.70 | 64.91 |
+| LS0.8 | 86.03 | 63.60 | - | 64.83 |
+| LS0.9 | 86.40 | 63.74 | 65.78 | - |
+| AVmixup | 89.53 | 68.51 | 71.48 | 70.50 |
+
+Table 3: Accuracy comparisons with TRADES [41].
+
+| Models | Clean | PGD20 |
+| --- | --- | --- |
+| PGD [41] | 87.3 | 47.04 |
+| TRADES (1/λ = 1) [41] | 88.64 | 49.14 |
+| TRADES (1/λ = 6) [41] | 84.92 | 56.61 |
+| AVmixup | 90.36 | 58.27 |
+
+PyTorch code of TRADES [41] and train the model with the same configurations as in [18]. The results are listed in Table 3, which shows that our proposed method has superior robustness with a smaller trade-off than TRADES.
+
+Discussion In Table 1, in contrast to FGSM, PGD10 and PGD20, the AVmixup model does not show a significant improvement against the CW20 attack, and this trend becomes more severe for challenging datasets, such as CIFAR100, that have a larger number of classes and a smaller number of training examples per class. We can infer that this property arises because AVmixup uses linear interpolation. In other words, algorithms that utilize virtual data constructed by linear interpolation between data points tightly generalize only the features observed in the training steps. We confirm this explanation with a simple experiment, the details and further discussion of which can be found in the supplementary material. This implies that while AVmixup shows a high level of robustness against the adversarial attacks used in adversarial training, it may not be able to withstand other types of attacks. Therefore, the diversity of adversarial examples generated during the adversarial training procedure is even more important for AVmixup. We thus report the results of AVmixup combined with the PGD-based approach by focusing on PGD-based attacks. The results using an algorithm that constructs diverse adversarial examples are discussed in Section 5.4.
+
+# 5.3. Other Datasets
+
+We also verify the effectiveness of our method on CIFAR100, SVHN and Tiny Imagenet. We specify the same hyperparameters for AVmixup as in the CIFAR10 experiments. The results from these experiments are provided in Table 4.
+
+Table 4: Comparisons of AVmixup on SVHN [24], CIFAR100 [14], and Tiny ImageNet [5].
+
+| Dataset | Model | Clean | FGSM | PGD20 |
+| --- | --- | --- | --- | --- |
+| CIFAR100 | PGD | 61.29 | 46.01 | 25.17 |
+| | LS0.8 | 62.1 | 52.33 | 28.81 |
+| | LS0.9 | 61.77 | 53.17 | 27.13 |
+| | AVmixup | 74.81 | 62.76 | 38.49 |
+| SVHN | PGD | 92.4 | 75.31 | 58.22 |
+| | LS0.8 | 92.15 | 75.84 | 59.75 |
+| | LS0.9 | 92.34 | 76.14 | 59.28 |
+| | AVmixup | 95.59 | 81.83 | 61.90 |
+| Tiny ImageNet | PGD | 41.67 | 20.30 | 13.14 |
+| | LS0.8 | 42.89 | 22.75 | 15.43 |
+| | LS0.9 | 41.71 | 20.96 | 14.03 |
+| | AVmixup | 54.27 | 35.46 | 20.31 |
+
+CIFAR100 Table 4 shows that the accuracy of AVmixup increases by $13.52\%$ p and $13.32\%$ p for Clean and PGD20, respectively, compared to the PGD model. The results of additional experiments on CIFAR100 can be found in the supplementary material.
+
+SVHN The SVHN image classification task is much easier than image classification tasks with more complex input images, such as CIFAR and Tiny Imagenet. As shown previously [31], poor robust generalization is less common for simple image datasets such as MNIST than for complex image datasets such as CIFAR. Thus, one would expect our proposed method, which is motivated by the robust generalization problem, to be less effective on the SVHN dataset than on other datasets, and this is indeed observed in Table 4. The accuracy of AVmixup improves by $3.19\%$ p and $3.68\%$ p compared to the PGD model for Clean and PGD20, respectively, which are small improvements compared to those observed on the other datasets tested.
+
+Tiny Imagenet Table 4 shows an improvement in accuracy of $12.6\%$ p and $7.17\%$ p compared to the PGD model for Clean and PGD20, respectively.
+
+# 5.4. When AVmixup Meets Diversity
+
+As discussed in Section 5.2, the diversity of adversarial examples during adversarial training is important to enable AVmixup to be effective against various adversarial attacks. In this sense, we utilize a recent method [40] (Feature Scatter) which promotes data diversity by taking the inter-sample relationships into consideration during adversarial training. We combine Feature Scatter with our method AVmixup,
+
+Table 5: Comparisons of AVmixup with the feature scattering-based approach [40]. For PGD, we quote the accuracy reported in [40]. For Feature Scatter, we reproduce and evaluate the model at the end of the training.
+
+| Dataset | Model | Clean | FGSM | PGD20 | PGD100 | CW20 | CW100 |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| CIFAR10 | PGD [40] | 85.7 | 54.9 | 44.9 | 44.8 | 45.7 | 45.4 |
+| | Feature Scatter | 90.22 | 78.19 | 69.74 | 67.35 | 60.77 | 58.29 |
+| | Feature Scatter + AVmixup | 92.37 | 83.49 | 82.31 | 81.88 | 71.88 | 69.50 |
+| CIFAR100 | PGD [40] | 59.9 | 28.5 | 22.6 | 22.3 | 23.2 | 23.0 |
+| | Feature Scatter | 74.9 | 72.99 | 45.29 | 42.77 | 27.35 | 24.89 |
+| | Feature Scatter + AVmixup | 78.62 | 78.92 | 47.28 | 46.29 | 33.20 | 31.22 |
+| SVHN | PGD [40] | 93.9 | 68.4 | 47.9 | 46.0 | 48.7 | 47.3 |
+| | Feature Scatter | 96.42 | 95.92 | 58.67 | 46.98 | 51.23 | 38.89 |
+| | Feature Scatter + AVmixup | 96.07 | 95.26 | 73.65 | 70.24 | 67.06 | 62.01 |
+
+Table 6: Sensitivity of the combination of AVmixup and Feature Scatter to label-smoothing factors $(\gamma = 1)$ on CIFAR10.
+
+| λ1/λ2 | Clean | FGSM | PGD20 | CW20 |
+| --- | --- | --- | --- | --- |
+| 0.1/0.5 | 91.94 | 80.09 | 74.43 | 62.87 |
+| 0.5/0.3 | 92.82 | 77.81 | 57.67 | 55.18 |
+| 0.5/0.7 | 92.37 | 83.49 | 82.31 | 71.88 |
+| 1.0/0.5 | 93.07 | 79.55 | 53.42 | 56.72 |
+
+and evaluate the performance of the model on CIFAR10, CIFAR100 and SVHN.
+
+We implement AVmixup on the PyTorch code of Feature Scatter released in [40], and hence use the same model architecture and configuration as in that work [40]. For CIFAR10 and SVHN, we set $(\gamma = 1.0, \lambda_1 = 0.5, \lambda_2 = 0.7)$. For CIFAR100, we set $(\gamma = 1.5, \lambda_1 = 0.3, \lambda_2 = 0.42)$. We evaluate the models at the end of the training. The results are summarized in Table 5.
+
+The joint application of AVmixup with Feature Scatter results in significantly higher accuracy than with Feature Scatter alone. Specifically, on CIFAR10, the combination shows powerful adversarial robustness of $82.31\%$ and $81.88\%$ for PGD20 and PGD100, respectively. Furthermore, our experiments on SVHN demonstrate state-of-the-art robustness against the PGD and CW attacks. Moreover, in contrast with the experimental results of the models trained with the PGD-based approach, the combination of AVmixup and Feature Scatter shows a significant improvement not only for PGD attacks but also for CW attacks.
+
+Note that the results on CIFAR100 differ from those on CIFAR10 or SVHN. The combination also provides state-of-the-art accuracy in all respects, but the increase in accuracy for PGD and CW is small compared to that for the other datasets. We can infer the reason for these results from Table 6, which indicates that the combination is sensitive to the label-smoothing factors. In this respect, as the number of classes in the dataset increases, the sensitivity of the combination to soft label values increases, which may destabilize the effect of AVmixup. In addition, we can see that the accuracy on FGSM is slightly higher than that on Clean. This is because of the property of AVmixup, not because of label leaking [17], since the feature scattering-based approach prevents label leaking. Further discussions of the results can be found in the supplementary material.
+
+# 6. Conclusion
+
+In this work, we identified AFO, the phenomenon that leads to poor robust generalization, and used both theoretical and empirical approaches to show the extent to which soft labeling can help improve robust generalization. We also introduced AVmixup, a soft-labeled data augmentation method, and demonstrated its outstanding performance through extensive experiments. Although AVmixup performs strongly in various experiments, it has the disadvantage of being sensitive to its hyperparameters. This forces the appropriate hyperparameters for AVmixup to be found by line search or exhaustive search, a task that can be time consuming when resources are limited. Therefore, in future research we aim to develop advanced algorithms by analyzing in detail the meaning and effects of linear interpolation in AVmixup.
+
+Acknowledgements: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (Ministry of Science and ICT) [2018R1A2B3001628], the Brain Korea 21 Plus Project in 2020, Samsung Research Funding & Incubation Center of Samsung Electronics under Project Number SRFC-IT1901-12, and AIR Lab (AI Research Lab) in Hyundai Motor Company through HMC-SNU AI Consortium Fund.
+
+# References
+
+[1] Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018. 1, 6
+[2] Wieland Brendel, Jonas Rauber, and Matthias Bethge. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. arXiv preprint arXiv:1712.04248, 2017. 5
+[3] Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39-57. IEEE, 2017. 1, 5
+[4] Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. Zoo: Zeroth order optimization based blackbox attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pages 15-26. ACM, 2017. 5
+[5] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. IEEE, 2009. https://tiny-imagenet.herokuapp.com/. 5, 7
+[6] Justin Gilmer, Luke Metz, Fartash Faghri, Samuel S Schoenholz, Maithra Raghu, Martin Wattenberg, and Ian Goodfellow. Adversarial spheres. arXiv preprint arXiv:1801.02774, 2018. 1
+[7] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014. 5
+[8] Chuan Guo, Jacob R Gardner, Yurong You, Andrew Gordon Wilson, and Kilian Q Weinberger. Simple black-box adversarial attacks. arXiv preprint arXiv:1905.07121, 2019. 5
+[9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, pages 630-645. Springer, 2016. 5
+[10] Geoffrey Hinton, Li Deng, Dong Yu, George Dahl, Abdelrahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Brian Kingsbury, et al. Deep neural networks for acoustic modeling in speech recognition. IEEE Signal processing magazine, 29, 2012. 1
+[11] Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. Black-box adversarial attacks with limited queries and information. arXiv preprint arXiv:1804.08598, 2018. 5
+[12] Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. arXiv preprint arXiv:1905.02175, 2019. 1, 2
+[13] Harini Kannan, Alexey Kurakin, and Ian Goodfellow. Adversarial logit pairing. arXiv preprint arXiv:1803.06373, 2018. 5
+[14] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Cite-seer, 2009. 5, 7
+[15] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097-1105, 2012. 1
+[16] Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016. 1, 2, 5
+[17] Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016. 8
+[18] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017. 1, 2, 5, 6, 7
+[19] Dongyu Meng and Hao Chen. Magnet: a two-pronged defense against adversarial examples. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pages 135-147. ACM, 2017. 5
+[20] Jan Hendrik Metzen, Tim Genewein, Volker Fischer, and Bastian Bischoff. On detecting adversarial perturbations. arXiv preprint arXiv:1702.04267, 2017. 5
+[21] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE transactions on pattern analysis and machine intelligence, 41(8):1979–1993, 2018. 1, 5
+[22] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1765-1773, 2017. 5
+[23] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2574–2582, 2016. 5
+[24] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011. 5, 7
+[25] Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security, pages 506-519. ACM, 2017. 5, 6
+[26] Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pages 372-387. IEEE, 2016. 5
+[27] Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, and Ananthram Swami. Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE Symposium on Security and Privacy (SP), pages 582-597. IEEE, 2016. 1
+[28] Gabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, and Geoffrey Hinton. Regularizing neural networks by penalizing confident output distributions. arXiv preprint arXiv:1701.06548, 2017. 4
+
+[29] Andrew Slavin Ross and Finale Doshi-Velez. Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients. In Thirty-second AAAI conference on artificial intelligence, 2018. 5
+[30] Pouya Samangouei, Maya Kabbab, and Rama Chellappa. Defense-gan: Protecting classifiers against adversarial attacks using generative models. arXiv preprint arXiv:1805.06605, 2018. 1, 5
+[31] Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, and Aleksander Madry. Adversarially robust generalization requires more data. In Advances in Neural Information Processing Systems, pages 5014-5026, 2018. 1, 2, 3, 7
+[32] Jiawei Su, Danilo Vasconcellos Vargas, and Kouichi Sakurai. One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation, 2019. 5
+[33] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818-2826, 2016. 2, 4, 5, 6
+[34] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013. 1
+[35] Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. arXiv preprint arXiv:1805.12152, 2018. 1, 2, 3, 4
+
+[36] Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan L Yuille, and Kaiming He. Feature denoising for improving adversarial robustness. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 501-509, 2019. 5
+[37] Weilin Xu, David Evans, and Yanjun Qi. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155, 2017. 5, 6
+[38] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016. 5
+[39] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017. 2, 4
+[40] Haichao Zhang and Jianyu Wang. Defense against adversarial attacks using feature scattering-based adversarial training. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 1829-1839. Curran Associates, Inc., 2019. https://github.com/Haichao-Zhang/FeatureScatter. 2, 5, 6, 7, 8
+[41] Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. Theoretically principled trade-off between robustness and accuracy. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 7472-7482, Long Beach, California, USA, 09-15 Jun 2019. PMLR. https://github.com/yaodongyu/TRADES. 1, 2, 5, 6, 7
\ No newline at end of file
diff --git a/adversarialvertexmixuptowardbetteradversariallyrobustgeneralization/images.zip b/adversarialvertexmixuptowardbetteradversariallyrobustgeneralization/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..48ba8114d78a844bd933b7986de02d7d5383f6b5
--- /dev/null
+++ b/adversarialvertexmixuptowardbetteradversariallyrobustgeneralization/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:970f34e462024a762a97b08debe92998c63e73ee78a0d6bccea17aa165043577
+size 317048
diff --git a/adversarialvertexmixuptowardbetteradversariallyrobustgeneralization/layout.json b/adversarialvertexmixuptowardbetteradversariallyrobustgeneralization/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..0641834ff1fef85b858f16e65a2a382b4b1cda49
--- /dev/null
+++ b/adversarialvertexmixuptowardbetteradversariallyrobustgeneralization/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:114b42618eb173c6e372b860464c76ec3ae4a246c32e7780ff35e0394bc891d3
+size 451798
diff --git a/advisablelearningforselfdrivingvehiclesbyinternalizingobservationtoactionrules/25fd00d4-ea80-491f-adc9-33c4ec2742b5_content_list.json b/advisablelearningforselfdrivingvehiclesbyinternalizingobservationtoactionrules/25fd00d4-ea80-491f-adc9-33c4ec2742b5_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..fcac20e4e875eb354121ba23f74ea88c7b3dc0e5
--- /dev/null
+++ b/advisablelearningforselfdrivingvehiclesbyinternalizingobservationtoactionrules/25fd00d4-ea80-491f-adc9-33c4ec2742b5_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cc9fa54be6cfcd3adc82ae67d771c99126c70e51d8ef44218d54dc30e9f63811
+size 67110
diff --git a/advisablelearningforselfdrivingvehiclesbyinternalizingobservationtoactionrules/25fd00d4-ea80-491f-adc9-33c4ec2742b5_model.json b/advisablelearningforselfdrivingvehiclesbyinternalizingobservationtoactionrules/25fd00d4-ea80-491f-adc9-33c4ec2742b5_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..7730ad1ef4eeb81cdaa026f0583ba0cc1236f574
--- /dev/null
+++ b/advisablelearningforselfdrivingvehiclesbyinternalizingobservationtoactionrules/25fd00d4-ea80-491f-adc9-33c4ec2742b5_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:24ff8ea29e7bdcd9dd2306e06ee65f0024f200c15128f23c53f2ce1003403d31
+size 83400
diff --git a/advisablelearningforselfdrivingvehiclesbyinternalizingobservationtoactionrules/25fd00d4-ea80-491f-adc9-33c4ec2742b5_origin.pdf b/advisablelearningforselfdrivingvehiclesbyinternalizingobservationtoactionrules/25fd00d4-ea80-491f-adc9-33c4ec2742b5_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d4f8459a10f6696b050a381f0bd1d165983f5b61
--- /dev/null
+++ b/advisablelearningforselfdrivingvehiclesbyinternalizingobservationtoactionrules/25fd00d4-ea80-491f-adc9-33c4ec2742b5_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c7308be235c38a6807e9a162bb1bd6b71d7b73b52e6301346af0c3cbcd555253
+size 8014556
diff --git a/advisablelearningforselfdrivingvehiclesbyinternalizingobservationtoactionrules/full.md b/advisablelearningforselfdrivingvehiclesbyinternalizingobservationtoactionrules/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e5021c4a60f0b871c88297808370deaac1aca35a
--- /dev/null
+++ b/advisablelearningforselfdrivingvehiclesbyinternalizingobservationtoactionrules/full.md
@@ -0,0 +1,245 @@
+# Advisable Learning for Self-driving Vehicles by Internalizing Observation-to-Action Rules
+
+Jinkyu Kim, Suhong Moon, Anna Rohrbach, Trevor Darrell, and John Canny
+EECS, University of California, Berkeley
+
+{jinkyu.kim, suhong.moon, anna.rohrbach, trevordarrell, canny}@berkeley.edu
+
+# Abstract
+
+Humans learn to drive through both practice and theory, e.g. by studying the rules, while most self-driving systems are limited to the former. Being able to incorporate human knowledge of typical causal driving behaviour should benefit autonomous systems. We propose a new approach that learns vehicle control with the help of human advice. Specifically, our system learns to summarize its visual observations in natural language, predict an appropriate action response (e.g. "I see a pedestrian crossing, so I stop"), and predict the controls, accordingly. Moreover, to enhance interpretability of our system, we introduce a fine-grained attention mechanism which relies on semantic segmentation and object-centric RoI pooling. We show that our approach of training the autonomous system with human advice, grounded in a rich semantic representation, matches or outperforms prior work in terms of control prediction and explanation generation. Our approach also results in more interpretable visual explanations by visualizing object-centric attention maps. Code is available at https://github.com/JinkyuKimUCB/advisable-driving.
+
+# 1. Introduction
+
+Autonomous driving control has made dramatic progress in the last several years. The proposed vehicle controllers use a variety of approaches; recent efforts [5] suggest that deep neural networks can be effectively applied to the controllers in an end-to-end manner. These models, however, are known to be opaque. One way to simplify and expose the underlying reasoning is via a situation-specific dependence on visible objects in the scene, i.e. by only attending to image areas that are causally linked to the driver's actions [15]. However, the resulting attention maps are not always compelling or human interpretable. Another option is to verbalize the autonomous vehicle's behaviour with natural language [17], Figure 2 (B). The resulting textual explanations are human understandable, but tend to be rather "shallow", as they report the more common objects over
+
+
+Figure 1: Our model consists of four main parts: (1) an object-centric visual encoder built upon a semantic segmentation model, (2) an observation generator, which generates textual observation about the scenes ("The road is wet"), (3) an observation-to-action module, which maps a visual scene description to a (high-level) action command ("Slow down"), and (4) a vehicle controller conditioned on the generated action command.
+
+the less common ones, which may be more important (e.g. construction cones). Both approaches fall short of demonstrating causal behaviour akin to a typical human driver.
+
+To address this issue, [16] augment an imitation learning dataset with instantaneous human advice (e.g. "there is a pedestrian ahead", or "turn left"), see Figure 2 (A). They show that providing such inputs helps more closely imitate a human driver's behavior. While promising, this method requires ground-truth human inputs at test time.
+
+Humans learn to drive not only from practice and demonstration, but also from theory, e.g. by studying the rules. We advocate for a more principled way of integrating human advice during learning. We assume that at training time, human advice is available in the form of observation-action
+
+
+Figure 2: (A) Goal-conditioned End-to-end Driving Model; (B) Explainable End-to-end Driving Model; (C) Ours: Advisable End-to-end Driving Model. (A) Existing goal-conditioned end-to-end driving models take as input discrete commands [7], natural language commands [16, 28], or an intended navigational route [12]. (B) Existing explainable end-to-end driving models transduce DNN states to natural language [17] or visual explanations [15]. (C) Combining the two above-mentioned ideas, we create an "Advisable" driving model that takes human-to-vehicle advice in the form of observation-action rules. To incorporate such rules, our model involves a Sequence-to-Sequence Observation-to-Action module, which generates a soft condition-action rule that maps a textual observation to a high-level action command. For details see Section 3.
+
+rules (e.g. "if the road is wet, slow down"). Incorporating such rules could help driving models learn more human-like behavior, see Figure 1.
+
+A key requirement of an advisable driving model is its explainability – exposing the controller's internal state is important for a user as an acknowledgement that the system is following advice. As mentioned earlier, visual attention is often used in recent explainable models [15, 17]. These models generate spatial attention maps, which are then displayed over the original images. However, such attention maps are coarse and have limited interpretability. They usually have a low spatial resolution (as the last convolutional layer) and are upsampled with a 2D Gaussian kernel. This blurs out the details and makes it difficult to determine what the model actually attends to. We advocate for using a richer representation, such as semantic segmentation, which provides pixel-wise prediction and delineates object boundaries in images. The output of the last convolutional layer retains information of the corresponding local image regions, which can be advantageous for obtaining more fine-grained attention maps. We thus propose to use semantic segmentation as our input representation. To further improve the quality of the attention maps, we also use an instance segmentation model, which allows us to distribute attention over individual objects.
+
+Overall, we propose a novel self-driving model that is both advisable and explainable, see Figure 2 (C). Our model learns advice from human inputs which convey global rules that the user expects the vehicle to follow (e.g. "If a heavy fog interferes with your forward visibility, drive slowly"). We can also provide both visual explanations, by producing fine-grained attention maps, and textual explanations, by generating textual utterances (e.g. "the traffic light ahead turned red", thus "the car stopped"). We ground both functionalities in our object-centric visual representation.
+
+We evaluate our approach on the BDD-X dataset [17] and show that our model matches or outperforms prior work in control prediction and textual observation generation. Our attention maps, tied to the semantic segmentation, result in object-centric (and thus more interpretable) visualization of internal states. Our human evaluation in a simulated environment (Carla [8]) further shows that our advisable system can increase user trust.
+
+# 2. Related Work
+
+End-to-End Learning for Self-driving Vehicles. Recent works [4, 12] suggest that a driving policy can be successfully learned by neural networks through supervised learning over observation (e.g. video) and action (e.g. steering) pairs that are collected from human demonstration. Bojarski et al. [5] trained a 5-layer ConvNet to predict steering controls from a dashcam image, while Xu et al. [39] utilized a dilated ConvNet combined with an LSTM to predict the vehicle's discretized future motions. Recently, Hecker et al. [12] explored an extended model that takes a surround-view multi-camera system, a route planner, and a CAN bus reader. Codevilla et al. [7] explored a conditional end-to-end driving model that takes high-level command input (i.e. left-/right-turn, lane following, and intersection passing) at test time, see Figure 2 (A). To reduce the complexity, there is growing interest in end-to-mid [41] and mid-to-mid [4] driving models that produce a mid-level output representation in the form of a drivable trajectory by consuming either raw sensor data or an intermediate scene representation as input. Their behavior, however, is opaque, and learning to drive in urban areas remains challenging. These driving models are also known to be "black boxes", and this lack of transparency may be a major drawback in self-driving applications where a high level of user trust is required to accept such a radical technology.
+
+Visual and Textual Explanations. Explainability of deep neural networks has become a growing field in computer vision and machine learning communities [10]. In landmark work, [40] utilized deconvolution layers to visualize the internal representation of a ConvNet. Other approaches [42, 30] have explored synthesizing an image that highly activates a neuron. However, they lack formal measures of how the function estimated by the network is affected by spatially-extended features. Attention-based approaches may be exceptions to this rule. Kim et al. [15] utilized an attention model followed by additional salience filtering to show regions that causally affect the output. Wang et al. [36] and Wu et al. [38] introduced an instance-level attention model that finds objects (e.g., cars, pedestrians) that the network needs to pay attention to. However, such attention may be less convenient (especially in the driving domain) for users to "replay". It is also important to be able to justify the decisions that were made and explain why they are reasonable in a human understandable manner, i.e. in natural language [13, 14]. Kim et al. [17] proposed a textual explanation model to explain the rationales behind the vehicle controller, see Figure 2 (B). Explainable models can help reveal what the model is doing and show the basis for its decisions, which makes it easier to expose weaknesses and further improve. We propose a model that is both explainable and advisable. Human-to-vehicle advice can take a variety of forms, while natural language is an intuitive form of communication for humans. Our approach is inspired by [17], but we incorporate advice through learning to generate observations and corresponding actions in natural language.
+
+Advice-taking Models. Recognition of the value of advice-taking has a long history in the AI community [23], but few attempts have been made to exploit textual advice. Several approaches have been proposed to translate natural language advice to formal semantic representations, which are then used to bias actions for simulated soccer [19], mobile manipulation [24, 25, 31], and navigation [2]. Recent work suggests that incorporating natural language human feedback can improve text-based QA agents [21, 37] and image captioning performance [22].
+
+
+Figure 3: A detailed overview of our Object-centric Visual Encoder, which is built upon an instance mask detector and a semantic segmentation model, both of which provide pixel-wise category predictions from images along with delineating object boundaries.
+
+Despite its potential, there are various challenges with collecting human feedback on the actions taken by self-driving cars (e.g. safety and liability). Other notable approaches (in the reinforcement learning setting) include the work by Tung et al. [32], which learns a visual reward detector conditioned on natural language action descriptions that is then used to train agents. Kim et al. [16] introduced an approach to ground instantaneous human-to-vehicle advice w.r.t. perception and action and showed that accepting such advice improves overall control prediction accuracy, while Roh et al. [28] focused on conditioning the driving model on natural language instructions, see Figure 2 (A). Inspired by these works, we incorporate observation-action rules at training time, and learn to recognize when to follow advice at test time, rather than expecting such advice to be given by a "passenger" at test time.
+
+# 3. Advisable Learning
+
+In this paper, we propose a novel driving model that is both explainable and advisable. Our model can provide the basis of its decision both by visualizing image regions that it attends to and by verbalizing the observations of what it sees (e.g. "it is snowing"). Our model is also advisable by incorporating general observation-action rules, which it is expected to follow.
+
+As shown in Figure 2 (C), our model includes four main components. Our Object-centric Visual Encoder extracts visual (semantic) representations through a ConvNet that is pretrained on the task of semantic segmentation (Section 3.1). The Vehicle Controller is trained to predict control commands conditioned on the high-level action commands (e.g. "stop at the crosswalk") (Section 3.2).
+
+
+Figure 4: A detailed overview of our goal-conditioned Vehicle Controller. We take an action command in natural language as an input and ground it into the controller. Our model adopts a spatial attention mechanism $\pi$, which guides where the controller looks. Conditioned on the attended feature and (optionally) the current speed $v_{t}$, our model outputs the future trajectory $\mathcal{P}$ and speed $\hat{v}_{t}$.
+
+The Observation Generator produces variable-length textual observations about the scenes (e.g. "pedestrians are waiting to cross") (Section 3.3). Finally, our Sequence-to-Sequence Observation-to-Action module generates soft condition-action rules that map visual scene descriptions (e.g. "it is snowing") to high-level action commands (e.g. "maintain a slow speed") (Section 3.4). Note that our Vehicle Controller utilizes a visual (spatial) attention mechanism, which highlights the image regions the model fixates on when producing the network's output. This attended feature is then fed into the Observation Generator for the final prediction.
+
+# 3.1. Object-centric Visual Encoder
+
+We use images that are down-sampled to $10\mathrm{Hz}$ and are resized to have dimensionality $144\times 256\times 3$ by applying bilinear interpolation. Each image is normalized by subtracting the global mean from the raw pixels and dividing by the global standard deviation [29], see Figure 3.
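+
+As a concrete illustration of this preprocessing, the following is a minimal PyTorch-style sketch. The function name `preprocess_frame` and the mean/std values are illustrative assumptions, not taken from the paper.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def preprocess_frame(frame, mean, std):
+    """Resize a frame to 144x256 with bilinear interpolation and normalize it.
+
+    frame: float tensor of shape (3, H, W) with raw pixel values.
+    mean, std: global per-channel statistics (hypothetical values below).
+    """
+    frame = frame.unsqueeze(0)                       # (1, 3, H, W)
+    frame = F.interpolate(frame, size=(144, 256), mode="bilinear",
+                          align_corners=False)
+    frame = (frame - mean.view(1, 3, 1, 1)) / std.view(1, 3, 1, 1)
+    return frame.squeeze(0)                          # (3, 144, 256)
+
+# Example usage with made-up statistics.
+mean = torch.tensor([90.0, 95.0, 100.0])
+std = torch.tensor([60.0, 60.0, 65.0])
+x = preprocess_frame(torch.rand(3, 720, 1280) * 255.0, mean, std)
+```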
+
+Segmentation as an Input Representation. Instead of training a ConvNet from scratch, we use a semantic segmentation model that is pre-trained on the Mapillary Vistas street-view scene understanding dataset [26]. Our front-end vision module is therefore trained to produce pixel-wise category predictions from images along with delineating the boundaries of each object. Here, we use the DeepLab v3 model [6], a state-of-the-art network that uses atrous spatial pyramid pooling to robustly segment objects at multiple scales with various filters of different sampling rates and fields-of-view. We obtain a high-level visual representation of an input image at each time step $t$. This representation $\mathbf{X}_t$ (of size $18 \times 32 \times 256$) contains a set of 256-dimensional latent vectors over the spatial dimension, i.e. $\mathbf{X}_t = \{\mathbf{x}_{t,1}, \mathbf{x}_{t,2}, \dots, \mathbf{x}_{t,l}\}$, where $l$ ($= w \times h$) is the spatial dimension.
+
+Note that the use of semantic segmentation as the internal representation of visual scenes is generally transferable between real-world and simulated settings.
+
+Object-centric RoI Pooling. To further provide object-centric attention heat maps, which highlight more precise object regions, we use an instance detection model, Mask R-CNN [11], and tie the predicted instance masks to the feature $\mathbf{X}_t$. Given the instance regions (RoIs), a position-sensitive RoI pooling layer is used to aggregate the latent vectors $\mathbf{x}_{t,i}$ for $i = \{1,2,\dots ,l\}$ for each RoI to obtain the visual feature $\mathbf{y}$. Note that the pooled latent vector is then distributed equally to replace the original representations. This provides a subset of feature slices that share the same latent representation, and thus allows the model to attend equally to all parts of an RoI.
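+
+The following is a minimal sketch of this pooling step, assuming the instance masks have already been downsampled to the 18x32 feature grid; the function name and tensor layout are our own illustrative choices.
+
+```python
+import torch
+
+def object_centric_pool(X, masks):
+    """Replace the latent vectors inside each instance mask by their mean.
+
+    X:     float tensor of shape (h, w, d), e.g. (18, 32, 256).
+    masks: boolean tensors of shape (h, w), one per detected instance
+           (assumed to come from Mask R-CNN, resized to the feature grid).
+    """
+    Y = X.clone()
+    for m in masks:
+        if m.any():
+            pooled = X[m].mean(dim=0)    # average latent vector over the RoI
+            Y[m] = pooled                # distribute it equally over the RoI
+    return Y
+
+# Toy example: one 4x4 mask on an 18x32x256 feature map.
+X = torch.randn(18, 32, 256)
+mask = torch.zeros(18, 32, dtype=torch.bool)
+mask[2:6, 5:9] = True
+Y = object_centric_pool(X, [mask])
+```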
+
+# 3.2. Goal-conditioned Vehicle Controller
+
+Grounding Natural Language Action Command. Our vehicle controller is trained to predict control commands conditioned on the high-level action command (e.g. "maintain a slow speed"). We use a textual encoder that takes a variable-length textual command and grounds it into the vehicle controller. Following [16], we use an LSTM to encode an input word sequence and yield a 256-dimensional latent vector $\mathbf{u}_t$. We combine this vector with the visual feature $\mathbf{y}_{t,i}$ by an element-wise multiplication and obtain a feature vector $\mathbf{z}_{t,i} = \mathbf{y}_{t,i} \odot \mathbf{u}_t$ for $i = \{1,2,\dots,l\}$, which is then fed into the visual attention module to generate attention maps. We provide the detailed model architecture in Figure 4.
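+
+A minimal PyTorch-style sketch of this grounding step follows; the vocabulary size, module name, and batch handling are illustrative assumptions, not taken from the paper.
+
+```python
+import torch
+import torch.nn as nn
+
+class CommandGrounding(nn.Module):
+    """Encode a tokenized action command with an LSTM and fuse it with
+    per-location visual features by element-wise multiplication.
+    Vocabulary size and embedding size are illustrative choices."""
+
+    def __init__(self, vocab_size=1000, d=256):
+        super().__init__()
+        self.embed = nn.Embedding(vocab_size, d)
+        self.lstm = nn.LSTM(d, d, batch_first=True)
+
+    def forward(self, tokens, y):
+        # tokens: (B, L) word indices; y: (B, l, d) visual features per location.
+        _, (h, _) = self.lstm(self.embed(tokens))
+        u = h[-1]                          # (B, d) command embedding u_t
+        z = y * u.unsqueeze(1)             # (B, l, d) fused features z_{t,i}
+        return z
+
+grounder = CommandGrounding()
+z = grounder(torch.randint(0, 1000, (2, 6)), torch.randn(2, 18 * 32, 256))
+```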
+
+Visual Attention. Visual attention provides introspective (visual) explanations by filtering out non-salient image regions, while the attended regions have a potential causal effect on the output. The goal of the visual attention mechanism is to find a context $\mathbf{C}_t = \{\mathbf{c}_{t,1},\mathbf{c}_{t,2},\dots ,\mathbf{c}_{t,l}\}$ by minimizing a loss function, where $\mathbf{c}_{t,i} = \pi (\alpha_{t,i},\mathbf{z}_{t,i}) = \alpha_{t,i}\mathbf{z}_{t,i}$ for $i = \{1,2,\ldots ,l\}$. Each scalar attention weight $\alpha_{t,i} \in [0, 1]$ and $\sum_{i}\alpha_{t,i} = 1$. We use a multi-layer perceptron to compute these attention weights, i.e. $\alpha_{t,i} = f_{\mathrm{att}}(\mathbf{z}_{t,i},\mathbf{h}_{t - 1})$, conditioned on the previous hidden state $\mathbf{h}_{t - 1}$ (of the Attention LSTM) and the current advice-grounded feature vector $\mathbf{z}_{t,i}$. A softmax function is used to obtain the final normalized attention weights.
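+
+The sketch below illustrates one way such an attention module could be written; the MLP hidden size and the module name are assumptions.
+
+```python
+import torch
+import torch.nn as nn
+
+class SpatialAttention(nn.Module):
+    """Compute attention weights alpha_{t,i} = f_att(z_{t,i}, h_{t-1}) with a
+    small MLP, normalize them with a softmax over locations, and weight the
+    features. Hidden sizes are illustrative, not taken from the paper."""
+
+    def __init__(self, d=256, hidden=128):
+        super().__init__()
+        self.f_att = nn.Sequential(
+            nn.Linear(2 * d, hidden), nn.ReLU(), nn.Linear(hidden, 1))
+
+    def forward(self, z, h_prev):
+        # z: (B, l, d) fused features; h_prev: (B, d) previous LSTM hidden state.
+        h_rep = h_prev.unsqueeze(1).expand(-1, z.size(1), -1)
+        scores = self.f_att(torch.cat([z, h_rep], dim=-1)).squeeze(-1)  # (B, l)
+        alpha = torch.softmax(scores, dim=-1)             # sums to 1 over locations
+        context = alpha.unsqueeze(-1) * z                 # c_{t,i} = alpha_{t,i} z_{t,i}
+        return alpha, context
+
+att = SpatialAttention()
+alpha, c = att(torch.randn(2, 576, 256), torch.randn(2, 256))
+```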
+
+Output. Inspired by prior work [4, 41], our vehicle controller predicts a future trajectory $\mathcal{P} = [p_{t,\Delta}, p_{t,2\Delta}, \dots, p_{t,N\Delta}]$ along with a speed $\hat{v}_t$. Each point $p_{t,j\Delta}$ for $j = \{1,2,\dots,N\}$ is characterized by its future longitudinal and latitudinal location after the time $j\Delta$. This trajectory can be converted into low-level driving control commands (i.e. steering, braking, and acceleration) by an optimizer within the constraints of the vehicle's dynamics. Different types of vehicles may utilize different control outputs to achieve the same driving trajectory, which argues against training a network to directly output low-level steering and acceleration control.
+
+To predict the future trajectory, we use additional hidden layers $f_{\mathrm{out}}$ conditioned on the latent representation $\mathbf{C}_t$ (from our Advice-grounded Visual Attention) and the current speed $v_t$, i.e. $\mathcal{P} = f_{\mathrm{out}}([f_{\mathrm{flatten}}(\mathbf{C}_t), f_{\mathrm{speed}}(v_t)])$, where $f_{\mathrm{speed}}$ denotes additional hidden layers that encode the speed in a high-dimensional latent space and $f_{\mathrm{flatten}}$ is a flattening function. We set $\Delta = 0.5$ seconds and $N = 6$ (thus, we predict the future trajectory over the next 3 seconds).
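+
+A rough sketch of such an output head is shown below, assuming a two-layer MLP for $f_{\mathrm{out}}$ and a small encoder for $f_{\mathrm{speed}}$; the layer widths are illustrative.
+
+```python
+import torch
+import torch.nn as nn
+
+class TrajectoryHead(nn.Module):
+    """Predict N = 6 future waypoints (x, y) at 0.5 s intervals plus a speed,
+    from the flattened attended context and an encoded current speed.
+    Layer widths are illustrative."""
+
+    def __init__(self, l=18 * 32, d=256, n_points=6):
+        super().__init__()
+        self.f_speed = nn.Sequential(nn.Linear(1, 64), nn.ReLU())
+        self.f_out = nn.Sequential(
+            nn.Linear(l * d + 64, 512), nn.ReLU(),
+            nn.Linear(512, 2 * n_points + 1))   # 6 (x, y) points + speed
+
+    def forward(self, context, speed):
+        # context: (B, l, d) attended features C_t; speed: (B, 1) current speed v_t.
+        feat = torch.cat([context.flatten(1), self.f_speed(speed)], dim=-1)
+        out = self.f_out(feat)
+        waypoints = out[:, :-1].view(-1, 6, 2)  # future trajectory P
+        speed_hat = out[:, -1]                  # predicted speed
+        return waypoints, speed_hat
+
+head = TrajectoryHead()
+P, v_hat = head(torch.randn(2, 18 * 32, 256), torch.rand(2, 1))
+```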
+
+Loss Function. We minimize the proportional control error (i.e. the difference between human-demonstrated and predicted) to train our future trajectory predictor.
+
+$$
+\mathcal{L}_{ctl} = \frac{1}{NT} \sum_{t=1}^{T} \sum_{j=1}^{N} \lambda_{j} \| p_{t,j\Delta} - \hat{p}_{t,j\Delta} \|_{2}^{2} + \lambda_{0} \| v_{t} - \hat{v}_{t} \|_{2}^{2} \tag{1}
+$$
+
+where $\lambda_{j}$ and $\lambda_0$ control the strength of each term, chosen to be inversely proportional to the global variance.
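+
+A small sketch of this loss follows; how the speed term is averaged over time, and the uniform weights in the toy example, are assumptions on our part.
+
+```python
+import torch
+
+def control_loss(p, p_hat, v, v_hat, lambdas, lambda0):
+    """Eq. (1): weighted squared errors between demonstrated and predicted
+    waypoints and speed, averaged over time steps and horizon.
+
+    p, p_hat: (T, N, 2) waypoints; v, v_hat: (T,) speeds.
+    lambdas:  (N,) per-horizon weights; lambda0: scalar speed weight
+              (e.g. inverse of the global variance of each term).
+    """
+    T, N, _ = p.shape
+    point_err = ((p - p_hat) ** 2).sum(dim=-1)        # (T, N) squared distances
+    loss = (lambdas * point_err).sum() / (N * T)
+    loss = loss + lambda0 * ((v - v_hat) ** 2).mean()
+    return loss
+
+# Toy example with uniform weights.
+loss = control_loss(torch.randn(4, 6, 2), torch.randn(4, 6, 2),
+                    torch.rand(4), torch.rand(4),
+                    torch.ones(6), 1.0)
+```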
+
+# 3.3. Textual Observation Generator
+
+The main goal of our textual observation generator is to summarize visual observations, which need to be considered while driving, e.g. "there is a school bus with lights flashing" (this usually means the vehicle should pull over and remain stopped). Here, we use the term "observation" to convey the notion of the model's ability to actively perceive and register visual cues as being important for the vehicle controller. These observations can take a variety of forms with different levels of urgency and will be provided to the vehicle controller at every time step.
+
+To generate such observations, our model involves a video-to-text module that takes a sequence of video frames and generates variable-length textual observations. In order to implement such a model, we start from the work of [17], which was originally designed to generate textual descriptions/explanations such as the pair "vehicle slows down" (description) and "because it is approaching an intersection and the light is red" (explanation). Unlike [17], where descriptions/explanations are predicted jointly as a single sequence (separated by a token), we focus on generating the latter part (i.e. explanations) and treat them as observations. These observations are then used to predict the corresponding textual action commands, directing the vehicle to behave in a certain way (e.g. go, pass, turn), in Section 3.4.
+
+We collect the latent vector $\bar{\mathbf{c}}_t$ over the past $T$ timesteps by summing over the attended feature vectors $\{\mathbf{c}_{t,i}\}$, i.e. $\bar{\mathbf{c}}_t = \sum_{i=1}^l \mathbf{c}_{t,i}$. We then apply a temporal attention mechanism with weights $\beta_{k,t}$ to those vectors at each time step $k$ (of sentence generation), i.e. $\mathbf{g}_k = \sum_{t=t_0-T+1}^{t_0} \beta_{k,t} \bar{\mathbf{c}}_t$, where $t_0$ is the current timestep, $\sum_{t} \beta_{k,t} = 1$, and $\beta_{k,t} \in [0,1]$. The weight $\beta_{k,t}$ is computed by an attention model similar to the spatial attention. This is common practice in sequence-to-sequence models and allows flexibility in output tokens relative to input samples [3].
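+
+The following sketch shows one possible implementation of this temporal attention, assuming an additive scoring MLP; module and variable names are illustrative.
+
+```python
+import torch
+import torch.nn as nn
+
+class TemporalAttention(nn.Module):
+    """Temporal attention over per-frame context vectors c_bar_t (the spatial
+    sum of attended features), producing g_k for each generated word k.
+    A simple additive scoring MLP is assumed here."""
+
+    def __init__(self, d=256, hidden=128):
+        super().__init__()
+        self.score = nn.Sequential(
+            nn.Linear(2 * d, hidden), nn.Tanh(), nn.Linear(hidden, 1))
+
+    def forward(self, c_bar, h_dec):
+        # c_bar: (B, T, d) frame-level vectors; h_dec: (B, d) decoder state at step k.
+        h_rep = h_dec.unsqueeze(1).expand(-1, c_bar.size(1), -1)
+        beta = torch.softmax(
+            self.score(torch.cat([c_bar, h_rep], dim=-1)).squeeze(-1), dim=-1)
+        g = (beta.unsqueeze(-1) * c_bar).sum(dim=1)   # g_k = sum_t beta_{k,t} c_bar_t
+        return beta, g
+
+# c_bar can be obtained as context.sum(dim=1) from the spatial attention output.
+temp_att = TemporalAttention()
+beta, g = temp_att(torch.randn(2, 20, 256), torch.randn(2, 256))
+```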
+
+Our decoder outputs per-word softmax probabilities. We minimize the following negative log-likelihood: $\mathcal{L}_{obs} = -\sum_{k}\log p(\mathbf{o}_k|\mathbf{o}_{k - 1},\mathbf{g}_k)$.
+
+# 3.4. Sequence-to-Sequence Observation-to-Action
+
+We want our model to incorporate natural language human-to-vehicle advice. Such advice is typically high-level, rather than low-level (where the vehicle controller operates). Recent work [16] proposed a model that allows short-term (or local) textual advice from passengers (e.g. "there are construction cones" or "slow down"). More generally, advice might take the form of condition-action rules. In this work, we focus on such long-term (or global) advice from humans (e.g. driving instructors).
+
+We use a general encoder-decoder framework to incorporate the observation-action rules. Our LSTM encoder takes a generated variable-length textual observation ("there is a sharp turn ahead") and yields a representative latent vector, while the decoder (another LSTM) outputs an action command sequence ("slow down"). The model is trained by minimizing the negative log-likelihood (similar to the observation generator). Our model is supervised by human inputs in the form of observation-action rules that the user expects the vehicle to follow. The predicted action commands are given as input to the vehicle controller. Note that such rules are learned during offline training, separately from our vehicle controller and textual observation generator; we can therefore learn such rules from different datasets. Our approach is also applicable in an online setting, where the learning of observation-action rules is reinforced: a policy-gradient method can be used to train an agent to generate such rules online while estimating the reward signal with automatic scores.
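+
+Below is a minimal sketch of such an encoder-decoder module with teacher forcing; the vocabulary size, hidden size, and the `Obs2Act` name are illustrative assumptions.
+
+```python
+import torch
+import torch.nn as nn
+
+class Obs2Act(nn.Module):
+    """Sequence-to-sequence module mapping a textual observation to an action
+    command. A single-layer LSTM encoder/decoder with teacher forcing is
+    assumed; vocabulary and hidden sizes are illustrative."""
+
+    def __init__(self, vocab_size=1000, d=256):
+        super().__init__()
+        self.embed = nn.Embedding(vocab_size, d)
+        self.encoder = nn.LSTM(d, d, batch_first=True)
+        self.decoder = nn.LSTM(d, d, batch_first=True)
+        self.out = nn.Linear(d, vocab_size)
+
+    def forward(self, obs_tokens, act_tokens):
+        # obs_tokens: (B, L_o) observation; act_tokens: (B, L_a) target command.
+        _, state = self.encoder(self.embed(obs_tokens))
+        dec_out, _ = self.decoder(self.embed(act_tokens), state)
+        return self.out(dec_out)                     # per-word logits
+
+model = Obs2Act()
+act = torch.randint(0, 1000, (2, 5))
+logits = model(torch.randint(0, 1000, (2, 8)), act)
+# Negative log-likelihood (in the spirit of Eq. 2) against next-word targets:
+loss = nn.functional.cross_entropy(
+    logits[:, :-1].reshape(-1, 1000), act[:, 1:].reshape(-1))
+```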
+
+Human-to-Vehicle Instantaneous Advice. We currently assume that advice is given offline, rather than during online human-vehicle interaction. Note, however, that our model can also take instantaneous human-to-vehicle advice. As shown in Figure 2 (C), we use two multiplexers to accept observational and navigational advice. The observational advice is mapped to an action command by our model.
+
+Loss Function. Our Observation-to-Action module outputs per-word softmax probabilities and we minimize the following negative log-likelihood $\mathcal{L}_{obs2act}$ :
+
+$$
+\mathcal{L}_{obs2act} = - \sum_{m} \log p\left(\mathbf{a}_{m} \mid \mathbf{a}_{m-1}, \left\{\mathbf{o}_{1}, \mathbf{o}_{2}, \dots, \mathbf{o}_{K}\right\}\right) \tag{2}
+$$
+
+We minimize the following loss function $\mathcal{L}$ to train our entire driving model end-to-end, $\mathcal{L} = \mathcal{L}_{obs} + \mathcal{L}_{ctl} + \mathcal{L}_{obs2act}$ .
+
+# 4. Experiments
+
+Dataset. We use the Berkeley DeepDrive-eXplanation (BDD-X) dataset [17] to train and evaluate our proposed model.
+
+
+
+
+Figure 5: (A) Example observations and action commands generated by our model. We provide input raw images and attention maps of the vehicle controller. (B) The distribution of our top-100 generated observation/action pairs by their first four or three words, respectively. The ordering of the words starts from the center, and the length of the arc indicates the proportion of the number of words. Note that we remove areas where the number of words is too small to show.
+
+Table 1: We report the vehicle control prediction performance for our approach and existing baselines. We compare the performance in terms of the median of average displacement errors (ADEs) as well as the 1st (Q1) and 3rd (Q3) quartiles (lower is better), i.e. Median [Q1, Q3].
+
+| Model | ADE (in meters) ↓ without speed inputs | ADE (in meters) ↓ with speed inputs |
+| --- | --- | --- |
+| A. CNN+FC [5] | 2.36 [1.18, 4.61] | - |
+| B. A + LSTM [39] | 3.29 [1.49, 6.93] | - |
+| C. B + Attention [15] | 2.22 [1.17, 4.61] | - |
+| D. A + Discrete commands (w/ branched output) [7] | 2.28 [0.89, 4.56] | 1.35 [0.66, 2.76] |
+| E. C + (natural language) commands [16] | 2.11 [0.84, 4.86] | 1.35 [0.42, 2.94] |
+| F. D + Long-term (global) Advice | 2.14 [0.93, 4.57] | 0.81 [0.45, 1.61] |
+| G. F + Object-centric Visual Encoder (ours) | 1.93 [1.03, 4.26] | 0.65 [0.46, 1.43] |
+
+BDD-X contains front-view dashcam videos ($\approx$ 40 seconds each) collected during urban driving in the United States, covering all the typical driving events. Alongside the video data, the dataset provides corresponding timestamped IMU sensor measurements, which we use as a ground-truth control signal. We provide the dataset details in the supplemental material.
+
+Moreover, the dataset provides textual (i) descriptions of the vehicle's actions (what the driver is doing), and (ii) explanations for the driver's actions (why the driver took that action, from the point of view of a driving instructor), such as the pair: "the car slows down" and "because it is approaching an intersection".
+
+This dataset was collected from human annotators on Amazon Mechanical Turk. We supervise our Textual Observation Generator with the textual explanations, while our Sequence-to-Sequence Observation-to-Action module is supervised with the action descriptions (i.e. as navigational commands).
+
+Training and Evaluation Details. Except for our object-centric visual encoder, we train all other parts end-to-end from random initialization (i.e. no pre-trained weights). Unless otherwise stated, we use a single LSTM layer for all the components of our framework. For training, we use the Adam optimization algorithm [18] and Xavier initialization [9]. For evaluation, we use the average displacement error (ADE) to quantitatively evaluate control prediction performance by comparing against ground-truth human-demonstrated control commands. To evaluate the textual utterances generated by our model, we use popular automatic metrics: BLEU [27], METEOR [20], CIDEr-D [34], and SPICE [1].
+
+Driving Performance Evaluation. We report the vehicle control prediction performance for our model and a number of baselines to evaluate the ability to control a vehicle conditioned on the determined actions. We compare to end-to-end driving models, CNN+FC [5], CNN+FC+LSTM [39], and CNN+FC+LSTM+Attention [15], and to goal-conditioned driving models that ground different types of goal: discrete commands [7], a top-down view of the intended route [12], and natural language commands [16]. For a fair comparison, we use the same base CNN [7] in all cases except model $\mathbf{G}$, which uses our object-centric front-end visual encoder.
+
+
+Figure 6: (A) The sum of normalized attention weights (blue) over the individual semantic regions for the baseline [15] and our model; differences shown in red. Our model attends more to road, car, pedestrian area, lane markings, and less to buildings, sky, vegetation. (We chose the top-20 most frequently attended regions of our model.) (B) We provide input images and compare attention maps from the baseline and our model. Attention maps are overlaid by their contour lines and shown over the input images. Higher value (red) of attention weight shows what the driving model attends to.
+
+
+
+All models have the same output layer and are trained by minimizing the same loss function.
+
+We report the performance of the aforementioned models in Table 1 (lower is better). Consistent with prior work, the goal-conditioned models [7, 16] ($\mathbf{D}$ and $\mathbf{E}$) generally provide better control prediction performance than the non-goal-conditioned models (top three rows). Our model is built upon model $\mathbf{D}$, a goal-conditioned driving model that takes four different discrete navigational commands (i.e. lane following, turning, merging, parking). Based on this, our model $\mathbf{F}$ takes natural language commands for stimulus-driven action, e.g. the vehicle may stop, slow down, and/or deviate because of traffic participants, obstacles, or other environmental factors. We observe that our model is further improved by adding the long-term (or global) advising module (compare $\mathbf{F}$ vs. $\mathbf{D}$). Our controller shares the attended feature with the Observation Generator, which encourages the model to attend to important visual cues (e.g. stop signs, traffic lights, pedestrians). Using our Object-centric Visual Encoder (instead of training a ConvNet from scratch) further improves control prediction performance (compare $\mathbf{G}$ vs. $\mathbf{F}$).
+
+Analysis of Observation-to-Action Module. In Figure 5 (A), we provide qualitative examples of the textual observations (e.g. "because the car in front is stopped") and corresponding high-level action commands ("the car is stopped") generated by our model. We also show the generated attention maps, which highlight image regions that have influenced the network's outputs (i.e. both textual observations and control commands). Our model attends to relevant visual cues and generates corresponding textual sequences. The vehicle controller also looks at other driving-related objects, e.g. lane markings. Importantly, our model is able to learn observation-action rules, which are provided by
+
+Table 2: We report the quality of the generated textual observations (top) and action commands (bottom). We rely on standard automatic metrics: BLEU-4 [27], METEOR [20], CIDEr-D [34], and SPICE [1]. $\dagger$ : reported by [17]
+
+Textual Observation Generation:
+
+| Model | BLEU-4 | METEOR | CIDEr-D | SPICE |
+| --- | --- | --- | --- | --- |
+| S2VT [35]+SA+TA† | 5.84 | 10.9 | 52.7 | 14.3 |
+| S2VT+SA+TA+WAA [17]† | 7.28 | 12.2 | 69.5 | 17.5 |
+| Transformer-based Decoder [33] | 9.90 | 13.6 | 70.1 | 17.5 |
+| Ours | 11.7 | 16.0 | 98.2 | 20.7 |
+
+Textual Action Command Generation:
+
+| Model | BLEU-4 | METEOR | CIDEr-D | SPICE |
+| --- | --- | --- | --- | --- |
+| S2VT [35]+SA+TA† | 27.1 | 26.4 | 157.0 | 55.1 |
+| S2VT+SA+TA+WAA [17]† | 32.3 | 29.2 | 215.8 | 59.6 |
+| Ours | 42.6 | 34.6 | 338.5 | 62.6 |
+
+humans at training time, and correctly reflect typical links between visual causes and actions of human driving behavior.
+
+To see the distribution of the learned observation-to-action rules, we cluster observation/action pairs based on their first few words (e.g. the-light-is-red-car-is-stopped from the pair: "because the light is red" and "the car is stopped"), as shown in Figure 5 (B). Our model generates a variety of observation-to-action pairs, which are compatible with a human driver's general knowledge. For example, an observation starting with "the road is wet" produces an action command starting with "the car maintains a slow speed".
+
+Towards Semantically Rich Driving Model. Analyzing the generated attention maps confirms that our model focuses more on important object-related visual cues (e.g. vehicles, lane markings, etc). In contrast, a baseline model [15] often attends to background (e.g. sky, trees, buildings, etc) but under-attends to important visual cues.
+
+In Figure 6 (A), we provide the top 20 semantic segmentation labels that our model attends to. Blue bars represent the sum of normalized attention weights for each label. The top 3 attended regions for our model are road, ego-vehicle, and pedestrian area, while the baseline focuses on building, road, and sky. To see the difference between these models, we also visualize the differences as red bars. Ours clearly focuses more on driving-related features, e.g. road, car, pedestrian area, lane markings, snow, and less on buildings, sky, vegetation, etc. In Figure 6 (B), we further compare the attention maps between our model and the baseline [15]. We provide input video frames (1st row), attention maps generated by the baseline model (2nd row), and our attention maps (3rd row). The attention maps show that our model attends to important object-related visual cues (e.g. vehicles, lane markings, etc.) with delineated object boundaries.
+
+Generated Observation/Action Quality. Next we evaluate the quality of our generated observations and action commands, see Table 2 (higher is better). Our Textual Observation Generator predicts natural language observations based on the visual inputs. Some of our baselines are video captioning approaches, which do not take the vehicle control into account (S2VT [35] + SA (spatial attention) + TA (temporal attention), and a Transformer-based approach [33]). At the same time, our full system is trained end-to-end, including the loss on the predicted controls, so our textual observations are encouraged to be relevant to driving behavior. Therefore, we also compare to the best version of [17], the WAA (weakly-aligned attention) model. This model generates action descriptions and explanations conditioned on the predicted vehicle control, and we interpret the latter as observations. This is unlike our approach, where, conversely, the vehicle control is predicted based on observations/action commands. Nevertheless, these are meaningful reference numbers for our approach. As shown, our model obtains the highest scores on all metrics, for both generated observations and action commands.
+
+Simulation and Human Evaluation. Explainable and advisable driving models can increase user trust by providing effective communication, which helps users convey their preferences/guidance to the vehicle and vice versa. To verify this, we run a human evaluation. We first migrate our driving model from the offline setting to a simulated environment, Carla [8], i.e. our model is trained on the BDD-X dataset and tested in the Carla simulator. We choose three different driving scenarios: (i) stopping at red lights, (ii) stopping at red lights in heavy rain, and (iii) stopping at a stop marking. In these experiments, our driving model fails to stop in scenarios (ii) and (iii). We then test the model with the following advice: "the light is red" and "there is a stop sign" for the respective scenarios. We observe that the failure rate drops (see Figure 7 (A)). Further, we recruit 20 human judges.
+
+Figure 7: (A) We report the failure rate with and without advice inputs for three scenarios on the Carla simulator. (B-C) We also report the responses from our human study, comparing a non-explainable model, a model explainable with attention and textual explanations, and a model explainable with human-to-vehicle advice, for the questions: (B) "How much do you trust this system?", and (C) "To what level has the system improved with the human-to-vehicle advice?" Answers were measured on a 1-5 Likert scale.
+
+We study the following three cases: (i) the user only observes the car's behavior, (ii) the user observes the model's behavior along with the attention and textual explanations, and (iii) the user observes the model's behavior, attention, and textual explanations, before and after providing advice. As shown in Figure 7 (B), our explainable and advisable system obtains better user-trust responses. Specifically, providing visual and textual explanations slightly improves user trust (blue vs. red). Further, showing users an example where the driving model accepts human-to-vehicle advice significantly improves user trust (red vs. yellow). In addition, we obtain feedback from the users by asking "To what level has the system improved with the human-to-vehicle advice?" Our evaluators acknowledge that advice improves the driving system, see Figure 7 (C). We provide the details of our evaluation in the Carla simulator in the supplemental material.
+
+# 5. Conclusion
+
+Towards learning more human-like driving behavior, we propose to use human advice in the form of observation-action rules. Specifically, we present a new approach where such advice is used as supervision during training and the controls are predicted based on the textual action commands. We rely on a semantic visual representation to better ground the textual observations and generate object-centric attention maps. Our experiments on the BDD-X dataset show that our model matches or outperforms prior work in control prediction and textual observation generation. Our human evaluation on the Carla simulator further shows that our advisable system can increase user trust.
+
+Acknowledgements. We thank Y. Gao, D. Wang, O. Watkins, and C. Devin at UC Berkeley for their helpful discussion. This work was supported by DARPA XAI program and Berkeley DeepDrive. J. Kim was in part supported by Samsung Scholarship.
+
+# References
+
+[1] Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. Spice: Semantic propositional image caption evaluation. In ECCV, pages 382-398. Springer, 2016. 6, 7
+[2] Yoav Artzi and Luke Zettlemoyer. Weakly supervised learning of semantic parsers for mapping instructions to actions. TACL, 2013. 3
+[3] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2014. 5
+[4] Mayank Bansal, Alex Krizhevsky, and Abhijit Ogale. Chauffeurnet: Learning to drive by imitating the best and synthesizing the worst. RSS, 2019. 2, 3, 4
+[5] Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, et al. End to end learning for self-driving cars. CoRR abs/1604.07316, 2016. 1, 2, 6
+[6] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. TPAMI, 2018. 4
[7] Felipe Codevilla, Matthias Müller, Antonio López, Vladlen Koltun, and Alexey Dosovitskiy. End-to-end driving via conditional imitation learning. In ICRA, pages 1-9. IEEE, 2018. 2, 6, 7
+[8] Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. Carla: An open urban driving simulator. CoRL, 2017. 2, 8
+[9] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, 2010. 6
+[10] David Gunning. Explainable artificial intelligence (xai). Defense Advanced Research Projects Agency (DARPA), 2017. 3
[11] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In ICCV, pages 2961-2969, 2017. 4
+[12] Simon Hecker, Dengxin Dai, and Luc Van Gool. End-to-end learning of driving models with surround-view cameras and route planners. In ECCV, 2018. 2, 6
+[13] Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, and Trevor Darrell. Generating visual explanations. In ECCV, 2016. 3
+[14] Lisa Anne Hendricks, Ronghang Hu, Trevor Darrell, and Zeynep Akata. Grounding visual explanations. In ECCV, 2018. 3
+[15] Jinkyu Kim and John Canny. Interpretable learning for self-driving cars by visualizing causal attention. ICCV, 2017. 1, 2, 3, 6, 7, 8
[16] Jinkyu Kim, Teruhisa Misu, Yi-Ting Chen, Ashish Tawari, and John Canny. Grounding human-to-vehicle advice for self-driving vehicles. CVPR, 2019. 1, 2, 3, 4, 5, 6, 7
+[17] Jinkyu Kim, Anna Rohrbach, Trevor Darrell, John Canny, and Zeynep Akata. Textual explanations for self-driving vehicles. In ECCV, 2018. 1, 2, 3, 5, 7, 8
+
[18] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015. 6
[19] Gregory Kuhlmann, Peter Stone, Raymond Mooney, and Jude Shavlik. Guiding a reinforcement learner with natural language advice: Initial results in RoboCup soccer. In AAAI Workshop, 2004. 3
+[20] Alon Lavie and Abhaya Agarwal. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In EMNLP, 2005. 6, 7
+[21] Jiwei Li, Alexander H Miller, Sumit Chopra, Marc'Aurelio Ranzato, and Jason Weston. Dialogue learning with human-in-the-loop. arXiv preprint arXiv:1611.09823, 2016. 3
+[22] Huan Ling and Sanja Fidler. Teaching machines to describe images via natural language feedback. arXiv preprint arXiv:1706.00130, 2017. 3
+[23] John McCarthy. Programs with common sense. RLE and MIT computation center, 1960. 3
[24] Dipendra K Misra, Jaeyong Sung, Kevin Lee, and Ashutosh Saxena. Tell me dave: Context-sensitive grounding of natural language to manipulation instructions. IJRR, 2016. 3
+[25] Dipendra Kumar Misra, Kejia Tao, Percy Liang, and Ashutosh Saxena. Environment-driven lexicon induction for high-level instructions. In ACL, 2015. 3
+[26] Gerhard Neuhold, Tobias Ollmann, Samuel Rota Bulo, and Peter Kontschieder. The mapillary vistas dataset for semantic understanding of street scenes. In ICCV, 2017. 4
+[27] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In ACL, 2002. 6, 7
+[28] Junha Roh, Chris Paxton, Andrzej Pronobis, Ali Farhadi, and Dieter Fox. Conditional driving from natural language instructions. CoRL, 2019. 2, 3
+[29] Samuel Rota Bulò, Lorenzo Porzi, and Peter Kontschieder. In-place activated batchnorm for memory-optimized training of dnns. In CVPR, 2018. 4
+[30] Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In ICCV, pages 618-626, 2017. 3
[31] Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew R Walter, Ashis Gopal Banerjee, Seth J Teller, and Nicholas Roy. Understanding natural language commands for robotic navigation and mobile manipulation. In AAAI, 2011. 3
+[32] Hsiao-Yu Fish Tung, Adam W Harley, Liang-Kang Huang, and Katerina Fragkiadaki. Reward learning from narrated demonstrations. CVPR, 2018. 3
+[33] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017. 7, 8
+[34] Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image description evaluation. In ICCV, 2015. 6, 7
+[35] Subhashini Venugopalan, Marcus Rohrbach, Jeffrey Donahue, Raymond Mooney, Trevor Darrell, and Kate Saenko.
+
+Sequence to sequence-video to text. In ICCV, pages 4534-4542, 2015. 7, 8
+[36] Dequan Wang, Coline Devin, Qi-Zhi Cai, Fisher Yu, and Trevor Darrell. Deep object centric policies for autonomous driving. ICRA, 2019. 3
+[37] Jason E Weston. Dialog-based language learning. In NeurIPS, 2016. 3
+[38] Jialin Wu and Raymond J Mooney. Faithful multimodal explanation for visual question answering. arXiv preprint arXiv:1809.02805, 2018. 3
+[39] Huazhe Xu, Yang Gao, Fisher Yu, and Trevor Darrell. End-to-end learning of driving models from large-scale video datasets. In CVPR, 2017. 2, 6
[40] Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In ECCV, pages 818-833. Springer, 2014. 3
+[41] Wenyuan Zeng, Wenjie Luo, Simon Suo, Abbas Sadat, Bin Yang, Sergio Casas, and Raquel Urtasun. End-to-end interpretable neural motion planner. In CVPR, pages 8660-8669, 2019. 3, 4
+[42] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In CVPR, pages 2921-2929, 2016. 3
\ No newline at end of file
diff --git a/advisablelearningforselfdrivingvehiclesbyinternalizingobservationtoactionrules/images.zip b/advisablelearningforselfdrivingvehiclesbyinternalizingobservationtoactionrules/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..12b134104826d620b9dd4af6ec3e46ef1628d2e0
--- /dev/null
+++ b/advisablelearningforselfdrivingvehiclesbyinternalizingobservationtoactionrules/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:503d57975cdd1f42545479c1ebe50d57b91ce418c033f0054d6749921b797df8
+size 544947
diff --git a/advisablelearningforselfdrivingvehiclesbyinternalizingobservationtoactionrules/layout.json b/advisablelearningforselfdrivingvehiclesbyinternalizingobservationtoactionrules/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..0db16073766f1857266808f7fe70d6b52e54aca7
--- /dev/null
+++ b/advisablelearningforselfdrivingvehiclesbyinternalizingobservationtoactionrules/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f5d3a0964f4194ee427a8ee4e31c0711a345788341185b23a8ec53d17419b2d0
+size 318062
diff --git a/affinitygraphsupervisionforvisualrecognition/25d2008b-969b-4c33-a1f5-f54dda0bcded_content_list.json b/affinitygraphsupervisionforvisualrecognition/25d2008b-969b-4c33-a1f5-f54dda0bcded_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..3909fb05bc2a2a208570d41c8c09c5839e823c61
--- /dev/null
+++ b/affinitygraphsupervisionforvisualrecognition/25d2008b-969b-4c33-a1f5-f54dda0bcded_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d5b46928e5d3bf2e93f8f7dfba3d2b78842eeecf793022b90f6f4b0eff838ebc
+size 72553
diff --git a/affinitygraphsupervisionforvisualrecognition/25d2008b-969b-4c33-a1f5-f54dda0bcded_model.json b/affinitygraphsupervisionforvisualrecognition/25d2008b-969b-4c33-a1f5-f54dda0bcded_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..5bd3bdb19fcd81a89a080690f54115d7f444431b
--- /dev/null
+++ b/affinitygraphsupervisionforvisualrecognition/25d2008b-969b-4c33-a1f5-f54dda0bcded_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5c0497e769116847d789f85b388c81898e5afdf408a856bfd4f111a631a14922
+size 86568
diff --git a/affinitygraphsupervisionforvisualrecognition/25d2008b-969b-4c33-a1f5-f54dda0bcded_origin.pdf b/affinitygraphsupervisionforvisualrecognition/25d2008b-969b-4c33-a1f5-f54dda0bcded_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a22e213121cab646bcd025200843151df1398d7a
--- /dev/null
+++ b/affinitygraphsupervisionforvisualrecognition/25d2008b-969b-4c33-a1f5-f54dda0bcded_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:413f03aedb8254bce6fb43df591809801a2cf652dcfb7b75d61bf2c2cd0624da
+size 1126892
diff --git a/affinitygraphsupervisionforvisualrecognition/full.md b/affinitygraphsupervisionforvisualrecognition/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..426978dac7322d0e3e01aa7161f35092cab93d0e
--- /dev/null
+++ b/affinitygraphsupervisionforvisualrecognition/full.md
@@ -0,0 +1,329 @@
+# Affinity Graph Supervision for Visual Recognition
+
+Chu Wang$^{1}$, Babak Samari$^{1}$, Vladimir G. Kim$^{2}$, Siddhartha Chaudhuri$^{2,3}$, Kaleem Siddiqi$^{1*}$
+
+$^{1}$McGill University $^{2}$Adobe Research $^{3}$IIT Bombay
+
+{chuwang,babak,siddiqi}@cim.mcgill.ca {vokim,sidch}@adobe.com
+
+# Abstract
+
+Affinity graphs are widely used in deep architectures, including graph convolutional neural networks and attention networks. Thus far, the literature has focused on abstracting features from such graphs, while the learning of the affinities themselves has been overlooked. Here we propose a principled method to directly supervise the learning of weights in affinity graphs, to exploit meaningful connections between entities in the data source. Applied to a visual attention network [9], our affinity supervision improves relationship recovery between objects, even without the use of manually annotated relationship labels. We further show that affinity learning between objects boosts scene categorization performance and that the supervision of affinity can also be applied to graphs built from mini-batches, for neural network training. In an image classification task we demonstrate consistent improvement over the baseline, with diverse network architectures and datasets.
+
+# 1. Introduction
+
+Recent advances in graph representation learning have led to principled approaches for abstracting features from such structures. In the context of deep learning, graph convolutional neural networks (GCNs) have shown great promise [3, 15]. The affinity graphs in GCNs, whose nodes represent entities in the data source and whose edges represent pairwise affinity, are usually constructed from a predefined metric space and are therefore fixed during the training process [3, 15, 22, 29]. In related work, self-attention mechanisms [26] and graph attention networks [27] have been proposed. Here, using pairwise weights between entities, a fully connected affinity graph is used for feature aggregation. In contrast to the graphs in GCNs, the parametrized edge weights change during the training of the graph attention module. More recent approaches also consider elaborate edge weight parametrization strategies [13, 18] to further improve the flexibility of graph structure learning.
+
+However, the learning of edge (attention) weights in the graph is entirely supervised by a main objective loss, to improve performance in a downstream task.
+
+Whereas representation learning from affinity graphs has demonstrated great success in various applications [9, 34, 30, 11, 10], little work has been done thus far to directly supervise the learning of affinity weights. In the present article, we propose to explicitly supervise the learning of the affinity graph weights by introducing a notion of target affinity mass, which is a collection of affinity weights that need to be emphasized. We further propose to optimize a novel loss function to increase the target affinity mass during the training of a neural network, to benefit various visual recognition tasks. Our affinity supervision method is generalizable and supports the flexible design of supervision targets according to the needs of particular tasks. This feature is absent in related work, where the learning of affinity graphs is either constrained by distance metrics [13] or is dependent on the main objective loss [26, 27, 18].
+
+With the proposed supervision of the learning of affinity weights, a visual attention network [9] is able to compete in a relationship proposal task with the present state-of-the-art [33] without any explicit use of relationship labels. Enabling relationship labels provides an additional $25\%$ boost over [33] in relative terms. This improved relationship recovery is particularly beneficial when applied to a scene categorization task, since scenes are comprised of collections of distinct objects. We also explore the general idea of affinity supervised mini-batch training of a neural network, which is common to a vast number of computer vision and other applications. For image classification tasks we demonstrate a consistent improvement over the baseline, across multiple architectures and datasets. Our proposed affinity supervision method leads to no computational overhead, since we do not introduce additional parameters.
+
+# 2. Related Work
+
+# 2.1. Graph Convolutional Neural Networks
+
+In GCNs, layer-wise convolutional operations are applied to abstract features in graph structures.
+
+
+
+
+
+
+Figure 1: A comparison of recovered relationships on test images, with no relationship annotations used during training. We show the reference object (blue box), regions with which it learns relationships (orange boxes) and the relationship weights in red text (zoom in on the PDF). Left: baseline visual attention networks [9] often recover relationships between a reference object and its immediate surrounding context. Right: our proposed affinity supervision better emphasizes potential relationships between distinct and spatially separated objects.
+
+
+
+Current approaches build the affinity graph from a predefined input [3, 22, 29] or embedding space [7, 26], following which features are learned using graph based filtering in either the spatial or spectral domain. Little work has been carried out so far to directly learn the structure of the affinity graph itself. In this article, we propose a generic method for supervising the learning of pairwise affinities in such a graph, without the need for additional ground truth annotations.
+
+# 2.2. Visual Attention Networks
+
+Attention mechanisms, first proposed in [26], have been successfully applied to a diverse range of computer vision tasks [9, 34, 30]. In the context of object detection [9], the attention module uses learned pairwise attention weights between region proposals, followed by per region feature aggregation, to boost object detection. The learned attention weights do not necessarily reflect relations between entities in a typical scene. In fact, for a given reference object (region), Relation Networks [9] tend to predict high attention weights with scaled or shifted bounding boxes surrounding the same object instance (Figure 1).
+
+A present limitation of visual attention networks is their minimization of only the main objective loss during training [9, 34, 30], without any direct supervision of attention between entities. Whereas attention based feature aggregation has been shown to boost performance for general vision tasks [11, 10], the examples in Figure 1 provide evidence that relationships between distinct entities may not be sufficiently captured. In this paper we address this limitation by directly supervising the learning of attention. An affinity graph is first built from the pair-wise attention weights, and a novel target affinity mass loss is then applied to guide the learning of attention between distinct objects, allowing the recovery of more plausible relationships.
+
+# 2.3. Mini-batch Training
+
+The training of a neural network often requires working with mini-batches of data, because typical datasets are too large for present architectures to handle. The optimization of mini-batch training is thus a research topic in its own right. Much work has focused on improving the learning strategies, going beyond stochastic gradient descent (SGD), including [23, 5, 1, 14]. In addition, batch normalization [12] has been shown to improve the speed, performance, and stability of mini-batch training, via the normalization of each neuron's output to form a unified Gaussian distribution across the mini-batch.
+
+In the present article we show that our affinity supervision on a graph built from mini-batch features can benefit the training of a neural network. By increasing the affinity (similarity) between mini-batch entries that belong to the same category, performance in image classification on a diverse set of benchmarks is consistently improved. We shall discuss mini-batch affinity learning in more detail in Section 5.
+
+# 3. Affinity Graph Supervision
+
+We now introduce our approach to supervising the weights in an affinity graph. Later we shall cover two applications: affinity supervision on visual attention networks (built on top of Relation Networks [9]) in Section 4 and affinity supervision on a batch similarity graph in Section 5.
+
+# 3.1. Affinity Graph
+
+We assume that there are $N$ entities generated by a feature embedding framework, for example, a region proposal network (RPN) together with ROI pooling on a single image [25], or a regular CNN applied over a batch of images. Let $\mathbf{f}^i$ be the embedding feature for the $i$-th entity. We define an affinity function $\mathcal{A}$ which computes an affinity weight between a pair of entities $m$ and $n$, as
+
+$$
+\omega^{mn} = \mathcal{A}\left(\mathbf{f}^{m}, \mathbf{f}^{n}\right). \tag{1}
+$$
+
+A specific form of this affinity function, applied in attention networks [9, 26], is reviewed in Section 4, and another simple form, applied in batch training, is defined in Section 5.
+
+We now build an affinity graph $G$ whose vertices $m$ represent entities in the data source with features $\mathbf{F}_{in} = \{\mathbf{f}^m\}$ and whose edge weights $\{\omega^{mn}\}$ represent pairwise affinities between the vertices. We define the graph adjacency matrix for this affinity graph as the $N \times N$ matrix $\mathcal{W}$ with entries $\{\omega^{mn}\}$ . We propose to supervise the learning of $\mathcal{W}$ so that those matrix entries $\omega^{mn}$ selected by a customized supervision target matrix $\mathcal{T}$ will increase, thus gaining emphasis over the other entries.
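+
+As a simple illustration, the sketch below builds $\mathcal{W}$ from a stack of embedding features, using a plain dot product as one possible choice of the affinity function $\mathcal{A}$; the function names are ours, not the paper's.
+
+```python
+import torch
+
+def affinity_matrix(F_in, affinity_fn):
+    """Build the N x N adjacency matrix W with entries w^{mn} = A(f^m, f^n).
+
+    F_in:        (N, d) embedding features, one row per entity.
+    affinity_fn: callable taking two (N, d) tensors and returning (N, N) weights.
+    """
+    return affinity_fn(F_in, F_in)
+
+# A simple dot-product affinity as one possible choice of A.
+def dot_affinity(Fa, Fb):
+    return Fa @ Fb.t()
+
+W = affinity_matrix(torch.randn(8, 256), dot_affinity)   # (8, 8)
+```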
+
+# 3.2. Affinity Target $\mathcal{T}$
+
+We now explain the role of a supervision target matrix $\mathcal{T}$ for affinity graph learning. In general, $\mathcal{T} \in \mathbf{R}^{N \times N}$ with
+
+$$
+\mathcal{T}[i, j] = \begin{cases} 1 & \text{if } (i, j) \in \mathcal{S} \\ 0 & \text{otherwise,} \end{cases} \tag{2}
+$$
+
+where $\mathcal{S}$ stands for a set of possible connections between entities in the data source.
+
+Target Affinity Mass. We would like $\mathcal{W}$ to have higher weights at those entries where $\mathcal{T}[i,j] = 1$, to place emphasis on the entries that are selected by the supervision target. We capture this via a notion of the target affinity mass $\mathcal{M}$ of the affinity graph, defined as
+
+$$
+\mathcal{M} = \sum \tilde{\mathcal{W}} \odot \mathcal{T}, \tag{3}
+$$
+
+where $\tilde{\mathcal{W}} = \text{softmax}(\mathcal{W})$ is a matrix-wise softmax. A study on affinity mass design is in our arXiv version [28].
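+
+A minimal sketch of this computation follows, taking the matrix-wise softmax over all $N^2$ entries of $\mathcal{W}$ as described above; the toy inputs are illustrative.
+
+```python
+import torch
+
+def target_affinity_mass(W, T):
+    """Eq. (3): M = sum(softmax(W) * T), where the softmax is taken over all
+    N^2 entries of W (matrix-wise) and T is the binary supervision target."""
+    W_tilde = torch.softmax(W.flatten(), dim=0).view_as(W)
+    return (W_tilde * T).sum()
+
+W = torch.randn(8, 8)
+T = (torch.rand(8, 8) > 0.8).float()
+M = target_affinity_mass(W, T)        # scalar in [0, 1]
+```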
+
+# 3.3. Affinity Mass Loss $\mathcal{L}_G$
+
+We propose to optimize the learning of the parameters $\theta$ of a neural network to achieve
+
+$$
+\max_{\theta} \mathcal{M}. \tag{4}
+$$
+
+Our aim is to devise a strategy to maximize $\mathcal{M}$ with an empirically determined choice of loss form. There are several loss forms that could be considered, including a smooth $L_1$ loss, an $L_2$ loss, and a focal loss variant. Defining $x = 1 - \mathcal{M} \in [0,1]$, we define the losses
+
+$$
+L_{2}(x) = x^{2} \tag{5}
+$$
+
+and
+
+$$
+\operatorname{smooth}_{L_1}(x) = \begin{cases} x^{2} & \text{if } |x| < 0.5 \\ |x| - 0.25 & \text{otherwise.} \end{cases} \tag{6}
+$$
+
+The focal loss on $\mathcal{M}$ is a negative log likelihood loss, weighted by the focal normalization term proposed in [19], which is defined as
+
+$$
+\mathcal{L}_{G} = L_{\text{focal}}(\mathcal{M}) = -(1 - \mathcal{M})^{\gamma} \log(\mathcal{M}). \tag{7}
+$$
+
+The focal term $(1 - \mathcal{M})^{\gamma}$ [19] helps narrow the gap between well converged affinity masses and those that are far from convergence.
+
+Empirically, we have found that the focal loss variant gives the best results in practice, as described in the ablation study reported in Section 6.4. The choice of the $\gamma$ term depends on the particular task, so we provide experiments to justify our choices in Section 6.4.
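+
+The sketch below collects the three loss variants of Equations 5-7 in one helper; the default $\gamma$ value is only a placeholder, since the paper tunes it per task.
+
+```python
+import torch
+
+def affinity_mass_loss(M, form="focal", gamma=2.0):
+    """Loss variants from Eqs. (5)-(7) on x = 1 - M. The gamma value here is
+    only a placeholder; the paper tunes it per task."""
+    x = 1.0 - M
+    if form == "l2":
+        return x ** 2
+    if form == "smooth_l1":
+        return torch.where(x.abs() < 0.5, x ** 2, x.abs() - 0.25)
+    # Focal variant: down-weights well-converged affinity masses (M close to 1).
+    return -((1.0 - M) ** gamma) * torch.log(M.clamp_min(1e-8))
+
+loss = affinity_mass_loss(torch.tensor(0.3))
+```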
+
+# 3.4. Optimization and Convergence of $\mathcal{L}_G$
+
+The minimization of the affinity mass loss $\mathcal{L}_G$ places greater emphasis on entries in $\mathcal{W}$ which correspond to ground truth connections in $\mathcal{S}$, through network training. However, when optimized in conjunction with a main objective loss, which could be an object detection loss $\mathcal{L}_{main} = \mathcal{L}_{det} + \mathcal{L}_{rpn}$ in visual attention networks or a cross entropy loss $\mathcal{L}_{main} = \mathcal{L}_{class}$ in mini-batch training, a balance between $\mathcal{L}_{main}$ and $\mathcal{L}_G$ is required. The total loss can be written as
+
+$$
+\mathcal{L} = \mathcal{L}_{\text{main}} + \lambda \mathcal{L}_{G}. \tag{8}
+$$
+
+Empirically, we choose $\lambda = 0.01$ for visual attention networks and $\lambda = 0.1$ for mini-batch training. Figure 5 demonstrates the convergence of the target mass, justifying the effectiveness of using the loss $\mathcal{L}_G$ in the optimization of Equation 4.
+
+# 4. Affinity in Attention Networks
+
+We review the computation of attention weights in [26], given a pair of nodes from the affinity graph defined in Section 3.1. Let entity node $m$ have the feature embedding $\mathbf{f}^m$. The collection of input features of all the nodes is then $\mathbf{F}_{in} = \{\mathbf{f}^m\}$. Consider node $m$ as a reference object, with the attention weight $\tilde{\omega}^{mn}$ indicating its affinity to a surrounding entity node $n$. This affinity is computed as a softmax activation over the scaled dot products $\omega^{mn}$, defined as:
+
+$$
+\tilde{\omega}^{mn} = \frac{\exp\left(\omega^{mn}\right)}{\sum_{k} \exp\left(\omega^{mk}\right)}, \quad \omega^{mn} = \frac{\langle W_{K} \mathbf{f}^{m}, W_{Q} \mathbf{f}^{n} \rangle}{\sqrt{d_{k}}}. \tag{9}
+$$
+
+Both $W_{K}$ and $W_{Q}$ are projection matrices; the linear transformations they define map the embedding features $\mathbf{f}^{m}$ and $\mathbf{f}^{n}$ into metric spaces, where how well they match can be measured. The feature dimension after projection is $d_{k}$. With the above formulation, the attention graph affinity matrix is defined as $\mathcal{W} = \{\omega^{mn}\}$. For a given reference entity node $m$, the attention module also outputs a weighted aggregation of the features of $m$'s neighbouring nodes, which is
+
+$$
+\mathbf{f}_{\text{out}}^{m} = \sum_{n} \tilde{\omega}^{mn} \mathbf{f}^{n}. \tag{10}
+$$
+
+The set of feature outputs for all nodes is thus defined as $\mathbf{F}_{out} = \{\mathbf{f}_{out}^{m}\}$ . Additional details are provided in [26, 9].
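+
+As an illustration, the affinity computation of Equations 9 and 10 can be sketched as follows (our own code; the names, shapes and row-wise softmax layout are assumptions consistent with the definitions above):
+
+```python
+# Sketch: scaled dot-product affinities and the aggregated output features.
+import torch
+
+def attention_affinity(F_in: torch.Tensor, W_K: torch.Tensor, W_Q: torch.Tensor):
+    """F_in: (N, d) node features; W_K, W_Q: (d_k, d) projection matrices."""
+    d_k = W_K.shape[0]
+    K = F_in @ W_K.t()                  # (N, d_k): W_K f^m for every node m
+    Q = F_in @ W_Q.t()                  # (N, d_k): W_Q f^n for every node n
+    W = (K @ Q.t()) / d_k ** 0.5        # raw affinities omega^{mn}, shape (N, N)
+    W_tilde = torch.softmax(W, dim=1)   # softmax over n for each reference node m
+    F_out = W_tilde @ F_in              # Equation 10: weighted aggregation
+    return W, F_out
+```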
+
+
+Figure 2: An overview of our affinity graph supervision in visual attention networks, in application to two tasks. The blue dashed box surrounds the visual attention network backbone, implemented according to Relation Networks [9]. The purple dashed box highlights our core component for affinity learning and for relation proposal generation. The green dashed box surrounds the branch for scene categorization. An example affinity target is visualized in the bottom left corner, with solid circles representing ground truth objects colored by their class. The dashed lines between pairs of solid circles give rise to a value of 1 for the corresponding entry in matrix $\mathcal{T}$ . See the text in Section 4.1 for a detailed description. A detailed illustration of the attention module is in the supplementary material of our arXiv version [28].
+
+# 4.1. Affinity Target Design
+
+For visual attention networks, we want our attention weights to focus on relationships between objects from different categories, so for each entry $\mathcal{T}[a,b]$ of the supervision target matrix $\mathcal{T}$ , we assign $\mathcal{T}[a,b] = 1$ only when:
+
+1. proposal $a$ overlaps with ground truth object $\alpha$ 's bounding box with intersection over union $>0.5$ .
+2. proposal $b$ overlaps with ground truth object $\beta$ 's bounding box with intersection over union $>0.5$ .
+3. ground truth objects $\alpha$ and $\beta$ are two different objects coming from different classes.
+
+Note that NO relation annotation is required to construct such a supervision target; a sketch of this construction is given below.
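+
+A minimal sketch of the target construction (our own helper, assuming a pairwise IoU utility `iou_fn` is available) is:
+
+```python
+# Sketch: binary affinity target T for detection, following the three rules above.
+import torch
+
+def build_detection_target(proposals, gt_boxes, gt_labels, iou_fn, thresh=0.5):
+    """proposals: (N, 4); gt_boxes: (M, 4); gt_labels: (M,) class ids."""
+    ious = iou_fn(proposals, gt_boxes)        # (N, M) pairwise IoU
+    best_iou, best_gt = ious.max(dim=1)       # best-matching ground truth per proposal
+    matched = best_iou > thresh               # rules 1 and 2
+    cls = gt_labels[best_gt]                  # class of the matched ground truth
+    N = proposals.shape[0]
+    T = torch.zeros(N, N)
+    for a in range(N):
+        for b in range(N):
+            # rule 3: different ground truth objects from different classes
+            if matched[a] and matched[b] and best_gt[a] != best_gt[b] and cls[a] != cls[b]:
+                T[a, b] = 1.0
+    return T
+```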
+
+We choose to emphasize relationships between exemplars from different categories in the target matrix, because this can provide additional contextual features in the attention aggregation (Equation 10) for certain tasks. Emphasizing relationships between objects within the same category might be better suited to modeling co-occurrence. We provide a visualization of the affinity target and additional studies in the supplementary material of our arXiv version [28]. We now discuss applications that could benefit from affinity supervision of the attention weights: object detection, relationship proposal generation, and scene categorization.
+
+# 4.2. Object Detection and Relationship Proposals
+
+In Figure 2 (part A to part B) we demonstrate the use of attention networks for object detection and relationship proposal generation. Here part A is identical to Relation Networks [9]. The network is end-to-end trainable with the detection loss, the RPN loss and the target affinity mass loss. In addition to the ROI pooling features $\mathbf{F}_{in} \in \mathcal{R}^{N_{obj} \times 1024}$ from the Faster R-CNN backbone of [25], contextual features $\mathbf{F}_{out}$ from attention aggregation are applied to boost detection performance. The final feature descriptor for the detection head is $\mathbf{F}_{in} + \mathbf{F}_{out}$, following [9]. In parallel, the attention matrix output $\mathcal{W} \in \mathcal{R}^{N_{obj} \times N_{obj}}$ is used to generate relationship proposals by finding the top $K$ weighted pairs in the matrix.
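+
+The relationship proposal step amounts to reading off the largest entries of the attention matrix; a sketch (our own code, with the handling of self-pairs as an assumption) is:
+
+```python
+# Sketch: extracting the top-K weighted pairs from the affinity matrix W.
+import torch
+
+def top_k_relations(W: torch.Tensor, k: int):
+    """W: (N_obj, N_obj) attention affinities; returns k (subject, object) index pairs."""
+    scores = W.clone()
+    scores.fill_diagonal_(-float("inf"))           # ignore self-pairs
+    flat_idx = scores.flatten().topk(k).indices    # indices of the k largest entries
+    n = W.shape[0]
+    return [(int(i) // n, int(i) % n) for i in flat_idx]
+```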
+
+# 4.3. Scene Categorization
+
+In Figure 2 (part A to part C) we demonstrate an application of visual attention networks to scene categorization. Since there are no bounding box annotations in most scene recognition datasets, we adopt a visual attention network (described in the previous section), pretrained on the MSCOCO dataset, in conjunction with a new scene recognition branch (part C in Figure 2), to perform scene recognition. From the CNN backbone, we apply an additional $1 \times 1$ convolution layer, followed by a global average pooling to acquire the scene level feature descriptor $\mathbf{F}_s$ . The attention module takes as input the object proposals' visual features $\mathbf{F}_{in}$ , and outputs the aggregation result as the scene contextual feature $\mathbf{F}_c$ . The input to the scene classification head thus becomes $\mathbf{F}_{meta} = \text{concat}(\mathbf{F}_s, \mathbf{F}_c)$ , and the class scores are output. In order to maintain the learned relationship weights from the pre-trained visual attention network, which helps encode object relation context in the aggregation result $\mathbf{F}_{out}$ , we fix the parameters in part A (blue box), but make all other layers in part C trainable.
+
+# 5. Affinity in Mini-Batch Training
+
+Moving beyond the specific problems of object detection, relationship proposal generation and scene categorization, we now turn to a more general application of affinity supervision, that of mini-batch training in neural networks. Owing to the large size of most databases and limitations in memory, virtually all deep learning models are trained using mini-batches. We shall demonstrate that emphasizing pairwise affinities between entities during training can boost performance for a variety of image classification tasks.
+
+# 5.1. Affinity Graph
+
+We consider image classification over a batch of $N$ images, processed by a convolutional neural network (CNN) to generate feature representations. Using the notation in Section 3, we denote the feature vectors of this batch of images as $\mathbf{F}_{in} = \{\mathbf{f}^i\}$, where $i \in 1\dots N$ is the image index in the batch. We then build a batch affinity graph $G$ whose nodes represent images, and whose edges $\omega^{mn} \in \mathcal{W}$ encode the pairwise feature similarity between nodes $m$ and $n$.
+
+Distance Metric. A straightforward $L2$ distance based measure can be applied to compute the edge weights as
+
+$$
+\omega^{mn} = \mathcal{A}(\mathbf{f}^{m}, \mathbf{f}^{n}) = -\frac{\|\mathbf{f}^{m} - \mathbf{f}^{n}\|_{2}^{2}}{2}. \tag{11}
+$$
+
+
+Figure 3: An overview of our affinity graph supervision in mini-batch training of a standard convolutional neural network. Blue box: CNN backbone for image classification. Purple box: Affinity supervision module for minibatch training. The colored tiles represent entries of the affinity matrix $\tilde{\mathcal{W}}$ and target $\mathcal{T}$ , where a darker color denotes a larger numerical value. Minimization of the affinity mass loss aims to increase the value of the purple squares representing entries in mass $\mathcal{M}$ (see equation 3).
+
+# 5.2. Affinity Target Design
+
+In the mini-batch training setting, we would like feature representations from the same class to be closer to each other in a metric space, with those from different classes being spread apart. To this end, we build the affinity target matrix $\mathcal{T}$ as follows. For each entry $\mathcal{T}[a,b]$ in the matrix, we assign $\mathcal{T}[a,b] = 1$ only when mini-batch nodes $a$ and $b$ belong to the same category. Thus, the affinity target here selects those entries in $\mathcal{W}$ which represent pairwise similarity between images from the same class. During the optimization of the affinity mass loss (defined in Section 3.3), the network will increase the affinity values of the entries in $\mathcal{W}$ selected by $\mathcal{T}$, while suppressing the other ones. This should, in principle, lead to improved representation learning and thus benefit the underlying classification task.
+
+# 5.3. Overview of Approach
+
+A schematic overview of our mini-batch affinity learning approach is presented in Figure 3. Given a batch of $N$ images, we first generate the feature representations $\mathbf{F}_{in}$ from a CNN followed by fully connected layers. We then send $\mathbf{F}_{in}$ to an affinity graph module, which contains a pairwise distance metric computation followed by a matrix-wise softmax activation, to acquire the affinity graph matrix $\tilde{\mathcal{W}}$. Next, we build the affinity target matrix $\mathcal{T}$ from the image category labels following Section 5.2. An element-wise multiplication with $\tilde{\mathcal{W}}$ is used to acquire the target affinity mass $\mathcal{M}$, which is used in computing the affinity mass loss. During training, the network is optimized by both the cross entropy loss $\mathcal{L}_{class}$ and the target affinity loss $\mathcal{L}_G$, using the balancing scheme discussed in Section 3.4.
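+
+Putting these pieces together, one training step could be sketched as follows (illustrative code under our own naming; `backbone` and `classifier` are placeholders, self-pairs are included in the target for simplicity, and `affinity_mass` and `focal_affinity_loss` are the helpers sketched in Section 3):
+
+```python
+# Sketch: one mini-batch affinity supervision step (Figure 3).
+import torch
+import torch.nn.functional as F
+
+def minibatch_step(images, labels, backbone, classifier, lam=0.1, gamma=4.0):
+    feats = backbone(images)                            # (N, d) batch features F_in
+    # Edge weights: negative squared L2 distance (Equation 11).
+    W = -0.5 * torch.cdist(feats, feats, p=2) ** 2      # (N, N)
+    # Target: 1 where two batch items share the same class label (Section 5.2).
+    T = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
+    M = affinity_mass(W, T)                              # Equation 3
+    loss = F.cross_entropy(classifier(feats), labels) + lam * focal_affinity_loss(M, gamma)
+    return loss
+```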
+
+# 6. Experiments
+
+# 6.1. Datasets
+
+VOC07: which is part of the PASCAL VOC detection dataset [6], with 5k images in the trainval set and 5k in the test set. We used this trainval/test split for model ablation purposes.
+
+MSCOCO: which consists of 80 object categories [20]. We used 30k validation images for training. 5k "minival" images are used for testing, as is common practice [9].
+
+Visual Genome: which is a large relationship understanding benchmark [16], consisting of 150 object categories and human annotated relationship labels between objects. We used 70k images for training and 30k for testing, as in the scene graph literature [32, 31].
+
+MIT67: which is a scene categorization benchmark with 67 scene categories, with 80 training images and 20 test images in each category [24]. We used this official split.
+
+CIFAR10/100: which are popular benchmark datasets containing 32 by 32 tiny images from 10 or 100 categories [17]. We used the official train/test split and randomly sampled $10\%$ of the train set to form a validation set.
+
+Tiny Imagenet: which is a simplified version of the ILSVRC 2012 image classification challenge [4] containing 200 classes [2] with 500 training images and 50 validation images in each class. We used the official validation set as the test set since the official test set is not publicly available. For validation, we randomly sample $10\%$ of the training set.
+
+# 6.2. Network Training Details
+
+Visual Attention Networks. We first train visual attention networks [9] end-to-end, using the detection loss, the RPN loss and the affinity mass loss (Figure 2 parts A and B). The loss scale for the affinity loss is chosen to be 0.01 as discussed in Section 3.4. Upon convergence, the network can be directly applied to the object detection and relationship proposal tasks. For scene categorization, we first acquire a visual attention network that is pretrained on the COCO dataset, and then use the structural modification in Section 6.6 (Figure 2 parts A and C) to fine tune it on the MIT67 dataset. Unless stated otherwise, all visual attention networks are based on a ResNet101 [8] architecture, trained with a batch size of 2 images, using a learning rate of $5e - 4$ which is decreased to $5e - 5$ after 5 epochs. There are 8 epochs in total for each training session. We apply the stochastic gradient descent (SGD) optimizer with momentum, setting the momentum to 0.9. We evaluate the model at the end of 8 epochs on the test set to report our results.
+
+Mini-batch Affinity Supervision. We applied various architectures including ResNet-20/56/110 for CIFAR and ResNet-18/50/101 for tiny ImageNet, as described in [8]. The CIFAR networks are trained for 200 epochs with a batch size of 128. We set the initial learning rate to 0.1 and reduce it by a factor of 10 at epochs 100 and 150, respectively. The tiny ImageNet networks are trained for 90 epochs with a batch size of 128, an initial learning rate of 0.1, and a factor of 10 reduction at epochs 30 and 60. For all experiments in mini-batch affinity supervision, the SGD optimizer with momentum is applied, with the weight decay and momentum set to $5e - 4$ and 0.9. For data augmentation during training, we apply random horizontal flipping. During training we save the best performing model on the validation set, and report its test set performance.
+
+# 6.3. Tasks and Metrics
+
+We evaluate affinity graph supervision on the following tasks, using the associated performance metrics.
+
+Relationship Proposal Generation. We evaluate the learned relationships on the Visual Genome dataset, using a recall metric which measures the percentage of ground truth relations covered in the predicted top $K$ relationship list, consistent with [33, 32, 31].
+
+Classification. For the MIT67, CIFAR10/100 and Tiny ImageNet evaluation, we use classification accuracy.
+
+Object Detection. For completeness we also evaluate object detection on VOC07, using mAP (mean average precision) as the evaluation metric [6, 20]. Additional detection results on MSCOCO are in the supplementary material.
+
+# 6.4. Ablation Study on Loss Functions
+
+We first carry out ablation studies to examine different loss functions for optimizing the target affinity mass $\mathcal{M}$, as well as varying the focal term $\gamma$, as introduced in Section 3.3. The results in Table 1 show that the focal loss is in general better than the smooth L1 and L2 losses when supervising the target mass. In our experiments on visual attention networks, we therefore apply the focal loss with $\gamma = 2$, which empirically gives the best performance in terms of recovering relationships while still maintaining good performance on the detection task. The results in Table 1 serve solely to determine the best loss configuration; here we do not claim an improvement on detection tasks.
+
+| VOC07 Ablation | F-RCNN [25] | RelNet [9] | smooth L1 | L2 | γ = 0 | γ = 2 | γ = 5 |
+|---|---|---|---|---|---|---|---|
+| mAP@all (%) | 47.0 | 47.7 ± 0.1 | 48.0 ± 0.1 | 47.7 ± 0.2 | 47.9 ± 0.2 | 48.2 ± 0.1 | 48.6 ± 0.1 |
+| mAP@0.5 (%) | 78.2 | 79.3 ± 0.2 | 79.6 ± 0.2 | 79.7 ± 0.2 | 79.4 ± 0.1 | 79.9 ± 0.2 | 80.0 ± 0.2 |
+| recall@5k (%) | - | 43.5 | 60.3 ± 0.3 | 64.6 ± 0.5 | 62.1 ± 0.3 | 69.9 ± 0.3 | 66.8 ± 0.2 |
+
+Table 1: An ablation study on loss functions, comparing against the baseline Faster R-CNN [25] and Relation Networks [9] on the VOC07 database. The results are reported as percentages (%) averaged over 3 runs. The relationship recall metric is also reported, with ground truth relation labels constructed as described in Section 4.1 using only object class labels.
+
+| MIT67 | CNN | CNN | CNN + ROIs | CNN + Attn | CNN + Attn + L_G |
+|---|---|---|---|---|---|
+| Pretraining | Imgnet | Imgnet+COCO | Imgnet+COCO | Imgnet+COCO | Imgnet+COCO |
+| Features | F_S | F_S | F_S, max(F_in) | F_S, F_C | F_S, F_C |
+| Accuracy (%) | 75.1 | 76.8 | 78.0 ± 0.3 | 77.1 ± 0.2 | 80.2 ± 0.3 |
+
+Table 2: MIT67 Scene Categorization Results, averaged over 3 runs. A visual attention network with affinity supervision gives the best result (the boldfaced entry), with an improvement over a non-affinity supervised version (4-th column) and the baseline methods (columns 1 to 3). See the text in Section 6.6 for details. $F_{s}$ , $F_{c}$ and $F_{in}$ are described in Section 4.3.
+
+The results of additional tests using ablated models are in the arXiv version of this article [28].
+
+# 6.5. Relationship Proposal Task
+
+Figure 4 compares the relationships recovered on the Visual Genome dataset by a visual attention network "baseline" model (similar to [9]), by our affinity supervised network with affinity targets built using only object class labels, "aff-sup-object" (see Section 4.1), and by an affinity target built from human annotated ground truth relation labels, "aff-sup-rel". We also include the reported recall metric from Relationship Proposal Networks [33], a state-of-the-art one-stage relationship learning network with strong supervision, using ground truth relationship annotations. Our affinity mass loss does not require potentially costly human annotated relationship labels for learning (only object class labels were used), and yet it matches the present state of the art [33] (the blue curve in Figure 4) in performance. When supervised with a target built from the ground truth relation labels instead of the object labels, we outperform Relationship Proposal Networks (by $25\%$ in relative terms for all $K$ thresholds) under this recall metric (the red curve).
+
+# 6.6. Scene Categorization Task
+
+For scene categorization we adopt the base visual attention network (Figure 2, part A), and add an additional scene task branch (Figure 2, part C) to fine tune it on MIT67, as discussed in Section 4.3. Table 2 shows the results of applying this model to the MIT67 dataset. We refer to the baseline CNN as "CNN" (first column), which is an ImageNet pretrained ResNet101 model directly applied to an image classification task. In the second column, we first acquire a COCO pretrained visual attention network (Figure 2, part A), and fine tune it using only the scene level feature $F_{S}$ (Figure 2, part C). In the third column, for the same
+
+
+Figure 4: The percentage of the true relations that are in the top $K$ retrieved relations, with varying $K$ , in a relation proposal task. We compare a baseline network (black), Relation Proposal Networks [33] (blue), our affinity supervision using object class labels (but no explicit relations) (orange) and our affinity supervision with ground truth relation labels (red). We match the state of the art with no ground truth relation labels used (the overlapping blue and orange curves) and improve on it by a large margin (25% in relative terms) when ground truth relations are used.
+
+COCO pretrained visual attention network, we concatenate the object proposals' ROI pooling features with $F_{S}$ to serve as a meta scene-level descriptor. In the fourth and fifth columns, we apply the full scene architecture in Figure 2 part C, but with a visual attention network that is pretrained without and with the (supervised) target affinity loss, respectively. The affinity supervised case (fifth column) demonstrates a nontrivial improvement over the baselines (first to third columns) and also significantly outperforms the unsupervised case
+
+
+Figure 5: An ablation study on mini-batch affinity supervision, with the evaluation metric on a test set over epochs (horizontal axis), with the best result highlighted with a red dashed box. Left Plots: classification error rates and target mass with varying focal loss' $\gamma$ parameter. Right Plots: error rates and target mass with varying loss balancing factor $\lambda$ (defined in section 3.4).
+
+
+
+
+
+
+
+
+Figure 6: Left: t-SNE plot of learned feature representations for a baseline ResNet20 network on CIFAR10 dataset. Right: t-SNE plot for affinity supervised ResNet20 network.
+
+
+
+(fourth column). The attention weights learned solely by minimizing detection loss do not generalize well to a scene task, whereas those learned by affinity supervision can.
+
+# 6.7. Mini-Batch Affinity Supervision
+
+We conducted a model ablation study on the $\gamma$ and $\lambda$ parameters introduced in Section 3 (summarized in Figure 5) and subsequently chose $\gamma = 4$ and $\lambda = 0.1$ for our experiments, based on the associated error rates.
+
+Convergence of Target Mass. We plot results showing convergence of the target affinity mass during learning in Figure 5. There is a drastic improvement over the baseline target mass convergence, when affinity supervision is enabled. The chosen $\lambda = 0.1$ empirically provides acceptable convergence rates (right-most in Figure 5).
+
+Feature Separation Between Classes. A comparison of t-SNE [21] plots on learned feature representations from 1) baseline CNN and 2) a CNN supervised with affinity mass loss is presented in Figure 6. Note that the feature separation between different classes is better in our case.
+
+Results. We now summarize the results for mini-batch affinity learning on CIFAR10, CIFAR100 and TinyImageNet in Table 3. Overall, we observe a consistent improvement over the baseline, when using the affinity supervision in mini-batch training. For datasets with a large number of categories, such as CIFAR100 (100-classes) and tiny ImageNet (200-classes), the performance gain is above $1\%$ .
+
+| CIFAR-10 | ResNet 20 | ResNet 56 | ResNet 110 |
+|---|---|---|---|
+| base CNN | 91.34 ± 0.27 | 92.24 ± 0.48 | 92.64 ± 0.59 |
+| Affinity Sup | 92.03 ± 0.21 | 92.90 ± 0.35 | 93.42 ± 0.38 |
+| CIFAR-100 | ResNet 20 | ResNet 56 | ResNet 110 |
+| base CNN | 66.51 ± 0.46 | 68.36 ± 0.68 | 69.12 ± 0.63 |
+| Affinity Sup | 67.27 ± 0.31 | 69.79 ± 0.59 | 70.5 ± 0.60 |
+| TinyImagenet | ResNet 18 | ResNet 50 | ResNet 101 |
+| base CNN | 48.35 ± 0.27 | 49.86 ± 0.80 | 50.72 ± 0.82 |
+| Affinity Sup | 49.30 ± 0.21 | 51.04 ± 0.68 | 51.82 ± 0.71 |
+
+Table 3: Batch Affinity Supervision results. Numbers are classification accuracy in percentages. CIFAR results are reported over 10 runs and tiny ImageNet over 5 runs.
+
+Affinity supervision does not introduce any additional network layers or parameters beyond the construction of the $N \times N$ affinity matrix and its loss. Hence, training time with affinity supervision is very close to that of the baseline CNN.
+
+# 7. Conclusion
+
+We have addressed the overlooked problem of directly supervising the learning of affinity graph weights for deep models in computer vision. Our main methodological contribution is the introduction of a novel target affinity mass, and its optimization using an affinity mass loss, which leads to demonstrable improvements in relationship retrieval. We have also shown that the improved recovery of relationships between objects boosts scene categorization performance. Finally, we have explored a more general problem, which is the supervision of affinity in mini-batches. Here, in diverse visual recognition problems, we see improvements once again. Given that our affinity supervision approach introduces no additional parameters or layers in the neural network, it adds little computational overhead to the baseline architecture. Hence it shows promise for affinity based training in other computer vision applications as well.
+
+Acknowledgments We thank the Natural Sciences and Engineering Research Council of Canada (NSERC) and Adobe Research for research funding.
+
+# References
+
+[1] RMSprop optimizer. http://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf. Accessed: 2019-11-11. 2
+[2] Tiny ImageNet visual recognition challenge. https://tiny-imagenet.herokuapp.com/. Accessed: 2019-11-11. 6
+[3] Michael Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. NIPS, 2016. 1, 2
+[4] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. CVPR, 2009. 6
+[5] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 2011. 2
+[6] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The Pascal visual object classes (VOC) challenge. IJCV, 2010. 6
+[7] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. NIPS, pages 1025-1035, 2017. 2
+[8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CVPR, 2016. 6
+[9] Han Hu, Jiayuan Gu, Zheng Zhang, Jifeng Dai, and Yichen Wei. Relation networks for object detection. CVPR, 2018. 1, 2, 3, 4, 6, 7
+[10] Jie Hu, Li Shen, Samuel Albanie, Gang Sun, and Andrea Vedaldi. Gather-excite: Exploiting feature context in convolutional neural networks. NIPS, 2018. 1, 2
+[11] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. CVPR, 2018. 1, 2
+[12] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. 2
+[13] Bo Jiang, Ziyan Zhang, Doudou Lin, Jin Tang, and Bin Luo. Semi-supervised learning with graph learning-convolutional networks. CVPR, 2019. 1
+[14] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015. 2
+[15] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. ICLR, 2017. 1
+[16] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. IJCV, 2017. 6
+[17] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009. 6
+[18] Ruoyu Li, Sheng Wang, Feiyun Zhu, and Junzhou Huang. Adaptive graph convolutional neural networks. AAAI, 2018. 1
+[19] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. CVPR, 2017. 3
+[20] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. ECCV, 2014. 6
+[21] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 2008. 8
+[22] Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodolà, Jan Svoboda, and Michael M Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. CVPR, 2017. 1, 2
+[23] Ning Qian. On the momentum term in gradient descent learning algorithms. Neural networks, 1999. 2
+[24] Ariadna Quattoni and Antonio Torralba. Recognizing indoor scenes. CVPR, 2009. 6
+[25] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. NIPS, 2015. 2, 4, 7
+[26] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. NIPS, 2017. 1, 2, 3
+[27] Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. arXiv preprint arXiv:1710.10903, 2017. 1
+[28] Chu Wang, Babak Samari, Vladimir G. Kim, Siddhartha Chaudhuri, and Kaleem Siddiqi. Affinity graph supervision for visual recognition. arXiv preprint arXiv:2003.09049, 2020. https://arxiv.org/abs/2003.09049 (visited: 2020-03-27). 3, 4, 7
+[29] Chu Wang, Babak Samari, and Kaleem Siddiqi. Local spectral graph convolution for point set feature learning. ECCV, 2018. 1, 2
+[30] Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. CVPR, 2018. 1, 2
+[31] Danfei Xu, Yuke Zhu, Christopher Choy, and Li Fei-Fei. Scene graph generation by iterative message passing. CVPR, 2017. 6
+[32] Rowan Zellers, Mark Yatskar, Sam Thomson, and Yejin Choi. Neural motifs: Scene graph parsing with global context. CVPR, 2018. 6
+[33] Ji Zhang, Mohamed Elhoseiny, Scott Cohen, Walter Chang, and Ahmed Elgammal. Relationship proposal networks. CVPR, 2017. 1, 6, 7
+[34] Hengshuang Zhao, Yi Zhang, Shu Liu, Jianping Shi, Chen Change Loy, Dahua Lin, and Jiaya Jia. Psanet: Point-wise spatial attention network for scene parsing. ECCV, 2018. 1, 2
\ No newline at end of file
diff --git a/affinitygraphsupervisionforvisualrecognition/images.zip b/affinitygraphsupervisionforvisualrecognition/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..6b035ec2b3d43cdd7cdaf2d1c6d4cea342389e32
--- /dev/null
+++ b/affinitygraphsupervisionforvisualrecognition/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d1f8744049c74b3691a6e03854553cf938e508950ef90cd34520b81eb078c5cc
+size 498970
diff --git a/affinitygraphsupervisionforvisualrecognition/layout.json b/affinitygraphsupervisionforvisualrecognition/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..781d6f971380f2bb86ab8143e0014ec7ecc8fce2
--- /dev/null
+++ b/affinitygraphsupervisionforvisualrecognition/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0f93d035013fd390121d7d28b9764679cb63d3e4be2b6273a47571feb4aaa66e
+size 408940
diff --git a/agriculturevisionalargeaerialimagedatabaseforagriculturalpatternanalysis/6aea1fc8-6b7c-4688-b9ea-67db7a203d17_content_list.json b/agriculturevisionalargeaerialimagedatabaseforagriculturalpatternanalysis/6aea1fc8-6b7c-4688-b9ea-67db7a203d17_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..2c6002e96ed9eacb86ad4dd6518e97957b35d675
--- /dev/null
+++ b/agriculturevisionalargeaerialimagedatabaseforagriculturalpatternanalysis/6aea1fc8-6b7c-4688-b9ea-67db7a203d17_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aa207168889a4a48ddbadbaba2bd84cfe35aa2f9a765ae2a313f8d6767a0ac16
+size 86670
diff --git a/agriculturevisionalargeaerialimagedatabaseforagriculturalpatternanalysis/6aea1fc8-6b7c-4688-b9ea-67db7a203d17_model.json b/agriculturevisionalargeaerialimagedatabaseforagriculturalpatternanalysis/6aea1fc8-6b7c-4688-b9ea-67db7a203d17_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..24d154846228df05bb132554d0e224ead5f4b1c4
--- /dev/null
+++ b/agriculturevisionalargeaerialimagedatabaseforagriculturalpatternanalysis/6aea1fc8-6b7c-4688-b9ea-67db7a203d17_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:04a46bb62a17fa83c9fb6514185d78281cd0618e0bca72245824c3dbce42b6b4
+size 106002
diff --git a/agriculturevisionalargeaerialimagedatabaseforagriculturalpatternanalysis/6aea1fc8-6b7c-4688-b9ea-67db7a203d17_origin.pdf b/agriculturevisionalargeaerialimagedatabaseforagriculturalpatternanalysis/6aea1fc8-6b7c-4688-b9ea-67db7a203d17_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..72f6dfd777145c893908f19aa9bcd10bda28aff1
--- /dev/null
+++ b/agriculturevisionalargeaerialimagedatabaseforagriculturalpatternanalysis/6aea1fc8-6b7c-4688-b9ea-67db7a203d17_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:99559bd50d08c9c95295495ade4237146596d161225dbf29105977f888079a3d
+size 5096624
diff --git a/agriculturevisionalargeaerialimagedatabaseforagriculturalpatternanalysis/full.md b/agriculturevisionalargeaerialimagedatabaseforagriculturalpatternanalysis/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..49f106a9fd5e6ec462185484d2240993a738e04c
--- /dev/null
+++ b/agriculturevisionalargeaerialimagedatabaseforagriculturalpatternanalysis/full.md
@@ -0,0 +1,356 @@
+# Agriculture-Vision: A Large Aerial Image Database for Agricultural Pattern Analysis
+
+Mang Tik Chiu $^{1*}$ , Xingqian Xu $^{1*}$ , Yunchao Wei $^{1}$ , Zilong Huang $^{1}$ , Alexander Schwing $^{1}$ , Robert Brunner $^{1}$ , Hrant Khachatrian $^{2}$ , Hovnatan Karapetyan $^{2}$ , Ivan Dozier $^{2}$ , Greg Rose $^{2}$ , David Wilson $^{2}$ , Adrian Tudor $^{2}$ , Naira Hovakimyan $^{2,1}$ , Thomas S. Huang $^{1}$ , Honghui Shi $^{3,1}$
+
+$^{1}$ UIUC, $^{2}$ Intelinair, $^{3}$ University of Oregon
+
+# Abstract
+
+The success of deep learning in visual recognition tasks has driven advancements in multiple fields of research. Particularly, increasing attention has been drawn towards its application in agriculture. Nevertheless, while visual pattern recognition on farmlands carries enormous economic values, little progress has been made to merge computer vision and crop sciences due to the lack of suitable agricultural image datasets. Meanwhile, problems in agriculture also pose new challenges in computer vision. For example, semantic segmentation of aerial farmland images requires inference over extremely large-size images with extreme annotation sparsity. These challenges are not present in most of the common object datasets, and we show that they are more challenging than many other aerial image datasets. To encourage research in computer vision for agriculture, we present Agriculture-Vision: a large-scale aerial farmland image dataset for semantic segmentation of agricultural patterns. We collected 94,986 high-quality aerial images from 3,432 farmlands across the US, where each image consists of RGB and Near-infrared (NIR) channels with resolution as high as $10\mathrm{cm}$ per pixel. We annotate nine types of field anomaly patterns that are most important to farmers. As a pilot study of aerial agricultural semantic segmentation, we perform comprehensive experiments using popular semantic segmentation models; we also propose an effective model designed for aerial agricultural pattern recognition. Our experiments demonstrate several challenges Agriculture-Vision poses to both the computer vision and agriculture communities. Future versions of this dataset will include even more aerial images, anomaly patterns and image channels.
+
+# 1. Introduction
+
+Since the introduction of ImageNet [14], a large-scale image classification dataset, research in computer vision and pattern recognition using deep neural nets has seen unprecedented development [31, 23, 49, 48, 25]. Deep neural network based algorithms have proven to be effective across multiple domains such as medicine and astronomy [34, 2, 59], across multiple datasets [20, 51, 17], across different computer vision tasks [58, 28, 56, 9, 11, 10, 47, 43, 59] and across different numerical precisions and hardware architectures [57, 61]. However, progress in visual pattern recognition for agriculture, one of the fundamental aspects of the human race, has been relatively slow [29]. This is partially due to the lack of relevant datasets that encourage the study of agricultural imagery and visual patterns, which pose many distinctive characteristics.
+
+A major direction of visual recognition in agriculture is aerial image semantic segmentation. Solving this problem is important because it has tremendous economic potential. Specifically, efficient algorithms for detecting field conditions enable timely actions to prevent major losses or to increase potential yield throughout the growing season. However, this is much more challenging compared to typical semantic segmentation tasks on other aerial image datasets. For example, to segment weed patterns in aerial farmland images, the algorithm must be able to identify sparse weed clusters of vastly different shapes and coverages. In addition, some of these aerial images have sizes exceeding $20000 \times 30000$ pixels; such images pose a huge problem for end-to-end segmentation in terms of computation power and memory consumption. Agricultural data are also inherently multi-modal, where information such as field temperature and near-infrared signal are essential for determining field conditions. These properties deviate from those of conventional semantic segmentation tasks, thus reducing their applicability to this area of research.
+
+| Dataset | # Images | # Classes | # Labels | Tasks | Image Size (pixels) | # Pixels | Channels | Resolution (GSD) |
+|---|---|---|---|---|---|---|---|---|
+| Aerial images |  |  |  |  |  |  |  |  |
+| Inria Aerial Image [38] | 180 | 2 | 180 | seg. | 5000 × 5000 | 4.5B | RGB | 30 cm/px |
+| DOTA [54] | 2,806 | 14 | 188,282 | det. | ≤ 4000 × 4000 | 44.9B | RGB | various |
+| iSAID [52] | 2,806 | 15 | 655,451 | seg. | ≤ 4000 × 4000 | 44.9B | RGB | various |
+| AID [55] | 10,000 | 30 | 10,000 | cls. | 600 × 600 | 3.6B | RGB | 50-800 cm/px |
+| DeepGlobe Building [13] | 24,586 | 2 | 302,701 | det. / seg. | 650 × 650 | 10.4B | 9 bands | 31-124 cm/px |
+| EuroSAT [24] | 27,000 | 10 | 27,000 | cls. | 256 × 256 | 1.77B | 13 bands | 30 cm/px |
+| SAT-4 [3] | 500,000 | 4 | 500,000 | cls. | 28 × 28 | 0.39B | RGB, NIR | 600 cm/px |
+| SAT-6 [3] | 405,000 | 6 | 405,000 | cls. | 28 × 28 | 0.32B | RGB, NIR | 600 cm/px |
+| Agricultural images |  |  |  |  |  |  |  |  |
+| Crop/Weed discrimination [22] | 60 | 2 | 494 | seg. | 1296 × 966 | 0.08B | RGB | N/A |
+| Sensefly Crop Field [1] | 5,260 | N/A | N/A | N/A | N/A | N/A | NRG, Red edge | 12.13 cm/px |
+| DeepWeeds [42] | 17,509 | 1† | 17,509 | cls. | 1920 × 1200 | 40.3B | RGB | N/A |
+| Agriculture-Vision (ours) | 94,986 | 9 | 169,086 | seg. | 512 × 512 | 22.6B | RGB, NIR | 10/15/20 cm/px |
+
+$\dagger$ DeepWeeds has only weed annotations at image-level, but there are 8 sub-categories of weeds.
+
+Table 1: Statistics of related datasets. All datasets are compared on the number of images, categories, annotations, image sizes, number of pixels and color channels. For aerial image datasets, we also provide the ground sample distance (GSD). "cls.", "det." and "seg." stand for classification, detection and segmentation, respectively.
+
+To encourage research on this challenging task, we present Agriculture-Vision, a large-scale and high-quality dataset of aerial farmland images for advancing studies of agricultural semantic segmentation. We collected images throughout the growing seasons at numerous farming locations in the US, where several important field patterns were annotated by agronomy experts.
+
+Agriculture-Vision differs significantly from other image datasets in the following aspects: (1) unprecedented aerial image resolutions up to $10\mathrm{cm}$ per pixel $(\mathrm{cm / px})$ ; (2) multiple aligned image channels beyond RGB; (3) challenging annotations of multiple agricultural anomaly patterns; (4) precise annotations from professional agronomists with a strict quality assurance process; and (5) large size and shape variations of annotations. These features make Agriculture-Vision a unique image dataset that poses new challenges for semantic segmentation in aerial agricultural images.
+
+Our main contributions are summarized as follows:
+
+- We introduce a large-scale and high quality aerial agricultural image database for advancing research in agricultural pattern analysis and semantic segmentation.
+- We perform a pilot study with extensive experiments on the proposed database and provide a baseline for semantic segmentation using deep learning approaches to encourage further research.
+
+# 2. Related Work
+
+Most segmentation datasets primarily focus on common objects or street views. For example, Pascal VOC [16], MS-COCO [36] and ADE20K [64] segmentation datasets respectively consist of 20, 91 and 150 daily object categories such as airplane, person, computer, etc. The Cityscapes dataset [12], where dense annotations of street
+
+scenes are available, opened up research directions in street-view scene parsing and encouraged more research efforts in this area.
+
+Aerial image visual recognition has also gained increasing attention. Unlike daily scenes, aerial images are often significantly larger in size. For example, the DOTA dataset [54] contains images with sizes up to $4000 \times 4000$ pixels, significantly larger than those in common object datasets at around $500 \times 500$ pixels. Yet, aerial images are often of much lower resolution. For instance, the CVPR DeepGlobe 2018 Building Extraction Challenge [13] uses aerial images at a resolution of $31~\mathrm{cm / px}$ or coarser. As a result, finer object details such as shape and texture are lost and have to be omitted in later studies.
+
+Table 1 summarizes the statistics of the most related datasets, including those of aerial images and agricultural images. As can be seen from the table, there has been an apparent lack of large-scale aerial agricultural image databases, which has hindered agricultural visual recognition research from the rapid growth observed for common images [41].
+
+Meanwhile, many agricultural studies have proposed solutions to extract meaningful information through images. These papers cover numerous subtopics, such as spectral analysis on land and crops [63, 35, 27, 30], aerial device photogrammetry [21, 32], color indices and low-level image feature analysis [50, 44, 18, 53, 15], as well as integrated image processing systems [32, 33]. One popular approach in analyzing agricultural images is to use geo-color indices such as the Normalized Difference Vegetation Index (NDVI) and the Excess Green Index (ExG). These indices have a high correlation with land information such as water [60] and plantations [39]. Besides, recent papers in computer vision have been strongly motivated by deep convolutional neural networks (DCNNs) [31]. DCNNs are also in the spotlight in agricultural vision problems such as land cover classification [37] and weed detection [40]. In a similar work [37], Lu et al. collected aerial images using an EOS 5D camera at $650\mathrm{m}$ and $500\mathrm{m}$ above ground in Penzhou and Guanghan County, Sichuan, China. They labeled cultivated land vs. background using a three-layer CNN model. In another recent work [45], Rebetez et al. utilized an experimental farmland dataset collected by the Swiss Confederation's Agroscope research center and proposed a DCNN-HistNN hybrid model to categorize plant species at the pixel level. Nevertheless, since their datasets are limited in scale and their models are dated, both works fall short of bringing state-of-the-art deep learning approaches to agricultural applications.
+
+# 3. The Agriculture-Vision Dataset
+
+Agriculture-Vision aims to be a publicly available large-scale aerial agricultural image dataset that is high-resolution, multi-band, and with multiple types of patterns annotated by agronomy experts. In its current stage, we have captured 3,432 farmland images with nine types of annotations: double plant, drydown, endrow, nutrient deficiency, planter skip, storm damage, water, waterway and weed cluster. All of these patterns have substantial impacts on field conditions and the final yield. These farmland images were captured between 2017 and 2019 across multiple growing seasons in numerous farming locations in the US. The proposed Agriculture-Vision dataset contains 94,986 images sampled from these farmlands. In this section, we describe the details on how we construct the Agriculture-Vision dataset, including image acquisition, preprocessing, pattern annotation, and finally image sample generation.
+
+# 3.1. Field Image Acquisition
+
+Farmland images in the Agriculture-Vision dataset were captured by specialized mounted cameras on aerial vehicles flown over numerous fields in the US, which primarily consist of corn and soybean fields around Illinois and Iowa. All images in the current version of Agriculture-Vision were collected from the growing seasons between 2017 and 2019. Each field image contains four color channels: Near-infrared (NIR), Red, Green and Blue.
+
+| Year | Channel | Resolution | Description | Camera |
+|---|---|---|---|---|
+| 2017 | N, R, G, B | 15cm/px | Narrow band | 2×Canon SLR |
+| 2018 | N, R, G, B | 10cm/px (NRG), 20cm/px (B) | Narrow band (NRG), Wide band (B) | 2×Nikon D850 (NRG), 1×Nikon D800E (B) |
+| 2019 | N, R, G, B | 10cm/px | Narrow band | WAMS |
+
+Table 2: Camera settings for capturing the 4-channel field images: Near-infrared (N), Red (R), Green (G) and Blue (B). The Blue channel images captured in 2018 are scaled up to align with the NRG images.
+
+The camera settings for capturing farmland images are shown in Table 2. Farmland images in 2017 were taken with two aligned Canon SLR cameras, where one captures RGB images and the other captures only the NIR channel. For farmland images in 2018, the NIR, Red and Green (NRG) channels were taken using two Nikon D850 cameras to enable $10\mathrm{cm / px}$ resolution. Custom filters were used to capture near-infrared instead of the blue channel. Meanwhile, the separate Blue channel images were captured using one Nikon D800E at $20~\mathrm{cm / px}$ resolution, which were then scaled up to align with the corresponding NRG images. Farmland images in 2019 were captured using a proprietary Wide Area Multi-Spectral System (WAMS) commonly used for remote sensing. The WAMS captures all four channels simultaneously at $10\mathrm{cm / px}$ resolution. Note that compared to other aerial image datasets in Table 1, our dataset contains images in resolutions higher than all others.
+
+# 3.2. Farmland image preprocessing
+
+Farmland images captured in 2017 were already stored in regular pixel values between 0 and 255, while those captured in 2018 and 2019 were initially stored in camera raw pixel format. Following the conventional method for normalizing agricultural images, for each of the four channels in one field image, we first compute the $5^{th}$ and $95^{th}$ percentile pixel values, then clip all pixel values in the image by a lower bound and an upper bound:
+
+$$
+\begin{aligned} V_{\text{lower}} &= \max\left(0,\; p_{5} - 0.4 \times (p_{95} - p_{5})\right) \\ V_{\text{upper}} &= \min\left(255,\; p_{95} + 0.4 \times (p_{95} - p_{5})\right) \end{aligned} \tag{1}
+$$
+
+where $V_{lower}$ and $V_{upper}$ are the lower and upper bounds of the pixel values, and $p_5$ and $p_{95}$ are the $5^{th}$ and $95^{th}$ percentiles, respectively.
+
+Note that farmland images may contain invalid areas, which were initially marked with a special pixel value. Therefore, we exclude these invalid areas when computing pixel percentiles for images in 2018 and 2019.
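+
+A minimal sketch of this normalization for a single channel (our own code; the invalid-pixel mask is a stand-in for the special marker value mentioned above) is:
+
+```python
+# Sketch: percentile-based clipping of one image channel (Equation 1),
+# excluding invalid pixels from the percentile statistics.
+import numpy as np
+
+def clip_channel(channel: np.ndarray, valid_mask: np.ndarray) -> np.ndarray:
+    """channel: 2-D array of raw pixel values; valid_mask: boolean array of valid pixels."""
+    p5, p95 = np.percentile(channel[valid_mask], [5, 95])
+    v_lower = max(0.0, p5 - 0.4 * (p95 - p5))
+    v_upper = min(255.0, p95 + 0.4 * (p95 - p5))
+    return np.clip(channel, v_lower, v_upper)
+```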
+
+To intuitively visualize each field image and prepare for later experiments, we separate the four channels into a regular RGB image and an additional single-channel NIR image, and store them as two JPG images.
+
+# 3.3. Annotations
+
+All annotations in Agriculture-Vision were labeled by five annotators trained by expert agronomists, using commercial software. Annotated patterns were then reviewed by the agronomists, and unsatisfactory annotations were improved. The software provides visualizations of several image channels and vegetation indices, including RGB, NIR and NDVI, where NDVI can be derived from the Red and NIR channels by:
+
+$$
+\mathrm{NDVI} = \frac{\mathrm{NIR} - \mathrm{Red}}{\mathrm{NIR} + \mathrm{Red}} \tag{2}
+$$
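+
+For completeness, NDVI can be computed per pixel as in the following sketch (our own code; the small epsilon to avoid division by zero is an implementation choice):
+
+```python
+# Sketch: deriving NDVI from the NIR and Red channels (Equation 2).
+import numpy as np
+
+def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
+    nir = nir.astype(np.float32)
+    red = red.astype(np.float32)
+    return (nir - red) / (nir + red + eps)
+```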
+
+
+Figure 1: Visualization of an aerial farmland image before sub-sampling. This image (including invalid areas, shown in black) has a size of $10875 \times 3303$ pixels and only contains drydown annotations at the rightmost region. Due to the large image size and sparse annotation, training semantic segmentation models on entire images is impractical and inefficient.
+
+# 3.4. Image sample generation
+
+Unprocessed farmland images have extremely large image sizes. For instance, Figure 1 shows one field image with a size of $10875 \times 3303$ pixels. In fact, the largest field image we collected is $33571 \times 24351$ pixels in size. This poses significant challenges to deep network training in terms of computation time and memory consumption. In addition, Figure 1 also shows the sparsity of some annotations. This means training a segmentation model on the entire image for these patterns would be very inefficient, and would very possibly yield suboptimal results.
+
+On the other hand, unlike common objects, visual appearances of anomaly patterns in aerial farmland images are preserved under image sub-sampling methods such as flipping and cropping. This is because these patterns represent regions of the anomalies instead of individual objects. As a result, we can sample image patches from these large farmland images by cropping around annotated regions in the image. This simultaneously improves data efficiency, since the proportion of annotated pixels is increased.
+
+Motivated by the above reasons, we construct the Agriculture-Vision dataset by cropping annotations with a window size of $512 \times 512$ pixels. For field patterns smaller than the window size, we simply crop the region centered at the annotation. For field patterns larger than the window size, we employ a non-overlapping sliding window technique to cover the entirety of the annotation. Note that we discard images covered by more than $90\%$ of annotations, such that all images retain sufficient context information.
+
+In many cases, multiple small annotations are located near each other. Generating one image patch for every annotation would lead to severe re-sampling of those field regions, which causes biases in the dataset. To alleviate this issue, if two image patches have an Intersection-over-Union of over $30\%$, we discard the one with fewer pixels annotated as field patterns. When cropping large annotations using a sliding window, we also discard any image patches with only background pixels. A visualization of our sample generation method is illustrated in Figure 2, and some images in the final Agriculture-Vision dataset are shown in Figure 3.
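+
+The overlap filter described above can be sketched as follows (our own illustrative helpers; the dataset's actual generation scripts are not reproduced here):
+
+```python
+# Sketch: discarding heavily overlapping image patches, keeping the one with
+# more annotated pixels whenever two windows have IoU > 0.3.
+def box_iou(a, b):
+    """a, b: (x1, y1, x2, y2) pixel windows; returns intersection-over-union."""
+    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
+    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
+    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
+    area_a = (a[2] - a[0]) * (a[3] - a[1])
+    area_b = (b[2] - b[0]) * (b[3] - b[1])
+    return inter / float(area_a + area_b - inter)
+
+def deduplicate_patches(patches, annotated_pixels, iou_thresh=0.3):
+    """patches: list of windows; annotated_pixels: annotated pixel count per patch."""
+    keep = [True] * len(patches)
+    for i in range(len(patches)):
+        for j in range(i + 1, len(patches)):
+            if keep[i] and keep[j] and box_iou(patches[i], patches[j]) > iou_thresh:
+                drop = i if annotated_pixels[i] < annotated_pixels[j] else j
+                keep[drop] = False
+    return [p for p, k in zip(patches, keep) if k]
+```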
+
+
+Figure 2: This figure illustrates our field image patch generation method for Agriculture-Vision. For annotations smaller than $512 \times 512$ pixels, we crop the image using a single window around the annotation center (shown in red). For larger annotations, we use multiple non-overlapping windows to cover the entire annotation (shown in purple). Note that the bottom two polygons are enclosed by just one window.
+
+# 3.5. Dataset splitting
+
+We first randomly split the 3,432 farmland images with a $6/2/2$ train/val/test ratio. We then assign each sampled image to the split of the farmland image they are cropped from. This guarantees that no cropped images from the same farmland will appear in multiple splits in the final dataset. The generated Agriculture-Vision dataset thus contains 56,944/18,334/19,708 train/val/test images.
+
+Figure 3: Example annotations for each class: (a) Double plant, (b) Drydown, (c) Endrow, (d) Nutrient deficiency, (e) Planter skip, (f) Storm damage (not evaluated), (g) Water, (h) Waterway, (i) Weed cluster. For each annotation, top: RGB image; bottom: NRG image. Invalid regions have been blacked out. Note the extreme size and shape variations of some annotations. Note also that images in our dataset can contain multiple patterns; the visualizations above are chosen to best illustrate each pattern. Images best viewed in color and zoomed in.
+
+Figure 4: Total area of annotations for each class. Some categories occupy significantly larger areas than others, resulting in extreme class imbalance.
+
+
+Figure 5: Number of images containing each annotation class. The sudden drop in the number of storm damage samples indicates the difficulty for a model to recognize this pattern.
+
+
+Figure 6: Percentages of annotated pixels in images. Some patterns take up almost the entire image.
+
+# 4. Dataset Statistics
+
+# 4.1. Annotation areas
+
+Field patterns have different shapes and sizes. For example, weed clusters can appear in either small patches or enormous regions, while double plant patterns usually occur in small areas of the field. At the same time, these patterns also appear at different frequencies. Therefore, patterns that are large and more common occupy significantly larger areas than patterns that are smaller and relatively rare.
+
+Figure 4 shows the total number of pixels for each type of annotation in Agriculture-Vision. We observe significantly more drydown, nutrient deficiency and weed cluster pixels than pixels of other categories in our dataset, which indicates extreme label imbalance across categories.
+
+# 4.2. Annotation counts
+
+The frequency at which a model observes a pattern during training determines the model's ability to recognize the same pattern during inference. It is therefore very important to understand the sample distribution for each of these field patterns in the Agriculture-Vision dataset.
+
+Figure 5 shows the number of images that contain each annotation category. While most annotations fall under a natural and smooth occurrence distribution, we observe a sudden drop of images containing storm damage patterns. The extreme scarcity of storm damage annotations would be problematic for model training. As a result, we ignore any storm damage annotations when performing evaluations.
+
+# 4.3. Annotation proportions
+
+As previously described, field patterns can vary dramatically in size. Correspondingly, in Agriculture-Vision, each generated image sample may also contain various proportions of annotations. We show in Figure 6 that many images contain more than $50\%$ annotated pixels, and some even occupy more than $80\%$ of the image. Training a model to segment large patterns can be difficult, since recognition of field patterns relies heavily on the contextual information of the surrounding field.
+
+# 5. Pilot Study on Agriculture-Vision
+
+# 5.1. Baseline models
+
+There are many popular models for semantic segmentation on common object datasets. For example, U-Net [46] is a light-weight model that leverages an encoder-decoder architecture for pixel-wise classification. PSPNet [62] uses spatial pooling at multiple resolutions to gather global information. DeepLab [4, 5, 6, 7] is a well-known series of deep learning models that use atrous convolutions for semantic segmentation. More recently, many new methods have been proposed that achieve state-of-the-art results on the Cityscapes benchmark. For example, SPGNet [8] proposes a Semantic Prediction Guidance (SPG) module which learns to re-weight the local features through guidance from pixel-wise semantic prediction, and [26] proposes the Criss-Cross Network (CCNet) for obtaining better contextual information in a more effective and efficient way. In our experiments, we perform comparative evaluations on the Agriculture-Vision dataset using DeepLabV3 and DeepLabV3+, two well-performing models across several semantic segmentation datasets. We also propose a specialized FPN-based model that outperforms these two milestones on Agriculture-Vision.
+
+To couple with Agriculture-Vision, we make minor modifications on the existing DeepLabV3 and DeepLabV3+ architectures. Since Agriculture-Vision contains NRGB images, we duplicate the weights corresponding to the Red channel of the pretrained convolution layer. This gives a convolution layer with four input channels in the backbone.
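+
+A sketch of this channel expansion (our own PyTorch code; the NRGB channel ordering and the pretrained weight layout are assumptions) is:
+
+```python
+# Sketch: expanding a pretrained 3-channel first convolution to 4 input
+# channels by duplicating the Red-channel weights for the NIR channel.
+import torch
+import torch.nn as nn
+
+def expand_first_conv(conv_rgb: nn.Conv2d) -> nn.Conv2d:
+    conv_nrgb = nn.Conv2d(4, conv_rgb.out_channels,
+                          kernel_size=conv_rgb.kernel_size,
+                          stride=conv_rgb.stride,
+                          padding=conv_rgb.padding,
+                          bias=conv_rgb.bias is not None)
+    with torch.no_grad():
+        w = conv_rgb.weight               # (out, 3, k, k), assumed R, G, B order
+        red = w[:, :1]                    # weights of the Red channel
+        conv_nrgb.weight.copy_(torch.cat([red, w], dim=1))  # NIR copies Red
+        if conv_rgb.bias is not None:
+            conv_nrgb.bias.copy_(conv_rgb.bias)
+    return conv_nrgb
+```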
+
+| Model | mIoU (%) | Background | Double plant | Drydown | Endrow | Nutrient deficiency | Planter skip | Water | Waterway | Weed cluster |
+|---|---|---|---|---|---|---|---|---|---|---|
+| DeepLabv3 (os=8) [6] | 35.29 | 73.01 | 21.32 | 56.19 | 12.00 | 35.22 | 20.10 | 42.19 | 35.04 | 22.51 |
+| DeepLabv3+ (os=8) [7] | 37.95 | 72.76 | 21.94 | 56.80 | 16.88 | 34.18 | 18.80 | 61.98 | 35.25 | 22.98 |
+| DeepLabv3 (os=16) [6] | 41.66 | 74.45 | 25.77 | 57.91 | 19.15 | 39.40 | 24.25 | 72.35 | 36.42 | 25.24 |
+| DeepLabv3+ (os=16) [7] | 42.27 | 74.32 | 25.62 | 57.96 | 21.65 | 38.42 | 29.22 | 73.19 | 36.92 | 23.16 |
+| Ours | 43.40 | 74.31 | 28.45 | 57.43 | 21.74 | 38.86 | 33.55 | 73.59 | 34.37 | 28.33 |
+
+Table 3: mIoUs and class IoUs of modified semantic segmentation models and our proposed FPN-based model on the Agriculture-Vision validation set. Our model is customized for aerial agricultural images and performs better than all the others.
+
+| Model | mIoU (%) | Background | Double plant | Drydown | Endrow | Nutrient deficiency | Planter skip | Water | Waterway | Weed cluster |
+|---|---|---|---|---|---|---|---|---|---|---|
+| DeepLabv3 (os=8) [6] | 32.18 | 70.42 | 21.51 | 50.97 | 12.60 | 39.37 | 20.37 | 15.69 | 33.71 | 24.98 |
+| DeepLabv3+ (os=8) [7] | 39.05 | 70.99 | 19.67 | 50.89 | 19.50 | 41.32 | 24.42 | 62.25 | 34.14 | 28.27 |
+| DeepLabv3 (os=16) [6] | 42.22 | 72.73 | 25.15 | 53.62 | 20.99 | 43.95 | 24.57 | 70.42 | 38.63 | 29.91 |
+| DeepLabv3+ (os=16) [7] | 42.42 | 72.50 | 25.99 | 53.57 | 24.10 | 44.15 | 24.39 | 70.33 | 37.91 | 28.81 |
+| Ours | 43.66 | 72.55 | 27.88 | 52.32 | 24.43 | 43.79 | 30.95 | 71.33 | 38.81 | 30.87 |
+
+Table 4: mIoUs and class IoUs of semantic segmentation models and our proposed model on the Agriculture-Vision test set. The results are consistent with those on the validation set, where our model outperforms common object semantic segmentation models.
+
+# 5.2. The proposed FPN-based model
+
+In our FPN-based model, the encoder of the FPN is a ResNet [23]. We retain the first three residual blocks of the ResNet, and we change the last residual block (layer4) into a dilated residual block with rate $= 4$. The modified block shares the same structure as the Deeplab series [4, 5, 6, 7]. We implement the lateral connections in the FPN decoder using two $3 \times 3$ and one $1 \times 1$ convolution layers. Each of the two $3 \times 3$ convolution layers is followed by a batch normalization layer (BN) and a leaky ReLU activation with a negative slope of 0.01. The last $1 \times 1$ convolution layer does not contain bias units. For the upsampling modules, instead of bilinear interpolation, we use a deconvolution layer with kernel size $= 3$, stride $= 2$ and padding $= 1$, followed by a BN layer, leaky ReLU activation and another $1 \times 1$ convolution layer without bias. The output from each lateral connection and the output of the corresponding upsampling module are added together; the sum is then passed through two more $3 \times 3$ convolution layers with BN and leaky ReLU. Lastly, outputs from all pyramid levels are upsampled to the highest pyramid resolution using bilinear interpolation and are then concatenated. The result is passed to a $1 \times 1$ convolution layer with bias units to predict the final semantic map.
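+
+The lateral connection and upsampling modules described above could be sketched as follows (our own illustrative PyTorch code; channel counts, the `output_padding` needed for exact 2× upsampling, and everything outside these two modules are assumptions):
+
+```python
+# Sketch: one lateral connection and one upsampling module of the FPN decoder.
+import torch.nn as nn
+
+def conv_bn_lrelu(in_ch, out_ch, k=3):
+    return nn.Sequential(
+        nn.Conv2d(in_ch, out_ch, k, padding=k // 2),
+        nn.BatchNorm2d(out_ch),
+        nn.LeakyReLU(0.01, inplace=True),
+    )
+
+class Lateral(nn.Module):
+    """Two 3x3 conv-BN-LeakyReLU blocks followed by a bias-free 1x1 conv."""
+    def __init__(self, in_ch, out_ch):
+        super().__init__()
+        self.body = nn.Sequential(
+            conv_bn_lrelu(in_ch, out_ch),
+            conv_bn_lrelu(out_ch, out_ch),
+            nn.Conv2d(out_ch, out_ch, 1, bias=False),
+        )
+
+    def forward(self, x):
+        return self.body(x)
+
+class Upsample(nn.Module):
+    """Deconvolution (k=3, s=2, p=1) + BN + LeakyReLU + bias-free 1x1 conv."""
+    def __init__(self, in_ch, out_ch):
+        super().__init__()
+        self.body = nn.Sequential(
+            nn.ConvTranspose2d(in_ch, out_ch, kernel_size=3, stride=2,
+                               padding=1, output_padding=1),  # output_padding is our choice
+            nn.BatchNorm2d(out_ch),
+            nn.LeakyReLU(0.01, inplace=True),
+            nn.Conv2d(out_ch, out_ch, 1, bias=False),
+        )
+
+    def forward(self, x):
+        return self.body(x)
+```
+
+At each pyramid level, the output of `Lateral` and the output of `Upsample` from the level above would then be summed and refined by two further $3 \times 3$ conv-BN-LeakyReLU blocks, as described above.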
+
+# 5.3. Training details
+
+We use backbone models pretrained on ImageNet in all our experiments. We train each model for 25,000 iterations with a batch size of 40 on four RTX 2080Ti GPUs. We use SGD with a base learning rate of 0.01 and a weight decay of $5 \times 10^{-4}$. Within the 25,000 iterations, we first warm-up the training for 1,000 iterations [19], where the learning rate linearly grows from 0 to 0.01. We then train for 7,000 iterations with a constant learning rate of 0.01. We finally decrease the learning rate back to 0 with the "poly" rule [5] in the remaining 17,000 iterations.
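+
+The resulting learning-rate schedule can be written as a small function; this is a minimal sketch, and the poly power of 0.9 is an assumed common default rather than a value stated here.
+
+```python
+def learning_rate(step, base_lr=0.01, warmup=1000, constant=7000, total=25000, power=0.9):
+    """Linear warm-up, a constant phase, then 'poly' decay back to 0."""
+    if step < warmup:
+        return base_lr * step / warmup                 # 0 -> 0.01 over 1,000 iterations
+    if step < warmup + constant:
+        return base_lr                                 # constant 0.01 for 7,000 iterations
+    decay_steps = total - warmup - constant            # remaining 17,000 iterations
+    progress = (step - warmup - constant) / decay_steps
+    return base_lr * (1.0 - progress) ** power         # poly decay to 0
+```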
+
+Table 3 and Table 4 show the validation and test set results of DeepLabV3 and DeepLabV3+ with different output strides and our proposed FPN-based model. Our model consistently outperforms these semantic segmentation models in Agriculture-Vision. Hence, in the following experiments, we will use our FPN-based model for comparison studies.
+
+# 5.4. Multi-spectral data and model complexity
+
+One major focus of our work is the effectiveness of multi-spectral data for image recognition. Agriculture-Vision consists of NIR-Red-Green-Blue (NRGB) images, which goes beyond the RGB input used in many conventional image recognition tasks. We therefore investigate the differences in performance between semantic segmentation from multi-spectral images, including NRG and NRGB images, and from regular RGB images.
+
+We simultaneously investigate the impact of using models with different complexities. Specifically, we train our FPN-based model with ResNet-50 and ResNet-101 as backbones. We evaluate combinations of multi-spectral images and various backbones and report the results in Table 5.
+
+| Backbone | Channels | Val mIoU (%) | Test mIoU (%) |
+| --- | --- | --- | --- |
+| ResNet-50 | RGB | 39.28 | 38.26 |
+| ResNet-50 | NRG | 42.89 | 41.34 |
+| ResNet-50 | NRGB | 42.16 | 41.82 |
+| ResNet-101 | RGB | 40.48 | 39.63 |
+| ResNet-101 | NRG | 42.25 | 40.05 |
+| ResNet-101 | NRGB | 43.40 | 43.66 |
+
+Table 5: mIoUs using our proposed model with various ResNet backbones and image channels.
+
+# 5.5. Multi-scale data
+
+Aerial farmland images contain annotations with vastly different sizes. As a result, models trained on images at different scales can yield significantly different performance. To justify our choice of $512 \times 512$ windows for constructing the Agriculture-Vision dataset, we additionally generate two versions of the dataset with different window sizes. The first version (Agriculture-Vision-1024) uses $1024 \times 1024$ windows to crop annotations. The second version (Agriculture-Vision-MS) uses three window sizes: $1024 \times 1024$, $1536 \times 1536$ and $2048 \times 2048$.
+
+In Agriculture-Vision-MS, images are cropped with the smallest window size that completely encloses the annotation. If an annotation exceeds $2048 \times 2048$ pixels, we again use the sliding window cropping method to generate multiple sub-samples. We use Agriculture-Vision-MS to evaluate if retaining the integrity of large annotations helps to improve performances. Note that this is different from conventional multi-scale inference used in common object image segmentation, since in Agriculture-Vision-MS the images are of different sizes.
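+
+The cropping rule for Agriculture-Vision-MS can be sketched as a small helper; the function name and return convention are illustrative.
+
+```python
+def choose_crop_window(ann_h, ann_w, sizes=(1024, 1536, 2048)):
+    """Pick the smallest window that fully encloses an annotation of size
+    ann_h x ann_w; return None when the annotation exceeds the largest window,
+    in which case the sliding-window cropping described above is used."""
+    for size in sizes:
+        if ann_h <= size and ann_w <= size:
+            return size
+    return None  # larger than 2048 x 2048: fall back to sliding-window crops
+```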
+
+We cross-evaluate models trained on each dataset version with all three versions. Results in Table 6 show that the model trained on the proposed Agriculture-Vision dataset with a $512 \times 512$ window size is the most stable and performs the best, thus justifying our dataset with the chosen image sampling method.
+
+| Train window | Val 512 | Val 1024 | Val MS | Test 512 | Test 1024 | Test MS |
+| --- | --- | --- | --- | --- | --- | --- |
+| 512 | 43.40 | 39.44 | 37.64 | 43.66 | 39.68 | 37.27 |
+| 1024 | 36.33 | 34.37 | 36.16 | 35.01 | 35.27 | 35.87 |
+| MS | 34.16 | 32.45 | 35.67 | 31.17 | 30.72 | 35.77 |
+
+Table 6: mIoUs (%) of our model trained and tested on different Agriculture-Vision versions. 512: the proposed Agriculture-Vision dataset, 1024: Agriculture-Vision-1024, MS: Agriculture-Vision-MS. The model trained on the proposed dataset yields the best results across all versions.
+
+# 6. Discussion
+
+We would like to highlight the use of Agriculture-Vision to tackle the following crucial tasks:
+
+- Agriculture images beyond RGB: Deep convolutional neural networks (DCNNs) are channel-wise expandable by nature, yet few datasets promote in-depth research on this capability. We have demonstrated that aerial agricultural semantic segmentation is more effective with NRGB images than with RGB images alone. Future versions of Agriculture-Vision will also include thermal images, soil maps and topographic maps. We therefore expect further studies on multi-spectral agricultural images.
+- Transfer learning: Our segmentation task induces an uncommon type of transfer learning, where a model pretrained on RGB images of common objects is transferred to multi-spectral agricultural images. Although the gap between the source and target domains is large, our experiments show that transfer learning remains an effective way of learning to recognize field patterns. Similar types of transfer learning are not yet common, but we expect them to become more widespread with Agriculture-Vision. The effectiveness of fine-tuning can be further explored, for example through channel expansion in convolution layers and domain adaptation from common objects to agricultural patterns.
+
+- Learning from extreme image sizes: The current version of Agriculture-Vision provides a pilot study of aerial agricultural pattern recognition with conventional image sizes. However, our multi-scale experiments show that there is still much to explore in effectively leveraging large-scale aerial images for improved performance. Using Agriculture-Vision as a starting point, we hope to initiate related research on visual recognition tasks that are generalizable to extremely large aerial farmland images. We envision future work in this direction to enable large-scale image analysis as a whole.
+
+# 7. Conclusion
+
+We introduce Agriculture-Vision, an aerial agricultural semantic segmentation dataset. We capture extremely large farmland images and provide multiple field pattern annotations. This dataset poses new challenges in agricultural semantic segmentation from aerial images. As a baseline, we provide a pilot study on Agriculture-Vision using well-known off-the-shelf semantic segmentation models and our specialized one.
+
+In later versions, Agriculture-Vision will include more field images and patterns, as well as more image modalities, such as thermal images, soil maps and topographic maps. This would make Agriculture-Vision an even more standardized and inclusive aerial agricultural dataset. We hope this dataset will encourage more work on improving visual recognition methods for agriculture, particularly on large-scale, multi-channel aerial farmland semantic segmentation.
+
+# References
+
+[1] Sensefly agriculture dataset. https://www.sensefly.com/education/datasets. Accessed:2018/11/16. 2
+[2] AK Aniyan and Kshitij Thorat. Classifying radio galaxies with the convolutional neural network. The Astrophysical Journal Supplement Series, 230(2):20, 2017. 1
+[3] Saikat Basu, Sangram Ganguly, Supratik Mukhopadhyay, Robert DiBiano, Manohar Karki, and Ramakrishna Nemani. Deepsat: a learning framework for satellite imagery. In Proceedings of the 23rd SIGSPATIAL International Conference on Advances in Geographic Information Systems, page 37. ACM, 2015. 2
+[4] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Semantic image segmentation with deep convolutional nets and fully connected crfs. arXiv preprint arXiv:1412.7062, 2014. 6, 7
+[5] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE transactions on pattern analysis and machine intelligence, 40(4):834-848, 2017. 6, 7
+[6] Liang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587, 2017. 6, 7
+[7] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European conference on computer vision (ECCV), pages 801-818, 2018. 6, 7
+[8] Bowen Cheng, Liang-Chieh Chen, Yunchao Wei, Yukun Zhu, Zilong Huang, Jinjun Xiong, Thomas S Huang, WenMei Hwu, and Honghui Shi. Spgnet: Semantic prediction guidance for scene parsing. In Proceedings of the IEEE International Conference on Computer Vision, pages 5218-5228, 2019. 6
+[9] Bowen Cheng, Yunchao Wei, Honghui Shi, Shiyu Chang, Jinjun Xiong, and Thomas S Huang. Revisiting pre-training: An efficient training method for image classification. arXiv preprint arXiv:1811.09347, 2018. 1
+[10] Bowen Cheng, Yunchao Wei, Honghui Shi, Rogerio Feris, Jinjun Xiong, and Thomas Huang. Decoupled classification refinement: Hard false positive suppression for object detection. arXiv preprint arXiv:1810.04002, 2018. 1
+[11] Bowen Cheng, Bin Xiao, Jingdong Wang, Honghui Shi, Thomas S Huang, and Lei Zhang. Higherhrnet: Scale-aware representation learning for bottom-up human pose estimation. arXiv preprint arXiv:1908.10357, 2019. 1
+[12] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3213-3223, 2016. 2
+[13] Ilke Demir, Krzysztof Koperski, David Lindenbaum, Guan Pang, Jing Huang, Saikat Basu, Forest Hughes, Devis Tuia, and Ramesh Raskar. Deepglobe 2018: A challenge to parse the earth through satellite images. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 172-17209. IEEE, 2018. 2
+[14] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248–255. IEEE, 2009. 1
+[15] MS El-Faki, N Zhang, and DE Peterson. Weed detection using color machine vision. Transactions of the ASAE, 43(6):1969, 2000. 2
+[16] Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes (voc) challenge. International journal of computer vision, 88(2):303-338, 2010. 2
+[17] Yang Fu, Yunchao Wei, Guanshuo Wang, Yuqian Zhou, Honghui Shi, and Thomas S Huang. Self-similarity grouping: A simple unsupervised cross domain adaptation approach for person re-identification. In Proceedings of the IEEE International Conference on Computer Vision, pages 6112–6121, 2019. 1
+[18] Anatoly A Gitelson, Yoram J Kaufman, Robert Stark, and Don Rundquist. Novel algorithms for remote estimation of vegetation fraction. Remote sensing of Environment, 80(1):76-87, 2002. 2
+[19] Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677, 2017. 7
+[20] Yunhui Guo, Honghui Shi, Abhishek Kumar, Kristen Grauman, Tajana Rosing, and Rogerio Feris. Spottune: transfer learning through adaptive fine-tuning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4805-4814, 2019. 1
+[21] Norbert Haala, Michael Cramer, Florian Weimer, and Martin Trittler. Performance test on uav-based photogrammetric data collection. Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 38(1/C22):7-12, 2011. 2
+[22] Sebastian Haug and Jörn Ostermann. A crop/weed field image dataset for the evaluation of computer vision based precision agriculture tasks. In European Conference on Computer Vision, pages 105-116. Springer, 2014. 2
+[23] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 1, 7
+[24] Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. arXiv preprint arXiv:1709.00029, 2017. 2
+[25] Gao Huang, Zhuang Liu, Geoff Pleiss, Laurens Van Der Maaten, and Kilian Weinberger. Convolutional networks with dense connectivity. IEEE transactions on pattern analysis and machine intelligence, 2019. 1
+
+[26] Zilong Huang, Xinggang Wang, Lichao Huang, Chang Huang, Yunchao Wei, and Wenyu Liu. Ccnet: Criss-cross attention for semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pages 603-612, 2019. 6
+[27] E Raymond Hunt, W Dean Hively, Stephen J Fujikawa, David S Linden, Craig ST Daughtry, and Greg W McCarty. Acquisition of nir-green-blue digital photographs from unmanned aircraft for crop monitoring. Remote Sensing, 2(1):290-305, 2010. 2
+[28] Jianbo Jiao, Yunchao Wei, Zequn Jie, Honghui Shi, Rynson WH Lau, and Thomas S Huang. Geometry-aware distillation for indoor semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2869-2878, 2019. 1
+[29] Andreas Kamilaris and Francesc X Prenafeta-Boldú. Deep learning in agriculture: A survey. Computers and Electronics in Agriculture, 147:70–90, 2018. 1
+[30] Joshua Kelcey and Arko Lucieer. Sensor correction of a 6-band multispectral imaging sensor for uav remote sensing. Remote Sensing, 4(5):1462-1493, 2012. 2
+[31] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097-1105, 2012. 1, 3
+[32] Andrea S Laliberte and Albert Rango. Image processing and classification procedures for analysis of sub-decimeter imagery acquired with an unmanned aircraft over arid rangelands. GIScience & Remote Sensing, 48(1):4-23, 2011. 2
+[33] Ross D Lamm, David C Slaughter, and D Ken Giles. Precision weed control system for cotton. Transactions of the ASAE, 45(1):231, 2002. 2
+[34] David B Larson, Matthew C Chen, Matthew P Lungren, Safwan S Halabi, Nicholas V Stence, and Curtis P Langlotz. Performance of a deep-learning neural network model in assessing skeletal maturity on pediatric hand radiographs. Radiology, 287(1):313-322, 2017. 1
+[35] Valentine Lebourgeois, Agnès Bégué, Sylvain Labbe, Benjamin Mallavan, Laurent Prévot, and Bruno Roux. Can commercial digital cameras be used as multispectral sensors? a crop monitoring test. Sensors, 8(11):7300-7322, 2008. 2
+[36] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer, 2014. 2
+[37] Heng Lu, Xiao Fu, Chao Liu, Long-guo Li, Yu-xin He, and Nai-wen Li. Cultivated land information extraction in uav imagery based on deep convolutional neural network and transfer learning. Journal of Mountain Science, 14(4):731-741, 2017. 3
+[38] Emmanuel Maggiori, Yuliya Tarabalka, Guillaume Charpiat, and Pierre Alliez. Can semantic labeling methods generalize to any city? the inria aerial image labeling benchmark. In IEEE International Geoscience and Remote Sensing Symposium (IGARSS). IEEE, 2017. 2
+
+[39] JA Marchant, Hans Jørgen Andersen, and CM Onyango. Evaluation of an imaging sensor for detecting vegetation using different waveband combinations. Computers and Electronics in Agriculture, 32(2):101-117, 2001. 2
+[40] Andres Milioto, Philipp Lottes, and Cyril Stachniss. Real-time blob-wise sugar beets vs weeds classification for monitoring fields using convolutional neural networks. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 4:41, 2017. 3
+[41] Dmytro Mishkin, Nikolay Sergievskiy, and Jiri Matas. Systematic evaluation of convolution neural network advances on the imagenet. Computer Vision and Image Understanding, 161:11-19, 2017. 2
+[42] Alex Olsen, Dmitry A Konovalov, Bronson Philippa, Peter Ridd, Jake C Wood, Jamie Johns, Wesley Banks, Benjamin Girgenti, Owen Kenny, James Whinney, et al. Deepweeds: A multiclass weed species image dataset for deep learning. Scientific reports, 9(1):2058, 2019. 2
+[43] Rui Qian, Yunchao Wei, Honghui Shi, Jiachen Li, Jiaying Liu, and Thomas Huang. Weakly supervised scene parsing with point-based distance metric learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 8843-8850, 2019. 1
+[44] Anushree Ramanath, Saipreethi Muthusrinivasan, Yiqun Xie, Shashi Shekhar, and Bharathkumar Ramachandra. Ndvi versus cnn features in deep learning for land cover classification of aerial images. In IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium, pages 6483-6486. IEEE, 2019. 2
+[45] J Rebetez, HF Satizábal, M Mota, D Noll, L Buchi, Marina Wendling, B Cannelle, A Perez-Uribe, and Stéphane Burgos. Augmenting a convolutional neural network with local histograms: a case study in crop classification from high-resolution UAV imagery. In European Symp. on Artificial Neural Networks, Computational Intelligence and Machine Learning, pages 515-520, 2016. 3
+[46] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234-241. Springer, 2015. 6
+[47] Honghui Shi. Geometry-aware traffic flow analysis by detection and tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 116-120, 2018. 1
+[48] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 1
+[49] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1-9, 2015. 1
+[50] Anju Unnikrishnan, V Sowmya, and KP Soman. Deep learning architectures for land cover classification using red and near-infrared satellite images. Multimedia Tools and Applications, 78(13):18379-18394, 2019. 2
+
+[51] Zhonghao Wang, Mo Yu, Yunchao Wei, Rogerio Feris, Jinjun Xiong, Wen-mei Hwu, Thomas S. Huang, and Honghui Shi. Differential treatment for stuff and things: A simple unsupervised domain adaptation method for semantic segmentation. arXiv preprint arXiv:2003.08040, 2020. 1
+[52] Syed Waqas Zamir, Aditya Arora, Akshita Gupta, Salman Khan, Guolei Sun, Fahad Shahbaz Khan, Fan Zhu, Ling Shao, Gui-Song Xia, and Xiang Bai. isaid: A large-scale dataset for instance segmentation in aerial images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 28-37, 2019. 2
+[53] David M Woebbecke, George E Meyer, K Von Bargen, and DA Mortensen. Color indices for weed identification under various soil, residue, and lighting conditions. Transactions of the ASAE, 38(1):259-269, 1995. 2
+[54] Gui-Song Xia, Xiang Bai, Jian Ding, Zhen Zhu, Serge Belongie, Jiebo Luo, Mihai Datcu, Marcello Pelillo, and Liangpei Zhang. Dota: A large-scale dataset for object detection in aerial images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3974-3983, 2018. 2
+[55] Gui-Song Xia, Jingwen Hu, Fan Hu, Baoguang Shi, Xiang Bai, Yanfei Zhong, Liangpei Zhang, and Xiaoqiang Lu. Aid: A benchmark data set for performance evaluation of aerial scene classification. IEEE Transactions on Geoscience and Remote Sensing, 55(7):3965-3981, 2017. 2
+[56] Hanchao Yu, Yang Fu, Haichao Yu, Yunchao Wei, Xinchao Wang, Jianbo Jiao, Matthew Bramlet, Thenkurussi Kesavadas, Honghui Shi, Zhangyang Wang, et al. A novel framework for 3d-2d vertebra matching. In 2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), pages 121-126. IEEE, 2019. 1
+[57] Haichao Yu, Haoxiang Li, Honghui Shi, Thomas S. Huang, and Gang Hua. Any-precision deep neural networks. arXiv preprint arXiv:1911.07346, 2019. 1
+[58] Haichao Yu, Ding Liu, Honghui Shi, Hanchao Yu, Zhangyang Wang, Xinchao Wang, Brent Cross, Matthew Bramlet, and Thomas S Huang. Computed tomography super-resolution using convolutional neural networks. In 2017 IEEE International Conference on Image Processing (ICIP), pages 3944-3948. IEEE, 2017. 1
+[59] Hanchao Yu, Shanhui Sun, Haichao Yu, Xiao Chen, Honghui Shi, Thomas Huang, and Terrence Chen. Foal: Fast online adaptive learning for cardiac motion estimation. arXiv preprint arXiv:2003.04492, 2020. 1
+[60] Pablo J Zarco-Tejada, Victoria González-Dugo, LE Williams, L Suárez, José AJ Berni, D Goldhamer, and E Fereres. A pri-based water stress index combining structural and chlorophyll effects: Assessment using diurnal narrow-band airborne imagery and the cwsi thermal index. Remote sensing of environment, 138:38–50, 2013. 2
+[61] Xiaofan Zhang, Haoming Lu, Cong Hao, Jiachen Li, Bowen Cheng, Yuhong Li, Kyle Rupnow, Jinjun Xiong, Thomas Huang, Honghui Shi, et al. Skynet: a hardware-efficient method for object detection and tracking on embedded systems. arXiv preprint arXiv:1909.09709, 2019. 1
+[62] Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia. Pyramid scene parsing network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2881-2890, 2017. 6
+[63] Xin Zhao, Yitong Yuan, Mengdie Song, Yang Ding, Fenfang Lin, Dong Liang, and Dongyan Zhang. Use of unmanned aerial vehicle imagery and deep learning unet to extract rice lodging. Sensors, 19(18):3859, 2019. 2
+[64] Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Scene parsing through ade20k dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. 2
\ No newline at end of file
diff --git a/agriculturevisionalargeaerialimagedatabaseforagriculturalpatternanalysis/images.zip b/agriculturevisionalargeaerialimagedatabaseforagriculturalpatternanalysis/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..d6ef01fabac265fe496081620ff7f7ccd742532c
--- /dev/null
+++ b/agriculturevisionalargeaerialimagedatabaseforagriculturalpatternanalysis/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:377395cf0788c2576c9c98a6d74797ebd0bd2f19e1c59ad35f4b2afffbd9d706
+size 743563
diff --git a/agriculturevisionalargeaerialimagedatabaseforagriculturalpatternanalysis/layout.json b/agriculturevisionalargeaerialimagedatabaseforagriculturalpatternanalysis/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..bd5a299b46cec60410169cb8e2847ff28236f98a
--- /dev/null
+++ b/agriculturevisionalargeaerialimagedatabaseforagriculturalpatternanalysis/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0e78b7ee4999189be55f1dd7cfac00442f74f63fe45e53b1c03e1a1a136c81ee
+size 429770
diff --git a/alfredabenchmarkforinterpretinggroundedinstructionsforeverydaytasks/4ec79ba8-5315-44df-8ab2-d7b019cf5197_content_list.json b/alfredabenchmarkforinterpretinggroundedinstructionsforeverydaytasks/4ec79ba8-5315-44df-8ab2-d7b019cf5197_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..8c4959688171887bd20b67fc2699d6581348220a
--- /dev/null
+++ b/alfredabenchmarkforinterpretinggroundedinstructionsforeverydaytasks/4ec79ba8-5315-44df-8ab2-d7b019cf5197_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:09fa2da7c11ac810ef1696236474fb02256b9c96f01552ae7d4368fe152cb197
+size 81595
diff --git a/alfredabenchmarkforinterpretinggroundedinstructionsforeverydaytasks/4ec79ba8-5315-44df-8ab2-d7b019cf5197_model.json b/alfredabenchmarkforinterpretinggroundedinstructionsforeverydaytasks/4ec79ba8-5315-44df-8ab2-d7b019cf5197_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..285330cc75d086b6f1c95b5cab444e071b2a30d7
--- /dev/null
+++ b/alfredabenchmarkforinterpretinggroundedinstructionsforeverydaytasks/4ec79ba8-5315-44df-8ab2-d7b019cf5197_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:971bbbeb8d31e7ca2e163017d6258615e443a25b9c4e2e7bc6d17d55b7b62d19
+size 101313
diff --git a/alfredabenchmarkforinterpretinggroundedinstructionsforeverydaytasks/4ec79ba8-5315-44df-8ab2-d7b019cf5197_origin.pdf b/alfredabenchmarkforinterpretinggroundedinstructionsforeverydaytasks/4ec79ba8-5315-44df-8ab2-d7b019cf5197_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..98cfe06629e31c6caf3eebfa2ec3928c961e9a04
--- /dev/null
+++ b/alfredabenchmarkforinterpretinggroundedinstructionsforeverydaytasks/4ec79ba8-5315-44df-8ab2-d7b019cf5197_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:417eb5343962c363a7bb959c6f0f218e8e0097a28a603cd6ed6400354c5f7ea5
+size 2640567
diff --git a/alfredabenchmarkforinterpretinggroundedinstructionsforeverydaytasks/full.md b/alfredabenchmarkforinterpretinggroundedinstructionsforeverydaytasks/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..15d1e95f0fad9851b08669dd4c1959c1ea0fc12d
--- /dev/null
+++ b/alfredabenchmarkforinterpretinggroundedinstructionsforeverydaytasks/full.md
@@ -0,0 +1,356 @@
+# ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks
+
+Mohit Shridhar$^{1}$, Jesse Thomason$^{1}$, Daniel Gordon$^{1}$, Yonatan Bisk$^{1,2,3}$, Winson Han$^{3}$, Roozbeh Mottaghi$^{1,3}$, Luke Zettlemoyer$^{1}$, Dieter Fox$^{1,4}$
+
+AskForALFRED.com
+
+# Abstract
+
+We present ALFRED (Action Learning From Realistic Environments and Directives), a benchmark for learning a mapping from natural language instructions and egocentric vision to sequences of actions for household tasks. ALFRED includes long, compositional tasks with nonreversible state changes to shrink the gap between research benchmarks and real-world applications. ALFRED consists of expert demonstrations in interactive visual environments for 25k natural language directives. These directives contain both high-level goals like "Rinse off a mug and place it in the coffee maker" and low-level language instructions like "Walk to the coffee maker on the right." ALFRED tasks are more complex in terms of sequence length, action space, and language than existing vision-and-language task datasets. We show that a baseline model based on recent embodied vision-and-language tasks performs poorly on ALFRED, suggesting that there is significant room for developing innovative grounded visual language understanding models with this benchmark.
+
+# 1. Introduction
+
+A robot operating in human spaces must learn to connect natural language to the world. This symbol grounding [21] problem has largely focused on connecting language to static images. However, robots need to understand task-oriented language, for example "Rinse off a mug and place it in the coffee maker," as illustrated in Figure 1.
+
+Platforms for translating language to action have become increasingly popular, spawning new test-beds [3, 12, 14, 41]. These benchmarks include language-driven navigation and embodied question answering, which have seen dramatic improvements in modeling thanks to environments like Matterport 3D [3, 11], AI2-THOR [25], and AI Habitat [44]. However, these datasets ignore complexities arising from describing task-oriented behaviors with objects.
+
+Figure 1: ALFRED consists of 25k language directives corresponding to expert demonstrations of household tasks. We highlight several frames corresponding to portions of the accompanying language instruction for the goal "Rinse off a mug and place it in the coffee maker". ALFRED involves interactions with objects, keeping track of state changes, and references to previous instructions.
+
+We introduce ALFRED, a new benchmark for connecting human language to actions, behaviors, and objects in interactive visual environments. Planner-based expert demonstrations are accompanied by both high- and low-level human language instructions in 120 indoor scenes in AI2-THOR 2.0 [25]. These demonstrations involve partial observability, long action horizons, underspecified natural language, and irreversible actions.
+
+ALFRED includes 25,743 English language directives describing 8,055 expert demonstrations averaging 50 steps each, resulting in 428,322 image-action pairs. Motivated by work in robotics on segmentation-based grasping [36], agents in ALFRED interact with objects visually, specifying a pixelwise interaction mask of the target object. This inference is more realistic than simple object class prediction, where localization is treated as a solved problem. Existing beam-search [17, 47, 52] and backtracking solutions [24, 28] are infeasible due to the larger action and state spaces, long horizon, and inability to undo certain actions.
+
+| Dataset | # Human Annotations | Granularity | Visual Quality | Movable Objects | State Changes | Vis. Obs. | Navigation | Interaction |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| TACoS [42] | 17k+ | High&Low | Photos | X | X | - | - | - |
+| R2R [3]; Touchdown [14] | 21k+; 9.3k+ | Low | Photos | X | X | Ego | Graph | X |
+| EQA [15] | X | High | Low | X | X | Ego | Discrete | X |
+| Matterport EQA [54] | X | High | Photos | X | X | Ego | Discrete | X |
+| IQA [20] | X | High | High | X | ✓ | Ego | Discrete | Discrete |
+| VirtualHome [41] | 2.7k+ | High&Low | High | ✓ | ✓ | 3rdPerson | X | Discrete |
+| VSP [57] | X | High | High | ✓ | ✓ | Ego | X | Discrete |
+| ALFRED | 25k+ | High&Low | High | ✓ | ✓ | Ego | Discrete | Discrete + Mask |
+
+Column groups: Language (# Human Annotations, Granularity), Virtual Environment (Visual Quality, Movable Objects, State Changes, Vis. Obs.), Inference (Navigation, Interaction).
+
+Table 1: Dataset comparison. ALFRED is the first interactive visual dataset to include high-level goal and low-level natural language instructions for object and environment interactions. TACoS [42] provides detailed high- and low-level text descriptions of cooking videos, but does not facilitate task execution. For navigation, ALFRED enables discretized, grid-based movement, while other datasets use topological graph navigation or avoid navigation altogether. ALFRED requires an agent to generate spatially located interaction masks for action commands. By contrast, other datasets only require choosing from a discrete set of available interactions and object classes or offer no interactive capability.
+
+To establish baseline performance levels, we evaluate a sequence-to-sequence model akin to existing vision-and-language navigation tasks [27]. This model is not effective on the complex tasks in ALFRED, achieving less than $5\%$ success rates. For analysis, we also evaluate individual sub-goals. While performance is better for isolated sub-goals, the model lacks the reasoning capacity for long-horizon and compositional task planning.
+
+In summary, ALFRED facilitates learning models that translate from language to sequences of actions and interactions in a visually and physically realistic simulation environment. This benchmark captures many challenges present in real-world settings for translating human language to robot actions for accomplishing household tasks. Models that can overcome these challenges will begin to close the gap towards real-world, language-driven robotics.
+
+# 2. Related Work
+
+Table 1 summarizes the benefits of ALFRED relative to other visual action datasets with language annotations.
+
+Vision & Language Navigation. In vision-and-language navigation tasks, either natural or templated language describes a route to a goal location through egocentric visual observations [3, 12, 13, 14, 30]. Since the proposal of R2R [3], researchers have dramatically improved the navigation performance of models [17, 24, 28, 52, 53] with techniques like progress monitoring [27], as well as introduced task variants with additional, on-route instructions [37, 38, 50]. Much of this research is limited to static environments. By contrast, ALFRED tasks include navigation, object interactions, and state changes.
+
+Vision & Language Task Completion. There are several existing benchmarks based on simple block worlds and fully observable scenes [9, 33]. ALFRED provides more difficult tasks in richer, visually complex scenes, and uses partially observable environments. The CHAI benchmark [32] evaluates agents performing household instructions, but uses a generic "interact" action. ALFRED has seven manipulation actions, such as pick up, turn on, and open, state changes like clean versus dirty, and variation in language and visual complexity.
+
+Previous work in the original AI2-THOR environment investigated the task of visual semantic planning [19, 57]. Artificial language came from templates, and environment interaction was handled with discrete class predictions, for example selecting apple as the target object from predefined options. ALFRED features human language instructions, and object selections are carried out with class-agnostic, pixelwise interaction masks. In VirtualHome [41], programs are generated from video demonstration and natural language instructions, but inference does not involve egocentric visual and action feedback or partial observability.
+
+There is an extensive literature on language-based instruction following in the natural language processing community. There, research has focused on mapping instructions to actions [5, 13, 31, 35, 48], but these works do not involve visual, interactive environments.
+
+Embodied Question Answering. Existing datasets for visual question answering in embodied environments use templated language or static scenes [15, 20, 54, 56]. In ALFRED, rather than answering a question, the agent must complete a task specified using natural language, which requires both navigation and interaction with objects.
+
+Figure 2: ALFRED annotations. We introduce 7 different task types parameterized by 84 object classes in 120 scenes. An example of each task type is given above. For the Clean & Place demonstration, we also show the three crowdsourced language directives. Please see the supplemental material for example demonstrations and language for each task.
+
+Instruction Alignment. Language annotations of videos enable discovering visual correspondences between words and concepts [1, 45, 42, 55, 58]. ALFRED requires performing tasks in an interactive setting as opposed to learning from recorded videos.
+
+Robotics Instruction Following. Instruction following is a long-standing topic of interest in robotics [7, 10, 29, 34, 39, 40, 46, 51]. Lines of research consider different tasks such as cooking [10], table clearing [39], and mobile manipulation [29]. In general, they are limited to a few scenes [34], consider a small number of objects [29], or use the same environment for training and testing [7]. In contrast, ALFRED includes 120 scenes, many object classes with diverse appearances, and a test set of unseen environments.
+
+# 3. The ALFRED Dataset
+
+The ALFRED dataset comprises 25,743 language directives corresponding to 8,055 expert demonstration episodes. Each directive includes a high-level goal and a set of step-by-step instructions. Each expert demonstration can be deterministically replayed in the AI2-THOR 2.0 simulator.
+
+# 3.1. Expert Demonstrations
+
+Expert demonstrations are composed of an agent's egocentric visual observations of the environment and what action is taken at each timestep as well as ground-truth interaction masks. These demonstrations are generated by a planner [23] using metadata not available to the agent at inference time. Navigation actions move the agent or change its camera orientation, while manipulation actions include picking and placing objects, opening and closing cabinets and drawers, and turning appliances on and off. Interactions can involve multiple objects, such as using a knife to slice an apple, cleaning a mug in the sink, and heating a potato in the microwave. Manipulation actions are accompanied by a ground truth segmentation of the target object.
+
+Figure 2 gives examples of the high-level agent tasks in ALFRED, like putting a cleaned object at a destination. These tasks are parameterized by the object of focus, the destination receptacle (e.g., table top), the scene in which to carry out the task, and in the case of Stack & Place, a base object (e.g., plate). ALFRED contains expert demonstrations of these seven tasks executed using combinations of 58 unique object classes and 26 receptacle object classes across 120 different indoor scenes. For object classes like potato slice, the agent must first pick up a knife and find a potato to create slices. All object classes contain multiple visual variations with different shapes, textures, and colors. For example, there are 30 unique variants of the apple class. Indoor scenes include different room types: 30 each of kitchens, bathrooms, bedrooms, and living rooms.
+
+For 2,685 combinations of task parameters, we generate three expert demonstrations per parameter set, for a total of 8,055 unique demonstrations with an average of 50 action steps. The distribution of action steps in ALFRED demonstrations versus related datasets is given in Figure 3. As an example, for task parameters {task: Heat & Place, object: potato, destination: counter top, scene: KITCHEN-8}, we generate three different expert demonstrations by starting the agent and objects in randomly chosen locations. Object start positions have some commonsense, class-specific constraints, for example a fork can start inside a drawer, but an apple cannot.
+
+In contrast to navigation-only datasets, where expert demonstrations can come from an $A^{*}$ planner, our state space includes object positions and state changes. Thus, to generate expert demonstrations we encode the agent and object states, as well as high-level environment dynamics, into Planning Domain Definition Language (PDDL) rules [18]. We then define task-specific PDDL goal conditions, for example that a heated potato is resting on a table top. Note that the planner encodes the environment as fully observable and has perfect knowledge about world dynamics. For training and testing agent models, however, the environment is partially observable: it is only viewed through the agent's egocentric vision as actions are carried out.
+
+| | Train | Validation (Seen) | Validation (Unseen) | Test (Seen) | Test (Unseen) |
+| --- | --- | --- | --- | --- | --- |
+| # Annotations | 21,023 | 820 | 821 | 1,533 | 1,529 |
+| # Scenes | 108 | 88 | 4 | 107 | 8 |
+
+Table 2: ALFRED Data Splits. All expert demonstrations and associated language directives in the validation and test folds are distinct from those in the train fold. The validation and test sets are split into seen and unseen folds. Scenes in the seen folds of validation and test data are subsets of those in the train fold. Scenes in the unseen validation and test folds are distinct from the train folds and from each other.
+
+We split these expert demonstrations into training, validation, and test folds (Table 2). Following work in vision-and-language navigation [3], we further split the validation and test into two conditions: seen and unseen environments. This split facilitates examining how well models generalize to entirely new spaces with novel object class variations.
+
+# 3.2. Language Directives
+
+For every expert demonstration, we collect open vocabulary, free-form language directives from at least three different annotators using Amazon Mechanical Turk (AMT), resulting in 25k total language directives. Language directives include a high-level goal together with low-level instructions, as shown in Figures 1 and 2. The distribution of language annotation token lengths in ALFRED versus related datasets is given in Figure 3.
+
+AMT workers are told to write instructions to tell a "smart robot" how to accomplish what is shown in a video. We create a video of each expert demonstration and segment it such that each segment corresponds to an instruction. We consult the PDDL plan for the expert demonstration to identify task sub-goals, for example the many low-level steps to navigate to a knife, or the several steps to heat a potato slice in the microwave once standing in front of it. We visually highlight action sequences related to sub-goals via colored timeline bars below the video. In each HIT (Human Intelligence Task), a worker watches the video, then writes low-level, step-by-step instructions for each highlighted sub-goal segment. The worker also writes a high-level goal that summarizes what the robot should accomplish during the expert demonstration.
+
+Figure 3: Comparison to Existing Datasets. Expert demonstration steps and instruction tokens of ALFRED compared to other datasets with human language for action sequences: Touchdown (TD) [14], VirtualHome (VH) [41], and Room-to-Room (R2R) [3]. The total number of demonstrations or annotations is given with the dataset label.
+
+These directives are validated through a second HIT by at least two annotators, with a possible third tie-breaker. For validation, we show a worker all three language directive annotations without the video. The worker selects whether the three directives describe the same actions, and if not, which is most different. If a directive is chosen as most different by a majority of validation workers, it is removed and the demonstration is subsequently re-annotated by another worker. Qualitatively, these rejected annotations contain incorrect object referents (e.g., "egg" instead of "potato") or directions (e.g., "go left towards..." instead of "right").
+
+# 4. Baseline Models
+
+An agent trained for ALFRED tasks needs to jointly reason over vision and language input and produce a sequence of low-level actions to interact with the environment.
+
+# 4.1. Sequence-to-Sequence Models
+
+We model the interactive agent with a CNN-LSTM sequence-to-sequence (SEQ2SEQ) architecture. A CNN encodes the visual input, a bidirectional LSTM generates a representation of the language input, and a decoder LSTM infers a sequence of low-level actions while attending over the encoded language. See Figure 4 for an overview and the supplementary material for implementation details.
+
+Supervision. We train all models using imitation learning on expert trajectories. This ensures the language directives match the visual inputs. At each timestep, the model is trained to produce the expert action and associated interaction mask for manipulation actions.
+
+We note that a DAgger-style [43] student-forcing paradigm in ALFRED is non-trivial, even disregarding language alignment. Obtaining expert demonstration actions on the fly in navigation-only datasets like R2R [3] only requires rerunning $A^{*}$. In ALFRED, on-the-fly demonstrations require re-planning. In some cases re-planning is not possible: if during a task of {Clean & Place, apple, refrigerator, KITCHEN-3} a student-forcing model slices the only apple in the scene, the action cannot be recovered from and the task cannot be completed.
+
+Figure 4: Model overview. At each step, our model reweights the instruction based on the history $(\hat{x}_t)$, and combines the current observation features $(v_{t})$ and the previously executed action $(a_{t - 1})$. These are passed as input to an LSTM cell to produce the current hidden state. Finally, the new hidden state $(h_t)$ is combined with the previous features to predict both the next action $(a_{t})$ and a pixelwise interaction mask over the observed image to indicate an object.
+
+Visual encoding. Each visual observation $o_t$ is encoded with a frozen ResNet-18 [22] CNN, where we take the output of the final convolution layer to preserve spatial information necessary for grounding specific objects in the visual frame. We embed this output using two more $1 \times 1$ convolution layers and a fully-connected layer. During training, a set of $T$ observations from the expert demonstration is encoded as $\overline{V} = \langle v_1, v_2, \ldots, v_T \rangle$ , where $v_t$ is the visual feature vector at time-step $t$ .
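+
+A minimal PyTorch sketch of this visual encoder is given below; the frame resolution, intermediate channel widths, and final feature size are illustrative assumptions rather than the exact configuration.
+
+```python
+import torch
+import torch.nn as nn
+import torchvision
+
+# Frozen ResNet-18 trunk (up to the last conv block) followed by two 1x1 convs
+# and a fully-connected layer producing the visual feature v_t.
+resnet = torchvision.models.resnet18(pretrained=True)
+conv_trunk = nn.Sequential(*list(resnet.children())[:-2])  # drop avgpool and fc
+for p in conv_trunk.parameters():
+    p.requires_grad = False
+
+visual_head = nn.Sequential(
+    nn.Conv2d(512, 256, kernel_size=1), nn.ReLU(),
+    nn.Conv2d(256, 64, kernel_size=1), nn.ReLU(),
+    nn.Flatten(),
+    nn.Linear(64 * 7 * 7, 2500),        # assumes 224x224 frames -> 7x7 feature maps
+)
+
+def encode_frame(o_t):                  # o_t: (B, 3, 224, 224) image tensor
+    with torch.no_grad():
+        feat = conv_trunk(o_t)          # (B, 512, 7, 7) spatial features
+    return visual_head(feat)            # (B, 2500) visual feature v_t
+```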
+
+Language encoding. Given a natural language goal $\overline{G} = \langle g_1, g_2, \ldots, g_{L_g} \rangle$ of $L_g$ words, and step-by-step instructions $\overline{S} = \langle s_1, s_2, \ldots, s_{L_s} \rangle$ of $L_s$ words, we append them into a single input sequence $\overline{X} = \langle g_1, g_2, \ldots, g_{L_g}, <\mathrm{SEP}>, s_1, s_2, \ldots, s_{L_s} \rangle$ with the $<\mathrm{SEP}>$ token indicating the separation between the high-level goal and low-level instructions. This sequence is fed into a bidirectional LSTM encoder to produce an encoding $x = \{x_1, x_2, \ldots, x_{L_g + L_s}\}$ for each word in $\overline{X}$ .
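+
+The goal-plus-instruction encoding can be sketched as below; the embedding size, hidden size, and tokenization details are assumptions.
+
+```python
+import torch
+import torch.nn as nn
+
+class LanguageEncoder(nn.Module):
+    """Embeds <goal> <SEP> <instructions> and encodes the sequence with a BiLSTM."""
+    def __init__(self, vocab_size, emb_dim=100, hidden=128):
+        super().__init__()
+        self.embed = nn.Embedding(vocab_size, emb_dim)
+        self.bilstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
+
+    def forward(self, goal_ids, instr_ids, sep_id):
+        sep = torch.full((goal_ids.size(0), 1), sep_id,
+                         dtype=torch.long, device=goal_ids.device)
+        seq = torch.cat([goal_ids, sep, instr_ids], dim=1)  # X = <goal> <SEP> <instructions>
+        x, _ = self.bilstm(self.embed(seq))                 # per-word encodings x
+        return x
+```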
+
+Attention over language. The agent's action at each timestep is based on an attention mechanism weighting tokens in the instruction. We perform soft-attention on the language features $x$ to compute the attention distribution $\alpha_{t}$ conditioned on the hidden state of the decoder $h_{t - 1}$ from the last timestep:
+
+$$
+z_{t} = \left(W_{x} h_{t-1}\right)^{\top} x,
+$$
+
+$$
+\alpha_{t} = \operatorname{Softmax}\left(z_{t}\right), \tag{1}
+$$
+
+$$
+\hat{x}_{t} = \alpha_{t}^{\top} x
+$$
+
+where $W_{x}$ are learnable parameters of a fully-connected layer, $z_{t}$ is a vector of scalar values that represent the attention mass for each word in $x$ , and $\hat{x}_t$ is the weighted sum of $x$ over the attention distribution $\alpha_{t}$ induced from $z_{t}$ .
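+
+Eq. (1) corresponds to the following soft-attention module, a minimal sketch in which the feature dimensions are illustrative.
+
+```python
+import torch
+import torch.nn as nn
+
+class LanguageAttention(nn.Module):
+    """Soft attention over word encodings conditioned on the decoder hidden state (Eq. 1)."""
+    def __init__(self, hidden_dim=512, lang_dim=256):
+        super().__init__()
+        self.W_x = nn.Linear(hidden_dim, lang_dim, bias=False)
+
+    def forward(self, x, h_prev):
+        # x: (B, L, lang_dim) word encodings; h_prev: (B, hidden_dim) decoder state h_{t-1}
+        z_t = torch.bmm(x, self.W_x(h_prev).unsqueeze(2)).squeeze(2)  # (B, L) attention scores
+        alpha_t = torch.softmax(z_t, dim=1)                           # attention distribution
+        x_hat = torch.bmm(alpha_t.unsqueeze(1), x).squeeze(1)         # weighted sum \hat{x}_t
+        return x_hat, alpha_t
+```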
+
+Action decoding. At each timestep $t$ , upon receiving a new observation image $o_t$ , the LSTM decoder takes in the visual feature $v_t$ , language feature $\hat{x}_t$ , and the previous action $a_{t-1}$ , and outputs a new hidden state $h_t$ :
+
+$$
+u_{t} = \left[v_{t}; \hat{x}_{t}; a_{t-1}\right],
+$$
+
+$$
+h_{t} = \operatorname{LSTM}\left(u_{t}, h_{t-1}\right) \tag{2}
+$$
+
+where $[\cdot]$ denotes concatenation. The hidden state $h_t$ is used to obtain the attention-weighted language feature $\hat{x}_{t + 1}$.
+
+Action and mask prediction. The agent interacts with the environment by choosing an action and producing a pixelwise binary mask indicating a specific object in the frame. Although AI2-THOR supports continuous control for agent navigation and object manipulation, we discretize the action space. The agent chooses from among 13 actions. There are 5 navigation actions: MoveAhead, RotateRight, RotateLeft, LookUp, and LookDown together with 7 interaction actions: Pickup, Put, Open, Close, ToggleOn, ToggleOff, and Slice. Interaction actions require a pixelwise mask to denote the object of interest. Finally, the agent predicts a Stop action to end the episode. We concatenate the hidden state $h_t$ with the input features $u_t$ and train two separate networks to predict the next action $a_{t}$ and interaction mask $m_{t}$:
+
+$$
+a_{t} = \operatorname{argmax}\left(W_{a}\left[h_{t}; u_{t}\right]\right), \tag{3}
+$$
+
+$$
+m_{t} = \sigma\left(\mathbf{deconv}\left[h_{t}; u_{t}\right]\right)
+$$
+
+where $W_{a}$ are learnable parameters of a fully connected layer, deconv is a three-layer deconvolution network, and $\sigma$ is a sigmoid activation function. Action selection is trained using softmax cross entropy with the expert action. The interaction masks are learned end-to-end in a supervised manner based on ground-truth object segmentations using binary cross-entropy loss. The mask loss is rebalanced to account for sparsity in these dense masks in which target objects can take up a small portion of the visual frame.
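+
+A sketch of the two prediction heads in Eq. (3) follows; how the concatenated vector $[h_t; u_t]$ is reshaped into a spatial map before the three-layer deconvolution, as well as all layer sizes, are assumptions for illustration.
+
+```python
+import torch
+import torch.nn as nn
+
+class ActionMaskHeads(nn.Module):
+    """Predicts action logits and a pixelwise interaction mask from [h_t; u_t]."""
+    def __init__(self, feat_dim=1024, n_actions=13):
+        super().__init__()
+        self.action = nn.Linear(feat_dim, n_actions)        # a_t = argmax of these logits
+        self.to_map = nn.Linear(feat_dim, 64 * 7 * 7)        # reshape vector into a spatial map
+        self.deconv = nn.Sequential(                         # three-layer deconvolution network
+            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
+            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
+            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
+        )
+
+    def forward(self, h_t, u_t):
+        hu = torch.cat([h_t, u_t], dim=1)                    # [h_t; u_t]
+        action_logits = self.action(hu)
+        m = self.to_map(hu).view(-1, 64, 7, 7)
+        mask = torch.sigmoid(self.deconv(m))                 # (B, 1, 56, 56) interaction mask
+        return action_logits, mask
+```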
+
+# 4.2. Progress Monitors
+
+ALFRED tasks require reasoning over long sequences of images and instruction words. We propose two auxiliary losses (Eq. 4 & 5) that use additional temporal information to reduce this burden and form a sequence-to-sequence model with progress monitoring (SEQ2SEQ+PM).
+
+Ma et al. [27] showed that agents benefit from maintaining an internal estimate of their progress towards the goal for navigation tasks. Akin to learning a value function in reinforcement learning, progress monitoring helps to learn the utility of each state in the process of achieving the overall task. Intuitively, this allows our agent to better distinguish between visually similar states such as just before putting an object in the microwave versus just after taking the object out. We introduce a simple module that predicts progress, $p_t \in [0,1]$ , conditioned on the decoder hidden state $h_t$ and the concatenated input $u_t$ :
+
+$$
+p_{t} = \sigma\left(W_{p}\left[h_{t}; u_{t}\right]\right). \tag{4}
+$$
+
+The supervision for $p_t$ is based on normalized time-step values $t / T$ , where $t$ is the current time-step, and $T$ is the total length of the expert demonstration (trained via L2 loss).
+
+We also train the agent to predict the number of sub-goals completed so far, $c_{t}$ . These sub-goals represent segments in the demonstration corresponding to sequences of actions like navigation, pickup, and heating as identified in the PDDL plan, discussed in Section 3.2. Each segment has a corresponding language instruction, but the alignment must be learned. This sub-goal prediction encourages the agent to coarsely track its progress through the language directive. This prediction is also conditioned on the decoder hidden state $h_{t}$ and the concatenated input $u_{t}$ :
+
+$$
+c_{t} = \sigma\left(W_{c}\left[h_{t}; u_{t}\right]\right). \tag{5}
+$$
+
+We train $c_t$ in a supervised fashion by using the normalized number of sub-goals accomplished in the expert trajectory at each timestep, $c_t / C$ , as the ground-truth label for a task with $C$ sub-goals. We again train with an L2 loss.
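+
+The two auxiliary heads and their L2 losses (Eq. 4 and 5) can be sketched as follows; the feature dimension is an assumption.
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class ProgressMonitors(nn.Module):
+    """Predicts overall progress p_t and the fraction of completed sub-goals c_t."""
+    def __init__(self, feat_dim=1024):
+        super().__init__()
+        self.W_p = nn.Linear(feat_dim, 1)
+        self.W_c = nn.Linear(feat_dim, 1)
+
+    def forward(self, h_t, u_t):
+        hu = torch.cat([h_t, u_t], dim=1)
+        return torch.sigmoid(self.W_p(hu)), torch.sigmoid(self.W_c(hu))
+
+def progress_loss(p_pred, c_pred, t, T, subgoals_done, C):
+    """L2 losses against t/T and (completed sub-goals)/C."""
+    target_p = torch.full_like(p_pred, t / T)
+    target_c = torch.full_like(c_pred, subgoals_done / C)
+    return F.mse_loss(p_pred, target_p) + F.mse_loss(c_pred, target_c)
+```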
+
+# 5. Experiments
+
+We evaluate the baseline models in the AI2-THOR simulator. When evaluating on test folds, we run models with the lowest validation loss. Episodes that exceed 1000 steps or cause more than 10 failed actions are terminated. Failed actions arise from bumping into walls or predicting action interaction masks for incompatible objects, such as attempting to Pickup a counter top. These limitations encourage efficiency and reliability. We assess the overall and partial success of models' task executions across episodes.
+
+# 5.1. Evaluation Metrics
+
+ALFRED allows us to evaluate both full task and task goal-condition completion. In navigation-only tasks, one can only measure how far the agent is from the goal. In ALFRED, we can also evaluate whether task goal-conditions have been completed, for example that a potato has been sliced. For all of our experiments, we report both Task Success and Goal-Condition Success. Each Goal-Condition relies on multiple instructions, for example navigating to an object and then slicing it.
+
+Task Success. Each expert demonstration is parameterized by a task to be performed, as illustrated in Figure 2. Task Success is defined as 1 if the object positions and state changes correspond correctly to the task goal-conditions at the end of the action sequence, and 0 otherwise. Consider the task: "Put a hot potato slice on the counter". The agent succeeds if, at the end of the episode, any potato slice object has changed to the heated state and is resting on any counter top surface.
+
+Goal-Condition Success. The goal-condition success of a model is the ratio of goal-conditions completed at the end of an episode to those necessary to have finished a task. For example, in the previous Heat & Place example, there are four goal-conditions. First, a potato must be sliced. Second, a potato slice should become heated. Third, a potato slice should come to rest on a counter top. Fourth, the same potato slice that is heated should be on the counter top. If the agent slices a potato, then moves a slice to the counter top without heating it, then the goal-condition success score is $2/4 = 50\%$ . On average, tasks in ALFRED have 2.55 goal conditions. The final score is calculated as the average goal-condition success of each episode. Task success is 1 only if goal-condition success is 1.
+
+| Model | Val Seen Task | Val Seen Goal-Cond | Val Unseen Task | Val Unseen Goal-Cond | Test Seen Task | Test Seen Goal-Cond | Test Unseen Task | Test Unseen Goal-Cond |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| NO LANGUAGE | 0.0 (0.0) | 5.9 (3.4) | 0.0 (0.0) | 6.5 (4.7) | 0.2 (0.0) | 5.0 (3.2) | 0.2 (0.0) | 6.6 (4.0) |
+| NO VISION | 0.0 (0.0) | 5.7 (4.7) | 0.0 (0.0) | 6.8 (6.0) | 0.0 (0.0) | 3.9 (3.2) | 0.2 (0.1) | 6.6 (4.6) |
+| GOAL-ONLY | 0.1 (0.0) | 6.5 (4.3) | 0.0 (0.0) | 6.8 (5.0) | 0.1 (0.1) | 5.0 (3.7) | 0.2 (0.0) | 6.9 (4.4) |
+| INSTRUCTIONS-ONLY | 2.3 (1.1) | 9.4 (6.1) | 0.0 (0.0) | 7.0 (4.9) | 2.7 (1.4) | 8.2 (5.5) | 0.5 (0.2) | 7.2 (4.6) |
+| SEQ2SEQ | 2.4 (1.1) | 9.4 (5.7) | 0.1 (0.0) | 6.8 (4.7) | 2.1 (1.0) | 7.4 (4.7) | 0.5 (0.2) | 7.1 (4.5) |
+| + PM PROGRESS-ONLY | 2.1 (1.1) | 8.7 (5.6) | 0.0 (0.0) | 6.9 (5.0) | 3.0 (1.7) | 8.0 (5.5) | 0.3 (0.1) | 7.3 (4.5) |
+| + PM SUBGOAL-ONLY | 2.1 (1.2) | 9.6 (5.5) | 0.0 (0.0) | 6.6 (4.6) | 3.8 (1.7) | 8.9 (5.6) | 0.5 (0.2) | 7.1 (4.5) |
+| + PM Both | 3.7 (2.1) | 10.0 (7.0) | 0.0 (0.0) | 6.9 (5.1) | 4.0 (2.0) | 9.4 (6.3) | 0.4 (0.1) | 7.0 (4.3) |
+| HUMAN | - | - | - | - | - | - | 91.0 (85.8) | 94.5 (87.6) |
+
+Table 3: Task and Goal-Condition Success on the validation and test folds. For each metric, the corresponding path weighted metrics are given in parentheses. The highest values per fold and metric are shown in blue. All values are percentages.
+
+Path Weighted Metrics. We include a Path Weighted version of both metrics that considers the length of the expert demonstration [2]. Expert demonstrations found via a PDDL solver on global information are not guaranteed to be optimal. However, they avoid exploration, use shortest path navigation, and are generally efficient. The path weighted score $p_{s}$ for metric $s$ is given as
+
+$$
+p_{s} = s \times \frac{L^{*}}{\max\left(L^{*}, \hat{L}\right)} \tag{6}
+$$
+
+where $\hat{L}$ is the number of actions the model took in the episode, and $L^{*}$ is the number of actions in the expert demonstration. Intuitively, a model receives half-credit for taking twice as long as the expert to accomplish a task.
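+
+The metrics can be computed as follows; this is a minimal sketch of goal-condition success and the path weighted score of Eq. (6).
+
+```python
+def goal_condition_success(conditions_met, conditions_total):
+    """Fraction of task goal-conditions satisfied at the end of an episode,
+    e.g. 2 of 4 conditions met gives 0.5."""
+    return conditions_met / conditions_total
+
+def path_weighted(score, expert_len, agent_len):
+    """Eq. (6): down-weight a score by how much longer the agent's episode is
+    than the expert demonstration."""
+    return score * expert_len / max(expert_len, agent_len)
+
+# Example: meeting 2 of 4 goal-conditions in 100 steps where the expert took 50
+# yields a path weighted goal-condition score of 0.5 * 50 / 100 = 0.25.
+```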
+
+# 5.2. Sub-Goal Evaluation
+
+Completing the entire sequence of actions required to finish a task is challenging. In addition to assessing full task success, we study the ability of a model to accomplish the next sub-goal conditioned on the preceding expert sequence. The agent is tested by first forcing it to follow the expert demonstration to maintain a history of states leading up to the sub-goal, then requiring it to complete the sub-goal conditioned on the entire language directive and current visual observation. For the task "Put a hot potato slice on the counter" for example, we can evaluate the sub-goal of navigating to the potato after using the expert demonstration to navigate to and pick up a knife. The tasks in ALFRED contain on average 7.5 such sub-goals (results in Table 4).
+
+# 6. Analysis
+
+Results from our experiments are presented in Table 3. We find that the initial model, without spatial or semantic maps, object segmentations, or explicit object-state tracking, performs poorly on ALFRED's long-horizon tasks with high-dimensional state-spaces. The SEQ2SEQ model achieves $\sim 8\%$ goal-condition success rate, showing that the agent does learn to partially complete some tasks. This headroom (as compared with humans) motivates further research into models that can perform the complex vision-and-language planning introduced by ALFRED. The performance starkly contrasts with other vision-and-language datasets focused on navigation, where sequence-to-sequence with progress monitoring performs well [27].
+
+# 6.1. Random Agent
+
+A random agent is commonly employed as a baseline in vision-and-language tasks. In ALFRED, an agent that chooses a uniform random action and generates a uniform random interaction mask at each timestep achieves $0\%$ on all folds, even without an API failure limit.
+
+# 6.2. Unimodal Ablations
+
+Previous work established that learned agents without visual inputs, language inputs, or both performed better than random agents and were competitive with initial baselines for several navigation and question answering tasks [49]. These performance gaps were due to structural biases in the datasets or issues with model capacity. We evaluate these ablation baselines (NO LANGUAGE and NO VISION) to study vision and language bias in ALFRED.
+
+The unimodal ablation performances in Table 3 indicate that both vision and language modalities are necessary to accomplish the tasks in ALFRED. The NO LANGUAGE model finishes some goal-conditions by interacting with familiar objects seen during training. The NO VISION model similarly finishes some goal-conditions by following low-level language instructions for navigation and memorizing interaction masks for common objects like microwaves that are centered in the visual frame.
+
+# 6.3. Model Ablations
+
+We additionally ablate the amount of language supervision available to the model, as language directives are given as both a high-level goal and step-by-step instructions. Providing only high-level, underspecified goal language (GOAL-ONLY) is insufficient to complete the tasks, but is enough to complete some goal-conditions. Using just low-level, step-by-step instructions (INSTRUCTIONS-ONLY) performs similarly to using both high- and low-level language. Thus, this simple model does not seem to exploit the goal instruction to plan out sub-goals for step-by-step execution.
+
+The two progress monitoring signals are marginally helpful, increasing the success rate from $\sim 1\%$ to $\sim 2\%$. Progress monitoring leads to more efficient task completion, as indicated by the consistently higher path weighted scores. They may help avoid action repetition and aid the prediction of the Stop action.
+
+The agent takes more steps than the expert in all cases, as indicated by the lower path weighted scores. Sometimes, this is caused by failing to keep track of state-changes, for example heating up an egg in the microwave multiple times. Further, the models also do not generalize well to unseen scenes, due to the overall visual complexity in ALFRED arising from new scenes and novel object class instances.
+
+# 6.4. Human evaluation
+
+We obtained a human evaluation of 100 randomly sampled directives from the unseen test fold. The experiment involved 5 participants who completed 20 tasks each using a keyboard-and-mouse interface. Before the experiment, the participants were allowed to familiarize themselves with AI2-THOR. The action-space and task restrictions were identical to that of the baseline models. Overall, the participants obtained a high success rate of $91\%$ , while taking slightly longer than the expert with $86\%$ path-length weighted success rate. This indicates that the directives in ALFRED are well-aligned with the demonstrations.
+
+# 6.5. Sub-Goal Performance
+
+We also examine performance of the SEQ2SEQ model on individual sub-goals in ALFRED. For this experiment, we use the expert trajectory to move the agent through the episode up to the sub-task. Then, the agent begins inference based on the language directive and current visual frame.
+
+Table 4 presents path-length weighted success scores for 8 sub-goals. Goto and Pickup sub-tasks with the SEQ2SEQ+PM model achieve $\sim 51\%$ and $\sim 32\%$, respectively, even in seen environments. Visual semantic navigation is considerably harder in unseen environments. Similarly, interaction masks for Pickup actions in unseen environments are worse due to unfamiliar scenes and object instances. Simple sub-goals like Cool and Heat are achieved at a high success rate of $\sim 90\%$ because these tasks are mostly object-agnostic. For example, the agent becomes familiar with using microwaves to heat things regardless of the object in hand, because microwaves have little visual diversity across kitchens. Overall, the sub-goal evaluations indicate that models that exploit modularity and hierarchy, or make use of pretrained object segmentation models, may make headway on full task sequences.
+
+| Fold | Model | Goto | Pickup | Put | Cool | Heat | Clean | Slice | Toggle | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Seen | No Lang | 28 | 22 | 71 | 89 | 87 | 64 | 19 | 90 | 59 |
| Seen | S2S | 49 | 32 | 80 | 87 | 85 | 82 | 23 | 97 | 67 |
| Seen | S2S + PM | 51 | 32 | 81 | 88 | 85 | 81 | 25 | 100 | 68 |
| Unseen | No Lang | 17 | 9 | 31 | 75 | 86 | 13 | 8 | 4 | 30 |
| Unseen | S2S | 21 | 20 | 51 | 94 | 88 | 21 | 14 | 54 | 45 |
| Unseen | S2S + PM | 22 | 21 | 46 | 92 | 89 | 57 | 12 | 32 | 46 |
+
+Table 4: Evaluations by path weighted sub-goal success. All values are percentages. The highest values per fold and task are shown in blue. We note that the NO VISION model achieves less than $2\%$ on all sub-goals. See supplemental material for more.
+
+# 7. Conclusions
+
+We introduced ALFRED, a benchmark for learning to map natural language instructions and egocentric vision to sequences of actions. ALFRED moves us closer to a community goal of language-driven robots capable of navigation and interaction. The environment dynamics and interaction mask predictions required in ALFRED narrow the gap between what is required of agents in simulation and robots operating in the real world [36].
+
+We use ALFRED to evaluate a sequence-to-sequence model with progress monitoring, shown to be effective in other vision-and-language navigation tasks [27]. While this model is relatively competent at accomplishing some sub-goals (e.g. operating microwaves is similar across Heat & Place tasks), the overall task success rates are poor. The long horizon of ALFRED tasks poses a significant challenge with sub-problems including visual semantic navigation, object detection, referring expression grounding, and action grounding. These challenges may be approachable by models that exploit hierarchy [8, 26], modularity [4, 16], and structured reasoning and planning [6]. We are encouraged by the possibilities and challenges that the ALFRED benchmark introduces to the community.
+
+# Acknowledgements
+
+Thanks to our UW colleagues for helpful feedback, to Eli Vanderbilt and Eric Kolve for their help with AI2-THOR and leaderboard setup, and Victor Zhong for early modeling design. And finally, thanks to Ranjay Krishna for sharing the Mechanical Turk annotation interface. This research was supported in part by the ARO (ARO-W911NF16-1-0121), the NSF (IIS1252835, IIS-1562364, NSF-NRI1637479), and the Allen Institute for Artificial Intelligence.
+
+# References
+
+[1] Jean-Baptiste Alayrac, Piotr Bojanowski, Nishant Agrawal, Ivan Laptev, Josef Sivic, and Simon Lacoste-Julien. Unsupervised learning from narrated instruction videos. In CVPR, 2016.
+[2] Peter Anderson, Angel Chang, Devendra Singh Chaplot, Alexey Dosovitskiy, Saurabh Gupta, Vladlen Koltun, Jana Kosecka, Jitendra Malik, Roozbeh Mottaghi, Manolis Savva, et al. On evaluation of embodied navigation agents. arXiv preprint arXiv:1807.06757, 2018.
+[3] Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. Vision-and-Language Navigation: Interpreting visually-grounded navigation instructions in real environments. In CVPR, 2018.
+[4] Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Neural module networks. In CVPR, June 2016.
+[5] Yoav Artzi and Luke Zettlemoyer. Weakly supervised learning of semantic parsers for mapping instructions to actions. TACL, 2013.
+[6] Masataro Asai and Alex Fukunaga. Classical planning in deep latent space: Bridging the subsymbolic-symbolic boundary. In AAAI, 2018.
+[7] Michael Beetz, Ulrich Klank, Ingo Kresse, Alexis Maldonado, Lorenz Mösenlechner, Dejan Pangercic, Thomas Rühr, and Moritz Tenorth. Robotic roommates making pancakes. In IEEE-RAS, 2011.
+[8] Yonatan Bisk, Daniel Marcu, and William Wong. Towards a dataset for human computer communication via grounded language acquisition. In AAAI Workshop on Symbiotic Cognitive Systems, 2016.
+[9] Yonatan Bisk, Deniz Yuret, and Daniel Marcu. Natural language communication with robots. In NAACL, 2016.
+[10] Mario Bollini, Stefanie Tellex, Tyler Thompson, Nicholas Roy, and Daniela Rus. Interpreting and executing recipes with a cooking robot. In ISER, 2012.
+[11] Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3d: Learning from RGB-D data in indoor environments. 3DV, 2017.
+[12] Devendra Singh Chaplot, Kanthashree Mysore Sathyendra, Rama Kumar Pasumarthi, Dheeraj Rajagopal, and Ruslan Salakhutdinov. Gated-attention architectures for task-oriented language grounding. In AAAI, 2017.
+[13] David L Chen and Raymond J Mooney. Learning to interpret natural language navigation instructions from observations. In AAAI, 2011.
+[14] Howard Chen, Alane Suhr, Dipendra Misra, Noah Snavely, and Yoav Artzi. Touchdown: Natural language navigation and spatial reasoning in visual street environments. In CVPR, 2019.
+[15] Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra. Embodied Question Answering. In CVPR, 2018.
+[16] Abhishek Das, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra. Neural Modular Control for Embodied Question Answering. In CoRL, 2018.
+
+[17] Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, and Trevor Darrell. Speaker-follower models for vision-and-language navigation. In NeurIPS, 2018.
+[18] Malik Ghallab, Adele Howe, Craig Knoblock, Drew McDermott, Ashwin Ram, Manuela Veloso, Daniel Weld, and David Wilkins. PDDL: The Planning Domain Definition Language. 1998.
+[19] Daniel Gordon, Dieter Fox, and Ali Farhadi. What should i do now? marrying reinforcement learning and symbolic planning. arXiv preprint arXiv:1901.01492, 2018.
+[20] Daniel Gordon, Aniruddha Kembhavi, Mohammad Rastegari, Joseph Redmon, Dieter Fox, and Ali Farhadi. IQA: Visual question answering in interactive environments. In CVPR, 2017.
+[21] Stevan Harnad. The symbol grounding problem. Physica D, 42:335-346, 1990.
+[22] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
+[23] Jörg Hoffmann and Bernhard Nebel. The FF planning system: Fast plan generation through heuristic search. JAIR, 2001.
+[24] Liyiming Ke, Xiujun Li, Yonatan Bisk, Ari Holtzman, Zhe Gan, Jingjing Liu, Jianfeng Gao, Yejin Choi, and Siddhartha Srinivasa. Tactical rewind: Self-correction via backtracking in vision-and-language navigation. In CVPR, 2019.
+[25] Eric Kolve, Roozbeh Mottaghi, Winson Han, Eli VanderBilt, Luca Weihs, Alvaro Herrasti, Daniel Gordon, Yuke Zhu, Abhinav Gupta, and Ali Farhadi. AI2-THOR: An Interactive 3D Environment for Visual AI. arXiv preprint arXiv:1712.05474, 2017.
+[26] Tejas D Kulkarni, Karthik Narasimhan, Ardavan Saeedi, and Josh Tenenbaum. Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. In NeurIPS, pages 3675-3683, 2016.
+[27] Chih-Yao Ma, Jiasen Lu, Zuxuan Wu, Ghassan AlRegib, Zsolt Kira, Richard Socher, and Caiming Xiong. Self-monitoring navigation agent via auxiliary progress estimation. In ICLR, 2019.
+[28] Chih-Yao Ma, Zuxuan Wu, Ghassan AlRegib, Caiming Xiong, and Zsolt Kira. The regretful agent: Heuristic-aided navigation through progress estimation. In CVPR, 2019.
+[29] James MacGlashan, Monica Babes-Vroman, Marie des-Jardins, Michael L. Littman, Smaranda Muresan, Shawn Squire, Stefanie Tellex, Dilip Arumugam, and Lei Yang. Grounding english commands to reward functions. In RSS, 2015.
+[30] Matt MacMahon, Brian Stankiewicz, and Benjamin Kuipers. Walk the talk: Connecting language, knowledge, and action in route instructions. In AAAI, 2006.
+[31] Jonathan Malmaud, Earl Wagner, Nancy Chang, and Kevin Murphy. Cooking with semantics. In ACL Workshop on Semantic Parsing, 2014.
+[32] Dipendra Misra, Andrew Bennett, Valts Blukis, Eyvind Niklasson, Max Shatkhin, and Yoav Artzi. Mapping instructions to actions in 3D environments with visual goal prediction. In EMNLP, 2018.
+
+[33] Dipendra Misra, John Langford, and Yoav Artzi. Mapping instructions and visual observations to actions with reinforcement learning. In EMNLP, 2017.
+[34] Dipendra Misra, Jaeyong Sung, Kevin Lee, and Ashutosh Saxena. Tell me Dave: Context-sensitive grounding of natural language to manipulation instructions. In RSS, 2014.
+[35] Dipendra Misra, Kejia Tao, Percy Liang, and Ashutosh Saxena. Environment-driven lexicon induction for high-level instructions. In ACL, 2015.
+[36] Arsalan Mousavian, Clemens Eppner, and Dieter Fox. 6-DOF GraspNet: Variational grasp generation for object manipulation. In ICCV, 2019.
+[37] Khanh Nguyen and Hal Daumé III. Help, Anna! Visual Navigation with Natural Multimodal Assistance via Retrospective Curiosity-Encouraging Imitation Learning. In EMNLP, 2019.
+[38] Khanh Nguyen, Debadeepta Dey, Chris Brockett, and Bill Dolan. Vision-based navigation with language-based assistance via imitation learning with indirect intervention. In CVPR, 2019.
+[39] Daniel Nyga, Subhro Roy, Rohan Paul, Daehyung Park, Mihai Pomerlan, Michael Beetz, and Nicholas Roy. Grounding robot plans from natural language instructions with incomplete world knowledge. In CoRL, 2018.
+[40] Rohan Paul, Jacob Arkin, Derya Aksaray, Nicholas Roy, and Thomas M. Howard. Efficient grounding of abstract spatial concepts for natural language interaction with robot platforms. IJRR, 2018.
+[41] Xavier Puig, Kevin Ra, Marko Boben, Jiaman Li, Tingwu Wang, Sanja Fidler, and Antonio Torralba. Virtualhome: Simulating household activities via programs. In CVPR, 2018.
+[42] Michaela Regneri, Marcus Rohrbach, Dominikus Wetzel, Stefan Thater, Bernt Schiele, and Manfred Pinkal. Grounding action descriptions in videos. TACL, 2013.
+[43] Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In AISTATS, 2011.
+[44] Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, Devi Parikh, and Dhruv Batra. Habitat: A platform for embodied ai research. In ICCV, 2019.
+[45] Ozan Sener, Amir R. Zamir, Silvio Savarese, and Ashutosh Saxena. Unsupervised semantic parsing of video collections. In ICCV, 2015.
+[46] Mohit Shridhar and David Hsu. Interactive visual grounding of referring expressions for human-robot interaction. In RSS, 2018.
+[47] Hao Tan, Licheng Yu, and Mohit Bansal. Learning to navigate unseen environments: Back translation with environmental dropout. In NAACL, 2019.
+[48] Moritz Tenorth, Daniel Nyga, and Michael Beetz. Understanding and executing instructions for everyday manipulation tasks from the world wide web. In ICRA, 2010.
+[49] Jesse Thomason, Daniel Gordon, and Yonatan Bisk. Shifting the baseline: Single modality performance on visual navigation & qa. In NAACL, 2019.
+
+[50] Jesse Thomason, Michael Murray, Maya Cakmak, and Luke Zettlemoyer. Vision-and-dialog navigation. In CoRL, 2019.
+[51] Jesse Thomason, Shiqi Zhang, Raymond Mooney, and Peter Stone. Learning to interpret natural language commands through human-robot dialog. In IJCAI, 2015.
+[52] Xin Wang, Qiuyuan Huang, Asli Celikyilmaz, Jianfeng Gao, Dinghan Shen, Yuan-Fang Wang, William Yang Wang, and Lei Zhang. Reinforced cross-modal matching and self-supervised imitation learning for vision-language navigation. In CVPR, 2019.
+[53] Xin Wang, Wenhan Xiong, Hongmin Wang, and William Yang Wang. Look before you leap: Bridging model-free and model-based reinforcement learning for planned-ahead vision-and-language navigation. In ECCV, 2018.
+[54] Erik Wijmans, Samyak Datta, Oleksandr Maksymets, Abhishek Das, Georgia Gkioxari, Stefan Lee, Irfan Essa, Devi Parikh, and Dhruv Batra. Embodied question answering in photorealistic environments with point cloud perception. In CVPR, 2019.
+[55] Haonan Yu and Jeffrey Mark Siskind. Grounded language learning from video described with sentences. In ACL, 2013.
+[56] Licheng Yu, Xinlei Chen, Georgia Gkioxari, Mohit Bansal, Tamara L. Berg, and Dhruv Batra. Multi-target embodied question answering. In CVPR, 2019.
+[57] Yuke Zhu, Daniel Gordon, Eric Kolve, Dieter Fox, Li Fei-Fei, Abhinav Gupta, Roozbeh Mottaghi, and Ali Farhadi. Visual semantic planning using deep successor representations. In ICCV, 2017.
+[58] Dimitri Zhukov, Jean-Baptiste Alayrac, Ramadan Gokberk Cinbis, David Fouhey, Ivan Laptev, and Josef Sivic. Cross-task weakly supervised learning from instructional videos. In CVPR, 2019.
\ No newline at end of file
diff --git a/alfredabenchmarkforinterpretinggroundedinstructionsforeverydaytasks/images.zip b/alfredabenchmarkforinterpretinggroundedinstructionsforeverydaytasks/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..396619216fc9b7b111f9a37187cf9f33a34f821f
--- /dev/null
+++ b/alfredabenchmarkforinterpretinggroundedinstructionsforeverydaytasks/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2113156b4f0f273e56121aefe93c5822e093a1cbf3f6563a73abb763fbea897b
+size 502183
diff --git a/alfredabenchmarkforinterpretinggroundedinstructionsforeverydaytasks/layout.json b/alfredabenchmarkforinterpretinggroundedinstructionsforeverydaytasks/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a1d569585f6dc396f22c25cd1ef5e6483eeae90d
--- /dev/null
+++ b/alfredabenchmarkforinterpretinggroundedinstructionsforeverydaytasks/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:06a972be7ef434179a5f45413b4bd7c9e158b731e20828bf98978f0b5fe14ef9
+size 396797
diff --git a/alleviationofgradientexplodingingansfakecanbereal/9d09d319-724f-4551-8546-a54af15ed68f_content_list.json b/alleviationofgradientexplodingingansfakecanbereal/9d09d319-724f-4551-8546-a54af15ed68f_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..99a2c7c20af4894c31ad79b7e82c4be46a29ca63
--- /dev/null
+++ b/alleviationofgradientexplodingingansfakecanbereal/9d09d319-724f-4551-8546-a54af15ed68f_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8f861d0595e9960a60ebcd1361d5010538c7514738fcb9a8e6797a276a99aa89
+size 71754
diff --git a/alleviationofgradientexplodingingansfakecanbereal/9d09d319-724f-4551-8546-a54af15ed68f_model.json b/alleviationofgradientexplodingingansfakecanbereal/9d09d319-724f-4551-8546-a54af15ed68f_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..0769f2c474b8b1023554d2a7ac1a2c6c5864f5eb
--- /dev/null
+++ b/alleviationofgradientexplodingingansfakecanbereal/9d09d319-724f-4551-8546-a54af15ed68f_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:00e64bfdb10491b306540182d0902e3e56d82a0308ef63bbdfc4604bc3e22788
+size 92032
diff --git a/alleviationofgradientexplodingingansfakecanbereal/9d09d319-724f-4551-8546-a54af15ed68f_origin.pdf b/alleviationofgradientexplodingingansfakecanbereal/9d09d319-724f-4551-8546-a54af15ed68f_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..30337c1353cab811e15228fd59df7ac58300856e
--- /dev/null
+++ b/alleviationofgradientexplodingingansfakecanbereal/9d09d319-724f-4551-8546-a54af15ed68f_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:50c54c8fff9d3b4f25a2c4acb7fbc5636a39cdede36bb09e1df202207ee4428e
+size 19455379
diff --git a/alleviationofgradientexplodingingansfakecanbereal/full.md b/alleviationofgradientexplodingingansfakecanbereal/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..3fd1815c850a750ec5c9e7868aea049e88879e57
--- /dev/null
+++ b/alleviationofgradientexplodingingansfakecanbereal/full.md
@@ -0,0 +1,355 @@
+# Alleviation of Gradient Exploding in GANs: Fake Can Be Real
+
+Song Tao, Jia Wang*
+Department of Electronic Engineering
+Shanghai Jiao Tong University
+{taosong, jiawang}@sjtu.edu.cn
+
+# Abstract
+
+In order to alleviate the notorious mode collapse phenomenon in generative adversarial networks (GANs), we propose a novel training method for GANs in which certain fake samples are considered as real ones during training. This strategy reduces the gradient value that the generator receives in regions where gradient exploding happens. We show how an unbalanced generation arises in practical training and how a vicious circle resulting from gradient exploding makes it progressively worse, which explains the instability of GANs. We also theoretically prove that gradient exploding can be alleviated by penalizing the difference between discriminator outputs on very close real and fake samples and by considering fake samples as real. Accordingly, Fake-As-Real GAN (FARGAN) is proposed, with a more stable training process and a more faithful generated distribution. Experiments on different datasets verify our theoretical analysis.
+
+# 1. Introduction
+
+In the past few years, Generative Adversarial Networks (GANs) [10] have been one of the most popular topics in generative models and have achieved great success in generating diverse and high-quality images [5, 16, 8]. GANs can be expressed as a zero-sum game between the discriminator and the generator. When the final theoretical equilibrium is achieved, the discriminator can never distinguish between real and fake generated samples. However, we show that this theoretical equilibrium can seldom be realized in practice, where training uses only discrete, finite samples from the datasets.
+
+Although GANs have achieved remarkable progress, numerous researchers have tried to improve the performance of GANs from various aspects [2, 23, 11, 21], because of inherent problems in GAN training such as instability and mode collapse. [3] showed that a theoretical generalization guarantee is not provided by the original GAN objective and analyzed the generalization capacity of neural network distance. The authors argued that a low-capacity discriminator cannot provide the generator with enough information to fit the target distribution, owing to its inability to detect mode collapse. [31] argued that poor generation capacity in GANs comes from discriminators trained on finite samples, which results in overfitting to the real data samples and in gradient exploding when generated datapoints approach real ones. As a result, [31] proposed a zero-centered gradient penalty on linear interpolations between real and fake samples to improve generalization capability and prevent mode collapse resulting from gradient exploding. Recent work [32] further studied generalization from a new perspective of privacy protection.
+
+In this paper, we focus on the mode collapse resulting from gradient exploding studied in [31], and achieve better generalization with a much more stable training process. Our contributions are as follows:
+
+1. We explain the generation process of an unbalanced distribution in GAN training, which becomes more and more serious as training progresses owing to a vicious circle resulting from gradient exploding.
+2. We prove that the gradient exploding issue can be effectively alleviated by penalizing the difference between discriminator outputs on very close real and fake samples and by considering fake samples as real where gradient exploding happens.
+3. We propose a novel GAN training method, FARGAN, which considers certain fake samples as real ones according to the discriminator outputs in a training minibatch, to effectively prevent the unbalanced generation. Experiments on synthetic and real-world datasets verify that our method can stabilize the training process and achieve a more faithful generated distribution.
+
+In the sequel, we use the terms generated samples (datapoints) and fake samples (datapoints) interchangeably. Tab. 1 lists some key notations used in the rest of the paper.
+
+Table 1. NOTATIONS
+
+| Symbol | Meaning |
| --- | --- |
| $p_r$ | the target distribution |
| $p_g$ | the model distribution |
| $D$ | the discriminator with a sigmoid function in the last layer |
| $D_0$ | the discriminator with the sigmoid function in the last layer removed |
| $D_r=\{x_1,\cdots,x_n\}$ | the set of $n$ real samples |
| $D_g=\{y_1,\cdots,y_m\}$ | the set of $m$ generated samples |
| $D_{\mathrm{FAR}}=\{\widetilde{y}_1,\cdots,\widetilde{y}_{N_0}\}$ | the set of $N_0$ generated samples considered as real |
+
+# 2. Related work
+
+Instability. GANs have long been considered difficult to train, and the training process is often unstable [30]. Various methods have been proposed to improve the stability of training. Many works stabilize training with well-designed architectures [27, 15, 35, 6] or better objectives [23, 36, 2, 19]. Gradient penalties enforcing Lipschitz continuity are another popular direction for improving stability, including [11, 25, 28, 26]. From the theoretical side, [22] showed that GAN optimization based on gradient descent is locally stable, and [20] proved local convergence for simplified zero-centered gradient penalties under suitable assumptions. For better convergence, a two time-scale update rule (TTUR) [13] and exponential moving averaging (EMA) [34] have also been studied.
+
+Mode collapse. Mode collapse is another persistent essential problem for the training of GANs, which means lack of diversity in the generated samples. The generator may sometimes fool the discriminator by producing a very small set of high-probability samples from the data distribution. Recent work [3, 4] studied the generalization capacity of GANs and showed that the model distributions learned by GANs do miss a significant number of modes. A large number of ideas have been proposed to prevent mode collapse. Multiple generators are applied in [3, 9, 14] to achieve a more faithful distribution. Mixed samples are considered as the inputs of discriminator in [17, 18] to convey information on diversity. Recent work [12] studied mode collapse from probabilistic treatment and [33, 7] from the entropy of distribution.
+
+# 3. Background
+
+In the original GAN [10], the discriminator D maximizes the following objective:
+
+$$
+\mathcal{L} = E_{x \sim p_{r}}[\log(D(x))] + E_{y \sim p_{g}}[\log(1 - D(y))], \tag{1}
+$$
+
+and, to prevent gradient vanishing, the generator G in the NonSaturating GAN (NSGAN) [10] maximizes
+
+$$
+\mathcal{L}_{G} = E_{y \sim p_{g}}[\log(D(y))], \tag{2}
+$$
+
+where $D$ is usually represented by a neural network. [10] showed that the optimal discriminator $D$ in Eqn.1 is $D^{*}(v) = \frac{p_{r}(v)}{p_{r}(v) + p_{g}(v)}$ for any $v\in supp(p_r)\cup supp(p_g)$ . As training progresses, $p_{g}$ will be pushed closer to $p_{r}$ . If G and D are given enough capacity, a global equilibrium is reached when $p_r = p_g$ , in which case the best strategy for D on $supp(p_r)\cup supp(p_g)$ is just to output $\frac{1}{2}$ and the optimal value for Eqn.1 is $2\log (\frac{1}{2})$ .
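+
+For completeness, this optimum follows from maximizing Eqn. 1 pointwise: for each $v$, the integrand is $p_r(v)\log D(v) + p_g(v)\log(1 - D(v))$, and setting its derivative with respect to $D(v)$ to zero gives
+
+$$
+\frac{p_{r}(v)}{D(v)} - \frac{p_{g}(v)}{1 - D(v)} = 0 \quad\Longrightarrow\quad D^{*}(v) = \frac{p_{r}(v)}{p_{r}(v) + p_{g}(v)}.
+$$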
+
+With only finite training examples in the training dataset $D_r$ in practice, we empirically use $\frac{1}{n}\sum_{i=1}^{n}\log(D(x_i))$ to estimate $E_{x\sim p_r}[\log(D(x))]$ and $\frac{1}{m}\sum_{i=1}^{m}\log(1 - D(y_i))$ to estimate $E_{y\sim p_g}[\log(1 - D(y))]$, where $x_i$ and $y_i$ are from $D_r$ and the generated dataset $D_g$, respectively.
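+
+As an illustration, a minimal PyTorch sketch of these empirical minibatch estimates is given below, writing the network output as the pre-sigmoid value $D_0(\cdot)$ so that $D(\cdot)=\sigma(D_0(\cdot))$ (the function and variable names are ours):
+
+```python
+import torch
+import torch.nn.functional as F
+
+def nsgan_losses(D0, x_real, y_fake):
+    """Empirical estimates of Eqn. 1 (discriminator) and Eqn. 2 (generator).
+
+    D0 returns pre-sigmoid outputs; x_real is a minibatch from D_r and
+    y_fake a minibatch from D_g.
+    """
+    logits_real = D0(x_real)
+    logits_fake = D0(y_fake)
+
+    # Discriminator maximizes E[log D(x)] + E[log(1 - D(y))]; minimize the negative.
+    d_loss = F.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real)) \
+           + F.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake))
+
+    # Non-saturating generator maximizes E[log D(y)]; minimize the negative.
+    # (For the generator step the fake logits are recomputed on freshly generated samples.)
+    g_loss = F.binary_cross_entropy_with_logits(logits_fake, torch.ones_like(logits_fake))
+    return d_loss, g_loss
+```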
+
+Mode collapse in generator is attributed to gradient exploding in discriminator, according to [31]. When a fake datapoint $y_0$ is pushed to a real datapoint $x_0$ and if $|D(x_0) - D(y_0)| \geq \epsilon$ is satisfied, the absolute value of directional derivative of $D$ in the direction $\mu = x_0 - y_0$ will approach infinity:
+
+$$
+\left|(\nabla_{\mu} D)_{x_{0}}\right| = \lim_{y_{0} \xrightarrow{\mu} x_{0}} \frac{\left|D(x_{0}) - D(y_{0})\right|}{\left\|x_{0} - y_{0}\right\|} \geq \lim_{y_{0} \xrightarrow{\mu} x_{0}} \frac{\epsilon}{\left\|x_{0} - y_{0}\right\|} = \infty, \tag{3}
+$$
+
+in which case the gradient norm of discriminator at $y_0$ , $||\nabla_{y_0}D(y_0)||$ , is equivalent to $|(\nabla_\mu D)_{x_0}|$ and gradient explodes. Since $\nabla_{y_0}D(y_0)$ outweighs gradients towards other modes in a training minibatch, gradient exploding at datapoint $y_0$ will move multiple fake datapoints towards $x_0$ resulting in mode collapse.
+
+# 4. Unbalanced Generation
+
+Theoretically, the discriminator outputs a constant $\frac{1}{2}$ when a global equilibrium is reached. In practice, however, the discriminator can often easily distinguish between real and fake samples [10, 2]. Because the target distribution $p_r$ is unknown to the discriminator, it will always consider training samples in $D_r$ as real and generated samples in $D_g$ as fake. Even when the generated distribution $p_{g}$ is identical to the target distribution $p_{r}$, $D_{r}$ and $D_{g}$ are disjoint with probability 1 when they are sampled from two continuous distributions (Proposition 1 in [31]). In this case, $D_{g}$ is effectively pushed towards the samples in $D_{r}$. We now explain in detail how an unbalanced distribution that deviates from $p_{r}$ is generated.
+
+Figure 1. Results on finite samples from a Gaussian distribution of GANs trained with different gradient penalties and our method. Blue datapoints represent real samples and red datapoints represent generated samples. (a)(e) NSGAN with no GP, iter. 100k and 200k. (b)(f) NSGAN-0GP-sample, iter. 100k and 200k. (c)(g) NSGAN-0GP-interpolation, iter. 100k and 200k. (d)(h) NSGAN-0GP-sample with our method, iter. 100k and 200k.
+
+Definition 1 For $x_0 \in D_r, y_0 \in D_g$ , $\{x_0, y_0\}$ is a $\delta$ close pair if $y_0 \in \mathcal{N}^\delta(x_0) = \{y_0 : d(x_0, y_0) \leq \delta, 0 < \delta \ll d(x_i, x_j), \forall x_i, x_j \in D_r\}$ . Additionally, $x_0$ is called an overfitting source in a close pair $\{x_0, y_0\}$ .
+
+During the process of $D_g$ approaching $D_r$ , multiple overfitting sources will appear. The following proposition shows that the optimal empirical discriminator does not give equal outputs between the corresponding real and fake samples for all close pairs.
+
+Proposition 1 If overfitting sources exist, an empirical discriminator satisfying $D(x_0) - D(y_0) \geq \epsilon$ on a close pair $\{x_0, y_0\}$ can easily be constructed as an MLP with only $\mathcal{O}(2\dim(x))$ parameters.
+
+See Appendix A for the detailed proof. The discriminators used in practice usually contain hundreds of millions of parameters and are far more powerful than the discriminator constructed above. Although [31] constructed a discriminator that distinguishes all samples in $D_r$ from those in $D_g$, it uses many more parameters, comparable to those used in practice, whereas we only need to distinguish a single close pair $\{x_0, y_0\}$ rather than all samples.
+
+From Eqn. 2, the gradient norm the generator receives from the discriminator at $y_0$ for a close pair $\{x_0, y_0\}$ can be computed as
+
+$$
+\left\|\nabla_{y_{0}} \mathcal{L}_{G}(y_{0})\right\| = \frac{1}{D(y_{0})} \lim_{y_{0} \xrightarrow{\mu} x_{0}} \frac{\left|D(x_{0}) - D(y_{0})\right|}{\left\|x_{0} - y_{0}\right\|}. \tag{4}
+$$
+
+When $D(x_0) - D(y_0) \geq \epsilon$ is satisfied and $\{x_0, y_0\}$ happens to be a close pair, the gradient of the generator at $y_0$ explodes and excessively outweighs the gradients towards other modes. Fake samples will be moved in the direction $\mu = x_0 - y_0$, and in particular other fake samples in the minibatch will not be moved towards their corresponding modes, making the unbalanced generation visible. See the generated results on a Gaussian dataset of the original GAN in Fig. 1a, 1e. The generated distribution neither covers the target Gaussian distribution nor fits all the real samples in $D_r$.
+
+# 5. Gradient Alleviation
+
+In this section, we search for ways of alleviating the gradient exploding issue to achieve a more faithful generated distribution. For simplicity of analysis, we separate the sigmoid function $\sigma$ from the last layer of $D$, i.e. $D(\cdot) = \sigma(D_0(\cdot))$. The gradient norm of the generator at $y_0$ for a close pair $\{x_0, y_0\}$ can then be rewritten as
+
+$$
+\left\|\nabla_{y_{0}} \mathcal{L}_{G}(y_{0})\right\| = \sigma(-D_{0}(y_{0})) \lim_{y_{0} \xrightarrow{\mu} x_{0}} \frac{\left|D_{0}(x_{0}) - D_{0}(y_{0})\right|}{\left\|x_{0} - y_{0}\right\|}. \tag{5}
+$$
+
+Consider the scenario in which $x_0$, in a set of $n$ real samples, is an overfitting source for $\{y_1, y_2, \dots, y_{m_0}\}$, in a set of $m$ generated samples, i.e., $\{x_0, y_i\}, i = 1, \dots, m_0$, are close pairs. We are especially interested in the outputs of the optimal discriminator at $x_0$ and $\{y_1, y_2, \dots, y_{m_0}\}$. For simplicity, we assume that the outputs of the discriminator at these points of interest are not affected by other samples in $D_r$ and $D_g$. We also assume the discriminator has enough capacity to achieve the optimum in this local region.
+
+# 5.1. Difference Penalization
+
+We first consider penalizing the $L_{2}$ norm of the output differences on close pairs, resulting in the following empirical discriminator objective:
+
+$$
+\mathcal{L}_{\mathrm{DP}} = \frac{1}{n}\left[\log\sigma(D_{0}(x_{0})) + \sum_{i=1}^{n-1}\log\sigma(D_{0}(x_{i}))\right] + \frac{1}{m}\left[\sum_{i=1}^{m_{0}}\log(1-\sigma(D_{0}(y_{i}))) + \sum_{i=m_{0}+1}^{m}\log(1-\sigma(D_{0}(y_{i})))\right] - \frac{k}{m_{0}}\sum_{i=1}^{m_{0}}(D_{0}(x_{0}) - D_{0}(y_{i}))^{2} = C_{1} + \frac{1}{n} f(D_{0}(x_{0}), D_{0}(y_{1}), \dots, D_{0}(y_{m_{0}})), \tag{6}
+$$
+
+where $k$ is the weight of the $L_{2}$ penalty and $C_{1}$ is an inconsequential term. Denoting $D_0(x_0)$ as $\xi_0$ and $D_0(y_i)$ as $\xi_i$, $i = 1, \dots, m_0$, the term of interest $f(\xi_0, \xi_1, \dots, \xi_{m_0})$ in Eqn. 6 is
+
+$$
+f = \log\sigma(\xi_{0}) + \frac{n}{m}\sum_{i=1}^{m_{0}}\log(1-\sigma(\xi_{i})) - \frac{nk}{m_{0}}\sum_{i=1}^{m_{0}}(\xi_{0}-\xi_{i})^{2}. \tag{7}
+$$
+
+Proposition 2 Assume that $\{\xi_0^*,\dots ,\xi_{m_0}^*\}$ achieves the maximum of $f(\xi_0,\xi_1,\dots ,\xi_{m_0})$. Then as $k$ increases, $\sigma(-\xi_i^*)(\xi_0^* -\xi_i^*)$ decreases, and as $m_0$ increases, $\sigma(-\xi_i^*)(\xi_0^* -\xi_i^*)$ increases, $\forall i = 1,\dots ,m_0$.
+
+See Appendix B for the detailed proof. Hence, from Eqn. 5, the gradient norm of the generator in this local region decreases as the weight $k$ of the difference penalization increases, and increases as the number of close pairs $m_0$ increases.
+
+Gradient penalty. In practice, it is actually hard to find close pairs on which to apply the corresponding difference penalization. If we directly penalize the $L_{2}$ norm of $D_0(x_i) - D_0(y_i)$, the gradient norm at $y_{i}$ may become even larger when $\{x_{i},y_{i}\}$ is not a close pair. When $D_0(y_i) > D_0(x_i)$, which can happen when the number of close pairs at $x_{i}$ is larger than that at $y_{i}$, direct penalization will make $D_0(y_i)$ lower and hence, from Eqn. 5, make the gradient norm at $y_{i}$ even larger. Thus in practice we can instead enforce a zero-centered gradient penalty of the form $\|(\nabla D_0)_v\|^2$ to stabilize the discriminator outputs on close pairs, where $v$ can be real or fake samples. Although far from perfect, Fig. 1b, 1f show more faithful results compared with Fig. 1a, 1e, where no gradient penalty is added.
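+
+As an illustration, a minimal PyTorch sketch of such a zero-centered gradient penalty on a batch of (real or fake) samples $v$ is given below (the function name is ours, not the authors' code):
+
+```python
+import torch
+
+def zero_centered_gradient_penalty(D0, v):
+    """Mean of ||(grad D_0)_v||^2 over a minibatch v of real or fake samples."""
+    v = v.detach().requires_grad_(True)
+    out = D0(v)
+    # One input gradient per sample; create_graph=True so the penalty itself can be backpropagated.
+    grad = torch.autograd.grad(outputs=out.sum(), inputs=v, create_graph=True)[0]
+    return grad.pow(2).view(grad.size(0), -1).sum(dim=1).mean()
+```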
+
+To prevent gradient exploding, [31] proposed another zero-centered gradient penalty of the form $\|(\nabla D_0)_v\|^2$, where $v$ is a linear interpolation between real and fake samples. However, we consider this not a very efficient way to close the gap here. To begin with, the result of the interpolation may not lie in $\text{supp}(p_r) \cup \text{supp}(p_g)$. Furthermore, for an arbitrary pair of real and fake samples, the probability that a linear interpolation between them lies where close pairs exist is close to 0, especially in high-dimensional settings.
+
+Vicious circle. Gradient exploding near an overfitting source $x_0$ results in multiple fake samples being moved towards $x_0$. More close pairs then result in an even more serious gradient exploding issue, forming a vicious circle. This partly explains the instability of the GAN training process: especially during the later stages of training, similar generated samples are observed. Compared with Fig. 1a, 1b, 1c at iter. 100k, Fig. 1e, 1f, 1g at iter. 200k show a more unbalanced generation, and more similar samples are generated as training progresses.
+
+# 5.2. Fake-as-Real Consideration
+
+Based on discussions above, we add a fake-as-real consideration on $m_0$ fake samples $\{y_1,y_2,\dots ,y_{m_0}\}$ , resulting in the following empirical discriminator objective:
+
+$$
+\mathcal{L}_{\mathrm{FAR}} = \mathcal{L}_{\mathrm{DP}} + \lambda \sum_{i=1}^{m_{0}} \log\sigma(D_{0}(y_{i})) = C_{2} + \frac{1}{n} h(\xi_{0}, \xi_{1}, \dots, \xi_{m_{0}}), \tag{8}
+$$
+
+where $\lambda$ is the weight of considering fake as real and $C_2$ is an inconsequential term. The term of interest $h(\xi_0,\xi_1,\dots ,\xi_{m_0})$ in Eqn. 8 is
+
+$$
+h = f + n\lambda \sum_{i=1}^{m_{0}} \log\sigma(\xi_{i}). \tag{9}
+$$
+
+Proposition 3 Assume that $\{\xi_0^*,\dots ,\xi_{m_0}^*\}$ achieves the maximum of $h(\xi_0,\xi_1,\dots ,\xi_{m_0})$. Then as $\lambda$ increases, $\sigma(-\xi_i^*)(\xi_0^* -\xi_i^*)$ decreases, and when $\lambda \to \infty$, $\sigma(-\xi_i^*)(\xi_0^* -\xi_i^*)\rightarrow 0$, $\forall i = 1,\dots ,m_0$.
+
+See Appendix C for the detailed proof. The gradient exploding issue in this local region can thus also be alleviated by considering fake samples as real. Theoretically, when the weight of the fake-as-real term tends to infinity, the gradient norm of the generator here becomes 0, completely solving the issue of concern, at the cost of making the discriminator lose the ability to distinguish among samples in this local region. In practice, it is enough to reduce the gradient here to make it comparable to the other gradients in a minibatch, hence we need not weight the fake-as-real term excessively.
+
+Alleviation of the vicious circle. Recall the vicious circle caused by gradient exploding. When more close pairs appear at an overfitting source, the fake-as-real term in Eqn. 9 also becomes larger, providing an alleviation of further gradient exploding. See the results with the fake-as-real consideration applied in Fig. 1d, 1h. A faithful distribution is generated even after long training.
+
+# 5.3. Implementation
+
+In this section, we give the specific implementation of Fake-As-Real GAN (FARGAN) based on gradient penalty in practical training.
+
+For the original $N$ real samples and $M$ fake samples in a minibatch during discriminator training, we fix the overall number $N$ of samples treated as real, consisting of $N_{1}$ original real samples and $N_{0}$ fake samples considered as real, where $N = N_{0} + N_{1}$. We want the fake samples considered as real to lie in regions where multiple close pairs exist, because fake samples should no longer be moved towards these regions and the gradient exploding issue is particularly serious there owing to the vicious circle. Since the discriminator tends to have a lower output in regions where more close pairs exist$^{1}$, we pick out the needed $N_{0}$ fake samples $\widetilde{y}_{i}$, denoted as the set $D_{\mathrm{FAR}}$ and considered as real, from a larger generated set containing $f*N_0$ fake samples, according to the corresponding discriminator outputs:
+
+$$
+D_{\mathrm{FAR}} = \{\widetilde{y}_{1}, \dots, \widetilde{y}_{N_{0}}\} = \left\{y_{i},\ i \in \text{index of top } N_{0} \text{ in } \{-D_{0}(y_{M+1}), -D_{0}(y_{M+2}), \dots, -D_{0}(y_{M+f*N_{0}})\}\right\}. \tag{10}
+$$
+
+When more close pairs exist in a region, the discriminator output there is lower and the probability that fake samples in this region are selected is higher, so the practical implementation still alleviates the vicious circle issue. We also add a zero-centered gradient penalty on the samples treated as real [20], based on the discussion in Section 5.1, resulting in the following empirical discriminator objective for FARGAN:
+
+$$
+\mathcal{L}_{\mathrm{FAR}} = \frac{1}{N}\left[\sum_{i=1}^{N_{1}} \log\sigma(D_{0}(x_{i})) + \sum_{i=1}^{N_{0}} \log\sigma(D_{0}(\widetilde{y}_{i}))\right] + \frac{1}{M}\sum_{i=1}^{M} \log(1-\sigma(D_{0}(y_{i}))) - \frac{k}{N}\sum_{i=1}^{N} \left\|(\nabla D_{0})_{c_{i}}\right\|^{2}, \tag{11}
+$$
+
+where $x_{i}\in D_{r}$, $y_{i}\in D_{g}$, $\widetilde{y}_{i}\in D_{\mathrm{FAR}}$ and $\{c_1,\dots ,c_N\} = \{x_{1},\dots ,x_{N_{1}},\widetilde{y}_{1},\dots ,\widetilde{y}_{N_{0}}\}$.
+
+# Algorithm 1 Minibatch stochastic gradient descent training of FARGAN
+
+for number of training iterations do
+  while discriminator updating do
+  - Sample a minibatch of $N_{1}$ real examples $\{x_{1},\dots ,x_{N_{1}}\}$ from the training dataset $D_{r}$.
+  - Sample a minibatch of $M + f * N_0$ fake examples $\{y_1, \dots, y_{M + f * N_0}\}$ from the generated dataset $D_g$.
+  - Determine the $\widetilde{y}_i$ with the lowest discriminator outputs: $\{y_i,\ i \in \text{index of top } N_0 \text{ in } \{-D_0(y_{M+1}), \dots, -D_0(y_{M+f*N_0})\}\}$.
+  - Update the discriminator by ascending its stochastic gradient $\nabla_{\theta_d}\mathcal{L}_{\mathrm{FAR}}$.
+  end while
+  - Sample a minibatch of $M$ fake examples $\{y_1, \dots, y_M\}$ from the generated dataset $D_g$.
+  - Update the generator by ascending its stochastic gradient $\nabla_{\theta_g}\frac{1}{M}\sum_{i = 1}^{M}\log (\sigma (D_0(y_i)))$.
+end for
+
+To prevent gradient vanishing for G, especially early in learning, we use the non-saturating form of the original GAN for the G update. The training procedure is formally presented in Algorithm 1.
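+
+To make the procedure concrete, here is a minimal PyTorch sketch of one FARGAN discriminator step under our reading of Algorithm 1 and Eqn. 11; $D_0$ returns pre-sigmoid outputs, and all function and variable names are ours rather than the authors' code.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def fargan_discriminator_step(D0, G, x_real, z_sampler, M, N0, f, k, d_optimizer):
+    """One discriminator update of FARGAN (Algorithm 1 / Eqn. 11), as a sketch."""
+    N = x_real.size(0) + N0  # N1 real samples plus N0 fake-as-real samples
+
+    # Generate M + f*N0 fake samples; the extra f*N0 form the candidate pool.
+    y_fake = G(z_sampler(M + f * N0)).detach()
+    y_main, y_pool = y_fake[:M], y_fake[M:]
+
+    # Pick the N0 candidates with the lowest discriminator outputs (Eqn. 10).
+    with torch.no_grad():
+        idx = torch.topk(-D0(y_pool).view(-1), N0).indices
+    y_far = y_pool[idx]
+
+    # Fake-as-real: real samples and the selected fakes are both labelled real.
+    c = torch.cat([x_real, y_far], dim=0).detach().requires_grad_(True)
+    out_c = D0(c)
+    out_fake = D0(y_main)
+
+    loss_real = F.binary_cross_entropy_with_logits(out_c, torch.ones_like(out_c))
+    loss_fake = F.binary_cross_entropy_with_logits(out_fake, torch.zeros_like(out_fake))
+
+    # Zero-centered gradient penalty on the N samples treated as real.
+    grad = torch.autograd.grad(out_c.sum(), c, create_graph=True)[0]
+    penalty = grad.pow(2).view(N, -1).sum(dim=1).mean()
+
+    loss = loss_real + loss_fake + k * penalty  # negative of Eqn. 11, to be minimized
+    d_optimizer.zero_grad()
+    loss.backward()
+    d_optimizer.step()
+    return loss.item()
+```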
+
+# 6. Experiments
+
+In this section, we present our experimental results on synthetic data and real-world datasets, including CIFAR-10 [1], CIFAR-100 [1] and the more challenging ImageNet dataset [29]. Whenever we refer to the fake-as-real method, a zero-centered gradient penalty on real samples is also added by default in our experiments. We use PyTorch [24] for development.
+
+# 6.1. Synthetic data
+
+To test the effectiveness of FARGAN in preventing an unbalanced generation, we designed a dataset with finite training samples drawn from a Gaussian distribution. Based on a simple MLP network, we trained the NonSaturating GAN (NSGAN) with our method and with different gradient penalties, including the zero-centered gradient penalty on real samples (NSGAN-0GP-sample) and on interpolations between real and fake samples (NSGAN-0GP-interpolation). We set the weight $k$ of the gradient penalty to 10, the minibatch size to $N = M = 64$, and $f = 8$, $N_0 = 16$ for FARGAN. The learning rate is set to 0.003 for both G and D. The results are shown in Fig. 1. It can be observed that NSGAN, NSGAN-0GP-sample and NSGAN-0GP-interpolation all generate unbalanced distributions as training progresses, while our method generates much better results with good generalization.
+
+We also test FARGAN on a mixture of 8 Gaussians dataset in which random samples from different modes are far from each other. The evolution of FARGAN is depicted in Fig. 2. Although FARGAN only covers 3 modes at the beginning, it gradually covers the other modes thanks to its strong ability to alleviate gradient exploding. Hence, FARGAN is able to find uncovered modes and achieve a faithful distribution even when samples in a high-dimensional space are far from each other. More synthetic experiments can be found in Appendix E.
+
+
+Figure 2. Evolution of our method on a mixture of 8 Gaussians dataset. (a) iter. 0. (b) iter. 100k. (c) iter. 335k. (d) iter. 500k.
+
+# 6.2. CIFAR-10 and CIFAR-100
+
+In this section, we compare the fake-as-real method with that with only zero-centered gradient penalty (0GP) on real samples added. All experiments are repeated 3 times with random initialization to show the consistent results in Tab. 2.
+
+Parameter settings. We set the weight $k$ of gradient penalty to be 10, the size of minibatch $N = M = 64$ and $f = 8$ , $N_0 = 32$ for fake-as-real method as a default. RM-SProp optimizer with $\alpha = 0.99$ and a learning rate of $10^{-4}$ is used.
+
+Quantitative measures. Inception score [30] and FID [13] are used as quantitative measures. For Inception score, we follow the guideline from [30]. The FID score is evaluated on $10k$ generated images. Better generation can be achieved with higher inception score and lower FID value.
+
+Results with different architectures. We test FARGAN with both a ResNet architecture identical to that in [20] and a conventional architecture similar to the progressively growing GAN [15], but without batch normalization. The results are shown in Fig. 3 and Fig. 4, respectively. FARGAN outperforms NSGAN-0GP with both architectures on CIFAR-10 and CIFAR-100 by a large margin. Note that although the speed at which FARGAN covers the real samples can be slightly reduced at the beginning of training, since some fake samples are considered as real, it consistently improves the generated results and finally achieves a more balanced distribution.
+
+Figure 3. Results with ResNet architecture on CIFAR dataset.
+
+Figure 4. Results with conventional architecture on CIFAR dataset.
+
+Table 2. Inception score and FID on CIFAR-10, CIFAR-100 at iter. 500k and ImageNet at iter. 600k. Experiments were repeated 3 times.
+
+| Model | IS (0GP) | IS (FAR) | FID (0GP) | FID (FAR) |
| --- | --- | --- | --- | --- |
| CIFAR-10 (500k) | | | | |
| ResNet NSGAN | 6.26 ± 0.09 | 6.81 ± 0.03 | 24.22 ± 0.72 | 17.82 ± 0.33 |
| ResNet WGAN | 6.15 ± 0.06 | 6.83 ± 0.04 | 24.72 ± 0.41 | 18.12 ± 0.23 |
| ResNet HingeGAN | 6.19 ± 0.08 | 6.88 ± 0.07 | 24.55 ± 0.31 | 16.99 ± 0.18 |
| ResNet LSGAN | 5.90 ± 0.05 | 6.63 ± 0.02 | 26.41 ± 0.12 | 19.97 ± 0.38 |
| Conventional NSGAN | 6.94 ± 0.03 | 7.63 ± 0.05 | 16.66 ± 0.14 | 12.80 ± 0.31 |
| CIFAR-100 (500k) | | | | |
| ResNet NSGAN | 6.27 ± 0.04 | 7.03 ± 0.06 | 28.46 ± 0.28 | 21.95 ± 0.35 |
| Conventional NSGAN | 6.92 ± 0.08 | 7.84 ± 0.04 | 22.28 ± 0.45 | 17.69 ± 0.24 |
| ImageNet (600k) | | | | |
| ResNet NSGAN | 10.66 ± 0.11 | 11.44 ± 0.05 | 44.57 ± 0.34 | 39.69 ± 0.57 |
+
+The losses of the discriminator and generator during the training process with the ResNet architecture on CIFAR-10 are shown in Fig. 5. FARGAN has a much more stable training process, with smaller fluctuations and no obvious deviation of the losses. Note that when serious mode collapse happens, the discriminator has a lower loss while the generator has a higher loss compared with the theoretical values ($2\log 2 \approx 1.386$ for the discriminator and $\log 2 \approx 0.693$ for the generator)$^{2}$. The gradual deviation of the discriminator and generator losses in NSGAN-0GP indicates serious mode collapse. Hence, FARGAN can stabilize the training process and effectively prevent mode collapse. The losses of the discriminator and generator on CIFAR-100 and generated image samples can be found in Appendix E.
+
+Figure 5. Losses of discriminator (not including regularization term) and generator on CIFAR-10.
+
+Figure 6. Results of different GAN variants on CIFAR-10.
+
+Results for different GAN variants. Besides NSGAN, we also test the fake-as-real method on WGAN [2], HingeGAN [36] and LSGAN [19] to show its effectiveness in achieving a more faithful generation across different GAN variants. The results are shown in Fig. 6. The fake-as-real method also improves the performance of these GAN variants by alleviating the gradient exploding issue, which consistently arises with finite training samples.
+
+
+Figure 7. Results of FARGAN with different $f$ and $N_0$.
+
+Results with different $f$ and $N_0$ in FARGAN. We conduct an ablation study on the parameters $f$ and $N_0$ in FARGAN. With the ResNet architecture on CIFAR-10, we first fix $N_0 = 32$ and vary $f$; we then fix $f = 8$ and vary $N_0$. The results are shown in Fig. 7. Note that the training speed can be slightly reduced as $f$ and $N_0$ increase, while a better generation can be achieved. An obvious improvement is achieved as $f$ increases, until $f$ is large enough, e.g. $f = 8$. An improvement is also seen when $N_0$ increases appropriately, whereas a collapse happens when $N_0$ is too large, e.g. $N_0 = 48$, because the discriminator then becomes too weak. Hence, in practice we set $f = 8$ and $N_0 = 32$ by default.
+
+Note that when $f = 1$, we select fake samples randomly as real ones, and when $N_0 = 0$, no fake samples are considered as real. We observe that no obvious improvement is achieved for FARGAN with $f = 1$ compared with $N_0 = 0$. However, FARGAN with $f = 8$ improves the performance by a large margin. Hence, consistent with our theoretical analysis and experiments, the key point is to consider as real those fake samples located in gradient-exploding regions, rather than randomly selected ones.
+
+# 6.3. ImageNet
+
+For the challenging ImageNet task, which contains 1000 classes, we train GANs with the ResNet architecture to learn generative models. We use images at resolution $64 \times 64$ and no labels are used in our models. We use the Adam optimizer with $\alpha = 0$, $\beta = 0.9$. Other settings are the same as in the CIFAR experiments. The results in Fig. 8 show that FARGAN still outperforms NSGAN-0GP on ImageNet and produces samples of state-of-the-art quality without using any labels or particular architectural tricks such as progressive growing [15]. Randomly selected samples and the losses of the discriminator and generator during training can be found in Appendix E.
+
+Figure 8. Results on ImageNet.
+
+# 7. Conclusion
+
+In this paper, we explain why an unbalanced distribution is often generated in GANs. We show that a vicious circle resulting from gradient exploding makes the unbalanced generation more and more serious as training progresses. We analyze methods for alleviating gradient exploding, including penalizing the difference between discriminator outputs on close real and fake pairs and the trick of considering fake samples as real. Based on the theoretical analysis, we propose FARGAN, which considers fake samples as real according to the discriminator outputs in a training minibatch. Experiments on diverse datasets verify that our method can stabilize the training process and improve performance by a large margin.
+
+# Acknowledgement
+
+This work was supported by the National Natural Science Foundation of China under grant (61771305, 61771303), and Science and Technology Commission of Shanghai Municipality (STCSM, Grant No.18DZ1200102).
+
+# References
+
+[1] Antonio Torralba, Rob Fergus, and William T. Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(11):1958-1970, 2008.
+[2] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning, pages 214-223, 2017.
+[3] Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. Generalization and equilibrium in generative adversarial nets (gans). In ICML, pages 224-232, 2017.
+[4] Sanjeev Arora, Andrej Risteski, and Yi Zhang. Do GANs learn the distribution? some theory and empirics. In International Conference on Learning Representations, 2018.
+[5] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. In International Conference on Learning Representations, 2019.
+[6] Ting Chen, Mario Lucic, Neil Houlsby, and Sylvain Gelly. On self modulation for generative adversarial networks. In International Conference on Learning Representations, 2019.
+[7] Adji B Dieng, Francisco JR Ruiz, David M Blei, and Michalis K Titsias. Prescribed generative adversarial networks. arXiv preprint arXiv:1910.04302, 2019.
+[8] Jeff Donahue and Karen Simonyan. Large scale adversarial representation learning. arXiv preprint arXiv:1907.02544, 2019.
+[9] Arnab Ghosh, Viveka Kulharia, Vinay P Namboodiri, Philip HS Torr, and Puneet K Dokania. Multi-agent diverse generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8513-8521, 2018.
+[10] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
+[11] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of Wasserstein GANs. In Advances in neural information processing systems, pages 5767-5777, 2017.
+[12] Hao He, Hao Wang, Guang-He Lee, and Yonglong Tian. Bayesian modelling and monte carlo inference for GAN. In International Conference on Learning Representations, 2019.
+[13] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems, pages 6626-6637, 2017.
+[14] Quan Hoang, Tu Dinh Nguyen, Trung Le, and Dinh Phung. MGAN: Training generative adversarial nets with multiple generators. In International Conference on Learning Representations, 2018.
+
+[15] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In International Conference on Learning Representations, 2018.
+[16] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4401-4410, 2019.
+[17] Zinan Lin, Ashish Khetan, Giulia C. Fanti, and Sewoong Oh. PacGAN: The power of two samples in generative adversarial networks. In NeurIPS, pages 1505-1514, 2018.
+[18] Thomas Lucas, Corentin Tallec, Yann Ollivier, and Jakob Verbeek. Mixed batches and symmetric discriminators for gan training. In ICML, pages 2850-2859, 2018.
+[19] Xudong Mao, Qing Li, Haoran Xie, Raymond YK Lau, Zhen Wang, and Stephen Paul Smolley. Least squares generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 2794-2802, 2017.
+[20] Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for GANs do actually converge? In Proceedings of the 35th International Conference on Machine Learning, pages 3481-3490, 2018.
+[21] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations, 2018.
+[22] Vaishnavh Nagarajan and J Zico Kolter. Gradient descent gan optimization is locally stable. In Advances in Neural Information Processing Systems, pages 5585-5595, 2017.
+[23] Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In Advances in neural information processing systems, pages 271-279, 2016.
+[24] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017.
+[25] Henning Petzka, Asja Fischer, and Denis Lukovnikov. On the regularization of Wasserstein GANs. In International Conference on Learning Representations, 2018.
+[26] Guo-Jun Qi. Loss-sensitive generative adversarial networks on lipschitz densities. arXiv preprint arXiv:1701.06264, 2017.
+[27] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
+[28] Kevin Roth, Aurelien Lucchi, Sebastian Nowozin, and Thomas Hofmann. Stabilizing training of generative adversarial networks through regularization. In Advances in neural information processing systems, pages 2018-2028, 2017.
+[29] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211-252, 2015.
+
+[30] Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In NIPS, pages 2226-2234, 2016.
+[31] Hoang Thanh-Tung, Truyen Tran, and Svetha Venkatesh. Improving generalization and stability of generative adversarial networks. In International Conference on Learning Representations, 2019.
+[32] Bingzhe Wu, Shiwan Zhao, Haoyang Xu, ChaoChao Chen, Li Wang, Xiaolu Zhang, Guangyu Sun, and Jun Zhou. Generalization in generative adversarial networks: A novel perspective from privacy protection. arXiv preprint arXiv:1908.07882, 2019.
+[33] Shoichiro Yamaguchi and Masanori Koyama. Distributional concavity regularization for GANs. In International Conference on Learning Representations, 2019.
+[34] Yasin Yazici, Chuan-Sheng Foo, Stefan Winkler, Kim-Hui Yap, Georgios Piliouras, and Vijay Chandrasekhar. The unusual effectiveness of averaging in GAN training. In International Conference on Learning Representations, 2019.
+[35] Han Zhang, Ian J. Goodfellow, Dimitris N. Metaxas, and Augustus Odena. Self-attention generative adversarial networks. In ICML, pages 7354–7363, 2019.
+[36] Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016.
\ No newline at end of file
diff --git a/alleviationofgradientexplodingingansfakecanbereal/images.zip b/alleviationofgradientexplodingingansfakecanbereal/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4a4b1054e50fdabe700181dba39faaeed83ef046
--- /dev/null
+++ b/alleviationofgradientexplodingingansfakecanbereal/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f8cba0f77649b6c1a87bac59f1ef44faaca2887291d832210fc6bd206e02c879
+size 611795
diff --git a/alleviationofgradientexplodingingansfakecanbereal/layout.json b/alleviationofgradientexplodingingansfakecanbereal/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ff8576f57c34b35fb39fc21a5c0e106fbfac7a0c
--- /dev/null
+++ b/alleviationofgradientexplodingingansfakecanbereal/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b38b898192f17d4ed13a8178e318464a5959104f3f592404ca54dd0172a7f41d
+size 495177
diff --git a/allinonebadweatherremovalusingarchitecturalsearch/8e2999d9-30a3-481e-856c-2a9d18945217_content_list.json b/allinonebadweatherremovalusingarchitecturalsearch/8e2999d9-30a3-481e-856c-2a9d18945217_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e8790e8a3103cec8cc5673b54c807bfe27f978dd
--- /dev/null
+++ b/allinonebadweatherremovalusingarchitecturalsearch/8e2999d9-30a3-481e-856c-2a9d18945217_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:98059bdee1fc88321a0775f1a38fc8289fdf0a47b9abf6a5e609c29336b5ed42
+size 73644
diff --git a/allinonebadweatherremovalusingarchitecturalsearch/8e2999d9-30a3-481e-856c-2a9d18945217_model.json b/allinonebadweatherremovalusingarchitecturalsearch/8e2999d9-30a3-481e-856c-2a9d18945217_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..4e1f2d1c9e4c5b564d71663b7e8c553b3d3d6d4f
--- /dev/null
+++ b/allinonebadweatherremovalusingarchitecturalsearch/8e2999d9-30a3-481e-856c-2a9d18945217_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cef51604c2123161e619d81b87314e0d5c398609854ed1f9cf19fd9dfa78ac70
+size 101369
diff --git a/allinonebadweatherremovalusingarchitecturalsearch/8e2999d9-30a3-481e-856c-2a9d18945217_origin.pdf b/allinonebadweatherremovalusingarchitecturalsearch/8e2999d9-30a3-481e-856c-2a9d18945217_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..540e13264e6b98ba03aac07bbb42868a266ba0e1
--- /dev/null
+++ b/allinonebadweatherremovalusingarchitecturalsearch/8e2999d9-30a3-481e-856c-2a9d18945217_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:99372e88f8a35de8664a10c87c004cb742d59147a6703cf1f707585b653e6072
+size 5916391
diff --git a/allinonebadweatherremovalusingarchitecturalsearch/full.md b/allinonebadweatherremovalusingarchitecturalsearch/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..ec89bd8b0ff27a6f055351a6f07737884a1e08e7
--- /dev/null
+++ b/allinonebadweatherremovalusingarchitecturalsearch/full.md
@@ -0,0 +1,391 @@
+# All in One Bad Weather Removal using Architectural Search
+
+Ruoteng Li $^{1}$ , Robby T. Tan $^{1,2}$ , and Loong-Fah Cheong $^{1}$
+
+$^{1}$ National University of Singapore
+
+$^{2}$ Yale-NUS College
+
+# Abstract
+
+Many methods have set state-of-the-art performance on restoring images degraded by bad weather such as rain, haze, fog, and snow; however, they are designed specifically to handle one type of degradation. In this paper, we propose a method that can handle multiple bad weather degradations: rain, fog, snow and adherent raindrops, using a single network. To achieve this, we first design a generator with multiple task-specific encoders, each of which is associated with a particular bad weather degradation type. We utilize a neural architecture search to optimally process the image features extracted from all encoders. Subsequently, to convert degraded image features to clean background features, we introduce a series of tensor-based operations encapsulating the underlying physics principles behind the formation of rain, fog, snow and adherent raindrops. These operations serve as the basic building blocks for our architectural search. Finally, our discriminator simultaneously assesses the correctness and classifies the degradation type of the restored image. We design a novel adversarial learning scheme that only backpropagates the loss of a degradation type to the respective task-specific encoder. Despite being designed to handle different types of bad weather, extensive experiments demonstrate that our method performs competitively with the individual, dedicated state-of-the-art image restoration methods.
+
+# 1. Introduction
+
+The bad weather image restoration problem has been studied intensively in the research fields of image processing and computer vision; examples include deraining [20, 18, 59, 8, 53, 30, 50, 62, 36, 4, 44, 29], dehazing/defogging [47, 3, 1, 57, 7, 13, 26, 43], desnowing [44, 37], and adherent raindrop removal [41, 42], etc. Most of these works focus only on single weather types and propose dedicated
+
+
+Figure 1: High-level view of our network, with different types of bad weather images as input and the respective clean images as output. The proposed method is able to process multiple types of bad weather images using the same set of weights/parameters.
+
+solutions [29, 43, 41]. While they can attain excellent performance, they may not yield optimal results on other types of bad weather degradations, since the factors that cause the degradations in other types are not carefully considered. As a result, real outdoor systems would have to decide and switch between a series of bad weather image restoration algorithms.
+
+In this paper, we develop a single network-based method to deal with many types of bad weather phenomena including rain, fog, snow and adherent raindrop. It is worth noting that a few recent studies attempt to recover multiple degradation problems [39, 6]. However, none of them can deal with multiple degradations with solely one set of pretrained weights. To achieve our goal, we need to consider a few factors related to our problem.
+
+First, different bad weathers are formed based on different physical principles, which means the degraded images do not share the same characteristics. In order to yield the
+
+optimal performance, we need to design the network according to the underlying physics principles.
+
+Second, bad weather image restoration can be considered as a many-to-one feature mapping problem, i.e., image features from different bad weather domains (rain, fog, raindrop, snow) are transformed to clean image features by a set of network parameters (multiple encoders), after which the clean features are transformed to the clean natural images (one decoder). Hence, it is critical to find a proper way to process features from multiple domains and subject them to further appropriate operations. This motivates us to design an architectural-search approach that automatically finds an optimal network architecture for the aforementioned task. The basic building blocks for our network-search module are made up of a series of fundamental operations that can convert degraded image features to clean features based on the physics characteristics of bad weather image degradation.
+
+Third, most existing discriminators in GAN-based approaches are trained to judge whether the restored images are real or not. However, this does not provide error signals that allow the generative network to differentiate the images into different degradation types, so the encoders cannot update their learnable parameters based on their own assessment of the degradation type independently. To solve this problem, we propose a multi-class auxiliary discriminator that can classify the image degradation type and judge the correctness of the restored image simultaneously. In addition, unlike other existing GAN-based methods, our network has multiple feature encoders, each of which corresponds to a particular degradation type. When we backpropagate the discriminative loss, the network propagates the loss only to the respective encoder based on the classification result. Thus, only the corresponding encoder updates its parameters based on the adversarial loss, and all the other encoders are unaffected.
+
+We summarise the contributions of our method as follows:
+
+1. We propose an all-in-one bad weather removal method that can deal with multiple bad weather conditions (rain streaks, rain veiling effect, snow, and adherent raindrop) in one network.
+
+2. We propose a neural architecture search technique to find the best architecture for processing the features using different weather encoders. A series of fundamental operations that result in features invariant to bad weather are introduced. These fundamental operations form the basic building blocks for the search.
+
+3. We propose a novel end-to-end learning scheme that can handle multiple bad weather image restoration tasks. The key idea is to let the errors of the discriminative loss backpropagate into a specific encoder, in accordance to the type of the bad weather input.
+
+# 2. Related Works
+
+Deep learning based solutions have achieved promising performance in various image processing problems such as denoising [58], image completion [16], super-resolution [21], deblurring [45], style transfer [9], etc. This is also true for bad weather restoration or image enhancement, such as dehazing [47, 3, 1, 57, 7, 13, 26, 43], removal of raindrop and dirt [20, 18, 59, 8, 53, 30, 50, 62, 36, 4], of moderate rain [44, 29], and of heavy rain [25, 54]. These recent works have all shown the superiority of deep neural network models to conventional methods.
+
+Rain Removal Kang et al. [20] introduce the first single-image deraining method, which decomposes an input image into its low-frequency and high-frequency components using a bilateral filter. Recent state-of-the-art rain removal strategies are dominated by deep neural networks. Fu et al. [8] develop a deep CNN to extract discriminative features from the high-frequency component of the rain image. Yang et al. [54] design a multi-task deep learning architecture that learns the location and intensity of rain streaks simultaneously. Li et al. [25] propose a network that addresses the rain streaks and rain veiling effects prevalent in heavy rain scenes. This method not only proposes a residue decomposition step, but also elegantly integrates the physics-based rain model and adversarial learning to achieve state-of-the-art performance. It jointly learns the physics parameters of heavy rain, including streak intensity, transmission and atmospheric light, and utilizes a generative adversarial network to bridge the domain gaps between the proposed rain model and real rain.
+
+Raindrop Removal There are a number of methods that detect and remove raindrops from a single image based on traditional hand-crafted features [52, 55]. Eigen et al. [5] train a CNN with pairs of raindrop-degraded images and the corresponding raindrop-free images. Their network is a fairly shallow model that only has 3 convolutional layers. While the method works, particularly for relatively sparse and small droplets as well as dirt, the result tends to be blurry. Qian et al. [41] use attention maps in a GAN network that successfully removes raindrops from a single image. However, the main drawback of this approach is the attention maps, which are inherently difficult to obtain: the automatically computed attention map ground truth often results in poor quality. Quan et al. [42] further explore the generation of attention maps based on the mathematical description of the shape of raindrops. Their method combines the raindrop attention maps and detected raindrop edges to obtain state-of-the-art performance in single image raindrop removal.
+
+Snow Removal [2, 46] use HOG techniques to capture characteristics of snow flakes for snow removal from single images. Xu et al. [51] utilize color assumptions to model the falling snow particles. In contrast to these handcrafted features that capture partial characteristic of snow
+
+
+Figure 2: Left: Full architecture of the proposed network. The dotted lines indicate the back-propagation paths of the adversarial loss from the discriminator. The discriminator classifies the degradation type and also determines whether the input image is real or fake. The classification error of a particular degradation type is only used to update the corresponding encoder assigned to this type in the generative network. The loss from one degradation type is only propagated to update the corresponding feature encoder. Right: The detailed structure of the generator of the proposed network. In our experiment, we set the number of cells to 3 (cell-2, cell-1 and cell 0). FE stands for Feature Extractor.
+
+
+
+flakes and streaks, Li et al. [24] encode snow flakes or rain streaks using an online multi-scale convolutional sparse coding model.
+
+Neural Architecture Search (NAS) Neural Architecture Search aims at automatically designing neural network architectures to achieve optimal performance, while minimizing human hours and efforts. Early works like [63, 19, 11] directly construct the entire network and train it automatically under the supervision of a reinforcement-learning controller RNN. Many recent papers [32, 40] point out that searching a repeatable cell structure while fixing the network-level structure is more effective and efficient. The PNAS method [32] proposes a progressive search that significantly reduces the computation cost. Our work is closely related to [34, 31], which further relax the network searching task into an end-to-end optimization problem.
+
+# 3. Proposed Method
+
+# 3.1. Problem Formulation
+
+Different weather phenomena degrade images according to different physics principles. For example, a heavy rain image (where rain veiling effect, visually similar to fog/mist, is prevalent) is modelled as [25]:
+
+$$
+\mathbf{I}(x) = \mathbf{t}(x) \left( \mathbf{J}(x) + \sum_{i} \mathbf{R}_{i}(x) \right) + (1 - \mathbf{t}(x)) A, \tag {1}
+$$
+
+where $\mathbf{I}(x)$ is the rain image at location $x$ , $\mathbf{t}$ is the transmission map and $A$ is the global atmospheric light of the scene. $\mathbf{R}_i$ represents the rain streaks at the $i$ -th layer along the line of sight. An adherent raindrop image is modelled as [41]:
+
+$$
+\mathbf {I} (x) = (1 - \mathbf {M} (x)) \mathbf {J} (x) + \mathbf {K} (x), \tag {2}
+$$
+
+where $\mathbf{I}$ is the colored raindrop image and $\mathbf{M}$ is the binary mask. $\mathbf{J}$ is the background image and $\mathbf{K}$ is the imagery brought about by the adherent raindrops, representing the blurred imagery formed by the light reflected from the environment. Lastly, a snow image can be modelled as [37]:
+
+$$
+\mathbf {I} (x) = \mathbf {z} \mathbf {S} (x) + \mathbf {J} (x) (1 - \mathbf {z}), \tag {3}
+$$
+
+where $\mathbf{S}$ represents the snow flakes and $\mathbf{z}$ is a binary mask indicating the location of snow.
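+
+To make these formation models concrete, the following is a minimal NumPy sketch (ours, not from the paper) that synthesizes degraded images from a clean background $\mathbf{J}$ according to Eqs. (1)-(3); the transmission, rain-streak, raindrop and snow maps are random placeholders rather than the rendering pipelines used by the cited datasets.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+H, W = 240, 320
+J = rng.random((H, W, 3))                           # clean background, values in [0, 1]
+
+# Eq. (1): heavy rain = attenuated background plus rain-streak layers plus veiling effect.
+t = rng.uniform(0.4, 0.9, (H, W, 1))                # transmission map t(x)
+A = 0.9                                             # global atmospheric light
+R = rng.random((2, H, W, 1)) < 0.01                 # two sparse rain-streak layers (placeholder)
+I_rain = np.clip(t * (J + R.sum(axis=0)) + (1.0 - t) * A, 0.0, 1.0)
+
+# Eq. (2): adherent raindrop = background outside the drop mask plus drop imagery K.
+M = (rng.random((H, W, 1)) < 0.05).astype(float)    # binary raindrop mask M(x)
+K = M * rng.random((H, W, 3)) * 0.8                 # blurred drop imagery (placeholder)
+I_drop = (1.0 - M) * J + K
+
+# Eq. (3): snow = snow flakes composited over the background with a binary mask z.
+z = (rng.random((H, W, 1)) < 0.03).astype(float)
+S = np.ones((H, W, 3))                              # white snow flakes (placeholder)
+I_snow = z * S + J * (1.0 - z)
+
+print(I_rain.shape, I_drop.shape, I_snow.shape)
+```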
+
+From the formulations of these different bad weather images, it is evident that these problems do not share the same intrinsic characteristics, which explains why a dedicated algorithm designed for one task does not work on the other tasks. To address this problem, we model the bad weather tasks with the following generic function:
+
+$$
+\mathbf {J} (x) = \mathcal {F} (\mathbf {I} (x)), \tag {4}
+$$
+
+where $\mathcal{F}$ represents an auto-encoder that maps degraded images to clean background images, and should embody the formulations in Eqs. (1)-(3). To realize this, we consider a network with multiple encoders:
+
+$$
+\mathbf {J} (x) = \mathcal {G} \odot \mathcal {E} _ {\rho} \left(\mathbf {I} _ {\rho} (x)\right), \tag {5}
+$$
+
+where $\mathcal{E}_{\rho}$ represents the encoder that takes in a degraded image $\mathbf{I}_{\rho}$ with respect to a degradation type $\rho$ . $\mathcal{G}$ is the generic decoder that restores the input to a clean background image $\mathbf{J}$ .
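+
+As a rough PyTorch sketch of Eq. (5), one task-specific encoder per degradation type $\rho$ feeds a single shared decoder; the layer widths and depths below are illustrative assumptions on our part, not the actual architecture of Fig. 2.
+
+```python
+import torch
+import torch.nn as nn
+
+def conv_block(c_in, c_out, stride=1):
+    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride, 1), nn.ReLU(inplace=True))
+
+class MultiEncoderGenerator(nn.Module):
+    """One task-specific encoder per degradation type, one generic shared decoder."""
+    def __init__(self, types=("rain", "raindrop", "snow"), feat=64):
+        super().__init__()
+        self.encoders = nn.ModuleDict({
+            t: nn.Sequential(conv_block(3, feat, 2), conv_block(feat, feat))
+            for t in types
+        })
+        self.decoder = nn.Sequential(
+            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
+            conv_block(feat, feat), nn.Conv2d(feat, 3, 3, 1, 1),
+        )
+
+    def forward(self, image, degradation_type):
+        feats = self.encoders[degradation_type](image)   # E_rho(I_rho)
+        return self.decoder(feats)                       # J = G(E_rho(I_rho))
+
+net = MultiEncoderGenerator()
+print(net(torch.rand(1, 3, 64, 64), "snow").shape)       # torch.Size([1, 3, 64, 64])
+```
+
+Note that because each forward pass routes through exactly one encoder, the selective loss propagation described later in Sec. 3.3 requires no extra gradient masking.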
+
+
+Figure 3: The detailed illustration of the feature-search block in our proposed network. This diagram shows a possible architecture layout after training. In the figure, $dil5x5$ refers to dilated convolution with kernel $5 \times 5$ . $sep3x3$ refers to separable convolution with kernel $3 \times 3$ .
+
+Figure 4: A detailed illustration of the four fundamental operations in the search component ("Fundamental Operations for Feature Search") designed for bad weather restoration tasks: (1) Deveiling Operation, (2) Decomposition Operation, (3) Residue Operation, (4) Self-Attention Operation.
+
+# 3.2. Architectural Search Methodology
+
+To effectively process features coming in from multiple encoders, we utilize an NAS technique at the ends of the multiple encoders. In this search module (details shown in Fig. 3), we follow the configuration of most of the recent NAS methods, which define a cell to be the smallest basic module that can be repeated multiple times to construct the entire network architecture. Therefore, the network search space comprises both a network-level search, which refers to finding the structure of connections between cells, and a cell-level search, which explores the structure inside a cell.
+
+# 3.2.1 Network Cell Architecture
+
+Due to limited computing resources, we adopt the fundamental rule of [31] in designing the basic cell structure: a cell is a directed acyclic graph consisting of $B$ blocks. Each block $i$ in the $l$ -th cell $C^l$ is represented as a two-to-one mapping structure described by a 5-tuple $(I_1, I_2, O_1, O_2, H)$, where $I_1, I_2 \in I_i^l$ are the selections of the input tensors; and $O_1, O_2 \in \mathcal{O}$ are the selections of the layer types applied to the corresponding input tensors. $H \in \mathcal{H}$ is the method used to combine the outputs of the two layers $O_1, O_2$. The set of possible layer types $\mathcal{O}$ consists of the following ten operators:
+
+- $3 \times 3$ separable conv
+- $5 \times 5$ separable conv
+- $3 \times 3$ dilated conv
+- $5 \times 5$ dilated conv
+- no connection
+- skip connection
+- Deveiling Ops
+- Residue Ops
+- Self-Attention Ops
+- Decomposition Ops
+
+Besides the common convolutional operations, such as dilated convolution and depthwise-separable convolution, we introduce new fundamental operations to deal with bad weather image degradation according to the physics laws embedded in the formation of each degradation type, namely the decomposition operation, residue operation, self-attention operation, and deveiling operation (shown in Fig. 4). In the following paragraphs, we describe the design and function of the new operations in detail.
+
+Deveiling Operation Most of the existing methods solve the problem according to the following model of fog/haze formation:
+
+$$
+\mathbf {I} (x) = \mathbf {t} \mathbf {J} (x) + (\mathbf {1} - \mathbf {t}) A. \tag {6}
+$$
+
+The variables in this equation follow the same meaning in Eq. (1). Inspired by [23, 28], it is possible to learn a latent variable $\mathbf{M}$ that transforms the veiling images into clean background images:
+
+$$
+\mathbf {J} (x) = \mathbf {M} (x) \odot \mathbf {I} (x), \tag {7}
+$$
+
+where $\mathbf{M}(x) = (\mathbf{I}(x) + \mathbf{t}A - A) / \mathbf{t}\mathbf{I}(x)$ , and is a learnable latent variable dependent on the input image $\mathbf{I}$ . Thus, we apply one layer of $\operatorname{conv}1 \times 1$ on the extracted image feature to estimate the latent variable $\mathbf{M}$ , and multiply $\mathbf{M}$ with the extracted feature as shown in Fig. 4.1.
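+
+A minimal PyTorch sketch of the deveiling operation as just described: a $1 \times 1$ convolution predicts a latent map $\mathbf{M}$ from the incoming feature, which is then multiplied element-wise with that feature (Eq. (7) applied in feature space). The channel count and the sigmoid bounding are our assumptions.
+
+```python
+import torch
+import torch.nn as nn
+
+class DeveilingOp(nn.Module):
+    """Estimate a latent map M with a 1x1 conv and multiply it with the feature."""
+    def __init__(self, channels=64):
+        super().__init__()
+        self.to_M = nn.Conv2d(channels, channels, kernel_size=1)
+
+    def forward(self, feat):
+        M = torch.sigmoid(self.to_M(feat))   # bounded latent map (assumption)
+        return M * feat
+
+print(DeveilingOp()(torch.rand(1, 64, 32, 32)).shape)
+```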
+
+Decomposition Operation Image decomposition has been widely used in rain streak removal and snow removal [20, 8, 25]. We consider the decomposition in a feature space as a fundamental operation that is effective for snow and rain removal. As shown in Fig. 4.2, we apply deep image-guided filters with a kernel family of sizes $2^{k}$, where $k = 1, 2, \ldots, 6$. We use a conv$1 \times 1$ layer at the end of both the low- and high-frequency components to extract appropriate features for the next layer.
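+
+A sketch of the decomposition operation under stated assumptions: the deep image-guided filter is replaced here by plain average-pooling low-pass filters over a subset of the $2^k$ kernel family, and a conv$1 \times 1$ is applied to the low-frequency and high-frequency parts before they are fused.
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class DecompositionOp(nn.Module):
+    """Split features into low/high frequency at several scales, then fuse with 1x1 convs."""
+    def __init__(self, channels=64, ks=(2, 4, 8)):       # subset of the 2^k kernel family
+        super().__init__()
+        self.ks = ks
+        self.low_proj = nn.Conv2d(channels * len(ks), channels, 1)
+        self.high_proj = nn.Conv2d(channels * len(ks), channels, 1)
+
+    def forward(self, feat):
+        lows, highs = [], []
+        for k in self.ks:
+            low = F.avg_pool2d(feat, k, stride=1, padding=k // 2)
+            low = low[..., :feat.shape[-2], :feat.shape[-1]]   # crop back to input size
+            lows.append(low)
+            highs.append(feat - low)                           # high-frequency residual
+        return self.low_proj(torch.cat(lows, 1)) + self.high_proj(torch.cat(highs, 1))
+
+print(DecompositionOp()(torch.rand(1, 64, 32, 32)).shape)
+```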
+
+Residue Operation Inspired by [28], we have implemented the residue channel operation (shown in Fig. 4.3) in the feature space. The residue channel [27] has been shown to be effective for removing rain from a single image.
+
+Self-Attention Operation A few methods [41, 42] have demonstrated the advantages of using an attention map for removing adherent raindrops. In these methods, the raindrop attention maps are explicitly learned from the ground truths of raindrop masks. However, obtaining these raindrop masks is expensive [12] and any lack of quality may affect the performance. To overcome this problem, a self-attention mechanism [56, 48] has been applied to many image reconstruction methods [60, 33]. We consider this self-attention module as a fundamental block of operation that can be exploited by the NAS (shown in Fig. 4.4).
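+
+A compact sketch of such a self-attention block in the spirit of [56, 48], usable as one of the searchable operations; the channel-reduction factor and the learnable residual weight are assumptions on our part.
+
+```python
+import torch
+import torch.nn as nn
+
+class SelfAttentionOp(nn.Module):
+    """Self-attention over the spatial positions of a feature map."""
+    def __init__(self, channels=64, reduction=8):
+        super().__init__()
+        self.q = nn.Conv2d(channels, channels // reduction, 1)
+        self.k = nn.Conv2d(channels, channels // reduction, 1)
+        self.v = nn.Conv2d(channels, channels, 1)
+        self.gamma = nn.Parameter(torch.zeros(1))    # learnable residual weight
+
+    def forward(self, x):
+        b, c, h, w = x.shape
+        q = self.q(x).flatten(2).transpose(1, 2)     # (b, hw, c')
+        k = self.k(x).flatten(2)                     # (b, c', hw)
+        attn = torch.softmax(q @ k, dim=-1)          # (b, hw, hw) attention map
+        v = self.v(x).flatten(2)                     # (b, c, hw)
+        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
+        return self.gamma * out + x
+
+print(SelfAttentionOp()(torch.rand(1, 64, 16, 16)).shape)
+```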
+
+# 3.2.2 Architecture Search Space
+
+Following the continuous relaxation described in [35], in each block $i$, the output tensor $T_{i}^{l}$ is connected to all the input tensors $I_{i}^{l}$ through the searched operations $O_{j \to i}$:
+
+$$
+T_{i}^{l} = \sum_{T_{j}^{l} \in I_{i}^{l}} O_{j \rightarrow i} \left(T_{j}^{l}\right). \tag {8}
+$$
+
+We approximate the step of searching for the best $O_{j\rightarrow i}$ with a continuous relaxation, yielding $\bar{O}_{j\rightarrow i}$:
+
+$$
+\bar {O} _ {j \rightarrow i} = \sum_ {O \in \mathcal {O}} \alpha_ {j \rightarrow i} O _ {j \rightarrow i} \left(T _ {j} ^ {l}\right), \tag {9}
+$$
+
+where $\sum \alpha_{j\rightarrow i} = 1$ . In practice, this is implemented as a softmax operation. Therefore, the cell level architecture can be summarized as:
+
+$$
+T ^ {l} = \operatorname {C e l l} \left(T ^ {l - 1}, T ^ {l - 2}; \alpha\right). \tag {10}
+$$
+
+To maximize the potential of these fundamental knowledge competencies, we allow hierarchical connections between cells and blocks.
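+
+A minimal DARTS-style sketch of Eqs. (8)-(10): each candidate operation's output on an edge is weighted by a softmax over architecture parameters $\alpha$. The candidate set is reduced to three operations here for brevity; this illustrates the relaxation only, not the paper's actual search code.
+
+```python
+import torch
+import torch.nn as nn
+
+class MixedOp(nn.Module):
+    """Continuous relaxation of the choice of operation on one edge j -> i (Eq. 9)."""
+    def __init__(self, channels=64):
+        super().__init__()
+        self.ops = nn.ModuleList([
+            nn.Conv2d(channels, channels, 3, padding=1),               # 3x3 conv
+            nn.Conv2d(channels, channels, 3, padding=2, dilation=2),   # 3x3 dilated conv
+            nn.Identity(),                                             # skip connection
+        ])
+        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))          # architecture parameters
+
+    def forward(self, t_j):
+        weights = torch.softmax(self.alpha, dim=0)     # weights sum to 1, as required
+        return sum(w * op(t_j) for w, op in zip(weights, self.ops))
+
+edge = MixedOp()
+print(edge(torch.rand(1, 64, 32, 32)).shape)
+```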
+
+# 3.3. Categorical Adversarial Training
+
+In the standard generative adversarial network (GAN) configuration, the generator $G$ takes a random noise vector $z$ and produces an image $X_{\text{fake}}$ . The discriminator $D$ takes a ground truth image and the output image of the generator to predict a probability distribution $P(S|X)$ over possible image sources [10]. During the training process, the discriminator is trained to maximize the log-likelihood of the correct source:
+
+$$
+\begin{array}{ll} \mathcal{L}_{s} = \mathbb{E}\left[ \log P (S = \mathrm{real} \mid X_{\mathrm{real}}) \right] & (11) \\ \qquad + \, \mathbb{E}\left[ \log P (S = \mathrm{fake} \mid X_{\mathrm{fake}}) \right]. & (12) \end{array}
+$$
+
+In our multi-class discriminator network, however, the discriminator does not only learn to determine the correctness of the restored image, but also strives to classify the type of the degradation from the restored image:
+
+$$
+\mathcal{L}_{c} = \mathbb{E}\left[ \log P (C = c_{i} \mid X_{\mathrm{real}}) \right] + \mathbb{E}\left[ \log P (C = c_{i} \mid X_{\mathrm{fake}}) \right]. \tag {13}
+$$
+
+Hence, the discriminator $D$ is trained to minimize $\mathcal{L}_s + \mathcal{L}_c$ , while the generator is trained to minimize $\mathcal{L}_c - \mathcal{L}_s$ . This approach has been proven to be effective in tackling mode collapse in a standard GAN [14]. To summarize, the loss function for the discriminator is:
+
+$$
+\mathcal{L}_{\mathrm{dis}} = \mathcal{L}_{c} + \mathcal{L}_{s}. \tag {14}
+$$
+
+For the generator, we also apply the MSE loss to compute the difference between the predicted clean image $\mathbf{J}_{pred}$ and the ground truth clean image $\mathbf{J}_{gt}$ :
+
+$$
+\mathcal{L}_{\mathrm{gen}} = \mathcal{L}_{c} + \mathcal{L}_{\mathrm{mse}} \left(\mathbf{J}_{\mathrm{pred}}, \mathbf{J}_{\mathrm{gt}}\right) - \mathcal{L}_{s}. \tag {15}
+$$
+
+Updating Relevant Encoders As mentioned, different bad weathers are formed based on different physical principles. The classification error of one degradation type $\mathcal{L}_{c_i}$ may not be effective to update the encoders of other degradation types $j$ , where $j \neq i$ . In our approach, the classification error of degradation type $i$ is only used to update encoder $\mathcal{E}_i$ , as shown in Fig. 2. By backpropagating the adversarial loss specifically to the appropriate encoder, we strengthen the ability of the multi-encoder generator to map images from different domains to a common feature space.
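+
+To make the objective concrete, below is a hedged PyTorch sketch of Eqs. (11)-(15) for one mini-batch of a single degradation type; all tensors are random placeholders, and the generator's adversarial term uses the common non-saturating reading of $-\mathcal{L}_s$ (reward restored images that the discriminator scores as real) rather than the literal expression.
+
+```python
+import torch
+import torch.nn.functional as F
+
+# Hypothetical discriminator outputs for one mini-batch of a single degradation type c_i.
+# src_* are real/fake logits, cls_* are degradation-type logits over 3 classes.
+src_real, src_fake = torch.randn(4, 1), torch.randn(4, 1)
+cls_real, cls_fake = torch.randn(4, 3), torch.randn(4, 3)
+type_idx = torch.full((4,), 2)                     # this batch is, e.g., "snow"
+J_pred, J_gt = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)
+
+# Discriminator: source correctness (Eqs. 11-12) plus degradation classification (Eq. 13),
+# written as negative log-likelihoods to be minimised (Eq. 14).
+L_s = (F.binary_cross_entropy_with_logits(src_real, torch.ones_like(src_real))
+       + F.binary_cross_entropy_with_logits(src_fake, torch.zeros_like(src_fake)))
+L_c = F.cross_entropy(cls_real, type_idx) + F.cross_entropy(cls_fake, type_idx)
+L_dis = L_c + L_s
+
+# Generator (Eq. 15): classification + MSE + adversarial term (non-saturating reading).
+L_mse = F.mse_loss(J_pred, J_gt)
+L_adv = F.binary_cross_entropy_with_logits(src_fake, torch.ones_like(src_fake))
+L_gen = F.cross_entropy(cls_fake, type_idx) + L_mse + L_adv
+
+# Selective update: since this batch was produced through encoder E_{c_i} only, calling
+# L_gen.backward() sends gradients to that encoder and the shared decoder, leaving the
+# other task-specific encoders unchanged.
+print(float(L_dis), float(L_gen))
+```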
+
+# 4. Implementation
+
+# 4.1. Datasets
+
+We train our network on different bad weather datasets, including "Outdoor-Rain" [25], "Snow100K" [38], and "Raindrop" [41]. "Outdoor-Rain" contains 9,000 training samples and 1,500 validation samples. "Snow100K" contains 100k synthetic snow images and the corresponding snow-free ground truth images. "Raindrop" comprises 1,119 pairs of real adherent-raindrop images and the corresponding ground truth background images. Since the number of images in each dataset is not equal, we sample 9,000 images from "Snow100K" at each training epoch. For the small "Raindrop" dataset, we over-sample it by performing data augmentation, such as rotation, affine transformation, noise and random cropping. As a result, in each epoch the number of samples from all the datasets is uniform at 9,000.
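+
+A small sketch of the per-epoch balancing just described, assuming hypothetical dataset sizes: 9,000 indices are drawn for each dataset, sub-sampling the large sets and over-sampling the small "Raindrop" set (augmentation would be applied at load time).
+
+```python
+import random
+
+def epoch_indices(dataset_sizes, per_epoch=9000, seed=0):
+    """Return one list of `per_epoch` sample indices per dataset."""
+    rng = random.Random(seed)
+    plans = {}
+    for name, n in dataset_sizes.items():
+        if n >= per_epoch:                            # sub-sample large datasets (e.g. Snow100K)
+            plans[name] = rng.sample(range(n), per_epoch)
+        else:                                         # over-sample small datasets (e.g. Raindrop)
+            plans[name] = [rng.randrange(n) for _ in range(per_epoch)]
+    return plans
+
+plans = epoch_indices({"outdoor_rain": 9000, "snow100k": 100000, "raindrop": 1119})
+print({k: len(v) for k, v in plans.items()})
+```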
+
+Table 1: A comparison of our algorithm with the baseline methods performed on Test 1 [25] dataset.
+
+| Method | Variant | PSNR | SSIM |
+| --- | --- | --- | --- |
+| DetailsNet [8] + Dehaze | DHF | 13.36 | 0.583 |
+| DetailsNet [8] + Dehaze | DRF | 15.68 | 0.640 |
+| RESCAN [29] + Dehaze | DHF | 14.72 | 0.587 |
+| RESCAN [29] + Dehaze | DRF | 15.91 | 0.615 |
+| Pix2Pix [17] | - | 19.09 | 0.710 |
+| CycleGAN [61] | - | 17.62 | 0.656 |
+| HRGAN [25] | - | 21.56 | 0.855 |
+| Ours | - | 24.71 | 0.898 |
+
+Table 2: A comparison of our algorithm with the baseline methods performed on Raindrop [41] dataset.
+
+| Metric | Pix2Pix [17] | AttentGAN [41] | Quan et al. [42] | Ours |
+| --- | --- | --- | --- | --- |
+| PSNR | 28.02 | 31.57 | 31.44 | 31.12 |
+| SSIM | 0.8547 | 0.9023 | 0.9263 | 0.9268 |
+
+Table 3: A comparison of our algorithm with the baseline methods performed on Snow100K-L [37] test dataset.
+
+| Metric | DetailsNet [8] | DesnowNet [38] | Ours |
+| --- | --- | --- | --- |
+| PSNR | 19.18 | 27.17 | 28.33 |
+| SSIM | 0.7495 | 0.8983 | 0.8820 |
+
+
+# 4.2. Training Details
+
+Our network is trained on all of the bad weather datasets in an end-to-end manner. To reduce the training expenses under limited resources, we adopt the first-order approximation in [35] and split the training data into two disjoint sets, Set1 and Set2. We simultaneously optimize the parameters of the image restoration network on Set1 and the architecture-search parameters on Set2. The learning rate is set to 0.002 initially and is divided by 2 after every 5 epochs until the $40^{th}$ epoch. We use the Adam optimizer [22] with a weight decay of $10^{-4}$ to optimize the network parameters.
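+
+A rough sketch of this optimisation schedule, with a toy stand-in model and random data: network weights and architecture parameters are updated on the two splits with separate Adam optimizers (first-order approximation of [35]), and the halving-every-5-epochs rule is expressed with a `StepLR` scheduler. Everything named here is a placeholder, not the paper's actual code.
+
+```python
+import torch
+
+model = torch.nn.Linear(8, 8)                       # stand-in for the restoration network
+alpha = [torch.nn.Parameter(torch.zeros(3))]        # stand-in architecture parameters
+
+opt_w = torch.optim.Adam(model.parameters(), lr=0.002, weight_decay=1e-4)
+opt_a = torch.optim.Adam(alpha, lr=0.002, weight_decay=1e-4)
+sched = torch.optim.lr_scheduler.StepLR(opt_w, step_size=5, gamma=0.5)   # halve every 5 epochs
+
+for epoch in range(40):
+    # Set1 batch: update restoration-network weights.
+    x1 = torch.rand(4, 8)
+    loss_w = ((model(x1) - x1) ** 2).mean()
+    opt_w.zero_grad(); loss_w.backward(); opt_w.step()
+
+    # Set2 batch: update architecture parameters (toy mixed operation).
+    x2 = torch.rand(4, 8)
+    w = torch.softmax(alpha[0], 0)
+    mixed = w[0] * model(x2) + w[1] * x2 + w[2] * torch.zeros_like(x2)
+    loss_a = ((mixed - x2) ** 2).mean()
+    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
+
+    sched.step()
+```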
+
+# 5. Experiments
+
+We evaluate our method using both synthetic and real bad weather images, including rain, snow and adherent raindrop. The test dataset for rain (with fog) is the test set from HRGAN [25]. The baseline methods for deraining comprise the state-of-the-art heavy rain removal HRGAN [25], RESCAN [29], and DetailsNet [8]. The test dataset for snow is the Snow100K-L test set adopted from [37]. Since there are only a few recent works on snow removal, we compare with DeSnowNet [37] and the baseline methods from that paper. Lastly, the test dataset for adherent raindrop is from Qian et al. [41]. We compare our method with the most recent raindrop removal methods of Quan et al. [42] and Qian et al. [41].
+
+# 5.1. Qualitative Results
+
+Rain and Fog We show the results produced by the various methods on synthetic rain images in Fig. 5. One can observe that our network, while trained for multiple bad weather types, achieves competitive performance compared with dedicated state-of-the-art deraining methods. We provide more results in the supplementary material.
+
+Snow We show the results produced by the various methods on synthetic snow images from the Snow100K dataset [38] in Fig. 8. $^{1}$
+
+Raindrop We list the results of our method compared with recent raindrop removal methods in Fig. 7. Although our method does not produce the best result in terms of the PSNR, we still achieve a competitive performance without incorporating extra information such as that obtained from the edge attention mechanism in [42].
+
+# 5.2. Quantitative Results
+
+Tables 1 and 2 show the quantitative results of our proposed method compared with dedicated state-of-the-art methods and generic image restoration methods. The quantitative results are evaluated based on two metrics: PSNR [15] and SSIM [49]. For the raindrop removal task, our method does not yield the best result in terms of PSNR, but achieves competitive performance.
+
+# 6. Ablation Study
+
+To study the effectiveness of each of the components in our proposed network, we conduct an ablation study, the results of which are shown in Table 4. As can be seen, our network with the feature search performs better than simple concatenation. We also conduct an ablation study on the categorical adversarial training component. The quantitative results tested on the deraining task are shown in Table 4.
+
+# 7. Conclusion
+
+In this paper, we propose a novel all-in-one bad weather image enhancement solution that can handle multiple types of bad weather degradations using only one single network. The competitive performance of our network stems from
+
+
+
+
+Figure 5: Synthetic rain and fog removal results of our method compared with state-of-the-art dedicated rain and fog removal methods. Panels: (a) input, (b) RESCAN [29], (c) HRGAN [25], (d) Ours, (e) GT.
+
+
+Figure 6: Realistic rain and fog removal results of our method compared with state-of-the-art dedicated rain and fog removal methods. Panels: (a) input, (b) DetailsNet [8], (c) RESCAN [29], (d) HRGAN [25], (e) Ours.
+
+Table 4: Ablation study on our search component and categorical adversarial learning component in the proposed network. The evaluation is conducted on the test dataset Test 1 [25] for the rain and fog removal task.
+
+| Method | PSNR | SSIM |
+| --- | --- | --- |
+| No Feature Search | 20.82 | 0.827 |
+| No Categorical Discriminator | 21.58 | 0.86 |
+| Full Architecture | 24.71 | 0.898 |
+
+our two main contributions. First, to find the most effective way to process features from different bad weather domains, we propose an architectural search equipped with, among others, four fundamental operations designed for bad weather, namely deveiling, residue, self-attention and decomposition. Second, we design a multi-class discriminator that classifies image degradation types and assesses image correctness simultaneously. The proposed new training scheme updates the encoders in the generator based on the classification results of the discriminator. Finally, comprehensive experiments demonstrate the effectiveness of our method compared with the dedicated state-of-the-art algorithms on rain, snow and raindrop removal tasks.
+
+
+
+
+
+
+Figure 7: Raindrop removal results of our method compared with state-of-the-art dedicated raindrop removal methods. Panels: (a) input, (b) AttentGAN [41], (c) Quan et al. [42], (d) Ours, (e) Ground Truth.
+
+
+Figure 8: Snow removal results of our method compared with state-of-the-art dedicated snow removal methods. Panels: (a) input, (b) DetailsNet [8], (c) DeSnowNet [38], (d) Ours.
+Figure 9: Comparison between different networks in the ablation study (zoom to view details). Panels: (a) Input, (b) No Categorical Discriminator, (c) No Feature Search, (d) Ours.
+
+# References
+
+[1] D. Berman, T. Treibitz, and S. Avidan. Non-local image dehazing. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
+[2] Jérémie Bossu, Nicolas Hautière, and Jean-Philippe Tarel. Rain or snow detection in image sequences through use of a histogram of orientation of streaks. International Journal of Computer Vision, 93(3):348-367, Jul 2011.
+[3] Bolun Cai, Xiangmin Xu, Kui Jia, Chunmei Qing, and Dacheng Tao. Dehazenet: An end-to-end system for single image haze removal. Trans. Img. Proc., 25(11):5187-5198, Nov. 2016.
+[4] Jie Chen, Cheen-Hau Tan, Junhui Hou, Lap-Pui Chau, and He Li. Robust video content alignment and compensation for rain removal in a cnn framework. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
+[5] D. Eigen, D. Krishnan, and R. Fergus. Restoring an image taken through a window covered with dirt or rain. In 2013 IEEE International Conference on Computer Vision, pages 633-640, Dec 2013.
+[6] Qingnan Fan, Dongdong Chen, Lu Yuan, Gang Hua, Nenghai Yu, and Baoquan Chen. Decouple learning for parameterized image operators. In The European Conference on Computer Vision (ECCV), September 2018.
+[7] Raanan Fattal. Dehazing using color-lines. ACM Trans. Graph., 34(1), Dec. 2015.
+[8] Xueyang Fu, Jiabin Huang, Delu Zeng, Yue Huang, Xinghao Ding, and John Paisley. Removing rain from single images via a deep detail network. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
+[9] Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. A neural algorithm of artistic style. CoRR, abs/1508.06576, 2015.
+[10] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672-2680. Curran Associates, Inc., 2014.
+[11] K. Greff, R. K. Srivastava, J. Koutnik, B. R. Steunebrink, and J. Schmidhuber. Lstm: A search space odyssey. IEEE Transactions on Neural Networks and Learning Systems, 28(10):2222-2232, Oct 2017.
+[12] Zhixiang Hao, Shaodi You, Yu Li, Kunming Li, and Feng Lu. Learning from synthetic photorealistic raindrop for single image raindrop removal. In The IEEE International Conference on Computer Vision (ICCV) Workshops, Oct 2019.
+[13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770-778, 2016.
+[14] Quan Hoang, Tu Dinh Nguyen, Trung Le, and Dinh Phung. MGAN: Training generative adversarial nets with multiple generators. In International Conference on Learning Representations, 2018.
+[15] Q. Huynh-Thu and M. Ghanbari. Scope of validity of psnr in image/video quality assessment. *Electronics Letters*, 44(13):800-801, June 2008.
+[16] Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. Globally and Locally Consistent Image Completion. ACM Transactions on Graphics (Proc. of SIGGRAPH 2017), 36(4):107:1-107:14, 2017.
+
+[17] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. arxiv, 2016.
+[18] Tai-Xiang Jiang, Ting-Zhu Huang, Xi-Le Zhao, Liang-Jian Deng, and Yao Wang. A novel tensor-based video rain streaks removal approach via utilizing discriminatively intrinsic priors. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
+[19] Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. In Proceedings of the 32Nd International Conference on International Conference on Machine Learning - Volume 37, ICML'15, pages 2342-2350. JMLR.org, 2015.
+[20] L. W. Kang, C. W. Lin, and Y. H. Fu. Automatic single-image-based rain streaks removal via image decomposition. IEEE Transactions on Image Processing, 21(4):1742-1755, April 2012.
+[21] Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Accurate image super-resolution using very deep convolutional networks. CoRR, abs/1511.04587, 2015.
+[22] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
+[23] Boyi Li, Xiulian Peng, Zhangyang Wang, Jizheng Xu, and Dan Feng. Aod-net: All-in-one dehazing network. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017.
+[24] Minghan Li, Xiangyong Cao, Qian Zhao, Lei Zhang, Chenqiang Gao, and Deyu Meng. Video rain/snow removal by transformed online multiscale convolutional sparse coding. CoRR, abs/1909.06148, 2019.
+[25] Ruoteng Li, Loong-Fah Cheong, and Robby T. Tan. Heavy rain image restoration: Integrating physics model and conditional adversarial learning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
+[26] R. Li, J. Pan, Z. Li, and J. Tang. Single image dehazing via conditional generative adversarial network. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8202-8211, June 2018.
+[27] Ruoteng Li, Robby T. Tan, and Loong-Fah Cheong. Robust optical flow in rainy scenes. In The European Conference on Computer Vision (ECCV), September 2018.
+[28] Ruoteng Li, Robby T. Tan, Loong-Fah Cheong, Angelica I. Aviles-Rivero, Qingnan Fan, and Carola-Bibiane Schonlieb. Rainflow: Optical flow under rain streaks and rain veiling effect. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
+[29] Xia Li, Jianlong Wu, Zhouchen Lin, Hong Liu, and Hongbin Zha. Recurrent squeeze-and-excitation context aggregation net for single image deraining. In The European Conference on Computer Vision (ECCV), September 2018.
+[30] Yu Li, Robby T. Tan, Xiaojie Guo, Jiangbo Lu, and Michael S. Brown. Rain streak removal using layer priors. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
+[31] Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan L. Yuille, and Li Fei-Fei. Auto-DeepLab: Hierarchical neural architecture search for semantic image segmentation. CoRR, abs/1901.02985, 2019.
+
+[32] Chenxi Liu, Barret Zoph, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan L. Yuille, Jonathan Huang, and Kevin Murphy. Progressive neural architecture search. CoRR, abs/1712.00559, 2017.
+[33] Ding Liu, Bihan Wen, Yuchen Fan, Chen Change Loy, and Thomas S. Huang. Non-local recurrent network for image restoration. CoRR, abs/1806.02919, 2018.
+[34] Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: differentiable architecture search. CoRR, abs/1806.09055, 2018.
+[35] Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. In International Conference on Learning Representations, 2019.
+[36] Jiaying Liu, Wenhan Yang, Shuai Yang, and Zongming Guo. Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
+[37] Yun-Fu Liu, Da-Wei Jaw, Shih-Chia Huang, and Jenq-Neng Hwang. Desnownet: Context-aware deep network for snow removal. CoRR, abs/1708.04512, 2017.
+[38] Y. Liu, D. Jaw, S. Huang, and J. Hwang. Desnownet: Context-aware deep network for snow removal. IEEE Transactions on Image Processing, 27(6):3064-3073, June 2018.
+[39] Jinshan Pan, Sifei Liu, Deqing Sun, Jiawei Zhang, Yang Liu, Jimmy Ren, Zechao Li, Jinhui Tang, Huchuan Lu, Yu-Wing Tai, and Ming-Hsuan Yang. Learning dual convolutional neural networks for low-level vision. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
+[40] Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, and Jeff Dean. Efficient neural architecture search via parameter sharing. CoRR, abs/1802.03268, 2018.
+[41] Rui Qian, Robby T. Tan, Wenhan Yang, Jiajun Su, and Jiaying Liu. Attentive generative adversarial network for raindrop removal from a single image. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
+[42] Yuhui Quan, Shijie Deng, Yixin Chen, and Hui Ji. Deep learning for seeing through window with raindrops. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
+[43] Wenqi Ren, Si Liu, Hua Zhang, Jin shan Pan, Xiaochun Cao, and Ming-Hsuan Yang. Single image dehazing via multiscale convolutional neural networks. In ECCV, 2016.
+[44] Weihong Ren, Jiandong Tian, Zhi Han, Antoni Chan, and Yandong Tang. Video desnowing and deraining based on matrix decomposition. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
+[45] C. J. Schuler, M. Hirsch, S. Harmeling, and B. Scholkopf. Learning to deblur. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(7):1439-1451, July 2016.
+[46] Soo-Chang Pei, Yu-Tai Tsai, and Chen-Yu Lee. Removing rain and snow in a single image using saturation and visibility features. In 2014 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), pages 1-6, July 2014.
+
+[47] R. T. Tan. Visibility in bad weather from a single image. In 2008 IEEE Conference on Computer Vision and Pattern Recognition, pages 1-8, June 2008.
+[48] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc., 2017.
+[49] Zhou Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600-612, April 2004.
+[50] Wei Wei, Lixuan Yi, Qi Xie, Qian Zhao, Deyu Meng, and Zongben Xu. Should we encode rain streaks in video as deterministic or stochastic? In The IEEE International Conference on Computer Vision (ICCV), Oct 2017.
+[51] Jing Xu, Wei Zhao, Peng Liu, and Xianglong Tang. An improved guidance image based method to remove rain and snow in a single image. Computer and Information Science, 5, 04 2012.
+[52] A. Yamashita, Y. Tanaka, and T. Kaneko. Removal of adherent waterdrops from images acquired with stereo camera. In 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 400-405, 2005.
+[53] Wenhan Yang, Robby T. Tan, Jiashi Feng, Jiaying Liu, Zongming Guo, and Shuicheng Yan. Joint rain detection and removal via iterative region dependent multi-task learning. CoRR, abs/1609.07769, 2016.
+[54] W. Yang, R. T. Tan, J. Feng, J. Liu, S. Yan, and Z. Guo. Joint rain detection and removal from a single image with contextualized deep networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 1-1, 2019.
+[55] S. You, R. T. Tan, R. Kawakami, Y. Mukaigawa, and K. Ikeuchi. Adherent raindrop modeling, detection and removal in video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(9):1721-1733, 2016.
+[56] Han Zhang, Ian J. Goodfellow, Dimitris N. Metaxas, and Augustus Odena. Self-attention generative adversarial networks. ArXiv, abs/1805.08318, 2018.
+[57] He Zhang and Vishal M. Patel. Density-aware single image de-raining using a multi-stream dense network. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
+[58] K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Transactions on Image Processing, 26(7):3142-3155, July 2017.
+[59] X. Zhang, H. Li, Y. Qi, W. K. Leow, and T. K. Ng. Rain removal in video by combining temporal and chromatic properties. In 2006 IEEE International Conference on Multimedia and Expo, pages 461-464, July 2006.
+[60] Yulun Zhang, Kunpeng Li, Kai Li, Bineng Zhong, and Yun Fu. Residual non-local attention networks for image restoration. In International Conference on Learning Representations, 2019.
+
+[61] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Computer Vision (ICCV), 2017 IEEE International Conference on, 2017.
+[62] Lei Zhu, Chi-Wing Fu, Dani Lischinski, and Pheng-Ann Heng. Joint bi-layer optimization for single-image rain streak removal. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017.
+[63] Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V. Le. Learning transferable architectures for scalable image recognition. CoRR, abs/1707.07012, 2017.
\ No newline at end of file
diff --git a/allinonebadweatherremovalusingarchitecturalsearch/images.zip b/allinonebadweatherremovalusingarchitecturalsearch/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..e6f02e8b94266fabfc67b1dc7f27d9c492de5eda
--- /dev/null
+++ b/allinonebadweatherremovalusingarchitecturalsearch/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1051e8a0f54aa5da258123bb384faa0c2261cb2c248b225d0ad5cd3f13dd7d4e
+size 819216
diff --git a/allinonebadweatherremovalusingarchitecturalsearch/layout.json b/allinonebadweatherremovalusingarchitecturalsearch/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..4d6037ef1935f17ceac7066958fd626af3081503
--- /dev/null
+++ b/allinonebadweatherremovalusingarchitecturalsearch/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7f7b80fb9d3e30622e2a2157dcddd74eb88127fa9e3123b5c44c0fe94f3cdb16
+size 437694
diff --git a/anadaptiveneuralnetworkforunsupervisedmosaicconsistencyanalysisinimageforensics/cefe57cc-918c-4fd6-bb8c-87c78c15f0f9_content_list.json b/anadaptiveneuralnetworkforunsupervisedmosaicconsistencyanalysisinimageforensics/cefe57cc-918c-4fd6-bb8c-87c78c15f0f9_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..ed9b7f370003389e22e9f988703cc83399864d7f
--- /dev/null
+++ b/anadaptiveneuralnetworkforunsupervisedmosaicconsistencyanalysisinimageforensics/cefe57cc-918c-4fd6-bb8c-87c78c15f0f9_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:76e367a23b393d983727240fbeca23ed10891de531c8cb31ea1f8fdaba4d5e5c
+size 80358
diff --git a/anadaptiveneuralnetworkforunsupervisedmosaicconsistencyanalysisinimageforensics/cefe57cc-918c-4fd6-bb8c-87c78c15f0f9_model.json b/anadaptiveneuralnetworkforunsupervisedmosaicconsistencyanalysisinimageforensics/cefe57cc-918c-4fd6-bb8c-87c78c15f0f9_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..afce61ee7597cbb7e4c20bf218f577449996f01c
--- /dev/null
+++ b/anadaptiveneuralnetworkforunsupervisedmosaicconsistencyanalysisinimageforensics/cefe57cc-918c-4fd6-bb8c-87c78c15f0f9_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ca088c5a727ff78acc2f2992e8fd493a30928d056c6e15912c7c18748fb0b8d1
+size 104946
diff --git a/anadaptiveneuralnetworkforunsupervisedmosaicconsistencyanalysisinimageforensics/cefe57cc-918c-4fd6-bb8c-87c78c15f0f9_origin.pdf b/anadaptiveneuralnetworkforunsupervisedmosaicconsistencyanalysisinimageforensics/cefe57cc-918c-4fd6-bb8c-87c78c15f0f9_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..00f70b1b2ef117e6c437178ec4dcfa730bbc0e82
--- /dev/null
+++ b/anadaptiveneuralnetworkforunsupervisedmosaicconsistencyanalysisinimageforensics/cefe57cc-918c-4fd6-bb8c-87c78c15f0f9_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:12c53a497f3e70acb00105e1b8920103996461878f10f149fdbbf9a26abdf205
+size 2921261
diff --git a/anadaptiveneuralnetworkforunsupervisedmosaicconsistencyanalysisinimageforensics/full.md b/anadaptiveneuralnetworkforunsupervisedmosaicconsistencyanalysisinimageforensics/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..0f4ddc6579e0eb50b5440abcc59a3d45123bbfb4
--- /dev/null
+++ b/anadaptiveneuralnetworkforunsupervisedmosaicconsistencyanalysisinimageforensics/full.md
@@ -0,0 +1,321 @@
+# An Adaptive Neural Network for Unsupervised Mosaic Consistency Analysis in Image Forensics
+
+Quentin Bammey, Rafael Grompone von Gioi, Jean-Michel Morel
+
+CMLA, CNRS, ENS Paris-Saclay, Université Paris-Saclay
+
+{quentin.bammey, grompone, morel}@ens-paris-saclay.fr
+
+# Abstract
+
+Automatically finding suspicious regions in a potentially forged image by splicing, inpainting or copy-move remains a widely open problem. Blind detection neural networks trained on benchmark data are flourishing. Yet, these methods do not provide an explanation of their detections. The more traditional methods try to provide such evidence by pointing out local inconsistencies in the image noise, JPEG compression, chromatic aberration, or in the mosaic. In this paper we develop a blind method that can train directly on unlabelled and potentially forged images to point out local mosaic inconsistencies. To this aim we designed a CNN structure inspired from demosaicing algorithms and directed at classifying image blocks by their position in the image modulo $(2 \times 2)$ . Creating a diversified benchmark database using varied demosaicing methods, we explore the efficiency of the method and its ability to adapt quickly to any new data.
+
+# 1. Introduction
+
+Detecting image forgeries is a problem with critical applications ranging from the control of fake news in online media and social networks [59] to the avoidance of scientific misconduct involving image manipulation$^{1}$. Images are easy to modify in a visually realistic way, but those modifications can be difficult to detect automatically.
+
+The most common image forgery techniques are copy-move, both internal and external (splicing), inpainting and enhancement, which may include a modification of the hue, contrast, brightness, etc., of an image to hide objects or change their meaning [22, 75]. The settings in which these images are created and distributed may further alter the image and hinder certain detection methods. For instance, uncompressed images have characteristic demosaicing and noise signatures which are nearly erased by a
+
+strong compression. On the other hand, detection in tampered JPEG images may be based on the inconsistency of JPEG encoding caused by splicing [56, 33, 21, 43, 6, 7, 42]. Yet, this detection method is so efficient that research on counterforensics has been very active and has proposed efficient ways to reinstate a coherent JPEG encoding after forgery [62, 63, 66, 20].
+
+There are two concurrent paradigms for forgery detection techniques. The first consists in developing many different methods that separately address the varied forgeries and the inconsistencies they create. Error Level Analysis (ELA) [34] fits in this category and creates a heatmap by recompressing the image and visualising the difference. As we just mentioned, many methods look for inconsistencies in JPEG encoding; many others try to detect noise discrepancies [36, 58, 14, 27, 48, 49, 5, 51, 41, 67, 35, 16, 54, 47, 44, 76, 73] or attempt to directly detect internal copy-move operations [68, 71, 70, 61, 1, 23]. The variety of setups before and after forgery makes exhaustiveness difficult, yet results obtained by such specific methods are self-explanatory. However, with few exceptions such as the recent development of Siamese Networks, which we briefly describe in Section 2, most of these methods are created manually, which can limit their performance, especially when forged images are created with a combination of methods rather than just one.
+
+Another possibility is to consider forgery detection as a unique learning problem and develop a structure – usually a neural network – to classify and/or localise forgeries independently of the setup and forgery type. For instance, in [74] a heat map is computed, while in [3] the network segments the image into forged and non-forged parts. See also [10] and [4]. While exhaustiveness is theoretically possible with these methods, it is actually limited by the database itself: they learn how to detect forgeries as seen in a training database, and can thus fail when confronted with images whose forgeries were made differently.
+
+In this article, we choose to focus on the detection of demosaicing artefacts to detect forged regions on an image. Most cameras cannot directly capture colour. In order to
+
+
+Figure 1: The Bayer matrix is the most common CFA. Each pixel is represented as the colour in which the camera samples it.
+
+create colour images, they instead use a filter, named colour filter array (CFA) or mosaic, before the light rays reach the camera's sensor. As a consequence, each pixel is sampled in only one colour, and the other colours must be interpolated from neighbouring pixels sampled in other colours. These interpolation algorithms leave artefacts that can be detected to know in which colour each pixel was sampled.
+
+The most commonly used CFA is by far the Bayer matrix, shown in Fig. 1, which samples two pixels of green for one pixel of red and another of blue. Although other CFAs exist, their use is marginal. As a consequence, we only consider the Bayer matrix in this article.
+
+When an image is forged by copying part of this image or of another image onto it, there is a $\frac{3}{4}$ probability that the mosaic of the forged region will not be aligned to that of the main image. As a consequence, locally detecting the position of the mosaic in images can lead to finding discrepancies caused by forgeries.
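+
+As a small illustration of this alignment argument (our own sketch, not the paper's code), the snippet below gives the colour in which a pixel is sampled for a Bayer CFA shifted by an offset modulo $(2 \times 2)$; a region pasted from a source whose offset differs from the host's, which happens for 3 of the 4 equally likely offsets, leaves a detectable inconsistency.
+
+```python
+import numpy as np
+
+BAYER = np.array([["R", "G"],
+                  ["G", "B"]])                  # Bayer pattern of Fig. 1, up to a (2x2) shift
+
+def sampled_channel(y, x, offset=(0, 0)):
+    """Colour in which pixel (y, x) is sampled for a Bayer CFA shifted by `offset` modulo (2, 2)."""
+    return BAYER[(y + offset[0]) % 2, (x + offset[1]) % 2]
+
+host_offset, spliced_offset = (0, 0), (1, 0)    # 3 out of 4 possible offsets disagree with the host
+for y in range(2):
+    for x in range(2):
+        print((y, x), sampled_channel(y, x, host_offset), sampled_channel(y, x, spliced_offset))
+```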
+
+While detecting the presence of demosaicing can be done reliably with current state-of-the-art methods, the interpretation of these artefacts is still a challenge. Most methods make assumptions of linearity of the interpolation or even assume the colour channels to be independently demosaiced. These assumptions are invalid with most commonly-used demosaicing methods, and even state-of-the-art mosaic detection algorithms thus tend to yield a large number of false positives.
+
+Many different demosaicing algorithms exist; furthermore, most of those used in commercial cameras are undisclosed. Learning-based methods must thus take into account the impossibility of learning on all existing algorithms.
+
+In this paper, we overcome the above limitations by using an unsupervised convolutional neural network that learns to detect changes in the underlying pattern of mosaic artefacts. This network can be trained on unlabelled authentic images to detect forgeries in new images. Similarly to zero-shot learning, it can also train directly on a database of potentially forged images to adapt to JPEG compression.
+
+The contributions of our article are three-fold. We create a new convolutional neural network (CNN) structure tailored to the specific task of mosaic artefact detection, which beats state-of-the-art mosaic detection methods. It can be trained in a fully unsupervised manner, and can even be directly retrained on a set of images to adapt to their specific conditions. To do that, we propose a new use for pixelwise convolutions in neural networks. Their main use in the literature has been to reduce the dimensionality of a network before performing heavier spatial operations, such as in [64]. We argue that they can also be stacked on each other, to process the causality relations between previously-computed spatial features, as, for the same price as spatial convolutions, they can have more and bigger layers; furthermore, they do not add any more spatial dependency to the results. Finally, working over the Dresden image dataset [28], we create a new dataset aimed specifically at benchmarking forgery detection via demosaicing artefacts.
+
+Both the code and the dataset can be found on https://github.com/qbammey/adaptive_cfa_forensics.
+
+# 2. Related works
+
+Identification of demosaicing artefacts for forgery detection is not a new subject. A pioneering paper in this field is [57]. They propose to work independently on the different colour channels, and use an expectation-maximisation (EM) algorithm to jointly compute a linear estimation of the demosaicing algorithm and find the probability of each pixel being interpolated or originally sampled. They then apply the Fast Fourier Transform (FFT) on the pseudo-probability map to detect changes in the magnitude and phase at the 2-periodicity peaks, which can correspond to changes in the CFA artefacts.
+
+[29, 2] improve on [57] by replacing the EM algorithm with a direct linear estimation of the algorithm in all four possible positions. [29] uses the Discrete Cosine Transform (DCT) instead of the FFT in order to see changes of the mosaic, which can correspond to copy-move forgeries, as a change in the sign of the DCT, which is easier to see than a change of phase in the FFT. [2] notes that in a scenario where many different methods are needed to detect the variety of forgeries, it becomes especially important to strictly control the number of false positives for each of them. They propose a simple method to detect the presence of a significant CFA pattern, by pooling the error map into blocks, each of which votes for one of four grids. In the absence of demosaicing, the votes should be uniformly distributed between the four grids. They thus look at the number of votes for each position, and threshold the detection on the rate at which a detection at least as significant would happen in the absence of demosaicing. All three methods make two strong assumptions:
+
+- they assume that demosaicing is done independently in each colour channel, and
+- they assume that a linear estimation can sufficiently represent the demosaicing algorithm.
+
+Even though these two assumptions might have been at least partially true in the green channel for most demosaicing algorithms commonly used in 2005, when [57] was first published, it is far from being the case nowadays.
+
+Another important method is provided by [40]. They propose to directly detect the mosaic used in the image; to do so, they mosaic the image in all four possible positions, and redemosaic it using a simple algorithm such as bilinear interpolation. The reasoning is that the reconstruction should be closest to the original when the image is remosaicked and demosaiced in the correct position. They thus compare the residual maps to detect which of the possible mosaics has been used. Arguing that demosaicing artefacts can usually be seen most clearly in the green channel, they first decide on the position of the green-sampled pixels. They then use the most significant of the red and blue channels to decide between the two remaining positions. This order of decision has been used in most of the literature since then. Their use of the bilinear algorithm limits them in the same way as [57, 29], because of the linearity and colour independence of the bilinear algorithm, which are not shared by most modern demosaicing algorithms. However, their method does not depend on the choice of the algorithm, and can thus provide very good results in the rare case where the demosaicing algorithm of a studied image is known.
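+
+A rough sketch of this remosaic-and-compare idea is given below; `bilinear_demosaic` is a hypothetical helper standing in for the simple interpolation used in [40], not the original method's code.
+
+```python
+import numpy as np
+
+def grid_by_remosaicing(img, bilinear_demosaic):
+    """Re-sample the image at each of the four possible CFA offsets, re-demosaic
+    it with a simple algorithm, and keep the offset whose reconstruction is
+    closest to the input (sketch; `bilinear_demosaic` is a hypothetical helper)."""
+    residuals = {}
+    for dy in (0, 1):
+        for dx in (0, 1):
+            reconstructed = bilinear_demosaic(img, offset=(dy, dx))
+            residuals[(dy, dx)] = np.abs(reconstructed - img).mean()
+    return min(residuals, key=residuals.get)
+```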
+
+In order to detach themselves from one specific algorithm, [11] notes that pixels are more likely to feature locally extremal values in the channel in which they are sampled, and, on the contrary, to take intermediate values where they are interpolated. As a consequence, they count the number of intermediate values in all four positions to decide which position is the correct one, using the decision pipeline introduced in [40]. The assumption that pixels are more likely to take extremal values in their sampled channel is usually verified with most algorithms, which leads this method to yield good classification scores. However, the probability bias can sometimes be reversed when algorithms make extensive use of the high frequencies of other channels, which can lead to some regions of the image being detected in a wrong position with strong confidence.
+
+[60] is the first method that tries to alleviate the colour-channel independence assumption. Instead of working separately in each channel, they compute the difference of the green channel separately with both the red and blue channels. Using the variance of those differences, they decide on the correct position using a similar pipeline as above. Although the colour independence is hard-coded, the colour difference is used in many current algorithms. Using it instead of the raw channels thus provides a first step toward a correct understanding of demosaicing artefacts.
+
+[45] is currently, to the best of our knowledge, the only method offering to use a neural network for mosaic detection. They notice that most forgery detection methods involve first computing a residual error map, as in [40], or a similar feature map, as in [57, 29, 11, 60], and then interpreting it, for instance with the FFT in [57]. They first compute an error map based on the green channel, then use a CNN to interpret the error map and distinguish forgeries from post-processing steps such as JPEG compression. However, distinguishing demosaicing artefacts from JPEG or resampling artefacts can already be achieved with simple methods such as [57]'s FFT or [2]'s a contrario approach. With most current methods, the first source of errors in the feature map comes not from post-processing applied to the image (which tends to hinder CFA detection rather than create false detections) but from a lack of fit between the detection method and the image's demosaicing. As a consequence, we believe that a CFA detection method would benefit more from using a neural network in the computation of the feature map than in its interpretation. They do claim high-accuracy results. Unfortunately, their tests were made on raw images from the Dresden database [28], which they thus demosaic themselves, without indicating which algorithm was used, and no code is provided to verify the results.
+
+Neural networks have also gained popularity in image forensics in the form of Siamese networks [8]. The goal of these networks is to compare two samples. Features for both samples are processed with a first network with shared weights, and a second network is applied to the residual between the two samples to decide on their similarity. This approach has already been successfully applied in several areas of forensics, including camera source detection [50], and prediction of the probability of two patches sharing the same EXIF data [32] for splicing detection. While we could use Siamese networks to compare the CFA pattern of different patches, Siamese networks are especially powerful when classification is cumbersome, for instance because of a high number of classes, some of which may not be present in the training data, in which case it can become more practical to directly compare patches without explicitly classifying them. On the other hand, the mosaic of an image belongs to one of four classes. This means that Siamese networks do not necessarily offer an advantage for CFA grid detection, and we can directly use a classifying network, which is less complex as we do not need to compare all pairs of patches.
+
+# 3. Proposed method
+
+A standard approach to finding copy-move forgeries through demosaicing artefacts would be to first detect the image's initial mosaic, and then detect whether parts of the image actually have a different mosaic. Our manual attempts to detect the original mosaic were not successful. Indeed, criteria to do this heavily depend on the demosaicing method. Instead, we designed a convolutional neural network (CNN) to train on blocks of the image and directly predict their position in the image modulo $(2, 2)$. The only cues to this relative position are the periodic artefacts, such as CFA, resampling and JPEG artefacts. Hence, a change of the mosaic can lead to forged blocks being detected at incorrect positions modulo $(2, 2)$ and thus flagged as forged. Because the target output is only the relative position of blocks on the image, all that is required to train the network is a set of demosaiced images, without additional labels.
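+
+As a hedged illustration of this flagging step (not the exact decision rule used by the network), once each block has been assigned a predicted position modulo $(2, 2)$, the blocks disagreeing with the dominant grid of the image can be marked as suspicious:
+
+```python
+import numpy as np
+
+pred = np.random.randint(0, 4, size=(20, 30))               # stand-in per-block predictions (0..3)
+dominant = np.bincount(pred.ravel(), minlength=4).argmax()  # grid of the authentic background
+forged_mask = pred != dominant                               # blocks whose mosaic disagrees
+```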
+
+In a standard unsupervised scenario, the CNN can be trained with many authentic images and then used on new images to detect forgeries on them. However, if we have to detect forgeries on a large database, and if we can assume that the images in the database are similar in terms of demosaicing and post-processing – and in particular JPEG compression –, then we can retrain the CNN, performing unsupervised transfer learning directly on the test data. As the forged regions generally occupy a small part of the images, and only a small proportion of the images under study are forged, the risk that the CNN will overfit on the forged regions will be small.
+
+The network consists of several parts, each serving a different purpose. It only uses 31,504 trainable parameters. In the initial training phase, overfitting can occur both on the image contents and on the specific algorithms used for demosaicing. Although the former can easily be avoided by using more images for training, avoiding overfitting on the algorithms is harder. The small size of the network thus helps to avoid overfitting during training. It is even more useful when retraining on the same images to be studied, as overfitting on those images is much harder to avoid, and can make the network miss forgeries.
+
+# 3.1. Spatial network
+
+The first layers extract spatial features from the images. Due to the nature of demosaicing, we make use of two specific types of convolutions.
+
+Most demosaicing algorithms try to avoid interpolating against a strong gradient [25], which would lead to visual artefacts. As a consequence, they often interpolate in one direction along edges. To mimic this, the first layers perform 10 horizontal, 10 vertical and 5 full convolutions, which are concatenated at the end of each layer.
+
+In a mosaiced image, only one in four pixels is red and one in four is blue. As visualised in Fig. 2, this means that at the location of a sampled pixel, the closest neighbours sampled in the same colour are all located two pixels away horizontally and/or vertically from the current position. We can take advantage of this by using dilated convolutions, which will only involve pixels belonging to the same mosaic.
+
+Figure 2: If we use a $3 \times 3$ convolution with a dilation of 2, the convolution at the central pixel sampled in blue only involves pixels sampled in the same colour. More generally, a 2-dilated convolution will look at pixels that all belong to the same colour channel.
+
+
+Figure 3: Spatial part of the network, containing 17,160 trainable parameters
+
+We first use a sequence of two layers of 10 horizontal $1 \times 3$, 10 vertical $3 \times 1$ and 5 full $3 \times 3$ convolutions. In parallel, we perform 10 horizontal, 10 vertical and 5 full convolutions, which are all 2-dilated. The outputs of both parts are concatenated with a skip-connection from the input image. To this output is applied a similar sequence of two layers of 10 horizontal, 10 vertical and 5 full convolutions, in parallel with 10 horizontal, 10 vertical and 5 full convolutions with a dilation of 2. The spatial output is the concatenation of the outputs of the second and fourth non-dilated convolutions, and of the two dilated convolutions.
+
+All layers in this part of the network are separated by a leaky rectified linear unit [46]. A diagram of this structure can be found in Fig. 3.
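+
+The following PyTorch sketch shows one such spatial layer under our reading of the text; the kernel counts, padding and wiring are our assumptions, not the authors' code.
+
+```python
+import torch
+import torch.nn as nn
+
+class DirectionalConvLayer(nn.Module):
+    """One spatial layer: 10 horizontal (1x3), 10 vertical (3x1) and 5 full (3x3)
+    convolutions, optionally 2-dilated, whose outputs are concatenated."""
+    def __init__(self, in_ch, dilation=1):
+        super().__init__()
+        d = dilation
+        self.h = nn.Conv2d(in_ch, 10, (1, 3), padding=(0, d), dilation=d)
+        self.v = nn.Conv2d(in_ch, 10, (3, 1), padding=(d, 0), dilation=d)
+        self.f = nn.Conv2d(in_ch, 5, 3, padding=d, dilation=d)
+        self.act = nn.LeakyReLU(0.01)
+
+    def forward(self, x):
+        return self.act(torch.cat([self.h(x), self.v(x), self.f(x)], dim=1))
+
+layer = DirectionalConvLayer(3)                # applied to an RGB input, for example
+dilated = DirectionalConvLayer(3, dilation=2)  # same layer, restricted to one mosaic
+```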
+
+
+Figure 4: Pixelwise $1 \times 1$ convolutional part of the network, containing 6,105 parameters
+
+# 3.2. Pixelwise Causal network
+
+Summarising, the network uses values that are up to four pixels away both horizontally and vertically from each pixel (the receptive field is thus $9 \times 9$). We consider this spatial span sufficient. Indeed, most demosaicing algorithms do not look farther to demosaic a given pixel. However, some algorithms still feature complex transfers between the different colour channels, especially in the high frequencies. As a consequence, the second part of our network consists of pixelwise $(1 \times 1)$ convolutions, which enable us to capture complex causal relations without adding more spatial dependencies to the convolutions. Although pixelwise convolutions are often used in the literature, their primary use is to reduce data dimensionality. The Inception network [64] uses pixelwise convolutions before large convolutions to reduce dimensionality. Other networks use depthwise separable convolutions, where standard convolutions are replaced with one depthwise convolution followed by a pixelwise convolution [12, 31].
+
+In our network, however, we do not stack them to reduce dimensionality, but to perform complex operations after the spatial features have been computed. Linking pointwise convolutions with each other enables us to represent complex relations at a low computational cost, with few parameters and without increasing spatial dependency.
+
+This part of the network consists of four layers of respectively 30, 15, 15 and 30 pixelwise ($1 \times 1$) convolutions. The output of the first convolution is skip-connected to the third and fourth convolutions, and the output of the second convolution is skip-connected to the fourth convolution. As a consequence, the last convolution takes the results of all previous pointwise layers into consideration to prepare features for the next step.
+
+All the layers in this part of the network are separated by Softplus activation [18]. A diagram of the structure can be seen in Fig. 4.
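+
+A possible PyTorch rendering of this pixelwise part, with the layer widths and skip connections read from the text (the exact wiring is our assumption):
+
+```python
+import torch
+import torch.nn as nn
+
+class PointwiseNet(nn.Module):
+    """Stacked 1x1 convolutions with skip connections; widths 30, 15, 15, 30
+    as described in the text, Softplus between layers."""
+    def __init__(self, in_ch):
+        super().__init__()
+        self.c1 = nn.Conv2d(in_ch, 30, 1)
+        self.c2 = nn.Conv2d(30, 15, 1)
+        self.c3 = nn.Conv2d(15 + 30, 15, 1)        # skip from the first layer
+        self.c4 = nn.Conv2d(15 + 15 + 30, 30, 1)   # skips from the first two layers
+        self.act = nn.Softplus()
+
+    def forward(self, x):
+        y1 = self.act(self.c1(x))
+        y2 = self.act(self.c2(y1))
+        y3 = self.act(self.c3(torch.cat([y2, y1], dim=1)))
+        return self.c4(torch.cat([y3, y2, y1], dim=1))
+```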
+
+# 3.3. Blocks preparation
+
+Although relative positions could be detected at the pixel level, grouping the pixels into blocks can lead to more reliable predictions. However, the blocks must be created carefully in order to avoid any bias.
+
+
+Figure 5: Processing the image into blocks
+
+Given an input image $I$ of shape $(2Y, 2X, C)$ , where $C$ is the number of channels ( $C = 30$ after our pixelwise network) and $2Y$ and $2X$ represent the spatial dimensions, we start by splitting the four modulo $(2, 2)$ positions of this image. We thus create four images $I_{00}, I_{01}, I_{10}$ and $I_{11}$ , each of shape $(Y, X, C)$ and defined by
+
+$$
+I_{\delta_x \delta_y}[y, x, c] = I[2y + \delta_y, 2x + \delta_x, c]. \tag{1}
+$$
+
+We then concatenate these four images in different ways into four new images $J_{00}, J_{01}, J_{10}$ and $J_{11}$ , each of shape $(Y, X, 4C)$ and defined as follows:
+
+$$
+\begin{aligned}
+J_{\delta_x \delta_y}[y, x, 4c]     &= I_{\delta_x \delta_y}[y, x, c] \\
+J_{\delta_x \delta_y}[y, x, 4c + 1] &= I_{(1 - \delta_x)\,\delta_y}[y, x, c] \\
+J_{\delta_x \delta_y}[y, x, 4c + 2] &= I_{\delta_x\,(1 - \delta_y)}[y, x, c] \\
+J_{\delta_x \delta_y}[y, x, 4c + 3] &= I_{(1 - \delta_x)(1 - \delta_y)}[y, x, c]
+\end{aligned} \tag{2}
+$$
+
+These four images are merely channel-wise permutations of one another, which enables the network to keep balance between the four patterns.
+
+Finally, each of these images is decomposed into blocks. Because all spatial and pixelwise features have already been computed in the previous parts, we can directly view the decomposition into blocks as one big average pooling, so that each block is spatially represented by one pixel. We thus get four output images $B_{00}, B_{01}, B_{10}$ and $B_{11}$ , each of shape $\left(\frac{Y}{16}, \frac{X}{16}, 4C\right)$ . Each image is thus spatially $32 \times 32$ times smaller than the original image.
+
+Thanks to this permutation, the detection problem is slightly shifted: pixels in $J_{\delta_x \delta_y}$ are rearranged so that all blocks of $B_{\delta_x \delta_y}$ should be detected at the same relative position modulo $(2, 2)$, namely $\delta_x \delta_y$. This process is illustrated in Fig. 5.
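+
+A compact sketch of this block preparation follows (using PyTorch tensors; the interleaved channel ordering of Eq. (2) is simplified here to a plain concatenation of the four sub-images):
+
+```python
+import torch
+import torch.nn.functional as F
+
+def make_blocks(feat, block=16):
+    """feat: (C, 2Y, 2X) feature map. Split it into its four modulo-(2,2)
+    sub-images, build the four permuted stacks J, then average-pool each
+    into blocks so that every block is represented by one pixel."""
+    I = {(dy, dx): feat[:, dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)}
+    B = {}
+    for dy in (0, 1):
+        for dx in (0, 1):
+            J = torch.cat([I[(dy, dx)], I[(dy, 1 - dx)],
+                           I[(1 - dy, dx)], I[(1 - dy, 1 - dx)]], dim=0)
+            B[(dy, dx)] = F.avg_pool2d(J.unsqueeze(0), block).squeeze(0)
+    return B
+
+blocks = make_blocks(torch.rand(30, 256, 256))  # stand-in for the pixelwise output
+```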
+
+# 3.4. Blockwise Causal network
+
+Because blocks are represented through average pooling, each block is spatially represented by one pixel. As a consequence, creating new pointwise convolutions amounts to processing the data independently - but with shared weights - in each block.
+
+Figure 6: Blockwise part of the network, containing 8,239 trainable parameters
+
+Furthermore, the four values $B_{\delta_x \delta_y}[y, x, 4c + i]$ for $i \in \{0, 1, 2, 3\}$ represent the same feature, averaged independently in each of the four possible mosaics $\delta_x\delta_y$. To compare these features separately before merging them, we start by stacking three layers of respectively 180, 90 and 90 grouped pixelwise convolutions, where the output in one channel at one given block position is made using only the values of the same feature, in the four mosaics and at the same position. Finally, we merge these features together with two additional layers, each of 45 full-depth pointwise convolutions. As in the pixelwise network, the layers are separated by Softplus activations [18]. The structure of the blockwise causal network is shown in Fig. 6.
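+
+Grouped $1 \times 1$ convolutions make this per-feature comparison straightforward; the sketch below assumes 30 input features (hence 120 channels after block preparation) and one group per feature, which is our reading of the text rather than the released code.
+
+```python
+import torch.nn as nn
+
+class BlockwiseNet(nn.Module):
+    """Sketch of the blockwise causal network: three grouped 1x1 layers compare
+    the same feature across the four mosaics, two full-depth 1x1 layers merge them."""
+    def __init__(self, feats=30):
+        super().__init__()
+        self.g1 = nn.Conv2d(4 * feats, 180, 1, groups=feats)
+        self.g2 = nn.Conv2d(180, 90, 1, groups=feats)
+        self.g3 = nn.Conv2d(90, 90, 1, groups=feats)
+        self.m1 = nn.Conv2d(90, 45, 1)
+        self.m2 = nn.Conv2d(45, 45, 1)
+        self.act = nn.Softplus()
+
+    def forward(self, x):
+        for layer in (self.g1, self.g2, self.g3, self.m1):
+            x = self.act(layer(x))
+        return self.m2(x)
+```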
+
+# 3.5. Decision and loss module
+
+A final layer of four pointwise convolutions is added to predict scores for each position. In an authentic image, all blocks from each image $B_{\delta_x \delta_y}$ would be expected to detect their own position as $\delta_x \delta_y$ . When training on several images whose main mosaic may differ, we let the network permute the outputs of the four images either horizontally, vertically, or diagonally, so as to obtain the lowest of the four global losses before computing the local loss. This enables the loss to take into account the possibility of different images having different main positions.
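+
+One way such a permutation-tolerant loss could be implemented is sketched below; the class indexing ($\delta_y \cdot 2 + \delta_x$) and the exact form of the permutation are assumptions on our part.
+
+```python
+import torch
+import torch.nn as nn
+
+def permutation_tolerant_loss(logits):
+    """logits[(dy, dx)]: (N_blocks, 4) position scores for the blocks of B_{dx,dy}.
+    Try the four global grid shifts and keep the lowest total cross-entropy."""
+    ce = nn.CrossEntropyLoss()
+    losses = []
+    for sy in (0, 1):
+        for sx in (0, 1):
+            total = 0.0
+            for (dy, dx), scores in logits.items():
+                target = ((dy + sy) % 2) * 2 + ((dx + sx) % 2)
+                labels = torch.full((scores.shape[0],), target, dtype=torch.long)
+                total = total + ce(scores, labels)
+            losses.append(total)
+    return torch.stack(losses).min()
+
+loss = permutation_tolerant_loss({(dy, dx): torch.randn(64, 4)
+                                  for dy in (0, 1) for dx in (0, 1)})
+```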
+
+# 3.6. Auxiliary prediction for training
+
+Because the spatial and pixelwise networks are used at full resolution – whereas the resolution of images is reduced by a factor $32 \times 32$ in the blockwise network –, the first part of the network takes a higher computational toll than the rest. In order to speed up training, we work in a manner similar to [64] and start by training the spatial and pixelwise networks together. We add an additional layer of 4 pointwise convolutions at the end of the pixelwise network, and train it with the cross-entropy loss to detect the position of each pixel modulo (2, 2).
+
+Once the first part of the network is trained, we remove this auxiliary layer and process the output of the training images into blocks, as explained in Section 3.3. We then train the blockwise network, using the preprocessed output of the pixelwise network.
+
+By training the first part of the network separately, and more importantly using a loss computed at full resolution, we can train it in fewer and faster iterations. Processing the images into blocks, which also requires significant time, must only be done once between the two global training steps. Finally, the blockwise part of the network can be trained very quickly, because there is no need to propagate into and from the full-resolution network at each iteration, making each individual iteration quicker.
+
+Training is done first on the spatial (Fig. 3) and pixelwise (Fig. 4) networks, using the aforementioned auxiliary layer. Then, the blockwise network (Fig. 6) is trained alone, using the results of the pixelwise network, processed into blocks as seen in Fig. 5. All training is done with the Cross-Entropy loss and the Adam optimiser [39], with a learning rate of $10^{-3}$ .
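+
+A minimal sketch of the first training stage follows, with a toy stand-in backbone (the real spatial and pixelwise networks are described above) and the per-pixel position labels built directly from the coordinates; it is illustrative, not the authors' training script.
+
+```python
+import torch
+import torch.nn as nn
+
+backbone = nn.Sequential(nn.Conv2d(3, 30, 3, padding=1), nn.Softplus())  # stand-in
+aux_head = nn.Conv2d(30, 4, 1)           # auxiliary per-pixel position classifier
+opt = torch.optim.Adam(list(backbone.parameters()) + list(aux_head.parameters()), lr=1e-3)
+ce = nn.CrossEntropyLoss()
+
+img = torch.rand(1, 3, 64, 64)           # stand-in for a demosaiced training crop
+ys, xs = torch.meshgrid(torch.arange(64), torch.arange(64), indexing="ij")
+labels = ((ys % 2) * 2 + (xs % 2)).unsqueeze(0)   # position of each pixel modulo (2, 2)
+
+for _ in range(100):                     # 1,500 iterations in the paper
+    opt.zero_grad()
+    loss = ce(aux_head(backbone(img)), labels)
+    loss.backward()
+    opt.step()
+```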
+
+# 4. Dataset
+
+Several datasets exist to benchmark image forgery detection, most notably Coverage [69], CoMoFoD [65], CASIA [17] and [13]. However, these datasets were created for generic copy-move detection. They do not allow for a demosaicing-based detection. Indeed, the images of those datasets either do not present any trace of demosaicing, or were all demosaiced with the same algorithm. They are therefore useless for benchmarking CFA-based forgery detection algorithms.
+
+The Dresden Image Database [28] provides 16,961 authentic images taken with 27 different cameras. Among them, 1,491 pictures taken with three different cameras, the Nikon D200, D70 and D70s, are provided unprocessed in a RAW format, which enabled us to perform demosaicing ourselves. Using these images, we created a new forgery detection database aimed specifically at the detection of forgeries by an analysis of CFA demosaicing inconsistencies.
+
+To create the database, we randomly cropped each of the 1,491 images into smaller $648 \times 648$ pictures. We demosaiced them with one of eleven publicly available demosaicing algorithms: bilinear interpolation, LMMSE [25], Hamilton-Adams [30], RI [37], MLRI [38], ARI [52], GBTF [55], contour stencils [26], adaptive inter-channel correlation [19], Gunturk [24] and self-similarity [9].
+
+We then split the resulting set of images into three equal parts. One third of the images were left unmodified. In the second third, we took half of the images and used them to splice a region into the other half; each pair of images had previously been demosaiced with the same algorithm. In the last third, we again picked half the images and used them to falsify the other half, but this time without requiring both images of a pair to be demosaiced with the same algorithm. Note that the source images for the forgeries are not part of the resulting dataset; therefore, there is the same number of authentic and forged images. At least half the forged images were created with a source image demosaiced with the same algorithm as the target.
+
+
+Figure 7: Examples of forged images in our database.
+
+
+Figure 8: Network's results. For each image, in this order: forged image, pixelwise predictions for each of the 4 grids (auxiliary network output), blockwise predictions for each of the 4 grids (full network output), detected forged blocks, ground truth. The mosaic of the image and the forgery is aligned for the two images in the last row, which explains why no detection can be made with our method.
+
+To forge an image, we cropped the source image inside a random mask and pasted it onto the forged image. The masks were created as areas surrounded by random Bezier curves. They were enforced to contain at least one $64 \times 64$ square block, and to cover less than $10\%$ of the image.
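+
+The splice itself then reduces to copying the masked pixels; a tiny sketch follows, with a rectangular stand-in for the random Bezier mask, which is hypothetical here.
+
+```python
+import numpy as np
+
+target = np.random.rand(648, 648, 3)    # image to be forged
+source = np.random.rand(648, 648, 3)    # image providing the forged region
+mask = np.zeros((648, 648), dtype=bool)
+mask[100:200, 300:420] = True           # stand-in for a random Bezier-curve mask
+forged = np.where(mask[..., None], source, target)
+```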
+
+Examples of forged images found in our database can be seen in Fig. 7.
+
+# 5. Experiments
+
+We trained our network with a small database of 19 images, downsampled four times to remove any demosaicing trace. Each image has a size of at most $774 \times 518$ pixels, and was demosaiced by three different algorithms: bilinear interpolation, LMMSE [25] and Hamilton-Adams [30]. We trained the first part of the network for 1,500 iterations and the second part for 500 iterations. Examples of detections can be seen in Fig. 8.
+
+We also adapted the pretrained network to the database by retraining it directly on it, for 1,000 iterations on the first part of the network and 500 on the second part. This retraining was done without knowledge of which images are forged or authentic.
+
+We compare our results with intermediate value mosaic detection [11], variance of colour difference [60], as well as with ManTraNet [72], a state-of-the-art forgery detection method that directly trains a neural network to detect various forgeries on standard datasets. Results are measured with the ROC curve on the number of detected forged blocks.
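+
+For reference, block-level ROC scores of this kind can be computed with scikit-learn; the variables below are placeholders, not our actual evaluation code.
+
+```python
+import numpy as np
+from sklearn.metrics import roc_auc_score, roc_curve
+
+gt = np.random.randint(0, 2, size=1000)   # 1 = forged block, 0 = authentic block
+scores = np.random.rand(1000)             # per-block forgery score from a detector
+fpr, tpr, _ = roc_curve(gt, scores)
+print("AUC:", roc_auc_score(gt, scores))
+```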
+
+By nature of demosaicing, a region forged by copy-move has a $\frac{1}{4}$ probability of having its mosaic aligned with the main one, in which case it cannot be detected by its CFA position. In our database, aligned forgeries account for $26.7\%$ of the total number of blocks. The results of our algorithms on the whole dataset are shown in Fig. 9b. Such results are closer to what could be detected in practical applications. However, because forgeries with an aligned mosaic are not detectable by mosaic detection algorithms, we also present results with a modified ground truth, in which we consider a block as forged only if its mosaic differs from that of the original image. These scores are thus given relative to what could theoretically be detected with perfect knowledge of the mosaics. Results under this definition of the ground truth can be seen in Fig. 9a.
+
+The database features three algorithms that were also used for pretraining the network: bilinear interpolation, LMMSE [25] and Hamilton-Adams demosaicing [30]. In order to ensure fairness in the comparison, we remove all images demosaiced with, or containing a forged region demosaiced with, one of these three algorithms. The results are presented in Fig. 9c. We can see that the results are similar to those on the whole database, which shows that the network generalised well to new algorithms.
+
+Finally, we test the robustness of our models to JPEG compression by compressing all the images at a quality of 95. The results are presented in Fig. 9d. [60] does no better than random guessing, with an AUC score of 0.52 in the global evaluation and 0.49 in the local evaluation, and both [11] and our pretrained network do little better. On the other hand, the adaptive network, by adapting directly to the database and thus learning to detect the CFA position over JPEG artefacts, was able to perform much better.
+
+# 6. Discussion
+
+We have shown that a small convolutional neural network can be used to accurately detect and interpret CFA artefacts, and that these artefacts can subsequently be used to detect forgeries in images. Even without new training, this network adapts well to images demosaiced with unseen and more complex algorithms than those used during training. Our neural network is small and can process images almost as quickly as methods presented in the literature, while offering detections of superior quality.
+
+The forgeries in our database are very basic, since they are only made to evaluate CFA detection. Despite this, state-of-the-art generic methods such as ManTraNet yield detections that are little better than random, and worse than simple manual algorithms such as [11, 60]. This shows that detection methods that focus on specific artefacts, such as demosaicing detection, JPEG compression [53] or camera noise [15], still have a big role to play.
+
+
+(a) Only misaligned forgeries are considered.
+
+(b) All forgeries are considered.
+
+(c) Only misaligned forgeries, algorithms on which our network was pretrained are excluded.
+
+(d) Only misaligned forgeries, images are compressed at a JPEG quality of 95.
+
+Figure 9: ROC curves comparing the detections of our methods to ManTraNet [72], Intermediate Values (IV) detection [11] and Variance of Colour Difference (VCD) detection [60].
+
+Our network was trained on few images, which were not taken from the evaluation dataset, and with only three algorithms. This enabled us to show, in Fig. 9c, that we could get strong results even on images demosaiced with algorithms on which the network was not trained. In order to test and show its capacity to generalise to new images and algorithms, we only trained it with 19 images from a different dataset than the evaluation images, and three algorithms. A full instance of this network, trained with all known and available algorithms, would probably yield even better results.
+
+Unfortunately, the pretrained model is not sufficient to process mosaic artefacts in compressed images. This is to be expected, as JPEG compression erases high frequencies even at a high quality, which is also where demosaicing algorithms leave artefacts. However, adapting the network to the new compressed data by retraining it directly on the studied data enabled it to retrieve demosaicing traces over JPEG compression.
+
+We believe that our method performs well enough for use on demosaiced images without post-processing. The next step is to consider common post-processing effects, including but not limited to JPEG compression, added noise, or colour changes. Further work will study the robustness of the network to various post-processing setups, and try to improve the adaptation of the network to post-processed images.
+
+Adapting a pre-trained network to the testing data by retraining it on said data is of course something that must be done carefully. A network that is too big can easily overfit if too few samples are available, and end up performing worse than the pre-trained network. More experiments must thus be done to fully understand what can be achieved this way. In particular, two big questions arise: how much data is needed, and how similar does it need to be, to improve a neural network by retraining it on this data? More importantly, can we prevent the network from overfitting when it is retrained on small amounts of data?
+
+Since overfitting is the only reason why a network could worsen by trying to adapt to new data, it is likely that preventing or limiting it would make it possible to improve a network by retraining it on new data. We have shown that it is possible to drastically improve the performance of the network by retraining on the full database. Would it still be possible if we only had a few images, or, in the most extreme but also most frequent case, only one image? Preliminary experiments suggest that it would be, provided the image is big enough, but more work is necessary to confirm this.
+
+This is a difficult challenge; however, learning to fit a network to new unlabelled data could greatly help make it more robust for practical applications.
+
+# References
+
+[1] Irene Amerini, Lamberto Ballan, Roberto Caldelli, Alberto Del Bimbo, and Giuseppe Serra. A sift-based forensic method for copy-move attack detection and transformation recovery. IEEE Transactions on Information Forensics and Security, 6(3):1099-1110, 2011. 1
+[2] Q. Bammey, R. Grompone von Gioi, and J. Morel. Automatic detection of demosaicing image artifacts and its use in tampering detection. In 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), pages 424-429, April 2018. 2, 3
+[3] Jawadul H Bappy, Amit K Roy-Chowdhury, Jason Bunk, Lakshmanan Nataraj, and BS Manjunath. Exploiting spatial structure for localizing manipulated image regions. In Proceedings of the IEEE international conference on computer vision, pages 4970-4979, 2017. 1
+[4] Jawadul H Bappy, Cody Simons, Lakshmanan Nataraj, BS Manjunath, and Amit K Roy-Chowdhury. Hybrid lstm and encoder-decoder architecture for detection of image forgeries. IEEE Transactions on Image Processing, 2019. 1
+[5] Belhassen Bayar and Matthew C Stamm. Constrained convolutional neural networks: A new approach towards general purpose image manipulation detection. IEEE Transactions on Information Forensics and Security, 13(11):2691-2706, 2018. 1
+[6] Tiziano Bianchi, Alessia De Rosa, and Alessandro Piva. Improved dct coefficient analysis for forgery localization in JPEG images. In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2444-2447. IEEE, 2011. 1
+[7] Tiziano Bianchi and Alessandro Piva. Image forgery localization via block-grained analysis of JPEG artifacts. IEEE Transactions on Information Forensics and Security, 7(3):1003-1017, 2012. 1
+[8] Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Säckinger, and Roopak Shah. Signature verification using a "siamese" time delay neural network. In Advances in neural information processing systems, pages 737-744, 1994. 3
+[9] A. Buades, B. Coll, J. M. Morel, and C. Sbert. Self-similarity Driven Demosaicking. Image Processing On Line, 1:51-56, 2011. 6
+[10] Jason Bunk, Jawadul H Bappy, Tajuddin Manhar Mohammed, Lakshmanan Nataraj, Arjuna Flenner, BS Manjunath, Shivkumar Chandrasekaran, Amit K Roy-Chowdhury, and Lawrence Peterson. Detection and localization of image forgeries using resampling features and deep learning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 1881-1889. IEEE, 2017. 1
+[11] Chang-Hee Choi, Jung-Ho Choi, and Heung-Kyu Lee. Cfa pattern identification of digital cameras using intermediate value counting. In Proceedings of the Thirteenth ACM Multimedia Workshop on Multimedia and Security, MM&Sec '11, pages 21–26, New York, NY, USA, 2011. ACM. 3, 7, 8
+[12] F. Chollet. Xception: Deep learning with depthwise separable convolutions. In 2017 IEEE Conference on Computer
+
+Vision and Pattern Recognition (CVPR), pages 1800-1807, July 2017. 5
+[13] V. Christlein, C. Riess, J. Jordan, C. Riess, and E. Angelopoulou. An evaluation of popular copy-move forgery detection approaches. IEEE Transactions on Information Forensics and Security, 7(6):1841-1854, Dec 2012. 6
+[14] Davide Cozzolino, Giovanni Poggi, and Luisa Verdoliva. Splicebuster: A new blind image splicing detector. In 2015 IEEE International Workshop on Information Forensics and Security (WIFS), pages 1-6. IEEE, 2015. 1
+[15] Davide Cozzolino and Luisa Verdoliva. Noiseprint: a cnn-based camera model fingerprint. IEEE Transactions on Information Forensics and Security, 2019. 8
+[16] Christophe Destruel, Vincent Itier, Olivier Strauss, and William Puech. Color noise-based feature for splicing detection and localization. In 2018 IEEE 20th International Workshop on Multimedia Signal Processing (MMSP), pages 1-6. IEEE, 2018. 1
+[17] Jing Dong, Wei Wang, and Tieniu Tan. CASIA image tampering detection evaluation database. pages 422-426, July 2013. 6
+[18] Charles Dugas, Y. Bengio, François Bélisle, Claude Nadeau, and Rene Garcia. Incorporating second-order functional knowledge for better option pricing. pages 472-478, 01 2000. 5, 6
+[19] J. Duran and A. Buades. Self-similarity and spectral correlation adaptive algorithm for color demosaicking. IEEE TIP, 23(9):4031-4040, Sept 2014. 6
+[20] Wei Fan, Kai Wang, François Cayre, and Zhang Xiong. A variational approach to jpeg anti-forensics. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 3058-3062. IEEE, 2013. 1
+[21] Hany Farid. Exposing digital forgeries from JPEG ghosts. IEEE transactions on information forensics and security, 4(1):154-160, 2009. 1
+[22] Hany Farid. Photo Forensics. The MIT Press, 2016. 1
+[23] Anselmo Ferreira, Siovani C Felipussi, Carlos Alfaro, Pablo Fonseca, John E Vargas-Munoz, Jefersson A dos Santos, and Anderson Rocha. Behavior knowledge space-based fusion for copy-move forgery detection. IEEE Transactions on Image Processing, 25(10):4729-4742, 2016. 1
+[24] Pascal Getreuer. Gunturk-altunbasak-mersereau alternating projections image demosaicking. Image Processing on Line, 1:90–97, 2011. 6
+[25] Pascal Getreuer. Zhang-Wu directional LMMSE image demosaicking. Image Processing On Line, 1:117-126, 2011. 4, 6, 7
+[26] P. Getreuer. Image Demosaicking with Contour Stencils. IPOL, 2:22-34, 2012. 6
+[27] Aurobrata Ghosh, Zheng Zhong, Terrance E Boult, and Maneesh Singh. Spliceradar: A learned method for blind image forensics. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2019. 1
+[28] Thomas Gloe and Rainer Böhme. The ‘Dresden Image Database’ for benchmarking digital image forensics. In Proceedings of the 25th Symposium On Applied Computing (ACM SAC 2010), volume 2, pages 1585–1591, 2010. 2, 3, 6
+
+[29] Edgar González Fernández, Ana Sandoval Orozco, Luis García Villalba, and Julio Hernández-Castro. Digital image tamper detection technique based on spectrum analysis of cfa artifacts. Sensors, 18(9):2804, Aug 2018. 2, 3
+[30] John F Hamilton Jr and James E Adams Jr. Adaptive color plan interpolation in single sensor color electronic camera, May 13 1997. US Patent 5,629,734. 6, 7
+[31] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. CoRR, abs/1704.04861, 2017. 5
+[32] Minyoung Huh, Andrew Liu, Andrew Owens, and Alexei A. Efros. Fighting fake news: Image splice detection via learned self-consistency. In The European Conference on Computer Vision (ECCV), September 2018. 3
+[33] Chryssanthi Iakovidou, Markos Zampoglou, Symeon Papadopoulos, and Yiannis Kompatsiaris. Content-aware detection of jpeg grid inconsistencies for intuitive image forensics. Journal of Visual Communication and Image Representation, 54:155-170, 2018. 1
+[34] Daniel Cavalcanti Jeronymo, Yuri Cassio Campbell Borges, and Leandro dos Santos Coelho. Image forgery detection by semi-automatic wavelet soft-thresholding with error level analysis. Expert Systems with Applications, 85:348-356, 2017. 1
+[35] Thibaut Julliand, Vincent Nozick, and Hugues Talbot. Automated image splicing detection from noise estimation in raw images. 2015. 1
+[36] Yongzhen Ke, Qiang Zhang, Weidong Min, and Shuguang Zhang. Detecting image forgery based on noise estimation. International Journal of Multimedia and Ubiquitous Engineering, 9(1):325-336, 2014. 1
+[37] Daisuke Kiku, Yusuke Monno, Masayuki Tanaka, and Masatoshi Okutomi. Residual interpolation for color image demosaicking. In 2013 IEEE International Conference on Image Processing, pages 2304-2308. IEEE, 2013. 6
+[38] Daisuke Kiku, Yusuke Monno, Masayuki Tanaka, and Masatoshi Okutomi. Minimized-laplacian residual interpolation for color image demosaicking. In Digital Photography X, volume 9023, page 90230L. International Society for Optics and Photonics, 2014. 6
+[39] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 6
+[40] Matthias Kirchner. Efficient estimation of CFA pattern configuration in digital camera images. In *Media Forensics and Security*, 2010. 3
+[41] Paweł Korus. Digital image integrity—a survey of protection and verification techniques. Digital Signal Processing, 71:1-26, 2017. 1
+[42] Weihai Li, Yuan Yuan, and Nenghai Yu. Passive detection of doctored JPEG image via block artifact grid extraction. Signal Processing, 89(9):1821-1829, 2009. 1
+[43] Zhouchen Lin, Junfeng He, Xiaoou Tang, and Chi-Keung Tang. Fast, automatic and fine-grained tampered JPEG image detection via dct coefficient analysis. Pattern Recognition, 42(11):2492-2501, 2009. 1
+
+[44] Bo Liu and Chi-Man Pun. Splicing forgery exposure in digital image by detecting noise discrepancies. International Journal of Computer and Communication Engineering, 4(1):33, 2015. 1
+[45] Lu Liu, Yao Zhao, Rongrong Ni, and Qi Tian. Copy-move forgery localization using convolutional neural networks and cfa features. Int. J. Digit. Crime For., 10(4):140-155, Oct. 2018. 3
+[46] Andrew L. Maas. Rectifier nonlinearities improve neural network acoustic models. 2013. 4
+[47] Babak Mahdian and Stanislav Saic. Using noise inconsistencies for blind image forensics. Image and Vision Computing, 27(10):1497-1503, 2009. 1
+[48] Owen Mayer, Belhassen Bayar, and Matthew C Stamm. Learning unified deep-features for multiple forensic tasks. In Proceedings of the 6th ACM Workshop on Information Hiding and Multimedia Security, pages 79–84. ACM, 2018. 1
+[49] Owen Mayer and Matthew C Stamm. Learned forensic source similarity for unknown camera models. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2012-2016. IEEE, 2018. 1
+[50] O. Mayer and M. C. Stamm. Learned forensic source similarity for unknown camera models. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2012-2016, April 2018. 3
+[51] O. Mayer and M. C. Stamm. Forensic similarity for digital images. IEEE Transactions on Information Forensics and Security, 2019. 1
+[52] Yusuke Monno, Daisuke Kiku, Masayuki Tanaka, and Masatoshi Okutomi. Adaptive residual interpolation for color image demosaicking. In 2015 IEEE International Conference on Image Processing (ICIP), pages 3861-3865. IEEE, 2015. 6
+[53] T. Nikoukhah, J. Anger, T. Ehret, M. Colom, J.M. Morel, and R. Grompone von Gioi. Jpeg grid detection based on the number of dct zeros and its application to automatic and localized forgery detection. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2019. 8
+[54] Xunyu Pan, Xing Zhang, and Siwei Lyu. Exposing image splicing with inconsistent local noise variances. In 2012 IEEE International Conference on Computational Photography (ICCP), pages 1-10. IEEE, 2012. 1
+[55] Ibrahim Pekkucuksen and Yucel Altunbasak. Gradient based threshold free color filter array interpolation. In 2010 IEEE International Conference on Image Processing, pages 137-140. IEEE, 2010. 6
+[56] Tomás Pevny and Jessica Fridrich. Detection of double-compression in JPEG images for applications in steganography. Information Forensics and Security, IEEE Transactions on, 3(2):247-258, 2008. 1
+[57] A.C. Popescu and H. Farid. Exposing digital forgeries in color filter array interpolated images. Trans. Sig. Proc., 53(10):3948-3959, Oct. 2005. 2, 3
+[58] Alin C Popescu and Hany Farid. Statistical tools for digital forensics. In Information Hiding, pages 128-147. Springer, 2004. 1
+
+[59] M Ali Qureshi and M Deriche. A review on copy move image forgery detection techniques. In 2014 IEEE 11th International Multi-Conference on Systems, Signals & Devices (SSD14), pages 1-5. IEEE, 2014. 1
+[60] Hyun Jun Shin, Jong Ju Jeon, and Il Kyu Eom. Color filter array pattern identification using variance of color difference image. Journal of Electronic Imaging, 26(4):1 - 12, 2017. 3, 7, 8
+[61] Ewerton Silva, Tiago Carvalho, Anselmo Ferreira, and Anderson Rocha. Going deeper into copy-move forgery detection: Exploring image telltales via multi-scale analysis and voting processes. Journal of Visual Communication and Image Representation, 29(0):16 - 32, 2015. 1
+[62] Matthew C Stamm, Steven K Tjoa, W Sabrina Lin, and KJ Liu. Undetectable image tampering through JPEG compression anti-forensics. In Image Processing (ICIP), 2010 17th IEEE International Conference on, pages 2109–2112. IEEE, 2010. 1
+[63] Matthew Christopher Stamm, Steven K Tjoa, W Sabrina Lin, and KJ Ray Liu. Anti-forensics of jpeg compression. In Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on, pages 1694–1697. IEEE, 2010. 1
+[64] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015. 2, 5, 6
+[65] D. Tralic, I. Zupancic, S. Grgic, and M. Grgic. Comofod — new database for copy-move forgery detection. In Proceedings ELMAR-2013, pages 49–54, Sep. 2013. 6
+[66] Giuseppe Valenzise, Marco Tagliasacchi, and Stefano Tubaro. Revealing the traces of JPEG compression anti-forensics. Information Forensics and Security, IEEE Transactions on, 8(2):335-349, 2013. 1
+[67] Savita Walia and Mandeep Kaur. Forgery detection using noise inconsistency: A review. International Journal of Computer Science and Information Technologies, 5(6):7618-7622, 2014. 1
+[68] Nathalie Diane Wandji, Sun Xingming, and Moise Fah Kue. Detection of copy-move forgery in digital images based on dct. arXiv preprint arXiv:1308.5661, 2013. 1
+[69] Bihan Wen, Ye Zhu, Ramanathan Subramanian, Tian-Tsong Ng, Xuanjing Shen, and Stefan Winkler. Coverage – a novel database for copy-move forgery detection. In IEEE International Conference on Image processing (ICIP), pages 161–165, 2016. 6
+[70] Bihan Wen, Ye Zhu, Ramanathan Subramanian, Tian-Tsong Ng, Xuanjing Shen, and Stefan Winkler. Coverage—a novel database for copy-move forgery detection. In 2016 IEEE International Conference on Image Processing (ICIP), pages 161-165. IEEE, 2016. 1
+[71] Yue Wu, Wael Abd-Almageed, and Prem Natarajan. Buster-net: Detecting copy-move image forgery with source/target localization. In Proceedings of the European Conference on Computer Vision (ECCV), pages 168-184, 2018. 1
+[72] Yue Wu, Wael AbdAlmageed, and Premkumar Natarajan. Mantra-net: Manipulation tracing network for detection and localization of image forgeries with anomalous features. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019. 7, 8
+[73] Heng Yao, Shuozhong Wang, Xinpeng Zhang, Chuan Qin, and Jinwei Wang. Detecting image splicing based on noise level inconsistency. Multimedia Tools and Applications, 76(10):12457-12479, 2017. 1
+[74] Yue Wu, Wael AbdAlmageed, and Premkumar Natarajan. Mantra-net: Manipulation tracing network for detection and localization of image forgeries with anomalous features. 2019. 1
+[75] Markos Zampoglou, Symeon Papadopoulos, Yiannis Kompatsiaris, Ruben Bouwmeester, and Jochen Spangenberg. Web and social media image forensics for news professionals. In SMN@ ICWSM, 2016. 1
+[76] Hui Zeng, Yifeng Zhan, Xiangui Kang, and Xiaodan Lin. Image splicing localization using pca-based noise level estimation. Multimedia Tools and Applications, 76(4):4783-4799, 2017. 1
\ No newline at end of file
diff --git a/anadaptiveneuralnetworkforunsupervisedmosaicconsistencyanalysisinimageforensics/images.zip b/anadaptiveneuralnetworkforunsupervisedmosaicconsistencyanalysisinimageforensics/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..9e659af00680a27c7d979a30704532e0a7b92b46
--- /dev/null
+++ b/anadaptiveneuralnetworkforunsupervisedmosaicconsistencyanalysisinimageforensics/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c77dfc0e14d62362c042d752e847a531e9000f3c23c20625576a1ff6cb860bd3
+size 312781
diff --git a/anadaptiveneuralnetworkforunsupervisedmosaicconsistencyanalysisinimageforensics/layout.json b/anadaptiveneuralnetworkforunsupervisedmosaicconsistencyanalysisinimageforensics/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..dc6c51ea11ce945447b0be65f7647807da571e93
--- /dev/null
+++ b/anadaptiveneuralnetworkforunsupervisedmosaicconsistencyanalysisinimageforensics/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:59c80b43813b2af9b6cf8c91d5fed9481780d6ba641a9ecb80dd60226ae5a95e
+size 377088
diff --git a/analyzingandimprovingtheimagequalityofstylegan/311d642e-3b7e-407a-b1f6-0151d7ae7553_content_list.json b/analyzingandimprovingtheimagequalityofstylegan/311d642e-3b7e-407a-b1f6-0151d7ae7553_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..5ddd39e15354804fe2ddf8ebb1dad49a6e0e94c5
--- /dev/null
+++ b/analyzingandimprovingtheimagequalityofstylegan/311d642e-3b7e-407a-b1f6-0151d7ae7553_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d9e28997611605935577b885a92470cd5e32f4838108076f374818cbc58fde2d
+size 79386
diff --git a/analyzingandimprovingtheimagequalityofstylegan/311d642e-3b7e-407a-b1f6-0151d7ae7553_model.json b/analyzingandimprovingtheimagequalityofstylegan/311d642e-3b7e-407a-b1f6-0151d7ae7553_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..6cc16842d07a4dd7075c819c7f139c2867eb35c2
--- /dev/null
+++ b/analyzingandimprovingtheimagequalityofstylegan/311d642e-3b7e-407a-b1f6-0151d7ae7553_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:04f6c1eb978e70d88f3f3a899f49e08d169a7c1d6d73254fe778189451f6d611
+size 97513
diff --git a/analyzingandimprovingtheimagequalityofstylegan/311d642e-3b7e-407a-b1f6-0151d7ae7553_origin.pdf b/analyzingandimprovingtheimagequalityofstylegan/311d642e-3b7e-407a-b1f6-0151d7ae7553_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..642373587f54ef62c6511a02abc24aa5da986473
--- /dev/null
+++ b/analyzingandimprovingtheimagequalityofstylegan/311d642e-3b7e-407a-b1f6-0151d7ae7553_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0be643dabfa94d5e2c210239c9e3bc888b70fc82a7ee041ceef32e89f409c3d1
+size 3141887
diff --git a/analyzingandimprovingtheimagequalityofstylegan/full.md b/analyzingandimprovingtheimagequalityofstylegan/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f66ad64ed29481e541cf57a928b13bd5b16e6e9e
--- /dev/null
+++ b/analyzingandimprovingtheimagequalityofstylegan/full.md
@@ -0,0 +1,330 @@
+# Analyzing and Improving the Image Quality of StyleGAN
+
+Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, Timo Aila
+
+NVIDIA (Jaakko Lehtinen: NVIDIA and Aalto University)
+
+# Abstract
+
+The style-based GAN architecture (StyleGAN) yields state-of-the-art results in data-driven unconditional generative image modeling. We expose and analyze several of its characteristic artifacts, and propose changes in both model architecture and training methods to address them. In particular, we redesign the generator normalization, revisit progressive growing, and regularize the generator to encourage good conditioning in the mapping from latent codes to images. In addition to improving image quality, this path length regularizer yields the additional benefit that the generator becomes significantly easier to invert. This makes it possible to reliably attribute a generated image to a particular network. We furthermore visualize how well the generator utilizes its output resolution, and identify a capacity problem, motivating us to train larger models for additional quality improvements. Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics as well as perceived image quality.
+
+# 1. Introduction
+
+The resolution and quality of images produced by generative methods, especially generative adversarial networks (GAN) [13], are improving rapidly [20, 26, 4]. The current state-of-the-art method for high-resolution image synthesis is StyleGAN [21], which has been shown to work reliably on a variety of datasets. Our work focuses on fixing its characteristic artifacts and improving the result quality further.
+
+The distinguishing feature of StyleGAN [21] is its unconventional generator architecture. Instead of feeding the input latent code $\mathbf{z} \in \mathcal{Z}$ only to the beginning of a network, the mapping network $f$ first transforms it to an intermediate latent code $\mathbf{w} \in \mathcal{W}$ . Affine transforms then produce styles that control the layers of the synthesis network $g$ via adaptive instance normalization (AdaIN) [18, 8, 11, 7]. Additionally, stochastic variation is facilitated by providing
+
+additional random noise maps to the synthesis network. It has been demonstrated [21, 33] that this design allows the intermediate latent space $\mathcal{W}$ to be much less entangled than the input latent space $\mathcal{Z}$ . In this paper, we focus all analysis solely on $\mathcal{W}$ , as it is the relevant latent space from the synthesis network's point of view.
+
+Many observers have noticed characteristic artifacts in images generated by StyleGAN [3]. We identify two causes for these artifacts, and describe changes in architecture and training methods that eliminate them. First, we investigate the origin of common blob-like artifacts, and find that the generator creates them to circumvent a design flaw in its architecture. In Section 2, we redesign the normalization used in the generator, which removes the artifacts. Second, we analyze artifacts related to progressive growing [20] that has been highly successful in stabilizing high-resolution GAN training. We propose an alternative design that achieves the same goal—training starts by focusing on low-resolution images and then progressively shifts focus to higher and higher resolutions—without changing the network topology during training. This new design also allows us to reason about the effective resolution of the generated images, which turns out to be lower than expected, motivating a capacity increase (Section 4).
+
+Quantitative analysis of the quality of images produced using generative methods continues to be a challenging topic. Fréchet inception distance (FID) [17] measures differences in the density of two distributions in the high-dimensional feature space of an InceptionV3 classifier [34]. Precision and Recall (P&R) [31, 22] provide additional visibility by explicitly quantifying the percentage of generated images that are similar to training data and the percentage of training data that can be generated, respectively. We use these metrics to quantify the improvements.
+
+Both FID and P&R are based on classifier networks that have recently been shown to focus on textures rather than shapes [10], and consequently, the metrics do not accurately capture all aspects of image quality. We observe that the perceptual path length (PPL) metric [21], originally introduced as a method for estimating the quality of latent space interpolations, correlates with consistency and stability of shapes. Based on this, we regularize the synthesis network to favor smooth mappings (Section 3) and achieve a clear improvement in quality. To counter its computational expense, we also propose executing all regularizations less frequently, observing that this can be done without compromising effectiveness.
+
+
+Figure 1. Instance normalization causes water droplet-like artifacts in StyleGAN images. These are not always obvious in the generated images, but if we look at the activations inside the generator network, the problem is always there, in all feature maps starting from the $64 \times 64$ resolution. It is a systemic problem that plagues all StyleGAN images.
+
+Finally, we find that projection of images to the latent space $\mathcal{W}$ works significantly better with the new, path-length regularized StyleGAN2 generator than with the original StyleGAN. This makes it easier to attribute a generated image to its source (Section 5).
+
+Our implementation and trained models are available at https://github.com/NVlabs/stylegan2
+
+# 2. Removing normalization artifacts
+
+We begin by observing that most images generated by StyleGAN exhibit characteristic blob-shaped artifacts that resemble water droplets. As shown in Figure 1, even when the droplet may not be obvious in the final image, it is present in the intermediate feature maps of the generator. The anomaly starts to appear around $64 \times 64$ resolution, is present in all feature maps, and becomes progressively stronger at higher resolutions. The existence of such a consistent artifact is puzzling, as the discriminator should be able to detect it.
+
+We pinpoint the problem to the AdaIN operation that normalizes the mean and variance of each feature map separately, thereby potentially destroying any information found in the magnitudes of the features relative to each other. We hypothesize that the droplet artifact is a result of the generator intentionally sneaking signal strength information past instance normalization: by creating a strong, localized spike that dominates the statistics, the generator can effectively scale the signal as it likes elsewhere. Our hypothesis is supported by the finding that when the normalization step is removed from the generator, as detailed below, the droplet artifacts disappear completely.
+
+# 2.1. Generator architecture revisited
+
+We will first revise several details of the StyleGAN generator to better facilitate our redesigned normalization. These changes have either a neutral or small positive effect on their own in terms of quality metrics.
+
+Figure 2a shows the original StyleGAN synthesis network $g$ [21], and in Figure 2b we expand the diagram to full detail by showing the weights and biases and breaking the AdaIN operation into its two constituent parts: normalization and modulation. This allows us to re-draw the conceptual gray boxes so that each box indicates the part of the network where one style is active (i.e., "style block"). Interestingly, the original StyleGAN applies bias and noise within the style block, causing their relative impact to be inversely proportional to the current style's magnitudes. We observe that more predictable results are obtained by moving these operations outside the style block, where they operate on normalized data. Furthermore, we notice that after this change it is sufficient for the normalization and modulation to operate on the standard deviation alone (i.e., the mean is not needed). The application of bias, noise, and normalization to the constant input can also be safely removed without observable drawbacks. This variant is shown in Figure 2c, and serves as a starting point for our redesigned normalization.
+
+# 2.2. Instance normalization revisited
+
+One of the main strengths of StyleGAN is the ability to control the generated images via style mixing, i.e., by feeding a different latent $\mathbf{w}$ to different layers at inference time. In practice, style modulation may amplify certain feature maps by an order of magnitude or more. For style mixing to work, we must explicitly counteract this amplification on a per-sample basis—otherwise the subsequent layers would not be able to operate on the data in a meaningful way.
+
+If we were willing to sacrifice scale-specific controls (see video), we could simply remove the normalization, thus removing the artifacts and also improving FID slightly [22]. We will now propose a better alternative that removes the artifacts while retaining full controllability. The main idea is to base normalization on the expected statistics of the incoming feature maps, but without explicit forcing.
+
+
+Figure 2. We redesign the architecture of the StyleGAN synthesis network (panels: (a) StyleGAN, (b) StyleGAN detailed, (c) revised architecture, (d) weight demodulation). (a) The original StyleGAN, where $\boxed{\mathbf{A}}$ denotes a learned affine transform from $\mathcal{W}$ that produces a style and $\boxed{\mathbf{B}}$ is a noise broadcast operation. (b) The same diagram with full detail. Here we have broken the AdaIN into explicit normalization followed by modulation, both operating on the mean and standard deviation per feature map. We have also annotated the learned weights $(w)$ , biases $(b)$ , and constant input $(c)$ , and redrawn the gray boxes so that one style is active per box. The activation function (leaky ReLU) is always applied right after adding the bias. (c) We make several changes to the original architecture that are justified in the main text. We remove some redundant operations at the beginning, move the addition of $b$ and $\boxed{\mathbf{B}}$ to be outside the active area of a style, and adjust only the standard deviation per feature map. (d) The revised architecture enables us to replace instance normalization with a "demodulation" operation, which we apply to the weights associated with each convolution layer.
+
+
+Recall that a style block in Figure 2c consists of modulation, convolution, and normalization. Let us start by considering the effect of a modulation followed by a convolution. The modulation scales each input feature map of the convolution based on the incoming style, which can alternatively be implemented by scaling the convolution weights:
+
+$$
+w _ {i j k} ^ {\prime} = s _ {i} \cdot w _ {i j k}, \tag {1}
+$$
+
+where $w$ and $w'$ are the original and modulated weights, respectively, $s_i$ is the scale corresponding to the $i$ th input feature map, and $j$ and $k$ enumerate the output feature maps and spatial footprint of the convolution, respectively.
+
+Now, the purpose of instance normalization is to essentially remove the effect of $s$ from the statistics of the convolution's output feature maps. We observe that this goal can be achieved more directly. Let us assume that the input activations are i.i.d. random variables with unit standard deviation. After modulation and convolution, the output activations have standard deviation of
+
+$$
+\sigma_{j} = \sqrt{\sum_{i,k} {w_{ijk}^{\prime}}^{2}}, \tag{2}
+$$
+
+i.e., the outputs are scaled by the $L_{2}$ norm of the corresponding weights. The subsequent normalization aims to restore the outputs back to unit standard deviation. Based on Equation 2, this is achieved if we scale ("demodulate")
+
+each output feature map $j$ by $1 / \sigma_{j}$ . Alternatively, we can again bake this into the convolution weights:
+
+$$
+w _ {i j k} ^ {\prime \prime} = w _ {i j k} ^ {\prime} / \sqrt {\sum_ {i , k} w _ {i j k} ^ {\prime 2} + \epsilon}, \tag {3}
+$$
+
+where $\epsilon$ is a small constant to avoid numerical issues.
+
+We have now baked the entire style block into a single convolution layer whose weights are adjusted based on $s$ using Equations 1 and 3 (Figure 2d). Compared to instance normalization, our demodulation technique is weaker because it is based on statistical assumptions about the signal instead of the actual contents of the feature maps. Similar statistical analysis has been extensively used in modern network initializers [12, 16], but we are not aware of it being previously used as a replacement for data-dependent normalization. Our demodulation is also related to weight normalization [32] that performs the same calculation as a part of reparameterizing the weight tensor. Prior work has identified weight normalization as beneficial in the context of GAN training [38].
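+
+As a concrete illustration of Equations 1 and 3, the following is a minimal sketch (not the official implementation) that folds modulation and demodulation into the weights of a single convolution for one sample; the function and variable names are placeholders, and the check at the end only holds under the i.i.d. unit-variance assumption stated above.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def modulated_conv2d(x, weight, style, eps=1e-8, demodulate=True):
+    """Sketch of Eqs. 1 and 3 for a single sample.
+    x:      [1, in_ch, H, W] input feature maps
+    weight: [out_ch, in_ch, k, k] convolution weights w_ijk
+    style:  [in_ch] per-input-channel scales s_i produced by the affine transform A
+    """
+    # Eq. 1: modulate, i.e. scale the weights of each input feature map by s_i.
+    w = weight * style.view(1, -1, 1, 1)
+    if demodulate:
+        # Eq. 3: demodulate so that each output feature map has (expected) unit std.
+        sigma = torch.sqrt((w ** 2).sum(dim=[1, 2, 3]) + eps)  # one sigma_j per output map
+        w = w / sigma.view(-1, 1, 1, 1)
+    return F.conv2d(x, w, padding=weight.shape[-1] // 2)
+
+x = torch.randn(1, 8, 16, 16)          # i.i.d. inputs with unit standard deviation
+y = modulated_conv2d(x, torch.randn(16, 8, 3, 3) * 0.1, torch.rand(8) + 0.5)
+print(y.std())                          # roughly 1 under the stated assumption
+```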
+
+Our new design removes the characteristic artifacts (Figure 3) while retaining full controllability, as demonstrated in the accompanying video. FID remains largely unaffected (Table 1, rows A, B), but there is a notable shift from precision to recall. We argue that this is generally desirable, since recall can be traded into precision via truncation, whereas
+
+| Configuration | FFHQ 1024×1024 FID ↓ | FFHQ Path length ↓ | FFHQ Precision ↑ | FFHQ Recall ↑ | LSUN Car 512×384 FID ↓ | LSUN Car Path length ↓ | LSUN Car Precision ↑ | LSUN Car Recall ↑ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| A Baseline StyleGAN [21] | 4.40 | 212.1 | 0.721 | 0.399 | 3.27 | 1484.5 | 0.701 | 0.435 |
+| B + Weight demodulation | 4.39 | 175.4 | 0.702 | 0.425 | 3.04 | 862.4 | 0.685 | 0.488 |
+| C + Lazy regularization | 4.38 | 158.0 | 0.719 | 0.427 | 2.83 | 981.6 | 0.688 | 0.493 |
+| D + Path length regularization | 4.34 | 122.5 | 0.715 | 0.418 | 3.43 | 651.2 | 0.697 | 0.452 |
+| E + No growing, new G & D arch. | 3.31 | 124.5 | 0.705 | 0.449 | 3.19 | 471.2 | 0.690 | 0.454 |
+| F + Large networks (StyleGAN2) | 2.84 | 145.0 | 0.689 | 0.492 | 2.32 | 415.5 | 0.678 | 0.514 |
+| Config A with large networks | 3.98 | 199.2 | 0.716 | 0.422 | - | - | - | - |
+
+Table 1. Main results. For each training run, we selected the training snapshot with the lowest FID. We computed each metric 10 times with different random seeds and report their average. Path length corresponds to the PPL metric, computed based on path endpoints in $\mathcal{W}$ [21], without the central crop used by Karras et al. [21]. The FFHQ dataset contains $70\mathrm{k}$ images, and the discriminator saw $25\mathrm{M}$ images during training. For LSUN CAR the numbers were $893\mathrm{k}$ and $57\mathrm{M}$ . $\uparrow$ indicates that higher is better, and $\downarrow$ that lower is better.
+
+
+Figure 3. Replacing normalization with demodulation removes the characteristic artifacts from images and activations.
+
+the opposite is not true [22]. In practice our design can be implemented efficiently using grouped convolutions, as detailed in Appendix B. To avoid having to account for the activation function in Equation 3, we scale our activation functions so that they retain the expected signal variance.
+
+# 3. Image quality and generator smoothness
+
+While GAN metrics such as FID or Precision and Recall (P&R) successfully capture many aspects of the generator, they continue to have somewhat of a blind spot for image quality. For an example, refer to Figures 3 and 4 in the Supplement that contrast generators with identical FID and P&R scores but markedly different overall quality. $^2$
+
+
+Figure 4. Connection between perceptual path length and image quality using baseline StyleGAN (config A) with LSUN CAT. (a) Random examples with low PPL ( $\leq 10^{\text{th}}$ percentile). (b) Examples with high PPL ( $\geq 90^{\text{th}}$ percentile). There is a clear correlation between PPL scores and semantic consistency of the images.
+
+Figure 5. Distribution of PPL scores of individual images generated with LSUN CAT: (a) baseline StyleGAN (config A) $(\mathrm{FID} = 8.53, \mathrm{PPL} = 924)$ , with the percentile ranges corresponding to Figure 4 highlighted in orange; (b) StyleGAN2 (config F), which improves the PPL distribution considerably (showing a snapshot with the same $\mathrm{FID} = 8.53$ , $\mathrm{PPL} = 387$ ).
+
+
+We observe a correlation between perceived image quality and perceptual path length (PPL) [21], a metric that was originally introduced for quantifying the smoothness of the mapping from a latent space to the output image by measuring average LPIPS distances [44] between generated images under small perturbations in latent space. Again consulting Figures 3 and 4 in the Supplement, a smaller PPL (smoother generator mapping) appears to correlate with higher overall image quality, whereas other metrics are blind to the change. Figure 4 examines this correlation more closely through per-image PPL scores on LSUN CAT, computed by sampling the latent space around $\mathbf{w} \sim f(\mathbf{z})$ . Low scores are indeed indicative of high-quality images, and vice versa. Figure 5a shows the corresponding histogram and reveals the long tail of the distribution. The overall PPL for the model is simply the expected value of these per-image PPL scores. We always compute PPL for the entire image, as opposed to Karras et al. [21] who use a smaller central crop.
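+
+For illustration, a rough sketch of such a per-image PPL estimate is given below; `mapping` and `synthesis` are hypothetical stand-ins for $f$ and $g$, the LPIPS distance comes from the `lpips` package, and the real metric uses many more samples and carefully chosen path endpoints.
+
+```python
+import torch
+import lpips  # LPIPS distance of Zhang et al. [44]
+
+def per_image_ppl(mapping, synthesis, z, epsilon=1e-4, n=8):
+    """Average LPIPS distance between images generated from w and from a
+    slightly perturbed w, scaled by 1/epsilon^2 (a rough per-image PPL)."""
+    dist = lpips.LPIPS(net='vgg')
+    w = mapping(z)                      # w ~ f(z)
+    img0 = synthesis(w)
+    scores = []
+    for _ in range(n):
+        direction = torch.randn_like(w)
+        direction = direction / direction.norm()
+        img1 = synthesis(w + epsilon * direction)
+        scores.append(dist(img0, img1).item() / epsilon ** 2)
+    return sum(scores) / len(scores)
+```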
+
+It is not immediately obvious why a low PPL should correlate with image quality. We hypothesize that during training, as the discriminator penalizes broken images, the most direct way for the generator to improve is to effectively stretch the region of latent space that yields good images. This would lead to the low-quality images being squeezed into small latent space regions of rapid change. While this improves the average output quality in the short term, the accumulating distortions impair the training dynamics and consequently the final image quality.
+
+Clearly, we cannot simply encourage minimal PPL since that would guide the generator toward a degenerate solution with zero recall. Instead, we will describe a new regularizer that aims for a smoother generator mapping without this drawback. As the resulting regularization term is somewhat expensive to compute, we first describe a general optimization that applies to any regularization technique.
+
+# 3.1. Lazy regularization
+
+Typically the main loss function (e.g., logistic loss [13]) and regularization terms (e.g., $R_{1}$ [25]) are written as a single expression and are thus optimized simultaneously. We observe that the regularization terms can be computed less frequently than the main loss function, thus greatly diminishing their computational cost and the overall memory usage. Table 1, row C shows that no harm is caused when $R_{1}$ regularization is performed only once every 16 minibatches, and we adopt the same strategy for our new regularizer as well. Appendix B gives implementation details.
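+
+As a minimal sketch of the idea, the snippet below applies the $R_{1}$ penalty only on every 16th minibatch; the discriminator `D` and the image tensors are placeholders, and scaling the penalty by the interval (to keep its average strength roughly unchanged) is one common way to implement such a schedule, not necessarily the exact recipe of Appendix B.
+
+```python
+import torch
+import torch.nn.functional as F
+
+R1_GAMMA = 10.0
+R1_INTERVAL = 16  # evaluate the regularizer only once every 16 minibatches
+
+def d_loss_step(D, real_images, fake_images, step):
+    # Main logistic loss, computed on every minibatch.
+    loss = F.softplus(D(fake_images)).mean() + F.softplus(-D(real_images)).mean()
+    # Lazy R1: add the gradient penalty only on every R1_INTERVAL-th step,
+    # scaled up by the interval so its overall strength stays roughly the same.
+    if step % R1_INTERVAL == 0:
+        real_images = real_images.detach().requires_grad_(True)
+        grads, = torch.autograd.grad(D(real_images).sum(), real_images, create_graph=True)
+        r1 = grads.pow(2).sum(dim=[1, 2, 3]).mean()
+        loss = loss + (R1_GAMMA / 2) * r1 * R1_INTERVAL
+    return loss
+```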
+
+# 3.2. Path length regularization
+
+We would like to encourage that a fixed-size step in $\mathcal{W}$ results in a non-zero, fixed-magnitude change in the image. We can measure the deviation from this ideal empirically by stepping into random directions in the image space and observing the corresponding $\mathbf{w}$ gradients. These gradients should have close to an equal length regardless of $\mathbf{w}$ or the image-space direction, indicating that the mapping from the latent space to image space is well-conditioned [28].
+
+At a single $\mathbf{w} \in \mathcal{W}$ , the local metric scaling properties of the generator mapping $g(\mathbf{w}) : \mathcal{W} \mapsto \mathcal{V}$ are captured by the Jacobian matrix $\mathbf{J}_{\mathbf{w}} = \frac{\partial g(\mathbf{w})}{\partial \mathbf{w}}$ . Motivated by the desire to preserve the expected lengths of vectors regardless
+
+of the direction, we formulate our regularizer as
+
+$$
+\mathbb {E} _ {\mathbf {w}, \mathbf {y} \sim \mathcal {N} (0, \mathbf {I})} \left(\left\| \mathbf {J} _ {\mathbf {w}} ^ {T} \mathbf {y} \right\| _ {2} - a\right) ^ {2}, \tag {4}
+$$
+
+where $\mathbf{y}$ are random images with normally distributed pixel intensities, and $\mathbf{w} \sim f(\mathbf{z})$ , where $\mathbf{z}$ are normally distributed. We show in Appendix C that, in high dimensions, this prior is minimized when $\mathbf{J}_{\mathbf{w}}$ is orthogonal (up to a global scale) at any $\mathbf{w}$ . An orthogonal matrix preserves lengths and introduces no squeezing along any dimension.
+
+To avoid explicit computation of the Jacobian matrix, we use the identity $\mathbf{J}_{\mathbf{w}}^{T}\mathbf{y} = \nabla_{\mathbf{w}}(g(\mathbf{w})\cdot \mathbf{y})$ , which is efficiently computable using standard backpropagation [5]. The constant $a$ is set dynamically during optimization as the long-running exponential moving average of the lengths $\| \mathbf{J}_{\mathbf{w}}^{T}\mathbf{y}\|_{2}$ , allowing the optimization to find a suitable global scale by itself.
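+
+The sketch below shows one way Equation 4 can be evaluated with standard autograd; `g` is a placeholder synthesis network, the latents `w` must carry gradients, and scaling $\mathbf{y}$ by the square root of the pixel count is an assumption made here to keep the target scale independent of resolution.
+
+```python
+import torch
+
+pl_mean = torch.zeros([])   # exponential moving average of the lengths, i.e. the constant a
+PL_DECAY = 0.01
+
+def path_length_penalty(g, w):
+    """Sketch of Eq. 4; w: [batch, w_dim] latents with requires_grad=True."""
+    global pl_mean
+    images = g(w)                                     # [batch, C, H, W]
+    y = torch.randn_like(images) / (images.shape[2] * images.shape[3]) ** 0.5
+    # J_w^T y = grad_w (g(w) . y), computed with standard backpropagation.
+    grads, = torch.autograd.grad((images * y).sum(), w, create_graph=True)
+    lengths = grads.norm(dim=1)                       # ||J_w^T y||_2 per sample
+    pl_mean = pl_mean * (1 - PL_DECAY) + lengths.mean().detach() * PL_DECAY
+    return ((lengths - pl_mean) ** 2).mean()
+```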
+
+Our regularizer is closely related to the Jacobian clamping regularizer presented by Odena et al. [28]. Practical differences include that we compute the products $\mathbf{J}_{\mathbf{w}}^{T}\mathbf{y}$ analytically whereas they use finite differences for estimating $\mathbf{J}_{\mathbf{w}}\delta$ with $\mathcal{Z} \ni \delta \sim \mathcal{N}(0,\mathbf{I})$ . It should be noted that spectral normalization [26] of the generator [40] only constrains the largest singular value, posing no constraints on the others and hence not necessarily leading to better conditioning. We find that enabling spectral normalization in addition to our contributions—or instead of them—invariably compromises FID, as detailed in Appendix E.
+
+In practice, we notice that path length regularization leads to more reliable and consistently behaving models, making architecture exploration easier. We also observe that the smoother generator is significantly easier to invert (Section 5). Figure 5b shows that path length regularization clearly tightens the distribution of per-image PPL scores, without pushing the mode to zero. However, Table 1, row D points toward a tradeoff between FID and PPL in datasets that are less structured than FFHQ.
+
+# 4. Progressive growing revisited
+
+Progressive growing [20] has been very successful in stabilizing high-resolution image synthesis, but it causes its own characteristic artifacts. The key issue is that the progressively grown generator appears to have a strong location preference for details; the accompanying video shows that when features like teeth or eyes should move smoothly over the image, they may instead remain stuck in place before jumping to the next preferred location. Figure 6 shows a related artifact. We believe the problem is that in progressive growing each resolution serves momentarily as the output resolution, forcing it to generate maximal frequency details, which then leads the trained network to have excessively high frequencies in the intermediate layers, compromising shift invariance [43]. Appendix A shows an example. These
+
+
+Figure 6. Progressive growing leads to "phase" artifacts. In this example the teeth do not follow the pose but stay aligned to the camera, as indicated by the blue line.
+
+
+Figure 7. Three generator (above the dashed line) and discriminator architectures. [Up] and [Down] denote bilinear up and down-sampling, respectively. In residual networks these also include $1 \times 1$ convolutions to adjust the number of feature maps. [tRGB] and [fRGB] convert between RGB and high-dimensional per-pixel data. Architectures used in config E and F are shown in green.
+
+issues prompt us to search for an alternative formulation that would retain the benefits of progressive growing without the drawbacks.
+
+# 4.1. Alternative network architectures
+
+While StyleGAN uses simple feedforward designs in the generator (synthesis network) and discriminator, there is a vast body of work dedicated to the study of better network architectures. Skip connections [29, 19], residual networks [15, 14, 26], and hierarchical methods [6, 41, 42] have proven highly successful also in the context of generative methods. As such, we decided to re-evaluate the network design of StyleGAN and search for an architecture that produces high-quality images without progressive growing.
+
+Figure 7a shows MSG-GAN [19], which connects the matching resolutions of the generator and discriminator using multiple skip connections. The MSG-GAN generator is modified to output a mipmap [37] instead of an image, and a similar representation is computed for each real image as well.
+
+| FFHQ | D original FID | D original PPL | D input skips FID | D input skips PPL | D residual FID | D residual PPL |
+| --- | --- | --- | --- | --- | --- | --- |
+| G original | 4.32 | 265 | 4.18 | 235 | 3.58 | 269 |
+| G output skips | 4.33 | 169 | 3.77 | 127 | 3.31 | 125 |
+| G residual | 4.35 | 203 | 3.96 | 229 | 3.79 | 243 |
+
+| LSUN Car | D original FID | D original PPL | D input skips FID | D input skips PPL | D residual FID | D residual PPL |
+| --- | --- | --- | --- | --- | --- | --- |
+| G original | 3.75 | 905 | 3.23 | 758 | 3.25 | 802 |
+| G output skips | 3.77 | 544 | 3.86 | 316 | 3.19 | 471 |
+| G residual | 3.93 | 981 | 3.40 | 667 | 2.66 | 645 |
+
+Table 2. Comparison of generator and discriminator architectures without progressive growing. The combination of generator with output skips and residual discriminator corresponds to configuration E in the main result table.
+
+In Figure 7b we simplify this design by upsampling and summing the contributions of RGB outputs corresponding to different resolutions. In the discriminator, we similarly provide the downsampled image to each resolution block of the discriminator. We use bilinear filtering in all up and downsampling operations. In Figure 7c we further modify the design to use residual connections. $^3$ This design is similar to LAPGAN [6] without the per-resolution discriminators employed by Denton et al.
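+
+A toy sketch of the output-skip idea of Figure 7b is shown below: every resolution owns a tRGB layer, and the RGB contributions are bilinearly upsampled and summed. The channel counts and block contents are placeholders, not the actual StyleGAN2 layers.
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class SkipGeneratorSketch(nn.Module):
+    """Output-skip sketch: sum upsampled per-resolution RGB contributions."""
+    def __init__(self, channels=(128, 64, 32)):   # e.g. 16x16 -> 32x32 -> 64x64
+        super().__init__()
+        self.blocks = nn.ModuleList(nn.Conv2d(cin, cout, 3, padding=1)
+                                    for cin, cout in zip(channels[:-1], channels[1:]))
+        self.trgbs = nn.ModuleList(nn.Conv2d(c, 3, 1) for c in channels)
+
+    def forward(self, x):
+        rgb = self.trgbs[0](x)
+        for block, trgb in zip(self.blocks, self.trgbs[1:]):
+            x = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)
+            x = F.leaky_relu(block(x), 0.2)
+            rgb = F.interpolate(rgb, scale_factor=2, mode='bilinear', align_corners=False) + trgb(x)
+        return rgb
+
+img = SkipGeneratorSketch()(torch.randn(1, 128, 16, 16))   # -> [1, 3, 64, 64]
+```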
+
+Table 2 compares three generator and three discriminator architectures: original feedforward networks as used in StyleGAN, skip connections, and residual networks, all trained without progressive growing. FID and PPL are provided for each of the 9 combinations. We can see two broad trends: skip connections in the generator drastically improve PPL in all configurations, and a residual discriminator network is clearly beneficial for FID. The latter is perhaps not surprising since the structure of the discriminator resembles that of a classifier, where residual architectures are known to be helpful. However, a residual architecture was harmful in the generator; the lone exception was FID in LSUN CAR when both networks were residual.
+
+For the rest of the paper we use a skip generator and a residual discriminator, without progressive growing. This corresponds to configuration E in Table 1, and it significantly improves FID and PPL.
+
+# 4.2. Resolution usage
+
+The key aspect of progressive growing, which we would like to preserve, is that the generator will initially focus on low-resolution features and then slowly shift its attention to finer details. The architectures in Figure 7 make it possible for the generator to first output low resolution images that are not affected by the higher-resolution layers in a significant way, and later shift the focus to the higher-resolution
+
+
+(a) StyleGAN-sized (config E)
+
+
+(b) Large networks (config F)
+Figure 8. Contribution of each resolution to the output of the generator as a function of training time. The vertical axis shows a breakdown of the relative standard deviations of different resolutions, and the horizontal axis corresponds to training progress, measured in millions of training images shown to the discriminator. We can see that in the beginning the network focuses on low-resolution images and progressively shifts its focus on larger resolutions as training progresses. In (a) the generator basically outputs a $512^2$ image with some minor sharpening for $1024^2$ , while in (b) the larger network focuses more on the high-resolution details.
+
+layers as the training proceeds. Since this is not enforced in any way, the generator will do it only if it is beneficial. To analyze the behavior in practice, we need to quantify how strongly the generator relies on particular resolutions over the course of training.
+
+Since the skip generator (Figure 7b) forms the image by explicitly summing RGB values from multiple resolutions, we can estimate the relative importance of the corresponding layers by measuring how much they contribute to the final image. In Figure 8a, we plot the standard deviation of the pixel values produced by each tRGB layer as a function of training time. We calculate the standard deviations over 1024 random samples of $\mathbf{w}$ and normalize the values so that they sum to $100\%$ .
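+
+A rough sketch of this measurement is given below; `synthesis_with_trgb_outputs` is a hypothetical callable that returns the per-resolution tRGB outputs for a given $\mathbf{w}$, and averaging per-sample standard deviations is a simplification of the procedure described above.
+
+```python
+import torch
+
+def resolution_contributions(synthesis_with_trgb_outputs, mapping, n=1024, z_dim=512):
+    """Standard deviation of the pixel values produced by each tRGB layer,
+    averaged over n random latents and normalized to sum to 100%."""
+    totals = None
+    with torch.no_grad():
+        for _ in range(n):
+            w = mapping(torch.randn(1, z_dim))
+            trgb_outputs = synthesis_with_trgb_outputs(w)   # list of per-resolution RGB tensors
+            stds = torch.stack([out.std() for out in trgb_outputs])
+            totals = stds if totals is None else totals + stds
+    totals = totals / n
+    return 100.0 * totals / totals.sum()
+```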
+
+At the start of training, we can see that the new skip generator behaves similarly to progressive growing—now achieved without changing the network topology. It would thus be reasonable to expect the highest resolution to dominate towards the end of the training. The plot, however, shows that this fails to happen in practice, which indicates that the generator may not be able to "fully utilize" the target resolution. To verify this, we inspected the generated images manually and noticed that they generally lack some of the pixel-level detail that is present in the training data—the images could be described as being sharpened versions of $512^{2}$ images instead of true $1024^{2}$ images.
+
+This leads us to hypothesize that there is a capacity problem in our networks, which we test by doubling the number of feature maps in the highest-resolution layers of both networks. This brings the behavior more in line with expectations.
+
+| Dataset | Resolution | StyleGAN (A) FID | StyleGAN (A) PPL | StyleGAN2 (F) FID | StyleGAN2 (F) PPL |
+| --- | --- | --- | --- | --- | --- |
+| LSUN CAR | 512×384 | 3.27 | 1485 | 2.32 | 416 |
+| LSUN CAT | 256×256 | 8.53 | 924 | 6.93 | 439 |
+| LSUN CHURCH | 256×256 | 4.21 | 742 | 3.86 | 342 |
+| LSUN HORSE | 256×256 | 3.83 | 1405 | 3.43 | 338 |
+
+Table 3. Improvement in LSUN datasets measured using FID and PPL. We trained CAR for 57M images, CAT for 88M, CHURCH for 48M, and HORSE for 100M images.
+
+Figure 8b shows a significant increase in the contribution of the highest-resolution layers, and Table 1, row F shows that FID and Recall improve markedly. The last row shows that baseline StyleGAN also benefits from additional capacity, but its quality remains far below StyleGAN2.
+
+Table 3 compares StyleGAN and StyleGAN2 in four LSUN categories, again showing clear improvements in FID and significant advances in PPL. It is possible that further increases in the size could provide additional benefits.
+
+# 5. Projection of images to latent space
+
+Inverting the synthesis network $g$ is an interesting problem that has many applications. Manipulating a given image in the latent feature space requires finding a matching latent code $\mathbf{w}$ for it first. Previous research [1, 9] suggests that instead of finding a common latent code $\mathbf{w}$ , the results improve if a separate $\mathbf{w}$ is chosen for each layer of the generator. The same approach was used in an early encoder implementation [27]. While extending the latent space in this fashion finds a closer match to a given image, it also enables projecting arbitrary images that should have no latent representation. Instead, we concentrate on finding latent codes in the original, unextended latent space, as these correspond to images that the generator could have produced.
+
+Our projection method differs from previous methods in two ways. First, we add ramped-down noise to the latent code during optimization in order to explore the latent space more comprehensively. Second, we also optimize the stochastic noise inputs of the StyleGAN generator, regularizing them to ensure they do not end up carrying coherent signal. The regularization is based on enforcing the autocorrelation coefficients of the noise maps to match those of unit Gaussian noise over multiple scales. Details of our projection method can be found in Appendix D.
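+
+A simplified sketch of such a noise regularizer is shown below: it penalizes the autocorrelation of each noise map at a one-pixel shift over multiple scales, pushing the maps towards the statistics of unit Gaussian noise. The exact weighting and schedule used by our projector may differ.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def noise_regularization(noise_maps):
+    """Penalize spatial autocorrelation of each noise map over multiple scales."""
+    loss = 0.0
+    for n in noise_maps:                              # each n: [1, 1, H, W]
+        n = (n - n.mean()) / (n.std() + 1e-8)
+        while n.shape[-1] >= 8:
+            # Autocorrelation at a one-pixel shift, horizontally and vertically.
+            loss = loss + (n * torch.roll(n, 1, dims=3)).mean() ** 2
+            loss = loss + (n * torch.roll(n, 1, dims=2)).mean() ** 2
+            n = F.avg_pool2d(n, 2)                    # move to the next (coarser) scale
+    return loss
+```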
+
+# 5.1. Attribution of generated images
+
+Detection of manipulated or generated images is a very important task. At present, classifier-based methods can quite reliably detect generated images, regardless of their exact origin [24, 39, 35, 45, 36]. However, given the rapid pace of progress in generative methods, this may not be a lasting situation. Besides general detection of fake images, we may also consider a more limited form of the problem:
+
+
+Figure 9. Example images and their projected and re-synthesized counterparts (panels: StyleGAN generated images, StyleGAN2 generated images, StyleGAN2 real images). For each configuration, the top row shows the target images and the bottom row shows the synthesis of the corresponding projected latent vector and noise inputs. With the baseline StyleGAN, projection often finds a reasonably close match for generated images, but especially the backgrounds differ from the originals. The images generated using StyleGAN2 can be projected almost perfectly back into generator inputs, while projected real images (from the training set) show clear differences to the originals, as expected. All tests were done using the same projection method and hyperparameters.
+
+Figure 10. LPIPS distance histograms between original and projected images for generated (blue) and real images (orange), shown for LSUN CAR and FFHQ with both StyleGAN and StyleGAN2. Despite the higher image quality of our improved generator, it is much easier to project the generated images into its latent space $\mathcal{W}$ . The same projection method was used in all cases.
+
+being able to attribute a fake image to its specific source [2]. With StyleGAN, this amounts to checking if there exists a $\mathbf{w} \in \mathcal{W}$ that re-synthesizes the image in question.
+
+We measure how well the projection succeeds by computing the LPIPS [44] distance between original and resynthesized image as $D_{\mathrm{LPIPS}}[\pmb{x}, g(\tilde{g}^{-1}(\pmb{x}))]$ , where $\pmb{x}$ is the image being analyzed and $\tilde{g}^{-1}$ denotes the approximate projection operation. Figure 10 shows histograms of these distances for LSUN CAR and FFHQ datasets using the original StyleGAN and StyleGAN2, and Figure 9 shows example projections. The images generated using StyleGAN2 can be projected into $\mathcal{W}$ so well that they can be almost unambiguously attributed to the generating network. However, with the original StyleGAN, even though it should technically be possible to find a matching latent code, it appears that the mapping from $\mathcal{W}$ to images is too complex for this to succeed reliably in practice. We find it encouraging that StyleGAN2 makes source attribution easier even though the image quality has improved significantly.
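+
+In code, this attribution check can be sketched as follows; `project` and `synthesis` are hypothetical stand-ins for the projection method of Appendix D and the synthesis network, and the LPIPS distance again comes from the `lpips` package.
+
+```python
+import lpips
+
+def attribution_distance(x, synthesis, project):
+    """D_LPIPS[x, g(g~^-1(x))]: a small value suggests that x could have been
+    produced by this particular generator."""
+    dist = lpips.LPIPS(net='vgg')
+    w, noise = project(x)              # approximate inverse, g~^-1(x)
+    x_resynth = synthesis(w, noise)    # re-synthesize from the projected inputs
+    return dist(x, x_resynth).item()
+```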
+
+# 6. Conclusions and future work
+
+We have identified and fixed several image quality issues in StyleGAN, improving the quality further and considerably advancing the state of the art in several datasets. In some cases the improvements are more clearly seen in motion, as demonstrated in the accompanying video. Appendix A includes further examples of results obtainable using our method. Despite the improved quality, StyleGAN2 makes it easier to attribute a generated image to its source.
+
+Training performance has also improved. At $1024^{2}$ resolution, the original StyleGAN (config A in Table 1) trains at 37 images per second on NVIDIA DGX-1 with 8 Tesla V100 GPUs, while our config E trains $40\%$ faster at 61 img/s. Most of the speedup comes from simplified dataflow due to weight demodulation, lazy regularization, and code optimizations. StyleGAN2 (config F, larger networks) trains at 31 img/s, and is thus only slightly more expensive to train than original StyleGAN. Its total training time was 9 days for FFHQ and 13 days for LSUN CAR.
+
+The entire project, including all exploration, consumed 132 MWh of electricity, of which 0.68 MWh went into training the final FFHQ model. In total, we used about 51 single-GPU years of computation (Volta class GPU). A more detailed discussion is available in Appendix F.
+
+In the future, it could be fruitful to study further improvements to the path length regularization, e.g., by replacing the pixel-space $L_{2}$ distance with a data-driven feature-space metric. Considering the practical deployment of GANs, we feel that it will be important to find new ways to reduce the training data requirements. This is especially crucial in applications where it is infeasible to acquire tens of thousands of training samples, and with datasets that include a lot of intrinsic variation.
+
+Acknowledgements We thank Ming-Yu Liu for an early review, Timo Viitanen for help with the public release, David Luebke for in-depth discussions and helpful comments, and Tero Kuosmanen for technical support with the compute infrastructure.
+
+# References
+
+[1] Rameen Abdal, Yipeng Qin, and Peter Wonka. Image2StyleGAN: How to embed images into the StyleGAN latent space? In ICCV, 2019. 7
+[2] Michael Albright and Scott McCloskey. Source generator attribution via inversion. In CVPR Workshops, 2019. 8
+[3] Carl Bergstrom and Jevin West. Which face is real? http://www.whichfaceisreal.com/learn.html, Accessed November 15, 2019. 1
+[4] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. CoRR, abs/1809.11096, 2018. 1
+[5] Yann N. Dauphin, Harm de Vries, and Yoshua Bengio. Equilibrated adaptive learning rates for non-convex optimization. CoRR, abs/1502.04390, 2015. 5
+[6] Emily L. Denton, Soumith Chintala, Arthur Szlam, and Robert Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. CoRR, abs/1506.05751, 2015. 6
+[7] Vincent Dumoulin, Ethan Perez, Nathan Schucher, Florian Strub, Harm de Vries, Aaron Courville, and Yoshua Bengio. Feature-wise transformations. Distill, 2018. https://distill.pub/2018/feature-wise-transformations. 1
+[8] Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur. A learned representation for artistic style. CoRR, abs/1610.07629, 2016. 1
+[9] Aviv Gabbay and Yedid Hoshen. Style generator inversion for image enhancement and animation. CoRR, abs/1906.11880, 2019. 7
+[10] Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, and Wieland Brendel. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. CoRR, abs/1811.12231, 2018. 1, 4
+[11] Golnaz Ghiasi, Honglak Lee, Manjunath Kudlur, Vincent Dumoulin, and Jonathon Shlens. Exploring the structure of a real-time, arbitrary neural artistic stylization network. CoRR, abs/1705.06830, 2017. 1
+[12] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 249-256, 2010. 3
+[13] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. In NIPS, 2014. 1, 5
+[14] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C. Courville. Improved training of Wasserstein GANs. CoRR, abs/1704.00028, 2017. 6
+[15] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015. 6
+[16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. CoRR, abs/1502.01852, 2015. 3
+
+[17] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Proc. NIPS, pages 6626-6637, 2017. 1
+[18] Xun Huang and Serge J. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. CoRR, abs/1703.06868, 2017. 1
+[19] Animesh Karnewar and Oliver Wang. MSG-GAN: multiscale gradients for generative adversarial networks. In Proc. CVPR, 2020. 6
+[20] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. CoRR, abs/1710.10196, 2017. 1, 5
+[21] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proc. CVPR, 2018. 1, 2, 4, 5
+[22] Tuomas Kynkäänniemi, Tero Karras, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Improved precision and recall metric for assessing generative models. In Proc. NeurIPS, 2019. 1, 2, 4
+[23] Barbara Landau, Linda B. Smith, and Susan S. Jones. The importance of shape in early lexical learning. Cognitive Development, 3(3), 1988. 4
+[24] Haodong Li, Han Chen, Bin Li, and Shunquan Tan. Can forensic detectors identify GAN generated images? In Proc. Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2018. 7
+[25] Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for GANs do actually converge? CoRR, abs/1801.04406, 2018. 5
+[26] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. CoRR, abs/1802.05957, 2018. 1, 5, 6
+[27] Dmitry Nikitko. StyleGAN - Encoder for official TensorFlow implementation. https://github.com/Puzer/stylegan-encoder/, 2019. 7
+[28] Augustus Odena, Jacob Buckman, Catherine Olsson, Tom B. Brown, Christopher Olah, Colin Raffel, and Ian Goodfellow. Is generator conditioning causally related to GAN performance? CoRR, abs/1802.08768, 2018. 5
+[29] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In Proc. Medical Image Computing and Computer-Assisted Intervention (MICCAI), pages 234–241, 2015. 6
+[30] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, and Fei-Fei Li. ImageNet large scale visual recognition challenge. In Proc. CVPR, 2015. 4
+[31] Mehdi S. M. Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, and Sylvain Gelly. Assessing generative models via precision and recall. CoRR, abs/1806.00035, 2018. 1
+[32] Tim Salimans and Diederik P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. CoRR, abs/1602.07868, 2016. 3
+[33] Yujun Shen, Jinjin Gu, Xiaoou Tang, and Bolei Zhou. Interpreting the latent space of GANs for semantic face editing. CoRR, abs/1907.10786, 2019. 1
+
+[34] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014. 1, 4
+[35] Run Wang, Lei Ma, Felix Juefei-Xu, Xiaofei Xie, Jian Wang, and Yang Liu. FakeSpotter: A simple baseline for spotting AI-synthesized fake faces. CoRR, abs/1909.06122, 2019. 7
+[36] Sheng-Yu Wang, Oliver Wang, Richard Zhang, Andrew Owens, and Alexei A. Efros. CNN-generated images are surprisingly easy to spot... for now. CoRR, abs/1912.11035, 2019. 7
+[37] Lance Williams. Pyramidal parametrics. SIGGRAPH Comput. Graph., 17(3):1-11, 1983. 6
+[38] Sitao Xiang and Hao Li. On the effects of batch and weight normalization in generative adversarial networks. CoRR, abs/1704.03971, 2017. 3
+[39] Ning Yu, Larry Davis, and Mario Fritz. Attributing fake images to GANs: Analyzing fingerprints in generated images. CoRR, abs/1811.08180, 2018. 7
+[40] Han Zhang, Ian Goodfellow, Dimitris Metaxas, and Augustus Odena. Self-attention generative adversarial networks. CoRR, abs/1805.08318, 2018. 5
+[41] Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaolei Huang, Xiaogang Wang, and Dimitris N. Metaxas. StackGAN: text to photo-realistic image synthesis with stacked generative adversarial networks. In ICCV, 2017. 6
+[42] Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N. Metaxas. StackGAN++: realistic image synthesis with stacked generative adversarial networks. CoRR, abs/1710.10916, 2017. 6
+[43] Richard Zhang. Making convolutional networks shift-invariant again. In Proc. ICML, 2019. 5
+[44] Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proc. CVPR, 2018. 4, 8
+[45] Xu Zhang, Svebor Karaman, and Shih-Fu Chang. Detecting and simulating artifacts in GAN fake images. CoRR, abs/1907.06515, 2019. 7
\ No newline at end of file
diff --git a/analyzingandimprovingtheimagequalityofstylegan/images.zip b/analyzingandimprovingtheimagequalityofstylegan/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..d721ca401fb786ae7faec5f5f54b95b4c7fbd161
--- /dev/null
+++ b/analyzingandimprovingtheimagequalityofstylegan/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:447e58762c0784a801d35640eb86872f9b142dfb72ca80b970129391e45462c7
+size 590354
diff --git a/analyzingandimprovingtheimagequalityofstylegan/layout.json b/analyzingandimprovingtheimagequalityofstylegan/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..b6b0d8e38ef3ba9c0a0d6c66959010d2076430e3
--- /dev/null
+++ b/analyzingandimprovingtheimagequalityofstylegan/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6030431e71a34b70895f6c671c7f46e179866e46207820197f24d44db68daf30
+size 412700
diff --git a/anefficientpointlstmforpointcloudsbasedgesturerecognition/48c732b1-861f-4882-9115-55787140912c_content_list.json b/anefficientpointlstmforpointcloudsbasedgesturerecognition/48c732b1-861f-4882-9115-55787140912c_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..304fd407303c5e7fa24f483447cf02893ce62193
--- /dev/null
+++ b/anefficientpointlstmforpointcloudsbasedgesturerecognition/48c732b1-861f-4882-9115-55787140912c_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9405aaa816d3fbe179ce4d7d4b39c9e8d0f0348f13b3abda43fcc916f887e0f0
+size 81325
diff --git a/anefficientpointlstmforpointcloudsbasedgesturerecognition/48c732b1-861f-4882-9115-55787140912c_model.json b/anefficientpointlstmforpointcloudsbasedgesturerecognition/48c732b1-861f-4882-9115-55787140912c_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c2b9fee9881757955958a2b27073256ef9ff2dc0
--- /dev/null
+++ b/anefficientpointlstmforpointcloudsbasedgesturerecognition/48c732b1-861f-4882-9115-55787140912c_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:468cd4a28fb96db7b49d349b863ad2b09a8315b0867a29dcce9b4bdc55846b10
+size 100524
diff --git a/anefficientpointlstmforpointcloudsbasedgesturerecognition/48c732b1-861f-4882-9115-55787140912c_origin.pdf b/anefficientpointlstmforpointcloudsbasedgesturerecognition/48c732b1-861f-4882-9115-55787140912c_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e1bdf083937d279283c0041c3283a186e200c186
--- /dev/null
+++ b/anefficientpointlstmforpointcloudsbasedgesturerecognition/48c732b1-861f-4882-9115-55787140912c_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a8017ae9fae0055d1b2aa0ed859ec2111b2dcff54b54146565fa1b564cd8f361
+size 821105
diff --git a/anefficientpointlstmforpointcloudsbasedgesturerecognition/full.md b/anefficientpointlstmforpointcloudsbasedgesturerecognition/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..ecdc664a4d9b79dd3bcc6a1c7678d46fdcdd6c16
--- /dev/null
+++ b/anefficientpointlstmforpointcloudsbasedgesturerecognition/full.md
@@ -0,0 +1,359 @@
+# An Efficient PointLSTM for Point Clouds Based Gesture Recognition
+
+Yuecong Min $^{1,2}$ , Yanxiao Zhang $^{1,2}$ , Xiujuan Chai $^{3}$ , Xilin Chen $^{1,2}$
+
+1Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing, 100190, China
+
+$^{2}$ University of Chinese Academy of Sciences, Beijing, 100049, China
+
+3Agricultural Information Institute, Chinese Academy of Agricultural Sciences, Beijing, 100081, China
+
+{yuecong.min,yanxiao.zhang}@vipl.ict.ac.cn,chaixiujuan@caas.cn,xlchen@ict.ac.cn
+
+# Abstract
+
+Point clouds contain rich spatial information, which provides complementary cues for gesture recognition. In this paper, we formulate gesture recognition as an irregular sequence recognition problem and aim to capture long-term spatial correlations across point cloud sequences. A novel and effective PointLSTM is proposed to propagate information from past to future while preserving the spatial structure. The proposed PointLSTM combines state information from neighboring points in the past with current features to update the current states by a weight-shared LSTM layer. This method can be integrated into many other sequence learning approaches. In the task of gesture recognition, the proposed PointLSTM achieves state-of-the-art results on two challenging datasets (NVGesture and SHREC'17) and outperforms previous skeleton-based methods. To show its advantages in generalization, we evaluate our method on MSR Action3D dataset, and it produces competitive results with previous skeleton-based methods.
+
+# 1. Introduction
+
+Vision-based gesture recognition [37] is a well-studied yet challenging problem in computer vision and has considerable potential applications in human-computer interaction, such as touchless interfaces and nonverbal communication systems. Effectively extracting spatio-temporal features from sequence data is one of the most crucial issues for gesture recognition. Most previous gesture recognition systems capture dynamic motion information with two-stream based methods [24, 34, 22, 44] or 3D convolutional neural networks [18, 22, 24, 25, 34, 35].
+
+Compared with RGB data, point clouds precisely describe the latent geometric structure and distance information of object surfaces, which provide complementary cues for gesture recognition. Nevertheless, how to leverage such rich spatial information in point clouds remains a major challenge.
+
+
+(a) PointLSTM on ordered point clouds.
+
+
+(b) PointLSTM on orderless point clouds.
+Figure 1. The basic idea of PointLSTM ( $s$ and $f$ are states and features, respectively). (a) In an ideal situation, each point in the current frame can find the corresponding point in the previous frame. (b) The proposed PointLSTM relaxes the previous assumption and gathers relevant information from past neighbors.
+
+Instead of representing point clouds as voxels or multi-view formats, Qi et al. [30] propose the PointNet architecture to extract structure information from raw point clouds directly. PointNet++ [31] extends PointNet by applying hierarchical grouping and sampling operations to capture local structure information. Some recent works [19, 20, 23] modify the grouping operation to extract both motion and structure features from spatio-temporal neighbors. Nevertheless, these methods merely focus on short-term modeling and have insufficient abilities to capture long-term relationships.
+
+The recent successes of recurrent neural networks (RNN) and long short-term memory (LSTM) in sequence modeling [3, 7, 12, 15, 38] provide some methodological insights on how to address the above problem. With the help of LSTM, it becomes possible to capture both motion and appearance changes evolving with time from spatio-temporal correspondences. However, most point cloud data are orderless, and directly applying a weight-shared LSTM layer on misaligned point cloud sequences will cause optimization difficulties. Therefore, how to leverage temporal information while preserving spatial structure is the primary challenge in irregular sequence modeling.
+
+To solve this problem, we propose a modified version of LSTM for Point cloud sequences (PointLSTM). Fig. 1 illustrates the basic idea of the proposed method. In an ideal situation, for each point in the current frame, we can find the corresponding point in the previous frame (Fig. 1(a)), and a weight-shared LSTM is sufficient to process such point cloud sequences. However, this assumption is seldom met in practice due to occlusion and other causes. For this reason, we relax it by searching relevant points in the previous frame and collecting the state information (predicted by LSTM) from predecessors.
+
+As illustrated in Fig. 1(b), we concatenate the features of the current point and the previous neighboring states. A weight-shared LSTM is used to update state information for each point pair, and a pooling operation is applied to collect relevant information and update current point states. Although there are probably no corresponding point pairs between two consecutive frames, the proposed update mechanism can still collect the structure and motion information for each point from its spatial neighbors.
+
+Moreover, a simplified version with point-shared states (PointLSTM-PSS) is proposed to reduce the computation and explore the origin of the improvement. Comprehensive ablation studies are conducted on NVIDIA Dynamic Hand Gesture Dataset [24] and SHREC'17 Track Dataset [5], and the proposed method achieves better results than the latest methods. To demonstrate the generalization of the PointLSTM, we validate the proposed method for action recognition on MSR Action3D Dataset [17], and our method shows competitive results with skeleton-based methods.
+
+In summary, the main contributions of this paper are:
+
+- The proposed PointLSTM can leverage long-term spatio-temporal relationships in irregular sequence data while preserving the spatial structure for irregular sequence recognition problem.
+- A simplified version (PointLSTM-PSS) is presented to reduce the computation and explore the origin of the improvement.
+- Evaluations of the proposed PointLSTM on two sequence recognition tasks, 3D gesture recognition and action recognition, show great potential for real-time applications.
+
+# 2. Related Work
+
+# 2.1. Vision-Based Dynamic Gesture Recognition
+
+Efficiently capturing spatio-temporal information is the main challenge of dynamic hand gesture recognition [37]. In the past decades, researchers focused on designing appropriate features, such as histogram of oriented gradients (HOG) [9] and the ensemble of shape function (ESF) [16]. With the success of deep learning, several previous works [4, 18, 22, 24, 25, 34, 35, 44] explore the use of 3D convolution for gesture recognition from video data. One limitation of these methods is that hands only occupy a small fraction of the frame. In other words, video data contains much irrelevant information, and video-based models are more likely to overfit. Therefore, ensemble algorithms [18, 22, 25] are widely used to integrate information from multiple modalities and further improve performance, which leads to unacceptable training and inference time in practice.
+
+With the recent development of commodity depth sensors and hand pose estimation methods [10, 11, 42], it becomes feasible to estimate the sequences of hand joints as intermediates in gesture recognition. Recent works [2, 26, 27] apply graph convolutional network and LSTM to learn the spatial and temporal features from the sequences of hand joints. However, skeleton-based methods highly rely on the quality of the estimated results, which is sensitive to self-occlusion, motion velocity, and image resolution and may cause an unrecoverable mistake.
+
+Compared to skeleton data, point clouds reflect the latent geometric structure of the object surface, which provides reliable and complementary cues for gesture recognition. Inspired by the pioneering works [30, 31] that directly use point clouds as input and extract features from its neighbors, several recent works [19, 20, 23] attempt to extract dynamic features from point cloud sequences. Liu et al. [19] propose FlowNet3D to estimate scene flow between two consecutive frames, and several recent approaches [20, 23] modify grouping operation to find the temporal correspondences between frames. However, these methods only focus on short-term modeling and are different from the proposed method, which can capture long-term spatio-temporal relationships.
+
+# 2.2. LSTM for Sequence Modeling
+
+For sequence modeling, some studies [3, 7, 12, 15, 38] have demonstrated that LSTM, as a special case of RNN, has an excellent ability for long-term modeling. The key idea of LSTM is its update mechanism: an input gate $\pmb{i}^{(t)}$ and a forget gate $\pmb{f}^{(t)}$ control information flow from the input $\pmb{y}^{(t)}$ and past hidden state $\pmb{h}^{(t-1)}$ to the cell state $\pmb{c}^{(t)}$ , and an output gate $\pmb{o}^{(t)}$ controls the final hidden state $\pmb{h}^{(t)}$ , which will be propagated to the next step. We use the following equations to represent the entire process ( $\odot$ denotes the Hadamard product):
+
+$$
+\begin{aligned}
+\boldsymbol{i}^{(t)} &= \sigma(\boldsymbol{U}^{(i)} \boldsymbol{y}^{(t)} + \boldsymbol{W}^{(i)} \boldsymbol{h}^{(t-1)} + \boldsymbol{b}^{(i)}), \\
+\boldsymbol{f}^{(t)} &= \sigma(\boldsymbol{U}^{(f)} \boldsymbol{y}^{(t)} + \boldsymbol{W}^{(f)} \boldsymbol{h}^{(t-1)} + \boldsymbol{b}^{(f)}), \\
+\boldsymbol{o}^{(t)} &= \sigma(\boldsymbol{U}^{(o)} \boldsymbol{y}^{(t)} + \boldsymbol{W}^{(o)} \boldsymbol{h}^{(t-1)} + \boldsymbol{b}^{(o)}), \\
+\tilde{\boldsymbol{c}}^{(t)} &= \tanh(\boldsymbol{U}^{(c)} \boldsymbol{y}^{(t)} + \boldsymbol{W}^{(c)} \boldsymbol{h}^{(t-1)} + \boldsymbol{b}^{(c)}), \\
+\boldsymbol{c}^{(t)} &= \boldsymbol{f}^{(t)} \odot \boldsymbol{c}^{(t-1)} + \boldsymbol{i}^{(t)} \odot \tilde{\boldsymbol{c}}^{(t)}, \\
+\boldsymbol{h}^{(t)} &= \boldsymbol{o}^{(t)} \odot \boldsymbol{c}^{(t)},
+\end{aligned} \tag{1}
+$$
+
+where $\boldsymbol{U}^{(\cdot)}$ and $\boldsymbol{W}^{(\cdot)}$ are the input-to-hidden and hidden-to-hidden weight matrices, and $\boldsymbol{b}^{(\cdot)}$ are bias vectors.
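+
+For concreteness, a minimal NumPy version of this update might look as follows; the gate parameters are placeholders and, following Eq. 1 as written above, the output gate multiplies the cell state directly.
+
+```python
+import numpy as np
+
+def sigmoid(x):
+    return 1.0 / (1.0 + np.exp(-x))
+
+def lstm_step(y, h_prev, c_prev, U, W, b):
+    """One step of Eq. 1; U, W, b are dicts holding the four gates' parameters
+    under the keys 'i', 'f', 'o', 'c'."""
+    i = sigmoid(U['i'] @ y + W['i'] @ h_prev + b['i'])
+    f = sigmoid(U['f'] @ y + W['f'] @ h_prev + b['f'])
+    o = sigmoid(U['o'] @ y + W['o'] @ h_prev + b['o'])
+    c_tilde = np.tanh(U['c'] @ y + W['c'] @ h_prev + b['c'])
+    c = f * c_prev + i * c_tilde       # Hadamard products
+    h = o * c                          # Eq. 1 uses c directly (no extra tanh)
+    return h, c
+```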
+
+In the visual sequence learning task [7, 15, 38], LSTMs are mostly attached on top of the last layer of pre-trained CNNs, which will hurt the ability of LSTM to capture dynamic spatial correlations evolving with time. Some LSTM-based models are proposed to solve this problem. ConvLSTM [38] adopts the convolution operation to control state information transitions while preserving grid structure. AGC-LSTM [32] proposes graph convolutional LSTM to capture the discriminative features in spatial configuration and temporal dynamics.
+
+Several recent works attempt to utilize LSTM and RNN in point clouds. Ye et al. [41] split the whole 3D space into uniformly-spaced blocks and adopt a two-direction hierarchical RNN to explore long-range spatial relationships for semantic segmentation. PointRNN [8] and CloudLSTM [43] apply RNN on dynamic point clouds for pointwise prediction. These works are close to ours, but there are clear differences: instead of using a pooling or weighted-sum operation to summarize local information for pointwise prediction, we keep the spatial structure and use a pooling operation to find the relevant information for global feature extraction.
+
+# 3. Method
+
+In this section, firstly, we present the PointLSTM, the core idea of this work, and then consider several network architecture proposals for gesture and action recognition.
+
+# 3.1. PointLSTM
+
+As illustrated in Eq. 1, although LSTM is equipped with the capability of creating short paths across multiple states to model long-range relationships, it is hard to apply it to unaligned point cloud sequences. Here we aim to design a suitable mechanism for inaccurately corresponding point clouds. For this purpose, we propose two types of solutions to tolerate rough alignment according to whether points in the same frame share state information or not.
+
+Here are some notations. A point cloud sequence with $T$ frames is represented by $(\mathbb{P}^{(1)},\mathbb{P}^{(2)},\dots ,\mathbb{P}^{(T)})$ , and each frame contains an arbitrary number of points $\mathbb{P}^{(t)} = \{p_i^{(t)}|i = 1,2,\dots ,n_t\}$ . Besides, any point in a point cloud sequence may have no corresponding point in other frames due to occlusions and other causes. Each point $p_i^{(t)}$ can be represented as two parts: a $d$ -dim coordinate vector $\boldsymbol{x}_i^{(t)}$ and an $m$ -dim feature vector $\boldsymbol{f}_i^{(t)}$ . $\mathcal{N}_{\Delta t}(\boldsymbol{x}_i^{(t)})$ is the neighboring point set of point $p_i^{(t)}$ in frame $\mathbb{P}^{(t + \Delta t)}$ . The general LSTM layer (Eq. 1) can be abbreviated as follows:
+
+$$
+\boldsymbol {h} ^ {(t)}, \boldsymbol {c} ^ {(t)} = \operatorname {L S T M} \left(\boldsymbol {y} ^ {(t)}, \boldsymbol {h} ^ {(t - 1)}, \boldsymbol {c} ^ {(t - 1)}\right). \tag {2}
+$$
+
+Point-independent states. In this case, we assume each point $p_i^{(t)}$ in the point cloud sequence has an independent hidden state $\pmb{h}_i^{(t)}$ and cell state $\pmb{c}_i^{(t)}$ . If we can obtain the one-to-one correspondence between neighboring point clouds, the problem can be simplified as a general sequence learning problem. However, this is impractical in most situations. Therefore, we relax this assumption by searching for relevant points among its past neighbors. The state information of the previous frame can propagate to the next, and the entire process is illustrated in Fig. 2(a). For each point pair $(p_i^{(t)}, p_j^{(t-1)})$ , $p_j^{(t-1)} \in \mathcal{N}_{-1}(\pmb{x}_i^{(t)})$ , we formulate the updating mechanism as:
+
+$$
+\boldsymbol {y} _ {i, j} ^ {(t)} = \left[ \boldsymbol {x} _ {i} ^ {(t)} - \boldsymbol {x} _ {j} ^ {(t - 1)}; \boldsymbol {f} _ {i} ^ {(t)} \right],
+$$
+
+$$
+\tilde {\boldsymbol {h}} _ {i, j} ^ {(t)}, \tilde {\boldsymbol {c}} _ {i, j} ^ {(t)} = \operatorname {L S T M} \left(\boldsymbol {y} _ {i, j} ^ {(t)}, \boldsymbol {h} _ {j} ^ {(t - 1)}, \boldsymbol {c} _ {j} ^ {(t - 1)}\right), \tag {3}
+$$
+
+where we use $[\cdot ;\cdot ]$ to denote the concatenation operation, and $\tilde{\pmb{h}}_{i,j}^{(t)},\tilde{\pmb{c}}_{i,j}^{(t)}$ are virtual hidden and cell states for point pair $(p_i^{(t)},p_j^{(t - 1)})$ . The final states of $p_i^{(t)}$ are updated by:
+
+$$
+\boldsymbol {h} _ {i} ^ {(t)} = g \left(\tilde {\boldsymbol {h}} _ {i, 1} ^ {(t)}, \tilde {\boldsymbol {h}} _ {i, 2} ^ {(t)}, \dots , \tilde {\boldsymbol {h}} _ {i, n _ {t - 1}} ^ {(t)}\right), \tag {4}
+$$
+
+$$
+\boldsymbol {c} _ {i} ^ {(t)} = g \big (\tilde {\boldsymbol {c}} _ {i, 1} ^ {(t)}, \tilde {\boldsymbol {c}} _ {i, 2} ^ {(t)}, \dots , \tilde {\boldsymbol {c}} _ {i, n _ {t - 1}} ^ {(t)} \big),
+$$
+
+where $\pmb{h}_i^{(t)}$ and $\pmb{c}_i^{(t)}$ are the updated hidden and cell states of point $p_i^{(t)}$ , and $g$ is a symmetric function, implemented as a max-pooling layer.
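+
+A minimal sketch of this update is given below; it uses a standard `nn.LSTMCell` as the weight-shared LSTM, a brute-force nearest-neighbor search, and a plain Python loop over points, so it illustrates Eqs. 3 and 4 rather than an efficient implementation.
+
+```python
+import torch
+import torch.nn as nn
+
+class PointLSTMSketch(nn.Module):
+    """Eqs. 3-4: a weight-shared LSTM cell updates a virtual state for every
+    (current point, past neighbor) pair; max-pooling over the neighbors then
+    gives the state of the current point."""
+    def __init__(self, feat_dim, hidden_dim, d=4, k=4):
+        super().__init__()
+        self.k = k
+        self.cell = nn.LSTMCell(d + feat_dim, hidden_dim)
+
+    def forward(self, x_t, f_t, x_prev, h_prev, c_prev):
+        # x_t: [n_t, d] coords and f_t: [n_t, m] features of frame t;
+        # x_prev: [n_{t-1}, d], h_prev/c_prev: [n_{t-1}, hidden] from frame t-1.
+        idx = torch.cdist(x_t, x_prev).topk(self.k, largest=False).indices  # past neighbors
+        h_new, c_new = [], []
+        for i in range(x_t.shape[0]):
+            j = idx[i]
+            y = torch.cat([x_t[i] - x_prev[j], f_t[i].expand(self.k, -1)], dim=1)  # Eq. 3 input
+            h_ij, c_ij = self.cell(y, (h_prev[j], c_prev[j]))
+            h_new.append(h_ij.max(dim=0).values)   # Eq. 4: max-pool over past neighbors
+            c_new.append(c_ij.max(dim=0).values)
+        return torch.stack(h_new), torch.stack(c_new)
+```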
+
+Point-shared states. In the previous point-independent states scheme, each point owns an independent state and gathers information from past neighbors. This is time-consuming, especially when the size of the point set is large. To facilitate the updating process and explore the essential components of PointLSTM, we propose a simplified version in which points in the same frame share states, called PointLSTM-PSS. All points in the same frame $\mathbb{P}^{(t)}$ share hidden states $h^{(t)}$ and cell states $c^{(t)}$ . The updating mechanism is shown in Fig. 2(b) and formulated as:
+
+$$
+\boldsymbol {y} _ {i} ^ {(t)} = \left[ \boldsymbol {x} _ {i} ^ {(t)}; \boldsymbol {f} _ {i} ^ {(t)} \right],
+$$
+
+$$
+\tilde {\boldsymbol {h}} _ {i} ^ {(t)}, \tilde {\boldsymbol {c}} _ {i} ^ {(t)} = \operatorname {L S T M} \left(\boldsymbol {y} _ {i} ^ {(t)}, \boldsymbol {h} ^ {(t - 1)}, \boldsymbol {c} ^ {(t - 1)}\right), \tag {5}
+$$
+
+
+(a) PointLSTM
+
+
+(b) PointLSTM-PSS
+Figure 2. Overview of PointLSTM and PointLSTM-PSS. (a) In the PointLSTM, each point owns the independent state, which is updated based on current inputs and states of the neighborhood in the previous frame. (b) In the PointLSTM-PSS, points in the same frame share the same states, and the global states are obtained by averaging all updated states in the current frame.
+
+where $\tilde{\pmb{h}}_i^{(t)},\tilde{\pmb{c}}_i^{(t)}$ are virtual hidden and cell states of point $p_i^{(t)}$ and the final states at time $t$ are defined as:
+
+$$
+\boldsymbol {h} ^ {(t)} = g \left(\tilde {\boldsymbol {h}} _ {1} ^ {(t)}, \tilde {\boldsymbol {h}} _ {2} ^ {(t)}, \dots , \tilde {\boldsymbol {h}} _ {n _ {t}} ^ {(t)}\right), \tag {6}
+$$
+
+$$
+\pmb {c} ^ {(t)} = g (\tilde {\pmb {c}} _ {1} ^ {(t)}, \tilde {\pmb {c}} _ {2} ^ {(t)}, \dots , \tilde {\pmb {c}} _ {n _ {t}} ^ {(t)}),
+$$
+
+where $\pmb{h}^{(t)}$ and $\pmb{c}^{(t)}$ are the updated hidden and cell states at time $t$ , and $g$ is a symmetric function, implemented as an avg-pooling layer.
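+
+The point-shared variant is even simpler to sketch: one shared state per frame, the same weight-shared cell applied to every point, and average pooling over the per-point results (Eqs. 5 and 6). Dimensions below are illustrative.
+
+```python
+import torch
+import torch.nn as nn
+
+def pointlstm_pss_step(cell, x_t, f_t, h_prev, c_prev):
+    """Eqs. 5-6: all points of frame t share one (h, c); the shared LSTM cell is
+    applied to every point and the results are average-pooled."""
+    n_t = x_t.shape[0]
+    y = torch.cat([x_t, f_t], dim=1)                                 # Eq. 5 input per point
+    h_i, c_i = cell(y, (h_prev.expand(n_t, -1), c_prev.expand(n_t, -1)))
+    return h_i.mean(dim=0, keepdim=True), c_i.mean(dim=0, keepdim=True)  # Eq. 6
+
+cell = nn.LSTMCell(3 + 4, 32)                                        # d=3 coords, m=4 features
+h, c = torch.zeros(1, 32), torch.zeros(1, 32)
+h, c = pointlstm_pss_step(cell, torch.randn(128, 3), torch.randn(128, 4), h, c)
+```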
+
+# 3.2. Neighborhood Grouping
+
+The idea of gathering information from predecessors in PointLSTM is similar to the concept of the receptive field in convolutional neural networks. Nevertheless, things become more complicated when considering non-rigid motions. To investigate the effect of misalignment, we follow the previous literature [20, 23] and evaluate two types of grouping methods: direct grouping and aligned grouping.
+
+Direct grouping. In order to capture motion information from previous frames, we directly find the $k$ -nearest neighbors of the centroid point $p_i^{(t)}$ in the previous frame as its neighboring point set $\mathcal{N}_{-1}(\boldsymbol{x}_i^{(t)}; k)$ . This operation can integrate spatial information of neighboring frames when objects remain static. Since there is no radius limitation, direct grouping can also capture relative motion information when objects move fast.
+
+Aligned grouping. Several recent approaches [19, 20] estimate scene flow for rigid objects. If we can estimate the backward flow $\Delta \pmb{x}_i^{(t)} = \tilde{\pmb{x}}_i^{(t - 1)} - \pmb{x}_i^{(t)}$ between the centroid point $p_i^{(t)}$ and its virtual corresponding point $\tilde{p}_i^{(t - 1)}$ in the previous frame, then $\mathcal{N}_{-1}(\pmb{x}_i^{(t)};k)$ can be determined as the $k$ -nearest neighbors of $\tilde{p}_i^{(t - 1)}$ in frame $\mathbb{P}^{(t - 1)}$ .
+
+However, non-rigid scene flow estimation is still a challenging task [14]. To evaluate the robustness of the proposed method to small shifts, we roughly align neighboring point clouds by aligning their centroids:
+
+$$
+\Delta \bar {\boldsymbol {x}} ^ {(t)} = \frac {1}{n _ {t - 1}} \sum_ {i = 1} ^ {n _ {t - 1}} \boldsymbol {x} _ {i} ^ {(t - 1)} - \frac {1}{n _ {t}} \sum_ {i = 1} ^ {n _ {t}} \boldsymbol {x} _ {i} ^ {(t)}, \tag {7}
+$$
+
+$$
+\Delta \pmb{x}_i^{(t)} \approx \Delta \bar{\pmb{x}}^{(t)}, \quad \text{for } i = 1, \dots, n_t.
+$$
+
+It is worth mentioning that ConvLSTM is actually a special case of PointLSTM when applying the grid grouping strategy on regular data, and the proposed point-shared states can be considered as a global grouping strategy.
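+
+As a small illustration of the two strategies, the sketch below writes out brute-force kNN grouping in the previous frame and the centroid-offset alignment of Eq. (7); the function names are ours and an exact-search kNN is assumed.
+
+```python
+import torch
+
+def knn_previous_frame(query, x_prev, k):
+    """Indices of the k nearest points of `query` (n_t, 3) in frame t-1 (n_prev, 3)."""
+    dist = torch.cdist(query, x_prev)                       # (n_t, n_prev)
+    return dist.topk(k, dim=-1, largest=False).indices
+
+def direct_grouping(x_t, x_prev, k):
+    # Search around the centroid point itself; no radius limit.
+    return knn_previous_frame(x_t, x_prev, k)
+
+def aligned_grouping(x_t, x_prev, k):
+    # Roughly align the two clouds by matching their centroids, Eq. (7).
+    shift = x_prev.mean(dim=0) - x_t.mean(dim=0)
+    return knn_previous_frame(x_t + shift, x_prev, k)
+```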
+
+# 3.3. Network Architecture
+
+We evaluate the proposed method on 3D gesture and action recognition tasks. As shown in Fig. 3, we take a modified version of FlickerNet [23] as baseline, which utilizes one intra-frame PointNet++ [31] layer and three inter-frame PointNet++ layers. Spatio-temporal grouping and modified sampling layers are used to downsample point clouds between two neighboring inter-frame layers.
+
+The proposed PointLSTM can be embedded in existing architectures that can measure similarities between features. To further investigate the effect of PointLSTM at different stages, we consider four architecture designs: PointLSTM-raw, early, middle, and late.
+
+PointLSTM-raw. Compared with the video sequences, raw point cloud sequences contain rich structure and distance information. We replace the first intra-frame layer with a single PointLSTM layer to test whether it can capture structure information from the raw point cloud.
+
+PointLSTM-early, middle, and late. We replace each of the three inter-frame layers with a single PointLSTM layer, respectively, to see how well the PointLSTM can capture motion and structure information at different stages.
+
+Moreover, to find the difference between PointLSTM and the general LSTM, we replace stage-5 with an LSTM layer, which extracts action information at the frame level. We refer to this method as baseline-LSTM.
+
+
+Figure 3. The basic network architecture used in this paper. This architecture contains five stages: the first stage extracts intra-frame features using spatial grouping, and the second to fourth stages extract inter-frame features with spatio-temporal grouping and density-based sampling. The fifth stage extracts point-wise features, followed by a max-pooling layer to obtain global features. PointLSTM-raw, early, middle, and late replace stages 1, 2, 3, and 4 with a PointLSTM layer, respectively.
+
+# 3.4. Implementation Details
+
+Density-based sampling layer. The number of points extracted from a depth video is large, and most of them contain similar depth information. Previous work [23] shows that a small number of points (about 100-200) per frame is a reasonable choice for gesture recognition. However, different from gesture recognition, the dynamic parts of a human action only occupy a small portion of the whole point cloud. To reduce redundant computation, we adopt a simple density-based sampling method [21]. The estimated density of point $p_i^{(t)}$ at position $x_i^{(t)}$ is given as:
+
+$$
+\rho \left(\boldsymbol {x} _ {i} ^ {(t)}\right) = \frac {1}{n _ {t} r ^ {d}} \sum_ {j = 1} ^ {n _ {t}} w \left(\frac {\boldsymbol {x} _ {i} ^ {(t)} - \boldsymbol {x} _ {j} ^ {(t)}}{r}\right), \tag {8}
+$$
+
+where $r$ is the Euclidean distance between $p_i^{(t)}$ and its $k$ -th nearest neighbor in frame $\mathbb{P}^{(t)}$ , and $w$ is a bounded, integrable weight function. At each sampling layer, we select points with lower density, assuming that they correspond to the boundary of the point cloud. Examples are visualized in Fig. 4.
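+
+A hedged sketch of this sampling step is given below; the Gaussian choice of the bounded weight function $w$ is an assumption, while $r$ is taken as the distance to the $k$-th nearest neighbour as described above.
+
+```python
+import torch
+
+def density_sample(x, n_keep, k=16):
+    """Keep the n_keep lowest-density points of one frame x with shape (n, 3), Eq. (8)."""
+    n, d = x.shape
+    dist = torch.cdist(x, x)                                # (n, n) pairwise distances
+    r = dist.kthvalue(k + 1, dim=-1).values                 # k-th neighbour distance (skip self)
+    w = torch.exp(-0.5 * (dist / r.unsqueeze(-1)) ** 2)     # assumed Gaussian weight function
+    rho = w.sum(dim=-1) / (n * r ** d)                      # density estimate of Eq. (8)
+    keep = rho.topk(n_keep, largest=False).indices          # low density ~ cloud boundary
+    return x[keep]
+```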
+
+Training and inference. Following common practice, we uniformly sample a 32-frame clip along the temporal axis and generate 512 points for each frame. We train all models from scratch for 200 epochs with a mini-batch size of 8 on a single Tesla P100. Adam, with a momentum of 0.9, is used with a learning rate of $10^{-4}$ , which is divided by 10 at epochs 100, 160, and 180. At the training stage, we randomly sample 128 points (examples are shown in Fig. 4) from the pre-processed point cloud data, and sample uniformly when testing. We augment the training set with random scaling $(\pm 20\%)$ , rotation $(\pm 15^{\circ})$ , and dropping of input points $(20\%)$ . No test-time augmentation is used. To reduce random effects, we run all experiments four times with different random seeds and report mean accuracies and standard deviations.
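+
+The optimisation schedule just described could be set up as in the sketch below; `model`, `criterion`, and `train_loader` are placeholders supplied by the caller, and any Adam hyper-parameter not stated in the text (e.g. the second-moment coefficient) is left at its default.
+
+```python
+import torch
+
+def train(model, criterion, train_loader):
+    # Placeholders: `model`, `criterion`, and `train_loader` are defined elsewhere.
+    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
+    scheduler = torch.optim.lr_scheduler.MultiStepLR(
+        optimizer, milestones=[100, 160, 180], gamma=0.1)  # divide lr by 10 at these epochs
+    for epoch in range(200):
+        for clip, label in train_loader:  # 32-frame clips, 128 sampled points per frame
+            loss = criterion(model(clip), label)
+            optimizer.zero_grad()
+            loss.backward()
+            optimizer.step()
+        scheduler.step()
+```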
+
+
+(a) SHREC'17 [5]
+
+
+(b) MSR Action 3D [17]
+Figure 4. Examples of point cloud sequences. The first row shows input point cloud sequences, and each frame contains 128 points. The second row presents 64-point sequences after sampling. The third row visualizes the corresponding skeleton sequences.
+
+# 4. Experiments
+
+In this section, we first evaluate the proposed method on two challenging dynamic gesture datasets, NVGesture and SHREC'17. Several architectural experiments are performed to gain a basic understanding of our model. Moreover, we conduct ablation experiments to demonstrate the effectiveness of the proposed method. Finally, we present experiments on a multi-modality action recognition dataset, MSR Action3D, to verify the universality and applicability of the proposed model.
+
+# 4.1. Datasets
+
+NVGesture [24]. NVIDIA Dynamic Hand Gesture Dataset is a challenging dataset for human-computer interfaces in vehicles. This dataset provides multiple modalities, including RGB, depth, and IR images. A total of 1532 videos in 25 classes are split by subject into 1050 training videos and 482 test videos.
+
+SHREC'17 [5]. SHREC'17 Track Dataset is a public dynamic hand gesture dataset presented for the SHREC'17 Track. Gestures in SHREC'17 are defined by the hand motion or the hand shape performed through the gesture, corresponding to coarse and fine gestures, respectively.
+
+Table 1. Performance comparison (%) of applying PointLSTM (direct grouping) at different stages with different window sizes.
+
+| Method | NVGesture k=1 | NVGesture k=4 | NVGesture k=16 | NVGesture PSS | SHREC'17 (28 gestures) k=1 | SHREC'17 k=4 | SHREC'17 k=16 | SHREC'17 PSS |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| baseline | 85.9(±0.5) | | | | 87.2(±1.0) | | | |
+| baseline-LSTM | 82.8(±0.8) | | | | 88.9(±1.0) | | | |
+| PointLSTM-raw | 82.5(±1.3) | 82.9(±0.5) | 83.3(±0.7) | 83.0(±0.8) | 90.7(±0.7) | 90.3(±1.4) | 90.5(±0.7) | 89.5(±0.3) |
+| PointLSTM-early | 87.9(±0.7) | 86.4(±0.5) | 86.9(±0.4) | 87.3(±0.4) | 93.5(±0.6) | 93.4(±0.8) | 93.3(±0.6) | 92.8(±0.4) |
+| PointLSTM-middle | 85.4(±0.6) | 86.0(±0.4) | 86.9(±0.6) | 86.8(±0.9) | 94.3(±0.3) | 94.0(±0.1) | 94.7(±0.1) | 93.1(±0.2) |
+| PointLSTM-late | 87.3(±0.1) | 87.5(±1.0) | 86.4(±1.1) | 86.4(±0.9) | 93.2(±0.4) | 93.5(±0.2) | 92.5(±0.5) | 92.4(±0.4) |
+
+The dataset contains 2800 videos in 14 gesture classes, and each gesture is performed in two ways: using one finger, or the whole hand. It has been split into 1960 training sequences (70%) and 840 test sequences (30%). This dataset also provides the coordinates of 22 hand joints in the 3D world space and is widely used in skeleton-based gesture recognition.
+
+MSR Action3D [17]. MSR Action3D Dataset contains 20 classes, and each class is performed by ten subjects. These actions cover various movements of the arms, legs, torso, and their combinations. The original dataset has a total of 567 sequences, 10 of which are discarded due to excessive noise [36, 45]. This dataset also provides 20 joint locations for skeleton-based action recognition.
+
+# 4.2. Gesture Recognition
+
+Our main application is dynamic gesture recognition, which is a basic but essential task for human-computer interaction. We extract point cloud sequences from hand regions, which can be segmented by depth information from detection results or original depth videos. We firstly evaluate when and where to use PointLSTM to encode the long-term features of point cloud sequences.
+
+Comparison with baseline methods. Comparisons between baseline and baseline-LSTM in Table 1 reveal different characteristics of these two datasets. Since most of the gestures in NVGesture are relatively simple, such as "OK" and "move hand left", the baseline shows better results than the baseline-LSTM. In contrast, SHREC'17 has more complicated gestures that need long-term temporal information, like "Swipe +" and "Swipe X", thus the baseline-LSTM performs better.
+
+The proposed method shows promising results on both datasets. On NVGesture, PointLSTM obtains $1.9\%$ and $5.1\%$ gains in comparison with baseline and baseline-LSTM, which indicates the PointLSTM does capture temporal information while preserving the spatial structure. Moreover, PointLSTM obtains $7.5\%$ and $5.8\%$ gains in comparison with baseline and baseline-LSTM on SHREC'17, and these results suggest that the proposed method is more powerful in capturing temporal relationships than both baseline and baseline-LSTM.
+
+
+Figure 5. Recognition accuracies on SHREC'17 (28 gestures): the PointLSTM-middle $(k = 16, 94.70\%)$ vs. the baseline $(88.90\%)$ . H and F correspond to gestures performed with the whole hand and with one finger, respectively. We present performance gains (orange) and drops (green) of the PointLSTM with respect to the baseline, and highlight several gestures with clear improvements (black). Categories are sorted by the performance of the baseline (H).
+
+Fig. 5 presents per-class performance comparison between the PointLSTM-middle and the baseline. The proposed method produces better scores in most categories, and obtains dramatic improvements on both coarse gestures ("Swipe Up", "Swipe Down" and "Tap") and fine gestures ("Grab", "Rotation Clockwise" and "Pinch"). These observations demonstrate the effectiveness of the proposed method on both hand shape and motion recognition. The proposed PointLSTM only drops in two categories: "Swipe X (H)" $(-8.3\%)$ and "Pinch (F)" $(-7.3\%)$ , which tend to be confused with "Swipe + (H)" and "Grab (F)" (see the supplementary material for detailed confusion matrices).
+
+Comparison on the embedding stage. As mentioned above, the PointLSTM can be applied at any stage to embed long-term information. Therefore, we compare the performance of using the PointLSTM at different stages with different window sizes ( $k$ for kNN) in Table 1. Applying the PointLSTM on raw point cloud sequences only leads to a small improvement, whereas using it in later stages yields larger gains. This comparison suggests that the PointLSTM is better utilized in later stages, which provide more reliable cues for collecting relevant information.
+
+Comparison on the range of the window size. Another observation from Table 1 is that even if only the nearest neighbor is used, the proposed method can still yield competitive results in comparison with larger window sizes, and PointLSTM-early with $k = 1$ achieves the highest accuracy on NVGesture.
+
+
+(a) NVGesture (28 gestures)
+
+
+(b) SHREC'17
+Figure 6. Comparison of different grouping strategies with different window size $k$ of PointLSTM-middle.
+
+We suspect this is because gestures in NVGesture are more relevant to hand shape, and the neighboring frames are well aligned. The PointLSTM can collect context information from its neighbors for more accurate recognition, as illustrated in Fig. 1(a).
+
+To further compare the effects of the misalignment, we evaluate the PointLSTM-middle with different grouping operations and window sizes in Fig. 6. PointLSTM-middle with aligned grouping achieves better results than direct grouping on NVGesture (Fig. 6(a)), which verifies that good alignment will help PointLSTM to collect more relevant information. However, results in Fig. 6(b) show the opposite tendency. We analyze the failure cases and attribute the primary performance degradation to the unstable alignments caused by centroid displacements and inaccurate detections, which bring in noisy motion information when aligning centroids (examples can be found in supplementary materials). This result indicates that inaccurate alignments will deteriorate the performance, and direct grouping can be an acceptable choice when accurate alignments are challenging to obtain.
+
+Comparison on states sharing. As shown in Table 1, PointLSTM-PSS yields superior performance to the baseline, and the primary difference between PointLSTM and PointLSTM-PSS is the grouping strategy. Therefore, we can infer that the major improvement (from $87.2\%$ to $93.1\%$ for PointLSTM-middle) on SHREC'17 is obtained from the weight-shared LSTM layer, while independent states and grouping operations account for the further improvement (from $93.1\%$ to $94.7\%$ ). This interesting result shows that the proposed model can handle small misalignments in the gesture recognition task. We refer to PointLSTM-middle with point-shared states as the default PointLSTM-PSS in the rest of this paper.
+
+Table 2. Performance comparison (%) on NVGesture dataset.
+
+| Method | Modality | Accuracy |
+| --- | --- | --- |
+| R3DCNN [24] | IR image | 63.5 |
+| R3DCNN [24] | optical flow | 77.8 |
+| R3DCNN [24] | depth video | 80.3 |
+| PreRNN [40] | depth video | 84.4 |
+| MTUT [1] | depth video | 84.9 |
+| R3DCNN [24] | rgb video | 74.1 |
+| PreRNN [40] | rgb video | 76.5 |
+| MTUT [1] | rgb video | 81.3 |
+| PointNet++ [31] | point clouds | 63.9 |
+| FlickerNet [23] | point clouds | 86.3 |
+| Baseline | point clouds | 85.9(±0.5) |
+| PointLSTM-early | point clouds | 87.9(±0.7) |
+| PointLSTM-PSS | point clouds | 87.3(±0.4) |
+| PointLSTM-middle | point clouds | 86.9(±0.6) |
+| PointLSTM-late | point clouds | 87.5(±1.0) |
+| Human [24] | rgb video | 88.4 |
+
+Table 3. Performance comparison (%) on SHREC'17 dataset. The results of both 14 and 28 gestures are reported.
+
+| Method | Modality | 14 | 28 |
+| --- | --- | --- | --- |
+| Key frames [5] | depth sequence | 82.9 | 71.9 |
+| SoCJ+HoHD+HoWR [4] | skeleton | 88.2 | 81.9 |
+| Res-TCN [13] | skeleton | 91.1 | 87.3 |
+| STA-Res-TCN [13] | skeleton | 93.6 | 90.7 |
+| ST-GCN [39] | skeleton | 92.7 | 87.7 |
+| DG-STA [2] | skeleton | 94.4 | 90.7 |
+| Baseline | point clouds | 90.5 | 87.6 |
+| PointLSTM-early | point clouds | 95.4 | 93.5 |
+| PointLSTM-PSS | point clouds | 95.0 | 93.1 |
+| PointLSTM-middle | point clouds | 95.9 | 94.7 |
+| PointLSTM-late | point clouds | 94.9 | 93.5 |
+
+Comparison with the state-of-the-art. We compare the proposed method with several state-of-the-art methods on the two datasets and present results in Table 2 and Table 3. From Table 2, we can see that PointLSTM-early achieves the best performance of $87.9\%$ on NVGesture, which is clearly higher than single-modality approaches using other modalities and a small improvement over FlickerNet. Meanwhile, the proposed method approaches human performance on RGB videos $(88.4\%)$ .
+
+Since the SHREC'17 dataset provides skeletal data of the hand and fingers, most previous works use skeleton sequences as inputs, which provide relatively accurate hand pose structure and joint trajectories. As shown in Table 3, the proposed method shows a clear improvement (1.5% for 14 gestures and 4.0% for 28 gestures) compared with these methods. Different from general actions, gestures are designed to convey information.
+
+Table 4. Model size and inference time of PointLSTM. We present the #Paras (the number of parameters) and inference time, as well as computational complexity measured in FLOPs (floating-point operations).
+
+| Model | #Paras | FLOPs | Time(ms) |
+| --- | --- | --- | --- |
+| baseline | 0.9M | 6.2G | 22.5 |
+| PointLSTM-raw | 0.9M | 7.3G | 36.1 |
+| PointLSTM-early | 1.0M | 12.1G | 36.0 |
+| PointLSTM-PSS | 1.2M | 3.7G | 27.5 |
+| PointLSTM-middle | 1.2M | 16.1G | 33.6 |
+| PointLSTM-late | 2.2M | 30.6G | 43.1 |
+
+Thus the visible surface delivers reliable information for recognition, which can be captured by point-cloud-based methods. Using such consistent inputs is more robust than relying on estimated skeleton sequences, which are sensitive to occlusions. Moreover, these results demonstrate that point-cloud-based methods can achieve excellent performance even when accurate hand poses are hard to obtain in real-world cases.
+
+Model size and inference time. Table 4 gives the model parameters and inference time of the baseline and the proposed PointLSTM ( $k = 16$ ). Replacing a PointNet++ layer with a PointLSTM layer has little effect on the number of parameters, as most parameters are stored in the last two stages with higher dimensions. The inference time for 32 frames ( $\approx$ 2.67 seconds of video sampled at 12 FPS, 128 points per frame) is also provided. For PointLSTM-middle, it only needs about 12.6 ms (12 × 33.6 / 32) to finish computing 12 frames (about 1 second of sampling) on a single Tesla P100 GPU, and using point-shared states can further increase the inference speed, which shows great potential for real-time applications.
+
+# 4.3. Action Recognition
+
+Different from gestures, which are designed for non-verbal communication, actions are a form of human behavior performed to accomplish a purpose and exhibit larger intra-class variation. To evaluate the generalization ability of the proposed method, we apply our approach to the MSR Action3D dataset for dynamic action recognition.
+
+There are different experimental protocols [17, 36, 45] associated with this dataset, and we follow the protocol in [17] that divides the dataset equally into training and testing sets by subjects and runs the classification algorithm ten times with different splits. However, we found that there exists significant variance across dataset splits. For example, we trained our model on odd-numbered subjects and tested on even-numbered subjects, and vice versa. The performance of the former is $96.73\%$ , while the latter is $91.44\%$ . Therefore, we randomly split the dataset by subjects five times and exchange the training and test sets for another five times to make the comparison more reliable. Average accuracies and standard deviations are reported.
+
+Table 5. Performance comparison (%) on MSR Action3D dataset.
+
+| Method | Modality | Accuracy |
+| --- | --- | --- |
+| Actionlets [36] | skeleton | 88.20 |
+| H-HMM [29] | skeleton | 89.01 |
+| Lie [33] | skeleton | 89.48 |
+| Traj.Shape [6] | skeleton | 92.10 |
+| Gram Handkel [45] | skeleton | 94.74 |
+| HON4D [28] | depth video | 88.89 |
+| MeteorNet [20] | point clouds | 88.50 |
+| Baseline | point clouds | 87.62±1.48 |
+| PointLSTM-early | point clouds | 91.78±3.10 |
+| PointLSTM-PSS | point clouds | 90.79±3.14 |
+| PointLSTM-middle | point clouds | 91.08±3.43 |
+| PointLSTM-late | point clouds | 92.29±3.09 |
+
+Comparison with the state-of-the-art. Table 5 shows the comparison results with recent approaches. The proposed method outperforms the baseline by a considerable margin and achieves the best recognition accuracy for point cloud input. However, recent skeleton-based methods produce better results than the proposed method. We believe this is primarily because body actions have fewer degrees of freedom, and skeleton sequences have a much clearer physical meaning and are more robust to intra-class diversity than point cloud sequences. Point clouds contain much more action-irrelevant information (see Fig. 4), which makes the network prone to overfitting. More efficient point cloud processing methods need to be studied to solve this problem. Overall, the results on MSR Action3D verify that our approach is generic for various visual sequence learning problems and can potentially be combined with other modules and modalities in future work.
+
+# 5. Conclusion
+
+In this paper, we propose a PointLSTM layer that can directly capture long-term relationships from dynamic point cloud sequences and is robust to occlusion and variations in motion velocity. Extensive experiments show that our method is generally applicable to different point-cloud-based sequence learning tasks and demonstrate that the weight-shared LSTM layer is the primary component of the proposed method. Besides, we provide insights into understanding the performance differences between point-cloud-based methods and skeleton-based methods. In future work, we intend to explore and expand our method on more irregular sequence learning tasks, such as activity prediction and multi-frame scene flow estimation.
+
+Acknowledgements. This work was partially supported by the Natural Science Foundation of China under contract Nos. 61702486, U19B2036, and 61532018.
+
+# References
+
+[1] Mahdi Abavisani, Hamid Reza Vaezi Joze, and Vishal M Patel. Improving the performance of unimodal dynamic hand-gesture recognition with multimodal training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1165-1174, 2019.
+[2] Yuxiao Chen, Long Zhao, Xi Peng, Jianbo Yuan, and Dimitris N Metaxas. Construct dynamic graphs for hand gesture recognition via spatial-temporal attention. In British Machine Vision Conference, 2019.
+[3] Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1724-1734, 2014.
+[4] Quentin De Smedt, Hazem Wannous, and Jean-Philippe Vandeborre. Skeleton-based dynamic hand gesture recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 1–9, 2016.
+[5] Quentin De Smedt, Hazem Wannous, Jean-Philippe Vandeborre, Joris Guerry, Bertrand Le Saux, and David Filliat. Shrec'17 track: 3d hand gesture recognition using a depth and skeletal dataset. In In Eurographics Workshop on 3D Object Retrieval, 2017.
+[6] Maxime Devanne, Hazem Wannous, Stefano Berretti, Pietro Pala, Mohamed Daoudi, and Alberto Del Bimbo. 3-d human action recognition by shape analysis of motion trajectories on riemannian manifold. IEEE transactions on cybernetics, 45(7):1340-1352, 2014.
+[7] Jeffrey Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2625-2634, 2015.
+[8] Hehe Fan and Yi Yang. Point recurrent neural network for moving point cloud processing. arXiv preprint arXiv:1910.08287, 2019.
+[9] William T Freeman and Michal Roth. Orientation histograms for hand gesture recognition. In International workshop on automatic face and gesture recognition, volume 12, pages 296-301, 1995.
+[10] Guillermo Garcia-Hernando, Shanxin Yuan, Seungryul Baek, and Tae-Kyun Kim. First-person hand action benchmark with rgb-d videos and 3d hand pose annotations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 409-419, 2018.
+[11] Liuhao Ge, Yujun Cai, Junwu Weng, and Junsong Yuan. Hand pointnet: 3d hand pose estimation using point sets. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8417-8426, 2018.
+[12] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735-1780, 1997.
+[13] Jingxuan Hou, Guijin Wang, Xinghao Chen, Jing-Hao Xue, Rui Zhu, and Huazhong Yang. Spatial-temporal attention res-TCN for skeleton-based dynamic hand gesture recognition. In Proceedings of the European Conference on Computer Vision, pages 273-286, 2018.
+[14] Mariano Jaimez, Mohamed Souiai, Jorg Stuckler, Javier Gonzalez-Jimenez, and Daniel Cremers. Motion cooperation: Smooth piece-wise rigid scene flow from rgb-d images. In 2015 International Conference on 3D Vision, pages 64-72. IEEE, 2015.
+[15] Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3128-3137, 2015.
+[16] Alina Kuznetsova, Laura Leal-Taixe, and Bodo Rosenhahn. Real-time sign language recognition using a consumer depth camera. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 83-90, 2013.
+[17] Wanqing Li, Zhengyou Zhang, and Zicheng Liu. Action recognition based on a bag of 3d points. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition-Workshops, pages 9–14. IEEE, 2010.
+[18] Chi Lin, Jun Wan, Yanyan Liang, and Stan Z Li. Large-scale isolated gesture recognition using a refined fused model based on masked res-c3d network and skeleton lstm. In Proceedings of the IEEE International Conference on Automatic Face & Gesture Recognition, pages 52-58. IEEE, 2018.
+[19] Xingyu Liu, Charles R Qi, and Leonidas J Guibas. Flownet3d: Learning scene flow in 3d point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 529-537, 2019.
+[20] Xingyu Liu, Mengyuan Yan, and Jeannette Bohg. Meteornet: Deep learning on dynamic 3d point cloud sequences. In Proceedings of the IEEE International Conference on Computer Vision, pages 9246-9255, 2019.
+[21] YP Mack and Murray Rosenblatt. Multivariate k-nearest neighbor density estimates. Journal of Multivariate Analysis, 9(1):1-15, 1979.
+[22] Qiguang Miao, Yunan Li, Wanli Ouyang, Zhenxin Ma, Xin Xu, Weikang Shi, and Xiaochun Cao. Multimodal gesture recognition based on the resc3d network. In Proceedings of the IEEE International Conference on Computer Vision, pages 3047-3055, 2017.
+[23] Yuecong Min, Xiujuan Chai, Lei Zhao, and Xin Chen. Flickernet: Adaptive 3d gesture recognition from sparse point clouds. In British Machine Vision Conference, 2019.
+[24] Pavlo Molchanov, Xiaodong Yang, Shalini Gupta, Kihwan Kim, Stephen Tyree, and Jan Kautz. Online detection and classification of dynamic hand gestures with recurrent 3d convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4207-4215, 2016.
+[25] Pradyumna Narayana, Ross Beveridge, and Bruce A Draper. Gesture recognition: Focus on the hands. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5235-5244, 2018.
+[26] Xuan Son Nguyen, Luc Brun, Olivier Lézoray, and Sébastien Bougleux. A neural network based on spd manifold learning for skeleton-based hand gesture recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 12036-12045, 2019.
+[27] Juan C Nunez, Raul Cabido, Juan J Pantrigo, Antonio S Montemayor, and Jose F Velez. Convolutional neural networks and long short-term memory for skeleton-based human activity and hand gesture recognition. Pattern Recognition, 76:80-94, 2018.
+[28] Omar Oreifej and Zicheng Liu. Hon4d: Histogram of oriented 4d normals for activity recognition from depth sequences. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 716-723, 2013.
+[29] Liliana Lo Presti, Marco La Cascia, Stan Sclaroff, and Octavia Camps. Gesture modeling by Hanklet-based hidden markov model. In Asian Conference on Computer Vision, pages 529-546. Springer, 2014.
+[30] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 652-660, 2017.
+[31] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In Advances in Neural Information Processing Systems, pages 5099-5108, 2017.
+[32] Chenyang Si, Wentao Chen, Wei Wang, Liang Wang, and Tieniu Tan. An attention enhanced graph convolutional LSTM network for skeleton-based action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1227-1236, 2019.
+[33] Raviteja Vemulapalli, Felipe Arrate, and Rama Chellappa. Human action recognition by representing 3d skeletons as points in a lie group. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 588-595, 2014.
+[34] Jun Wan, Yibing Zhao, Shuai Zhou, Isabelle Guyon, Sergio Escalera, and Stan Z Li. Chalearn looking at peoplergbd isolated and continuous datasets for gesture recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 56-64, 2016.
+[35] Huogen Wang, Pichao Wang, Zhanjie Song, and Wanqing Li. Large-scale multimodal gesture recognition using heterogeneous networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 3129-3137, 2017.
+[36] Jiang Wang, Zicheng Liu, Ying Wu, and Junsong Yuan. Mining actionlet ensemble for action recognition with depth cameras. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1290-1297. IEEE, 2012.
+[37] Ying Wu and Thomas S Huang. Vision-based gesture recognition: A review. In International Gesture Workshop, pages 103-115. Springer, 1999.
+[38] SHI Xingjian, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In Advances in Neural Information Processing Systems, pages 802-810, 2015.
+[39] Sijie Yan, Yuanjun Xiong, and Dahua Lin. Spatial temporal graph convolutional networks for skeleton-based action recognition. In Thirty-Second AAAI Conference on Artificial Intelligence, pages 7444-7452, 2018.
+[40] Xiaodong Yang, Pavlo Molchanov, and Jan Kautz. Making convolutional networks recurrent for visual sequence learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6469-6478, 2018.
+[41] Xiaoqing Ye, Jiamao Li, Hexiao Huang, Liang Du, and Xiaolin Zhang. 3d recurrent neural networks with context fusion for point cloud semantic segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 403-417, 2018.
+[42] Shanxin Yuan, Guillermo Garcia-Hernando, Bjorn Stenger, Gyeongsik Moon, Ju Yong Chang, Kyoung Mu Lee, Pavlo Molchanov, Jan Kautz, Sina Honari, Liuhao Ge, et al. Depth-based 3d hand pose estimation: From current achievements to future goals. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2636-2645, 2018.
+[43] Chaoyun Zhang, Marco Fiore, Iain Murray, and Paul Pa-tras. Cloudlstm: A recurrent neural model for spatiotemporal point-cloud stream forecasting. arXiv preprint arXiv:1907.12410, 2019.
+[44] Liang Zhang, Guangming Zhu, Peiyi Shen, Juan Song, Syed Afaq Shah, and Mohammed Bennamoun. Learning spatiotemporal features using 3dcnn and convolutional LSTM for gesture recognition. In Proceedings of the IEEE International Conference on Computer Vision, pages 3120-3128, 2017.
+[45] Xikang Zhang, Yin Wang, Mengran Gou, Mario Sznaier, and Octavia Camps. Efficient temporal sequence comparison and classification using gram matrix embeddings on a riemannian manifold. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4498-4507, 2016.
\ No newline at end of file
diff --git a/anefficientpointlstmforpointcloudsbasedgesturerecognition/images.zip b/anefficientpointlstmforpointcloudsbasedgesturerecognition/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..438e6230b844f30ac4a31359ae27f9da0e9cba78
--- /dev/null
+++ b/anefficientpointlstmforpointcloudsbasedgesturerecognition/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:689e096a00847ab9808a9288d21435dfc32cc3ac69a511abb37b2b43107266e2
+size 578396
diff --git a/anefficientpointlstmforpointcloudsbasedgesturerecognition/layout.json b/anefficientpointlstmforpointcloudsbasedgesturerecognition/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..791d655d662273a7e850bc6514100a68cc239b73
--- /dev/null
+++ b/anefficientpointlstmforpointcloudsbasedgesturerecognition/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bddc35f74d2d203dbcb9120862b3997a90eed265a9f49515794fff7d362f559b
+size 400673
diff --git a/anendtoendedgeaggregationnetworkformovingobjectsegmentation/7dc15a63-efb5-4fcc-b911-71e23e7ebc01_content_list.json b/anendtoendedgeaggregationnetworkformovingobjectsegmentation/7dc15a63-efb5-4fcc-b911-71e23e7ebc01_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..17b3ca687846bf065bbb73d2e26a25719b191ef2
--- /dev/null
+++ b/anendtoendedgeaggregationnetworkformovingobjectsegmentation/7dc15a63-efb5-4fcc-b911-71e23e7ebc01_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5c6f700cf73dd100317b0ff43ab37f5e9e8a2a8f13acfb5a636876b4616a34d4
+size 78168
diff --git a/anendtoendedgeaggregationnetworkformovingobjectsegmentation/7dc15a63-efb5-4fcc-b911-71e23e7ebc01_model.json b/anendtoendedgeaggregationnetworkformovingobjectsegmentation/7dc15a63-efb5-4fcc-b911-71e23e7ebc01_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..b1c399a27e2f4e7edf30458a3f0c7ee9fe87c492
--- /dev/null
+++ b/anendtoendedgeaggregationnetworkformovingobjectsegmentation/7dc15a63-efb5-4fcc-b911-71e23e7ebc01_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e5fa7c6ae1fea09acd2e0361338921f132db149d5ead591bc038e93bf3de0113
+size 95146
diff --git a/anendtoendedgeaggregationnetworkformovingobjectsegmentation/7dc15a63-efb5-4fcc-b911-71e23e7ebc01_origin.pdf b/anendtoendedgeaggregationnetworkformovingobjectsegmentation/7dc15a63-efb5-4fcc-b911-71e23e7ebc01_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a4c7aa4aa410c167a8dffcdb6903a89e8e53f29d
--- /dev/null
+++ b/anendtoendedgeaggregationnetworkformovingobjectsegmentation/7dc15a63-efb5-4fcc-b911-71e23e7ebc01_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bc61c4021421d67f84a29531dbff15c642ab4e97274d6191ecf4e3b89364a34e
+size 864313
diff --git a/anendtoendedgeaggregationnetworkformovingobjectsegmentation/full.md b/anendtoendedgeaggregationnetworkformovingobjectsegmentation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..78668c492c90dd53e315d6874e8dc5f7e366dc5a
--- /dev/null
+++ b/anendtoendedgeaggregationnetworkformovingobjectsegmentation/full.md
@@ -0,0 +1,326 @@
+# An End-to-End Edge Aggregation Network for Moving Object Segmentation
+
+Prashant W. Patil, Kuldeep M. Biradar, Akshay Dudhane, and Subrahmanyam Murala
+CVPR Lab, Indian Institute of Technology Ropar, INDIA
+
+2017eez0006@iitrpr.ac.in
+
+# Abstract
+
+Moving object segmentation in videos (MOS) is a highly demanding task for security-based applications like automated outdoor video surveillance. Most of the existing techniques proposed for MOS are highly dependent on fine-tuning a model on the first frame(s) of the test sequence or on a complicated training procedure, which limits the practical serviceability of the algorithm. In this paper, an inherent correlation learning-based edge extraction mechanism (EEM) and a dense residual block (DRB) are proposed for discriminative foreground representation. The multi-scale EEM module provides efficient foreground edge-related information (with the help of the encoder) to the decoder through a skip connection at the subsequent scale. Further, the responses of the optical flow encoder stream and the last EEM module are embedded in the bridge network. The bridge network comprises multi-scale residual blocks with dense connections to learn effective and efficient foreground-relevant features. Finally, to generate accurate and consistent foreground object maps, a decoder block is proposed with skip connections from the respective multi-scale EEM module feature maps and the down-sampled response of the previous frame output at the subsequent scale. Notably, the proposed network does not require any pre-trained models or fine-tuning of the parameters with the initial frame(s) of the test video. The performance of the proposed network is evaluated with different configurations like disjoint, cross-data, and global training-testing techniques. An ablation study is conducted to analyse each module of the proposed network. To demonstrate the effectiveness of the proposed framework, a comprehensive analysis on four benchmark video datasets is conducted. Experimental results show that the proposed approach outperforms the state-of-the-art methods for MOS.
+
+# 1. Introduction
+
+Moving object segmentation (MOS) for video captured under uncontrolled weather, different illumination conditions, or dynamic backgrounds is a challenging task for many computer vision applications like automated video surveillance [30], traffic monitoring [4], anomaly detection [27], etc.
+
+Figure 1. Sample results of the proposed framework (rows: input frame, estimated output, ground-truth) on (a) weather degraded video, (b) traffic video with multiple objects and (c) crowded video with a single object.
+
+It aims to automatically generate precise and consistent pixel masks for foreground object(s). The accuracy achieved for indoor videos is higher compared to outdoor videos, because outdoor videos suffer from several factors like poor visibility, inclement weather, low contrast, local motion, etc. Another important consideration for automated video applications is that more than $70\%$ of pixel information is redundant and irrelevant for high-level processing tasks [3]. This redundant information degrades the overall performance of automated applications like video surveillance, traffic monitoring, etc. Learning-based approaches have given significant performance improvements for many computer vision applications [35], [34], [14], [23], [37], [15], [26], [2], [4], [1], [40], [24]. Many approaches [26], [40], [43], [4], [1] are proposed with fine-tuning of a pre-trained model using the first frame(s) of the test sequence. Additionally, several techniques [24], [37] achieved significant performance at the cost of high system complexity. Even though these methods delivered impressive results, their practical serviceability is limited.
+
+Thus, MOS is a challenging task from several aspects in day-to-day life.
+
+The main motivation of the proposed framework for MOS is to design a model which does not rely on fine-tuning a pre-trained model on the first frame(s) of the test sequence. Also, the system complexity is considered for more practical serviceability, i.e. the system should be simple, fast, end-to-end, and robust. To achieve this goal, in this work, a multi-frame multi-scale encoder-decoder adversarial learning network with an edge extraction mechanism and a dense residual block is proposed for MOS. A crucial step in an encoder-decoder network is how to connect the pixel-level multi-scale encoder features in a meaningful manner to the respective scale of the decoder. Also, while designing the network, the choice of filter size plays an important role in learning better features for a specific task. To this end, an inherent correlation-based edge extraction mechanism is proposed. Additionally, the predicted output of the previous frame is used at the subsequent scale to provide consistent matching between the current and previous frames at the decoder, for learning a discriminative foreground representation. Sample results on weather-degraded, multi-object traffic, and single-object crowd videos are shown in Figure 1.
+
+# 2. Related work
+
+Existing MOS algorithms are broadly classified as unsupervised, semi-supervised, on-line, and propagation-based methods. A brief overview of existing approaches for MOS is given below.
+
+Unsupervised video object segmentation approaches [9], [45] segment foreground and background automatically over an unconstrained video without any user annotation. Brent et al. [9] proposed a motion and visual saliency-based approach for MOS. A forward propagation-based approach is proposed in [45] to estimate object proposals. Wang et al. [35] proposed an unsupervised MOS approach with dynamic visual attention prediction and attention-guided object segmentation in the spatio-temporal and spatial domains, respectively.
+
+Semi-supervised video object segmentation relies on preliminarily provided ground-truth masks [34], [24], [15], [44], [20], [28], [7]. Paul et al. [34] proposed semantic pixel-wise feature concatenation with global and local matching techniques for moving object detection. A probabilistic generative approach is proposed in [14] for the prediction of the target and background appearance, where generative appearance, backbone feature extractor, and prediction modules are used for efficient feature extraction. The primary focus of existing state-of-the-art learning-based approaches is to learn appearance and motion-based features for frame segmentation. Along with these features, Lu et al. [23] proposed a co-attention mechanism to improve the discriminative foreground representations.
+
+Khoreva et al. [15] proposed a data augmentation technique, i.e. lucid data dreaming, for semi-supervised video object segmentation (VOS). A two-stream network with a memory module is proposed in [33] to obtain appearance and motion-based features. Some researchers used tracking-based methods to detect the region of interest for VOS [7]. Luiten et al. [24] proposed an approach with semantic proposal generation, refinement, and merging techniques for MOS. The results delivered in [24] are impressive, but the complexity of the system is high, as four different networks are used together with fine-tuning.
+
+On-line learning based methods [11], [40], [26], [6], [43] are semi-supervised methods which mainly rely on fine-tuning pre-trained models on the first frame of the test sequence. A motion-guided cascaded refinement network [11] is proposed for MOS under the assumption that the foreground motion is different from the background motion. Maninis et al. [26] proposed an orthogonal approach without temporal information for VOS, where the features learned on ImageNet are used to transfer generic semantic information for foreground-background segmentation (FBS). The spatial and temporal dependencies are encoded in [6] using a trained CNN model and optical flow. Recently, generative adversarial network (GAN) based approaches have shown significant improvement in various computer vision applications like image de-hazing [8], FBS [29], underwater MOS [31], etc. To capture appearance and motion cues, a temporal coherence branch pre-trained in an adversarial fashion is utilized in [40], and based on both learned cues, a spatial segmentation branch is proposed for accurate segmentation of objects. Akilan et al. proposed a 3D CNN based approach with 3D transpose convolution and residual connections [2], an encoder-decoder CNN technique with the help of a multi-view receptive field [4], and a slow encoder-decoder with stripped convolution and temporal median filtering-based background generation [1]. Here, the authors trained their models [2], [4], [1] on a baseline video and fine-tuned them on frames of the target video for better generalization and accurate foreground detection. Training on a baseline video, fine-tuning on a larger number of frames, and testing on the remaining video frames of the target video lead to limited practical applicability of the algorithm.
+
+Propagation-based approaches [38], [37], [41], [39] make use of the output of previous frame(s) for efficient and effective MOS. Along with visual and spatial guidance, Linjie et al. [41] introduced a modulator to manage the learning of intermediate layers of the segmentation network. Seoung et al. [38] proposed identical encoder networks to process the key frame and reference frame interdependently; finally, a refinement module with residual learning is used for fast MOS. Similarly, Ziqin et al. [37] proposed a ranking attention technique to integrate the matching- and propagation-based encoder-decoder network for VOS.
+
+In [39] and [44], along with input frames, optical flow [12] is used as an input to guide the propagation process for foreground motion clustering.
+
+The proposed approach overcomes the shortcomings of [2], [4], [1] by using less data for training and of [11], [40], [26] by requiring no fine-tuning on frame(s) from the target video. Additionally, optical flow computed using [22] and the output of the previous frame at the respective scale are used to guide the propagation process in the proposed approach. The proposed work has the following key contributions:
+
+1. An end-to-end multi-frame multi-scale encoder-decoder adversarial learning network is proposed for moving object segmentation.
+2. A novel edge extraction mechanism (EEM) is proposed to integrate the multi-frame pixel-level multi-scale encoder features with the respective decoder features through skip connections.
+3. A bridge network with a dense residual block is proposed to embed the motion features extracted from the optical flow encoder stream together with the feature maps from the last EEM module.
+4. The effectiveness of the proposed approach for MOS is examined on four benchmark video databases with disjoint, global, and cross-data training-testing techniques and compared with state-of-the-art methods.
+
+# 3. Proposed system framework
+
+Various researchers have taken advantage of pre-trained convolutional neural network (CNN) models [7], [6], [38], [44], [20], [28] for MOS. Also, some approaches fine-tune a pre-trained network on the initial frame(s) of the test video for MOS [11], [40], [26], [43], [2], [4], [1]. Additionally, some methods [24], [37] achieved state-of-the-art performance at the cost of high computational complexity. All the above factors limit the practical usability of MOS. This motivated us to design an end-to-end network for MOS which does not rely on fine-tuning and leads towards more practical serviceability. There are two major challenges in the MOS task. First, the separation of foreground objects from the background. Based on the hypothesis that foreground and background motion differ [37], we propose a multi-frame multi-scale encoder-decoder network for MOS. The proposed network takes video frames and optical flow as inputs to learn the inherent correlation between the multi-scale encoder features of three successive frames. As the multi-frame encoder gives foreground-background probability maps, the multi-frame multi-scale encoder features must be learned and propagated to the decoder network in an effective and meaningful manner. To do this, a multi-frame multi-scale edge extraction mechanism with correlation learning is proposed in this work.
+
+Also, the encoded foreground edge-related features from the last EEM module and the encoded feature maps from the optical flow encoder stream are fused using a bridge network to learn robust foreground-relevant features. Second, consistent segmentation of foreground objects across video frames. Based on the assumption that the foreground object(s) of the previous frame do not deviate much in the current frame, we make use of the estimated previous frame output at the respective scale to guide the decoder network towards a discriminative foreground feature representation. A detailed visualization of the proposed network is given in Figure 2.
+
+# 3.1. Multi-frame multi-scale encoder
+
+The proposed approach takes RGB video frames $(I_{t} \in \mathbb{R}^{3 \times M \times N \times 3})$ and extracted optical flow $(O_{t} \in \mathbb{R}^{M \times N \times 3})$ [22] as input. Here, multi-frame based encoders are used to obtain the multi-scale edge information related to the foreground, i.e. three frames are fed to three different encoder streams. Each block of an encoder stream comprises two convolution filters with kernel sizes of $3 \times 3$ and $7 \times 7$ , each followed by a leaky rectified linear unit (ReLU), to extract pixel-level multi-scale features. Additionally, the estimated optical flow [39] between the pair of frames $(t - 1, t, t + 1)$ is given to a fourth encoder stream to learn motion features. For visualization, the optical flow is considered in the HSV representation [25], where the hue and saturation represent the direction of motion and its magnitude, respectively. In this work, only the magnitude is considered and replicated three times to obtain a three-channel image. As early or late fusion of the optical flow stream features with the appearance stream features is not effective [39], a mid-level fusion of the motion features from the optical flow encoder stream with the last EEM module features is adopted in the proposed approach. An encoder block is defined as $EN_{L,L \times f}$ ; $[L \in (1,4), f = 32]$ , where L and $(\mathrm{L} \times \mathrm{f})$ represent the encoder level and the number of filters in the encoder, respectively (for more details, please refer to Figure 2).
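+
+The paper does not spell out how the $3 \times 3$ and $7 \times 7$ convolutions are wired inside a block; since the EEM described in Sec. 3.2 subtracts features of both kernel sizes at the same level, the sketch below treats them as parallel branches. Channel counts and the Leaky ReLU slope are assumptions.
+
+```python
+import torch
+import torch.nn as nn
+
+class EncoderBlock(nn.Module):
+    """One level of an encoder stream: parallel 3x3 and 7x7 convolutions with Leaky ReLU."""
+    def __init__(self, in_ch, out_ch):
+        super().__init__()
+        self.conv3 = nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
+                                   nn.LeakyReLU(0.2, inplace=True))
+        self.conv7 = nn.Sequential(nn.Conv2d(in_ch, out_ch, 7, padding=3),
+                                   nn.LeakyReLU(0.2, inplace=True))
+
+    def forward(self, x):
+        # Returns both multi-scale responses W_{(3),(S)} and W_{(7),(S)} of this level.
+        return self.conv3(x), self.conv7(x)
+```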
+
+# 3.2. Edge extraction mechanism module
+
+As the encoder gives foreground-background probability maps [37], effective learning of the inherent correlation between the multi-frame, multi-scale encoder features is required. To do this, a learning-based edge extraction mechanism (EEM) module is proposed. The EEM module is applied at each scale of the encoder network to focus on foreground-relevant feature learning and to ignore background regions. Initially, pixel-wise subtraction is performed between one scale feature of one encoder and another scale feature of another encoder. All subtracted features are concatenated to obtain the overall response of that particular encoder level, as given in Eq. (1).
+
+$$
+C = \Psi \left\{ X_{S,k}, Y_{S,k}, Z_{S,k} \right\} \tag {1}
+$$
+
+
+Figure 2. Overview of the proposed framework for MOS. First, the multi-scale features related to the foreground objects are extracted from three consecutive frames with the help of the proposed edge extraction mechanism (EEM) module. Encoded feature maps from optical flow encoder stream and last EEM module are embedded to learn effective features related to the foreground. Finally, to segment current frame, the down-sampled output response of the previous frame and respective EEM module feature maps are combined in the decoder network.
+
+where $\Psi$ denotes the concatenation of the subtracted features:
+
+$$
+X_{S,k} = W_{(k),(S)}^{(i, j)} \ominus W_{(k + 4),(S + 1)}^{(i, j)}; \quad k = 3,\ S \in (1, 2)
+$$
+
+$$
+Y_{S,k} = W_{(k),(S)}^{(i, j)} \ominus W_{(k - 4),(S + 1)}^{(i, j)}; \quad k = 7,\ S \in (1, 2)
+$$
+
+$$
+Z_{S,k} = W_{(k),(S)}^{(i, j)} \ominus W_{(k),(S + 2)}^{(i, j)}; \quad k \in (3, 7),\ S = 1
+$$
+
+where $\ominus$ is element-wise subtraction and $W_{(k),(S)}^{(i,j)}$ denotes the features of stream $S$ at location $(i,j)$ extracted with a $k\times k$ kernel.
+
+An ablation study is conducted to demonstrate the impact of concatenation over the addition operation for multi-scale feature extraction (please refer to Table 4). A detailed visualisation of sample feature maps of the first EEM module is given in Figure 3. The response of each EEM module is preserved for segmentation and passed to the respective decoder level through skip connections for effective and meaningful foreground representations.
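+
+A hedged sketch of one EEM level, following Eq. (1) and the subtraction rules above, is shown below; the mapping of stream indices to the frames $t-1$, $t$, $t+1$ and the 0-based indexing in code are assumptions.
+
+```python
+import torch
+
+def eem(w):
+    """w[s][k]: features of encoder stream s (0, 1, 2 for frames t-1, t, t+1) at kernel size k."""
+    x1 = w[0][3] - w[1][7]   # X_{1,3}: 3x3 features of stream 1 minus 7x7 features of stream 2
+    x2 = w[1][3] - w[2][7]   # X_{2,3}
+    y1 = w[0][7] - w[1][3]   # Y_{1,7}
+    y2 = w[1][7] - w[2][3]   # Y_{2,7}
+    z3 = w[0][3] - w[2][3]   # Z_{1,3}: same kernel size, streams two frames apart
+    z7 = w[0][7] - w[2][7]   # Z_{1,7}
+    # Psi: channel-wise concatenation of all subtracted responses, Eq. (1).
+    return torch.cat([x1, x2, y1, y2, z3, z7], dim=1)
+```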
+
+# 3.3. Bridge network
+
+The bridge network is constructed to embed the motion features from the optical flow encoder stream with the last EEM module features of the encoder. The EEM module is denoted as $\{EEM_{L,L\times f};[L\in (1,4),f = 32]\}$ . The approaches used for automated video applications need to process a large amount of data for training, and the training of deeper networks suffers from the vanishing gradient problem [10].
+
+To overcome these limitations, multi-scale residual blocks (MSBs) with dense connections, together referred to as the dense residual block (DRB), are proposed to learn prominent foreground-related features. Specifically, we conduct an ablation study to analyse the importance of the DRB in the proposed network (please refer to Table 5). The dense connections are defined as,
+
+$$
+M S B _ {n} = \sum_ {i = 1} ^ {n - 1} M S B _ {i}; n > 1 \tag {2}
+$$
+
+where $MSB_{n}$ is the input to the $n^{th}$ MSB module, $MSB_{i}$ is the response of the $i^{th}$ MSB module, and $n \in (1,6)$ . Each MSB has parallel convolution filters with kernel sizes of $3 \times 3$ , $5 \times 5$ and $7 \times 7$ , each followed by ReLU. For effective learning, we integrate the multi-scale features with a concatenation operation followed by a separate convolution block. Finally, the response of the concatenated features is added to the block input through a residual connection to obtain robust features learned at different scales (please refer to Figure 2 for more details).
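+
+The sketch below writes out one multi-scale residual block (MSB) and the dense wiring of Eq. (2); channel widths and the 1x1 fusion convolution are assumptions.
+
+```python
+import torch
+import torch.nn as nn
+
+class MSB(nn.Module):
+    """Parallel 3x3, 5x5 and 7x7 convolutions, concatenation, fusion, and a residual add."""
+    def __init__(self, ch):
+        super().__init__()
+        self.branches = nn.ModuleList(
+            nn.Sequential(nn.Conv2d(ch, ch, k, padding=k // 2), nn.ReLU(inplace=True))
+            for k in (3, 5, 7))
+        self.fuse = nn.Conv2d(3 * ch, ch, 1)  # assumed 1x1 convolution after concatenation
+
+    def forward(self, x):
+        multi = torch.cat([branch(x) for branch in self.branches], dim=1)
+        return self.fuse(multi) + x           # residual connection
+
+class DRB(nn.Module):
+    """Dense residual block: the n-th MSB takes the sum of all previous MSB outputs, Eq. (2)."""
+    def __init__(self, ch, n_blocks=6):
+        super().__init__()
+        self.blocks = nn.ModuleList(MSB(ch) for _ in range(n_blocks))
+
+    def forward(self, x):
+        outputs = []
+        for block in self.blocks:
+            inp = x if not outputs else torch.stack(outputs).sum(dim=0)
+            outputs.append(block(inp))
+        return outputs[-1]
+```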
+
+# 3.4. Foreground prediction with propagation
+
+In [37], a ranking attention module is proposed to select the important features for similarity maps. Matching the foreground object features of the current frame with reference/first frame features [38] may fail in some practical scenarios like illumination changes, occlusion, motion blur, etc.
+
+
+Figure 3. Visualization of two sample feature maps of the first EEM module.
+
+Also, for automated video surveillance applications, the reference frame object(s) may completely vanish after a few frames, and new foreground object(s) may appear in the current frame. In contrast, matching between the current and previous frames is usually preferred to avoid false positive matches, because the motion between them is small. Hence, we make use of a simple mask propagation method, the same as [37], i.e. the predicted output of the previous frame is used to guide the subsequent decoder layer at the respective scale to improve the potential of the proposed network for consistent foreground segmentation. Along with the bridge network features and the previous frame output at the subsequent scale, the correlated features from the respective EEM module are given to the decoder network for the final foreground segmentation. The decoder block is defined as $\{DE_{L,L\times f};[L\in (4,1),f = 32]\}$ . Thus, the proposed generator is represented as: $EN_{L,L\times f}\rightarrow EEM_{L,L\times f};[L\in (1,4),f = 32],$ $DE_{L,L\times f};[L\in (4,1),f = 32]$
+
+Additionally, we are able to train the proposed network in an end-to-end manner for single-object, multi-object, and thermal-data-based segmentation with disjoint, global, and cross-data training-testing techniques.
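+
+A minimal sketch of one decoder level as described above: it fuses the up-sampled deeper features, the skip features from the respective EEM module, and the previous frame's predicted mask resized to the current scale. Layer widths and the use of bilinear resizing are assumptions.
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class DecoderBlock(nn.Module):
+    def __init__(self, in_ch, skip_ch, out_ch):
+        super().__init__()
+        self.conv = nn.Sequential(
+            nn.Conv2d(in_ch + skip_ch + 1, out_ch, 3, padding=1),
+            nn.LeakyReLU(0.2, inplace=True))
+
+    def forward(self, deep, eem_skip, prev_mask):
+        size = eem_skip.shape[-2:]
+        deep = F.interpolate(deep, size=size, mode='bilinear', align_corners=False)
+        mask = F.interpolate(prev_mask, size=size, mode='bilinear', align_corners=False)
+        # Concatenate deeper/bridge features, EEM skip features and the previous frame mask.
+        return self.conv(torch.cat([deep, eem_skip, mask], dim=1))
+```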
+
+# 4. Training procedure of the proposed method
+
+The proposed method makes use of an end-to-end adversarial training procedure and is deliberately straightforward, because MOS is a task similar to image-to-image translation [13], where the goal is to learn the mapping between the provided input frames and the desired response, i.e. the foreground object(s). One advantage of the proposed framework is that it does not require a pre-trained model or fine-tuning on the first frame of the test video.
+
+We have trained the proposed network adversarially in three different configurations: (1) training and testing videos are segregated within a database without any overlap [16] (disjoint training-testing), (2) training and testing video frames are segregated without any overlap [1] (global training-testing), and (3) training and testing datasets are totally different [30] (cross-data training-testing).
+
+Training details for each configuration are discussed in the next sub-sections. Note that the training procedure of the proposed network is much simpler than that of the existing methods [24], [26], [38], as we do not require pre-trained models or fine-tuning of the proposed network on the initial frame(s) of the test video.
+
+# 4.1. Disjoint training-testing (DTT)
+
+For disjoint training-testing (DTT), the DAVIS-2016 [32] and SegTrack-v2 [18] databases are used. The DAVIS-2016 database has 50 videos with different attributes such as fast motion, dynamic background, scale variation, background clutter, interacting objects, etc. From these, 30 videos (along with the respective ground truth) are selected for training. To cover more challenging practical scenarios such as slow motion, complex deformation, appearance change and background-foreground color similarity, the SegTrack-v2 database is included for DTT. Out of its 14 sequences, 8 videos are chosen randomly for training. Hence, a total of 38 $(30 + 8)$ videos are used for training, and the remaining $(20 + 6)$ videos are used for testing, similar to STCRF [16]. During training, we performed data augmentation, which includes horizontal flipping, similar to [21].
+
+# 4.2. Global training-testing (GTT)
+
+For global training-testing (GTT), the CDnet-2014 database [36] is considered, similar to [2] and [4]. The CDnet-2014 database covers a variety of practical scenarios such as bad weather, camera jitter, shadow and traffic videos. In [2], [4], [1], $70\%$ of the video frames are used for training the network and the remaining $30\%$ are used to examine the effectiveness of the network with video-wise fine-tuning. Ideally, a network should give good performance on less training data, and there is no rule of thumb to pick an optimal number of frames that would lead to the best performance. In the proposed method, $50\%$ of the frames from each video are pooled together for training, and the remaining frames are used for testing without video-wise fine-tuning. Specifically, we trained the proposed network on the combined $50\%$ of frames of each video, i.e. with no training on baseline videos and no fine-tuning on frames of the test sequence, similar to [1].
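+
+For clarity, a small sketch of the global training-testing split described above: $50\%$ of the frames of every video are pooled for training and the rest for testing, with no video-wise fine-tuning. Whether frames are picked randomly or contiguously is not specified in the text, so the random selection below is an assumption.
+
+```python
+import random
+
+def global_split(videos, train_ratio=0.5, seed=0):
+    """videos: dict mapping video name -> list of frame paths.
+    Returns pooled train/test frame lists (50/50 per video by default)."""
+    rng = random.Random(seed)
+    train, test = [], []
+    for name, frames in videos.items():
+        frames = list(frames)
+        rng.shuffle(frames)  # assumption: random frame selection per video
+        cut = int(len(frames) * train_ratio)
+        train.extend(frames[:cut])
+        test.extend(frames[cut:])
+    return train, test
+```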
+
+# 4.3. Cross-data training-testing (CTT)
+
+The CDnet-2014 database [36] and GTFD [17] are used for cross-data training and testing, respectively. From CDnet-2014, the thermal video category is used for training the proposed method; a total of 5690 video frames from this category are selected. To the best of our knowledge, this is the first approach which uses one database for training and a different database for testing. For this technique, the optical flow encoder stream is removed from the proposed network, i.e. only thermal frames are used for training and testing.
+
+The remaining settings for all training configurations of the proposed method are similar to [13]. In all training-testing techniques, the weight parameters of the proposed network are initialized randomly and learned iteratively using the stochastic gradient descent (SGD) algorithm with a learning rate of 0.0002. The network is trained on an NVIDIA DGX station with a 2.2 GHz Intel Xeon E5-2698 processor and an NVIDIA Tesla V100 16 GB GPU.
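+
+As a reference point, the optimization setup stated above can be written as the short sketch below; momentum and any other optimizer hyper-parameters are not given in the text and are assumptions.
+
+```python
+import torch
+
+def build_optimizers(generator, discriminator, lr=2e-4):
+    """SGD with the stated learning rate of 0.0002 for both networks;
+    momentum=0.9 is an assumption, weights are randomly initialized."""
+    opt_g = torch.optim.SGD(generator.parameters(), lr=lr, momentum=0.9)
+    opt_d = torch.optim.SGD(discriminator.parameters(), lr=lr, momentum=0.9)
+    return opt_g, opt_d
+```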
+
+# 5. Network losses
+
+In adversarial training, the objective function of the generator network $(G)$ with the discriminator $(D)$ is defined as,
+
+$$
+\begin{aligned} \mathbb{L}_{GAN}(G, D) = \; & \mathbb{E}_{I, S} [ \log D(I, S) ] \; + \tag{3} \\ & \mathbb{E}_{I, Z} [ \log (1 - D(I, G(I, Z))) ] \end{aligned}
+$$
+
+where $I$, $S$ and $Z$ are the input, the ground truth and a random noise vector, respectively. To minimize the loss of the generator network, structural similarity index (SSIM) and edge (Sobel operator) losses are also considered. Thus, the overall loss function is defined as,
+
+$$
+\mathbb{L}(G, D) = \arg \min_{G} \max_{D} \left( \mathbb{L}_{GAN} + \mathbb{L}_{SSIM} + \mathbb{L}_{Edge} \right) \tag{4}
+$$
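+
+A hedged PyTorch sketch of the generator objective in Eqs. (3)-(4) is given below: a conditional adversarial term plus SSIM and Sobel-edge terms. The SSIM implementation is assumed to be supplied externally, and the loss weights and exact edge-loss formulation are assumptions, as the text does not specify them.
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+bce = nn.BCELoss()  # adversarial term; assumes the discriminator outputs probabilities
+
+def sobel_edges(img):
+    """Per-channel Sobel gradient magnitude, used for the edge loss."""
+    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]], device=img.device)
+    ky = kx.t()
+    c = img.shape[1]
+    kx = kx.reshape(1, 1, 3, 3).repeat(c, 1, 1, 1)
+    ky = ky.reshape(1, 1, 3, 3).repeat(c, 1, 1, 1)
+    gx = F.conv2d(img, kx, padding=1, groups=c)
+    gy = F.conv2d(img, ky, padding=1, groups=c)
+    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)
+
+def generator_loss(disc_fake_score, fake_mask, gt_mask, ssim_fn):
+    """L_GAN + L_SSIM + L_Edge for the generator (unit weights assumed).
+    ssim_fn is an external SSIM function returning similarity in [0, 1]."""
+    adv = bce(disc_fake_score, torch.ones_like(disc_fake_score))
+    ssim = 1.0 - ssim_fn(fake_mask, gt_mask)
+    edge = F.l1_loss(sobel_edges(fake_mask), sobel_edges(gt_mask))
+    return adv + ssim + edge
+```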
+
+# 6. Experiments
+
+In this section, we evaluate the proposed network for MOS and multi-object segmentation on four benchmark databases, namely DAVIS-2016 [32], SegTrack-v2 [18], CDnet-2014 [36] and GTFD [17]. Quantitative results in terms of average F-measure and visual results are evaluated and compared against state-of-the-art methods for MOS. Further, several ablation experiments are conducted on DAVIS-2016 for a comprehensive understanding of the proposed method.
+
+# 6.1. Results analysis of DTT
+
+For the DTT model, the effectiveness is examined on the testing sets of the DAVIS-2016 and SegTrack-v2 databases in terms of average F-measure. We compare the results of the proposed method with 10 recently published methods, i.e. FEELVOS [34], AGAME [14], LUCID [15], CNIM [6], OSVOS [26], RANet [37], PReMVOS [24], DTNet [44], STMN [28], MGAVOS [20]. The quantitative results are given in Table 1 and Table 2 for the DAVIS-2016 and SegTrack-v2 databases, respectively. The visual results on DAVIS-2016 and SegTrack-v2 are compared with state-of-the-art methods in Figure 4 and Figure 5, respectively.
+
+Some recently published works [6], [26], and [37] achieved significant improvements in accuracy, but these models make use of pre-trained weights or require fine-tuning on the first frame(s) of the test video. A DeepLabv2 VGG16 model pre-trained on PASCAL VOC is used as the
+
+
+Figure 4. Visual results on the DAVIS-2016 database. (a) input frames, (b) to (e) results from RANet [37], OSVOS [26], PReMVOS [24] and the proposed method, respectively, (f) ground truth.
+
+
+Figure 5. Visual results of proposed method (PM) and existing [16] on SegTrack-v2 database.
+
+| Methods | PT | OF | Publication | F-measure |
+| --- | --- | --- | --- | --- |
+| FEELVOS [34] | ✓ | - | CVPR-19 | 0.822 |
+| AGAME [14] | - | - | CVPR-19 | 0.822 |
+| LUCID [15] | ✓ | - | IJCV-19 | 0.820 |
+| DTNet [44] | ✓ | - | ICCV-19 | 0.835 |
+| CNIM [6] | ✓ | ✓ | CVPR-18 | 0.850 |
+| OSVOS [26] | ✓ | ✓ | PAMI-19 | 0.875 |
+| RANet [37] | - | - | ICCV-19 | 0.876 |
+| PReMVOS [24] | ✓ | - | ACCV-18 | 0.886 |
+| STMN [28] | - | - | ICCV-19 | 0.899 |
+| MGAVOS [20] | ✓ | - | ICCV-19 | 0.902 |
+| PM | - | - | - | 0.915 |
+
+Table 1. Quantitative comparison of the proposed method (PM) with existing state-of-the-art methods on DAVIS-2016. We use "✓" to indicate a method with a pre-training (PT) model or on-line fine-tuning (OF).
+
+| Methods | Publications | F-measure |
+| --- | --- | --- |
+| DSL [19] | CVPR-16 | 0.734 |
+| STCRF [16] | TIP-18 | 0.899 |
+| UOVOS [45] | TIP-19 | 0.643 |
+| Proposed Method | - | 0.918 |
+
+Table 2. Results comparison of proposed method and existing methods on SegTrack-v2 database.
+
+initial weight parameters in [6], with VGG-Net as the backbone network. Similarly, a three-stage (base, parent, and test) network is proposed in [26]. Initially, the parent network is trained on the DAVIS training set with pre-trained
+
+ImageNet weights through the base network. Further, for VOS, the trained parent network is fine-tuned on one frame along with the ground truth of each test sequence. Similarly, in [37] the network is trained on MSRA10K, ECSSD, and HKU-IS for static image segmentation, and this trained model is then fine-tuned on the DAVIS-2016 database for MOS. Semantic proposal generation, refinement, and merging techniques for MOS are proposed in [24]. The results delivered in [24] are impressive, but the system complexity is high and a large computational time is required, as four different networks are used together with fine-tuning.
+
+On the other hand, the proposed method achieved state-of-the-art performance (please refer to Tables 1 and 2) without pre-trained models or fine-tuning on the first frame of the test video. Tables 1 and 2 and Figures 4 and 5 show that the proposed network outperforms the existing state-of-the-art methods for MOS on DAVIS-2016 and SegTrack-v2.
+
+# 6.2. Ablation study
+
+To examine the effect of an individual component of the proposed network, a comprehensive ablation study is conducted on the DAVIS-2016 database.
+
+The proposed network uses three consecutive RGB frames and optical flow as inputs, so the contribution of each input needs to be analyzed. To do this, the effectiveness is evaluated with combined and individual inputs. Table 3 gives a quantitative comparison in terms of average F-measure and mean absolute error (MAE). The combination of input frames with optical flow contributes more than the individual inputs.
+
+In the proposed approach, four input streams (three RGB frames and optical flow) are processed in parallel. Does the parallel processing of three RGB frames contribute to the proposed network? To examine this, results are obtained using three streams (two RGB frames and optical flow) and four streams. Also, the extracted features from each encoder level of each scale are subtracted and concatenated in the EEM module. How important is the feature concatenation against addition? To evaluate this, results are examined with the addition and concatenation operations in the EEM module. While designing the network, the filter size plays a key role for effective feature learning. Thus, accuracy is analysed by combining the $3 \times 3$ filters with $5 \times 5$ and $7 \times 7$ filters. Specifically, the combination of $3 \times 3$ and $5 \times 5$ filters in the EEM module with the addition operation is denoted as 3_5_ADD, and similarly for all other combinations; a rough sketch of the best-performing variant is given after this paragraph. The results of all combinations are illustrated in Table 4. From Table 4, it is concluded that the parallel processing of four streams with $3 \times 3$ and $7 \times 7$ filters and the concatenation operation, i.e. 3_7_CONCAT, in the EEM module outperforms the other combinations.
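+
+As a rough illustration of the 3_7_CONCAT variant described above, the sketch below subtracts features between the parallel streams at one scale, concatenates the differences, and applies parallel $3 \times 3$ and $7 \times 7$ convolutions whose responses are concatenated. The stream pairing, channel sizes and class name are our assumptions, not the exact EEM design.
+
+```python
+import torch
+import torch.nn as nn
+
+class EEMFusion(nn.Module):
+    """Sketch of the 3_7_CONCAT setting at a single scale."""
+    def __init__(self, channels, num_streams=4):
+        super().__init__()
+        num_diffs = num_streams - 1
+        self.conv3 = nn.Conv2d(num_diffs * channels, channels, kernel_size=3, padding=1)
+        self.conv7 = nn.Conv2d(num_diffs * channels, channels, kernel_size=7, padding=3)
+
+    def forward(self, streams):
+        # differences between consecutive streams highlight edge/motion cues
+        diffs = [streams[i] - streams[i + 1] for i in range(len(streams) - 1)]
+        x = torch.cat(diffs, dim=1)
+        return torch.cat([self.conv3(x), self.conv7(x)], dim=1)  # 3_7_CONCAT
+```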
+
+The motion features from the optical flow encoder
+
+| Input(s) | F-measure | MAE |
+| --- | --- | --- |
+| Only optical flow (OF) | 0.8648 | 0.0296 |
+| Only Input Frames (IFs) | 0.8246 | 0.0395 |
+| Combination of OF and IFs | 0.9149 | 0.0191 |
+
+Table 3. Ablation with different combinations of inputs to the network on DAVIS-2016.
+
+| Approach | 3 Stream F-measure | 3 Stream MAE | 4 Stream F-measure | 4 Stream MAE |
+| --- | --- | --- | --- | --- |
+| 3_5_ADD | 0.8545 | 0.0258 | 0.8635 | 0.0239 |
+| 3_5_CONCAT | 0.8601 | 0.0249 | 0.8733 | 0.0265 |
+| 3_7_ADD | 0.8793 | 0.0222 | 0.8937 | 0.0215 |
+| 3_7_CONCAT | 0.8908 | 0.0219 | 0.9149 | 0.0191 |
+
+Table 4. Fusion ablation of multi-scale features on DAVIS-2016.
+
+| Approach | F-measure | MAE |
+| --- | --- | --- |
+| without DRB | 0.8701 | 0.0229 |
+| with one DRB | 0.8917 | 0.0201 |
+| with two DRBs | 0.9149 | 0.0191 |
+| with three DRBs | 0.8667 | 0.0239 |
+
+Table 5. Results analysis with different numbers of DRBs in the bridge network on the DAVIS-2016 database.
+
+stream are combined with the appearance-based features of the last EEM module using the bridge network. How does the bridge network help the proposed approach learn effective foreground-relevant features? In the bridge network, DRBs are used for effective feature learning. Hence, we verified the performance of the proposed network without a DRB and with different numbers of DRBs. Quantitative results for different numbers of DRBs are given in Table 5. The proposed network with two DRBs shows improved performance compared to the other combinations.
+
+In summary, we verified how each component (parallel processing, multi-scale filters, DRBs) helps the proposed network learn effective and discriminative features for foreground object segmentation.
+
+# 6.3. Results analysis of GTT
+
+In this experiment, the accuracy of the proposed method is verified on the CDnet-2014 dataset using the globally trained network. The video frames have spatial resolutions varying from $320 \times 420$ to $720 \times 480$, video durations from 900 to 7000 frames, and different numbers of moving objects. The considered videos from different categories are baseline (highway, office, pedestrians, PETS2006), bad weather (blizzard, skating), camera jitter (boulevard, traffic) and shadow (backdoor, copyMachine, peopleInShade). Accuracy is measured in terms of average F-measure and compared with the state-of-the-art methods [2], [4], [1], [30], [5]. Quantitative and visual results are illustrated in Table 6 and Figure 6, respectively. Some of the
+
+
+Figure 6. Visual results on the CDnet-2014 database with sEnDec [1]. Panels: input frame, sEnDec output [1], PM output, ground truth.
+
+| Methods | Publications | F-measure |
+| --- | --- | --- |
+| MSFgNet [30] | TITS-18 | 0.915 |
+| DeepBs [5] | PRL-18 | 0.932 |
+| sEnDec [1] | TITS-19 | 0.961 |
+| 3DLSTM [2] | TITS-19 | 0.964 |
+| MRCNN [4] | TVT-19 | 0.941 |
+| Proposed Method | - | 0.969 |
+
+approaches [2], [4] and [1] achieved promising results on the CDnet-2014 database with baseline video training and fine-tuning on some frames of the target video. From Table 6 and Figure 6, it is clear that the proposed approach outperforms the existing state-of-the-art methods [2], [4], [1], [30] for MOS without fine-tuning on the target video frames (i.e. with global training only).
+
+# 6.4. Results analysis of CTT
+
+The GTFD database is one of the recently published video databases for the MOS task with both RGB and thermal data. To analyse the effectiveness of the proposed approach without optical flow, thermal-data-based training-testing is carried out. The GTFD database comprises 25 videos with high diversity, captured under different challenging situations such as low illumination. For result analysis, each video frame is annotated manually by one person to keep high consistency. The quantitative results of the proposed method are compared with existing state-of-the-art methods in terms of average F-measure in Table 7, and sample visual results are illustrated in Figure 7. The visual and quantitative results in Figure 7 and Table 7 are evidence that the proposed method outperforms the existing state-of-the-art methods on thermal data for MOS.
+
+Performance analysis: The proposed method achieved state-of-the-art accuracy compared to existing end-to-end models [34], [14], [15], [44], [6], [26], [37], [24], [20], [28]. Some existing methods achieved promising results at the cost of system complexity or require fine-tuning on the first frame of the test video [24], [11], [40], [26], [28]. Also, the accuracy of the proposed method on weather-degraded or multi-object traffic videos is better
+
+
+Figure 7. Visual results on GTFD database with CLoD [42].
+
+
+
+
+
+
+
+Table 6. Average F-measure comparison of proposed method with existing methods for MOS on CDnet-2014 database.
+
+| Methods | Publications | F-measure |
+| --- | --- | --- |
+| CLoD [42] | TCSVT-18 | 0.66 |
+| WELD [17] | TCSVT-17 | 0.67 |
+| F-WELD [17] | TCSVT-17 | 0.73 |
+| Proposed Method | - | 0.75 |
+
+Table 7. Quantitative comparison of the proposed method with existing state-of-the-art methods on the GTFD database.
+
+than [2], [4]. On a single GPU of an NVIDIA DGX station, the measured average time required to process one frame is 51 ms, including the optical flow computation. These observations indicate the practical serviceability of the proposed method. Finally, we observed two scenarios in which the performance of the proposed method is limited: (1) multi-object scenarios with a moving background and (2) complex motion with long-term occlusion. This is likely due to the fast-moving background and the long duration of the occlusion.
+
+# 7. Conclusion
+
+MOS is a highly demanding and challenging task for automated outdoor video surveillance. Many methods have been proposed with fruitful results for the MOS task, but some of them have limited practical usability because of complex training procedures or system complexity. To this end, we proposed an inherent correlation learning-based edge extraction mechanism (EEM) and dense residual blocks (DRBs) with parallel processing of RGB frames and optical flow for discriminative foreground representation. Additionally, to generate accurate and consistent foreground object masks, the decoder block uses skip connections from the corresponding multi-scale EEM features and a down-sampled version of the previous frame's output. To demonstrate the effectiveness of the proposed framework, experiments are conducted on four challenging benchmark datasets, i.e. DAVIS-2016, SegTrack-v2, CDnet-2014 and GTFD. The experimental analysis demonstrates that the proposed network achieves favorable performance compared to the state-of-the-art methods without any pre-trained model or fine-tuning on test video frame(s) for MOS.
+
+# Acknowledgement
+
+This work was supported by Science and Engineering Research Board (DST-SERB), India, under Grant ECR/2018/001538.
+
+# References
+
+[1] Thangarajah Akilan and Qingming Jonathan Wu. sendec: An improved image to image cnn for foreground localization. IEEE Transactions on Intelligent Transportation Systems, 2019.
+[2] Thangarajah Akilan, Qingming Jonathan Wu, Amin Safaei, Jie Huo, and Yimin Yang. A 3d cnn-lstm-based image-to-image foreground segmentation. IEEE Transactions on Intelligent Transportation Systems, 2019.
+[3] Thangarajah Akilan, QM Jonathan Wu, and Yimin Yang. Fusion-based foreground enhancement for background subtraction using multivariate multi-model gaussian distribution. Information Sciences, 430:414-431, 2018.
+[4] Thangarajah Akilan, QM Jonathan Wu, and Wandong Zhang. Video foreground extraction using multi-view receptive field and encoder-decoder dcnn for traffic and surveillance applications. IEEE Transactions on Vehicular Technology, 2019.
+[5] Mohammadreza Babaee, Duc Tung Dinh, and Gerhard Rigoll. A deep convolutional neural network for video sequence background subtraction. Pattern Recognition, 76:635-649, 2018.
+[6] Linchao Bao, Baoyuan Wu, and Wei Liu. Cnn in mrf: Video object segmentation via inference in a cnn-based higher-order spatio-temporal mrf. In Proceedings of the IEEE Conference on CVPR, pages 5977-5986, 2018.
+[7] Jingchun Cheng, Yi-Hsuan Tsai, Wei-Chih Hung, Shengjin Wang, and Ming-Hsuan Yang. Fast and accurate online video object segmentation via tracking parts. In Proceedings of the IEEE Conference on CVPR, pages 7415-7424, 2018.
+[8] Akshay Dudhane, Harshjeet Singh Aulakh, and Subrahmanyam Murala. Ri-gan: An end-to-end network for single image haze removal. In Proceedings of the IEEE Conference on CVPRW, pages 0-0, 2019.
+[9] Brent Griffin and Jason Corso. Tukey-inspired video object segmentation. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1723-1733. IEEE, 2019.
+[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016.
+[11] Ping Hu, Gang Wang, Xiangfei Kong, Jason Kuen, and Yap-Peng Tan. Motion-guided cascaded refinement network for video object segmentation. In Proceedings of the IEEE Conference on CVPR, pages 1400-1409, 2018.
+[12] Eddy Ilg, Nikolaus Mayer, Tonmoy Saikia, Margret Keuper, Alexey Dosovitskiy, and Thomas Brox. Flownet 2.0: Evolution of optical flow estimation with deep networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2462-2470, 2017.
+[13] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on CVPR, pages 1125-1134, 2017.
+[14] Joakim Johnander, Martin Danelljan, Emil Brissman, Fahad Shahbaz Khan, and Michael Felsberg. A generative appearance model for end-to-end video object segmentation. In Proceedings of the IEEE Conference on CVPR, pages 8953-8962, 2019.
+[15] Anna Khoreva, Rodrigo Benenson, Eddy Ilg, Thomas Brox, and Bernt Schiele. Lucid data dreaming for video object segmentation. International Journal of Computer Vision, 127(9):1175-1197, 2019.
+[16] Trung-Nghia Le and Akihiro Sugimoto. Video salient object detection using spatiotemporal deep features. IEEE Transactions on Image Processing, 27(10):5002-5015, 2018.
+[17] Chenglong Li, Xiao Wang, Lei Zhang, Jin Tang, Hejun Wu, and Liang Lin. Weighted low-rank decomposition for robust grayscale-thermal foreground detection. IEEE Transactions on CSVT, 27(4):725-738, 2017.
+[18] Fuxin Li, Taeyoung Kim, Ahmad Humayun, David Tsai, and James M Rehg. Video segmentation by tracking many figure-ground segments. In ICCV, pages 2192-2199, 2013.
+[19] Guanbin Li and Yizhou Yu. Deep contrast learning for salient object detection. In CVPR, pages 478-487, 2016.
+[20] Haofeng Li, Guanqi Chen, Guanbin Li, and Yizhou Yu. Motion guided attention for video salient object detection. In Proceedings of the IEEE International Conference on Computer Vision, pages 7274-7283, 2019.
+[21] Huaijia Lin, Xiaojuan Qi, and Jiaya Jia. Agss-vos: Attention guided single-shot video object segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pages 3949-3957, 2019.
+[22] Pengpeng Liu, Michael Lyu, Irwin King, and Jia Xu. Selfflow: Self-supervised learning of optical flow. In The IEEE Conference on CVPR, June 2019.
+[23] Xiankai Lu, Wenguan Wang, Chao Ma, Jianbing Shen, Ling Shao, and Fatih Porikli. See more, know more: Unsupervised video object segmentation with co-attention siamese networks. In Proceedings of the IEEE Conference on CVPR, pages 3623-3632, 2019.
+[24] Jonathon Luiten, Paul Voigtlaender, and Bastian Leibe. Premvos: Proposal-generation, refinement and merging for video object segmentation. In Asian Conference on Computer Vision, pages 565-580. Springer, 2018.
+[25] Léo Maczyta, Patrick Bouthemy, and O Le Meur. Unsupervised motion saliency map estimation based on optical flow inpainting. pages 4469-4473, 2019.
+[26] K-K Maninis, Sergi Caelles, Yuhua Chen, Jordi Pont-Tuset, Laura Leal-Taixe, Daniel Cremers, and Luc Van Gool. Video object segmentation without temporal information. IEEE transactions on PAMI, 41(6):1515-1530, 2018.
+[27] Kuldeep Marotirao Biradar, Ayushi Gupta, Murari Mandal, and Santosh Kumar Vipparthi. Challenges in time-stamp aware anomaly detection in traffic videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 13-20, 2019.
+[28] Seoung Wug Oh, Joon-Young Lee, Ning Xu, and Seon Joo Kim. Video object segmentation using space-time memory networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 9226–9235, 2019.
+[29] Prashant Patil and Subrahmanyam Murala. FgGAN: A cascaded unpaired learning for background estimation and foreground segmentation. In 2019 IEEE WACV, pages 1770-1778. IEEE, 2019.
+[30] Prashant W Patil and Subrahmanyam Murala. Msfgnet: A novel compact end-to-end deep network for moving object detection. IEEE Transactions on Intelligent Transportation Systems, 20(10):4066-4077, 2019.
+[31] Prashant W Patil, Omkar Thawakar, Akshay Dudhane, and Subrahmanyam Murala. Motion saliency based generative adversarial network for underwater moving object segmentation. In 2019 IEEE International Conference on Image Processing (ICIP), pages 1565-1569. IEEE, 2019.
+[32] Federico Perazzi, Jordi Pont-Tuset, Brian McWilliams, Luc Van Gool, Markus Gross, and Alexander Sorkine-Hornung. A benchmark dataset and evaluation methodology for video object segmentation. In Proceedings of the IEEE Conference on CVPR, pages 724-732, 2016.
+[33] Pavel Tokmakov, Cordelia Schmid, and Karteek Alahari. Learning to segment moving objects. International Journal of Computer Vision, 127(3):282-301, 2019.
+[34] Paul Voigtlaender, Yuning Chai, Florian Schroff, Hartwig Adam, Bastian Leibe, and Liang-Chieh Chen. Feelvos: Fast end-to-end embedding learning for video object segmentation. In Proceedings of the IEEE Conference on CVPR, pages 9481-9490, 2019.
+[35] Wenguan Wang, Hongmei Song, Shuyang Zhao, Jianbing Shen, Sanyuan Zhao, Steven CH Hoi, and Haibin Ling. Learning unsupervised video object segmentation through visual attention. In Proceedings of the IEEE Conference on CVPR, pages 3064-3074, 2019.
+[36] Yi Wang, Pierre-Marc Jodoin, Fatih Porikli, Janusz Konrad, Yannick Benezeth, and Prakash Ishwar. Cdnet 2014: an expanded change detection benchmark dataset. In Proceedings of the IEEE conference on CVPRW, pages 387-394, 2014.
+[37] Ziqin Wang, Jun Xu, Li Liu, Fan Zhu, and Ling Shao. Ranet: Ranking attention network for fast video object segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pages 3978-3987, 2019.
+[38] Seoung Wug Oh, Joon-Young Lee, Kalyan Sunkavalli, and Seon Joo Kim. Fast video object segmentation by reference-guided mask propagation. In Proceedings of the IEEE Conference on CVPR, pages 7376-7385, 2018.
+[39] Christopher Xie, Yu Xiang, Zaid Harchaoui, and Dieter Fox. Object discovery in videos as foreground motion clustering. In Proceedings of the IEEE Conference on CVPR, pages 9994-10003, 2019.
+[40] Kai Xu, Longyin Wen, Guorong Li, Liefeng Bo, and Qingming Huang. Spatiotemporal cnn for video object segmentation. In Proceedings of the IEEE Conference on CVPR, pages 1379-1388, 2019.
+[41] Linjie Yang, Yanran Wang, Xuehan Xiong, Jianchao Yang, and Aggelos K Katsaggelos. Efficient video object segmentation via network modulation. In Proceedings of the IEEE Conference on CVPR, pages 6499-6507, 2018.
+[42] Sen Yang, Bin Luo, Chenglong Li, Guizhao Wang, and Jin Tang. Fast grayscale-thermal foreground detection with collaborative low-rank decomposition. IEEE Transactions on CSVT, 28(10):2574–2585, 2018.
+
+[43] Xiaohui Zeng, Renjie Liao, Li Gu, Yuwen Xiong, Sanja Fidler, and Raquel Urtasun. Dmm-net: Differentiable mask-matching network for video object segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pages 3929-3938, 2019.
+[44] Lu Zhang, Zhe Lin, Jianming Zhang, Huchuan Lu, and You He. Fast video object segmentation via dynamic targeting network. In Proceedings of the IEEE International Conference on Computer Vision, pages 5582-5591, 2019.
+[45] Tao Zhuo, Zhiyong Cheng, Peng Zhang, Yongkang Wong, and Mohan Kankanhalli. Unsupervised online video object segmentation with motion property understanding. IEEE Transactions on Image Processing, 29:237-249, 2019.
\ No newline at end of file
diff --git a/anendtoendedgeaggregationnetworkformovingobjectsegmentation/images.zip b/anendtoendedgeaggregationnetworkformovingobjectsegmentation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..9d5695e7f972feeb61e0e66e219060e1673468d2
--- /dev/null
+++ b/anendtoendedgeaggregationnetworkformovingobjectsegmentation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:37098bb40a931e4fdac82deb61fc51b7cb850ff92f9b4da6bf3ceb229594a8f3
+size 528177
diff --git a/anendtoendedgeaggregationnetworkformovingobjectsegmentation/layout.json b/anendtoendedgeaggregationnetworkformovingobjectsegmentation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..71b8e2e410f6729d731d28dcc3f0a26ab2815a70
--- /dev/null
+++ b/anendtoendedgeaggregationnetworkformovingobjectsegmentation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:db00e64834bb543377503ba7309f39229c3a34cd9d4c53cfccccfb072f9ae322
+size 354117
diff --git a/animalwebalargescalehierarchicaldatasetofannotatedanimalfaces/2cd0caba-72d7-41b4-b352-c95abbce1558_content_list.json b/animalwebalargescalehierarchicaldatasetofannotatedanimalfaces/2cd0caba-72d7-41b4-b352-c95abbce1558_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..3e4fdbd4a127c83b2fff3bfeddb4495098e98281
--- /dev/null
+++ b/animalwebalargescalehierarchicaldatasetofannotatedanimalfaces/2cd0caba-72d7-41b4-b352-c95abbce1558_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b26c69571b83bbe78c41ee0b959b97541fd5c1116d4a04e2705a7523205f0e31
+size 74922
diff --git a/animalwebalargescalehierarchicaldatasetofannotatedanimalfaces/2cd0caba-72d7-41b4-b352-c95abbce1558_model.json b/animalwebalargescalehierarchicaldatasetofannotatedanimalfaces/2cd0caba-72d7-41b4-b352-c95abbce1558_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..8b95f91c13e3256baf0d6995c3b1b84dcd18db3c
--- /dev/null
+++ b/animalwebalargescalehierarchicaldatasetofannotatedanimalfaces/2cd0caba-72d7-41b4-b352-c95abbce1558_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:95912e957ec6e57e70609ebe33e72d6e5059dccee058ceb2506b978a605b971a
+size 91273
diff --git a/animalwebalargescalehierarchicaldatasetofannotatedanimalfaces/2cd0caba-72d7-41b4-b352-c95abbce1558_origin.pdf b/animalwebalargescalehierarchicaldatasetofannotatedanimalfaces/2cd0caba-72d7-41b4-b352-c95abbce1558_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..50a5dd9e6b56e05bf40208ac53cb60bd8df65822
--- /dev/null
+++ b/animalwebalargescalehierarchicaldatasetofannotatedanimalfaces/2cd0caba-72d7-41b4-b352-c95abbce1558_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2c2b8f673efd93e3199db3cf0b337837fd56cf6bed897f92ae2547d9cb1e897e
+size 2183478
diff --git a/animalwebalargescalehierarchicaldatasetofannotatedanimalfaces/full.md b/animalwebalargescalehierarchicaldatasetofannotatedanimalfaces/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f464ba6891ce018afa2601e6833cdc0191ba2407
--- /dev/null
+++ b/animalwebalargescalehierarchicaldatasetofannotatedanimalfaces/full.md
@@ -0,0 +1,291 @@
+# AnimalWeb: A Large-Scale Hierarchical Dataset of Annotated Animal Faces
+
+Muhammad Haris Khan1, John McDonagh2, Salman Khan1, Muhammad Shahabuddin4 Aditya Arora1, Fahad Shahbaz Khan1, Ling Shao1, Georgios Tzimiropoulos3
+
+$^{1}$ Inception Institute of Artificial Intelligence, UAE $^{2}$ University of Nottingham, UK
+$^{3}$ Queen Mary University of London, UK $^{4}$ Comsats University Islamabad, Pakistan
+
+{muhammad.haris, salman.khan, fahad.khan, ling.shao}@inceptioniai.org, shahab.pk05@gmail.com
+
+john.mcdonagh@nottingham.ac.uk, g.tzimiropoulos@qmul.ac.uk
+
+# Abstract
+
+Several studies show that animal needs are often expressed through their faces. Though remarkable progress has been made towards the automatic understanding of human faces, this has not been the case with animal faces. There exists significant room for algorithmic advances that could realize automatic systems for interpreting animal faces. Besides scientific value, resulting technology will foster better and cheaper animal care.
+
+We believe the underlying research progress is mainly obstructed by the lack of an adequately annotated dataset of animal faces, covering a wide spectrum of animal species. To this end, we introduce a large-scale, hierarchical annotated dataset of animal faces, featuring 22.4K faces from 350 diverse species and 21 animal orders across biological taxonomy. These faces are captured in-the-wild conditions and are consistently annotated with 9 landmarks on key facial features. The dataset is structured and scalable by design; its development underwent four systematic stages involving rigorous, overall effort of over 6K man-hours. We benchmark it for face alignment using the existing art under two new problem settings. Results showcase its challenging nature, unique attributes and present definite prospects for novel, adaptive, and generalized face-oriented CV algorithms. Further benchmarking the dataset across face detection and fine-grained recognition tasks demonstrates its multi-task applications and room for improvement. The dataset is available at: https://fdmapproject.wordpress.com/.
+
+# 1. Introduction
+
+Animals are a fundamental part of our world. Their needs are often expressed through faces which, if understood properly, can help us improve the well-being of animals in labs, farms and homes. Behavioural and neurophysiologi-
+
+
+
+
+Figure 1: AnimalWeb: We introduce a large-scale, hierarchical dataset of annotated animal faces featuring diverse species while covering a broader spectrum of animal biological taxonomy. It exhibits unique challenges e.g., large biodiversity in species, high variations in pose, scale, appearance, and backgrounds. Further, it offers unique attributes like class imbalance (CI), multi-task applications (MTA), and zero-shot face alignment (ZFA). Facial landmarks shown in blue and the images belong to classes with identical color in the hierarchy.
+
+
+
+
+
+
+
+
+
+cal studies have shown that mammalian brains can interpret social signals on fellow animal's faces and have developed specialized skills to process facial features. Therefore, the study of animal faces is of prime importance.
+
+Facial landmarks can help us better understand animals and foster their well-being by deciphering their facial expressions. Facial expressions reflect the internal emotions and psychological state of an animal. For example, animals with different anatomical structures (such as mice, horses, rabbits and sheep) show a similar grimace expression when in pain, i.e., tightened eyes and mouth, flattened cheeks and unusual ear postures. Understanding abnormal animal expressions and behaviours with visual imagery is a much cheaper and quicker alternative to clinical examinations and vital-sign monitoring. Encouraging indicators show that such technologies could indeed be possible, e.g., fearful cows widen their eyes and flatten their ears [19], horses close their eyes in depression [10], sheep position their ears backward when facing unpleasant situations [2], and rats' ears change color and shape when in joy [9]. Furthermore, large-scale annotated datasets of animal faces can help advance the understanding of animal psychology. For example, for non-primate animals, the scientific understanding of animal expressions is generally limited to the development of pain coding systems [13]. However, other expressions could be equally important to understand, e.g., sadness, boredom, hunger, anger and fear.
+
+We believe the research progress towards automatic understanding of animal facial behaviour is largely hindered by the lack of sufficiently annotated animal faces (Tab. 1) covering a wide spectrum of animal species. In comparison, significant progress has been made towards automatic understanding and interpretation of human faces [40, 5, 35, 34, 3, 21, 38], while animal face analysis is largely unexplored in the vision community [41, 25]. There is plenty of room for new algorithms and a pressing need to develop computational tools capable of understanding animal facial behavior. To this end, we introduce a large-scale, hierarchical dataset of annotated animal faces, termed AnimalWeb, featuring diverse species while covering a broader spectrum of the animal biological taxonomy. Every image has been labelled with the genus-species terminology. Fig. 1 provides a holistic overview of the dataset's key features.
+
+Contributions: To our knowledge, we build and annotate the largest annotated animal face dataset captured entirely under in-the-wild conditions. It encompasses 21 different orders and, within each order, explores various families and genomes. This diverse coverage results in 350 different animal species and a total count of $22.4\mathrm{K}$ animal faces. Each face is consistently annotated with 9 fiducial landmarks on key facial components (e.g., eyes and mouth). Finally, the dataset design and development followed four systematic stages involving an overall, rigorous effort of over 6K man-hours by experts and trained volunteers.
+
+We benchmark AnimalWeb for face alignment with state-of-the-art (SOTA) human face alignment algorithms [3, 39]. Results show that it is challenging for them, particularly due to biodiversity, species imbalance, and adverse in-the-wild conditions (e.g., extreme poses). We further validate this by reporting results from various analyses, including pose-wise results and results across face sizes. We show the capability of our dataset for testing under two novel problem settings: few-shot and zero-shot face alignment. Further, we demonstrate related applications possible with this dataset: animal face detection and fine-grained species recognition. Our results show that it 1) is a strong experimental base for algorithmic advances, and 2) will facilitate the development of novel, adaptive, and generalized face-oriented algorithms.
+
+# 2. Related Datasets
+
+This section briefly overviews existing human and animal face alignment benchmarks.
+
+Human Face Alignment. Since the seminal work of Active Appearance Models (AAMs) [6], various 2D datasets featuring human face landmark annotations have been proposed. Among these, the prominent ones are XM2VTS [22], BioID [16], FRGC [23], and Multi-PIE [12]. These datasets were collected under constrained environments with limited expression, frontal pose, and normal lighting variations. Following them, a few datasets were proposed with faces showing occlusions and other variations, such as COFW [4, 11] and AFW [44].
+
+300W [29] is a popular dataset amongst several others in human face alignment, and has been widely adopted both by scientific community and industry [34, 40, 26, 43]. It was developed for the 300W competition held in conjunction with ICCV 2013. 300W benchmark originated from LFPW [1], AFW [44], IBUG [29], and 300W private [28] datasets. In total, it provides 4,350 images with faces annotated using the 68 landmark frontal face markup scheme. To promote face tracking research, 300VW [30] is introduced featuring 114 videos. Such datasets paced research progress towards human face alignment in challenging conditions.
+
+Recently, efforts have been directed at covering a greater range of variations. For instance, Annotated Facial Landmarks in the Wild (AFLW) [18] proposed a collection of 25K annotated human faces with up to 21 landmarks. It, however, excluded locations of invisible landmarks. Zhu et al. [43] provided manual annotations for invisible landmarks, but there are no landmark annotations along the face contour. Along similar lines, Zhu et al. [44] developed a large-scale training dataset by synthesizing profile views from the 300W dataset using a 3D Morphable Model (3DMM). Though it could serve as a large training set, the synthesized profile faces have artifacts that can hurt fitting accuracy. Jeni et al. [15] introduced a dataset in an ECCV 2016 competition, comprising images photographed in controlled conditions or produced synthetically.
+
+Lately, the Menpo benchmark [8] was released in competitions held alongside ICCV 2017. It contains 2D and 3D landmark annotations and exhibits large variations in pose, expression, illumination and occlusion. Faces are also classified into semi-frontal and profile based on their orientation and annotated accordingly. Menpo-2D contains 7,576 and 7,281 annotated training and testing images, respectively.
+
+Animal Face Alignment. Despite the scientific value, pressing need and direct impact on animal healthcare, only little attention has been paid to developing annotated datasets of animal faces [41, 25]. Although datasets such as ImageNet [7] and iNaturalist [36] offer reasonable species variety, they are targeted at image-level classification and region-level detection tasks. The two animal face alignment
+
+
+Figure 2: Some representative examples from randomly chosen species in AnimalWeb. Animal faces tend to exhibit large variations in pose, scale, appearance and expressions.
+
+| Dataset | Target Face | Faces | Points |
+| --- | --- | --- | --- |
+| Multi-PIE [12] (semi-frontal) | Human | 6665 | 68 |
+| Multi-PIE [12] (profile) | Human | 1400 | 39 |
+| AFLW [18] | Human | 25,993 | 21 |
+| COFW [4] | Human | 1007 | 29 |
+| COFW [11] | Human | 507 | 68 |
+| 300W [29, 28] | Human | 3837 | 68 |
+| Menpo 2D [8] (semi-frontal) | Human | 10,993 | 68 |
+| Menpo 2D [8] (profile) | Human | 3852 | 39 |
+| AFLW2000-3D [44] | Human | 2000 | 68 |
+| 300W-LP [44] (synthetic) | Human | 61,225 | 68 |
+| Sheep faces [41] | Animal | 600 | 8 |
+| Horse faces [25] | Animal | 3717 | 8 |
+| AnimalWeb (Ours) | Animal | 22,451 | 9 |
+
+Table 1: Comparison between AnimalWeb and various popular face alignment datasets. AnimalWeb is bigger (in terms of faces offered) than $80\%$ of the datasets targeted at human face alignment. Further, the existing efforts on animal face datasets are limited to only single species. This work targets a big gap in this area by building a large-scale annotated animal faces dataset.
+
+datasets were reported in [41] and [25]. Yang et al. [41] collected 600 sheep faces and annotated them with 8 fiducial landmarks. Similarly, Rashid et al. [25] reported a collection of 3717 horse faces with points marked around 8 facial features. These datasets are severely limited in terms of biodiversity, size, and range of possible real-world conditions. To our knowledge, the proposed dataset is the first large-scale, hierarchical collection of annotated animal faces with 9 landmarks, possessing real-world properties (e.g., large poses) and unique attributes, e.g., species imbalance, multi-task applications, and zero-shot face alignment.
+
+# 3. AnimalWeb Properties
+
+In this section, we highlight some of the unique aspects of the newly introduced dataset (Fig. 2).
+
+
+Figure 3: Distribution of faces per species in AnimalWeb. We see that $29\%$ of the total species contain $65\%$ of the total faces. The dataset shows the natural occurrence patterns of different species.
+
+Scale. The proposed dataset offers large-scale and diverse coverage of annotated animal faces. It contains 22.4K annotated faces from 350 different animal species, with a variable number of faces per species. Fig. 3 shows the distribution of faces per species. We see that $29\%$ of the total species contain $65\%$ of the total faces. Also, the maximum and minimum numbers of faces per species are 239 and 1, respectively. Both statistics highlight the large imbalance between species and the high variability in instance count across species. This conforms with the real world, where different species are observed with varying frequencies.
+
+Tab. 1 compares AnimalWeb with various popular face alignment datasets. AnimalWeb is bigger (in face count) than $80\%$ of the datasets targeted at human face alignment. Importantly, very little attention has been directed towards constructing annotated animal face datasets mimicking real-world properties, and the existing ones are limited to single species.
+
+Diversity. Robust computational tools aimed at detecting/tracking animal facial behaviour in open environments are difficult to realize without observations that can exhibit real-world scenarios as much as possible. We therefore aim at ensuring diversity along two important dimensions, (1)
+
+imaging variations in scale, pose, expression, and occlusion, (2) species coverage in the animal biological taxonomy. Fig. 2 shows some example variations captured in the dataset. We observe that animal faces exhibit great pose variations and are captured from very different angles (e.g., top view) that are quite unlikely for human faces. In addition, animal faces show a great range of scale variations.
+
+Fig. 4 (top row) reveals that faces in AnimalWeb exhibit a much greater range of shape deformations. Each image is obtained by warping all ground-truth shapes to a reference shape, thereby removing similarity transformations. Fig. 4 (bottom row) demonstrates image diversification in AnimalWeb and other datasets. We observe that AnimalWeb comprises more diversified images than other commonly available human face alignment datasets. To gauge scale diversity, Fig. 5 plots the distribution of normalized face sizes for AnimalWeb and popular human face alignment datasets. AnimalWeb offers a $32\%$ greater range of small face sizes $(< 0.2)$ in comparison to competing human face alignment datasets.
+
+
+Figure 4: Top: AnimalWeb covers significantly larger deformations. Bottom: It offers more diversity - large variability in appearances, viewpoints, poses, clutter and occlusions resulting in the blurriest mean image with the smallest lossless JPG file size.
+
+
+Figure 5: Face sizes distribution in AnimalWeb and popular human face alignment datasets. AnimalWeb offers $32\%$ more range of small face sizes $(< 0.2)$ in comparison to competing datasets.
+
+Fig. 6 provides a miniature view of the hierarchical nature, illustrating diversity in AnimalWeb. Primates and Carnivora orders have been shown with randomly chosen 8 and 5 families alongside a few genomes. We observe that it exhibits hierarchical structure with variable number of children nodes for each parent node. We refer to Tab. 2 for the count of families, genomes, species, and faces in top 5 orders (ranked by face count).
+
+
+Figure 6: A miniature glimpse of the hierarchical nature of AnimalWeb. Primates and Carnivora orders have been shown with a few families and respective genomes.
+
+# 4. Constructing AnimalWeb
+
+This section details four key steps followed towards the construction of AnimalWeb (see Fig. 7). They include image collection, workflow development, facial point annotation, and annotation refinement.
+
+# 4.1. Image Collection
+
+We first developed a taxonomic framework to realise a structured, scalable dataset design followed by a detailed collection protocol to ensure real-world conditions before starting image collection process.
+
+Taxonomic Framework Development. A simple, hierarchical tree-like data structure is designed following the well established biological animal classification. The prime motivation is to carry out image collection - the next step - in a structured and principled way. Further, this methodology enables recording various statistics e.g., image count at different nodes of the tree.
+
+Data Collection Protocol. Starting from animal kingdom we restricted ourselves to vertebrates group (phylum), and further within vertebrates to Mammalia class. We wanted those animals whose faces exhibit roughly regular and identifiable face structure. Some excluded animal examples are insects and worms that possibly violate this condition. Given these restrictions, 21 orders were shortlisted for collection task. Scientific names of top 5 orders in terms of face count are reported in Tab. 2.
+
+| Order | Families | Genuses | Species | Faces |
+| --- | --- | --- | --- | --- |
+| Carnivora | 11 | 57 | 144 | 8281 |
+| Artiodactyla | 7 | 42 | 55 | 4546 |
+| Primates | 12 | 30 | 59 | 3468 |
+| Rodentia | 11 | 19 | 19 | 1521 |
+| Sphenisciformes | 1 | 5 | 10 | 1516 |
+
+Table 2: Top 5 orders in terms of face count covered in Animal-Web. For each order we show the number of families, genomes, species, and faces. There are a total of 21 orders and each order explores on average 3 families, 8 genomes, and 1024 faces.
+
+
+Figure 7: Four systematic stages in AnimalWeb development with details and man-hours involved. Zoom-in for details.
+
+Finally, we set the bound for the number of images to be collected per genus-species to between 200 and 250. This increases the chances that the valuable collection effort is spent on exploring different possible species (improving biodiversity) rather than on heavily populating a few commonly seen ones. With this constraint, we ended up with an average of 65 animal faces per species.
+
+Image Source. The Internet is the only source used for collecting images for this dataset. Other large-scale computer vision datasets such as ImageNet [7] and MS COCO [20] have also relied on this source. Specifically, we choose Flickr$^{1}$, a large image hosting website, to first search, then select, and finally download relevant animal faces.
+
+Collection. We use both common and scientific names of animal species from the taxonomic framework (described earlier) to query images. Selection is primarily based on capturing various in-the-wild conditions, e.g. various face poses. A team of 3 trained volunteers completed the image collection process under the supervision of an expert. Each worker collected an average of 100 images per hour, amounting to a total of $\sim 250$ man-hours. After downloading, we had around $25\mathrm{K}$ candidate images. Finally, a visual filtering step helped remove potential duplicates across species in 43.8 man-hours.
+
+# 4.2. Workflow Development
+
+Annotating faces can unarguably be the most important, labour-intensive and thus a difficult step towards this dataset construction. To actualize this, we leveraged the great volunteers resource from a large citizen science web portal, called Zooniverse $^{2}$ . It is home to many successful citizen science projects. We underwent the following stages to accomplish successful project launch through this portal.
+
+Project Review. This is the first stage and it involves project design and review. The project is only launched
+
+once it is reviewed by the Zooniverse expert panel, whose main selection criterion revolves around gauging the impact of a research project.
+
+Workflow design and development. Upon clearing the review process, in the second phase, the relevant image metadata is uploaded to the server and an annotator interface (a.k.a. workflow) is developed. The workflow is first designed for annotating points and is then thoroughly verified. The two major quality checks are 1) its ease of use for a large volunteer group with different levels of domain expertise, and 2) its fitness for the key project deliverables. In our case, the workflow defines the 'order' and 'name' of each facial point. Further, it comprises a clear action plan in case of ambiguities (e.g., invisible landmarks) by linking a professionally developed help page, which shows instructions and illustrations for annotating points across all species and diverse poses. Lastly, our workflow was thoroughly tested by a 5-member team of experts, which took 20 man-hours of effort.
+
+9 pts. markup scheme. The annotator interface in our case required annotators to adhere to the 9 landmarks markup scheme as shown in Fig. 8. We believe that 9 landmarks provide good trade-off between annotation effort and facial features coverage.
+
+# 4.3. Facial Point Annotation
+
+After workflow development, the project is exposed to a big pool of Zooniverse volunteers for annotating facial landmarks. These volunteers have prior experience of annotating many different successful citizen science projects related to animals. Every face is annotated by at least 5 different volunteers, which amounts to a labour-intensive effort of $\sim 5408$ man-hours in total. Multiple annotations of a single face improve the likelihood of recovering points close to the actual locations of the facial landmarks, provided more than half of these annotations satisfy this assumption. To this end, we take the median value of the multiple annotations of each face.
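+
+A minimal sketch of this consolidation step, assuming the volunteer annotations for one face are stored as an array of (x, y) coordinates:
+
+```python
+import numpy as np
+
+def consolidate(annotations):
+    """annotations: array-like of shape (num_annotators, 9, 2) holding the
+    9 landmark locations clicked by each volunteer for one face.
+    The per-landmark, per-coordinate median is robust to a minority of
+    outlying clicks."""
+    annotations = np.asarray(annotations, dtype=float)
+    return np.median(annotations, axis=0)  # shape (9, 2)
+```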
+
+The annotation portal allows annotators to raise a query
+
+
+Figure 8: Nine landmarks markup scheme used for annotation of faces in AnimalWeb. The markup scheme covers major facial features around key face components (eyes, nose, and lips) while keeping the total landmark count low.
+
+with the experts throughout the annotation life cycle. This also helps remove annotation ambiguities for other volunteers who might face the same issues later. The whole exercise of Zooniverse crowdsourcing took 80 man-hours of the experts' time.
+
+# 4.4. Refining Annotations
+
+Annotations performed by Zooniverse volunteers can be inaccurate or missing for some facial points. Further, they can be inconsistent and unordered. Unordered point annotations result if, for instance, the left eye landmark is swapped with the right eye. The above-mentioned errors are in some sense justifiable, since annotating points on animal faces captured in real-world settings is a complicated task.
+
+We hired a small team of 4 trained volunteers for refinement. The team performed manual corrections and was supervised by an expert. The refinement was completed in the two passes listed below.
+
+Refinement Passes. In the first pass, major errors were rectified, e.g., correcting point ordering. This refinement proceeded species-wise to enforce consistency in annotations across every species in the dataset. A total of 548 man-hours were spent in the first pass. In the second pass, pixel-perfect annotations were ensured by cross-annotator review in 438 man-hours of effort: the refinements on a portion of the dataset done by one member in the first pass are reviewed and refined by another member of the team.
+
+# 5. Benchmarking AnimalWeb
+
+We extensively benchmark AnimalWeb for the face alignment task. In addition, we demonstrate its multi-task applications by reporting experimental results for face detection and fine-grained image recognition.
+
+# 5.1. Animal Facial Point Localization
+
+We select a state-of-the-art (SOTA) method in 2D human face alignment for evaluating AnimalWeb. Specifically, we take the Hourglass (HG) deep learning architecture; it has shown excellent results on a range of challenging 2D face alignment datasets [3, 32] and competitions [39].
+
+Datasets and Evaluation Protocols. We use 300W-public, 300W-private, AFLW2000-3D, and COFW for comparison
+
+as they are the most challenging ones and are publicly available. 300W-public contains 3148 training images and 689 testing images. 300W-private comprises 600 images for testing only. We only use COFW for testing purposes; its testing set contains 507 images. Similarly, AFLW2000-3D is used for testing only after training on 300WLP dataset.
+
+We use Normalized Mean Error (NME) as the face alignment evaluation metric,
+
+$$
+\mathrm{NME} = \frac{1}{N} \sum_{i = 1}^{N} \sum_{l = 1}^{L} \frac{\| \mathbf{x}_{i}^{\prime}(l) - \mathbf{x}_{i}^{g}(l) \|}{d_{i}}.
+$$
+
+It calculates the Euclidean distance between the predicted and ground-truth point locations, normalized by $d_{i}$, where $\mathbf{x}_{i}^{\prime}(l)$ and $\mathbf{x}_{i}^{g}(l)$ denote the predicted and ground-truth locations of landmark $l$ of face $i$. We choose the ground-truth face bounding box size as $d_{i}$, as other measures such as the inter-ocular distance can be biased for profile faces [24]. In addition to NME, we report results using Cumulative Error Distribution (CED) curves, Area Under the Curve (AUC) at 0.08 NME, and Failure Rate (FR) at 0.08 NME.
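+
+For reference, a small numpy sketch of these metrics is given below. It follows the equation as written (summing the normalized point errors over the $L$ landmarks of each face); the 0.08 threshold for AUC and FR matches the protocol above, and the rectangle-rule AUC approximation is our choice.
+
+```python
+import numpy as np
+
+def per_face_nme(pred, gt, box_sizes):
+    """pred, gt: (N, L, 2) predicted and ground-truth landmarks;
+    box_sizes: (N,) ground-truth face bounding box sizes used as d_i."""
+    per_point = np.linalg.norm(pred - gt, axis=2)   # (N, L)
+    return per_point.sum(axis=1) / box_sizes        # per-face normalized error
+
+def summarize(errors, threshold=0.08):
+    """Mean NME, AUC of the CED curve up to `threshold` (normalized to [0, 1],
+    rectangle rule), and failure rate above `threshold`."""
+    xs = np.linspace(0.0, threshold, 1000)
+    ced = np.array([(errors <= x).mean() for x in xs])
+    return errors.mean(), float(ced.mean()), float((errors > threshold).mean())
+```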
+
+Training Details. For all our experiments, we use the settings described below to train HG networks both for the human datasets and for AnimalWeb. Note that these settings are similar to those described in [32, 39] to obtain top performances on 2D face alignment datasets. We set the initial learning rate to $10^{-4}$ and use a mini-batch size of 10. During training, we divide the learning rate by 5, 2, and 2 at epochs 30, 60, and 90, respectively, training for a total of 110 epochs. We also apply random augmentation: rotation (from $-30^{\circ}$ to $30^{\circ}$), color jittering, and scale noise (from 0.75 to 1.25). All networks are trained using RMSprop [33].
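+
+The stated schedule (RMSprop, initial learning rate $10^{-4}$, divided by 5, 2 and 2 at epochs 30, 60 and 90 over 110 epochs, mini-batch of 10) can be sketched as follows; the augmentation pipeline and any unstated hyper-parameters are assumptions.
+
+```python
+import torch
+
+def make_optimizer(model, base_lr=1e-4):
+    return torch.optim.RMSprop(model.parameters(), lr=base_lr)
+
+def lr_at_epoch(epoch, base_lr=1e-4):
+    """Divide the learning rate by 5, 2 and 2 at epochs 30, 60 and 90."""
+    lr = base_lr
+    for milestone, factor in [(30, 5.0), (60, 2.0), (90, 2.0)]:
+        if epoch >= milestone:
+            lr /= factor
+    return lr
+
+# usage inside a training loop of 110 epochs with mini-batches of 10:
+# for epoch in range(110):
+#     for group in optimizer.param_groups:
+#         group["lr"] = lr_at_epoch(epoch)
+#     ...
+```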
+
+Evaluation Settings. AnimalWeb is assessed under two different settings. The first randomly takes $80\%$ of the images of each species$^{3}$ for training and the remaining $20\%$ for testing. We call this 'known species evaluation', or 'few-shot face alignment', since during training the network sees examples from every species expected at testing time. The second setting randomly divides all species into $80\%$ for training and $20\%$ for testing. We term this 'unknown species evaluation', or 'zero-shot face alignment' (ZFA), as the species encountered in the testing phase are not available during training. Unknown species evaluation is perhaps more akin to real-world settings than its counterpart: a deployed facial behaviour monitoring system is likely to encounter species that were unavailable at training time. It is also more challenging than the first setting, as the facial appearance of the testing species can be quite different from that of the training species.
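+
+To make the two protocols concrete, a small sketch of how the splits can be built from a mapping of species to image lists is given below; the random seed and bookkeeping details are assumptions.
+
+```python
+import random
+
+def known_species_split(images_by_species, train_frac=0.8, seed=0):
+    """'Few-shot' setting: 80% of the images of every species for training,
+    the remaining 20% for testing."""
+    rng = random.Random(seed)
+    train, test = [], []
+    for species, images in images_by_species.items():
+        images = list(images)
+        rng.shuffle(images)
+        cut = int(round(train_frac * len(images)))
+        train += images[:cut]
+        test += images[cut:]
+    return train, test
+
+def unknown_species_split(images_by_species, train_frac=0.8, seed=0):
+    """'Zero-shot' setting: 80% of the species for training; test species
+    are never seen during training."""
+    rng = random.Random(seed)
+    species = list(images_by_species)
+    rng.shuffle(species)
+    cut = int(round(train_frac * len(species)))
+    train = [im for s in species[:cut] for im in images_by_species[s]]
+    test = [im for s in species[cut:] for im in images_by_species[s]]
+    return train, test
+```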
+
+Known Species Evaluation. Tab. 3 compares AnimalWeb and various human face alignment benchmarks when stacking 2 and 3 modules of the HG network. Human face alignment results are shown both in
+
+| Datasets | 9 pts. HG-2 | 9 pts. HG-3 | 68 pts. HG-2 | 68 pts. HG-3 |
+| --- | --- | --- | --- | --- |
+| 300W (common) | 1.21/84.8/0.18 | 1.19/85.0/0.00 | 1.26/84.1/0.00 | 1.25/84.2/0.00 |
+| 300W (full) | 1.42/82.1/0.14 | 1.40/82.4/0.00 | 1.41/82.2/0.00 | 1.40/82.3/0.00 |
+| 300W (challenging) | 2.28/71.4/0.00 | 2.25/71.7/0.00 | 2.03/74.5/0.00 | 2.01/74.8/0.00 |
+| 300W (private) | 2.26/72.2/0.66 | 2.31/72.4/1.16 | 1.82/77.5/0.50 | 1.77/77.8/0.16 |
+| AFLW2000-3D | 3.27/60.8/3.27 | 3.23/61.3/2.75 | 2.73/66.5/0.50 | 2.71/66.9/0.55 |
+| COFW | 3.43/60.0/3.74 | 3.26/61.3/3.55 | 2.66/67.2/1.97 | 2.60/68.2/1.57 |
+| AnimalWeb (Known) | 5.22/46.8/16.4 | 5.12/47.4/16.3 | - | - |
+| AnimalWeb (Unknown) | 6.14/41.5/22.0 | 5.96/42.9/20.7 | - | - |
+
+terms of 68 pts. and 9 pts. For a fair comparison, the 9 pts. chosen on human faces are the same as for animal faces, and the 9 pts. results correspond to models trained with 9 pts. on human faces. We see a considerable gap (NME difference) between all the results for human face alignment datasets and AnimalWeb. For instance, the NME difference between COFW and AnimalWeb, both tested using the HG-2 network, is $\sim 1$ unit under the known species evaluation protocol. We observe a similar trend in the CED curves displayed in Fig. 9. Performance on the COFW dataset, the most challenging among the human face datasets, is $15\%$ higher across the whole spectrum of point-to-point error. Finally, we display some example fittings under the known species evaluation setting in the first row of Fig. 10. We see that the existing art struggles under the adverse in-the-wild situations exhibited in AnimalWeb.
+
+
+Figure 9: Comparison between AnimalWeb and popular face alignment datasets using HG-2&3 networks.
+
+
+
+
+Figure 12: Species-wise results for AnimalWeb under the Known Species setting. Zoom in for details.
+
+Fig. 12 depicts species-wise testing results for AnimalWeb. For each species, we average results over the instances belonging to it. We observe poorer performance for some species compared to others. This is possibly due to large intra-species variations coupled with a scarcity of training instances relative to other species. For instance, the hogdeer species has only 20 training samples, compared to the amurleopard species with 91 training examples. Next, we report pose-wise results based on yaw angle in
+
+Table 3: Accuracy comparison between the AnimalWeb and 6 different human face alignment benchmarks when stacking 2 and 3 modules of HG network. We show human face alignment results both in terms of 68 pts. and 9 pts. Format for each table entry is: NME error/AUC@0.08 (NME) error/FailureRate@0.08 (NME) error. All results are in %.
+
+Tab. 4. We can observe that AnimalWeb is challenging for large poses. The performance drops as we move from the $[-45^{\circ}, 45^{\circ}]$ range towards either end of the yaw angle spectrum shown. Further, Tab. 5 shows results under different face sizes. We observe room for improvement across a wide range of face sizes.
+
+Unknown Species Evaluation. Here, we report results under the unknown species setting. Note that we randomly choose $80\%$ of the species for training and the remaining $20\%$ for testing. Tab. 3 compares the unknown species setting with its counterpart. As expected, accuracy is lower for the unknown case than for the known case. For example, HG-2 performs $\sim 1$ unit worse under the unknown case than under the known case. Animal faces display much larger inter-species variations between some species; for example, the face appearances of adeliepenguins and giantpandas are radically different (Fig. 10). The bottom row of Fig. 10 displays example fittings under this setting. We see that the fitting quality is low even for frontal poses, since the face appearance of the species seen during training can be very different from that of the testing species.
+
+The low accuracy of existing methods under unknown species presents opportunities for the development of 'zero-shot face alignment' algorithms that are robust to unseen facial appearance patterns. For instance, new methods could better leverage similarities across seen species to perform satisfactorily on unknown ones.
+
+# 5.2. Animal Face Detection
+
+We evaluate the performance of animal face detection using a Faster R-CNN [27] baseline. Our ground truth is a tight face bounding box for each animal face, obtained by fitting the annotated facial landmarks. We first evaluate our performance on the face localization task. We compare our dataset with one of the most challenging human face detection datasets, WIDER Face [42], in terms of Precision-Recall curves (Fig. 11). Note that WIDER Face is a large-scale dataset with 393,703 face instances in 32K images and introduces three evaluation protocols, namely 'easy', 'medium' and 'hard', with increasing levels of difficulty. The performance on our dataset lies close to the medium curve of WIDER Face, which shows that there exists a reasonable margin of improvement for animal face detection. We also compute overall class-wise detection
+
+
+Figure 10: Example landmark fittings from AnimalWeb. Top row: fittings under known species evaluation. Bottom row: fittings under unknown species evaluation. Red points denote fittings results of HG-3 and blue points are the ground truths.
+
+
+Figure 11: Precision-recall curve for AnimalWeb settings and WIDER Face datasets.
+
+
+Figure 13: Example face detections from AnimalWeb. Green/red boxes denote true/missed detections from Faster-RCNN [27] baseline.
+
+| Yaw | -90° | [-90°,-45°] | [-45°,45°] | [45°,90°] | 90° |
+| --- | --- | --- | --- | --- | --- |
+| Faces | 584 | 993 | 1092 | 991 | 689 |
+| NME | 6.75 | 5.02 | 3.31 | 4.99 | 6.94 |
+
+Table 4: Pose-wise NME(%) based on yaw-angles with HG-3 under Known species settings of AnimalWeb.
+
+| Face size | [0,0.16] | [0.16,0.32] | [0.32,0.48] |
+| --- | --- | --- | --- |
+| Faces | 3388 | 817 | 129 |
+| NME | 5.29 | 4.41 | 4.73 |
+
+Table 5: NME(%) w.r.t face size distribution with HG-3 under Known species settings of AnimalWeb. Face sizes are normalized by the corresponding image sizes.
+
+scores, where the Faster R-CNN model achieves an mAP of 0.727. Some qualitative examples of our animal face detector are shown in Fig. 13.
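+
+As described above, each ground-truth box is derived by fitting the annotated landmarks; a minimal sketch of such a derivation is given below (the padding fraction is a hypothetical choice, since the text only states that the box tightly encloses the face):
+
+```python
+import numpy as np
+
+def face_bbox_from_landmarks(landmarks, pad=0.0):
+    """Axis-aligned box enclosing the 9 annotated (x, y) landmarks, optionally
+    expanded by a hypothetical margin 'pad' given as a fraction of width/height."""
+    landmarks = np.asarray(landmarks, dtype=float)
+    x_min, y_min = landmarks.min(axis=0)
+    x_max, y_max = landmarks.max(axis=0)
+    w, h = x_max - x_min, y_max - y_min
+    return (x_min - pad * w, y_min - pad * h, x_max + pad * w, y_max + pad * h)
+```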
+
+# 5.3. Fine-grained species recognition
+
+Since our dataset is labeled with fine-grained species, one supplementary task of interest is fine-grained classification. We evaluate the recognition performance on our dataset by applying Residual Networks [14] with varying depths (18, 34, 50 and 101). Results are reported in Tab. 6. We observe a gradual boost in top-1 accuracy as the network capacity is increased. Our dataset shows a difficulty level similar to other fine-grained datasets of comparable scale, e.g., CUB-200-2011 [37] and Stanford Dogs [17] with 200 and 120 classes, respectively. A ResNet50 baseline on CUB-200 and Stanford Dogs achieves
+
+| Network | ResNet18 | ResNet34 | ResNet50 | ResNet101 |
+| --- | --- | --- | --- | --- |
+| Accuracy | 78.46 | 81.51 | 83.09 | 84.23 |
+
+Table 6: Fine-grained recognition accuracy on AnimalWeb. Top-1 accuracies (in %) are reported using four ResNet variants [14].
+
+accuracies of $81.7\%$ and $81.1\%$ [31], respectively, while the same network achieves an accuracy of $83.09\%$ on AnimalWeb.
+
+# 6. Conclusion
+
+We introduce a large-scale, hierarchical dataset of annotated animal faces, named AnimalWeb. It features 22.4K faces from 350 diverse animal species spanning 21 different orders. Each face is consistently annotated with 9 landmarks around key facial features. Benchmarking AnimalWeb under two novel settings for face alignment, employing a current state-of-the-art method, reveals its challenging nature. We observe that state-of-the-art methods for human face alignment underperform on animal faces. This highlights the need for specialized and robust algorithms to analyze animal faces. We also show applications of the dataset to face detection and fine-grained recognition. Our results show that it is a promising experimental base for algorithmic advances.
+
+Acknowledgments This work was supported by the EPSRC project EP/M02153X/1 Facial Deformable Models of Animals. Further, it uses data generated via the Zooniverse.org platform, funded by a Google Global Impact Award and the Alfred P. Sloan Foundation.
+
+# References
+
+[1] Peter N Belhumeur, David W Jacobs, David J Kriegman, and Neeraj Kumar. Localizing parts of faces using a consensus of exemplars. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(12):2930-2940, 2013. 2
+[2] Alain Boissy, Arnaud Aubert, Lara Désiré, Lucile Greiveldinger, Eric Delval, Isabelle Veissier, et al. Cognitive sciences to relate ear postures to emotions in sheep. *Animal Welfare*, 20(1):47, 2011. 2
+[3] Adrian Bulat and Georgios Tzimiropoulos. How far are we from solving the 2d & 3d face alignment problem? (and a dataset of 230,000 3d facial landmarks). In Proceedings of the IEEE International Conference on Computer Vision, pages 1021-1030, 2017. 2, 6
+[4] Xavier P Burgos-Artizzu, Pietro Perona, and Piotr Dollár. Robust face landmark estimation under occlusion. In Proceedings of the IEEE International Conference on Computer Vision, pages 1513-1520, 2013. 2, 3
+[5] Xudong Cao, Yichen Wei, Fang Wen, and Jian Sun. Face alignment by explicit shape regression. International Journal of Computer Vision, 107(2):177-190, 2014. 2
+[6] Timothy F Cootes, Gareth J Edwards, and Christopher J Taylor. Active appearance models. In European Conference on Computer Vision, pages 484-498. Springer, 1998. 2
+[7] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern recognition, pages 248–255. IEEE, 2009. 5
+[8] Jiankang Deng, Anastasios Roussos, Grigorios Chrysos, Evangelos Ververas, Irene Kotsia, Jie Shen, and Stefanos Zafeiriou. The menpo benchmark for multi-pose 2d and 3d facial landmark localisation and tracking. International Journal of Computer Vision, pages 1-26, 2018. 2, 3
+[9] Kathryn Finlayson, Jessica Frances Lampe, Sara Hintze, Hanno Würbel, and Luca Melotti. Facial indicators of positive emotions in rats. PloS one, 11(11):e0166446, 2016. 2
+[10] Carole Fureix, Patrick Jego, Séverine Henry, Léa Lansade, and Martine Hausberger. Towards an ethological animal model of depression? a study on horses. PLoS One, 7(6):e39280, 2012. 2
+[11] Golnaz Ghiasi and Charless C Fowlkes. Occlusion coherence: Detecting and localizing occluded faces. arXiv preprint arXiv:1506.08347, 2015. 2, 3
+[12] Ralph Gross, Iain Matthews, Jeffrey Cohn, Takeo Kanade, and Simon Baker. Multi-pie. Image and Vision Computing, 28(5):807-813, 2010. 2, 3
+[13] M.J. Guesgen, N.J. Beausoleil, M. Leach, E.O. Minot, M. Stewart, and K.J. Stafford. Coding and quantification of a facial expression for pain in lambs. Behavioural Processes, 132:49 - 56, 2016. 2
+[14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016. 8
+[15] László A Jeni, Sergey Tulyakov, Lijun Yin, Nicu Sebe, and Jeffrey F Cohn. The first 3d face alignment in the wild (3dfaw) challenge. In European Conference on Computer Vision, pages 511-520. Springer, 2016. 2
+[16] Oliver Jesorsky, Klaus J Kirchberg, and Robert W Frischholz. Robust face detection using the hausdorff distance. In International Conference on audio-and video-based biometric person authentication, pages 90-95. Springer, 2001. 2
+[17] Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao, and Li Fei-Fei. Novel dataset for fine-grained image categorization. In First Workshop on Fine-Grained Visual Categorization, IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, June 2011. 8
+[18] Martin Koestinger, Paul Wohlhart, Peter M Roth, and Horst Bischof. Annotated facial landmarks in the wild: A large-scale, real-world database for facial landmark localization. In 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), pages 2144-2151. IEEE, 2011. 2, 3
+[19] T Kutzer, M Steilen, L Gygax, and B Wechsler. Habituation of dairy heifers to milking routine—effects on human avoidance distance, behavior, and cardiac activity during milking. Journal of Dairy Science, 98(8):5241-5251, 2015. 2
+[20] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pages 740-755. Springer, 2014. 5
+[21] Iacopo Masi, Anh Tuan Tran, Tal Hassner, Jatuporn Toy Leksut, and Gérard Medioni. Do we really need to collect millions of faces for effective face recognition? In European Conference on Computer Vision, pages 579-596. Springer, 2016. 2
+[22] Kieron Messer, Jiri Matas, Josef Kittler, Juergen Luettin, and Gilbert Maitre. Xm2vtsdb: The extended m2vts database. 1999. 2
+[23] P Jonathon Phillips, Patrick J Flynn, Todd Scruggs, Kevin W Bowyer, Jin Chang, Kevin Hoffman, Joe Marques, Jaesik Min, and William Worek. Overview of the face recognition grand challenge. In 2005 IEEE Computer society Conference on Computer Vision and Pattern recognition (CVPR'05), volume 1, pages 947-954. IEEE, 2005. 2
+[24] Deva Ramanan and Xiangxin Zhu. Face detection, pose estimation, and landmark localization in the wild. In 2012 IEEE Conference on Computer Vision and Pattern recognition, pages 2879-2886. IEEE, 2012. 6
+[25] Maheen Rashid, Xiuye Gu, and Yong Jae Lee. Interspecies knowledge transfer for facial keypoint detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6894-6903, 2017. 2, 3
+[26] Shaoqing Ren, Xudong Cao, Yichen Wei, and Jian Sun. Face alignment at 3000 fps via regressing local binary features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1685-1692, 2014. 2
+[27] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91-99, 2015. 7, 8
+
+[28] Christos Sagonas, Epameinondas Antonakos, Georgios Tzimiropoulos, Stefanos Zafeiriou, and Maja Pantic. 300 faces in-the-wild challenge: Database and results. Image and Vision computing, 47:3-18, 2016. 2, 3
+[29] Christos Sagonas, Georgios Tzimiropoulos, Stefanos Zafeiriou, and Maja Pantic. 300 faces in-the-wild challenge: The first facial landmark localization challenge. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 397-403, 2013. 2, 3
+[30] Jie Shen, Stefanos Zafeiriou, Grigoris G Chrysos, Jean Kossaifi, Georgios Tzimiropoulos, and Maja Pantic. The first facial landmark tracking in-the-wild challenge: Benchmark and results. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 50-58, 2015. 2
+[31] Ming Sun, Yuchen Yuan, Feng Zhou, and Errui Ding. Multi-attention multi-class constraint for fine-grained image recognition. In Proceedings of the European Conference on Computer Vision (ECCV), pages 805–821, 2018. 8
+[32] Zhiqiang Tang, Xi Peng, Shijie Geng, Lingfei Wu, Shaoting Zhang, and Dimitris Metaxas. Quantized densely connected u-nets for efficient landmark localization. In Proceedings of the European Conference on Computer Vision (ECCV), pages 339-354, 2018. 6
+[33] T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. In COURSERA: Neural networks for Machine learning, page 4(2), 2012. 6
+[34] George Trigeorgis, Patrick Snape, Mihalis A Nicolaou, Epameinondas Antonakos, and Stefanos Zafeiriou. Mnemonic descent method: A recurrent process applied for end-to-end face alignment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4177-4187, 2016. 2
+[35] Georgios Tzimiropoulos. Project-out cascaded regression with an application to face alignment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3659-3667, 2015. 2
+[36] Grant Van Horn, Oisin Mac Aodha, Yang Song, Yin Cui, Chen Sun, Alex Shepard, Hartwig Adam, Pietro Perona, and Serge Belongie. The inaturalist species classification and detection dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8769-8778, 2018. 2
+[37] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011. 8
+[38] Dayong Wang, Charles Otto, and Anil K Jain. Face search at scale. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6):1122-1136, 2017. 2
+[39] Pengfei Xiong, Guoqing Li, and Yuhang Sun. Combining local and global features for 3d face tracking. In Proceedings of the IEEE International Conference on Computer Vision, pages 2529-2536, 2017. 2, 6
+[40] Xuehan Xiong and Fernando De la Torre. Supervised descent method and its applications to face alignment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 532-539, 2013. 2
+[41] Heng Yang, Renqiao Zhang, and Peter Robinson. Human and sheep facial landmarks localisation by triplet interpolated features. In 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1-8. IEEE, 2016. 2, 3
+[42] Shuo Yang, Ping Luo, Chen-Change Loy, and Xiaoou Tang. Wider face: A face detection benchmark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5525–5533, 2016. 7
+[43] Shizhan Zhu, Cheng Li, Chen-Change Loy, and Xiaoou Tang. Unconstrained face alignment via cascaded compositional learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3409–3417, 2016. 2
+[44] Xiangyu Zhu, Zhen Lei, Xiaoming Liu, Hailin Shi, and Stan Z Li. Face alignment across large poses: A 3d solution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 146-155, 2016. 2, 3
\ No newline at end of file
diff --git a/animalwebalargescalehierarchicaldatasetofannotatedanimalfaces/images.zip b/animalwebalargescalehierarchicaldatasetofannotatedanimalfaces/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..0917e3b5379f705d613762bb97b4cb31bd8a9a5a
--- /dev/null
+++ b/animalwebalargescalehierarchicaldatasetofannotatedanimalfaces/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fd59c355c09c5c2c7523233c4925f36f13f7e62da3c9d4f1bc0aeb24d5922b54
+size 739047
diff --git a/animalwebalargescalehierarchicaldatasetofannotatedanimalfaces/layout.json b/animalwebalargescalehierarchicaldatasetofannotatedanimalfaces/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..e8f19e99fc30d5b59fc9cdd679a5616842c0622a
--- /dev/null
+++ b/animalwebalargescalehierarchicaldatasetofannotatedanimalfaces/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ed08849261095b4de979956deb6ce97fb558bb416678c2e9ddcdf9d45318978e
+size 331326
diff --git a/aninternalcovariateshiftboundingalgorithmfordeepneuralnetworksbyunitizinglayersoutputs/2aa7eb07-b1f7-4514-bbb0-fa6a7c49b318_content_list.json b/aninternalcovariateshiftboundingalgorithmfordeepneuralnetworksbyunitizinglayersoutputs/2aa7eb07-b1f7-4514-bbb0-fa6a7c49b318_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..d0395aed60dca2bd57a80a785921eef83e94a696
--- /dev/null
+++ b/aninternalcovariateshiftboundingalgorithmfordeepneuralnetworksbyunitizinglayersoutputs/2aa7eb07-b1f7-4514-bbb0-fa6a7c49b318_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c8f073d6c3f15cc203ae1dc515e46877096b9cbdbbb510582b2f370f2950afed
+size 72751
diff --git a/aninternalcovariateshiftboundingalgorithmfordeepneuralnetworksbyunitizinglayersoutputs/2aa7eb07-b1f7-4514-bbb0-fa6a7c49b318_model.json b/aninternalcovariateshiftboundingalgorithmfordeepneuralnetworksbyunitizinglayersoutputs/2aa7eb07-b1f7-4514-bbb0-fa6a7c49b318_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..f493f4eaab4caeeab61edc60ba4e321309ba18ab
--- /dev/null
+++ b/aninternalcovariateshiftboundingalgorithmfordeepneuralnetworksbyunitizinglayersoutputs/2aa7eb07-b1f7-4514-bbb0-fa6a7c49b318_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e3c0a287691b4ef48604051fa915bb98d2936d172fb01e6750ab7c9e21e04a00
+size 87519
diff --git a/aninternalcovariateshiftboundingalgorithmfordeepneuralnetworksbyunitizinglayersoutputs/2aa7eb07-b1f7-4514-bbb0-fa6a7c49b318_origin.pdf b/aninternalcovariateshiftboundingalgorithmfordeepneuralnetworksbyunitizinglayersoutputs/2aa7eb07-b1f7-4514-bbb0-fa6a7c49b318_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a9ac15d816f0a9235e7d0dffb1ef5a0752545931
--- /dev/null
+++ b/aninternalcovariateshiftboundingalgorithmfordeepneuralnetworksbyunitizinglayersoutputs/2aa7eb07-b1f7-4514-bbb0-fa6a7c49b318_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9d85b13304150a70708e1469215b42e0ecac8ce818c97007f3a73888ecb8224f
+size 553098
diff --git a/aninternalcovariateshiftboundingalgorithmfordeepneuralnetworksbyunitizinglayersoutputs/full.md b/aninternalcovariateshiftboundingalgorithmfordeepneuralnetworksbyunitizinglayersoutputs/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..b7c5e5e4f9e0f639375970ef1a5dfb311ecc5fca
--- /dev/null
+++ b/aninternalcovariateshiftboundingalgorithmfordeepneuralnetworksbyunitizinglayersoutputs/full.md
@@ -0,0 +1,353 @@
+# An Internal Covariate Shift Bounding Algorithm for Deep Neural Networks by Unitizing Layers' Outputs
+
+You Huang Fuzhou University
+
+youhuang0607@gmail.com
+
+Yuanlong Yu* Fuzhou University
+
+yu.yuanlong@fzu.edu.cn
+
+# Abstract
+
+Batch Normalization (BN) techniques have been proposed to reduce the so-called Internal Covariate Shift (ICS) by attempting to keep the distributions of layer outputs unchanged. Experiments have shown their effectiveness on training deep neural networks. However, since only the first two moments are controlled in these BN techniques, it seems that a weak constraint is imposed on layer distributions and furthermore whether such constraint can reduce ICS is unknown. Thus this paper proposes a measure for ICS by using the Earth Mover (EM) distance and then derives the upper and lower bounds for the measure to provide a theoretical analysis of BN. The upper bound has shown that BN techniques can control ICS only for the outputs with low dimensions and small noise whereas their control is not effective in other cases. This paper also proves that such control is just a bounding of ICS rather than a reduction of ICS. Meanwhile, the analysis shows that the high-order moments and noise, which BN cannot control, have great impact on the lower bound. Based on such analysis, this paper furthermore proposes an algorithm that unitizes the outputs with an adjustable parameter to further bound ICS in order to cope with the problems of BN. The upper bound for the proposed unitization is noise-free and only dominated by the parameter. Thus, the parameter can be trained to tune the bound and further to control ICS. Besides, the unitization is embedded into the framework of BN to reduce the information loss. The experiments show that this proposed algorithm outperforms existing BN techniques on CIFAR-10, CIFAR-100 and ImageNet datasets.
+
+# 1. Introduction
+
+Deep neural networks (DNNs) have shown good performance in image recognition [18], speech recognition [13] and other fields [23, 32] in recent years. However, how to train DNNs is still a fundamental problem, which is more complicated
+
+than training shallow networks because of the deep architectures. It was commonly thought that stacking more layers suffers from the problem of vanishing or exploding gradients [7], but there are also training problems whose definitions remain unclear. One such problem, called Internal Covariate Shift (ICS) [15], may hinder the convergence of training DNNs.
+
+ICS is derived from Covariate Shift (CS), which is caused by training and testing a model on data drawn from two different distributions, generally in supervised learning [28]. ICS, in contrast, arises mainly inside feed-forward networks. Considering the $l$-th layer of a network with $L$ layers, the stack of the following $L - l$ layers forms a local network $f_{l + 1:L}$, whose input is the output of the $l$-th layer. Thus, the distribution of this input is affected by the weights of all the former $l$ layers. In detail, the objective function of $f_{l + 1:L}$ is defined as
+
+$$
+\mathcal {L} \left(\Theta_ {l + 1: L}; p _ {l} ^ {(t)}, p _ {\boldsymbol {y}}\right) = \mathbb {E} _ {\boldsymbol {x} \sim p _ {l} ^ {(t)}, \boldsymbol {y} \sim p _ {\boldsymbol {y}} (\cdot | \boldsymbol {x})} [ h (\boldsymbol {x}, \boldsymbol {y}; \Theta_ {l + 1: L}) ], \tag {1}
+$$
+
+where $p_l^{(t)}$ is the distribution of the $l$-th layer's outputs at the $t$-th iteration; $p_{\mathbf{y}}$ is the conditional distribution of the ground truth at the last layer given $\mathbf{x}$; $\Theta_{l+1:L}$ denotes the weights of $f_{l+1:L}$; and $h(\mathbf{x}, \mathbf{y}; \Theta_{l+1:L})$ is the loss for a sample pair $(\mathbf{x}, \mathbf{y})$. We use the back-propagation algorithm to train networks. However, the objective function $\mathcal{L}(\Theta_{l+1:L}; p_l^{(t+1)}, p_{\mathbf{y}})$ at the $(t+1)$-th iteration would be different from the previous one as the distribution changes from $p_l^{(t)}$ to $p_l^{(t+1)}$. So using the gradients obtained from $\mathcal{L}(\Theta_{l+1:L}; p_l^{(t)}, p_{\mathbf{y}})$ to update $\Theta_{l+1:L}$ might not reduce $\mathcal{L}(\Theta_{l+1:L}; p_l^{(t+1)}, p_{\mathbf{y}})$ due to the divergence between $p_l^{(t)}$ and $p_l^{(t+1)}$. Furthermore, the divergence becomes much larger as the number of network layers increases.
+
+The technique called Batch Normalization (BN) [15] has been proposed to reduce ICS by attempting to make the distributions remain unchanged. In practice, BN normalizes the outputs to control the first two moments, i.e., mean and variance, and uses two adjustable parameters to recover the information lost in normalizing outputs. Empirical results have shown that BN can speed up network training and also
+
+improve the success rate [11, 29]. However, whether BN can really reduce ICS is not clear in theory. It is obvious that the first issue is about how to measure the divergence. Furthermore, since BN techniques only control the first and second moments, the constraint imposed on distributions by BN is weak. So how to theoretically analyze the bounding of ICS imposed by BN techniques is the second issue.
+
+Meanwhile, some experiments have shown that the performance gain of BN seems unrelated to the reduction of ICS [27]. In fact, ICS always exists when we train networks: the gradient strategy must update the weights in order to train the network, so the distribution of each layer necessarily varies. Furthermore, in the case of vanishing gradients, ICS is totally eliminated, yet the network training cannot work. This case illustrates that very slight ICS cannot support effective training. Thus, it seems that controlling ICS, instead of eliminating it, is effective for training networks. So how to control ICS in order to improve network training is another challenging issue.
+
+This paper proposes an ICS measure, i.e., the divergence, by using the Earth Mover (EM) distance [33], inspired by the success of Wasserstein Generative Adversarial Networks (WGAN) [1]. Furthermore, this paper simplifies the measure by leveraging the Kantorovich-Rubinstein duality [33].
+
+Based on this proposed ICS measure, this paper furthermore derives an upper bound of ICS between $p_{l}^{(t + \Delta t)}$ and $p_{l}^{(t)}$ at the $(t + \Delta t)$-th and $t$-th iterations, respectively. The upper bound shows that BN techniques can bound ICS in the low-dimensional case with small noise. Otherwise, the upper bound is out of BN's control. It is therefore necessary to analyze the lower bound of ICS, especially for non-trivial distributions. Thus, this paper also derives a lower bound of ICS. The result shows that the high-order moments and noise have great impact on the lower bound.
+
+In order to control ICS, this paper proposes an algorithm that unitizes the normalized outputs. It is obvious that normalizing the outputs can introduce the moment-dependent upper bound, but such normalization would degrade when the moment estimation is not accurate. In contrast, unitizing the outputs in this proposed algorithm can lead to a constant upper bound without noise. However, by simply unitizing the outputs, the bound is very tight such that the weights cannot be updated in a reasonable range. Instead, this paper introduces a trainable parameter $\alpha$ in the unitization, such that the upper bound is adjustable and ICS can be further controlled by fine-tuning $\alpha$ . It is important that the proposed unitization is embedded into the BN framework in order to reduce the information loss. The experiments show that the proposed unitization can outperform existing BN techniques in the benchmark datasets including CIFAR-10, CIFAR-100 [17] and ImageNet [25].
+
+# 2. Related work
+
+Batch Normalization aims to reduce ICS by stabilizing the distributions of layers' outputs [15]. In fact, BN only controls the first two moments by normalizing the outputs, which is inspired by the idea of whitening the outputs to make the training faster [20]. However, BN is required to work with a sufficiently large batch size in order to reduce the noise of moments, and will degrade when the restriction on batch sizes is more demanding in some tasks [6, 9, 24]. Thus the methods including LN [2], IN [5] and GN [36] have been proposed. These variants estimate the moments within each sample, mitigating the impact of micro-batch. Besides, Kalman Normalization (KN) addresses this problem by the merits of Kalman Filtering [34], and a method called 'EvalNorm' is proposed to estimate the moments more accurately for BN during inference [31].
+
+Other methods inspired by BN have been proposed to improve network training. Weight Normalization decouples the length of the weight from its direction by reparameterizing the weights, and speeds up convergence of the training [26]. Cho and Lee regard the weight space in a BN layer as a Riemannian manifold and provide a new learning rule following the intrinsic geometry of this manifold [4]. Cosine Normalization uses cosine similarity and bounds the result of the dot product, addressing the problem of large variance [22]. Wu et al. propose an algorithm that normalizes the layers' inputs with the $l_{1}$ norm to reduce computation and memory [35]. Huang et al. propose Decorrelated Batch Normalization, which whitens instead of normalizing the activations [14].
+
+However, there is no complete theoretical analysis of BN. Santurkar et al. attempt to demonstrate through experiments that the performance gain of BN is unrelated to the reduction of ICS [27]. However, their first experiment only shows that BN can improve network training in other ways. In the second experiment, the difference between gradients is an unsuitable ICS measure since gradients are sensitive and accurate estimation requires sufficient samples. Kohler et al. provide a theoretical analysis of BN [16], but it requires strong assumptions. In addition, Cai et al. focus on ordinary least squares regression and analyze the effects of gradient descent with BN on stability and convergence [3]. On the other hand, Yang et al. find that the maximum trainable depth of networks with BN is limited due to the gradient explosion caused by BN [37]. In general, the reason why BN works still remains unclear.
+
+# 3. Unitization
+
+The EM distance requires only weak assumptions and has been empirically shown to be effective in improving Generative Adversarial Networks (GANs) [8], where it replaces the traditional KL-divergence in the objective function [1]. According to the EM distance, the ICS measure for the $l$-th layer's outputs is defined as
+
+$$
+W \left(p _ {l} ^ {(t + \Delta t)}, p _ {l} ^ {(t)}\right) = \inf _ {\gamma \in \Pi \left(p _ {l} ^ {(t + \Delta t)}, p _ {l} ^ {(t)}\right)} \mathbb {E} _ {\left(\boldsymbol {x}, \boldsymbol {y}\right) \sim \gamma} \left[ | | \boldsymbol {x} - \boldsymbol {y} | | \right], \tag {2}
+$$
+
+where $\Pi(p_{l}^{(t + \Delta t)},p_{l}^{(t)})$ denotes the set of all joint distributions whose marginals are $p_{l}^{(t + \Delta t)}$ and $p_{l}^{(t)}$, respectively [1]. Then, by the Kantorovich-Rubinstein duality [33], the EM distance Eq. (2) can be rewritten as
+
+$$
+W \left(p _ {l} ^ {(t + \Delta t)}, p _ {l} ^ {(t)}\right) = \sup _ {| | f | | _ {L} \leq 1} \mathbb {E} _ {\boldsymbol {x} \sim p _ {l} ^ {(t + \Delta t)}} [ f (\boldsymbol {x}) ] - \mathbb {E} _ {\boldsymbol {y} \sim p _ {l} ^ {(t)}} [ f (\boldsymbol {y}) ], \tag {3}
+$$
+
+where the distance is obtained by optimizing $f$ over the 1-Lipschitz function space (see the algorithm that estimates the EM distance in the supplementary material).
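+
+The paper's own estimation algorithm is given in its supplementary material; purely as an illustration of the dual form in Eq. (3), the sketch below maximizes $\mathbb{E}[f(\boldsymbol{x})] - \mathbb{E}[f(\boldsymbol{y})]$ over a small network whose Lipschitz constant is only crudely limited by weight clipping, as in WGAN [1]. It is a rough approximation under these assumptions, not the authors' estimator.
+
+```python
+import torch
+import torch.nn as nn
+
+def em_distance_estimate(x_new, x_old, steps=500, clip=0.01, lr=5e-4):
+    """Rough estimate of W(p_new, p_old) from two sample batches of a layer's
+    outputs (float tensors of shape (N, d)) via the Kantorovich-Rubinstein dual:
+    maximize E[f(x_new)] - E[f(x_old)] over an approximately 1-Lipschitz critic f,
+    enforced here only crudely by weight clipping."""
+    x_new, x_old = x_new.float(), x_old.float()
+    d = x_new.shape[1]
+    critic = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 1))
+    opt = torch.optim.RMSprop(critic.parameters(), lr=lr)
+    for _ in range(steps):
+        loss = -(critic(x_new).mean() - critic(x_old).mean())
+        opt.zero_grad()
+        loss.backward()
+        opt.step()
+        for p in critic.parameters():
+            p.data.clamp_(-clip, clip)   # keep the critic roughly 1-Lipschitz
+    with torch.no_grad():
+        return (critic(x_new).mean() - critic(x_old).mean()).item()
+```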
+
+# 3.1. The Upper Bound
+
+For $d$ -dimensional outputs of the $l$ th layer, denote by $\pmb{\mu}^{(t)} = (\mu_1^{(t)},\mu_2^{(t)},\dots,\mu_d^{(t)})$ and $(\pmb{\sigma}^{(t)})^2 = ((\sigma_1^{(t)})^2,(\sigma_2^{(t)})^2,\dots,(\sigma_d^{(t)})^2)$ the mean and variance of the distribution $p_l^{(t)}$ , respectively. The upper bound over $W(p_{l}^{(t + \Delta t)},p_{l}^{(t)})$ is formed by the first two moments (see the proofs of all the theorems in the supplementary material).
+
+Theorem 1. Suppose that $|\mu_i^{(t)}| < \infty, |\mu_i^{(t + \Delta t)}| < \infty, 1 \leq i \leq d$ . Then,
+
+$$
+\begin{array}{l} W \left(p _ {l} ^ {(t + \Delta t)}, p _ {l} ^ {(t)}\right) \leq \sum_ {i = 1} ^ {d} \left(\sigma_ {i} ^ {(t + \Delta t)}\right) ^ {2} + \sum_ {i = 1} ^ {d} \left(\sigma_ {i} ^ {(t)}\right) ^ {2} \\ + \left(\sum_ {i = 1} ^ {d} \left(\mu_ {i} ^ {(t + \Delta t)} - \mu_ {i} ^ {(t)}\right) ^ {2}\right) ^ {\frac {1}{2}} + 2. \tag {4} \\ \end{array}
+$$
+
+In BN, the output is normalized by the estimated mean $\hat{\mu}_i$ and standard deviation $\hat{\sigma}_i$ . Thus, for the normalized output, assume that $\mu_i^{(t)} = \epsilon_{\mu ,i}^{(t)},(\sigma_i^{(t)})^2 = 1 + \epsilon_{\sigma^2,i}^{(t)},1\leq i\leq d,$ where $\epsilon_{\mu ,i}^{(t)},\epsilon_{\sigma^2,i}^{(t)},1\leq i\leq d$ are noise. According to the above theorem, the upper bound is
+
+$$
+\begin{array}{l} W \left(p _ {l} ^ {(t + \Delta t)}, p _ {l} ^ {(t)}\right) \leq \sum_ {i = 1} ^ {d} \epsilon_ {\sigma^ {2}, i} ^ {(t + \Delta t)} + \sum_ {i = 1} ^ {d} \epsilon_ {\sigma^ {2}, i} ^ {(t)} + 2 d \tag {5} \\ + \left(\sum_ {i = 1} ^ {d} \left(\epsilon_ {\mu , i} ^ {(t + \Delta t)} - \epsilon_ {\mu , i} ^ {(t)}\right) ^ {2}\right) ^ {\frac {1}{2}} + 2. \\ \end{array}
+$$
+
+It's obvious that normalizing the outputs by noise-free moments will lead to a constant upper bound and impose a constraint on ICS. In contrast, the distance for the unnormalized outputs is unbounded (see an example of the unbounded distance in the supplementary material). Nevertheless, the noise cannot be controlled in practice, and for
+
+high-dimensional outputs, the bound in Eq. (5) might be too loose to constrain the distance due to a large $d$. In this case, the ICS for non-trivial distributions cannot be bounded effectively by controlling the first two moments as BN techniques do. Then, the analysis of the lower bound is required.
+
+# 3.2. The Lower Bound
+
+For convenience, let $\pmb{x} = (x_{1}, x_{2}, \ldots, x_{d})$ and $\pmb{y} = (y_{1}, y_{2}, \ldots, y_{d})$ . Then, the lower bound on the distance is obtained by constructing a 1-Lipschitz function.
+
+Theorem 2. Suppose that $C > 0$ is a real number, and $n \geq 2$ is an integer. Then,
+
+$$
+\begin{array}{l} W \left(p _ {l} ^ {(t + \Delta t)}, p _ {l} ^ {(t)}\right) = \sup _ {\| f \| _ {L} \leq 1} \mathbb {E} _ {\boldsymbol {x} \sim p _ {l} ^ {(t + \Delta t)}} [ f (\boldsymbol {x}) ] - \mathbb {E} _ {\boldsymbol {y} \sim p _ {l} ^ {(t)}} [ f (\boldsymbol {y}) ] \\ \geq \left| \mathbb {E} _ {\boldsymbol {x} \sim p _ {l} ^ {(t + \Delta t)}} \left[ f _ {n, C} (\boldsymbol {x}) \right] - \mathbb {E} _ {\boldsymbol {y} \sim p _ {l} ^ {(t)}} \left[ f _ {n, C} (\boldsymbol {y}) \right] \right|, \tag {6} \\ \end{array}
+$$
+
+where $f_{n,C}$ is the 1-Lipschitz function defined as
+
+$$
+f _ {n, C} (\boldsymbol {x}) = \frac {1}{n C ^ {n - 1} d ^ {\frac {1}{2}}} \Bigg (\sum_ {| x _ {i} | \leq C} x _ {i} ^ {n} + \sum_ {x _ {i} < - C} (- C) ^ {n} + \sum_ {x _ {i} > C} C ^ {n} \Bigg). \tag {7}
+$$
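+
+Evaluated directly, Eq. (7) amounts to clipping each coordinate to $[-C, C]$ before raising it to the $n$-th power, since the out-of-range terms equal $(-C)^n$ and $C^n$; a small sketch of this test function, written here for illustration:
+
+```python
+import numpy as np
+
+def f_nC(x, n=2, C=1.0):
+    """1-Lipschitz test function of Eq. (7) for a batch of d-dimensional points."""
+    x = np.asarray(x, dtype=float)
+    d = x.shape[-1]
+    clipped = np.clip(x, -C, C)                      # handles the three sums at once
+    return (clipped ** n).sum(axis=-1) / (n * C ** (n - 1) * np.sqrt(d))
+```
+
+By Eq. (6), the absolute difference of the empirical means of this function over samples drawn at the two iterations gives a lower bound on the EM distance.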
+
+To simplify the analysis, assume that the supports of the distributions are subsets of $[-C_0,C_0]^d$ for some $C_0 > 0$ . Then, the lower bound is formed by the nth-order moments. For $n > 2$ , it's straightforward that the high-order moments affect the lower bound, which cannot be controlled by BN especially in the case of the relaxed upper bound. On the other hand, for $n = 2$ and the normalized output, the lower bound is
+
+$$
+\begin{array}{l} W (p _ {l} ^ {(t + \Delta t)}, p _ {l} ^ {(t)}) \\ \geq \frac {1}{2 C _ {0} d ^ {\frac {1}{2}}} \left| \sum_ {i = 1} ^ {d} \left(\epsilon_ {\mu , i} ^ {(t + \Delta t)}\right) ^ {2} + \epsilon_ {\sigma^ {2}, i} ^ {(t + \Delta t)} - \left(\epsilon_ {\mu , i} ^ {(t)}\right) ^ {2} - \epsilon_ {\sigma^ {2}, i} ^ {(t)} \right|. \tag {8} \\ \end{array}
+$$
+
+The lower bound in Eq. (8) is dominated by the noise. Thus, BN degrades in such case especially for micro-batches. Some methods have been proposed, e.g., GN, to reduce the noise of the moments rather than eliminating the noise. So the lower bound is still dependent on moments.
+
+Based on such analysis of BN, this paper proposes an algorithm with an adjustable upper bound that is noise-free and moment-independent to further bound the distance.
+
+# 3.3. Vanilla Unitization
+
+The proposed algorithm unitizes layers' outputs, and the vanilla unitization transformation is defined as
+
+$$
+g (\boldsymbol {x}) = \left\{ \begin{array}{l l} \frac {\boldsymbol {x}}{\| \boldsymbol {x} \| _ {2}} & , \| \boldsymbol {x} \| _ {2} \neq 0 \\ \boldsymbol {c} & , \| \boldsymbol {x} \| _ {2} = 0 \end{array} \right., \tag {9}
+$$
+
+where $\pmb{c}$ is a constant unit vector. Similarly, the upper bound for the unitized output is given. In fact, the EM distance for $g(\pmb{x})$ is defined as
+
+$$
+\begin{array}{l} W (p _ {U} ^ {(t + \Delta t)}, p _ {U} ^ {(t)}) \\ = \sup _ {| | f | | _ {L} \leq 1} \mathbb {E} _ {\boldsymbol {x} \sim p _ {l} ^ {(t + \Delta t)}} [ f (g (\boldsymbol {x})) ] - \mathbb {E} _ {\boldsymbol {y} \sim p _ {l} ^ {(t)}} [ f (g (\boldsymbol {y})) ], \tag {10} \\ \end{array}
+$$
+
+where $p_U^{(t)}$ is the distribution of the unitized outputs.
+
+Theorem 3.1. Suppose that for $\pmb{x} \sim p_l^{(t)}$ , $g(\pmb{x}) \sim p_U^{(t)}$ . Then,
+
+$$
+W \left(p _ {U} ^ {(t + \Delta t)}, p _ {U} ^ {(t)}\right) \leq 2. \tag {11}
+$$
+
+For $g$, the upper bound is a constant independent of all parameters including $d$. Then, ICS for the unitized outputs is exactly bounded regardless of the distribution $p_{l}^{(t)}$. Hence, by unitizing the outputs, ICS is in fact fully controlled by this constant bound. However, the constant upper bound leads to another problem. For $t = 0$ and any $\Delta t > 0$, the distribution $p_{U}^{(\Delta t)}$ is constrained such that the distance between $p_{U}^{(\Delta t)}$ and $p_{U}^{(0)}$ is no more than the constant 2. This might be a severe problem, especially when the network is poorly initialized. Thus, the unitization has to be modified.
+
+# 3.4. Modified Unitization
+
+To alleviate the problem of the very tight bound, define the transformation, which partly unitizes the outputs, as
+
+$$
+g (\boldsymbol {x}; \alpha) = \left\{ \begin{array}{l l} \boldsymbol {c} & , | | \boldsymbol {x} | | _ {2} = 0, \alpha = 1 \\ \frac {\boldsymbol {x}}{\alpha | | \boldsymbol {x} | | _ {2} + (1 - \alpha)} & , \text {otherwise} \end{array} \right., \tag {12}
+$$
+
+where $\alpha \in [0,1]$ is a parameter. Analogously, the upper bound w.r.t. $g(\pmb {x};\alpha)$ is given.
+
+Theorem 3.2. Suppose that for $\alpha \in [0,1]$ and $\pmb{x} \sim p_l^{(t)}$ , $g(\pmb{x};\alpha) \sim p_U^{(t)}$ . Then,
+
+$$
+\begin{array}{l} W \left(p _ {U} ^ {(t + \Delta t)}, p _ {U} ^ {(t)}\right) \leq \mathbb {I} _ {\alpha = 0} (\alpha) \cdot \left(\mathbb {E} _ {\boldsymbol {x} \sim p _ {l} ^ {(t + \Delta)}} [ | | \boldsymbol {x} | | _ {2} ] \right. \\ + \mathbb {E} _ {\boldsymbol {y} \sim p _ {l} ^ {(t)}} [ \| \boldsymbol {y} \| _ {2} ]) + \mathbb {I} _ {\alpha > 0} (\alpha) \cdot \frac {2}{\alpha}. \tag {13} \\ \end{array}
+$$
+
+Note that $\alpha = 0$ implies $g(\pmb{x};\alpha)$ is an identity mapping, and $\alpha > 0$ implies the distance is exactly bounded by $2 / \alpha$. Thus, the bound is dominated by $\alpha$, and the desired bound is obtained by fine-tuning $\alpha$ over $[0,1]$.
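+
+A direct sketch of Eq. (12), written here for illustration ($c$ stands for the constant unit vector of the zero-norm case):
+
+```python
+import numpy as np
+
+def g_alpha(x, alpha, c=None):
+    """Partly-unitized transform of Eq. (12): alpha = 0 is the identity,
+    alpha = 1 is full unitization; c is the constant unit vector used when
+    ||x|| = 0 and alpha = 1."""
+    x = np.asarray(x, dtype=float)
+    norm = np.linalg.norm(x)
+    if norm == 0.0 and alpha == 1.0:
+        return c if c is not None else np.eye(len(x))[0]  # any fixed unit vector
+    return x / (alpha * norm + (1.0 - alpha))
+```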
+
+Furthermore, considering a set of parameters $\alpha = (\alpha_{1},\alpha_{2},\dots ,\alpha_{d})\in [0,1]^{d}$ , the general unitization is defined as
+
+$$
+g (\boldsymbol {x}; \boldsymbol {\alpha}) = \left\{ \begin{array}{l l} \mathbf {0} & , | | \boldsymbol {x} | | _ {2} = 0 \\ \left(\left(| | \boldsymbol {x} | | _ {2} - 1\right) \cdot \operatorname {d i a g} (\boldsymbol {\alpha}) + E\right) ^ {- 1} \boldsymbol {x} & , | | \boldsymbol {x} | | _ {2} > 0 \end{array} \right. \tag {14}
+$$
+
+where $\mathrm{diag}(\alpha)$ is the diagonal matrix of $\alpha$ and $E$ is an identity matrix. Likewise, the upper bound for $g(\pmb{x};\pmb{\alpha})$ is given.
+
+Theorem 3.3. Suppose that for $\alpha \in [0,1]^d$ and $\pmb{x} \sim p_l^{(t)}$ , $g(\pmb{x};\pmb{\alpha}) \sim p_U^{(t)}$ . Then,
+
+$$
+\begin{array}{l} W \left(p _ {U} ^ {(t + \Delta t)}, p _ {U} ^ {(t)}\right) \leq \mathbb {I} _ {\min _ {j} \alpha_ {j} > 0} (\boldsymbol {\alpha}) \cdot \frac {2}{\min _ {j} \alpha_ {j}} \\ + \mathbb {I} _ {\min _ {j} \alpha_ {j} = 0} (\boldsymbol {\alpha}) \cdot \left(\mathbb {E} _ {\boldsymbol {x} \sim p _ {l} ^ {(t + \Delta)}} [ | | \boldsymbol {x} | | _ {2} ] \right. \\ + \mathbb {E} _ {\boldsymbol {y} \sim p _ {l} ^ {(t)}} [ \| \boldsymbol {y} \| _ {2} ] + 2). \tag {15} \\ \end{array}
+$$
+
+The minimum $\alpha_{*} = \min_{j}\alpha_{j}$ dominates the upper bound. If $\alpha_{*} = 0$, then there exists $i$ such that the scale of $x_{i}$ remains unchanged after the unitization, and the EM distance for the marginal distribution of $x_{i}$ is unbounded. In contrast, the constant bound $2 / \alpha_{*}$ is obtained when $\alpha_{*} > 0$. Furthermore, if $\alpha_{i}$ is fixed for some $i$, the other parameters $\alpha_{j}, j \neq i$ can be freely fine-tuned over $[\alpha_{i},1]$ without changing the bound. Thus, $g(\pmb{x};\pmb{\alpha})$ is more flexible and is used in the proposed algorithm. However, the unitized outputs lose some information, e.g., similarity between the samples, which cannot be recovered by an affine transformation like that in BN. A smaller bound leads to more information loss, so a trade-off is required. The unitization algorithm is given next.
+
+# 3.5. Algorithm
+
+For a network, $\alpha$ in each unitization layer is trained with the weights to reduce the objective function. However, since $\alpha \in [0,1]^d$ , the training would lead to a constrained optimization problem. To avoid the problem and make the training stable, this paper uses a simple interpolation method for Eq. (14). The practical transformation is defined as
+
+$$
+g (\boldsymbol {x}; \boldsymbol {\alpha}, \epsilon) = \left[ \frac {1}{\sqrt {| | \boldsymbol {x} | | _ {2} ^ {2} + \epsilon}} \boldsymbol {\alpha} + (\mathbf {1} - \boldsymbol {\alpha}) \right] \odot \boldsymbol {x}, \tag {16}
+$$
+
+where $\epsilon > 0$ keeps the denominator non-zero, and $\odot$ denotes element-wise multiplication. Likewise, we provide the upper bound for the practical unitization.
+
+Theorem 3.4. Suppose that for $\alpha \in \mathbb{R}^d, \epsilon > 0$ and $\pmb{x} \sim p_l^{(t)}, g(\pmb{x}; \pmb{\alpha}, \epsilon) \sim p_g^{(t)}$ . Then,
+
+$$
+\begin{array}{l} W \left(p _ {g} ^ {(t + \Delta t)}, p _ {g} ^ {(t)}\right) \leq 2 \| \boldsymbol {\alpha} \| _ {\infty} + \| \mathbf {1} - \boldsymbol {\alpha} \| _ {\infty} \left(\mathbb {E} _ {\boldsymbol {x} \sim p _ {l} ^ {(t + \Delta t)}} [ \| \boldsymbol {x} \| _ {2} ] \right. \\ + \mathbb {E} _ {\boldsymbol {y} \sim p _ {l} ^ {(t)}} [ \| \boldsymbol {y} \| _ {2} ]). \tag {17} \\ \end{array}
+$$
+
+Intuitively, the theoretical unitization (14) and practical unitization (16) tune the bounds (15) and (17) respectively by $\alpha$ in a similar way that $\alpha \rightarrow 1^{-}$ yields the tight bound while $\alpha \rightarrow 0^{+}$ yields the loose bound, though the bound
+
+(17) for the practical one is relatively relaxed. In addition, $||\mathbf{1} - \pmb{\alpha}||_{\infty}$ in the second term of the bound (17) requires $\pmb{\alpha} \in [0,2]^d$ for the reduction of ICS, which is a weaker constraint compared with $\pmb{\alpha} \in [0,1]^d$ in the theoretical bound (15). Therefore, the effects of the practical unitization are similar to those of the theoretical one.
+
+In the proposed algorithm, the unitization Eq. (16) is embedded into the framework of BN to reduce the information loss. In fact, the unitization alone might require a large $\alpha$ to bound the EM distance, incurring more information loss. In contrast, there is less information loss in BN since the similarity between the normalized outputs remains unchanged and the affine transformation in BN can recover some information. Thus, the proposed algorithm integrates these two techniques to bound ICS with reasonable information loss. The algorithm is presented in Algorithm 1, where element-wise division is also denoted by $/$. The moments $\mu$ and $\sigma^2$ used in inference are computed in the same way as in [15].
+
+Algorithm 1 Unitization Algorithm
+Input: dataset $\{\pmb {x}_i\}_{i = 1}^n$ , trainable parameters $\alpha ,\gamma$ and $\beta$
+Output: unitized results $\{\pmb {y}_i\}_{i = 1}^n$
+1: $\pmb {\mu}\gets \frac{1}{n}\sum_{i}\pmb{x}_{i}$
+2: $\sigma^2\gets \frac{1}{n}\sum_i(\pmb {x}_i - \pmb {\mu})^2$
+3: for $i\gets 1$ to n do
+4: $\hat{\pmb{x}}_i\gets (\pmb {x}_i - \pmb {\mu}) / \sqrt{\pmb{\sigma}^2 + \epsilon}$
+5: $p\gets 1 / \sqrt{||\hat{\pmb{x}}_i||_2^2 + \epsilon}$
+6: $\overline{\pmb{x}}_i\gets [p\pmb {\alpha} + (\mathbf{1} - \pmb {\alpha})]\odot \hat{\pmb{x}}_i$
+7: $\pmb {y}_i\gets \gamma \odot \overline{\pmb{x}}_i + \beta$
+8: end for
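+
+A runnable NumPy sketch of Algorithm 1 over a whole mini-batch (a vectorized re-implementation for illustration; it follows the listed steps but is not the authors' code):
+
+```python
+import numpy as np
+
+def unitization_forward(X, alpha, gamma, beta, eps=1e-5):
+    """Algorithm 1 on a mini-batch X of shape (n, d): batch-normalize each
+    feature, partly unitize each sample by its l2 norm (Eq. 16), then apply
+    the affine transform as in BN. alpha, gamma, beta have shape (d,)."""
+    mu = X.mean(axis=0)                                    # step 1
+    var = X.var(axis=0)                                    # step 2
+    X_hat = (X - mu) / np.sqrt(var + eps)                  # step 4: normalize
+    p = 1.0 / np.sqrt((X_hat ** 2).sum(axis=1, keepdims=True) + eps)   # step 5
+    X_bar = (p * alpha + (1.0 - alpha)) * X_hat            # step 6: unitize
+    return gamma * X_bar + beta                            # step 7: affine transform
+
+# Example: d = 8 features, with alpha initialized to 0 as in the experiments below.
+X = np.random.default_rng(0).normal(size=(128, 8))
+Y = unitization_forward(X, alpha=np.zeros(8), gamma=np.ones(8), beta=np.zeros(8))
+```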
+
+# 3.6. Unitized Convolutional Layers
+
+To take the spatial context of image data into account, this paper also proposes unitized convolutional layers. As recommended by [15], the moments $\mu$ and $\sigma^2$ in Algorithm 1 are computed over the whole mini-batch at different locations w.r.t. a feature map, and they are shared within the same feature map (Figure 1(a)). But the norm in the unitization is computed in a different way. How to compute the norm is determined by the definition of a single sample for image data. A simple algorithm follows the idea of BN in convolutional layers, where the pixels at the same location in all channels are regarded as a single sample (Figure 1(b)), and this sample is then unitized by its norm. However, this algorithm scales the pixels by location-dependent norms, ignoring the spatial context.
+
+Thus, the computation of the norms has to be modified. Instead, the pixels at all locations in all feature maps within an image form a single sample (Figure 1(c)). The sample's norm will be location-independent and shared within these pixels. However, this will lead to a very large norm for a pixel. As the inverse of the norm, $p$ in Algorithm 1 will be
+
+Algorithm 2 Unitization Algorithm for Convolutional Layers
+Input: dataset $\mathcal{D} = \{x_{ij}^{(k)}|1\leq k\leq N,1\leq i\leq C,1\leq j$ $\leq HW\}$ , trainable parameters $\alpha ,\gamma$ and $\beta$
+Output: unitized results $\{y_{ij}^{(k)}|1\leq k\leq N,1\leq i\leq C,$ $1\leq j\leq HW\}$
+1: for $k\gets 1$ to $N$ do
+2: $s\gets 0$
+3: for $i\gets 1$ to $C$ do
+4: for $j\gets 1$ to HW do
+5: $\hat{x}_{ij}^{(k)} = \mathrm{BN}(x_{ij}^{(k)};\mathcal{D})$
+6: $s\gets s + \hat{x}_{ij}^{(k)2}$
+7: end for
+8: end for
+9: $s\gets s / (nHW)$
+10: $p\gets 1 / \sqrt{s + \epsilon}$
+11: for $i\gets 1$ to $C$ do
+12: for $j\gets 1$ to HW do
+13: $\bar{x}_{ij}^{(k)}\gets [p\alpha_i + (1 - \alpha_i)]\hat{x}_{ij}^{(k)}$
+14: $y_{ij}^{(k)}\gets \gamma_i\bar{x}_{ij}^{(k)} + \beta_i$
+15: end for
+16: end for
+17: end for
+
+relatively small, causing $p\alpha$ to be ignored when $\alpha$ is fine-tuned. Then $\alpha$ would only scale $\hat{\pmb{x}}_i$ by $1 - \alpha$, but this scale is already controlled by $\gamma$. Hence, the norm is divided by a constant related to the number of pixels before the unitization.
+
+The modified unitization is presented in Algorithm 2, where $x_{ij}^{(k)}$ denotes the value at the $j$ th location in the $i$ th feature map, generated from the $k$ th training sample; $\hat{x}_{ij}^{(k)}$ , $\bar{x}_{ij}^{(k)}$ and $y_{ij}^{(k)}$ are defined in the same way; $\mathrm{BN}(\cdot ;\mathcal{D})$ denotes the normalization transformation for convolutional layers [15] using the dataset $\mathcal{D}$ without the affine transformation; $\alpha_{i},\gamma_{i}$ and $\beta_{i}$ denote the $i$ th element of $\alpha ,\gamma$ and $\beta$ , respectively; the $n$ in Line 9 is a hyper-parameter that is set to $HW$ by default.
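+
+A compact PyTorch-style sketch of Algorithm 2 on an NCHW feature map (an illustration using training-mode batch statistics and the default $n = HW$; not the authors' code):
+
+```python
+import torch
+
+def conv_unitization_forward(x, alpha, gamma, beta, eps=1e-5, n=None):
+    """Algorithm 2 on a feature map x of shape (N, C, H, W): per-channel batch
+    normalization without affine (line 5), a per-image squared norm shared over
+    all pixels and channels and divided by n*H*W (lines 6 and 9), then the
+    unitization and per-channel affine transform (lines 13-14).
+    alpha, gamma, beta have shape (C,)."""
+    N, C, H, W = x.shape
+    if n is None:
+        n = H * W                          # default value of the hyper-parameter n
+    mu = x.mean(dim=(0, 2, 3), keepdim=True)
+    var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
+    x_hat = (x - mu) / torch.sqrt(var + eps)
+    s = (x_hat ** 2).sum(dim=(1, 2, 3), keepdim=True) / (n * H * W)
+    p = 1.0 / torch.sqrt(s + eps)
+    a = alpha.view(1, C, 1, 1)
+    x_bar = (p * a + (1 - a)) * x_hat
+    return gamma.view(1, C, 1, 1) * x_bar + beta.view(1, C, 1, 1)
+```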
+
+# 4. Experiments
+
+# 4.1. Estimated Moments
+
+To verify the ability of the unitization to control higher-order moments, we train simple neural networks on the MNIST dataset [19] and estimate the moments of a certain layer's outputs.
+
+Network Architectures: The inputs of the networks are $28 \times 28$ images, followed by a stack of fully-connected layers with ReLU activations, consisting of ten 100-unit layers and one 8-unit layer. A BN/unitization layer follows
+
+
+Figure 1: Different methods for estimating the statistics. Like the visualization of normalization methods [36], each subplot shows a feature map tensor, with $N$, $C$ and $(H, W)$ as the batch axis, the channel axis and the spatial axes, respectively. (a) highlights the pixel values (in red) used to compute the moments $\mu$ and $\sigma$ in BN, while (b) and (c) highlight the pixel values used to obtain the norm. The estimated moments and norms are shared within these pixels.
+
+
+
+
+
+
+Figure 2: Moments estimated over the 8-unit layers' outputs. Each line in a subfigure represents the moment w.r.t. a unit's output. In general, more stable moments are obtained by using the unitization.
+
+
+
+
+
+
+
+each fully-connected layer. The networks end with a fully-connected layer for 10 classes.
+
+Implementation Details: The networks are trained using mini-batch gradient descent for 200 epochs with a batch size of 128. The learning rate starts from 0.05 and is divided by 5 at the 61st, 121st and 161st epochs. At the end of each epoch, the training samples are fed into the networks to obtain the normalized/unitized 8-unit layer's outputs. Then, we estimate the mean, variance, skewness and kurtosis over the outputs.
+
+Results: As shown in Figure 2, the estimated mean and variance for both the BN and the unitization layers are stable throughout training. However, the estimated skewness and kurtosis are unstable for the BN layer, with large fluctuations in the red line. By the lower bound in Eq. (6), the EM distance will therefore be large. In contrast, the skewness and kurtosis of the unitized outputs are more stable. The proposed unitization thus controls the high-order moments.
+
+# 4.2. Classification Results on CIFAR
+
+For the tasks of image recognition, we run the experiments on CIFAR-10 and CIFAR-100 datasets [17], following the data augmentation recommended by [21] (the experiments of comparing the unitization with GN [36] are provided in the supplementary material).
+
+Network Architectures: The ResNet-20, ResNet-110 and ResNet-164 [11, 12] with BN or the unitization are trained to compare the performance. The ResNets follow the general architecture [12] with full pre-activation blocks.
+
+Implementation Details: In each experiment, for BN and the unitization, the networks are initialized with the same weights, generated using the method of [10], to reduce the impact of initialization. Each network is trained by mini-batch gradient descent with Nesterov's momentum, and the same learning rate schedule as in the moment experiments is used. The momentum is 0.9 and the weight decay is 0.0005. The mini-batch sizes are 128 and 64 when training {ResNet-20,
+
+ResNet-110} and ResNet-164, respectively. Each network is evaluated after 200 epochs and the median accuracy of 5 runs is reported. The parameter $\alpha$ is initialized to 0. The parameters $\gamma$ and $\beta$ are initialized to 1 and 0, respectively, as suggested by [15].
+
+Table 1 Classification accuracy on the CIFAR-10 testing dataset.
+
+| Network | Mini-batch size | Accuracy |
+| --- | --- | --- |
+| ResNet-20 (BN) | 128 | 91.79% |
+| ResNet-20 (Unitization) | 128 | 92.21% |
+| ResNet-110 (BN) [12] | 128 | 93.63% |
+| ResNet-110 (BN) | 128 | 93.99% |
+| ResNet-110 (Unitization) | 128 | 94.12% |
+| ResNet-164 (BN) [12] | 128 | 94.54% |
+| ResNet-164 (BN) | 64 | 94.34% |
+| ResNet-164 (Unitization) | 64 | 94.62% |
+
+Results: Table 1 shows the results on the CIFAR-10 dataset, where the results in [12] are also presented for comparison. The proposed algorithm performs better than BN, raising the classification accuracy for each ResNet. But there is less improvement in the accuracy of the deeper network, which might be explained by the diminishing benefit of stacking more layers in deeper networks. Actually, the ResNet-1001 [12] only achieves an accuracy of $95.08\%$, which is the limit of these ResNets with BN. The accuracy of ResNet-164 in [12] is only $0.54\%$ less than that of the ResNet-1001, but using the unitization raises the accuracy by $0.08\%$.
+
+The results on the CIFAR-100 dataset are reported in Table 2, where the unitization still outperforms BN, with an increase of over $1\%$ in accuracy in each experiment.
+
+Table 2 Classification accuracy on the CIFAR-100 testing dataset.
+
+| Network | Mini-batch size | Accuracy |
+| --- | --- | --- |
+| ResNet-20 (BN) | 128 | 66.43% |
+| ResNet-20 (Unitization) | 128 | 67.49% |
+| ResNet-110 (BN) | 128 | 72.27% |
+| ResNet-110 (Unitization) | 128 | 73.31% |
+| ResNet-164 (BN) [12] | 128 | 75.67% |
+| ResNet-164 (BN) | 64 | 76.56% |
+| ResNet-164 (Unitization) | 64 | 77.58% |
+
+# 4.3. Classification Results on ImageNet
+
+We further evaluate the unitization on the ImageNet 2012 classification dataset [25]. The networks are trained on the
+
+1.28M training images, and evaluated on the 50k validation images. Only the scale augmentation [11,30] is used.
+
+Network Architectures: Only the ResNet-101 with BN or the unitization is trained to compare the performance. The network follows the architecture of [11] but with full pre-activation blocks [12]. Besides, the outputs of each shortcut connection and of the final block are not normalized or unitized.
+
+Implementation Details: Like the experiments on the CIFAR datasets, the weights are initialized by the method of [10], shared between the experiments and trained by gradient descent with the same momentum, but the weight decay is 0.0001. The learning rate starts from 0.01 and is divided by 5 at the 31st, 61st and 91st epochs. The batch size is 64 for a single GPU. After 120 epochs, the networks are evaluated on the validation data by two methods. The first method resizes the images with the shorter side in \{224, 256, 384, 480, 640\} and averages the scores over 42 crops at all scales (2 central crops for 224-scale images, 10 standard crops for other resized images). The second method adopts the fully-convolutional form and averages the scores over the same multi-scale images [11]. Besides, the $n$ in Line 9 of Algorithm 2 is fixed and set to the same value as in training for the multi-scale images.
+
+Table 3 Classification accuracy on the ImageNet dataset.
+
+| Algorithm | Method/Mini-batch size | Top-1 | Top-5 |
+| --- | --- | --- | --- |
+| BN | multi-scale crops/64 | 78.12% | 93.45% |
+| Unitization | multi-scale crops/64 | 78.33% | 93.22% |
+| BN | fully-convolution/64 | 76.47% | 93.02% |
+| Unitization | fully-convolution/64 | 77.84% | 93.33% |
+| BN [11] | fully-convolution/256 | 80.13% | 95.40% |
+
+Results: In these results, the unitization outperforms BN in general, with only the top-5 accuracy for the first method being lower than that of BN. However, there is a performance gap between the reproduced result and the accuracy in [11], which might be explained by the different implementation details, including the data augmentation, the architecture and hyper-parameters like the batch size and the learning rate. But for the fully-convolutional evaluation method recommended by [11], the accuracy increases by over $1\%$ using the unitization. In general, the unitization shows higher performance for classification tasks.
+
+# 5. Conclusion
+
+This paper proposes an ICS measure by using the EM distance, and provides a theoretical analysis of BN through the upper and lower bounds. The moment-dependent upper
+
+bound has shown that BN techniques can effectively control ICS only for the low-dimensional outputs with small noise in the moments, but would degrade in other cases. Meanwhile, the high-order moments and noise that are out of BN's control have great impact on the lower bound. Then, this paper proposes the unitization algorithm with the noise-free and moment-independent upper bound. By training the parameter in the unitization, the bound can be fine-tuned to further control ICS. The experiments demonstrate the proposed algorithm's control of high-order moments and performance on the benchmark datasets including CIFAR-10, CIFAR-100 and ImageNet.
+
+# References
+
+[1] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In International Conference on Machine Learning, pages 214-223, 2017.
+[2] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
+[3] Yongqiang Cai, Qianxiao Li, and Zuowei Shen. A quantitative analysis of the effect of batch normalization on gradient descent. arXiv preprint arXiv:1810.00122, 2018.
+[4] Minhyung Cho and Jaehyung Lee. Riemannian approach to batch normalization. In Advances in Neural Information Processing Systems, pages 5225-5235, 2017.
+[5] Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016.
+[6] Ross Girshick. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 1440-1448, 2015.
+[7] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pages 249-256, 2010.
+[8] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
+[9] Kaiming He, Georgia Gkioxari, Piotr Dollar, and Ross Girshick. Mask r-cnn. In Computer Vision (ICCV), 2017 IEEE International Conference on, pages 2980-2988. IEEE, 2017.
+[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pages 1026-1034, 2015.
+[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016.
+[12] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, pages 630-645. Springer, 2016.
+[13] Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdelrahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal processing magazine, 29(6):82-97, 2012.
+[14] Lei Huang, Dawei Yang, Bo Lang, and Jia Deng. Decorrelated batch normalization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 791-800, 2018.
+[15] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
+[16] Jonas Kohler, Hadi Daneshmand, Aurelien Lucchi, Ming Zhou, Klaus Neymeyr, and Thomas Hofmann. Exponential convergence rates for batch normalization: The power of length-direction decoupling in non-convex optimization. arXiv preprint arXiv:1805.10694, 2018.
+[17] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
+[18] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097-1105, 2012.
+[19] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
+[20] Yann A LeCun, Léon Bottou, Genevieve B Orr, and Klaus-Robert Müller. Efficient backprop. In Neural networks: Tricks of the trade, pages 9-48. Springer, 2012.
+[21] Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. In Artificial Intelligence and Statistics, pages 562-570, 2015.
+[22] Chunjie Luo, Jianfeng Zhan, Lei Wang, and Qiang Yang. Cosine normalization: Using cosine similarity instead of dot product in neural networks. arXiv preprint arXiv:1702.05870, 2017.
+[23] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602, 2013.
+[24] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91-99, 2015.
+[25] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211-252, 2015.
+[26] Tim Salimans and Diederik P Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems, pages 901-909, 2016.
+
+[27] Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, and Aleksander Madry. How does batch normalization help optimization? In Advances in Neural Information Processing Systems, pages 2483-2493, 2018.
+[28] Hidetoshi Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of statistical planning and inference, 90(2):227-244, 2000.
+[29] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. Nature, 550(7676):354, 2017.
+[30] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
+[31] Saurabh Singh and Abhinav Shrivastava. Evalnorm: Estimating batch normalization statistics for evaluation. In Proceedings of the IEEE International Conference on Computer Vision, pages 3633-3641, 2019.
+[32] Jonathan J Tompson, Arjun Jain, Yann LeCun, and Christoph Bregler. Joint training of a convolutional network and a graphical model for human pose estimation. In Advances in neural information processing systems, pages 1799-1807, 2014.
+[33] Cédric Villani. Optimal transport: old and new, volume 338. Springer Science & Business Media, 2008.
+[34] Guangrun Wang, Ping Luo, Xinjiang Wang, Liang Lin, et al. Kalman normalization: Normalizing internal representations across network layers. In Advances in Neural Information Processing Systems, pages 21-31, 2018.
+[35] Shuang Wu, Guoqi Li, Lei Deng, Liu Liu, Dong Wu, Yuan Xie, and Luping Shi. L1-norm batch normalization for efficient training of deep neural networks. IEEE transactions on neural networks and learning systems, 2018.
+[36] Yuxin Wu and Kaiming He. Group normalization. arXiv preprint arXiv:1803.08494, 2018.
+[37] Greg Yang, Jeffrey Pennington, Vinay Rao, Jascha Sohl-Dickstein, and Samuel S Schoenholz. A mean field theory of batch normalization. arXiv preprint arXiv:1902.08129, 2019.
\ No newline at end of file
diff --git a/aninternalcovariateshiftboundingalgorithmfordeepneuralnetworksbyunitizinglayersoutputs/images.zip b/aninternalcovariateshiftboundingalgorithmfordeepneuralnetworksbyunitizinglayersoutputs/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..cb3dc4e24a24fbaae5f1ede26b9d67a39014b408
--- /dev/null
+++ b/aninternalcovariateshiftboundingalgorithmfordeepneuralnetworksbyunitizinglayersoutputs/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b16370050ade05cbcbed59fc5dfa522a93e746bd4e4116036d44d44a729af085
+size 387773
diff --git a/aninternalcovariateshiftboundingalgorithmfordeepneuralnetworksbyunitizinglayersoutputs/layout.json b/aninternalcovariateshiftboundingalgorithmfordeepneuralnetworksbyunitizinglayersoutputs/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..2ec476f550a92424d3f6cc4cf24200718b0e7c50
--- /dev/null
+++ b/aninternalcovariateshiftboundingalgorithmfordeepneuralnetworksbyunitizinglayersoutputs/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6d61e32a1a46335a0c70ab7f4ace385fe0584de6dfa2a4310ec5408e8329b6f7
+size 444818
diff --git a/aninvestigationintothestochasticityofbatchwhitening/538aecfc-a2ae-43a3-8e5d-8799225141e2_content_list.json b/aninvestigationintothestochasticityofbatchwhitening/538aecfc-a2ae-43a3-8e5d-8799225141e2_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..84a26b73cd3c5dcdcf49cbbad8e3f5771cbaa882
--- /dev/null
+++ b/aninvestigationintothestochasticityofbatchwhitening/538aecfc-a2ae-43a3-8e5d-8799225141e2_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d5c97a7fb0ae8da911dee3518ad48d2781db18eda2c06c23288a611bc930c022
+size 83176
diff --git a/aninvestigationintothestochasticityofbatchwhitening/538aecfc-a2ae-43a3-8e5d-8799225141e2_model.json b/aninvestigationintothestochasticityofbatchwhitening/538aecfc-a2ae-43a3-8e5d-8799225141e2_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..b38a6b3960c16263eb097817fcd4fb2ebb1c6826
--- /dev/null
+++ b/aninvestigationintothestochasticityofbatchwhitening/538aecfc-a2ae-43a3-8e5d-8799225141e2_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:948d16f6a4b8d9ddc856b15e39224e8dfee84a94e08069a8e7d5e9f9657c6047
+size 102103
diff --git a/aninvestigationintothestochasticityofbatchwhitening/538aecfc-a2ae-43a3-8e5d-8799225141e2_origin.pdf b/aninvestigationintothestochasticityofbatchwhitening/538aecfc-a2ae-43a3-8e5d-8799225141e2_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f5364cd32ba9c759aae889c39828e772131a5777
--- /dev/null
+++ b/aninvestigationintothestochasticityofbatchwhitening/538aecfc-a2ae-43a3-8e5d-8799225141e2_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:caf4b75d16e2adc8d399a343b2e1e5488cfffffbee72f67ec691f4377f9fe281
+size 824611
diff --git a/aninvestigationintothestochasticityofbatchwhitening/full.md b/aninvestigationintothestochasticityofbatchwhitening/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..0586a4bd51bc8eb144628f641cd2e8f4d66e6a50
--- /dev/null
+++ b/aninvestigationintothestochasticityofbatchwhitening/full.md
@@ -0,0 +1,338 @@
+# An Investigation into the Stochasticity of Batch Whitening
+
+Lei Huang† Lei Zhao Yi Zhou† Fan Zhu† Li Liu† Ling Shao†
+
+†Inception Institute of Artificial Intelligence (IIAI), Abu Dhabi, UAE
+
+{lei.huang, yi.zhou, fan.zhu, li.liu, ling.shao}@inceptioniai.org bhneo@126.com
+
+# Abstract
+
+Batch Normalization (BN) is extensively employed in various network architectures by performing standardization within mini-batches. A full understanding of the process has been a central target in the deep learning community. Unlike existing works, which usually only analyze the standardization operation, this paper investigates the more general Batch Whitening (BW). Our work originates from the observation that while various whitening transformations equivalently improve the conditioning, they show significantly different behaviors in discriminative scenarios and in training Generative Adversarial Networks (GANs). We attribute this phenomenon to the stochasticity that BW introduces. We quantitatively investigate the stochasticity of different whitening transformations and show that it correlates well with the optimization behaviors during training. We also investigate how stochasticity relates to the estimation of population statistics during inference. Based on our analysis, we provide a framework for designing and comparing BW algorithms in different scenarios. Our proposed BW algorithm improves the residual networks by a significant margin on ImageNet classification. Besides, we show that the stochasticity of BW can improve the GAN's performance, at the cost, however, of training stability.
+
+# 1. Introduction
+
+Normalization techniques have been extensively used for learning algorithms during data preprocessing [24, 22, 11]. It has been shown that centering, scaling and decorrelating the inputs speeds up the training [24]. Furthermore, whitening the input that combines all above operations, improves the conditioning of the Hessian, making the gradient descent updates similar to the Newton updates [24, 44, 15].
+
+Batch Normalization (BN) [18] extends the idea of normalizing the input into the activations of intermediate layers of Deep Neural Networks (DNNs), and represents a milestone technique in the deep learning community [11, 41, 45]. BN standardizes the activations by executing centering and scaling within a mini-batch of data, facilitating the optimization [18, 21, 34] and generalization [3, 4]. Batch Whitening
+
+
+Figure 1. We sample 1000 examples (black points) from a Gaussian distribution in a 16-dimensional space, and show the examples in the two-dimension sub-space (the 6th and 16th dimension). Given an example $\mathbf{x}$ (red diamond), when combining with 100 different mini-batches $\mathbf{X}^B$ ( $B = 64$ ), we provide the normalized output $\hat{\mathbf{x}}$ (yellow pentagram), where (a), (b), (c) and (d) show the results of BN standardization, PCA, ZCA and CD whitening, respectively.
+
+(BW) further extends the scope of BN by removing the correlation of standardized activations [15]. It has been shown to improve the performance in discriminative scenarios [15] and Generative Adversarial Networks (GAN) [37].
+
+Despite BW's theoretical support in improving conditioning, there remains some intriguing observations relating to BW that are not well explored. Firstly, while various whitening transformations can equivalently improve the conditioning [19], they show significant differences in performance: 1) Principal Component Analysis (PCA) whitening hardly converges while Zero-phase Component Analysis (ZCA) whitening works well in discriminative scenarios [15]; 2) Cholesky Decomposition (CD) whitening achieves significantly better performance than ZCA in GAN training, while it has slightly worse performance in discriminative cases [37]. Secondly, while group based whitening—where features are divided into groups and whitening is performed within each one—is essential in discriminative scenarios
+
+[15, 30], full feature whitening has been shown to achieve better performance in training GANs [37].
+
+This paper focuses on explaining the above observations of BW. We find that the stochasticity introduced by normalization over batch data (Figure 1) can be key to uncovering the intriguing phenomena of BW. We quantitatively investigate the magnitude of stochasticity in different whitening transformations, using the evaluation method called Stochastic Normalization Disturbance (SND) [16]. By doing so, we demonstrate that PCA whitening has significantly larger stochasticity, which is difficult to control by either increasing the batch size or using group-based whitening. In contrast, ZCA whitening has the smallest stochasticity and CD whitening a moderate value; more importantly, their stochasticity can be well controlled. This suggests that ZCA whitening should have better optimization behavior during training, while PCA whitening has trouble converging, due to the increased stochasticity, which slows down the progress of the optimization [40, 16].
+
+We also investigate the stochasticity during inference, which is caused by the estimation of population statistics averaged over the mini-batch statistics during training. We show that, in terms of estimating the population statistics of the whitening matrix, it is more stable to use the mini-batch covariance matrix indirectly (the whitening matrix is computed from it after training) than the mini-batch whitening matrix directly. We further provide an empirical investigation to understand the reasons behind this observation, and find that the stochastic sequence of mini-batch whitening matrices has a larger diversity than that of the covariance matrices.
+
+Based on the above analyses, we provide a general framework for designing and comparing BW algorithms in different scenarios. We design new BW algorithms and apply them to residual networks [11] on the ImageNet dataset [7], significantly improving the performance over the original ones. We further conduct thorough experiments on training GANs. We show that full-feature whitening, when combined with coloring, can improve the final evaluation scores. However, it reduces the training stability and is more sensitive to the hyper-parameter configuration. We attribute this phenomenon to two main effects of the stochasticity introduced by BW: 1) strong stochasticity can increase the diversity of generated images and thus improve the GAN's performance; 2) at the same time, high stochasticity harms optimization and makes training more sensitive to the hyper-parameters. We argue that controlling the magnitude of whitening (stochasticity) is also important in training GANs, and we validate this argument with our experiments.
+
+# 2. Related Work
+
+The previous analyses on BN mainly focus on the optimization. One main argument is that BN can improve the conditioning of the optimization problem. This argument was initially introduced in the BN paper [18] and further
+
+refined in [34], showing that BN leads to a smoother landscape of the optimization problem under certain assumptions. Ghorbani et al. [9] investigated this explanation by computing the spectrum of the Hessian for a large-scale dataset. It is believed that the improved conditioning enables large learning rates, thus improving the generalization, as shown in [4]. Another argument is that BN can adaptively adjust the learning rate [6, 14, 1] due to its scale invariance [18, 3]. This effect has been further discussed in combination with weight decay [47]. Other works have included an investigation into the signal propagation and gradient back-propagation [46]. Different from these approaches, our work focuses on analyzing the stochasticity of whitening over batch data.
+
+The stochasticity introduced by normalization over batch data was first mentioned in the BN paper [18], and further explored in [2, 42] from the perspective of Bayesian optimization. This stochasticity results in differences between the training distribution (using mini-batch statistics) and the test distribution (using estimated population statistics) [17], which is believed to be the main cause of the small-batch-problem of BN [45]. To address this issue, a number of approaches have been proposed [45, 32, 28, 17, 43, 39]. Furthermore, it has been observed that BN also encounters difficulties in optimization during training [35, 16]. This phenomenon is explored by the stochastic analysis shown in [16]. Different from the above research which focuses on standardization, we analyze, for the first time, the stochasticity on batch whitening. We propose that analyzing whitening rather than standardization, has several advantages in understanding the behaviors of normalization over batch data: 1) There are an infinite number of whitening transformations and the main ones show significant differences as discussed in Section 1; 2) The extent of the whitening (stochasticity) can be well controlled by the batch and group size, which provides more information in designing experiments.
+
+Our work is related to the previously proposed whitening methods regarding the activation of DNNs. One approach is to consider the whitening matrix as model parameters to be estimated using full data [8, 27]. This kind of whitening has also been exploited in image style transformation tasks [25, 36]. Another line of research is batch whitening, which is what this paper discusses. This approach treats the normalization as a function over a mini-batch input. The main works include PCA whitening, ZCA whitening [15] and its approximation ItN [16], and CD whitening [37]. Pan et al. [30] propose switchable whitening to learn different batch/instance whitening/standardization operations in DNNs. However, they used only the ZCA whitening transformation. Our work aims to understand different whitening transformation behaviors based on stochastic analysis.
+
+# 3. Stochasticity Analysis of Batch Whitening
+
+Let $\mathbf{X} \in \mathbf{R}^{d \times m}$ be a data matrix denoting the mini-batch input of size $m$ in a certain layer. For simplifying the discussion, we assume that the data is centered, by performing
+
+$\mathbf{X} \coloneqq \mathbf{X} - \mu \cdot \mathbf{1}^T$ where $\mu = \frac{1}{m}\mathbf{X} \cdot \mathbf{1}$ is the mean of $\mathbf{X}$ , and $\mathbf{1}$ is a column vector of all ones. Whitening performs normalization over the mini-batch input as follows:
+
+$$
+\widehat {\mathbf {X}} = \mathbf {G} \mathbf {X}, \tag {1}
+$$
+
+where $\mathbf{G}$ is the mini-batch whitening matrix that is derived from the corresponding covariance matrix $\Sigma = \frac{1}{m}\mathbf{X}\mathbf{X}^T$ . The population statistics of the whitening matrix $\widehat{\mathbf{G}}$ used for inference, is usually calculated by running average over the mini-batches as follows:
+
+$$
+\widehat {\mathbf {G}} = (1 - \lambda) \widehat {\mathbf {G}} + \lambda \mathbf {G}. \tag {2}
+$$
+
+It is clear that both the whitened output $\widehat{\mathbf{X}}$ (Eqn. 1) and the population statistics $\widehat{\mathbf{G}}$ (Eqn. 2) can be viewed as stochastic variables, because they depend on the mini-batch inputs which are sampled over datasets. For illustration, we defer the analysis of the stochasticity to Sections 3.2 and 3.3, and first provide a review of the whitening transformations.
+
+# 3.1. Whitening Transformations
+
+There are an infinite number of possible whitening matrices, as shown in [19, 15], since any whitened data with a rotation is still whitened. This paper focuses on the whitening transformations based on PCA, ZCA and CD, since these three transformations have shown significant differences in performance when used in training DNNs [15, 37]. Note that BN [18] can be viewed as a special case of batch whitening, since it performs standardization without removing correlations, where $\mathbf{G}_{BN} = (\mathrm{diag}(\Sigma))^{-1/2}$ with $\mathrm{diag}(\cdot)$ setting the off-diagonal elements of a matrix to zeros. To simplify the description, this paper regards BN as a (reduced) whitening transformation, unless otherwise stated.
+
+PCA Whitening uses $\mathbf{G}_{\mathit{PCA}} = \Lambda^{-\frac{1}{2}}\mathbf{D}^T$ , where $\Lambda = \mathrm{diag}(\sigma_1,\dots ,\sigma_d)$ and $\mathbf{D} = [\mathbf{d}_1,\dots ,\mathbf{d}_d]$ are the eigenvalues and associated eigenvectors of $\Sigma$ , i.e. $\Sigma = \mathbf{D}\Lambda \mathbf{D}^{T}$ . Under this transformation, the variables are first rotated by the eigen-matrix $(\mathbf{D})$ of the covariance, then scaled by the square root inverse of the eigenvalues $(\Lambda^{-\frac{1}{2}})$ . PCA whitening over batch data suffers significant instability in training DNNs, and hardly converges, due to the so called Stochastic Axis Swapping (SAS), as explained in [15].
+
+ZCA Whitening uses $\mathbf{G}_{ZCA} = \mathbf{D}\Lambda^{-\frac{1}{2}}\mathbf{D}^T$ , where the PCA whitened input is rotated back by the corresponding rotation matrix $\mathbf{D}$ . ZCA whitening works by stretching/squeezing the dimensions along the eigenvectors. It has been shown that ZCA whitening avoids the SAS and achieves better performance over standardization (used in BN) on discriminative classification tasks [15].
+
+CD Whitening uses $\mathbf{G}_{CD} = \mathbf{L}^{-1}$ where $\mathbf{L}$ is a lower triangular matrix from the Cholesky decomposition, with $\mathbf{LL}^T = \boldsymbol{\Sigma}$ . This kind of whitening works by recursively decorrelating the current dimension over the previous decorrelated ones, resulting in a triangular form of its whitening matrix. CD whitening has been shown to achieve the state-of-the-art performance in training GANs, while ZCA whitening has degenerated performance.
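+To make the three transformations concrete, the following minimal NumPy sketch (our own illustration, not code from any of the cited works) computes $\mathbf{G}_{PCA}$, $\mathbf{G}_{ZCA}$ and $\mathbf{G}_{CD}$ from a centered mini-batch, following the definitions above; the small `eps` ridge term is an assumption added for numerical stability.
+
+```python
+import numpy as np
+
+def whitening_matrices(X, eps=1e-5):
+    """Compute the PCA, ZCA and CD whitening matrices for centered data X (d x m)."""
+    d, m = X.shape
+    Sigma = X @ X.T / m + eps * np.eye(d)      # mini-batch covariance (ridge-regularized)
+    eigvals, D = np.linalg.eigh(Sigma)         # Sigma = D diag(eigvals) D^T
+    inv_sqrt = np.diag(eigvals ** -0.5)
+    G_pca = inv_sqrt @ D.T                     # rotate by D^T, then scale
+    G_zca = D @ inv_sqrt @ D.T                 # rotate the PCA-whitened output back
+    L = np.linalg.cholesky(Sigma)              # Sigma = L L^T, L lower triangular
+    G_cd = np.linalg.inv(L)                    # CD whitening matrix
+    return G_pca, G_zca, G_cd
+
+# Sanity check: every G whitens the mini-batch, i.e. cov(G X) is (close to) the identity.
+X = np.random.randn(8, 1024)
+X -= X.mean(axis=1, keepdims=True)             # center, as assumed in the text
+for G in whitening_matrices(X):
+    Xw = G @ X
+    assert np.allclose(Xw @ Xw.T / X.shape[1], np.eye(8), atol=1e-2)
+```
+
+The final assertion checks the defining property shared by all three transformations: the whitened mini-batch has (approximately) identity covariance, even though the three outputs themselves differ.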
+
+
+Figure 2. SND comparison of different batch whitening methods. We sample 60,000 examples from a Gaussian distribution as the training set. To calculate SND, we use $s = 200$ and $N = 20$ . We show (a) the SND with respect to the dimensions ranging from $2^{1}$ to $2^{9}$ , under a batch size of 1024; (b) the SND with respect to the batch size ranging from $2^{7}$ to $2^{12}$ , under a dimension of 128.
+
+The primary motivation of this paper is to investigate the following problem: while all whitening methods equivalently improve the conditioning of the layer input, why do they show significantly different behaviors in training DNNs? In the following sections, we provide a unified analysis and demonstrate that the key is the stochasticity introduced by normalization over batch data.
+
+# 3.2. Stochasticity During Training
+
+Given a sample $\mathbf{x} \in \mathbb{R}^d$ from a distribution $P_{\chi}$ , we take a sample set $\mathbf{X}^B = \{\mathbf{x}_1, \dots, \mathbf{x}_B, \mathbf{x}_i \sim P_{\chi}\}$ with a size of $B$ . The whitened output $\hat{\mathbf{x}}$ can be formulated as $\hat{\mathbf{x}} = \mathbf{G}(\mathbf{X}^B; \mathbf{x})$ . For a certain $\mathbf{x}$ , $\mathbf{X}^B$ can be viewed as a random variable [2, 42, 16]. $\hat{\mathbf{x}}$ is thus another random variable showing stochasticity. Here, we investigate the stochasticity effects of different whitening transformations.
+
+To provide a more intuitive illustration, we conduct a toy experiment to show how the normalized output of one sample changes when it is combined with different sample sets $\mathbf{X}^B$, under different whitening methods. Figure 1 (a), (b), (c) and (d) show the results when performing BN, PCA, ZCA and CD whitening, respectively. It is clear that the distribution of the PCA-whitened output of one sample is very sparse, which means that $\hat{\mathbf{x}}$ has significant diversity. This suggests that PCA whitening introduces large stochasticity. On the other hand, the BN-standardized output shows a tight Gaussian-style distribution, which suggests that BN has smaller stochasticity. Note that BN cannot guarantee that the normalized output has an identity covariance matrix, while the other whitening methods can. Similarly, we also observe that ZCA whitening provides reduced stochasticity compared to CD. In fact, ZCA whitening has been shown to minimize the total squared distance between the original and whitened variables [19, 15]. We conjecture that this property results in ZCA whitening having smaller stochasticity than CD.
+
+# 3.2.1 Quantitative Analysis
+
+To provide a quantitative comparison, we exploit the evaluation measure called Stochastic Normalization Disturbance (SND), introduced in [16].
+
+
+Figure 3. Experiments on a 4-layer MLP with 256 neurons in each layer, for MNIST classification. We use a batch size of 1024 and report the training errors. (a) The results of full whitening methods; (b) The results of group based whitening, where 'ZCA-16' indicates ZCA whitening with a group size of 16.
+
+sample $\mathbf{x}$ over the normalization $\mathbf{G}(\cdot)$ is defined as:
+
+$$
+\widehat {\boldsymbol {\Delta}} _ {\mathbf {G}} (\mathbf {x}) = \frac {1}{s} \sum_ {i = 1} ^ {s} \| \mathbf {G} \left(\mathbf {X} _ {i} ^ {B}; \mathbf {x}\right) - \frac {1}{s} \sum_ {j = 1} ^ {s} \mathbf {G} \left(\mathbf {X} _ {j} ^ {B}; \mathbf {x}\right) \|, \tag {3}
+$$
+
+where $s$ denotes the number of mini-batches $\{\mathbf{X}_j^B\}_{j=1}^s$ that are randomly sampled from the dataset. SND can be used to evaluate the stochasticity of a sample after the normalization operation [16]. Further, the empirical SND of a normalization operation $\mathbf{G}(\cdot)$ is defined as $\widehat{\Delta}_{\mathbf{G}} = \frac{1}{N}\sum_{i=1}^N \widehat{\Delta}_{\mathbf{G}}(\mathbf{x}_i)$, given $N$ samples. $\widehat{\Delta}_{\mathbf{G}}$ describes the magnitude of stochasticity for the corresponding normalization operation.
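+As an illustration of Eqn. 3, the sketch below (ours, not the evaluation code of [16]) estimates the empirical SND of a single sample under ZCA whitening by re-normalizing it together with $s$ randomly drawn mini-batches; the batch construction (appending $\mathbf{x}$ to each sampled mini-batch) is an assumption about the protocol.
+
+```python
+import numpy as np
+
+def zca_normalize(XB, x, eps=1e-5):
+    """Whiten sample x with the statistics of mini-batch XB (d x B), i.e. G(X^B; x)."""
+    data = np.concatenate([XB, x[:, None]], axis=1)
+    data = data - data.mean(axis=1, keepdims=True)
+    Sigma = data @ data.T / data.shape[1] + eps * np.eye(data.shape[0])
+    eigvals, D = np.linalg.eigh(Sigma)
+    G = D @ np.diag(eigvals ** -0.5) @ D.T
+    return G @ data[:, -1]                     # whitened version of x
+
+def empirical_snd(x, dataset, B=64, s=200, seed=0):
+    """Empirical SND of Eqn. 3: mean distance of G(X_i^B; x) to the average output."""
+    rng = np.random.default_rng(seed)
+    outputs = []
+    for _ in range(s):
+        idx = rng.choice(dataset.shape[1], size=B, replace=False)
+        outputs.append(zca_normalize(dataset[:, idx], x))
+    outputs = np.stack(outputs)                # s x d
+    return np.mean(np.linalg.norm(outputs - outputs.mean(axis=0), axis=1))
+
+# Toy usage in the spirit of Figures 1 and 2: 16-d Gaussian data, one fixed sample.
+data = np.random.randn(16, 60000)
+print(empirical_snd(data[:, 0], data[:, 1:], B=64, s=200))
+```
+
+Averaging this quantity over $N$ samples gives the operation-level SND $\widehat{\Delta}_{\mathbf{G}}$ reported in Figure 2.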
+
+Here, we conduct experiments to quantitatively evaluate the effects of different normalization methods. Noticeably, the stochasticity is related to the batch size $m$ and the dimension $d$ . Figure 2 (a) shows the SND of different normalization methods with respect to the dimensions, when fixing the batch size to 1024. We find that PCA whitening shows the largest SND while BN the smallest, over all the dimensions, which is consistent with the observation shown in Figure 1. We notice that all whitening methods have an increased SND when the dimension increases. Besides, ZCA has a smaller SND than CD, over all the dimensions, which is also consistent with the data shown in Figure 1. Figure 2 (b) shows the SND of different normalization methods with respect to the batch size, when fixing the dimension to 128. An interesting observation is that PCA whitening has nearly the same large SND among different batch sizes. This suggests that the PCA whitening is extremely unstable, no matter how accurate the estimation of the mini-batch covariance matrix is. This effect is in accordance with the explanation of Stochastic Axis Swapping (SAS) shown in [15], where a small change over the examples (when performing PCA whitening) results in a large change of representation.
+
+To further investigate how this stochasticity affects DNN training, we perform experiments on a four-layer Multilayer Perceptron (MLP) with 256 neurons in each layer. We evaluate the training loss with respect to the epochs, and show the results in Figure 3 (a). We find that, among all the whitening methods, ZCA works the best, while PCA is the worst. We argue that this correlates closely with the SND they produce. Apparently, the increased stochasticity can slow down training, even though all the whitening methods have equivalently improved conditioning.
+
+
+Figure 4. Group-based whitening experiments. (a) We show the SND of different normalization operations with respect to the group size. The experimental setup is the same as Figure 2 and the input dimension is $d = 512$. (b) We show the spectrum of the covariance matrix of the ZCA-whitened output (note that CD/PCA whitening has the same spectrum as ZCA whitening), where 'G16' indicates whitening with a group size of 16.
+
+An interesting observation is that, in this case, BN works better than ZCA whitening. This is surprising since ZCA has improved conditioning over BN by removing the correlation [15], and it should theoretically have a better optimization behavior. However, the amplified stochasticity of ZCA whitening mitigates this advantage in optimization, thus resulting in degenerated performance. Therefore, from an optimization perspective, we should control the extent of the stochasticity.
+
+# 3.2.2 Controlling the Stochasticity by Groups
+
+Huang et al. [15] proposed to use groups to control the extent of whitening. They argue that this method reduces the inaccuracy in estimating the full covariance matrix when the batch size is not sufficiently large. Here, we empirically show how group based whitening affects the SND, providing a good trade-off between introduced stochasticity and improved conditioning. This is essential for achieving a better optimization behavior.
+
+We evaluate the SND of different whitening transformations by varying the group size from 2 to 512, as shown in Figure 4 (a). We also display the spectrum of the covariance matrix of the (group-based) whitened output in Figure 4 (b). We find that the group size effectively controls the SND of ZCA/CD whitening. With decreasing group size, ZCA and CD show reduced stochasticity (Figure 4 (a)), but also degenerated conditioning (Figure 4 (b)), since the output is only partially whitened. Besides, we observe that PCA whitening still has a large SND over all group sizes, with no significant differences. This observation further corroborates the explanation of SAS given in [15], i.e., that PCA whitening is extremely unstable.
+
+We also show the SND of the approximate ZCA whitening method (called ItN [16]) in Figure 4 (a), which uses Newton's iteration to approximately calculate the whitening matrix. We denote 'ItN5' as the ItN method with an iteration number of 5. An interesting observation is that ItN has smaller SND than BN, when using a large group size (e.g., 256) with a smaller iteration (e.g., $T = 5$ ). This suggests that we can further combine group size and iteration number to control the stochasticity for ItN, providing an efficient and stable solution to approximate ZCA whitening [16].
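+The sketch below (our reading of group-based whitening, not the official implementation of [15, 16]) illustrates how the group size bounds both the cost and the stochasticity: the $d$ features are split into groups of size `group_size`, and each group is whitened independently, here using a Newton-Schulz style iteration to approximate $\Sigma^{-1/2}$ in the spirit of ItN; the trace normalization and the iteration count `T` follow our understanding of that scheme.
+
+```python
+import numpy as np
+
+def newton_schulz_whitening(Sigma, T=5):
+    """Approximate Sigma^{-1/2} with T Newton-Schulz iterations (ItN-style, our reading of [16])."""
+    d = Sigma.shape[0]
+    trace = np.trace(Sigma)
+    Sigma_N = Sigma / trace                    # trace-normalize so the iteration converges
+    P = np.eye(d)
+    for _ in range(T):
+        P = 0.5 * (3.0 * P - np.linalg.matrix_power(P, 3) @ Sigma_N)
+    return P / np.sqrt(trace)                  # approximate whitening matrix
+
+def group_whiten(X, group_size=16, T=5, eps=1e-5):
+    """Whiten X (d x m) group-wise; smaller groups mean less whitening and less stochasticity."""
+    d, m = X.shape
+    out = np.empty_like(X)
+    for start in range(0, d, group_size):
+        Xg = X[start:start + group_size]
+        Xg = Xg - Xg.mean(axis=1, keepdims=True)
+        Sigma = Xg @ Xg.T / m + eps * np.eye(Xg.shape[0])
+        out[start:start + group_size] = newton_schulz_whitening(Sigma, T) @ Xg
+    return out
+
+# Example: 512-d features, batch of 256, whitened in groups of 16 (cf. 'ZCA-16' / 'ItN5').
+Xw = group_whiten(np.random.randn(512, 256), group_size=16, T=5)
+```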
+
+
+Figure 5. Comparison of estimating $\widehat{\mathbf{G}}$ using different estimation objects $(\mathbf{G} / \Sigma)$ . We train MLPs with varying widths (number of neurons in each layer) and batch sizes, for MNIST classification. We evaluate the difference of test accuracy between $\widehat{\mathbf{G}}_{\Sigma}$ and $\widehat{\mathbf{G}}_{\mathbf{G}}$ : $AC(\widehat{\mathbf{G}}_{\Sigma}) - AC(\widehat{\mathbf{G}}_{\mathbf{G}})$ . (a) and (b) show the results using ZCA and CD whitening, respectively, under a learning rate of 1. We also tried other learning rates and obtained similar observations (See supplementary materials for details).
+
+The above observations furnish substantial insights into the application of whitening in neural networks (especially in scenarios that require decorrelating the activations in a certain layer), and we will further elaborate on this in Section 4.
+
+We also use group-based ZCA/CD whitening methods on the four-layer MLP experiments. The results are shown in Figure 3 (b). We observe that ZCA and CD whitening, with a group size of 16 to control the stochasticity, achieve better training behaviors than BN.
+
+# 3.3. Stochasticity During Inference
+
+In the previous section, we showed that different whitening transformations, by introducing different magnitudes of stochasticity, result in significantly different training behaviors from an optimization perspective. It is clear that such stochasticity will also affect the final test performance during inference, because the population statistics $\widehat{\mathbf{G}}$ are estimated by a running average (Eqn. 2) over the stochastic sequence $\{\mathbf{G}_t\}_{t=1}^T$, where $T$ is the total number of training iterations. The more diverse the stochastic sequence $\{\mathbf{G}_t\}_{t=1}^T$, the more difficult it is to accurately estimate $\widehat{\mathbf{G}}$. Rather than estimating $\widehat{\mathbf{G}}$ directly, Siarohin et al. [37] proposed to first estimate the population statistic of the covariance matrix $\widehat{\Sigma}$, and then compute $\widehat{\mathbf{G}}$ from $\widehat{\Sigma}$ after training. However, no further analysis was provided to explain why this is done.
+
+Here, we provide an empirical investigation of how the estimation object $(\mathbf{G} / \Sigma)$ in Eqn. 2 affects the test performance. Let $\widehat{\mathbf{G}}_{\mathbf{G}}$ and $\widehat{\mathbf{G}}_{\Sigma}$ denote estimating $\widehat{\mathbf{G}}$ from $\mathbf{G}$ and from $\Sigma$, respectively. We conduct experiments on MLPs with varying widths (the number of neurons in each layer) and batch sizes, for MNIST classification. Figure 5 (a) and (b) show the results of ZCA and CD whitening, respectively. We find that $\widehat{\mathbf{G}}_{\Sigma}$ performs better than $\widehat{\mathbf{G}}_{\mathbf{G}}$, especially in the scenarios with large width and small batch size (intuitively, estimating in a high-dimensional space with a small batch size makes the estimation noisier). This suggests that using $\Sigma$ to estimate $\widehat{\mathbf{G}}$ indirectly is more stable than using $\mathbf{G}$ directly.
+
+
+Figure 6. Analysis of the diversity of the stochastic sequences in estimating $\widehat{\mathbf{G}}$. We report the histograms of $\delta(\widehat{\mathbf{G}}_{\mathbf{M}})$ (a) and $\widetilde{\delta}(\widehat{\mathbf{G}}_{\mathbf{M}})$ (b).
+
+Another interesting observation, from comparing Figure 5 (a) and (b), is that the difference between the two estimation methods is smaller for CD whitening than for ZCA whitening.
+
+We further analyze how the diversity of the stochastic sequence $\{\mathbf{M}_t\}_{t = 1}^T$ affects the estimation of $\widehat{\mathbf{G}}$, where $\mathbf{M}\in \{\mathbf{G},\Sigma \}$. Intuitively, if the stochastic sequence has high diversity, the estimation will be worse. We view each element of $\widehat{\mathbf{G}}$ as an independent stochastic variable and calculate the standard deviation of each element during training, as follows:
+
+$$
+\delta \left(\widehat {\mathbf {G}} _ {\mathbf {M}} ^ {i j}\right) = \sqrt {\frac {1}{T} \sum_ {t = 1} ^ {T} \left(\mathbf {M} _ {t} ^ {i j} - \frac {1}{T} \sum_ {t ^ {\prime} = 1} ^ {T} \left(\mathbf {M} _ {t ^ {\prime}} ^ {i j}\right)\right) ^ {2}}, \tag {4}
+$$
+
+where $\widehat{\mathbf{G}}^{ij}$ ($\mathbf{M}_t^{ij}$) indicates the $(i,j)$-th element of $\widehat{\mathbf{G}}$ ($\mathbf{M}_t$). Furthermore, we calculate the normalized standard deviation of each element, $\widetilde{\delta}(\widehat{\mathbf{G}}_{\mathbf{M}}^{ij})$, which, as defined in Eqn. 4, is calculated over $\widetilde{\mathbf{M}}_t^{ij} = \mathbf{M}_t^{ij} / \sqrt{\sum_{t=1}^T (\mathbf{M}_t^{ij})^2}$. Figure 6 (a) and (b) show the histograms of $\delta(\widehat{\mathbf{G}}_{\mathbf{M}})$ and $\widetilde{\delta}(\widehat{\mathbf{G}}_{\mathbf{M}})$, respectively, when using ZCA whitening. We clearly find that $\widehat{\mathbf{G}}_{\mathbf{G}}$ has a larger standard deviation on average, and thus a larger diversity in general, compared to $\widehat{\mathbf{G}}_{\Sigma}$. This reveals why using $\Sigma$ is more stable for estimating $\widehat{\mathbf{G}}$ than using $\mathbf{G}$.
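+Eqn. 4 can be computed directly from the sequence of mini-batch statistics saved during training; the small sketch below (ours) computes the element-wise deviation and its normalized variant for any such sequence. The variables `whitening_mats` and `cov_mats` in the commented usage are hypothetical placeholders for the recorded per-iteration $\mathbf{G}_t$ and $\Sigma_t$.
+
+```python
+import numpy as np
+
+def elementwise_std(M_sequence):
+    """Eqn. 4: standard deviation of each (i, j) element over the training sequence {M_t}."""
+    M = np.stack(M_sequence)                   # T x d x d
+    return M.std(axis=0)                       # population std, i.e. the 1/T form of Eqn. 4
+
+def normalized_elementwise_std(M_sequence):
+    """Same as above, after normalizing each element trajectory to unit L2 norm."""
+    M = np.stack(M_sequence)
+    norms = np.sqrt((M ** 2).sum(axis=0, keepdims=True)) + 1e-12
+    return (M / norms).std(axis=0)
+
+# Hypothetical usage with statistics recorded during training:
+# delta_G     = elementwise_std(whitening_mats)   # sequence of mini-batch G_t
+# delta_Sigma = elementwise_std(cov_mats)         # sequence of mini-batch Sigma_t
+```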
+
+# 4. Evaluation on Vision Tasks
+
+Based on the previous analysis, we can design new BW algorithms, and construct more effective DNNs by using BW. We investigate this in classification and training GANs. The code to reproduce the experiments is available at https://github.com/huangleiBuaa/StochasticityBW.
+
+# 4.1. The Landscape of Batch Whitening Algorithms
+
+We provide a general view of batch whitening algorithms in Algorithm 1 for vectorial inputs $\mathbf{X} \in \mathbb{R}^{d \times m}$ . Backpropagation is necessary to pass through the whitening transformation, and we provide the details in supplementary materials for completeness. For feature map inputs $\mathbf{X}_F \in \mathbb{R}^{h \times w \times d \times m}$ , where $h$ and $w$ indicate the height and width, the whitening transformation is performed over the unrolled $\mathbf{X} \in \mathbb{R}^{d \times (mhw)}$ , since each spatial position of the feature map can be viewed as a sample [18].
+
+Note that an extra step to recover the representation capacity of normalization is given in Line 11 of Algorithm 1, which is shown to be empirically effective in [18, 3, 15, 45].
+
+Algorithm 1 A general view of batch whitening algorithms.
+1: Input: mini-batch inputs $\mathbf{X}\in \mathbb{R}^{d\times m}$
+2: Output: $\mathbf{Y}\in \mathbb{R}^{d\times m}$
+3: if Training then
+4: Calculate covariance matrix: $\Sigma = \frac{1}{m}\mathbf{X}\mathbf{X}^T +\epsilon \mathbf{I}$
+5: Calculate whitening matrix: $\mathbf{G} = \phi_1(\boldsymbol {\Sigma})$
+6: Calculate whitened output: $\widehat{\mathbf{X}} = \mathbf{G}\mathbf{X}$
+7: Update population statistics: $\widehat{\mathbf{G}} = \phi_{2}(\Sigma /\mathbf{G})$
+8: else
+9: Calculate whitened output: $\widehat{\mathbf{X}} = \widehat{\mathbf{G}}\mathbf{X}$
+10: end if
+11: Recover representation: $\mathbf{Y} = \phi_3(\widehat{\mathbf{X}})$
+
+| COMPONENT | VALUE |
| Whitening transformation | {'ZCA', 'PCA', 'CD', 'ItN'} |
| Estimation object | {'Σ', 'G'} |
| Recovery operation | {'Scale & Shift', 'Coloring'} |
+
+Table 1. The scope of values in Algorithm 1 for the different components this paper discusses. The Cartesian product of the values constitutes the landscape of the batch whitening algorithms used in this study.
+
+There are two alternatives for $\phi_3$: one is a dimension-wise scale and shift [18, 15], $\mathbf{Y}_k = \gamma_k\widehat{\mathbf{X}}_k + \beta_k$ $(k = 1,\dots,d)$; the other is a coloring transformation, $\mathbf{Y} = \mathbf{W}\widehat{\mathbf{X}} +\mathbf{b}$, which was proposed in [37] to achieve better performance in training GANs. By combining the whitening transformation $\phi_1$, the estimation object $\phi_2$ and the recovery operation $\phi_3$, we can design different BW algorithms. Table 1 shows the scope of values for the different components this paper discusses. Note that ItN [16] is an efficient and numerically stable approximation of ZCA whitening. We use the 'Scale & Shift' operation for $\phi_3$ in all algorithms, unless otherwise stated. A sketch that puts these components together is given below.
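+Putting the components of Algorithm 1 and Table 1 together, the following is a minimal NumPy sketch of a batch-whitening layer (our own illustration, not the released implementation): ZCA as $\phi_1$, the covariance matrix $\Sigma$ as the estimation object for $\phi_2$, and either recovery operation as $\phi_3$. Parameter names and the momentum value are assumptions made for the example.
+
+```python
+import numpy as np
+
+class BatchWhitening:
+    """Sketch of Algorithm 1: ZCA whitening, population statistics estimated from Sigma."""
+
+    def __init__(self, d, momentum=0.1, eps=1e-5, recovery="scale_shift"):
+        self.momentum, self.eps, self.recovery = momentum, eps, recovery
+        self.running_Sigma = np.eye(d)              # estimation object: covariance matrix
+        self.gamma, self.beta = np.ones((d, 1)), np.zeros((d, 1))   # scale & shift parameters
+        self.W, self.b = np.eye(d), np.zeros((d, 1))                 # coloring parameters
+
+    def _zca(self, Sigma):
+        eigvals, D = np.linalg.eigh(Sigma)
+        return D @ np.diag(eigvals ** -0.5) @ D.T   # phi_1: whitening matrix G
+
+    def __call__(self, X, training=True):
+        X = X - X.mean(axis=1, keepdims=True)
+        if training:
+            Sigma = X @ X.T / X.shape[1] + self.eps * np.eye(X.shape[0])
+            G = self._zca(Sigma)
+            # phi_2: running average over Sigma; G_hat is derived from it for inference
+            self.running_Sigma = (1 - self.momentum) * self.running_Sigma + self.momentum * Sigma
+        else:
+            G = self._zca(self.running_Sigma)       # G_hat computed from the estimated Sigma
+        X_hat = G @ X
+        if self.recovery == "scale_shift":          # phi_3, option 1: dimension-wise scale & shift
+            return self.gamma * X_hat + self.beta
+        return self.W @ X_hat + self.b              # phi_3, option 2: coloring
+
+# Usage: layer = BatchWhitening(d=64); Y = layer(np.random.randn(64, 256), training=True)
+```
+
+For feature-map inputs, the $h \times w \times d \times m$ tensor would first be unrolled to $d \times (mhw)$, as described above, before applying such a layer.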
+
+# 4.2. Investigation on Discriminative Scenarios
+
+In this section, we first investigate how different whitening transformations affect the training of the VGG network [38] on the CIFAR-10 dataset [22], from the perspective of optimization. We then show the performance on large-scale ImageNet classification [7], by applying the designed BW algorithm to residual networks [11].
+
+# 4.2.1 VGG on CIFAR-10
+
+We use the VGG networks [38] tailored for $32 \times 32$ inputs (16 convolutional layers and 1 fully-connected layer). We add the normalization methods after each convolutional layer in the VGGs. We compare several methods including: 1) The full whitening methods 'PCA', 'ZCA' and 'CD'; 2) The approximate whitening 'ItN' and the standardization 'BN'; 3) Group-based whitening methods with the group size ranging in $\{512, 256, 128, 64, 32, 16\}$. We denote 'ZCA-16' as ZCA whitening with group size 16. We focus on comparing the training performance from an optimization perspective, and we use the mini-batch covariance matrix 'Σ' as the estimation object for all methods during inference. We use SGD with a batch size of 256 to optimize the model. We set the initial learning rate to 0.1, then divide it by 5 after 60 epochs and finish the training at 120 epochs.
+
+
+
+
+Figure 7. Experiments on VGG for CIFAR-10 classification.
+
+
+The main experimental observations are: 1) 'PCA' fails to train under all configurations, meaning that either the training loss does not decrease or numerical instability appears. This observation is consistent with the previous MLP models. 2) 'ZCA-16' trains well, while the other 'ZCA'-related configurations fail due to numerical instability. This is caused by the back-propagation through the eigen-decomposition, which requires distinct eigenvalues of the mini-batch covariance matrix [15, 37]. 3) 'CD' has no numerical instability and can ensure full feature whitening of the models. However, fully whitening the features with 'CD' gives significantly worse performance than the group-based variants, as shown in Figure 7. This again suggests that it is essential to control the extent of whitening for discriminative models. 4) 'ItN' (the approximation of ZCA whitening) works the best.
+
+# 4.2.2 Residual Network on ImageNet
+
+We investigate the effectiveness of all kinds of whitening algorithms on residual networks for ImageNet classification with 1000 classes [7]. We use the given official 1.28M training images as a training set, and evaluate the top-1 accuracy on the validation set with 50k images.
+
+Ablation Study on ResNet-18 We first perform an ablation study on the 18-layer residual network (ResNet-18) to explore multiple positions for replacing BN with BW. We consider three architectures: 1) $\mathbf{ARC}_A$: we only replace the first BN module of ResNet-18, as proposed in [15]; 2) $\mathbf{ARC}_B$: based on $\mathbf{ARC}_A$, we further plug in a BW layer after the last average pooling (before the last linear layer) to learn decorrelated feature representations, as proposed in [16]; 3) $\mathbf{ARC}_C$: based on $\mathbf{ARC}_B$, we also replace the $\{2n, n = 1, 2, \ldots\}$-th BN modules (the $\{3n\}$-th for ResNet-50). We compare all the whitening transformations and estimation objects in Table 1. 'ZCA' and 'ZCA$_{\Sigma}$' denote ZCA whitening that estimates the population statistics using $\mathbf{G}$ and $\Sigma$, respectively. For 'PCA', 'ZCA' and 'CD', we use a group size in $\{16, 64\}$ and report the better performance of the two configurations.
+
+We follow the same experimental setup as described in [11], except that we use one GPU and train over 100 epochs. We apply SGD with a mini-batch size of 256, momentum of 0.9 and weight decay of 0.0001. The initial learning rate is set to 0.1 and divided by 10 at 30, 60 and 90 epochs.
+
+The results are shown in Table 2.
+
+
+Figure 8. Stability experiments on GANs with the hinge loss [29, 5] for unconditional image generation. The box shows the quartiles while the whiskers extend to show the rest of the distribution (we limit the FID range to (20, 100) for better presentation, and show the full range of FID in the supplementary materials). Note that all methods use the covariance matrix to estimate the population statistics.
+
+| Method | \( \mathrm{ARC}_A \) | \( \mathrm{ARC}_B \) | \( \mathrm{ARC}_C \) |
| Baseline [11] | 70.31 | - | - |
| PCA [15] | 59.93 (↓10.38) | - | - |
| \( \mathrm{PCA}_{\Sigma} \) | 70.01 (↓0.30) | - | - |
| ZCA [15] | 70.58 (↑0.27) | - | - |
| \( \mathrm{ZCA}_{\Sigma} \) | 70.62 (↑0.31) | - | - |
| CD | 70.46 (↑0.15) | 70.80 (↑0.49) | 68.15 (↓2.16) |
| \( \mathrm{CD}_{\Sigma} \) [37] | 70.55 (↑0.24) | 70.89 (↑0.58) | 68.56 (↓1.75) |
| ItN [16] | 70.62 (↑0.31) | 71.14 (↑0.83) | 71.26 (↑0.95) |
| \( \mathrm{ItN}_{\Sigma} \) | 70.63 (↑0.32) | 71.33 (↑1.02) | 71.62 (↑1.31) |
+
+For $\mathbf{ARC}_A$, we find that all whitening methods, except the PCA-related ones, improve the performance over the baseline. We observe that ZCA ($\mathrm{ZCA}_{\Sigma}$) and its approximation ItN ($\mathrm{ItN}_{\Sigma}$) achieve slightly better performance than CD ($\mathrm{CD}_{\Sigma}$), which is consistent with the results in [37]. This suggests that ZCA whitening, which minimizes the distortion introduced by whitening under the L2 distance [15], usually works better than other whitening methods for discriminative classification tasks. Under $\mathbf{ARC}_B$ and $\mathbf{ARC}_C$, ZCA/PCA-related methods suffer from numerical instability. We also observe that CD-related methods have significantly degenerated performance under $\mathbf{ARC}_C$. This implies that the stochasticity introduced by decorrelating multiple layers with CD whitening harms the learning. We find that ItN-related methods can effectively control the stochasticity and achieve further performance improvements on $\mathbf{ARC}_C$. We also try a ResNet-18 where all BN layers are replaced with ItN. However, this network shows no performance improvement over $\mathbf{ARC}_A$, while the introduced computation is significant, as already observed in [16]. These results demonstrate that controlling the extent of whitening (stochasticity) is important for achieving performance improvements over standardization.
+
+From all the architectures and whitening methods, we observe that using $\Sigma$ to estimate the population statistics is better than using $\mathbf{G}$ , especially on $\mathbf{ARC}_C$ .
+
+Results on ResNet-50 Based on the above observations, we further apply $\mathrm{ItN}_{\Sigma}$ to ResNet-50.
+
+Table 2. Comparison of validation accuracy (\%, single model and single-crop) on an 18-layer residual network for ImageNet.
+
+| Method | Step decay | Cosine decay |
| Baseline | 76.20 | 76.62 |
| ItNΣ-ARCB | 77.18 (↑0.98) | 77.68 (↑1.06) |
| ItNΣ-ARCC | 77.28 (↑1.08) | 77.92 (↑1.30) |
+
+Table 3. Results using ResNet-50 for ImageNet. We evaluate the top-1 validation accuracy (\%, single model and single-crop).
+
+In addition to the standard step learning rate decay [11] used in the previous setup, we also consider the cosine learning rate decay [26], which is also a basic setting [12] when training on ImageNet; we want to emphasize that the improvement of the proposed method holds under different setups. For cosine decay, we start with a learning rate of 0.1 and decay it to 0.00001 over 100 epochs. The results are shown in Table 3. We find that the proposed models significantly improve the performance over the original one, under all configurations. The additional time cost of $\mathrm{ItN}_{\Sigma}$-$\mathrm{ARC}_B$ and $\mathrm{ItN}_{\Sigma}$-$\mathrm{ARC}_C$ is $7.03\%$ and $30.04\%$ over the original. Note that $\mathrm{ItN}_{\Sigma}$ consistently provides a slight improvement (around 0.1 to 0.4 for all configurations in this experiment) over the original ItN proposed in [16]; please see the supplementary materials for details. Moreover, $\mathrm{ItN}_{\Sigma}$ can bear a larger group size and smaller batch size than ItN, which is particularly important in the scenario of training GANs, as we will discuss.
+
+# 4.3. Investigation on Training GANs
+
+In this section, we provide an empirical analysis on BW algorithms for training GANs. We focus on investigating the effects of different BW algorithms in stabilizing the training and improving the performance. We evaluate on unconditional image generation for the CIFAR-10 dataset. In our qualitative evaluation of the generated samples, we use the two most common metrics: Fréchet Inception Distance (FID) [13] (the lower the better) and Inception Score (IS) [33] (the higher the better). Following the setup in [37], we only apply the BW on the generator. Unless otherwise noted, the batch whitening methods used in this section use the covariance matrix $\Sigma$ to estimate the whitening matrix $\widehat{\mathbf{G}}$ .
+
+# 4.3.1 Stability Experiments
+
+Following the analysis in [23], we conduct experiments to demonstrate the effects of BW algorithms in stabilizing the
+
+| Method | Scale & Shift IS | Scale & Shift FID | Coloring IS | Coloring FID |
| BN | 7.243 ± 0.073 | 27.89 | 7.317 ± 0.093 | 27.61 |
| CD | 6.986 ± 0.065 | 30.11 | 7.612 ± 0.112 | 24.44 |
| ZCA-64 | 7.265 ± 0.069 | 26.8 | 7.412 ± 0.118 | 25.79 |
| ItN | 7.246 ± 0.104 | 27.44 | 7.599 ± 0.089 | 24.68 |
+
+
+Figure 9. Comparison of BW methods between using Covariance Matrix (CM) and Whitening Matrix (WM) as estimation object.
+
+training of GANs. We use DCGAN [31] architectures and use the hinge loss [29, 5]. We provide the implementation details in the supplementary materials.
+
+We use the Adam optimizer [20] and train for 100 epochs. We consider 15 hyper-parameter settings (See supplementary materials for details) by varying the learning rate $\alpha$ , first and second momentum $(\beta_{1},\beta_{2})$ of Adam, and number of discriminator updates per generator update $n_{dis}$ . We use both the 'Scale&Shift' (denoted as '-S') and 'Coloring' (denoted as '-C') to recover the representation of BW. We calculate the FID distribution of the trained models under 15 configurations, and the results are shown in Figure 8. The lower the variance, the more stable the model from an optimization perspective. In Table 4, we also provide the best FID and the corresponding IS from all the configurations for different whitening methods. Note that full ZCA whitening also suffers from the numerical instability in this experiment, and we thus use the group-based ZCA whitening.
+
+We observe that 'CD' whitening combined with the coloring transformation (CD-C) achieves the best performance. However, it has the worst stability among all whitening methods. We argue that full feature whitening introduces strong stochasticity, which benefits the diversity of the generated examples. However, the strong stochasticity also harms the training and makes the model sensitive to the hyper-parameters. We find that the coloring operation can improve the performance, especially for the whitening transformations (e.g., BN benefits less). Nevertheless, it also makes the training unstable and sensitive to the hyper-parameters. Besides, we observe that 'ZCA-64' achieves better performance than 'CD-64'. This suggests that the advantage of ZCA whitening over CD in discriminative scenarios still exists when training GANs. We argue that controlling the magnitude of whitening (stochasticity) is still important in training GANs.
+
+Table 4. Results on stability experiments shown in Section 4.3.1. We report the best FID and the corresponding IS from all the configurations for different whitening methods.
+
+| Method | DCGAN IS | DCGAN FID | ResNet IS | ResNet FID |
| CD | 7.820 ± 0.099 | 22.89 | 8.522 ± 0.113 | 18.34 |
| ZCA-64 | 7.672 ± 0.068 | 22.91 | 8.492 ± 0.106 | 17.86 |
| ItN | 7.652 ± 0.070 | 22.5 | 8.375 ± 0.093 | 17.55 |
+
+Table 5. Performance comparison between different whitening methods on the DCGAN and ResNet architectures used in [37]. For fairness, we report all the results at the end of the training.
+
+For example, ItN, the approximation of ZCA whitening, has nearly the same performance as CD (Table 4), and shows better stability (Figure 8) due to its effective control of the stochasticity, as explained in [16].
+
+We also obtained similar results when using the non-saturating loss [10], as shown in the supplementary materials.
+
+Comparison of Estimation Object Considering that the CD whitening in [37] uses the covariance matrix to indirectly estimate the whitening matrix when training GANs, we also compare the whitening methods under different estimation objects. The results are shown in Figure 9. We observe that the methods using the covariance matrix as the estimation object achieve consistently better FID scores and are generally more stable than those using the whitening matrix.
+
+# 4.3.2 Validation on Larger Architectures
+
+Based on our previous analysis, we observe that ItN and ZCA whitening (with an appropriate group size) have performance similar to CD in training GANs, when using the covariance matrix as the estimation object. We further apply ZCA whitening and ItN to the DCGAN and ResNet models, where CD whitening achieves nearly state-of-the-art performance on CIFAR-10. We use the code and the same setup as provided in [37], and replace all the CD whitening layers in the models with ItN/ZCA whitening. The results are shown in Table 5. We observe that CD whitening has no significant advantage over ItN/ZCA-G64. Ultimately, ItN achieves the best performance in terms of the FID evaluation, while CD whitening is the best in terms of IS. This suggests that there is still room to improve the performance of batch whitening in training GANs by further finely controlling the stochasticity.
+
+# 5. Conclusions
+
+In this paper, we provided a stochasticity analysis of batch whitening that thoroughly explains why different whitening transformations show significant differences in performance when training DNNs, despite their equivalent improvement of the conditioning. Our analysis provides insights for designing new normalization algorithms and constructing new network architectures. We believe that our analysis will open new avenues of research in better understanding normalization over batch data.
+
+Acknowledgement We thank Anna Hennig and Ying Hu for their help with proofreading.
+
+# References
+
+[1] Sanjeev Arora, Zhiyuan Li, and Kaifeng Lyu. Theoretical analysis of auto rate-tuning by batch normalization. In ICLR, 2019. 2
+[2] Andrei Atanov, Armenii Ashukha, Dmitry Molchanov, Kirill Neklyudov, and Dmitry Vetrov. Uncertainty estimation via stochastic batch normalization. In ICLR Workshop, 2018. 2, 3
+[3] Lei Jimmy Ba, Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. CoRR, abs/1607.06450, 2016. 1, 2, 5
+[4] Johan Bjorck, Carla Gomes, and Bart Selman. Understanding batch normalization. In NIPS, 2018. 1, 2
+[5] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. In ICLR, 2019. 7, 8
+[6] Minhyung Cho and Jaehyung Lee. Riemannian approach to batch normalization. In NIPS, 2017. 2
+[7] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, 2009. 2, 6
+[8] Guillaume Desjardins, Karen Simonyan, Razvan Pascanu, and koray kavukcuoglu. Natural neural networks. In NIPS, 2015. 2
+[9] Behrooz Ghorbani, Shankar Krishnan, and Ying Xiao. An investigation into neural net optimization via hessian eigenvalue density. In ICML, 2019. 2
+[10] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS. 2014. 8
+[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016. 1, 2, 6, 7
+[12] Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, and Mu Li. Bag of tricks for image classification with convolutional neural networks. In CVPR, 2019. 7
+[13] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In NIPS, 2017. 7
+[14] Elad Hoffer, Ron Banner, Itay Golan, and Daniel Soudry. Norm matters: efficient and accurate normalization schemes in deep networks. In NIPS, 2018. 2
+[15] Lei Huang, Dawei Yang, Bo Lang, and Jia Deng. Decorrelated batch normalization. In CVPR, 2018. 1, 2, 3, 4, 5, 6, 7
+[16] Lei Huang, Yi Zhou, Fan Zhu, Li Liu, and Ling Shao. Iterative normalization: Beyond standardization towards efficient whitening. In CVPR, 2019. 2, 3, 4, 6, 7, 8
+[17] Sergey Ioffe. Batch renormalization: Towards reducing minibatch dependence in batch-normalized models. In NIPS, 2017. 2
+[18] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015. 1, 2, 3, 5, 6
+[19] Agnan Kessy, Alex Lewin, and Korbinian Strimmer. Optimal whitening and decorrelation. The American Statistician, 72(4):309-314, 2018. 1, 3
+
+[20] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. 8
+[21] Jonas Kohler, Hadi Daneshmand, Aurelien Lucchi, Ming Zhou, Klaus Neymeyr, and Thomas Hofmann. Towards a theoretical understanding of batch normalization. arXiv preprint arXiv:1805.10694, 2018. 1
+[22] Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009. 1, 6
+[23] Karol Kurach, Mario Lučić, Xiaohua Zhai, Marcin Michalski, and Sylvain Gelly. A large-scale study on regularization and normalization in GANs. In ICML, 2019. 7
+[24] Yann LeCun, Léon Bottou, Genevieve B. Orr, and Klaus-Robert Müller. Efficient backprop. In Neural Networks: Tricks of the Trade, pages 9-50, 1998. 1
+[25] Yijun Li, Chen Fang, Jimei Yang, Zhaowen Wang, Xin Lu, and Ming-Hsuan Yang. Universal style transfer via feature transforms. In NIPS, 2017. 2
+[26] Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. In ICLR, 2017. 7
+[27] Ping Luo. Learning deep architectures via generalized whitened neural networks. In ICML, 2017. 2
+[28] Ping Luo, Jiamin Ren, and Zhanglin Peng. Differentiable learning-to-normalize via switchable normalization. arXiv preprint arXiv:1806.10779, 2018. 2
+[29] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In ICLR, 2018. 7, 8
+[30] Xingang Pan, Xiaohang Zhan, Jianping Shi, Xiaoou Tang, and Ping Luo. Switchable whitening for deep representation learning. In ICCV, 2019. 2
+[31] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016. 8
+[32] Mengye Ren, Renjie Liao, Raquel Urtasun, Fabian H. Sinz, and Richard S. Zemel. Normalizing the normalizers: Comparing and extending network normalization schemes. In ICLR, 2017. 2
+[33] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In NIPS, 2016. 7
+[34] Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, and Aleksander Madry. How does batch normalization help optimization? In NIPS, 2018. 1, 2
+[35] Alexander Shekhovtsov and Boris Flach. Stochastic normalizations as Bayesian learning. In ACCV, 2018. 2
+[36] Lu Sheng, Ziyi Lin, Jing Shao, and Xiaogang Wang. Avatar-net: Multi-scale zero-shot style transfer by feature decoration. In CVPR, 2018. 2
+[37] Aliaksandr Siarohin, Enver Sangineto, and Nicu Sebe. Whitening and coloring batch transform for GANs. In ICLR, 2019. 1, 2, 3, 5, 6, 7, 8
+[38] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. 6
+[39] Saurabh Singh and Abhinav Shrivastava. Evalnorm: Estimating batch normalization statistics for evaluation. In ICCV, 2019. 2
+
+[40] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929-1958, Jan. 2014. 2
+[41] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In CVPR, 2016. 1
+[42] Mattias Teye, Hossein Azizpour, and Kevin Smith. Bayesian uncertainty estimation for batch normalized deep networks. In ICML, 2018. 2, 3
+[43] Guangrun Wang, Jiefeng Peng, Ping Luo, Xinjiang Wang, and Liang Lin. Kalman normalization: Normalizing internal representations across network layers. In NIPS, 2018. 2
+[44] Simon Wiesler and Hermann Ney. A convergence analysis of log-linear training. In NIPS, pages 657-665, 2011. 1
+[45] Yuxin Wu and Kaiming He. Group normalization. In ECCV, 2018. 1, 2, 5
+[46] Greg Yang, Jeffrey Pennington, Vinay Rao, Jascha Sohl-Dickstein, and Samuel S. Schoenholz. A mean field theory of batch normalization. In ICLR, 2019. 2
+[47] Guodong Zhang, Chaoqi Wang, Bowen Xu, and Roger B. Grosse. Three mechanisms of weight decay regularization. In ICLR, 2019. 2
\ No newline at end of file
diff --git a/aninvestigationintothestochasticityofbatchwhitening/images.zip b/aninvestigationintothestochasticityofbatchwhitening/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..9bca49c7517c9a2bfa21dc9579094a93470fc2d7
--- /dev/null
+++ b/aninvestigationintothestochasticityofbatchwhitening/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c575b0381da8cfa9805b28a2b14c5626b1db10bd75ab185a9b8888ce8492538f
+size 415038
diff --git a/aninvestigationintothestochasticityofbatchwhitening/layout.json b/aninvestigationintothestochasticityofbatchwhitening/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..47e51009b6ff14f8339c01f15c993e58f564fbe6
--- /dev/null
+++ b/aninvestigationintothestochasticityofbatchwhitening/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c1a9486dd7ec5e0df7f4abd22dd0cce80860ad429417635322517eb5a3e67ac
+size 469944
diff --git a/anisotropicconvolutionalnetworksfor3dsemanticscenecompletion/de2a75d7-5937-4d15-b390-ec0c5f99b34e_content_list.json b/anisotropicconvolutionalnetworksfor3dsemanticscenecompletion/de2a75d7-5937-4d15-b390-ec0c5f99b34e_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..de36318b84f8472a8b745653e286fdde188169ba
--- /dev/null
+++ b/anisotropicconvolutionalnetworksfor3dsemanticscenecompletion/de2a75d7-5937-4d15-b390-ec0c5f99b34e_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:76975b5961e5ad8d95b4c38393dab12ff91a78a5ca9dad1c9c9284efa9dbebc9
+size 79824
diff --git a/anisotropicconvolutionalnetworksfor3dsemanticscenecompletion/de2a75d7-5937-4d15-b390-ec0c5f99b34e_model.json b/anisotropicconvolutionalnetworksfor3dsemanticscenecompletion/de2a75d7-5937-4d15-b390-ec0c5f99b34e_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..0925cdcca60a6d55605c64ce10ae5e5a6245974a
--- /dev/null
+++ b/anisotropicconvolutionalnetworksfor3dsemanticscenecompletion/de2a75d7-5937-4d15-b390-ec0c5f99b34e_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dd92cb705e66a205f732768816d004167b2a37336986428aab0cc4cd61d62f3e
+size 89455
diff --git a/anisotropicconvolutionalnetworksfor3dsemanticscenecompletion/de2a75d7-5937-4d15-b390-ec0c5f99b34e_origin.pdf b/anisotropicconvolutionalnetworksfor3dsemanticscenecompletion/de2a75d7-5937-4d15-b390-ec0c5f99b34e_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..410ee57c40fd4a7a3af505f8a8f88dc08210b33b
--- /dev/null
+++ b/anisotropicconvolutionalnetworksfor3dsemanticscenecompletion/de2a75d7-5937-4d15-b390-ec0c5f99b34e_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8cd33677445b5ea37487e038faa0fb0a80b7d4f7387ef061254029520ac0b105
+size 1973044
diff --git a/anisotropicconvolutionalnetworksfor3dsemanticscenecompletion/full.md b/anisotropicconvolutionalnetworksfor3dsemanticscenecompletion/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c6a2e2f8fe270d8e3bda7cd66d4fda14154a3a2d
--- /dev/null
+++ b/anisotropicconvolutionalnetworksfor3dsemanticscenecompletion/full.md
@@ -0,0 +1,315 @@
+# Anisotropic Convolutional Networks for 3D Semantic Scene Completion *
+
+Jie Li $^{1}$ Kai Han $^{2}$ Peng Wang $^{3\dagger}$ Yu Liu $^{4}$ Xia Yuan $^{1}$
+
+$^{1}$ Nanjing University of Science and Technology, China $^{2}$ University of Oxford, United Kingdom
+
+$^{3}$ University of Wollongong, Australia $^{4}$ The University of Adelaide, Australia
+
+# Abstract
+
+As a voxel-wise labeling task, semantic scene completion (SSC) tries to simultaneously infer the occupancy and semantic labels for a scene from a single depth and/or RGB image. The key challenge for SSC is how to effectively take advantage of the 3D context to model various objects or stuff with severe variations in shape, layout and visibility. To handle such variations, we propose a novel module called anisotropic convolution, which offers flexibility and modeling power that competing methods such as standard 3D convolution and some of its variations cannot match. In contrast to standard 3D convolution, which is limited to a fixed 3D receptive field, our module is capable of modeling the dimensional anisotropy voxel-wisely. The basic idea is to enable anisotropic 3D receptive fields by decomposing a 3D convolution into three consecutive 1D convolutions, where the kernel size for each such 1D convolution is adaptively determined on the fly. By stacking multiple such anisotropic convolution modules, the voxel-wise modeling capability can be further enhanced while maintaining a controllable number of model parameters. Extensive experiments on two SSC benchmarks, NYU-Depth-v2 and NYUCAD, show the superior performance of the proposed method. Our code is available at https://waterljwant.github.io/SSC/.
+
+# 1. Introduction
+
+Behaving in the 3D physical world requires an accurate understanding of both the 3D geometry and the semantics of the environment. Humans can easily infer such geometric and semantic information of a scene from partial observations. An open topic in computer vision is how to equip machines with such an ability, which is desirable in many applications such as navigation [4], grasping [20] and 3D home design [1], to name a few.
+
+Semantic scene completion (SSC) [16] is a computer vision task that teaches the machine how to perceive the 3D world from a static depth and/or RGB image. The task has two coupled objectives: one is 3D scene completion, which aims at inferring the volumetric occupancy of the scene, and the other is 3D scene labeling, which requires predicting the semantic labels voxel-wisely. As the objects within a physical scene exhibit severe variations in shape, layout, and visibility due to occlusions, the main challenge is how to model the 3D context to learn each voxel effectively.
+
+Recently, promising progress has been achieved for SSC [16, 6, 8, 10, 13] by employing deep convolutional neural networks (CNNs). A direct solution is to use a 3D convolutional neural network [16] to model the volumetric context, which consists of a stack of conventional 3D convolutional layers. This solution, however, suffers from apparent limitations. On the one hand, 3D convolution renders a fixed receptive field that does not cater to the variations of the objects. On the other hand, 3D convolution is resource demanding, which causes massive computational and memory consumption. Variations of 3D convolution [10, 21] have been proposed to address such shortcomings. For example, a lightweight dimensional decomposition network is proposed in [10] to alleviate the resource consumption, but it still leaves the object variation issue unattended.
+
+In this work, we propose a novel module, termed anisotropic convolution, to model object variation, which offers flexibility and power that competing methods cannot match. In contrast to standard 3D convolution and some of its variations that are limited to a fixed receptive field, the new module adapts to the dimensional anisotropy property voxel-wisely and enables receptive fields with varying sizes, a.k.a. anisotropic receptive fields. The basic idea is to decompose a 3D convolution operation into three consecutive 1D convolutions and equip each such 1D convolution with a mixture of different kernel sizes. The combination weights of such kernels along each 1D convolution are learned voxel-wisely, and thus anisotropic 3D context can essentially be modeled by consecutively performing such adaptive 1D convolutions. Although we use multiple kernels, e.g. 3, our module is still parameter-economical compared to its 3D counterpart, thanks to the dimensional decomposition scheme. By stacking multiple such modules, a more flexible 3D context, as well as an effective mapping function from such context to the voxel output, can be obtained.
+
+The contributions of this work are as follows:
+
+- We present a novel anisotropic convolutional network (AIC-Net) for the task of semantic scene completion. It renders flexibility in modeling the object variations in a 3D scene by automatically choosing proper receptive fields for different voxels.
+- We propose a novel module, termed anisotropic convolution (AIC) module, which adapts to the dimensional anisotropy property voxel-wisely and thus implicitly enables 3D kernels with varying sizes.
+- The new module is much less computationally demanding and more parameter-efficient than standard 3D convolution units. It can be used as a plug-and-play module to replace the standard 3D convolution unit.
+
+We thoroughly evaluate our model on two SSC benchmarks. Our method outperforms existing methods by a large margin, establishing the new state-of-the-art. Code will be made available.
+
+# 2. Related Work
+
+# 2.1. Semantic Scene Completion
+
+SSCNet [16] proposed by Song et al. is the first work that tries to simultaneously predict the semantic labels and volumetric occupancy of a scene in an end-to-end network. The expensive cost of 3D CNN, however, limits the depth of the network, which hinders the accuracy achieved by SSCNet. Zhang et al. [21] introduced spatial group convolution (SGC) into SSC for accelerating the computation of 3D dense prediction task. However, its accuracy is slightly lower than that of SSCNet. By combining the 2D CNN and 3D CNN, Guo and Tong [8] proposed the view-volume network (VVNet) to efficiently reduce the computation cost and enhance the network depth. Li et al. [11] use both depth and voxels as the inputs of a hybrid network and consider the importance of elements at different positions [23] while training.
+
+Garbade et al. [6] proposed a two-stream approach that jointly leverages the depth and visual information. Specifically, it first constructs an incomplete 3D semantic tensor from the inferred 2D semantic information, and then adopts a vanilla 3D CNN to infer the complete 3D semantic tensor. Liu et al. [13] also used RGB-D images as input and proposed a two-stage framework to sequentially carry out 2D semantic segmentation and 3D semantic scene completion, connected via a 2D-3D re-projection layer.
+
+However, their two-stage method can suffer from error accumulation, producing inferior results. Although significant improvements have been achieved, these methods are limited by the cost of 3D convolution and the fixed receptive fields. Li et al. [10] introduced a dimensional decomposition residual network (DDRNet) for the 3D SSC task. Although it achieves good accuracy with fewer parameters, it still leaves the limitation of a fixed receptive field unaddressed.
+
+# 2.2. Going Beyond Fixed Receptive Field
+
+Most existing models utilize fixed-size kernels to model a fixed visual context, which is less robust and flexible when dealing with objects of various sizes.
+
+The Inception family [17, 19, 18] takes receptive fields of multiple sizes into account, implementing this concept with multi-branch CNNs that use different convolution kernels. A similar idea appears in atrous spatial pyramid pooling (ASPP) [2], where multi-scale information is captured by several parallel convolutions with different atrous (dilation) rates on top of the feature map. These strategies essentially embrace the idea of multi-scale fusion, but the same fusion strategy is uniformly applied to all positions. Zhang et al. [22] choose a more suitable receptive field by weighting convolutions with different kernel sizes.
+
+STN [9] designs a Spatial Transformer module to achieve invariance in terms of translation, rotation, and scale. However, it treats the whole image as a unit, rather than adjusting the receptive field pixel-wisely. Deformable CNN (DCNv1) [3] attempts to adaptively adjust the spatial distribution of receptive fields according to the scale and shape of the object; specifically, it utilizes offsets to control the spatial sampling. DCNv2 [25] increases the modeling power by stacking more deformable convolutional layers and proposes to use a teacher network to guide the training process. However, DCNv2 still struggles to control the offsets so as to focus on relevant pixels only.
+
+Different from the above methods, the proposed AIC module is tailored for 3D tasks, in particular SSC in this paper. It is capable of handling objects with variations in shape, layout and visibility by learning anisotropic receptive fields voxel-wisely. At the same time, it achieves a good trade-off between semantic completion accuracy and computational cost.
+
+# 3. Anisotropic Convolutional Networks
+
+In this section, we introduce our anisotropic convolutional networks (AIC-Net) for 3D semantic scene completion. At the core of AIC-Net is our proposed anisotropic convolutional (AIC) module. Given a single-view RGB-D image of a 3D scene, AIC-Net predicts a dense 3D voxel representation and maps each voxel in the view frustum to one of the labels $C = \{c_1, c_2, \dots, c_{N+1}\}$, where $N$ is the number of object classes, $c_{N+1}$ represents the empty voxels, and $\{c_1, c_2, \dots, c_N\}$ represent the voxels occupied by objects of different categories.
+
+Figure 1. The overall network structure of AIC-Net. AIC-Net has two feature extractors in parallel to capture the features from RGB and depth images, respectively. The feature extractor contains a projection layer to map the 2D feature to 3D space. After that, we use stacked AICs to obtain information with adaptive receptive fields. The multi-scale features are concatenated and then fused through another two AICs followed by three voxel-wise convolutions to predict occupancy and object labels simultaneously.
+
+Fig. 1 illustrates the overall architecture of our AIC-Net. It consists of a hybrid feature extractor for feature extraction from the depth map and RGB image, a multi-stage feature aggregation module with a stack of AIC modules to aggregate features obtained by the hybrid feature extractor, two extra AIC modules to fuse multi-stage information, followed by a sequence of voxel-wise 3D convolution layers to reconstruct the 3D semantic scene. The hybrid feature extractor contains two parallel branches to extract features for the depth map and the RGB image, respectively. Each branch contains a hybrid structure of 2D and 3D CNNs. The 2D and 3D CNNs are bridged by a 2D-3D projection layer, allowing the model to convert the 2D feature maps into 3D feature maps that are suitable for 3D semantic scene completion. The structure of our hybrid feature extractor follows that of DDRNet [10]. The multi-stage feature aggregation module consists of a sequence of AIC modules, each of which can voxel-wisely adjust the 3D context on the fly. The outputs of these AIC modules are concatenated together, and another two AIC modules fuse such multi-stage information. The 3D semantic scene can then be reconstructed by applying a sequence of voxel-wise 3D convolutional layers on the fused feature.
+
+In the rest of this section, we will introduce our AIC module (section 3.1), the multi-path kernel selection mechanism achieved by stacking our AIC modules (section 3.2), and the training loss for our model (section 3.3) in detail.
+
+# 3.1. Anisotropic Convolution
+
+Considering the variations in object shapes and layouts as well as the varying levels of occlusion in SSC, it is beneficial to model different context information to infer the occupancy and semantics for different voxel positions. The anisotropic convolution (AIC) module is proposed to adapt to such variations, allowing the convolution to accommodate 3D geometric deformation. Fig. 2 shows the structure of our AIC module. Instead of using 3D kernels $(k_{1} \times k_{2} \times k_{3})$ that are limited to a fixed 3D receptive field, we model the dimensional anisotropy property by enabling the kernel size for each 3D dimension to be learnable. To achieve this, we first decompose the 3D convolution operation into the combination of three 1D convolution operations along each dimension $x$, $y$, $z$. In each dimension, we can inject multiple (e.g. 3 in our implementation) kernels of different sizes to enable more flexible context modeling. For example, for dimension $x$, we can have three kernels $(1 \times 1 \times k_{1}^{x})$, $(1 \times 1 \times k_{2}^{x})$, and $(1 \times 1 \times k_{3}^{x})$. A set of selection weights, a.k.a. modulation factors, is learned to select proper kernels along each of the three dimensions.
+
+Figure 2. Anisotropic convolution. For each dimension, we set 3 parallel convolutions with different kernel sizes as an example. The learned modulation factors for different kernels are denoted with different colors. The values of the modulation factors are positive and the values of each row sum up to 1.
+
+
+Figure 3. Bottleneck version AIC module. The first convolution reduces the number of channels from $D$ to $D'$ ( $D' < D$ ) and the last convolution increases the channels back to $D$ .
+
+Note that the kernel candidates for different dimensions need not be the same. When there are $n, m$, and $l$ candidate kernels along the $x, y$, and $z$ dimensions respectively, the possible kernel combinations grow as the product $\{k_1^z, k_2^z, \dots, k_l^z\} \times \{k_1^y, k_2^y, \dots, k_m^y\} \times \{k_1^x, k_2^x, \dots, k_n^x\}$. The AIC module can learn to select different kernels for each dimension, forming an anisotropic convolution to capture anisotropic 3D information.
+
+Modulation factors To enable the model to determine the optimal combination of the candidate kernels and consequently to adaptively control the context used to model different voxels, we introduce a modulation module in the AIC module. As shown in Fig. 2, assume the input to an AIC module is a tensor $\mathbf{X}_{t-1} \in \mathbb{R}^{L \times W \times H \times D}$, where $L$, $W$, $H$ denote the length, width and height of the tensor, and $D$ indicates the dimensionality of the feature. The output $\mathbf{X}_t \in \mathbb{R}^{L \times W \times H \times D}$ can be formulated as,
+
+$$
+\mathbf {X} _ {t} = \mathcal {F} ^ {z} \left(\mathcal {F} ^ {y} \left(\mathcal {F} ^ {x} \left(\mathbf {X} _ {t - 1}\right)\right)\right) + \mathbf {X} _ {t - 1}, \tag {1}
+$$
+
+where $\mathcal{F}^u$ represents the anisotropic convolution along the $u\in \{x,y,z\}$ dimension. We adopt a residual structure to obtain the output by element-wisely summing up the input tensor and the output of three consecutive anisotropic 1D convolutions. Without losing generality, we represent $\mathcal{F}^x (\mathbf{X}_{t - 1})$ as,
+
+$$
+\mathbf {X} _ {t} ^ {x} = \sum_ {i = 1} ^ {n} f ^ {x} \left(\mathbf {X} _ {t - 1}, \theta_ {i} ^ {x}\right) \odot g ^ {x} \left(\mathbf {X} _ {t - 1}, \phi^ {x}\right) [ i ], \tag {2}
+$$
+
+where $f^{x}(\mathbf{X}_{t - 1},\theta_{i}^{x})$ represents performing convolution on $\mathbf{X}_{t - 1}$ using parameter $\theta_{i}^{x}$, which has kernel size $(1,1,k_i^x)$ with $k_{i}^{x}\in \{k_{1}^{x},k_{2}^{x},\dots ,k_{n}^{x}\}$, $n$ is the total number of candidate kernels for dimension $x$, and $\odot$ denotes element-wise multiplication. $g^{x}(\mathbf{X}_{t - 1},\phi^{x})$ is a mapping function from the input tensor to the weights or modulation factors used to select the kernels along dimension $x$, and $\phi^x$ denotes the parameters of the mapping function. We apply a softmax to $g^{u}(\cdot ,\cdot)[i]$ so that the weights for the kernels of each dimension $u\in \{x,y,z\}$ sum up to 1, that is,
+
+$$
+\sum_ {i = 1} ^ {p \in \{n, m, l \}} g ^ {u} (\cdot , \phi^ {u}) [ i ] = 1, \quad g ^ {u} (\cdot , \phi^ {u}) [ i ] \geq 0. \tag {3}
+$$
+
+Figure 4. Illustration of multi-path kernel selection in one dimension. In this example, four AIC modules are stacked and for each module the kernel sizes for each dimension are $\{3,5,7\}$. The background darkness of the kernel indicates the value of the modulation factor, and thus reflects the selection tendency for this kernel. Stacking multiple AIC modules can increase the range of receptive fields exponentially.
+
+In this sense, we adopt a soft constraint with a set of weights to determine the importance of different kernels. The two extreme cases are a learned modulation factor of 1 or 0, indicating that the corresponding kernel is either the only one selected or ignored entirely. By using soft values, we can control the contributions of these kernels more flexibly.
+
+In Fig. 2, we show an example of the AIC module with $m = n = l = 3$; as shown, $g^{u}(\cdot ,\cdot)$ is realized by a single 3D convolution layer with kernel $(1\times 1\times 1)$.
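+
+As a concrete reference, the following minimal PyTorch sketch reconstructs this design from the description above (class and argument names are ours, not the authors' released code); the candidate kernel sizes $\{3,5,7\}$, the $1\times1\times1$ gating convolution and the residual structure follow Eq. (1)-(3).
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class AnisotropicConv1D(nn.Module):
+    """Weighted mix of 1D convolutions along one spatial axis, following Eq. (2)-(3)."""
+    def __init__(self, channels, kernel_sizes=(3, 5, 7), axis=2):
+        super().__init__()
+        self.convs = nn.ModuleList()
+        for k in kernel_sizes:
+            size, pad = [1, 1, 1], [0, 0, 0]
+            size[axis], pad[axis] = k, k // 2          # e.g. a (1, 1, k) kernel with "same" padding
+            self.convs.append(nn.Conv3d(channels, channels, tuple(size), padding=tuple(pad)))
+        self.gate = nn.Conv3d(channels, len(kernel_sizes), kernel_size=1)   # g(.), a 1x1x1 convolution
+
+    def forward(self, x):                              # x: (N, C, L, W, H)
+        w = F.softmax(self.gate(x), dim=1)             # voxel-wise modulation factors, sum to 1
+        return sum(conv(x) * w[:, i:i + 1] for i, conv in enumerate(self.convs))
+
+class AICModule(nn.Module):
+    """Three consecutive anisotropic 1D convolutions plus a residual connection, Eq. (1)."""
+    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
+        super().__init__()
+        self.branches = nn.ModuleList(
+            AnisotropicConv1D(channels, kernel_sizes, axis=a) for a in (2, 1, 0))
+
+    def forward(self, x):
+        out = x
+        for branch in self.branches:
+            out = branch(out)
+        return out + x
+
+# Example: the module preserves the shape of the input feature volume.
+y = AICModule(32)(torch.randn(1, 32, 8, 6, 8))
+```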
+
+Bottleneck anisotropic convolution To further reduce the parameters of our AIC module, we propose a bottleneck based AIC module. As shown in Fig. 3, for each AIC module, a $(1 \times 1 \times 1)$ convolution is added both before and after the AIC operation. These two convolutions are responsible for reducing and restoring the feature channels, allowing the AIC module to have a more compact input. In the remainder of the paper, unless stated otherwise, AIC refers to the bottleneck based AIC.
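+
+The bottleneck variant can then be sketched as two $1\times1\times1$ convolutions wrapped around the module above (reusing the imports and the `AICModule` class from the previous sketch; the reduced width corresponds to $D'$ and is a free choice, e.g. $D=64$, $D'=32$ as used later in Section 4.1).
+
+```python
+class BottleneckAIC(nn.Module):
+    """1x1x1 reduce -> AIC -> 1x1x1 restore, following the description of Fig. 3 (a sketch)."""
+    def __init__(self, channels, reduced, kernel_sizes=(3, 5, 7)):
+        super().__init__()
+        self.reduce = nn.Conv3d(channels, reduced, kernel_size=1)    # D -> D'
+        self.aic = AICModule(reduced, kernel_sizes)
+        self.restore = nn.Conv3d(reduced, channels, kernel_size=1)   # D' -> D
+
+    def forward(self, x):
+        return self.restore(self.aic(self.reduce(x)))
+```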
+
+# 3.2. Multi-path Kernel Selection
+
+Despite the attractive properties of a single AIC module, here we show that greater flexibility can be achieved by stacking multiple AIC modules. Stacking multiple AIC modules implicitly forms multiple possible paths between layers and consequently enables an extensive range of receptive field variations in the model. Fig. 4 shows a stack of four AIC modules, where each module sets the kernel sizes to $\{3,5,7\}$ along all three dimensions. For one specific dimension, when each module tends to select kernel size 7, a maximum receptive field of 25 is obtained for this dimension. On the contrary, a minimum receptive field of 9 is obtained for a dimension if kernel size 3 dominates the selections of all four AIC modules in this dimension. In theory, the receptive field for this particular dimension can therefore vary freely between 9 and 25.
+
+| Methods | prec. | recall | IoU | ceil. | floor | wall | win. | chair | bed | sofa | table | tvs | furn. | objs. | avg. |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Lin et al. [12] | 58.5 | 49.9 | 36.4 | 0.0 | 11.7 | 13.3 | 14.1 | 9.4 | 29.0 | 24.0 | 6.0 | 7.0 | 16.2 | 1.1 | 12.0 |
+| Geiger et al. [7] | 65.7 | 58.0 | 44.4 | 10.2 | 62.5 | 19.1 | 5.8 | 8.5 | 40.6 | 27.7 | 7.0 | 6.0 | 22.6 | 5.9 | 19.6 |
+| SSCNet [16] | 57.0 | 94.5 | 55.1 | 15.1 | 94.7 | 24.4 | 0.0 | 12.6 | 32.1 | 35.0 | 13.0 | 7.8 | 27.1 | 10.1 | 24.7 |
+| EsscNet [21] | 71.9 | 71.9 | 56.2 | 17.5 | 75.4 | 25.8 | 6.7 | 15.3 | 53.8 | 42.4 | 11.2 | 0 | 33.4 | 11.8 | 26.7 |
+| DDRNet [10] | 71.5 | 80.8 | 61.0 | 21.1 | 92.2 | 33.5 | 6.8 | 14.8 | 48.3 | 42.3 | 13.2 | 13.9 | 35.3 | 13.2 | 30.4 |
+| VVNet [8] | 69.8 | 83.1 | 61.1 | 19.3 | 94.8 | 28.0 | 12.2 | 19.6 | 57.0 | 50.5 | 17.6 | 11.9 | 35.6 | 15.3 | 32.9 |
+| AIC-Net | 62.4 | 91.8 | 59.2 | 23.2 | 90.8 | 32.3 | 14.8 | 18.2 | 51.1 | 44.8 | 15.2 | 22.4 | 38.3 | 15.7 | 33.3 |
+
+Table 1. Results on the NYU [15] dataset. Bold numbers represent the best scores. The columns prec., recall and IoU report scene completion, while the per-class columns and avg. report semantic scene completion IoU.
+
+| Methods | prec. | recall | IoU | ceil. | floor | wall | win. | chair | bed | sofa | table | tvs | furn. | objs. | avg. |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Zheng et al. [24] | 60.1 | 46.7 | 34.6 | - | - | - | - | - | - | - | - | - | - | - | - |
+| Firman et al. [5] | 66.5 | 69.7 | 50.8 | - | - | - | - | - | - | - | - | - | - | - | - |
+| SSCNet [16] | 75.4 | 96.3 | 73.2 | 32.5 | 92.6 | 40.2 | 8.9 | 33.9 | 57.0 | 59.5 | 28.3 | 8.1 | 44.8 | 25.1 | 40.0 |
+| TS3D [6] | 80.2 | 91.0 | 74.2 | 33.8 | 92.9 | 46.8 | 27.0 | 27.9 | 61.6 | 51.6 | 27.6 | 26.9 | 44.5 | 22.0 | 42.1 |
+| DDRNet [10] | 88.7 | 88.5 | 79.4 | 54.1 | 91.5 | 56.4 | 14.9 | 37.0 | 55.7 | 51.0 | 28.8 | 9.2 | 44.1 | 27.8 | 42.8 |
+| VVNet [8] | 86.4 | 92.0 | 80.3 | - | - | - | - | - | - | - | - | - | - | - | - |
+| AIC-Net | 88.2 | 90.3 | 80.5 | 53.0 | 91.2 | 57.2 | 20.2 | 44.6 | 58.4 | 56.2 | 36.2 | 9.7 | 47.1 | 30.4 | 45.8 |
+
+Table 2. Results on the NYUCAD dataset [24]. Bold numbers represent the best scores. Column groups as in Table 1.
+
+When considering three dimensions simultaneously, the number of 3D receptive fields supported by our AIC network grows exponentially, which provides flexibility and power for modeling object variations that competing methods cannot match.
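+
+As a quick sanity check of these numbers (our own arithmetic, not code from the paper): stacking 1D convolutions adds $k-1$ to the receptive field per layer, so four modules with candidate kernels $\{3,5,7\}$ yield per-dimension receptive fields between 9 and 25.
+
+```python
+from itertools import product
+
+kernels, n_modules = (3, 5, 7), 4
+fields = sorted({1 + sum(k - 1 for k in choice) for choice in product(kernels, repeat=n_modules)})
+print(fields[0], fields[-1], len(fields))   # 9 25 9 -> odd sizes 9, 11, ..., 25 per dimension
+# Combining the three dimensions independently gives up to 9**3 = 729 distinct 3D receptive fields.
+```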
+
+# 3.3. Training Loss
+
+Our proposed AIC-Net can be trained in an end-to-end fashion. We adopt the voxel-wise cross-entropy loss function [16] for the network training. The loss function can be expressed as,
+
+$$
+\mathcal {L} = \sum_ {i, j, k} w _ {i j k} \mathcal {L} _ {s m} \left(p _ {i j k}, y _ {i j k}\right), \tag {4}
+$$
+
+where $\mathcal{L}_{sm}$ is the cross-entropy loss, $y_{ijk}$ is the ground truth label for coordinates $(i,j,k)$ , $p_{ijk}$ is the predicted probability for the same voxel, and $w_{ijk}$ is the weight to balance the semantic categories. We follow [16, 10] and use the same weights in our experiments.
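+
+A minimal PyTorch rendering of this loss could look as follows (a sketch of Eq. (4); the per-voxel weights follow [16, 10] and are assumed to be given):
+
+```python
+import torch.nn.functional as F
+
+def ssc_loss(logits, target, weights):
+    # logits: (B, N+1, L, W, H) class scores; target: (B, L, W, H) voxel labels;
+    # weights: (B, L, W, H) per-voxel weights w_ijk balancing the semantic categories.
+    per_voxel = F.cross_entropy(logits, target, reduction="none")   # L_sm(p_ijk, y_ijk)
+    return (weights * per_voxel).sum()
+```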
+
+# 4. Experiments
+
+In this section, we start by introducing some key implementation details, followed by a description of the datasets as well as the evaluation metrics. Then we present quantitative comparisons between the proposed AIC-Net and other existing works. Furthermore, qualitative comparisons are given through visualization. Finally, comprehensive ablation studies are performed to inspect some critical aspects of AIC-Net.
+
+# 4.1. Implementation Details
+
+In our AIC-Net, we stack three AIC modules for each branch in the multi-stage feature aggregation part, and two AIC modules are adopted to fuse these features. All the AIC modules used are the bottleneck version shown in Fig. 3. For the three AIC modules in feature aggregation, the bottleneck layer decreases the dimensionality of the features from $D = 64$ to $D' = 32$. For the AIC modules in the feature fusion part, the dimensionalities of the features before and after the bottleneck layer are $D = 256$ and $D' = 64$. Unless stated otherwise, we use three candidate kernels with kernel sizes $\{3,5,7\}$ for each dimension of all AIC modules. More details about the network structure can be found in the supplements.
+
+Our model is trained using SGD with a momentum of 0.9 and a weight decay of $10^{-4}$. The initial learning rate is set to 0.01 and decays by a factor of 10 every 15 epochs. The batch size is 4. We implement our model using PyTorch. All the experiments are conducted on a PC with 4 NVIDIA RTX 2080 Ti GPUs.
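+
+In PyTorch, the stated optimization setup corresponds to a configuration along these lines (a sketch; `model` stands for an AIC-Net instance and is not defined here):
+
+```python
+import torch
+
+optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)
+scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=15, gamma=0.1)  # lr/10 every 15 epochs
+```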
+
+Datasets. We evaluate the proposed AIC-Net on two SSC datasets. One dataset is NYU-Depth-V2 [15], which is also known as the NYU dataset. The NYU dataset consists of 1,449 depth scenes captured by a Kinect sensor. Following SSCNet [16], we use the 3D annotations provided by [14] for the semantic scene completion task. The second dataset is the NYUCAD dataset [5]. This dataset uses depth maps generated from the projections of the 3D annotations to reduce the misalignment between the depths and the annotations, and thus provides higher-quality depth maps.
+
+Figure 5. Qualitative results on NYUCAD. From left to right: (a) the input RGB and depth images, (b) the ground truth, (c) SSCNet [16], (d) DDRNet [10], and (e) the proposed AIC-Net. Class colors: ceil., floor, wall, win., chair, bed, sofa, table, tvs, furn., objects. (Best viewed in color.)
+
+Evaluation metrics. For semantic scene completion, we measure the intersection over union (IoU) between the predicted voxel labels and ground-truth labels for all object classes. Overall performance is also given by computing the average IoU over all classes. For scene completion, all voxels are to be categorized into either empty or occupied. A voxel is counted as occupied if it belongs to any of the semantic classes. For scene completion, apart from IoU, precision and recall are also reported. Note that the IoU for semantic scene completion is commonly accepted as a more important metric in the SSC task.
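+
+For reference, per-class IoU and the average IoU used here can be computed from a label confusion matrix as follows (a generic sketch of the standard metric, not the authors' evaluation script):
+
+```python
+import numpy as np
+
+def iou_scores(pred, gt, num_classes):
+    # pred, gt: integer label volumes of identical shape, restricted to the evaluated voxels.
+    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
+    np.add.at(conf, (gt.reshape(-1), pred.reshape(-1)), 1)
+    tp = np.diag(conf)
+    union = conf.sum(0) + conf.sum(1) - tp
+    iou = tp / np.maximum(union, 1)
+    return iou, iou.mean()   # per-class IoU and the average over the evaluated classes
+```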
+
+# 4.2. Comparison with the State-of-the-Art
+
+We compare our AIC-Net with the state-of-the-art methods on NYU and NYUCAD. The results are reported in Table 1 and Table 2, respectively. In Table 1, we can see that for semantic scene completion our method significantly outperforms the other methods in overall accuracy.
+
+| Methods | prec. | recall | IoU | ceil. | floor | wall | win. | chair | bed | sofa | table | tvs | furn. | objs. | avg. |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| AIC-Net, k={3,5,7} | 88.2 | 90.3 | 80.5 | 53.0 | 91.2 | 57.2 | 20.2 | 44.6 | 58.4 | 56.2 | 36.2 | 9.7 | 47.1 | 30.4 | 45.8 |
+| AIC-Net, k={5,7} | 88.3 | 89.5 | 79.9 | 51.0 | 91.3 | 56.8 | 18.6 | 41.3 | 58.6 | 59.4 | 34.6 | 4.8 | 46.7 | 30.9 | 44.9 |
+| AIC-Net, k={7} | 86.3 | 90.3 | 79.1 | 50.7 | 91.7 | 54.5 | 21.2 | 38.0 | 55.5 | 57.1 | 33.2 | 7.9 | 44.9 | 29.4 | 44.0 |
+| AIC-Net, k={5} | 87.8 | 88.2 | 78.4 | 49.6 | 91.3 | 55.3 | 15.7 | 38.7 | 58.6 | 52.8 | 30.9 | 0. | 43.9 | 30.2 | 42.5 |
+
+Table 3. The performance of AIC-Net under different kernel sets. We use the same kernel set $k = (k_{1},k_{2},\dots ,k_{n})$ for each dimension. Results are reported on the NYUCAD dataset [24]. Column groups as in Table 1.
+
+| Methods | prec. | recall | IoU | ceil. | floor | wall | win. | chair | bed | sofa | table | tvs | furn. | objs. | avg. |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| NYU |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
+| AIC-Net-noMF | 71.4 | 79.0 | 59.9 | 22.3 | 90.8 | 32.0 | 14.4 | 14.5 | 47.5 | 41.3 | 12.6 | 16.8 | 32.8 | 12.7 | 30.7 |
+| AIC-Net | 62.4 | 91.8 | 59.2 | 23.2 | 90.8 | 32.3 | 14.8 | 18.2 | 51.1 | 44.8 | 15.2 | 22.4 | 38.3 | 15.7 | 33.3 |
+| NYUCAD |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
+| AIC-Net-noMF | 87.2 | 90.3 | 79.6 | 51.1 | 91.7 | 57.0 | 18.5 | 39.3 | 51.4 | 51.8 | 30.7 | 1.3 | 45.0 | 30.1 | 42.5 |
+| AIC-Net | 88.2 | 90.3 | 80.5 | 53.0 | 91.2 | 57.2 | 20.2 | 44.6 | 58.4 | 56.2 | 36.2 | 9.7 | 47.1 | 30.4 | 45.8 |
+
+Table 4. The importance of the modulation factors. AIC-Net-noMF denotes the variant with all modulation factors set to 1. Results are reported on the NYU [15] and NYUCAD [24] datasets. Column groups as in Table 1.
+
+The proposed AIC-Net achieves a $2.9\%$ higher average IoU than the cutting-edge approach DDRNet [10]. For scene completion, our method is slightly outperformed by DDRNet [10]. The scene completion task requires predicting the volumetric occupancy, which is class-agnostic; since our AIC-Net aims at modeling object variation voxel-wisely, its advantage fades in this binary completion task. In Table 2, our AIC-Net achieves the best semantic segmentation performance as well, and our average IoU outperforms the second-best approach by $3\%$. For scene completion, our method also shows superior performance, although the advantage is not as significant. Among the competing methods, SSCNet [16] is built using standard 3D convolution. Its inferior performance is twofold: first, the fixed receptive field is not ideal for addressing object variations; second, 3D convolution is resource demanding, which can limit the depth of the 3D network and consequently sacrifices modeling capability.
+
+Another interesting observation from these two tables is that our AIC-Net tends to obtain better performance on some categories that have more severe shape variations, e.g. chair, table, objects.
+
+# 4.3. Qualitative Results
+
+In Fig. 5, we show some visualization results to evaluate the effectiveness of our AIC-Net qualitatively. Generally, we can see that the proposed AIC-Net can handle diverse objects with various shapes and thus gives more accurate semantic predictions and shape completion than SSCNet [16] and DDRNet [10]. Some challenging examples include "chairs" and "tables" in Row 1, Row 3, and Row 5, which require a model to adaptively adjust the receptive field voxel-wisely. For example, for some more delicate parts like "legs", a smaller receptive field can be more beneficial, and our AIC-Net identifies such objects more clearly. For some other objects like "windows" in Row 5 and Row 7, a larger context is needed; both SSCNet and DDRNet fail in this case, but our method still successfully identifies them among the surrounding distractors. The "bed" in Row 2, the "wall" in Row 6, and the "sofa" in Row 4 also demonstrate the superiority of our approach. In Row 8, the "objects" marked by the red dashed rectangle are in a messy environment. Our AIC-Net is less vulnerable to the influence of surrounding objects and more accurately distinguishes the categories and shapes of these "objects".
+
+# 4.4. Ablation Study
+
+In this section, we dive into AIC-Net to investigate its key aspects in detail. Specifically, we try to answer the following questions. 1) Is it beneficial to use multiple candidate kernels along each dimension of the AIC module? 2) Does the performance improvement simply come from using multiple kernels? 3) Does the AIC module work as a plug-and-play module? 4) What is the trade-off between SSC performance and cost?
+
+The effectiveness of using multiple kernels In our AIC module, we use multiple candidate kernels in each dimension $x,y,z$, and use the learned modulation factors to choose proper kernels along each of these dimensions. Since we expect our AIC-Net to be able to deal with objects of varying shapes, the kernels in AIC should be sufficiently distinct. In our experiments, we set the kernel set to $\{3,5,7\}$ across all three dimensions. The first question to be clarified is whether it is enough to use only the maximum kernel, i.e. 7, in our network. Then, are three kernels better than two? From the results in Table 3, we can see that either two kernels $\{5,7\}$ or three kernels $\{3,5,7\}$ outperform the single kernel 7.
+
+| Methods | prec. | recall | IoU | ceil. | floor | wall | win. | chair | bed | sofa | table | tvs | furn. | objs. | avg. |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| DDRNet-DDR-ASPP [10] | 88.7 | 88.5 | 79.4 | 54.1 | 91.5 | 56.4 | 14.9 | 37.0 | 55.7 | 51.0 | 28.8 | 9.2 | 44.1 | 27.8 | 42.8 |
+| DDRNet-AIC-ASPP | 87.9 | 89.1 | 79.4 | 48.0 | 90.9 | 56.1 | 20.1 | 41.6 | 56.6 | 55.0 | 33.1 | 12.6 | 45.3 | 29.0 | 44.4 |
+| DDRNet-DDR-AIC | 88.0 | 89.6 | 79.7 | 49.0 | 91.4 | 57.6 | 19.7 | 40.5 | 52.3 | 52.9 | 32.5 | 6.1 | 44.6 | 30.7 | 43.4 |
+| DDRNet-AIC-AIC | 87.5 | 89.3 | 79.1 | 51.7 | 91.5 | 56.4 | 16.5 | 44.1 | 56.3 | 56.4 | 35.4 | 12.3 | 46.1 | 30.4 | 45.2 |
+
+Table 5. The AIC module as a plug-and-play module. The components of DDRNet [10] are replaced by AIC modules. Results are reported on the NYUCAD dataset [24]. Column groups as in Table 1.
+
+| Methods | Params (K) | FLOPs (G) | SC-IoU | SSC-IoU |
+| --- | --- | --- | --- | --- |
+| SSCNet [16] | 930.0 | 163.8 | 73.2 | 40.0 |
+| DDRNet [10] | 195.0 | 27.2 | 79.4 | 42.8 |
+| 3D conv, k=(3,3,3) | 440.1 | 61.0 | - | - |
+| 3D conv, k=(5,5,5) | 1443.6 | 191.1 | - | - |
+| 3D conv, k=(7,7,7) | 3675.9 | 480.4 | - | - |
+| AIC-Net*, k={3,5,7} | 628.7 | 85.5 | 79.1 | 45.2 |
+| AIC-Net, k={3,5,7} | 847.0 | 113.7 | 80.5 | 45.8 |
+| AIC-Net, k={5,7} | 716.0 | 96.77 | 79.9 | 44.9 |
+
+Table 6. Parameters, FLOPs and performance of our approach compared with other methods. 3D conv, $k = (k_{1},k_{2},k_{3})$, denotes that we replace our AIC module with a 3D convolution unit with 3D kernel $(k_{1},k_{2},k_{3})$. AIC-Net* denotes an AIC-Net with one AIC module in the feature fusion part, while by default we use two AIC modules.
+
+Since the maximum receptive field for all three of these options is 7, the results demonstrate the benefits of using multiple kernels. At the same time, three kernels outperform two kernels by about $1\%$, as they render more flexibility in modeling the context.
+
+Is it necessary to use modulation factors? In the above paragraph, we show the benefit of using multiple kernels along each dimension. However, another question arises: does the improvement simply come from using multiple kernels? In other words, is it necessary to learn modulation factors to adaptively select the kernels voxel-wisely? From Table 4, we can see that when we discard the modulation factors in the AIC modules, the performance of AIC-Net degrades noticeably on both the NYU and NYUCAD datasets. These results show that the superior performance of AIC-Net relies on modeling the dimensional anisotropy property by adaptively selecting proper kernels along each dimension. To further inspect the anisotropic nature of the learned kernels, we examined the statistics of the modulation factors and found that: 1) the selected kernel sizes are basically consistent with the object sizes; 2) the modulation values for different voxels vary considerably within one scene; 3) the modulation values among the three separable dimensions vary significantly. This indicates that the learned "3D receptive fields" are anisotropic and adaptive.
+
+AIC module used as a plug-and-play module Due to its ability to model the anisotropic context, our AIC module is expected to benefit other networks when used as a plug-and-play module. To validate this, we choose DDRNet [10] as the test-bed and use the AIC module to replace its building blocks, DDR and ASPP. The DDR block models 3D convolution in a lightweight manner with a fixed receptive field. ASPP is a feature fusion scheme commonly used in semantic segmentation to take advantage of the multi-scale context. Table 5 shows the comparison. When we use AIC to replace the DDR module in DDRNet [10], the SSC-IoU is improved by $1.6\%$. When we replace ASPP with our AIC module, we still observe a $0.6\%$ improvement in semantic segmentation. Finally, when we replace both DDR and ASPP with AIC, the result is further boosted.
+
+Trade-off in performance and cost Since we decompose the 3D convolution into three consecutive 1D convolutions, the model parameters and computation grow linearly with the number of candidate kernels in each dimension, whereas for standard 3D convolution the parameters and computation grow cubically. Table 6 presents comparisons in terms of both efficiency and accuracy. A row labeled 3D conv, $k = (k_{1}, k_{2}, k_{3})$, means that we use this particular 3D convolution to replace our AIC module. As can be seen, a 3D kernel size of $(5, 5, 5)$ results in about 3 times the parameters and FLOPs compared to our AIC-Net. When the kernel size is increased to $(7, 7, 7)$, the parameter and computation scale is about 8 times more than ours. DDRNet is a lightweight structure, which has the fewest parameters and the lowest computational complexity, but it shows a clear performance gap compared to our method. Thus, our AIC-Net achieves a better trade-off between performance and cost.
+
+# 5. Conclusion
+
+In this paper, we proposed a novel AIC-Net to handle the object variations in the semantic scene completion (SSC) task. At the core of AIC-Net is our proposed AIC module, which learns anisotropic convolutions by adaptively choosing the convolution kernels along all three dimensions voxel-wisely. Stacking multiple such AIC modules allows us to control the receptive field for each voxel more flexibly. The AIC module can be freely inserted into existing networks as a plug-and-play module to effectively model the 3D context in a parameter-economical manner. Thorough experiments were conducted on two SSC datasets, and AIC-Net outperforms existing methods by a large margin, establishing the new state-of-the-art.
+
+# References
+
+[1] Planner5d. https://planner5d.com/. 1
+[2] Liang-Chieh Chen, George Papandreou, Florian Schroff, and Hartwig Adam. Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587, 2017. 2
+[3] Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. In Proceedings of the IEEE international conference on computer vision, pages 764-773, 2017. 2
+[4] Anh-Dzung Doan, Yasir Latif, Tat-Jun Chin, Yu Liu, Thanh-Toan Do, and Ian Reid. Scalable place recognition under appearance change for autonomous driving. In Proceedings of the IEEE International Conference on Computer Vision, pages 9319-9328, 2019. 1
+[5] Michael Firman, Oisin Mac Aodha, Simon Julier, and Gabriel J Brostow. Structured prediction of unobserved voxels from a single depth image. In CVPR, pages 5431-5440, 2016. 5
+[6] Martin Garbade, Johann Sawatzky, Alexander Richard, and Juergen Gall. Two stream 3d semantic scene completion. arXiv:1804.03550, 2018. 1, 2, 5
+[7] Andreas Geiger and Chaohui Wang. Joint 3d object and layout inference from a single rgb-d image. In GCPR, pages 183-195, 2015. 5
+[8] Yuxiao Guo and Xin Tong. View-volume network for semantic scene completion from a single depth image. In Proc. IJCAI, pages 726-732, 7 2018. 1, 2, 5
+[9] Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In Advances in neural information processing systems, pages 2017-2025, 2015. 2
+[10] Jie Li, Yu Liu, Dong Gong, Qinfeng Shi, Xia Yuan, Chunxia Zhao, and Ian Reid. Rgbd based dimensional decomposition residual network for 3d semantic scene completion. In CVPR, pages 7693-7702, 2019. 1, 2, 3, 5, 6, 7, 8
+[11] Jie Li, Yu Liu, Xia Yuan, Chunxia Zhao, Roland Siegwart, Ian Reid, and Cesar Cadena. Depth based semantic scene completion with position importance aware loss. IEEE Robotics and Automation Letters, 5(1):219-226, 2019. 2
+[12] Dahua Lin, Sanja Fidler, and Raquel Urtasun. Holistic scene understanding for 3d object detection with rgbd cameras. In ICCV, pages 1417-1424, 2013. 5
+[13] Shice Liu, Yu Hu, Yiming Zeng, Qiankun Tang, Beibei Jin, Yinhe Han, and Xiaowei Li. See and think: Disentangling semantic scene completion. In Advances in Neural Information Processing Systems, pages 263-274, 2018. 1, 2
+[14] Jason Rock, Tanmay Gupta, Justin Thorsen, JunYoung Gwak, Daeyun Shin, and Derek Hoiem. Completing 3d object shape from one depth image. In CVPR, pages 2484-2493. IEEE, 2015. 5
+[15] Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. Indoor segmentation and support inference from rgbd images. In ECCV, pages 746-760. Springer, 2012. 5, 7
+[16] Shuran Song, Fisher Yu, Andy Zeng, Angel X Chang, Manolis Savva, and Thomas Funkhouser. Semantic scene completion from a single depth image. In CVPR, pages 190-198, 2017. 1, 2, 5, 6, 7, 8
+
+[17] Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. In Thirty-First AAAI Conference on Artificial Intelligence, 2017. 2
+[18] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1-9, 2015. 2
+[19] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818-2826, 2016. 2
+[20] Jacob Varley, Chad DeChant, Adam Richardson, Joaquin Ruales, and Peter Allen. Shape completion enabled robotic grasping. In Int. Conf. IROS, pages 2442-2447, 2017. 1
+[21] Jiahui Zhang, Hao Zhao, Anbang Yao, Yurong Chen, Li Zhang, and Hongen Liao. Efficient semantic scene completion network with spatial group convolution. In ECCV, pages 733-749, 2018. 1, 2, 5
+[22] Lei Zhang, Zhiqiang Lang, Peng Wang, Wei Wei, Shengcai Liao, Ling Shao, and Yanning Zhang. Pixel-wise deep function-mixture network for spectral super-resolution. In AAAI Conference on Artificial Intelligence, 2020. 2
+[23] Lei Zhang, Peng Wang, Chunhua Shen, Lingqiao Liu, Wei Wei, Yanning Zhang, and Anton van den Hengel. Adaptive importance learning for improving lightweight image superresolution network. International Journal of Computer Vision, pages 1-21, 2019. 2
+[24] Bo Zheng, Yibiao Zhao, Joey C Yu, Katsushi Ikeuchi, and Song-Chun Zhu. Beyond point clouds: Scene understanding by reasoning geometry and physics. In CVPR, pages 3127-3134, 2013. 5, 7, 8
+[25] Xizhou Zhu, Han Hu, Stephen Lin, and Jifeng Dai. Deformable convnets v2: More deformable, better results. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9308-9316, 2019. 2
\ No newline at end of file
diff --git a/anisotropicconvolutionalnetworksfor3dsemanticscenecompletion/images.zip b/anisotropicconvolutionalnetworksfor3dsemanticscenecompletion/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..6c3e100787573f8fdac80c783b680cf1fd45d5d8
--- /dev/null
+++ b/anisotropicconvolutionalnetworksfor3dsemanticscenecompletion/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:62cae248856e0ca1b7fda6f3d6386a38ad784300a31249ce0d82e267fe749c91
+size 805368
diff --git a/anisotropicconvolutionalnetworksfor3dsemanticscenecompletion/layout.json b/anisotropicconvolutionalnetworksfor3dsemanticscenecompletion/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..7776ae7dd4d94879cd08c029b1f22437af84d6cd
--- /dev/null
+++ b/anisotropicconvolutionalnetworksfor3dsemanticscenecompletion/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2223fd393fd3dbc10d6c32b0bccf88f53bfbce139c48948404e7ca855bd86718
+size 375980
diff --git a/aowsadaptiveandoptimalnetworkwidthsearchwithlatencyconstraints/58df69a8-d78c-4eff-ae4e-08890fc1b411_content_list.json b/aowsadaptiveandoptimalnetworkwidthsearchwithlatencyconstraints/58df69a8-d78c-4eff-ae4e-08890fc1b411_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..f7e35e8c158ab6be099f37bdbc0f4f7465db54de
--- /dev/null
+++ b/aowsadaptiveandoptimalnetworkwidthsearchwithlatencyconstraints/58df69a8-d78c-4eff-ae4e-08890fc1b411_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:64f14324a09813ab194a29996b10dbb40d0f083f4c9be2caf1ff2cef9ba59913
+size 76574
diff --git a/aowsadaptiveandoptimalnetworkwidthsearchwithlatencyconstraints/58df69a8-d78c-4eff-ae4e-08890fc1b411_model.json b/aowsadaptiveandoptimalnetworkwidthsearchwithlatencyconstraints/58df69a8-d78c-4eff-ae4e-08890fc1b411_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..33478c60e1833df7a2f8a4fbccdf6e0c79699371
--- /dev/null
+++ b/aowsadaptiveandoptimalnetworkwidthsearchwithlatencyconstraints/58df69a8-d78c-4eff-ae4e-08890fc1b411_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f9ce7d998a39c51705f0b1e72db6c209e215e52b7654ddde2681c25f1167dd35
+size 93453
diff --git a/aowsadaptiveandoptimalnetworkwidthsearchwithlatencyconstraints/58df69a8-d78c-4eff-ae4e-08890fc1b411_origin.pdf b/aowsadaptiveandoptimalnetworkwidthsearchwithlatencyconstraints/58df69a8-d78c-4eff-ae4e-08890fc1b411_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c2f7ebf9663146694cd37489a1de3ee8768532cf
--- /dev/null
+++ b/aowsadaptiveandoptimalnetworkwidthsearchwithlatencyconstraints/58df69a8-d78c-4eff-ae4e-08890fc1b411_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:18e8b96edef5f5bfa9826e5d42ba812dc50f9fb11e1574a6eedfab6d01b24c1e
+size 486434
diff --git a/aowsadaptiveandoptimalnetworkwidthsearchwithlatencyconstraints/full.md b/aowsadaptiveandoptimalnetworkwidthsearchwithlatencyconstraints/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..0847f6f9263f81ec185a51ff46bfd984b3df7653
--- /dev/null
+++ b/aowsadaptiveandoptimalnetworkwidthsearchwithlatencyconstraints/full.md
@@ -0,0 +1,326 @@
+# AOWS: Adaptive and optimal network width search with latency constraints
+
+Maxim Berman*1 Leonid Pishchulin2 Ning Xu2 Matthew B. Blaschko1 Gérard Medioni2
+
+$^{1}$ Center for Processing Speech and Images, Department of Electrical Engineering, KU Leuven $^{2}$ Amazon Go
+
+# Abstract
+
+Neural architecture search (NAS) approaches aim at automatically finding novel CNN architectures that fit computational constraints while maintaining a good performance on the target platform. We introduce a novel efficient one-shot NAS approach to optimally search for channel numbers, given latency constraints on a specific hardware. We first show that we can use a black-box approach to estimate a realistic latency model for a specific inference platform, without the need for low-level access to the inference computation. Then, we design a pairwise MRF to score any channel configuration and use dynamic programming to efficiently decode the best performing configuration, yielding an optimal solution for the network width search. Finally, we propose an adaptive channel configuration sampling scheme to gradually specialize the training phase to the target computational constraints. Experiments on ImageNet classification show that our approach can find networks fitting the resource constraints on different target platforms while improving accuracy over the state-of-the-art efficient networks.
+
+# 1. Introduction
+
+Neural networks define the state of the art in computer vision for a wide variety of tasks. Increasingly sophisticated deep learning-based vision algorithms are being deployed on various target platforms, but they must be adapted to the platform-dependent latency/memory requirements and different hardware profiles. This motivates the need for task-aware neural architecture search (NAS) methods [1, 35, 25].
+
+Multiple NAS approaches have been proposed in the literature and successfully applied to image recognition [3, 23, 22, 12, 28, 36] and language modeling tasks [35]. Despite their impressive performance, many of these approaches are prohibitively expensive, requiring the training of thousands of architectures in order to find a best performing model [35, 23, 36, 12, 19]. Some methods therefore try to
+
+
+Figure 1: Overview of OWS applied to a 3-layer neural network. Top: slimmable network; bottom: MRF for optimal selection of channel numbers $c_{1}$ and $c_{2}$ .
+
+dramatically reduce compute overhead by summarizing the entire search space using a single over-parametrized neural network [22, 28]. AutoSlim [31] nests the entire search space (varying channel numbers) in a single slimmable network architecture [33, 32], trained to operate at different channel number configurations at test time.
+
+In this work, we build on the concept of slimmable networks and propose a novel adaptive optimal width search (AOWS) for efficiently searching neural network channel configurations. We make several key contributions. First, we introduce a simple black-box latency modeling method that allows us to estimate a realistic latency model for a specific hardware platform and inference modality, without the need for low-level access to the inference computation. Second, we design an optimal width search (OWS) strategy, using dynamic programming to efficiently decode the best performing channel configuration in a pairwise Markov random field (MRF). We empirically show that considering the entire channel configuration search space results in better NAS solutions compared to a greedy iterative trimming procedure [31]. Third, we propose an adaptive channel configuration sampling scheme. This approach gradually specializes the NAS proxy to our specific target at training time, leading to an improved accuracy-latency trade-off in practice. Finally, we extensively evaluate AOWS on the ImageNet classification task for 3 target platforms and show significant accuracy improvements over state-of-the-art efficient networks.
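+
+As an illustration of the second contribution, the best-scoring configuration of a pairwise chain MRF over per-layer channel choices can be decoded exactly by Viterbi-style dynamic programming, as in the generic sketch below (our own illustration; the actual unary and pairwise scores used by OWS are defined later in the paper):
+
+```python
+import numpy as np
+
+def decode_chain(unary, pairwise):
+    # unary: list of 1D numpy arrays, unary[l][c] scores width choice c at layer l;
+    # pairwise: list of 2D numpy arrays, pairwise[l][c, c'] scores consecutive choices
+    # (c_l, c_{l+1}), e.g. encoding a latency model. Returns the argmax configuration.
+    best = unary[0].astype(float)
+    back = []
+    for l in range(1, len(unary)):
+        scores = best[:, None] + pairwise[l - 1] + unary[l][None, :]
+        back.append(scores.argmax(axis=0))     # best predecessor for each choice at layer l
+        best = scores.max(axis=0)
+    config = [int(best.argmax())]
+    for bp in reversed(back):                  # backtrack from the last layer to the first
+        config.append(int(bp[config[-1]]))
+    return config[::-1]
+```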
+
+Related work. The last years have seen a growing interest in automatic neural architecture search (NAS) methods [1, 35, 25]. Multiple NAS approaches have been proposed and successfully applied to image recognition [3, 23, 22, 12, 28, 36, 19] and language modeling tasks [35]. Pioneering approaches [35, 36] use reinforcement learning to search for novel architectures with lower FLOPs and improved accuracy. MNasNet [23] directly searches network architectures for mobile devices. They sample a few thousand models during architecture search, train each model for a few epochs only and evaluate on a large validation set to quickly estimate potential model accuracy. Many of these approaches require very heavy computations, and therefore resort to proxy tasks (e.g. small number of epochs, smaller datasets, reduced search space) before selecting the top-performing building blocks for further learning on large-scale target tasks [23, 19, 36]. To overcome these limitations, one group of methods directly learns the architectures for large-scale target tasks and target hardware platforms. For instance, [3] assumes a network structure composed of blocks (e.g. MNasNet [23]) and relies on a gradient-based approach, similar to DARTS [13], to search inside each block. Another group of methods intends to dramatically reduce compute overhead by summarizing the entire search space using a single over-parametrized neural network [22, 28, 33, 32]. Single-path NAS [22] uses this principle of nested models and combines search for channel numbers with a search over kernel sizes. However, single-path NAS restricts the search over channel numbers to 2 choices per layer and only optimizes over a subset of the channel numbers of the network, fixing the backbone channel numbers and optimizing only the expansion ratios of the residual branches in architectures such as Mobilenet-v2 [20].
+
+AutoSlim [31] uses a slimmable network architecture [33, 32], which is trained to operate at different channel number configurations, as a model for the performance of a network trained to operate at a single channel configuration. Thus, the entire search space (varying channel numbers) is nested into one unique network. Once the slimmable network is trained, AutoSlim selects the final channel numbers with a greedy iterative trimming procedure, starting from the maximum-channel number configuration, until the resource constraints are met. Our approach is closely related to AutoSlim, as we also build on slimmable networks. In section 3, we further detail these prior works [32, 33, 31], highlighting their similarities and differences with our approach, which we introduce in sections 4 to 6.
+
+# 2. Neural architecture search
+
+We now briefly outline the NAS problem statement. A general NAS problem can be expressed as:
+
+Problem 2.1 (NAS problem). Given a search space $\mathcal{S}$ , a set of resource constraints $\mathcal{C}$ , minimize $\Delta(N)$ for $N \in \mathcal{S} \cap \mathcal{C}$ .
+
+In a supervised learning setting, the error $\Delta(N)$ is typically defined as the error on the validation set after training network $N$ on the training set. In the following, we discuss the choices of the search space $\mathcal{S}$ and of the constraint set $\mathcal{C}$ .
+
+Search space. The hardness of the NAS problem depends on the search space. A neural network in $\mathcal{S}$ can be represented by its computational graph, the types of each node in the graph, and the parameters of each node. More specialized NAS approaches fix the neural network connectivity graph and operations but aim at finding the right parameters for these operations, e.g. kernel sizes, or number of input/output channels (width) per layer in the network. Single-path NAS [22] searches for kernel sizes and channel numbers, while AutoSlim [31] searches for channel numbers only. The restriction of the NAS problem to the search of channel numbers allows for a much more fine-grained search than in more general NAS methods. Furthermore, channel number calibration is essential to the performance of the network and is likely to directly affect the inference time.
+
+Even when searching for channel numbers only, the size of the search space is a challenge: if a network $N$ with $n$ layers is parametrized by its channel numbers $(c_0, \ldots, c_n)$ , where $c_i$ can take values among a set of choices $C_i \subseteq \mathbb{N}$ , the size of the design space
+
+$$
+\mathcal{S}_{\text{channels}} = \left\{ N\left(c_{0}, c_{1}, \dots, c_{n}\right),\ c_{i} \in C_{i} \right\} \tag{1}
+$$
+
+is exponential in the number of layers1. Therefore, efficient methods are needed to explore the search space, e.g. by relying on approximations, proxies, or by representing many elements of the search space using a single network.
+
+Resource constraints. The resource constraints $\mathcal{C}$ in the NAS problem (problem 2.1) are hardware- and inference engine-specific constraints used in the target application. The constraint set $\mathcal{C}$ considered by many NAS approaches is a bound on the number of FLOPs performed during a single inference. While FLOPs can be seen as a metric broadly encompassing the desired physical limitations the inference is subjected to (e.g. latency and power consumption), it has been shown that FLOPs correlate poorly with these end metrics [29]. Therefore, specializing the NAS to a particular inference engine and expressing the resource constraints as a bound on the target platform limitations is of particular interest. This has given rise to more resource-specific NAS approaches, using resource constraints of the form
+
+$$
+\mathcal {C} = \{N | M (N) < M _ {T} \} \tag {2}
+$$
+
+where $M(N)$ is the resource metric and $M_T$ its target. $M(N)$ can represent latency, power consumption constraints, or combinations of these objectives [29, 30]. Given
+
+the size of the search space, NAS often requires evaluating the resource metric on a large number of networks during the course of the optimization. This makes it often impracticable to rely on performance measurements on-hardware during the search. Multiple methods therefore rely on a model, such as a latency model [22], which is learned beforehand and maps a given network $N$ to an expected value of the resource metric $M(N)$ during the on-hardware inference.
+
+# 3. Slimmable networks and AutoSlim
+
+We now briefly review slimmable networks [33] and the AutoSlim [31] approach.
+
+Slimmable networks. Slimmable neural network training [33] is designed to produce models that can be evaluated at various network widths at test time to account for different accuracy-latency trade-offs. At each training iteration $t$ a random channel configuration $c^t = (c_0^t, \dots, c_n^t)$ is selected, where each channel number $c_i$ is picked among a set of choices $C_i$ representing the desired operating channels for layer $i$. This allows the optimization to account for the fact that the number of channels will be selected dynamically at test time. The so-called sandwich rule (where each iteration minimizes the error of the maximum and minimum size networks in addition to a random configuration) and in-place distillation (application of knowledge-distillation [9] between the maximum network and smaller networks) have been further introduced by [32] to improve slimmable network training and increase the accuracy of the resulting networks. Dynamically selecting channel numbers at test time requires re-computing batch normalization statistics. [32] showed that for large batch sizes, these statistics can be estimated using the inference of a single batch, which is equivalent to using the batch normalization layer in training mode at test time.
+
+Channel number search. A slimmable network is used to determine optimized channel number configurations under specified resource constraints. This determination relies on the following assumption:
+
+Assumption 3.1 (Slimmable NAS assumption). The performance of a slimmable network evaluated for a given channel configuration $c \in C_0 \times \ldots \times C_n$ is a good proxy for the performance of a neural network trained in a standard fashion with only this channel configuration.
+
+Given this assumption, AutoSlim proposes a greedy iterative trimming scheme in order to select the end channel configuration from a trained slimmable network. The procedure starts from the maximum channel configuration $c = M$ . At each iteration:
+
+- The performance of channel configuration $c^{\prime k} = (c_{0},\ldots ,c_{k - 1},d,c_{k + 1},\ldots ,c_{n})$, with $d = \max \{c \in C_k \mid c < c_k\}$ the next smaller admissible channel number, is measured on a validation set for all $k \in [1, n-1]$;
+
+- The configuration among $(c^{\prime k})_{k = 1\dots n - 1}$ that least increases the validation error is selected for next iteration.
+
+This trimming is repeated until the resource constraint $M(N(c)) < M_T$ is met. The output of AutoSlim is a channel configuration $c$ that satisfies the resource constraint, which is then trained from scratch on the training set.
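+
+For illustration, the trimming loop above can be written as the following minimal Python sketch. The names `slimmable_eval`, `latency_of`, and `choices` are hypothetical placeholders for the slimmable proxy, the latency model, and the per-layer channel choices; this is a sketch of the described procedure, not AutoSlim's released code.
+
+```python
+def greedy_trim(slimmable_eval, latency_of, choices, target):
+    """Greedy iterative trimming, starting from the maximum-channel config.
+
+    slimmable_eval(config) -> validation error of the slimmable net at config
+    latency_of(config)     -> resource metric M(N(config))
+    choices[i]             -> sorted list of admissible channel numbers for layer i
+    """
+    config = [c[-1] for c in choices]              # start from max channels
+    while latency_of(config) >= target:
+        best_err, best_cfg = None, None
+        for k in range(1, len(config) - 1):        # interior layers only
+            smaller = [c for c in choices[k] if c < config[k]]
+            if not smaller:
+                continue
+            cand = list(config)
+            cand[k] = max(smaller)                 # next smaller channel choice
+            err = slimmable_eval(cand)
+            if best_err is None or err < best_err:
+                best_err, best_cfg = err, cand
+        if best_cfg is None:                       # nothing left to trim
+            break
+        config = best_cfg
+    return config
+```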
+
+Discussion. Reliance on the one-shot slimmable network training makes AutoSlim training very efficient, while channel configuration inference via greedy iterative slimming is also performed efficiently by using only one large batch per tested configuration [31]. The greedy optimization strategy employed by AutoSlim is known to yield approximation guarantees with respect to an optimal solution for resource constrained performance maximization under certain assumptions on the underlying objective, notably submodularity [5]. However, in practice, optimization of a slimmable network configuration does not satisfy submodularity or related conditions, and the employment of an iterative greedy algorithm is heuristic.
+
+In this work we also build on the ideas of slimmable network training. However, in contrast to AutoSlim, we show that better NAS solutions can be found by employing a non-greedy optimization scheme that considers the entire channel configuration search space and efficiently selects a single channel configuration meeting the resource requirements. This is achieved through the use of a Lagrangian relaxation of the NAS problem, statistic aggregation during training, and Viterbi decoding (section 5). Selecting an optimal channel configuration under the available compute constraints requires a precise hardware-specific latency model. Thus in section 4 we propose an accurate and simple black-box latency estimation approach that allows us to obtain a realistic hardware-specific latency model without the need for low-level access to the inference computation. Finally, we propose a biased path sampling scheme to progressively reduce the search space at training time, allowing a gradual specialization of the training phase to fit the target computational constraints. Our dynamic approach (section 6) specializes the NAS proxy to our specific target and leads to improved accuracy-latency trade-offs in practice.
+
+# 4. Black-box latency model for network width search
+
+We propose a latency model suited to the quick evaluation of the latency of a network $L(N)$ with varying channel numbers, which we use in our method. While other works have designed latency models [6, 22, 26], creating an accurate model for the fine-grained channel number choices allowed by our method is challenging. In theory, the FLOPs of a convolutional layer scale as
+
+$$
+c_{\text{in}}\, c_{\text{out}}\, W H k^{2} / s^{2}, \tag{3}
+$$
+
+where $c_{\mathrm{in}}$, $c_{\mathrm{out}}$ are input and output channel numbers, $(W, H)$ are the input spatial dimensions, $k$ is the kernel size and $s$ the stride. However, the dependency of the latency measured in practice on the number of FLOPs is highly non-linear. This can be explained by various factors: (i) parallelization of the operations makes the latency dependent on external factors, such as the number of threads fitting on a device for given parameters; (ii) caching and memory allocation mechanisms are functions of the input and output shapes; (iii) implementations of the operators in various inference libraries such as CuDNN or TensorRT are tuned towards a particular choice of channel numbers.
+
+Rather than attempting to model the low-level phenomena that govern the dependency between the channel numbers and the inference time, we use a look-up table modelling the latency of each layer in the network as a function of the channel numbers. For each layer $i = 0 \dots n - 1$, we encode as $\Theta_{i}$ the layer parameters that are likely to have an impact on the layer latency. In the case of the mobilenet-v1 network used in our experiments, we used $\Theta_{i} = (H,W,s,k,dw)$, where $H \times W$ is the layer input size, $s$ its stride, $k$ its kernel size and $dw \in \{0,1\}$ is an indicator of the layer type: fully convolutional, or pointwise + depthwise convolutional. We assume that the latency can be written as a sum over layers
+
+$$
+L \left(N \left(c _ {0}, \dots , c _ {n}\right)\right) = \sum_ {i = 0} ^ {n - 1} L _ {\Theta_ {i}} \left(c _ {i}, c _ {i + 1}\right), \tag {4}
+$$
+
+where each layer's latency depends on the input and output channel numbers $c_{i}$ , $c_{i + 1}$ as well as the fixed parameters $\Theta_{i}$ .
+
+Populating each element $L_{\Theta_i}(c_i, c_j)$ in the lookup table is non-trivial. The goal is to measure the contribution of each individual layer to the global latency of the network. However, the measure of the inference latency of one layer in isolation includes a memory allocation and CPU communication overhead that is not necessarily present once the layer is inserted in the network. Indeed, memory buffers allocated on the device are often reused across different layers.
+
+We therefore profile entire networks, rather than profiling individual layers in isolation. We measure the latency of a set of $p$ channel configurations $(c^{1} \dots c^{p})$ such that each individual layer configuration in our search space
+
+$$
+\left\{ L_{\Theta_{i}}\left(c_{i}, c_{i+1}\right),\ i \in [0, n-1],\ c_{i} \in C_{i},\ c_{i+1} \in C_{i+1} \right\} \tag{5}
+$$
+
+is sampled at least once. This sampling can be done uniformly among channel configurations, or biased towards unseen layer configurations using dynamic programming, as detailed in supplementary A. As a result, we obtain a set of measured latencies $(L(N(c^{j})) = l_{j})_{j = 1\dots p}$, which by eq. (4) yield a linear system in the variables of our latency model $L_{\Theta_i}(c_i,c_{i + 1})$
+
+$$
+\sum_{i=0}^{n-1} L_{\Theta_{i}}\left(c_{i}^{j}, c_{i+1}^{j}\right) = l_{j} \quad \forall j = 1 \dots p. \tag{6}
+$$
+
+This system can be summarized as $Ax = l$ where $A$ is a sparse matrix encoding the profiled configurations, $l$ is the corresponding vector of measured latencies and $x$ contains all the variables in our latency model (i.e. the individual layer latencies in eq. (5)). We solve the linear system using least-squares to obtain the desired individual layer latencies.
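+
+As an illustration, assembling and solving this system could look like the following numpy sketch. The `var_index` mapping and the function name are assumptions made for exposition, not code from the paper.
+
+```python
+import numpy as np
+
+def fit_layer_latencies(profiled_configs, measured_latencies, var_index):
+    """Least-squares fit of the per-layer latency table (eq. (6)).
+
+    profiled_configs:   list of channel configurations (c_0, ..., c_n)
+    measured_latencies: list of end-to-end latencies, one per configuration
+    var_index:          dict mapping (layer i, c_i, c_{i+1}) to the column of
+                        the corresponding variable L_{Theta_i}(c_i, c_{i+1})
+    Returns the fitted vector x of individual layer latencies.
+    """
+    A = np.zeros((len(profiled_configs), len(var_index)))
+    for row, c in enumerate(profiled_configs):
+        for i in range(len(c) - 1):
+            A[row, var_index[(i, c[i], c[i + 1])]] = 1.0
+    l = np.asarray(measured_latencies, dtype=float)
+    x, *_ = np.linalg.lstsq(A, l, rcond=None)
+    return x
+```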
+
+We have found that this "black-box" approach results in a very accurate latency model for the search of channel numbers. The method is framework-agnostic and does not depend on the availability of low-level profilers on the inference platform. Moreover, access to a low-level profiler would still require solving the problem of assigning the memory allocation and transfers to the correct layer in the network. Our approach deals with this question automatically, and optimally assigns these overheads in order to best satisfy the assumed latency model of eq. (4).
+
+The solution to the linear system in eq. (6) can be slightly improved by adding monotonicity priors, enforcing inequalities of the form $L_{\Theta_i}(c_i, c_k) \leq L_{\Theta_i}(c_j, c_k)$ if $c_i < c_j$ and $L_{\Theta_i}(c_i, c_k) \leq L_{\Theta_i}(c_i, c_l)$ if $c_k < c_l$, as one expects the latency to be increasing in the number of input/output channels of the layer. Similar inequalities can be written between configurations with differing input sizes. It is straightforward to write all these inequalities as $Vx \leq 0$, where $V$ is a sparse matrix, and to add them to the least-squares problem. Rather than enforcing these inequalities in a hard way, we found it best to use a soft prior, which translates into
+
+$$
+\min _ {\boldsymbol {x}} \| \boldsymbol {A} \boldsymbol {x} - \boldsymbol {l} \| ^ {2} + \lambda \| \max (\boldsymbol {V} \boldsymbol {x}, \boldsymbol {0}) \| _ {1}, \tag {7}
+$$
+
+where the weighting parameter $\lambda$ is set using a validation set; this minimization can be solved efficiently using a second-order cone program solver [4, 17].
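+
+A minimal CVXPY sketch of eq. (7) is given below, assuming the matrices $A$, $l$ and $V$ have been assembled as described above; the exact solver settings used by the authors are not specified here, so this is only an illustration of the soft prior.
+
+```python
+import cvxpy as cp
+
+def fit_with_monotonicity_prior(A, l, V, lam):
+    """Least squares with a soft monotonicity prior (eq. (7)).
+
+    Each row of V encodes one inequality of the form
+    L(smaller config) - L(larger config) <= 0.
+    """
+    x = cp.Variable(A.shape[1])
+    # ||max(Vx, 0)||_1 equals the sum of the positive parts of Vx
+    objective = cp.sum_squares(A @ x - l) + lam * cp.sum(cp.pos(V @ x))
+    cp.Problem(cp.Minimize(objective)).solve(solver=cp.SCS)
+    return x.value
+```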
+
+# 5. Optimal width search (OWS) via Viterbi inference
+
+For the special case of optimizing the number of channels under a latency constraint, the NAS problem 2.1 can be written as
+
+$$
+\min_{\boldsymbol{c} \in C_{0} \times \dots \times C_{n}} \Delta(N(\boldsymbol{c})) \quad \text{s.t.} \quad L(N(\boldsymbol{c})) < L_{T} \tag{8}
+$$
+
+with $L_{T}$ our latency target. We consider the following Lagrangian relaxation of the problem:
+
+$$
+\max _ {\gamma} \min _ {\boldsymbol {c}} \Delta (N (\boldsymbol {c})) + \gamma (L (N (\boldsymbol {c})) - L _ {T}) \tag {9}
+$$
+
+with $\gamma$ a Lagrange multiplier, similar to the formulation proposed by [21] for network compression. If the subproblems
+
+$$
+\min _ {\boldsymbol {c}} \Delta (N (\boldsymbol {c})) + \gamma L (N (\boldsymbol {c})) \tag {10}
+$$
+
+can be solved efficiently, the maximization in eq. (9) can be solved by binary search over $\gamma$ by using the fact that the objective is concave in $\gamma$ [2, prop. 5.1.2]. This corresponds
+
+to setting the runtime penalty in eq. (10) high enough that the constraint is satisfied but no higher.
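+
+The binary search over $\gamma$ can be sketched as follows; `solve_inner` stands for a solver of eq. (10) (e.g. the Viterbi decoding described below) and `latency_of` for the latency model, both hypothetical names introduced here for illustration.
+
+```python
+def search_multiplier(solve_inner, latency_of, target, lo=0.0, hi=1.0, iters=30):
+    """Binary search for the smallest gamma whose minimizer meets the target,
+    assuming the latency of the minimizer is non-increasing in gamma."""
+    while latency_of(solve_inner(hi)) > target:    # grow hi until feasible
+        hi *= 2.0
+    for _ in range(iters):
+        mid = 0.5 * (lo + hi)
+        if latency_of(solve_inner(mid)) > target:
+            lo = mid        # penalty too small: constraint still violated
+        else:
+            hi = mid        # constraint satisfied: try a smaller penalty
+    return solve_inner(hi)
+```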
+
+Our key idea to ensure that eq. (10) can be solved efficiently is to find an estimate of the error of a network that decomposes over the individual channel choices as $\Delta(N(c)) \approx \sum_{i=1}^{n-1} \delta_i(c_i)$; indeed, given that our latency model decomposes over pairs of successive layers (eq. (4)), this form allows us to write eq. (10) as
+
+$$
+\min _ {c} \sum_ {i = 1} ^ {n - 1} \delta_ {i} \left(c _ {i}\right) + \gamma \sum_ {i = 0} ^ {n - 1} L _ {\Theta_ {i}} \left(c _ {i}, c _ {i + 1}\right), \tag {11}
+$$
+
+which is solved efficiently by the Viterbi algorithm [27] applied to the pairwise MRF illustrated in fig. 1.
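+
+A minimal dynamic-programming sketch of this decoding is shown below. The array layout and function name are assumptions for illustration; boundary layers without an error estimate can simply be given zero unaries.
+
+```python
+import numpy as np
+
+def viterbi_width_search(unary, pairwise, gamma):
+    """Minimize sum_i unary[i][c_i] + gamma * sum_i pairwise[i][c_i, c_{i+1}]
+    over a chain of channel choices (eq. (11)).
+
+    unary:    list of n+1 vectors; unary[i][k] is delta_i for the k-th choice
+    pairwise: list of n matrices; pairwise[i][k, l] is L_{Theta_i}(choice k, choice l)
+    Returns the optimal choice indices and the optimal energy.
+    """
+    n = len(pairwise)
+    cost = unary[0].astype(float)
+    backptr = []
+    for i in range(n):
+        # total[k, l]: best cost of a path ending with edge (c_i = k, c_{i+1} = l)
+        total = cost[:, None] + gamma * pairwise[i]
+        backptr.append(np.argmin(total, axis=0))
+        cost = np.min(total, axis=0) + unary[i + 1]
+    best = [int(np.argmin(cost))]
+    for bp in reversed(backptr):            # backtrack
+        best.append(int(bp[best[-1]]))
+    return best[::-1], float(np.min(cost))
+```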
+
+We leverage this efficient selection algorithm in a procedure that we detail in the remainder of this section. As in section 3, we train a slimmable network. In order to ensure faster exploration of the search space, rather than sampling one unique channel configuration per training batch, we sample a different channel configuration separately for each element in the batch. This can be implemented efficiently at each layer $i$ by first computing the "max-channel" output for all elements in the batch, before zeroing-out the channels above the sampled channel numbers for each individual element. This batched computation is in aggregate faster than a separate computation for each element.
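+
+A possible PyTorch-style sketch of this per-example channel masking is shown below; `slice_channels` is an illustrative name, not the authors' implementation.
+
+```python
+import torch
+
+def slice_channels(x, sampled_channels):
+    """Zero out the channels above the per-example sampled channel numbers.
+
+    x:                 (B, C_max, H, W) layer output computed at full width
+    sampled_channels:  (B,) tensor with the sampled channel number per example
+    """
+    B, C, _, _ = x.shape
+    idx = torch.arange(C, device=x.device).view(1, C, 1, 1)
+    mask = (idx < sampled_channels.view(B, 1, 1, 1)).to(x.dtype)
+    return x * mask
+```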
+
+For each training example $\pmb{x}^{(t)}$ , a random configuration $\pmb{c}^{(t)}$ is sampled, yielding a loss $\ell(\pmb{x}^{(t)}, \pmb{c}^{(t)})$ ; we also retain the value of the loss corresponding to the maximum channel configuration $\ell(\pmb{x}^{(t)}, \pmb{M})$ – available due to sandwich rule training (section 3). For each $i = 1 \dots n - 1$ , we consider all training iterations $T_{i}(c_{i}) = \{t \mid c_{i}^{(t)} = c_{i}\} \subseteq \mathbb{N}$ where a particular channel number $c_{i} \in C_{i}$ was used. We then define
+
+$$
+\delta_{i}\left(c_{i}\right) = \frac{1}{\left|T_{i}\left(c_{i}\right)\right|} \sum_{t \in T_{i}\left(c_{i}\right)} \left[ \ell\left(\boldsymbol{x}^{(t)}, \boldsymbol{c}^{(t)}\right) - \ell\left(\boldsymbol{x}^{(t)}, \boldsymbol{M}\right) \right] \tag{12}
+$$
+
+as the per-channel error rates in eq. (11). Measuring the loss relative to the maximum configuration loss follows the intuition that good channel numbers lead to lower losses on average. Empirically, we found that computing the average in eq. (12) over the last training epoch yields good results.
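+
+One way to maintain these statistics during training is a simple running-average accumulator, sketched below; the class and method names are hypothetical.
+
+```python
+from collections import defaultdict
+
+class ChannelErrorStats:
+    """Running averages of eq. (12): per layer and per channel choice, the mean
+    of loss(x, c) - loss(x, M) over the iterations where that choice was used."""
+
+    def __init__(self, n_layers):
+        self.sums = [defaultdict(float) for _ in range(n_layers)]
+        self.counts = [defaultdict(int) for _ in range(n_layers)]
+
+    def update(self, sampled_config, loss_sampled, loss_max):
+        excess = loss_sampled - loss_max
+        for i, c_i in enumerate(sampled_config):
+            self.sums[i][c_i] += excess
+            self.counts[i][c_i] += 1
+
+    def delta(self, layer, c_i):
+        count = self.counts[layer][c_i]
+        return self.sums[layer][c_i] / count if count else 0.0
+```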
+
+Equation (11) is designed for efficient inference by neglecting the interaction between the channel numbers of different layers. We show in our experiments (section 7) that this trade-off between inference speed and modeling accuracy compares favorably to the greedy optimization strategy described in section 3. On the one hand, the number of training iterations considered in eq. (12) is sufficient to ensure that the per-channel error rates are well estimated. Approaches that would consider higher-order interactions between channel numbers would require an exponentially higher number of iterations to achieve estimates with the same level of statistical accuracy. On the other hand, this decomposition
+
+
+Figure 2: Min-sum relaxation at temperatures $T = 0.1$ (left) and $T = 0.001$ (right). Red: min-sum path. Colored: marginal sampling probabilities. The sampled configurations approach the min-sum path as $T \to 0$ .
+
+allows an exhaustive search over channel configurations to be performed using the Viterbi algorithm (eq. (11)). We have observed that this selection step takes a fraction of a second and does not get stuck in local optima, as can happen with a greedy approach. The greedy approach, by contrast, took hours to complete.
+
+# 6. Adaptive refinement of Optimal Width Search (AOWS)
+
+We have seen in section 5 how layerwise modeling and Viterbi inference allow for an efficient global search over configurations. In this section, we describe how this efficient selection procedure can be leveraged in order to refine the training of the slimmable network, thereby making assumption 3.1 more likely to hold.
+
+Our strategy for adaptive refinement of the training procedure stems from the following observation: during the training of the slimmable model, by sampling uniformly over the channel configurations we visit many configurations that have a latency greater than our objective $L_{T}$, or that perform poorly according to our current channel estimates $\delta_{i}(c_{i})$. As the training progresses, the sampling of the channel configurations should be concentrated around the region of interest in the NAS search space.
+
+In order to refine the sampling around the solutions close to the minimum of eq. (11), we relax the Viterbi algorithm (min-sum) using a differentiable dynamic programming procedure described in [16]. This strategy relaxes the minimization in eq. (11) into a smoothed minimization, which we compute by replacing the min operation by a log-sum-exp operation in the Viterbi forward pass. The messages sent from variable $c_{i}$ to variable $c_{i + 1}$ become
+
+$$
+m\left(c_{i+1}\right) = \log \sum_{c_{i}} \exp\left(-\frac{1}{T}\left(m\left(c_{i}\right) + \delta_{i}\left(c_{i+1}\right) + \gamma L_{\Theta_{i}}\left(c_{i}, c_{i+1}\right)\right)\right), \tag{13}
+$$
+
+where $T$ is a temperature parameter that controls the smoothness of the relaxation. The forward-backward pass of the relaxed min-sum algorithm yields log-marginal probabilities $\log p_i(c_i)$ for each layer whose mass is concentrated close
+
+to configurations minimizing eq. (11). For $T = 1$ , these correspond to the marginal probabilities of the pairwise CRF defined by the energy of eq. (11). In the limit $T \rightarrow 0$ , the probabilities become Dirac distributions corresponding to the MAP inference of the CRF as computed by the Viterbi algorithm (fig. 2).
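+
+A sketch of this relaxed forward-backward pass is given below, reusing the chain layout of the Viterbi sketch in section 5; it illustrates eq. (13) under those assumptions and is not the released implementation.
+
+```python
+import numpy as np
+from scipy.special import logsumexp
+
+def softmin(values, T, axis):
+    # smoothed minimum: -T * log sum exp(-values / T)
+    return -T * logsumexp(-values / T, axis=axis)
+
+def relaxed_marginals(unary, pairwise, gamma, T):
+    """Per-layer log-marginals of the relaxed min-sum algorithm."""
+    n = len(pairwise)
+    alpha = [unary[0].astype(float)]                        # forward messages
+    for i in range(n):
+        msg = softmin(alpha[i][:, None] + gamma * pairwise[i], T, axis=0)
+        alpha.append(msg + unary[i + 1])
+    beta = [np.zeros(len(unary[i])) for i in range(n + 1)]  # backward messages
+    for i in reversed(range(n)):
+        future = (unary[i + 1] + beta[i + 1])[None, :]
+        beta[i] = softmin(gamma * pairwise[i] + future, T, axis=1)
+    log_p = []
+    for a, b in zip(alpha, beta):
+        logits = -(a + b) / T
+        log_p.append(logits - logsumexp(logits))            # normalize per layer
+    return log_p
+```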
+
+We introduce the following dynamic training procedure. First, we train a slimmable network for some warmup epochs, using uniform sampling of the configurations as in section 3. We then turn to a biased sampling scheme. We initially set $T = 1$. At each iteration, we perform the following steps (a short sketch of this loop is given after the list):
+
+1. sample batch configurations according to the marginal probabilities $p_i(c_i)$ ,
+2. do a training step of the network,
+3. update the unary statistics (eq. (12)),
+4. decrease $T$ according to an annealing schedule.
+
+This scheme progressively favours configurations that are close to minimizing eq. (11). This reduction in diversity of the channel configurations ensures that:
+
+- training of the slimmable model comes closer to the training of a single model, thereby making assumption 3.1 more likely to hold;
+- per-channel error rates (eq. (12)) are averaged only over relevant configurations, thereby enforcing an implicit coupling between the channel numbers of different layers in the network.
+
+Our experiments highlight how this joint effect leads to channel configurations with a better accuracy/latency trade-off.
+
+# 7. Experiments
+
+Experimental setting. We focus on the optimization of the channel numbers of MobileNet-v1 [11]. The network has 14 different layers with adjustable width. We consider up to 14 channel choices for each layer $i$ , equally distributed between $20\%$ and $150\%$ of the channels of the original network. These numbers are rounded to the nearest multiple of 8, with a minimum of 8 channels. We train AOWS and OWS models for 20 epochs with batches of size 512 and a constant learning rate 0.05. For the AOWS versions, after 5 warmup epochs (with uniform sampling), we decrease the temperature following a piece-wise exponential schedule detailed in supplementary B. We train the selected configurations with a training schedule of 200 epochs, batch size 2048, and the training tricks described in [8], including cosine annealing [14].
+
+# 7.1. TensorRT latency target
+
+We first study the optimization of MobileNet-v1 under TensorRT (TRT) $^2$ inference on an NVIDIA V100 GPU. Table 1 motivates this choice by underlining the speedup allowed
+
+Table 1: Run-time vs. accuracy comparison for timings obtained with batch size 64. The GPU+TRT and speedup columns list the latencies and speedups allowed by TensorRT. AOWS is obtained with Mobilenet-v1/TensorRT optimization with $L_{T} = 0.04\mathrm{ms}$, and a longer training schedule of 480 epochs.
+
+| Method | GPU ms/fr | GPU+TRT ms/fr | speedup | Top-1 Error (%) |
+| --- | --- | --- | --- | --- |
+| AOWS | 0.18 | 0.04 | 4.5x | 27.5 |
+| AutoSlim [31] | 0.15 | 0.04 | 3.75x | 28.5 |
+| Mobilenet-v1 [11] | 0.25 | 0.05 | 5x | 29.1 |
+| Shufflenet-v2 [15] | 0.13 | 0.07 | 1.9x | 30.6 |
+| MNasNet [23] | 0.26 | 0.07 | 3.7x | 26.0 |
+| SinglePath-NAS [22] | 0.28 | 0.07 | 4.0x | 25.0 |
+| ResNet-18 [7] | 0.25 | 0.08 | 3.1x | 30.4 |
+| FBNet-C [28] | 0.32 | 0.09 | 3.6x | 25.1 |
+| Mobilenet-v2 [20] | 0.28 | 0.10 | 2.8x | 28.2 |
+| Shufflenet-v1 [34] | 0.21 | 0.10 | 2.1x | 32.6 |
+| ProxylessNAS-G [3] | 0.31 | 0.12 | 2.7x | 24.9 |
+| DARTS [13] | 0.36 | 0.16 | 2.3x | 26.7 |
+| ResNet-50 [7] | 0.83 | 0.19 | 4.3x | 23.9 |
+| Mobilenet-v3-large [10] | 0.30 | 0.20 | 1.5x | 24.8 |
+| NASNet-A* [35] | 0.60 | - | - | 26.0 |
+| EfficientNet-b0 [24] | 0.59 | 0.47 | 1.3x | 23.7 |
+
+* TRT inference failed due to loops in the underlying graph
+
+
+Figure 3: Measured vs. predicted latency of 200 randomly sampled networks in our search space for the TensorRT latency model, trained using 9500 inference samples.
+
+by TRT inference, compared to vanilla GPU inference under the MXNet framework. While the acceleration makes TRT attractive for production environments, we see that it does not apply uniformly across architectures, varying between 1.3x for EfficientNet-b0 and 5x for mobilenet-v1.
+
+Latency model. Figure 3 visualizes the precision of our latency model as described in section 4 for 200 randomly sampled configurations in our search space, and shows that our pairwise decomposable model adequately predicts the inference time on the target platform.
+
+Proxy comparison. Figure 4 shows the correlation between the error predictor and the observed errors for several
+
+
+Figure 4: Comparison of the slimmable proxy used by greedy validation (left) and our error estimates used in OWS (right). The proxy errors of 13 networks in our search space are compared to the final error after full training of these configurations.
+
+networks in our search space. The slimmable proxy used in AutoSlim uses the validation errors of specific configurations in the slimmable model. OWS uses the simple layerwise error model of eq. (12). We see that both models have good correlation with the final error. However, the slimmable proxy requires a greedy selection procedure, while the layer-wise error model leads to an efficient and global selection.
+
+Optimization results. We set the TRT runtime target $L_{T} = 0.04\mathrm{ms}$, chosen as the reference runtime of AutoSlim mobilenet-v1. Table 2 gives the final top-1 errors obtained by the configurations selected by the different algorithms. greedy reproduces AutoSlim's greedy selection procedure with this TRT latency target on the slimmable proxy (section 3). OWS substitutes the greedy procedure with the global selection algorithm based on the per-channel error estimates (eq. (11)). Finally, AOWS uses the adaptive path sampling procedure (section 6). Figure 5 illustrates the differences between the found configurations (which are detailed in supplementary D). As in [31], we observe that the configurations generally have more weights at the end of the network, and fewer at the beginning, compared to the original mobilenet-v1 architecture [11].
+
+Despite the simplicity of the per-channel error rates, we see that OWS finds a better configuration than greedy on the same slimmable model. This indicates that greedy selection can fall into local optima and miss more advantageous global channel configurations. The AOWS approach uses the Viterbi selection but adds an adaptive refinement of the slimmable model during training, which leads to superior final accuracy.
+
+Table 1 compares the network found by AOWS with architectures found by other NAS approaches. The proposed AOWS reaches the lowest latency on-par with AutoSlim [31], while reducing the Top-1 image classification error by $1\%$ . This underlines the importance of the proposed platform-specific latency model, and the merits of our algorithm.
+
+AOWS training epochs. One important training hyperparameter is the number of training epochs of AOWS. Table 3 shows that training for 10 epochs leads to a suboptimal
+
+Table 2: Accuracies and latencies of channel configurations found for TRT optimization with $L_{T} = 0.04\mathrm{ms}$ .
+
+| Method | ms/fr | Top-1 error (%) |
+| --- | --- | --- |
+| greedy | 0.04 | 29.3 |
+| OWS | 0.04 | 28.2 |
+| AOWS | 0.04 | 27.8 |
+
+
+Figure 5: Channel configurations found, as a ratio with respect to the original channels of MobileNet-v1.
+
+model; however, the results at epoch 30 are on par with the results at epoch 20, which motivates our choice of reporting results at epoch 20.
+
+Table 3: Effect of the number of epochs when training AOWS, for TRT optimization under $L_{T} = 0.04\mathrm{ms}$ .
+
+| Epochs | 10 | 20 | 30 |
+| --- | --- | --- | --- |
+| Top-1 error (%) | 28.1 | 27.8 | 27.9 |
+
+# 7.2. FLOPs, CPU and GPU targets
+
+We experiment further with the application of AOWS to three different target constraints. First, we experiment with a FLOPs objective. The expression of the FLOPs decomposes over pairs of successive channel numbers, and can therefore be written analytically as a special case of our latency model (section 4). Table 4 gives the FLOPs and top-1 errors obtained after end-to-end training of the found configurations. We note that the final accuracies obtained by AOWS are on par with or better than those of the reproduced AutoSlim variant (greedy). AutoSlim [31] lists better accuracies in the 150 and 325 MFLOPs regimes; we attribute this to a different choice of search space (channel choices) and training hyperparameters, which were not made public; another factor is their use of a 480-epoch training schedule, while we limit ours to 200 epochs here.
+
+We turn to realistic latency constraints, considering CPU inference on an Intel Xeon CPU with batches of size 1, and GPU inference on an NVIDIA V100 GPU with batches of
+
+Table 4: Optimizing for FLOPs
+
+| Variant | MFLOPs | Top-1 error (%) |
+| --- | --- | --- |
+| AutoSlim [31] | 150 | 32.1 |
+| greedy150 | 150 | 35.8 |
+| AOWS | 150 | 35.9 |
+| AutoSlim [31] | 325 | 28.5 |
+| greedy325 | 325 | 31.0 |
+| AOWS | 325 | 29.7 |
+| AutoSlim [31] | 572 | 27.0 |
+| greedy572 | 572 | 27.6 |
+| AOWS | 572 | 26.7 |
+
+
+Figure 6: Pareto fronts of greedy vs. AOWS optimized for the (a) CPU and (b) GPU latency models.
+
+size 16, under PyTorch [18].$^3$ Tables 5 and 6 show the results for 3 latency targets, together with the resulting measured latencies. We also report the measured latencies of the three greedy configurations from table 4. Comparing the accuracy/latency trade-off curves in fig. 6, it is clear that AOWS leads to better solutions than greedy; in general, we consistently find models that are both faster and more accurate.
+
+We observe that the gains of AOWS over greedy are more consistent than in the case of the FLOPs optimization (table 4). We note that the analytical FLOPs objective varies more regularly with the channel configurations, and therefore presents fewer local optima, than empirical latencies measured on-device. This might explain why the greedy approach is better at finding appropriate configurations under the FLOPs model than under realistic latency models.
+
+# 8. Conclusion
+
+Efficiently searching for novel network architectures while optimizing accuracy under latency constraints on a
+
+Table 5: Optimizing for CPU latency (@ indicates the latency targets)
+
+| Variant | ms/fr | Top-1 error (%) |
+| --- | --- | --- |
+| AOWS @ 15ms | 13.8 | 33.8 |
+| AOWS @ 20ms | 18.2 | 30.3 |
+| AOWS @ 30ms | 27.7 | 27.3 |
+| greedy150 | 14.5 | 35.8 |
+| greedy325 | 22.4 | 31.0 |
+| greedy572 | 34.0 | 27.6 |
+
+Table 6: Optimizing for GPU latency (@ indicates the latency targets)
+
+| Variant | ms/fr | Top-1 error (%) |
+| --- | --- | --- |
+| AOWS @ 2.2ms | 2.25 | 28.5 |
+| AOWS @ 2.4ms | 2.34 | 27.7 |
+| AOWS @ 2.6ms | 2.57 | 27.2 |
+| greedy150 | 2.08 | 35.8 |
+| greedy325 | 2.22 | 31.0 |
+| greedy572 | 2.94 | 27.6 |
+
+target platform and task is of high interest for the computer vision community. In this paper we propose a novel efficient one-shot NAS approach to optimally search CNN channel numbers, given latency constraints on a specific hardware. To this end, we first design a simple but effective black-box latency estimation approach to obtain a precise latency model for a specific hardware and inference modality, without the need for low-level access to the inference computation. Then, we introduce a pairwise MRF framework to score any network channel configuration and use the Viterbi algorithm to efficiently search for the optimal solution in the exponential space of possible channel configurations. Finally, we propose an adaptive channel configuration sampling strategy to progressively steer the training towards finding novel configurations that fit the target computational constraints. Experiments on the ImageNet classification task demonstrate that our approach can find networks fitting the resource constraints on different target platforms while improving accuracy over the state-of-the-art efficient networks. Code and trained models have been released at http://github.com/bermanmaxim/AOWS.
+
+Acknowledgements. We thank Kellen Sunderland and Haohuan Wang for help with setting up and benchmarking TensorRT inference, and Jayan Eledath for useful discussions. M. Berman and M. B. Blaschko acknowledge support from the Research Foundation - Flanders (FWO) through project numbers G0A2716N and G0A1319N, and funding from the Flemish Government under the Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen programme.
+
+# References
+
+[1] Peter J. Angeline, Gregory M. Saunders, and Jordan B. Pollack. An evolutionary algorithm that constructs recurrent neural networks. IEEE transactions on neural networks, 5(1):54-65, 1994. 1, 2
+[2] Dimitri P Bertsekas. Nonlinear programming. Athena Scientific, 2nd edition, 1995. 4
+[3] Han Cai, Ligeng Zhu, and Song Han. ProxylessNAS: Direct neural architecture search on target task and hardware. In ICLR, 2019. 1, 2, 6
+[4] Steven Diamond and Stephen Boyd. CVXPY: A Python-embedded modeling language for convex optimization. Journal of Machine Learning Research, 17(83):1-5, 2016. 4
+[5] Satoru Fujishige. Submodular Functions and Optimization. Elsevier, 2005. 3
+[6] Jussi Hanhirova, Teemu Kämäräinen, Sipi Seppälä, Matti Siekkinen, Vesa Hirvisalo, and Antti Ylä-Jääski. Latency and throughput characterization of convolutional neural networks for mobile computer vision. In Proceedings of the 9th ACM Multimedia Systems Conference, pages 204-215, 2018. 3
+[7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770-778, 06 2016. 6
+[8] Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, and Mu Li. Bag of tricks for image classification with convolutional neural networks. ArXiv, abs/1812.01187, 2018. 6
+[9] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. 3
+[10] Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, Quoc V. Le, and Hartwig Adam. Searching for mobilenetv3. In The IEEE International Conference on Computer Vision (ICCV), October 2019. 6
+[11] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. ArXiv, abs/1704.04861, 2017. 6, 7, II
+[12] Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive neural architecture search. In The European Conference on Computer Vision (ECCV), September 2018. 1, 2
+[13] Hanxiao Liu, Karen Simonyan, and Yiming Yang. DARTS: Differentiable architecture search. In International Conference on Learning Representations, 2019. 2, 6
+[14] Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. ArXiv, abs/1608.03983, 2016. 6
+[15] Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. Shufflenet v2: Practical guidelines for efficient cnn architecture design. In The European Conference on Computer Vision (ECCV), September 2018. 6
+[16] Arthur Mensch and Mathieu Blondel. Differentiable dynamic programming for structured prediction and attention. In ICML, 2018. 5
+
+[17] B. O'Donoghue, E. Chu, N. Parikh, and S. Boyd. Conic optimization via operator splitting and homogeneous self-dual embedding. Journal of Optimization Theory and Applications, 169(3):1042-1068, June 2016. 4
+[18] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. In NIPS-W, 2017. 8
+[19] Hieu Pham, Melody Y. Guan, Barret Zoph, Quoc V. Le, and Jeff Dean. Efficient neural architecture search via parameter sharing. International Conference on Machine Learning, 2018. 1, 2
+[20] Mark Sandler, Andrew G. Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4510-4520, 2018. 2, 6
+[21] Shivangi Srivastava, Maxim Berman, Matthew B. Blaschko, and Devis Tuia. Adaptive compression-based lifelong learning. In Proceedings of the British Machine Vision Conference (BMVC), 2019. 4
+[22] Dimitrios Stamoulis, Ruizhou Ding, Di Wang, Dimitrios Lymberopoulos, Bodhi Priyantha, Jie Liu, and Diana Marculescu. Single-path nas: Device-aware efficient convnet design. ArXiv, abs/1905.04159, 2019. 1, 2, 3, 6
+[23] Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V. Le. MnasNet: Platform-aware neural architecture search for mobile. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019. 1, 2, 6
+[24] Mingxing Tan and Quoc V. Le. Efficientnet: Rethinking model scaling for convolutional neural networks. In ICML, 2019. 6
+[25] Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural architecture search: A survey. ArXiv, abs/1808.05377, 2018. 1, 2
+[26] Stylianos I. Venieris and Christos-Savvas Bouganis. Latency-driven design for FPGA-based convolutional neural networks. In 2017 27th International Conference on Field Programmable Logic and Applications (FPL), 2017. 3
+[27] Andrew Viterbi. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE transactions on Information Theory, 13(2):260-269, 1967. 5
+[28] Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, and Kurt Keutzer. Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019. 1, 2, 6
+[29] Tien-Ju Yang, Yu-Hsin Chen, and Vivienne Sze. Designing energy-efficient convolutional neural networks using energy-aware pruning. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6071-6079, 2016. 2
+[30] Tien-Ju Yang, Andrew Howard, Bo Chen, Xiao Zhang, Alec Go, Mark Sandler, Vivienne Sze, and Hartwig Adam. Netadapt: Platform-aware neural network adaptation for mobile applications. In ECCV, 2018. 2
+
+[31] Jiahui Yu and Thomas Huang. AutoSlim: Towards One-Shot Architecture Search for Channel Numbers. arXiv e-prints, page arXiv:1903.11728, Mar 2019. 1, 2, 3, 6, 7, 8
+[32] Jiahui Yu and Thomas S. Huang. Universally slimmable networks and improved training techniques. In The IEEE International Conference on Computer Vision (ICCV), October 2019. 1, 2, 3
+[33] Jiahui Yu, Linjie Yang, Ning Xu, Jianchao Yang, and Thomas Huang. Slimmable neural networks. In International Conference on Learning Representations, 2019. 1, 2, 3
+[34] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6848-6856, 2018. 6
+[35] Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. ICLR, 2016. 1, 2, 6
+[36] Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V. Le. Learning transferable architectures for scalable image recognition. CoRR, abs/1707.07012, 2017. 1, 2
\ No newline at end of file
diff --git a/aowsadaptiveandoptimalnetworkwidthsearchwithlatencyconstraints/images.zip b/aowsadaptiveandoptimalnetworkwidthsearchwithlatencyconstraints/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..a96659084da4748ca560d9f08df6d5093e4aae60
--- /dev/null
+++ b/aowsadaptiveandoptimalnetworkwidthsearchwithlatencyconstraints/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d1414426ef9e41f5c301b059da08201a8dff981c2bcca38c351f847972e86476
+size 345647
diff --git a/aowsadaptiveandoptimalnetworkwidthsearchwithlatencyconstraints/layout.json b/aowsadaptiveandoptimalnetworkwidthsearchwithlatencyconstraints/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..f6b144308ebbc5a57a9de975d2f9c9ad972a75fa
--- /dev/null
+++ b/aowsadaptiveandoptimalnetworkwidthsearchwithlatencyconstraints/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:42842d3fbf42938bfffedd8a890e09d422a2ce0a8f798db2dd2c251a8c405d69
+size 389260
diff --git a/appearanceshockgrammarforfastmedialaxisextractionfromrealimages/c4644957-9847-47e6-b1ca-e07b7d4acfef_content_list.json b/appearanceshockgrammarforfastmedialaxisextractionfromrealimages/c4644957-9847-47e6-b1ca-e07b7d4acfef_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..f0aa4c100b73b3da8d383403c678e19a72faa1f9
--- /dev/null
+++ b/appearanceshockgrammarforfastmedialaxisextractionfromrealimages/c4644957-9847-47e6-b1ca-e07b7d4acfef_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a8a49234a7e1d84f69d39d2eda2d8741da2273bc6cdbe4d758cef7e9785fa608
+size 79181
diff --git a/appearanceshockgrammarforfastmedialaxisextractionfromrealimages/c4644957-9847-47e6-b1ca-e07b7d4acfef_model.json b/appearanceshockgrammarforfastmedialaxisextractionfromrealimages/c4644957-9847-47e6-b1ca-e07b7d4acfef_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..db645148df3e348fbc3845de8d1a5735420d5c8e
--- /dev/null
+++ b/appearanceshockgrammarforfastmedialaxisextractionfromrealimages/c4644957-9847-47e6-b1ca-e07b7d4acfef_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f9f41f2463f07d8eff784d24f2a0fbfac4a021e66d3331196a517c69ba802a48
+size 95292
diff --git a/appearanceshockgrammarforfastmedialaxisextractionfromrealimages/c4644957-9847-47e6-b1ca-e07b7d4acfef_origin.pdf b/appearanceshockgrammarforfastmedialaxisextractionfromrealimages/c4644957-9847-47e6-b1ca-e07b7d4acfef_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..88eb07501a6c23b889a1a1780aae83e25da14e4a
--- /dev/null
+++ b/appearanceshockgrammarforfastmedialaxisextractionfromrealimages/c4644957-9847-47e6-b1ca-e07b7d4acfef_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d66b666c472d3cafba3af91c27391a3ff748a4359a87d2a7d329b2238624f620
+size 2937072
diff --git a/appearanceshockgrammarforfastmedialaxisextractionfromrealimages/full.md b/appearanceshockgrammarforfastmedialaxisextractionfromrealimages/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..8aea814aa37dc338957858e5de8b71a333f26543
--- /dev/null
+++ b/appearanceshockgrammarforfastmedialaxisextractionfromrealimages/full.md
@@ -0,0 +1,336 @@
+# Appearance Shock Grammar for Fast Medial Axis Extraction from Real Images
+
+Charles-Olivier Dufresne Camaro1, Morteza Rezanejad4, Stavros Tsogkas1,2*, Kaleem Siddiqi4, Sven Dickinson1,2,3*
+
+1University of Toronto 2Samsung Toronto AI Research Center 3Vector Institute for Artificial Intelligence
+
+$^{4}$ School of Computer Science and Centre for Intelligent Machines, McGill University
+
+{camaro, tsogkas, sven}@cs.toronto.edu, {morteza, siddiqi}@cim.mcgill.ca
+
+# Abstract
+
+We combine ideas from shock graph theory with more recent appearance-based methods for medial axis extraction from complex natural scenes, improving upon the present best unsupervised method, in terms of efficiency and performance. We make the following specific contributions: i) we extend the shock graph representation to the domain of real images, by generalizing the shock type definitions using local, appearance-based criteria; ii) we then use the rules of a Shock Grammar to guide our search for medial points, drastically reducing run time when compared to other methods, which exhaustively consider all points in the input image; iii) we remove the need for typical post-processing steps including thinning, non-maximum suppression, and grouping, by adhering to the Shock Grammar rules while deriving the medial axis solution; iv) finally, we raise some fundamental concerns with the evaluation scheme used in previous work and propose a more appropriate alternative for assessing the performance of medial axis extraction from scenes. Our experiments on the BMAX500 and SK-LARGE datasets demonstrate the effectiveness of our approach. We outperform the present state-of-the-art, excelling particularly in the high-precision regime, while running an order of magnitude faster and requiring no post-processing.
+
+# 1. Introduction
+
+Object shape has a fundamental role in visual perception theory. Shape defines a basic level of abstraction that determines the spatial extent of structures in the physical world, and drives object recognition. A popular representation of 2D shape is the Medial Axis Transform (MAT) [4].
+
+The medial axis has been of particular interest in both human and computer vision because of its direct relationship to local symmetries of objects. Local symmetries effectively decompose a shape into salient parts, aiding recognition and pose estimation, while being robust to viewpoint changes. At the same time, symmetry in general has been proven to be instrumental in the analysis of complex scenes [26, 32], facilitating the encoding of shape and their discrimination and recall from memory [3, 23, 39]. The importance of symmetry for scene categorization has been recently re-confirmed in [21, 41].
+
+There are many algorithms that compute the MAT of 2D binary shapes. This problem was first discussed by Blum in his seminal work [4, 5], followed by several extensions and variants, including smooth local symmetries [6], shock graphs [25, 31], bone graphs [15, 16, 17], Hamilton-Jacobi skeletons [29], augmented fast-marching [34], hierarchical skeletons [33], and the scale axis transform [10].
+
+In an effort to broaden the application of such methods, interest in the problem of skeleton extraction from natural images has been recently revived, with a focus on using supervised learning. The first such approach is that of Tsogkas and Kokkinos [37], which was later followed by other methods, including the deployment of random forests [35], or convolutional neural networks [8, 11, 13, 14, 27, 40, 43]. Departing from this trend, Tsogkas and Dickinson defined the first complete MAT for color images, formulating medial axis extraction as a set cover problem [36]. However, all these recent approaches have an important limitation: medial points are extracted in isolation, without explicit consideration of the local context, i.e., the structural constraints imposed by the fact that they must lie on skeletal segments within regions bounded by curves, with the associated generic classification of the medial axis point types [9]. As a result, one has to consider medial proposals at multiple scales for each point, resulting in a very large space of medial point proposals to search. To make things worse,
+
+Figure 1: Our ASG algorithm consists of the following steps: (1) Disk cost computation associates each valid medial disk proposal with a cost $C(\mathbf{x}, r)$ . Low costs (blue) represent high "medialness", whereas high costs (yellow) denote disks that span heterogeneous image regions. (2) Seed proposals are selected as local scale maxima and local disk cost minima (example seeds and the respective disks are shown). (3) Branch growth of the selected seed into a medial branch. By following the rules of the SG grammar, the ASG only needs to examine a small, fixed number of proposals in a scale-space neighborhood around a medial point (shown in yellow), making it orders of magnitude faster than the AMAT [36], which naively considers $O(NR)$ medial disk proposals at each step. (4) Final output after growing all seeds. The SG grammar rules automatically enforce connectivity and single-pixel width constraints, producing a sparse, piecewise smooth scene medial axis. In comparison, the AMAT produces much noisier results, that require further post-processing.
+
+post-processing steps, such as non-maximum suppression, are required to ensure 1-pixel width results, or to group medial points into meaningful segments.
+
+In this paper we propose a method that reduces search space redundancies and mitigates the need for post-processing in appearance-based medial point extraction from natural scenes, by combining ideas from shock graph (SG) theory [24, 30, 31] and the AMAT [36]. Specifically, we use the notion of medial disk costs introduced in [36] to come up with new definitions for shock types, tailored to the domain of RGB images. Our new shock type definitions have all the properties of their binary counterparts, unlocking the use of the SG grammar defined in [31]. The grammar allows us to view medial point generation and grouping from natural images as a generative process, in the same spirit as the synthesis of binary shapes via a combination of birth, growth, and death rules, first proposed in [30]. The explicit use of the grammar drastically reduces the size and complexity of the search space and imposes structural regularities in detection, improving performance and computational efficiency. This is visually illustrated in Figure 1.
+
+The benefits of our technique are noteworthy: our appearance shock grammar (ASG) results in an $11\times$ speed-up with respect to the original AMAT algorithm, without the need for post-processing; the use of the ASG ensures that all resulting medial points are connected to form single pixel-wide medial branches. We also raise concerns with the standard evaluation benchmarks that involve multiple
+
+scene skeleton ground truths, such as SYMMAX300 [37] and BMAX500 [36]. We propose an alternative evaluation protocol that addresses these issues, and also takes into account the relative importance of individual medial points with regard to boundary reconstruction. On this improved protocol, the ASG outperforms the state-of-the-art in unsupervised medial axis extraction by $7.6\%$ with a sparser and piecewise smoother output.
+
+# 2. Related Work
+
+Shock-based medial axis extraction. Blum defined medial axes as the loci of the centers of all disks that can be maximally inscribed in the interior of the shape [4, 5]. An equivalent definition involves shocks [12], the points where "grassfire" wavefronts, starting from the boundaries of the shape, meet. Siddiqi et al. [30, 31] assume that the shocks composing the medial axis of a bounding contour are first computed and then introduce the concept of shock graphs. They "color" shocks into different types according to the local variation of the medial axis radius function, and then define a shock grammar that determines how shocks of different types are connected with one another. The shock grammar can be used to convert a skeleton into a directed acyclic graph, for use by graph matching algorithms to perform shape matching. Shock graphs have also been successfully used in recognition [25] and database indexing [24]. Bone graphs [15, 16, 17] build on shock graphs by decomposing medial axes into object parts, leading to related graphs
+
+for object recognition applications. Medial branches corresponding to salient object parts are tagged as bones, while ligature segments [2] connect the bones together.
+
+Medial axis extraction in natural images. Most recent work on skeleton extraction from natural images relies on supervised learning. Tsogkas and Kokkinos [37] propose a multiple instance learning approach combined with handcrafted features, tailored specifically to local reflective symmetries. Teo et al. [35] improve on this approach by using a more powerful random forest classifier, and by encouraging global symmetric consistency through an MRF representation. Shen et al. [27] introduced the first deep-learning approach to solving this problem, where a fully convolutional neural network (CNN) extracts the locations of the skeleton points, while estimating the local medial disk radii, by combining deep features at multiple scales. Ke et al. [11] propose a similar framework that stacks Residual Units in its side outputs, improving performance and robustness. In contrast to works that simply fuse (concatenate) side-output responses, Zhao et al. [43] create an explicit hierarchy of skeleton features at different scales. This allows for the refinement of responses at finer scales using high-level semantic context, but also of coarser scale responses by using high-detail local responses from early layers of the CNN. Finally, Wang et al. [40] frame the skeleton extraction problem as a 2D vector field generation problem using a CNN, where each vector maps an image point to a skeleton point, similar to the Hamilton-Jacobi skeleton algorithm [7, 29].
+
+A completely different, unsupervised approach, the AMAT, was proposed by Tsogkas and Dickinson [36]. The AMAT frames medial axis extraction in color images as a geometric set cover problem and solves it using a greedy approximate solution [38]. The cost assigned to each potential covering element (disk) is provided by a function that prioritizes the selection of maximal disks, leading to a solution approximating the medial axes of structures in the scene.
+
+In the present paper, we use the same concept of disk costs to generalize the definitions of shocks [31] and, in turn, exploit the shock graph theory in the RGB domain. Unlike [31], we do not assume the medial axis is given. Rather, we use the rules of the SG grammar to constrain the number of eligible medial disks that are considered at every step. This allows us to be much more efficient than the AMAT [36], where disks at all possible locations and scales are valid candidates for the greedy algorithm.
+
+# 3. Shock Theory
+
+A shock graph (SG) [30, 31] is a directed acyclic graph (DAG) built from a skeleton. Its nodes correspond to connected components of shocks of the same type, and its edges represent connections between these components. The direction of an edge indicates the direction of the medial axis radius derivative between the coarser scale and the finer scale shock. The root of the graph is called the birth shock.
+
+Shocks represent a colouring for medial points with specific scale (medial axis radius) gradients. A type 4 shock (blob) corresponds to a single medial point that is a local maximum in scale. Its counterpart, the type 2 shock (neck), represents a single medial point that is a local minimum in scale and splits its medial branch into separate parts when removed. Type 3 shocks (ribbons or bends) are sets of connected medial points of equal scales. Finally, type 1 shocks (protrusion) are sets of connected medial points with monotonically decreasing scales in one direction.
+
+Formally, the shocks can be defined as follows. For a given closed shape $X$ , let $M(X)$ be its medial axis representation. $M(X)$ consists of medial points $\mathbf{x}$ of scales $R(\mathbf{x}) \equiv R_{\mathbf{x}}$ . For a medial point $\mathbf{x} \in M(X)$ and an open disk $D(\mathbf{x}, \epsilon)$ of radius $\epsilon$ centered at $\mathbf{x}$ , let $N(\mathbf{x}, \epsilon) = M(X) \cap D(\mathbf{x}, \epsilon) \setminus \{\mathbf{x}\}$ represent its $\epsilon$ -neighbourhood. $\mathbf{x}$ is
+
+$$
+\text{type 4 if } \exists \epsilon > 0 \text{ s.t. } R_{\mathbf{x}} > R_{\mathbf{y}},\ \forall \mathbf{y} \in N(\mathbf{x}, \epsilon);
+$$
+
+$$
+\text{type 3 if } \exists \epsilon > 0 \text{ s.t. } R_{\mathbf{x}} = R_{\mathbf{y}},\ \forall \mathbf{y} \in N(\mathbf{x}, \epsilon) \neq \emptyset;
+$$
+
+$$
+\begin{array}{l} \text{type 2 if } \exists \epsilon > 0 \text{ s.t. } R_{\mathbf{x}} < R_{\mathbf{y}},\ \forall \mathbf{y} \in N(\mathbf{x}, \epsilon) \neq \emptyset \\ \text{and } N(\mathbf{x}, \epsilon) \text{ is not connected;} \end{array}
+$$
+
+type 1 otherwise.
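+
+For concreteness, the following Python sketch (our own illustration, not the authors' implementation) classifies a medial point from its radius, the radii of its $\epsilon$-neighbourhood, and a connectivity flag for that neighbourhood:
+
+```python
+def shock_type(r_x, neighbour_radii, neighbourhood_connected=True):
+    """Colour a medial point according to the four shock definitions.
+
+    r_x: radius R_x of the maximal disk at the point.
+    neighbour_radii: radii R_y of the medial points in N(x, eps).
+    neighbourhood_connected: whether N(x, eps) is a single connected set.
+    """
+    # Type 4 (blob): strict local maximum of the radius function
+    # (vacuously true for an isolated medial point).
+    if all(r_x > r_y for r_y in neighbour_radii):
+        return 4
+    # Type 3 (ribbon/bend): constant radius over a non-empty neighbourhood.
+    if neighbour_radii and all(r_x == r_y for r_y in neighbour_radii):
+        return 3
+    # Type 2 (neck): strict local minimum whose removal splits the branch.
+    if (neighbour_radii and all(r_x < r_y for r_y in neighbour_radii)
+            and not neighbourhood_connected):
+        return 2
+    # Type 1 (protrusion): everything else, e.g. monotonically varying radius.
+    return 1
+
+
+print(shock_type(5.0, [4.0, 4.5]))          # 4 (blob)
+print(shock_type(2.0, [3.0, 3.5], False))   # 2 (neck)
+```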
+
+While the shock graph represents the relations between connected medial points in terms of their radii, the shock graph grammar reverses the underlying grassfire flow in time. The successive application of its rules defines a generative process that grows parts of an object. The birth rule dictates that birth shocks can only be types 3 or 4, while the death rules allow the shock graph to terminate at any shock type. The protrusion rules define how an interval of medial points, with a monotonically changing radius value, can attach at junctions. Finally, the union rules define the conditions under which distinct branches can be connected together.
+
+# 3.1. Defining Shocks for Natural Images
+
+The ideas presented in Section 3 assume that $M(X)$ has already been extracted using some skeletonization algorithm. In this work we turn the problem on its head: rather than using the shock grammar to define a graph on a pre-existing skeleton, we use the rules imposed by the grammar to constrain the search space for medial points. To do that, first we have to formally extend the shock type definitions to the domain of natural images. We employ the same notation as in Section 3, introducing new notation when needed.
+
+The key component for determining the coloring of a shock in the binary domain is the computation of $R(\mathbf{x})$, the radius of the largest disk, centered at $\mathbf{x}$, that remains contained in the open interior $\mathcal{X}$ of a closed 2D shape. The contour of such disks is tangent to the shape's boundary at at least two points. Exact computation of $R(\mathbf{x})$ is feasible because the boundary of a 2D shape is well defined (i.e., the points where the image values change from "0" to "1" or vice-versa). This is not the case in the natural image domain, where extracting object boundaries is an ill-posed problem that typically admits a probabilistic solution.
+
+Figure 2: Appearance-based shock type examples from BMAX500 [36]: (a) 1-shocks (protrusion); (b) 2-shocks (neck); (c) 3-shocks (ribbon); (d) 4-shocks (blob). Medial axes are shown in red, contours in blue and selected shocks in yellow.
+
+To deal with this ambiguity, we follow the region-based approach of [36] and assign a cost $C(\mathbf{x}, r)$ to each disk proposal $D(\mathbf{x}, r) = D_{\mathbf{x}, r}$. This cost acts as a "soft maximality" indicator: if $r$ is close to the ideal (maximal) value, $C(\mathbf{x}, r)$ is low, whereas disks that are not maximal, or that cross image boundaries, are severely penalized.
+
+More concretely, let $\mathbf{x} \in \mathbb{R}^2$, $\mathbf{y} \in N(\mathbf{x}, \epsilon)$ be medial points, and $R_{\mathbf{x}}, R_{\mathbf{y}} \in \mathbb{R}$ denote the radii of the respective maximal disks centered at $\mathbf{x}$ and $\mathbf{y}$. Also, let a small quantity $\delta_r > 0$ denote an acceptable "cost margin" for determining disk maximality, and let $\epsilon_r > 0$ be a small radius increment. Intuitively, if $C(\mathbf{x}, r + \epsilon_r) - C(\mathbf{x}, r) < \delta_r$, then $D_{\mathbf{x}, r + \epsilon_r}$ is a better candidate for being the maximal disk centered at $\mathbf{x}$ than $D_{\mathbf{x}, r}$. We formalize the scale maximality criterion as follows:
+
+$$
+C (\mathbf {x}, R _ {\mathbf {x}}) + \delta_ {r} < C (\mathbf {x}, R _ {\mathbf {x}} + \epsilon_ {r}). \tag {1}
+$$
+
+This condition should be satisfied for all disk proposals that are added to our solution. We also define a "cost smoothness" criterion, expressing the fact that the costs of neighboring medial points should not vary significantly. This is another direct analogy to the shock theory for binary shapes, which dictates that the radii of neighboring medial points are bound to vary slowly. This is due to the fact that shocks coincide with singularities of a continuous Euclidean distance function from the boundary [5]. Letting $\delta_c > 0$ , we define the cost smoothness criterion as
+
+$$
+\left\| C (\mathbf {x}, R _ {\mathbf {x}}) - C (\mathbf {y}, R _ {\mathbf {y}}) \right\| < \delta_ {c}. \tag {2}
+$$
+
+By combining these two criteria with the binary shock type definitions, we redefine shock coloring rules in the RGB domain. These rules are agnostic to the exact nature of the cost function – we discuss potential choices for $C$ in Section 4.1. Note, however, that, contrary to the binary case, we must consider all possible locations and scale candidates $(\mathbf{x}, r_{\mathbf{x}})$, since we know neither the centers $\mathbf{x} \in M(X)$ nor the radii $R$ of the true medial disks. Finally, our shock coloring definitions are adapted to accommodate a discrete pixel grid. For instance, the neighbourhood of a point $N(\mathbf{x}, 1)$ corresponds to its immediate 8-connected neighbours, while radii only take positive integer values.
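+
+A minimal sketch of the two criteria, assuming a generic disk cost function `cost(x, r)` and illustrative margin values (the specific choices of $\delta_r$ and $\delta_c$ below are placeholders, not values from the paper):
+
+```python
+def is_scale_maximal(cost, x, r_x, delta_r=0.05, eps_r=1):
+    # Eq. (1): growing the disk beyond R_x must raise the cost by more
+    # than the margin delta_r, so D(x, R_x) is "softly" maximal.
+    return cost(x, r_x) + delta_r < cost(x, r_x + eps_r)
+
+
+def is_cost_smooth(cost, x, r_x, y, r_y, delta_c=0.1):
+    # Eq. (2): costs of neighbouring medial points must not differ
+    # by more than delta_c.
+    return abs(cost(x, r_x) - cost(y, r_y)) < delta_c
+```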
+
+# 4. Constrained Medial Point Search Using the Shock Graph Grammar
+
+The formal definition for RGB shocks described in Section 3.1 allows one to use the SG grammar to progressively build an object skeleton while constraining the search space of candidate medial points. We summarize the steps of such an approach in Algorithm 1.
+
+Algorithm 1: Overview of algorithm
+Input: RGB image $I$
+Output: Medial points $M$
+1: Initialization: $M \gets \emptyset$
+2: $D \gets$ generateProposals($I$)
+3: $Q_s \gets$ extractSeeds($D$)
+4: while not Empty($Q_s$) do
+5: $(\mathbf{x}_s, r_s) \gets$ selectSeed($Q_s$)
+6: $M \gets$ growSeed($(\mathbf{x}_s, r_s), D$)
+7: $Q_s \gets$ pruneSeeds($Q_s, M$)
+8: end while
+9: $M \gets$ growEndPoints($D, M$)
+
+First, we generate medial disk (point) proposals $D_{\mathbf{x},r}$ at multiple scales $r$. Second, we extract birth seeds $(\mathbf{x}_s,r_s)$ from the pool of proposals and store them in a queue $Q_{s}$. We grow each seed into a medial axis, by iteratively attaching low-cost medial points. Every time we attach a new point to the axis, we make sure this attachment is consistent with the rules of the SG grammar, and that the medial axis remains connected and one-pixel wide. We greedily continue growing an axis until no points can be added without violating one of these constraints, and then pick the next seed in $Q_{s}$ to grow. Note that, because birth seeds can only be type 3 or 4 shocks, which correspond to local scale maxima, the medial axis is constructed in a coarse-to-fine manner. After $Q_{s}$ has been exhausted, we relax $\delta_c$ and grow branch end points that may have been cut short due to the cost constraint. This step allows the algorithm to extend branch growths into more expensive/ambiguous image regions for completeness. We now describe each one of these steps in more detail.
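+
+The control flow of Algorithm 1 can be sketched in Python as below; the per-step helpers are passed in as callables (growSeed is assumed to return the set of points it attached), and the heap ordering of seeds by low cost and then large radius is our reading of the seed-selection strategy, not the authors' exact implementation:
+
+```python
+import heapq
+
+def extract_medial_points(image, generate_proposals, extract_seeds,
+                          grow_seed, grow_end_points):
+    """Skeleton of Algorithm 1: grow a medial axis from coarse seeds."""
+    proposals = generate_proposals(image)      # costs C(x, r) for all (x, r)
+    seeds = extract_seeds(proposals)           # type 3/4 shocks satisfying Eq. (3)
+
+    # Seed selection: prefer low-cost, large-radius seeds (coarse-to-fine).
+    queue = [(cost, -r, (x, r)) for (x, r, cost) in seeds]
+    heapq.heapify(queue)
+
+    medial_points = set()
+    while queue:
+        _, _, (x_s, r_s) = heapq.heappop(queue)
+        if (x_s, r_s) in medial_points:        # seed already absorbed by an axis
+            continue                           # (the pruning step)
+        medial_points |= grow_seed((x_s, r_s), proposals, medial_points)
+
+    # Relax the cost tolerance and extend branch end points.
+    return grow_end_points(proposals, medial_points)
+```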
+
+Proposal generation. Each medial disk candidate $D_{\mathbf{x},r}$ is associated with a cost $C(\mathbf{x},r)$ that represents how close $D_{\mathbf{x},r}$ is to being "maximal". In the domain of real images, a low value for $C$ is equivalent to a perceptually homogeneous appearance within the disk-shaped region $D_{\mathbf{x},r}^{I} \subset I$ . In Section 4.1 we describe in detail two options for $C$ based on: i) RGB encodings [36]; and ii) image intensity histograms. We compute $C(\mathbf{x},r)$ for all points $\mathbf{x}$ in the image, at all potential scales $r \in [r_{min}, r_{max}]$ . Proposals corresponding to disks that are not fully enclosed in the image are ignored.
+
+Seed extraction should only return type 3 or type 4 shocks. To extract 4-shock seed candidates, we scan the space of positions and scales, and check whether the type 4 criterion holds. For 3-shock seed candidates, we check if there is at least one valid neighbour sharing the same scale, as per the shock type 3 definition. Finally, we impose an additional requirement: a type $3/4$ shock $\mathbf{x}_s$ qualifies as a seed iff it corresponds to a local cost minimum, i.e.,
+
+$$
+C \left(\mathbf {x} _ {s}, r _ {\mathbf {x} _ {s}}\right) \leq C (\mathbf {y}, r _ {\mathbf {y}}), \forall \mathbf {y} \in N \left(\mathbf {x} _ {s}, 1\right). \tag {3}
+$$
+
+All seed candidates are added into a queue $Q_{s}$ . Because a 4-seed can eventually grow into a nearby 3-seed as the medial axis is formed (provided that both seeds are part of the same object), once a seed has stopped growing, we also remove any other seeds in $Q_{s}$ that have been added to $M$ .
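+
+A dense-grid sketch of the seed test, assuming each pixel already carries the radius and cost of its best-scoring disk (the `radius` and `cost` arrays are hypothetical intermediate products of the proposal-generation step):
+
+```python
+import numpy as np
+
+def seed_candidates(cost, radius):
+    """Boolean mask of type-3/4 seed candidates that are also local
+    cost minima over their 8-connected neighbourhood (Eq. 3)."""
+    h, w = cost.shape
+    seeds = np.zeros((h, w), dtype=bool)
+    for y in range(1, h - 1):
+        for x in range(1, w - 1):
+            nbr_r = radius[y - 1:y + 2, x - 1:x + 2].astype(float)
+            nbr_c = cost[y - 1:y + 2, x - 1:x + 2]
+            r_c = nbr_r[1, 1]
+            nbr_r[1, 1] = -np.inf                 # exclude the centre
+            is_type4 = np.all(r_c > nbr_r)        # strict local scale maximum
+            is_type3 = np.any(r_c == nbr_r)       # a neighbour shares the scale
+            if (is_type4 or is_type3) and cost[y, x] <= nbr_c.min():
+                seeds[y, x] = True
+    return seeds
+```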
+
+Seed selection follows a coarse-to-fine strategy. We prioritize the selection of seeds with larger radii and lower costs $C$ because we expect their cost computation to be less sensitive to noise, resulting in more robust axis growth.
+
+Seed growth involves attaching medial point proposals to a selected seed $(\mathbf{x}_s, r_s)$ , following the shock grammar. At each step, the least expensive valid proposal $(\mathbf{x}, r_{\mathbf{x}})$ in the neighborhood of the axis is added to $M$ . Proposals whose regions $D_{\mathbf{x},r}^{I}$ are subsumed by $M^I$ , the union of disk regions centered at points in $M$ , are ignored, as they offer no new information about the object's shape. The growth process ends when no more valid proposals can be added. To emulate the cost constraint in the RGB shock coloring definitions, we introduce a cost upper bound
+
+$$
+C _ {t o l} = C \left(\mathbf {x} _ {s}, r _ {s}\right) \left(1 + \alpha_ {c}\right) > 0, \tag {4}
+$$
+
+where $\alpha_{c}$ is a small arbitrary positive constant. We ignore proposals with costs larger than $C_{tol}$ to ensure that the quality of attached points does not degrade during growth.
+
+Single points do not provide sufficient spatial context for determining robust axis growth directions. To resolve these ambiguities, we instead grow a seed by attaching fragments $F$ of valid connected medial points. For simplicity, we model medial axis fragments $F$ as linear segments of length $l_{F} \leq l_{max}$, producing a piecewise-linear approximation of the true medial axis. To rank the quality of candidate fragments we define a fragment cost
+
+$$
+\bar {C} _ {F} = \frac {\alpha \left(l _ {F}\right)}{l _ {F}} \sum_ {j = 1} ^ {l _ {F}} C \left(\mathbf {x} _ {j}, r _ {j}\right), \tag {5}
+$$
+
+which is proportional to the mean cost of its constituent points. The more expensive a fragment, the less likely it is to be part of the medial axis. To prioritize longer fragments, which provide more context, $\bar{C}_F$ is weighted by a length-dependent parameter $\alpha (l_F)$ , i.e., between two fragments with equal mean cost, the longer one will be selected.
+
+At each iteration, we generate multiple candidate fragments and add the one with the lowest $\bar{C}_F$ to $M$ . Growth then continues from the endpoint of the last added fragment. This step is repeated until no more valid fragments, i.e., fragments that follow the SG grammar and whose regions are not subsumed by $M^I$ , can be attached to the current medial branch. In practice this can happen either because the branch is fully grown or because the remaining fragment candidates are too expensive. Then, additional medial branches can be grown from the seed $(\mathbf{x}_s,r_s)$ .
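+
+A sketch of the fragment scoring and selection, using the tolerance of Eq. (4) and the weighted mean cost of Eq. (5); the linear form of $\alpha(l)$ and the parameter values follow Section 5.1, while the candidate representation (a list of per-point disk costs) is our own simplification:
+
+```python
+def cost_tolerance(seed_cost, alpha_c=0.75):
+    # Eq. (4): upper bound on the cost of any proposal attached during growth.
+    return seed_cost * (1.0 + alpha_c)
+
+
+def length_weight(l, l_max=10, alpha_min=0.85):
+    # alpha(l): decreases linearly from alpha(1) = 1 to alpha(l_max) = alpha_min.
+    return 1.0 + (alpha_min - 1.0) * (l - 1) / (l_max - 1)
+
+
+def fragment_cost(point_costs, l_max=10):
+    # Eq. (5): length-weighted mean cost of the fragment's medial points.
+    l = len(point_costs)
+    return length_weight(l, l_max) * sum(point_costs) / l
+
+
+def best_fragment(candidates, c_tol):
+    """Pick the cheapest valid fragment; a fragment containing any point
+    above the tolerance C_tol is discarded."""
+    valid = [f for f in candidates if max(f) <= c_tol]
+    return min(valid, key=fragment_cost) if valid else None
+```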
+
+A medial branch may also terminate at a junction point: a medial point from which multiple branches emerge. In this case, new branches can also be grown from that point, as shown in Figure 2d. To identify a junction point, we check if multiple fragments can be attached to it without violating the SG grammar's protrusion rules.
+
+End point growth. Restricting the growth of medial branches using a cost-based threshold for medial fragments promotes robustness and avoids committing to potentially erroneous growth paths. However, the resulting medial axes may not be fully fleshed out: branches corresponding to the fine image details are grown last, and do not always survive this pruning step. To recover these lost medial branches, we perform a final refinement step: we revisit each medial end point and allow it to grow further by relaxing the tolerance constraint $C_{tol}$ , thus allowing less salient fragments to be added. The algorithm terminates when no more valid fragments can be added to any medial end point.
+
+# 4.1. Cost functions
+
+Color homogeneity. We use the default cost function $C$ of the AMAT [36], after smoothing the input image $I$ using [42]. The cost of a disk region $D_{\mathbf{x},r}^{I}$ with area $A_{r}$ is
+
+$$
+C _ {c o l o r} (\mathbf {x}, r) = \frac {c (\mathbf {x} , r)}{A _ {r}} + \frac {w _ {s}}{r}, \tag {6}
+$$
+
+where $c(\mathbf{x},r)$ represents a measure of homogeneity based on $\mathbf{f}_{\mathbf{x},r}$ , the average CIELAB space value within $D_{\mathbf{x},r}^{I}$ :
+
+$$
+c(\mathbf{x}, r) = \sum_{k} \sum_{l} \left\| \mathbf{f}_{\mathbf{x}, r} - \mathbf{f}_{\mathbf{x}_{k}, r_{l}} \right\|_{2}^{2}, \quad \forall k, l: D_{\mathbf{x}_{k}, r_{l}}^{I} \subset D_{\mathbf{x}, r}^{I}. \tag{7}
+$$
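+
+A direct transcription of Eqs. (6)-(7), assuming the mean CIELAB feature of every disk has been precomputed and stored in a dictionary keyed by (centre, radius); $w_s = 10^{-4}$ as in Section 5.1:
+
+```python
+import numpy as np
+
+def color_cost(f, x, r, contained_disks, area, w_s=1e-4):
+    """C_color(x, r): homogeneity of the mean CIELAB features of all
+    disks contained in D(x, r), plus a scale term favouring large disks.
+
+    f: dict mapping (centre, radius) -> mean CIELAB vector (np.ndarray).
+    contained_disks: (centre, radius) keys of disks fully inside D(x, r).
+    area: area A_r of D(x, r) in pixels.
+    """
+    f_xr = f[(x, r)]
+    c = sum(float(np.sum((f_xr - f[key]) ** 2)) for key in contained_disks)  # Eq. (7)
+    return c / area + w_s / r                                                # Eq. (6)
+```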
+
+Intensity histogram. While straightforward to compute, Equation (6) is sensitive to gradual changes in intensity. We consider a more powerful cost function that is based on local histograms of image intensity and is more appropriate for applications to regions with texture. We first smooth the image using [42]. Then, we precompute a tiling of the image using $6 \times 6$ squares. For each tile we compute an average intensity value per color channel. We then construct a local histogram $\mathbf{H}$ for each channel, by placing these averages into one of 10 bins. To compute $c(\mathbf{x}, r)$ , we replace the $l^2$ -norm in Equation (7) with a standard Bhattacharyya distance between normalized histograms $\mathbf{H}_1, \mathbf{H}_2$
+
+$$
+d_{\mathrm{Bhatt.}}\left(\mathbf{H}_{1}, \mathbf{H}_{2}\right) = \sqrt{1 - \frac{\sum_{i} \sqrt{\mathbf{H}_{1}(i) \cdot \mathbf{H}_{2}(i)}}{\sqrt{\sum_{i} \mathbf{H}_{1}(i)} \cdot \sqrt{\sum_{i} \mathbf{H}_{2}(i)}}}, \tag{8}
+$$
+
+averaged over the 3 color channels, as used for unsupervised texture segmentation in region-based active contours [20]. For each disk under consideration, the histograms are computed using only the enclosed tiles. We also rescale, and add a scale-dependent constant to obtain
+
+$$
+C _ {h i s t} (\mathbf {x}, r) = \frac {c (\mathbf {x} , r)}{r} + \frac {w _ {s}}{r}. \tag {9}
+$$
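+
+A sketch of the histogram-based cost, with the Bhattacharyya distance of Eq. (8) averaged over the three colour channels; the dictionary of per-disk, per-channel 10-bin histograms is an assumed precomputation, and $w_s = 2 \times 10^{-8}$ as in Section 5.1:
+
+```python
+import numpy as np
+
+def bhattacharyya(h1, h2, eps=1e-12):
+    # Eq. (8), for (possibly unnormalised) histograms h1, h2.
+    num = np.sum(np.sqrt(h1 * h2))
+    den = np.sqrt(np.sum(h1)) * np.sqrt(np.sum(h2)) + eps
+    return np.sqrt(max(0.0, 1.0 - num / den))
+
+
+def histogram_cost(hist, x, r, contained_disks, w_s=2e-8):
+    """C_hist(x, r): Eq. (7) with the squared L2 norm replaced by the
+    channel-averaged Bhattacharyya distance, rescaled by 1/r (Eq. 9).
+
+    hist: dict mapping (centre, radius) -> array of shape (3, 10),
+          one 10-bin intensity histogram per colour channel.
+    """
+    h_xr = hist[(x, r)]
+    c = sum(
+        np.mean([bhattacharyya(h_xr[ch], hist[key][ch]) for ch in range(3)])
+        for key in contained_disks
+    )
+    return c / r + w_s / r
+```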
+
+# 5. Experiments
+
+We conduct experiments on scene and object skeleton detection, on two representative datasets: BMAX500 [36] and SK-LARGE [27]. BMAX500 is built by automatically extracting skeletons of human-annotated region segments from the BSDS500 dataset [1]; each image typically comes with 5-7 such annotations. We use the downsampled version of BMAX500 as in [36], but we also evaluate on the full resolution dataset, to more effectively highlight the computational gains of our approach. SK-LARGE, on the other hand, focuses on object-centric skeleton detection: each image contains a centered object and the ground truth is only the foreground object skeleton. Note that this is a different problem than the one the ASG (and the AMAT) aim to solve, making comparison on the SK-LARGE unfair to our algorithm, but we still include it for completeness.
+
+# 5.1. Evaluation Protocol and Criticisms
+
+Traditionally, the evaluation of skeleton extraction methods has followed the protocol originally introduced for the task of boundary detection on the BSDS500 benchmark [8, 18, 19]. According to that protocol, the extracted (boundary/skeleton) map is binarized, and then matched to each one of the available annotations for a given image, using a bipartite graph matching routine that allows for small localization errors. To compute precision (P), a detected point can be matched to any of its ground truth (GT) counterparts, while, for perfect recall (R), all ground truth points must be matched with a point in the output.
+
+Figure 3: Boundary (middle) and skeleton (right) annotations on the BSDS/BMAX500. Different colors denote annotations extracted from different segmentations. Whereas boundaries for the same scene form a natural hierarchy, skeletons actually conflict with one another, making the evaluation protocol used in [36] unsuitable.
+
+Figure 4: Segmentation (left), binary GT skeletons (middle), and their weighted version based on uniqueness of medial disk area (right) [22]. The most salient skeleton parts are retained (yellow), while skeletal points with low boundary support have low weights (blue).
+
+We argue that this benchmarking approach can be misleading for the task of skeleton detection. To better understand why, see Figure 3. The boundary annotations for the same scene form a natural hierarchy: fine-grained interpretations of a scene complement the coarser ones, resulting in modest variation in the recall scores. Skeleton annotations, on the other hand, not only change significantly when the source segmentation changes, but actually conflict with one another. Even if a predicted skeleton perfectly matches one of the ground truths, it may be at complete odds with the rest, hurting the associated recall and F-score.
+
+Although we employ the same evaluation scheme used in previous work for consistency, we propose the following alternative: for each image, we consider each annotation individually and report scores for the one with the maximum F-score. This is a much more reasonable expectation: we require the output to match at least one of the acceptable scene interpretations, rather than all of them jointly.
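+
+The proposed protocol is easy to state in code: score the prediction against each annotation separately and keep the best, rather than pooling matches across all annotations. In the sketch below, `match_fn` stands for the standard bipartite matching with a small localisation tolerance and is assumed rather than implemented:
+
+```python
+def single_annotation_score(prediction, annotations, match_fn):
+    """Return (P, R, F1) for the ground-truth annotation that best
+    matches the predicted skeleton map."""
+    best = (0.0, 0.0, 0.0)
+    for gt in annotations:
+        p, r = match_fn(prediction, gt)
+        f1 = 0.0 if p + r == 0 else 2 * p * r / (p + r)
+        if f1 > best[2]:
+            best = (p, r, f1)
+    return best
+```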
+
+| Method (C(x, r)) | AMAT (Color), half | ASG (Color), half | AMAT (Hist), half | ASG (Hist), half | AMAT (Hist), full |
+| --- | --- | --- | --- | --- | --- |
+| P (std / single) | .393 / .237 | .396 / .246 | .431 / .268 | .506 / .343 | .471 |
+| R (std / single) | .640 / .665 | .452 / .485 | .623 / .658 | .541 / .595 | .769 |
+| F1 (std / single) | .487 / .350 | .422 / .326 | .509 / .380 | .522 / .435 | .584 |
+| *R gains (std / single) | +.043 / +.047 | +.032 / +.039 | +.016 / +.020 | +.035 / +.040 | +.018 |
+| *F1 gains (std / single) | +.012 / +.006 | +.014 / +.008 | +.006 / +.004 | +.016 / +.011 | +.005 |
+| t (s) | 57.4 | 7.0 (↓ 8.2×) | 33.7 | 6.5 (↓ 5.2×) | 393.2 |
+
+Table 1: Results on BMAX500 at half (161 × 241) and full (321 × 481) resolution, under the standard evaluation protocol and our proposed single-annotation protocol (reported as std / single within each cell). Gains for the ligature-weighted version of BMAX500 are denoted by *. Timings are averages over the BMAX500 test set. Cost function computation times are excluded from the runtime measurements to compare the two algorithms head to head.
+
+
+Another observation we make is that large portions of the medial axis may have little to do with boundary reconstruction, but are due to ligature, the "glue" that holds parts of the object together [5]. Curiously, all studies benchmarked on BMAX500 or SK-LARGE have ignored this fact. With this in mind, we use a ligature measure based on the uniqueness of medial disk area, proposed by Rezanejad and Siddiqi [22], to weight the contribution of each medial axis point on a scale from 0 to 1. Figure 4 shows a typical example, where the lower weights near the branch points signal the ligature.
+
+Parameters are optimized on the BMAX500 validation set. We use $\alpha_{c} = 0.75$ , $l_{max} = 10$ . $\alpha(l_{F})$ is set to decrease linearly from $\alpha(1) = 1$ to $\alpha(l_{max}) = 0.85$ . We fix these values for all experiments, including those on SK-LARGE.
+
+We use the same values as in [36] for the color cost function, namely $w_{s} = 10^{-4}$ and the default values for the smoothing operation [42]. For the histogram-based cost function, we use $w_{s} = 2 \times 10^{-8}$. Finally, we set $r \in [2,41]$ for the half-resolution images and $r \in [2,82]$ for the full-resolution images. During evaluation, any detected medial point within a distance equal to $1\%$ of the image diagonal (in pixels) from the ground truth can be a true positive.
+
+# 5.2. Results
+
+We report quantitative results for scene skeleton extraction, on the half- and full-resolution BMAX500 dataset, in Table 1. We compare the AMAT [36] with post-processing (i.e., grouping and thinning) and the ASG, using the two cost functions described in Section 4.1. We include results for both the standard and our proposed evaluation protocol, as well as the gains due to our ligature weighting.
+
+The cost function matters. Using the histogram-based cost increases performance noticeably for both the AMAT and the ASG (+2% and +10% F-score respectively). This result confirms our hypothesis that a powerful cost function that is robust to texture and other local appearance variations is crucial in order to obtain good quality medial axes.
+
+Performance analysis. We focus on the results for the intensity-histogram cost function. The standard evaluation protocol rewards the AMAT's dense yet imprecise output: predicted points have multiple "shots" at matching with one of the multiple GT annotations, and, conversely, a GT point is more likely to match a detected point. This increases recall, making the AMAT perform on par with our method, which produces a much sparser (59% fewer points at full resolution), but precise output. Using a single GT annotation per image (our proposed protocol) calibrates P/R, yielding +5.5% and +7.6% F-score for half and full resolution, respectively, and aligns the quantitative results with what we observe qualitatively in Figure 5: a clear advantage of obeying the rules of a shock grammar in skeleton detection. The ASG skeletons are smoother, as singularity theory dictates [9], and less sensitive to boundary artefacts, while maintaining agreement with the ground truth. On the contrary, the AMAT skeletons, where medial hypotheses are evaluated in isolation, contain spurious points and invalid branching topology.
+
+Finally, using a ligature-weighted version of BMAX increases recall for both algorithms, with a net advantage for the ASG, suggesting that branches missed by our method tend to be less important for boundary reconstruction.
+
+ASG dramatically reduces runtime. Comparison of the histogram variants of AMAT and ASG in Table 1 shows a speedup of $5\times$ for the latter at half resolution and $11\times$ at full resolution. Our approach is not only faster by an order of magnitude, but also scales much better with the input image size and the number of scales considered. A detailed runtime breakdown of the algorithm is shown in Table 2.
+
+Comparison with supervised methods. In Table 3 we compare to supervised learning methods. SK-LARGE contains annotations only for the foreground object skeletons, so we ignore medial axes outside the object during evaluation. Both the AMAT and ASG produce lower F-scores than Hi-Fi [43] and DeepFlux [40], but this is expected because they are not solving the same problem: the former rely solely on bottom-up features to extract medial axes of homogeneous image regions, whereas the latter incorporate high-level, object-specific information to detect semantic object skeletons.
+
+
+
+
+
+
+
+
+Figure 5: Qualitative results. Left to right: Ground truth (single annotation), ASG (this work), AMAT [36] (after post-processing). Our method produces sparser, cleaner, and more accurate medial axes, without any post-processing.
+
+
+
+
+
+| Step | 161 × 241 (s) | 161 × 241 (%) | 321 × 481 (s) | 321 × 481 (%) |
+| --- | --- | --- | --- | --- |
+| Proposal Generation | 3.63 | 36.0 | 63.51 | 64.7 |
+| Seed Growth | 4.60 | 45.6 | 18.28 | 18.6 |
+| End Point Growth | 1.85 | 18.3 | 15.71 | 15.9 |
+| Other | 0.01 | 0.1 | 0.73 | 0.8 |
+| Total | 10.09 | 100 | 98.23 | 100 |
+
+Table 2: Runtime breakdown of the ASG. Timings are averages over the 200 images in the BMAX500 test set. Other includes the seed extraction, selection, and pruning steps.
+
+| | AMAT [36] | ASG | Hi-Fi [43] | DeepFlux [40] |
+| --- | --- | --- | --- | --- |
+| F1 | .509 | .511 | .724 | .732 |
+| t (s) | 511.9 | 63.2 | 0.030 (GPU) | 0.019 (GPU) |
+
+Table 3: Results on SK-LARGE [27]. Runtimes are averages over the SK-LARGE test images.
+
+Taking these numbers at face value also ignores many "hidden costs" of supervised learning: 1) training deep CNNs for skeleton extraction requires GPUs and segmentations, which are costly and time-consuming to collect; 2) these models do not generalize to other datasets: [36] showed that FSDS [28], trained on SK-LARGE, fails to generalize on BMAX500; and 3) they do not easily scale to new classes or granularities; e.g., if a new class is added to the dataset, the model must be retrained.
+
+# 6. Discussion
+
+Our new approach for the efficient extraction of medial axes from cluttered natural scenes uses elements from the shock graph theory of shape. In particular, we have generalized the concept of shocks to the RGB domain by considering region-based cost functions, and have devised an algorithm that leverages the rules of the shock graph grammar to guide the search for medial points. Our approach has several merits: 1) it is fully unsupervised and thus can generalize to new datasets without any training; 2) it outperforms the state-of-the-art in unsupervised approaches and is an order of magnitude faster and much more efficient in the number of skeletal pixels generated; and 3) it requires no post-processing, such as thinning or grouping of medial points. In our experiments, we have also raised a concern regarding the way scene skeleton detection frameworks are typically evaluated in our community. To address this, we have proposed an alternative, ligature-based weighted evaluation scheme, that takes into account the relative importance of each medial point for boundary reconstruction, and better reflects performance on benchmarks with multiple ground truth annotations per scene.
+
+# Acknowledgements
+
+We thank the Natural Sciences and Engineering Research Council of Canada (NSERC), the Fonds de recherche du Québec - Nature et technologies (FRQNT), and Samsung for research funding.
+
+# References
+
+[1] Pablo Arbelaez, Michael Maire, Charless Fowlkes, and Jitendra Malik. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 33(5):898-916, May 2011. 6
+[2] Jonas August, Kaleem Siddiqi, and Steven W Zucker. Ligature instabilities in the perceptual organization of shape. Computer Vision and Image Understanding, 76(3):231-243, 1999. 3
+[3] H.B. Barlow and B.C. Reeves. The versatility and absolute efficiency of detecting mirror symmetry in random dot displays. Vision research, 19(7):783-793, 1979. 1
+[4] Harry Blum. A transformation for extracting new descriptors of shape. In Symposium on Models for the Perception of Speech and Visual Form, 1967. 1, 2
+[5] Harry Blum. Biological shape and visual science (part i). Journal of theoretical Biology, 1973. 1, 2, 4, 7
+[6] Michael Brady and Haruo Asada. Smoothed local symmetries and their implementation. The International Journal of Robotics Research, 3(3):36-61, 1984. 1
+[7] Pavel Dimitrov, James N Damon, and Kaleem Siddiqi. Flux invariants for shape. In 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003. Proceedings., volume 1, pages I-I. IEEE, 2003. 3
+[8] Christopher Funk, Seungkyu Lee, Martin R Oswald, Stavros Tsogkas, Wei Shen, Andrea Cohen, Sven Dickinson, and Yanxi Liu. 2017 ICCV challenge: Detecting symmetry in the wild. In 2017 IEEE International Conference on Computer Vision Workshop (ICCVW), pages 1692-1701. IEEE, Oct 2017. 1, 6
+[9] Peter J. Giblin and Benjamin B. Kimia. On the local form and transitions of symmetry sets, medial axes, and shocks. International Journal of Computer Vision, 54(1-3):143-157, 2003. 1, 7
+[10] Joachim Giesen, Balint Miklos, Mark Pauly, and Camille Wormser. The scale axis transform. In Proceedings of the twenty-fifth annual symposium on Computational geometry, pages 106-115. ACM, 2009. 1
+[11] Wei Ke, Jie Chen, Jianbin Jiao, Guoying Zhao, and Qixiang Ye. Srn: side-output residual network for object symmetry detection in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1068-1076, 2017. 1, 3
+[12] Peter Lax. Shock waves and entropy. In Contributions to nonlinear functional analysis, pages 603-634. Elsevier, 1971. 2
+[13] Chang Liu, Wei Ke, Jianbin Jiao, and Qixiang Ye. Rsrn: Rich side-output residual network for medial axis detection. In The IEEE International Conference on Computer Vision (ICCV) Workshops, Oct 2017. 1
+[14] Xiaolong Liu, Pengyuan Lyu, Xiang Bai, and Ming-Ming Cheng. Fusing image and segmentation cues for skeleton extraction in the wild. In The IEEE International Conference on Computer Vision (ICCV) Workshops, Oct 2017. 1
+[15] Diego Macrini, Sven Dickinson, David Fleet, and Kaleem Siddiqi. Bone graphs: Medial shape parsing and abstraction. Computer Vision and Image Understanding, 115(7):1044-1061, 2011. 1, 2
+[16] Diego Macrini, Sven Dickinson, David Fleet, and Kaleem Siddiqi. Object categorization using bone graphs. Computer Vision and Image Understanding, 115(8):1187-1206, 2011. 1, 2
+[17] Diego Macrini, Kaleem Siddiqi, and Sven Dickinson. From skeletons to bone graphs: Medial abstraction for object recognition. In 2008 IEEE Conference on Computer Vision and Pattern Recognition, pages 1-8. IEEE, 2008. 1, 2
+[18] David Martin, Charless Fowlkes, Doron Tal, and Jitendra Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001, volume 2, pages 416-423. IEEE, 2001. 6
+[19] David R Martin, Charless C Fowlkes, and Jitendra Malik. Learning to detect natural image boundaries using local brightness, color, and texture cues. IEEE transactions on pattern analysis and machine intelligence, 26(5):530-549, 2004. 6
+[20] Oleg Michailovich, Yogesh Rathi, and Allen Tannenbaum. Image segmentation using active contours driven by the bhattacharyya gradient flow. IEEE Transactions on Image Processing, 16(11):2787-2801, 2007. 6
+[21] Morteza Rezanejad, Gabriel Downs, John Wilder, Dirk B. Walther, Allan Jepson, Sven Dickinson, and Kaleem Siddiqi. Scene categorization from contours: Medial axis based salience measures. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019. 1
+[22] Morteza Rezanejad and Kaleem Siddiqi. View sphere partitioning via flux graphs boosts recognition from sparse views. Front. ICT, 2:24, 2015. 6, 7
+[23] Fred L Royer. Detection of symmetry. Journal of Experimental Psychology: Human Perception and Performance, 7(6):1186, 1981. 1
+[24] Thomas B. Sebastian, Philip N. Klein, and Benjamin B. Kimia. Shock-based indexing into large shape databases. In European Conference on Computer Vision, pages 731-746. Springer, 2002. 2
+[25] Thomas B. Sebastian, Philip N. Klein, and Benjamin B. Kimia. Recognition of shapes by editing their shock graphs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(5):550-571, 2004. 1, 2
+[26] Daniel Sharvit, Jacky Chan, Hüseyin Tek, and Benjamin B. Kimia. Symmetry-based indexing of image databases. J. Visual Communication and Image Representation, 9(4):366-380, 1998. 1
+[27] Wei Shen, Kai Zhao, Yuan Jiang, Yan Wang, Xiang Bai, and Alan Yuille. Deepskeleton: Learning multi-task scale-associated deep side outputs for object skeleton extraction in natural images. IEEE Transactions on Image Processing, 26(11):5298-5311, 2017. 1, 3, 6, 8
+[28] Wei Shen, Kai Zhao, Yuan Jiang, Yan Wang, Zhijiang Zhang, and Xiang Bai. Object skeleton extraction in natural images by fusing scale-associated deep side outputs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 222-230, 2016. 8
+
+[29] Kaleem Siddiqi, Sylvain Bouix, Allen Tannenbaum, and Steven W Zucker. Hamilton-jacobi skeletons. International Journal of Computer Vision, 48(3):215-231, 2002. 1, 3
+[30] Kaleem Siddiqi and Benjamin B. Kimia. A shock grammar for recognition. In 1996 Conference on Computer Vision and Pattern Recognition (CVPR '96), June 18-20, 1996 San Francisco, CA, USA, pages 507-513. IEEE Computer Society, 1996. 2, 3
+[31] Kaleem Siddiqi, Ali Shokoufandeh, Sven J Dickinson, and Steven W Zucker. Shock graphs and shape matching. International Journal of Computer Vision, 35(1):13-32, 1999. 1, 2, 3
+[32] Hüseyin Tek, Perry A. Stoll, and Benjamin B. Kimia. Shocks from images: propagation of orientation elements. In 1997 Conference on Computer Vision and Pattern Recognition (CVPR '97), June 17-19, 1997, San Juan, Puerto Rico, pages 839-845. IEEE Computer Society, 1997. 1
+[33] Alexandru Telea, Cristian Sminchisescu, and Sven Dickinson. Optimal inference for hierarchical skeleton abstraction. In Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004., volume 4, pages 19-22. IEEE, 2004. 1
+[34] Alexandru Telea and Jarke J. Van Wijk. An augmented fast marching method for computing skeletons and centerlines. Eurographics 2002, 2002. 1
+[35] Ching L Teo, Cornelia Fermuller, and Yiannis Aloimonos. Detection and segmentation of 2d curved reflection symmetric structures. In Proceedings of the IEEE International Conference on Computer Vision, pages 1644-1652, 2015. 1, 3
+[36] Stavros Tsogkas and Sven Dickinson. Amat: Medial axis transform for natural images. In Computer Vision (ICCV), 2017 IEEE International Conference on, pages 2727-2736. IEEE, 2017. 1, 2, 3, 4, 5, 6, 7, 8
+[37] Stavros Tsogkas and Iasonas Kokkinos. Learning-based symmetry detection in natural images. In European Conference on Computer Vision, pages 41-54. Springer, 2012. 1, 2, 3
+[38] Vijay V Vazirani. Approximation algorithms. Springer Science & Business Media, 2013. 3
+[39] Johan Wagemans. Parallel visual processes in symmetry perception: Normality and pathology. Documenta ophthalmologica, 95(3-4):359, 1998. 1
+[40] Yukang Wang, Yongchao Xu, Stavros Tsogkas, Xiang Bai, Sven Dickinson, and Kaleem Siddiqi. Deepflux for skeletons in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5287-5296, 2019. 1, 3, 7, 8
+[41] John Wilder, Morteza Rezanejad, Sven Dickinson, Kaleem Siddiqi, Allan Jepson, and Dirk B Walther. Local contour symmetry facilitates scene categorization. Cognition, 182:307-317, 2019. 1
+[42] Li Xu, Cewu Lu, Yi Xu, and Jiaya Jia. Image smoothing via L0 gradient minimization. ACM Trans. Graph., 30(6):1-12, Dec. 2011. 5, 6, 7
+[43] Kai Zhao, Wei Shen, Shanghua Gao, Dandan Li, and Ming-Ming Cheng. Hi-fi: Hierarchical feature integration for skeleton detection. In IJCAI, 2018. 1, 3, 7, 8
\ No newline at end of file
diff --git a/appearanceshockgrammarforfastmedialaxisextractionfromrealimages/images.zip b/appearanceshockgrammarforfastmedialaxisextractionfromrealimages/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..6843c0314ac0cdbd519633aaa78524fe20abaad5
--- /dev/null
+++ b/appearanceshockgrammarforfastmedialaxisextractionfromrealimages/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7d3f417cf1ae578ff5e52edd51be94be704e90ed337fcb120cf45741fc2fce3b
+size 566575
diff --git a/appearanceshockgrammarforfastmedialaxisextractionfromrealimages/layout.json b/appearanceshockgrammarforfastmedialaxisextractionfromrealimages/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..c7c6217bba0b90a9f1ad4eaf87fe76005b2c5e49
--- /dev/null
+++ b/appearanceshockgrammarforfastmedialaxisextractionfromrealimages/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:873b516f5c878c1fdb2542b88f9144638145d08905c2f064c309127c0904df50
+size 405362
diff --git a/approximatingshapesinimageswithlowcomplexitypolygons/fc8c9782-6ca7-4141-b93b-bbc9034870e1_content_list.json b/approximatingshapesinimageswithlowcomplexitypolygons/fc8c9782-6ca7-4141-b93b-bbc9034870e1_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..0f727bd87db23ae6236128196e61a95186181186
--- /dev/null
+++ b/approximatingshapesinimageswithlowcomplexitypolygons/fc8c9782-6ca7-4141-b93b-bbc9034870e1_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:46eefa9cd5e9747cc924b0f70ae966ccdd934b8481dfcbec17cf779a133c487d
+size 75305
diff --git a/approximatingshapesinimageswithlowcomplexitypolygons/fc8c9782-6ca7-4141-b93b-bbc9034870e1_model.json b/approximatingshapesinimageswithlowcomplexitypolygons/fc8c9782-6ca7-4141-b93b-bbc9034870e1_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..5acfd12bd1eb9b657797d83c1ddc722a667162cd
--- /dev/null
+++ b/approximatingshapesinimageswithlowcomplexitypolygons/fc8c9782-6ca7-4141-b93b-bbc9034870e1_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aabe9c5880ecd2ae313914284cfdd753d75bf3263bf4d1e4fc2f3f6b364d50b4
+size 90535
diff --git a/approximatingshapesinimageswithlowcomplexitypolygons/fc8c9782-6ca7-4141-b93b-bbc9034870e1_origin.pdf b/approximatingshapesinimageswithlowcomplexitypolygons/fc8c9782-6ca7-4141-b93b-bbc9034870e1_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..82577f0d90f4a51dab578c802bb1127183944917
--- /dev/null
+++ b/approximatingshapesinimageswithlowcomplexitypolygons/fc8c9782-6ca7-4141-b93b-bbc9034870e1_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0ae183cb52af2782949287d32305f24953f6d193ff23eb543ace4ffe4b6d2cd5
+size 3485631
diff --git a/approximatingshapesinimageswithlowcomplexitypolygons/full.md b/approximatingshapesinimageswithlowcomplexitypolygons/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..5bbdacde12d53a5dcd992b89121b515472ddd5d4
--- /dev/null
+++ b/approximatingshapesinimageswithlowcomplexitypolygons/full.md
@@ -0,0 +1,351 @@
+# Approximating shapes in images with low-complexity polygons
+
+Muxingzi Li Florent Lafarge
+Université Côte d'Azur, Inria
+firstname.lastname@inria.fr
+
+Renaud Marlet
+Valeo.ai & LIGM, Ecole des Ponts,
+Univ Gustave Eiffel, CNRS, Marne-la-Vallee, France
+renaud.marlet@enpc.fr
+
+# Abstract
+
+We present an algorithm for extracting and vectorizing objects in images with polygons. Departing from a polygonal partition that oversegments an image into convex cells, the algorithm refines the geometry of the partition while labeling its cells by a semantic class. The result is a set of polygons, each capturing an object in the image. The quality of a configuration is measured by an energy that accounts for both the fidelity to input data and the complexity of the output polygons. To efficiently explore the configuration space, we perform splitting and merging operations in tandem on the cells of the polygonal partition. The exploration mechanism is controlled by a priority queue that sorts the operations most likely to decrease the energy. We show the potential of our algorithm on different types of scenes, from organic shapes to man-made objects through floor maps, and demonstrate its efficiency compared to existing vectorization methods.
+
+# 1. Introduction
+
+Extracting objects in images is traditionally performed at the pixel scale, one object being represented as a group of pixels. Such a resolution-dependent representation is often not adapted to end-users. In many application scenarios, such as urban mapping or sketching, objects need to be captured with more compact and editable vector representations. In particular, polygons with floating coordinates allow both the approximation of free-form shapes, e.g., organic objects, and the fine description of piecewise-linear structures, e.g., buildings and many other man-made objects.
+
+We consider the task of capturing objects in images by polygons with three objectives. First, fidelity: the output polygons should approximate well the object silhouettes in the input image. Second, complexity: the output polygons should be composed of a small number of edges to offer a compact and editable representation. Last, geometric guarantees: the output polygons should be intersection-free, closed, potentially with holes, and form an image partition.
+
+The simplest way to capture the silhouette of an object as a polygon is to vectorize a chain of pixels representing the object contours [9, 12, 38]. While the complexity of the polygon can be easily controlled, these simplification processes do not take into account structural information contained in the input image. Consequently, output polygons are often imprecise, typically with edges that do not fit accurately the object silhouettes. Recent works on the partitioning of images into polygonal cells [1, 3, 11, 16] suggest that grouping cells from these partitions can produce more accurate results than traditional vectorization methods. This strategy however suffers from imprecise partitions, typically with some polygonal cells overlapping two different objects. Existing works in the field focus on merging polygonal cells only and omit the necessity of splitting operations to deliver more precise results.
+
+In this work, we propose an algorithm to capture objects by compact floating polygons. Inspired by mesh deformation techniques in Geometry Processing, the main idea consists in refining the geometry of an imprecise polygonal partition while labeling each cell by a semantic class.
+
+Our algorithm relies on two key contributions. First, we design an energy function to measure the quality of a polygonal partition by taking into account both the fidelity to input data (image and semantic information) and the complexity of the output polygons. Second, we propose an efficient optimization scheme to minimize that energy. We explore the solution space by splitting and merging cells within the polygonal partition. The mechanism is controlled by a priority queue that sorts the operations that are most likely to decrease the energy.
+
+We demonstrate the potential of our method on different types of scenes, from organic shapes to man-made objects through floor maps and line-drawing sketches, and show its efficiency with respect to existing vectorization approaches.
+
+# 2. Related work
+
+We distinguish four families of existing, related methods.
+
+Vectorization pipelines. The most popular strategy consists in extracting the object contours by chains of pixels that are then simplified into polygons. Contour extraction can be performed by various methods such as Grabcut [33], superpixel grouping [23] or the popular object saliency detection algorithms [7, 24, 37]. The subsequent simplification step traditionally relies upon the Douglas-Peucker algorithm [38] or mechanisms that simplify Delaunay triangulations [12, 9]. Because these algorithms only measure the geometric deviation from an initial configuration of highly complex polygons, their output can easily drift from the object silhouettes, leading to high accuracy loss in practice.
+
+Methods based on geometric primitives. Another strategy consists in detecting geometric primitives such as line segments in the input image and assembling them into closed contours. The assembling step can be performed by analyzing an adjacency graph between line segments [34], or by gap filling reasoning [39]. These algorithms however do not guarantee the output polygons to be intersection-free. Polygonal Markov random fields [22] are an alternative to sample polygons from images directly. But this model is very slow to simulate in practice and operates on simple synthetic images only. The Delaunay point process [14] allows the sampling of vertices within a Delaunay triangulation while grouping the triangulation facets into polygons.
+
+NN architectures. Polygon-RNN [5] and its improved version [2] offer a semi-automatic object annotation with polygons. These models produce polygons with possible self-intersections and overlaps, not least because the RNN decoders consider only three preceding vertices when predicting the next vertex at each time step. In contrast, PolyCNN [19] is automatic and avoids self-intersections. This CNN-based architecture is however restricted to output simple polygons with four vertices. PolyMapper [25] proposes a more advanced solution based on CNNs and RNNs with convolutional long short-term memory modules. In practice, these deep learning techniques give good results for extracting polygons with a low number of edges, typically residential buildings from remote sensing images. However, extracting more complex shapes with potentially hundreds of edges per polygon is still a challenging issue.
+
+Methods based on polygonal partitions. A last strategy consists in over-segmenting an image into polygonal cells, and then grouping them to approximate the object silhouettes. The vectorization of superpixels [1] is a straightforward way to create a polygonal partition, that is however composed of non-convex cells whose spatial connection is not clearly defined. Polygonal partitions can be more robustly created by fitting a geometric data structure on the input image. Many methods have been built upon the Line Segment Detector [36] to geometrically characterize object contours with a set of disconnected line segments. The latter are then used for constructing a Voronoi diagram whose edges conform to these line segments [11], a convex mesh with constrained edges [16], or a planar graph using a kinetic framework [3]. The cells of such polygonal partitions are then grouped to form polygons, either by graph-cut [3] or other aggregation mechanisms [26, 32]. This strategy delivers accurate results when the polygonal partition fits well the input image, which is rarely the case in practice. Unfortunately, the refinement of polygonal partitions has not been deeply explored in the literature. The only solution proposed to our knowledge consists in a splitting phase which incrementally refines a Delaunay triangulation before merging the triangles [18]. Unfortunately, handling triangular cells does not allow to produce compact polygons.
+
+Figure 1. Goal of our approach. Our algorithm takes as input an image with a rough semantic probability map and outputs a set of low-complexity polygons capturing accurately the objects of interest, here dogs and cats.
+
+# 3. Overview
+
+The algorithm takes as input an image and an associated probability map that estimates the probability of each pixel to belong to the different classes of interest. This probability map is typically generated by state-of-the-art semantic segmentation methods or saliency detection algorithms.
+
+The algorithm departs from a polygonal partition generated by kinetic propagation of line segments [3]. Each cell of this partition is enriched by a semantic label chosen as the class of interest with the highest mean over the inside pixels in the probability map. The goal of our algorithm is then to refine this semantic polygonal partition by splitting and merging cells in tandem. These refinement operations are guided by an energy that accounts for both fidelity to input data and complexity of output.
+
+The algorithm ends when no splitting or merging operations can decrease the energy anymore. Each cell in the output is a polygon associated with a class of interest, as illustrated in Fig. 1. By construction, the set of output polygons is guaranteed to recover the entire image domain without overlaps, to be closed and intersection-free, and does not contain edge-adjacent cells with the same semantic label.
+
+# 4. Algorithm
+
+We denote a semantic polygonal partition by $\mathbf{x} = (\mathbf{m},\mathbf{l})$ where $\mathbf{m}$ defines a 2D polygon mesh on the image domain while $\mathbf{l}$ represents the semantic labels associated to the facets of $\mathbf{m}$ . We denote by $\mathcal{F}_{\mathbf{x}}$ (respectively $\mathcal{E}_{\mathbf{x}}$ ) the set of facets (resp. non-border edges) of the polygon mesh $\mathbf{m}$ .
+
+# 4.1. Energy formulation
+
+We measure the quality of a semantic polygonal partition $\mathbf{x}$ with an energy function $U$ of the form:
+
+$$
+U(\mathbf{x}) = (1 - \lambda) U_{\text{fidelity}}(\mathbf{x}) + \lambda U_{\text{complexity}}(\mathbf{x}) \tag{1}
+$$
+
+The first term $U_{\text{fidelity}}$ measures the coherence of the configuration $\mathbf{x}$ with the input data while $U_{\text{complexity}}$ encourages low-complexity outputs. These two terms, that are balanced by a model parameter $\lambda \in [0,1]$ , are typically expressed with local energies on the edges and facets of the mesh $\mathbf{m}$ .
+
+Fidelity term $U_{\text{fidelity}}$ has two objectives: (i) encouraging the semantic label of each facet to be coherent with the probability map, and (ii) encouraging edges to align with high gradients of the input image. These objectives are balanced by parameter $\beta$ , set to $10^{-3}$ in our experiments:
+
+$$
+U_{\text{fidelity}}(\mathbf{x}) = \sum_{f \in \mathcal{F}_{\mathbf{x}}} - w_{f} \log P_{\text{map}}\left(l_{f}\right) + \beta \sum_{e \in \mathcal{E}_{\mathbf{x}}} w_{e} A(e) \tag{2}
+$$
+
+where $w_{f}$ is the ratio of the area of facet $f$ to the area of the whole image domain, $P_{map}(l_f)$ is the mean of the probability map for class $l_{f}$ over the pixels inside facet $f$ , and $w_{e}$ is the inverse of the length of the image diagonal if the two adjacent facets $f$ and $f^{\prime}$ of edge $e$ have different labels $l_{f} \neq l_{f^{\prime}}$ , and 0 otherwise. Finally, $A(e)$ is a function measuring the alignment of edge $e$ with image gradients:
+
+$$
+A(e) = \sum_{i \in N_{e}} r_{i} \left[ 1 - \hat{F}(m_{i}) \exp\left(- \frac{\Delta \theta_{i}^{2}}{2 \sigma^{2}}\right) \right] \tag{3}
+$$
+
+where $N_{e}$ is the set of pixels that overlap with edge $e$, $r_i$ is the inverse of the number of edges that overlap pixel $i$, $\Delta \theta_{i}$ is the angular difference between the gradient direction at pixel $i$ and the normal vector of edge $e$, and $\sigma$ is a model parameter set to $\frac{\pi}{8}$ in our experiments. Denoting $\hat{F}$ the empirical cumulative density distribution of gradient magnitudes over the input image, $\hat{F}(m_i)$ is the probability that the gradient magnitude of a random pixel in the input image is smaller than the gradient magnitude $m_{i}$ at pixel $i$. Note that, instead of image gradients, more general discontinuity maps such as [21, 10] could be used by modifying the density distribution $\hat{F}$ in Eq. (3).
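+
+A sketch of the alignment term $A(e)$ of Eq. (3); the gradient direction map, the precomputed empirical CDF of gradient magnitudes, and the per-pixel edge counts are assumed inputs rather than part of the paper's code:
+
+```python
+import numpy as np
+
+def edge_alignment(edge_pixels, edge_normal, grad_dir, grad_mag_cdf,
+                   edges_per_pixel, sigma=np.pi / 8):
+    """A(e): low when edge e lies on strong, well-aligned image gradients.
+
+    edge_pixels: list of (row, col) pixels overlapping edge e.
+    edge_normal: orientation (radians) of the normal of edge e.
+    grad_dir: H x W gradient directions (radians).
+    grad_mag_cdf: H x W values of F_hat(m_i), the empirical CDF of the
+        gradient magnitude evaluated at each pixel.
+    edges_per_pixel: H x W number of edges overlapping each pixel (1 / r_i).
+    """
+    a = 0.0
+    for (i, j) in edge_pixels:
+        r_i = 1.0 / edges_per_pixel[i, j]
+        # Angular difference wrapped to [-pi, pi].
+        dtheta = (grad_dir[i, j] - edge_normal + np.pi) % (2 * np.pi) - np.pi
+        a += r_i * (1.0 - grad_mag_cdf[i, j] * np.exp(-dtheta**2 / (2 * sigma**2)))
+    return a
+```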
+
+Complexity term $U_{complexity}$ penalizes a complex polygon mesh with the number of edges (the lower, the better):
+
+$$
+U_{\text{complexity}}(\mathbf{x}) = \left| \mathcal{E}_{\mathbf{x}} \right| \tag{4}
+$$
+
+As illustrated in Figure 2, the model parameter $\lambda$ is a trade-off between fidelity to input data and complexity of the output polygons. Note that our data term measures data fidelity independently of polygon complexity. In particular, $A(e)$ is designed as a linear function so that, if an edge $e$ is composed of two collinear edges $e_1$ and $e_2$, then $A(e) = A(e_1) + A(e_2)$. The linearity of $A(e)$ requires that each gradient pixel should not contribute multiple times to the total energy, which explains the factor $r_i$ in Eq. (3).
+
+Figure 2. Trade-off between fidelity to data and complexity of output polygons. Increasing $\lambda$ gives more compact, yet less accurate, output polygons ($\lambda = 10^{-6}$: 6 polygons, 157 vertices; $\lambda = 10^{-5}$: 5 polygons, 94 vertices; $\lambda = 10^{-4}$: 4 polygons, 62 vertices). Objects of interest: horses, persons and cars.
+
+# 4.2. Exploration mechanism
+
+Both continuous variables for representing the polygon mesh and discrete semantic labels are involved in the minimization of the (non-convex) energy $U$ . Inspired by edge contraction algorithms for simplifying triangle meshes [4, 17], we explore efficiently such a large solution space via an iterative mechanism based on local operators that split and merge facets of the polygon mesh $\mathbf{m}$ . Starting from an initial configuration, we compute the energy variations for splitting each facet as well as the energy variations for merging each pair of adjacent facets. All the energy variations (values to add to the energy if performing the corresponding operation) are sorted into a priority queue in ascending order, i.e., with more negative energy variations first. The exploration mechanism then consists in operating the splitting or merging at the top of the priority queue, i.e., the move that gives the highest energy decrease. This modification is followed by an update of the priority queue. A pseudo-code of the exploration mechanism is given in Algorithm 1. We now detail the main components of this algorithm.
+
+# Algorithm 1 Pseudo-code of exploration mechanism
+
+1: Initialize the semantic polygonal partition $\mathbf{x}$
+2: Initialize the priority queue $Q$
+3: while The top operation $i$ of $Q$ decreases energy U do
+4: Update $\mathbf{x}$ with the merging or splitting operation $i$
+5: Update $Q$
+6: end while
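+
+A Python sketch of this loop, with the split/merge operations behind a small hypothetical interface (`candidate_ops`, `energy_delta`, `op.is_valid`, `op.apply`); stale queue entries are skipped lazily instead of being removed, which is one common way of implementing the update step described below:
+
+```python
+import heapq
+import itertools
+
+def explore(partition, candidate_ops, energy_delta):
+    """Greedy minimisation of U by split/merge operations (Algorithm 1)."""
+    counter = itertools.count()        # tie-breaker so ops are never compared
+    queue = []
+    for op in candidate_ops(partition):
+        heapq.heappush(queue, (energy_delta(partition, op), next(counter), op))
+
+    while queue:
+        delta, _, op = heapq.heappop(queue)
+        if delta >= 0:                 # stopping criterion: no decrease possible
+            break
+        if not op.is_valid(partition): # touches facets modified by an earlier op
+            continue
+        modified_facets = op.apply(partition)
+        # Only operations involving the modified facets need re-scoring;
+        # the energy is a sum of local terms plus a global edge count.
+        for new_op in candidate_ops(partition, facets=modified_facets):
+            heapq.heappush(queue,
+                           (energy_delta(partition, new_op), next(counter), new_op))
+    return partition
+```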
+
+Initialization. Because the exploration mechanism finds a local minimum, a good initial configuration is required. In our experiments, we build the initial semantic polygonal partition using the kinetic partitioning method proposed in [3]. It produces in a fast and scalable manner a partition of polygonal cells that captures well the homogeneous regions in images. This partition is turned into a 2D polygon mesh. We then assign to each facet the semantic label that returns the highest mean over the inside pixels in the probability map. The impact of initialization is illustrated in Figure 3.
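+
+The label assignment used at initialization (and again after each merge or split) can be sketched as follows; `prob_map` is the input per-class probability map and `facet_pixels` an assumed rasterization of a facet:
+
+```python
+import numpy as np
+
+def facet_label(prob_map, facet_pixels):
+    """Assign to a facet the class with the highest mean probability
+    over its inside pixels.
+
+    prob_map: H x W x K array of per-pixel class probabilities.
+    facet_pixels: list of (row, col) pixels inside the facet.
+    """
+    rows, cols = zip(*facet_pixels)
+    mean_probs = prob_map[rows, cols, :].mean(axis=0)   # K-vector
+    return int(np.argmax(mean_probs))
+```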
+
+Merging operator. The merging operator merges two facets with at least one edge in common into a single facet. The update consists in removing all common edges in between the two original facets as illustrated in Figure 4. The semantic label of the new, merged facet is chosen as the most probable label with respect to the probability map.
+
+Splitting operator. This operator divides a facet into multiple facets by inserting new edges and vertices. We first detect a set of cutting directions inside the original facet. These directions are found by fitting line segments to the input image with a region growing algorithm [31]. To avoid detecting line segments overlapping the edges of the facet, only pixels inside the facet shrunk by 2 pixels are considered for the fitting (see the set of pink pixels inside the red facet in the inset). The detected line segments are then extended until they collide with the outside edges of the original facet or themselves, as illustrated in Figure 4. The collision points (respectively the prolonged line segments) correspond to new vertices (resp. edges) inserted in the 2D polygon mesh. For each new facet, we associate the most probable semantic label with respect to the probability map. If two new adjacent (sub)facets have the same semantic label, they are immediately merged, as part of the splitting operation.
+
+Figure 3. Initialization. The top (resp. bottom) row shows the initial partitions (resp. output polygons). Objects of interest are persons and bikes. Starting the exploration mechanism from a partition composed of one rectangular facet (column 1) typically produces results with missing objects such as the bike. An initial Voronoi partition [11] (column 2) is too fragmented to output low-complexity polygons. Our algorithm performs best from kinetic partitions [3] (column 3) with a good trade-off between accuracy and polygon complexity. This option returns results similar to a simulated annealing exploration (column 4) but with processing times reduced by two orders of magnitude. For clarity reasons, here and in the following figures, we do not display the background polygons (at the image border) in the visual results.
+
+Figure 4. Merging and splitting operators. The merging operator merges two adjacent facets with different semantic labels by removing the common edges (top). The splitting operator divides one facet into multiple facets that have different semantic labels (bottom). The black dashed lines indicate the cutting directions detected in the input image (bottom left).
+
+Priority queue. After a configuration $\mathbf{x}$ is modified, the priority queue must be updated. We first remove from the priority queue the current operation and all the merging or splitting operations concerning the modified facets. We then compute the energy variations of all possible operations that can affect the new facets and insert them in the priority queue, appropriately sorted. Because the energy is formulated as the sum of local terms and a global complexity term, these variations are not costly to compute.
+
+Figure 5. Vectorization of linear structures. Our algorithm can be used to vectorize floor map photographs (top) or line-drawings (bottom). While thin, these linear structures can be captured by compact polygons with a good accuracy (see closeups).
+
+When a split occurs, only the parent facet, its new split facets and the edges composing these facets are involved in the energy updates of the priority queue. These updates are fast and local; they do not propagate through the whole mesh. In our experiments, the average number of facets created per split is 2.1 and the average number of updated edges is 7.2.
+
+Stopping criterion. The exploration mechanism ends when the energy variations sorted in the priority queue all become positive, i.e., when no operation can decrease the energy anymore. Note that this criterion guarantees that the exploration mechanism converges quickly without bumping effects. Besides, the final solution cannot contain two edge-adjacent polygons with the same semantic class, as merging them necessarily decreases the energy (lower $U_{\text{fidelity}}$ , thanks to the convexity of $-\log$ , and lower $U_{\text{complexity}}$ ).
+
+Details for speeding-up the exploration. The exploration mechanism is local. This choice is motivated by low running time and the presence of good initial configurations. (An alternative could be to use a non-local optimization algorithm such as the simulated annealing, cf. Figure 8.)
+
+Observing that a complex initial partition often oversegments the probability map, we initially (before exploration) merge all adjacent facets that contain only pixels classified with the same label. This highly reduces the processing time without affecting the results.
+
+To reduce the time for detecting line segments when new splitting operations are considered, we allow a merged facet to inherit the already-detected line segments of its parent facets. We detect new line segments only in the area around the removed edges. In addition to time savings, this allows us to refine the edges between two adjacent facets by operating a merging and then a splitting on the same facet.
+
+# 5. Experiments
+
+Our algorithm has been implemented in $\mathrm{C + + }$ using the Computational Geometry Algorithms Library (CGAL) [35]. All experiments have been done on a single computer with Intel Core i7 processor clocked at $2.4\mathrm{GHz}$ .
+
+Parameters. We have 3 model parameters $\lambda$ , $\beta$ , $\sigma$ , that are set respectively to $10^{-5}$ , $10^{-3}$ , $\frac{\pi}{8}$ in all experiments, despite the dataset variety. (Note that our algorithm does not need any threshold to stop the exploration.) The values of $\lambda$ and $\beta$ were chosen based on a grid search; $\sigma$ was set to roughly model the standard deviation of gradient directions.
+
+Flexibility and robustness. Our algorithm has been tested on different types of scenes and objects. Piecewise-linear structures such as buildings are captured with fine details as long as probability maps have a good accuracy. Organic shapes such as humans and animals are approximated by low complexity polygons. In addition to the silhouettes of objects in images, our algorithm can also be used to vectorize floor map photographs or line-drawing sketches. These two applications usually require the use of specialized methods to detect, filter and connect corner points into a floor map [27] or strokes into a network of parametric curves [15]. In contrast, our algorithm finely reconstructs these linear structures, as illustrated in Figure 5. Our algorithm offers a good robustness to imprecise probability maps thanks to the second part of the data term that favors the alignment of edges with image discontinuities. As illustrated in Figure 6, the output polygons can accurately capture the silhouette of objects even if the probability map is ambiguous where different objects meet.
+
+Figure 6. Vectorization of multi-class objects. Probability maps are often ambiguous and only roughly indicate the shape of the objects (see colors for different classes). Our algorithm captures the silhouette of these objects with low-complexity polygons with good precision. Note in particular how the polygons nicely delineate close objects, such as the lady's face and the couch (see close-ups). A failure case is shown in the bottom-right example, where the quality of the probability map is too poor to capture the underlying object. Images are from the PASCAL VOC2012 dataset.
+
+Ablation study. As semantic maps are often blurry at object boundaries, using only the first data term yields polygons that do not contour the objects well, as illustrated in the inset. This result, obtained with $\beta = 0$ , must be compared with the results obtained in Figure 6 under the same initial conditions but with $\beta \neq 0$ .
+
+Quantitative evaluation. We compared our algorithm to state-of-the-art methods on three different datasets.
+
+We first tested our algorithm on the HKU-IS dataset [24] designed to evaluate salient object detection methods. We computed the probability map for each image using the algorithm of Li and Yu [24]. We compared our algorithm to two vectorization pipelines in which the same saliency maps [24] are binarized before chaining and simplifying the pixels on the object contours, either by the popular Douglas-Peucker algorithm [38] or by polyline decimation [12]. We also compared to two cell grouping algorithms that generate a polygonal partition by Voronoi diagram construction [11] or by kinetic propagation of line segments [3]. The polygons are then extracted from these partitions by thresholding the saliency map averaged over each cell.
+
+| Method / Compression ratio | 10 | 15 | 20 | 25 | 33 | 50 |
+| --- | --- | --- | --- | --- | --- | --- |
+| Voronoi [11] | 77.7 | 75.2 | 71.6 | 68.2 | 64.1 | 57.5 |
+| Kippi [3] | 79.2 | 77.1 | 72.8 | 69.5 | 65.6 | 62.1 |
+| Douglas-Peucker [38] | 83.8 | 83.3 | 81.2 | 79.4 | 76.0 | 65.7 |
+| Polyline [12] | 83.9 | 83.7 | 82.5 | 81.2 | 77.5 | 69.0 |
+| Ours | 84.1 | 84.0 | 83.7 | 83.1 | 81.3 | 77.0 |
+
+Table 1. Accuracy (%) vs compression on HKU-IS.
+
+| Method / Compression ratio | 10 | 15 | 20 | 25 | 33 | 50 |
+| --- | --- | --- | --- | --- | --- | --- |
+| Voronoi [11] | 87.9 | 86.4 | 83.6 | 81.3 | 77.7 | 74.3 |
+| Kippi [3] | 88.9 | 87.6 | 85.4 | 83.0 | 79.5 | 75.2 |
+| Douglas-Peucker [38] | 91.2 | 90.9 | 90.1 | 88.8 | 86.6 | 79.8 |
+| Polyline [12] | 91.2 | 91.1 | 90.6 | 89.9 | 88.1 | 85.8 |
+| Ours | 91.7 | 91.6 | 91.5 | 91.4 | 91.2 | 89.7 |
+
+Table 2. Accuracy (%) vs compression on PASCAL VOC2012.
+
+We denote these methods by Voronoi and Kippi, respectively. The accuracy is measured using the Intersection-over-Union of our pixelized output polygons against the ground truth. We also measure compression as the ratio of the number of pixels of the ground-truth region boundary to the number of polygon vertices. In practice, we produce polygons at different complexity levels by varying $\lambda$ , as shown in Figure 2.
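+
+For reference, the two metrics can be computed as in the sketch below, assuming the output polygons have already been rasterized into a binary mask (the function names are illustrative).
+
+```python
+import numpy as np
+
+def accuracy_iou(pred_mask, gt_mask):
+    """Accuracy: Intersection-over-Union between the pixelized output polygons
+    and the ground-truth region (both given as boolean masks)."""
+    inter = np.logical_and(pred_mask, gt_mask).sum()
+    union = np.logical_or(pred_mask, gt_mask).sum()
+    return inter / union if union else 1.0
+
+def compression(gt_boundary_pixel_count, num_vertices):
+    """Compression: ground-truth boundary length in pixels divided by the
+    number of output polygon vertices."""
+    return gt_boundary_pixel_count / num_vertices
+```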
+
+Table 1 shows the evolution of accuracy vs compression on the HKU-IS dataset. While all methods exploit the same saliency maps, only our algorithm maintains high accuracy at high compression ratios, i.e., when the output polygons have a very low number of vertices. Fig. 7 shows visual comparisons of the methods at low and high compression. At low compression, the vectorization pipelines Douglas-Peucker and Polyline produce accurate polygons, similarly to our algorithm. Because these pipelines simplify the geometry of polygons without data consistency, their accuracy significantly drops for higher compression ratios, typically from 25. Cell grouping methods Voronoi and Kippi suffer from imperfect polygonal partitions where cells often overlap several types of objects. In contrast, the merging and splitting operations of our algorithm allow us to refine cells with respect to the probability map and the input image.
+
+We also tested our algorithm on the Pascal VOC2012 dataset [13] designed for multi-class segmentation tasks. This dataset contains 20 object classes and 1 background class. The evaluation was done on the validation set. We compared our algorithm to the same four methods (Douglas-Peucker, Polyline, Voronoi and Kippi) with the same accuracy and compression metrics. Probability maps were generated by the DeepLab algorithm [6] by taking the output layer before the final argmax operation over class channels. Table 2 shows the evolution of accuracy against compression for the five algorithms. Similarly to the quantitative results obtained on the HKU-IS dataset, our algorithm outclasses the other methods, in particular with a significant accuracy gain at high compression. Figure 6 shows visual results obtained by our algorithm on different object classes.
+
+[Figure 7 panels, left to right: Douglas-Peucker, Polyline, Voronoi, Kippi, Ours.]
+
+| Method | AP | \( AP_{50} \) | \( AP_{75} \) | AR | \( AR_{50} \) | \( AR_{75} \) |
+| --- | --- | --- | --- | --- | --- | --- |
+| Mask R-CNN [20] | 41.9 | 67.5 | 48.8 | 47.6 | 70.8 | 55.5 |
+| PANet [28] | 50.7 | 73.9 | 62.6 | 54.4 | 74.5 | 65.2 |
+| PolyMapper [25] | 55.7 | 86.0 | 65.1 | 62.1 | 88.6 | 71.4 |
+| Ours | 65.8 | 87.6 | 73.4 | 78.7 | 94.3 | 86.1 |
+
+Table 3. Performance on the CrowdAI mapping challenge dataset. Average precision (AP) and average recall (AR) in %.
+
+We finally tested our algorithm on the CrowdAI mapping challenge dataset [29], which is composed of $\sim 60\mathrm{k}$ satellite images of urban landscapes. Probability maps were generated using a U-Net variant [8]. We followed the same experimental protocol as in [25] for extracting the contours of buildings from this dataset. In particular, we used the same average precision (AP) and average recall (AR) metrics. We compared our algorithm with the deep learning methods PolyMapper [25], Mask R-CNN [20] based on the implementation of [30], and PANet [28]. Table 3 presents the quantitative results for these four methods. Our algorithm obtains the best average precision and average recall scores. In particular, our algorithm outclasses PolyMapper with significant gains. This difference is partly explained by the iterative vertex-insertion mechanism of PolyMapper, whose efficiency decreases for complex shapes. By refining polygonal cells on a topologically valid partition, our algorithm does not suffer from this problem. Figure 9 shows visual results on an urban scene of the CrowdAI dataset.
+
+Performance. Figure 8 shows that our exploration mechanism reaches similar energies as a non-local simulated annealing while being two orders of magnitude faster.
+
+Our exploration mechanism is inspired by edge contraction algorithms for mesh simplification.
+
+
+Figure 7. Visual comparisons at two compression ratios: 10 (top) and 33 (bottom). While the vectorization pipelines Polyline and, to a lesser extent, Douglas-Peucker yield accurate polygons at low compression, their precision drops at high compression, with polygons not aligning well with silhouettes anymore (cf. closeups). The cell-grouping algorithms Voronoi and Kippi are less accurate on such free-form shapes where cells often overlap several object classes. In contrast, we accurately capture the elephants at both compression ratios.
+Figure 8. Evolution of energy $U$ during our exploration mechanism (red curve) and a simulated annealing optimization (SA, blue curve). While the two optimization techniques converge towards a similar energy, our exploration mechanism requires two orders of magnitude fewer iterations than the simulated annealing.
+
+While local, greedy and old, such algorithms, e.g., [4, 17], are still very popular and commonly used in the field.
+
+As shown in the inset, our algorithm typically requires a few seconds for a 100K-pixel image and about 2 min for a 10M-pixel image.
+
+
+
+Note that our code has not been optimized (beyond the general strategy expressed at the end of Sect. 4.2). In particular, the exploration mechanism runs sequentially on CPU (no parallelization). The most time-consuming operation is the update of the priority queue, and especially the simulation of splitting operations for the new large facets. If the initial partition contains $N_{f}$ facets and $N_{e}$ non-border edges, the priority queue is constructed by sorting the energy variations of the $N_{f}$ possible splits and $N_{e}$ possible merges; the running time for this is negligible (< 0.1% of the total time).
+
+
+Figure 9. Extraction of buildings from satellite images with our algorithm: 1,178 buildings of a half square kilometer area of Chicago, USA, are extracted with low complexity polygons (8,683 vertices). While compact, the polygons capture some fine details (see closeups).
+
+Last, the computation of cutting directions depends on the number of image pixels. It is very fast and performed only once, at priority queue initialization. Getting split directions from the input image lowers the dependency on the initial partition and allows larger explorations of the solution space.
+
+Limitations. As energy $U(\mathbf{x})$ is not convex and as our exploration mechanism is local, results depend on the quality of the initial partition. As shown in Fig. 3, splits provide robustness to a range of under-segmentations; yet, an initial partition that finely over-segments the image leads to more accurate results. If a good initial partition cannot be provided or guaranteed, simulated annealing can be a better choice in terms of accuracy, but not running time (cf. Fig. 8).
+
+Thanks to the gradient alignment term in $U_{\text{fidelity}}$ , our algorithm is robust to some level of error or ambiguity in semantic maps, in particular at object border; see, e.g., the polygons capturing the lady's face and the couch from the blurry semantic map in Fig. 6. Yet, the class probability of most pixels has to be correct, as is also the case for shape grammar parsers. Note that depending on external methods (initial partition, semantic map) is a strength: our performance will improve along with the related state of the art.
+
+Also, while parameter $\lambda$ balances data fidelity and output complexity, it does not make it possible to control the exact number of output vertices, contrary to vectorization pipelines.
+
+# 6. Conclusion
+
+We proposed an algorithm for extracting and vectorizing objects in images with low-complexity polygons. Our algorithm refines the geometry of an initial polygonal partition while labeling its cells by a semantic class. Based on local merging and splitting of cells, the underlying mechanism is simple, efficient and guaranteed to deliver intersection-free polygons. We demonstrated the robustness and the flexibility of our algorithm on a variety of scenes from organic shapes to man-made objects through floor maps and line-drawing sketches. We also showed on different datasets that it outperforms the state-of-the-art vectorization methods.
+
+In future work, we plan to investigate the user control of the number of output vertices. One way could be to design a third operator that removes and adds relevant vertices in the partition. We would also like to generalize our algorithm to the extraction of Bezier cycles, i.e., polygons where two successive vertices are not connected by a straight line, but by a Bezier curve; it would allow us to capture free-form shapes with a better complexity-distortion trade-off.
+
+Acknowledgments. We thank Jean-Philippe Bauchet for technical discussions. This work was partially supported by ANR-17-CE23-0003 project BIOM.
+
+# References
+
+[1] R. Achanta and S. Susstrunk. Superpixels and polygons using simple non-iterative clustering. In CVPR, 2017. 1, 2
+[2] D. Acuna, H. Ling, A. Kar, and S. Fidler. Efficient interactive annotation of segmentation datasets with polygon-rnn++. In CVPR, 2018. 2
+[3] J.-P. Bauchet and F. Lafarge. Kippi: Kinetic polygonal partitioning of images. In CVPR, 2018. 1, 2, 4, 6
+[4] M. Botsch, L. Kobbelt, M. Pauly, P. Alliez, and B. Lévy. *Polygon Mesh Processing*. AK Peters / CRC Press, 2010. 3, 7
+[5] L. Castrejon, K. Kundu, R. Urtasun, and S. Fidler. Annotating object instances with a polygon-rnn. In CVPR, 2017. 2
+[6] L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV, 2018. 6
+[7] M. Cheng, N. Mitra, X. Huang, P. Torr, and S.-M. Hu. Global contrast based salient region detection. PAMI, 37(3), 2015. 2
+[8] J. Czakon, K. Kaczmarek, Andrzej P., and P. Tarasiewicz. Best practices for elegant experimentation in data science projects. In EuroPython, 2018. 7
+[9] F. De Goes, D. Cohen-Steiner, P. Alliez, and M. Desbrun. An optimal transport approach to robust reconstruction and simplification of 2d shapes. Computer Graphics Forum, 30(5), 2011. 1, 2
+[10] P. Dollar and C. L. Zitnick. Structured forests for fast edge detection. In ICCV, 2013. 3
+[11] L. Duan and F. Lafarge. Image partitioning into convex polygons. In CVPR, 2015. 1, 2, 4, 6
+[12] Christopher Dyken, Morten Dæhlen, and Thomas Sevaldrud. Simultaneous curve simplification. Journal of Geographical Systems, 11(3), 2009. 1, 2, 6
+[13] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results. http://www.pascal-network.org/challenges/VOC/voc2012/workshop/index.html. 6
+[14] J.D. Favreau, F. Lafarge, A. Bousseau, and A. Auvolat. Extracting geometric structures in images with delaunay point processes. PAMI, 42(4), 2020. 2
+[15] J.-D. Favreau, F. Lafarge, and A. Bousseau. Fidelity vs. Simplicity: a Global Approach to Line Drawing Vectorization. ACM Trans. on Graphics, 35(4), 2016. 5
+[16] J. Forsythe, V. Kurlin, and A. Fitzgibbon. Resolution independent superpixels based on convex constrained meshes without small angles. In International Symposium on Visual Computing, 2016. 1, 2
+[17] M. Garland and P. Heckbert. Surface simplification using quadric error metrics. In SIGGRAPH, 1997. 3, 7
+[18] T. Gevers and A. W. M. Smeulders. Combining region splitting and edge detection through guided delaunay image subdivision. In CVPR, 1997. 2
+[19] N. Girard and Y. Tarabalka. End-to-End Learning of Polygons for Remote Sensing Image Classification. In IGARSS, 2018. 2
+
+[20] K. He, G. Gkioxari, P. Dollar, and R. Girshick. Mask R-CNN. In ICCV, 2017. 7
+[21] P. Isola, D. Zoran, D. Krishnan, and E.H. Adelson. Crisp boundary detection using pointwise mutual information. In ECCV, 2014. 3
+[22] R. Kluszczynski, M. N. M. van Lieshout, and T. Schreiber. Image segmentation by polygonal markov fields. Annals of the Institute of Statistical Mathematics, 59(3), 2007. 2
+[23] A. Levinshtein, C. Sminchisescu, and S. Dickinson. Optimal contour closure by superpixel grouping. In ECCV, 2010. 2
+[24] G. Li and Y. Yu. Deep contrast learning for salient object detection. In CVPR, 2016. 2, 6
+[25] Z. Li, J. Dirk Wegner, and A. Lucchi. Topological map extraction from overhead images. In ICCV, 2019. 2, 7
+[26] Z. Li, Wu Z.-M., and S.-F. Chang. Segmentation using superpixels: A bipartite graph partitioning approach. In CVPR, 2012. 2
+[27] C. Liu, J. Wu, P. Kohli, and Y. Furukawa. Raster-to-vector: Revisiting floorplan transformation. In ICCV, 2017. 5
+[28] S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia. Path aggregation network for instance segmentation. In CVPR, 2018. 7
+[29] S. Mohanty. Crowdai dataset: the mapping challenge. https://www.aicrowd.com/challenges/. 7
+[30] S.P. Mohanty. Crowdai mapping challenge: Baseline with mask r-cnn. https://github.com/crowdAI/crowdai-mapping-challenge-mask-rcnn/. 7
+[31] S. Oesau, Y. Verdie, C. Jamin, P. Alliez, F. Lafarge, and S. Giraudot. Point set shape detection. In CGAL User and Reference Manual. CGAL Editorial Board, 4.14 edition, 2018. 4
+[32] Z. Ren and G. Shakhnarovich. Image segmentation by cascaded region agglomeration. In CVPR, 2013. 2
+[33] C. Rother, V. Kolmogorov, and A. Blake. Grabcut - interactive foreground extraction using iterated graph cuts. ACM Trans. on Graphics, 23(3), 2004. 2
+[34] X. Sun, M. Christoudias, and P. Fua. Free-shape polygonal object localization. In ECCV, 2014. 2
+[35] The CGAL Project. CGAL, computational geometry algorithms library. 5
+[36] R. Von Gioi, J. Jakubowicz, J.-M. Morel, and G. Randall. Lsd: A fast line segment detector with a false detection control. PAMI, 32(4), 2010. 2
+[37] L. Wang, L. Wang, H. Lu, P. Zhang, and X. Ruan. Saliency detection with recurrent fully convolutional networks. In ECCV, 2016. 2
+[38] S. T. Wu and M. R. G. Marquez. A non-self-intersection Douglas-Peucker algorithm. In IEEE Symposium on Computer Graphics and Image Processing, 2003. 1, 2, 6
+[39] Z. Zhang, S. Fidler, J. Waggoner, Y. Cao, S. Dickinson, J. Siskind, and S. Wang. Superedge grouping for object localization by combining appearance and shape informations. In CVPR, 2012. 2
\ No newline at end of file
diff --git a/approximatingshapesinimageswithlowcomplexitypolygons/images.zip b/approximatingshapesinimageswithlowcomplexitypolygons/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..3fcdce608860bc7322283c7cc557cc36f2b24f68
--- /dev/null
+++ b/approximatingshapesinimageswithlowcomplexitypolygons/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ecc12bdb940f2e477dce8e3faa8df7aaeda0b3ed6c4d667dc613c6d2e2945cbb
+size 1092897
diff --git a/approximatingshapesinimageswithlowcomplexitypolygons/layout.json b/approximatingshapesinimageswithlowcomplexitypolygons/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ca9f4f65014b059f7a8226c609ea4f92bdb276ef
--- /dev/null
+++ b/approximatingshapesinimageswithlowcomplexitypolygons/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c8b92eed5874e39ffeca5dafcc0e5326e728f7ff1ca42332f153969ded727931
+size 412233
diff --git a/apqjointsearchfornetworkarchitecturepruningandquantizationpolicy/ed50289f-9b7b-4119-b2be-b28d95893012_content_list.json b/apqjointsearchfornetworkarchitecturepruningandquantizationpolicy/ed50289f-9b7b-4119-b2be-b28d95893012_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..ca524edc327a538956bbd3b2cb22e20391dacf41
--- /dev/null
+++ b/apqjointsearchfornetworkarchitecturepruningandquantizationpolicy/ed50289f-9b7b-4119-b2be-b28d95893012_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d1842a8162eb19f7fbc2304b102a71e3cb4dc19807d0c90d193a9b35ecee1274
+size 69749
diff --git a/apqjointsearchfornetworkarchitecturepruningandquantizationpolicy/ed50289f-9b7b-4119-b2be-b28d95893012_model.json b/apqjointsearchfornetworkarchitecturepruningandquantizationpolicy/ed50289f-9b7b-4119-b2be-b28d95893012_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..dc8e0ce33792261cee37aaf4cd4bae9a1a08296f
--- /dev/null
+++ b/apqjointsearchfornetworkarchitecturepruningandquantizationpolicy/ed50289f-9b7b-4119-b2be-b28d95893012_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fe33e58fa64c939bdef1cc15b8e1a785c8aefc4157155ffd0f8d0c39d441a1d1
+size 87465
diff --git a/apqjointsearchfornetworkarchitecturepruningandquantizationpolicy/ed50289f-9b7b-4119-b2be-b28d95893012_origin.pdf b/apqjointsearchfornetworkarchitecturepruningandquantizationpolicy/ed50289f-9b7b-4119-b2be-b28d95893012_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..fd10407ac54f9186ab698061a8d89bcb3f97857b
--- /dev/null
+++ b/apqjointsearchfornetworkarchitecturepruningandquantizationpolicy/ed50289f-9b7b-4119-b2be-b28d95893012_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0fcb2ab348fcfea376185a12b805e6a6cea9a30d63ebafc872c2dce767101c8a
+size 1065982
diff --git a/apqjointsearchfornetworkarchitecturepruningandquantizationpolicy/full.md b/apqjointsearchfornetworkarchitecturepruningandquantizationpolicy/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..0a94cce8ef9ce505f1cf2e890f51490f4878ce65
--- /dev/null
+++ b/apqjointsearchfornetworkarchitecturepruningandquantizationpolicy/full.md
@@ -0,0 +1,269 @@
+# APQ: Joint Search for Network Architecture, Pruning and Quantization Policy
+
+Tianzhe Wang $^{1,2}$ Kuan Wang $^{1}$ Han Cai $^{1}$ Ji Lin $^{1}$ Zhijian Liu $^{1}$ Hanrui Wang $^{1}$ Yujun Lin $^{1}$ Song Han $^{1}$ $^{1}$ Massachusetts Institute of Technology $^{2}$ Shanghai Jiao Tong University
+
+# Abstract
+
+We present APQ, a novel design methodology for efficient deep learning deployment. Unlike previous methods that separately optimize the neural network architecture, pruning policy, and quantization policy, we optimize them in a joint manner. To deal with the larger design space this brings, we devise a quantization-aware accuracy predictor that is fed to the evolutionary search to select the best fit. Since directly training such a predictor requires time-consuming quantization data collection, we propose a predictor-transfer technique to obtain the quantization-aware predictor: we first generate a large dataset of $\langle$ NN architecture, ImageNet accuracy $\rangle$ pairs by sampling a pretrained unified once-for-all network and doing direct evaluation; then we use these data to train an accuracy predictor without quantization, followed by transferring its weights to train the quantization-aware predictor, which largely reduces the quantization data collection time. Extensive experiments on ImageNet show the benefits of this joint design methodology: the model searched by our method maintains the same level of accuracy as the 8-bit ResNet34 model while saving $8\times$ BitOps; we achieve $2\times /1.3\times$ latency/energy saving compared to MobileNetV2+HAQ [30, 36] while obtaining the same level of accuracy; for a new deployment scenario, our joint optimization outperforms separate optimizations using ProxylessNAS+AMC+HAQ [5, 12, 36] by $2.3\%$ accuracy while reducing the GPU hours and $\mathrm{CO}_{2}$ emission of the search by orders of magnitude.
+
+# 1. Introduction
+
+Deep learning has prevailed in many real-world applications like autonomous driving, robotics, and mobile VR/AR, while efficiency is the key to bridging research and deployment. Given a constrained resource budget on the target hardware (e.g., latency, model size, and energy consumption), achieving optimal performance within the constraint requires an elaborate network architecture design. Traditionally, the deployment of efficient deep learning can be split into model architecture design and model compression (pruning and quantization). Some existing works [10, 9] have shown that such a sequential pipeline can significantly reduce the cost of existing models.
+
+
+Figure 1. Illustration of the marginal search cost for an upcoming deployment scenario, measured in pounds of $\mathrm{CO}_{2}$ emission. Simply extending existing methods still incurs considerable $\mathrm{CO}_{2}$ emission, which is not environmentally friendly.
+
+Nevertheless, careful hyper-parameter tuning is required to obtain optimal performance [12]. The number of hyper-parameters grows exponentially when we consider the three stages in the pipeline together, which soon exceeds acceptable human labor bandwidth. To tackle the problem, recent works have applied AutoML techniques to automate the process. Researchers proposed Neural Architecture Search (NAS) [44, 45, 18, 19, 2, 4] to automate the model design, outperforming the human-designed models by a large margin. Based on a similar technique, researchers adopted reinforcement learning to compress the model by automated pruning [12] and automated quantization [36]. However, optimizing these three factors in separate stages leads to sub-optimal results: e.g., the best network architecture for the full-precision model is not necessarily the optimal one after pruning and quantization. Besides, this three-step strategy also requires considerable search time and energy consumption [32]. Therefore, we need a solution to jointly optimize the deep learning model for a certain hardware platform.
+
+However, directly extending existing AutoML techniques to the joint model optimization setting can be problematic. First, the joint search space is cubic compared to the stage-wise search, making the search difficult. Introducing pruning and quantization into the pipeline also greatly increases the total search time, as both require time-consuming post-processing (e.g., fine-tuning) to obtain an accuracy approximation [36, 39]. As shown in Fig. 1, searching for each deployment would lead to considerable $\mathrm{CO}_{2}$ emission, which can exacerbate the greenhouse effect and seriously deteriorate the environment. Moreover, the search spaces of the individual pipeline steps are entangled, and each step has its own optimization objective (e.g., accuracy, latency, energy), so the final policy of the pipeline often turns out to be sub-optimal.
+
+To this end, we propose $APQ$ , a joint design method that solves this problem. To handle the large search space it brings, we reorganize the traditional pipeline of "model design $\rightarrow$ pruning $\rightarrow$ quantization" into "architecture search + mixed-precision search". The former consists of both coarse-grained architecture search (topology, operator choice, etc.) and fine-grained channel search (replacing traditional channel pruning [13]). The latter aims to find the optimal mixed-precision quantization policy, trading off accuracy against resource consumption. This split is reasonable because "model design" and "pruning" both act on the topology of the network and can be viewed as a single entity, while "quantization" acts on the details of each block, is more microscopic, and is orthogonal to that entity. We work on both aspects to improve search efficiency. For the architecture search, we need to train a highly flexible once-for-all network that supports not only operator changes but also fine-grained channel changes, so that we can perform a joint search over architecture and channel number. For the mixed-precision search, given the orthogonality of "architecture" and "quantization" and the time-consuming fine-tuning required to evaluate quantized accuracy, we instead apply a predictor to estimate the accuracy after quantization. Nevertheless, such a predictor is difficult to train for two main reasons. 1. It predicts the accuracy of models with both different architectures and different bitwidths, so it is more complicated than the predictors in [18, 7], which only take the architecture as input. 2. Collecting predictor training data could be prohibitive due to the time-consuming fine-tuning process. To address this dilemma, we propose the predictor-transfer technique to dramatically improve the sample efficiency: our quantization-aware accuracy predictor is transferred from a full-precision accuracy predictor, which is first trained on cheap data points collected using our flexible once-for-all network (evaluation only, no training required). After training this quantization-aware predictor $P$ (arch, prune, quantization), we can perform the search at ultra-fast speed using only the predictor. With the above design, we are able to efficiently perform a joint search over model architecture, channel number, and mixed-precision quantization. The predictor can also be reused for new hardware and deployment scenarios, without training the whole system again.
+
+Extensive experiments show the superiority of our method: while maintaining the same level accuracy with 8-bit version of ResNet34 model, we achieve $8 \times$ reduction in BitOps; we obtain the same level accuracy as MobileNetV2+HAQ, and achieve $2 \times /1.3 \times$ latency/energy saving; our models outperform separate optimizations using ProxylessNAS+AMC+HAQ by $2.3\%$ accuracy under same
+
+latency constraints, while reducing $600 \times$ GPU hours and $\mathrm{CO}_{2}$ emission, which could mitigate the ecological stress and accelerate the deployment process of deep model.
+
+The contributions of this paper are:
+
+- We devise a methodology $APQ$ to jointly perform NAS-pruning-quantization, thus unifying the conventionally separated stages into an integrated solution.
+- We propose a predictor-transfer method to tackle the high cost of the quantization-aware accuracy predictor's dataset collection (NN architecture, quantization policy, accuracy).
+- We achieve significant speedup to search optimal network architecture with quantization policy via this joint optimization, and enable automatic model adjustment in diverse deployment scenarios.
+
+# 2. Background and Outline
+
+Researchers have proposed various methods to accelerate the model inference, including architecture design [14, 30], network pruning [11, 21] and network quantization [10].
+
+Neural Architecture Search. Tracing back to the development of NAS, one can see the reduction in the search time. Former NAS [45, 29] use an RL agent to determine the cell-wise architecture. To efficiently search for the architecture, many later works viewed architecture searching as a path finding problem [20, 5], it cuts down the search time by jointly training rather than iteratively training from scratch. Inspired by the path structure, some one-shot methods [8] have been proposed to further leverage the network's weights in training time and begin to handle mixed-precision case for efficient deployment. Another line of works tries to grasp the information by a performance predictor [23, 7], which reduces the frequent evaluation for target dataset when searching for the optimal.
+
+Pruning. Extensive works show the progress achieved in pruning: in early time, researchers proposed fine-grained pruning [11, 10] by cutting off the connections (i.e., elements) within the weight matrix. However, such kind of method is not friendly to the CPU and GPU, and requires dedicated hardware[26, 40] to support sparse matrix multiplication, which are highly demanding to design [35, 34, 24]. Later, some researchers proposed channel-level pruning [13, 21, 17, 25, 1, 15, 27] by pruning the entire convolution channel based on some importance score (e.g., L1-norm) to enable acceleration on general-purpose hardware. However, both fine-grained pruning and channel-level pruning introduces an enormous search space as different layer has different sensitivities (e.g., the first convolution layer is very sensitive to be pruned as it extracts important
+
+| | ProxylessNAS | ChamNet | SPOS | AMC | HAQ | APQ |
+| --- | --- | --- | --- | --- | --- | --- |
+| Hardware-aware | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| No extra training during searching | | ✓ | ✓ | | | ✓ |
+| No extra inference during searching | | ✓ | | | | ✓ |
+| Channel pruning | | | | ✓ | | ✓ |
+| Support mixed-precision | | | ✓ | | ✓ | ✓ |
+Table 1. Comparison of architecture search approaches for efficient models: ProxylessNAS [5], SPOS (Single Path One-Shot) [8], ChamNet [7], AMC [12], HAQ [36], and APQ (ours). APQ distinguishes itself from other works by directly searching a mixed-precision architecture without extra interaction with the target dataset.
+
+low-level features; while the last layer can be easily pruned as it's very redundant). To this end, recent researches leverage the AutoML techniques [12, 39] to automate this exploration process and surpass the human design.
+
+Quantization. Quantization is a necessary technique to deploy the models on hardware platforms like FPGAs and mobile phones. [10] quantized the network weights to reduce the model size by grouping the weights using k-means. [6] binarized the network weights into $\{-1, +1\}$ ; [42] quantized the network using one bit for weights and two bits for activation; [28] binarized each convolution filter into $\{-w, +w\}$ ; [43] mapped the network weights into $\{-w_{\mathrm{N}}, 0, +w_{\mathrm{P}}\}$ using two bits with a trainable range; [41] explicitly regularized the loss perturbation and weight approximation error in a incremental way to quantize the network using binary or ternary weights. [16] used 8-bit integers for both weights and activation for deployment on mobile devices. Some existing works explored the relationship between quantization and network architecture. HAQ [36] proposed to leverage AutoML to determine the bit-width for a mixed-precision quantized model. A better trade-off can be achieved when different layers are quantized with different bits, showing the strong correlation between network architecture and quantization.
+
+Multi-Stage Optimization. Above methods are orthogonal to each other and a straightforward combination approach is to apply them sequentially in multiple stages i.e. NAS+Pruning+Quantization:
+
+- In the first stage, we can search the neural network architecture with the best accuracy on the target dataset [33, 5, 37]:
+
+$$
+\mathcal{A}_{\mathrm{NAS}}^{*}, w_{\mathrm{NAS}}^{*} = \underset{\mathcal{A}, w}{\arg\max}\ \mathrm{ACC}_{\mathrm{val}}(\mathcal{A}, w). \tag{1}
+$$
+
+- In the second stage, we can prune the channels in the model automatically [12]:
+
+$$
+\mathcal{A}_{P}^{*}, w_{P}^{*} = \underset{P}{\arg\max}\ \mathrm{ACC}_{\mathrm{val}}\left(P\left(\mathcal{A}_{\mathrm{NAS}}^{*}, w_{\mathrm{NAS}}^{*}\right)\right). \tag{2}
+$$
+
+- In the third stage, we can quantize the model to mixed-precision [36]:
+
+$$
+\mathcal{A}^{*}, w^{*} = \underset{Q}{\arg\max}\ \mathrm{ACC}_{\mathrm{val}}\left(Q\left(\mathcal{A}_{P}^{*}, w_{P}^{*}\right)\right). \tag{3}
+$$
+
+However, this separation usually leads to a sub-optimal solution: e.g., the best neural architecture for the floating-point model may not be optimal for the quantized model. Moreover, frequent evaluations on the target dataset make such methods time-consuming: e.g., a typical pipeline as above can take about 300 GPU hours, making it hard for researchers with limited computation resources to perform automatic design.
+
+Joint Optimization. Instead of optimizing NAS, pruning and quantization independently, joint optimization aims to find a balance among these configurations and search for the optimal strategy. To this end, the joint optimization objective can be formalized into:
+
+$$
+\mathcal{A}^{*} = \underset{\mathcal{A}, w, P, Q}{\arg\max}\ \mathrm{ACC}_{\mathrm{val}}(Q(P(\mathcal{A}, w))), \tag{4}
+$$
+
+However, the search space of this new objective is roughly the cube of a single stage's space, so it becomes challenging to perform joint optimization. We endeavor to unify NAS, pruning and quantization as a joint optimization. The outline is: 1. Train a once-for-all network that covers a large search space and from which every sub-network can be directly extracted without re-training. 2. Build a quantization-aware accuracy predictor that predicts quantized accuracy given a sub-network and a quantization policy. 3. Construct a latency/energy lookup table and perform a resource-constrained evolution search. Thereby, this optimization problem can be tackled jointly.
+
+# 3. Joint Design Methodology
+
+The overall framework of our joint design is shown in Figure 2. It consists of a highly flexible once-for-all network with fine-grained channels, an accuracy predictor, and evolution search to jointly optimize architecture, pruning, and quantization.
+
+# 3.1. Once-For-All Network with Fine-grained Channels
+
+Neural architecture search aims to find a good sub-network from a large search space.
+
+
+Figure 2. An overview of our joint design methodology. The serial number represents the order of the steps. We first train an accuracy predictor for the full precision NN, then incrementally train an accuracy predictor for the quantized NN (predictor-transfer). Finally, evolutionary search is performed to find the specialized NN architecture with quantization policy that fits hardware constraints.
+
+# Algorithm 1: APQ framework
+
+Input: Pretrained once-for-all network $S$ , evolution round iterMax, population size $N$ , mutation rate prob, architecture constraints $C$ .
+
+1 Use $S$ to generate FP32 model dataset $\mathcal{D}_{FP}$ (arch, acc) and quantized model dataset $\mathcal{D}_{MP}$ (quantization policy, arch, acc).
+2 Use $\mathcal{D}_{FP}$ to train a full precision (FP) accuracy predictor $\mathcal{M}_{FP}$ .
+3 Use $\mathcal{D}_{MP}$ and $\mathcal{M}_{FP}$ (pretrained weight to transfer) to train a mixed precision (MP) accuracy predictor $\mathcal{M}_{MP}$ .
+4 Randomly generate initial population $\mathcal{P}$ (quantization policy, arch) with size $N$ satisfying $C$ .
+5 for $i = 1\ldots iterMax$ do
+
+6 Use $\mathcal{M}_{MP}$ to predict accuracy for candidates in $\mathcal{P}$ and update $Top_{k}$ with the candidates having Top k highest accuracy.
+7 $\mathcal{P}_{\text{crossover}} = \text{Crossover}(\text{Top}_k, N/2, C)$
+8 $\mathcal{P}_{\text{mutation}} = \text{Mutation}(\text{Top}_k, N/2, \text{prob}, C)$
+9 $\mathcal{P} = \mathcal{P}\cup \mathcal{P}_{\text{crossover}}\cup \mathcal{P}_{\text{mutation}}$
+
+Output: Candidate with best accuracy in $Top_{k}$ .
+
+Traditionally, each sampled network is trained to obtain the actual accuracy [44], which is time-consuming. Recent one-shot based NAS [8] first trains a large, multi-branch network. Each time, a sub-network is extracted from the large network to directly evaluate the approximated accuracy. Such a large network is called a once-for-all network. Since the choices for different layers in a deep neural network are largely independent, a popular way is to design multiple choices (e.g., kernel size, expansion ratio) for each layer.
+
+In this paper, we use MobileNetV2 as the backbone to build a once-for-all network that supports different kernel sizes (i.e. 3, 5, 7) and channel numbers (i.e. $4 \times \mathcal{B}$ to $6 \times \mathcal{B}$ with an interval of 8, where $\mathcal{B}$ is the base channel number in that block) at the block level, and different depths (i.e. 2, 3, 4) at the stage level. The combined search space contains more than $10^{35}$ sub-networks, which is large enough to perform the search on top of it.
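+
+To give a feel for this search space, the snippet below samples one sub-network configuration under the choices listed above; the per-stage base channel numbers and block structure are simplified, illustrative assumptions.
+
+```python
+import random
+
+KERNELS, DEPTHS = [3, 5, 7], [2, 3, 4]
+
+def sample_subnet(base_channels_per_stage):
+    """Sample one sub-network: per-stage depth, per-block kernel size and width."""
+    config = []
+    for base in base_channels_per_stage:
+        depth = random.choice(DEPTHS)
+        blocks = []
+        for _ in range(depth):
+            kernel = random.choice(KERNELS)
+            # channel choices: 4x to 6x the base channel number, with a step of 8
+            channels = random.choice(range(4 * base, 6 * base + 1, 8))
+            blocks.append({"kernel": kernel, "channels": channels})
+        config.append(blocks)
+    return config
+
+# Illustrative base channel numbers (not the exact values used in the paper)
+subnet = sample_subnet(base_channels_per_stage=[16, 24, 40, 80, 96])
+```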
+
+Properties of the Once-For-All Network. To ensure efficient architecture search, we find that the once-for-all network needs to satisfy the following properties: (1) For every extracted sub-network, the performance could be directly evaluated without re-training, so that the cost of training only need to be paid once. (2) Support an extremely large and fine-grained search space to support channel number search. As we hope to incorporate pruning policy into architecture space, the once-for-all network not only needs to support different operators, but also fine-grained channel numbers (8 as interval). Thereby, the new space is significantly enlarged (nearly quadratic from $10^{19}$ to $10^{35}$ ).
+
+However, it is hard to achieve the two goals at the same time due to the nature of once-for-all network training: it is generally believed that if the search space gets too large (e.g., supporting fine-grained channel numbers), the accuracy approximation would be inaccurate [22]. A large search space will result in high variance when training the once-for-all network. To address the issue, we adopt progressive shrinking (PS) algorithm [3] to train the once-for-all network. Specifically, we first train a full sub-network with largest kernel sizes, channel numbers and depths in the once-for-all network, and use it as a teacher to progressively distill the smaller sub-networks sampled from the once-for-all network. During distillation, the trained sub-networks still update the weights to prevent accuracy loss. The PS algorithm effectively reduces the variance during once-for-all network training. By doing so, we can assure that the extracted sub-network from the once-for-all network preserves competitive accuracy without re-training.
+
+# 3.2. Quantization-Aware Accuracy Predictor
+
+To reduce the cost of designing for various deployment scenarios, we propose to build a quantization-aware accuracy predictor $P$ , which predicts the accuracy of the mixed-precision (MP) model based on the architecture configuration and quantization policy. During search, we use the predicted accuracy $\overline{acc} = P$ (arch, prune, quantize) instead of the measured accuracy. The input to the predictor $P$ is the encoding of the network architecture, the pruning strategy, and the quantization policy.
+
+Architecture and Quantization Policy Encoding. We encode the network architecture block by block: for each building block (i.e. bottleneck residual block like MobileNetV2 [30]), we encode the kernel size, channel numbers, weight/activation bits for pointwise and depthwise convolutions into one-hot vectors, and concatenate these vectors together as the encoding of the block. For example, a block has 3 choices of kernel sizes (e.g. 3,5,7) and 4 choices of channel numbers(e.g. 16,24,32,40), if we choose kernel size=3 and channel numbers=32, then we get two vectors [1,0,0] and [0,0,1,0], and we concatenate them together and use [1,0,0,0,0,1,0] to represent this block's architecture. Likewise, we also use one-hot vectors to denote the choice of bitwidth for certain weights/activation of pointwise and depthwise layers, e.g. suppose weight/activation bitwidth choices for pointwise/depthwise layer are 4 or 8, we use [1,0,0,1,0,1,1,0] to denote the choice (4,8,8,4) for quantization policy. If this block is skipped, we set all values of the vector to 0. We further concatenate the features of all blocks as the encoding of the whole network. Then for a 5-layer network, we can use a 75-dim $(5 \times (3 + 4 + 2 \times 4) = 75)$ vector to represent such an encoding. In our setting, the choices of kernel sizes are [3,5,7], the choices of channel number depend on the base channel number for each block, and bitwidth choices are [4,6,8], there are 21 blocks in total to design.
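+
+To make the encoding concrete, the small sketch below reproduces the toy example above; the helper names and default choice lists are illustrative (the actual setting uses bitwidth choices [4, 6, 8] and per-block channel choices derived from the base channel number).
+
+```python
+def one_hot(value, choices):
+    vec = [0] * len(choices)
+    vec[choices.index(value)] = 1
+    return vec
+
+def encode_block(kernel, channels, bits, skipped=False,
+                 kernel_choices=(3, 5, 7), channel_choices=(16, 24, 32, 40),
+                 bit_choices=(4, 8)):
+    """bits = (w_pw, a_pw, w_dw, a_dw): bitwidths of the pointwise/depthwise convs."""
+    length = len(kernel_choices) + len(channel_choices) + 4 * len(bit_choices)
+    if skipped:
+        return [0] * length          # a skipped block is encoded as an all-zero vector
+    vec = one_hot(kernel, kernel_choices) + one_hot(channels, channel_choices)
+    for b in bits:
+        vec += one_hot(b, bit_choices)
+    return vec
+
+# Reproduces the example in the text: kernel 3, 32 channels, bit choice (4, 8, 8, 4)
+block = encode_block(3, 32, bits=(4, 8, 8, 4))
+# block[:7] == [1, 0, 0, 0, 0, 1, 0] and block[7:] == [1, 0, 0, 1, 0, 1, 1, 0]
+# The per-block vectors of all 21 blocks are concatenated into the network encoding.
+```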
+
+Accuracy Predictor. The predictor we use is a 3-layer feed-forward neural network with a hidden dimension of 400 in each layer. As shown on the left of Figure 3, the input of the predictor is the one-hot encoding described above and the output is the predicted accuracy. Different from existing methods [20, 5, 37], our predictor-based method does not require frequent evaluation of architectures on the target dataset in the search phase. Once we have the predictor, we can integrate it with any search method (e.g., reinforcement learning, evolution, Bayesian optimization, etc.) to perform joint design over architecture-pruning-quantization at a negligible cost. However, the biggest challenge is how to collect the $\langle$ architecture, quantization policy, accuracy $\rangle$ dataset to train the predictor for quantized models. This is difficult for two reasons. 1) Collecting a quantized model's accuracy is time-consuming: fine-tuning is required to recover the accuracy after quantization, which takes about 0.2 GPU hours per data point. In fact, we find that 80k $\langle$ NN architecture, ImageNet accuracy $\rangle$ data pairs are enough to train a good full-precision accuracy predictor. However, collecting a quantized dataset of the same size would cost 16,000 GPU hours, which is far beyond affordable.
+
+
+Figure 3. Predictor-transfer technique. We start from a pre-trained full-precision predictor and add another input head (green square at bottom right) denoting quantization policy. Then fine-tune the quantization-aware accuracy predictor.
+
+2) The quantization-aware accuracy predictor is harder to train than a traditional accuracy predictor on full-precision models: the architecture design and the quantization policy affect network performance from two separate aspects, making it hard to model their mutual influence. Thus, training the quantization-aware accuracy predictor in the traditional way can result in a significant performance drop (Table 2).
+
+Transfer Predictor to Quantized Models. Collecting a quantized NN dataset for training the predictor is difficult (needs finetuning), but collecting a full-precision NN dataset is easy: we can directly pick sub-networks from the once-for-all network and measure its accuracy. We propose the predictor-transfer technique to increase the sample efficiency and make up for the lack of data. As the order of accuracy before and after quantization is usually preserved, we first pre-train the predictor on a large-scale dataset to predict the accuracy of full-precision models, then transfer to quantized models. The quantized accuracy dataset is much smaller and we only perform short-term fine-tuning. As shown in Figure 3, we add the quantization bits (weights& activation) of the current block into the input embedding to build the quantization-aware accuracy predictor. We then further fine-tune the quantization-aware accuracy predictor using pre-trained FP predictor's weights as initialization. Since most of the weights are inherited from the full-precision predictor, the training requires much less data compared to training from scratch.
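+
+A minimal PyTorch-style sketch of the predictor and of the transfer step is given below. It assumes the quantization-policy encoding is appended to the architecture encoding and that the first-layer weights acting on the architecture part are copied from the full-precision predictor; the input sizes are illustrative, not the exact dimensions used in the experiments.
+
+```python
+import torch
+import torch.nn as nn
+
+ARCH_DIM, QUANT_DIM, HIDDEN = 147, 84, 400   # illustrative input sizes
+
+def make_predictor(in_dim, hidden=HIDDEN):
+    # 3-layer feed-forward accuracy predictor with hidden width 400
+    return nn.Sequential(
+        nn.Linear(in_dim, hidden), nn.ReLU(),
+        nn.Linear(hidden, hidden), nn.ReLU(),
+        nn.Linear(hidden, 1),
+    )
+
+fp_predictor = make_predictor(ARCH_DIM)               # trained on (arch, FP accuracy) pairs
+mp_predictor = make_predictor(ARCH_DIM + QUANT_DIM)   # quantization-aware predictor
+
+# Predictor transfer: initialize the quantization-aware predictor from the FP one.
+with torch.no_grad():
+    # copy the slice of the first layer that acts on the architecture encoding
+    mp_predictor[0].weight[:, :ARCH_DIM] = fp_predictor[0].weight
+    mp_predictor[0].bias.copy_(fp_predictor[0].bias)
+    # the remaining layers are inherited as-is
+    for i in (2, 4):
+        mp_predictor[i].load_state_dict(fp_predictor[i].state_dict())
+# mp_predictor is then fine-tuned on the small quantized-accuracy dataset.
+```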
+
+# 3.3. Hardware-Aware Evolutionary Search
+
+As different hardware might have drastically different properties (e.g., cache size, level of parallelism), the optimal network architecture and quantization policy for one hardware platform is not necessarily the best for another.
+
+| Model | ImageNet Top1 (%) | Latency (ms) | Energy (mJ) | BitOps (G) | Design cost (GPU hours) | CO2e (lbs, marginal) | Cloud compute cost (marginal) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| MobileNetV2 - 8bit | 71.8 | 9.10 | 12.46 | 19.2 | - | - | - |
+| ProxylessNAS - 8bit | 74.2 | 13.14 | 14.12 | 19.5 | 200N | 56.72 | $148 – $496 |
+| ProxylessNAS + AMC - 8bit | 73.3 | 9.77 | 10.53 | 15.0 | 204N | 57.85 | $151 – $506 |
+| MobileNetV2 + HAQ | 71.9 | 8.93 | 11.82 | - | 96N | 27.23 | $71 – $238 |
+| ProxylessNAS + AMC + HAQ | 71.8 | 8.45 | 8.84 | - | 300N | 85.08 | $222 – $744 |
+| DNAS [38] | 74.0 | - | - | 57.3 | 40N | 11.34 | $30 – $99 |
+| Single Path One-Shot [8] | 74.6 | - | - | 51.9 | 288 + 24N | 6.81 | $18 – $60 |
+| Ours-A (w/o transfer) | 72.1 | 8.85 | 11.79 | 13.2 | 2400 + 0.5N | 0.14 | $0.4 – $1.2 |
+| Ours-B (w/ transfer) | 74.1 | 8.40 | 12.18 | 16.5 | 2400 + 0.5N | 0.14 | $0.4 – $1.2 |
+| Ours-C (w/ transfer) | 75.1 | 12.17 | 14.14 | 23.6 | 2400 + 0.5N | 0.14 | $0.4 – $1.2 |
+
+Table 2. Comparison with state-of-the-art efficient models for hardware with fixed quantization or mixed precision. Our method cuts down the marginal search time by two orders of magnitude while achieving better performance than others. The marginal $\mathrm{CO}_{2}$ emission (lbs) and cloud compute cost ($) [32] are negligible for search in a new scenario. Here, marginal cost means the cost of searching in a new deployment scenario; we use $N$ to denote the number of upcoming deployment scenarios and include the cost of training our once-for-all network in the "design cost". The listed "our models" are searched under different latency constraints for fair comparison.
+
+
+Figure 4. Comparison with mixed-precision models searched by HAQ [36] under latency/energy constraints. The baselines are 4-bit and 6-bit fixed precision, respectively. When the constraint is strict, our model can outperform fixed precision model by more than $10\%$ accuracy, and $5\%$ compared with HAQ. Such performance boost may benefit from the dynamic architecture search space rather than fixed one as MobileNetV2.
+
+
+
+Therefore, instead of relying on indirect signals (e.g., BitOps), our optimization is directly based on the measured latency and energy on the target hardware.
+
+Measuring Latency and Energy. Evaluating each candidate policy on actual hardware can be very costly. Thanks to the sequential structure of neural network, we can approximate the latency (or energy) of the model by summing up the latency (or energy) of each layer. We can first build a lookup table containing the latency and energy of each layer under different architecture configurations and bit-widths. Afterwards, for any candidate policy, we can break it down and query the lookup table to directly calculate the latency (or energy) at negligible cost. In practice, we find that such practice can precisely approximate the actual inference cost.
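+
+A sketch of the lookup-table idea is given below; the key layout and values are illustrative assumptions, not measured numbers.
+
+```python
+# Each key describes one layer configuration, each value its measured cost.
+latency_lut = {
+    # (block_idx, kernel, channels, w_bits, a_bits): latency in ms
+    (0, 3, 32, 8, 8): 0.21,
+    (0, 5, 32, 4, 8): 0.18,
+    # ... one entry per layer configuration, measured once on the target hardware
+}
+
+def estimate_latency(candidate_layer_keys, lut=latency_lut):
+    """Approximate model latency by summing per-layer entries (sequential network)."""
+    return sum(lut[key] for key in candidate_layer_keys)
+
+# During search, candidates exceeding the budget are discarded without touching
+# the hardware, e.g.: if estimate_latency(cand) > latency_budget: skip cand
+```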
+
+Resource-Constrained Evolution Search. We adopt the evolution-based architecture search [8] to explore the best resource-constrained model. Based on this, we further replace the evaluation process with our quantization-aware accuracy predictor to estimate the performance of each candidate directly. The cost for each candidate can then be reduced from $N$ times of model inference to only one time of predictor inference (where $N$ is the size of the validation set). Furthermore, we can verify the resource constraints by our latency/energy lookup table to avoid the direct interaction with the target hardware. Given a resource budget, we directly eliminate the candidates that exceed the constraints.
+
+# 4. Implementation Details
+
+Data Preparation for Quantization-aware Accuracy Predictor. We generate two kinds of data (2,500 samples each): 1. randomly sample both the architecture and the quantization policy; 2. randomly sample the architecture, and sample 10 quantization policies for each architecture configuration. We mix the data for training the quantization-aware accuracy predictor, and use the full-precision pretrained predictor's weights for the transfer. The number of data points used to train the full-precision predictor is 80,000. As such, our quantization accuracy predictor has the ability to generalize among different architecture/quantization policy pairs and to learn the mutual relation between architecture and quantization policy.
+
+Evolutionary Architecture Search. For the evolutionary architecture search, we set the population size to 100, and choose the top-25 candidates to produce the next generation (50 by mutation, 50 by crossover). Each individual in the population is a network architecture with a quantization policy, using the same encoding as the quantization-aware accuracy predictor. The mutation rate is 0.1 for each layer, the same as in [8], and we randomly choose a new kernel size and channel number for mutation. For crossover, each layer is randomly chosen from the layer configurations of its parents. We set the maximum number of iterations to 500, and choose the best candidate among the final population.
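+
+The search loop can be summarized with the following sketch. The hyper-parameters follow the text, while the helper functions (`sample_candidate`, `predict_acc`, `satisfies_constraints`, `mutate`, `crossover`) are hypothetical stand-ins for the predictor and lookup-table queries described above.
+
+```python
+import random
+
+POP_SIZE, TOP_K, MUTATE_PROB, MAX_ITERS = 100, 25, 0.1, 500
+
+def evolution_search(sample_candidate, predict_acc, satisfies_constraints,
+                     mutate, crossover):
+    # initial population of (architecture, quantization policy) encodings
+    population = []
+    while len(population) < POP_SIZE:
+        cand = sample_candidate()
+        if satisfies_constraints(cand):
+            population.append(cand)
+    topk = []
+    for _ in range(MAX_ITERS):
+        # rank by predicted quantized accuracy: one predictor call per candidate,
+        # no training and no inference on the target dataset
+        topk = sorted(topk + population, key=predict_acc, reverse=True)[:TOP_K]
+        children = []
+        while len(children) < POP_SIZE // 2:            # 50 offspring by mutation
+            child = mutate(random.choice(topk), MUTATE_PROB)
+            if satisfies_constraints(child):
+                children.append(child)
+        while len(children) < POP_SIZE:                 # 50 offspring by crossover
+            child = crossover(random.choice(topk), random.choice(topk))
+            if satisfies_constraints(child):
+                children.append(child)
+        population = children
+    return topk[0]
+```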
+
+Quantization. We follow the implementation in [36] to do quantization. Specifically, we quantize the weights and activations with the specific quantization policies. For each layer with weights $w$ with quantization bit $b$ , we linearly quantize it to $[-v, v]$ , the quantized weight is:
+
+$$
+w^{\prime} = \max\left(0, \min\left(2v, \operatorname{round}\left(\frac{w + v}{s}\right) \cdot s\right)\right) - v, \quad s = \frac{2v}{2^{b} - 1} \tag{5}
+$$
+
+We choose a different $v$ for each layer, minimizing the KL-divergence $\mathcal{D}(w||w')$ between the original weights $w$ and the quantized weights $w'$ . For activations, we quantize them to $[0, v]$ since the values are non-negative after the ReLU6 layer.
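+
+A small NumPy sketch of this per-layer linear quantization is given below, assuming the form of Eq. (5); the search over the clipping range $v$ (minimizing the KL-divergence between $w$ and $w'$ ) is omitted and the example value of $v$ is an assumption.
+
+```python
+import numpy as np
+
+def linear_quantize_weights(w, bits, v):
+    """Linearly quantize weights to the symmetric range [-v, v] with `bits` bits."""
+    step = 2.0 * v / (2 ** bits - 1)                 # quantization step size s
+    shifted = np.round((w + v) / step) * step        # quantize in [0, 2v]
+    return np.clip(shifted, 0.0, 2.0 * v) - v        # clamp and shift back to [-v, v]
+
+def linear_quantize_activations(a, bits, v):
+    """Activations after ReLU6 are non-negative, so quantize to [0, v]."""
+    step = v / (2 ** bits - 1)
+    return np.clip(np.round(a / step) * step, 0.0, v)
+
+# Example: 4-bit weights with an illustrative clipping range
+w = np.random.randn(64, 32).astype(np.float32)
+w_q = linear_quantize_weights(w, bits=4, v=float(np.abs(w).max()))
+```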
+
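+The following is a generic sketch of per-layer linear quantization with a histogram-based KL search for the clipping value $v$; it follows the spirit of the description above under our own assumptions rather than reproducing the released code.
+
+```python
+import numpy as np
+
+def linear_quantize(w, v, b):
+    """Linearly quantize w to 2**b levels in the range [-v, v]."""
+    step = 2 * v / (2 ** b - 1)
+    w_clipped = np.clip(w, -v, v)
+    return np.round((w_clipped + v) / step) * step - v
+
+def kl_divergence(p, q, eps=1e-8):
+    p = p / (p.sum() + eps)
+    q = q / (q.sum() + eps)
+    return float(np.sum(p * np.log((p + eps) / (q + eps))))
+
+def choose_clip_value(w, b, n_candidates=50):
+    """Pick v minimizing the KL divergence between the histograms of w and w'."""
+    bins = np.linspace(w.min(), w.max(), 256)
+    p, _ = np.histogram(w, bins=bins)
+    best_v, best_kl = None, float("inf")
+    for v in np.linspace(0.1 * np.abs(w).max(), np.abs(w).max(), n_candidates):
+        q, _ = np.histogram(linear_quantize(w, v, b), bins=bins)
+        kl = kl_divergence(p.astype(float), q.astype(float))
+        if kl < best_kl:
+            best_v, best_kl = v, kl
+    return best_v
+
+w = np.random.randn(1000).astype(np.float32)
+v = choose_clip_value(w, b=4)
+w_quantized = linear_quantize(w, v, b=4)
+```
+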
+# 5. Experiments
+
+To verify the effectiveness of our method, we conduct experiments covering the two most important constraints for on-device deployment, latency and energy consumption, and compare against state-of-the-art models obtained with neural architecture search. In addition, we compare under a BitOps constraint with several multi-stage optimized models.
+
+Dataset, Models and Hardware Platform. The experiments are conducted on the ImageNet dataset. We compare the performance of our jointly designed models with mixed-precision models searched by [36, 12, 5] and some SOTA fixed-precision 8-bit models. The platform we use to measure the resource consumption of mixed-precision models is BitFusion [31], a state-of-the-art spatial ASIC design for neural network acceleration. It employs a 2D systolic array of Fusion Units that spatially sum the shifted partial products of two-bit elements from weights and activations.
+
+# 5.1. Comparison with SOTA Efficient Models
+
+Table 2 presents the results under different efficiency constraints. As one can see, our models consistently outperform state-of-the-art models with either fixed or mixed precision. Specifically, our small model (Ours-B) achieves a $2.2\%$ accuracy boost over the mixed-precision MobileNetV2
+
+
+Figure 5. Comparison with sequentially designed mixed-precision models searched by AMC and HAQ [5, 12, 36] under latency constraints. Our jointly designed models achieve better accuracy than the sequentially designed models under the same constraints.
+
+
+Figure 6. Comparison with quantized models under a BitOps constraint. The ResNet-34 baselines use $2/3/4$-bit weights and activations. Our model achieves a $0.5\%$ accuracy boost (from $74.6\%$ to $75.1\%$) over models searched by single-path one-shot while occupying half of the BitOps. Also, the accuracy of our model matches the 8-bit ResNet-34 ($75.0\%$) while saving $8 \times$ BitOps.
+
+searched by HAQ (from $71.9\%$ to $74.1\%$); our large model (Ours-C) attains better accuracy (from $74.6\%$ to $75.1\%$) while requiring only half of the BitOps. Applying the predictor-transfer technique further helps the model achieve better performance (from $72.1\%$ to $74.1\%$). It is also notable that the marginal cost in cloud compute and $\mathrm{CO}_{2}$ emission is two orders of magnitude smaller than in other works.
+
+# 5.2. Effectiveness of Joint Design
+
+Comparison with MobileNetV2+HAQ. Figure 4 shows the results on the BitFusion platform under different latency and energy constraints. Our jointly designed models consistently outperform both mixed-precision and fixed-precision SOTA models under these constraints. It is notable that when the constraint is tight, our models show
+
+
+Figure 7. Illustration of performance with and without the predictor-transfer technique. Pairwise accuracy measures how well the predictor ranks the relative quality of two architectures. The left graph shows that with transfer the quantization-aware predictor converges faster and to a higher accuracy. The right graph shows that when data is limited, the predictor-transfer technique largely improves the pairwise accuracy (from $64.6\%$ to $75.6\%$). With predictor transfer, we achieve $85\%$ pairwise accuracy using fewer than 3k data points, while at least 4k data points are required without it.
+
+
+
+a significant improvement over state-of-the-art mixed-precision models. Specifically, under similar efficiency constraints, we improve the ImageNet top-1 accuracy from the MobileNetV2 baseline of $61.4\%$ to $71.9\%$ $(+10.5\%)$ and $72.7\%$ $(+11.3\%)$ for the latency and energy constraints, respectively. Moreover, we show some models searched by our quantization-aware predictor without the predictor-transfer technique. With this technique applied, the accuracy consistently improves, since the non-transferred predictor may lose some of the mutual information between architecture and quantization policy.
+
+Comparison with Multi-Stage Optimized Models. Figure 5 compares multi-stage optimization with our joint optimization results. As one can see, under the same latency/energy constraint, our model attains better accuracy than the multi-stage optimized model ($74.1\%$ vs. $71.8\%$). This is reasonable, since per-stage optimization might not find the globally optimal model as joint design does.
+
+Comparison under Limited BitOps. Figure 6 reports the results under a limited BitOps budget. As one can see, under a tight BitOps constraint, our model improves accuracy by over $2\%$ (from $71.5\%$ to $73.9\%$) compared with the model searched using [8]. Moreover, our model achieves the same level of accuracy ($75.1\%$) as the 8-bit ResNet-34 model while saving $8 \times$ BitOps.
+
+# 5.3. Effectiveness of Predictor-Transfer
+
+Figure 7 shows the performance of our predictor-transfer technique compared with training from scratch. For each setting, we train the predictor to convergence and evaluate the pairwise accuracy (i.e., the proportion of pairs of randomly selected candidates from a held-out dataset for which the predictor correctly identifies the better one), which is a measurement of the predictor's performance (a sketch of this metric is given below). We use the same test set with 2000 $\langle$NN architecture, ImageNet accuracy$\rangle$ pairs, generated by randomly choosing the network architecture and quantization policy. For training with $N$ data points, the two kinds of data mentioned in Sec. 4 are used in equal proportion, i.e., $N/2$ each. As shown, the transferred predictor converges faster and to a higher pairwise accuracy. Also, when data is very limited, our method attains more than $10\%$ higher pairwise accuracy than training from scratch.
+
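+A small sketch of this pairwise accuracy metric as described above (names and toy numbers are illustrative):
+
+```python
+import random
+
+def pairwise_accuracy(predicted, actual, n_pairs=10000, seed=0):
+    """Fraction of random candidate pairs whose relative order the predictor gets right."""
+    rng = random.Random(seed)
+    correct = 0
+    for _ in range(n_pairs):
+        i, j = rng.sample(range(len(actual)), 2)
+        if (predicted[i] - predicted[j]) * (actual[i] - actual[j]) > 0:
+            correct += 1
+    return correct / n_pairs
+
+# Toy usage with made-up accuracies:
+actual = [71.2, 73.5, 69.8, 74.1]
+predicted = [70.9, 73.0, 70.1, 74.5]
+print(pairwise_accuracy(predicted, actual, n_pairs=1000))
+```
+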
+# 6. Conclusion
+
+We propose $APQ$, a joint design method for architecting mixed-precision models. Unlike former works that decouple the problem into separate stages, we directly search for the optimal mixed-precision architecture without multi-stage optimization. We use a predictor-based method that requires no extra evaluation on the target dataset, which greatly saves the GPU hours needed for searching in a new deployment scenario, thus reducing the marginal $\mathrm{CO}_{2}$ emission and cloud compute cost. To tackle the high expense of data collection, we propose the predictor-transfer technique to make up for the limited data. Comparisons with state-of-the-art models show the necessity of joint optimization and the superiority of our joint design method.
+
+# Acknowledgments
+
+We thank NSF Career Award #1943349, MIT-IBM Watson AI Lab, Samsung, SONY, AWS Machine Learning Research Award for supporting this research.
+
+# References
+
+[1] Sajid Anwar and Wonyong Sung. Compact deep convolutional neural networks with coarse pruning, 2016. 2
+[2] Han Cai, Tianyao Chen, Weinan Zhang, Yong Yu, and Jun Wang. Efficient architecture search by network transformation. In AAAI, 2018. 1
+[3] Han Cai, Chuang Gan, and Song Han. Once for all: Train one network and specialize it for efficient deployment, 2019. 4
+[4] Han Cai, Jiacheng Yang, Weinan Zhang, Song Han, and Yong Yu. Path-level network transformation for efficient architecture search. In ICML, 2018. 1
+[5] Han Cai, Ligeng Zhu, and Song Han. ProxylessNAS: Direct neural architecture search on target task and hardware. In ICLR, 2019. 1, 2, 3, 5, 7
+[6] Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1. arXiv, 2016. 3
+[7] Xiaoliang Dai, Peizhao Zhang, Bichen Wu, Hongxu Yin, Fei Sun, Yanghan Wang, Marat Dukhan, Yunqing Hu, Yiming Wu, Yangqing Jia, et al. Chamnet: Towards efficient network design through platform-aware model adaptation. CVPR, 2019. 2, 3
+[8] Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. arXiv preprint arXiv:1904.00420, 2019. 2, 3, 4, 6, 7, 8
+[9] Song Han, Han Cai, Ligeng Zhu, Ji Lin, Kuan Wang, Zhijian Liu, and Yujun Lin. Design automation for efficient deep learning computing. arXiv preprint arXiv:1904.10616, 2019. 1
+[10] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. In ICLR, 2016. 1, 2, 3
+[11] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In NeurIPS, 2015. 2
+[12] Yihui He, Ji Lin, Zhijian Liu, Hanrui Wang, Li-Jia Li, and Song Han. Amc: Automl for model compression and acceleration on mobile devices. In ECCV, 2018. 1, 3, 7
+[13] Yihui He, Xiangyu Zhang, and Jian Sun. Channel pruning for accelerating very deep neural networks. ICCV, 2017. 2
+[14] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017. 2
+[15] Hengyuan Hu, Rui Peng, Yu-Wing Tai, and Chi-Keung Tang. Network trimming: A data-driven neuron pruning approach towards efficient deep architectures, 2016. 2
+[16] Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew G Howard, Hartwig Adam, and Dmitry Kalenichenko. Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference. In CVPR, 2018. 3
+
+[17] Ji Lin, Yongming Rao, Jiwen Lu, and Jie Zhou. Runtime Neural Pruning. In NIPS, 2017. 2
+[18] Chenxi Liu, Barret Zoph, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive neural architecture search. In ECCV, 2018. 1, 2
+[19] Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, and Koray Kavukcuoglu. Hierarchical representations for efficient architecture search. In ICLR, 2018. 1
+[20] Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. In ICLR, 2019. 2, 5
+[21] Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang. Learning efficient convolutional networks through network slimming. In ICCV, 2017. 2
+[22] Zechun Liu, Haoyuan Mu, Xiangyu Zhang, Zichao Guo, Xin Yang, Tim Kwang-Ting Cheng, and Jian Sun. Metapruning: Meta learning for automatic neural network channel pruning. In ICCV, 2019. 4
+[23] Renqian Luo, Fei Tian, Tao Qin, and Tie-Yan Liu. Neural architecture optimization. In NeurIPS, 2018. 2
+[24] Hongzi Mao, Parimarjan Negi, Akshay Narayan, Hanrui Wang, Jiacheng Yang, Haonan Wang, Ryan Marcus, Mehrdad Khani Shirkoohi, Songtao He, Vikram Nathan, et al. Park: An open platform for learning-augmented computer systems. In Advances in Neural Information Processing Systems, pages 2490-2502, 2019. 2
+[25] Pavlo Molchanov, Stephen Tyree, Tero Karras, Timo Aila, and Jan Kautz. Pruning convolutional neural networks for resource efficient inference, 2016. 2
+[26] Subhankar Pal, Jonathan Beaumont, Dong-Hyeon Park, Aporva Amarnath, Siying Feng, Chaitali Chakrabarti, Hun-Seok Kim, David Blaauw, Trevor Mudge, and Ronald Dreslinski. Outerspace: An outer product based sparse matrix multiplication accelerator. In 2018 IEEE International Symposium on High Performance Computer Architecture (HPCA), pages 724-736. IEEE, 2018. 2
+[27] A. Polyak and L. Wolf. Channel-level acceleration of deep face representations. IEEE Access, 2015. 2
+[28] Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net - ImageNet Classification Using Binary Convolutional Neural Networks. In ECCV, 2016. 3
+[29] Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V Le. Regularized evolution for image classifier architecture search. In AAAI, 2019. 2
+[30] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zh-moginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In CVPR, 2018. 1, 2, 5
+[31] Hardik Sharma, Jongse Park, Naveen Suda, Liangzhen Lai, Benson Chau, Vikas Chandra, and Hadi Esmaeilzadeh. Bit fusion: Bit-level dynamically composable architecture for accelerating deep neural network. ISCA, Jun 2018. 7
+[32] Emma Strubell, Ananya Ganesh, and Andrew McCallum. Energy and policy considerations for deep learning in nlp. ACL, 2019. 1, 6
+
+[33] Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, and Quoc V Le. Mnasnet: Platform-aware neural architecture search for mobile. In CVPR, 2019. 3
+[34] Hanrui Wang, Kuan Wang, Jiacheng Yang, Linxiao Shen, Nan Sun, Hae-Seung Lee, and Song Han. Tts: Transferable transistor sizing with graph neural networks and reinforcement learning. In ACM/IEEE 57th Design Automation Conference (DAC), 2020. 2
+[35] Hanrui Wang, Jiacheng Yang, Hae-Seung Lee, and Song Han. Learning to design circuits. In NeurIPS 2018 Machine Learning for Systems Workshop, 2018. 2
+[36] Kuan Wang, Zhijian Liu, Yujun Lin, Ji Lin, and Song Han. Haq: Hardware-aware automated quantization. In CVPR, 2019. 1, 3, 6, 7
+[37] Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, and Kurt Keutzer. Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search. In CVPR, 2019. 3, 5
+[38] Bichen Wu, Yanghan Wang, Peizhao Zhang, Yuandong Tian, Peter Vajda, and Kurt Keutzer. Mixed precision quantization of convnets via differentiable neural architecture search. arXiv preprint, 2018. 6
+[39] Tien-Ju Yang, Andrew Howard, Bo Chen, Xiao Zhang, Alec Go, Mark Sandler, Vivienne Sze, and Hartwig Adam. Netadapt: Platform-aware neural network adaptation for mobile applications. Lecture Notes in Computer Science, 2018. 1, 3
+[40] Zhekai Zhang, Hanrui Wang, Song Han, and William J. Dally. Sparch: Efficient architecture for sparse matrix multiplication. In 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE, 2020. 2
+[41] Aojun Zhou, Anbang Yao, Kuan Wang, and Yurong Chen. Explicit loss-error-aware quantization for low-bit deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9426-9435, 2018. 3
+[42] Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. CoRR, abs/1606.06160, 2016. 3
+[43] Chenzhuo Zhu, Song Han, Huizi Mao, and William J Dally. Trained ternary quantization. In ICLR, 2017. 3
+[44] Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. In ICLR, 2017. 1, 4
+[45] Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable architectures for scalable image recognition. In CVPR, 2018. 1, 2
\ No newline at end of file
diff --git a/apqjointsearchfornetworkarchitecturepruningandquantizationpolicy/images.zip b/apqjointsearchfornetworkarchitecturepruningandquantizationpolicy/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..ffc3f9b17b0703e35fa2f93b2a20e4ae3a1df3c2
--- /dev/null
+++ b/apqjointsearchfornetworkarchitecturepruningandquantizationpolicy/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:22e5a9d7e64565a901109c386965270f31bdf63ee9d67ee6483b22dd49243e4d
+size 487635
diff --git a/apqjointsearchfornetworkarchitecturepruningandquantizationpolicy/layout.json b/apqjointsearchfornetworkarchitecturepruningandquantizationpolicy/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a76a3c10eb0fa77af9660041d400ac34c6168a70
--- /dev/null
+++ b/apqjointsearchfornetworkarchitecturepruningandquantizationpolicy/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:379480b0e308f9f89929b08af52a119d0ce4d301139a8042e4ebe0c4a6171328
+size 372111
diff --git a/archanimatablereconstructionofclothedhumans/99f33670-f2a3-43c5-b6e8-5d7105dfb332_content_list.json b/archanimatablereconstructionofclothedhumans/99f33670-f2a3-43c5-b6e8-5d7105dfb332_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..41deb92a8085010a2d1f161ab1fc0256fbe3a55e
--- /dev/null
+++ b/archanimatablereconstructionofclothedhumans/99f33670-f2a3-43c5-b6e8-5d7105dfb332_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dba4808fc46ac8d8cdb72f3ff589bbe66188c4914451fc7dfd0109bd0694627f
+size 83269
diff --git a/archanimatablereconstructionofclothedhumans/99f33670-f2a3-43c5-b6e8-5d7105dfb332_model.json b/archanimatablereconstructionofclothedhumans/99f33670-f2a3-43c5-b6e8-5d7105dfb332_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..bde8cc65049b88e0dda9247a979301ac00f47024
--- /dev/null
+++ b/archanimatablereconstructionofclothedhumans/99f33670-f2a3-43c5-b6e8-5d7105dfb332_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:27608162cbdfb72420912b749d91611e4bc2bd7f7f147eadd48b3f715b2e0abb
+size 106200
diff --git a/archanimatablereconstructionofclothedhumans/99f33670-f2a3-43c5-b6e8-5d7105dfb332_origin.pdf b/archanimatablereconstructionofclothedhumans/99f33670-f2a3-43c5-b6e8-5d7105dfb332_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..83e97f5dbce9f6cbe59a97c41bb0bd075c0dd93b
--- /dev/null
+++ b/archanimatablereconstructionofclothedhumans/99f33670-f2a3-43c5-b6e8-5d7105dfb332_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d9471020c0eb602831d7988bf9f2c5dce7cdf74010363d3f68213fb9ad4a0d1e
+size 4911884
diff --git a/archanimatablereconstructionofclothedhumans/full.md b/archanimatablereconstructionofclothedhumans/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..9dc7f26050b13c383db3125f4b9eab6900215e73
--- /dev/null
+++ b/archanimatablereconstructionofclothedhumans/full.md
@@ -0,0 +1,380 @@
+# ARCH: Animatable Reconstruction of Clothed Humans
+
+Zeng Huang $^{1,2*}$ , Yuanlu Xu $^{1}$ , Christoph Lassner $^{1}$ , Hao Li $^{2}$ , Tony Tung $^{1}$ $^{1}$ Facebook Reality Labs, Sausalito, USA $^{2}$ University of Southern California, USA
+zenghuan@usc.edu, merayxu@gmail.com, classner@fb.com, hao@hao-li.com, tony.tung@fb.com
+
+# Abstract
+
+In this paper, we propose ARCH (Animatable Reconstruction of Clothed Humans), a novel end-to-end framework for accurate reconstruction of animation-ready 3D clothed humans from a monocular image. Existing approaches to digitize 3D humans struggle to handle pose variations and recover details. Also, they do not produce models that are animation ready. In contrast, ARCH is a learned pose-aware model that produces detailed 3D rigged full-body human avatars from a single unconstrained RGB image. A Semantic Space and a Semantic Deformation Field are created using a parametric 3D body estimator. They allow the transformation of 2D/3D clothed humans into a canonical space, reducing ambiguities in geometry caused by pose variations and occlusions in training data. Detailed surface geometry and appearance are learned using an implicit function representation with spatial local features. Furthermore, we propose additional per-pixel supervision on the 3D reconstruction using opacity-aware differentiable rendering. Our experiments indicate that ARCH increases the fidelity of the reconstructed humans. We obtain more than $50\%$ lower reconstruction errors for standard metrics compared to state-of-the-art methods on public datasets. We also show numerous qualitative examples of animated, high-quality reconstructed avatars unseen in the literature so far.
+
+# 1. Introduction
+
+3D human reconstruction has been explored for several decades in the field of computer vision and computer graphics. Accurate methods based on stereo or fusion have been proposed using various types of sensors [12, 42, 31, 33, 38, 49, 50], and several applications have become popular in sports, medicine and entertainment (e.g., movies, games, AR/VR experiences). However, these setups require tightly controlled environments. To date, full 3D human reconstruction with detailed geometry and appearance from in-the-wild pictures is still challenging (i.e., taken in natural conditions as opposed to laboratory environments).
+
+
+Figure 1. Given an image of a subject in arbitrary pose (left), ARCH creates an accurate and animatable avatar with detailed clothing (center). As rigging and albedo are estimated, the avatar can be reposed and relit in new environments (right).
+
+Moreover, the lack of automatic rigging prevents animation-based applications.
+
+Recent computer vision models have enabled the recovery of 2D and 3D human pose and shape estimation from a single image. However, they usually rely on representations that have limitations: (1) skeletons [11] are kinematic structures that are accurate to represent 3D poses, but do not carry body shape information. (2) surface meshes [18, 35, 51] can represent body shape geometry, but have topology constraints; (3) voxels [44] are topology-free, but memory costly with limited resolution, and need to be rigged for animation. In this paper, we propose the ARCH (Animatable Reconstruction of Clothed Humans) framework that possesses all benefits of current representations. In particular, we introduce a learned model that has human body structure knowledge (i.e., body part semantics), and is trained with humans in arbitrary poses.
+
+First, 3D body pose and shape estimation can be inferred from a single image of a human in arbitrary pose by a prediction model [51]. This initialization step is used for normalized-pose reconstruction of clothed human shape within a canonical space. This allows us to define a Semantic Space (SemS) and a Semantic Deformation Field (SemDF) by densely sampling 3D points around the clothed
+
+
+Figure 2. ARCH overview. The framework contains three components: i) estimation of correspondences between an input image space and the canonical space, ii) implicit surface reconstruction in the canonical space from surface occupancy, normal and color estimation, iii) refinement of normal and color through differentiable rendering.
+
+body surface and assigning skinning weights. We then learn an implicit function representation of the 3D occupancy in the canonical space based on SemS and SemDF, which enables the reconstruction of high-frequency details of the surface (including clothing wrinkles, hair style, etc.) superior to the state of the art [32, 40, 44]. The surface representing a clothed human in a neutral pose is implicitly rigged in order to be used as an animatable avatar. Moreover, a differentiable renderer is used to refine normal and color information for each 3D point in space by Granular Render-and-Compare. Here, we regard them as a sphere and develop a new blending formulation based on the estimated occupancy. See Fig. 2 for an overview of the framework.
+
+In our experiments, we evaluate ARCH on the task of 3D human reconstruction from a single image. Both quantitative and qualitative experimental results show ARCH outperforms state-of-the-art body reconstruction methods on public 3D scan benchmarks and in-the-wild 2D images. We also show that our reconstructed clothed humans can be animated by motion capture data, demonstrating the potential applications for human digitization for animation.
+
+Contributions. The main contributions are threefold: 1) we introduce the Semantic Space (SemS) and Semantic Deformation Field (SemDF) to handle implicit function representation of clothed humans in arbitrary poses, 2) we propose opacity-aware differentiable rendering to refine our human representation via Granular Render-and-Compare, and 3) we demonstrate how reconstructed avatars can directly be rigged and skinned for animation. In addition, we learn per-pixel normals to obtain high-quality surface details, and surface albedo for relighting applications.
+
+# 2. Related Work
+
+3D clothed human reconstruction focuses on the task of reconstructing 3D humans with clothes. There are multiple attempts to solve this task with video inputs [2, 3, 37, 1, 52], RGB-D data [53, 56] and in multi-view settings [5, 13, 14, 45, 46, 47, 48, 6]. Though richer inputs clearly provide more information than single images, the resulting pipelines place more constraints on the hardware and incur additional time costs at deployment. Recently, some progress [7, 15, 18, 20, 21, 23, 41, 51, 54] has been made in estimating parametric human bodies from a single RGB image, yet it remains under-explored to what extent 3D clothing details can be reconstructed from such inputs. In recent work [22, 24, 4], the authors learn to generate surface geometry details and appearance using 2D UV maps. While details can be learned, these methods cannot reconstruct loose clothing (e.g., dresses) or recover complex shapes such as hair or fine structures (e.g., shoe heels). Given the diverse topology of clothing, volumetric reconstruction has great benefits in this scenario. For example, BodyNet [44] takes a person image as input and learns to reconstruct voxels of the person with additional supervision through body priors (e.g., 2D pose, 3D pose, part masks), while PIFu [40] assumes no body prior and learns an implicit surface function based on pixel-aligned image features, leading to more clothing details but less robustness against pose variations.
+
+In this paper, we incorporate body prior knowledge to transform people in arbitrary poses to the canonical space, and then learn to reconstruct an implicit representation.
+
+Differentiable rendering makes the rendering operation differentiable and uses it to optimize parameters of the scene representation. Existing approaches can be roughly divided into two categories: mesh rasterization based rendering [9, 19, 25, 29, 43] and volume based rendering [16, 26]. For example, OpenDR [29] and Neural Mesh Renderer [19] manually define approximated gradients of the rendering operation to move the faces. SoftRasterizer [25] and DIB-R [9], in contrast, redefine the rasterization as a continuous and differentiable function, allowing gradients to be computed automatically. For volume-based differentiable rendering, [16] represents each 3D point as a multivariate Gaussian and performs occlusion reasoning with grid discretization and ray tracing. Such methods require an explicit volume to perform occlusion reasoning. [26] develops differentiable rendering for implicit surface representations with a focus on reconstructing rigid objects.
+
+In contrast, we use a continuous rendering function as in [25], but revisit it to handle opacity, and we use geometric primitives at points of interest and optimize their properties.
+
+# 3. Proposed Framework
+
+ARCH contains three components, after 3D body estimation by [51] (see Fig. 2): pose-normalization using Semantic Space (SemS) and Semantic Deformation Field (SemDF), implicit surface reconstruction, and refinement using a differentiable renderer by Granular Render-and-Compare (see Sec. 3.4).
+
+# 3.1. Semantic Space and Deformation Field
+
+Our goal is to transform an arbitrary (deformable) object into a canonical space where the object is in a predefined rest pose. To do so, we introduce two concepts: the Semantic Space (SemS) and the Semantic Deformation Field (SemDF). SemS $S = \{(p, s_p) : p \in \mathbb{R}^3\}$ is a space consisting of 3D points, where each point $p \in S$ is associated to semantic information $s_p$ enabling the transformation operation. SemDF is a vector field represented by a vector-valued function $\nu$ that accomplishes the transformation.
+
+In computer vision and graphics, 3D human models have been widely represented by a kinematic structure mimicking the anatomy that serves to control the pose, and a surface mesh that represents the human shape and geometry. Skinning is the transformation that deforms the surface given the pose. It is parameterized by skinning weights that individually influence body part transformations [28]. In ARCH, we define SemS in a similar form, with skinning weights.
+
+Assuming a skinned body template model $T$ in a normalized A-pose (i.e., the rest pose), its associated skeleton in the canonical space, and skinning weights $W$ , SemS is then
+
+$$
+S = \left\{\left(p, \left\{w _ {i, p} \right\} _ {i = 1} ^ {N _ {K}}\right): p \in \mathbb {R} ^ {3} \right\}, \tag {1}
+$$
+
+where each point $p$ is associated to a collection of skinning weights $\{w_{i,p}\}$ defined with respect to $N_K$ body parts (e.g., skeleton bones). In this paper, we approximate $\{w_{i,p}\}$ by retrieving the closest point $p'$ on the template surface to $p$ and assigning the corresponding skinning weights from $W$ . In practice, we set a distance threshold to cut off points that are too far away from $T$ .
+
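+A minimal sketch of this nearest-neighbor assignment of skinning weights, assuming the template vertices, their weights $W$ and the query points are given as NumPy arrays (SciPy's cKDTree is our choice for the lookup, not necessarily the authors'):
+
+```python
+import numpy as np
+from scipy.spatial import cKDTree
+
+def assign_skinning_weights(points, template_verts, W, max_dist=0.1):
+    """Copy the skinning weights of the closest template vertex to each query point.
+
+    points:         (P, 3) query points in the canonical space
+    template_verts: (N_V, 3) template mesh vertices
+    W:              (N_V, N_K) per-vertex skinning weights
+    max_dist:       points farther than this from the template are discarded
+    """
+    tree = cKDTree(template_verts)
+    dist, idx = tree.query(points)   # nearest template vertex for every point
+    valid = dist <= max_dist         # cut off points that are too far away
+    return points[valid], W[idx[valid]]
+```
+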
+In ARCH, SemDF actually performs an inverse-skinning transformation, putting a human in an arbitrary pose into its normalized pose in the canonical space. This extends standard skinning (e.g., Linear Blend Skinning or LBS [28]), which is applied to structured objects, to the arbitrary 3D space and enables transforming an entire space in arbitrary poses to the canonical space, as every point $p'$ can be expressed as a linear combination of points $p$ with skinning weights $\{w_{i,p}\}$.
+
+Following LBS, the canonical space of human body is tied to a skeletal rig. The state of the rig is described by relative rotations $R = \{r_i\}_{i=1}^{N_K}$ of all skeleton joints $X = \{x_i\}_{i=1}^{N_K}$ . Every rotation is relative to the orientation of the parent element in a kinematic tree. For a skeleton with $N_K$ body parts, $R \in \mathbb{R}^{3 \times N_K}$ , $X \in \mathbb{R}^{3 \times N_K}$ . Given a body template model $T$ in rest pose with $N_V$ vertices, the LBS function $\mathcal{V}(v_i, X, R; W)$ takes as input the vertices $v_i \in T$ , the joints $X$ , a target pose $R$ , and deforms every $v_i$ to the posed position $v_i'$ with skinning weights $W \in \mathbb{R}^{N_V \times N_K}$ , namely,
+
+$$
+\mathcal {V} (v _ {i}, X, R; W) = \sum_ {k = 1} ^ {N _ {K}} w _ {k, i} G _ {k} (R, X) v _ {i}, \tag {2}
+$$
+
+where $G_{k}(R,X)$ is the rest-pose-corrected affine transformation applied to body part $k$.
+
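+A small sketch of the LBS transformation in Eq. (2), assuming the per-part affine transforms $G_k(R, X)$ have already been composed into 4x4 matrices:
+
+```python
+import numpy as np
+
+def linear_blend_skinning(verts, G, W):
+    """Deform rest-pose vertices with per-part transforms and skinning weights.
+
+    verts: (N_V, 3) rest-pose vertices v_i
+    G:     (N_K, 4, 4) affine transform G_k(R, X) of each body part
+    W:     (N_V, N_K) skinning weights
+    """
+    v_h = np.concatenate([verts, np.ones((len(verts), 1))], axis=1)  # homogeneous coords
+    T = np.einsum("vk,kij->vij", W, G)        # blended transform per vertex
+    posed = np.einsum("vij,vj->vi", T, v_h)   # apply it to every vertex
+    return posed[:, :3]
+```
+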
+# 3.2. Implicit Surface Reconstruction
+
+We use the occupancy map $O$ to implicitly represent the 3D clothed human, i.e.,
+
+$$
+O = \{(p, o _ {p}): p \in \mathbb {R} ^ {3}, 0 \leq o _ {p} \leq 1 \}, \tag {3}
+$$
+
+where $o_p$ denotes the occupancy of a point $p$. To obtain a surface, we can simply threshold the occupancy map $O$ at $\tau$ to obtain the isosurface $O_{\tau}^{\prime}$.
+
+In this paper, we incorporate a human body prior by always reconstructing a neutral-posed shape in the canonical space. Similar to [40], we develop a deep neural network that takes a canonical space point $p$ , its correspondent 2D position $q$ , and the 2D image $I$ as inputs and estimates occupancy $o_p$ , normal $n_p$ , color $c_p$ for $p$ ; that is,
+
+$$
+o _ {p} = \mathcal {F} \left(f _ {p} ^ {s}, I; \theta_ {o}\right),
+$$
+
+$$
+n _ {p} = \mathcal {F} \left(f _ {p} ^ {s}, I, f _ {p} ^ {o}; \theta_ {n}\right),
+$$
+
+$$
+c _ {p} = \mathcal {F} \left(f _ {p} ^ {s}, I, f _ {p} ^ {o}, f _ {p} ^ {n}; \theta_ {c}\right), \tag {4}
+$$
+
+$$
+f _ {p} ^ {s} \in \mathbb {R} ^ {1 7 1}, f _ {p} ^ {o} \in \mathbb {R} ^ {2 5 6}, f _ {p} ^ {n} \in \mathbb {R} ^ {6 4}, f _ {p} ^ {c} \in \mathbb {R} ^ {6 4},
+$$
+
+where $\theta_o$, $\theta_n$ and $\theta_c$ denote the occupancy, normal and color sub-network weights, and $f_p^s$ is the spatial feature extracted based on SemS. We use the 57 canonical body landmarks estimated by [51] and compute the Radial Basis Function (RBF) distance between $p$ and the $i$-th landmark $p_i'$, that is
+
+$$
+f _ {p} ^ {s} (i) = \exp \{- \mathcal {D} \left(p, p _ {i} ^ {\prime}\right) \}, \tag {5}
+$$
+
+where $\mathcal{D}(\cdot)$ is the Euclidean distance. We also evaluate the effects of different types of spatial features in Sec. 4.3. $f_{p}^{o}$ and $f_{p}^{n}$ are the feature maps extracted from the occupancy and normal sub-networks, respectively (see also Fig. 2). The three sub-networks are defined as follows:
+
+The Occupancy sub-network uses a Stacked Hourglass (SHG) [34] as the image feature encoder and a Multi-Layer Perceptron (MLP) as the regressor. Given a $512 \times 512$ input image $I$ , the SHG produces a feature map $f \in \mathbb{R}^{512 \times 512 \times 256}$ with the same grid size. For each 3D point $p$ , we consider the feature located at the corresponding projected pixel $q$ as its visual feature descriptor $f_{p}^{o} \in \mathbb{R}^{256}$ . For points that do not align onto the grid, we apply bi-linear interpolation on the feature map to obtain the feature at that pixel-aligned location. The MLP takes the spatial feature of the 3D point $p \in \mathbb{R}^3$ and the pixel-aligned image features $f_p^o \in \mathbb{R}^{256}$ as inputs and estimates the occupancy $o_p \in [0,1]$ by classifying whether this point lies inside the clothed body or not.
+
+The Normal sub-network uses a U-net [39] as the image feature encoder and a MLP which takes the spatial feature, and feature descriptors $f_{p}^{n} \in \mathbb{R}^{64}$ and $f_{p}^{o} \in \mathbb{R}^{256}$ from its own backbone and from the occupancy sub-network as inputs and estimates the normal vector $n_{p}$ .
+
+The Color sub-network also uses a U-net [39] as the image feature encoder and a MLP which takes the spatial feature, and feature descriptors $f_{p}^{c} \in \mathbb{R}^{64}$ , $f_{p}^{n} \in \mathbb{R}^{64}$ and $f_{p}^{o} \in \mathbb{R}^{256}$ from its own backbone, as well as the normal and occupancy sub-networks as inputs and estimates the color $c_{p}$ in RGB space.
+
+For each sub-network, the MLP takes the pixel-aligned image features and the spatial features (as described in Sec. 3.1) as inputs, with (1024, 512, 256, 128) hidden neurons. Similar to [40], each layer of the MLP has skip connections from the input features. For the occupancy sub-network, the MLP estimates a one-dimensional occupancy $o_p \in [0,1]$ using a Sigmoid activation. For the normal sub-network, the MLP estimates a three-dimensional normal $n_p \in [0,1]^3$ with $\| n_p \|_2 = 1$ using L2 normalization. For the color sub-network, the MLP estimates a three-dimensional color $c_p \in [0,1]^3$ using range clamping.
+
+# 3.3. Training
+
+During training, we optimize the parameters of all three sub-models, i.e., the occupancy, normal and color models. We define the training in three separate loops to train each part with the appropriate losses and avoid computational bottlenecks. The total loss function is defined as
+
+$$
+\mathcal {L} = \mathcal {L} _ {3 d} ^ {o} + \mathcal {L} _ {3 d} ^ {n} + \mathcal {L} _ {3 d} ^ {c} + \mathcal {L} _ {2 d} ^ {n} + \mathcal {L} _ {2 d} ^ {c}, \tag {6}
+$$
+
+where $\mathcal{L}_{3d}^{o}$ is the 3D loss for occupancy network, $\mathcal{L}_{3d}^{n}$ and $\mathcal{L}_{2d}^{n}$ are the 3D and 2D losses for normal network, and $\mathcal{L}_{3d}^{c}$ and $\mathcal{L}_{2d}^{c}$ are the 3D and 2D losses for color network. For every training iteration, we perform the following three optimizations.
+
+Occupancy. We use the available ground truth to train the occupancy prediction model in a direct and supervised way. First, we sample 20 480 points in the canonical space. They are sampled around the template mesh according to a normal distribution with a standard deviation of $5\mathrm{cm}$. This turned out to cover the various body shapes and clothing well in our experiments, but can be selected according to the data distribution at hand. These points are then processed by the occupancy model, providing us with an estimated occupancy value for every sampled point. We use a sigmoid function on these values to normalize the network output to the interval $[0,1]$, where we select 0.5 as the position of the isosurface. 0.5 is the position where the derivative of the sigmoid function is the highest and where we expect to optimize the surface prediction best. The loss $\mathcal{L}_{3d}^{o}$ is defined as the Huber loss comparing the occupancy prediction with the ground truth. Similar to [36], we found a loss function less aggressive than the squared error better suited for the optimization, but found the quadratic behavior of the Huber loss around zero to be beneficial.
+
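+A sketch of this sampling and loss step in PyTorch; the occupancy network and the ground-truth occupancy query are hypothetical placeholders, and the image-feature inputs are omitted for brevity.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def occupancy_training_step(occupancy_net, surface_points, gt_occupancy_fn,
+                            n_samples=20480, sigma=0.05):
+    """Sample points around the template surface (sigma = 5 cm in meters),
+    predict their occupancy and apply a Huber-style loss against ground truth."""
+    idx = torch.randint(0, surface_points.shape[0], (n_samples,))
+    points = surface_points[idx] + sigma * torch.randn(n_samples, 3)
+
+    pred = torch.sigmoid(occupancy_net(points))   # normalize the output to [0, 1]
+    target = gt_occupancy_fn(points)              # 1 inside the clothed body, 0 outside
+    return F.smooth_l1_loss(pred, target)         # Huber-like loss around the 0.5 isosurface
+```
+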
+Normals and colors for surface points. Colors and normals can be optimized directly from the ground truth mesh for points that lie on its surface. To use this strong supervision signal we introduce a dedicated training stage. In this stage, we sample points only from the mesh surface and push them through the color and normal models. In our setup, we use 51 200 point samples per model per training step. The loss terms $\mathcal{L}_{3d}^{n}$ and $\mathcal{L}_{3d}^{c}$ are defined as the $L1$ loss comparing the predicted normals and colors with the ground truth across all surface points. The occupancy predictions are kept unchanged.
+
+Normals and colors for points not on the surface. For points not on the mesh surface, it is not clear how the ground truth information can be used in the best way to improve the prediction without an additional mapping. In a third step for the training, we sample another set of 51 200 points, and push them through the occupancy, color and normal models and use a differentiable renderer on the prediction. We render the image using the occupancy information as opacity, and by using the color channels to represent colors or normals and use the gradients to update the predicted values. $\mathcal{L}_{2d}^n$ and $\mathcal{L}_{2d}^c$ are defined as the per-pixel L1 loss between the rendered image and the ground truth. For details on this step, see Fig. 3 and the following Sec. 3.4.
+
+# 3.4. Granular Render-and-Compare
+
+The prediction from the model is an implicit function representation. By sampling points in a predefined volume and optimizing $\mathcal{L}_{3d}^{o}$ , $\mathcal{L}_{3d}^{n}$ and $\mathcal{L}_{3d}^{c}$ , we can optimize the occupancy, normal and color at these points directly given 3D ground truth. However, it is not clear what the gradients should be for points that are located not directly on the surface of the ground truth mesh. To address this problem, we propose to use a differentiable renderer.
+
+We first create an explicit geometric representation of the scene at hand. For every sample point to optimize, we place a geometric primitive with a spatial extent at its position. To be independent of the viewpoint, we choose this primitive to be a sphere with a $1\mathrm{cm}$ radius for every sampled point (for an overview of the differentiable rendering loss computation, see Fig. 3). During training, every scene to render contains 51 200 spheres.
+
+We then define a differentiable rendering function [25] to project the spheres onto the image plane so that we can perform pixel-level comparisons with the projected ground truth. We use a linear combination with a weight $w_{j}^{i}$ to associate the color contribution from point $p_i$ to the pixel $q_j$ . Having the color $c_{i}$ and normal $n_i$ for point $p_i$ , the color and normal for pixel $q_j$ are calculated as the weighted linear
+
+
+Figure 3. Illustration of the loss computation through differentiable rendering. From left to right: points are sampled according to a Gaussian distribution around our template mesh in the canonical space. They are transformed with the estimated Semantic Deformation Field and processed by the model. The model provides estimations of occupancy, normal and color for each 3D point. We use a differentiable renderer to project those points onto a new camera view and calculate pixel-wise differences to the rendered ground truth.
+
+combination of point values $\sum_{i}w_{j}^{i}c_{i}$ and $\sum_{i}w_{j}^{i}n_{i}$ .
+
+We define $w_{j}^{i}$ considering two factors: the depth of the sphere for point $p_i$ at pixel $q_j$ , $z_{j}^{i}$ , and the proximity of the projected surface of the sphere for point $p_i$ to pixel $q_j$ , $d_{j}^{i}$ . To make occlusion possible, the depth needs to have a strong effect on the resulting weight. Hence, [25] defines the weight as
+
+$$
+w _ {j} ^ {i} = \frac {d _ {j} ^ {i} \exp \left(z _ {j} ^ {i} / \gamma\right)}{\sum_ {k} d _ {j} ^ {k} \exp \left(z _ {j} ^ {k} / \gamma\right) + \exp (\epsilon / \gamma)} \tag {7}
+$$
+
+with $\epsilon$ being a small numerical constant. With this definition, the proximity has linear influence on the resulting weight while the depth has exponential influence. The impact ratio is controlled by the scaling factor $\gamma$ , which we fix to $1 \times 10^{-5}$ in our experiments.
+
+In contrast to [25] we also need to use an opacity $\alpha_{i}$ per sphere for rendering. We tie this opacity value $\alpha_{i}$ directly to the predicted occupancy value through linear scaling and shifting. To stay with the formulation of the render function, we integrate $\alpha_{i}$ into the weight formulation in Eqn. 7.
+
+If the opacity is used as a linear factor in this equation, the softmax function will still render spheres with very low opacity over other spheres with a lower depth value. The problem is the exponential function that is applied to the scaled depth values. On the other hand, if an opacity factor is only incorporated into the exponential function, spheres will remain visible in front of the background (their weight factor is still larger than the background factor $\exp (\epsilon /\gamma)$ ). We found a solution by using the opacity value as both a linear scaling factor and an exponential depth scaling factor. This solution turned out to be numerically stable and well suited for optimization with all desired properties. This changes the weight function to the following:
+
+$$
+w _ {j} ^ {i} = \frac {\alpha^ {i} d _ {j} ^ {i} \exp \left(\alpha^ {i} z _ {j} ^ {i} / \gamma\right)}{\sum_ {k} \alpha^ {k} d _ {j} ^ {k} \exp \left(\alpha^ {k} z _ {j} ^ {k} / \gamma\right) + \exp (\epsilon / \gamma)}. \tag {8}
+$$
+
+Using this formulation, we optimize the color channel values $c_{i}$ and normal values $n_i$ per point. A per-pixel L1 loss is computed between the rendering and a rendering of the ground truth data and back-propagated through the model. For our experiments with $\gamma = 1\times 10^{-5}$ and the depth of the volume, we map the occupancy values that define the isosurface at the value 0.5 to the threshold where $\alpha$ shifts to transparency. We experimentally determined this value to be roughly 0.7.
+
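+A small NumPy sketch of this opacity-aware weighting and the per-pixel blending; the inputs are assumed to be precomputed for a single pixel, and the stabilized log-space evaluation is our own choice for the sketch.
+
+```python
+import numpy as np
+
+def pixel_weights(d, z, alpha, gamma=1e-5, eps=1e-7):
+    """Opacity-aware soft weights of Eq. (8) for one pixel.
+
+    d:     (N,) proximity of each projected sphere to the pixel
+    z:     (N,) normalized depth of each sphere at the pixel (larger = closer)
+    alpha: (N,) per-sphere opacity derived from the predicted occupancy
+    """
+    # Evaluate the softmax-like expression stably by subtracting the largest
+    # exponent; the appended entry is the background term exp(eps / gamma).
+    expo = np.concatenate([alpha * z / gamma, [eps / gamma]])
+    scale = np.concatenate([alpha * d, [1.0]])
+    terms = scale * np.exp(expo - expo.max())
+    return (terms / (terms.sum() + 1e-12))[:-1]
+
+def blend(values, weights):
+    """Weighted linear combination of per-sphere colors or normals."""
+    return (weights[:, None] * values).sum(axis=0)
+```
+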
+# 3.5. Inference
+
+For inference, we take as input a single RGB image representing a human in an arbitrary pose, and run the forward model as described in Sec. 3.2 and Fig. 2. The network outputs a densely sampled occupancy field over the canonical space, from which we use the Marching Cubes algorithm [30] to extract the isosurface at threshold 0.5. The isosurface represents the reconstructed clothed human in the canonical pose. Colors and normals for the whole surface are also inferred by the forward pass and are pixel-aligned to the input image (see Sec. 3.2). The human model can then be transformed to its original pose $R$ by LBS using SemDF and per-point corresponding skinning weights $W$ as defined in Sec. 3.1.
+
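+The isosurface extraction can be sketched with scikit-image's Marching Cubes implementation (the grid resolution, voxel size and function name below are assumptions for illustration):
+
+```python
+import numpy as np
+from skimage import measure
+
+def extract_canonical_mesh(occupancy_grid, level=0.5, voxel_size=0.01):
+    """Extract the 0.5 isosurface of a densely sampled (D, H, W) occupancy grid."""
+    verts, faces, normals, _ = measure.marching_cubes(occupancy_grid, level=level)
+    return verts * voxel_size, faces, normals
+
+# Illustrative usage with a random grid (a real grid comes from the occupancy network):
+grid = np.random.rand(64, 64, 64).astype(np.float32)
+verts, faces, normals = extract_canonical_mesh(grid)
+```
+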
+Furthermore, since the implicit function representation is equipped with skinning weights and skeleton rig, it can naturally be warped to arbitrary poses. The proposed end-to-end framework can then be used to create a detailed 3D avatar that can be animated with unseen sequences from a single unconstrained photo (see Fig. 5).
+
+# 4. Experiments
+
+We present details on ARCH implementation and datasets for training, with results and comparisons to the state of the art.
+
+# 4.1. Implementation Details
+
+ARCH is implemented in PyTorch. We train the neural network model using the RMSprop optimizer with a learning rate starting from 1e-3. The learning rate is updated with an exponential schedule every 3 epochs by multiplying it by a factor of 0.1. We use 582 3D scans to train the model, with 360 views per scan per epoch, resulting in 209 520 training images per epoch. Training the model on an NVIDIA DGX-1 system with one Tesla V100 GPU takes $90\mathrm{h}$ for 9 epochs.
+
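+The optimizer setup described above can be sketched in PyTorch as follows; `model` is a placeholder for the ARCH sub-networks.
+
+```python
+import torch
+
+model = torch.nn.Linear(10, 1)  # placeholder for the ARCH sub-networks
+optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)
+# Multiply the learning rate by 0.1 every 3 epochs, as described above.
+scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1)
+
+for epoch in range(9):
+    # ... one pass over the rendered training images, loss.backward(), ...
+    optimizer.step()
+    scheduler.step()
+```
+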
+
+Figure 4. Illustration of reposing 3D scans to the canonical space. (a) An original 3D scan from the RenderPeople dataset. (b) Automatically detected topology changes. Red marks points with self contacts, blue regions that are also removed before reposing to avoid problems with normals. (c, d) Reposed scan.
+
+# 4.2. Datasets
+
+Our training dataset is composed of 375 3D scans from the RenderPeople $^1$ dataset, and 207 3D scans from the AXYZ $^2$ dataset. The scans are watertight meshes which are mostly free of noise. They represent subjects wearing casual clothes, and potentially holding small objects (e.g., mobile phones, books and purses). Our test dataset contains 64 scans from the RenderPeople dataset, 207 scans from the AXYZ dataset, 26 scans from the BUFF dataset [55], and 2D images from the DeepFashion [27] dataset, representing clothed people with a large variety of complex clothing. The subjects in the training dataset are mostly in standing pose, while the subjects in the test dataset are in arbitrary poses (standing, bending, sitting, ...). We create renders of the 3D scans using Blender. For each 3D scan, we produce 360 images by rotating a camera around the vertical axis with intervals of 1 degree. For the current experiments, we only considered the weak perspective projection (orthographic camera) but this can be easily adapted. We also used 38 environment maps to render each scan with different natural lighting conditions. The proposed model is trained to predict albedo (given by ground truth scan color). We also observed that increasing the number of images improves the fidelity of predicted colors (as in [40]).
+
+In order to use a 3D scan for model training, we fit a rigged 3D body template to the scan mesh to estimate the 3D body pose (see Fig. 4). The estimated parametric 3D body can directly serve as ground truth input data during the model training step (see Sec. 3.3). This also allows us to obtain SemS and SemDF for the scan. However, since each 3D scan has its own topology, artifacts due to topology changes will occur when pose-normalization is naively applied to models containing self-contact (for example arms touching the body). This creates inaccurate deformations. Hence, we first detect regions of self-contact and topology changes and cut the mesh before pose-normalization (see Fig. 4 (c) and (d)). Holes are then filled up using Smooth Signed Distance Surface reconstruction [8] (see Fig. 4 (c) and (d)).
+
+| Methods | RenderPeople Normal | RenderPeople P2S | RenderPeople Chamfer | BUFF Normal | BUFF P2S | BUFF Chamfer |
+| --- | --- | --- | --- | --- | --- | --- |
+| BodyNet [44] | 0.26 | 5.72 | 5.64 | 0.31 | 4.94 | 4.52 |
+| SiCloPe [32] | 0.22 | 3.81 | 4.02 | 0.22 | 4.06 | 3.99 |
+| IM-GAN [10] | 0.26 | 2.87 | 3.14 | 0.34 | 5.11 | 5.32 |
+| VRN [17] | 0.12 | 1.42 | 1.6 | 0.13 | 2.33 | 2.48 |
+| PIFu [40] | 0.08 | 1.52 | 1.50 | 0.09 | 1.15 | 1.14 |
+| ARCH, baseline | 0.080 | 1.98 | 1.85 | 0.081 | 1.74 | 1.75 |
+| + SemDF | 0.042 | 0.74 | 0.85 | 0.045 | 0.82 | 0.87 |
+| + GRaC | 0.038 | 0.74 | 0.85 | 0.040 | 0.82 | 0.87 |
+
+Table 1. Quantitative comparisons of normal, P2S and Chamfer errors between posed reconstruction and ground truth on the RenderPeople and BUFF datasets. Lower values are better.
+
+For inference on 2D images from the DeepFashion dataset, we obtain 3D body poses using the pre-trained models from [51].
+
+# 4.3. Results and Comparisons
+
+We evaluate the reconstruction accuracy of ARCH with three metrics similar to [40]. We reconstruct the results on the same test set and repose them back to the original poses of the input images and compare the reconstructions with the ground truth surfaces in the original poses. We report the average point-to-surface Euclidean distance (P2S) in centimeters, the Chamfer distance in centimeters, and the L2 normal re-projection error in Tab. 1.
+
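+For reference, a point-cloud approximation of the P2S and Chamfer metrics using nearest-neighbor distances (the true metrics are computed against surfaces, so this is only a sketch under that simplification):
+
+```python
+import numpy as np
+from scipy.spatial import cKDTree
+
+def p2s_and_chamfer(pred_pts, gt_pts):
+    """Approximate point-to-surface and Chamfer distances from point samples (in cm)."""
+    d_pred_to_gt = cKDTree(gt_pts).query(pred_pts)[0]   # reconstruction -> ground truth
+    d_gt_to_pred = cKDTree(pred_pts).query(gt_pts)[0]   # ground truth -> reconstruction
+    p2s = d_pred_to_gt.mean()
+    chamfer = 0.5 * (d_pred_to_gt.mean() + d_gt_to_pred.mean())
+    return p2s, chamfer
+```
+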
+In addition to comparing with state-of-the-art methods [10, 17, 18, 32, 40, 44], we include scores from an ablative study of the proposed method. In particular, we evaluate three variants and validate the effectiveness of two main components: the Semantic Deformation Field and the Granular Render-and-Compare loss.
+
+ARCH, baseline: a variant of [40] using our own network specifications, taking an image as input and directly estimating the implicit surface reconstruction.
+
+Semantic Deformation Field (SemDF): we first estimate the human body configuration by [51] and then reconstruct the canonical shape using the implicit surface reconstruction, and finally repose the canonical shape to the original pose in the input image.
+
+Granular Render-and-Compare (GRaC): based on the previous step, we further refine the reconstructed surface normal and color using differentiable render-and-compare.
+
+The ARCH baseline specification already achieves state-of-the-art performance in normal estimation, but has inferior performance w.r.t. the P2S and Chamfer errors compared to PIFu [40]. We use a different training dataset than PIFu, which apparently does not represent the test set as well. Also, PIFu normalizes every scan at training and prediction time to have its geometric center at the coordinate origin, whereas we use origin-placed scans with slight displacements. Lastly, PIFu performs a size normalization of the body using the initial 3D body configuration estimate: the image is rescaled so that the height of the person matches the canonical size. This makes person height estimation for PIFu impossible, whereas we properly reconstruct it, at the cost of a more difficult task to solve. The benefit of this operation is not reflected in the scores because the metrics are calculated in the original image space.
+
+Figure 5. An example for animating a predicted avatar. We use a predicted, skinned avatar from our test set and drive it using off-the-shelf motion capture data. This avatar has been created using only a single, frontal view. Our model produces a plausible prediction for the unseen parts, for example the hair and the back of the dress.
+
+Figure 7. Reconstruction quality of clothing details. The geometry reconstruction from our method reproduces larger wrinkles and the seam of the pants and shoes while the predicted normals reproduce fine wrinkles. The normal and color predictions rendered together produce a plausible image.
+
+
+Figure 6. Evaluation on BUFF. Our method outperforms [40] for detailed reconstruction from arbitrary poses. We show results from different angles.
+
+When adding SemDF, we see a substantial gain in performance compared to our own baseline, but also compared to the previously best-performing PIFu metrics. We outperform PIFu on average by over $50\%$ on the RenderPeople dataset and by over $60\%$ on the BUFF dataset. When adding the Granular Render-and-Compare loss, these numbers improve again slightly, especially for normal estimation. Additionally, the results gain a lot of visual fidelity and we manage to remove many visual artifacts.
+
+Fig. 7 shows the level of detail of the geometry, normal and color predictions our model can achieve. Note that, for example, the zipper is not reproduced in the predicted normal map.
+
+Figure 8. Reconstruction example using different types of spatial features. XYZ: absolute coordinates, L2: Euclidean distances to each joint, RBF: Radial basis function based distance to each joint. The proposed RBF preserves notably more details.
+
+| Spatial Feature Types | Normal | P2S | Chamfer |
+| --- | --- | --- | --- |
+| XYZ | 0.045 | 0.75 | 0.91 |
+| L2 | 0.043 | 0.76 | 0.89 |
+| RBF | 0.042 | 0.74 | 0.85 |
+
+Table 2. Ablation study on the effectiveness of spatial features. The XYZ feature uses the plain location of body landmarks. The L2 and RBF features both improve the performance.
+
+This is an indicator that the model does not simply reproduce differences in shading directly in the normal map, but is able to learn about geometric and shading properties of human appearance. In Fig. 6, we show qualitative results on challenging poses from the BUFF dataset. In Fig. 9, we provide a comparison of results of our method with a variety of state-of-the-art models [44, 18, 40].
+
+Ablative Studies. We evaluate the effectiveness of different types of spatial features in Tab. 2 and Fig. 8. We evaluate three different features: XYZ uses the absolute position of the sampled point, L2 uses the Euclidean distance from the sampled point to each body landmark, and RBF denotes our proposed method in Sec. 3.1. It can be observed that RBF feature works best for this use case both qualitatively and quantitatively. RBF features strongly emphasize features that are close in distance to the currently analyzed point and puts less emphasis on points further away, facilitating optimization and preserving details.
+
+Animating Reconstructed Avatars. With the predicted occupancy field we can reconstruct a mesh that is already rigged and can directly be animated. We show the animation of an avatar we reconstructed from the AXYZ dataset in Fig. 5, driven by an off-the-shelf retargeted Mixamo animation [51].
+
+
+Figure 9. Qualitative comparisons against state-of-the-art methods [18, 44, 40] on unseen images. ARCH (Ours) handles arbitrary poses with self-contact and occlusions robustly, and reconstructs a higher level of details than existing methods. Images are from RenderPeople. Results on DeepFashion are of similar quality but are not shown due to copyright concerns. Please contact us for more information.
+
+
+Figure 10. Challenging cases. Reconstruction of rare poses, and details in occluded areas could be further improved.
+
+By working in the canonical space, the avatar is automatically rigged and can be directly animated. Given only a single-view image, the avatar is reconstructed in 3D and looks plausible from all sides.
+
+As shown in Fig. 10, rare poses not sufficiently covered in the training dataset (e.g., kneeling) yield an inaccurate body prior and are then challenging to reconstruct. Also, details (i.e., normals) in occluded areas could be improved with occlusion-aware estimation.
+
+# 5. Conclusion
+
+In this paper, we propose ARCH, an end-to-end framework to reconstruct clothed humans from unconstrained photos. By introducing the Semantic Space and the Semantic Deformation Field, we are able to handle reconstruction from arbitrary poses. We also propose a Granular Render-and-Compare loss for our implicit function representation to further constrain visual similarity under randomized camera views. ARCH shows higher fidelity in clothing details, including pixel-aligned colors and normals, across a wider range of human body configurations. The resulting models are animation-ready and can be driven by arbitrary motion sequences. We will explore handling heavy occlusion cases in in-the-wild images in future work.
+
+Acknowledgements. We would like to thank Junbang Liang and Yinghao Huang (Interns at FRL) for their work on dataset creation.
+
+# References
+
+[1] Thiemo Alldieck, Marcus Magnor, Bharat Lal Bhatnagar, Christian Theobalt, and Gerard Pons-Moll. Learning to reconstruct people in clothing from a single RGB camera. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), jun 2019. 2
+[2] Thiemo Alldieck, Marcus Magnor, Weipeng Xu, Christian Theobalt, and Gerard Pons-Moll. Detailed human avatars from monocular video. In International Conference on 3D Vision, 2018. 2
+[3] Thiemo Alldieck, Marcus Magnor, Weipeng Xu, Christian Theobalt, and Gerard Pons-Moll. Video based reconstruction of 3d people models. In IEEE Conference on Computer Vision and Pattern Recognition, 2018. 2
+[4] Thiemo Alldieck, Gerard Pons-Moll, Christian Theobalt, and Marcus Magnor. Tex2shape: Detailed full human body geometry from a single image. In IEEE International Conference on Computer Vision (ICCV). IEEE, oct 2019. 2
+[5] Alexandru O. Balan, Leonid Sigal, Michael J. Black, James E. Davis, and Horst W. Haussecker. Detailed human shape and pose from images. In IEEE Conference on Computer Vision and Pattern Recognition, 2007. 2
+[6] Bharat Lal Bhatnagar, Garvita Tiwari, Christian Theobalt, and Gerard Pons-Moll. Multi-garment net: Learning to dress 3d people from images. In IEEE International Conference on Computer Vision (ICCV). IEEE, oct 2019. 2
+[7] Federica Bogo, Angjoo Kanazawa, Christoph Lassner, Peter Gehler, Javier Romero, and Michael J. Black. Keep it SMPL: Automatic estimation of 3d human pose and shape from a single image. In European Conference on Computer Vision, 2016. 2
+[8] Fatih Calakli and Gabriel Taubin. SSD: smooth signed distance surface reconstruction. Comput. Graph. Forum, 30(7):1993-2002, 2011. 6
+[9] Wenzheng Chen, Jun Gao, Huan Ling, Edward Smith, Jaakko Lehtinen, Alec Jacobson, and Sanja Fidler. Learning to predict 3d objects with an interpolation-based differentiable renderer. In Annual Conference on Neural Information Processing Systems, 2019. 2
+[10] Zhiqin Chen and Hao Zhang. Learning implicit fields for generative shape modeling. IEEE Conference on Computer Vision and Pattern Recognition, 2019. 6
+[11] Hao-Shu Fang, Yuanlu Xu, Wenguan Wang, Xiaobai Liu, and Song-Chun Zhu. Learning pose grammar to encode human body configuration for 3d pose estimation. In AAAI Conference on Artificial Intelligence, 2018. 1
+[12] Yasutaka Furukawa and Jean Ponce. Accurate, dense, and robust multi-view stereopsis. In IEEE Conference on Computer Vision and Pattern Recognition, 2007. 1
+[13] Yasutaka Furukawa and Jean Ponce. Accurate, dense, and robust multiview stereopsis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(8):1362-1376, 2010. 2
+[14] Juergen Gall, Carsten Stoll, Edilson de Aguiar, Christian Theobalt, Bodo Rosenhahn, and Hans-Peter Seidel. Motion capture using joint skeleton tracking and surface estimation. In IEEE Conference on Computer Vision and Pattern Recognition, 2009. 2
+
+[15] Riza Alp Guler and Iasonas Kokkinos. Holopose: Holistic 3d human reconstruction in-the-wild. In IEEE Conference on Computer Vision and Pattern Recognition, 2019. 2
+[16] Eldar Insafutdinov and Alexey Dosovitskiy. Unsupervised learning of shape and pose with differentiable point clouds. In Annual Conference on Neural Information Processing Systems, 2018. 2, 3
+[17] Aaron S. Jackson, Chris Manafas, and Georgios Tzimiropoulos. 3d human body reconstruction from a single image via volumetric regression. European Conference of Computer Vision Workshops, 2018. 6
+[18] Angjoo Kanazawa, Michael J. Black, David W Jacobs, and Jitendra Malik. End-to-end recovery of human shape and pose. In IEEE Conference on Computer Vision and Pattern Recognition, 2018. 1, 2, 6, 7, 8
+[19] Hiroharu Kato, Yoshitaka Ushiku, and Tatsuya Harada. Neural 3d mesh renderer. In IEEE Conference on Computer Vision and Pattern Recognition, 2018. 2
+[20] Nikos Kolotouros, Georgios Pavlakos, Michael J. Black, and Kostas Daniilidis. Learning to reconstruct 3d human pose and shape via model-fitting in the loop. In IEEE International Conference on Computer Vision, 2019. 2
+[21] Nikos Kolotouros, Georgios Pavlakos, and Kostas Daniilidis. Convolutional mesh regression for single-image human shape reconstruction. In IEEE Conference on Computer Vision and Pattern Recognition, 2019. 2
+[22] Zorah Laehner, Daniel Cremers, and Tony Tung. Deepwrinkles: Accurate and realistic clothing modeling. In European Conference on Computer Vision, 2018. 2
+[23] Christoph Lassner, Javier Romero, Martin Kiefel, Federica Bogo, Michael J. Black, and Peter V Gehler. Unite the people: Closing the loop between 3d and 2d human representations. In IEEE Conference on Computer Vision and Pattern Recognition, 2017. 2
+[24] Verica Lazova, Eldar Insafutdinov, and Gerard Pons-Moll. 360-degree textures of people in clothing from a single image. In International Conference on 3D Vision, 2019. 2
+[25] Shichen Liu, Tianye Li, Weikai Chen, and Hao Li. Soft rasterizer: A differentiable renderer for image-based 3d reasoning. IEEE International Conference on Computer Vision, 2019. 2, 3, 4, 5
+[26] Shichen Liu, Shunsuke Saito, Weikai Chen, and Hao Li. Learning to infer implicit surfaces without 3d supervision. Annual Conference on Neural Information Processing Systems, 2019. 2, 3
+[27] Ziwei Liu, Ping Luo, Shi Qiu, Xiaogang Wang, and Xiaoou Tang. Deepfashion: Powering robust clothes recognition and retrieval with rich annotations. In IEEE Conference on Computer Vision and Pattern Recognition, 2016. 6
+[28] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black. Smpl: A skinned multi-person linear model. ACM Transactions on Graphics, 34(6):248, 2015. 3
+[29] Matthew M. Loper and Michael J. Black. Opendr: An approximate differentiable renderer. In European Conference on Computer Vision, 2014. 2
+[30] William E. Lorensen and Harvey E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. ACM SIGGRAPH Computer Graphics, 21(4):163-169, 1987. 5
+
+[31] Takashi Matsuyama, Shohei Nobuhara, Takeshi Takai, and Tony Tung. 3D Video and Its Applications. Springer, 2012. 1
+[32] Ryota Natsume, Shunsuke Saito, Zeng Huang, Weikai Chen, Chongyang Ma, Hao Li, and Shigeo Morishima. Siclope: Silhouette-based clothed people. In IEEE Conference on Computer Vision and Pattern Recognition, 2019. 2, 6
+[33] Richard A. Newcombe, Dieter Fox, and Steven M. Seitz. Dynamicfusion: Reconstruction and tracking of non-rigid scenes in real-time. In IEEE Conference on Computer Vision and Pattern Recognition, 2015. 1
+[34] Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked hourglass networks for human pose estimation. In European Conference on Computer Vision, 2016. 3
+[35] Mohamed Omran, Christoph Lassner, Gerard Pons-Moll, Peter V. Gehler, and Bernt Schiele. Neural body fitting: Unifying deep learning and model-based human pose and shape estimation. In International Conference on 3D Vision, 2018. 1
+[36] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 4
+[37] Gerard Pons-Moll, Sergi Pujades, Sonny Hu, and Michael J. Black. Clothcap: seamless 4d clothing capture and retargeting. ACM Transactions on Graphics, 36(4):73:1-73:15, 2017. 2
+[38] Hang Qi, Yuanlu Xu, Tao Yuan, Tianfu Wu, and Song-Chun Zhu. Scene-centric joint parsing of cross-view videos. In AAAI Conference on Artificial Intelligence, 2018. 1
+[39] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention, 2015. 4
+[40] Shunsuke Saito, Zeng Huang, Ryota Natsume, Shigeo Morishima, Angjoo Kanazawa, and Hao Li. Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization. In IEEE International Conference on Computer Vision, 2019. 2, 3, 4, 6, 7, 8
+[41] Hsiao-Yu Tung, Hsiao-Wei Tung, Ersin Yumer, and Katerina Fragkiadaki. Self-supervised learning of motion capture. In Annual Conference on Neural Information Processing Systems, 2017. 2
+[42] Tony Tung, Shohei Nobuhara, and Takashi Matsuyama. Complete multi-view reconstruction of dynamic scenes from probabilistic fusion of narrow and wide baseline stereo. In IEEE 12th International Conference on Computer Vision ICCV, 2009. 1
+[43] Tzu-Mao Li, Miika Aittala, Frédo Durand, and Jaakko Lehtinen. Differentiable monte carlo ray tracing through edge sampling. ACM Transactions on Graphics, 37(6):222:1-222:11, 2018. 2
+[44] Gul Varol, Duygu Ceylan, Bryan Russell, Jimei Yang, Ersin Yumer, Ivan Laptev, and Cordelia Schmid. BodyNet: Volumetric inference of 3D human body shapes. In European Conference on Computer Vision, 2018. 1, 2, 6, 7, 8
+[45] Daniel Vlasic, Ilya Baran, Wojciech Matusik, and Jovan Popovic. Articulated mesh animation from multi-view silhouettes. ACM Transactions on Graphics, 27(3):97:1-97:9, 2008. 2
+[46] Daniel Vlasic, Pieter Peers, Ilya Baran, Paul Debevec, Jovan Popovic, Szymon Rusinkiewicz, and Wojciech Matusik. Dynamic shape capture using multi-view photometric stereo. In ACM SIGGRAPH, 2009. 2
+[47] Wojciech Matusik, Chris Buehler, Ramesh Raskar, Steven J. Gortler, and Leonard McMillan. Image-based visual hulls. In ACM SIGGRAPH, 2000. 2
+[48] Chenglei Wu, Kiran Varanasi, and Christian Theobalt. Full body performance capture under uncontrolled and varying illumination: A shading-based approach. In European Conference on Computer Vision, 2012. 2
+[49] Yuanlu Xu, Xiaobai Liu, Yang Liu, and Song-Chun Zhu. Multi-view people tracking via hierarchical trajectory composition. In IEEE Conference on Computer Vision and Pattern Recognition, 2016. 1
+[50] Yuanlu Xu, Xiaobai Liu, Lei Qin, and Song-Chun Zhu. Multi-view people tracking via hierarchical trajectory composition. In AAAI Conference on Artificial Intelligence, 2017. 1
+[51] Yuanlu Xu, Song-Chun Zhu, and Tony Tung. DenseRaC: Joint 3D pose and shape estimation by dense render-and compare. In IEEE International Conference on Computer Vision, 2019. 1, 2, 3, 6, 8
+[52] Jinlong Yang, Jean-Sébastien Franco, Franck Hétroy-Wheeler, and Stefanie Wuhrer. Estimation of human body shape in motion with wide clothing. In European Conference on Computer Vision, 2016. 2
+[53] Tao Yu, Zerong Zheng, Kaiwen Guo, Jianhui Zhao, Qionghai Dai, Hao Li, Gerard Pons-Moll, and Yebin Liu. Doublefusion: Real-time capture of human performances with inner body shapes from a single depth sensor. In IEEE Conference on Computer Vision and Pattern Recognition, 2018. 2
+[54] Yixuan Wei Qionghai Dai Yebin Liu Zerong Zheng, Tao Yu. Deephuman: 3d human reconstruction from a single image. In IEEE International Conference on Computer Vision, 2019. 2
+[55] Chao Zhang, Sergi Pujades, Michael Black, and Gerard Pons-Moll. Detailed, accurate, human shape estimation from clothed 3D scan sequences. In IEEE Conference on Computer Vision and Pattern Recognition, 2017. 6
+[56] Zerong Zheng, Tao Yu, Hao Li, Kaiwen Guo, Qionghai Dai, Lu Fang, and Yebin Liu. Hybridfusion: Real-time performance capture using a single depth sensor and sparse imus. In European Conference on Computer Vision, 2018. 2
\ No newline at end of file
diff --git a/archanimatablereconstructionofclothedhumans/images.zip b/archanimatablereconstructionofclothedhumans/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..5cc04af0ee06e09e2d593a43e8c8d737bd179baa
--- /dev/null
+++ b/archanimatablereconstructionofclothedhumans/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:922053cf61105ddbba6f2b403ba7bddb373974761f51672649c7442187307d06
+size 563336
diff --git a/archanimatablereconstructionofclothedhumans/layout.json b/archanimatablereconstructionofclothedhumans/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..3d1e5221848f86e22492adf320c8002daa5cb745
--- /dev/null
+++ b/archanimatablereconstructionofclothedhumans/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b359e653cdd75f2391295de91d025eb26aed23467e8885bec9529818332975e1
+size 486941
diff --git a/arshadowganshadowgenerativeadversarialnetworkforaugmentedrealityinsinglelightscenes/961fb3aa-aa84-4b85-8ed3-eb8001fbb173_content_list.json b/arshadowganshadowgenerativeadversarialnetworkforaugmentedrealityinsinglelightscenes/961fb3aa-aa84-4b85-8ed3-eb8001fbb173_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..abb16c3d28356a9b3eb22f4027d7709b70342485
--- /dev/null
+++ b/arshadowganshadowgenerativeadversarialnetworkforaugmentedrealityinsinglelightscenes/961fb3aa-aa84-4b85-8ed3-eb8001fbb173_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f6097df86950db55f8979d0cf2d34802a54eedfcc165bffd92f11e4e06ff792a
+size 82553
diff --git a/arshadowganshadowgenerativeadversarialnetworkforaugmentedrealityinsinglelightscenes/961fb3aa-aa84-4b85-8ed3-eb8001fbb173_model.json b/arshadowganshadowgenerativeadversarialnetworkforaugmentedrealityinsinglelightscenes/961fb3aa-aa84-4b85-8ed3-eb8001fbb173_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..a0ddef023177ced1d18cb1bb4d07edd9a5c42973
--- /dev/null
+++ b/arshadowganshadowgenerativeadversarialnetworkforaugmentedrealityinsinglelightscenes/961fb3aa-aa84-4b85-8ed3-eb8001fbb173_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dbed5ab8b18771a5740d6d6147210ac735f7933318c00c15497261a67de351ff
+size 107798
diff --git a/arshadowganshadowgenerativeadversarialnetworkforaugmentedrealityinsinglelightscenes/961fb3aa-aa84-4b85-8ed3-eb8001fbb173_origin.pdf b/arshadowganshadowgenerativeadversarialnetworkforaugmentedrealityinsinglelightscenes/961fb3aa-aa84-4b85-8ed3-eb8001fbb173_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e621935c5528a6e2e96318533db350a33ec73584
--- /dev/null
+++ b/arshadowganshadowgenerativeadversarialnetworkforaugmentedrealityinsinglelightscenes/961fb3aa-aa84-4b85-8ed3-eb8001fbb173_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:422546fefca251b7e55d2442ad3027cbb8ffa85b35837c6c28fe3c4faa7d3a52
+size 2453708
diff --git a/arshadowganshadowgenerativeadversarialnetworkforaugmentedrealityinsinglelightscenes/full.md b/arshadowganshadowgenerativeadversarialnetworkforaugmentedrealityinsinglelightscenes/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..436c4cf0a93a693cd8cdfe8dc266479da0a1b02f
--- /dev/null
+++ b/arshadowganshadowgenerativeadversarialnetworkforaugmentedrealityinsinglelightscenes/full.md
@@ -0,0 +1,408 @@
+# ARShadowGAN: Shadow Generative Adversarial Network for Augmented Reality in Single Light Scenes
+
+Daquan Liu1, Chengjiang Long2*, Hongpan Zhang1, Hanning Yu1, Xinzhi Dong1, Chunxia Xiao1,3,4*
+1School of Computer Science, Wuhan University
+2Kitware Inc., Clifton Park, NY, USA
+3National Engineering Research Center For Multimedia Software, Wuhan University
+4Institute of Artificial Intelligence, Wuhan University
+
+chengjiang.long@kitware.com, {daquanliu,zhanghp,fishaning,dongxz97,cxxiao}@whu.edu.cn
+
+# Abstract
+
+Generating virtual object shadows consistent with the real-world environment shading effects is important but challenging in computer vision and augmented reality applications. To address this problem, we propose an end-to-end Generative Adversarial Network for shadow generation named ARShadowGAN for augmented reality in single light scenes. Our ARShadowGAN makes full use of attention mechanism and is able to directly model the mapping relation between the virtual object shadow and the real-world environment without any explicit estimation of the illumination and 3D geometric information. In addition, we collect an image set which provides rich clues for shadow generation and construct a dataset for training and evaluating our proposed ARShadowGAN. The extensive experimental results show that our proposed ARShadowGAN is capable of directly generating plausible virtual object shadows in single light scenes. Our source code is available at https://github.com/1dq9526/ARShadowGAN.
+
+# 1. Introduction
+
+Augmented reality (AR) technology seamlessly integrates virtual objects with real-world scenes. It has broad application prospects in the fields of medical science, education and entertainment. In a synthetic AR image, the shadow of the virtual object directly reflects the illumination consistency between the virtual object and the real-world environment, which greatly affects the sense of reality. Therefore, for high-quality AR applications it is critical to generate the virtual object shadow and ensure that it is consistent with the illumination constraints.
+
+Automatically generating shadows for inserted virtual
+
+
+Figure 1. An example of casting virtual shadow for an inserted object in a single light scene. From left to right: the original image, the synthetic image without the virtual object shadow, the virtual object mask and the image with virtual object shadows.
+
+objects is extremely challenging. Previous methods are based on inverse rendering [32] and their performances highly depend on the quality of the estimated geometry, illumination, reflectance and material properties. However, such an inverse rendering problem is very expensive and challenging in practice. What's worse, any inaccurate estimation may result in unreasonable virtual shadows. We aim to explore a mapping relationship between the virtual object shadow and the real-world environment in the AR setting without explicit inverse rendering. A shadow image dataset with clues to AR shadow generation in each image is desired for training and evaluating the performance of AR shadow generation. However, existing shadow-related datasets like SBU [41], SRD [38], and ISTD [44], contain pairs of shadow image and corresponding shadow-free image, but most of the shadows lack occluders and almost all shadows are removed in shadow-free images. Such shadow datasets do not provide sufficient clues to generate shadows. Therefore, it is necessary to construct a new shadow dataset for AR applications.
+
+In this work, we construct a large-scale AR shadow image dataset named Shadow-AR dataset where each raw image contains occluders, corresponding shadows and inserted 3D objects from publicly available datasets like ShapeNet [3]. We first annotate the real-world shadows and their corresponding occluders, and then determine the illumination and geometric information with camera and lighting calibration. Then we can apply 3D rendering to produce a shadow for an inserted 3D object and take it as the ground-truth virtual shadow for both training and evaluation.
+
+We observe that a straightforward solution like an image-to-image translation network cannot achieve plausible virtual shadows since it does not pay sufficient attention to the more important regions, namely real-world shadows and their corresponding occluders. This observation inspires us to leverage spatial attention on real-world shadows and corresponding occluders to generate shadows for inserted virtual objects.
+
+In this paper, we propose a generative adversarial network for direct virtual object shadow generation, which is called ARShadowGAN. As illustrated in Figure 1, ARShadowGAN takes a synthetic AR image without virtual shadows and the virtual object mask as input, and directly generates plausible virtual object shadows to make the AR image more realistic. Unlike inverse rendering-based methods [22, 23], which perform explicit geometry, illumination and reflectance estimation, our proposed ARShadowGAN produces virtual shadows without any explicit inverse rendering. Our key insight is to model the mapping relationship between the virtual object shadow and the real-world environment. In other words, ARShadowGAN automatically infers virtual object shadows from the clues provided by the real-world environment.
+
+We shall emphasize that we adopt the adversarial training process [10] between the generator and the discriminator to generate an AR shadow image. As the number of training epochs increases, both models improve, so that it becomes harder and harder to distinguish a generated AR shadow image from a real AR shadow image. Therefore, after a sufficiently large number of training epochs, we can use the learned parameters of the generator to generate an AR shadow image.
+
+To sum up, our main contributions are three-fold:
+
+- We construct the first large-scale Shadow-AR dataset, which consists of 3,000 quintuples and each quintuple consists of a synthetic AR image without the virtual object shadow and its corresponding AR image containing the virtual object shadow, a mask of the virtual object, a labeled real-world shadow matting and its corresponding labeled occluder.
+- We propose an end-to-end trainable generative adversarial network named ARShadowGAN. It is capable of directly generating virtual object shadows without illumination and geometry estimation.
+- Through extensive experiments, we show that the proposed ARShadowGAN outperforms the baselines derived from state-of-the-art straightforward image-to-image translation solutions.
+
+# 2. Related Work
+
+The related work to shadow generation can be divided into two categories: with or without inverse rendering.
+
+Shadow Generation with Inverse Rendering. Previous methods are based on inverse rendering to generate virtual object shadows, which requires geometry, illumination, reflectance and material properties. Methods [39, 36, 48, 1] estimate lighting with a known marker, but fail when the marker is blocked. Methods [22, 23, 25] estimate all the required properties, but inaccurate reconstruction leads to odd-looking results. In recent years, deep learning has made significant breakthroughs, especially in visual recognition [13, 18, 26, 28, 17, 30, 27, 29, 16], object detection and segmentation [9, 42, 31], and so on. In particular, deep learning-based methods [7, 45, 8, 6, 14, 49] have been developed to estimate HDR illumination from a single LDR image, but few of them work well for both indoor and outdoor scenes, and the rendering requires user interaction. Such heavy time and labor costs make this kind of method infeasible for automatic shadow generation in AR.
+
+Shadow Generation without Inverse Rendering. In recent years, the generative adversarial network (GAN) [10] and its variants such as cGAN [33] and WGAN [2] have been applied successfully to various generative tasks such as shadow detection and removal [44, 46, 5, 50], and they can also be extended to shadow generation, viewed as a particular style transfer. It is worth mentioning that Hu et al.'s Mask-ShadowGAN [15] conducts shadow removal and mask-guided shadow generation with unpaired data at the same time. Zhang et al. extended the image-completion cGAN [19] to ShadowGAN [51], which generates virtual object shadows for VR images in which the scenes are synthesized with a single point light. Nonetheless, these methods do not account for the occluders of real shadows. Unlike the previous methods, our proposed ARShadowGAN makes full use of the spatial attention mechanism to explore the correlation between occluders and their corresponding shadows, so as to cast plausible virtual shadows for inserted objects.
+
+# 3. Shadow-AR Dataset
+
+To cast a shadow for an inserted virtual object in a single light scene, we need to explore a mapping relationship between the virtual object and the shadow in the AR setting. A shadow image dataset with shadow clues for generating the virtual shadow in each image is needed for training and evaluating the performance of virtual shadow generation. However, existing shadow-related datasets have many limitations. SBU [41] and UCF [52] consist of pairs of shadow images and corresponding shadow masks but no corresponding shadow-free images. SRD [38], UIUC [12], LRSS [11] and ISTD [44] contain pairs of shadow image and corresponding shadow-free image, but most of the shadows lack occluders and almost all shadows are removed in shadow-free images. Such shadow datasets do not provide sufficient clues to generate shadows. Therefore, we have to construct a Shadow-AR dataset with shadow images and virtual objects.
+
+
+Figure 2. An illustration of two image examples in our Shadow-AR dataset. (a) is the original scene image without the marker, (b) is the synthetic image without the virtual object shadow, (c) is the mask of the virtual object, (d) is the real-world occluder, (e) is the real-world shadow, and (f) is the synthetic image containing the virtual object shadow.
+
+
+Figure 3. An illustration of data annotation. A 3D Cartesian coordinate system $M$ is established at the square marker. The camera pose is calculated by marker recognition. The light source position or direction is calibrated in the coordinate system $\mathbf{M}$ .
+
+# 3.1. Data Collection
+
+We collect raw images taken with a Logitech C920 camera at $640 \times 480$ resolution, where scenes are captured with different camera poses. We keep real-world shadows and the corresponding occluders in the photos because they serve as clues for shadow inference. We choose 9 models from ShapeNet [3] and 4 models from the Stanford 3D Scanning Repository and insert them into the photos to produce images with different foreground (model) and background (scene) combinations. Our Shadow-AR dataset contains 3,000 quintuples. Each quintuple consists of 5 images: a synthetic image without the virtual object shadow and its corresponding image containing the virtual object shadow, a mask of the virtual object, a labeled real-world shadow matting and its corresponding labeled occluder. Figure 2 shows examples of our image data.
+
+# 3.2. Mask Annotation and Shadow Rendering
+
+We need to collect supervised information containing the real-world shadow matting, the corresponding occluder mask, and the synthetic images with plausible virtual object shadows. Note that insertion of a virtual 3D object requires geometric consistency and the virtual object shadow needs to be consistent with the real-world environment. This means that we need to calibrate the camera pose and the lighting in the real-world environment at the same time, which is very challenging. For convenience, we use a simple black-white square marker to complete the data annotation. As is shown in Figure 3, we establish such a 3D Cartesian coordinate system $\mathbf{M}$ at the square marker as the world coordinate system.
+
+Clues annotation. As is shown in Figure 2 (c)-(d), we annotate the real-world shadows and their corresponding occluders, which help to infer the virtual object shadow. We annotate real-world shadows with the Robust-Matting software and annotate occluders with the LabelMe tool [43].
+
+Camera and lighting calibration. We perform the square marker recognition and tracking by adaptive thresholding with Otsu's [35] segmentation. With the extracted four marker corner points, camera poses are calculated by EPnP [24]. For indoor scenes, we consider a single dominant light and model it as a point light source with a three-dimensional position. To determine the most dominant light source, we manually block or turn off each indoor light (usually a point or area light) sequentially and choose the one that gives the most visible shadow. Then, we manually measure the geometric center coordinate $X_{m}$ of the dominant light as the light position (as is shown in Figure 3). For outdoor scenes, the main light source is the sun and we model it as a directional light source. We measure the sunlight direction using interest point correspondences between a known straight edge and its shadow.
+
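+For concreteness, a minimal OpenCV sketch of this calibration step is given below (our own illustration, not the authors' code): the marker side length, corner ordering and camera intrinsics are placeholder assumptions, and the marker corner extraction itself is omitted; only the Otsu thresholding and EPnP pose estimation follow the text.
+
+```python
+import cv2
+import numpy as np
+
+MARKER_SIZE = 0.08  # assumed marker side length in meters (placeholder)
+
+# 3D corners of the square marker in the world coordinate system M (z = 0 plane).
+OBJECT_POINTS = np.array([[0, 0, 0],
+                          [MARKER_SIZE, 0, 0],
+                          [MARKER_SIZE, MARKER_SIZE, 0],
+                          [0, MARKER_SIZE, 0]], dtype=np.float32)
+
+def estimate_camera_pose(gray, corners_2d, camera_matrix, dist_coeffs):
+    # Otsu's method selects the binarization threshold used for marker detection.
+    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
+    # Extracting the four marker corners from `binary` is omitted here; given the
+    # corners, EPnP recovers the camera pose in the marker coordinate system M.
+    ok, rvec, tvec = cv2.solvePnP(OBJECT_POINTS, corners_2d.astype(np.float32),
+                                  camera_matrix, dist_coeffs,
+                                  flags=cv2.SOLVEPNP_EPNP)
+    return ok, rvec, tvec
+```
+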
+Figure 4. Statistics of virtual objects and real-world clues: (a) occluder area distribution, (b) real-world shadow area distribution, (c) virtual object area distribution, and (d) virtual object location distribution. Our dataset shows reasonable property distributions.
+
+Rendering. With the calibrated camera and lighting, we render 3D objects and the corresponding shadows. We render 3D objects with Phong shading [37]. We experimentally set ambient lighting as white with normalized intensity 0.25 for indoor and 0.35 for outdoor. We add a plane at the bottom of the 3D object and perform shadow mapping [47] along with alpha blending to produce shadows. To make the generated shadows have consistent appearances with real-world shadows, we apply a Gaussian kernel $(5\times 5,\sigma = 1.0)$ to blur the shadow boundaries to get soft shadow borders.
+
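+As a concrete illustration of the border-softening step, the sketch below (our own; the darkening factor and data layout are assumptions) blurs a hard binary shadow matte with the stated $5\times 5$, $\sigma = 1.0$ Gaussian kernel and alpha-blends it into the rendered image.
+
+```python
+import cv2
+import numpy as np
+
+def composite_soft_shadow(image_bgr, shadow_mask, darkening=0.5):
+    """Soften a hard shadow mask (uint8, 0/255) and blend it as a soft shadow."""
+    # 5x5 Gaussian kernel with sigma = 1.0 softens the shadow boundary.
+    soft = cv2.GaussianBlur(shadow_mask.astype(np.float32) / 255.0, (5, 5), 1.0)
+    alpha = soft[..., None] * darkening               # per-pixel shadow opacity
+    shadowed = image_bgr.astype(np.float32) * (1.0 - alpha)
+    return np.clip(shadowed, 0, 255).astype(np.uint8)
+```
+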
+Figure 4 shows a statistical analysis of the distribution properties of our dataset. The area distribution is expressed as the ratio between the target (shadows, occluders or virtual objects) area and the image area. As we can see, the majority of occluders fall in the range $(0.0, 0.3]$, the majority of shadows fall in the range $(0.0, 0.2]$ and the majority of virtual objects fall in the range $(0.0, 0.2]$. Clues whose area falls in $(0.4, 0.6]$ occupy a large portion of the image, making it difficult to insert virtual objects; similarly, inserted objects with too large an area would block important clues. There are almost no such cases in our dataset. In addition, to analyze the spatial distribution of virtual objects, we compute a probability map (Figure 4 (d)) showing how likely a pixel is to belong to a virtual object. This is reasonable as virtual objects placed around human eyesight usually produce the most visually pleasing results.
+
+# 4. Proposed ARShadowGAN
+
+As illustrated in Figure 5, our proposed ARShadowGAN is an end-to-end network which takes a synthetic image without virtual object shadows and the virtual object mask as input, and produces the corresponding image with virtual object shadows. It consists of 3 components: an attention block, a virtual shadow generator with a refinement module, and a discriminator to distinguish whether the generated virtual shadow is plausible.
+
+# 4.1. Attention Block
+
+The attention block produces attention maps of real shadows and their corresponding occluders. An attention map is a matrix with elements ranging from 0 to 1 that indicates how much attention should be paid to different regions of the real-world environment. The attention block takes the concatenation of the image without virtual object shadows and the virtual object mask as input. It has two identical decoder branches: one branch predicts the real-shadow attention map and the other predicts the corresponding occluder attention map.
+
+There are 4 down-sampling (DS) layers. Each DS layer extracts features with a residual block [13], which consists of 3 consecutive convolution, batch normalization and Leaky ReLU operations, and halves the feature map with an average pooling operation. The features extracted by the DS layers are shared by the two decoder branches, which have the same architecture. Each decoder consists of 4 up-sampling (US) layers. Each US layer doubles the feature map by nearest-neighbor interpolation, followed by consecutive dilated convolution, batch normalization and Leaky ReLU operations. The last feature map is activated by a sigmoid function. Symmetric DS-US layers are connected by skip connections.
+
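+The description above can be summarized by the following Keras-style sketch; it is a rough approximation under our own assumptions, since the exact filter counts, kernel sizes and the $1\times 1$ projection shortcut are not specified in the paper.
+
+```python
+import tensorflow as tf
+from tensorflow.keras import layers
+
+def down_sampling(x, filters):
+    # Residual block: 3 x (conv + BN + Leaky ReLU), then average pooling halves the map.
+    shortcut = layers.Conv2D(filters, 1, padding="same")(x)
+    for _ in range(3):
+        x = layers.Conv2D(filters, 3, padding="same")(x)
+        x = layers.BatchNormalization(momentum=0.9)(x)
+        x = layers.LeakyReLU(0.2)(x)
+    x = layers.Add()([x, shortcut])
+    return layers.AveragePooling2D(2)(x)
+
+def up_sampling(x, skip, filters, dilation=2):
+    # Nearest-neighbour upsampling, skip concatenation, then dilated conv + BN + Leaky ReLU.
+    x = layers.UpSampling2D(2, interpolation="nearest")(x)
+    x = layers.Concatenate()([x, skip])
+    x = layers.Conv2D(filters, 3, padding="same", dilation_rate=dilation)(x)
+    x = layers.BatchNormalization(momentum=0.9)(x)
+    return layers.LeakyReLU(0.2)(x)
+```
+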
+# 4.2. Virtual Shadow Generator
+
+The virtual shadow generator produces plausible virtual object shadows. It consists of a U-net followed by a refinement module. The U-net with 5 DS-US layers produces a coarse residual shadow image and then it is fine-tuned by the refinement module with 4 consecutive composite functions [18]. The final output is the addition of the improved residual shadow image and the input image.
+
+In the virtual shadow generator, DS layers are the same as those in the attention block while US layers use convolutions instead of dilated ones. Each composite function produces 64 feature maps.
+
+# 4.3. Discriminator
+
+The discriminator distinguishes whether the generated virtual shadows are plausible, thereby assisting the training of the generator. We design the discriminator in the form of PatchGAN [20].
+
+The discriminator contains 4 consecutive convolution (with valid padding), instance normalization and Leaky ReLU operations. Then, a convolution produces the last feature map, which is activated by a sigmoid function. The final output of the discriminator is the global average pooling of the activated last feature map. In ARShadowGAN, the discriminator takes the concatenation of the image without virtual object shadows, the virtual object mask and the image with virtual object shadows as input.
+
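+A minimal functional sketch of such a PatchGAN-style discriminator is shown below; the kernel sizes, strides and filter counts are our assumptions, and only the valid padding, instance normalization, Leaky ReLU, sigmoid activation and global average pooling follow the text.
+
+```python
+import tensorflow as tf
+from tensorflow.keras import layers
+
+def instance_norm(x, eps=1e-5):
+    # Per-sample, per-channel normalization (instance normalization without learned affine).
+    mean, var = tf.nn.moments(x, axes=[1, 2], keepdims=True)
+    return (x - mean) / tf.sqrt(var + eps)
+
+def discriminator(x):
+    # x: concat(image w/o virtual shadow, virtual object mask, image with virtual shadow).
+    for filters in (64, 128, 256, 512):
+        x = layers.Conv2D(filters, 4, strides=2, padding="valid")(x)
+        x = instance_norm(x)
+        x = layers.LeakyReLU(0.2)(x)
+    x = layers.Conv2D(1, 4, padding="valid", activation="sigmoid")(x)
+    return tf.reduce_mean(x, axis=[1, 2, 3])  # global average pooling -> score per sample
+```
+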
+Figure 5. The architecture of our proposed ARShadowGAN. It consists of an attention block, a virtual shadow generator with a refinement module and a discriminator. The attention block has two branches producing attention maps of real-world shadows and occluders. The attention maps are leveraged by the virtual shadow generator to produce a coarse residual shadow image. The coarse shadow image is fine-tuned by the refinement module. The final output is the addition of the input image and the fine-tuned residual shadow image. In the figure, $\mathbb{C}$ denotes concatenation and $\oplus$ denotes addition.
+
+# 4.4. Loss functions
+
+Attention Loss. We use standard squared loss to measure the difference between the predicted attention maps and the ground truth masks. $\mathcal{L}_{attn}$ is defined as follows:
+
+$$
+\mathcal{L}_{attn} = \left\| \mathcal{A}_{robj}(x, m) - \mathcal{M}_{robj} \right\|_{2}^{2} + \left\| \mathcal{A}_{rshadow}(x, m) - \mathcal{M}_{rshadow} \right\|_{2}^{2}, \tag{1}
+$$
+
+where $\mathcal{A}_{rshadow}(\cdot)$ is the output attention map for real shadows and $\mathcal{A}_{robj}(\cdot)$ is the output attention map for real occluders, given the input synthetic image $x$ without virtual object shadows and the virtual object mask $m$. Both $\mathcal{M}_{robj}$ and $\mathcal{M}_{rshadow}$ are ground-truth binary maps of the real-world occluders and shadows, respectively. In $\mathcal{M}_{robj}$, 1 indicates that a pixel belongs to a real occluder and 0 otherwise; similarly, 1 in $\mathcal{M}_{rshadow}$ indicates that a pixel lies in a real shadow region and 0 otherwise.
+
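+A direct translation of Eq. (1) into TensorFlow might look as follows; this is a sketch under our assumptions (the paper does not state whether the squared norm is summed or averaged over pixels, so a sum is used here).
+
+```python
+import tensorflow as tf
+
+def attention_loss(att_robj, att_rshadow, gt_robj, gt_rshadow):
+    # Eq. (1): squared L2 distance between predicted attention maps and binary ground truth.
+    return (tf.reduce_sum(tf.square(att_robj - gt_robj)) +
+            tf.reduce_sum(tf.square(att_rshadow - gt_rshadow)))
+```
+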
+Shadow Generation Loss. $\mathcal{L}_{gen}$ is used to measure the difference between the ground truth and the generated image with virtual object shadows. The shadow generation loss consists of three weighted terms, i.e., $\mathcal{L}_2$ , $\mathcal{L}_{per}$ and $\mathcal{L}_{adv}$ , and the total loss is:
+
+$$
+\mathcal{L}_{gen} = \beta_{1} \mathcal{L}_{2} + \beta_{2} \mathcal{L}_{per} + \beta_{3} \mathcal{L}_{adv}, \tag{2}
+$$
+
+where $\beta_{1},\beta_{2}$ and $\beta_{3}$ are hyper-parameters which control the influence of terms.
+
+$\mathcal{L}_2$ is the pixel-wise loss between the generated image and the corresponding ground truth. It is worth mentioning that our ARShadowGAN produces a coarse residual shadow image to generate a coarse virtual shadow image $\bar{y} = x + \mathbf{G}(x, m, \mathcal{A}_{robj}, \mathcal{A}_{rshadow})$. We further improve the residual image to form the final shadow image $\hat{y} = x + \mathbf{R}(\mathbf{G}(x, m, \mathcal{A}_{robj}, \mathcal{A}_{rshadow}))$ through the refinement module $\mathbf{R}(\cdot)$. Therefore, we can define $\mathcal{L}_2$ as follows:
+
+$$
+\mathcal{L}_{2} = \| y - \bar{y} \|_{2}^{2} + \| y - \hat{y} \|_{2}^{2}, \tag{3}
+$$
+
+where $y$ is the corresponding ground truth shadow image.
+
+$\mathcal{L}_{per}$ is the perceptual loss [21], which measures the semantic difference between the generated image and the ground truth. We use a VGG16 model [40] pre-trained on the ImageNet dataset [4] to extract features. The feature map is the output of the $4^{th}$ max-pooling layer ($14\times 14\times 512$), i.e., the first 10 layers of VGG16 are used to compute the feature map. $\mathcal{L}_{per}$ is defined as follows:
+
+$$
+\mathcal{L}_{per} = \operatorname{MSE}\left(V_{y}, V_{\bar{y}}\right) + \operatorname{MSE}\left(V_{y}, V_{\hat{y}}\right), \tag{4}
+$$
+
+where MSE is the mean squared error, and $V_{i} = \mathrm{VGG}(i)$ is the feature map extracted by the well-trained VGG16 model.
+
+$\mathcal{L}_{adv}$ describes the competition between the generator and the discriminator, which is defined as follows:
+
+$$
+\mathcal{L}_{adv} = \log(\mathbf{D}(x, m, y)) + \log(1 - \mathbf{D}(x, m, \hat{y})), \tag{5}
+$$
+
+where $\mathbf{D}(\cdot)$ is the probability that the image is "real". During the adversarial training, the discriminator tries to maximize $\mathcal{L}_{adv}$ while the generator tries to minimize it.
+
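+Putting Eqs. (2)-(5) together, the generator objective could be sketched as below (our own illustration; the VGG input preprocessing, the per-pixel reduction and the numerical epsilon are assumptions). The discriminator maximizes $\mathcal{L}_{adv}$ while the generator minimizes the total loss.
+
+```python
+import tensorflow as tf
+from tensorflow.keras.applications import VGG16
+
+# Feature extractor up to the 4th max-pooling layer ("block4_pool" in Keras' VGG16).
+_vgg = VGG16(include_top=False, weights="imagenet")
+vgg_features = tf.keras.Model(_vgg.input, _vgg.get_layer("block4_pool").output)
+
+def generation_loss(y, y_coarse, y_refined, d_real, d_fake,
+                    beta1=10.0, beta2=1.0, beta3=0.01, eps=1e-8):
+    # Eq. (3): pixel-wise L2 on both the coarse and the refined outputs.
+    l2 = tf.reduce_mean(tf.square(y - y_coarse)) + tf.reduce_mean(tf.square(y - y_refined))
+    # Eq. (4): MSE between VGG16 block4_pool features of prediction and ground truth.
+    vy, vc, vr = vgg_features(y), vgg_features(y_coarse), vgg_features(y_refined)
+    lper = tf.reduce_mean(tf.square(vy - vc)) + tf.reduce_mean(tf.square(vy - vr))
+    # Eq. (5): adversarial term; d_real = D(x, m, y), d_fake = D(x, m, y_refined).
+    ladv = tf.reduce_mean(tf.math.log(d_real + eps) + tf.math.log(1.0 - d_fake + eps))
+    # Eq. (2): weighted sum, minimized by the generator.
+    return beta1 * l2 + beta2 * lper + beta3 * ladv
+```
+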
+# 4.5. Implementation details
+
+Our ARShadowGAN is implemented in the TensorFlow framework. In ARShadowGAN, all the batch normalization and Leaky ReLU operations share the same hyper-parameters. We set the decay to 0.9 for batch normalization and the negative slope to 0.2 for Leaky ReLU. All images in our dataset are resized to $256 \times 256$ by cubic interpolation for training and testing.
+
+Synthetic images and virtual object masks are normalized to $[-1, 1]$ while labeled clue images are normalized to $[0, 1]$ . We randomly divide our dataset into three parts: 500 for attention block training, 2,000 for virtual shadow generation training and 500 for testing.
+
+We adopt a two-stage training strategy. At the $1^{st}$ stage, we train the attention block alone on the 500-image training subset. We optimize the attention block by minimizing $\mathcal{L}_{attn}$ with the ADAM optimizer. The learning rate is initialized to $10^{-5}$ and the ADAM $\beta$ parameters are set to (0.9, 0.99). The attention block is trained for 5,000 iterations with batch size 1. At the $2^{nd}$ stage, the attention block is fixed and we train the virtual shadow generator and the discriminator on the 2,000-image training subset. We set $\beta_{1} = 10.0$, $\beta_{2} = 1.0$, $\beta_{3} = 0.01$ for $\mathcal{L}_{gen}$. We adopt the ADAM optimizer to optimize the generator and discriminator, with the same optimizer parameters as in the $1^{st}$ stage. The virtual shadow generator and discriminator are trained for 150,000 iterations with batch size 1. In each iteration, we alternately optimize the generator and the discriminator.
+
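+The stated optimizer settings and input normalization translate roughly into the following sketch; the variable names and the uint8 input assumption are ours.
+
+```python
+import tensorflow as tf
+
+# ADAM with lr = 1e-5 and beta = (0.9, 0.99), used in both training stages.
+opt_attention = tf.keras.optimizers.Adam(learning_rate=1e-5, beta_1=0.9, beta_2=0.99)
+opt_generator = tf.keras.optimizers.Adam(learning_rate=1e-5, beta_1=0.9, beta_2=0.99)
+opt_discriminator = tf.keras.optimizers.Adam(learning_rate=1e-5, beta_1=0.9, beta_2=0.99)
+
+def normalize_image(img_uint8):
+    # Synthetic images and virtual object masks are scaled to [-1, 1].
+    return tf.cast(img_uint8, tf.float32) / 127.5 - 1.0
+
+def normalize_clue(mask_uint8):
+    # Labeled clue (shadow / occluder) images are scaled to [0, 1].
+    return tf.cast(mask_uint8, tf.float32) / 255.0
+```
+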
+# 5. Experiments
+
+To evaluate the performance of our proposed ARShadowGAN, we conduct experiments on our collected Shadow-AR dataset. We report the average error on the testing set for quantitative evaluation. We calculate the root mean square error (RMSE) and the structural similarity index (SSIM) between the generated shadow images and the ground truth to measure the global image error. We calculate the balanced error rate (BER) [34] and accuracy (ACC) between the generated shadow masks and the ground-truth shadow masks, which are obtained with a ratio threshold, to measure the shadow area and boundary error. In general, the smaller the RMSE and BER and the larger the SSIM and ACC, the better the generated image. Note that all the images for visualization are resized to a 4:3 aspect ratio.
+
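+For reference, the evaluation metrics can be computed as in the sketch below (our own helper code; the thresholding that produces the binary shadow masks is not reproduced here, and one common BER definition is assumed).
+
+```python
+import numpy as np
+from skimage.metrics import structural_similarity as ssim
+
+def rmse(pred, gt):
+    return np.sqrt(np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2))
+
+def ssim_score(pred_gray, gt_gray):
+    return ssim(pred_gray, gt_gray, data_range=255)
+
+def ber_and_acc(pred_mask, gt_mask):
+    # Balanced error rate over shadow (positive) / non-shadow (negative) pixels, plus
+    # plain pixel accuracy; pred_mask and gt_mask are boolean arrays.
+    tp = np.logical_and(pred_mask, gt_mask).sum()
+    tn = np.logical_and(~pred_mask, ~gt_mask).sum()
+    fp = np.logical_and(pred_mask, ~gt_mask).sum()
+    fn = np.logical_and(~pred_mask, gt_mask).sum()
+    ber = 0.5 * (fn / max(tp + fn, 1) + fp / max(tn + fp, 1)) * 100.0
+    acc = (tp + tn) / pred_mask.size * 100.0
+    return ber, acc
+```
+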
+# 5.1. Visualization of Generated Attentions
+
+Attention maps are used to assist the virtual shadow generator. As is shown in Figure 6, real-world shadows and their corresponding occluders receive more attention. It is worth mentioning that the virtual object itself is not a clue, and the mask prevents the virtual object from receiving as much attention as real-world shadows and occluders. To verify the role of the mask, we replace the mask with an all-black image, which indicates no virtual object. The results are also shown in the $2^{nd}$ and $4^{th}$ rows of Figure 6.
+
+# 5.2. Comparison to Baselines
+
+To the best of our knowledge, there are no existing methods proposed to directly generate AR shadows for inserted objects without any 3D information. We nevertheless choose the following methods as baselines, since they can be extended and adapted to our task:
+
+Figure 6. Examples of attention maps. From left to right: input images without virtual object shadows, input masks, attention maps of real-world shadows and their corresponding occluders. Corresponding cases without masks are also shown.
+
+Pix2Pix [20] is a cGAN trained on paired data for general image-to-image translation. It is directly applicable to our shadow generation task. We make Pix2Pix output the shadow image directly.
+
+Pix2Pix-Res is a variant of Pix2Pix whose architecture is the same as Pix2Pix but outputs the residual virtual shadow image like our ARShadowGAN.
+
+ShadowGAN [51] synthesizes shadows for inserted objects in VR images. ShadowGAN takes exactly the same input items as our ARShadowGAN and generates shadow maps, which are then multiplied with the source images to produce the final images. We calculate shadow maps from our data to train ShadowGAN and we evaluate ShadowGAN with the produced final images.
+
+Mask-ShadowGAN [15] performs both shadow removal and mask-guided shadow generation. We adapt this framework to our task. $G_{s}$ and $G_{f}$ are the two generators of Mask-ShadowGAN; we adjust $G_{s}$ to perform virtual shadow generation and $G_{f}$ to perform mask-guided virtual shadow removal.
+
+For a fair comparison, we train all the models on the same training data with the same training details and evaluate them on the same testing data.
+
+| Models | RMSE | SSIM | S (%) | A (%) | ACC (%) |
+| --- | --- | --- | --- | --- | --- |
+| Pix2Pix | 9.514 | 0.938 | 41.468 | 27.358 | 90.631 |
+| Pix2Pix-Res | 8.043 | 0.959 | 29.597 | 26.476 | 96.689 |
+| ShadowGAN | 8.041 | 0.961 | 28.347 | 24.547 | 97.122 |
+| Mask-ShadowGAN | 7.493 | 0.959 | 23.261 | 21.131 | 98.443 |
+| ARShadowGAN | **6.520** | **0.965** | **22.278** | **19.267** | **98.453** |
+
+Table 1. Results of quantitative comparison. In the table, S represents BER of virtual shadow regions and A represents BER of the whole shadow mask. The best scores are highlighted in bold.
+
+Quantitative comparison results are shown in Table 1.
+
+Figure 7. Visualization comparison with different methods. From left to right are input image (a), input mask (b), the results of Pix2Pix (c), Pix2Pix-Res (d), ShadowGAN (e), Mask-ShadowGAN (f), ARShadowGAN (g), and ground-truth (h).
+Figure 8. Examples of qualitative ablation studies of network modules. From left to right: (a) input image, (b) input mask, (c) w/o Attn, (d) w/o $\mathcal{L}_{adv}$, (e) w/o Refine, (f) ARShadowGAN, and (g) ground truth.
+
+Examples of qualitative comparison are shown in Figure 7. As we can see, the overall performance of Pix2Pix-Res and ShadowGAN is better than that of Pix2Pix, which indicates that targeting the shadow map or the residual shadow image makes the network focus on the shadow itself rather than on reconstructing the whole image. Mask-ShadowGAN performs a little better than Pix2Pix-Res and ShadowGAN, but it still produces artifacts. ARShadowGAN outperforms the baselines with far fewer artifacts in terms of shadow azimuth and shape, partially because the attention mechanism enhances the beneficial features and makes the most of them.
+
+| Models | RMSE | SSIM | S (%) | A (%) | ACC (%) |
+| --- | --- | --- | --- | --- | --- |
+| w/o Attn | 7.175 | 0.962 | 23.162 | 21.079 | 98.446 |
+| w/o Refine | 7.050 | 0.961 | 23.087 | 21.024 | 98.450 |
+| w/o $\mathcal{L}_{adv}$ | 7.781 | 0.959 | 29.093 | 26.354 | 97.487 |
+| w/o $\mathcal{L}_{per}$ | 8.001 | 0.963 | 29.576 | 26.399 | 97.152 |
+| w/o $\mathcal{L}_2$ | 9.696 | 0.924 | 50.748 | 30.829 | 88.548 |
+| ARShadowGAN | **6.520** | **0.965** | **22.278** | **19.267** | **98.453** |
+
+Table 2. Results of ablation studies. The best scores are highlighted in bold.
+
+# 5.3. Ablation Studies
+
+To verify the effectiveness of our loss function and network architecture, we compare our ARShadowGAN with its ablated versions:
+
+- w/o Attn: we remove the attention block.
+- w/o Refine: we remove the refinement module.
+
+- w/o $\mathcal{L}_{adv}$ : we remove the discriminator $(\beta_{3} = 0)$ .
+- w/o $\mathcal{L}_{per}$ : we remove $\mathcal{L}_{per}$ from Equation 2 ( $\beta_{2} = 0$ ).
+- w/o $\mathcal{L}_2$ : we remove $\mathcal{L}_2$ from Equation 2 ( $\beta_1 = 0$ ).
+
+For models without the attention block, the input to the virtual shadow generator is adjusted to the concatenation of the synthetic image (without virtual object shadows) and the object mask. We train these models on the training set. Quantitative results of the ablation studies are shown in Table 2, and examples of qualitative ablation studies are shown in Figure 8 and Figure 9.
+
+Network modules. As we can see, our full model achieves the best performance. As is shown in Figure 8, the model without a discriminator mostly produces odd-looking virtual object shadows because the generator has not yet converged, which indicates that adversarial training does speed up the convergence of the generator. Our full model outperforms the version without the attention block in terms of overall virtual object shadow azimuth, which indicates that the attention block helps preserve features useful for shadow inference. The model without the refinement module produces artifacts in the shadow area, suggesting that the refinement module fine-tunes the virtual shadow details through its nonlinear activation functions.
+
+Loss functions. As we can see, our full loss function achieves the best performance. As is shown in Figure 9, $\mathcal{L}_{per}$ plays an important role in constraining the shadow shape.
+
+Figure 9. Examples of qualitative ablation studies of the loss function. From left to right: (a) input image, (b) input mask, (c) w/o $\mathcal{L}_2$, (d) w/o $\mathcal{L}_{per}$, (e) ARShadowGAN, and (f) ground truth.
+
+However, $\mathcal{L}_{per}$ is a global semantic constraint rather than a detail-level one, so the pixel-wise intensity and noise are not well resolved. $\mathcal{L}_2$ maintains good pixel-wise intensity but produces blurred virtual object shadows whose shapes are not good. $\mathcal{L}_{per} + \mathcal{L}_2$ outperforms both $\mathcal{L}_{per}$ and $\mathcal{L}_2$ alone, which indicates that $\mathcal{L}_{per}$ and $\mathcal{L}_2$ promote each other.
+
+
+Figure 10. Robustness testing. From left to right: input images, input masks, attention maps of real-world shadows and their corresponding occluders and output images.
+
+# 5.4. Robustness Testing
+
+We test our ARShadowGAN on new cases outside the Shadow-AR dataset in Figure 10 to show its robustness. All the images and the buddha, vase and mug models are new and have no ground truth. The case with a model inserted into a real shadow is shown in the $3^{rd}$ row. Cases with multiple light sources and multiple inserted models are shown in the $4^{th}$ and $5^{th}$ rows. The visualization results show that ARShadowGAN is capable of producing plausible shadows.
+
+# 6. Limitations
+
+ARShadowGAN is subject to the following limitations:
+
+(1) ARShadowGAN fails when there are large dark areas or few clues. Examples are shown in Figure 11.
+
+
+Figure 11. Failure cases of large dark areas and few clues. From left to right: input images without virtual shadows, input masks, attention maps of real-world shadows and their corresponding occluder and output images.
+
+(2) ARShadowGAN only produces planar shadows which do not intersect with real-world shadows and do not exhibit multiple light source characteristics.
+(3) ARShadowGAN does not change the shading of the inserted object.
+
+Limitation (1) is because ARShadowGAN relies on clues to infer virtual object shadows while large dark areas seriously interfere with clues. Limitations (2) and (3) exist because the training data does not contain such examples. Extending the Shadow-AR dataset is a possible way to solve limitations (2) and (3).
+
+# 7. Conclusion and Future Work
+
+In this work, we construct a dataset and propose ARShadowGAN to directly generate plausible virtual object shadows consistent with real-world shading effects, without any explicit estimation of the illumination and the geometry. Future work includes addressing the self-shading problem of inserted objects and extending the current Shadow-AR dataset and ARShadowGAN to more complex cases.
+
+# Acknowledgement
+
+This work was partly supported by Key Technological Innovation Projects of Hubei Province (2018AAA062), Wuhan Science and Technology Plan Project (No. 2017010201010109), the National Key Research and Development Program of China (2017YFB1002600), the NSFC (No. 61672390, 61972298). The corresponding author is Chunxia Xiao.
+
+# References
+
+[1] Ibrahim Arief, Simon McCallum, and Jon Yngve Hardeberg. Realtime estimation of illumination direction for augmented reality on mobile devices. In Color and Imaging Conference, volume 2012, pages 111-116. Society for Imaging Science and Technology, 2012.
+[2] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein gan. arXiv preprint arXiv:1701.07875, 2017.
+[3] Angel X Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015.
+[4] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pages 248-255. IEEE, 2009.
+[5] Bin Ding, Chengjiang Long, Ling Zhang, and Chunxia Xiao. Argan: Attentive recurrent generative adversarial network for shadow detection and removal. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), October 2019.
+[6] Marc-Andre Gardner, Yannick Hold-Geoffroy, Kalyan Sunkavalli, Christian Gagne, and Jean-Francois Lalonde. Deep parametric indoor lighting estimation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), October 2019.
+[7] Marc-Andre Gardner, Kalyan Sunkavalli, Ersin Yumer, Xiaohui Shen, Emiliano Gambaretto, Christian Gagne, and Jean-François Lalonde. Learning to predict indoor illumination from a single image. ACM Transactions on Graphics (SIGGRAPH Asia), 9(4), 2017.
+[8] Mathieu Garon, Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, and Jean-Francois Lalonde. Fast spatially-varying indoor lighting estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
+[9] Golnaz Ghiasi, Tsung-Yi Lin, and Quoc V Le. Nas-fpn: Learning scalable feature pyramid architecture for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7036-7045, 2019.
+[10] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Proceedings of the Advances in neural information processing systems (NeurIPS), pages 2672–2680, 2014.
+[11] Maciej Gryka, Michael Terry, and Gabriel J Brostow. Learning to remove soft shadows. ACM Transactions on Graphics (TOG), 34(5):153, 2015.
+[12] Ruiqi Guo, Qieyun Dai, and Derek Hoiem. Paired regions for shadow detection and removal. IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 35(12):2956-2967, 2013.
+[13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016.
+[14] Yannick Hold-Geoffroy, Akshaya Athawale, and Jean-Francois Lalonde. Deep sky modeling for single image outdoor lighting estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
+[15] Xiaowei Hu, Yitong Jiang, Chi-Wing Fu, and Pheng-Ann Heng. Mask-ShadowGAN: Learning to remove shadows from unpaired data. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2019.
+[16] Gang Hua, Chengjiang Long, Ming Yang, and Yan Gao. Collaborative active learning of a kernel machine ensemble for recognition. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 1209-1216. IEEE, 2013.
+[17] Gang Hua, Chengjiang Long, Ming Yang, and Yan Gao. Collaborative active visual recognition from crowds: A distributed ensemble approach. IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 40(3):582-594, 2018.
+[18] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pages 4700-4708, 2017.
+[19] Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. Globally and locally consistent image completion. ACM Transactions on Graphics (TOG), 36(4):107, 2017.
+[20] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pages 1125-1134, 2017.
+[21] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European Conference on Computer Vision (ECCV), pages 694–711. Springer, 2016.
+[22] Kevin Karsch, Varsha Hedau, David Forsyth, and Derek Hoiem. Rendering synthetic objects into legacy photographs. ACM Transactions on Graphics (TOG), 30(6):1-12, 2011.
+[23] Kevin Karsch, Kalyan Sunkavalli, Sunil Hadap, Nathan Carr, Hailin Jin, Rafael Fonte, Michael Sittig, and David Forsyth. Automatic scene inference for 3d object compositing. ACM Transactions on Graphics (TOG), 33(3):32, 2014.
+[24] Vincent Lepetit, Francesc Moreno-Noguer, and Pascal Fua. Epnp: An accurate o (n) solution to the pnp problem. International journal of computer vision (IJCV), 81(2):155, 2009.
+[25] Bin Liao, Yao Zhu, Chao Liang, Fei Luo, and Chunxia Xiao. Illumination animating and editing in a single picture using scene structure estimation. Computers & Graphics, 82:53-64, 2019.
+[26] Chengjiang Long, Roddy Collins, Eran Swears, and Anthony Hoogs. Deep neural networks in fully connected crf for image labeling with social network metadata. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1607-1615. IEEE, 2019.
+
+[27] Chengjiang Long and Gang Hua. Multi-class multi-annotator active learning with robust gaussian process for visual recognition. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 2839–2847, 2015.
+[28] Chengjiang Long and Gang Hua. Correlational gaussian processes for cross-domain visual recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 118-126, 2017.
+[29] Chengjiang Long, Gang Hua, and Ashish Kapoor. Active visual recognition with expertise estimation in crowdsourcing. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 3000-3007. IEEE, 2013.
+[30] Chengjiang Long, Gang Hua, and Ashish Kapoor. A joint gaussian process model for active visual recognition with expertise estimation in crowdsourcing. International Journal of Computer Vision (IJCV), 116(2):136-160, 2016.
+[31] Chengjiang Long, Xiaoyu Wang, Gang Hua, Ming Yang, and Yuanqing Lin. Accurate object detection with location relaxation and regionlets re-localization. In Proceedings of the Asian Conference on Computer Vision (ACCV), pages 3000-3016. IEEE, 2014.
+[32] Stephen Robert Marschner and Donald P Greenberg. Inverse rendering for computer graphics. CiteSeer, 1998.
+[33] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
+[34] Vu Nguyen, Yago Vicente, F Tomas, Maozheng Zhao, Minh Hoai, and Dimitris Samaras. Shadow detection with conditional generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 4510-4518, 2017.
+[35] Nobuyuki Otsu. A threshold selection method from gray-level histograms. IEEE transactions on systems, man, and cybernetics, 9(1):62-66, 1979.
+[36] Alexandros Panagopoulos, Chaohui Wang, Dimitris Samaras, and Nikos Paragios. Illumination estimation and cast shadow detection through a higher-order graphical model. In Proceedings of the IEEE International Computer Vision and Pattern Recognition (CVPR), pages 673-680. IEEE, 2011.
+[37] Bui Tuong Phong. Illumination for computer generated pictures. Communications of the ACM, 18(6):311-317, 1975.
+[38] Liangqiong Qu, Jiandong Tian, Shengfeng He, Yandong Tang, and Rynson WH Lau. Deshadownet: A multi-context embedding deep network for shadow removal. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4067-4075, 2017.
+[39] Imari Sato, Yoichi Sato, and Katsushi Ikeuchi. Illumination from shadows. IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), (3):290-300, 2003.
+[40] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
+[41] Tomás F Yago Vicente, Le Hou, Chen-Ping Yu, Minh Hoai, and Dimitris Samaras. Large-scale training of shadow detectors with noisily-annotated shadow examples. In Proceedings of the European Conference on Computer Vision (ECCV), pages 816–832. Springer, 2016.
+
+[42] Paul Voigtlaender, Yuning Chai, Florian Schroff, Hartwig Adam, Bastian Leibe, and Liang-Chieh Chen. Feelvos: Fast end-to-end embedding learning for video object segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 9481-9490, 2019.
+[43] Kentaro Wada. labelme: Image Polygonal Annotation with Python. https://github.com/wkentaro/labelme, 2016.
+[44] Jifeng Wang, Xiang Li, and Jian Yang. Stacked conditional generative adversarial networks for jointly learning shadow detection and shadow removal. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1788-1797, 2018.
+[45] Henrique Weber, Donald Prévost, and Jean-François Lalonde. Learning to estimate indoor lighting from 3d objects. In International Conference on 3D Vision (3DV), pages 199-207. IEEE, 2018.
+[46] Jinjiang Wei, Chengjiang Long, Hua Zou, and Chunxia Xiao. Shadow inpainting and removal using generative adversarial networks with slice convolutions. In Computer Graphics Forum (CGF), volume 38, pages 381-392. Wiley Online Library, 2019.
+[47] Lance Williams. Casting curved shadows on curved surfaces. In Proceedings of the 5th annual conference on Computer graphics and interactive techniques, pages 270-274, 1978.
+[48] Ryan Christopher Yeoh and Steven Zhi Ying Zhou. Consistent real-time lighting for virtual objects in augmented reality. 2009.
+[49] Jinsong Zhang, Kalyan Sunkavalli, Yannick Hold-Geoffroy, Sunil Hadap, Jonathan Eisenman, and Jean-Francois Lalonde. All-weather deep outdoor lighting estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
+[50] Ling Zhang, Chengjiang Long, Xiaolong Zhang, and Chunxia Xiao. Ris-gan: Explore residual and illumination with generative adversarial networks for shadow removal. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2020.
+[51] Shuyang Zhang, Runze Liang, and Miao Wang. Shadowgan: Shadow synthesis for virtual objects with conditional adversarial networks. Computational Visual Media, 5(1):105-115, 2019.
+[52] Jiejie Zhu, Kegan GG Samuel, Syed Z Masood, and Marshall F Tappen. Learning to recognize shadows in monochromatic natural images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 223-230. IEEE, 2010.
\ No newline at end of file
diff --git a/arshadowganshadowgenerativeadversarialnetworkforaugmentedrealityinsinglelightscenes/images.zip b/arshadowganshadowgenerativeadversarialnetworkforaugmentedrealityinsinglelightscenes/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4f9f654dc3074f4778cf9de82cce941003407c9f
--- /dev/null
+++ b/arshadowganshadowgenerativeadversarialnetworkforaugmentedrealityinsinglelightscenes/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:15ffcd6ac1f6db5e3e7c9775dfd70d77c4814f6c4185cfe098ba1afd6e3bf34c
+size 598224
diff --git a/arshadowganshadowgenerativeadversarialnetworkforaugmentedrealityinsinglelightscenes/layout.json b/arshadowganshadowgenerativeadversarialnetworkforaugmentedrealityinsinglelightscenes/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..5c301f5ab9138644d912f5bff548dd121d324298
--- /dev/null
+++ b/arshadowganshadowgenerativeadversarialnetworkforaugmentedrealityinsinglelightscenes/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec8800cee21c1bed8ce2ed6dd4f3bc37be4f02673ad3ea7705b996611685154d
+size 473229
diff --git a/articulationawarecanonicalsurfacemapping/26d694ae-3c00-487b-830f-ad8eddb193ee_content_list.json b/articulationawarecanonicalsurfacemapping/26d694ae-3c00-487b-830f-ad8eddb193ee_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..1e005c7f6e5d5bc4e147513eb8a20a2155de2e89
--- /dev/null
+++ b/articulationawarecanonicalsurfacemapping/26d694ae-3c00-487b-830f-ad8eddb193ee_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:76d9fdeb6c132c72596519ed6d456fc08fc2dfc93cc210cc1a6e8baa974ffe65
+size 78529
diff --git a/articulationawarecanonicalsurfacemapping/26d694ae-3c00-487b-830f-ad8eddb193ee_model.json b/articulationawarecanonicalsurfacemapping/26d694ae-3c00-487b-830f-ad8eddb193ee_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..87a07a15cd3b3cb2f1835cac6682f76211545831
--- /dev/null
+++ b/articulationawarecanonicalsurfacemapping/26d694ae-3c00-487b-830f-ad8eddb193ee_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aad0ec5d400d4bea2872762834b51ff9322c1c33d9f23aafdb41a9cc50a5007e
+size 90036
diff --git a/articulationawarecanonicalsurfacemapping/26d694ae-3c00-487b-830f-ad8eddb193ee_origin.pdf b/articulationawarecanonicalsurfacemapping/26d694ae-3c00-487b-830f-ad8eddb193ee_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..7183cc5ed22562b8d85bcc694609f78c680a63e0
--- /dev/null
+++ b/articulationawarecanonicalsurfacemapping/26d694ae-3c00-487b-830f-ad8eddb193ee_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d58f653c31a34e86d0dd817e4c964ee3804ef06080703b50cb63374b2342875d
+size 1806182
diff --git a/articulationawarecanonicalsurfacemapping/full.md b/articulationawarecanonicalsurfacemapping/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f399eeb39e2d5fb977bd2fa4a00699f62d398c0b
--- /dev/null
+++ b/articulationawarecanonicalsurfacemapping/full.md
@@ -0,0 +1,349 @@
+# Articulation-aware Canonical Surface Mapping
+
+Nilesh Kulkarni$^{1}$ Abhinav Gupta$^{2,3}$ David F. Fouhey$^{1}$ Shubham Tulsiani$^{3}$
+
+$^{1}$ University of Michigan $^{2}$ Carnegie Mellon University $^{3}$ Facebook AI Research
+
+{nileshk, fouhey}@umich.edu abhinavg@cs.cmu.edu shubhtuls@fb.com
+
+Figure 1: We tackle the tasks of: a) canonical surface mapping (CSM) i.e. mapping pixels to corresponding points on a template shape, and b) predicting articulation of this template. Our approach allows learning these without relying on keypoint supervision, and we visualize the results obtained across several categories. The color across the template 3D model on the left and image pixels represent the predicted mapping among them, while the smaller 3D meshes represent our predicted articulations in camera (top) or a novel (bottom) view.
+
+# Abstract
+
+We tackle the tasks of: 1) predicting a Canonical Surface Mapping (CSM) that indicates the mapping from 2D pixels to corresponding points on a canonical template shape, and 2) inferring the articulation and pose of the template corresponding to the input image. While previous approaches rely on keypoint supervision for learning, we present an approach that can learn without such annotations. Our key insight is that these tasks are geometrically related, and we can obtain supervisory signal via enforcing consistency among the predictions. We present results across a diverse set of animal object categories, showing that our method can learn articulation and CSM prediction from image collections using only foreground mask labels for training. We empirically show that allowing articulation helps learn more accurate CSM prediction, and that enforcing the consistency with predicted CSM is similarly critical for learning meaningful articulation.
+
+# 1. Introduction
+
+We humans have a remarkable ability to associate our 2D percepts with 3D concepts, at both a global and a local level. As an illustration, given a pixel around the nose of the horse depicted in Figure 1 and an abstract 3D model, we can easily map this pixel to its corresponding 3D point. Further, we can also understand the global relation between the two, e.g. the 3D structure in the image corresponds to the template with the head bent down. In this work, we pursue these goals of a local and global 3D understanding, and tackle the tasks of: a) canonical surface mapping (CSM) i.e. mapping from 2D pixels to a 3D template, and b) predicting this template's articulation corresponding to the image.
+
+While several prior works do address these tasks, they do so independently, typically relying on large-scale annotation for providing supervisory signal. For example, Guler et al. [2] show impressive mappings from pixels to a template human mesh, but at the cost of hundreds of thousands of annotations. Similarly, approaches pursuing articulation inference [16, 43] also rely on keypoint annotation to enable
+
+learning. While these approaches may be used for learning about categories of special interest e.g. humans, cats etc., the reliance on such large-scale annotation makes them unscalable for generic classes. In contrast, our goal in this work is to enable learning articulation and pixel to surface mappings without leveraging such manual annotation.
+
+Our key insight is that these two forms of prediction are in fact geometrically related. The CSM task yields a dense local mapping from pixels to the template shape, and conversely, inferring the global articulation (and camera pose) indicates a transform of this template shape onto the image. We show that these two predictions can therefore provide supervisory signal for each other, and that enforcing a consistency between them can enable learning without requiring direct supervision for either of these tasks. We present an approach that operationalizes this insight, and allows us to learn CSM and articulation prediction for generic animal object categories from online image collections.
+
+We build upon our prior work [18] that, with a similar motivation, demonstrated that it is possible to learn CSM prediction without annotation, by relying on the consistency between rigid reprojections of the template shape and the predicted CSM. However, this assumed that the object in an image is rigid, e.g. does not have a bent head or moving leg, which restricts the applicability and accuracy for objects that exhibit articulation. In contrast, we explicitly allow predicting articulations and incorporate them before enforcing such consistency; our approach thereby: a) helps us learn articulation prediction without supervision, and b) leads to more accurate CSM inference. We present qualitative and quantitative results across diverse classes indicating that we learn accurate articulation and pixel to surface mappings across these. Our approach allows us to learn using ImageNet [6] images with approximate segmentations from off-the-shelf systems, thereby enabling learning in setups that previous supervised approaches could not tackle, and we believe this is a step towards large, internet-scale 3D understanding.
+
+# 2. Related Work
+
+Pose and Articulation Prediction. One of the tasks we address is that of inferring the camera pose and articulation corresponding to an input image. The task of estimating pose for rigid objects has been central to understanding objects in 3D scenes, and addressed by several works over the decades, from matching based methods [12, 27, 37], to recent CNN based predictors [29, 32]. Closer to our work, a natural generalization of this task towards animate objects is to also reason about their articulation i.e. movement of parts, and a plethora of fitting based [4, 15] or prediction based [14, 36, 43] methods have been proposed to tackle this. While these show impressive results across challenging classes, these methods crucially rely on (often dense)
+
+2D keypoint annotations for learning, and sometimes even inference. Our goal is to learn such a prediction without requiring this supervision. We show that enforcing consistency with a dense pixel to 3D mapping allows us to do so.
+
+Dense Mappings and Correspondences. In addition to learning articulation, we predict a per-pixel mapping to a template shape. Several previous approaches similarly pursue pixel to surface [2, 23, 24, 28, 42] or volume [35] mappings, but unlike our approach, crucially rely on direct supervision towards this end. Note that these mappings also allow one to recover correspondences across images, as corresponding pixels have similar representations. Towards this general goal of learning representations that respect correspondence, several prior works attempt to design features [22], or learn features invariant to camera movement [9, 38] or synthetic transforms [30]. While the latter approaches can be leveraged without supervision, the embedding does not enforce a geometric structure, which is what crucially helps us jointly learn articulation and pose. Closer to our work, Kulkarni et al. [18] learn a similar mapping without direct supervision but unlike us, ignore the effects of articulation, which we model to obtain more accurate results.
+
+Reconstructing Objects in 3D. Our approach can be considered as predicting a restricted form of 3D reconstruction from images, by 'reconstructing' the 3D shape in the form of an articulated template shape and its pose. There are many existing approaches which tackle more general forms of 3D prediction, ranging from volumetric prediction [5, 10] to point cloud inference [8, 20]. Perhaps more directly related to our representation is the line of work that, following the seminal work of Blanz and Vetter [3], represents the 3D in the form of a morphable model, jointly capturing articulation and deformation [21, 25, 26]. While all these approaches yield more expressive 3D than our approach, they typically rely on 3D supervision for training. Even methods that attempt to relax this [16, 33, 39] need to leverage multi-view or keypoint supervision for learning, and in this work, we attempt to also relax this requirement.
+
+# 3. Approach
+
+Given an input image $I$ , our goal is to infer: (1) a per-pixel correspondence $C$ , mapping each pixel in $I$ to a point on the template; (2) an articulation $\delta$ of the 3D template, as well as a camera pose $\pi = (s, \mathbf{R}, t)$ that represents how the object appears in or projects into the image. We operationalize this with two deep networks $f_{\theta}$ and $g_{\theta'}$ that take as input image $I$ , and produce $C \equiv f_{\theta}(I)$ and $\delta, \pi \equiv g_{\theta'}(I)$ respectively. Instead of requiring large-scale manual keypoint annotations for learning these mappings, we strive for an approach that can learn without such keypoint labels, using only category-level image collections with (possibly noisy) foreground masks. Our key insight is that the two tasks of
+
+predicting pixel to 3D template mappings and transformations of template to image frame are geometrically related, and we can enforce consistency among the predictions to obtain supervisory signal for both. Recent work by Kulkarni et al. [18] leveraged a similar insight to learn CSM prediction, but assumed a rigid template, which is a fundamentally limiting assumption for most animate object classes. We present an approach that further allows the model to articulate, and observe that this enables us to both learn about articulation without supervision, and recover more accurate pixel to surface mappings.
+
+The core loss and technique is a geometric consistency loss that synchronizes the CSM, articulation and pose, which we present along with our articulation parametrization in Section 3.1. We then describe how we train $f_{\theta}$ and $g_{\theta'}$ in Section 3.2, which builds on this core loss by adding auxiliary losses based on mask supervision and shows how our approach can be extended to incorporate sparse keypoint supervision if available.
+
+Mesh Preliminaries. We note that the surface of a mesh is a 2D manifold in 3D space and we can therefore construct a 2D parametrization of a 3D surface as $\phi :[0,1)^2\to S$ . This maps a 2D vector $\mathbf{u}$ to a unique point on the surface of the template shape $S$ . Given such a surface parametrization, a canonical surface mapping $C$ is defined as a 2D vector image, such that for a given pixel $\mathbf{p}$ , $\phi (C[\mathbf{p}])$ is its corresponding 3D point on the template. Please see the supplemental for additional details on constructing $\phi$ for a template shape.
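+
+For illustration, a minimal sketch of such a parametrization, assuming a precomputed lookup from a discretised uv grid to mesh faces and barycentric coordinates (the array names `uv_faces` and `uv_bary` are illustrative, not part of the released code):
+
+```python
+import numpy as np
+
+def phi(uv, uv_faces, uv_bary, verts, faces):
+    """Map u in [0, 1)^2 to a point on the template surface S (sketch)."""
+    res = uv_faces.shape[0]
+    i = np.clip((uv[..., 0] * res).astype(int), 0, res - 1)
+    j = np.clip((uv[..., 1] * res).astype(int), 0, res - 1)
+    tri = verts[faces[uv_faces[i, j]]]   # (..., 3, 3) corners of the covering face
+    bary = uv_bary[i, j][..., None]      # (..., 3, 1) barycentric weights
+    return (bary * tri).sum(axis=-2)     # (..., 3) 3D point on the surface
+```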
+
+# 3.1. Articulation-aware Geometric Consistency
+
+Articulation Parametrization. Given a template shape in the form of a mesh, we approximately group its vertices into functional 'parts' e.g. head, neck, legs etc., as well as define a hierarchy among these parts. While our initial grouping is discrete, following standard practices in computer graphics [19], we 'soften' the per-vertex assignment as depicted in Figure 2. Assuming $K$ parts, this 'rigging' procedure yields, for each mesh vertex $v$ , the associated memberships $\alpha_{k}^{v}\in [0,1]$ corresponding to each part. Note that this annotation procedure is easily scalable, requiring only a few minutes per category (for a non-expert annotator). The articulation $\delta$ of this template is specified by a rigid transformation (translation and rotation) of each part w.r.t. its parent part i.e. $\delta \equiv \{(t_k,R_k)\}$ , with the 'body' being the root part. Given (predicted) articulation parameters $\delta$ , we can compute a global transformation $\mathcal{T}_k(\cdot ,\delta)$ for each part, s.t. a point $p$ on the part in the canonical template moves to $\mathcal{T}_k(p,\delta)$ in the articulated template (see supplemental for details). Therefore, given a vertex $v$ on the canonical template mesh, we can compute its position after articulation as $\sum_{k}\alpha_{k}^{v}\mathcal{T}_{k}(v,\delta)$ . We can extend this definition for any point $p$ on the surface using barycentric interpolation (see sup
+
+
+Figure 2: Sample per-part vertex assignments. We show softened per-vertex assignment to various parts of quadrupeds. This pre-computed soft assignment enables us to obtain smooth deformations of the template mesh across the part boundaries under articulation.
+
+
+Figure 3: Illustration of surface parametrization and articulation. Given a 2D coordinate $\mathbf{u} \in [0,1]^2$ , the function $\phi$ maps it to the surface of a template shape, which can then be transformed according to the articulation $\delta$ specified. We depict here the mappings from this 2D space to the articulated shapes for two possible articulations: horse with moving legs, and sheep with a bent head.
+
+plemental). We slightly overload notation for convenience, and denote by $\delta(p)$ the position of any point $p \in S$ after undergoing articulation specified by $\delta$ .
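+
+For concreteness, the articulation can be sketched as linear blend skinning over the part hierarchy; in this illustrative version (not the authors' code) the joint pivots are assumed to be folded into the per-part translations, and parts are ordered so that parents precede children:
+
+```python
+import numpy as np
+
+def compose_part_transforms(delta, parent):
+    """Compose per-part rigid transforms (R_k, t_k), given w.r.t. the parent
+    part, into global transforms T_k (the root part has parent index -1)."""
+    K = len(delta)
+    R_glob, t_glob = [None] * K, [None] * K
+    for k in range(K):                     # parents are assumed to come first
+        R_k, t_k = delta[k]
+        if parent[k] < 0:                  # the 'body' root part
+            R_glob[k], t_glob[k] = R_k, t_k
+        else:
+            Rp, tp = R_glob[parent[k]], t_glob[parent[k]]
+            R_glob[k], t_glob[k] = Rp @ R_k, Rp @ t_k + tp
+    return R_glob, t_glob
+
+def articulate(verts, alpha, R_glob, t_glob):
+    """Blend part transforms with soft memberships alpha (V, K):
+    v' = sum_k alpha[v, k] * (R_k @ v + t_k)."""
+    out = np.zeros_like(verts)
+    for k, (R, t) in enumerate(zip(R_glob, t_glob)):
+        out += alpha[:, k:k + 1] * (verts @ R.T + t)
+    return out
+```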
+
+Canonical to Articulated Surface Mapping. For any 2D vector $\mathbf{u} \in [0,1)^2$ , we can map it to the template shape via $\phi$ . If the shape has undergone an articulation specified by $\delta$ , we can map this vector to a point on the articulated shape by composing the articulation and mapping, or $\delta(\phi(\mathbf{u}))$ . We depict this in Figure 3, and show the mapping from the 2D space to the template under various articulations. Given a pixel to canonical surface mapping $C$ , we can therefore recover for a pixel $\mathbf{p}$ its corresponding point on the articulated shape as $\delta(\phi(C[\mathbf{p}]))$ .
+
+Geometric Consistency. The canonical surface mapping defines a $2\mathrm{D} \rightarrow 3\mathrm{D}$ mapping from a pixel to a point on the 3D mesh; we show how to use cameras observing the mesh to define a cycle-consistent loss from each mesh point to a pixel. In particular, the canonical surface mapping $C$ maps pixels to the corresponding 3D points on the (unarticulated) template. In the other direction, a (predicted) articulation $\delta$ and camera parameters $\pi$ define a mapping from this canonical shape to the image space: the mesh deforms
+
+
+Figure 4: Articulation-aware Geometric Cycle Consistency. Given an image pixel, we can map it to a point on the surface of the template shape using the predicted CSM mapping and $\phi$ . We then articulate the surface using $\delta$ to map points on the surface of template shape to the articulated shape. The inconsistency arising from the reprojection of points from articulated shape under the camera $\pi$ yields the geometric cycle consistency loss, $L_{\mathrm{gcc}}$ .
+
+and is then projected back into the image. Ideally, for any pixel $\mathbf{p}$ , this 3D mapping to the template followed by articulation and projection should yield the original pixel location if the predictions are geometrically consistent. We call this constraint geometric cycle consistency (GCC).
+
+We can operationalize this to measure the inconsistency between a canonical surface mapping $C$ , articulation $\delta$ and camera $\pi$ , as shown in Figure 4. Given a foreground pixel $\mathbf{p}$ , its corresponding point on the template shape can be computed as $\phi(C[\mathbf{p}])$ , and on the articulated shape as $\delta(\phi(C[\mathbf{p}]))$ . Given the (predicted) camera $\pi$ , we can compute its reprojection in the image frame as $\pi(\delta(\phi(C[\mathbf{p}])))$ . We then penalize the difference between the initial and the reprojected pixel location to enforce consistency.
+
+$$
+L_{\mathrm{gcc}} = \sum_{\mathbf{p} \in I_{f}} \| \mathbf{p} - \bar{\mathbf{p}} \|; \quad \bar{\mathbf{p}} = \pi(\delta(\phi(C[\mathbf{p}]))) \tag{1}
+$$
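+
+A minimal sketch of Eq. 1 (averaged here rather than summed over foreground pixels; `phi_fn`, `articulate_fn` and `camera_fn` are assumed differentiable implementations of $\phi$, $\delta$ and $\pi$):
+
+```python
+import torch
+
+def gcc_loss(pixels, csm_uv, phi_fn, articulate_fn, camera_fn, delta, pi):
+    """Geometric cycle consistency: pixel -> template -> articulated shape -> image."""
+    points_3d = phi_fn(csm_uv)                    # (N, 3) points on the template
+    points_art = articulate_fn(points_3d, delta)  # (N, 3) after articulation
+    reproj = camera_fn(points_art, pi)            # (N, 2) reprojected pixels
+    return (pixels - reproj).norm(dim=-1).mean()  # reprojection error
+```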
+
+# 3.2. Learning CSM and Articulation Prediction
+
+Recall that our goal is to train a predictor $f_{\theta}$ that predicts the CSM $C$ and a predictor $g_{\theta'}$ that predicts the articulation $\delta$ and camera $\pi$ . Our approach, as illustrated in Figure 5, learns these using $L_{\mathrm{gcc}}$ that enforces consistency among the predictions. We additionally have to add auxiliary losses based on foreground mask supervision to prevent trivial or degenerate solutions. These losses penalize the discrepancy between the annotated masks and masks rendered from the articulated mesh. We describe the learning procedure and objectives in more detail below, and then discuss incorporating keypoint supervision if available.
+
+Visibility Constraints. The GCC reprojection can be consistent
+
+
+Figure 5: Overview of our approach. Our approach A-CSM jointly learns to predict the CSM mapping, a camera, and the articulation. We require that these predictions be consistent with each other by enforcing the $L_{\mathrm{cyc}}$ and $L_{\mathrm{mask}}$ constraint.
+
+even under mappings to an occluded region, e.g. if the pixel considered in Figure 4 were mapped to the other side of the horse's head, its image reprojection may still be consistent. To discourage such mappings to invisible regions, we follow Kulkarni et al. [18] and incorporate a visibility loss $L_{\mathrm{vis}}$ that penalizes inconsistency between the reprojected and rendered depth (for more details see supplemental).
+
+Overcoming Ambiguities via Mask Supervision. Simply enforcing self-consistency among all predictions, in the absence of any grounding, can lead to degenerate solutions. Hence, we require the foreground mask rendered under camera $(\pi)$ from the template shape after articulation $(\delta)$ to match the annotated foreground mask. As we want to encourage more precise articulation, we find it beneficial to measure the difference between the 2D distance fields induced by the foreground masks instead of simply comparing the per-pixel binary values, and define an objective $L_{\mathrm{mask}}$ to capture this. This objective is the sum of the mask-consistency and mask-coverage objectives defined in [17]. We describe it in further detail in the supplemental.
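+
+As an illustrative stand-in for this objective (the exact formulation follows [17] and the supplemental), a distance-field comparison of two binary masks can be sketched as:
+
+```python
+import numpy as np
+from scipy.ndimage import distance_transform_edt
+
+def mask_loss(rendered_mask, gt_mask):
+    """Penalise rendered foreground outside the annotated mask (consistency)
+    and annotated foreground not covered by the rendering (coverage)."""
+    dt_gt = distance_transform_edt(1 - gt_mask)          # distance to GT foreground
+    dt_rend = distance_transform_edt(1 - rendered_mask)  # distance to rendered foreground
+    consistency = (rendered_mask * dt_gt).mean()
+    coverage = (gt_mask * dt_rend).mean()
+    return consistency + coverage
+```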
+
+Learning Objective. Our overall training objective $L_{\mathrm{total}}$ then minimizes a combination of the above losses:
+
+$$
+L_{\text{total}} = L_{\mathrm{gcc}} + L_{\mathrm{vis}} + L_{\mathrm{mask}} \tag{2}
+$$
+
+Additionally, instead of learning a camera and deformation predictor $g_{\theta'}$ that predicts a unique output, we follow previous approaches [13, 18, 31] to learn a multi-hypothesis predictor, that helps overcome local minima. Concretely, $g_{\theta'}(I)$ outputs 8 (pose, deformation) hypotheses, $\{(\pi_i, \delta_i)\}$ ,
+
+and an associated probability $c_{i}$ , and we minimize the expected loss across these.
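+
+The expected loss over hypotheses can be sketched as follows (assuming the per-hypothesis losses have already been computed):
+
+```python
+import torch
+
+def expected_hypothesis_loss(per_hypothesis_losses, logits):
+    """per_hypothesis_losses: (B, 8) loss of each (pose, articulation) hypothesis;
+    logits: (B, 8) unnormalised scores, softmaxed into the probabilities c_i."""
+    probs = torch.softmax(logits, dim=-1)
+    return (probs * per_hypothesis_losses).sum(-1).mean()
+```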
+
+Leveraging Optional Keypoint (KP) Supervision. While we are primarily interested in learning without any manual keypoint annotations, our approach can be easily extended when additional annotations for some set of semantic keypoints, e.g. nose, left eye, etc., are available. To leverage these, we manually define the set of corresponding 3D points $X$ on the template for these semantic 2D keypoints. Given an input image with 2D annotations $\{x_{i}\}$, we can leverage these for learning. We do so by adding an objective that ensures the projection of the corresponding 3D keypoints under the predicted camera pose $\pi$ , after articulation, is consistent with the available 2D annotations. We denote by $\mathcal{I}$ the indices of the visible keypoints and formalize the objective as:
+
+$$
+L_{kp} = \sum_{i \in \mathcal{I}} \| x_{i} - \pi(\delta(X_{i})) \| \tag{3}
+$$
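+
+A sketch of this optional objective, with `vis` a 0/1 visibility indicator over keypoints and the same assumed helpers as in the earlier sketch:
+
+```python
+import torch
+
+def keypoint_loss(kp_2d, vis, kp_3d, articulate_fn, camera_fn, delta, pi):
+    """Reproject the manually defined 3D template keypoints after articulation
+    and compare them with the visible 2D annotations (Eq. 3)."""
+    proj = camera_fn(articulate_fn(kp_3d, delta), pi)   # pi(delta(X_i))
+    err = (kp_2d - proj).norm(dim=-1)                   # per-keypoint error
+    return (err * vis).sum() / vis.sum().clamp(min=1)   # visible keypoints only
+```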
+
+In scenarios where such supervision is available, we observe that our approach enables us to easily leverage it for learning. While we later empirically examine such scenarios and highlight consistent benefits of allowing articulation in these, all visualizations in the paper are in a keypoint-free setting where this additional loss is not used.
+
+Implementation Details. We use a ResNet18 [11] based encoder and a convolutional decoder to implement the per-pixel CSM predictor $f_{\theta}$ and another instantiation of ResNet18 based encoder for the deformation and camera predictor $g_{\theta'}$ . We describe these in more detail in the supplemental and links to code are available on the webpage.
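+
+A rough skeleton of the CSM predictor $f_{\theta}$ (purely illustrative; the actual decoder design and training details follow the supplemental and the released code):
+
+```python
+import torch.nn as nn
+from torchvision.models import resnet18
+
+class CSMPredictor(nn.Module):
+    """ResNet18 encoder + convolutional decoder producing a 2-channel uv map."""
+    def __init__(self):
+        super().__init__()
+        base = resnet18(weights=None)
+        self.encoder = nn.Sequential(*list(base.children())[:-2])   # (B, 512, H/32, W/32)
+        self.decoder = nn.Sequential(
+            nn.Upsample(scale_factor=32, mode='bilinear', align_corners=False),
+            nn.Conv2d(512, 64, 3, padding=1), nn.ReLU(inplace=True),
+            nn.Conv2d(64, 2, 3, padding=1), nn.Sigmoid())            # uv in (0, 1)
+
+    def forward(self, img):
+        return self.decoder(self.encoder(img))
+```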
+
+# 4. Experiments
+
+Our approach allows us to: a) learn a CSM prediction indicating mapping from each pixel to corresponding 3D point on template shape, and b) infer the articulation and pose that transforms the template to the image frame. We present experiments that evaluate both these aspects, and we empirically show that: a) allowing for articulation helps learn accurate CSM prediction (Section 4.2), and b) we learn meaningful articulation, and that enforcing consistency with CSM is crucial for this learning (Section 4.3).
+
+# 4.1. Datasets and Inputs
+
+We create our dataset out of existing datasets – CUB-200-2011 [34], PASCAL [7], and Imagenet [6], which we divide into two sets by animal species. The first, (Set 1) are birds, cows, horses and sheep, on which we report quantitative results. To demonstrate generality, we also have (Set 2), or other animals on which we show qualitative results. Animals in Set 1 have keypoints available, which enable both quantitative results and experiments that test our model in
+
+the presence of keypoints. Animals in Set 2 do not have keypoints, and we show qualitative results. Throughout, we follow the underlying dataset's training and testing splits to ensure meaningful results.
+
+Birds. We use the CUB-200-2011 dataset for training and testing on birds (using the standard splits). It comprises 6000 images across 200 species, as well as foreground mask annotations (used for training), and annotated keypoints (used for evaluation, and optionally in training).
+
+Set 1 Quadrupeds (Cows, Horses, Sheep). We combine images from PASCAL VOC and Imagenet. We use the VOC masks and masks on Imagenet produced from a COCO trained Mask RCNN model. When we report experiments that additionally leverage keypoints during training for these classes, they only use this supervision on the VOC training subset of images (and are therefore only 'semi-supervised' in terms of keypoint annotations).
+
+Set 2 Quadrupeds (Hippos, Rhinos, Kangaroos, etc.). We use images from Imagenet. In order to obtain masks for these animals, we annotate coarse masks for around 300 images per category, and then train a Mask RCNN by combining all these annotations into a single class, thus predicting segmentations for a generic 'quadruped' category.
+
+Filtering. Throughout, we filter images with one large untruncated and largely unoccluded animal (i.e., some grass is fine).
+
+Template Shapes. We download models for all categories from [1]. We partition the quadrupeds to have 7 parts corresponding to torso, 4 legs, head, and a neck (see Figure 2 for examples of some of these). For the elephant model, we additionally mark two more parts for the trunk without a neck. Our birds model has 3 parts (head, torso, tail).
+
+# 4.2. Evaluating CSM via Correspondence Transfer
+
+The predicted CSMs represent a per-pixel correspondence to the 3D template shape. Unfortunately, directly evaluating these requires dense annotation which is difficult to acquire, but we note that this prediction also allows one to infer dense correspondence across images. Therefore, we can follow the evaluation protocol typically used for measuring image to image correspondence quality [40, 41], and indirectly evaluate the learned mappings by measuring the accuracy for transferring annotated keypoints from a source to a target image as shown in Figure 7.
+
+Keypoint Transfer using CSM Prediction. Given a source and a target image, we want to transfer an annotated keypoint from the source to the target using the predicted pixelwise mapping. Intuitively, given a query source pixel, we can recover its corresponding 3D point on the template using the predicted CSM, and can then search over the target image for the pixel that is predicted to map the closest (described formally in the supplemental). Given some keypoint
+
+Figure 6: Induced part labeling. Our CSM inference allows inducing pixel-wise semantic part predictions. We visualize the parts of the template shape in the $1^{\mathrm{st}}$ and $5^{\mathrm{th}}$ columns, and the corresponding induced labels on the images via corresponding 3D point.
+
+Figure 7: Visualizing Keypoint Transfer. We transfer keypoints from the 'source' image to target image. Keypoint Transfer comparison between Rigid-CSM [18] and A-CSM (Ours). We see that the inferred correspondences as a result of modeling articulation are more accurate, for example note the keypoint transfers for the head of the sheep and horse.
+
+annotations on one image, we can therefore predict corresponding points on another.
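+
+A minimal sketch of this transfer procedure (a brute-force nearest-neighbour search over target foreground pixels; variable names are illustrative):
+
+```python
+import torch
+
+def transfer_keypoint(src_px, csm_src, csm_tgt, phi_fn, tgt_mask):
+    """src_px: (x, y) source pixel; csm_src / csm_tgt: (H, W, 2) predicted uv maps;
+    tgt_mask: (H, W) boolean target foreground mask."""
+    query_3d = phi_fn(csm_src[src_px[1], src_px[0]])    # (3,) point on the template
+    ys, xs = torch.nonzero(tgt_mask, as_tuple=True)     # candidate target pixels
+    cand_3d = phi_fn(csm_tgt[ys, xs])                   # (M, 3) their template points
+    best = (cand_3d - query_3d).norm(dim=-1).argmin()   # nearest point on the template
+    return xs[best].item(), ys[best].item()
+```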
+
+Evaluation Metric. We use the 'Percentage of Correct Keypoint Transfers' (PCK-Transfer) metric to indirectly evaluate the learned CSM mappings. Given several source-target image pairs, we transfer annotated keypoints from the source to the target, and label a transfer as 'correct' if the predicted location is within $0.1 \times \max(w, h)$ distance of the ground-truth location. We report our performance over 10K source-target pairs.
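+
+The correctness criterion used for a single transfer can be written as:
+
+```python
+def pck_correct(pred_xy, gt_xy, w, h, alpha=0.1):
+    """A transfer is correct if the prediction lies within alpha * max(w, h)
+    of the ground-truth location."""
+    dist = ((pred_xy[0] - gt_xy[0]) ** 2 + (pred_xy[1] - gt_xy[1]) ** 2) ** 0.5
+    return dist <= alpha * max(w, h)
+```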
+
+Baselines. We report comparisons against two alternate approaches that leverage a similar form of supervision. First, we compare against Rigid-CSM [18], which learns similar pixel to surface mappings but without allowing model articulation. The implementation of this baseline simply corresponds to using our training approach, but without the articulation $\delta$ . We also compare against the Dense Equivariance
+
+(DE) [30] approach that learns self-supervised mappings from pixels to an implicit (and non-geometric) space.
+
+Results. We report the empirical results obtained under two settings: with, and without keypoint supervision for learning in Table 1. We find that across both these settings, our approach of learning pixel to surface mappings using articulation-aware geometric consistency improves over learning using articulation-agnostic consistency. We also find that our geometry-aware approach performs better than learning equivariant embeddings using synthetic transforms. We visualize keypoint transfer results in Figure 7 and observe accurate transfers despite different articulation e.g. for the horse head, we can accurately transfer the keypoints despite it being bent in the target and not in the source. The Rigid-CSM [18] baseline however, does not do so successfully. We also visualize the induced part labeling by transferring part labels from 3D models to image pixels shown in Figure 6.
+
+# 4.3. Articulation Evaluation via Keypoint Reprojection
+
+Towards analyzing the fidelity of the learned articulation (and pose), we observe that under accurate predictions, annotated 2D keypoints in images should match re-projection of manually defined 3D keypoints on the template. We therefore measure whether the 3D keypoints on the articulated template, when reprojected back with the predicted camera pose, match the 2D annotations. Using this metric, we address: a) does allowing articulation help accuracy? and b) is joint training with CSM consistency helpful?
+
+Evaluation Metrics. We again use the 'Percentage of Correct Keypoints' (PCK) metric to evaluate the accuracy of the 3D keypoints of the template when articulated and reprojected. For each test image with available 2D keypoint annotations, we obtain reprojections of the 3D points, and label a reprojection correct if the predicted location is within $0.1 \times \max(w, h)$ distance of the ground-truth. Note that unlike 'PCK-Transfer', this evaluation is done per-image.
+
+Do we learn meaningful articulation? We report the keypoint
+
+
+Figure 8: Sample Results. We demonstrate our approach to learn the CSM mapping and articulation over a wide variety of non-rigid objects. The figure depicts: a) category-level the template shape on the left, b) per-image CSM prediction where colors indicate correspondence, and c) the predicted articulated shape from camera and a novel view.
+
+Table 1: PCK-Transfer for Evaluating CSM Prediction. We evaluate the transfer of keypoints from a source and target image, and report the transfer accuracy as PCK transfer as described in Section 4.2. Higher is better
+
+| Supv | Method | Birds | Horses | Cows | Sheep |
+| --- | --- | --- | --- | --- | --- |
+| KP + Mask | Rigid-CSM [18] | 45.8 | 42.1 | 28.5 | 31.5 |
+| KP + Mask | A-CSM (ours) | 51.0 | 44.6 | 29.2 | 39.0 |
+| Mask | Rigid-CSM [18] | 36.4 | 31.2 | 26.3 | 24.7 |
+| Mask | Dense-Equi [30] | 33.5 | 23.3 | 20.9 | 19.6 |
+| Mask | A-CSM (ours) | 42.6 | 32.9 | 26.3 | 28.6 |
+
+Table 2: Articulation Evaluation. We compute PCK under reprojection of manually annotated keypoints on the mesh as described in Section 4.3. Higher is better.
+
+| Supv | Method | Birds | Horses | Cows | Sheep |
+| --- | --- | --- | --- | --- | --- |
+| KP + Mask | Rigid-CSM [18] | 68.5 | 46.4 | 52.6 | 47.9 |
+| KP + Mask | A-CSM (ours) | 72.4 | 57.3 | 56.8 | 57.4 |
+| Mask | Rigid-CSM [18] | 50.9 | 49.7 | 37.4 | 36.4 |
+| Mask | A-CSM (ours) | 46.8 | 54.2 | 41.5 | 42.5 |
+
+Table 3: Effect of $L_{gcc}$ for Learning Articulation. We report performance of our method, and compare it with a variant trained without the geometric cycle loss.
+
+| Supv | Method | Birds | Horses | Cows | Sheep |
+| --- | --- | --- | --- | --- | --- |
+| KP + Mask | A-CSM (ours) | 72.4 | 57.3 | 56.8 | 57.4 |
+| KP + Mask | A-CSM w/o GCC | 72.2 | 35.5 | 56.6 | 54.5 |
+| Mask | A-CSM (ours) | 47.5 | 54.2 | 43.8 | 42.5 |
+| Mask | A-CSM w/o GCC | 12.9 | 24.8 | 18.7 | 16.6 |
+
+reprojection accuracy across classes under settings with different forms of supervision in Table 2. We compare against the alternate approach of not modeling articulations, and observe that our approach yields more accurate predictions, thereby highlighting that we do learn meaningful articulation. One exception is for 'birds' when training without keypoint supervision, but we find this to occur because of some ambiguities in defining the optimal 3D keypoint on the template for annotations such as 'back', 'wing', etc., and we found that our model simply learned a slightly different (but consistent) notion of pose, leading to suboptimal evaluation. We also show several qualitative results in Figure 8 and Figure 1 that depict articulations of the canonical mesh for various input images, and observe that we can learn to articulate parts like moving legs, elephant trunks and animal heads; these results clearly highlight that we can learn articulation using our approach.
+
+Does consistency with CSM help learn articulation?
+
+The cornerstone of our approach is that we can obtain supervisory signal by enforcing consistency among predicted CSM, articulation, and pose. However, another source of signal for learning articulation (and pose) is the mask supervision. We therefore investigate whether this joint consistency is useful for learning, or whether just the mask supervision can suffice. We train a variant of our model 'A-CSM w/o GCC' where we only learn the pose and articulation predictor $g$ , without the cycle consistency loss. We report the results obtained under two supervision settings in Table 3, and find that when keypoint supervision is available, using the consistency gives modest improvements. However, when keypoint supervision is not available, we observe that this consistency is critical for learning articulation (and pose), and that performance in settings without keypoint supervision drops significantly if not enforced.
+
+# 4.4. Learning from Imagenet
+
+As our approach enables learning pixel to surface mappings and articulation without requiring keypoint supervision, we can learn these from a category-level image collection e.g. ImageNet, using automatically obtained segmentation masks. We used our 'quadruped' trained Mask-RCNN to obtain (noisy) segmentation masks per instance. We then use our approach to learn articulation and canonical surface mapping for these classes. We show some results in Figure 1 and Figure 8, where all classes except (birds, horse, sheep, cow) were trained using only ImageNet images. We observe that even under this setting with limited and noisy supervision, our approach enables us to learn meaningful articulation and consistent CSM prediction.
+
+# 5. Discussion
+
+We presented an approach to jointly learn prediction of canonical surface mappings and articulation, without direct supervision, by instead enforcing consistency among the predictions. While enabling articulations allowed us to go beyond explaining pixelwise predictions via reprojections of a rigid template, the class of transformations allowed may still be restrictive in case of intrinsic shape variation. An even more challenging scenario where our approach is not directly applicable is for categories where a template is not well-defined e.g. chairs, and future attempts could investigate enabling learning over these. Finally, while our focus was to demonstrate results in setting without direct supervision, our techniques may also be applicable in scenarios where large-scale annotation is available, and can serve as further regularization or a mechanism to include even more unlabelled data for learning.
+
+Acknowledgements. We would like to thank the members of the Fouhey AI lab (FAIL), CMU Visual Robot Learning lab and anonymous reviewers for helpful discussions and feedback. We also thank Richard Higgins for his help with varied quadruped category suggestions and annotating 3D models.
+
+# References
+
+[1] Free3d.com. http://www.free3d.com.
+[2] Ríza Alp Güler, Natalia Neverova, and Iasonas Kokkinos. Densepose: Dense human pose estimation in the wild. In CVPR, 2018.
+[3] Volker Blanz and Thomas Vetter. A morphable model for the synthesis of 3d faces. In SIGGRAPH, 1999.
+[4] Federica Bogo, Angjoo Kanazawa, Christoph Lassner, Peter Gehler, Javier Romero, and Michael J Black. Keep it smpl: Automatic estimation of 3d human pose and shape from a single image. In ECCV. Springer, 2016.
+[5] Christopher B Choy, JunYoung Gwak, Silvio Savarese, and Manmohan Chandraker. Universal correspondence network. In NeurIPS, 2016.
+[6] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
+[7] Mark Everingham, SM Ali Eslami, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes challenge: A retrospective. IJCV, 2015.
+[8] Haoqiang Fan, Hao Su, and Leonidas J Guibas. A point set generation network for 3d object reconstruction from a single image. In CVPR, 2017.
+[9] Peter R Florence, Lucas Manuelli, and Russ Tedrake. Dense object nets: Learning dense visual object descriptors by and for robotic manipulation. CoRL, 2018.
+[10] Rohit Girdhar, David F Fouhey, Mikel Rodriguez, and Abhinav Gupta. Learning a predictable and generative vector representation for objects. In ECCV. Springer, 2016.
+[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
+[12] Daniel P Huttenlocher and Shimon Ullman. Recognizing solid objects by alignment with an image. IJCV, 1990.
+[13] Eldar Insafutdinov and Alexey Dosovitskiy. Unsupervised learning of shape and pose with differentiable point clouds. In NeurIPS, 2018.
+[14] Angjoo Kanazawa, Michael J. Black, David W. Jacobs, and Jitendra Malik. End-to-end recovery of human shape and pose. In CVPR, 2018.
+[15] Angjoo Kanazawa, Shahar Kovalsky, Ronen Basri, and David Jacobs. Learning 3d deformation of animals from 2d images. In Eurographics, volume 35, pages 365-374. Wiley Online Library, 2016.
+[16] Angjoo Kanazawa, Shubham Tulsiani, Alexei A. Efros, and Jitendra Malik. Learning category-specific mesh reconstruction from image collections. In ECCV, 2018.
+[17] Abhishek Kar, Shubham Tulsiani, Joao Carreira, and Jitendra Malik. Category-specific object reconstruction from a single image. In CVPR, 2015.
+[18] Nilesh Kulkarni, Abhinav Gupta, and Shubham Tulsiani. Canonical surface mapping via geometric cycle consistency. In ICCV, 2019.
+[19] John P Lewis, Matt Cordner, and Nickson Fong. Pose space deformation: a unified approach to shape interpolation and skeleton-driven deformation. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pages 165-172. ACM Press/Addison-Wesley Publishing Co., 2000.
+[20] Chen-Hsuan Lin, Chen Kong, and Simon Lucey. Learning efficient point cloud generation for dense 3d object reconstruction. In AAAI, 2018.
+[21] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J. Black. SMPL: A skinned multiperson linear model. SIGGRAPH Asia, 2015.
+[22] G Lowe. Sift-the scale invariant feature transform. IJCV, 2004.
+[23] Haggai Maron, Meirav Galun, Noam Aigerman, Miri Trope, Nadav Dym, Ersin Yumer, Vladimir G Kim, and Yaron Lipman. Convolutional neural networks on surfaces via seamless toric covers. 2017.
+[24] Natalia Neverova, James Thewlis, Riza Alp Guler, Iasonas Kokkinos, and Andrea Vedaldi. Slim densepose: Thrifty learning from sparse annotations and motion cues. In CVPR, 2019.
+[25] Markus Oberweger, Paul Wohlhart, and Vincent Lepetit. Hands deep in deep learning for hand pose estimation. arXiv preprint arXiv:1502.06807, 2015.
+[26] Georgios Pavlakos, Vasileios Choutas, Nima Ghorbani, Timo Bolkart, Ahmed AA Osman, Dimitrios Tzionas, and Michael J Black. Expressive body capture: 3d hands, face, and body from a single image. In CVPR, 2019.
+[27] Bojan Pepik, Michael Stark, Peter Gehler, and Bernt Schiele. Teaching 3d geometry to deformable part models. In CVPR, 2012.
+[28] Ayan Sinha, Asim Unmesh, Qixing Huang, and Karthik Ramani. Surfnet: Generating 3d shape surfaces using deep residual networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6040-6049, 2017.
+[29] Hao Su, Charles R Qi, Yangyan Li, and Leonidas J Guibas. Render for cnn: Viewpoint estimation in images using cnns trained with rendered 3d model views. In ICCV, 2015.
+[30] James Thewlis, Hakan Bilen, and Andrea Vedaldi. Unsupervised learning of object frames by dense equivariant image labelling. In NeurIPS, 2017.
+[31] Shubham Tulsiani, Alexei A. Efros, and Jitendra Malik. Multi-view consistency as supervisory signal for learning shape and pose prediction. In CVPR, 2018.
+[32] Shubham Tulsiani and Jitendra Malik. Viewpoints and keypoints. In CVPR, 2015.
+[33] Shubham Tulsiani, Tinghui Zhou, Alexei A. Efros, and Jitendra Malik. Multi-view supervision for single-view reconstruction via differentiable ray consistency. In CVPR, 2017.
+[34] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds-200-2011 dataset. 2011.
+[35] He Wang, Srinath Sridhar, Jingwei Huang, Julien Valentin, Shuran Song, and Leonidas J. Guibas. Normalized object coordinate space for category-level 6d object pose and size estimation. In CVPR, 2019.
+[36] Donglai Xiang, Hanbyul Joo, and Yaser Sheikh. Monocular total capture: Posing face, body, and hands in the wild. In CVPR, 2019.
+[37] Yu Xiang, Roozbeh Mottaghi, and Silvio Savarese. Beyond Pascal: A benchmark for 3d object detection in the wild. In WACV, 2014.
+[38] Yu Xiang, Tanner Schmidt, Venkatraman Narayanan, and Dieter Fox. PoseCNN: A convolutional neural network for 6d object pose estimation in cluttered scenes. In RSS, 2018.
+[39] Xinchen Yan, Jimei Yang, Ersin Yumer, Yijie Guo, and Honglak Lee. Perspective transformer nets: Learning single-view 3d object reconstruction without 3d supervision. In NeurIPS, 2016.
+[40] Tinghui Zhou, Yong Jae Lee, Stella X Yu, and Alyosha A Efros. Flowweb: Joint image set alignment by weaving consistent, pixel-wise correspondences. In CVPR, 2015.
+[41] Tinghui Zhou, Philipp Krahenbuhl, Mathieu Aubry, Qixing Huang, and Alexei A. Efros. Learning dense correspondence via 3d-guided cycle consistency. In CVPR, 2016.
+[42] Xiangyu Zhu, Zhen Lei, Xiaoming Liu, Hailin Shi, and Stan Z Li. Face alignment across large poses: A 3d solution. In CVPR, 2016.
+[43] Silvia Zuffi, Angjoo Kanazawa, Tanja Berger-Wolf, and Michael J Black. Three-d safari: Learning to estimate zebra pose, shape, and texture from images "in the wild". In ICCV, 2019.
\ No newline at end of file
diff --git a/articulationawarecanonicalsurfacemapping/images.zip b/articulationawarecanonicalsurfacemapping/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b4a3c5a54b558f520ea07f3c956bfca3c7559249
--- /dev/null
+++ b/articulationawarecanonicalsurfacemapping/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:781a325de037836aeefde8a2d1ca78ad79b2532295818dcc0e41ceb4c0dba4e2
+size 841309
diff --git a/articulationawarecanonicalsurfacemapping/layout.json b/articulationawarecanonicalsurfacemapping/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..f11247ed8180bfb4e1aa67dc0e9bc60cf51263dd
--- /dev/null
+++ b/articulationawarecanonicalsurfacemapping/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:38f00a6b4ca5b9ffce327e602dd87550db5eb6b17d4fd24bc9c8e04c9054eb60
+size 419817
diff --git a/aslfeatlearninglocalfeaturesofaccurateshapeandlocalization/76ff161d-b396-4ee5-a12c-d04b441e6bf8_content_list.json b/aslfeatlearninglocalfeaturesofaccurateshapeandlocalization/76ff161d-b396-4ee5-a12c-d04b441e6bf8_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a4a4b4bbd6450b12e95f2ba73d324db202e59d78
--- /dev/null
+++ b/aslfeatlearninglocalfeaturesofaccurateshapeandlocalization/76ff161d-b396-4ee5-a12c-d04b441e6bf8_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b498ef80a7d100c9af4d79674f744de0e44dcfbebfc2800c7be2ad0ec7085187
+size 83582
diff --git a/aslfeatlearninglocalfeaturesofaccurateshapeandlocalization/76ff161d-b396-4ee5-a12c-d04b441e6bf8_model.json b/aslfeatlearninglocalfeaturesofaccurateshapeandlocalization/76ff161d-b396-4ee5-a12c-d04b441e6bf8_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..6f24c95aa99c11e739705a38aac1ffe7c10bbbdf
--- /dev/null
+++ b/aslfeatlearninglocalfeaturesofaccurateshapeandlocalization/76ff161d-b396-4ee5-a12c-d04b441e6bf8_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d4f9286a2941ef5a520702ac90bade102c2b72f839cf69d432cd1f48d8a9f754
+size 99903
diff --git a/aslfeatlearninglocalfeaturesofaccurateshapeandlocalization/76ff161d-b396-4ee5-a12c-d04b441e6bf8_origin.pdf b/aslfeatlearninglocalfeaturesofaccurateshapeandlocalization/76ff161d-b396-4ee5-a12c-d04b441e6bf8_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d6f934b35de94cbb0e735f0fe30938d34f3fd78d
--- /dev/null
+++ b/aslfeatlearninglocalfeaturesofaccurateshapeandlocalization/76ff161d-b396-4ee5-a12c-d04b441e6bf8_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:52d8cca28f312445377e39831ea746775fc0eba8065419ced149d23097d586cd
+size 2091136
diff --git a/aslfeatlearninglocalfeaturesofaccurateshapeandlocalization/full.md b/aslfeatlearninglocalfeaturesofaccurateshapeandlocalization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..28f448baefecd0118d5b40a2f992820da66dab50
--- /dev/null
+++ b/aslfeatlearninglocalfeaturesofaccurateshapeandlocalization/full.md
@@ -0,0 +1,351 @@
+# ASLFeat: Learning Local Features of Accurate Shape and Localization
+
+Zixin Luo $^{1}$ Lei Zhou $^{1}$ Xuyang Bai $^{1}$ Hongkai Chen $^{1}$ Jiahui Zhang $^{2}$ Yao Yao $^{1}$ Shiwei Li $^{3}$ Tian Fang $^{3}$ Long Quan $^{1}$
+
+ $^{1}$ Hong Kong University of Science and Technology
+ $^{2}$ Tsinghua University
+ $^{3}$ Everest Innovation Technology
+{zluoag, lzhouai, xbaiad, hchencf, yaoag, quan}@cse.ust.hk
+jiahui-z15@mails.tsinghua.edu.cn {sli, fangtian}@altizure.com
+
+# Abstract
+
+This work focuses on mitigating two limitations in the joint learning of local feature detectors and descriptors. First, the ability to estimate the local shape (scale, orientation, etc.) of feature points is often neglected during dense feature extraction, while the shape-awareness is crucial to acquire stronger geometric invariance. Second, the localization accuracy of detected keypoints is not sufficient to reliably recover camera geometry, which has become the bottleneck in tasks such as 3D reconstruction. In this paper, we present ASLFeat, with three light-weight yet effective modifications to mitigate above issues. First, we resort to deformable convolutional networks to densely estimate and apply local transformation. Second, we take advantage of the inherent feature hierarchy to restore spatial resolution and low-level details for accurate keypoint localization. Finally, we use a peakiness measurement to relate feature responses and derive more indicative detection scores. The effect of each modification is thoroughly studied, and the evaluation is extensively conducted across a variety of practical scenarios. State-of-the-art results are reported that demonstrate the superiority of our methods. [code release]
+
+# 1. Introduction
+
+Designing powerful local features is an essential basis for a broad range of computer vision tasks [31, 43, 44, 30, 40, 15, 40]. During the past few years, the joint learning of local feature detectors and descriptors has gained increasing popularity, with promising results achieved in real applications. However, we consider two limitations that may have hindered further boosts in performance: 1) the lack of shape-awareness of feature points for acquiring stronger geometric invariance, and 2) the lack of keypoint localization accuracy for solving camera geometry robustly.
+
+Traditionally, the local shape is parameterized by hand-crafted scale/rotation estimation [17, 29] or affine shape adaptation [20], while more recently, data-driven approaches [23, 22, 39] have emerged that build a separate network to regress the shape parameters and then transform the patch inputs before feature description. Due to the increasing prevalence of the joint learning with keypoint detectors [6, 25, 27, 7, 4], recent research focus has shifted to frameworks that densely extract features from image inputs, where no pre-defined keypoint is given and thus previous patch-wise shape estimation becomes inapplicable. As an alternative, LF-Net [25] extracts dense features and transforms intermediate feature maps via Spatial Transformer Networks (STN) [12], but multiple forward passes are needed and only sparse predictions of shape parameters are practically feasible. In this view, there is still no solution that enables efficient local shape estimation in a dense prediction framework.
+
+Besides, the localization accuracy of learned keypoints remains a concern when solving geometry-sensitive problems. For instance, LF-Net [25] and D2-Net [7] empirically yield low precision in two-view matching or introduce large reprojection error in Structure-from-Motion (SfM) tasks, which in essence can be ascribed to the lack of spatial accuracy as the detections are derived from low-resolution feature maps (e.g., 1/4 of the original size). To restore the spatial resolution, SuperPoint [6] learns to upsample the feature maps with pixel-wise supervision from artificial points, while R2D2 [27] employs dilated convolutions to maintain the spatial resolution but trades off excessive GPU computation and memory usage. Moreover, it is questionable whether the detections from the deepest layer are capable of identifying low-level structures (corners, edges, etc.) where keypoints are often located. Although widely discussed in dense prediction tasks [28, 10, 16], in our context, neither the keypoint localization accuracy nor the low-level nature of keypoint detection has received adequate attention.
+
+To mitigate the above limitations, we present ASLFeat, with three light-weight yet effective modifications. First, we employ deformable convolutional networks (DCN) [5, 45] in the dense prediction framework, which allows for not only pixel-wise estimation of local transformation, but also progressive shape modelling by stacking multiple DCNs. Second, we leverage the inherent feature hierarchy, and propose a multi-level detection mechanism that restores not only the spatial resolution without extra learning weights, but also low-level details for accurate keypoint localization. Finally, we base our methods on an improved D2-Net [7] that is trained from scratch, and further propose a peakiness measurement for more selective keypoint detection.
+
+Despite the key insights of above modifications being familiar, we address their importance in our specific context, fully optimize the implementation in a non-trivial way, and thoroughly study the effect by comparing with different design choices. To summarize, we aim to provide answers to two critical questions: 1) what deformation parameterization is needed for local descriptors (geometrically constrained [23, 22, 39] or free-form modelling [5, 45]), 2) what feature fusion is effective for keypoint detectors (multi-scale input [27, 7], in-network multi-scale inference [25], or multi-level fusion [28]). Finally, we extensively evaluate our methods across various practical scenarios, including image matching [1, 2], 3D reconstruction [32] and visual localization [30]. We demonstrate drastic improvements upon the backbone architecture, D2-Net, and report state-of-the-art results on popular benchmarks.
+
+# 2. Related works
+
+Hand-crafted local features have been widely evaluated in [1, 32], we here focus mainly on the learning approaches.
+
+Local shape estimation. Most existing descriptor learning methods [19, 18, 21, 37, 36, 40] do not explicitly model the local shape, but rely on geometric data augmentation (scaling/rotational perturbation) or hand-crafted shape estimation (scale/rotation estimation [17, 29]) to acquire geometric invariance. Instead, OriNet [23] and LIFT [39] propose to learn a canonical orientation of feature points, AffNet [22] predicts more affine parameters to improve the modelling power, and the log-polar representation [8] is used to handle scale changes in particular. Despite the promising results, those methods are limited to taking image patches as input, and introduce a considerable amount of computation since two independent networks are constructed for predicting patch shape and patch description separately. As an alternative, LF-Net [25] takes images as input and performs STN [12] on intermediate features, but multiple forward passes are needed to transform each individual "feature patch", and thus only prediction on sparse locations is practically applicable.
+
+Meanwhile, the modelling of local shape has been shown to be crucial in image recognition tasks, which inspires works such as scale-adaptive convolution (SAC) for flexible-size dilations [42] and deformable convolution networks (DCN) for tunable grid sampling locations [5, 45]. In this paper, we adopt a similar idea in our context, and propose to equip DCN for dense local transformation prediction, of which the inference requires only a single forward pass and is thus highly efficient.
+
+Joint local feature learning. The joint learning of feature detectors and descriptors has received increasing attention, where a unified network is constructed to share most computations of the two tasks for fast inference. In terms of descriptor learning, the ranking loss [25, 7, 6, 4, 27] has been primarily used as a de-facto standard. However, due to the difficulty of acquiring unbiased ground-truth data, no general consensus has yet been reached regarding an effective loss design for keypoint detector learning. For instance, LF-Net [25] warps the detection map and minimizes the difference at selected pixels in two views, while SuperPoint [6] operates a self-supervised paradigm with a bootstrap training on synthetic data and multi-round adaptations on real data. More recent R2D2 [27] enforces grid-wise peakiness in conjunction with reliability prediction for descriptor, while UnsuperPoint [4] and Key.Net [14] learn grid-wise offsets to localize keypoints.
+
+By contrast, D2-Net [7] eschews learning extra weights for a keypoint detector, but hand-crafts a selection rule to derive keypoints from the same feature maps that are used for extracting feature descriptors. This design essentially couples the capability of the feature detector and descriptor, and results in a clean framework without complex heuristics in the loss formulation. However, it is a known issue that D2-Net lacks accuracy in keypoint localization, as keypoints are derived from low-resolution feature maps. In this paper, we base our method on D2-Net, and mitigate the above limitation with a light-weight modification that cheaply restores both the spatial resolution and low-level details.
+
+# 3. Methods
+
+# 3.1. Prerequisites
+
+The backbone architecture in this work is built upon 1) deformable convolutional networks (DCN) [5, 45] that predict and apply dense spatial transformation, and 2) D2-Net [7] that jointly learns keypoint detector and descriptor.
+
+Deformable convolutional networks (DCN) [5, 45] aim to learn a dynamic receptive field in order to model geometric variations. Formally, given a regular grid $\mathcal{R}$ that samples values over the input feature maps $\mathbf{x}$, the output features $\mathbf{y}$ of a standard convolution at each spatial position $\mathbf{p}$ can be written as:
+
+$$
+\mathbf {y} (\mathbf {p}) = \sum_ {\mathbf {p} _ {n} \in \mathcal {R}} \mathbf {w} \left(\mathbf {p} _ {n}\right) \cdot \mathbf {x} \left(\mathbf {p} + \mathbf {p} _ {n}\right). \tag {1}
+$$
+
+Figure 1. Network architecture, with the proposed equipment of deformable convolutional network (DCN), multi-level detection (MulDet), and peakiness measurement for keypoint scoring.
+
+DCN augments the regular convolution by additionally learning both sampling offsets [5] $\{\triangle \mathbf{p}_n \mid n = 1,\dots ,N\}$ and feature amplitudes [45] $\{\triangle \mathbf{m}_n \mid n = 1,\dots ,N\}$, where $N = |\mathcal{R}|$, and rewrites Eq. 1 as:
+
+$$
+\mathbf {y} (\mathbf {p}) = \sum_ {\mathbf {p} _ {n} \in \mathcal {R}} \mathbf {w} \left(\mathbf {p} _ {n}\right) \cdot \mathbf {x} \left(\mathbf {p} + \mathbf {p} _ {n} + \triangle \mathbf {p} _ {n}\right) \cdot \triangle \mathbf {m} _ {n}. \tag {2}
+$$
+
+As the offset $\triangle \mathbf{p_n}$ is typically fractional, Eq. 2 is implemented via bilinear interpolation, while the feature amplitude $\triangle \mathbf{m}_n$ is limited to (0,1). During training, the initial values of $\triangle \mathbf{p}_n$ and $\triangle \mathbf{m}_n$ are respectively set to 0 and 0.5, following the settings in [45].
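+As a concrete illustration of Eq. 2, the following NumPy sketch (our own simplification, not the authors' released code) evaluates the modulated deformable convolution at a single position, using bilinear interpolation for the fractional sampling locations; the toy sizes and initialization values are assumptions for illustration.
+
+```python
+import numpy as np
+
+def bilinear(x, py, px):
+    """Bilinearly interpolate a single-channel feature map x (H, W) at fractional (py, px)."""
+    h, w = x.shape
+    y0, x0 = int(np.floor(py)), int(np.floor(px))
+    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
+    y0, x0 = max(y0, 0), max(x0, 0)
+    wy, wx = py - np.floor(py), px - np.floor(px)
+    return ((1 - wy) * (1 - wx) * x[y0, x0] + (1 - wy) * wx * x[y0, x1]
+            + wy * (1 - wx) * x[y1, x0] + wy * wx * x[y1, x1])
+
+def deform_conv_at(x, weight, p, offsets, amplitudes):
+    """Modulated deformable convolution (Eq. 2) at one position p = (y, x).
+
+    x: (H, W) feature map, weight: (3, 3) kernel,
+    offsets: (9, 2) fractional offsets, amplitudes: (9,) values in (0, 1).
+    """
+    grid = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]  # regular grid R
+    out = 0.0
+    for n, (dy, dx) in enumerate(grid):
+        py = p[0] + dy + offsets[n, 0]
+        px = p[1] + dx + offsets[n, 1]
+        out += weight[dy + 1, dx + 1] * bilinear(x, py, px) * amplitudes[n]
+    return out
+
+# Toy usage: zero offsets and amplitude 0.5 reproduce half of a regular convolution,
+# matching the initial values used during training.
+x = np.random.rand(8, 8).astype(np.float32)
+w = np.random.rand(3, 3).astype(np.float32)
+print(deform_conv_at(x, w, (4, 4), np.zeros((9, 2)), np.full(9, 0.5)))
+```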
+
+D2-Net [7] proposes a describe-and-detect strategy to jointly extract feature descriptions and detections. Over the last feature maps $\mathbf{y} \in \mathbb{R}^{H \times W \times C}$ , D2-Net applies channel-wise L2-normalization to obtain dense feature descriptors, while the feature detections are derived from 1) the local score and 2) the channel-wise score. Specifically, for each location $(i,j)$ in $\mathbf{y}^c$ $(c = 1,2,\dots,C)$ , the local score is obtained by:
+
+$$
+\alpha_{ij}^{c} = \frac{\exp\left(\mathbf{y}_{ij}^{c}\right)}{\sum_{(i^{\prime}, j^{\prime}) \in \mathcal{N}(i, j)} \exp\left(\mathbf{y}_{i^{\prime} j^{\prime}}^{c}\right)}, \tag{3}
+$$
+
+where $\mathcal{N}(i,j)$ denotes the neighboring pixels around $(i,j)$, e.g., the 9 neighbours defined by a $3\times 3$ kernel. Next, the channel-wise score is obtained by:
+
+$$
+\beta_ {i j} ^ {c} = \mathbf {y} _ {i j} ^ {c} / \max _ {t} \mathbf {y} _ {i j} ^ {t}. \tag {4}
+$$
+
+The final detection score is combined as:
+
+$$
+s_{ij} = \max_{c} \left(\alpha_{ij}^{c} \beta_{ij}^{c}\right). \tag{5}
+$$
+
+The detection score will later be used as a weighting term in the loss formulation (Sec. 3.4), and allows for top-K selection of keypoints during testing.
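+The following NumPy sketch (our own illustration, not the official implementation) computes the describe-and-detect scores of Eqs. 3-5 over a dense feature map, with the local softmax taken over a $3\times 3$ neighbourhood and combined with the channel-wise ratio-to-max score.
+
+```python
+import numpy as np
+
+def d2net_scores(y):
+    """Compute D2-Net detection scores (Eqs. 3-5) for feature maps y of shape (H, W, C)."""
+    H, W, C = y.shape
+    pad = np.pad(y, ((1, 1), (1, 1), (0, 0)), mode="edge")
+    # Local score (Eq. 3): softmax over the 3x3 spatial neighbourhood, per channel.
+    exp = np.exp(pad)
+    denom = sum(exp[di:di + H, dj:dj + W, :] for di in range(3) for dj in range(3))
+    alpha = np.exp(y) / denom
+    # Channel-wise score (Eq. 4): ratio to the per-pixel maximum over channels.
+    beta = y / (y.max(axis=-1, keepdims=True) + 1e-8)
+    # Combined detection score (Eq. 5): maximum of the product over channels.
+    return (alpha * beta).max(axis=-1)
+
+scores = d2net_scores(np.random.rand(32, 32, 128).astype(np.float32))
+print(scores.shape)  # (32, 32)
+```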
+
+# 3.2. DCN with Geometric Constraints
+
+The original free-form DCN predicts local transformations with a high number of degrees of freedom (DOF), e.g., $9 \times 2$ offsets for a $3 \times 3$ kernel. On the one hand, this enables the potential to model complex deformation such as non-planarity; on the other hand, it risks over-parameterizing the local shape, where simpler affine or perspective transformations are often considered to be a good approximation [20, 23, 22]. To find out what deformation is needed in our context, we compare three shape modellings by enforcing different geometric constraints in DCN: 1) similarity, 2) affine and 3) homography. The shape properties of the investigated variants are summarized in Tab. 1.
+
+| Variants | Modeling Power | DOF |
+| --- | --- | --- |
+| unconstrained | non-planarity | $2k^2$ |
+| s.t. similarity | scale, rotation | 2 |
+| s.t. affine | scale, rotation, shear | 4 |
+| s.t. homography | perspective | 6 |
+
+Table 1. The shape properties of DCN variants, where DOF denotes the degrees of freedom and $k$ denotes the kernel size of the convolution. Translation is omitted as it is fixed for keypoints.
+
+Affine-constrained DCN. Traditionally, the local shape is often modelled by similarity transformation with estimates of rotation and scale [17, 29]. In a learning framework such as [23, 25], this transformation is decomposed as:
+
+$$
+\mathbf {S} = \lambda R (\theta) = \lambda \left( \begin{array}{c c} \cos (\theta) & \sin (\theta) \\ - \sin (\theta) & \cos (\theta) \end{array} \right). \tag {6}
+$$
+
+Moreover, a few works such as HesAff [20] further include an estimate of shearing, which is cast as a learnable problem by AffNet [22]. Here, we follow AffNet and decompose the affine transformation as:
+
+$$
+\begin{array}{l} \mathbf {A} = \mathbf {S} A ^ {\prime} = \lambda R (\theta) A ^ {\prime} \\ = \lambda \left( \begin{array}{c c} \cos (\theta) & \sin (\theta) \\ - \sin (\theta) & \cos (\theta) \end{array} \right) \left( \begin{array}{c c} a _ {1 1} ^ {\prime} & 0 \\ a _ {2 1} ^ {\prime} & a _ {2 2} ^ {\prime} \end{array} \right), \tag {7} \\ \end{array}
+$$
+
+where $\det A' = 1$ . The network is implemented to predict one scalar for scaling $(\lambda)$ , another two for rotation $(\cos(\theta)$ , $\sin(\theta))$ , while the other three for shearing $(A')$ .
+
+Homography-constrained DCN. In principle, the local deformation can be better approximated by a homography (perspective) transformation $\mathbf{H}$, and we adopt the Tensor Direct Linear Transform (Tensor DLT) [24] to solve the 4-point parameterization of $\mathbf{H}$ in a differentiable manner.
+
+Formally, a linear system can be created that solves $\mathbf{M}\mathbf{h} = \mathbf{0}$, where $\mathbf{M} \in \mathbb{R}^{8 \times 9}$ and $\mathbf{h}$ is a vector with 9 elements consisting of the entries of $\mathbf{H}$, and each correspondence provides two equations in $\mathbf{M}$. By enforcing the last element of $\mathbf{h}$ to be 1 [11] and omitting the translation, we set $\mathbf{H}_{33} = 1$ and $\mathbf{H}_{13} = \mathbf{H}_{23} = 0$, then rewrite the above system of equations as $\hat{\mathbf{M}}_{(i)}\hat{\mathbf{h}} = \hat{\mathbf{b}}_{(i)}$, where $\hat{\mathbf{M}}_{(i)} \in \mathbb{R}^{2 \times 6}$ and for each correspondence,
+
+$$
+\hat{\mathbf{M}}_{(i)} = \left[ \begin{array}{cccccc} 0 & 0 & -u_{i} & -v_{i} & v_{i}^{\prime} u_{i} & v_{i}^{\prime} v_{i} \\ u_{i} & v_{i} & 0 & 0 & -u_{i}^{\prime} u_{i} & -u_{i}^{\prime} v_{i} \end{array} \right], \tag{8}
+$$
+
+$\hat{\mathbf{b}}_{(i)} = [-v_i', u_i']^T \in \mathbb{R}^{2 \times 1}$ and $\hat{\mathbf{h}}$ consists of 6 elements from the first two columns of $\mathbf{H}$ . By stacking the equations of 4 correspondences, we derive the final linear system:
+
+$$
+\hat {\mathbf {M}} \hat {\mathbf {h}} = \hat {\mathbf {b}}. \tag {9}
+$$
+
+Provided that the correspondence points are not collinear, $\hat{\mathbf{h}}$ can then be efficiently and uniquely solved by using the differentiable pseudo-inverse of $\hat{\mathbf{M}}$. In practice, we initialize 4 corner points at $\{(-1, -1), (1, -1), (1, 1), (-1, 1)\}$, and implement the network to predict 8 corresponding offsets lying in $(-1, 1)$ so as to avoid collinearity.
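+The reduced DLT system of Eqs. 8-9 can be sketched as follows (our own NumPy rendering under the stated constraints $\mathbf{H}_{13} = \mathbf{H}_{23} = 0$ and $\mathbf{H}_{33} = 1$; the pseudo-inverse keeps the solve differentiable in an autodiff framework).
+
+```python
+import numpy as np
+
+def solve_constrained_homography(src, dst):
+    """Solve the 6 unknown entries of H (H13 = H23 = 0, H33 = 1) from 4 correspondences.
+
+    src, dst: arrays of shape (4, 2) with points (u, v) and (u', v').
+    """
+    M, b = [], []
+    for (u, v), (u_p, v_p) in zip(src, dst):
+        M.append([0, 0, -u, -v, v_p * u, v_p * v])    # first row of Eq. 8
+        M.append([u, v, 0, 0, -u_p * u, -u_p * v])    # second row of Eq. 8
+        b.extend([-v_p, u_p])                         # entries of b_hat
+    h = np.linalg.pinv(np.array(M, float)) @ np.array(b, float)  # Eq. 9 via pseudo-inverse
+    # h = [H11, H12, H21, H22, H31, H32]; fill in the constrained entries.
+    return np.array([[h[0], h[1], 0.0],
+                     [h[2], h[3], 0.0],
+                     [h[4], h[5], 1.0]])
+
+# Corner points of the kernel grid and small predicted perturbations, as in the paper.
+src = np.array([[-1, -1], [1, -1], [1, 1], [-1, 1]], float)
+dst = src + 0.1 * np.random.uniform(-1, 1, src.shape)
+print(solve_constrained_homography(src, dst))
+```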
+
+After forming the above transformation $\mathbf{T} \in \{\mathbf{S}, \mathbf{A}, \mathbf{H}\}$ , the offset values in Eq. 2 are now obtained by:
+
+$$
+\triangle \mathbf{p}_{n} = \mathbf{T} \mathbf{p}_{n} - \mathbf{p}_{n}, \quad \text{where } \mathbf{p}_{n} \in \mathcal{R}, \tag{10}
+$$
+
+so that geometry constraints are enforced in DCN. More implementation details can be found in the Appendix.
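+Given any of the above transformations $\mathbf{T}$, the per-sample offsets of Eq. 10 can be generated as in this short sketch (our own illustration) over the regular $3\times 3$ grid.
+
+```python
+import numpy as np
+
+def offsets_from_transform(T):
+    """Compute Eq. 10 offsets for a 3x3 kernel grid R given a 2x2 matrix or 3x3 homography T."""
+    grid = np.array([(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)], float)  # regular grid R
+    if T.shape == (2, 2):
+        warped = grid @ T.T
+    else:  # homogeneous 3x3 transform with perspective division
+        hom = np.hstack([grid, np.ones((9, 1))]) @ T.T
+        warped = hom[:, :2] / hom[:, 2:3]
+    return warped - grid  # delta p_n = T p_n - p_n
+
+print(offsets_from_transform(np.array([[1.2, 0.0], [0.0, 1.2]])))  # pure scaling example
+```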
+
+# 3.3. Selective and Accurate Keypoint Detection
+
+Keypoint peakiness measurement. As introduced in Sec. 3.1, D2-Net scores a keypoint according to both spatial and channel-wise responses. Among many possible metrics, D2-Net implements a ratio-to-max (Eq. 4) to evaluate channel-wise extremeness, whereas one possible limitation is that it only weakly relates to the actual distribution of all responses along the channel.
+
+To study this effect, we first trivially modify Eq. 4 with a channel-wise softmax, but find that this modification deteriorates the performance in our experiments. Instead, inspired by [27, 41], we propose to use peakiness as a keypoint measurement in D2-Net, which rewrites Eq. 4 as:
+
+$$
+\beta_ {i j} ^ {c} = \operatorname {s o f t p l u s} \left(\mathbf {y} _ {i j} ^ {c} - \frac {1}{C} \sum_ {t} \mathbf {y} _ {i j} ^ {t}\right), \tag {11}
+$$
+
+where $\operatorname{softplus}$ activates the peakiness to a positive value. To balance the scales of the two scores, we also rewrite Eq. 3 in a similar form:
+
+$$
+\alpha_{ij}^{c} = \operatorname{softplus}\left(\mathbf{y}_{ij}^{c} - \frac{1}{|\mathcal{N}(i, j)|} \sum_{(i^{\prime}, j^{\prime}) \in \mathcal{N}(i, j)} \mathbf{y}_{i^{\prime} j^{\prime}}^{c}\right), \tag{12}
+$$
+
+and the two scores are again combined as in Eq. 5.
+
+Figure 2. Different design choices to leverage the feature hierarchy, shortened as variants of MulDet: (a) multi-scale (pyramid), (b) multi-scale (in-network), (c) multi-level (U-Net), (d) multi-level (ours).
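+A minimal NumPy sketch of the peakiness-based scoring in Eqs. 11-12 (our own illustration, replacing the softmax and ratio-to-max scores of Eqs. 3-4 with softplus-activated peakiness).
+
+```python
+import numpy as np
+
+def softplus(x):
+    """Numerically stable softplus: log(1 + exp(x))."""
+    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0)
+
+def peakiness_scores(y):
+    """Peakiness-based detection scores (Eqs. 11-12) for feature maps y of shape (H, W, C)."""
+    H, W, C = y.shape
+    pad = np.pad(y, ((1, 1), (1, 1), (0, 0)), mode="edge")
+    # Spatial peakiness (Eq. 12): response minus the mean over the 3x3 neighbourhood.
+    local_mean = sum(pad[di:di + H, dj:dj + W, :] for di in range(3) for dj in range(3)) / 9.0
+    alpha = softplus(y - local_mean)
+    # Channel peakiness (Eq. 11): response minus the mean over channels.
+    beta = softplus(y - y.mean(axis=-1, keepdims=True))
+    # Final score, combined as in Eq. 5.
+    return (alpha * beta).max(axis=-1)
+
+print(peakiness_scores(np.random.rand(32, 32, 128).astype(np.float32)).shape)
+```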
+
+Multi-level keypoint detection (MulDet). As aforementioned, one known limitation of D2-Net [7] is the lack of keypoint localization accuracy, since detections are obtained from low-resolution feature maps. There are multiple options to restore the spatial resolution, for instance, by learning an additional feature decoder (SuperPoint [6]) or employing dilated convolutions (R2D2 [27]). However, those methods either increase the number of learnable parameters, or consume considerable GPU memory and computation. Instead, we propose a simple yet effective solution that introduces no extra learnable weights, by leveraging the inherent pyramidal feature hierarchy of ConvNets and combining detections from multiple feature levels.
+
+Specifically, given a feature hierarchy consisting of feature maps at different levels $\{\mathbf{y}^{(1)},\mathbf{y}^{(2)},\dots,\mathbf{y}^{(l)}\}$, strided by $\{1, 2, \ldots, 2^{l-1}\}$, respectively, we apply the aforementioned detection at each level to get a set of score maps $\{\mathbf{s}^{(1)},\mathbf{s}^{(2)},\dots,\mathbf{s}^{(l)}\}$. Next, each score map is upsampled to the same spatial resolution as the input image, and the maps are finally combined by taking the weighted sum:
+
+$$
+\hat {\mathbf {s}} = \frac {1}{\sum_ {l} w _ {l}} \sum_ {l} w _ {l} \mathbf {s} ^ {(l)}. \tag {13}
+$$
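+A minimal sketch of the multi-level combination in Eq. 13 (our own illustration; the nearest-neighbour upsampling and the toy level sizes are assumptions for brevity).
+
+```python
+import numpy as np
+
+def combine_multilevel_scores(score_maps, strides, weights):
+    """Upsample per-level score maps to full resolution and take the weighted sum (Eq. 13)."""
+    full_h = score_maps[0].shape[0] * strides[0]
+    full_w = score_maps[0].shape[1] * strides[0]
+    combined = np.zeros((full_h, full_w), dtype=np.float32)
+    for s, stride, w in zip(score_maps, strides, weights):
+        up = np.kron(s, np.ones((stride, stride), dtype=np.float32))  # nearest-neighbour upsample
+        combined += w * up[:full_h, :full_w]
+    return combined / sum(weights)
+
+# Example with three levels strided by 1, 2 and 4, weighted 1, 2, 3 as described in Sec. 3.4.
+maps = [np.random.rand(64, 64), np.random.rand(32, 32), np.random.rand(16, 16)]
+print(combine_multilevel_scores(maps, strides=[1, 2, 4], weights=[1, 2, 3]).shape)
+```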
+
+To better demonstrate the superiority of the proposed method, we implement 1) the multi-scale detection used in D2-Net [7] and R2D2 [27] (Fig. 2a) by constructing an image pyramid with multiple forward passes, 2) the in-network multi-scale prediction used in LF-Net [25] (Fig. 2b) by resizing the intermediate feature maps, and 3) the standard U-Net architecture [28] (Fig. 2c), which builds a feature decoder with skip connections from low-level feature maps.
+
+The proposed multi-level detection (Fig. 2d) is advantageous in three aspects. Firstly, it adopts implicit multi-scale detection that conforms to classical scale-space theory [17] by using different sizes of receptive field to localize keypoints. Secondly, compared with the U-Net architecture, it cheaply restores the spatial resolution to achieve pixel-wise accuracy without introducing extra learnable weights. Thirdly, different from U-Net, which directly fuses low-level and high-level features, it keeps the low-level features untouched and fuses the detections of multi-level semantics, which helps to better preserve low-level structures such as corners or edges. The implementation details of the above variants can be found in the Appendix.
+
+# 3.4. Learning Framework
+
+Network architecture. The network architecture is illustrated in Fig. 1. To reduce computation, we replace the VGG backbone [34] used in D2-Net with the more light-weight L2-Net [36]. Similar to R2D2 [27], we further replace the last $8 \times 8$ convolution of L2-Net with three $3 \times 3$ convolutions, resulting in 128-dimensional feature maps at $1/4$ of the input resolution. Finally, the last three convolutions, conv6, conv7 and conv8, are substituted with DCN (Sec. 3.1). Three levels, conv1, conv3 and conv8, are selected to perform the proposed MulDet (Sec. 3.3). The combination weights in Eq. 13 are empirically set to $w_i = 1, 2, 3$, and the dilation rate used to find neighboring pixels $\mathcal{N}(i,j)$ in Eq. 3 is set to $3, 2, 1$, respectively, which we find delivers the best trade-off between attention on low-level and abstracted features.
+
+Loss design. We identify a set of correspondences $\mathcal{C}$ for an image pair $(I, I')$ via densely warping $I$ to $I'$ regarding ground-truth depths and camera parameters. To derive the training loss for both detector and descriptor, we adopt the formulation in D2-Net [7], written as:
+
+$$
+\mathcal {L} (I, I ^ {\prime}) = \frac {1}{| \mathcal {C} |} \sum_ {c \in \mathcal {C}} \frac {\hat {s} _ {c} \hat {s} _ {c} ^ {\prime}}{\sum_ {q \in \mathcal {C}} \hat {s} _ {q} \hat {s} _ {q} ^ {\prime}} \mathcal {M} \left(\mathbf {f} _ {c}, \mathbf {f} ^ {\prime} _ {c}\right), \tag {14}
+$$
+
+where $\hat{s}_c$ and $\hat{s}_c^\prime$ are the combined detection scores in Eq. 13 for images $I$ and $I^{\prime}$, $\mathbf{f}_c$ and $\mathbf{f}_c^\prime$ are the corresponding descriptors, and $\mathcal{M}(\cdot ,\cdot)$ is the ranking loss for representation learning. Instead of using the hardest-triplet loss in D2-Net [7], we adopt the hardest-contrastive form in FCGF [3], which we find guarantees better convergence when training from scratch and equipping DCN, written as:
+
+$$
+\begin{array}{l} \mathcal{M}\left(\mathbf{f}_{c}, \mathbf{f}_{c}^{\prime}\right) = \left[ D\left(\mathbf{f}_{c}, \mathbf{f}_{c}^{\prime}\right) - m_{p} \right]_{+} + \\ \left[ m_{n} - \min \left(\min_{k \neq c} D\left(\mathbf{f}_{c}, \mathbf{f}_{k}^{\prime}\right), \min_{k \neq c} D\left(\mathbf{f}_{k}, \mathbf{f}_{c}^{\prime}\right)\right) \right]_{+}, \end{array} \tag{15}
+$$
+
+where $D(\cdot, \cdot)$ denotes the Euclidean distance measured between two descriptors, and $m_p, m_n$ are respectively set to 0.2 and 1.0 for positives and negatives. Similar to D2-Net [7], a safe radius of 3 is set to avoid taking spatially too-close feature points as negatives.
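+A NumPy sketch of the score-weighted hardest-contrastive objective of Eqs. 14-15 (our own simplified version; the safe-radius masking of spatially close negatives is omitted for brevity).
+
+```python
+import numpy as np
+
+def hardest_contrastive_loss(f, f_prime, s, s_prime, m_p=0.2, m_n=1.0):
+    """Eqs. 14-15: f, f_prime are (N, D) matched descriptors, s, s_prime their detection scores."""
+    # Pairwise Euclidean distances between all descriptors of the two images.
+    d = np.linalg.norm(f[:, None, :] - f_prime[None, :, :], axis=-1)   # (N, N)
+    pos = np.diag(d)                                                   # matched (positive) pairs
+    off = d + np.eye(len(d)) * 1e6                                     # mask out the positives
+    hardest_neg = np.minimum(off.min(axis=1), off.min(axis=0))         # hardest negative per correspondence
+    m = np.maximum(pos - m_p, 0) + np.maximum(m_n - hardest_neg, 0)    # Eq. 15
+    w = s * s_prime                                                    # detection-score weighting
+    return np.sum(w / w.sum() * m) / len(m)                            # Eq. 14
+
+N, D = 128, 128
+f = np.random.randn(N, D); f /= np.linalg.norm(f, axis=1, keepdims=True)
+f2 = np.random.randn(N, D); f2 /= np.linalg.norm(f2, axis=1, keepdims=True)
+print(hardest_contrastive_loss(f, f2, np.random.rand(N), np.random.rand(N)))
+```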
+
+# 3.5. Implementations
+
+Training. In contrast to D2-Net [7], which starts from an ImageNet-pretrained model with only the last convolution fine-tuned, we train our model from scratch with ground-truth cameras and depths obtained from [33, 26] (the same data used in [19, 18]). The training consumes $800K$ image pairs of size $480 \times 480$ with a batch size of 2. Learning gradients are computed for image pairs that have at least 32 matches, while the maximum match number is limited to 512. Each input image is standardized to have zero mean and unit norm, and random photometric augmentations including brightness, contrast and blurriness changes are applied independently. The SGD optimizer is used with a momentum of 0.9, and the base learning rate is set to 0.1.
+
+Although end-to-end learning with DCN is feasible, we find that a two-stage training yields better results in practice. Specifically, in the first stage we train the model with all regular convolutions for $400K$ iterations. In the second stage, we tune only the DCNs, with the base learning rate divided by 10, for another $400K$ iterations. Our implementation is made in TensorFlow on a single NVIDIA RTX 2080Ti card, and the training finishes within 42 hours.
+
+Testing. A non-maximum suppression (NMS) with a window size of 3 is applied to remove detections that are spatially too close. Similar to D2-Net, we post-process the keypoints with SIFT-like edge elimination (with the threshold set to 10) and sub-pixel refinement, and the descriptors are then bilinearly interpolated at the refined locations. We select the top-K keypoints according to the detection scores obtained in Eq. 13, and empirically discard those whose scores are lower than 0.5.
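+The test-time keypoint selection can be sketched as follows (our own illustration of the NMS, score-threshold and top-K steps; the edge elimination and sub-pixel refinement are omitted).
+
+```python
+import numpy as np
+
+def select_keypoints(score_map, nms_size=3, score_thresh=0.5, top_k=5000):
+    """NMS over a local window, threshold the scores, then keep the top-K detections."""
+    H, W = score_map.shape
+    r = nms_size // 2
+    pad = np.pad(score_map, r, mode="constant", constant_values=-np.inf)
+    # A point survives NMS if it equals the maximum of its local window.
+    local_max = np.max(
+        [pad[dy:dy + H, dx:dx + W] for dy in range(nms_size) for dx in range(nms_size)], axis=0)
+    keep = (score_map >= local_max) & (score_map >= score_thresh)
+    ys, xs = np.nonzero(keep)
+    order = np.argsort(-score_map[ys, xs])[:top_k]
+    return np.stack([ys[order], xs[order]], axis=1), score_map[ys[order], xs[order]]
+
+kpts, scores = select_keypoints(np.random.rand(480, 640))
+print(kpts.shape, scores.shape)
+```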
+
+# 4. Experiments
+
+In the following sections, we evaluate our methods across several practical scenarios, including image matching, 3D reconstruction and visual localization. Further experiments on dense reconstruction and image retrieval can be found in the Appendix.
+
+# 4.1. Image Matching
+
+Datasets. First, we use the popular HPatches dataset [1], which includes 116 image sequences with ground-truth homography. Following D2-Net [7], we exclude 8 high-resolution sequences, leaving 52 and 56 sequences with illumination or viewpoint variations, respectively.
+
+Though widely used, the HPatches dataset exhibits only homography transformations, which may not comprehensively reflect the performance in real applications. Thus, we resort to the newly proposed FM-Bench [2], which comprises four datasets captured in practical scenarios: the TUM dataset [35] in indoor SLAM settings, the KITTI dataset [9] in driving scenes, the Tanks and Temples dataset (T&T) [13] for wide-baseline reconstruction, and the Community Photo Collection (CPC) [38] for wild reconstruction from web images. For each dataset, 1000 overlapping image pairs are randomly chosen for evaluation, with ground-truth fundamental matrices pre-computed.
+
+Evaluation protocols. On the HPatches dataset [1], three standard metrics are used: 1) Keypoint repeatability (%Rep.), i.e., the ratio of possible matches to the minimum number of keypoints in the shared view. 2) Matching score (%M.S.), i.e., the ratio of correct matches to the minimum number of keypoints in the shared view. 3) Mean matching accuracy (%MMA), i.e., the ratio of correct matches to possible matches. Here, a match is considered to correspond if the point distance is below some error threshold after homography warping, and a correct match is further required to be a mutual nearest neighbor during brute-force searching. For the above metrics, we report the average scores over all image pairs in the dataset.
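+A small sketch of the HPatches-style matching metric (our own simplified implementation of mutual nearest-neighbour matching and the correctness test under a ground-truth homography).
+
+```python
+import numpy as np
+
+def mma(kpts1, desc1, kpts2, desc2, H_gt, thresh=3.0):
+    """Mean matching accuracy: fraction of mutual-NN matches within `thresh` px after warping."""
+    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=-1)
+    nn12, nn21 = d.argmin(axis=1), d.argmin(axis=0)
+    mutual = np.where(nn21[nn12] == np.arange(len(kpts1)))[0]          # mutual nearest neighbours
+    if len(mutual) == 0:
+        return 0.0
+    # Warp keypoints of image 1 into image 2 with the ground-truth homography.
+    pts = np.hstack([kpts1[mutual], np.ones((len(mutual), 1))]) @ H_gt.T
+    pts = pts[:, :2] / pts[:, 2:3]
+    err = np.linalg.norm(pts - kpts2[nn12[mutual]], axis=1)
+    return float(np.mean(err <= thresh))
+
+H_gt = np.eye(3)
+k = np.random.rand(100, 2) * 100
+d1 = np.random.randn(100, 128)
+print(mma(k, d1, k, d1, H_gt))  # identical keypoint sets match perfectly -> 1.0
+```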
+
+In terms of FM-Bench [2], a full matching pipeline including outlier rejection (e.g., ratio test [17]) and geometric verification (e.g., RANSAC) is performed, and the final pose recovery accuracy is evaluated. To determine the correctness of a pose estimate, FM-Bench uses the ground-truth pose to generate a set of virtual correspondences, then measures the average normalized symmetric epipolar distance of a pose estimate, and finally computes %Recall as the ratio of estimates whose distance error is below a certain threshold (0.05 by default). At the correspondence level, FM-Bench also reports intermediate results such as the inlier ratio (%Inlier/%Inlier-m) and correspondence number (#Corrs/#Corrs-m) after/before RANSAC.
+
+HPatches dataset (error threshold @ 3px):
+
+| | Config. | %Rep. | %M.S. | %MMA |
+| --- | --- | --- | --- | --- |
+| D2-Net | orig. | 47.86 | 23.58 | 43.00 |
+| | our impl. | 43.34 | 29.55 | 45.36 |
+| | peakiness meas. | 46.24 | 32.27 | 48.54 |
+| + MulDet | multi-scale (pyramid) | 46.12 | 32.55 | 48.72 |
+| | multi-scale (in-network) | 45.17 | 31.74 | 47.94 |
+| | multi-level (U-Net) | 75.35 | 40.12 | 66.42 |
+| | multi-level (ours) | 77.37 | 42.99 | 68.66 |
+| + MulDet & DCN | s.t. similarity | 78.33 | 44.79 | 71.67 |
+| | s.t. affine | 78.49 | 45.35 | 71.80 |
+| | s.t. homography | 78.39 | 45.08 | 71.89 |
+| | free-form, 1 layer | 78.27 | 45.12 | 71.08 |
+| | free-form | 78.31 | 46.28 | 72.26 |
+| | free-form, multi-scale | 86.03 | 39.37 | 72.64 |
+
+Table 2. Ablation experiments of the proposed modifications, where peakiness meas. improves the detection scoring upon D2-Net, +MulDet studies the effect of different feature fusion strategies, and +MulDet & DCN further compares the effect of different parameterization of deformation.
+
+Comparative methods. We compare our methods with 1) patch descriptors, including HardNet++ [21] with the SIFT [17] detector (SIFT + HN++), optionally with the shape estimator HesAffNet [22] (HAN + HN++), and ContextDesc [18] with the SIFT detector (SIFT + ContextDesc); and 2) joint local feature learning approaches including SuperPoint [6], LF-Net [25], D2-Net (fine-tuned) [7] and the more recent R2D2 [27]. Unless otherwise specified, we report either the results reported in the original papers, or results derived from the authors' public implementations with default parameters. We limit the maximum number of features of our methods to 5K and 20K on HPatches and FM-Bench, respectively.
+
+On FM-Bench, both the mutual check and ratio test [17] are applied to reject outliers before RANSAC. A ratio at 0.8 is used for all methods except for D2-Net and $\mathrm{R2D2^2}$ .
+
+Baseline. To avoid overstatement, we first present our re-implementation of D2-Net (our impl.) as the baseline. As mentioned in Sec. 3.4 and Sec. 3.5, the new baseline differs from the original D2-Net (orig.) in three aspects: 1) Different backbone architecture (L2-Net [36] with 128-d output vs. VGG [34] with 512-d output). 2) Different loss formulation (hardest-contrastive [3] vs. hardest-triplet [7]). 3) Different training settings (trained from scratch vs. fine-tuning only the last convolution of a pre-trained model). As shown in Tab. 2 and Tab. 3, the new baseline generally outperforms the original D2-Net, while being more parameter- and computation-efficient in terms of model size.
+
+Ablations on peakiness measurement. We first adopt the peakiness measurement for more indicative keypoint scoring (Sec. 3.3). As shown in Tab. 2, this modification (peakiness meas.) notably improves the results for all evaluation metrics on the HPatches dataset. This effect is further validated on FM-Bench, where it applies across all scenarios, as shown in Tab. 3 (ASLFeat w/o peakiness meas.). Our later modifications will thus be based on this model.
+
+Ablations on MulDet. As shown in Tab. 2, applying multi-scale detection alone does not have an obvious effect, as spatial accuracy is still lacking. Instead, adopting multi-level detection, with the spatial resolution restored, remarkably boosts the performance, which confirms the necessity of pixel-level accuracy, especially when only a small pixel error is tolerated. It is also noteworthy that, despite fewer learnable weights and less computation, the proposed multi-level detection outperforms the U-Net variant, highlighting the low-level nature of this task, where a better preservation of low-level features is beneficial. Although the proposed multi-level detection also fuses features of different scales, we find that additionally combining a more explicit multi-scale (pyramid) detection (free-form, multi-scale) is particularly advantageous for handling scale changes. This combination is denoted as ASLFeat (MS) in the following.
+
+TUM [35] (indoor SLAM settings) and KITTI [9] (driving settings):
+
+| Methods | %Recall | %Inlier | %Inlier-m | #Corrs (-m) | %Recall | %Inlier | %Inlier-m | #Corrs (-m) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| SIFT [17] | 57.40 | 75.33 | 59.21 | 65 (316) | 91.70 | 98.20 | 87.40 | 154 (525) |
+| SIFT + HN++ [21] | 58.90 | 75.74 | 62.07 | 67 (315) | 92.00 | 98.21 | 91.25 | 159 (535) |
+| HAN + HN++ [22] | 51.70 | 75.70 | 62.06 | 101 (657) | 90.40 | 98.09 | 90.64 | 233 (1182) |
+| SIFT + ContextDesc [18] | 59.70 | 75.53 | 62.61 | 69 (325) | 92.20 | 98.23 | 91.92 | 160 (541) |
+| LF-Net (MS) [25] | 53.00 | 70.97 | 56.25 | 143 (851) | 80.40 | 95.38 | 84.66 | 202 (1045) |
+| D2-Net (MS) [7] | 34.50 | 67.61 | 49.01 | 74 (1279) | 71.40 | 94.26 | 73.25 | 103 (1832) |
+| SuperPoint [6] | 45.80 | 72.79 | 64.06 | 39 (200) | 86.10 | 98.11 | 91.52 | 73 (392) |
+| R2D2 (MS) [27] | 57.70 | 73.70 | 61.53 | 260 (1912) | 78.80 | 97.53 | 86.49 | 278 (1804) |
+| D2-Net (our impl.) | 39.10 | 70.09 | 61.58 | 64 (337) | 70.80 | 97.04 | 91.97 | 81 (683) |
+| ASLFeat (w/o peakiness meas.) | 53.30 | 74.96 | 68.29 | 116 (703) | 89.60 | 98.47 | 95.36 | 223 (1376) |
+| ASLFeat | 60.20 | 76.34 | 69.09 | 148 (739) | 92.20 | 98.69 | 96.25 | 444 (1457) |
+| ASLFeat (MS) | 59.90 | 76.72 | 69.50 | 258 (1332) | 92.20 | 98.76 | 96.16 | 630 (2222) |
+
+T&T [13] (wide-baseline reconstruction) and CPC [38] (wild reconstruction from web images):
+
+| Methods | %Recall | %Inlier | %Inlier-m | #Corrs (-m) | %Recall | %Inlier | %Inlier-m | #Corrs (-m) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| SIFT | 70.00 | 75.20 | 53.25 | 85 (795) | 29.20 | 67.14 | 48.07 | 60 (415) |
+| SIFT + HN++ | 79.90 | 81.05 | 63.61 | 96 (814) | 40.30 | 76.73 | 62.30 | 69 (400) |
+| HAN + HN++ | 82.50 | 84.71 | 70.29 | 97 (920) | 47.40 | 82.58 | 72.22 | 65 (405) |
+| SIFT + ContextDesc | 81.60 | 83.32 | 69.92 | 94 (728) | 41.80 | 84.01 | 72.21 | 61 (306) |
+| LF-Net (MS) | 57.40 | 66.62 | 60.57 | 54 (362) | 19.40 | 44.27 | 44.35 | 50 (114) |
+| D2-Net (MS) | 68.40 | 71.79 | 55.51 | 78 (2603) | 31.30 | 56.57 | 49.85 | 84 (1435) |
+| SuperPoint | 81.80 | 83.87 | 70.89 | 52 (535) | 40.50 | 75.28 | 64.68 | 31 (225) |
+| R2D2 (MS) | 73.00 | 80.81 | 65.31 | 84 (1462) | 43.00 | 82.40 | 67.28 | 91 (954) |
+| D2-Net (our impl.) | 83.20 | 84.19 | 75.32 | 74 (1009) | 46.60 | 83.72 | 77.31 | 51 (464) |
+| ASLFeat (w/o peakiness meas.) | 86.30 | 84.71 | 77.84 | 171 (1775) | 49.50 | 85.80 | 80.39 | 97 (780) |
+| ASLFeat | 89.90 | 85.33 | 79.08 | 295 (2066) | 51.50 | 87.98 | 82.24 | 165 (989) |
+| ASLFeat (MS) | 88.70 | 85.68 | 79.74 | 327 (2465) | 54.40 | 89.33 | 82.76 | 185 (1159) |
+
+Table 3. Evaluation results on FM-Bench [2] for pair-wise image matching, where %Recall denotes the percentage of accurate pose estimates, %Inlier and %Inlier-m denote the inlier ratio after/before RANSAC, and #Corrs and #Corrs-m denote the correspondence number after/before RANSAC.
+
+Ablations on DCN. As shown in Tab. 2, all investigated variants of DCN are valid and notably boost the performance. Among those designs, the free-form variant slightly outperforms the constrained versions, despite the fact that the HPatches dataset exhibits only homography transformations. This confirms that modelling non-planarity is feasible and useful for local features, and we thus opt for the free-form DCN to better handle geometric variations. Besides, we also implement a single-layer DCN (free-form, 1 layer) that replaces only the last regular convolution (i.e., conv8 in Fig. 1), showing that stacking more DCNs is beneficial and that the shape estimation can be learned progressively.
+
+
+Figure 3. Comparisons on HPatches dataset [1] with mean matching accuracy (MMA) evaluated at different error thresholds, where "MS" denotes that the multi-scale inference is enabled.
+
+Comparisons with other methods. As illustrated in Fig. 3, both ASLFeat and its multi-scale (MS) variant achieve the overall best results on the HPatches dataset for both illumination and viewpoint variations at different error thresholds. Specifically, ASLFeat delivers remarkable improvements upon its backbone architecture, D2-Net, especially at low error thresholds, which in particular demonstrates that the keypoint localization error has been largely reduced. Besides, ASLFeat notably outperforms the more recent R2D2 (72.64 vs. 68.64 for MMA@3 overall), while being more computationally efficient by eschewing the use of dilated convolutions for restoring spatial resolution.
+
+In addition, as shown in Tab. 3 on FM-Bench, ASLFeat remarkably outperforms the other joint learning approaches. In particular, ASLFeat largely improves the state-of-the-art results on two MVS datasets, T&T and CPC, whose scenarios are consistent with the training data. It is also noteworthy that our methods generalize well to unseen scenarios: TUM (indoor scenes) and KITTI (driving scenes). As a common practice, adding more task-specific training data is expected to further boost the performance.
+
+Visualizations. We present some sample detection results on FM-Bench in Fig. 4; more visualizations are provided in the Appendix.
+
+# 4.2. 3D Reconstruction
+
+Datasets. We resort to the ETH benchmark [32] to demonstrate the effect on 3D reconstruction tasks. Following [7], we evaluate on three medium-scale datasets from [38].
+
+Figure 4. Sample detection results on FM-Bench [2] with top-5000 keypoints displayed.
+
+Evaluation protocols. We exhaustively match all image pairs for each dataset, with both a ratio test at 0.8 and a mutual check for outlier rejection, then run the SfM and MVS algorithms of COLMAP [31]. For sparse reconstruction, we report the number of registered images (#Reg. Images), the number of sparse points (#Sparse Points), the average track length (Track Length) and the mean reprojection error (Reproj. Error). For dense reconstruction, we report the number of dense points (#Dense Points). We limit the maximum number of features of ASLFeat to $20\mathrm{K}$.
+
+Results. As shown in Tab. 4, ASLFeat produces the most complete reconstructions in terms of #Reg. Images and #Dense Points. Besides, ASLFeat yields a reprojection error that is on par with SuperPoint and smaller than that of D2-Net, which again validates the effect of the proposed MulDet for restoring spatial information. However, the reprojection error produced by hand-crafted keypoints (e.g., RootSIFT) is still notably smaller than that of all learning methods, which implies that future effort can be spent on further improving keypoint localization in a learning framework.
+
+# 4.3. Visual Localization
+
+Datasets. We resort to the Aachen Day-Night dataset [30] to demonstrate the effect on visual localization tasks, where the key challenge lies in matching images with extreme day-night changes for 98 queries.
+
+Evaluation protocols. We use the evaluation pipeline provided in The Visual Localization Benchmark, which takes custom features as input, then relies on COLMAP [31] for image registration, and finally generates the percentages of successfully localized images within three error tolerances $(0.5\mathrm{m}, 2^{\circ}) / (1\mathrm{m}, 5^{\circ}) / (5\mathrm{m}, 10^{\circ})$. The maximum feature number of our methods is limited to $20\mathrm{K}$.
+
+| Datasets | Methods | #Reg. Images | #Sparse Points | Track Length | Reproj. Error | #Dense Points |
+| --- | --- | --- | --- | --- | --- | --- |
+| Madrid Metropolis (1344 images) | RootSIFT | 500 | 116K | 6.32 | 0.60px | 1.82M |
+| | GeoDesc | 495 | 144K | 5.97 | 0.65px | 1.56M |
+| | SuperPoint | 438 | 29K | 9.03 | 1.02px | 1.55M |
+| | D2-Net (MS) | 495 | 144K | 6.39 | 1.35px | 1.46M |
+| | ASLFeat | 613 | 96K | 8.76 | 0.90px | 2.00M |
+| | ASLFeat (MS) | 649 | 129K | 9.56 | 0.95px | 1.92M |
+| Gendarmenmarkt (1463 images) | RootSIFT | 1035 | 338K | 5.52 | 0.69px | 4.23M |
+| | GeoDesc | 1004 | 441K | 5.14 | 0.73px | 3.88M |
+| | SuperPoint | 967 | 93K | 7.22 | 1.03px | 3.81M |
+| | D2-Net (MS) | 965 | 310K | 5.55 | 1.28px | 3.15M |
+| | ASLFeat | 1040 | 221K | 8.72 | 1.00px | 4.01M |
+| | ASLFeat (MS) | 1061 | 320K | 8.98 | 1.05px | 4.00M |
+| Tower of London (1576 images) | RootSIFT | 804 | 239K | 7.76 | 0.61px | 3.05M |
+| | GeoDesc | 776 | 341K | 6.71 | 0.63px | 2.73M |
+| | SuperPoint | 681 | 52K | 8.67 | 0.96px | 2.77M |
+| | D2-Net (MS) | 708 | 287K | 5.20 | 1.34px | 2.86M |
+| | ASLFeat | 821 | 222K | 12.52 | 0.92px | 3.06M |
+| | ASLFeat (MS) | 846 | 252K | 13.16 | 0.95px | 3.08M |
+
+Table 4. Evaluation results on the ETH benchmark [32] for 3D reconstruction.
+
+Results. As shown in Tab. 5, although only mediocre results are obtained in the previous evaluations, D2-Net performs surprisingly well under challenging illumination variations. This can probably be ascribed to the superior robustness of low-level features pre-trained on ImageNet. On the other hand, our method outperforms the plain implementation of R2D2, while a specialized R2D2 model (R2D2 (fine-tuned)) achieves the state-of-the-art results with a doubled model size, trained on day images from the Aachen dataset and using photo-realistic style transfer to generate night images.
+
+
+| Methods | #Features | Dim | 0.5m, 2° | 1m, 5° | 5m, 10° |
+| --- | --- | --- | --- | --- | --- |
+| RootSIFT | 11K | 128 | 33.7 | 52.0 | 65.3 |
+| HAN + HN++ | 11K | 128 | 37.8 | 54.1 | 75.5 |
+| SIFT + ContextDesc | 11K | 128 | 40.8 | 55.1 | 80.6 |
+| SuperPoint | 7K | 256 | 42.8 | 57.1 | 75.5 |
+| D2-Net (MS) | 19K | 512 | 44.9 | 64.3 | 88.8 |
+| R2D2 (MS) | 10K | 128 | 43.9 | 61.2 | 77.6 |
+| R2D2 (MS, fine-tuned) | 10K | 128 | 45.9 | 66.3 | 88.8 |
+| D2-Net (our impl.) | 10K | 128 | 40.8 | 59.2 | 77.6 |
+| ASLFeat | 10K | 128 | 45.9 | 64.3 | 86.7 |
+| ASLFeat (MS) | 10K | 128 | 44.9 | 64.3 | 85.7 |
+
+Table 5. Evaluation results on Aachen Day-Night dataset [30] for visual localization.
+
+# 5. Conclusions
+
+In this paper, we have used D2-Net as the backbone architecture to jointly learn the local feature detector and descriptor. Three light-weight yet effective modifications have been proposed that drastically boost the performance in two aspects: the ability to model the local shape for stronger geometric invariance, and the ability to localize keypoints accurately for solving robust camera geometry. We have conducted extensive experiments to study the effect of each modification, and demonstrated the superiority and practicability of our methods across various applications.
+
+Acknowledgments. This work is supported by Hong Kong RGC GRF 16206819, 16203518 and T22-603/15N.
+
+# References
+
+[1] V. Balntas, K. Lenc, A. Vedaldi, and K. Mikolajczyk. Hpatches: A benchmark and evaluation of handcrafted and learned local descriptors. In CVPR, 2017. 2, 5, 6, 7
+[2] J.-W. Bian, Y.-H. Wu, J. Zhao, Y. Liu, L. Zhang, M.-M. Cheng, and I. Reid. An evaluation of feature matchers for fundamental matrix estimation. BMVC, 2019. 2, 5, 6, 7, 8
+[3] C. Choy, J. Park, and V. Koltun. Fully convolutional geometric features. In ICCV, 2019. 5, 6
+[4] P. H. Christiansen, M. F. Kragh, Y. Brodskiy, and H. Karstoft. Unsuperpoint: End-to-end unsupervised interest point detector and descriptor. arXiv, 2019. 1, 2
+[5] J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei. Deformable convolutional networks. In ICCV, 2017. 2, 3
+[6] D. DeTone, T. Malisiewicz, and A. Rabinovich. Superpoint: Self-supervised interest point detection and description. In CVPRW, 2018. 1, 2, 4, 6, 7
+[7] M. Dusmanu, I. Rocco, T. Pajdla, M. Pollefeys, J. Sivic, A. Torii, and T. Sattler. D2-net: A trainable cnn for joint detection and description of local features. In CVPR, 2019. 1, 2, 3, 4, 5, 6, 7
+[8] P. Ebel, A. Mishchuk, K. M. Yi, P. Fua, and E. Trulls. Beyond cartesian representations for local descriptors. ICCV, 2019. 2
+[9] A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In CVPR, 2012. 5, 7
+[10] C. Godard, O. Mac Aodha, and G. J. Brostow. Unsupervised monocular depth estimation with left-right consistency. In CVPR, 2017. 1
+[11] R. Hartley and A. Zisserman. Multiple view geometry in computer vision. Cambridge university press, 2003. 4
+[12] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In NeurIPS, 2015. 1, 2
+[13] A. Knapitsch, J. Park, Q.-Y. Zhou, and V. Koltun. Tanks and temples: Benchmarking large-scale scene reconstruction. ToG, 2017. 6, 7
+[14] A. B. Laguna, E. Riba, D. Ponsa, and K. Mikolajczyk. Key.Net: Keypoint detection by handcrafted and learned cnn filters. In ICCV, 2019. 2
+[15] S. Li, L. Yuan, J. Sun, and L. Quan. Dual-feature warping-based motion model estimation. In ICCV, 2015. 1
+[16] G. Lin, A. Milan, C. Shen, and I. Reid. Refinenet: Multi-path refinement networks for high-resolution semantic segmentation. In CVPR, 2017. 1
+[17] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 2004. 1, 2, 3, 4, 6, 7
+[18] Z. Luo, T. Shen, L. Zhou, J. Zhang, Y. Yao, S. Li, T. Fang, and L. Quan. Contextdesc: Local descriptor augmentation with cross-modality context. In CVPR, 2019. 2, 5, 6, 7
+[19] Z. Luo, T. Shen, L. Zhou, S. Zhu, R. Zhang, Y. Yao, T. Fang, and L. Quan. Geodesc: Learning local descriptors by integrating geometry constraints. In ECCV, 2018. 2, 5
+[20] K. Mikolajczyk and C. Schmid. An affine invariant interest point detector. In ECCV, 2002. 1, 3
+
+[21] A. Mishchuk, D. Mishkin, F. Radenovic, and J. Matas. Working hard to know your neighbor's margins: Local descriptor learning loss. In NeurIPS, 2017. 2, 6, 7
+[22] D. Mishkin, F. Radenovic, and J. Matas. Repeatability is not enough: Learning affine regions via discriminability. In ECCV, 2018. 1, 2, 3, 6, 7
+[23] K. Moo Yi, Y. Verdie, P. Fua, and V. Lepetit. Learning to assign orientations to feature points. In CVPR, 2016. 1, 2, 3
+[24] T. Nguyen, S. W. Chen, S. S. Shivakumar, C. J. Taylor, and V. Kumar. Unsupervised deep homography: A fast and robust homography estimation model. In IEEE Robotics and Automation Letters, 2018. 4
+[25] Y. Ono, E. Trulls, P. Fua, and K. M. Yi. Lf-net: learning local features from images. In NeurIPS, 2018. 1, 2, 3, 4, 6, 7
+[26] F. Radenovic, G. Tolias, and O. Chum. Cnn image retrieval learns from bow: Unsupervised fine-tuning with hard examples. In ECCV, 2016. 5
+[27] J. Revaud, P. Weinzaepfel, C. De Souza, N. Pion, G. Csurka, Y. Cabon, and M. Humenberger. R2d2: Repeating and reliable detector and descriptor. In NeurIPS, 2019. 1, 2, 4, 5, 6, 7
+[28] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, 2015. 1, 2, 4
+[29] E. Rublee, V. Rabaud, K. Konolige, and G. R. Bradski. Orb: An efficient alternative to sift or surf. In ICCV, 2011. 1, 2, 3
+[30] T. Sattler, T. Weyand, B. Leibe, and L. Kobbelt. Image retrieval for image-based localization revisited. In BMVC, 2012. 1, 2, 8
+[31] J. L. Schonberger and J.-M. Frahm. Structure-from-motion revisited. In CVPR, 2016. 1, 8
+[32] J. L. Schonberger, H. Hardmeier, T. Sattler, and M. Pollefeys. Comparative evaluation of hand-crafted and learned local features. In CVPR, 2017. 2, 7, 8
+[33] T. Shen, Z. Luo, L. Zhou, R. Zhang, S. Zhu, T. Fang, and L. Quan. Matchable image retrieval by learning from surface reconstruction. In ACCV, 2018. 5
+[34] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2014. 5, 6
+[35] J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers. A benchmark for the evaluation of rgb-d slam systems. In IROS, 2012. 5, 7
+[36] Y. Tian, B. Fan, and F. Wu. L2-net: Deep learning of discriminative patch descriptor in euclidean space. In CVPR, 2017. 2, 5, 6
+[37] Y. Tian, X. Yu, B. Fan, F. Wu, H. Heijnen, and V. Balntas. Sosnet: Second order similarity regularization for local descriptor learning. In CVPR, 2019. 2
+[38] K. Wilson and N. Snavely. Robust global translations with 1dsfm. In ECCV, 2014. 6, 7, 8
+[39] K. M. Yi, E. Trulls, V. Lepetit, and P. Fua. Lift: Learned invariant feature transform. In ECCV, 2016. 1, 2
+[40] J. Zhang, D. Sun, Z. Luo, A. Yao, L. Zhou, T. Shen, Y. Chen, L. Quan, and H. Liao. Learning two-view correspondences and geometry using order-aware network. In ICCV, 2019. 1, 2
+
+[41] L. Zhang and S. Rusinkiewicz. Learning to detect features in texture images. In CVPR, 2018. 4
+[42] R. Zhang, S. Tang, Y. Zhang, J. Li, and S. Yan. Scale-adaptive convolutions for scene parsing. In ICCV, 2017. 2
+[43] R. Zhang, S. Zhu, T. Fang, and L. Quan. Distributed very large scale bundle adjustment by global camera consensus. In ICCV, 2017. 1
+[44] L. Zhou, S. Zhu, Z. Luo, T. Shen, R. Zhang, M. Zhen, T. Fang, and L. Quan. Learning and matching multi-view descriptors for registration of point clouds. In ECCV, 2018. 1
+[45] X. Zhu, H. Hu, S. Lin, and J. Dai. Deformable convnets v2: More deformable, better results. In CVPR, 2019. 2, 3
\ No newline at end of file
diff --git a/aslfeatlearninglocalfeaturesofaccurateshapeandlocalization/images.zip b/aslfeatlearninglocalfeaturesofaccurateshapeandlocalization/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..9ed71279674955d7abd706de9c3b48a4c9214faf
--- /dev/null
+++ b/aslfeatlearninglocalfeaturesofaccurateshapeandlocalization/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9851e83483c708d6c5267c961bbf7498894c788752a2d454332b4345ca91807b
+size 707725
diff --git a/aslfeatlearninglocalfeaturesofaccurateshapeandlocalization/layout.json b/aslfeatlearninglocalfeaturesofaccurateshapeandlocalization/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..e76b6c3cfb381ab62776b9c19b08b6faa40d2294
--- /dev/null
+++ b/aslfeatlearninglocalfeaturesofaccurateshapeandlocalization/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f1dc5fa86ca60cc245a90d7655dda6a04cdd2395742105a0dc32b2ab8e100f8e
+size 384975
diff --git a/assessingeyeaestheticsforautomaticmultireferenceeyeinpainting/2087dece-56c1-459f-8616-cf3ccfcd3efe_content_list.json b/assessingeyeaestheticsforautomaticmultireferenceeyeinpainting/2087dece-56c1-459f-8616-cf3ccfcd3efe_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..ff6fe7a40c2e9e570dd896f4507cbceb42e55eae
--- /dev/null
+++ b/assessingeyeaestheticsforautomaticmultireferenceeyeinpainting/2087dece-56c1-459f-8616-cf3ccfcd3efe_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bac696aa20e4ea938a11ec13b34be415003e08bf20c6220c8f07fe1bff38b1b8
+size 71949
diff --git a/assessingeyeaestheticsforautomaticmultireferenceeyeinpainting/2087dece-56c1-459f-8616-cf3ccfcd3efe_model.json b/assessingeyeaestheticsforautomaticmultireferenceeyeinpainting/2087dece-56c1-459f-8616-cf3ccfcd3efe_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..827930d01ebecd0f2933b53dea268b0654105e98
--- /dev/null
+++ b/assessingeyeaestheticsforautomaticmultireferenceeyeinpainting/2087dece-56c1-459f-8616-cf3ccfcd3efe_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1ab169dc7b1827611f9de65bf74e7267905714d9034a4ec90843153df9c28f9b
+size 85502
diff --git a/assessingeyeaestheticsforautomaticmultireferenceeyeinpainting/2087dece-56c1-459f-8616-cf3ccfcd3efe_origin.pdf b/assessingeyeaestheticsforautomaticmultireferenceeyeinpainting/2087dece-56c1-459f-8616-cf3ccfcd3efe_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2b922ed93d54d08f902d58e6347dfe27bbfc710b
--- /dev/null
+++ b/assessingeyeaestheticsforautomaticmultireferenceeyeinpainting/2087dece-56c1-459f-8616-cf3ccfcd3efe_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b1ec28220ce8afad4f9619753db944a2080cbe5ebf905c200aa25998e89a0e0a
+size 2707676
diff --git a/assessingeyeaestheticsforautomaticmultireferenceeyeinpainting/full.md b/assessingeyeaestheticsforautomaticmultireferenceeyeinpainting/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..3483af4de3eed20b42614eea052a9b551360d0e2
--- /dev/null
+++ b/assessingeyeaestheticsforautomaticmultireferenceeyeinpainting/full.md
@@ -0,0 +1,313 @@
+# Assessing Eye Aesthetics for Automatic Multi-Reference Eye In-Painting
+
+Bo Yan\*, Qing Lin, Weimin Tan, Shili Zhou
+
+Shanghai Key Laboratory
+
+of Intelligent Information Processing,
+
+School of Computer Science, Fudan University
+
+{byan,18210240028,wmtan14,15307130270}@fudan.edu.cn
+
+# Abstract
+
+With the wide use of artistic images, aesthetic quality assessment has been widely concerned. How to integrate aesthetics into image editing is still a problem worthy of discussion. In this paper, aesthetic assessment is introduced into eye inpainting task for the first time. We construct an eye aesthetic dataset, and train the eye aesthetic assessment network on this basis. Then we propose a novel eye aesthetic and face semantic guided multi-reference eye inpainting GAN approach (AesGAN), which automatically selects the best reference under the guidance of eye aesthetics. A new aesthetic loss has also been introduced into the network to learn the eye aesthetic features and generate high-quality eyes. We prove the effectiveness of eye aesthetic assessment in our experiments, which may inspire more applications of aesthetics assessment. Both qualitative and quantitative experimental results show that the proposed AesGAN can produce more natural and visually attractive eyes compared with state-of-the-art methods.
+
+# 1. Introduction
+
+Aesthetic quality assessment [14, 15] has been gaining increasing attention with the wide use of digital images in social networking, communication, entertainment, and shopping, etc. The aesthetic quality of an image largely determines how likely it is to be used, owing to the human affinity for aesthetic things. Assessing image aesthetic quality is essential for screening beautiful pictures from massive online image collections, recommending beautiful pictures that users enjoy, and understanding image attributes such as composition, contrast, and lighting. Eye aesthetic assessment, a branch of image aesthetic assessment, aims at using computational methods to evaluate the "aesthetic feeling" of face images by simulating human perception and cognition of beauty. Eye aesthetic quality has a great influence on users' satisfaction with photos.
+
+
+Figure 1. Eye in-painting results. Columns represent: (a) Image to in-paint, (b) The commercial state-of-the-art eye opening algorithm in Adobe Photoshop Elements 2019, (c) ExGAN result [7] and (d) Our AesGAN result.
+
+Knowing eye aesthetic quality is also useful for selecting better face images. In addition, it can be used as a guide to restore poor-quality eyes to high-quality ones, i.e., eye in-painting.
+
+Eye aesthetic quality assessment is a challenging task due to its extremely subjective nature. It cannot simply rely on objective quality assessment methods such as PSNR, MSE, and SSIM [25], which are commonly used to assess image distortions. In contrast, the assessment of eye aesthetic quality needs a lot of manual labelling because of varying personal preferences. Currently, there has been no research on the assessment of eye aesthetic quality. In addition, eye aesthetic quality assessment can tell us the quality of the eyes in a face image. Beyond simply knowing the aesthetic quality of the eyes, we also hope to use it as a guide to make poor eyes more realistic. This gives rise to eye in-painting, which can be seen as an application of eye aesthetics.
+
+There has been little research on eye in-painting; Figure 1 shows some example results.
+
+
+Figure 2. Comparison of different eye in-painting frameworks. (a) Traditional eye in-painting methods do not use references, which only use incomplete images as input and finally output repaired images through encoders and decoders. (b) Single reference based eye inpainting methods use one reference image to assist in-painting, while (c) multi-reference based eye inpainting methods take multiple references. (d) Our proposed aesthetic guided eye in-painting method takes aesthetics as the criterion of reference selection, and uses the final selected image as the input.
+
+Eye in-painting is a branch of the face restoration problem, mainly applied to closed-eye and squinting-eye cases, with the goal of producing realistic and natural new eyes. The current approaches can be summarized as the three types of frameworks shown in Figure 2. These frameworks focus on whether to use reference samples and how many to use. The ExGAN [7] method has shown that identity-preserving eye in-painting results can be obtained by referring to an example of the same identity. Nevertheless, these three frameworks do not consider how to select reference examples, which is a practical problem. In addition, eye aesthetic attributes, which are crucial to eye in-painting, have not been considered in these frameworks. These observations motivate us to develop a new framework, Aesthetics-Guided Eye In-painting, which addresses the selection of multiple reference examples based on our proposed eye aesthetic assessment.
+
+Due to the lack of research on eye aesthetics, there is no dataset that can be directly used for eye aesthetic assessment. On the basis of an existing face dataset, we crop out the eye area and invite 22 volunteers to aesthetically assess the eye quality. The labelled dataset contains 1,040 eye images, all of which are divided into two categories according to the average of the manual ratings: aesthetically pleasing or not. Based on this dataset, we train the eye aesthetic assessment network. We then introduce the eye aesthetic assessment system into the eye in-painting task. First, it can automatically select an appropriate reference for eye generation. Second, we introduce an aesthetic loss that forces the in-painting network to learn eye aesthetic features and encourages the generated eyes to be more realistic.
+
+This paper mainly has the following contributions:
+
+1. This paper first shows the effectiveness of aesthetic assessment in the multi-reference eye in-painting task, which may inspire the introduction of aesthetic assessment to other work in the future.
+2. We annotate a new eye aesthetic dataset and construct an eye aesthetic assessment network. To the best of our knowledge, we are the first to introduce an image reconstruction branch into a quality assessment network, and we verify its effectiveness in maintaining sample uniqueness and improving network performance.
+3. Through the eye in-painting task, the effectiveness of the eye aesthetic assessment network is demonstrated. We propose a novel eye aesthetic and face semantic guided multi-reference eye in-painting GAN approach (AesGAN). By using the high-quality reference and aesthetic features provided by the eye aesthetic assessment network, our eye in-painting method outperforms state-of-the-art methods in both qualitative and quantitative results.
+
+# 2. Related Work
+
+# 2.1. Aesthetic Assessment
+
+Image aesthetic assessment [8, 12, 6] aims to use computers to simulate human perception and cognition of aesthetics, and has important application prospects in clothing design, beauty makeup, face editing, and picture beautification [20, 3], etc. Beyond its partially objective aspects, image aesthetic quality assessment is highly subjective, which makes it more difficult than other image processing tasks.
+
+Recently, image aesthetic assessment has been regarded as an independent task. However, the deep networks used to extract aesthetic features may not explain aesthetics well. For different aesthetic tasks, the definition of aesthetics changes accordingly. Facial images differ from ordinary natural images in that they have more specific aesthetic characteristics [21], especially around the eye area. Therefore, we consider that aesthetic assessment can be applied to more specific image processing tasks, such as eye in-painting. This not only helps to produce more realistic results, but also makes the aesthetic features extracted by the aesthetic network more interpretable.
+
+Compared with other computer vision tasks, data acquisition for image aesthetics is more difficult and the overall data size is smaller. Taking image recognition as an example, this task has a large body of research results and large datasets, such as ImageNet [5] with more than 14 million labelled images. Only a few datasets are available for image aesthetic quality assessment, of which the largest ones, AROD [24] and AVA [19], have only $380\mathrm{K}$ and $250\mathrm{K}$ images, respectively. The labels of these images are obtained from users on online image-sharing sites. Most of these images come from camera photography and cannot be directly used for eye aesthetic assessment.
+
+Due to the lack of research on eye aesthetics, we annotate a dataset containing 1,040 eye images. Based on this dataset, we train the eye aesthetic assessment network. We then introduce the eye aesthetic assessment system into the eye in-painting task.
+
+# 2.2. Eye In-Painting
+
+Recently, more people tend to record their lives with selfies. Plenty of photos appear on social media every day, especially portraits, some of which need to be repaired for blemishes [29, 4, 26]. Blinking and closed eyes in photos are common problems that bother people, which motivates the task of eye in-painting.
+
+With the wide use of GANs [10], image restoration work can obtain more realistic results [22, 11, 28]. A reference-free GAN can only generate eyes from learned experience, not according to the given identity in the photo. However, different people's appearance and face structure are diverse. An exemplar of the person is therefore necessary, which enables GANs to generate new eyes that are more consistent with the person's identity [18].
+
+Previous eye in-painting methods combine the reference image directly with the image to be repaired [1, 2, 23]. These methods do not take into account the semantic and structural information around the eye, so they show poor in-painting performance when the lighting or facial pose differs, as illustrated by the commercial state-of-the-art eye opening algorithm shown in Figure 1(b). In addition, some of the methods rely on automatic eye detection, but the eye regions of many closed-eye photographs are not well detected.
+
+ExGANs [7] use another image of the same identity as the reference for generator training, which can provide additional information to the generation network. Different from previous GANs, this additional identity information can be inserted into the network at multiple points to give it better expressive ability. Although ExGANs can produce realistic eye in-painting results, there are also some limitations. The random selection of the exemplar can only provide basic reference information, but does not take into account the quality and fitness of the eyes. When confronted with occluded eyes and side faces, ExGANs do not perform satisfactorily. Therefore, we add the constraints of eye aesthetic assessment and face semantic parsing, and replace the original square masks with elliptical ones to address these limitations.
+
+
+Figure 3. The labeled eye aesthetics assessment dataset according to manual scoring. The dataset has a total of 1,040 eye images divided into two categories. The first line shows low-quality eye images, and the second line shows high-quality ones.
+
+Figure 1 also shows the results of our eye in-painting network compared with ExGANs.
+
+# 3. Eye Aesthetic Assessment
+
+# 3.1. Building Aesthetic Dataset
+
+Eye aesthetics assessment is still a new topic with few studies. Unlike the traditional aesthetic assessment task, we need to build a specific dataset for eye aesthetics. In view of the difficulty of collecting an aesthetic dataset, we choose to label an existing face dataset. The CAS-PEAL [9] face dataset contains 1,040 frontal face images and standardizes the remaining objective factors, so that the training of the network can focus on extracting effective eye features. We therefore choose the CAS-PEAL dataset for labelling and training.
+
+Based on the CAS-PEAL dataset, we annotate a new eye dataset with 1,040 eye images. 22 volunteers are invited to perform the eye aesthetic assessment. According to the average of the manual ratings, the dataset is divided into two grades: high quality and low quality. Figure 3 shows part of the eye aesthetic dataset. The number before each eye image represents the score it was given; the eyes rated 2 are generally more aesthetically pleasing.
+
+# 3.2. Aesthetic Assessment Network
+
+Based on the eye aesthetic dataset, we propose an eye aesthetics assessment network (AesNet). A traditional quality assessment network has only one branch, which directly trains a classifier to output the corresponding image quality level in an end-to-end manner. For aesthetic quality, especially the beauty of the eyes, each sample has its own unique features. In order to maintain the uniqueness of the samples, we add a reconstruction branch to the image quality assessment task for the first time. As shown in Figure 4, our AesNet consists of three parts: eye aesthetic feature extraction, eye scoring and eye reconstruction. The aesthetic feature extraction module contains an encoder and nine residual blocks. The encoder has three convolution modules, each of which consists of one convolution layer, one normalization layer, one ReLU activation layer and one max-pooling layer.
+
+
+Figure 4. The architecture of our eye aesthetic assessment network. We first introduce the reconstruction branch into the image quality assessment task to maintain the uniqueness of eye aesthetic. Only the eye aesthetic feature extraction module and the eye scoring module are needed during testing.
+
+We send the extracted features into the eye scoring module and the eye reconstruction module at the same time. The eye scoring module outputs the prediction result of the eye assessment. The eye reconstruction module is composed of a decoder and the output is a generated eye image. We use reconstruction loss to constrain the generated eyes to be similar to the input ones. Only the eye aesthetic feature extraction module and the eye scoring module are needed during testing.
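+
+The description above fixes the coarse shape of AesNet, which the following PyTorch sketch makes concrete. It is only illustrative: the channel widths, the residual-block design and the decoder layout are our assumptions rather than the released configuration.
+
+```python
+import torch
+import torch.nn as nn
+
+class ResBlock(nn.Module):
+    """A plain residual block; the exact block design is an assumption."""
+    def __init__(self, ch):
+        super().__init__()
+        self.body = nn.Sequential(
+            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
+            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
+    def forward(self, x):
+        return torch.relu(x + self.body(x))
+
+class AesNet(nn.Module):
+    """Two-branch eye aesthetics network: a shared feature extractor,
+    a scoring (classification) head and a reconstruction (decoder) head."""
+    def __init__(self, num_classes=2, ch=64):
+        super().__init__()
+        # Encoder: three convolution modules (conv -> norm -> ReLU -> max-pool).
+        def conv_module(cin, cout):
+            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
+                                 nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
+                                 nn.MaxPool2d(2))
+        self.encoder = nn.Sequential(conv_module(3, ch), conv_module(ch, ch), conv_module(ch, ch))
+        # Nine residual blocks complete the aesthetic feature extractor.
+        self.res_blocks = nn.Sequential(*[ResBlock(ch) for _ in range(9)])
+        # Eye scoring head (used at test time together with the extractor).
+        self.scoring = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
+                                     nn.Linear(ch, num_classes))
+        # Eye reconstruction decoder (training-time only); mirrors the encoder.
+        def up_module(cin, cout):
+            return nn.Sequential(nn.Upsample(scale_factor=2, mode="nearest"),
+                                 nn.Conv2d(cin, cout, 3, padding=1),
+                                 nn.BatchNorm2d(cout), nn.ReLU(inplace=True))
+        self.decoder = nn.Sequential(up_module(ch, ch), up_module(ch, ch),
+                                     nn.Upsample(scale_factor=2, mode="nearest"),
+                                     nn.Conv2d(ch, 3, 3, padding=1), nn.Sigmoid())
+
+    def features(self, x):
+        # The aesthetic feature extraction module f(.) referenced in Figure 5.
+        return self.res_blocks(self.encoder(x))
+
+    def forward(self, x):
+        feat = self.features(x)
+        return self.scoring(feat), self.decoder(feat)
+```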
+
+Assume that for each image $e_i$ input to the AesNet, we have its corresponding aesthetic label $y$ . We use the softmax cross-entropy loss as the classification loss defined as
+
+$$
+\mathcal{L}_{\text{Classification}} = -\sum_{j=1}^{T} y_{j} \log s_{j} \tag{1}
+$$
+
+where $T$ is the number of categories, and $s_j$ is the j-th value of the softmax output vector, which represents the probability that the sample belongs to category $j$ . We use the MSE loss as the reconstruction loss defined as
+
+$$
+\mathcal{L}_{\text{Reconstruction}} = \frac{1}{n} \sum_{i=1}^{n} \left(g_{i} - e_{i}\right)^{2} \tag{2}
+$$
+
+where $g_{i}$ is the generated eye image, and $n$ is the pixel number. The overall loss function is defined as
+
+$$
+\mathcal{L}_{\text{EyeAes}} = \mathcal{L}_{\text{Classification}} + \lambda_{\text{rec}} \mathcal{L}_{\text{Reconstruction}} \tag{3}
+$$
+
+where $\lambda_{rec}$ is the weight balancing the two losses and is set to 0.01 during training. We split the dataset into five folds for cross-validation. The trained eye aesthetic assessment network achieves an accuracy of 0.84.
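+
+For concreteness, the objective in Eq. (3) translates into a few lines of code. The snippet below is a minimal sketch assuming a PyTorch setting and the two-headed AesNet interface sketched above; all names are illustrative.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def aesnet_loss(logits, labels, recon, target, lambda_rec=0.01):
+    """Overall AesNet objective (Eq. 3): softmax cross-entropy for eye scoring
+    plus an MSE reconstruction term weighted by lambda_rec."""
+    l_cls = F.cross_entropy(logits, labels)   # Eq. (1)
+    l_rec = F.mse_loss(recon, target)         # Eq. (2), mean over pixels
+    return l_cls + lambda_rec * l_rec
+
+# Hypothetical usage with the AesNet sketch above:
+# logits, recon = model(eye_batch)
+# loss = aesnet_loss(logits, labels, recon, eye_batch)
+```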
+
+
+Figure 5. The architecture of our eye in-painting network (AesGAN) based on eye aesthetic and face semantic, containing a generator, two discriminators, an eye aesthetic assessment network and a parsing network. The function $f(O,R)$ is the eye aesthetic feature extraction module in Figure 4.
+
+# 4. Eye In-Painting with Eye Aesthetic Assessment
+
+# 4.1. Overview
+
+We introduce eye aesthetic assessment into the eye in-painting task and propose an eye-aesthetic and face-semantic guided multi-reference eye in-painting method (AesGAN). Given an incomplete image, our goal is to produce natural and attractive eyes that are both semantically consistent with the whole face and visually realistic. Figure 5 shows the proposed network, which consists of one generator, two discriminators, an eye aesthetic assessment network and a parsing network.
+
+We use the eye aesthetics assessment network and the structural similarity index (SSIM) to automatically select the best reference. To highlight the role of eye aesthetic assessment, we introduce a new aesthetic loss. At the same time, a parsing loss is added to ensure pixel fidelity and semantic consistency. The parameters of the parsing network and the eye assessment network are fixed during training.
+
+# 4.2. Guidance of Eye Aesthetic Assessment
+
+People's growing pursuit of beauty offers a new perspective on the image restoration task, and it is instructive to introduce eye aesthetic assessment into the eye in-painting task. The guidance of eye aesthetic assessment is manifested in three aspects.
+
+Firstly, based on eye aesthetics, we propose a multi-reference selection mechanism. We use AesNet to score the eyes of the references. Then we calculate the SSIM of each reference against the input, excluding the eye region, in order to select the image whose structure (shape and pose) is most similar to that of the image to be in-painted. The selected reference
+
+
+Figure 6. The segmentation result of face semantic parsing network testing on Celeb-ID Dataset. Different colors of pixels represent different face components.
+
+can provide additional eye information for the generator, especially eye aesthetic features.
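+
+A possible implementation of this selection mechanism is sketched below. It assumes the AesNet interface sketched in Section 3.2, NumPy images in [0, 1], pre-computed boolean eye masks, and scikit-image's SSIM; the names and the exact masking strategy are illustrative assumptions rather than the released implementation.
+
+```python
+import numpy as np
+import torch
+from skimage.metrics import structural_similarity as ssim
+
+def select_reference(input_img, refs, eye_masks, aesnet):
+    """Pick the best reference: prefer references that AesNet rates as high quality,
+    then take the one most structurally similar (SSIM) to the input outside the eye
+    region. Images are HxWx3 float arrays in [0, 1]; eye_masks[i] is a boolean HxW
+    mask of the eye region for refs[i]."""
+    scores = []
+    for ref, mask in zip(refs, eye_masks):
+        with torch.no_grad():
+            x = torch.from_numpy(ref).permute(2, 0, 1).unsqueeze(0).float()
+            logits, _ = aesnet(x)                 # AesNet sketched in Section 3.2
+        grade = int(logits.argmax(dim=1))         # 1 = high quality, 0 = low quality
+        a, b = input_img.copy(), ref.copy()
+        a[mask] = 0.0                             # blank the eye region (simplification:
+        b[mask] = 0.0                             # the same mask is applied to both images)
+        sim = ssim(a, b, channel_axis=-1, data_range=1.0)
+        scores.append((grade, sim))
+    # Aesthetic grade has priority; SSIM breaks ties among high-quality references.
+    return max(range(len(refs)), key=lambda i: scores[i])
+```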
+
+Secondly, in addition to providing aesthetic prior knowledge, we also constrain aesthetics during the training of the network. We use the eye feature extraction module of AesNet to extract the eye aesthetic features of the generated eyes and of the reference. The aesthetic loss computes the $L_2$ distance between these two features, helping the generator better learn the concept of eye aesthetics.
+
+Last but not least, we can use the eye aesthetic assessment network to compare our results with those of other advanced methods to verify the effectiveness of introducing eye aesthetics.
+
+# 4.3. Face Semantic Embedding
+
+Traditional GAN models generate facial components independently, which may not suit the original face. As mentioned in [7], if a part of the eyes is obscured, the new eyes take on strange shapes or have blurred details. Therefore, inspired by [16], we introduce a parsing network, implemented by changing the last layer of the object contour detection network proposed in [27] to 11 outputs.
+
+We train a segmentation model on the CelebA [17] dataset, which achieves an F-score of 0.822. Figure 6 shows segmentation results of the face semantic parsing network tested on the Celeb-ID dataset. The parsing result of the generated image is compared with that of the original image, and softmax cross-entropy is used as the parsing loss of the in-painting network, making the generated eye details more consistent with the overall coherence of the image.
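+
+A minimal sketch of this parsing loss is given below, assuming the frozen parsing network exposes per-pixel logits over the 11 classes; the function and argument names are illustrative.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def parsing_loss(parse_net, generated, original):
+    """Face-semantic parsing loss: the frozen parsing network (11 classes) labels the
+    original image, and the generated image is penalized with softmax cross-entropy
+    against those labels. `parse_net` is assumed to return logits of shape (N, 11, H, W)."""
+    with torch.no_grad():
+        target = parse_net(original).argmax(dim=1)   # pseudo ground-truth labels (N, H, W)
+    logits = parse_net(generated)
+    return F.cross_entropy(logits, target)
+```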
+
+# 4.4. Loss Functions
+
+The global loss function of the network is defined as
+
+$$
+\mathcal{L} = \mathcal{L}_{GAN} + \lambda_{r} \mathcal{L}_{r} + \lambda_{p} \mathcal{L}_{p} + \lambda_{aes} \mathcal{L}_{aes} \tag{4}
+$$
+
+where $\mathcal{L}_{GAN}$ is the adversarial loss, and $\mathcal{L}_r$ is the reconstruction loss used in [7]. $\mathcal{L}_p$ is the parsing loss, which is the softmax cross-entropy loss. $\mathcal{L}_{aes}$ is the aesthetic loss, computed on the activations of the final layer of the residual blocks and defined as
+
+$$
+\mathcal{L}_{aes} = \left\| \mathcal{F}\left(g_{i}\right) - \mathcal{F}\left(e_{i}\right) \right\|_{2} \tag{5}
+$$
+
+where $\mathcal{F}(g_i)$ and $\mathcal{F}(e_i)$ are the eye feature layers of the generated image and the reference, respectively. By shortening the aesthetic distance between the generated eyes and the reference eyes, we make the generated eyes look more aesthetically pleasing. $\lambda_r$ , $\lambda_p$ and $\lambda_{aes}$ are the weights to balance the effects of different losses.
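+
+Putting Eqs. (4) and (5) together, the full objective could be assembled as in the sketch below. It reuses the hypothetical parsing_loss helper sketched in Section 4.3 and treats the aesthetic feature extractor of AesNet as $\mathcal{F}$; the adversarial term is assumed to be computed by the discriminators elsewhere, and the L1 form of the reconstruction term from [7] is an assumption.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def aesgan_loss(l_gan, generated, reference, original, aes_features, parse_net,
+                lambda_r=1.0, lambda_p=0.03, lambda_aes=1.0):
+    """Overall in-painting objective of Eq. (4). `aes_features` is the frozen aesthetic
+    feature extractor of AesNet (the F of Eq. (5)), `parse_net` is the frozen parsing
+    network, and `l_gan` is the adversarial term supplied by the discriminators."""
+    l_r = F.l1_loss(generated, original)                    # reconstruction term (assumed L1, as in [7])
+    l_p = parsing_loss(parse_net, generated, original)      # parsing loss sketched in Section 4.3
+    l_aes = torch.norm(aes_features(generated) - aes_features(reference), p=2)  # Eq. (5)
+    return l_gan + lambda_r * l_r + lambda_p * l_p + lambda_aes * l_aes
+```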
+
+# 5. Experiment
+
+This section provides a detailed assessment of eye aesthetics and their effectiveness for eye in-painting. Specifically, we first analyze the influence of the different AesNet modules on network performance. Then we conduct an ablation study to analyze the effectiveness of different designs of our AesGAN, including different settings of the loss functions and of the eye aesthetic assessment. We also report experiments comparing single-reference and multi-reference eye in-painting with aesthetic-assessment-guided eye in-painting. Finally, we compare against the latest and most representative eye in-painting methods.
+
+For the eye in-painting task, we use the Celeb-ID [7] dataset to train and test our model; it contains about 17k identities and 100k photos in total, with at least 3 photos per celebrity. We split the dataset according to the following criterion: for any celebrity, if any of their samples is a closed-eye photo, all of their photos are assigned to the test set; otherwise they are assigned to the training set. Thus every image in the training set shows a person with open eyes, forcing the network to produce open eyes. Each training image is paired with a reference image of the same identity.
+
+All experiments are conducted on a machine with an Nvidia GTX 1080Ti GPU, with the learning rate set to 1e-4. The parameters are optimized by Adam [13] with $\beta_{1} = 0.5, \beta_{2} = 0.999$. To balance the effects of the different losses, we use $\lambda_{r} = 1$, $\lambda_{p} = 0.03$ and $\lambda_{aes} = 1$ in our experiments. Further results are provided in the supplementary file to give a more detailed view of the performance advantages of our method.
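+
+For reference, the optimization setup described above corresponds roughly to the following PyTorch configuration; the generator shown is only a placeholder module, since the actual in-painting architecture is the one in Figure 5.
+
+```python
+import torch
+import torch.nn as nn
+
+# Placeholder generator; the real architecture follows the in-painting network of Figure 5.
+generator = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
+                          nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())
+
+# Adam with beta1 = 0.5, beta2 = 0.999 and learning rate 1e-4, as reported above.
+optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.5, 0.999))
+
+# Loss-balancing weights used in our experiments.
+lambda_r, lambda_p, lambda_aes = 1.0, 0.03, 1.0
+```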
+
+# 5.1. Discussion of the AesNet modules
+
+The traditional image quality assessment network is an end-to-end model with only one classification branch. The architecture is simple and intuitive and can achieve good classification results given a large number of training samples. Different from existing quality assessment tasks, eye aesthetic assessment is more subjective and individualized. Moreover, because only a small number of eye aesthetic samples are available, the network lacks sufficient training signal, resulting in unstable performance. We therefore add an image reconstruction branch to help the deep network learn the concept of eye aesthetics. The two branches share one eye aesthetic feature extraction module, and the eye reconstruction module helps the network to
+
+| Framework | Accuracy | Recall | Precision | F1 |
+|---|---|---|---|---|
+| Baseline | 0.7 | 0.84 | 0.656 | 0.737 |
+| Baseline+(a) | 0.73 | 0.78 | 0.709 | 0.743 |
+| Baseline+(a)+(b) | 0.84 | 0.76 | 0.905 | 0.826 |
+
+Table 1. Comparison of AesNet performance with different modules. The baseline model consists of an encoder and the eye scoring module. Module (a) represents the residual blocks. Module (b) represents the eye reconstruction module.
+
+| Network | Ref-Select | Mean $\mathcal{L}_{1}^{-}$ | PSNR$^{+}$ | MS-SSIM$^{+}$ | IS$^{+}$ | FID$^{-}$ |
+|---|---|---|---|---|---|---|
+| ExGAN | Random | 7.15E-3 | 38.57dB | 0.9344 | 3.56 | 15.66 |
+| Our baseline | SSIM | 4.82E-3 | 42.56dB | 0.9708 | 4.10 | 6.74 |
+| Our baseline+$\mathcal{L}_{p}$ | SSIM | 4.78E-3 | 42.57dB | 0.9720 | 4.11 | 6.47 |
+| Our baseline | Aesthetic | 4.76E-3 | 42.56dB | 0.9720 | 4.04 | 6.99 |
+| Our baseline+$\mathcal{L}_{p}$ | Aesthetic | 4.71E-3 | 42.57dB | 0.9728 | 4.09 | 6.55 |
+| Our baseline+$\mathcal{L}_{aes}$(a) | Aesthetic | 4.74E-3 | 42.56dB | 0.9722 | 4.03 | 6.57 |
+| Our baseline+$\mathcal{L}_{aes}$(b) | Aesthetic | 4.70E-3 | 42.59dB | 0.9730 | 4.08 | 6.78 |
+| Our baseline+$\mathcal{L}_{p}+\mathcal{L}_{aes}$(a) | Aesthetic | 4.67E-3 | 42.60dB | 0.9729 | 4.10 | 6.66 |
+| Our baseline+$\mathcal{L}_{p}+\mathcal{L}_{aes}$(b) | Aesthetic | 4.68E-3 | 42.58dB | 0.9730 | 4.15 | 6.43 |
+
+Table 2. Quantitative results of AesGAN with different structures. The second column represents the way the network selects the reference. We use one-branch AesNet in $\mathcal{L}_{aes}$ (a) and two-branch AesNet in $\mathcal{L}_{aes}$ (b). $^-$ Lower is better. $^+$ Higher is better. IS means the inception score.
+
+visualize the learned concept of eye aesthetic.
+
+The comparison of AesNet performance with different modules is shown in Table 1. The baseline network is shallow with a simple structure, resulting in unsatisfactory accuracy. After the nine residual blocks are added, the accuracy improves and the network's judgments on positive and negative samples become more balanced. After the reconstruction branch is added, the accuracy of the network is greatly improved, which demonstrates the effectiveness of this module for aesthetic assessment.
+
+# 5.2. Ablation Study on the Eye In-painting Network
+
+Effectiveness of Eye Aesthetic Assessment. As described in Section 3, we trained an eye aesthetic assessment network and then use it to select the best reference. Because AesNet outputs only two categories, when all references are rated as high quality we choose the one closest to the input. We account for the pose and angle of the face by measuring the structural similarity (SSIM) between the reference and the input image, so that more useful eye aesthetic features can be passed to the generator. Figure 7 shows how our algorithm selects the best reference and the corresponding in-painting results. Table 2 shows the effect of different reference-selection metrics. SSIM alone can only select a face with similar position and pose, without considering eye aesthetics. By using eye aesthetics as an index, the generator can learn from the most suitable eyes, which improves the in-painting result.
+
+We then use the eye aesthetic assessment network to assess the generated images and find that our method does improve eye quality. Figure 8 compares the eye assessment
+
+
+Figure 7. How our algorithm selects the best reference. First, the network assesses the eye aesthetic grade of each reference. Among all references rated 2, the algorithm calculates the SSIM between the input and each reference. Considering these two factors, we finally choose the most suitable reference. Columns: (a) original image, (b) AesGAN's in-painting result and (c) the selected best reference, marked with a green box.
+
+
+Figure 8. Comparison between ExGAN, AesGAN without aesthetic and AesGAN. Network with aesthetic constraints can produce more satisfactory results.
+
+results. We find that $10.1\%$ of the test images changed from low quality to high quality with the eye scoring constraint. Experimental results show that AesNet greatly improves the quality of eye in-painting, which also suggests that our network generalizes well without a domain shift problem.
+
+Discussion of Loss Functions. To compare the influence of different network structures, we add the proposed components to the network one at a time. Table 2 shows the quantitative results of the different network structures. The smaller $\mathcal{L}_1$ is and the larger SSIM is, the closer the generated image is to the input. This means that the in-painting does not change the individual information of the original image.
+
+
+Figure 9. Visual comparison of AesGAN with different structures. The second column shows ExGAN's results. The third column shows the results of our AesGAN without aesthetic selection and aesthetic loss. The fourth column shows the results of our AesGAN without parsing loss. The last column shows the results of our AesGAN with all modules. Best viewed zoomed in.
+
+A larger PSNR value means less distortion, and the inception score (IS) measures image richness, for which higher is better.
+
+From Table 2, we find that after adding the parsing loss constraint to the network, the inception score improves greatly. This shows that segmentation constraints can better preserve the individual's feature information. We use the one-branch AesNet in AesGAN(a). After adding the eye aesthetic constraint, most of the quantitative results reach their best values. However, the inception score decreases slightly, which may be because aesthetic learning reduces the richness of the eye samples. The inception score improves significantly after using the two-branch AesNet in AesGAN(b), which also demonstrates the effectiveness of the proposed eye reconstruction module in maintaining sample features. As mentioned in [7], the FID score is more closely related to perceived quality than several other metrics and increases with the blurriness of the image. We also measure the FID scores of the eye area, and the results show that our final model achieves the best performance.
+
+Figure 9 shows a comparison of the visual effects of different network structures. It can be observed that the network with the parsing constraint and the eye aesthetic constraint effectively addresses limitations such as blurred eye details and generates more realistic eyes.
+
+# 5.3. Comparison with State-of-the-arts
+
+Single Reference VS. Aesthetic Assessment Guided Eye In-Painting: ExGAN [7] selects a single reference randomly, which may have some limitations.
+
+
+Figure 10. Different in-painting results of the same photo with different references. Row (a) shows the different references and row (b) shows the corresponding in-painting results with ExGAN.
+
+
+Figure 11. Comparison with state-of-the-art methods. GLCIC [11] is a non-exemplar method. ExGAN [7] is the first exemplar-based eye in-painting GAN. MR-GAN is the multi-reference model we trained. AesGAN is the eye-aesthetic-based eye in-painting method we propose. Best viewed zoomed in.
+
+Figure 10 shows different in-painting results for the same photo with different references, which suggests that the eye quality of the reference greatly influences the final in-painting result. The aesthetic-assessment-guided eye in-painting method can effectively utilize the individual's original eye information and make the generated eyes meet visual quality requirements. The visual comparison results are shown in Figure 11.
+
+Multi-Reference VS. Aesthetic Assessment Guided Eye In-Painting: Given the limitations of a single reference, a straightforward idea is to select multiple references. Since there is no existing multi-reference method, we trained a model called MR-GAN. The experimental results show that
+
+| Method | Mean $\mathcal{L}_{1}^{-}$ | PSNR$^{+}$ | MS-SSIM$^{+}$ | IS$^{+}$ | FID$^{-}$ |
+|---|---|---|---|---|---|
+| GLCIC | 7.36E-3 | 28.94dB | 0.7261 | 3.72 | 15.30 |
+| ExGAN | 7.15E-3 | 38.57dB | 0.9344 | 3.56 | 15.66 |
+| MR-GAN | 11.77E-3 | 38.37dB | 0.9277 | 3.90 | 13.61 |
+| AesGAN | 4.68E-3 | 42.58dB | 0.9730 | 4.15 | 6.43 |
+
+Table 3. Quantitative results of different methods. ${}^{ - }$ Lower is better. ${}^{ + }$ Higher is better. IS means the inception score.
+
+although multiple references can solve the problem of pose mismatch, they also provide more in-painting directions, making the eye quality of the results hard to control. AesGAN encourages the generator to learn eye aesthetic features and produces better in-painting results, as shown in Figure 11.
+
+Qualitative and Quantitative Results: Figure 11 and Table 3 show the qualitative and quantitative results compared with the state-of-the-art methods. It is obvious that the exemplar-based methods perform better than the non-exemplar method, and our AesGAN generates the most realistic and natural eyes. The quantitative results also show that our method outperforms the state-of-the-art methods by a clear margin on every metric.
+
+# 6. Discussion
+
+# 6.1. Challenge Cases and Real-world Examples
+
+It is mentioned in [7] that ExGAN cannot deal well with occluded eyes and that some iris colors of the new eyes may be inconsistent with the original image. As shown in Figure 12, AesGAN with the parsing constraint and the eye aesthetic constraint addresses these limitations well. We also test closed-eye images of several celebrities from the Internet with a selected reference. These photos are taken in the wild, without any pre-processing, matching real closed-eye situations. Figure 13 shows our test results, demonstrating that our method can produce realistic eyes.
+
+# 6.2. Limitation and Future Work
+
+The effectiveness of eye aesthetics in improving the quality of eye in-painting is shown above. However, due to the small number of samples used for training, the performance of the eye aesthetic assessment network remains to be improved. The experimental results show that the diversity of the network-generated eyes decreases after the aesthetic loss is added. This may be due to the simple structure of the aesthetic assessment network, which limits the variety of the extracted aesthetic features. We demonstrate in the experiments that adding an eye reconstruction branch to the aesthetic network is useful for increasing sample diversity. However, due to the different learning efficiency of high-level and low-level features, the network performance is not stable: when the two-branch AesNet is used in the eye in-painting task, some metrics decrease. Therefore, how to extract and apply
+
+Figure 12. Improvement results over ExGAN's limitations (columns: original, ExGAN, AesGAN). We can better handle occluded eyes and side faces, and also improve the color change of the iris mentioned in ExGAN.
+
+
+Figure 13. The real-world test results (rows: reference, input, in-painting).
+
+the aesthetic information accurately still needs more exploration.
+
+In fact, many image completion tasks face the problem of selecting reference samples, and choosing appropriate samples is of great significance for the final results. As an important image quality assessment index, aesthetics deserves more attention in the future for selecting appropriate references and improving the quality of image restoration. This work is a first attempt to demonstrate the value of aesthetics, and how to edit images aesthetically needs more research.
+
+# 7. Conclusion
+
+This paper shows the effectiveness of eye aesthetic assessment for eye in-painting, suggesting that aesthetic assessment is of great value for image completion. A new dataset is constructed to train the assessment network. Extensive experimental results show that the proposed eye aesthetic assessment network greatly improves the quality of eye in-painting, and both the subjective effect and the objective quality reach state-of-the-art performance. To further improve the quality of image completion, we look forward to more research on aesthetic assessment.
+
+# References
+
+[1] Aseem Agarwala, Mira Dontcheva, Maneesh Agrawala, Steven Drucker, Alex Colburn, Brian Curless, David Salesin, and Michael Cohen. Interactive digital photomontage. In ACM Transactions on Graphics (ToG), volume 23, pages 294-302. ACM, 2004.
+[2] Connelly Barnes, Eli Shechtman, Adam Finkelstein, and Dan B Goldman. Patchmatch: A randomized correspondence algorithm for structural image editing. In ACM Transactions on Graphics (ToG), volume 28, page 24. ACM, 2009.
+[3] Subhabrata Bhattacharya, Rahul Sukthankar, and Mubarak Shah. A framework for photo-quality assessment and enhancement based on visual aesthetics. In Proceedings of the 18th ACM international conference on Multimedia, pages 271-280. ACM, 2010.
+[4] Matthew Brand and Patrick Pletscher. A conditional random field for automatic photo editing. In 2008 IEEE Conference on Computer Vision and Pattern Recognition, pages 1-7. IEEE, 2008.
+[5] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. IEEE, 2009.
+[6] Yubin Deng, Chen Change Loy, and Xiaoou Tang. Image aesthetic assessment: An experimental survey. IEEE Signal Processing Magazine, 34(4):80-106, 2017.
+[7] Brian Dolhansky and Cristian Canton Ferrer. Eye in-painting with exemplar generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7902-7911, 2018.
+[8] R Edler, M Abd Rahim, D Wertheim, and D Greenhill. The use of facial anthropometrics in aesthetic assessment. The Cleft palate-craniofacial journal, 47(1):48-57, 2010.
+[9] Wen Gao, Bo Cao, Shiguang Shan, Xilin Chen, Delong Zhou, Xiaohua Zhang, and Debin Zhao. The CAS-PEAL large-scale Chinese face database and baseline evaluations. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 38(1):149-161, 2008.
+[10] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
+[11] Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. Globally and locally consistent image completion. ACM Transactions on Graphics (ToG), 36(4):107, 2017.
+[12] Wei Jiang, Alexander C Loui, and Cathleen Daniels Cerosaletti. Automatic aesthetic value assessment in photographic images. In 2010 IEEE International Conference on Multimedia and Expo, pages 920-925. IEEE, 2010.
+[13] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+[14] Congcong Li and Tsuhan Chen. Aesthetic visual quality assessment of paintings. IEEE Journal of selected topics in Signal Processing, 3(2):236-252, 2009.
+
+[15] Congcong Li, Andrew Gallagher, Alexander C Loui, and Tsuhan Chen. Aesthetic quality assessment of consumer photos with faces. In 2010 IEEE International Conference on Image Processing, pages 3221-3224. IEEE, 2010.
+[16] Yijun Li, Sifei Liu, Jimei Yang, and Ming-Hsuan Yang. Generative face completion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3911-3919, 2017.
+[17] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE international conference on computer vision, pages 3730-3738, 2015.
+[18] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
+[19] Naila Murray, Luca Marchesotti, and Florent Perronnin. Ava: A large-scale database for aesthetic visual analysis. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 2408-2415. IEEE, 2012.
+[20] L Neumann, M Sbert, B Gooch, W Purgathofer, et al. Defining computational aesthetics. Computational aesthetics in graphics, visualization and imaging, pages 13-18, 2005.
+[21] Ira D Papel. Quantitative facial aesthetic evaluation with computer imaging. Facial Plastic Surgery, 7(01):35-44, 1990.
+[22] Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2536-2544, 2016.
+[23] Patrick Pérez, Michel Gangnet, and Andrew Blake. Poisson image editing. ACM Transactions on graphics (TOG), 22(3):313-318, 2003.
+[24] Katharina Schwarz, Patrick Wieschollek, and Hendrik PA Lensch. Will people like your image? learning the aesthetic space. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 2048-2057. IEEE, 2018.
+[25] Zhou Wang, Alan C Bovik, Hamid R Sheikh, Eero P Simoncelli, et al. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600-612, 2004.
+[26] Fei Yang, Jue Wang, Eli Shechtman, Lubomir Bourdev, and Dimitri Metaxas. Expression flow for 3d-aware face component transfer. ACM Transactions on Graphics (TOG), 30(4):60, 2011.
+[27] Jimei Yang, Brian Price, Scott Cohen, Honglak Lee, and Ming-Hsuan Yang. Object contour detection with a fully convolutional encoder-decoder network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 193-202, 2016.
+[28] Raymond A Yeh, Chen Chen, Teck Yian Lim, Alexander G Schwing, Mark Hasegawa-Johnson, and Minh N Do. Semantic image inpainting with deep generative models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5485-5493, 2017.
+[29] Seunghwan Yoo and Rae-Hong Park. Red-eye detection and correction using inpainting in digital photographs. IEEE Transactions on Consumer Electronics, 55(3):1006-1014, 2009.
\ No newline at end of file
diff --git a/assessingeyeaestheticsforautomaticmultireferenceeyeinpainting/images.zip b/assessingeyeaestheticsforautomaticmultireferenceeyeinpainting/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..d55514be64c722ffb87dc12a9748b5f0c3057093
--- /dev/null
+++ b/assessingeyeaestheticsforautomaticmultireferenceeyeinpainting/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d6561220ba8fff214fd0fa0815414e85611bc52dffde5fe86037a7aac954710c
+size 616692
diff --git a/assessingeyeaestheticsforautomaticmultireferenceeyeinpainting/layout.json b/assessingeyeaestheticsforautomaticmultireferenceeyeinpainting/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..81689de8bd467c9d785d6dcdfc90ba7ed7246a0d
--- /dev/null
+++ b/assessingeyeaestheticsforautomaticmultireferenceeyeinpainting/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e192d50582fb13baa447ce041d82e46587d59eaf08a00b870705a0476451e39f
+size 317506
diff --git a/assessingimagequalityissuesforrealworldproblems/9aa6fcca-99f2-43aa-ab1a-388cf12e03d1_content_list.json b/assessingimagequalityissuesforrealworldproblems/9aa6fcca-99f2-43aa-ab1a-388cf12e03d1_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..d9d890495f0783335764500721e6726b1725956a
--- /dev/null
+++ b/assessingimagequalityissuesforrealworldproblems/9aa6fcca-99f2-43aa-ab1a-388cf12e03d1_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f95f98debd001f9557cf6d6487b9705c0d6540a5558d8e4c357df1049e7e3106
+size 80337
diff --git a/assessingimagequalityissuesforrealworldproblems/9aa6fcca-99f2-43aa-ab1a-388cf12e03d1_model.json b/assessingimagequalityissuesforrealworldproblems/9aa6fcca-99f2-43aa-ab1a-388cf12e03d1_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..106e63d98d1e5d329c2db8960afd3ab617d44b55
--- /dev/null
+++ b/assessingimagequalityissuesforrealworldproblems/9aa6fcca-99f2-43aa-ab1a-388cf12e03d1_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:18a317bf566f8b6817f3d7b418963dd169b6b812dbb36e782747be1d9edda844
+size 102735
diff --git a/assessingimagequalityissuesforrealworldproblems/9aa6fcca-99f2-43aa-ab1a-388cf12e03d1_origin.pdf b/assessingimagequalityissuesforrealworldproblems/9aa6fcca-99f2-43aa-ab1a-388cf12e03d1_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d3ebb152ac2d3167b9a61f92d0d90fa386e536dc
--- /dev/null
+++ b/assessingimagequalityissuesforrealworldproblems/9aa6fcca-99f2-43aa-ab1a-388cf12e03d1_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:70e73950b6fb9200e2c1d221da8d6d8b09ee485187b49c769f30b58b68a78dbb
+size 2502758
diff --git a/assessingimagequalityissuesforrealworldproblems/full.md b/assessingimagequalityissuesforrealworldproblems/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e27258a8121a0cef4815b779d48c963b1ca8e152
--- /dev/null
+++ b/assessingimagequalityissuesforrealworldproblems/full.md
@@ -0,0 +1,320 @@
+# Assessing Image Quality Issues for Real-World Problems
+
+Tai-Yin Chiu, Yinan Zhao, Danna Gurari
+University of Texas at Austin
+
+# Abstract
+
+We introduce a new large-scale dataset that links the assessment of image quality issues to two practical vision tasks: image captioning and visual question answering. First, we identify for 39,181 images taken by people who are blind whether each is sufficient quality to recognize the content as well as what quality flaws are observed from six options. These labels serve as a critical foundation for us to make the following contributions: (1) a new problem and algorithms for deciding whether an image is insufficient quality to recognize the content and so not captionable, (2) a new problem and algorithms for deciding which of six quality flaws an image contains, (3) a new problem and algorithms for deciding whether a visual question is unanswerable due to unrecognizable content versus the content of interest being missing from the field of view, and (4) a novel application of more efficiently creating a large-scale image captioning dataset by automatically deciding whether an image is insufficient quality and so should not be captioned. We publicly-share our datasets and code to facilitate future extensions of this work: https://vizwiz.org.
+
+# 1. Introduction
+
+Low-quality images are an inevitable, intermittent reality for many real-world, computer vision applications. At one extreme, they can be life threatening, such as when they impede the ability of autonomous vehicles [60] and traffic controllers [30] to safely navigate environments. In other cases, they can serve as irritants when they convey a negative impression to the viewing audiences, such as on social media or dating websites.
+
+Despite that low-quality images often emerge in practical settings, there has largely been a disconnect between research aimed at recognizing quality issues and research aimed at performing downstream vision tasks. For researchers focused on uncovering what quality issues are observed in an image, their progress largely has grown from artificially-constructed settings where they train and evaluate algorithms on publicly-available datasets that were constructed by distorting high quality images to simulate
+
+quality issues (e.g., using JPEG compression or Gaussian blur) [41, 48, 12, 21, 37, 36, 25, 31]. Yet, these contrived environments typically lack sufficient sophistication to capture the plethora of factors that contribute to quality issues in natural settings (e.g., camera hardware, lighting, camera shake, scene obstructions). Moreover, the quality issues are detangled from whether they relate to the ability to complete specific vision tasks. As for researchers focusing on specific tasks, much of their progress has developed from environments that lack low-quality images. That is because the creators of popular publicly-available datasets that support the development of such algorithms typically included a step to filter out any candidate images that are deemed insufficient quality for the final dataset [11, 14, 23, 9, 53, 28, 59]. Consequently, such datasets lack data that would enable training algorithms to identify when images are of insufficient quality to complete a given task.
+
+Motivated by the aim to tie the assessment of image quality to practical vision tasks, we introduce a new image quality assessment (IQA) dataset that emerges from a real use case. Specifically, our dataset is built around 39,181 images that were taken by people who are blind who were authentically trying to learn about images they took using the VizWiz mobile phone application [5]. Of these images, $17\%$ were submitted to collect image captions from remote humans. The remaining $83\%$ were submitted with a question to collect answers to their visual questions. As discussed in prior work [7, 17], users submitted these images and visual questions (i.e., images with questions) to overcome real visual challenges that they faced in their daily lives. They typically waited nearly two minutes to receive a response from the remote humans [5]. For each image, we asked crowdworkers to either supply a caption describing it or clarify that the quality issues are too severe for them to be able to create a caption. We call this task the unrecognizability classification task. We also ask crowdworkers to label each image with quality flaws that are more traditionally discussed in the literature [7, 12]: blur, overexposure (bright), underexposure (dark), improper framing, obstructions, and rotated views. We call this task the quality flaws classification task. Examples of resulting labeled images in our dataset are shown in Figure 1. Altogether, we call this
+
+dataset VizWiz-QualityIssues.
+
+Figure 1: We introduce a new image quality assessment dataset which we call VizWiz-QualityIssues. Shown are examples of the taxonomy of labels, which ranges from no quality issues to six quality flaws to unrecognizable/uncaptionable images. Images can manifest different combinations of the above labels; for instance, the unrecognizable image is also labeled as suffering from image blur and poor framing. Example panels and their captions: No Flaws (NON): "A dry golden leaf on a brownish red background."; Blur (BLR): "A gold thermostat or compass attached to a piece of wood."; Bright (BRT): "A black Acer brand laptop with monitor off."; Dark (DRK): "A person holding a Chap Stick brand lip balm."; Framing (FRM): "A hand holding a blue Kraft Foods jar."; Obscured (OBS): "A coin placed on a left hand or palm."; Rotation (ROT): "A green bag of potato chips from the store."; Unrecognizable / Not Captionable: "Quality issues are too severe to recognize visual content."
+
+We then demonstrate the value of this new dataset for several new purposes. First, we introduce a novel problem and algorithms for predicting whether an image is sufficient quality to be captioned (Section 4). This can be of immediate use to blind photographers, who otherwise must wait nearly two minutes to learn their image is unsuitable quality for image captioning. We next conduct experiments to demonstrate an additional benefit of this prediction system for creating large-scale image captioning datasets with less wasted human effort (Section 4.3). Finally, we introduce a novel problem and algorithms that inform a user who submits a novel visual question whether it can be answered, cannot be answered because the image content is unrecognizable, or cannot be answered because the image content is missing from the image (Section 5). This too can be of immediate benefit to blind photographers by enabling them to both fail fast and gain valuable insight into how to update the visual question to make it become answerable.
+
+More generally, our work underscores the importance of defining quality within the context of specific tasks. We expect our work can generalize to related vision tasks such as object recognition, scene classification, and video analysis.
+
+# 2. Related Work
+
+Image Quality Datasets. A number of image quality datasets exist to support the development of image quality assessment (IQA) algorithms, including LIVE [41, 48], LIVE MD [21], TID2008 [37], TID2013 [36], CSIQ [25], Waterloo Exploration [31], and ESPL-LIVE [24]. A commonality across most such datasets is that they originate from high quality images that were artificially distorted to introduce image quality issues. For example, LIVE [12] consists of 779 distorted images, which are derived by applying five different types of distortions at numerous distortion levels to 29 high-quality images.
+
+Yet, image quality issues that arise in real-world settings exhibit different appearances from those found by simulating distortions to high-quality images. Accordingly, our work complements recent efforts to create large-scale datasets that flag quality issues in natural images [12]. However, our dataset is considerably larger, offering approximately a 19-fold increase in the number of naturally distorted images; i.e., 20,244 in our dataset versus 1,162 images for [12]. In addition, while [12] assigns a single quality score to each image to capture any of a wide array of image quality issues, our work instead focuses on recognizing the presence of each distinct quality issue and assessing the impact of the quality issues on the real application needs of real users.
+
+Image Quality Assessment. Our work also relates to the literature that introduces methods for assessing the quality of images. One body of work assumes that developers have access to a high-quality version of each novel image, whether partially or completely. For example, distorted images are evaluated against original, intact images for full-reference IQA algorithms [48, 50, 57, 41, 25, 6, 39] and distorted images are evaluated against partial information about the original, intact images for reduced-reference IQA algorithms [49, 26, 47, 42, 38, 32, 51]. Since our natural setting inherently limits us from having access to original, intact images, our work instead aligns with the second body of work which is built around the assumption that no original, reference image is available; i.e., no-reference IQA (NR-IQA). NR-IQA algorithms instead predict a quality score for each novel image [33, 22, 47, 56, 55, 29, 43, 6, 44]. While many algorithms have been introduced for this purpose, our analysis of five popular NR-IQA models (i.e., BRISQUE [33], NIQE [34], CNN-NRIQA [22], DNN-NRIQA [6], and NIMA [44]) demonstrates that they are inadequate for our novel task of assessing which images
+
+are unrecognizable and so cannot be captioned (discussed in Section 4). Accordingly, we introduce new algorithms for this purpose, and demonstrate their advantage.
+
+Efficient Creation of Large-Scale Vision Datasets. Progress in the vision community has largely been measured and accelerated by the creation of large-scale vision datasets over the past 20 years. Typically, researchers have scraped images for such datasets from online image search databases [11, 14, 23, 9, 53, 28, 59]. In doing so, they typically curate a large collection of high-quality images, since such images first passed uploaders' assessment that they are of sufficient quality to be shared publicly. In contrast, when employing images captured "in the wild," it can be a costly, time-consuming process to identify and remove images with unrecognizable content. Accordingly, we quantify the cost of this problem, introduce a novel problem and algorithms for deciphering when image content would be unrecognizable to a human and so should be discarded, and demonstrate the benefit of such solutions for more efficiently creating a large-scale image captioning dataset.
+
+Assistive Technology for Blind Photographers. Our work relates to the literature about technology for assisting people who are blind to take high-quality pictures [1, 5, 20, 45, 58]. Already, existing solutions can assist photographers in improving the image focus [1], lighting [5], and composition [20, 45, 58]. Additionally, algorithms can inform photographers whether their questions about their images can be answered [17] and why crowds struggle to provide answers [4, 15]. Complementing prior work, we introduce a suite of new AI problems and solutions for offering more fine-grained guidance when alerting blind photographers about what image quality issue(s) are observed. Specifically, we introduce novel problems of (1) recognizing whether image content can be recognized (and so captioned) and (2) deciphering when a question about an image can be answered, cannot be answered because the image content is unrecognizable, or cannot be answered because the content of interest is missing from the image.
+
+# 3. VizWiz-QualityIssues
+
+We now describe our creation of a large-scale, human-labeled dataset to support the development of algorithms that can assess the quality of images. We focus on a real use case that is prone to image quality issues. Specifically, we build off of 39,181 publicly-available images [16, 17] that originate from blind photographers who each submitted an image with, optionally, a question to the VizWiz mobile phone application [5] in order to receive descriptions of the image from remote humans. Since blind photographers are unable to verify the quality of the images they take, the dataset exemplifies the large diversity of quality issues that
+
+occur naturally in practice. We describe below how we create and analyze our new dataset.
+
+# 3.1. Creation of the Dataset
+
+We scoped our dataset around quality issues that impede people who are blind in their daily lives. Specifically, a clear, resounding message is that people who are blind need assistance in taking images that are sufficiently high-quality that sighted people are able to either describe them or answer questions about them [5, 7].
+
+Quality Issues Taxonomy. One quality issue label we assess is whether image content is sufficiently recognizable for sighted people to caption the images. We also label numerous quality flaws to situate our work in relation to other papers that similarly focus on assessing image quality issues [7, 12]. Specifically, we include the following categories: blur (is the image blurry?), bright (is the image too bright?), dark (is the image too dark?), obstruction (is the scene obscured by the photographer's finger over the lens, or another unintended object?), framing (are parts of necessary items missing from the image?), rotation (does the image need to be rotated for proper viewing?), other, and no issues (there are no quality issues in the image).
+
+Image Labeling Task. To efficiently label all images, we designed our task to run on the crowdsourcing platform Amazon Mechanical Turk. The task interface showed an image on the left half and the instructions with user-entry fields on the right half. First, the crowdworker was instructed to either describe the image in one sentence or click a button to flag the image as being insufficient quality to recognize the content (and so not captionable). When the button was clicked, the image description was automatically populated with the following text: "Quality issues are too severe to recognize the visual content." Next, the crowdworker was instructed to select all image quality flaws from a pre-defined list that are observed. Shown were the six reasons identified above, as well as Other (OTH) linked to a free-entry text-box so other flaws could be described and None (NON) so crowd workers could specify the image had no quality flaws. The interface enabled workers to adjust their view of the image, using the toolbar to zoom in, zoom out, pan around, or rotate the image if needed. To encourage higher quality results, the interface prevented a user from completing the task until a complete sentence was provided and at least one option from the "image quality flaw" options was chosen. A screen shot of the user interface is shown in the Supplementary Materials.
+
+Crowdsourcing Labels. To support the collection of high quality labels, we only accepted crowdworkers who previously had completed over 500 HITs with at least a $95\%$ acceptance rate. Also, we collected redundant results.
+
+Specifically, we recruited five crowdworkers to label each image. We deemed a label as valid only if at least two crowdworkers chose that label.
+
+# 3.2. Characterization of the Dataset
+
+Prevalence of Quality Issues. We first examine the frequency at which images taken by people who are blind suffer from the various quality issues to identify the (un)common reasons. To do so, we tally how often unrecognizable images and each quality-flaw arise.
+
+Roughly half of the images suffer from image quality flaws (i.e., $1 - P(\mathrm{NON}) = 51.6\%$ ). We observe that the most common reasons are image blur (i.e., $41.0\%$ ) and inadequate framing (i.e., $55.6\%$ ). In contrast, only a small portion of the images are labeled as too bright (i.e., $5.3\%$ ), too dark ( $5.6\%$ ), having objects obscuring the scene ( $3.6\%$ ), needing to be rotated for successful viewing ( $17.5\%$ ), or other reasons ( $0.8\%$ ). The statistics reveal the most promising directions for how to improve assistive photography tools to improve blind users' experiences. Specifically, the main functions should be focused on camera shake detection and object detection to mitigate the possibility of taking images with blur or framing flaws.
+
+We also observe that the image quality issues are so severe that image content is deemed unrecognizable for $14.8\%$ of the images. In absolute terms, this means that $\$3,829$ and 379 hours of human annotation were wasted on employing crowdworkers to caption images that contained unrecognizable content. $^{1}$ In other words, great savings can be achieved by automatically filtering such uncaptionable images such that they are not sent to crowdworkers. We explore this idea further in Section 4.3.
+
+Likelihood Image Has Unrecognizable Content Given its Quality-Flaw. We next examine the probability that an image's content is unrecognizable conditioned on each of the reasons for quality flaws. Results are shown in Figure 2.
+
+Almost all reasons led to percentages that are larger than the overall percentage of unrecognizable images, which is $14.8\%$ of all images. This demonstrates what we intuitively suspected, which is that images with quality flaws are more likely to have unrecognizable content. We observe that this trend is the strongest for images that suffer from obstructions (OBS) and inadequate lighting (BRT and DRK), with percentages just over $40\%$ .
+
+Interestingly, two categories have percentages that are smaller than the overall percentage of unrecognizable images, at $14.8\%$ of all images. First, images that are flagged as needing to be rotated for proper viewing (ROT) have only $8.3\%$ deemed unrecognizable. In retrospect, this seems understandable, as the content of images with a rotation flaw
+
+could still be recognized if viewers tilt their heads (or apply visual display tools to rotate the images). Second, images labeled with no flaws (NON) have only $3.9\%$ deemed unrecognizable. This tiny amount aligns with the concept that "unrecognizable" and "no flaws" are two conflicting ideas. Still, the fact the percentage is not $0\%$ highlights that humans can offer different perspectives. Put differently, the image quality assessment task can be subjective.
+
+Likelihood Image Has Each Quality-Flaw Given its Content is Unrecognizable. We next examine the probability that an image manifests each quality flaw given that its content is unrecognizable. Results are shown in Figure 2. Overall, our findings parallel those identified in the "Prevalence of Quality Issues" paragraph. For example, we
+
+
+Figure 2: Left: Percentage of images with quality flaws given unrecognizability. Right: Percentage of unrecognizable images given quality flaws.
+
+
+Figure 3: Interrelation of quality flaws. Values are scaled, with each multiplied by 100. The grid at the $i$ -th row and the $j$ -th column shows the value of $I(\text{flaw } i, \text{flaw } j)$ . The diagonal is suppressed for clarity.
+
+
+Figure 4: Distributions of image quality scores predicted by conventional NR-IQA systems [33, 34, 22, 6, 44] in our new VizWiz-QualityIssues dataset. The heavy overlap of the distributions of scores for recognizable and unrecognizable images reveals that none of the methods are able to distinguish recognizable images from unrecognizable images.
+
+again observe the most common reasons are blurry images $(71.0\%)$ and improper framing $(71.2\%)$ . Similarly, unrecognizable images are found to be associated less frequently with the other quality flaws.
+
+Relationship Between Quality Flaws in Images. Finally, we quantify the relationship between all possible pairs of quality flaws. In doing so, we were motivated to provide a measure that offers insight into causality and co-occurrence when comparing any pair of quality flaws, while avoiding measuring joint probabilities. To meet this aim, we introduce a new measure which we call interrelation index $I(A,B)$ , which is defined as follows:
+
+$$
+I(A, B) = \frac{P(B \mid A)}{P(B)} - \frac{P(B \mid \bar{A})}{P(B)}. \tag{1}
+$$
+
+More details about this measure and the motivation for it are provided in the Supplementary Materials. Briefly, larger positive $I(A,B)$ values indicate that $A$ and $B$ tend to co-occur with $A$ causing $B$ to happen more often. Results are shown in Figure 3.
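+
+Since $I(A,B)$ depends only on empirical conditional frequencies, it can be estimated directly from the per-image flaw labels. The short NumPy sketch below, using toy labels purely for illustration, shows the computation:
+
+```python
+import numpy as np
+
+def interrelation(flags_a, flags_b):
+    """Interrelation index I(A, B) from Eq. (1), estimated from boolean per-image
+    label vectors: (P(B|A) - P(B|not A)) / P(B)."""
+    a = np.asarray(flags_a, dtype=bool)
+    b = np.asarray(flags_b, dtype=bool)
+    return (b[a].mean() - b[~a].mean()) / b.mean()
+
+# Toy per-image flaw labels (1 = flaw present), only for illustration:
+blur  = [1, 1, 0, 0, 1, 0, 1, 0]
+frame = [1, 0, 0, 0, 1, 0, 1, 1]
+print(round(100 * interrelation(blur, frame), 1))  # scaled by 100 as in Figure 3
+```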
+
+We observe that almost all quality flaws tend to occur with one another, as shown with the positive values of $I$ . At first, we were surprised to observe that there is a relationship between BRT and DRK (i.e., $I(\mathrm{BRT},\mathrm{DRK}) = 73$ is greater than zero), since these flaws are seemingly incompatible concepts. However, from visual inspection of the data, we found some images indeed suffered from both lighting flaws. We exemplify this and other quality flaw correlations in the Supplementary Materials. From our findings, we also observe that "no flaws" does not co-occur with other quality flaws; i.e., the values in the grid are all negative for the row and column for NON. This finding aligns with our intuition that an image labeled with NON is less likely to have a quality flaw at the same time.
+
+# 4. Classifying Unrecognizable Images
+
+A widespread assumption when captioning images is that the image quality is good enough to recognize the image content. Yet, people who are blind cannot verify the quality of the images they take and it is known their images can be very poor in quality [5, 7, 17]. Accordingly, we
+
+now examine the benefit of our large-scale quality dataset for training algorithms to detect when images are unrecognizable and so not captionable.
+
+# 4.1. Motivation: Inadequate Existing Methods
+
+Before exploring novel algorithms, it is important to first check whether existing methods are suitable for our purposes. Accordingly, we check whether related NR-IQA systems can detect when images are unrecognizable. To do so, we apply five NR-IQA methods on the complete VizWiz-QualityIssues dataset: BRISQUE [33], NIQE [34], CNN-NRIQA [22], DNN-NRIQA [6], and NIMA [44]. The first two are popular conventional methods that rely on handcrafted features. The last three are based on neural networks and trained on IQA datasets mentioned in Section 2. For example, DNN-NRIQA-TID and DNN-NRIQA-LIVE in Figure 4 are trained on the TID dataset and LIVE dataset, respectively. Intuitively, if the algorithms are effective for this task, we would expect that the scores for recognizable images are distributed mostly in the high-score region, while the scores for unrecognizable images are distributed mostly in the low-score region.
+
+Results are shown in Figure 4. A key finding is that the distributions of scores for recognizable and unrecognizable images heavily overlap. That is, none of the methods can distinguish recognizable images from unrecognizable images in our dataset. This finding shows that existing methods trained on existing datasets (i.e., LIVE, TID, CSIQ) are unsuitable for our novel task on the VizWiz-QualityIssues dataset. This is possibly in part because quality issues resulting from artificial distortions, such as compression, Gaussian blur, and additive Gaussian noise, differ from natural distortions triggered by poor camera focus, lighting, framing, etc. This also may be because there is no one-to-one mapping between scores indicating overall image quality and our proposed task, since an image with a low quality score may still have recognizable content.
+
+# 4.2. Proposed Algorithm
+
+Having observed that existing IQA methods are inadequate for our problem, we now introduce models for our novel task of assessing whether an image is recognizable.
+
+Architecture. We use ResNet-152 [18] to extract image features, which are then processed by 2-dimensional global-pooling followed by two fully connected layers. The final layer is a single neuron with a sigmoid activation function. We train this algorithm using an Adam optimizer with the learning rate set to 0.001 for 8 epochs. We fix the ResNet weights pre-trained on ImageNet [9] and only learn the weights in the two fully connected layers.
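+
+A minimal PyTorch sketch of this classifier is shown below. The frozen ImageNet-pretrained ResNet-152 backbone, global pooling, two fully connected layers, single sigmoid output, and Adam with learning rate 0.001 follow the description above; the hidden width and the specific torchvision weight enum are assumptions.
+
+```python
+import torch
+import torch.nn as nn
+from torchvision import models
+
+class UnrecognizabilityClassifier(nn.Module):
+    """Frozen ResNet-152 features, 2-D global pooling, two fully connected layers,
+    and a single sigmoid neuron predicting whether an image is unrecognizable."""
+    def __init__(self, hidden=512):
+        super().__init__()
+        backbone = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
+        self.features = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool + fc
+        for p in self.features.parameters():
+            p.requires_grad = False                   # backbone stays frozen
+        self.pool = nn.AdaptiveAvgPool2d(1)           # 2-D global pooling
+        self.head = nn.Sequential(nn.Flatten(),
+                                  nn.Linear(2048, hidden), nn.ReLU(inplace=True),
+                                  nn.Linear(hidden, 1), nn.Sigmoid())
+
+    def forward(self, x):
+        return self.head(self.pool(self.features(x)))
+
+model = UnrecognizabilityClassifier()
+optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=0.001)
+criterion = nn.BCELoss()   # binary target: unrecognizable vs. recognizable
+```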
+
+Dataset Splits. For training and evaluation of our algorithm, we apply a $52.5\% / 37.5\% / 10\%$ split to our dataset to create the training, validation, and test splits.
+
+Baselines. We compare our algorithm to numerous baselines. Included is random guessing, which labels an image as unrecognizable with probability 0.148. We also analyze a linear SVM that predicts from scale-invariant feature transform (SIFT) features; intuitively, a low-quality image should have few or no key points. We also evaluate a linear SVM that predicts from histogram of oriented gradients (HOG) features.
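+
+As an illustration of these hand-crafted baselines, a HOG feature extractor feeding a linear SVM could be set up as in the sketch below; the HOG parameters, the image size and the helper names are illustrative choices rather than the configuration used in the paper.
+
+```python
+import numpy as np
+from skimage.feature import hog
+from skimage.transform import resize
+from sklearn.svm import LinearSVC
+
+def hog_features(images, size=(256, 256)):
+    """One HOG descriptor per image (simple grayscale conversion, fixed resize)."""
+    feats = []
+    for img in images:
+        gray = img.mean(axis=-1) if img.ndim == 3 else img
+        feats.append(hog(resize(gray, size), pixels_per_cell=(32, 32), cells_per_block=(2, 2)))
+    return np.stack(feats)
+
+# Hypothetical usage: `train_imgs` are numpy arrays, `train_labels` are 0/1 unrecognizability flags.
+# clf = LinearSVC().fit(hog_features(train_imgs), train_labels)
+# preds = clf.predict(hog_features(test_imgs))
+```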
+
+Evaluation Metrics. We evaluate each method using average precision, recall, and f1 scores. Accuracy is excluded because the distributions of unrecognizability are highly biased to "false" and such unbalanced data suffer from the accuracy paradox.
+
+Results. Results are shown in Table 1. We observe that both SIFT and HOG are much stronger baselines than random guessing and get high scores on precision, especially 87.2 for SIFT. However, they both get low scores on recall. This means that SIFT and HOG are good at capturing a subset of unrecognizable images but still miss many others. On the other hand, the ResNet model gets much higher recall scores while maintaining decent average precision scores, implying that it is more effective at learning the characteristics of unrecognizable images.$^{3}$ This is exciting since such
+
+2Due to space constraints, we demonstrate the effectiveness of this architecture for assessing the quality flaws in the Supplementary Materials. The primary difference for that architecture is that we replace ResNet-152 with XceptionNet [8], use three fully connected layers, and a final layer of eight neurons with eight sigmoid functions.
+
+3Again, due to space constraints, results showing prediction performance for quality flaw classification is in the Supplementary Materials.
+
+| Method | Avg. precision | Recall | F1 |
+| --- | --- | --- | --- |
+| ResNet-152 | 80.0 | 75.1 | 71.2 |
+| Random guessing | 16.6 | 14.6 | 15.5 |
+| SIFT + linear SVM | 87.2 | 42.3 | 56.9 |
+| HOG + linear SVM | 56.4 | 41.2 | 47.6 |
+
+Table 1: Performance of algorithms in assessing whether image content can be recognized (and so captioned).
+
+an algorithm can be of immediate use to blind photographers, who otherwise must wait nearly two minutes to learn that their image is of unsuitable quality for image captioning.
+
+# 4.3. Application: Efficient Dataset Creation
+
+We now examine another potential benefit of our algorithm in helping to create a large-scale training dataset.
+
+To support this effort, we divide the dataset into three sets. One set is used to train our image unrecognizability algorithm. A second set is used to train our image captioning algorithms, which we call the captioning-training-set. The third set is used to evaluate our image captioning algorithms, which we call the captioning-evaluation-set.
+
+We use our method to identify which images in the captioning-training-set to use for training image captioning algorithms. In particular, the $N$ images flagged as recognizable are included and the remaining images are excluded. We compare this method to three baselines, specifically training on: all images in the captioning-training-set; a random sample of $N$ images from the captioning-training-set; and a perfect sample of $N$ images from the captioning-training-set that are known to be recognizable.
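+
+A minimal sketch of this filtering step, assuming per-image unrecognizability probabilities from the classifier of Section 4.2 and a 0.5 decision threshold (an assumption here):
+
+```python
+def select_recognizable(image_ids, unrecognizability_scores, threshold=0.5):
+    """Keep only images predicted to be recognizable; only these are retained
+    in the captioning-training-set (the remaining images are excluded)."""
+    return [i for i, s in zip(image_ids, unrecognizability_scores) if s < threshold]
+
+# Example: the second image is flagged as unrecognizable and dropped.
+print(select_recognizable(["a", "b", "c"], [0.1, 0.9, 0.3]))  # ['a', 'c']
+```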
+
+We evaluate two state-of-the-art image captioning algorithms, trained independently on each training set, with respect to eight evaluation metrics: BLEU-1 through BLEU-4 [35], METEOR [10], ROUGE-L [27], CIDEr-D [46], and SPICE [2].
+
+Results are shown in Table 2. Our method performs comparably to training the algorithms on all images as well as on the perfect set. In contrast, our method yields improved results over the random sample. Altogether, these findings offer promising evidence that our prediction system successfully retains meaningful images while removing images that are not informative for the captioning task (i.e., unrecognizable ones). This reveals that a benefit of using the recognizability prediction system is to save time and money when crowdsourcing captions (by first removing unrecognizable images), without diminishing the performance of downstream trained image captioning algorithms.
+
+# 5. Recognizing Unanswerable Visual Questions
+
+The visual question "answerability" problem is to decide whether a visual question can be answered [17]. Yet, as exemplified in Figure 5, visual questions can be unanswerable because the image is unrecognizable or because the answer to the question is missing in a recognizable image. Towards enabling more fine-grained guidance to photographers regarding how to modify the visual question so it is answerable, we move beyond predicting whether a visual question is unanswerable [17] and introduce a novel problem of predicting why a visual question is unanswerable.
+
+| Model | Training set | B@1 | B@2 | B@3 | B@4 | METEOR | ROUGE-L | CIDEr-D | SPICE |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| AoANet [19] | full training set | 63.3 | 44.3 | 29.9 | 19.7 | 18.0 | 44.4 | 43.6 | 11.2 |
+| AoANet [19] | perfect flag | 63.3 | 43.8 | 29.5 | 19.9 | 18.1 | 44.2 | 43.6 | 11.5 |
+| AoANet [19] | predicted flag | 63.2 | 44.0 | 29.5 | 19.8 | 18.1 | 44.2 | 42.9 | 11.5 |
+| AoANet [19] | random sample | 62.5 | 43.3 | 28.8 | 18.9 | 18.0 | 44.1 | 41.9 | 11.4 |
+| SGAE [54] | full training set | 62.8 | 43.3 | 28.6 | 18.8 | 17.3 | 44.0 | 32.4 | 10.4 |
+| SGAE [54] | perfect flag | 63.0 | 43.1 | 28.6 | 18.9 | 17.2 | 43.9 | 32.5 | 10.3 |
+| SGAE [54] | predicted flag | 63.1 | 43.1 | 28.4 | 18.7 | 17.2 | 44.0 | 32.4 | 10.4 |
+| SGAE [54] | random sample | 62.4 | 42.7 | 27.9 | 18.2 | 17.1 | 43.7 | 30.4 | 10.4 |
+
+Table 2: Performance of two image captioning algorithms with respect to eight metrics, trained on the full captioning-training-set, training images annotated to be recognizable (perfect flag), training images predicted to be recognizable (predicted flag), and a subset randomly sampled from the captioning-training-set. (B@$n$ = BLEU-$n$.)
+
+
+Figure 5: Examples of visual questions that are unanswerable for two reasons. The left two examples have unrecognizable images while the right two examples have recognizable images but the content of interest is missing from the field of view. Our proposed algorithm correctly predicts why visual questions are unanswerable for these examples.
+
+
+
+# 5.1. Motivation
+
+We extend the VizWiz-VQA dataset [17], which labels each image-question pair as answerable or unanswerable. We inspect how answerability relates to recognizability and each quality flaw. For convenience, we use the following notation: $A$: answerable, $\bar{A}$: unanswerable, $R$: recognizable, $\bar{R}$: unrecognizable, $Q$: quality issues, and $P(\cdot)$: probability function. Results are shown in Figure 6. We observe that for most quality flaws $Q$, $P(\bar{A} | Q)$ is larger than $P(\bar{A})$, and $P(\bar{A}) = 28.7\%$ increases to $P(\bar{A} | \bar{R}) = 58.7\%$. Additionally, the probability $P(\bar{R}) = 14.8\%$ increases to $P(\bar{R} | \bar{A}) = 30.2\%$ when questions are known to be unanswerable. Observing that a large reason for unanswerable questions is that images are unrecognizable, we are motivated to equip VQA systems with a function that is able to clarify why a question is unanswerable.
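+
+The conditional fractions reported above (and plotted in Figure 6) can be computed directly from binary labels; a small sketch, assuming boolean arrays over all visual questions:
+
+```python
+import numpy as np
+
+def conditional_rates(unanswerable, unrecognizable):
+    """unanswerable[i], unrecognizable[i]: boolean labels for visual question i."""
+    p_unans = unanswerable.mean()                              # P(A-bar)
+    p_unrec = unrecognizable.mean()                            # P(R-bar)
+    p_unans_given_unrec = unanswerable[unrecognizable].mean()  # P(A-bar | R-bar)
+    p_unrec_given_unans = unrecognizable[unanswerable].mean()  # P(R-bar | A-bar)
+    return p_unans, p_unrec, p_unans_given_unrec, p_unrec_given_unans
+
+# With the statistics reported above, these evaluate to roughly
+# 0.287, 0.148, 0.587 and 0.302, respectively.
+```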
+
+# 5.2. Proposed Algorithm
+
+Algorithm. Our algorithm extends the Up-Down VQA model [3]. It takes as input encoded image features and a
+
+
+
+
+Figure 6: Top: Fractions of unanswerable questions conditioned on unrecognizability or a quality flaw. Bottom: Fractions of quality issues and unrecognizable images given answerability. Values are multiplied by 100.
+
+paired question. Image features can be grid-level features extracted by ResNet-152 [18] as well as object-level features extracted by Faster R-CNN [40] or Detectron [13] or Detectron2 [52]. The input question is first encoded by a GRU cell. Then, a top-down attention module computes a weighted image feature from the encoded question representation and the input image features. The image and question features are coupled by element-wise multiplication. This coupled feature is processed by the prediction module to predict answerability and recognizability. We employ two different activation functions at the end of the model to make the final prediction. The first one is a softmax that predicts three mutually exclusive classes: answerable, unrecognizable, and insufficient content information (the answer cannot be found in the image). The
+
+| Method | AP (Unans) | Rec (Unans) | F1 (Unans) | AP (Unrec given Unans) | Rec (Unrec given Unans) | F1 (Unrec given Unans) |
+| --- | --- | --- | --- | --- | --- | --- |
+| [17] | 71.7 | - | 64.8 | - | - | - |
+| Rand guess | - | - | - | 31.1* | 14.8 | 20.0 |
+| SIFT | - | - | - | 94.9* | 45.3 | 61.3 |
+| HOG | - | - | - | 73.1* | 44.9 | 55.7 |
+| TD+soft | 72.6 | 77.3 | 67.0 | 82.2 | 79.3 | 75.0 |
+| TD+sigm | 73.6 | 71.2 | 68.0 | 86.6 | 79.3 | 78.6 |
+| BU+sigm | 73.0 | 66.6 | 66.7 | 87.4 | 73.7 | 78.7 |
+| TD+BU+sigm | 74.0 | 82.3 | 67.9 | 87.7 | 79.3 | 79.7 |
+| sigm w/o att. | 67.7 | 66.1 | 64.2 | 86.7 | 66.7 | 74.2 |
+
+TD: top-down attention. BU: bottom-up attention. soft: softmax. sigm: sigmoid. att: attention. AP: average precision. Rec: recall. Unrec: Unrecognizable. Unans: Unanswerable.
+*: Precision is calculated, since true or false is predicted instead of a probability.
+
+Table 3: Performance of predicting why a visual question is unanswerable: because the image is unrecognizable versus because the content of interest is missing from the field of view. [17] only predicts answerability and serves as the baseline for unanswerability prediction. Random guessing, SIFT, and HOG only predict recognizability and serve as the baselines for unrecognizability prediction.
+
+second activation function is two independent sigmoids, one for answerability and the other for recognizability. We train the network using an Adam optimizer with a learning rate of 0.001, updating only the layers after feature extraction.
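+
+A condensed PyTorch sketch of this prediction head follows. The feature dimensions, the 300-dimensional word embeddings, the single-layer GRU, and the attention MLP sizes are illustrative assumptions rather than the exact configuration of the Up-Down model [3].
+
+```python
+import torch
+import torch.nn as nn
+
+class AnswerabilityRecognizabilityHead(nn.Module):
+    """Question GRU + top-down attention over image features, followed by
+    either a 3-way softmax or two independent sigmoid outputs."""
+    def __init__(self, img_dim=2048, q_dim=512, hidden=512, use_sigmoid=True):
+        super().__init__()
+        self.q_gru = nn.GRU(input_size=300, hidden_size=q_dim, batch_first=True)
+        self.att = nn.Sequential(nn.Linear(img_dim + q_dim, hidden), nn.ReLU(),
+                                 nn.Linear(hidden, 1))
+        self.img_proj = nn.Linear(img_dim, hidden)
+        self.q_proj = nn.Linear(q_dim, hidden)
+        self.use_sigmoid = use_sigmoid
+        # Two sigmoid outputs (answerable, recognizable) or three softmax classes.
+        self.out = nn.Linear(hidden, 2 if use_sigmoid else 3)
+
+    def forward(self, img_feats, q_embeds):
+        # img_feats: (B, R, img_dim) grid or object regions; q_embeds: (B, T, 300)
+        _, q = self.q_gru(q_embeds)                  # (1, B, q_dim)
+        q = q.squeeze(0)                             # (B, q_dim)
+        q_tiled = q.unsqueeze(1).expand(-1, img_feats.size(1), -1)
+        w = torch.softmax(self.att(torch.cat([img_feats, q_tiled], dim=-1)), dim=1)
+        v = (w * img_feats).sum(dim=1)               # attended image feature
+        fused = self.img_proj(v) * self.q_proj(q)    # element-wise coupling
+        logits = self.out(fused)
+        return torch.sigmoid(logits) if self.use_sigmoid else torch.softmax(logits, dim=-1)
+```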
+
+Dataset Splits. We split the VizWiz dataset into training/validation/test sets according to a $70\% / 20\% / 10\%$ ratio.
+
+Evaluation Metrics. We evaluate performance using average precision, precision, recall, and f1 scores, for which a simple threshold of 0.5 is used to binarize probability values. For inter-model comparisons, we also report the precision-recall curve for each variant.
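+
+A small sketch of this evaluation, assuming scikit-learn and per-label ground-truth and probability arrays:
+
+```python
+import numpy as np
+from sklearn.metrics import (average_precision_score, precision_score,
+                             recall_score, f1_score)
+
+def evaluate(y_true, y_prob, threshold=0.5):
+    y_true = np.asarray(y_true)
+    y_prob = np.asarray(y_prob)
+    y_pred = (y_prob >= threshold).astype(int)  # simple 0.5 threshold
+    return {
+        "AP": average_precision_score(y_true, y_prob),
+        "precision": precision_score(y_true, y_pred),
+        "recall": recall_score(y_true, y_pred),
+        "f1": f1_score(y_true, y_pred),
+    }
+```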
+
+Baselines. For comparison, we consider a number of baselines. One is the original model for predicting whether a visual question is answerable, which also employs a top-down attention model [17]. We also evaluate the random guessing, SIFT, and HOG baselines used to evaluate the recognizability algorithms in the previous section.
+
+Results. Results are shown in Table 3 and Figure 7. Our models perform comparably to the answerability baseline [17]. This is exciting because it shows that jointly learning to predict answerability with recognizability does not degrade the performance; i.e., the average precision scores from TD+softmax and TD+sigmoid models are better than
+
+
+Figure 7: Precision-recall curves for five algorithms predicting unrecognizability when questions are unanswerable.
+
+those of the baseline [17] (72.6, 73.6 > 71.7), as are the F1 scores (67.0, 68.0 > 64.8).
+
+Our results also highlight the importance of jointly learning to predict answerability and recognizability (i.e., rows 5-9) over relying on more basic baselines (i.e., rows 2-4). As shown in Table 3, low recall values imply that SIFT and HOG fail to capture many unrecognizable images, while our models learn image features and excel in recall and f1 scores.
+
+Next, we compare the results from TD+softmax and TD+sigmoid. We observe that they are comparable in unanswerability prediction, with comparable average precision and F1 scores. For unrecognizability prediction, TD+softmax is a bit weaker than TD+sigmoid due to slightly lower average precision and F1 scores. One reason for this may be the manual assignment of unrecognizability to false when answerability is true. Originally, $14.8\%$ of images are unrecognizable, but after this assignment, the proportion drops to $8.7\%$. Learning from more highly biased data is a harder task, which could in part explain the weaker performance of the TD+softmax model.
+
+# 6. Conclusions
+
+We introduce a new image quality assessment dataset that emerges from an authentic use case where people who are blind struggle to capture high-quality images towards learning about their visual surroundings. We demonstrate the potential of this dataset to encourage the development of new algorithms that can support real users trying to obtain image captions and answers to their visual questions. The dataset and all code are publicly available at https://vizwiz.org.
+
+Acknowledgements. We gratefully acknowledge funding support from the National Science Foundation (IIS-1755593), Microsoft, and Amazon. We thank Nilavra Bhattacharya and the crowdworkers for their valuable contributions to creating the new dataset.
+
+# References
+
+[1] http://www.taptapseeapp.com/. 3
+[2] Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. Spice: Semantic propositional image caption evaluation. In European Conference on Computer Vision, pages 382-398. Springer, 2016. 6
+[3] Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6077-6086, 2018. 7
+[4] Nilavra Bhattacharya, Qing Li, and Danna Gurari. Why does a visual question have different answers? In Proceedings of the IEEE International Conference on Computer Vision, pages 4271-4280, 2019. 3
+[5] Jeffrey P Bigham, Chandrika Jayant, Hanjie Ji, Greg Little, Andrew Miller, Robert C Miller, Robin Miller, Aubrey Tatarowicz, Brandyn White, Samual White, et al. Vizwiz: nearly real-time answers to visual questions. In Proceedings of the 23rd annual ACM symposium on User interface software and technology, pages 333-342. ACM, 2010. 1, 3, 5
+[6] Sebastian Bosse, Dominique Maniry, Klaus-Robert Müller, Thomas Wiegand, and Wojciech Samek. Deep neural networks for no-reference and full-reference image quality assessment. IEEE Transactions on Image Processing, 27(1):206-219, 2017. 2, 5
+[7] Erin Brady, Meredith Ringel Morris, Yu Zhong, Samuel White, and Jeffrey P Bigham. Visual challenges in the everyday lives of blind people. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 2117-2126. ACM, 2013. 1, 3, 5
+[8] François Chollet. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1251-1258, 2017. 6
+[9] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. IEEE, 2009. 1, 3, 6
+[10] Michael Denkowski and Alon Lavie. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Translation, 2014. 6
+[11] Li Fei-Fei, Rob Fergus, and Pietro Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. In 2004 conference on computer vision and pattern recognition workshop, pages 178-178. IEEE, 2004. 1, 3
+[12] Deepti Ghadiyaram and Alan C Bovik. Massive online crowdsourced study of subjective and objective picture quality. IEEE Transactions on Image Processing, 25(1):372-387, 2015. 1, 2, 3
+[13] Ross Girshick, Ilija Radosavovic, Georgia Gkioxari, Piotr Dollár, and Kaiming He. Detectron. https://github.com/facebookresearch/detectron, 2018. 7
+[14] Gregory Griffin, Alex Holub, and Pietro Perona. Caltech-256 object category dataset. 2007. 1, 3
+
+[15] Danna Gurari and Kristen Grauman. Crowdverge: Predicting if people will agree on the answer to a visual question. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pages 3511-3522, 2017. 3
+[16] Danna Gurari, Qing Li, Chi Lin, Yinan Zhao, Anhong Guo, Abigale Stangl, and Jeffrey P Bigham. Vizwiz-priv: A dataset for recognizing the presence and purpose of private visual information in images taken by blind people. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 939–948, 2019. 3
+[17] Danna Gurari, Qing Li, Abigale J Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P Bigham. Vizwiz grand challenge: Answering visual questions from blind people. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3608-3617, 2018. 1, 3, 5, 6, 7, 8
+[18] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 6, 7
+[19] Lun Huang, Wenmin Wang, Jie Chen, and Xiao-Yong Wei. Attention on attention for image captioning. In International Conference on Computer Vision, 2019. 7
+[20] Chandrika Jayant, Hanjie Ji, Samuel White, and Jeffrey P Bigham. Supporting blind photography. In The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility, pages 203-210. ACM, 2011. 3
+[21] Dinesh Jayaraman, Anish Mittal, Anush K Moorthy, and Alan C Bovik. Objective quality assessment of multiply distorted images. In 2012 Conference record of the forty sixth asilomar conference on signals, systems and computers (ASILOMAR), pages 1693-1697. IEEE, 2012. 1, 2
+[22] Le Kang, Peng Ye, Yi Li, and David Doermann. Convolutional neural networks for no-reference image quality assessment. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1733-1740, 2014. 2, 5
+[23] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009. 1, 3
+[24] D Kundu, D Ghadiyaram, AC Bovik, and BL Evans. Large-scale crowdsourced study for high dynamic range images. IEEE Trans. Image Process., 26(10):4725-4740, 2017. 2
+[25] Eric Cooper Larson and Damon Michael Chandler. Most apparent distortion: full-reference image quality assessment and the role of strategy. Journal of Electronic Imaging, 19(1):011006, 2010. 1, 2
+[26] Qiang Li and Zhou Wang. Reduced-reference image quality assessment using divisive normalization-based image representation. IEEE journal of selected topics in signal processing, 3(2):202-211, 2009. 2
+[27] Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74-81, 2004. 6
+[28] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer, 2014. 1, 3
+[29] Lixiong Liu, Bao Liu, Hua Huang, and Alan Conrad Bovik. No-reference image quality assessment based on spatial and spectral entropies. Signal Processing: Image Communication, 29(8):856-863, 2014. 2
+[30] Yihang Lou, Yan Bai, Jun Liu, Shiqi Wang, and Lingyu Duan. Veri-wild: A large dataset and a new method for vehicle re-identification in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3235–3243, 2019. 1
+[31] Kede Ma, Zhengfang Duanmu, Qingbo Wu, Zhou Wang, Hongwei Yong, Hongliang Li, and Lei Zhang. Waterloo exploration database: New challenges for image quality assessment models. IEEE Transactions on Image Processing, 26(2):1004-1016, 2016. 1, 2
+[32] Lin Ma, Songnan Li, Fan Zhang, and King Ngi Ngan. Reduced-reference image quality assessment using reorganized dct-based image representation. IEEE Transactions on Multimedia, 13(4):824-829, 2011. 2
+[33] Anish Mittal, Anush Krishna Moorthy, and Alan Conrad Bovik. No-reference image quality assessment in the spatial domain. IEEE Transactions on image processing, 21(12):4695-4708, 2012. 2, 5
+[34] Anish Mittal, Rajiv Soundararajan, and Alan C Bovik. Making a completely blind image quality analyzer. IEEE Signal Processing Letters, 20(3):209-212, 2012. 2, 5
+[35] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311-318. Association for Computational Linguistics, 2002. 6
+[36] Nikolay Ponomarenko, Lina Jin, Oleg Ieremeiev, Vladimir Lukin, Karen Egiazarian, Jaakko Astola, Benoit Vozel, Kacem Chehdi, Marco Carli, Federica Battisti, et al. Image database tid2013: Peculiarities, results and perspectives. Signal Processing: Image Communication, 30:57-77, 2015. 1, 2
+[37] Nikolay Ponomarenko, Vladimir Lukin, Alexander Zelensky, Karen Egiazarian, Marco Carli, and Federica Battisti. Tid2008-a database for evaluation of full-reference visual quality assessment metrics. Advances of Modern Radioelectronics, 10(4):30-45, 2009. 1, 2
+[38] Abdul Rehman and Zhou Wang. Reduced-reference image quality assessment by structural similarity estimation. IEEE Transactions on Image Processing, 21(8):3378-3389, 2012. 2
+[39] Rafael Reisenhofer, Sebastian Bosse, Gitta Kutyniok, and Thomas Wiegand. A Haar wavelet-based perceptual similarity index for image quality assessment. Signal Processing: Image Communication, 61:33-43, 2018. 2
+[40] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91-99, 2015. 7
+[41] Hamid R Sheikh, Muhammad F Sabir, and Alan C Bovik. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Transactions on image processing, 15(11):3440-3451, 2006. 1, 2
+[42] Rajiv Soundararajan and Alan C Bovik. Rred indices: Reduced reference entropic differencing for image quality assessment. IEEE Transactions on Image Processing, 21(2):517-526, 2011. 2
+[43] Sundaram Suresh, R Venkatesh Babu, and Hyoung J Kim. No-reference image quality assessment using modified extreme learning machine classifier. Applied Soft Computing, 9(2):541-552, 2009. 2
+[44] Hossein Talebi and Peyman Milanfar. Nima: Neural image assessment. IEEE Transactions on Image Processing, 27(8):3998-4011, 2018. 2, 5
+[45] Marynel Vázquez and Aaron Steinfeld. An assisted photography framework to help visually impaired users properly aim a camera. ACM Transactions on Computer-Human Interaction (TOCHI), 21(5):25, 2014. 3
+[46] Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4566-4575, 2015. 6
+[47] Zhou Wang and Alan C Bovik. Reduced-and no-reference image quality assessment. IEEE Signal Processing Magazine, 28(6):29-40, 2011. 2
+[48] Zhou Wang, Alan C Bovik, Hamid R Sheikh, Eero P Simoncelli, et al. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600-612, 2004. 1, 2
+[49] Zhou Wang and Eero P Simoncelli. Reduced-reference image quality assessment using a wavelet-domain natural image statistic model. In Human Vision and Electronic Imaging X, volume 5666, pages 149-159. International Society for Optics and Photonics, 2005. 2
+[50] Zhou Wang, Eero P Simoncelli, and Alan C Bovik. Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, volume 2, pages 1398-1402. IEEE, 2003. 2
+[51] Jinjian Wu, Weisi Lin, Guangming Shi, and Anmin Liu. Reduced-reference image quality assessment with visual information fidelity. IEEE Transactions on Multimedia, 15(7):1700-1705, 2013. 2
+[52] Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. Detectron2. https://github.com/facebookresearch/detectron2, 2019. 7
+[53] Jianxiong Xiao, James Hays, Krista A Ehinger, Aude Oliva, and Antonio Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 3485-3492. IEEE, 2010. 1, 3
+[54] Xu Yang, Kaihua Tang, Hanwang Zhang, and Jianfei Cai. Auto-encoding scene graphs for image captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10685–10694, 2019. 7
+[55] Peng Ye and David Doermann. No-reference image quality assessment using visual codebooks. IEEE Transactions on Image Processing, 21(7):3129-3138, 2012. 2
+
+[56] Peng Ye, Jayant Kumar, Le Kang, and David Doermann. Unsupervised feature learning framework for no-reference image quality assessment. In 2012 IEEE conference on computer vision and pattern recognition, pages 1098-1105. IEEE, 2012. 2
+[57] Lin Zhang, Lei Zhang, Xuanqin Mou, and David Zhang. Fsim: A feature similarity index for image quality assessment. IEEE transactions on Image Processing, 20(8):2378-2386, 2011. 2
+[58] Yu Zhong, Pierre J Garrigues, and Jeffrey P Bigham. Real time object scanning using a mobile phone and cloud-based visual search engine. In Proceedings of the 15th International ACM SIGACCESS Conference on Computers and Accessibility, page 20. ACM, 2013. 3
+[59] Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017. 1, 3
+[60] Zhe Zhu, Dun Liang, Songhai Zhang, Xiaolei Huang, Baoli Li, and Shimin Hu. Traffic-sign detection and classification in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2110-2118, 2016. 1
\ No newline at end of file
diff --git a/assessingimagequalityissuesforrealworldproblems/images.zip b/assessingimagequalityissuesforrealworldproblems/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..9bc53ea68bd00c67942a1680dc76d7d3bf9c2fb2
--- /dev/null
+++ b/assessingimagequalityissuesforrealworldproblems/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e5a79ff40516e940f44b4bc8ee759fc8ba19c778d719d2fabd581962fb160b94
+size 409053
diff --git a/assessingimagequalityissuesforrealworldproblems/layout.json b/assessingimagequalityissuesforrealworldproblems/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..6eac859eebfc725ae4db5f64fe3bdd2781797691
--- /dev/null
+++ b/assessingimagequalityissuesforrealworldproblems/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e49a997c3ea5c90412bfc73e33f981d8a430af186af48ec8dc58b414af182519
+size 386507
diff --git a/associate3ddetperceptualtoconceptualassociationfor3dpointcloudobjectdetection/084cc90c-3a54-473c-a2ba-7cfbe58dedc5_content_list.json b/associate3ddetperceptualtoconceptualassociationfor3dpointcloudobjectdetection/084cc90c-3a54-473c-a2ba-7cfbe58dedc5_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..135a145e2752af54a2a23ac98cbbbcaf8cb02869
--- /dev/null
+++ b/associate3ddetperceptualtoconceptualassociationfor3dpointcloudobjectdetection/084cc90c-3a54-473c-a2ba-7cfbe58dedc5_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b84000bca7cce1cf2209f87922a7a9f00c77d1b363f3d5195e24251979468a37
+size 73984
diff --git a/associate3ddetperceptualtoconceptualassociationfor3dpointcloudobjectdetection/084cc90c-3a54-473c-a2ba-7cfbe58dedc5_model.json b/associate3ddetperceptualtoconceptualassociationfor3dpointcloudobjectdetection/084cc90c-3a54-473c-a2ba-7cfbe58dedc5_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..80be118a1b92482120bd7ac3a0976680b38ed6ba
--- /dev/null
+++ b/associate3ddetperceptualtoconceptualassociationfor3dpointcloudobjectdetection/084cc90c-3a54-473c-a2ba-7cfbe58dedc5_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fc10fc2f61495e9fb6a083612c1f0e99b22f80ab922506171c0dc5f716fa3991
+size 91198
diff --git a/associate3ddetperceptualtoconceptualassociationfor3dpointcloudobjectdetection/084cc90c-3a54-473c-a2ba-7cfbe58dedc5_origin.pdf b/associate3ddetperceptualtoconceptualassociationfor3dpointcloudobjectdetection/084cc90c-3a54-473c-a2ba-7cfbe58dedc5_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..dfcc351dd89ad56b4055135bea1ba9199ffbf579
--- /dev/null
+++ b/associate3ddetperceptualtoconceptualassociationfor3dpointcloudobjectdetection/084cc90c-3a54-473c-a2ba-7cfbe58dedc5_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:43d95ee2ddc0e3b6142081b112355c06c95a25ce2bb794bfc8dbc2edec12e3ea
+size 1257315
diff --git a/associate3ddetperceptualtoconceptualassociationfor3dpointcloudobjectdetection/full.md b/associate3ddetperceptualtoconceptualassociationfor3dpointcloudobjectdetection/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..b030dba8c048b05f1e399d612619f0ccc62aab11
--- /dev/null
+++ b/associate3ddetperceptualtoconceptualassociationfor3dpointcloudobjectdetection/full.md
@@ -0,0 +1,289 @@
+# Associate-3Ddet: Perceptual-to-Conceptual Association for 3D Point Cloud Object Detection
+
+Liang Du†\*, Xiaoqing Ye†, Xiao Tan², Jianfeng Feng¹, Zhenbo Xu³, Errui Ding² and Shilei Wen²
+
+$^{1}$ Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, China, Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (Fudan University), Ministry of Education, China.
+
+$^{2}$ Baidu Inc., China.
+
+$^{3}$ University of Science and Technology of China, China.
+
+# Abstract
+
+Object detection from 3D point clouds remains a challenging task, though recent studies have pushed the envelope with deep learning techniques. Owing to severe spatial occlusion and the inherent variance of point density with distance to the sensor, the appearance of the same object varies considerably in point cloud data. Designing robust feature representations against such appearance changes is hence a key issue in a 3D object detection method. In this paper, we innovatively propose a domain-adaptation-like approach to enhance the robustness of the feature representation. More specifically, we bridge the gap between the perceptual domain, where the feature comes from a real scene, and the conceptual domain, where the feature is extracted from an augmented scene consisting of non-occluded point clouds rich in detailed information. This domain adaptation approach mimics the functionality of the human brain when performing object perception. Extensive experiments demonstrate that our simple yet effective approach fundamentally boosts the performance of 3D point cloud object detection and achieves state-of-the-art results.
+
+# 1. Introduction
+
+3D object detection [11, 24, 28, 42, 44] has received widespread attention from both industry and academia due to its crucial role in autonomous driving [14]. Despite the tremendous success achieved in recent years in object detection from 2D images [12, 25, 27, 31, 32], object detection based on 3D point clouds remains an open and highly
+
+
+Traditional 3D Object Detection
+
+
+Associate-3Ddet
+Figure 1. Comparison between traditional 3D object detection and our Associate-3Ddet. $E$ and $D$ are the encoder and decoder, respectively. In contrast to the traditional method that directly uses the features of sparse point clouds for detection, our Associate-3Ddet learns to associate incomplete perceived features of objects with more complete features of corresponding class-wise conceptual models for feature enhancement.
+
+challenging problem due to object occlusion and the variance of the point distribution.
+
+As the distance to the LiDAR sensor increases, the density of the point cloud decreases dramatically, resulting in huge density variance. Moreover, some parts of objects might be invisible due to occlusion or low point density. In these cases, 3D detection results are error-prone due to the lack of compact perceptual features. Recent 3D detection methods [37, 42] strive to alleviate this problem. On one hand, STD [42] proposes PointsPool to transform intermediate point features from a sparse expression to a more compact voxel representation for final box prediction. However, for far objects, simple voxelization does not help the network learn compact perceptual features, as most voxels might be empty. On the other hand, RANGE [37] exploits generative adversarial networks to encourage far-range objects to have features consistent with those of near-range objects. However, it focuses on range domain adaptation and ignores object-wise constraints like viewing angles and 3D shape. By comparison, we select objects with similar rotation angles and similar shapes as the conceptual guidance and directly adapt object-wise features from the perceptual domain to the conceptual domain following a transfer learning paradigm (see Figure 1).
+
+From a psychological perspective, object perception is usually carried out in the form of an association process. Such an association process helps map an observed occluded or distant object to a complete object with the finest details. This process is regarded as associative recognition in the human cognitive system. More specifically, it is proposed in [3, 8] that human object perception is a hierarchical process consisting of two stages: (a) a "viewer-centered" representation stage, where the features of the object are presented from the viewer's perspective, while the existing features may be incomplete due to occlusion and distance; (b) an "object-centered" representation stage, where the object's features are associated with its class-wise conceptual model stored in the brain.
+
+Previous works such as [5, 9, 17] demonstrate that biological plausibility can be used as a guide to design intelligent systems. Considering the above issues, in this paper, we present Associate-3Ddet for object detection in point clouds, as depicted in Figure 1. Associate-3Ddet builds up the association between the weak perceptual features, where 3D detection results are unsatisfactory due to the dramatic variance of appearance, and the robust conceptual features. To associate the perceptual features and our proposed conceptual features, we present a self-contained method that selects conceptual models with similar viewpoints and high point density from the same dataset, and places them at the corresponding ground-truth positions for conceptual feature extraction. Compared with perceptual features, performing 3D detection on conceptual features brings significant performance gains (see Table 3). By narrowing the gap between the perceptual domain and our proposed conceptual domain, Associate-3Ddet surpasses the state-of-the-art on the popular KITTI 3D detection benchmark (see Tables 1 and 2). The main contributions of our work are summarized as follows:
+
+- We propose a 3D object detection framework that learns to associate features extracted from the real scene with more discriminative features from class-wise conceptual models, which enhances feature robustness, especially for occluded or distant objects.
+- We propose a perceptual-to-conceptual module (P2C) that adapts object features from the perceptual domain to the conceptual domain, balanced by an incompletion-aware reweighting map.
+
+- We present a concrete method to construct the conceptual scene without appealing to external resources.
+- We achieve state-of-the-art 3D object detection performance on KITTI benchmark.
+
+# 2. Related Work
+
+3D Object Detection. Current 3D object detection methods can be divided into three categories: multi-view, voxel-based, and point-based methods.
+
+The multi-view method [4, 20, 21, 40, 41] projects point clouds to the bird's eye view (BEV) or the image plane [22] to perform 3D detection. [4] applied a region proposal network (RPN) to obtain positive proposals, and merged features from the image view, front view and BEV. [20] further merged features from multiple views in the RPN phase to generate proposals.
+
+There are several methods using a voxel-grid representation. 3DFCN [22] and Vote3Deep [13] discretized the point cloud on square grids and applied 3D convolutional networks to 3D point clouds for object detection, but these approaches suffer from high computational overhead due to 3D convolutions. In [45], the authors first introduced the VoxelNet architecture to learn discriminative features from 3D point clouds for 3D object detection. VoxelNet is further improved by applying sparse convolution [15, 16, 39] to reduce computational consumption. Pseudo-images were utilized by [21] as the representation after point clouds are voxelized, which further improves detection speed.
+
+F-PointNet [30] first exploited raw point clouds to detect 3D objects. Frustum proposals from off-the-shelf 2D object detectors were used as candidate boxes, and predictions were regressed according to interior points [38]. PointRCNN [35] directly processed the raw point cloud to generate 3D proposal regions in a bottom-up manner according to the segmentation labels from 3D box annotations. STD [42] presented a point-based proposal generation paradigm with spherical anchors, which achieves a high recall.
+
+Differently, RANGE [37] proposed cross-range adaptation following a generative adversarial network paradigm to produce features for far-range objects consistent with those of near-range objects. It aims at range adaptation and ignores object-wise features such as occlusion. By comparison, our proposed object-wise domain adaptation produces better performance.
+
+Transfer Learning in Deep Neural Networks. Transfer learning [29] is a type of machine learning paradigm aimed at bridging different domains in natural language processing (NLP) [6] and computer vision [18]. Our Associate-3Ddet adopts the transfer learning method to narrow the domain
+
+
+Figure 2. Overview of our proposed Associate-3Ddet framework and its underlying biological model. Associate-3Ddet mainly consists of four parts: (a) The voxel-grid representation and sparse-convolution-based perceptual feature extractor (PFE), which encodes the perceived information as the visual cortex (VC) does in the brain. (b) The conceptual feature generator (CFG) generates the features based on the constructed conceptual scene. (c) The perceptual-to-conceptual module (P2C) adapts features from the perceptual domain to the conceptual domain, which mimics the knowledge association and retrieval process in the brain. The right yellow arrow points to the adaptation loss. (d) The loss for 3D object detection. Note that the PFE without deformable convolutional layers is our baseline method. The explanation of the Bio-model can be found in Sec.3.1
+
+gap between perceptual features and class-wise conceptual features. The key point lies in how to reduce the distribution discrepancy across different domains while avoiding over-fitting. Deep neural networks are leveraged to perform transfer learning, since they are able to extract high-level representations which disentangle different explanatory factors of variations behind the data [2] and manifest invariant factors underlying different populations that transfer well from source domain tasks to target domain tasks [10, 43].
+
+# 3. Methodology
+
+Our Associate-3Ddet establishes associations between the perceptual feature and the conceptual feature based on domain adaptation. Due to occlusion or long distance, some instances contain only a few points captured by LiDAR, making it difficult to directly decide the category of the object or to infer its location. The perceptual domain corresponds to the real scene, which reports unsatisfactory performance owing to the variance of appearance. Fortunately, we can construct the conceptual domain that learns from a compact point cloud or more informative instances to guide the feature learning in challenging cases. In other words, the conceptual feature is robust to object location and orientation, and can be extracted from complete point clouds with details as fine as possible. We believe that a 3D detection algorithm will become more robust if the gap between the two features diminishes.
+
+The overall architecture of the proposed Associate-3Ddet is illustrated in Figure 2, which consists of four parts: (a) the perceptual feature extractor (PFE) to extract the target domain feature from the real scene; (b) the conceptual feature generator (CFG) to provide the source domain feature from an augmented scene; (c) the perceptual-to-conceptual module (P2C) to perform domain adaptation balanced by an incompletion-aware reweighting map; (d) the training loss for driving the feature learning. We detail the network and depict the biological model that supports our motivation in Sec. 3.1. In Sec. 3.2, the training process of Associate-3Ddet is described. The practical method we adopt to construct conceptual models is explained in Sec. 3.3.
+
+# 3.1. Network of Associate-3Ddet
+
+Many 3D point cloud detection methods [21, 35, 39, 42] can be employed as the PFE and/or the CFG branch in our approach. In this section, we instantiate one PFE and one CFG for demonstration purposes.
+
+PFE to extract perceptual features. The PFE module functions as the feature extractor for the real scene, which is shown in Figure 2 and colored in red. Following [21, 39, 45], our Associate-3Ddet uses a simple encoder that first obtains the input features by dividing the entire 3D space into voxels and then adopts sparse convolution [15] for feature extraction. For sparse convolution, output points are not computed if there is no related input point, which saves computational resources.
+
+It is more difficult for the PFE branch to directly learn the conceptual feature representation from the perceptual scene compared with the CFG branch, due to the lack of valid points within the scene. Therefore, to make the PFE generate feature representations resembling conceptual features for occluded or distant objects, the network should have the ability to adaptively adjust its receptive field and actively capture more informative context information even in sparse cases. Inspired by [7], which
+
+
+Figure 3. Illustration of our incompletion-aware reweighting map. We calculate the average length of all learned offsets for each pixel to represent its expansion degree when searching for context information. Those regions (yellow) with incomplete and sparse object point clouds require larger searching steps and also need to be given more attention during domain adaptation.
+
+validates the capability of deformable convolution in modeling geometric transformations, this work extends the 2D deformable convolution operation to 3D voxel-based representations, as shown in Figure 2.
+
+CFG to generate conceptual features. A siamese subnetwork named CFG extracts the conceptual feature from the augmented scene, serving as the source domain feature. The CFG is first separately trained end-to-end on the conceptual scene, which is derived from the real-world scene and integrated with complete object models. Here we note that the conceptual model can be a complete point cloud for each object, such as a 3D CAD model from external resources, or a surrogate model with more informative knowledge originating from the same dataset, i.e., self-contained. One practical conceptual model construction method is introduced in Sec. 3.3. After end-to-end training of the CFG, its parameters are fixed to provide stable feature guidance for further domain adaptation. During the training process of the PFE, the real-world scene and its corresponding conceptual scene are fed into the PFE and CFG, respectively. Even with only sparse and partly visible point clouds of real objects, the PFE is encouraged to learn more robust features under the guidance of the conceptual model. During the inference process, the CFG is no longer needed.
+
+P2C to perform domain adaptation. The proposed P2C learns to map the perceptual feature generated by PFE to the conceptual feature extracted by the CFG to build up associations. As mentioned above, the source domain feature is obtained by the CFG and the locations of both the real scanned objects and conceptual models are spatially aligned. Consequently, we directly optimize the L2 distance between these two features for domain adaptation. The parameters of PFE are tuned to encourage perceived objects to generate features that resemble more informative conceptual ones. To make the domain adaptation more focused, it is restricted to foreground pixels only. As depicted in
+
+Figure 3, we calculate the foreground mask of the source domain by reprojecting the 3D object bounding boxes to the BEV view and downsampling it to match the output feature map scale; the foreground pixels are those containing at least one point of an object. It is noticed that the pixel-wise difference between these two feature maps is prone to be large in regions containing incomplete structures due to occlusion or sampling effects. As proposed in [7], the deformable convolutional layers have an adaptive receptive field according to the learned offsets. We also observed that the learned offsets of the deformable convolutional layers in these regions are comparatively larger than those in other regions. This is probably because feature extraction for incomplete structures usually demands the surrounding information as a complement. Given such observations, we design an incompletion-aware reweighting map based on the learned offsets as a guidance to reweight the per-pixel L2 loss. This weighting approach shares a similar idea behind focal loss [26] and OHEM [36], where a weighting strategy is employed to encourage the network to focus on particular samples to achieve better performance. As illustrated in Figure 3, for each pixel of the feature, the deformable convolutional layer learns 2 (the $\Delta x$ and $\Delta y$ directions) $\times N$ offsets. The averaged offset length over all offsets for each pixel is $\frac{1}{N}\sum_{n=1}^{N}\sqrt{\Delta x_n^2 + \Delta y_n^2}$, and it is first adopted as the weighting value of the pixel to obtain the offset length map. Next, the offset length map is multiplied by the foreground feature mask so that the domain adaptation is only carried out on the foreground. Then, the foreground offset map is normalized to $0 \sim 1$ to get the reweighting map. The formulation is detailed in Sec. 3.2. Note that after training, the whole P2C and CFG are no longer needed.
+
+Biological model underlying the framework. To further explain how such a simple architecture is able to enhance the performance of 3D object detection, we investigate a number of biology studies on the human cognitive system corresponding to 3D object detection. According to the biological model, the associative recognition process in the human brain involves three crucial factors, as illustrated in Figure 2: the visual cortex (VC), the anterior temporal lobes (ATL) and the inferior longitudinal fasciculus (ILF). In general, the VC encodes the received primary information, while the ATL has been implicated as a key repository of conceptual knowledge [19], which plays a critical role in object detection [34]. In the ATL, conceptual models of different classes are coded in an invariant manner. The ILF is known to connect the ATL and the VC [1, 33], which provides the associative path between conceptual models and real-world objects. With the enhanced object feature guided by the ATL through the ILF, the representation of an object becomes more abundant and robust for further object localization, despite far distance and occlusion.
+
+Our proposed P2C, which adapts the object features from the perceptual domain to the conceptual domain, simulates the knowledge association and retrieval process between the ATL and the VC. Once the perceptual and conceptual domains are well aligned by training, the network is able to adaptively generate the conceptual features without the CFG. Therefore, the CFG is actually designed to build up a virtual "repository" of conceptual representations for the network, like the ATL in the brain, and thus the CFG and P2C can simply be removed during the inference process.
+
+# 3.2. Training of Associate-3Ddet
+
+Training of CFG. There are two losses for our CFG, including the binary cross entropy loss for classification and the smooth-L1 loss for 3D proposal generation. We denote the total loss of our CFG as $\mathcal{L}_{CFG}$ :
+
+$$
+\mathcal{L}_{CFG} = \mathcal{L}_{bbox} + \mathcal{L}_{class} \tag{1}
+$$
+
+The same regression targets as those in [39, 45] are set up, and the smooth-L1 loss $\mathcal{L}_{bbox}$ is adopted to regress the normalized box parameters as:
+
+$$
+\Delta x = \frac{x_{a} - x_{g}}{d_{a}}, \quad \Delta y = \frac{y_{a} - y_{g}}{h_{a}}, \quad \Delta z = \frac{z_{a} - z_{g}}{d_{a}},
+$$
+
+$$
+\Delta l = \log\left(\frac{l_{g}}{l_{a}}\right), \quad \Delta h = \log\left(\frac{h_{g}}{h_{a}}\right), \quad \Delta w = \log\left(\frac{w_{g}}{w_{a}}\right), \tag{2}
+$$
+
+$$
+\Delta \theta = \theta_{g} - \theta_{a},
+$$
+
+where $x$ , $y$ , and $z$ are the center coordinates; $w$ , $l$ , and $h$ are the width, length, and height, respectively; $\theta$ is the yaw rotation angle; the subscripts $a$ , and $g$ indicate the anchor and the ground truth, respectively; and $d_a = \sqrt{(l_a)^2 + (w_a)^2}$ is the diagonal of the base of the anchor box. $(x_a, y_a, z_a, h_a, w_a, l_a, \theta_a)$ are the parameters of 3D anchors and $(x_g, y_g, z_g, h_g, w_g, l_g, \theta_g)$ represent the corresponding ground truth box.
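+
+The sketch below is a direct transcription of Eq. (2) as written above, with each box given as (x, y, z, h, w, l, θ); it is illustrative only and not the authors' released code.
+
+```python
+import numpy as np
+
+def encode_box_targets(anchor, gt):
+    """Encode a ground-truth box against an anchor following Eq. (2)."""
+    xa, ya, za, ha, wa, la, ta = anchor
+    xg, yg, zg, hg, wg, lg, tg = gt
+    da = np.sqrt(la ** 2 + wa ** 2)  # diagonal of the anchor box base
+    return np.array([
+        (xa - xg) / da,       # delta x
+        (ya - yg) / ha,       # delta y
+        (za - zg) / da,       # delta z
+        np.log(lg / la),      # delta l
+        np.log(hg / ha),      # delta h
+        np.log(wg / wa),      # delta w
+        tg - ta,              # delta theta
+    ])
+```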
+
+We use the focal loss introduced by [26] for classification to alleviate the sample imbalance during our anchor-based training, and the classification loss $\mathcal{L}_{\text{class}}$ is formulated as follows:
+
+$$
+\mathcal{L}_{\text{class}} = -\alpha_{t} (1 - p_{t})^{\gamma} \log(p_{t}) \tag{3}
+$$
+
+where $p_t$ is the predicted classification probability and $\alpha$ and $\gamma$ are the parameters of the focal loss.
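+
+A one-line sketch of Eq. (3), using the $\alpha = 0.25$ and $\gamma = 2$ values from Sec. 4.2; the clamp is only a numerical-stability assumption.
+
+```python
+import torch
+
+def focal_loss(p_t, alpha_t=0.25, gamma=2.0):
+    """Focal loss of Eq. (3): -alpha_t * (1 - p_t)^gamma * log(p_t)."""
+    return -alpha_t * (1.0 - p_t) ** gamma * torch.log(p_t.clamp(min=1e-7))
+```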
+
+Training of the whole Associate-3Ddet. In addition to the classification and regression losses, there is an extra loss function for associative feature adaptation. We denote the total loss of our Associate-3Ddet as $\mathcal{L}_{total}$:
+
+$$
+\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{bbox}} + \mathcal{L}_{\text{class}} + \sigma \mathcal{L}_{\text{associate}} \tag{4}
+$$
+
+where $\sigma$ is a hyperparameter to balance these loss terms. The association loss $\mathcal{L}_{\text{associate}}$ for the object feature adaptation is formulated as follows:
+
+$$
+\mathcal{L}_{\text{associate}} = \frac{1}{P} \sum_{p=1}^{P} \left\| \mathcal{F}_{\text{perceptual}}^{p} - \mathcal{F}_{\text{conceptual}}^{p} \right\|_{2} \cdot \left( 1 + \mathcal{M}_{\text{reweight}}^{p} \right) \tag{5}
+$$
+
+
+Figure 4. The generation process of the conceptual scene contains three steps: (1) Objects are divided into $M$ groups according to their rotation angles. In each group, we choose the top $K\%$ objects with the most points as our conceptual models. (2) For each object not chosen as a conceptual model, we choose the conceptual model with the minimum average closest point distance from the object as its correspondence. (3) The scale and rotation are further refined accordingly.
+
+where $\mathcal{F}_{\text{perceptual}}$ and $\mathcal{F}_{\text{conceptual}}$ are the two feature maps from the target and source domain. $P$ and $p$ denote the number of nonzero pixels and their indices in $\mathcal{M}_{\text{reweight}}$, respectively. $\mathcal{M}_{\text{reweight}}$ is formulated as follows:
+
+$$
+\mathcal{M}_{\text{reweight}} = \phi \left( \mathcal{M}_{\text{offset}} \cdot \mathcal{M}_{\text{foreground}} \right) \tag{6}
+$$
+
+where $M_{offset}$ denotes the average offset length map. As explained in Sec.3.1, we average the length of all learned offsets for each pixel to calculate this map, and $\mathcal{M}_{\text{foreground}}$ is the reprojected foreground feature mask. $\phi$ represents the operation that normalizes the map to $0 \sim 1$ .
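+
+A sketch of Eqs. (5) and (6) follows. The offset length map is assumed to be precomputed as the per-pixel average $\frac{1}{N}\sum_{n=1}^{N}\sqrt{\Delta x_n^2 + \Delta y_n^2}$ of the learned deformable-convolution offsets, and max-normalization is used as one possible choice of $\phi$.
+
+```python
+import torch
+
+def association_loss(f_perceptual, f_conceptual, offset_map, foreground_mask):
+    """Per-pixel L2 adaptation loss of Eq. (5) weighted by Eq. (6).
+
+    f_perceptual, f_conceptual: (C, H, W) feature maps from the PFE and CFG.
+    offset_map: (H, W) average offset length per pixel.
+    foreground_mask: (H, W) binary foreground mask.
+    """
+    masked = offset_map * foreground_mask
+    reweight = masked / masked.max() if masked.max() > 0 else masked  # Eq. (6)
+    l2 = torch.norm(f_perceptual - f_conceptual, dim=0)  # per-pixel L2 distance
+    nonzero = reweight > 0                                # the P foreground pixels
+    if nonzero.sum() == 0:
+        return torch.zeros((), device=f_perceptual.device)
+    return (l2[nonzero] * (1.0 + reweight[nonzero])).mean()
+```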
+
+# 3.3. Self-contained method to build conceptual models
+
+As explained before, conceptual models can be fabricated through various approaches, such as 3D CAD models and render-based techniques. Instead of appealing to costly external resources, we present a self-contained conceptual model construction method that adopts surrogate models with more informative knowledge originating from the same dataset. Free-of-charge ground-truth point cloud instances with more complete structures are chosen as candidate conceptual models, since 3D point cloud objects of the same class are usually of a similar scale. The main process is shown in Figure 4 and explained as follows.
+
+Firstly, for each category, relevant point cloud instances are divided into $M$ groups according to their rotation angles ranging from $-180^{\circ}$ to $+180^{\circ}$. In each equally divided rotation range, we rank the point clouds of objects by density. The top $K\%$ instances with the most complete point clouds in each range and category are chosen as our candidate conceptual models. Next, to build pairwise correspondences for less informative objects in real-world scanned point clouds, given a less informative perceived object (which is not selected as a candidate conceptual model) and the ground-truth pose of its 3D bounding box, we select conceptual models from the candidates within the same rotation range as the less informative one. To eliminate the small angle difference, the chosen candidate is further rotated into the exact pose of the real scanned object. The one with the minimum average closest point distance from the perceived object is selected as the correspondence. Then, the scale (length, width and height) of the corresponding conceptual model is further fine-tuned according to the size of the less informative perceived object. Each conceptual scene is finally composed by replacing the incomplete original point cloud of each object with its related conceptual model in 3D space.
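+
+A simplified sketch of this selection and matching procedure, using the $M = 24$ and $K = 20$ values from Sec. 4.2; rotation alignment of the chosen candidate and the final scale refinement are omitted for brevity.
+
+```python
+import numpy as np
+
+def avg_closest_point_distance(a, b):
+    """Mean over points of `a` of the distance to the closest point in `b`."""
+    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
+    return d.min(axis=1).mean()
+
+def match_conceptual_models(objects, M=24, K=20):
+    """objects: list of dicts with 'points' (Ni x 3 array) and 'yaw' (radians),
+    all of one category. Returns, for every non-candidate object index, the index
+    of its matched conceptual model (same rotation bin, minimum distance)."""
+    # 1) Group indices by rotation bin; keep the densest top-K% as candidates.
+    bin_of = [int((o["yaw"] + np.pi) / (2 * np.pi) * M) % M for o in objects]
+    candidates_per_bin = {}
+    for b in range(M):
+        idx = sorted((i for i in range(len(objects)) if bin_of[i] == b),
+                     key=lambda i: len(objects[i]["points"]), reverse=True)
+        candidates_per_bin[b] = idx[: max(1, int(len(idx) * K / 100))] if idx else []
+
+    # 2) For each remaining object, pick the candidate with the smallest distance.
+    matches = {}
+    for i, obj in enumerate(objects):
+        cands = candidates_per_bin[bin_of[i]]
+        if not cands or i in cands:
+            continue
+        matches[i] = min(cands, key=lambda j: avg_closest_point_distance(
+            obj["points"], objects[j]["points"]))
+    return matches
+```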
+
+# 4. Experiments
+
+# 4.1. Dataset and Experimental Setup
+
+We evaluate Associate-3Ddet on the widely acknowledged 3D object detection benchmark, the KITTI dataset [14], on the 'Car' category. The color images as well as the relevant point clouds are provided. We split the 7481 training images equally into train and validation sets in accordance with previous works such as [42]. Average precision (AP) metrics measured in 3D and BEV are utilized to evaluate the performance of different methods. During evaluation, we follow the official KITTI evaluation protocol. Three levels of difficulty are defined according to the 2D bounding box height, occlusion and truncation degree: easy, moderate and hard. The benchmark ranks algorithms based on results for the moderate difficulty level.
+
+# 4.2. Implementation Details
+
+Data Augmentation. We perform data augmentation to prevent over-fitting following [39] on both the perceptual and conceptual datasets.
+
+Network Architecture. In contrast to [39], which adopts stacked convolutions for input feature encoding, we simply limit each voxel to contain no more than five points and compute the mean x, y, z values within it as the input feature, followed by a 128-dimensional linear layer. After feeding into stacked sparse and submanifold convolutions for feature extraction and dimensionality reduction, the shape of the obtained encoded tensor is $128 \times 2 \times 200 \times 176$, where 2 corresponds to the height dimension. We squeeze the height dimension by reshaping it into the feature map channels. Since the PFE and CFG are siamese structures with different input features, we leave the detailed architecture in the supplementary material. Then, features of the PFE are fed into a deformable convolutional layer with a 128-channel output feature map and a kernel size of (5, 5). Finally, naive convolutions are applied to the features originating from the deformable convolutional layer.
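+
+A small sketch of the input encoding described above (at most five points per voxel, their mean (x, y, z) as the voxel feature, followed by a 128-dimensional linear layer) and of folding the height dimension into the channels; the voxel size and point-cloud range are placeholder arguments.
+
+```python
+import torch
+import torch.nn as nn
+
+def mean_voxel_features(points, voxel_size, pc_range_min, max_pts=5):
+    """Quantize points into voxels, keep at most `max_pts` points per voxel and
+    use their mean (x, y, z) as the 3-dim voxel input feature."""
+    coords = ((points - torch.tensor(pc_range_min)) / torch.tensor(voxel_size)).long()
+    voxels = {}
+    for p, c in zip(points, coords):
+        key = tuple(c.tolist())
+        if len(voxels.setdefault(key, [])) < max_pts:
+            voxels[key].append(p)
+    keys = torch.tensor(list(voxels.keys()), dtype=torch.long)
+    feats = torch.stack([torch.stack(v).mean(dim=0) for v in voxels.values()])
+    return keys, feats                        # sparse voxel coordinates, (V, 3) features
+
+embed = nn.Linear(3, 128)                     # 128-dimensional input embedding
+
+# After the sparse/submanifold convolution stack the encoded tensor is
+# 128 x 2 x 200 x 176; the height dimension (2) is folded into the channels.
+encoded = torch.randn(1, 128, 2, 200, 176)
+bev = encoded.reshape(1, 128 * 2, 200, 176)   # 256 x 200 x 176 pseudo-image
+```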
+
+Training Parameters. Both the CFG and PFE are trained with the Adam optimizer and a batch size of 6 for 80 epochs on a single NVIDIA Titan RTX card. We utilize the cosine annealing learning rate strategy with an initial learning rate of 0.001. We set $\alpha = 0.25$ and $\gamma = 2$ in the focal loss, $\sigma = 0.5$ to balance the loss terms, and $M = 24$ and $K = 20$ to create the conceptual scenes. We first train the CFG end-to-end on the conceptual scenes and then keep it fixed for further Associate-3Ddet network training.
+
+# 4.3. Quantitative Results
+
+As shown in Table 1, we evaluate our Associate-3Ddet on the 3D detection and BEV detection benchmarks of the KITTI val split. For 3D object detection, by only utilizing LiDAR point clouds and voxel-based encoding, our proposed Associate-3Ddet outperforms most existing state-of-the-art methods and obtains results comparable to STD [42] on the "hard" difficulty level, whereas our network runs at a much higher FPS (see the time cost and GPU resources in Table 1 for details). For BEV detection, our method outperforms previous top-ranked methods by large margins on all difficulty levels. Thanks to the self-contained conceptual model and the brain-inspired P2C module, our simple yet effective network achieves performance superior to complicated networks while running at a high FPS.
+
+Table 2 shows the results on the KITTI 3D object detection test server (test split). For fairness, the comparison is carried out among methods based on pseudo-image representations. Our Associate-3Ddet is superior to all LiDAR-based methods on all entries. Note that in this paper, we focus on pseudo-image-based methods to demonstrate the effectiveness of our brain-inspired approach. Our method can also be easily extended to point-based two-stage methods.
+
+# 4.4. Ablation Study
+
+To validate the effectiveness of different modules and settings of Associate-3Ddet, the following ablation experiments are conducted.
+
+The upper bound performance of CFG. Table 3 demonstrates the upper bound performance of our CFG, i.e., adopting the conceptual models for both training and testing. The high precision (over $90\%$) indicates the possibility of adopting conceptual models to guide the PFE module for enhanced feature learning. The third row, "real + conceptual", indicates merely training the baseline model with the KITTI train split and the generated conceptual data, without using the siamese network for adaptation. The result validates that simply mixing more training data using the same network does not boost the performance.
+
+The strength of domain adaptation. We conduct an investigation by measuring the performance with and without our P2C and CFG, as shown in Table 4.
+
+Figure 5. Visualization of our results on KITTI val split set. The ground-truth 3D boxes and the predicted 3D boxes of the baseline method and our method are drawn in green, yellow and red, respectively, in the LiDAR phase. The first row shows RGB images, and the second and third rows show the front view and the bird's-eye view, respectively.
+
+Table 1. Results on 3D object detection and BEV detection of the KITTI val split set at IoU = 0.7 for cars.
+
+| Method | Time (s) | Modality | 3D Mod. (%) | 3D Easy (%) | 3D Hard (%) | BEV Mod. (%) | BEV Easy (%) | BEV Hard (%) | GPU |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| MV3D [4] | 0.24 | RGB + LiDAR | 62.68 | 71.29 | 56.56 | 78.10 | 86.55 | 76.67 | TITAN X |
+| AVOD-FPN [20] | 0.1 | RGB + LiDAR | 74.44 | 84.41 | 68.65 | - | - | - | TITAN XP |
+| F-PointNet [30] | 0.17 | RGB + LiDAR | 70.92 | 83.76 | 63.65 | 84.02 | 88.16 | 76.44 | GTX 1080 |
+| VoxelNet [45] | 0.22 | LiDAR only | 65.46 | 81.98 | 62.85 | 84.81 | 89.60 | 78.57 | TITAN X |
+| SECOND [39] | 0.05 | LiDAR only | 76.48 | 87.43 | 69.10 | 87.07 | 89.96 | 79.66 | GTX 1080 Ti |
+| RANGE [37] | 0.05 | LiDAR only | 78.31 | 88.80 | 76.16 | 87.82 | 90.32 | 87.51 | GTX 1080 Ti |
+| PointRCNN [35] | 0.1 | LiDAR only | 78.63 | 88.88 | 77.38 | 87.07 | 89.96 | 79.66 | TITAN XP |
+| STD [42] | 0.08 | LiDAR only | 78.70 | 88.80 | 78.20 | 88.30 | 90.10 | 87.40 | TITAN V |
+| Ours | 0.06 | LiDAR only | 79.17 | 89.29 | 77.76 | 88.98 | 90.55 | 87.71 | GTX 1080 Ti |
+
+The first row shows the baseline results trained only with the KITTI train split. The second row shows the baseline equipped with only $D$ (deformable convolutional layers). We find that without domain adaptation, simply adopting deformable convolutional layers has little effect on the performance of the detector. Owing to domain adaptation (P2C and CFG), an improvement is observed in Row 3. The following rows report the performance of Associate-3Ddet with or without $D$, $M$ (foreground mask) and $R$ (reweighting map). The improvements on all difficulty levels indicate that our full approach, equipped with P2C and CFG, learns more discriminative and robust features for 3D detection.
+
+Table 2. Results on KITTI 3D object detection test server (test split). The 3D object detection and BEV detection are evaluated by average precision at IoU = 0.7 for cars.
+
+| Method | Time (s) | Modality | 3D Mod. (%) | 3D Easy (%) | 3D Hard (%) | BEV Mod. (%) | BEV Easy (%) | BEV Hard (%) | GPU |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| MV3D [4] | 0.24 | RGB + LiDAR | 52.73 | 66.77 | 51.31 | 77.00 | 85.82 | 68.94 | TITAN X |
+| AVOD-FPN [20] | 0.1 | RGB + LiDAR | 71.88 | 81.94 | 66.38 | 83.79 | 88.53 | 77.90 | TITAN XP |
+| ContFuse [24] | 0.06 | RGB + LiDAR | 66.22 | 82.54 | 64.04 | 85.83 | 88.81 | 77.33 | - |
+| UberATG-MMF [23] | 0.08 | RGB + LiDAR | 76.75 | 86.81 | 68.41 | 87.47 | 89.49 | 79.10 | TITAN XP |
+| VoxelNet [45] | 0.22 | LiDAR only | 65.11 | 77.47 | 57.73 | 79.26 | 89.35 | 77.39 | TITAN X |
+| SECOND [39] | 0.05 | LiDAR only | 73.66 | 83.13 | 66.20 | 79.37 | 88.07 | 77.95 | GTX 1080 Ti |
+| PointPillars [21] | 0.016 | LiDAR only | 74.99 | 79.05 | 68.30 | 86.10 | 88.35 | 79.83 | GTX 1080 Ti |
+| Ours | 0.06 | LiDAR only | 77.40 | 85.99 | 70.53 | 88.09 | 91.40 | 82.96 | GTX 1080 Ti |
+
+Table 3. The upper bound performance of CFG trained with the conceptual data, in terms of $\mathrm{AP}_{3\mathrm{d}}$.
+
+| Training data | Validation data | Mod. | Easy | Hard |
+| --- | --- | --- | --- | --- |
+| conceptual | real | 59.87 | 78.58 | 59.33 |
+| conceptual | conceptual | 90.69 | 98.18 | 90.70 |
+| real + conceptual | real | 76.75 | 87.27 | 74.68 |
+
+Table 4. Ablation study for our Associate-3Ddet with different settings on $\mathrm{AP}_{3\mathrm{d}}$
+
+| P2C & CFG | Method | Mod. | Easy | Hard |
+| --- | --- | --- | --- | --- |
+| × | Baseline | 76.78 | 87.28 | 75.46 |
+| × | Baseline (with D) | 76.86 | 87.34 | 75.70 |
+| ✓ | Ours (without M, D, R) | 78.05 | 88.34 | 76.75 |
+| ✓ | Ours (with D) | 78.29 | 88.52 | 76.91 |
+| ✓ | Ours (with M) | 78.46 | 88.74 | 77.20 |
+| ✓ | Ours (with M, D) | 78.69 | 88.96 | 77.36 |
+| ✓ | Ours (full approach) | 79.17 | 89.29 | 77.76 |
+
+Table 5. Different collection strategies of conceptual models for CFG on $\mathrm{AP}_{3\mathrm{d}}$
+
+| K% | Mod. | Easy | Hard |
+| --- | --- | --- | --- |
+| 50% | 78.45 | 88.88 | 77.15 |
+| 40% | 78.49 | 88.96 | 77.19 |
+| 30% | 78.59 | 89.03 | 77.29 |
+| 20% | 79.17 | 89.29 | 77.76 |
+| 10% | 78.89 | 88.92 | 77.73 |
+
+Hyperparameter for constructing conceptual models. To evaluate the effect of the hyperparameter for constructing candidate conceptual models from original object instances, we conduct experiments with different settings of $K$, where the top $K\%$ instances in dense-to-sparse order within each angle range are selected as candidates. A smaller $K$ means that each candidate conceptual model contains more points and is thus closer to an ideal model. As revealed in Table 5, a smaller $K$ leads to higher performance because more complete conceptual models are selected. However, if the percentage $K$ is too small, only a few conceptual models exist for each angle range, making it less likely to find a corresponding conceptual model with a structure similar to each perceived instance. The comparison shows that setting $K$ to 20 achieves the best performance. More experiments can be found in the supplementary material.
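+
+A simple sketch of this selection strategy, under the assumption that each object instance is given as an (angle bin, point count, points) tuple; the function and variable names are illustrative, not from the released code:
+
+```python
+from collections import defaultdict
+
+def select_conceptual_candidates(instances, k_percent=20):
+    """instances: iterable of (angle_bin, num_points, points) tuples."""
+    by_bin = defaultdict(list)
+    for angle_bin, num_points, points in instances:
+        by_bin[angle_bin].append((num_points, points))
+    candidates = {}
+    for angle_bin, items in by_bin.items():
+        items.sort(key=lambda x: x[0], reverse=True)      # dense-to-sparse ordering
+        keep = max(1, int(len(items) * k_percent / 100))  # keep the top K% per angle range
+        candidates[angle_bin] = [pts for _, pts in items[:keep]]
+    return candidates
+```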
+
+# 4.5. Qualitative Results
+
+We present some representative comparison results of the baseline and our Associate-3Ddet on the val split of KITTI in Figure 5. It can be seen that the occluded and distant objects are well detected after adopting the proposed association mechanism, which demonstrates that the proposed brain-inspired approach adaptively generates robust features for predicting more accurate 3D bounding boxes.
+
+# 5. Conclusions
+
+Inspired by human associative recognition, we propose a simple yet effective 3D object detection framework that learns to associate features of perceived objects with discriminative and robust features from their conceptual models by domain adaptation. This approach explicitly bridges the gap between the two domains, and enhances the robustness against appearance changes in point clouds. In addition, our approach can be easily integrated into many existing object detection methods for 3D point clouds. Experimental results on the KITTI benchmark dataset demonstrate the effectiveness and robustness of our Associate-3Ddet.
+
+# Acknowledgement
+
+This work was supported by the 111 Project (NO.B18015), the National Natural Science Foundation of China (No.91630314), the key project of Shanghai Science & Technology (No.16JC1420402), Shanghai Municipal Science and Technology Major Project (No.2018SHZDZX01) and ZJLab.
+
+# References
+
+[1] Manzar Ashtari. Anatomy and functional role of the inferior longitudinal fasciculus: a search that has just begun. Developmental Medicine & Child Neurology, 54(1):6-7, 2012.
+[2] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798-1828, 2013.
+[3] Giovanni A Carlesimo, Paola Casadio, Maurizio Sabbadini, and Carlo Caltagirone. Associative visual agnosia resulting from a disconnection between intact visual memory and semantic systems. Cortex, 34(4):563-576, 1998.
+[4] Xiaozhi Chen, Huimin Ma, Ji Wan, Bo Li, and Tian Xia. Multi-view 3d object detection network for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1907-1915, 2017.
+[5] Changmao Cheng, Yanwei Fu, Yu-Gang Jiang, Wei Liu, Wenlian Lu, Jianfeng Feng, and Xiangyang Xue. Dual skipping networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4071-4079, 2018.
+[6] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. Journal of machine learning research, 12(Aug):2493-2537, 2011.
+[7] Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. In Proceedings of the IEEE international conference on computer vision, pages 764-773, 2017.
+[8] Ennio De Renzi. Disorders of visual recognition. In Seminars in neurology, volume 20, pages 479-486. Copyright © 2000 by Thieme Medical Publishers, Inc., 333 Seventh Avenue, New ..., 2000.
+[9] Liang Du, Jingang Tan, Xiangyang Xue, Lili Chen, Hongkai Wen, Jianfeng Feng, Jiamao Li, and Xiaolin Zhang. 3dcts: Fast and robust joint 3d semantic-instance segmentation via coupled feature selection. arXiv preprint arXiv:2003.00535, 2020.
+[10] Liang Du, Jingang Tan, Hongye Yang, Jianfeng Feng, Xi-angyang Xue, Qibao Zheng, Xiaqing Ye, and Xiaolin Zhang. Ssf-dan: Separated semantic feature based domain adaptation network for semantic segmentation. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
+[11] Xinxin Du, Marcelo H Ang, Sertac Karaman, and Daniela Rus. A general pipeline for 3d detection of vehicles. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 3194-3200. IEEE, 2018.
+[12] Kaiwen Duan, Song Bai, Lingxi Xie, Honggang Qi, Qingming Huang, and Qi Tian. Centernet: Object detection with keypoint triplets. arXiv preprint arXiv:1904.08189, 2019.
+[13] Martin Engelcke, Dushyant Rao, Dominic Zeng Wang, Chi Hay Tong, and Ingmar Posner. Vote3deep: Fast object detection in 3d point clouds using efficient convolutional neural networks. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 1355-1361. IEEE, 2017.
+[14] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 3354-3361. IEEE, 2012.
+[15] Benjamin Graham, Martin Engelcke, and Laurens van der Maaten. 3d semantic segmentation with submanifold sparse convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9224-9232, 2018.
+[16] Benjamin Graham and Laurens van der Maaten. Submanifold sparse convolutional networks. arXiv preprint arXiv:1706.01307, 2017.
+[17] Demis Hassabis, Dharshan Kumaran, Christopher Summerfield, and Matthew Botvinick. Neuroscience-inspired artificial intelligence. Neuron, 95(2):245-258, 2017.
+[18] Judy Hoffman, Sergio Guadarrama, Eric S Tzeng, Ronghang Hu, Jeff Donahue, Ross Girshick, Trevor Darrell, and Kate Saenko. Lsda: Large scale detection through adaptation. In Advances in Neural Information Processing Systems, pages 3536-3544, 2014.
+[19] Paul Hoffman and Matthew A Lambon Ralph. From percept to concept in the ventral temporal lobes: Graded hemispheric specialisation based on stimulus and task. Cortex, 101:107-118, 2018.
+[20] Jason Ku, Melissa Mozifian, Jungwook Lee, Ali Harakeh, and Steven L Waslander. Joint 3d proposal generation and object detection from view aggregation. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1-8. IEEE, 2018.
+[21] Alex H Lang, Sourabh Vora, Holger Caesar, Lubing Zhou, Jiong Yang, and Oscar Beijbom. Pointpillars: Fast encoders for object detection from point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 12697-12705, 2019.
+[22] Bo Li. 3d fully convolutional network for vehicle detection in point cloud. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 1513-1518. IEEE, 2017.
+[23] Ming Liang, Bin Yang, Yun Chen, Rui Hu, and Raquel Urtasun. Multi-task multi-sensor fusion for 3d object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7345-7353, 2019.
+[24] Ming Liang, Bin Yang, Shenlong Wang, and Raquel Urtasun. Deep continuous fusion for multi-sensor 3d object detection. In Proceedings of the European Conference on Computer Vision (ECCV), pages 641-656, 2018.
+[25] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2117-2125, 2017.
+[26] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pages 2980-2988, 2017.
+
+[27] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C Berg. Ssd: Single shot multibox detector. In European conference on computer vision, pages 21-37. Springer, 2016.
+[28] Wenjie Luo, Bin Yang, and Raquel Urtasun. Fast and furious: Real time end-to-end 3d detection, tracking and motion forecasting with a single convolutional net. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 3569-3577, 2018.
+[29] Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10):1345-1359, 2009.
+[30] Charles R Qi, Wei Liu, Chenxia Wu, Hao Su, and Leonidas J Guibas. Frustum pointnets for 3d object detection from rgb-d data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 918-927, 2018.
+[31] Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You only look once: Unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 779-788, 2016.
+[32] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91-99, 2015.
+[33] Goksel Sali, Robert G Briggs, Andrew K Conner, Meherzad Rahimi, Cordell M Baker, Joshua D Burks, Chad A Glenn, James D Battiste, and Michael E Sughrue. A connectomic atlas of the human cerebrum—chapter 11: Tractographic description of the inferior longitudinal fasciculus. *Operative Neurosurgery*, 15(suppl_1):S423-S428, 2018.
+[34] Daniel Schacter, Daniel Gilbert, Daniel Wegner, and Bruce M Hood. Psychology: European Edition. Macmillan International Higher Education, 2011.
+[35] Shaoshuai Shi, Xiaogang Wang, and Hongsheng Li. Pointrcnn: 3d object proposal generation and detection from point cloud. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-779, 2019.
+[36] Abhinav Shrivastava, Abhinav Gupta, and Ross Girshick. Training region-based object detectors with online hard example mining. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 761-769, 2016.
+[37] Ze Wang, Sihao Ding, Ying Li, Minming Zhao, Sohini Roychowdhury, Andreas Wallin, Guillermo Sapiro, and Qiang Qiu. Range adaptation for 3d object detection in lidar. arXiv preprint arXiv:1909.12249, 2019.
+[38] Danfei Xu, Dragomir Anguelov, and Ashesh Jain. Pointfusion: Deep sensor fusion for 3d bounding box estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 244-253, 2018.
+[39] Yan Yan, Yuxing Mao, and Bo Li. Second: Sparsely embedded convolutional detection. Sensors, 18(10):3337, 2018.
+[40] Bin Yang, Ming Liang, and Raquel Urtasun. Hdnet: Exploiting hd maps for 3d object detection. In Conference on Robot Learning, pages 146-155, 2018.
+[41] Bin Yang, Wenjie Luo, and Raquel Urtasun. Pixor: Real-time 3d object detection from point clouds. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 7652-7660, 2018.
+[42] Zetong Yang, Yanan Sun, Shu Liu, Xiaoyong Shen, and Jiaya Jia. Std: Sparse-to-dense 3d object detector for point cloud. arXiv preprint arXiv:1907.10471, 2019.
+[43] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in neural information processing systems, pages 3320-3328, 2014.
+[44] Tan Yu, Jingjing Meng, and Junsong Yuan. Multi-view harmonized bilinear network for 3d object recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 186-194, 2018.
+[45] Yin Zhou and Oncel Tuzel. Voxelnet: End-to-end learning for point cloud based 3d object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4490-4499, 2018.
\ No newline at end of file
diff --git a/associate3ddetperceptualtoconceptualassociationfor3dpointcloudobjectdetection/images.zip b/associate3ddetperceptualtoconceptualassociationfor3dpointcloudobjectdetection/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..fa53512a9e418ce1ef06afa7733d0e7554a2981f
--- /dev/null
+++ b/associate3ddetperceptualtoconceptualassociationfor3dpointcloudobjectdetection/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8fce277c9e36f876879075fc4c22050983e07ea07ba9305f7208608af3f241ab
+size 598409
diff --git a/associate3ddetperceptualtoconceptualassociationfor3dpointcloudobjectdetection/layout.json b/associate3ddetperceptualtoconceptualassociationfor3dpointcloudobjectdetection/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..6268b720f08f901158e3ca2d11c94cf6e07ff369
--- /dev/null
+++ b/associate3ddetperceptualtoconceptualassociationfor3dpointcloudobjectdetection/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:45f47cfc668635e56257ce45dafb184ec04aede66ad545d642c225f98218ae16
+size 337787
diff --git a/attacktoexplaindeeprepresentation/b625529c-5ea0-490d-8d89-a69ab9cfb327_content_list.json b/attacktoexplaindeeprepresentation/b625529c-5ea0-490d-8d89-a69ab9cfb327_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..3d2ae04643c5733e4523c81a6146910148bc9364
--- /dev/null
+++ b/attacktoexplaindeeprepresentation/b625529c-5ea0-490d-8d89-a69ab9cfb327_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eef562a177a54efb30552ba30584d9973f9bbf2070106dcfc058929dc865c467
+size 75827
diff --git a/attacktoexplaindeeprepresentation/b625529c-5ea0-490d-8d89-a69ab9cfb327_model.json b/attacktoexplaindeeprepresentation/b625529c-5ea0-490d-8d89-a69ab9cfb327_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..dcf52b641fee92a8c843f3df397d5e337ac3089a
--- /dev/null
+++ b/attacktoexplaindeeprepresentation/b625529c-5ea0-490d-8d89-a69ab9cfb327_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7b6534dc00943684d58182b6265eb3ed725da28e50735707a553703988d49645
+size 98731
diff --git a/attacktoexplaindeeprepresentation/b625529c-5ea0-490d-8d89-a69ab9cfb327_origin.pdf b/attacktoexplaindeeprepresentation/b625529c-5ea0-490d-8d89-a69ab9cfb327_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d0c4a4f8b7b015364864e69ed615c4149dcd4f1b
--- /dev/null
+++ b/attacktoexplaindeeprepresentation/b625529c-5ea0-490d-8d89-a69ab9cfb327_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9d63b446a49c9e3a7e861ad2fa5b6c5fdd805f070a981d7a00518d7c89d86ba6
+size 1601822
diff --git a/attacktoexplaindeeprepresentation/full.md b/attacktoexplaindeeprepresentation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..a4de491527f99a03ce340090fcc5d25bce583a60
--- /dev/null
+++ b/attacktoexplaindeeprepresentation/full.md
@@ -0,0 +1,323 @@
+# Attack to Explain Deep Representation
+
+Mohammad A. A. K. Jalwana Naveed Akhtar Mohammed Bennamoun Ajmal Mian Computer Science and Software Engineering, The University of Western Australia.
+
+{mohammad.jalwana@research., naveed.akhtar@, mohammed.bennamoun@, ajmal.mian}@uwa.edu.au
+
+# Abstract
+
+Deep visual models are susceptible to extremely low magnitude perturbations to input images. Though carefully crafted, the perturbation patterns generally appear noisy, yet they are able to perform controlled manipulation of model predictions. This observation is used to argue that deep representation is misaligned with human perception. This paper counter-argues and proposes the first attack on deep learning that aims at explaining the learned representation instead of fooling it. By extending the input domain of the manipulative signal and employing a model-faithful channelling, we iteratively accumulate adversarial perturbations for a deep model. The accumulated signal gradually manifests itself as a collection of visually salient features of the target label (in model fooling), casting adversarial perturbations as primitive features of the target label. Our attack provides the first demonstration of systematically computing perturbations for adversarially non-robust classifiers that comprise salient visual features of objects. We leverage the model-explaining character of our algorithm to perform image generation, inpainting and interactive image manipulation by attacking adversarially robust classifiers. The visually appealing results across these applications demonstrate the utility of our attack (and perturbations in general) beyond model fooling.
+
+# 1. Introduction
+
+Deep visual models have provided breakthroughs in numerous computer vision tasks, including image classification [24, 43], object detection [37, 38], semantic segmentation [27, 9] and image captioning [49]. However, despite their impressive performance, deep models are found vulnerable to adversarial perturbations to inputs [45]. These perturbations are weak additive signals that manipulate model predictions while remaining imperceptible to the human visual system. The intriguing susceptibility of deep models to adversarial perturbations is currently being actively investigated by the research community [2].
+
+Dictated by the original 'adversarial' perspective [45],
+
+
+Figure 1. Top: Using an image distribution, our attack iteratively generates and refines a perturbation $\pmb{p}$ for a standard deep visual classifier (VGG-16 here) that extracts geometric patterns deemed salient visual features of a label by the classifier. Bottom: Applying our attack to adversarially robust classifiers (ResNet-50 here) enables visually appealing interactive image manipulation (here), image generation (Fig. 6) and inpainting (Fig. 7).
+
+research in this direction has taken a natural bicephalous approach. One stream of works aims at generating perturbations with modest visual perceptibility and high transferability to fool known and unknown models [16, 25, 13, 42, 11, 30]. The other focuses on defending the models against such perturbations [50, 36, 26, 1, 34]. There are very few exceptions that deviate from the 'adversarial' brand of perturbations and their casting as a mere fooling tool for deep learning. Santurkar et al. [41] presented a notable contribution along this line by using perturbations for image synthesis with adversarially robust networks.
+
+Investigating adversarial perturbations, Ilyas et al. [19] claimed that the existing large datasets (e.g. ImageNet [12]) admit to brittle yet highly predictive features that remain
+
+imperceptible to humans. It is argued that deep visual models rely on these non-robust features for high accuracy, which also makes them susceptible to adversarial perturbations. Reliance of deep models on these 'apparently' incomprehensible features is also argued to indicate a misalignment between deep visual representation and human perception [14]. To remove this misalignment, Engstrom et al. [14] proposed to learn deep models under a robust optimization framework. However, this entails a significant performance loss for the original model and a drastic increase in the computational complexity of model induction.
+
+It is paradoxical that a representation misaligned with human perception still performs human-meaningful visual tasks with high accuracy. To investigate this phenomenon, we delve deep into the composition of perturbation signals with an alternate objective of model explanation instead of model fooling. We discover that under appropriate conditions, adversarial perturbations eventually emerge as salient visual features of the target label even for the non-robust models, see Fig. 1 (top). Within the context of adversarial perturbations, this observation drastically weakens the argument of misalignment between human perception and deep representation. Rather, it places adversarial perturbations as human-meaningful geometric features of the target label, albeit in a primitive and subtle form.
+
+Our perturbation estimation algorithm stochastically maximizes the prediction probability of an image distribution's perturbed samples for a given target label. Anchored by a seed image, the maximization takes place by iteratively stepping in the Expected gradient direction of the classifier's loss surface w.r.t. the input samples. The optimization is guided by gradient moments and by adjusting the step direction to achieve the ultimate objective more efficiently. We further channel the perturbation signal to focus more on its regions that cause high activity of the neurons in the deeper layers of the classifier. This refinement is purely based on the intermediate perturbations computed by our algorithm, which makes our technique model-faithful, a desirable property for model explanation [14].
+
+Besides explaining deep models in terms of salient visual features for class labels and highlighting the alignment of deep representation with human perception, our attack naturally suits low-level vision tasks such as image generation, inpainting and interactive image manipulation using 'classifiers' [41]. We affirm the utility of our technique (and perturbations in general) beyond the adversarial objective by achieving significant visual improvements for these tasks over [41]. The major contributions of this work are summarized as:
+
+- We propose the first attack on deep learning with input perturbation that explains a model instead of fooling it.
+- By manifesting salient visual features of class labels in perturbations for 'non-robust' models, we drastically
+
+weaken the argument that deep representation is misaligned with the human perception.
+
+- We demonstrate visually appealing image generation, inpainting and interactive image manipulation by attacking robust classifiers. Our results affirm the utility of perturbations beyond model fooling.
+
+# 2. Related work
+
+Adversarial perturbations are being actively investigated along the lines of attacking deep models and defending them against the adversarial attacks [2]. We first discuss the key contributions along these lines and then focus on the non-adversarial perspective of input perturbations.
+
+Adversarial attacks: Additive adversarial perturbations that can arbitrarily alter the decisions of deep models made their first appearance in the seminal work of Szegedy et al. [45]. This discovery fueled the development of numerous techniques to attack deep visual models. Goodfellow et al. [16] devised the Fast Gradient Sign Method (FGSM) to craft adversarial perturbations in a single gradient ascent step over the model's loss surface for the input. Later, Kurakin et al. [25] advanced this scheme by introducing a multi-step version called Iterative FGSM (I-FGSM). Further instances of the follow-up iterative algorithms for adversarial attacks include Momentum I-FGSM (MI-FGSM) [13], Diverse Input I-FGSM (DI$^2$-FGSM) [51] and Variance-Reduced I-FGSM (vr-IGSM) [48], etc.
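+
+As a generic illustration of such single-step, gradient-sign crafting (a sketch under the usual FGSM formulation, not code from any of the cited works; `model`, `image`, `label` and `epsilon` are placeholders):
+
+```python
+import torch
+import torch.nn.functional as F
+
+def fgsm(model, image, label, epsilon=8 / 255):
+    """One signed-gradient ascent step on the classification loss w.r.t. the input."""
+    image = image.clone().detach().requires_grad_(True)
+    loss = F.cross_entropy(model(image), label)
+    loss.backward()
+    adversarial = image + epsilon * image.grad.sign()   # single gradient-sign step
+    return adversarial.clamp(0, 1).detach()
+```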
+
+The above-mentioned algorithms and other recent works [30, 42, 39, 11, 52, 15] compute image-specific adversarial perturbations. These perturbations appear as noise to humans but completely fool the models. Moosavi-Dezfooli et al. [29] first demonstrated the possibility of fooling deep models simultaneously on a large number of images with Universal Adversarial Perturbations. Later, [33, 5, 22, 31] also devised techniques for computing effective universal perturbations. The pervasive susceptibility of deep models to adversarial perturbations is seen as a serious threat to practical deep learning [2] - an idea currently fueling the very high level of research activity in this area.
+
+Adversarial defenses: On the flip side, numerous techniques have also surfaced to counter the adversarial attacks [20, 34, 36, 50, 44, 35, 1, 26]. These techniques aim at protecting deep model against both image-specific [34] and universal perturbations [1]. This is commonly done by either detecting the perturbation in an input image, or diluting the adversarial effects of perturbation signals by modifying the model or the input itself. Nevertheless, Carlini et al. [7, 6, 8] and later Athalye et al. [3] demonstrated that it is often possible to break the adversarial defenses by stronger adversarial attacks.
+
+Non-adversarial perspective: Currently, there are also contributions in the literature (albeit very few) that hint towards the utility of perturbations beyond model fooling. For
+
+instance, Tsipras et al. [46] observed the presence of salient visual features of the target class in the perturbation signals that fool 'adversarially robust' models. A similar observation is made by Woods et al. [47] for the models robustified with regularized gradients. Existence of salient visual features in perturbations indicate the potential of these signals in model explanation [28, 47]. However, their manifestation uniquely in the case of robustified models is interpreted as a misalignment between (non-robust) deep representation and the human perception [14, 46]. Potentially, the re-alignment is only achievable by adversarially robustifying the models at a serious cost of performance loss and amplified computational complexity [14, 46].
+
+# 3. Attacking to explain
+
+Let $\pmb{I} \in \mathbb{R}^{m}$ be a sample of a distribution $\mathcal{I}$ over the natural images and $\mathcal{K}(\pmb{I})$ be a deep visual classification model that maps $\pmb{I}$ to its correct label $\ell_{\mathrm{true}}$ . The common aim of generating perturbations in adversarial settings is to compute $p \in \mathbb{R}^{m}$ that satisfies the constraint
+
+$$
+\mathcal{K}(\boldsymbol{I} + \boldsymbol{p}) \rightarrow \ell_{\text{target}} \quad \text{s.t.} \quad \ell_{\text{target}} \neq \ell_{\text{true}}, \; \|\boldsymbol{p}\|_p \leq \eta, \tag{1}
+$$
+
+where $||.||_p$ denotes the $\ell_p$ -norm that is restrained by a fixed $\eta$ . In (1), restricting $\ell_{\mathrm{target}}$ to a pre-defined label results in a targeted adversarial attack.
+
+According to (1), $\pmb{p}$ can also be expressed as a function over $\pmb{I}$ and $\mathcal{K}(.)^{1}$. Given a fixed $\mathcal{K}(.)$, the objective of computing an image-specific perturbation confines the domain of $\pmb{p}$, say $\mathrm{Dom}(\pmb{p})$, to the extreme case of a single image. With such restrictions the perturbation signal can only reflect peculiarities of a single data point w.r.t. $\mathcal{K}(.)$, which is hardly indicative of any general character of the classifier. This also calls into question the relevance of claiming human perceptual misalignment with deep representation by alluding to image-specific perturbations. To better encode the classifier information in the perturbation, the signal needs to be invariant to the input samples, which is achievable by broadening the domain of $\pmb{p}$.
+
+Incidentally, universal perturbations [29] are computed with a broader domain as per our formulation. In line with our reasoning, those perturbations exhibit much more regular geometric patterns as compared to the image-specific perturbations. However, those patterns still remain far from salient visual features of any object. This is because universal perturbations map all the input images to random class labels. For a given $\mathcal{K}(.)$, broadening the perturbation domain with a 'targeted' objective is more likely to induce geometric patterns in $\pmb{p}$ that are actually considered salient features of $\ell_{\mathrm{target}}$ by $\mathcal{K}(.)$.
+
+Further to the above argument, we can alternately describe the objective of (1) as maximizing the probability of
+
+a perturbed sample being mapped to $\ell_{\mathrm{target}}$ by $\mathcal{K}(.)$. For $|\operatorname{Dom}(\pmb{p})| > 1$, where $|.|$ is the set cardinality, this maximization must incorporate all the relevant samples. Hence, we re-cast (1) into the following constraint for our objective of explaining a deep model with $\pmb{p}$:
+
+$$
+\mathbb{E}\left[P\left(\mathcal{K}(\boldsymbol{I} + \boldsymbol{p}) \rightarrow \ell_{\text{target}}\right)\right] \geq \gamma, \quad \text{s.t.} \tag{2}
+$$
+
+$$
+\operatorname{Dom}(\boldsymbol{p}) = \{\forall \boldsymbol{I} \,|\, \boldsymbol{I} \sim \mathcal{I}\}, \quad |\operatorname{Dom}(\boldsymbol{p})| \gg 1, \quad \|\boldsymbol{p}\|_p \leq \eta,
+$$
+
+where $P(\cdot)$ denotes probability and $\gamma \in [0,1]$ is a predefined constant. As compared to the commonly computed adversarial perturbations, a $\pmb{p}$ satisfying (2) is expected to reveal clear information about, e.g., what constitutes discriminative visual features of objects for a model, what semantics are attached to a given label index of the model, and whether these features and semantics are human-meaningful.
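+
+The quantity on the left-hand side of (2), the expected probability that perturbed samples are mapped to $\ell_{\mathrm{target}}$, can be sketched as follows (illustrative PyTorch; the perturbation is applied here by simple addition and all names are ours):
+
+```python
+import torch
+
+def expected_target_prob(model, samples, p, target_idx):
+    """Mean softmax probability that the perturbed samples are classified as the target label."""
+    with torch.no_grad():
+        probs = torch.softmax(model((samples + p).clamp(0, 1)), dim=1)
+    return probs[:, target_idx].mean().item()
+```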
+
+# 4. Algorithm
+
+We compute the desired perturbations in two phases. In the first phase of perturbation estimation, discriminative features of the target class (as perceived by the classifier) are induced in the perturbation in a holistic manner. Later, in the phase of perturbation refinement, the technique focuses more on the image regions that cause high neural activity in the model to refine the perturbation.
+
+Perturbation estimation: To expand our perturbation domain, we need to sample a distribution of images. Considering $\mathcal{I}$ , we define a set $\mathfrak{S} = \{\pmb{d}\} \cup \overline{\mathcal{D}}$ of the samples from that distribution. Here, $\pmb{d} \in \mathbb{R}^{m}$ denotes a 'seed' image, whereas each element of $\overline{\mathcal{D}}$ is also a sample from $\mathcal{I}$ . We adopt this formalization to explicate the role of distribution and seed choice in the subsequent text.
+
+The procedure to estimate the perturbation is summarized in Algorithm 1, which solves the optimization problem below with a guided stochastic gradient descent strategy
+
+$$
+\max_{\boldsymbol{p}} \; \wp = \underset{\boldsymbol{I} \sim \mathfrak{S}}{\mathbb{E}}\left[P\left(\mathcal{K}(\boldsymbol{I} + \boldsymbol{p}) \rightarrow \ell_{\text{target}}\right)\right] \quad \text{s.t.} \quad \|\boldsymbol{p}\|_2 \leq \eta. \tag{3}
+$$
+
+At its core, the algorithm employs mini-batches of the distribution samples for a multi-step traversal of the visual model's cost surface to solve (3). We bias this traversal with the seed image. The algorithm iteratively steps in the direction of increasing $\wp$ by computing the gradient of the surface w.r.t. mini-batches and utilizing the gradient moments for efficient optimization. Instead of aiming at the optimal solution, based on (2), we accept any solution for which $\wp \geq \gamma$ . Below, we describe this procedure in detail, following the sequence in Algorithm 1.
+
+We compute the desired perturbation expecting the inputs mentioned in Algorithm 1. Briefly ignoring the initialization on line-1, the algorithm first randomly selects $b - 1$ samples to form a set $\mathcal{D}$ and clips these samples and the input seed $d$ after perturbing them with the current estimate of the perturbation (lines 3 & 4). The clipping is performed to
+
+# Algorithm 1 Perturbation estimation
+
+Input: Classifier $\mathcal{K}$ , seed $d$ , raw samples $\overline{D}$ , target label $\ell_{\mathrm{target}}$ , perturbation norm $\eta$ , mini-batch size $b$ , probability threshold $\gamma$ .
+
+Output: Perturbation $\pmb{p} \in \mathbb{R}^m$ .
+
+1: Initialize $\pmb{p}_0, \pmb{\mu}_0, \pmb{\sigma}_0$ to $\mathbf{0} \in \mathbb{R}^m$ and $t = \wp = 0$ Set $\alpha = 0.9$ , $\beta = 0.999$ and $\overline{\boldsymbol{d}} = \boldsymbol{d}$ .
+2: while $\wp < \gamma$ do
+3: $\mathcal{D} \sim \overline{\mathcal{D}}$ , s.t. $|\mathcal{D}| = b - 1$
+4: $\mathcal{D} \gets \operatorname{Clip}(\mathcal{D} \ominus \pmb{p}_t), \pmb{d} \gets \operatorname{Clip}(\pmb{d} \ominus \pmb{p}_t)$
+5: $t\gets t + 1$
+6: $\xi \gets \frac{||\nabla_{\boldsymbol{d}}\mathcal{J}(\boldsymbol{d},\ell_{\mathrm{target}})||_2}{\underset{\boldsymbol{d}_i\in\mathcal{D}}{\mathbb{E}}[||\nabla_{\boldsymbol{d}_i}\mathcal{J}(\boldsymbol{d}_i,\ell_{\mathrm{target}})||_2]}$
+7: $\pmb{g}_{\pmb{t}} \gets \frac{1}{2} \nabla_{\pmb{d}} \mathcal{J}(\pmb{d}, \ell_{\text{target}}) + \frac{\xi}{2} \underset{\pmb{d}_{i} \in \mathcal{D}}{\mathbb{E}} [\nabla_{\pmb{d}_{i}} \mathcal{J}(\pmb{d}_{i}, \ell_{\text{target}})]$
+8: $\pmb{\mu}_{t} \gets \alpha \pmb{\mu}_{t-1} + (1 - \alpha) \pmb{g}_{t}$
+9: $\pmb{\sigma}_{t} \gets \beta \pmb{\sigma}_{t-1} + (1 - \beta)(\pmb{g}_{t} \odot \pmb{g}_{t})$
+10: $\pmb {\rho}\gets \left(\pmb {\mu}_t\sqrt{1 - \beta^t}\right)\odot \left(\sqrt{\pmb{\sigma}_t} (1 - \alpha^t)\right)^{-1}$
+11: $\mathcal{D}_{\rho}^{+}\gets \overline{\mathcal{D}}\ominus \left(\pmb{p}_{t - 1} + \frac{\rho}{||\pmb{\rho}||_{\infty}}\right)$
+12: $\mathcal{D}_{\rho}^{-}\gets \overline{\mathcal{D}}\ominus \left(\pmb{p}_{t - 1} - \frac{\rho}{\|\rho\|_{\infty}}\right)$
+13: $\varrho^{+}\gets \mathbb{E}\left[P(\mathcal{K}(\mathcal{D}_{\pmb{\rho}}^{+})\to \ell_{\mathrm{target}})\right]$
+14: $\varrho^{-}\gets \mathbb{E}\left[P(\mathcal{K}(\mathcal{D}_{\pmb{\rho}}^{-})\to \ell_{\mathrm{target}})\right]$
+15: if $\varrho^{+}\geq \varrho^{-}$ then
+16: $\pmb{p}_{t} \gets \pmb{p}_{t-1} + \pmb{\rho}$
+17: else
+18: $\pmb{p}_{t} \gets \pmb{p}_{t-1} - \pmb{\rho}$
+19: end if
+20: $\pmb{p}_{t} \gets \pmb{p}_{t} \odot \min \left(1, \frac{\eta}{\|\pmb{p}_{t}\|_{2}}\right)$
+21: $\mathfrak{S}_p\gets \mathrm{Clip}(\{\overline{\boldsymbol{d}}\cup \overline{\mathcal{D}}\} \ominus \boldsymbol {p}_t)$
+22: $\wp \gets \mathbb{E}\big[P(\mathcal{K}(\mathfrak{S}_p)\to \ell_{\mathrm{target}})\big]$
+23: end while
+24: return $\pmb{p}_t$
+
+confine the dynamic range of the resulting samples to $[0,1]$. The $\ominus$ symbol indicates that the perturbation is being applied to a sample or to the individual elements of a set. For a given iteration, the clipped $d\cup \mathcal{D}$ forms a mini-batch that is used by our stochastic gradient descent strategy.
+
+The seed is introduced in our algorithm to allow variation in the perturbations by changing this input. We do not assume any restrictions over the input samples, implying that the elements of $\mathcal{D}$ and $\pmb{d}$ can be widely different. This also means that the gradients of $d_{i} \in \mathcal{D}$ in the direction of $\ell_{\mathrm{target}}$ - denoted by $\nabla_{d_i}\mathcal{J}(d_i,\ell_{\mathrm{target}})$ - can significantly differ from their counterpart computed for $\pmb{d}$ . To account for this difference, line-6 of the algorithm computes the ratio between the gradient norm for $\pmb{d}$ and the Expected gradient norm for $d_{i} \in \mathcal{D}$ . The ratio is later used to fuse the gradients on line-7, giving higher relevance to the seed gradient.
+
+Given the fused gradient, we estimate its first and second raw moment on line-8 & 9 using the exponential running
+
+
+Figure 2. Visually salient geometric patterns emerge with more iterations of Algorithm 1 and are further refined with Algorithm 2. The refined perturbation is shown after 100 post-refinement iterations of the former. The 'Nail' patterns are computed for VGG-16 with $\eta = 10$. We follow [45] for perturbation visualization.
+
+average controlled by the hyper-parameters $\alpha$ and $\beta$ . In our algorithm, the use of adaptive moments is inspired by the Adam algorithm [23] that employs this scheme for model parameter optimization. After empirically verifying the qualitative similarity between the effects of these hyperparameters on our algorithm and Adam, we fix their values to those proposed in [23]. This is indicated on line-1, where the other parameters are initialized to null values and a copy of the seed is created for subsequent processing.
+
+We combine the running averages on line-10 and then perform a binary search for the resulting intermediate perturbation update signal $\rho$ on lines 11-19. The search monitors if changing the direction of $\rho$ is more conducive for our ultimate objective. Stochasticity can cause our optimization to significantly deviate from the eventual objective in a given iteration. On one hand, the binary search inhibits this case. On the other, it introduces more variety in the perturbation that is desirable for better model explanation. We project the updated perturbation to the $\ell_2$ -ball of radius $\eta$ on line-20, and estimate $\wp$ on the perturbed clipped distribution samples on line-21 & 22.
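+
+The following condensed sketch reflects our reading of one iteration of Algorithm 1 and is illustrative rather than the released code. The perturbation is applied by simple addition, $\mathcal{J}$ is taken as cross-entropy, the iteration counter `t` is assumed to start at 1, and `mu`/`sigma` are zero-initialized tensors of the perturbation's shape; the step direction is decided by the probability comparison, as in lines 11-19:
+
+```python
+import torch
+import torch.nn.functional as F
+
+def estimation_step(model, seed, batch, target_idx, p, mu, sigma, t,
+                    eta=10.0, alpha=0.9, beta=0.999):
+    """One update of the perturbation p; mu and sigma are the running gradient moments."""
+    def target_grad(x):
+        # gradient of the target-label loss w.r.t. the perturbed, clipped input
+        x = (x + p).clamp(0, 1).detach().requires_grad_(True)
+        labels = torch.full((x.shape[0],), target_idx, dtype=torch.long)
+        F.cross_entropy(model(x), labels).backward()
+        return x.grad.mean(dim=0, keepdim=True)
+
+    g_seed, g_batch = target_grad(seed), target_grad(batch)
+    xi = g_seed.norm() / (g_batch.norm() + 1e-12)               # line 6: gradient-norm ratio
+    g = 0.5 * g_seed + 0.5 * xi * g_batch                       # line 7: fused gradient
+
+    mu = alpha * mu + (1 - alpha) * g                           # line 8: first moment
+    sigma = beta * sigma + (1 - beta) * g * g                   # line 9: second moment
+    rho = (mu / (1 - alpha ** t)) / ((sigma / (1 - beta ** t)).sqrt() + 1e-12)  # line 10
+
+    def target_prob(candidate):                                 # lines 11-14
+        with torch.no_grad():
+            logits = model((batch + candidate).clamp(0, 1))
+            return torch.softmax(logits, dim=1)[:, target_idx].mean()
+
+    step = rho / (rho.abs().max() + 1e-12)                      # rho scaled by its inf-norm
+    p = p + rho if target_prob(p + step) >= target_prob(p - step) else p - rho
+    p = p * min(1.0, eta / (p.norm().item() + 1e-12))           # line 20: l2-ball projection
+    return p, mu, sigma
+```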
+
+Whereas the $\ell_p$-norm of the perturbation is restricted in adversarial settings for imperceptibility, this constraint plays a different role in our technique. By iterative back-projection and clipping, we keep amplifying those geometric patterns in the perturbation that strongly influence $\mathcal{K}(.)$ to predict $\ell_{\mathrm{target}}$ as the label of all the input samples. With successive back-projections, visually salient features of $\ell_{\mathrm{target}}$ start to emerge in our perturbations (Fig. 2) that are subsequently refined for better visualization, as discussed below.
+
+Perturbation refinement: The holistic treatment of perturbation in Algorithm 1 results in an unrestricted spread of energy over the signal. To achieve finer patterns we let the technique focus more on the relevant regions with an adaptive filtration mechanism summarized in Algorithm 2. A key property of this mechanism is that it upholds model fidelity of the perturbation by assuming no external priors.
+
+To refine the perturbation, it is fed to the convolutional base $\bar{\mathcal{K}}(.)$ of the classifier (line-2). The output $\Omega$ of the base is a set of low resolution 2D signals, which is reduced to an average signal $\pmb{a}$ on line-3. This signal captures rough
+
+# Algorithm 2 Perturbation refinement
+
+Input: Classifier $\mathcal{K}$, perturbation $\pmb{p} \in \mathbb{R}^m$
+Output: Refined perturbation $\pmb{p}$
+1: Initialize $\pmb{f}$ to $\mathbf{0} \in \mathbb{R}^m$. Set $\bar{\mathcal{K}} =$ convolutional base of $\mathcal{K}$, scale factor $\lambda = 5$
+2: $\Omega \gets \bar{\mathcal{K}} (\pmb {p}):\Omega \in \mathbb{R}^{H\times W\times C}$
+3: $\pmb {a}\gets \frac{1}{C}\sum_{n = 1}^{C}\Omega^{n}$
+4: $\tau \gets \Psi (\pmb {a})$
+5: if $a(x,y) > \tau$ then $a(x,y) = \lambda$ else $a(x,y) = 0$
+6: $\pmb{f} \gets \mathrm{upsample}(\pmb{a}) : \pmb{f} \in \mathbb{R}^m$
+7: $\pmb {p}\gets \operatorname {Clip}(\pmb {p}\odot \pmb {f})$
+8: return $\pmb{p}$
+
+silhouette of the salient regions in the input perturbation, which makes it a useful spatial filter for our technique. On line-4, $\Psi(.)$ computes the Otsu threshold [32] for the average signal, that is subsequently used to binarize the image on line-5. We empirically set $\lambda = 5$ in this work. The resulting image is up-sampled by bicubic interpolation [21] on line-6 to match the dimensions of the input perturbation $p$ . The scaled mask is applied to the perturbation, which is subsequently clipped to the valid dynamic range.
+
+The output of Algorithm 2 is further processed by Algorithm 1 to again highlight any salient patterns that might be diminished with filtration. The final perturbation is computed by iterating between the two algorithms.
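+
+Our reading of the refinement step can be sketched as follows (illustrative, not the released code; `conv_base` stands for the classifier's convolutional trunk, and the Otsu threshold is taken from scikit-image):
+
+```python
+import torch
+import torch.nn.functional as F
+from skimage.filters import threshold_otsu
+
+def refine(p, conv_base, lam=5.0):
+    """p: (1, 3, H, W) perturbation; conv_base: convolutional base returning (1, C, h, w)."""
+    with torch.no_grad():
+        omega = conv_base(p)                             # line 2: feature maps of the base
+        a = omega.mean(dim=1, keepdim=True)              # line 3: channel-wise average signal
+        tau = threshold_otsu(a.squeeze().cpu().numpy())  # line 4: Otsu threshold
+        f = torch.where(a > tau, torch.full_like(a, lam), torch.zeros_like(a))  # line 5
+        f = F.interpolate(f, size=p.shape[-2:], mode="bicubic", align_corners=False)  # line 6
+        return (p * f).clamp(0, 1)                       # line 7: filtered, clipped perturbation
+```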
+
+# 5. Experimentation
+
+We experiment with the proposed algorithm for model explanation in § 5.1, and to perform low-level image processing in § 5.2. The former uses standard 'non-robust' classifiers, whereas 'adversarially robust' classifiers are used for the latter.
+
+# 5.1. Model explanation
+
+Setup: We assume $\mathcal{I}$ to be a distribution of natural images and create our set $\overline{\mathcal{D}}$ by randomly sampling 256 images from the validation set of the ILSVRC 2012 dataset [12]. Random samples are drawn separately for each experiment. We consider visual models trained on ImageNet as our classifiers and arbitrarily select the target label $\ell_{\mathrm{target}}$. A mini-batch size of $b = 32$ is used. To compute the perturbations, we set the probability threshold $\gamma = 0.8$ and the perturbation norm $\eta = 10$. The value of $\gamma$ is chosen based on the visual clarity of salient patterns in the final perturbations. Higher $\gamma$ tends to generate clearer patterns at a higher computational cost. We keep $\eta$ comparable to the existing techniques for adversarial perturbation generation [29, 1]. An NVIDIA Titan V GPU with 12 GB RAM is used.
+
+To compute a perturbation, we first let Algorithm 1 run to achieve the desired $\wp$. Then, we apply Algorithm 2 for refinement. Subsequently, Algorithm 1 is applied again, such that a refinement is carried out after every $50^{\text{th}}$ iteration, up to 300 iterations.
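+
+The overall schedule can be sketched as a small driver, where `run_estimation` and `run_refinement` are placeholders standing in for Algorithm 1 and Algorithm 2, respectively:
+
+```python
+def compute_perturbation(run_estimation, run_refinement, gamma=0.8):
+    """Hypothetical driver; the two callables encapsulate Algorithms 1 and 2."""
+    p = run_estimation(None, until=gamma)   # Algorithm 1 until the expected probability >= gamma
+    p = run_refinement(p)                   # first refinement with Algorithm 2
+    for it in range(1, 301):                # 300 further iterations of Algorithm 1
+        p = run_estimation(p, steps=1)
+        if it % 50 == 0:                    # refine after every 50th iteration
+            p = run_refinement(p)
+    return p
+```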
+
+Salient visual features: Model-gradient based adversarial perturbations are known to generate noise-like patterns [45, 16, 30] or motifs that seem meaningless to humans [29, 1]. However, by accumulating such perturbations under a slightly different objective, our attack is able to discover visually salient features of the target labels in those signals. In Fig. 3, we show representative examples of the perturbations computed by our algorithm for the VGG-16 model. Notice the clear geometric patterns that humans can associate with the target class labels. These patterns emerge without assuming any priors on the perturbation, the distribution samples (in $\overline{D}$), or the model itself.
+
+Firstly, from the figure, it is apparent that our technique can (qualitatively) explain a model in terms of 'what human-meaningful semantics are attached to its output neurons?'. This is useful, e.g., in settings where an unknown model is available and one must discover the labels of its output layer. Secondly, the perturbations explain 'what geometric patterns are perceived as the discriminative features of a given class by the classifier?'. Interestingly, these patterns align very well with human perception, and we compute them with the same tool (i.e. gradient-based perturbation) that is used to promote the argument of misalignment between human perception and deep representation [14, 46].
+
+Diversity of the salient patterns: We provide two representative perturbations for each target class in Fig. 3, where the difference in the perturbations is caused by selecting different seeds. Besides ascertaining the effective role of the seed in our algorithm, the diverse patterns, which remain visually salient, affirm that the model has learned general (human-meaningful) semantics for the target label. We emphasize that we ignored the target class while creating $\overline{D}$ for Fig. 3. Hence, the patterns are completely based on the visual model, which also highlights the potential of standard classifiers for the task of diverse image generation.
+
+Region specific semantics: Intrigued by the spatial distribution of the salient patterns in perturbations, we also explore the possibility of extracting model semantics associated with specific regions in the image space. This is possible by increasing the correlation between the pixels in those regions across the distribution samples that are input to our algorithm. This leads the gradients of the individual samples to point in the same direction for the specified regions, which reinforces the signal in those regions through back-projection, while weak signals in the other regions get suppressed by refinement. We emulate this scenario by replacing the image regions of interest with $64 \times 64$ patches for all the samples, where all patch pixels are set to the mean pixel value of the sampled images.
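+
+A sketch of this region-specific setup (illustrative; the per-channel mean is our reading of 'mean pixel value', and the function name is ours):
+
+```python
+import torch
+
+def occlude_region(samples, top, left, size=64):
+    """Paste a patch filled with the mean pixel value of the samples at (top, left)."""
+    mean_pixel = samples.mean(dim=(0, 2, 3), keepdim=True)   # per-channel mean over all samples
+    samples = samples.clone()
+    samples[:, :, top:top + size, left:left + size] = mean_pixel
+    return samples
+```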
+
+In Fig. 4, we show the perturbations for a representative
+
+Figure 3. Visually salient features of the target class (label given) emerge by accumulating the gradient based perturbations with explanation objective. The shown perturbations are computed for VGG-16 with ImageNet samples, excluding the target class samples. Perturbations for the same target are generated with different seeds for variety.
+
+Figure 4. Obstructing samples with a uniform patch (seed shown) lets the algorithm focus on/near the pre-specified region for extracting model semantics. Perturbations for 'Centipede' are computed for VGG-16.
+
+
+Figure 5. Salient pattern emergence is a general phenomenon. Patterns for two random labels are shown for different models.
+
+label ('Centipede') with three random choices of regions. A region of interest is depicted with the seed only. As can be observed, our attack is able to focus much better near the specified regions. Interestingly, the model is generally able to associate similar discriminative features of the target label to different regions in a coherent manner, further strengthening the notion of alignment between human perception and the deep representation.
+
+Patterns for different models: Above, we mainly presented the patterns for VGG for their visual clarity after resizing. However, emergence of the salient visual features
+
+in our perturbations is a general phenomenon for deep visual classifiers. In Fig. 5, we also show representative perturbations for ResNet-50 [17], DenseNet-121 [18] and MobileNet-V2 [40] for two random classes used in our experiments. The perturbations clearly depict the features of the target labels for all these models.
+
+To demonstrate perceptual alignment of deep representation over different models, we classify the 'perturbations' generated for one model with other models. High confidence of multiple models for the intended target label indicates that the extracted patterns are commonly seen as discriminative visual features of the target class.
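+
+A hypothetical version of this cross-model check, assuming torchvision's pretrained classifiers and a stand-in perturbation and target index:
+
+```python
+import torch
+from torchvision import models
+
+perturbation = torch.rand(1, 3, 224, 224)   # stand-in for a perturbation computed for one model
+target_idx = 294                             # stand-in ImageNet target label index
+for build in (models.resnet50, models.densenet121, models.mobilenet_v2):
+    net = build(weights="DEFAULT").eval()
+    with torch.no_grad():
+        conf = torch.softmax(net(perturbation), dim=1)[0, target_idx].item()
+    print(type(net).__name__, f"confidence for the target label: {conf:.3f}")
+```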
+
+# 5.2. Leverage in low-level tasks
+
+Santurkar et al. [41] recently showed that adversarially robust deep classifiers can be exploited beyond classification. They demonstrated image generation, inpainting and image manipulation, etc., by attacking a robust ResNet with the PGD attack [28]. The key notion exploited by Santurkar et al. is the presence of salient visual features in the 'adversarial' perturbations computed for the 'robust' classifiers. Relating this concept to our findings, their study indicates an excellent test bed for our attack, where successful results not only ascertain the implicit model explaining nature of our perturbations, but also improve the state-of-
+
+
+Figure 6. Image generation by attacking adversarially robust ResNet. The generated images are adversarial examples of the shown seeds. The intended class labels are mentioned. The setup of Santurkar et al. [41] is followed.
+
+the-art for the newly found place of the robust classifiers in the broader Machine Learning context.
+
+To demonstrate improvements in the results, we follow [41] closely in terms of the used classifier, perturbation budget and the underlying evaluation procedure. In the experiments to follow, we create the set $\overline{D}$ by sampling a multivariate Gaussian $\mathcal{N}(\boldsymbol{\mu}_I,\Sigma_I)$, where $\boldsymbol{\mu}_I\in \mathbb{R}^m$ is the mean value of an image set $I_{i = 1,\dots ,n}\sim \mathcal{T}^{\mathrm{target}}$. Here, $\mathcal{T}^{\mathrm{target}}$ is the distribution of a target class's images, emulated by ImageNet. We compute $\Sigma_I = \mathbb{E}[(\boldsymbol{I}_i - \boldsymbol{\mu}_I)^\top (\boldsymbol{I}_i - \boldsymbol{\mu}_I)]$. For computational reasons, the multivariate Gaussian is computed on $4\times$ downsampled versions of the original images. 256 random distribution samples are later upsampled to match the network input and used to create the set $\overline{D}$. In the following experiments, where the image processing tasks are performed holistically, we do not use the refinement step.
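+
+A rough sketch of this sampling procedure (assumed array shapes and helper names; the dense covariance makes this expensive and the snippet is only meant to illustrate the construction):
+
+```python
+import numpy as np
+from skimage.transform import resize
+
+def build_sample_set(target_images, input_hw, n_samples=256, factor=4):
+    """target_images: (n, H, W, 3) array in [0, 1]; input_hw: (H_in, W_in) network input size."""
+    h, w = target_images.shape[1] // factor, target_images.shape[2] // factor
+    small = np.stack([resize(im, (h, w, 3)) for im in target_images])   # 4x downsampling
+    flat = small.reshape(len(small), -1)
+    mu, cov = flat.mean(axis=0), np.cov(flat, rowvar=False)             # Gaussian parameters
+    draws = np.random.multivariate_normal(mu, cov, size=n_samples)
+    draws = draws.reshape((n_samples, h, w, 3)).clip(0, 1)
+    return np.stack([resize(s, input_hw + (3,)) for s in draws])        # upsample to input size
+```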
+
+# 5.2.1 Image Generation
+
+In Fig. 6, we show representative examples of images generated by our technique and compare them with Santurkar et al. [41]. We use the author-provided code for [41] and strictly follow the guidelines to achieve the best results of their method. In the context of adversarial attacks, the generated images are adversarial examples of the seed images. We show two images per class, generated with the shown seeds for the mentioned target label. Our technique is clearly able to generate more refined and coherent images. Notice the details in the backgrounds as well. Theoretically, Santurkar et al. [41] used the strongest gradient-based iterative adversarial attack [28] in their method. Hence, our improved performance can be easily attributed to the model explaining nature of the perturbations computed by the proposed attack. We use the same perturbation budget $\eta = 40$ for both techniques.
+
+The variety in the images generated with different seeds, their textural details and clear semantic coherence strengthen the broader idea that robust classifiers are capable of more than simple classification [41] - a venue worth exploring in future research.
+
+# 5.2.2 Inpainting
+
+Image inpainting [4] restores information in large corrupt regions of images while upholding the perceptual consistency. We demonstrate improved inpainting performance with robust classifiers using the proposed attack.
+
+For this task, we treat the corrupted image as the seed, where its corrupt region is identified by a binary mask $F \in \{0,1\}^m$. Let $\mathfrak{S}$ contain the seed and samples from our above-mentioned multivariate Gaussian distribution $\mathcal{N}(.)$. Keeping the robust classifier parameters fixed, we minimize the following loss:
+
+$$
+\mathcal{L}(\boldsymbol{p}) = \mathbb{E}\left[\mathcal{J}\left(\mathfrak{S}_p, \ell_{\text{target}}\right) + \beta\,\left(\boldsymbol{p} \odot (1 - F)\right)\right], \tag{4}
+$$
+
+where $\mathfrak{S}_p = \mathfrak{S} \ominus p$ , $\mathcal{J}(.)$ is the cross-entropy loss of the classifier and $\beta = 10$ is an empirically chosen scaling factor. The designed loss function allows the perturbation signal to grow freely for the corrupt region while restricting it in the other regions.
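+
+Our reading of this objective can be sketched as follows; the exact penalty applied to the perturbation outside the corrupt region is not spelled out in (4), so an absolute-value mean is assumed here, and all names are illustrative:
+
+```python
+import torch
+import torch.nn.functional as F
+
+def inpainting_loss(model, samples, p, corrupt_mask, target_idx, beta=10.0):
+    """corrupt_mask (F in the text) is 1 inside the corrupt region, 0 elsewhere."""
+    perturbed = (samples + p).clamp(0, 1)
+    labels = torch.full((samples.shape[0],), target_idx, dtype=torch.long)
+    ce = F.cross_entropy(model(perturbed), labels)            # target-label cross-entropy
+    penalty = (p * (1 - corrupt_mask)).abs().mean()           # restrict p outside the mask
+    return ce + beta * penalty
+```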
+
+In Fig. 7, we show representative examples of corrupt images restored with our technique and Santurkar et al. [41]
+
+
+Figure 7. Representative inpainting results. The masked image is the seed. Both approaches restore images using the same robust model provided by Santurkar et al. [41] and the same perturbation budget.
+
+Figure 8. Representative examples of interactive image manipulation. The seed is a raw image required to be manipulated into an image of the target category. Both techniques use the same robust classifier with perturbation budget 60, optimized for the images.
+
+using the robust ResNet provided by the authors. We use the same perturbation budget $\eta = 21$ for both techniques. The restoration quality of our technique is visibly better. The shown images and mask placements are randomly selected.
+
+# 5.2.3 Interactive Image Manipulation
+
+An interesting recent application of deep networks, especially GANs [10], is to turn crude sketches into realistic images. Santurkar et al. [41] demonstrated the possibility of such interactive image manipulation by attacking/fooling robust classifiers. We advance this direction by demonstrating that our alternate objective of model explanation is more suitable for the problem.
+
+Using the raw sketch as the seed and creating the set $\overline{\mathcal{D}}$ with the multivariate Gaussian, we manipulate the seed similarly to image generation. However, this time we also apply the refinement procedure. Representative results of our attack are shown in Fig. 8. Compared to [41], images generated with our technique appear much more realistic. Such a refined manipulation of crude sketches with a classifier affirms the ability of our attack to highlight human-meaningful visual patterns learned by the classifier.
+
+The three low-level image processing tasks discussed above not only demonstrate the utility of perturbations beyond model fooling (in general), but also ascertain that our attack is a positive step forward in that direction.
+
+# 6. Conclusion
+
+We present the first attack on deep learning that has an objective of explaining the model instead of fooling it. To compute the perturbation, our attack performs a stochastic gradient search on the cost surface of the model to increase the log-probability of a 'distribution' of images to be classified as a particular target. By iterative back-projection of the gradients and refinement with adaptive attention, our attack finds geometric patterns in the perturbations that are deemed salient by the classifier. We find that these patterns align well with the human perception, which weakens the argument of misalignment between human perception and deep representation - in the context of adversarial perturbations. Besides demonstrating perturbations with visually salient features for multiple state-of-the-art classifiers, we also perform low-level image manipulation with our technique using robust classifiers. Realistic image generation, inpainting and interactive image manipulation ascertain the model explaining nature of our attack, and advance the state-of-the-art in these newly found classifier utilities.
+
+Acknowledgment This research was supported by ARC Discovery Grant DP190102443, DP150100294 and DP150104251. The Titan V used in our experiments was donated by NVIDIA corporation.
+
+# References
+
+[1] Naveed Akhtar, Jian Liu, and Ajmal Mian. Defense against universal adversarial perturbations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3389-3398, 2018. 1, 2, 5
+[2] Naveed Akhtar and Ajmal Mian. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 6:14410-14430, 2018. 1, 2
+[3] Anish Athalye and Nicholas Carlini. On the robustness of the cvpr 2018 white-box adversarial example defenses. arXiv preprint arXiv:1804.03286, 2018. 2
+[4] Marcelo Bertalmio, Guillermo Sapiro, Vicent Caselles, and Coloma Ballester. Image inpainting. In Proceedings of the 27th annual conference on Computer graphics and interactive techniques, pages 417-424. ACM Press/Addison-Wesley Publishing Co., 2000. 7
+[5] Tom B Brown, Dandelion Mane, Aurko Roy, Martin Abadi, and Justin Gilmer. Adversarial patch. arXiv preprint arXiv:1712.09665, 2017. 2
+[6] Nicholas Carlini and David Wagner. Defensive distillation is not robust to adversarial examples. arXiv preprint arXiv:1607.04311, 2016. 2
+[7] Nicholas Carlini and David Wagner. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pages 3-14. ACM, 2017. 2
+[8] Nicholas Carlini and David Wagner. MagNet and "Efficient Defenses Against Adversarial Attacks" are not robust to adversarial examples. arXiv preprint arXiv:1711.08478, 2017. 2
+[9] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European conference on computer vision (ECCV), pages 801-818, 2018. 1
+[10] Wengling Chen and James Hays. Sketchygan: Towards diverse and realistic sketch to image synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9416-9425, 2018. 8
+[11] Francesco Croce and Matthias Hein. Sparse and imperceivable adversarial attacks. arXiv preprint arXiv:1909.05040, 2019. 1, 2
+[12] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248-255. IEEE, 2009. 1, 5
+[13] Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. Boosting adversarial attacks with momentum. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 9185-9193, 2018. 1, 2
+[14] Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, and Aleksander Madry. Learning perceptually-aligned representations via adversarial robustness. arXiv preprint arXiv:1906.00945, 2019. 2, 3, 5
+[15] Aditya Ganeshan and R Venkatesh Babu. Fda: Feature disruptive attack. In Proceedings of the IEEE International Conference on Computer Vision, pages 8069-8079, 2019. 2
+
+[16] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014. 1, 2, 5
+[17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 6
+[18] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700-4708, 2017. 6
+[19] Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, and Aleksander Madry. Adversarial examples are not bugs, they are features. arXiv preprint arXiv:1905.02175, 2019. 1
+[20] Xiaojun Jia, Xingxing Wei, Xiaochun Cao, and Hassan Foroosh. Comdefend: An efficient image compression model to defend adversarial examples. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6084-6092, 2019. 2
+[21] Robert Keys. Cubic convolution interpolation for digital image processing. IEEE transactions on acoustics, speech, and signal processing, 29(6):1153-1160, 1981. 5
+[22] Valentin Khrulkov and Ivan Oseledets. Art of singular vectors and universal adversarial perturbations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8562-8570, 2018. 2
+[23] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 4
+[24] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097-1105, 2012. 1
+[25] Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016. 1, 2
+[26] Jiayang Liu, Weiming Zhang, Yiwei Zhang, Dongdong Hou, Yujia Liu, Hongyue Zha, and Nenghai Yu. Detection based defense against adversarial examples from the steganalysis point of view. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4825-4834, 2019. 1, 2
+[27] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3431-3440, 2015. 1
+[28] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017. 3, 6, 7
+[29] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Universal adversarial perturbations. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1765–1773, 2017. 2, 3, 5
+
+[30] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2574–2582, 2016. 1, 2, 5
+[31] Konda Reddy Mopuri, Aditya Ganeshan, and Venkatesh Babu Radhakrishnan. Generalizable data-free objective for crafting universal adversarial perturbations. IEEE transactions on pattern analysis and machine intelligence, 2018. 2
+[32] Nobuyuki Otsu. A threshold selection method from gray-level histograms. IEEE transactions on systems, man, and cybernetics, 9(1):62-66, 1979. 5
+[33] Omid Poursaeed, Isay Katsman, Bicheng Gao, and Serge Belongie. Generative adversarial perturbations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4422-4431, 2018. 2
+[34] Aaditya Prakash, Nick Moran, Solomon Garber, Antonella DiLillo, and James Storer. Deflecting adversarial attacks with pixel deflection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8571-8580, 2018. 1, 2
+[35] Yuxian Qiu, Jingwen Leng, Cong Guo, Quan Chen, Chao Li, Minyi Guo, and Yuhao Zhu. Adversarial defense through network profiling based path extraction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4777-4786, 2019. 2
+[36] Edward Raff, Jared Sylvester, Steven Forsyth, and Mark McLean. Barrage of random transforms for adversarially robust defense. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6528-6537, 2019. 1, 2
+[37] Joseph Redmon and Ali Farhadi. Yolo9000: better, faster, stronger. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7263-7271, 2017. 1
+[38] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, pages 91-99, 2015. 1
+[39] Jérôme Rony, Luiz G Hafemann, Luiz S Oliveira, Ismail Ben Ayed, Robert Sabourin, and Eric Granger. Decoupling direction and norm for efficient gradient-based L2 adversarial attacks and defenses. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4322-4330, 2019. 2
+[40] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zh-moginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4510-4520, 2018. 6
+
+[41] Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Andrew Ilyas, Logan Engstrom, and Aleksander Madry. Computer vision with a single (robust) classifier. arXiv preprint arXiv:1906.09453, 2019. 1, 2, 6, 7, 8
+[42] Yucheng Shi, Siyu Wang, and Yahong Han. Curls & whey: Boosting black-box adversarial attacks. arXiv preprint arXiv:1904.01160, 2019. 1, 2
+[43] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 1
+[44] Bo Sun, Nian-hsuan Tsai, Fangchen Liu, Ronald Yu, and Hao Su. Adversarial defense by stratified convolutional sparse coding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 11447-11456, 2019. 2
+[45] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013. 1, 2, 4, 5
+[46] Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. In International Conference on Learning Representations, 2019. 3, 5
+[47] Walt Woods, Jack Chen, and Christof Teuscher. Reliable classification explanations via adversarial attacks on robust networks. arXiv preprint arXiv:1906.02896, 2019. 3
+[48] Lei Wu, Zhanxing Zhu, Cheng Tai, et al. Understanding and enhancing the transferability of adversarial examples. arXiv preprint arXiv:1802.09707, 2018. 2
+[49] Qi Wu, Chunhua Shen, Peng Wang, Anthony Dick, and Anton van den Hengel. Image captioning and visual question answering based on attributes and external knowledge. IEEE transactions on pattern analysis and machine intelligence, 40(6):1367-1381, 2017. 1
+[50] Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan L Yuille, and Kaiming He. Feature denoising for improving adversarial robustness. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 501-509, 2019. 1, 2
+[51] Cihang Xie, Zhishuai Zhang, Yuyin Zhou, Song Bai, Jianyu Wang, Zhou Ren, and Alan L Yuille. Improving transferability of adversarial examples with input diversity. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2730-2739, 2019. 2
+[52] Tianhang Zheng, Changyou Chen, and Kui Ren. Distributionally adversarial attack. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 2253-2260, 2019. 2
\ No newline at end of file
diff --git a/attacktoexplaindeeprepresentation/images.zip b/attacktoexplaindeeprepresentation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..847fd2f1c00a3b5dc114958deba650e945b5a3df
--- /dev/null
+++ b/attacktoexplaindeeprepresentation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:78444a7c8d8ef76f9aec9a18ddf4f6d070a9b02f3763ba33340ba3001f1d2846
+size 882479
diff --git a/attacktoexplaindeeprepresentation/layout.json b/attacktoexplaindeeprepresentation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..e45059ce8a25896b02b069adbcf2ed5a800c259e
--- /dev/null
+++ b/attacktoexplaindeeprepresentation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1e4502ee64f51ced8d03b1dc5800c3625a0abd61c92c4e377b69ab0567e6ed27
+size 453348
diff --git a/attentionconvolutionalbinaryneuraltreeforfinegrainedvisualcategorization/b3e242ba-9d0b-49c7-88c1-3a99c52a7a93_content_list.json b/attentionconvolutionalbinaryneuraltreeforfinegrainedvisualcategorization/b3e242ba-9d0b-49c7-88c1-3a99c52a7a93_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..533a3c15f3024743bf93af43c3267646b5b74846
--- /dev/null
+++ b/attentionconvolutionalbinaryneuraltreeforfinegrainedvisualcategorization/b3e242ba-9d0b-49c7-88c1-3a99c52a7a93_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eb74b72688175b1e9045bef35b7f2d89c79a5adc58d9f09c742faf0254c93075
+size 76470
diff --git a/attentionconvolutionalbinaryneuraltreeforfinegrainedvisualcategorization/b3e242ba-9d0b-49c7-88c1-3a99c52a7a93_model.json b/attentionconvolutionalbinaryneuraltreeforfinegrainedvisualcategorization/b3e242ba-9d0b-49c7-88c1-3a99c52a7a93_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..5702fc2db1e6f4b93f62ba63a7c440525073d1b5
--- /dev/null
+++ b/attentionconvolutionalbinaryneuraltreeforfinegrainedvisualcategorization/b3e242ba-9d0b-49c7-88c1-3a99c52a7a93_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c6fd703fd0f50d670c2a5d2a1e4a912e130408194a6266cf8f296be449998fca
+size 94599
diff --git a/attentionconvolutionalbinaryneuraltreeforfinegrainedvisualcategorization/b3e242ba-9d0b-49c7-88c1-3a99c52a7a93_origin.pdf b/attentionconvolutionalbinaryneuraltreeforfinegrainedvisualcategorization/b3e242ba-9d0b-49c7-88c1-3a99c52a7a93_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..19cb0483ed9c56cba354d968995fa1329e3b1556
--- /dev/null
+++ b/attentionconvolutionalbinaryneuraltreeforfinegrainedvisualcategorization/b3e242ba-9d0b-49c7-88c1-3a99c52a7a93_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f813486205a7f2b90c6e578661f899a8e9ddf805bf5542c4ec0e8a804ef05a6f
+size 1304637
diff --git a/attentionconvolutionalbinaryneuraltreeforfinegrainedvisualcategorization/full.md b/attentionconvolutionalbinaryneuraltreeforfinegrainedvisualcategorization/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..459f5421d327c7ba2978df2c4345066bbd4c4650
--- /dev/null
+++ b/attentionconvolutionalbinaryneuraltreeforfinegrainedvisualcategorization/full.md
@@ -0,0 +1,305 @@
+# Attention Convolutional Binary Neural Tree for Fine-Grained Visual Categorization
+
+Ruyi Ji $^{1,2,*}$ , Longyin Wen $^{3,*}$ , Libo Zhang $^{1,\dagger}$ , Dawei Du $^{4}$ , Yanjun Wu $^{1}$ , Chen Zhao $^{1}$ , Xianglong Liu $^{5}$ , Feiyue Huang $^{6}$
+
+$^{1}$ State Key Laboratory of Computer Science, ISCAS, Beijing, China
+ $^{2}$ University of Chinese Academy of Sciences, Beijing, China
+ $^{3}$ JD Finance America Corporation, Mountain View, CA, USA
+ $^{4}$ University at Albany, State University of New York, Albany, NY, USA
+ $^{5}$ Beihang University, Beijing, China
+ $^{6}$ Tencent Youtu Lab, Beijing, China
+{ruyi2017, libero, yanjun, zhaochen}@iscas.ac.cn, longyin.wen@jd.com, cvdaviddo@gmail.com
+
+# Abstract
+
+Fine-grained visual categorization (FGVC) is an important but challenging task due to high intra-class variances and low inter-class variances caused by deformation, occlusion, illumination, etc. An attention convolutional binary neural tree is presented to address those problems for weakly supervised FGVC. Specifically, we incorporate convolutional operations along edges of the tree structure, and use the routing functions in each node to determine the root-to-leaf computational paths within the tree. The final decision is computed as the summation of the predictions from leaf nodes. The deep convolutional operations learn to capture the representations of objects, and the tree structure characterizes the coarse-to-fine hierarchical feature learning process. In addition, we use the attention transformer module to enforce the network to capture discriminative features. Several experiments on the CUB200-2011, Stanford Cars and Aircraft datasets demonstrate that our method performs favorably against the state-of-the-arts. Code can be found at https://isrc.iscas.ac.cn/gitlab/research/acnet.
+
+Figure 1: Exemplars of fine-grained visual categorization (Brewer_Blackbird, Boat Tailed Grackle, Bronzed_Cowbird, Fish_Crow). FGVC is challenging due to two reasons: (a) high intra-class variances: birds belonging to the same category usually present significantly different appearance, such as illumination variations (the 1st column), view-point changes (the 2nd column), cluttered background (the 3rd column) and occlusion (the 4th column); (b) low inter-class variances: birds in different columns belong to different categories, but share similar appearance in the same rows.
+
+# 1. Introduction
+
+Fine-Grained Visual Categorization (FGVC) aims to distinguish subordinate object categories, such as different species of birds [42, 52] and flowers [1]. The high intra-class and low inter-class visual variances caused by deformation, occlusion, and illumination make FGVC a highly challenging task.
+
+Recently, the FGVC task has been dominated by convolutional neural networks (CNNs) due to their strong classification performance. Some methods [29, 26] focus on extracting discriminative subtle parts for accurate results. However, it is difficult for a single CNN model to describe the differences between subordinate classes (see Figure 1). In [34], an object-part attention model is proposed for FGVC, which uses both object and part attentions to exploit the subtle and local differences that distinguish subcategories. It demonstrates the effectiveness of using multiple deep models concentrating on different object regions in FGVC.
+
+Inspired by [41], we design an attention convolutional binary neural tree architecture (ACNet) for weakly supervised FGVC. It incorporates convolutional operations along the edges of the tree structure, and uses the routing functions in each node to determine the root-to-leaf computational paths within the tree, as in deep neural networks. This design lets our method inherit the representation learning ability of deep convolutional models and the coarse-to-fine hierarchical feature learning process. In this way, different branches in the tree structure focus on different local object regions for classification. The final decision is computed as the summation of the predictions from all leaf nodes. Meanwhile, we use the attention transformer to enforce the tree network to capture discriminative features for accurate results. The negative log-likelihood loss is adopted to train the entire network in an end-to-end fashion by stochastic gradient descent with back-propagation.
+
+Notably, in contrast to the work in [41], which adaptively grows the tree structure during learning, our method uses a complete binary tree structure with a pre-defined depth and a soft decision scheme to learn discriminative features along each root-to-leaf path, which avoids pruning errors and reduces the training time. In addition, the attention transformer module is used to further help our network achieve better performance. Several experiments are conducted on the CUB-200-2011 [42], Stanford Cars [25], and Aircraft [32] datasets, demonstrating the favorable performance of the proposed method against the state-of-the-art methods. We also conduct an ablation study to comprehensively understand the influence of different components of the proposed method.
+
+The main contributions of this paper are summarized as follows. (1) We propose a new attention convolutional binary neural tree for FGVC. (2) The attention transformer is introduced to facilitate the coarse-to-fine hierarchical feature learning in the tree network. (3) Extensive experiments on three challenging datasets, i.e., CUB-200-2011, Stanford Cars, and Aircraft, show the effectiveness of our method.
+
+# 2. Related Works
+
+Deep supervised methods. Some algorithms [51, 31, 18, 50] use object annotations or even dense part/keypoint annotations to guide the training of deep CNN model for FGVC. Zhang et al.[51] propose to learn two detectors, i.e.,
+
+the whole object detector and the part detector, to predict the fine-grained categories based on the pose-normalized representation. Liu et al. [31] propose fully convolutional attention networks that glimpse local discriminative regions to adapt to different fine-grained domains. The method in [18] constructs the part-stacked CNN architecture, which explicitly explains the fine-grained recognition process by modeling subtle differences between object parts. In [50], the proposed network consists of detection and classification sub-networks. The detection sub-network is used to generate small semantic part candidates for detection, while the classification sub-network extracts features from the parts detected by the detection sub-network. However, these methods rely on labor-intensive part annotations, which limits their applications in real scenarios.
+
+Weakly supervised methods. To that end, more recent methods [52, 12, 38, 46] only require image-level annotations. Zheng et al. [52] introduce a multi-attention CNN model, where the part generation and feature learning processes reinforce each other for accurate results. Fu et al. [12] develop a recurrent attention module to recursively learn discriminative region attention and region-based feature representation at multiple scales in a mutually reinforced way. Recently, Sun et al. [38] regulate multiple object parts among different input images by using multiple attention region features of each input image. In [46], a bank of convolutional filters is learned to capture class-specific discriminative patches, through a novel asymmetric multi-stream architecture with convolutional filter supervision. However, the aforementioned methods merely integrate the attention mechanism into a single network, which limits their performance.
+
+Decision tree. The decision tree is an effective algorithm for classification tasks. It selects the appropriate directions based on the characteristics of features. Its inherent interpretability makes it a promising direction for gaining insight into the internal mechanisms of deep learning. Xiao [48] proposes the principle of a fully functioned neural graph and designs a neural decision tree model for the categorization task. Frosst and Hinton [11] develop a deep neural decision tree model to understand the decision mechanism for a particular test case in a learned network. Tanno et al. [41] propose adaptive neural trees that incorporate representation learning into the edges, routing functions, and leaf nodes of a decision tree. In our work, we integrate the decision tree with a neural network to implement sub-branch selection and representation learning simultaneously.
+
+Attention mechanism. The attention mechanism has played an important role in deep learning to mimic the human visual mechanism. In [49], attention is used to make sure the student model focuses on the discriminative regions as the teacher model does. In [21], a cascade attention mechanism is proposed to guide the different layers of a CNN and concatenate them to obtain a discriminative representation as the input of the final linear classifier. Hu et al. [16] apply the attention mechanism to channels and allocate different weights according to the contribution of each channel. The CBAM module in [47] combines spatial region attention with feature-map attention. In contrast to the aforementioned methods, we apply the attention mechanism on each branch of the tree architecture to seek the discriminative regions for classification.
+
+# 3. Attention Convolutional Binary Neural Tree
+
+Our ACNet model, which consists of four modules, i.e., the backbone network, branch routing, attention transformer, and label prediction modules (see Figure 2), aims to classify each object sample in $X$ into sub-categories, i.e., assign each sample in $X$ a category label in $Y$ . We define ACNet as a pair $(\mathbb{T},\mathbb{O})$ , where $\mathbb{T}$ defines the topology of the tree, and $\mathbb{O}$ denotes the set of operations along the edges of $\mathbb{T}$ . Notably, we use a full binary tree $\mathbb{T} = \{\mathcal{V},\mathcal{E}\}$ , where $\mathcal{V} = \{v_{1},\dots ,v_{n}\}$ is the set of nodes, $n$ is the total number of nodes, $\mathcal{E} = \{e_1,\dots ,e_k\}$ is the set of edges between nodes, and $k$ is the total number of edges. Since we use a full binary tree $\mathbb{T}$ , we have $n = 2^{h} - 1$ and $k = 2^{h} - 2$ , where $h$ is the height of $\mathbb{T}$ . Each node in $\mathbb{T}$ is formed by a routing module determining the path along which samples are sent, and attention transformers are used as the operations along the edges.
+
+Meanwhile, we use an asymmetrical architecture in the full binary tree $\mathbb{T}$ , i.e., two attention transformers are used on the left edge, and one attention transformer is used on the right edge. In this way, the network is able to capture features at different scales for accurate results. The detailed architecture of our ACNet model is described as follows.
+
+# 3.1. Architecture
+
+Backbone network module. Since the discriminative regions in fine-grained categories are highly localized [46], we need to use a relatively small receptive field of the extracted features by constraining the size and stride of the convolutional filters and pooling kernels. The truncated network is used as the backbone network module to extract features, which is pre-trained on the ILSVRC CLSLOC dataset [35]. Similar to [38], we use the input image size $448 \times 448$ instead of the default $224 \times 224$ . Notably, ACNet can also work on other pre-trained networks, such as ResNet [15] and Inception V2 [19]. In practice, we use VGG-16 [37] (retaining the layers from conv1_1 to conv4_3) and ResNet-50 [15] (retaining the layers from res_1 to res_4) networks as the backbone in this work.
+
+Branch routing module. As described above, we use the branch routing module to determine which child (i.e., left or right child) the samples would be sent to. Specifically, as shown in Figure 2(b), the $i$ -th routing module $\mathcal{R}_i^k(\cdot)$ at the
+
+$k$ -th layer uses one convolutional layer with kernel size $1 \times 1$ , followed by a global context block [4]. The global context block is an improvement of the simplified non-local (NL) block [44] and the Squeeze-Excitation (SE) block [16]: it shares the same implementation as the simplified NL block in the context modeling and fusion steps, and shares the transform step with the SE block. In this way, context information is integrated to better describe the objects. After that, we use global average pooling [27], element-wise square-root and L2 normalization [28], and a fully connected layer with the sigmoid activation function to produce a scalar value in $[0,1]$ indicating the probability of the sample being sent to the left or right sub-branch. Let $\phi_i^k(x_j)$ denote the output probability of the $j$ -th sample $x_j \in X$ being sent to the right sub-branch produced by the branch routing module $\mathcal{R}_i^k(x_j)$ , where $\phi_i^k(x_j) \in [0,1]$ , $i = 1, \dots, 2^{k-1}$ . Thus, the probability of the sample $x_j \in X$ being sent to the left sub-branch is $1 - \phi_i^k(x_j)$ . If the probability $\phi_i^k(x_j)$ is larger than 0.5, the right branch dominates the final decision; otherwise, the left branch dominates.
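+
+To make the routing computation concrete, the following is a minimal PyTorch sketch of a branch routing module as described above. It is only an illustration under stated assumptions: the global context block of [4] is approximated by a squeeze-excitation-style gate, and the channel sizes are placeholders.
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class BranchRouting(nn.Module):
+    """Sketch of a branch routing module: 1x1 conv, a context gate
+    (SE-style stand-in for the global context block), GAP, element-wise
+    square-root + L2 normalization, and a sigmoid FC layer producing the
+    probability phi of taking the right sub-branch."""
+    def __init__(self, in_channels, mid_channels=256, reduction=16):
+        super().__init__()
+        self.conv = nn.Conv2d(in_channels, mid_channels, kernel_size=1)
+        self.gate = nn.Sequential(                       # stand-in for the global context block
+            nn.AdaptiveAvgPool2d(1),
+            nn.Conv2d(mid_channels, mid_channels // reduction, 1),
+            nn.ReLU(inplace=True),
+            nn.Conv2d(mid_channels // reduction, mid_channels, 1),
+            nn.Sigmoid(),
+        )
+        self.fc = nn.Linear(mid_channels, 1)
+
+    def forward(self, x):
+        x = self.conv(x)
+        x = x * self.gate(x)                              # context / channel re-weighting
+        x = F.adaptive_avg_pool2d(x, 1).flatten(1)        # global average pooling
+        x = torch.sign(x) * torch.sqrt(x.abs() + 1e-12)   # element-wise square root
+        x = F.normalize(x, p=2, dim=1)                    # L2 normalization
+        return torch.sigmoid(self.fc(x))                  # phi in [0, 1]
+```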
+
+Attention transformer. The attention transformer module is used to enforce the network to capture discriminative features; see Figure 3. Given that the empirical receptive field is much smaller than the theoretical receptive field in deep networks [30], the discriminative representation should be formed with a larger receptive field in the newly added layers of our proposed tree structure. To this end, we integrate the Atrous Spatial Pyramid Pooling (ASPP) module [5] into the attention transformer. Specifically, the ASPP module provides multiple feature maps, each characterized by a different scale/receptive field, together with an attention module. Multi-scale feature maps are generated by four parallel dilated convolutions with different dilation rates, i.e., 1, 6, 12, and 18. Following the parallel dilated convolution layers, the concatenated feature maps are fused by one convolutional layer with kernel size $1 \times 1$ and stride 1. After the ASPP module, we insert an attention module, which generates a channel attention map of size $\mathbb{R}^{C \times 1 \times 1}$ using a batch normalization (BN) layer [19], a global average pooling (GAP) layer, a fully connected (FC) layer with ReLU activation, and a FC layer with the sigmoid function. In this way, the network is guided to focus on meaningful features for accurate results.
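+
+A minimal sketch of the attention transformer in the same PyTorch style follows. The number of channels and the reduction ratio of the attention gate are assumptions; the dilation rates (1, 6, 12, 18) and the BN-GAP-FC-ReLU-FC-sigmoid attention follow the description above.
+
+```python
+import torch
+import torch.nn as nn
+
+class AttentionTransformer(nn.Module):
+    """Sketch: ASPP (four parallel dilated 3x3 convs fused by a 1x1 conv)
+    followed by a channel-attention gate that produces a C x 1 x 1 map."""
+    def __init__(self, channels=512, reduction=16):
+        super().__init__()
+        self.branches = nn.ModuleList([
+            nn.Conv2d(channels, channels, 3, padding=r, dilation=r)
+            for r in (1, 6, 12, 18)
+        ])
+        self.fuse = nn.Conv2d(4 * channels, channels, kernel_size=1, stride=1)
+        self.bn = nn.BatchNorm2d(channels)
+        self.attn = nn.Sequential(
+            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
+            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
+            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
+        )
+
+    def forward(self, x):
+        x = self.fuse(torch.cat([b(x) for b in self.branches], dim=1))  # ASPP fusion
+        w = self.attn(self.bn(x)).unsqueeze(-1).unsqueeze(-1)           # channel attention map
+        return x * w
+```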
+
+Label prediction. For each leaf node in our ACNet model, we use the label prediction module $\mathcal{P}_i$ (i.e., $i = 1,\dots ,2^{h - 1}$ ) to predict the subordinate category of the object $x_{j}$ , see Figure 2. Let $r_i^k (x_j)$ be the accumulated probability of the object $x_{j}$ passing from the root node to the $i$ -th node at the $k$ -th layer. For example, if the root-to-node path on the tree is $\mathcal{R}_1^1,\mathcal{R}_1^2,\dots ,\mathcal{R}_1^k$ , i.e., the object $x_{j}$ is always sent to the left child, we have $r_1^k (x_j) = \prod_{l = 1}^{k}\phi_1^{l} (x_j)$ .
+
+
+
+
+Figure 2: The overview of our ACNet model, formed by (a) the backbone network module, (b) the branch routing module, (c) the attention transformer module, and (d) the label prediction module. We show an example image of Fish_Crow. Best viewed in color.
+
+
+
+
+Figure 3: The architecture of the attention transformer module.
+
+As shown in Figure 2, the label prediction module is formed by a batch normalization layer, a convolutional layer with kernel size $1 \times 1$ , a max-pooling layer, a sqrt and L2 normalization layer, and a fully connected layer. Then, the final prediction $\mathcal{C}(x_j)$ of the $j$ -th object $x_j$ is computed as the summation of all leaf predictions multiplied by the accumulated probabilities generated by the branch routing modules along the corresponding paths, i.e., $\mathcal{C}(x_j) = \sum_{i=1}^{2^{h-1}} \mathcal{P}_i(x_j) r_i^h(x_j)$ . We would like to emphasize that $\| \mathcal{C}(x_j) \|_1 = 1$ , i.e., the summation of the confidences of $x_j$ belonging to all subordinate classes equals 1:
+
+$$
+\| \mathcal{C}(x_{j}) \|_{1} = \Big\| \sum_{i = 1}^{2^{h - 1}} \mathcal{P}_{i}(x_{j}) \, r_{i}^{h}(x_{j}) \Big\|_{1} = 1, \tag{1}
+$$
+
+where $r_i^h (x_j)$ is the accumulated probability of the $i$ -th node at the leaf layer. We present a short description to prove that $\| \mathcal{C}(x_j)\| _1 = 1$ as follows.
+
+Proof. Let $r_i^k(\cdot)$ be the accumulated probability of the $i$ -th branch routing module $\mathcal{R}_i^k(\cdot)$ at the $k$ -th layer. Thus, the
+
+accumulated probabilities of the left and right children corresponding to $\mathcal{R}_i^k (\cdot)$ are $r_{2i - 1}^{k + 1}(\cdot)$ and $r_{2i}^{k + 1}(\cdot)$ , respectively. At first, we demonstrate that the summation of the accumulated probabilities $r_{2i - 1}^{k + 1}(\cdot)$ and $r_{2i}^{k + 1}(\cdot)$ is equal to the accumulated probability of their parent $r_i^k (x_j)$ . That is,
+
+$$
+\begin{aligned} r_{2i-1}^{k+1}(x_{j}) + r_{2i}^{k+1}(x_{j}) &= \phi_{2i-1}^{k+1}(x_{j}) \cdot r_{i}^{k}(x_{j}) + \phi_{2i}^{k+1}(x_{j}) \cdot r_{i}^{k}(x_{j}) \\ &= \phi_{2i-1}^{k+1}(x_{j}) \cdot r_{i}^{k}(x_{j}) + \left(1 - \phi_{2i-1}^{k+1}(x_{j})\right) \cdot r_{i}^{k}(x_{j}) \\ &= r_{i}^{k}(x_{j}). \end{aligned} \tag{2}
+$$
+
+Meanwhile, due to the full binary tree $\mathbb{T}$ in ACNet, we have $\sum_{i=1}^{2^{h-1}} r_i^h(x_j) = \sum_{i=1}^{2^{h-2}} \left( r_{2i-1}^h(x_j) + r_{2i}^h(x_j) \right)$ . We can further get $\sum_{i=1}^{2^{h-1}} r_i^h(x_j) = \sum_{i=1}^{2^{h-2}} r_i^{h-1}(x_j)$ . This process is carried out iteratively, and we have $\sum_{i=1}^{2^{h-1}} r_i^h(x_j) = \dots = r_1^1(x_j) = 1$ . In addition, since the category prediction $\mathcal{P}_i(x_j)$ is generated by the softmax layer (see Figure 2), we have $\| \mathcal{P}_i(x_j) \|_1 = 1$ . Thus,
+
+$$
+\begin{aligned} \| \mathcal{C}(x_{j}) \|_{1} &= \Big\| \sum_{i = 1}^{2^{h - 1}} \mathcal{P}_{i}(x_{j}) \, r_{i}^{h}(x_{j}) \Big\|_{1} \\ &= \sum_{i = 1}^{2^{h - 1}} \| \mathcal{P}_{i}(x_{j}) \|_{1} \, r_{i}^{h}(x_{j}) = 1. \end{aligned} \tag{3}
+$$
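+
+The soft decision scheme and the property proved above (the leaf weights $r_i^h$ sum to one, so $\mathcal{C}(x_j)$ remains a probability distribution) can be sketched as follows. This is an illustrative PyTorch snippet, not the released implementation; `routers`, `leaves`, and the per-node `features` container are hypothetical, and nodes are indexed in breadth-first order.
+
+```python
+import torch
+import torch.nn as nn
+
+class SoftTreePrediction(nn.Module):
+    """Each internal node produces phi (probability of its right child);
+    the accumulated probability r of every root-to-leaf path is the product
+    of the branch probabilities along it, and the final prediction is the
+    r-weighted sum of the leaf softmax outputs (Eqs. (1)-(3))."""
+    def __init__(self, routers, leaves):
+        super().__init__()
+        self.routers = nn.ModuleList(routers)   # 2^(h-1) - 1 internal nodes, breadth-first
+        self.leaves = nn.ModuleList(leaves)     # 2^(h-1) leaf predictors
+
+    def forward(self, features):
+        # features[node] holds the feature map reaching that node (hypothetical container)
+        batch = features[0].size(0)
+        r = {0: torch.ones(batch, 1, device=features[0].device)}  # root reached with prob. 1
+        for node, router in enumerate(self.routers):
+            phi = router(features[node])               # P(go right), shape (batch, 1)
+            r[2 * node + 1] = r[node] * (1.0 - phi)    # left child
+            r[2 * node + 2] = r[node] * phi            # right child
+        first_leaf = len(self.routers)
+        preds = [leaf(features[first_leaf + i]).softmax(dim=1) * r[first_leaf + i]
+                 for i, leaf in enumerate(self.leaves)]
+        return torch.stack(preds, dim=0).sum(dim=0)    # C(x_j); each row sums to 1
+```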
+
+
+
+As shown in Figure 2, when occlusion happens, ACNet can still localize some discriminative object parts and context information of the bird. Although high intra-class visual variances are common in FGVC, ACNet uses a coarse-to-fine hierarchical feature learning process to exploit discriminative features for classification. In this way, different branches in the tree structure focus on different fine-grained object regions for accurate results.
+
+# 3.2. Training
+
+Data augmentation. In the training phase, we use cropping and flipping operations to augment the data and construct a robust model that adapts to variations of objects. That is, we first rescale the original images such that their shorter side is 512 pixels. After that, we randomly crop patches of size $448 \times 448$ and randomly flip them to generate the training samples.
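+
+A minimal torchvision sketch of this augmentation pipeline is given below; the normalization statistics are the usual ImageNet values and are an assumption, while the rest follows the description above.
+
+```python
+from torchvision import transforms
+
+# Rescale the shorter side to 512, take a random 448x448 crop, and randomly flip.
+train_transform = transforms.Compose([
+    transforms.Resize(512),                 # shorter side -> 512 pixels
+    transforms.RandomCrop(448),
+    transforms.RandomHorizontalFlip(),
+    transforms.ToTensor(),
+    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # assumed ImageNet statistics
+                         std=[0.229, 0.224, 0.225]),
+])
+```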
+
+Loss function. The loss function for our ACNet is formed by two parts, i.e., the loss for the predictions of leaf nodes, and the loss for the final prediction, computed by the summation over all predictions from the leaf nodes. That is,
+
+$$
+\mathcal{L} = L\left(\mathcal{C}(x_{j}), y^{*}\right) + \sum_{i = 1}^{2^{h - 1}} L\left(\mathcal{P}_{i}(x_{j}), y^{*}\right), \tag{4}
+$$
+
+where $h$ is the height of the tree $\mathbb{T}$ , $L\big(\mathcal{C}(x_j),y^*\big)$ is the negative logarithmic likelihood loss of the final prediction $\mathcal{C}(x_j)$ and the ground truth label $y^{*}$ , and $L\big(\mathcal{P}_i(x_j),y^*\big)$ is the negative logarithmic likelihood loss of the $i$ -th leaf prediction and the ground truth label $y^{*}$ .
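+
+A small sketch of Eq. (4) follows, assuming the final prediction and the leaf predictions are already probability distributions (as guaranteed by Eq. (1)), so the negative log-likelihood is taken on their logarithms directly; the function name is hypothetical.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def acnet_loss(final_prob, leaf_probs, target, eps=1e-12):
+    """NLL of the final prediction plus the sum of the NLLs of every leaf prediction."""
+    loss = F.nll_loss(torch.log(final_prob + eps), target)
+    for p in leaf_probs:
+        loss = loss + F.nll_loss(torch.log(p + eps), target)
+    return loss
+```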
+
+**Optimization.** The backbone network in our ACNet method is pre-trained on the ImageNet dataset. Besides, the "xavier" method [14] is used to randomly initialize the parameters of the additional convolutional layers. The entire training process is formed by two stages.
+
+- For the first stage, the parameters in the truncated VGG-16 network are fixed, and other parameters are trained with 60 epochs. The batch size is set to 24 in training with the initial learning rate 1.0. The learning rate is gradually divided by 4 at the 10-th, 20-th, 30-th, and 40-th epochs.
+- In the second stage, we fine-tune the entire network for 200 epochs. We use the batch size 16 in training with the initial learning rate 0.001. The learning rate is gradually divided by 10 at the 30-th, 40-th, and 50-th epochs.
+
+We use the SGD algorithm to train the network with momentum 0.9, and weight decay 0.000005 in the first stage and 0.0005 in the second stage; a sketch of this two-stage schedule is given below.
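+
+The two-stage schedule can be expressed with standard PyTorch optimizers as sketched here; `model.backbone` and `model.head` are hypothetical parameter groups, and the remaining settings mirror those listed above.
+
+```python
+import torch
+
+def make_stage_optimizer(model, stage):
+    if stage == 1:   # backbone frozen, new layers trained for 60 epochs
+        for p in model.backbone.parameters():
+            p.requires_grad = False
+        opt = torch.optim.SGD(model.head.parameters(), lr=1.0,
+                              momentum=0.9, weight_decay=5e-6)
+        sched = torch.optim.lr_scheduler.MultiStepLR(
+            opt, milestones=[10, 20, 30, 40], gamma=0.25)   # divide by 4
+    else:            # fine-tune the whole network for 200 epochs
+        for p in model.parameters():
+            p.requires_grad = True
+        opt = torch.optim.SGD(model.parameters(), lr=0.001,
+                              momentum=0.9, weight_decay=5e-4)
+        sched = torch.optim.lr_scheduler.MultiStepLR(
+            opt, milestones=[30, 40, 50], gamma=0.1)        # divide by 10
+    return opt, sched
+```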
+
+# 4. Experiments
+
+Several experiments on three FGVC datasets, i.e., CUB-200-2011 [42], Stanford Cars [25], and Aircraft [32], are conducted to demonstrate the effectiveness of the proposed method.
+
+Table 1: The fine-grained classification results on the CUB-200-2011 dataset [42].
+
+| Method | Backbone | Annotation | Top-1 Acc. (%) |
+| --- | --- | --- | --- |
+| FCAN [31] | ResNet-50 | ✓ | 84.7 |
+| B-CNN [29] | VGG-16 | ✓ | 85.1 |
+| SPDA-CNN [50] | CaffeNet | ✓ | 85.1 |
+| PN-CNN [2] | Alex-Net | ✓ | 85.4 |
+| STN [20] | Inception | × | 84.1 |
+| B-CNN [29] | VGG-16 | × | 84.0 |
+| CBP [13] | VGG-16 | × | 84.0 |
+| LRBP [23] | VGG-16 | × | 84.2 |
+| FCAN [31] | ResNet-50 | × | 84.3 |
+| RA-CNN [12] | VGG-19 | × | 85.3 |
+| HIHCA [3] | VGG-16 | × | 85.3 |
+| Improved B-CNN [28] | VGG-16 | × | 85.8 |
+| BoostCNN [33] | VGG-16 | × | 86.2 |
+| KP [8] | VGG-16 | × | 86.2 |
+| MA-CNN [52] | VGG-19 | × | 86.5 |
+| MAMC [38] | ResNet-101 | × | 86.5 |
+| MaxEnt [10] | DenseNet-161 | × | 86.5 |
+| HBPASM [40] | ResNet-34 | × | 86.8 |
+| DCL [7] | VGG-16 | × | 86.9 |
+| KERL w/ HR [6] | VGG-16 | × | 87.0 |
+| TASN [53] | VGG-19 | × | 87.1 |
+| DFL-CNN [46] | ResNet-50 | × | 87.4 |
+| DCL [7] | ResNet-50 | × | 87.8 |
+| TASN [53] | ResNet-50 | × | 87.9 |
+| Ours | VGG-16 | × | 87.8 |
+| Ours | ResNet-50 | × | 88.1 |
+
+Our method is implemented in Caffe [22]. All models are trained on a workstation with a 3.26 GHz Intel processor, 32 GB memory, and an Nvidia V100 GPU.
+
+# 4.1. Evaluation on the CUB-200-2011 Dataset
+
+The Caltech-UCSD Birds dataset (CUB-200-2011) [42] consists of 11,788 annotated images from 200 subordinate categories, including 5,994 images for training and 5,794 images for testing. The fine-grained classification results are shown in Table 1. As shown in Table 1, the best supervised method, i.e., PN-CNN [2], which uses both object- and part-level annotations, produces $85.4\%$ top-1 accuracy on the CUB-200-2011 dataset. Without part-level annotation, MAMC [38] produces $86.5\%$ top-1 accuracy using two attention branches to learn discriminative features in different regions. KERL w/ HR [6] designs a single deep gated graph neural network to learn discriminative features, achieving better performance, i.e., $87.0\%$ top-1 accuracy.
+
+Table 2: The fine-grained classification results on the Stanford Cars dataset [25].
+
+| Method | Backbone | Annotation | Top-1 Acc. (%) |
+| --- | --- | --- | --- |
+| FCAN [31] | ResNet-50 | ✓ | 91.3 |
+| PA-CNN [24] | VGG-19 | ✓ | 92.6 |
+| FCAN [31] | ResNet-50 | × | 89.1 |
+| B-CNN [29] | VGG-16 | × | 90.6 |
+| LRBP [23] | VGG-16 | × | 90.9 |
+| HIHCA [3] | VGG-16 | × | 91.7 |
+| Improved B-CNN [28] | VGG-16 | × | 92.0 |
+| BoostCNN [33] | VGG-16 | × | 92.1 |
+| KP [8] | VGG-16 | × | 92.4 |
+| RA-CNN [12] | VGG-19 | × | 92.5 |
+| MA-CNN [52] | VGG-19 | × | 92.8 |
+| MAMC [38] | ResNet-101 | × | 93.0 |
+| MaxEnt [10] | DenseNet-161 | × | 93.0 |
+| WS-DAN [17] | Inception v3 | × | 93.0 |
+| DFL-CNN [46] | ResNet-50 | × | 93.1 |
+| HBPASM [40] | ResNet-34 | × | 93.8 |
+| TASN [53] | VGG-19 | × | 93.2 |
+| TASN [53] | ResNet-50 | × | 93.8 |
+| DCL [7] | VGG-16 | × | 94.1 |
+| DCL [7] | ResNet-50 | × | 94.5 |
+| Ours | VGG-16 | × | 94.3 |
+| Ours | ResNet-50 | × | 94.6 |
+
+Compared to the state-of-the-art weakly supervised methods [6, 10, 38, 46, 7, 53], our method achieves the best results, with $87.8\%$ and $88.1\%$ top-1 accuracy using different backbones. This is attributed to the designed attention transformer module and the coarse-to-fine hierarchical feature learning process.
+
+# 4.2. Evaluation on the Stanford Cars Dataset
+
+The Stanford Cars dataset [25] contains 16,185 images of 196 classes, split into 8,144 images for training and 8,041 images for testing. The subordinate categories are determined by the Make, Model, and Year of the cars. As shown in Table 2, previous methods using part-level annotations (i.e., FCAN [31] and PA-CNN [24]) produce less than $93.0\%$ top-1 accuracy. The recent weakly supervised method WS-DAN [17] employs the complex Inception V3 backbone [39] and designs an attention-guided data augmentation strategy to exploit discriminative object parts, achieving $93.0\%$ top-1 accuracy. Without using any fancy data augmentation strategy, our method achieves the best top-1 accuracy, i.e., $94.3\%$ with the VGG-16 backbone and $94.6\%$ with the ResNet-50 backbone.
+
+# 4.3. Evaluation on the Aircraft Dataset
+
+The Aircraft dataset [32] is a fine-grained dataset of 100 different aircraft variants with 10,000 annotated images.
+
+Table 3: The fine-grained classification results on the Aircraft dataset [32].
+
+| Method | Backbone | Annotation | Top-1 Acc. (%) |
+| --- | --- | --- | --- |
+| MG-CNN [43] | ResNet-50 | ✓ | 86.6 |
+| MDTP [45] | VGG-16 | ✓ | 88.4 |
+| RA-CNN [12] | VGG-19 | × | 88.2 |
+| MA-CNN [52] | VGG-19 | × | 89.9 |
+| B-CNN [29] | VGG-16 | × | 86.9 |
+| KP [8] | VGG-16 | × | 86.9 |
+| LRBP [23] | VGG-16 | × | 87.3 |
+| HIHCA [3] | VGG-16 | × | 88.3 |
+| Improved B-CNN [28] | VGG-16 | × | 88.5 |
+| BoostCNN [33] | VGG-16 | × | 88.5 |
+| PC-DenseNet-161 [9] | DenseNet-161 | × | 89.2 |
+| MaxEnt [10] | DenseNet-161 | × | 89.7 |
+| HBPASM [40] | ResNet-34 | × | 91.3 |
+| DFL-CNN [46] | ResNet-50 | × | 91.7 |
+| DCL [7] | VGG-16 | × | 91.2 |
+| DCL [7] | ResNet-50 | × | 93.0 |
+| Ours | VGG-16 | × | 91.5 |
+| Ours | ResNet-50 | × | 92.4 |
+
+The dataset is divided into two subsets, i.e., the training set with 6,667 images and the testing set with 3,333 images. Specifically, the category labels are determined by the Model, Variant, Family, and Manufacturer of the airplanes. The evaluation results are presented in Table 3. Our ACNet method outperforms most of the compared methods, especially those with the same VGG-16 backbone. Besides, our model performs on par with the state-of-the-art method DCL [7], i.e., $91.2\%$ vs. $91.5\%$ top-1 accuracy for the VGG-16 backbone and $93.0\%$ vs. $92.4\%$ top-1 accuracy for the ResNet-50 backbone. The operations along different root-to-leaf paths in our tree architecture $\mathbb{T}$ focus on exploiting discriminative features on different object regions, which complement each other for strong performance in FGVC.
+
+# 4.4. Ablation Study
+
+We study the influence of some important parameters and different components of ACNet. Notably, we employ the VGG-16 backbone in the experiment. The Grad-CAM method [36] is used to generate the heatmaps to visualize the responses of branch routing and leaf nodes.
+
+Effectiveness of the tree architecture $\mathbb{T}$ . To validate the effectiveness of the tree architecture design, we construct two variants of our ACNet method, i.e., VGG and w/ Tree. Specifically, we construct the VGG method by only using the VGG-16 backbone network for classification, and further integrate the tree architecture to form the w/ Tree method. The evaluation results are reported in Figure 6. We find that using the tree architecture significantly improves the accuracy, i.e., a $3.025\%$ improvement in top-1 accuracy, which demonstrates the effectiveness of the designed tree architecture $\mathbb{T}$ in our ACNet method.
+
+Figure 4: Visualization of the responses in different leaf nodes in our ACNet method. Each column presents a response heatmap of each leaf node.
+
+Height of the tree $\mathbb{T}$ . To explore the effect of the height of the tree $\mathbb{T}$ , we construct four variants with different tree heights on the CUB-200-2011 dataset [42]; see Table 4. Notably, the tree $\mathbb{T}$ degenerates to a single node when the height of the tree is set to 1, i.e., only the backbone VGG-16 network is used for classification. As shown in Table 4, our ACNet achieves the best performance (i.e., $87.8\%$ top-1 accuracy) when the height of the tree equals 3. If we set $h \leq 2$ , there is a limited number of parameters in our ACNet model, which is not enough to represent the significant variations of the subordinate categories. However, if we set $h = 4$ , too many parameters with a limited amount of training data cause overfitting of our ACNet model, resulting in a $2.3\%$ drop in top-1 accuracy. To verify this hypothesis, we visualize the responses of all leaf nodes in ACNet with height 4 in Figure 5. We find that some leaf nodes focus on almost the same regions (see the 3rd and 4th columns).
+
+Effectiveness of leaf nodes. To analyze the effectiveness of the individual leaf nodes, we calculate the accuracy of the individual leaf predictions with height 3. The accuracies of the four individual leaf nodes are $85.8\%$ , $86.2\%$ , $86.7\%$ , and $87.0\%$ on CUB-200-2011, respectively. This shows that all leaf nodes are informative and that fusing them produces more accurate results (i.e., $87.8\%$ ). As shown in Figure 4, we observe that different leaf nodes concentrate on different regions of the images.
+
+
+Figure 5: Responses of all leaf nodes in the tree with the height of 4.
+
+
+Figure 6: Effect of the various components in the proposed ACNet method on the CUB-200-2011 dataset [42].
+
+For example, the leaf node corresponding to the first column focuses more on the background region, the leaf node corresponding to the second column focuses more on the head region, and the other two leaf nodes are more interested in the patches of the wings and tail. The different leaf nodes help each other to construct a more effective model for accurate results.
+
+Asymmetrical architecture of the tree $\mathbb{T}$ . To explore the architecture design of $\mathbb{T}$ , we construct two variants, one using the symmetrical architecture and the other using the asymmetrical architecture, and set the height of the tree $\mathbb{T}$ to 3. The evaluation results are reported in Table 5. It can be seen that the proposed method produces $86.2\%$ top-1 accuracy using the symmetrical architecture. If we use the asymmetrical architecture, the top-1 accuracy is improved by $1.6\%$ to $87.8\%$ . We speculate that the asymmetrical architecture is able to fuse features with different receptive fields for better performance.
+
+Effectiveness of the attention transformer module. We construct a variant of the proposed ACNet model, "w/ Tree-Attn", to validate the effectiveness of the attention transformer module in Figure 6. Specifically, we add the attention block of the transformer module to the "w/ Tree" method to construct the "w/ Tree-Attn" method. As shown in Figure 6, the "w/ Tree-Attn" method performs consistently better than the "w/ Tree" method, producing higher top-1 accuracy with different numbers of channels, i.e., improving top-1 accuracy by $0.4\%$ on average, which demonstrates that the attention mechanism is effective for FGVC.
+
+Table 4: Effect of the height of the tree $\mathbb{T}$ on the CUB-200-2011 dataset [42].
+
+| Height of $\mathbb{T}$ | Top-1 Acc. (%) |
+| --- | --- |
+| 1 | 82.2 |
+| 2 | 86.0 |
+| 3 | 87.8 |
+| 4 | 85.5 |
+
+Table 5: Effect of the tree architecture on the CUB-200-2011 dataset [42].
+
+| Mode | Level | Top-1 Acc. (%) |
+| --- | --- | --- |
+| symmetry | 3 | 86.2 |
+| asymmetry | 3 | 87.8 |
+
+Table 6: Comparison between GMP and GAP on the CUB-200-2011 dataset [42].
+
+| Pooling | Top-1 Acc. (%) |
+| --- | --- |
+| GMP | 87.2 |
+| GAP | 87.8 |
+
+Figure 7: Visualization of the responses in different branch routing modules for exemplars (Bobolink, Chevrolet HHR SS 2010, Boeing) from CUB-200-2011 [42], Stanford Cars [25], and Aircraft [32].
+
+To further investigate the effect of the ASPP module in our proposed model, we also construct the "w/ Tree-ASPP" method, a variant of the proposed ACNet model, where the only difference is whether a single convolutional layer or the ASPP module is used in the attention transformer module. As illustrated in Figure 6, the attention transformer with the ASPP module achieves better accuracy than the one with only a single convolutional layer, which indicates that the ASPP module improves the overall performance through parallel dilated convolution layers with different dilation rates. Specifically, the "w/ Tree-ASPP" method improves top-1 accuracy by $0.5\%$ on average. We conclude that the multi-scale embedding and different dilated convolutions in the ASPP module help the proposed tree network obtain robust performance.
+
+Components in the branch routing module. We analyze the effectiveness of the global context block [4] in the branch routing module in Figure 6. Our ACNet method produces the best results with different numbers of channels in the branch routing module, while the top-1 accuracy drops by $0.275\%$ on average after removing the global context block. Meanwhile, we also study the effectiveness of the pooling strategy in the branch routing module in Table 6. We observe that using global max-pooling (GMP) instead of global average pooling (GAP) leads to a $0.6\%$ top-1 accuracy drop on the CUB-200-2011 dataset. We speculate that the GAP operation encourages the filters to focus on regions with high average responses instead of only the maximal ones, which integrates more context information for better performance.
+
+Coarse-to-fine hierarchical feature learning process. The branch routing modules focus on different semantic regions (e.g., different object parts) or context information (e.g., background) at different levels, e.g., $\mathcal{R}_1^1$ , $\mathcal{R}_1^2$ , and $\mathcal{R}_2^2$ in Figure 2. For the Bobolink example shown in Figure 7, the $\mathcal{R}_1^1$ module focuses on the whole bird region at level-1, while the $\mathcal{R}_1^2$ and $\mathcal{R}_2^2$ modules focus on the wing and head regions of the bird at level-2. As shown in the first row of Figure 5, the four leaf nodes focus on several fine-grained object parts at level-3, e.g., different parts of the head region. In this way, our ACNet uses the coarse-to-fine hierarchical feature learning process to exploit discriminative features for more accurate results. This phenomenon demonstrates that the hierarchical feature extraction process in the tree architecture $\mathbb{T}$ gradually enforces our model to focus on more discriminative detail regions of the object.
+
+# 5. Conclusion
+
+In this paper, we present an attention convolutional binary neural tree (ACNet) for weakly supervised FGVC. Specifically, different root-to-leaf paths in the tree network focus on different discriminative regions using the attention transformer inserted into the convolutional operations along the edges of the tree. The final decision is produced by summing the predictions from the leaf nodes, weighted by their path probabilities. The experiments conducted on several challenging datasets show the effectiveness of ACNet. We also discuss how we design the tree structure and why we use the coarse-to-fine hierarchical feature learning process for FGVC.
+
+# References
+
+[1] Anelia Angelova, Shenghuo Zhu, and Yuanqing Lin. Image segmentation for large-scale subcategory flower recognition. In WACV, pages 39-45, 2013. 1
+[2] Steve Branson, Grant Van Horn, Serge J. Belongie, and Pietro Perona. Bird species categorization using pose normalized deep convolutional nets. CoRR, abs/1406.2952, 2014. 5
+[3] Sijia Cai, Wangmeng Zuo, and Lei Zhang. Higher-order integration of hierarchical convolutional activations for fine-grained visual categorization. In ICCV, pages 511-520, 2017. 5, 6
+[4] Yue Cao, Jiarui Xu, Stephen Lin, Fangyun Wei, and Han Hu. Gcnet: Non-local networks meet squeeze-excitation networks and beyond. CoRR, abs/1904.11492, 2019. 3, 8
+[5] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. TPAMI, 40(4):834-848, 2018. 3
+[6] Tianshui Chen, Liang Lin, Riquan Chen, Yang Wu, and Xiaonan Luo. Knowledge-embedded representation learning for fine-grained image recognition. In IJCAI, pages 627-634, 2018. 5, 6
+[7] Yue Chen, Yalong Bai, Wei Zhang, and Tao Mei. Destruction and construction learning for fine-grained image recognition. In CVPR, pages 5157-5166, 2019. 5, 6
+[8] Yin Cui, Feng Zhou, Jiang Wang, Xiao Liu, Yuanqing Lin, and Serge J. Belongie. Kernel pooling for convolutional neural networks. In CVPR, pages 3049-3058, 2017. 5, 6
+[9] Abhimanyu Dubey, Otkrist Gupta, Pei Guo, Ramesh Raskar, Ryan Farrell, and Nikhil Naik. Pairwise confusion for fine-grained visual classification. In ECCV, pages 71-88, 2018. 6
+[10] Abhimanyu Dubey, Otkrist Gupta, Ramesh Raskar, and Nikhil Naik. Maximum-entropy fine grained classification. In NeurIPS, pages 635–645, 2018. 5, 6
+[11] Nicholas Frosst and Geoffrey E. Hinton. Distilling a neural network into a soft decision tree. CoRR, abs/1711.09784, 2017. 2
+[12] Jianlong Fu, Heliang Zheng, and Tao Mei. Look closer to see better: Recurrent attention convolutional neural network for fine-grained image recognition. In CVPR, pages 4476-4484, 2017. 2, 5, 6
+[13] Yang Gao, Oscar Beijbom, Ning Zhang, and Trevor Darrell. Compact bilinear pooling. CoRR, abs/1511.06062, 2015. 5
+[14] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, pages 249-256, 2010. 5
+[15] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770-778, 2016. 3
+[16] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In CVPR, pages 7132-7141, 2018. 3
+[17] Tao Hu, Honggang Qi, Qingming Huang, and Yan Lu. See better before looking closer: Weakly supervised data augmentation network for fine-grained visual classification. CoRR, abs/1901.09891, 2019. 6
+[18] Shaoli Huang, Zhe Xu, Dacheng Tao, and Ya Zhang. Part-stacked CNN for fine-grained visual categorization. In CVPR, pages 1173-1182, 2016. 2
+[19] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pages 448-456, 2015. 3
+[20] Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial transformer networks. In NeurIPS, pages 2017-2025, 2015. 5
+[21] Saumya Jetley, Nicholas A. Lord, Namhoon Lee, and Philip H. S. Torr. Learn to pay attention. CoRR, abs/1804.02391, 2018. 2
+[22] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross B. Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. CoRR, abs/1408.5093, 2014. 5
+[23] Shu Kong and Charless C. Fowlkes. Low-rank bilinear pooling for fine-grained classification. CoRR, abs/1611.05109, 2016. 5, 6
+[24] Jonathan Krause, Hailin Jin, Jianchao Yang, and Fei-Fei Li. Fine-grained recognition without part annotations. In CVPR, pages 5546–5555, 2015. 6
+[25] Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In ICCVW, pages 554–561, 2013. 2, 5, 6, 8
+[26] Haoming Lin, Yuyang Hu, Siping Chen, Jianhua Yao, and Ling Zhang. Fine-grained classification of cervical cells using morphological and appearance based convolutional neural networks. CoRR, abs/1810.06058, 2018. 2
+[27] Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. In ICLR, 2014. 3
+[28] Tsung-Yu Lin and Subhransu Maji. Improved bilinear pooling with cnns. In BMVC, 2017. 3, 5, 6
+[29] Tsung-Yu Lin, Aruni Roy Chowdhury, and Subhransu Maji. Bilinear CNN models for fine-grained visual recognition. In ICCV, pages 1449-1457, 2015. 2, 5, 6
+[30] Wei Liu, Andrew Rabinovich, and Alexander C. Berg. Parsenet: Looking wider to see better. CoRR, abs/1506.04579, 2015. 3
+[31] Xiao Liu, Tian Xia, Jiang Wang, and Yuanqing Lin. Fully convolutional attention localization networks: Efficient attention localization for fine-grained recognition. CoRR, abs/1603.06765, 2016. 2, 5, 6
+[32] Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew B. Blaschko, and Andrea Vedaldi. Fine-grained visual classification of aircraft. CoRR, abs/1306.5151, 2013. 2, 5, 6, 8
+[33] Mohammad Moghimi, Serge J. Belongie, Mohammad J. Saberian, Jian Yang, Nuno Vasconcelos, and Li-Jia Li. Boosted convolutional neural networks. In BMVC, 2016. 5, 6
+[34] Yuxin Peng, Xiangteng He, and Junjie Zhao. Object-part attention driven discriminative localization for fine-grained image classification. CoRR, abs/1704.01740, 2017. 2
+[35] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael S. Bernstein, Alexander C. Berg, and Fei-Fei Li. Imagenet large scale visual recognition challenge. IJCV, 115(3):211-252, 2015. 3
+[36] Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In ICCV, pages 618-626, 2017. 6
+[37] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014. 3
+[38] Ming Sun, Yuchen Yuan, Feng Zhou, and Errui Ding. Multi-attention multi-class constraint for fine-grained image recognition. In ECCV, pages 834–850, 2018. 2, 3, 5, 6
+[39] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In CVPR, pages 2818-2826, 2016. 6
+[40] Min Tan, Guijun Wang, Jian Zhou, Zhiyou Peng, and Meilian Zheng. Fine-grained classification via hierarchical bilinear pooling with aggregated slack mask. Access, 7:117944-117953, 2019. 5, 6
+[41] Ryutaro Tanno, Kai Arulkumaran, Daniel C. Alexander, Antonio Criminisi, and Aditya V. Nori. Adaptive neural trees. In ICML, pages 6166-6175, 2019. 2
+[42] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical report, California Institute of Technology, 2011. 1, 2, 5, 7, 8
+[43] Dequan Wang, Zhiqiang Shen, Jie Shao, Wei Zhang, Xiangyang Xue, and Zheng Zhang. Multiple granularity descriptors for fine-grained categorization. In ICCV, pages 2399-2406, 2015. 6
+[44] Xiaolong Wang, Ross B. Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In CVPR, pages 7794-7803, 2018. 3
+[45] Yaming Wang, Jonghyun Choi, Vlad I. Morariu, and Larry S. Davis. Mining discriminative triplets of patches for fine-grained classification. CoRR, abs/1605.01130, 2016. 6
+[46] Yaming Wang, Vlad I. Morariu, and Larry S. Davis. Learning a discriminative filter bank within a CNN for fine-grained recognition. In CVPR, pages 4148-4157, 2018. 2, 3, 5, 6
+[47] Sanghyun Woo, Jongchan Park, Joon-Young Lee, and In So Kweon. CBAM: convolutional block attention module. CoRR, abs/1807.06521, 2018. 3
+[48] Han Xiao. NDT: neural decision tree towards fully functioned neural graph. CoRR, abs/1712.05934, 2017. 2
+[49] Sergey Zagoruyko and Nikos Komodakis. Paying more attention to attention: Improving the performance of convolutional neural networks via attention transfer. CoRR, abs/1612.03928, 2016. 2
+[50] Han Zhang, Tao Xu, Mohamed Elhoseiny, Xiaolei Huang, Shaoting Zhang, Ahmed M. Elgammal, and Dimitris N. Metaxas. SPDA-CNN: unifying semantic part detection and abstraction for fine-grained recognition. In CVPR, pages 1143-1152, 2016. 2, 5
+[51] Ning Zhang, Jeff Donahue, Ross B. Girshick, and Trevor Darrell. Part-based r-cnns for fine-grained category detection. In ECCV, pages 834-849, 2014. 2
+
+[52] Heliang Zheng, Jianlong Fu, Tao Mei, and Jiebo Luo. Learning multi-attention convolutional neural network for fine-grained image recognition. In ICCV, pages 5219–5227, 2017. 1, 2, 5, 6
+[53] Heliang Zheng, Jianlong Fu, Zheng-Jun Zha, and Jiebo Luo. Looking for the devil in the details: Learning trilinear attention sampling network for fine-grained image recognition. CoRR, abs/1903.06150, 2019. 5, 6
\ No newline at end of file
diff --git a/attentionconvolutionalbinaryneuraltreeforfinegrainedvisualcategorization/images.zip b/attentionconvolutionalbinaryneuraltreeforfinegrainedvisualcategorization/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..83faed4772618098a1c213021799bb1abe5c0ba1
--- /dev/null
+++ b/attentionconvolutionalbinaryneuraltreeforfinegrainedvisualcategorization/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5f8e04f1ab4f372e3798eaf2229882eadc71d3806e485b9df06e18912375c98d
+size 620214
diff --git a/attentionconvolutionalbinaryneuraltreeforfinegrainedvisualcategorization/layout.json b/attentionconvolutionalbinaryneuraltreeforfinegrainedvisualcategorization/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..8b085f3eadccfbc5c4c8302ebba2f1dd9962d9ff
--- /dev/null
+++ b/attentionconvolutionalbinaryneuraltreeforfinegrainedvisualcategorization/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6243b57d8ce1c5e23d13c9e0165a58f9f31305e4f12ffbbbd303989b7e91e1af
+size 438152
diff --git a/attentionmechanismexploitstemporalcontextsrealtime3dhumanposereconstruction/bccae8f0-a909-47a3-914a-300321397878_content_list.json b/attentionmechanismexploitstemporalcontextsrealtime3dhumanposereconstruction/bccae8f0-a909-47a3-914a-300321397878_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..8e80fb4b35622f4e8a556a264af8d80f18a5d0ae
--- /dev/null
+++ b/attentionmechanismexploitstemporalcontextsrealtime3dhumanposereconstruction/bccae8f0-a909-47a3-914a-300321397878_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b5adc26f92317bb944f2996af256002f946dd80f910579acd3d38a0082bfdd69
+size 75951
diff --git a/attentionmechanismexploitstemporalcontextsrealtime3dhumanposereconstruction/bccae8f0-a909-47a3-914a-300321397878_model.json b/attentionmechanismexploitstemporalcontextsrealtime3dhumanposereconstruction/bccae8f0-a909-47a3-914a-300321397878_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..673bf1f424197628c2940fc8331d7ceec7dad49a
--- /dev/null
+++ b/attentionmechanismexploitstemporalcontextsrealtime3dhumanposereconstruction/bccae8f0-a909-47a3-914a-300321397878_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e4824b177147bdb1ac2e11b1ae863850ab0b493e8dcc1b92b9ea18cdba99d623
+size 92192
diff --git a/attentionmechanismexploitstemporalcontextsrealtime3dhumanposereconstruction/bccae8f0-a909-47a3-914a-300321397878_origin.pdf b/attentionmechanismexploitstemporalcontextsrealtime3dhumanposereconstruction/bccae8f0-a909-47a3-914a-300321397878_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..dde2bbd1142ea416824cdc77973314c93e8c8e3d
--- /dev/null
+++ b/attentionmechanismexploitstemporalcontextsrealtime3dhumanposereconstruction/bccae8f0-a909-47a3-914a-300321397878_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eae3dcd33770b9260feda5a3c1b596902beda158855f89a5121a42064c3d1c93
+size 2301835
diff --git a/attentionmechanismexploitstemporalcontextsrealtime3dhumanposereconstruction/full.md b/attentionmechanismexploitstemporalcontextsrealtime3dhumanposereconstruction/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..eead2e721eae30f8a24da9f55b2e024576986c3d
--- /dev/null
+++ b/attentionmechanismexploitstemporalcontextsrealtime3dhumanposereconstruction/full.md
@@ -0,0 +1,282 @@
+# Attention Mechanism Exploits Temporal Contexts: Real-time 3D Human Pose Reconstruction
+
+Ruixu Liu1, Ju Shen1, He Wang1, Chen Chen2, Sen-ching Cheung3, Vijayan Asari1
+
+1University of Dayton, 2University of North Carolina at Charlotte, 3University of Kentucky {liur05, jshen1, hwang6, vasari1}@udayton.edu, chen.chen@uncc.edu, sccheung@ieee.org
+
+# Abstract
+
+We propose a novel attention-based framework for 3D human pose estimation from a monocular video. Despite the general success of end-to-end deep learning paradigms, our approach is based on two key observations: (1) temporal incoherence and jitter often result from single-frame predictions; (2) the error rate can be remarkably reduced by increasing the receptive field in a video. Therefore, we design an attentional mechanism to adaptively identify significant frames and tensor outputs from each deep neural net layer, leading to a more accurate estimation. To achieve large temporal receptive fields, multi-scale dilated convolutions are employed to model long-range dependencies among frames. The architecture is straightforward to implement and can be flexibly adopted for real-time applications. Any off-the-shelf 2D pose estimation system, e.g. Mocap libraries, can be easily integrated in an ad-hoc fashion. We both quantitatively and qualitatively evaluate our method on various standard benchmark datasets (e.g. Human3.6M, HumanEva). Our method considerably outperforms all the state-of-the-art algorithms, with up to $8\%$ error reduction (average mean per joint position error: 34.7) compared to the best-reported results. Code is available at: https://github.com/lrxjason/Attention3DHumanPose
+
+# 1. Introduction
+
+Articulated 3D human pose estimation is a classic vision task enabling numerous applications from activity recognition to human-robot interaction. Traditional approaches often use specialized devices under highly controlled environments, such as multi-view capture [1], marker systems [26] and multi-modal sensing [32], which require a laborious setup process that limits their practical use. This work focuses on 3D pose estimation from an arbitrary monocular
+
+
+
+
+Figure 1: Comparison results: Top: side-by-side views of motion retargeting results on a 3D avatar; the source is from frame 857 of walking S9 and frame 475 of posing S9 in Human3.6M. Bottom: the average joint error comparison across all the frames of the video walking S9 [19, 35].
+
+video, which is challenging due to the high-dimensional variability and nonlinearity of human dynamics. Recent efforts using deep architectures have significantly advanced the state-of-the-art in 3D pose reasoning [41, 29]. The end-to-end learning process alleviates the need for tailor-made features or spatial constraints, thereby minimizing characteristic errors such as double-counting image evidence [15].
+
+In this work, we aim to utilize an attention model to further improve the accuracy of existing deep networks while preserving natural temporal coherence in videos. The concept of "attention" is to learn an optimized global alignment between pairwise data and has gained recent success when integrated with deep networks for processing mono/multi-modal data, such as text-to-speech matching [12] or neural machine translation [3]. To the best of our knowledge, our work is the first to use the attention mechanism in the domain of 3D pose estimation to selectively identify important tensor throughputs across neural net layers to reach an optimal inference.
+
+While powerful deep models for 3D pose prediction are emerging, from convolutional neural networks (CNNs) [34, 40, 22] to generative adversarial networks (GANs) [43, 10], many of these approaches focus on single-image inference, which is prone to jittery motion or inexact body configurations. To resolve this, temporal information is taken into account for better motion consistency. Existing works can be generally classified into two categories: direct 3D estimation and 2D-to-3D estimation [50, 9]. The former explores the possibility of jointly extracting both 2D and 3D poses in a holistic manner [34, 42], while the latter decouples the estimation into two steps: 2D body part detection and 3D correspondence inference [8, 5, 50]. We refer readers to the recent survey [27] for more details of their respective advantages.
+
+Our approach falls under the category of 2D-to-3D estimation with two key contributions: (a) a systematic approach to designing and training attention models for 3D pose estimation, and (b) learning implicit dependencies over large temporal receptive fields using multi-scale dilated convolutions. Experimental evaluations show that the resulting system can reach almost the same level of estimation accuracy under both causal and non-causal conditions, making it very attractive for real-time or consumer-level applications. To date, state-of-the-art results on video-based 2D-to-3D estimation have been achieved by a semi-supervised approach [35] or a layer-normalized LSTM approach [19]. Our model further improves the performance in both quantitative accuracy and qualitative evaluation. Figure 1 shows an example result from Human3.6M measured by the Mean Per Joint Position Error (MPJPE). To visually demonstrate the significance of the improvement, animation retargeting is applied to a 3D avatar by synthesizing the captured motion from the same frame of the Walking S9 and Posing S9 sequences. From the side-by-side comparisons, one can easily see the differences between the rendered results and the ground truth. Specifically, the shadows of the legs and the right hand are rendered differently due to erroneous pose estimates, while ours stays better aligned with the ground truth. The histogram on the bottom demonstrates the MPJPE error reduction on individual joints. More extensive evaluation can be found in our supplementary materials.
+
+# 2. Related Works
+
+Articulated pose estimation from a video has been studied for decades. Early works relied on graphical or restrictive models to account for the high degrees of freedom and dependencies among body parts, such as tree structures [2, 1, 44] or pictorial structures [2]. These methods often introduced a large number of parameters that required careful manual tuning using techniques such as piecewise approximation. With the rise of convolutional neural networks (CNNs) [34, 38], automated feature learning disentangles the dependencies among output variables and surpasses the performance of tailor-made solvers. For example, Tekin et al. trained an auto-encoder to project 3D joints to a high-dimensional space to enforce structural constraints [40]. Park et al. estimated the 3D pose by propagating 2D classification results to 3D pose regressors inside a neural network [33]. A kinematic object model was introduced to guarantee the geometric validity of the estimated body parts [49]. A comprehensive list of CNN-based systems can be found in the survey [38].
+
+Our contribution to this rich body of work lies in the introduction of an attention mechanism that can further improve the estimation accuracy of traditional convolutional networks. Prior work on attention in deep learning (DL) mostly addresses long short-term memory networks (LSTMs) [18]. For example, an LSTM encodes context within a sentence to form attention-based word representations that boost the word alignment between two sentences [36]. A similar attentional mechanism was successfully applied to improve the task of neural machine translation by jointly translating and aligning words [3]. Given the success in the language domain, we utilize the attention model for visual data by training a temporal convolutional network (TCN) [45].
+
+Compared to LSTMs, TCNs have the advantage of efficient memory usage without storing the large number of parameters introduced by the gates of LSTMs [31, 4]. In addition, TCNs enable parallel processing of the input frames instead of sequentially loading them into memory [19], where an estimation failure on one frame might affect the subsequent ones. Our work bears some similarity to the semi-supervised approach that uses a voting mechanism to select important frames [35], but ours has three distinct features. First, instead of selectively choosing a subset of frames for estimation, our approach systematically assigns a weight distribution over frames, all of which might contribute to the inference. Furthermore, our attention model enables automated weight assignment to all the network tensors and their internal channels, which significantly improves the accuracy. Last but not least, our dilation model aims at enhancing the temporal consistency with a large receptive field, while the semi-supervised approach focuses on speeding up the computation by reusing pre-processed frames [35].
+
+
+Figure 2: Left: An example of a 4-layer architecture for the attention-based temporal convolutional neural network. In this example, all the kernel sizes are 3. In practice, different layers can have different kernel sizes. Right: The detailed configuration of the Kernel Attention Module.
+
+
+
+# 3. The Attention-based Approach
+
+# 3.1. Network Design
+
+Figure 2 (left) depicts the overall architecture of our attention-based neural network. It takes a sequence of $n$ frames with 2D joint positions as the input and outputs the estimated 3D pose for the target frame as labeled. The framework involves two types of processing modules: the Temporal Attention module (indicated by the long green bars) and the Kernel Attention module (indicated by the gray squares). The kernel attention module can be further categorized into TCN Units (in dark grey) and Linear Projection Units (in light grey) [17]. Viewing the graphical model vertically from the top, one can notice that the two attention modules are arranged in an interlacing pattern, with a row of kernel attention modules situated right below a temporal attention module. We regard these two adjacent modules as one layer, which has the same notion as a neural net layer. According to their functionalities, the layers can be grouped into top layer, middle layers, and bottom layer. Note that the top layer only has TCN units for the kernel module, while the bottom layer only has a linear projection unit to deliver the result. It is also worth mentioning that the number of middle layers can vary depending on the receptive field setting, which will be discussed in section 5.3.
+
+# 3.2. Temporal Attention
+
+The goal of the temporal attention module is to provide a contribution metric for the output tensors. Each attention module produces a set of scalars, $\{\omega_0^{(l)},\omega_1^{(l)},\ldots\}$, weighing the significance of different tensors within a layer:
+
+$$
+\mathbf{W}^{(l)} \otimes \mathbf{T}^{(l)} \triangleq \left\{ \omega_0^{(l)} \otimes \mathcal{T}_0^{(l)}, \dots, \omega_{\lambda_l - 1}^{(l)} \otimes \mathcal{T}_{\lambda_l - 1}^{(l)} \right\} \tag{1}
+$$
+
+where $l$ and $\lambda_{l}$ indicate the layer index and the number of tensors output from the $l^{th}$ layer. We use $\mathcal{T}_u^{(l)}$ to denote the $u^{th}$ tensor output from the $l^{th}$ layer. The bold format $\mathbf{W} \otimes \mathbf{T}$ is a compacted vector representation used in Algorithm 1. Note that for the top layer, the input to the TCN units is just the 2D joints. The choice for computing their attention scores can be flexible. A commonly used scheme is the multilayer perceptron strategy for optimal feature set selection [37]. Empirically, we achieve desirable results by simply computing the normalized cross-correlation (ncc), which measures the positive cosine similarity between $\mathbf{P}_i$ and $\mathbf{P}_t$ on their 2D joint positions [46]:
+
+$$
+\mathbf{W}^{(0)} = \left[ \mathrm{ncc}(\mathbf{P}_0, \mathbf{P}_t), \dots, \mathrm{ncc}(\mathbf{P}_{n-1}, \mathbf{P}_t) \right]^{T} \tag{2}
+$$
+
+where $\mathbf{P}_0, \ldots, \mathbf{P}_{n-1}$ are the 2D joint positions. $t$ indicates the target frame index. The output $\mathbf{W}^{(0)}$ is forwarded to the attention matrix $\boldsymbol{\theta}_t^{(l)}$ to produce tensor weights for the subsequent layers.
+
+$$
+\mathbf{W}^{(l)} = \mathrm{sig}\left(\boldsymbol{\theta}_t^{(l)T} \mathbf{W}^{(l-1)}\right), \quad \text{for } l \in [1, L-2] \tag{3}
+$$
+
+where $\mathrm{sig}(\cdot)$ is the sigmoid activation function. We require the dimension of $\pmb{\theta}_{t}^{(l)}\in \mathcal{R}^{F^{\prime}\times F}$ to match the numbers of output tensors of layers $l - 1$ and $l$, s.t. $F^{\prime} = \lambda_{l - 1}$ and $F = \lambda_{l}$.
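+
+As a concrete illustration of equations (2) and (3), the following minimal NumPy sketch computes the initial frame-level attention scalars from positive cosine similarity and propagates them through a learned matrix with a sigmoid. Function names such as `initial_temporal_weights` are our own for illustration and are not taken from the released code.
+
+```python
+import numpy as np
+
+def positive_cosine(p_i, p_t):
+    # Positive cosine similarity between two sets of 2D joints, used as the
+    # normalized cross-correlation score ncc(P_i, P_t) of equation (2).
+    a, b = p_i.ravel(), p_t.ravel()
+    sim = float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
+    return max(sim, 0.0)
+
+def initial_temporal_weights(poses_2d, t):
+    # poses_2d: (n, J, 2) array of 2D joint positions, t: target frame index.
+    # Returns W^(0), one attention scalar per input frame (equation (2)).
+    return np.array([positive_cosine(p, poses_2d[t]) for p in poses_2d])
+
+def propagate_weights(theta_t, w_prev):
+    # Equation (3): W^(l) = sig(theta_t^(l)T W^(l-1)),
+    # where theta_t has shape (lambda_{l-1}, lambda_l).
+    z = theta_t.T @ w_prev
+    return 1.0 / (1.0 + np.exp(-z))   # element-wise sigmoid
+```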
+
+# 3.3. Kernel Attention
+
+Similar to the temporal attention that determines a tensor weight distribution $\mathbf{W}^{(l)}$ within layer $l$ , the kernel attention module assigns a channel weight distribution within a tensor, denoted as $\widetilde{\mathbf{W}}^{(l)}$ . Figure 2 (right) depicts the steps on how an updated tensor $\mathbf{T}_{final}^{(l)}$ is generated through the weight adjustment. Given an input tensor $\mathbf{T}^{(l)} \in \mathcal{R}^{C \times F}$ , we generate $M$ new tensors $\widetilde{T}_m^{(l)}$ using $M$ TCN units with different dilation rates. These $M$ tensors are fused together through element-wise summation: $\widetilde{\mathbf{T}}^{(l)} = \sum_{m=1}^{M} \widetilde{T}_m^{(l)}$ , which is fed into a global average pooling layer (GAP) to generate channel-wise statistics $\widetilde{\mathcal{T}}_c^{(l)} \in \mathcal{R}^{C \times 1}$ . The channel number $C$ is acquired through a TCN unit as discussed in the ablation study. The output $\widetilde{\mathcal{T}}_c^{(l)}$ is forwarded to a fully connected layer to learn the relationship among features of different kernel sizes: $\widetilde{\mathcal{T}}_r^{(l)} = \theta_r^{(l)} \widetilde{\mathcal{T}}_c^{(l)}$ . The role of matrix $\theta_r^{(l)} \in \mathcal{R}^{r \times C}$ is to reduce the channel dimension to $r$ . Guided by the compacted feature descriptor $\widetilde{\mathcal{T}}_r^{(l)}$ , $M$ vectors are generated (indicated by the yellow cuboids) through a second fully connected layer across channels. Their kernel attention weights are computed by a softmax function:
+
+$$
+\widetilde{\mathbf{W}}^{(l)} \triangleq \left\{ \widetilde{W}_1^{(l)}, \dots, \widetilde{W}_M^{(l)} \;\middle|\; \widetilde{W}_m^{(l)} = \frac{e^{\boldsymbol{\theta}_m^{(l)} \widetilde{\mathcal{T}}_r^{(l)}}}{\sum_{m=1}^{M} e^{\boldsymbol{\theta}_m^{(l)} \widetilde{\mathcal{T}}_r^{(l)}}} \right\} \tag{4}
+$$
+
+where $\boldsymbol{\theta}_{m}^{(l)}\in \mathcal{R}^{C\times r}$ are the kernel attention parameters and $\sum_{m = 1}^{M}\widetilde{W}_{m}^{(l)} = 1$. Based on the weight distribution, we finally obtain the output tensor:
+
+$$
+\mathbf{T}_{\mathrm{final}}^{(l)} \triangleq \sum_{m=1}^{M} \widetilde{W}_m^{(l)} \otimes \widetilde{T}_m^{(l)} \tag{5}
+$$
+
+The channel update procedure can be further decomposed as:
+
+$$
+\widetilde{W}_m^{(l)} \otimes \widetilde{T}_m^{(l)} = \left\{ \widetilde{\omega}_1^{(l)} \otimes \widetilde{T}_1^{(l)}, \dots, \widetilde{\omega}_C^{(l)} \otimes \widetilde{T}_C^{(l)} \right\} \tag{6}
+$$
+
+This shares the same format as the tensor distribution process (equation 1) in the temporal attention module but focuses on the channel distribution. The temporal attention parameters $\pmb{\theta}_t^{(l)}$ and kernel attention parameters $\pmb{\theta}_r^{(l)}$ , $\pmb{\theta}_{m}^{(l)}$ for $l \in [1, L - 2]$ are learned through mini-batch stochastic gradient descent (SGD) in the same manner as the TCN unit training [6].
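+
+A selective-kernel style sketch of Section 3.3 in PyTorch is given below; the hyper-parameter names ($C$, $M$, $G$, $r$) follow the paper, but the concrete layer choices (e.g. a ReLU after the channel reduction) are our assumptions rather than the authors' released implementation.
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class KernelAttention(nn.Module):
+    # M dilated TCN branches are fused, globally pooled, compressed to r
+    # dimensions and then re-weighted per branch and channel through a
+    # softmax across branches (equations (4)-(5)).
+    def __init__(self, C=1024, M=3, G=8, r=128, kernel_size=3):
+        super().__init__()
+        self.branches = nn.ModuleList([
+            nn.Conv1d(C, C, kernel_size, dilation=d, groups=G,
+                      padding=d * (kernel_size - 1) // 2)
+            for d in range(1, M + 1)])
+        self.reduce = nn.Linear(C, r)                                    # theta_r
+        self.expand = nn.ModuleList([nn.Linear(r, C) for _ in range(M)]) # theta_m
+
+    def forward(self, x):                          # x: (batch, C, frames)
+        feats = [branch(x) for branch in self.branches]
+        fused = torch.stack(feats).sum(dim=0)      # element-wise summation
+        s = fused.mean(dim=-1)                     # global average pooling
+        z = F.relu(self.reduce(s))                 # compact descriptor, (batch, r)
+        logits = torch.stack([fc(z) for fc in self.expand])  # (M, batch, C)
+        weights = F.softmax(logits, dim=0)         # softmax over the M kernels
+        return sum(w.unsqueeze(-1) * f for w, f in zip(weights, feats))
+```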
+
+# 4. Integration with Dilated Convolutions
+
+For the proposed attention model, a large receptive field is crucial to learn long-range temporal relationships across frames, thereby enhancing the estimation consistency. However, with more frames fed into the network,
+
+
+Figure 3: The model of the temporal dilated convolution network. As the level index increases, the receptive field over frames (layer index $= 0$) or tensors (layer index $\geq 1$) increases.
+
+the number of neural layers increases, together with the number of training parameters. To avoid vanishing gradients and problems caused by superfluous layers [27], we devise a multi-scale dilated convolution (MDC) strategy.
+
+Figure 3 shows our dilated network architecture. For visualization purposes, we project the network into an $xyz$ space. The $xy$ plane has the same configuration as the network in Figure 2, with the combination of temporal and kernel attention modules along the $x$ direction and the layers laid out along the $y$ direction. As an extension, we place the dilated convolution units (DCUs) along the $z$ direction. This $z$-axis is labeled as levels to distinguish it from the layer concept along the $y$ direction. As the level index increases, the receptive field grows with increasing dilation size while the number of DCUs decreases.
+
+Algorithm 1 describes the data flow of how these DCUs interact with each other. For notational simplicity, we use $\mathbf{U}_v^{(l)}$ to denote a DCU from layer $l$ and level $v$. With the extra dimension introduced by the dilation levels, the tensor weights from the attention module in equation (1) are extended to three dimensions. We format them as a set of matrices: $\{\bar{\mathbf{W}}^{(0)},\dots,\bar{\mathbf{W}}^{(L - 2)}\}$. Accordingly, the pre-learned attention parameters in equation (3) are upgraded to a tensor format $\{\hat{\pmb{\theta}}_t^{(1)},\dots,\hat{\pmb{\theta}}_t^{(L - 2)}\}$. Lines 4-5 of Algorithm 1 provide the details about the dimensions of a convolution unit, i.e. kernel $\times$ dilation $\times$ stride. For tensor product convenience, we impose the following dimension constraints on $\mathbf{U}_v^{(l)}$:
+
+- The dilation size of unit $\mathbf{U}_v^{(l)}$ equals the kernel size of unit $\mathbf{U}_0^{(l + 1)}$: $d_v^{(l)} \coloneqq k_0^{(l + 1)}$. In other words, the dilation size of all the units in layer $l$ is defined by the kernel size of the $0^{th}$ unit of the next layer $l + 1$.
+
+Algorithm 1: Multi-scale Dilation Configuration
+Input: number of layers $L$; kernel sizes $\{k_0,k_1,\dots ,k_{L - 2},1\}$; 2D joints $\{\mathbf{P}_0,\mathbf{P}_1,\dots ,\mathbf{P}_{n - 1}\}$. Result: configure the input/output for each $\mathbf{U}_v^{(l)}$.
+1 $V\coloneqq L - 2$ // level size
+2 for $l\gets 0$ to $L - 2$ do
+3 for $v\gets 0$ to $V - 1$ do
+4 $d_v^{(l)}\coloneqq k_0^{(l + 1)}$ // dilation size for $\mathbf{U}_v^{(l)}$
+5 $s_v^{(l)}\coloneqq k_v^{(l)}\times d_v^{(l)}$ // stride size
+6 $\mathbf{U}_v^{(l)} = DCU(d_v^{(l)},s_v^{(l)})$; if $l = 0$ then
+7 $\{\mathbf{P}_1,\ldots ,\mathbf{P}_n\} \mapsto \mathbf{U}_v^{(0)}$ // input
+8 $\mathbf{U}_v^{(0)}\Rightarrow \mathbf{T}_v^{(0)}$ // output
+9 else
+10 $\bar{\mathbf{W}}_v^{(l)} = \mathrm{sig}(\hat{\boldsymbol{\theta}}_t^{(l)T}\bar{\mathbf{W}}^{(l - 1)})$
+11 if $v = 0$ then
+12 $i_m\coloneqq l - 1$ // max level index
+13 $\{\bar{\mathbf{W}}_0^{(l - 1)}\otimes \mathbf{T}_0^{(l - 1)}\oplus \bar{\mathbf{W}}_1^{(l - 1)}\otimes \mathbf{T}_1^{(l - 2)}\oplus \dots \oplus \bar{\mathbf{W}}_{i_m}^{(l - 1)}\otimes \mathbf{T}_{i_m}^{(0)}\} \rightarrow \mathbf{U}_v^{(l)}$ // $\oplus$ is element-wise add
+14 $\mathbf{U}_v^{(l)}\Rightarrow \mathbf{T}_v^{(l)}$
+15 else
+16 $\bar{\mathbf{W}}_{i_m}^{(l - 1)}\otimes \mathbf{T}_0^{(l - 1)}\rightarrow \mathbf{U}_v^{(l)}$
+17 $\mathbf{U}_v^{(l)}\Rightarrow \mathbf{T}_v^{(l)}$
+18 end
+19 end
+20 end
+
+- The stride size of $\mathbf{U}_v^{(l)}$ equals the product of its corresponding kernel and dilation sizes: $s_v^{(l)}\coloneqq k_v^{(l)}\times d_v^{(l)}$.
+
+Lines 6-18 configure the input (denoted by “$\rightarrow$”) and output (denoted by “$\Rightarrow$”) data flows for the unit $\mathbf{U}_v^{(l)}$. For the input flow, we consider two cases according to the layer indices: $l = 0$ and $l \geq 1$. All the units from layer $l = 0$ share the same $n$ video frames as the input. For all the units from subsequent layers ($l \geq 1$), their input tensors are from:
+
+$$
+\mathrm{input}\left(\mathbf{U}_v^{(l)}\right) \triangleq \begin{cases} \left\{ \mathbf{T}_0^{(l-1)}, \mathbf{T}_1^{(l-2)}, \dots, \mathbf{T}_V^{(0)} \right\} & \text{if } v = 0; \\ \mathbf{T}_0^{(l-1)} & \text{otherwise.} \end{cases} \tag{7}
+$$
+
+where $\mathbf{T}_v^{(l - 1)}$ are the output tensors from the previous layer. Element-wise multiplication is applied to these input tensors with their weights $\bar{\mathbf{W}}_v^{(l - 1)}$ , as described in line 13.
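+
+The two constraints above fully determine the dilation and stride of every DCU once the per-layer kernel sizes are fixed. A small sketch of this configuration step (our own helper, assuming one kernel size per layer) is:
+
+```python
+def configure_dcu_sizes(kernels):
+    # kernels: per-layer kernel sizes, e.g. [3, 3, 3, 3, 3, 1] for n = 243.
+    L = len(kernels)          # number of layers (the last kernel size is 1)
+    V = L - 2                 # number of dilation levels
+    config = {}
+    for l in range(L - 1):    # layers 0 .. L-2 hold DCUs
+        for v in range(V):
+            d = kernels[l + 1]            # d_v^(l) := k_0^(l+1)
+            s = kernels[l] * d            # s_v^(l) := k_v^(l) * d_v^(l)
+            config[(l, v)] = {"kernel": kernels[l], "dilation": d, "stride": s}
+    return config
+
+# e.g. configure_dcu_sizes([3, 3, 3, 3, 3, 1])[(0, 0)]
+# -> {'kernel': 3, 'dilation': 3, 'stride': 9}
+```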
+
+# 5. Experiments
+
+We have implemented the proposed approach in native Python without parallel optimization. The test system runs on a single NVIDIA Titan RTX GPU. For real-time inference, it can reach 3000 FPS, i.e. approximately 0.3 milliseconds per video frame. For training and testing, we have built three prototypes with $n = 27$, $n = 81$, and $n = 243$, where $n$ is the receptive field on input frames. The details about the selection of $n$ are discussed in the ablation study, section 5.3. All the prototypes present similar convergence rates in training and testing, as shown in Figure 4. We train our model using the Ranger optimizer for 80 epochs with an initial learning rate of 1e-3, decayed to 1e-5 with cosine annealing [47, 24]. Data augmentation is applied to both the training and testing data by horizontally flipping poses. We also set the batch size, dropout rate, and activation function to 1024, 0.2, and Mish, respectively [35, 28].
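+
+For reference, the reported optimization schedule can be reproduced roughly as follows; Adam is used as a stand-in for the Ranger optimizer cited above, so this is a sketch rather than the authors' exact setup.
+
+```python
+import torch
+from torch.optim.lr_scheduler import CosineAnnealingLR
+
+def build_training_schedule(model, epochs=80):
+    # 80 epochs, initial lr 1e-3 decayed to 1e-5 with cosine annealing;
+    # batch size 1024 and dropout 0.2 are set elsewhere in the model/loader.
+    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
+    scheduler = CosineAnnealingLR(optimizer, T_max=epochs, eta_min=1e-5)
+    return optimizer, scheduler
+```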
+
+
+Figure 4: Convergence and accuracy performance for training and testing on the three prototypes.
+
+# 5.1. Datasets and Evaluation Protocols
+
+Our training images are from two public datasets: Human3.6M [7] and HumanEva [39], following the same training and validation policy as existing works [27, 43, 19, 35]. Specifically, the subjects S1, S5, S6, S7, and S8 from Human3.6M are used for training, and S9 and S11 are used for testing. In the same manner, we conduct training/testing on the HumanEva dataset with the "Walk" and "Jog" actions performed by subjects S1, S2, and S3. For both datasets, we use the standard evaluation metrics (MPJPE and P-MPJPE) to measure the offset between the estimated result and the ground truth (GT) relative to the root node, in millimeters [7]. Two protocols are involved in the experiments: Protocol#1 computes the mean Euclidean distance over all the joints after aligning the root joints (i.e. pelvis) of the predicted and ground-truth poses, referred to as MPJPE [14, 21, 34, 25]. Protocol#2 applies an additional similarity transformation (Procrustes analysis) [20] to the predicted pose as an enhancement, referred to as P-MPJPE [27, 19, 43, 35].
+
+| Method | Dir. | Disc. | Eat | Greet | Phone | Photo | Pose | Pur. | Sit | SitD. | Smoke | Wait | WalkD. | Walk | WalkT. | Avg |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Martinez et al. ICCV'17 [27] | 51.8 | 56.2 | 58.1 | 59.0 | 69.5 | 78.4 | 55.2 | 58.1 | 74.0 | 94.6 | 62.3 | 59.1 | 65.1 | 49.5 | 52.4 | 62.9 |
| Fang et al. AAAI'18 [14] | 50.1 | 54.3 | 57.0 | 57.1 | 66.6 | 73.3 | 53.4 | 55.7 | 72.8 | 88.6 | 60.3 | 57.7 | 62.7 | 47.5 | 50.6 | 60.4 |
| Yang et al. CVPR'18 [43] | 51.5 | 58.9 | 50.4 | 57.0 | 62.1 | 65.4 | 49.8 | 52.7 | 69.2 | 85.2 | 57.4 | 58.4 | 43.6 | 60.1 | 47.7 | 58.6 |
| Pavlakos et al. CVPR'18 [34] | 48.5 | 54.4 | 54.4 | 52.0 | 59.4 | 65.3 | 49.9 | 52.9 | 65.8 | 71.1 | 56.6 | 52.9 | 60.9 | 44.7 | 47.8 | 56.2 |
| Luvizon et al. CVPR'18 [25] | 49.2 | 51.6 | 47.6 | 50.5 | 51.8 | 60.3 | 48.5 | 51.7 | 61.5 | 70.9 | 53.7 | 48.9 | 57.9 | 44.4 | 48.9 | 53.2 |
| Hossain et al. ECCV'18 [19] | 48.4 | 50.7 | 57.2 | 55.2 | 63.1 | 72.6 | 53.0 | 51.7 | 66.1 | 80.9 | 59.0 | 57.3 | 62.4 | 46.6 | 49.6 | 58.3 |
| Lee et al. ECCV'18 [21] | 40.2 | 49.2 | 47.8 | 52.6 | 50.1 | 75.0 | 50.2 | 43.0 | 55.8 | 73.9 | 54.1 | 55.6 | 58.2 | 43.3 | 43.3 | 52.8 |
| Dabral et al. ECCV'18 [13] | 44.8 | 50.4 | 44.7 | 49.0 | 52.9 | 61.4 | 43.5 | 45.5 | 63.1 | 87.3 | 51.7 | 48.5 | 52.2 | 37.6 | 41.9 | 52.1 |
| Zhao et al. CVPR'19 [48] | 47.3 | 60.7 | 51.4 | 60.5 | 61.1 | 49.9 | 47.3 | 68.1 | 86.2 | 55.0 | 67.8 | 61.0 | 42.1 | 60.6 | 45.3 | 57.6 |
| Pavillo et al. CVPR'19 [35] | 45.2 | 46.7 | 43.3 | 45.6 | 48.1 | 55.1 | 44.6 | 44.3 | 57.3 | 65.8 | 47.1 | 44.0 | 49.0 | 32.8 | 33.9 | 46.8 |
| Ours (n=243 CPN causal) | 42.3 | 46.3 | 41.4 | 46.9 | 50.1 | 56.2 | 45.1 | 44.1 | 58.0 | 65.0 | 48.4 | 44.5 | 47.1 | 32.5 | 33.2 | 46.7 |
| Ours (n=243 CPN) | 41.8 | 44.8 | 41.1 | 44.9 | 47.4 | 54.1 | 43.4 | 42.2 | 56.2 | 63.6 | 45.3 | 43.5 | 45.3 | 31.3 | 32.2 | 45.1 |
| Martinez et al. ICCV'17 [27] | 37.7 | 44.4 | 40.3 | 42.1 | 48.2 | 54.9 | 44.4 | 42.1 | 54.6 | 58.0 | 45.1 | 46.4 | 47.6 | 36.4 | 40.4 | 45.5 |
| Hossain et al. ECCV'18 [19] | 35.2 | 40.8 | 37.2 | 37.4 | 43.2 | 44.0 | 38.9 | 35.6 | 42.3 | 44.6 | 39.7 | 39.7 | 40.2 | 32.8 | 35.5 | 39.2 |
| Lee et al. ECCV'18 [21] | 32.1 | 36.6 | 34.4 | 37.8 | 44.5 | 49.9 | 40.9 | 36.2 | 44.1 | 45.6 | 35.3 | 35.9 | 37.6 | 30.3 | 35.5 | 38.4 |
| Zhao et al. CVPR'19 [48] | 37.8 | 49.4 | 37.6 | 40.9 | 45.1 | 41.4 | 40.1 | 48.3 | 50.1 | 42.2 | 53.5 | 44.3 | 40.5 | 47.3 | 39.0 | 43.8 |
| Pavillo et al. CVPR'19 [35] | 35.2 | 40.2 | 32.7 | 35.7 | 38.2 | 45.5 | 40.6 | 36.1 | 48.8 | 47.3 | 37.8 | 39.7 | 38.7 | 27.8 | 29.5 | 37.8 |
| Ours (n=243 GT) | 34.5 | 37.1 | 33.6 | 34.2 | 32.9 | 37.1 | 39.6 | 35.8 | 40.7 | 41.4 | 33.0 | 33.8 | 33.0 | 26.6 | 26.9 | 34.7 |
+
+Table 1: Protocol#1 with MPJPE (mm): Reconstruction error on Human3.6M. Top-table: input 2D joints are acquired by detection. Bottom-table: input 2D joints with ground-truth. (CPN) - cascaded pyramid network; (GT) - ground-truth.
+
+| Method | Dir. | Disc. | Eat | Greet | Phone | Photo | Pose | Pur. | Sit | SitD. | Smoke | Wait | WalkD. | Walk | WalkT. | Avg |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Martinez et al. ICCV'17 [27] | 39.5 | 43.2 | 46.4 | 47.0 | 51.0 | 56.0 | 41.4 | 40.6 | 56.5 | 69.4 | 49.2 | 45.0 | 49.5 | 38.0 | 43.1 | 47.7 |
| Fang et al. AAAI'18 [14] | 38.2 | 41.7 | 43.7 | 44.9 | 48.5 | 55.3 | 40.2 | 38.2 | 54.5 | 64.4 | 47.2 | 44.3 | 47.3 | 36.7 | 41.7 | 45.7 |
| Hossain et al. ECCV'18 [19] | 35.7 | 39.3 | 44.6 | 43.0 | 47.2 | 54.0 | 38.3 | 37.5 | 51.6 | 61.3 | 46.5 | 41.4 | 47.3 | 34.2 | 39.4 | 44.1 |
| Pavlakos et al. CVPR'18 [34] | 34.7 | 39.8 | 41.8 | 38.6 | 42.5 | 47.5 | 38.0 | 36.6 | 50.7 | 56.8 | 42.6 | 39.6 | 43.9 | 32.1 | 36.5 | 41.8 |
| Yang et al. CVPR'18 [43] | 26.9 | 30.9 | 36.3 | 39.9 | 43.9 | 47.4 | 28.8 | 29.4 | 36.9 | 58.4 | 41.5 | 30.5 | 29.5 | 42.5 | 32.2 | 37.7 |
| Dabral et al. ECCV'18 [13] | 28.0 | 30.7 | 39.1 | 34.4 | 37.1 | 28.9 | 31.2 | 39.3 | 60.6 | 39.3 | 44.8 | 31.1 | 25.3 | 37.8 | 28.4 | 36.3 |
| Pavillo et al. CVPR'19 [35] | 34.1 | 36.1 | 34.4 | 37.2 | 36.4 | 42.2 | 34.4 | 33.6 | 45.0 | 52.5 | 37.4 | 33.8 | 37.8 | 25.6 | 27.3 | 36.5 |
| Ours (n=243 CPN) | 32.3 | 35.2 | 33.3 | 35.8 | 35.9 | 41.5 | 33.2 | 32.7 | 44.6 | 50.9 | 37.0 | 32.4 | 37.0 | 25.2 | 27.2 | 35.6 |
+
+Table 2: Protocol#2 with P-MPJPE (mm): Reconstruction error on Human3.6M with similarity transformation.
+
+Compared to Protocol#1, this protocol is more robust to individual joint prediction failures. Another commonly used protocol (N-MPJPE) applies a scale alignment to the predicted pose. Compared to Protocol#2, this protocol involves a smaller degree of transformation, resulting in a smaller error range than Protocol#2. Thus it should be sufficient to combine Protocols #1 and #2 for the accuracy analysis.
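+
+Both protocols are straightforward to compute; a common NumPy implementation (not necessarily identical to the authors' evaluation script) is sketched below for a single pose with $J$ joints.
+
+```python
+import numpy as np
+
+def mpjpe(pred, gt):
+    # Protocol#1: mean Euclidean distance over all joints in millimeters,
+    # assuming both poses are already expressed relative to the root joint.
+    return np.linalg.norm(pred - gt, axis=-1).mean()
+
+def p_mpjpe(pred, gt):
+    # Protocol#2: MPJPE after a rigid similarity (Procrustes) alignment of
+    # the prediction to the ground truth (rotation, scale, translation).
+    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
+    X, Y = pred - mu_p, gt - mu_g
+    U, s, Vt = np.linalg.svd(X.T @ Y)
+    if np.linalg.det(U @ Vt) < 0:      # avoid improper rotations (reflections)
+        Vt[-1] *= -1
+        s[-1] *= -1
+    R = U @ Vt
+    scale = s.sum() / (X ** 2).sum()
+    return mpjpe(scale * X @ R + mu_g, gt)
+```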
+
+# 5.2. Comparison with State-of-the-Art
+
+We compare our approach with state-of-the-art techniques on the two datasets Human3.6M and HumanEva, as shown in Tables 1-3. The best and second-best results are highlighted in bold and underlined formats, respectively. The last column of each table shows the average performance over all the testing sets. Our approach achieves the minimum errors with $45.1\mathrm{mm}$ in MPJPE and $35.6\mathrm{mm}$ in P-MPJPE. In particular, under Protocol#1, our model reduces the best reported MPJPE [35] by approximately $8\%$.
+
+2D Detection: A number of widely adopted 2D detectors were investigated. We tested the Human3.6M dataset starting with the pre-trained Stacked Hourglass (SH) network
+
+| Method | Walk S1 | Walk S2 | Walk S3 | Jog S1 | Jog S2 | Jog S3 | Avg |
+| --- | --- | --- | --- | --- | --- | --- | --- |
| Pavlakos et al. [34] | 22.3 | 19.5 | 29.7 | 28.9 | 21.9 | 23.8 | 24.4 |
| Martinez et al. [27]* | 19.7 | 17.4 | 46.8 | 26.9 | 18.2 | 18.6 | 24.6 |
| Lee et al. [21] | 18.6 | 19.9 | 30.5 | 25.7 | 16.8 | 17.7 | 21.5 |
| Pavlo et al. [35] | 13.4 | 10.2 | 27.2 | 17.1 | 13.1 | 13.8 | 15.8 |
| Ours (n=27 CPN) | 13.1 | 9.8 | 26.8 | 16.9 | 12.8 | 13.3 | 15.4 |
+
+Table 3: Protocol#2 with P-MPJPE (mm): Reconstruction error on HumanEva. (*) - single action model.
+
+to extract 2D point locations within the ground-truth bounding box; the results were further fine-tuned through the SH model [30]. Several automated methods without the ground-truth bounding box were also investigated, including ResNet-101-FPN [23] with Mask R-CNN [16] and the Cascaded Pyramid Network (CPN) [11]. Table 4 shows the results with 2D detectors given by the pre-trained SH, fine-tuned SH, and fine-tuned CPN models [35]. Further evaluation of 2D detectors can also be found in the second part of Table 1, where a comparison is shown with
+
+either the CPN estimation or the ground-truth (GT) as the input. For both cases, our attention model demonstrates clear advantages.
+
+| Method | SH PT | SH FT | CPN FT | GT |
+| --- | --- | --- | --- | --- |
| Martinez et al. [27] | 67.5 | 62.9 | - | 45.5 |
| Hossain et al. [19] | - | 58.3 | - | 41.6 |
| Pavllo et al. [35] | 58.5 | 53.4 | 46.8 | 37.8 |
| ours(n=243) | 57.3 | 52.0 | 45.1 | 34.7 |
| Pavllo et al.[35] | - | - | 49.0 | - |
| Ours(n=27) | 62.5 | 56.4 | 49.4 | 39.7 |
| Ours(n=81) | 60.3 | 55.7 | 47.5 | 37.1 |
| Ours(n=243) | 59.2 | 54.9 | 46.7 | 35.5 |
+
+Table 4: Top-table: Performance impacted by 2D detectors under Protocol#1 with MPJPE (mm). Bottom-table: Causal sequence processing performance in terms of the different 2D detectors. PT - pre-trained, FT - fine-tuned, GT - ground-truth, SH - stacked hourglass, CPN - cascaded pyramid network.
+
+Causal Performance: To facilitate real-time applications, we investigated the causal setting, which has an architecture similar to the one described in Figure 2 but only considers frames in the past. In the same manner, we implemented three prototypes with different receptive fields: $n = 27$, $n = 81$, and $n = 243$. Table 4 (bottom) demonstrates that our causal model can still reach the same level of accuracy as the state of the art. For example, compared to the semi-supervised approach, the prototypes $n = 81$ and $n = 243$ yield smaller MPJPE [35]. It is worth mentioning that even without access to future frames, the temporal coherence is not compromised in the causal setting. The qualitative results are provided in our supplementary videos.
+
+# 5.3. Ablation Studies
+
+To verify the impact and performance of each component in the network, we conducted ablation experiments on the Human3.6M dataset under Protocol#1.
+
+TCN Unit Channels: We first investigated how the channel number $C$ affects the performance of the TCN units and the temporal attention model. In our test, we used both the CPN and GT as the 2D input. Starting with a receptive field of $n = 3 \times 3 \times 3 = 27$, as we increase the channels ($C \leq 512$), the MPJPE drops significantly. However, the MPJPE changes slowly when $C$ grows between 512 and 1024, and remains almost stable afterwards. As shown in Figure 5, with the CPN input, only a marginal improvement is yielded from MPJPE $49.9\mathrm{mm}$ at $C = 1024$ to $49.6\mathrm{mm}$ at $C = 2048$. A similar curve shape can be observed for the GT input. Considering the computation load
+
+with more parameters introduced, we chose $C = 1024$ in our experiments.
+
+
+Figure 5: The impact of channel number on MPJPE. CPN: cascaded pyramid network and GT: ground-truth.
+
+Kernel Attention: Table 5 shows how the settings of different parameters inside the Kernel Attention module impact the performance under Protocol#1. The left three columns list the main variables. For validation purposes, we divide the configurations into three row-wise groups. Within each group, we vary one variable while keeping the other two fixed. The items in bold represent the best individual setting for each group. Empirically, we chose the combination of $M = 3$, $G = 8$, and $r = 128$ as the optimal setting (boxed). Note that we select $G = 8$ instead of the individually best assignment $G = 2$, which would introduce a larger number of parameters for a negligible MPJPE improvement.
+
+| Kernels | Groups | Channels | Parameters | P1 |
+| --- | --- | --- | --- | --- |
| M=1 | G=1 | - | 16.95M | 37.8 |
| M=2 | G=8 | r=128 | 9.14M | 37.1 |
| M=3 | G=8 | r=128 | 11.25M | 35.5 |
| M=4 | G=8 | r=128 | 13.36M | 38.0 |
| M=3 | G=1 | r=128 | 44.28M | 37.4 |
| M=3 | G=2 | r=128 | 25.41M | 35.3 |
| M=3 | G=4 | r=128 | 15.97M | 35.6 |
| M=3 | G=8 | r=128 | 11.25M | 35.5 |
| M=3 | G=16 | r=128 | 8.89M | 37.3 |
| M=3 | G=8 | r=64 | 10.20M | 35.9 |
| M=3 | G=8 | r=128 | 11.25M | 35.5 |
| M=3 | G=8 | r=256 | 13.35M | 36.2 |
+
+Table 5: Ablation study on different parameters in our kernel attention model. Here, we are using receptive field $n = 3 \times 3 \times 3 \times 3 \times 3 = 243$ . The evaluation is performed on Human3.6M under Protocol#1 with MPJPE (mm).
+
+In Table 6, we discuss the choice of different types of receptive fields and how they affect the network performance. The first column shows various layer configurations, which generate different receptive fields, ranging from $n = 27$ to $n = 1029$. To validate the impact of $n$, we fix the other parameters, i.e. $M = 3$, $G = 8$, $r = 128$. Note that for a network with a lower number of layers (e.g. $L = 3$), a larger receptive field may reduce the error more effectively. For example, increasing the receptive field from $n = 3 \times 3 \times 3 = 27$ to $n = 3 \times 7 \times 7 = 147$, the MPJPE drops from 40.6 to 36.8. However, for a deeper network, a larger receptive field may not always be optimal, e.g. when $n = 1029$, MPJPE $= 37.0$. Empirically, we obtained the best performance with the setting of $n = 243$ and $L = 5$, as indicated in the last row.
+
+| Receptive fields | Kernels | Groups | Channels | Parameters | P1 |
+| --- | --- | --- | --- | --- | --- |
| 3×3×3=27 | M=1 | G=1 | - | 8.56M | 40.6 |
| 3×3×3=27 | M=2 | G=4 | r=128 | 6.21M | 40.0 |
| 3×5×3=45 | M=2 | G=4 | r=128 | 6.21M | 39.9 |
| 3×5×5=75 | M=2 | G=4 | r=128 | 6.21M | 38.5 |
| 3×3×3=27 | M=3 | G=8 | r=128 | 5.69M | 39.5 |
| 3×5×3=45 | M=3 | G=8 | r=128 | 5.69M | 39.2 |
| 3×5×5=75 | M=3 | G=8 | r=128 | 5.69M | 38.2 |
| 3×7×7=147 | M=3 | G=8 | r=128 | 5.69M | 36.8 |
| 3×3×3×3=81 | M=3 | G=8 | r=128 | 8.46M | 37.8 |
| 3×5×5×5=375 | M=3 | G=8 | r=128 | 8.46M | 36.6 |
| 3×7×7×7=1029 | M=3 | G=8 | r=128 | 8.46M | 37.0 |
| 3×3×3×3×3=243 | M=3 | G=8 | r=128 | 11.25M | 35.5 |
+
+Table 6: Ablation study on different receptive fields in our kernel attention model. The evaluation is performed on Human3.6M under Protocol#1 with MPJPE (mm).
+
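+As Table 6 suggests, the receptive field $n$ of a prototype is simply the product of its per-layer kernel sizes; a trivial check:
+
+```python
+from math import prod
+
+def receptive_field(kernel_sizes):
+    # e.g. [3, 3, 3] -> 27, [3, 7, 7, 7] -> 1029, [3, 3, 3, 3, 3] -> 243
+    return prod(kernel_sizes)
+
+assert receptive_field([3, 3, 3]) == 27
+assert receptive_field([3, 7, 7, 7]) == 1029
+assert receptive_field([3, 3, 3, 3, 3]) == 243
+```
+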
+Multi-Scale Dilation: To evaluate the impact of the dilation component on the network, we tested the system with and without dilation and compared the outcomes. In the same way, the GT and CPN 2D detections are used as input and tested on the Human3.6M dataset under Protocol#1. Table 7 demonstrates that the integration of the attention and multi-scale dilation components surpasses their individual performance, yielding the minimum MPJPE for all three prototypes. We also found that the attention model makes an increasingly significant contribution as the layer number grows. This is because more layers lead to a larger receptive field, allowing the multi-scale dilation to capture long-term dependencies across frames. The effect is more noticeable when fast motion or self-occlusion is present in videos.
+
+Qualitative Results: We also further evaluate our approach on a number of challenging wild videos, such as activities with fast motion or low-resolution human images, for which it is extremely difficult to obtain meaningful 2D detections. For example, in Figure 6, the person playing with a sword not only exhibits quick body movements but also wears a long casual dress that causes partial occlusion; the skating girl moves at high speed, generating blurred regions. Our approach achieves a high level of robustness and accuracy in these challenging scenarios. More results can be found in the supplementary material.
+
+| Method | n = 27 | n = 81 | n = 243 |
+| --- | --- | --- | --- |
| Attention model (CPN) | 49.1 | 47.2 | 46.3 |
| Multi-Scale Dilation model (CPN) | 48.7 | 47.1 | 45.7 |
| Attention and Dilation (CPN) | 48.5 | 46.3 | 45.1 |
| Attention model (GT) | 39.5 | 37.8 | 35.5 |
| Multi-Scale Dilation model (GT) | 39.3 | 37.2 | 35.3 |
| Attention and Dilation (GT) | 38.9 | 36.2 | 34.7 |
+
+Table 7: Ablation study on different components in our method. The evaluation is performed on Human3.6M under Protocol#1 with MPJPE (mm).
+
+
+Figure 6: Qualitative results on wild videos.
+
+# 6. Conclusion
+
+We presented an attentional approach for 3D pose estimation from 2D videos. Combining multi-scale dilation with the temporal attention module, our system is able to capture long-range temporal relationships across frames, thereby significantly enhancing temporal coherency. Our experiments show a robust, high-fidelity prediction that compares favorably to related techniques. We believe our system substantially advances the state-of-the-art in video-based 3D pose estimation, making it practical for real-time applications.
+
+# References
+
+[1] S. Amin, M. Andriluka, M. Rohrbach, and B. Schiele. Multiview pictorial structures for 3d human pose estimation. BMVC, 2013.
+[2] M. Andriluka, S. Roth, and B. Schiele. Pictorial structures revisited: People detection and articulated pose estimation. Conference on Computer Vision and Pattern Recognition (CVPR), pages 1-8, 2009.
+[3] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2016.
+[4] Shaojie Bai, J Zico Kolter, and Vladlen Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271, 2018.
+[5] F. Bogo, A. Kanazawa, C. Lassner, P. Gehler, J. Romero, and M. J. Black. Keep it smpl: Automatic estimation of 3d human pose and shape from a single image. European Conference on Computer Vision (ECCV), page 1-18, 2016.
+[6] Léon Bottou. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010, pages 177-186. Springer, 2010.
+[7] C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu. Large scale datasets and predictive methods for 3d human sensing in natural environments. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(7):1325-1339, 2014.
+[8] C.-H. Chen and D. Ramanan. 3d human pose estimation = 2d pose estimation + matching. Conference on Computer Vision and Pattern Recognition (CVPR), pages 7035-7043, 2017.
+[9] W. Chen, H. Wang, Y. Li, and H. Su et al. Synthesizing training images for boosting human 3d pose estimation. Fourth International Conference on 3D Vision (3DV), pages 479-488, 2016.
+[10] Y. Chen, C. Shen, H. Chen, X. S. Wei, L. Liu, and J. Yang. Adversarial learning of structure-aware fully convolutional networks for landmark localization. IEEE transactions on pattern analysis and machine intelligence, 2019.
+[11] Yilun Chen, Zhicheng Wang, Yuxiang Peng, Zhiqiang Zhang, Gang Yu, and Jian Sun. Cascaded pyramid network for multi-person pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7103-7112, 2018.
+[12] J. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio. Attention-based models for speech recognition. Advances in Neural Information Processing Systems 28, pages 577-585, 2015.
+[13] Rishabh Dabral, Anurag Mundhada, Uday Kusupati, Safeer Afaque, Abhishek Sharma, and Arjun Jain. Learning 3d human pose from structure and motion. In Proceedings of the European Conference on Computer Vision (ECCV), pages 668-683, 2018.
+[14] Hao-Shu Fang, Yuanlu Xu, Wenguan Wang, Xiaobai Liu, and Song-Chun Zhu. Learning pose grammar to encode human body configuration for 3d pose estimation. Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
+[15] V. Ferrari, M. Marin-Jimenez, and A. Zisserman. Pose search: Retrieving people using their pose. IEEE Conference on Computer Vision and Pattern Recognition, page 1-8, 2009.
+[16] K. He, G. Gkioxari, P. Dollar, and R. Girshick. Mask r-cnn. International Conference on Computer Vision (ICCV), page 2980-2988, 2017.
+[17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016.
+[18] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
+[19] M. Hossain and J. Little. Exploiting temporal information for 3d human pose estimation. European Conference on Computer Vision (ECCV), pages 69-86, 2018.
+[20] I. Kostrikov and J. Gall. Depth sweep regression forests for estimating 3d human pose from images. British Machine Vision Conference (BMVC), 2014.
+[21] Kyoungoh Lee, Inwoong Lee, and Sanghoon Lee. Propagating lstm: 3d pose estimation based on joint interdependency. Proceedings of the European Conference on Computer Vision (ECCV), pages 119-135, 2018.
+[22] S. Li, W. Zhang, and A. B. Chan. Maximum-margin structured learning with deep networks for 3d human pose estimation. International Conference on Computer Vision (ICCV), page 2848-2856, 2015.
+[23] T. Lin, P. Dollar, R. B. Girshick, K. He, B. Hariharan, and S. J. Belongie. Feature pyramid networks for object detection. Conference on Computer Vision and Pattern Recognition (CVPR), page 936-944, 2017.
+[24] Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. On the variance of the adaptive learning rate and beyond. arXiv preprint arXiv:1908.03265, 2019.
+[25] Diogo C Luvizon, David Picard, and Hedi Tabia. 2d/3d pose estimation and action recognition using multitask deep learning. In CVPR, pages 5137-5146, 2018.
+[26] C. Mandery, O. Terlemez, M. Do, N. Vahrenkamp, and T. Asfour. The kit whole-body human motion database. International Conference on Advanced Robotics (ICAR), pages 329-336, 2015.
+[27] J. Martinez, R. Hossain, J. Romero, and J. J. Little. A simple yet effective baseline for 3d human pose estimation. International Conference on Computer Vision (ICCV), page 2659-2668, 2017.
+[28] Diganta Misra. Mish: A self regularized non-monotonic neural activation function. arXiv preprint arXiv:1908.08681, 2019.
+[29] N. Neverova, C. Wolf, G. W. Taylor, and F. Nebout. Multiscale deep learning for gesture detection and localization. European Conference on Computer Vision (ECCV) Workshops, pages 474-490, 2014.
+[30] A. Newell, K. Yang, and J. Deng. Stacked hourglass networks for human pose estimation. European Conference on Computer Vision, pages 483-499, 2016.
+[31] Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
+[32] C. Palmero, A. Clapés, C. Bahnsen, A. Møgelmose, T. B. Moeslund, and S. Escalera. Multi-modal rgb-depth-thermal human body segmentation. International Journal of Computer Vision, 118(2):217-239, 2016.
+[33] S. Park, J. Hwang, and N. Kwak. 3d human pose estimation using convolutional neural networks with 2d pose information. European Conference on Computer Vision (ECCV) Workshops, page 156-169, 2016.
+[34] G. Pavlakos, X. Zhou, K. G. Derpanis, and K. Daniilidis. Coarse-to-fine volumetric prediction for single-image 3d human pose. Conference on Computer Vision and Pattern Recognition (CVPR), page 1263-1272, 2017.
+[35] Dario Pavllo, Christoph Feichtenhofer, David Grangier, and Michael Auli. 3d human pose estimation in video with temporal convolutions and semi-supervised training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7753-7762, 2019.
+[36] Tim Rocktäschel, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, and Phil Blunsom. Reasoning about entailment with neural attention. In ICLR, 2016.
+[37] D. Ruck, S. Rogers, and M. Kabrisky. Feature selection using a multilayer perceptron. Journal of Neural Network Computing, 2(2):40-48, 1990.
+[38] N. Sarafianos, B. Boteanu, B. Ionescu, and I. A. Kakadiaris. 3d human pose estimation: A review of the literature and analysis of covariates. CVIU, page 1-20, 2016.
+[39] L. Sigal, A. O. Balan, and M. J. Black. Humaneva: Synchronized video and motion capture dataset and baseline algorithm for evaluation of articulated human motion. International Journal of Computer Vision, 87(12):4-27, 2010.
+[40] B. Tekin, A. Rozantsev, V. Lepetit, and P. Fua. Direct prediction of 3d body poses from motion compensated sequences. Conference on Computer Vision and Pattern Recognition (CVPR), page 991-1000, 2016.
+[41] A. Toshev and C. Szegedy. Deeppose: Human pose estimation via deep neural networks. Conference on Computer Vision and Pattern Recognition (CVPR), pages 1653-1660, 2014.
+[42] G. Varol, J. Romero, X. Martin, N. Mahmood, M. J. Black, I. Laptev, and C. Schmid. Learning from synthetic humans. Conference on Computer Vision and Pattern Recognition (CVPR), page 109-117, 2017.
+[43] W. Yang, W. Ouyang, X. Wang, J. Ren, H. Li, and X. Wang. 3d human pose estimation in the wild by adversarial learning. Conference on Computer Vision and Pattern Recognition (CVPR), page 5255-5264, 2018.
+[44] Y. Yang and D. Ramanan. Articulated pose estimation with flexible mixtures-of-parts. Conference on Computer Vision and Pattern Recognition (CVPR), page 1385-1392, 2011.
+[45] W. Yin, H. Schütze, B. Xiang, and B. Zhou. Abcnn: Attention-based convolutional neural network for modeling sentence pairs. Transactions of the Association for Computational Linguistics, 4:259-272, 2016.
+[46] J. Yoo and T. Han. Fast normalized cross-correlation. Circuits, Systems and Signal Processing, 28(819):1-13, 2009.
+
+[47] Michael R Zhang, James Lucas, Geoffrey Hinton, and Jimmy Ba. Lookahead optimizer: k steps forward, 1 step back. arXiv preprint arXiv:1907.08610, 2019.
+[48] Long Zhao, Xi Peng, Yu Tian, Mubbasir Kapadia, and Dimitris N Metaxas. Semantic graph convolutional networks for 3d human pose regression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3425-3435, 2019.
+[49] X. Zhou, X. Sun, W. Zhang, S. Liang, and Y. Wei. Deep kinematic pose regression. European Conference on Computer Vision (ECCV) Workshops, page 156-169, 2016.
+[50] X. Zhou, M. Zhu, S. Leonardos, K. G. Derpanis, and K. Daniilidis. Sparseness meets deepness: 3d human pose estimation from monocular video. Conference on Computer Vision and Pattern Recognition (CVPR), pages 4966-4975, 2016.
\ No newline at end of file
diff --git a/attentionmechanismexploitstemporalcontextsrealtime3dhumanposereconstruction/images.zip b/attentionmechanismexploitstemporalcontextsrealtime3dhumanposereconstruction/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4ebebb99160934d58e7180cfde7f70e072866994
--- /dev/null
+++ b/attentionmechanismexploitstemporalcontextsrealtime3dhumanposereconstruction/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:531fc1646a67fc2eb9310695e352e730c5e638e7a7e2eb66c2a26689ff42d16e
+size 843557
diff --git a/attentionmechanismexploitstemporalcontextsrealtime3dhumanposereconstruction/layout.json b/attentionmechanismexploitstemporalcontextsrealtime3dhumanposereconstruction/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a54849b0473b598a985f95952f2980cf9db02d9f
--- /dev/null
+++ b/attentionmechanismexploitstemporalcontextsrealtime3dhumanposereconstruction/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1b146e2f8aa485d3cefccacd5ec6b1f8fa6cdb1eac418ac694ce6594d2bf6191
+size 401167