diff --git a/12in1multitaskvisionandlanguagerepresentationlearning/22c26506-3388-45ee-af0f-f53251df929c_content_list.json b/12in1multitaskvisionandlanguagerepresentationlearning/22c26506-3388-45ee-af0f-f53251df929c_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..d215d944f771ff126f01e66f46fd47d49db84488
--- /dev/null
+++ b/12in1multitaskvisionandlanguagerepresentationlearning/22c26506-3388-45ee-af0f-f53251df929c_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ddc4047c95a5971d1a26f6899171b865c501c39ab28e4211ae2e72a5efa1214e
+size 88167
diff --git a/12in1multitaskvisionandlanguagerepresentationlearning/22c26506-3388-45ee-af0f-f53251df929c_model.json b/12in1multitaskvisionandlanguagerepresentationlearning/22c26506-3388-45ee-af0f-f53251df929c_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..d5cd09eee9eb002ad0bd24448bb2874e185f7168
--- /dev/null
+++ b/12in1multitaskvisionandlanguagerepresentationlearning/22c26506-3388-45ee-af0f-f53251df929c_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8936d1c3a4926361caa02afc0220be359565e873eca36d9dcef787960318c5e4
+size 106714
diff --git a/12in1multitaskvisionandlanguagerepresentationlearning/22c26506-3388-45ee-af0f-f53251df929c_origin.pdf b/12in1multitaskvisionandlanguagerepresentationlearning/22c26506-3388-45ee-af0f-f53251df929c_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..93ab047581a96728fb0aec1953d7ebda42b947db
--- /dev/null
+++ b/12in1multitaskvisionandlanguagerepresentationlearning/22c26506-3388-45ee-af0f-f53251df929c_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e7830c5610c6d3dea2d99f9266d349cb1786deacb6a9a936db62b6e2461c1082
+size 393469
diff --git a/12in1multitaskvisionandlanguagerepresentationlearning/full.md b/12in1multitaskvisionandlanguagerepresentationlearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..7e2605aa28a21e4f9a1cf0a786ccf9201259cd98
--- /dev/null
+++ b/12in1multitaskvisionandlanguagerepresentationlearning/full.md
@@ -0,0 +1,321 @@
+# 12-in-1: Multi-Task Vision and Language Representation Learning
+
+Jiasen Lu $^{3*}$, Vedanuj Goswami $^{1*}$, Marcus Rohrbach $^{1}$, Devi Parikh $^{1,3}$, Stefan Lee $^{2}$
+
+$^{1}$ Facebook AI Research, $^{2}$ Oregon State University, $^{3}$ Georgia Institute of Technology
+
+{vedanuj, mrf}@fb.com leestef@oregonstate.edu {jiasenlu, parikh}@gatech.edu
+
+# Abstract
+
+Much of vision-and-language research focuses on a small but diverse set of independent tasks and supporting datasets often studied in isolation; however, the visually-grounded language understanding skills required for success at these tasks overlap significantly. In this work, we investigate these relationships between vision-and-language tasks by developing a large-scale, multi-task training regime. Our approach culminates in a single model on 12 datasets from four broad categories of tasks including visual question answering, caption-based image retrieval, grounding referring expressions, and multi-modal verification. Compared to independently trained single-task models, this represents a reduction from approximately 3 billion parameters to 270 million while simultaneously improving performance by 2.05 points on average across tasks. We use our multi-task framework to perform in-depth analysis of the effect of jointly training diverse tasks. Further, we show that finetuning task-specific models from our single multi-task model can lead to further improvements, achieving performance at or above the state-of-the-art.
+
+# 1. Introduction
+
+A compelling reason to study language and vision jointly is the promise of language as a universal and natural interface for visual reasoning problems – useful both in specifying a wide range of problems and in communicating AI responses. However, the current research landscape for visually-grounded language understanding is a patchwork of many specialized tasks like question answering or caption generation, each supported by a handful of datasets. As such, progress in this field has been measured by the independent improvement of bespoke models designed and trained for each of these specific tasks and datasets.
+
+The recent rise of general architectures for vision-and-language [1, 23, 24, 27, 43, 45, 54] reduces the architectural differences across tasks. These models pretrain common architectures on self-supervised tasks to learn general visiolinguistic representations and then fine-tune for specific datasets.
+
+
+Figure 1: We introduce an approach for effective multi-task learning, training a single model on 12 popular vision-and-language datasets. This single model performs on par with or even better than independent task-specific state-of-the-art approaches for many tasks.
+
+
+Figure 1 examples: Visual Question Answering ("What color is the child's outfit?" – orange), Referring Expressions ("child", "sheep", "basket", "people sitting on chair"), Multi-modal Verification ("The child is petting a dog." – false), Caption-based Image Retrieval ("A child in orange clothes plays with sheep.").
+
+However, the result is still a menagerie of independent task-specific models rather than a single unified model. This is dissatisfying in practice – the model that understands questions cannot ground noun phrases, the grounding model cannot retrieve images based on a description, and so forth. Further, this approach does not scale well as each new task requires storing a new model.
+
+Beyond being intellectually dissatisfying, this task-based fracturing leaves quite a lot on the table. While individual tasks present different challenges and diverse interfaces, the underlying associations between language and visual concepts are often common across tasks. For example, learning to ground the referring expression "small red vase" requires understanding the same concepts as answering the question "What color is the small vase?" Training multiple tasks jointly can potentially pool these different sources of grounding supervision. Further, developing models that can perform well on a wide range of tasks simultaneously can help guard against the research community overfitting to specific datasets and metrics.
+
+In this work, we develop a multi-task model for discriminative vision-and-language tasks based on the recently proposed ViLBERT [27] model. We consider four categories of tasks – training jointly on a total of 12 different datasets. Our results not only show that a single model can perform all these tasks, but also that joint training can improve the performance compared to single-task training with the same architecture. Before undertaking this effort, it was not obvious to us that this would be the case – multi-task training is notoriously challenging and vision-and-language datasets vary greatly in size, interface, and difficulty. Our model attains improvements of 0.25 to 4.19 absolute points from multi-task training – improving over corresponding single-task models for 11 out of 12 tasks. Further, we demonstrate that multi-task training is an effective pretraining step for single-task models – leading to further gains and setting a new state-of-the-art for 7 out of 12 tasks.
+
+Large-scale multi-task learning is challenging as datasets can vary in size and difficulty. To address these issues, we introduce a dynamic stop-and-go training scheduler, task-dependent input tokens, and simple hyper-parameter heuristics. Using our proposed pipeline, we were able to train many multi-task models with varying datasets - assessing the relationships between different vision-and-language tasks in terms of their performance when trained together.
+
+To summarize, we make the following contributions:
+
+- We systematically analyze the joint training relationships between different vision-and-language datasets and tasks and present a Clean V&L Multi-Task setup, which ensures no train-test leaks across tasks.
+- We develop a single multi-task model trained on 12 popular V&L datasets. Compared to a set of independent models, this represents a reduction from $\sim 3$ billion parameters to $\sim 270$ million while simultaneously improving average performance by 2.05 points.
+- We demonstrate that multi-task training is useful even in cases where single-task performance is paramount. Fine-tuning from our multi-task model for single tasks resulted in an average improvement of 2.98 points over baseline single-task trained models.
+
+# 2. Vision-and-Language Tasks
+
+# 2.1. Task-Groups and Datasets
+
+We consider 12 popular vision and language datasets. These datasets cover a wide range of tasks and require diverse grounding granularity and reasoning skills. We group related datasets into four groups to facilitate our analysis:
+
+Vocab-based VQA. Given an image and a natural-language question, select an answer from a fixed vocabulary. We consider three popular datasets for this group - VQAv2 [15], GQA [17], and Visual Genome (VG) QA [21].
+
+Image Retrieval. Given a caption and a pool of images, retrieve the target image that is best-described by the caption. We consider COCO [7] and Flickr30K [35] captioning datasets for this task-group.
+
+Referring Expressions. Given a natural language expression and an image, identify the target region that is referred to by the expression. The expression can vary greatly across datasets from simple noun phrases to multi-round dialogs.
+
+| % Row-Task Test Images in Column-Task Train/Val Set | [A] | [B] | [C] | [D] | [E] | [F] | [G] | [H] | [I] | [J] | [K] | [L] |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| [A] VQA2.0 [15] | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% |
+| [B] VG QA [21] | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% |
+| [C] GQA [17] | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% |
+| [D] COCO [7] | 100% | 43% | 33% | 0% | 0% | 0% | 0% | 0% | 7% | 46% | 0% | 0% |
+| [E] Flickr30k [35] | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 98% | 0% |
+| [F] RefCOCO [19] | 100% | 36% | 27% | 100% | 0% | 0% | 0% | 66% | 8% | 62% | 0% | 0% |
+| [G] RefCOCO+ [19] | 100% | 38% | 27% | 100% | 0% | 0% | 0% | 66% | 8% | 62% | 0% | 0% |
+| [H] RefCOCOg [30] | 100% | 41% | 31% | 100% | 0% | 53% | 53% | 0% | 8% | 63% | 0% | 0% |
+| [I] Visual7W [55] | 50% | 100% | 79% | 48% | 0% | 8% | 8% | 10% | 0% | 24% | 0% | 0% |
+| [J] GuessWhat [13] | 100% | 40% | 31% | 96% | 0% | 20% | 20% | 26% | 7% | 0% | 0% | 0% |
+| [K] SNLI-VE [49] | 0% | 0% | 0% | 0% | 94% | 0% | 0% | 0% | 0% | 0% | 0% | 0% |
+| [L] NLVR2 [44] | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% | 0% |
+
+Table 1: Percentage of row-task test images that are present in column-task train/val sets.
+
+We consider phrase grounding in RefCOCO(+/g) [19, 30], pointing questions in Visual7W [55], and dialog sequences in GuessWhat [13]. We note that these language inputs vary significantly in terms of detail and structure.
+
+Multi-modal Verification. Given one or more images and a natural language statement, judge the correctness or predict their semantic relationship. We consider $\mathrm{NLVR}^2$ [44] and SNLI-VE [49]. In $\mathrm{NLVR}^2$ , two images are given and the statement must be true for both to be true. In SNLI-VE, image-statement pairs are classified as representing an entailment, contradiction, or neutral. That is, whether the content of the image confirms, refutes, or is insufficient to comment on the truth of the corresponding statement.
+
+# 2.2. A Clean V&L Multi-Task Setup
+
+Many V&L tasks are built on top of each other and share significant overlap in terms of individual images. However, as each task is often examined in isolation, there does not exist an in-depth analysis of this overlap across different V&L tasks. Table 1 shows the percentage of test images for the target tasks which are present in other tasks' train/val sets. As we can see, there exists significant overlap across tasks. Even though different tasks require different inputs and outputs, other task annotations will provide clues about the visual grounding – for example, a referring expression for a "blue striped ball" at training could unfairly improve a VQA model's ability to answer "What color is the striped ball?" for the same image at test time. To avoid information leakage from the annotations of other tasks, we propose a cleaned multi-task split for V&L tasks where test images are removed from train/val for all the tasks. We stress that the test sets are not modified in any way, so our results are comparable to prior work. Cleaning results in about an $11\%$ reduction in training data on average across datasets. Full details of this process and statistics regarding cleaned dataset size are available in the supplement.
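+
+Operationally, the cleaning step simply drops every train/val instance whose image also appears in any task's test set. A minimal sketch of this filtering, assuming COCO-style annotation records with an `image_id` field (the data layout and names are illustrative, not the authors' actual preprocessing code):
+
+```python
+def clean_multitask_splits(train_val_sets, test_sets):
+    """Remove train/val instances whose image appears in ANY task's test set.
+
+    train_val_sets / test_sets: dict mapping task name -> list of annotation
+    dicts, each carrying an 'image_id' field (illustrative layout).
+    """
+    # Union of all test images across the 12 tasks.
+    test_images = {ann["image_id"] for anns in test_sets.values() for ann in anns}
+
+    cleaned = {}
+    for task, anns in train_val_sets.items():
+        kept = [ann for ann in anns if ann["image_id"] not in test_images]
+        print(f"{task}: dropped {len(anns) - len(kept)} of {len(anns)} train/val instances")
+        cleaned[task] = kept
+    return cleaned
+```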
+
+# 3. Approach
+
+# 3.1. Base Architecture
+
+There has been a flurry of recent work developing general vision-and-language model architectures that are amenable to large-scale self-supervised pretraining [1, 23, 24, 27, 43, 45, 54]. By pretraining general representations and then finetuning on single downstream tasks, these models set the state-of-the-art on many tasks. For the base architecture in our experiments, we take the ViLBERT model proposed by Lu et al. [27]. We describe it here briefly.
+
+At the interface level, ViLBERT takes as input an image $I$ and text segment $Q$ represented as the sequence $\{\mathrm{IMG}, v_1, \ldots, v_T, \mathrm{CLS}, w_1, \ldots, w_T, \mathrm{SEP}\}$ where $\{v_i\}_{i=1}^T$ are image region features [2], $\{w_j\}_{j=1}^T$ are word tokens, and the IMG, CLS, and SEP tokens are special markers. The model then outputs embeddings for each input $\{h_{v_i}\}_{i=1}^T$ , $\{h_{w_j}\}_{j=1}^T$ , $h_{\mathrm{IMG}}$ , $h_{\mathrm{CLS}}$ , and $h_{\mathrm{SEP}}$ . As in [27], we take $h_{\mathrm{IMG}}$ and $h_{\mathrm{CLS}}$ as holistic image and text representations.
+
+Internally, ViLBERT consists of two parallel BERT-style [14] models operating over image regions and text segments. Each stream is a series of transformer blocks (TRM) [48] connected by co-attentional transformer layers (CoTRM) which enable information exchange between modalities. We use the default parameter setting, which has 6/12 layers of TRM for visual / linguistic streams respectively.
+
+Like many of the models of this class, ViLBERT is pretrained on the Conceptual Captions dataset [39] with two 'proxy' tasks: masked multi-modal modelling and multi-modal alignment prediction. The first randomly masks approximately $15\%$ of both words and image tokens and reconstructs them given the remaining inputs. The latter tasks the model with predicting whether an image and caption correspond or not. After pretraining, the model can be finetuned for strong performance on various downstream tasks.
+
+We make two important modifications to this pretraining process. First, when masking visual regions we also mask other regions with significant overlap ( $>0.4$ IoU) to avoid leaking visual information. This forces the model to rely more heavily on language to predict image content. Second, we do not enforce the masked multi-modal modelling loss when sampling a negative (unmatching) caption for multi-modal alignment prediction. This effectively removes the noise introduced by negative samples. While orthogonal to our primary contribution of multi-task learning, we found these modifications make the baseline model more effective. For further discussion, see the supplemental material. All models we present are first pretrained in this manner.
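+
+A sketch of the modified region masking, assuming axis-aligned boxes in (x1, y1, x2, y2) format and using the roughly 15% masking rate and 0.4 IoU threshold quoted above; everything else (names, data layout) is illustrative:
+
+```python
+import random
+
+def box_iou(a, b):
+    """IoU of two boxes in (x1, y1, x2, y2) format."""
+    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
+    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
+    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
+    area_a = (a[2] - a[0]) * (a[3] - a[1])
+    area_b = (b[2] - b[0]) * (b[3] - b[1])
+    union = area_a + area_b - inter
+    return inter / union if union > 0 else 0.0
+
+def mask_regions(boxes, mask_prob=0.15, iou_thresh=0.4):
+    """Return indices of image regions to mask for masked multi-modal modelling.
+
+    First sample ~15% of regions, then also mask any region overlapping a
+    sampled region by more than `iou_thresh` IoU, so the model cannot recover
+    a masked region's content from a near-duplicate proposal.
+    """
+    sampled = {i for i in range(len(boxes)) if random.random() < mask_prob}
+    masked = set(sampled)
+    for i in sampled:
+        for j, box in enumerate(boxes):
+            if j not in masked and box_iou(boxes[i], box) > iou_thresh:
+                masked.add(j)
+    return sorted(masked)
+```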
+
+# 3.2. Multi-Task Learning
+
+We consider a simple multi-task model where each task has a task-specific 'head' network that branches off a common, shared 'trunk' ViLBERT model. As such, we learn shared trunk parameters $\theta_{s}$ and a set of task-specific layers $\{\theta_t\}_{t=1}^{\mathcal{T}}$ for $\mathcal{T}$ tasks. Our goal is to learn parameters $\theta_{s} \cup \{\theta_{t}\}_{t=1}^{\mathcal{T}}$ that minimize the loss across all tasks. Details on heads and other modifications follow.
+
+Task Token. While relying on the same groundings, different tasks may still require the model to process inputs differently - e.g. referring expressions just require grounding while VQA must follow grounding with additional reasoning. To enable this, we augment the query with a task token $\text{TASK}_t$ such that the new input format is $\{\text{IMG}, v_1, \ldots, v_n, \text{CLS}, \text{TASK}_t, w_1, \ldots, w_m, \text{SEP}\}$ . The architecture can then leverage this task information in a bottom-up manner. In what follows, we describe the task-specific heads by task groups.
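+
+To make the parameter split concrete, below is a minimal structural sketch of the shared trunk, the per-task heads, and the learned TASK$_t$ embedding. The trunk's call signature and all names are illustrative assumptions, not the released implementation; the appropriate head from `heads` is applied to the trunk outputs as defined per task group in the following paragraphs.
+
+```python
+import torch.nn as nn
+
+class MultiTaskViLBERT(nn.Module):
+    """Shared ViLBERT trunk plus per-task heads (structural sketch)."""
+
+    def __init__(self, trunk, heads, num_task_tokens, hidden_dim):
+        super().__init__()
+        self.trunk = trunk                       # shared parameters θ_s
+        self.heads = nn.ModuleDict(heads)        # task-specific parameters θ_t
+        self.task_token = nn.Embedding(num_task_tokens, hidden_dim)
+
+    def forward(self, image_feats, text_ids, task_id):
+        # TASK_t is inserted right after CLS in the text stream, so the trunk
+        # can condition its processing on the task in a bottom-up manner.
+        task_vec = self.task_token(task_id)
+        # Returns per-region embeddings and the holistic h_IMG / h_CLS vectors.
+        return self.trunk(image_feats, text_ids, task_vec)
+```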
+
+Vocab-Based VQA Output: We compute an overall image-query representation as an element-wise product between the holistic $h_{\mathrm{IMG}}$ and $h_{\mathrm{CLS}}$ representations. As in [2, 17], we treat vocab-based VQA as a multi-label classification task - assigning a soft target score to each answer based on its relevancy to the ground truth answer. We compute scores for a set of pre-defined answers $A$ by using a two-layer MLP on top of the overall representation:
+
+$$
+P_{v}(A \mid I, Q) = \sigma\left(\mathrm{MLP}\left(h_{\mathrm{IMG}} \odot h_{\mathrm{CLS}}\right)\right) \tag{1}
+$$
+
+where $\sigma$ is the sigmoid function. Due to differences in answer vocabularies, VQA and VG QA share the MLP and answer vocabulary while GQA learns separate ones.
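+
+A minimal sketch of this head in PyTorch. Hidden and vocabulary sizes are placeholders, and the binary cross-entropy training against the soft answer-relevance targets follows standard VQA practice rather than anything specified above:
+
+```python
+import torch
+import torch.nn as nn
+
+class VocabVQAHead(nn.Module):
+    """Eq. (1): sigmoid(MLP(h_IMG ⊙ h_CLS)) over a fixed answer vocabulary."""
+
+    def __init__(self, hidden_dim=1024, num_answers=3000):   # placeholder sizes
+        super().__init__()
+        self.mlp = nn.Sequential(
+            nn.Linear(hidden_dim, 2 * hidden_dim),
+            nn.ReLU(),
+            nn.Linear(2 * hidden_dim, num_answers),
+        )
+
+    def forward(self, h_img, h_cls):
+        logits = self.mlp(h_img * h_cls)   # element-wise product, then 2-layer MLP
+        return torch.sigmoid(logits)       # independent relevance score per answer
+
+# Training sketch: binary cross-entropy against the soft target scores.
+# loss = nn.BCELoss()(head(h_img, h_cls), soft_targets)
+```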
+
+Image Retrieval Output: Using the same overall representation, we compute an alignment score between image-caption pairs as:
+
+$$
+\mathrm{Rel}(I, Q) = W_{i}\left(h_{\mathrm{IMG}} \odot h_{\mathrm{CLS}}\right) \tag{2}
+$$
+
+where $W_{i} \in \mathbb{R}^{d \times 1}$ is shared across the COCO and Flickr30k image retrieval tasks. As in [27], we train with a 4-way multiple-choice objective against hard negatives that are selected offline and then fixed. Recent work has used online hard-negative mining [8, 23] but this is costly to compute.
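+
+A sketch of the retrieval head and its 4-way multiple-choice objective, under the assumption that each training instance is packed as the true image-caption pair plus three fixed, offline-mined hard negatives; batch layout and names are illustrative:
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class RetrievalHead(nn.Module):
+    """Eq. (2): Rel(I, Q) = W_i (h_IMG ⊙ h_CLS), shared by COCO and Flickr30k."""
+
+    def __init__(self, hidden_dim=1024):
+        super().__init__()
+        self.w_i = nn.Linear(hidden_dim, 1, bias=False)
+
+    def forward(self, h_img, h_cls):
+        return self.w_i(h_img * h_cls).squeeze(-1)   # one alignment score per pair
+
+def retrieval_loss(head, h_img, h_cls):
+    # h_img, h_cls: (batch, 4, hidden_dim); option 0 is the true image-caption
+    # pair, options 1-3 are the fixed hard negatives.
+    scores = head(h_img, h_cls)                                      # (batch, 4)
+    target = torch.zeros(scores.size(0), dtype=torch.long, device=scores.device)
+    return F.cross_entropy(scores, target)
+```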
+
+Referring Expressions Output: We rerank a set of region proposals [50] given the referring expression. We pass the final representation $h_{v_i}$ for each image region $i$ into a learned projection $W_r \in \mathbb{R}^{d \times 1}$ to predict a matching score.
+
+$$
+\mathrm{Rel}\left(v_{i}, Q\right) = W_{r} h_{v_{i}} \tag{3}
+$$
+
+Note that $Q$ may be either a phrase, question, or dialog depending on the task (RefCOCO(+/g), Visual7W, GuessWhat). $W_{r}$ is shared across all the referring expression tasks.
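+
+A sketch of this scoring head, assuming the per-region output embeddings $h_{v_i}$ are stacked into a single tensor; the predicted region is the argmax over proposal scores (names are illustrative):
+
+```python
+import torch.nn as nn
+
+class ReferringExpressionHead(nn.Module):
+    """Eq. (3): score each region proposal, Rel(v_i, Q) = W_r h_{v_i}.
+
+    One projection W_r is shared across RefCOCO(+/g), Visual7W, and GuessWhat.
+    """
+
+    def __init__(self, hidden_dim=1024):
+        super().__init__()
+        self.w_r = nn.Linear(hidden_dim, 1, bias=False)
+
+    def forward(self, h_regions):
+        # h_regions: (num_proposals, hidden_dim) output embeddings h_{v_i}
+        return self.w_r(h_regions).squeeze(-1)   # (num_proposals,) matching scores
+```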
+
+Multi-modal Verification Output: Taking $\mathrm{NLVR}^2$ as an example, the input is a concatenation of two images ($I_0$ and $I_{1}$) and a statement $Q$; the model must judge the validity of the statement given the images. We consider this a classification problem given an embedding that encodes the two image-statement pairs $(I_0, Q)$ and $(I_1, Q)$. The output probability is predicted by a 2-layer MLP with softmax:
+
+$$
+P_{v}\left(C \mid I_{0}, I_{1}, Q\right) = \mathrm{softmax}\left(\mathrm{MLP}\left(\left[\begin{array}{l} h_{\mathrm{IMG}}^{0} \odot h_{\mathrm{CLS}}^{0} \\ h_{\mathrm{IMG}}^{1} \odot h_{\mathrm{CLS}}^{1} \end{array}\right]\right)\right) \tag{4}
+$$
+
+where $[\,]$ is concatenation. For SNLI-VE, the input is a single image and statement. We thus learn a separate classifier of the same form that predicts the relationship (entailment, neutral, contradiction) from the inputs.
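+
+A sketch of the $\mathrm{NLVR}^2$ head of Eq. (4); an SNLI-VE variant would take a single image-statement pair and use three output classes. Hidden sizes and names are illustrative:
+
+```python
+import torch
+import torch.nn as nn
+
+class VerificationHead(nn.Module):
+    """Eq. (4): softmax(MLP([h0_IMG ⊙ h0_CLS ; h1_IMG ⊙ h1_CLS])) for NLVR2."""
+
+    def __init__(self, hidden_dim=1024, num_classes=2):
+        super().__init__()
+        self.mlp = nn.Sequential(
+            nn.Linear(2 * hidden_dim, hidden_dim),
+            nn.ReLU(),
+            nn.Linear(hidden_dim, num_classes),
+        )
+
+    def forward(self, h_img0, h_cls0, h_img1, h_cls1):
+        # Concatenate the two image-statement pair representations.
+        joint = torch.cat([h_img0 * h_cls0, h_img1 * h_cls1], dim=-1)
+        return torch.softmax(self.mlp(joint), dim=-1)
+```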
+
+# 3.3. Large-Scale Multitask Training
+
+With 6 task heads, 12 datasets, and over 4.4 million individual training instances – training our multi-task ViLBERT model is a daunting proposition. Multi-task learning (especially at this scale) poses significant challenges as learning objectives have complex and unknown dynamics and may compete [41]. Further, vision-and-language datasets vary significantly in size and difficulty. For instance, a single epoch of VG (our largest dataset) corresponds to 19.8 epochs of RefCOCOg (our smallest). Likewise, when trained in isolation RefCOCOg converges in 5K iterations whereas VQAv2 takes 84K iterations (over 16 times more). Below, we describe the details of our multi-task training approach and techniques to overcome these challenges.
+
+Pretraining. All our models are pretrained on the Conceptual Captions dataset [39], including our self-supervised task modifications as described in Sec. 3.1.
+
+Round-Robin Batch-Level Sampling. We consider a round-robin batch-level sampling regime that cycles through each task from the beginning of multi-task training. As such, one multi-task iteration consists of each task forwarding a batch and updating parameters in sequence.
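+
+A minimal sketch of such a round-robin loop over per-task dataloaders; the iterator handling (wrapping smaller datasets around) is an illustrative choice, not part of the paper's specification:
+
+```python
+def round_robin_steps(task_loaders, num_iterations):
+    """Yield (task, batch) pairs; one multi-task iteration = one batch per task."""
+    iterators = {t: iter(loader) for t, loader in task_loaders.items()}
+    tasks = list(task_loaders)
+    for _ in range(num_iterations):
+        for t in tasks:
+            try:
+                batch = next(iterators[t])
+            except StopIteration:              # smaller datasets wrap around
+                iterators[t] = iter(task_loaders[t])
+                batch = next(iterators[t])
+            yield t, batch
+
+# Usage sketch: each yielded batch triggers a forward/backward pass and an
+# optimizer update for that task before moving on to the next task.
+```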
+
+Dynamic Stop-and-Go. As noted earlier, different tasks have different difficulties and dataset sizes. Consequently, simply cycling through all tasks may drastically overtrain smaller tasks, leading to overfitting. Typically, early stopping provides a strong defense against this phenomenon; however, stopping a task in multi-task training introduces problems with catastrophic forgetting as the base network drifts over time due to other tasks. We introduce an intuitive but effective dynamic stop-and-go (DSG) mechanism to avoid these problems. We monitor the validation loss $s_t$ of each task $t$, computing it once per task epoch. If performance improvement is less than 0.1% over 2 epochs, we consider the task Converged and shift it into stop mode. In DSG stop mode, a task only updates every iter-gap $(\Delta)$ iterations. If validation performance degrades by 0.5% from the task's best measured performance while in stop mode, the task is considered Diverged and is returned to DSG go. This procedure is shown in Algorithm 1.
+
+Curriculum Learning. Inspired by prior multi-task literature [4, 31], we experimented with both curriculum and anti-curriculum strategies based on task difficulty. Specifically, for anti-curriculum we first train on the slowest-converging task-group G1 (Vocab-Based VQA) before starting full round-robin multi-task training. Inversely, for the curriculum setting we first train on our fastest-converging task-group G3 (Referring Expressions). Different from previous observations [31, 33], we found that using no curriculum leads to superior performance when combined with the other strategies proposed in this section.
+
+Algorithm 1: DSG for Multi-Task Learning
+
+    n_t ← number of iterations per epoch for task t
+    Δ ← size of gap between iterations in stop mode
+    DSG_t ← go for all tasks t
+    for i ← 1 to MaxIter:
+        for t ∈ Tasks:
+            if DSG_t = go or (DSG_t = stop and i mod Δ = 0):
+                compute task loss L_t(θ) and gradient ∇_t(θ)
+                update θ ← θ − ε∇_t(θ), where θ = θ_s ∪ θ_t
+            if i mod n_t = 0:
+                compute validation score s_t on task t
+                if DSG_t = go and Converged(s_t):
+                    DSG_t ← stop
+                else if DSG_t = stop and Diverged(s_t):
+                    DSG_t ← go
+
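+In code, the scheduler of Algorithm 1 reduces to a small amount of per-task bookkeeping. A minimal sketch, assuming higher-is-better validation scores and the 0.1% / 0.5% thresholds quoted above; the class and method names are ours, not the released implementation:
+
+```python
+class DynamicStopAndGo:
+    """Per-task go/stop state following Algorithm 1 (sketch)."""
+
+    def __init__(self, tasks, iter_gap=4):
+        self.iter_gap = iter_gap
+        self.state = {t: "go" for t in tasks}
+        self.history = {t: [] for t in tasks}   # validation scores per task epoch
+
+    def should_update(self, task, iteration):
+        # In stop mode, the task only updates every `iter_gap` iterations.
+        return self.state[task] == "go" or iteration % self.iter_gap == 0
+
+    def record_validation(self, task, score):
+        # Assumes higher-is-better validation scores (e.g. accuracy).
+        hist = self.history[task]
+        hist.append(score)
+        if self.state[task] == "go":
+            # Converged: less than 0.1% relative improvement over 2 epochs.
+            if len(hist) >= 3 and hist[-1] < hist[-3] * 1.001:
+                self.state[task] = "stop"
+        else:
+            # Diverged: more than 0.5% drop from the best score seen so far.
+            if score < max(hist) * 0.995:
+                self.state[task] = "go"
+```
+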
+Setting Multi-Task Hyperparameters. We follow a simple design philosophy – identify simple heuristics based on hyper-parameters tuned for each task in single-task training. This significantly reduces the burden of searching for joint-training hyper-parameters. See the supplement for a full list of per-task learning rates, batch sizes, and other settings. Our code has been made available.
+
+Batch Size: For multi-task, we keep the batch size tuned for single-task training for each task.
+
+Warm-up Duration: We found it important to set the warm-up duration relative to the largest dataset. Specifically, we run linear warm-up over $\eta * N$ iterations where $N$ is the maximum number of iterations taken to train any dataset in the single-task setting. We observe significant performance degradation for harder tasks when the warm-up is shorter. We set $\eta$ to 0.1 for our experiments.
+
+Loss Scaling: Our model has shared and task-specific parameters, and we found it important to maintain separate learning rates. For the shared base model, we set the base learning rate to the minimum over all single-task learning rates. To accommodate variable learning rates for each dataset, we scale the task loss for each dataset by the ratio of the task's target learning rate to the base learning rate.
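+
+Taken together, these heuristics can be derived mechanically from the single-task settings. A sketch, assuming each single-task configuration records its tuned learning rate and iteration count (the field names are illustrative):
+
+```python
+def multitask_hyperparams(single_task_cfgs, eta=0.1):
+    """Derive multi-task settings from per-task single-task hyperparameters.
+
+    `single_task_cfgs` maps task -> {'lr': ..., 'iters': ...} (illustrative).
+    Returns the shared base learning rate, a per-task loss scale, and the
+    number of linear warm-up iterations.
+    """
+    base_lr = min(cfg["lr"] for cfg in single_task_cfgs.values())
+    loss_scale = {t: cfg["lr"] / base_lr for t, cfg in single_task_cfgs.items()}
+    warmup_iters = int(eta * max(cfg["iters"] for cfg in single_task_cfgs.values()))
+    return base_lr, loss_scale, warmup_iters
+
+# During training, the loss of task t is multiplied by loss_scale[t] so that,
+# with the shared base_lr, each task effectively trains at its own target rate.
+```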
+
+# 4. Experiments and Results
+
+# 4.1. Single-Task Performance
+
+To establish baseline performance for the ViLBERT architecture that forms the backbone of our multi-task experiments, we first train single-task models on top of the base ViLBERT architecture (Section 3) for each of our 12 datasets. Rows 1 and 2 in Table 2 show the performance of these models trained on the full and cleaned datasets, respectively.
+
+| # | Model | Clean | VQAv2 test-dev (G1) | GQA test-dev (G1) | VG QA val (G1) | COCO IR test R1 (G2) | Flickr30k IR test R1 (G2) | RefCOCO test (G3) | RefCOCO+ test (G3) | RefCOCOg test (G3) | Visual7W test (G3) | GuessWhat test (G3) | NLVR2 testP (G4) | SNLI-VE test (G4) | # params (# models) | All Tasks Average |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 1 | Single-Task (ST) | | 71.82 | 58.19 | 34.38 | 65.28 | 61.14 | 78.63 | 71.11 | 72.24 | 80.51 | 62.81 | 74.25 | 76.72 | 3B (12) | 67.25 |
+| 2 | Single-Task (ST) | ✓ | 71.24 | 59.09 | 34.10 | 64.80 | 61.46 | 78.17 | 69.47 | 72.21 | 80.51 | 62.53 | 74.25 | 76.53 | 3B (12) | 67.03 |
+| 3 | Group-Tasks (GT) | ✓ | 72.03 | 59.60 | 36.18 | 65.06 | 66.00 | 80.23 | 72.79 | 75.30 | 81.54 | 64.78 | 74.62 | 76.52 | 1B (4) | 68.72 |
+| 4 | All-Tasks (AT) | ✓ | 72.57 | 60.12 | 36.36 | 63.70 | 63.52 | 80.58 | 73.25 | 75.96 | 82.75 | 65.04 | 78.44 | 76.78 | 270M (1) | 69.08 |
+| 5 | All-Tasks w/o G4 | ✓ | 72.68 | 62.09 | 36.74 | 64.88 | 64.62 | 80.76 | 73.60 | 75.80 | 83.03 | 65.41 | - | - | 266M (1) | - |
+| 6 | GT finetune → ST | ✓ | 72.61 | 59.96 | 35.81 | 66.26 | 66.98 | 79.94 | 72.12 | 75.18 | 81.57 | 64.56 | 74.47 | 76.34 | 3B (12) | 68.81 |
+| 7 | AT finetune → ST | ✓ | 72.92 | 60.48 | 36.56 | 65.46 | 65.14 | 80.86 | 73.45 | 76.00 | 83.01 | 65.15 | 78.87 | 76.73 | 3B (12) | 69.55 |
+| 8 | AT finetune → ST | | 73.15 | 60.65 | 36.64 | 68.00 | 67.90 | 81.20 | 74.22 | 76.35 | 83.35 | 65.69 | 78.87 | 76.95 | 3B (12) | 70.24 |
+
+Table 2: Comparison of our multi-task models to single-task performance. We find multi-task training (rows 3-5) provides significant gains over single-task training (rows 1-2) while reducing the parameter count from over 3 billion to 270 million. Further, by following multi-task training with task-specific fine-tuning (rows 6-8), further gains can be made at the cost of increased parameters.
+
+| Relative Perf. | G1 | G2 | G3 | G4 | Avg. | G1&G2 | G1&G3 | G1&G4 | G2&G3 | G2&G4 | G3&G4 | Avg. |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| G1 (VQAv2) | - | 0.38% | 0.38% | -0.20% | 0.19% | - | - | - | 0.63% | -0.08% | 0.18% | 0.24% |
+| G2 (Flickr30k) | 0.46% | - | 0.23% | -4.13% | -1.15% | - | 1.24% | 0.49% | - | - | -4.36% | -0.88% |
+| G3 (Visual7W) | 0.39% | 0.78% | - | 0.24% | 0.47% | 0.86% | - | 0.19% | - | 0.29% | - | 0.44% |
+| G4 (NLVR2) | 2.29% | 1.47% | 0.67% | - | 1.48% | 3.69% | 3.22% | - | 2.73% | - | - | 3.21% |
+| Avg. | 1.04% | 0.88% | 0.43% | -1.36% | - | 2.27% | 2.23% | 0.34% | 1.68% | 0.10% | -2.09% | - |
+
+Table 3: Pair-wise (left columns) and triple-wise (right columns) inter-group representative task analysis. Each entry is the relative performance change from single-task training for the row-task when jointly trained with the column-task(s).
+
+As expected, reducing the training set size through cleaning results in lower performance in most cases. Our improvements to the pretraining objective (Sec 3.1) result in better downstream task performance (71.82 vs. 70.55 on VQA and 61.46 vs. 58.20 on Flickr30k Recall@1). See the supplement for a full comparison. Overall, our base architecture is competitive with prior work and a good starting point for multi-task learning.
+
+# 4.2. Intra-Group Multi-task Performance
+
+We begin with the most intuitive multi-task setting – jointly training tasks within the same groups. As grouped tasks are typically highly related, this is akin to some existing data augmentation practices (e.g. adding Visual Genome (VG) QA data when training VQA). Note this corresponds to four separate multi-task models – one for each group.
+
+Table 2 row 3 shows the result of intra-group multi-task training. Comparing with single-task models trained on the same data (row 2), we see meaningful improvements of between 0.37 $(\mathrm{NLVR}^2)$ and 4.54 (Flickr30k retrieval) points for 11 out of 12 tasks (only SNLI-VE did not improve). Comparing to row 1, we see that intra-group multi-task training overcomes the data loss from cleaning, with an average score of 68.72, outperforming the single-task models trained on the full datasets which have an average score of 67.25. Further, the total number of parameters drops by a factor of 3 – going from 12 full models to only 4.
+
+# 4.3. Inter-Group Multi-task Performance
+
+Representative Task Analysis. We next consider the interplay between different task-groups. For efficiency, we consider multi-task training with representative tasks from each group - specifically VQA (G1), Flickr30k retrieval (G2), Visual7W (G3), and NLVR$^2$ (G4). These were selected to maximize diversity in underlying image sources. We examine their relationships by jointly training all pairs and triplets of tasks under our multi-task training approach.
+
+Table 3 (left) shows the results of training each representative task pair. Each entry is the percent change from single-task performance for the row-task when jointly trained with the column-task. As such, the Avg. row (bottom) shows the mean impact each column-task has on other tasks, and likewise the Avg. column (right) shows the mean impact other tasks have on each row-task. For instance, we find that adding VQA (G1) benefits other tasks with an average improvement of $+1.04\%$ . Interestingly, adding $\mathrm{NLVR}^2$ (G4) degrades other tasks on average $(-1.36\%)$ while making significant gains itself $(+1.48\%)$ . This is primarily due to a $-4.13\%$ interaction with G2. Table 3 (right) shows all task triplets. Gains in the paired-experiments are not simply additive. In the pair-wise analysis, G3 gained $+0.39\%$ and $+0.78\%$ from G1 and G2 respectively. As before, G4 has some strong negative effects on other groups ($-4.36\%$ for G2 with G3 & G4) but these effects can be regulated by other tasks ($+0.49\%$ for G2 with G1 & G4).
+
+| Task | Split | SOTA | UNITER [8] (BERT_B) | UNITER [8] (BERT_L) | Ours_AT (BERT_B) | Ours_AT→ST (BERT_B) |
+| --- | --- | --- | --- | --- | --- | --- |
+| VQA | test-dev | - | 72.27 | 73.24 | 72.57 | 73.15 |
+| VG QA | val | - | - | - | 36.36 | 36.64 |
+| GQA | test-dev | 60.00 [45] | - | - | 60.12 | 60.65 |
+| IR COCO | test (R1) | 68.50 [23] | - | - | 63.70 | 68.00 |
+| IR Flickr30k | test (R1) | - | 71.50 | 73.66 | 63.52 | 67.90 |
+| RefCOCO | test | - | 80.21 | 80.88 | 80.58 | 81.20 |
+| RefCOCO+ | test | - | 72.90 | 73.73 | 73.25 | 74.22 |
+| RefCOCOg | test | - | 74.41 | 75.77 | 75.96 | 76.35 |
+| Visual7W | test | 72.53 [16] | - | - | 82.75 | 83.35 |
+| GuessWhat | test | 61.30 [13] | - | - | 65.04 | 65.69 |
+| NLVR2 | testP | - | 77.87 | 79.50 | 78.44 | 78.87 |
+| SNLI-VE | test | - | 78.02 | 78.98 | 76.78 | 76.95 |
+| # params (# models) | | | 602M (7 x 86M) | 2.1B (7 x 303M) | 270M (1 x 270M) | 3B (12 x 250M) |
+
+Table 4: Comparison to recent SOTA. For image retrieval (IR) COCO and Flickr we report R1 scores on the 1K test set.
+
+Full Multi-task Results. We move to our main result - a single model trained on all 12 datasets. The results of this All-Tasks (AT) model are shown in Table 2 row 4. This model outperforms independent single-task models trained on the same data (row 2) for 11 out of 12 tasks and improves the average score by 2.05 points (69.08 vs. 67.03). We reiterate for emphasis, average performance improves by 2.05 points while reducing the number of parameters from over 3 billion to 270 million (a $12 \times$ reduction). The same holds for the comparison with single-task models trained on the full datasets (row 1), by a similar margin of 1.83 points.
+
+Our AT model also outperforms the Group-Task (GT) models (row 3) despite having $4\mathrm{x}$ fewer parameters (avg. 69.08 vs 68.72). This implies that despite their diversity, tasks across different groups can benefit from joint training.
+
+We observed from the representative task analysis that G4 tends to negatively affect other groups during joint training. To validate this observation on all tasks, we train an All-Tasks model without G4 (row 5). This model achieves a higher average score of 67.96 for $\mathrm{G1 + G2 + G3}$ compared to the full AT model's 67.39. NLVR$^2$ (G4) presents two images per description and often one matches while the other does not. Despite the alignment with one image, the instance as a whole is negative. We speculate that this supervision may interfere with the standard caption-image alignment objective in Flickr30k.
+
+# 4.4. Multi-Task Learning as Pretraining
+
+For some applications, single-task performance may be paramount and justify storing a task-specific model. Even then, fine-tuning from a multi-task trained model may allow the model to take advantage of the additional, diverse supervision captured during multi-task training. Following [26], we finetune our trained multi-task models (GT and AT) on each downstream task and show results in Table 2. Rows 6 and 7 show that finetuning from the all-task model (AT) outperforms finetuning from the group-task models (GT), with an average score of 69.51 vs. 68.81.
+
+
+| Method | VQA | COCO Retrieval R1 | COCO Retrieval R5 | COCO Retrieval R10 | Flickr Retrieval R1 | Flickr Retrieval R5 | Flickr Retrieval R10 | FG R1 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| OmniNet [36] | 55.76 | - | - | - | - | - | - | - |
+| HDC [33] | 69.28 | 57.40 | 88.40 | 95.60 | 56.10 | 82.90 | 89.40 | 57.39 |
+| Ours | 72.70 | 65.16 | 91.00 | 96.20 | 65.06 | 88.66 | 93.52 | 64.61 |
+Table 5: Comparison with other multi-task models. VQA score is on test-dev and the retrieval tasks on their respective 1K test split. For Flickr Grounding (FG) we report R1 on Flickr30K test.
+
+For comparison with our multi-task models, these are finetuned on the cleaned datasets which are $11\%$ smaller on average. To compare to prior work, we also finetune on the full dataset for individual tasks (row 8) and observe further improvements. Recall that our multi-task model was trained on cleaned data so there is no possibility of test leakage here. These models outperform single-task models without multi-task pretraining (row 1) by a large margin (70.24 vs. 67.25 average score).
+
+# 4.5. Comparison with Existing Work
+
+In Table 4 we compare with the existing state-of-the-art. We draw special comparison with the recent UNITER [8] architecture as it is similar to our base ViLBERT model. Like ViLBERT, UNITER is a general BERT-based vision-and-language architecture pretrained through self-supervised tasks and then finetuned for each downstream task. We show two UNITER columns corresponding to their underlying BERT model - either Base (B) or Large (L). Our ViLBERT model uses the smaller $\mathrm{BERT}_{\mathrm{B}}$. Our single all-task model $(\mathrm{Ours}_{\mathrm{AT}})$ achieves competitive performance to state-of-the-art task-specific models. Our single-task finetuned models $(\mathrm{Ours}_{\mathrm{AT\rightarrow ST}})$ surpass the state-of-the-art on 7 out of 12 tasks.
+
+Table 5 compares our method with other recently proposed multi-modal, multi-task learning approaches - OmniNet [36] and Hierarchical Dense Co-Attention (HDC) [33]. OmniNet is trained on part-of-speech tagging, image captioning, visual question answering, and video activity recognition, while HDC is trained on image caption retrieval, visual question answering, and visual grounding. We train a multi-task model on the same tasks and cleaned datasets used in HDC [33]. Flickr Grounding is a new task that we include for this comparison. Our multi-task model outperforms these approaches by a large margin.
+
+# 5. Analysis and Ablation Study
+
+Ablations on task token and training strategies. To verify our design choices, we perform ablations for different task token granularities and multi-task training strategies. The results are shown in Table 6. We report average group and overall average performance. A detailed breakdown for each task can be found in the supplement.
+
+For task tokens, our default setting is with a different task token per dataset (12 total, Row 1).
+
+| # | Method | Task Token | Dynamic Stop-and-Go | G1 | G2 | G3 | G4 | All Tasks Average |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| 1 | token per dataset (our AT) | ✓ | ✓ | 56.35 | 63.61 | 75.52 | 77.61 | 69.08 |
+| 2 | token per head | ✓ | ✓ | 55.95 | 61.48 | 75.35 | 77.37 | 68.52 |
+| 3 | w/o task token | | ✓ | 55.67 | 62.55 | 75.38 | 76.73 | 68.53 |
+| 4 | w/o DSG | ✓ | | 55.50 | 62.92 | 75.24 | 76.31 | 68.52 |
+| 5 | w/ curriculum | | | 54.68 | 61.21 | 75.19 | 76.70 | 67.24 |
+| 6 | w/ anti-curriculum | | | 55.82 | 59.58 | 73.69 | 75.94 | 67.98 |
+| 7 | vanilla multitask | | | 54.09 | 61.45 | 75.28 | 76.71 | 67.92 |
+
+Table 6: Ablations on our design choices and comparison to curriculum and anti-curriculum learning multi-task approaches.
+
+We compare this with two ablations: one task token per output head (4 total, Row 2) and no task tokens (Row 3). We observe that task-specific tokens lead to better performance compared to head-based tokens (avg. 69.08 vs. 68.52) and no task tokens (avg. 69.08 vs. 68.53). This shows that task-aware feature embedding is useful even within the same output space; e.g. per-task tokens may help differentiate noun phrases and pointing questions in Referring Expression.
+
+For the multi-task training schedule, we compare our dynamic stop-and-go (DSG) (Row 3) with the Curriculum (Row 5) and Anti-Curriculum (Row 6) approaches discussed in Sec. 3. We consider convergence rate as a measure of task difficulty. For Curriculum, we first train tasks in G4 and then train all tasks together (easier $\longrightarrow$ harder). For Anti-Curriculum, we train G1 tasks first and then train on all tasks together (harder $\longrightarrow$ easier). Table 6 shows our dynamic stop-and-go training schedule outperforms anti-curriculum (avg. 68.52 vs. 67.98) and curriculum (avg. 68.53 vs. 67.24). Row 7 shows results of a 'vanilla' round-robin training scheme with no task tokens or training scheduling. The average score of vanilla multitask is close to anti-curriculum (67.92 vs. 67.98). Consistent with prior work [31], performance on harder tasks (G1) is worse compared to anti-curriculum. Our full training regime outperforms this significantly (avg. 69.08 vs. 67.92).
+
+Behavior of Dynamic Stop-and-Go training. To characterize our dynamic stop-and-go training scheme, we visualize the dynamic training schedule in Fig. 2 (left) – bold lines indicate normal go training and thin lines are stop states when datasets receive sparser updates at a fixed iteration gap (every 4th iteration here). We see that smaller datasets quickly converge and enter stop state training early. As the base model drifts over time, they periodically return to full go state training to adjust. Interestingly, after some cycles of this, they enter the stop state and continue with only sparse updates for the rest of training.
+
+Another aspect of dynamic stop-and-go training is the sparsity of updates in the stop state. Fig. 2 (right) shows the mean normalized accuracy for each group for multi-task models trained with different iteration gaps $(\Delta)$. We observe that raising $\Delta$ (i.e. updating more sparsely) improves performance initially but degrades for larger values. Absolute and per-task scores are provided in the supplement.
+
+
+Figure 2: Left: Visualization of dynamic stop-and-go during multi-task training. Solid lines indicate go mode while thin lines indicate stop mode. Right: Mean accuracy (normalized group-wise for easier comparison) for each group with different iter-gap $\Delta$ for dynamic stop-and-go.
+
+
+
+
+Multi-Task visual grounding consistency. Given the common shared base model, one question is whether multi-task models exhibit more consistent visual groundings than independent task-specific models. For example, does a model that correctly answers "What color is the largest dog?" also correctly ground the referring expression "largest dog"? To assess this, we consider 1500 images from the RefCOCO/+ test sets that also have VQA annotations such that for each image $I_{i}$ there are associated questions $\{q^{(i)}\}$ and referring expressions $\{r^{(i)}\}$ . To measure the overlap in visual concepts between a question $q_{j}^{(i)}$ and reference $r_{k}^{(i)}$ , we count overlapping nouns and adjectives (identified using a part-of-speech tagger [47]) and denote this $d(q_{j}^{(i)}, r_{k}^{(i)})$ . Armed with this notion of similarity, we consider each question-reference pair for each image (111,275 combinations in total) and compute a weighted accuracy. A pair is considered correct if the question was answered correctly and the referent was localized. Each pair is weighted by its overlap $d(q_{j}^{(i)}, r_{k}^{(i)})$ . Note that if $q_{j}^{(i)}$ and $r_{k}^{(i)}$ do not have any common visual concept ($d(q_{j}^{(i)}, r_{k}^{(i)}) = 0$), the correctness of this pair does not affect the overall metric.
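+
+A sketch of this weighted-accuracy computation, assuming the per-pair overlap counts and per-task correctness flags have already been computed; the field names are ours:
+
+```python
+def grounding_consistency(pairs):
+    """Overlap-weighted accuracy over question / referring-expression pairs.
+
+    `pairs` is an iterable of dicts with keys:
+      'overlap'      d(q, r): number of shared nouns/adjectives,
+      'qa_correct'   whether the question was answered correctly,
+      'ref_correct'  whether the referent was localized correctly.
+    Pairs with zero concept overlap get zero weight, so they do not affect
+    the metric.
+    """
+    num, den = 0.0, 0.0
+    for p in pairs:
+        w = p["overlap"]
+        num += w * float(p["qa_correct"] and p["ref_correct"])
+        den += w
+    return num / den if den > 0 else 0.0
+```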
+
+We evaluate our Single-Task (ST), All-Tasks (AT), and finetuned from All-Tasks (AT→ST) models on the proposed metric. AT consistently outperforms ST (58.30% vs. 55.40%) and AT→ST achieves the best performance (64.64%). This shows that our model trained on multiple tasks achieves better visual grounding consistency across different tasks. Further analysis can be found in the supplement.
+
+Regularizing effects of multi-task learning. We find multi-task training to have a regularizing effect on tasks which overfit when trained separately. In Fig. 4 we plot the training and validation curves for two tasks (SNLI-VE and Flickr Grounding) where single-task training overfits quickly. On the other hand, when trained in a multi-task setup with all other tasks, the validation score improves and there is no overfitting.
+
+Qualitative examples. Figure 3 shows example outputs of our models. Due to space limitations, we provide extensive visualizations in the supplement.
+
+
+Figure 3: Our single model $(\mathrm{Ours}_{\mathrm{AT}})$ can perform a multitude of V&L tasks: caption and image retrieval, question answering, grounding phrases, guessing image regions based on a dialog, verifying facts about a pair of images, natural language inference from an image, etc. Here we show outputs of our model for a variety of inputs (that mimic tasks from the 12 datasets it has been trained on).
+
+
+Figure 4: Multi-Task training acts as a regularizer.
+
+
+
+
+# 6. Related Work
+
+Multi-task learning. There has been substantial interest in multi-task learning [6, 38], i.e. training a single model for multiple tasks at once. Advances in multi-task learning have been made in the context of vision [5, 20, 32, 42, 52, 53], language [10, 25, 26, 31, 37], and robotics [18, 34, 46]. Among them, Standley et al. [41] study how different vision tasks are related to each other. Strezoski et al. [42] study layer-wise task routing for different vision tasks. McCann et al. [31] pose ten natural language processing (NLP) tasks as question answering tasks. MT-DNN [26] combines multi-task learning with pretraining [14] to improve the learning of text representations. Despite this progress, it is still challenging to train a single model on many tasks that can outperform or even match their single-task counterparts. To enhance the training scheme, BAM [9] applies knowledge distillation where single-task models teach the multi-task model. Raffel et al. [37] explore different sampling strategies for NLP tasks. We focus on multi-task learning for V&L tasks.
+
+Vision and language. While we address 12 V&L tasks in Sec. 2.1, we do not cover some families of tasks, including image and video captioning [7], visual dialog [12], embodied question answering [11], and instruction following [3]. Different from earlier work [16, 22, 28, 29, 50, 51, 55] which designs bespoke architectures for different tasks, recently proposed models for V&L [1, 8, 23, 24, 27, 43, 45, 54] provide a common architecture that can be pretrained using self-supervised losses and adapted to many vision and language tasks. However, these models still require task-specific finetuning, which may easily overfit on small datasets. Our single model jointly learns from multiple V&L tasks and achieves competitive performance. Further, multi-task training provides a better visiolinguistic representation for task-specific finetuning than self-supervised objectives.
+
+Multi-task V&L learning. Recent work [33, 36, 40] also explores multi-task learning in V&L. HDC [33] trains a multi-task network on multiple datasets and uses a hyperparameter search method to determine which layer output should be taken for each task. Our method does not need any hyperparameter search to choose outputs for different tasks and outperforms both [36] and [33]. [40] is a concurrent work that does multi-task training on 12 dialogue datasets (only two with images). Our work differs in that we focus on a variety of vision and language tasks.
+
+# 7. Conclusion
+
+In this work, we develop a training regime and experimental setting for large-scale, multi-modal, multi-task learning. As one part of this, we introduce a novel task scheduling approach to help avoid over- or under-training tasks with differing sizes or difficulties. Using this framework, we explore the relationships between 12 vision-and-language datasets - our single multi-task model outperforms 12 single-task models. We find multi-task training can lead to significant gains over independent task training. Further, we show that multi-task learning is an effective pretraining step for training state-of-the-art single-task models.
+
+Acknowledgement. The GaTech effort was supported in part by NSF, AFRL, DARPA, ONR YIPs, ARO PECASE, Amazon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor.
+
+# References
+
+[1] Chris Alberti, Jeffrey Ling, Michael Collins, and David Reitter. Fusion of detected objects in text for visual question answering. arXiv preprint arXiv:1908.05054, 2019. 1, 2, 8
+[2] Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In CVPR, pages 6077-6086, 2018. 3
+[3] Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In CVPR), 2018. 8
+[4] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pages 41-48. ACM, 2009. 4
+[5] Felix JS Bragman, Ryutaro Tanno, Sebastien Ourselin, Daniel C Alexander, and Jorge Cardoso. Stochastic filter groups for multi-task cnns: Learning specialist and generalist convolution kernels. In Proceedings of the IEEE International Conference on Computer Vision, pages 1385-1394, 2019. 8
+[6] Rich Caruana. Multitask learning. Machine learning, 28(1):41-75, 1997. 8
+[7] Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO captions: Data collection and evaluation server. CoRR, abs/1504.00325, 2015. 2, 8
+[8] Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. Uniter: Learning universal image-text representations. arXiv preprint arXiv:1909.11740, 2019. 3, 6, 8
+[9] Kevin Clark, Minh-Thang Luong, Urvashi Khandelwal, Christopher D Manning, and Quoc V Le. Bam! bornagain multi-task networks for natural language understanding. arXiv preprint arXiv:1907.04829, 2019. 8
+[10] Ronan Collobert and Jason Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, pages 160-167. ACM, 2008. 8
+[11] Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra. Embodied Question Answering. In CVPR, 2018. 8
+[12] Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jose M. F. Moura, Devi Parikh, and Dhruv Batra. Visual dialog. In CVPR, 2017. 8
+[13] Harm De Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, and Aaron Courville. GuessWhat?! Visual object discovery through multi-modal dialogue. In CVPR, 2017. 2, 6
+[14] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. 3, 8
+
+[15] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In CVPR, 2017. 2
+[16] Ronghang Hu, Marcus Rohrbach, Jacob Andreas, Trevor Darrell, and Kate Saenko. Modeling relationships in referential expressions with compositional modular networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. 6, 8
+[17] Drew A Hudson and Christopher D Manning. Gqa: a new dataset for compositional question answering over real-world images. arXiv preprint arXiv:1902.09506, 2019. 2, 3
+[18] Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint arXiv:1611.05397, 2016. 8
+[19] Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. ReferItGame: Referring to objects in photographs of natural scenes. In EMNLP, 2014. 2
+[20] Iasonas Kokkinos. UberNet: Training a universal convolutional neural network for low-, mid-, and high-level vision using diverse datasets and limited memory. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6129-6138, 2017. 8
+[21] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. IJCV, 123(1):32-73, 2017. 2
+[22] Kuang-Huei Lee, Xi Chen, Gang Hua, Houdong Hu, and Xiaodong He. Stacked cross attention for image-text matching. In Proceedings of the European Conference on Computer Vision (ECCV), pages 201-216, 2018. 8
+[23] Gen Li, Nan Duan, Yuejian Fang, Daxin Jiang, and Ming Zhou. Unicoder-vl: A universal encoder for vision and language by cross-modal pre-training. arXiv preprint arXiv:1908.06066, 2019. 1, 2, 3, 6, 8
+[24] Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. VisualBERT: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557, 2019. 1, 2, 8
+[25] Xiaodong Liu, Jianfeng Gao, Xiaodong He, Li Deng, Kevin Duh, and Ye-Yi Wang. Representation learning using multitask deep neural networks for semantic classification and information retrieval. 2015. 8
+[26] Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. Multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504, 2019. 6, 8
+[27] Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. arXiv preprint arXiv:1908.02265, 2019. 1, 2, 3, 8
+[28] Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. Hierarchical question-image co-attention for visual question answering. In Advances In Neural Information Processing Systems, pages 289–297, 2016. 8
+
+[29] Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. Neural baby talk. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7219-7228, 2018. 8
+[30] Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L Yuille, and Kevin Murphy. Generation and comprehension of unambiguous object descriptions. In CVPR, 2016. 2
+[31] Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730, 2018. 4, 7, 8
+[32] Ishan Misra, Abhinav Shrivastava, Abhinav Gupta, and Martial Hebert. Cross-stitch networks for multi-task learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3994-4003, 2016. 8
+[33] Duy-Kien Nguyen and Takayuki Okatani. Multi-task learning of hierarchical vision-language representation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10492-10501, 2019. 4, 6, 8
+[34] Emilio Parisotto, Jimmy Lei Ba, and Ruslan Salakhutdinov. Actor-mimic: Deep multitask and transfer reinforcement learning. arXiv preprint arXiv:1511.06342, 2015. 8
+[35] Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In ICCV, 2015. 2
+[36] Subhojeet Pramanik, Priyanka Agrawal, and Aman Hussain. Omninet: A unified architecture for multi-modal multi-task learning. arXiv preprint arXiv:1907.07804, 2019. 6, 8
+[37] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683, 2019. 8
+[38] Sebastian Ruder. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098, 2017. 8
+[39] Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In ACL, 2018. 3, 4
+[40] Kurt Shuster, Da Ju, Stephen Roller, Emily Dinan, Y-Lan Boureau, and Jason Weston. The dialogue dodecathlon: Open-domain knowledge and image grounded conversational agents, 2019. 8
+[41] Trevor Standley, Amir R Zamir, Dawn Chen, Leonidas Guibas, Jitendra Malik, and Silvio Savarese. Which tasks should be learned together in multi-task learning? arXiv preprint arXiv:1905.07553, 2019. 4, 8
+[42] Gjorgji Strezoski, Nanne van Noord, and Marcel Worring. Many task learning with task routing. In Proceedings of the IEEE International Conference on Computer Vision, pages 1375–1384, 2019. 8
+[43] Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. Vl-bert: Pre-training of generic visual-linguistic representations. arXiv preprint arXiv:1908.08530, 2019. 1, 2, 8
+
+[44] Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. A corpus for reasoning about natural language grounded in photographs. In ACL, 2019. 2
+[45] Hao Tan and Mohit Bansal. Lxmert: Learning cross-modality encoder representations from transformers. arXiv preprint arXiv:1908.07490, 2019. 1, 2, 6, 8
+[46] Yee Whye Teh, Victor Bapst, Wojciech M Czarnecki, John Quan, James Kirkpatrick, Raia Hadsell, Nicolas Heess, and Razvan Pascanu. Distral: Robust multitask reinforcement learning. In Advances in Neural Information Processing Systems, pages 4496-4506, 2017. 8
+[47] Kristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1, pages 173–180. Association for computational linguistics, 2003. 7
+[48] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017. 3
+[49] Ning Xie, Farley Lai, Derek Doran, and Asim Kadav. Visual entailment task for visually-grounded language learning. arXiv preprint arXiv:1811.10582, 2018. 2
+[50] Licheng Yu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, Mohit Bansal, and Tamara L Berg. Mattnet: Modular attention network for referring expression comprehension. In CVPR, 2018. 3, 8
+[51] Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From recognition to cognition: Visual commonsense reasoning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019. 8
+[52] Tianzhu Zhang, Bernard Ghanem, Si Liu, and Narendra Ahuja. Robust visual tracking via structured multi-task sparse learning. International journal of computer vision, 101(2):367-383, 2013. 8
+[53] Zhanpeng Zhang, Ping Luo, Chen Change Loy, and Xiaou Tang. Facial landmark detection by deep multi-task learning. In European conference on computer vision, pages 94-108. Springer, 2014. 8
+[54] Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J Corso, and Jianfeng Gao. Unified vision-language pre-training for image captioning and vqa. arXiv preprint arXiv:1909.11059, 2019. 1, 2, 8
+[55] Yuke Zhu, Oliver Groth, Michael Bernstein, and Li Fei-Fei. Visual7w: Grounded question answering in images. In CVPR, 2016. 2, 8
\ No newline at end of file
diff --git a/12in1multitaskvisionandlanguagerepresentationlearning/images.zip b/12in1multitaskvisionandlanguagerepresentationlearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..db73de6f3545caee703a8de339c665831be10c9f
--- /dev/null
+++ b/12in1multitaskvisionandlanguagerepresentationlearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5e7df86c132a3ec601a69b208ed6e9aca78c2d3117c0fc82a7abc7f287222725
+size 488771
diff --git a/12in1multitaskvisionandlanguagerepresentationlearning/layout.json b/12in1multitaskvisionandlanguagerepresentationlearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..01eca370e6bad4f93480368bf8631c86485cea57
--- /dev/null
+++ b/12in1multitaskvisionandlanguagerepresentationlearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:183d55febecda7be2d5c22f11215ece0d5918e1af930a41c0c4e6fc89c6d7e20
+size 416731
diff --git a/15keypointsisallyouneed/e5ac5156-93c5-41e3-8de5-74f59d1fd56d_content_list.json b/15keypointsisallyouneed/e5ac5156-93c5-41e3-8de5-74f59d1fd56d_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..4318b6f80d068bcc989174fca8548602e83df08d
--- /dev/null
+++ b/15keypointsisallyouneed/e5ac5156-93c5-41e3-8de5-74f59d1fd56d_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:286bde086b028c36b3741ad58f9de5eb6f887ba4f3c662d73b312bdd2dc827e0
+size 84147
diff --git a/15keypointsisallyouneed/e5ac5156-93c5-41e3-8de5-74f59d1fd56d_model.json b/15keypointsisallyouneed/e5ac5156-93c5-41e3-8de5-74f59d1fd56d_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..d328602768eb0ea3d8d69243d9be921c23898b3b
--- /dev/null
+++ b/15keypointsisallyouneed/e5ac5156-93c5-41e3-8de5-74f59d1fd56d_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:df194b06026180a44eb69ef6423503e221844c89c0d939a0229997bd9f8abdfd
+size 104315
diff --git a/15keypointsisallyouneed/e5ac5156-93c5-41e3-8de5-74f59d1fd56d_origin.pdf b/15keypointsisallyouneed/e5ac5156-93c5-41e3-8de5-74f59d1fd56d_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..8d569b760b0e6618aec817d94e488c4f4c742e89
--- /dev/null
+++ b/15keypointsisallyouneed/e5ac5156-93c5-41e3-8de5-74f59d1fd56d_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e515bc658233a009c6b70cfad9ef66cefe73660ba57a61413bdc645397566a88
+size 1489487
diff --git a/15keypointsisallyouneed/full.md b/15keypointsisallyouneed/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..869a3ce1aac08e6e0cf08d41a11d59ab64754898
--- /dev/null
+++ b/15keypointsisallyouneed/full.md
@@ -0,0 +1,325 @@
+# 15 Keypoints Is All You Need
+
+Michael Snower†* Asim Kadav‡ Farley Lai‡ Hans Peter Graf‡
+†Brown University ‡NEC Labs America
+
+michael_snower@brown.edu {asim,farleylai,hpg}@nec-labs.com
+
+# Abstract
+
+Pose tracking is an important problem that requires identifying unique human pose-instances and matching them temporally across different frames of a video. However, existing pose tracking methods are unable to accurately model temporal relationships and require significant computation, often computing the tracks offline. We present an efficient multi-person pose tracking method, KeyTrack, that only relies on keypoint information without using any RGB or optical flow information to track human keypoints in real-time. Keypoints are tracked using our Pose Entailment method, in which, first, a pair of pose estimates is sampled from different frames in a video and tokenized. Then, a Transformer-based network makes a binary classification as to whether one pose temporally follows another. Furthermore, we improve our top-down pose estimation method with a novel, parameter-free, keypoint refinement technique that improves the keypoint estimates used during the Pose Entailment step. We achieve state-of-the-art results on the PoseTrack'17 and the PoseTrack'18 benchmarks while using only a fraction of the computation required by most other methods for computing the tracking information.
+
+# 1. Introduction
+
+Multi-person Pose Tracking is an important problem for human action recognition and video understanding. It occurs in two steps: first, estimation, where keypoints of individual persons are localized; second, the tracking step, where each keypoint is assigned to a unique person. Pose tracking methods rely on deep convolutional neural networks for the first step [48, 47, 57, 52], but approaches in the second step vary. This is a challenging problem because tracks must be created for each unique person, while overcoming occlusion and complex motion. Moreover, individuals may appear visually similar because they are wearing the same uniform. It is also important for tracking to be performed online. Commonly used methods, such
+
+Figure 1. They look alike, how do we decide who's who? In the Pose Entailment framework, given a video frame, we track individuals by comparing pairs of poses, using temporal motion cues to determine who's who. Using a novel tokenization scheme to create pose pair inputs interpretable by Transformers [49], our network divides its attention equally between both poses in matching pairs, and focuses more on a single pose in non-matching pairs because motion cues between keypoints are not present. We visualize this above; bright red keypoints correspond to high attention.
+
+as optical flow and graph convolutional networks (GCNs) are effective at modeling spatio-temporal keypoint relationships [45], [35], but are dependent on high spatial resolution, making them computationally costly. Non-learning based methods, such as spatial consistency, are faster than the convolution-based methods, but are not as accurate.
+
+To address the above limitations, we propose an efficient pose tracking method, KeyTrack, that leverages temporal relationships to improve multi-person pose estimation and tracking. KeyTrack follows the tracking-by-detection approach: it first localizes humans, estimates human pose keypoints, and then encodes the keypoint information in a novel entailment setting built from transformer blocks [49]. Similar to the textual entailment task, where one has to predict whether one sentence follows another, we propose the Pose Entailment task, where the model makes a binary classification as to whether one pose temporally follows, or entails, another. Hence, rather than extracting information from a high-dimensional image representation using deep CNNs, we extract information from a sentence of 15 tokens, where each token corresponds to a keypoint on a pose. Similar to how BERT tokenizes words [14], we propose an embedding scheme for pose data that captures spatio-temporal relationships and feed these embeddings to our transformer network. Since these embeddings contain information beyond spatial location, our network outperforms convolution-based approaches in terms of accuracy and speed, particularly at very low resolutions.
+
+Additionally, in order to improve the keypoint estimates used by the transformer network, we propose a Temporal Object Keypoint Similarity (TOKS) method. TOKS refines the pose estimation output by augmenting missed detections and thresholding low-quality estimates using a keypoint similarity metric. TOKS adds no learned parameters to the estimation step, and is superior to existing bounding box propagation methods that often rely on NMS and optical flow. KeyTrack makes the following contributions:
+
+1. KeyTrack introduces Pose Entailment, where a binary classification is made as to whether two poses from different timesteps are the same person. We model this task in a transformer-based network which learns temporal pose relationships even in datasets with complex motion. Furthermore, we present a tokenization scheme for pose information that allows transformers to outperform convolutions at low spatial resolutions when tracking keypoints.
+2. KeyTrack introduces a temporal method for improving keypoint estimates. TOKS is more accurate than bounding box propagation, faster than a detector ensemble, and does not require learned parameters.
+
+Using the above methods, we develop an efficient multi-person pose tracking pipeline which sets a new SOTA on the PoseTrack test set. We achieve $61.2\%$ tracking accuracy on the PoseTrack'17 Test Set and $66.6\%$ on the PoseTrack'18 Val set using a model that consists of just 0.43M parameters in the tracking step. This portion of our pipeline is 500x more efficient than the leading optical flow method [45]. Our training is performed on a single NVIDIA 1080Ti GPU. Because it does not rely on RGB or optical flow information in the tracking step, our model can also perform pose tracking with non-visual pose estimation sensors that only provide 15 keypoints for each person [3].
+
+# 2. Related Work
+
+We are inspired by related work on pose estimation and tracking methods, and recent work on applying the transformer network to video understanding.
+
+Pose estimation Early work on pose estimation uses graphical models to learn spatial correlations and interactions between various joints [5, 16]. These models often perform poorly due to occlusions and long range temporal relationships, which need to be explicitly modeled [12, 42, 51]. More recent work involves using convolutional neural networks (CNNs) to directly regress cartesian
+
+| Method | Estimation | Detection Improvement | Tracking |
| Ours | HRNet | Temporal OKS | Pose Entailment |
| HRNet [45] | HRNet | BBox Prop. | Optical Flow |
| POINet [40] | VGG, T-VGG | - | Ovonic Insight Net |
| MDPN [20] | MDPN | Ensemble | Optical Flow |
| LightTrack [35] | Simple Baselines | Ensemble/BBox Prop. | GCN |
| ProTracker [19] | 3D Mask RCNN | - | IoU |
| Affinity Fields [38] | VGG/STFields | - | STFields |
| STEmbeddings [28] | STEmbeddings | - | STEmbeddings |
| JointFlow | Siamese CNN | - | Flow Fields |
+
+Table 1. How different approaches address each step of the Pose Tracking problem. Our contributions are in bold.
+
+coordinates of the joints [48] or to generate heatmaps of the probability of a joint's location [47, 57, 52]. A majority of the convolutional approaches can be classified into top-down and bottom-up methods – the top-down methods use a separate detection step to identify person candidates [21, 37, 10, 24, 37]. The single person pose estimation step is then performed on these person candidates. Bottom-up methods calculate keypoints from all candidates and then correlate these keypoints into individual human joints [53, 25]. The latter method is more efficient since all keypoints are calculated in a single step; however, the former is more accurate since the object detection step limits the regression boundaries. However, top-down methods work poorly on small objects and recent work (HRNet) [45] uses parallel networks at different resolutions to maximize spatial information. PoseWarper [8] uses a pair of labeled and unlabeled frames to predict human pose by learning the pose-warping using deformable convolutions. Finally, since the earliest applications of deep learning to pose estimation [48], iterative predictions have improved accuracy. Pose estimation has shown to benefit from cascaded predictions [10] and pose-refinement methods [17, 34] refine the pose estimation results of previous stages using a separate post-processing network. In that spirit, our work, KeyTrack relies on HRNet to generate keypoints and refines keypoint estimates by temporally aggregating and suppressing low confidence keypoints with TOKS instead of commonly used bounding box propagation approaches.
+
+Pose tracking Methods Pose tracking methods assign unique IDs to individual keypoints, estimated with techniques described in the previous subsection, to track them through time [4, 26, 27, 1]. Some methods perform tracking by learning spatio-temporal pose relationships across video frames using convolutions [50, 40, 35]. [40], in an end-to-end fashion, predicts track ids with embedded visual features from its estimation step, making predictions in multiple temporal directions. [35] uses a GCN to track poses based on spatio-temporal keypoint relationships. These networks require high spatial resolutions. In contrast, we create keypoint embeddings from the keypoint's spatial location and other information making our network less reliant
+
+
+Figure 2. a) Keypoints are estimated with HRNet. b) TOKS improves detection accuracy. c) Pose pairs are collected from multiple past timesteps. Poses of the same color have the same track id, the color black indicates the track id is unknown. d) Each pair is tokenized independently from the other pairs. e) Our Transformer Matching Network calculates match scores independently for each pair. f) The maximum match score is greedily chosen and the corresponding track id is assigned.
+
+on spatial resolution, and thus more efficient. We can also model more fine-grained spatio-temporal relationships.
+
+Among non-learned tracking methods, optical flow propagates poses from one frame to the next to determine which pose they are most similar to in the next frame [45, 20]. This improves over spatial consistency, which measures the IoU between bounding boxes of poses from temporally adjacent frames [19]. Other methods use graph-partitioning based approaches to group pose tracks [26, 27, 29]. Another method, PoseFlow [55], uses inter/intra-frame pose distance and NMS to construct pose flows. However, these non-learned methods rely on hard-coded parameters, which limits their ability to model scenes with complex motion and requires time-intensive manual tuning; in contrast, our method does not require hard-coded parameters during inference. Table 1 shows top-down methods similar to our work as well as competitive bottom-up methods.
+
+Transformer Models Recently, there have been successful implementations of transformer-based models for image and video input modalities often substituting convolutions and recurrence mechanisms. These methods can efficiently model higher-order relationships between various scene elements unlike pair-wise methods [11, 22, 41, 56]. They have been applied for image classification [39], visual question-answering [30, 31, 46, 60], action-recognition [23, 32], video captioning [44, 61] and other video problems. VideoAction Transformer [18] solves the action localization problem using transformers by learning the context and interactions for every person in the video. BERT [13] uses transformers by pretraining a transformer-based network in a multi-task transfer learning scheme over the unsupervised tasks of predicting missing words or next sentences. Instead, in a supervised setting, KeyTrack uses transformers to learn spatio-temporal keypoint relationships for the visual problem of pose tracking.
+
+# 3. Method
+
+# 3.1. Overview of Our Approach
+
+We now describe the keypoint estimation and tracking approach used in KeyTrack as shown in Figure 2. For frame $\mathcal{F}^t$ at timestep $t$ , we wish to assign a track id to the $i$ th pose $p^{t,i} \in \mathcal{P}^t$ . First, each of the pose's keypoints $k^j \in \mathcal{K}$ is detected. This is done by localizing a bounding box around each pose with an object detector and then estimating keypoint locations in the box. Keypoint predictions are improved with temporal OKS (TOKS); please see 3.3 for more details. From here, this pose with no tracking id, $p_{\emptyset}^{t,i}$ , is assigned its appropriate one. This is based on the pose's similarity to a pose in a previous timestep, which has an id, $p_{id}^{t - \delta ,j}$ . Similarity is measured with the match score, $m_{id}^{t - \delta ,j}$ , using Pose Entailment (3.2).
+
+False negatives are an inevitable problem in keypoint detection, and hurt the downstream tracking step because poses with the correct track id may appear to be no longer in the video. We mitigate this by calculating match scores for poses in not just one previous frame, but multiple frames $\{\mathcal{F}^1,\mathcal{F}^2,\dots \mathcal{F}^\delta \}$ . Thus, we compare to each pose $p_{id}^{t - d,j}$ where $1\leqslant d\leqslant \delta$ and $1\leqslant j\leqslant |\mathcal{P}^{t - d}|$ . In practice, we limit the number of poses we compare to in a given frame to the $n$ spatially nearest poses. This is just as accurate as comparing to everyone in the frame and bounds our runtime to $O(\delta n)$ . This gives us a set of match scores $\mathcal{M}$ , and we assign $p_{\emptyset}^{t,i}$ the track id corresponding to the maximum match score $m^{*} = \max (\mathcal{M})$ , where $id^{*} = m_{id}^{*}$ . Thus, we assign the tracking id to the pose, $p_{id*}^{t,i}$ .
+
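+The greedy assignment over the candidate set can be summarized with a short sketch (Python; `match_score` stands in for the Pose Entailment model of Section 3.2, and the distance helper and default `n_nearest` value are illustrative assumptions, not the authors' code):
+
+```python
+import math
+
+def mean_keypoint_distance(kps_a, kps_b):
+    """Mean Euclidean distance between corresponding keypoints (illustrative helper)."""
+    return sum(math.dist(a, b) for a, b in zip(kps_a, kps_b)) / len(kps_a)
+
+def assign_track_id(query_pose, prev_frames, match_score, n_nearest=3):
+    """Greedily assign the track id with the highest Pose Entailment match score.
+
+    query_pose  : list of (x, y) keypoints for the untracked pose in frame t.
+    prev_frames : up to delta earlier frames, each a list of (track_id, keypoints)
+                  pairs that already carry a track id.
+    match_score : callable (pose_a, pose_b) -> float from the matching network.
+    """
+    candidates = []
+    for frame in prev_frames:
+        # compare only against the n spatially nearest poses per frame,
+        # bounding the work to O(delta * n) match-score evaluations
+        nearest = sorted(frame, key=lambda tp: mean_keypoint_distance(query_pose, tp[1]))
+        for track_id, keypoints in nearest[:n_nearest]:
+            candidates.append((match_score(query_pose, keypoints), track_id))
+    if not candidates:
+        return None  # nothing to match against; the caller would start a new track
+    best_score, best_id = max(candidates)
+    return best_id
+```
+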
+# 3.2. Pose Entailment
+
+To effectively solve the multi-person pose tracking problem, we need to understand how human poses move through time based on spatial joint configurations as well as in the presence of multiple persons and occluding objects. Hence, we need to learn if a pose in timestep $t$ , can be inferred from timestep $t - 1$ . Textual entailment provides us with a similar framework in the NLP domain where one needs to understand if one sentence can be implied from the next. More specifically, the textual entailment model classifies whether a premise sentence implies a hypothesis sentence in a sentence pair [9]. The typical approach to this problem consists of first projecting the pair of sentences to an embedding space and then feeding them through a neural network which outputs a binary classification for the sentence pair.
+
+Hence, we propose the Pose Entailment problem. More formally, we seek to classify whether a pose in a timestep
+
+
+Figure 3. Orange box: Visualizations to intuitively explain our tokenization. In the Position column, the matching poses are spatially closer together than the non-matching ones. This is because their spatial locations in the image are similar. The axis limit is 432 because the image has been downsampled to width * height = 432. In the following column, the matching contours are similar, since the poses are in similar orientations. The Segment axis in the last column represents the temporal distance of the pair. Green box: A series of transformers (Tx) compute self-attention, extracting the temporal relationship between the pair. Binary classification follows.
+
+$p^{t - \delta}$ , i.e. the premise, and a pose in timestep $p^t$ , i.e. the hypothesis, are the same person. To solve this problem, instead of using visual feature based similarity that incurs large computational cost, we use the set of human keypoints, $\mathcal{K}$ , detected by our pose estimator. It is computationally efficient to use these as there are a limited number of them (in our case $|\mathcal{K}| = 15$ ), and they are not affected by unexpected visual variations such as lighting changes in the tracking step. In addition, as we show in the next section, keypoints are amenable to tokenization. Thus, during the tracking stage, we use only the keypoints estimated by the detector as our pose representation.
+
+Tokenizing Pose Pairs The goal of tokenization is to transform pose information into a representation that facilitates learning spatio-temporal human pose relationships. To achieve this goal, for each pose token, we need to provide (i) the spatial location of each keypoint in the scene to allow the network to spatially correlate keypoints across frames, (ii) type information of each keypoint (i.e. head, shoulder etc.) to learn spatial joint relationships in each human pose, and finally (iii) the temporal location index for each keypoint within a temporal window $\delta$ , to learn temporal keypoint transitions. Hence, we use three different types of tokens for each keypoint as shown in Figure 3. There are 2 poses, and thus $2|\mathcal{K}|$ tokens of each type. Each token is linearly projected to an embedding, $E \in \mathbb{R}^{2|\mathcal{K}|,H}$ where $H$ is the transformer hidden size. Embeddings are a learned lookup table. We now describe the individual tokens in detail:
+
+Position Token: The absolute spatial location of each keypoint is the Position token, $\rho$ , and its values fall in the range $[1, w^{\mathcal{F}} h^{\mathcal{F}}]$ . In practice, the absolute spatial location of a downsampled version of the original frame is used. This not only improves the efficiency of our method, but also makes it more accurate, as is discussed in 5.2. We give a general expression for the Position tokens of poses $p^t$ and $p^{t - \delta}$ , where $\rho_j^{p^t}$ corresponds to the Position token of the $j$ th keypoint of $p^t$ :
+
+$$
+\left\{\rho_ {1} ^ {p ^ {t}}, \rho_ {2} ^ {p ^ {t}}, \dots \rho_ {| \mathcal {K} |} ^ {p ^ {t}}, \rho_ {1} ^ {p ^ {t - \delta}}, \rho_ {2} ^ {p ^ {t - \delta}}, \dots \rho_ {| \mathcal {K} |} ^ {p ^ {t - \delta}} \right\} \tag {1}
+$$
+
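+As a concrete illustration (a sketch, not the authors' code), a Position token can be obtained by flattening the keypoint's location on the downsampled grid; a 24x18 grid, the optimal resolution reported later, gives the 432 values matching the axis limit in Figure 3. The row-major, 1-based flattening below is an assumption for illustration:
+
+```python
+def position_token(x, y, frame_w, frame_h, down_w=24, down_h=18):
+    """Map an absolute keypoint location to a Position token in [1, down_w * down_h]."""
+    # scale the pixel coordinates to the downsampled grid and clamp to valid cells
+    col = min(int(x * down_w / frame_w), down_w - 1)
+    row = min(int(y * down_h / frame_h), down_h - 1)
+    # flatten row-major and shift to 1-based token ids
+    return row * down_w + col + 1
+
+# e.g. a keypoint at pixel (640, 360) in a 1280x720 frame
+print(position_token(640, 360, 1280, 720))  # -> 229
+```
+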
+Type Token: The Type token corresponds to the unique type of the keypoint: e.g. the head, left shoulder, right ankle, etc. The Type tokens fall in the range $[1,|\mathcal{K}|]$ . These add information about the orientation of the pose and are crucial for achieving high accuracy at low resolution, when keypoints have similar spatial locations. A general expression for the Type tokens of poses $p^t$ and $p^{t - \delta}$ is below, where $j^{p^t}$ corresponds to the Type token of the $j$ th keypoint of $p^t$ :
+
+$$
+\left\{1 ^ {p ^ {t}}, 2 ^ {p ^ {t}}, \dots \left| \mathcal {K} \right| ^ {p ^ {t}}, 1 ^ {p ^ {t - \delta}}, 2 ^ {p ^ {t - \delta}}, \dots \left| \mathcal {K} \right| ^ {p ^ {t - \delta}} \right\} \tag {2}
+$$
+
+Segment Token: The Segment token indicates the number of timesteps the pose is from the current one. The Segment token is in the range $[1, \delta]$ , where $\delta$ is a chosen constant (we set $\delta$ to 4). This also allows our method to adapt to irregular frame rates: if a person is not detected in a frame, we can look back two timesteps and condition our model on a Segment token value of 2 instead of 1.
+
+$$
+\left\{1 ^ {p ^ {t}}, 1 ^ {p ^ {t}}, \dots 1 ^ {p ^ {t}}, \delta^ {p ^ {t - \delta}}, \delta^ {p ^ {t - \delta}}, \dots \delta^ {p ^ {t - \delta}} \right\} \tag {3}
+$$
+
+After each token is embedded, we sum the embeddings, $E_{sum} = E_{Position} + E_{Type} + E_{Segment}$ , to combine the information from each class of token. This is fed to our Transformer Matching Network.
+
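+A minimal PyTorch sketch of this embedding step is below ($|\mathcal{K}| = 15$ and $\delta = 4$ as in the paper; the hidden size, grid size, and the reserved index 0 are illustrative assumptions):
+
+```python
+import torch
+import torch.nn as nn
+
+K, DELTA, HIDDEN = 15, 4, 128            # keypoints per pose, temporal window, hidden size
+GRID = 24 * 18                            # cells of the downsampled frame (Position tokens)
+
+pos_emb = nn.Embedding(GRID + 1, HIDDEN)   # Position tokens in [1, w*h]
+type_emb = nn.Embedding(K + 1, HIDDEN)     # Type tokens in [1, |K|]
+seg_emb = nn.Embedding(DELTA + 1, HIDDEN)  # Segment tokens in [1, delta]
+
+# one pose pair -> 2|K| tokens of each kind (index 0 is kept free, e.g. for padding)
+position_tokens = torch.randint(1, GRID + 1, (1, 2 * K))                 # stand-in values
+type_tokens = torch.arange(1, K + 1).repeat(2).unsqueeze(0)              # 1..K, 1..K
+segment_tokens = torch.cat([torch.full((K,), 1, dtype=torch.long),
+                            torch.full((K,), DELTA, dtype=torch.long)]).unsqueeze(0)
+
+# summed embedding fed to the Transformer Matching Network: shape [1, 2|K|, H]
+e_sum = pos_emb(position_tokens) + type_emb(type_tokens) + seg_emb(segment_tokens)
+print(e_sum.shape)  # torch.Size([1, 30, 128])
+```
+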
+Transformer Matching Network: The goal of our network is to learn motion cues indicative of whether a pose pair matches. The self-attention mechanism of transformers allows us to accomplish this by learning which temporal relationships between the keypoints are representative of a match. Transformers compute scaled dot-product attention over a set of Queries $(Q)$ , Keys $(K)$ , and Values $(V)$ each of which is a linear projection of the input $E_{sum} \in \mathbb{R}^{2|\mathcal{K}|,H}$ . We compute the softmax attention with respect to every keypoint embedding in the pair, with the input to the softmax operation being of dimensions $[2|K|,2|K|]$ . In fact, we can generate heatmaps from the attention distribution over the pair's keypoints, as displayed in 5.3. In practice, we use multi-headed attention, which leads to the heads specializing, also visualized.
+
+Additionally, we use an attention mask to account for keypoints which are not visible due to occlusion. This attention mask is implemented exactly as the attention mask in [49], resulting in no attention being paid to the keypoints which are not visible due to occlusion. The attention equation is as follows, and we detail each operation in a single transformer in Table 5 of the Supplement:
+
+$$
+\operatorname {A t t e n t i o n} (Q, K, V) = \operatorname {s o f t m a x} \left(\frac {Q K ^ {T}}{\sqrt {d _ {k}}}\right) V \tag {4}
+$$
+
+After computing self-attention through a series of stacked transformers, similar to BERT, we feed this representation to a Pooler, which "pools" the input, by selecting the first token in the sequence and then inputting that token into a learned linear projection. This is fed to another linear layer, functioning as a binary classifier, which outputs the likelihood two given poses match. We govern training with a binary cross entropy loss providing our network only with the supervision of whether the pose pair is a match. See Figure 3 for more details.
+
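+A hedged sketch of the matching head built from standard PyTorch pieces is shown below (the actual network stacks its own transformer blocks; the 4-layer, hidden-size-128, 4-head configuration follows the ablation in Section 5.1, and the tanh pooling mirrors BERT rather than being confirmed by the paper):
+
+```python
+import torch
+import torch.nn as nn
+
+class MatchingHead(nn.Module):
+    """Illustrative stand-in for the Transformer Matching Network."""
+    def __init__(self, hidden=128, heads=4, layers=4, feedforward=512):
+        super().__init__()
+        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=heads,
+                                           dim_feedforward=feedforward, batch_first=True)
+        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
+        self.pooler = nn.Linear(hidden, hidden)   # "pools" by projecting the first token
+        self.classifier = nn.Linear(hidden, 1)    # binary match / no-match logit
+
+    def forward(self, e_sum, visible):
+        # visible: bool mask of shape [B, 2|K|]; occluded keypoints receive no attention
+        out = self.encoder(e_sum, src_key_padding_mask=~visible)
+        pooled = torch.tanh(self.pooler(out[:, 0]))   # first-token pooling, BERT-style
+        return self.classifier(pooled).squeeze(-1)    # train with BCEWithLogitsLoss
+
+# usage with a random stand-in for the summed embeddings; all 30 tokens marked visible
+head = MatchingHead()
+logit = head(torch.randn(1, 30, 128), visible=torch.ones(1, 30, dtype=torch.bool))
+```
+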
+# 3.3. Improved Multi-Frame Pose Estimation
+
+We now describe how we improve keypoint estimation. Top-down methods suffer from two primary classes of errors from the object detector: (1) missed bounding boxes and (2) imperfect bounding boxes. We use the box detections from adjacent timesteps in addition to the one in the current timestep to make pose predictions, thereby combating these issues. This is based on the intuition that the spatial location of each person does not change dramatically from frame to frame when the frame rate is relatively high, as is typical in most modern datasets and cameras. Thus, pasting a bounding box for the $i$ th person in frame $\mathcal{F}^{t - 1}$ , $p^{t - 1,i}$ , in its same spatial location in frame $\mathcal{F}^t$ is a good approximation of the true bounding box for person $p^{t,i}$ . Bounding boxes are enlarged by a small factor to account for changes in spatial location from frame to frame. Previous approaches, such as [54], use standard non-maximal suppression (NMS) to choose which of these boxes to input into the estimator. Though this addresses the first issue of missed boxes, it does not fully address the second issue, because NMS relies on the confidence scores of the boxes. Instead, we make pose predictions for the box in the current frame and temporally adjacent boxes, and then use object-keypoint similarity (OKS) to determine which of the poses should be kept. This is more accurate than using NMS because we use the confidence scores of the keypoints, not the bounding boxes. The steps of TOKS are enumerated below:
+
+# Algorithm 1 Temporal OKS
+
+Input: $p^{t - 1}, p^t, \mathcal{F}^t$
+
+1. Retrieve bounding box, $B$ , enclosing $p^{t - 1}$ , and dilate by a factor, $\alpha$
+2. Estimate a new pose, $p'^t$ , in $\mathcal{F}^t$ from $B$
+3. Use OKS to determine which pose to keep, $p^* = OKS(p'^t, p^t)$
+
+Output: $p^*$
+
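+A sketch of TOKS in Python is given below. The OKS used here is a simplified COCO-style object-keypoint similarity with a single fall-off constant, `estimate_pose` stands in for the HRNet estimator, and the dilation factor is an illustrative value (0.35 is the best-performing OKS threshold in Table 2):
+
+```python
+import math
+
+def oks(kps_a, kps_b, scale, kappa=0.1):
+    """Simplified object-keypoint similarity between two poses of (x, y, conf) keypoints."""
+    sims = [math.exp(-((ax - bx) ** 2 + (ay - by) ** 2) / (2 * scale * kappa ** 2))
+            for (ax, ay, _), (bx, by, _) in zip(kps_a, kps_b)]
+    return sum(sims) / len(sims)
+
+def temporal_oks(prev_pose, cur_pose, frame, estimate_pose, alpha=1.25, thresh=0.35):
+    """TOKS sketch: propagate the previous box, re-estimate, and keep the better pose."""
+    xs = [x for x, _, _ in prev_pose]
+    ys = [y for _, y, _ in prev_pose]
+    cx, cy = (min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2
+    w, h = (max(xs) - min(xs)) * alpha, (max(ys) - min(ys)) * alpha
+    box = (cx - w / 2, cy - h / 2, w, h)                # step 1: dilated enclosing box
+    prop_pose = estimate_pose(frame, box)               # step 2: pose estimated from that box
+    if cur_pose is None:                                # recovers a detection missed in frame t
+        return prop_pose
+    if oks(prop_pose, cur_pose, scale=w * h) > thresh:  # step 3: duplicates of the same person
+        # keep the pose whose keypoints the estimator is more confident about
+        return max((prop_pose, cur_pose), key=lambda p: sum(c for _, _, c in p))
+    return cur_pose
+```
+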
+# 4. Experiments
+
+# 4.1. The PoseTrack Dataset
+
+The PoseTrack 2017 training, validation, and test sets consist of 250, 50, and 208 videos, respectively. Annotations for the test set are held out. We evaluate on the PoseTrack 17 Test set because the PoseTrack 18 Test set has yet to be released. We use the official evaluation server on the test set, which accepts up to 4 submissions [4, 1]. We conduct the rest of our comparisons on the PoseTrack ECCV 2018 Challenge Validation Set, a superset of PoseTrack 17 with 550 training, 74 validation, and 375 test videos [2].
+
+Metrics Per-joint Average Precision (AP) is used to evaluate keypoint estimation based on the formulation in [6]. Multi-Object Tracking Accuracy (MOTA [7], [33]) scores tracking. It penalizes False Negatives (FN), False Positives (FP), and ID Switches (IDSW) under the following formulation for each keypoint $k^i$ , where $t$ is the current timestep. Our final MOTA is the average of all keypoints $k^i \in \mathcal{K}$ :
+
+$$
+1 - \frac {\sum_ {t} (F N _ {t} ^ {i} + F P _ {t} ^ {i} + I D S W _ {t} ^ {i})}{\sum_ {t} G T _ {t} ^ {i}}
+$$
+
| Tracking Method | Detection Method | AP ↑ Total | % IDSW ↓: Head | Shou | Elb | Wri | Hip | Knee | Ankl | Total | MOTA ↑ Total |
| Pose Entailment | GT Boxes, GT Keypoints | 100 | 0.7 | 0.7 | 0.6 | 0.6 | 0.6 | 0.7 | 0.7 | 0.7 | 99.3 |
| GCN | GT Boxes, GT Keypoints | 100 | 1.4 | 1.4 | 1.4 | 1.5 | 1.4 | 1.6 | 1.6 | 1.5 | 98.5 |
| Optical Flow | GT Boxes, GT Keypoints | 100 | 1.1 | 1.2 | 1.2 | 1.2 | 1.2 | 1.3 | 1.4 | 1.2 | 98.7 |
| Pose Entailment | GT Boxes, Predicted Keypoints | 86.7 | 0.9 | 0.9 | 0.8 | 0.8 | 0.7 | 0.8 | 0.8 | 0.8 | 72.2 |
| GCN | GT Boxes, Predicted Keypoints | 86.7 | 1.6 | 1.6 | 1.6 | 1.6 | 1.3 | 1.5 | 1.4 | 1.5 | 71.6 |
| Optical Flow | GT Boxes, Predicted Keypoints | 86.7 | 1.2 | 1.2 | 1.2 | 1.1 | 1.0 | 1.1 | 1.1 | 1.1 | 71.8 |
| Pose Entailment | Predicted Boxes, Predicted Keypoints | 81.6 | 0.9 | 1.0 | 0.9 | 0.8 | 0.7 | 0.8 | 0.8 | 0.8 | 66.6 |
| GCN | Predicted Boxes, Predicted Keypoints | 81.6 | 1.7 | 1.7 | 1.7 | 1.7 | 1.4 | 1.5 | 1.4 | 1.6 | 65.9 |
| Optical Flow | Predicted Boxes, Predicted Keypoints | 81.6 | 1.3 | 1.2 | 1.2 | 1.2 | 1.1 | 1.1 | 1.1 | 1.1 | 66.3 |
+
+Our approach assigns track ids and estimates keypoints independently. This is also true of competing methods with MOTA scores closest to ours. In light of this, we use the same keypoint estimations to compare Pose Entailment to competing tracking methods in 4.2. This makes the IDSW the only component of the MOTA metric that changes, and we calculate $\% IDSW^{i} = \sum_{t} IDSW_{t}^{i} / \sum_{t} GT_{t}^{i}$ . In 4.3, we compare our estimation method to others without evaluating tracking. Finally, in 4.4, we compare our entire tracking pipeline to other pipelines.
+
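+As a small worked example (toy counts, not PoseTrack numbers), the per-keypoint MOTA and the %IDSW quantity used in this comparison reduce to:
+
+```python
+def mota(fn, fp, idsw, gt):
+    """Per-keypoint MOTA from counts summed over all timesteps."""
+    return 1 - (fn + fp + idsw) / gt
+
+def pct_idsw(idsw, gt):
+    """Identity switches as a percentage of ground-truth annotations."""
+    return 100 * idsw / gt
+
+print(mota(fn=120, fp=80, idsw=8, gt=1000))  # 0.792
+print(pct_idsw(idsw=8, gt=1000))             # 0.8
+```
+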
+# 4.2. Improving Tracking with Pose Entailment
+
+We compare with the optical flow tracking method [54] and the Graph Convolutional Network (GCN) [35], as shown in Figure 4. We do not compare with IoU because GCN and optical flow [35, 54] have been shown to outperform it, nor do we compare to the network from [40] because it is trained in an end-to-end fashion. We follow [54] for Optical Flow and use the pre-trained GCN provided by [35]. IDSW is calculated with three sets of keypoints. Regardless of the keypoint AP, we find that KeyTrack's Pose Entailment maintains a consistent improvement over other methods. We incur approximately half as many IDSW as the GCN and $30\%$ fewer than Optical Flow.
+
+Our improvement over GCN stems from the fact that it relies only on keypoint spatial locations. By using additional information beyond the spatial location of each keypoint, our model can make better inferences about the temporal relationship of poses. The optical flow CNNs are not specific to pose tracking and require manual tuning. For example, to scale the CNN's raw output, which is normalized from -1 to 1, to pixel flow offsets, a universal constant, given by the author of the original optical flow network (not [54]), must be applied. However, we found that this constant required adjustment. In contrast, our learned method requires no tuning during inference.
+
+# 4.3. Improving Detection with TOKS
+
+Table 2 shows that TOKS offers a greater improvement in keypoint detection quality than other methods. In the absence of
+
+Figure 4. Compares accuracy of tracking methods on the PoseTrack 18 Val set, given the same keypoints. GT stands for Ground Truth, "predicted" means a neural net is used. Lower % IDSW is better, higher MOTA is better. "Total" averages all joint scores.
+
| Detection Method | AP: Head | Shou | Elb | Wri | Hip | Knee | Ankl | Total |
| GT | 90.2 | 91.4 | 88.7 | 83.6 | 81.4 | 86.1 | 83.7 | 86.7 |
| Det. | 68.8 | 72.8 | 73.1 | 68.4 | 68.0 | 72.4 | 69.8 | 70.4 |
| Det. + Box Prop. | 79.3 | 82.0 | 80.8 | 75.6 | 72.4 | 76.5 | 72.4 | 77.1 |
| Det. + TOKS@0.3 | 83.6 | 86.6 | 84.9 | 78.9 | 76.4 | 80.2 | 76.2 | 81.1 |
| Det. + TOKS@0.35 (ours) | 84.1 | 87.2 | 85.3 | 79.2 | 77.1 | 80.6 | 76.5 | 81.6 |
| Det. + TOKS@0.5 | 83.9 | 87.2 | 85.2 | 79.1 | 77.1 | 80.7 | 76.4 | 81.5 |
+
+Table 2. Per-joint AP when the pose estimator is conditioned on different boxes. GT indicates ground truth boxes are used, and serves as an upper bound for accuracy. Det. indicates a detector was used to estimate boxes. @OKS* is the OKS threshold used.
+
+bounding box improvement, the AP performance is $6.6\%$ lower, highlighting the issue of False Negatives. The further improvement from TOKS emphasizes the usefulness of estimating every pose. By using NMS, bounding box propagation methods miss the opportunity to use the confidence scores of the keypoints, which lead to better pose selection.
+
+# 4.4. Tracking Pipeline Comparison to the SOTA
+
+Now that we have analyzed the benefits of Pose Entailment and TOKS, we put them together and compare to other approaches. Figure 5 shows that we achieve the highest MOTA score. We improve over the original HRNet paper by 3.3 MOTA points on the Test set. [25], nearest our score on the 2018 Validation set, is much further away on the 2017 Test set. Additionally, our FPS is improved over all methods with similar MOTA scores, with many methods being offline due to their use of ensembles. (Frames per second (FPS) is calculated by dividing the number of frames in the dataset by the runtime of the approach.) Moreover, our method outperforms all others in terms of AP, showing the benefits of TOKS. $\mathrm{AP}^T$ is also reported, which is the AP score after tracking post-processing has been applied. This post-processing is beneficial to the MOTA score, but lowers AP. See section A.3 for more details on this post-processing. As we have the highest AP, but not the highest $\mathrm{AP}^T$ , it appears the effect of tracking post-processing varies from paper to paper. Only $\mathrm{AP}^T$ is given on the test set because each paper is given 4 submissions, so these are used to optimize MOTA, rather than AP.
+
+PoseTrack 2018 ECCV Challenge Val Set
+
| No. | Method | Extra Data | $\mathrm{AP}^T$ | AP | FPS | MOTA |
| 1. | KeyTrack (ours) | X | 74.3 | 81.6 | 1.0 | 66.6 |
| 2. | MIPAL [25] | X | 74.6 | - | - | 65.7 |
| 3. | LightTrack (offline) [35] | X | 71.2 | 77.3 | E | 64.9 |
| 4. | LightTrack (online) [35] | X | 72.4 | 77.2 | 0.7 | 64.6 |
| 5. | Miracle [58] | ✓ | - | 80.9 | E | 64.0 |
| 6. | OpenSVAI [36] | X | 69.7 | 76.3 | - | 62.4 |
| 7. | STAF [38] | ✓ | 70.4 | - | 3 | 60.9 |
| 8. | MDPN [20] | ✓ | 71.7 | 75.0 | E | 50.6 |
+
+Figure 5. Top scores on the PoseTrack leaderboards. E indicates an ensemble of detectors is used, and results in the method being offline. A check indicates external training data is used beyond COCO and PoseTrack. A “-” indicates the information has not been made publicly available. FPS calculations for JointFlow and FlowTrack are taken from [59]. HRNet FPS is approximated from FlowTrack since the methods are very similar. The AP column has the best AP score. $\mathbf{AP}^T$ is the AP score after tracking post-processing.
+
+PoseTrack 2017 Test Set Leaderboard
+
| No. | Method | Extra Data | $\mathrm{AP}^T$ | FPS | MOTA |
| 1. | KeyTrack (ours) | X | 74.0 | 1.0 | 61.2 |
| 2. | POINet [40] | X | 72.5 | - | 58.4 |
| 3. | LightTrack [35] | X | 66.7 | E | 58.0 |
| 4. | HRNet [45] | X | 75.0 | 0.2 | 57.9 |
| 5. | FlowTrack [54] | X | 74.6 | 0.2 | 57.8 |
| 6. | MIPAL [25] | X | 68.8 | - | 54.5 |
| 7. | STAF [38] | ✓ | 70.3 | 2 | 53.8 |
| 8. | JointFlow [15] | X | 63.6 | 0.2 | 53.1 |
+
+
+Figure 6. Qualitative results of KeyTrack on the PoseTrack 17 Test Set. Additional qualitative results are in the supplement.
+
+Efficiency: Our tracking approach is efficient, not reliant on optical flow or RGB data. When processing an image at our optimal resolution, $24 \times 18$ , we reduce the GFLOPS required by optical flow, which processes images at full size, from 52.7 to 0.1. The GCN of [35], which uses local convolutions and does not capture higher-order interactions over keypoints, can be more efficient than our network; however, this translates to only a $\sim$ 1ms difference in GPU runtime. In fact, our tracking pipeline demonstrates a $30\%$ improvement in end-to-end runtime over [35], shown in 4.4. We have the fastest FPS of top-down approaches. Also, we do not rely on optical flow to improve bounding box propagation as [54, 45] do; instead we use TOKS. This contributes to our 5x FPS improvement over [54, 45]. Further details on the parameters and FLOPS of the GCN, Optical Flow Network, and our Transformer Matching Network are in Table 6 of the Supplement.
+
+# 5. Analysis
+
+# 5.1. Tracking Pipeline
+
+Varying Tokenization Schemes and Transformer Hyper-parameters We examine the benefits of each embedding. As evident in Table 3, Segment embeddings are crucial because they enable the network to distinguish between the poses being matched. Type embeddings give the network information about the orientation of a pose and help it interpret keypoints which are in close spatial proximity, i.e. keypoints that have the same or similar position embedding. We also train a model that uses the relative
+
+| Abs. Position | Type | Segment | Rel. Position | Match % Accuracy |
| ✓ | ✓ | X | X | 72.6 |
| ✓ | X | ✓ | X | 90.0 |
| ✓ | ✓ | ✓ | X | 93.2 (ours) |
| X | ✓ | ✓ | ✓ | 91.3 |
| ✓ | ✓ | ✓ | ✓ | 92.0 |
+
+Table 3. Match accuracies for various embedding schemes.
+
+keypoint distance from the pose center rather than the absolute distance of the keypoint in the entire image. We find that match accuracy deteriorates with this embedding. This is likely because many people perform the same activity, such as running, in the PoseTrack dataset, leading to them having nearly identical relative pose positions. We vary the number of transformer blocks, the hidden size in the transformer block, and the number of heads in the table of Figure 7 (left). Decreasing the number of transformer blocks, the hidden size, and attention heads hurts performance.
+
+Number of Timesteps and Other Factors We find that reducing the number of timesteps adversely affects the MOTA score. It drops up to 0.3 points when using only a single timestep because we are less robust to detection errors. Also, in place of our greedy algorithm, we experimented with the Hungarian algorithm used in [19]. This algorithm is effective with ground truth information, but is not accurate when using detected poses.
+
+| Num Tx | Hidden Size | Int. Size | Num Heads | Parameters (M) | % IDSW |
| 2 | 128 | 512 | 4 | 0.40 | 1.0 |
| 4 | 128 | 512 | 4 | 0.43 | 0.8 |
| 6 | 128 | 512 | 4 | 1.26 | 1.1 |
| 4 | 64 | 256 | 4 | 0.23 | 0.9 |
| 4 | 128 | 512 | 4 | 0.43 | 0.8 |
| 4 | 256 | 1024 | 4 | 3.31 | 1.1 |
| 4 | 128 | 128 | 4 | 0.43 | 0.8 |
| 4 | 128 | 512 | 4 | 0.86 | 0.8 |
| 4 | 128 | 128 | 2 | 0.43 | 0.9 |
| 4 | 128 | 128 | 4 | 0.43 | 0.8 |
| 4 | 128 | 128 | 6 | 0.43 | 0.8 |
+
+
+Figure 7. Left: Transformer network hyper-parameters are varied. Right: A plot of IDSW rate vs. image resolution. The table on the left shows the input to each method; the conv+visual input is blurry because images are downsampled.
+
+
+(Figure 8 image panels: Visual Features; attention heads Tx Head 6-0 and Tx Head 6-3; pose pairs at timesteps t=1 to t=3.)
+
+Figure 8. Attention heatmaps from two of our network's attention heads are shown. These are the 0th, and 3rd heads from our final transformer. The two pairs above the dotted line are a matching pair, while the pair below the dotted line are not (and are also from separate videos). $t$ is the frame timestep.
+
+# 5.2. Comparing Self-Attention to Convolutions
+
+We compare transformers and CNNs by replacing our Transformer Matching Network with two convolution-based methods. One takes visual features from bounding box pose pairs as input while the other takes only keypoints as input, where each unique keypoint is colored via a linear interpolation, a visual version of our Type tokens. Both approaches use identical CNNs, sharing an architecture inspired by VGG [43], and have approximately $4\mathrm{x}$ more parameters than our transformer-based model because this was required for stable training. See A.4 of the Supplement for details.
+
+Transformers outperform CNNs for the tracking task, as shown in Figure 7. However, we find two areas where CNNs can be competitive. First, at higher resolutions, transformers often need a large number of parameters to match the CNNs' performance. In NLP, when using large vocabularies, a similar behavior is observed where transformers need multiple layers to achieve good performance. Second, we also find that convolutions optimize more quickly than the transformers, reaching their lowest number of ID Switches within the first 2 epochs of training. Intuitively, CNNs are more easily able to take advantage of spatial proximity. The transformers receive spatial information via the position embeddings, which are 1D linear projections of 2D locations. This can be improved by using positional embedding schemes that better preserve spatial information [18].
+
+In summary, CNNs are accurate at high resolutions given their useful properties such as translation invariance and location invariance. However, there is an extra computational cost to using them. The extra information, beyond the spatial location of keypoints, included in our keypoint embeddings, coupled with the transformer's ability to model higher-order interactions, allows it to function surprisingly well at very low resolutions. Thus, the advantage of CNNs is diminished and our transformer-based network outperforms them in the low-resolution case.
+
+# 5.3. Visualizing Attention Heatmaps
+
+We visualize our network's attention heatmaps in Fig. 8. When our network classifies a pair as non-matching, its attention is heavily placed on one of the poses over the other. Also, we find it interesting that one of our attention heads primarily places its attention on keypoints near the person's head. This specialization suggests different attention heads are attuned to specific keypoint motion cues.
+
+# 6. Conclusion
+
+In summary, we present an efficient multi-person pose tracking method. Our proposed Pose Entailment method achieves SOTA performance on PoseTrack datasets without using RGB information in the tracking step. KeyTrack also benefits from improved keypoint estimates using TOKS, which outperforms bounding box propagation methods. Finally, we demonstrate how to tokenize and embed human pose information for the transformer architecture, which has applications to tasks such as pose-based action recognition.
+
+# References
+
+[1] PoseTrack leaderboard, 2017 test set, 2017. 2, 5
+[2] Posetrack challenge - eccv 2018, 2018. 5
+[3] Abdulrahman Alarifi, AbdulMalik Al-Salman, Mansour Alsaleh, Ahmad Alnafessah, Suheer Al-Hadhrami, Mai A Al-Ammar, and Hend S Al-Khalifa. Ultra wideband indoor positioning technologies: Analysis and recent advances. Sensors, 16(5):707, 2016. 2
+[4] Mykhaylo Andriluka, Umar Iqbal, Eldar Insafutdinov, Leonid Pishchulin, Anton Milan, Juergen Gall, and Bernt Schiele. Posetrack: A benchmark for human pose estimation and tracking. In CVPR, 2018. 2, 5
+[5] Mykhaylo Andriluka, Stefan Roth, and Bernt Schiele. Pictorial structures revisited: People detection and articulated pose estimation. In 2009 IEEE conference on computer vision and pattern recognition, pages 1014-1021. IEEE, 2009. 2
+[6] Mykhaylo Andriluka, Leonid Pishchulin, Peter Gehler, and Bernt Schiele. 2d human pose estimation - mpii human pose dataset. In CVPR, 2014. 5
+[7] Keni Bernardin and Rainer Stiefelhagen. Evaluating multiple object tracking performance: the CLEAR MOT metrics. EURASIP Journal on Image and Video Processing, 2008(1), 2008. 5
+[8] Gedas Bertasius, Christoph Feichtenhofer, Du Tran, Jianbo Shi, and Lorenzo Torresani. Learning temporal pose estimation from sparsely-labeled videos. arXiv preprint arXiv:1906.04016, 2019. 2
+[9] Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326, 2015. 3
+[10] Yilun Chen, Zhicheng Wang, Yuxiang Peng, Zhiqiang Zhang, Gang Yu, and Jian Sun. Cascaded pyramid network for multi-person pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7103-7112, 2018. 2
+[11] Bo Dai, Yuqi Zhang, and Dahua Lin. Detecting visual relationships with deep relational networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. 3
+[12] Matthias Dantone, Juergen Gall, Christian Leistner, and Luc Van Gool. Human pose estimation using body parts dependent joint regressors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3041-3048, 2013. 2
+[13] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018. 3
+[14] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. 2
+[15] Andreas Doering, Umar Iqbal, and Juergen Gall. Joint flow: Temporal flow fields for multi person tracking, 2018. 7
+
+[16] Pedro F Felzenszwalb and Daniel P Huttenlocher. Pictorial structures for object recognition. International journal of computer vision, 61(1):55-79, 2005. 2
+[17] Mihai Fieraru, Anna Khoreva, Leonid Pishchulin, and Bernt Schiele. Learning to refine human pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 205-214, 2018. 2
+[18] Rohit Girdhar, Joao Carreira, Carl Doersch, and Andrew Zisserman. Video action transformer network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 244–253, 2019. 3, 8
+[19] Rohit Girdhar, Georgia Gkioxari, Lorenzo Torresani, Manohar Paluri, and Du Tran. Detect-and-track: Efficient pose estimation in videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 350-359, 2018. 2, 3, 7
+[20] Hengkai Guo, Tang Tang, Guozhong Luo, Riwei Chen, Yongchen Lu, and Linfu Wen. Multi-domain pose network for multi-person pose estimation and tracking. Computer Vision - ECCV 2018 Workshops, page 209-216, 2019. 2, 3, 7
+[21] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961-2969, 2017. 2
+[22] Ronghang Hu, Marcus Rohrbach, Jacob Andreas, Trevor Darrell, and Kate Saenko. Modeling relationships in referential expressions with compositional modular networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016. 3
+[23] Hao Huang, Luowei Zhou, Wei Zhang, Jason J Corso, and Chenliang Xu. Dynamic graph modules for modeling object-object interactions in activity recognition: Supplementary material. BMVC, 2019. 3
+[24] Shaoli Huang, Mingming Gong, and Dacheng Tao. A coarse-fine network for keypoint localization. In Proceedings of the IEEE International Conference on Computer Vision, pages 3028-3037, 2017. 2
+[25] Jihye Hwang, Jieun Lee, Sungheon Park, and Nojun Kwak. Pose estimator and tracker using temporal flow maps for limbs, 2019. 2, 6, 7
+[26] Eldar Insafutdinov, Mykhaylo Andriluka, Leonid Pishchulin, Siyu Tang, Evgeny Levinkov, Bjoern Andres, and Bernt Schiele. ArtTrack: Articulated multi-person tracking in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6457-6465, 2017. 2, 3
+[27] Umar Iqbal, Anton Milan, and Juergen Gall. Posetrack: Joint multi-person pose estimation and tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2011-2020, 2017. 2, 3
+[28] Sheng Jin, Wentao Liu, Wanli Ouyang, and Chen Qian. Multi-person articulated tracking with spatial and temporal embeddings. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5664-5673, 2019. 2
+[29] Sheng Jin, Xujie Ma, Zhipeng Han, Yue Wu, Wei Yang, Wentao Liu, Chen Qian, and Wanli Ouyang. Towards multi-person pose tracking: Bottom-up and top-down methods. In ICCV PoseTrack Workshop, volume 2, page 7, 2017. 3
+[30] Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. Visualbert: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557, 2019. 3
+[31] Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. arXiv preprint arXiv:1908.02265, 2019. 3
+[32] Chih-Yao Ma, Asim Kadav, Iain Melvin, Zsolt Kira, Ghassan AlRegib, and Hans Peter Graf. Attend and interact: Higher-order object interactions for video understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6790-6800, 2018. 3
+[33] Anton Milan, Laura Leal-Taixe, Ian D. Reid, Stefan Roth, and Konrad Schindler. MOT16: A benchmark for multi-object tracking. CoRR, abs/1603.00831, 2016. 5
+[34] Gyeongsik Moon, Ju Yong Chang, and Kyoung Mu Lee. Posefix: Model-agnostic general human pose refinement network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7773–7781, 2019. 2
+[35] Guanghan Ning and Heng Huang. Lighttrack: A generic framework for online top-down human pose tracking. arXiv preprint arXiv:1905.02822, 2019. 1, 2, 6, 7
+[36] Guanghan Ning, Ping Liu, Xiaochuan Fan, and Chi Zhang. A top-down approach to articulated human pose estimation and tracking. Computer Vision - ECCV 2018 Workshops, page 227-234, 2019. 7
+[37] George Papandreou, Tyler Zhu, Nori Kanazawa, Alexander Toshev, Jonathan Tompson, Chris Bregler, and Kevin Murphy. Towards accurate multi-person pose estimation in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4903-4911, 2017. 2
+[38] Yaadhav Raaj, Haroon Idrees, Gines Hidalgo, and Yaser Sheikh. Efficient online multi-person 2d pose tracking with recurrent spatio-temporal affinity fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4620-4628, 2019. 2, 7
+[39] Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, and Jonathon Shlens. Standalone self-attention in vision models. arXiv preprint arXiv:1906.05909, 2019. 3
+[40] Weijian Ruan, Wu Liu, Qian Bao, Jun Chen, Yuhao Cheng, and Tao Mei. POINet: Pose-guided ovonic insight network for multi-person pose tracking. In Proceedings of the 27th ACM International Conference on Multimedia, MM '19, pages 284-292, New York, NY, USA, 2019. ACM. 2, 6, 7
+[41] Adam Santoro, David Raposo, David GT Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. A simple neural network module for relational reasoning. In Advances in Neural Information Processing Systems, 2017. 3
+[42] Leonid Sigal and Michael J Black. Measure locally, reason globally: Occlusion-sensitive articulated pose estimation. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), volume 2, pages 2041-2048. IEEE, 2006. 2
+
+[43] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition, 2014. 8
+[44] Chen Sun, Fabien Baradel, Kevin Murphy, and Cordelia Schmid. Contrastive bidirectional transformer for temporal representation learning. arXiv preprint arXiv:1906.05743, 2019. 3
+[45] Ke Sun, Bin Xiao, Dong Liu, and Jingdong Wang. Deep high-resolution representation learning for human pose estimation. CoRR, abs/1902.09212, 2019. 1, 2, 3, 7
+[46] Hao Tan and Mohit Bansal. Lxmert: Learning cross-modality encoder representations from transformers. arXiv preprint arXiv:1908.07490, 2019. 3
+[47] Jonathan Tompson, Ross Goroshin, Arjun Jain, Yann LeCun, and Christoph Bregler. Efficient object localization using convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 648-656, 2015. 1, 2
+[48] Alexander Toshev and Christian Szegedy. Deeppose: Human pose estimation via deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1653-1660, 2014. 1, 2
+[49] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need, 2017. 1, 5
+[50] Xiaolong Wang, Allan Jabri, and Alexei A Efros. Learning correspondence from the cycle-consistency of time. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2566-2576, 2019. 2
+[51] Yang Wang and Greg Mori. Multiple tree models for occlusion and spatial constraints in human pose estimation. In European Conference on Computer Vision, pages 710-724. Springer, 2008. 2
+[52] Shih-En Wei, Varun Ramakrishna, Takeo Kanade, and Yaser Sheikh. Convolutional pose machines. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4724–4732, 2016. 1, 2
+[53] Fangting Xia, Peng Wang, Xianjie Chen, and Alan L Yuille. Joint multi-person pose estimation and semantic part segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6769-6778, 2017. 2
+[54] Bin Xiao, Haiping Wu, and Yichen Wei. Simple baselines for human pose estimation and tracking. In European Conference on Computer Vision (ECCV), 2018. 5, 6, 7
+[55] Yuliang Xiu, Jiefeng Li, Haoyu Wang, Yinghong Fang, and Cewu Lu. Pose flow: Efficient online pose tracking. arXiv preprint arXiv:1802.00977, 2018. 3
+[56] Jiarui Xu, Yue Cao, Zheng Zhang, and Han Hu. Spatial-temporal relation networks for multi-object tracking. arXiv preprint arXiv:1904.11489, 2019. 3
+[57] Wei Yang, Shuang Li, Wanli Ouyang, Hongsheng Li, and Xiaogang Wang. Learning feature pyramids for human pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, pages 1281-1290, 2017. 1, 2
+[58] Dongdong Yu, Kai Su, Jia Sun, and Changhu Wang. Multi-person pose estimation for pose tracking with enhanced cascaded pyramid network. In European Conference on Computer Vision, pages 221-226. Springer, 2018. 7
+
+[59] Jiabin Zhang, Zheng Zhu, Wei Zou, Peng Li, Yanwei Li, Hu Su, and Guan Huang. Fastpose: Towards real-time pose estimation and tracking via scale-normalized multi-task networks, 2019. 7
+[60] Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J Corso, and Jianfeng Gao. Unified vision-language pre-training for image captioning and vqa. arXiv preprint arXiv:1909.11059, 2019. 3
+[61] Luowei Zhou, Yingbo Zhou, Jason J Corso, Richard Socher, and Caiming Xiong. End-to-end dense video captioning with masked transformer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8739-8748, 2018. 3
\ No newline at end of file
diff --git a/15keypointsisallyouneed/images.zip b/15keypointsisallyouneed/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b560b32cddcf6fa268538f9750634b0106719016
--- /dev/null
+++ b/15keypointsisallyouneed/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:316cd93505affe87eee07716c0e57176579d2f17ebbc0be5650e80b8850ceb12
+size 476691
diff --git a/15keypointsisallyouneed/layout.json b/15keypointsisallyouneed/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..4e0337bb398b4fdf463c1c2c5da08a7f6e9e24d4
--- /dev/null
+++ b/15keypointsisallyouneed/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0e33a40d28f8c218e2c0a1663c0a1dced37d54a96d1204958993e08fa59cc45f
+size 407455
diff --git a/3dhumanmeshregressionwithdensecorrespondence/5ce609af-653e-4f2d-bb43-2bcd175021d4_content_list.json b/3dhumanmeshregressionwithdensecorrespondence/5ce609af-653e-4f2d-bb43-2bcd175021d4_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..266bd4a8e7d9d8c742ea00f9c3467186b181349a
--- /dev/null
+++ b/3dhumanmeshregressionwithdensecorrespondence/5ce609af-653e-4f2d-bb43-2bcd175021d4_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:546cd698d50dad4b75e54ecb356a3e7781ba9b45418a26770e6ee7830c7ac33d
+size 77599
diff --git a/3dhumanmeshregressionwithdensecorrespondence/5ce609af-653e-4f2d-bb43-2bcd175021d4_model.json b/3dhumanmeshregressionwithdensecorrespondence/5ce609af-653e-4f2d-bb43-2bcd175021d4_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..51bb9943f5f264b670ac801f135b58190219b6a0
--- /dev/null
+++ b/3dhumanmeshregressionwithdensecorrespondence/5ce609af-653e-4f2d-bb43-2bcd175021d4_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:81c2d7d663f5d903f72f29bee0cdaaaa3de06dcd30ebc32ffafbaa4f1e8e61f1
+size 95607
diff --git a/3dhumanmeshregressionwithdensecorrespondence/5ce609af-653e-4f2d-bb43-2bcd175021d4_origin.pdf b/3dhumanmeshregressionwithdensecorrespondence/5ce609af-653e-4f2d-bb43-2bcd175021d4_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..0de49ed1f11f58cfb239f8377be49c2db9ccd601
--- /dev/null
+++ b/3dhumanmeshregressionwithdensecorrespondence/5ce609af-653e-4f2d-bb43-2bcd175021d4_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:371f0ab1db0b974604e55690e0e7c652f828bc87b87f060f8ae97a9e13c320aa
+size 1909257
diff --git a/3dhumanmeshregressionwithdensecorrespondence/full.md b/3dhumanmeshregressionwithdensecorrespondence/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..a6e3bde7b84faa030675e40f20a345f5bf86f1c0
--- /dev/null
+++ b/3dhumanmeshregressionwithdensecorrespondence/full.md
@@ -0,0 +1,373 @@
+# 3D Human Mesh Regression with Dense Correspondence
+
+Wang Zeng $^{1}$ , Wanli Ouyang $^{2}$ , Ping Luo $^{3}$ , Wentao Liu $^{4}$ , and Xiaogang Wang $^{1,4}$
+
+$^{1}$The Chinese University of Hong Kong $^{2}$The University of Sydney $^{3}$The University of Hong Kong $^{4}$SenseTime Research {zengwang@link, xgwang@ee}.cuhk.edu.hk, wanli.ouyang@sydney.edu.au, pluo@cs.hku.hk, liuwentao@sensetime.com
+
+# Abstract
+
+Estimating the 3D mesh of the human body from a single 2D image is an important task with many applications, such as augmented reality and human-robot interaction. However, prior works usually reconstruct the 3D mesh from a global image feature extracted by a convolutional neural network (CNN), where the dense correspondences between the mesh surface and the image pixels are missing, leading to suboptimal solutions. This paper proposes a model-free 3D human mesh estimation framework, named DecoMR, which explicitly establishes the dense correspondence between the mesh and the local image features in the UV space (i.e., a 2D space used for texture mapping of 3D meshes). DecoMR first predicts a pixel-to-surface dense correspondence map (i.e., an IUV image), with which we transfer local features from the image space to the UV space. The transferred local image features are then processed in the UV space to regress a location map, which is well aligned with the transferred features. Finally, we reconstruct the 3D human mesh from the regressed location map with a predefined mapping function. We also observe that the existing discontinuous UV map is unfriendly to network learning. Therefore, we propose a novel UV map that maintains most of the neighboring relations on the original mesh surface. Experiments demonstrate that our proposed local feature alignment and continuous UV map outperform existing 3D mesh based methods on multiple public benchmarks. Code will be made available at https://github.com/zengwang430521/DecoMR.
+
+# 1. Introduction
+
+Estimation of the full human body pose and shape from a monocular image is a fundamental task for various applications such as human action recognition [12, 35], VR/AR [11] and video editing [10]. It is challenging mostly due to the inherent depth ambiguity and the difficulty of
+
+
+Figure 1. Prior methods (e.g., SPIN [20] and CMR [21]) usually reconstruct 3D meshes of the human body from the global image feature vector extracted by neural networks, where the dense correspondences between the mesh surface and the image pixels are missing, leading to suboptimal results (top). Our DecoMR framework explicitly establishes such correspondence in the feature space with the aid of a novel continuous UV map, which yields better mesh details (bottom).
+
+obtaining the ground-truth 3D human body data. There are several popular representations for 3D objects in the literature, e.g., point clouds, 3D voxels and 3D meshes. Because of its compatibility with existing computer graphics engines and its efficiency in representing object surfaces in detail with reasonable storage, the 3D mesh representation has been widely adopted for 3D human body reconstruction [18, 4, 20, 8, 27, 38, 11, 26, 25, 37, 21, 39].
+
+However, unlike the 3D voxel representation, the dense correspondence between the template human mesh surface and the image pixels is missing, although this dense correspondence between the input and the output has been proven crucial for various tasks [24, 39]. Due to this limitation, most existing 3D mesh based methods, either model-based [18, 26, 25, 20] or model-free [21], have to ignore the correspondence between the mesh representation and the pixel representation. They have to estimate the human meshes based on either the global image feature [18, 21, 20] or hierarchical projection and refinement [39], which is time-consuming and sensitive to the initial estimation.
+
+To utilize the 3D mesh representation without losing
+
+the correspondence between the mesh space and the image space, we propose a 3D human mesh estimation framework that explicitly establishes the dense correspondence between the output 3D mesh and the input image in the UV space.
+
+Representing the output mesh by a new UV map: Every point on the mesh surface is represented by its coordinates on the continuous UV map. Therefore, the 3D mesh can be represented as a location map in the UV space, whose pixel values are the 3D coordinates of the corresponding points on the mesh surface, as shown in Figure 1. Instead of using the SMPL default UV map, we construct a new continuous UV map that maintains more neighboring relations of the original mesh surface by parameterizing the whole mesh surface into a single part on the UV plane, as shown in Figure 1.
+
+Mapping image features to the UV space: To map the image features to the continuous UV map space, we first use a network that takes a monocular image as input and predicts an IUV image [2], which assigns each pixel to a specific body part location. The local image features from the decoder are then transferred to the UV space with the guidance of the predicted IUV image to construct transferred feature maps that are well aligned with the corresponding mesh areas.
+
+Given the transferred local features, we use both the local features and the global feature to estimate the location map in the UV space, which is further used to reconstruct the 3D human body mesh with the predefined UV mapping function. Since our UV map is continuous and maintains the neighboring relationships among body parts, details between body parts can be well preserved when the local features are transferred.
+
+In summary, our contributions are twofold:
+
+- We propose a novel UV map that maintains most of the neighboring relations on the original mesh surface.
+- We explicitly establish the dense correspondence between the output 3D mesh and the input image by the transferred local image features.
+
+We extensively evaluate our methods on multiple widely used benchmarks for 3D human body reconstruction. Our method achieves state-of-the-art performance on both 3D human body mesh reconstruction and 3D human body pose estimation.
+
+# 2. Related Work
+
+# 2.1. Optimization-based methods
+
+Pioneering works solve 3D human body reconstruction by optimizing the parameters of a predefined 3D human mesh model, e.g., SCAPE [3] and SMPL [23], with respect to the ground-truth body landmark locations [8], or by employing
+
+a 2D keypoint estimation network [4]. To improve the precision, extra landmarks are used in [22]. A recent work [38] enables multi-person body reconstruction by incorporating human semantic part segmentation cues as well as scene and temporal constraints.
+
+# 2.2. Learning-based methods
+
+Model-based methods: Direct reconstruction of the 3D human body from a single image is a relatively hard problem. Therefore, many methods incorporate a parameterized 3D human model and reduce the problem to model parameter regression. For example, HMR [18] regresses the SMPL parameters directly from the RGB image. In order to mitigate the lack of robustness caused by the inadequacy of in-the-wild training data, some approaches employ intermediate representations, such as 2D joint heatmaps and silhouettes [26], semantic segmentation maps [25] or IUV images [36]. Recently, SPIN [20] incorporates 3D human model parameter optimization into the network training process by supervising the network with the optimization result, and achieves state-of-the-art results among model-based 3D human body estimation approaches.
+
+Compared with optimization-based methods, model parameter regression methods are more computationally efficient. While these methods can make use of the prior knowledge embedded in the 3D human model and tend to reconstruct more biologically plausible human bodies than model-free methods, their representation capability is limited by the parameter space of the predefined human models. In addition, as stated in [21], the 3D human model parameter space might not be friendly to network learning. On the contrary, our framework does not regress model parameters. Instead, it directly outputs the 3D coordinates of each mesh vertex.
+
+Model-free methods: Some methods do not rely on human models and regress a 3D human body representation directly from the image. BodyNet [33] estimates a volumetric representation of the 3D human with a Voxel-CNN. A recent work [6] estimates visible and hidden depth maps and combines them to form a point cloud of the human body. Voxel and point cloud based representations are flexible and can represent objects with different topology. However, their capability of reconstructing surface details is limited by the storage cost.
+
+CMR [21] uses a Graph-CNN to directly regress the 3D coordinates of vertices from image features. DenseBody [37] estimates vertex locations in the form of a UV position map. A recent work [28] represents 3D shapes using 2D geometry images, which can be regarded as a special kind of UV position map. These methods do not use any human model. However, they still lack correspondence between the human mesh and the image and estimate the whole surface relying only on the global image feature. On the contrary, our method can employ local features for the reconstruction
+
+
+Figure 2. Overview of our framework. Given an input image, an IUV map is first predicted by the correspondence net. Then local image features are transferred to the UV space. Location net takes transferred local features, expanded global feature and reference location map as input, and regresses a location map. Finally, 3D mesh is reconstructed from the location map.
+
+of the corresponding surface areas.
+
+The efficacy of the UV space representation has been demonstrated in the recent work Tex2Shape [1], where the 3D human shape is estimated from a texture map obtained by transferring image pixels according to the IUV image estimated by DensePose [2]. We also use the IUV image to guide the human mesh estimation. However, in [1], the UV transfer is used to preprocess the raw image and is independent of the model learning, while we incorporate the UV transfer into our network to enable end-to-end learning. We observe the efficacy of learning the transferred features end-to-end, which has also been demonstrated by prior works, e.g., Spatial Transformer Networks [15] and Deformable ConvNets [5].
+
+Very recently, HMD [39] refines an initially estimated human mesh by hierarchical projection and mesh deformation. PIFu [30] reconstructs the 3D human as an implicit function. HMD and PIFu are able to utilize local image features to achieve impressive details in their reconstruction results. However, HMD is computationally intensive and sensitive to the initial estimation, while the implicit function lacks the semantic information of the human body. In contrast, we estimate the pixel-to-surface dense correspondence from images directly, which is computationally efficient and more robust, and the location map maintains the semantic information of the human body.
+
+# 3. Our Method
+
+Overview. As shown in Figure 2, our framework DecoMR consists of two components: a dense correspondence estimation network (CNet), which operates in the image space, and a localization network (LNet), which operates in a new continuous UV space. The
+
+CNet has an encoder-decoder architecture to estimate an IUV image. It also extracts local image features $\mathcal{F}_{im}$ , and then uses the estimated IUV image for transferring the image features $\mathcal{F}_{im}$ to the transferred local features $\mathcal{F}_{UV}$ in the UV space. LNet takes the above transferred local features $\mathcal{F}_{UV}$ as input, and regresses a location map $X$ , whose pixel value is the 3D coordinates of the corresponding points on the mesh surface. Finally, the 3D human mesh $V$ is reconstructed from the above location map by using a predefined UV mapping function. As a result, the location map and the transferred feature map are well aligned in the UV space, thus leading to dense correspondence between the output 3D mesh and the input image.
+
+Although the SMPL UV map [23] is widely used in the literature [37, 1, 7], it loses the neighboring relationships between different body parts as shown in Figure 3 (a), which is crucial for network learning as stated in [21]. Therefore, we design a new UV map that is able to maintain more neighboring relationships on the original mesh surface as shown in Figure 3 (b).
+
+The overall objective function of DecoMR is
+
+$$
+\mathcal{L} = \mathcal{L}_{IUV} + \mathcal{L}_{Loc} + \lambda_{con} \mathcal{L}_{con}. \tag{1}
+$$
+
+It has three loss functions with different purposes. The first loss, denoted $\mathcal{L}_{IUV}$, minimizes the distance between the predicted IUV image and the ground-truth IUV image. The second loss, denoted $\mathcal{L}_{Loc}$, minimizes the dissimilarity between the regressed human mesh (i.e., the location map) and the ground-truth human mesh. In order to encourage the output mesh to be aligned with the input image, we add an extra loss function, denoted $\mathcal{L}_{con}$, which is a consistent loss that increases the consistency between the regressed location map and the ground-truth IUV image. The $\lambda_{con}$ in Equation 1 is a constant coefficient that balances the
+
+consistent loss $\mathcal{L}_{con}$. We first define the new UV map below and then introduce the different loss functions in detail.
+
+# 3.1. The Continuous UV map
+
+First we define a new continuous UV map that preserves more neighboring relationships of the original mesh than the ordinary UV map of SMPL. As shown in Figure 3 (a), multiple mesh surface parts are placed separately on the SMPL default UV map, which loses the neighboring relationships of the original mesh surface. Instead of utilizing the SMPL UV map as in [1, 7, 37], we design a new continuous UV map. We first carefully split the template mesh into an open mesh, while keeping the entire mesh surface as a whole. Then we utilize an area-preserving 3D mesh planar parameterization algorithm [14, 16] to minimize the area distortion between the UV map and the original mesh surface, in order to obtain an initial UV map. To maintain symmetry for every pair of symmetric vertices on the UV map, we further refine the initial UV map by first aligning the fitted symmetry axis with the $v$ axis and then averaging the UV coordinates of each vertex with those of its symmetric counterpart flipped about the $v$ axis.
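+
+The symmetry refinement step above can be expressed compactly. Below is a minimal sketch, assuming a precomputed `mirror` index array mapping each vertex to its symmetric counterpart and a convention in which the symmetry axis sits at a fixed $u$ value; the names and conventions are illustrative rather than the authors' implementation.
+
+```python
+import numpy as np
+
+def symmetrize_uv(uv, mirror, axis_u=0.5):
+    """Refine an initial UV map so that symmetric vertices become mirror-symmetric.
+
+    uv     : (N, 2) array of (u, v) coordinates, with the fitted symmetry axis
+             already aligned to the v axis at u = axis_u (assumed convention).
+    mirror : (N,) array; mirror[i] is the index of the vertex symmetric to vertex i.
+    """
+    uv_partner = uv[mirror].copy()
+    # Reflect the partner's u coordinate about the symmetry axis.
+    uv_partner[:, 0] = 2.0 * axis_u - uv_partner[:, 0]
+    # Averaging enforces exact symmetry for every symmetric vertex pair.
+    return 0.5 * (uv + uv_partner)
+```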
+
+Comparisons. Here we quantitatively show that our continuous UV map outperforms the SMPL UV map in terms of preserving connection relationships between vertices on the mesh. To do so, we compute the distance matrix, where each element is the distance between every vertex pair. We also compute the distance matrix on the UV map. Figure 4 shows such distance matrices. This distance matrix can be computed by using different types of data. For the mesh surface, the distance between two vertices is defined as the length of the minimal path between them on the graph built from the mesh. For the UV map, the distance between two vertices is directly calculated by the distance between their UV coordinates.
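+
+As a concrete illustration, the two distance matrices can be computed as in the sketch below; this is a hedged example using SciPy's shortest-path routine on the mesh edge graph and plain Euclidean distances on the UV coordinates, not the authors' code, and `faces`/`uv` are assumed inputs.
+
+```python
+import numpy as np
+from scipy.sparse import coo_matrix
+from scipy.sparse.csgraph import shortest_path
+from scipy.spatial.distance import pdist, squareform
+
+def mesh_distance_matrix(vertices, faces):
+    """All-pairs shortest-path distances on the graph built from mesh edges,
+    with edge weights equal to Euclidean edge lengths."""
+    edges = np.concatenate([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
+    edges = np.unique(np.sort(edges, axis=1), axis=0)   # deduplicate edges shared by faces
+    lengths = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)
+    n = len(vertices)
+    graph = coo_matrix((lengths, (edges[:, 0], edges[:, 1])), shape=(n, n))
+    return shortest_path(graph, directed=False)
+
+def uv_distance_matrix(uv):
+    """Pairwise Euclidean distances between the vertices' UV coordinates."""
+    return squareform(pdist(uv))
+```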
+
+Now we quantitatively evaluate the similarity between the distance matrices of UV map and original mesh in two aspects as shown in Table 1. In the first aspect, we calculate the 2D correlation coefficient denoted as $S_{1}$ . We have
+
+$$
+S_{1} = \frac{\sum_{m}\sum_{n}\left(A_{mn} - \bar{A}\right)\left(B_{mn} - \bar{B}\right)}{\sqrt{\left(\sum_{m}\sum_{n}\left(A_{mn} - \bar{A}\right)^{2}\right)\left(\sum_{m}\sum_{n}\left(B_{mn} - \bar{B}\right)^{2}\right)}}, \tag{2}
+$$
+
+where $A$ and $B$ are the distance matrices of original mesh and UV map, respectively. $\bar{A}$ and $\bar{B}$ are the mean value of $A$ and $B$ respectively. $m$ and $n$ are the indices of mesh vertices.
+
+In the second aspect, we calculate the normalized cosine similarity between the distance matrices of the UV map and the original mesh, denoted as $S_{2}$ . From Table 1, we see that our continuous UV map outperforms the SMPL UV map by large margins on both metrics, showing that our
+
+
+Figure 3. Comparisons of UV maps. Row (a) shows the SMPL default UV map and row (b) shows our continuous UV map; columns show the RGB image, IUV image, UV map and 3D mesh.
+
+Figure 4. Comparisons of distance matrices between vertices calculated on the SMPL UV map, the proposed UV map, and the original mesh surface. Compared to the SMPL UV map, the distance matrix of the proposed UV map is more similar to that of the original mesh.
+
+| UV map | 2D correlation (S1) | cosine similarity (S2) |
+| --- | --- | --- |
+| SMPL [23] | 0.2132 | 0.8306 |
+| Ours | 0.7758 | 0.9458 |
+
+Table 1. Comparisons of the similarity between the vertices' distance matrices of the original mesh surface and different types of UV maps. $S_{1}$ is the 2D correlation coefficient and $S_{2}$ is the normalized cosine similarity. We see that the proposed UV map outperforms SMPL default UV map on both metrics.
+
+UV map preserves more neighboring relationships than the SMPL UV map.
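+
+For completeness, the two similarity scores can be computed from the distance matrices as below. This is a minimal sketch in which $S_2$ is taken as the cosine similarity between the flattened matrices; the exact normalization used in the paper is an assumption.
+
+```python
+import numpy as np
+
+def correlation_2d(A, B):
+    """2D correlation coefficient S1 between two distance matrices (Eq. 2)."""
+    a, b = A - A.mean(), B - B.mean()
+    return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())
+
+def cosine_similarity(A, B):
+    """Normalized cosine similarity S2, computed on the flattened matrices."""
+    a, b = A.ravel(), B.ravel()
+    return a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
+```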
+
+Pixel-to-Mesh Correspondence. With the proposed UV map, every point on the mesh surface can be expressed by its coordinates on the UV map (i.e. UV coordinates). Therefore, we can predict the pixel-to-surface correspondence by estimating the UV coordinates for each pixel belonging to human body, leading to an IUV image as shown in Figure 3. More importantly, we can also represent a 3D mesh with a location map in the UV space, where the pixel values are 3D coordinates of the corresponding points on the mesh surface. Thus it is easy to reconstruct 3D mesh from a location map with the following formula,
+
+$$
+V_{i} = X\left(u_{i}, v_{i}\right), \tag{3}
+$$
+
+where $V_{i}$ denotes the 3D coordinates of vertex $i$, $X$ is the location map, and $u_{i}$ and $v_{i}$ are the UV coordinates of the vertex.
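+
+In practice, recovering the vertex coordinates from a discrete location map amounts to sampling the map at each vertex's fixed UV coordinates. The following PyTorch sketch uses bilinear grid sampling; the interpolation scheme and tensor layout are assumptions for illustration.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def location_map_to_vertices(location_map, vertex_uv):
+    """Reconstruct mesh vertices from a location map (Eq. 3).
+
+    location_map : (B, 3, H, W) tensor whose pixel values are 3D coordinates.
+    vertex_uv    : (N, 2) tensor of per-vertex (u, v) coordinates in [0, 1].
+    Returns      : (B, N, 3) vertex coordinates V_i = X(u_i, v_i).
+    """
+    B = location_map.shape[0]
+    # grid_sample expects coordinates in [-1, 1], ordered as (x, y) = (u, v).
+    grid = (vertex_uv * 2.0 - 1.0).view(1, 1, -1, 2).expand(B, -1, -1, -1)
+    sampled = F.grid_sample(location_map, grid, align_corners=True)   # (B, 3, 1, N)
+    return sampled.squeeze(2).permute(0, 2, 1)                        # (B, N, 3)
+```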
+
+# 3.2. Dense Correspondence Network (CNet)
+
+CNet establishes the dense correspondence between pixels of the input image and areas of 3D mesh surface. As
+
+
+Figure 5. Illustration of the UV transferring of raw image pixels. Elements in the image space can be transferred to the UV space with the guidance of IUV image.
+
+illustrated in Figure 2, CNet has an encoder-decoder architecture, where the encoder employs ResNet50 [9] as the backbone, and the decoder consists of several upsampling and convolutional layers with skip connections to the encoder. In particular, the encoder encodes the image as a local feature map and a global feature vector, and also regresses the camera parameters, which are used to project the 3D mesh into the image plane. The decoder first generates a mask of the human body, which distinguishes foreground pixels (i.e., the human body) from background pixels. Then, the decoder outputs the exact UV coordinates for the foreground pixels, constituting an IUV image as shown in Figure 3. With the predicted IUV image, the corresponding point on the mesh surface can be determined for every image pixel. The loss function for the CNet contains two terms,
+
+$$
+\mathcal{L}_{IUV} = \lambda_{c} \mathcal{L}_{c} + \lambda_{r} \mathcal{L}_{r}, \tag{4}
+$$
+
+where $\mathcal{L}_c$ is a dense binary cross-entropy loss for classifying each pixel as foreground or background, $\mathcal{L}_r$ is an $l_1$ dense regression loss for predicting the exact UV coordinates, and $\lambda_c$ and $\lambda_r$ are two constant coefficients.
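+
+A straightforward rendering of Eq. 4 could look like the sketch below; the tensor shapes and the restriction of the regression term to foreground pixels are assumptions consistent with the text, not the released code.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def cnet_loss(mask_logits, uv_pred, mask_gt, uv_gt, lambda_c=0.2, lambda_r=1.0):
+    """L_IUV = lambda_c * L_c + lambda_r * L_r (Eq. 4).
+
+    mask_logits : (B, 1, H, W) foreground/background logits.
+    uv_pred     : (B, 2, H, W) predicted UV coordinates.
+    mask_gt     : (B, 1, H, W) binary ground-truth foreground mask.
+    uv_gt       : (B, 2, H, W) ground-truth UV coordinates (valid on foreground).
+    """
+    # Dense binary cross-entropy for foreground/background classification.
+    loss_c = F.binary_cross_entropy_with_logits(mask_logits, mask_gt)
+    # l1 regression of the UV coordinates, evaluated on foreground pixels only.
+    fg = mask_gt.expand_as(uv_pred)
+    loss_r = (fg * (uv_pred - uv_gt).abs()).sum() / fg.sum().clamp(min=1.0)
+    return lambda_c * loss_c + lambda_r * loss_r
+```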
+
+# 3.3. Vertex coordinates regression
+
+The location net (LNet) aims to regress 3D coordinates of mesh vertices by outputting a location map, from which the 3D mesh can be reconstructed easily. As shown in Figure 2, the LNet first transfers image features from the image space to the UV space with the guidance of predicted IUV image:
+
+$$
+\mathcal{F}_{UV}(u, v) = \mathcal{F}_{im}(x, y), \tag{5}
+$$
+
+where $(x,y)$ are the image-space coordinates of the pixels classified as foreground, and $(u,v)$ are the predicted UV-space coordinates of these pixels. $\mathcal{F}_{im}$ is the feature map in the image space and $\mathcal{F}_{UV}$ is the transferred feature map in the UV space.
+
+The feature map $\mathcal{F}_{UV}$ is well aligned with the output location map, so the LNet can predict the location map using the corresponding local image features. In this way, the dense correspondence between image pixels and mesh surface areas is established explicitly. An example of raw image pixels transferred to the UV space is shown in Figure 5. Note that our framework transfers features instead of pixel values.
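+
+The transfer in Eq. 5 is essentially a scatter from image pixels to UV pixels guided by the predicted IUV image. The sketch below uses a nearest-pixel scatter for a single sample; the shapes and the rounding scheme are assumptions.
+
+```python
+import torch
+
+def transfer_features_to_uv(feat_im, fg_mask, uv_pred, uv_size):
+    """Scatter image-space features into the UV space (Eq. 5).
+
+    feat_im : (C, H, W) local image features F_im.
+    fg_mask : (H, W) bool tensor, True for pixels classified as foreground.
+    uv_pred : (2, H, W) predicted UV coordinates in [0, 1] for each pixel.
+    uv_size : side length of the square UV feature map to build.
+    Returns : (C, uv_size, uv_size) transferred feature map F_UV.
+    """
+    C = feat_im.shape[0]
+    feat_uv = feat_im.new_zeros(C, uv_size, uv_size)
+    ys, xs = torch.nonzero(fg_mask, as_tuple=True)                    # foreground pixels (y, x)
+    u = (uv_pred[0, ys, xs] * (uv_size - 1)).round().long().clamp(0, uv_size - 1)
+    v = (uv_pred[1, ys, xs] * (uv_size - 1)).round().long().clamp(0, uv_size - 1)
+    # F_UV(u, v) = F_im(x, y): copy each foreground pixel's feature to its UV location.
+    feat_uv[:, v, u] = feat_im[:, ys, xs]
+    return feat_uv
+```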
+
+The LNet is a light CNN with skip connections taking the transferred local image features, expanded global image
+
+
+Figure 6. Illustration of our consistent loss between the location map and the IUV image. 3D coordinates in the location map are transferred back to the image space using IUV image, and then projected to the image plane. The projected 2D coordinates are supervised by the coordinates of image pixels in the image space.
+
+feature and a reference location map as input. Intuitively, we apply a weighted $l_{1}$ loss between the predicted location map $X$ and the ground-truth location map $\hat{X}$ , i.e.,
+
+$$
+\mathcal{L}_{\text{map}} = \sum_{u}\sum_{v} W(u, v) \cdot \left\| X(u, v) - \hat{X}(u, v) \right\|_{1}. \tag{6}
+$$
+
+$W$ is a weight map used to balance the contribution of different mesh areas, where areas away from torso are assigned higher weights.
+
+We also reconstruct a 3D human mesh from the predicted location map and obtain 3D joints from the human mesh using a joint regressor, as in previous works [18, 21, 20]. Then we add supervision on the 3D coordinates and the projected 2D image-space coordinates of the joints, i.e.,
+
+$$
+\mathcal{L}_{J}^{3D} = \sum_{i}^{k} \left\| Z_{i} - \hat{Z}_{i} \right\|_{1}, \tag{7}
+$$
+
+$$
+\mathcal{L}_{J}^{2D} = \sum_{i}^{k} \left\| v_{i} \left(z_{i} - \hat{z}_{i}\right) \right\|_{2}^{2}, \tag{8}
+$$
+
+where $Z_{i}$ and $z_{i}$ are the regressed 3D and 2D coordinates of joints, while $\hat{Z}_{i}$ and $\hat{z}_i$ refer to the coordinates of the ground-truth joints, and $v_{i}$ denotes the visibility of joints.
+
+Finally, the full loss for LNet is
+
+$$
+\mathcal{L}_{Loc} = \mathcal{L}_{map} + \mathcal{L}_{J}^{3D} + \mathcal{L}_{J}^{2D}. \tag{9}
+$$
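+
+Eqs. 6-9 combine into the LNet objective as sketched below; the weight map $W$, the joint regressor outputs and the visibility handling are assumed inputs, and the sketch follows the equations rather than the released implementation.
+
+```python
+import torch
+
+def lnet_loss(loc_pred, loc_gt, weight_map, j3d_pred, j3d_gt, j2d_pred, j2d_gt, vis):
+    """L_Loc = L_map + L_J^3D + L_J^2D (Eqs. 6-9).
+
+    loc_pred, loc_gt : (B, 3, H, W) predicted / ground-truth location maps.
+    weight_map       : (B, 1, H, W) per-pixel weights W (higher away from the torso).
+    j3d_*            : (B, K, 3) regressed / ground-truth 3D joints.
+    j2d_*            : (B, K, 2) projected / ground-truth 2D joints.
+    vis              : (B, K, 1) joint visibility.
+    """
+    # Eq. 6: weighted l1 loss between location maps.
+    loss_map = (weight_map * (loc_pred - loc_gt).abs()).sum(dim=(1, 2, 3)).mean()
+    # Eq. 7: l1 loss on the 3D joints.
+    loss_j3d = (j3d_pred - j3d_gt).abs().sum(dim=2).sum(dim=1).mean()
+    # Eq. 8: squared l2 loss on the visible 2D joints.
+    loss_j2d = (vis * (j2d_pred - j2d_gt)).pow(2).sum(dim=2).sum(dim=1).mean()
+    return loss_map + loss_j3d + loss_j2d
+```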
+
+Consistent Loss: Besides the above widely used supervision, we add an extra supervision between regressed location map and ground-truth IUV image to improve the alignment between 3D mesh and image.
+
+As shown in Figure 6, with an IUV image, we can also transfer location map from the UV space back to the image space and get 3D coordinates for every foreground pixel. The 3D coordinates are then projected to image plane to get 2D coordinates, which should be consistent with the coordinates of the pixels in the image space. Then the consistent
+
+loss is constructed as follows:
+
+$$
+\mathcal{L}_{con} = \sum_{(x, y)} \left\| (x, y) - \pi\left(X(u, v), c\right) \right\|_{2}^{2}, \tag{10}
+$$
+
+where $X$ is the predicted location map, $\pi(X, c)$ denotes the projection function with predicted camera parameters $c$ , and $x, y, u, v$ are the same as those in Equation 5. This consistent loss is similar to the loss term $\mathcal{L}_{dense}$ in the recent work of Rong et al. [29]. However, in our framework there is no need to calculate the corresponding point on the mesh surface as in [29], because the correspondence between the mesh surface and the image pixels is already established.
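+
+A sketch of the consistent loss in Eq. 10 for a single image: the location map is read back at each foreground pixel's predicted UV coordinate, projected with the predicted camera, and compared with the pixel's own image coordinate. A weak-perspective camera (scale plus 2D translation) is assumed here purely for illustration.
+
+```python
+import torch
+
+def consistent_loss(loc_map, pix_xy, pix_uv, cam):
+    """L_con (Eq. 10) for a single image.
+
+    loc_map : (3, H_uv, W_uv) predicted location map X.
+    pix_xy  : (K, 2) normalized image coordinates of the foreground pixels.
+    pix_uv  : (K, 2) their predicted UV coordinates as integer indices into loc_map.
+    cam     : (3,) weak-perspective camera (scale, tx, ty) -- an assumed parameterization.
+    """
+    u, v = pix_uv[:, 0].long(), pix_uv[:, 1].long()
+    points_3d = loc_map[:, v, u].t()                      # (K, 3) values X(u, v)
+    proj_2d = cam[0] * points_3d[:, :2] + cam[1:]         # projection pi(X(u, v), c)
+    return ((pix_xy - proj_2d) ** 2).sum()                # summed over foreground pixels
+```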
+
+# 3.4. Implementation details
+
+We set $\lambda_{c}$ , $\lambda_{r}$ and $\lambda_{con}$ to 0.2, 1 and 1 respectively and optimize the framework with the Adam optimizer [19], with batch size 128 and learning rate 2.5e-4. The training data is augmented with random scaling, rotation, flipping and RGB channel noise. We first train the CNet for 5 epochs and then train the full framework end-to-end for 30 epochs.
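+
+The optimization schedule described above could be set up as in the following sketch; the model and data loader objects are placeholders, and the staged training is written out literally as stated in the text.
+
+```python
+import torch
+
+def build_optimizer(model):
+    # Adam with learning rate 2.5e-4, as stated in the implementation details.
+    return torch.optim.Adam(model.parameters(), lr=2.5e-4)
+
+def train(cnet, full_model, cnet_loader, full_loader):
+    # Stage 1: train the correspondence network (CNet) alone for 5 epochs.
+    opt = build_optimizer(cnet)
+    for _ in range(5):
+        for batch in cnet_loader:              # batches of size 128 with augmentation
+            opt.zero_grad()
+            cnet.loss(batch).backward()        # placeholder method returning L_IUV
+            opt.step()
+    # Stage 2: train the full framework end-to-end for 30 epochs.
+    opt = build_optimizer(full_model)
+    for _ in range(30):
+        for batch in full_loader:
+            opt.zero_grad()
+            full_model.loss(batch).backward()  # placeholder returning the full loss L
+            opt.step()
+```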
+
+# 4. Experiments
+
+# 4.1. Datasets
+
+In the experiments, we train our model on the Human3.6M [13], UP-3D [22] and SURREAL [34] datasets, and we provide evaluations on the test sets of the Human3.6M, SURREAL and LSP [17] datasets.
+
+Human3.6M: Human3.6M [13] is a large-scale indoor dataset for 3D human pose estimation, including multiple subjects performing typical actions like walking, sitting and eating. Following the common setting [18], we use subjects S1, S5, S6, S7 and S8 as training data and subjects S9 and S11 for evaluation. Results are reported using two widely used metrics (MPJPE and MPJPE-PA) under the two popular protocols P1 and P2, as defined in [18].
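+
+For reference, the two metrics can be computed as in the sketch below: MPJPE is the mean per-joint position error, and MPJPE-PA is the same error after a rigid Procrustes (similarity) alignment of the prediction to the ground truth. The alignment implementation is a standard one, not taken from the paper.
+
+```python
+import numpy as np
+
+def mpjpe(pred, gt):
+    """Mean per-joint position error for (K, 3) joint arrays."""
+    return np.linalg.norm(pred - gt, axis=1).mean()
+
+def mpjpe_pa(pred, gt):
+    """MPJPE after similarity (Procrustes) alignment of pred to gt."""
+    mu_p, mu_g = pred.mean(0), gt.mean(0)
+    p, g = pred - mu_p, gt - mu_g
+    # Optimal rotation and scale from the SVD of the cross-covariance matrix.
+    U, S, Vt = np.linalg.svd(p.T @ g)
+    R = (U @ Vt).T
+    if np.linalg.det(R) < 0:                 # avoid reflections
+        Vt[-1] *= -1
+        S[-1] *= -1
+        R = (U @ Vt).T
+    scale = S.sum() / (p ** 2).sum()
+    aligned = scale * p @ R.T + mu_g
+    return mpjpe(aligned, gt)
+```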
+
+UP-3D: UP-3D [22] is an outdoor 3D human pose estimation dataset. It provides 3D human body ground truth by fitting SMPL model on images from 2D human pose benchmarks. We utilize the images of training and validation set for training.
+
+SURREAL: SURREAL dataset [34] is a large dataset providing synthetic images with ground-truth SMPL model parameters. We use the standard split setting [34] but remove all images with incomplete human body and evaluate on the same sampled test set as BodyNet [33].
+
+LSP: LSP [17] dataset is a 2D human pose estimation benchmark. In our work, we evaluate the segmentation accuracy of each model on the segmentation annotation [22].
+
+# 4.2. Comparison with the state-of-the-art
+
+In this section, we present comparison of our method with other state-of-the-art mesh-based methods.
+
+| Methods | MPJPE-PA |
+| --- | --- |
+| Lassner et al. [22] | 93.9 |
+| SMPLify [4] | 82.3 |
+| Pavlakos et al. [26] | 75.9 |
+| HMR [18] | 56.8 |
+| NBF [25] | 59.9 |
+| CMR [21] | 50.1 |
+| DenseRaC [36] | 48.0 |
+| SPIN [20] | 41.1 |
+| Ours | 39.3 |
+
+Table 2. Comparison with the state-of-the-art mesh-based 3D human estimation methods on Human3.6M test set. The numbers are joint errors in mm with Procrustes alignment under P2, and lower is better. Our approach achieves the state-of-the-art performance.
+
+| Methods | Surface Error |
+| --- | --- |
+| SMPLify++ [22] | 75.3 |
+| Tung et al. [32] | 74.5 |
+| BodyNet [33] | 73.6 |
+| Ours | 56.5 |
+
+Table 3. Comparison with the state-of-the-art methods on the SURREAL dataset. The numbers are the mean vertex errors in mm, and lower is better. Our method outperforms the baselines by a large margin.
+
+| Methods | FB Seg. acc. | FB Seg. f1 | Part Seg. acc. | Part Seg. f1 |
+| --- | --- | --- | --- | --- |
+| SMPLify oracle [4] | 92.17 | 0.88 | 88.82 | 0.67 |
+| SMPLify [4] | 91.89 | 0.88 | 87.71 | 0.67 |
+| SMPLify on [26] | 92.17 | 0.88 | 88.24 | 0.64 |
+| HMR [18] | 91.67 | 0.87 | 87.12 | 0.60 |
+| CMR [21] | 91.46 | 0.87 | 88.69 | 0.66 |
+| SPIN [20] | 91.83 | 0.87 | 89.41 | 0.68 |
+| Ours | 92.10 | 0.88 | 89.45 | 0.69 |
+
+Table 4. Comparison with the state-of-the-art methods on LSP test set. The numbers are accuracy and f1 scores, and higher is better. SMPLify [4] is optimization based, while HMR [18], CMR [21], SPIN [20] and our method are regression based. Our framework achieves the state-of-the-art result among regression based methods and is competitive with optimization based methods.
+
+Table 2 shows the results on the Human3.6M test set. We train our model following the setting of CMR [21] and use Human3.6M and UP-3D as the training set. Our method achieves state-of-the-art performance among the mesh-based methods. It is worth noting that SPIN [20] and our method focus on different aspects and are compatible. SPIN [20] focuses on training with scarce 3D ground truth and trains the network with extra data from 2D human pose benchmarks, while we focus on the dense correspondence between mesh and image and do not include data from 2D human pose benchmarks.
+
+Similarly, we show the results on SURREAL dataset in
+
+| UV map | \( {\mathcal{F}}_{G} \) | \( {\mathcal{F}}_{L} \) | raw pixel | MPJPE P1 | MPJPE P2 | MPJPE-PA P1 | MPJPE-PA P2 |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| SMPL | ✓ | | | 72.1 | 68.9 | 51.9 | 49.1 |
+| SMPL | | ✓ | | 71.9 | 69.6 | 47.4 | 44.8 |
+| SMPL | ✓ | ✓ | | 65.0 | 61.7 | 45.1 | 42.6 |
+| SMPL | ✓ | | ✓ | 65.0 | 63.2 | 46.5 | 44.7 |
+| Ours | ✓ | | | 69.5 | 67.7 | 49.4 | 47.1 |
+| Ours | | ✓ | | 69.8 | 68.4 | 44.6 | 42.3 |
+| Ours | ✓ | ✓ | | 62.7 | 60.6 | 42.2 | 39.3 |
+| Ours | ✓ | | ✓ | 63.2 | 61.0 | 45.5 | 42.6 |
+
+Table 5. Comparison on the Human3.6M test set with different UV maps and inputs of the location net. The numbers are 3D joint errors in mm. $\mathcal{F}_G$ and $\mathcal{F}_L$ refer to the global feature vector and the local feature map, respectively. With both UV maps, the framework using local features outperforms the baseline using the global feature by a large margin. Combining the global feature and local features further improves the performance. However, transferring raw image pixels brings a much smaller gain. With the same input, the frameworks using our UV map outperform those using the SMPL default UV map.
+
+Table 3. Our model is trained only with the training data of the SURREAL dataset and outperforms the previous methods by a large margin. The human shapes in the SURREAL dataset are of great variety, which verifies the human shape reconstruction capability of our method.
+
+We also investigate human shape estimation accuracy by evaluating the foreground-background and part-segmentation performance on the LSP test set. During the evaluation, we use the projection of the 3D mesh as the segmentation result. The predicted IUV image is not used in the evaluation for a fair comparison. The results are shown in Table 4. Our regression based method outperforms the state-of-the-art regression based methods and is competitive with the optimization based methods, which tend to outperform regression based methods on this metric but have much lower inference speed.
+
+# 4.3. Ablative studies
+
+In this section, we provide the ablation studies of the proposed method. We train all networks with training data from Human3.6M and UP-3D dataset, and evaluate the models on Human3.6M test set.
+
+Dense correspondence: We first investigate the effectiveness of the dense correspondence between the 3D mesh and the image features. We train networks that only use the global feature or the transferred local features as the input of the LNet. The comparison is shown in Table 5. With both UV maps, the framework utilizing transferred local features outperforms the baseline using the global feature by a large margin, which proves the effectiveness of the established dense correspondence. Combining the global feature with local features further improves the performance.
+
+We also train frameworks that transfer raw image pixels
+
+
+Figure 7. An example of a mesh reconstructed using our new UV map (top) and the SMPL default UV map (bottom). The SMPL default UV map may cause discontinuity between different parts as well as erroneous estimation of some vertices near part edges, while our new UV map mitigates these problems.
+
+rather than image features and observe much less improvement than when transferring local features. We attribute this to the lack of human pose information in the transferred raw pixels. For images of the same person in different poses, the pixels of a certain body part will be transferred to the same position in the UV space, which generates similar inputs for the LNet. So the LNet can only use the transferred pixels to refine the estimation of human shape, and must predict the human pose based only on the global feature.
+
+In contrast, the CNet is able to embed human pose information into the image features, so the LNet can resort to the transferred features to refine both the human shape and pose estimation.
+
+UV map: For the second ablative study, we investigate the influence of different UV maps. We compare the performance of frameworks using SMPL default UV map [23], and our continuous UV map.
+
+As shown in Table 5, with the same input of the LNet, the frameworks using our continuous UV map outperform those using the SMPL default UV map by a large margin. We attribute the gain to the continuity of the new UV map. As shown in Figure 7, some neighboring parts on the mesh surface are distant on the SMPL default UV map, such as arms and hands. This may lead to discontinuity of these parts on the final 3D mesh. Additionally, some faraway surface parts are very close on the UV plane, such as hands and feet, which might cause erroneous estimation of vertices on the edges of these parts. Both phenomena are shown in Figure 7. On the contrary, our UV map preserves more neighboring relations of the original mesh surface, so these problems are mitigated.
+
+# 4.4. Qualitative result
+
+Some qualitative results are presented in Figure 8, and Figure 9 includes some failure cases. Typical failure cases can be attributed to challenging poses, viewpoints rarely seen
+
+
+Figure 8. Qualitative results of our approach. Rows 1-3: LSP [17]. Rows 4-5: Human3.6M [13].
+
+
+Figure 9. Examples of erroneous reconstructions of our method ((a), (c): images; (b), (d): results). Typical failures can be attributed to challenging poses, viewpoints rarely seen in the training set, severe self-occlusion, as well as confusion caused by interactions among multiple people.
+
+in the training set, severe self-occlusion, as well as confusion caused by interactions among multiple people.
+
+# 5. Conclusion
+
+This work aims to solve the problem of lacking dense correspondence between the image feature and output 3D
+
+mesh in mesh-based monocular 3D human body estimation. The correspondence is explicitly established by IUV image estimation and image feature transferring. Instead of reconstructing the human mesh from the global feature, our framework is able to make use of extra dense local features transferred to the UV space. To facilitate the learning of the framework, we propose a new UV map that maintains more neighboring relations of the original mesh surface. Our framework achieves state-of-the-art performance among 3D mesh-based methods on several public benchmarks. Future work can focus on extending the framework to the reconstruction of surface details beyond existing human models, such as clothing wrinkles and hairstyles.
+
+# Acknowledgement
+
+We thank reviewers for helpful discussions and comments. Wanli Ouyang is supported by the Australian Research Council Grant DP200103223.
+
+# References
+
+[1] T. Alldieck, G. Pons-Moll, C. Theobalt, and M. Magnor. Tex2shape: Detailed full human body geometry from a single image. arXiv preprint arXiv:1904.08645, 2019.
+[2] R. Alp Güler, N. Neverova, and I. Kokkinos. Densepose: Dense human pose estimation in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7297-7306, 2018.
+[3] D. Anguelov, P. Srinivasan, D. Koller, S. Thrun, J. Rodgers, and J. Davis. Scape: shape completion and animation of people. In ACM transactions on graphics (TOG), volume 24, pages 408-416. ACM, 2005.
+[4] F. Bogo, A. Kanazawa, C. Lassner, P. Gehler, J. Romero, and M. J. Black. Keep it smpl: Automatic estimation of 3d human pose and shape from a single image. In European Conference on Computer Vision, pages 561-578. Springer, 2016.
+[5] J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei. Deformable convolutional networks. In ICCV, 2017.
+[6] V. Gabeur, J.-S. Franco, X. Martin, C. Schmid, and G. Rogez. Moulding humans: Non-parametric 3d human shape estimation from single images. arXiv preprint arXiv:1908.00439, 2019.
+[7] A. Grigorev, A. Sevastopolsky, A. Vakhitov, and V. Lempitsky. Coordinate-based texture inpainting for pose-guided human image generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 12135-12144, 2019.
+[8] P. Guan, A. Weiss, A. O. Balan, and M. J. Black. Estimating human shape and pose from a single image. In 2009 IEEE 12th International Conference on Computer Vision, pages 1381-1388. IEEE, 2009.
+[9] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016.
+[10] P. Huang, M. Tejera, J. Collomosse, and A. Hilton. Hybrid skeletal-surface motion graphs for character animation from 4d performance capture. ACM Transactions on Graphics (ToG), 34(2):17, 2015.
+[11] Y. Huang, F. Bogo, C. Lassner, A. Kanazawa, P. V. Gehler, J. Romero, I. Akhter, and M. J. Black. Towards accurate marker-less human shape and pose estimation over time. In 2017 International Conference on 3D Vision (3DV), pages 421-430. IEEE, 2017.
+[12] M. E. Hussein, M. Torki, M. A. Gowayyed, and M. El-Saban. Human action recognition using a temporal hierarchy of covariance descriptors on 3d joint locations. In Twenty-Third International Joint Conference on Artificial Intelligence, 2013.
+[13] C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu. Human3.6M: Large scale datasets and predictive methods for 3d human sensing in natural environments. IEEE transactions on pattern analysis and machine intelligence, 36(7):1325-1339, 2013.
+
+[14] A. Jacobson and D. Panozzo. libigl: prototyping geometry processing research in C++. In SIGGRAPH Asia 2017 courses, page 11. ACM, 2017.
+[15] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In Advances in neural information processing systems, pages 2017-2025, 2015.
+[16] Z. Jiang, S. Schaefer, and D. Panozzo. Simplicial complex augmentation framework for bijective maps. ACM Transactions on Graphics, 36(6), 2017.
+[17] S. Johnson and M. Everingham. Clustered pose and nonlinear appearance models for human pose estimation. In Proceedings of the British Machine Vision Conference, 2010. doi:10.5244/C.24.12.
+[18] A. Kanazawa, M. J. Black, D. W. Jacobs, and J. Malik. End-to-end recovery of human shape and pose. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7122-7131, 2018.
+[19] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+[20] N. Kolotouros, G. Pavlakos, M. J. Black, and K. Daniilidis. Learning to reconstruct 3d human pose and shape via model-fitting in the loop. arXiv preprint arXiv:1909.12828, 2019.
+[21] N. Kolotouros, G. Pavlakos, and K. Daniilidis. Convolutional mesh regression for single-image human shape reconstruction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4501-4510, 2019.
+[22] C. Lassner, J. Romero, M. Kiefel, F. Bogo, M. J. Black, and P. V. Gehler. Unite the people: Closing the loop between 3d and 2d human representations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6050-6059, 2017.
+[23] M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, and M. J. Black. Smpl: A skinned multi-person linear model. ACM transactions on graphics (TOG), 34(6):248, 2015.
+[24] A. Newell, K. Yang, and J. Deng. Stacked hourglass networks for human pose estimation. In European conference on computer vision, pages 483-499. Springer, 2016.
+[25] M. Omran, C. Lassner, G. Pons-Moll, P. Gehler, and B. Schiele. Neural body fitting: Unifying deep learning and model based human pose and shape estimation. In 2018 International Conference on 3D Vision (3DV), pages 484-494. IEEE, 2018.
+[26] G. Pavlakos, L. Zhu, X. Zhou, and K. Daniilidis. Learning to estimate 3d human pose and shape from a single color image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 459-468, 2018.
+[27] L. Pishchulin, E. Insafutdinov, S. Tang, B. Andres, M. Andriluka, P. V. Gehler, and B. Schiele. Deepcut: Joint subset partition and labeling for multi person pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4929-4937, 2016.
+[28] A. Pumarola, J. Sanchez-Riera, G. Choi, A. Sanfeliu, and F. Moreno-Noguer. 3dpeople: Modeling the geometry of dressed humans. In Proceedings of the IEEE International Conference on Computer Vision, pages 2242-2251, 2019.
+
+[29] Y. Rong, Z. Liu, C. Li, K. Cao, and C. C. Loy. Delving deep into hybrid annotations for 3d human recovery in the wild. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
+[30] S. Saito, Z. Huang, R. Natsume, S. Morishima, A. Kanazawa, and H. Li. Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization. In Proceedings of the IEEE International Conference on Computer Vision, pages 2304-2314, 2019.
+[31] B. Tekin, P. Márquez-Neila, M. Salzmann, and P. Fua. Learning to fuse 2d and 3d image cues for monocular body pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, pages 3941–3950, 2017.
+[32] H.-Y. Tung, H.-W. Tung, E. Yumer, and K. Fragkiadaki. Self-supervised learning of motion capture. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5236-5246. Curran Associates, Inc., 2017.
+[33] G. Varol, D. Ceylan, B. Russell, J. Yang, E. Yumer, I. Laptev, and C. Schmid. Bodynet: Volumetric inference of 3d human body shapes. In Proceedings of the European Conference on Computer Vision (ECCV), pages 20-36, 2018.
+[34] G. Varol, J. Romero, X. Martin, N. Mahmood, M. J. Black, I. Laptev, and C. Schmid. Learning from synthetic humans. In CVPR, 2017.
+[35] L. Xia, C.-C. Chen, and J. K. Aggarwal. View invariant human action recognition using histograms of 3d joints. In 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pages 20-27. IEEE, 2012.
+[36] Y. Xu, S.-C. Zhu, and T. Tung. Denserac: Joint 3d pose and shape estimation by dense render-and-compare. In Proceedings of the IEEE International Conference on Computer Vision, pages 7760-7770, 2019.
+[37] P. Yao, Z. Fang, F. Wu, Y. Feng, and J. Li. Densebody: Directly regressing dense 3d human pose and shape from a single color image. arXiv preprint arXiv:1903.10153, 2019.
+[38] A. Zanfir, E. Marinoiu, and C. Sminchisescu. Monocular 3d pose and shape estimation of multiple people in natural scenes-the importance of multiple scene constraints. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2148-2157, 2018.
+[39] H. Zhu, X. Zuo, S. Wang, X. Cao, and R. Yang. Detailed human shape estimation from a single image by hierarchical mesh deformation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4491-4500, 2019.
\ No newline at end of file
diff --git a/3dhumanmeshregressionwithdensecorrespondence/images.zip b/3dhumanmeshregressionwithdensecorrespondence/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..480726e3deae72eb155e867a865dbeaa8a93be21
--- /dev/null
+++ b/3dhumanmeshregressionwithdensecorrespondence/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d691025a13320fbc09489c5318a8d3ecdbd67616b9871d893460604ad8bb7f1e
+size 686379
diff --git a/3dhumanmeshregressionwithdensecorrespondence/layout.json b/3dhumanmeshregressionwithdensecorrespondence/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..cac969231916e223ae5b31017d2c9f0d8ecfa490
--- /dev/null
+++ b/3dhumanmeshregressionwithdensecorrespondence/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9d4a9003eb7918e5f1e3a5f60a2824a8b945080dce86a46fe918069f29d76256
+size 394874
diff --git a/3dmpamultiproposalaggregationfor3dsemanticinstancesegmentation/f5f1d7aa-96ff-441f-b2d2-c3511cdca894_content_list.json b/3dmpamultiproposalaggregationfor3dsemanticinstancesegmentation/f5f1d7aa-96ff-441f-b2d2-c3511cdca894_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..301e906c86e65c03cbf73b1c3befc7e51fb4ec42
--- /dev/null
+++ b/3dmpamultiproposalaggregationfor3dsemanticinstancesegmentation/f5f1d7aa-96ff-441f-b2d2-c3511cdca894_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3298e85997a7ed88d7589b4e5bb787624f2422a5ac62c9b04cc48737cfada913
+size 79389
diff --git a/3dmpamultiproposalaggregationfor3dsemanticinstancesegmentation/f5f1d7aa-96ff-441f-b2d2-c3511cdca894_model.json b/3dmpamultiproposalaggregationfor3dsemanticinstancesegmentation/f5f1d7aa-96ff-441f-b2d2-c3511cdca894_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c5ee9c724cfa9da058b76fbdb98f9f18c52cc0d4
--- /dev/null
+++ b/3dmpamultiproposalaggregationfor3dsemanticinstancesegmentation/f5f1d7aa-96ff-441f-b2d2-c3511cdca894_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b4f9f7c308f2554fff0cd1ba3a8be06a3db82dc3a5eadf50a6aa33274c8eaa6c
+size 96578
diff --git a/3dmpamultiproposalaggregationfor3dsemanticinstancesegmentation/f5f1d7aa-96ff-441f-b2d2-c3511cdca894_origin.pdf b/3dmpamultiproposalaggregationfor3dsemanticinstancesegmentation/f5f1d7aa-96ff-441f-b2d2-c3511cdca894_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ecfc33b66137bb4feda3940197a0ef59d7176628
--- /dev/null
+++ b/3dmpamultiproposalaggregationfor3dsemanticinstancesegmentation/f5f1d7aa-96ff-441f-b2d2-c3511cdca894_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c77558eae318ac66d82b71402077a8621dfd4a708dc3d8f2241cccc1aedc3bf4
+size 4312691
diff --git a/3dmpamultiproposalaggregationfor3dsemanticinstancesegmentation/full.md b/3dmpamultiproposalaggregationfor3dsemanticinstancesegmentation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..ef396360cbf60d281e49eccfca717380bf603f47
--- /dev/null
+++ b/3dmpamultiproposalaggregationfor3dsemanticinstancesegmentation/full.md
@@ -0,0 +1,301 @@
+# 3D-MPA: Multi Proposal Aggregation for 3D Semantic Instance Segmentation
+
+Francis Engelmann $^{1,2\dagger}$ Martin Bokeloh $^{2}$ Alireza Fathi $^{2}$ Bastian Leibe $^{1}$ Matthias Nießner $^{3}$ $^{1}$ RWTH Aachen University $^{2}$ Google $^{3}$ Technical University Munich
+
+
+Figure 1: Given an input 3D point cloud, our Multi Proposal Aggregation network (3D-MPA) predicts point-accurate 3D semantic instances. We propose an object-centric approach which generates instance proposals followed by a graph convolutional network which enables higher-level interactions between adjacent proposals. Unlike previous methods, the final object instances are obtained by aggregating multiple proposals instead of pruning proposals using non-maximum-suppression. (Panels, left to right: input 3D point cloud; object center votes and aggregated proposals; output 3D semantic instances.)
+
+# Abstract
+
+We present 3D-MPA, a method for instance segmentation on 3D point clouds. Given an input point cloud, we propose an object-centric approach where each point votes for its object center. We sample object proposals from the predicted object centers. Then, we learn proposal features from grouped point features that voted for the same object center. A graph convolutional network introduces interproposal relations, providing higher-level feature learning in addition to the lower-level point features. Each proposal comprises a semantic label, a set of associated points over which we define a foreground-background mask, an objectness score and aggregation features. Previous works usually perform non-maximum-suppression (NMS) over proposals to obtain the final object detections or semantic instances. However, NMS can discard potentially correct predictions. Instead, our approach keeps all proposals and groups them together based on the learned aggregation features. We show that grouping proposals improves over NMS and outperforms previous state-of-the-art methods on the tasks of 3D object detection and semantic instance segmentation on the ScanNetV2 benchmark and the S3DIS dataset.
+
+# 1. Introduction
+
+With the availability of commodity RGB-D sensors such as Kinect or Intel RealSense, the computer vision and graphics communities have achieved impressive results on 3D reconstruction methods [27, 28] that can now even achieve global pose tracking in real time [8, 47]. In addition to the reconstruction of the geometry, semantic scene understanding is critical to many real-world computer vision applications, including robotics, upcoming applications on mobile devices, or AR/VR headsets. In order to understand reconstructed 3D environments, researchers have already made significant progress with 3D deep learning methods that operate on volumetric grids [6, 32, 37, 38, 48], point clouds [11, 31, 33], meshes [16, 36] or multi-view hybrids [7, 39]. While early 3D learning approaches focus mostly on semantic segmentation, we have recently seen many works on 3D semantic instance segmentation [18, 19, 49] and 3D object detection [29, 51], both of which we believe are critical for real-world 3D perception.
+
+One of the fundamental challenges in 3D object detection lies in how to predict and process object proposals: On one side, top-down methods first predict a large number of rough object bounding box proposals (e.g., anchor mechanisms in Faster R-CNN [35]), followed by a second stage refinement step. Here, results can be generated in a single
+
+forward pass, but there is little outlier tolerance to wrongly detected box anchors. On the other side, bottom-up approaches utilize metric-learning methods with the goal of learning a per-point feature embedding space which is subsequently clustered into object instances [10, 19, 24]. This strategy can effectively handle outliers, but it heavily depends on manually tuning cluster parameters and is inherently expensive to compute at inference time due to $O(N^2)$ pairwise relationships.
+
+In this work, we propose 3D-MPA which follows a hybrid approach that takes advantage of the benefits of both top-down and bottom-up techniques: from an input point cloud representing a 3D scan, we generate votes from each point for object centers and group those into object proposals; then - instead of rejecting proposals using nonmaximum-suppression - we learn higher-level features for each proposal, which we use to cluster the proposals into final object detections. The key idea behind this strategy is that the number of generated proposals is orders of magnitude smaller than the number of raw input points in a 3D scan, which makes grouping computationally very efficient. At the same time, each object can receive multiple proposals, which simplifies proposal generation since objects of all sizes are handled in the same fashion, and we can easily tolerate outlier proposals further down the pipeline.
+
+To this end, our method first generates object-centric proposals using a per-point voting scheme from a sparse volumetric feature backbone. We then interpret the proposals as nodes of a proposal graph which we feed into a graph convolutional neural network in order to enable higher-order interactions between neighboring proposal features. In addition to proposal losses, the network is trained with a proxy loss between proposals similar to affinity scores in metric learning; however, due to the relatively small number of proposals, we can efficiently train the network and cluster proposals. In the end, each node predicts a semantic class, an object foreground mask, an objectness score, and additional features that are used to group nodes together.
+
+In summary, our contributions are the following:
+
+- A new method for 3D instance segmentation based on dense object center prediction leveraging learned semantic features from a sparse volumetric backbone.
+- To obtain the final object detections and semantic instances from the object proposals, we replace the commonly used NMS with our multi proposal aggregation strategy based on jointly learned proposal features and report significantly improved scores over NMS.
+- We employ a graph convolutional network that explicitly models higher-order interactions between neighboring proposal features in addition to the lower-level point features.
+
+# 2. Related Work
+
+Object Detection and Instance Segmentation. In the 2D domain, object detection and instance segmentation have most notably been influenced by Faster R-CNN from Ren et al. [35], which introduced the anchor mechanism to predict proposals with associated objectness scores and regions of interest that enable the regression of semantic bounding boxes. This approach was extended in Mask-RCNN [17] to predict per-pixel object instance masks. Hou et al. [18] apply the 2D proposal ideas onto the 3D domain by means of dense 3D convolutional networks. As an alternative, proposal-free methods were proposed in [4, 14, 19] which rely on metric learning. In the 2D domain, Fathi et al. [14] estimate how likely pixels are to belong to the same object. De Brabandere et al. [4] define a discriminative loss, which moves feature points of the same object towards their mean while pushing means of different objects apart. This discriminative loss is adopted by Lahoud et al. [19] to perform instance segmentation in 3D space. Final instances are obtained via clustering of the learned feature space. Yang et al. [49] directly predict object bounding boxes from a learned global feature vector and obtain instance masks by segmenting points inside a bounding box. The recent VoteNet [29] highlights the challenge of directly predicting bounding box centers in sparse 3D data as most surface points are far away from object centers. Instead, they predict bounding boxes by grouping points from the same object based on their votes for object centers. We adopt the object-centric approach, extend it with a branch for instance mask prediction and replace NMS with a grouping mechanism of jointly-learned proposal features.
+
+3D Deep Learning. PointNets [31] have pioneered the use of deep learning methods for point cloud processing. Since then, we have seen impressive progress in numerous different fields, including 3D semantic segmentation [15, 12, 21, 31, 33, 40, 46], 3D instance segmentation [10, 18, 19, 45, 49, 50], object detection [18, 29, 51] and relocalization [42], flow estimation [3, 25, 43], scene-graph reconstruction [1] and scene over-segmentation [20]. Point-based architectures, such as PointNet [31] and PointNet++ [34], operate directly on unstructured sets of points, while voxel based approaches, such as 3DMV [7] or SparseConvNets [5, 15], transform the continuous 3D space into a discrete grid representation and define convolutional operators on the volumetric grid, analogously to image convolutions in the 2D domain. Graph-based approaches [22, 41, 46] define convolutional operators over graph-structured data such as 3D meshes [16, 36], citation networks [41], or molecules [9]. Here, we leverage the voxel-based approach of Graham et al. [15] as point feature backbone and use the graph neural network of Wang et al. [46] to enable higher-level interactions between proposals.
+
+
+Figure 2: 3D-MPA network architecture. From an input point cloud, our network predicts object instance masks by aggregating object proposal masks. The full model consists of three parts: the proposal generation (left) follows an object-centric strategy: each point votes for the center of the object it belongs to. Proposal positions are then sampled from the predicted object centers. By grouping and aggregating votes in the vicinity of sampled proposal positions, we learn proposal features. During proposal consolidation (middle), proposal features are further refined using a graph convolutional network, which enables higher-order interactions on the level of proposals. Finally, we propose to aggregate multiple proposals by clustering jointly learned aggregation features as opposed to the commonly used non-maximum-suppression (right).
+
+# 3. Method
+
+The overall architecture of 3D-MPA is depicted in Fig. 2. The model consists of three parts: the first one takes as input a 3D point cloud and learns object proposals from sampled and grouped point features that voted for the same object center (Sec. 3.1). The next part consolidates the proposal features using a graph convolutional network enabling higher-level interactions between proposals which results in refined proposal features (Sec. 3.2). Last, the object generator consumes the object proposals and generates the final object detections, i.e. semantic instances. We parameterize an object as a set of points associated with that object and a semantic class. (Sec. 3.3).
+
+# 3.1. Proposal Generation
+
+Given a point cloud of size $N \times I$ , consisting of $N$ points and $I$ -dimensional input features (e.g. positions, colors and normals), the first part of the network generates a fixed number $K$ of object proposals. A proposal is a tuple $(y_{i}, g_{i}, s_{i})$ consisting of a position $y_{i} \in \mathbb{R}^{3}$ , a proposal feature vector $g_{i} \in \mathbb{R}^{D}$ and a set of points $s_{i}$ associated with the proposal.
+
+To generate proposals, we need strong point features that encode the semantic context and the geometry of the underlying scene. We implement a sparse volumetric network [5, 15] as feature backbone to generate per-point features $\{f_i\in \mathbb{R}^F\}_{i = 1}^N$ (Fig. 2, $\square$ ). Semantic context is encoded into the point features by supervising the feature backbone with semantic labels, using the standard cross-entropy loss for per-point semantic classification $\mathcal{L}_{\mathrm{sem,pt}}$ . Following the object-centric approach suggested by Qi et al. [29], points vote for the center of the object they belong to. However, unlike [29], only points from objects predict a center. This is possible since we jointly predict semantic classes, i.e.
+
+we can differentiate between points from foreground (objects) and background (walls, floor, etc.) during both training and test. This results in precise center predictions since noisy predictions from background points are ignored. In particular, this is implemented as a regression loss which predicts per-point relative 3D offsets $\Delta x_{i}\in \mathbb{R}^{3}$ between a point position $x_{i}\in \mathbb{R}^{3}$ and its corresponding ground truth bounding-box center $c_{i}^{*}\in \mathbb{R}^{3}$ . We define the per-point center regression loss as:
+
+$$
+\mathcal{L}_{\text{cent.pt.}} = \frac{1}{M} \sum_{i=1}^{N} \left\| x_i + \Delta x_i - c_i^{*} \right\|_H \cdot \mathbb{1}\left(x_i\right), \tag{1}
+$$
+
+where $||\cdot ||_H$ is the Huber-loss (or smooth $\mathrm{L}_1$ -loss) and $\mathbb{1}(\cdot)$ is a binary function indicating whether a point $x_{i}$ belongs to an object. $M$ is a normalization factor equal to the total number of points on objects. All in all, the feature backbone has two heads (Fig. 2, $\square$ ): a semantic head (which performs semantic classification of points) and a center head (which regresses object centers for each point). They are jointly supervised using the combined loss $\mathcal{L}_{\mathrm{point}}$ where $\lambda$ is a weighting factor set to 0.1:
+
+$$
+\mathcal{L}_{\text{point}} = \lambda \cdot \mathcal{L}_{\text{sem.pt.}} + \mathcal{L}_{\text{cent.pt.}}. \tag{2}
+$$
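+
+For concreteness, the per-point supervision of Eqs. 1 and 2 can be written as a small NumPy sketch. This is our illustration, not the authors' released code; the Huber threshold and the toy inputs are assumptions.
+
+```python
+import numpy as np
+
+def huber(x, delta=1.0):
+    """Elementwise Huber (smooth-L1) loss."""
+    a = np.abs(x)
+    return np.where(a < delta, 0.5 * a**2 / delta, a - 0.5 * delta)
+
+def center_regression_loss(xyz, offsets, gt_centers, on_object):
+    """Eq. 1 sketch: Huber loss between voted centers and GT centers, only for object points."""
+    residual = xyz + offsets - gt_centers            # (N, 3) voted center minus GT center
+    per_point = huber(np.linalg.norm(residual, axis=1))
+    m = max(on_object.sum(), 1)                      # M = number of points on objects
+    return (per_point * on_object).sum() / m
+
+def point_loss(sem_ce, cent_loss, lam=0.1):
+    """Eq. 2 sketch: weighted sum of semantic and center losses."""
+    return lam * sem_ce + cent_loss
+
+# toy example: perfect center predictions give zero loss
+N = 5
+xyz = np.random.rand(N, 3)
+offsets = np.zeros((N, 3))
+gt_centers = xyz.copy()
+on_object = np.array([1, 1, 0, 1, 0], dtype=float)
+print(center_regression_loss(xyz, offsets, gt_centers, on_object))  # 0.0
+```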
+
+Proposal Positions and Features. After each point (that belongs to an object) has voted for a center, we obtain a distribution over object centers (Fig. 3, $3^{\mathrm{rd}}$ col.). From this distribution, we randomly pick $K$ samples as proposal positions $\{y_{i} = x_{i} + \Delta x_{i}\in \mathbb{R}^{3}\}_{i = 1}^{K}$ (Fig. 3, $4^{\mathrm{th}}$ col.). We found random sampling to work better than Farthest Point Sampling (FPS) used in [29], as FPS favors outliers far away from true object centers. Next, we define the set of associated points $s_i$ as those points that voted for centers within a radius $r$ of the sampled proposal position $y_{i}$ . The proposal features $\{g_i\in \mathbb{R}^D\}_{i = 1}^K$ are learned using a PointNet [31] applied to the point features of the associated points $s_i$ . This corresponds to the grouping and normalization technique described in [29]. At this stage, we have $K$ proposals composed of 3D positions $y_{i}$ located near object centers, proposal features $g_{i}\in \mathbb{R}^{D}$ describing the local geometry and the semantics of the nearest objects (Fig. 2, $\square$ ), along with a set of points $s_i$ associated with each proposal.
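+
+The sampling-and-grouping step above could look roughly as follows (a minimal NumPy sketch under assumed values for $K$ and $r$; function and variable names are illustrative):
+
+```python
+import numpy as np
+
+def sample_proposals(votes, on_object, num_proposals=500, radius=0.3, rng=None):
+    """Sample proposal positions from predicted centers and group the points
+    whose votes fall within `radius` of each sampled position (Sec. 3.1 sketch)."""
+    rng = rng or np.random.default_rng(0)
+    obj_idx = np.flatnonzero(on_object)              # only object points vote
+    picked = rng.choice(obj_idx, size=num_proposals, replace=True)
+    positions = votes[picked]                        # (K, 3) proposal positions y_i
+    groups = []
+    for y in positions:
+        dist = np.linalg.norm(votes - y, axis=1)
+        groups.append(np.flatnonzero((dist < radius) & (on_object > 0)))  # associated set s_i
+    return positions, groups
+
+# toy example: votes are the predicted centers x_i + dx_i of every point
+votes = np.random.rand(1000, 3)
+on_object = (np.random.rand(1000) > 0.5).astype(float)
+pos, groups = sample_proposals(votes, on_object, num_proposals=8)
+print(pos.shape, [len(g) for g in groups])
+```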
+
+# 3.2. Proposal Consolidation
+
+So far, proposal features encode local information of their associated objects. During proposal consolidation, proposals become aware of their global neighborhood by explicitly modeling higher-order interactions between neighboring proposals. To this end, we define a graph convolutional network (GCN) over the proposals. While the initial point-feature backbone operates at the level of points, the GCN operates at the level of proposals. In particular, the nodes of the graph are defined by the proposal positions $y_{i}$ with associated proposal features $g_{i}$ . An edge between two nodes exists if the Euclidean distance $d$ between two 3D proposal positions $y_{\{i,j\}}$ is below $2\mathrm{m}$ . We adopt the convolutional operator from DGCNN [46] to define edge-features $e_{ij}$ between two neighboring proposals as:
+
+$$
+e _ {i j} = h _ {\Theta} \left(\left[ y _ {i}, g _ {i} \right], \left[ y _ {j}, g _ {j} \right] - \left[ y _ {i}, g _ {i} \right]\right), \tag {3}
+$$
+
+where $h_\Theta$ is a non-linear function with learnable parameters $\Theta$ and $[\cdot, \cdot]$ denotes concatenation. The graph convolutional network consists of $l$ stacked graph convolutional layers. While our method also works without the GCN refinement (i.e., $l = 0$ ), we observe the best results using $l = 10$ (Sec. 4). To conclude, during proposal consolidation a GCN learns refined proposal features $\{h_i \in \mathbb{R}^{D'}\}_{i=1}^K$ given the initial proposal features $\{g_i \in \mathbb{R}^D\}_{i=1}^K$ (Fig. 2, $\square$ ).
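+
+A naive NumPy sketch of the edge features of Eq. 3 over such a radius graph is given below; $h_\Theta$ is stood in by a random linear layer with ReLU, whereas the real model learns $h_\Theta$ jointly with the rest of the network, so this is purely illustrative.
+
+```python
+import numpy as np
+
+def edge_features(positions, features, mlp, max_dist=2.0):
+    """Eq. 3 sketch: for every pair of proposals closer than `max_dist`,
+    build [y_i, g_i] and the difference to the neighbor, then apply h_Theta."""
+    nodes = np.concatenate([positions, features], axis=1)      # [y_i, g_i]
+    edges = []
+    for i in range(len(nodes)):
+        for j in range(len(nodes)):
+            if i != j and np.linalg.norm(positions[i] - positions[j]) < max_dist:
+                e_ij = mlp(np.concatenate([nodes[i], nodes[j] - nodes[i]]))
+                edges.append((i, j, e_ij))
+    return edges
+
+# toy h_Theta: a random linear map followed by ReLU
+D = 4
+W = np.random.rand(16, 2 * (3 + D))
+mlp = lambda x: np.maximum(W @ x, 0.0)
+
+positions = np.random.rand(6, 3) * 3.0
+features = np.random.rand(6, D)
+print(len(edge_features(positions, features, mlp)))
+```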
+
+# 3.3. Object Generation
+
+At this stage, we have $K$ proposals $\{(y_i,h_i,s_i)\}_{i = 1}^K$ with positions $y_{i}$ , refined features $h_i$ and sets of points $s_i$ . The goal is to obtain the final semantic instances (or object detections) from these proposals. To this end, we predict for every proposal a semantic class, an aggregation feature vector, an objectness score and a binary foreground-background mask over the points $s_i$ associated with the proposal. Specifically, the proposal features $h_i$ are input to an MLP with output sizes $(128,128,D_{out})$ where $D_{out} = S + E + 2$ with $S$ semantic classes, an $E$ -dimensional aggregation feature and a 2D (positive, negative) objectness score (Fig. 2, $\square$ ).
+
+The objectness score [29, 35] classifies proposals into positive or negative examples. It is supervised via a cross-entropy loss $\mathcal{L}_{obj}$ . Proposals near a ground truth center ( $< 0.3\mathrm{m}$ ) are classified as positive. They are classified as negative if they are far away ( $>0.6\mathrm{m}$ ) from any ground truth center, or if they are similarly far away from two ground truth centers, since then the correct ground truth object is ambiguous. This is the case when $d_1 > 0.6 \cdot d_2$ where $d_i$ is the distance to the $i^{th}$ closest ground truth center.
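+
+The positive/negative assignment rule can be sketched as follows (our interpretation of the thresholds above; treating proposals that are neither positive nor negative as unlabeled is an assumption):
+
+```python
+import numpy as np
+
+def objectness_labels(proposal_pos, gt_centers, pos_thr=0.3, neg_thr=0.6, ratio=0.6):
+    """Label proposals: 1 = positive, 0 = negative, -1 = unlabeled.
+    Negative if far from every GT center or if the two nearest centers are
+    ambiguously close to each other (d1 > ratio * d2)."""
+    labels = np.full(len(proposal_pos), -1, dtype=int)
+    for k, y in enumerate(proposal_pos):
+        d = np.sort(np.linalg.norm(gt_centers - y, axis=1))
+        d1, d2 = d[0], d[1] if len(d) > 1 else np.inf
+        if d1 < pos_thr:
+            labels[k] = 1
+        elif d1 > neg_thr or d1 > ratio * d2:
+            labels[k] = 0
+    return labels
+
+print(objectness_labels(np.array([[0.1, 0, 0], [2.0, 0, 0]]),
+                        np.array([[0.0, 0, 0], [5.0, 0, 0]])))  # [1 0]
+```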
+
+Positive proposals are further supervised to predict a semantic class, aggregation features, and a binary mask. Negative ones are ignored. We use a cross-entropy loss $\mathcal{L}_{\mathrm{sem}}$ to predict the semantic label of the closest ground truth object.
+
+Aggregation Features. Previous methods such as VoteNet [29] or 3D-BoNet [49] rely on non-maximum-suppression (NMS) to obtain the final objects. NMS iteratively selects proposals with the highest objectness score and removes all others that overlap with them beyond a certain IoU. However, this is sensitive to the quality of the objectness scores and can discard correct predictions. Instead of rejecting potentially useful information, we combine multiple proposals. To this end, we learn aggregation features for each proposal, which are then clustered using DBSCAN [13].
+
+All proposals whose aggregation features end up in the same cluster are aggregated together, yielding the final object detections. The points of a final object are the union over the foreground masks of combined proposals. As the number of proposals is relatively small ( $K \approx 500$ ) compared to the full point cloud ( $N \approx 10^6$ ), this step is very fast ( $\sim 8$ ms). This is a significant advantage over clustering full point clouds [10, 19], which can be prohibitively slow.
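+
+A minimal sketch of this aggregation step, using scikit-learn's DBSCAN (the eps and min_samples values are illustrative, not the paper's settings):
+
+```python
+import numpy as np
+from sklearn.cluster import DBSCAN
+
+def aggregate_proposals(agg_features, proposal_masks, eps=0.05, min_samples=1):
+    """Cluster proposal aggregation features; the union of the foreground
+    masks in each cluster becomes one final object instance."""
+    cluster_ids = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(agg_features)
+    instances = []
+    for c in np.unique(cluster_ids):
+        if c == -1:                       # DBSCAN noise label
+            continue
+        member = np.flatnonzero(cluster_ids == c)
+        union_mask = np.any(proposal_masks[member], axis=0)
+        instances.append(union_mask)
+    return instances
+
+# toy example: 6 proposals over a 100-point scene
+feats = np.array([[0.0, 0], [0.01, 0], [1.0, 1], [1.02, 1], [1.0, 1.01], [5, 5]], dtype=float)
+masks = np.random.rand(6, 100) > 0.5
+print(len(aggregate_proposals(feats, masks)))   # -> 3 instances
+```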
+
+We investigate two types of aggregation features:
+
+① Geometric features $\{\epsilon_i\in \mathbb{R}^{E = 4}\}_{i = 1}^K$ are composed of a refined 3D object center prediction $\Delta y_{i}$ and a 1D object radius estimation $r_i$ . The loss is defined as:
+
+$$
+\mathcal{L}_{\text{agg.}} = \left\| y_i + \Delta y_i - c_i^{*} \right\|_H + \left\| r_i - r_i^{*} \right\|_H \tag{4}
+$$
+
+where $c_{i}^{*}$ is the nearest ground truth object center and $r_{i}^{*}$ the radius of the nearest ground truth object bounding sphere.
+
+② Embedding features $\{\epsilon_{i}\in \mathbb{R}^{E}\}_{i = 1}^{K}$ are supervised with a discriminative loss function [4]. This loss was already successfully applied for 3D instance segmentation [10, 19]. It is composed of three terms: $\mathcal{L}_{\mathrm{agg.}} = \mathcal{L}_{\mathrm{var.}} + \mathcal{L}_{\mathrm{dist.}} + \gamma \cdot \mathcal{L}_{\mathrm{reg.}}$
+
+$$
+\mathcal{L}_{\text{var.}} = \frac{1}{C} \sum_{c=1}^{C} \frac{1}{N_c} \sum_{i=1}^{N_c} \left[ \left\| \mu_c - \epsilon_i \right\| - \delta_v \right]_+^2 \tag{5}
+$$
+
+$$
+\mathcal{L}_{\text{dist.}} = \frac{1}{C(C-1)} \sum_{c_A=1}^{C} \sum_{\substack{c_B=1 \\ c_B \neq c_A}}^{C} \left[ 2\delta_d - \left\| \mu_{c_A} - \mu_{c_B} \right\| \right]_+^2 \tag{6}
+$$
+
+$$
+\mathcal{L}_{\text{reg.}} = \frac{1}{C} \sum_{c=1}^{C} \left\| \mu_c \right\| \tag{7}
+$$
+
+In our experiments, we set $\gamma = 0.001$ and $\delta_v = \delta_d = 0.1$ . $C$ is the total number of ground truth objects and $N_c$ the number of proposals belonging to one object. $\mathcal{L}_{\mathrm{var.}}$ pulls features that belong to the same instance towards their mean, $\mathcal{L}_{\mathrm{dist.}}$ pushes clusters with different instance labels apart, and $\mathcal{L}_{\mathrm{reg.}}$ is a regularization term pulling the means towards the origin. Further details and intuitions are available in the original work by De Brabandere et al. [4]. In Sec. 4, we will show that geometric features outperform embedding features.
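+
+A NumPy sketch of the discriminative loss of Eqs. 5-7 (our illustration; proposal-to-instance assignments are assumed given):
+
+```python
+import numpy as np
+
+def discriminative_loss(embeddings, instance_ids, delta_v=0.1, delta_d=0.1, gamma=1e-3):
+    """Eqs. 5-7 sketch: pull embeddings towards their instance mean,
+    push different instance means apart, and regularize the means."""
+    ids = np.unique(instance_ids)
+    means = np.stack([embeddings[instance_ids == c].mean(axis=0) for c in ids])
+    # variance term: distance of each embedding to its instance mean, hinged at delta_v
+    l_var = np.mean([
+        np.mean(np.maximum(np.linalg.norm(means[k] - embeddings[instance_ids == c], axis=1)
+                           - delta_v, 0.0) ** 2)
+        for k, c in enumerate(ids)])
+    # distance term: hinge on pairwise distances between instance means
+    l_dist, C = 0.0, len(ids)
+    if C > 1:
+        for a in range(C):
+            for b in range(C):
+                if a != b:
+                    l_dist += max(2 * delta_d - np.linalg.norm(means[a] - means[b]), 0.0) ** 2
+        l_dist /= C * (C - 1)
+    # regularization term: keep the means close to the origin
+    l_reg = np.mean(np.linalg.norm(means, axis=1))
+    return l_var + l_dist + gamma * l_reg
+
+emb = np.random.rand(20, 5)
+ids = np.random.randint(0, 3, size=20)
+print(discriminative_loss(emb, ids))
+```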
+
+Mask Prediction. Each positive proposal predicts a class-agnostic binary segmentation mask over the points $s_i$ associated with that proposal, where the number of points per proposal $i$ is $|s_i| = n_i$ (Fig. 2, $\square$ ). Prior approaches obtain masks by segmenting 2D regions of interest (RoI) (Mask R-CNN [17]) or 3D bounding boxes (3D-BoNet [49]). Since we adopt an object-centric approach, mask segmentation can directly be performed on the points $s_i$ associated with a proposal. In particular, for each proposal, we select the per-point features $f_i$ of points that voted for a center within a distance $r$ of the proposal position $y_i$ . Formally, the set of selected per-point features is defined as $M_f = \{f_i \mid \| (x_i + \Delta x_i) - y_i \|_2 < r\}$ with $r = 0.3\mathrm{m}$ . The selected features $M_f$ are passed to a PointNet [31] for binary segmentation, i.e., we apply a shared MLP on each per-point feature, compute max-pooling over all feature channels, and concatenate the result to each feature before passing it through another MLP with feature sizes (256, 128, 64, 32, 2). Points whose ground truth instance label matches that of the closest ground truth object are supervised as foreground, while all others are background. Similar to [49], the mask loss $\mathcal{L}_{\mathrm{mask}}$ is implemented as a focal loss [23] instead of a cross-entropy loss to cope with the foreground-background class imbalance.
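+
+The mask head described above can be sketched as a tiny PointNet-style module (NumPy, with randomly initialized weights and illustrative layer sizes rather than the (256, 128, 64, 32, 2) configuration used in the paper):
+
+```python
+import numpy as np
+
+def dense(x, w, b):
+    return np.maximum(x @ w + b, 0.0)        # shared MLP layer with ReLU
+
+def mask_head(point_feats, rng=None):
+    """Binary foreground/background logits for the n_i points of one proposal."""
+    rng = rng or np.random.default_rng(0)
+    f_dim = point_feats.shape[1]
+    w1, b1 = rng.standard_normal((f_dim, 64)) * 0.1, np.zeros(64)
+    local = dense(point_feats, w1, b1)                     # (n_i, 64) per-point features
+    global_feat = local.max(axis=0, keepdims=True)         # max-pool over the proposal
+    fused = np.concatenate([local, np.repeat(global_feat, len(local), axis=0)], axis=1)
+    w2, b2 = rng.standard_normal((fused.shape[1], 2)) * 0.1, np.zeros(2)
+    return fused @ w2 + b2                                 # (n_i, 2) fg/bg logits
+
+print(mask_head(np.random.rand(30, 32)).shape)   # (30, 2)
+```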
+
+# 3.4. Training Details
+
+The model is trained end-to-end from scratch using the multi-task loss $\mathcal{L} = \mathcal{L}_{\mathrm{point}} + \mathcal{L}_{\mathrm{obj.}} + 0.1 \cdot \mathcal{L}_{\mathrm{sem.}} + \mathcal{L}_{\mathrm{mask}} + \mathcal{L}_{\mathrm{agg.}}$ . The batch size is 4 and the initial learning rate is 0.1, which is halved every $2 \cdot 10^4$ iterations; the model is trained for $15 \cdot 10^4$ iterations in total. Our model is implemented in TensorFlow and runs on an Nvidia TitanXp GPU (12GB).
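+
+The loss weighting and learning-rate schedule described above amount to the following trivial sketch (the optimizer and other hyper-parameters are not specified here):
+
+```python
+def total_loss(l_point, l_obj, l_sem, l_mask, l_agg):
+    """Multi-task loss from Sec. 3.4 (semantic term weighted by 0.1)."""
+    return l_point + l_obj + 0.1 * l_sem + l_mask + l_agg
+
+def learning_rate(step, base_lr=0.1, decay_every=20_000):
+    """Halve the learning rate every 2e4 iterations."""
+    return base_lr * (0.5 ** (step // decay_every))
+
+print(learning_rate(0), learning_rate(60_000))   # 0.1 0.0125
+```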
+
+Input and data augmentation. Our network is trained on $3\mathrm{m}\times 3\mathrm{m}$ point cloud crops of $N$ points sampled from the surface of a 3D mesh. During test time, we evaluate on full scenes. Input features are the 3D position, color and normal assigned to each point. Data augmentation is performed by randomly rotating the scene by $\mathrm{Uniform}[-180^{\circ},180^{\circ}]$ around the upright axis and $\mathrm{Uniform}[-10^{\circ},10^{\circ}]$ around the other axis. The scenes are randomly flipped in both horizontal directions and randomly scaled by $\mathrm{Uniform}[0.9,1.1]$ .
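+
+A possible NumPy sketch of this augmentation pipeline (assuming z is the upright axis; the flip probabilities are an assumption):
+
+```python
+import numpy as np
+
+def augment(points, rng=None):
+    """Random rotation, flips and scaling of an (N, 3) point cloud as described above."""
+    rng = rng or np.random.default_rng()
+    yaw = rng.uniform(-np.pi, np.pi)                   # +-180 deg around the upright axis
+    tilt = rng.uniform(-np.pi / 18, np.pi / 18)        # +-10 deg around another axis
+    cz, sz = np.cos(yaw), np.sin(yaw)
+    cx, sx = np.cos(tilt), np.sin(tilt)
+    rot_z = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
+    rot_x = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
+    pts = points @ (rot_z @ rot_x).T
+    flip = np.where(rng.random(2) < 0.5, -1.0, 1.0)    # random horizontal flips
+    pts[:, 0] *= flip[0]
+    pts[:, 1] *= flip[1]
+    return pts * rng.uniform(0.9, 1.1)                 # random scaling
+
+print(augment(np.random.rand(100, 3)).shape)
+```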
+
+# 4. Experiments
+
+We compare our approach to previous state-of-the-art methods on two large-scale 3D indoor datasets (Sec. 4.1). Our ablation study analyzes the contribution of each component of our model and shows in particular the improvement of aggregating proposals over NMS (Sec. 4.2).
+
+| 3D Object Detection (ScanNetV2) | mAP@25% | mAP@50% |
+| --- | --- | --- |
+| DSS [37] | 15.2 | 6.8 |
+| MRCNN 2D-3D [17] | 17.3 | 10.5 |
+| F-PointNet [30] | 19.8 | 10.8 |
+| GSPN [50] | 30.6 | 17.7 |
+| 3D-SIS [18] | 40.2 | 22.5 |
+| VoteNet [29] | 58.6 | 33.5 |
+| 3D-MPA (Ours) | 64.2 | 49.2 |
+
+Table 1: 3D object detection scores on ScanNetV2 [6] validation set. We report per-class mean average precision (mAP) with an IoU of $25\%$ and $50\%$ . The IoU is computed on bounding boxes. All other scores are as reported in [29].
+
+| 3D Instance Segmentation (S3DIS 6-fold CV) | mAP@50% | mAR@50% |
+| --- | --- | --- |
+| PartNet [26] | 56.4 | 43.4 |
+| ASIS [45] | 63.6 | 47.5 |
+| 3D-BoNet [49] | 65.6 | 47.6 |
+| 3D-MPA (Ours) | 66.7 | 64.1 |
+
+| 3D Instance Segmentation (S3DIS Area 5) | mAP@50% | mAR@50% |
+| --- | --- | --- |
+| ASIS [45] | 55.3 | 42.4 |
+| 3D-BoNet [49] | 57.5 | 40.2 |
+| 3D-MPA (Ours) | 63.1 | 58.0 |
+
+Table 2: 3D instance segmentation scores on S3DIS [2]. We report scores on Area 5 (bottom) and 6-fold cross validation results (top). The metric is mean average precision (mAP) and mean average recall (mAR) at an IoU threshold of $50\%$ . The IoU is computed on per-point instance masks.
+
+| 3D Instance Segmentation (ScanNetV2) | Val mAP | Val @50% | Val @25% | Test mAP | Test @50% | Test @25% |
+| --- | --- | --- | --- | --- | --- | --- |
+| SGPN [44] | - | 11.3 | 22.2 | 4.9 | 14.3 | 39.0 |
+| 3D-BEVIS [10] | - | - | - | 11.7 | 24.8 | 40.1 |
+| 3D-SIS [18] | - | 18.7 | 35.7 | 16.1 | 38.2 | 55.8 |
+| GSPN [50] | 19.3 | 37.8 | 53.4 | 15.8 | 30.6 | 54.4 |
+| 3D-BoNet [49] | - | - | - | 25.3 | 48.8 | 68.7 |
+| MTML [19] | 20.3 | 40.2 | 55.4 | 28.2 | 54.9 | 73.1 |
+| 3D-MPA (Ours) | 35.3 | 59.1 | 72.4 | 35.5 | 61.1 | 73.7 |
+
+Table 3: 3D instance segmentation scores on ScanNetV2 [6]. The metric is mean average precision (mAP) at IoU thresholds of $25\%$ and $50\%$ , and averaged over the range [0.5:0.95:0.05]. IoU is computed on per-point instance masks.
+
+[Figure 3 panels: Ground Truth Instances, Predicted Instances, Predicted Object Centers, Center Votes & Aggregated Proposals]
+
+Figure 3: Qualitative results and intermediate steps on ScanNetV2 [6]. First two columns: Our approach properly segments instances of vastly different sizes and makes clear decisions at object boundaries. Different colors represent separate instances (ground truth and predicted instances are not necessarily the same color). Third column: Every point on the surface of an object predicts its object center. These centers are shown as blue dots. Fourth column: Gray segments correspond to votes, they illustrate which point predicted a center. Colored spheres represent proposals. Proposals are obtained by sampling from the predicted object centers. Proposal features are learned from grouped point features that voted for the same object center. Spheres with the same color show which proposals are grouped together based on these learned proposal features.
+
+[Figure 4 panels: Input Point Cloud, Ground Truth Instances, Predicted Instances, Predicted Object Centers]
+
+Figure 4: Failure Cases. We show two failure cases where our method incorrectly separates single instances. However, when comparing them to the input point cloud, they are still plausible predictions.
+
+| mAP@25% | cab | bed | chair | sofa | tabl | door | wind | bkshf | pic | cntr | desk | curt | fridg | showr | toil | sink | bath | ofurn | avg |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| SegCluster [18] | 11.8 | 13.5 | 18.9 | 14.6 | 13.8 | 11.1 | 11.5 | 11.7 | 0.0 | 13.7 | 12.2 | 12.4 | 11.2 | 18.0 | 19.5 | 18.9 | 16.4 | 12.2 | 13.4 |
+| MRCNN [17] | 15.7 | 15.4 | 16.4 | 16.2 | 14.9 | 12.5 | 11.6 | 11.8 | 19.5 | 13.7 | 14.4 | 14.7 | 21.6 | 18.5 | 25.0 | 24.5 | 24.5 | 16.9 | 17.1 |
+| SGPN [44] | 20.7 | 31.5 | 31.6 | 40.6 | 31.9 | 16.6 | 15.3 | 13.6 | 0.0 | 17.4 | 14.1 | 22.2 | 0.0 | 0.0 | 72.9 | 52.4 | 0.0 | 18.6 | 22.2 |
+| 3D-SIS [18] | 32.0 | 66.3 | 65.3 | 56.4 | 29.4 | 26.7 | 10.1 | 16.9 | 0.0 | 22.1 | 35.1 | 22.6 | 28.6 | 37.2 | 74.9 | 39.6 | 57.6 | 21.1 | 35.7 |
+| MTML [19] | 34.6 | 80.6 | 87.7 | 80.3 | 67.4 | 45.8 | 47.2 | 45.3 | 19.8 | 9.7 | 49.9 | 54.2 | 44.1 | 74.9 | 98.0 | 44.5 | 79.4 | 33.5 | 55.4 |
+| 3D-MPA (Ours) | 69.9 | 83.4 | 87.6 | 76.1 | 74.8 | 56.6 | 62.2 | 78.3 | 48.0 | 62.5 | 69.2 | 66.0 | 61.4 | 93.1 | 99.2 | 75.2 | 90.3 | 48.6 | 72.4 |
+
+Table 4: Per class 3D instance segmentation on ScanNetV2 [6] validation set with mAP@25% on 18 classes. Our method outperforms all other methods on all classes except for chair and sofa.
+
+| mAP@50% | cab | bed | chair | sofa | tabl | door | wind | bkshf | pic | cntr | desk | curt | fridg | showr | toil | sink | bath | ofurn | avg |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| SegCluster [18] | 10.4 | 11.9 | 15.5 | 12.8 | 12.4 | 10.1 | 10.1 | 10.3 | 0.0 | 11.7 | 10.4 | 11.4 | 0.0 | 13.9 | 17.2 | 11.5 | 14.2 | 10.5 | 10.8 |
+| MRCNN [17] | 11.2 | 10.6 | 10.6 | 11.4 | 10.8 | 10.3 | 0.0 | 0.0 | 11.1 | 10.1 | 0.0 | 10.0 | 12.8 | 0.0 | 18.9 | 13.1 | 11.8 | 11.6 | 9.1 |
+| SGPN [44] | 10.1 | 16.4 | 20.2 | 20.7 | 14.7 | 11.1 | 11.1 | 0.0 | 0.0 | 10.0 | 10.3 | 12.8 | 0.0 | 0.0 | 48.7 | 16.5 | 0.0 | 0.0 | 11.3 |
+| 3D-SIS [18] | 19.7 | 37.7 | 40.5 | 31.9 | 15.9 | 18.1 | 0.0 | 11.0 | 0.0 | 0.0 | 10.5 | 11.1 | 18.5 | 24.0 | 45.8 | 15.8 | 23.5 | 12.9 | 18.7 |
+| MTML [19] | 14.5 | 54.0 | 79.2 | 48.8 | 42.7 | 32.4 | 32.7 | 21.9 | 10.9 | 0.8 | 14.2 | 39.9 | 42.1 | 64.3 | 96.5 | 36.4 | 70.8 | 21.5 | 40.2 |
+| 3D-MPA (Ours) | 51.9 | 72.2 | 83.8 | 66.8 | 63.0 | 43.0 | 44.5 | 58.4 | 38.8 | 31.1 | 43.2 | 47.7 | 61.4 | 80.6 | 99.2 | 50.6 | 87.1 | 40.3 | 59.1 |
+
+# 4.1. Comparison with State-of-the-art Methods
+
+Datasets. The ScanNetV2 [6] benchmark dataset consists of richly-annotated 3D reconstructions of indoor scenes. It comprises 1201 training scenes, 312 validation scenes and 100 hidden test scenes. The benchmark is evaluated on 20 semantic classes which include 18 different object classes.
+
+The S3DIS [2] dataset is a collection of six large-scale indoor areas annotated with 13 semantic classes and object instance labels. We follow the standard evaluation protocol and report scores on Area 5, as well as 6-fold cross validation results over all six areas.
+
+Object detection scores are shown in Tab. 1. Object detections are obtained by fitting a tight axis-aligned bounding box around the predicted object point-masks. We compare 3D-MPA to recent approaches including VoteNet [29] on the ScanNetV2 [6] dataset. Scores are obtained by using the evaluation methodology provided by [29]. Our method outperforms all previous methods by at least $+5.8$ mAP@25% and $+15.7$ mAP@50%.
+
+Instance segmentation scores on S3DIS [2] are shown in Tab. 2. Per-class instance segmentation results are shown in Tab. 7. We report mean average precision (mAP) and mean average recall (mAR) scores. Our scores are computed using the evaluation scripts provided by Yang et al. [49]. Our approach outperforms all previous methods. In particular, we report an increased recall of $+17.8$ mAR@50% on Area 5 and $+16.5$ mAR@50% on 6-fold cross validation, which means we detect significantly more objects, while simultaneously achieving higher precision.
+
+We show results on ScanNetV2 [6] validation and hidden test set in Tab. 3 and per-class scores with mAP@25% in Tab. 4 and mAP@50% in Tab. 5. We improve over previous methods by at least +18.1 mAP@50% and +17.0 mAP@25%. In particular, our 3D-MPA outperforms all other methods in every object class on mAP@50 (Tab. 5). On mAP@25, we outperform on all classes except chair and sofa. Qualitative results on ScanNetV2 are visualized in Fig. 3 and failure cases in Fig. 4.
+
+# 4.2. Ablation study
+
+In Tab. 6, we show the result of our ablation study analyzing the design choices of each component of our model. The evaluation metric is mean average precision (mAP) on the task of instance segmentation, evaluated on the ScanNetV2 validation set.
+
+Table 5: Per class 3D instance segmentation on ScanNetV2 [6] validation set with mAP@50% on 18 classes. Our method outperforms all other methods on all classes.
+
+| Ablation Study: 3D Instance Segmentation (ScanNetV2 val.) | mAP@50% |
+| --- | --- |
+| ① Proposals + NMS | 47.5 |
+| ② Agg. Props. (proposal positions) | 52.4 (+4.9) |
+| ③ Agg. Props. (embedding features) | 56.7 (+9.2) |
+| ④ Agg. Props. (geometric features) | 57.8 (+10.3) |
+| ⑤ Agg. Props. (geometric features + GCN) | 59.1 (+11.6) |
+
+Table 6: Ablation study. In Sec. 4.2 we discuss the results in detail. Scores are instance segmentation results on the ScanNetV2 [6] validation set, with absolute improvements in mAP (in parentheses) relative to the baseline ①.
+
+| S3DIS 6-fold CV | Method | ceil. | floor | walls | beam | colm. | wind. | door | table | chair | sofa | bookc. | board | clut. | mean |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| mAP@0.5 | 3D-BoNet [49] | 88.5 | 89.9 | 64.9 | 42.3 | 48.0 | 93.0 | 66.8 | 55.4 | 72.0 | 49.7 | 58.3 | 80.7 | 47.6 | 65.6 |
+| mAP@0.5 | 3D-MPA (Ours) | 95.5 | 99.5 | 59.0 | 44.6 | 57.7 | 89.0 | 78.7 | 34.5 | 83.6 | 55.9 | 51.6 | 71.0 | 46.3 | 66.7 |
+| mAR@0.5 | 3D-BoNet [49] | 61.8 | 74.6 | 50.0 | 42.2 | 27.2 | 62.4 | 58.5 | 48.6 | 64.9 | 28.8 | 28.4 | 46.5 | 28.6 | 46.7 |
+| mAR@0.5 | 3D-MPA (Ours) | 68.4 | 96.2 | 51.9 | 58.8 | 77.6 | 79.8 | 69.5 | 32.8 | 75.2 | 71.1 | 46.2 | 68.2 | 38.2 | 64.1 |
+
+Table 7: Per class 3D instance segmentation scores on S3DIS [2]. We report per-class mean average precision (mAP) and recall (mAR) with an IoU of $50\%$ . The 3D-BoNet scores are up-to-date numbers provided by the original authors. Our method detects significantly more objects (+17.4 mAR) and it is even able to do so with a higher precision (+1.1 mAP).
+
+Effect of grouping compared to NMS. The main result of this work is that grouping multiple proposals is superior to non-maximum-suppression (NMS). We demonstrate this experimentally by comparing two baseline variants of our model: In experiment ① (Tab. 6), we apply the traditional approach of predicting a number of proposals and applying NMS to obtain the final predictions. The model corresponds to the one depicted in Fig. 2 without proposal consolidation and with the aggregation replaced by NMS. NMS chooses the most confident prediction and suppresses all other predictions with an IoU larger than a specified threshold, in our case $25\%$ . For experiment ②, we perform a naive grouping of proposals by clustering the proposal positions $y_{i}$ . The final object instance masks are obtained as the union over all proposal masks in one cluster. We observe a significant increase of $+4.9$ mAP by replacing NMS with aggregation.
+
+How important are good aggregation features? In experiment ②, we group proposals based on their position $y_{i}$ . These are still relatively simple features. In experiments ③ and ④, proposals are grouped based on learned embedding features and learned geometric features, respectively. These features are described in Sec. 3.3. Again, we observe a notable improvement of +5.4 mAP compared to experiment ② and even +10.3 mAP over ①. In our experiments, the geometric features performed better than the embedding features (+1.1 mAP). One possible explanation could be that the geometric features have an explicit meaning and are therefore easier to train than the 5-dimensional embedding space used in experiment ③. Therefore, for the next experiment in the ablation study and our final model, we make use of the geometric features. In summary, the quality of the aggregation features has a significant impact.
+
+Does the graph convolutional network help? The graph convolutional network (GCN) defined on top of proposals enables higher-order interaction between proposals. Experiment ⑤ corresponds to the model depicted in Fig. 2 with a 10-layer GCN. Experiment ④ differs from experiment ⑤ in that it does not include the GCN for proposal consolidation. Adding the GCN results in another improvement of +1.3 mAP. In total, by incorporating the GCN and replacing NMS with multi-proposal aggregation, we observe an improvement of +11.6 mAP over the same network architecture without those changes.
+
+# 5. Conclusion
+
+In this work, we introduced 3D-MPA, a new method for 3D semantic instance segmentation. Our core idea is to combine the benefits of both top-down and bottom-up object detection strategies. That is, we first produce a number of proposals using an object-centric voting scheme based on a sparse volumetric backbone. Each object may receive multiple proposals, which makes our method robust to potential outliers in the object proposal stage. At the same time, we obtain only a handful of proposals, so clustering them is computationally inexpensive. Before aggregation, we first allow higher-order feature interactions between proposals via a graph convolutional network. We then aggregate proposals based on the resulting graph relationships and proposal feature similarities. We show that graph convolutions help to achieve high evaluation scores, although the largest improvement originates from our multi-proposal aggregation strategy. Our combined approach achieves state-of-the-art instance segmentation and object detection results on the popular ScanNetV2 and S3DIS datasets, thus validating our algorithm design.
+
+Overall, we believe that multi-proposal aggregation is a promising direction for object detection, in particular in the 3D domain. However, there remain many interesting future avenues, for instance how to combine detection with tracking in semi-dynamic sequences. We see a variety of interesting ideas, where proposals could be distributed in 4D space and accumulated along the time-space axis.
+
+Acknowledgements. We would like to thank Theodora Kontogianni, Jonas Schult, Jonathon Luiten, Mats Steinweg, Ali Athar, Dan Jia and Sabarinath Mahadevan for helpful feedback as well as Angela Dai for help with the video. This work was funded by the ERC Consolidator Grant DeeViSe (ERC-2017-COG-773161) and the ERC Starting Grant Scan2CAD (804724).
+
+# References
+
+[1] I. Armeni, Z.-Y. He, J. Gwak, A. R. Zamir, M. Fischer, J. Malik, and S. Savarese. 3D Scene Graph: A Structure for Unified Semantics, 3D Space, and Camera. In IEEE International Conference on Computer Vision (ICCV), 2019. 2
+[2] I. Armeni, O. Sener, A. R. Zamir, H. Jiang, I. Brilakis, M. Fischer, and S. Savarese. 3D Semantic Parsing of LargeScale Indoor Spaces. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 5, 7, 8
+[3] A. Behl, D. Paschalidou, S. Donne, and A. Geiger. Point-FlowNet: Learning Representations for Rigid Motion Estimation from Point Clouds. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 2
+[4] B. De Brabandere, D. Neven, and L. Van Gool. Semantic Instance Segmentation with a Discriminative Loss Function. In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPR'W), 2017. 2, 4, 5
+[5] C. Choy, J. Gwak, and S. Savarese. 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 2, 3
+[6] A. Dai, A. X. Chang, M. Savva, M. Halber, T. Funkhouser, and M. Nießner. ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. 1, 5, 6, 7
+[7] A. Dai and M. Nießner. 3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation. In European Conference on Computer Vision (ECCV), 2018. 1, 2
+[8] A. Dai, M. Nießner, M. Zollhöfer, S. Izadi, and C. Theobalt. Bundlefusion: Real-time Globally Consistent 3D Reconstruction Using On-the-fly Surface Reintegration. ACM Transactions on Graphics (TOG), 2017. 1
+[9] D. K. Duvenaud, D. Maclaurin, J. Iparraguirre, R. Bombarell, T. Hirzel, A. Aspuru-Guzik, and R. P. Adams. Convolutional Networks on Graphs for Learning Molecular Fingerprints. In Neural Information Processing Systems (NIPS), 2015. 2
+[10] C. Elich, F. Engelmann, J. Schult, T. Kontogianni, and B. Leibe. 3D-BEVIS: Birds-Eye-View Instance Segmentation. In German Conference on Pattern Recognition (GCPR), 2019. 2, 4, 5
+[11] F. Engelmann, T. Kontogianni, and B. Leibe. Dilated Point Convolutions: On the Receptive Field Size of Point Convolutions on 3D Point Clouds. In International Conference on Robotics and Automation (ICRA), 2020. 1
+[12] F. Engelmann, T. Kontogianni, J. Schult, and B. Leibe. Know What Your Neighbors Do: 3D Semantic Segmentation of Point Clouds. In European Conference on Computer Vision Workshop (ECCV'W), 2018. 2
+[13] M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. A Density-based Algorithm for Discovering Clusters in Large Spatial Databases With Noise. In ACM International Conference on Knowledge Discovery & Data Mining (KDD), 1996. 4
+[14] A. Fathi, Z. Wojna, V. Rathod, P. Wang, H. O. Song, S. Guadarrama, and K. P. Murphy. Semantic Instance Segmentation via Deep Metric Learning. CoRR, abs/1703.10277, 2017. 2
+
+[15] B. Graham, M. Engelcke, and L. van der Maaten. 3D Semantic Segmentation with Submanifold Sparse Convolutional Networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 2, 3
+[16] R. Hanocka, A. Hertz, N. Fish, R. Giryes, S. Fleishman, and D. Cohen-Or. MeshCNN: A Network with an Edge. ACM Transactions on Graphics (TOG), 2019. 1, 2
+[17] K. He, G. Gkioxari, P. Dollar, and R. B. Girshick. Mask R-CNN. In IEEE International Conference on Computer Vision (ICCV), 2017. 2, 5, 7
+[18] J. Hou, A. Dai, and M. Nießner. 3D-SIS: 3D Semantic Instance Segmentation of RGB-D Scans. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 1, 2, 5, 7
+[19] J. Lahoud, B. Ghanem, M. Pollefeys, and M. R. Oswald. 3D Instance Segmentation via Multi-Task Metric Learning. In IEEE International Conference on Computer Vision (ICCV), 2019. 1, 2, 4, 5, 7
+[20] L. Landrieu and M. Boussaha. Point Cloud Oversegmentation with Graph-Structured Deep Metric Learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 2
+[21] L. Landrieu and M. Simonovsky. Large-scale Point Cloud Semantic Segmentation with Superpoint Graphs. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 2
+[22] Y. Li, D. Tarlow, M. Brockschmidt, and R. Zemel. Gated Graph Sequence Neural Networks. In International Conference on Learning Representations (ICLR), 2017. 2
+[23] T.-Y. Lin, P. Goyal, R. B. Girshick, K. He, and P. Dollar. Focal Loss for Dense Object Detection. In IEEE International Conference on Computer Vision (ICCV), 2017. 5
+[24] C. Liu and Y. Furukawa. MASC: Multi-scale Affinity with Sparse Convolution for 3D Instance Segmentation. CoRR, abs/1902.04478, 2019. 2
+[25] X. Liu, C. R. Qi, and L. J. Guibas. FlowNet3D: Learning Scene Flow in 3D Point Clouds. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 2
+[26] K. Mo, S. Zhu, A. X. Chang, L. Yi, S. Tripathi, L. J. Guibas, and H. Su. PartNet: A Large-Scale Benchmark for Fine-Grained and Hierarchical Part-Level 3D Object Understanding. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 5
+[27] R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohli, J. Shotton, S. Hodges, and A. W. Fitzgibbon. KinectFusion: Real-time Dense Surface Mapping and Tracking. In International Symposium on Mixed and Augmented Reality (ISMAR), 2011. 1
+[28] M. Nießner, M. Zollhöfer, S. Izadi, and M. Stamminger. Real-time 3D Reconstruction at Scale using Voxel Hashing. ACM Transactions on Graphics (TOG), 2013. 1
+[29] C. R. Qi, O. Litany, K. He, and L. J. Guibas. Deep Hough Voting for 3D Object Detection in Point Clouds. In IEEE International Conference on Computer Vision (ICCV), 2019. 1, 2, 3, 4, 5, 7
+
+[30] C. R. Qi, W. Liu, C. Wu, H. Su, and L. J. Guibas. Frustum PointNets for 3D Object Detection from RGB-D Data. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 5
+[31] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. 1, 2, 4
+[32] C. R. Qi, H. Su, M. Nießner, A. Dai, M. Yan, and L. J. Guibas. Volumetric and multi-view cnns for object classification on 3d data. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 1, 5
+[33] C. R. Qi, L. Yi, H. Su, and L. J. Guibas. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In Neural Information Processing Systems (NIPS), 2017. 1, 2
+[34] X. Qi, R. Liao, J. Jia, S. Fidler, and R. Urtasun. 3D Graph Neural Networks for RGBD Semantic Segmentation. In IEEE International Conference on Computer Vision (ICCV), 2017. 2
+[35] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In Neural Information Processing Systems (NIPS), 2015. 1, 2, 4
+[36] J. Schult, F. Engelmann, T. Kontogianni, and B. Leibe. DualConvMesh-Net: Joint Geodesic and Euclidean Convolutions on 3D Meshes. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 1, 2
+[37] S. Song and J. Xiao. Deep Sliding Shapes for Amodal 3D Object Detection in RGB-D Images. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 1, 5
+[38] Y. Song, C. Yang, Y. Shen, P. Wang, Q. Huang, and C. J. Kuo. SPG-Net: Segmentation Prediction and Guidance Network for Image Inpainting. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 1
+[39] H. Su, S. Maji, E. Kalogerakis, and E. Learned-Miller. Multiview convolutional neural networks for 3d shape recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015. 1
+[40] M. Tatarchenko, J. Park, V. Koltun, and Q.-Y. Zhou. Tangent Convolutions for Dense Prediction in 3D. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 2
+
+[41] T. N. Kipf and M. Welling. Semi-Supervised Classification with Graph Convolutional Networks. In International Conference on Learning Representations (ICLR), 2017. 2
+[42] J. Wald, A. Avetisyan, N. Navab, F. Tombari, and M. Nießner. RIO: 3D Object Instance Re-Localization in Changing Indoor Environments. In IEEE International Conference on Computer Vision (ICCV), 2019. 2
+[43] S. Wang, S. Suo, W. Ma, A. Pokrovsky, and R. Urtasun. Deep Parametric Continuous Convolutional Neural Networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 2
+[44] W. Wang, R. Yu, Q. Huang, and U. Neumann. SGPN: Similarity Group Proposal Network for 3D Point Cloud Instance Segmentation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 5, 7
+[45] X. Wang, S. Liu, X. Shen, C. Shen, and J. Jia. Associatively Segmenting Instances and Semantics in Point Clouds. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 2, 5
+[46] Y. Wang, Y. Sun, Z. Liu, S. E. Sarma, M. M. Bronstein, and J. M. Solomon. Dynamic Graph CNN for Learning on Point Clouds. In ACM Transactions on Graphics (TOG), 2019. 2, 4
+[47] T. Whelan, S. Leutenegger, R. Salas-Moreno, B. Glocker, and A. Davison. ElasticFusion: Dense SLAM without a Pose Graph. In Robotics: Science and Systems (RSS), 2015. 1
+[48] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3D ShapeNets: A Deep Representation for Volumetric Shapes. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015. 1
+[49] B. Yang, J. Wang, R. Clark, Q. Hu, S. Wang, A. Markham, and N. Trigoni. Learning object bounding boxes for 3d instance segmentation on point clouds. In Neural Information Processing Systems (NIPS), 2019. 1, 2, 4, 5, 7, 8
+[50] L. Yi, W. Zhao, H. Wang, M. Sung, and L. Guibas. GSPN: Generative Shape Proposal Network for 3D Instance Segmentation in Point Cloud. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 2, 5
+[51] Y. Zhou and O. Tuzel. VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 1, 2
\ No newline at end of file
diff --git a/3dmpamultiproposalaggregationfor3dsemanticinstancesegmentation/images.zip b/3dmpamultiproposalaggregationfor3dsemanticinstancesegmentation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..c2965be696957384bb12998bb4aa7bfcb15bfa00
--- /dev/null
+++ b/3dmpamultiproposalaggregationfor3dsemanticinstancesegmentation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b743d83e2803847dc6ce3747ef255fb40f6d7c394d8ca56cf6729f4a9c6dc74d
+size 719561
diff --git a/3dmpamultiproposalaggregationfor3dsemanticinstancesegmentation/layout.json b/3dmpamultiproposalaggregationfor3dsemanticinstancesegmentation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..cf3df8dba453a96d78638895755cd424070c7611
--- /dev/null
+++ b/3dmpamultiproposalaggregationfor3dsemanticinstancesegmentation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4ad23e9ade7e652c85c711ff9dd69f61095472a7dbde59808a2584b026e515fb
+size 424676
diff --git a/3dpackingforselfsupervisedmonoculardepthestimation/98cb13b1-a587-4d6d-b10f-2fb39663ea60_content_list.json b/3dpackingforselfsupervisedmonoculardepthestimation/98cb13b1-a587-4d6d-b10f-2fb39663ea60_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..eebb89a746c4d8917ff7f49728d96c8e65002a60
--- /dev/null
+++ b/3dpackingforselfsupervisedmonoculardepthestimation/98cb13b1-a587-4d6d-b10f-2fb39663ea60_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5d173f81ffa17e63a0c69431b430bad14161f740c51b379b5549c3cc644206fc
+size 79463
diff --git a/3dpackingforselfsupervisedmonoculardepthestimation/98cb13b1-a587-4d6d-b10f-2fb39663ea60_model.json b/3dpackingforselfsupervisedmonoculardepthestimation/98cb13b1-a587-4d6d-b10f-2fb39663ea60_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..22b529d7998a671703a516086ddb8ebd2177c5bc
--- /dev/null
+++ b/3dpackingforselfsupervisedmonoculardepthestimation/98cb13b1-a587-4d6d-b10f-2fb39663ea60_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:78f5878969f622d2f7ee688df532bc10484ce1aa53e1ba1ce31e1010b8d608b8
+size 97207
diff --git a/3dpackingforselfsupervisedmonoculardepthestimation/98cb13b1-a587-4d6d-b10f-2fb39663ea60_origin.pdf b/3dpackingforselfsupervisedmonoculardepthestimation/98cb13b1-a587-4d6d-b10f-2fb39663ea60_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..791827a8f715b9657ccb3022009910f2f8e537fa
--- /dev/null
+++ b/3dpackingforselfsupervisedmonoculardepthestimation/98cb13b1-a587-4d6d-b10f-2fb39663ea60_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:554e10d071b2f8425790a2e1c4801be4868ffd27a3d8da6c246a771e8fc31836
+size 3456914
diff --git a/3dpackingforselfsupervisedmonoculardepthestimation/full.md b/3dpackingforselfsupervisedmonoculardepthestimation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..85d6b7f0dc9746cca0cc1b613b38c5ee341d2f5c
--- /dev/null
+++ b/3dpackingforselfsupervisedmonoculardepthestimation/full.md
@@ -0,0 +1,280 @@
+# 3D Packing for Self-Supervised Monocular Depth Estimation
+
+Vitor Guizilini Rares Ambrus Sudeep Pillai Allan Raventos Adrien Gaidon Toyota Research Institute (TRI)
+
+firstname.lastname@tri.global
+
+# Abstract
+
+Although cameras are ubiquitous, robotic platforms typically rely on active sensors like LiDAR for direct 3D perception. In this work, we propose a novel self-supervised monocular depth estimation method combining geometry with a new deep network, PackNet, learned only from unlabeled monocular videos. Our architecture leverages novel symmetrical packing and unpacking blocks to jointly learn to compress and decompress detail-preserving representations using 3D convolutions. Although self-supervised, our method outperforms other self, semi, and fully supervised methods on the KITTI benchmark. The 3D inductive bias in PackNet enables it to scale with input resolution and number of parameters without overfitting, generalizing better on out-of-domain data such as the NuScenes dataset. Furthermore, it does not require large-scale supervised pretraining on ImageNet and can run in real-time. Finally, we release DDAD (Dense Depth for Automated Driving), a new urban driving dataset with more challenging and accurate depth evaluation, thanks to longer-range and denser ground-truth depth generated from high-density LiDARs mounted on a fleet of self-driving cars operating world-wide. $^{\dagger}$
+
+# 1. Introduction
+
+Accurate depth estimation is a key prerequisite in many robotics tasks, including perception, navigation, and planning. Depth from monocular camera configurations can provide useful cues for a wide array of tasks [23, 30, 34, 36], producing dense depth maps that could complement or eventually replace expensive range sensors. However, learning monocular depth via direct supervision requires ground-truth information from additional sensors and precise cross-calibration. Self-supervised methods do not suffer from these limitations, as they use geometrical constraints on image sequences as the sole source of supervision. In this work, we address the problem of jointly estimating scene structure and camera motion across RGB image sequences using a self-supervised deep network.
+
+While recent works in self-supervised monocular depth
+
+
+Figure 1: Example metrically accurate PackNet prediction (map and textured point cloud) on our DDAD dataset.
+
+estimation have mostly focused on engineering the loss function [5, 33, 47, 53], we show that performance critically depends on the model architecture, in line with the observations of [27] for other self-supervised tasks. Going beyond image classification models like ResNet [20], our main contribution is a new convolutional network architecture, called PackNet, for high-resolution self-supervised monocular depth estimation. We propose new packing and unpacking blocks that jointly leverage 3D convolutions to learn representations that maximally propagate dense appearance and geometric information while still being able to run in real time. Our second contribution is a novel loss that can optionally leverage the camera's velocity when available (e.g., from cars, robots, mobile phones) to solve the inherent scale ambiguity in monocular vision. Our third contribution is a new dataset: Dense Depth for Automated Driving (DDAD). It leverages diverse logs from a fleet of well-calibrated self-driving cars equipped with cameras and high-accuracy long-range LiDARs. Compared to existing benchmarks, DDAD enables much more accurate depth evaluation at range, which is key for high resolution monocular depth estimation methods (cf. Figure 1).
+
+Our experiments on the standard KITTI benchmark [16], the recent NuScenes dataset [4], and our newly proposed DDAD benchmark show that our self-supervised monocular approach $i)$ improves on the state of the art, especially at longer ranges; $ii)$ is competitive with fully supervised methods; $iii)$ generalizes better on unseen data; $iv)$ scales better with number of parameters, input resolution, and more unlabeled training data; $v)$ can run in real time at high resolution; and $vi)$ does not require supervised pretraining on ImageNet to achieve state-of-the-art results, nor test-time ground-truth scaling if velocity information is available at training time.
+
+# 2. Related Work
+
+Depth estimation from a single image poses several challenges due to its ill-posed and ambiguous nature. However, modern convolutional networks have shown that it is possible to successfully leverage appearance-based patterns in large scale datasets in order to make accurate predictions.
+
+Depth Network Architectures. Eigen et al. [13] proposed one of the earliest works on CNN-based depth estimation, using a multi-scale deep network trained on RGB-D sensor data to regress the depth directly from single images. Subsequent works extended these network architectures to perform two-view stereo disparity estimation [35] using techniques developed in the flow estimation literature [12]. Following [12, 35], Ummenhofer et al. [42] applied these concepts to simultaneously train a depth and pose network to predict depth and camera ego-motion between successive unconstrained image pairs.
+
+Independently, dense pixel-prediction networks [2, 31, 48] have made significant progress towards improving the flow of information between encoding and decoding layers. Fractional pooling [19] was introduced to amortize the rapid spatial reduction during downsampling. Lee et al. [29] generalized the pooling function to allow the learning of more complex patterns, including linear combinations and learnable pooling operations. Shi et al. [39] used sub-pixel convolutions to perform Single-Image-Super-Resolution, synthesizing and super-resolving images beyond their input resolutions, while still operating at lower resolutions. Recent works [38, 51] in self-supervised monocular depth estimation use this concept to super-resolve estimates and further improve performance. Here, we go one step further and introduce new operations relying on 3D convolutions for learning to preserve and process spatial information in the features of encoding and decoding layers.
+
+Self-Supervised Monocular Depth and Pose. As supervised techniques for depth estimation advanced rapidly, the availability of target depth labels became challenging, especially for outdoor applications. To this end, [15, 17] provided an alternative strategy involving training a monocular depth network with stereo cameras, without requiring ground-truth depth labels. By leveraging Spatial Transformer Networks [22], Godard et al. [17] use stereo imagery to geometrically transform the right image plus a predicted depth of the left image into a synthesized left image. The loss between the resulting synthesized and original left images is then defined in a fully-differentiable manner, using a Structural Similarity [44] term and additional depth regularization terms, thus allowing the depth network to be self-supervised in an end-to-end fashion.
+
+Following [17] and [42], Zhou et al. [52] generalize this to self-supervised training in the purely monocular setting, where a depth and pose network are simultaneously learned from unlabeled monocular videos. Several methods [5, 26, 33, 43, 46, 47, 51, 53] have advanced this line of work by incorporating additional loss terms and constraints. All, however, are subject to the constraints of monocular Structure-from-Motion (SfM) training, which only allows the estimation of depth and pose up to an unknown scale factor, and rely on ground-truth LiDAR measurements to scale their depth estimates appropriately for evaluation purposes [52]. Instead, in this work we show that, by simply using the instantaneous velocity of the camera during training, we are able to learn a scale-aware depth and pose model, alleviating the impractical need for LiDAR ground-truth depth measurements at test time.
+
+# 3. Self-Supervised Scale-Aware SfM
+
+In self-supervised monocular SfM training (Fig. 2), we aim to learn: (i) a monocular depth model $f_{D}: I \to D$ , that predicts the scale-ambiguous depth $\hat{D} = f_{D}(I(p))$ for every pixel $p$ in the target image $I$ ; and (ii) a monocular ego-motion estimator $f_{\mathbf{x}}: (I_t, I_S) \to \mathbf{x}_{t \to S}$ , that predicts the set of 6-DoF rigid transformations for all $s \in S$ given by $\mathbf{x}_{t \to s} = \begin{pmatrix} \mathbf{R} & \mathbf{t} \\ \mathbf{0} & \mathbf{1} \end{pmatrix} \in \mathrm{SE}(3)$ , between the target image $I_t$ and the set of source images $I_s \in I_S$ considered as part of the temporal context. In practice, we use the frames $I_{t-1}$ and $I_{t+1}$ as source images, although using a larger context is possible. Note that in the case of monocular SfM both depth and pose are estimated up to an unknown scale factor, due to the inherent ambiguity of the photometric loss.
+
+# 3.1. Self-Supervised Objective
+
+Following the work of Zhou et al. [52], we train the depth and pose network simultaneously in a self-supervised manner. In this work, however, we learn to recover the inverse-depth $f_{d}: I \to f_{D}^{-1}(I)$ instead, along with the ego-motion estimator $f_{\mathbf{x}}$ . Similar to [52], the overall self-supervised objective consists of an appearance matching loss term $\mathcal{L}_p$ that is imposed between the synthesized target image $\hat{I}_t$ and the target image $I_t$ , and a depth regularization term $\mathcal{L}_s$ that ensures edge-aware smoothing in the depth estimates $\hat{D}_t$ . The objective takes the following form:
+
+$$
+\mathcal {L} \left(I _ {t}, \hat {I} _ {t}\right) = \mathcal {L} _ {p} \left(I _ {t}, I _ {S}\right) \odot \mathcal {M} _ {p} \odot \mathcal {M} _ {t} + \lambda_ {1} \mathcal {L} _ {s} (\hat {D} _ {t}) \tag {1}
+$$
+
+where $\mathcal{M}_t$ is a binary mask that avoids computing the photometric loss on the pixels that do not have a valid mapping, and $\odot$ denotes element-wise multiplication. Additionally, $\lambda_1$ enforces a weighted depth regularization on the objective. The overall loss in Equation 1 is averaged per-pixel, pyramid-scale and image batch during training. Fig. 2 shows a high-level overview of our training pipeline.
+
+Appearance Matching Loss. Following [17, 52] the pixel-level similarity between the target image $I_{t}$ and the synthesized target image $\hat{I}_{t}$ is estimated using the Structural Similarity (SSIM) [44] term combined with an L1 pixel-wise loss term, inducing an overall photometric loss given by Equation 2 below.
+
+$$
+\mathcal{L}_p\left(I_t, \hat{I}_t\right) = \alpha \, \frac{1 - \operatorname{SSIM}\left(I_t, \hat{I}_t\right)}{2} + (1 - \alpha) \, \left\| I_t - \hat{I}_t \right\| \tag{2}
+$$
+
+While multi-view projective geometry provides strong cues for self-supervision, errors due to parallax in the scene have an undesirable effect incurred on the photometric loss. We mitigate these undesirable effects by calculating the minimum photometric loss per pixel for each source image in the context $I_{S}$ , as shown in [18], so that:
+
+$$
+\mathcal {L} _ {p} \left(I _ {t}, I _ {S}\right) = \min _ {I _ {S}} \mathcal {L} _ {p} \left(I _ {t}, \hat {I} _ {t}\right) \tag {3}
+$$
+
+The intuition is that the same pixel will not be occluded or out-of-bounds in all context images, and that the association with minimal photometric loss should be the correct one. Furthermore, we also mask out static pixels by removing those which have a warped photometric loss $\mathcal{L}_p(I_t,\hat{I}_t)$ higher than their corresponding unwarped photometric loss $\mathcal{L}_p(I_t,I_s)$ , calculated using the original source image without view synthesis. Introduced in [18], this auto-mask removes pixels whose appearance does not change between frames, which includes static scenes and dynamic objects with no relative motion, since these will have a smaller photometric loss when assuming no ego-motion.
+
+$$
+\mathcal {M} _ {p} = \min _ {I _ {S}} \mathcal {L} _ {p} \left(I _ {t}, I _ {s}\right) > \min _ {I _ {S}} \mathcal {L} _ {p} \left(I _ {t}, \hat {I} _ {t}\right) \tag {4}
+$$
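+
+A simplified NumPy sketch of the per-pixel minimum reprojection loss and auto-mask of Eqs. 3-4 follows; the SSIM term of Eq. 2 is replaced by a plain L1 error for brevity, and the view-synthesis step is stubbed out, so this is an illustration rather than the full objective.
+
+```python
+import numpy as np
+
+def l1_photometric(a, b):
+    """Simplified per-pixel photometric error (the SSIM term of Eq. 2 is omitted here)."""
+    return np.abs(a - b).mean(axis=-1)                 # (H, W)
+
+def min_reprojection_loss(target, warped_sources, sources):
+    """Eqs. 3-4 sketch: per-pixel minimum over source images plus the auto-mask
+    that discards pixels whose unwarped error is already lower."""
+    warped_err = np.stack([l1_photometric(target, w) for w in warped_sources]).min(axis=0)
+    unwarped_err = np.stack([l1_photometric(target, s) for s in sources]).min(axis=0)
+    auto_mask = unwarped_err > warped_err              # Eq. 4
+    return (warped_err * auto_mask).sum() / max(auto_mask.sum(), 1)
+
+H, W = 8, 8
+target = np.random.rand(H, W, 3)
+sources = [np.random.rand(H, W, 3) for _ in range(2)]
+warped = [0.5 * (target + s) for s in sources]         # stand-in for view synthesis
+print(min_reprojection_loss(target, warped, sources))
+```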
+
+Depth Smoothness Loss. In order to regularize the depth in texture-less low-image gradient regions, we incorporate an edge-aware term (Equation 5), similar to [17]. The loss is weighted for each of the pyramid-levels, and is decayed by a factor of 2 on down-sampling, starting with a weight of 1 for the $0^{\text{th}}$ pyramid level.
+
+$$
+\mathcal {L} _ {s} (\hat {D} _ {t}) = \left| \delta_ {x} \hat {D} _ {t} \right| e ^ {- \left| \delta_ {x} I _ {t} \right|} + \left| \delta_ {y} \hat {D} _ {t} \right| e ^ {- \left| \delta_ {y} I _ {t} \right|} \tag {5}
+$$
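+
+Eq. 5 can be sketched as follows (a NumPy illustration; the per-scale weighting described above is omitted):
+
+```python
+import numpy as np
+
+def smoothness_loss(inv_depth, image):
+    """Eq. 5 sketch: edge-aware first-order smoothness on the inverse depth map."""
+    dx_d = np.abs(np.diff(inv_depth, axis=1))
+    dy_d = np.abs(np.diff(inv_depth, axis=0))
+    dx_i = np.abs(np.diff(image, axis=1)).mean(axis=-1)   # mean over color channels
+    dy_i = np.abs(np.diff(image, axis=0)).mean(axis=-1)
+    return (dx_d * np.exp(-dx_i)).mean() + (dy_d * np.exp(-dy_i)).mean()
+
+print(smoothness_loss(np.random.rand(8, 8), np.random.rand(8, 8, 3)))
+```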
+
+
+Figure 2: PackNet-SfM: Our proposed scale-aware self-supervised monocular structure-from-motion architecture. We introduce PackNet as a novel depth network, and optionally include weak velocity supervision at training time to produce scale-aware depth and pose models.
+
+# 3.2. Scale-Aware SfM
+
+As previously mentioned, both the monocular depth and ego-motion estimators $f_{d}$ and $f_{\mathbf{x}}$ predict scale-ambiguous values, due to the limitations of the monocular SfM training objective. In other words, the scene depth and the camera ego-motion can only be estimated up to an unknown and ambiguous scale factor. This is also reflected in the overall learning objective, where the photometric loss is agnostic to the metric depth of the scene. Furthermore, we note that all previous approaches which operate in the self-supervised monocular regime [5, 15, 17, 33] suffer from this limitation, and resort to artificially incorporating this scale factor at test-time, using LiDAR measurements.
+
+Velocity Supervision Loss. Since instantaneous velocity measurements are ubiquitous in most mobile systems today, we show that they can be directly incorporated in our self-supervised objective to learn a metrically accurate and scale-aware monocular depth estimator. During training, we impose an additional loss $\mathcal{L}_v$ between the magnitude of the pose-translation component of the pose network prediction $\hat{\mathbf{t}}$ and the measured instantaneous velocity scalar $v$ multiplied by the time difference between target and source frames $\Delta T_{t\rightarrow s}$ , as shown below:
+
+$$
+\mathcal {L} _ {v} \left(\hat {\mathbf {t}} _ {t \rightarrow s}, v\right) = \left| \| \hat {\mathbf {t}} _ {t \rightarrow s} \| - | v | \Delta T _ {t \rightarrow s} \right| \tag {6}
+$$
+
+Our final scale-aware self-supervised objective loss $\mathcal{L}_{\mathrm{scale}}$ from Equation 1 becomes:
+
+$$
+\mathcal{L}_{\text{scale}}\left(I_t, \hat{I}_t, v\right) = \mathcal{L}\left(I_t, \hat{I}_t\right) + \lambda_2 \, \mathcal{L}_v\left(\hat{\mathbf{t}}_{t \rightarrow s}, v\right) \tag{7}
+$$
+
+where $\lambda_{2}$ is a weight used to balance the different loss terms. This additional velocity loss allows the pose network to make metrically accurate predictions, subsequently resulting in the depth network also learning metrically accurate estimates to maintain consistency (cf. Section 5.4).
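+
+A minimal sketch of the velocity term and the combined scale-aware objective (the value of $\lambda_2$ below is illustrative, not the paper's setting):
+
+```python
+import numpy as np
+
+def velocity_loss(translation, speed, dt):
+    """Eq. 6 sketch: match the norm of the predicted translation to speed * dt."""
+    return abs(np.linalg.norm(translation) - abs(speed) * dt)
+
+def scale_aware_loss(photometric_and_smoothness, translation, speed, dt, lam2=0.05):
+    """Eq. 7 sketch: add the weighted velocity term (lam2 is an illustrative value)."""
+    return photometric_and_smoothness + lam2 * velocity_loss(translation, speed, dt)
+
+t_hat = np.array([0.0, 0.0, 1.32])   # predicted translation between frames (metres)
+print(scale_aware_loss(0.12, t_hat, speed=13.2, dt=0.1))   # velocity term is zero here
+```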
+
+
+Figure 3: Proposed 3D packing and unpacking blocks. Packing replaces striding and pooling, while unpacking is its symmetrical feature upsampling mechanism.
+
+# 4. PackNet: 3D Packing for Depth Estimation
+
+Standard convolutional architectures use aggressive striding and pooling to increase their receptive field size. However, this potentially decreases model performance for tasks requiring fine-grained representations [19, 49]. Similarly, traditional upsampling strategies [6, 11] fail to propagate and preserve sufficient details at the decoder layers to recover accurate depth predictions. In contrast, we propose a novel encoder-decoder architecture, called PackNet, that introduces new 3D packing and unpacking blocks to learn to jointly preserve and recover important spatial information for depth estimation. This is in alignment with recent observations that information loss is not a necessary condition to learn representations capable of generalizing to different scenarios [21]. In fact, progressive expansion and contraction in a fully invertible manner, without discarding "uninformative" input variability, has been shown to increase performance in a wide variety of tasks [3, 10, 25]. We first describe the different blocks of our proposed architecture, and then proceed to show how they are integrated together in a single model for monocular depth estimation.
+
+# 4.1. Packing Block
+
+The packing block (Fig. 3a) starts by folding the spatial dimensions of convolutional feature maps into extra feature channels via a Space2Depth operation [39]. The resulting tensor is at a reduced resolution, but in contrast to striding or pooling, this transformation is invertible and comes at no loss. Next, we learn to compress this concatenated feature space in order to reduce its dimensionality to a desired number of output channels. As we show in our experiments (cf. Section 5.6), 2D convolutions are not designed to directly leverage the tiled structure of this feature space.
+
+| # | Layer Description | K | Output Tensor Dim. |
+| #0 | Input RGB image | | 3×H×W |
+| Encoding Layers |
+| #1 | Conv2d | 5 | 64×H×W |
+| #2 | Conv2d → Packing | 7 | 64×H/2×W/2 |
+| #3 | ResidualBlock (x2) → Packing | 3 | 64×H/4×W/4 |
+| #4 | ResidualBlock (x2) → Packing | 3 | 128×H/8×W/8 |
+| #5 | ResidualBlock (x3) → Packing | 3 | 256×H/16×W/16 |
+| #6 | ResidualBlock (x3) → Packing | 3 | 512×H/32×W/32 |
+| Decoding Layers |
+| #7 | Unpacking (#6) → Conv2d (⊕ #5) | 3 | 512×H/16×W/16 |
+| #8 | Unpacking (#7) → Conv2d (⊕ #4) | 3 | 256×H/8×W/8 |
+| #9 | InvDepth (#8) | 3 | 1×H/8×W/8 |
+| #10 | Unpacking (#8) → Conv2d (⊕ #3 ⊕ Upsample(#9)) | 3 | 128×H/4×W/4 |
+| #11 | InvDepth (#10) | 3 | 1×H/4×W/4 |
+| #12 | Unpacking (#10) → Conv2d (⊕ #2 ⊕ Upsample(#11)) | 3 | 64×H/2×W/2 |
+| #13 | InvDepth (#12) | 3 | 1×H/2×W/2 |
+| #14 | Unpacking (#12) → Conv2d (⊕ #1 ⊕ Upsample(#13)) | 3 | 64×H×W |
+| #15 | InvDepth (#14) | 3 | 1×H×W |
+
+Table 1: Summary of our PackNet architecture for self-supervised monocular depth estimation. The Packing and Unpacking blocks are described in Fig. 3, with kernel size $K = 3$ and $D = 8$. Conv2d blocks include GroupNorm [45] with $G = 16$ and ELU non-linearities [7]. InvDepth blocks include a 2D convolutional layer with $K = 3$ and sigmoid non-linearities. Each ResidualBlock is a sequence of three 2D convolutional layers with $K = 3/3/1$ and ELU non-linearities, followed by GroupNorm with $G = 16$ and Dropout [40] of 0.5 in the final layer. Upsample is a nearest-neighbor resizing operation. Numbers in parentheses indicate input layers, with $\oplus$ as channel concatenation. Bold numbers indicate the four inverse depth output scales.
+
+Instead, we propose to first learn to expand this structured representation via a 3D convolutional layer. The resulting higher dimensional feature space is then flattened (by simple reshaping) before a final 2D convolutional contraction layer. This structured feature expansion-contraction, inspired by invertible networks [3, 21] (although we do not ensure invertibility), allows our architecture to dedicate more parameters to learning how to compress key spatial details that need to be preserved for high-resolution depth decoding.
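+
+For clarity, the packing block can be summarized with the following PyTorch-style sketch; the use of `PixelUnshuffle` as Space2Depth and the exact layer shapes are our own illustrative choices, not the released implementation.
+
+```python
+import torch.nn as nn
+
+class PackingBlock(nn.Module):
+    """Sketch of Fig. 3a: Space2Depth folding, 3D expansion, 2D contraction."""
+    def __init__(self, in_channels, out_channels, r=2, d=8, kernel_size=3):
+        super().__init__()
+        self.space2depth = nn.PixelUnshuffle(r)          # invertible spatial folding
+        self.conv3d = nn.Conv3d(1, d, kernel_size, padding=kernel_size // 2)
+        self.conv2d = nn.Conv2d(in_channels * r * r * d, out_channels,
+                                kernel_size, padding=kernel_size // 2)
+
+    def forward(self, x):
+        x = self.space2depth(x)                          # (B, C*r*r, H/r, W/r)
+        x = self.conv3d(x.unsqueeze(1))                  # (B, d, C*r*r, H/r, W/r)
+        b, d, c, h, w = x.shape
+        x = x.reshape(b, d * c, h, w)                    # flatten back to a 2D feature map
+        return self.conv2d(x)                            # compress to out_channels
+```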
+
+# 4.2. Unpacking Block
+
+Symmetrically, the unpacking block (Fig. 3b) learns to decompress and unfold packed convolutional feature channels back into higher resolution spatial dimensions during the decoding process. The unpacking block replaces convolutional feature upsampling, typically performed via nearest-neighbor or with learnable transposed convolutional weights. It is inspired by sub-pixel convolutions [39], but adapted to reverse the 3D packing process that the features went through in the encoder. First, we use a 2D convolutional layer to produce the required number of feature channels for a following 3D convolutional layer. Second, this 3D convolution learns to expand back the compressed spatial features. Third, these unpacked features are converted back to spatial details via a reshape and Depth2Space operation [39] to obtain a tensor with the desired number of output channels and target higher resolution.
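+
+A symmetrical sketch of the unpacking block follows, again with illustrative layer shapes rather than the released implementation; it assumes `out_channels * r * r` is divisible by `d`.
+
+```python
+import torch.nn as nn
+
+class UnpackingBlock(nn.Module):
+    """Sketch of Fig. 3b: 2D expansion, 3D unpacking, Depth2Space unfolding."""
+    def __init__(self, in_channels, out_channels, r=2, d=8, kernel_size=3):
+        super().__init__()
+        self.conv2d = nn.Conv2d(in_channels, out_channels * r * r // d,
+                                kernel_size, padding=kernel_size // 2)
+        self.conv3d = nn.Conv3d(1, d, kernel_size, padding=kernel_size // 2)
+        self.depth2space = nn.PixelShuffle(r)            # invertible spatial unfolding
+
+    def forward(self, x):
+        x = self.conv2d(x)                               # produce channels for the 3D conv
+        x = self.conv3d(x.unsqueeze(1))                  # expand compressed features
+        b, d, c, h, w = x.shape
+        x = x.reshape(b, d * c, h, w)                    # (B, out_channels*r*r, H, W)
+        return self.depth2space(x)                       # (B, out_channels, H*r, W*r)
+```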
+
+
+Figure 4: Image reconstruction using different encoder-decoders: (a) input image; (b) standard max pooling and bilinear upsampling, each followed by 2D convolutions; (c) one packing-unpacking combination (cf. Fig. 3) with $D = 2$. All kernel sizes are $K = 3$ and $C = 4$ for intermediate channels.
+
+# 4.3. Detail-Preserving Properties
+
+In Fig. 4, we illustrate the detail-preserving properties of our packing / unpacking combination, showing we can get a near-lossless encoder-decoder for single image reconstruction by minimizing the L1 loss. We train a simple network composed of one packing layer followed by a symmetrical unpacking one and show it is able to almost exactly reconstruct the input image (final loss of 0.0079), including sharp edges and finer details. In contrast, a comparable baseline replacing packing / unpacking with max pooling / bilinear upsampling (and keeping the 2D convolutions) is only able to learn a blurry reconstruction (final loss of 0.063). This highlights how PackNet is able to learn more complex features by preserving spatial and appearance information end-to-end throughout the network.
+
+# 4.4. Model Architecture
+
+Our PackNet architecture for self-supervised monocular depth estimation is detailed in Table 1. Our symmetrical encoder-decoder architecture incorporates several packing and unpacking blocks, and is supplemented with skip connections [35] to facilitate the flow of information and gradients throughout the network. The decoder produces intermediate inverse depth maps that are upsampled before being concatenated with their corresponding skip connections and unpacked feature maps. These intermediate inverse depth maps are also used at training time in the loss calculation, after being upsampled to the full output resolution using nearest neighbors interpolation.
+
+# 5. Experiments
+
+# 5.1. Datasets
+
+KITTI [16]. The KITTI benchmark is the de facto standard for depth evaluation. More specifically, we adopt the training protocol used in Eigen et al. [13], with Zhou et al.'s [52] pre-processing to remove static frames. This results in 39810 images for training, 4424 for validation and 697 for evaluation. We also consider the improved ground-truth depth maps from [41] for evaluation, which use 5 consecutive frames to accumulate LiDAR points and stereo information to handle moving objects, resulting in 652 high-quality depth maps.
+
+DDAD (Dense Depth for Automated Driving). As one of our contributions, we release a diverse dataset of urban, highway, and residential scenes curated from a global fleet of self-driving cars. It contains 17,050 training and 4,150 evaluation frames with ground-truth depth maps generated from dense LiDAR measurements using the Luminar-H2 sensor. This new dataset is a more realistic and challenging benchmark for depth estimation, as it is diverse and captures precise structure across images (30k points per frame) at longer ranges (up to $200m$ vs $80m$ for previous datasets). See supplementary material for more details.
+
+NuScenes [4]. To assess the generalization capability of our approach w.r.t. previous ones, we evaluate KITTI models (without fine-tuning) on the official NuScenes validation dataset of 6019 front-facing images with ground-truth depth maps generated by LiDAR reprojection.
+
+CityScapes [8]. We also experiment with pretraining our monocular networks on the CityScapes dataset before fine-tuning on the KITTI dataset. This also allows us to explore the scalability and generalization performance of different models as they are trained with increasing amounts of unlabeled data. We use 88250 CityScapes images as the training split, with the same training parameters as for KITTI, trained for 20 epochs.
+
+# 5.2. Implementation Details
+
+We use PyTorch [37] with all models trained across 8 Titan V100 GPUs. We use the Adam optimizer [24], with $\beta_{1} = 0.9$ and $\beta_{2} = 0.999$ . The monocular depth and pose networks are trained for 100 epochs, with a batch size of 4 and initial depth and pose learning rates of $2 \cdot 10^{-4}$ and $5 \cdot 10^{-4}$ respectively. Training sequences are generated using a stride of 1, meaning that the previous $t - 1$ , current $t$ , and posterior $t + 1$ images are used in the loss calculation. As training proceeds, the learning rate is decayed every 40 epochs by a factor of 2. We set the SSIM weight to $\alpha = 0.85$ , the depth regularization weight to $\lambda_{1} = 0.001$ and, where applicable, the velocity-scaling weight to $\lambda_{2} = 0.05$ .
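+
+The optimization setup above corresponds to the following sketch, where `depth_net` and `pose_net` stand in for the two networks; it reflects the hyper-parameters listed in this section, not the released training code.
+
+```python
+import torch
+
+def build_optimizer(depth_net, pose_net):
+    """Adam with separate depth/pose learning rates, halved every 40 epochs."""
+    optimizer = torch.optim.Adam([
+        {"params": depth_net.parameters(), "lr": 2e-4},   # depth learning rate
+        {"params": pose_net.parameters(), "lr": 5e-4},    # pose learning rate
+    ], betas=(0.9, 0.999))
+    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=40, gamma=0.5)
+    return optimizer, scheduler
+```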
+
+Depth Network. Unless noted otherwise, we use our PackNet architecture as specified in Table 1. During training, all four inverse depth output scales are used in the loss calculation, and at test-time only the final output scale is used, after being resized to the full ground-truth depth map resolution using nearest neighbor interpolation.
+
+Pose Network. We use the architecture proposed by [52] without the explainability mask, which we found not to improve results. The pose network consists of 7 convolutional layers followed by a final $1 \times 1$ convolutional layer. The input to the network consists of the target view $I_{t}$ and the context views $I_{S}$ , and the output is the set of 6 DOF transformations between $I_{t}$ and $I_{s}$ , for $s \in S$ .
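+
+A minimal sketch of such a pose network is shown below; the channel widths and kernel sizes are our approximation of [52], not an exact reproduction.
+
+```python
+import torch
+import torch.nn as nn
+
+class PoseNet(nn.Module):
+    """Sketch of a [52]-style pose network (without the explainability mask)."""
+    def __init__(self, num_source_views=2):
+        super().__init__()
+        channels, in_ch = [16, 32, 64, 128, 256, 256, 256], 3 * (1 + num_source_views)
+        layers = []
+        for i, out_ch in enumerate(channels):             # 7 strided conv layers
+            k = 7 if i == 0 else 3
+            layers += [nn.Conv2d(in_ch, out_ch, k, stride=2, padding=k // 2),
+                       nn.ReLU(inplace=True)]
+            in_ch = out_ch
+        self.encoder = nn.Sequential(*layers)
+        self.pose_pred = nn.Conv2d(in_ch, 6 * num_source_views, kernel_size=1)
+
+    def forward(self, target, contexts):
+        x = torch.cat([target] + contexts, dim=1)         # concatenate target and context views
+        x = self.pose_pred(self.encoder(x))
+        # Average spatially and reshape to one 6-DoF vector per context view.
+        return x.mean(dim=(2, 3)).view(-1, len(contexts), 6)
+```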
+
+# 5.3. Depth Estimation Performance
+
+First, we report the performance of our proposed monocular depth estimation method when considering longer distances, which is now possible due to the introduction of our new DDAD dataset. Depth estimation results using this dataset for training and evaluation, considering cumulative distances up to $200\mathrm{m}$, can be found in Fig. 5 and Table 2. Additionally, in Fig. 6 we present results for different depth intervals calculated independently. From these results we can see that our PackNet-SfM approach significantly outperforms the state-of-the-art [18], which is based on the ResNet family, with the performance gap consistently increasing as larger distances are considered.
+
+Second, we evaluate depth predictions on KITTI using the metrics described in Eigen et al. [13]. We summarize our results in Table 3, for the original depth maps from [13] and the accumulated depth maps from [41], and illustrate their performance qualitatively in Fig. 7. In contrast to previous methods [5, 18] that predominantly focus on modifying the training objective, we show that our proposed PackNet architecture can by itself bolster performance and establish a new state of the art for the task of monocular depth estimation, trained in the self-supervised monocular setting.
+
+Furthermore, we show that by simply introducing an additional source of unlabeled videos, such as the publicly available CityScapes dataset (CS+K) [8], we are able to further improve monocular depth estimation performance. As indicated by Pillai et al. [38], we also observe an improvement in performance at higher image resolutions, which we attribute to the proposed network's ability to properly preserve and process spatial information end-to-end. Our best results are achieved when injecting both more unlabeled data at training time and processing higher resolution input images, achieving performance comparable to semi-supervised [28] and fully supervised [14] methods.
+
+# 5.4. Scale-Aware Depth Estimation Performance
+
+Due to their inherent scale ambiguity, self-supervised monocular methods [18, 33, 52] evaluate depth by scaling their estimates to the median ground-truth as measured via LiDAR. In Section 3.2 we propose to also recover the metric scale of the scene from a single image by imposing a loss on the magnitude of the translation for the pose network output. Table 3 shows that introducing this weak velocity supervision at training time allows the generation of scale-aware depth models with similar performance as their unscaled counterparts, with the added benefit of not requiring ground-truth depth scaling (or even velocity information) at test-time. Another benefit of scale-awareness is that we can compose metrically accurate trajectories directly from the output of the pose network. Due to space constraints, we report pose estimation results in supplementary material.
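+
+For completeness, the median ground-truth scaling used at evaluation time for scale-ambiguous models (and skipped for scale-aware ones) can be sketched as follows.
+
+```python
+import torch
+
+def median_scale(pred_depth, gt_depth, scale_aware=False):
+    """Scale predictions by the ratio of median ground-truth to median predicted depth."""
+    if scale_aware:                                       # M+v models are already metric
+        return pred_depth
+    valid = gt_depth > 0                                  # only pixels with LiDAR ground truth
+    scale = torch.median(gt_depth[valid]) / torch.median(pred_depth[valid])
+    return pred_depth * scale
+```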
+
+
+Figure 5: PackNet pointcloud reconstructions on DDAD.
+
+| Method | Abs Rel | Sq Rel | RMSE | $\mathrm{RMSE}_{log}$ | $\delta_{1.25}$ |
+| Monodepth2 (R18) | 0.381 | 8.387 | 21.277 | 0.371 | 0.587 |
+| Monodepth2‡ (R18) | 0.213 | 4.975 | 18.051 | 0.340 | 0.761 |
+| Monodepth2 (R50) | 0.324 | 7.348 | 20.538 | 0.344 | 0.615 |
+| Monodepth2‡ (R50) | 0.198 | 4.504 | 16.641 | 0.318 | 0.781 |
+| PackNet-SfM | 0.162 | 3.917 | 13.452 | 0.269 | 0.823 |
+
+Table 2: Depth Evaluation on DDAD, for $640 \times 384$ resolution and distances up to $200\mathrm{m}$ . While the ResNet family heavily relies on large-scale supervised ImageNet [9] pretraining (denoted by $\ddagger$ ), PackNet achieves significantly better results despite being trained from scratch.
+
+
+Figure 6: Depth Evaluation on DDAD binned at different intervals, calculated independently by only considering ground-truth depth pixels in that range (0-20m, 20-40m, ...).
+
+# 5.5. Network Complexity
+
+The introduction of packing and unpacking as alternatives to standard downsampling and upsampling operations increases the complexity of the network, due to the number of added parameters. To ensure that the gain in performance shown in our experiments is not only due to an increase in model capacity, we compare different variations of our PackNet architecture (obtained by modifying the number of layers and feature channels) against available ResNet architectures. These results are depicted in Fig. 8 and show that, while the ResNet family stabilizes with diminishing returns as the number of parameters increases, the PackNet family matches its performance at around 70M parameters and further improves as more complexity is added. Finally, the proposed architecture (Table 1) reaches around 128M parameters with an inference time of 60ms on a Titan V100 GPU, which can be further improved to $< 30$ ms using TensorRT [1], making it suitable for real-time applications.
+
+| Method | Supervision | Resolution | Dataset | Abs Rel | Sq Rel | RMSE | $\mathrm{RMSE}_{log}$ | $\delta < 1.25$ | $\delta < 1.25^2$ | $\delta < 1.25^3$ |
+| Original [13] |
+| SfMLearner [52] | M | 416 x 128 | CS + K | 0.198 | 1.836 | 6.565 | 0.275 | 0.718 | 0.901 | 0.960 |
+| Vid2Depth [33] | M | 416 x 128 | CS + K | 0.159 | 1.231 | 5.912 | 0.243 | 0.784 | 0.923 | 0.970 |
+| DF-Net [53] | M | 576 x 160 | CS + K | 0.146 | 1.182 | 5.215 | 0.213 | 0.818 | 0.943 | 0.978 |
+| Struct2Depth [5] | M | 416 x 128 | K | 0.141 | 1.026 | 5.291 | 0.215 | 0.816 | 0.945 | 0.979 |
+| Zhou et al.‡ [50] | M | 1248 x 384 | K | 0.121 | 0.837 | 4.945 | 0.197 | 0.853 | 0.955 | 0.982 |
+| Monodepth2‡ [18] | M | 640 x 192 | K | 0.115 | 0.903 | 4.863 | 0.193 | 0.877 | 0.959 | 0.981 |
+| Monodepth2‡ [18] | M | 1024 x 320 | K | 0.115 | 0.882 | 4.701 | 0.190 | 0.879 | 0.961 | 0.982 |
+| PackNet-SfM | M | 640 x 192 | K | 0.111 | 0.785 | 4.601 | 0.189 | 0.878 | 0.960 | 0.982 |
+| PackNet-SfM | M+v | 640 x 192 | K | 0.111 | 0.829 | 4.788 | 0.199 | 0.864 | 0.954 | 0.980 |
+| PackNet-SfM | M | 640 x 192 | CS + K | 0.108 | 0.727 | 4.426 | 0.184 | 0.885 | 0.963 | 0.983 |
+| PackNet-SfM | M+v | 640 x 192 | CS + K | 0.108 | 0.803 | 4.642 | 0.195 | 0.875 | 0.958 | 0.980 |
+| PackNet-SfM | M | 1280 x 384 | K | 0.107 | 0.802 | 4.538 | 0.186 | 0.889 | 0.962 | 0.981 |
+| PackNet-SfM | M+v | 1280 x 384 | K | 0.107 | 0.803 | 4.566 | 0.197 | 0.876 | 0.957 | 0.979 |
+| PackNet-SfM | M | 1280 x 384 | CS + K | 0.104 | 0.758 | 4.386 | 0.182 | 0.895 | 0.964 | 0.982 |
+| PackNet-SfM | M+v | 1280 x 384 | CS + K | 0.103 | 0.796 | 4.404 | 0.189 | 0.881 | 0.959 | 0.980 |
+| Improved [41] |
+| SfMLearner [52] | M | 416 x 128 | CS + K | 0.176 | 1.532 | 6.129 | 0.244 | 0.758 | 0.921 | 0.971 |
+| Vid2Depth [33] | M | 416 x 128 | CS + K | 0.134 | 0.983 | 5.501 | 0.203 | 0.827 | 0.944 | 0.981 |
+| GeoNet [47] | M | 416 x 128 | CS + K | 0.132 | 0.994 | 5.240 | 0.193 | 0.883 | 0.953 | 0.985 |
+| DDVO [43] | M | 416 x 128 | CS + K | 0.126 | 0.866 | 4.932 | 0.185 | 0.851 | 0.958 | 0.986 |
+| EPC++ [32] | M | 640 x 192 | K | 0.120 | 0.789 | 4.755 | 0.177 | 0.856 | 0.961 | 0.987 |
+| Monodepth2‡ [18] | M | 640 x 192 | K | 0.090 | 0.545 | 3.942 | 0.137 | 0.914 | 0.983 | 0.995 |
+| Kuznietsov et al.‡ [28] | D | 621 x 187 | K | 0.089 | 0.478 | 3.610 | 0.138 | 0.906 | 0.980 | 0.995 |
+| DORN‡ [14] | D | 513 x 385 | K | 0.072 | 0.307 | 2.727 | 0.120 | 0.932 | 0.984 | 0.995 |
+| PackNet-SfM | M | 640 x 192 | K | 0.078 | 0.420 | 3.485 | 0.121 | 0.931 | 0.986 | 0.996 |
+| PackNet-SfM | M | 1280 x 384 | CS + K | 0.071 | 0.359 | 3.153 | 0.109 | 0.944 | 0.990 | 0.997 |
+| PackNet-SfM | M+v | 1280 x 384 | CS + K | 0.075 | 0.384 | 3.293 | 0.114 | 0.938 | 0.984 | 0.995 |
+
+Table 3: Quantitative performance comparison of PackNet-SfM on the KITTI dataset for distances up to $80\mathrm{m}$. For Abs Rel, Sq Rel, RMSE and $\mathrm{RMSE}_{log}$ lower is better, and for $\delta < 1.25$, $\delta < 1.25^2$ and $\delta < 1.25^3$ higher is better. In the Dataset column, CS+K refers to pretraining on CityScapes (CS) and fine-tuning on KITTI (K). M refers to methods that train using monocular (M) images, and M+v refers to added velocity weak supervision (v), as shown in Section 3.2. $\ddagger$ indicates ImageNet [9] pretraining. Original uses raw depth maps from [13] for evaluation, and Improved uses annotated depth maps from [41]. At test-time, all monocular methods (M) scale estimated depths with median ground-truth LiDAR information. Velocity-scaled (M+v) and supervised (D) methods are not scaled in this way, since they are already metrically accurate.
+
+
+Figure 7: Qualitative monocular depth estimation performance comparing PackNet with previous methods, on frames from the KITTI dataset (Eigen test split). Our method is able to capture sharper details and structure (e.g., on vehicles, pedestrians, and thin poles) thanks to the learned preservation of spatial information.
+
+
+Figure 8: Performance of different depth network architectures for varying numbers of parameters on the original KITTI Eigen split [13] with resolutions of $640 \times 192$ (MR) and $1280 \times 384$ (HR). While the ResNet family plateaus at 70M parameters, the PackNet family matches its performance at the same number of parameters for MR, outperforms it clearly for HR, and improves significantly with more parameters in both cases without overfitting.
+
+The PackNet family is also consistently better at higher resolution, as it properly preserves and propagates spatial information between layers. In contrast, as reported in prior works [18], ResNet architectures do not scale well, with only minor improvements at higher resolution.
+
+# 5.6. Ablation Studies
+
+To further study the performance improvements that PackNet provides, we perform an ablative analysis on the different architectural components introduced, as depicted in Table 4. We show that the base architecture, without the proposed packing and unpacking blocks, already produces a strong baseline for the monocular depth estimation task. The introduction of packing and unpacking boosts depth estimation performance, especially as more 3D convolutional filters are added, with new state-of-the-art results being achieved by the architecture described in Table 1.
+
+As mentioned in [14, 18], ResNet architectures benefit substantially from ImageNet pretraining, since they were originally developed for classification tasks. Interestingly, we also noticed that the performance of pretrained ResNet architectures degrades over longer training periods, due to catastrophic forgetting that leads to overfitting. The proposed PackNet architecture, on the other hand, achieves state-of-the-art results from randomly initialized weights, and can be further improved by self-supervised pretraining on other datasets, thus properly leveraging the large-scale availability of unlabeled information thanks to its structure.
+
+# 5.7. Generalization Capability
+
+We also investigate the generalization performance of PackNet, as evidence that it does not simply memorize training data but learns transferable discriminative features. To assess this, we evaluate models trained on a combination of CityScapes and KITTI $(\mathrm{CS+K})$ on the recent NuScenes dataset [4], without any fine-tuning. Results in Table 5 show PackNet indeed generalizes better across a large spectrum of
+
+| Depth Network | Abs Rel | Sq Rel | RMSE | $\mathrm{RMSE}_{log}$ | $\delta_{1.25}$ |
+| ResNet18 | 0.133 | 1.023 | 5.123 | 0.211 | 0.845 |
+| ResNet18‡ | 0.120 | 0.896 | 4.869 | 0.198 | 0.868 |
+| ResNet50 | 0.127 | 0.977 | 5.023 | 0.205 | 0.856 |
+| ResNet50‡ | 0.117 | 0.900 | 4.826 | 0.196 | 0.873 |
+| PackNet (w/o pack/unpack) | 0.122 | 0.880 | 4.816 | 0.198 | 0.864 |
+| PackNet (D=0) | 0.121 | 0.922 | 4.831 | 0.195 | 0.869 |
+| PackNet (D=2) | 0.118 | 0.802 | 4.656 | 0.194 | 0.868 |
+| PackNet (D=4) | 0.113 | 0.818 | 4.621 | 0.190 | 0.875 |
+| PackNet (D=8) | 0.111 | 0.785 | 4.601 | 0.189 | 0.878 |
+
+Table 4: Ablation study on the PackNet architecture, on the standard KITTI benchmark for $640 \times 192$ resolution. ResNetXX indicates that specific architecture [20] as encoder, with and without ImageNet [9] pretraining (denoted with $\ddagger$ ). We also show results with the proposed PackNet architecture, first without packing and unpacking (replaced respectively with convolutional striding and bilinear upsampling) and then with increasing numbers of 3D convolutional filters ( $D = 0$ indicates no 3D convolutions and the corresponding reshape operations).
+
+| Method | Abs Rel | Sq Rel | RMSE | $\mathrm{RMSE}_{log}$ | $\delta_{1.25}$ |
+| ResNet18 | 0.218 | 2.053 | 8.154 | 0.355 | 0.650 |
+| ResNet18‡ | 0.212 | 1.918 | 7.958 | 0.323 | 0.674 |
+| ResNet50 | 0.216 | 2.165 | 8.477 | 0.371 | 0.637 |
+| ResNet50‡ | 0.210 | 2.017 | 8.111 | 0.328 | 0.697 |
+| PackNet | 0.187 | 1.852 | 7.636 | 0.289 | 0.742 |
+
+Table 5: Generalization capability of different depth networks, trained on both KITTI and CityScapes and evaluated on NuScenes [4], for $640 \times 192$ resolution and distances up to $80\mathrm{m}$ . $\ddagger$ denotes ImageNet [9] pretraining.
+
+vehicles and countries (Germany for $\mathrm{CS + K}$ , USA + Singapore for NuScenes), outperforming standard architectures in all considered metrics without the need for large-scale supervised pretraining on ImageNet.
+
+# 6. Conclusion
+
+We propose a new convolutional network architecture for self-supervised monocular depth estimation: PackNet. It leverages novel, symmetrical, detail-preserving packing and unpacking blocks that jointly learn to compress and decompress high-resolution visual information for fine-grained predictions. Although purely trained on unlabeled monocular videos, our approach outperforms other existing self- and semi-supervised methods and is even competitive with fully-supervised methods, while being able to run in real time. It also generalizes better to different datasets and unseen environments without the need for ImageNet pretraining, especially when considering longer depth ranges, as assessed up to $200\mathrm{m}$ on our new DDAD dataset. Additionally, by leveraging only weak velocity information during training, we are able to make our model scale-aware, i.e., producing metrically accurate depth maps from a single image.
+
+# References
+
+[1] TensorRT Python library. https://developer.nvidia.com/tensorrt. Accessed: 2019-11-09. 6
+[2] Aayush Bansal, Xinlei Chen, Bryan Russell, Abhinav Gupta, and Deva Ramanan. Pixelnet: Representation of the pixels, by the pixels, and for the pixels. arXiv preprint arXiv:1702.06506, 2017. 2
+[3] Jens Behrmann, Will Grathwohl, Ricky TQ Chen, David Duvenaud, and Jörn-Henrik Jacobsen. Invertible residual networks. arXiv preprint arXiv:1811.00995, 2018. 4
+[4] Holger Caesar, Varun Bankiti, Alex H. Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. CoRR, 2019. 2, 5, 8
+[5] Vincent Casser, Soeren Pirk, Reza Mahjourian, and Anelia Angelova. Depth prediction without the sensors: Leveraging structure for unsupervised learning from monocular videos. In AAAI, 2019. 1, 2, 3, 6, 7
+[6] Yunjin Chen and Thomas Pock. Trainable nonlinear reaction diffusion: A flexible framework for fast and effective image restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39:1256-1272, 2017. 4
+[7] Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). In ICLR, 2016. 4
+[8] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In IEEE conference on computer vision and pattern recognition, pages 3213-3223, 2016. 5, 6
+[9] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2009. 6, 7, 8
+[10] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. In *ICLR*, 2017. 4
+[11] Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell., 38(2):295-307, Feb. 2016. 4
+[12] Alexey Dosovitskiy, Philipp Fischer, Eddy Ilg, Philip Hausser, Caner Hazirbas, Vladimir Golkov, Patrick Van Der Smagt, Daniel Cremers, and Thomas Brox. Flownet: Learning optical flow with convolutional networks. In Proceedings of the IEEE international conference on computer vision, pages 2758-2766, 2015. 2
+[13] David Eigen, Christian Puhrsch, and Rob Fergus. Depth map prediction from a single image using a multi-scale deep network. In Advances in neural information processing systems, pages 2366-2374, 2014. 2, 5, 6, 7, 8
+[14] Huan Fu, Mingming Gong, Chaohui Wang, Kayhan Batmanghelich, and Dacheng Tao. Deep ordinal regression network for monocular depth estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2002-2011, 2018. 6, 7, 8
+
+[15] Ravi Garg, Vijay Kumar BG, Gustavo Carneiro, and Ian Reid. Unsupervised cnn for single view depth estimation: Geometry to the rescue. In European Conference on Computer Vision, pages 740-756. Springer, 2016. 2, 3
+[16] Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision meets robotics: The kitti dataset. The International Journal of Robotics Research, 32(11):1231-1237, 2013. 2, 5
+[17] Clément Godard, Oisin Mac Aodha, and Gabriel J. Brostow. Unsupervised monocular depth estimation with left-right consistency. In CVPR, volume 2, page 7, 2017. 2, 3
+[18] Clément Godard, Oisin Mac Aodha, Michael Firman, and Gabriel J. Brostow. Digging into self-supervised monocular depth prediction. In ICCV, 2019. 3, 6, 7, 8
+[19] Benjamin Graham. Fractional max-pooling. arXiv:1412.6071, 2015. 2, 4
+[20] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 1, 8
+[21] Jörn-Henrik Jacobsen, Arnold W.M. Smeulders, and Edouard Oyallon. i-RevNet: Deep invertible networks. In International Conference on Learning Representations, 2018. 4
+[22] Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In Advances in neural information processing systems, pages 2017-2025, 2015. 2
+[23] Alex Kendall, Yarin Gal, and Roberto Cipolla. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7482-7491, 2018. 1
+[24] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 5
+[25] Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, 2018. 4
+[26] Maria Klodt and Andrea Vedaldi. Supervising the new with the old: Learning sfm from sfm. In European Conference on Computer Vision, pages 713-728. Springer, 2018. 2
+[27] Alexander Kolesnikov, Xiaohua Zhai, and Lucas Beyer. Revisiting self-supervised visual representation learning. arXiv preprint arXiv:1901.09005, 2019. 1
+[28] Yevhen Kuznietsov, Jörg Stuckler, and Bastian Leibe. Semi-supervised deep learning for monocular depth map prediction. In IEEE Conference on Computer Vision and Pattern Recognition, pages 6647-6655, 2017. 6, 7
+[29] Chen-Yu Lee, Patrick Gallagher, and Zhuowen Tu. Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2016. 2
+[30] Kuan-Hui Lee, German Ros, Jie Li, and Adrien Gaidon. Spigan: Privileged adversarial learning from simulation. In ICLR, 2019. 1
+[31] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3431-3440, 2015. 2
+[32] C. Luo, Z. Yang, P. Wang, Y. Wang, W. Xu, R. Nevatia, and A. Yuille. Every pixel counts++: Joint learning of geometry and motion with 3d holistic understanding. arXiv preprint arXiv:1810.06125, 2018. 7
+[33] Reza Mahjourian, Martin Wicke, and Anelia Angelova. Unsupervised learning of depth and ego-motion from monocular video using 3d geometric constraints. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5667-5675, 2018. 1, 2, 3, 6, 7
+[34] Fabian Manhardt, Wadim Kehl, and Adrien Gaidon. Roi10d: Monocular lifting of 2d detection to 6d pose and metric shape. IEEE Conference on Computer Vision and Pattern Recognition, 2018. 1
+[35] Nikolaus Mayer, Eddy Ilg, Philip Hausser, Philipp Fischer, Daniel Cremers, Alexey Dosovitskiy, and Thomas Brox. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4040-4048, 2016. 2, 5
+[36] Jeff Michels, Ashutosh Saxena, and Andrew Y Ng. High speed obstacle avoidance using monocular vision and reinforcement learning. In 22nd international conference on Machine learning, pages 593-600. ACM, 2005. 1
+[37] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. In NIPS-W, 2017. 5
+[38] Sudeep Pillai, Rares Ambrus, and Adrien Gaidon. Superdepth: Self-supervised, super-resolved monocular depth estimation. In Robotics and Automation (ICRA), 2019 IEEE International Conference on, 2018. 2, 6
+[39] Wenzhe Shi, Jose Caballero, Ferenc Huszár, Johannes Totz, Andrew P Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1874-1883, 2016. 2, 4
+[40] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929-1958, 2014. 4
+[41] J. Uhrig, N. Schneider, L. Schneider, U. Franke, T. Brox, and A. Geiger. Sparsity invariant cnns. 3DV, 2017. 5, 6, 7
+
+[42] Benjamin Ummenhofer, Huizhong Zhou, Jonas Uhrig, Nikolaus Mayer, Eddy Ilg, Alexey Dosovitskiy, and Thomas Brox. Demon: Depth and motion network for learning monocular stereo. In IEEE Conference on computer vision and pattern recognition (CVPR), volume 5, page 6, 2017. 2
+[43] Chaoyang Wang, José Miguel Buenaposada, Rui Zhu, and Simon Lucey. Learning depth from monocular videos using direct methods. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2022–2030, 2018. 2, 7
+[44] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600-612, 2004. 2, 3
+[45] Yuxin Wu and Kaiming He. Group normalization. In Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part XIII, pages 3-19, 2018. 4
+[46] Nan Yang, Rui Wang, Jörg Stuckler, and Daniel Cremers. Deep virtual stereo odometry: Leveraging deep depth prediction for monocular direct sparse odometry. arXiv preprint arXiv:1807.02570, 2018. 2
+[47] Zhichao Yin and Jianping Shi. Geonet: Unsupervised learning of dense depth, optical flow and camera pose. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, 2018. 1, 2, 7
+[48] Fisher Yu, Vladlen Koltun, and Thomas Funkhouser. Dilated residual networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017. 2
+[49] Hao Zhang and Jianwei Ma. Hartley spectral pooling for deep learning. Computing Research Repository, abs/1810.04028, 2018. 4
+[50] Junsheng Zhou, Yuwang Wang, Naiyan Wang, and Wenjun Zeng. Unsupervised high-resolution depth learning from videos with dual networks. In International Conference on Computer Vision (ICCV), 2019. 7
+[51] Lipu Zhou, Jiamin Ye, Montiel Abello, Shengze Wang, and Michael Kaess. Unsupervised learning of monocular depth estimation with bundle adjustment, super-resolution and clip loss. arXiv preprint arXiv:1812.03368, 2018. 2
+[52] Tinghui Zhou, Matthew Brown, Noah Snavely, and David G Lowe. Unsupervised learning of depth and ego-motion from video. In CVPR, volume 2, page 7, 2017. 2, 3, 5, 6, 7
+[53] Yuliang Zou, Zelun Luo, and Jia-Bin Huang. Df-net: Unsupervised joint learning of depth and flow using cross-task consistency. In ECCV, 2018. 1, 2, 7
\ No newline at end of file
diff --git a/3dpackingforselfsupervisedmonoculardepthestimation/images.zip b/3dpackingforselfsupervisedmonoculardepthestimation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..39042bef829f4bad8b1b90eb3cd0dfc2926a9b4a
--- /dev/null
+++ b/3dpackingforselfsupervisedmonoculardepthestimation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f4227fa3267f5ebef9454d4e57a0e108a780069482d4395dd38201a016e14c41
+size 737295
diff --git a/3dpackingforselfsupervisedmonoculardepthestimation/layout.json b/3dpackingforselfsupervisedmonoculardepthestimation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d542abd24ac74350824f12010b56a229359c5a69
--- /dev/null
+++ b/3dpackingforselfsupervisedmonoculardepthestimation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:559a527ec74105f83c7afb5316374d80279100b3e82fdd51374f007859ab4070
+size 371359
diff --git a/3dpartguidedimageeditingforfinegrainedobjectunderstanding/48654681-1e22-4cee-a2f1-9f712cc0b228_content_list.json b/3dpartguidedimageeditingforfinegrainedobjectunderstanding/48654681-1e22-4cee-a2f1-9f712cc0b228_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..ee87cae63379b43ca9233c9fe809ab347546609b
--- /dev/null
+++ b/3dpartguidedimageeditingforfinegrainedobjectunderstanding/48654681-1e22-4cee-a2f1-9f712cc0b228_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dabaed30c3026d4c8c2a7390c1d58dd6b033b42dc00afe13b44c728fd8dc7591
+size 78995
diff --git a/3dpartguidedimageeditingforfinegrainedobjectunderstanding/48654681-1e22-4cee-a2f1-9f712cc0b228_model.json b/3dpartguidedimageeditingforfinegrainedobjectunderstanding/48654681-1e22-4cee-a2f1-9f712cc0b228_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..d912034be279c07caac699405255e445192c4385
--- /dev/null
+++ b/3dpartguidedimageeditingforfinegrainedobjectunderstanding/48654681-1e22-4cee-a2f1-9f712cc0b228_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:80747bd61b0d5e266c824f1e6c083c59c628fd601409cf5cdd55d8852a94878b
+size 95578
diff --git a/3dpartguidedimageeditingforfinegrainedobjectunderstanding/48654681-1e22-4cee-a2f1-9f712cc0b228_origin.pdf b/3dpartguidedimageeditingforfinegrainedobjectunderstanding/48654681-1e22-4cee-a2f1-9f712cc0b228_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..676313b3813b291f65a5b9c54647566cd6e8fe83
--- /dev/null
+++ b/3dpartguidedimageeditingforfinegrainedobjectunderstanding/48654681-1e22-4cee-a2f1-9f712cc0b228_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:246b55adc7fc93c92129b38e0a0861ac5ed5427b5ff063b1c5b369b481f7339d
+size 7346149
diff --git a/3dpartguidedimageeditingforfinegrainedobjectunderstanding/full.md b/3dpartguidedimageeditingforfinegrainedobjectunderstanding/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..13f42993314ac9f863f24a2efad463ea944d08c8
--- /dev/null
+++ b/3dpartguidedimageeditingforfinegrainedobjectunderstanding/full.md
@@ -0,0 +1,322 @@
+# 3D Part Guided Image Editing for Fine-grained Object Understanding
+
+Zongdai Liu $^{\dagger 1}$ , Feixiang Lu $^{\dagger 2,6}$ , Peng Wang $^{\S 2,5}$ , Hui Miao $^{1}$ , Liangjun Zhang $^{2,6}$ , Ruigang Yang $^{2,3,6}$ and Bin Zhou $^{*1,4}$
+
+$^{1}$ State Key Laboratory of Virtual Reality Technology and Systems, Beihang University $^{2}$ Robotics and Autonomous Driving Laboratory, Baidu Research $^{3}$ University of Kentucky $^{4}$ Peng Cheng Laboratory, Shenzhen, China $^{5}$ ByteDance Research $^{6}$ National Engineering Laboratory of Deep Learning Technology and Application, China
+
+# Abstract
+
+Holistically understanding an object with its 3D movable parts is essential for the visual models of a robot to interact with the world. For example, only by understanding the many possible part dynamics of other vehicles (e.g., a door or trunk opening, a taillight blinking before a lane change) can a self-driving vehicle successfully deal with emergency cases. However, existing visual models rarely tackle these situations and instead focus on bounding box detection. In this paper, we fill this important missing piece in autonomous driving by solving two critical issues. First, to deal with data scarcity, we propose an effective training data generation process by fitting a 3D car model with dynamic parts to cars in real images. This allows us to directly edit the real images using the aligned 3D parts, yielding effective training data for learning robust deep neural networks (DNNs). Second, to benchmark the quality of 3D part understanding, we collected a large dataset of real driving scenes with cars in uncommon states (CUS), i.e., with a door or trunk opened, etc., which demonstrates that our network trained with edited images largely outperforms other baselines in terms of 2D detection and instance segmentation accuracy.
+
+# 1. Introduction
+
+An object, e.g. a car or a person, is commonly composed of articulated and movable 3D parts [47, 24]. Understanding an object with its 3D parts and their future states within images or videos is essential for the vision/perception system of a robot to decide its actions to interact with the world.
+
+Figure 1: Fine-grained parsing of cars in uncommon states on various datasets. The results include 2D detection (red bounding box), instance segmentation (orange mask), dynamic part segmentation (blue mask), and state description. Note that common-state cars are shown in green.
+
+For example, in the popular autonomous driving (AD) scenario, when a car parked on the road has its door open, it is very likely that someone is about to get out. In response, the autonomous vehicle should immediately slow down, turn the steering wheel, and change lanes. Although such cases are not common, they can be deadly without this kind of understanding, and in real driving scenarios there are many such cases, as illustrated in Fig. 1.
+
+However, the dominant visual perception systems built on deep neural networks, though they have achieved great success in 2D/3D detection [34, 12], instance segmentation [17] and pose estimation [4, 22, 39], are based on a coarse understanding of objects with bounding boxes or masks. In our opinion, this is not sufficient for acting upon the 3D part dynamics of vehicles on the street.
+
+This paper is a step forward in filling this missing piece, especially in the AD scenario, by providing a model that enables detailed 3D part parsing of an object. To perform such a task, we first look through several popular AD datasets, such as KITTI [14], CityScapes [6] and ApolloScape [20, 46]. As shown in Fig. 1, we find that, first, cases where a car has a part moved, as discussed above, do exist in real driving scenarios; second, such cases are too scarce, e.g. only tens of cars, to train an effective model for 3D part understanding when they occur.
+
+To generate enough data for training a model to understand 3D parts, the common strategy is to manually crowd-source a large amount of real images [15], which is labor-intensive, while other solutions, such as building datasets from simulated environments and computer graphics [1, 45, 31], suffer from a strong domain gap between the rendered car and scene appearance and realistic scenarios. To balance the two and automatically generate training data for current deep learning models [17], we propose a 3D part guided image editing strategy, as illustrated in Fig. 2, which first fits a 3D car model with dynamic parts to the cars in an image, then re-renders each car with re-configured parts and realistic texture. Specifically, we adopt models from the ApolloCar3D [39] dataset, where each car instance is fitted with a 3D model, and we define ten commonly moving dynamic parts, i.e., bonnet, trunk, four doors, two headlights and two taillights, for each type of 3D car model. More specifically, for each part, we label its motion axis, which constrains the range of possible movement. By sampling all the possible motions of a 3D car instance, our strategy automatically edits the 2D car instance inside images, yielding a large number of training samples.
+
+Based on the generated data, we design and train a multi-task network performing object understanding at fine granularity, including 2D detection, instance segmentation, dynamic part segmentation, and 3D car state description. Our deep model is significantly more robust in understanding cars in AD than models trained without our generated dataset.
+
+Finally, to benchmark our model and strategies, we construct, to the best of our knowledge, the first dataset with a large number of annotated uncommon car states in AD, i.e., with a door or trunk open, etc., containing 1441 labelled street-view images, 1850 car instances, and 12 defined states. We evaluate part understanding quality extensively on this dataset and show that our network and training strategies yield large improvements (over $8\%$ relative) in discovering and understanding these uncommon cases.
+
+In summary, our contributions are in three aspects:
+
+- We present a 3D part guided image editing pipeline for automatic training data generation, which helps to learn fine-grained object understanding models in AD.
+
+- We design a multi-task network architecture which produces output of both instance level and part level object understanding.
+
+- To benchmark our data generation strategies, and network architectures, we build a large dataset which contains 1441 real images with fine-grained annotation of objects in many uncommon states. It demonstrates the effectiveness of our approaches.
+
+# 2. Related Work
+
+Fine-grained object understanding is one of the central problems for autonomous driving. Our work is most closely related to two areas: datasets for autonomous driving and fine-grained vehicle parsing. We review the related works below.
+
+Datasets for Autonomous Driving. Focusing on perception in autonomous driving, several datasets have been constructed and released. The first dataset is CamVid [2], which annotates 701 images with 32 semantic classes. The later released KITTI benchmark [14] contains multiple vision tasks (e.g., optical flow, 2D/3D detection). However, it mainly annotates 2D/3D bounding boxes for each car, resulting in 7481 training and 7518 test images. Recently, the CityScapes dataset [6] labelled vehicles with instance-level segmentation, releasing 2975 training, 500 validation, and 1525 test images. ApolloScape [20] is a large-scale AD dataset for various 2D and 3D tasks. It provides pixel-level annotations for 2D scene parsing, with about 140K images. ApolloCar3D [39] is a 3D instance car dataset built from real images in driving scenes. For each car instance in a 2D image, the 3D model and corresponding 6-DoF pose are manually labelled. Moreover, there exist other real street-view self-driving datasets (e.g., Toronto [48], Mapillary [30], and BDD100K [51]) and synthetic datasets (e.g., SYNTHIA [37], P.F.B. [35], and Virtual KITTI [11]). However, all of these datasets only annotate common cars with 2D bounding boxes or semantic/instance segmentation, while cars in uncommon states (e.g., an opened door or trunk, or flashing headlights or taillights) are ignored. In an AD scenario, this information can predict the further action of a vehicle, which is very important for safety.
+
+Data Generation for Deep Networks. Learning effective deep networks (e.g., AlexNet [21], VGG [38], ResNet [18], and FPN [25]) depends on large amounts of training data for each individual task. However, real data collection and annotation [8, 26, 20] are laborious. To avoid the difficulties of data labelling, synthetic data is widely used for training deep networks. Current image synthesis techniques can be roughly divided into two classes: 3D model rendering and 2D image 'cut-paste' [10]. Recently, several large-scale 3D model datasets have been released, such as ShapeNet [5], ModelNet [49] and ScanNet [7].
+
+
+Figure 2: Overview of our data augmentation pipeline.
+
+Researchers directly render 3D models to obtain 2D images for training. However, rendering is time-consuming and requires pre-building complex realistic 3D scenes. Therefore, some works cut objects from images and then paste them onto other backgrounds to synthesize photo-realistic training data. However, the diversity of 'cut-paste' results is limited. Furthermore, it cannot handle the problem of occlusion.
+
+Nevertheless, many computer vision tasks benefit from synthetic data, such as optical flow [3, 9], scene flow [28], stereo [32, 52], semantic segmentation [36, 37], 3D keypoint extraction [42], viewpoint [40], object pose [29], 3D reconstruction [16], and object detection [1, 13, 31, 45]. The key problem in these works is to bridge the appearance domain gap to realistic images. Domain randomization [43] is widely used for vehicle detection [31, 45] and achieves the best performance. Alhaija et al. [1] take advantage of an AR approach to overlay vehicle rendering results onto real street-view images, yielding augmented photo-realistic training data. Hinterstoisser et al. [19] show that freezing a pre-trained feature extractor makes it possible to train a good object detector with synthetic data only.
+
+Fine-grained Parsing and Understanding. For AD, as discussed in Sec. 1, it is important to detect, segment, and parse moving objects into part-level semantics. Here, state-of-the-art (SOTA) methods often rely on detect-then-understand pipelines. Specifically, an object is first localized using detectors such as one-stage methods (e.g., SSD513 [12], YOLOv3 [33]) or two-stage methods (e.g., Faster-RCNN [34], Mask-RCNN [17]); fine-grained recognition is then performed on object parts, such as part keypoint regression [39] and part segmentation [47, 50]. Most recently, Lu et al. [27] extend part-level pixel-wise annotation to the part state inference problem, such that visual models can be more instructive. Our work follows this trend, while extending previous works with object part understanding in 3D to handle uncommon cases in the AD scenario.
+
+# 3. 3D Part Guided Image Editing
+
+In this section, we introduce how to leverage the 3D parts to automatically edit the source 2D images. To achieve this goal, four essential components are required: 1) 3D part segmentation and motion axis annotation; 2) 3D transformation and 2D projection; 3) hole filling and image filtering; 4) invisible region generation.
+
+Recently, Song et al. [39] published a 2D-3D alignment dataset, ApolloCar3D, which annotates the 3D model and 6-DoF pose for each 2D car instance. Based on the released 3D CAD models of cars, we manually segment out the movable parts (i.e., bonnet, trunk and four doors) and the semantic parts (i.e., two headlights and two taillights), respectively. For semantic parts, we directly project them to obtain the corresponding 2D regions, which are further edited to show yellow or red flashing effects (the third row in Fig. 3). For movable parts, we first annotate their motion axes, then transform the 3D parts to guide 2D image editing. Note that the 3D models provided by ApolloCar3D are low-quality, so it is difficult to obtain an appropriate texture map from the source image to perform photo-realistic rendering.
+
+Instead, we render the 3D geometry of the parts to obtain the corresponding depth map $D$, according to the global rotation $\mathbf{R_g}$, translation $\mathbf{t_g}$, and the camera intrinsic matrix $\mathbf{K}$. For each 2D pixel $\mathbf{u} = (u,v)^{\top}$ with depth value $D(\mathbf{u})$, we back-project it to the 3D point $\mathbf{P} = (x,y,z)^{\top}$ through
+
+$$
+\mathbf{P} = \mathbf{R}_{\mathbf{g}}^{-1} \cdot \left(D(\mathbf{u}) \cdot \mathbf{K}^{-1} \cdot \dot{\mathbf{u}} - \mathbf{t}_{\mathbf{g}}\right). \tag{1}
+$$
+
+Here, $\dot{\mathbf{u}}$ is a homogeneous vector: $\dot{\mathbf{u}} = (\mathbf{u}^{\top}|1)^{\top}$ .
+
+Assume the part is locally transformed by a 3D rotation $\mathbf{R_o}$ about its motion axis, with the axis located at translation $\mathbf{t_o}$ in the global coordinate frame. We compute the pixel's new position $\mathbf{u}'$ in the image domain, defined as:
+
+$$
+\mathbf{u}^{\prime} = \left\lfloor \pi\left(\mathbf{K} \cdot \left(\mathbf{R}_{\mathbf{g}}\left(\mathbf{R}_{\mathbf{o}}\left(\mathbf{P} - \mathbf{t}_{\mathbf{o}}\right) + \mathbf{t}_{\mathbf{o}}\right) + \mathbf{t}_{\mathbf{g}}\right)\right)\right\rfloor. \tag{2}
+$$
+
+
+Figure 3: The generated cars in uncommon states by our approach. The editing results of movable parts (i.e., trunk, bonnet, and four doors) are shown in the 1st row and the 2nd row. And the editing results of semantic parts (i.e., two headlights and two taillights) are shown in the 3rd row.
+
+Figure 4: The architecture of our two-backbone network, which can output 2D detection, instance-level segmentation, dynamic part segmentation, and state description.
+
+Here, the function $\mathbf{u} = \pi(\mathbf{P})$ performs perspective projection of $\mathbf{P} = (x,y,z)^{\top} \in \mathbb{R}^{3}$, including dehomogenisation, to obtain $\mathbf{u} = (x/z, y/z)^{\top} \in \mathbb{R}^{2}$.
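+
+Equations (1) and (2) amount to a back-projection, a rigid part motion, and a re-projection. The following NumPy sketch (with our own function and argument names) illustrates the computation for a set of part pixels.
+
+```python
+import numpy as np
+
+def move_part_pixels(pixels, depths, K, R_g, t_g, R_o, t_o):
+    """Back-project part pixels, rotate the part about its motion axis, re-project.
+    pixels: (N, 2) pixel coordinates, depths: (N,), R_*: (3, 3), t_*: (3,)."""
+    u_dot = np.hstack([pixels, np.ones((pixels.shape[0], 1))])   # homogeneous pixels (N, 3)
+    rays = depths[:, None] * (u_dot @ np.linalg.inv(K).T)        # D(u) * K^-1 * u_dot
+    P = (rays - t_g) @ np.linalg.inv(R_g).T                      # Eq. (1): world points
+    Q = (P - t_o) @ R_o.T + t_o                                  # local motion about the axis
+    X_cam = Q @ R_g.T + t_g                                      # back to the camera frame
+    proj = X_cam @ K.T
+    return np.floor(proj[:, :2] / proj[:, 2:3]).astype(int)      # Eq. (2): new pixel positions
+```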
+
+Note that the transformed pixels are sparse and noisy in the part region (Fig. 2 (e)). Here, we call a pixel without a value a 'hole'. In order to fill these holes, we apply a linear blending algorithm [41] to obtain the RGB values. After interpolating the non-valued pixels, we apply a bilateral filter [44] to the edited images. The smoothed results are shown in Fig. 2 (f) and Fig. 3.
+
+Invisible region generation. For the case of an opening door, we can generate visually compelling results if the car faces the camera. When the car faces the opposite direction, opening a door introduces invisible regions in the original image. These invisible regions can be roughly divided into two classes: one is the reverse side of the part, the other is the vehicle interior (e.g., seat, steering wheel and engine). Empirically, the interior regions are usually dark due to inadequate illumination, so we directly fill them with a gray color. We also tried random colors and patches from real images, but, according to the experimental results, we found no obvious differences among them.
+
+Compared with the interior regions, coloring the reverse side of a part is more complex. As shown in Fig. 2, it is not appropriate to directly fill it with a pure color. Thus, we adopt a photo-realistic rendering pipeline to generate high-fidelity results for the reverse side. Considering the low-quality models provided by ApolloCar3D, we first construct a small expert-designed 3D model database for movable parts. Each part is designed by a professional artist with the commercial software 3dsMax. The part materials are manually labelled and BRDF parameters are predefined. As shown in Fig. 2 (h), we use the environment map computed with [23] from ApolloCar3D to perform photo-realistic rendering. Our editing results are shown in Fig. 3.
+
+
+
+# 4. Network Architectures
+
+We propose a novel multi-task deep neural network architecture, shown in Fig. 4, for fine-grained object understanding. In this section, we discuss the modules of our network and the training settings in detail.
+
+# 4.1. Two Backbones
+
+We aim to detect cars in uncommon states in real street-view images while training only on the edited images. To achieve transfer from synthetic data to real data, our network has two ResNet50-FPN [25] backbones. We pre-train the main backbone on both the ApolloCar3D [39] and CityScapes [6] benchmarks using Mask-RCNN to extract car body features, guided by a car detection task. Simultaneously, we pre-train the auxiliary backbone on the COCO dataset to learn general features of the edited regions (e.g., the rendered parts), guided by a general detection task. Finally, we fix the parameters of these two backbones and train the network on the edited data. Indeed, experimental results in Sec. 6.4 demonstrate that we obtain the best performance by freezing the two backbones.
+
+# 4.2. Dynamic Part Segmentation
+
+We adopt Mask-RCNN [17] to implement dynamic part segmentation. In Mask-RCNN, the mask branch outputs a $Km^2$-dimensional binary mask for each RoI-aligned feature map, where $K$ is the number of classes and $m$ is the resolution. We add the dynamic part segmentation as a new channel, so the output becomes a $(K + 1)m^2$-dimensional binary mask. Specifically, we feed the $14 \times 14$ RoI-aligned feature map to four sequential 256-d $3 \times 3$ convolution layers. A $2 \times 2$ deconvolution layer is used to up-sample the output to $28 \times 28$. Finally, we define $L_{\text{part}}$ as the average per-pixel sigmoid cross entropy loss.
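+
+A sketch of this modified mask branch is shown below; the layer names are ours and the head is illustrative rather than the released implementation.
+
+```python
+import torch.nn as nn
+import torch.nn.functional as F
+
+class PartMaskHead(nn.Module):
+    """Mask branch with an extra dynamic-part channel, i.e., (K + 1) m^2 outputs."""
+    def __init__(self, in_channels=256, num_classes=1):
+        super().__init__()
+        convs = []
+        for _ in range(4):                                # four 256-d 3x3 conv layers
+            convs += [nn.Conv2d(in_channels, 256, 3, padding=1), nn.ReLU(inplace=True)]
+            in_channels = 256
+        self.convs = nn.Sequential(*convs)
+        self.upsample = nn.ConvTranspose2d(256, 256, 2, stride=2)   # 14x14 -> 28x28
+        self.predict = nn.Conv2d(256, num_classes + 1, 1)           # K classes + dynamic parts
+
+    def forward(self, roi_features):                      # (N, 256, 14, 14)
+        x = F.relu(self.upsample(self.convs(roi_features)))
+        return self.predict(x)                            # (N, K + 1, 28, 28) logits
+
+def part_loss(logits, targets):
+    """L_part: average per-pixel sigmoid cross entropy."""
+    return F.binary_cross_entropy_with_logits(logits, targets)
+```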
+
+| Datasets | Bonnet lifted | Trunk lifted | Door fl-o. | Door fr-o. | Door bl-o. | Door br-o. | Headlight l-tu. | Headlight r-tu. | Taillight l-tu. | Taillight r-tu. | stop | alarm | Total |
+| KITTI | 1 | 9 | 1 | 0 | 0 | 5 | 1 | 0 | 2 | 1 | 8 | 0 | 28 |
+| CityScapes | 0 | 0 | 14 | 5 | 8 | 4 | 3 | 2 | 4 | 0 | 15 | 0 | 55 |
+| ApolloScape | 0 | 23 | 29 | 0 | 59 | 157 | 15 | 18 | 23 | 27 | 33 | 16 | 400 |
+| ApolloCar3D | 0 | 13 | 19 | 1 | 0 | 11 | 3 | 5 | 12 | 9 | 21 | 0 | 94 |
+| Capt. Images | 15 | 405 | 232 | 66 | 79 | 346 | 19 | 17 | 25 | 18 | 44 | 7 | 1273 |
+| CUS Dataset | 16 | 450 | 295 | 72 | 146 | 523 | 41 | 42 | 66 | 55 | 121 | 23 | 1850 |
+
+Table 1: The constructed CUS dataset, which annotates 1850 car instances in uncommon states from 1441 street-view images. 'fl-o. (br-o.)' indicates the opened front-left (back-right) part, and 'l-tu. (r-tu.)' indicates turning left (right).
+
+# 4.3. State Description
+
+We use a binary variable to represent the existence of a particular part state (i.e., 1 if the state is present and 0 otherwise). We then define the 'part state vector' as the concatenation of all binary variables. Our method regresses the part state vector through sequential convolution layers and a fully connected layer in the mask branch. Similarly, we define $L_{state}$ as the average sigmoid cross entropy loss.
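+
+The state branch can be sketched as follows; the number and stride of its convolution layers are our own assumptions, since the text does not specify them.
+
+```python
+import torch.nn as nn
+import torch.nn.functional as F
+
+class StateHead(nn.Module):
+    """Regress the binary part state vector (12 uncommon states in our setting)."""
+    def __init__(self, in_channels=256, num_states=12):
+        super().__init__()
+        self.convs = nn.Sequential(
+            nn.Conv2d(in_channels, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
+            nn.Conv2d(256, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
+        )
+        self.fc = nn.Linear(256 * 4 * 4, num_states)       # assumes 14x14 RoI features
+
+    def forward(self, roi_features):                       # (N, 256, 14, 14)
+        return self.fc(self.convs(roi_features).flatten(1))  # (N, num_states) logits
+
+def state_loss(logits, targets):
+    """L_state: average sigmoid cross entropy over the state vector."""
+    return F.binary_cross_entropy_with_logits(logits, targets)
+```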
+
+# 4.4. Training Details
+
+At first, we pre-train a Mask-RCNN with a ResNet50-FPN backbone on both the ApolloCar3D [39] and CityScapes [6] benchmarks through a car instance segmentation task. Then, we initialize the main backbone by copying the parameters of the pre-trained network. Simultaneously, we pre-train the auxiliary backbone on the COCO dataset using the same network architecture. Finally, we fix the parameters of these two backbones and train the network on the edited data. The multi-task loss is defined as:
+
+$$
+L = L_{\text{class}}^{p} + L_{\text{reg}}^{p} + L_{\text{class}}^{r} + L_{\text{box}}^{r} + L_{\text{mask}}^{r} + L_{\text{state}}^{r} + L_{\text{part}}^{r}, \tag{3}
+$$
+
+where $(\cdot)^p$ and $(\cdot)^r$ indicate RPN and RCNN, respectively. The subscripts state and part denote the losses of the state vector and the part mask, respectively. We minimize our loss function using SGD with a weight decay of 0.0001 and a momentum of 0.9. The learning rate is initially set to 0.002 and reduced by a factor of 0.1 every 5 epochs.
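+
+A sketch of this optimization setup (PyTorch-style; the model, data loader, and loss function are placeholders passed in by the caller):
+
+```python
+import torch
+
+def train(model, train_loader, num_epochs, compute_multitask_loss):
+    """Training loop matching the settings above: SGD with momentum 0.9 and
+    weight decay 1e-4, LR 0.002 multiplied by 0.1 every 5 epochs."""
+    optimizer = torch.optim.SGD(model.parameters(), lr=0.002,
+                                momentum=0.9, weight_decay=0.0001)
+    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)
+    for _ in range(num_epochs):
+        for batch in train_loader:
+            loss = compute_multitask_loss(model, batch)  # Eq. (3)
+            optimizer.zero_grad()
+            loss.backward()
+            optimizer.step()
+        scheduler.step()
+```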
+
+# 5. CUS Dataset
+
+To the best of our knowledge, none of the existing datasets provides detailed annotations of cars in uncommon states (CUS). To evaluate the quality of edited data and benchmark network performance, we construct a CUS dataset with annotated real street-view images. Specifically, we first examine the existing AD-oriented datasets, including KITTI [14], CityScapes [6], ApolloScape [20], and ApolloCar3D [39]. These four datasets have a total of 80,000 images, which label over 1 million car instances. However, very few of them are in uncommon states (Tab. 1).
+
+To add more CUS data, we drive a car to capture images at various sites (i.e., hospital, park, school, and urban road) and at different times of day (i.e., morning, noon, and afternoon). Consequently, we capture about 150,000 images in total. After removing blurred and overexposed images, we finally collect 1273 car instances to label.
+
+As shown in Tab. 1, our dataset covers 10 dynamic parts (i.e., bonnet, trunk, four doors, two headlights, and two taillights) and 12 uncommon states, annotating 1850 car instances from 1441 images. For each car instance, we manually labelled the 2D bounding box, instance segmentation, dynamic part segmentation, and state description. Note that our trained deep model is tested directly on the CUS dataset without any 'domain adaptation' or 'fine-tuning' strategies. We believe the built benchmark can effectively verify the quality of the editing data and quantitatively evaluate the network performance.
+
+# 6. Results
+
+# 6.1. Experimental Settings
+
+Our network is trained on a 64-bit workstation with an 8-core 3.4 GHz CPU, 4 Nvidia Titan XP graphics cards, and Ubuntu 16.04. The generated training data mostly comes from the ApolloCar3D dataset, which labels the 3D model and 6-DoF pose of each car instance. Considering the obvious domain gap among different datasets, we further annotate 100 common car instances with 2D-3D alignment in KITTI, CityScapes, ApolloScape, and our captured images, respectively. Then we perform the proposed editing approach to generate CUS data for training. The editing time for each car is about 3 seconds: 0.5s for 3D point transformation and projection, 0.5s for hole filling and filtering, and 2s for invisible region generation.
+
+The training time of our network depends on the amount of training data. In general, training on 25K images costs 24 hours. In the testing phase, we directly use the trained model to perform fine-grained understanding on the CUS dataset.
+
+
+
+
+
+
+
+
+
+
+Figure 5: The training data of different approaches: (a) raw images; (b) rendering data; (c) editing data by our approach.
+
+
+Figure 6: Visualization results on 2D detection and instance segmentation (five baseline methods vs. ours).
+
+# 6.2. Evaluation Metric
+
+In Sec. 6.3, our network is compared with Mask-RCNN. Note that the proposed benchmark focuses only on CUS, while Mask-RCNN cannot distinguish between cars in common and uncommon states, both of which exist in the testing data. If we used the $AP$ metric for this experiment, the detected common-state cars would decrease the precision, resulting in an inaccurate $AP$ value. Therefore, we compute the maximum of the IoU values between the ground truth and the predictions to evaluate network performance.
+
+Different from Mask-RCNN, our two-backbone network can correctly detect the cars in uncommon states. For the ablation study in Sec. 6.4, we choose the $mAP$ metric to evaluate the performance of 2D detection, instance segmentation, and part segmentation. For state description, we compute the match rate at each binary item between the predicted state vectors and the ground truth.
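+
+The two protocols above can be sketched as follows (numpy-style; the box format and function names are our own, and boxes are assumed to be axis-aligned (x1, y1, x2, y2) rectangles):
+
+```python
+import numpy as np
+
+def box_iou(a, b):
+    """IoU of two boxes in (x1, y1, x2, y2) format."""
+    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
+    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
+    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
+    area_a = (a[2] - a[0]) * (a[3] - a[1])
+    area_b = (b[2] - b[0]) * (b[3] - b[1])
+    return inter / (area_a + area_b - inter + 1e-9)
+
+def max_iou_score(gt_boxes, pred_boxes):
+    """For each ground-truth CUS instance, keep the best-matching prediction and
+    average the maximum IoU values (common-state detections are simply ignored
+    instead of being counted as false positives)."""
+    if len(pred_boxes) == 0:
+        return 0.0
+    return float(np.mean([max(box_iou(g, p) for p in pred_boxes) for g in gt_boxes]))
+
+def state_match_rate(pred_states, gt_states):
+    """Match rate at each binary item of the part state vector."""
+    pred = (np.asarray(pred_states) > 0.5).astype(int)
+    gt = np.asarray(gt_states).astype(int)
+    return float((pred == gt).mean())
+```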
+
+| Methods | 2D Detection (IoU) | Ins. Seg. (IoU) |
+| --- | --- | --- |
+| Baseline 1 | 0.751 | 0.704 |
+| Baseline 2 | 0.758 | 0.712 |
+| Baseline 3 | 0.775 | 0.721 |
+| Baseline 4 | 0.766 | 0.713 |
+| Baseline 5 | 0.772 | 0.719 |
+| Ours | 0.862 | 0.815 |
+
+Table 2: 2D detection and instance segmentation evaluation results with different approaches on CUS dataset.
+
+
+
+
+
+
+As shown in Fig. 1, Fig. 7 (c) and Fig. 10, our network outputs holistic parsing results, including 2D detection, instance-level segmentation, dynamic part segmentation, and state description. Source code, data and more results can be found on the project page (https://github.com/zongdai/EditingForDNN).
+
+
+
+
+
+
+
+# 6.3. Comparison with Baseline Methods
+
+To demonstrate that our method effectively improves performance on 2D detection and instance-level segmentation, the following baseline methods are compared (Tab. 2):
+
+Baseline 1: Mask-RCNN + Existing Datasets. We train the Mask-RCNN network on the existing datasets (i.e., KITTI, CityScapes, ApolloScape, and ApolloCar3D), which only annotate the common-state cars (Fig. 5 (a)). In the testing phase, we directly output the results of 2D detection and instance-level segmentation on CUS dataset.
+
+Baseline 2: Mask-RCNN + Rendering Data. We implement the rendering-based data generation pipeline adopted in most image synthesis works [40, 31, 45]. Following [1], we construct 50 high-quality textured CAD models of cars, which are labelled with dynamic parts and motion axes. We transform the car models according to the 6-DoF pose and operate the dynamic parts to generate uncommon states. Here, we use the Blender software to obtain the rendering result, which is then overlaid onto the background (Fig. 5 (b)). Consequently, we build a rendering dataset consisting of 25K images. We train the Mask-RCNN network on the rendering data and test on the CUS dataset.
+
+Baseline 3: Mask-RCNN + Editing Data. We then train the Mask-RCNN network using our editing data (Fig. 5 (c)), which has the same number of images as the rendering data. We evaluate the trained Mask-RCNN network on the CUS dataset.
+
+Baseline 4: Our Network + Existing Datasets. We train our two-backbone network using the existing datasets which are introduced in Baseline 1.
+
+Baseline 5: Our Network + Rendering Data. We train the proposed two-backbone network using the rendering data which is illustrated in Baseline 2.
+
+Our method: Our Network + Editing Data. Finally, we train our two-backbone network using the editing data. The quantitative results of these approaches are listed in Tab. 2.
+
+| Methods | 2D Detection (mAP) | Instance Seg. (mAP) | Part Seg. (mAP) | State Description (match rate) |
+| --- | --- | --- | --- | --- |
+| Single Backbone Re-trained | 0.136 | 0.114 | 0.144 | 0.149 |
+| Single Backbone Frozen | 0.672 | 0.516 | 0.273 | 0.837 |
+| Two Backbones Frozen | 0.701 | 0.563 | 0.314 | 0.874 |
+
+Table 3: Ablation study of our network on 2D detection, instance segmentation, part segmentation, and state description.
+
+
+The results of Baseline 1 indicate that a Mask-RCNN model trained on common-state cars can detect and segment the car body. However, the dynamic parts are always ignored (Fig. 6 (a)). The results of Baseline 2 show that rendering data improves the network performance compared with Baseline 1, but the rendering data (Fig. 5 (b)) has a natural domain gap with the real captured images (Fig. 5 (a)). In addition, 3D rendering costs roughly 10x more time than our editing-based approach. The results of Baseline 3 show that Mask-RCNN trained on editing data outperforms training on existing datasets and rendering data. However, when we visualize the detection and segmentation results in Fig. 6 (c), the visible parts are handled well while the reverse sides of dynamic parts suffer from errors.
+
+Baseline 4 and Baseline 5 use our two-backbone network trained on the existing datasets and the rendering data, respectively. However, the performance of both baselines is not improved significantly. Here, we emphasize that our two-backbone network is carefully designed to learn from the editing data, especially the dynamic parts. Directly applying our network to other data is not effective, because that data lies in a different domain. Consequently, our two-backbone network trained on editing data achieves the best performance, surpassing the other methods by over 8 percent on both tasks (Tab. 2). The main improvement comes from the invisible regions (Fig. 6 (f)).
+
+
+Figure 7: Visual results on the ablation study of our network: (a) single backbone re-trained; (b) single backbone frozen; (c) two backbones frozen. The words in red/green indicate wrong/correct state descriptions.
+
+Figure 8: The performance of our two-backbone network with different amounts of training data.
+
+# 6.4. Performance Analysis
+
+The impact of our network structure. Besides 2D detection and instance segmentation, our network can detect cars in uncommon states, segment dynamic parts, and describe states. To illustrate the impact of our network structure, we conduct an ablation study, shown in Tab. 3, using a constant amount of training data (i.e., 25K images). We first retrain the single backbone, which is a common strategy in most deep networks (e.g., [17]). The results show that it can hardly predict the correct class for CUS, leading to poor performance on these tasks (Fig. 7 (a)). We then freeze the single backbone pre-trained on COCO while training on the editing data. The performance is improved because the over-fitting problem is relieved; however, the frozen backbone cannot extract adequate features (Fig. 7 (b)). In contrast, our two backbones, pre-trained on the car detection task and a general detection task, can both extract adequate features and avoid the over-fitting problem, achieving the best performance on these tasks (Fig. 7 (c)).
+
+The impact of the amount of training data. Empirically, the performance of a deep network largely relies on the amount of training data. Here, we conduct an experiment to study the relationship between the amount of data and network performance. Thanks to the fully automatic editing-based approach, we set the number of training images from 5K to 40K with an interval of 5K to train our network. Fig. 8 shows the network performance on multiple tasks with respect to the amount of training data.
+
+
+Figure 9: The rendering results with (a) and without (b) environment map.
+
+| Tasks | w/o Env. Map | with Env. Map |
+| --- | --- | --- |
+| 2D Detection (mAP) | 0.688 | 0.701 |
+| Ins. Seg. (mAP) | 0.538 | 0.563 |
+| Part Seg. (mAP) | 0.221 | 0.314 |
+| State Description (match rate) | 0.844 | 0.874 |
+
+Table 4: The impact of environment map.
+
+We find that from 5K to 25K, the network performance is significantly improved, while from 25K to 40K it is not sensitive to the amount of data. In practice, we set the number of training images to 25K, which is a good compromise between efficiency and accuracy.
+
+The impact of the environment map. In the proposed data generation pipeline, we render the reverse side of dynamic parts to generate the invisible region data. In the wild, illumination (or the environment map) plays an important role in determining whether the rendered region is compatible with its surroundings. Here, we conduct an experiment to study the effectiveness of the environment map. We use the same number of reverse-side samples rendered with/without the environment map (shown in Fig. 9) to train our network, and evaluate on the proposed CUS dataset. As shown in Tab. 4, the data rendered with an environment map significantly improves the network performance. In particular, dynamic part segmentation gains 9.3 percentage points.
+
+# 6.5. Application
+
+Our results can effectively support a number of high-level vision tasks. As shown in Fig. 10, we integrate a human detection task into our network. Intuitively, there are rich semantics between humans and the dynamic parts. For example, if someone stands near a lifted trunk, it is very likely that he/she is taking luggage. If someone bends to push a car door open, it implies he/she is getting out of the car. Besides action reasoning and interaction understanding, we can even infer people's identity from the uncommon cases. For instance, if the front-left door is opened, the person near the door is usually the driver.
+
+
+Figure 10: Applications to action reasoning and person identity inference by understanding CUS.
+
+# 7. Conclusion and Limitation
+
+In this paper, we make the first attempt to analyse cars in uncommon states (CUS). Instead of annotating a large amount of images, we present an editing-based data generation approach that takes advantage of 3D parts. Our method is light-weight and highly efficient, outperforming rendering-based methods by a large margin. To perform holistic understanding of CUS, we propose a multi-task deep network that simultaneously outputs 2D detection, instance-level segmentation, dynamic part segmentation, and state description. To benchmark performance, we construct a CUS dataset containing 1441 real images (1850 car instances) with fine-grained annotations. The experimental results show that our editing data and deep network perform well on CUS.
+
+Nevertheless, there are a number of limitations, which point out directions for future work. First, AD is a huge and complex project, and the uncommon states analysed in this paper are limited to cars; we will pay more attention to other objects, such as humans and roads. Second, the outputs of our network are mostly 2D results; we will extend this work to 3D space, such as 3D detection, 3D localization, and 3D reconstruction. Third, we will study CUS on video sequences. Lastly, we will fuse multiple sensors (e.g., RGB camera, stereo camera, Lidar, and Radar) to study the CUS problem.
+
+# Acknowledgement
+
+We thank the anonymous reviewers for their valuable comments. This work was supported in part by National Natural Science Foundation of China (U1736217 and 61932003), National Key R&D Program of China (2019YFF0302902), and Pre-research Project of the Manned Space Flight (060601).
+
+# References
+
+[1] Hassan Abu Alhaija, Siva Karthik Mustikovela, Lars Mescheder, Andreas Geiger, and Carsten Rother. Augmented reality meets computer vision: Efficient data generation for urban driving scenes. International Journal of Computer Vision, 2018.
+[2] Gabriel J Brostow, Julien Fauqueur, and Roberto Cipolla. Semantic object classes in video: A high-definition ground truth database. Pattern Recognition Letters, 30(2):88-97, 2009.
+[3] Daniel J Butler, Jonas Wulff, Garrett B Stanley, and Michael J Black. A naturalistic open source movie for optical flow evaluation. In Proceedings of ECCV, pages 611-625. Springer, 2012.
+[4] F. Chabot, M. Chaouch, J. Rabarisoa, C. Teulière, and T. Chateau. Deep manta: A coarse-to-fine many-task network for joint 2d and 3d vehicle analysis from monocular image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
+[5] Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An Information-Rich 3D Model Repository. Technical Report arXiv:1512.03012 [cs.GR], 2015.
+[6] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3213-3223, 2016.
+[7] Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
+[8] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255. IEEE, 2009.
+[9] Alexey Dosovitskiy, Philipp Fischer, Eddy Ilg, Philip Hausser, Caner Hazirbas, Vladimir Golkov, Patrick Van Der Smagt, Daniel Cremers, and Thomas Brox. Flownet: Learning optical flow with convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 2758-2766, 2015.
+[10] Debidatta Dwibedi, Ishan Misra, and Martial Hebert. Cut, paste and learn: Surprisingly easy synthesis for instance detection. In Proceedings of the IEEE International Conference on Computer Vision, pages 1301-1310, 2017.
+[11] Francis Engelmann, Theodora Kontogianni, Alexander Hermans, and Bastian Leibe. Exploring spatial context for 3d semantic segmentation of point clouds. In Proceedings of the IEEE International Conference on Computer Vision, pages 716-724, 2017.
+[12] Cheng-Yang Fu, Wei Liu, Ananth Ranga, Ambrish Tyagi, and Alexander C Berg. Dssd: Deconvolutional single shot detector. arXiv preprint arXiv:1701.06659, 2017.
+
+[13] Adrien Gaidon, Qiao Wang, Yohann Cabon, and Eleonora Vig. Virtual worlds as proxy for multi-object tracking analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4340-4349, 2016.
+[14] Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision meets robotics: The kitti dataset. The International Journal of Robotics Research, 32(11):1231-1237, 2013.
+[15] Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A Wichmann, and Wieland Brendel. Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness. arXiv preprint arXiv:1811.12231, 2018.
+[16] Ankur Handa, Viorica Patraucean, Vijay Badrinarayanan, Simon Stent, and Roberto Cipolla. Understanding real world indoor scenes with synthetic data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4077-4085, 2016.
+[17] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, pages 2961-2969, 2017.
+[18] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
+[19] Stefan Hinterstoisser, Vincent Lepetit, Paul Wohlhart, and Kurt Konolige. On pre-trained image features and synthetic images for deep learning. In The European Conference on Computer Vision (ECCV) Workshops, September 2018.
+[20] Xinyu Huang, Xinjing Cheng, Qichuan Geng, Binbin Cao, Dingfu Zhou, Peng Wang, Yuanqing Lin, and Ruigang Yang. The apolloscape dataset for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 954–960, 2018.
+[21] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097-1105, 2012.
+[22] Abhijit Kundu, Yin Li, and James M. Rehg. 3d-rcnn: Instance-level 3d object reconstruction via render-and compare. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
+[23] Jean-François Lalonde and Alexei A Efros. Synthesizing environment maps from a single image. Technical Report CMU-RI-TR-10-24, 2010.
+[24] Huan Lei, Naveed Akhtar, and Ajmal Mian. Spherical kernel for efficient graph convolution on 3d point clouds. arXiv preprint arXiv:1909.09287, 2019.
+[25] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2117-2125, 2017.
+[26] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Proceedings of ECCV, pages 740-755. Springer, 2014.
+
+[27] Cewu Lu, Hao Su, Yonglu Li, Yongyi Lu, Li Yi, Chi-Keung Tang, and Leonidas J Guibas. Beyond holistic object recognition: Enriching image understanding with part states. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6955–6963, 2018.
+[28] Nikolaus Mayer, Eddy Ilg, Philip Hausser, Philipp Fischer, Daniel Cremers, Alexey Dosovitskiy, and Thomas Brox. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4040-4048, 2016.
+[29] Matthias Müller, Vincent Casser, Jean Lahoud, Neil Smith, and Bernard Ghanem. Sim4cv: A photo-realistic simulator for computer vision applications. International Journal of Computer Vision, 126(9):902-919, 2018.
+[30] Gerhard Neuhold, Tobias Ollmann, Samuel Rota Bulo, and Peter Kontschieder. The mapillary vistas dataset for semantic understanding of street scenes. In Proceedings of the IEEE International Conference on Computer Vision, pages 4990-4999, 2017.
+[31] Aayush Prakash, Shaad Boochoon, Mark Brophy, David Acuna, Eric Cameracci, Gavriel State, Omer Shapira, and Stan Birchfield. Structured domain randomization: Bridging the reality gap by context-aware synthetic data. arXiv preprint arXiv:1810.10093, 2018.
+[32] Weichao Qiu and Alan Yuille. Unrealcv: Connecting computer vision to unreal engine. In Proceedings of ECCV, pages 909-916. Springer, 2016.
+[33] Joseph Redmon and Ali Farhadi. Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767, 2018.
+[34] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pages 91-99, 2015.
+[35] Stephan R Richter, Zeeshan Hayder, and Vladlen Koltun. Playing for benchmarks. In Proceedings of the IEEE International Conference on Computer Vision, pages 2213-2222, 2017.
+[36] Stephan R Richter, Vibhav Vineet, Stefan Roth, and Vladlen Koltun. Playing for data: Ground truth from computer games. In Proceedings of ECCV, pages 102-118. Springer, 2016.
+[37] German Ros, Laura Sellart, Joanna Materzynska, David Vazquez, and Antonio M Lopez. The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3234-3243, 2016.
+[38] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
+[39] Xibin Song, Peng Wang, Dingfu Zhou, Rui Zhu, Chenye Guan, Yuchao Dai, Hao Su, Hongdong Li, and Ruigang Yang. Apollocar3d: A large 3d car instance understanding benchmark for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5452-5462, 2019.
+
+[40] Hao Su, Charles R Qi, Yangyan Li, and Leonidas J Guibas. Render for cnn: Viewpoint estimation in images using cnns trained with rendered 3d model views. In Proceedings of the IEEE International Conference on Computer Vision, pages 2686-2694, 2015.
+[41] Robert W Sumner, Johannes Schmid, and Mark Pauly. Embedded deformation for shape manipulation. ACM Transactions on Graphics (TOG), 26(3):80, 2007.
+[42] Supasorn Suwajanakorn, Noah Snavely, Jonathan J Tompson, and Mohammad Norouzi. Discovery of latent 3d keypoints via end-to-end geometric reasoning. In Advances in Neural Information Processing Systems, pages 2059-2070, 2018.
+[43] Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, and Pieter Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. In International Conference on Intelligent Robots and Systems (IROS), pages 23-30. IEEE, 2017.
+[44] Carlo Tomasi and Roberto Manduchi. Bilateral filtering for gray and color images. In Proceedings of ICCV, volume 98, page 2, 1998.
+[45] Jonathan Tremblay, Aayush Prakash, David Acuna, Mark Brophy, Varun Jampani, Cem Anil, Thang To, Eric Cameracci, Shaad Boochoon, and Stan Birchfield. Training deep networks with synthetic data: Bridging the reality gap by domain randomization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 969-977, 2018.
+[46] Peng Wang, Xinyu Huang, Xinjing Cheng, Dingfu Zhou, Qichuan Geng, and Ruigang Yang. The apolloscape open dataset for autonomous driving and its application. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
+[47] Peng Wang, Xiaohui Shen, Zhe Lin, Scott Cohen, Brian Price, and Alan L Yuille. Joint object and part segmentation using deep learned potentials. In Proceedings of the IEEE International Conference on Computer Vision, pages 1573-1581, 2015.
+[48] Shenlong Wang, Min Bai, Gellert Mattyus, Hang Chu, Wenjie Luo, Bin Yang, Justin Liang, Joel Cheverie, Sanja Fidler, and Raquel Urtasun. Torontocity: Seeing the world with a million eyes. arXiv preprint arXiv:1612.00423, 2016.
+[49] Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1912-1920, 2015.
+[50] Fangting Xia, Peng Wang, Liang-Chieh Chen, and Alan L Yuille. Zoom better to see clearer: Human and object parsing with hierarchical auto-zoom net. In Proceedings of ECCV, pages 648-663. Springer, 2016.
+[51] Fisher Yu, Wenqi Xian, Yingying Chen, Fangchen Liu, Mike Liao, Vashisht Madhavan, and Trevor Darrell. Bdd100k: A diverse driving video database with scalable annotation tooling. arXiv preprint arXiv:1805.04687, 2018.
+[52] Yi Zhang, Weichao Qiu, Qi Chen, Xiaolin Hu, and Alan Yuille. Unrealstereo: A synthetic dataset for analyzing stereo vision. arXiv preprint arXiv:1612.04647, 1(2), 2016.
\ No newline at end of file
diff --git a/3dpartguidedimageeditingforfinegrainedobjectunderstanding/images.zip b/3dpartguidedimageeditingforfinegrainedobjectunderstanding/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..cf82013f61e2154675f8678d158f7567a879ca36
--- /dev/null
+++ b/3dpartguidedimageeditingforfinegrainedobjectunderstanding/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:45e06f9a7237ad17fca5540733966ee8fa446baa9c02cd99c204cd215703e4b8
+size 732082
diff --git a/3dpartguidedimageeditingforfinegrainedobjectunderstanding/layout.json b/3dpartguidedimageeditingforfinegrainedobjectunderstanding/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..319cd6ef13c529f9e7cc3c8cab21d9563399ec42
--- /dev/null
+++ b/3dpartguidedimageeditingforfinegrainedobjectunderstanding/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bb6e5f35caa886e01ad0b8965098f64f8bdea353e2b01dbe8534a974c2ab86a2
+size 362140
diff --git a/3dphotographyusingcontextawarelayereddepthinpainting/d1b0b4a6-ad26-4d18-aac0-e2f991cc179b_content_list.json b/3dphotographyusingcontextawarelayereddepthinpainting/d1b0b4a6-ad26-4d18-aac0-e2f991cc179b_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..211ed7dd1d4d38ca3fa1eb331ce8735a754db9a2
--- /dev/null
+++ b/3dphotographyusingcontextawarelayereddepthinpainting/d1b0b4a6-ad26-4d18-aac0-e2f991cc179b_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cac912472aee7c984e0a7716d3359ea94e27dd498e212339e00548aeeea75765
+size 96236
diff --git a/3dphotographyusingcontextawarelayereddepthinpainting/d1b0b4a6-ad26-4d18-aac0-e2f991cc179b_model.json b/3dphotographyusingcontextawarelayereddepthinpainting/d1b0b4a6-ad26-4d18-aac0-e2f991cc179b_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..695e3be2a81e69e001fae622b0d2acb5d1e8762a
--- /dev/null
+++ b/3dphotographyusingcontextawarelayereddepthinpainting/d1b0b4a6-ad26-4d18-aac0-e2f991cc179b_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2356c8ba8e3bdc7409767bc80717bee9789aa89e1450849aaa248186c3a8eb18
+size 122135
diff --git a/3dphotographyusingcontextawarelayereddepthinpainting/d1b0b4a6-ad26-4d18-aac0-e2f991cc179b_origin.pdf b/3dphotographyusingcontextawarelayereddepthinpainting/d1b0b4a6-ad26-4d18-aac0-e2f991cc179b_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..8a42cd5db72c3fcbfdb1c0d4434af475bda1e0db
--- /dev/null
+++ b/3dphotographyusingcontextawarelayereddepthinpainting/d1b0b4a6-ad26-4d18-aac0-e2f991cc179b_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6f32bc7ac201efae11a7ecef171a0a2d77b93d995b87424a509c9619e2621caf
+size 5176667
diff --git a/3dphotographyusingcontextawarelayereddepthinpainting/full.md b/3dphotographyusingcontextawarelayereddepthinpainting/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..27b3eccbf4af6f38e42e6c20a24bfcd4168fec2f
--- /dev/null
+++ b/3dphotographyusingcontextawarelayereddepthinpainting/full.md
@@ -0,0 +1,491 @@
+# 3D Photography using Context-aware Layered Depth Inpainting
+
+Meng-Li Shih$^{1,2}$ (shihsml@gapp.nthu.edu.tw), Shih-Yang Su$^{1}$ (shihyang@vt.edu), Johannes Kopf$^{3}$ (jkopf@fb.com), Jia-Bin Huang$^{1}$ (jbhuang@vt.edu)
+
+$^{1}$ Virginia Tech $^{2}$ National Tsing Hua University $^{3}$ Facebook
+
+https://shihmengli.github.io/3D-Photo-Inpainting
+
+
+(a) Depth-warping (holes); (b) Depth-warping (stretching); (c) Facebook 3D photo; (d) Our result
+
+Figure 1. 3D photography from a single RGB-D image. Naive methods either produce holes (a) or stretch content (b) at disocclusions. Color and depth inpainting using diffusion is better, but provides a too smooth appearance (c). Our approach is capable of synthesizing new color/depth texture and structures, leading to more photorealistic novel views (d).
+
+# Abstract
+
+We propose a method for converting a single RGB-D input image into a 3D photo — a multi-layer representation for novel view synthesis that contains hallucinated color and depth structures in regions occluded in the original view. We use a Layered Depth Image with explicit pixel connectivity as underlying representation, and present a learning-based inpainting model that synthesizes new local color-and-depth content into the occluded region in a spatial context-aware manner. The resulting 3D photos can be efficiently rendered with motion parallax using standard graphics engines. We validate the effectiveness of our method on a wide range of challenging everyday scenes and show less artifacts compared with the state of the arts.
+
+# 1. Introduction
+
+3D photography—capturing views of the world with a camera and using image-based rendering techniques for novel view synthesis—is a fascinating way to record and reproduce visual perception. It provides a dramatically more immersive experience than old 2D photography: almost lifelike in Virtual Reality, and even to some degree on normal flat displays when displayed with parallax.
+
+Classic image-based reconstruction and rendering techniques, however, require elaborate capture setups involving many images with large baselines [17, 59, 26, 45, 19, 12], and/or special hardware (e.g., Lytro Immerge, Facebook Manifold camera$^{1}$).
+
+Recently, we have seen work to make capture for 3D photography more effortless by using cell phone cameras and lowering baseline requirements [17, 18]. In the most extreme cases, novel techniques such as Facebook 3D Photos2 now just require capturing a single snapshot with a dual lens camera phone, which essentially provides an RGB-D (color and depth) input image.
+
+In this work we are interested in rendering novel views from such an RGB-D input. The most salient features in rendered novel views are the disocclusions due to parallax: naive depth-based warping techniques either produce gaps here (Figure 1a) or stretched content (1b). Recent methods try to provide better extrapolations.
+
+Stereo magnification [72] and recent variants [52, 39] use a fronto-parallel multi-plane representation (MPI), which is synthesized from the small-baseline dual camera stereo input. However, MPI produces artifacts on sloped surfaces. Besides, the excessive redundancy in the multi-plane representation makes it memory and storage inefficient and costly to render.
+
+Facebook 3D Photos use a layered depth image (LDI) representation [48], which is more compact due to its sparsity, and can be converted into a light-weight mesh representation for rendering. The color and depth in occluded regions are synthesized using heuristics that are optimized for fast runtime on mobile devices. In particular it uses a isotropic diffusion algorithm for inpainting colors, which produces overly smooth results and is unable to extrapolate texture and structures (Figure 1c).
+
+Several recent learning-based methods also use similar multi-layer image representations [7, 56]. However, these methods use "rigid" layer structures, in the sense that every pixel in the image has the same (fixed and predetermined) number of layers. At every pixel, they store the nearest surface in the first layer, the second-nearest in the next layer, etc. This is problematic, because across depth discontinuities the content within a layer changes abruptly, which destroys locality in receptive fields of convolution kernels.
+
+In this work we present a new learning-based method that generates a 3D photo from an RGB-D input. The depth can either come from dual camera cell phone stereo, or be estimated from a single RGB image [30, 28, 13]. We use the LDI representation (similar to Facebook 3D Photos) because it is compact and allows us to handle situations of arbitrary depth-complexity. Unlike the "rigid" layer structures described above, we explicitly store connectivity across pixels in our representation. However, as a result it is more difficult to apply a global CNN to the problem, because our topology is more complex than a standard tensor. Instead, we break the problem into many local inpainting sub-problems, which we solve iteratively. Each problem is locally like an image, so we can apply standard CNN. We use an inpainting model that is conditioned on spatially-adaptive context regions, which are extracted from the local connectivity of the LDI. After synthesis we fuse the inpainted regions back into the LDI, leading to a recursive algorithm that proceeds until all depth edges are treated.
+
+The result of our algorithm are 3D photos with synthesized texture and structures in occluded regions (Figure 1d). Unlike most previous approaches we do not require predetermining a fixed number of layers. Instead our algorithm adapts by design to the local depth-complexity of the input and generates a varying number of layers across the image. We have validated our approach on a wide variety of photos captured in different situations.
+
+# 2. Related Work
+
+Representation for novel view synthesis. Different types of representations have been explored for novel view synthesis, including light fields [15, 29, 2], multi-plane images [72, 52, 39], and layered depth images [48, 55, 7, 56, 17, 18, 6, 42]. Light fields enable photorealistic rendering of novel views, but generally require many input images to achieve good results. The multi-plane image representation [72, 52, 39] stores multiple layers of RGB-$\alpha$ images at fixed depths. The main advantage of this representation is its ability to capture semi-reflective or semi-transparent surfaces. However, due to the fixed depth discretization, sloped surfaces often do not reproduce well, unless an excessive number of planes is used. Many variants of layered depth image representations have been used over time. Representations with a fixed number of layers everywhere have recently been used [7, 56], but they do not preserve locality well, as described in the previous section. Other recent work [17, 18] extends the original work of Shade et al. [48] to explicitly store connectivity information. This representation can locally adapt to any depth-complexity and can be easily converted into a textured mesh for efficient rendering. Our work uses this representation as well.
+
+Image-based rendering. Image-based rendering techniques enable photorealistic synthesis of novel views from a collection of posed images. These methods work best when the images have sufficiently large baselines (so that multi-view stereo algorithms can work well) or are captured with depth sensors. Recent advances include learning-based blending [19], soft 3D reconstruction [45], handling reflection [49, 26], relighting [63], and reconstructing mirror and glass surfaces [59]. Our focus in this work lies in novel view synthesis from one single image.
+
+Learning-based view synthesis. CNN-based methods have been applied to synthesizing novel views from sparse light field data [23] or two or more posed images [12, 19, 4]. Several recent methods explore view synthesis from a single image. These methods, however, often focus on a specific domain [53, 60], synthetic 3D scenes/objects [73, 43, 54, 6, 7, 11], hallucinating only one specific view [61, 68], or assuming piecewise planar scenes [32, 34].
+
+Many of these learning-based view synthesis methods require running a forward pass of the pre-trained network to synthesize the image of a given viewpoint. This makes these approaches less applicable to display on resource-constrained devices. Our representation, on the other hand, can be easily converted into a textured mesh and efficiently rendered with standard graphics engines.
+
+Image inpainting. The task of image inpainting aims to fill missing regions in images with plausible content. Inspired by the success of texture synthesis [9, 8], example-based methods complete the missing regions by transferring the contents from the known regions of the image, either through non-parametric patch-based synthesis [58, 1, 5, 20] or solving a Markov Random Field model using belief propagation [25] or graph cut [46, 27, 16]. Driven by the progress of convolutional neural networks, CNN-based methods have received considerable attention due to their ability to predict semantically meaningful contents that are not available in the known regions [44, 51, 21, 65, 66]. Recent efforts include designing CNN architectures to better handle holes with irregular shapes [33, 67, 64] and two-stage methods with structure-content disentanglement, e.g., predicting structure (e.g., contour/edges in the missing regions) followed by content completion conditioned on the predicted structures [41, 62, 47].
+
+Our inpainting model builds upon the recent two-stage approaches [41, 62, 47] but with two key differences. First, unlike existing image inpainting algorithms where the hole and the available context are static (e.g., the known regions in the entire input image), we apply the inpainting locally around each depth discontinuity with adaptive hole and context regions. Second, in addition to inpainting the color image, we also inpaint the depth values as well as the depth discontinuities in the missing regions.
+
+Depth inpainting. Depth inpainting has applications in filling missing depth values where commodity-grade depth cameras fail (e.g., transparent/reflective/distant surfaces) [35, 70, 36] or performing image editing tasks such as object removal on stereo images [57, 40]. The goal of these algorithms, however, is to inpaint the depth of the visible surfaces. In contrast, our focus is on recovering the depth of the hidden surface.
+
+CNN-based single depth estimation. CNN-based methods have recently demonstrated promising results on estimating depth from a single image. Due to the difficulty of collecting labeled datasets, earlier approaches often focus on specific visual domains such as indoor scenes [10] or street view [14, 71]. While the accuracy of these approaches is not yet competitive with multi-view stereo algorithms, this line of research is particularly promising due to the availability of larger and more diverse training datasets from relative depth annotations [3], multi-view stereo [30], 3D movies [28] and synthetic data [42].
+
+For cases where only one single color image is available, we obtain the depth estimate through a pre-trained depth estimation model [30, 28]. Removing the dependency on stereo or multiple images as input makes our method more widely applicable to all the existing photos.
+
+# 3. Method
+
+Layered depth image. Our method takes as input an RGB-D image (i.e., an aligned color-and-depth image pair) and generates a Layered Depth Image (LDI, [48]) with inpainted color and depth in parts that were occluded in the input.
+
+An LDI is similar to a regular 4-connected image, except that every position in the pixel lattice can hold any number of pixels, from zero to many. Each LDI pixel stores a color and a depth value. Unlike the original LDI work [48], we explicitly represent the local connectivity of pixels: each pixel stores a pointer to at most one direct neighbor in each of the four cardinal directions (left, right, top, bottom). LDI pixels are 4-connected like normal image pixels within smooth regions, but do not have neighbors across depth discontinuities.
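+
+A minimal sketch of such an LDI pixel record (field and class names are our own; the paper does not prescribe a concrete implementation):
+
+```python
+from dataclasses import dataclass, field
+from typing import Dict, List, Optional, Tuple
+
+@dataclass
+class LDIPixel:
+    """One pixel of a Layered Depth Image with explicit connectivity."""
+    x: int                      # position in the pixel lattice
+    y: int
+    color: Tuple[int, int, int]
+    depth: float
+    # At most one direct neighbor per cardinal direction; None means no
+    # neighbor (e.g., across a depth discontinuity).
+    neighbors: Dict[str, Optional["LDIPixel"]] = field(
+        default_factory=lambda: {"left": None, "right": None, "up": None, "down": None})
+
+@dataclass
+class LDI:
+    """Each lattice position can hold any number of LDIPixels (zero to many)."""
+    height: int
+    width: int
+    pixels: Dict[Tuple[int, int], List[LDIPixel]] = field(default_factory=dict)
+
+    def add(self, p: LDIPixel) -> None:
+        self.pixels.setdefault((p.y, p.x), []).append(p)
+```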
+
+LDIs are a useful representation for 3D photography, because (1) they naturally handle an arbitrary number of layers, i.e., can adapt to depth-complex situations as necessary, and (2) they are sparse, i.e., memory and storage efficient and can be converted into a light-weight textured mesh representation that renders fast.
+
+The quality of the depth input to our method does not need to be perfect, as long as discontinuities are reasonably well aligned in the color and depth channels. In practice, we have successfully used our method with inputs from dual camera cell phones as well as with estimated depth maps from learning-based methods [30, 28].
+
+Method overview. Given an input RGB-D image, our method proceeds as follows. We first initialize a trivial LDI, which uses a single layer everywhere and is fully 4-connected. In a pre-process we detect major depth discontinuities and group them into simple connected depth edges (Section 3.1). These form the basic units for our main algorithm below. In the core part of our algorithm, we iteratively select a depth edge for inpainting. We then disconnect the LDI pixels across the edge and only consider the background pixels of the edge for inpainting. We extract a local context region from the "known" side of the edge, and generate a synthesis region on the "unknown" side (Section 3.2). The synthesis region is a contiguous 2D region of new pixels, whose color and depth values we generate from the given context using a learning-based method (Section 3.3). Once inpainted, we merge the synthesized pixels back into the LDI (Section 3.4). Our method iteratively proceeds in this manner until all depth edges have been treated.
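+
+The overall procedure can be summarized by the following control-flow sketch (the `ops` helpers stand in for the steps of Sections 3.1-3.4; only the iterative structure is shown, not the authors' actual code):
+
+```python
+def build_3d_photo(rgb, depth, ops):
+    """High-level flow: iteratively inpaint occluded content behind every
+    detected depth edge of the LDI, then export a textured mesh."""
+    ldi = ops.lift_to_ldi(rgb, depth)              # single, fully 4-connected layer
+    edges = ops.detect_and_link_depth_edges(ldi)   # Section 3.1
+    while edges:
+        edge = edges.pop()
+        ops.disconnect_across(ldi, edge)           # foreground/background silhouettes
+        context, synthesis = ops.extract_regions(ldi, edge)          # Section 3.2
+        inpainted = ops.inpaint_color_and_depth(context, synthesis)  # Section 3.3
+        edges.extend(ops.merge_into_ldi(ldi, inpainted))  # Section 3.4; may spawn new edges
+    return ops.to_textured_mesh(ldi)
+```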
+
+# 3.1. Image preprocessing
+
+The only input to our method is a single RGB-D image. Every step of the algorithm below proceeds fully automatically. We normalize the depth channel, by mapping the min and max disparity values (i.e., $1 / \mathrm{depth}$ ) to 0 and 1, respectively. All parameters related to spatial dimensions below are tuned for images with 1024 pixels along the longer dimension, and should be adjusted proportionally for images of different sizes.
+
+We start by lifting the image onto an LDI, i.e., creating a single layer everywhere and connecting every LDI pixel to its four cardinal neighbors. Since our goal is to inpaint the occluded parts of the scene, we need to find depth discontinuities since these are the places where we need to extend the existing content. In most depth maps produced by stereo methods (dual camera cell phones) or depth estimation networks, discontinuities are blurred across multiple pixels (Figure 2c), making it difficult to precisely localize them. We, therefore, sharpen the depth maps using a bilateral median filter [37] (Figure 2d), using a $7 \times 7$ window size, and $\sigma_{\text{spatial}} = 4.0$ , $\sigma_{\text{intensity}} = 0.5$ .
+
+After sharpening the depth map, we find discontinuities by thresholding the disparity difference between neighboring pixels. This results in many spurious responses, such as isolated speckles and short segments dangling off longer edges (Figure 2e).
+
+
+Figure 2. Preprocessing. Preprocessing of the color and depth input (a-b). We use a bilateral median filter to sharpen the input depth maps (c-d), detect raw discontinuities using disparity thresholds (e), and clean up spurious threshold responses and link discontinuities into connected depth edges (f). These linked depth edges form the basic unit for our inpainting process.
+
+Figure 3. Conceptual illustration of the LDI inpainting algorithm. (a) The initial LDI is fully connected. A depth edge (discontinuity) is marked in gray. (b) We first cut the LDI pixel connections across the depth, forming a foreground silhouette (green) and a background silhouette (red). (c) For the background silhouette we spawn a context region (blue) and a synthesis region (red) of new LDI pixels. (d) The synthesized pixels have been merged into the LDI.
+Figure 4. Context/synthesis regions. Context regions (blue) and synthesis regions (red) for three example connected depth edges (black) from Figure 2(f).
+
+
+
+
+
+We clean this up as follows: First, we create a binary map by labeling depth discontinuities as 1 (and others as 0). Next, we use connected component analysis to merge adjacent discontinuities into a collection of "linked depth edges". To avoid merging edges at junctions, we separate them based on the local connectivity of the LDI. Finally, we remove short segments (< 10 pixels), including both isolated and dangling ones. We determine the threshold of 10 by conducting five-fold cross-validation with the LPIPS [69] metric on 50 samples randomly selected from the RealEstate10K training set. The final edges (Figure 2f) form the basic unit of our iterative inpainting procedure, which is described in the following sections.
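+
+A sketch of this clean-up step (using scipy's connected-component labeling; the disparity threshold `tau` is a placeholder, as the paper does not state its value, and junction separation via LDI connectivity is omitted):
+
+```python
+import numpy as np
+from scipy import ndimage
+
+def linked_depth_edges(disp, tau=0.04, min_len=10):
+    """Threshold neighboring disparity differences, merge adjacent responses
+    into linked edges, and drop segments shorter than min_len pixels."""
+    edge = np.zeros(disp.shape, dtype=bool)
+    # A pixel is a discontinuity if its disparity differs too much from its
+    # right or bottom neighbor.
+    edge[:, :-1] |= np.abs(disp[:, :-1] - disp[:, 1:]) > tau
+    edge[:-1, :] |= np.abs(disp[:-1, :] - disp[1:, :]) > tau
+    # Connected component analysis groups adjacent responses into linked edges.
+    labels, num = ndimage.label(edge, structure=np.ones((3, 3)))
+    sizes = ndimage.sum(edge, labels, index=np.arange(1, num + 1))
+    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_len]
+    labels[~np.isin(labels, keep)] = 0
+    return labels  # 0 = background, >0 = id of a linked depth edge
+```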
+
+
+Figure 5. Handling imperfect depth edges (panels: input, context/synthesis regions, inpainting w/o dilation, w/ dilation). As the detected depth edges may not align well around occlusion boundaries, we dilate the synthesis region by 5 pixels. This strategy helps reduce artifacts in the inpainted regions.
+
+# 3.2. Context and synthesis regions
+
+Our inpainting algorithm operates on one of the previously computed depth edges at a time. Given one of these edges (Figure 3a), the goal is to synthesize new color and depth content in the adjacent occluded region. We start by disconnecting the LDI pixels across the discontinuity (Figure 3b). We call the pixels that became disconnected (i.e., are now missing a neighbor) silhouette pixels. We see in Figure 3b that a foreground silhouette (marked green) and a background silhouette (marked red) forms. Only the background silhouette requires inpainting. We are interested in extending its surrounding content into the occluded region.
+
+We start by generating a synthesis region, a contiguous region of new pixels (Figure 3c, red pixels). These are essentially just 2D pixel coordinates at this point. We initialize the color and depth values in the synthesis region
+
+using a simple iterative flood-fill like algorithm. It starts by stepping from all silhouette pixels one step in the direction where they are disconnected. These pixels form the initial synthesis region. We then iteratively expand (for 40 iterations) all pixels of the region by stepping left/right/up/down and adding any pixels that have not been visited before. For each iteration, we expand the context and synthesis regions alternately and thus a pixel only belongs to either one of the two regions. Additionally, we do not step back across the silhouette, so the synthesis region remains strictly in the occluded part of the image. Figure 4 shows a few examples.
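+
+A simplified 2D sketch of this alternating expansion (operating on boolean masks and ignoring the per-pixel LDI connectivity discussed next; function and argument names are ours):
+
+```python
+import numpy as np
+
+def grow_region(seed_mask, blocked_mask, num_iters=40):
+    """Dilate the region by one pixel per iteration in the four cardinal
+    directions, never crossing blocked (silhouette) pixels."""
+    region = seed_mask & ~blocked_mask
+    for _ in range(num_iters):
+        grown = region.copy()
+        grown[1:, :] |= region[:-1, :]   # step down
+        grown[:-1, :] |= region[1:, :]   # step up
+        grown[:, 1:] |= region[:, :-1]   # step right
+        grown[:, :-1] |= region[:, 1:]   # step left
+        region = grown & ~blocked_mask
+    return region
+
+# Synthesis region: 40 iterations starting from the pixels just beyond the
+# background silhouette; context region: 100 iterations on the known side.
+```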
+
+We describe our learning-based technique for inpainting the synthesis region in the next section. Similar techniques [33, 41] were previously used for filling holes in images. One important difference from our work is that these image holes were always fully surrounded by known content, which constrained the synthesis. In our case, however, the inpainting is performed on a connected layer of LDI pixels, and it should only be constrained by surrounding pixels that are directly connected to it. Any other region in the LDI, for example on another foreground or background layer, is entirely irrelevant for this synthesis unit, and should not constrain or influence it in any way.
+
+We achieve this behavior by explicitly defining a context region (Figure 3c, blue region) for the synthesis. Our inpainting networks only consider the content in the context region and do not see any other parts of the LDI. The context region is generated using a similar flood-fill like algorithm. One difference, however, is that this algorithm selects actual LDI pixels and follows their connection links, so the context region expansion halts at silhouettes. We run this algorithm for 100 iterations, as we found that synthesis performs better with slightly larger context regions. In practice, the silhouette pixels may not align well with the actual occluding boundaries due to imperfect depth estimation. To tackle this issue, we dilate the synthesis region near the depth edge by 5 pixels (the context region erodes correspondingly). Figure 5 shows the effect of this heuristic.
+
+# 3.3. Context-aware color and depth inpainting
+
+Model. Given the context and synthesis regions, our next goal is to synthesize color and depth values. Even though we perform the synthesis on an LDI, the extracted context and synthesis regions are locally like images, so we can use standard network architectures designed for images. Specifically, we build our color and depth inpainting models upon image inpainting methods in [41, 33, 62].
+
+One straightforward approach is to inpaint the color image and depth map independently. The inpainted depth map, however, may not be well-aligned with respect to the inpainted color. To address this issue, we design our color and depth inpainting network similar to [41, 62]: we break down the inpainting tasks into three sub-networks: (1) edge inpainting network, (2) color inpainting network, and (3) depth inpainting network (Figure 6). First, given the context edges as input, we use the edge inpainting network to predict the depth edges in the synthesis regions, producing the inpainted edges. Performing this step first helps infer the structure (in terms of depth edges) that can be used for constraining the content prediction (the color and depth values). We take the concatenated inpainted edges and context color as input and use the color inpainting network to produce inpainted color. We perform the depth inpainting similarly. Figure 7 shows an example of how the edge-guided inpainting is able to extend the depth structures accurately and alleviate the color/depth misalignment issue.
+
+Multi-layer inpainting. In depth-complex scenarios, applying our inpainting model once is not sufficient as we can still see the hole through the discontinuity created by the inpainted depth edges. We thus apply our inpainting model until no further inpainted depth edges are generated. Figure 8 shows an example of the effects. Here, applying our inpainting model once fills in missing layers. However, several holes are still visible when viewed at a certain viewpoint (Figure 8b). Applying the inpainting model one more time fixes the artifacts.
+
+Training data generation. Our model can simply be trained on any image dataset without the need for annotated data. Here, we choose the MSCOCO dataset [31] for its wide diversity in object types and scenes. To generate the training data for the inpainting model, we create a synthetic dataset as follows. First, we apply the pre-trained MegaDepth [30] model on the COCO dataset to obtain pseudo ground truth depth maps. We extract context/synthesis regions (as described in Section 3.2) to form a pool of these regions. We then randomly sample and place these context-synthesis regions on different images in the COCO dataset. We can thus obtain the ground truth content (RGB-D) for the simulated occluded region.
+
+# 3.4. Converting to 3D textured mesh
+
+We form the 3D textured mesh by integrating all the inpainted depth and color values back into the original LDI. Using mesh representations for rendering allows us to quickly render novel views, without the need to perform per-view inference step. Consequently, the 3D representation produced by our algorithm can easily be rendered using standard graphics engines on edge devices.
+
+# 4. Experimental Results
+
+In this section, we start by describing implementation details (Section 4.1). We then show visual comparisons with state-of-the-art novel view synthesis methods (Section 4.2). We refer readers to the supplementary material for extensive results and comparisons. Next, we follow the evaluation protocol in [72] and report quantitative comparisons on the RealEstate10K dataset (Section 4.3). We present an ablation study to justify our model design (Section 4.4). Finally, we show that our method works well with depth maps from different sources (Section 4.5).
+
+
+Figure 6. Context-aware color and depth inpainting. Given the color, depth, the extracted and linked depth edges as inputs, we randomly select one of the edges as a subproblem. We start with inpainting the depth edge in the synthesis region (red) using an edge inpainting network. We then concatenate the inpainted depth edges with the context color together and apply a color inpainting network to produce the inpainted color. Similarly, we concatenate the inpainted depth edges with the context depth and apply a depth inpainting network to produce the inpainted depth.
+
+
+Figure 7. Effect of depth inpainting (zoom-in, diffusion, w/o edge, w/ edge). Edge-guided depth inpainting produces more accurate structure inpainting, particularly for depth-complex regions (e.g., T-junctions). Blue box: synthesized novel view.
+
+Figure 8. Multi-layer inpainting: (a) none; (b) once; (c) twice.
+
+Additional details and visual comparisons can be found in our supplementary material.
+
+# 4.1. Implementation details
+
+Training the inpainting model. For the edge-generator, we follow the hyper-parameters in [41]. Specifically, we train the edge-generator model using the ADAM optimizer [24] with $\beta = 0.9$ and an initial learning rate of 0.0001. We train both the edge and depth generator models using the context-synthesis regions dataset on the MSCOCO dataset for 5 epochs. We train the depth generator and color image generator for 5 and 10 epochs, respectively.
+
+Inpainting model architecture. For the edge inpainting network, we adopt the architecture provided by [41]. For the depth and color inpainting networks, we use a standard U-Net architecture with partial convolution [33]. Due to space limitations, we leave additional implementation details (the specific network architectures, the training losses, and the weights for each network) to the supplementary material. We will make the source code and pre-trained model publicly available to foster future work.
+
+Training data. We use the 118k images from COCO 2017 set for training. We select at most 3 pairs of regions from each image to form the context-synthesis pool. During training, we sample one pair of regions for each image, and resize it by a factor between [1.0, 1.3].
+
+# 4.2. Visual comparisons
+
+Comparisons with methods with MPI representations. We compare our proposed model against MPI-based approaches on the RealEstate10K dataset. We use DPSNet [22] to obtain the input depth maps for our method. We render the novel views of MPI-based methods using the pretrained weights provided by the authors. Figure 9 shows two challenging examples with complex depth structures. Our method synthesizes plausible structures around depth boundaries; on the other hand, stereo magnification and PB-MPI produce artifacts around depth discontinuities. LLFF [39] suffers from ghosting effects when extrapolating new views.
+
+Figure 9. Visual comparison with MPI-based methods (reference frame and zoom-in crops for StereoMag [72], PB-MPI [52], LLFF [39], XView [4], and ours). Our method inpaints plausible structure and color in the occluded region.
+
+Figure 10. Visual comparison to Facebook 3D Photos (Facebook 3D Photo results vs. our results). Our approach fills plausible textures and structures at disocclusions.
+
+Comparisons with Facebook 3D photo. Here, we aim to evaluate the capability of our method on photos taken in the wild. We extract the color images and the corresponding depth maps estimated from an iPhone X (with a dual camera lens). We use the same set of RGB-D inputs for both Facebook 3D photo and our algorithm. Figure 10 shows the view synthesis results in comparison with Facebook 3D photo. The color and depth values diffused by the Facebook 3D photo algorithm work well when small or thin occluded regions are revealed at novel views. The artifacts, however, become clearly visible with larger occluded regions. On the other hand, our results in general fill in the synthesis regions with visually plausible contents and structures.
+
+Table 1. Quantitative comparison on the RealEstate10K dataset.
+
+| Methods | SSIM ↑ | PSNR ↑ | LPIPS ↓ |
+| Stereo-Mag [72] | 0.8906 | 26.71 | 0.0826 |
+| PB-MPI [52] | 0.8773 | 25.51 | 0.0902 |
+| LLFF [39] | 0.8062 | 23.17 | 0.1323 |
+| XView [4] | 0.8628 | 24.75 | 0.0822 |
+| Ours | 0.8887 | 27.29 | 0.0724 |
+
+Table 2. Using depth edge as guidance improves the results.
+Values in parentheses: results in disoccluded regions.
+
+| Methods | SSIM ↑ | PSNR ↑ | LPIPS ↓ |
+| Diffusion | 0.8665 (0.6237) | 25.95 (18.91) | 0.084 |
+| Inpaint w/o edge | 0.8665 (0.6247) | 25.96 (18.94) | 0.084 |
+| Inpaint w/ edge (Ours) | 0.8666 (0.6265) | 25.97 (18.98) | 0.083 |
+
+Table 3. Using color inpainting model gives better perceptual quality. Our dilation heuristic further boosts the performance.
+Values in parentheses: results in disoccluded regions.
+
+| Methods | SSIM ↑ | PSNR ↑ | LPIPS ↓ |
+| Diffusion | 0.8661 (0.6215) | 25.90 (18.78) | 0.088 |
+| Inpaint w/o dilation | 0.8643 (0.5573) | 25.56 (17.14) | 0.085 |
+| Inpaint w/ dilation (Ours) | 0.8666 (0.6265) | 25.97 (18.98) | 0.083 |
+
+# 4.3. Quantitative comparisons
+
+We evaluate how well our model can extrapolate views compared to MPI-based methods [52, 72, 4, 39]. We randomly sample 1500 video sequences from RealEstate10K to generate testing triplets. For each triplet, we set $t = 10$ for the target view, so that all the methods need to extrapolate beyond the source ( $t = 0$ ) and reference ( $t = 4$ ) frames. We use DPSNet [22] to generate the input depth maps required for our model. We quantify the performance of each model using the SSIM and PSNR metrics between the synthesized target views and the ground truth. As these metrics do not capture the perceptual quality of the synthesized view, we also include the LPIPS [69] metric to quantify how well the generated view aligns with human perception. For PB-MPI, we set the number of depth layers to 64 as it yields the best result. We report the evaluation results in Table 1. Our proposed method performs competitively in terms of SSIM and PSNR. In addition, our synthesized views exhibit better perceptual quality, as reflected in the superior LPIPS score.
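+
+For reference, a minimal sketch of computing these three metrics per synthesized view is shown below, assuming scikit-image (>= 0.19) and the `lpips` package; the exact preprocessing used for the reported numbers may differ:
+
+```python
+import torch
+import lpips
+from skimage.metrics import structural_similarity, peak_signal_noise_ratio
+
+lpips_fn = lpips.LPIPS(net='alex')  # learned perceptual metric of [69]
+
+def evaluate_view(pred, gt):
+    """pred, gt: float32 RGB images in [0, 1] with shape (H, W, 3)."""
+    ssim = structural_similarity(pred, gt, channel_axis=-1, data_range=1.0)
+    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
+    # LPIPS expects NCHW tensors scaled to [-1, 1].
+    to_tensor = lambda x: torch.from_numpy(x).permute(2, 0, 1).unsqueeze(0) * 2 - 1
+    with torch.no_grad():
+        lp = lpips_fn(to_tensor(pred), to_tensor(gt)).item()
+    return {"SSIM": ssim, "PSNR": psnr, "LPIPS": lp}
+```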
+
+# 4.4. Ablation study
+
+We conduct ablation studies to see how each of our proposed components contributes to the final performance. We first verify the effectiveness of edge-guided depth inpainting. We sample 130 triplets from our testing sequences, evaluate the inpainted color on both the entire image and the disoccluded regions, and report the numbers in Table 2. The results show that our proposed edge-guided inpainting leads to a minor improvement in numerical metrics. Next, we examine the efficacy of our color inpainting model following the same procedure described above. We present the performance on both the entire image and the occluded regions in Table 3. We observe that our proposed model yields better perceptual quality. Figure 11 shows an example.
+
+
+Figure 11. Color inpainting leads to better visual quality. Panels: input, (a) disocclusion, (b) diffusion, (c) w/o dilation, (d) w/ dilation.
+
+
+Figure 12. Our method works with various sources of depth maps (input, MegaDepth, MiDaS, Kinect). We show the depth estimates on the top-left of the novel views.
+
+
+# 4.5. Handling different depth maps
+
+We test our method using depth maps generated by different approaches (Figure 12). We select images from the SUN RGB-D [50] dataset and obtain the corresponding depth maps from three different sources: 1) depth estimated with MegaDepth [30], 2) depth estimated with MiDaS [28], and 3) a Kinect depth sensor. We present the resulting 3D photos in Figure 12. The results show that our method can handle depth maps from different sources reasonably well.
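+
+As an illustration of how such a monocular depth estimate can be obtained, the sketch below queries the publicly released MiDaS [28] model through torch.hub; this covers only one of the three depth sources above, and the specific hub entry point and transform name are assumptions based on the public MiDaS release:
+
+```python
+import torch
+
+# Load the MiDaS model and its matching preprocessing transform from torch.hub.
+midas = torch.hub.load("intel-isl/MiDaS", "MiDaS")
+midas.eval()
+midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
+
+def estimate_depth(rgb):
+    """rgb: uint8 RGB image as a numpy array of shape (H, W, 3)."""
+    batch = midas_transforms.default_transform(rgb)
+    with torch.no_grad():
+        prediction = midas(batch)
+        depth = torch.nn.functional.interpolate(
+            prediction.unsqueeze(1), size=rgb.shape[:2],
+            mode="bicubic", align_corners=False).squeeze()
+    # MiDaS predicts relative (inverse) depth, which is then used as input to our pipeline.
+    return depth.cpu().numpy()
+```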
+
+# 5. Conclusions
+
+In this paper, we present an algorithm for creating compelling 3D photography from a single RGB-D image. Our core technical novelty lies in creating a completed layered depth image representation through context-aware color and depth inpainting. We validate our method on a wide variety of everyday scenes. Our experimental results show that our algorithm produces considerably fewer visual artifacts when compared with the state-of-the-art novel view synthesis techniques. We believe that such technology can bring 3D photography to a broader community, allowing people to easily capture scenes for immersive viewing.
+
+Acknowledgement. This project is supported in part by NSF (#1755785) and MOST-108-2634-F-007-006 and MOST-109-2634-F-007-016.
+
+# References
+
+[1] Connelly Barnes, Eli Shechtman, Adam Finkelstein, and Dan B Goldman. Patchmatch: A randomized correspondence algorithm for structural image editing. In ACM Transactions on Graphics, volume 28, page 24, 2009. 2
+[2] Chris Buehler, Michael Bosse, Leonard McMillan, Steven Gortler, and Michael Cohen. Unstructured lumigraph rendering. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques, 2001. 2
+[3] Weifeng Chen, Zhao Fu, Dawei Yang, and Jia Deng. Single-image depth perception in the wild. In NeurIPS, 2016. 3
+[4] Inchang Choi, Orazio Gallo, Alejandro Troccoli, Min H Kim, and Jan Kautz. Extreme view synthesis. In ICCV, 2019. 2, 7, 8
+[5] Soheil Darabi, Eli Shechtman, Connelly Barnes, Dan B Goldman, and Pradeep Sen. Image melding: Combining inconsistent images using patch-based synthesis. ACM Transactions on Graphics, 31(4):82-1, 2012. 2
+[6] Helisa Dhamo, Nassir Navab, and Federico Tombari. Object-driven multi-layer scene decomposition from a single image. In ICCV, 2019. 2
+[7] Helisa Dhamo, Keisuke Tateno, Iro Laina, Nassir Navab, and Federico Tombari. Peeking behind objects: Layered depth prediction from a single image. In ECCV, 2018. 2
+[8] Alexei A Efros and William T Freeman. Image quilting for texture synthesis and transfer. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pages 341-346, 2001. 2
+[9] Alexei A Efros and Thomas K Leung. Texture synthesis by non-parametric sampling. In ICCV, 1999. 2
+[10] David Eigen and Rob Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In ICCV, 2015. 3
+[11] SM Ali Eslami, Danilo Jimenez Rezende, Frederic Besse, Fabio Viola, Ari S Morcos, Marta Garnelo, Avraham Ruderman, Andrei A Rusu, Ivo Danihelka, Karol Gregor, et al. Neural scene representation and rendering. Science, 360(6394):1204-1210, 2018. 2
+[12] John Flynn, Ivan Neulander, James Philbin, and Noah Snavely. Deepstereo: Learning to predict new views from the world's imagery. In CVPR, 2016. 1, 2
+[13] Clément Godard, Oisin Mac Aodha, Michael Firman, and Gabriel J Brostow. Digging into self-supervised monocular depth estimation. In ICCV, pages 3828-3838, 2019. 2
+[14] Clément Godard, Oisin Mac Aodha, and Gabriel J Brostow. Unsupervised monocular depth estimation with left-right consistency. In CVPR, 2017. 3
+[15] Steven J Gortler, Radek Grzesczuk, Richard Szeliski, and Michael F Cohen. The lumigraph. In SIGGRAPH, volume 96, pages 43-54, 1996. 2
+[16] Kaiming He and Jian Sun. Image completion approaches using the statistics of similar patches. TPAMI, 36(12):2423-2435, 2014. 2
+[17] Peter Hedman, Suhib Alsisan, Richard Szeliski, and Johannes Kopf. Casual 3d photography. ACM Transactions on Graphics, 36(6):234, 2017. 1, 2
+[18] Peter Hedman and Johannes Kopf. Instant 3d photography. ACM Transactions on Graphics, 37(4):101, 2018. 1, 2
+
+[19] Peter Hedman, Julien Philip, True Price, Jan-Michael Frahm, George Drettakis, and Gabriel Brostow. Deep blending for free-viewpoint image-based rendering. ACM Transactions on Graphics, page 257, 2018. 1, 2
+[20] Jia-Bin Huang, Sing Bing Kang, Narendra Ahuja, and Johannes Kopf. Image completion using planar structure guidance. ACM Transactions on graphics, 33(4):129, 2014. 2
+[21] Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. Globally and locally consistent image completion. TOG, 36(4):107, 2017. 2
+[22] Sunghoon Im, Hae-Gon Jeon, Steve Lin, and In So Kweon. Dpsnet: End-to-end deep plane sweep stereo. 2019. 6, 8
+[23] Nima Khademi Kalantari, Ting-Chun Wang, and Ravi Ramamoorthi. Learning-based view synthesis for light field cameras. ACM Transactions on Graphics, 35(6):193, 2016. 2
+[24] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. 6
+[25] Nikos Komodakis and Georgios Tziritas. Image completion using efficient belief propagation via priority scheduling and dynamic pruning. TIP, 16(11):2649-2661, 2007. 2
+[26] Johannes Kopf, Fabian Langguth, Daniel Scharstein, Richard Szeliski, and Michael Goesele. Image-based rendering in the gradient domain. ACM Transactions on Graphics, 32(6):199, 2013. 1, 2
+[27] Vivek Kwatra, Arno Schödl, Irfan Essa, Greg Turk, and Aaron Bobick. Graphcut textures: image and video synthesis using graph cuts. ACM Transactions on Graphics, 22(3):277-286, 2003. 2
+[28] Katrin Lasinger, René Ranftl, Konrad Schindler, and Vladlen Koltun. Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer. arXiv preprint arXiv:1907.01341, 2019. 2, 3, 8
+[29] Marc Levoy and Pat Hanrahan. Light field rendering. In Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pages 31-42, 1996. 2
+[30] Zhengqi Li and Noah Snavely. Megadepth: Learning single-view depth prediction from internet photos. In CVPR, 2018. 2, 3, 5, 8
+[31] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014. 5
+[32] Chen Liu, Jimei Yang, Duygu Ceylan, Ersin Yumer, and Yasutaka Furukawa. Planenet: Piece-wise planar reconstruction from a single rgb image. In CVPR, 2018. 2
+[33] Guilin Liu, Fitsum A Reda, Kevin J Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro. Image inpainting for irregular holes using partial convolutions. In ECCV, 2018. 2, 5, 6
+[34] Miaomiao Liu, Xuming He, and Mathieu Salzmann. Geometry-aware deep network for single-image novel view synthesis. In CVPR, 2018. 2
+[35] Wei Liu, Xiaogang Chen, Jie Yang, and Qiang Wu. Robust color guided depth map restoration. TIP, 26(1):315-327, 2017. 3
+[36] Si Lu, Xiaofeng Ren, and Feng Liu. Depth enhancement via low-rank matrix completion. In CVPR, 2014. 3
+
+[37] Ziyang Ma, Kaiming He, Yichen Wei, Jian Sun, and Enhua Wu. Constant time weighted median filtering for stereo matching and beyond. Proceedings of the 2013 IEEE International Conference on Computer Vision, pages 49-56, 2013. 3
+[38] Leonard McMillan and Gary Bishop. Plenoptic modeling: An image-based rendering system. In Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, pages 39-46. ACM, 1995. 6
+[39] Ben Mildenhall, Pratul P. Srinivasan, Rodrigo Ortiz-Cayon, Nima Khademi Kalantari, Ravi Ramamoorthi, Ren Ng, and Abhishek Kar. Local light field fusion: Practical view synthesis with prescriptive sampling guidelines. ACM Transactions on Graphics (TOG), 38(4), July 2019. 1, 2, 7, 8
+[40] Tai-Jiang Mu, Ju-Hong Wang, Song-Pei Du, and Shi-Min Hu. Stereoscopic image completion and depth recovery. The Visual Computer, 30(6-8):833-843, 2014. 3
+[41] Kamyar Nazeri, Eric Ng, Tony Joseph, Faisal Qureshi, and Mehran Ebrahimi. Edgeconnect: Generative image inpainting with adversarial edge learning. arXiv preprint, 2019. 3, 5, 6
+[42] Simon Niklaus, Long Mai, Jimei Yang, and Feng Liu. 3d ken burns effect from a single image. ACM Transactions on Graphics (TOG), 38(6), Nov. 2019. 2, 3
+[43] Eunbyung Park, Jimei Yang, Ersin Yumer, Duygu Ceylan, and Alexander C Berg. Transformation-grounded image generation network for novel 3d view synthesis. In CVPR, 2017. 2
+[44] Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016. 2
+[45] Eric Penner and Li Zhang. Soft 3d reconstruction for view synthesis. ACM Transactions on Graphics (TOG), 36(6):235, 2017. 1, 2
+[46] Yael Pritch, Eitam Kav-Venaki, and Shmuel Peleg. Shift-map image editing. In CVPR, pages 151-158. IEEE, 2009. 2
+[47] Yurui Ren, Xiaoming Yu, Ruonan Zhang, Thomas H Li, Shan Liu, and Ge Li. Structureflow: Image inpainting via structure-aware appearance flow. In ICCV, 2019. 3
+[48] Jonathan Shade, Steven Gortler, Li-wei He, and Richard Szeliski. Layered depth images. In Proceedings of the 25th annual conference on Computer graphics and interactive techniques, pages 231–242. ACM, 1998. 2, 3
+[49] Sudipta N Sinha, Johannes Kopf, Michael Goesele, Daniel Scharstein, and Richard Szeliski. Image-based rendering for scenes with reflections. ACM Transactions on Graphics, 31(4):100-1, 2012. 2
+[50] Shuran Song, Samuel P Lichtenberg, and Jianxiong Xiao. Sun rgb-d: A rgb-d scene understanding benchmark suite. In CVPR, 2015. 8
+[51] Yuhang Song, Chao Yang, Zhe Lin, Xiaofeng Liu, Qin Huang, Hao Li, and C-C Jay Kuo. Contextual-based image inpainting: Infer, match, and translate. arXiv preprint arXiv:1711.08590, 2017. 2
+[52] Pratul P Srinivasan, Richard Tucker, Jonathan T Barron, Ravi Ramamoorthi, Ren Ng, and Noah Snavely. Pushing the boundaries of view extrapolation with multiplane images. In CVPR, 2019. 1, 2, 7, 8
+
+[53] Pratul P Srinivasan, Tongzhou Wang, Ashwin Sreelal, Ravi Ramamoorthi, and Ren Ng. Learning to synthesize a 4d rgbd light field from a single image. In ICCV, 2017. 2
+[54] Shao-Hua Sun, Minyoung Huh, Yuan-Hong Liao, Ning Zhang, and Joseph J Lim. Multi-view to novel view: Synthesizing novel views with self-learned confidence. In ECCV, 2018. 2
+[55] Lech Świrski, Christian Richardt, and Neil A Dodgson. Layered photo pop-up. In ACM SIGGRAPH 2011 Posters, 2011. 2
+[56] Shubham Tulsiani, Richard Tucker, and Noah Snavely. Layer-structured 3d scene inference via view synthesis. In ECCV, 2018. 2
+[57] Liang Wang, Hailin Jin, Ruigang Yang, and Minglun Gong. Stereoscopic inpainting: Joint color and depth completion from stereo images. In CVPR, 2008. 3
+[58] Yonatan Wexler, Eli Shechtman, and Michal Irani. Spacetime completion of video. TPAMI, (3):463-476, 2007. 2
+[59] Thomas Whelan, Michael Goesele, Steven J Lovegrove, Julian Straub, Simon Green, Richard Szeliski, Steven Butterfield, Shobhit Verma, and Richard Newcombe. Reconstructing scenes with mirror and glass surfaces. ACM Transactions on Graphics, 37(4):102, 2018. 1, 2
+[60] Olivia Wiles, Georgia Gkioxari, Richard Szeliski, and Justin Johnson. Synsin: End-to-end view synthesis from a single image. In CVPR, 2020. 2
+[61] Junyuan Xie, Ross Girshick, and Ali Farhadi. Deep3d: Fully automatic 2d-to-3d video conversion with deep convolutional neural networks. In ECCV, 2016. 2
+[62] Wei Xiong, Zhe Lin, Jimei Yang, Xin Lu, Connelly Barnes, and Jiebo Luo. Foreground-aware image inpainting. 2019. 3, 5
+[63] Zexiang Xu, Sai Bi, Kalyan Sunkavalli, Sunil Hadap, Hao Su, and Ravi Ramamoorthi. Deep view synthesis from sparse photometric images. ACM Transactions on Graphics (TOG), 38(4):76, 2019. 2
+[64] Zhaoyi Yan, Xiaoming Li, Mu Li, Wangmeng Zuo, and Shiguang Shan. Shift-net: Image inpainting via deep feature rearrangement. In ECCV, September 2018. 2
+[65] Chao Yang, Xin Lu, Zhe Lin, Eli Shechtman, Oliver Wang, and Hao Li. High-resolution image inpainting using multiscale neural patch synthesis. In CVPR, volume 1, page 3, 2017. 2
+[66] Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas S Huang. Generative image inpainting with contextual attention. In CVPR, 2018. 2
+[67] Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas S Huang. Free-form image inpainting with gated convolution. In ICCV, 2019. 2
+[68] Qiong Zeng, Wenzheng Chen, Huan Wang, Changhe Tu, Daniel Cohen-Or, Dani Lischinski, and Baoquan Chen. Hallucinating stereoscopy from a single image. In Computer Graphics Forum, volume 34, pages 1–12, 2015. 2
+[69] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018. 4, 8
+[70] Yinda Zhang and Thomas Funkhouser. Deep depth completion of a single rgb-d image. In CVPR, 2018. 3
+
+[71] Tinghui Zhou, Matthew Brown, Noah Snavely, and David G Lowe. Unsupervised learning of depth and ego-motion from video. In CVPR, volume 2, page 7, 2017. 3
+[72] Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, and Noah Snavely. Stereo magnification: Learning view synthesis using multiplane images. ACM Transactions on Graphics, 2018. 1, 2, 5, 7, 8
+[73] Tinghui Zhou, Shubham Tulsiani, Weilun Sun, Jitendra Malik, and Alexei A Efros. View synthesis by appearance flow. In ECCV, 2016. 2
\ No newline at end of file
diff --git a/3dphotographyusingcontextawarelayereddepthinpainting/images.zip b/3dphotographyusingcontextawarelayereddepthinpainting/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..d0a42dc2f1dac81906915425a39c180dced6a44a
--- /dev/null
+++ b/3dphotographyusingcontextawarelayereddepthinpainting/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d8dd7fb42b3befc1f783567f56f871feb51d35640d7017b7af0a1986e90ac30f
+size 818330
diff --git a/3dphotographyusingcontextawarelayereddepthinpainting/layout.json b/3dphotographyusingcontextawarelayereddepthinpainting/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..1a384465d08b56e35d7db5e878778828d408897b
--- /dev/null
+++ b/3dphotographyusingcontextawarelayereddepthinpainting/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b23e2c644a4a7d268b9a59bc9cceb1ba5f16ffca66efa84e411da97ea6e40f8f
+size 521494
diff --git a/3dregnetadeepneuralnetworkfor3dpointregistration/a5f4920e-2130-4def-a31a-c357b9131bbf_content_list.json b/3dregnetadeepneuralnetworkfor3dpointregistration/a5f4920e-2130-4def-a31a-c357b9131bbf_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..6e1ebcc45f8526cf7ca40bcbdb27e5763be5e986
--- /dev/null
+++ b/3dregnetadeepneuralnetworkfor3dpointregistration/a5f4920e-2130-4def-a31a-c357b9131bbf_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9fce90a25e16496cfe0d1d2abf75c36d949fb8c3a17e2aba8cec8884b5001314
+size 88756
diff --git a/3dregnetadeepneuralnetworkfor3dpointregistration/a5f4920e-2130-4def-a31a-c357b9131bbf_model.json b/3dregnetadeepneuralnetworkfor3dpointregistration/a5f4920e-2130-4def-a31a-c357b9131bbf_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..726bd89baacad1dfae9ce395dabc8cb49d442f3a
--- /dev/null
+++ b/3dregnetadeepneuralnetworkfor3dpointregistration/a5f4920e-2130-4def-a31a-c357b9131bbf_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:43f06eb313b8993eba5564ef96b2c93e1c68d7f081fcc726a0410c9d29348495
+size 113174
diff --git a/3dregnetadeepneuralnetworkfor3dpointregistration/a5f4920e-2130-4def-a31a-c357b9131bbf_origin.pdf b/3dregnetadeepneuralnetworkfor3dpointregistration/a5f4920e-2130-4def-a31a-c357b9131bbf_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..69908760108c2d8d81284cd80a6de7eeb957205a
--- /dev/null
+++ b/3dregnetadeepneuralnetworkfor3dpointregistration/a5f4920e-2130-4def-a31a-c357b9131bbf_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1c745c2ed48512334433eff3cdca5d9420e9a2327017e9b879ed4cb0db14f458
+size 1629132
diff --git a/3dregnetadeepneuralnetworkfor3dpointregistration/full.md b/3dregnetadeepneuralnetworkfor3dpointregistration/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..e7212c6042a2084b9ad0e5301522319702030c57
--- /dev/null
+++ b/3dregnetadeepneuralnetworkfor3dpointregistration/full.md
@@ -0,0 +1,394 @@
+# 3DRegNet: A Deep Neural Network for 3D Point Registration
+
+G. Dias Pais1, Srikumar Ramalingam2, Venu Madhav Govindu3, Jacinto C. Nascimento1, Rama Chellappa4, and Pedro Miraldo1
+
+$^{1}$ Instituto Superior Técnico, Lisboa $^{2}$ Google Research, NY $^{3}$ Indian Institute of Science, Bengaluru $^{4}$ University of Maryland, College Park
+
+# Abstract
+
+We present 3DRegNet, a novel deep learning architecture for the registration of 3D scans. Given a set of 3D point correspondences, we build a deep neural network to address the following two challenges: (i) classification of the point correspondences into inliers/outliers, and (ii) regression of the motion parameters that align the scans into a common reference frame. With regard to regression, we present two alternative approaches: (i) a Deep Neural Network (DNN) registration and (ii) a Procrustes approach using SVD to estimate the transformation. Our correspondence-based approach achieves a higher speedup compared to competing baselines. We further propose the use of a refinement network, which consists of a smaller 3DRegNet as a refinement to improve the accuracy of the registration. Extensive experiments on two challenging datasets demonstrate that we outperform other methods and achieve state-of-the-art results. The code is available at https://github.com/3DVisionISR/3DRegNet.
+
+# 1. Introduction
+
+We address the problem of 3D registration, which is one of the classical and fundamental problems in geometrical computer vision due to its wide variety of vision, robotics, and medical applications. In 3D registration, the 6 Degrees of Freedom (DoF) motion parameters between two scans are computed given noisy point correspondences (containing outliers). The standard approach is to use minimal solvers that employ three-point correspondences (see [48, 39]) in a RANSAC [17] framework, followed by refinement techniques such as the Iterative Closest Point (ICP) [6]. In this paper, we investigate if the registration problem can be solved using a deep learning methodology. Specifically, we study if deep learning methods can bring any complementary advantages over classical registration methods. In particular, we wish to achieve speedup without compromising the registration accuracy in the presence of outliers.
+
+Figure 1: Given a set of 3D point correspondences from two scans with outliers, our proposed network 3DRegNet simultaneously classifies the point correspondences into inliers and outliers (see (a)), and also computes the transformation (rotation, translation) for the alignment of the scans (see (b)). 3DRegNet is significantly faster and outperforms other standard geometric methods. (a) Inliers/outliers classification using the proposed 3DRegNet vs. a RANSAC approach; green and red colors indicate the inliers and outliers, respectively. (b) Estimation of the transformation that aligns two point clouds, 3DRegNet vs. the current state-of-the-art Fast Global Registration (FGR) method [65].
+
+In other words, the challenge is not in computing the pose given the point correspondences, but in how to efficiently handle the outliers. Figure 1 illustrates the main goals of this paper. Figure 1(a) depicts the classification of noisy point correspondences into inliers and outliers using 3DRegNet (left) and RANSAC (right) for aligning two scans. Figure 1(b) shows the estimation of the transformation that aligns two point clouds using the proposed 3DRegNet (left) and the current state-of-the-art FGR [65] (right).
+
+In Fig. 2(a), we show our proposed architecture with two sub-blocks: classification and registration. The former takes a set of noisy point correspondences between two scans and produces weight (confidence) parameters that indicate whether a given point correspondence is an inlier or an outlier. The latter directly produces the 6 DoF motion parameters for the alignment of two 3D scans. Our main contributions are as follows. We present a novel deep neural network architecture for solving the problem of 3D scan registration, with the possibility of a refinement network that can fine-tune the results. While achieving a significant speedup, our method achieves state-of-the-art registration performance.
+
+
+Figure 2: Two proposed architectures. (a) Depiction of the 3DRegNet with DNNs for registration: our first proposal with the classification and the registration blocks. (b) Representation of the 3DRegNet with Procrustes: our second proposal with the same classification block as in the first one, but with a different registration block based on the differentiable Procrustes method. (c) Classification block using C ResNets, which receives a set of point correspondences as input and outputs weights classifying them as inliers/outliers. (d) Registration block with DNNs (used in the architecture shown in (a)), which operates on the features of the classification block and whose parameters are obtained through a DNN.
+
+# 2. Related Work
+
+The ICP is widely considered as the gold standard approach to solve point cloud registration [6, 44]. However since ICP often gets stuck in local minima, other approaches have proposed extensions or generalizations that achieve both efficiency and robustness, e.g., [49, 40, 41, 58, 20, 31, 43, 29]. The 3D registration can also be viewed as a non-rigid problem motivating several works [67, 5, 51, 34]. A survey of rigid and non-rigid registration of 3D point clouds is available in [52]. An optimal least-squares solution can be obtained using methods such as [53, 49, 40, 38, 24, 57, 65, 7, 36]. Many of these methods require either a good initialization or identification of inliers using RANSAC. Subsequently, the optimal pose is estimated using only the selected inliers. In contrast to the above strategies, we focus on jointly solving (i) the inlier correspondences and (ii) the estimation of the transformation parameters without requiring an initialization. We propose a unified deep learning framework to address both challenges mentioned above.
+
+Deep learning has been used to solve 3D registration problems in diverse contexts [14, 15, 23]. PointNet is a Deep Neural Network (DNN) that produces classification and segmentation results for unordered point clouds [46]. It strives to achieve results that are invariant to the order of points, rotations, and translations. To achieve invariance, PointNet uses several Multi-Layer Perceptrons (MLP) individually on different points, and then uses a symmetric function on top of the outputs from the MLPs. PointNetLK builds on PointNet and proposes a DNN loop scheme to compute the 3D point cloud alignment [2]. In [54], the authors derive an alternative approach to ICP, i.e., alternating between finding the closest points and computing the 3D registration. The proposed method focuses on finding the closest points at each step; the registration is computed with Procrustes. [32] proposes a network that initially generates correspondences based on learned matching probabilities and then creates an aligned point cloud. In [56, 50, 25, 55], other methods are proposed for object detection and pose estimation on point clouds with 3D bounding boxes. In contrast to these methods, our registration is obtained from pre-computed 3D point matches, such as [47, 61], instead of using the original point clouds, thereby achieving a considerable speedup.
+
+A well-known approach is to use point feature histograms as features for describing a 3D point [47]. The matching of 3D points can also be achieved by extracting features using convolutional neural networks [61, 12, 59, 15, 13, 19]. Some methods directly extract 3D features from the point clouds that are invariant to the 3D environment (spherical CNNs) [10, 16]. A deep network has recently been designed for computing the pose for direct image-to-image registration [21]. Using graph convolutional networks and cycle consistency losses, one can train an image matching algorithm in an unsupervised manner [45].
+
+In [60], a deep learning method for classifying 2D point correspondences into inliers/outliers is proposed. The regression of the Essential Matrix is computed separately using eigendecomposition and the inlier correspondences. The input of the network consists only of pixel coordinates instead of the original images, allowing for faster inference. The method was improved in [62] by proposing hierarchically extracted and aggregated local correspondences; it is also insensitive to the order of correspondences. In [11], an eigendecomposition-free approach was introduced to train a deep network whose loss depends on the eigenvector corresponding to a zero eigenvalue of a matrix predicted by the network. This was also applied to 2D outlier removal. In [33], a DNN classifier was trained on a general representation of putative matches, exploiting the consensus of local neighborhood structures and a nearest-neighbor strategy. In contrast with the methods mentioned above, our technique aims at an end-to-end solution to the registration and inlier/outlier classification from 3D point correspondences.
+
+For 3D reconstruction using a large collection of scans, rotation averaging can be used to improve the pairwise relative pose estimates using robust methods [8]. Recently, it was shown that it would be possible to utilize deep neural networks to compute the weights for different pairwise relative pose estimates [26]. The work in [64] focuses on learning 3D match of features in three views. Our paper focuses on the problem of pairwise registration of 3D scans.
+
+# 3. Problem Statement
+
+Given a set of $N$ 3D point correspondences $\{(\mathbf{p}_i,\mathbf{q}_i)\}_{i = 1}^N$ , where $\mathbf{p}_i\in \mathbb{R}^3$ , $\mathbf{q}_i\in \mathbb{R}^3$ are the 3D points in the first and second scan respectively, our goal is to compute the transformation parameters (rotation matrix $\mathbf{R}\in \mathcal{SO}(3)$ and translation vector $\mathbf{t}\in \mathbb{R}^3$ ) as follows
+
+$$
+\mathbf {R} ^ {*}, \mathbf {t} ^ {*} = \underset {\mathbf {R} \in \mathcal {S O} (3), \mathbf {t} \in \mathbb {R} ^ {3}} {\operatorname {a r g m i n}} \sum_ {n = 1} ^ {N} \rho (\mathbf {q} _ {n}, \mathbf {R p} _ {n} + \mathbf {t}), \tag {1}
+$$
+
+where $\rho (\mathbf{a},\mathbf{b})$ is some distance metric. The problem addressed in this work is shown in Fig. 1. The input consists of $N$ point correspondences, and the output consists of $N + M + 3$ variables. Specifically, the first $N$ output variables form a weight vector $W\coloneqq \{w_{i}\}_{i = 1}^{N}$ , where $w_{i}\in [0,1)$ represents the confidence that the $i$ -th correspondence pair $(\mathbf{p}_i,\mathbf{q}_i)$ is an inlier. By comparing $w_{i}$ with a threshold $\mathcal{T}$ , i.e., $w_{i}\geq \mathcal{T}$ , we can classify all the input correspondences into inliers/outliers. The next $M$ output variables represent the rotation parameters, i.e., $(v_{1},\ldots ,v_{M})$ . The remaining three parameters $(t_1,t_2,t_3)$ denote the translation. Although a 3D rotation has exactly 3 degrees of freedom, there are different possible parameterizations. As shown in [66], choosing the correct parameterization for the rotation is essential for the overall performance of these approaches. Previous methods use an over-parameterization for the rotation (e.g., PoseNet [27] uses a four-parameter quaternion for representing the rotation, while deep PnP [11] uses nine parameters). We study the different parameterizations of the rotation and evaluate their performance.
+
+# 4. 3DRegNet
+
+The proposed 3DRegNet architecture is shown in Fig. 2 with two blocks for classification and registration. We have two possible approaches for the registration block, either using DNNs or differentiable Procrustes. This choice does not affect the loss functions presented in Sec. 4.1.
+
+Classification: The classification block (see the respective block in Fig. 2(c)) follows the ideas of previous works [46, 60, 11, 62]. The input is a set of $N$ 6-tuples, i.e., the 3D point correspondences $\{(\mathbf{p}_i,\mathbf{q}_i)\}_{i = 1}^N$ between the two scans.
+
+Each 3D point correspondence is processed by a fully connected layer with 128 ReLU activation functions. The layer weights are shared across the individual $N$ point correspondences, and the output is of dimension $N \times 128$, i.e., we generate a 128-dimensional feature from every point correspondence. The $N \times 128$ output is then passed through $C$ deep ResNets [22], with weight-shared fully connected layers instead of convolutional layers. At the end, we use another fully connected layer with ReLU ($\mathrm{ReLU}(x) = \max(0, x)$) followed by $\tanh$ ($\tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}} \in (-1, 1)$) units to produce the weights in the range $w_i \in [0, 1)$. The number $C$ of deep ResNets depends on the complexity of the transformation to be estimated, as discussed in Sec. 5.
+
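+A hedged PyTorch-style re-implementation sketch of this classification block is shown below; the layer widths follow the text, while the exact ResNet layout, initialization, and other details are assumptions (the released code is in TensorFlow):
+
+```python
+import torch
+import torch.nn as nn
+
+class FCResNetBlock(nn.Module):
+    """ResNet block built from weight-shared fully connected layers (applied per correspondence)."""
+    def __init__(self, dim=128):
+        super().__init__()
+        self.fc1 = nn.Linear(dim, dim)
+        self.fc2 = nn.Linear(dim, dim)
+
+    def forward(self, x):                      # x: (B, N, 128)
+        return x + self.fc2(torch.relu(self.fc1(x)))
+
+class ClassificationBlock(nn.Module):
+    def __init__(self, num_resnets=8, dim=128):
+        super().__init__()
+        self.embed = nn.Linear(6, dim)          # one 6-tuple (p_i, q_i) -> 128-d feature
+        self.resnets = nn.ModuleList([FCResNetBlock(dim) for _ in range(num_resnets)])
+        self.out = nn.Linear(dim, 1)
+
+    def forward(self, corr):                    # corr: (B, N, 6) point correspondences
+        feats = [torch.relu(self.embed(corr))]
+        for block in self.resnets:
+            feats.append(block(feats[-1]))
+        o = self.out(feats[-1]).squeeze(-1)     # logits o_i used by the classification loss
+        w = torch.tanh(torch.relu(o))           # weights w_i in [0, 1)
+        return w, o, feats                      # the C + 1 feature maps feed the registration block
+```
+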
+Registration with DNNs: The inputs to this block are the features extracted from the point correspondences. As shown in Fig. 2(d), we use pooling to extract meaningful features of dimension $128 \times 1$ from each layer of the classification block. We extract features at $C + 1$ stages of the classification, i.e., the first one is extracted before the first ResNet and the last one is extracted after the $C$ -th ResNet. Based on our experiments, max-pooling performed the best in comparison with other choices such as average pooling. After the pooling is completed, we apply context normalization, as introduced in [60], and concatenate the $C + 1$ feature maps (see Figs. 2(a) and 2(d)). This process normalizes the features and helps to extract the necessary and fixed number of features to obtain the transformation at the end of the registration block (which should be independent of $N$ ). The features from the context normalization are of size $(C + 1) \times 128$ , which is then passed on to a convolutional layer with 8 channels. Each filter slides over a 3-by-3 patch with a stride of 2 along the columns and 1 along the rows. The output of the convolution is then fed into two fully connected layers with 256 units each, with a ReLU between the layers, that generate the output of $M + 3$ variables: $\mathbf{v} = (v_{1},\dots,v_{M})$ and $\mathbf{t} = (t_1,t_2,t_3)$ .
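+
+For concreteness, a minimal sketch of the context normalization operation of [60] is given below: each feature dimension is normalized across the correspondences of a scan pair. How exactly it interleaves with the pooling stages here follows the description above and is otherwise an assumption:
+
+```python
+import torch
+
+def context_norm(feat, eps=1e-3):
+    """feat: (B, N, D) per-correspondence features for a batch of scan pairs."""
+    mean = feat.mean(dim=1, keepdim=True)   # statistics over the N correspondences
+    std = feat.std(dim=1, keepdim=True)
+    return (feat - mean) / (std + eps)
+```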
+
+Registration with Differentiable Procrustes: In contrast to the previous block, we present another alternative to perform the registration. Now, we obtain the desired transformation through the point correspondences (see Fig. 2(b)). We filter out the outliers and compute the centroid of the inliers, using this as the origin. Since the centroids of the point clouds are now at the origin, we only need to obtain the rotation between them. Note that the outlier filtering and the shift in the centroids can be seen as intermediate layers, thereby allowing end-to-end training for both classification and pose computation. This rotation is computed from the SVD of the matrix $\mathbf{M} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{\mathrm{T}}$ [3], where $\mathbf{M} \in \mathbb{R}^{3 \times 3}$ is as follows:
+
+$$
+\mathbf {M} = \sum_ {i \in \mathcal {I}} w _ {i} \mathbf {p} _ {i} \mathbf {q} _ {i} ^ {T}, \tag {2}
+$$
+
+where $\mathcal{I}$ represents the set of inliers obtained from the classification block. The rotation is obtained by
+
+$$
+\mathbf {R} = \mathbf {U} \operatorname {d i a g} (1, 1, \det (\mathbf {U V} ^ {T})) \mathbf {V} ^ {T}. \tag {3}
+$$
+
+The translation parameters are given by
+
+$$
+\mathbf {t} = \frac {1}{N _ {\mathcal {I}}} \left(\sum_ {i \in \mathcal {I}} \mathbf {p} _ {i} - \mathbf {R} \sum_ {i \in \mathcal {I}} \mathbf {q} _ {i}\right), \tag {4}
+$$
+
+where $N_{\mathcal{I}}$ and $\mathcal{I}$ are the number of inliers and the inlier set, respectively.
+
+# 4.1. Loss Functions
+
+Our overall loss function has two individual loss terms, namely classification and registration losses from the two blocks of the network.
+
+Classification Loss: The classification loss penalizes incorrect correspondences using cross-entropy:
+
+$$
+\mathcal {L} _ {c} ^ {k} = \frac {1}{N} \sum_ {i = 1} ^ {N} \gamma_ {i} ^ {k} H \left(y _ {i} ^ {k}, \sigma \left(o _ {i} ^ {k}\right)\right), \tag {5}
+$$
+
+where $o_i^k$ are the network outputs before passing them through ReLU and tanh for computing the weights $w_{i}$, and $\sigma$ denotes the sigmoid activation function. Note that the motion between pairs of scans is different, and the index $k$ is used to denote the associated training pair of scans. $H(.,.)$ is the cross-entropy function, and $y_{i}^{k}$ (equal to one or zero) is the ground truth, which indicates whether the $i$ -th point correspondence is an inlier or an outlier. The term $\mathcal{L}_c^k$ is the classification loss for the 3D point correspondences of a particular scan pair with index $k$. The term $\gamma_{i}^{k}$ balances the classification loss by the number of examples for each class in the associated scan pair $k$.
+
+Registration Loss: The registration loss penalizes misaligned points in the point cloud using the distance between the 3D points in the second scan $\mathbf{q}_i$ and the transformed points from the first 3D scan $\mathbf{p}_i$ , for $i = \{1, \dots, N\}$ . The loss function becomes
+
+$$
+\mathcal {L} _ {r} ^ {k} = \frac {1}{N} \sum_ {i = 1} ^ {N} \rho \left(\mathbf {q} _ {i} ^ {k}, \mathbf {R} ^ {k} \mathbf {p} _ {i} ^ {k} + \mathbf {t} ^ {k}\right), \tag {6}
+$$
+
+where $\rho (.,.)$ is the distance metric function. For a given scan pair $k$, the relative motion parameters obtained from the registration block are given by $\mathbf{R}^k$ and $\mathbf{t}^k$. We consider and evaluate the following distance metrics in Sec. 7: $L_{1}$, weighted least squares, $L_{2}$, and Geman-McClure [18].
+
+Total Loss: The individual loss functions are given below:
+
+$$
+\mathcal{L}_{c} = \frac{1}{K} \sum_{k = 1}^{K} \mathcal{L}_{c}^{k} \quad \text{and} \quad \mathcal{L}_{r} = \frac{1}{K} \sum_{k = 1}^{K} \mathcal{L}_{r}^{k}, \tag{7}
+$$
+
+where $K$ is the total number of scan pairs in the training set. The total training loss is the sum of both the classification and the registration loss terms:
+
+$$
+\mathcal {L} = \alpha \mathcal {L} _ {c} + \beta \mathcal {L} _ {r}, \tag {8}
+$$
+
+where the coefficients $\alpha$ and $\beta$ are hyperparameters that are manually set for classification and registration terms in the loss function.
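+
+A hedged PyTorch sketch of the combined loss of Eqs. (5)-(8) for a single scan pair is shown below; the $L_{1}$ choice for $\rho$ follows the ablation in Sec. 7, while the specific form of the balancing weights $\gamma_i^k$ and the tensor conventions are assumptions:
+
+```python
+import torch
+import torch.nn.functional as F
+
+def total_loss(o, y, p, q, R, t, alpha=0.5, beta=1e-3):
+    """o: (N,) classification logits; y: (N,) float inlier labels in {0, 1};
+    p, q: (N, 3) correspondences; R: (3, 3); t: (3,)."""
+    # Eq. (5): cross-entropy on the logits o_i, balanced by inverse class frequency
+    # (one reasonable choice for the gamma weights described in the text).
+    pos = y.sum().clamp(min=1.0)
+    neg = (1 - y).sum().clamp(min=1.0)
+    gamma = torch.where(y > 0.5, y.numel() / (2 * pos), y.numel() / (2 * neg))
+    loss_c = (gamma * F.binary_cross_entropy_with_logits(o, y, reduction='none')).mean()
+    # Eq. (6) with an L1 distance for rho.
+    loss_r = (q - (p @ R.T + t)).abs().sum(dim=1).mean()
+    # Eq. (8): weighted sum of the two terms.
+    return alpha * loss_c + beta * loss_r
+```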
+
+# 5. 3DRegNet Refinement
+
+We describe our architecture consisting of two 3DRegNets, where the second network provides a regression refinement (see Fig. 3(a)). A commonly adopted approach for 3D registration is to first obtain a rough estimate of the transformation, followed by a refinement strategy. Following this reasoning, we consider the possibility of using an additional 3DRegNet. The first 3DRegNet provides a rough estimate and is trained for larger rotation and translation parameter values. Subsequently, the second, smaller network is used for refinement, estimating smaller transformations. This can also be seen as deep supervision, which has been shown to be useful in many applications [30]. Figure 3(a) illustrates the proposed architecture.
+
+Architecture: As shown in Fig. 3(a), we use two 3DRegNets, where the first one is used to obtain the coarse registration, followed by the second one performing the refinement. Each 3DRegNet is characterized by the regression parameters $\{(\mathbf{R}^r,\mathbf{t}^r)\}$ and the classification weights $\{w_i^r\}_{i = 1}^N$ with $r = \{1,2\}$. We note that the loss on the second network has to consider the cumulative regression of both 3DRegNets.
+
+
+Figure 3: (a) Scheme for refinement using 3DRegNet: the proposed architecture with two 3DRegNet blocks in sequence. (b) Before refinement and (c) after refinement: an improvement is obtained upon using an additional 3DRegNet to fine-tune or refine the registration from the first 3DRegNet.
+
+Hence, the original set of point correspondences $\{(\mathbf{p}_i,\mathbf{q}_i)\}_{i = 1}^N$ is transformed by the following cumulative rotation and translation
+
+$$
+\mathbf{R} = \mathbf{R}^{2} \mathbf{R}^{1} \quad \text{and} \quad \mathbf{t} = \mathbf{R}^{2} \mathbf{t}^{1} + \mathbf{t}^{2}. \tag{9}
+$$
+
+Notice that, in (9), the update of the transformation parameters $\mathbf{R}$ and $\mathbf{t}$ , depends on the estimates of both 3DRegNets. The point correspondence update at the refinement network becomes
+
+$$
+\left\{\left(\mathbf {p} _ {i} ^ {1}, \mathbf {q} _ {i} ^ {1}\right) \right\} = \left\{\left(w _ {i} ^ {1} \left(\mathbf {R} ^ {1} \mathbf {p} _ {i} + \mathbf {t} ^ {1}\right), w _ {i} ^ {1} \mathbf {q} _ {i}\right) \right\}, \tag {10}
+$$
+
+forcing the second network to estimate smaller transformations that correct for any residual transformation remaining after the first 3DRegNet block.
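+
+The sketch below is a direct transcription of the cumulative update of Eqs. (9) and (10) for the two-network cascade, with NumPy arrays as an assumed representation:
+
+```python
+import numpy as np
+
+def compose_refinement(R1, t1, R2, t2):
+    """Cumulative transformation of Eq. (9): first network (R1, t1), refinement (R2, t2)."""
+    R = R2 @ R1
+    t = R2 @ t1 + t2
+    return R, t
+
+def refine_inputs(p, q, w1, R1, t1):
+    """Correspondence update of Eq. (10) fed to the second 3DRegNet.
+    p, q: (N, 3) correspondences; w1: (N,) weights predicted by the first network."""
+    p1 = w1[:, None] * (p @ R1.T + t1)
+    q1 = w1[:, None] * q
+    return p1, q1
+```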
+
+Loss Functions: The classification and registration losses are computed as in (5) and (6) for each of the two networks and then averaged to form the total loss terms:
+
+$$
+\mathcal{L}_{c} = \frac{1}{K} \sum_{k = 1}^{K} \frac{1}{2} \sum_{r = 1}^{2} \mathcal{L}_{c}^{k, r} \quad \text{and} \quad \mathcal{L}_{r} = \frac{1}{K} \sum_{k = 1}^{K} \frac{1}{2} \sum_{r = 1}^{2} \mathcal{L}_{r}^{k, r}. \tag{11}
+$$
+
+We then apply (8) as before.
+
+# 6. Datasets and 3DRegNet Training
+
+Datasets: We use two datasets: the synthetic augmented ICL-NUIM dataset [9] and the SUN3D dataset [63], which consists of real images. The former consists of 4 scenes with a total of about 25000 different pairs of connected point clouds. The latter is composed of 13 randomly selected scenes, with a total of around 3700 different connected pairs. Using FPFH [47], we extract about 3000 3D point correspondences for each pair of scans in both datasets. Based on the ground-truth transformations and the 3D distances between the transformed 3D points, correspondences are labeled as inliers/outliers using a predefined threshold (setting $y_{n}^{k}$ to one or zero). The threshold is set such that the number of outliers is about $50\%$ of the total matches. We select $70\%$ of the pairs for training and $30\%$ for testing for the ICL-NUIM dataset. With respect to the SUN3D dataset, we select 10 scenes for training and 3 scenes, completely unseen with respect to the training set, for testing.
+
+Training: The proposed architecture is implemented in TensorFlow [1]. We used $C = 8$ for the first 3DRegNet and $C = 4$ for the refinement 3DRegNet. The other values for the registration blocks are detailed in Sec. 4. The network was trained for 1000 epochs with 1092 steps for the ICL-NUIM dataset and for 1000 epochs with 200 steps for the SUN3D dataset. The learning rate was $10^{-4}$, using the Adam optimizer [28]. A cross-validation strategy is used during training. We used a batch size of 16. The coefficients of the classification and registration terms are given by $\alpha = 0.5$ and $\beta = 10^{-3}$. The network was trained using an Intel i7-7600 and an NVIDIA GeForce GTX 1070. For a fair comparison to the classical methods, all run times were obtained using the CPU only.
+
+Data Augmentation: To generalize to unseen rotations, we augment the training dataset by applying random rotations. Taking inspiration from [4, 37, 42], we propose the use of Curriculum Learning (CL) data augmentation. The idea is to start small [4] (i.e., with easier tasks containing small rotation values) and to order the tasks by increasing difficulty; the training only proceeds to harder tasks after the easier ones are completed. However, we adopt an interesting alternative to traditional CL. Let the magnitude of the augmented rotation applied during training be denoted by $\theta$, and let $\tau \in [0,1]$ denote the normalized progress within an epoch. In CL, we should start small at the beginning of each epoch. However, this breaks the smoothness of the $\theta$ values, since the maximum value $\theta_{\mathrm{Max}}$ has already been reached at the end of the previous epoch. This can easily be tackled if we progressively increase $\theta$ up to $\theta_{\mathrm{Max}}$ at $\tau = 0.5$ and decrease $\theta$ afterwards.
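+
+A minimal sketch of this curriculum schedule for the augmentation magnitude is given below; the triangular profile peaking at $\tau = 0.5$ follows the description above, while the exact functional form is an assumption:
+
+```python
+def augmentation_angle(tau, theta_max=50.0):
+    """Rotation magnitude (degrees) applied at normalized epoch progress tau in [0, 1].
+    Ramps up to theta_max at tau = 0.5 and back down, so theta stays smooth across epochs."""
+    return theta_max * (1.0 - abs(2.0 * tau - 1.0))
+```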
+
+# 7. Experimental Results
+
+In this section, we start by defining the evaluation metrics used throughout the experiments. Then, we present some ablation studies considering: 1) the use of different distance metrics; 2) different parameterizations for the rotation; 3) the use of Procrustes vs. DNNs for estimating the transformation parameters; 4) the sensitivity to the number of point correspondences; 5) the use of data augmentation in the training; and 6) the use of the refinement network. The ablation studies are performed on the ICL-NUIM dataset. We conclude the experiments with a comparison with previous methods and the application of our method to unseen scenes.
+
+Table 1: Evaluation of the different distance functions on the training of the proposed architecture.
+
+| Distance Function | Rotation Mean [deg] | Rotation Median [deg] | Translation Mean [m] | Translation Median [m] | Time [s] | Classification Accuracy |
+| L2-norm | 2.44 | 1.64 | 0.087 | 0.067 | 0.0295 | 0.95 |
+| L1-norm | 1.37 | 0.90 | 0.054 | 0.042 | 0.0281 | 0.96 |
+| Weighted L2-norm | 1.89 | 1.33 | 0.070 | 0.056 | 0.0294 | 0.95 |
+| Geman-McClure | 2.45 | 1.59 | 0.089 | 0.068 | 0.0300 | 0.95 |
+
+
+Evaluation Metrics: We defined the following metrics for accuracy. For rotation, we use
+
+$$
+\delta \left(\mathbf {R}, \mathbf {R} _ {\mathrm {G T}}\right) = \operatorname {a c o s} \left(\frac {\operatorname {t r a c e} \left(\mathbf {R} ^ {- 1} \mathbf {R} _ {\mathrm {G T}}\right) - 1}{2}\right), \tag {12}
+$$
+
+where $\mathbf{R}$ and $\mathbf{R}_{\mathrm{GT}}$ are the estimated and ground-truth rotation matrices, respectively. We refer to [35] for more details. For measuring the accuracy of translation, we use
+
+$$
+\delta \left(\mathbf {t}, \mathbf {t} _ {\mathrm {G T}}\right) = \left\| \mathbf {t} - \mathbf {t} _ {\mathrm {G T}} \right\|. \tag {13}
+$$
+
+For the classification accuracy, we used the standard classification error. The computed weights $w_{i} \in [0,1)$ will be rounded to 0 or 1 based on a threshold $(\mathcal{T} = 0.5)$ before measuring the classification error.
+
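+These metrics translate directly into code; a small NumPy sketch is given below (clipping the acos argument for numerical safety, which the equations leave implicit):
+
+```python
+import numpy as np
+
+def rotation_error_deg(R, R_gt):
+    """Angular distance of Eq. (12) between estimated and ground-truth rotations."""
+    cos_angle = (np.trace(R.T @ R_gt) - 1.0) / 2.0
+    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
+
+def translation_error(t, t_gt):
+    """Euclidean distance of Eq. (13)."""
+    return np.linalg.norm(t - t_gt)
+
+def classification_accuracy(w, y, threshold=0.5):
+    """Fraction of correspondences whose thresholded weight matches the 0/1 label."""
+    return float(((w >= threshold).astype(int) == y).mean())
+```
+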
+# 7.1. Ablation Studies
+
+Distance Metrics: We start these experiments by evaluating the 3DRegNet training using different types of distance metrics in the regression loss function. Namely, we use: 1) the $L_{2}$ -norm; 2) the $L_{1}$ -norm; 3) the weighted $L_{2}$ -norm, with the weights obtained from the classification block; and 4) the Geman-McClure distance. For all the pairwise correspondences in the testing phase, we compute the rotation and translation errors obtained by the 3DRegNet. The results are reported in Tab. 1, in which we use the minimal Lie algebra representation for the rotation.
+
+As can be seen from these results (Tab. 1), the $L_{1}$ -norm gives the best results in all the evaluation criteria. It is interesting to note that the weighted $L_{2}$ -norm, despite using the weights from the classification block, did not perform as well as the $L_{1}$ -norm. This is possibly because the registration block also utilizes the outputs from some of the intermediate layers of the classification block. Based on these results, the remaining evaluations are conducted using the $L_{1}$ -norm.
+
+Parameterization of R: We study the following three parameterizations for the rotation: 1) minimal Lie algebra (three parameters); 2) quaternions (four parameters); and 3) linear matrix form (nine parameters). The results are shown in Tab. 2. We observe that the minimal parameterization using the Lie algebra provides the best results. In the experimental results that follow, we use the three-parameter Lie algebra representation. While the Lie algebra performs better for the problem at hand, we cannot generalize this conclusion to other problems like human pose estimation, as shown in [66].
+
+
+Table 2: Evaluation of different representations for the rotations.
+
+| Representation | Rotation Mean [deg] | Rotation Median [deg] | Translation Mean [m] | Translation Median [m] | Time [s] | Classification Accuracy |
+| Lie Algebra | 1.37 | 0.90 | 0.054 | 0.042 | 0.0281 | 0.96 |
+| Quaternions | 1.55 | 1.11 | 0.067 | 0.054 | 0.0284 | 0.95 |
+| Linear | 5.78 | 4.78 | 0.059 | 0.042 | 0.0275 | 0.95 |
+| Procrustes | 1.65 | 1.52 | 0.235 | 0.233 | 0.0243 | 0.52 |
+
+
+Table 3: Evaluation of different numbers of correspondences.
+
+| Matches | Rotation Mean [deg] | Rotation Median [deg] | Translation Mean [m] | Translation Median [m] | Time [s] | Classification Accuracy |
+| 10% | 2.40 | 1.76 | 0.089 | 0.073 | 0.0106 | 0.94 |
+| 25% | 1.76 | 1.22 | 0.068 | 0.054 | 0.0149 | 0.95 |
+| 50% | 1.51 | 1.01 | 0.060 | 0.047 | 0.0188 | 0.95 |
+| 75% | 1.41 | 0.92 | 0.056 | 0.044 | 0.0241 | 0.96 |
+| 90% | 1.38 | 0.90 | 0.055 | 0.043 | 0.0267 | 0.96 |
+| 100% | 1.37 | 0.90 | 0.054 | 0.042 | 0.0281 | 0.96 |
+
+
+
+Regression with DNNs vs. Procrustes: We aim at evaluating the merits of using DNNs vs. Procrustes to get the 3D registration, as shown in Fig. 2(a) and Fig. 2(b). From Tab. 2, we conclude that the differentiable Procrustes method does not solve the problem as accurately as DNNs. The run time is lower than the DNNs with the Lie Algebra, but the difference is small and can be neglected. On the other hand, the classification accuracy degrades significantly. From now on, we use the DNNs for the regression.
+
+Sensitivity to the number of correspondences: Instead of considering all the correspondences in each of the pairwise scans of the testing examples, we select a percentage of the total number of matches ranging from $10\%$ to $100\%$ (recall that the total number of correspondences per pair is around 3000). The results are shown in Tab. 3.
+
+As expected, the accuracy of the regression degrades as the number of input correspondences decreases. The classification, however, is not affected. The inlier/outlier classifications should not depend on the number of input correspondences, while the increase of the number of inliers should lead to a better estimate.
+
+Data Augmentation: Using the 3DRegNet trained in the previous sections, we select a pair of 3D scans from the training data and rotate the original point clouds to increase the rotation angles between them. We vary the magnitude of this rotation $(\theta)$ from 0 to 50 degrees, and the results for the rotation error and accuracy in the testing are shown in Fig. 4 (green curve). Afterward, we train the network a second time, using the data augmentation strategy proposed in Sec. 6. At each step, the pair of examples is perturbed by a rotation with increasing steps of $2^{\circ}$, setting the maximum value of $\theta = 50^{\circ}$. We run the test as before, and the results are shown in Fig. 4 (blue curve).
+
+
+Figure 4: Training with and without data augmentation. An improvement in the test results is observed when perturbations are applied. The data augmentation regularizes the network for rotations that were not included in the original dataset.
+
+
+
+Table 4: Evaluation of the use of 3DRegNet refinement.
+
+| Refinement | Rotation Mean [deg] | Rotation Median [deg] | Translation Mean [m] | Translation Median [m] | Time [s] | Classification Accuracy |
+| without | 1.37 | 0.90 | 0.054 | 0.042 | 0.0281 | 0.96 |
+| with | 1.19 | 0.89 | 0.053 | 0.044 | 0.0327 | 0.94 |
+
+
+From this experiment we can conclude that, by only training with the original dataset, we are constrained to the rotations contained in the dataset. On the other hand, by performing a smooth regularization (CL data augmentation), we can overcome this drawback. Since the datasets at hand are sequences of small motions, there is no benefit in generalizing the results for the rotation parameters. If all the involved transformations are small, the network should be trained as such. We do not carry out data augmentation in the following experiments.
+
+3DRegNet refinement: We consider the use of the extra 3DRegNet presented in Sec. 5 for regression refinement. This composition of two similar networks was developed to improve the accuracy of the results. From Tab. 4, we observe an overall improvement on the transformation estimation, without compromising the run time significantly. The classification accuracy decreases by $2\%$ , but does not influence the final regression. This improvement on the estimation can also be seen in Fig. 3, where the estimation using only one 3DRegNet (Fig. 3(b)) is still a bit far from the true alignment, in comparison to using the 3DRegNet with refinement, shown in Fig. 3(c), which is closer to the correct alignment. For the remainder of the paper, when we refer to 3DRegNet, we are using the refinement network.
+
+
+| Method | Rotation Mean [deg] | Rotation Median [deg] | Translation Mean [m] | Translation Median [m] | Time [s] |
+| FGR | 1.39 | 0.53 | 0.045 | 0.024 | 0.2669 |
+| ICP | 3.78 | 0.43 | 0.121 | 0.023 | 0.1938 |
+| RANSAC | 1.89 | 1.45 | 0.063 | 0.051 | 0.8441 |
+| 3DRegNet | 1.19 | 0.89 | 0.053 | 0.044 | 0.0327 |
+| FGR + ICP | 1.01 | 0.38 | 0.038 | 0.021 | 0.3422 |
+| RANSAC + U | 1.42 | 1.02 | 0.050 | 0.042 | 0.8441 |
+| 3DRegNet + ICP | 0.55 | 0.34 | 0.030 | 0.021 | 0.0691 |
+| 3DRegNet + U | 0.28 | 0.22 | 0.014 | 0.011 | 0.0327 |
+
+(a) Baseline results on the ICL-NUIM Dataset.
+
+| Method | Rotation Mean [deg] | Rotation Median [deg] | Translation Mean [m] | Translation Median [m] | Time [s] |
+| FGR | 2.57 | 1.92 | 0.121 | 0.067 | 0.1623 |
+| ICP | 3.18 | 1.50 | 0.146 | 0.079 | 0.0596 |
+| RANSAC | 3.00 | 1.73 | 0.148 | 0.074 | 2.6156 |
+| 3DRegNet | 1.84 | 1.69 | 0.087 | 0.078 | 0.0398 |
+| FGR + ICP | 1.49 | 1.10 | 0.070 | 0.046 | 0.1948 |
+| RANSAC + U | 2.74 | 1.48 | 0.134 | 0.061 | 2.6157 |
+| 3DRegNet + ICP | 1.26 | 1.14 | 0.066 | 0.048 | 0.0852 |
+| 3DRegNet + U | 1.16 | 1.10 | 0.053 | 0.050 | 0.0398 |
+
+(b) Results on unseen sequences (SUN3D Dataset).
+
+Table 5: Comparison with the baselines: FGR [65]; RANSAC-based approaches [17, 48]; and ICP [6].
+
+# 7.2. Baselines
+
+We use three baselines. The first is the Fast Global Registration (FGR) [65] geometric method, which aims to provide a global solution for a set of 3D correspondences. The second baseline is the classical RANSAC method [17]. The third baseline is ICP [6]. Note that we are comparing our technique against both correspondence-free (ICP) and correspondence-based methods (FGR, RANSAC). For this test, we use the ICL-NUIM dataset. In an attempt to ascertain which strategy provides the best registration prior for ICP, we applied two methods termed FGR + ICP and 3DRegNet + ICP, where the initialization for ICP is done using the estimated transformations given by FGR and 3DRegNet, respectively. Also, for evaluating the quality of the classification, we take the inliers given by 3DRegNet and RANSAC, and input these to the non-linear least-squares Umeyama refinement technique presented in [53]. These methods are denoted as 3DRegNet + U and RANSAC + U, respectively. The results are shown in Tab. 5(a).
+
+The cumulative distribution function of the rotation error (akin to a precision-recall curve) is shown in Fig. 6(a) to better illustrate the performance of both 3DRegNet and FGR. In this figure, the fraction of tests whose rotation error is less than a given error angle is shown. It can be seen that FGR performs better than 3DRegNet (up to a $2^{\circ}$ error). Afterward, 3DRegNet starts to provide better results. This implies that FGR does better for easier problems, but for a larger number of cases it has high error (also higher than that of 3DRegNet). In other words, FGR has a heavier tail, hence lower median error and higher mean error compared to 3DRegNet, as evident from Tab. 5. As the complexity of the problem increases, 3DRegNet becomes a better algorithm. This is further illustrated when we compare their performance in combination with ICP. Here, we can see that the initial estimates provided by 3DRegNet (3DRegNet + ICP) outperform those of FGR + ICP. It is particularly noteworthy that even though ICP is local, 3DRegNet + ICP converges to a better minimum than FGR + ICP. This means that a deep learning approach allows us to perform better when the pairwise correspondences are of lower quality, which makes the problem harder. In terms of computation time, we are at least $8\mathrm{x}$ faster than FGR, and $25\mathrm{x}$ faster than RANSAC. For a fair comparison of all the methods, all computation timings are obtained using the CPU.
+
+
+
+
+When considering the ICP and Umeyama refinement techniques, in terms of accuracy both 3DRegNet + ICP and 3DRegNet + U outperform all other methods. From the 3DRegNet + ICP results, we conclude that the transformation provided by our network leads ICP to a lower minimum than FGR + ICP. From 3DRegNet + U, we conclude that our classification selects the inliers more effectively. In terms of computation time, we can draw the same conclusions as before.
+
+# 7.3. Results in Unseen Sequences
+
+For this test, we use the SUN3D dataset and run the same tests as in the previous section. However, while in Sec. 7.2 we used all the pairs from the sequences and split them into training and testing, here we run our tests on held-out sequences. The results are shown in Tab. 5(b) and Fig. 6(b). The conclusions are similar to those of the previous section.
+
+
+Figure 5: Two examples of 3D point-cloud alignment using the 3DRegNet, 3DRegNet + ICP, FGR, and FGR + ICP methods. The pairs of 3D scans were chosen from the MIT and Harvard sequences of the SUN3D dataset; these sequences were not used to train the network.
+
+Figure 6: Cumulative distribution function of the rotation errors of 3DRegNet vs. FGR on (a) the ICL-NUIM and (b) the SUN3D datasets.
+
+We observe that the results from 3DRegNet do not degrade significantly, which means that the network is able to generalize the classification and registration to unseen sequences. Some snapshots are shown in Fig. 5.
+
+# 8. Discussion
+
+We propose 3DRegNet, a deep neural network that can solve the scan registration problem by jointly solving the outlier rejection given 3D point correspondences and computing the pose for alignment of the scans. We show that our approach is extremely efficient. It performs as well as the current baselines, while still being significantly faster. We show additional tests and visualizations of 3D registrations in the Supplementary Materials.
+
+# Acknowledgements
+
+This work was supported by the Portuguese National Funding Agency for Science, Research and Technology project PTDC/EEI-SII/4698/2014, and the LARSyS - FCT Plurianual funding 2020-2023.
+
+# References
+
+[1] Martin Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
+[2] Yasuhiro Aoki, Hunter Goforth, Rangaprasad Arun Srivatsan, and Simon Lucey. Pointnetlk: Robust & efficient point cloud registration using pointnet. In IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 7163-7172, 2019.
+[3] K Somani Arun, Thomas S Huang, and Steven D Blostein. Least-squares fitting of two 3-d point sets. IEEE Trans. Pattern Analysis and Machine Intelligence (T-PAMI), 9(5):698-700, 1987.
+[4] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Int'l Conf. Machine Learning (ICML), pages 41-48, 2009.
+[5] Florian Bernard, Frank R. Schmidt, Johan Thunberg, and Daniel Cremers. A combinatorial solution to non-rigid 3d shape-to-image matching. In IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 1436-1445, 2017.
+[6] Paul J. Besl and Neil D. McKay. A method for registration of 3-d shapes. IEEE Trans. Pattern Analysis and Machine Intelligence (T-PAMI), 14(2):239-256, 1992.
+[7] Alvaro Parra Bustos and Tat-Jun Chin. Guaranteed outlier removal for point cloud registration with correspondences. IEEE Trans. Pattern Analysis and Machine Intelligence (T-PAMI), 40(12):2868-2882, 2018.
+[8] Avishek Chatterjee and Venu Madhav Govindu. Robust relative rotation averaging. IEEE Trans. Pattern Analysis and Machine Intelligence (T-PAMI), 40(4):958-972, 2018.
+[9] Sungjoon Choi, Qian-Yi Zhou, and Vladlen Koltun. Robust reconstruction of indoor scenes. In IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 5556-5565, 2015.
+[10] Taco S. Cohen, Mario Geiger, Jonas Koehler, and Max Welling. Spherical cnns. In Int'l Conf. Learning Representations (ICLR), 2018.
+[11] Zheng Dang, Kwang Moo Yi, Yinlin Hu, Fei Wang, Pascal Fua, and Mathieu Salzmann. Eigendecomposition-free training of deep networks with zero eigenvalue-based losses. In European Conf. Computer Vision (ECCV), pages 792-807, 2018.
+[12] Haowen Deng, Tolga Birdal, and Slobodan Ilic. Ppfnet: Global context aware local features for robust 3d point matching. In IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 195-205, 2018.
+
+[13] Haowen Deng, Tolga Birdal, and Slobodan Ilic. 3d local features for direct pairwise registration. In IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 3239-3248, 2019.
+[14] Li Ding and Chen Feng. Deepmapping: Unsupervised map estimation from multiple point clouds. In IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 8650-8659, 2019.
+[15] Gil Elbaz, Tamar Avraham, and Anath Fischer. 3d point cloud registration for localization using a deep neural network auto-encoder. In IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 2472 - 2481, 2017.
+[16] Carlos Esteves, Christine Allen-Blanchette, Ameesh Makadia, and Kostas Daniilidis. Learning so(3) equivariant representations with spherical cnns. In European Conf. Computer Vision (ECCV), pages 52–68, 2018.
+[17] Martin A. Fischler and Robert C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM, 24(6):381-395, 1981.
+[18] Stuart Geman and Donald E. McClure. Bayesian image analysis: An application to single photon emission tomography. In Proc. American Statistical Association, pages 12-18, 1985.
+[19] Zan Gojcic, Caifa Zhou, Jan D. Wegner, and Andreas Wieser. The perfect match: 3d point cloud matching with smoothed densities. In IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 5545-5554, 2019.
+[20] Venu Madhav Govindu and A. Pooja. On averaging multiview relations for 3d scan registration. IEEE Trans. Image Processing (T-IP), 23(3):1289-1302, 2014.
+[21] Lei Han, Mengqi Ji, Lu Fang, and Matthias Niessner. Regnet: Learning the optimization of direct image-to-image pose registration. arXiv:1812.10212, 2018.
+[22] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016.
+[23] Joao F. Henriques and Andrea Vedaldi. Mapnet: An allocentric spatial memory for mapping environments. In IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 8476-8484, 2018.
+[24] Dirk Holz, Alexandru E. Ichim, Federico Tombari, Radu B. Rusu, and Sven Behnke. Registration with the point cloud library: A modular framework for aligning in 3-d. IEEE Robotics Automation Magazine (RA-M), 22(4):110-124, 2015.
+[25] Ji Hou, Angela Dai, and Matthias Niessner. 3d-sis: 3d semantic instance segmentation of rgb-d scans. In IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 4416-4425, 2019.
+[26] Xiangru Huang, Zhenxiao Liang, Xiaowei Zhou, Yao Xie, Leonidas Guibas, and Qixing Huang. Learning transformation synchronization. In IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 8082-8091, 2019.
+[27] Alex Kendall, Matthew Grimes, and Roberto Cipolla. Posenet: A convolutional network for real-time 6-dof camera relocalization. In IEEE Int'l Conf. Computer Vision (ICCV), pages 2938-2946, 2015.
+[28] Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. In Int'l Conf. Learning Representations (ICLR), 2015.
+[29] Huu M. Le, Thanh-Toan Do, Tuan Hoang, and Ngai-Man Cheung. Sdrsac: Semidefinite-based randomized approach for robust point cloud registration without correspondences. In IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 124-133, 2019.
+[30] Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets, 2014.
+[31] Hongdong Li and Richard Hartley. The 3d-3d registration problem revisited. In IEEE Int'l Conf. Computer Vision (ICCV), pages 1-8, 2007.
+[32] Weixin Lu, Guowei Wan, Yao Zhou, Xiangyu Fu, Pengfei Yuan, and Shiyu Song. Deepvcp: An end-to-end deep neural network for point cloud registration. In IEEE Int'l Conf. Computer Vision (ICCV), pages 3523-3532, 2019.
+[33] Jiayi Ma, Xingyu Jiang, Junjun Jiang, Ji Zhao, and Xiao-jie Guo. Lmr: Learning a two-class classifier for mismatch removal. IEEE Trans. Image Processing (T-IP), 28(8):4045–4059, 2019.
+[34] Lingni Ma, Jorg Stuckler, Christian Kerl, and Daniel Cremers. Multi-view deep learning for consistent semantic mapping with rgb-d cameras. In IEEE/RSJ Int'l Conf. Intelligent Robots and Systems (IROS), pages 598-605, 2017.
+[35] Yi Ma, Stefano Soatto, Jana Kosecka, and S. Shankar Sastry. An Invitation to 3-D Vision. Springer-Verlag New York, 2004.
+[36] Andre Mateus, Srikumar Ramalingam, and Pedro Miraldo. Minimal solvers for 3d scan alignment with pairs of intersecting lines. In IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 2020.
+[37] Tambet Matiisen, Avital Oliver, Taco Cohen, and John Schulman. Teacher-student curriculum learning. IEEE Trans. Neural Networks and Learning Systems (T-NNLS), 2019.
+[38] Nicolas Mellado, Niloy Mitra, and Dror Aiger. Super 4pcs: Fast global pointcloud registration via smart indexing. Computer Graphics Forum (Proc. EUROGRAPHICS), 33(5):205-215, 2014.
+[39] Pedro Miraldo, Surojit Saha, and Srikumar Ramalingam. Minimal solvers for mini-loop closures in 3d multi-scan alignment. In IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 9699-9708, 2019.
+[40] Andriy Myronenko and Xubo Song. Point set registration: Coherent point drift. IEEE Trans. Pattern Analysis and Machine Intelligence (T-PAMI), 32(12):2262-2275, 2010.
+[41] Richard A. Newcombe, Shahram Izadi, Otmar Hilliges, David Molyneaux, David Kim, Andrew J. Davison, Pushmeet Kohli, Jamie Shotton, Steve Hodges, and Andrew Fitzgibbon. Kinectfusion: Real-time dense surface mapping and tracking. In IEEE Int'l Symposium on Mixed and Augmented Reality (ISMAR), pages 127-136, 2011.
+[42] Ilkay Oksuz, Bram Ruijsink, Esther Puyol-Antón, James R. Clough, Gastao Cruz, Aurelien Bustin, Claudia Prieto, René Botnar, Daniel Rueckert, Julia A. Schnabel, and Andrew P. King. Automatic cnn-based detection of cardiac mr motion artefacts using k-space data augmentation and curriculum learning. Medical Image Analysis, 55:136-147, 2019.
+[43] Jaesik Park, Qian-Yi Zhou, and Vladlen Koltun. Colored point cloud registration revisited. In IEEE Int'l Conf. Computer Vision (ICCV), pages 143-152, 2017.
+[44] Graeme P. Penney, Philip J. Edwards, Andrew P. King, Jane M. Blackall, Philipp G. Batchelor, and David J. Hawkes. A stochastic iterative closest point algorithm (stochastICP). In Medical Image Computing and Computer-Assisted Intervention (MICCAI), pages 762-769, 2001.
+[45] Stephen Phillips and Kostas Daniilidis. All graphs lead to rome: Learning geometric and cycle-consistent representations with graph convolutional networks. arXiv:1901.02078, 2019.
+[46] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 652-660, 2017.
+[47] Radu Bogdan Rusu, Nico Blodow, and Michael Beetz. Fast point feature histograms (fpfh) for 3d registration. In IEEE Int'l Conf. Robotics and Automation (ICRA), pages 3212-3217, 2009.
+[48] Peter H. Schönemann. A generalized solution of the orthogonal procrustes problem. Psychometrika, 31(1):1-10, 1966.
+[49] Aleksandr V. Segal, Dirk Haehnel, and Sebastian Thrun. Generalized-icp. In Robotics: Science and Systems (RSS), 2009.
+[50] Shaoshuai Shi, Xiaogang Wang, and Hongsheng Li. Pointrcnn: 3d object proposal generation and detection from point cloud. In IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 770-779, 2019.
+[51] Miroslava Slavcheva, Maximilian Baust, Daniel Cremers, and Slobodan Ilic. Killingfusion: Non-rigid 3d reconstruction without correspondences. In IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 5474-5483, 2017.
+[52] Gary K.L. Tam, Zhi-Quan Cheng, Yu-Kun Lai, Frank C. Langbein, Yonghuai Liu, David Marshall, Ralph R. Martin, Xian-Fang Sun, and Paul L. Rosin. Registration of 3d point clouds and meshes: A survey from rigid to nonrigid. IEEE Trans. Visualization and Computer Graphics (T-VCG), 19(7):1199-1217, 2013.
+[53] Shinji Umeyama. Least-squares estimation of transformation parameters between two point patterns. IEEE Trans. Pattern Analysis and Machine Intelligence (T-PAMI), 13(4):376-380, 1991.
+[54] Yue Wang and Justin Solomon. Deep closest point: Learning representations for point cloud registration. In IEEE Int'l Conf. Computer Vision (ICCV), pages 3522-3531, 2019.
+[55] Xinshuo Weng and Kris Kitani. Monocular 3D Object Detection with Pseudo-LiDAR Point Cloud. In ICCV Workshops, 2019.
+[56] Jay M. Wong, Vincent Kee, Tiffany Le, Syler Wagner, Gian-Luca Mariottini, Abraham Schneider, Lei Hamilton, Rahul Chipalkatty, Mitchell Hebert, David M.S. Johnson, Jimmy Wu, Bolei Zhou, and Antonio Torralba. Segicp: Integrated deep semantic segmentation and pose estimation. In IEEE/RSJ Int'l Conf. Intelligent Robots and Systems (IROS), pages 5784-5789, 2017.
+[57] Jiaolong Yang, Hongdong Li, Dylan Campbell, and Yunde Jia. Go-icp: Solving 3d registration efficiently and globally optimally. IEEE Trans. Pattern Analysis and Machine Intelligence (T-PAMI), 38(11):2241–2254, 2016.
+[58] Jiaolong Yang, Hongdong Li, and Yunde Jia. Go-icp: Solving 3d registration efficiently and globally optimally. In IEEE Int'l Conf. Computer Vision (ICCV), pages 1457–1464, 2013.
+[59] Zi Jian Yew and Gim Hee Lee. 3dfeat-net: Weakly supervised local 3d features for point cloud registration. In European Conf. Computer Vision (ECCV), pages 630-646, 2018.
+[60] Kwang Moo Yi, Eduard Trulls, Yuki Ono, Vincent Lepetit, Mathieu Salzmann, and Pascal Fua. Learning to find good correspondences. In IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 2666-2674, 2018.
+[61] Andy Zeng, Shuran Song, Matthias Niessner, Matthew Fisher, Jianxiong Xiao, and Thomas Funkhouser. 3dmatch: Learning local geometric descriptors from rgb-d reconstructions. In IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 199-208, 2017.
+[62] Chen Zhao, Zhiguo Cao, Chi Li, Xin Li, and Jiaqi Yang. Nm-net: Mining reliable neighbors for robust feature correspondences. In IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 215-224, 2019.
+[63] Bolei Zhou, Agata Lapedriza, Jianxiong Xiao, Antonio Torralba, and Aude Oliva. Learning deep features for scene recognition using places database. In Advances in Neural Information Processing Systems (NIPS), pages 487-495, 2014.
+[64] Lei Zhou, Siyu Zhu, Zixin Luo, Tianwei Shen, Runze Zhang, Mingmin Zhen, Tian Fang, and Long Quan. Learning and matching multi-view descriptors for registration of point clouds. In European Conf. Computer Vision (ECCV), pages 527-544, 2018.
+[65] Qian-Yi Zhou, Jaesik Park, and Vladlen Koltun. Fast global registration. In European Conf. Computer Vision (ECCV), pages 766-782, 2016.
+[66] Yi Zhou, Connelly Barnes, Jingwan Lu, Jimei Yang, and Hao Li. On the continuity of rotation representations in neural networks. In IEEE Conf. Computer Vision and Pattern Recognition (CVPR), pages 5745-5753, 2019.
+[67] Michael Zollhofer, Matthias Niessner, Shahram Izadi, Christoph Rehmann, Christopher Zach, Matthew Fisher, Chenglei Wu, Andrew Fitzgibbon, Charles Loop, Christian Theobalt, and Marc Stamminger. Real-time non-rigid reconstruction using an rgb-d camera. ACM Trans. Graphics, 33(4), 2014.
\ No newline at end of file
diff --git a/3dregnetadeepneuralnetworkfor3dpointregistration/images.zip b/3dregnetadeepneuralnetworkfor3dpointregistration/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..f0f515ff049c7b19804d920a2a1cf09cbeaad92b
--- /dev/null
+++ b/3dregnetadeepneuralnetworkfor3dpointregistration/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7bf67bea87b7a0f734ab9d7d8a32bed2fc8afc7424266953624a15b5833afd1f
+size 549388
diff --git a/3dregnetadeepneuralnetworkfor3dpointregistration/layout.json b/3dregnetadeepneuralnetworkfor3dpointregistration/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a7c394c0e0597552bac0114c5800279bf65128a3
--- /dev/null
+++ b/3dregnetadeepneuralnetworkfor3dpointregistration/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:94b8d3584b6d6a66f09b096e6a16570a0403de0e3776d0f79a5ca58d636b7ac4
+size 483286
diff --git a/3dsketchawaresemanticscenecompletionviasemisupervisedstructureprior/21e2be0e-0fb4-4da1-b0aa-370b4456f88a_content_list.json b/3dsketchawaresemanticscenecompletionviasemisupervisedstructureprior/21e2be0e-0fb4-4da1-b0aa-370b4456f88a_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..340b0c86a8fce5ea82a5d570de9c313657fa8435
--- /dev/null
+++ b/3dsketchawaresemanticscenecompletionviasemisupervisedstructureprior/21e2be0e-0fb4-4da1-b0aa-370b4456f88a_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bea2afa069c87b29349ccd0c1ddacd752d6503c24a78b0ca5057c28da6d21e48
+size 90968
diff --git a/3dsketchawaresemanticscenecompletionviasemisupervisedstructureprior/21e2be0e-0fb4-4da1-b0aa-370b4456f88a_model.json b/3dsketchawaresemanticscenecompletionviasemisupervisedstructureprior/21e2be0e-0fb4-4da1-b0aa-370b4456f88a_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..799d871b8d04bd67209596514babc83da8cdd6a3
--- /dev/null
+++ b/3dsketchawaresemanticscenecompletionviasemisupervisedstructureprior/21e2be0e-0fb4-4da1-b0aa-370b4456f88a_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e73037fd3a7347ce99aba9e021d55157410d671b20210610da0d3b353e751fd5
+size 108451
diff --git a/3dsketchawaresemanticscenecompletionviasemisupervisedstructureprior/21e2be0e-0fb4-4da1-b0aa-370b4456f88a_origin.pdf b/3dsketchawaresemanticscenecompletionviasemisupervisedstructureprior/21e2be0e-0fb4-4da1-b0aa-370b4456f88a_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c66745067ee525d91d91988ba378f2c9ecb30744
--- /dev/null
+++ b/3dsketchawaresemanticscenecompletionviasemisupervisedstructureprior/21e2be0e-0fb4-4da1-b0aa-370b4456f88a_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:30e41dd515a47ca7b480330822f0044dfa5f0ed923cc8d86c8e3012ddf61f140
+size 2130154
diff --git a/3dsketchawaresemanticscenecompletionviasemisupervisedstructureprior/full.md b/3dsketchawaresemanticscenecompletionviasemisupervisedstructureprior/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..b850debd033e0b8f06b3982834adc4294e49c47c
--- /dev/null
+++ b/3dsketchawaresemanticscenecompletionviasemisupervisedstructureprior/full.md
@@ -0,0 +1,395 @@
+# 3D Sketch-aware Semantic Scene Completion via Semi-supervised Structure Prior
+
+Xiaokang Chen $^{1*}$ Kwan-Yee Lin $^{2}$ Chen Qian $^{2}$ Gang Zeng $^{1\dagger}$ Hongsheng Li $^{3}$ $^{1}$ Key Laboratory of Machine Perception (MOE), School of EECS, Peking University
+ $^{2}$ SenseTime Research $^{3}$ The Chinese University of Hong Kong
+
+Figure 1. Visualization of the Semantic Scene Completion task. From left to right: (1) RGB input, (2) depth map, (3) ground truth of semantic scene completion, (4) result of SSCNet [27], (5) result of the proposed method. Our method generates a more reasonable result and achieves better intra-class consistency and inter-class distinction compared with SSCNet [27], a classic method that models context on implicitly embedded depth features learnt from generic 3D CNNs.
+
+(Figure 1 color legend: Floor, Window, Bed, Table, Furn.)
+
+# Abstract
+
+The goal of the Semantic Scene Completion (SSC) task is to simultaneously predict a completed 3D voxel representation of volumetric occupancy and semantic labels of objects in the scene from a single-view observation. Since the computational cost generally increases explosively with the voxel resolution, most current state-of-the-art methods have to tailor their frameworks to a low-resolution representation, sacrificing detail prediction. Voxel resolution thus becomes one of the crucial difficulties that lead to the performance bottleneck.
+
+In this paper, we propose a new geometry-based strategy to embed depth information with a low-resolution voxel representation, which can still encode sufficient geometric information, e.g., room layout and object sizes and shapes, to infer the invisible areas of the scene with well structure-preserving details. To this end, we first propose a novel 3D sketch-aware feature embedding to explicitly encode geometric information effectively and efficiently. With the 3D sketch in hand, we further devise a simple yet effective semantic scene completion framework that incorporates a light-weight 3D Sketch Hallucination module to guide the inference of occupancy and semantic labels via a semi-supervised structure prior learning strategy. We demonstrate that our proposed geometric embedding works better than the depth feature learning of habitual SSC frameworks. Our final model surpasses state-of-the-arts consistently on three public benchmarks, while only requiring 3D volumes of $60 \times 36 \times 60$ resolution for both input and output.
+
+# 1. Introduction
+
+Semantic Scene Completion (SSC), which provides a way to understand the 3D world with both the 3D geometry and the semantics of the scene from a partial observation, is an emerging topic in computer vision due to its wide applicability, e.g., in augmented reality, surveillance, and robotics. Due to the high memory and computational cost of the inherent voxel representation, most existing methods [27, 9, 41, 7, 14, 16, 40, 4] achieve semantic scene completion through sophisticated 3D context modeling on implicitly embedded depth features learnt from generic 3D CNNs. These methods are either error-prone in classifying fine object details or have difficulty completing the scene when a large portion of the geometry is missing, as shown in Figure 1.
+
+Several recent studies [7, 14, 16] present promising results on this topic by introducing high-resolution RGB images into the process. Though driven by various motivations, these methods can be thought of as building a cross-modality feature embedding under the assumption that fine-detail features can be compensated by the RGB counterpart and that computational efficiency can be guaranteed by applying 2D operators to the RGB source. However, such an approach relies heavily on the effectiveness of the cross-modality feature embedding module design and is vulnerable to complex scenes.
+
+In contrast, human perception can readily complete and recognize a 3D scene even from a partial, low-resolution observation, thanks to prior knowledge of the geometric properties, e.g., size and shape, of objects from different categories. From this perspective, we hypothesize that a feature embedding strategy that explicitly encodes geometric information could help the network learn the concept of object structure, and therefore reconstruct and recognize the scene precisely even from a low-resolution partial observation. To this end, the geometric properties need to be resolution-invariant, or at least resolution-insensitive.
+
+Based on this intuition, we present the 3D sketch-aware feature embedding, an explicit and compact depth feature embedding scheme for the semantic scene completion task. It has been demonstrated in [23] that a similar geometric cue in image space, i.e., the 2D boundary, is resolution-insensitive. We show that the same conclusion holds in the 3D world, as indicated in Figure 2.
+
+
+Figure 2. Visualization of sketches extracted from semantic labels at different resolutions (from left to right: 240x144x240, 120x72x120, 80x48x80, 60x36x60). The sketch begins to lose some details as the resolution decreases, while the structure description of the scene is well preserved.
+
+However, a 3D sketch extracted from a 2D depth image is still a 2D/2.5D observation from a single viewpoint. To fully exploit the strength of this new feature embedding, we further propose a 3D sketch-aware semantic scene completion network, which first injects a 3D Sketch Hallucination Module to infer the full 3D sketch from the partial one, and then utilizes the feature embedded from the hallucinated 3D sketch to guide the reconstruction and recognition. Specifically, since lifting the 2D/2.5D observation to a full 3D sketch is intrinsically ambiguous, instead of directly regressing the ground-truth full 3D sketch, we seek a natural prior distribution from which to sample diverse, reasonable 3D sketches. We achieve this by tailoring a Conditional Variational Autoencoder (CVAE) [26] to the 3D Sketch Hallucination Module design. We show that such a design helps to generate accurate and realistic results even when a large portion of the geometry is missing from the partial observation.
+
+We summarize our contributions as follows:
+
+- We devise a new geometric embedding from depth information, namely 3D sketch-aware feature embedding, to break the performance bottleneck of the SSC task caused by a low-resolution voxel representation.
+- We introduce a simple yet effective semantic scene completion framework that incorporates a novel 3D Sketch Hallucination Module to guide the full 3D sketch inference from partial observation via semi-supervised structure prior property of Conditional Variational Autoencoder (CVAE), and utilizes the feature embedded from the hallucinated 3D sketch to further guide the scene completion and semantic segmentation.
+- Our model outperforms state-of-the-arts consistently on three public benchmarks, with only requiring 3D volumes of $60 \times 36 \times 60$ resolution for both input and output.
+
+# 2. Related Work
+
+# 2.1. Object Shape Completion
+
+Object shape completion has a long history in geometry processing. We summarize existing methods into two categories: knowledge-based and learning-based.
+
+Knowledge-based methods complete partial input of an object by reasoning geometric cues or matching it with 3D models from an extensive shape database. Some works detect symmetries in meshes or point clouds and use them to fill in missing data, such as [31, 28, 19]. An alternative is to match the partial input with CAD models from a large database [18, 21, 13]. However, it is too expensive to retrieve, and it has poor generalization for new shapes that do not exist in the database.
+
+Learning-based methods are more flexible and effective than knowledge-based ones. They usually infer the invisible area with a deep neural network, which offers fast inference and better robustness. [2] proposes a 3D-Encoder-Predictor Network, which first encodes the known and unknown space to get a relatively low-resolution prediction, and then correlates this intermediate result with 3D geometry from a shape database. [37] proposes an end-to-end method that directly operates on raw point clouds without any structural assumption about the underlying shape. [29] proposes a weakly-supervised approach that learns a shape prior on synthetic data and then conducts maximum likelihood fitting using deep neural networks.
+
+These methods focus on reconstructing 3D shape from the partial input of a single object, which makes it hard for them to extend to partial scenes along with multiple objects estimated in semantic level.
+
+# 2.2. Semantic Scene Completion
+
+Semantic Scene Completion (SSC) is a fundamental task in 3D scene understanding, which produces a complete 3D voxel representation of volumetric occupancy and semantic labels. SSCNet [27] is the first to combine these two tasks in an end-to-end way. ESSCNet [39] introduces Spatial Group Convolution (SGC) that divides input volume into different groups and conduct 3D sparse convolution on them. VVNet [9] combines 2D and 3D CNN with a differentiable projection layer to efficiently reduce computational cost and enable feature extraction from multi-channel inputs. ForkNet [33] proposes a multi-branch architecture and draws on the idea of generative models to sample new pairs of training data, which alleviates the limited training samples problem on real scenes. CCPNet [41] proposes a self-cascaded context aggregation method to reduce semantic gaps of multi-scale 3D contexts and incorporates local geometric details in a coarse-to-fine manner.
+
+Some works also utilize RGB images as vital complementary to depth. TS3D [7] designs a two-stream approach to leverage semantic and depth information, fused by a vanilla 3DCNN. SATNet [16] disentangles semantic scene completion task by sequentially accomplishing 2D semantic segmentation and 3D semantic scene completion tasks. DDRNet [14] proposes a light-weight Dimensional Decomposition Residual network and fused multi-scale RGB-D features seamlessly.
+
+The above methods can be regarded as encoding depth information implicitly, by either single- or cross-modality feature embedding. They map depth information into an inexplicable high-dimensional feature space and then use that feature to predict the result directly. Different from current methods, we propose an explicit geometric embedding strategy for depth information, which predicts a 3D sketch first and utilizes the feature embedded from it to guide the reconstruction and recognition.
+
+# 2.3. 2D Boundary Detection
+
+2D boundary detection is a fundamental challenge in computer vision, and many methods have been proposed to detect boundaries. The Sobel operator [25] and the Canny operator [1] are two hand-crafted classics that detect boundaries from image gradients. Learning-based works [17, 10, 35] employ deep neural networks with supervision; most of them directly concatenate multi-level features to extract the boundary. Since the boundary captures a distinct geometric structure of objects, some other works try to inject boundary detection into other tasks to boost performance. [32] combines boundary detection with the salient object detection task to encourage better edge-preserving salient object segmentation. [36, 30] introduce boundary detection into the semantic segmentation task to obtain more precise segmentation results. [34] achieves robust facial landmark detection by utilizing the facial boundary as an intermediate representation to remove ambiguities. With a similar spirit, we introduce a 3D sketch-aware feature embedding to break the performance bottleneck of the SSC task caused by a low-resolution voxel representation.
+
+# 2.4. Structure Representation Learning
+
+Deep generative models have demonstrated significant performance in structure representation learning. [26] develops a deep conditional generative model to predict structured output using Gaussian latent variables, which can be trained efficiently in the framework of stochastic gradient variational Bayes. [42] proposes an autoencoding formulation to discover landmarks as explicit structural representations in an unsupervised manner. [5] proposes to synthesize images under the guidance of shape representations, conditioned on the learned textural information. [22] employs a CVAE to address the inherent ambiguity of 2D-to-3D lifting in the pose estimation task. Adopting the idea of structure representation learning, we embed the geometric structure of a 3D scene through a CVAE [26] conditioned on the estimated sketch.
+
+# 3. Methodology
+
+The overall architecture of our network is illustrated in Figure 3. The proposed method consists of multiple stages, and each stage adopts an encoder-decoder architecture. Taking a pair of RGB and depth images of a 3D scene as input, the network outputs a dense prediction in which each voxel in the view frustum is assigned a semantic label $C_i$, where $i \in \{0,1,\dots,N\}$ and $N$ is the number of semantic categories. $C_0$ stands for empty voxels.
+
+More specifically, we stack two stages and let each stage handle a different task. The first stage tackles sketch extraction: it embeds the geometric cues contained in the scene and provides the structure prior information (which we call the sketch) for the next stage. In addition, we employ a CVAE to refine the predicted sketch. The second stage tackles semantic scene completion (SSC) based on the extracted sketch. Details are introduced below.
+
+# 3.1. Generation of Ground-truth Sketch
+
+We apply a 3D Sobel operator to the semantic labels to extract the sketch of the semantic scene. Suppose we have obtained the gradients $g_{x}^{i}, g_{y}^{i}, g_{z}^{i}$ at the $i$-th voxel $V_{i}$ along the $x, y, z$ axes; we first binarize these values to 0 or 1 to eliminate the semantic gap.
+
+
+Figure 3. Overview of our network. We first generate structure prior information from the TSDF input and use CVAE to refine the prediction. Then the prior information will be passed to the RGB-branch to predict occupancy and object labels for each voxel in the view frustum. The convolution parameters are shown as (kernel size, dilation). The DDR parameters are shown as (dilation, downsample rate). The Deconvolution parameters are shown as (kernel size, upsample rate).
+
+For example, the gap between class 1 and class 2 should be considered equal to the gap between class 1 and class 10 when generating the sketch. Finally, the extracted sketch is described as the set $\mathbf{S}_{\mathrm{sketch}} = \{V_i : g_x^i + g_y^i + g_z^i > 1\}$. To distinguish the generated geometric representation from a general 2D edge/boundary, we refer to it as the 3D Sketch.
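+
+A minimal sketch of this ground-truth sketch extraction, using SciPy's 3D Sobel filters and the binarized-gradient rule above, is shown below; the array layout and the use of scipy.ndimage are implementation assumptions.
+
+```python
+import numpy as np
+from scipy import ndimage
+
+def extract_gt_sketch(labels):
+    """Ground-truth 3D sketch from a semantic label volume (Sec. 3.1).
+
+    labels: (D, H, W) integer array of class indices. Gradients along the three
+    axes are binarized so any two different classes contribute equally, and
+    voxels whose binarized gradients sum to more than 1 form the sketch.
+    """
+    vol = labels.astype(np.float32)
+    grads = [np.abs(ndimage.sobel(vol, axis=a)) for a in range(3)]
+    g_bin = [(g > 0).astype(np.uint8) for g in grads]    # binarize each gradient to 0/1
+    return (g_bin[0] + g_bin[1] + g_bin[2]) > 1           # boolean sketch volume
+```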
+
+# 3.2. Sketch Prediction Stage
+
+This stage takes a single-view depth map as input and encodes it as a 3D volume. We follow [27] to rotate the scene to align with gravity and room orientation based on Manhattan assumption. We adopt Truncated Signed Distance Function (TSDF) to encode the 3D space, where every voxel stores the distance value $d$ to its closest surface and the sign of the value indicates whether the voxel is in free space or occluded space. The encoder volume has a grid size of $0.02\mathrm{m}$ and a truncation value of $0.24\mathrm{m}$ , resulting in a $240\times 144\times 240$ volume. For the saving of computational cost, [27] downsamples the ground truth by a rate of 4, and we use the same setting. Following SATNet [16], we also downsample the input volume by a rate of 4 and use $60\times 36\times 60$ resolution as input.
+
+Previous works [20, 38, 36] demonstrate that contextual information is important for 2D semantic segmentation. Due to the sparseness and the high computational cost of 3D voxels, it is hard to obtain the context of the scene. To learn rich contextual information, we must make sure that the network has a large enough receptive field without significantly increasing the computational cost. To this end, [14] proposed the Dimensional Decomposition Residual (DDR) block, which is computation-efficient compared with a basic 3D residual block. We adopt the DDR block as our basic unit and stack these blocks layer by layer with different dilation rates to maintain large receptive fields. As shown in Figure 3, we first employ several convolutions to encode the TSDF volume into high-dimensional features. Then we aggregate the contextual information of the input feature with several DDR blocks and downsample it by a rate of 4 to reduce the computational cost. Finally, we employ two deconvolution layers to upsample the feature volume and obtain the dense predicted sketch, which we denote as $\hat{G}_{raw}$. Following [27], we add a skip connection between two layers for better gradient propagation, as illustrated in Figure 3.
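+
+The exact DDR block layout is given in [14]; the rough sketch below only illustrates the dimensional-decomposition idea we rely on, i.e., replacing a dense 3x3x3 convolution with three 1D convolutions plus a residual connection, with dilation controlling the receptive field. The channel counts and bottleneck structure of the real block may differ.
+
+```python
+import torch
+import torch.nn as nn
+
+class DDRLikeBlock(nn.Module):
+    """Dimensional-decomposition residual block (illustrative, cf. DDR [14])."""
+
+    def __init__(self, channels, dilation=1):
+        super().__init__()
+        d = dilation
+        self.body = nn.Sequential(
+            nn.Conv3d(channels, channels, (3, 1, 1), padding=(d, 0, 0), dilation=(d, 1, 1)),
+            nn.ReLU(inplace=True),
+            nn.Conv3d(channels, channels, (1, 3, 1), padding=(0, d, 0), dilation=(1, d, 1)),
+            nn.ReLU(inplace=True),
+            nn.Conv3d(channels, channels, (1, 1, 3), padding=(0, 0, d), dilation=(1, 1, d)),
+        )
+        self.relu = nn.ReLU(inplace=True)
+
+    def forward(self, x):
+        return self.relu(x + self.body(x))   # residual connection keeps gradients flowing
+```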
+
+Since the input to the semantic scene completion task is not a complete scene, we assume that a more precise and complete sketch brings more information to the subsequent stage and, to some extent, may compensate for the incomplete input. We therefore design a 3D Sketch Hallucination Module to handle this issue.
+
+# 3.3. 3D Sketch Hallucination Module
+
+Since lifting a 2D/2.5D observation to a full 3D sketch is intrinsically ambiguous, we seek a natural prior distribution from which to sample diverse, reasonable 3D sketches instead of directly regressing the ground truth. To this end, we employ a CVAE to further process the original predicted sketch by sampling an accurate and diverse sketch set $S = \{\hat{G}_{\text{refined}}^k : k \in \{1, 2, \dots, K\}\}$ conditioned on the estimated $\hat{G}_{\text{raw}}$.
+
+The proposed 3D Sketch Hallucination Module (shown in Figure 4) consists of a standard encoder-decoder structure. The encoder, which we denote as $\mathcal{E}(G_{gt},\hat{G}_{raw})$, performs several convolution operations on the input ground-truth sketch and the condition $\hat{G}_{raw}$ to output the mean and diagonal covariance of the posterior $q(\hat{z} \mid G_{gt},\hat{G}_{raw})$. The decoder, which we denote as $\mathcal{D}(\hat{z},\hat{G}_{raw})$, then reconstructs the sketch by taking as input a latent $\hat{z}$ sampled from the posterior $q(\hat{z}\mid G_{gt},\hat{G}_{raw})$ and the condition $\hat{G}_{raw}$.
+
+
+Figure 4. Architecture of the proposed Sketch Hallucination Module. During training time, the original estimated sketch and the ground-truth sketch are fed into the encoder to generate mean and diagonal covariance for the posterior $q$ . Then the decoder will reconstruct the ground-truth sketch with a latent sampled from $q$ and the original estimated sketch as input.
+
+During training, we optimize the proposed module by minimizing the following objective function,
+
+$$
+\mathcal{L}_{\mathrm{CVAE}} = \lambda_{1}\, KL\big(q(\hat{z} \mid G_{gt}, \hat{G}_{raw}) \,\|\, p(z \mid \hat{G}_{raw})\big) + \lambda_{2}\, \mathbb{E}_{z \sim q(\hat{z} \mid G_{gt}, \hat{G}_{raw})}\, \epsilon\big(G_{gt}, \mathcal{D}(\hat{z}, \hat{G}_{raw})\big), \tag{1}
+$$
+
+where $\epsilon$ is a cross-entropy loss and $KL(x\|y)$ is the Kullback-Leibler divergence. We use $\lambda_{i}$ as hyperparameters to weight the two loss terms, and $\mathbb{E}$ is an expectation taken over $K$ samples. $p(z\mid\hat{G}_{raw})$ is the prior distribution. To ensure that gradients can be backpropagated through the latent code, the KL divergence must be computed in closed form; thus, the latent space of a CVAE is typically restricted to a distribution over $\mathcal{N}(0,I)$, and we follow this setting in our framework. Specifically, it imposes a Gaussian prior assumption when going from the coarse-step to the fine-step geometry representation; the sketch is a simple yet compact geometry representation that suits this assumption. Since the encoder is not used during inference, this objective alone would introduce an inconsistency between training and inference. To address this issue, we follow [26, 22] and set the encoder to be the same as the prior network $p(z)\sim \mathcal{N}(0,I)$, namely a Gaussian Stochastic Neural Network (GSNN), so that the reparameterization trick of the CVAE can be used to train the GSNN. We combine $\mathcal{L}_{\mathrm{GSNN}}$ and $\mathcal{L}_{\mathrm{CVAE}}$ with a weight term $\alpha$ to obtain the final objective for our refinement network,
+
+$$
+\mathcal{L}_{\mathrm{GSNN}} = \mathbb{E}_{z \sim \mathcal{N}(0, I)}\, \epsilon\big(G_{gt}, \mathcal{D}(z, \hat{G}_{raw})\big), \tag{2}
+$$
+
+$$
+\mathcal{L}_{\mathrm{hybrid}} = \mathcal{L}_{\mathrm{CVAE}} + \alpha\, \mathcal{L}_{\mathrm{GSNN}}. \tag{3}
+$$
+
+During inference, we randomly sample $z$ from $\mathcal{N}(0,I)$ $K$ times and obtain $K$ different outputs $\mathcal{D}(z,\hat{G}_{raw})$, denoted as $S = \{\hat{G}_{refined}^k : k \in \{1,2,\dots,K\}\}$. We average them to obtain the refined sketch $\hat{G}_{refined}$.
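+
+A hedged PyTorch sketch of the hybrid objective in Eqs. (1)-(3) and of the inference-time averaging is given below. The encoder/decoder interfaces, the use of BCE as the cross-entropy term $\epsilon$ for the binary sketch, and the flat latent vector are illustrative assumptions, not the released implementation.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def hybrid_loss(encoder, decoder, g_raw, g_gt, lam1=2.0, lam2=1.0, alpha=1.5, K=4):
+    """Hybrid CVAE/GSNN objective of Eqs. (1)-(3); expectations use K samples.
+
+    Assumed interfaces: encoder(g_gt, g_raw) -> (mu, logvar) of q(z | G_gt, G_raw);
+    decoder(z, g_raw) -> sketch logits shaped like g_gt; g_gt is a binary float volume.
+    """
+    mu, logvar = encoder(g_gt, g_raw)
+    # Closed-form KL(q || N(0, I)), summed over latent dims, averaged over the batch.
+    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
+    recon, gsnn = 0.0, 0.0
+    for _ in range(K):
+        z_q = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterized posterior sample
+        recon = recon + F.binary_cross_entropy_with_logits(decoder(z_q, g_raw), g_gt)
+        z_p = torch.randn_like(mu)                               # prior sample for the GSNN term
+        gsnn = gsnn + F.binary_cross_entropy_with_logits(decoder(z_p, g_raw), g_gt)
+    l_cvae = lam1 * kl + lam2 * recon / K                        # Eq. (1)
+    return l_cvae + alpha * gsnn / K                             # Eq. (3)
+
+@torch.no_grad()
+def refined_sketch(decoder, g_raw, z_dim, K=4):
+    """Inference: sample z ~ N(0, I) K times, decode, and average (Sec. 3.3)."""
+    probs = 0.0
+    for _ in range(K):
+        z = torch.randn(g_raw.size(0), z_dim, device=g_raw.device)
+        probs = probs + torch.sigmoid(decoder(z, g_raw))
+    return probs / K
+```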
+
+# 3.4. Semantic Scene Completion Stage
+
+In this stage, we take a single RGB image and the pre-computed sketches from the former stage as input to densely predict the semantic scene labels. We divide this stage into three parts: 2D feature learning, 2D-3D projection, and 3D feature learning. The input RGB image is first fed into a ResNet-50 [11] to extract local and global textural features. For stable training, we use parameters pre-trained on ImageNet [3] and freeze their weights. Since the output tensor of ResNet-50 has too many channels, which would bring too much computational cost to the 3D learning part, we adopt a convolution layer followed by Batch Normalization [12] and a Rectified Linear Unit (ReLU) to reduce its dimensionality.
+
+The computed 2D semantic feature map is then projected into 3D space according to the depth map and the corresponding camera parameters. Given the depth image $I_{depth}$, the intrinsic camera matrix $K_{camera} \in \mathbb{R}^{3 \times 3}$, and the extrinsic camera matrix $E_{camera} \in \mathbb{R}^{3 \times 4}$, each pixel $p_{u,v}$ in the 2D feature map can be projected to an individual 3D point $p_{x,y,z}$. Because the resolution of the 3D volume is lower than that of the 2D feature map, multiple points may fall into the same voxel during voxelization; for those voxels, we keep a single feature vector per voxel by max-pooling. After this step, the semantic feature vector of each pixel is assigned to its corresponding voxel via the mapping $\mathbb{M}$. Since many areas are not visible, zero vectors are assigned to the occluded areas and the empty foreground of the scene.
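+
+The sketch below illustrates this projection step in plain PyTorch (all inputs are assumed to be tensors); the axis conventions, the direction of the extrinsics, and the assumption of post-ReLU (non-negative) features, which makes max-pooling against a zero-initialized volume safe, are stated assumptions rather than details taken from the paper.
+
+```python
+import torch
+
+def project_features(feat2d, depth, K, E, grid_size, voxel_size, origin):
+    """Project a (C, H, W) 2D feature map into a (C, X, Y, Z) voxel volume.
+
+    depth: (H, W) metric depth; K: (3, 3) intrinsics; E: (3, 4) extrinsics, assumed
+    to map camera coordinates to grid-aligned world coordinates; origin: world
+    position of voxel (0, 0, 0). Occluded/empty voxels keep zero vectors.
+    """
+    C, H, W = feat2d.shape
+    vol = torch.zeros(C, *grid_size)
+    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
+    pix = torch.stack([u, v, torch.ones_like(u)]).reshape(3, -1).float()
+    rays = torch.linalg.inv(torch.as_tensor(K, dtype=torch.float32)) @ pix
+    xyz_cam = rays * depth.reshape(1, -1)                        # back-projected 3D points
+    R = torch.as_tensor(E[:, :3], dtype=torch.float32)
+    t = torch.as_tensor(E[:, 3], dtype=torch.float32).reshape(3, 1)
+    xyz = R @ xyz_cam + t
+    idx = ((xyz - torch.as_tensor(origin, dtype=torch.float32).reshape(3, 1)) / voxel_size).long()
+    bounds = torch.tensor(grid_size).reshape(3, 1)
+    valid = (idx >= 0).all(dim=0) & (idx < bounds).all(dim=0) & (depth.reshape(-1) > 0)
+    feats = feat2d.reshape(C, -1)
+    for n in valid.nonzero().flatten().tolist():
+        x, y, z = idx[:, n].tolist()
+        vol[:, x, y, z] = torch.maximum(vol[:, x, y, z], feats[:, n])   # max-pool collisions
+    return vol
+```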
+
+Given the projected 3D feature map $\mathbf{F}_{proj} \in \mathbb{R}^{C \times H \times W \times L}$, where $C$ is the number of channels and $H, W, L$ are the sizes of the feature map, we now use the prior information $\hat{G}_{\text{raw}}$ and $\hat{G}_{\text{refined}}$ as guidance. We define two sketch mappings, $\mathcal{F}_{\text{raw}}: \hat{G}_{\text{raw}} \to \mathbf{F}_{\text{raw}} \in \mathbb{R}^{C \times H \times W \times L}$ and $\mathcal{F}_{\text{refined}}: \hat{G}_{\text{refined}} \to \mathbf{F}_{\text{refined}} \in \mathbb{R}^{C \times H \times W \times L}$, to map this prior information into the same feature space as $\mathbf{F}_{\text{proj}}$. After these two mapping operations, both $\mathbf{F}_{\text{raw}}$ and $\mathbf{F}_{\text{refined}}$ have the same resolution and dimensionality as $\mathbf{F}_{\text{proj}}$, so we introduce the prior information by an element-wise addition of $\mathbf{F}_{\text{proj}}$, $\mathbf{F}_{\text{raw}}$, and $\mathbf{F}_{\text{refined}}$. In practice, these two mapping functions are implemented by $3 \times 3$ convolution layers. The resulting feature map is then fed into a 3D CNN, whose architecture is the same as that of the sketch branch, and we obtain the final semantic scene completion predictions.
+
+# 3.5. Loss Function
+
+During training, the dataset is organized as a set $\{(X_{\mathrm{TSDF}}, X_{\mathrm{RGB}}, G_{gt}, S_{gt})\}$ , where $G_{gt}$ represents the ground-truth sketch and $S_{gt}$ represents the ground-truth semantic labels. We optimize the entire architecture by the following formulas:
+
+$$
+\mathcal{L}_{\mathrm{loss}} = \mathcal{L}_{\mathrm{semantic}} + \mathcal{L}_{\mathrm{hybrid}} + \mathcal{L}_{\mathrm{sketch}}, \tag{4}
+$$
+
+$$
+\mathcal{L}_{\mathrm{semantic}} = \epsilon\big(S_{gt}, \mathcal{D}_{s}\big(\mathcal{E}_{s}(X_{\mathrm{RGB}})\big)\big), \tag{5}
+$$
+
+$$
+\mathcal{L}_{\mathrm{sketch}} = \epsilon\big(G_{gt}, \mathcal{D}_{g}\big(\mathcal{E}_{g}(X_{\mathrm{TSDF}})\big)\big), \tag{6}
+$$
+
+where $\mathcal{E}_g, \mathcal{D}_g$ are the encoder and the decoder of the sketch stage, $\mathcal{E}_s, \mathcal{D}_s$ are the encoder and the decoder of the semantic stage, $\mathcal{L}_{\mathrm{hybrid}}$ is defined in Eq. (3), and $\epsilon$ denotes the cross-entropy loss.
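+
+A short sketch of how these terms combine in practice (with BCE standing in for the cross-entropy $\epsilon$ on the binary sketch) is given below; the tensor shapes are assumptions for illustration.
+
+```python
+import torch.nn.functional as F
+
+def total_loss(sem_logits, sem_gt, sketch_logits, sketch_gt, l_hybrid):
+    """Overall objective of Eq. (4).
+
+    sem_logits: (B, N+1, X, Y, Z) per-voxel class scores including the empty class C0;
+    sem_gt: (B, X, Y, Z) integer labels; sketch_logits/sketch_gt: binary 3D sketch
+    prediction and ground truth; l_hybrid: value of L_hybrid from Eq. (3).
+    """
+    l_semantic = F.cross_entropy(sem_logits, sem_gt)                          # Eq. (5)
+    l_sketch = F.binary_cross_entropy_with_logits(sketch_logits, sketch_gt)   # Eq. (6)
+    return l_semantic + l_hybrid + l_sketch
+```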
+
+# 4. Experiments
+
+# 4.1. Datasets and Evaluation Metrics
+
+We evaluate the proposed method on three datasets: NYU Depth V2 [24] (which is denoted as NYU in the following), NYUCAD [6] and SUNCG [27]. We will introduce these three datasets in detail in the supplementary material. We follow SSCNet [27] and use precision, recall and voxel-level intersection over union (IoU) as evaluation metrics. Following [27], two tasks are considered: semantic scene completion (SSC) and scene completion (SC). For the task of SSC, we evaluate the IoU of each object class on both observed and occluded voxels in the view frustum. For the task of SC, we treat all voxels as binary predictions, i.e., empty or non-empty. We evaluate the binary IoU on occluded voxels in the view frustum.
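+
+For concreteness, a minimal NumPy sketch of these metrics is included below; the evaluation mask (observed plus occluded voxels for SSC, occluded voxels only for SC) is assumed to be computed beforehand, following [27].
+
+```python
+import numpy as np
+
+def ssc_sc_metrics(pred, gt, num_classes, eval_mask):
+    """Per-class voxel IoU (SSC) and binary occupancy IoU (SC).
+
+    pred, gt: (X, Y, Z) integer volumes with 0 denoting empty voxels;
+    eval_mask: boolean volume selecting the voxels to score.
+    """
+    p, g = pred[eval_mask], gt[eval_mask]
+    class_iou = []
+    for c in range(1, num_classes + 1):                       # skip the empty class C0
+        inter = np.logical_and(p == c, g == c).sum()
+        union = np.logical_or(p == c, g == c).sum()
+        class_iou.append(inter / union if union > 0 else np.nan)
+    sc_inter = np.logical_and(p > 0, g > 0).sum()             # occupancy treated as binary
+    sc_union = np.logical_or(p > 0, g > 0).sum()
+    return np.nanmean(class_iou), sc_inter / max(sc_union, 1)
+```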
+
+# 4.2. Implementation Details
+
+Training Details. We use PyTorch framework to implement our experiments with 2 GeForce GTX 1080 Ti GPUs. We adopt mini-batch SGD with momentum to train our model with batch size 4, momentum 0.9 and weight decay 0.0005. We employ a poly learning rate policy where the initial learning rate is multiplied by $(1 - \frac{\text{iter}}{\text{max\_iter}})^{0.9}$ . For both NYU and NYUCAD, we train our network for 250 epochs with initial learning rate 0.1. For SUNCG, we train our network for 8 epochs with initial learning rate 0.01. The expectation in Eq. (1) is estimated using $K = 4$ samples. $\lambda_{1}, \lambda_{2}$ and $\alpha$ in Eq. (1) and Eq. (3) are set to 2, 1 and 1.5 respectively.
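+
+The stated optimizer and poly schedule can be reproduced with a few lines of PyTorch; the helper below is only a sketch of that configuration (batch construction and the training loop are omitted).
+
+```python
+import torch
+
+def make_optimizer_and_scheduler(model, max_iter, base_lr=0.1):
+    """SGD with momentum 0.9 and weight decay 5e-4, with the poly policy
+    lr = base_lr * (1 - iter / max_iter) ** 0.9 applied per iteration."""
+    opt = torch.optim.SGD(model.parameters(), lr=base_lr,
+                          momentum=0.9, weight_decay=0.0005)
+    sched = torch.optim.lr_scheduler.LambdaLR(opt, lambda it: (1.0 - it / max_iter) ** 0.9)
+    return opt, sched
+```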
+
+| Drop Rate(%) | SC-IoU(%) | SSC-mIoU(%) |
| 0 | 94.2 | 65.0 |
| 20 | 93.7 | 63.6 |
| 40 | 93.2 | 62.3 |
| 60 | 92.0 | 59.9 |
| 80 | 89.9 | 57.1 |
+
+Oracle Ablation. To obtain the theoretical upper limit of the proposed method, we replace the output of the first stage with the ground-truth 3D sketch to supply the structure prior. Results are shown in Table 1. Drop Rate means we randomly discard a given ratio of voxels from the ground-truth 3D sketch. We observe that with the whole 3D sketch as structure prior, our network can infer most of the invisible areas and obtains $94.2\%$ SC IoU. Even as the drop rate increases to $80\%$, the performance does not drop much and remains higher than the best performance of the proposed method, which verifies the value of an accurate structure prior.
+
+# 4.3. Comparisons with State-of-the-art Methods
+
+We further compare the proposed method with state-of-the-art methods. Table 3 shows the performances by state-of-the-art methods on NYU dataset. We observe that the proposed method outperforms all existing methods by a large margin, more specifically, we gain an increase of $7.8\%$ SC IoU and $2.6\%$ SSC mIoU compared to CCPNet [41]. We argue that this improvement is caused by the novel two-stage architecture which makes the full use of the structure prior. The provided structure prior can accurately infer invisible areas of the scene with well structure-preserving details.
+
+We also conduct experiments on NYUCAD dataset to validate the generalization of the proposed method. Table 4 presents the quantitative results on NYUCAD dataset. Our proposed method maintains the performance advantage and outperforms CCPNet [41] by $1.8\%$ SC IoU and $2.0\%$ SSC mIoU. Note that although some works [41, 33, 7] use larger input resolution than ours, the proposed method still outperforms them with a low-resolution input of $60 \times 36 \times 60$ .
+
+Experiments on SUNCG dataset and the visualization of the SSC results compared with SSCNet [27] on NYUCAD dataset are put in the supplementary material.
+
+# 4.4. Ablation Study
+
+To evaluate the effectiveness of the pivotal components of our method, we perform extensive ablation studies using the same hyperparameters. Details are illustrated below.
+
+Table 1. Oracle Ablation. (Oracle) Drop Rate means we randomly drop the ground-truth sketch in a certain proportion. We perform this ablation study on NYUCAD dataset.
+
+| #Stage | Structure Prior | CVAE | SC-IoU(%) | SSC-mIoU(%) |
| 1 | X | X | 79.3 | 48.7 |
| 2 | X | X | 81.1 | 50.6 |
| 2 | ✓ | X | 83.6 | 53.9 |
| 2 | ✓ | ✓ | 84.2 | 55.2 |
+
+Table 2. Ablation studies on different modules. We perform this ablation study on NYUCAD dataset.
+
+Different Modules in the Framework. We first conduct ablation studies on different modules in the proposed method. Results are shown in Table 2. From Row 1 and Row 2, we find that just adopting a dual-path structure could boost the performance, as more parameters are introduced. In the third row, with the introduction of structure prior, our network could infer the invisible areas of the scene with well structure-preserving details, which brings great improvements. Finally, with the proposed 3D Sketch Hallucination Module, we further boost the performance and achieve $84.2\%$ SC IoU and $55.2\%$ SSC mIoU, which are both new state-of-the-art performance on NYUCAD.
+
+Different Representations of Structure Prior. We also perform ablation studies on different representations of the structure prior.
+
+| Methods | Resolution | Trained on | SC prec. | SC recall | SC IoU | ceil. | floor | wall | win. | chair | bed | sofa | table | tvs | furn. | objs. | SSC avg. |
| Lin et al. [15] | (240, 60) | NYU | 58.5 | 49.9 | 36.4 | 0.0 | 11.7 | 13.3 | 14.1 | 9.4 | 29.0 | 24.0 | 6.0 | 7.0 | 16.2 | 1.1 | 12.0 |
| Geiger et al. [8] | (240, 60) | NYU | 65.7 | 58.0 | 44.4 | 10.2 | 62.5 | 19.1 | 5.8 | 8.5 | 40.6 | 27.7 | 7.0 | 6.0 | 22.6 | 5.9 | 19.6 |
| SSCNet [27] | (240, 60) | NYU | 57.0 | 94.5 | 55.1 | 15.1 | 94.7 | 24.4 | 0.0 | 12.6 | 32.1 | 35.0 | 13.0 | 7.8 | 27.1 | 10.1 | 24.7 |
| ESSCNet [39] | (240, 60) | NYU | 71.9 | 71.9 | 56.2 | 17.5 | 75.4 | 25.8 | 6.7 | 15.3 | 53.8 | 42.4 | 11.2 | 0 | 33.4 | 11.8 | 26.7 |
| DDRNet [14]* | (240, 60) | NYU | 71.5 | 80.8 | 61.0 | 21.1 | 92.2 | 33.5 | 6.8 | 14.8 | 48.3 | 42.3 | 13.2 | 13.9 | 35.3 | 13.2 | 30.4 |
| VVNetR-120 [9] | (120, 60) | NYU+SUNCG | 69.8 | 83.1 | 61.1 | 19.3 | 94.8 | 28.0 | 12.2 | 19.6 | 57.0 | 50.5 | 17.6 | 11.9 | 35.6 | 15.3 | 32.9 |
| TS3D [7]* | (240, 60) | NYU | - | - | 60.0 | 9.7 | 93.4 | 25.5 | 21.0 | 17.4 | 55.9 | 49.2 | 17.0 | 27.5 | 39.4 | 19.3 | 34.1 |
| SATNet-TNetFuse [16]* | (60, 60) | NYU+SUNCG | 67.3 | 85.8 | 60.6 | 17.3 | 92.1 | 28.0 | 16.6 | 19.3 | 57.5 | 53.8 | 17.2 | 18.5 | 38.4 | 18.9 | 34.4 |
| ForkNet [33] | (80, 80) | NYU | - | - | 63.4 | 36.2 | 93.8 | 29.2 | 18.9 | 17.7 | 61.6 | 52.9 | 23.3 | 19.5 | 45.4 | 20.0 | 37.1 |
| CCPNet [41] | (240, 240) | NYU | 74.2 | 90.8 | 63.5 | 23.5 | 96.3 | 35.7 | 20.2 | 25.8 | 61.4 | 56.1 | 18.1 | 28.1 | 37.8 | 20.1 | 38.5 |
| Ours* | (60, 60) | NYU | 85.0 | 81.6 | 71.3 | 43.1 | 93.6 | 40.5 | 24.3 | 30.0 | 57.1 | 49.3 | 29.2 | 14.3 | 42.5 | 28.6 | 41.1 |
+
+Table 3. Results on NYU dataset. Bold numbers represent the best scores. Resolution(a, b) means the input resolution is $(a \times 0.6a \times a)$ and the output resolution is $(b \times 0.6b \times b)$ . * are RGB-D based methods.
+
+| Methods | Resolution | Trained on | SC prec. | SC recall | SC IoU | ceil. | floor | wall | win. | chair | bed | sofa | table | tvs | furn. | objs. | SSC avg. |
| Zheng et al. [43] | (240, 60) | NYUCAD | 60.1 | 46.7 | 34.6 | - | - | - | - | - | - | - | - | - | - | - | - |
| Firman et al. [6] | (240, 60) | NYUCAD | 66.5 | 69.7 | 50.8 | - | - | - | - | - | - | - | - | - | - | - | - |
| SSCNet [27] | (240, 60) | NYUCAD+SUNCG | 75.4 | 96.3 | 73.2 | 32.5 | 92.6 | 40.2 | 8.9 | 33.9 | 57.0 | 59.5 | 28.3 | 8.1 | 44.8 | 25.1 | 40.0 |
| VVNetR-120 [9] | (120, 60) | NYUCAD+SUNCG | 86.4 | 92.0 | 80.3 | - | - | - | - | - | - | - | - | - | - | - | - |
| DDRNet [14]* | (240, 60) | NYUCAD | 88.7 | 88.5 | 79.4 | 54.1 | 91.5 | 56.4 | 14.9 | 37.0 | 55.7 | 51.0 | 28.8 | 9.2 | 44.1 | 27.8 | 42.8 |
| TS3D [7]* | (240, 60) | NYUCAD | - | - | 76.1 | 25.9 | 93.8 | 48.9 | 33.4 | 31.2 | 66.1 | 56.4 | 31.6 | 38.5 | 51.4 | 30.8 | 46.2 |
| CCPNet [41] | (240, 240) | NYUCAD | 91.3 | 92.6 | 82.4 | 56.2 | 94.6 | 58.7 | 35.1 | 44.8 | 68.6 | 65.3 | 37.6 | 35.5 | 53.1 | 35.2 | 53.2 |
| Ours* | (60, 60) | NYUCAD | 90.6 | 92.2 | 84.2 | 59.7 | 94.3 | 64.3 | 32.6 | 51.7 | 72.0 | 68.7 | 45.9 | 19.0 | 60.5 | 38.5 | 55.2 |
+
+Table 4. Results on NYUCAD dataset. Bold numbers represent the best scores. Resolution(a, b) means the input resolution is $(a \times 0.6a \times a)$ and the output resolution is $(b \times 0.6b \times b)$ . * are RGB-D based methods.
+
+| Input | Shape | Semantic Labels | Sketch | SC-IoU(%) | SSC-mIoU(%) |
| TSDF+RGB | ✓ | | | 83.1 | 52.5 |
| TSDF+RGB | | ✓ | | 82.6 | 53.2 |
| TSDF+RGB | | | ✓ | 84.2 | 55.2 |
+
+Table 5. Ablation studies on different representations of structure prior. We perform this ablation study on NYUCAD dataset.
+
+| Supervision | Embedding | SC-IoU(%) | SSC-mIoU(%) |
| None | Implicit | 81.1 | 50.6 |
| Shape | Implicit | 83.1 | 51.8 |
| Shape | Explicit | 83.1 | 52.5 |
| Semantic | Implicit | 82.3 | 52.1 |
| Semantic | Explicit | 82.6 | 53.2 |
| Sketch | Implicit | 83.5 | 54.4 |
| Sketch | Explicit | 84.2 | 55.2 |
+
+We list three representations of the prior here: shape, semantic labels, and sketch. Shape is the binary description of the scene, and we generate the ground-truth shape by binarizing the semantic labels. Semantic labels and the sketch have been introduced in the sections above. From Table 5, we observe that the sketch is the best representation for modelling the structure prior, as it can infer the invisible regions with well structure-preserving details.
+
+Different Types of Embeddings. In this part, we conduct ablation studies on different types of embeddings. Results are shown in Table 6. 'Implicit' means taking the output of the last deconvolution layer of the first stage as the geometric embedding and feeding it to the second stage as prior information.
+
+Table 6. Ablation studies on different types of embeddings. We perform this ablation study on NYUCAD dataset.
+
+| Input for Stage1 | Input for Stage2 | SC-IoU(%) | SSC-mIoU(%) |
| RGB | RGB | 68.0 | 40.0 |
| RGB | TSDF | 71.2 | 40.2 |
| TSDF | TSDF | 71.5 | 37.2 |
| TSDF | RGB | 71.3 | 41.1 |
+
+Table 7. Ablation studies on different modal input. We perform this ablation study on NYU dataset.
+
+'Explicit' means we abstract a concrete structure from the implicit embedding and use it as the structure prior. We observe that even with the implicit embedding, adding any reasonable supervision (semantics, shape, or sketch) boosts the performance. When we convert to the explicit embedding, a better structure prior is obtained and the performance improves further. Note that the explicit embedding supervised by the sketch outperforms the baseline using the implicit embedding with no supervision by $3.1\%$ SC IoU and $4.6\%$ SSC mIoU, which demonstrates the effectiveness of the proposed sketch structure prior and the explicit embedding method.
+
+Different Modal Input. We adopt data from different modalities as input, more specifically, TSDF for the first stage and RGB for the second stage. We argue that TSDF embeds rich geometric information and is suitable for the sketch prediction task, while RGB is rich in semantic information and is suitable for the semantic label prediction task. Results are shown in Table 7. From Row 1 and Row 4, we observe that TSDF generates a better structure prior than RGB, resulting in a gain of $3.3\%$ SC IoU. From Row 3 and Row 4, we observe that RGB generates more precise semantic labels based on the same structure prior provided by TSDF, resulting in a gain of $3.9\%$ SSC mIoU. From Rows 1, 2, and 3, we observe that introducing the other modality yields corresponding gains over single-modality input.
+
+
+Figure 5. Visualization of the sketch on the NYUCAD dataset (columns: RGB, Observed Surface, Sketch Ground Truth, Sketch w/o CVAE, Sketch with CVAE, SSC Ground Truth, SSC w/o CVAE, SSC with CVAE). With the proposed 3D Sketch Hallucination Module, which leverages the CVAE to guide the inference of invisible areas, the sketch obtains a sharper boundary and is more complete, resulting in better semantic predictions.
+
+| Dataset | Resolution | SC-IoU(%) | SSC-mIoU(%) |
| NYU | (60, 60) | 71.3 | 41.1 |
| NYU | (80, 60) | 71.4 | 41.2 |
| NYU | (80, 80) | 76.5 | 40.0 |
| NYUCAD | (60, 60) | 84.2 | 55.2 |
| NYUCAD | (80, 60) | 84.1 | 55.9 |
| NYUCAD | (80, 80) | 86.0 | 54.9 |
+
+Table 8. Ablation studies on input/output resolutions. We perform this ablation study on both the NYU and NYUCAD datasets. Resolution(a, b) means the input resolution is $(a \times 0.6a \times a)$ and the output resolution is $(b \times 0.6b \times b)$ .
+
+labels based on the same structure prior provided by TSDF, resulting in a gain of $3.9\%$ SSC mIoU. From Row 1, Row 2 and Row 3, we observe that introducing another modality yields corresponding gains over the single-modality baselines.
+
+Different Input/Output Resolutions. In this part, we conduct ablation studies to verify the impact of different input/output resolutions on performance. Results are shown in Table 8. We observe that increasing the input resolution does not hurt performance. If we increase both the input and output resolutions, SC IoU increases substantially, while SSC mIoU only declines slightly. Hence we conclude that increasing the resolution of either the input or the output is beneficial to the semantic scene completion task.
+
+# 4.5. Qualitative Results of 3D Sketch
+
+We visualize the predicted 3D sketch with/without CVAE in Figure 5. We can observe that the sketch is more complete and precise with the proposed 3D Sketch Hallucination Module. Under the constraints of a more complete sketch, the semantic result shows great consistency in regions with the same semantic labels and has a sharper
+
+boundary. For example, in the first row, some regions in the bookcase are mislabeled as objects without CVAE, and those regions in the corresponding sketch are missing. In the second row, the sketch without CVAE fails to extract the outline of the object on the wall, leading to uncertainty of the semantic boundary. In the third row, the missing boundary in the sketch without CVAE brings confusing semantics. In the last row, the sketch of the photo frame is incomplete without CVAE, resulting in more areas being mislabeled as wall.
+
+# 5. Conclusion
+
+In this paper, we propose a novel 3D sketch-aware feature embedding scheme which explicitly embeds geometric information with structure-preserving details. Based on this, we further propose a semantic scene completion framework that incorporates a novel 3D Sketch Hallucination Module to guide full 3D sketch inference from partial observations via a structure prior. Experiments demonstrate the effectiveness and efficiency of the proposed method, which achieves state-of-the-art performance on three public benchmarks.
+
+Acknowledgments: This work is supported by the National Key Research and Development Program of China (2017YFB1002601, 2016QY02D0304), National Natural Science Foundation of China (61375022, 61403005, 61632003), Beijing Advanced Innovation Center for Intelligent Robots and Systems (2018IRS11), and PEK-SenseTime Joint Laboratory of Machine Vision.
+
+# References
+
+[1] John Canny. A computational approach to edge detection. PAMI, (6):679-698, 1986.
+[2] Angela Dai, Charles Ruizhongtai Qi, and Matthias Nießner. Shape completion using 3d-encoder-predictor cnns and shape synthesis. In CVPR, pages 5868-5877, 2017.
+[3] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, pages 248-255. IEEE, 2009.
+[4] Aloisio Dourado, Teofilo Emidio de Campos, Hansung Kim, and Adrian Hilton. Edgenet: Semantic scene completion from rgb-d images. arXiv preprint arXiv:1908.02893, 2019.
+[5] Patrick Esser, Ekaterina Sutter, and Björn Ommer. A variational u-net for conditional appearance and shape generation. In CVPR, pages 8857-8866, 2018.
+[6] Michael Firman, Oisin Mac Aodha, Simon Julier, and Gabriel J Brostow. Structured prediction of unobserved voxels from a single depth image. In CVPR, pages 5431-5440, 2016.
+[7] Martin Garbade, Johann Sawatzky, Alexander Richard, and Juergen Gall. Two stream 3d semantic scene completion. 2019.
+[8] Andreas Geiger and Chaohui Wang. Joint 3d object and layout inference from a single rgb-d image. In $GCPR$ , pages 183-195, 2015.
+[9] Yu-Xiao Guo and Xin Tong. View-volume network for semantic scene completion from a single depth image. In IJCAI, 2018.
+[10] Jianzhong He, Shiliang Zhang, Ming Yang, Yanhu Shan, and Tiejun Huang. Bi-directional cascade network for perceptual edge detection. In CVPR, pages 3828-3837, 2019.
+[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770-778, 2016.
+[12] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
+[13] Young Min Kim, Niloy J Mitra, Dong-Ming Yan, and Leonidas Guibas. Acquiring 3d indoor environments with variability and repetition. TOG, 31(6):138, 2012.
+[14] Jie Li, Yu Liu, Dong Gong, Qinfeng Shi, Xia Yuan, Chunxia Zhao, and Ian Reid. Rgbd based dimensional decomposition residual network for 3d semantic scene completion. In CVPR, 2019.
+[15] Dahua Lin, Sanja Fidler, and Raquel Urtasun. Holistic scene understanding for 3d object detection with rgbd cameras. In ICCV, pages 1417-1424, 2013.
+[16] Shice Liu, Yu Hu, Yiming Zeng, Qiankun Tang, Beibei Jin, Yinhe Han, and Xiaowei Li. See and think: Disentangling semantic scene completion. In NIPS, pages 261-272, 2018.
+[17] Yun Liu, Ming-Ming Cheng, Xiaowei Hu, Kai Wang, and Xiang Bai. Richer convolutional features for edge detection. In CVPR, pages 3000-3009, 2017.
+[18] Liangliang Nan, Ke Xie, and Andrei Sharf. A search-classify approach for cluttered indoor scene understanding. TOG, 31(6):137, 2012.
+
+[19] Mark Pauly, Niloy J Mitra, Johannes Wallner, Helmut Pottmann, and Leonidas J Guibas. Discovering structural regularity in 3d geometry. In TOG, volume 27, page 43. ACM, 2008.
+[20] Chao Peng, Xiangyu Zhang, Gang Yu, Guiming Luo, and Jian Sun. Large kernel matters-improve semantic segmentation by global convolutional network. In CVPR, pages 4353-4361, 2017.
+[21] Tianjia Shao, Weiwei Xu, Kun Zhou, Jingdong Wang, Dongping Li, and Baining Guo. An interactive approach to semantic modeling of indoor scenes with an rgbd camera. TOG, 31(6):136, 2012.
+[22] Saurabh Sharma, Pavan Teja Varigonda, Prashast Bindal, Abhishek Sharma, and Arjun Jain. Monocular 3d human pose estimation by generation and ordinal ranking. In CVPR, pages 2325-2334, 2019.
+[23] Yukai Shi, Keze Wang, Chongyu Chen, Li Xu, and Liang Lin. Structure-preserving image super-resolution via contextualized multitask learning. IEEE transactions on multimedia, 19(12):2804-2815, 2017.
+[24] Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. Indoor segmentation and support inference from rgbd images. In ECCV, 2012.
+[25] I. Sobel and G. Feldman. A computational approach to edge detection, 1968.
+[26] Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. In NIPS, pages 3483-3491, 2015.
+[27] Shuran Song, Fisher Yu, Andy Zeng, Angel X Chang, Manolis Savva, and Thomas Funkhouser. Semantic scene completion from a single depth image. In CVPR, pages 1746-1754, 2017.
+[28] Pablo Speciale, Martin R Oswald, Andrea Cohen, and Marc Pollefeys. A symmetry prior for convex variational 3d reconstruction. In ECCV, pages 313-328. Springer, 2016.
+[29] David Stutz and Andreas Geiger. Learning 3d shape completion from laser scan data with weak supervision. In CVPR, pages 1955-1964, 2018.
+[30] Towaki Takikawa, David Acuna, Varun Jampani, and Sanja Fidler. Gated-scnn: Gated shape cnns for semantic segmentation. In ICCV, pages 5229-5238, 2019.
+[31] Sebastian Thrun and Ben Wegbreit. Shape from symmetry. In ICCV, volume 2, pages 1824-1831. IEEE, 2005.
+[32] Wenguan Wang, Shuyang Zhao, Jianbing Shen, Steven CH Hoi, and Ali Borji. Salient object detection with pyramid attention and salient edges. In CVPR, pages 1448-1457, 2019.
+[33] Yida Wang, David Joseph Tan, Nassir Navab, and Federico Tombari. Forknet: Multi-branch volumetric semantic completion from a single depth image. In ICCV, pages 8608–8617, 2019.
+[34] Wayne Wu, Chen Qian, Shuo Yang, Quan Wang, Yici Cai, and Qiang Zhou. Look at boundary: A boundary-aware face alignment algorithm. In CVPR, 2018.
+[35] Jimei Yang, Brian Price, Scott Cohen, Honglak Lee, and Ming-Hsuan Yang. Object contour detection with a fully convolutional encoder-decoder network. In CVPR, pages 193-202, 2016.
+
+[36] Changqian Yu, Jingbo Wang, Chao Peng, Changxin Gao, Gang Yu, and Nong Sang. Learning a discriminative feature network for semantic segmentation. In CVPR, pages 1857-1866, 2018.
+[37] Wentao Yuan, Tejas Khot, David Held, Christoph Mertz, and Martial Hebert. Pcn: Point completion network. In 3DV, pages 728-737. IEEE, 2018.
+[38] Hang Zhang, Kristin Dana, Jianping Shi, Zhongyue Zhang, Xiaogang Wang, Ambrish Tyagi, and Amit Agrawal. Context encoding for semantic segmentation. In CVPR, pages 7151-7160, 2018.
+[39] Jiahui Zhang, Hao Zhao, Anbang Yao, Yurong Chen, Li Zhang, and Hongen Liao. Efficient semantic scene completion network with spatial group convolution. In ECCV, pages 733-749, 2018.
+[40] Liang Zhang, Le Wang, Xiangdong Zhang, Peiyi Shen, Mohammed Bennamoun, Guangming Zhu, Syed Afaq Ali Shah, and Juan Song. Semantic scene completion with dense crf from a single depth image. Neurocomputing, 318:182-195, 2018.
+[41] Pingping Zhang, Wei Liu, Yinjie Lei, Huchuan Lu, and Xiaoyun Yang. Cascaded context pyramid for full-resolution 3d semantic scene completion. In ICCV, pages 7801-7810, 2019.
+[42] Yuting Zhang, Yijie Guo, Yixin Jin, Yijun Luo, Zhiyuan He, and Honglak Lee. Unsupervised discovery of object landmarks as structural representations. In CVPR, pages 2694-2703, 2018.
+[43] Bo Zheng, Yibiao Zhao, Joey C Yu, Katsushi Ikeuchi, and Song-Chun Zhu. Beyond point clouds: Scene understanding by reasoning geometry and physics. In CVPR, pages 3127-3134, 2013.
\ No newline at end of file
diff --git a/3dsketchawaresemanticscenecompletionviasemisupervisedstructureprior/images.zip b/3dsketchawaresemanticscenecompletionviasemisupervisedstructureprior/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..953e7d60da8ba7d783d96c8141be93c68e2926bb
--- /dev/null
+++ b/3dsketchawaresemanticscenecompletionviasemisupervisedstructureprior/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:de6a58202426796483595f797e8a815d4c0aa23910f5dbc3c8e52b9f4d3b9be6
+size 662859
diff --git a/3dsketchawaresemanticscenecompletionviasemisupervisedstructureprior/layout.json b/3dsketchawaresemanticscenecompletionviasemisupervisedstructureprior/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..b50ffb61c45ddd10e97439dec951bb0d0e092fa9
--- /dev/null
+++ b/3dsketchawaresemanticscenecompletionviasemisupervisedstructureprior/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:163bd69a45cc075a2c43a9072e902efb56dc76432cc5519b391f36a7643c928a
+size 479126
diff --git a/3dssdpointbased3dsinglestageobjectdetector/4f63b4bc-c092-4389-8f80-754e279beaad_content_list.json b/3dssdpointbased3dsinglestageobjectdetector/4f63b4bc-c092-4389-8f80-754e279beaad_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..63ebc66ae40cc509a1dd608a9598ce45c93202b8
--- /dev/null
+++ b/3dssdpointbased3dsinglestageobjectdetector/4f63b4bc-c092-4389-8f80-754e279beaad_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ef074071deb5e9567a149f842d970a5ca7ff23edccdbf39323fad9cd91fe55dc
+size 76086
diff --git a/3dssdpointbased3dsinglestageobjectdetector/4f63b4bc-c092-4389-8f80-754e279beaad_model.json b/3dssdpointbased3dsinglestageobjectdetector/4f63b4bc-c092-4389-8f80-754e279beaad_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..520093d2ea398918d696488ea90db4323f709c7b
--- /dev/null
+++ b/3dssdpointbased3dsinglestageobjectdetector/4f63b4bc-c092-4389-8f80-754e279beaad_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b0821336baa3216eb50fea1151d52336749ba4e8b5be87cf6bde063af02f22ff
+size 89372
diff --git a/3dssdpointbased3dsinglestageobjectdetector/4f63b4bc-c092-4389-8f80-754e279beaad_origin.pdf b/3dssdpointbased3dsinglestageobjectdetector/4f63b4bc-c092-4389-8f80-754e279beaad_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..1fbc6690385087fe886ec952cbf33ef17a2b7d24
--- /dev/null
+++ b/3dssdpointbased3dsinglestageobjectdetector/4f63b4bc-c092-4389-8f80-754e279beaad_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2eddff89056fe83da2da4f9106ac2b039f3620970ec0f1188f2bf256826b219a
+size 2019468
diff --git a/3dssdpointbased3dsinglestageobjectdetector/full.md b/3dssdpointbased3dsinglestageobjectdetector/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..5674363efa223000e51006cc965151fa8aa53038
--- /dev/null
+++ b/3dssdpointbased3dsinglestageobjectdetector/full.md
@@ -0,0 +1,337 @@
+# 3DSSD: Point-based 3D Single Stage Object Detector
+
+Zetong Yang $^{1}$ Yanan Sun $^{2}$ Shu Liu $^{3}$ Jiaya Jia $^{1,3}$
+
+1The Chinese University of Hong Kong 2Hong Kong University of Science and Technology 3SmartMore
+
+{tomztyang, now.syn}@gmail.com sliu@smartmore.com leojia@cse.cuhk.edu.hk
+
+# Abstract
+
+The prevalence of voxel-based 3D single-stage detectors contrasts with underexplored point-based methods. In this paper, we present a lightweight point-based 3D single stage object detector, 3DSSD, to achieve a decent balance of accuracy and efficiency. In this paradigm, all upsampling layers and the refinement stage, which are indispensable in all existing point-based methods, are abandoned. We instead propose a fusion sampling strategy in the downsampling process to make detection on less representative points feasible. A delicate box prediction network, including a candidate generation layer and an anchor-free regression head with a 3D center-ness assignment strategy, is developed to meet the demand of high accuracy and speed. Our 3DSSD paradigm is an elegant single-stage anchor-free one. We evaluate it on the widely used KITTI dataset and the more challenging nuScenes dataset. Our method outperforms all state-of-the-art voxel-based single-stage methods by a large margin, and even yields comparable performance with two-stage point-based methods, with an amazing inference speed of $25+$ FPS, $2\times$ faster than former state-of-the-art point-based methods.
+
+# 1. Introduction
+
+3D scene understanding has attracted much attention since it benefits many applications, such as autonomous driving [7] and augmented reality [17]. In this paper, we focus on the fundamental task of 3D object detection, which predicts 3D bounding boxes and class labels for each instance within a point cloud.
+
+Although great breakthroughs have been made in 2D detection, it is still not possible to directly apply these 2D methods to 3D because of the unique characteristics of point clouds. Compared with 2D images, point clouds are sparse, unordered and locality sensitive, making it hard to use convolutional neural networks (CNNs) for parsing. How to convert and utilize raw point cloud data has become the primary problem in the detection task.
+
+Several existing methods convert point clouds from
+
+sparse formation to compact representations by projecting them to images [4, 11, 8, 18, 5], or subdividing them to equally distributed voxels [16, 26, 33, 29, 28, 12]. We call these methods voxel-based ones, which require voxelization on the whole point cloud. Features in each voxel are generated by either PointNet-like backbones [21, 22] or handcrafted features. Then a variety of 2D detection paradigms can be applied in the compact voxel space. Although these methods are straightforward and efficient, they suffer from information loss during voxelization and encounter performance bottleneck.
+
+Another stream is with point-based methods [31, 32, 23]. They take raw point clouds as input, and predict bounding boxes based on each point. Specifically, they are composed of two stages. In the first stage, set abstraction (SA) layers are used for downsampling and extracting context features. Afterwards, feature propagation (FP) layers are applied for upsampling and broadcasting features to points, which are discarded during downsampling. A 3D region proposal network (RPN) is then applied for generating proposals centered at each point. Based on these proposals, a refinement module is developed in the second stage to give final prediction. These methods achieve better performance. But inference usually takes much longer time.
+
+Our Contributions Different from all previous methods, we develop a lightweight and efficient point-based 3D single stage object detection framework. Our key observation is that in point-based methods, FP layers and the refinement stage consume half of the inference time. However, it is non-trivial to abandon FP layers. Under the current sampling strategy in SA with only furthest-point-sampling based on 3D Euclidean distance (D-FPS), foreground instances with only a few interior points may be lost after sampling. Consequently, it is impossible for them to be detected, which leads to huge performance drop.
+
+In STD [32], without upsampling and only conducting detection on the remaining downsampled points, the performance drops by about $9\%$ . That is why FP layers must be used for point upsampling, albeit at the cost of a large amount of extra computation. To deal with this issue, we first
+
+propose a new sampling strategy based on feature distance, called F-FPS, which effectively preserves interior points of various instances. Our final sampling strategy becomes a fusion version of F-FPS and D-FPS.
+
+To better exploit the representative points retained after SA layers, we develop a box prediction network, which utilizes a candidate generation layer (CG), an anchor-free regression head and a 3D center-ness assignment strategy. In the CG layer, we first shift representative points from F-FPS to generate candidate points. This shifting operation is supervised by the relative locations between the representative points and centers of their corresponding instances.
+
+Then, we treat these candidate points as centers, find their surrounding points from the whole set of representative points from both F-FPS and D-FPS, and extract their features through multi-layer perceptron (MLP) networks. These features are finally fed into an anchor-free regression head to predict 3D bounding boxes. We also design a 3D center-ness assignment strategy, which assigns higher classification scores to candidate points closer to instance centers, in order to retrieve precise localization prediction.
+
+We evaluate our method on widely used KITTI [6] dataset, and more challenging nuScenes [3] dataset. Experiments show that our model outperforms all state-of-the-art voxel-based single-stage methods by a large margin, and even achieves comparable performance with all two-stage point-based methods at a much faster inference speed. Our primary contribution is manifold.
+
+- We propose a lightweight and effective point-based 3D single-stage object detector 3DSSD. We remove computational-heavy FP layers and the refinement module, which are however indispensable in all existing point-based methods.
+- A novel fusion sampling strategy in SA layers is developed to keep adequate interior points of different foreground instances. It preserves rich information for regression and classification.
+- We design a box prediction network for better effectiveness and efficiency. Experimental results show that our framework outperforms all single-stage methods, and yields comparable performance to state-of-the-art two-stage methods with much higher efficiency (38ms per scene).
+
+# 2. Related Work
+
+3D Object Detection with Multiple Sensors There are several methods exploiting the way to fuse information from multiple sensors for object detection. MV3D [4] projects LiDAR point cloud to bird-eye view (BEV) in order to generate proposals. These proposals with other information from images, front view and BEV are then sent to the second stage to predict final bounding boxes. AVOD [11]
+
+extends MV3D by introducing image features in the proposal generation stage. MMF [14] fuses information from depth maps, LiDAR point clouds, images and maps to accomplish multiple tasks including depth completion, 2D object detection and 3D object detection. These tasks benefit each other and enhance final performance on 3D object detection.
+
+3D Object Detection with LiDAR Only Mainly two streams of methods deal with 3D object detection only using LiDAR data. One is voxel-based, which applies voxelization on the entire point cloud. The difference among these voxel-based methods lies on the initialization of voxel features. In [26], each non-empty voxel is encoded with 6 statistical quantities by the points within this voxel. Binary encoding is used in [13] for each voxel grid. VoxelNet [33] utilizes PointNet [21] to extract features of each voxel. Compared to [33], SECOND [28] applies sparse convolution layers [9] for parsing the compact representation. PointPillars [12] treats pseudo-images as the representation after voxelization.
+
+Another line is point-based, which takes raw point cloud as input, and generates predictions based on each point. F-PointNet [20] and IPOD [31] adopt 2D-mechanism-like detection or segmentation to filter most useless points, and generate predictions from kept useful points. PointRCNN [23] utilizes PointNet++ [22] with SA and FP layers to extract features for each point, proposes a region proposal network (RPN) to generate proposals, and applies a refinement module to predict bounding boxes and class labels. These methods outperform voxel-based ones, and yet with much longer inference time. They cannot be applied to real-time autonomous driving systems.
+
+STD [32] takes advantage of both point- and voxel-based methods. It uses raw point cloud as input, applies PointNet++ to extract features, proposes a PointsPool layer for converting features from sparse to dense representation, and finally utilizes CNNs in the refinement module. Its speed is faster than former point-based methods, but still much slower than voxel-based ones. As analyzed above, all point-based methods are composed of two stages of proposal generation – including SA layers and FP layers – and refinement for accurate prediction. This paper makes the first attempt not to use FP layers and the refinement module, so as to speed up the whole procedure.
+
+# 3. Our Framework
+
+In this section, we first analyze the bottleneck of point-based methods, and describe our proposed fusion sampling strategy. Next, we present the box prediction network including a candidate generation layer, anchor-free regression head and our 3D center-ness assignment strategy.
+
+
+Figure 1. Illustration of the 3DSSD framework. It consists of a backbone network and a box prediction network that includes a candidate generation layer and an anchor-free prediction head. (a) Backbone network. It takes the raw point cloud $(x, y, z, r)$ as input, and generates global features for all representative points through several SA layers with the fusion sampling (FS) strategy. (b) Candidate generation layer (CG). It downsamples, shifts and extracts features for representative points after SA layers. (c) Anchor-free prediction head.
+
+Finally, we discuss the loss function. The whole framework of 3DSSD is illustrated in Figure 1.
+
+# 3.1. Fusion Sampling
+
+Motivation As aforementioned, there are two streams of methods in 3D object detection, which are point-based and voxel-based frameworks. Albeit accurate, point-based methods are more time-consuming compared to voxel-based ones. All current point-based methods [32, 23, 31] are composed of the two stages of proposal generation and prediction refinement.
+
+In the first stage, SA layers are applied to downsample points for better efficiency and larger receptive fields, while FP layers are applied to broadcast features to the points dropped during the downsampling process, in order to recover all points. In the second stage, a refinement module optimizes proposals from the RPN to get more accurate predictions. SA layers are necessary for extracting point features. We reiterate that FP layers and the refinement module limit the efficiency, as shown in Table 1. We are thus motivated to design a lightweight and effective point-based single stage detector.
+
+Challenge It is non-trivial to remove FP layers. SA layers in the backbone utilize D-FPS to choose a subset of points as the downsampled representative points. Without FP layers, the box prediction network has to operate on those surviving representative points. Nonetheless, this sampling method only takes the relative locations among points into consideration. Consequently, a large portion of the surviving representative points are actually background points, simply because background points dominate in number.
+
+With a limited budget of $N_{m}$ representative points in total, the interior points of remote (or small) instances are unlikely to be selected, because they are far outnumbered by background points. The situation becomes even worse on more complex datasets, like nuScenes [3].
+
+Statistically, we use points recall – the quotient between the number of instances whose interior points survived in
+
+| Methods | SA layers (ms) | FP layers (ms) | Refinement Module (ms) |
| Baseline | 40 | 14 | 35 |
+
+Table 1. Running time of different components in our reproduced PointRCNN [23] model, which has 4 SA layers and 4 FP layers for feature extraction, and a refinement module with 3 SA layers for prediction.
+
+| Methods | 4,096 | 1,024 | 512 |
| D-FPS | 99.7 % | 65.9 % | 51.8 % |
| F-FPS (λ=0.0) | 99.7 % | 83.5 % | 68.4 % |
| F-FPS (λ=0.5) | 99.7 % | 84.9 % | 74.9 % |
| F-FPS (λ=1.0) | 99.7 % | 89.2 % | 76.1 % |
| F-FPS (λ=2.0) | 99.7 % | 86.3 % | 73.7 % |
+
+Table 2. Points recall among different sampling strategies on nuScenes dataset. "4,096", "1,024" and "512" stand for the amounts of representative points in the subset.
+
+the sampled representative points and the total number of instances, to help illustrate this fact. As listed in the first row of Table 2, with 1,024 (or 512) representative points, the points recall is only $65.9\%$ (or $51.8\%$ ) respectively, which means nearly half of the instances are totally erased and cannot be detected. To ameliorate this problem, most existing methods apply FP layers to recall those abandoned useful points during downsampling, at a heavy computation cost during inference.
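+
+To make the metric concrete, here is a minimal numpy sketch of how points recall can be computed from a set of sampled indices; the helper name `points_recall` and its interface are our own illustration, not code from the paper.
+
+```python
+import numpy as np
+
+def points_recall(sampled_idx, instance_point_idx):
+    """Fraction of instances that keep at least one interior point after sampling.
+
+    sampled_idx: 1-D array of indices of the surviving representative points.
+    instance_point_idx: list of 1-D index arrays, one per ground-truth instance,
+        giving the interior points of that instance in the original cloud.
+    """
+    sampled = set(np.asarray(sampled_idx).tolist())
+    kept = sum(1 for idx in instance_point_idx if any(int(i) in sampled for i in idx))
+    return kept / max(len(instance_point_idx), 1)
+```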
+
+Feature-FPS In order to preserve positive points (interior points within any instance) and erase those useless negative points (points locating on background), we consider not only spatial distance but also semantic information of each point during the sampling process. We note that semantic information is well captured by the deep neural network. So, utilizing the feature distance as the criterion in FPS can remove many similar negative points on background. It is intriguing that positive points of remote objects can still survive because semantic features of points from different objects are distinct from each other.
+
+However, only taking the semantic feature distance as the sole criterion would preserve quite a number of points
+
+
+Figure 2. Illustration of the shifting operation in the CG layer. The gray rectangle represents an instance with all positive representative points from F-FPS (green) and D-FPS (blue). The red dot represents instance center. We only shift points from F-FPS under the supervision of their distances to the center of an instance.
+
+within one instance, which introduces redundancy. For example, given a car, there is prominent difference between features of points around the windows and those of wheels. As a result, points around the two parts are respectively sampled, while points in either part are already informative for regression.
+
+Therefore, to reduce the redundancy and increase the diversity, we apply both spatial distance and semantic feature distance as the criteria in FPS. It is formulated as
+
+$$
+C(A, B) = \lambda L_{d}(A, B) + L_{f}(A, B), \tag{1}
+$$
+
+where $L_{d}(A,B)$ and $L_{f}(A,B)$ represent the $L_2$ $XYZ$ (spatial) distance and the $L_2$ feature distance between two points, and $\lambda$ is the balance factor. We call this sampling method Feature-FPS (F-FPS). The comparison of using different $\lambda$ is shown in Table 2, which demonstrates that combining the two distances in the downsampling operation is more powerful than using the feature distance only, i.e., setting $\lambda$ to 0.
+
+Moreover, as illustrated in Table 2, using F-FPS with 1,024 representative points and setting $\lambda$ to 1 guarantee that $89.2\%$ of the instances are preserved in nuScenes [3] dataset, $23.3\%$ higher than the D-FPS sampling strategy.
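+
+A minimal numpy sketch of farthest point sampling under the fused criterion of Eq. (1) is given below; the function `f_fps` and its interface are our own illustration, with `lam` playing the role of $\lambda$ (setting `lam` to 0 recovers the pure feature-distance variant discussed above).
+
+```python
+import numpy as np
+
+def f_fps(xyz, feat, n_sample, lam=1.0):
+    """Farthest point sampling with C(A, B) = lam * L2_xyz(A, B) + L2_feat(A, B)."""
+    n = xyz.shape[0]
+    selected = [0]                                 # start from an arbitrary point
+    min_cost = np.full(n, np.inf)                  # distance of each point to the selected set
+    for _ in range(min(n_sample, n) - 1):
+        last = selected[-1]
+        d_xyz = np.linalg.norm(xyz - xyz[last], axis=1)
+        d_feat = np.linalg.norm(feat - feat[last], axis=1)
+        min_cost = np.minimum(min_cost, lam * d_xyz + d_feat)
+        selected.append(int(np.argmax(min_cost)))  # farthest point under the fused metric
+    return np.array(selected)
+```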
+
+Fusion Sampling A large amount of positive points within different instances are preserved through SA layers thanks to F-FPS. However, with the limited number $N_{m}$ of total representative points, many negative points are discarded during the downsampling process, which benefits regression and yet hampers classification. During the grouping stage in a SA layer, which aggregates features from neighboring points, a negative point is unable to find enough surrounding points, making it impossible to enlarge its receptive field.
+
+As a result, it is difficult to distinguish between positive and negative points, leading to poor performance in classification. Our experiments also demonstrate this limitation in ablation study. Although the model with F-FPS yields a higher recall rate and better localization accuracy than the one with D-FPS, it mistakenly treats several negative points as positive ones, leading to drop of classification accuracy.
+
+The analysis above indicates that, after a SA layer, not only positive points should be sampled as many as possible, but also we need to gather enough negative points for more reliable classification. We present a novel fusion sampling strategy (FS), in which both F-FPS and D-FPS are applied during a SA layer, to retain more positive points for localization and enough negative points for classification as well.
+
+Specifically, we sample $\frac{N_m}{2}$ points respectively with F-FPS and D-FPS and feed the two sets together to the following grouping operation in a SA layer.
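+
+Assuming the `f_fps` sketch above, fusion sampling reduces to concatenating the two halves, as in the following illustration (duplicate indices between the halves are possible and simply tolerated here):
+
+```python
+import numpy as np
+
+def fusion_sampling(xyz, feat, n_m, lam=1.0):
+    """Keep N_m/2 points from F-FPS and N_m/2 from D-FPS before the grouping step."""
+    idx_f = f_fps(xyz, feat, n_m // 2, lam=lam)                 # feature + spatial distance
+    idx_d = f_fps(xyz, np.zeros_like(feat), n_m // 2, lam=1.0)  # zero features -> pure D-FPS
+    return np.concatenate([idx_f, idx_d])
+```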
+
+# 3.2. Box Prediction Network
+
+Candidate Generation Layer After the backbone network implemented with several SA layers and fusion sampling, we gain a subset of points from both F-FPS and D-FPS, which are used for final prediction. In former point-based methods, another SA layer is applied to extract features before the prediction head. There are three steps in a normal SA layer, including center point selection, surrounding points extraction and semantic feature generation.
+
+In order to further reduce computation cost and fully utilize the advantage of fusion sampling, we present a candidate generation layer (CG) before our prediction head, which is a variant of the SA layer. Since most representative points from D-FPS are negative and useless for bounding box regression, we only take those from F-FPS as initial center points. They are shifted under the supervision of their relative locations to their corresponding instances as illustrated in Figure 2, in the same way as VoteNet [19]. We call these new points after shifting candidate points.
+
+Then we treat these candidate points as the center ones in our CG layer. We use candidate points rather than original ones as the center for the sake of performance, which will be discussed in detail later. Next, we find the surrounding points of each candidate point from the whole representative point set containing points from both D-FPS and F-FPS with a pre-defined range threshold and concatenate their normalized location and semantic features as input. MLP layers are finally applied to extract features. These features are sent to the prediction head for regression and classification. This entire process is illustrated in Figure 1.
+
+Anchor-free Regression Head With fusion sampling strategy and the CG layer, our model can safely remove the time-consuming FP layers and the refinement module. In the regression head, we have two options of building anchor-based or anchor-free prediction network. For anchor-based head, we need to construct multi-scale and multi-orientation anchors to cover objects with various sizes and orientations. In complex scenes like those in the nuScenes dataset [3], objects are from 10 different categories with a wide range of orientations. We thus need at least 20 anchors, including 10 different sizes and 2 different
+
+
+Figure 3. Backbone network of 3DSSD on KITTI (left) and nuScenes (right) datasets.
+
+orientations $(0, \pi/2)$ in an anchor-based model. To avoid this cumbersome setting with multiple anchors and stick with our lightweight design, we utilize anchor-free regression head instead.
+
+In the regression head, for each candidate point, we predict distance $(d_x,d_y,d_z)$ to its corresponding instance, as well as size $(d_l,d_w,d_h)$ and orientation of its corresponding instance. Since there is no prior orientation of each point, we apply hybrid of classification and regression formulation following [20] in orientation angle regression. Specifically, we define $N_{a}$ equally split orientation angle bins and classify the proposal orientation angle into one of these bins. Residual is regressed with respect to the bin value. $N_{a}$ is set to 12 in our experiments.
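+
+As a concrete illustration of this bin-plus-residual encoding, a small sketch follows; the helpers and the exact bin layout are our assumptions, and only $N_a = 12$ comes from the paper.
+
+```python
+import numpy as np
+
+N_A = 12                                # number of orientation bins
+BIN_SIZE = 2 * np.pi / N_A
+
+def encode_angle(theta):
+    """Split an orientation angle into a bin class and an in-bin residual."""
+    theta = theta % (2 * np.pi)
+    bin_cls = int(theta // BIN_SIZE)
+    residual = theta - (bin_cls * BIN_SIZE + BIN_SIZE / 2)   # offset from the bin center
+    return bin_cls, residual
+
+def decode_angle(bin_cls, residual):
+    return bin_cls * BIN_SIZE + BIN_SIZE / 2 + residual
+```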
+
+3D Center-ness Assignment Strategy In the training process, we need an assignment strategy to assign labels for each candidate point. In 2D single-stage detectors, intersection-over-union (IoU) [15] threshold or mask [25, 30] can be used. FCOS [25] adopts a continuous centerness label, which replaces original binary classification label to further help distinguish among pixels. It assigns higher center-ness scores to pixels closer to instance centers, leading to relatively better performance compared to IoU- or mask-based assignment strategy.
+
+However, it is not optimal to directly apply center-ness labels to the 3D detection task. Given that all LiDAR points are located on surfaces of objects, the center-ness labels are all very small and similar. It is almost impossible to distinguish good predictions from other points.
+
+Instead of utilizing original representative points in point cloud, we resort to the predicted candidate points, which are supervised to be close to instance centers. Candidate points closer to instance centers tend to get more accurate localization predictions. Thus 3D center-ness labels are able to distinguish among them easily.
+
+For each candidate point, we define its center-ness label in two steps. We first determine whether it is within an instance, $l_{mask}$ , which is a binary value. Then we draw a center-ness label according to its distance to the 6 surfaces of its corresponding instance. The center-ness label is calculated as
+
+$$
+l_{ctrness} = \sqrt[3]{\frac{\min(f, b)}{\max(f, b)} \times \frac{\min(l, r)}{\max(l, r)} \times \frac{\min(t, d)}{\max(t, d)}}, \tag{2}
+$$
+
+where $(f,b,l,r,t,d)$ represent the distances to the front, back, left, right, top and bottom surfaces, respectively. The final classification label is the product of $l_{mask}$ and $l_{ctrness}$ .
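+
+A simplified sketch of Eq. (2) for an axis-aligned box is shown below; real instances are oriented boxes, so in practice the distances would be measured in the box's local frame, and the function name is our own.
+
+```python
+import numpy as np
+
+def centerness_label(candidate, box_center, box_size, l_mask):
+    """3D center-ness of Eq. (2); l_mask is the binary inside-instance flag."""
+    half = np.asarray(box_size, dtype=float) / 2.0
+    offset = np.asarray(candidate, dtype=float) - np.asarray(box_center, dtype=float)
+    pos = half - offset          # distances to the front / left / top surfaces
+    neg = half + offset          # distances to the back / right / bottom surfaces
+    ratios = np.minimum(pos, neg) / np.maximum(pos, neg)
+    l_ctrness = np.cbrt(np.prod(ratios))
+    return float(l_mask) * l_ctrness   # final classification label
+```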
+
+# 3.3. Loss Function
+
+The overall loss is composed of classification loss, regression loss and shifting loss, as
+
+$$
+L = \frac{1}{N_{c}} \sum_{i} L_{c}\left(s_{i}, u_{i}\right) + \lambda_{1} \frac{1}{N_{p}} \sum_{i} [u_{i} > 0] L_{r} + \lambda_{2} \frac{1}{N_{p}^{*}} L_{s}, \tag{3}
+$$
+
+where $N_{c}$ and $N_{p}$ are the numbers of total candidate points and positive candidate points for foreground instances. In the classification loss, we denote $s_i$ and $u_{i}$ as the predicted classification score and center-ness label for point $i$ respectively and use cross entropy loss as $L_{c}$ .
+
+The regression loss $L_{r}$ includes distance regression loss $L_{dist}$ , size regression loss $L_{size}$ , angle regression loss $L_{angle}$ , and corner loss $L_{corner}$ . We utilize the smooth- $l_{1}$ loss for $L_{dist}$ and $L_{size}$ , in which the targets are offsets from candidate points to their corresponding instance centers and sizes of corresponding instances respectively.
+
+Angle regression loss contains orientation classification loss and residual prediction loss as
+
+$$
+L_{angle} = L_{c}\left(d_{c}^{a}, t_{c}^{a}\right) + D\left(d_{r}^{a}, t_{r}^{a}\right), \tag{4}
+$$
+
+where $d_c^a$ and $d_r^a$ are predicted angle class and residual, while $t_c^a$ and $t_r^a$ are their targets. Corner loss is the distance between the predicted 8 corners and assigned ground-truth, expressed as
+
+$$
+L_{corner} = \sum_{m=1}^{8} \| P_{m} - G_{m} \|, \tag{5}
+$$
+
+where $P_{m}$ and $G_{m}$ are the locations of the predicted corner and the ground-truth corner $m$ , respectively.
+
+As for the shifting loss $L_{s}$ , which is the supervision of shifts prediction in CG layer, we utilize a smooth- $l_{1}$ loss to calculate the distance between the predicted shifts and residuals from representative points to their corresponding instance centers. $N_{p}^{*}$ is the amount of positive representative points from F-FPS.
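+
+A compact sketch of how Eqs. (3)-(5) fit together is given below; the per-term losses are assumed to be pre-summed scalars, and the weights `lam1`, `lam2` are left symbolic since their values are not stated here.
+
+```python
+import numpy as np
+
+def smooth_l1(x):
+    x = np.abs(x)
+    return np.where(x < 1.0, 0.5 * x ** 2, x - 0.5)
+
+def corner_loss(pred_corners, gt_corners):
+    """Eq. (5): sum of distances between the 8 predicted and ground-truth corners."""
+    return np.linalg.norm(pred_corners - gt_corners, axis=-1).sum()
+
+def total_loss(cls_loss, reg_loss, shift_loss, n_c, n_p, n_p_star, lam1=1.0, lam2=1.0):
+    """Eq. (3): classification + regression (positives only) + shifting terms."""
+    return cls_loss / n_c + lam1 * reg_loss / n_p + lam2 * shift_loss / n_p_star
+```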
+
+# 4. Experiments
+
+We evaluate our model on two datasets. They are the widely adopted KITTI Object Detection Benchmark [6, 7], and the larger and more complex nuScenes dataset [3].
+
+# 4.1. KITTI
+
+There are 7,481 training images/point clouds and 7,518 test ones with three categories of Car, Pedestrian and Cyclist in the KITTI dataset. We evaluate our method on all three classes and use average precision (AP) metric to evaluate different methods. During evaluation, we follow the official KITTI evaluation protocol - that is, the IoU threshold is 0.7 for class Car and 0.5 for Pedestrian and Cyclist.
+
+Implementation Details To align the network input, we randomly choose 16k points from the entire point cloud per scene. The details of the backbone network are illustrated in Figure 3. The network is trained with the ADAM [10] optimizer with an initial learning rate of 0.002 and a batch size of 16 equally distributed on 4 GPU cards. The learning rate is decayed by 10 at 40 epochs. We train our model for 50 epochs.
+
+We adopt 4 different data augmentation strategies on KITTI dataset in order to prevent overfitting. First, we use the mix-up strategy [28], which randomly adds foreground instances with their inner points from other scenes to current point cloud. For each bounding box, we also rotate it following a uniform distribution $\Delta \theta_{1} \in [-\pi/4, +\pi/4]$ and add a random translation $(\Delta x, \Delta y, \Delta z)$ . Finally, each point cloud is randomly flipped along $x$ -axis. We randomly rotate each point cloud around $z$ -axis (upper-direction) and rescale it.
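+
+For reference, a sketch of the global part of this augmentation pipeline is shown below; the flip probability, rotation range and rescaling range are illustrative assumptions rather than the paper's exact settings, and the per-box mix-up and perturbation steps are omitted.
+
+```python
+import numpy as np
+
+def augment_scene(points, rng=np.random.default_rng()):
+    """Global augmentation (in place): random flip along x, rotation around z (up), rescaling."""
+    if rng.random() < 0.5:                        # flip along the x-axis
+        points[:, 0] = -points[:, 0]
+    theta = rng.uniform(-np.pi / 4, np.pi / 4)    # rotate around the z-axis
+    c, s = np.cos(theta), np.sin(theta)
+    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
+    points[:, :3] = points[:, :3] @ rot.T
+    points[:, :3] *= rng.uniform(0.95, 1.05)      # rescale
+    return points
+```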
+
+Main Results In Table 3, we compare our method with state-of-the-art 3D detectors on the KITTI test set. Since August 2019, KITTI has changed the mAP calculation criterion to use 40 recall positions rather than the 11 recall positions applied by the former KITTI test server. For papers published before that time, we cannot directly cite the results, and instead re-compute them using the new mAP calculation. So there may be misalignment between the results in Table 3 and in the original papers.
+
+As illustrated in Table 3, our method outperforms all state-of-the-art voxel-based single stage detectors by a large margin on all three classes. On the main metric, i.e., AP
+
+on "moderate" instances in class Car, our method outperforms SECOND [28] and PointPillars [12] by $3.61\%$ and $5.26\%$ respectively. Still, it retains comparable performance to state-of-the-art point-based method STD [32] with more than $2\times$ faster inference time.
+
+Our method outperforms the two-stage methods part-A $^2$ net and PointRCNN by $1.08\%$ and $3.93\%$ respectively. Moreover, we prove its superiority by comparing with the multi-sensor methods MMF [14] and F-ConvNet [27] – our method intriguingly achieves $2.14\%$ and $3.18\%$ improvement respectively. On the other two classes Pedestrian and Cyclist, our 3DSSD even goes beyond these two-stage object detectors. It outperforms STD [32] on these two classes by $1.8\%$ and $2.51\%$ respectively. We present several qualitative results in Figure 4.
+
+# 4.2. nuScenes
+
+nuScenes is a more challenging dataset. It contains 1,000 scenes, gathered from Boston and Singapore considering heavy traffic and highly challenging driving situations. It provides 1.4M 3D objects in 10 classes, along with object attributes and velocity. There are about 40k points per frame.
+
+In order to predict velocity and attribute, all former methods combine points from the current frame and previous frames within 0.5s, gathering about 400k points. With such a large amount of points, all previous point-based two-stage methods perform worse than voxel-based ones due to GPU memory limitations.
+
+In the benchmark, a new evaluation metric called the nuScenes detection score (NDS) is also presented, which is a weighted sum of mean average precision (mAP) and the mean average errors of location (mATE), size (mASE), orientation (mAOE), attribute (mAAE) and velocity (mAVE). We use $TP$ to denote the set of the five mean average errors. NDS is calculated as
+
+$$
+NDS = \frac{1}{10} \left[ 5\, mAP + \sum_{mTP \in TP} \left(1 - \min(1, mTP)\right) \right]. \tag{6}
+$$
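+
+Eq. (6) is straightforward to reproduce; the sketch below plugs in our row of Table 5 as a sanity check, matching the reported NDS up to rounding.
+
+```python
+def nds(mAP, mATE, mASE, mAOE, mAVE, mAAE):
+    """nuScenes detection score, Eq. (6)."""
+    tp_errors = [mATE, mASE, mAOE, mAVE, mAAE]
+    return 0.1 * (5 * mAP + sum(1 - min(1.0, e) for e in tp_errors))
+
+# nds(0.426, 0.39, 0.29, 0.44, 0.22, 0.12) -> ~0.567, roughly the 56.4 NDS reported in Table 5
+```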
+
+Implementation Details For each key frame, we similarly combine its points with those from the previous 0.5s of frames to get richer point cloud input. Then, we apply voxelization to randomly sample the point cloud, aligning the input size while keeping the original distribution. We randomly choose 65,536 voxels, including 16,384 from the key frame and 49,152 from the others. The voxel size is $[0.1, 0.1, 0.1]$ , and 1 interior point is randomly selected from each voxel. We feed these 65,536 points into our point-based network.
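+
+A sketch of this voxel-based random sampling (one random interior point per randomly chosen voxel) could look as follows; the grouping utilities are standard numpy idioms and the function name is ours.
+
+```python
+import numpy as np
+
+def voxel_random_sample(points, n_keep, voxel_size=0.1, rng=np.random.default_rng()):
+    """Voxelize, keep one random interior point per voxel, then pick n_keep voxels."""
+    coords = np.floor(points[:, :3] / voxel_size).astype(np.int64)
+    _, inverse = np.unique(coords, axis=0, return_inverse=True)
+    order = np.argsort(inverse)                                   # group point ids by voxel
+    groups = np.split(order, np.cumsum(np.bincount(inverse))[:-1])
+    one_per_voxel = np.array([rng.choice(g) for g in groups])     # 1 interior point per voxel
+    chosen = rng.choice(one_per_voxel, size=min(n_keep, len(one_per_voxel)), replace=False)
+    return points[chosen]
+```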
+
+The backbone network is illustrated in Figure 3. The training schedule is the same as the one on KITTI dataset. We only apply flip augmentation during training.
+
+| Type | Method | Modality | Car Easy (%) | Car Mod (%) | Car Hard (%) | Ped. Easy (%) | Ped. Mod (%) | Ped. Hard (%) | Cyc. Easy (%) | Cyc. Mod (%) | Cyc. Hard (%) |
| 2-stage | F-PointNet [20] | RGB + LiDAR | 82.19 | 69.79 | 60.59 | 50.53 | 42.15 | 38.08 | 72.27 | 56.12 | 49.01 |
| 2-stage | AVOD-FPN [11] | RGB + LiDAR | 83.07 | 71.76 | 65.73 | 50.46 | 42.27 | 39.04 | 63.76 | 50.55 | 44.93 |
| 2-stage | F-ConvNet [27] | RGB + LiDAR | 87.36 | 76.39 | 66.69 | 52.16 | 43.38 | 38.80 | 81.98 | 65.07 | 56.54 |
| 2-stage | PointRCNN [23] | LiDAR | 86.96 | 75.64 | 70.70 | 47.98 | 39.37 | 36.01 | 74.96 | 58.82 | 52.53 |
| 2-stage | MMLab-PartA^2 [24] | LiDAR | 87.81 | 78.49 | 73.51 | 53.10 | 43.35 | 40.06 | 79.17 | 63.52 | 56.93 |
| 2-stage | STD [32] | LiDAR | 87.95 | 79.71 | 75.09 | 53.29 | 42.47 | 38.35 | 78.69 | 61.59 | 55.30 |
| 1-stage | SECOND [28] | LiDAR | 84.65 | 75.96 | 68.71 | 45.31 | 35.52 | 33.14 | 75.83 | 60.82 | 53.67 |
| 1-stage | PointPillars [12] | LiDAR | 82.58 | 74.31 | 68.99 | 51.45 | 41.92 | 38.89 | 77.10 | 58.65 | 51.92 |
| 1-stage | Ours | LiDAR | 88.36 | 79.57 | 74.55 | 54.64 | 44.27 | 40.23 | 82.48 | 64.10 | 56.90 |
+
+Table 3. 3D AP Results on KITTI test set for class Car, Pedestrian and Cyclist drawn from official Benchmark [1].
+
+| Method | Car | Ped | Bus | Barrier | TC | Truck | Trailer | Moto | Cons. Veh. | Bicycle | mAP |
| SECOND [28] | 75.53 | 59.86 | 29.04 | 32.21 | 22.49 | 21.88 | 12.96 | 16.89 | 0.36 | 0 | 27.12 |
| PointPillars [12] | 70.5 | 59.9 | 34.4 | 33.2 | 29.6 | 25.0 | 20.0 | 16.7 | 4.5 | 1.6 | 29.5 |
| Ours | 81.20 | 70.17 | 61.41 | 47.94 | 31.06 | 47.15 | 30.45 | 35.96 | 12.64 | 8.63 | 42.66 |
+
+Table 4. AP on nuScenes dataset. The results of SECOND come from its official implementation [2].
+
+| Method | mAP | mATE | mASE | mAOE | mAVE | mAAE | NDS |
| PP [12] | 29.5 | 0.54 | 0.29 | 0.45 | 0.29 | 0.41 | 44.9 |
| Ours | 42.6 | 0.39 | 0.29 | 0.44 | 0.22 | 0.12 | 56.4 |
+
+Table 5. NDS on nuScenes dataset. "PP" represents PointPillars.
+
+| Method | Easy | Moderate | Hard |
| VoxelNet [33] | 81.97 | 65.46 | 62.85 |
| SECOND [28] | 87.43 | 76.48 | 69.10 |
| PointPillars [12] | - | 77.98 | - |
| Ours | 89.71 | 79.45 | 78.67 |
+
+Main Results We show NDS and mAP for different methods in Table 5, and compare their per-class APs in Table 4. As illustrated in Table 5, our method yields better performance compared to all voxel-based single-stage solutions by a large margin. It also outperforms these methods in terms of AP on each class, as illustrated in Table 4.
+
+The results show that our model deals well with objects of large scale variance. Even for a huge scene with many negative points, our fusion sampling strategy is still capable of gathering enough positive points. In addition, better results on velocity and attribute prove that our model also better gathers and separates information from different frames.
+
+# 4.3. Ablation Studies
+
+All ablation studies are conducted on the KITTI dataset [6]. We follow VoxelNet [33] to split the original training set into a train set of 3,717 images/scenes and a val set of 3,769 images/scenes. All "AP" results in the ablation studies are calculated on the "Moderate" difficulty level of class Car with 11 recall positions for fair comparison.
+
+Results on Validation Set We report the performance on the KITTI validation set and compare it with other state-of-the-art
+
+Table 6. 3D detection AP on KITTI val set of our model for "Car" compared to other state-of-the-art single-stage methods.
+
+| Metric | D-FPS | F-FPS | FS |
| recall (%) | 92.47 | 98.45 | 98.31 |
| AP (%) | 70.4 | 76.7 | 79.4 |
+
+Table 7. Points recall and AP from different sampling methods.
+
+| Setting | IoU | Mask | 3D center-ness |
| without shifting (%) | 70.4 | 76.1 | 43.0 |
| with shifting (%) | 78.1 | 77.3 | 79.4 |
+
+Table 8. AP among different assignment strategies. "with shifting" means using shifts in the CG layer.
+
+voxel-based single-stage methods in Table 6. On the most important "moderate" difficulty level, our method outperforms PointPillars, SECOND and VoxelNet by $1.47\%$ , $2.97\%$ and $13.99\%$ respectively. This illustrates the effectiveness of our strategies.
+
+Effect of Fusion Sampling Strategy Our fusion sampling strategy is composed of F-FPS and D-FPS. We compare points recall and AP among different sub-sampling methods in Table 7. Sampling strategies containing F-FPS yield higher points recall than D-FPS alone.
+
+In Figure 5, we also present visual examples to illustrate the benefit of F-FPS in fusion sampling. In addition, the fusion sampling strategy yields a much higher AP, i.e., $2.7\%$ better than the one with F-FPS only. The reason is that the fusion sampling method can gather enough negative points, which enlarges receptive fields and enables accurate classification.
+
+Effect of Shifting in CG Layer In Table 8, we compare performance when using (and not using) the shifting of representative points from F-FPS in the CG layer. Under different assignment strategies, the APs of models with shifting are all higher than those without it. This means that when candidate points are closer to instance centers, it is generally easier to retrieve their corresponding instances.
+
+Figure 4. Visualizing results of 3DSSD on KITTI (top) and nuScenes (bottom) datasets. The ground truth and predictions are labeled in red and green respectively.
+
+Figure 5. Comparison between representative points after fusion sampling (top) and D-FPS only (bottom). The whole point cloud and all representative points are colored in white and yellow respectively. Positive representative points are shown in red.
+
+| Method | F-PointNet [20] | PointRCNN [23] | STD [32] | Ours |
| time (ms) | 170 | 100 | 80 | 38 |
+
+Table 9. Inference time among different point-based methods.
+
+Effect of 3D Center-ness Assignment We compare performance of different assignment strategies including IoU, mask and 3D center-ness label. As shown in Table 8, with the shifting operation, the model using center-ness label gains better performance than the other two strategies.
+
+Inference Time The total inference time of 3DSSD is $38\mathrm{ms}$ , tested on KITTI dataset with a Titan V GPU. We compare inference time between 3DSSD and all existing point-based methods in Table 9. As illustrated, our method is much faster than all these methods.
+
+It is noteworthy that our method even keeps a similar level of inference speed compared to state-of-the-art voxel-based single-stage methods. For example, SECOND takes 40ms for inference while ours takes 38ms. Among all existing methods, ours is only slower than PointPillars, which has been enhanced by several implementation optimization strategies such as TensorRT, which we have not used so far in our implementation. Our method still has much potential to be further accelerated.
+
+# 5. Conclusion
+
+In this paper, as a first attempt, we have proposed a lightweight and efficient point-based 3D single-stage object detection framework. We introduced a novel fusion sampling strategy to remove the time-consuming FP layers and the refinement module, which were previously needed in all existing point-based methods. In the prediction network, a candidate generation layer was designed to further reduce computation cost and exploit the downsampled representative points. Our anchor-free regression head with a 3D center-ness label boosted the final performance. All these effective designs enable our model to perform well in terms of both accuracy and inference time.
+
+# References
+
+[1] "kitti 3d object detection benchmark". http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d, 2019. 7
+[2] "second github code". https://github.com/traveller59/second.pytorch, 2019. 7
+[3] Holger Caesar, Varun Bankiti, Alex H. Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. arXiv preprint arXiv:1903.11027, 2019. 2, 3, 4, 6
+[4] Xiaozhi Chen, Huimin Ma, Ji Wan, Bo Li, and Tian Xia. Multi-view 3d object detection network for autonomous driving. In CVPR, 2017. 1, 2
+[5] Martin Engelcke, Dushyant Rao, Dominic Zeng Wang, Chi Hay Tong, and Ingmar Posner. Vote3deep: Fast object detection in 3d point clouds using efficient convolutional neural networks. In ICRA, 2017. 1
+[6] Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision meets robotics: The KITTI dataset. I. J. Robotics Res., 2013. 2, 6, 7
+[7] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the KITTI vision benchmark suite. In CVPR, 2012. 1, 6
+[8] Alejandro González, Gabriel Villalonga, Jiaolong Xu, David Vázquez, Jaume Amores, and Antonio M. López. Multiview random forest of local experts combining RGB and LIDAR data for pedestrian detection. In IV, 2015. 1
+[9] Benjamin Graham, Martin Engelcke, and Laurens van der Maaten. 3d semantic segmentation with submanifold sparse convolutional networks. In CVPR, 2018. 2
+[10] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, 2014. 6
+[11] Jason Ku, Melissa Mozifian, Jungwook Lee, Ali Harakeh, and Steven Lake Waslander. Joint 3d proposal generation and object detection from view aggregation. CoRR, 2017. 1, 2, 7
+[12] Alex H Lang, Sourabh Vora, Holger Caesar, Lubing Zhou, Jiong Yang, and Oscar Beijbom. Pointpillars: Fast encoders for object detection from point clouds. CVPR, 2019. 1, 2, 6, 7
+[13] Bo Li. 3d fully convolutional network for vehicle detection in point cloud. In IROS, 2017. 2
+[14] Ming Liang*, Bin Yang*, Yun Chen, Rui Hu, and Raquel Urtasun. Multi-task multi-sensor fusion for 3d object detection. In CVPR, 2019. 2, 6
+[15] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott E. Reed, Cheng-Yang Fu, and Alexander C. Berg. SSD: single shot multibox detector. In ECCV, 2016. 5
+[16] Daniel Maturana and Sebastian Scherer. Voxnet: A 3d convolutional neural network for real-time object recognition. In IROS, 2015. 1
+[17] Youngmin Park, Vincent Lepetit, and Woontack Woo. Multiple 3d object tracking for augmented reality. In ISMAR, 2008. 1
+
+[18] Cristiano Premebida, João Carreira, Jorge Batista, and Urbano Nunes. Pedestrian detection combining RGB and dense LIDAR data. In ICoR, 2014. 1
+[19] Charles R. Qi, Or Litany, Kaiming He, and Leonidas J. Guibas. Deep hough voting for 3d object detection in point clouds. ICCV, 2019. 4
+[20] Charles Ruizhongtai Qi, Wei Liu, Chenxia Wu, Hao Su, and Leonidas J. Guibas. Frustum pointnets for 3d object detection from RGB-D data. CoRR, 2017. 2, 5, 7, 8
+[21] Charles Ruizhongtai Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In CVPR, 2017. 1, 2
+[22] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J. Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In NIPS, 2017. 1, 2
+[23] Shaoshuai Shi, Xiaogang Wang, and Hongsheng Li. Pointrcnn: 3d object proposal generation and detection from point cloud. In CVPR, 2019. 1, 2, 3, 7, 8
+[24] Shaoshuai Shi, Zhe Wang, Xiaogang Wang, and Hongsheng Li. Part-a $^2$ net: 3d part-aware and aggregation neural network for object detection from point cloud. arXiv preprint arXiv:1907.03670, 2019. 7
+[25] Zhi Tian, Chunhua Shen, Hao Chen, and Tong He. FCOS: fully convolutional one-stage object detection. ICCV, 2019. 5
+[26] Dominic Zeng Wang and Ingmar Posner. Voting for voting in online point cloud object detection. In Robotics: Science and Systems XI, 2015. 1, 2
+[27] Zhixin Wang and Kui Jia. Frustum convnet: Sliding frustums to aggregate local point-wise features for amodal 3d object detection. In IROS. IEEE, 2019. 6, 7
+[28] Yan Yan, Yuxing Mao, and Bo Li. Second: Sparsely embedded convolutional detection. Sensors, 2018. 1, 2, 6, 7
+[29] Bin Yang, Wenjie Luo, and Raquel Urtasun. PIXOR: real-time 3d object detection from point clouds. In CVPR, 2018. 1
+[30] Ze Yang, Shaohui Liu, Han Hu, Liwei Wang, and Stephen Lin. Reppoints: Point set representation for object detection. ICCV, 2019. 5
+[31] Zetong Yang, Yanan Sun, Shu Liu, Xiaoyong Shen, and Jiaya Jia. IPOD: intensive point-based object detector for point cloud. CoRR, 2018. 1, 2, 3
+[32] Zetong Yang, Yanan Sun, Shu Liu, Xiaoyong Shen, and Jiaya Jia. STD: sparse-to-dense 3d object detector for point cloud. ICCV, 2019. 1, 2, 3, 6, 7, 8
+[33] Yin Zhou and Oncel Tuzel. Voxelnet: End-to-end learning for point cloud based 3d object detection. CoRR, 2017. 1, 2, 7
\ No newline at end of file
diff --git a/3dssdpointbased3dsinglestageobjectdetector/images.zip b/3dssdpointbased3dsinglestageobjectdetector/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..3068831ce786d3d015d771a37a80c620a2ac0ac0
--- /dev/null
+++ b/3dssdpointbased3dsinglestageobjectdetector/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ea44f138fde573b4c104a05c97fcab38c3acef2d95a8c862d9b72a6fd3c5af5f
+size 524457
diff --git a/3dssdpointbased3dsinglestageobjectdetector/layout.json b/3dssdpointbased3dsinglestageobjectdetector/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a85e710d193bd80448583f2525032a5bcd560087
--- /dev/null
+++ b/3dssdpointbased3dsinglestageobjectdetector/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:97b490de74286e7b4c72fcd76c0b857e073ed958239ae82fd4cf8c5f83824cf5
+size 366782
diff --git a/3dv3ddynamicvoxelforactionrecognitionindepthvideo/591810c8-d543-4bf0-a38a-f129ff853f51_content_list.json b/3dv3ddynamicvoxelforactionrecognitionindepthvideo/591810c8-d543-4bf0-a38a-f129ff853f51_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..af4113ffdccecff57c0d94c5a8a75eb4e710f103
--- /dev/null
+++ b/3dv3ddynamicvoxelforactionrecognitionindepthvideo/591810c8-d543-4bf0-a38a-f129ff853f51_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:71ab6af7b7b3f1210acbd96de80c0b55de04fdc313524a34e4eb4ad2c8737806
+size 86275
diff --git a/3dv3ddynamicvoxelforactionrecognitionindepthvideo/591810c8-d543-4bf0-a38a-f129ff853f51_model.json b/3dv3ddynamicvoxelforactionrecognitionindepthvideo/591810c8-d543-4bf0-a38a-f129ff853f51_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..98aed200da87415caeeac1f605b8b9968d21fc29
--- /dev/null
+++ b/3dv3ddynamicvoxelforactionrecognitionindepthvideo/591810c8-d543-4bf0-a38a-f129ff853f51_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1e87bba6eefe04c9658ffdd79e3ddfccf5ad44028fb0993b44c77968228c4cd9
+size 109651
diff --git a/3dv3ddynamicvoxelforactionrecognitionindepthvideo/591810c8-d543-4bf0-a38a-f129ff853f51_origin.pdf b/3dv3ddynamicvoxelforactionrecognitionindepthvideo/591810c8-d543-4bf0-a38a-f129ff853f51_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..1274a584f52300c7da2899785867680b67804037
--- /dev/null
+++ b/3dv3ddynamicvoxelforactionrecognitionindepthvideo/591810c8-d543-4bf0-a38a-f129ff853f51_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1917f7e30119cb955e7a0752a46dc314c72511cc4358b767524033ff11fd0068
+size 953771
diff --git a/3dv3ddynamicvoxelforactionrecognitionindepthvideo/full.md b/3dv3ddynamicvoxelforactionrecognitionindepthvideo/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..85855cbb6cd4e5764265823b6fea1fdda793fb7a
--- /dev/null
+++ b/3dv3ddynamicvoxelforactionrecognitionindepthvideo/full.md
@@ -0,0 +1,383 @@
+# 3DV: 3D Dynamic Voxel for Action Recognition in Depth Video
+
+Yancheng Wang $^{1}$ , Yang Xiao $^{1\dagger}$ , Fu Xiong $^{2}$ , Wenxiang Jiang $^{1}$ , Zhiguo Cao $^{1}$ , Joey Tianyi Zhou $^{3}$ , and Junsong Yuan $^{4}$
+
+$^{1}$ National Key Laboratory of Science and Technology on Multispectral Information Processing, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China $^{2}$ Megvii Research Nanjing, Megvii Technology, China
+
+$^{3}$ IHPC, A*STAR, Singapore $^{4}$ CSE Department, State University of New York at Buffalo
+
+yancheng_wang, Yang_Xiao@hust.edu.cn, xiongyu@megvii.com, wenx_jiang, zgcao@hust.edu.cn, zhouty@ihpc.a-star.edu.sg, jsyuan@buffalo.edu
+
+# Abstract
+
+To facilitate depth-based 3D action recognition, the 3D dynamic voxel (3DV) is proposed as a novel 3D motion representation. With 3D space voxelization, the key idea of 3DV is to compactly encode the 3D motion information within a depth video into a regular voxel set (i.e., 3DV) via temporal rank pooling. Each available 3DV voxel intrinsically encodes 3D spatial and motion features jointly. 3DV is then abstracted as a point set and input to PointNet++ for 3D action recognition in an end-to-end learning manner. The intuition for transferring 3DV into point set form is that PointNet++ is lightweight and effective for deep feature learning on point sets. Since 3DV may lose appearance clues, a multi-stream 3D action recognition approach is also proposed to learn motion and appearance features jointly. To extract richer temporal order information of actions, we also divide the depth video into temporal splits and encode this procedure in 3DV integrally. Extensive experiments on 4 well-established benchmark datasets demonstrate the superiority of our proposition. Impressively, we acquire accuracies of $82.4\%$ and $93.5\%$ on NTU RGB+D 120 [13] under the cross-subject and cross-setup test settings respectively. 3DV's code is available at https://github.com/3huo/3DV-Action.
+
+# 1. Introduction
+
+During the past decade, due to the emergence of low-cost depth cameras (e.g., Microsoft Kinect [52]), 3D action recognition has become an active research topic, with wide-ranging application scenarios in video surveillance, human-machine interaction, etc. [45, 46]. The state-of-the-art 3D action recognition approaches can be generally categorized
+
+
+Figure 1. A live "Handshaking" 3DV example from NTU RGB+D 60 dataset [33]. 3DV motion value reveals the temporal order of 3D motion component. The later motion component is of higher value, and vice versa. And, the local region of richer motion information holds higher standard deviation on 3DV motion value.
+
+into the depth-based [28, 17, 16, 36, 11, 51, 35, 34] and skeleton-based [32, 48, 22, 10, 42, 46] groups. Since accurate and robust 3D human pose estimation is still challenging [47, 21], we focus on the depth-based avenue in this work.
+
+Since humans conduct actions in 3D space, capturing 3D motion patterns effectively and efficiently is crucial for depth-based 3D action recognition. An intuitive way is to calculate dense scene flow [1]. However, this can be time consuming [1], which may not be acceptable in practical applications. Recently, the dynamic image [3, 2], which can represent the motion information within an RGB video compactly, has been introduced to the depth domain for 3D action characterization [42, 46]. It compresses an RGB video into a single image while still maintaining the motion characteristics well, via temporal rank pooling [6, 5].
+
+Thus the dynamic image can fit deep CNN models [8] well for action categorization, leveraging the CNN's strong pattern representation capacity. Nevertheless, we argue that the ways of applying the dynamic image to the 3D field in [42, 46] have not fully exploited the 3D descriptive clues within depth video, although normal vectors [42] or multi-view projection [46] are applied additionally. The insight is that both methods in [42, 46] finally encode the 3D motion information onto the 2D image plane to fit a CNN. Thus, they cannot well answer the question "Where does a certain 3D motion pattern within a human action appear in 3D space?", which is crucial for effective 3D action characterization, since human actions actually consist of both motion patterns and compact spatial structure [29].
+
+To address the concern above, we propose the 3D dynamic voxel (3DV) as a novel 3D motion representation for 3D action characterization. To extract 3DV, 3D space voxelization is first executed: each depth frame is transformed into a regular voxel set, and the appearance content within it is encoded in a binary way by observing whether the yielded voxels are occupied or not [40]. Then, temporal rank pooling [6, 5] is executed over all the binary voxel sets to compress them into one single voxel set, termed 3DV. Thus, the 3D motion and spatial characteristics of a 3D action are encoded into 3DV jointly. To reveal this, a live "Handshaking" 3DV example is provided in Fig. 1. As shown, each available 3DV voxel possesses a motion value that reflects the temporal order of its corresponding 3D motion component: the later the motion component, the higher the value, and vice versa. Meanwhile, a local region with richer 3D motion information possesses a higher standard deviation of 3DV motion values (e.g., the hand region vs. the head region), and a 3DV voxel's location reveals the 3D position of its 3D motion component. Thus, 3DV's spatial-motion representative ability can essentially leverage 3D action characterization. To involve richer temporal order information, we further divide the depth video into finer temporal splits; this is encoded in 3DV integrally by fusing the motion values from all the temporal splits.
+
+With 3DV, the upcoming question is how to choose an adaptive deep learning model to conduct 3D action recognition. For voxel sets, 3D CNNs [20, 7, 21] are often used for 3D visual pattern understanding and are also applicable to 3DV. However, they are difficult to train due to the large number of convolutional parameters. Inspired by the recent success of lightweight deep learning models on point sets (e.g., PointNet++ [25]), we propose to transfer 3DV into point set form as the input of PointNet++ and conduct 3D action recognition in an end-to-end learning manner. That is, each 3DV voxel is abstracted as a point characterized by its 3D location index and motion value. Our intuition is to alleviate the training difficulty and burden.
+
+Although 3DV can reveal 3D motion information, it may still lose appearance details, as in Fig. 1.
+
+Figure 2. 3D skeleton extraction failure cases in NTU RGB+D 60 dataset [13], due to (a) human-object interaction and (b) self-occlusion. The depth frame and its RGB counterpart are shown jointly.
+
+Since appearance also plays a vital role in action recognition [23, 37], using only 3DV may weaken performance. To alleviate this, a multi-stream deep learning model using PointNet++ is also proposed to learn 3D motion and appearance features jointly. In particular, it consists of one motion stream and multiple appearance streams. The input of the motion stream is 3DV, while the inputs of the appearance streams are depth frames sampled from the different temporal splits; they are also transformed into point set form to fit PointNet++.
+
+The experiments on 2 large-scale 3D action recognition datasets (i.e., NTU RGB+D 120 [13] and 60 [33]) and 2 small-scale ones (i.e., N-UCLA [41] and UWA3DII [26]) verify 3DV's superiority over the state-of-the-art methods.
+
+The main contributions of this paper include:
+
+- 3DV: a novel and compact 3D motion representation for 3D action characterization;
+- PointNet++ is applied to 3DV for 3D action recognition in an end-to-end learning way, from a point set perspective;
+- A multi-stream deep learning model is proposed to learn 3D motion and appearance features jointly.
+
+# 2. Related Works
+
+3D action recognition. The existing 3D action recognition approaches generally fall into the depth-based [23, 48, 22, 10, 42, 46] and skeleton-based [15, 17, 16, 36, 11, 51, 35, 34] groups. Recently the skeleton-based approaches with RNNs [15] and GCNs [35] have drawn more attention, since using the 3D skeleton can help to resist the impact of variations in scene, human attributes, imaging viewpoint, etc. However, one critical issue should not be ignored: accurate and robust 3D human pose estimation is still not trivial [47, 21]. To reveal this, we have checked the 3D skeletons within NTU RGB+D 60 [13] carefully. Actually, even under this constrained condition 3D skeleton extraction may still fail, as in Fig. 2. Thus, for practical applications the depth-based manner currently seems preferable, and it is what we focus on.
+
+Most of the existing efforts focus on proposing 3D action representations that capture 3D spatial-temporal appearance or motion patterns. At the early stage, the hand-crafted descriptors of bag of 3D points [12], depth motion map (DMM) [49], Histogram of Oriented 4D Normals (HON4D) [23], Super Normal Vector (SNV) [48] and the binary range-sample feature [19] were proposed from different research perspectives. Recently CNNs [8, 50] have been introduced to this field [43, 44, 42, 46] and have enhanced performance remarkably. Under this paradigm, the depth video is compressed into one image using DMM [49] or the dynamic image [3, 2] to fit a CNN; to better exploit 3D descriptive clues, normal vectors or multi-view projection is applied additionally. However, these methods generally suffer from 2 main defects. First, as aforementioned, DMM or the dynamic image cannot fully reveal the 3D motion characteristics. Secondly, they tend to ignore appearance information.
+
+Temporal rank pooling. To represent action, temporal rank pooling [6, 5] is proposed to capture the frame-level evolution characteristics within video. Its key idea is to train a linear ranking machine towards the frames to arrange them in chronological order. Then, the parameters of the ranking machine can be used as the action representation. By applying temporal rank pooling to the raw frame pixels, dynamic image [3, 2] is proposed with strong motion representative ability and adaptive to CNN. As aforementioned, temporal rank pooling has recently been applied to 3D action recognition [42, 46]. However, how to use it to fully reveal 3D motion property still has not been deeply studied.
+
+Deep learning on point set. Due to the irregularity of 3D point sets, typical convolutional architectures (e.g., CNNs [8]) cannot handle them well. To address this, deep learning on point sets has drawn increasing attention. Among the existing efforts, PointNet++ [25] is the representative one: it ensures the permutation invariance of point sets and captures 3D local geometric clues. However, it has not been applied to 3D action recognition yet.
+
+Accordingly, 3DV is proposed to characterize 3D motion compactly, via temporal rank pooling. The adaptive multi-stream deep learning model using PointNet++ is also proposed to learn 3D motion and appearance feature jointly.
+
+# 3. 3DV: A Novel Voxel Set based Compact 3D Motion Representative Manner
+
+Our research motivation for 3DV is to seek a compact 3D motion representation to characterize 3D actions, so that deep feature learning can be easily conducted on it. The proposition of 3DV can be regarded as an essential effort to extend temporal rank pooling [6, 5], originally proposed for 2D video, to the 3D domain, capturing 3D motion patterns and spatial clues jointly. The main idea of 3DV extraction is shown in Fig. 3. The depth frames are first mapped into point clouds to better reveal 3D characteristics. Then, 3D voxelization is executed to further transform the disordered point clouds into regular voxel sets. Consequently, the 3D action appearance clue within a certain depth frame can be described by judging whether the voxels have been occupied or not.
+
+Figure 3. The main idea for 3DV extraction via temporal rank pooling, towards the 3D voxel sets transformed from depth frames.
+
+Figure 4. The point cloud (a) and its corresponding 3D voxel set (b) sampled from "Handshaking".
+
+Then temporal rank pooling is executed on the yielded binary voxel sets to compress them into one voxel set (i.e., 3DV), revealing the 3D appearance evolution within the action compactly. The resulting ranking machine parameters actually characterize the 3D motion pattern of the corresponding 3DV voxels. In particular, each 3DV voxel possesses a motion value (i.e., a ranking machine parameter), and its 3D position encodes the spatial property of the corresponding 3D motion pattern. Action proposal is also conducted to suppress the background.
+
+# 3.1. Voxel-based 3D appearance representation
+
+Projecting 3D data to a 2D depth frame actually distorts the real 3D shape [21]. To better represent the 3D appearance clue, we map the depth frame into a point cloud. Nevertheless, one critical problem emerges: temporal rank pooling cannot be applied to the yielded point clouds directly, due to their disordered property [25] as in Fig. 4(a). To address this, we propose to execute 3D voxelization on the point clouds. Then the 3D appearance information can be described by observing whether the voxels have been occupied or not, disregarding the involved point number, as
+
+$$
+V_{t}(x, y, z) = \begin{cases} 1, & \text{if } V_{t}(x, y, z) \text{ is occupied} \\ 0, & \text{otherwise} \end{cases} \tag{1}
+$$
+
+where $V_{t}(x,y,z)$ indicates a certain voxel at the $t$-th frame and $(x,y,z)$ is its regular 3D position index. This actually brings 2 main benefits. First, the yielded binary 3D voxel sets are regular, as in Fig. 4(b); thus, temporal rank pooling can be applied to them for 3DV extraction.
+
+
+Figure 5. The 3DV examples from NTU RGB+D 60 dataset [33]: (a) Bow, (b) Sit down, (c) Hugging, (d) Pushing.
+
+Meanwhile, the binary voxel-wise representation is more tolerant of the intrinsic sparsity and density variability problem [25] within point clouds, which essentially helps to improve generalization power.
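+
+For concreteness, the occupancy encoding of Eqn. 1 can be written in a few lines of NumPy. This is only an illustrative sketch under assumptions (point clouds in millimetres, a grid anchored at the point-cloud minimum, and a `voxel_size` argument), not the authors' released code.
+
+```python
+import numpy as np
+
+def voxelize_occupancy(points_mm, voxel_size=35.0):
+    """Binary occupancy voxelization of one depth frame (Eqn. 1).
+
+    points_mm : (N, 3) array of 3D points in millimetres.
+    Returns a boolean grid where a cell is 1 iff at least one point falls inside it.
+    """
+    origin = points_mm.min(axis=0)                        # anchor the grid to the point cloud
+    idx = np.floor((points_mm - origin) / voxel_size).astype(int)
+    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
+    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True          # occupied regardless of point count
+    return grid
+```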
+
+# 3.2. 3DV extraction using temporal rank pooling
+
+With the binary 3D appearance voxel sets above, temporal rank pooling is executed to generate 3DV. A linear temporal ranking score function will be defined for compressing the voxel sets into one voxel set (i.e., 3DV).
+
+Particularly, suppose $V_{1},\ldots ,V_{T}$ denote the binary 3D appearance voxel sets, and $\overline{V}_t = \frac{1}{t}\sum_{i=1}^{t} V_i$ is their average up to time $t$. The ranking score function at time $t$ is given by
+
+$$
+S(t | \mathbf{w}) = \left\langle \mathbf{w}, \overline{V}_t \right\rangle, \tag{2}
+$$
+
+where $\mathbf{w} \in \mathbb{R}^d$ is the ranking parameter vector, learned from the depth video to reflect the ranking relationship among the frames. The criterion is that later frames should receive larger ranking scores, i.e.,
+
+$$
+q > t \Rightarrow S (q | \mathbf {w}) > S (t | \mathbf {w}). \tag {3}
+$$
+
+The learning procedure of $\mathbf{w}$ is formulated as a convex optimization problem using RankSVM [38] as
+
+$$
+\mathbf{w}^{*} = \underset{\mathbf{w}}{\operatorname{argmin}} \; \frac{\lambda}{2} \|\mathbf{w}\|^{2} + \frac{2}{T(T - 2)} \times \sum_{q > t} \max \{0,\, 1 - S(q|\mathbf{w}) + S(t|\mathbf{w})\}. \tag{4}
+$$
+
+Specifically, the first term is the commonly used SVM regularizer, and the second is the hinge loss that soft-counts how many pairs $q > t$ are incorrectly ranked, i.e., do not obey $S(q|\mathbf{w}) > S(t|\mathbf{w}) + 1$. Optimizing Eqn. 4 maps the 3D appearance voxel sets $V_{1},\dots ,V_{T}$ to a single vector $\mathbf{w}^*$, which encodes the dynamic evolution information from all the frames. Spatially reordering $\mathbf{w}^*$ from 1D to 3D in voxel form constructs 3DV for 3D action characterization. Thus, each 3DV voxel is jointly encoded by the corresponding $\mathbf{w}^*$ item as its motion feature and its regular 3D position index $(x,y,z)$ as its spatial feature. Some more 3DV examples are shown in Fig. 5. We can intuitively observe that 3DV can actually distinguish the different actions from the motion perspective, even when human-object or human-human interaction happens.
+
+
+Figure 6. Temporal split for 3DV extraction.
+
+Meanwhile, to accelerate 3DV extraction in practice, the approximated temporal rank pooling [2] is used during implementation.
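+
+As a reference point, the approximation of [2] replaces the RankSVM optimization in Eqn. 4 with a closed-form weighted sum of the running averages $\overline{V}_t$. The sketch below uses the coefficients $\alpha_t = 2t - T - 1$, one commonly quoted closed form from the dynamic-image literature; treat both the coefficients and the NumPy details as assumptions rather than the authors' exact implementation.
+
+```python
+import numpy as np
+
+def approx_rank_pooling(voxel_sets):
+    """Approximated temporal rank pooling over binary voxel sets V_1..V_T.
+
+    voxel_sets : sequence of T equally shaped binary occupancy grids.
+    Returns one float grid whose entries play the role of the 3DV motion values
+    (i.e., an approximation of the ranking parameters w* in Eqn. 4).
+    """
+    V = np.stack([v.astype(np.float32) for v in voxel_sets])                   # (T, X, Y, Z)
+    T = V.shape[0]
+    means = np.cumsum(V, axis=0) / np.arange(1, T + 1).reshape(-1, 1, 1, 1)    # running averages
+    alphas = 2.0 * np.arange(1, T + 1) - T - 1                                 # assumed coefficients
+    return np.tensordot(alphas, means, axes=(0, 0))                            # weighted sum ~ w*
+```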
+
+# 3.3. Temporal split
+
+Applying temporal rank pooling to the whole depth video may lose some fine temporal order information. To better maintain motion details, we propose to execute a temporal split for 3DV. The depth video is divided into $T_{1}$ temporal splits with an overlap ratio of 0.5, the same as in [46]. 3DV is extracted from all the temporal splits and from the whole depth video simultaneously, as in Fig. 6, to involve the global and partial temporal 3D motion clues jointly.
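+
+The paper does not spell out the exact split arithmetic, so the following is one plausible reading of "$T_1$ splits with an overlap ratio of 0.5" over the frame indices; the function name and rounding are assumptions.
+
+```python
+def temporal_splits(num_frames, num_splits, overlap=0.5):
+    """Divide [0, num_frames) into num_splits windows with the given overlap ratio."""
+    # With overlap o, consecutive window starts are (1 - o) * window_length apart.
+    length = num_frames / (1 + (num_splits - 1) * (1 - overlap))
+    step = length * (1 - overlap)
+    splits = []
+    for i in range(num_splits):
+        start = int(round(i * step))
+        end = min(num_frames, int(round(start + length)))
+        splits.append((start, end))
+    return splits
+
+# Example: temporal_splits(100, 4) -> [(0, 40), (20, 60), (40, 80), (60, 100)]
+```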
+
+# 3.4. Action proposal
+
+Since the background is generally not helpful for 3D action characterization, action proposal is also conducted, following [46] but with some minor modifications. First, YOLOv3-Tiny [30] is used for human detection instead of Faster R-CNN [31], for the sake of running speed. Then, human and background are separated by depth thresholding. Particularly, a depth value histogram is first extracted with a discretization interval of $100\mathrm{mm}$, and the interval of highest occurrence probability is found; the threshold is empirically set as that interval's middle value plus $200\mathrm{mm}$. Finally, 3DV is extracted only from the action proposal's 3D space.
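+
+The thresholding step can be sketched as below. The 100 mm bins and the +200 mm offset follow the text above; the NumPy details, the function name, and the handling of invalid depth are assumptions.
+
+```python
+import numpy as np
+
+def depth_threshold(depth_values_mm, bin_width=100.0, offset=200.0):
+    """Estimate the human/background separation threshold inside a detected box.
+
+    The most populated depth interval is assumed to contain the subject; the
+    threshold is that interval's middle value plus a fixed offset.
+    """
+    d = depth_values_mm[depth_values_mm > 0]                  # drop invalid (zero) depth readings
+    edges = np.arange(0.0, d.max() + bin_width, bin_width)    # 100 mm discretization intervals
+    hist, _ = np.histogram(d, bins=edges)
+    k = int(np.argmax(hist))                                  # interval of highest occurrence
+    middle = 0.5 * (edges[k] + edges[k + 1])
+    return middle + offset                                    # keep points closer than this value
+```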
+
+# 4. Deep learning network on 3DV
+
+After acquiring 3DV, the upcoming problem is how to conduct deep learning on it, so as to perform feature learning and 3D action type decision jointly. Since 3DV appears in 3D voxel form, an intuitive way is to apply a 3D CNN to it, as many 3D visual recognition methods [20, 7, 21] do. Nevertheless, 3D CNNs are generally hard to train, mainly due to their relatively large number of model parameters. Deep learning on point sets (e.g., PointNet [24] and PointNet++ [25]) is a recently emerged research avenue that addresses the disordered nature of point sets, with promising performance and a lightweight model size. Inspired by this, we propose to apply PointNet++ to conduct deep learning on 3DV instead of a 3D CNN, concerning effectiveness and efficiency jointly.
+
+
+Figure 7. The procedure of abstracting 3DV voxel $V(x,y,z)$ into 3DV point $P(x,y,z)$.
+
+To this end, 3DV is abstracted into point set form. To our knowledge, using PointNet++ to deal with voxel data has not been well studied before. Meanwhile, since 3DV tends to lose some appearance information, as shown in Fig. 4, a multi-stream deep learning model based on PointNet++ is also proposed to learn appearance and motion features for 3D action characterization.
+
+# 4.1. Review on PointNet++
+
+PointNet++ [25] is derived from PointNet [24], the pioneer of deep learning on point sets. PointNet was proposed mainly to address the disordered nature of point clouds; however, it cannot capture local fine-grained patterns well. PointNet++ alleviates this in a local-to-global hierarchical learning manner. It makes 2 main contributions. First, it partitions the set of points into overlapping local regions to better maintain local fine 3D visual clues. Secondly, it uses PointNet recursively as the local feature learner, and the local features are further grouped into larger units to reveal the global shape characteristics. In summary, PointNet++ generally inherits the merits of PointNet but with stronger local fine-grained descriptive power. Compared with 3D CNNs, PointNet++ generally has a more lightweight model size and higher running speed, and it tends to be easier to train.
+
+The main intuitions for applying PointNet++ to 3DV are threefold. First, we do not want to be trapped in the training challenges of 3D CNNs. Secondly, PointNet++ is good at capturing local 3D visual patterns, which is beneficial for 3D action recognition: local 3D motion patterns actually play a vital role in good 3D action characterization, as the hand region shown in Fig. 1 for "Handshaking" illustrates. Last, applying PointNet++ to 3DV is not a difficult task; what we need to do is abstract 3DV into point set form, which will be illustrated next.
+
+# 4.2. Abstract 3DV into point set
+
+Suppose the acquired 3DV for a depth video without temporal split is of size $H \times W \times D$; each 3DV voxel $V(x,y,z)$ possesses a global motion value $m_{G}$ given by temporal rank pooling, as illustrated in Fig. 7.
+
+
+Figure 8. PointNet++ based multi-stream network for 3DV to learn motion and appearance feature jointly.
+
+Here, $(x,y,z)$ indicates the 3D position index of $V(x,y,z)$ within 3DV. To fit PointNet++, $V(x,y,z)$ is then abstracted as a 3DV point $P(x,y,z)$ with the descriptive feature $(x,y,z,m_{G})$: $(x,y,z)$ denotes the 3D spatial feature and $m_{G}$ is the motion feature. Thus, the yielded $P(x,y,z)$ is able to represent the 3D motion pattern and the corresponding spatial information integrally. Since $(x,y,z)$ and $m_{G}$ are multi-modal features, feature normalization is executed to balance their effect on PointNet++ training. Specifically, $m_{G}$ is linearly normalized into the range $[-0.5,0.5]$. For the spatial feature, $y$ is first linearly normalized into the range $[-0.5,0.5]$; then $x$ and $z$ are re-scaled respectively, according to their size ratio towards $y$. In this way, the 3D geometric characteristics can be well maintained to alleviate distortion. As illustrated in Sec. 3.3, temporal split is executed in 3DV to involve multi-temporal motion information. Thus, each 3DV point $P(x,y,z)$ corresponds to multiple global and local motion values. We propose to concatenate all the motion values to describe the 3D motion pattern integrally. $P(x,y,z)$ is finally characterized by the spatial-motion feature
+
+$$
+F_{P(x, y, z)} = (\overbrace{x, y, z}^{\text{Spatial}}, \overbrace{m_{G}, m_{1}, \dots, m_{T_{1}}}^{\text{Motion}}), \tag{5}
+$$
+
+where $m_{G}$ is the motion feature extracted from the whole video, $m_{i}$ denotes the motion feature from the $i$-th temporal split, and $T_{1}$ is the number of temporal splits as in Sec. 3.3.
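+
+Putting Sec. 4.2 together, one way to turn a 3DV grid (plus its per-split motion grids) into the point set of Eqn. 5 is sketched below. The normalization mirrors the description above; the exact centring of $x$ and $z$ and the dropping of all-zero-motion voxels are assumptions about details the paper leaves open.
+
+```python
+import numpy as np
+
+def voxels_to_points(motion_grids):
+    """Abstract 3DV voxels into spatial-motion points (Eqn. 5).
+
+    motion_grids : list [m_G, m_1, ..., m_T1] of equally shaped float grids.
+    Returns an (N, 3 + 1 + T1) array; voxels whose motion values are all zero are dropped.
+    """
+    stacked = np.stack(motion_grids, axis=-1)                   # (X, Y, Z, 1 + T1)
+    xs, ys, zs = np.nonzero(np.abs(stacked).sum(axis=-1))       # keep voxels with some motion
+    motion = stacked[xs, ys, zs]
+    m_min, m_max = motion.min(axis=0), motion.max(axis=0)
+    motion = (motion - m_min) / np.maximum(m_max - m_min, 1e-6) - 0.5   # each channel -> [-0.5, 0.5]
+    X, Y, Z = stacked.shape[:3]
+    scale = max(Y - 1, 1)                                        # y is the reference axis
+    y = ys / scale - 0.5
+    x = (xs - 0.5 * (X - 1)) / scale                             # x, z keep their size ratio to y
+    z = (zs - 0.5 * (Z - 1)) / scale
+    return np.column_stack([x, y, z, motion])
+```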
+
+# 4.3. Multi-stream network
+
+Since 3DV may lose fine appearance clues, a multi-stream network using PointNet++ is proposed to learn motion and appearance features jointly, following the idea in [37] for RGB video. As in Fig. 8, it consists of 1 motion stream and multiple appearance streams. The input of the motion stream is the single 3DV point set from Sec. 4.2; for the motion PointNet++, 3DV points whose motion features are all 0 are not sampled. The inputs of the appearance streams are
+
+the raw depth point sets sampled from $T_{2}$ temporal splits with action proposal; particularly, they share the same appearance PointNet++. Motion and appearance features are late-fused via concatenation at the fully-connected layer.
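+
+A minimal PyTorch-style sketch of this late fusion is given below: one motion stream and $T_2$ appearance streams (sharing weights) produce global features that are concatenated before the classifier. The PointNet++ backbones are treated as black boxes; the `feat_dim` argument, the backbone call signature, and the single linear classifier are assumptions, not the released implementation.
+
+```python
+import torch
+import torch.nn as nn
+
+class MultiStream3DV(nn.Module):
+    """Late fusion of one motion stream and T2 weight-shared appearance streams (Sec. 4.3)."""
+
+    def __init__(self, motion_backbone, appearance_backbone, feat_dim, num_classes, t2=3):
+        super().__init__()
+        self.motion_backbone = motion_backbone          # PointNet++ on 3DV points
+        self.appearance_backbone = appearance_backbone  # shared PointNet++ on raw depth points
+        self.classifier = nn.Linear(feat_dim * (1 + t2), num_classes)
+
+    def forward(self, motion_points, appearance_points):
+        # motion_points: (B, N, 3 + 1 + T1); appearance_points: list of T2 tensors of shape (B, N, 3)
+        feats = [self.motion_backbone(motion_points)]
+        for pts in appearance_points:                    # all appearance streams share the same weights
+            feats.append(self.appearance_backbone(pts))
+        fused = torch.cat(feats, dim=1)                  # late fusion by concatenation
+        return self.classifier(fused)
+```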
+
+# 5. Implementation details
+
+The 3DV voxel size is set to $35mm \times 35mm \times 35mm$. $T_{1}$ and $T_{2}$ are set to 4 and 3 respectively, for multi-temporal motion and appearance feature extraction. For PointNet++, farthest point sampling is used on the centroids of local regions, and the sampled points are grouped with ball query; the group radius at the first and second level is set to 0.1 and 0.2 respectively. Adam [9] is applied as the optimizer with a batch size of 32. The learning rate begins at 0.001 and decays by a factor of 0.5 every 10 epochs; training ends at 70 epochs. During training, we perform data augmentation on the 3DV points and raw depth points, including random rotation around the $Y$ and $X$ axes, jittering, and random point dropout. The multi-stream network is implemented using PyTorch. Within each stream, PointNet++ samples 2048 points for both motion and appearance feature learning.
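+
+These optimization settings map directly onto a standard PyTorch training setup. The sketch below only mirrors the stated hyper-parameters (Adam, batch size 32, learning rate 0.001 halved every 10 epochs, 70 epochs); `model` and `train_loader` are placeholders, not part of the paper.
+
+```python
+import torch
+
+def train(model, train_loader, device="cuda"):
+    """Training loop matching the hyper-parameters stated above (model/dataloader assumed)."""
+    model = model.to(device)
+    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)                        # Adam, lr = 0.001
+    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)  # halve every 10 epochs
+    criterion = torch.nn.CrossEntropyLoss()
+    for epoch in range(70):                                                          # train for 70 epochs
+        for points, labels in train_loader:                                          # batches of 32 samples
+            points, labels = points.to(device), labels.to(device)
+            optimizer.zero_grad()
+            loss = criterion(model(points), labels)
+            loss.backward()
+            optimizer.step()
+        scheduler.step()
+```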
+
+# 6. Experiments
+
+# 6.1. Experimental setting
+
+Dataset: NTU RGB+D 120 [13]. This is the most recently released and most challenging 3D action recognition dataset, and also the largest. Particularly, it contains 114,480 RGB-D action samples of 120 categories captured using Microsoft Kinect v2. The action samples show large variation in subject, imaging viewpoint and background, which imposes essential challenges for 3D action recognition. The accuracy of the state-of-the-art approaches is not satisfactory (i.e., below $70\%$) under both the cross-subject and cross-setup evaluation criteria.
+
+Dataset: NTU RGB+D 60 [33]. This is the preliminary version of NTU RGB+D 120: it contains 56,880 RGB-D action samples of 60 categories captured using Microsoft Kinect v2. Before NTU RGB+D 120, it was the largest 3D action recognition dataset. The cross-subject and cross-view evaluation criteria are used for testing.
+
+Dataset: N-UCLA [41]. Compared with NTU RGB+D 120 and NTU RGB+D 60, this is a relatively small-scale 3D action recognition dataset. It only contains 1475 action samples of 10 action categories. These samples were captured using Microsoft Kinect v1 from 3 different viewpoints, with relatively higher imaging noise. The cross-view evaluation criterion is used for testing.
+
+Dataset: UWA3DII [26]. This is also a small-scale 3D action recognition dataset with only 1075 video samples from 30 categories. One essential challenge of this dataset is the limited number of training samples per action category. And, the samples are captured using Microsoft Kinect
+
+Table 1. Performance comparison on action recognition accuracy $(\%)$ among different methods on NTU RGB+D 120 dataset.
+
+| Methods | Cross-subject | Cross-setup |
+| --- | --- | --- |
+| *Input: 3D Skeleton* | | |
+| NTU RGB+D 120 baseline [13] | 55.7 | 57.9 |
+| GCA-LSTM [17] | 58.3 | 59.3 |
+| FSNet [14] | 59.9 | 62.4 |
+| Two stream attention LSTM [16] | 61.2 | 63.3 |
+| Body Pose Evolution Map [18] | 64.6 | 66.9 |
+| SkeleMotion [4] | 67.7 | 66.9 |
+| *Input: Depth maps* | | |
+| NTU RGB+D 120 baseline [13] | 48.7 | 40.1 |
+| 3DV-PointNet++ (ours) | 82.4 | 93.5 |
+
+Table 2. Performance comparison on action recognition accuracy $(\%)$ among different methods on NTU RGB+D 60 dataset.
+
+| Methods | Cross-subject | Cross-view |
+| --- | --- | --- |
+| *Input: 3D Skeleton* | | |
+| SkeleMotion [4] | 69.6 | 80.1 |
+| GCA-LSTM [17] | 74.4 | 82.8 |
+| Two stream attention LSTM [16] | 77.1 | 85.1 |
+| AGC-LSTM [36] | 89.2 | 95.0 |
+| AS-GCN [11] | 86.8 | 94.2 |
+| VA-fusion [51] | 89.4 | 95.0 |
+| 2s-AGCN [35] | 88.5 | 95.1 |
+| DGNN [34] | 89.9 | 96.1 |
+| *Input: Depth maps* | | |
+| HON4D [23] | 30.6 | 7.3 |
+| SNV [48] | 31.8 | 13.6 |
+| $HOG^2$ [22] | 32.2 | 22.3 |
+| Li et al. [10] | 68.1 | 83.4 |
+| Wang et al. [42] | 87.1 | 84.2 |
+| MVDI [46] | 84.6 | 87.3 |
+| 3DV-PointNet++ (ours) | 88.8 | 96.3 |
+
+v1 with relatively high imaging noise.
+
+Input data modality and evaluation metric. During experiments, the input data of our proposed 3DV based 3D action recognition method is only depth maps. We will not use any other auxiliary information, such as skeleton, RGB image, human mask, etc. The training / test sample splits and testing setups on all the 4 datasets are strictly followed for fair comparison. Classification accuracy on all the action samples is reported for performance evaluation.
+
+# 6.2. Comparison with state-of-the-art methods
+
+NTU RGB+D 120: Our 3DV based approach is compared with the state-of-the-art skeleton-based and depth-based 3D action recognition methods [13, 17, 16, 18, 4] on this dataset. The performance comparison is listed in Table 1. We can observe that:
+
+- It is indeed impressive that our proposition achieves breakthrough results on this large-scale challenging dataset for both the cross-subject and cross-setup test settings. Particularly, we achieve $82.4\%$ and $93.5\%$ on these 2 settings respectively, which outperforms the state-of-the-art methods by large margins (i.e., $14.7\%$ at least on cross-subject, and $26.6\%$ at least on cross-setup). This essentially verifies the superiority of our proposition;
+
+- The performance of the other methods is poor, which reveals the great challenge of the NTU RGB+D 120 dataset;
+
+Table 3. Performance comparison on action recognition accuracy $(\%)$ among different depth-based methods on N-UCLA dataset.
+
+| Methods | Accuracy |
+| --- | --- |
+| HON4D [23] | 39.9 |
+| SNV [48] | 42.8 |
+| AOG [41] | 53.6 |
+| HOPC [27] | 80.0 |
+| MVDI [46] | 84.2 |
+| 3DV-PointNet++ (ours) | 95.3 |
+
+Table 4. Performance comparison on action recognition accuracy $(\%)$ among different depth-based methods on UWA3DII dataset.
+
+| Methods | Mean accuracy |
+| --- | --- |
+| HON4D [23] | 28.9 |
+| SNV [48] | 29.9 |
+| AOG [41] | 26.7 |
+| HOPC [27] | 52.2 |
+| MVDI [46] | 68.1 |
+| 3DV-PointNet++ (ours) | 73.2 |
+
+- Our method achieves better performance in the cross-setup case than in the cross-subject case. This implies that 3DV is more sensitive to subject variation.
+NTU RGB+D 60: The proposed method is compared with the state-of-the-art approaches [17, 16, 36, 11, 51, 35, 34, 23, 48, 22, 10, 42, 46] on this dataset. The performance comparison is listed in Table 2. We can see that:
+- Our proposition still significantly outperforms all the depth-based manners, on both the cross-subject and cross-view test settings.
+- On the cross-view setting, the proposed method is also superior to all the skeleton-based manners, and it is only slightly inferior to DGNN [34] on the cross-subject setting. This reveals that using only depth maps can still achieve promising performance.
+- By comparing Tables 1 and 2, we can find that the performance of some methods (i.e., GCA-LSTM [17] and Two stream attention LSTM [16]) drops significantly. On the shared cross-subject setting, GCA-LSTM drops $16.1\%$ and Two stream attention LSTM drops $15.9\%$, while our manner only drops $6.4\%$. This demonstrates 3DV's strong adaptability and robustness.
+N-UCLA and UWA3DII: We compare the proposed manner with the state-of-the-art depth-based approaches [23, 48, 41, 27, 46] on these 2 small-scale datasets. The performance comparisons are given in Tables 3 and 4 respectively. To save space, the average accuracy over the different viewpoint combinations is reported on UWA3DII. It can be summarized that:
+- On these 2 small-scale datasets, the proposed approach still consistently outperforms the other depth-based manners. This demonstrates that our proposition holds advantages in both the large-scale and small-scale test cases;
+- 3DV does not perform as well on UWA3DII, with an accuracy of $73.2\%$. In our opinion, this may be caused by the limited number of training samples per class on this dataset, so that deep learning cannot be well conducted.
+
+Table 5. Effectiveness of 3DV motion feature on NTU RGB+D 120 dataset. Appearance stream is not used.
+
+| $T_1$ | 3DV point feature | Cross-subject | Cross-setup |
+| --- | --- | --- | --- |
+| 1 | $(x,y,z)$ | 61.4 | 68.9 |
+| 1 | $(x,y,z,m_G)$ | 75.1 | 87.4 |
+
+Table 6. Effectiveness of temporal split for 3DV extraction on N-TU RGB+D 120 dataset. Appearance stream is not used.
+
+| $T_1$ | 3DV point feature | Cross-subject | Cross-setup |
+| --- | --- | --- | --- |
+| 1 | $(x,y,z,m_G)$ | 75.1 | 87.4 |
+| 2 | $(x,y,z,m_G,m_1,m_2)$ | 75.8 | 89.6 |
+| 4 | $(x,y,z,m_G,m_1,\dots,m_4)$ | 76.9 | 92.5 |
+
+Table 7. Effectiveness of appearance stream on NTU RGB+D 60 and 120 dataset.
+
+| Dataset | Input stream | Cross-subject | Cross-setup |
+| --- | --- | --- | --- |
+| NTU 120 | 3DV | 76.9 | 92.5 |
+| NTU 120 | Appearance | 72.1 | 79.4 |
+| NTU 120 | 3DV+appearance | 82.4 | 93.5 |
+| | | **Cross-subject** | **Cross-view** |
+| NTU 60 | 3DV | 84.5 | 95.4 |
+| NTU 60 | Appearance | 80.1 | 85.1 |
+| NTU 60 | 3DV+appearance | 88.8 | 96.3 |
+
+Table 8. Effectiveness of action proposal on N-UCLA dataset.
+
+| Action proposal | Accuracy |
+| --- | --- |
+| W/O | 92.9 |
+| With | 95.3 |
+
+# 6.3. Ablation study
+
+Effectiveness of 3DV motion feature: To verify this, we remove the 3DV motion feature from the sampled 3DV points within PointNet++ and observe the performance change. The comparison results on the NTU RGB+D 120 dataset are given in Table 5. We can see that, without the motion feature, 3DV's performance drops significantly (i.e., by up to $18.5\%$).
+
+Effectiveness of temporal split for 3DV extraction: Towards this, the temporal split number $T_{1}$ is set to 1, 2, and 4 respectively on the NTU RGB+D 120 dataset. The comparison results are listed in Table 6. Obviously, the temporal split essentially improves the performance in all test cases.
+
+Effectiveness of appearance stream: This is verified on NTU RGB+D 60 and 120 dataset simultaneously, as listed in Table 7. We can observe that:
+
+- The introduction of appearance stream can consistently enhance the performance of 3DV on these 2 datasets, towards all the 3 test settings;
+- The 3DV stream consistently and significantly outperforms the appearance stream, especially on the cross-setup and cross-view settings. This verifies 3DV's strong discriminative power for characterizing 3D actions from the motion perspective.
+
+Effectiveness of action proposal: The performance comparison of our method with and without action proposal on the N-UCLA dataset is listed in Table 8. Indeed, action proposal helps to enhance performance.
+
+PointNet++ vs. 3D CNN: To verify the superiority of PointNet++ for deep learning on 3DV, we compare it with
+
+Table 9. Comparison on performance and complexity between PointNet++ and C3D on N-UCLA and NTU RGB+D 60 dataset.
+
+| Method | Parameters | FLOPs | N-UCLA | NTU RGB+D 60 |
+| --- | --- | --- | --- | --- |
+| C3D | 29.2M | 10.99G | 64.5 | 85.0 |
+| PointNet++ | 1.24M | 1.24G | 71.3 | 90.0 |
+
+Table 10. Performance comparison among the different sampling point numbers for 3DV, on NTU RGB+D 120 dataset.
+
+| Sampling point number | Cross-subject | Cross-setup |
+| --- | --- | --- |
+| 512 | 74.9 | 89.0 |
+| 1024 | 75.7 | 90.9 |
+| 2048 | 76.9 | 92.5 |
+
+Table 11. Performance comparison among the different 3DV voxel sizes, on NTU RGB+D 120 dataset.
+
+| Voxel size (mm) | Cross-subject | Cross-setup |
+| --- | --- | --- |
+| 25 × 25 × 25 | 75.9 | 92.0 |
+| 35 × 35 × 35 | 76.9 | 92.5 |
+| 50 × 50 × 50 | 76.0 | 91.6 |
+| 75 × 75 × 75 | 74.1 | 90.4 |
+
+3D CNN. Particularly, the well-established 3D CNN model C3D [39] for video classification is used with some modifications: the number of C3D's input channels is reduced from 3 to 1, and 3DV is extracted on a fixed grid of $50 \times 50 \times 50$ as the input of C3D. Without data augmentation and temporal split, the performance and model complexity comparison on the N-UCLA and NTU RGB+D 60 datasets (cross-view setting) is given in Table 9. We can see that PointNet++ holds the advantage in both effectiveness and efficiency.
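+
+The single-channel adaptation amounts to changing the first 3D convolution and feeding it the fixed-size 3DV grid. The PyTorch snippet below illustrates the idea only; the 64-filter, 3x3x3 first layer is an assumption about the C3D architecture, and the rest of the network is omitted.
+
+```python
+import torch
+import torch.nn as nn
+
+# First C3D-style convolution adapted to single-channel 3DV input (assumed layer shape).
+conv1 = nn.Conv3d(in_channels=1, out_channels=64, kernel_size=3, padding=1)
+
+dummy_3dv = torch.zeros(8, 1, 50, 50, 50)   # a batch of 3DV grids at the fixed 50 x 50 x 50 size
+print(conv1(dummy_3dv).shape)                # torch.Size([8, 64, 50, 50, 50])
+```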
+
+# 6.4. Parameter analysis
+
+Sampling point number on 3DV: Before inputting the 3DV point set into PointNet++, farthest point sampling is executed first. To investigate the choice of the sampling point number, we compare the performance of the 3DV stream with different sampling point numbers on the NTU RGB+D 120 dataset. The results are listed in Table 10: sampling 2048 points achieves the best performance on 3DV.
+
+3DV voxel size: To investigate the choice of 3D voxel size, we compare the performance of 3DV stream with the different 3DV voxel sizes on NTU RGB+D 120 dataset. The results are listed in Table 11. Particularly, $35mm \times 35mm \times 35mm$ is the optimal 3DV voxel size.
+
+# 6.5. Other issues
+
+Running time: On the platform with CPU: Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.6GHz (only using 1 core), and GPU: 1 Nvidia RTX 2080Ti, 3DV's overall online running time is 2.768s/video as detailed in Table 12. Particularly, 100 samples with the average length of 97.6 frames are randomly selected from NTU RGB+D 60 dataset for test.
+
+Approximated temporal rank pooling: In our implementation, the approximated temporal rank pooling is used for 3DV extraction due to its high running efficiency. We compare it with the original one on N-UCLA data, using
+
+Table 12. Item-wise time consumption of 3DV per video.
+
+| Unit | Item | Time (ms) | Unit | Item | Time (ms) |
+| --- | --- | --- | --- | --- | --- |
+| GPU | Human detection | 231 | CPU | 3DV pointlization | 88 |
+| CPU | Point cloud voxelization | 2107 | GPU | PointNet++ forward | 242 |
+| CPU | Temporal rank pooling | 100 | | Overall | 2768 |
+
+Table 13. Performance comparison between the original and approximated temporal rank pooling for 3DV on N-UCLA dataset.
+
+| Temporal rank pooling methods | Accuracy | Time per sample |
+| --- | --- | --- |
+| Original | 95.8 | 1.12s |
+| Approximated | 95.3 | 0.10s |
+
+
+Figure 9. Some classification failure cases of 3DV (e.g., "Rub two hands" vs. "Clapping"). The ground-truth action label is shown in black, and the prediction is in red.
+
+CPU. As shown in Table 13, the approximated temporal rank pooling runs much faster than the original one with the similar performance.
+
+3DV failure cases: Some classification failure cases of 3DV are shown in Fig. 9. We find that the failures tend to be caused by tiny motion differences between actions.
+
+# 7. Conclusions
+
+In this paper, 3DV is proposed as a novel and compact 3D motion representation for 3D action recognition. PointNet++ is applied to 3DV to conduct end-to-end feature learning. Accordingly, a multi-stream PointNet++ based network is also proposed to learn the 3D motion and depth appearance features jointly, so as to better characterize 3D actions. The experiments on 4 challenging datasets demonstrate the superiority of our proposition for both large-scale and small-scale test cases. How to further enhance 3DV's discriminative power, especially towards tiny motion patterns, is our main concern for future work.
+
+# Acknowledgment
+
+This work is jointly supported by the National Natural Science Foundation of China (Grant No. 61502187 and 61876211), Equipment Pre-research Field Fund of China (Grant No. 61403120405), the Fundamental Research Funds for the Central Universities (Grant No. 2019kyfxyKJC024), National Key Laboratory Open Fund of China (Grant No. 6142113180211), the start-up funds from University at Buffalo. Joey Tianyi Zhou is supported by Singapore Government's Research, Innovation and Enterprise 2020 Plan (Advanced Manufacturing and Engineering domain) under Grant A1687b0033 and Grant A18A1b0045.
+
+# References
+
+[1] Tali Basha, Yael Moses, and Nahum Kiryati. Multi-view scene flow estimation: A view centered variational approach. International Journal of Computer Vision, 101(1):6-21, 2013. 1
+[2] Hakan Bilen, Basura Fernando, Efstratios Gavves, and Andrea Vedaldi. Action recognition with dynamic image networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(12):2799-2813, 2017. 1, 3, 4
+[3] Hakan Bilen, Basura Fernando, Efstratios Gavves, Andrea Vedaldi, and Stephen Gould. Dynamic image networks for action recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3034-3042, 2016. 1, 3
+[4] Carlos Caetano, Jessica Sena, François Brémond, Jefersson A dos Santos, and William Robson Schwartz. Skelemotion: A new representation of skeleton joint sequences based on motion information for 3d action recognition. arXiv preprint arXiv:1907.13025, 2019. 6
+[5] Basura Fernando, Efstratios Gavves, José Oramas, Amir Ghodrati, and Tinne Tuytelaars. Rank pooling for action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(4):773-787, 2016. 1, 2, 3
+[6] Basura Fernando, Efstratios Gavves, Jose M Oramas, Amir Ghodrati, and Tinne Tuytelaars. Modeling video evolution for action recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5378-5387, 2015. 1, 2, 3
+[7] Liuhao Ge, Hui Liang, Junsong Yuan, and Daniel Thalmann. 3d convolutional neural networks for efficient and robust hand pose estimation from single depth images. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1991-2000, 2017. 2, 4
+[8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016. 2, 3
+[9] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 6
+[10] Junnan Li, Yongkang Wong, Qi Zhao, and Mohan Kankanhalli. Unsupervised learning of view-invariant action representations. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 1262-1272, 2018. 1, 2, 6, 7
+[11] Maosen Li, Siheng Chen, Xu Chen, Ya Zhang, Yanfeng Wang, and Qi Tian. Actional-structural graph convolutional networks for skeleton-based action recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3595–3603, 2019. 1, 2, 6, 7
+[12] Wanqing Li, Zhengyou Zhang, and Zicheng Liu. Expandable data-driven graphical modeling of human actions based on salient postures. IEEE Transactions on Circuits and Systems for Video Technology, 18(11):1499-1510, 2008. 2
+[13] Jun Liu, Amir Shahroudy, Maurizio Lisboa Perez, Gang Wang, Ling-Yu Duan, and Alex Kot Chichung. NTU RGB+D 120: A large-scale benchmark for 3d human activity understanding. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019. 1, 2, 6
+[14] Jun Liu, Amir Shahroudy, Gang Wang, Ling-Yu Duan, and Alex Kot Chichung. Skeleton-based online action prediction using scale selection network. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019. 6
+[15] Jun Liu, Amir Shahroudy, Dong Xu, and Gang Wang. Spatio-temporal LSTM with trust gates for 3d human action recognition. In Proc. European Conference on Computer Vision (ECCV), pages 816-833, 2016. 2
+[16] Jun Liu, Gang Wang, Ling-Yu Duan, Kamila Abdiyeva, and Alex C Kot. Skeleton-based human action recognition with global context-aware attention LSTM networks. IEEE Transactions on Image Processing, 27(4):1586-1599, 2017. 1, 2, 6, 7
+[17] Jun Liu, Gang Wang, Ping Hu, Ling-Yu Duan, and Alex C Kot. Global context-aware attention LSTM networks for 3d action recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1647-1656, 2017. 1, 2, 6, 7
+[18] Mengyuan Liu and Junsong Yuan. Recognizing human actions as the evolution of pose estimation maps. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1159-1168, 2018. 6
+[19] Cewu Lu, Jiaya Jia, and Chi-Keung Tang. Range-sample depth feature for action recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 772–779, 2014. 3
+[20] Daniel Maturana and Sebastian Scherer. Voxnet: A 3d convolutional neural network for real-time object recognition. In Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 922–928, 2015. 2, 4
+[21] Gyeongsik Moon, Ju Yong Chang, and Kyoung Mu Lee. V2V-PoseNet: Voxel-to-voxel prediction network for accurate 3d hand and human pose estimation from a single depth map. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5079-5088, 2018. 1, 2, 3, 4
+[22] Eshed Ohn-Bar and Mohan Trivedi. Joint angles similarities and hog2 for action recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPR), pages 465-470, 2013. 1, 2, 6, 7
+[23] Omar Oreifej and Zicheng Liu. HON4D: Histogram of oriented 4d normals for activity recognition from depth sequences. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 716-723, 2013. 2, 3, 6, 7
+[24] Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. PointNet: Deep learning on point sets for 3d classification and segmentation. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 652-660, 2017. 4, 5
+[25] Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. PointNet++: Deep hierarchical feature learning on point sets in a metric space. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 5099-5108, 2017. 2, 3, 4, 5
+
+[26] Hossein Rahmani, Arif Mahmood, Du Huynh, and Ajmal Mian. Histogram of oriented principal components for cross-view action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(12):2430-2443, 2016. 2, 6
+[27] Hossein Rahmani, Arif Mahmood, Du Q Huynh, and Ajmal Mian. Hopc: Histogram of oriented principal components of 3d point clouds for action recognition. In Proc. European Conference on Computer Vision (ECCV), pages 742-757. Springer, 2014. 7
+[28] Hossein Rahmani and Ajmal Mian. 3d action recognition from novel viewpoints. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages 1506-1515, 2016. 1
+[29] Cen Rao, Alper Yilmaz, and Mubarak Shah. View-invariant representation and recognition of actions. International Journal of Computer Vision, 50(2):203-226, 2002. 2
+[30] Joseph Redmon and Ali Farhadi. YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767, 2018. 4
+[31] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 91-99, 2015. 4
+[32] Zhou Ren, Junsong Yuan, Jingjing Meng, and Zhengyou Zhang. Robust part-based hand gesture recognition using Kinect sensor. IEEE Transactions on Multimedia, 15(5):1110-1120, 2013. 1
+[33] Amir Shahroudy, Jun Liu, Tian-Tsong Ng, and Gang Wang. NTU RGB+D: A large scale dataset for 3d human activity analysis. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1010-1019, 2016. 1, 2, 4, 6
+[34] Lei Shi, Yifan Zhang, Jian Cheng, and Hanqing Lu. Skeleton-based action recognition with directed graph neural networks. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7912-7921, 2019. 1, 2, 6, 7
+[35] Lei Shi, Yifan Zhang, Jian Cheng, and Hanqing Lu. Two-stream adaptive graph convolutional networks for skeleton-based action recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 12026-12035, 2019. 1, 2, 6, 7
+[36] Chenyang Si, Wentao Chen, Wei Wang, Liang Wang, and Tieniu Tan. An attention enhanced graph convolutional LSTM network for skeleton-based action recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1227-1236, 2019. 1, 2, 6, 7
+[37] Karen Simonyan and Andrew Zisserman. Two-stream convolutional networks for action recognition in videos. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 568-576, 2014. 2, 5
+[38] Alex J Smola and Bernhard Schölkopf. A tutorial on support vector regression. Statistics and Computing, 14(3):199-222, 2004. 4
+[39] Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3d convolutional networks. In Proc. IEEE International Conference on Computer Vision (ICCV), pages 4489-4497, 2015. 8
+[40] Jiang Wang, Zicheng Liu, Jan Chorowski, Zhuoyuan Chen, and Ying Wu. Robust 3d action recognition with random occupancy patterns. In Proc. European Conference on Computer Vision (ECCV), pages 872-885, 2012. 2
+[41] Jiang Wang, Xiaohan Nie, Yin Xia, Ying Wu, and Song-Chun Zhu. Cross-view action modeling, learning and recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2649-2656, 2014. 2, 6, 7
+[42] Pichao Wang, Wanqing Li, Zhimin Gao, Chang Tang, and Philip O. Ogunbona. Depth pooling based large-scale 3-d action recognition with convolutional neural networks. IEEE Transactions on Multimedia, 20(5):1-1, 2018. 1, 2, 3, 6, 7
+[43] Pichao Wang, Wanqing Li, Zhimin Gao, Chang Tang, Jing Zhang, and Philip Ogunbona. Convnets-based action recognition from depth maps through virtual cameras and pseudocoloring. In Proc. ACM International Conference on Multimedia (ACMM), pages 1119-1122, 2015. 3
+[44] Pichao Wang, Wanqing Li, Zhimin Gao, Jing Zhang, Chang Tang, and Philip O Ogunbona. Action recognition from depth maps using deep convolutional neural networks. IEEE Transactions on Human-Machine Systems, 46(4):498-509, 2015. 3
+[45] Pichao Wang, Wanqing Li, Philip Ogunbona, Jun Wan, and Sergio Escalera. Rgb-d-based human motion recognition with deep learning: A survey. Computer Vision and Image Understanding, 171:118-139, 2018. 1
+[46] Yang Xiao, Jun Chen, Yancheng Wang, Zhiguo Cao, Joey Tianyi Zhou, and Xiang Bai. Action recognition for depth video using multi-view dynamic images. Information Sciences, 480:287-304, 2019. 1, 2, 3, 4, 6, 7
+[47] Fu Xiong, Boshen Zhang, Yang Xiao, Zhiguo Cao, Taidong Yu, Joey Tianyi Zhou, and Junsong Yuan. A2J: Anchor-to-joint regression network for 3d articulated pose estimation from a single depth image. In Proc. IEEE International Conference on Computer Vision (ICCV), 2019. 1, 2
+[48] Xiaodong Yang and YingLi Tian. Super normal vector for activity recognition using depth sequences. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 804-811, 2014. 1, 2, 6, 7
+[49] Xiaodong Yang, Chenyang Zhang, and YingLi Tian. Recognizing actions using depth motion maps-based histograms of oriented gradients. In Proc. ACM International Conference on Multimedia (ACM MM), pages 1057-1060, 2012. 3
+[50] Le Zhang, Zenglin Shi, Ming-Ming Cheng, Yun Liu, Jia-Wang Bian, Joey Tianyi Zhou, Guoyan Zheng, and Zeng Zeng. Nonlinear regression via deep negative correlation learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019. 3
+[51] Pengfei Zhang, Cuiling Lan, Junliang Xing, Wenjun Zeng, Jianru Xue, and Nanning Zheng. View adaptive neural networks for high performance skeleton-based human action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019. 1, 2, 6, 7
+[52] Zhengyou Zhang. Microsoft Kinect sensor and its effect. IEEE Multimedia, 19(2):4-10, 2012. 1
\ No newline at end of file
diff --git a/3dv3ddynamicvoxelforactionrecognitionindepthvideo/images.zip b/3dv3ddynamicvoxelforactionrecognitionindepthvideo/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b978b94b8608085a12305e8692d47a4b90a842b6
--- /dev/null
+++ b/3dv3ddynamicvoxelforactionrecognitionindepthvideo/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1559f20fa8d82c2635fd32489d1df792e87e1d8f911794f5749ab702682056be
+size 454152
diff --git a/3dv3ddynamicvoxelforactionrecognitionindepthvideo/layout.json b/3dv3ddynamicvoxelforactionrecognitionindepthvideo/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..82d91d86a5c8c41606b4823fef437aff4b2c8a38
--- /dev/null
+++ b/3dv3ddynamicvoxelforactionrecognitionindepthvideo/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5d1109802abd6d6bfa309947f067d51cc24b702ee0aa64015e7259801a82a213
+size 453827
diff --git a/3dzefa3dzebrafishtrackingbenchmarkdataset/fcee7ec1-c8f6-49f6-9d19-94ffc3372bb8_content_list.json b/3dzefa3dzebrafishtrackingbenchmarkdataset/fcee7ec1-c8f6-49f6-9d19-94ffc3372bb8_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..9579ca179937a1e8aa31e1fedd1cd5234793fa09
--- /dev/null
+++ b/3dzefa3dzebrafishtrackingbenchmarkdataset/fcee7ec1-c8f6-49f6-9d19-94ffc3372bb8_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e7125fc185111c0e23a8cefa45060467c5c24493bddc21c28f931be3dad8fa04
+size 86982
diff --git a/3dzefa3dzebrafishtrackingbenchmarkdataset/fcee7ec1-c8f6-49f6-9d19-94ffc3372bb8_model.json b/3dzefa3dzebrafishtrackingbenchmarkdataset/fcee7ec1-c8f6-49f6-9d19-94ffc3372bb8_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..7bd1d48fb95a06f7e2e6f0597c2c892227a43b03
--- /dev/null
+++ b/3dzefa3dzebrafishtrackingbenchmarkdataset/fcee7ec1-c8f6-49f6-9d19-94ffc3372bb8_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6a4bebb5b2e278d50db7cd33bc1b1296d2cfa627bea6999b7430bce1ca77be42
+size 110773
diff --git a/3dzefa3dzebrafishtrackingbenchmarkdataset/fcee7ec1-c8f6-49f6-9d19-94ffc3372bb8_origin.pdf b/3dzefa3dzebrafishtrackingbenchmarkdataset/fcee7ec1-c8f6-49f6-9d19-94ffc3372bb8_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..5f35c0f0e3a8880f0f2a29e6e5c4018918f7f17d
--- /dev/null
+++ b/3dzefa3dzebrafishtrackingbenchmarkdataset/fcee7ec1-c8f6-49f6-9d19-94ffc3372bb8_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5b8766c738ff6a7ede026f99489f1dc0a252c26f4dbe099d3226c5b9ad6cef85
+size 858923
diff --git a/3dzefa3dzebrafishtrackingbenchmarkdataset/full.md b/3dzefa3dzebrafishtrackingbenchmarkdataset/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..09affb6894b94d4913d68eb1c6674fef2ff40746
--- /dev/null
+++ b/3dzefa3dzebrafishtrackingbenchmarkdataset/full.md
@@ -0,0 +1,348 @@
+# 3D-ZeF: A 3D Zebrafish Tracking Benchmark Dataset
+
+Malte Pedersen*, Joakim Bruslund Haurum*, Stefan Hein Bengtson, Thomas B. Moeslund Visual Analysis of People (VAP) Laboratory, Aalborg University, Denmark
+
+mape@create.aau.dk, joha@create.aau.dk, shbe@create.aau.dk, tbm@create.aau.dk
+
+# Abstract
+
+In this work we present a novel, publicly available, stereo-based 3D RGB dataset for multi-object zebrafish tracking, called 3D-ZeF. The zebrafish is an increasingly popular model organism used for studying neurological disorders, drug addiction, and more. Behavioral analysis is often a critical part of such research. However, the visual similarity, occlusion, and erratic movement of the zebrafish make robust 3D tracking a challenging and unsolved problem.
+
+The proposed dataset consists of eight sequences with a duration between 15-120 seconds and 1-10 free moving zebrafish. The videos have been annotated with a total of 86,400 points and bounding boxes. Furthermore, we present a complexity score and a novel open-source modular baseline system for 3D tracking of zebrafish. The performance of the system is measured with respect to two detectors: a naive approach and a Faster R-CNN based fish head detector. The system reaches a MOTA of up to $77.6\%$. Links to the code and dataset are available at the project page http://vap.aau.dk/3d-zef
+
+# 1. Introduction
+
+Over the past decades, the use of zebrafish (Danio rerio) as an animal model has increased significantly due to its applicability within large-scale genetic screening [1, 2]. The zebrafish has been used as a model for studying human neurological disorders, drug addiction, social anxiety disorders, and more [3, 4, 5, 6, 7, 8]. Locomotion and behavioral analysis are often critical parts of neuroscientific and biological research, and have traditionally been conducted manually [9, 10, 11]. However, manual inspection is subjective and limited to small-scale experiments. Therefore, tracking systems are becoming increasingly popular due to their efficiency and objectivity. The majority of the solutions have been developed for terrestrial animals or fish in shallow water, and most studies have been based on 2D observations in scientific [12, 13, 14, 15, 16, 17, 18] and commercial
+
+
+Figure 1: An example that illustrates the difference between the two perspectives. The 3D trajectories are estimated based on the head point annotations.
+
+However, observations in a single plane cannot capture all the relevant phenotypes of fish [23, 24, 25]. Estimating the 3D trajectories of multiple zebrafish accurately is difficult due to their erratic movement, visual similarity, and social behavior [26]; see Figure 1. This may be one of the reasons why no commercial solution has been developed yet. Only a few groups in the scientific community have addressed the problem, focusing mainly on stereo vision [27, 28, 29, 30, 31] and monocular stereo using mirrors [32, 33]. However, no labeled datasets have been made publicly available within the field, which makes a fair comparison between the applied methods difficult. This ultimately hinders the kind of progress that common datasets have enabled in other computer vision fields. Therefore, our contributions are
+
+- a publicly available RGB 3D video dataset of zebrafish with 86,400 bounding box and point annotations.
+- an open-source modular baseline system.
+
+A large part of 3D multi-object tracking methods has been developed for LiDAR-based traffic datasets [34, 35, 36, 37, 38] or RGB-D tracking [39, 40]. However, to the best of our knowledge, there exists no publicly available annotated RGB stereo dataset with erratically moving and similar-looking subjects like the one we propose.
+
+
+Figure 2: Five frames from two different occlusion scenarios. The upper frames are from the front-view and the lower frames are from the top-view. An illustration of the experimental setup is shown to the right.
+
+
+
+# 2. Related Work
+
+Multi-Object Tracking (MOT). Reliably tracking multiple objects is widely regarded as incredibly difficult. The interest in solving MOT has been steadily increasing since 2015 with the release of the MOT [41, 42, 43], UA-DETRAC [44, 45], and KITTI [34, 35] challenges. Within the MOT challenges, the current focus is on either solving the association problem using deep learning [46], using techniques such as intersection-over-union-based tracking [47], or disregarding tracking-specific models and exploiting the improvements in object detection [48].
+
+Zebrafish Tracking. Vision-based tracking systems developed for studying animal behavior have traditionally been based on 2D [18, 49, 50, 51, 52, 53, 54] due to simplicity and because the movement of most terrestrial animals can be approximated to a single plane. The majority of research in zebrafish tracking has followed this path by only allowing the fish to move in shallow water and assuming that motion happens in a 2D plane.
+
+A 2D animal tracker called idTracker, presented by Perez-Escudero et al. in 2014 [49], uses thresholding to segment blobs and is able to distinguish between individual zebrafish based on intensity and contrast maps. In 2019, Romero-Ferrero et al. presented an updated version of idTracker, called idtracker.ai [18], which is the current state-of-the-art 2D tracking system and uses convolutional neural networks (CNNs) for handling occlusions and identifying individuals. The subjects are observed with a camera positioned above a tank with a water depth of $2.5\mathrm{cm}$ , and the distance between camera and subjects is, therefore, approximately the same at all times. As stated by the authors, this simplifies the task compared to a real 3D tracking scenario.
+
+However, as most aquatic species move in three dimensions, trajectories in 3D are required to thoroughly describe their behavior [55, 56]. The most frequently used acquisition method when dealing with studies of animal behavior in 3D is stereo vision [28, 30, 31, 56, 57, 58, 59, 60, 61]. 3D tracking of zebrafish has focused mainly on single subjects or small groups, as occlusion is a major hindrance to maintaining correct IDs due to their shoaling behavior [26].
+
+Furthermore, the visual appearance of the fish can change dramatically depending on the position and posture, which makes re-identification more complex compared to 2D.
+
+The Track3D module from the commercial EthoVision XT [19] is popular for tracking zebrafish in 3D, but is limited to a single individual [56, 61]. An early semi-automatic 3D tracking system was developed by Viscido et al. [58] to investigate the relationship between individual members of fish schools. Initial 2D tracks were generated by a nearest neighbor algorithm, followed by a step allowing the user to adjust and correct the proposed 2D trajectories, which were subsequently triangulated to reconstruct the 3D trajectories.
+
+Qian et al. have worked extensively with tracking of zebrafish and have developed a 2D tracking system with a top-view camera using an augmented fast marching method (AFMM) [62] and the determinant of the Hessian [15]. This was expanded to 3D tracking by extending the setup with a side-view camera. AFMM was utilized to generate a feature point based fish representation in each view followed by 2D tracklet construction based on motion constraints. 3D tracks were then constructed by associating the 2D tracklets with side-view detections using epipolar and motion consistency constraints [29]. Liu et al. [63] extended this method to better handle occlusions based on a set of heuristic methods and the epipolar constraint. A third camera was added in [31], and the feature point representation method was extended.
+
+Cheng et al. [28] utilized a similar three-camera setup, applying an iterative unsupervised learning method to train a CNN-based classifier to distinguish between the individual fish from a camera placed above the water tank. The classifier was trained on the head region of the fish during periods when all fish were visible at the same time. By iteratively retraining the classifier, they were able to generate 2D tracks from the top-view and reconstruct the 3D tracklets based on detections from the two other side-view cameras under epipolar and motion constraints.
+
+Wang et al. [30] also utilized a three-camera setup, using a Gaussian Mixture Model, a Gabor filter, and an SVM-based method to detect the fish heads in the top- and side-views, respectively.
+
+| | Trn2 | Trn5 | Val2 | Val5 | Tst1 | Tst2 | Tst5 | Tst10 | Total |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Length | 120 s | 15 s | 30 s | 15 s | 15 s | 15 s | 15 s | 15 s | 240 s |
+| Frames | 14,400 | 1,800 | 3,600 | 1,800 | 1,800 | 1,800 | 1,800 | 1,800 | 28,800 |
+| BBs | 28,800 | 9,000 | 7,200 | 9,000 | 1,800 | 3,600 | 9,000 | 18,000 | 86,400 |
+| Points | 28,800 | 9,000 | 7,200 | 9,000 | 1,800 | 3,600 | 9,000 | 18,000 | 86,400 |
+| OC | 1.82 / 1.42 | 3.60 / 2.93 | 0.93 / 0.47 | 2.67 / 3.80 | 0.00 / 0.00 | 0.67 / 0.67 | 3.07 / 2.93 | 4.40 / 6.53 | |
+| OL | 0.41 / 0.51 | 0.56 / 0.64 | 0.22 / 0.63 | 0.25 / 0.66 | 0.00 / 0.00 | 0.10 / 0.38 | 0.25 / 0.36 | 0.28 / 0.35 | |
+| TBO | 0.69 / 0.89 | 1.00 / 1.21 | 1.79 / 3.20 | 1.64 / 0.73 | 15.00 / 15.00 | 2.41 / 2.18 | 1.38 / 1.28 | 1.86 / 1.40 | |
+| IBO | 0.29 / 0.26 | 0.28 / 0.28 | 0.24 / 0.35 | 0.22 / 0.34 | 0.00 / 0.00 | 0.19 / 0.19 | 0.25 / 0.23 | 0.26 / 0.24 | |
+| Ψ | 0.26 | 0.50 | 0.03 | 0.63 | 0.00 | 0.01 | 0.16 | 0.28 | |
+
+Table 1: Overview of the proposed dataset. OC, OL, TBO, and IBO are listed for the top- and front-view, respectively, and the number of fish is denoted in the sequence name. OC: average amount of occlusions per second, OL: average occlusion length in seconds, TBO: average amount of seconds between occlusions, IBO: intersection between occlusions, $\Psi$ : complexity measure based on OC, OL, TBO and IBO (see Equation (2)).
+
+The top-view detections are associated into 2D tracklets based on a cross-correlation method and by applying a Kalman filter; near-linear movement between frames is achieved by a frame rate of 100 FPS. The 2D tracklets are then combined into 3D tracklets by associating the side-view detections under epipolar and motion constraints. In [64], Wang et al. proposed to model the top-view movement of the zebrafish through long short-term memory networks, which were used to improve the motion constraints in a new iteration of their 3D system [65]. Lastly, Wang et al. used a CNN for re-identification of zebrafish heads from the top-view [66], although this has yet to be incorporated into a 3D tracking setup. None of the methods are able to track multiple zebrafish in 3D for more than a few seconds without ID swaps; this is still a difficult and unsolved problem.
+
+Datasets. As in other MOT challenges, there is general agreement that occlusion is what makes 3D tracking of zebrafish difficult. Nonetheless, only Wang et al. [65] describe their recordings based on occlusion frequency; however, they do not define how it is measured. Qian et al. [31] indicate their complexity based on the number of fish, but only four occlusion events occur during their 15-second demo video with ten fish. For comparison, there are 66 occlusion events in our 15-second sequence with ten fish.
+
+# 3. Proposed Dataset
+
+The proposed 3D zebrafish dataset, 3D-ZeF, has been recorded from a top- and front-view perspective. This approach was taken to minimize events of total occlusion typical for side-by-side binocular setups. An example of the visual variation between the views is shown in Figure 2 together with an illustration of the experimental setup.
+
+# 3.1. Experimental Setup
+
+The setup used to record the proposed dataset was built entirely from off-the-shelf hardware, whereas previous methods have used specialized camera equipment. An illustration of the setup is shown in Figure 2.
+
+The two light panels are IKEA FLOAT of size $30 \times 30$ cm with a luminous flux of 670 lumen and a color temperature of $4000\mathrm{K}$ . The test tank is a standard glass aquarium of size $30 \times 30 \times 30$ cm with a water depth of $15\mathrm{cm}$ . The top and front cameras are a GoPro Hero 5 and a GoPro Hero 7, respectively. All the videos are recorded with a resolution of $2704 \times 1520$ , 60 FPS, $1/60s$ shutter speed, 400 ISO, and a linear field of view. However, the fish tank does not take up the entire image; therefore, the effective region of interest is approximately $1200 \times 1200$ and $1800 \times 900$ for the top- and front-view, respectively. Diffusion fabric was placed in front of the top light in order to reduce the amount of glare in the top-view. Semi-transparent plastic was attached to three out of four of the window panes in order to reduce reflections. Furthermore, the front camera was placed orthogonally to the water level, which reduced reflections from the water surface. Lastly, the pair-wise recordings have been manually synchronized using a flashing LED, which results in a worst-case temporal shift of $\frac{1}{2\cdot\mathrm{FPS}}$ .
+
+# 3.2. Dataset Construction
+
+A total of eight sequences were recorded and divided into a training, validation, and test split. Each sequence consists of a pair of temporally aligned top- and front-view videos, and the specifications of the three splits are shown in Table 1. In order to avoid data leakage, each split contains a unique set of fish. The training and validation fish were from the same cohort, whereas the fish in the test split were from a younger cohort. Therefore, the test set differs from the training and validation sets, as the fish are smaller and exhibit different social behavior. This represents a real-life scenario where different cohorts need to be tracked, which has not generally been addressed within the field.
+
+The zebrafish were manually annotated with bounding boxes and head points, with consistent identity tags through all frames. The bounding boxes were tightly fitted to the visible parts of the zebrafish and the point annotations were centered on the head. If a set of fish touched, an occlusion tag was set for all involved bounding boxes. During occlusions, the bounding box was fitted to the visible parts of the fish rather than to its expected extent, due to the extreme flexibility of the zebrafish. The pair-wise point annotations from the two views were triangulated into 3D positions using the method proposed by Pedersen et al. [67]. The fish head was approximated during occlusions to ensure continuous 3D tracks.
+
+It should be noted that the data was recorded in RGB. Zebrafish can change their body pigmentation based on their environment, stress level, and more [23]. The changes in coloration can be important in behavioral studies and may even be valuable in solving the 3D tracking problem.
+
+# 3.3. Dataset Complexity
+
+Intuitively, a higher number of fish creates a more difficult tracking problem. However, this is only true to some extent, as the main complexity factor is the number and level of occlusions, which depend on a combination of the social activity and the available space rather than on the number of individuals alone. Therefore, we have defined a range of metrics based on occlusion events to describe the complexity of the proposed sequences. An occlusion event is defined by a set of consecutive frames where a fish is part of an occlusion. The events are measured from the perspective of the fish; if two fish are part of an occlusion, it counts as two events.
+
+The number of occlusion events indicates how often a fish is part of an occlusion, but a few long occlusions can be just as problematic as many short ones. The length of the occlusions and the time between them are, therefore, important to keep in mind when evaluating the complexity of a recording. Due to our definition of occlusion events, there are cases where fish are part of occlusions with only minor parts of their bodies. Therefore, the intersection between occlusions is measured as an indication of the general intersection level. The metrics that we provide as the basis for the complexity level of our recordings are defined here:
+
+Occlusion Count (OC): the average number of occlusion events per second.
+
+Occlusion Length (OL): the average time in seconds of all occlusion events.
+
+Time Between Occlusions (TBO): the average time in seconds between occlusion events.
+
+Intersection Between Occlusions (IBO): a measure of how large a part of the fish is involved in an occlusion event.
+
+The intersection in a frame, $f$ , for fish $i$ is given by
+
+$$
+\mathrm{IBO}_{i,f} = \frac{1}{|\mathbf{bb}_{i}|} \sum_{j=1}^{n_{\mathrm{occ}}} \mathbf{bb}_{i} \cap \mathbf{bb}_{j}, \quad \text{for } j \neq i, \tag{1}
+$$
+
+where $n_{\mathrm{occ}}$ is the number of fish in an occlusion event, and $\mathbf{bb}_j$ is the set of pixel coordinates in the bounding box of fish $j$ . IBO is measured across all bounding boxes with an occlusion tag in a given frame, even for subjects that are not part of the same occlusion.
+
+
+Figure 3: IBO seen from the perspective of two different individuals in the same frame. The targets are marked in yellow, the red area shows the intersection with a subject that is part of the same occlusion as the target, and the blue area shows the intersection with a subject that is not part of the same occlusion as the target.
+
+
+
+Two examples are presented in Figure 3, where the $\mathrm{IBO}_{i,f}$ is calculated from the perspective of the targets enclosed in yellow. The blue area in the second example represents the intersection with a subject that is not part of the same occlusion as the target. Additionally, the annotated bounding boxes enclose only the visible parts of the subjects. Thus, the actual intersection between the subjects can be higher if a large part of a fish is hidden. Nonetheless, the assumption is that a high IBO is an expression of heavy occlusion and vice versa. The IBO measure presented in Table 1 is an average over all fish in all frames. A single complexity measure is calculated per sequence by combining the four proposed metrics:
+
+$$
+\Psi = \frac{1}{n} \sum_{v}^{\{T, F\}} \frac{\mathrm{OC}_{v}\, \mathrm{OL}_{v}\, \mathrm{IBO}_{v}}{\mathrm{TBO}_{v}}, \tag{2}
+$$
+
+where $n$ is the number of camera views and the subscripts $T$ and $F$ denote the top- and front-view, respectively. If a recording has no occlusions, the complexity measure $\Psi$ is zero; otherwise, the measure lies in the interval $]0, \infty[$ , where a larger value indicates a higher complexity.
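+
+To make the complexity computation concrete, the following minimal Python sketch assumes that per-view occlusion events and per-frame IBO values (Equation (1)) have already been extracted from the annotations; the function and variable names are illustrative and not taken from the released toolbox.
+
+```python
+import numpy as np
+
+def view_metrics(events, ibo_per_frame, duration_s):
+    """Compute OC, OL, TBO and IBO for one camera view.
+
+    events: sorted list of (start_s, end_s) occlusion events, one entry per involved fish.
+    ibo_per_frame: list of per-fish, per-frame IBO values (Equation (1)).
+    duration_s: sequence length in seconds.
+    """
+    oc = len(events) / duration_s                      # occlusion events per second
+    ol = np.mean([e - s for s, e in events]) if events else 0.0
+    gaps = [s2 - e1 for (_, e1), (s2, _) in zip(events, events[1:])]
+    tbo = np.mean(gaps) if gaps else duration_s        # time between occlusions
+    ibo = np.mean(ibo_per_frame) if ibo_per_frame else 0.0
+    return oc, ol, tbo, ibo
+
+def complexity(per_view_metrics):
+    """Psi (Equation (2)): average of OC*OL*IBO/TBO over the top- and front-view."""
+    total = 0.0
+    for oc, ol, tbo, ibo in per_view_metrics:
+        total += (oc * ol * ibo) / tbo if tbo > 0 else 0.0
+    return total / len(per_view_metrics)
+```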
+
+# 4. Method
+
+The pipeline of the proposed 3D tracker follows a modular tracking-reconstruction approach, where subjects are detected and tracked in each view before being triangulated and associated across views. This allows us to use the temporal information of the tracklets in the two views during the 3D association step, as opposed to a reconstruction-tracking approach, where detections are triangulated before tracks are generated.
+
+# 4.1. Object Detection in 2D
+
+A consistent 2D point is needed in each view in order to create 3D trajectories. As the head is the only rigid part of the body, the tracking point is chosen to be located between the eyes of the fish. We present two simple methods to find the head-point of the fish: a naive approach that does not require training, and a CNN-based approach.
+
+Naive: A background image, $bg$ , is initially estimated for each view by taking the median of $N_{bg}$ images sampled uniformly across the videos. Subsequently, the background is subtracted by calculating the absolute difference image, $fg = |im - bg|$ . To locate the head of a fish in the top-view, the $fg$ is binarized using the intermodes bimodal threshold algorithm [68]. The skeletonization approach of Zhang and Suen [69] is applied, and the endpoints are analyzed to locate the head of the fish. In the front-view, the $fg$ is binarized using a histogram entropy thresholding method, because the appearance of the fish cannot be approximated as bimodal. The head point is estimated as being either the center of the blob or one of the middle edge points of the blob along the minor axis of the detected bounding box. All three points are evaluated during the 3D reconstruction step, and the two points with the highest reprojection errors are discarded.
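+
+As a rough illustration of the top-view part of this detector, the sketch below uses OpenCV and scikit-image. It substitutes Otsu's threshold for the intermodes algorithm and a simple neighbour count for the endpoint analysis, so it approximates the described steps rather than reproducing the released implementation.
+
+```python
+import cv2
+import numpy as np
+from scipy.ndimage import convolve
+from skimage.morphology import skeletonize
+
+def estimate_background(frames):
+    """Median of N_bg grayscale frames sampled uniformly across the video."""
+    return np.median(np.stack(frames), axis=0).astype(np.uint8)
+
+def top_view_head_candidates(im_gray, bg_gray):
+    """Background subtraction, thresholding, skeletonization and endpoint extraction."""
+    fg = cv2.absdiff(im_gray, bg_gray)
+    # The paper uses the intermodes bimodal threshold; Otsu is used here as a stand-in.
+    _, binary = cv2.threshold(fg, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
+    skeleton = skeletonize(binary > 0)
+    # A skeleton endpoint has exactly one 8-connected skeleton neighbour.
+    neighbours = convolve(skeleton.astype(np.uint8), np.ones((3, 3)), mode="constant")
+    endpoints = skeleton & (neighbours == 2)   # the pixel itself plus one neighbour
+    ys, xs = np.nonzero(endpoints)
+    return list(zip(xs, ys))                   # candidate head points, analysed further
+```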
+
+FRCNN-H: A Faster R-CNN [70] model has been trained for each view. The bounding boxes have been extracted from all the head-point annotations in the training sequences in order to train a head-detector model for each view. The bounding boxes have static diameters of 25 and 50 pixels for the top- and front-view, respectively. The head-points are determined as the centers of the detected bounding boxes that have a minimum confidence of $c$ .
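+
+A minimal sketch of how such a fixed-size head detector could be set up with torchvision's Faster R-CNN is given below; the class count (background plus fish head), the use of box centers, and the confidence threshold $c$ follow the description above, while the training loop, data loading, and the exact configuration used for the paper's detectors are omitted.
+
+```python
+import torch
+import torchvision
+
+def build_head_detector(num_classes=2):
+    """Faster R-CNN with two classes: background and fish head."""
+    return torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=num_classes)
+
+def detect_heads(model, image, conf_thresh=0.5, device="cpu"):
+    """Return head points as the centers of detected boxes with confidence >= c.
+
+    image: float tensor of shape (3, H, W) with values in [0, 1].
+    """
+    model.eval().to(device)
+    with torch.no_grad():
+        output = model([image.to(device)])[0]
+    keep = output["scores"] >= conf_thresh
+    boxes = output["boxes"][keep]
+    centers = torch.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
+                           (boxes[:, 1] + boxes[:, 3]) / 2], dim=1)
+    return centers.cpu().numpy()
+```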
+
+See the supplementary material for more detailed information on the detectors.
+
+# 4.2. 2D Tracklet Construction
+
+As zebrafish move erratically, it is difficult to set up a stable motion model. Therefore, we use a naive tracking-by-detection approach. The tracking is done by constructing a distance matrix between the detections in a frame and the last detections of the current tracklets. The matrix is solved as a global optimization problem using the Hungarian algorithm [71]. Tracklets are deliberately constructed in a conservative manner, favoring robustness over length. A new detection is only assigned to a tracklet if it lies within a given distance threshold, denoted $\delta_{\mathrm{T}}$ and $\delta_{\mathrm{F}}$ for the top and front view, respectively. If a tracklet has not been assigned a detection within a given amount of time, $\tau_{k}$ , the tracklet is terminated.
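+
+The gated assignment step can be sketched with SciPy's Hungarian solver as follows; the tracklet bookkeeping is simplified and the names are illustrative, with the distance gate $\delta$ and the termination time $\tau_k$ passed in as parameters.
+
+```python
+import numpy as np
+from scipy.optimize import linear_sum_assignment
+from scipy.spatial.distance import cdist
+
+def assign_detections(tracklets, detections, frame, delta, tau_k):
+    """One frame of gated tracking-by-detection.
+
+    tracklets: list of dicts {"points": [(x, y), ...], "last_frame": int}
+    detections: (N, 2) array of head points in the current frame.
+    delta: distance gate; tau_k: max frames without a detection before termination.
+    """
+    if len(tracklets) and len(detections):
+        last_points = np.array([t["points"][-1] for t in tracklets])
+        cost = cdist(last_points, detections)        # Euclidean distance matrix
+        rows, cols = linear_sum_assignment(cost)     # global optimum (Hungarian)
+        used = set()
+        for r, c in zip(rows, cols):
+            if cost[r, c] <= delta:                  # only assign within the gate
+                tracklets[r]["points"].append(tuple(detections[c]))
+                tracklets[r]["last_frame"] = frame
+                used.add(c)
+        detections = [d for i, d in enumerate(detections) if i not in used]
+    # Unassigned detections start new tracklets; stale tracklets are terminated.
+    new = [{"points": [tuple(d)], "last_frame": frame} for d in detections]
+    alive = [t for t in tracklets if frame - t["last_frame"] <= tau_k]
+    return alive + new
+```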
+
+The $\ell_{2}$ distance between the head detections is used in both views for the FRCNN-H method. However, the Mahalanobis distance between the centers-of-mass is used for the front-view in the Naive method. This is due to the elliptical shape of the zebrafish body, which can be exploited by using the covariance matrix of the blob as the Mahalanobis matrix, since the fish is more likely to move along the major axis than along the minor axis.
+
+
+Figure 4: The colored lines represent 2D tracklets in each view, the node pairs are represented by the double-colored circles, and the edges of the DAG are shown by the arrows. The numbers represent example node and edge weights.
+
+
+# 4.3. 2D Tracklet Association Between Views
+
+The 2D tracklets from each view are associated into 3D tracklets through a graph-based approach. All 2D tracklets with fewer than a given number of detections, $\alpha$ , are removed in order to filter out noisy tracklets. The 3D calibration and triangulation method from Pedersen et al. [67] is used.
+
+# 4.3.1 Graph Construction
+
+A directed acyclic graph (DAG) is constructed. Every node represents a 3D tracklet and consists of two 2D tracklets, one from each camera view. Each edge connects nodes whose 3D tracklets are based on the same 2D tracklet from one of the views.
+
+Create nodes: The graph nodes are constructed by processing each top-view tracklet and identifying all temporally intersecting front-view tracklets as given by
+
+$$
+I = F_{\mathrm{T}} \cap F_{\mathrm{F}}, \tag{3}
+$$
+
+where $F_{\mathrm{T}}$ and $F_{\mathrm{F}}$ are the set of frames with detections in the top- and front-view tracklets, respectively, and $I$ is the set of frames with detections in both views. If $I = \emptyset$ , the node is not created.
+
+An example is presented in Figure 4, where both the blue and red tracklets in the top-view intersect with the three tracklets in the front-view. The outer and inner circles of the six nodes represent the top- and front-view tracklets, respectively. The number inside the nodes indicates the node weight, which is calculated as follows.
+
+For each intersecting frame in $I$ , denoted $f$ , the 2D tracklets are triangulated. This results in a 3D point of the zebrafish head, $p_f$ , with a reprojection error, $x_f$ . For the Naive method, where the head is not directly detected in the front-view, the top-view 2D point is triangulated with the three estimated points to find the match resulting in the smallest reprojection error. Therefore, $p_f$ represents the point with the smallest reprojection error. To penalize large reprojection errors, the complementary probability from the exponential cumulative distribution function (CDF), $\Phi$ , is utilized. The exponential CDF is chosen as it approximately models the reprojection error of the ground truth training data. The set of weights for all valid 3D points, $V$ , can be described by the following set-builder notation
+
+$$
+V = \left\{ 1 - \Phi\left(x_{f} \mid \lambda_{\mathrm{err}}\right) \mid f \in I \wedge A\left(p_{f}\right) \right\}, \tag{4}
+$$
+
+where $\lambda_{err}$ is the reciprocal of the mean of the training data reprojection error, and $A$ states whether $p_f$ is within the water tank. The per-frame weights in $V$ are combined into a single weight, $W$ , for the entire node by
+
+$$
+W = \operatorname{median}(V)\, \frac{|V|}{\left| F_{\mathrm{T}} \cup F_{\mathrm{F}} \right|}, \tag{5}
+$$
+
+and the node is added to the DAG given that $W \neq 0$ . This weighting scheme considers both the reprojection error and the ratio of frames with valid 3D points to the full set of frames spanned by the two 2D tracklets, $F_{\mathrm{T}} \cup F_{\mathrm{F}}$ . The median function is used instead of the mean function in order to prevent a few extreme outliers from skewing the weight.
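+
+A compact sketch of this node weighting (Equations (4) and (5)), assuming the per-frame reprojection errors and the tank-bounds check $A$ have already been computed; for the exponential CDF, $1 - \Phi(x \mid \lambda) = e^{-\lambda x}$.
+
+```python
+import numpy as np
+
+def node_weight(reproj_errors, in_tank, n_frames_union, lambda_err):
+    """Node weight W (Equations (4)-(5)).
+
+    reproj_errors: reprojection error x_f for each frame in the intersection I.
+    in_tank: per-frame booleans, True if the triangulated point p_f lies inside the tank.
+    n_frames_union: number of frames spanned by the two 2D tracklets.
+    lambda_err: reciprocal of the mean reprojection error on the training data.
+    """
+    # Complementary probability of the exponential CDF: 1 - (1 - exp(-lambda * x)).
+    weights = [np.exp(-lambda_err * x) for x, ok in zip(reproj_errors, in_tank) if ok]
+    if not weights:
+        return 0.0   # the node is not added to the DAG
+    return float(np.median(weights) * len(weights) / n_frames_union)
+```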
+
+Connect nodes: Each node in the DAG should be connected to all other nodes that build on one of the same 2D tracklets, as long as the 2D tracklets in the other view do not overlap temporally, as illustrated in Figure 4. This is done by constructing the set of node pairs, $P$ , from the set of nodes in the DAG, $N$ . Each element of $N$ , denoted $n$ , consists of the 2D tracklets, $t_{\mathrm{F}}$ and $t_{\mathrm{T}}$ , the 3D tracklet, $t$ , and the node weight, $W$ . Nodes $n_i$ and $n_j$ are considered a pair if $t_{i,\mathrm{T}} = t_{j,\mathrm{T}}$ or $t_{i,\mathrm{F}} = t_{j,\mathrm{F}}$ , if the 2D tracklets in the other view do not temporally overlap, and if $t_i$ starts earlier in time than $t_j$ . This is necessary in order to avoid assigning multiple detections to the same frame.
+
+This can be represented by the set-builder notation
+
+$$
+P = \left\{ \left(n_{i}, n_{j}\right) \mid n_{i}, n_{j} \in N \land O\left(n_{i}, n_{j}\right) \land T\left(n_{i}, n_{j}\right) \right\}, \tag{6}
+$$
+
+where $O$ assesses whether $t_i$ starts before $t_j$ , $T$ ensures that the 2D tracklets in $n_i$ and $n_j$ do not temporally overlap, and each node is given by $n = \{t_{\mathrm{T}}, t_{\mathrm{F}}, t, W\}$ .
+
+For each node pair in $P$ , the weight, $E$ , of the directed edge from $n_i$ to $n_j$ is based on:
+
+- $s$ , the speed of the fish as it moves between the last detection in $t_i$ and the first detection in $t_j$ .
+- $t_d$ , the temporal difference between $t_i$ and $t_j$ .
+- $W_{i}$ and $W_{j}$ , the weights of the nodes.
+
+The edge weight is calculated as the complementary probability of the CDF of the exponential distribution, $\Phi$ . The exponential distribution is chosen as it approximately models the speed distribution of the zebrafish. $E$ is calculated by
+
+$$
+E = \left(1 - \Phi(s \mid \lambda_{s})\right) e^{-\frac{t_{d}}{\tau_{p}}} \left(W_{i} + W_{j}\right), \tag{7}
+$$
+
+
+Figure 5: Graph evaluation based on the example from Figure 4. The colored lines represent 2D tracklet pairs based on the chosen nodes in the graph; the transparent nodes are discarded.
+
+where $\tau_{p}$ is an empirically chosen value, and $\lambda_{s}$ is the reciprocal of the sum of the mean and standard deviation of the measured speed in the training data. If a node is not part of any node pair, it is still added to the DAG but has no edges. The DAG can therefore be a disconnected graph.
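+
+The corresponding edge weight (Equation (7)) is a direct translation of the formula once the speed, the temporal gap, and the node weights are known; the sketch below is illustrative.
+
+```python
+import numpy as np
+
+def edge_weight(speed, t_gap, w_i, w_j, lambda_s, tau_p):
+    """Edge weight E (Equation (7)) between two nodes sharing a 2D tracklet."""
+    speed_term = np.exp(-lambda_s * speed)   # 1 - Phi(s | lambda_s) for the exponential CDF
+    time_term = np.exp(-t_gap / tau_p)       # penalty for the temporal gap t_d
+    return speed_term * time_term * (w_i + w_j)
+```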
+
+# 4.3.2 Graph Evaluation
+
+The final 3D tracklets are extracted from the constructed DAG; this is done by recursively finding the longest path in the graph and storing the set of nodes as a single 3D tracklet. The longest path is the path through the DAG that gives the highest value when summing all node and edge weights along the path, see Figure 5. After extraction of a path, the used nodes, and all other nodes using the same 2D tracklets, are removed from the DAG. This process is repeated until the DAG is empty. In case a 2D tracklet in the 3D tracklet is missing a detection, the 3D position cannot be assigned, but the known information of the 2D tracklet is kept. For the Naive method, the head position of the front-view 2D tracklet is determined by assigning the estimated point which minimizes the $\ell_2$ distance to the head positions in the consecutive frame.
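+
+One possible realization of this extraction loop uses networkx, as sketched below. The node weights are folded into the edge weights so that the library's longest-path routine can be applied, which approximates the node-plus-edge sum described above; the node and edge structures are assumptions made for illustration.
+
+```python
+import networkx as nx
+
+def extract_3d_tracklets(nodes, edges):
+    """Recursively pull the heaviest paths out of the (possibly disconnected) DAG.
+
+    nodes: dict node_id -> {"W": float, "top": top_tracklet_id, "front": front_tracklet_id}
+    edges: list of (u, v, E) directed edges with edge weight E.
+    """
+    g = nx.DiGraph()
+    for nid, attrs in nodes.items():
+        g.add_node(nid, **attrs)
+    for u, v, e in edges:
+        # Fold the target node weight into the edge weight so that the path sum
+        # approximates the combined node and edge weights along the path.
+        g.add_edge(u, v, weight=e + nodes[v]["W"])
+
+    tracks = []
+    while g.number_of_nodes():
+        path = nx.dag_longest_path(g, weight="weight")   # heaviest remaining path
+        tracks.append(path)
+        used_top = {g.nodes[p]["top"] for p in path}
+        used_front = {g.nodes[p]["front"] for p in path}
+        # Remove the chosen nodes and every other node reusing one of their 2D tracklets.
+        drop = [nid for nid in g.nodes
+                if g.nodes[nid]["top"] in used_top or g.nodes[nid]["front"] in used_front]
+        g.remove_nodes_from(drop)
+    return tracks
+```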
+
+# 4.4. 3D Tracklet Association
+
+The final 3D tracks are constructed from the 3D tracklets in a greedy manner. A set of tracklets equal to the number of fish present, $N_{\mathrm{fish}}$ , is used as the initial main tracklets. The remaining tracklets, denoted gallery tracklets, are assigned one by one to a single main tracklet, until no more tracklets can be assigned.
+
+# 4.4.1 Initial Tracklet Selection
+
+The set of $N_{\mathrm{fish}}$ main tracklets is selected by finding stable tracklets that are temporally concurrent and span long time intervals. For each tracklet, the set of other temporally concurrent tracklets is considered. In this set, all possible combinations of size $N_{\mathrm{fish}}$ are investigated. If all tracklets in such a combination overlap temporally, the combination is saved as a valid tracklet set.
+
+
+Figure 6: Example of internal spatio-temporal DAG, with the spatial distance between detections in the tracklets. The shortest path is found when switching from tracklet $_{\text{gallery}}$ to tracklet $_{\text{main}}$ in frame $t_{n+1}$ .
+
+The valid tracklet set with the highest median temporal overlap is used to construct $N_{\mathrm{fish}}$ full 3D tracks. This is done by using the greedy association scheme described in the following sections. No 3D tracks are created if no valid combination of size $N_{\mathrm{fish}}$ is identified.
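+
+The selection of the initial main tracklets can be sketched as a brute-force search over tracklet combinations, as below; the restriction to temporally concurrent candidates is folded into the overlap test, and using the median pairwise overlap is one plausible reading of the selection criterion, not necessarily the exact rule of the released system.
+
+```python
+import numpy as np
+from itertools import combinations
+
+def select_main_tracklets(spans, n_fish):
+    """Pick N_fish temporally concurrent 3D tracklets as initial main tracks.
+
+    spans: list of (start_frame, end_frame) spans of the 3D tracklets.
+    Returns the indices of the chosen tracklets, or None if no valid set exists.
+    """
+    best, best_score = None, -1.0
+    for combo in combinations(range(len(spans)), n_fish):
+        chosen = [spans[i] for i in combo]
+        start = max(s for s, _ in chosen)
+        end = min(e for _, e in chosen)
+        if end <= start:
+            continue   # not all tracklets in the set overlap temporally
+        overlaps = [min(e1, e2) - max(s1, s2)
+                    for (s1, e1), (s2, e2) in combinations(chosen, 2)] or [end - start]
+        score = float(np.median(overlaps))   # median temporal overlap of the set
+        if score > best_score:
+            best, best_score = combo, score
+    return best
+```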
+
+# 4.4.2 Greedy Association
+
+A greedy association algorithm is used when each gallery tracklet is associated with a single main tracklet. The greedy part of the algorithm concerns the way that gallery tracklets are chosen: all gallery tracklets are ranked in ascending order by the shortest temporal distance to any main tracklet. If a gallery tracklet overlaps temporally with all main tracklets, it is relegated to the end of the list. When the gallery tracklet has been associated with a main track, the remaining gallery tracklets are re-ranked, and the process is repeated. In this way, the main tracklets are "grown" into full tracks. The gallery tracklet assignment is based on minimizing the cost of assignment. The cost is based on a set of distance measures, which are determined from two cases.
+
+In the first case, at least one main tracklet does not temporally overlap with the gallery tracklet. Here, the association is based on the spatio-temporal distances between the gallery tracklet and the main tracklets; main tracklets that overlap temporally are not considered.
+
+In the second case, the gallery tracklet overlaps temporally with all main tracklets. As the spatio-temporal distances between the main and gallery tracklets are no longer measurable, a different set of distance values is used: the internal spatio-temporal distances, the number of intersecting frames, i.e., frames with a detection in both the main and gallery tracklets, and the ratio of intersecting frames to the total number of detections in the gallery tracklet. The internal spatio-temporal distances are determined through the construction of a DAG, where each node is a detection in a frame, and the edge weights are the spatial distances to the temporally previous nodes.
+
+The final path is the one minimizing the spatial distance traveled. An example of a graph is shown in Figure 6. The distances are calculated as the mean of the edge values where the path switches from a detection in the gallery tracklet to one in the main tracklet, and vice versa.
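+
+For two tracklets observed on the same set of frames, this internal spatio-temporal distance reduces to a two-state shortest-path problem that can be solved by dynamic programming, as in the following sketch; the full system may construct and traverse the DAG differently, so this is an approximation under that assumption.
+
+```python
+import numpy as np
+
+def internal_switch_distance(main_pts, gallery_pts):
+    """Mean spatial cost of switching between a main and a gallery tracklet.
+
+    main_pts, gallery_pts: (F, 3) arrays of 3D detections on the same F frames.
+    Per frame the path stays on the main or the gallery detection; the cheapest
+    path is found by dynamic programming and the mean edge length at tracklet
+    switches is returned.
+    """
+    pts = np.stack([main_pts, gallery_pts])            # shape (2, F, 3)
+    n_frames = pts.shape[1]
+    cost = np.zeros((n_frames, 2))
+    back = np.zeros((n_frames, 2), dtype=int)
+    for f in range(1, n_frames):
+        # step[i, j]: distance from tracklet i at frame f-1 to tracklet j at frame f.
+        step = np.linalg.norm(pts[:, f - 1, None, :] - pts[None, :, f, :], axis=-1)
+        total = cost[f - 1][:, None] + step
+        back[f] = np.argmin(total, axis=0)
+        cost[f] = np.min(total, axis=0)
+    # Backtrack the cheapest path and collect edge lengths where the path switches.
+    state = int(np.argmin(cost[-1]))
+    switches = []
+    for f in range(n_frames - 1, 0, -1):
+        prev = int(back[f, state])
+        if prev != state:
+            switches.append(np.linalg.norm(pts[prev, f - 1] - pts[state, f]))
+        state = prev
+    return float(np.mean(switches)) if switches else 0.0
+```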
+
+Association: The distance measures are consolidated into a single assignment decision through a global cost scheme. Each distance measure is normalized across the valid main tracklets into the range [0; 1] so that it sums to 1. The final cost of assigning the gallery tracklet to a main tracklet is obtained by calculating the mean of the normalized distance values. The gallery tracklet is associated with the main tracklet with the smallest cost, unless all main tracklet costs lie within a small margin, $\beta$ , of each other, in which case the gallery tracklet is discarded. $\beta$ thus enforces a margin of confidence, so that a gallery tracklet is not assigned based on inconclusive cost values.
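+
+The normalization and the margin test can be sketched as follows, assuming one matrix of distance measures per gallery tracklet; interpreting the margin $\beta$ as the spread between the largest and smallest cost is an assumption made for this illustration.
+
+```python
+import numpy as np
+
+def assign_gallery_tracklet(distance_matrix, beta):
+    """Pick the main tracklet for one gallery tracklet, or discard it (None).
+
+    distance_matrix: (n_measures, n_main) array of distance values over the valid
+    main tracklets. Each measure is normalized to sum to 1 across main tracklets,
+    the per-tracklet cost is the mean of the normalized measures, and the gallery
+    tracklet is discarded when all costs lie within the margin beta of each other.
+    """
+    d = np.asarray(distance_matrix, dtype=float)
+    sums = d.sum(axis=1, keepdims=True)
+    normalized = np.divide(d, sums, out=np.zeros_like(d), where=sums > 0)
+    costs = normalized.mean(axis=0)
+    if costs.max() - costs.min() < beta:
+        return None            # inconclusive: no confident assignment
+    return int(np.argmin(costs))
+```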
+
+# 5. Evaluation
+
+The metrics used in the MOT challenges [41, 42, 43] and the Mean Time Between Failures (MTBF) proposed by Carr and Collins [72] are utilized to measure the performance of the system on the proposed dataset. The MOT challenge metrics consist of the CLEAR MOT metrics [73], the mostly tracked/lost metrics [74], and the identification-based metrics [75].
+
+The final 3D tracks are evaluated based on a subset of the MOT challenge metrics and the monotonic MTBF metric. The primary metric used is the multiple object tracking accuracy (MOTA). The detected and ground truth tracklets are compared using the detected and annotated head points. A detection is only associated with a ground truth tracklet if it is within a distance of $0.5\mathrm{cm}$ . The performance of the system is evaluated with two different detection modules: Naive and FRCNN-H. The results are compared with a hypothetical tracker, called Oracle, which tracks perfectly at all times except during occlusions. This provides an upper bound on the performance if occlusions are not handled in any way. The full set of metrics, system parameters, and results can be found in the supplementary material.
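+
+The track-level evaluation can be approximated with the py-motmetrics package (not necessarily the evaluation code used by the authors), gating associations at the stated $0.5\mathrm{cm}$ threshold:
+
+```python
+import motmetrics as mm
+import numpy as np
+
+def evaluate_sequence(gt_frames, hyp_frames, max_dist_cm=0.5):
+    """Accumulate frame-wise associations and report MOTA and related metrics.
+
+    gt_frames / hyp_frames: per-frame dicts mapping track id -> 3D head point (in cm).
+    Detections farther than max_dist_cm from a ground truth point are never matched.
+    """
+    acc = mm.MOTAccumulator(auto_id=True)
+    for gt, hyp in zip(gt_frames, hyp_frames):
+        gt_ids, gt_pts = list(gt.keys()), np.array(list(gt.values()))
+        hyp_ids, hyp_pts = list(hyp.keys()), np.array(list(hyp.values()))
+        dists = mm.distances.norm2squared_matrix(gt_pts, hyp_pts, max_d2=max_dist_cm ** 2)
+        acc.update(gt_ids, hyp_ids, dists)
+    mh = mm.metrics.create()
+    return mh.compute(acc, metrics=["mota", "num_switches", "num_fragmentations"])
+```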
+
+Results for all sequences compared to data complexity are shown in Figure 7, and metrics for the test sequences are shown in Table 2. It is clear that the FRCNN-H outperforms the Naive method on the training and validation splits; it even outperforms the Oracle tracker in three out of four cases. This is likely due to the method being able to detect some of the fish heads during occlusions. However, the superior performance is only seen on the two splits where the fish are from the same cohort. On the test set the FRCNN-H fails to generalize, whereas the Naive method still manages to track the fish.
+
+
+Figure 7: MOTA compared to the dataset complexity, $\Psi$ , for all sequences in the dataset.
+
+It should be noted that the poor performance of the Naive method on Tst1 is suspected to be due to many short tracks caused by erratic movement, which the pipeline, with the used parameter settings, does not handle well.
+
+# 5.1. Comparison with Other Methods
+
+It has not been possible to make a fair comparison with the other 3D zebrafish tracking methods mentioned in Section 2. Previous systems have been analyzed in terms of ID swaps, fragments, precision, and recall for the generated 2D and 3D tracks. However, there is no exact description of how these metrics are calculated. The evaluation protocol is further limited by not including a statement on the maximum allowed distance between estimated and ground truth tracks, which leads to uncertainty about the accuracy of the metrics.
+
+Furthermore, the evaluated sequences are not described in terms of complexity, even though occlusion is repeatedly stated as a major hindrance in 3D zebrafish tracking. The only common complexity indication of the datasets is the number of fish, even though it is not representative. An example of this is the tracking demo video of Qian et al. [62] with ten fish and only four occlusion events during 15 seconds. Wang et al. [30] describe their dataset on the basis of an occlusion probability but do not explain how it is measured.
+
+There are currently no publicly available annotated data, and the previous systems are evaluated on seemingly simplified cases of the problem. Furthermore, the evaluation protocols used lack details to such a degree that it is not possible to determine under which conditions the metrics have been calculated. This, along with inaccessible codebases, severely limits the reproducibility of the results and makes it impossible to ensure identical evaluation procedures.
+
+| | Method | MOTA ↑ | MT ↑ | ML ↓ | ID Sw. ↓ | Frag. ↓ | MTBF$_m$ ↑ |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| Tst1 | Naive | 77.6% | 1 | 0 | 0 | 28 | 12.5 |
+| | FRCNN-H | 30.2% | 0 | 0 | 0 | 15 | 8.212 |
+| | Oracle | 100.0% | 1 | 0 | 0 | 0 | 900 |
+| Tst2 | Naive | 77.6% | 1 | 0 | 0 | 44 | 15.856 |
+| | FRCNN-H | 5.7% | 0 | 2 | 2 | 17 | 2.641 |
+| | Oracle | 81.6% | 2 | 0 | 0 | 25 | 27.396 |
+| Tst5 | Naive | 39.7% | 0 | 0 | 7 | 185 | 6.249 |
+| | FRCNN-H | 40.2% | 0 | 0 | 7 | 115 | 7.577 |
+| | Oracle | 67.8% | 1 | 0 | 0 | 50 | 28.112 |
+| Tst10 | Naive | 48.3% | 0 | 0 | 11 | 268 | 9.075 |
+| | FRCNN-H | 25.2% | 0 | 3 | 32 | 225 | 4.904 |
+| | Oracle | 66.6% | 1 | 10 | 0 | 119 | 23.105 |
+Table 2: Evaluation of 3D tracks on test split. The arrows indicate whether higher or lower values are better. MOTA: Multiple Object Tracking Accuracy, MT: Mostly tracked, ML: Mostly lost, ID Sw.: Number of identity swaps, Frag.: Number of fragments, $\mathrm{MTBF}_m$ : Monotonic MTBF.
+
+Therefore, it simply does not make sense to compare the proposed system to the other methods under the current circumstances.
+
+# 6. Conclusion
+
+Zebrafish is an increasingly popular animal model and behavioral analysis plays a major role in neuroscientific and biological research. However, it is tedious and subjective to manually describe the complex 3D motion of zebrafish. Therefore, 3D zebrafish tracking systems are critically needed to conduct accurate experiments on a large scale. The significant development experienced in other fields of MOT has not yet translated to 3D zebrafish tracking, the main reason being that no dataset with ground truth annotations has been made publicly available. Therefore, we present the first publicly available RGB 3D zebrafish tracking dataset, called 3D-ZeF.
+
+3D-ZeF consists of eight stereo sequences with highly social and similar-looking subjects demonstrating complex and erratic motion patterns in three dimensions that are not seen in common MOT challenges. A complexity measure based on the level of occlusions has been provided for each sequence to make them comparable to future related datasets. The proposed dataset is annotated with 86,400 bounding boxes and points; the latter are used for estimating ground truth 3D tracks based on the head position of the fish. Different cohorts of zebrafish are used in the training, validation, and test splits to avoid data leakage; a problem that has never been addressed within the field.
+
+The proposed Naive method scores a MOTA between $25\%$ and $80\%$ across the entire dataset, which correlates well with the complexity measure of the recordings. The open-source modular system provides a baseline and a stepping stone for further development within the field of 3D zebrafish tracking and understanding.
+
+# References
+
+[1] J. S. Eisen, “Zebrafish make a big splash,” Cell, vol. 87, pp. 969–977, Dec. 1996.
+[2] P. Haffter, M. Granato, M. Brand, M. Mullins, M. Hammerschmidt, D. Kane, J. Odenthal, F. van Eeden, Y. Jiang, C. Heisenberg, R. Kelsh, M. Furutani-Seiki, E. Vogelsang, D. Beuchle, U. Schach, C. Fabian, and C. Nusslein-Volhard, "The identification of genes with unique and essential functions in the development of the zebrafish, Danio rerio," Development, vol. 123, no. 1, pp. 1-36, 1996.
+[3] A. D. Collier, K. M. Khan, E. M. Caramillo, R. S. Mohn, and D. J. Echevarria, “Zebrafish and conditioned place preference: A translational model of drug reward,” Progress in Neuro-Psychopharmacology and Biological Psychiatry, vol. 55, pp. 16–25, Dec. 2014.
+[4] C.-Y. Lin, C.-Y. Chiang, and H.-J. Tsai, “Zebrafish and Medaka: new model organisms for modern biomedical research,” Journal of Biomedical Science, vol. 23, Jan. 2016.
+[5] M. C. Soares, S. C. Cardoso, T. d. S. Carvalho, and C. Maximino, “Using model fish to study the biological mechanisms of cooperative behaviour: A future for translational research concerning social anxiety disorders?” Progress in NeuroPsychopharmacology and Biological Psychiatry, vol. 82, pp. 205–215, Mar. 2018.
+[6] A. V. Kalueff, A. M. Stewart, and R. Gerlai, “Zebrafish as an emerging model for studying complex brain disorders,” Trends in Pharmacological Sciences, vol. 35, pp. 63–75, Feb. 2014.
+[7] D. A. Meshalkina, M. N. Kizlyk, E. V. Kysil, A. D. Collier, D. J. Echevarria, M. S. Abreu, L. J. G. Barcellos, C. Song, J. E. Warnick, E. J. Kyzar, and A. V. Kalueff, “Zebrafish models of autism spectrum disorder,” Experimental Neurology, vol. 299, pp. 207–216, Jan. 2018.
+[8] K. M. Khan, A. D. Collier, D. A. Meshalkina, E. V. Kysil, S. L. Khatsko, T. Kolesnikova, Y. Y. Morzherin, J. E. Warnick, A. V. Kalueff, and D. J. Echevarria, “Zebrafish models in neuropsychopharmacology and CNS drug discovery,” British Journal of Pharmacology, vol. 174, no. 13, pp. 1925–1944, 2017.
+[9] L. Li and J. E. Dowling, "A dominant form of inherited retinal degeneration caused by a non-photoreceptor cell-specific mutation," Proceedings of the National Academy of Sciences, vol. 94, pp. 11645–11650, Oct. 1997.
+[10] U. K. Muller, “Swimming of larval zebrafish: ontogeny of body waves and implications for locomotory development,” Journal of Experimental Biology, vol. 207, pp. 853–868, Feb. 2004.
+[11] M. B. McElligott and D. M. O'Malley, "Prey tracking by larval zebrafish: Axial kinematics and visual control," Brain, Behavior and Evolution, vol. 66, no. 3, pp. 177-196, 2005.
+[12] E. Fontaine, D. Lentink, S. Kranenberg, U. K. Müller, J. L. van Leeuwen, A. H. Barr, and J. W. Burdick, “Automated visual tracking for studying the ontogeny of zebrafish swimming,” Journal of Experimental Biology, vol. 211, no. 8, pp. 1305–1316, 2008.
+
+[13] B. Risse, D. Berh, N. Otto, C. Klämbt, and X. Jiang, "FIM-Track: An open source tracking and locomotion analysis software for small animals," PLOS Computational Biology, vol. 13, no. 5, pp. 1-15, 2017.
+[14] S. Ohayon, O. Avni, A. L. Taylor, P. Perona, and S. E. R. Egnor, "Automated multi-day tracking of marked mice for the analysis of social behaviour," Journal of Neuroscience Methods, vol. 219, no. 1, pp. 10-19, 2013.
+[15] Z.-M. Qian, X. E. Cheng, and Y. Q. Chen, "Automatically detect and track multiple fish swimming in shallow water with frequent occlusion," PLOS ONE, vol. 9, no. 9, pp. 1-12, 2014.
+[16] G. d. O. Feijó, V. A. Sangalli, I. N. L. d. Silva, and M. S. Pinho, "An algorithm to track laboratory zebrafish shoals," Computers in Biology and Medicine, vol. 96, pp. 79 - 90, 2018.
+[17] J. E. Franco-Restrepo, D. A. Forero, and R. A. Vargas, “A review of freely available, open-source software for the automated analysis of the behavior of adult zebrafish,” Zebrafish, vol. 16, June 2019.
+[18] F. Romero-Ferrero, M. G. Bergomi, R. C. Hinz, F. J. H. Heras, and G. G. d. Polavieja, "idtracker.ai: tracking all individuals in small or large collectives of unmarked animals," Nature Methods, vol. 16, pp. 179-182, Jan. 2019.
+[19] L. P. J. J. Noldus, A. J. Spink, and R. A. J. Tegelenbosch, "EthoVision: A versatile video tracking system for automation of behavioral experiments," Behavior Research Methods, Instruments, & Computers, vol. 33, pp. 398-414, Aug. 2001.
+[20] Loligo Systems, "LoliTrack v.4." https://www.loligosystems.com/lolitrack-v-4.
+[21] ViewPoint, “ZebraLab.” http://www.viewpoint.fr/en/p/software/zebralab.
+[22] TSE Systems, "VideoMot2 - versatile video tracking system." https://www.tse-systems.com/product-details/videoomot.
+[23] A. V. Kalueff, M. Gebhardt, A. M. Stewart, J. M. Cachat, M. Brimmer, J. S. Chawla, C. Craddock, E. J. Kyzar, A. Roth, S. Landsman, S. Gaikwad, K. Robinson, E. Baatrup, K. Tierney, A. Shamchuk, W. Norton, N. Miller, T. Nicolson, O. Braubach, C. P. Gilman, J. Pittman, D. B. Rosemberg, R. Gerlai, D. Echevarria, E. Lamb, S. C. F. Neuhauss, W. Weng, L. Bally-Cuif, H. Schneider, and t. Z. Neuros, "Towards a comprehensive catalog of zebrafish behavior 1.0 and beyond," Zebrafish, vol. 10, pp. 70-86, Mar. 2013.
+[24] J. M. Cachat, P. R. Canavello, S. I. Elkhayat, B. K. Bartels, P. C. Hart, M. F. Elegante, E. C. Beeson, A. L. Laffoon, W. A. Haymore, D. H. Tien, A. K. Tien, S. Mohnot, and A. V. Kalueff, "Video-aided analysis of zebrafish locomotion and anxiety-related behavioral responses," in Zebrafish Neurobehavioral Protocols (A. V. Kalueff and J. M. Cachat, eds.), pp. 1-14, Totowa, NJ: Humana Press, 2011.
+
+[25] S. Macri, D. Neri, T. Rubio, V. Mwaffo, S. Butail, and M. Porfiri, “Three-dimensional scoring of zebrafish behavior unveils biological phenomena hidden by two-dimensional analyses,” Scientific Reports, vol. 7, May 2017.
+[26] N. Miller and R. Gerlai, “Quantification of shoaling behaviour in zebrafish (Danio rerio),” Behavioural Brain Research, vol. 184, pp. 157–166, Dec. 2007.
+[27] H. AlZu'bi, W. Al-Nuaimy, J. Buckley, L. Sneddon, and Iain Young, "Real-time 3D fish tracking and behaviour analysis," in 2015 IEEE Jordan Conference on Applied Electrical Engineering and Computing Technologies (AEECT), pp. 1-5, Nov. 2015.
+[28] X. E. Cheng, S. S. Du, H. Y. Li, J. F. Hu, and M. L. Chen, "Obtaining three-dimensional trajectory of multiple fish in water tank via video tracking," Multimedia Tools and Applications, vol. 77, pp. 24499-24519, Feb. 2018.
+[29] Z. Qian, M. Shi, M. Wang, and T. Cun, "Skeleton-based 3D tracking of multiple fish from two orthogonal views," in Communications in Computer and Information Science, pp. 25-36, Springer Singapore, 2017.
+[30] S. H. Wang, X. Liu, J. Zhao, Y. Liu, and Y. Q. Chen, "3D tracking swimming fish school using a master view tracking first strategy," in 2016 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), IEEE, Dec. 2016.
+[31] Z.-M. Qian and Y. Q. Chen, “Feature point based 3D tracking of multiple fish from multi-view images,” PLOS ONE, vol. 12, no. 6, pp. 1–18, 2017.
+[32] G. Audira, B. Sampurna, S. Juniardi, S.-T. Liang, Y.-H. Lai, and C.-D. Hsiao, "A simple setup to perform 3D locomotion tracking in zebrafish by using a single camera," Inventions, vol. 3, p. 11, Feb. 2018.
+[33] G. Xiao, W.-K. Fan, J.-F. Mao, Z.-B. Cheng, D.-H. Zhong, and Y. Li, “Research of the fish tracking method with occlusion based on monocular stereo vision,” in 2016 International Conference on Information System and Artificial Intelligence (ISAI), IEEE, June 2016.
+[34] M. Menze, C. Heipke, and A. Geiger, "Joint 3D estimation of vehicles and scene flow," in ISPRS Workshop on Image Sequence Analysis (ISA), 2015.
+[35] P. Voigtlaender, M. Krause, A. Osep, J. Luiten, B. B. G. Sekar, A. Geiger, and B. Leibe, "MOTS: multi-object tracking and segmentation," in Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
+[36] H. Caesar, V. Bankiti, A. H. Lang, S. Vora, V. E. Liong, Q. Xu, A. Krishnan, Y. Pan, G. Baldan, and O. Beijbom, "nuScenes: a multimodal dataset for autonomous driving," arXiv preprint arXiv:1903.11027, 2019.
+[37] "Waymo open dataset: An autonomous driving dataset," 2019. https://www.waymo.com/open.
+[38] M.-F. Chang, J. Lambert, P. Sangkloy, J. Singh, S. Bak, A. Hartnett, D. Wang, P. Carr, S. Lucey, D. Ramanan, et al., "Argoverse: 3D tracking and forecasting with rich maps," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8748-8757, 2019.
+
+[39] S. Song and J. Xiao, "Tracking revisited using RGBD camera: Unified benchmark and baselines," in Proceedings of the IEEE international conference on computer vision, pp. 233-240, 2013.
+[40] A. Lukezic, U. Kart, J. Kapyla, A. Durmush, J.-K. Kamarainen, J. Matas, and M. Kristan, "Cdtb: A color and depth visual object tracking dataset and benchmark," in Proceedings of the IEEE International Conference on Computer Vision, pp. 10013-10022, 2019.
+[41] L. Leal-Taixe, A. Milan, I. Reid, S. Roth, and K. Schindler, "MOTChallenge 2015: Towards a benchmark for multi-target tracking," arXiv:1504.01942 [cs], Apr. 2015. arXiv: 1504.01942.
+[42] A. Milan, L. Leal-Taixe, I. Reid, S. Roth, and K. Schindler, "MOT16: A benchmark for multi-object tracking," arXiv:1603.00831 [cs], Mar. 2016. arXiv: 1603.00831.
+[43] P. Dendorfer, H. Rezatofighi, A. Milan, J. Shi, D. Cremers, I. Reid, S. Roth, K. Schindler, and L. Leal-Taixe, "CVPR19 tracking and detection challenge: How crowded can it get?", arXiv:1906.04567 [cs], June 2019. arXiv: 1906.04567.
+[44] S. Lyu, M. Chang, D. Du, L. Wen, H. Qi, Y. Li, Y. Wei, L. Ke, T. Hu, M. Del Coco, P. Carcagni, D. Anisimov, E. Bochinski, F. Galasso, F. Bunyak, G. Han, H. Ye, H. Wang, K. Palaniappan, K. Ozcan, L. Wang, L. Wang, M. Lauer, N. Watcharapinchai, N. Song, N. M. Al-Shakarji, S. Wang, S. Amin, S. Rujikietgumjorn, T. Khanova, T. Sikora, T. Kutschbach, V. Eiselein, W. Tian, X. Xue, X. Yu, Y. Lu, Y. Zheng, Y. Huang, and Y. Zhang, "UA-DETRAC 2017: Report of AVSS2017 IWT4S challenge on advanced traffic monitoring," in 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pp. 1-7, Aug 2017.
+[45] L. Wen, D. Du, Z. Cai, Z. Lei, M.-C. Chang, H. Qi, J. Lim, M.-H. Yang, and S. Lyu, "Ua-detrac: A new benchmark and protocol for multi-object detection and tracking," Computer Vision and Image Understanding, vol. 193, p. 102907, 2020.
+[46] S. Sun, N. Akhtar, H. Song, A. S. Mian, and M. Shah, "Deep affinity network for multiple object tracking," IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1-1, 2019.
+[47] E. Bochinski, T. Senst, and T. Sikora, “Extending IOU based multi-object tracking by visual information,” in 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), pp. 1–6, Nov 2018.
+[48] P. Bergmann, T. Meinhardt, and L. Leal-Taixe, "Tracking without bells and whistles," in The IEEE International Conference on Computer Vision (ICCV), October 2019.
+[49] A. Perez-Escudero, J. Vicente-Page, R. C. Hinz, S. Arganda, and G. G. de Polavieja, "idTracker: tracking individuals in a group by automatic identification of unmarked animals," Nature Methods, vol. 11, pp. 743 - 748, 2014.
+[50] V. H. Sridhar, D. G. Roche, and S. Gingins, "Tracktor: Image-based automated tracking of animal movement and behaviour," Methods in Ecology and Evolution, vol. 10, pp. 815-820, Mar. 2019.
+
+[51] H. J. Monck, A. Jörg, T. v. Falkenhausen, J. Tanke, B. Wild, D. Dormagen, J. Piotrowski, C. Winklmayr, D. Bierbach, and T. Landgraf, "BioTracker: an open-source computer vision framework for visual animal tracking," CoRR, vol. abs/1803.07985, 2018.
+[52] A. Rodriguez, H. Zhang, J. Klaminder, T. Brodin, P. L. Andersson, and M. Andersson, "ToxTrac: a fast and robust software for tracking organisms," Methods in Ecology and Evolution, vol. 9, pp. 460-464, Sept. 2017.
+[53] A. M. T. Harmer and D. B. Thomas, "pathtracker: An r package for video tracking and analysing animal movement," Methods in Ecology and Evolution, May 2019.
+[54] X. Liu, P. R. Zhu, Y. Liu, and J. W. Zhao, "Tracking full-body motion of multiple fish with midline subspace constrained multicue optimization," Scientific Programming, vol. 2019, pp. 1-7, June 2019.
+[55] L. Zhu and W. Weng, "Catadioptric stereo-vision system for the real-time monitoring of 3D behavior in aquatic animals," Physiology & Behavior, vol. 91, no. 1, pp. 106 - 119, 2007.
+[56] J. Cachat, A. Stewart, E. Utterback, P. Hart, S. Gaikwad, K. Wong, E. Kyzar, N. Wu, and A. V. Kalueff, "Three-dimensional neurophenotyping of adult zebrafish behavior," PLOS ONE, vol. 6, no. 3, pp. 1-14, 2011.
+[57] X. E. Cheng, S. H. Wang, and Y. Q. Chen, "3D tracking targets via kinematic model weighted particle filter," in 2016 IEEE International Conference on Multimedia and Expo (ICME), pp. 1-6, July 2016.
+[58] S. V. Viscido, J. K. Parrish, and D. Grünbaum, “Individual behavior and emergent properties of fish schools: a comparison of observation and theory,” Marine Ecology Progress Series, vol. 273, pp. 239–249, 2004.
+[59] A. D. Straw, K. Branson, T. R. Neumann, and M. H. Dickinson, "Multi-camera real-time three-dimensional tracking of multiple flying animals," Journal of The Royal Society Interface, vol. 8, pp. 395-409, July 2010.
+[60] K. Müller, J. Schlemper, L. Kuhnert, and K. D. Kuhnert, "Calibration and 3D ground truth data generation with orthogonal camera-setup and refraction compensation for aquaria in real-time," in 2014 International Conference on Computer Vision Theory and Applications (VISAPP), vol. 3, pp. 626-634, Jan. 2014.
+[61] A. M. Stewart, F. Grieco, R. A. J. Tegelenbosch, E. J. Kyzar, M. Nguyen, A. Kaluyeva, C. Song, L. P. J. J. Noldus, and A. V. Kalueff, “A novel 3D method of locomotor analysis in adult zebrafish: Implications for automated detection of CNS drug-evoked phenotypes,” Journal of Neuroscience Methods, vol. 255, pp. 66–74, Nov. 2015.
+[62] Z.-M. Qian, S. H. Wang, X. E. Cheng, and Y. Q. Chen, "An effective and robust method for tracking multiple fish in video image based on fish head detection," BMC Bioinformatics, vol. 17, p. 251, June 2016.
+[63] X. Liu, Y. Yue, M. Shi, and Z.-M. Qian, "3-D video tracking of multiple fish in a water tank," IEEE Access, vol. 7, pp. 145049-145059, 2019.
+
+[64] S. H. Wang, X. E. Cheng, and Y. Q. Chen, "Tracking undulatory body motion of multiple fish based on midline dynamics modeling," in 2016 IEEE International Conference on Multimedia and Expo (ICME), pp. 1-6, July 2016.
+[65] S. H. Wang, J. Zhao, X. Liu, Z. Qian, Y. Liu, and Y. Q. Chen, "3D tracking swimming fish school with learned kinematic model using LSTM network," in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1068-1072, March 2017.
+[66] S. H. Wang, J. W. Zhao, and Y. Q. Chen, "Robust tracking of fish schools using CNN for head identification," Multimedia Tools and Applications, pp. 1-19, 2016.
+[67] M. Pedersen, S. Hein Bengtson, R. Gade, N. Madsen, and T. B. Moeslund, "Camera calibration for underwater 3D reconstruction based on ray tracing using Snell's law," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2018.
+[68] J. M. S. Prewitt and M. L. Mendelsohn, “The analysis of cell images,” Annals of the New York Academy of Sciences, vol. 128, no. 3, pp. 1035–1053, 1966.
+[69] T. Zhang and C. Y. Suen, “A fast parallel algorithm for thinning digital patterns,” Communications of the ACM, vol. 27, no. 3, pp. 236–239, 1984.
+[70] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," in Advances in Neural Information Processing Systems, pp. 91-99, 2015.
+[71] H. W. Kuhn, “The Hungarian method for the assignment problem,” Naval Research Logistics Quarterly, vol. 2, no. 1-2, pp. 83–97, 1955.
+[72] P. Carr and R. T. Collins, "Assessing tracking performance in complex scenarios using mean time between failures," in 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1-10, March 2016.
+[73] K. Bernardin and R. Stiefelhagen, “Evaluating multiple object tracking performance: The CLEAR MOT metrics,” EURASIP Journal on Image and Video Processing, vol. 2008, p. 246309, May 2008.
+[74] Bo Wu and R. Nevatia, "Tracking of multiple, partially occluded humans based on static body part detection," in 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), vol. 1, pp. 951-958, June 2006.
+[75] E. Ristani, F. Solera, R. Zou, R. Cucchiara, and C. Tomasi, "Performance measures and a data set for multi-target, multicamera tracking," in Computer Vision - ECCV 2016 Workshops (G. Hua and H. Jégou, eds.), (Cham), pp. 17-35, Springer International Publishing, 2016.
\ No newline at end of file
diff --git a/3dzefa3dzebrafishtrackingbenchmarkdataset/images.zip b/3dzefa3dzebrafishtrackingbenchmarkdataset/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..1804140f6c7fb6336bb51b8a7ecdc7671c7b02b8
--- /dev/null
+++ b/3dzefa3dzebrafishtrackingbenchmarkdataset/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6ebac298db3b242e042b364f64f2e647163b60236c9140cfd56b9d9dee264885
+size 303876
diff --git a/3dzefa3dzebrafishtrackingbenchmarkdataset/layout.json b/3dzefa3dzebrafishtrackingbenchmarkdataset/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..032b26e4ed1a5605512cefc0d730d43d8bf0b28a
--- /dev/null
+++ b/3dzefa3dzebrafishtrackingbenchmarkdataset/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f5f8415a5f68050172fcefe9eab6dcbba62d07e8018a4e8a567dc0989759d922
+size 438613
diff --git a/3fabrecfastfewshotfacealignmentbyreconstruction/2781462b-038f-4764-9570-84123d370a7a_content_list.json b/3fabrecfastfewshotfacealignmentbyreconstruction/2781462b-038f-4764-9570-84123d370a7a_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..f5733184d4a1ae2d9ead3a957a45274ce0d9adfb
--- /dev/null
+++ b/3fabrecfastfewshotfacealignmentbyreconstruction/2781462b-038f-4764-9570-84123d370a7a_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a4d7834ae53716d8c06b8035c2522ef38f833dee997cbefca0733ffb6fbd1a95
+size 88491
diff --git a/3fabrecfastfewshotfacealignmentbyreconstruction/2781462b-038f-4764-9570-84123d370a7a_model.json b/3fabrecfastfewshotfacealignmentbyreconstruction/2781462b-038f-4764-9570-84123d370a7a_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..819aaaf1827a0cfba10be4f402331669f794d653
--- /dev/null
+++ b/3fabrecfastfewshotfacealignmentbyreconstruction/2781462b-038f-4764-9570-84123d370a7a_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bd4bb630b69d6919f629f8835a057d3ebce036ca515bb2ac03ede6bb69fc01f8
+size 108148
diff --git a/3fabrecfastfewshotfacealignmentbyreconstruction/2781462b-038f-4764-9570-84123d370a7a_origin.pdf b/3fabrecfastfewshotfacealignmentbyreconstruction/2781462b-038f-4764-9570-84123d370a7a_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c5118a89644f2a0e33d03bc663c45c362d03d563
--- /dev/null
+++ b/3fabrecfastfewshotfacealignmentbyreconstruction/2781462b-038f-4764-9570-84123d370a7a_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3603fbfb384fab036a881f273a727573adcfef292dab17eb0b9664320a5c30ad
+size 799220
diff --git a/3fabrecfastfewshotfacealignmentbyreconstruction/full.md b/3fabrecfastfewshotfacealignmentbyreconstruction/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..52f158c40eec831ccc473e77f693a11b935a6930
--- /dev/null
+++ b/3fabrecfastfewshotfacealignmentbyreconstruction/full.md
@@ -0,0 +1,371 @@
+# 3FabRec: Fast Few-shot Face alignment by Reconstruction
+
+Björn Browatzki and Christian Wallraven*
+
+Dept. of Artificial Intelligence, Korea University, Seoul
+
+browatbn@korea.ac.kr, wallraven@korea.ac.kr
+
+[Figure 1 panels: unsupervised training of an autoencoder on VGGFace2 and AffectNet; few-shot supervised training (from the entire trainset down to the first 10 images of 300-W); fast testing, possible at $>300$ FPS; example results on the first 6 test images from 300-W common, shown as original faces with predicted landmarks, reconstructed faces with predicted landmarks, and reconstructed faces with confidence heatmaps.]
+
+Figure 1: The framework of 3FabRec consisting of: (leftmost box) a first, unsupervised training stage that trains a low-dimensional latent space via an adversarial (generative) autoencoder on a large dataset of unlabeled faces, (middle box) subsequent supervised training with few annotated faces. The rightmost box shows results from testing the framework trained on only the 10 images of the middle box with original faces (top row), reconstructed faces via the autoencoder (middle row), and confidence heatmaps (bottom row).
+
+# Abstract
+
+Current supervised methods for facial landmark detection require a large amount of training data and may suffer from overfitting to specific datasets due to the massive number of parameters. We introduce a semi-supervised method in which the crucial idea is to first generate implicit face knowledge from the large amounts of unlabeled images of faces available today. In a first, completely unsupervised stage, we train an adversarial autoencoder to reconstruct faces via a low-dimensional face embedding. In a second, supervised stage, we interleave the decoder with transfer layers to retask the generation of color images to the prediction of landmark heatmaps. Our framework (3FabRec) achieves state-of-the-art performance on several common benchmarks and, most importantly, is able to maintain impressive accuracy on extremely small training sets down to as few as 10 images. As the interleaved layers only add a low amount of parameters to the decoder, inference runs at several hundred FPS on a GPU.
+
+# 1. Introduction
+
+Accurate and robust localization of facial landmarks is a critical step in many existing face processing applications, including tracking, expression analysis, and face identification. Unique localization of such landmarks is severely affected by occlusions, partial face visibility, large pose variations, uneven illumination, or large, non-rigid deformations during more extreme facial expressions [41, 44]. These challenges have to be overcome in order to achieve a low landmark localization error, implying high robustness to appearance changes in faces while guaranteeing high localization accuracy for each landmark.
+
+The recent advances in deep learning techniques [3] coupled with the availability of large, annotated databases have allowed steady progress with localization accuracy on a typical benchmark increasing by $100\%$ (from [51] to [49] - see below for more related work). Most approaches use a combination of highly-tuned, supervised learning schemes in order to achieve this performance and almost always
+
+are specifically optimized on the particular datasets that are tested, increasing the potential of overfitting to that dataset [7]. Similarly, it has been shown that annotations in datasets can be imprecise and inconsistent (e.g., [14]).
+
+Given that in addition to the existing annotated facial landmark datasets, there is an even larger number of datasets available for other tasks (face detection, face identification, facial expression analysis, etc.), it should be possible to leverage the implicit knowledge about face shape contained in this pool to both ensure better generalizability across datasets and easier and faster, few-shot training of landmark localization. Here, we present such a framework that is based on a two-stage architecture (3FabRec, see Figs.1,2): the key to the approach lies in the first, unsupervised stage, in which an (generative) adversarial autoencoder [30] is trained on a large dataset of faces that yields a low-dimensional embedding capturing "face knowledge" [46] from which it is able to reconstruct face images across a wide variety of appearances. With this embedding, the second, supervised stage then trains the landmark localization task on annotated datasets, in which the generator is retasked to predict the locations of a set of landmarks by generating probabilistic heatmaps [5]. This two-stage approach is a special case of semi-supervised learning [24,59] and has been successful in other domains, including general network training [20], text classification [22] and translation [13], and visual image classification [54].
+
+In the current study, we show that our method is able to achieve state-of-the-art results running at $>300$ FPS on the standard benchmark datasets. Most importantly, it yields impressive localization performance already with a few percent of the training data - beating the leading scores in all cases and setting new standards for landmark localization from as few as 10 images. The latter result demonstrates that landmark knowledge has, indeed, been implicitly captured by the unsupervised pre-training. Additionally, the reconstructed autoencoder images are able to "explain away" extraneous factors (such as occlusions or make-up), yielding a best-fitting face shape for accurate localization and adding to the explainability of the framework.
+
+Source code is available at https://github.com/browatbn2/3FabRec.
+
+# 2. Related Work
+
+Before the advent of deep learning methods, explicitly parametrized landmark models such as active shape [9], active appearance [8] or cascade regression models [17, 51] provided the state-of-the-art in facial landmark detection. Current models using deep convolutional neural networks, however, quickly became the best-performing approaches, starting with deep alignment networks [42], fully convolutional networks [27], coordinate regression models [28, 45], or multi-task learners [36], with the deep networks being able to capture the pixel-to-landmark correlations across face appearance variations.
+
+The recent related work in the context of our approach can be structured into supervised and semi-supervised approaches (for a recent, interesting unsupervised method - at lower performance levels - see [43]).
+
+# 2.1. Supervised methods
+
+Several recent, well-performing supervised methods are based on heatmap regression, in which a deep network will infer a probabilistic heatmap for each of the facial landmarks with its corresponding maximum encoding the most likely location of that landmark [5, 12, 27] - an approach we also follow here. In order to provide additional geometric constraints, extensions use an active-appearance-based model-fitting step based on PCA [31], explicit encoding of geometric information from the face boundary [49], or additional weighting from occlusion probabilities [56]. The currently best-performing method on many benchmarks uses a heatmap-based framework together with optimization of the loss function to foreground versus background pixels [47]. Such supervised methods will typically require large amounts of labelled training data in order to generalize across the variability in facial appearance (see [10] for an architecture using high-resolution deep cascades that tries to address this issue).
+
+# 2.2. Semi-supervised methods
+
+In addition to changes to the network architecture, the issue of lack of training data and inconsistent labeling quality is addressed in semi-supervised models [24, 59] that augment the training process to make use of partially- or weakly-annotated data. Data augmentation based on landmark perturbation [29] or from generating additional views from a 3D face model [60] can be applied to generate more robust pseudo landmark labels. [14] uses constraints from temporal consistency of landmarks based on optic flow to enhance the training of the landmark detector - see also [53]. In [36, 55], multi-task frameworks are proposed in which attribute-networks tasked with predicting other facial attributes including pose and emotion are trained together with the landmark network, allowing for gradient transfer from one network to the other. Similar to this, [35] show improvements using data augmentation with style-translated examples during training. In [15], a teacher-supervises-students $(\mathrm{TS}^3)$ framework is proposed in which a teacher is trained to filter student-generated landmark pseudolabels into "qualified" and "unqualified" samples, such that the student detectors can retrain themselves with better-quality data. Similarly, in [39], a GAN framework produces "fake" heatmaps that the main branch of the network needs to discriminate, hence improving performance.
+
+
+Figure 2: Overview of the 3FabRec pipeline, including the architecture of the autoencoder, as well as training paths for unsupervised, supervised, and the fine-tuning stages (see text for more details).
+
+# 3. Methods
+
+# 3.1. Our approach
+
+Most of the semi-supervised approaches discussed above use data augmentation on the same dataset as done for testing. Our approach (see Figs. 1,2) starts from an unsupervised method in which we leverage the implicit knowledge about face shape contained in large datasets of faces (such as used for face identification [6]). This knowledge is captured in a low-dimensional latent space of an autoencoder framework. Importantly, the autoencoder also has generative capabilities, i.e., it is tasked during training to reconstruct the face from the corresponding latent vector. This step is done because the following, supervised stage implements a hybrid reconstruction pipeline that uses the generator together with interleaved transfer layers to both reconstruct the face as well as probabilistic landmark heatmaps. Hence, the changes in the latent vector space will be mapped to the position of the landmarks trained on labeled datasets. Given that the first, unsupervised stage has already captured knowledge about facial appearance and face shape, this information will be quickly made explicit during the second, supervised stage allowing for generalization across multiple datasets and enabling low-shot and few-shot training.
+
+# 3.2. Unsupervised face representation
+
+The unsupervised training step follows the framework of [4] in which an adversarial autoencoder is trained through four loss functions balancing faithful image reconstruction with the generalizability and smoothness of the embedding
+
+space needed for the generation of novel faces. A reconstruction loss $\mathcal{L}_{rec}$ penalizes reconstruction errors through a pixel-based $L1$ error. An encoding feature loss $\mathcal{L}_{enc}$ [19] ensures the creation of a smooth and continuous latent space. An adversarial feature loss $\mathcal{L}_{adv}$ pushes the encoder $E$ and generator $G$ to produce reconstructions with high fidelity since training of generative models using only image reconstruction losses typically leads to blurred images.
+
+As the predicted landmark locations in our method follow directly from the locations of reconstructed facial elements, our main priority in training the autoencoder lies in the accurate reconstruction of such features. Thus, we trade some of the generative power against reconstruction accuracy by replacing the generative image loss, $\mathcal{L}_{gen}$ , used in [4] with a new structural image loss $\mathcal{L}_{cs}$ .
+
+Structural image loss: To penalize reconstructions that do not align facial structures well with input images, we add a structural image loss based on the SSIM [48] image similarity metric, which measures contrast $c(a,b)$ and correlation $s(a,b)$ between two image windows $a$ and $b$ :
+
+$$
+c(a, b) = \frac{2 \sigma_{a} \sigma_{b} + c}{\sigma_{a}^{2} + \sigma_{b}^{2} + c}, \quad s(a, b) = \frac{\sigma_{ab} + c / 2}{\sigma_{a} \sigma_{b} + c / 2} \tag{1}
+$$
+
+The values $\sigma_{a}$ and $\sigma_{b}$ denote intensity variances of windows $a, b$ and $\sigma_{ab}$ denotes their covariance. The constant $c$ adds stability against small denominators. It is set to $c = 255^{0.01}$ for images with 8-bit channels. The calculation is run for each $k \times k$ window across the images:
+
+$$
+cs(x, y) = \frac{1}{|w|} \sum_{w} c\left(x_{w}, y_{w}\right) s\left(x_{w}, y_{w}\right) \tag{2}
+$$
+
+We obtain the structural image loss by evaluating $cs(x,y)$ with the original image and its reconstructions:
+
+$$
+\mathcal{L}_{cs}(E, G) = \mathbb{E}_{x \sim p(x)} \left[ cs\big(x, G(E(x))\big) \right] \tag{3}
+$$
+
+This loss improves the alignment of high-frequency image elements and imposes a penalty for high-frequency noise introduced by the adversarial image loss. Hence, $\mathcal{L}_{cs}$ also serves as a regularizer, stabilizing adversarial training.
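+
+To make the window-based computation of Eqs. (1)-(3) concrete, here is a minimal NumPy sketch of the contrast-structure score; the window size `k`, the non-overlapping window layout, and the stabilizing constant are illustrative assumptions rather than the paper's exact settings.
+
+```python
+import numpy as np
+
+def contrast_structure(x, y, k=8, c=1e-4):
+    """Mean contrast * structure score (Eqs. 1-2) over non-overlapping
+    k x k windows of two grayscale images in [0, 1]; the structural
+    loss of Eq. 3 is the expectation of this score over the data."""
+    scores = []
+    for i in range(0, x.shape[0] - k + 1, k):
+        for j in range(0, x.shape[1] - k + 1, k):
+            a = x[i:i + k, j:j + k].ravel()
+            b = y[i:i + k, j:j + k].ravel()
+            sa, sb = a.std(), b.std()
+            cov = ((a - a.mean()) * (b - b.mean())).mean()
+            contrast = (2 * sa * sb + c) / (sa ** 2 + sb ** 2 + c)
+            structure = (cov + c / 2) / (sa * sb + c / 2)
+            scores.append(contrast * structure)
+    return float(np.mean(scores))
+```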
+
+Full autoencoder objective: The final training objective is a weighted combination of all loss terms:
+
+$$
+\min_{E, G} \max_{D_{z}, D_{x}} \mathcal{L}_{AE}(E, G, D_{z}, D_{x}) = \lambda_{rec} \mathcal{L}_{rec}(E, G) + \lambda_{cs} \mathcal{L}_{cs}(E, G) + \lambda_{enc} \mathcal{L}_{enc}(E, D_{z}) + \lambda_{adv} \mathcal{L}_{adv}(E, G, D_{x}) \tag{4}
+$$
+
+We set $\lambda_{enc}$ and $\lambda_{adv}$ to 1.0. $\lambda_{rec}$ and $\lambda_{cs}$ are selected so the corresponding loss terms yield similarly large values to each other, while at the same time ensuring a roughly 10 times higher weight in comparison to $\lambda_{enc}$ and $\lambda_{adv}$ (given the range of loss terms, we set $\lambda_{rec} \approx 1.0$ , $\lambda_{cs} \approx 60.0$ ).
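+
+As a small illustration of how the weighted objective of Eq. (4) is assembled, the snippet below combines the four loss terms with the weights reported above; the individual loss values are placeholders standing in for the reconstruction, structural, encoding-feature, and adversarial losses.
+
+```python
+# Loss weights as reported in the text (lambda_rec and lambda_cs chosen so the
+# reconstruction-related terms weigh roughly 10x more than the rest).
+LAMBDA = {"rec": 1.0, "cs": 60.0, "enc": 1.0, "adv": 1.0}
+
+def autoencoder_objective(losses):
+    """losses: dict mapping 'rec', 'cs', 'enc', 'adv' to scalar loss values
+    (or autograd tensors); returns the weighted sum of Eq. (4)."""
+    return sum(LAMBDA[name] * value for name, value in losses.items())
+
+# example with dummy scalar values
+total = autoencoder_objective({"rec": 0.05, "cs": 0.001, "enc": 0.6, "adv": 0.7})
+```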
+
+# 3.3. Supervised landmark discovery
+
+For landmark detection, we are not primarily interested in producing a RGB image but rather an $L$ -channel image containing landmark probability maps. This can be seen as a form of style transfer in which the appearance of the generated face is converted to a representation that allows us to read off landmark positions. Hence, information about face shape that was implicitly present in the generation of color images before is now made explicit. Our goal is to create this transfer without losing the face knowledge distilled from the very large set of (unlabeled) images as the annotated datasets available for landmark prediction are only a fraction of that size and suffer from imprecise and inconsistent human annotations [14]. For this, we introduce additional, interleaved transfer layers into the generator $G$ .
+
+# 3.3.1 Interleaved transfer layers
+
+Training of landmark generation starts by freezing all parameters of the autoencoder. We then interleave the inverted ResNet layers of the generator with $3 \times 3$ convolutional layers. Each of these Interleaved Transfer Layers (ITL) produces the same number of output channels as the original ResNet layer. Activations produced by a ResNet layer are transformed by these layers and fed into the next higher block. The last convolutional layer mapping to RGB images is replaced by a convolutional layer mapping to $L$-channel heatmap images ($L =$ number of landmarks to be predicted). This approach adds just enough flexibility to the generator to produce new heatmap outputs by re-using the pre-trained autoencoder weights.
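+
+The sketch below illustrates the interleaving idea in PyTorch: trainable $3 \times 3$ convolutions wrapped around frozen generator blocks, with the final RGB head swapped for an $L$-channel heatmap head. The module names and channel counts are hypothetical placeholders, not the authors' exact architecture.
+
+```python
+import torch.nn as nn
+
+class InterleavedGenerator(nn.Module):
+    """Frozen generator blocks interleaved with trainable 3x3 ITL convs."""
+
+    def __init__(self, frozen_blocks, itl_channels, head_channels, num_landmarks):
+        super().__init__()
+        self.blocks = nn.ModuleList(frozen_blocks)
+        for p in self.blocks.parameters():
+            p.requires_grad = False                     # keep pre-trained weights fixed
+        # one ITL per frozen block; itl_channels[i] = channels entering block i
+        self.itls = nn.ModuleList(
+            nn.Conv2d(c, c, kernel_size=3, padding=1) for c in itl_channels)
+        # replaces the original RGB output layer with an L-channel heatmap head
+        self.heatmap_head = nn.Conv2d(head_channels, num_landmarks, kernel_size=1)
+
+    def forward(self, a1):
+        a = a1                                          # activations of the first frozen layer
+        for itl, block in zip(self.itls, self.blocks):
+            a = block(itl(a))                           # trainable ITL, then next frozen block
+        return self.heatmap_head(a)                     # L-channel landmark heatmaps
+```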
+
+Given an annotated face image $x$, the ground truth heatmap $H_{i}$ for each landmark $l_{i} \in \mathbb{R}^{2}$ consists of a 2D Normal distribution centered at $l_{i}$ with a standard deviation of $\sigma$. During landmark training and inference, the activations $a_{1}$ produced by the first inverted ResNet layer for an encoded image $z = E(x)$ are passed to the first ITL layer. This layer transforms the activations and feeds them into the next, frozen inverted ResNet layer, such that the full cascade of ResNet layers and ITLs can reconstruct a landmark heatmap $\tilde{H}$. The heatmap prediction loss $\mathcal{L}_H$ is defined as the $L2$ distance between the predicted $(\tilde{H})$ and ground truth heatmap $(H)$:
+
+$$
+\mathcal{L}_{H}(ITL) = \mathbb{E}_{x \sim p(x)} \left[ \| H - ITL(a_{1}) \|_{2} \right] \tag{5}
+$$
+
+The position of landmark $i$ is $\tilde{l}_i = \operatorname*{argmax}_{u,v}\tilde{H}_i(u,v)$.
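+
+A minimal NumPy sketch of the two ends of this pipeline: building Gaussian ground-truth heatmaps and decoding predicted landmarks as per-channel argmax positions. It assumes landmark coordinates and heatmaps share the same pixel space; the helper names are illustrative.
+
+```python
+import numpy as np
+
+def gaussian_heatmaps(landmarks, size, sigma=7.0):
+    """One 2D Gaussian heatmap per landmark l_i (x, y in heatmap pixels)."""
+    H = np.zeros((len(landmarks), size, size), dtype=np.float32)
+    ys, xs = np.mgrid[0:size, 0:size]
+    for i, (lx, ly) in enumerate(landmarks):
+        H[i] = np.exp(-((xs - lx) ** 2 + (ys - ly) ** 2) / (2.0 * sigma ** 2))
+    return H
+
+def decode_landmarks(H):
+    """Predicted landmark position = argmax location of each heatmap channel."""
+    flat = H.reshape(H.shape[0], -1).argmax(axis=1)
+    ys, xs = np.unravel_index(flat, H.shape[1:])
+    return np.stack([xs, ys], axis=1)
+```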
+
+# 3.3.2 Encoder finetuning
+
+Once training of the ITL layers reaches convergence we can perform an optional finetuning step. For this, the encoder $E$ is unfrozen so that ITL layers and encoder are optimized in tandem (see Fig.2).
+
+$$
+\mathcal {L} _ {H} (I T L) \rightarrow \mathcal {L} _ {H} (E, I T L) \tag {6}
+$$
+
+Since the updates are only based on landmark errors, this will push $E$ to encode input faces such that facial features are placed more precisely in reconstructed faces. At the same time, other attributes like gender, skin color, or illumination may be removed as these are not relevant for the landmark prediction task. Overfitting is avoided since the generator remains unchanged, which acts as a regularizer and limits the flexibility of the encoder.
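+
+Viewed from the optimizer's side, the finetuning step simply adds the encoder's parameters as a second parameter group. The sketch below uses hypothetical placeholder modules together with the learning rates and $\beta_1$ value reported later in Sec. 4.2.2.
+
+```python
+import torch
+
+encoder = torch.nn.Linear(8, 8)    # placeholder standing in for E
+itl_stack = torch.nn.Linear(8, 8)  # placeholder standing in for the ITL layers
+
+for p in encoder.parameters():
+    p.requires_grad = True          # unfreeze the encoder for joint optimization
+
+optimizer = torch.optim.Adam(
+    [{"params": itl_stack.parameters(), "lr": 1e-4},   # lowered ITL learning rate
+     {"params": encoder.parameters(), "lr": 2e-5}],    # encoder keeps its training rate
+    betas=(0.9, 0.999))                                # beta_1 reset to the default 0.9
+```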
+
+# 4. Experiments
+
+# 4.1. Datasets
+
+VGGFace2 & AffectNet The dataset used for unsupervised training of the generative autoencoder combines two datasets: the VGGFace2 dataset [6], which contains a total of 3.3 million faces collected with large variability in pose, age, illumination, and ethnicity in mind. From the full dataset, we removed faces with a height of less than 100 pixels resulting in 1.8 million faces (from 8631 unique identities). In addition, we add the AffectNet dataset [34] that was designed for capturing a wide variety of facial expressions (thus providing additional variability in face shape), which contains 228k images, yielding a total of 2.1M images for autoencoder training.
+
+(For experiments on parameter tuning, cross-database results, and further ablation studies, see the supplementary materials.)
+
+300-W This dataset was assembled by [40] from several sources, including LFPW [2], AFW [26], HELEN [61], XM2VTS [32], and own data, and was annotated semi-automatically with 68 facial landmarks. Using the established splits reported in [38], a total of 3,148 training images and 689 testing images were used in our experiments. The latter is further split into 554 images that constitute the common subset and 135 images that constitute the challenging subset. Additionally, 300-W contains 300 indoor and 300 outdoor images that define the private testset of the original 300-W challenge.
+
+AFLW This dataset [25] contains 24,386 in-the-wild faces with an especially wide range of face poses (yaw angles from $-120^{\circ}$ to $120^{\circ}$, roll and pitch angles from $-90^{\circ}$ to $90^{\circ}$). Following common convention, we used splits of 20,000 images for training and 4,386 for testing and trained with only 19 of the 21 annotated landmarks [28].
+
+WFLW The newest dataset in our evaluation protocol is from [49] containing a total of 10,000 faces with a 7,500/2,500 train/test split. Images were sourced from the WIDER FACE dataset [52] and were manually annotated with a much larger number of 98 landmarks. The dataset contains different (partially overlapping) test subsets for evaluation where each subset varies in pose, expression, illumination, make-up, occlusion, or blur.
+
+# 4.2. Experimental settings
+
+# 4.2.1 Unsupervised autoencoder training
+
+Network architecture Our implementation is based on [4] which combines a standard ResNet-18 as encoder with an inverted ResNet-18 (first convolution layers in each block replaced by $4 \times 4$ deconvolution layers) as decoder. Both encoder and decoder contain $\approx 10\mathrm{M}$ parameters each. The encoded feature length is 99 dimensions.
+
+Training procedure We train the autoencoder for 50 epochs with an input/output size of $128 \times 128$ and a batch size of 100 images. Upon convergence, we add an additional ResNet layer to both the encoder and decoder and train for another 50 epochs at an image size of $256 \times 256$ (batch size of 50) to increase reconstruction fidelity. We use the Adam optimizer [23] $(\beta_{1} = 0.0, \beta_{2} = 0.999)$ with a constant learning rate of $2 \times 10^{-5}$, which yielded robust adversarial learning. We apply data augmentations of random horizontal flipping $(p = 0.5)$, translation $(\pm 4\%)$, resizing $(94\%$ to $103\%)$, and rotation $(\pm 45^{\circ})$.
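+
+One way the reported augmentations could be expressed with torchvision; the paper does not specify its implementation, so this is only an approximation of the stated ranges.
+
+```python
+from torchvision import transforms
+
+augment = transforms.Compose([
+    transforms.RandomHorizontalFlip(p=0.5),
+    transforms.RandomAffine(degrees=45,              # rotation of +/- 45 degrees
+                            translate=(0.04, 0.04),  # translation of +/- 4%
+                            scale=(0.94, 1.03)),     # resizing between 94% and 103%
+    transforms.ToTensor(),
+])
+```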
+
+# 4.2.2 Supervised landmark training
+
+Images are cropped using supplied bounding boxes and resized to $256 \times 256$. For creating ground truth heatmaps, we set $\sigma = 7$. In all experiments we train four ITL layers and generate landmark heatmaps of size $128 \times 128$ by
+
+
+Figure 3: Randomly-generated faces with overlaid generated landmark probability maps.
+
+skipping the last generator layer (as detailed in 4.6, higher generator layers contain mostly decorrelated local appearance information). To train from the landmark dataset images, we apply data augmentations of random horizontal flipping $(p = 0.5)$, translation $(\pm 4\%)$, resizing $(\pm 5\%)$, rotation $(\pm 30^{\circ})$, and occlusion (at inference time no augmentation is performed). The learning rate during ITL-only training is set to 0.001. During the optional finetuning stage we lower the ITL learning rate to 0.0001 while keeping the encoder learning rate the same as during training $(= 2 \times 10^{-5})$ and resetting Adam's $\beta_{1}$ to the default value of 0.9.
+
+Evaluation Metrics Performance of facial landmark detection is reported here using normalized mean error (NME), failure rate (FR) at $10\%$ NME and area-under-the-curve (AUC) of the Cumulative Error Distribution (CED) curve. For 300-W and WFLW we use the distance between the outer eye-corners as the "inter-ocular" normalization. Due to the high number of profile faces in AFLW, errors are normalized using the width of the (square) bounding boxes following [57].
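+
+For reference, a small NumPy sketch of the three reported metrics; `norm` is the normalization term (inter-ocular distance for 300-W/WFLW, bounding-box width for AFLW), and the integration step count for the AUC is an arbitrary choice.
+
+```python
+import numpy as np
+
+def nme(pred, gt, norm):
+    """Normalized mean error for one face: mean point-to-point distance / norm."""
+    return np.linalg.norm(pred - gt, axis=1).mean() / norm
+
+def failure_rate(errors, threshold=0.10):
+    """Fraction of images whose NME exceeds the threshold (FR at 10% NME)."""
+    return float((np.asarray(errors) > threshold).mean())
+
+def auc_ced(errors, threshold=0.10, steps=1000):
+    """Area under the cumulative error distribution curve up to `threshold`,
+    normalized so that a perfect detector scores 1.0."""
+    errors = np.asarray(errors)
+    ts = np.linspace(0.0, threshold, steps)
+    ced = np.array([(errors <= t).mean() for t in ts])
+    return float(ced.mean())  # rectangle-rule approximation of the normalized area
+```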
+
+# 4.3. Qualitative results
+
+The trained generator is able to produce a wide range of realistic faces from a low-dimensional (99D) latent feature vector $z$ - this is shown in Fig.3 with randomly-generated faces with overlaid, predicted landmark heatmaps. To achieve this, the model must have learned inherent information about the underlying structure of faces. We can further illustrate the implicit face shape knowledge by interpolating between face embeddings and observing that facial structures (such as mouth corners) in produced images are constructed in a highly consistent manner (see Fig. 4 for a visualization). This leads to two insights: First, facial structures are actually encoded in the low-dimensional representation $z$ . Second, this information can be transformed into 2D maps of pixel intensities (i.e., a color image) while maintaining high correlation with the originating encoding.
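+
+The interpolation experiment can be sketched as a simple walk between two embeddings (assuming plain linear interpolation, which the text does not state explicitly); decoding each intermediate vector with the generator, and through the ITLs, yields the morphing faces and heatmaps of Fig. 4.
+
+```python
+import numpy as np
+
+def interpolate_latents(z_a, z_b, steps=8):
+    """Linear interpolation between two 99-D face embeddings."""
+    return [(1.0 - t) * z_a + t * z_b for t in np.linspace(0.0, 1.0, steps)]
+
+# usage sketch (E, G denote the trained encoder / generator):
+#   zs = interpolate_latents(E(img_a), E(img_b))
+#   faces_or_heatmaps = [G(z) for z in zs]
+```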
+
+Further examples of the reconstruction quality on challenging images are shown in Fig. 5. As can be seen, the pipeline will try to reconstruct a full face as much as possible given the input, removing occlusions and make-up and
+
+
+Figure 4: Predicted landmarks of generated faces by interpolation between embedded feature vectors.
+
+
+Figure 5: 3FabRec results on challenging test examples from WFLW. Rows show the original, and the reconstruction itself, with predicted landmarks, with ground-truth landmarks, and with predicted landmark heatmaps, respectively. The fifth column illustrates a failure case. For more examples, see supplementary materials.
+
+even "upsampling" the face (Fig. 5, first column) in the process. This is because the databases for training the autoencoder contained mostly unoccluded and non-disguised faces at roughly similar resolutions. Additionally we note that the reconstructed faces will not necessarily preserve the identity as the goal of the fully-trained pipeline is to reconstruct the best-fitting face shape. Although our method is able to handle considerable variations in resolution (Fig. 5, first column), make-up (Fig. 5, second column), lighting (Fig. 5, third column), and pose (Fig. 5, fourth column), it does produce failed predictions in cases when these factors become too extreme, as shown in the fifth column of Fig. 5. Landmark prediction, however, typically degrades gracefully in these cases as the confidence encoded in the heatmaps will also be low.
+
+# 4.4. Comparison with state-of-the-art
+
+Table 1 shows comparisons of our semi-supervised pipeline with the state-of-the-art on the 300-W and AFLW datasets using the full amount of training data.
+
+| Method | AFLW Full | AFLW Frontal | 300-W Com. | 300-W Chall. | 300-W Full |
+| --- | --- | --- | --- | --- | --- |
+| SDM [51] | 4.05 | 2.94 | 5.57 | 15.40 | 7.52 |
+| LBF [37] | 4.25 | 2.74 | 4.95 | 11.98 | 6.32 |
+| CFSS [58] | 3.92 | 2.68 | 4.73 | 9.98 | 5.76 |
+| Two-Stage [28] | 2.17 | - | 4.36 | 7.56 | 4.99 |
+| DSRN [33] | 1.86 | - | 4.12 | 9.68 | 5.21 |
+| SBR [16] | 2.14 | 2.07 | 3.28 | 7.58 | 4.10 |
+| SAN [14] | 1.91 | 1.85 | 3.34 | 6.60 | 3.98 |
+| LAB [49] | 1.85 | 1.62 | 2.98 | 5.19 | 3.49 |
+| ODN [56] | 1.63 | 1.38 | 3.56 | 6.67 | 4.17 |
+| LaplaceKL (70K) [39] | 1.97 | - | 3.19 | 6.87 | 3.91 |
+| 3FabRec | 1.84 | 1.59 | 3.36 | 5.74 | 3.82 |
+
+Table 1: Normalized mean error $(\%)$ on the AFLW and 300-W datasets. Best results highlighted in bold, second best are underlined.
+
+| Method | AUC | FR |
+| --- | --- | --- |
+| M3 CSR [11] | 47.52 | 5.5 |
+| CFSS [57] | 49.87 | 5.05 |
+| DenseReg+MDM [1] | 52.19 | 3.67 |
+| JMFA [12] | 54.85 | 1.00 |
+| LAB [49] | 58.85 | 0.83 |
+| 3FabRec | 54.61 | 0.17 |
+
+Table 2: Area under the curve (AUC) and failure rate (FR, in %, at 0.1 NME) on the 300-W testset.
+
+We achieve top-2 accuracy on nearly all test sets, with the exception of the common set from 300-W. This demonstrates that our framework reaches current levels of performance despite a much lighter supervised training stage using only a few interleaved transfer layers on top of the generator pipeline.
+
+The results in Table 2 for AUC and FR for the commonly-reported 300-W dataset demonstrate that our framework achieves the lowest failure rate of all methods (our FR=0.17 corresponds to only 1 image out of the full set that has large enough errors to count as a failure). At the same time, the AUC is in the upper range but not quite as good as that of [49], for example, which means that overall errors across landmarks are low, but more equally distributed compared to the top-performing methods.
+
+The NME results in Table 3 show that on the newest WFLW dataset, our approach performs at levels of the LAB method [49] with most subsets, although we perform consistently below the current StyleAlign approach (SA, [35] - note, however, that this approach could be easily implemented into our framework as well, which would allow us to disentangle the 99D-feature vector into style attributes [4]
+
+| Metric | Method | Full | Pose | Exp. | Ill. | Mk. Up | Occ. | Blur |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| NME (%) | SDM [51] | 10.29 | 24.10 | 11.45 | 9.32 | 9.38 | 13.03 | 11.28 |
+| | CFSS [58] | 9.07 | 21.36 | 10.09 | 8.30 | 8.74 | 11.76 | 9.96 |
+| | DVLN [50] | 6.08 | 11.54 | 6.78 | 5.73 | 5.98 | 7.33 | 6.88 |
+| | LAB [49] | 5.27 | 10.24 | 5.51 | 5.23 | 5.15 | 6.79 | 6.32 |
+| | SAN [14] | 5.22 | 10.39 | 5.71 | 5.19 | 5.49 | 6.83 | 5.80 |
+| | Wing [49] | 5.11 | 8.75 | 5.36 | 4.93 | 5.41 | 6.37 | 5.81 |
+| | SA [35] | 4.39 | 8.24 | 4.68 | 4.24 | 4.37 | 5.60 | 4.86 |
+| | 3FabRec | 5.62 | 10.23 | 6.09 | 5.55 | 5.68 | 6.92 | 6.38 |
+| FR @0.1 (%) | SDM [51] | 29.40 | 84.36 | 33.44 | 26.22 | 27.67 | 41.85 | 35.32 |
+| | CFSS [58] | 20.56 | 66.26 | 23.25 | 17.34 | 21.84 | 32.88 | 23.67 |
+| | DVLN [50] | 10.84 | 46.93 | 11.15 | 7.31 | 11.65 | 16.30 | 13.71 |
+| | LAB [49] | 7.56 | 28.83 | 6.37 | 6.73 | 7.77 | 13.72 | 10.74 |
+| | SAN [14] | 6.32 | 27.91 | 7.01 | 4.87 | 6.31 | 11.28 | 6.60 |
+| | Wing [49] | 6.00 | 22.70 | 4.78 | 4.30 | 7.77 | 12.50 | 7.76 |
+| | SA [35] | 4.08 | 18.10 | 4.46 | 2.72 | 4.37 | 7.74 | 4.40 |
+| | 3FabRec | 8.28 | 34.35 | 8.28 | 6.73 | 10.19 | 15.08 | 9.44 |
+| AUC @0.1 | SDM [51] | 0.300 | 0.023 | 0.229 | 0.324 | 0.312 | 0.206 | 0.239 |
+| | CFSS [58] | 0.366 | 0.063 | 0.316 | 0.385 | 0.369 | 0.269 | 0.304 |
+| | DVLN [50] | 0.455 | 0.147 | 0.389 | 0.474 | 0.449 | 0.379 | 0.397 |
+| | LAB [49] | 0.532 | 0.235 | 0.495 | 0.543 | 0.539 | 0.449 | 0.463 |
+| | SAN [15] | 0.536 | 0.236 | 0.462 | 0.555 | 0.522 | 0.456 | 0.493 |
+| | Wing [49] | 0.534 | 0.310 | 0.496 | 0.541 | 0.558 | 0.489 | 0.492 |
+| | SA [35] | 0.591 | 0.311 | 0.549 | 0.609 | 0.581 | 0.516 | 0.551 |
+| | 3FabRec | 0.484 | 0.192 | 0.448 | 0.496 | 0.473 | 0.398 | 0.434 |
+
+Table 3: Evaluation results on WFLW dataset.
+
+to generate augmented training data). The main reason for this is that WFLW contains much more heavy occlusions and extreme appearance changes compared to our training sets leading to more failure cases (see Fig.5 fifth column).
+
+# 4.5. Limited training data and few-shot learning
+
+Tables 4, 5, 6 showcase the central result of our framework: when training on only parts of the training set, 3FabRec can beat the published benchmark performance values.
+
+300-W Table 4 shows that performance is comparable to that of 2-year-old approaches trained on the full dataset (cf. Table 1) although 3FabRec was trained only with $10\%$ of the dataset. In addition, performance does not decrease much when going to lower values of $5\%$ and $1.5\%$ of training set size. Even when training with only 10 images or 1 image, our approach is able to deliver reasonably robust results (see Fig.1 for landmark reconstruction results from training with 10 images).
+
+AFLW For this dataset (Table 5), our approach already pulls ahead at $20\%$ of training set size with little degradation down to $1\%$. Again, even with only a few images, 3FabRec can make landmark predictions.
+
+WFLW For this more challenging dataset (Table 6), our approach easily outperforms the StyleAlign [21] method as soon as less than $10\%$ is used for training while being able
+
+
+Figure 6: Layer-analysis of 3FabRec. Gray curve: cumulative number of network parameters; blue curve: spatial dimension of each layer. The four red blocks indicate the ITL layers, with arrows showing how well the landmark heatmap can be predicted when starting from that layer.
+
+to maintain landmark prediction capabilities down to only 10 images in the training set.
+
+# 4.6. Ablation studies
+
+# 4.6.1 Effects of ITLs
+
+In order to see where information about landmarks is learned in the interleaved transfer layers, Figure 6 shows the reconstruction of the landmark heatmap when using all four layers versus decreasing subsets of the upper layers. As can be seen, the highest layer has only very localized information (mostly centered on eyes and mouth), whereas the lower layers are able to add information about the outlines - especially below layer 2.
+
+Localization accuracy is reported on the 300-W dataset (NME of 51 inner landmarks and outlines, as well as FR) in Table 7. As can be expected from the visualization, performance is bad for the upper layers only, but quickly recovers (especially when including the outlines) below layer 2. The reason for this is that the upper layers of the generator will mostly contain localized, de-correlated information at the pixel level, whereas the lower layers are closer to the more global and contextual information necessary to cover highly variable outlines (cf. blue curve in Figure 6, note that all ITLs have $3 \times 3$ convolutions). As the gray curve in Figure 6 and Table 7 show as well, the ITLs can achieve this with only very few additional parameters.
+
+# 4.6.2 Effects of finetuning
+
+Table 8 reports the effects of running the model with and without finetuning on the full testsets of the three evaluated datasets. The additional retraining of the autoencoder allows for better reconstruction of the faces and results in benefits of $10.9\%$ on average (8.9% for 300-W, 15.2% for AFLW, and 8.5% for WFLW, respectively).
+
+| Method | 100% | 20% | 10% | 5% | 50 (1.5%) | 10 (0.3%) | 1 (0.003%) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| RCN$^+$ [21] | 4.20 / 7.78 / 4.90 | - / 9.56 / 5.88 | - / 10.35 / 6.32 | - / 15.54 / 7.22 | - | - | - |
+| RCN$^+$ [21]$^\dagger$ | 3.00 / 4.98 / 3.46 | - / 6.12 / 4.15 | - / 6.63 / 4.47 | - / 9.95 / 5.11 | - | - | - |
+| SA [35] | 3.21 / 6.49 / 3.86 | 3.85 / - / - | 4.27 / - / - | 6.32 / - / - | - | - | - |
+| TS$^3$ [15] | 2.91 / 5.9 / 3.49 | 4.31 / 7.97 / 5.03 | 4.67 / 9.26 / 5.64 | - | - | - | - |
+| 3FabRec | 3.36 / 5.74 / 3.82 | 3.76 / 6.53 / 4.31 | 3.88 / 6.88 / 4.47 | 4.22 / 6.95 / 4.75 | 4.55 / 7.39 / 5.10 | 4.96 / 8.29 / 5.61 | 8.45 |
+
+Table 4: NME (%) with reduced training sets on 300-W; each cell lists the error on the common / challenging / full test subsets. $\dagger$ RCN$^+$ reports errors normalized by eye-center distance - for better comparison, values were rescaled by the known ratios of inter-ocular to inter-pupil distances. "-" denotes values not reported.
+
+| Method | 100% | 20% | 10% | 5% | 1% | 50 (0.0025%) | 10 (0.0005%) | 1 (<0.0001%) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| RCN$^+$ [21] | 1.61 / - | - | - | - | 2.17 / - | 2.88 / - | - | - |
+| TS$^3$ [15] | - | 1.99 / 1.86 | 2.14 / 1.94 | 2.19 / 2.03 | - | - | - | - |
+| 3FabRec | 1.87 / 1.59 | 1.96 / 1.74 | 2.03 / 1.74 | 2.13 / 1.86 | 2.38 / 2.03 | 2.74 / 2.23 | 3.05 / 2.56 | 4.93 / 4.04 |
+
+Table 5: NME (%) with reduced training sets for AFLW. The first value in each cell is for the full testset, the second for the frontal testset; "-" denotes values not reported.
+
+| Method | 100% | 20% | 10% | 5% | 50 | 10 | 1 |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| SA [21] | 4.39 | 6.00 | 7.20 | - | - | - | - |
+| 3FabRec | 5.62 | 6.51 | 6.73 | 7.68 | 8.39 | 9.66 | 15.79 |
+
+Table 6: NME (%) with reduced training sets for WFLW.
+
+| Trained ITLs | 1+2+3+4 | 2+3+4 | 3+4 | 4 |
+| --- | --- | --- | --- | --- |
+| Input size | 256x8x8 | 128x16x16 | 64x32x32 | 64x64x64 |
+| Trainable params | 881k | 291k | 143k | 106k |
+| 300-W NME ¬O | 3.54 | 3.63 | 5.34 | 16.34 |
+| 300-W NME O | 6.58 | 7.32 | 18.17 | 40.24 |
+| 300-W FR@0.1 | 1.45 | 2.03 | 22.93 | 91.44 |
+
+Table 7: Parameters and training results for ITLs ( $\neg O =$ without outlines, $O =$ outlines only)
+
+| | 300-W | AFLW | WFLW |
+| --- | --- | --- | --- |
+| NME before FT | 4.16 | 2.12 | 6.11 |
+| NME after FT | 3.82 | 1.84 | 5.62 |
+
+Table 8: NME (%) before and after finetuning on full testsets.
+
+# 4.7. Runtime performance
+
+Since inference complexity is equivalent to two forward-passes through a ResNet-18, our method is able to run at frame rates of close to 300fps on a TitanX GPU - an order of magnitude faster than state-of-the-art approaches with similar, high accuracy (LAB [49] = 16fps, Wing [18] = 30fps, Deep Regression [28] = 83fps, Laplace [39] = 20fps).
+
+# 5. Conclusion
+
+With 3FabRec, we have demonstrated that an unsupervised, generative training on large amounts of faces captures implicit information about face shape, making it possible to solve landmark localization with only a minimal amount of supervised follow-up training. This paradigm makes our approach inherently more robust against overfitting to specific training datasets as well as against human annotation variability [14]. The critical ingredients of 3FabRec that enable this generalization are the use of an adversarial autoencoder that reconstructs high-quality faces from a low-dimensional latent space, together with low-overhead, interleaved transfer layers added to the generator stage that transfer face reconstruction to landmark heatmap reconstruction.
+
+Results show that the autoencoder is easily able to generalize from its unlabeled training set to data from unseen datasets. This generalization allows training from only a few percent of the training set and still produces reliable results from only a few annotated images - far below anything reported so far in the literature. At the same time, since inference amounts to only two forward passes through a ResNet-18, our method achieves much higher runtime performance than other highly accurate methods.
+
+Acknowledgements This work was supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (No. 2019-0-00079, Department of Artificial Intelligence, Korea University)
+
+# References
+
+[1] Riza Alp Guler, George Trigeorgis, Epameinondas Antonakos, Patrick Snape, Stefanos Zafeiriou, and Iasonas Kokkinos. Densereg: Fully convolutional dense shape regression in-the-wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6799-6808, 2017. 6
+[2] Peter N Belhumeur, David W Jacobs, David J Kriegman, and Neeraj Kumar. Localizing parts of faces using a consensus of exemplars. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(12):2930-2940, 2013. 5
+[3] Matteo Bodini. A review of facial landmark extraction in 2d images and videos using deep learning. *Big Data and Cognitive Computing*, 3(1):14, 2019. 1
+[4] Björn Browatzki and Christian Wallraven. Robust discrimination and generation of faces using compact, disentangled embeddings. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 0–0, 2019. 3, 5, 6
+[5] Adrian Bulat and Georgios Tzimiropoulos. Two-stage convolutional part heatmap regression for the 1st 3D face alignment in the wild (3DFAW) challenge. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), volume 9914 LNCS, pages 616-624, 2016. 2
+[6] Qiong Cao, Li Shen, Weidi Xie, Omkar M Parkhi, and Andrew Zisserman. Vggface2: A dataset for recognising faces across pose and age. In Face & Gesture Recognition (FG 2018), pages 67-74. IEEE, 2018. 3, 4
+[7] Gavin C Cawley and Nicola LC Talbot. On over-fitting in model selection and subsequent selection bias in performance evaluation. Journal of Machine Learning Research, 11(Jul):2079-2107, 2010. 2
+[8] Timothy F Cootes, Gareth J Edwards, and Christopher J Taylor. Active appearance models. In European Conference on Computer Vision, pages 484-498. Springer, 1998. 2
+[9] Timothy F Cootes and Christopher J Taylor. Active shape modelssmart snakes. In BMVC92, pages 266-275. Springer, 1992. 2
+[10] Arnaud Dapogny, Kévin Bailly, and Matthieu Cord. De-CaFA: Deep Convolutional Cascade for Face Alignment In The Wild. pages 6893-6901, 2019. 2
+[11] Jiankang Deng, Qingshan Liu, Jing Yang, and Dacheng Tao. M3 scr: Multi-view, multi-scale and multi-component cascade shape regression. Image and Vision Computing, 47:19-26, 2016. 6
+[12] Jiankang Deng, George Trigeorgis, Yuxiang Zhou, and Stefanos Zafeiriou. Joint multi-view face alignment in the wild. IEEE Transactions on Image Processing, 28(7):3636-3648, 2019. 2, 6
+[13] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. 2
+[14] Xuanyi Dong, Yan Yan, Wanli Ouyang, and Yi Yang. Style Aggregated Network for Facial Landmark Detection. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 379-388, 2018. 2, 4, 6, 7, 8
+[15] Xuanyi Dong and Yi Yang. Teacher Supervises Students How to Learn From Partially Labeled Images for Facial Landmark Detection. In Proceedings of the IEEE International Conference on Computer Vision, pages 783-792, 2019. 2, 7, 8
+[16] Xuanyi Dong, Shouu-I Yu, Xinshuo Weng, Shih-En Wei, Yi Yang, and Yaser Sheikh. Supervision-by-registration: An unsupervised approach to improve the precision of facial landmark detectors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 360–368, 2018. 6
+[17] Zhen-Hua Feng, Guosheng Hu, Josef Kittler, William Christmas, and Xiao-Jun Wu. Cascaded collaborative regression for robust facial landmark detection trained using a mixture of synthetic and real images with dynamic weighting. IEEE Transactions on Image Processing, 24(11):3425-3440, 2015. 2
+[18] Zhen-Hua Feng, Josef Kittler, Muhammad Awais, Patrik Huber, and Xiao-Jun Wu. Wing Loss for Robust Facial Landmark Localisation with Convolutional Neural Networks. 2017. 8
+[19] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014. 3
+[20] Geoffrey E Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527-1554, 2006. 2
+[21] Sina Honari, Pavlo Molchanov, Stephen Tyree, Pascal Vincent, Christopher Pal, and Jan Kautz. Improving Landmark Localization with Semi-Supervised Learning. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 1546-1555, 2018. 7, 8
+[22] Jeremy Howard and Sebastian Ruder. Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146, 2018. 2
+[23] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.5
+[24] Durk P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pages 3581-3589, 2014. 2
+[25] Martin Koestinger, Paul Wohlhart, Peter M Roth, and Horst Bischof. Annotated facial landmarks in the wild: A largescale, real-world database for facial landmark localization. In 2011 IEEE International Conference on Computer Vision Workshops (ICCV workshops), pages 2144-2151. IEEE, 2011. 5
+[26] Vuong Le, Jonathan Brandt, Zhe Lin, Lubomir Bourdev, and Thomas S Huang. Interactive facial feature localization. In European Conference on Computer Vision, pages 679-692. Springer, 2012. 5
+
+[27] Zhujin Liang, Shengyong Ding, and Liang Lin. Unconstrained facial landmark localization with backbone-branches fully-convolutional networks. arXiv preprint arXiv:1507.03409, 2015. 2
+[28] Jiangjing Lv, Xiaohu Shao, Junliang Xing, Cheng Cheng, and Xi Zhou. A deep regression architecture with two-stage re-initialization for high performance facial landmark detection. Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 2017-January:3691-3700, 2017. 2, 5, 6, 8
+[29] Jiang-Jing Lv, Cheng Cheng, Guo-Dong Tian, Xiang-Dong Zhou, and Xi Zhou. Landmark perturbation-based data augmentation for unconstrained face recognition. Signal Processing: Image Communication, 47:465-475, 2016. 2
+[30] Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015. 2
+[31] Daniel Merget, Matthias Rock, and Gerhard Rigoll. Robust Facial Landmark Detection via a Fully-Convolutional Local-Global Context Network. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 781–790, 2018. 2
+[32] Kieron Messer, Jiri Matas, Josef Kittler, Juergen Luettin, and Gilbert Maitre. Xm2vtsdb: The extended m2vts database. In Second international conference on audio and video-based biometric person authentication, volume 964, pages 965-966, 1999. 5
+[33] Xin Miao, Xiantong Zhen, Xianglong Liu, Cheng Deng, Vassilis Athitsos, and Heng Huang. Direct shape regression networks for end-to-end face alignment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5040-5049, 2018. 6
+[34] Ali Mollahosseini, Behzad Hasani, and Mohammad H. Ma-hoor. AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild. IEEE Transactions on Affective Computing, 2017. 4
+[35] Shengju Qian, Keqiang Sun, Wayne Wu, Chen Qian, and Jiaya Jia. Aggregation via Separation: Boosting Facial Landmark Detector with Semi-Supervised Style Translation. In Proceedings of the IEEE International Conference on Computer Vision, 2019. 2, 6, 7, 8
+[36] Rajeev Ranjan, Vishal M Patel, and Rama Chellappa. Hyperface: A deep multi-task learning framework for face detection, landmark localization, pose estimation, and gender recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(1):121-135, 2017. 2
+[37] Shaoqing Ren, Xudong Cao, Yichen Wei, and Jian Sun. Face alignment at 3000 fps via regressing local binary features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1685-1692, 2014. 6
+[38] Shaoqing Ren, Xudong Cao, Yichen Wei, and Jian Sun. Face alignment via regressing local binary features. IEEE Transactions on Image Processing, 25(3):1233-1245, 2016. 5
+[39] Joseph P Robinson, Yuncheng Li, Ning Zhang, Yun Fu, and Sergey Tulyakov. Laplace Landmark Localization. In Proceedings of the IEEE International Conference on Computer Vision, 2019. 2, 6, 8
+
+[40] Christos Sagonas, Epameinondas Antonakos, Georgios Tzimiropoulos, Stefanos Zafeiriou, and Maja Pantic. 300 Faces In-The-Wild Challenge: database and results. Image and Vision Computing, 47:3-18, 2016. 5
+[41] Christos Sagonas, Georgios Tzimiropoulos, Stefanos Zafeiriou, and Maja Pantic. 300 faces in-the-wild challenge: The first facial landmark localization challenge. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 397-403, 2013. 1
+[42] Yi Sun, Xiaogang Wang, and Xiaou Tang. Deep convolutional network cascade for facial point detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3476-3483, 2013. 2
+[43] James Thewlis, Samuel Albanie, Hakan Bilen, and Andrea Vedaldi. Unsupervised learning of landmarks by descriptor vector exchange. In Proceedings of the IEEE International Conference on Computer Vision, pages 6361-6371, 2019. 2
+[44] Phil Tresadern, Tim Cootes, Chris Taylor, and Vladimir Petrovic. Face alignment models. In Handbook of face recognition, pages 109-135. Springer, 2011. 1
+[45] George Trigeorgis, Patrick Snape, Mihalis A Nicolaou, Epameinondas Antonakos, and Stefanos Zafeiriou. Mnemonic descent method: A recurrent process applied for end-to-end face alignment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4177-4187, 2016. 2
+[46] Tim Valentine, Michael B Lewis, and Peter J Hills. Face-space: A unifying concept in face recognition research. The Quarterly Journal of Experimental Psychology, 69(10):1996-2019, 2016. 2
+[47] Xinyao Wang, Liefeng Bo, and Li Fuxin. Adaptive Wing Loss for Robust Face Alignment via Heatmap Regression. 2019. 2
+[48] Zhou Wang, Eero P Simoncelli, and Alan C Bovik. Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, volume 2, pages 1398-1402. IEEE, 2003. 3
+[49] Wenyan Wu, Chen Qian, Shuo Yang, Quan Wang, Yici Cai, and Qiang Zhou. Look at Boundary: A Boundary-Aware Face Alignment Algorithm. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 2129-2138, 2018. 1, 2, 5, 6, 7, 8
+[50] Yue Wu, Tal Hassner, KangGeon Kim, Gerard Medioni, and Prem Natarajan. Facial Landmark Detection with Tweaked Convolutional Neural Networks. 2015. 7
+[51] Xuehan Xiong and Fernando De la Torre. Supervised descent method and its applications to face alignment. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 532-539, 2013. 1, 2, 6, 7
+[52] Wei Yang, Shuang Li, Wanli Ouyang, Hongsheng Li, and Xiaogang Wang. Learning feature pyramids for human pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, pages 1281-1290, 2017. 5
+[53] Egor Zakharov, Aliaksandra Shysheya, Egor Burkov, and Victor Lempitsky. Few-Shot Adversarial Learning of Realistic Neural Talking Head Models. 2019. 2
+
+[54] Richard Zhang, Phillip Isola, and Alexei A Efros. Split-brain autoencoders: Unsupervised learning by cross-channel prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1058-1067, 2017. 2
+[55] Zhanpeng Zhang, Ping Luo, Chen-Change Loy, and Xiaou Tang. Facial Landmark Detection by Deep Multi-task Learning. In European Conference on Computer Vision (ECCV), 2014. 2
+[56] Meilu Zhu, Daming Shi, Muhammad Sadiq, and Mingjie Zheng. Robust Facial Landmark Detection via Occlusion-adaptive Deep Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3486-3496, 2019. 2, 6
+[57] Shizhan Zhu, Cheng Li, Chen Change Loy, and Xiaou Tang. Face alignment by coarse-to-fine shape searching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4998-5006, 2015. 5, 6
+[58] Shizhan Zhu, Cheng Li, Chen Change Loy, and Xiaou Tang. Face alignment by coarse-to-fine shape searching. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 07-12-June:4998-5006, 2015. 6, 7
+[59] Xiaojin Zhu. Semi-supervised learning. Encyclopedia of Machine Learning and Data Mining, pages 1142-1147, 2017. 2
+[60] Xiangyu Zhu, Zhen Lei, Xiaoming Liu, Hailin Shi, and Stan Z. Li. Face Alignment Across Large Poses: A 3D Solution. 2016. 2
+[61] Xiangxin Zhu and Deva Ramanan. Face detection, pose estimation, and landmark localization in the wild. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 2879-2886. IEEE, 2012. 5
\ No newline at end of file
diff --git a/3fabrecfastfewshotfacealignmentbyreconstruction/images.zip b/3fabrecfastfewshotfacealignmentbyreconstruction/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..390f9136ef74586ba1c9e4fb1f8e1cc661f3f8ad
--- /dev/null
+++ b/3fabrecfastfewshotfacealignmentbyreconstruction/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:112f4c1d2a947c24d6efd56c74696232ccdd673eb3db1b5043c7d013df3d2fbe
+size 721787
diff --git a/3fabrecfastfewshotfacealignmentbyreconstruction/layout.json b/3fabrecfastfewshotfacealignmentbyreconstruction/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..2302523fed0516e403b016c26760a0d5b6ccfa99
--- /dev/null
+++ b/3fabrecfastfewshotfacealignmentbyreconstruction/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d07e4baace0ffe77cd7ad6f9c02020bce8dfa1ac862d6628e547dda4d1101f56
+size 443067
diff --git a/4dassociationgraphforrealtimemultipersonmotioncaptureusingmultiplevideocameras/18bcafd5-b4eb-49a9-b6d9-460b00c49326_content_list.json b/4dassociationgraphforrealtimemultipersonmotioncaptureusingmultiplevideocameras/18bcafd5-b4eb-49a9-b6d9-460b00c49326_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..1cc88436ef078a6b40b0a0b9cc6f5d705a27dca9
--- /dev/null
+++ b/4dassociationgraphforrealtimemultipersonmotioncaptureusingmultiplevideocameras/18bcafd5-b4eb-49a9-b6d9-460b00c49326_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e8e047d5966a3c10acd0b42dc3698f56eea11ed99fd67d3a01f6f86b71fb428c
+size 79966
diff --git a/4dassociationgraphforrealtimemultipersonmotioncaptureusingmultiplevideocameras/18bcafd5-b4eb-49a9-b6d9-460b00c49326_model.json b/4dassociationgraphforrealtimemultipersonmotioncaptureusingmultiplevideocameras/18bcafd5-b4eb-49a9-b6d9-460b00c49326_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..de113182027013c3d9fb2a36d468e21f1a387bed
--- /dev/null
+++ b/4dassociationgraphforrealtimemultipersonmotioncaptureusingmultiplevideocameras/18bcafd5-b4eb-49a9-b6d9-460b00c49326_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c1ceb41dd2c2b4baadb671f2139de2574e4f6fe3ce9c41e5247b32ea870d7443
+size 96595
diff --git a/4dassociationgraphforrealtimemultipersonmotioncaptureusingmultiplevideocameras/18bcafd5-b4eb-49a9-b6d9-460b00c49326_origin.pdf b/4dassociationgraphforrealtimemultipersonmotioncaptureusingmultiplevideocameras/18bcafd5-b4eb-49a9-b6d9-460b00c49326_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..3030aa95cb522504c56b36874b8daecb002f9c0e
--- /dev/null
+++ b/4dassociationgraphforrealtimemultipersonmotioncaptureusingmultiplevideocameras/18bcafd5-b4eb-49a9-b6d9-460b00c49326_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b0246cf850c463f8f7832a87a8131a87df52ca86370a92e0bedab60df919bddb
+size 5429906
diff --git a/4dassociationgraphforrealtimemultipersonmotioncaptureusingmultiplevideocameras/full.md b/4dassociationgraphforrealtimemultipersonmotioncaptureusingmultiplevideocameras/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..88f2864265c175c33e0e1640da1b9433b61eb04c
--- /dev/null
+++ b/4dassociationgraphforrealtimemultipersonmotioncaptureusingmultiplevideocameras/full.md
@@ -0,0 +1,374 @@
+# 4D Association Graph for Realtime Multi-person Motion Capture Using Multiple Video Cameras
+
+Yuxiang Zhang $^{1*}$ , Liang An $^{1*}$ , Tao Yu $^{1}$ , Xiu Li $^{1}$ , Kun Li $^{2}$ , Yebin Liu $^{1,3}$
+
+$^{1}$ Department of Automation, Tsinghua University $^{2}$ Tianjin University
+
+$^{3}$ Institute for Brain and Cognitive Sciences, Tsinghua University
+
+# Abstract
+
+This paper contributes a novel realtime multi-person motion capture algorithm using multiview video inputs. Due to the heavy occlusions and closely interacting motions in each view, joint optimization over the multiview images and multiple temporal frames is indispensable, which brings up the essential challenge of realtime efficiency. To this end, for the first time, we unify per-view parsing, cross-view matching, and temporal tracking into a single optimization framework, i.e., a 4D association graph in which each dimension (image space, viewpoint and time) is treated equally and simultaneously. To solve the 4D association graph efficiently, we further contribute the idea of 4D limb bundle parsing based on heuristic searching, followed by limb bundle assembling via a proposed bundle Kruskal's algorithm. Our method enables a realtime motion capture system running at 30fps using 5 cameras on a 5-person scene. Benefiting from the unified parsing, matching and tracking constraints, our method is robust to noisy detections caused by severe occlusions and closely interacting motions, and achieves high-quality online pose reconstruction. The proposed method outperforms state-of-the-art methods quantitatively without using high-level appearance information.
+
+# 1. Introduction
+
+Markerless motion capture of multiple persons in a scene is important for many industry applications but is still challenging and far from being solved. Although the literature has reported single-view 2D and 3D pose estimation methods [41, 36, 11, 12, 18, 17, 28, 34, 44, 45, 33], they suffer from heavy occlusions and produce low-fidelity results. Comparably, multi-view cameras provide more than one view to alleviate occlusion, as well as stereo cues for accurate 3D triangulation, and are therefore indispensable inputs for markerless motion capture of multi-person scenes.
+
+
+
+
+
+
+Figure 1. Our method enables multi-person motion capture system working at 30fps for 5 persons using 5 RGB cameras, while achieving high quality skeleton reconstruction results.
+
+
+
+While remarkable advances have been made in many kinds of multi-camera motion capture systems for humans [30, 31, 24] or even animals [4], most of them fail to achieve the goals of realtime performance and high-quality capture under extremely close interactions.
+
+Given the 4D (2D spatial, 1D viewpoint and 1D temporal) multiview video input, the key to the success of real-time and high quality multi-person motion capture is how to leverage the rich data input, i.e., how to operate on the 4D data structure to achieve high accuracy while maintaining real-time performance. Essentially, based on the human body part features pre-detected in the separate 2D views using state-of-the-art CNN methods [11], three kinds of basic associations can be defined on this 4D structure. These include single image association (i.e., parsing) [11, 20] to form human skeletons in a single image, cross-view association (i.e., matching) to establish correspondences among different views, and temporal association (i.e. tracking) to build correspondences between sequential frames.
+
+Existing methods struggle to deal with all these associations simultaneously and efficiently. They consider only parts of these associations, or simply handle them in a sequential manner, and consequently fail to deliver both high quality and realtime performance. For example, the state-of-the-art methods [14, 10, 39] share a similar high-level framework by first performing per-view person parsing, followed by cross-view person matching and temporal tracking sequentially. They usually assume and rely on perfect per-view person parsing results in the first stage. However, this cannot be guaranteed in crowded or close-interaction scenarios. Temporal extensions [8, 7] of the 3D pictorial structure (3DPS) model [6] apply temporal tracking [23], followed by cross-view parsing using the very time-consuming 3DPS optimization. The Panoptic Studio [24] addresses these associations in a sequential manner, by first matching (generating node proposals), then tracking (generating trajectories), and finally assembling the 3D human instances. As it tracks over the whole sequence, it is impossible to achieve realtime performance.
+
+In this paper, we formulate parsing, matching, and tracking in a unified graph optimization framework, called the 4D association graph, to address 2D spatial, 1D viewpoint, and 1D temporal information simultaneously and equally. By regarding the detected 2D skeleton joint candidates in the current frame and the 3D skeleton joints of the previous frame as graph nodes, we construct edges by calculating confidence weights between nodes. This calculation jointly takes advantage of feature confidences in each individual image, epipolar constraints, and the skeletons reconstructed in the preceding frame. Compared with [14, 24, 8, 7], which adopt a sequential processing strategy over the image space, viewpoint, and time dimensions, our 4D graph formulation enables unified optimization over all these dimensions, thereby allowing them to mutually benefit one another.
+
+To realize realtime optimization on the 4D association graph, we further contribute an efficient method that separates the problem into a 4D limb parsing step and a skeleton assembling step. In the former step, we propose a heuristic searching algorithm to form 4D limb bundles, and in the latter a modified minimum spanning tree algorithm to assemble the 4D limb bundles into skeletons. Both steps are optimized based on an energy function designed to jointly consider image feature, stereo, and temporal cues, so optimization quality is guaranteed while realtime efficiency is achieved. We demonstrate a realtime multi-person motion capture system using only 5 to 6 multiview video cameras; see Fig. 1 and the supplemental video. Benefiting from this unified strategy, our system succeeds even in close interaction scenarios (Video 02:55-03:30). Finally, we contribute a multiview multi-person closely interacting motion dataset synchronized with a marker-based motion capture system.
+
+# 2. Related Work
+
+We briefly review the literature on multi-person skeleton estimation according to the dimensionality of the input data.
+
+# 2.1. Single Image Parsing
+
+We restrict our discussion of single image parsing to work that addresses multi-person pose estimation in 2D and 3D. As there are close interactions in the scene, all such methods need to consider skeleton joint or body part detection and their connection to form skeletons. Parsing methods can typically be categorized into two classes: bottom-up methods and top-down methods. In general, top-down methods [26, 17, 12, 18, 43, 28] demonstrate higher average precision by benefiting from human instance information, while bottom-up methods [20, 11, 35, 27, 38] tend to propose pixel-aligned low-level feature positions, although assembling them remains a great challenge. Typically, a state-of-the-art bottom-up method, OpenPose [11], introduces part affinity fields (PAFs) to assist in parsing low-level keypoints on limbs, obtaining realtime performance with high accuracy.
+
+# 2.2. Cross-view Matching
+
+Matching finds correspondences across views, whether on high-level features (human instances) or low-level features (keypoints). Previous work [6, 8, 7, 16] implicitly solves matching and parsing using the 3D pictorial structure model. However, such methods are time-consuming due to the large state space and iterative belief propagation. Joo et al. [24] utilize detected features from dense multi-view images to vote for possible 3D joint positions, performing matching in another implicit way. Such a voting method only works well with enough observation views. Recent work [14] matches per-view parsed human instances across views with a convex optimization method constrained by cycle consistency. Though fast and robust, this method relies on appearance information to ensure good results, and can be affected by parsing errors (e.g., false positive human instances and wrong joint estimates).
+
+# 2.3. Temporal Tracking
+
+Tracking is a key step towards continuous and smooth motion capture, and helps resolve pose ambiguities in the current frame based on past results. Tracking can be done either in 2D space or 3D space. Many works have addressed 2D tracking, known as pose-tracking tasks [3, 37, 22, 19]. For 3D tracking, motion capture of multiple closely interacting persons [31, 30] has been proposed through joint 3D template tracking and multi-view body segmentation. Li et al. [29] propose spatio-temporal tracking for closely interacting persons from multi-view videos. However, these pure tracking algorithms are prone to failure because of temporal error accumulation. Elhayek et al. [15] track a 3D articulated model against a 2D human appearance descriptor (Sums of
+
+Gaussians), achieving markerless motion capture for both indoor and outdoor scenes. However, it does not demonstrate the multi-person case (more than 3 persons). Belagiannis et al. [8] also utilize tracking information, but they derive human tracks in advance as a prior to reduce the state space, instead of solving tracking and matching simultaneously. Bridgeman et al. [10] contribute a realtime method, yet it adopts sequential processing of image parsing, cross-view correction, and temporal tracking. In the Panoptic Studio [24], after temporal tracking of 3D joint proposals over the whole sequence, an optimization is run to assemble the humans.
+
+# 3. Overview
+
+Our 4D association graph considers the information in two consecutive frames. We first run the off-the-shelf bottom-up human pose detector [11] on each input view of the current frame to generate low-level human features in each view. Our 4D association graph takes as input the multi-view human body part candidates (2D heatmap positions) and the connection confidences (PAF [11] scores ranging between 0 and 1) between body parts (see Fig. 2(a)), together with the 3D skeletons reconstructed in the previous frame. By regarding the body parts and the 3D joints of the previous frame as graph nodes, we construct edges with clear semantic meaning between nodes. Specifically, as shown in Fig. 2(b), there are three kinds of edges: per-view parsing edges connecting adjacent body parts in each image view, cross-view matching edges connecting the same body part across views, and temporal tracking edges connecting 3D nodes from the previous frame with current 2D candidates. The construction of these edges is elaborated in Sect. 4.
+
+Based on the input graph in Fig. 2(b), this 4D association problem can be described as a minimum-cost multi-cut problem, i.e., a 0-1 integer programming problem that selects the edges belonging to real skeletons and the physically real temporal and cross-view correspondences, see Fig. 2(c). Our graph model is similar to the existing single-view association problem [11, 20], except that it is more complex. As it is an NP-hard problem, we split it into a 4D limb parsing problem (Sect. 5.1) and a skeleton assembling problem (Sect. 5.2). Our proposed solver guarantees realtime performance while obtaining robust results. It is worth mentioning that our graph model and the solver also work for the special cases in which there are no temporal edges, i.e., at the first frame of the whole sequence, or when new persons enter the scene.
+
+# 4. 4D Association Graph
+
+For each image view $c \in \{1,2,\dots,N\}$ at the current frame $t$ , the convolutional pose machine (CPM) model [41, 11] is first applied to get the heatmaps of keypoints and their part affinity fields (PAFs). Denote $\mathcal{D}_j(c) =$
+
+$\{\mathbf{d}_j^m (c)\in \mathbb{R}^2\}$ as the candidate positions of the skeleton joints $j\in \{1,2,\dots,J\}$ , with $m$ as the candidate index. Here, $t$ is omitted by default since we process the current frame. Denote $f_{ij}^{mn}(c)$ as the PAF score connecting $\mathbf{d}_i^m (c)$ and $\mathbf{d}_j^n (c)$ , where $\{ij\} \in \mathcal{T}$ is a limb on the skeleton topology tree $\mathcal{T}$ .
+
+With both the candidate positions $\mathcal{D}_j(c)$ and the skeleton joints reconstructed in the previous frame regarded as graph nodes, we have three kinds of edges: per-view parsing edges $\mathcal{E}_P$ connecting nodes in the same view, cross-view matching edges $\mathcal{E}_V$ connecting nodes in different views geometrically, and temporal tracking edges $\mathcal{E}_T$ connecting nodes temporally. Solving this association graph is equivalent to determining a boolean variable $z\in \{0,1\}$ for each edge, where $z = 1$ means the connected nodes are associated with the same human body, and $z = 0$ otherwise. Note that $z = 0$ means the two nodes belong to two different bodies, or one of them is a false detection (a fake joint that is not on a real body). The connecting weight on an edge is written as $p(z) = p(z = 1)$ . In the following, the weight of each kind of edge in the 4D association graph is defined.
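+
+As an illustration only (not from the paper's implementation), the nodes and the three edge types of the 4D association graph could be organized as in the following sketch; all class and field names are hypothetical.
+
+```python
+from dataclasses import dataclass, field
+
+@dataclass(frozen=True)
+class Node2D:
+    view: int          # camera index c
+    joint_type: int    # joint index j
+    candidate: int     # candidate index m
+
+@dataclass(frozen=True)
+class Node3D:
+    person: int        # person index k from the previous frame
+    joint_type: int    # joint index i
+
+@dataclass
+class Graph4D:
+    # each dictionary maps an edge (pair of nodes) to its weight p(z = 1)
+    parsing_edges: dict = field(default_factory=dict)   # (Node2D, Node2D), same view, adjacent joint types
+    matching_edges: dict = field(default_factory=dict)  # (Node2D, Node2D), different views, same joint type
+    tracking_edges: dict = field(default_factory=dict)  # (Node3D, Node2D), previous frame to current frame
+```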
+
+# 4.1. Parsing Edges and Matching Edges
+
+Without the temporal tracking edges introduced by the previously reconstructed 3D skeletons, the parsing edges and the matching edges form a 3D association graph $\mathcal{G}_{3D}$ . This case arises when processing the first frame of the whole sequence or when a new person enters the scene. The graph $\mathcal{G}_{3D}$ directly extends the original per-view multi-person parsing problem [11] with cross-view geometric matching constraints. With these geometric constraints, false limb connections of the single-view case have a good chance of being identified and corrected during joint 3D association.
+
+Denote $z_{ij}^{mn}(c_1,c_2)$ as the boolean variable on the edge connecting $\mathbf{d}_i^m (c_1)$ and $\mathbf{d}_j^n (c_2)$ . Any feasible solution $\{z_{ij}^{mn}(c_1,c_2)\}$ on $\mathcal{G}_{3D}$ must satisfy the following inequalities
+
+$$
+\begin{array}{l} \forall c, m, \; \sum_ {n} z _ {i j} ^ {m n} (c, c) \leq 1 \\ \forall c _ {2} \neq c _ {1}, m, \; \sum_ {n} z _ {i i} ^ {m n} \left(c _ {1}, c _ {2}\right) \leq 1 \tag {1} \end{array}
+$$
+
+Specifically, the first inequality forces no two edges to share a node, i.e., no two limbs of the same type (e.g., left forearm) share a part. The second forces that no joint from one view connects to two joints of the same type from another view. Note that here $c_{1}$ and $c_{2}$ range over all possible combinations of view pairs.
+
+For the per-view parsing edge $\mathcal{E}_P$ , we directly define the input edge weight as its PAF score:
+
+$$
+p \left(z _ {i j} ^ {m n} (c) = 1\right) = f _ {i j} ^ {m n} (c) \tag {2}
+$$
+
+
+Figure 2. Method overview. (a) We input the body part positions and connection confidences of different views at time $t$ , together with the 3D persons of the previous frame. We use 3 views as an example. (b) The 4D association graph. For clarity, we only highlight the association of the torso limb, with the three types of edges (parsing, matching, and tracking edges) drawn in different colors. (c) From the initial graph (b), our association method outputs the assembling results. (d) We optimize the assembled multiview 2D skeletons (c) to form the 3D skeletons of the current frame $t$ .
+
+For a cross-view matching edge $\mathcal{E}_V$ , the weight is defined based on the epipolar distance, written as a line-to-line distance in 3D space:
+
+$$
+p \left(z _ {i i} ^ {m n} \left(c _ {1}, c _ {2}\right)\right) = 1 - \frac {1}{Z} \mathbf {d} _ {i} ^ {m} \left(c _ {1}\right) \oplus \mathbf {d} _ {i} ^ {n} \left(c _ {2}\right) \tag {3}
+$$
+
+$$
+\mathbf {d} \left(c _ {1}\right) \oplus \mathbf {d} \left(c _ {2}\right) = d \left(K _ {c _ {1}} ^ {- 1} \tilde {\mathbf {d}} \left(c _ {1}\right), K _ {c _ {2}} ^ {- 1} \tilde {\mathbf {d}} \left(c _ {2}\right)\right) \tag {4}
+$$
+
+where $\tilde{\mathbf{d}} = [\mathbf{d}^{\mathrm{T}},1]^{\mathrm{T}}$ , $K_{c}$ is the intrinsic matrix of view $c$ , and $d(\cdot ,\cdot)$ is the line-to-line distance between the two rays emanating from the camera centers of views $c_{1}$ and $c_{2}$ . $Z$ is an empirically defined normalization factor, which maps the epipolar distance to the range [0, 1]. Note that we only build edges for cross-view nodes sharing the same joint index.
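+
+For illustration, a minimal sketch of the matching-edge weight of Eqs. 3-4, assuming the back-projected rays are expressed in a common world frame via known extrinsics (rotation `R`, camera center `o`); the clipping to [0, 1] and the value of `Z` are our assumptions rather than values from the paper.
+
+```python
+import numpy as np
+
+def back_project(K, R, o, pixel):
+    """Back-project a pixel into a world-space ray (origin, unit direction).
+    K: 3x3 intrinsics, R: world-to-camera rotation, o: camera center in world coordinates."""
+    d_h = np.array([pixel[0], pixel[1], 1.0])
+    direction = R.T @ (np.linalg.inv(K) @ d_h)      # K^{-1} \tilde{d}, rotated into the world frame
+    return o, direction / np.linalg.norm(direction)
+
+def line_to_line_distance(o1, d1, o2, d2, eps=1e-8):
+    """Shortest distance between the two 3D lines o1 + t*d1 and o2 + s*d2."""
+    n = np.cross(d1, d2)
+    n_norm = np.linalg.norm(n)
+    if n_norm < eps:                                # nearly parallel rays
+        return np.linalg.norm(np.cross(o2 - o1, d1))
+    return abs(np.dot(o2 - o1, n)) / n_norm
+
+def matching_weight(cam1, p1, cam2, p2, Z=0.1):
+    """Eq. 3: p(z) = 1 - d(.,.)/Z, clipped to [0, 1]; cam = (K, R, o)."""
+    dist = line_to_line_distance(*back_project(*cam1, p1), *back_project(*cam2, p2))
+    return float(np.clip(1.0 - dist / Z, 0.0, 1.0))
+```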
+
+# 4.2. Tracking Edges
+
+Although solving $\mathcal{G}_{3D}$ at each time instant provides good associations in most cases, failures may happen in very crowded scenes or under severe occlusions. To improve skeleton reconstruction robustness, we take advantage of the temporal prior, i.e., the skeletons reconstructed in the previous frame, to regularize the association problem, which yields the 4D association graph $\mathcal{G}_{4D}$ . We restrict the connecting edges between the previous-frame skeletons and the current-frame joint features by requiring that the two nodes of an edge correspond to the same skeleton joint type (possibly on different persons). Denote $z_{i}^{mk}(c)$ as the boolean variable for the edge connecting the image joint feature $\mathbf{d}_i^m (c)$ and the skeleton joint $\mathbf{X}_i^k$ . We define the tracking edge connecting probability as
+
+$$
+p \left(z _ {i} ^ {m k} (c)\right) = 1 - \frac {1}{T} d ^ {\prime} \left(\mathbf {X} _ {i} ^ {k}, K _ {c} ^ {- 1} \mathbf {d} _ {i} ^ {m} (c)\right) \tag {5}
+$$
+
+where $d^{\prime}(\mathbf{X},\mathbf{d})$ denotes the point-to-line distance between a 3D point $\mathbf{X}$ and the 3D line emanating from the camera center through $\mathbf{d}$ , and $T$ is a normalization factor ensuring that $p(z_i^{mk}(c))$ lies in the range
+
+[0, 1]. Similarly, the following inequality conditions must hold for the feasible solution space:
+
+$$
+\forall i, c, \sum_ {m} z _ {i} ^ {m k} (c) \leq 1, \sum_ {k} z _ {i} ^ {m k} (c) \leq 1 \tag {6}
+$$
+
+This constraint forces each 3D joint of the previous frame to match no more than one 2D joint in each view of the current frame, and vice versa.
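+
+Similarly, a short sketch of the tracking-edge weight of Eq. 5; the normalization constant `T` and the clipping to [0, 1] are illustrative assumptions.
+
+```python
+import numpy as np
+
+def point_to_line_distance(X, o, d):
+    """Distance from 3D point X to the line through o with direction d."""
+    d = d / np.linalg.norm(d)
+    return np.linalg.norm(np.cross(X - o, d))
+
+def tracking_weight(X_prev, K, R, o, pixel, T=0.2):
+    """Eq. 5: 1 - d'(X, ray(d))/T for a previous-frame joint X_prev and a current 2D candidate."""
+    d_h = np.array([pixel[0], pixel[1], 1.0])
+    direction = R.T @ (np.linalg.inv(K) @ d_h)      # ray from the camera center through the pixel
+    dist = point_to_line_distance(X_prev, o, direction)
+    return float(np.clip(1.0 - dist / T, 0.0, 1.0))
+```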
+
+# 4.3. Objective Function
+
+Based on the predefined probabilities for the parsing, matching, and tracking edges, our 4D association optimization can be formulated as an edge selection problem that maximizes an objective function under the constraints in Eqns. 1 and 6. Specifically, let $q(z) = p(z)\cdot z$ denote the final energy of an edge, where $z$ is a boolean variable; our objective function can then be written as the summation of the energies of all the selected edges in $\mathcal{E}_P$ , $\mathcal{E}_V$ and $\mathcal{E}_T$ :
+
+$$
+\begin{array}{l} E (\mathcal {Z}) = w _ {p} \sum q \left(z _ {i j} ^ {m n} (c, c)\right) + w _ {m} \sum q \left(z _ {i i} ^ {m n} \left(c _ {1}, c _ {2}\right)\right) \\ + w _ {t} \sum q \left(z _ {i} ^ {m k} (c)\right) \tag {7} \\ \end{array}
+$$
+
+Note that here each $\sum$ traverses all the possible edges, i.e., all feasible values of the variables $(i,j,m,n,k,c,c_1,c_2)$ by default. $w_{p}$ , $w_{m}$ and $w_{t}$ are empirically defined weighting factors for the edges $\mathcal{E}_P$ , $\mathcal{E}_V$ and $\mathcal{E}_T$ , respectively. With $w_{t} = 0$ , the objective degenerates to that of the association graph $\mathcal{G}_{3D}$ . Both $\mathcal{G}_{3D}$ and $\mathcal{G}_{4D}$ can be solved with the same procedure, as described in Sect. 5.
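+
+A minimal sketch of evaluating the objective of Eq. 7 for a given edge selection; the default weights are placeholders.
+
+```python
+def association_energy(parsing, matching, tracking, w_p=1.0, w_m=1.0, w_t=1.0):
+    """Eq. 7 with q(z) = p(z) * z. Each argument is a list of (p, z) pairs,
+    where p is the edge weight and z in {0, 1} marks whether the edge is selected."""
+    q = lambda edges: sum(p * z for p, z in edges)
+    return w_p * q(parsing) + w_m * q(matching) + w_t * q(tracking)
+```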
+
+# 5. Solving 4D Association
+
+Solving the 4D association graph means maximizing the objective function in Eqn. 7 under the constraints in Eqn. 1 and
+
+
+Figure 3. Illustration of limb cliques. (a) A sample 4D graph on limb $\{ij\}$ , denoted $\mathcal{G}_{4D}^{ij}$ . Two cliques are marked as red and blue areas. (b) Limb cliques of different sizes can be proposed from the 4D graph on a limb. Joints of the same type (same color in the figure) in a limb clique form a clique, and joints of different types in each view must share a green parsing edge.
+
+Eqn. 6. Traversing the huge association space in a brute-force manner is infeasible for realtime systems. Instead, inspired by the realtime yet high-quality parsing method [11] that assembles 2D human skeletons in a greedy manner, we propose a realtime 4D association solver. The key difference between our 4D association and the previous 2D association is that the limb candidates are scattered not only within a single image but across the whole space and time, and some candidates represent the same physical limb. Therefore, we need to first associate the limbs that are likely to be the same limb bundle across views and time, before assembling the 4D skeletons. Based on this idea, our realtime solution is divided into two steps: 4D limb bundle association (Sect. 5.1), and 4D human skeleton association by the bundle Kruskal's algorithm (Sect. 5.2). Both steps rely on the objective function in Eqn. 7 for optimization.
+
+# 5.1. 4D Limb Bundle Parsing
+
+To extract limb bundles across views and time, we first restrict $\mathcal{G}_{4D}$ to a limb $\{ij\}$ (two adjacent joint types), yielding $\mathcal{G}_{4D}^{ij}$ . Since there are multiple persons in the scene, the graph $\mathcal{G}_{4D}^{ij}$ may contain multiple real limb bundles. In theory, each real limb bundle contains two joint cliques. For clarity, a clique is a graph in which every two nodes are connected [42]; see Fig. 3(a) for an example. This implies that every two joints of the same type in a limb bundle must share a cross-view edge or a temporal edge. By further considering the parsing edges, a correct 4D limb bundle consists of two joint cliques connected by parsing edges in each view. We call such a limb bundle candidate a limb clique. Fig. 3(b) enumerates all the possible limb cliques of Fig. 3(a). Consequently, our goal in this step is to search all possible limb cliques $\{\mathcal{G}_C|\mathcal{G}_C\subset \mathcal{G}_{4D}^{ij}\}$ for the real limb bundles.
+
+We measure each limb clique with $E(\mathcal{Z}_{\mathcal{G}_C})$ based on the objective function in Eqn. 7. However, directly maximizing $E(\mathcal{Z}_{\mathcal{G}_C})$ always encourages as many edges as possible, even false ones, to be selected in a clique. Hence, we
+
+
+
+
+
+
+Figure 4. Illustration of the limb bundle parsing procedure. (a) The initial graph $\mathcal{G}_{4D}^{ij}$ . A square/cube represents a limb (2D or 3D), and each grey dashed line is an edge between limbs. (b) The best clique (limb bundle) detected from (a) is shown in blue. (c) We then remove the limbs and edges related to the best clique, and extract the next best one. (d) Finally, all cliques are detected. We can also extract cliques without temporal edges, like the orange one.
+
+normalize $E(\mathcal{Z}_{\mathcal{G}_C})$ by the clique size $|\mathcal{V}_C|$ of $\mathcal{G}_C$ , and add a term to balance the clique size against the average probability. Overall, the objective function for a limb clique is
+
+$$
+E \left(\mathcal {G} _ {C}\right) = E \left(\mathcal {Z} _ {\mathcal {G} _ {C}}\right) / \left| \mathcal {V} _ {C} \right| + w _ {v} \rho \left(\left| \mathcal {V} _ {C} \right|\right) \tag {8}
+$$
+
+where $w_{v}$ is a balancing weight, and $\rho$ is the Welsch robust loss [13, 5] defined as
+
+$$
+\rho (x) = 1 - \exp \left(- \frac {1}{2} (x / c) ^ {2}\right) \tag {9}
+$$
+
+Here, $c = (N - 1) / 2$ is a parameter depending on the total number of views.
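+
+A small sketch of the limb clique score of Eqs. 8-9; the default weight `w_v` is a placeholder, and the edge energies are assumed to be the weighted edge energies of Eq. 7 restricted to the clique.
+
+```python
+import numpy as np
+
+def welsch(x, c):
+    """Eq. 9: the Welsch robust loss."""
+    return 1.0 - np.exp(-0.5 * (x / c) ** 2)
+
+def clique_score(edge_energies, num_nodes, num_views, w_v=1.0):
+    """Eq. 8: edge energy normalized by clique size plus the robust size term."""
+    c = (num_views - 1) / 2.0
+    return sum(edge_energies) / num_nodes + w_v * welsch(num_nodes, c)
+```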
+
+Fig. 4 illustrates the limb bundle parsing procedure. After selecting a limb clique and marking it as a limb bundle, we remove it from $\mathcal{G}_{4D}^{ij}$ (Fig. 4(b)), together with all other edges connected to any joint in this clique (Fig. 4(c)). By doing this, our solution always satisfies the feasibility inequalities (Eqns. 1 and 6). This selection process is iterated until $\mathcal{G}_{4D}^{ij}$ is empty (Fig. 4(d)).
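+
+The iterative selection could be sketched as the following greedy loop; `graph_ij`, `enumerate_cliques`, and the graph methods are hypothetical helpers rather than the paper's code.
+
+```python
+def parse_limb_bundles(graph_ij, enumerate_cliques, clique_score):
+    """Greedily extract limb bundles for one limb {i, j}: take the best-scoring limb clique,
+    remove its joints and every incident edge (which keeps Eqns. 1 and 6 satisfied), repeat."""
+    bundles = []
+    while graph_ij.has_edges():
+        cliques = enumerate_cliques(graph_ij)
+        if not cliques:
+            break
+        best = max(cliques, key=clique_score)
+        bundles.append(best)
+        graph_ij.remove_nodes(best.nodes)   # also drops all edges touching these joints
+    return bundles
+```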
+
+# 5.2. 4D Skeleton Assembling
+
+After generating all the 4D limb bundles, we need to assemble them into multiple 4D human skeletal structures. We first sort all the 4D limb bundles by their scores and store them in a priority queue. In each iteration, we pop the 4D limb bundle with the maximum score (based on Eqn. 8) from the queue and merge it into the 4D skeletons. In this merging process, all the 2D joints belonging to this bundle (from different views) should be labeled with the same person ID. However, since a newly added limb bundle may share a 4D joint with limb bundles that have already been assigned, conflicts arise when these 2D joints have already been labeled with different person IDs in different views during previous iterations, see Fig. 5(a). To eliminate such a conflict, we propose a simple yet effective strategy: we split the newly added limb bundle into smaller limb bundles according to the persons to which its joints have been assigned (Fig. 5(b)). We then re-compute the objective function of each small bundle and push them back into the priority queue for further assembling. If there is no conflict, we merge the bundle into the skeleton and label the 2D joints. We iterate popping and merging until the queue is empty (Fig. 5(c)).
+
+We call the above method the bundle Kruskal's algorithm. In the single-view case, there are no conflicts, and our method degenerates to the traditional Kruskal's algorithm, the well-known minimum spanning tree (MST) algorithm used in OpenPose [11].
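+
+A sketch of this priority-queue assembling with the conflict-splitting rule described above; all helper functions (`score`, `person_ids`, `split_by_person`, `merge`) are hypothetical stand-ins.
+
+```python
+import heapq
+
+def bundle_kruskal(bundles, score, person_ids, split_by_person, merge):
+    """Assemble 4D limb bundles into skeletons in descending score order."""
+    # max-score priority queue (heapq is a min-heap, so the score is negated; idx breaks ties)
+    heap = [(-score(b), idx, b) for idx, b in enumerate(bundles)]
+    heapq.heapify(heap)
+    next_idx, skeletons = len(bundles), {}
+    while heap:
+        _, _, bundle = heapq.heappop(heap)
+        ids = person_ids(bundle, skeletons)      # person IDs already assigned to its 2D joints
+        if len(ids) > 1:                         # conflict: joints labeled with different persons
+            for small in split_by_person(bundle, ids):
+                heapq.heappush(heap, (-score(small), next_idx, small))
+                next_idx += 1
+        else:
+            merge(bundle, skeletons)             # label all of its 2D joints with one person ID
+    return skeletons
+```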
+
+# 5.3. Parametric Optimization
+
+Based on the 4D skeleton assembling results over the 2D views, we can further optimize the full 3D body pose by fitting a parametric skeleton. We minimize the energy function
+
+$$
+E (\Theta) = w _ {2 D} E _ {2 D} + w _ {\text{shape}} E _ {\text{shape}} + w _ {\text{temp}} E _ {\text{temp}} \tag {10}
+$$
+
+where $E_{2D}$ is the data term aligning the 2D projections in each view to the detected joints, $E_{shape}$ enforces a human shape prior (e.g., bone length and symmetry), and $E_{temp}$ is a temporal smoothing term ( $w_{2D}$ , $w_{shape}$ and $w_{temp}$ are balancing weights, with $w_{temp} = 0$ if no temporal information exists). As this fitting process is a classic optimization step, please refer to [9, 44, 29] for details. Temporally, we track each person and use the average bone lengths of the first five high-confidence frames (visible in more than 3 cameras) as the bone length prior for that person in later frames. If a person is lost and re-appears, we simply treat him/her as a new person and re-calculate the bone lengths.
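+
+For illustration, the energy of Eq. 10 as a plain weighted sum; the term functions and weights are placeholders, not the paper's actual settings.
+
+```python
+def pose_energy(theta, e_2d, e_shape, e_temp,
+                w_2d=1.0, w_shape=1.0, w_temp=1.0, has_history=True):
+    """Eq. 10: data term + shape prior + temporal smoothing, with w_temp = 0 for the first frame."""
+    w_t = w_temp if has_history else 0.0
+    return w_2d * e_2d(theta) + w_shape * e_shape(theta) + w_t * e_temp(theta)
+```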
+
+# 6. Results
+
+In Fig. 6, we demonstrate the results of our system. Using only geometric information from sparse viewpoints, our method enables realtime and robust multi-person motion capture under severe occlusions (Fig. 6(a)), challenging poses (Fig. 6(b)), and subtle social interactions (Fig. 6(c)).
+
+# 6.1. Implementation Details
+
+The multi-view capture system consists of 5 synchronized industrial RGB cameras (with resolution $2048 \times 2048$ )
+
+
+Figure 5. Conflict handling in our skeleton assembling step. (a) A limb bundle to be added. It contains 3 parsing edges on 3 views. In this case, each parsing edge contains a joint still to be assembled (black node) and a joint already assembled (blue or red node) in previous iterations. A conflict arises here because the blue and red nodes belong to different person IDs. (b) We split the original limb bundle into smaller bundles according to the existing person IDs. (c) A possible final assembling result.
+
+and a single PC with one 3.20 GHz CPU and one NVIDIA TITAN RTX GPU. Our system achieves 30 fps motion capture for 5 persons. Specifically, for each frame, the preprocessing step (including demosaicing, undistortion, and resizing of the multi-view inputs) takes less than 1 ms, the CNN inference step takes 22.9 ms in total for 5 images, the 4D association step takes 11 ms, and the parametric optimization step takes less than 4 ms. Moreover, we ping-pong the CNN inference and the 4D association to achieve realtime performance with an affordable delay (60 ms). More details about the optimization parameters are provided in the supplementary material.
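+
+A back-of-the-envelope sketch of why these timings are compatible with 30 fps; the pipelining interpretation below is our reading of the ping-pong scheme, not a statement from the paper.
+
+```python
+# Reported per-frame timings, in milliseconds.
+preprocess, cnn, association, optimization = 1.0, 22.9, 11.0, 4.0
+frame_budget = 1000.0 / 30                    # ~33.3 ms per frame at 30 fps
+stage_a = preprocess + cnn                     # ~23.9 ms (GPU-bound)
+stage_b = association + optimization           # ~15.0 ms (CPU-bound)
+# With ping-pong pipelining the two stages overlap, so throughput is limited by the
+# slower stage; each frame's result then arrives roughly two frame periods after
+# capture, which is consistent with the reported ~60 ms delay.
+print(max(stage_a, stage_b) < frame_budget)    # True -> 30 fps is feasible
+print(2 * frame_budget)                        # ~66.7 ms pipeline latency bound
+```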
+
+Note that the 4D association pipeline is fully implemented on CPU. Also, in the CNN inference step, the input RGB images are resized to $368 \times 368$ , and the CNNs for keypoints and PAFs are re-implemented using TensorRT [40] for further acceleration.
+
+# 6.2. Dataset
+
+We contribute a new evaluation dataset for multi-person 3D skeleton tracking with ground-truth 3D skeletons captured by a commercial motion capture system, OptiTrack [1]. Compared with previous 3D human datasets [25, 21, 32, 24, 8, 2], our dataset focuses on more challenging scenarios such as close interactions and challenging motion. It contains 5 sequences, each around 20 seconds long, capturing a 2-4 person scene using 6 cameras. All actors wear black marker suits for ground-truth skeletal motion capture. With ground-truth 3D skeletons, our dataset enables more effective quantitative evaluation of both 2D parsing and 3D tracking algorithms. Besides evaluating our method on the proposed dataset, we also provide evaluation results on the Shelf and Panoptic Studio datasets following previous works [8, 7, 14].
+
+
+Figure 6. Results of our system. From top to bottom: input images, reprojections of the 3D humans, and 3D visualizations, respectively. (a) Our live captured data with fast motion (left), severe occlusion (middle) and a crowded scene (right); 5 views used. (b) Our dataset with textureless clothing and rich motion; 6 views used. (c) Panoptic Studio dataset with natural social interaction; 7 views used.
+
+# 6.3. Quantitative Comparison
+
+We compare with state-of-the-art methods quantitatively on both the Shelf dataset and our testing dataset. The quantitative comparison on the Shelf dataset is shown in Table 1. Benefiting from our 4D association formulation, we achieve more accurate results than both temporal tracking methods based on 3DPS [8, 6, 7, 16] and the appearance-based global optimization method [14].
+
+We also compare with [14] on our testing dataset in terms of 'precision' (the ratio of correct joints among all estimated joints) and 'recall' (the ratio of correct joints among all ground-truth joints). A joint is considered correct if its Euclidean distance to the ground-truth joint is less than a threshold of 0.2 m. As shown in Table 2, our method outperforms [14] under both metrics.
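+
+A sketch of how such joint-level precision and recall could be computed for one joint type; the nearest-neighbour matching rule is our assumption, as the paper does not spell it out.
+
+```python
+import numpy as np
+
+def precision_recall(pred, gt, threshold=0.2):
+    """pred, gt: (num_pred, 3) and (num_gt, 3) arrays of 3D joints of a single type, in meters."""
+    if len(pred) == 0 or len(gt) == 0:
+        return 0.0, 0.0
+    dists = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)   # (num_pred, num_gt)
+    correct_pred = (dists.min(axis=1) < threshold).sum()   # predictions close to some GT joint
+    covered_gt = (dists.min(axis=0) < threshold).sum()     # GT joints close to some prediction
+    return correct_pred / len(pred), covered_gt / len(gt)
+```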
+
+| Shelf | A1 | A2 | A3 | Avg |
+| --- | --- | --- | --- | --- |
+| Belagiannis et al. [6] | 66.1 | 65.0 | 83.2 | 71.4 |
+| †Belagiannis et al. [8] | 75.0 | 67.0 | 86.0 | 76.0 |
+| Belagiannis et al. [7] | 75.3 | 69.7 | 87.6 | 77.5 |
+| Ershadi-Nasab et al. [16] | 93.3 | 75.9 | 94.8 | 88.0 |
+| Dong et al. [14] | 97.2 | 79.5 | 96.5 | 91.1 |
+| *Dong et al. [14] | 98.8 | 94.1 | 97.8 | 96.9 |
+| †# Tanke et al. [39] | 99.8 | 90.0 | 98.0 | 96.0 |
+| †Ours (final) | 99.0 | 96.2 | 97.6 | 97.6 |
+
+Table 1. Quantitative comparison on the Shelf dataset using the percentage of correct parts (PCP) metric. \* means a method with appearance information, $\dagger$ means a method with temporal information, and $\#$ means accuracy without the head. 'A1'-'A3' correspond to the results of the three actors, respectively; the averaged result is in column 'Avg'.
+
+| Our Dataset | Dong et al. [14] | Ours (final) |
+| --- | --- | --- |
+| Precision (%) | 71.0 | 88.5 |
+| Recall (%) | 80.2 | 90.2 |
+
+Table 2. Comparison with [14] using our testing dataset.
+
+
+Figure 7. Comparison with the two-step pipeline. Top: association results; bottom: reprojections of the 3D pose. Note that the 3D pose reprojection generated by the two-step pipeline clearly deviates from the correct position due to false parsing.
+
+# 6.4. Qualitative Comparison
+
+To further demonstrate the advantages of our bottom-up system, we perform a qualitative comparison with the state-of-the-art method [14], which uses a top-down human pose detector [12] for single-view parsing. The qualitative results are shown in Fig. 8, from which we can see that the top-down method depends heavily on instance proposals, and may generate false positive human pose detections that deteriorate the cross-view matching performance (left case). Furthermore, per-view parsing can fail to infer correct human poses under severe occlusion, deteriorating the pose reconstruction results (right). Instead, thanks to relatively precise low-level features (e.g., keypoints) and our robust 4D
+
+
+Figure 8. Qualitative comparison with Dong et al. [14] on Shelf (left) and our captured data (right), both with 5 cameras. For each case, we show the association results and the reprojection of the 3D pose on two sample views. For the 3D visualization, we show a side-view rendering and a top-view rendering for clear comparison.
+
+association algorithm, the joints are associated more accurately in our results.
+
+| Shelf | A1 | A2 | A3 | Avg |
+| --- | --- | --- | --- | --- |
+| two-step | 98.1 | 83.8 | 97.6 | 93.1 |
+| w/o tracking | 96.5 | 86.8 | 97.0 | 93.4 |
+| Ours (final) | 99.0 | 96.2 | 97.6 | 97.6 |
+
+Table 3. Ablation study on the Shelf dataset. 'two-step' means first per-view parsing and then cross-view matching. 'w/o tracking' means we solve $\mathcal{G}_{3D}$ in each frame. Both 'two-step' and 'w/o tracking' use triangulation to infer 3D poses. Numbers are the percentage of correct parts (PCP).
+
+# 6.5. Ablation Study
+
+With/Without tracking. We first evaluate the tracking edges in the 4D graph. By triangulating 2D bodies into 3D skeletons directly using $\mathcal{G}_{3D}$ , we eliminate the use of tracking edges. The result is labeled 'w/o tracking' in Table 3. Without tracking edges, our method still exhibits competitive results and outperforms the state-of-the-art method [14] (93.4% vs. 91.1%). Moreover, with tracking edges our full 4D association method is even more robust in cluttered scenes, as 'Ours(final)' in Table 3 shows.
+
+Comparison with the two-step pipeline. We implement a two-step pipeline for comparison, using [11] to parse humans in each view, followed by cross-view human matching using our clique searching method with the objective function defined on the parsed bodies. Note that no temporal information is used, and 3D poses are obtained by triangulation. The result is shown as 'two-step' in Table 3. As the table shows, our per-frame $\mathcal{G}_{3D}$ solution 'w/o tracking' performs better than the two-step pipeline, especially on actor 'A2'. To show our robustness to per-view parsing ambiguity, we use only 3 views to reconstruct 2 persons (Fig. 7). A wrong parsing result in one view harms the inferred 3D pose, especially when only very sparse views are available.
+
+# 7. Conclusion
+
+We proposed a realtime multi-person motion capture method using sparse viewpoints. Built directly on top of low-level detected features, we formulated the parsing, matching, and tracking problems simultaneously in a unified 4D association graph framework. The new 4D association formulation not only enabled realtime motion capture performance, but also achieved state-of-the-art accuracy, especially in crowded and close-interaction scenarios. Moreover, we contributed a new testing dataset for multi-person motion capture with ground-truth 3D poses. Our system narrows the gap between laboratory markerless motion capture systems and industrial applications in real-world scenarios. Finally, our novel 4D graph formulation may stimulate future research on this topic.
+
+Acknowledgements. This paper is supported by the National Key Research and Development Program of China [2018YFB2100500] and the NSFC No.61531014 and No.61861166002.
+
+# References
+
+[1] Optitrack marker mocap. https://www.optitrack.com.
+[2] Nvd Aa, X Luo, G Giezeman, R Tan, and R Veltkamp. Utrecht multi-person motion (umpm) benchmark: a multiperson dataset with synchronized video and motion capture data for evaluation of articulated human motion and interaction. In ICCV Workshop HICV, 2011.
+[3] Mykhaylo Andriluka, Umar Iqbal, Eldar Insafutdinov, Leonid Pishchulin, Anton Milan, Juergen Gall, and Bernt Schiele. Posetrack: A benchmark for human pose estimation and tracking. In CVPR, 2018.
+[4] Praneet C Bala, Benjamin R Eisenreich, Seng Bum Michael Yoo, Benjamin Y Hayden, Hyun Soo Park, and Jan Zimmermann. Openmonkeystudio: Automated markerless pose estimation in freely moving macaques. bioRxiv, 2020.
+[5] Jonathan T Barron. A general and adaptive robust loss function. In CVPR, 2019.
+[6] Vasileios Belagiannis, Sikandar Amin, Mykhaylo Andriluka, Bernt Schiele, Nassir Navab, and Slobodan Ilic. 3d pictorial structures for multiple human pose estimation. In CVPR, 2014.
+[7] Vasileios Belagiannis, Sikandar Amin, Mykhaylo Andriluka, Bernt Schiele, Nassir Navab, and Slobodan Ilic. 3d pictorial structures revisited: Multiple human pose estimation. TPAMI, 2016.
+[8] Vasileios Belagiannis, Xinchao Wang, Bernt Schiele, Pascal Fua, Slobodan Ilic, and Nassir Navab. Multiple human pose estimation with temporally consistent 3d pictorial structures. In ECCV Workshop, 2014.
+[9] Federica Bogo, Angjoo Kanazawa, Christoph Lassner, Peter Gehler, Javier Romero, and Michael J Black. Keep it smpl: Automatic estimation of 3d human pose and shape from a single image. In ECCV, 2016.
+[10] Lewis Bridgeman, Marco Volino, Jean-Yves Guillemaut, and Adrian Hilton. Multi-person 3d pose estimation and tracking in sports. In CVPR Workshop, 2019.
+[11] Zhe Cao, Gines Hidalgo, Tomas Simon, Shih-En Wei, and Yaser Sheikh. OpenPose: realtime multi-person 2d pose estimation using part affinity fields. TPAMI, 2019.
+[12] Yilun Chen, Zhicheng Wang, Yuxiang Peng, Zhiqiang Zhang, Gang Yu, and Jian Sun. Cascaded Pyramid Network for Multi-Person Pose Estimation. In CVPR, 2018.
+[13] John E Dennis Jr and Roy E Welsch. Techniques for nonlinear least squares and robust regression. Communications in Statistics-Simulation and Computation, 1978.
+[14] Junting Dong, Wen Jiang, Qixing Huang, Hujun Bao, and Xiaowei Zhou. Fast and robust multi-person 3d pose estimation from multiple views. In CVPR, 2019.
+[15] Ahmed Elhayek, Edilson de Aguiar, Arjun Jain, J Thompson, Leonid Pishchulin, Mykhaylo Andriluka, Christoph Bregler, Bernt Schiele, and Christian Theobalt. Marconiconvnet-based marker-less motion capture in outdoor and indoor scenes. TPAMI, 2017.
+[16] Sara Ershadi-Nasab, Erfan Noury, Shohreh Kasaei, and Esmaeil Sanaei. Multiple human 3d pose estimation from multiview images. Multimedia Tools and Applications, 2018.
+
+[17] Hao-Shu Fang, Shuqin Xie, Yu-Wing Tai, and Cewu Lu. RMPE: Regional multi-person pose estimation. In ICCV, 2017.
+[18] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In ICCV, 2017.
+[19] Eldar Insafutdinov, Mykhaylo Andriluka, Leonid Pishchulin, Siyu Tang, Evgeny Levinkov, Bjoern Andres, and Bernt Schiele. Arttrack: Articulated multi-person tracking in the wild. In CVPR, 2017.
+[20] Eldar Insafutdinov, Leonid Pishchulin, Bjoern Andres, Mykhaylo Andriluka, and Bernt Schiele. Deepercut: A deeper, stronger, and faster multi-person pose estimation model. In ECCV, 2016.
+[21] Catalin Ionescu, Dragos Papava, Vlad Olaru, and Cristian Sminchisescu. Human3. 6m: Large scale datasets and predictive methods for 3d human sensing in natural environments. TPAMI, 2013.
+[22] Umar Iqbal, Anton Milan, and Juergen Gall. Posetrack: Joint multi-person pose estimation and tracking. In CVPR, 2017.
+[23] Jerome Berclaz, Francois Fleuret, Engin Turetken, and Pascal Fua. Multiple object tracking using k-shortest paths optimization. TPAMI, 2011.
+[24] Hanbyul Joo, Tomas Simon, Xulong Li, Hao Liu, Lei Tan, Lin Gui, Sean Banerjee, Timothy Godisart, Bart Nabbe, Iain Matthews, et al. Panoptic studio: A massively multiview system for social interaction capture. TPAMI, 2019.
+[25] Vahid Kazemi, Magnus Burenius, Hossein Azizpour, and Josephine Sullivan. Multi-view body part recognition with random forests. In BMVC, 2013.
+[26] Lipeng Ke, Ming-Ching Chang, Honggang Qi, and Siwei Lyu. Multi-scale structure-aware network for human pose estimation. In ECCV, 2018.
+[27] Muhammed Kocabas, Salih Karagoz, and Emre Akbas. Multiposenet: Fast multi-person pose estimation using pose residual network. In ECCV, 2018.
+[28] Jiefeng Li, Can Wang, Hao Zhu, Yihuan Mao, Hao-Shu Fang, and Cewu Lu. Crowdpose: Efficient crowded scenes pose estimation and a new benchmark. In CVPR, 2019.
+[29] Kun Li, Nianhong Jiao, Yebin Liu, Yangang Wang, and Jingyu Yang. Shape and pose estimation for closely interacting persons using multi-view images. In CGF, 2018.
+[30] Yebin Liu, Juergen Gall, Carsten Stoll, Qionghai Dai, Hans-Peter Seidel, and Christian Theobalt. Markerless motion capture of multiple characters using multiview image segmentation. TPAMI, 2013.
+[31] Yebin Liu, Carsten Stoll, Juergen Gall, Hans-Peter Seidel, and Christian Theobalt. Markerless motion capture of interacting characters using multi-view image segmentation. In CVPR, 2011.
+[32] Dushyant Mehta, Helge Rhodin, Dan Casas, Pascal Fua, Oleksandr Sotnychenko, Weipeng Xu, and Christian Theobalt. Monocular 3d human pose estimation in the wild using improved cnn supervision. In 3DV, 2017.
+[33] Dushyant Mehta, Oleksandr Sotnychenko, Franziska Mueller, Weipeng Xu, Srinath Sridhar, Gerard Pons-Moll, and Christian Theobalt. Single-shot multi-person 3d pose estimation from monocular rgb. In 3DV, 2018.
+
+[34] Xuecheng Nie, Jiashi Feng, Jianfeng Zhang, and Shuicheng Yan. Single-stage multi-person pose machines. In ICCV, 2019.
+[35] George Papandreou, Tyler Zhu, Liang-Chieh Chen, Spyros Gidaris, Jonathan Tompson, and Kevin Murphy. Personlab: Person pose estimation and instance segmentation with a bottom-up, part-based, geometric embedding model. In ECCV, 2018.
+[36] Leonid Pishchulin, Eldar Insafutdinov, Siyu Tang, Bjoern Andres, Mykhaylo Andriluka, Peter V Gehler, and Bernt Schiele. Deepcut: Joint subset partition and labeling for multi person pose estimation. In CVPR, 2016.
+[37] Yaadhav Raaj, Haroon Idrees, Gines Hidalgo, and Yaser Sheikh. Efficient online multi-person 2d pose tracking with recurrent spatio-temporal affinity fields. In CVPR, 2019.
+[38] Jie Song, Bjoern Andres, Michael Black, Otmar Hilliges, and Siyu Tang. End-to-end learning for graph decomposition. In ICCV, 2019.
+[39] Julian Tanke and Juergen Gall. Iterative greedy matching for 3d human pose tracking from multiple views. In GCPR, 2019.
+[40] Han Vanholder. Efficient inference with TensorRT, 2016.
+[41] Shih-En Wei, Varun Ramakrishna, Takeo Kanade, and Yaser Sheikh. Convolutional pose machines. In CVPR, 2016.
+[42] Robin J. Wilson. Introduction to graph theory, fourth edition, 1996.
+[43] Bin Xiao, Haiping Wu, and Yichen Wei. Simple baselines for human pose estimation and tracking. In ECCV, 2018.
+[44] Andrei Zanfir, Elisabeta Marinoiu, and Cristian Sminchisescu. Monocular 3d pose and shape estimation of multiple people in natural scenes-the importance of multiple scene constraints. In CVPR, 2018.
+[45] Andrei Zanfir, Elisabeta Marinoiu, Mihai Zanfir, Alin-Ionut Popa, and Cristian Sminchisescu. Deep network for the integrated 3d sensing of multiple people in natural images. In NIPS, 2018.
\ No newline at end of file
diff --git a/4dassociationgraphforrealtimemultipersonmotioncaptureusingmultiplevideocameras/images.zip b/4dassociationgraphforrealtimemultipersonmotioncaptureusingmultiplevideocameras/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..116ab0206e0dc33072fc485ab16b5735c2bcdc1d
--- /dev/null
+++ b/4dassociationgraphforrealtimemultipersonmotioncaptureusingmultiplevideocameras/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:00f9db78dbbcdfad6689e9f8a6fd9b5471a59564250e8a082c5e1089227d9751
+size 699522
diff --git a/4dassociationgraphforrealtimemultipersonmotioncaptureusingmultiplevideocameras/layout.json b/4dassociationgraphforrealtimemultipersonmotioncaptureusingmultiplevideocameras/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..790113cbed99662147babec059f3ba588e29a45d
--- /dev/null
+++ b/4dassociationgraphforrealtimemultipersonmotioncaptureusingmultiplevideocameras/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:24d4d1ba2dd3afa7ac265693ab06a1c09d58d9ec9d70db98d03a5e12a1bd2cad
+size 451342
diff --git a/4dvisualizationofdynamiceventsfromunconstrainedmultiviewvideos/d86ea0ed-cf0f-4298-b311-281c2be5518f_content_list.json b/4dvisualizationofdynamiceventsfromunconstrainedmultiviewvideos/d86ea0ed-cf0f-4298-b311-281c2be5518f_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..132c00a9479f30ab5eb9741535f58cd875d2b3b3
--- /dev/null
+++ b/4dvisualizationofdynamiceventsfromunconstrainedmultiviewvideos/d86ea0ed-cf0f-4298-b311-281c2be5518f_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:44b72361a2b62b16de4071fbbd7a5ae149af1c376d0088d77996b113c34a4b2e
+size 70060
diff --git a/4dvisualizationofdynamiceventsfromunconstrainedmultiviewvideos/d86ea0ed-cf0f-4298-b311-281c2be5518f_model.json b/4dvisualizationofdynamiceventsfromunconstrainedmultiviewvideos/d86ea0ed-cf0f-4298-b311-281c2be5518f_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..fb2af52cc599c4a8b11c9ba809af4fb9d65faee2
--- /dev/null
+++ b/4dvisualizationofdynamiceventsfromunconstrainedmultiviewvideos/d86ea0ed-cf0f-4298-b311-281c2be5518f_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ebb429cd2df8f36e613c578d308ae280c9bf3d8ba57a7b6bc14dfb56d7f97ec9
+size 87718
diff --git a/4dvisualizationofdynamiceventsfromunconstrainedmultiviewvideos/d86ea0ed-cf0f-4298-b311-281c2be5518f_origin.pdf b/4dvisualizationofdynamiceventsfromunconstrainedmultiviewvideos/d86ea0ed-cf0f-4298-b311-281c2be5518f_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c0f6c59a7674496312128bb31df9e4ba9be51aa0
--- /dev/null
+++ b/4dvisualizationofdynamiceventsfromunconstrainedmultiviewvideos/d86ea0ed-cf0f-4298-b311-281c2be5518f_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9bc7255db44353def6e024cc0bf3a05c49fb976a975c6ebb29120051eabdfe20
+size 4872763
diff --git a/4dvisualizationofdynamiceventsfromunconstrainedmultiviewvideos/full.md b/4dvisualizationofdynamiceventsfromunconstrainedmultiviewvideos/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..a1c04b28956615bbce242252f98e3989566518db
--- /dev/null
+++ b/4dvisualizationofdynamiceventsfromunconstrainedmultiviewvideos/full.md
@@ -0,0 +1,318 @@
+# 4D Visualization of Dynamic Events from Unconstrained Multi-View Videos
+
+Aayush Bansal Minh Vo Yaser Sheikh Deva Ramanan Srinivasa Narasimhan Carnegie Mellon University
+
+{aayushb,mpvo,yaser,deva,srinivas}@cs.cmu.edu
+
+http://www.cs.cmu.edu/~aayushb/Open4D/
+
+
+Figure 1. We can create virtual cameras that facilitate: (1) freezing the time and exploring views (red); (2) freezing the view and moving through time (green); and (3) varying both time and view (blue).
+
+# Abstract
+
+We present a data-driven approach for 4D space-time visualization of dynamic events from videos captured by multiple hand-held cameras. Key to our approach is the use of scene-specific self-supervised neural networks to compose the static and dynamic aspects of an event. Though the event is captured from discrete viewpoints, this model enables us to move continuously through its space-time. It allows us to create virtual cameras that facilitate: (1) freezing the time and exploring views; (2) freezing a view and moving through time; and (3) simultaneously changing both time and view. We can also edit the videos and reveal occluded objects in a given view if they are visible in any of the other views. We validate our approach on challenging in-the-wild events captured using up to 15 mobile cameras.
+
+# 1. Introduction
+
+Imagine going back in time and revisiting crucial moments of your life, such as your wedding ceremony, your graduation, or the first birthday of your child, immersively and from any viewpoint. The prospect of building such a virtual time machine [40] has become increasingly
+
+realizable with the advent of affordable, high-quality smartphone cameras producing extensive collections of social video data. Unfortunately, people do not benefit from this broader set of captures of their social events. When it comes to looking back, we are likely to watch only one or two videos even when hundreds were captured. We present a data-driven approach that leverages all perspectives to enable a more complete exploration of the event. With our approach, each extra captured perspective leads to a more complete experience. We seek to automatically organize the disparate visual data into a comprehensive four-dimensional environment (3D space and time). Complete control of the spatiotemporal aspects not only enables us to see a dynamic event from any perspective but also allows geometrically consistent content editing. This functionality unlocks many potential applications in the movie industry and in consumer devices, especially as virtual reality headsets become more popular by the day. Figure 1 shows example virtual camera views synthesized by our approach for an event captured with multi-view videos.
+
+Prior work on virtualized reality [28, 30, 32] has primarily been restricted to studio setups with tens or even hundreds of synchronized cameras. Four hundred hours of video data are uploaded to YouTube every minute. This feat has become possible because of the commercial success of high-
+
+Figure 2. Comparison to existing work: Given a dynamic event captured using 10 phones, we freeze time and explore views at two time instances (Frame-450 and Frame-1200). We use standard Structure-from-Motion (SfM) [42, 43] to reconstruct the camera trajectory. As shown in the first column, SfM treats dynamic information as outliers for rigid reconstruction. We use additional cues such as 2D keypoints [4], a statistical human body model [36], and human association [51], along with the outputs of SfM, to generate dynamic information for these two time instances (second and third columns). We call this SfM+humans. These outputs lack realism. Additionally, the reconstruction fails for non-Lambertian surfaces (see glass windows), non-textured regions (see umbrellas), and shadows (around humans). Our approach, on the other hand, can densely synthesize the various static and dynamic components, as shown in the fourth and fifth columns for the same moments.
+
+quality hand-held cameras such as iPhones or GoPros. Many public events are easily captured from multiple perspectives by different people. Despite this new form of big visual data, reconstructing and rendering the dynamic aspects has mostly been limited to studios and does not extend to in-the-wild captures with hand-held cameras. Currently, there exists no method for fusing the information from multiple cameras into a single comprehensive model that could facilitate content sharing. This gap exists largely because the mathematics of dynamic 3D reconstruction [20] is not well-posed. Object segmentations [19] are far from being consistently recoverable for 3D reconstruction [56]. Large-scale analysis of internet images exists for static scenes [24, 42, 43, 46] alone, and ignores the interesting dynamic events (as shown in Figure 2, first column).
+
+We pose the problem of 4D visualization from in-the-wild captures within an image-based rendering paradigm utilizing large-capacity parametric models. Parametric models based on convolutional neural networks (CNNs) can circumvent the requirement of explicitly computing a comprehensive model [2, 5] for modeling and fusing static and dynamic scene components. Key to our approach is the use of scene-specific self-supervised CNNs to compose the static and dynamic parts of the event. This data-driven model enables us to extract the nuances and details of a dynamic event. We work with in-the-wild dynamic events captured from multiple mobile phone cameras. These multiple views have arbitrary baselines and unconstrained camera poses.
+
+Despite impressive progress with CNN-based scene
+
+reconstruction [53, 26, 33, 52], noticeable holes and artifacts are often visible, especially for large textureless regions or non-Lambertian surfaces. We accumulate the spatiotemporal information available across multiple videos to capture content that is not visible at a particular time instant. This accumulation helps us to capture even large non-textured regions (umbrellas in Figure 2) or non-Lambertian surfaces (glass windows in Figure 2). Finally, complete control over the static and dynamic components of a scene, and over viewpoint and time, enables user-driven content editing in the videos. At public events, one often encounters random movement obstructing the cameras' view of the event. Traditionally, nothing can be done about such spurious content in the captured data. The complete 4D control in our system enables the user to remove unwanted occluders and obtain a clearer view of the actual event using multi-view information.
+
+# 2. Related Work
+
+There is a long history of 4D capture systems [30] for experiencing immersive virtualized reality [14], especially the ability to see from any viewpoint the viewer wants, irrespective of the physical capture setup.
+
+4D Capture in Studios: The ability to capture depth maps from a small-baseline stereo pair via 3D geometry techniques [20] led to the development of video-rate stereo machines [32] mounting six cameras with small baselines. This ability to capture dense depth maps motivated a generation of researchers to develop closed studios [28, 31, 39, 58] that can precisely capture the dynamic events happening within them.
+
+
+Figure 3. Overview: We pose the problem of 4D visualization of dynamic events captured from multiple cameras as a data-driven composition of a static background (top) and an instantaneous foreground (middle) to generate the final output (bottom). Importantly, the data-driven composition enables us to capture aspects that may otherwise be missing in the inputs, e.g., parts of the human body are missing in the first and third columns, and parts of the background are missing in the second row.
+
+A crucial requirement in these studios is the use of synchronized video cameras [31]. This line of research is restricted to a few places in the world with access to proper studios and camera systems.
+
+Beyond Studios: The advent of mobile phones has revolutionized the capture scenario. Each one of us possesses a high-definition smartphone camera. Often there are more cameras at a place than there are people. Many public events are captured by different people from various perspectives. This motivated researchers to use in-the-wild data for 3D reconstruction [23, 46] and 4D visualization [2, 5]. A hybrid of geometry [20] and image-based rendering [44] approaches has been used to reconstruct 3D scenes from pictures [7]. Photo tourism [46] and the works following it [1, 15, 16, 24, 45] use internet-scale images to reconstruct architectural sites. These approaches have led to the development of immersive 3D visualization of static scenes.
+
+The work on 3D reconstruction treats dynamic information as outliers and reconstructs the static components alone. Additional cues such as visual hulls [13, 18, 37], 3D body scans [5, 6], or a combination of both [3, 49] are used to capture dynamic aspects (esp. human performances) from multi-view videos. Hasler et al. [21] use a markerless method combining pose estimation and segmentation. Vedula et al. [48] compute scene shape and scene flow for 4D modeling. Ballan et al. [2] model foreground subjects as video sprites on billboards. However, these methods assume a single actor in the multi-view videos. Recent approaches [10, 50] are not restricted by this assumption but perform only sparse reconstruction.
+
+CNN-based Image Synthesis: Data-driven approaches [9, 17, 27, 54] using convolutional neural networks [35] have led to impressive results in image synthesis. These results inspired a large body of work [11, 12, 29, 38, 47, 57] on
+
+continuous view synthesis for small baseline shifts. Hedman et al. [22] extended this line of work to free-viewpoint capture. However, these methods are currently applicable to static scenes only. We combine insights from CNN-based image synthesis and earlier work on 4D visualization to build a data-driven 4D Browsing Engine that makes minimal assumptions about the content of the multi-view videos.
+
+# 3. 4D Browsing Engine
+
+We are given $N$ camera views with extrinsic parameters $\{C_1,C_2,\dots,C_N\}$ and intrinsic parameters $\{M_1,M_2,\dots,M_N\}$ . Our goal is to generate a virtual camera view $C$ that does not coincide with any of these $N$ cameras.
+
+In this work, we accumulate long-term multiview spatiotemporal information to densely reconstruct the static background, and combine it with instantaneous information. We pose this problem as a self-supervised composition of the static background and the instantaneous foreground. Figure 3 shows an overview of our approach via a virtual camera that freezes time and explores views. We begin by describing the overall fusion architecture in Section 3.1, then describe the modules for computing the foreground and background components in Section 3.2, and finally discuss the model in Section 3.3.
+
+# 3.1. Self-Supervised Composition
+
+We use a data-driven approach to learn the fusion of a static background, $B$ , and a dynamic foreground, $F$ , to generate the required target view for given camera parameters. Since no ground truth or manual annotations exist, we train a convolutional neural network (CNN) in a self-supervised manner by reconstructing a known held-out camera view, $C$ , from the remaining $N - 1$ views, thereby learning a mapping $G: (B, F) \to C$ . We use three losses to learn this mapping: (1) a reconstruction loss; (2) an adversarial loss; and (3) a frequency loss.
+
+Reconstruction Loss: We use a standard $l_{1}$ reconstruction loss to minimize the reconstruction error on the content with paired data samples $\{((b_i, f_i), c_i)\}$ where $b_i \in B$ , $f_i \in F$ , and $c_i \in C$ :
+
+$$
+\min _ {G} L _ {r} = \sum_ {i} \left| \left| c _ {i} - G \left(b _ {i}, f _ {i}\right) \right| \right| _ {1} \tag {1}
+$$
+
+Adversarial Loss: Recent work [17] has shown that the learned mapping can be improved by tuning it with a discriminator $D$ that is adversarially trained to distinguish real samples $c_{i}$ from generated samples $G(b_{i},f_{i})$ :
+
+$$
+\begin{array}{l} \min _ {G} \max _ {D} L _ {a d v} (G, D) = \sum_ {i} \log D (c _ {i}) + \\ \sum_ {i} \log (1 - D (G (b _ {i}, f _ {i}))) \tag {2} \\ \end{array}
+$$
+
+
+Figure 4. Instantaneous Foreground Estimation: We begin by estimating disparity for each stereo pair using an off-the-shelf disparity estimation approach [52]. We use $\binom{N}{2}$ stereo pairs and reproject them to the target view using standard 3D geometry [20]. We use dynamic event consistency to select appropriate reprojected views among the $\binom{N}{2}$ candidates (marked green in the middle). The dynamic event consistency is computed using the output of SfM+humans (marked yellow in the bottom row). Finally, we compute a per-pixel max, weighted mean, and median of the selected views. Collectively, these represent the instantaneous foreground information along with SfM+humans (shown in the bottom row).
+
+Frequency Loss: We enforce a frequency-based loss via the fast Fourier transform to learn the appropriate frequency content and to avoid generating spurious high frequencies when ambiguities arise (inconsistent foreground and background inputs):
+
+$$
+\min _ {G} L _ {f r} = \sum_ {i} | | \mathscr {F} (c _ {i}) - \mathscr {F} (G (b _ {i}, f _ {i})) | | _ {1} \tag {3}
+$$
+
+where $\mathscr{F}$ is the fast Fourier transform. The overall optimization combines Eqs. 1, 2, and 3:
+
+$$
+L = \lambda_ {r} L _ {r} + \lambda_ {a d v} L _ {a d v} + \lambda_ {f r} L _ {f r}
+$$
+
+where $\lambda_r = \lambda_{fr} = 100$ and $\lambda_{adv} = 1$ . Explicitly using the background and foreground for the target view makes the model independent of explicit camera parameters.
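+
+A hedged PyTorch-style sketch of the combined generator objective (Eqs. 1-3); using the non-saturating adversarial form and assuming the discriminator outputs a probability in [0, 1] are choices on our part, not details from the paper.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def generator_loss(fake, real, disc, lam_r=100.0, lam_adv=1.0, lam_fr=100.0):
+    """fake = G(b, f); real = held-out view c; disc returns a probability per image."""
+    l_rec = F.l1_loss(fake, real)                                        # reconstruction (Eq. 1)
+    d_fake = disc(fake)
+    l_adv = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))      # adversarial (Eq. 2)
+    l_freq = (torch.fft.fft2(real) - torch.fft.fft2(fake)).abs().mean()  # frequency (Eq. 3)
+    return lam_r * l_rec + lam_adv * l_adv + lam_fr * l_freq
+```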
+
+# 3.2. Intermediate Data Generation
+
+We now describe our approach to estimate the dynamic foreground and static background information that is
+
+
+Figure 5. Static Background Estimation: We generate images for the target camera pose at all time instants (reprojected views for different camera poses over time). A per-pixel median of the images over a large temporal window for the target camera pose filters out the dynamic components. We show the estimated background images in the bottom row for the three camera poses.
+
+used as an input to the neural network (Section 3.3). We start with pre-processing of multiple camera views, and then use it to compute background and foreground information.
+
+Temporal Alignment & Correspondences: We establish frame-level temporal alignment for all cameras using spatiotemporal bundle adjustment [50]. We estimate pixel-level correspondences between a stereo pair using an off-the-shelf disparity estimation approach [52]. While these correspondences can be noisy, multiple views constrain a better selection of points across the $\binom{N}{2}$ stereo pairs.
+
+Instantaneous Foreground Estimation: We build foreground estimates at a given time using stereo pairs. We use the estimated disparity [52] to warp to the target view. Figure 4 shows different reprojected views from various stereo pairs. Since we have no control over camera placements, the disparities from the various $\binom{N}{2}$ stereo pairs are often noisy and cannot be naively used to synthesize the target view in all conditions. Sparse cameras, large stereo baselines, bad stereo pairs, or errors in camera poses may result in misaligned frames (shown in Figure 4-middle) for the target camera pose. Therefore we enforce dynamic event consistency via 3D reconstruction to select the five best reprojected views to compose the instantaneous foreground information.
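+
+As a rough illustration of this reprojection step, the sketch below (under the assumption of a rectified stereo pair, known intrinsics and extrinsics, and simple nearest-pixel splatting) converts disparity to depth, backprojects the pixels, and forward-warps them into the target camera; z-buffering, occlusion handling, and hole filling are omitted.
+
+```python
+import numpy as np
+
+def reproject_to_target(img, disparity, K_src, K_tgt, R, t, focal, baseline):
+    """Forward-warp a source view into a target camera using stereo disparity.
+    img: (H, W, 3); disparity: (H, W); R, t: source-to-target rotation and translation."""
+    h, w = disparity.shape
+    u, v = np.meshgrid(np.arange(w), np.arange(h))
+    z = focal * baseline / np.clip(disparity, 1e-3, None)         # depth from rectified-pair disparity
+
+    # backproject pixels into 3D points in the source camera frame
+    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
+    pts = (np.linalg.inv(K_src) @ pix.T) * z.reshape(1, -1)       # 3 x N
+
+    # rigid transform into the target camera frame and project
+    pts_t = R @ pts + t.reshape(3, 1)
+    proj = K_tgt @ pts_t
+    proj = proj[:2] / np.clip(proj[2:], 1e-6, None)
+
+    # nearest-pixel splatting into the target image (no z-buffer or hole filling)
+    out = np.zeros_like(img)
+    ut, vt = np.round(proj[0]).astype(int), np.round(proj[1]).astype(int)
+    valid = (ut >= 0) & (ut < w) & (vt >= 0) & (vt < h) & (pts_t[2] > 0)
+    out[vt[valid], ut[valid]] = img.reshape(-1, img.shape[-1])[valid]
+    return out
+```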
+
+Dynamic Event Consistency: We use a previous approach to perform long-term 3D human tracking across multiple views [51]. The 3D tracking provides a rough 3D estimate of the humans from different views. Together with the output of the 3D background reconstruction from SfM and MVS [42, 43], we collectively call this SfM+humans. While not a realistic and precise
+
+
+Figure 6. Freeze a view and Move in time: We show 4 frames from a 3-minute-long generated video of a stationary camera. This sequence is captured using 10 phones. The top row shows the output generated using SfM+humans. The second row shows the intermediate output using instantaneous information. We observe missing foreground and background details in the first and second rows. The third row shows the output of our approach, which consistently generates both the background and the foreground. We also compare our outputs to a ground-truth held-out camera sequence in the fourth row. Finally, we show a few close-ups in the last row. Our approach not only captures the humans well but also contains detailed information such as flowing dresses and flowers. We are, however, not able to capture the sun's glare at this location, as we compose the output from views at other locations and do not explicitly parameterize illumination.
+
+output by itself, such a 3D reconstruction is sufficient to rank the various stereo pairs that are required to generate a target camera view; this is the main purpose of SfM+humans. We compute the distance between the various reprojections and SfM+humans. This distance is computed using the Conv-5 features of an ImageNet [8] pre-trained AlexNet model [34]. We use the top-5 scoring views for composing an instantaneous foreground image. As shown in Figure 4, we find good stereo pairs using SfM+humans (marked yellow). We compute a per-pixel max, weighted-mean, and median using the top-5 ranked stereo pairs (marked green). These images, along with SfM+humans, collectively represent the instantaneous information (Figure 4-bottom).
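+
+A minimal sketch of this view-selection step is shown below; it assumes the candidate reprojections and the SfM+humans rendering are already resized and ImageNet-normalized, treats the output of `alexnet.features` as the Conv-5 feature, and substitutes an unweighted mean for the weighted mean.
+
+```python
+import torch
+import torchvision
+
+# ImageNet pre-trained AlexNet; its convolutional trunk stands in for the "Conv-5" feature extractor
+alexnet = torchvision.models.alexnet(pretrained=True).features.eval()
+
+def rank_and_compose(reprojections, sfm_humans_render, k=5):
+    """reprojections: (V, 3, H, W) candidate warped views; sfm_humans_render: (3, H, W).
+    Returns per-pixel max / mean / median over the k views closest to the SfM+humans rendering."""
+    with torch.no_grad():
+        ref = alexnet(sfm_humans_render.unsqueeze(0))
+        feats = alexnet(reprojections)
+        dists = (feats - ref).flatten(1).norm(dim=1)     # one feature-space distance per view
+    top = reprojections[dists.argsort()[:k]]             # k best-aligned reprojections
+
+    fg_max = top.max(dim=0).values
+    fg_mean = top.mean(dim=0)                            # stand-in for the weighted mean
+    fg_median = top.median(dim=0).values
+    return fg_max, fg_mean, fg_median
+```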
+
+Static Background Estimation: We accumulate long-term
+
+spatiotemporal information to compute the static background for a target camera view. The intrinsic and extrinsic parameters from the $N$ physical cameras enable us to create views over a large temporal window of $[0, t]$ for a target camera position. Figure 5 shows the creation of virtual cameras for various poses and time instants. We estimate a static background by computing a median of the different views for a given camera pose. Computing the median over a large temporal window for a given camera position enables us to capture non-textured and non-Lambertian stationary surfaces in a scene (see Figure 5).
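+
+The background estimate itself reduces to a per-pixel temporal median over the views rendered at the target pose, e.g.:
+
+```python
+import numpy as np
+
+def estimate_static_background(reprojected_frames):
+    """Per-pixel median over a large temporal window of frames rendered at the target
+    camera pose (shape (T, H, W, 3)); moving people are filtered out and the stationary
+    scene, including non-textured surfaces, remains."""
+    return np.median(reprojected_frames, axis=0)
+```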
+
+# 3.3. Stacked Multi-Stage CNN
+
+We now describe the neural network architecture that composes the static background and the instantaneous foreground.
+
+Figure 7. Many people and their unrestricted movement: We captured a Jiu-Jitsu retreat event that was witnessed by more than 30 people. We show 4 frames of two virtual cameras (freeze a view and move in time) and contrast them with the held-out camera sequences (ground truth). We also show close-ups at various locations and compare them with the ground truth. We capture various nuances despite people wearing different clothing, taking different poses, and being involved in unchoreographed activities.
+
+We use a modified U-Net [41] architecture that takes background and foreground information as input and outputs the image. Most consumer phones can capture $1080p$ video at 60 fps. Training a neural network that combines the various background and foreground information with such hi-res images requires high-capacity models. These models need prohibitive amounts of memory, and hence we use a stacked multi-stage CNN for an effective composition. We use a high-capacity model for low-res image generation that learns the overall structure, and improve the resolution with multiple stages.
+
+We train three models for three different resolutions, namely: (1) low-res $(270\times 480)$; (2) mid-res $(540\times 960)$; and (3) hi-res $(1080\times 1920)$. These models are trained independently and form the stages of our formulation. At test time, we use them sequentially, going from low-res to mid-res to hi-res outputs. The median channel of the foreground information for the mid-res model is replaced by a $2\times$ upsampled output of the low-res model. Similarly, the median channel of the foreground information for the hi-res model is replaced by a $2\times$ upsampled output of the mid-res model.
+
+Figure 8. User-Controlled Manipulation: We show two examples of user-controlled manipulation and editing in videos. In the top row, a user selects a mask to see the occluded blue-shirt person (behind the red-shirt person). There is no way to infer this information from a single view; however, multi-view information allows us not only to see the occluded human but also to get a sense of the activity he is doing. We show frames from 2 seconds of video. In the middle row, we want to see the part of the scene behind the blue-shirt person who is disoccluded above. This is an example of seeing behind a second-order occlusion. While not as sharp as the first-order occlusion result, we can still see green grass and a white bench in the background with a person moving. This scenario is challenging not only because of the second-order occlusion but also because of the larger distance from the cameras. In the bottom row, a user can remove the foreground person by marking a single frame in the video. Our system associates this mask with all the frames in the video and edits them to show background in place of the human. We show frames of the edited video (20 seconds long).
+
+We artificially create a low-resolution median foreground image (down-sampled by a factor of 2) during training to effectively utilize these modifications to the mid-res and hi-res models at test time. We provide more details on the stacked multi-stage composition on our project page.
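+
+A minimal sketch of the test-time cascade is given below; it assumes each stage takes a background tensor and a stacked foreground tensor, that the median image occupies a known channel slice of that stack, and that the upsampling is bilinear (all of which are illustrative assumptions).
+
+```python
+import torch.nn.functional as F
+
+MEDIAN = slice(0, 3)  # assumed position of the median image within the foreground stack
+
+def stacked_inference(model_lo, model_mid, model_hi, bg, fg):
+    """bg[s], fg[s]: background and foreground tensors of shape (1, C, H_s, W_s)
+    for scales s in {'lo', 'mid', 'hi'}; resolutions double between stages."""
+    out = model_lo(bg['lo'], fg['lo'])                                   # low-res stage
+
+    # replace the median channel of the next stage with the upsampled previous output
+    fg['mid'][:, MEDIAN] = F.interpolate(out, scale_factor=2, mode='bilinear', align_corners=False)
+    out = model_mid(bg['mid'], fg['mid'])                                # mid-res stage
+
+    fg['hi'][:, MEDIAN] = F.interpolate(out, scale_factor=2, mode='bilinear', align_corners=False)
+    return model_hi(bg['hi'], fg['hi'])                                  # hi-res stage
+```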
+
+# 3.4. User-Controlled Manipulation
+
+We have complete control over the 3D space and time information of the event. This 4D control allows us to browse the dynamic events. A user can see behind occlusions, and edit, add, or remove objects. To accomplish this, a user only needs to mark the required portion in a video. Our approach automatically edits the content, i.e., updates the background and foreground via multi-view information; the modified inputs to the stacked multi-stage composition result in the desired outputs. Importantly, marking a single frame in the video is sufficient, as we can effortlessly propagate the mask to the rest of the video (4D control of the foreground). We show two examples of user-controlled manipulation in Figure 8. In the first example, we enable a user to see an occluded person without changing the view. Our system takes a mask as input from the user and disoccludes the blue-shirt person (Figure 8-top-row). We also explore viewing behind a second-order occlusion. Figure 8-middle shows a very challenging example of viewing behind the blue-shirt person. Despite being farther
+
+away from the camera, we see grass, a white table, and a person moving in the output. Finally, we show an example of editing where a user marks a region in one frame of a video (Figure 8-bottom-row). Our system generates the full video sequence without the masked person.
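+
+As a rough sketch of the removal edit (assuming the user mask has already been propagated to every frame and that the foreground stack is a concatenation of 3-channel images), the foreground inputs inside the mask can simply be overwritten with the static background before the composition network is run:
+
+```python
+def remove_foreground(bg, fg, mask):
+    """bg: (1, 3, H, W) static background; fg: (1, 3*K, H, W) stacked foreground
+    channels; mask: (1, 1, H, W) with 1 inside the user-marked region."""
+    k = fg.shape[1] // 3
+    bg_tiled = bg.repeat(1, k, 1, 1)            # tile the background over every foreground image
+    return fg * (1 - mask) + bg_tiled * mask    # masked region now feeds background to the network
+```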
+
+# 4. Experiments
+
+Datasets: We collected a large number of highly diverse sequences of unrestricted dynamic events with a wide variety of human motion, human-human interaction, human-object interaction, and clothing, both indoors and outdoors, under varying environmental and illumination conditions. These sequences are captured using up to 15 mobile phones. We refer the reader to our project page for all the results. Here we describe a few prominent sequences used for evaluation.
+
+Western Folk Dance: We captured sequences of western folk dance performances. Figure 6 shows an example from one of the sequences in this capture. This sequence is challenging due to the flowing dresses worn by the performers, self-occlusions, and the illumination conditions. Such a sequence paves the way for explicit parametrization of illumination conditions in 4D modeling.
+
+Jiu-Jitsu Retreat: Jiu-Jitsu is a type of Brazilian martial art.
+
+| Approach | M.S.E | PSNR | SSIM | LPIPS [55] | FID [25] |
+| N.N. | 1595.78 ± 294.53 | 15.50 ± 2.72 | 0.476 ± 0.086 | 0.401 ± 0.027 | - |
+| SfM + Humans | 6494.47 ± 1721.17 | 9.66 ± 1.88 | 0.438 ± 0.079 | 0.422 ± 0.022 | 184.629 |
+| Inst. | 2886.11 ± 1654.34 | 13.57 ± 3.23 | 0.538 ± 0.113 | 0.391 ± 0.054 | 122.31 |
+| Ours | 591.92 ± 286.86 | 20.06 ± 3.95 | 0.689 ± 0.130 | 0.222 ± 0.025 | 47.610 |
+
+Table 1. Comparison: We contrast our approach with (1) a simple nearest neighbor (N.N.) baseline; (2) reconstructed outputs of SfM+humans; and (3) the median channel of the instantaneous dynamic information (Inst.). We use several evaluation criteria to compare against these three methods: (1) M.S.E: the mean-squared error of the generated camera sequences against held-out camera sequences; (2) PSNR: the peak signal-to-noise ratio of the generated sequences against the held-out sequences; (3) SSIM, computed in a similar manner; and (4) LPIPS [55] (lower is better), used to study perceptual similarity and avoid biases in MSE, PSNR, and SSIM. All four of the above criteria are computed using held-out camera sequences. Finally, (5) we compute an FID score [25] (lower is better) to study the quality of generations when ground truth is not available for comparison.
+
+We captured sequences of this sporting event during a summer retreat of the Jiu-Jitsu group. This sequence is an extreme example of unchoreographed dynamic motion from the more than 30 people who participated in it. Figure 1 and Figure 7 show examples from this capture.
+
+Performance Dance: We captured many short performance dances including ballet, Tango, and restraints of plays. The illumination, clothing, and motions change drastically in these sequences.
+
+Sequences from Prior Work: We also used sequences from Vo et al. [50] to properly compare our results with their 3D reconstruction (SfM+humans). Figure 2 and Figure 3 show the results of freezing time and exploring views for these sequences.
+
+Evaluation: We use mean-squared error (MSE, lower is better), PSNR (higher is better), SSIM (higher is better), and LPIPS [55] (lower is better) to study the quality of the virtual camera views created using our approach. We use held-out cameras for proper evaluation. We also compute an FID score [25] (lower is better) to study the quality of sequences where we do not have any ground truth (e.g., freezing time and exploring
+
+views). This criterion contrasts the distribution of virtual cameras against the physical cameras.
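+
+For reference, the per-frame comparison against a held-out camera can be computed as in the sketch below; LPIPS [55] and FID [25] are computed with their reference implementations and are not reproduced here, and the `channel_axis` argument assumes a recent scikit-image version.
+
+```python
+import numpy as np
+from skimage.metrics import peak_signal_noise_ratio, structural_similarity
+
+def frame_metrics(pred, gt):
+    """MSE / PSNR / SSIM of a generated frame against a held-out camera frame.
+    pred and gt are uint8 images of shape (H, W, 3)."""
+    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
+    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
+    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=255)
+    return mse, psnr, ssim
+```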
+
+Baselines: To the best of our knowledge, there does not exist prior work that has demonstrated dense 4D visualization for in-the-wild dynamic events captured from unconstrained multi-view videos. We, however, study the performance of our approach against: (1) a simple nearest neighbor baseline (N.N.): we find nearest neighbors of the generated sequences using conv-5 features of an ImageNet pre-trained AlexNet model; this feature space helps find images that are close in structure; (2) SfM+humans: we use the work of Vo et al. [50, 51] for these results; and (3) the median channel of the instantaneous image (Inst.).
+
+Table 1 contrasts our approach with the various baselines on held-out cameras for different sequences. In total, we generated 12 minutes of sequences for evaluation against held-out sequences, and another 12 minutes of random movements. We observe significantly better outputs under all criteria. We provide more qualitative and quantitative ablation studies on our project page.
+
+# 5. Discussion & Future Work
+
+The world is our studio. The ability to do 4D visualization of dynamic events captured from unconstrained multi-view videos opens up avenues for future research to capture events with a combination of drones, robots, and hand-held cameras. The use of self-supervised scene-specific CNNs allows one to browse the 4D space-time of dynamic events captured from unconstrained multi-view videos. We extensively captured various in-the-wild events to study this problem. We show different qualitative and quantitative analyses in our study. A real-time user-guided system that allows a user to upload videos and browse them will enable a better understanding of 4D visualization systems. The proposed formulation and the captured sequences open a number of opportunities for future research, such as incorporating illumination and shadows into the 4D spatiotemporal representation, and modeling low-level high-frequency details. One drawback of our method is that the video streams are treated as perfectly synchronized. This introduces motion artifacts for fast actions [50]. Future work will incorporate sub-frame modeling between different video streams in the depth estimation and view synthesis modules for more appealing 4D slow-motion browsing.
+
+Acknowledgements: We are extremely grateful to Bojan Vrcelj for helping us shape the project. We are also thankful to Gengshan Yang for his help with the disparity estimation code and many other friends for their patience in collecting the various sequences. We list them on our project page. This work is supported by the Qualcomm Innovation Fellowship.
+
+# References
+
+[1] S. Agarwal, N. Snavely, I. Simon, S. M. Seitz, and R. Szeliski. Building rome in a day. In ICCV, 2009. 3
+[2] Luca Ballan, Gabriel J. Brostow, Jens Puwein, and Marc Pollefeys. Unstructured video-based rendering: Interactive exploration of casually captured videos. ACM Trans. Graph., 2010. 2, 3
+[3] Luca Ballan and Guido Maria Cortelazzo. Marker-less motion capture of skinned models in a four camera set-up using optical flow and silhouettes. 3DPVT, 2008. 3
+[4] Zhe Cao, Gines Hidalgo, Tomas Simon, Shih-En Wei, and Yaser Sheikh. Openpose: realtime multi-person 2d pose estimation using part affinity fields. arXiv preprint arXiv:1812.08008, 2018. 2
+[5] Joel Carranza, Christian Theobalt, Marcus A. Magnor, and Hans-Peter Seidel. Free-viewpoint video of human actors. ACM Trans. Graph., 2003. 2, 3
+[6] Edilson De Aguiar, Carsten Stoll, Christian Theobalt, Naveed Ahmed, Hans-Peter Seidel, and Sebastian Thrun. Performance capture from sparse multi-view video. In ACM SIGGRAPH. 2008. 3
+[7] Paul E. Debevec, Camillo J. Taylor, and Jitendra Malik. Modeling and rendering architecture from photographs: A hybrid geometry- and image-based approach. In ACM Trans. Graph. ACM, 1996. 3
+[8] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, 2009. 5
+[9] Emily L Denton, Soumith Chintala, and Rob Fergus. Deep generative image models using a laplacian pyramid of adversarial networks. In NeurIPS, 2015. 3
+[10] N. Dinesh Reddy, Minh Vo, and Srinivasa G. Narasimhan. Carfusion: Combining point tracking and part detection for dynamic 3d reconstruction of vehicles. In CVPR, 2018. 3
+[11] John Flynn, Michael Broxton, Paul Debevec, Matthew DuVall, Graham Fyffe, Ryan Overbeck, Noah Snavely, and Richard Tucker. Deepview: View synthesis with learned gradient descent. In CVPR, 2019. 3
+[12] John Flynn, Ivan Neulander, James Philbin, and Noah Snavely. Deepstereo: Learning to predict new views from the world's imagery. In CVPR, 2016. 3
+[13] J-S Franco and Edmond Boyer. Fusion of multiview silhouette cues using a space occupancy grid. In ICCV, 2005. 3
+[14] H. Fuchs, G. Bishop, K. Arthur, L. McMillan, R. Bajcsy, S. Lee, H. Farid, and Takeo Kanade. Virtual space teleconferencing using a sea of cameras. In Proc. First International Conference on Medical Robotics and Computer Assisted Surgery, 1994. 2
+[15] Yasutaka Furukawa, Brian Curless, Steven M Seitz, and Richard Szeliski. Towards internet-scale multi-view stereo. In CVPR, 2010. 3
+[16] Yasutaka Furukawa and Jean Ponce. Accurate, dense, and robust multiview stereopsis. TPAMI, 2009. 3
+[17] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NeurIPS, 2014. 3
+
+[18] Jean-Yves Guillemaut, Joe Kilner, and Adrian Hilton. Robust graph-cut scene segmentation and reconstruction for free-viewpoint video of complex dynamic scenes. In ICCV, 2009. 3
+[19] Agrim Gupta, Piotr Dollar, and Ross Girshick. LVIS: A dataset for large vocabulary instance segmentation. In CVPR, 2019. 2
+[20] Richard Hartley and Andrew Zisserman. Multiple view geometry in computer vision. Cambridge university press, 2003. 2, 3, 4
+[21] Nils Hasler, Bodo Rosenhahn, Thorsten Thormahlen, Michael Wand, Jürgen Gall, and Hans-Peter Seidel. Markerless motion capture with unsynchronized moving cameras. In CVPR, 2009. 3
+[22] Peter Hedman, Julien Philip, True Price, Jan-Michael Frahm, George Drettakis, and Gabriel Brostow. Deep blending for free-viewpoint image-based rendering. ACM Trans. Graph., 2018. 3
+[23] Jared Heinly. Toward Efficient and Robust Large-Scale Structure-from-Motion Systems. PhD thesis, The University of North Carolina at Chapel Hill, 2015. 3
+[24] Jared Heinly, Johannes Lutz Schonberger, Enrique Dunn, and Jan-Michael Frahm. Reconstructing the World* in Six Days * (As Captured by the Yahoo 100 Million Image Dataset). In CVPR, 2015. 2, 3
+[25] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In NeurIPS, 2017. 8
+[26] Po-Han Huang, Kevin Matzen, Johannes Kopf, Narendra Ahuja, and Jia-Bin Huang. Deepmvs: Learning multi-view stereopsis. CVPR, 2018. 2
+[27] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017. 3
+[28] Hanbyul Joo, Tomas Simon, Xulong Li, Hao Liu, Lei Tan, Lin Gui, Sean Banerjee, Timothy Scott Godisart, Bart Nabbe, Iain Matthews, Takeo Kanade, Shohei Nobuhara, and Yaser Sheikh. Panoptic studio: A massively multiview system for social interaction capture. IEEE TPAMI, 2017. 1, 2
+[29] Nima Khademi Kalantari, Ting-Chun Wang, and Ravi Ramamoorthi. Learning-based view synthesis for light field cameras. ACM Trans. Graph., 2016. 3
+[30] Takeo Kanade and PJ Narayanan. Historical perspectives on 4d virtualized reality. In CVPR Workshops, 2006. 1, 2
+[31] Takeo Kanade, Peter Rander, and PJ Narayanan. Virtualized reality: Constructing virtual worlds from real scenes. IEEE multimedia, 1997. 2, 3
+[32] T. Kanade, A. Yoshida, K. Oda, H. Kano, and M. Tanaka. A stereo machine for video-rate dense depth mapping and its new applications. In CVPR, 1996. 1, 2
+[33] Tejas Khot, Shubham Agrawal, Shubham Tulsiani, Christoph Mertz, Simon Lucey, and Martial Hebert. Learning unsupervised multi-view stereopsis via robust photometric consistency. arXiv preprint arXiv:1905.02706, 2019. 2
+[34] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In NeurIPS, 2012. 5
+
+[35] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 2015. 3
+[36] Matthew Loper, Naureen Mahmood, Javier Romero, Gerard Pons-Moll, and Michael J Black. Smpl: A skinned multiperson linear model. ACM Trans. Graph., 2015. 2
+[37] Wojciech Matusik, Chris Buehler, Ramesh Raskar, Steven J Gortler, and Leonard McMillan. Image-based visual hulls. In ACM Trans. Graph., 2000. 3
+[38] Moustafa Meshry, Dan B. Goldman, Sameh Khamis, Hugues Hoppe, Rohit Pandey, Noah Snavely, and Ricardo Martin-Brualla. Neural rerendering in the wild. In CVPR, 2019. 3
+[39] Martin Oswald and Daniel Cremers. A convex relaxation approach to space time multi-view 3d reconstruction. In ICCVW, 2013. 2
+[40] Raj Reddy. Teleportation, Time Travel, and Immortality. Springer New York, 1999. 1
+[41] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention. Springer, 2015. 6
+[42] Johannes Lutz Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In CVPR, 2016. 2, 4
+[43] Johannes Lutz Schonberger, Enliang Zheng, Marc Pollefeys, and Jan-Michael Frahm. Pixelwise view selection for unstructured multi-view stereo. In ECCV, 2016. 2, 4
+[44] Harry Shum and Sing Bing Kang. Review of image-based rendering techniques. In Visual Communications and Image Processing, 2000. 3
+[45] Sudipta N Sinha, Drew Steedly, Richard Szeliski, Maneesh Agrawala, and Marc Pollefeys. Interactive 3d architectural modeling from unordered photo collections. In ACM Trans. Graph. ACM, 2008. 3
+[46] Noah Snavely, Steven M. Seitz, and Richard Szeliski. Photo tourism: Exploring photo collections in 3d. ACM Trans. Graph., 2006. 2, 3
+[47] Pratul P Srinivasan, Richard Tucker, Jonathan T Barron, Ravi Ramamoorthi, Ren Ng, and Noah Snavely. Pushing the boundaries of view extrapolation with multiplane images. In CVPR, 2019. 3
+[48] Sundar Vedula, Simon Baker, and Takeo Kanade. Image-based spatio-temporal modeling and view interpolation of dynamic events. ACM Trans. Graph., 2005. 3
+[49] Daniel Vlasic, Ilya Baran, Wojciech Matusik, and Jovan Popovic. Articulated mesh animation from multi-view silhouettes. In ACM SIGGRAPH, 2008. 3
+[50] Minh Vo, Srinivasa G. Narasimhan, and Yaser Sheikh. Spatiotemporal bundle adjustment for dynamic 3d reconstruction. In CVPR, 2016. 3, 4, 8
+[51] Minh Vo, Ersin Yumer, Kalyan Sunkavalli, Sunil Hadap, Yaser Sheikh, and Srinivasa Narasimhan. Automatic adaptation of person association for multiview tracking in group activities. IEEE TPAMI, 2020. 2, 4, 8
+[52] Gengshan Yang, Joshua Manela, Michael Happold, and Deva Ramanan. Hierarchical deep stereo matching on high-resolution images. In CVPR, 2019. 2, 4
+
+[53] Yao Yao, Zixin Luo, Shiwei Li, Tian Fang, and Long Quan. Mvsnet: Depth inference for unstructured multi-view stereo. In CVPR, 2018. 2
+[54] Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. In ICCV, 2017. 3
+[55] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018. 8
+[56] Ruo Zhang, Ping-Sing Tsai, James Edwin Cryer, and Mubarak Shah. Shape-from-shading: a survey. TPAMI, 1999. 2
+[57] Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, and Noah Snavely. Stereo magnification: learning view synthesis using multiplane images. ACM Trans. Graph., 2018. 3
+[58] C. Lawrence Zitnick, Sing Bing Kang, Matthew Uytendaele, Simon Winder, and Richard Szeliski. High-quality video view interpolation using a layered representation. ACM Trans. Graph., 2004. 2
\ No newline at end of file
diff --git a/4dvisualizationofdynamiceventsfromunconstrainedmultiviewvideos/images.zip b/4dvisualizationofdynamiceventsfromunconstrainedmultiviewvideos/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..6dec9dd33147be7dfcb3da82f692de232ae979d9
--- /dev/null
+++ b/4dvisualizationofdynamiceventsfromunconstrainedmultiviewvideos/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:362bbafd6ca6336d85f77d896e5df81a7c3c52c65bdb23339a6faaa2cd76bd21
+size 1327824
diff --git a/4dvisualizationofdynamiceventsfromunconstrainedmultiviewvideos/layout.json b/4dvisualizationofdynamiceventsfromunconstrainedmultiviewvideos/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ec5c4aa946552c7f24f5ef3050baab3df905b769
--- /dev/null
+++ b/4dvisualizationofdynamiceventsfromunconstrainedmultiviewvideos/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e6936446ea565582f40d9f1b7580939faddf46d6e0337106f3524b280aa85e8d
+size 357612
diff --git a/a2deleadaptiveandattentivedepthdistillerforefficientrgbdsalientobjectdetection/632e4093-0cc9-45cb-8860-3f33bc4b48f1_content_list.json b/a2deleadaptiveandattentivedepthdistillerforefficientrgbdsalientobjectdetection/632e4093-0cc9-45cb-8860-3f33bc4b48f1_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..c4d1a3e8ca65c87d235fd31600f58628eaab0331
--- /dev/null
+++ b/a2deleadaptiveandattentivedepthdistillerforefficientrgbdsalientobjectdetection/632e4093-0cc9-45cb-8860-3f33bc4b48f1_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3c3d57a11e7292774d8fcb837b23d80f22daf04785903e4260c815cb1299243b
+size 70421
diff --git a/a2deleadaptiveandattentivedepthdistillerforefficientrgbdsalientobjectdetection/632e4093-0cc9-45cb-8860-3f33bc4b48f1_model.json b/a2deleadaptiveandattentivedepthdistillerforefficientrgbdsalientobjectdetection/632e4093-0cc9-45cb-8860-3f33bc4b48f1_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..69c6086581a9b009c066293311a32a8773e5b31f
--- /dev/null
+++ b/a2deleadaptiveandattentivedepthdistillerforefficientrgbdsalientobjectdetection/632e4093-0cc9-45cb-8860-3f33bc4b48f1_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e32c3bbf4e29ba62679341899a245e6aba5086eecaf1b66985597b3dcc1e3bf1
+size 87926
diff --git a/a2deleadaptiveandattentivedepthdistillerforefficientrgbdsalientobjectdetection/632e4093-0cc9-45cb-8860-3f33bc4b48f1_origin.pdf b/a2deleadaptiveandattentivedepthdistillerforefficientrgbdsalientobjectdetection/632e4093-0cc9-45cb-8860-3f33bc4b48f1_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..869f578a27377c20eead8a917d1daab18f052e35
--- /dev/null
+++ b/a2deleadaptiveandattentivedepthdistillerforefficientrgbdsalientobjectdetection/632e4093-0cc9-45cb-8860-3f33bc4b48f1_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ea8f4f5b0ba9b8e95cfcfbad2e7471df573913c65cae660a43c5153b84be8f4d
+size 1192176
diff --git a/a2deleadaptiveandattentivedepthdistillerforefficientrgbdsalientobjectdetection/full.md b/a2deleadaptiveandattentivedepthdistillerforefficientrgbdsalientobjectdetection/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..4a9b08780a0606c364968c01a4ad95b40593262f
--- /dev/null
+++ b/a2deleadaptiveandattentivedepthdistillerforefficientrgbdsalientobjectdetection/full.md
@@ -0,0 +1,260 @@
+# A2dele: Adaptive and Attentive Depth Distiller for Efficient RGB-D Salient Object Detection
+
+Yongri Piao $^{1*}$ Zhengkun Rong $^{1*}$ Miao Zhang $^{1,2\dagger}$ Weisong Ren $^{1}$ Huchuan Lu $^{1,3}$
+
+$^{1}$ Dalian University of Technology, China
+
+$^{2}$ Key Lab for Ubiquitous Network and Service Software of Liaoning Province,
+
+Dalian University of Technology, China
+
+$^{3}$ Pengcheng Lab
+
+{yrpiao, miao zhang, lhchuan}@dlut.edu.cn, {rzk911113, beatlescoco}@mail.dlut.edu.cn
+
+# Abstract
+
+Existing state-of-the-art RGB-D salient object detection methods explore RGB-D data relying on a two-stream architecture, in which an independent subnetwork is required to process depth data. This inevitably incurs extra computational costs and memory consumption, and using depth data during testing may hinder the practical applications of RGB-D saliency detection. To tackle these two dilemmas, we propose a depth distiller (A2dele) to explore the way of using network prediction and attention as two bridges to transfer the depth knowledge from the depth stream to the RGB stream. First, by adaptively minimizing the differences between predictions generated from the depth stream and RGB stream, we realize the desired control of pixel-wise depth knowledge transferred to the RGB stream. Second, to transfer the localization knowledge to RGB features, we encourage consistencies between the dilated prediction of the depth stream and the attention map from the RGB stream. As a result, we achieve a lightweight architecture without the use of depth data at test time by embedding our A2dele. Our extensive experimental evaluation on five benchmarks demonstrates that our RGB stream achieves state-of-the-art performance, which tremendously minimizes the model size by $76\%$ and runs 12 times faster, compared with the best performing method. Furthermore, our A2dele can be applied to existing RGB-D networks to significantly improve their efficiency while maintaining performance (boosting FPS by nearly twice for DMRA and 3 times for CPFP).
+
+# 1. Introduction
+
+The emergence of convolutional neural networks (CNNs), together with larger datasets [31, 17, 30, 29], has
+
+
+Figure 1. F-measure vs. Model Size on the NLPR dataset [30]. By embedding our A2dele (CPFP'19 [41] + A2dele and DMRA'19 [31] + A2dele, marked with $\times$ ), we achieve accuracy comparable to the original models (CPFP'19 and DMRA'19, marked with $\bullet$ ) at a significantly smaller model size.
+
+recently led to remarkable progress in RGB-D salient object detection. In RGB-D methods, the depth information provides a preponderance of discriminative power in location and spatial structure, which plays an important role in the task of saliency detection [2]. Many pioneering works [31, 3, 5, 4, 41, 43] have demonstrated its effectiveness, especially in challenging scenes.
+
+Learning discriminative representations for visual saliency, from two modalities, has been widely explored. For learning cross-modal complementarity, RGB and depth data are often learnt separately in a two-stream architecture illustrated in Figure 2(a), where a multi-level fusion decoder is then appended to learn joint representations and cooperative predictions [31, 3, 5, 4]. On the other hand, approaches for learning enhanced RGB representations rely on exploring depth information with a tailor-made subnetwork [41, 43], illustrated in Figure 2(b).
+
+
+Figure 2. (a) Exploiting cross-modal complementarity by a two-stream architecture (e.g., [31, 3, 5, 4]). (b) Using depth information to enhance RGB features by a tailor-made subnetwork (e.g., [41, 43]). (c) Our RGB stream embedded with the proposed depth distiller (A2dele). By embedding our A2dele, we avoid using the depth stream during testing.
+
+The strategy of leveraging RGB-D data and CNNs produces impressive results, but it remains challenging in two respects. First, RGB-D approaches inevitably incur extra computational costs and memory consumption during inference of the two-stream model, in which an independent encoder or subnetwork is required to process depth data, as shown in the F-measure vs. model size plot on the NLPR dataset [30] in Figure 1. We observe from the plot that the model size of the RGB-D networks is 1.5 times larger than that of their RGB counterparts. Second, the use of depth information during testing may hinder the practical applications of RGB-D saliency detection. Although the advent of consumer-grade RGB-D cameras opens a path towards a broader application of 3D vision, depth sensors may pose a high risk to accurate saliency detection as they can be easily influenced by a number of factors, such as the temperature of the camera, background illumination, and the distance and reflectivity of the observed objects. Considering these two challenges, our goal is to design a mechanism that learns from RGB-D data during training and is free of the use of depth data during testing, while maximizing performance.
+
+To achieve this goal, we propose a depth distiller (A2dele), in which two bridges are adopted to connect RGB and depth modalities for transferring depth knowledge to the RGB stream as shown in Figure 2(c). First, we use the network prediction as a bridge for adaptively transferring the pixel-wise depth knowledge to the prediction of the RGB stream, namely an adaptive depth distillation scheme. More precisely, we selectively minimize the differences between predictions generated from the depth stream and RGB stream by an adaptive factor. This scheme realizes the
+
+desired control of pixel-wise depth knowledge transferred to the RGB stream. Second, we use the network attention as another bridge for transferring localization knowledge of salient objects to RGB features, namely an attentive depth distillation scheme. Specifically, we improve the prediction of the depth stream via a dilation operation to ensure holistic coverage of salient objects, so that the dilated prediction can serve as a reliable localization cue. By encouraging consistency between the dilated prediction and the attention map of the RGB stream, background activations can be effectively suppressed in the RGB features. Furthermore, our A2dele can facilitate other existing RGB-D approaches to achieve high efficiency while preserving accuracy. Figure 1 shows that CPFP'19 [41] + A2dele and DMRA [31] + A2dele achieve comparable accuracy at a significantly smaller model size, compared to the original models.
+
+Our core insight is that we embrace these challenges and move away from attempting to train and test a model on paired RGB and depth images, and instead test the model on only the single RGB modality. Our approach is to design a depth distiller that uses the network prediction and attention as two bridges connecting RGB and depth modalities while being free of using depth maps during testing. In this way, our adaptive and attentive distillation schemes ensure that reliable depth information is transferred by screening out erroneous depth knowledge. The source code is released. Concretely, we make the following contributions:
+
+- We propose a depth distiller (A2dele), which explores the way of using network prediction and attention as two bridges to transfer the depth knowledge from the depth stream to the RGB stream. As a result, a lightweight architecture, being free of the depth stream at test time, can be achieved by embedding our proposed A2dele at training time.
+
+- Extensive experimental results on five benchmark datasets demonstrate that our RGB stream achieves state-of-the-art performance, which tremendously minimizes the model size by $76\%$ and runs 12 times faster, compared with the best performing method.
+- Our depth distiller (A2dele) can be applied to improve existing RGB-D approaches. Compared to the original models, the ones embedded by our A2dele achieve comparable performance while running much faster (FPS is boosted by nearly twice for DMRA [31] and 3 times for CPFP [41]) at a significantly smaller model size (model size is minimized by $37\%$ for DMRA [31] and $43\%$ for CPFP [41]).
+
+# 2. Related Work
+
+RGB-D Salient Object Detection. Early RGB-D saliency detection methods [30, 8, 17, 34] manually design handcrafted features and break new ground. Recently, CNNs-based RGB-D approaches have yielded a qualitative leap in performance due to the powerful ability of CNNs in hierarchically extracting informative features. Zhu et al. [43] use an independent encoder network to make full use of depth cues and assist the RGB-stream network. Chen et al. [3] exploit the cross-modal complement across all the levels by a complementarity-aware fusion module. Chen et al. [5] propose a multi-scale multi-path fusion network with cross-modal interactions to enable sufficient and efficient fusion. Chen et al. [4] introduce a cross-modal distillation stream to learn new discriminative multi-modal features at each level. Zhao et al. [41] propose to use the contrast-enhanced depth map as an attention map to suppress distractors in the RGB features. Piao et al. [31] propose a recurrent attention module based on ConvLSTM to progressively learn the internal semantic relation of the multi-modal features.
+
+However, existing RGB-D approaches require an additional network to process depth data, which incurs extra computational cost and memory consumption. Moreover, depth maps are easily degraded, which may pose a high risk to accurate saliency detection. These issues severely impede the practical applications of RGB-D saliency detection. In contrast, by embedding our A2dele, we avoid using the depth stream at test time, while maximizing performance.
+
+Distillation and Learning under Privileged Information. Our depth distiller is inspired by the generalized distillation [26] that combines distillation [14] and privileged information [36]. In distillation, knowledge is transferred from the
+
+teacher network to the student by minimizing the differences between the soft target from the teacher and the class probabilities from the student. Knowledge distillation has been exploited in many computer vision tasks, such as domain adaptation [10], object detection [21, 15], depth estimation [32] and semantic segmentation [13, 25]. In a similar spirit, our goal is to transfer knowledge from the depth stream to the RGB stream, being free use of depth stream during testing. The learning under privileged information provides a network with extra information which is only available in the training stage. Recent works [19, 37, 27] propose to use privileged depth information in semantic segmentation and action recognition. In our case, depth is the privileged information available for training, along with RGB data, but only RGB data is used at test time.
+
+Different from the aforementioned distillation designs, which indiscriminately transfer knowledge, we propose a tailor-made depth distiller (A2dele) to achieve the discriminative transfer of useful depth knowledge. It is well known that the unstable quality of depth maps can impose negative effects on RGB-D salient object detection. Our A2dele can transfer useful depth information to the RGB stream while suppressing erroneous information.
+
+# 3. Method
+
+# 3.1. Overview
+
+Existing methods for RGB-D salient object detection inevitably incur extra computational costs and memory due to requiring an independent subnetwork to process depth data, and the use of depth information during testing may hinder the practical applications of RGB-D saliency detection. To confront those challenges, we propose a depth distiller (A2dele) to improve RGB-D saliency detection taking a single RGB image as input at test time. An overview of the proposed framework is shown in Figure 2(c).
+
+Depth stream: we train the depth stream to not only locate salient objects accurately but also transfer privileged knowledge to the RGB stream. The encoder in the depth stream is based on VGG16 [35], in which 5 convolutional blocks are maintained and the last pooling and fully-connected layers are discarded. We then select the high-level features $(F_{Conv}^{3}, F_{Conv}^{4}$ and $F_{Conv}^{5})$ to detect salient objects. Moreover, we boost the quality of the depth features by applying a receptive field block (RFB) [24] at each level. The RFB can capture global contrast information, which is suited to the aim of the depth stream. Finally, the decoder takes the depth features as input and makes the final prediction. The detailed architecture of the decoder is shown in Figure 3.
+
+Figure 3. Detailed structure of the decoder in the depth stream and the RGB stream.
+
+RGB stream: we design an efficient RGB stream to effectively leverage both RGB information and the depth knowledge transferred from the depth stream. The RGB stream has the same architecture as the depth stream; the only difference is that we replace the RFB with an attention module. The attention module is lightweight and consists of only one $3 \times 3$ convolutional layer. The training of the RGB stream is supervised by our proposed depth distiller (A2dele), which consists of an adaptive depth distillation scheme and an attentive depth distillation scheme (details in Section 3.2).
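+
+A minimal sketch of such an attention module is given below; the sigmoid squashing and the multiplicative re-weighting of the features are implementation assumptions beyond the single $3 \times 3$ convolution described above.
+
+```python
+import torch
+import torch.nn as nn
+
+class AttentionModule(nn.Module):
+    """A single 3x3 convolution producing a one-channel attention map."""
+    def __init__(self, in_channels):
+        super().__init__()
+        self.conv = nn.Conv2d(in_channels, 1, kernel_size=3, padding=1)
+
+    def forward(self, feat):
+        att = torch.sigmoid(self.conv(feat))   # supervised by the attentive distillation loss
+        return feat * att, att                 # re-weighted features and the attention map
+```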
+
+# 3.2. The Proposed Depth Distiller (A2dele)
+
+Inspired by distillation [14] and privileged information [36], we build two bridges connecting RGB and depth modalities via a depth distiller (A2dele) for transferring privileged depth knowledge to the RGB stream. The knowledge is defined as two parts: (1) The first part is designed to achieve the desired control of pixel-wise depth knowledge transferred to the prediction of RGB stream. (2) The second part is designed to transfer localization knowledge of salient objects to RGB features. Next, we elaborate on each distillation scheme in A2dele.
+
+# 3.2.1 Adaptive Depth Distillation Scheme
+
+In our proposed depth distiller, we use the network prediction as the first bridge across RGB and depth modalities for transferring pixel-wise depth knowledge to the prediction of the RGB stream. To this end, we train the RGB network by minimizing the loss between predictions produced from the depth stream and RGB stream. When we obtain an accurate prediction from the depth stream, this strategy will effectively help the RGB stream easily discriminate salient objects from background. On the contrary, if the prediction is not reliable due to the low-quality depth map, this strategy may introduce side effects in RGB prediction. Based on this observation, we propose an adaptive depth distillation scheme to ensure the desired depth knowledge transfer. More precisely, we design an adaptive factor $\lambda$ to modulate the influence of the depth stream. The $\lambda$ is defined as:
+
+$$
+\lambda = \exp \left(- \alpha L_{CE} \left(S_{depth}, Y\right)\right), \tag{1}
+$$
+
+where $Y$ represents the ground truth and the hyperparameter $\alpha$ is set to 70 to keep $\lambda$ in the range 0 to 1. $\lambda$ is inversely related to the loss between the output of the depth stream and the ground truth. This indicates that the
+
+RGB stream learns from the depth stream when the predictions of the depth stream are reliable; otherwise the RGB stream learns from the ground truth. Thus, the complete loss function is written as:
+
+$$
+L_{Adap} = \lambda L_{KL} \left(S_{RGB} \| S_{depth}\right) + (1 - \lambda) L_{CE} \left(S_{RGB}, Y\right), \tag{2}
+$$
+
+where $L_{KL}$ is the Kullback-Leibler divergence loss, in which the temperature hyper-parameter $T$ is set to 20, and $L_{CE}$ is the cross-entropy loss. Compared to directly enforcing the RGB stream to mimic the output of the depth stream with a fixed weight, our proposed adaptive depth distillation scheme allows the RGB stream to selectively absorb useful depth information from the depth stream.
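+
+A PyTorch sketch of Eqs. (1)-(2) for per-pixel binary saliency is given below; computing $\lambda$ once per batch, the temperature-softened sigmoid form of the KL term, and the mean reductions are implementation assumptions.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def adaptive_distillation_loss(logits_rgb, logits_depth, gt, alpha=70.0, T=20.0):
+    """logits_*: per-pixel saliency logits (B, 1, H, W); gt: binary ground truth."""
+    ce_depth = F.binary_cross_entropy_with_logits(logits_depth, gt)
+    lam = torch.exp(-alpha * ce_depth)                      # Eq. (1): reliable depth stream -> lam near 1
+
+    # temperature-softened saliency probabilities and a per-pixel binary KL, as in Eq. (2)
+    p_rgb = torch.sigmoid(logits_rgb / T)
+    p_depth = torch.sigmoid(logits_depth / T).detach()
+    eps = 1e-6
+    kl = p_rgb * torch.log((p_rgb + eps) / (p_depth + eps)) + \
+         (1.0 - p_rgb) * torch.log((1.0 - p_rgb + eps) / (1.0 - p_depth + eps))
+
+    ce_rgb = F.binary_cross_entropy_with_logits(logits_rgb, gt)
+    return lam * kl.mean() + (1.0 - lam) * ce_rgb           # Eq. (2)
+```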
+
+# 3.2.2 Attentive Depth Distillation Scheme
+
+Our attentive distillation scheme goes a step further: we choose the network attention as the second bridge for transferring localization knowledge to RGB features. This is achieved by encouraging consistency between the prediction of the depth stream and the attention map in the RGB stream. To minimize the inconsistency, the RGB stream must learn an attention map that approaches the prediction of the depth stream. As the attention map improves in quality, the distractors in the RGB features are gradually suppressed, inching the RGB stream toward accurate localization of salient objects. However, when the depth stream produces an incomplete detection of salient objects, this strategy may lead to unsatisfactory segmentation results. To ensure reliable localization knowledge, we enlarge the coverage area of the prediction from the depth stream via a dilation operation, as observed in Figure 2(c). The dilation is implemented with a max-pooling operation:
+
+$$
+\mathrm{Dilation} \left(S_{depth}\right) = \mathrm{Maxpool} \left(S_{depth}, \text{kernel size} = 11\right). \tag{3}
+$$
+
+By covering more complete regions of salient objects, the dilated prediction of the depth stream can act as better localization cues and help boost the RGB features. In summary, the attentive depth distillation scheme can be defined as:
+
+$$
+L_{Atten} = \sum_{i = 1}^{N} L_{CE} \left(Att_{RGB}^{i}, \mathrm{Dilation} \left(S_{depth}\right)\right), \tag{4}
+$$
+
+where $Att_{RGB}^{i}$ represents the $i^{th}$ attention map in the RGB stream and $N$ is the total number of levels, set to 3. By minimizing the loss $L_{Atten}$ , the response outside the salient objects is suppressed, focusing the response on the salient regions.
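+
+Eqs. (3)-(4) similarly reduce to a max-pooling dilation followed by a pixel-wise cross entropy against each attention map, assuming the depth prediction and the attention maps are probabilities in $[0, 1]$ at the same resolution:
+
+```python
+import torch.nn.functional as F
+
+def attentive_distillation_loss(attention_maps, pred_depth, kernel_size=11):
+    """attention_maps: list of N maps (B, 1, H, W) from the RGB stream;
+    pred_depth: (B, 1, H, W) saliency probability from the depth stream."""
+    # Eq. (3): enlarge the coverage of the depth prediction with max pooling
+    dilated = F.max_pool2d(pred_depth, kernel_size, stride=1, padding=kernel_size // 2).detach()
+
+    # Eq. (4): cross entropy between each attention map and the dilated prediction
+    return sum(F.binary_cross_entropy(att, dilated) for att in attention_maps)
+```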
+
+# 3.3. Optimization
+
+The training process of our method involves two stages as is presented in Algorithm 1. In stage 1, the depth stream
+
+is supervised by the cross entropy loss $L_{CE}$ with the ground truth $Y$ . During the knowledge distillation process (stage 2), the parameters of the depth stream are kept frozen. The RGB stream is supervised by a combination of the adaptive depth distillation loss $L_{Adap}$ in Eq.(2) and the attentive distillation loss $L_{Atten}$ in Eq.(4). $W_{D}$ and $W_{R}$ are the parameters of the depth stream and RGB stream, respectively.
+
+# Algorithm 1: Training Process of Our Method
+
+1 Stage 1: Training the depth stream.
+2 Input : Depth map.
+3 $W_{D} = \operatorname{argmin}_{W_{D}} L_{CE}(S_{depth}, Y)$
+4 Stage 2: Training the RGB stream.
+5 Input: RGB.
+6 $W_{R} = \operatorname{argmin}_{W_{R}}\left(L_{Adap} + L_{Atten}\right)$
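+
+Putting the two stages together, a minimal training loop could look like the sketch below; it reuses the loss sketches above, and the data loader format, the two-output RGB stream, and the use of binary cross entropy as $L_{CE}$ are assumptions.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def train_a2dele(depth_stream, rgb_stream, loader, epochs_depth=100, epochs_rgb=50, lr=1e-4):
+    # Stage 1: train the depth stream against the ground truth
+    opt_d = torch.optim.Adam(depth_stream.parameters(), lr=lr)
+    for _ in range(epochs_depth):
+        for rgb, depth, gt in loader:
+            loss = F.binary_cross_entropy_with_logits(depth_stream(depth), gt)
+            opt_d.zero_grad(); loss.backward(); opt_d.step()
+
+    # Stage 2: freeze the depth stream and train the RGB stream with A2dele
+    for p in depth_stream.parameters():
+        p.requires_grad_(False)
+    opt_r = torch.optim.Adam(rgb_stream.parameters(), lr=lr)
+    for _ in range(epochs_rgb):
+        for rgb, depth, gt in loader:
+            with torch.no_grad():
+                logits_depth = depth_stream(depth)
+            logits_rgb, attention_maps = rgb_stream(rgb)   # assumed to return logits and attention maps
+            loss = adaptive_distillation_loss(logits_rgb, logits_depth, gt) \
+                 + attentive_distillation_loss(attention_maps, torch.sigmoid(logits_depth))
+            opt_r.zero_grad(); loss.backward(); opt_r.step()
+```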
+
+# 4. Experiments
+
+# 4.1. Benchmark Datasets
+
+We conduct our experiments on the following five widely-used RGB-D datasets. DUT-RGBD [31]: contains 1200 images captured by a Lytro camera in real-life scenes. NJUD [17]: includes 1985 stereo image pairs collected from 3D movies, the Internet, and photographs taken by a Fuji W3 stereo camera. NLPR [30]: contains 1000 images captured by Kinect under different illumination conditions. STEREO [29]: includes 797 stereoscopic images gathered from the Internet. RGBD135 [6]: includes 135 images captured by Kinect.
+
+For comparison, we adopt the same training set as in [31], which contains 800 samples from the DUT-RGBD dataset, 1485 samples from NJUD and 700 samples from NLPR. The remaining images and the other two datasets are used for testing to verify the generalization ability of saliency models. To avoid overfitting, we augment the training set by flipping, cropping and rotating.
+
+# 4.2. Experimental Setup
+
+Evaluation Metrics. We use the generally recognized $F$ -measure $(F_{\beta})$ [1], weighted $F$ -measure $(F_{\beta}^{w})$ [28] and Mean Absolute Error (MAE). These three evaluation metrics provide comprehensive and reliable evaluation results and are well established in the literature. We also adopt model size and Frames Per Second (FPS) to evaluate the complexity of each method.
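+
+As a reference for how these scores are computed, a per-image sketch is given below; $\beta^{2} = 0.3$ and the adaptive threshold of twice the mean saliency follow common practice in salient object detection and are assumptions here.
+
+```python
+import numpy as np
+
+def saliency_metrics(pred, gt, beta2=0.3):
+    """MAE and F-measure for a single saliency map; pred in [0, 1], gt binary {0, 1}."""
+    mae = np.abs(pred - gt).mean()
+
+    thresh = min(2 * pred.mean(), 1.0)                  # adaptive threshold (common convention)
+    binary = (pred >= thresh).astype(np.float64)
+    tp = (binary * gt).sum()
+    precision = tp / (binary.sum() + 1e-8)
+    recall = tp / (gt.sum() + 1e-8)
+    f_beta = (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-8)
+    return mae, f_beta
+```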
+
+Implementation Details. We implement our method based on the PyTorch toolbox with one GTX 1080Ti GPU. During the training phase, we use the Adam optimizer [18] to train our depth stream and RGB stream. The batch size is set to 10 and the initial learning rate is set to 1e-4. The maximum numbers of epochs of the depth stream and RGB
+
+stream are set to 100 and 50, respectively. All the training images are resized to $256 \times 256$ .
+
+# 4.3. Comparison with State-of-the-arts
+
+We compare our RGB stream with 18 other state-of-the-art methods including 9 RGB-D methods (marked with $\star$ ): CTMF\* [11], DF\* [33], CDCP\* [44], PCA\* [3], PDNet\* [43], MMCI\* [5], TANet\* [4], CPFP\* [41], DMRA\* [31]; and 9 RGB methods: DSS [16], Amulet [39], R$^{3}$Net [7], PiCANet [23], PAGRN [40], PoolNet [22], AFNet [9], CPD [38], EGNet [42]. We implement these models with authorized codes or directly evaluate results provided by the authors. Note that CPD [38] and EGNet [42] have two settings (with VGG16 [35] and ResNet50 [12] backbone networks). For fair comparison, we show the results of CPD [38] and EGNet [42] using the same VGG16 backbone network as ours.
+
+Quantitative Evaluation. Table 1 shows the quantitative comparison in terms of three evaluation metrics on five datasets. It can be seen that our proposed RGB stream outperforms both RGB methods and RGB-D methods across the five datasets, except for second-best weighted $F$ -measure scores on NJUD and RGBD135. In particular, our RGB stream outperforms all other methods by a large margin on DUT-RGBD, NLPR and STEREO, where the images are comparatively complicated. This indicates that our distiller can transfer qualified depth knowledge to facilitate the RGB stream.
+
+Qualitative Evaluation. In Figure 4, we show the qualitative comparison in some challenging cases: low intensity environment ( $1^{st}$ row), similar foreground and background ( $2^{nd}$ and $3^{rd}$ rows), transparent object ( $5^{th}$ row), small object ( $5^{th}$ and $6^{th}$ rows) and multiple objects ( $4^{th}$ , $5^{th}$ and $6^{th}$ rows). Compared to the RGB methods (last 4 columns), our method makes it easier to discriminate the salient objects from background and achieves more complete predictions. This indicates that our RGB stream is positively influenced by the depth knowledge transferred from the depth stream, leading to robust results. Moreover, compared to the RGB-D methods ( $5^{th} - 8^{th}$ columns), our method also locates and segments salient objects more accurately. It further demonstrates the superiority of our proposed A2dele in transferring depth knowledge.
+
+Complexity Evaluation. Moreover, we compare the model size and FPS (Frames Per Second) with other models for complexity evaluation, as shown in Table 1. It can be observed that our RGB stream runs 12 times faster than the best performing method DMRA* [31] while reducing the model size by $76\%$ . Furthermore, compared to the most efficient model CPD [38], we also achieve a large improvement on DUT-RGBD, NJUD and NLPR with half the model size and nearly double the FPS. These results further verify that our A2dele enables a high-accuracy and low-cost RGB-D saliency detection model.
+
+Table 1. Quantitative comparisons of $F$ -measure $(F_{\beta})$ [1], weighted $F$ -measure $(F_{\beta}^{w})$ [28] and Mean Absolute Error (MAE) scores on five RGB-D datasets. $\star$ represents RGB-D methods. - means no available results. (red: best, blue: second best, green: third best).
+
+| Methods | Years | FPS↑ | Size↓ | DUT-RGBD | NJUD | NLPR | STEREO | RGBD135 |
| Fβw↑ | Fβ↑ | MAE↓ | Fβw↑ | Fβ↑ | MAE↓ | Fβw↑ | Fβ↑ | MAE↓ | Fβw↑ | Fβ↑ | MAE↓ | Fβw↑ | Fβ↑ | MAE↓ |
| DSS | CVPR'17 | 23 | 447 | .628 | .732 | .127 | .678 | .776 | .108 | .614 | .755 | .076 | .718 | .814 | .087 | .556 | .697 | .098 |
| Amulet | ICCV'17 | 21 | 133 | .762 | .803 | .083 | .758 | .798 | .085 | .716 | .722 | .062 | .811 | .842 | .062 | .701 | .725 | .070 |
| R3Net | IJCAI'18 | 22 | 225 | .709 | .781 | .113 | .736 | .775 | .092 | .611 | .649 | .101 | .752 | .800 | .084 | .693 | .728 | .066 |
| PiCANet | CVPR'18 | 5 | 197 | .741 | .826 | .080 | .768 | .806 | .071 | .707 | .761 | .053 | .792 | .835 | .062 | .741 | .797 | .042 |
| PAGRN | CVPR'18 | - | - | .746 | .836 | .079 | .746 | .827 | .081 | .707 | .795 | .051 | .774 | .856 | .067 | .748 | .834 | .044 |
| PoolNet | CVPR'19 | 32 | 279 | .836 | .871 | .049 | .816 | .850 | .057 | .771 | .791 | .046 | .849 | .877 | .045 | .814 | .852 | .031 |
| AFNet | CVPR'19 | 26 | 144 | .817 | .851 | .064 | .832 | .857 | .056 | .796 | .807 | .043 | .850 | .876 | .046 | .816 | .840 | .034 |
| CPD | CVPR'19 | 66 | 112 | .835 | .872 | .055 | .821 | .853 | .059 | .829 | .840 | .037 | .851 | .880 | .046 | .841 | .860 | .028 |
| EGNet | ICCV'19 | 21 | 412 | .805 | .866 | .059 | .808 | .846 | .060 | .774 | .800 | .047 | .835 | .876 | .049 | .787 | .831 | .035 |
| CTMF* | Tcyb'17 | 50 | 826 | .690 | .792 | .097 | .732 | .788 | .085 | .691 | .723 | .056 | .727 | .786 | .087 | .694 | .765 | .055 |
| DF* | TIP'17 | - | - | .542 | .748 | .145 | .552 | .744 | .151 | .524 | .682 | .099 | .576 | .761 | .142 | .397 | .566 | .130 |
| CDCP* | ICCV'17 | - | - | .530 | .633 | .159 | .522 | .618 | .181 | .512 | .591 | .114 | .595 | .680 | .149 | .484 | .583 | .119 |
| PCA* | CVPR'18 | 15 | 534 | .696 | .760 | .100 | .811 | .844 | .059 | .772 | .794 | .044 | .810 | .845 | .061 | .718 | .763 | .049 |
| PDNet* | ICME'19 | - | - | .650 | .757 | .112 | .798 | .832 | .062 | .659 | .740 | .064 | .799 | .833 | .064 | .731 | .800 | .050 |
| MMCI* | PR'19 | 19 | 930 | .636 | .753 | .112 | .749 | .813 | .079 | .688 | .729 | .059 | .747 | .812 | .080 | .656 | .750 | .064 |
| TANet* | TIP'19 | - | - | .712 | .779 | .093 | .812 | .844 | .061 | .789 | .795 | .041 | .811 | .849 | .059 | .745 | .782 | .045 |
| CPFP* | CVPR'19 | 7 | 278 | .644 | .736 | .099 | - | - | - | .820 | .822 | .036 | - | - | - | .794 | .819 | .037 |
| DMRA* | ICCV'19 | 10 | 239 | .858 | .883 | .048 | .853 | .872 | .051 | .845 | .854 | .031 | .850 | .868 | .047 | .849 | .857 | .029 |
| Our | - | 120 | 57.3 | .870 | .892 | .042 | .851 | .874 | .051 | .867 | .878 | .028 | .867 | .884 | .043 | .845 | .865 | .028 |
+
+
+Figure 4. Visual comparison of our RGB stream with top-ranking CNNs-based methods in some challenging scenes.
+
+# 4.4. Ablation Studies
+
+Effect of Adaptive Depth Distillation Scheme. Our adaptive depth distillation scheme aims to transfer the desired pixel-wise depth knowledge to the prediction of the RGB stream. We look into the effect of enabling our adaptive depth distillation scheme, as shown in Table 2. It is seen that our adaptive distillation largely improves the baseline
+
+RGB stream (leveraging RGB only) across four datasets. We also show the visual effects in Figure 5. It can be observed that our adaptive distillation scheme can help the RGB stream distinguish the salient objects from the background by transferring high-quality depth knowledge ( $1^{st}$ and $2^{nd}$ rows), and remove the negative effects caused by an inaccurate depth map ( $3^{rd}$ row). Moreover, to make a
+
+Table 2. The effect of different distillation schemes in our proposed A2dele. $\lambda$ denotes the adaptive depth distillation scheme with a fixed $\lambda$ , and $L_{Adap}$ denotes the scheme with our proposed adaptive factor. $L_{Atten}$ represents the attentive distillation scheme.
+
+| Model | DUT-RGBD | NJUD | NLPR | STEREO |
| Fwβ↑ | Fβ↑ | MAE↓ | Fwβ↑ | Fβ↑ | MAE↓ | Fwβ↑ | Fβ↑ | MAE↓ | Fwβ↑ | Fβ↑ | MAE↓ |
| Depth | .829 | .852 | .054 | .815 | .835 | .061 | .811 | .825 | .043 | .648 | .702 | .116 |
| RGB | .836 | .873 | .052 | .817 | .848 | .058 | .834 | .850 | .036 | .829 | .860 | .053 |
| RGB+λ=0.3 | .856 | .883 | .048 | .841 | .862 | .053 | .849 | .863 | .032 | .850 | .869 | .048 |
| RGB+λ=0.5 | .858 | .884 | .048 | .840 | .863 | .053 | .854 | .869 | .031 | .855 | .875 | .046 |
| RGB+λ=0.7 | .834 | .863 | .056 | .823 | .844 | .058 | .830 | .843 | .037 | .832 | .852 | .054 |
| RGB+LAdap | .861 | .886 | .045 | .845 | .867 | .051 | .855 | .870 | .032 | .858 | .877 | .046 |
| RGB+LAdap+LAtten | .870 | .892 | .042 | .851 | .874 | .051 | .867 | .878 | .028 | .867 | .884 | .043 |
+
+
+Figure 5. Visual analysis of the adaptive depth distillation scheme.
+
+deeper analysis of the core component in the adaptive depth distillation scheme, the adaptive factor $\lambda$ , we add comparisons with fixed $\lambda$ (0.3, 0.5, 0.7) in Table 2. It can be seen that our 'RGB+LAdap' achieves the overall best results. Learning from the depth stream with a fixed $\lambda$ cannot maximize the benefits of the depth stream. By contrast, our adaptive factor tackles this dilemma by selectively transferring the depth knowledge to the RGB stream according to the performance of the depth stream.
+
+Effect of Attentive Depth Distillation Scheme. Our attentive depth distillation scheme aims to transfer localization knowledge to RGB features. To demonstrate its effect, we visualize the attention map and saliency prediction in the absence of this scheme, as shown in Figure 6. Without our attentive depth distillation scheme, the attention map (Figure 6(a)) cannot effectively filter the distractors in RGB features, introducing background noise into the saliency prediction (Figure 6(b)). In contrast, the attention map generated with the attentive depth distillation scheme (Figure 6(c)) effectively suppresses background distractions in RGB features and, as a result, the prediction highlights the salient objects successfully (Figure 6(d)). These visual improvements are reasonable since the useful RGB features are emphasized and background activations are suppressed by the proposed attentive depth distillation scheme. Table 2 also shows improved performance across the four datasets when our attentive depth distillation scheme is added.
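+
+For completeness, a minimal sketch of what an attention-transfer term could look like is given below: the RGB stream's channel-pooled spatial attention is pushed toward that of the depth stream. The attention definition and the L2 penalty are assumptions made for illustration and are not claimed to match A2dele's exact attentive distillation loss.
+
+```python
+import torch.nn.functional as F
+
+def spatial_attention(feat):
+    """Channel-pooled spatial attention, normalized over spatial positions."""
+    attn = feat.pow(2).mean(dim=1, keepdim=True)            # (B, 1, H, W)
+    return F.softmax(attn.flatten(2), dim=-1).view_as(attn)
+
+def attentive_distillation_loss(rgb_feat, depth_feat):
+    """Hypothetical sketch: align the RGB stream's spatial attention with the
+    depth stream's, so background activations in RGB features get suppressed."""
+    a_rgb = spatial_attention(rgb_feat)
+    a_depth = spatial_attention(depth_feat).detach()        # depth stream as teacher
+    return F.mse_loss(a_rgb, a_depth)
+```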
+
+
+Figure 6. Visual analysis of the attentive depth distillation scheme. (a) and (b) denote the attention map and prediction generated from $\mathrm{RGB} + L_{Adap}$ , respectively. (c) and (d) represent the attention map and prediction generated from $\mathrm{RGB} + L_{Adap} + L_{Atten}$ , respectively.
+
+# 4.5. Applying A2dele in Existing RGB-D Models
+
+In this paper, we apply the proposed A2dele to two top-ranking RGB-D models (CPFP [41], DMRA [31]) to achieve improved efficiency and comparable accuracy. CPFP uses a contrast-enhanced subnet to process depth data and DMRA adopts a VGG-19 to encode depth features. We first replace the original depth stream (the subnet in CPFP and the VGG-19 in DMRA) with ours, and then apply the two proposed distillation schemes. Specifically, to apply our attentive depth distillation scheme, we add the same attention module at each level. The attention module is lightweight and adds almost no extra computational cost, as shown in Table 3. For DMRA, the depth vector in the depth-induced multi-scale weighting module is set to one. For fair comparisons, we adopt the same training and test sets as their original settings.
+
+In Table 3, we show the quantitative comparison of the original models and the improved models (+A2dele). Our A2dele largely improves the efficiency of the original models: it boosts the FPS of CPFP by $340\%$ and the FPS of DMRA by $180\%$, and reduces the model size of CPFP by $43\%$ and that of DMRA by $37\%$. On the other hand, by applying our A2dele, we improve the performance of
+
+Table 3. Quantitative Comparison of applying A2dele on top-ranking RGB-D models with original models. '-RGB' represents the RGB-D models without depth stream, and '+A2dele' represents embedding '-RGB' with our A2dele.
+
+| Methods | Size(M)↓ | FPS↑ | LFSD [20] Fβ↑ | LFSD MAE↓ | NJU2000 [17] Fβ↑ | NJU2000 MAE↓ | RGBD135 [6] Fβ↑ | RGBD135 MAE↓ | NLPR [30] Fβ↑ | NLPR MAE↓ |
+| CPFP-RGB | 159.37 | 31 | .759 | .123 | .844 | .057 | .804 | .040 | .797 | .041 |
+| CPFP | 278 | 7 | .811 | .088 | .850 | .053 | .815 | .037 | .840 | .036 |
+| CPFP+A2dele | 159.42 | 31 | .806 | .094 | .861 | .053 | .818 | .043 | .873 | .033 |
+| Methods | Size(M)↓ | FPS↑ | DUT-RGBD [31] Fβ↑ | DUT-RGBD MAE↓ | NJUD [17] Fβ↑ | NJUD MAE↓ | NLPR [30] Fβ↑ | NLPR MAE↓ | STEREO [29] Fβ↑ | STEREO MAE↓ |
+| DMRA-RGB | 150.14 | 28 | .874 | .054 | .828 | .061 | .826 | .036 | .844 | .054 |
+| DMRA | 238.8 | 10 | .883 | .048 | .872 | .051 | .855 | .031 | .868 | .047 |
+| DMRA+A2dele | 150.15 | 28 | .889 | .040 | .867 | .051 | .854 | .032 | .869 | .046 |
+
+
+Figure 7. Visual comparison of applying A2dele to top-ranking RGB-D models against the original models on the NLPR dataset. Model abbreviations follow the caption of Table 3.
+
+CPFP-RGB and DMRA-RGB (without depth stream) by a large margin across four datasets. These results further verify the generalization ability of our A2dele. Meanwhile, we also achieve comparable performance to the original models (CPFP and DMRA). In particular, CPFP+A2dele achieves large improvements on NLPR, and DMRA+A2dele improves the performance on DUT-RGBD by a large margin. Moreover, our A2dele leaves the depth data unused during testing, making the original models more broadly applicable.
+
+In Figure 7, we show some challenging cases in the NLPR dataset: inaccurate depth maps ($1^{st}$ and $2^{nd}$ rows) or a depth map with extremely low contrast between salient objects and non-salient regions ($3^{rd}$ row). CPFP+A2dele segments more uniform salient objects than the original model. Consistently, CPFP+A2dele achieves large improvements in $F$-measure score, as shown in Table 3. This improvement is reasonable since CPFP does not account for the adverse effects caused by low-quality depth maps, whereas our A2dele can screen out these erroneous effects thanks to its ability to selectively transfer useful depth knowledge. Meanwhile, DMRA+A2dele also benefits from our A2dele and improves robustness in these challenging scenes.
+
+# 5. Conclusion
+
+In this paper, we propose a depth distiller (A2dele) within a two-stream framework that learns from RGB-D data but can be tested on RGB data only, while maximizing performance. The proposed A2dele uses the network prediction as the first bridge to adaptively transfer the desired pixel-wise depth knowledge to the prediction of the RGB stream, while the network attention serves as the second bridge for transferring the localization knowledge of salient objects to RGB features. We conduct experiments on five benchmark datasets and demonstrate that our method achieves state-of-the-art performance and runs significantly faster, at a much smaller model size, than existing RGB-D and RGB methods. To prove the generalization ability of our A2dele, we apply it to top-ranking RGB-D networks. Extensive experiments show that our A2dele can improve the efficiency of RGB-D methods by a large margin while maintaining performance.
+
+Acknowledgements. This work was supported by the Science and Technology Innovation Foundation of Dalian (2019J12GX034), the National Natural Science Foundation of China (61976035, 61725202, U1903215, U1708263, 61829102, 91538201 and 61751212), and the Fundamental Research Funds for the Central Universities (DUT19JC58).
+
+# References
+
+[1] Radhakrishna Achanta, Sheila Hemami, Francisco Estrada, and Sabine Susstrunk. Frequency-tuned salient region detection. In CVPR, pages 1597-1604, 2009.
+[2] Ali Borji, Ming Ming Cheng, Qibin Hou, Huaizu Jiang, and Jia Li. Salient object detection: A survey. Eprint Arxiv, 16(7):3118, 2014.
+[3] Hao Chen and Youfu Li. Progressively complementarity-aware fusion network for rgb-d salient object detection. In CVPR, pages 3051-3060, 2018.
+[4] Hao Chen and Youfu Li. Three-stream attention-aware network for rgb-d salient object detection. TIP, 28(6):2825-2835, 2019.
+[5] Hao Chen, Youfu Li, and Dan Su. Multi-modal fusion network with multi-scale multi-path and cross-modal interactions for rgb-d salient object detection. PR, 86:376-385, 2019.
+[6] Yupeng Cheng, Huazhu Fu, Xingxing Wei, Jiangjian Xiao, and Xiaochun Cao. Depth enhanced saliency detection method. In Proceedings of international conference on internet multimedia computing and service, page 23. ACM, 2014.
+[7] Zijun Deng, Xiaowei Hu, Lei Zhu, Xuemiao Xu, Jing Qin, Guoqiang Han, and Pheng-Ann Heng. R3net: Recurrent residual refinement network for saliency detection. In IJCAI, pages 684-690. AAAI Press, 2018.
+[8] David Feng, Nick Barnes, Shaodi You, and Chris McCarthy. Local background enclosure for rgb-d salient object detection. In CVPR, pages 2343-2350, 2016.
+[9] Mengyang Feng, Huchuan Lu, and Errui Ding. Attentive feedback network for boundary-aware salient object detection. In CVPR, pages 1623-1632, 2019.
+[10] Saurabh Gupta, Judy Hoffman, and Jitendra Malik. Cross modal distillation for supervision transfer. In CVPR, pages 2827-2836, 2016.
+[11] Junwei Han, Hao Chen, Nian Liu, Chenggang Yan, and Xuelong Li. Cnns-based rgb-d saliency detection via cross-view transfer and multiview fusion. IEEE Tcyb, 48(11):3171-3183, 2017.
+[12] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770-778, 2016.
+[13] Tong He, Chunhua Shen, Zhi Tian, Dong Gong, Changming Sun, and Youliang Yan. Knowledge adaptation for efficient semantic segmentation. In CVPR, pages 578-587, 2019.
+[14] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
+[15] Judy Hoffman, Saurabh Gupta, and Trevor Darrell. Learning with side information through modality hallucination. In CVPR, June 2016.
+[16] Qibin Hou, Ming-Ming Cheng, Xiaowei Hu, Ali Borji, Zhuowen Tu, and Philip HS Torr. Deeply supervised salient object detection with short connections. In CVPR, pages 3203-3212, 2017.
+[17] Ran Ju, Ling Ge, Wenjing Geng, Tongwei Ren, and Gangshan Wu. Depth saliency based on anisotropic center-surface difference. In ICIP, pages 1115-1119. IEEE, 2014.
+
+[18] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *ICLR*, 2014.
+[19] Kuan-Hui Lee, German Ros, Jie Li, and Adrien Gaidon. Spigan: Privileged adversarial learning from simulation. *ICLR*, 2019.
+[20] Nianyi Li, Jinwei Ye, Yu Ji, Haibin Ling, and Jingyi Yu. Saliency detection on light field. In CVPR, pages 2806-2813, 2014.
+[21] Quanquan Li, Shengying Jin, and Junjie Yan. Mimicking very efficient network for object detection. In CVPR, pages 6356-6364, 2017.
+[22] Jiang-Jiang Liu, Qibin Hou, Ming-Ming Cheng, Jiashi Feng, and Jianmin Jiang. A simple pooling-based design for real-time salient object detection. CVPR, 2019.
+[23] Nian Liu, Junwei Han, and Ming-Hsuan Yang. Picanet: Learning pixel-wise contextual attention for saliency detection. In CVPR, pages 3089-3098, 2018.
+[24] Songtao Liu, Di Huang, et al. Receptive field block net for accurate and fast object detection. In ECCV, pages 385-400, 2018.
+[25] Yifan Liu, Ke Chen, Chris Liu, Zengchang Qin, Zhenbo Luo, and Jingdong Wang. Structured knowledge distillation for semantic segmentation. In CVPR, pages 2604-2613, 2019.
+[26] David Lopez-Paz, Léon Bottou, Bernhard Schölkopf, and Vladimir Vapnik. Unifying distillation and privileged information. arXiv preprint arXiv:1511.03643, 2015.
+[27] Zelun Luo, Jun-Ting Hsieh, Lu Jiang, Juan Carlos Niebles, and Li Fei-Fei. Graph distillation for action detection with privileged modalities. In ECCV, September 2018.
+[28] Ran Margolin, Lihi Zelnik-Manor, and Ayellet Tal. How to evaluate foreground maps? In CVPR, pages 248-255, 2014.
+[29] Yuzhen Niu, Yujie Geng, Xueqing Li, and Feng Liu. Leveraging stereopsis for saliency analysis. In CVPR, pages 454-461. IEEE, 2012.
+[30] Houwen Peng, Bing Li, Weihua Xiong, Weiming Hu, and Rongrong Ji. Rgbd salient object detection: A benchmark and algorithms. In ECCV, pages 92-109. Springer, 2014.
+[31] Yongri Piao, Wei Ji, Jingjing Li, Miao Zhang, and Huchuan Lu. Depth-induced multi-scale recurrent attention network for saliency detection. In ICCV, October 2019.
+[32] Andrea Pilzer, Stephane Lathuiliere, Nicu Sebe, and Elisa Ricci. Refine and distill: Exploiting cycle-inconsistency and knowledge distillation for unsupervised monocular depth estimation. In CVPR, pages 9768-9777, 2019.
+[33] Liangqiong Qu, Shengfeng He, Jiawei Zhang, Jiandong Tian, Yandong Tang, and Qingxiong Yang. Rgbd salient object detection via deep fusion. IEEE TIP, 26(5):2274-2285, 2017.
+[34] Jianqiang Ren, Xiaojin Gong, Lu Yu, Wenhui Zhou, and Michael Ying Yang. Exploiting global priors for rgb-d saliency detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 25-32, 2015.
+[35] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. *ICLR*, 2015.
+
+[36] Vladimir Vapnik and Rauf Izmailov. Learning using privileged information: similarity control and knowledge transfer. Journal of Machine Learning Research, 16:2023-2049, 2015.
+[37] Tuan-Hung Vu, Himalaya Jain, Maxime Bucher, Matthieu Cord, and Patrick Pérez. Dada: Depth-aware domain adaptation in semantic segmentation. arXiv preprint arXiv:1904.01886, 2019.
+[38] Zhe Wu, Li Su, and Qingming Huang. Cascaded partial decoder for fast and accurate salient object detection. In CVPR, pages 3907-3916, 2019.
+[39] Pingping Zhang, Dong Wang, Huchuan Lu, Hongyu Wang, and Xiang Ruan. Amulet: Aggregating multi-level convolutional features for salient object detection. In ICCV, pages 202-211, 2017.
+[40] Xiaoning Zhang, Tiantian Wang, Jinqing Qi, Huchuan Lu, and Gang Wang. Progressive attention guided recurrent network for salient object detection. In CVPR, pages 714-722, 2018.
+[41] Jia-Xing Zhao, Yang Cao, Deng-Ping Fan, Ming-Ming Cheng, Xuan-Yi Li, and Le Zhang. Contrast prior and fluid pyramid integration for rgbd salient object detection. In CVPR, 2019.
+[42] Jia-Xing Zhao, Jiang-Jiang Liu, Deng-Ping Fan, Yang Cao, Jufeng Yang, and Ming-Ming Cheng. Egnet: Edge guidance network for salient object detection. In ICCV, Oct 2019.
+[43] Chunbiao Zhu, Xing Cai, Kan Huang, Thomas H Li, and Ge Li. Pdnet: Prior-model guided depth-enhanced network for salient object detection. In ICME, pages 199-204. IEEE, 2019.
+[44] Chunbiao Zhu, Ge Li, Wenmin Wang, and Ronggang Wang. An innovative salient object detection using center-dark channel prior. In ICCV, pages 1509-1515, 2017.
\ No newline at end of file
diff --git a/a2deleadaptiveandattentivedepthdistillerforefficientrgbdsalientobjectdetection/images.zip b/a2deleadaptiveandattentivedepthdistillerforefficientrgbdsalientobjectdetection/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b6737a33e16aa47923e6bca8ef45eccce091bd46
--- /dev/null
+++ b/a2deleadaptiveandattentivedepthdistillerforefficientrgbdsalientobjectdetection/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:263846192557195df744910bd0b6668b8b7fdedd080684007daf5821158155e6
+size 751957
diff --git a/a2deleadaptiveandattentivedepthdistillerforefficientrgbdsalientobjectdetection/layout.json b/a2deleadaptiveandattentivedepthdistillerforefficientrgbdsalientobjectdetection/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..da1a00ed39f3026903c4272a89a68f44e9a5b30c
--- /dev/null
+++ b/a2deleadaptiveandattentivedepthdistillerforefficientrgbdsalientobjectdetection/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:27c4f6f3350466b42717adfbfcd3f3fcce5f706724f73760ef73eb212745068d
+size 342942
diff --git a/aanetadaptiveaggregationnetworkforefficientstereomatching/c62fb2f3-b7e7-4344-a82f-218cb1d65953_content_list.json b/aanetadaptiveaggregationnetworkforefficientstereomatching/c62fb2f3-b7e7-4344-a82f-218cb1d65953_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..641681af7d03e7deebdb69631921932fffd6c9e7
--- /dev/null
+++ b/aanetadaptiveaggregationnetworkforefficientstereomatching/c62fb2f3-b7e7-4344-a82f-218cb1d65953_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a95d1356783963348c8221ef627d278aa8dfb7300379e5d995bc4d7cae8916eb
+size 76375
diff --git a/aanetadaptiveaggregationnetworkforefficientstereomatching/c62fb2f3-b7e7-4344-a82f-218cb1d65953_model.json b/aanetadaptiveaggregationnetworkforefficientstereomatching/c62fb2f3-b7e7-4344-a82f-218cb1d65953_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..a052a714b6235eb8e49c3e0b69173b316df76595
--- /dev/null
+++ b/aanetadaptiveaggregationnetworkforefficientstereomatching/c62fb2f3-b7e7-4344-a82f-218cb1d65953_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5b9320b6af65aa15f9aece902e23a2677c79aa5d56ac0d75188a4474f6fd552d
+size 93205
diff --git a/aanetadaptiveaggregationnetworkforefficientstereomatching/c62fb2f3-b7e7-4344-a82f-218cb1d65953_origin.pdf b/aanetadaptiveaggregationnetworkforefficientstereomatching/c62fb2f3-b7e7-4344-a82f-218cb1d65953_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e3b4e20bd07333f53483d202d5ed2f2ce05e1779
--- /dev/null
+++ b/aanetadaptiveaggregationnetworkforefficientstereomatching/c62fb2f3-b7e7-4344-a82f-218cb1d65953_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d7c0796a59dbde3481b37d556eef621e14fa1a21e2ff9a4f91abb44e8d16b6ac
+size 2544562
diff --git a/aanetadaptiveaggregationnetworkforefficientstereomatching/full.md b/aanetadaptiveaggregationnetworkforefficientstereomatching/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..bafb402324df34d4d7188e759cc0378207d4cd68
--- /dev/null
+++ b/aanetadaptiveaggregationnetworkforefficientstereomatching/full.md
@@ -0,0 +1,306 @@
+# AANet: Adaptive Aggregation Network for Efficient Stereo Matching
+
+Haofei Xu Juyong Zhang*
+University of Science and Technology of China
+
+xhf@mail.ustc.edu.cn, juyong@ustc.edu.cn
+
+# Abstract
+
+Despite the remarkable progress made by learning based stereo matching algorithms, one key challenge remains unsolved. Current state-of-the-art stereo models are mostly based on costly 3D convolutions; the cubic computational complexity and high memory consumption make them quite expensive to deploy in real-world applications. In this paper, we aim at completely replacing the commonly used 3D convolutions to achieve fast inference speed while maintaining comparable accuracy. To this end, we first propose a sparse points based intra-scale cost aggregation method to alleviate the well-known edge-fattening issue at disparity discontinuities. Further, we approximate the traditional cross-scale cost aggregation algorithm with neural network layers to handle large textureless regions. Both modules are simple, lightweight, and complementary, leading to an effective and efficient architecture for cost aggregation. With these two modules, we can not only significantly speed up existing top-performing models (e.g., $41\times$ faster than GC-Net, $4\times$ faster than PSMNet and $38\times$ faster than GA-Net), but also improve the performance of fast stereo models (e.g., StereoNet). We also achieve competitive results on Scene Flow and KITTI datasets while running at 62ms, demonstrating the versatility and high efficiency of the proposed method. Our full framework is available at https://github.com/haofelixu/aanet.
+
+# 1. Introduction
+
+Estimating depth from stereo pairs is one of the most fundamental problems in computer vision [29]. The key task is to find spatial pixel correspondences, i.e., stereo matching, then depth can be recovered by triangulation. Efficient and accurate stereo matching algorithms are crucial for many real-world applications that require fast and reliable responses, such as robot navigation, augmented reality and autonomous driving.
+
+Figure 1: Illustration of the sampling locations in regular convolution based cost aggregation methods and our proposed approach, where the yellow and red points represent the locations for aggregation. (a) left image of a stereo pair. (b) fixed sampling locations in regular convolutions, also the aggregation weights are spatially shared. (c) adaptive sampling locations and position-specific aggregation weights in our approach. The background in (b) and (c) is ground truth disparity.
+
+Traditional stereo matching algorithms generally perform a four-step pipeline: matching cost computation, cost aggregation, disparity computation and refinement, and they can be broadly classified into global and local methods [29]. Global methods usually solve an optimization problem by minimizing a global objective function that contains data and smoothness terms [31, 17], while local methods only consider neighbor information [40, 12], making themselves much faster than global methods [23, 29]. Although significant progress has been made by traditional methods, they still suffer in challenging situations like textureless regions, repetitive patterns and thin structures.
+
+Learning based methods make use of deep neural networks to learn strong representations from data, achieving promising results even in those challenging situations. DispNetC [20] builds the first end-to-end trainable framework for disparity estimation, where a correlation layer is used to measure the similarity of left and right image features. GC-Net [14] takes a different approach by directly concatenating left and right features, and thus 3D convolutions are required to aggregate the resulting 4D cost volume. PSMNet [4] further improves GC-Net by introducing more 3D convolutions for cost aggregation and accordingly obtains better accuracy. Although state-of-the-art performance can be achieved with 3D convolutions, the high
+
+computational cost and memory consumption make it quite expensive to deploy in practice (for example, PSMNet costs about 4G memory and 410ms to predict a KITTI stereo pair even on high-end GPUs). The recent work, GA-Net [43], also notices the drawbacks of 3D convolutions and tries to replace them with two guided aggregation layers. However, their final model still uses fifteen 3D convolutions.
+
+To this end, a motivating question arises: How to achieve state-of-the-art results without any 3D convolutions while being significantly faster? Answering this question is especially challenging due to the strong regularization provided by 3D convolutions. In this paper, we show that by designing two effective and efficient modules for cost aggregation, competitive performance can be obtained on both Scene Flow and KITTI datasets even with simple feature correlation [20] instead of concatenation [14].
+
+Specifically, we first propose a new sparse points based representation for intra-scale cost aggregation. As illustrated in Fig. 1, a set of sparse points are adaptively sampled to locate themselves in regions with similar disparities, alleviating the well-known edge-fattening issue at disparity discontinuities [29]. Moreover, such representation is flexible to sample from a large context while being much more efficient than sampling from a large window, an essential requirement for traditional local methods to obtain high-quality results [23]. We additionally learn content-adaptive weights to achieve position-specific weighting for cost aggregation, aiming to overcome the inherent drawback of spatial sharing nature in regular convolutions. We implement the above ideas with deformable convolution [45].
+
+We further approximate traditional cross-scale cost aggregation algorithm [44] with neural network layers by constructing multi-scale cost volumes in parallel and allowing adaptive multi-scale interactions, producing accurate disparity predictions even in low-texture or textureless regions.
+
+These two modules are simple, lightweight, and complementary, leading to an efficient architecture for cost aggregation. We also make extensive use of the key ideas in the feature extraction stage, resulting in our highly efficient and accurate Adaptive Aggregation Network (AANet). For instance, we can outperform existing top-performing models on the Scene Flow dataset while being significantly faster, e.g., $41\times$ faster than GC-Net [14], $4\times$ faster than PSMNet [4] and $38\times$ faster than GA-Net [43]. Our method can also be a valuable way to improve the performance of fast stereo models, e.g., StereoNet [15], which are usually based on a very low-resolution cost volume to achieve fast speed at the cost of accuracy. We also achieve competitive performance on the KITTI dataset while running at 62ms, demonstrating the versatility and high efficiency of the proposed method.
+
+# 2. Related Work
+
+This section reviews the most relevant work to ours.
+
+Local Cost Aggregation. Local stereo methods (either traditional [40, 12] or 2D/3D convolution based methods [20, 14]) usually perform window based cost aggregation:
+
+$$
+\tilde {\boldsymbol {C}} (d, \boldsymbol {p}) = \sum_ {\boldsymbol {q} \in N (\boldsymbol {p})} w (\boldsymbol {p}, \boldsymbol {q}) \boldsymbol {C} (d, \boldsymbol {q}), \tag {1}
+$$
+
+where $\tilde{C}(d, p)$ denotes the aggregated cost at pixel $p$ for disparity candidate $d$ , pixel $q$ belongs to the neighbors $N(p)$ of $p$ , $w(p, q)$ is the aggregation weight and $C(d, q)$ is the raw matching cost at $q$ for disparity $d$ . Despite the widespread and successful applications of local methods, they still have several important limitations. First and foremost, the fundamental assumption made by local methods is that all the pixels in the matching window have similar disparities. However, this assumption does not hold at disparity discontinuities, causing the well-known edge-fattening issue in object boundaries and thin structures [29, 23]. As a consequence, the weighting function $w$ needs to be designed carefully to eliminate the influence of pixels that violate the smoothness assumption [12, 40]. While learning based methods automatically learn the aggregation weights from data, they still suffer from the inherent drawback of regular convolutions: weights are spatially shared, thus making themselves content-agnostic. Moreover, a large window size is often required to obtain high-quality results [24, 23], leading to high computational cost. Some works have been proposed to address the limitations of fixed rectangular window, e.g., using varying window size [26], multiple windows [11], or unconstrained shapes [2].
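+
+As a concrete, deliberately naive instance of Eq. (1), the sketch below aggregates a raw cost volume with a uniform square window, i.e., equal weights $w(p, q)$ for all neighbors; this is exactly the setting in which the edge-fattening issue appears. The function name is illustrative only.
+
+```python
+import torch.nn.functional as F
+
+def box_window_aggregation(cost, window=5):
+    """Local cost aggregation (Eq. 1) with a uniform square window.
+
+    cost: (B, D, H, W) raw matching costs, one channel per disparity candidate.
+    Each pixel simply averages the costs of its neighbors with equal weights.
+    """
+    pad = window // 2
+    return F.avg_pool2d(cost, kernel_size=window, stride=1, padding=pad)
+```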
+
+Different from existing methods, we propose a new sparse points based representation for cost aggregation. This representation is also different from [23], in which sparse points inside the matching window are regularly sampled to reduce the computational complexity. In contrast, our proposed sampling mechanism is completely unconstrained and adaptive, providing more flexibility than the regular sampling in [23]. We also learn additional content-adaptive weights to enable position-specific weighting in contrast to the spatial sharing nature of regular convolutions.
+
+Cross-Scale Cost Aggregation. Traditional cross-scale cost aggregation algorithm [44] reformulates local cost aggregation from a unified optimization perspective, and shows that by enforcing multi-scale consistency on cost volumes, the final cost volume is obtained through the adaptive combination of the results of cost aggregation performed at different scales. Details are provided in the supplementary material. We approximate this conclusion with neural network layers in an end-to-end manner. Different from existing coarse-to-fine approaches [33, 39, 30], we build multiscale cost volumes in parallel and allow adaptive multiscale interactions. Our cross-scale aggregation architecture is also different from the very recent work [35], in which
+
+
+Figure 2: Overview of our proposed Adaptive Aggregation Network (AANet). Given a stereo pair, we first extract down-sampled feature pyramid at $1/3$ , $1/6$ and $1/12$ resolutions with a shared feature extractor. Then multi-scale cost volumes are constructed by correlating left and right features at corresponding scales. The raw cost volumes are aggregated by six stacked Adaptive Aggregation Modules (AAModules), where an AAModule consists of three Intra-Scale Aggregation (ISA, Sec. 3.1) modules and a Cross-Scale Aggregation (CSA, Sec. 3.2) module for three pyramid levels. Next multi-scale disparity predictions are regressed. Note that the dashed arrows are only required for training and can be removed for inference. Finally the disparity prediction at $1/3$ resolution is hierarchically upsampled/refined to the original resolution. For clarity, the refinement modules are omitted in this figure, see Sec. 3.3 for details.
+
+multi-scale cost volumes are also constructed. However, [35] fuses the cost volumes from the lowest level to the higher ones hierarchically, while ours aggregates all scale cost volumes simultaneously based on the analysis in [44].
+
+Stereo Matching Networks. Existing end-to-end stereo matching networks can be broadly classified into two categories: 2D and 3D convolution based methods. They mainly differ in the way that cost volume is constructed. 2D methods [20, 18, 33] generally adopt a correlation layer [20] while 3D methods [14, 4, 25, 43, 3] mostly use direct feature concatenation [14]. An exception to concatenation based 3D methods is [8], in which group-wise correlation is proposed to reduce the information loss of full correlation [20]. In terms of performance, 3D methods usually outperform 2D methods by a large margin on popular benchmarks (e.g., Scene Flow [20] and KITTI [22]), but the running speed is considerably slower. In this paper, we aim at significantly speeding up existing top-performing methods while maintaining comparable performance. The very recent work, DeepPruner [6], shares a similar goal with us to build efficient stereo models. They propose to reduce the disparity search range by a differentiable PatchMatch [1] module, and thus a compact cost volume is constructed. In contrast, we aim at reducing the sampling complexity and improving the sampling flexibility in cost aggregation, which works on different aspects, and both methods can be
+
+complementary to each other.
+
+Deformable Convolution. Deformable convolution [5, 45] is initially designed to enhance standard convolution's capability of modeling geometric transformations, and commonly used as backbone for object detection and semantic/instance segmentation tasks. We instead take a new perspective of traditional stereo methods and propose an adaptive sampling scheme for efficient and flexible cost aggregation. Since the resulting formulation is similar to deformable convolution, we adopt it in our implementation.
+
+# 3. Method
+
+Given a rectified image pair $I_{l}$ and $I_{r}$ , we first extract downsampled feature pyramid $\{F_l^s\}_{s = 1}^S$ and $\{F_r^s\}_{s = 1}^S$ with a shared feature extractor, where $S$ denotes the number of scales, $s$ is the scale index, and $s = 1$ represents the highest scale. Then multi-scale 3D cost volumes $\{C^s\}_{s = 1}^S$ are constructed by correlating left and right image features at corresponding scales, similar to DispNetC [20]:
+
+$$
+\boldsymbol {C} ^ {s} (d, h, w) = \frac {1}{N} \langle \boldsymbol {F} _ {l} ^ {s} (h, w), \boldsymbol {F} _ {r} ^ {s} (h, w - d) \rangle , \tag {2}
+$$
+
+where $\langle \cdot ,\cdot \rangle$ denotes the inner product of two feature vectors and $N$ is the channel number of extracted features. $C^s (d,h,w)$ is the matching cost at location $(h,w)$ for disparity candidate $d$ . The raw cost volumes $\{C^s\}_{s = 1}^S$ are
+
+then aggregated with several stacked Adaptive Aggregation Modules (AAModules), where an AAModule consists of $S$ adaptive Intra-Scale Aggregation (ISA) modules and an adaptive Cross-Scale Aggregation (CSA) module for $S$ pyramid levels. Finally, the predicted low-resolution disparity is hierarchically upsampled to the original resolution with the refinement modules. All disparity predictions are supervised with ground truth when training, while only the last disparity prediction is required for inference. Fig. 2 provides an overview of our proposed Adaptive Aggregation Network (AANet). In the following, we introduce the ISA and CSA modules in detail.
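+
+Before turning to the aggregation modules, a minimal sketch of the correlation cost volume of Eq. (2) is given below, assuming rectified inputs and a simple loop over disparity candidates; the function name and the zero-filling of out-of-view pixels are implementation choices of ours, not prescribed by the paper.
+
+```python
+import torch
+
+def build_correlation_cost_volume(feat_left, feat_right, max_disp):
+    """Correlation-based cost volume (Eq. 2) at a single scale.
+
+    feat_left, feat_right: (B, C, H, W) features of the rectified stereo pair.
+    Returns a (B, max_disp, H, W) volume where channel d holds the normalized
+    inner product of the left feature at (h, w) and the right feature at (h, w - d).
+    """
+    b, c, h, w = feat_left.shape
+    cost = feat_left.new_zeros(b, max_disp, h, w)
+    for d in range(max_disp):
+        if d == 0:
+            cost[:, d] = (feat_left * feat_right).mean(dim=1)
+        else:
+            cost[:, d, :, d:] = (feat_left[:, :, :, d:] *
+                                 feat_right[:, :, :, :-d]).mean(dim=1)
+    return cost
+```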
+
+# 3.1. Adaptive Intra-Scale Aggregation
+
+To alleviate the well-known edge-fattening issue at disparity discontinuities, we propose a sparse points based representation for efficient and flexible cost aggregation. Since the resulting formulation is similar to deformable convolution, we adopt it in our implementation.
+
+Specifically, for cost volume $C \in \mathbb{R}^{D \times H \times W}$ at a certain scale, where $D, H, W$ represents the maximum disparity, height and width, respectively, the proposed cost aggregation strategy is defined as
+
+$$
+\tilde {\boldsymbol {C}} (d, \boldsymbol {p}) = \sum_ {k = 1} ^ {K ^ {2}} w _ {k} \cdot \boldsymbol {C} (d, \boldsymbol {p} + \boldsymbol {p} _ {k} + \Delta \boldsymbol {p} _ {k}), \tag {3}
+$$
+
+where $\tilde{C}(d, p)$ denotes the aggregated cost at pixel $p$ for disparity candidate $d$ , $K^2$ is the number of sampling points ( $K = 3$ in our paper), $w_k$ is the aggregation weight for the $k$ -th point, and $p_k$ is the fixed offset to $p$ in window based cost aggregation approaches. Our key difference from previous stereo works is that we learn an additional offset $\Delta p_k$ to the regular sampling location $p + p_k$ , thus enabling adaptive sampling for efficient and flexible cost aggregation, leading to high-quality results in object boundaries and thin structures.
+
+However, in the context of learning, the spatial sharing nature of regular convolution weights $\{w_k\}_{k=1}^{K^2}$ makes themselves content-agnostic. We further learn position-specific weights $\{m_k\}_{k=1}^{K^2}$ (i.e., modulation in [45], they also have effects of controlling the relative influence of the sampling points) for each pixel location $p$ to achieve content-adaptive cost aggregation:
+
+$$
+\tilde {\boldsymbol {C}} (d, \boldsymbol {p}) = \sum_ {k = 1} ^ {K ^ {2}} w _ {k} \cdot \boldsymbol {C} (d, \boldsymbol {p} + \boldsymbol {p} _ {k} + \Delta \boldsymbol {p} _ {k}) \cdot m _ {k}. \tag {4}
+$$
+
+We implement Eq. (4) with deformable convolution [45], both $\Delta p_{k}$ and $m_{k}$ are obtained by a separate convolution layer applied over the input cost volume $\pmb{C}$ . The original formulation of deformable convolution assumes the offsets $\Delta p_{k}$ and weights $m_{k}$ are shared by each channel (i.e., disparity candidate $d$ in this paper), we further evenly divide all
+
+disparity candidates into $G$ groups, and share $\Delta \pmb{p}_k$ and $m_k$ within each group. Dilated convolution [41] is also used for deformable convolution to introduce more flexibility. We set $G = 2$ and the dilation rate to 2 in this paper.
+
+We build an Intra-Scale Aggregation (ISA) module with a stack of 3 layers and a residual connection [9]. The three layers are $1 \times 1$ , $3 \times 3$ and $1 \times 1$ convolutions, where the $3 \times 3$ convolution is a deformable convolution. This design is similar to the bottleneck in [9], but we always keep the number of channels constant (equal to the number of disparity candidates). That is, we keep reasoning about disparity candidates, similar to traditional cost aggregation methods.
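+
+A rough sketch of such an ISA module, built on torchvision's deformable convolution, is shown below. The offset/mask convolutions, the single offset group, and the placement of non-linearities are our own simplifications (the paper additionally groups disparity candidates); the modulation mask argument requires a recent torchvision.
+
+```python
+import torch
+import torch.nn as nn
+from torchvision.ops import DeformConv2d
+
+class ISAModule(nn.Module):
+    """Sketch of Intra-Scale Aggregation: 1x1 -> deformable 3x3 -> 1x1 convs with
+    a residual connection, keeping the channel count equal to the number of
+    disparity candidates."""
+
+    def __init__(self, disp_candidates, dilation=2):
+        super().__init__()
+        self.conv1 = nn.Conv2d(disp_candidates, disp_candidates, 1)
+        # Offsets (2 * 3 * 3 channels) and modulation weights (3 * 3 channels)
+        # are predicted from the cost volume itself.
+        self.offset_conv = nn.Conv2d(disp_candidates, 2 * 9, 3,
+                                     padding=dilation, dilation=dilation)
+        self.mask_conv = nn.Conv2d(disp_candidates, 9, 3,
+                                   padding=dilation, dilation=dilation)
+        self.deform_conv = DeformConv2d(disp_candidates, disp_candidates, 3,
+                                        padding=dilation, dilation=dilation)
+        self.conv3 = nn.Conv2d(disp_candidates, disp_candidates, 1)
+        self.relu = nn.ReLU(inplace=True)
+
+    def forward(self, cost):
+        out = self.relu(self.conv1(cost))
+        offset = self.offset_conv(out)
+        mask = torch.sigmoid(self.mask_conv(out))
+        out = self.relu(self.deform_conv(out, offset, mask))
+        out = self.conv3(out)
+        return out + cost  # residual connection
+```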
+
+# 3.2. Adaptive Cross-Scale Aggregation
+
+In low-texture or textureless regions, searching the correspondence at the coarse scale can be beneficial [21], as the texture information will be more discriminative under the same patch size when an image is downsampled. A similar observation has also been made in [36]. Therefore, multiscale interactions are introduced in traditional cross-scale cost aggregation algorithm [44].
+
+The analysis in [44] shows that the final cost volume is obtained through the adaptive combination of the results of cost aggregation performed at different scales (details are given in the supplementary material). We thus approximate this algorithm with
+
+$$
+\hat {C} ^ {s} = \sum_ {k = 1} ^ {S} f _ {k} \left(\tilde {C} ^ {k}\right), \quad s = 1, 2, \dots , S, \tag {5}
+$$
+
+where $\hat{C}$ is the resulting cost volume after cross-scale cost aggregation, $\tilde{C}^k$ is the intra-scale aggregated cost volume at scale $k$ , for example, with the algorithm in Sec. 3.1, and $f_{k}$ is a general function to enable the adaptive combination of cost volumes at each scale. We adopt the definition of $f_{k}$ from HRNet [32], a recent work for human pose estimation, which depends on the resolutions of cost volumes $\tilde{C}^k$ and $\hat{C}^s$ . Concretely, for cost volume $\hat{C}^s$ ,
+
+$$
+f_{k} = \left\{ \begin{array}{ll} \mathcal{I}, & k = s, \\ (s - k)\ \text{stride-2 } 3 \times 3 \text{ convs}, & k < s, \\ \text{upsampling} \bigoplus 1 \times 1 \text{ conv}, & k > s, \end{array} \right. \tag{6}
+$$
+
+where $\mathcal{I}$ denotes the identity function, $s - k$ stride-2 $3\times 3$ convolutions are used for $2^{s - k}$ times downsampling to make the resolution consistent, and $\bigoplus$ means bilinear upsampling to the same resolution first, then following a $1\times 1$ convolution to align the number of channels. We denote this architecture as Cross-Scale Aggregation (CSA) module.
+
+Although our CSA module is similar to HRNet[32], they have two major differences. First, we are inspired by traditional cross-scale cost aggregation algorithm [44] and aiming at approximating the geometric conclusion with neural network layers, while HRNet is designed for learning
+
+rich feature representations. Moreover, the channel number (corresponding to the disparity dimension) of the lower-scale cost volume is halved in our approach due to the smaller search range at coarser scales, while HRNet doubles it, indicating that our architecture is more efficient than HRNet.
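+
+To make the combination rule of Eq. (5)/(6) concrete, the following sketch fuses three cost volumes whose disparity channels halve at each coarser scale; the exact layer counts, the absence of normalization, and the assumption that resolutions halve exactly between scales are ours for illustration.
+
+```python
+import torch.nn as nn
+import torch.nn.functional as F
+
+class CSAModule(nn.Module):
+    """Sketch of Cross-Scale Aggregation (Eq. 5/6) for three scales.
+
+    disp_channels[s] is the number of disparity candidates at scale s; spatial
+    resolution is assumed to halve exactly from one scale to the next."""
+
+    def __init__(self, disp_channels=(64, 32, 16)):
+        super().__init__()
+        s_num = len(disp_channels)
+        self.fuse = nn.ModuleList()
+        for s in range(s_num):              # target scale
+            row = nn.ModuleList()
+            for k in range(s_num):          # source scale
+                if k == s:
+                    row.append(nn.Identity())
+                elif k < s:
+                    # (s - k) stride-2 3x3 convs for downsampling (Eq. 6, k < s).
+                    convs, ch = [], disp_channels[k]
+                    for _ in range(s - k):
+                        convs += [nn.Conv2d(ch, disp_channels[s], 3, stride=2, padding=1),
+                                  nn.ReLU(inplace=True)]
+                        ch = disp_channels[s]
+                    row.append(nn.Sequential(*convs))
+                else:
+                    # Bilinear upsampling then 1x1 conv to align channels (k > s).
+                    row.append(nn.Conv2d(disp_channels[k], disp_channels[s], 1))
+            self.fuse.append(row)
+
+    def forward(self, costs):               # costs[s]: (B, disp_channels[s], H_s, W_s)
+        out = []
+        for s in range(len(costs)):
+            acc = 0
+            for k in range(len(costs)):
+                x = costs[k]
+                if k > s:
+                    x = F.interpolate(x, size=costs[s].shape[-2:],
+                                      mode='bilinear', align_corners=False)
+                acc = acc + self.fuse[s][k](x)
+            out.append(acc)
+        return out
+```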
+
+# 3.3. Adaptive Aggregation Network
+
+The proposed ISA and CSA modules are complementary and can be integrated, resulting in our final Adaptive Aggregation Module (AAModule, see Fig. 2). We stack six AAModules for cost aggregation, while for the first three AAModules, we simply use regular 2D convolutions for intra-scale aggregation, thus a total of nine deformable convolutions are used for cost aggregation in this paper.
+
+Our feature extractor adopts a ResNet-like [9] architecture (40 layers in total), in which six regular 2D convolutions are replaced with their deformable counterparts. We use Feature Pyramid Network [19] to construct feature pyramid at $1/3$ , $1/6$ and $1/12$ resolutions. Two refinement modules proposed in StereoDRNet [3] are used to hierarchically upsample the $1/3$ disparity prediction to the original resolution (i.e., upsample to $1/2$ resolution first, then to original resolution). Combining all these components leads to our final Adaptive Aggregation Network (AANet).
+
+# 3.4. Disparity Regression
+
+For each pixel, we adopt the soft argmin mechanism [14] to obtain the disparity prediction $\tilde{d}$ :
+
+$$
+\tilde {d} = \sum_ {d = 0} ^ {D _ {\max } - 1} d \times \sigma \left(c _ {d}\right), \tag {7}
+$$
+
+where $D_{\mathrm{max}}$ is the maximum disparity range, $\sigma$ is the softmax function, and $c_{d}$ is the aggregated matching cost for disparity candidate $d$ . $\sigma (c_d)$ can be seen as the probability of disparity being $d$ . This regression based formulation can produce sub-pixel precision and thus is used in this paper.
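+
+A direct sketch of this regression step, assuming the aggregated costs are stored as a (B, D, H, W) tensor, is:
+
+```python
+import torch
+import torch.nn.functional as F
+
+def soft_argmin_disparity(cost, max_disp):
+    """Soft argmin disparity regression (Eq. 7).
+
+    cost: (B, max_disp, H, W) aggregated matching costs. Softmax over the
+    disparity dimension gives a probability per candidate, and the expectation
+    yields a sub-pixel disparity estimate."""
+    prob = F.softmax(cost, dim=1)                                # (B, D, H, W)
+    disp_values = torch.arange(max_disp, dtype=cost.dtype,
+                               device=cost.device).view(1, max_disp, 1, 1)
+    return (prob * disp_values).sum(dim=1)                       # (B, H, W)
+```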
+
+# 3.5. Loss Function
+
+Our AANet is trained end-to-end with ground truth disparities as supervision. For the KITTI dataset, however, the high sparsity of the disparity ground truth may not be sufficient to effectively drive our learning process. Inspired by the knowledge distillation in [10], we propose to leverage the prediction results from a pre-trained stereo model as pseudo ground truth supervision. Specifically, we employ a pre-trained model to predict disparity maps on the training set, and use these predictions as pseudo labels in pixels where ground truth disparities are not available. We take the pre-trained GA-Net [43] model as an example to validate the effectiveness of this strategy.
+
+For disparity prediction $D_{\mathrm{pred}}^i, i = 1,2,\dots ,N$ , it is first bilinearly upsampled to the original resolution. The
+
+corresponding loss function is defined as
+
+$$
+L_{i} = \sum_{\boldsymbol{p}} V(\boldsymbol{p}) \cdot \mathcal{L}\left(\boldsymbol{D}_{\mathrm{pred}}^{i}(\boldsymbol{p}), \boldsymbol{D}_{\mathrm{gt}}(\boldsymbol{p})\right) + \left(1 - V(\boldsymbol{p})\right) \cdot \mathcal{L}\left(\boldsymbol{D}_{\mathrm{pred}}^{i}(\boldsymbol{p}), \boldsymbol{D}_{\mathrm{pseudo}}(\boldsymbol{p})\right), \tag{8}
+$$
+
+where $V(p)$ is a binary mask to denote whether the ground truth disparity for pixel $p$ is available, $\mathcal{L}$ is the smooth L1 loss [4], $D_{\mathrm{gt}}$ is the ground truth disparity and $D_{\mathrm{pseudo}}$ is the pseudo ground truth.
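+
+A compact sketch of Eq. (8), assuming the prediction has already been upsampled to the original resolution and the mask is a 0/1 tensor, could look as follows; the reduction over pixels is our own choice.
+
+```python
+import torch.nn.functional as F
+
+def disparity_loss_with_pseudo_gt(disp_pred, disp_gt, disp_pseudo, valid_mask):
+    """Per-prediction loss of Eq. (8): smooth L1 against the sparse ground truth
+    where available, and against pseudo labels from a pre-trained model elsewhere.
+
+    disp_pred, disp_gt, disp_pseudo: (B, H, W) disparity maps.
+    valid_mask: (B, H, W) mask V(p), 1 where ground truth exists."""
+    valid_mask = valid_mask.float()
+    loss_gt = F.smooth_l1_loss(disp_pred, disp_gt, reduction='none')
+    loss_pseudo = F.smooth_l1_loss(disp_pred, disp_pseudo, reduction='none')
+    loss = valid_mask * loss_gt + (1.0 - valid_mask) * loss_pseudo
+    return loss.mean()
+```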
+
+The final loss function is a combination of losses over all disparity predictions
+
+$$
+L = \sum_ {i = 1} ^ {N} \lambda_ {i} \cdot L _ {i}, \tag {9}
+$$
+
+where $\lambda_{i}$ is a scalar for balancing different terms.
+
+# 4. Experiments
+
+# 4.1. Datasets and Evaluation Metrics
+
+We conduct extensive experiments on three popular stereo datasets: Scene Flow, KITTI 2012 and KITTI 2015. The Scene Flow dataset [20] is a large-scale synthetic dataset and provides dense ground truth disparity maps. The end-point error (EPE) and 1-pixel error are reported on this dataset, where EPE is the mean disparity error in pixels and the 1-pixel error is the percentage of pixels whose EPE is larger than 1 pixel. KITTI 2012 [7] and KITTI 2015 [22] are real-world outdoor datasets, where only sparse ground truth is provided. The official metrics (e.g., D1-all) from the online leaderboard are reported.
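+
+For reference, a small sketch of how these two metrics can be computed over valid ground-truth pixels (variable names are ours):
+
+```python
+def epe_and_1px_error(disp_pred, disp_gt, valid_mask):
+    """End-point error and 1-pixel error over valid ground-truth pixels.
+
+    disp_pred, disp_gt: (B, H, W) tensors; valid_mask: (B, H, W) boolean mask of
+    pixels with ground truth (all True for the dense Scene Flow dataset)."""
+    err = (disp_pred - disp_gt).abs()[valid_mask]
+    epe = err.mean()
+    one_px = (err > 1.0).float().mean() * 100.0   # % of pixels with EPE > 1 px
+    return epe.item(), one_px.item()
+```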
+
+# 4.2. Implementation Details
+
+We implement our approach in PyTorch [27] and use Adam [16] $(\beta_{1} = 0.9, \beta_{2} = 0.999)$ as the optimizer. For the Scene Flow dataset, we use the entire training set (35454 stereo pairs) for training and evaluate on the standard test set (4370 stereo pairs). The raw images are randomly cropped to $288 \times 576$ as input. We train our model on 4 NVIDIA V100 GPUs for 64 epochs with a batch size of 64. The learning rate starts at 0.001 and is halved every 10 epochs after the 20th epoch. For the KITTI dataset, we use a $336 \times 960$ crop size, and first fine-tune the pre-trained Scene Flow model on the mixed KITTI 2012 and 2015 training sets for 1000 epochs. The initial learning rate is 0.001 and is halved at the 400th, 600th, 800th and 900th epochs. Then another 1000 epochs are trained on the separate KITTI 2012/2015 training set for benchmarking, with an initial learning rate of 0.0001 and the same schedule as before, but only the last disparity prediction is supervised with ground truth, following a similar strategy to [13]. For all datasets, the input images are normalized with ImageNet mean and standard deviation
+
+| Method | Scene Flow EPE | Scene Flow >1px | KITTI 2015 EPE | KITTI 2015 D1-all |
+| w/o ISA & CSA | 1.10 | 10.9 | 0.75 | 2.63 |
+| w/o ISA | 0.97 | 10.1 | 0.70 | 2.22 |
+| w/o CSA | 0.99 | 10.1 | 0.69 | 2.31 |
+| AANet | 0.87 | 9.3 | 0.68 | 2.29 |
+
+Table 1: Ablation study of ISA and CSA modules. The best performance is obtained by integrating these two modules.
+
+
+Figure 3: Visual comparisons of ablation study on Scene Flow test set. Our AANet produces sharper results in thin structures and better predictions in textureless regions.
+
+statistics. We use random color augmentation and vertical flipping, and set the maximum disparity to 192 pixels. From the highest scale to the lowest, the loss weights in Eq. 9 are set to $\lambda_{1} = \lambda_{2} = \lambda_{3} = 1.0$ , $\lambda_{4} = 2/3$ , $\lambda_{5} = 1/3$ .
+
+# 4.3. Analysis
+
+To validate the effectiveness of each component proposed in this paper, we conduct controlled experiments on Scene Flow test set and KITTI 2015 validation set (the KITTI 2015 training set is split into 160 pairs for training and 40 pairs for validation).
+
+Ablation Study. As shown in Tab. 1, removing the proposed ISA or CSA module leads to a clear performance drop. The best performance is obtained by integrating these two modules, which are designed to be complementary in principle. Fig. 3 further shows the visual comparison results. Our full model produces better disparity predictions in thin structures and textureless regions, demonstrating the effectiveness of the proposed method.
+
+Sampling Points Visualization. To better understand our proposed adaptive intra-scale cost aggregation algorithm, we visualize the sampling locations in two challenging regions. As illustrated in Fig. 4, for a pixel on an object boundary (Fig. 4a), the sampling points tend to focus on similar disparity regions. For a large textureless region (Fig. 4b), a large context is usually required to obtain reliable matching due to many local ambiguities. Our method successfully adapts the sampling locations to these regions, validating that the proposed adaptive aggregation method can not only dynamically adjust the sampling locations, but also sample from a large context.
+
+Figure 4: Visualization of sampling points (red points) in two challenging regions (green points): (a) object boundary, (b) textureless region. In the object boundary (a), the sampling points tend to focus on similar disparity regions, while for the large textureless region (b), they are more discretely distributed to sample from a large context.
+
+Figure 5: Visualization of disparity prediction results on KITTI 2015 validation set. Leveraging pseudo ground truth as additional supervision helps reduce the artifacts in regions where ground truth disparities are not available, e.g., the sky region.
+
+Pseudo Ground Truth Supervision. Fig. 5 shows the visual results on the KITTI 2015 validation set. We empirically find that leveraging the prediction results from a pretrained GA-Net [43] model helps reduce the artifacts in regions where ground truth disparities are not available, e.g., the sky region. Quantitatively, the D1-all error metric decreases from 2.29 to 2.15, while the EPE increases from 0.68 to 0.69. A possible reason is that the validation set is too small, making the results unstable. A similar phenomenon has also been noticed in [8]. However, the qualitative results indicate that our proposed strategy can be
+
+| Method | #3D Convs | #DConvs | #CSA | EPE | >1px | Params | FLOPs | Memory | Time (ms) |
| StereoNet [15] | 4 | 0 | 0 | 1.10 | - | 0.62M | 106.89G | 1.41G | 23 |
| StereoNet-AA | 0 | 4 | 0 | 1.08 | 12.9 | 0.53M | 88.17G | 1.38G | 17 |
| GC-Net [14] | 19 | 0 | 0 | 2.51 | 16.9 | 2.85M | 1754.10G | 21.52G | 3731 |
| GC-Net-AA | 0 | 9 | 6 | 0.98 | 10.8 | 2.15M | 212.59G | 1.97G | 91 |
| PSMNet [4] | 25 | 0 | 0 | 1.09 | 12.1 | 5.22M | 613.90G | 4.08G | 317 |
| PSMNet-AA | 0 | 9 | 6 | 0.97 | 10.2 | 4.15M | 208.73G | 1.58G | 77 |
| GA-Net [43] | 15 | 0 | 0 | 0.84 | 9.9 | 4.60M | 1439.57G | 6.23G | 2211 |
| GA-Net-AA | 0 | 14 | 6 | 0.87 | 9.2 | 3.68M | 119.64G | 1.63G | 57 |
+
+Table 2: Comparisons with four representative stereo models: StereoNet, GC-Net, PSMNet and GA-Net. We replace the 3D convolutions in the cost aggregation stage with our proposed architectures and denote the resulting models with the suffix AA. Our method not only obtains clear performance improvements (except that GA-Net has a lower EPE), but also uses fewer parameters, less computation and less memory, while being significantly faster than top-performing models ( $41\times$ faster than GC-Net, $4\times$ faster than PSMNet and $38\times$ faster than GA-Net). The comparison with StereoNet indicates that our method can also be a valuable way to improve the performance of existing fast stereo models. "DConvs" is short for deformable convolutions.
+
+
+Figure 6: Generalization on Middlebury 2014 dataset. Our AANet produces sharper object boundaries and better preserves the overall structures than PSMNet.
+
+an effective way to handle highly sparse ground truth data.
+
+Generalization. We further test the generalization ability of our method on the Middlebury 2014 dataset [28]. Specifically, we directly use our KITTI fine-tuned model to predict the disparity maps; no additional training is done on Middlebury. Fig. 6 shows the results. Compared with the popular PSMNet [4] model, our AANet produces sharper object boundaries and better preserves the overall structures.
+
+# 4.4. Comparison with 3D Convolutions
+
+To demonstrate the superiority of our proposed cost aggregation method over commonly used 3D convolutions, we conduct extensive experiments on the large scale Scene Flow dataset.
+
+Settings. We mainly compare with four representative stereo models: the first 3D convolution based model GCNet [14], real-time model StereoNet [15], previous and current state-of-the-art models PSMNet [4] and GA-Net [43].
+
+For fair comparisons, we use feature extractors similar to theirs. Specifically, StereoNet uses $8 \times$ downsampling for fast speed while we use $4 \times$ ; five regular 2D convolutions in GA-Net are replaced with their deformable counterparts; for GC-Net and PSMNet, the feature extractors are exactly the same. We replace the 3D convolutions in the cost aggregation stage with our proposed AAModules, and denote the resulting models with the suffix AA. We integrate all these models in the same framework and measure the inference time at $576 \times 960$ resolution on a single NVIDIA V100 GPU.
+
+Results. Tab. 2 shows the comprehensive comparison metrics/statistics. To achieve fast speed, StereoNet [15] uses $8 \times$ downsampling to build a very low-resolution cost volume at the cost of accuracy. Thanks to our efficient adaptive aggregation architecture, we are able to directly aggregate the $1/4$ cost volume with even less computation while being more accurate and faster, indicating that our method can be a valuable way to improve the performance of existing fast stereo models. Compared with the top-performing stereo models GC-Net [14], PSMNet [4] and GA-Net [43], we not only obtain clear performance improvements (except that GA-Net has a lower EPE than ours), but also have fewer parameters, less computational cost and less memory consumption, while being significantly faster ( $41\times$ faster than GC-Net, $4\times$ faster than PSMNet and $38\times$ faster than GA-Net), demonstrating the high efficiency of our method compared with commonly used 3D convolutions.
+
+Complexity Analysis. 2D stereo methods use simple feature correlation to build a 3D cost volume $(D\times H\times W)$ while 3D methods use concatenation, and thus a 4D cost volume is built $(C\times D\times H\times W)$ , where $C,D,H,W$ denote the channels after feature concatenation, maximum disparity, height and width, respectively. $C$ usually equals 64 for
+
+| Method | GC-Net [14] | PSMNet [4] | GA-Net [43] | DeepPruner-Best [6] | DispNetC [20] | StereoNet [15] | AANet | AANet* |
| EPE | 2.51 | 1.09 | 0.84 | 0.86 | 1.68 | 1.10 | 0.87 | 0.83 |
| Time (s) | 0.9 | 0.41 | 1.5 | 0.182 | 0.06 | 0.015 | 0.068 | 0.160 |
+
+Table 3: Evaluation results on Scene Flow test set. Our method not only achieves state-of-the-art performance but also runs significantly faster than existing top-performing methods.
+
+| Method | KITTI 2012 Out-Noc | KITTI 2012 Out-All | KITTI 2015 D1-bg | KITTI 2015 D1-all | Time (s) |
+| MC-CNN [42] | 2.43 | 3.63 | 2.89 | 3.89 | 67 |
+| GC-Net [14] | 1.77 | 2.30 | 2.21 | 2.87 | 0.9 |
+| PSMNet [4] | 1.49 | 1.89 | 1.86 | 2.32 | 0.41 |
+| DeepPruner-Best [6] | - | - | 1.87 | 2.15 | 0.182 |
+| iResNet-i2 [18] | 1.71 | 2.16 | 2.25 | 2.44 | 0.12 |
+| HD3 [39] | 1.40 | 1.80 | 1.70 | 2.02 | 0.14 |
+| GwcNet [8] | 1.32 | 1.70 | 1.74 | 2.11 | 0.32 |
+| GA-Net [43] | 1.36 | 1.80 | 1.55 | 1.93 | 1.5 |
+| AANet* | 1.71 | 2.21 | 1.78 | 2.24 | 0.142 |
+| StereoNet [15] | 4.91 | 6.02 | 4.30 | 4.83 | 0.015 |
+| MADNet [33] | - | - | 3.75 | 4.66 | 0.02 |
+| DispNetC [20] | 4.11 | 4.65 | 4.32 | 4.34 | 0.06 |
+| DeepPruner-Fast [6] | - | - | 2.32 | 2.59 | 0.061 |
+| AANet | 1.91 | 2.42 | 1.99 | 2.55 | 0.062 |
+
+Table 4: Benchmark results on KITTI 2012 and KITTI 2015 test sets. Our deeper model AANet\* achieves competitive results among existing top-performing methods while maintaining fast inference speed. Note that $\mathrm{HD}^3$ has more than $6\times$ the parameters of ours. Compared with other fast models, our AANet is much more accurate.
+
+3D convolutions based methods and $D = 64$ for $1/3$ resolution cost volume. Supposing the output cost volume has the same size as input and the kernel size of a convolution layer is $K$ ( $K = 3$ usually), then the computational complexity of a 3D convolution layer is $\mathcal{O}(K^3 C^2 DHW)$ . In contrast, the complexity of a deformable convolution layer is $\mathcal{O}(K^2 D^2 HW + 3K^4 DHW + 3K^2 DHW)$ . Therefore, the computational complexity of a deformable convolution layer is less than $1/130$ of a 3D convolution layer.
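+
+A quick back-of-the-envelope check of this ratio, plugging $K = 3$, $C = 64$, $D = 64$ into the two complexity expressions above (per output position, with the $HW$ factor dropped):
+
+```python
+# Per-position operation counts from the complexity expressions above.
+K, C, D = 3, 64, 64
+
+conv3d = K**3 * C**2 * D                              # O(K^3 C^2 D)        -> 7,077,888
+deform = K**2 * D**2 + 3 * K**4 * D + 3 * K**2 * D    # deformable conv     -> 54,144
+
+print(conv3d / deform)                                # ~130.7, i.e. less than 1/130
+```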
+
+# 4.5. Benchmark Results
+
+For benchmarking, we build another model variant AANet\* that uses a higher-resolution $(1/2)$ cost volume and a deeper (61-layer) feature extractor. Tab. 3 shows the evaluation results on the Scene Flow test set. Our method not only achieves state-of-the-art results, but also runs significantly faster than existing top-performing methods. The evaluation results on the KITTI 2012 and KITTI 2015 benchmarks are shown in Tab. 4. Compared with other fast models, our AANet is much more accurate. The deeper model AANet\* achieves competitive results while maintaining fast inference speed. We also note that $\mathrm{HD}^3$ [39] has more than
+
+
+Figure 7: Visualization of disparity prediction error on KITTI 2015 test set (red and yellow denote large errors). Our method produces better results in object boundaries. Best viewed enlarged.
+
+$6 \times$ the parameters of our AANet\* (39.1M vs. 5.9M), and our AANet\* performs much better than iResNet-i2 [18] on the more challenging KITTI 2015 dataset, demonstrating that our method achieves a better balance between accuracy and speed. Fig. 7 further visualizes the disparity prediction errors on the KITTI 2015 test set. Our method produces better results in object boundaries, validating the effectiveness of our proposed adaptive aggregation algorithm.
+
+# 5. Conclusion
+
+We have presented an efficient architecture for cost aggregation, and demonstrated its superiority over commonly used 3D convolutions by high efficiency and competitive performance on both Scene Flow and KITTI datasets. Extensive experiments also validate the generic applicability of the proposed method. An interesting future direction would be extending our method to other cost volume based tasks, e.g., high-resolution stereo matching [37], multi-view stereo [38] and optical flow estimation [30]. We also hope our lightweight design can be beneficial for downstream tasks, e.g., stereo based 3D object detection [34].
+
+Acknowledgements. We thank anonymous reviewers for their constructive comments. This work was supported by the National Natural Science Foundation of China (No. 61672481) and Youth Innovation Promotion Association CAS (No. 2018495).
+
+# References
+
+[1] Connelly Barnes, Eli Shechtman, Adam Finkelstein, and Dan B Goldman. Patchmatch: A randomized correspondence algorithm for structural image editing. In ACM Transactions on Graphics (ToG), volume 28, page 24. ACM, 2009.
+[2] Yuri Boykov, Olga Veksler, and Ramin Zabih. A variable window approach to early vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12):1283-1294, 1998.
+[3] Rohan Chabra, Julian Straub, Christopher Sweeney, Richard Newcombe, and Henry Fuchs. Stereodrnet: Dilated residual stereonet. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 11786-11795, 2019.
+[4] Jia-Ren Chang and Yong-Sheng Chen. Pyramid stereo matching network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5410-5418, 2018.
+[5] Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. In Proceedings of the IEEE international conference on computer vision, pages 764-773, 2017.
+[6] Shivam Duggal, Shenlong Wang, Wei-Chiu Ma, Rui Hu, and Raquel Urtasun. Deeppruner: Learning efficient stereo matching via differentiable patchmatch. In Proceedings of the IEEE International Conference on Computer Vision, pages 4384-4393, 2019.
+[7] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 3354-3361. IEEE, 2012.
+[8] Xiaoyang Guo, Kai Yang, Wukui Yang, Xiaogang Wang, and Hongsheng Li. Group-wise correlation stereo network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3273-3282, 2019.
+[9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016.
+[10] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
+[11] Heiko Hirschmüller, Peter R Innocent, and Jon Garibaldi. Real-time correlation-based stereo vision with reduced border errors. International Journal of Computer Vision, 47(1-3):229-246, 2002.
+[12] Asmaa Hosni, Christoph Rhemann, Michael Bleyer, Carsten Rother, and Margrit Gelautz. Fast cost-volume filtering for visual correspondence and beyond. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(2):504-511, 2012.
+[13] Eddy Ilg, Nikolaus Mayer, Tonmoy Saikia, Margret Keuper, Alexey Dosovitskiy, and Thomas Brox. Flownet 2.0: Evolution of optical flow estimation with deep networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2462-2470, 2017.
+
+[14] Alex Kendall, Hayk Martirosyan, Saumitro Dasgupta, Peter Henry, Ryan Kennedy, Abraham Bachrach, and Adam Bry. End-to-end learning of geometry and context for deep stereo regression. In Proceedings of the IEEE International Conference on Computer Vision, pages 66-75, 2017.
+[15] Sameh Khamis, Sean Fanello, Christoph Rhemann, Adarsh Kowdle, Julien Valentin, and Shahram Izadi. Stereonet: Guided hierarchical refinement for real-time edge-aware depth prediction. In Proceedings of the European Conference on Computer Vision (ECCV), pages 573-590, 2018.
+[16] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+[17] Vladimir Kolmogorov and Ramin Zabih. Computing visual correspondence with occlusions using graph cuts. In Eighth IEEE International Conference on Computer Vision, volume 2, pages 508-515, 2001.
+[18] Zhengfa Liang, Yiliu Feng, Yulan Guo, Hengzhu Liu, Wei Chen, Linbo Qiao, Li Zhou, and Jianfeng Zhang. Learning for disparity estimation through feature constancy. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2811-2820, 2018.
+[19] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2117-2125, 2017.
+[20] Nikolaus Mayer, Eddy Ilg, Philip Hausser, Philipp Fischer, Daniel Cremers, Alexey Dosovitskiy, and Thomas Brox. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4040-4048, 2016.
+[21] Michael D Menz and Ralph D Freeman. Stereoscopic depth processing in the visual cortex: a coarse-to-fine mechanism. Nature neuroscience, 6(1):59-65, 2003.
+[22] Moritz Menze and Andreas Geiger. Object scene flow for autonomous vehicles. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3061-3070, 2015.
+[23] Dongbo Min, Jiangbo Lu, and Minh N Do. A revisit to cost aggregation in stereo matching: How far can we reduce its computational redundancy? In 2011 International Conference on Computer Vision, pages 1567-1574. IEEE, 2011.
+[24] Dongbo Min and Kwanghoon Sohn. Cost aggregation and occlusion handling with wls in stereo matching. IEEE Transactions on Image Processing, 17(8):1431-1442, 2008.
+[25] Guang-Yu Nie, Ming-Ming Cheng, Yun Liu, Zhengfa Liang, Deng-Ping Fan, Yue Liu, and Yongtian Wang. Multi-level context ultra-aggregation for stereo matching. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3283-3291, 2019.
+[26] Masatoshi Okutomi and Takeo Kanade. A locally adaptive window for signal matching. International Journal of Computer Vision, 7(2):143-162, 1992.
+[27] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pages 8024-8035, 2019.
+[28] Daniel Scharstein, Heiko Hirschmüller, York Kitajima, Greg Krathwohl, Nera Nešić, Xi Wang, and Porter Westling. High-resolution stereo datasets with subpixel-accurate ground truth. In German conference on pattern recognition, pages 31–42. Springer, 2014.
+[29] Daniel Scharstein and Richard Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International journal of computer vision, 47(1-3):7-42, 2002.
+[30] Deqing Sun, Xiaodong Yang, Ming-Yu Liu, and Jan Kautz. Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8934-8943, 2018.
+[31] Jian Sun, Nan-Ning Zheng, and Heung-Yeung Shum. Stereo matching using belief propagation. IEEE Transactions on Pattern Analysis & Machine Intelligence, (7):787-800, 2003.
+[32] Ke Sun, Bin Xiao, Dong Liu, and Jingdong Wang. Deep high-resolution representation learning for human pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5693-5703, 2019.
+[33] Alessio Tonioni, Fabio Tosi, Matteo Poggi, Stefano Mattoccia, and Luigi Di Stefano. Real-time self-adaptive deep stereo. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 195-204, 2019.
+[34] Yan Wang, Wei-Lun Chao, Divyansh Garg, Bharath Hariharan, Mark Campbell, and Kilian Q Weinberger. Pseudo-lidar from visual depth estimation: Bridging the gap in 3d object detection for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8445-8453, 2019.
+[35] Zhenyao Wu, Xinyi Wu, Xiaoping Zhang, Song Wang, and Lili Ju. Semantic stereo matching with pyramid cost volumes. In Proceedings of the IEEE International Conference on Computer Vision, pages 7484-7493, 2019.
+[36] Qingshan Xu and Wenbing Tao. Multi-scale geometric consistency guided multi-view stereo. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5483-5492, 2019.
+[37] Gengshan Yang, Joshua Manela, Michael Happold, and Deva Ramanan. Hierarchical deep stereo matching on high-resolution images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5515-5524, 2019.
+[38] Yao Yao, Zixin Luo, Shiwei Li, Tian Fang, and Long Quan. Mvsnet: Depth inference for unstructured multi-view stereo. In Proceedings of the European Conference on Computer Vision (ECCV), pages 767-783, 2018.
+[39] Zhichao Yin, Trevor Darrell, and Fisher Yu. Hierarchical discrete distribution decomposition for match density estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6044-6053, 2019.
+
+[40] Kuk-Jin Yoon and In So Kweon. Adaptive support-weight approach for correspondence search. IEEE Transactions on Pattern Analysis and Machine Intelligence, (4):650-656, 2006.
+[41] Fisher Yu, Vladlen Koltun, and Thomas Funkhouser. Dilated residual networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 472-480, 2017.
+[42] Jure Zbontar and Yann LeCun. Computing the stereo matching cost with a convolutional neural network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1592-1599, 2015.
+[43] Feihu Zhang, Victor Prisacariu, Ruigang Yang, and Philip HS Torr. Ga-net: Guided aggregation net for end-to-end stereo matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 185-194, 2019.
+[44] Kang Zhang, Yuqiang Fang, Dongbo Min, Lifeng Sun, Shiqiang Yang, Shuicheng Yan, and Qi Tian. Cross-scale cost aggregation for stereo matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1590-1597, 2014.
+[45] Xizhou Zhu, Han Hu, Stephen Lin, and Jifeng Dai. Deformable convnets v2: More deformable, better results. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9308-9316, 2019.
\ No newline at end of file
diff --git a/aanetadaptiveaggregationnetworkforefficientstereomatching/images.zip b/aanetadaptiveaggregationnetworkforefficientstereomatching/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..d6baacb2e6b06fadbd99744a195a22ea928d534c
--- /dev/null
+++ b/aanetadaptiveaggregationnetworkforefficientstereomatching/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9dfcda6b5e00c2af722d70b61c08018183f3cbbf7ce368b9d8b8f194ba4583ef
+size 541965
diff --git a/aanetadaptiveaggregationnetworkforefficientstereomatching/layout.json b/aanetadaptiveaggregationnetworkforefficientstereomatching/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..909204b4208c936265e7d7e09bc86aff590ad22d
--- /dev/null
+++ b/aanetadaptiveaggregationnetworkforefficientstereomatching/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7eb64ecb7ac20061c220ddb4c8a81ca64db4c0a1ee0b03df98c9f79fcf6e233b
+size 405021
diff --git a/acertifiablygloballyoptimalsolutiontogeneralizedessentialmatrixestimation/41208601-27bc-4770-8178-ad8f17112541_content_list.json b/acertifiablygloballyoptimalsolutiontogeneralizedessentialmatrixestimation/41208601-27bc-4770-8178-ad8f17112541_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..b660145b258e84f4ee30c03e73ab96291bf5bada
--- /dev/null
+++ b/acertifiablygloballyoptimalsolutiontogeneralizedessentialmatrixestimation/41208601-27bc-4770-8178-ad8f17112541_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bfbf68f35b5bfed6d85f4055f9d219f8ec7d8bc73f8f407123f0f8a4c9945544
+size 93600
diff --git a/acertifiablygloballyoptimalsolutiontogeneralizedessentialmatrixestimation/41208601-27bc-4770-8178-ad8f17112541_model.json b/acertifiablygloballyoptimalsolutiontogeneralizedessentialmatrixestimation/41208601-27bc-4770-8178-ad8f17112541_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..99f2bbfb2fac2b650cdb9cbe9f86b62a0d92bc27
--- /dev/null
+++ b/acertifiablygloballyoptimalsolutiontogeneralizedessentialmatrixestimation/41208601-27bc-4770-8178-ad8f17112541_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c65095f91cc6ed76dc5a1e5c1271f660c437aff64dd9c1dea8362901e4aa6411
+size 112962
diff --git a/acertifiablygloballyoptimalsolutiontogeneralizedessentialmatrixestimation/41208601-27bc-4770-8178-ad8f17112541_origin.pdf b/acertifiablygloballyoptimalsolutiontogeneralizedessentialmatrixestimation/41208601-27bc-4770-8178-ad8f17112541_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e887d5e5743b3648a171ca6319cdf2d763418ce4
--- /dev/null
+++ b/acertifiablygloballyoptimalsolutiontogeneralizedessentialmatrixestimation/41208601-27bc-4770-8178-ad8f17112541_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4b4fbe8a2bf437aedd2d5a5dbe39b8ed4c29f59bc949df79f1747b0794e7a92a
+size 434274
diff --git a/acertifiablygloballyoptimalsolutiontogeneralizedessentialmatrixestimation/full.md b/acertifiablygloballyoptimalsolutiontogeneralizedessentialmatrixestimation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..2b9d736c05077f8232e418d639cd4993db05de66
--- /dev/null
+++ b/acertifiablygloballyoptimalsolutiontogeneralizedessentialmatrixestimation/full.md
@@ -0,0 +1,534 @@
+# A Certifiably Globally Optimal Solution to Generalized Essential Matrix Estimation
+
+Ji Zhao
+
+TuSimple
+
+zhaoji84@gmail.com
+
+Wanting Xu
+
+ShanghaiTech University
+
+xuwt@shanghaitech.edu.cn
+
+Laurent Kneip
+
+ShanghaiTech University
+
+lkneip@shanghaitech.edu.cn
+
+# Abstract
+
+We present a convex optimization approach for generalized essential matrix (GEM) estimation. The six-point minimal solver for the GEM has poor numerical stability and applies only to a minimal number of points. Existing non-minimal solvers for GEM estimation rely on either local optimization or relinearization techniques, which impedes high accuracy in common scenarios. Our proposed non-minimal solver minimizes the sum of squared residuals by reformulating the problem as a quadratically constrained quadratic program. The globally optimal solution is thus obtained by a semidefinite relaxation. The algorithm retrieves certifiably globally optimal solutions to the original non-convex problem in polynomial time. We also provide the necessary and sufficient conditions to recover the optimal GEM from the relaxed problem. The improved performance is demonstrated through experiments on both synthetic and real multi-camera systems.
+
+# 1. Introduction
+
+Relative pose estimation from images plays an important role in many geometric vision tasks, such as structure-from-motion (SfM) and simultaneous localization and mapping (SLAM). While central cameras can be modeled by the pin-hole or perspective camera model [11], more general non-central cameras such as multi-camera arrays are modelled by the generalized camera model [32]. This paper presents a new method to estimate the generalized essential matrix (GEM) or relative pose for non-central cameras.
+
+The essential matrix encodes the relative pose for pin-hole cameras and is well understood [11, 30, 4]. GEM estimation is more involved. A generalized camera is formed by abstracting landmark observations into spatial rays that are no longer required to originate from a common point (i.e. the focal point). Figure 1 demonstrates the difference between central and non-central cameras. As illustrated, the generalized camera model allows us to describe the measurements of a number of interesting camera systems, such as a multi-camera rig of rigidly attached cameras.
+
+Figure 1. Relative pose estimation: (a) central camera, (b) non-central camera. Red triangles represent perspective cameras. Solid arrows pointing from 3D points to cameras depict the imaging process. In the non-central camera scenario, three rigidly attached central cameras constitute a non-central camera.
+
+From a more abstract and geometric point of view, a generalized camera consists of a Euclidean reference frame in which measurements are represented by rays in space, described by a suitable parameterization such as Plücker line vectors. In contrast to the standard essential matrix for which there exists an unobservability in the norm of the translation, the translation extracted from a GEM is generally unique. The down-side of this scale observability is that the minimal solution of the GEM requires at least 6 instead of only 5 correspondences across the two views (i.e. one correspondence per degree of freedom in the problem).
+
+There are both linear [24] and non-linear solutions [34, 18, 5] to GEM estimation. The linear solver—also known as the 17-point algorithm—takes 17 correspondences to derive the relative pose of the generalized camera. This method can be easily applied to an arbitrarily large number of points. However, its solution is not globally optimal, as the linearization ignores side-constraints on the GEM and the contained essential and rotation matrices. The most closely related works to ours are the non-minimal solvers by Kneip and Li [18] and Campos et al. [5], which use many correspondences to calculate a potentially accurate relative pose, but rely on local optimization methods and therefore may depend on a sufficiently accurate initial guess. They do not guarantee global optimality.
+
+By contrast, the present paper leverages convex optimization to—for the first time—come up with a fast and certifiably globally optimal solution to the non-minimal generalized relative pose problem, whose optimality may be certified *a posteriori*. In summary, the contribution of this paper is two-fold:
+
+- Formulation. We propose a novel formulation for GEM estimation of generalized cameras, and discuss its relation to the previously proposed eigenvalue-based formulation in [18].
+- Optimization. We provide a certifiably globally optimal solution by semidefinite relaxation (SDR) of the original formulation. We also provide a sufficient and necessary condition to recover the optimal GEM from the relaxed problem.
+
+As demonstrated in Section 5, our method sets a new state-of-the-art in terms of both accuracy and robustness while at the same time remaining computationally efficient.
+
+# 2. Related Work
+
+Using a non-central camera rig has attracted much attention from both academic and industrial communities. The most common case is that of a set of cameras—often with non-overlapping views—attached to a headset, micro air vehicle (MAV) or ground vehicle. Our work is particularly relevant for real-time visual localization [7, 14] and autonomous driving [22].
+
+Relative Pose of a Generalized Camera: The minimal solver is based on algebraic geometry, and uses 6 correspondences in order to come up with 64 solutions [34]. However, its large elimination template leads to poor numerical stability. Kim et al. later proposed alternative approaches for relative displacement estimation with non-overlapping multi-camera systems using second-order cone programming (SOCP) [15] or branch-and-bound over the space of all rotations [16]. [7] furthermore derived a $5 + 1$ point algorithm, and [25] proposed the antipodal epipolar constraint. A minimal solution for the case of non-holonomic motion was proposed in [22]. Minimal solutions for motions with a common direction were proposed in [23, 26]. An eigenvalue-based formulation for GEM estimation together with efficient local optimization was proposed in [18]. Very recently, another local optimization method for GEM estimation was proposed using an alternating minimization method [5].
+
+Generalized Relative Pose and Scale: There is a further generalization of GEM estimation, the generalized relative pose and scale problem. It introduces a further unknown:
+
+a relative scale factor between the ray origins in both generalized camera frames. It has an important application in structure from motion with central cameras. While existing work already introduces specialized solvers for this problem [35, 21], the method proposed in this paper could easily be extended to provide a more general solution as well.
+
+Relative Pose of a Central Camera: Essential or fundamental matrix estimation by algebraic error minimization has been extensively studied in previous literature [29, 10, 19, 4]. For both the essential and the fundamental matrix, pose estimation by algebraic error minimization can be formulated as a polynomial optimization problem [28]. A polynomial optimization problem can be reformulated as a quadratically constrained quadratic program (QCQP), which has numerous off-the-shelf solvers. In multiple view geometry, semidefinite relaxation (SDR) for polynomial optimization problems was first studied by Kahl and Henrion in [13]. Recent work [4, 40] has successfully applied it to globally optimal, non-minimal central relative pose computation, which serves as a further motivation of our work.
+
+# 3. Non-Minimal Solver for GEM Estimation
+
+Relative pose consists of a translation $\mathbf{t}$ — expressed in the first frame and denoting the position of the second frame w.r.t. the first one — and a rotation $\mathbf{R}$ — transforming vectors from the second into the first frame1. The translation $\mathbf{t} = [t_1, t_2, t_3]^\top$ is thus identical with a point in $\mathbb{R}^3$ . The 3D rotation $\mathbf{R}$ is a $3 \times 3$ orthogonal matrix with determinant 1 and belonging to the Special Orthogonal group SO(3), i.e.,
+
+$$
+\mathrm {S O} (3) \triangleq \left\{\mathbf {R} \in \mathbb {R} ^ {3 \times 3} \mid \mathbf {R} ^ {\top} \mathbf {R} = \mathbf {I} _ {3}, \det (\mathbf {R}) = 1 \right\}, \tag {1}
+$$
+
+where $\mathbf{I}_3$ is the $3\times 3$ identity matrix.
+
+The essential matrix $\mathbf{E}$ is defined as
+
+$$
+\mathbf {E} = [ \mathbf {t} ] _ {\times} \mathbf {R}, \tag {2}
+$$
+
+where $[\cdot ]_{\times}$ constructs the corresponding skew-symmetric matrix of a 3-dimensional vector [11]. The elements of the essential matrix $\mathbf{E}$ and the rotation matrix $\mathbf{R}$ are denoted by $e_{ij}$ and $r_{ij}$ , respectively, where $i$ represents the row index and $j$ the column index. We furthermore define the vectors
+
+$$
+\mathbf{e} \triangleq \operatorname{vec}(\mathbf{E}) = \left[ e_{11} \; e_{21} \; \dots \; e_{33} \right]^{\top}, \quad \text{and} \tag{3}
+$$
+
+$$
+\mathbf{r} \triangleq \operatorname{vec}(\mathbf{R}) = \left[ r_{11} \; r_{21} \; \dots \; r_{33} \right]^{\top}, \tag{4}
+$$
+
+where $\operatorname{vec}(\cdot)$ stacks matrix entries by column-first order.
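+
+As a minimal sketch of these conventions (toy values only, not taken from our experiments), the following Python snippet builds $\mathbf{E} = [\mathbf{t}]_{\times}\mathbf{R}$ for a sampled pose and stacks it in the column-first order of Eqs. (3)-(4):
+
+```python
+import numpy as np
+
+def skew(t):
+    """Skew-symmetric matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
+    return np.array([[0.0, -t[2], t[1]],
+                     [t[2], 0.0, -t[0]],
+                     [-t[1], t[0], 0.0]])
+
+rng = np.random.default_rng(0)
+Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
+R = Q if np.linalg.det(Q) > 0 else -Q        # a rotation with det(R) = +1
+t = rng.standard_normal(3)
+
+E = skew(t) @ R                              # Eq. (2)
+e = E.flatten(order="F")                     # column-first vec(E), Eq. (3)
+r = R.flatten(order="F")                     # column-first vec(R), Eq. (4)
+print(e[:3])                                 # e_11, e_21, e_31
+```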
+
+We define the essential matrix set as
+
+$$
+\mathcal {M} _ {\mathbf {E}} \triangleq \left\{\mathbf {E} \mid \mathbf {E} = [ \mathbf {t} ] _ {\times} \mathbf {R}, \exists \mathbf {R} \in \operatorname {S O} (3) \right\}. \tag {5}
+$$
+
+
+Figure 2. Geometry of the generalized relative pose problem for multi-camera systems.
+
+This essential matrix set is called the essential matrix manifold [12]. It is worth mentioning that scale-ambiguity does not exist in GEM estimation, which is why $\mathcal{M}_{\mathbf{E}}$ does not contain any constraints on $\mathbf{t}$ . By contrast, there is a scale-ambiguity for standard relative pose estimation, and the translation $\mathbf{t}$ is typically restricted to length 1.
+
+# 3.1. Generalized Essential Matrix
+
+We now review the GEM describing the relative pose geometry for generalized cameras [32, 24, 18]. As outlined in [32], the transformation rule and the intersection-constraint of Plücker line-vectors easily lead to the epipolar constraint
+
+$$
+\mathbf {l} _ {i} ^ {\top} \left[ \begin{array}{c c} \mathbf {E} & \mathbf {R} \\ \mathbf {R} & \mathbf {0} \end{array} \right] \mathbf {l} _ {i} ^ {\prime} = 0, \tag {6}
+$$
+
+where $(\mathbf{l}_i, \mathbf{l}_i^{\prime})$ denotes a pair of corresponding Plücker line-vectors pointing at the $i$ -th 3D point from two different generalized cameras.
+
+Figure 2 illustrates a multi-camera system, which is a common special case of a generalized camera. A point on each Plücker-line is easily given by the capturing camera's center $c_{i}$, seen from the origin of the multi-camera system $b$. Denoting this displacement by $\mathbf{t}_{bc,i}$, we obtain
+
+$$
+\mathbf {l} _ {i} = \left[ \begin{array}{c} \mathbf {f} _ {i} \\ \mathbf {t} _ {b c, i} \times \mathbf {f} _ {i} \end{array} \right]. \tag {7}
+$$
+
+Note that we assume that—without loss of generality— $c$ and $b$ have identical orientation. The generalized epipolar constraint thus becomes
+
+$$
+\mathbf {f} _ {i} ^ {\top} \mathbf {E} \mathbf {f} _ {i} ^ {\prime} + \mathbf {f} _ {i} ^ {\top} \mathbf {R} \mathbf {h} _ {i} ^ {\prime} + \mathbf {h} _ {i} ^ {\top} \mathbf {R} \mathbf {f} _ {i} ^ {\prime} = 0, \tag {8}
+$$
+
+where
+
+$$
+\mathbf {h} _ {i} \triangleq \mathbf {t} _ {b c, i} \times \mathbf {f} _ {i}; \quad \mathbf {h} _ {i} ^ {\prime} \triangleq \mathbf {t} _ {b c, i} ^ {\prime} \times \mathbf {f} _ {i} ^ {\prime}.
+$$
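+
+For intuition, the following Python sketch checks constraint (8) on a noise-free toy rig; the camera offsets, the pose, and the 3D points are invented for illustration only:
+
+```python
+import numpy as np
+
+def skew(t):
+    return np.array([[0.0, -t[2], t[1]],
+                     [t[2], 0.0, -t[0]],
+                     [-t[1], t[0], 0.0]])
+
+rng = np.random.default_rng(1)
+
+# Ground-truth relative pose of frame 2 expressed in frame 1 (toy values).
+a = 0.3
+R = np.array([[np.cos(a), -np.sin(a), 0.0],
+              [np.sin(a),  np.cos(a), 0.0],
+              [0.0, 0.0, 1.0]])
+t = np.array([0.5, -0.2, 0.1])
+E = skew(t) @ R
+
+# Camera offsets t_{bc,i} inside each generalized camera (toy 2-camera rig).
+offsets = [np.array([0.3, 0.0, 0.0]), np.array([-0.3, 0.0, 0.0])]
+
+for i in range(4):
+    X = rng.uniform(-2.0, 2.0, 3) + np.array([0.0, 0.0, 6.0])  # 3D point, frame 1
+    c, c2 = offsets[i % 2], offsets[(i + 1) % 2]
+    f = (X - c) / np.linalg.norm(X - c)                         # bearing, frame 1
+    X2 = R.T @ (X - t)                                          # same point, frame 2
+    f2 = (X2 - c2) / np.linalg.norm(X2 - c2)                    # bearing, frame 2
+    h, h2 = np.cross(c, f), np.cross(c2, f2)                    # Pluecker moments
+    print(f @ E @ f2 + f @ R @ h2 + h @ R @ f2)                 # Eq. (8), ~0
+```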
+
+# 3.2. Optimizing the GEM by Minimizing the Algebraic Error
+
+Due to the existence of measurement noise, the generalized epipolar constraint will not be strictly satisfied. Denoting the residual for the $i$-th correspondence as
+
+$$
+\varepsilon_ {i} = \mathbf {f} _ {i} ^ {\top} \mathbf {E} \mathbf {f} _ {i} ^ {\prime} + \mathbf {f} _ {i} ^ {\top} \mathbf {R} \mathbf {h} _ {i} ^ {\prime} + \mathbf {h} _ {i} ^ {\top} \mathbf {R} \mathbf {f} _ {i} ^ {\prime}, \tag {9}
+$$
+
+the summation of squared residuals for $N$ correspondences $\{(\mathbf{l}_i,\mathbf{l}_i^{\prime})\}_{i = 1}^{N}$ becomes a quadratic function in $\mathbf{e}$ and $\mathbf{r}$
+
+$$
+\varepsilon \triangleq \sum_ {i = 1} ^ {N} \varepsilon_ {i} ^ {2} = \left[ \mathbf {e} ^ {\top}, \mathbf {r} ^ {\top} \right] \mathbf {C} \left[ \begin{array}{l} \mathbf {e} \\ \mathbf {r} \end{array} \right]. \tag {10}
+$$
+
+$\mathbf{C}$ can be expressed explicitly by
+
+$$
+\mathbf {C} = \left[ \begin{array}{c c} \mathbf {C} _ {1} & \mathbf {C} _ {4} + \mathbf {C} _ {5} \\ \mathbf {C} _ {4} ^ {\top} + \mathbf {C} _ {5} ^ {\top} & \mathbf {C} _ {2} + \mathbf {C} _ {3} + \mathbf {C} _ {6} + \mathbf {C} _ {6} ^ {\top} \end{array} \right], \tag {11}
+$$
+
+where
+
+$$
+\left\{ \begin{array}{l} \mathbf {C} _ {1} = \sum_ {i = 1} ^ {N} \left(\mathbf {f} _ {i} ^ {\prime} \otimes \mathbf {f} _ {i}\right) \left(\mathbf {f} _ {i} ^ {\prime} \otimes \mathbf {f} _ {i}\right) ^ {\top} \\ \mathbf {C} _ {2} = \sum_ {i = 1} ^ {N} \left(\mathbf {h} _ {i} ^ {\prime} \otimes \mathbf {f} _ {i}\right) \left(\mathbf {h} _ {i} ^ {\prime} \otimes \mathbf {f} _ {i}\right) ^ {\top} \\ \mathbf {C} _ {3} = \sum_ {i = 1} ^ {N} \left(\mathbf {f} _ {i} ^ {\prime} \otimes \mathbf {h} _ {i}\right) \left(\mathbf {f} _ {i} ^ {\prime} \otimes \mathbf {h} _ {i}\right) ^ {\top} \\ \mathbf {C} _ {4} = \sum_ {i = 1} ^ {N} \left(\mathbf {f} _ {i} ^ {\prime} \otimes \mathbf {f} _ {i}\right) \left(\mathbf {h} _ {i} ^ {\prime} \otimes \mathbf {f} _ {i}\right) ^ {\top} \\ \mathbf {C} _ {5} = \sum_ {i = 1} ^ {N} \left(\mathbf {f} _ {i} ^ {\prime} \otimes \mathbf {f} _ {i}\right) \left(\mathbf {f} _ {i} ^ {\prime} \otimes \mathbf {h} _ {i}\right) ^ {\top} \\ \mathbf {C} _ {6} = \sum_ {i = 1} ^ {N} \left(\mathbf {h} _ {i} ^ {\prime} \otimes \mathbf {f} _ {i}\right) \left(\mathbf {f} _ {i} ^ {\prime} \otimes \mathbf {h} _ {i}\right) ^ {\top}. \end{array} \right.
+$$
+
+Note that $\{\mathbf{C}_j\}_{j=1}^6$ are Gram matrices, so they are positive semidefinite (PSD) and symmetric (and so is $\mathbf{C}$). In practice, $\mathbf{C}$ is positive definite in the non-minimal GEM estimation scenario.
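+
+The construction of $\mathbf{C}$ follows from the fact that each residual (9) is linear in $(\mathbf{e}, \mathbf{r})$. A short Python sketch (with randomly drawn, purely illustrative bearing vectors and moments) verifies that the quadratic form (10) built via Kronecker products reproduces the summed squared residuals:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(2)
+
+# Random bearing vectors and Pluecker moments, only to exercise the algebra of
+# Eqs. (9)-(11); they need not come from a consistent geometry here.
+N = 50
+F  = rng.standard_normal((N, 3)); F  /= np.linalg.norm(F,  axis=1, keepdims=True)
+F2 = rng.standard_normal((N, 3)); F2 /= np.linalg.norm(F2, axis=1, keepdims=True)
+H  = rng.standard_normal((N, 3))
+H2 = rng.standard_normal((N, 3))
+
+# Each residual is linear in (vec(E), vec(R)):
+#   eps_i = a_i . vec(E) + b_i . vec(R),  a_i = f2 (x) f,  b_i = h2 (x) f + f2 (x) h,
+# so C in Eq. (11) is the sum of outer products of the stacked coefficient vectors.
+C = np.zeros((18, 18))
+for f, f2, h, h2 in zip(F, F2, H, H2):
+    v = np.concatenate([np.kron(f2, f), np.kron(h2, f) + np.kron(f2, h)])
+    C += np.outer(v, v)
+
+# Sanity check of Eq. (10) for arbitrary E and R.
+E, R = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
+er = np.concatenate([E.flatten(order="F"), R.flatten(order="F")])
+direct = sum((f @ E @ f2 + f @ R @ h2 + h @ R @ f2) ** 2
+             for f, f2, h, h2 in zip(F, F2, H, H2))
+print(np.isclose(er @ C @ er, direct))     # expected: True
+```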
+
+# 3.3. A QCQP Formulation
+
+The problem of minimizing the algebraic error on the manifold $\mathcal{M}_{\mathbf{E}}$ can be formulated as
+
+$$
+\min _ {\mathbf {E}, \mathbf {R}, \mathbf {t}} \left[ \mathbf {e} ^ {\top}, \mathbf {r} ^ {\top} \right] \mathbf {C} \left[ \begin{array}{l} \mathbf {e} \\ \mathbf {r} \end{array} \right] \tag {12}
+$$
+
+$$
+\mathrm {s . t .} \mathbf {E} = [ \mathbf {t} ] _ {\times} \mathbf {R}, \quad \mathbf {R} \in \operatorname {S O} (3).
+$$
+
+This problem is a QCQP: The objective is a sum of squares, which are PSD quadratic polynomials; the largest set of independent quadratic constraints to define SO(3) is 20 [20, 3]; and, lastly, the constraint between $\mathbf{E}$ , $\mathbf{R}$ and $\mathbf{t}$ , meaning $\mathbf{E} = [\mathbf{t}]_{\times} \mathbf{R}$ , is also quadratic. The problem has 21 variables and 29 constraints.
+
+There are some interesting examples in the literature on how the introduction of linearly independent redundant constraints into a QCQP formulation may significantly improve the tightness of the subsequent semidefinite relaxation [1, 3, 4, 38]. For the 20 quadratic constraints considered for SO(3), more than half of them are also redundant and added only for the sake of better tightness [3]. Inspired by this idea, we introduce redundant constraints for problem (12). The equalities below are easily verified:
+
+$$
+\left\{ \begin{array}{l} \mathbf {t} ^ {\top} \mathbf {E} = \mathbf {t} ^ {\top} ([ \mathbf {t} ] _ {\times} \mathbf {R}) = \mathbf {0} \\ \mathbf {E E} ^ {\top} = \left([ \mathbf {t} ] _ {\times} \mathbf {R}\right) \left([ \mathbf {t} ] _ {\times} \mathbf {R}\right) ^ {\top} = [ \mathbf {t} ] _ {\times} [ \mathbf {t} ] _ {\times} ^ {\top} \\ \mathbf {E R} ^ {\top} = ([ \mathbf {t} ] _ {\times} \mathbf {R}) \mathbf {R} ^ {\top} = [ \mathbf {t} ] _ {\times} \end{array} \right. \tag {13}
+$$
+
+These 3 equalities introduce 3, 6 and 9 additional constraints, respectively.
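+
+These identities are straightforward to confirm numerically for any valid pose; a minimal Python check (with an arbitrary sampled pose, for illustration only) is given below:
+
+```python
+import numpy as np
+
+def skew(t):
+    return np.array([[0.0, -t[2], t[1]],
+                     [t[2], 0.0, -t[0]],
+                     [-t[1], t[0], 0.0]])
+
+rng = np.random.default_rng(3)
+Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
+R = Q if np.linalg.det(Q) > 0 else -Q
+t = rng.standard_normal(3)
+E = skew(t) @ R
+
+print(np.allclose(t @ E, 0.0))                       # t^T E = 0        (3 constraints)
+print(np.allclose(E @ E.T, skew(t) @ skew(t).T))     # E E^T = [t]x[t]x^T (6 constraints)
+print(np.allclose(E @ R.T, skew(t)))                 # E R^T = [t]x     (9 constraints)
+```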
+
+# 3.4. Relations between Algebraic-Error-Based and Eigenvalue-Based Formulations
+
+In [18], an eigenvalue-based formulation was proposed. Here we demonstrate the close relation between the algebraic-error-based and the eigenvalue-based formulation. By substituting (2) into (8) and applying the permutation rule for triple scalar products, we obtain
+
+$$
+- \left(\mathbf {f} _ {i} \times \mathbf {R} \mathbf {f} _ {i} ^ {\prime}\right) ^ {\top} \mathbf {t} + \left(\mathbf {f} _ {i} ^ {\top} \mathbf {R} \mathbf {h} _ {i} ^ {\prime} + \mathbf {h} _ {i} ^ {\top} \mathbf {R} \mathbf {f} _ {i} ^ {\prime}\right) = 0, \tag {14}
+$$
+
+which can obviously be rewritten as
+
+$$
+\mathbf{g}_i^{\top} \tilde{\mathbf{t}} = 0, \quad \text{with} \tag{15}
+$$
+
+$$
+\mathbf{g}_i = \left[ \begin{array}{c} \mathbf{f}_i \times \mathbf{R} \mathbf{f}_i^{\prime} \\ \mathbf{f}_i^{\top} \mathbf{R} \mathbf{h}_i^{\prime} + \mathbf{h}_i^{\top} \mathbf{R} \mathbf{f}_i^{\prime} \end{array} \right] \quad \text{and} \quad \tilde{\mathbf{t}} = \left[ \begin{array}{c} -w\,\mathbf{t} \\ w \end{array} \right].
+$$
+
+$\mathbf{g}_i$ here is called a generalized epipolar plane normal vector, and $\tilde{\mathbf{t}}$ the homogeneous translation vector, which has arbitrary scale [18]. We set $w$ to 1 without loss of generality. Denote
+
+$$
+\mathbf{G} = \left[ \mathbf{g}_1, \dots, \mathbf{g}_N \right], \tag{16}
+$$
+
+$$
+\mathbf {H} = \mathbf {G} \mathbf {G} ^ {\top} = \sum_ {i = 1} ^ {N} \mathbf {g} _ {i} \mathbf {g} _ {i} ^ {\top}. \tag {17}
+$$
+
+Then we can express the summation of residuals by this new parameterization
+
+$$
+\varepsilon = \sum_ {i = 1} ^ {N} \varepsilon_ {i} ^ {2} = \sum_ {i = 1} ^ {N} \left(\mathbf {g} _ {i} ^ {\top} \tilde {\mathbf {t}}\right) ^ {2} = \left\| \mathbf {G} ^ {\top} \tilde {\mathbf {t}} \right\| _ {2} ^ {2}. \tag {18}
+$$
+
+Thus the algebraic-error-based formulation (12) is equivalent to the following problem
+
+$$
+\min _ {\mathbf {R}, \tilde {\mathbf {t}}} \left\| \mathbf {G} ^ {\top} \tilde {\mathbf {t}} \right\| _ {2} ^ {2} \quad \text {s . t .} \mathbf {R} \in \mathrm {S O} (3), \quad \tilde {\mathbf {t}} _ {[ 4 ]} = 1. \tag {19}
+$$
+
+This problem can be further reformulated as
+
+$$
+\min _ {\mathbf {R}} J (\mathbf {R}) \quad \text {s . t .} \mathbf {R} \in \mathrm {S O} (3), \tag {20}
+$$
+
+where
+
+$$
+J (\mathbf {R}) = \min _ {\tilde {\mathbf {t}}} \| \mathbf {G} ^ {\top} \tilde {\mathbf {t}} \| \quad \text {s . t .} \tilde {\mathbf {t}} _ {[ 4 ]} = 1. \tag {21}
+$$
+
+If we replace the constraint in problem (21) by $\| \tilde{\mathbf{t}} \| = 1$, $J(\mathbf{R})$ can be viewed as finding the optimal $\tilde{\mathbf{t}}$ that minimizes $\| \mathbf{G}^{\top}\tilde{\mathbf{t}}\|$ subject to the condition $\| \tilde{\mathbf{t}}\| = 1$. The solution is the unit eigenvector corresponding to the smallest eigenvalue of the matrix $\mathbf{H} = \mathbf{G}\mathbf{G}^{\top}$. Let $\sigma_{\mathbf{H},\min}$ denote the smallest eigenvalue of $\mathbf{H}$; the optimization problem thus becomes
+
+$$
+\min _ {\mathbf {R}} \sigma_ {\mathbf {H}, \min } \quad \text {s . t .} \mathbf {R} \in \mathrm {S O} (3). \tag {22}
+$$
+
+This is exactly the eigenvalue-based formulation that was proposed in [18].
+
+From the previous analysis, it can be seen that the algebraic-error-based formulation and the eigenvalue-based formulation differ only by the domain of the translation vector. The algebraic error method implicitly assumes that the optimal translation is never infinite, as otherwise we cannot assume that the homogeneous coordinate of $\tilde{\mathbf{t}}$ is 1. Fortunately, infinite translations in relative pose estimation are not a practical concern.
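+
+To make the connection concrete, the following Python sketch assumes the true rotation is known and recovers the homogeneous translation from the nullspace of $\mathbf{H}$ on noise-free, invented data (a toy stand-in for the optimization over $\mathbf{R}$ in (22)):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(4)
+
+# Toy rig and ground truth (invented values); the rotation is assumed known here.
+a = 0.4
+R = np.array([[np.cos(a), 0.0, np.sin(a)],
+              [0.0, 1.0, 0.0],
+              [-np.sin(a), 0.0, np.cos(a)]])
+t_true = np.array([0.8, 0.1, -0.3])
+offsets = [np.array([0.3, 0.0, 0.0]), np.array([-0.3, 0.0, 0.0]),
+           np.array([0.0, 0.3, 0.0]), np.array([0.0, -0.3, 0.0])]
+
+H = np.zeros((4, 4))
+for i in range(40):
+    X = rng.uniform(-2.0, 2.0, 3) + np.array([0.0, 0.0, 6.0])
+    c, c2 = offsets[i % 4], offsets[(i + 1) % 4]
+    f = (X - c) / np.linalg.norm(X - c)
+    X2 = R.T @ (X - t_true)
+    f2 = (X2 - c2) / np.linalg.norm(X2 - c2)
+    h, h2 = np.cross(c, f), np.cross(c2, f2)
+    g = np.concatenate([np.cross(f, R @ f2), [f @ R @ h2 + h @ R @ f2]])  # Eq. (15)
+    H += np.outer(g, g)                                                   # Eq. (17)
+
+# With the true rotation and noise-free data, the smallest eigenvalue of H is ~0
+# and its eigenvector is the homogeneous translation [-t; 1] up to scale.
+w, V = np.linalg.eigh(H)
+t_hom = V[:, 0] / V[3, 0]
+print(-t_hom[:3], t_true)   # the two should agree
+```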
+
+# 4. Semidefinite Relaxation and Optimization
+
+We use semidefinite relaxation (SDR) to solve the QCQP problem (12). Let us rewrite it in a more general form as
+
+$$
+\min _ {\mathbf {x} \in \mathbb {R} ^ {n}} \mathbf {x} ^ {\top} \mathbf {C} _ {0} \mathbf {x} \tag {23}
+$$
+
+$$
+\begin{array}{l} \mathrm {s . t .} \mathbf {x} ^ {\top} \mathbf {A} _ {i} \mathbf {x} = 0, i = 1, \dots , m \\ \mathbf {x} ^ {\top} \mathbf {L} \mathbf {x} = 1, \\ \end{array}
+$$
+
+where
+
+$$
+\mathbf {x} = \left[ \operatorname {v e c} (\mathbf {E}); \operatorname {v e c} (\mathbf {R}); \mathbf {t}; y \right], \tag {24}
+$$
+
+is a vector stacking all variables. Note that we add an additional variable $y$ that makes the objective and constraints purely quadratic (i.e., no linear or constant term in the objective and no linear term in the equality constraints). This trick is called homogenization [27, 3], and introduces the constraint $\mathbf{x}_{[n]}^2 = 1$ . By introducing a matrix $\mathbf{L} = \mathrm{diag}([0,\dots ,0,1])$ , this constraint can be reformulated as $\mathbf{x}^{\top}\mathbf{L}\mathbf{x} = 1$ . Matrices $\mathbf{C}_0,\mathbf{A}_1,\dots ,\mathbf{A}_m\in \mathbb{S}^n$ are determined by the original problem (12), where $\mathbb{S}^n$ denotes the set of all real symmetric $n\times n$ matrices.
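+
+The homogenization trick itself is generic; the toy Python snippet below (with an arbitrary small quadratic, not the matrices of our problem) shows how a quadratic function with linear and constant terms becomes purely quadratic after appending a variable $y$ with $y^2 = 1$:
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(5)
+n = 3
+A = rng.standard_normal((n, n)); A = 0.5 * (A + A.T)   # symmetric quadratic part
+b = rng.standard_normal(n)                             # linear part
+c = 1.7                                                # constant part
+
+# Homogenization: q(x) = x^T A x + b^T x + c  ->  z^T M z  with z = [x; y], y^2 = 1.
+M = np.zeros((n + 1, n + 1))
+M[:n, :n] = A
+M[:n, n] = M[n, :n] = 0.5 * b
+M[n, n] = c
+
+x = rng.standard_normal(n)
+z = np.append(x, 1.0)
+print(np.isclose(x @ A @ x + b @ x + c, z @ M @ z))    # expected: True
+```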
+
+In our problem, $n = 22$ ; $\mathbf{C}_0 = \begin{bmatrix} \mathbf{C} & \mathbf{0}_{18\times 4}\\ \mathbf{0}_{4\times 18} & \mathbf{0}_{4\times 4} \end{bmatrix}$ . A crucial first step in deriving an SDR of problem (23) is to observe that
+
+$$
+\mathbf {x} ^ {\top} \mathbf {C} _ {0} \mathbf {x} = \operatorname {t r a c e} \left(\mathbf {x} ^ {\top} \mathbf {C} _ {0} \mathbf {x}\right) = \operatorname {t r a c e} \left(\mathbf {C} _ {0} \mathbf {x x} ^ {\top}\right), \tag {25}
+$$
+
+$$
+\mathbf {x} ^ {\top} \mathbf {A} _ {i} \mathbf {x} = \operatorname {t r a c e} \left(\mathbf {x} ^ {\top} \mathbf {A} _ {i} \mathbf {x}\right) = \operatorname {t r a c e} \left(\mathbf {A} _ {i} \mathbf {x x} ^ {\top}\right). \tag {26}
+$$
+
+In particular, both the objective function and constraints in problem (23) are linear in the matrix $\mathbf{xx}^{\top}$ . Thus, by introducing a new variable $\mathbf{X} = \mathbf{xx}^{\top}$ and noting that $\mathbf{X} = \mathbf{xx}^{\top}$ is equivalent to $\mathbf{X}$ being a rank one symmetric PSD matrix, we obtain the following equivalent form of problem (23):
+
+$$
+\min_{\mathbf{X} \in \mathbb{S}^n} \operatorname{trace}(\mathbf{C}_0 \mathbf{X}) \tag{27}
+$$
+
+$$
+\text{s.t.} \quad \operatorname{trace}(\mathbf{A}_i \mathbf{X}) = 0, \; i = 1, \dots, m, \quad \operatorname{trace}(\mathbf{L}\mathbf{X}) = 1, \quad \mathbf{X} \succeq \mathbf{0}, \quad \operatorname{rank}(\mathbf{X}) = 1.
+$$
+
+Here, $\mathbf{X} \succeq \mathbf{0}$ means that $\mathbf{X}$ is PSD. Solving rank constrained semidefinite programs is NP-hard [36]. SDR drops the rank constraint $\mathrm{rank}(\mathbf{X}) = 1$ to obtain the following relaxed version of problem (27)
+
+$$
+\min_{\mathbf{X} \in \mathbb{S}^n} \operatorname{trace}(\mathbf{C}_0 \mathbf{X}) \tag{28}
+$$
+
+$$
+\text{s.t.} \quad \operatorname{trace}(\mathbf{A}_i \mathbf{X}) = 0, \; i = 1, \dots, m, \quad \operatorname{trace}(\mathbf{L}\mathbf{X}) = 1, \quad \mathbf{X} \succeq \mathbf{0}.
+$$
+
+Problem (28) turns out to be an instance of a semidefinite program (SDP) [36, 27], which may be solved using convex optimization. Modern solvers for SDP are based on primal-dual interior point methods. Its dual problem is
+
+$$
+\max _ {\boldsymbol {\lambda}, \rho} \rho \tag {29}
+$$
+
+$$
+\text {s . t .} \mathbf {Q} (\boldsymbol {\lambda}, \rho) = \mathbf {C} _ {0} - \sum_ {i = 1} ^ {m} \lambda_ {i} \mathbf {A} _ {i} - \rho \mathbf {L} \succeq 0,
+$$
+
+where $\lambda = [\lambda_1,\dots ,\lambda_m]^\top \in \mathbb{R}^m$ . Problem (29) is called the Lagrangian dual problem of problem (23), and $\mathbf{Q}(\boldsymbol {\lambda},\rho)$ is the Hessian of the Lagrangian. In summary, the relations between the main formulations are demonstrated in Fig. 3.
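+
+For illustration, the relaxed problem (28) can be handed to a generic SDP modeling tool; the Python/CVXPY sketch below assumes the data matrix $\mathbf{C}_0$ and the (data-independent) constraint matrices $\mathbf{A}_i$ and $\mathbf{L}$ have already been assembled, and is not the SDPA-based implementation evaluated in Section 5:
+
+```python
+import cvxpy as cp
+
+def solve_sdr(C0, A_list, L):
+    """Sketch of the relaxed problem (28). C0, the A_i and L are assumed to be
+    given as numpy arrays of matching size; the A_i encode SO(3), E = [t]x R
+    and the redundant constraints (13)."""
+    n = C0.shape[0]
+    X = cp.Variable((n, n), symmetric=True)
+    constraints = [X >> 0, cp.trace(L @ X) == 1]
+    constraints += [cp.trace(A @ X) == 0 for A in A_list]
+    problem = cp.Problem(cp.Minimize(cp.trace(C0 @ X)), constraints)
+    problem.solve()        # delegates to whichever installed SDP solver is chosen
+    return X.value, problem.value
+```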
+
+We now prove that there is no duality gap between (28) and (29). Thus the problem can be readily solved using off-the-shelf primal-dual interior point methods [39].
+
+Theorem 4.1. For QCQP problem (12), there is no duality gap between the primal SDP problem (28) and its dual problem (29).
+
+Proof. Denote the optimal value for problem (28) and its dual problem (29) as $f_{\mathrm{primal}}$ and $f_{\mathrm{dual}}$. The inequality $f_{\mathrm{primal}} \geq f_{\mathrm{dual}}$ follows from weak duality. Equality, and the existence of $\mathbf{X}^{\star}$ and $\lambda^{\star}$ which attain the optimal values, follow if we can show that the feasible regions of both the primal and dual problems have nonempty interiors [36, Theorem 3.1] (also known as Slater's constraint qualification [2]).
+
+For the primal problem (28), let $\mathbf{E}_0$ be an arbitrary point on the essential matrix manifold $\mathcal{M}_{\mathbf{E}}$ : $\mathbf{E}_0 = [\mathbf{t}_0]_{\times} \mathbf{R}_0$. Denote $\mathbf{x}_0 = [\mathrm{vec}(\mathbf{E}_0); \mathrm{vec}(\mathbf{R}_0); \mathbf{t}_0; 1]$. It can be verified that $\mathbf{X}_0 = \mathbf{x}_0 \mathbf{x}_0^\top$ is an interior point of the feasible domain of the primal problem. For the dual problem (29), we first list part of the constraints as follows
+
+$$
+h_1: e_{11}^2 + e_{12}^2 + e_{13}^2 - \left(t_2^2 + t_3^2\right) = 0, \tag{30a}
+$$
+
+$$
+h _ {2}: e _ {2 1} ^ {2} + e _ {2 2} ^ {2} + e _ {2 3} ^ {2} - \left(t _ {1} ^ {2} + t _ {3} ^ {2}\right) = 0, \tag {30b}
+$$
+
+$$
+h _ {3}: e _ {3 1} ^ {2} + e _ {3 2} ^ {2} + e _ {3 3} ^ {2} - \left(t _ {1} ^ {2} + t _ {2} ^ {2}\right) = 0, \tag {30c}
+$$
+
+$$
+h _ {4}: r _ {1 1} ^ {2} + r _ {1 2} ^ {2} + r _ {1 3} ^ {2} - y ^ {2} = 0, \tag {30d}
+$$
+
+$$
+h _ {5}: r _ {2 1} ^ {2} + r _ {2 2} ^ {2} + r _ {2 3} ^ {2} - y ^ {2} = 0, \tag {30e}
+$$
+
+$$
+h _ {6}: r _ {3 1} ^ {2} + r _ {3 2} ^ {2} + r _ {3 3} ^ {2} - y ^ {2} = 0, \tag {30f}
+$$
+
+$$
+h _ {7}: r _ {1 1} ^ {2} + r _ {2 1} ^ {2} + r _ {3 1} ^ {2} - y ^ {2} = 0, \tag {30g}
+$$
+
+$$
+h_8: r_{12}^2 + r_{22}^2 + r_{32}^2 - y^2 = 0, \tag{30h}
+$$
+
+$$
+h_9: r_{13}^2 + r_{23}^2 + r_{33}^2 - y^2 = 0, \tag{30i}
+$$
+
+
+Figure 3. Relations between the main formulations in this work.
+
+where $h_1 \sim h_3$ follow from the constraint $\mathbf{E}\mathbf{E}^{\top} = [\mathbf{t}]_{\times}[\mathbf{t}]_{\times}^{\top}$, and $h_4 \sim h_9$ originate from the constraints $\mathbf{R}\mathbf{R}^{\top} = \mathbf{R}^{\top}\mathbf{R} = \mathbf{I}_3$. Recall that $\mathbf{C} \succ 0$, so its minimal eigenvalue $\sigma_{\mathrm{min}}$ is positive. Let $\lambda_1 \sim \lambda_9$ be the Lagrange multipliers of $h_1 \sim h_9$, respectively. Let the first 9 entries of $\lambda_0$ satisfy $\lambda_{0[1:9]} = -\epsilon [1,1,1,1,1,1,1,1,1]^{\top}$, and let the other entries of $\lambda_0$ and $\rho_0$ be zero. It can be verified that $\mathbf{Q}(\lambda_0,\rho_0) = \begin{bmatrix} \mathbf{C} - \epsilon \mathbf{I}_{18} & 0 & 0 \\ 0 & 2\epsilon \mathbf{I}_3 & 0 \\ 0 & 0 & 6\epsilon \end{bmatrix} \succ 0$ for all $\epsilon \in (0,\sigma_{\mathrm{min}})$. That means $\{\lambda_0,\rho_0\}$ is an interior point of the feasible domain of the dual problem.
+
+# 4.1. Further Redundant Constraints
+
+To improve the tightness of the SDR, we add a further redundant constraint to our SDP. This redundant constraint is taken from the SO(3) orbitope.
+
+Definition 4.1 (Orbitope [33]). An orbitope is the convex hull of an orbit of a compact algebraic group that acts linearly on a real vector space. The orbit has the structure of a real algebraic variety, and the orbitope is a convex semi-algebraic set.
+
+Theorem 4.2 (SO(3) Orbitope, Proposition 4.1 in [33]). The tautological orbitope $\mathrm{conv}(SO(3))$ is a spectrahedron whose boundary is a quartic hypersurface. In fact, a $3 \times 3$ matrix $\mathbf{R}$ lies in $\mathrm{conv}(SO(3))$ if and only if
+
+$$
+\mathcal {L} (\mathbf {R}) + \mathbf {I} _ {4} \succeq 0 \tag {31}
+$$
+
+where $\mathcal{L}(\mathbf{R}) =$
+
+$$
+\left[ \begin{array}{c c c c} r _ {1 1} + r _ {2 2} + r _ {3 3} & r _ {3 2} - r _ {2 3} & r _ {1 3} - r _ {3 1} & r _ {2 1} - r _ {1 2} \\ r _ {3 2} - r _ {2 3} & r _ {1 1} - r _ {2 2} - r _ {3 3} & r _ {2 1} + r _ {1 2} & r _ {1 3} + r _ {3 1} \\ r _ {1 3} - r _ {3 1} & r _ {2 1} + r _ {1 2} & r _ {2 2} - r _ {1 1} - r _ {3 3} & r _ {3 2} + r _ {2 3} \\ r _ {2 1} - r _ {1 2} & r _ {1 3} + r _ {3 1} & r _ {3 2} + r _ {2 3} & r _ {3 3} - r _ {1 1} - r _ {2 2} \end{array} \right]
+$$
+
+Inequality (31) provides an additional linear matrix inequality for our optimization problem. Note that $\{r_{ij}\}_{i,j=1}^{3}$ in $\mathbf{R}$ are also entries in $\mathbf{X}$ since $\mathbf{X} = \mathbf{xx}^{\top}$ and $\mathbf{x} = [\mathrm{vec}(\mathbf{E});\mathrm{vec}(\mathbf{R});\mathbf{t};1]$ . Therefore (31) can be reformulated in terms of $\mathbf{X}$ .
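+
+As a quick numerical sanity check (illustrative only), the linear matrix inequality (31) can be evaluated for a sampled rotation; for any $\mathbf{R} \in \mathrm{SO}(3)$ the matrix $\mathcal{L}(\mathbf{R}) + \mathbf{I}_4$ is PSD (its smallest eigenvalue is zero up to numerical precision):
+
+```python
+import numpy as np
+
+def orbitope_lmi(R):
+    """L(R) + I_4 from Theorem 4.2; PSD iff R lies in conv(SO(3))."""
+    r11, r12, r13 = R[0]
+    r21, r22, r23 = R[1]
+    r31, r32, r33 = R[2]
+    L = np.array([
+        [r11 + r22 + r33, r32 - r23,       r13 - r31,       r21 - r12],
+        [r32 - r23,       r11 - r22 - r33, r21 + r12,       r13 + r31],
+        [r13 - r31,       r21 + r12,       r22 - r11 - r33, r32 + r23],
+        [r21 - r12,       r13 + r31,       r32 + r23,       r33 - r11 - r22]])
+    return L + np.eye(4)
+
+rng = np.random.default_rng(6)
+Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
+R = Q if np.linalg.det(Q) > 0 else -Q
+print(np.linalg.eigvalsh(orbitope_lmi(R)))   # all >= 0 (up to numerical precision)
+```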
+
+# 4.2. Recovery of Essential Matrix and Relative Pose
+
+Once the optimal $\mathbf{X}^{\star}$ of the SDP primal problem (28) has been calculated by an SDP solver, we are left with the task to recover the optimal essential matrix $\mathbf{E}^{\star}$ . Let us denote $\mathbf{X}_e^{\star} = \mathbf{X}_{[1:9,1:9]}^{\star}$ , $\mathbf{X}_r^{\star} = \mathbf{X}_{[10:18,10:18]}^{\star}$ and $\mathbf{X}_t^{\star} = \mathbf{X}_{[19:21,19:21]}^{\star}$ . Empirically, we found that $\mathrm{rank}(\mathbf{X}_e^{\star}) = 1$ . Denoting the eigenvector that corresponds to the nonzero eigenvalue of $\mathbf{X}_e^{\star}$ as $\mathbf{e}^{\star}$ , the optimal essential matrix is
+
+$$
+\mathbf {E} ^ {\star} = \operatorname {m a t} \left(\mathbf {e} ^ {\star}, [ 3, 3 ]\right), \tag {32}
+$$
+
+where $\operatorname{mat}(\mathbf{e}, [r, c])$ reshapes the vector $\mathbf{e}$ to an $r \times c$ matrix by column-first order.
+
+Once the essential matrix has been obtained, we can recover rotation $\mathbf{R}^{\star}$ and translation $\mathbf{t}^{\star}$ by the standard textbook method [11]. However, $\mathbf{E}^{\star}$ and its derived translation $\mathbf{t}^{\star}$ do not have the proper scale. To recover the proper scale, we denote the unknown scale factor as $s$ and substitute $s\mathbf{E}^{\star}$ and $\mathbf{R}^{\star}$ into the generalized epipolar constraint (8). We then calculate the scale $s$ by solving the least squares problem
+
+$$
+s = - \frac {\sum_ {i = 1} ^ {N} \left(\mathbf {f} _ {i} ^ {\top} \mathbf {E f} _ {i} ^ {\prime}\right) \cdot \left(\mathbf {f} _ {i} ^ {\top} \mathbf {R h} _ {i} ^ {\prime} + \mathbf {h} _ {i} ^ {\top} \mathbf {R f} _ {i} ^ {\prime}\right)}{\sum_ {i = 1} ^ {N} \left(\mathbf {f} _ {i} ^ {\top} \mathbf {E f} _ {i} ^ {\prime}\right) ^ {2}}. \tag {33}
+$$
+
+If the denominator in Eq. (33) is (near) zero, the problem is in a (near) degenerate configuration in which the scale is (nearly) unobservable. Known degenerate configurations correspond to a generalized camera moving along a straight line or—in some cases—a circular arc. In such a scenario, the real scale cannot be recovered, while rotation and translation direction can still be found.
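+
+A compact Python sketch of this recovery step is given below; it assumes the sub-block $\mathbf{X}_e^{\star}$, the stacked bearing vectors and Plücker moments, and the rotation decomposed from $\mathbf{E}^{\star}$ are available, and uses an arbitrary small threshold for the (near) degenerate case:
+
+```python
+import numpy as np
+
+def recover_E_and_scale(X_e, F, F2, H, H2, R, eps=1e-12):
+    """Rank-1 factor of X_e gives vec(E) up to sign; Eq. (33) restores the scale.
+    F, F2, H, H2 are the stacked bearings/moments and R the rotation decomposed
+    from E (all assumed given); eps is an arbitrary degeneracy threshold."""
+    w, V = np.linalg.eigh(X_e)
+    e = np.sqrt(max(w[-1], 0.0)) * V[:, -1]       # dominant eigenvector of X_e
+    E = e.reshape(3, 3, order="F")                # mat(e, [3, 3]), column-first
+    num = sum((f @ E @ f2) * (f @ R @ h2 + h @ R @ f2)
+              for f, f2, h, h2 in zip(F, F2, H, H2))
+    den = sum((f @ E @ f2) ** 2 for f, f2 in zip(F, F2))
+    if den < eps:
+        return E, None                            # scale (nearly) unobservable
+    return E, -num / den                          # Eq. (33)
+```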
+
+We empirically verified that $\mathrm{rank}(\mathbf{X}_e^{\star})$ and $\mathrm{rank}(\mathbf{X}_t^{\star})$ remain 1, while $\mathrm{rank}(\mathbf{X}_r^{\star})$ may be 2. Since $\mathbf{X}_r^{\star}$ does not satisfy the rank-1 constraint, we can no longer recover the rotation from it. Fortunately, we do not require $\mathbf{X}_r^{\star}$, and may recover the translation directly from $\mathbf{X}_t^{\star}$. Similarly, Section 4.3 introduces Theorem 4.3, a sufficient and necessary condition for global optimality that again does not depend on $\mathrm{rank}(\mathbf{X}_r^{\star})$; global optimality is therefore not influenced by an eventual unobservability of scale. The outline of our method is shown in Algorithm 1.
+
+# 4.3. A Sufficient and Necessary Condition for Global Optimality
+
+Since SDR drops the rank-1 constraint, a sufficient condition for global optimality is that the optimal $\mathbf{X}^{\star}$ satisfies the rank-1 constraint. However, the rank-1 constraint of $\mathbf{X}^{\star}$ may not be necessary to guarantee global optimality. The following theorem provides a sufficient and necessary condition, which provides a theoretical foundation for the practical pose recovery method described in Section 4.2.
+
+Theorem 4.3. For QCQP problem (12) with constraints (13), its SDR problem is tight if and only if: the optimal solution $\mathbf{X}^{\star}$ to its primal SDP problem (28) satisfies $\mathrm{rank}(\mathbf{X}_e^\star) = \mathrm{rank}(\mathbf{X}_t^\star) = 1$ .
+
+Algorithm 1: Generalized Essential Matrix Estimation by SDP Optimization
+
+Input: observations $\{(\mathbf{l}_i,\mathbf{l}_i^{\prime})\}_{i = 1}^{N}$
+Output: essential matrix $\mathbf{E}^{\star}$, rotation $\mathbf{R}^{\star}$, and translation $\mathbf{t}^{\star}$
+
+1. Construct $\mathbf{C}$ by Eq. (11) and set $\mathbf{C}_0 = \left[ \begin{array}{cc}\mathbf{C} & \mathbf{0}_{18\times 4}\\ \mathbf{0}_{4\times 18} & \mathbf{0}_{4\times 4} \end{array} \right]$.
+2. Construct $\{\mathbf{A}_i\}_{i = 1}^m$ and $\mathbf{L}$ in problem (23); they are independent of the input.
+3. Obtain $\mathbf{X}^{\star}$ by solving the SDP problem (28) or its dual (29) with the redundant constraints.
+4. Assert that $\mathrm{rank}(\mathbf{X}_e^\star) = \mathrm{rank}(\mathbf{X}_t^\star) = 1$.
+5. Set $\mathbf{E}^{\star} = \mathrm{mat}(\mathbf{e}^{\star},[3,3])$, where $\mathbf{e}^{\star}$ is the eigenvector corresponding to the largest eigenvalue of $\mathbf{X}_e^\star$.
+6. Decompose $\mathbf{E}^{\star}$ to obtain the rotation $\mathbf{R}^{\star}$ and the normalized translation $\mathbf{t}^{\star}$.
+7. If $\sum_{i = 1}^{N}(\mathbf{f}_{i}^{\top}\mathbf{E}\mathbf{f}_{i}^{\prime})^{2}$ is larger than a threshold, calculate the scale $s$ by Eq. (33) and set $\mathbf{t}^{\star} \leftarrow s\,\mathbf{t}^{\star}$; otherwise $\mathbf{t}^{\star}$ can only be determined up to scale.
+
+Proof. First, we prove the if part. Note that $\mathbf{X}_e^\star$ and $\mathbf{X}_t^\star$ are real symmetric matrices because they are in the feasible region of the primal SDP. Besides it is given that $\mathrm{rank}(\mathbf{X}_e^\star) = \mathrm{rank}(\mathbf{X}_t^\star) = 1$ , thus there exist two vectors $\mathbf{e}^\star$ and $\mathbf{t}^\star$ satisfying $\mathbf{e}^\star (\mathbf{e}^\star)^\top = \mathbf{X}_e^\star$ and $\mathbf{t}^\star (\mathbf{t}^\star)^\top = \mathbf{X}_t^\star$ .
+
+According to Theorem 1 in [40], a real $3 \times 3$ matrix $\mathbf{E}$ is an essential matrix if and only if there exists a vector $\mathbf{t}$ satisfying $\mathbf{E} \mathbf{E}^{\top} = [\mathbf{t}]_{\times}[\mathbf{t}]_{\times}^{\top}$. Note that $\mathbf{X}_{e}^{\star} = \mathbf{e}^{\star}(\mathbf{e}^{\star})^{\top}$ and $\mathbf{X}_{t}^{\star} = \mathbf{t}^{\star}(\mathbf{t}^{\star})^{\top}$ satisfy the constraints in problem (28) since they are sub-matrices of a valid solution $\mathbf{X}^{\star}$. By algebraic derivation based on these constraints, it can be proven that $\mathbf{E}^{\star} = \mathrm{mat}(\mathbf{e}^{\star}, [3, 3])$ and $\mathbf{t}^{\star}$ satisfy $\mathbf{E}^{\star} (\mathbf{E}^{\star})^{\top} = [\mathbf{t}^{\star}]_{\times}[\mathbf{t}^{\star}]_{\times}^{\top}$. Thus $\mathbf{E}^{\star}$ is a valid essential matrix.
+
+Next we prove the only if part. Since the SDR is tight, a valid relative pose can be uniquely recovered from the matrix $\mathbf{X}^{\star}$. According to Theorem 1 in [40], the minimal requirement for defining a valid relative pose consists of the constraints on $\mathbf{E}$ and $\mathbf{t}$. To ensure that a valid $\mathbf{E}$ and $\mathbf{t}$ can be recovered from $\mathbf{X}^{\star}$, it must hold that $\mathrm{rank}(\mathbf{X}_e^{\star}) \leq 1$ and $\mathrm{rank}(\mathbf{X}_t^{\star}) \leq 1$. Since $\mathbf{X}_e^{\star}$ and $\mathbf{X}_t^{\star}$ cannot be zero matrices (otherwise $\mathbf{X}^{\star}$ would not be in the feasible region), the equalities must hold.
+
+Theorem 4.3 provides a sufficient and necessary global optimality condition for recovering the optimal solution of the original QCQP. It also provides a method to verify global optimality. Empirically, the optimal $\mathbf{X}^{\star}$ obtained by the SDP problem always satisfies this condition. Finding the essential conditions that guarantee tightness, however, remains an open problem [6].
+
+# 5. Experimental Results
+
+We choose SDPA [37] as the interior point method (IPM) solver, and use the default parameters in all experiments. Our method is implemented in MATLAB, and all experiments are performed on an Intel Core i7 CPU running at $1.7\,\mathrm{GHz}$. To improve efficiency, we use the results of the 17-point solver [24] for initialization when more than 17 inliers are available. In this paper, only experiments for synthetic data use this initialization. To improve accuracy, we follow the suggest-and-improve framework for general QCQPs [31]. We furthermore use a local optimization method [5] to refine the results provided by SDPA. The complete method takes an average of only $15\,\mathrm{ms}$.
+
+We compared our method against several state-of-the-art methods on both synthetic and real data. Specifically, we compare our method against: (1) the minimal solver 6pt [34]; (2) the linear solver 17pt [24]; (3) the generalized eigenvalue solver ge [18]; and (4) an alternating minimization method (AMM), denoted 17pt-amm [5]. Methods ge and 17pt-amm are both initialized by 17pt. Our own methods are referred to as sdp (without any refinement) and sdp-amm (with AMM refinement).
+
+Among these methods, the implementation of 17pt-amm was provided by the authors, and other comparison methods were taken from OpenGV [17]. Note furthermore that we always ensure a balanced number of samples in each camera, independently of the experiment and number of cameras.
+
+# 5.1. Results on Synthetic Data
+
+Noise Resilience: The setup of our experiments is similar to the one proposed in [17]. We first test image noise resilience. Each method is evaluated for various noise levels ranging from 0 to 5 pixels and over 1000 random experiments per noise level. The rotation errors of all methods are shown in Fig. 4(a). Translation errors follow a similar trend, but are omitted here for the sake of space limitations.
+
+Looking at Fig. 4(a), we make the following observations: (1) sdp-amm degrades the least, and has a relatively obvious advantage over other methods in terms of both accuracy and robustness. This is partially due to the fact that our method does not depend on any initialization and can always find the global optimum. By contrast, ge strongly depends on a good initial value. (2) sdp-amm consistently performs better than sdp, which underlines the effectiveness of the suggest-and-improve framework for general QCQPs [31]. (3) sdp still has smaller errors than previous state-of-the-art methods.
+
+Number of correspondences: In our next experiment, we fix the image noise level to 0.5 pixel in standard deviation and vary the number $N$ of point correspondences. 6pt can only take a subset of the point correspondences, while other methods utilize all point correspondences. To make the comparison more fair, we randomly sample 20 minimal sets of point correspondences for 6pt, and take the best result in each experiment. The best here is defined as the result that leads to the smallest algebraic error over all simulated correspondences. We again show the rotation error in Fig. 4(b), and make the following observations: (1) As expected, the errors of 17pt, 17pt-amm, sdp, and sdp-amm all decrease as the number of point correspondences is increased. (2) ge still depends on a good initialisation. (3) sdp-amm still leads to the smallest error among all methods.
+
+Figure 4. Mean and median of rotation errors with respect to (a) noise level variations and (b) the number of point correspondences.
+
+Figure 5. Scatter plots comparing the residuals of our method against (a) 17pt [24] and (b) 17pt-amm [5]. A point lying below the red line indicates that our method outperforms in terms of a smaller residual.
+
+Optimality Gap: We compare the residuals of our method against those of 17pt and 17pt-amm, respectively. The corresponding scatter plots are indicated in Fig. 5. As can be observed sdp-amm has smaller residuals for most of the experiments. The residual of our method typically remains below $1.5 \times 10^{-3}$ . By contrast, 17pt and 17pt-amm may have residuals as large as $5 \times 10^{-3}$ .
+
+Figure 6. Mean and median number of identified inliers over outlier fraction.
+
+Performance within RANSAC: The most relevant performance measure consists of testing all algorithms as part of a hypothesize-and-test framework. We use the classical RANSAC framework [8], and the same model verification for all methods. For 6pt, we use an additional 3 points per hypothesis to disambiguate the solution multiplicity. This has no effect on the cost of the disambiguation, and is safer than disambiguation with only one point, especially regarding the high number of solutions and the cost of hypothesis generation. For 17pt, ge, and our methods, we sample 17, 12, and 12 points in each iteration, respectively.
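+
+The hypothesize-and-test protocol we use is the standard one; a generic Python skeleton is sketched below, where the solver, the residual function, and the threshold are placeholders rather than the exact components of our evaluation:
+
+```python
+import numpy as np
+
+def ransac(correspondences, solver, sample_size, residual_fn, threshold, iters=500):
+    """Generic hypothesize-and-test loop; solver and residual_fn are placeholders
+    for a relative-pose hypothesis generator and its model-verification error."""
+    rng = np.random.default_rng(0)
+    n = len(correspondences)
+    best_model, best_inliers = None, []
+    for _ in range(iters):
+        idx = rng.choice(n, size=sample_size, replace=False)
+        model = solver([correspondences[j] for j in idx])   # one pose hypothesis
+        if model is None:
+            continue
+        inliers = [c for c in correspondences if residual_fn(model, c) < threshold]
+        if len(inliers) > len(best_inliers):
+            best_model, best_inliers = model, inliers
+    return best_model, best_inliers
+```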
+
+The noise is kept at 0.5 pixel. The total number of point correspondences is fixed to 100, and we vary the outlier fraction. For each outlier fraction, we generate 2000 synthetic scenes and report the mean and median number of identified inliers. Figure 6 reports the number of inliers found by the different methods when integrated into RANSAC. As can be observed, the median number of identified inliers is nearly ideal for all methods except 17pt. However, sdp-amm obtains the largest mean number of identified inliers. In fact, sdp-amm is the only method that consistently finds all inliers in each experiment.
+
+# 5.2. Results on Real Data
+
+To conclude our evaluation, we perform experiments on real data and demonstrate that the advantage of our proposed method applies here as well. We evaluate two datasets. The first one is captured by a custom-made, synchronized 4-camera system mounted on a small-scale automated guided vehicle (AGV), and ground truth is provided by an external motion tracking system. The cameras have a $1216 \times 1936$ resolution, are equipped with $48^{\circ}$ field-of-view lenses, and are pointing forward, left, right, and backward.
+
+
+Figure 7. Empirical cumulative error distributions for (a) a 4-camera dataset with roughly omni-directional measurement distributions and (b) a 2-camera dataset with forward-facing cameras.
+
+The second dataset is taken from the KITTI [9] benchmark, which only has a forward facing stereo camera. We ignore the overlap in their fields of view, and treat it as a general multi-camera array. Ground truth is provided by a Velodyne LiDAR and a differential GPS. Figures 7(a) and 7(b) show the cumulative distribution functions (CDFs) of respective rotation errors, demonstrating how sdp-amm remains the most accurate method. The difference to alternative methods is particularly important on the KITTI sequence. In this sequence, the bearings of the landmark measurements do not have an omni-directional distribution, which is known to be a challenging case for relative pose estimation with generalized cameras.
+
+# 6. Conclusions
+
+We introduced the first certifiably globally optimal solution to the non-minimal generalized relative pose estimation problem. Extensive experiments on both synthetic and real data demonstrate clearly improved accuracy and robustness over the previous state-of-the-art, including the ability to handle the difficult scenario of a limited combined field of view of all cameras. Furthermore, by including the essential matrix in our parameterization, the dimensionality of our formulation turns out to be even smaller than that of a previous SDR-based method for central cameras. Even without further polishing of our implementation, this technique already enables real-time processing.
+
+# Acknowledgments
+
+This work has been supported by the Natural Science Foundation of Shanghai (grant number: 19ZR1434000) and NSFC (grant number: 61950410612).
+
+# References
+
+[1] Kurt Anstreicher and Henry Wolkowicz. On Lagrangian relaxation of quadratic matrix constraints. SIAM Journal on Matrix Analysis and Applications, 22(1):41-55, 2000.
+[2] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
+[3] Jesus Briales and Javier Gonzalez-Jimenez. Convex global 3D registration with Lagrangian duality. In IEEE Conference on Computer Vision and Pattern Recognition, pages 4960-4969, 2017.
+[4] Jesus Briales, Laurent Kneip, and Javier Gonzalez-Jimenez. A certifiably globally optimal solution to the non-minimal relative pose problem. In IEEE Conference on Computer Vision and Pattern Recognition, pages 145-154, 2018.
+[5] João Campos, João R. Cardoso, and Pedro Miraldo. POSEAMM: A unified framework for solving pose problems using an alternating minimization method. In IEEE International Conference on Robotics and Automation, pages 3493-3499, 2019.
+[6] Diego Cifuentes, Sameer Agarwal, Pablo A. Parrilo, and Rekha R. Thomas. On the local stability of semidefinite relaxations. arXiv:1710.04287v2, 2018.
+[7] Brian Clipp, Jae-Hak Kim, Jan-Michael Frahm, Marc Pollefeys, and Richard Hartley. Robust 6DOF motion estimation for non-overlapping, multi-camera systems. In IEEE Workshop on Applications of Computer Vision, pages 1-8, 2008.
+[8] Martin A. Fischler and Robert C. Bolles. Random sample consensus: A paradigm for model fitting with application to image analysis and automated cartography. Communications of the ACM, 24(6):381-395, 1981.
+[9] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the KITTI vision benchmark suite. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3354-3361, 2012.
+[10] Richard Hartley. Minimizing algebraic error in geometric estimation problems. In IEEE International Conference on Computer Vision, pages 469-476, 1998.
+[11] Richard Hartley and Andrew Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, 2003.
+[12] Uwe Helmke, Knut Hüper, Pei Yean Lee, and John Moore. Essential matrix estimation using Gauss-Newton iterations on a manifold. International Journal of Computer Vision, 74(2):117-136, 2007.
+[13] Fredrik Kahl and Didier Henrion. Globally optimal estimates for geometric reconstruction problems. International Journal of Computer Vision, 74(1):3-15, 2007.
+[14] Tim Kazik, Laurent Kneip, Janosch Nikolic, Marc Pollefeys, and Roland Siegwart. Real-time 6D stereo visual odometry with non-overlapping fields of view. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1529-1536, 2012.
+[15] Jae Hak Kim, Richard Hartley, Jan Michael Frahm, and Marc Pollefeys. Visual odometry for non-overlapping views using second-order cone programming. In Asian Conference on Computer Vision, pages 353-362, 2007.
+
+[16] Jae Hak Kim, Hongdong Li, and Richard Hartley. Motion estimation for nonoverlapping multicamera rigs: Linear algebraic and $L_{\infty}$ geometric solutions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(6):1044-1059, 2010.
+[17] Laurent Kneip and Paul Furgale. OpenGV: A unified and generalized approach to real-time calibrated geometric vision. In IEEE International Conference on Robotics and Automation, pages 1-8, 2014.
+[18] Laurent Kneip and Hongdong Li. Efficient computation of relative pose for multi-camera systems. In IEEE Conference on Computer Vision and Pattern Recognition, pages 446-453, 2014.
+[19] Laurent Kneip and Simon Lynen. Direct optimization of frame-to-frame rotation. In IEEE International Conference on Computer Vision, pages 2352-2359, 2013.
+[20] Laurent Kneip, Roland Siegwart, and Marc Pollefeys. Finding the exact rotation between two images independently of the translation. In European Conference on Computer Vision, pages 696-709. Springer, 2012.
+[21] Laurent Kneip, Chris Sweeney, and Richard Hartley. The generalized relative pose and scale problem: View-graph fusion via 2D-2D registration. In IEEE Winter Conference on Applications of Computer Vision, pages 1–9, 2016.
+[22] Gim Hee Lee, Friedrich Faundorfer, and Marc Pollefeys. Motion estimation for self-driving cars with a generalized camera. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2746-2753, 2013.
+[23] Gim Hee Lee, Marc Pollefeys, and Friedrich Fraundorfer. Relative pose estimation for a multi-camera system with known vertical direction. In IEEE Conference on Computer Vision and Pattern Recognition, pages 540-547, 2014.
+[24] Hongdong Li, Richard Hartley, and Jae-hak Kim. A linear approach to motion estimation using generalized camera models. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1-8, 2008.
+[25] John Lim, Nick Barnes, and Hongdong Li. Estimating relative camera motion from the antipodal-epipolar constraint. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(10):1907-1914, 2010.
+[26] Liu Liu, Hongdong Li, Yuchao Dai, and Quan Pan. Robust and efficient relative pose with a multi-camera system for autonomous driving in highly dynamic environments. IEEE Transactions on Intelligent Transportation Systems, 19(8):2432-2444, 2018.
+[27] Zhi-Quan Luo, Wing-Kin Ma, Anthony Man-Cho So, Yinyu Ye, and Shuzhong Zhang. Semidefinite relaxation of quadratic optimization problems. IEEE Signal Processing Magazine, 27(3):20-34, 2010.
+[28] Martin Mevissen and Masakazu Kojima. SDP relaxations for quadratic optimization problems derived from polynomial optimization problems. Asia-Pacific Journal of Operational Research, 27(1):15-38, 2010.
+[29] Tsuyoshi Migita and Takeshi Shakunaga. Evaluation of epipole estimation methods with/without rank-2 constraint across algebraic/geometric error functions. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1-7, 2007.
+
+[30] David Nister. An efficient solution to the five-point relative pose problem. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(6):756-770, 2004.
+[31] Jaehyun Park and Stephen Boyd. General heuristics for nonconvex quadratically constrained quadratic programming. arXiv preprint arXiv:1703.07870, 2017.
+[32] Robert Pless. Using many cameras as one. In IEEE Conference on Computer Vision and Pattern Recognition, 2003.
+[33] Raman Sanyal, Frank Sottile, and Bernd Sturmfels. Orbitopes. Mathematika, 57(2):275-314, 2011.
+[34] Henrik Stewénius, Magnus Oskarsson, Kalle Aström, and David Nister. Solutions to minimal generalized relative pose problems. In Workshop on Omnidirectional Vision in conjunction with ICCV, 2005.
+[35] Chris Sweeney, Laurent Kneip, Tobias Höllerer, and Matthew Turk. Computing similarity transformations from only image correspondences. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3305-3313, 2015.
+[36] Lieven Vandenberghe and Stephen Boyd. Semidefinite programming. SIAM Review, 38(1):49-95, 1996.
+[37] Makoto Yamashita, Katsuki Fujisawa, Mituhiro Fukuda, Kazuhiro Kobayashi, Kazuhide Nakata, and Maho Nakata. Latest developments in the SDPA family for solving large-scale SDPs. In Handbook on semidefinite, conic and polynomial optimization, pages 687-713. Springer, 2012.
+[38] Heng Yang and Luca Carlone. A quaternion-based certifiably optimal solution to the Wahba problem with outliers. In International Conference on Computer Vision, pages 1665-1674, 2019.
+[39] Yinyu Ye. Interior Point Algorithms: Theory and Analysis. Wiley & Sons, 1997.
+[40] Ji Zhao. An efficient solution to non-minimal case essential matrix estimation. arXiv:1903.09067, 2019.
\ No newline at end of file
diff --git a/acertifiablygloballyoptimalsolutiontogeneralizedessentialmatrixestimation/images.zip b/acertifiablygloballyoptimalsolutiontogeneralizedessentialmatrixestimation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b2025fb05f2d2542dd1a82ba7d4980de5c4bd5c7
--- /dev/null
+++ b/acertifiablygloballyoptimalsolutiontogeneralizedessentialmatrixestimation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f9aaec6f8ca6e0260c16bbddfa03cf90db747a787fd83ec65f6a4adeca0c2dfa
+size 473226
diff --git a/acertifiablygloballyoptimalsolutiontogeneralizedessentialmatrixestimation/layout.json b/acertifiablygloballyoptimalsolutiontogeneralizedessentialmatrixestimation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..bcd72968151952df165ff883dada5e14f807d502
--- /dev/null
+++ b/acertifiablygloballyoptimalsolutiontogeneralizedessentialmatrixestimation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6a8d2b7b2ffa54340a12064e933dbbef2941f930549213da96ec83fc9ef553c8
+size 550456
diff --git a/acharacteristicfunctionapproachtodeepimplicitgenerativemodeling/d517137a-8c1e-4063-a077-1dc2c3d9b8e6_content_list.json b/acharacteristicfunctionapproachtodeepimplicitgenerativemodeling/d517137a-8c1e-4063-a077-1dc2c3d9b8e6_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..d3ae82deb2010a517fd2d44f2449b79c792ecde7
--- /dev/null
+++ b/acharacteristicfunctionapproachtodeepimplicitgenerativemodeling/d517137a-8c1e-4063-a077-1dc2c3d9b8e6_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b31af46f5ba92afd619a70b2e17e542c3ea3243e058b91ee95dc3a36b84ae463
+size 69334
diff --git a/acharacteristicfunctionapproachtodeepimplicitgenerativemodeling/d517137a-8c1e-4063-a077-1dc2c3d9b8e6_model.json b/acharacteristicfunctionapproachtodeepimplicitgenerativemodeling/d517137a-8c1e-4063-a077-1dc2c3d9b8e6_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..2fd1d1f8279cbd4041b6ae1c9cc1fb869c079f88
--- /dev/null
+++ b/acharacteristicfunctionapproachtodeepimplicitgenerativemodeling/d517137a-8c1e-4063-a077-1dc2c3d9b8e6_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b565eb12a964acf7a298c22d87a8a536a9c4e5d5e7c7037691736e4e4428c300
+size 83421
diff --git a/acharacteristicfunctionapproachtodeepimplicitgenerativemodeling/d517137a-8c1e-4063-a077-1dc2c3d9b8e6_origin.pdf b/acharacteristicfunctionapproachtodeepimplicitgenerativemodeling/d517137a-8c1e-4063-a077-1dc2c3d9b8e6_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..fe2b2278072c6bad75a8ff018d3cbd231f6600d3
--- /dev/null
+++ b/acharacteristicfunctionapproachtodeepimplicitgenerativemodeling/d517137a-8c1e-4063-a077-1dc2c3d9b8e6_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6691dc3a1f05bfe63f4e81cf849a9ccb36c6f7be11d64f7692e74b18fe8614ac
+size 422991
diff --git a/acharacteristicfunctionapproachtodeepimplicitgenerativemodeling/full.md b/acharacteristicfunctionapproachtodeepimplicitgenerativemodeling/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..61c71258451a3a95ec3ec0f4bccf27b62fcc1fdc
--- /dev/null
+++ b/acharacteristicfunctionapproachtodeepimplicitgenerativemodeling/full.md
@@ -0,0 +1,271 @@
+# A Characteristic Function Approach to Deep Implicit Generative Modeling
+
+Abdul Fatir Ansari†, Jonathan Scarlett†‡, and Harold Soh†
+†Department of Computer Science
+‡Department of Mathematics
+National University of Singapore
+{afatir, scarlett, harold}@comp.nus.edu.sg
+
+# Abstract
+
+Implicit Generative Models (IGMs) such as GANs have emerged as effective data-driven models for generating samples, particularly images. In this paper, we formulate the problem of learning an IGM as minimizing the expected distance between characteristic functions. Specifically, we minimize the distance between characteristic functions of the real and generated data distributions under a suitably-chosen weighting distribution. This distance metric, which we term as the characteristic function distance (CFD), can be (approximately) computed with linear time-complexity in the number of samples, in contrast with the quadratic-time Maximum Mean Discrepancy (MMD). By replacing the discrepancy measure in the critic of a GAN with the CFD, we obtain a model that is simple to implement and stable to train. The proposed metric enjoys desirable theoretical properties including continuity and differentiability with respect to generator parameters, and continuity in the weak topology. We further propose a variation of the CFD in which the weighting distribution parameters are also optimized during training; this obviates the need for manual tuning, and leads to an improvement in test power relative to CFD. We demonstrate experimentally that our proposed method outperforms WGAN and MMD-GAN variants on a variety of unsupervised image generation benchmarks.
+
+# 1. Introduction
+
+Implicit Generative Models (IGMs), such as Generative Adversarial Networks (GANs) [12], seek to learn a model $\mathbb{Q}_{\theta}$ of an underlying data distribution $\mathbb{P}$ using samples from $\mathbb{P}$ . Unlike prescribed probabilistic models, IGMs do not require a likelihood function, and thus are appealing when the data likelihood is unknown or intractable. Empirically, GANs have excelled at numerous tasks, from unsupervised image generation [18] to policy learning [17].
+
+The original GAN suffers from optimization instability and mode collapse, and often requires various ad-hoc tricks to stabilize training [31]. Subsequent research has revealed that the generator-discriminator setup in the GAN minimizes the Jensen-Shannon divergence between the real and generated data distributions; this divergence possesses discontinuities that result in uninformative gradients as $\mathbb{Q}_{\theta}$ approaches $\mathbb{P}$, which hampers training. Various works have since established desirable properties for a divergence that can ease GAN training, and proposed alternative training schemes [2, 34, 3], primarily using distances belonging to the Integral Probability Metric (IPM) family [29]. One popular IPM is the kernel-based metric Maximum Mean Discrepancy (MMD), and a significant portion of recent work has focused on deriving better MMD-GAN variants [21, 5, 1, 22].
+
+In this paper, we undertake a different, more elementary approach, and formulate the problem of learning an IGM as minimizing the expected distance between characteristic functions of real and generated data distributions. Characteristic functions are widespread in probability theory and have been used for two-sample testing [15, 11, 8], yet surprisingly, have not yet been investigated for GAN training. We find that this approach leads to a simple and computationally-efficient loss: the characteristic function distance (CFD). Computing CFD requires linear time in the number of samples (unlike the quadratic-time MMD), and our experimental results indicate that CFD minimization results in effective training.
+
+This work provides both theoretical and empirical support for using CFD to train IGMs. We first establish that the CFD is continuous and differentiable almost everywhere with respect to the parameters of the generator, and that it satisfies continuity in the weak topology - key properties that make it a suitable GAN metric [3, 21]. We provide novel direct proofs that supplement the existing theory on GAN training metrics. Algorithmically, our key idea is simple: train GANs using empirical estimates of the CFD under optimized weighting distributions. We report on systematic experiments using synthetic distributions and four benchmark image datasets (MNIST, CIFAR10, STL10, CelebA).
+
+Our experiments demonstrate that the CFD-based approach outperforms WGAN and MMD-GAN variants on quantitative evaluation metrics. From a practical perspective, we find the CFD-based GANs are simple to implement and stable to train. In summary, the key contributions of this work are:
+
+- a novel approach to train implicit generative models using a loss derived from characteristic functions;
+- theoretical results showing that the proposed loss metric is continuous and differentiable in the parameters of the generator, and satisfies continuity in the weak topology;
+- experimental results showing that our approach leads to effective generative models favorable against state-of-the-art WGAN and MMD-GAN variants on a variety of synthetic and real-world datasets.
+
+# 2. Probability Distances and GANs
+
+We begin by providing a brief review of the Generative Adversarial Network (GAN) framework and recent distance-based methods for training GANs. A GAN is a generative model that implicitly seeks to learn the data distribution $\mathbb{P}_{\mathcal{X}}$ given samples $\{\mathbf{x}_i\}_{i = 1}^{n}$ from $\mathbb{P}_{\mathcal{X}}$. The GAN consists of a generator network $g_{\theta}$ and a critic network $f_{\phi}$ (also called the discriminator). The generator $g_{\theta}: \mathcal{Z} \to \mathcal{X}$ transforms a latent vector $\mathbf{z} \in \mathcal{Z}$ sampled from a simple distribution (e.g., Gaussian) to a vector $\hat{\mathbf{x}}$ in the data space. The original GAN [12] was defined via an adversarial two-player game between the critic and the generator; the critic attempts to distinguish the true data samples from ones obtained from the generator, and the generator attempts to make its samples indistinguishable from the true data.
+
+In more recent work, this two-player game is cast as minimizing a divergence between the real data distribution and the generated distribution. The critic $f_{\phi}$ evaluates some probability divergence between the true and generated samples, and is optimized to maximize this divergence. In the original GAN, the associated (implicit) distance measure is the Jensen-Shannon divergence, but alternative divergences have since been introduced, e.g., the 1-Wasserstein distance [3, 14], Cramer distance [4], maximum mean discrepancy (MMD) [21, 5, 1], and Sobolev IPM [28]. Many distances proposed in the literature can be reduced to the Integral Probability Metric (IPM) framework with different restrictions on the function class.
+
+# 3. Characteristic Function Distance
+
+In this work, we propose to train GANs using a distance metric based on characteristic functions (CFs). Letting $\mathbb{P}$ be the probability measure associated with a random variable $X$ taking values in $\mathbb{R}^d$, the characteristic function $\varphi_{\mathbb{P}}: \mathbb{R}^d \to \mathbb{C}$ of $X$ is given by
+
+$$
+\varphi_ {\mathbb {P}} (\mathbf {t}) = \mathbb {E} _ {\mathbf {x} \sim \mathbb {P}} [ e ^ {i \langle \mathbf {t}, \mathbf {x} \rangle} ] = \int_ {\mathbb {R} ^ {d}} e ^ {i \langle \mathbf {t}, \mathbf {x} \rangle} d \mathbb {P}, \tag {1}
+$$
+
+where $\mathbf{t} \in \mathbb{R}^d$ is the input argument, and $i = \sqrt{-1}$ . Characteristic functions are widespread in probability theory, and are often used as an alternative to probability density functions. The characteristic function of a random variable completely defines it, i.e., for two distributions $\mathbb{P}$ and $\mathbb{Q}$ , $\mathbb{P} = \mathbb{Q}$ if and only if $\varphi_{\mathbb{P}} = \varphi_{\mathbb{Q}}$ . Unlike the density function, the characteristic function always exists, and is uniformly continuous and bounded: $|\varphi_{\mathbb{P}}(t)| \leq 1$ .
+
+The squared Characteristic Function Distance (CFD) [8, 16] between two distributions $\mathbb{P}$ and $\mathbb{Q}$ is given by the weighted integrated squared error between their characteristic functions
+
+$$
+\operatorname {C F D} _ {\omega} ^ {2} (\mathbb {P}, \mathbb {Q}) = \int_ {\mathbb {R} ^ {d}} | \varphi_ {\mathbb {P}} (\mathbf {t}) - \varphi_ {\mathbb {Q}} (\mathbf {t}) | ^ {2} \omega (\mathbf {t}; \eta) d \mathbf {t}, \tag {2}
+$$
+
+where $\omega (\mathbf{t};\eta)$ is a weighting function, which we henceforth assume to be parametrized by $\eta$ and chosen such that the integral in Eq. (2) converges. When $\omega (\mathbf{t};\eta)$ is the probability density function of a distribution on $\mathbb{R}^d$ , the integral in Eq. (2) can be written as an expectation:
+
+$$
+\operatorname {C F D} _ {\omega} ^ {2} (\mathbb {P}, \mathbb {Q}) = \mathbb {E} _ {\mathbf {t} \sim \omega (\mathbf {t}; \eta)} \left[ | \varphi_ {\mathbb {P}} (\mathbf {t}) - \varphi_ {\mathbb {Q}} (\mathbf {t}) | ^ {2} \right]. \tag {3}
+$$
+
+By analogy to Fourier analysis in signal processing, Eq. (3) can be interpreted as the expected discrepancy between the Fourier transforms of two signals at frequencies sampled from $\omega (\mathbf{t};\eta)$ . If $\mathrm{supp}(\omega) = \mathbb{R}^d$ , it can be shown using the uniqueness theorem of characteristic functions that $\mathrm{CFD}_{\omega}(\mathbb{P},\mathbb{Q}) = 0\iff \mathbb{P} = \mathbb{Q}$ [35].
+
+In practice, the CFD can be approximated using empirical characteristic functions and finite samples from the weighting distribution $\omega (\mathbf{t};\eta)$ . To elaborate, the characteristic function of a degenerate distribution $\delta_{\mathbf{a}}$ for $\mathbf{a}\in \mathbb{R}^d$ is given by $e^{i\langle \mathbf{t},\mathbf{a}\rangle}$ where $\mathbf{t}\in \mathbb{R}^d$ . Given observations $\mathcal{X}\coloneqq \{\mathbf{x}_1,\ldots ,\mathbf{x}_n\}$ from a probability distribution $\mathbb{P}$ , the empirical distribution is a mixture of degenerate distributions with equal weights, and the corresponding empirical characteristic function $\hat{\varphi}_{\mathbb{P}}$ is a weighted sum of characteristic functions of degenerate distributions:
+
+$$
+\hat {\varphi} _ {\mathbb {P}} (\mathbf {t}) = \frac {1}{n} \sum_ {j = 1} ^ {n} e ^ {i \langle \mathbf {t}, \mathbf {x} _ {j} \rangle}. \tag {4}
+$$
+
+Let $\mathcal{X} := \{\mathbf{x}_1, \ldots, \mathbf{x}_n\}$ and $\mathcal{Y} := \{\mathbf{y}_1, \ldots, \mathbf{y}_m\}$ with $\mathbf{x}_i, \mathbf{y}_i \in \mathbb{R}^d$ be samples from the distributions $\mathbb{P}$ and $\mathbb{Q}$ respectively, and let $\mathbf{t}_1, \ldots, \mathbf{t}_k$ be samples from $\omega(\mathbf{t}; \eta)$ . We define the empirical characteristic function distance (ECFD) between $\mathbb{P}$ and $\mathbb{Q}$ as
+
+$$
+\operatorname {E C F D} _ {\omega} ^ {2} (\mathbb {P}, \mathbb {Q}) = \frac {1}{k} \sum_ {i = 1} ^ {k} \left| \hat {\varphi} _ {\mathbb {P}} \left(\mathbf {t} _ {i}\right) - \hat {\varphi} _ {\mathbb {Q}} \left(\mathbf {t} _ {i}\right) \right| ^ {2}, \tag {5}
+$$
+
+
+Figure 1: (left) Variation of test power with the number of dimensions for ECFD-based tests; (right) Change in the scale of the weighting distribution upon optimization.
+
+
+
+where $\hat{\varphi}_{\mathbb{P}}$ and $\hat{\varphi}_{\mathbb{Q}}$ are the empirical CFs, computed using $\mathcal{X}$ and $\mathcal{Y}$ respectively.
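+
+As a concrete illustration, the ECFD in Eq. (5) can be computed in a few lines. The following NumPy sketch is purely illustrative (it is not the released implementation), with arbitrary sample sizes and a Gaussian weighting distribution.
+
+```python
+import numpy as np
+
+def empirical_cf(X, T):
+    """Empirical characteristic function of samples X (n x d) at frequencies T (k x d); Eq. (4)."""
+    return np.exp(1j * X @ T.T).mean(axis=0)          # shape (k,)
+
+def ecfd(X, Y, T):
+    """Empirical characteristic function distance of Eq. (5)."""
+    diff = empirical_cf(X, T) - empirical_cf(Y, T)
+    return np.mean(np.abs(diff) ** 2)
+
+# Illustrative usage with a Gaussian weighting distribution N(0, sigma^2 I).
+rng = np.random.default_rng(0)
+n, d, k, sigma = 1000, 8, 16, 1.0
+X = rng.normal(0.0, 1.0, size=(n, d))                 # samples from P
+Y = rng.normal(0.5, 1.0, size=(n, d))                 # samples from Q (shifted mean)
+T = sigma * rng.normal(size=(k, d))                   # frequencies t_1, ..., t_k ~ omega
+print(ecfd(X, Y, T))                                   # larger when P and Q differ
+```
+
+The cost is $O(nk)$, i.e., linear in the number of samples for a fixed number of frequencies $k$.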
+
+A quantity related to CFD (Eq. 2) has been studied in [30] and [16], in which the discrepancy between the analytical and empirical characteristic functions of stable distributions is minimized for parameter estimation. The CFD is well-suited to this application because stable distributions do not admit density functions, making maximum likelihood estimation difficult. Parameter fitting has also been explored for other models such as mixture-of-Gaussians, stable ARMA process, and affine jump diffusion models [36].
+
+More recently, [8] proposed fast $(O(n)$ in the number of samples $n$ ) two-sample tests based on ECFD, as well as a smoothed version of ECFD in which the characteristic function is convolved with an analytic kernel. The authors empirically show that ECFD and its smoothed variant have a better test-power/run-time trade-off compared to quadratic time tests, and better test power than the sub-quadratic time variants of MMD.
+
+# 3.1. Optimized ECFD for Two-Sample Testing
+
+The choice of $\omega (\mathbf{t};\eta)$ is important for the success of ECFD in distinguishing two different distributions; choosing an appropriate distribution and/or set of parameters $\eta$ allows better coverage of the frequencies at which the differences in $\mathbb{P}$ and $\mathbb{Q}$ lie. For instance, if the differences are concentrated at the frequencies far away from the origin and $\omega (\mathbf{t};\eta)$ is Gaussian, the test power can be improved by suitably enlarging the variance of each coordinate of $\omega (\mathbf{t};\eta)$ .
+
+To increase the power of ECFD, we propose to optimize the parameters $\eta$ (e.g., the variance associated with a normal distribution) of the weighting distribution $\omega(t; \eta)$ to maximize the power of the test. However, care should be taken when specifying how rich the class of functions $\omega(\cdot; \eta)$ is: the choice of which parameters to optimize and the associated constraints matter. Excessive optimization may cause the test to fixate on differences that are merely due to fluctuations in the sampling. As an extreme example, we found that optimizing $\mathbf{t}$ 's directly (instead of optimizing the weighting distribution) severely degrades the test's ability to correctly accept the null hypothesis $\mathbb{P} = \mathbb{Q}$ .
+
+To validate our approach, we conducted a basic experiment using high-dimensional Gaussians, similar to [8]. Specifically, we used two multivariate Gaussians $\mathbb{P}$ and $\mathbb{Q}$ that have the same mean in all dimensions except one. As the dimensionality increases, it becomes increasingly difficult to distinguish between samples from the two distributions. In our tests, the weighting distribution $\omega (\mathbf{t};\eta)$ was chosen to be a Gaussian distribution $\mathcal{N}(\mathbf{0},\mathrm{diag}(\pmb{\sigma}^2))$. 10000 samples each were taken from $\mathbb{P}$ and $\mathbb{Q}$, and the number of frequencies $(k)$ was set to 3. We optimized the parameter vector $\eta = \{\pmb {\sigma}\}$ to maximize the ECFD using the Adam optimizer for 100 iterations with a batch-size of 1000.
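+
+A minimal PyTorch sketch of this optimization step is given below; it assumes a Gaussian weighting distribution reparameterized as $\mathbf{t} = \boldsymbol{\sigma} \odot \boldsymbol{\epsilon}$ with $\boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, I)$, and the helper and variable names are illustrative rather than taken from the released code.
+
+```python
+import torch
+
+def ecfd_torch(X, Y, T):
+    # |phi_P(t) - phi_Q(t)|^2 averaged over the k sampled frequencies (Eq. 5),
+    # with the complex ECFs split into real (cos) and imaginary (sin) parts.
+    px, py = X @ T.t(), Y @ T.t()
+    d_re = torch.cos(px).mean(0) - torch.cos(py).mean(0)
+    d_im = torch.sin(px).mean(0) - torch.sin(py).mean(0)
+    return (d_re ** 2 + d_im ** 2).mean()
+
+def optimize_scale(X, Y, d, k=3, iters=100, batch=1000, lr=0.05):
+    log_sigma = torch.zeros(d, requires_grad=True)     # sigma initialized to 1
+    opt = torch.optim.Adam([log_sigma], lr=lr)
+    for _ in range(iters):
+        xb = X[torch.randint(len(X), (batch,))]
+        yb = Y[torch.randint(len(Y), (batch,))]
+        T = torch.exp(log_sigma) * torch.randn(k, d)   # t ~ N(0, diag(sigma^2))
+        loss = -ecfd_torch(xb, yb, T)                  # ascend the ECFD
+        opt.zero_grad(); loss.backward(); opt.step()
+    return torch.exp(log_sigma).detach()
+```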
+
+Fig. 1a shows the variation of the test power (i.e., the fraction of times the null hypothesis $\mathbb{P} = \mathbb{Q}$ is rejected) with the number of dimensions. OECFD refers to the optimized ECFD, and the "Smooth" suffix indicates the smoothed ECFD variant proposed by [8]. We see that optimization of $\eta$ increases the power of ECFD and ECFD-Smooth, particularly at the higher dimensionalities. There do not appear to be significant differences between the optimized smoothed and non-smoothed ECFD variants. Moreover, the optimization improved the ability of the test to correctly distinguish the two different distributions, but did not hamper its ability to correctly accept the null hypothesis when the distributions are the same (see Appendix C).
+
+To investigate how $\sigma$ is adapted, we visualize two dimensions $\{i,j\}$ from the dataset where $\mu_i^{\mathbb{P}} = \mu_i^{\mathbb{Q}}$ and $\mu_j^{\mathbb{P}} \neq \mu_j^{\mathbb{Q}}$ . Fig. 1b shows the absolute difference between the ECFs of $\mathbb{P}$ and $\mathbb{Q}$ , with the corresponding dimensions of the weighting distribution plotted in both dimensions. The solid blue line shows the optimized distribution (for OECFD) while the dashed orange line shows the initial distribution (i.e., $\sigma = 1$ for ECFD and ECFD-Smooth). In the dimension where the distributions are the same, $\sigma$ has small deviation from the initial value. However, in the dimension where the distributions are different, the increase in variance is more pronounced to compensate for the spread of difference between the ECFs away from the origin.
+
+# 4. Implicit Generative Modeling using CFD
+
+In this section, we turn our attention to applying the (optimized) CFD for learning IGMs, specifically GANs. As in the standard GAN, our model is comprised of a generator $g_{\theta}:\mathcal{Z}\to \mathcal{X}$ and a critic $f_{\phi}:\mathcal{X}\rightarrow \mathbb{R}^{m}$ , with parameter vectors $\theta$ and $\phi$ , and data/latent spaces $\mathcal{X}\subseteq \mathbb{R}^d$ and $\mathcal{Z}\subseteq \mathbb{R}^p$ . Below, we write $\Theta, \Phi, \Pi$ for the spaces in which the parameters $\theta, \phi, \eta$ lie.
+
+The generator minimizes the empirical CFD between the real and generated data. Instead of minimizing the distance between characteristic functions of raw high-dimensional data, we use a critic neural network $f_{\phi}$ that is trained to maximize the CFD between real and generated data distributions in a learned lower-dimensional space. This results in the following minimax objective for the IGM:
+
+$$
+\inf _ {\theta \in \Theta} \sup _ {\psi \in \Psi} \operatorname {C F D} _ {\omega} ^ {2} \left(\mathbb {P} _ {f _ {\phi} (\mathcal {X})}, \mathbb {P} _ {f _ {\phi} (g _ {\theta} (\mathcal {Z}))}\right), \tag {6}
+$$
+
+where $\psi = \{\phi, \eta\}$ (with corresponding parameter space $\Psi$ ), and $\eta$ is the parameter vector of the weighting distribution $\omega$ . The optimization over $\eta$ is omitted if we choose to not optimize the weighting distribution. In our experiments, we set $\eta = \{\sigma\}$ , with $\sigma$ indicating the scale of each dimension of $\omega$ . Since evaluating the CFD requires knowledge of the data distribution, in practice, we optimize the empirical estimate $\mathrm{ECFD}_{\omega}^{2}$ instead of $\mathrm{CFD}_{\omega}^{2}$ . We henceforth refer to this model as the Characteristic Function Generative Adversarial Network (CF-GAN).
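+
+A schematic training step for this objective is sketched below, assuming a generator, a critic with $m$-dimensional output, and the `ecfd_torch` helper from the sketch in Section 3.1; the latent dimension, number of critic updates, and clipping range are illustrative choices, not the exact settings of our experiments.
+
+```python
+import torch
+
+def cfgan_step(generator, critic, real, log_sigma, opt_g, opt_c,
+               z_dim=128, k=8, n_critic=5, clip=0.01):
+    m = critic(real).shape[1]                          # dimension of the critic's output space
+    for _ in range(n_critic):
+        z = torch.randn(real.size(0), z_dim)
+        fake = generator(z).detach()
+        T = torch.exp(log_sigma) * torch.randn(k, m)   # frequencies in the learned feature space
+        # Critic (and, if optimized, the weighting scale registered in opt_c) ascends the ECFD
+        loss_c = -ecfd_torch(critic(real), critic(fake), T)
+        opt_c.zero_grad(); loss_c.backward(); opt_c.step()
+        for p in critic.parameters():                  # weight clipping for Lipschitzness
+            p.data.clamp_(-clip, clip)
+    z = torch.randn(real.size(0), z_dim)
+    T = torch.exp(log_sigma) * torch.randn(k, m)
+    loss_g = ecfd_torch(critic(real), critic(generator(z)), T)   # generator descends the same distance
+    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
+```
+
+Here `log_sigma` is a vector of size $m$ parameterizing the scale of the weighting distribution; it is kept fixed for CF-GAN and added to the critic optimizer for the optimized variant.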
+
+# 4.1. CFD Properties: Continuity, Differentiability, and Weak Topology
+
+Similar to recently proposed Wasserstein [3] and MMD [21] GANs, the CFD exhibits desirable mathematical properties. Specifically, CFD is continuous and differentiable almost everywhere in the parameters of the generator (Thm. 1). Moreover, as it is continuous in the weak topology (Thm. 2), it can provide a signal to the generator $g_{\theta}$ that is more informative for training than other "distances" that
+
+lack this property (e.g., Jensen-Shannon divergence). In the following, we provide proofs for the above claims under assumptions similar to [3].
+
+The following theorem formally states the result of continuity and differentiability in $\theta$ almost everywhere, which is desirable for permitting training via gradient descent.
+
+Theorem 1. Assume that (i) $f_{\phi} \circ g_{\theta}$ is locally Lipschitz with respect to $(\theta, \mathbf{z})$ with constants $L(\theta, \mathbf{z})$ not depending on $\phi$ and satisfying $\mathbb{E}_{\mathbf{z}}[L(\theta, \mathbf{z})] < \infty$ ; (ii) $\sup_{\eta \in \Pi} \mathbb{E}_{\omega(\mathbf{t}; \eta)}[\|\mathbf{t}\|] < \infty$ . Then, the function $\sup_{\psi \in \Psi} \mathrm{CFD}_{\omega}^{2}(\mathbb{P}_{f_{\phi}(\mathcal{X})}, \mathbb{P}_{f_{\phi}(g_{\theta}(\mathcal{Z}))})$ is continuous in $\theta \in \Theta$ everywhere, and differentiable in $\theta \in \Theta$ almost everywhere.
+
+The following theorem establishes continuity in the weak topology, and concerns general convergent distributions as opposed to only those corresponding to $g_{\theta}(\mathbf{z})$ . In this result, we let $\mathbb{P}^{(\phi)}$ be the distribution of $f_{\phi}(\mathbf{x})$ when $\mathbf{x} \sim \mathbb{P}$ , and similarly for $\mathbb{P}_n^{(\phi)}$ .
+
+Theorem 2. Assume that (i) $f_{\phi}$ is $L_{f}$ -Lipschitz for some $L_{f}$ not depending on $\phi$ ; (ii) $\sup_{\eta \in \Pi} \mathbb{E}_{\omega(\mathbf{t})}[\|\mathbf{t}\|] < \infty$ . Then, the function $\sup_{\psi \in \Psi} \mathrm{CFD}_{\omega}^{2}(\mathbb{P}_{n}^{(\phi)}, \mathbb{P}^{(\phi)})$ is continuous in the weak topology, i.e., if $\mathbb{P}_n \xrightarrow{D} \mathbb{P}$ , then $\sup_{\psi \in \Psi} \mathrm{CFD}_{\omega}^{2}(\mathbb{P}_{n}^{(\phi)}, \mathbb{P}^{(\phi)}) \to 0$ , where $\xrightarrow{D}$ implies convergence in distribution.
+
+The proofs are given in the appendix. In brief, we bound the difference between characteristic functions using geometric arguments; we interpret $e^{ia}$ as a vector on a circle, and note that $|e^{ia} - e^{ib}| \leq |a - b|$ . We then upper-bound the difference of function values in terms of $\mathbb{E}_{\omega(\mathbf{t})}[\|\mathbf{t}\|]$ (assumed to be finite) and averages of Lipschitz functions of $\mathbf{x}, \mathbf{x}'$ under the distributions considered. The Lipschitz properties ensure that the function difference vanishes when one distribution converges to the other.
+
+Various generators satisfy the locally Lipschitz assumption, e.g., when $g_{\theta}$ is a feed-forward network with ReLU activations. To ensure that $f_{\phi}$ is Lipschitz, common methods employed in prior work include weight clipping [3] and gradient penalty [14]. In addition, many common distributions satisfy $\mathbb{E}_{\omega(\mathbf{t})}[\| \mathbf{t} \|] < \infty$ , e.g., Gaussian, Student-t, and Laplace with fixed $\sigma$ . When $\sigma$ is unbounded and optimized, we normalize the CFD by $\| \sigma \|$ , which prevents $\sigma$ from going to infinity.
+
+An example demonstrating the necessity of Lipschitz assumptions in continuity results (albeit for a different metric) can be found in Example 1 of [1]. In the appendix, we discuss conditions under which Theorem 2 can be strengthened to an "if and only if" statement.
+
+# 4.2. Relation to MMD and Prior Work
+
+The CFD is related to the maximum mean discrepancy (MMD) [13]. Given samples from two distributions $\mathbb{P}$ and $\mathbb{Q}$, the squared MMD is given by
+
+$$
+\mathrm {M M D} _ {\kappa} ^ {2} (\mathbb {P}, \mathbb {Q}) = \mathbb {E} [ \kappa (x, x ^ {\prime}) ] + \mathbb {E} [ \kappa (y, y ^ {\prime}) ] - 2 \mathbb {E} [ \kappa (x, y) ] \tag {7}
+$$
+
+where $x, x' \sim \mathbb{P}$ and $y, y' \sim \mathbb{Q}$ are independent samples, and $\kappa$ is a kernel. When the weighting distribution of the CFD is equal to the inverse Fourier transform of the kernel in MMD (i.e., $\omega(\mathbf{t}) = \mathcal{F}^{-1}\{\kappa\}$ ), the CFD and squared MMD are equivalent: $\mathrm{CFD}_{\omega}^{2}(\mathbb{P}, \mathbb{Q}) = \mathrm{MMD}_{\kappa}^{2}(\mathbb{P}, \mathbb{Q})$ . Indeed, kernels with $\mathrm{supp}(\mathcal{F}^{-1}(\kappa)) = \mathbb{R}^d$ are called characteristic kernels [35], and when $\mathrm{supp}(\omega) = \mathbb{R}^d$ , $\mathrm{MMD}_{\kappa}(\mathbb{P}, \mathbb{Q}) = 0$ if and only if $\mathbb{P} = \mathbb{Q}$ . Although formally equivalent under the above conditions, we find experimentally that optimizing empirical estimates of MMD and CFD results in different convergence profiles and model performance across a range of datasets. Also, unlike MMD, which takes quadratic time in the number of samples to approximately compute, the CFD takes $O(nk)$ time and is therefore computationally attractive when $k \ll n$ .
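+
+For contrast with the $O(nk)$ cost above, a standard (biased) quadratic-time estimator of Eq. (7) with an RBF kernel looks as follows; the bandwidth value is an arbitrary illustrative choice.
+
+```python
+import numpy as np
+
+def mmd2_biased(X, Y, bandwidth=1.0):
+    """Biased estimator of MMD^2 (Eq. 7); all pairwise kernel terms, hence O((n + m)^2)."""
+    def rbf(A, B):
+        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
+        return np.exp(-sq / (2.0 * bandwidth ** 2))
+    return rbf(X, X).mean() + rbf(Y, Y).mean() - 2.0 * rbf(X, Y).mean()
+```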
+
+Learning a generative model by minimizing the MMD between real and generated samples was proposed independently by [23] and [10]. The Generative Moment Matching Network (GMMN) [23] uses an autoencoder to first transform the data into a latent space, and then trains a generative network to produce latent vectors that match the true latent distribution. The MMD-GAN [21] performs a similar input transformation using a network $f_{\phi}$ that is adversarially trained to maximize the MMD between the true distribution $\mathbb{P}_{\mathcal{X}}$ and the generator distribution $\mathbb{Q}_{\theta}$ ; this results in a GAN-like min-max criterion. More recently, [5] and [1] have proposed different theoretically-motivated regularizers on the gradient of MMD-GAN critic that improve training. In our experiments, we compare against the MMD-GAN both with and without gradient regularization.
+
+Very recent work [22] (IKL-GAN) has evaluated kernels parameterized in Fourier space, which are then used to compute MMD in MMD-GAN. In contrast to IKL-GAN, we derive the CF-GAN via characteristic functions rather than via MMD, and our method obviates the need for kernel evaluation. We also provide novel direct proofs for the theoretical properties of the optimized CFD that are not based on its equivalence to MMD. The IKL-GAN utilizes a neural network to sample random frequencies, whereas we use a simpler fixed distribution with a learned scale, reducing the number of hyperparameters to tune. Our method yields state-of-the-art performance, which suggests that the more complex setup in IKL-GAN may not be required for effective GAN training.
+
+In parallel, significant work has gone into improving GAN training via architectural and optimization enhancements [27, 7, 18]; these research directions are orthogonal to our work and can be incorporated in our proposed model.
+
+# 5. Experiments
+
+In this section, we present empirical results comparing different variants of our proposed model: CF-GAN. We prefix O to the model name when the $\sigma$ parameters were optimized along with the critic and omit it when $\sigma$ was kept fixed. Similarly, we suffix GP to the model name when gradient penalty [14] was used to enforce Lipschitzness of $f_{\phi}$ . In the absence of gradient penalty, we clipped the weights of $f_{\phi}$ in $[-0.01, 0.01]$ . When the parameters $\sigma$ were optimized, we scaled the ECFD by $\|\sigma\|$ to prevent $\sigma$ from going to infinity, thereby ensuring $\mathbb{E}_{\omega(\mathbf{t})}[\|\mathbf{t}\|] < \infty$ .
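+
+The two Lipschitz-enforcement mechanisms used here can be summarized as follows. The snippet is a generic sketch of the gradient penalty of [14] applied to the critic (the penalty weight is an illustrative value), while weight clipping simply clamps the critic parameters after each update, as in the training-loop sketch of Section 4.
+
+```python
+import torch
+
+def gradient_penalty(critic, real, fake, lam=10.0):
+    """Gradient penalty in the style of [14]: push the critic's gradient norm towards 1
+    at points interpolated between real and generated samples."""
+    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
+    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
+    out = critic(x_hat)
+    grads = torch.autograd.grad(outputs=out, inputs=x_hat,
+                                grad_outputs=torch.ones_like(out),
+                                create_graph=True)[0]
+    return lam * ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()
+
+# When sigma is optimized, the ECFD term itself is divided by ||sigma||
+# (e.g. loss = ecfd / sigma.norm()) so that the scale cannot grow without bound.
+```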
+
+We compare our proposed model against two variants of MMD-GAN: (i) MMD-GAN [21], which uses MMD with a mixture of RBF kernels as the distance metric; (ii) MMD-GAN- $\mathrm{GP}_{L2}$ [5], which introduces an additive gradient penalty based on MMD's IPM witness function, an L2 penalty on discriminator activations, and uses a mixture of RQ kernels. We also compare against WGAN [3] and WGAN-GP [14] due to their close relation to MMD-GAN [21, 5]. Our code is available online at https://github.com/crlslab/OCFGAN.
+
+# 5.1. Synthetic Data
+
+We first tested the methods on two synthetic 1D distributions: a simple unimodal distribution $(\mathcal{D}_1)$ and a more complex bimodal distribution $(\mathcal{D}_2)$ . The distributions were constructed by transforming $z \sim \mathcal{N}(0,1)$ using a function $h: \mathbb{R} \to \mathbb{R}$ . For the unimodal dataset, we used the scale-shift function form used by [37], where $h(z) = \mu + \sigma z$ . For the bimodal dataset, we used the function form used by planar flow [32], where $h(z) = \alpha z + \beta \tanh(\gamma \alpha z)$ . We trained the various GAN models to approximate the distribution of the transformed samples. Once trained, we compared the transformation function $\hat{h}$ learned by the GAN against the true function $h$ . We computed the mean absolute error (MAE) $(\mathbb{E}_z[|h(z) - \hat{h}(z)|])$ to evaluate the models. Further details on the experimental setup are in Appendix B.1.
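+
+The two transformation functions and the MAE criterion can be written down directly; the parameter values below are illustrative, not the exact ones used in our experiments (see Appendix B.1).
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+z = rng.normal(size=100_000)
+
+# D1: scale-shift transform h(z) = mu + sigma * z (unimodal)
+mu, sigma = 2.0, 0.5
+d1 = mu + sigma * z
+
+# D2: planar-flow-style transform h(z) = alpha*z + beta*tanh(gamma*alpha*z) (bimodal)
+alpha, beta, gamma = 1.0, 5.0, 0.5
+d2 = alpha * z + beta * np.tanh(gamma * alpha * z)
+
+def mae(h_true, h_learned, z):
+    """Mean absolute error E_z[|h(z) - h_hat(z)|] between the true and learned transforms."""
+    return np.mean(np.abs(h_true(z) - h_learned(z)))
+```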
+
+Figs. 2a and 2b show the variation of the MAE with training iterations. For both datasets, the models with gradient penalty converge to better minima. In $\mathcal{D}_1$ , MMD-GAN-GP and OCF-GAN-GP converge to the same value of MAE, but MMD-GAN-GP converges faster. During our experiments, we observed that the scale of the weighting distribution (which is initialized to 1) falls rapidly before the MAE begins to decrease. For the experiments with the scale fixed at 0.1 (CF-GAN-GP $_{\sigma=0.1}$ ) and 1 (CF-GAN-GP $_{\sigma=1}$ ), both models converge to the same MAE, but CF-GAN-GP $_{\sigma=1}$ takes much longer to converge than CF-GAN-GP $_{\sigma=0.1}$ . This indicates that the optimization of the scale parameter leads to faster convergence. For the more complex dataset $\mathcal{D}_2$ , MMD-GAN-GP takes significantly longer to converge compared to WGAN-GP and OCF-GAN-GP. OCF-GAN-GP converges fastest and to a better minimum, followed by WGAN-GP.
+
+Figure 2: Variation of MAE for synthetic datasets $\mathcal{D}_1$ (left) and $\mathcal{D}_2$ (right) with generator iterations. The plots are averaged over 10 random runs.
+
+# 5.2. Image Generation
+
+A recent large-scale analysis of GANs [26] showed that different models achieve similar best performance when given ample computational budget, and advocates comparisons between distributions under practical settings. As such, we compare scores attained by the models from different initializations under fixed computational budgets. We used four datasets: 1) MNIST [20]: 60K grayscale images of handwritten digits; 2) CIFAR10 [19]: 50K RGB images; 3) CelebA [24]: $\approx$ 200K RGB images of celebrity faces; and 4) STL10 [9]: 100K RGB images. For all datasets, we center-cropped and scaled the images to $32 \times 32$ .
+
+Network and Hyperparameter Details Given our computational budget and experiment setup, we used a DCGAN-like generator $g_{\theta}$ and critic $f_{\phi}$ architecture for all models (similar to [21]). For MMD-GAN, we used a mixture of five RBF kernels (5-RBF) with different scales [21]. MMD-GAN- $\mathrm{GP}_{L2}$ used a mixture of rational quadratic kernels (5-RQ). The kernel parameters and the trade-off parameters for gradient and L2 penalties were set according to [5]. We tested CF-GAN variants with two weighting distributions: Gaussian $(\mathcal{N})$ and Student's-t $(\mathcal{T})$ (with 2 degrees of freedom). For CF-GAN, we tested 3 scale parameters in the set $\{0.2, 0.5, 1\}$ , and we report the best results. The number of frequencies $(k)$ for computing ECFD was set to 8. Please see Appendix B.2 for implementation details.
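+
+For reference, drawing the $k$ frequencies from the two weighting families used here can be done as below; treating the Student's-t case as independent per-dimension draws with 2 degrees of freedom, scaled by $\sigma$, is an assumption of this sketch.
+
+```python
+import numpy as np
+
+def sample_frequencies(k, d, sigma, family="gaussian", rng=np.random.default_rng()):
+    """Draw k frequency vectors t_i in R^d from the weighting distribution omega."""
+    if family == "gaussian":          # N(0, diag(sigma^2))
+        return sigma * rng.normal(size=(k, d))
+    if family == "student_t":         # per-dimension Student's t with 2 degrees of freedom
+        return sigma * rng.standard_t(df=2, size=(k, d))
+    raise ValueError(f"unknown weighting family: {family}")
+```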
+
+Evaluation Metrics We compare the different models using three evaluation metrics: Fréchet Inception Distance (FID) [34], Kernel Inception Distance (KID) [5], and Precision-Recall (PR) for generative models [33]. Details on these metrics and the evaluation procedure can be found in Appendix B.2. In brief, the FID computes the Fréchet distance between two multivariate Gaussians and the KID computes the MMD (with a polynomial kernel of degree 3) between the real and generated data distributions. Both FID and KID give single-value scores, while PR gives a two-dimensional score that disentangles the quality of generated samples from the coverage of the data distribution. PR is defined by the pair $F_{8}$ (recall) and $F_{1/8}$ (precision), which represent the coverage and sample quality, respectively [33].
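+
+As a rough reference for how these two single-value scores are obtained from extracted features (the Inception feature extraction itself is omitted), the sketches below follow the usual formulas; the KID version shown is the simple biased estimator with the common $(x^\top y / d + 1)^3$ kernel normalization, stated here as an assumption rather than the exact evaluation code.
+
+```python
+import numpy as np
+from scipy import linalg
+
+def fid(feats_real, feats_fake):
+    """Frechet distance between Gaussians fitted to two feature sets."""
+    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
+    s1 = np.cov(feats_real, rowvar=False)
+    s2 = np.cov(feats_fake, rowvar=False)
+    covmean = linalg.sqrtm(s1 @ s2).real
+    return float(((mu1 - mu2) ** 2).sum() + np.trace(s1 + s2 - 2.0 * covmean))
+
+def kid_biased(feats_real, feats_fake):
+    """Biased MMD^2 with a degree-3 polynomial kernel k(x, y) = (x.y / d + 1)^3."""
+    d = feats_real.shape[1]
+    k = lambda A, B: (A @ B.T / d + 1.0) ** 3
+    return float(k(feats_real, feats_real).mean() + k(feats_fake, feats_fake).mean()
+                 - 2.0 * k(feats_real, feats_fake).mean())
+```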
+
+Results In the following, we summarize our main findings, and relegate the details to the Appendix. Table 1 shows the FID and KID values achieved by different models for the CIFAR10, STL10, and CelebA datasets. In short, our model outperforms both variants of WGAN and MMD-GAN by a significant margin. OCF-GAN, using just one weighting function, outperforms both MMD-GANs that use a mixture of 5 different kernels.
+
+We observe that the optimization of the scale parameter improves the performance of the models for both weighting distributions, and the introduction of gradient penalty as a means to ensure Lipschitzness of $f_{\phi}$ results in a significant improvement in the score values for all models. This is in line with the results of [14] and [5]. Overall, amongst the CF-GAN variants, OCF-GAN-GP with Gaussian weighting performs the best for all datasets.
+
+The two-dimensional precision-recall scores in Fig. 3 provide further insight into the performance of different models. Across all the datasets, the addition of gradient penalty (OCF-GAN-GP) rather than weight clipping (OCF-GAN) leads to a higher improvement in recall compared to precision. This result supports recent arguments that weight clipping forces the generator to learn simpler functions, while gradient penalty is more flexible [14]. The improvement in recall with the introduction of gradient penalty is more noticeable for CIFAR10 and STL10 datasets compared to CelebA. This result is intuitive; CelebA is a more uniform and simpler dataset compared to CIFAR10/STL10, which contain more diverse classes of images, and thus
+
+
+Figure 3: Precision-Recall scores (higher is better) for CIFAR10 (left), STL10 (center), and CelebA (right) datasets.
+
+
+
+
+
+likely have modes that are more complex and far apart. Results on the MNIST dataset, where all models achieve good score values, are available in Appendix C, which also includes further experiments using the smoothed version of ECFD and the optimized smoothed version (no improvement over the unsmoothed versions on the image datasets).
+
+Qualitative Results In addition to the quantitative metrics presented above, we also performed a qualitative analysis of the generated samples. Fig. 4 shows image samples generated by OCF-GAN-GP for different datasets. We also tested our method with a deep ResNet model on a $128 \times 128$ scaled version of CelebA dataset. Samples generated by this model (Fig. 5) show that OCF-GAN-GP scales to larger images and networks, and is able to generate visually appealing images comparable to state-of-the-art methods using similar sized networks. Additional qualitative comparisons can be found in Appendix C.
+
+Impact of Weighting Distribution The choice of weighting distribution did not lead to drastic changes in model performance. The $\mathcal{T}$ distribution performs best when weight clipping is used, while $\mathcal{N}$ performs best in the case of gradient penalty. This suggests that the proper choice of distribution is dependent on both the dataset and the Lipschitz regularization used, but the overall framework is robust to reasonable choices.
+
+We also conducted preliminary experiments using a uniform $(\mathcal{U})$ distribution weighting scheme. Even though the condition $\mathrm{supp}(\mathcal{U}) = \mathbb{R}^m$ does not hold for the uniform distribution, we found that this does not adversely affect the performance (see Appendix C). The uniform weighting distribution corresponds to the sinc-kernel in MMD, which is known to be a non-characteristic kernel [35]. Our results suggest that such kernels could remain effective when used in MMD-GAN, but we did not verify this experimentally.
+
+
+
+
+
+
+Figure 4: Image samples for the different datasets (top to bottom: CIFAR10, STL10, and MNIST) generated by OCF-GAN-GP (random samples without selection).
+
+Impact of Number of Random Frequencies We conducted an experiment to study the impact of the number of random frequencies $(k)$ that are sampled from the weighting distribution to compute the ECFD. We ran our best performing model (OCF-GAN-GP) with different values of $k$ from the set $\{1,4,8,16,32,64\}$ . The FID and KID scores for this experiment are shown in Table 2. As expected, the score values improve as $k$ increases. However, even for the
+
+Table 1: FID and KID $(\times 10^{3})$ scores (lower is better) for CIFAR10, STL10, and CelebA datasets averaged over 5 random runs (standard deviation in parentheses).
+
+| Model | Kernel/Weight | CIFAR10 FID | CIFAR10 KID | STL10 FID | STL10 KID | CelebA FID | CelebA KID |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| WGAN | - | 44.11 (1.16) | 25 (1) | 38.61 (0.43) | 23 (1) | 17.85 (0.69) | 12 (1) |
+| WGAN-GP | - | 35.91 (0.30) | 19 (1) | 27.85 (0.81) | 15 (1) | 10.03 (0.37) | 6 (1) |
+| MMD-GAN | 5-RBF | 41.28 (0.54) | 23 (1) | 35.76 (0.54) | 21 (1) | 18.48 (1.60) | 12 (1) |
+| MMD-GAN-GP$_{L2}$ | 5-RQ | 38.88 (1.35) | 21 (1) | 31.67 (0.94) | 17 (1) | 13.22 (1.30) | 8 (1) |
+| CF-GAN | $\mathcal{N}(\sigma=0.5)$ | 39.81 (0.93) | 23 (1) | 33.54 (1.11) | 19 (1) | 13.71 (0.50) | 9 (1) |
+| CF-GAN | $\mathcal{T}(\sigma=1)$ | 41.41 (0.64) | 22 (1) | 35.64 (0.44) | 20 (1) | 16.92 (1.29) | 11 (1) |
+| OCF-GAN | $\mathcal{N}$ | 38.47 (1.00) | 20 (1) | 32.51 (0.87) | 19 (1) | 14.91 (0.83) | 9 (1) |
+| OCF-GAN | $\mathcal{T}$ | 37.96 (0.74) | 20 (1) | 31.03 (0.82) | 17 (1) | 13.73 (0.56) | 8 (1) |
+| OCF-GAN-GP | $\mathcal{N}$ | 33.08 (0.26) | 17 (1) | 26.16 (0.64) | 14 (1) | 9.39 (0.25) | 5 (1) |
+| OCF-GAN-GP | $\mathcal{T}$ | 34.33 (0.77) | 18 (1) | 26.86 (0.38) | 15 (1) | 9.61 (0.39) | 6 (1) |
+
+Table 2: FID and KID scores on the MNIST dataset with varying numbers of frequencies used in OCF-GAN-GP.
+
+| # of freqs (k) | FID | KID ($\times 10^{3}$) |
+| --- | --- | --- |
+| 1 | 0.44 (0.03) | 5 (1) |
+| 4 | 0.39 (0.05) | 4 (1) |
+| 8 | 0.36 (0.03) | 4 (1) |
+| 16 | 0.35 (0.02) | 3 (1) |
+| 32 | 0.35 (0.03) | 3 (1) |
+| 64 | 0.36 (0.07) | 4 (1) |
+
+
+Figure 5: Image samples for the $128 \times 128$ CelebA dataset generated by OCF-GAN-GP with a ResNet generator (random samples without selection).
+
+lowest number of frequencies possible ( $k = 1$ ), the performance does not degrade too severely.
+
+# 6. Discussion and Conclusion
+
+In this paper, we proposed a novel weighted distance between characteristic functions for training IGMs, and showed that the proposed metric has attractive theoretical properties. We observed experimentally that the proposed model outperforms MMD-GAN and WGAN variants on four benchmark image datasets. Our results indicate that characteristic functions provide an effective alternative means for training IGMs.
+
+This work opens additional avenues for future research. For example, the empirical CFD used for training may result in high variance gradient estimates (particularly with a small number of sampled frequencies), yet the CFD-trained models attain high performance scores with better convergence in our tests. The reason for this should be more thoroughly explored. Although we used the gradient penalty proposed by WGAN-GP, there is no reason to constrain the gradient to exactly 1. We believe that an exploration of the geometry of the proposed loss could lead to improvement in the gradient regularizer for the proposed method.
+
+Apart from generative modeling, two sample tests such as MMD have been used for problems such as domain adaptation [25] and domain separation [6], among others. The optimized CFD loss function proposed in this work can be used as an alternative loss for these problems.
+
+Acknowledgements This research is supported by the National Research Foundation Singapore under its AI Singapore Programme (Award Number: AISG-RP-2019-011) to H. Soh. J. Scarlett is supported by the Singapore National Research Foundation (NRF) under grant number R-252-000-A74-281.
+
+# References
+
+[1] Michael Arbel, Dougal Sutherland, Mikołaj Binkowski, and Arthur Gretton. On gradient regularizers for MMD GANs. In NeurIPS, 2018. 1, 2, 4, 5
+[2] Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. arXiv:1701.04862, 2017. 1
+[3] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In ICML, 2017. 1, 2, 4, 5
+[4] Marc G. Bellemare, Ivo Danihelka, Will Dabney, Shakir Mohamed, Balaji Lakshminarayanan, Stephan Hoyer, and Rémi Munos. The Cramer distance as a solution to biased Wasserstein gradients. arXiv:1705.10743, 2017. 2
+[5] Mikolaj Binkowski, Dougal J. Sutherland, Michael Arbel, and Arthur Gretton. Demystifying MMD GANs. In ICLR, 2018. 1, 2, 5, 6
+[6] Konstantinos Bousmalis, George Trigeorgis, Nathan Silberman, Dilip Krishnan, and Dumitru Erhan. Domain separation networks. In NIPS, 2016. 8
+[7] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. arXiv:1809.11096, 2018. 5
+[8] Kacper P Chwialkowski, Aaditya Ramdas, Dino Sejdinovic, and Arthur Gretton. Fast two-sample testing with analytic representations of probability measures. In NIPS, 2015. 1, 2, 3
+[9] Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In AISTATS, 2011. 6
+[10] Gintare Karolina Dziugaite, Daniel M. Roy, and Zoubin Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. In UAI, 2015. 5
+[11] TW Epps and Kenneth J Singleton. An omnibus test for the two-sample problem using the empirical characteristic function. Journal of Statistical Computation and Simulation, 26(3-4):177-203, 1986. 1
+[12] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014. 1, 2
+[13] Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Scholkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13(Mar):723-773, 2012. 4
+[14] Ishaan Gulrajani, Faruk Ahmed, Martín Arjovsky, Vincent Dumoulin, and Aaron C. Courville. Improved training of Wasserstein GANs. In NIPS, 2017. 2, 4, 5, 6
+[15] CE Heathcote. A test of goodness of fit for symmetric random variables. Australian Journal of Statistics, 14(2):172-181, 1972. 1
+[16] CR Heathcote. The integrated squared error estimation of parameters. Biometrika, 64(2):255-264, 1977. 2, 3
+[17] Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In NIPS, 2016. 1
+[18] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In CVPR, 2019. 1, 5
+
+[19] Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009. 6
+[20] Yann LeCun, Léon Bottou, and Patrick Haffner. Gradient-based learning applied to document recognition. 2001. 6
+[21] Chun-Liang Li, Wei-Cheng Chang, Yu Cheng, Yiming Yang, and Barnabás Póczos. MMD GAN: Towards deeper understanding of moment matching network. In NIPS, 2017. 1, 2, 4, 5, 6
+[22] Chun-Liang Li, Wei-Cheng Chang, Youssef Mroueh, Yiming Yang, and Barnabás Póczos. Implicit kernel learning. In AISTATS, 2019. 1, 5
+[23] Yujia Li, Kevin Swersky, and Richard S. Zemel. Generative moment matching networks. In ICML, 2015. 5
+[24] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In ICCV, 2015. 6
+[25] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I. Jordan. Learning transferable features with deep adaptation networks. In ICML, 2015. 8
+[26] Mario Lucic, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet. Are GANs created equal? A large-scale study. In NeurIPS, 2018. 6
+[27] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. arXiv:1802.05957, 2018. 5
+[28] Youssef Mroueh, Chun-Liang Li, Tom Sercu, Anant Raj, and Yu Cheng. Sobolev GAN. arXiv:1711.04894, 2017. 2
+[29] Alfred Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29(2):429-443, 1997. 1
+[30] Albert S Paulson, Edward W Holcomb, and Robert A Leitch. The estimation of the parameters of the stable laws. Biometrika, 62(1):163-170, 1975. 3
+[31] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv:1511.06434, 2015. 1
+[32] Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In ICML, 2015. 5
+[33] Mehdi SM Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, and Sylvain Gelly. Assessing generative models via precision and recall. In NeurIPS, pages 5228-5237, 2018. 6
+[34] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In NIPS, 2016. 1, 6
+[35] Bharath K Sriperumbudur, Arthur Gretton, Kenji Fukumizu, Bernhard Schölkopf, and Gert RG Lanckriet. Hilbert space embeddings and metrics on probability measures. Journal of Machine Learning Research, 11(Apr):1517-1561, 2010. 2, 5, 7
+[36] Jun Yu. Empirical characteristic function estimation and its applications. Econometric reviews, 2004. 3
+[37] Manzil Zaheer, Chun-Liang Li, Barnabás Póczos, and Ruslan Salakhutdinov. GAN connoisseur: Can GANs learn simple 1D parametric distributions? 2018. 5
\ No newline at end of file
diff --git a/acharacteristicfunctionapproachtodeepimplicitgenerativemodeling/images.zip b/acharacteristicfunctionapproachtodeepimplicitgenerativemodeling/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..c0937ca016bb7846b24403a25152b21b4107ef7c
--- /dev/null
+++ b/acharacteristicfunctionapproachtodeepimplicitgenerativemodeling/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cfdb0462f86aa69aab9ae855156f9e5f7e5b78848d5bb0997b26d5799a2b1ca2
+size 399519
diff --git a/acharacteristicfunctionapproachtodeepimplicitgenerativemodeling/layout.json b/acharacteristicfunctionapproachtodeepimplicitgenerativemodeling/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..7ac6a8215d7a2817f89366603803d7806792bacf
--- /dev/null
+++ b/acharacteristicfunctionapproachtodeepimplicitgenerativemodeling/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a7648cd4bd567eef5b183f50604c17832fda054b8a2279df2add1308371c2d41
+size 432743
diff --git a/acontextawarelossfunctionforactionspottinginsoccervideos/84292883-010d-4065-a001-31aeb0f91b1f_content_list.json b/acontextawarelossfunctionforactionspottinginsoccervideos/84292883-010d-4065-a001-31aeb0f91b1f_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..7dc56d1dd746afb397aa2007a4bca0868985d7dc
--- /dev/null
+++ b/acontextawarelossfunctionforactionspottinginsoccervideos/84292883-010d-4065-a001-31aeb0f91b1f_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:96f38cdb92ec016d9f535f6034657ea2840e5c879c5e2bab53c94830cd2c641a
+size 83868
diff --git a/acontextawarelossfunctionforactionspottinginsoccervideos/84292883-010d-4065-a001-31aeb0f91b1f_model.json b/acontextawarelossfunctionforactionspottinginsoccervideos/84292883-010d-4065-a001-31aeb0f91b1f_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..f43203fbc67bdeb45b233f736d1db4f7da464721
--- /dev/null
+++ b/acontextawarelossfunctionforactionspottinginsoccervideos/84292883-010d-4065-a001-31aeb0f91b1f_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9327c52d54b66c6de02c33c5c4e644dc6041648e61230a491728584bcda5387e
+size 106126
diff --git a/acontextawarelossfunctionforactionspottinginsoccervideos/84292883-010d-4065-a001-31aeb0f91b1f_origin.pdf b/acontextawarelossfunctionforactionspottinginsoccervideos/84292883-010d-4065-a001-31aeb0f91b1f_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2b3cca1209e346b1bd19ce9bca59b0d7eca0d341
--- /dev/null
+++ b/acontextawarelossfunctionforactionspottinginsoccervideos/84292883-010d-4065-a001-31aeb0f91b1f_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:286c1ea44964cda897bd0dd28d92bed33a466fb37d166f61f85df7d0bd8c4036
+size 705571
diff --git a/acontextawarelossfunctionforactionspottinginsoccervideos/full.md b/acontextawarelossfunctionforactionspottinginsoccervideos/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..5c3acf6084f70fca84b65d075ab00b2b9de60afb
--- /dev/null
+++ b/acontextawarelossfunctionforactionspottinginsoccervideos/full.md
@@ -0,0 +1,318 @@
+# A Context-Aware Loss Function for Action Spotting in Soccer Videos
+
+Anthony Cioppa*
+
+University of Liege
+
+anthony.cioppa@uliege.be
+
+Adrien Deligne*
+
+University of Liege
+
+adrien.deliege@uliege.be
+
+Silvio Giancola*
+
+KAUST
+
+silvio.giancola@kaust.edu.sa
+
+Bernard Ghanem
+
+KAUST
+
+Marc Van Droogenbroeck
+
+University of Liege
+
+Rikke Gade
+
+Aalborg University
+
+Thomas B. Moeslund
+
+Aalborg University
+
+# Abstract
+
+In video understanding, action spotting consists in temporally localizing human-induced events annotated with single timestamps. In this paper, we propose a novel loss function that specifically considers the temporal context naturally present around each action, rather than focusing on the single annotated frame to spot. We benchmark our loss on a large dataset of soccer videos, SoccerNet, and achieve an improvement of $12.8\%$ over the baseline. We show the generalization capability of our loss for generic activity proposals and detection on ActivityNet, by spotting the beginning and the end of each activity. Furthermore, we provide an extended ablation study and display challenging cases for action spotting in soccer videos. Finally, we qualitatively illustrate how our loss induces a precise temporal understanding of actions and show how such semantic knowledge can be used for automatic highlights generation.
+
+# 1. Introduction
+
+Aside from automotive, consumer, and robotics applications, sports is considered one of the most valuable applications in computer vision [54], capping \$91 billion of annual market revenue [31], with \$28.7 billion from the European Soccer market alone [15]. Recent advances helped provide automated tools to understand and analyze broadcast games. For instance, current computer vision methods can localize the field and its lines [17, 24], detect players [12, 63], their motion [18, 40], their pose [7, 67], their team [27], track the ball position [50, 56] and the camera motion [39]. Understanding spatial frame-wise information is useful to enhance the visual experience of sports viewers [47] and to gather player statistics [57], but it misses higher-level game understanding.
+
+
+
+
+Figure 1. Context-aware loss function. We design a novel loss that leverages the temporal context around an action spot (at a temporal shift of 0). We heavily penalize the frames far-distant from the action and decrease the penalty for those gradually closer. We do not penalize the frames just before the action to avoid providing misleading information as its occurrence is uncertain, but we heavily penalize those just after, as the action has occurred.
+
+For broadcast producers, it is of paramount importance to have a deeper understanding of the game actions. For instance, live broadcast production follows specific patterns when particular actions occur; sports live reporters comment on the game actions; and highlights producers generate short summaries by ranking the most representative actions within the game. In order to automate these production tasks, computer vision methods should understand the salient actions of a game and respond accordingly. While spatial information is widely studied and quite mature, localizing actions in time remains a challenging task for current video understanding algorithms.
+
+In this paper, we target the action spotting challenge, with a primary application on soccer videos. The task of action spotting has been defined as the temporal localization of human-induced events annotated with a single timestamp [21]. Inherent difficulties arise from such annotations: their sparsity, the absence of start and end times of the actions, and their temporal discontinuities, i.e. the unsettling fact that adjacent frames may be annotated differently albeit being possibly highly similar. To overcome these issues, we propose a novel loss that leverages the temporal context information naturally present around the actions, as depicted in Figure 1. To highlight its generality and versatility, we showcase how our loss can be used for the task of activity localization in ActivityNet [23], by spotting the beginning and end of each activity. Using the network BMN introduced in [34] and simply substituting their loss with our enhanced context-aware spotting loss function, we show an improvement of $0.15\%$ in activity proposal leading to a direct $0.38\%$ improvement in activity detection on ActivityNet [23]. On the large-scale action spotting soccer-centric dataset, SoccerNet [21], our network substantially increases the Average-mAP spotting metric from $49.7\%$ to $62.5\%$.
+
+Contributions. We summarize our contributions as follows. (i) We present a new loss function for temporal action segmentation further used for the task of action spotting, which is parameterized by the time-shifts of the frames from the ground-truth actions. (ii) We improve the performance of the state-of-the-art method on ActivityNet [23] by including our new contextual loss to detect activity boundaries, and improve the action spotting baseline of SoccerNet [21] by $12.8\%$ . (iii) We provide detailed insights into our action spotting performance, as well as a qualitative application for automatic highlights generation.
+
+# 2. Related Work
+
+Broadcast soccer video understanding. Computer vision tools are widely used in sports broadcast videos to provide soccer analytics [42, 57]. Current challenges lie in understanding high-level game information to identify salient game actions [13, 60], perform automatic game summarization [49, 51, 61] and report commentaries of live actions [65]. Early work uses camera shots to segment broadcasts [16], or analyze production patterns to identify salient moments of the game [46]. Further developments have used low-level semantic information in Bayesian frameworks [25, 55] to automatically detect salient game actions.
+
+Machine learning-based methods have been proposed to aggregate temporally hand-crafted features [5] or deep frame features [28] into recurrent networks [44]. SoccerNet [21] provides an in-depth analysis of deep frame feature extraction and aggregation for action spotting in soccer game broadcasts. Multi-stream networks merge additional optical flow [10, 59] or excitement [6, 51] information to improve game highlights identification. Furthermore, attention models are fed with per-frame semantic information such as pixel information [13] or player localization [32] to extract targeted frame features. In our work, we leverage the temporal context information around actions to handle the intrinsic temporal patterns representing these actions.
+
+Deep video understanding models are trained with large-scale datasets. While early works leveraged small custom video sets, a few large-scale datasets are available and worth mentioning, in particular Sports-1M [30] for generic sports video classification, MLB-YouTube [43] for baseball activity recognition, and GolfDB [41] for golf swing sequencing. These datasets all tackle specific tasks in sports. In our work, we use SoccerNet [21] to assess the performance of our context-aware loss for action spotting in soccer videos.
+
+Video understanding. Recent video challenges [23] include activity localization, which finds the temporal boundaries of activities. Following object localization, two-stage approaches have been proposed including proposal generation [9] and classification [8]. SSN [69] models each action instance with a structured temporal pyramid and TURN TAP [20] predicts action proposals and regresses the temporal boundaries, while GTAN [38] dynamically optimizes the temporal scale of each action proposal with Gaussian kernels. BSN [36], MGG [37] and BMN [34] regress the time of activity boundaries, showing state-of-the-art performances on both ActivityNet 1.3 [23] and THUMOS'14 [29]. Alternatively, ActionSearch [4] tackles the spotting task iteratively, learning to predict which frame to visit next in order to spot a given activity. However, this method requires sequences of temporal annotations by human annotators to train the models, and such annotations are not readily available for datasets outside ActivityNet. Also, Alwassel et al. [3] define an action spot as positive as soon as it lands within the boundary of an activity, which is less constraining than the action spotting defined in SoccerNet [21].
+
+Recently, Sigurdsson et al. [52] question the sharpness of activity boundaries and show that human agreement on temporal boundaries reaches an average tIoU of $72.5\%$ for Charades [53] and $58.7\%$ on MultiTHUMOS [64]. Alwassel et al. [3] confirm such disparity on ActivityNet [23], but also show that it does not constitute a major roadblock to progress in the field. Different from activity localization, SoccerNet [21] proposes an alternative action spotting task for soccer action understanding, leveraging a well-defined set of soccer rules that define a single temporal anchor per action. In our work, we improve the SoccerNet [21] action spotting baseline by introducing a novel context-aware loss that temporally slices the vicinity of the action spots. Also, we integrate our loss for generic activity localization and detection on a boundary-based method [34, 36].
+
+
+Figure 2. Action context slicing. We define six temporal segments around each ground-truth action spot, each of which induces a specific behavior in our context-aware loss function when training the network. Far before and far after the action, its influence is negligible, thus we train the network not to predict an action. Just before the action, we do not influence the network since a particular context may or may not result in an action (i.e. an attacking phase may lead to a goal). Just after the action, its contextual information is rich and unambiguous as the action has just occurred (i.e. a goal leads to celebrating). Hence, we train the network to predict an action. Finally, we define transition zones for our loss function to be smooth, in which we softly train the network not to predict an action. For each class $c$ , the temporal segments are delimited by specific slicing parameters $K_{i}^{c}$ and are materialized through our time-shift encoding, which contains richer temporal context information about the action than the initial binary spotting annotation.
+
+# 3. Methodology
+
+We address the action spotting task by developing a context-aware loss for a temporal segmentation module, and a YOLO-like loss for an action spotting module that outputs the spotting predictions of the network. We first present the re-encoding of the annotations needed for the segmentation and spotting tasks, then we explain how the losses of these modules are computed based on the re-encodings.
+
+Problem definition. We denote by $C$ the number of classes of the action spotting problem. Each action is identified by a single action frame annotated as such. Each frame of a given video is annotated with either a one-hot encoded vector with $C$ components for the action frames or a vector of $C$ zeros for the background frames. We denote by $N_{F}$ the number of frames in a video.
+
+# 3.1. Encoding
+
+To train our network, the initial annotations are re-encoded in two different ways: with a time-shift encoding used for the temporal segmentation loss, and with a YOLO-like encoding used for the action spotting loss.
+
+Time-shift encoding (TSE) for temporal segmentation. We slice the temporal context around each action into segments related to their distance from the action, as shown in Figure 2. The segments regroup frames that are either far before, just before, just after, far after an action, or in transition zones between these segments.
+
+We use the segments in our temporal segmentation module so that its segmentation scores reflect the following ideas. (1) Far before an action spot of some class, we cannot foresee its occurrence. Hence, the score for that class should indicate that no action is occurring. (2) Just before an action, its occurrence is uncertain. Therefore, we do not influence the score towards any particular direction. (3) Just after an action has happened, plenty of visual cues allow for the detection of the occurrence of the action. The score for its class should reflect the presence of the action. (4) Far after an action, the score for its class should indicate that it is not occurring anymore. The segments around the actions of class $c$ are delimited by four temporal context slicing parameters $K_1^c < K_2^c < 0 < K_3^c < K_4^c$ as shown in Figure 2.
+
+The context slicing is used to perform a time-shift encoding (TSE) of each frame $x$ of a video with a vector of length $C$ , containing class-wise information on the relative location of $x$ with respect to its closest past or future actions. The TSE of $x$ for class $c$ , noted $s^c(x)$ , is the time-shift (i.e. difference in frame indices) of $x$ from either its closest past or future ground-truth action of class $c$ , depending on which has the dominant influence on $x$ . We set $s^c(x)$ as the time-shift from the past action if either (i) $x$ is just after the past action; or (ii) $x$ is in the transition zone after the past action, but is far before the future action; or (iii) $x$ is in the transition zones after the past and before the future actions while being closer to the past action. In all other cases, $s^c(x)$ is the time-shift from the future action.
+
+If $x$ is both located far after the past action and far before the future action, selecting either of the two time-shifts has the same effect in our loss. Furthermore, for the frames located either before the first or after the last annotated action of class $c$ , only one time-shift can be computed and is thus set as $s^c(x)$ . Finally, if no action of class $c$ is present in the video, then we set $s^c(x) = K_1^c$ for all the frames. This induces the same behavior in our loss as if they were all located far before their closest future action.
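+
+To make the encoding concrete, the following is a minimal sketch of the TSE for a single class, assuming the ground-truth actions of that class are given as sorted frame indices and that the slicing parameters $K_1^c < K_2^c < 0 < K_3^c < K_4^c$ are expressed in frames; the function and variable names are illustrative and not taken from any released code.
+
+```python
+import numpy as np
+
+def time_shift_encoding(n_frames, action_frames, K1, K2, K3, K4):
+    """Return s^c(x) for every frame x of a video, for one class c."""
+    s = np.full(n_frames, K1, dtype=np.int64)  # default: no action of this class
+    if len(action_frames) == 0:
+        return s
+    actions = np.asarray(sorted(action_frames))
+    for x in range(n_frames):
+        past, future = actions[actions <= x], actions[actions > x]
+        shift_past = int(x - past[-1]) if len(past) else None       # >= 0
+        shift_future = int(x - future[0]) if len(future) else None  # < 0
+        if shift_past is None or shift_future is None:
+            s[x] = shift_past if shift_future is None else shift_future
+            continue
+        just_after = shift_past < K3
+        trans_after = K3 <= shift_past < K4
+        far_before = shift_future <= K1
+        trans_before = K1 < shift_future <= K2
+        closer_to_past = shift_past <= -shift_future
+        if just_after or (trans_after and far_before) or \
+           (trans_after and trans_before and closer_to_past):
+            s[x] = shift_past
+        else:
+            s[x] = shift_future
+    return s
+```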
+
+
+Figure 3. Pipeline for action spotting. We propose a network made of a frame feature extractor and a temporal CNN outputting $C$ class feature vectors per frame, a segmentation module outputting per-class segmentation scores, and a spotting module extracting $2 + C$ values per spotting prediction (i.e. the confidence score $s$ for the spotting, its location $t$ and a per-class prediction).
+
+YOLO-like encoding for action spotting. Inspired by YOLO [45], each ground-truth action of the video engenders an action vector composed of $2 + C$ values. The first value is a binary indicator of the presence $(= 1)$ of the action. The second value is the location of the frame annotated as the action, computed as the index of that frame divided by $N_{F}$ . The remaining $C$ values represent the one-hot encoding of the action. We encode a whole video containing $N_{\mathrm{GT}}$ actions in a matrix $\mathbf{Y}$ of dimension $N_{\mathrm{GT}} \times (2 + C)$ , with each line representing an action vector of the video.
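+
+As a small illustration of this encoding, the sketch below builds the matrix $\mathbf{Y}$ from a list of (frame index, class index) ground-truth actions; the names are illustrative.
+
+```python
+import numpy as np
+
+def yolo_like_encoding(actions, n_frames, n_classes):
+    """actions: list of (frame_index, class_index) ground-truth annotations."""
+    Y = np.zeros((len(actions), 2 + n_classes), dtype=np.float32)
+    for i, (frame_idx, cls) in enumerate(actions):
+        Y[i, 0] = 1.0                   # presence indicator
+        Y[i, 1] = frame_idx / n_frames  # normalized temporal location
+        Y[i, 2 + cls] = 1.0             # one-hot class encoding
+    return Y
+```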
+
+# 3.2. Loss and Network Design
+
+Temporal segmentation loss. The TSE parameterizes the temporal segmentation loss described below. For clarity, we denote by $p$ the score output by the segmentation module for a frame $x$ to belong to class $c$, and by $s$ the TSE of $x$ for class $c$. We detail the loss generated by $p$ in this setting, noted $L(p, s)$. First, in accordance with Figure 2, we compute $L(p, s)$ as follows:
+
+$$
+L (p, s) = \left\{ \begin{array}{l l} - \ln (1 - p) & s \leq K _ {1} ^ {c} \quad (1) \\ - \ln \left(1 - \frac {K _ {2} ^ {c} - s}{K _ {2} ^ {c} - K _ {1} ^ {c}} p\right) & K _ {1} ^ {c} < s \leq K _ {2} ^ {c} \quad (2) \\ 0 & K _ {2} ^ {c} < s < 0 \quad (3) \\ - \ln \left(\frac {s}{K _ {3} ^ {c}} + \frac {K _ {3} ^ {c} - s}{K _ {3} ^ {c}} p\right) & 0 \leq s < K _ {3} ^ {c} \quad (4) \\ - \ln \left(1 - \frac {s - K _ {3} ^ {c}}{K _ {4} ^ {c} - K _ {3} ^ {c}} p\right) & K _ {3} ^ {c} \leq s < K _ {4} ^ {c} \quad (5) \\ - \ln (1 - p) & s \geq K _ {4} ^ {c}. \quad (6) \end{array} \right.
+$$
+
+Then, following the practice in [14, 48] to help the network focus on improving its worst segmentation scores, we zero out the loss for scores that are satisfying enough. In the case of Equation (4) when $s = 0$, we say that a score is satisfactory when it exceeds some maximum margin $\tau_{\mathrm{max}}$. In the cases of Equations (1) and (6), we say that a score is satisfactory when it is lower than some minimum margin $\tau_{\mathrm{min}}$. The range of values for $p$ that leads to zeroing out the loss varies with $s$ and the slicing parameters in most cases. This is achieved by revising $L(p, s)$ as in Equations (7) and (8). Figure 1 shows a representation of $\tilde{L}(p, s)$.
+
+$$
+\tilde{L}(p, s) = \left\{ \begin{array}{l l} \max\left(0,\, L(p, s) + \ln\left(\tau_{\max}\right)\right) & 0 \leq s < K_{3}^{c} \quad (7) \\ \max\left(0,\, L(p, s) + \ln\left(1 - \tau_{\min}\right)\right) & \text{otherwise} \quad (8) \end{array} \right.
+$$
+
+Finally, the segmentation loss $\mathcal{L}^{\mathrm{seg}}$ for a given video of frames $x_{1},\ldots ,x_{N_{F}}$ is given in Equation (9).
+
+$$
+\mathcal {L} ^ {\mathrm {s e g}} = \frac {1}{C N _ {F}} \sum_ {i = 1} ^ {N _ {F}} \sum_ {c = 1} ^ {C} \tilde {L} \left(p ^ {c} \left(x _ {i}\right), s ^ {c} \left(x _ {i}\right)\right) \tag {9}
+$$
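+
+For concreteness, here is a minimal, per-frame Python sketch of Equations (1) to (9), assuming $p \in (0, 1)$ comes from a sigmoid and $s$ is the integer TSE; the scalar formulation and the names are illustrative rather than the authors' implementation.
+
+```python
+import math
+
+def L(p, s, K1, K2, K3, K4):
+    if s <= K1 or s >= K4:                                # far before / far after
+        return -math.log(1.0 - p)
+    if K1 < s <= K2:                                      # transition before
+        return -math.log(1.0 - (K2 - s) / (K2 - K1) * p)
+    if K2 < s < 0:                                        # just before: no constraint
+        return 0.0
+    if 0 <= s < K3:                                       # just after: push towards 1
+        return -math.log(s / K3 + (K3 - s) / K3 * p)
+    return -math.log(1.0 - (s - K3) / (K4 - K3) * p)      # transition after
+
+def L_tilde(p, s, K1, K2, K3, K4, tau_min=0.1, tau_max=0.9):
+    base = L(p, s, K1, K2, K3, K4)
+    if 0 <= s < K3:
+        return max(0.0, base + math.log(tau_max))
+    return max(0.0, base + math.log(1.0 - tau_min))
+
+def segmentation_loss(P, S, slicing):
+    """P[c][i], S[c][i]: score and TSE of frame i for class c; slicing[c] = (K1, K2, K3, K4)."""
+    C, NF = len(P), len(P[0])
+    total = sum(L_tilde(P[c][i], S[c][i], *slicing[c])
+                for c in range(C) for i in range(NF))
+    return total / (C * NF)
+```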
+
+Action spotting loss. Let $N_{\mathrm{pred}}$ be a fixed number of action spotting predictions generated by our network for each video. Those predictions are encoded in $\hat{\mathbf{Y}}$ of dimension $N_{\mathrm{pred}} \times (2 + C)$ , similarly to $\mathbf{Y}$ .
+
+We leverage an iterative one-to-one matching algorithm to pair each of the $N_{\mathrm{GT}}$ ground-truth actions with a prediction. First, we match each ground-truth location of $\mathbf{Y}_{i,2}$ with its closest predicted location in $\hat{\mathbf{Y}}_{i,2}$ , and vice-versa (i.e. we match the predicted locations with their closest ground-truth locations). Next, we form pairs of (ground-truth, predicted) locations that reciprocally match, we remove them from the process, and we iterate until all ground truths are coupled with a prediction. Consequently, we build $\hat{\mathbf{Y}}^M$ as a reorganized version of the actions encoded in $\hat{\mathbf{Y}}$ , such that $\mathbf{Y}_{i,2}$ and $\hat{\mathbf{Y}}_{i,2}^M$ reciprocally match for all $i \leq N_{\mathrm{GT}}$ .
+
+We define the action spotting loss $\mathcal{L}^{\mathrm{as}}$ in Equation (10). It corresponds to a weighted sum of the squared errors between the matched predictions and a regularization on the confidence score of the unmatched predictions.
+
+$$
+\mathcal {L} ^ {\mathrm {a s}} = \sum_ {i = 1} ^ {N _ {\mathrm {G T}}} \sum_ {j = 1} ^ {2 + C} \alpha_ {j} \left(\mathbf {Y} _ {i, j} - \hat {\mathbf {Y}} _ {i, j} ^ {M}\right) ^ {2} + \beta \sum_ {i = N _ {\mathrm {G T}} + 1} ^ {N _ {\mathrm {p r e d}}} \left(\hat {\mathbf {Y}} _ {i, 1} ^ {M}\right) ^ {2} \tag {10}
+$$
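+
+The following sketch illustrates the reciprocal nearest-neighbour matching and the loss of Equation (10) with NumPy, assuming at least as many predictions as ground truths (as in our setting with $N_{\mathrm{pred}} = 5$); it is a plain reference implementation written for clarity, not differentiable training code, and the names are illustrative.
+
+```python
+import numpy as np
+
+def match_predictions(gt_loc, pred_loc):
+    """Return, for each ground truth, the index of its matched prediction."""
+    gt_idx, pred_idx = list(range(len(gt_loc))), list(range(len(pred_loc)))
+    match = [None] * len(gt_loc)
+    while gt_idx:
+        paired = []
+        for g in gt_idx:
+            p = min(pred_idx, key=lambda j: abs(gt_loc[g] - pred_loc[j]))
+            g_back = min(gt_idx, key=lambda i: abs(gt_loc[i] - pred_loc[p]))
+            if g_back == g:                                # reciprocal match
+                paired.append((g, p))
+        for g, p in paired:
+            match[g] = p
+            gt_idx.remove(g)
+            pred_idx.remove(p)
+    return match
+
+def spotting_loss(Y, Y_hat, alpha, beta):
+    """Y: (N_GT, 2+C), Y_hat: (N_pred, 2+C), alpha: per-component weights of length 2+C."""
+    match = match_predictions(Y[:, 1], Y_hat[:, 1])
+    matched = set(match)
+    loss = sum(np.sum(alpha * (Y[i] - Y_hat[match[i]]) ** 2) for i in range(len(Y)))
+    loss += beta * sum(Y_hat[j, 0] ** 2 for j in range(len(Y_hat)) if j not in matched)
+    return loss
+```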
+
+Complete loss. The final loss $\mathcal{L}$ is presented in Equation (11) as a weighted sum of $\mathcal{L}^{\mathrm{seg}}$ and $\mathcal{L}^{\mathrm{as}}$ .
+
+$$
+\mathcal {L} = \mathcal {L} ^ {\mathrm {a s}} + \lambda^ {\mathrm {s e g}} \mathcal {L} ^ {\mathrm {s e g}} \tag {11}
+$$
+
+Network for action spotting. The architecture of the network is illustrated in Figure 3 and further detailed in the supplementary material. We leverage frame feature representations for the videos (e.g. ResNet) provided with the dataset, embodied as the output of the frame feature extractor of Figure 3. The temporal CNN of Figure 3 is composed of a spatial two-layer MLP, followed by four multi-scale 3D convolutions (i.e. across time, features and classes). The temporal CNN outputs a set of $C \times f$ features for each frame organized in $C$ feature vectors (one per class) of size $f$, as in [48]. These features are input into a segmentation module, in which we use Batch Normalization [26] and sigmoid activations. The closeness of the $C$ vectors obtained in this way to a pre-defined vector gives the $C$ segmentation scores output by the segmentation module, as in [14]. The $C \times f$ features obtained previously are concatenated with the $C$ scores and fed to the action spotting module, as shown in Figure 3. It is composed of three successive temporal max-pooling layers and 3D convolutions, and outputs $N_{\mathrm{pred}}$ vectors of dimension $(2 + C)$. The first two elements of these vectors are sigmoid-activated, the $C$ last are softmax-activated. The activated vectors are stacked to produce the prediction $\hat{\mathbf{Y}}$ of dimension $N_{\mathrm{pred}} \times (2 + C)$ for the action spotting task.
+
+# 4. Experiments
+
+We evaluate our new context-aware loss function in two scenarios: the action spotting task of SoccerNet [21], and activity localization and detection tasks on ActivityNet [23].
+
+# 4.1. Experiments on SoccerNet
+
+Data. Three classes of action are annotated in SoccerNet by Giancola et al. [21]: goals, cards, and substitutions, so $C = 3$ in this case. They identify each action by one annotated frame: the moment the ball crosses the line for goal, the moment the referee shows a player a card for card, and the moment a new player enters the field for substitution. We train our network on the frame features already provided with the dataset. Giancola et al. first subsampled the raw videos at 2 fps, then they extracted the features with a backbone network and reduced them by PCA to 512 features for each frame of the subsampled videos. Three sets of features are provided, each extracted with a particular backbone network: I3D [11], C3D [58], and ResNet [22].
+
+Action spotting metric. We measure performances with the action spotting metric introduced in SoccerNet [21]. An action spot is defined as positive if its temporal offset from its closest ground truth is less than a given tolerance $\delta$ . The average precision (AP) is estimated based on Precision-Recall curves, then averaged between classes (mAP). An Average-mAP is proposed as the AUC of the mAP over different tolerances $\delta$ ranging from 5 to 60 seconds.
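+
+As a rough, simplified sketch of this metric for one class (greedy one-to-one matching of confidence-ranked predictions to ground truths within the tolerance, and a mean over uniformly spaced tolerances as an approximation of the AUC), assuming times are given in seconds; the official SoccerNet evaluation code should be used for reported numbers.
+
+```python
+import numpy as np
+
+def average_precision(pred_times, pred_scores, gt_times, tol):
+    order = np.argsort(pred_scores)[::-1]        # rank predictions by confidence
+    used, tps = set(), []
+    for i in order:
+        dists = [abs(pred_times[i] - t) for t in gt_times]
+        j = int(np.argmin(dists)) if dists else -1
+        is_tp = j >= 0 and dists[j] <= tol and j not in used
+        if is_tp:
+            used.add(j)
+        tps.append(1 if is_tp else 0)
+    cum_tp = np.cumsum(tps)
+    precision = cum_tp / (np.arange(len(tps)) + 1)
+    recall = cum_tp / max(len(gt_times), 1)
+    return float(np.trapz(precision, recall))    # area under the PR curve
+
+def average_map(per_class_preds, per_class_gts, tolerances=range(5, 65, 5)):
+    # per_class_preds[c] = (times, scores); per_class_gts[c] = ground-truth times
+    maps = [np.mean([average_precision(*per_class_preds[c], per_class_gts[c], tol)
+                     for c in per_class_preds]) for tol in tolerances]
+    return float(np.mean(maps))
+```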
+
+Experimental setup. We train our network on batches of chunks. We define a chunk as a set of $N_F$ contiguous frame feature vectors. We set $N_F = 240$ to maintain a high training speed while retaining sufficient contextual information. This size corresponds to a clip of 2 minutes of raw video. A batch contains chunks extracted from a single raw video. We extract a chunk around each ground-truth action, such that the action is randomly located within the chunk. Then, to balance the batch, we randomly extract $N_{\mathrm{GT}} / C$ chunks composed of background frames only. An epoch ends when the network has been trained on one batch per training video.
+
+| Method | I3D | C3D | ResNet |
+| --- | --- | --- | --- |
+| SoccerNet baseline 5s [21] | - | - | 34.5 |
+| SoccerNet baseline 60s [21] | - | - | 40.6 |
+| SoccerNet baseline 20s [21] | - | - | 49.7 |
+| Vats et al. [62] | - | - | 57.5 |
+| Ours | 53.6 | 57.7 | 62.5 |
+
+Table 1. Results on SoccerNet. Average-mAP (in %) on the test set of SoccerNet for the action spotting task, for each set of frame features (I3D, C3D, ResNet). We establish a new state-of-the-art performance.
+
+At each epoch, new batches are re-computed for each video for data augmentation purposes. Each raw video is time-shift encoded before training. Each new training chunk is encoded with the YOLO-like encoding.
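+
+A minimal sketch of this batch construction for one video is given below, assuming per-video frame features of shape (num_frames, feature_dim) and the ground-truth action frame indices; the helper name and the retry cap are illustrative.
+
+```python
+import random
+import numpy as np
+
+def build_batch(features, action_frames, n_classes, chunk_size=240):
+    num_frames = len(features)
+    chunks = []
+    # one chunk around each ground-truth action, randomly positioned inside it
+    for a in action_frames:
+        lo = max(0, a - chunk_size + 1)
+        hi = min(a, num_frames - chunk_size)
+        start = random.randint(lo, hi)
+        chunks.append(features[start:start + chunk_size])
+    # roughly N_GT / C background-only chunks to balance the batch
+    n_bg, attempts = max(1, len(action_frames) // n_classes), 0
+    while n_bg > 0 and attempts < 1000:
+        attempts += 1
+        start = random.randint(0, num_frames - chunk_size)
+        if not any(start <= a < start + chunk_size for a in action_frames):
+            chunks.append(features[start:start + chunk_size])
+            n_bg -= 1
+    return np.stack(chunks)
+```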
+
+The number of action spotting predictions generated by the network is set to $N_{\mathrm{pred}} = 5$, as we observed that no chunks of 2 minutes of raw video contain more than 5 actions. We train the network for 1000 epochs, with an initial learning rate $lr = 10^{-3}$ linearly decreasing to $10^{-6}$. We use Adam as the optimizer with default parameters [33].
+
+For the segmentation loss, we set the margins $\tau_{\mathrm{max}} = 0.9$ and $\tau_{\mathrm{min}} = 0.1$ in Equations (7) and (8), following the practice in [48]. For the action spotting loss in Equation (10), we set $\alpha_{j} = 1$ for $j \neq 2$ , while $\alpha_{2}$ is optimized (see below) to find an appropriate weighting for the location components of the predictions. Similarly, $\beta$ is optimized to find the balance between the loss of the action vectors and the regularization of the remaining predictions. For the final loss in Equation (11), we optimize $\lambda^{\mathrm{seg}}$ to find the balance between the two losses.
+
+Hyperparameter optimization. For each set of features (I3D, C3D, ResNet), we perform a joint Bayesian optimization [1] on the number of frame features $f$ extracted per class, on the temporal receptive field $r$ of the network (i.e. temporal kernel dimension of the 3D convolutions), and on the parameters $\alpha_{2}, \beta, \lambda^{\mathrm{seg}}$ . Next, we perform a grid search optimization on the slicing parameters $K_{i}^{c}$ .
+
+For ResNet, we obtain $f = 16$, $r = 80$, $\alpha_{2} = 5$, $\beta = 0.5$, $\lambda^{\mathrm{seg}} = 1.5$. For goals (resp. cards, substitutions) we have $K_{1} = -40$ (resp. $-40, -80$), $K_{2} = -20$ (resp. $-20, -40$), $K_{3} = 120$ (resp. $20, 20$), and $K_{4} = 180$ (resp. $40, 40$). Given the frame rate of 2 fps, those values can be translated to seconds by scaling them down by a factor of 2. The value $r = 80$ corresponds to a temporal receptive field of 20 seconds on both sides of the central frame in the temporal dimension of the 3D convolutions.
+
+Main results. The performances obtained with the optimized parameters are reported in Table 1. As shown, we establish a new state-of-the-art performance on the action spotting task of SoccerNet, outperforming the previous benchmark by a comfortable margin, for all the frame features. ResNet gives the best performance, as also observed in [21]. A sensitivity analysis of the parameters $K_{i}^{c}$ reveals robust performances around the optimal values, indicating that no heavy fine-tuning is required for the context slicing. Also, performances largely decrease as the slicing is strongly reduced, which emphasizes its usefulness.
+
+Ablation study. Since the ResNet features provide the best performance, we use them with their optimized parameters for the following ablation studies. (i) We remove the segmentation module, which is equivalent to setting $\lambda^{\mathrm{seg}} = 0$ in Equation (11). This also removes the context slicing and the margins $\tau_{\mathrm{max}}$ and $\tau_{\mathrm{min}}$ . (ii) We remove the action context slicing such that the ground truth for the segmentation module is the raw binary annotations, i.e. all the frames must be classified as background except the action frames. This is equivalent to setting $K_{1} = -1 = K_{2} = -K_{3} = -K_{4}$ . (iii) We remove the margins that help the network focus on improving its worst segmentation scores, by setting $\tau_{\mathrm{max}} = 1$ , $\tau_{\mathrm{min}} = 0$ in Equations (7) and (8). (iv) We remove the iterative one-to-one matching between the ground truth $\mathbf{Y}$ and the predictions $\hat{\mathbf{Y}}$ before the action spotting loss, which is equivalent to using $\hat{\mathbf{Y}}$ instead of $\hat{\mathbf{Y}}^{M}$ in Equation (10). The results of the ablation studies are shown in Table 2.
+
+From an Average-mAP perspective, the auxiliary task of temporal segmentation improves the performance on the action spotting task (from $58.9\%$ to $62.5\%$), which is a common observation in multi-task learning [66]. When the segmentation is performed, our temporal context slicing gives a significant boost compared to using the raw binary annotations (from $57.8\%$ to $62.5\%$). This observation is in accordance with the sensitivity analysis. It also appears that it is preferable to not use the segmentation at all rather than using the segmentation with the raw binary annotations ($58.9\%$ vs $57.8\%$), which further underlines the usefulness of the context slicing. A boost in performance is also observed when we use the margins to help the network focus on improving its worst segmentation scores (from $59.0\%$ to $62.5\%$). Finally, Table 2 shows that it is extremely beneficial to match the predictions of the network with the ground truth before the action spotting loss (from $46.8\%$ to $62.5\%$). This makes sense since there is no point in evaluating the network on its ability to order its predictions, which is a hard and unnecessary constraint. The large impact of the matching is also justified by its direct implication in the action spotting task assessed through the Average-mAP.
+
+Results through game time. In soccer, it makes sense to analyze the performance of our model through game time, since the actions are not uniformly distributed throughout the game. For example, a substitution is more likely to occur during the second half of a game. We consider non-overlapping bins corresponding to 5 minutes of game time and compute the Average-mAP for each bin. Figure 4 shows the evolution of this metric through game time.
+
+| | Segm. | Slic. | Marg. | Match. | Result |
+| --- | --- | --- | --- | --- | --- |
+| (i) | | | | ✓ | 58.9 |
+| (ii) | ✓ | | ✓ | ✓ | 57.8 |
+| (iii) | ✓ | ✓ | | ✓ | 59.0 |
+| (iv) | ✓ | ✓ | ✓ | | 46.8 |
+| Ours | ✓ | ✓ | ✓ | ✓ | 62.5 |
+
+Table 2. Ablation study. We perform ablations by (i) removing the segmentation $(\lambda^{\mathrm{seg}} = 0)$ , hence the slicing and the margins; (ii) removing the context slicing $(K_{1} = -1 = K_{2} = -K_{3} = -K_{4})$ ; (iii) removing the margins that help the network focus on improving its worst segmentation scores $(\tau_{\mathrm{min}} = 0, \tau_{\mathrm{max}} = 1)$ ; (iv) removing the matching (using $\hat{\mathbf{Y}}$ instead of $\hat{\mathbf{Y}}^{M}$ in $\mathcal{L}^{\mathrm{as}}$ ). Each part evidently contributes to the overall performance.
+
+
+Figure 4. Performance as a function of game time. Average-mAP spotting performance over the game time with all ground-truth actions of the dataset binned in 5 minute intervals. It appears that actions around the half-time break are more challenging to spot. The figure also reports the number of actions in each bin and our overall performance (62.5%).
+
+It appears that actions occurring during the first five minutes of a half are substantially more difficult to spot than the others. This may be partially explained by the occurrence of some of these actions at the very beginning of a half, for which the temporal receptive field of the network requires the chunk to be temporally padded. Hence, some information may be missing to allow the network to spot those actions. Besides, when substitutions occur during the break, they are annotated as such on the first frame of the second halves of the matches, which makes them practically impossible to spot. In the test set, this happens for $28\%$ of the matches. None of these substitutions are spotted by our model, which thus degrades the performance during the first minutes of play in the second halves of the matches. However, they merely represent $5\%$ of all the substitutions, and removing them from the evaluation only boosts our Average-mAP by $0.7\%$ (from $62.5\%$ to $63.2\%$).
+
+Results as a function of action vicinity. We investigate whether actions are harder to spot when they are close to each other. We bin the ground-truth actions based on the distance that separates them from the previous (or next, depending on which is the closest) ground-truth action, regardless of their classes. Then, we compute the Average-mAP for each bin. The results are represented in Figure 5.
+
+
+Figure 5. Performance as a function of action vicinity. Average-mAP spotting performance per bin of ground-truth actions grouped by distance (in seconds) from their closest ground-truth action. It appears that nearby actions are more challenging to spot. The figure also reports the number of actions in each bin and our overall performance $(62.5\%)$.
+
+We observe that the actions are more difficult to spot when they are close to each other. This could be due to the reduced number of visual cues, such as replays, when an action occurs rapidly after another and thus must be broadcast. Some confusion may also arise because the replays of the first action can still be shown after the second action, e.g. a sanctioned foul followed by a converted penalty. This analysis also shows that the action spotting problem is challenging even when the actions are further apart, as the performance in Figure 5 eventually plateaus.
+
+Per-class results. We perform a per-class analysis in a similar spirit as the Average-mAP metric. For a given class, we fix a tolerance $\delta$ around each annotated action to determine positive predictions and we aggregate these results in a confusion matrix. An action is considered spotted when its confidence score exceeds some threshold optimized for the $F_{1}$ score on the validation set. From the confusion matrix, we compute the precision, recall and $F_{1}$ score for that class and for that tolerance $\delta$ . Varying $\delta$ from 5 to 60 seconds provides the evolution of the three metrics as a function of the tolerance. Figure 6 shows these curves for goals for our model and for the baseline [21]. The results for cards and substitutions are provided in supplementary material.
+
+Figure 6 shows that most goals can be efficiently spotted by our model within 10 seconds around the ground truth ( $\delta = 20$ seconds). We achieve a precision of $80\%$ for that tolerance. The previous baseline plateaus within 20 seconds ( $\delta = 40$ seconds) and still has a lower performance. In particular for goals, many visual cues facilitate their spotting, e.g. multiple replays, particular camera views, or celebrations from the players and from the public.
+
+# 4.2. Experiments on ActivityNet
+
+In this section, we evaluate our context-aware loss in a more generic task than action spotting in soccer videos. We tackle the Activity Proposal and Activity Detection tasks of the challenging ActivityNet dataset, for which we use the ResNet features provided with the dataset at 5 fps.
+
+Setup. We use the current state-of-the-art network, namely BMN [34], with the code provided in [2]. BMN is equipped with a temporal evaluation module (TEM), which plays a similar role as our temporal segmentation module.
+
+
+
+
+Figure 6. Per-class results (goals). A prediction of class goal is a true positive (TP) with tolerance $\delta$ when it is located at most $\delta /2$ seconds from a ground-truth goal. The baseline results are obtained from the best model of [21]. Our model spots most goals within 10 seconds around the ground truth ( $\delta = 20$ seconds).
+
+We replace the loss associated with the TEM with our novel temporal segmentation loss $\mathcal{L}^{\mathrm{seg}}$. The slicing parameters are set identically for all the classes and are optimized with respect to the AUC performance on the validation set by grid search with the constraint $K_{1} = 2K_{2} = -2K_{3} = -K_{4}$. The optimization yields the best results for $K_{1} = -14$.
+
+Results. The average performances on 20 runs of our experiment and of the BMN base code [2] are reported in Table 3. Our novel temporal segmentation loss improves the performance obtained with BMN [2] by $0.15\%$ and $0.12\%$ for the activity proposal task (AR@100 and AUC) and by $0.38\%$ for the activity detection task (Average-mAP). These increases are comparable to recent incremental improvements reported in the literature, while being obtained simply by replacing the TEM loss with our context-aware segmentation loss. The network thus has the same architecture and number of parameters. We conjecture that our loss $\mathcal{L}^{\mathrm{seg}}$, through its particular context slicing, helps train the network by modelling the uncertainty surrounding the annotations. Indeed, it has been shown in [3, 52] that a large variability exists among human annotators on which frames to annotate as the beginning and the end of the activities of the dataset. Let us note that in BMN, the TEM loss is somehow adapted around the action frames in order to mitigate the penalization attributed to their neighboring frames. Our work goes one step further, by directly designing a temporal context-aware segmentation loss.
+
+# 5. Automatic Highlights Generation for Soccer
+
+Some action spotting and temporal segmentation results are shown in Figure 7. It appears that some sequences of play have a high segmentation score for some classes but do not lead, quite rightly, to an action spotting.
+
+| Method | AR@100 | AUC | Average-mAP |
+| --- | --- | --- | --- |
+| Lin et al. [35] | 73.01 | 64.40 | 29.17 |
+| Gao et al. [19] | 73.17 | 65.72 | - |
+| BSN [36] | 74.16 | 66.17 | 30.03 |
+| P-GCN [68] | - | - | 31.11 |
+| BMN [34] | 75.01 | 67.10 | 33.85 |
+| BMN code [2] | 75.11 | 67.16 | 30.67 ± 0.08 |
+| Ours: [2] + $\mathcal{L}^{\mathrm{seg}}$ | 75.26 | 67.28 | 31.05 ± 0.07 |
+
+Table 3. Results on ActivityNet validation set for the proposal task (AR@100, AUC) and for the detection task (Average-mAP). For our experiments, we report the average values on 20 runs.
+
+
+Figure 7. Action spotting and segmentation for the $2^{\text{nd}}$ half of the "Remuntada" FCB - PSG. Ground truth actions, temporal segmentation curves, and spotting results are illustrated. We can identify unannotated interesting actions using our segmentation.
+
+It turns out that these sequences are often related to unannotated actions of supplementary classes that resemble those considered so far, such as unconverted goal opportunities and unsanctioned fouls. Video clips of the two actions identified in Figure 7 are provided in the supplementary material.
+
+To quantify the spotting results of goal opportunities, we can only compute the precision metric since these actions are not annotated. We manually inspect each video sequence of the test set where the segmentation score for goals exceeds some threshold $\eta$ but where no ground-truth goal is present. We decide whether the sequence is a goal opportunity or not by asking two frequent observers of soccer games if they would include it in the highlights of the match. The sequence is a true positive when they both agree to include it and a false positive, otherwise. The precision is then computed for that $\eta$ . By gradually decreasing $\eta$ from 0.9 to 0.3, we obtain the precision curve shown in Figure 8. It appears that $80\%$ of the sequences with a segmentation score larger than $\eta = 0.5$ are considered goal opportunities.
+
+As a direct by-product, we derive an automatic highlights generator without explicit supervision.
+
+
+Figure 8. Precision for goal opportunities, as a function of the threshold $\eta$ that the goal segmentation score must exceed for a sequence to be manually inspected. For scores larger than $\eta = 0.5$, a precision of 0.8 is achieved, i.e. $80\%$ of the sequences inspected were goal opportunities. The figure also reports the number of sequences inspected per threshold.
+
+We extract a video clip starting 15 seconds before each spotting of a goal or a card and ending 20 seconds after. We proceed likewise for the sequences with a segmentation score $\geq 0.5$ for goals. We dismiss substitutions as they rarely appear in highlights. We assemble the clips chronologically to produce the highlights video, provided in supplementary material. Evaluating its quality is subjective, but we found its content to be adequate, even if the montage could be improved. Indeed, only sequences where a goal, a goal opportunity, or a foul occurs are selected. This reinforces the usefulness of the segmentation, as it provides a direct overview of the proceedings of the match, including proposals for unannotated actions that are interesting for highlights.
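+
+A small illustrative sketch of this clip selection is given below (times in seconds); the segment representation and the names are assumptions made for the example.
+
+```python
+def highlight_clips(spots, goal_segments, before=15.0, after=20.0):
+    """spots: (time, class) spotting outputs; goal_segments: (time, score) goal segmentation peaks."""
+    clips = []
+    for t, cls in spots:
+        if cls in ("goal", "card"):                  # substitutions are dismissed
+            clips.append((max(0.0, t - before), t + after))
+    for t, score in goal_segments:
+        if score >= 0.5:                             # likely goal opportunity
+            clips.append((max(0.0, t - before), t + after))
+    return sorted(clips)                             # chronological order
+```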
+
+# 6. Conclusion
+
+We tackle the challenging action spotting task of SoccerNet with a novel context-aware loss for segmentation and a YOLO-like loss for the spotting. The former treats the frames according to their time-shift from their closest ground-truth actions. The latter leverages an iterative matching algorithm that alleviates the need for the network to order its predictions. To show generalization capabilities, we also test our context-aware loss on ActivityNet. We improve the state-of-the-art on ActivityNet by $0.15\%$ in AR@100, $0.12\%$ in AUC, and $0.38\%$ in Average-mAP, by only including our context-aware loss without changing the network architecture. We achieve a new state-of-the-art on SoccerNet, surpassing by far the previous baseline (from $49.7\%$ to $62.5\%$ in Average-mAP) and spotting most actions within 10 seconds around their ground truth. Finally, we leverage the segmentation results to identify unannotated actions such as goal opportunities and derive a highlights generator without specific supervision.
+
+Acknowledgments. This work is supported by the DeepSport project of the Walloon region and the FRIA (Belgium), as well as the King Abdullah University of Science and Technology (KAUST) Office of Sponsored Research (OSR) under Award No. OSR-CRG2017-3405.
+
+# References
+
+[1] Bayesian Optimization. https://github.com/fmfn/BayesianOptimization. Last accessed: 2019-10-20. 5
+[2] Code for BMN. https://github.com/JJBOY/BMN-Boundary-Matching-Network. Last accessed: 2019-10-30. 7, 8
+[3] Humam Alwassel, Fabian Caba Heilbron, Victor Escorcia, and Bernard Ghanem. Diagnosing error in temporal action detectors. In European Conference on Computer Vision (ECCV), September 2018. 2, 7
+[4] Humam Alwassel, Fabian Caba Heilbron, and Bernard Ghanem. Action Search: Spotting Targets in Videos and Its Application to Temporal Action Localization. In European Conference on Computer Vision (ECCV), September 2018. 2
+[5] Moez Baccouche, Franck Mamalet, Christian Wolf, Christophe Garcia, and Atilla Baskurt. Action classification in soccer videos with long short-term memory recurrent neural networks. In International Conference on Artificial Neural Networks (ICANN), September 2010. 2
+[6] Vinay Bettadapura, Caroline Pantofaru, and Irfan Essa. Leveraging contextual cues for generating basketball highlights. In ACM international conference on Multimedia (ACM-MM), October 2016. 2
+[7] Lewis Bridgeman, Marco Volino, Jean-Yves Guillemaut, and Adrian Hilton. Multi-Person 3D Pose Estimation and Tracking in Sports. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2019. 1
+[8] Shyamal Buch, Victor Escorcia, Bernard Ghanem, Li Fei-Fei, and Juan Carlos Niebles. End-to-End, Single-Stream Temporal Action Detection in Untrimmed Videos. In British Machine Vision Conference (BMVC), September 2017. 2
+[9] Shyamal Buch, Victor Escorcia, Chuanqi Shen, Bernard Ghanem, and Juan Carlos Niebles. SST: Single-Stream Temporal Action Proposals. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017. 2
+[10] Zixi Cai, Helmut Neher, Kanav Vats, David A. Clausi, and John Zelek. Temporal hockey action recognition via pose and optical flows. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2019. 2
+[11] Joao Carreira and Andrew Zisserman. Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017. 5
+[12] Anthony Cioppa, Adrien Deliege, Maxime Istasse, Christophe De Vleeschouwer, and Marc Van Droogenbroeck. ARTHuS: Adaptive Real-Time Human Segmentation in Sports Through Online Distillation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2019. 1
+[13] Anthony Cioppa, Adrien Deliege, and Marc Van Droogenbroeck. A Bottom-Up Approach Based on Semantics for the Interpretation of the Main Camera Stream in Soccer Games. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2018. 2
+[14] Adrien Deliège, Anthony Cioppa, and Marc Van Droogenbroeck. HitNet: a neural network with capsules embedded in a Hit-or-Miss layer, extended with hybrid data augmentation and ghost capsules. CoRR, abs/1806.06519, 2018. 4, 5
+[15] Deloitte. Market size of the European football market from 2006/07 to 2015/16 (in billion euros), 2017. Retrieved October 30, 2019, from https://www.statista.com/statistics/261223/european-soccer-market-total-revenue/. 1
+[16] Ahmet Ekin, A Murat Tekalp, and Rajiv Mehrotra. Automatic soccer video analysis and summarization. IEEE Transactions on Image Processing, 12(7):796-807, 2003. 2
+[17] Dirk Farin, Susanne Krabbe, Wolfgang Effelsberg, et al. Robust camera calibration for sport videos using court models. In Storage and Retrieval Methods and Applications for Multimedia 2004, volume 5307, pages 80-91. International Society for Optics and Photonics, 2003. 1
+[18] Panna Felsen, Pulkit Agrawal, and Jitendra Malik. What will happen next? Forecasting player moves in sports videos. In IEEE International Conference on Computer Vision (ICCV), October 2017. 1
+[19] Jiyang Gao, Kan Chen, and Ram Nevatia. CTAP: Complementary Temporal Action Proposal Generation. In European Conference on Computer Vision (ECCV), September 2018. 8
+[20] Jiyang Gao, Zhenheng Yang, Kan Chen, Chen Sun, and Ram Nevatia. TURN TAP: Temporal unit regression network for temporal action proposals. In IEEE International Conference on Computer Vision (ICCV), October 2017. 2
+[21] Silvio Giancola, Mohieddine Amine, Tarek Dghaily, and Bernard Ghanem. SoccerNet: A Scalable Dataset for Action Spotting in Soccer Videos. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2018. 2, 5, 6, 7
+[22] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. 5
+[23] Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. ActivityNet: A Large-Scale Video Benchmark for Human Activity Understanding. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015. 2, 5
+[24] Namdar Homayounfar, Sanja Fidler, and Raquel Urtasun. Sports field localization via deep structured models. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017. 1
+[25] Chung-Lin Huang, Huang-Chia Shih, and Chung-Yuan Chao. Semantic analysis of soccer video using dynamic Bayesian network. IEEE Transactions on Multimedia, 8(4):749-760, 2006. 2
+[26] Sergey Ioffe and Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In International Conference on Machine Learning (ICML), July 2015. 5
+[27] Maxime Istasse, Julien Moreau, and Christophe De Vleeschouwer. Associative Embedding for Team Discrimination. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2019. 1
+
+[28] Haohao Jiang, Yao Lu, and Jing Xue. Automatic Soccer Video Event Detection Based on a Deep Neural Network Combined CNN and RNN. In International Conference on Tools with Artificial Intelligence (ICTAI), November 2016. 2
+[29] Yu-Gang Jiang, Jingen Liu, Amir Roshan Zamir, George Toderici, Ivan Laptev, Mubarak Shah, and Rahul Sukthankar. THUMOS Challenge: Action Recognition with a Large Number of Classes. http://crcv.ucf.edu/THUMOS14/, 2014. 2
+[30] Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale Video Classification with Convolutional Neural Networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014. 2
+[31] A.T. Kearney. Global sports market - total revenue from 2005 to 2017 (in billion U.S. dollars), 2014. Retrieved 2019-10-30 from https://www.statista.com/statistics/370560/worldwide-sports-market-revenue/. 1
+[32] Abdullah Khan, Beatrice Lazzerini, Gaetano Calabrese, and Luciano Serafini. Soccer Event Detection. In International Conference on Image Processing and Pattern Recognition (IPPR), April 2018. 2
+[33] Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In International Conference on Learning Representations (ICLR), May 2015. 5
+[34] Tianwei Lin, Xiao Liu, Xin Li, Errui Ding, and Shilei Wen. BMN: Boundary-Matching Network for Temporal Action Proposal Generation. In IEEE International Conference on Computer Vision (ICCV), October 2019. 2, 7, 8
+[35] Tianwei Lin, Xu Zhao, and Zheng Shou. Temporal Convolution Based Action Proposal: Submission to ActivityNet 2017. CoRR, abs/1707.06750, 2017. 8
+[36] Tianwei Lin, Xu Zhao, Haisheng Su, Chongjing Wang, and Ming Yang. BSN: Boundary Sensitive Network for Temporal Action Proposal Generation. In European Conference on Computer Vision (ECCV), September 2018. 2, 8
+[37] Yuan Liu, Lin Ma, Yifeng Zhang, Wei Liu, and Shih-Fu Chang. Multi-granularity Generator for Temporal Action Proposal. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019. 2
+[38] Fuchen Long, Ting Yao, Zhaofan Qiu, Xinmei Tian, Jiebo Luo, and Tao Mei. Gaussian Temporal Awareness Networks for Action Localization. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019. 2
+[39] Jikai Lu, Jianhui Chen, and James J. Little. Pan-tilt-zoom SLAM for Sports Videos. In British Machine Vision Conference (BMVC), September 2019. 1
+[40] Mehrtash Manafifard, Hamid Ebadi, and Hamid Abrishami Moghaddam. A survey on player tracking in soccer videos. Computer Vision and Image Understanding, 159:19-46, 2017. 1
+[41] William McNally, Kanav Vats, Tyler Pinto, Chris Dulhanty, John McPhee, and Alexander Wong. GolfDB: A Video Database for Golf Swing Sequencing. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2019. 2
+[42] Thomas B. Moeslund, Graham Thomas, and Adrian Hilton. Computer Vision in Sports. Springer, 2014. 2
+
+[43] AJ Piergiovanni and Michael S. Ryoo. Fine-Grained Activity Recognition in Baseball Videos. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2018. 2
+[44] Vignesh Ramanathan, Jonathan Huang, Sami Abu-El-Haija, Alexander Gorban, Kevin Murphy, and Li Fei-Fei. Detecting events and key actors in multi-person videos. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. 2
+[45] Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You Only Look Once: Unified, Real-Time Object Detection. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. 4
+[46] Reede Ren and Joemon Jose. Football video segmentation based on video production strategy. In European Conference on Information Retrieval (ECIR), March 2005. 2
+[47] Konstantinos Rematas, Ira Kemelmacher-Shlizerman, Brian Curless, and Steve Seitz. Soccer on Your Tabletop. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. 1
+[48] Sara Sabour, Nicholas Frosst, and Geoffrey E. Hinton. Dynamic Routing Between Capsules. In Advances in Neural Information Processing Systems 30 (NeurIPS), December 2017. 4, 5
+[49] Melissa Sanabria, Frédéric Precioso, Thomas Menguy, et al. A Deep Architecture for Multimodal Summarization of Soccer Games. In ACM International Conference on Multimedia (ACM-MM) Workshops, October 2019. 2
+[50] Saikat Sarkar, Amlan Chakrabarti, and Dipti Prasad Mukherjee. Generation of Ball Possession Statistics in Soccer Using Minimum-Cost Flow Network. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2019. 1
+[51] Pushkar Shukla, Hemant Sadana, Apar Bansal, Deepak Verma, Carlos Elmadjian, Balasubramanian Raman, and Matthew Turk. Automatic Cricket Highlight Generation Using Event-Driven and Excitement-Based Features. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2018. 2
+[52] Gunnar A. Sigurdsson, Olga Russakovsky, and Abhinav Gupta. What actions are needed for understanding human actions in videos? In IEEE International Conference on Computer Vision (ICCV), October 2017. 2, 7
+[53] Gunnar A. Sigurdsson, Gül Varol, Xiaolong Wang, Ali Farhadi, Ivan Laptev, and Abhinav Gupta. Hollywood in homes: Crowdsourcing data collection for activity understanding. In European Conference on Computer Vision (ECCV), October 2016. 2
+[54] Statista. Computer vision artificial intelligence (AI) market revenues worldwide, from 2015 to 2019, by application (in million U.S. dollars), 2016. Retrieved October 30, 2019, from https://www.statista.com/statistics/641922/worldwide-artificial-intelligence-computer-vision-market-revenues/. 1
+[55] Mostafa Tavassolipour, Mahmood Karimian, and Shohreh Kasaei. Event detection and summarization in soccer videos using Bayesian network and copula. IEEE Transactions on Circuits and Systems for Video Technology, 24(2):291-304, 2014. 2
+
+[56] Rajkumar Theagarajan, Federico Pala, Xiu Zhang, and Bir Bhanu. Soccer: Who Has the Ball? Generating Visual Analytics and Player Statistics. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2018. 1
+[57] Graham Thomas, Rikke Gade, Thomas B. Moeslund, Peter Carr, and Adrian Hilton. Computer vision for sports: Current applications and research topics. Computer Vision and Image Understanding, 159:3-18, 2017. 1, 2
+[58] Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning Spatiotemporal Features with 3D Convolutional Networks. In IEEE International Conference on Computer Vision (ICCV), December 2015. 5
+[59] Grigorios Tsagkatakis, Mustafa Jaber, and Panagiotis Tsakalides. Goal!! Event detection in sports video. Journal of Electronic Imaging, 2017(16):15-20, 2017. 2
+[60] Takamasa Tsunoda, Yasuhiro Komori, Masakazu Matsugu, and Tatsuya Harada. Football Action Recognition Using Hierarchical LSTM. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, July 2017. 2
+[61] Francesco Turchini, Lorenzo Seidenari, Leonardo Galteri, Andrea Ferracani, Giuseppe Becchi, and Alberto Del Bimbo. Flexible Automatic Football Filming and Summarization. In ACM International Conference on Multimedia (ACM-MM) Workshops, October 2019. 2
+[62] Kanav Vats, Mehrnaz Fani, Pascale Walters, David A. Clausi, and John Zelek. Event detection in coarsely annotated sports videos via 1D temporal convolutions, 2019. Preprint at https://bit.ly/3b4TiTf. 5
+[63] Ying Yang and Danyang Li. Robust player detection and tracking in broadcast soccer video based on enhanced particle filter. Journal of Visual Communication and Image Representation, 46:81-94, 2017. 1
+[64] Serena Yeung, Olga Russakovsky, Ning Jin, Mykhaylo Andriluka, Greg Mori, and Li Fei-Fei. Every moment counts: Dense detailed labeling of actions in complex videos. International Journal of Computer Vision, 126(2-4):375-389, 2018. 2
+[65] Huanyu Yu, Shuo Cheng, Bingbing Ni, Minsi Wang, Jian Zhang, and Xiaokang Yang. Fine-Grained Video Captioning for Sports Narrative. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. 2
+[66] Amir R. Zamir, Alexander Sax, William Shen, Leonidas J. Guibas, Jitendra Malik, and Silvio Savarese. Taskonomy: Disentangling Task Transfer Learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. 6
+[67] Dan Zecha, Moritz Einfalt, and Rainer Lienhart. Refining Joint Locations for Human Pose Tracking in Sports Videos. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2019. 1
+[68] Runhao Zeng, Wenbing Huang, Mingkui Tan, Yu Rong, Peilin Zhao, Junzhou Huang, and Chuang Gan. Graph convolutional networks for temporal action localization. In IEEE International Conference on Computer Vision (ICCV), pages 7094-7103, 2019. 8
+[69] Yue Zhao, Yuanjun Xiong, Limin Wang, Zhirong Wu, Xiaoou Tang, and Dahua Lin. Temporal action detection with structured segment networks. In IEEE International Conference on Computer Vision (ICCV), October 2017. 2
\ No newline at end of file
diff --git a/acontextawarelossfunctionforactionspottinginsoccervideos/images.zip b/acontextawarelossfunctionforactionspottinginsoccervideos/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..41a8f1acf6ece1e7b563b0c5af4743676a6b29ae
--- /dev/null
+++ b/acontextawarelossfunctionforactionspottinginsoccervideos/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:829e97b582dd3d51b73d42f7aae3dee336b2a05ba9ba4a5474390408cecc7eb7
+size 437297
diff --git a/acontextawarelossfunctionforactionspottinginsoccervideos/layout.json b/acontextawarelossfunctionforactionspottinginsoccervideos/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..f9e694002ca27b8570cd9f481efc445fcee01174
--- /dev/null
+++ b/acontextawarelossfunctionforactionspottinginsoccervideos/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9e0e4b9c6ddf9f6a5230eb5324c73a3188414cf250cec46e03bd95b502e91c04
+size 497466
diff --git a/adisentanglinginvertibleinterpretationnetworkforexplaininglatentrepresentations/ad23c253-258e-45b2-8e98-73c13f8dd34f_content_list.json b/adisentanglinginvertibleinterpretationnetworkforexplaininglatentrepresentations/ad23c253-258e-45b2-8e98-73c13f8dd34f_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..2b419f2ab8d7172f73caccc5bf0ffdd6a338904c
--- /dev/null
+++ b/adisentanglinginvertibleinterpretationnetworkforexplaininglatentrepresentations/ad23c253-258e-45b2-8e98-73c13f8dd34f_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9a089c5d0e4f3a4821d0996cf42762f6a10d94f3cf3ae45304d7500759a08851
+size 74057
diff --git a/adisentanglinginvertibleinterpretationnetworkforexplaininglatentrepresentations/ad23c253-258e-45b2-8e98-73c13f8dd34f_model.json b/adisentanglinginvertibleinterpretationnetworkforexplaininglatentrepresentations/ad23c253-258e-45b2-8e98-73c13f8dd34f_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..93f0c52b7d2f2b6f3a6798698f31a379734474b4
--- /dev/null
+++ b/adisentanglinginvertibleinterpretationnetworkforexplaininglatentrepresentations/ad23c253-258e-45b2-8e98-73c13f8dd34f_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:772907c2834663756e189522224d85f94e23373ba6b7ed1694451e5f4b4b3914
+size 92312
diff --git a/adisentanglinginvertibleinterpretationnetworkforexplaininglatentrepresentations/ad23c253-258e-45b2-8e98-73c13f8dd34f_origin.pdf b/adisentanglinginvertibleinterpretationnetworkforexplaininglatentrepresentations/ad23c253-258e-45b2-8e98-73c13f8dd34f_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..43f3c2f153ba439c7af250ebc866a5dd5f0a4aa3
--- /dev/null
+++ b/adisentanglinginvertibleinterpretationnetworkforexplaininglatentrepresentations/ad23c253-258e-45b2-8e98-73c13f8dd34f_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4a71b4fc83e246e6c7a5641e3aed191d1df2bf7c18fba0652dd5baca53a03d40
+size 2696383
diff --git a/adisentanglinginvertibleinterpretationnetworkforexplaininglatentrepresentations/full.md b/adisentanglinginvertibleinterpretationnetworkforexplaininglatentrepresentations/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..2784c4cdd504b56aeb0d09b201d04ed7bab89b74
--- /dev/null
+++ b/adisentanglinginvertibleinterpretationnetworkforexplaininglatentrepresentations/full.md
@@ -0,0 +1,294 @@
+# A Disentangling Invertible Interpretation Network for Explaining Latent Representations
+
+Patrick Esser* Robin Rombach* Björn Ommer
+Heidelberg Collaboratory for Image Processing
+IWR, Heidelberg University, Germany
+
+# Abstract
+
+Neural networks have greatly boosted performance in computer vision by learning powerful representations of input data. The drawback of end-to-end training for maximal overall performance are black-box models whose hidden representations are lacking interpretability: Since distributed coding is optimal for latent layers to improve their robustness, attributing meaning to parts of a hidden feature vector or to individual neurons is hindered. We formulate interpretation as a translation of hidden representations onto semantic concepts that are comprehensible to the user. The mapping between both domains has to be bijective so that semantic modifications in the target domain correctly alter the original representation. The proposed invertible interpretation network can be transparently applied on top of existing architectures with no need to modify or retrain them. Consequently, we translate an original representation to an equivalent yet interpretable one and backwards without affecting the expressiveness and performance of the original. The invertible interpretation network disentangles the hidden representation into separate, semantically meaningful concepts. Moreover, we present an efficient approach to define semantic concepts by only sketching two images and also an unsupervised strategy. Experimental evaluation demonstrates the wide applicability to interpretation of existing classification and image generation networks as well as to semantically guided image manipulation.
+
+# 1. Introduction
+
+Deep neural networks have achieved unprecedented performance in various computer vision tasks [51, 15] by learning task-specific hidden representations rather than relying on predefined hand-crafted image features. However, the significant performance boost due to end-to-end learning comes at the cost of now having black-box models that are lacking interpretability: A deep network may have found
+
+
+Figure 1: Our Invertible Interpretation Network $T$ can be applied to arbitrary existing models. Its invertibility guarantees that the translation from $z$ to $\tilde{z}$ does not affect the performance of the model to be interpreted. Code and results can be found at the project page https://compvis.github.io/iiin/.
+
+a solution to a task, but human users cannot understand the causes for the predictions that the model makes [36]. Conversely, users must also be able to understand what the hidden representations have not learned and on which data the overall model will, consequently, fail. Interpretability is therefore a prerequisite for safeguarding AI, making its decisions transparent to the users it affects, and understanding its applicability, limitations, and the most promising options for its future improvement.
+
+A key challenge is that learned latent representations typically do not correspond to semantic concepts that are comprehensible to human users. Hidden layer neurons are trained to help in solving an overall task in the output layer of the network. Therefore, the output neurons correspond to human-interpretable concepts such as object classes in semantic image segmentation [3] or bounding boxes in object detection [48]. In contrast, the hidden layer representation of semantic concepts is a distributed pattern [9]. This distributed coding is crucial for the robustness and generalization performance of neural networks despite noisy inputs, large intra-class variability, and the stochastic nature of the learning algorithm [13]. However, as a downside of semantics being distributed over numerous neurons, it is impossible to attribute semantic meaning to only an individual neuron despite attempts to backpropagate [37] or synthesize [47, 52] their associated semantic concepts. One solution has been to modify and constrain the network so that abstract concepts can be localized in the hidden representation [55]. However, this alters the network architecture and typically deteriorates overall performance [45].
+
+Objective: Therefore, our goal needs to be an approach that can be transparently applied on top of arbitrary existing networks and their already learned representations without altering them. We seek a translation between these hidden representations and human-comprehensible semantic concepts—a non-linear mapping between the two domains. This translation needs to be invertible, i.e. an invertible neural network (INN) [5, 6, 19, 22], so that modifications in the domain of semantic concepts correctly alter the original representation.
+
+To interpret a representation, we need to attribute meaning to parts of the feature encoding. That is, we have to disentangle the high-dimensional feature vector into multiple multi-dimensional factors so that each is mapped to a separate semantic concept that is comprehensible to the user. As discussed above, this disentangled mapping should be bijective so that modifications of the disentangled semantic factors correctly translate back to the original representation. We can now, without any supervision, disentangle the representation into independent concepts so that a user can post-hoc identify their meaning. Moreover, we present an efficient strategy for defining semantic concepts. It only requires two sketches that exhibit a change in a concept of interest rather than large annotated training sets for each concept. Given this input, we derive the invariance properties that characterize a concept and generate synthesized training data to train our invertible interpretation network. This network then acts as a translator that disentangles the original representation into multiple factors that correspond to the semantic concepts.
+
+Besides interpreting a network representation, we can also interpret the structure that is hidden in a dataset and explain it to the user. Applying the original representation and then translating onto the disentangled semantic factors allows seeing which concepts explain the data and its variability. Finally, the invertible translation supports semantically meaningful modifications of input images: Given an autoencoder representation, it is mapped onto interpretable factors, which can be modified, and the inverse translation allows applying the decoder to project back into
+the image domain. In contrast to existing disentangled image synthesis [34, 8, 32, 24, 7], our invertible approach can be applied on top of existing autoencoder representations, which therefore do not have to be altered or retrained to handle different semantic concepts. Moreover, for other architectures such as classification networks, interpretability helps to analyze their invariance and robustness.
+
+To summarize, $(i)$ we present a new approach to the interpretability of neural networks, which can be applied to arbitrary existing models without affecting their performance; $(ii)$ we obtain an invertible translation from hidden representations to disentangled representations of semantic concepts; $(iii)$ we propose a method that allows users to efficiently define semantic concepts to be used for our interpretable representation; $(iv)$ we investigate the interpretation of hidden representations, of the original data, and demonstrate semantic image modifications enabled by the invertibility of the translation network.
+
+# 2. Interpretability
+
+An interpretation is a translation between two domains such that concepts of the first domain can be understood in terms of concepts of the second domain. Here, we are interested in interpretations of internal representations of a neural network in terms of human-understandable representations. Examples for the latter are given by textual descriptions, visual attributes or images.
+
+To interpret neural networks, some approaches modify network architectures or losses used for training to obtain inherently more interpretable networks. [55] relies on a global average pooling layer to obtain class activation maps, i.e. heatmaps showing which regions of an input are most relevant for the prediction of a certain object class. [54] learn part specific convolutional filters by restricting filter activations to localized regions. Invertible neural networks [5, 6, 19, 22] have been used to get a better understanding of adversarial attacks [18]. Instead of replacing existing architectures with invertible ones, we propose to augment them with invertible transformations. Using the invertibility, we can always map back and forth between original representations and interpretable ones without loss of information. Thus, our approach can be applied to arbitrary existing architectures without affecting their performance, whereas approaches modifying architectures always involve a tradeoff between interpretability and performance.
+
+Most works on interpretability of existing networks focus on visualizations. [53] reconstruct images which activated a specific feature layer of a network. [47] uses gradient ascent to synthesize images which maximize class probabilities for different object classes. [52] generalizes this to arbitrary neurons within a network. Instead of directly optimizing over pixel values, [38] optimize over input codes of a generator network which was trained to reconstruct images from hidden layers. [55] avoid synthesizing images from scratch and look for regions within a given image that activate certain neurons. For a specific class of functions, [1] decompose the function into relevance scores which can be visualized pixel-wise. Layer-wise relevance propagation [37] is a more general approach to propagate relevance scores through a network based on rules to distribute the relevance among input neurons. [41] shows how saliency maps representing the importance of pixels for a classifier's decision can be obtained without access to the classifier's gradients. All these approaches assume that a fixed set of neurons is given and should be interpreted in terms of inputs which activate them. However, [2], [9] demonstrated that networks use distributed representations. In particular, semantic concepts are encoded by activation patterns of multiple neurons and single neurons are not concept specific but involved in the representation of different concepts. We directly address this finding by learning a non-linear transformation from a distributed representation to an interpretable representation with concept specific factors.
+
+While [9] shows that for general networks we must expect internal representations to be distributed, there are situations where representations can be expected to have a simpler structure: Generative models are trained with the explicit goal to produce images from samples of a simple distribution, e.g. a Gaussian distribution. Most approaches are based either on Variational Autoencoders [23, 44], which try to reconstruct images from a representation whose marginal distribution is matched to a standard normal distribution, or on Generative Adversarial Networks [11, 14, 39], which directly map samples from a standard normal distribution to realistic looking images as judged by a discriminator network. The convexity of the Gaussian density makes linear operations between representations meaningful. Linear interpolations between representations enable walks along nonlinear data manifolds [42]. [26] finds visual attribute vectors which can be used to interpolate between binary attributes. To this end, two sets of images containing examples with or without an attribute are encoded to their representations, and the direction between their means is the visual attribute vector. Such attribute vectors have also been found for classifier networks [49], but because their representations have no linear structure, the approach is limited to aligned images. [42, 43] demonstrated that vector arithmetic also enables analogy making. [46] interprets the latent space of a GAN by finding attribute vectors as the normal direction of the decision boundary of an attribute classifier. [10] uses a similar approach to find attribute vectors associated with cognitive properties such as memorability, aesthetics and emotional valence. While these approaches provide enhanced interpretability through modification of attributes they are limited to representations with a linear structure. In contrast, we provide an approach
+to map arbitrary representations into a space of interpretable representations. This space consists of factors representing semantic attributes and admits linear operations. Thus, we can perform semantic modifications in our interpretable space and, due to the invertibility of our transformation, map the modified representation back to the original space.
+
+# 3. Approach
+
+# 3.1. Interpreting Hidden Representations
+
+Invertible Transformation of Hidden Representations: Let $f$ be a given neural network to be interpreted. We place no restrictions on the network $f$ . For example, $f$ could be an object classifier, a segmentation network or an autoencoder. $f$ maps an image $x \in \mathbb{R}^{h \times w \times 3}$ through a sequence of hidden layers to a final output $f(x)$ . Intermediate activations $E(x) \in \mathbb{R}^{H \times W \times C}$ of a hidden layer are a task-specific representation of the image $x$ . Such hidden representations convey no meaning to a human and we must transform them into meaningful representations. We introduce the notation $z = E(x) \in \mathbb{R}^{H \cdot W \cdot C}$ , i.e. $z$ is the $N = H \cdot W \cdot C$ dimensional, flattened version of the hidden representation to be interpreted. $E$ is the sub-network of $f$ consisting of all layers of $f$ up to and including the hidden layer that produces $z$ , and the sub-network after this layer will be denoted by $G$ , such that $f(x) = G \circ E(x)$ as illustrated in Fig. 1.
+
+To turn $z$ into an interpretable representation, we aim to translate the distributed representation $z$ to a factorized representation $\tilde{z} = (\tilde{z}_k)_{k=0}^K \in \mathbb{R}^N$ where each of the $K + 1$ factors $\tilde{z}_k \in \mathbb{R}^{N_k}$ , with $\sum_{k=0}^{K} N_k = N$ , represents an interpretable concept. The goal of this translation is twofold: On the one hand, it should enable an analysis of the relationship between data and internal representations of $f$ in terms of interpretable concepts; this requires a forward map $T$ from $z$ to $T(z) = \tilde{z}$ . On the other hand, it should enable semantic modifications on internal representations of $f$ ; this requires the inverse of $T$ . With this inverse map, $T^{-1}$ , an internal representation $z$ can be mapped to $\tilde{z}$ , modified in semantically meaningful ways to obtain $\tilde{z}^*$ (e.g. changing a single interpretable concept), and mapped back to an internal representation of $f$ . This way, semantic modifications, $\tilde{z} \mapsto \tilde{z}^*$ , which were previously only defined on $\tilde{z}$ can be applied to internal representations via $z \mapsto z^* := T^{-1}(T(z)^*)$ . See Fig. 2 for an example, where $z$ is modified by replacing one of its semantic factors $\tilde{z}_k$ with that of another image.
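+
+As a concrete illustration, a minimal NumPy sketch of this factor swap $z \mapsto z^* = T^{-1}(T(z)^*)$ is given below; `T`, `T_inv`, and the factor layout are hypothetical stand-ins for a trained translator and its factor dimensions, not part of the method description itself.
+
+```python
+import numpy as np
+
+def swap_factor(z_target, z_source, T, T_inv, factor_slices, k):
+    """Replace factor k of T(z_target) by the corresponding factor of T(z_source)."""
+    zt = T(z_target).copy()                        # translate target to interpretable space
+    zs = T(z_source)
+    zt[factor_slices[k]] = zs[factor_slices[k]]    # swap the semantic factor
+    return T_inv(zt)                               # map the modified code back to hidden space
+
+# Dummy example: identity translator on a 6-dim code with factors of size 2/2/2.
+T = T_inv = lambda z: z
+slices = [slice(0, 2), slice(2, 4), slice(4, 6)]   # residual, concept 1, concept 2
+z_a, z_b = np.arange(6.0), 10.0 * np.arange(6.0)
+print(swap_factor(z_a, z_b, T, T_inv, slices, k=1))
+```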
+
+Disentangling Interpretable Concepts: For meaningful analysis and modification, each factor $\tilde{z}_k$ must represent a specific interpretable concept and taken together, $\tilde{z}$ should support a wide range of modifications. Most importantly, it must be possible to analyze and modify different factors $\tilde{z}_k$ independently of each other. This implies a factorization of their joint density $p(\tilde{z}) = \prod_{k=0}^{K} p(\tilde{z}_k)$ . To explore different
+
+
+Figure 2: Applied to latent representations $z$ of an autoencoder, our approach enables semantic image analogies. After transforming $z$ to disentangled semantic factors $(\tilde{z}_k)_{k=0}^K = T(z)$, we replace $\tilde{z}_k$ of the target image (leftmost column), with $\tilde{z}_k$ of the source image (top row). From left to right: $k = 1$ (digit), $k = 2$ (color), $k = 0$ (residual).
+
+factors, the distribution $p(\tilde{z}_k)$ of each factor must be easy to sample from to gain insights into the variability of a factor, and interpolations between two samples of a factor must be valid samples to analyze changes along a path. We thus specify each factor to be normally distributed which gives
+
+$$
+p (\tilde {z}) = \prod_ {k = 0} ^ {K} \mathcal {N} (\tilde {z} _ {k} | 0, \mathbf {1}) \tag {1}
+$$
+
+Without additional constraints, the semantics represented by a factor $\tilde{z}_k$ are unspecified. To fix this, we demand that (i) each factor $\tilde{z}_k$ varies with one and only one interpretable concept and (ii) it is invariant with respect to all other variations. Thus, let there be training image pairs $(x^a,x^b)$ which specify semantics through their similarity, e.g. image pairs containing animals of the same species to define the semantic concept of 'animal species'. Each semantic concept $F\in \{1,\ldots ,K\}$ defined by such pairs shall be represented by the corresponding factor $\tilde{z}_F$ and we write $(x^{a},x^{b})\sim p(x^{a},x^{b}|F)$ to emphasize that $(x^{a},x^{b})$ is a training pair for factor $\tilde{z}_F$ . However, we cannot expect to have examples of image pairs for every semantic concept relevant in $z$ . Still, all factors together, $\tilde{z} = (\tilde{z}_k)_{k = 0}^K$ , must be in one-to-one correspondence with the original representation, i.e. $z = T^{-1}(\tilde{z})$ . Therefore, we introduce $\tilde{z}_0$ to act as a residual concept that captures the remaining variability of $z$ which is missed by the semantic concepts $F = 1,\dots ,K$ .
+
+For a given training pair $(x^{a}, x^{b}) \sim p(x^{a}, x^{b} | F)$ , the corresponding factorized representations, $\tilde{z}^{a} = T(E(x^{a}))$ and $\tilde{z}^{b} = T(E(x^{b}))$ , must now (i) mirror the semantic similarity of $(x^{a}, x^{b})$ in its $F$ -th factor and (ii) be invariant in the remaining factors. This is expressed by a positive correlation factor $\sigma_{ab} \in (0, 1)$ for the $F$ -th factor between pairs,
+
+$$
+\tilde {z} _ {F} ^ {b} \sim \mathcal {N} \left(\tilde {z} _ {F} ^ {b} \mid \sigma_ {a b} \tilde {z} _ {F} ^ {a}, (1 - \sigma_ {a b} ^ {2}) \mathbf {1}\right) \tag {2}
+$$
+
+and no correlation for the remaining factors between pairs,
+
+$$
+\tilde {z} _ {k} ^ {b} \sim \mathcal {N} (\tilde {z} _ {k} ^ {b} | 0, \mathbf {1}) \quad k \in \{0, \dots , K \} \backslash \{F \} \tag {3}
+$$
+
+To fit this model to data, we utilize the invertibility of $T$ to directly compute and maximize the likelihood of pairs $(z^a, z^b) = (E(x^a), E(x^b))$ . We compute the likelihood with the absolute value of the Jacobian determinant of $T$ , denoted $|T'(\cdot)|$ , as
+
+$$
+\begin{aligned}
+p\left(z^{a}, z^{b} \mid F\right) &= p\left(z^{a}\right)\, p\left(z^{b} \mid z^{a}, F\right) && (4) \\
+&= \left| T^{\prime}\left(z^{a}\right) \right| \, p\left(T\left(z^{a}\right)\right) \cdot && (5) \\
+&\qquad \left| T^{\prime}\left(z^{b}\right) \right| \, p\left(T\left(z^{b}\right) \mid T\left(z^{a}\right), F\right) && (6)
+\end{aligned}
+$$
+
+To be able to compute the Jacobian determinant efficiently, we follow previous works [22] and build $T$ based on ActNorm, AffineCoupling and Shuffling layers as described in more detail in Sec. A.1 of the supplementary. For training we use the negative log-likelihood as our loss function. Substituting Eq. (1) into Eq. (5), Eq. (2) and (3) into Eq. (6), leads to the per-example loss $\ell(z^a, z^b | F)$ ,
+
+$$
+\begin{aligned}
+\ell\left(z^{a}, z^{b} \mid F\right) ={} & \sum_{k=0}^{K} \left\| T\left(z^{a}\right)_{k} \right\|^{2} - \log \left| T^{\prime}\left(z^{a}\right) \right| && (7) \\
+& + \sum_{k \neq F} \left\| T\left(z^{b}\right)_{k} \right\|^{2} - \log \left| T^{\prime}\left(z^{b}\right) \right| && (8) \\
+& + \frac{\left\| T\left(z^{b}\right)_{F} - \sigma_{ab} T\left(z^{a}\right)_{F} \right\|^{2}}{1 - \sigma_{ab}^{2}} && (9)
+\end{aligned}
+$$
+
+which is optimized over training pairs $(x^{a}, x^{b})$ for all semantic concepts $F \in \{1, \ldots, K\}$ :
+
+$$
+\mathcal {L} = \sum_ {F = 1} ^ {K} \mathbb {E} _ {\left(x ^ {a}, x ^ {b}\right) \sim p \left(x ^ {a}, x ^ {b} \mid F\right)} \ell \left(E \left(x ^ {a}\right), E \left(x ^ {b}\right) \mid F\right) \tag {10}
+$$
+
+Note that we have described the case where image pairs share at least one semantic concept, which includes the case where they share more than one semantic concept. Moreover, our approach is readily applicable in the case where image pairs differ in a semantic concept. In this case, Eq. (2) holds for all factors $\tilde{z}_k^b$ , $k \in \{0, \dots, K\} \setminus \{F\}$ and Eq. (3) holds for factor $\tilde{z}_F^b$ . This case will also be used in the next section, where we discuss the dimensionality and origin of semantic concepts.
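+
+For illustration, a minimal NumPy sketch of the per-pair loss in Eqs. (7)-(9) is shown below; it assumes the translator has already produced the factorized codes $T(z^a)$, $T(z^b)$ and the log-determinants $\log|T'(z^a)|$, $\log|T'(z^b)|$, which would normally come from the invertible network.
+
+```python
+import numpy as np
+
+def pair_loss(za_factors, zb_factors, logdet_a, logdet_b, F, sigma_ab):
+    # Eq. (7): squared norms of all factors of z^a minus log|T'(z^a)|
+    loss = sum(np.sum(f ** 2) for f in za_factors) - logdet_a
+    # Eq. (8): squared norms of all factors of z^b except factor F, minus log|T'(z^b)|
+    loss += sum(np.sum(f ** 2) for k, f in enumerate(zb_factors) if k != F) - logdet_b
+    # Eq. (9): correlation term tying factor F of z^b to factor F of z^a
+    diff = zb_factors[F] - sigma_ab * za_factors[F]
+    loss += np.sum(diff ** 2) / (1.0 - sigma_ab ** 2)
+    return loss
+
+# Toy example: three factors (residual, concept 1, concept 2), concept F = 1 is shared.
+za = [np.zeros(4), np.ones(3), np.zeros(5)]
+zb = [np.zeros(4), 0.9 * np.ones(3), np.zeros(5)]
+print(pair_loss(za, zb, logdet_a=0.0, logdet_b=0.0, F=1, sigma_ab=0.9))
+```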
+
+# 3.2. Obtaining Semantic Concepts
+
+Estimating Dimensionality of Factors: Semantic concepts differ in complexity and thus also in dimensionality. Given image pairs $(x^{a}, x^{b}) \sim p(x^{a}, x^{b} | F)$ that define the $F$ -th semantic concept, we must estimate the dimensionality of factor $\tilde{z}_{F}$ that represents this concept. Due to the invertibility of $T$ , the sum of dimensions of all these factors equals the dimensionality of the original representation. Thus, semantic concepts captured by the network $E$ require a larger share of the overall dimensionality than those $E$ is invariant to.
+
+
+Figure 3: Efficient generation of training examples for semantic concepts: A user must only provide two sketches (first row) for a change of a semantic concept, here: roundness. We then synthesize training images to reflect this semantic change.
+
+The similarity of $x^{a}, x^{b}$ in the $F$ -th semantic concept implies a positive mutual information between them, which will only be preserved in the latent representations $E(x^{a}), E(x^{b})$ if the $F$ -th semantic concept is captured by $E$ . Thus, based on the simplifying assumption that components of hidden representations $E(x^{a})_{i}, E(x^{b})_{i}$ are jointly Gaussian distributed, we approximate their mutual information with their correlation for each component $i$ . Summing over all components $i$ yields a relative score $s_{F}$ that serves as proxy for the dimensionality of $\tilde{z}_{F}$ in case of training images $(x^{a}, x^{b}) \sim p(x^{a}, x^{b}|F)$ for concept $F$ ,
+
+$$
+s_{F} = \sum_{i} \frac{\operatorname{Cov}\left(E\left(x^{a}\right)_{i}, E\left(x^{b}\right)_{i}\right)}{\sqrt{\operatorname{Var}\left(E\left(x^{a}\right)_{i}\right) \operatorname{Var}\left(E\left(x^{b}\right)_{i}\right)}}. \tag{11}
+$$
+
+Since correlation is in $[-1,1]$ , scores $s_F$ are in $[-N,N]$ for $N$ -dimensional latent representations of $E$ . Using the maximum score $N$ for the residual factor $\tilde{z}_0$ ensures that all factors have equal dimensionality if all semantic concepts are captured by $E$ . The dimensionality $N_F$ of $\tilde{z}_F$ is then $N_F = \left\lfloor \frac{\exp s_F}{\sum_{k=0}^{K} \exp s_k} N \right\rfloor$ . Tab. 1 demonstrates the feasibility
+
+| Dataset | Model | Latent $z$ Dim. | Interpretable $\tilde{z}$ Dim. | Factor $\tilde{z}_F$ |
+| --- | --- | --- | --- | --- |
+| Color-MNIST | AE | 64 | 12 | Digit |
+| Color-MNIST | AE | 64 | 19 | Color |
+| Color-MNIST | Classifier | 64 | 22 | Digit |
+| Color-MNIST | Classifier | 64 | 11 | Color |
+
+Table 1: Estimated dimensionalities of interpretable factors $\tilde{z}_F$ representing different semantic concepts. Remaining dimensions are assigned to the residual factor $\tilde{z}_0$ . Compared to an autoencoder, the color factor is smaller in case of a color-invariant classifier.
+
+
+Figure 4: The inverse of our interpretation network $T$ maps linear walks in the interpretable domain back to nonlinear walks on the data manifold in the encoder space, which get decoded to meaningful images (bottom right). In contrast, decoded images of linear walks in the encoder space contain ghosting artifacts (bottom left).
+
+of predicting dimensionalities with this approximation.
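+
+A minimal NumPy sketch of this dimensionality estimate is given below; it assumes paired encodings $E(x^a)$, $E(x^b)$ are available as arrays and, for numerical stability, evaluates the softmax-style allocation after subtracting the maximum score, which leaves the ratio in the formula unchanged.
+
+```python
+import numpy as np
+
+def concept_score(Ea, Eb):
+    """Eq. (11): sum of per-component correlations of paired encodings."""
+    Ea_c = Ea - Ea.mean(axis=0)
+    Eb_c = Eb - Eb.mean(axis=0)
+    cov = (Ea_c * Eb_c).mean(axis=0)
+    std = Ea_c.std(axis=0) * Eb_c.std(axis=0)
+    return float(np.sum(cov / (std + 1e-8)))
+
+def factor_dims(scores, N):
+    """N_F = floor(exp(s_F) / sum_k exp(s_k) * N); the residual factor gets score N."""
+    s = np.array([float(N)] + list(scores))        # index 0 is the residual factor
+    w = np.exp(s - s.max())                        # stable evaluation of the ratio
+    dims = np.floor(w / w.sum() * N).astype(int)
+    dims[0] += N - dims.sum()                      # assign remaining dims to the residual
+    return dims
+
+rng = np.random.default_rng(0)
+Ea = rng.normal(size=(1000, 64))
+Eb = 0.8 * Ea + 0.2 * rng.normal(size=(1000, 64))  # encodings of concept-sharing pairs
+print(factor_dims([concept_score(Ea, Eb)], N=64))
+```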
+
+Sketch-Based Description of Semantic Concepts: Training requires the availability of image pairs that depict changes in a semantic concept. Most often, a sufficiently large number of such examples is not easy to obtain. The following describes an approach to help a user specify semantic concepts effortlessly.
+
+Two sketches are worth a thousand labels: Instead of labeling thousands of images with semantic concepts, a user only has to provide two sketches, $y^{a}$ and $y^{b}$ which demonstrate a change in a concept. For example, one sketch may contain mostly round curves and another mostly angular ones as in Fig. 3. We then utilize a style transfer algorithm [40] to transform each $x$ from the training set into two new images: $x^{a}$ and $x^{b}$ which are stylized with $y^{a}$ and $y^{b}$ , respectively. The combinations $(x, x^{a}), (x, x^{b})$ and $(x^{a}, x^{b})$ serve as examples for a change in the concept of interest.
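+
+A minimal sketch of this pair construction is given below; `style_transfer` is a hypothetical stand-in for the style transfer method of [40].
+
+```python
+def make_pairs(train_images, sketch_a, sketch_b, style_transfer):
+    pairs = []
+    for x in train_images:
+        xa = style_transfer(x, sketch_a)   # x rendered in the style of sketch y^a
+        xb = style_transfer(x, sketch_b)   # x rendered in the style of sketch y^b
+        pairs += [(x, xa), (x, xb), (xa, xb)]
+    return pairs
+
+# Dummy usage with a placeholder transfer that just tags the image.
+print(len(make_pairs(["img0", "img1"], "round", "angular",
+                     style_transfer=lambda x, y: f"{x}|{y}")))
+```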
+
+Unsupervised Interpretations: Even without examples for changes in semantic factors, our approach can still produce disentangled factors. In this case, we minimize the negative log-likelihood of the marginal distribution of hidden representations $z = E(x)$ :
+
+$$
+\mathcal{L}_{\text{unsup}} = \mathbb{E}_{x}\left[\, \|T(E(x))\|^{2} - \log |T^{\prime}(E(x))| \,\right] \tag{12}
+$$
+
+As this leads to independent components in the transformed representation, it allows users to attribute meaning to this representation after training. Mapping a linear interpolation in our disentangled space back to $E$ 's representation space
+
+
+Figure 5: Transfer on AnimalFaces: We combine $\tilde{z}_0$ (residual) of the target image (leftmost column) with $\tilde{z}_1$ (animal class) of the source image (top row), resulting in a transfer of animal type from source to target.
+
+leads to a nonlinear interpolation on the data manifold embedded by $E$ (see Fig. 4). This linear structure allows exploring the representations using vector arithmetic [49]. For example, based on a few examples of images with a change in a semantic concept, we can find a vector representing this concept as the mean direction between these images (see Eq. (14)). In contrast to previous works, we do not rely on disentangled latent representations but learn to translate arbitrary given representations into disentangled ones.
+
+# 4. Experiments
+
+The subsequent experiments use the following datasets: AnimalFaces [28], DeepFashion [29, 31], CelebA [30], MNIST [27], CIFAR-10 [25], and FashionMNIST [50]. Moreover, we augment MNIST by randomly coloring its images to provide a benchmark for disentangling experiments (denoted ColorMNIST).
+
+# 4.1. Interpretation of Autoencoder-Frameworks
+
+Autoencoders learn to reconstruct images from a low-dimensional latent representation $z = E(x)$ . Subsequently, we map $z$ onto interpretable factors to perform semantic image modification. Note that $z$ is only obtained using a given network; our invertible interpretation network has never seen an image itself.
+
+Disentangling Latent Codes of Autoencoders: Now we alter the factors $\tilde{z}_k$, which should in turn modify $z$ in a semantically meaningful manner. This tests two aspects of our translation onto mutually disentangled, interpretable representations: First, if its factors have been successfully disentangled, swapping factors from different images should
+
+
+Figure 6: Transfer on DeepFashion: We combine $\tilde{z}_0$ (residual) of the target image (top row) with $\tilde{z}_1$ (appearance) of the source image (leftmost column), resulting in a transfer of appearances from source to target.
+
+still yield valid representations. Second, if the factors represent their semantic concepts faithfully, modifying a factor should alter its corresponding semantic concept.
+
+To evaluate these aspects, we trained an autoencoder on the AnimalFaces dataset. As semantic concepts we utilize the animal category and a residual factor. Fig. 5 shows the results of combining the residual factor of the image on the left with the animal-class-factor of the image at the top. After decoding, the results depict animals from the class of the image at the top. However, their gaze direction corresponds to the image on the left. This demonstrates a successful disentangling of semantic concepts in our interpreted space.
+
+The previous case has confirmed the applicability of our approach to roughly aligned images. We now test it on unaligned images of articulated persons on DeepFashion. Fig. 6 presents results for attribute swapping as in the previous experiment. Evidently, our approach can handle articulated objects and enables pose guided human synthesis.
+
+Finally, we conduct this swapping experiment on ColorMNIST to investigate simultaneous disentangling of multiple factors. Fig. 2 shows a swapping using an interpretation of three factors: digit type, color, and residual.
+
+Evaluating the Unsupervised Case: To investigate our approach in case of no supervision regarding semantic concepts, we analyze its capability to turn simple autoencoders into generative models. Because our interpretations yield normally distributed representations, we can sample them, translate them back onto the latent space of the autoencoder, and finally decode them to images,
+
+$$
+\tilde {z} \sim \mathcal {N} (\tilde {z} | 0, \mathbf {1}), \quad x = G \left(T ^ {- 1} (\tilde {z})\right). \tag {13}
+$$
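+
+A minimal sketch of this sampling procedure, with hypothetical callables standing in for the trained inverse translator $T^{-1}$ and decoder $G$, could look as follows.
+
+```python
+import numpy as np
+
+def sample_image(T_inv, G, dim, rng=np.random.default_rng()):
+    z_tilde = rng.standard_normal(dim)   # interpretable code sampled from N(0, 1)
+    z = T_inv(z_tilde)                   # map back to the autoencoder's latent space
+    return G(z)                          # decode to an image
+
+# Dummy stand-ins: identity translator and a "decoder" that merely reshapes.
+img = sample_image(T_inv=lambda z: z, G=lambda z: z.reshape(8, 8), dim=64)
+print(img.shape)
+```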
+
+| Model | MNIST | FashionMNIST | CIFAR-10 | CelebA |
+| --- | --- | --- | --- | --- |
+| TwoStageVAE | 12.6 ± 1.5 | 29.3 ± 1.0 | 72.9 ± 0.9 | 44.4 ± 0.7 |
+| WGAN GP | 20.3 ± 5.0 | 24.5 ± 2.1 | 55.8 ± 0.9 | 30.3 ± 1.0 |
+| WGAN | 6.7 ± 0.4 | 21.5 ± 1.6 | 55.2 ± 2.3 | 41.3 ± 2.0 |
+| DRAGAN | 7.6 ± 0.4 | 27.7 ± 1.2 | 69.8 ± 2.0 | 42.3 ± 3.0 |
+| BEGAN | 13.1 ± 1.0 | 22.9 ± 0.9 | 71.9 ± 1.6 | 38.9 ± 0.9 |
+| Ours | 6.4 ± 0.1 | 16.0 ± 0.1 | 45.7 ± 0.3 | 20.2 ± 0.5 |
+
+Table 2: FID scores of various AE-based and GAN models as reported in [4].
+
+We employ the standard evaluation protocol for generative models and measure image quality with the Fréchet Inception Distance (FID). [4] presented an approach to generative modeling using two Variational Autoencoders and achieved results competitive with approaches based on GANs. We follow [4] and use an autoencoder architecture based on [33] and train on the same datasets with losses as in [26]. Tab. 2 presents mean and std of FID scores over three trials with 10K generated images. We significantly improve over the state-of-the-art FID scores reported in [4]. Like GANs, our approach can utilize a learned similarity metric, which enables it to produce high-quality images. In contrast to approaches based on GANs, we can rely on an autoencoder and a reconstruction loss. This enables stable training and avoids the mode-collapse problem of GANs, which explains our improvement in FID.
+
+Besides sampling from the model as described by Eq. (13), our approach supports semantic interpolation in the representation space, since the invertible network constitutes a lossless encoder/decoder framework. We obtain semantic axes $\tilde{z}^{F\rightarrow \bar{F}}$ by encoding two sets of images $X^{F} = \{x^{F}\}$ and $X^{\bar{F}} = \{x^{\bar{F}}\}$, showing examples with an attribute in $X^F$ and without that attribute in $X^{\bar{F}}$. Note that these sets are only required after training, i.e. during test
+
+
+Figure 7: CelebA: Four randomly drawn samples (corners) and corresponding interpolations obtained with unsupervised training, see Sec. 4.1.
+
+time. $\tilde{z}^{F\to \bar{F}}$ is then obtained as the average direction between examples of $X^{F}$ and $X^{\bar{F}}$,
+
+$$
+\tilde {z} ^ {F \rightarrow \bar {F}} = \frac {1}{| X ^ {\bar {F}} |} \sum_ {x ^ {\bar {F}} \in X ^ {\bar {F}}} x ^ {\bar {F}} - \frac {1}{| X ^ {F} |} \sum_ {x ^ {F} \in X ^ {F}} x ^ {F}. \tag {14}
+$$
+
+Such vector arithmetic depends on a meaningful linear structure of our interpreted representation space. We illustrate this structure in Fig. 4. Linear walks in our interpretable space always result in meaningful decoded images, indicating that the backtransformed representations lie on the data manifold. In contrast, decoded images of linear walks in the encoder's hidden representation space contain ghosting artifacts. Consequently, our model can transform nonlinear hidden representations to an interpretable space with linear structure. Fig. 7 visualizes a 2D submanifold on CelebA.
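+
+The following NumPy sketch illustrates such an attribute direction and a linear walk along it; it assumes the two image sets have already been mapped to interpretable codes (e.g. via $T(E(x))$), which are passed in as arrays.
+
+```python
+import numpy as np
+
+def attribute_direction(codes_with, codes_without):
+    # Average direction from examples with the attribute to examples without it.
+    return codes_without.mean(axis=0) - codes_with.mean(axis=0)
+
+def linear_walk(z_tilde, direction, steps=5, scale=1.0):
+    # Linear walk in the interpretable space; each point can be inverted with T^{-1}.
+    return [z_tilde + a * direction for a in np.linspace(0.0, scale, steps)]
+
+rng = np.random.default_rng(1)
+codes_with = rng.normal(size=(32, 64))                # interpretable codes with attribute
+codes_without = rng.normal(1.0, 1.0, size=(32, 64))   # interpretable codes without it
+d = attribute_direction(codes_with, codes_without)
+walk = linear_walk(rng.standard_normal(64), d, steps=4)
+print(len(walk), walk[0].shape)
+```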
+
+Fig. 8 provides an example of interpolation between attributes on the CelebA dataset, as described by Eq. (14). We linearly walk along the beardiness and smiling attributes, increasing the former and decreasing the latter.
+
+
+Figure 8: Interpolating along semantic directions in disentangled representation space: First four rows show interpolations along beardiness, while the last four depict interpolations along smiling attribute. Note the change of gender in rows 1,2,4, reflecting the strong correlation of beard and gender in the original data.
+
+
+Figure 9: Left: Output variance per class of a digit classifier on ColorMNIST, assessed via distribution of log-softmax logits and class predictions. $T$ disentangles $\tilde{z}_0$ (residual), $\tilde{z}_1$ (digit) and $\tilde{z}_2$ (color). Right: 1d disentangled UMAP embeddings of $\tilde{z}_1$ and $\tilde{z}_2$ . See Sec. 4.2.
+
+# 4.2. Interpretation of Classifiers
+
+After interpreting autoencoder architectures we now analyze classification networks: (i) A digit classifier on ColorMNIST (accuracy $\sim 97\%$ ). To interpret this network, we extract hidden representations $z \in \mathbb{R}^{64}$ just before the classification head. (ii) A ResNet-50 classifier [12] trained on classes of AnimalFaces. Hidden representations $z \in \mathbb{R}^{2048}$ are extracted after the fully convolutional layers.
+
+Network Response Analysis: We now analyze how class output probabilities change under manipulations in the interpretation space: First, we train the translator $T$ to disentangle $K$ (plus a residual) distinct factors $\tilde{z}_k$ . For evaluation we modify a single factor $\tilde{z}_k$ while keeping all others fixed. More precisely, we modify $\tilde{z}_k$ by replacing it with samples drawn from a random walk in a harmonic potential (an Ornstein-Uhlenbeck process, see Sec. B of the supplementary), starting at $\tilde{z}_k$ . This yields a sequence of modified factors $(\tilde{z}_k^{(1)}, \tilde{z}_k^{(2)}, \ldots, \tilde{z}_k^{(n)})$ when performing $n$ modification steps. We invert every element in this sequence back to its hidden representation and apply the classifier. We analyze the response of the network to each modified factor $k$ through the distribution of the logits and class predictions.
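+
+A minimal NumPy sketch of this response analysis is given below; the translator, its inverse, the classifier head, and the slice selecting factor $\tilde{z}_k$ are hypothetical stand-ins, and the random walk is a simple discretized Ornstein-Uhlenbeck process.
+
+```python
+import numpy as np
+
+def ou_walk(start, steps, theta=0.1, sigma=0.3, rng=np.random.default_rng()):
+    """Discretized Ornstein-Uhlenbeck walk that is pulled back toward the origin."""
+    xs, x = [], start.copy()
+    for _ in range(steps):
+        x = x + theta * (0.0 - x) + sigma * rng.standard_normal(x.shape)
+        xs.append(x.copy())
+    return xs
+
+def response_to_factor(z, T, T_inv, classifier, factor_slice, steps=10):
+    z_tilde = T(z)
+    preds = []
+    for step in ou_walk(z_tilde[factor_slice], steps):
+        z_mod = np.array(z_tilde, copy=True)
+        z_mod[factor_slice] = step               # modify only the chosen factor
+        preds.append(classifier(T_inv(z_mod)))   # invert and re-classify
+    return preds
+
+# Dummy example: identity translator, "classifier" = argmax over the first 10 dims.
+z = np.random.default_rng(2).standard_normal(64)
+print(response_to_factor(z, lambda v: v, lambda v: v,
+                         lambda v: int(np.argmax(v[:10])), slice(30, 40)))
+```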
+
+
+Figure 10: UMAP embedding of a ColorMNIST classifier's latent space $z = E(x)$ . Colors of dots represent classes of test examples. We map latent representations $z$ to interpretable representations $\tilde{z} = T(z)$ , where we perform a random walk in one of the factors $\tilde{z}_k$ . Using $T^{-1}$ , this random walk is mapped back to the latent space and shown as black crosses connected by gray lines. On the left, a random walk in the digit factor jumps between digit clusters, whereas on the right, a random walk in the color factor stays (mostly) within the digit cluster it starts from.
+
+Interpreting Classifiers to Estimate their Invariance: Network interpretation also identifies the invariance properties of a learned representation. Here we evaluate invariances of a digit classifier to color. We learn a translation $T$ to disentangle digit $\tilde{z}_1$, color $\tilde{z}_2$, and a residual $\tilde{z}_0$. Fig. 9 shows the network response analysis. The distribution of log-softmax values and predicted classes is indeed not sensitive to variations in the color factor, but turns out to be quite responsive when altering the digit representation. We additionally show a UMAP [35] of the reversed factor manipulations in Fig. 10 (in black). Since the entire modification occurs within one cluster, this underlines that $T$ found a disentangled representation and that the classifier is almost invariant to color. Additionally, we apply a separate 1D UMAP dimensionality reduction to each factor and then plot their pair-wise correlation in Fig. 9.
+
+Next, we trained a translator $T$ to evaluate interpretability in the case of the popular ResNet-50. The analysis of three factors, grayscale value $\tilde{z}_1$, roundness $\tilde{z}_2$, and a residual $\tilde{z}_0$, reveals an invariance of the classifier towards grayness but not roundness. More details can be found in Sec. B of the supplementary.
+
+# 5. Conclusion
+
+We have shown that latent representations of black boxes can be translated to interpretable representations where disentangled factors represent semantic concepts. We presented an approach to perform this translation without loss of information. For arbitrary models, we provide the ability to work with interpretable representations which are equivalent to the ones used internally by the model. We have shown how this provides a better understanding of models and data as seen by a model. Invertibility of our approach enables semantic modifications and we showed how it can be used to obtain state-of-the-art autoencoder-based generative models.
+
+# References
+
+[1] Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. *PloS one*, 10(7):e0130140, 2015. 3
+[2] David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. Network dissection: Quantifying interpretability of deep visual representations. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jul 2017. 3
+[3] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected crfs, 2014. 1
+[4] Bin Dai and David Wipf. Diagnosing and enhancing vae models, 2019. 7, 12
+[5] Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: Non-linear independent components estimation, 2014. 2
+[6] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp, 2016. 2, 11
+[7] Patrick Esser, Johannes Haux, and Bjorn Ommer. Unsupervised robust disentangling of latent characteristics for image synthesis. In Proceedings of the IEEE International Conference on Computer Vision, pages 2699-2709, 2019. 2
+[8] Patrick Esser, Ekaterina Sutter, and Björn Ommer. A variational u-net for conditional appearance and shape generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8857-8866, 2018. 2
+[9] Ruth Fong and Andrea Vedaldi. Net2vec: Quantifying and explaining how concepts are encoded by filters in deep neural networks. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Jun 2018. 2, 3
+[10] Lore Goetschalckx, Alex Andonian, Aude Oliva, and Phillip Isola. Ganalyze: Toward visual definitions of cognitive image properties. arXiv preprint arXiv:1906.10112, 2019. 3
+[11] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014. 3
+[12] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 8, 12
+[13] Geoffrey E. Hinton, James L. McClelland, and David E. Rumelhart. Distributed representations. 1986. 2
+[14] Quan Hoang, Tu Dinh Nguyen, Trung Le, and Dinh Phung. MGAN: Training generative adversarial nets with multiple generators. In International Conference on Learning Representations, 2018. 3
+[15] Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jul 2017. 1
+
+[16] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. 11
+[17] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1125-1134, 2017. 11
+[18] Jorn-Henrik Jacobsen, Jens Behrmann, Richard Zemel, and Matthias Bethge. Excessive invariance causes adversarial vulnerability, 2018. 2
+[19] Jorn-Henrik Jacobsen, Arnold Smeulders, and Edouard Oyallon. i-revnet: Deep invertible networks, 2018. 2
+[20] Sergey Karayev, Matthew Trentacoste, Helen Han, Aseem Agarwala, Trevor Darrell, Aaron Hertzmann, and Holger Winnemoeller. Recognizing image style. arXiv preprint arXiv:1311.3715, 2013. 12
+[21] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 12
+[22] Durk P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, pages 10215-10224, 2018. 2, 4, 11
+[23] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. 3
+[24] Dmytro Kotovenko, Artsiom Sanakoyeu, Sabine Lang, and Bjorn Ommer. Content and style disentanglement for artistic style transfer. In Proceedings of the IEEE International Conference on Computer Vision, pages 4422-4431, 2019. 2
+[25] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Cite-seer, 2009. 6
+[26] Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, Hugo Larochelle, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric, 2015. 3, 7
+[27] Yann LeCun. The mnist database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998. 6
+[28] Ming-Yu Liu, Xun Huang, Arun Mallya, Tero Karras, Timo Aila, Jaakko Lehtinen, and Jan Kautz. Few-shot unsupervised image-to-image translation. arXiv preprint arXiv:1905.01723, 2019. 6
+[29] Ziwei Liu, Ping Luo, Shi Qiu, Xiaogang Wang, and Xiaoou Tang. Deepfashion: Powering robust clothes recognition and retrieval with rich annotations. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1096-1104, 2016. 6
+[30] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015. 6
+[31] Ziwei Liu, Sijie Yan, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Fashion landmark detection in the wild. Lecture Notes in Computer Science, page 229–245, 2016. 6
+[32] Dominik Lorenz, Leonard Bereska, Timo Milbich, and Bjorn Ommer. Unsupervised part-based disentangling of object shape and appearance. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10955-10964, 2019. 2
+[33] Mario Lucic, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet. Are gans created equal? a large-scale study, 2017. 7, 11
+[34] Liqian Ma, Qianru Sun, Stamatios Georgoulis, Luc Van Gool, Bernt Schiele, and Mario Fritz. Disentangled person image generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 99-108, 2018. 2
+[35] Leland McInnes, John Healy, and James Melville. Umap: Uniform manifold approximation and projection for dimension reduction, 2018. 8
+[36] Tim Miller. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267:1-38, Feb 2019. 1
+[37] Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, and Klaus-Robert Müller. Explaining nonlinear classification decisions with deep taylor decomposition. Pattern Recognition, 65:211-222, May 2017. 2, 3
+[38] Anh Nguyen, Alexey Dosovitskiy, Jason Yosinski, Thomas Brox, and Jeff Clune. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks, 2016. 2
+[39] Lili Pan, Shen Cheng, Jian Liu, Yazhou Ren, and Zenglin Xu. Latent dirichlet allocation in generative adversarial networks, 2018. 3
+[40] Dae Young Park and Kwang Hee Lee. Arbitrary style transfer with style-attentional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5880–5888, 2019. 5, 12
+[41] Vitali Petsiuk, Abir Das, and Kate Saenko. Rise: Randomized input sampling for explanation of black-box models, 2018. 3
+[42] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks, 2015. 3
+[43] Scott E Reed, Yi Zhang, Yuting Zhang, and Honglak Lee. Deep visual analogy-making. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1252-1260. Curran Associates, Inc., 2015. 3
+[44] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on International Conference on Machine Learning-Volume 32, pages II-1278. JMLR.org, 2014. 3
+[45] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "why should i trust you?". Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD '16, 2016. 2
+[46] Yujun Shen, Jinjin Gu, Xiaoou Tang, and Bolei Zhou. Interpreting the latent space of gans for semantic face editing. arXiv preprint arXiv:1907.10786, 2019. 3
+
+[47] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013. 2
+[48] Christian Szegedy, Alexander Toshev, and Dumitru Erhan. Deep neural networks for object detection. In Advances in neural information processing systems, pages 2553-2561, 2013. 1
+[49] Paul Upchurch, Jacob Gardner, Geoff Pleiss, Robert Pless, Noah Snavely, Kavita Bala, and Kilian Weinberger. Deep feature interpolation for image content changes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7064-7073, 2017. 3, 6
+[50] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017. 6
+[51] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1492-1500, 2017. 1
+[52] Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding neural networks through deep visualization, 2015. 2
+[53] Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. Lecture Notes in Computer Science, page 818-833, 2014. 2
+[54] Quanshi Zhang, Ying Nian Wu, and Song-Chun Zhu. Interpretable convolutional neural networks. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Jun 2018. 2
+[55] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2016. 2, 3
\ No newline at end of file
diff --git a/adisentanglinginvertibleinterpretationnetworkforexplaininglatentrepresentations/images.zip b/adisentanglinginvertibleinterpretationnetworkforexplaininglatentrepresentations/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..a1388780f60de74301b6eafe9d9064a4e6f75b1f
--- /dev/null
+++ b/adisentanglinginvertibleinterpretationnetworkforexplaininglatentrepresentations/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:99ead22c4367bf287d279f1e8f8d13e4e9aec6fe87c521b278ecd537b0288896
+size 542538
diff --git a/adisentanglinginvertibleinterpretationnetworkforexplaininglatentrepresentations/layout.json b/adisentanglinginvertibleinterpretationnetworkforexplaininglatentrepresentations/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..6509912a3f2ebce9f6530d26d3a4242b63f56d95
--- /dev/null
+++ b/adisentanglinginvertibleinterpretationnetworkforexplaininglatentrepresentations/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:24669c8aa8faee3e6852096263e6d2bbcdfa9738aed1b97059f9b7a086018f3b
+size 455164
diff --git a/agraduatedfiltermethodforlargescalerobustestimation/2ca541dd-a670-4997-92c4-460c769f9f3d_content_list.json b/agraduatedfiltermethodforlargescalerobustestimation/2ca541dd-a670-4997-92c4-460c769f9f3d_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..525627034f535f14e89b674f4a07dcfbf74125d8
--- /dev/null
+++ b/agraduatedfiltermethodforlargescalerobustestimation/2ca541dd-a670-4997-92c4-460c769f9f3d_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c8162449c0d0e659572bfdd90765e581353c976fa922a50836a751741d6dde7b
+size 77174
diff --git a/agraduatedfiltermethodforlargescalerobustestimation/2ca541dd-a670-4997-92c4-460c769f9f3d_model.json b/agraduatedfiltermethodforlargescalerobustestimation/2ca541dd-a670-4997-92c4-460c769f9f3d_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..be92eb9f2c6076b36e4711be2da0e2c6ef4a3ec8
--- /dev/null
+++ b/agraduatedfiltermethodforlargescalerobustestimation/2ca541dd-a670-4997-92c4-460c769f9f3d_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:359d4dd11a8e0ea0eadb3bd5a2fc1567880f79cfb560a2032be1811f2d095591
+size 92415
diff --git a/agraduatedfiltermethodforlargescalerobustestimation/2ca541dd-a670-4997-92c4-460c769f9f3d_origin.pdf b/agraduatedfiltermethodforlargescalerobustestimation/2ca541dd-a670-4997-92c4-460c769f9f3d_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b0c2b3d04f950337ee682423cfbad0a29d888aeb
--- /dev/null
+++ b/agraduatedfiltermethodforlargescalerobustestimation/2ca541dd-a670-4997-92c4-460c769f9f3d_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cf50bceb30895430b18dee12c76dd10d870f9172f9e099ff4e28f42a13b7f491
+size 689676
diff --git a/agraduatedfiltermethodforlargescalerobustestimation/full.md b/agraduatedfiltermethodforlargescalerobustestimation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..255b2ee0e1bfc25a129fa988c1867d5067dd9d92
--- /dev/null
+++ b/agraduatedfiltermethodforlargescalerobustestimation/full.md
@@ -0,0 +1,356 @@
+# A Graduated Filter Method for Large Scale Robust Estimation
+
+Huu Le and Christopher Zach Chalmers University of Technology, Sweden
+
+{huul, zach}@chalmers.se
+
+# Abstract
+
+Due to the highly non-convex nature of large-scale robust parameter estimation, avoiding poor local minima is challenging in real-world applications where input data is contaminated by a large or unknown fraction of outliers. In this paper, we introduce a novel solver for robust estimation that possesses a strong ability to escape poor local minima. Our algorithm is built upon the class of traditional graduated optimization techniques, which are considered state-of-the-art local methods to solve problems having many poor minima. The novelty of our work lies in the introduction of an adaptive kernel (or residual) scaling scheme, which allows us to achieve faster convergence rates. Like other existing methods that aim to return good local minima for robust estimation tasks, our method relaxes the original robust problem, but adapts a filter framework from nonlinear constrained optimization to automatically choose the level of relaxation. Experimental results on real large-scale datasets such as bundle adjustment instances demonstrate that our proposed method achieves competitive results.
+
+# 1. Introduction
+
+Robust parameter estimation plays a crucial role in many computer vision tasks, ranging from low-dimensional model fitting (e.g., fundamental or essential matrix estimation [17]) to large-scale instances in very high dimensional space, which may contain hundreds of thousands of measurements (e.g., pose graph optimization [19], SLAM [24] and bundle adjustment [28]). When the input data is relatively clean with a low percentage of gross outliers, the set of optimal parameters can be easily obtained under the maximum likelihood framework, i.e., by minimizing the sum of the squared residuals [29], and the cost can be optimized using popular off-the-shelf non-linear least squares solvers (e.g., Ceres [1]). However, in the presence of a large fraction of gross outliers, a standard least-squares fit often yields results that are biased toward outlying measurements. To achieve robustness in low-dimensional settings, randomized approaches such as RANdom SAmple Consensus (RANSAC) [11] and its variants [7, 6] are preferred. Deterministic algorithms that provide global [5] or local solutions [20, 4] to improve RANSAC also exist. However, they are inapplicable in large-scale settings, where a class of M-estimators [18] must be employed. Under this framework, if the fraction of outliers is small, then convex $\ell_1$ or Huber kernels can be sufficient. However, in order to achieve maximum robustness for the parameter estimation task with a large outlier ratio, a quasi-convex kernel is usually employed, which in most cases leads to non-convex problems with many sub-optimal local minima and flat regions in parameter space [31]. Solving such non-convex problems is well-known to be challenging as it is very likely for an algorithm to converge to a poor local minimum. Our work proposes a new algorithm to address this problem.
+
+A number of optimization schemes have been proposed to tackle the high non-convexity of robust estimation, and [32] evaluates some of the promising methods. Among these, graduated optimization, which is often referred to as graduated non-convexity (GNC) in the computer vision community, has proven to be the most promising approach due to its competitive ability to escape poor solutions. Therefore, it has been used widely in many robust fitting applications (e.g. [3, 21, 32, 30]). However, the use of GNC requires a careful design of the graduated optimization schedule, which demands prior knowledge about the problem. A wrong schedule may cause either unnecessarily long run time on easy problem instances, where basic techniques that provide fast convergence such as Iteratively Re-weighted Least Squares (IRLS) are sufficient, or undesirable results when local minima are not effectively avoided (as demonstrated in Figure 2).
+
+Contributions To address the above problems, we introduce in this paper a new algorithm for large-scale robust estimation that is as competitive as GNC, but does not require a fixed optimization schedule. To achieve this goal, we propose to consider the scale parameters as variables that are jointly optimized with the original parameters. The introduction of the scale parameters to the original problem results in a constrained optimization problem, which can then be efficiently solved using a novel adaptation of the filter method [12]. Experimental results on several large-scale bundle adjustment datasets demonstrate that our method provides competitive objective values as well as convergence rates compared to existing state-of-the-art methods for robust estimation.
+
+# 2. Related Work
+
+Iteratively Re-weighted Least Squares (IRLS [16]) is arguably the most popular method being used to optimize high-dimensional robust cost functions. The main idea behind this approach is to associate each measurement with a weight computed based on its current residual, then minimize an instance of weighted least squares. The weights are updated after each iteration and the process repeats until convergence. It has been demonstrated that with a proper initialization of weights, IRLS may provide competitive results [34]. However, for more complex problems, the returned solutions are usually not satisfactory as it is very easy for IRLS to be trapped in a poor local minimum.
+
+To address the non-convexity of robust estimation, Zach [31] leveraged the half-quadratic minimization principle [14] and proposed to solve the problem in a "lifted" domain, where the non-convex robust kernel is re-parameterized by a new function in a higher dimensional space. The reformulated robust estimation problem incorporates both the original parameters and newly introduced unknowns representing the confident weights of the measurements. By employing such lifting approach, the flat region in the robust kernels can be avoided by indirectly representing the robustness into the new lifted objective, which is less sensitive to poor local minima. Using the lifting mechanism, different formulations and schemes have also been introduced. In contrast to the Multiplicative Half-Quadratic (M-HQ) lifting approach proposed in [31], Additive Half-Quadratic (A-HQ) has also been introduced [15, 33]. A double lifting method that combines M-HQ and A-HQ is also discussed in [33]. However, the above lifting approaches have some limitations. In particular, [34] demonstrates that the success of half-quadratic minimization relies on suitable initialization of confidence weights, and that M-HQ fails on problems with multiple "competing" residuals.
+
+Besides lifting, another popular approach to tackle problems containing many poor local minima is to "smooth" the objective using homotopy or graduation techniques [27, 9, 21] such as Graduated Non-convexity (GNC [3]). The underlying concept of graduated optimization is to successively approximate the original non-convex cost function by surrogate functions that are easier to minimize (i.e., leading to fewer local minima). In robust cost optimization, the surrogate functions may be chosen as a scaled version of the original robust kernel (see Sec. 3.2), which induces fewer local minima than the original cost. Graduated optimization and GNC have demonstrated their utility in several large-scale robust estimation problems by guiding the optimization process to relatively good local minima compared to other approaches such as IRLS or lifting variants [32].
+
+# 3. Background
+
+# 3.1. Problem Formulation
+
+In this work, we are interested in large-scale robust estimation under the framework of M-estimators. Assume that we are given a set of $N$ measurements, and let us denote the residual vector induced by the $i$ -th observation by $\mathbf{r}_i(\pmb {\theta})\in \mathbb{R}^p$ , where the vector $\pmb {\theta}\in \mathbb{R}^{d}$ contains the desired parameters. In robust cost optimization, we wish to obtain the optimal parameters $\pmb{\theta}^{*}$ that solve the following program
+
+$$
+\boldsymbol {\theta} ^ {*} = \arg \min _ {\boldsymbol {\theta}} \Psi (\boldsymbol {\theta}) \quad \Psi (\boldsymbol {\theta}) := \sum_ {i = 1} ^ {N} \psi (| | \mathbf {r} _ {i} (\boldsymbol {\theta}) | |), \tag {1}
+$$
+
+where $\psi : \mathbb{R} \mapsto \mathbb{R}$ is a symmetric robust kernel that satisfies the following properties [14, 32]: $\psi(0) = 0$ , $\psi''(0) = 1$ , and the mapping $\phi : \mathbb{R}_0^+ \mapsto \mathbb{R}_0^+$ where $\phi(z) = \psi(\sqrt{2z})$ is concave and monotonically increasing. The problem (1) serves as a generic framework for several robust fitting tasks, in which the definitions of the parameters $\theta$ and the residual vectors $\{\mathbf{r}_i(\theta)\}$ depend on the specific application. For example, in robust metric bundle adjustment, the parameter vector $\theta$ consists of the set of camera matrices $\{\mathbf{R}_j, \mathbf{t}_j\}_{j=1}^{N_v}$ together with the set of 3-dimensional (3D) points $\{\mathbf{X}_k\}_{k=1}^{N_p}$ ( $N_v$ and $N_p$ are the number of cameras and the number of points, respectively), and each residual vector $\mathbf{r}_{ij} \in \mathbb{R}^2$ is defined as
+
+$$
+\mathbf {r} _ {i j} (\boldsymbol {\theta}) = \mathbf {u} _ {i j} - \pi \left(\mathbf {R} _ {i} \mathbf {X} _ {j} + \mathbf {t} _ {i}\right), \tag {2}
+$$
+
+where $\pi : \mathbb{R}^3 \mapsto \mathbb{R}^2$ is defined as $\pi(\mathbf{X}) = (X_1 / X_3, X_2 / X_3)$ , and $\mathbf{u}_{ij}$ is the 2D keypoint corresponding to the $j$ -th 3D point extracted in image $i$ .
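+
+For concreteness, a minimal NumPy sketch of the residual in Eq. (2) might look as follows; the function name and the toy camera and point values are illustrative only, not part of the released implementation.
+
+```python
+import numpy as np
+
+def reprojection_residual(R, t, X, u):
+    """Residual r_ij = u_ij - pi(R_i X_j + t_i), cf. Eq. (2)."""
+    Xc = R @ X + t              # 3D point in camera coordinates
+    proj = Xc[:2] / Xc[2]       # pi(X) = (X1 / X3, X2 / X3)
+    return u - proj
+
+# toy example (illustrative values only)
+R = np.eye(3)
+t = np.array([0.0, 0.0, 5.0])
+X = np.array([1.0, -0.5, 2.0])
+u = np.array([0.15, -0.08])
+print(reprojection_residual(R, t, X, u))
+```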
+
+The robust kernel $\psi$ can be chosen from a wide range of functions (See [32]). This choice usually affects the robustness and the convexity of the resulting optimization problem. For example, if $\psi(x)$ is chosen such that $\psi(x) = \frac{x^2}{2}$ , one obtains the non-robust least squares estimate, which is easy to optimize but very sensitive to outliers. In this work, if not otherwise stated, we chose $\psi$ to be the smooth truncated kernel,
+
+$$
+\psi (r) = \left\{ \begin{array}{ll} \frac{1}{2} r^2 \left(1 - \frac{r^2}{2\tau^2}\right) & \text{if } r^2 \leq \tau^2, \\ \tau^2 / 4 & \text{otherwise.} \end{array} \right. \tag{3}
+$$
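+
+As a quick illustration, the smooth truncated kernel of Eq. (3) and the robust objective of Eq. (1) can be evaluated as in the following sketch (NumPy, illustrative only):
+
+```python
+import numpy as np
+
+def psi(r, tau=1.0):
+    """Smooth truncated kernel of Eq. (3)."""
+    r = np.abs(r)
+    return np.where(r**2 <= tau**2,
+                    0.5 * r**2 * (1.0 - r**2 / (2.0 * tau**2)),
+                    tau**2 / 4.0)
+
+def robust_cost(residual_norms, tau=1.0):
+    """Psi(theta) = sum_i psi(||r_i(theta)||), cf. Eq. (1)."""
+    return np.sum(psi(residual_norms, tau))
+
+# the kernel saturates at tau^2 / 4, so large (outlier) residuals stop influencing the cost
+print(psi(np.array([0.0, 0.5, 1.0, 10.0])))
+```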
+
+
+Figure 1: Illustration of a 1-d robust mean fitting problem, where the surrogate objective with scaled kernel (red) contains fewer local minima than the original cost (blue).
+
+
+Figure 2: A wrong schedule of GNC may lead to either poor results (GNC-2, which is not better than IRLS) or unnecessary iterations (GNC-5). Here GNC-2 and GNC-5 mean GNC with the number of levels $k$ set to 2 and 5, respectively. Our proposed method provides competitive objective value and converges faster than GNC.
+
+# 3.2. Graduated Optimization and Its Limitations
+
+In this section, we briefly review graduated optimization (or graduated non-convexity [3]), which is a popular technique commonly employed to avoid poor local minima in highly non-convex problems. It also serves as the foundation for our novel method proposed in this work. Indirectly it is leveraged also in coarse-to-fine schemes used e.g. in variational methods for optical flow [22]. The main idea behind this technique is to optimize the original highly nonconvex cost function $\Psi$ by minimizing a sequence of problems $(\Psi^k,\dots ,\Psi^0)$ , where $\psi^0 = \psi$ and $\psi^{k + 1}$ is "easier" to optimize than $\psi^k$ . Starting from the original robust kernel $\psi$ (as defined in (1)), the set of "easier" problems are obtained by a scaled version of $\psi$ . In particular, from the original minimization problem with the objective function $\Psi (\pmb {\theta})$ , each problem $\Psi^k$ is constructed with a new kernel $\psi^k$ ,
+
+$$
+\psi^ {k} (r) = s _ {k} ^ {2} \psi \left(r / s _ {k}\right), \tag {4}
+$$
+
+where the scale parameters are chosen such that $s_{k + 1} > s_k$ and $s_0 = 1$ . Figure 1 shows an example of a one-dimensional robust mean estimation, where we plot the objective values of the problem with the original kernel and its scaled version (with $s = 3$ ). As can be seen, the scaled kernel results in this case in a problem with no poor local minimum.
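+
+To make the scaling in Eq. (4) concrete, the following self-contained sketch evaluates the smoothed objective for a hypothetical schedule of scales; the schedule values and toy residuals are illustrative only:
+
+```python
+import numpy as np
+
+def scaled_cost(residual_norms, s, psi):
+    """Smoothed objective with the scaled kernel psi^k(r) = s_k^2 * psi(r / s_k), cf. Eq. (4)."""
+    return np.sum(s**2 * psi(residual_norms / s))
+
+# smooth truncated kernel (Eq. (3)) and toy residuals, two of which are outliers
+truncated = lambda r, tau=1.0: np.where(r**2 <= tau**2,
+                                        0.5 * r**2 * (1 - r**2 / (2 * tau**2)),
+                                        tau**2 / 4)
+residual_norms = np.array([0.1, 0.4, 3.0, 12.0])
+for s in [8.0, 4.0, 2.0, 1.0]:          # hypothetical schedule with s_0 = 1 last
+    # at each level a local solver (e.g. IRLS or Levenberg-Marquardt) would
+    # minimize this smoothed objective; here we only evaluate it
+    print(s, scaled_cost(residual_norms, s, truncated))
+```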
+
+To the best of our knowledge, methods that rely on graduated optimization achieve state-of-the-art results for large-scale robust estimation tasks (most importantly, bundle adjustment problems) due to their ability to escape poor local minima. However, in practice it is necessary to define a schedule with a fixed number of levels $k$ . This requires some knowledge about the problem so that a proper value for $k$ can be assigned. A large value of $k$ may cause unnecessary iterations, which translates to high running time. On the other hand, setting a low $k$ may not provide sufficient scaling levels for the optimizer to avoid poor solutions (as shown in Figure 2). Moreover, in some easy applications, although GNC converges to a lower objective than its competitor (e.g., IRLS), the difference between the converged objectives may be insignificant. In such scenarios, an IRLS solver can provide acceptable results within a few iterations, while it may take longer for a GNC solver to go through all $k$ levels. However, using IRLS poses a risk of converging to bad local minima. Therefore, there is a trade-off between selecting a solver and its associated hyper-parameters (such as the annealing schedule in GNC) and the resulting efficiency.
+
+# 4. Adaptive Kernel Scaling
+
+In this section, we describe our novel solver for robust parameter estimation that addresses the above weaknesses of GNC. Our method is motivated by graduated optimization and its ability to avoid poor local minima. However, unlike previous graduated schemes employing a fixed schedule of kernel scaling, we consider the scale of each residual as a variable, and allow the scales to be jointly optimized with the set of parameters $\theta$ . This leads us to a new formulation for robust estimation, which is a constrained optimization problem and can be written as
+
+$$
+\min_{\boldsymbol{\theta}, \{\sigma_i\}} \sum_{i=1}^{N} \psi\left(\frac{\left\| \mathbf{r}_i(\boldsymbol{\theta}) \right\|}{\sigma_i}\right) \quad \text{s.t.} \quad \sigma_i = 1 \quad \forall i = 1, \dots, N. \tag{5}
+$$
+
+In contrast to e.g. graduated optimization, which maintains usually a single smoothness parameter, we introduce a scaling factor $\sigma_{i}$ for each residual. Consequently, each scale $\sigma_{i}$ evolves differently during the optimization process. Clearly, (5) does not appear helpful, as enforcing the constraints $\sigma_{i} = 1$ strictly (i.e. maintaining a feasible solution throughout) makes (5) equivalent to the original task (1). Strategies such as graduated optimization do not maintain strictly feasible iterates, but use a schedule for $\sigma_{i}$ to eventually satisfy the constraints. Turning the original problem (1) into a constrained optimization problem (5) has two potential benefits: first, a larger set of optimization methods is applicable, and second, intermediate solutions may be infeasible but at the same time correspond to smoother problem instances.
+
+Observe that in order to obtain a solution for (5), besides the initialization $\theta_0$ for the parameters, one can also initialize the scales $\sigma_{i}$ to values that are greater than 1 and expect that the solver will drive $\sigma_{i}$ to the feasible region $\sigma_{i} = 1$ of (5). Therefore, by considering the problem (5) and setting $\sigma_{i}$ to initial values greater than 1, we are effectively conducting kernel scaling, which provides the potential of escaping poor local minima. In contrast to graduated optimization, the internal workings of the optimization method determine how feasibility of $\sigma_{i}$ is eventually achieved. In particular, $\sigma_{i}$ may be updated non-monotonically and may therefore be increased during the iterations of the optimization method. In this work we propose to utilize a filter method to address the constrained problem (5), since it is a highly flexible and non-monotone framework for constrained optimization problems.
+
+Another—and possibly more intuitive—way to convert the robust cost (1) is to replicate the residuals and enforce consistency, e.g.
+
+$$
+\min_{\boldsymbol{\theta}, \{\mathbf{p}_i\}} \sum_i \psi\left(\left\| \mathbf{p}_i \right\|\right) \quad \text{s.t.} \quad \mathbf{p}_i = \mathbf{r}_i(\boldsymbol{\theta}). \tag{6}
+$$
+
+Using a filter method in this setting can be shown to be related to additive half-quadratic minimization [15], and experimentally we found it far inferior compared to using (5) as starting point.
+
+# 5. Optimization with Filter Method
+
+By introducing the scale variables $\{\sigma_i\}$ , we obtained a constrained optimization problem as written in (5). One requirement for the optimization method of choice is, that the limit values of $\sigma_{i}$ must be 1 when the algorithm converges. Moreover, any proposed method for solving (5) should be competitive with existing second-order solvers for problem instances (1) (such as Ceres [1] and SSBA [31]). This requirement rules out e.g. first order methods for constrained programs.
+
+# 5.1. Background on Filter Method
+
+Our technique to solve the constrained program (5) is inspired by the filter method [12], which was initially developed as an alternative to penalty methods in constrained optimization [25]. In order to outline the filter method, let us consider a constrained optimization problem,
+
+$$
+\min_{\mathbf{x} \in \mathbb{R}^d} f(\mathbf{x}) \quad \text{s.t.} \quad g_i(\mathbf{x}) = 0, \; i = 1 \dots c, \tag{7}
+$$
+
+where $f, g_{i} : \mathbb{R}^{d} \mapsto \mathbb{R}$ are continuously differentiable functions, and $c$ is the number of constraints. We also define a function $h(\mathbf{x}) = \sum_{i} \| g_{i}(\mathbf{x}) \|$ to indicate the constraint violation. Obviously, $h(\mathbf{x}^{*}) = 0$ iff $\mathbf{x}^{*}$ is a feasible solution of (7). In classical penalty approaches, the constraint violation is incorporated into the objective with a penalty parameter $\mu$ in order to create a new objective (i.e., $f(\mathbf{x}) + \mu h(\mathbf{x})$ ).
+
+The resulting objective can then be optimized using a suitable local method. Usually, $\mu$ is increased monotonically according to a specified schedule to ensure that the solution converges to a feasible region of (7). One drawback of such an approach is that the initial value of $\mu$ and how it is increased must be carefully tuned. Another practical issue with penalty methods is that feasibility of the solution is only guaranteed when $\mu \to \infty$ (unless one utilizes an exact but usually non-smooth penalizer $h$ [25]).
+
+Algorithm 1 Optimization with Filter Method
+Require: Initial solution $\mathbf{x}^0$ , filter margin $\alpha$ , max_iter
+1: Initialization: $t\gets 0,\mathcal{F}\gets \emptyset ,\mathbb{F}\gets \emptyset$
+2: while true and $t < \mathrm{max\_iter}$ do
+3: if $\mathbf{x}^t$ is stationary then
+4: break;
+5: end if
+6: $\tilde{f}\gets f_t - \alpha h_t;\tilde{h}\gets h_t - \alpha h_t$
+7: $\mathcal{F}\leftarrow \mathcal{F}\cup \{(\tilde{f},\tilde{h})\}$
+8: $\mathbb{F}_{t + 1}\gets \{\mathbf{x}|f(\mathbf{x})\geq \tilde{f},h(\mathbf{x})\geq \tilde{h}\}$
+9: $\mathbb{F}\gets \mathbb{F}\cup \mathbb{F}_{t + 1}$
+10: Compute $\mathbf{x}^{t + 1}\notin \mathbb{F}$ (Sec. 5.2.1 and 5.2.2)
+11: if $f(\mathbf{x}^{t + 1}) < f(\mathbf{x}^t)$ then
+12: $\mathcal{F}\gets \mathcal{F}\backslash \{(\tilde{f},\tilde{h})\} ;\mathbb{F}\gets \mathbb{F}\backslash \mathbb{F}_{t + 1}$
+13: end if
+14: $t\gets t + 1$
+15: end while
+16: return $\mathbf{x}^t$
+
+In contrast to penalty methods, Fletcher et al. [12] propose an entirely different mechanism to solve (7) by introducing the concept of a filter (see Figure 3), which offers more freedom in the step computation. At a current value of $\mathbf{x}$ , let us denote by $F(\mathbf{x})$ the pair combining the objective value and the constraint violation, $F(\mathbf{x}) = (f(\mathbf{x}), h(\mathbf{x})) \in \mathbb{R}^2$ . For brevity, we sometimes use $f$ and $h$ to denote $f(\mathbf{x})$ and $h(\mathbf{x})$ , respectively. Given two pairs $F_i = (f_i, h_i)$ and $F_j = (f_j, h_j)$ , the concept of domination is defined as follows: $F_i$ is said to dominate $F_j$ if $f_i < f_j$ and $h_i < h_j$ . A filter is then defined as a set $\mathcal{F} = \{F_i\}_{i=1}^m \subseteq \mathbb{R}^2$ containing mutually non-dominating entries. The filter $\mathcal{F}$ defines a dominated (and therefore forbidden) region $\mathbb{F}$ in the 2D plane. A pair $F_t$ is said to be accepted by the filter $\mathcal{F}$ if it is not dominated by any pair in $\mathcal{F}$ . Figure 3 visualizes an example of a filter, where the gray area is the forbidden region defined by the filter pairs.
+
+Filter methods are iterative, and the basic filter approach is summarized in Algorithm 1. The filter $\mathcal{F}$ and the forbidden region $\mathbb{F}$ are initialized to empty sets. At the beginning of each iteration, a new pair $(\tilde{f},\tilde{h})$ is temporarily added to the filter $\mathcal{F}$ , where $\tilde{f} = f_t - \alpha h_t$ and $\tilde{h} = h_t - \alpha h_t$ . Here $\alpha >0$ specifies the filter margin in order to assure that new points acceptable by the filter must induce a sufficient reduction in the objective value or the constraint violation.
+
+
+Figure 3: Example of a filter. The $x$ axis is the objective, while the $y$ axis is the constraint violation. The gray area indicates the forbidden region defined by three mutually nondominated pairs (shown in red). Optimization with filter method involves finding, from a current $\mathbf{x}^k$ , a new value $\mathbf{x}^{k + 1}$ that is not dominated by the filter. A step that reduces both $f$ and $h$ is preferable (as illustrated by the blue arrow).
+
+Thus, convergence to feasible solutions is ensured by such a margin [26]. The procedure to compute $\mathbf{x}^{t + 1}$ (Line 10 of Alg. 1) will be discussed in the following section. Once $\mathbf{x}^{t + 1}$ is obtained, if the objective is reduced, the pair $(\tilde{f},\tilde{h})$ is removed from $\mathcal{F}$ , otherwise it is retained in the filter. For greatest flexibility in computing $\mathbf{x}^{t + 1}$ (and therefore fastest convergence) the filter should contain as few elements as necessary to guarantee convergence to a feasible solution. On the other hand, adding already feasible iterates to the filter leads to zero margins and is consequently harmful. New iterates that only certify a sufficient reduction of the constraint violation lead to the temporarily added filter element being made permanent. It can be shown [26] that filter elements are always strictly infeasible, but accumulation points are feasible. The process is repeated until reaching a stationary point of the problem. Interested readers are referred to [12, 26] for more detailed information.
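+
+To make the acceptance test of Algorithm 1 concrete, a minimal filter container might look like the sketch below; step computation and the stationarity test are omitted, and the class and method names are our own:
+
+```python
+class Filter:
+    """Stores margin-shifted (f, h) pairs and the forbidden region they define."""
+
+    def __init__(self, alpha=1e-4):
+        self.alpha = alpha       # filter margin
+        self.entries = []        # list of (f_tilde, h_tilde) pairs
+
+    def accepts(self, f, h):
+        """A candidate is acceptable if it lies outside every forbidden box (Line 8)."""
+        return all(not (f >= fe and h >= he) for fe, he in self.entries)
+
+    def add(self, f, h):
+        """Temporarily add the margin-shifted pair of the current iterate (Lines 6-7)."""
+        pair = (f - self.alpha * h, h - self.alpha * h)
+        self.entries.append(pair)
+        return pair
+
+    def remove(self, pair):
+        """Drop the temporary pair again if the objective decreased (Line 12)."""
+        if pair in self.entries:
+            self.entries.remove(pair)
+```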
+
+# 5.2. Application to Robust Estimation
+
+Our approach to solve (5) follows closely the steps described in Algorithm 1. However, the main contribution of our work is a novel strategy to compute $\mathbf{x}^{t + 1}$ that is accepted by the filter. In addition, our method is able to leverage existing non-linear least-squares solvers.
+
+We restrict $\sigma_{i}$ to be greater or equal to 1, as $\sigma_{i} \in (0,1)$ will lead to a harder problem than (1). Therefore, it is convenient to re-parameterize $\sigma_{i}$ as $\sigma_{i} = 1 + s_{i}^{2}$ and we can rewrite the problem (5) as follows
+
+$$
+\min_{\boldsymbol{\theta}, \{s_i\}} \quad \sum_{i=1}^{N} \psi\left(\frac{\left\| \mathbf{r}_i(\boldsymbol{\theta}) \right\|}{1 + s_i^2}\right) \quad \text{s.t.} \quad s_i = 0 \quad \forall i. \tag{8}
+$$
+
+In the context of (7), let $\mathbf{x} = [\pmb{\theta}^T\ \mathbf{s}^T]^T$ where $\mathbf{s} = [s_1\ldots s_N]^T$ is a vector that collects the values of $s_i$ . Finally, the functions $f(\mathbf{x})$ and $h(\mathbf{x})$ correspond to
+
+$$
+f (\mathbf {x}) = \sum_ {i = 1} ^ {N} \psi \left(\frac {\| \mathbf {r} _ {i} (\boldsymbol {\theta}) \|}{1 + s _ {i} ^ {2}}\right) \quad h (\mathbf {x}) = \sum_ {i} s _ {i} ^ {2}. \tag {9}
+$$
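+
+In code, the two filter quantities of Eq. (9) are straightforward to evaluate; in this sketch, `residual_norms` is any user-supplied callable returning the vector of $\|\mathbf{r}_i(\boldsymbol{\theta})\|$ and `psi` is the chosen robust kernel:
+
+```python
+import numpy as np
+
+def filter_pair(theta, s, residual_norms, psi):
+    """Evaluate (f(x), h(x)) from Eq. (9) for x = [theta, s]."""
+    sigma = 1.0 + s**2                              # per-residual scales sigma_i = 1 + s_i^2
+    f = np.sum(psi(residual_norms(theta) / sigma))  # relaxed robust cost
+    h = np.sum(s**2)                                # constraint violation (distance to s = 0)
+    return f, h
+```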
+
+# 5.2.1 Cooperative Step
+
+An appealing feature of Algorithm 1 is that it offers a flexible choice of algorithms to perform the variable update, as long as $\mathbf{x}^{t + 1}$ is accepted by the filter (i.e., $\mathbf{x}^{t + 1} \notin \mathbb{F}$ as described in Line 10 of Algorithm 1). As in filter methods for nonlinear constrained minimization, there are two possible steps to obtain a new acceptable iterate: the cooperative step described in this section is the main workhorse of the algorithm. It replaces the sequential quadratic program (SQP) used as the main step in filter methods for general non-linear programs [12, 26]. The cooperative step is complemented with a restoration step as a fall-back option, which is described in the following section.
+
+The cooperative step is motivated by the fact that reducing both the main objective and the constraint violation (i.e., $f(\mathbf{x})$ and $h(\mathbf{x})$ ) by a sufficient amount (as induced by the margin parameter $\alpha$ ) leads to a new solution that is guaranteed to be acceptable by the filter. We use a second-order approximation of $f$ and $h$ around the current values $\mathbf{x}^t$ ,
+
+$$
+f (\mathbf {x} ^ {t} + \Delta \mathbf {x}) = f (\mathbf {x} ^ {t}) + \mathbf {g} _ {f} ^ {T} \Delta \mathbf {x} + \Delta \mathbf {x} ^ {T} \mathbf {H} _ {f} \Delta \mathbf {x},
+$$
+
+$$
+h \left(\mathbf {x} ^ {t} + \Delta \mathbf {x}\right) = h \left(\mathbf {x} ^ {t}\right) + \mathbf {g} _ {h} ^ {T} \Delta \mathbf {x} + \Delta \mathbf {x} ^ {T} \mathbf {H} _ {h} \Delta \mathbf {x}, \tag {10}
+$$
+
+where $\mathbf{g}_f$ and $\mathbf{g}_h$ are the gradients, while $\mathbf{H}_f$ and $\mathbf{H}_h$ are true or approximated Hessian of $f$ and $h$ , respectively. Hence, a cooperative update direction $\Delta \mathbf{x}$ possibly decreasing both $f$ and $h$ is given by [13]
+
+$$
+\underset {\Delta \mathbf {x}} {\arg \min } \max \left\{\Delta f, \Delta h \right\}, \tag {11}
+$$
+
+where $\Delta f = \mathbf{g}_f^T\Delta \mathbf{x} + \Delta \mathbf{x}^T\mathbf{H}_f\Delta \mathbf{x}$ , and $\Delta h = \mathbf{g}_h^T\Delta \mathbf{x} + \Delta \mathbf{x}^T\mathbf{H}_h\Delta \mathbf{x}$ . This is a convex quadratic program, which can be efficiently solved using any iterative solver. However, as previously discussed, our ultimate goal is to integrate our algorithm into existing solvers. Hence, following [34], rather than solving (11) exactly, we aim to find $\Delta \mathbf{x}^t$ that solves
+
+$$
+\begin{aligned} \Delta \mathbf{x}^t &= \arg\min_{\Delta \mathbf{x}} \max \{\Delta f, \beta \Delta h \} \\ &= \arg\min_{\Delta \mathbf{x}} \mu_f \Delta f + \mu_h \Delta h, \end{aligned} \tag{12}
+$$
+
+where $\mu_f > 0$ and $\mu_h > 0$ with $\mu_f + \mu_h = 1$ are suitably chosen coefficients. $\beta > 0$ is a scaling factor between the objectives that is implicitly determined by solving for $\Delta \mathbf{x}$ . Adding a Levenberg-Marquardt-type damping [23] with parameter $\lambda$ yields
+
+$$
+\Delta \mathbf {x} ^ {t} = \left(\mu_ {f} \mathbf {H} _ {f} + \mu_ {h} \mathbf {H} _ {h} + \lambda \mathbf {I}\right) ^ {- 1} \left(\mu_ {f} \mathbf {g} _ {f} + \mu_ {h} \mathbf {g} _ {h}\right). \tag {13}
+$$
+
+If the new iterate $\mathbf{x}^{t + 1} = \mathbf{x}^t +\Delta \mathbf{x}^t$ is acceptable by $\mathcal{F}$ then $\lambda$ is decreased, otherwise increased. We refer to the supplementary material for further details.
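+
+A sketch of the damped cooperative step is given below. The gradients and (possibly Gauss-Newton approximated) Hessians are assumed to be supplied by the caller, and the sign convention follows the usual descent form of a damped Newton-type step; this is an illustration in the spirit of Eq. (13), not the exact implementation:
+
+```python
+import numpy as np
+
+def cooperative_step(g_f, H_f, g_h, H_h, lam, mu_f=0.7, mu_h=0.3):
+    """Combine the two quadratic models with weights (mu_f, mu_h) and
+    Levenberg-Marquardt damping lam, then solve for a descent direction."""
+    H = mu_f * H_f + mu_h * H_h + lam * np.eye(len(g_f))
+    g = mu_f * g_f + mu_h * g_h
+    return -np.linalg.solve(H, g)
+```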
+
+With an appropriate choice of $\mu_f$ , $\mu_h$ and a sufficiently large $\lambda$ , it can be shown that $\Delta \mathbf{x}^t$ leads to a reduction of both $f$ and $h$ as long as $\mathbf{g}_f$ and $\mathbf{g}_h$ are not pointing in opposite directions [34]. If $\mathbf{x}^t + \Delta \mathbf{x}^t$ leads to a sufficient decrease of both $f$ and $h$ , then this new solution is by construction acceptable by the current filter. Otherwise, the new iterate may still be acceptable, but increases either $f$ or $h$ (and is therefore a non-monotone step). If the new solution is not acceptable by the filter, then a non-monotone restoration step is applied (that also leads to an increase of either $f$ or $h$ ). The filter condition ensures that $h$ eventually converges to 0.
+
+# 5.2.2 Restoration Step
+
+Although (13) gives us a way to compute a preferable update step, it is not guaranteed to always provide steps that are accepted by the filter. In such cases, we revert to the restoration step described below.
+
+In the filter methods literature a restoration step essentially reduces the constraint violation and is applied if the SQP step did not yield an acceptable new iterate. Note that in our setting, just reducing the constraint violation is trivial, and a perfectly feasible solution can be obtained by setting $s_i = 0$ for all $i$ . A good restoration step aims to yield a good starting point for the next main step (which is SQP in traditional filter methods and a cooperative step in our approach). Consequently, the goal of our restoration step is to determine a suitable new solution for the subsequent cooperative step. One simple criterion for such a new point is given by the angle between the gradients of $f$ and $h$ , which is to be minimized in order to facilitate cooperative minimization. Our deliberate design choice is to adjust only the parameters $s_i$ in the restoration step, i.e.
+
+$$
+\Delta \mathbf {x} = \gamma \binom {0} {\Delta \mathbf {s}}, \tag {14}
+$$
+
+where $\gamma$ is a step-size determined by a grid search,
+
+$$
+\gamma = \arg \min _ {\gamma} \angle \left(\mathbf {g} _ {f} (\mathbf {x} + \Delta \mathbf {x}), \mathbf {g} _ {h} (\mathbf {x} + \Delta \mathbf {x})\right). \tag {15}
+$$
+
+Note that adjusting $\mathbf{s}$ affects both $\mathbf{g}_f$ and $\mathbf{g}_h$ . The search direction $\Delta \mathbf{s}$ is chosen as $\Delta \mathbf{s} = -\mathbf{s}$ . Due to the particular choice of $h$ this search direction coincides with the direction to the global minimum $\mathbf{s} = 0$ of $h$ , with the negated gradient $-\nabla_{\mathbf{s}} h(\mathbf{s})$ , and with a Newton step optimizing $h$ . We limit the search for $\gamma$ to the range $[-1/2, 1/2]$ . A detailed summarization of our algorithm is provided in the supplementary material.
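+
+The restoration step only touches the scale block of the variables; a direct grid search over $\gamma$ that minimizes the angle between the two gradients (Eq. (15)) can be sketched as follows, with `grad_f` and `grad_h` supplied as callables (names and the grid resolution are illustrative):
+
+```python
+import numpy as np
+
+def restoration_step(x, split, grad_f, grad_h, grid=np.linspace(-0.5, 0.5, 21)):
+    """x = [theta, s] with len(theta) == split; search direction is delta_s = -s."""
+    theta, s = x[:split], x[split:]
+    direction = np.concatenate([np.zeros_like(theta), -s])
+
+    def angle(a, b):
+        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
+        return np.arccos(np.clip(cos, -1.0, 1.0))
+
+    # grid search for gamma in [-1/2, 1/2] minimizing the angle of Eq. (15)
+    best = min(grid, key=lambda g: angle(grad_f(x + g * direction),
+                                         grad_h(x + g * direction)))
+    return x + best * direction
+```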
+
+# 6. Experimental Results
+
+In this section, we evaluate the performance of our proposed algorithm and compare it against current state-of-the-art algorithms for large-scale robust estimation, including: IRLS [16], Multiplicative Half-Quadratic Lifting (M-HQ) [31], Graduated Non-Convexity (GNC) as implemented in [32], and LM-MOO [34]. For brevity, we name our method ASKER (which stands for Adaptive Scaling of KERnels). Following recent works [34, 32], we use large-scale bundle adjustment as the main problem to test our method, and a small section of the experiments is devoted to evaluate the stereo dense correspondences problem.
+
+We implement our algorithm in C++ based on the framework provided by SSBA, which is built on direct sparse linear solvers. All experiments are executed on an Ubuntu workstation with an AMD Ryzen 2950X CPU and 64GB RAM. Other methods are also implemented based on the SSBA framework. For better visualization of the figures, we only compare our algorithm against the methods listed above, which are the best representatives for baseline and state-of-the-art approaches. As reported in [32], methods such as square-rooting the kernels [10] or Triggs correction [29] do not offer much improvement compared to IRLS, hence we omit these in our experiments. All methods are initialized from the same starting point. In the following, we report the main quantitative and qualitative results; more results on bundle adjustment and reconstructed structures are provided in the supplementary material.
+
+# 6.1. Robust Bundle Adjustment
+
+We use the well-known dataset provided by [2] for our robust bundle adjustment experiments. This dataset contains the 3D structures reconstructed from a set of images as described in [2]. The whole reconstruction is divided into five sub-datasets: Ladybug, Trafalgar Square, Dubrovnik, Venice, and Final. We extract 20 sequences (the list is provided in the supplementary material), which are considered challenging for robust estimation and have been used throughout recent works [34, 32]. We conduct metric bundle adjustment that optimizes the camera poses and the 3D points, with the residual function as described in (2). In the following, due to space limits, we only report results for 12 representative datasets. The rest of the results can be found in the supplementary material.
+
+We investigate the performance of the algorithms by executing all the methods with a maximum number of 100 iterations. The values of $(\mu_f, \mu_h)$ are set to (0.7, 0.3) respectively. The filter margin $\alpha$ is set to $10^{-4}$ , and all $s_i$ are initialized to 5.0 for all datasets.
+
+Figure 4: Performance of the algorithms on 12 representative instances: (a) Trafalgar-126, (b) Trafalgar-138, (c) Dubrovnik-150, (d) Dubrovnik-16, (e) Trafalgar-201, (f) Dubrovnik-202, (g) Trafalgar-21, (h) Trafalgar-225, (i) Dubrovnik-253, (j) Ladybug-318, (k) Final-93, (l) Final-394. We compare ours (ASKER) against standard IRLS, M-HQ [31], GNC [32], and LM-MOO [34], which are state-of-the-art methods.
+
+Figure 5: Performance profiles of the best cost (left) and average cost (right) after 100 iterations corresponding to the detailed results in Fig. 4. The average cost can be seen as "area-under-the-curve" in Fig. 4 and is one measure on how fast the target objective decreases w.r.t. the iterations.
+
+Fig. 4 depicts the evolution of the best encountered objective values with respect to the run time (in seconds) for the participating methods. The first message that can be extracted from this figure is that the final objective values obtained by our method are similar to or lower than the best ones found by GNC or LM-MOO. Classical IRLS, as anticipated, suffers from poor local minima in most problem instances (refer to the supplementary material for visual results of the reconstructed structures, where we show that ASKER provides much better structures compared to the poor solutions obtained by IRLS). M-HQ provides better results than IRLS, but is generally not as competitive as GNC, LM-MOO and ours.
+
+Fig. 5 summarizes the findings illustrated in Fig. 4 using performance profiles [8]. A performance profile indicates for each method the fraction $\rho$ of problem instances for which the method is within a factor $\tau$ of the best method (with respect to a chosen performance measure). In Fig. 5 (left) the performance profile w.r.t. the best objective value reached after 100 iterations is shown. In several applications a fast decrease of the objective value is of interest, e.g. in real-time scenarios when the solver can be interrupted at any time. We use the mean objective value averaged over the first 100 iterations as a rough indicator of how fast a method decreases the objective (which can also be understood as the area under the respective graph in Fig. 4). The resulting performance profile is shown in Fig. 5 (right). The take-home message from these figures is that our proposed method (ASKER) is leading in both profiles, i.e. it yields very competitive local minima and good convergence behavior in most of the instances.
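+
+For reference, a Dolan-Moré performance profile [8] can be computed from a (methods × instances) matrix of a chosen performance measure as sketched below; the cost values are hypothetical:
+
+```python
+import numpy as np
+
+def performance_profile(costs, taus):
+    """costs[m, p]: measure of method m on problem p (lower is better).
+    Returns rho[m, t]: fraction of problems on which method m is within
+    a factor taus[t] of the best method."""
+    ratios = costs / costs.min(axis=0, keepdims=True)
+    return np.array([[np.mean(ratios[m] <= tau) for tau in taus]
+                     for m in range(costs.shape[0])])
+
+# hypothetical final objectives of three methods on four instances
+costs = np.array([[1.0, 2.0, 1.5, 3.0],
+                  [1.2, 1.0, 1.0, 1.1],
+                  [1.1, 1.3, 1.2, 1.0]])
+print(performance_profile(costs, taus=[1.0, 1.25, 1.5]))
+```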
+
+Table 1 shows the inlier fractions (with the inlier threshold set to 1 pixel) obtained by the algorithms for some datasets. This table conforms with the performance shown in Figure 4, where our algorithm achieves competitive results compared to GNC and LM-MOO.
+
+| Dataset | IRLS | M-HQ | GNC | MOO | ASKER |
+| --- | --- | --- | --- | --- | --- |
+| Ladybug-49 | 80.4 | 82.3 | 82.1 | 81.9 | 82.3 |
+| Trafalgar-21 | 50.9 | 69.0 | 68.3 | 68.8 | 69.10 |
+| Trafalgar-201 | 66.31 | 69.2 | 69.1 | 68.8 | 69.33 |
+| Trafalgar-225 | 67.05 | 69.8 | 69.9 | 70.01 | 70.01 |
+
+Table 1: Inlier percentage achieved by the methods.
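+
+For clarity, the inlier fraction reported in Table 1 is simply the share of residual norms below the 1-pixel threshold, e.g. (illustrative sketch):
+
+```python
+import numpy as np
+
+def inlier_percentage(residual_norms, threshold=1.0):
+    """Percentage of residuals whose norm is below the inlier threshold (in pixels)."""
+    return 100.0 * np.mean(residual_norms < threshold)
+```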
+
+# 6.2. Dense Correspondences
+
+Following [34], we also test our algorithm on the stereo dense correspondence problem, which is also considered a challenging problem in robust estimation. The robust objective can be written as [34],
+
+$$
+\sum_{p \in \mathcal{V}} \left(\eta \sum_{k=1}^{K} \psi_{\text{data}} \left(d_p - \hat{d}_{p,k}\right) + \sum_{q \in \mathcal{N}(p)} \psi_{\text{reg}} \left(d_p - d_q\right)\right), \tag{16}
+$$
+
+where $p$ is a pixel in the image domain $\mathcal{V}$ , $\mathcal{N}(p)$ is the 4-neighborhood, and $\hat{d}_{p,k}$ is the position of the $k$ -th local minimum. The parameter $\eta$ is set to 4. The parameter settings of our method follow the bundle adjustment experiments, and all methods are executed with a maximum of 150 iterations. We also compare our algorithm against GNC and LM-MOO in this experiment. Figure 6 shows the visual results, while Figure 7 plots the objectives obtained by the methods. Observe that we also achieve competitive results for this problem, where our method offers lower objectives than LM-MOO and GNC and provides visually better depth map estimations.
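+
+A deliberately naive evaluation of the objective in Eq. (16) over a disparity image could look as follows; `psi_data` and `psi_reg` are placeholders for the chosen robust kernels, and each unordered neighbor pair is counted once here (the symmetric kernels make the double-counted sum of Eq. (16) simply twice this value):
+
+```python
+import numpy as np
+
+def stereo_objective(d, d_hat, psi_data, psi_reg, eta=4.0):
+    """d: (H, W) disparity map; d_hat: (H, W, K) candidate local minima per pixel."""
+    data = eta * np.sum(psi_data(d[..., None] - d_hat))         # data terms
+    reg = (np.sum(psi_reg(d[:, 1:] - d[:, :-1])) +               # horizontal neighbors
+           np.sum(psi_reg(d[1:, :] - d[:-1, :])))                # vertical neighbors
+    return data + reg
+```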
+
+Figure 6: Visual results of dense correspondences experiment for the Teddy (top) and Cones (bottom) image pairs. From left to right: LM-MOO, HOM, and Ours.
+
+Figure 7: Final objective obtained by the methods for dense correspondences.
+
+# 7. Conclusion and Future Work
+
+In this work we propose a method for large-scale robust estimation that uses an optimization-driven schedule to steer the difficulty of intermediate optimization tasks. This puts our method in contrast to graduated optimization techniques such as graduated non-convexity, which always use a monotone schedule from easy to successively harder problem instances. By using an adaptive schedule, in our experiments the proposed method achieves a good balance between a fast decrease of the target objective and reaching a competitive local minimum.
+
+Since filter methods are one framework to relax (or to smooth) difficult optimization problems in a well-justified way, future work will investigate further applications of this technique. Another direction for future work is to further leverage the problem structure, such as the bi-partite nature of unknowns in bundle adjustment.
+
+# Acknowledgment
+
+This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.
+
+# References
+
+[1] Sameer Agarwal, Keir Mierle, and Others. Ceres solver. http://ceres-solver.org.
+[2] Sameer Agarwal, Noah Snavely, Steven M Seitz, and Richard Szeliski. Bundle adjustment in the large. In European conference on computer vision, pages 29-42. Springer, 2010.
+[3] Andrew Blake and Andrew Zisserman. Visual reconstruction. 1987.
+[4] Zhipeng Cai, Tat-Jun Chin, Huu Le, and David Suter. Deterministic consensus maximization with biconvex programming. In Proceedings of the European Conference on Computer Vision (ECCV), pages 685-700, 2018.
+[5] Tat-Jun Chin, Pulak Purkait, Anders Eriksson, and David Suter. Efficient globally optimal consensus maximisation with tree search. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2413-2421, 2015.
+[6] Ondrej Chum and Jiri Matas. Matching with PROSAC - progressive sample consensus. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pages 220-226. IEEE, 2005.
+[7] Ondrej Chum, Jiří Matas, and Josef Kittler. Locally optimized ransac. In DAGM. Springer, 2003.
+[8] Elizabeth D Dolan and Jorge J Moré. Benchmarking optimization software with performance profiles. Mathematical programming, 91(2):201-213, 2002.
+[9] Daniel M Dunlavy and Dianne P O'Leary. Homotopy optimization methods for global optimization. Technical report, Sandia National Laboratories, 2005.
+[10] Chris Engels, Henrik Stewénius, and David Nister. Bundle adjustment rules. Photogrammetric computer vision, 2(2006), 2006.
+[11] Martin A Fischler and Robert C Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381-395, 1981.
+[12] Roger Fletcher and Sven Leyffer. Nonlinear programming without a penalty function. Mathematical programming, 91(2):239-269, 2002.
+[13] Joerg Fliege, LM Grana Drummond, and Benar Fux Svaiter. Newton's method for multiobjective optimization. SIAM Journal on Optimization, 20(2):602-626, 2009.
+[14] Donald Geman and George Reynolds. Constrained restoration and the recovery of discontinuities. IEEE Transactions on pattern analysis and machine intelligence, 14(3):367-383, 1992.
+[15] Donald Geman and Chengda Yang. Nonlinear image recovery with half-quadratic regularization. IEEE transactions on Image Processing, 4(7):932-946, 1995.
+[16] Peter J Green. Iteratively reweighted least squares for maximum likelihood estimation, and some robust and resistant alternatives. Journal of the Royal Statistical Society: Series B (Methodological), 46(2):149-170, 1984.
+[17] Richard Hartley and Andrew Zisserman. Multiple view geometry in computer vision. Cambridge university press, 2003.
+
+[18] Peter J Huber et al. Robust estimation of a location parameter. The Annals of Mathematical Statistics, 35(1):73-101, 1964.
+[19] Rainer Kummerle, Giorgio Grisetti, Hauke Strasdat, Kurt Konolige, and Wolfram Burgard. g2o: A general framework for graph optimization. In 2011 IEEE International Conference on Robotics and Automation, pages 3607-3613. IEEE, 2011.
+[20] Huu Le, Tat-Jun Chin, and David Suter. An exact penalty method for locally convergent maximum consensus. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on. IEEE, 2017.
+[21] Hossein Mobahi and John W Fisher. On the link between gaussian homotopy continuation and convex envelopes. In International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, pages 43-56. Springer, 2015.
+[22] Hossein Mobahi, C Lawrence Zitnick, and Yi Ma. Seeing through the blur. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 1736-1743. IEEE, 2012.
+[23] Jorge J Moré. The levenberg-marquardt algorithm: implementation and theory. In Numerical analysis, pages 105-116. Springer, 1978.
+[24] Raul Mur-Artal, Jose Maria Martinez Montiel, and Juan D Tardos. Orb-slam: a versatile and accurate monocular slam system. IEEE transactions on robotics, 31(5):1147-1163, 2015.
+[25] Jorge Nocedal and Stephen Wright. Numerical optimization. Springer Science & Business Media, 2006.
+[26] Ademir A Ribeiro, Elizabeth W Karas, and Clóvis C Gonzaga. Global convergence of filter methods for nonlinear programming. SIAM Journal on Optimization, 19(3):1231-1249, 2008.
+[27] Kenneth Rose. Deterministic annealing for clustering, compression, classification, regression, and related optimization problems. Proceedings of the IEEE, 86(11):2210-2239, 1998.
+[28] Noah Snavely, Steven M Seitz, and Richard Szeliski. Photo tourism: exploring photo collections in 3d. In ACM transactions on graphics (TOG), volume 25, pages 835-846. ACM, 2006.
+[29] Bill Triggs, Philip F McLauchlan, Richard I Hartley, and Andrew W Fitzgibbon. Bundle adjustment—a modern synthesis. In International workshop on vision algorithms, pages 298-372. Springer, 1999.
+[30] Heng Yang, Pasquale Antonante, Vasileios Tzoumas, and Luca Carlone. Graduated non-convexity for robust spatial perception: From non-minimal solvers to global outlier rejection. arXiv preprint arXiv:1909.08605, 2019.
+[31] Christopher Zach. Robust bundle adjustment revisited. In European Conference on Computer Vision, pages 772-787. Springer, 2014.
+[32] Christopher Zach and Guillaume Bourmaud. Descending, lifting or smoothing: Secrets of robust cost optimization. In Proceedings of the European Conference on Computer Vision (ECCV), pages 547-562, 2018.
+
+[33] Christopher Zach and Guillaume Bourmaud. Multiplicative vs. additive half-quadratic minimization for robust cost optimization. 2018.
+[34] Christopher Zach and Guillaume Bourmaud. Pareto meets huber: Efficiently avoiding poor minima in robust estimation. In Proceedings of the IEEE International Conference on Computer Vision, pages 10243-10251, 2019.
\ No newline at end of file
diff --git a/agraduatedfiltermethodforlargescalerobustestimation/images.zip b/agraduatedfiltermethodforlargescalerobustestimation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..0a5bae422158fe203940196dd807cca76f231cf9
--- /dev/null
+++ b/agraduatedfiltermethodforlargescalerobustestimation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9330ac4377e5460a2b6641fbb5cbb8e6c7b6f08a631fbaad8f7e77cf76e75ba2
+size 426132
diff --git a/agraduatedfiltermethodforlargescalerobustestimation/layout.json b/agraduatedfiltermethodforlargescalerobustestimation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..37b72206a611d7bb1db54d451fcbe26b95fd2a3b
--- /dev/null
+++ b/agraduatedfiltermethodforlargescalerobustestimation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1b054d45c2b94761196daa1b9ffc75aae5713aae8ca9bc32d3d6aa9c613726c9
+size 491697
diff --git a/ahierarchicalgraphnetworkfor3dobjectdetectiononpointclouds/5d1bfc0c-cf9c-42b1-99ec-64275d19d07e_content_list.json b/ahierarchicalgraphnetworkfor3dobjectdetectiononpointclouds/5d1bfc0c-cf9c-42b1-99ec-64275d19d07e_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..8305003ee1e4e0f47189a8fa082c85b08854f224
--- /dev/null
+++ b/ahierarchicalgraphnetworkfor3dobjectdetectiononpointclouds/5d1bfc0c-cf9c-42b1-99ec-64275d19d07e_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec5c22eb2fb25b1af46aac5c27ab5382920189cd23e17ed5df6f6b7fcaf047c9
+size 72142
diff --git a/ahierarchicalgraphnetworkfor3dobjectdetectiononpointclouds/5d1bfc0c-cf9c-42b1-99ec-64275d19d07e_model.json b/ahierarchicalgraphnetworkfor3dobjectdetectiononpointclouds/5d1bfc0c-cf9c-42b1-99ec-64275d19d07e_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..5f78b7d081cc4487ef77731b6f8d79df4973175c
--- /dev/null
+++ b/ahierarchicalgraphnetworkfor3dobjectdetectiononpointclouds/5d1bfc0c-cf9c-42b1-99ec-64275d19d07e_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f0a44d0a2f03a472083a0da393a8b182e139f61723204a37d4afaa3732b3ec3a
+size 89682
diff --git a/ahierarchicalgraphnetworkfor3dobjectdetectiononpointclouds/5d1bfc0c-cf9c-42b1-99ec-64275d19d07e_origin.pdf b/ahierarchicalgraphnetworkfor3dobjectdetectiononpointclouds/5d1bfc0c-cf9c-42b1-99ec-64275d19d07e_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..bf38da7389c6e399eb44393bdca33244fc56fdfb
--- /dev/null
+++ b/ahierarchicalgraphnetworkfor3dobjectdetectiononpointclouds/5d1bfc0c-cf9c-42b1-99ec-64275d19d07e_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:430e65dd88bf2144300fac6c14a72aab962c69220956f5bf34e6b1450a71ea06
+size 768181
diff --git a/ahierarchicalgraphnetworkfor3dobjectdetectiononpointclouds/full.md b/ahierarchicalgraphnetworkfor3dobjectdetectiononpointclouds/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..aa69eb81e4a1f2e4a59461592e096c28a9ccf762
--- /dev/null
+++ b/ahierarchicalgraphnetworkfor3dobjectdetectiononpointclouds/full.md
@@ -0,0 +1,288 @@
+# A Hierarchical Graph Network for 3D Object Detection on Point Clouds
+
+Jintai Chen $^{1*}$ , Biwen Lei $^{1*}$ , Qingyu Song $^{1*}$ , Haochao Ying $^{1}$ , Danny Z. Chen $^{2}$ , Jian Wu $^{1}$
+
+$^{1}$ Zhejiang University, Hangzhou, 310027, China
+
+$^{2}$ Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, USA {JTigerChen, biwen1996, qingyusong, haochaoying}@zju.edu.com, dchen@nd.edu, wujian2000@zju.edu.cn
+
+# Abstract
+
+3D object detection on point clouds finds many applications. However, most known point cloud object detection methods did not adequately accommodate the characteristics (e.g., sparsity) of point clouds, and thus some key semantic information (e.g., shape information) is not well captured. In this paper, we propose a new graph convolution (GConv) based hierarchical graph network (HGNet) for 3D object detection, which processes raw point clouds directly to predict 3D bounding boxes. HGNet effectively captures the relationship of the points and utilizes the multilevel semantics for object detection. Specially, we propose a novel shape-attentive GConv (SA-GConv) to capture the local shape features, by modelling the relative geometric positions of points to describe object shapes. An SA-GConv based U-shape network captures the multi-level features, which are mapped into an identical feature space by an improved voting module and then further utilized to generate proposals. Next, a new GConv based Proposal Reasoning Module reasons on the proposals considering the global scene semantics, and the bounding boxes are then predicted. Consequently, our new framework outperforms state-of-the-art methods on two large-scale point cloud datasets, by $\sim 4\%$ mean average precision (mAP) on SUN RGB-D and by $\sim 3\%$ mAP on ScanNet-V2.
+
+# 1. Introduction
+
+3D object detection on point clouds has many applications, such as autonomous driving, fault detection for parts, housekeeping robots, and augmented reality. Since point clouds lie in irregular space and can be sparse, known methods (e.g., convolutional neural networks) designed for grid-structured data did not perform well on point clouds (e.g., see discussion in [2]). Many methods have been proposed
+
+
+Figure 1. The predicted object centers and bounding boxes. Different colors of points indicate the center predictions based on the semantics of different levels. The semantics of different levels are then centralized and aggregated to predict the bounding boxes.
+
+for 3D object detection on point clouds, such as projection based methods [35, 4], volumetric convolution based methods [19, 8], and PointNet based methods [29, 30]. The former two types tried to stiffly transform point cloud data into grid-structured data, and the latter aggregated features without explicitly considering the geometric positions of points.
+
+Compared to other known methods, PointNet++ [32] aimed to preserve the spatial structure of points, and thus was widely used as backbone for feature learning in state-of-the-art frameworks [29, 46, 30]. Recently, Charles et al. proposed VoteNet [29], voting for points to be at the object centers based on learned features from PointNet++ [32]. This method yielded excellent results. But, there are still some challenging drawbacks. First, using PointNet++ as backbone neglected some local shape information, since the relative geometric positions of points were not accounted for. Second, the multi-level semantics were not adequately utilized by the structures of the frameworks, which might neglect some helpful information for object detection.
+
+In this paper, we propose a novel Hierarchical Graph Network (HGNet) for 3D object detection on point clouds, based on graph convolutions (GConvs). HGNet contains three main components: a GConv based U-shape network (GU-net), a Proposal Generator, and a Proposal Reasoning Module (ProRe Module). Specially, we develop a new
+
+Shape-attentive GConv (SA-GConv), which captures the object shape information by modelling the relative geometric positions of points. In our pipeline, the SA-GConv based GU-net takes a point cloud as input and captures the semantics of multi-levels (see Fig. 2), which are further aggregated to generate proposals by the Proposal Generator that contains an improved voting module (see Sec. 3.4). Incorporating the global scene semantics, the novel Proposal Reasoning Module (ProRe Module) leverages a fully-connected graph to reason on the proposals, and the bounding boxes are predicted. The detection results are finally obtained after performing 3D non-maximum suppression (NMS). An example of our object detection results is shown in Fig. 1.
+
+The entire HGNet is trained in end-to-end manner. In our framework, the local shape information, semantics of multilevels, and global scene information (features of proposals) of point clouds are sufficiently captured, aggregated, and incorporated by the hierarchical graph model, giving full consideration of the characteristics of point cloud data.
+
+Our main contributions in this work are as follows:
+
+(A) We develop a novel Hierarchical Graph Network (HGNet) for 3D object detection on point clouds, which outperforms the state-of-the-art methods by a clear margin.
+(B) We propose a novel SA-(De)GConv, which is effective at aggregating features and capturing shape information of objects in point clouds.
+(C) We build a new GU-net for generating multi-level features, which are vital for 3D object detection.
+(D) Leveraging global information, we propose the ProRe Module to promote performance by reasoning on proposals.
+
+# 2. Related Work
+
+# 2.1. 3D Object Detection on Point Clouds
+
+Point clouds have some special characteristics (e.g., sparse and irregular), which are often not suitable for convolutional neural networks to process. Many methods [2, 38, 20, 44, 9, 23] have been proposed for 3D object detection on point clouds, such as projection methods (e.g., Complex-YOLO [35], BirdNet [4]), volumetric convolution based methods (e.g., 3DFCN [19], Vote3Deep [8]), and PointNet based methods (e.g., F-PointNet [30], STD [46]). PointNet [31] pioneered a method using raw points as input and obtained good performances, followed by many frameworks [31, 32, 14, 29, 42]. Lang et al. [17] introduced the Pillar Feature Network, encoding point clouds into pseudo images and being processed by 2D CNN. Although novel and fast, the localization information of the framework [17] was not well preserved. PointNet based methods showed good performance, as they dealt with raw points directly. However, PointNet did not consider the dependence of points in information aggregation. Yang et al. [46] proposed a two-stage fusion method STD,
+
+combining PointNet based methods and volumetric convolution based methods. However, the two-stage process might learn some unmatched features for object detection. VoteNet [29] proposed a new voting method, predicting the object centers with the features learned which helped aggregate distant semantic information. However, the local shape information was not well accounted for in the VoteNet. Since there can be a variety of objects, the features needed for detecting different objects may not be in an identical distribution. In other words, semantics of multi-levels may be needed for identifying different objects.
+
+# 2.2. Spatial-based Graph Convolution Networks
+
+Graph convolution networks (GCNs) can be divided into two types: spatial-based [26, 3, 28] and spectral-based [12, 6, 15, 10]. Spatial-based methods are mainly based on the spatial relations of vertices in graphs, and are widely used on point clouds. Thus, we focus on reviewing these methods. The first spatial-based GCN was proposed in [26], by summing up the neighborhood information of vertices directly. Later, an inductive feature aggregation algorithm (GraphSAGE, including Mean aggregator, LSTM aggregator, and Pooling aggregator) was proposed in [10] to replace the transductive learning. Strictly speaking, GraphSAGE is not a kind of GCN, but it embodied the ideas of GCNs. Graph Attention Networks [40] employed attention mechanisms in learning relative weights among neighboring vertices, and showed attractive performance over previous works. In addition, many attention based GCNs [18, 1, 25] were proposed. GINs [45] assigned different weights for the central vertex and its neighboring vertices. For 3D data, Li et al. [21] introduced the dilated GCNs, which better balanced the receptive fields and computation. Feature-Steered GConv [41] verified that GConvs could capture shape information by modelling the geometric positions of the points, and outperformed the traditional shape descriptors. Wang et al. presented a dynamic edge convolution method for semantic segmentation, called EdgeConv [42], which aimed to capture the relationship of points but neglected the importance of the relative geometric positions of points.
+
+# 3. Hierarchical Graph Network
+
+# 3.1. Motivation and Overview
+
+We aim to develop a new effective method for 3D object detection on point clouds. Different from 2D image data, point clouds often do not present clear object shape information (e.g., corners and edges), and thus some shape-attentive feature extractors are needed to process point clouds.
+
+
+Figure 2. An overview of our HGNet framework, which contains GU-net, Proposal Generator, and ProRe Module. The counts of points are indicated on the left. In inference, 3D non-maximum suppression (NMS) is utilized and predicted 3D boxes are finally produced.
+
+Even though the previous work [42, 32, 41] implicitly used the positions of points, it is more efficient to explicitly model the geometric positions of points, which better describe the shapes of objects. In addition, multi-level semantics have been shown to be beneficial [43, 39, 34, 22] for detecting objects of various sizes. Also, points can be sparse on the surfaces of objects, and thus the semantics of different levels may provide complementary information for one another. Many previous studies on 3D object detection did not sufficiently utilize multi-level semantics, which is insufficient for tackling point clouds with objects of various sizes and point sparsity.
+
+In this work, we develop an end-to-end hierarchical graph network (HGNet) for 3D object detection on point clouds, as shown in Fig. 2. The entire HGNet contains three main parts: a GConv based U-shape network (GU-net), a Proposal Generator, and a Proposal Reasoning Module (ProRe Module). A new shape-attentive GConv is proposed to capture the local shape semantics. GU-net generates the multi-level semantics, which are aggregated to generate proposals by the Proposal Generator. Finally, the ProRe Module reasons on the proposals to help predict the bounding boxes by leveraging the global scene semantics.
+
+Below we will discuss in detail the novel Shape-attentive (De)GConv in Sec. 3.2, GU-net in Sec. 3.3, the Proposal Generator in Sec. 3.4, the ProRe Module in Sec. 3.5, and the loss functions in Sec. 3.6.
+
+# 3.2. Shape-attentive Graph (De)Convolutions
+
+Point clouds usually do not present the object shapes clearly, yet shape information is important for 3D object detection. One might describe the local shape around a point using the relative geometric positions of its neighboring points. In this section, we will present a novel Shape-attentive GConv, which captures object shapes by modelling the geometric positions of points.
+
+Shape-attentive Graph Convolution. Consider a point set $X = \{x_{i} \in \mathbb{R}^{D + 3}\}_{i=1}^{n}$ , where a point $x_{i} = [f_{i}, p_{i}]$ , $p_i \in \mathbb{R}^3$ is the geometric position, and $f_i \in \mathbb{R}^D$ is the $D$ -dimensional feature. From $X$ , we want to generate a point set $X' = \{x_i \in \mathbb{R}^{D' + 3}\}_{i=1}^{n'}$ , $n' < n$ . Here we design a GConv to aggregate features from $X$ to $X'$ . Similar to the sampling layer in PointNet++, we first sample $n'$ points from $n$ points. Typically, $k$ Nearest Neighbors ( $kNN$ ) or ball-query [32] with respect to the geometric positions of points is used to construct a local region after sampling a point $x_i \in X$ as the central point for feature aggregation. In this paper, we use $kNN$ as an example.
+
+Our shape-attentive GConv (SA-GConv) models the point positions by an independent term. Consider two points $x_{i}$ and $x_{j}$ in a local region, where $x_{i}$ is the central point and $x_{j}$ is one of the neighboring points of $x_{i}$ . The relative geometric position vector, $e_{ij} = p_i - p_j$ , expresses both the relative geometric direction and the relative geometric distance between points $x_{i}$ and $x_{j}$ . Usually, a local region contains dozens of points, which are sufficient for local shape description in 3-dimensional space if $x_{j}$ enumerates all the points in the local region except $x_{i}$ . To model the relative geometric positions of points and adaptively aggregate the point features, we define a directed GConv, SA-GConv, in a simple way as:
+
+$$
+f_i = \max_{x_j \in kNN(x_i)} \mathbf{g}\left(p_i - p_j\right) \cdot \mathbf{f}\left(x_i, x_j\right) \tag{1}
+$$
+
+We model the relative geometric positions by a learnable function $\mathbf{g}:\mathbb{R}^3\to \mathbb{R}^1$, and the point features (including geometric positions) are handled by $\mathbf{f}:\mathbb{R}^{D + 3}\times \mathbb{R}^{D + 3}\rightarrow \mathbb{R}^{D'}$. Without loss of generality, we employ the max-pooling operation to finally aggregate the features. In particular, we can implement $\mathbf{g}$ by a simple one-by-one convolution with the Sigmoid activation function, and implement $\mathbf{f}$ by $\mathbf{f}(x_i,x_j) = \mathbf{MLP}([x_i, x_j'])$, where $x_j' = x_j - x_i$, $\mathbf{MLP}(\cdot)$ is a multi-layer perceptron with batch normalization and ReLU activation, and $[\cdot,\cdot]$ indicates channel-wise concatenation. The operation is illustrated in Fig. 3. In
+Fig. 3, the blue point $x_{i}$ is sampled from a point set as the central point, and the corresponding local region contains the 3 nearest neighbors of $x_{i}$ (the orange, green, and yellow points); the features of these 3 nearest neighbors are aggregated to $x_{i}$ following Eq. (1). Our proposed SA-GConv is permutation invariant, as the max-pooling operation is symmetric with respect to its input.
+
+This shape-attentive operation is different from simple MLP-based operations (e.g., EdgeConv [42]). Eq. (1) explicitly computes the shape information with an independent function $\mathbf{g}$, while MLP-based methods rely only on learned weights. The three dimensions encoding geometric positions have very limited impact in a high-dimensional feature space if all features (positions included) are co-treated by merely an MLP. Besides, as shown in Fig. 7, the function $\mathbf{g}$ is highly responsive to shape information, and such a shape description is beneficial to object detection.
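+
+To make Eq. (1) concrete, the following is a minimal PyTorch sketch of one SA-GConv layer under our reading of the text: $\mathbf{g}$ is a per-edge linear map on $p_i - p_j$ followed by a Sigmoid, $\mathbf{f}$ is an MLP on the concatenated pair $[x_i, x_j - x_i]$, and the gated edge features are max-pooled over the $k$ nearest neighbors. Tensor shapes, layer widths, and the kNN helper are illustrative assumptions, not the released implementation.
+
+```python
+# Minimal SA-GConv sketch of Eq. (1); layer sizes, the kNN helper, and the sampling
+# interface are illustrative assumptions, not the authors' released code.
+import torch
+import torch.nn as nn
+
+
+def knn(pos_query, pos_all, k):
+    """Indices of the k nearest points in pos_all for each query point.
+    pos_query: (B, M, 3), pos_all: (B, N, 3) -> (B, M, k)."""
+    dist = torch.cdist(pos_query, pos_all)
+    return dist.topk(k, dim=-1, largest=False).indices
+
+
+class SAGConv(nn.Module):
+    def __init__(self, in_dim, out_dim, k=16):
+        super().__init__()
+        self.k = k
+        # g: R^3 -> R, a "one-by-one" linear map per edge with a Sigmoid gate
+        self.g = nn.Sequential(nn.Linear(3, 1), nn.Sigmoid())
+        # f: MLP on the concatenated pair [x_i, x_j - x_i] with BN and ReLU
+        self.f = nn.Sequential(nn.Linear(2 * (in_dim + 3), out_dim),
+                               nn.BatchNorm1d(out_dim), nn.ReLU())
+
+    def forward(self, feat, pos, centre_idx):
+        """feat: (B, N, D), pos: (B, N, 3), centre_idx: (B, M) sampled centre indices."""
+        B, N, D = feat.shape
+        x = torch.cat([feat, pos], dim=-1)                                   # x = [f, p]
+        pos_c = torch.gather(pos, 1, centre_idx[..., None].expand(-1, -1, 3))
+        x_c = torch.gather(x, 1, centre_idx[..., None].expand(-1, -1, D + 3))
+        nbr_idx = knn(pos_c, pos, self.k)                                    # (B, M, k)
+        nbr_x = torch.gather(x[:, None].expand(-1, nbr_idx.size(1), -1, -1), 2,
+                             nbr_idx[..., None].expand(-1, -1, -1, D + 3))   # (B, M, k, D+3)
+        rel_pos = pos_c[:, :, None, :] - nbr_x[..., -3:]                     # p_i - p_j
+        pair = torch.cat([x_c[:, :, None].expand_as(nbr_x),
+                          nbr_x - x_c[:, :, None]], dim=-1)                  # [x_i, x_j - x_i]
+        Bm, M, K, C = pair.shape
+        edge = self.f(pair.reshape(-1, C)).reshape(Bm, M, K, -1)             # f(x_i, x_j)
+        gate = self.g(rel_pos)                                               # g(p_i - p_j)
+        return (gate * edge).max(dim=2).values                               # max-pool, Eq. (1)
+```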
+
+Shape-attentive Graph De-Convolution. In processing grid-structured data, an effective up-sampling operation often pads the feature maps (e.g., by interpolation) and then performs a convolution, as shown in the left part of Fig. 4. Generalizing this operation to irregular data, we propose the Shape-attentive Graph De-Convolution (SA-DeGConv), which performs the inverse operation of SA-GConv. SA-DeGConv provides a way to propagate features from a set of points to a larger set of points adaptively, as shown in the right part of Fig. 4.
+
+The SA-DeGConv is performed in three steps. (1) Padding the points. As shown in Fig. 2, if we up-sample the features in the point feature map $U4$ to generate $U3$, we pad the points on $U3$ following the positions of the points on $D3$, as the points on $D3$ and $U3$ shall be positionally aligned. (2) Feature initialization. Note that $\{p_i^{(4)}\} \subset \{p_i^{(3)}\}$, where $p_i^{(3)}$ and $p_i^{(4)}$ denote the geometric positions of the $i$-th point on $U3$ and $U4$, respectively. For each point on $U3$, we initialize its feature by the arithmetic average $f_i^* = \sum_{j=1}^k f_j^{(4)} / k$, where $f_j^{(4)}$ denotes the feature of the $j$-th of its $k$ positionally neighboring points on $U4$. (3) Feature aggregation. We use SA-GConv (Eq. (1)) to update the features of all the points on $U3$, as illustrated in Fig. 4.
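+
+A small sketch of step (2) above (feature initialization only): each padded point on $U3$ takes the arithmetic mean of the features of its $k$ positionally nearest points on $U4$. The kNN-based neighbor lookup is an assumption made for illustration.
+
+```python
+# Sketch of SA-DeGConv step (2): initialize the feature of each padded point on U3 as the
+# arithmetic mean of its k nearest points on U4 (kNN lookup assumed; illustrative only).
+import torch
+
+
+def init_upsampled_features(pos_fine, pos_coarse, feat_coarse, k=3):
+    """pos_fine: (N3, 3) padded positions (copied from D3), pos_coarse: (N4, 3),
+    feat_coarse: (N4, C) features on U4 -> (N3, C) initial features f_i* on U3."""
+    dist = torch.cdist(pos_fine, pos_coarse)               # (N3, N4) pairwise distances
+    idx = dist.topk(k, dim=-1, largest=False).indices      # (N3, k) nearest points on U4
+    return feat_coarse[idx].mean(dim=1)                    # f_i* = (1/k) * sum_j f_j^(4)
+```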
+
+# 3.3. GU-net
+
+Detecting objects effectively requires abundant semantics. Previous methods (e.g., PointNet-based methods) barely utilized multi-level semantics, which is not very beneficial for detecting objects of various sizes, as discussed in [22, 24]. Besides, as points can be sparse and even missing on the surfaces of objects, using multi-level semantics provides richer information for object detection. To capture the multi-level semantics, we propose a new U-shape network called GU-net, based on SA-(De)GConv.
+
+
+Figure 3. An illustration of the Shape-attentive GConv operation. The blue point $x_{i}$ indicates a sampled point whose feature is updated by aggregating the features from other points $(x_{j},$ including the orange, yellow, and green points). $p$ indicates the geometric position. The aggregation follows from Eq. (1).
+
+We design a down-sampling module and stack it 4 times to form the down-sampling pathway, while an up-sampling module is stacked twice to form the up-sampling pathway. Similar to FPN [22], GU-net generates a feature pyramid with three point feature maps (see Fig. 2).
+
+Down-sampling Module. Given a point feature map with $N$ points, we first sample a subset containing $N'$ ( $N' < N$ ) points by the farthest point sampling (FPS) [27, 7, 32]. Then we construct the local regions by kNN or ball-query around the sampled points, and then update the features of sampled points by performing the SA-GConv. In this way, a point feature map is processed to generate a higher-level point feature map with fewer points (e.g., $D4$ is generated from $D3$ ).
+
+Up-sampling Module. The up-sampling module inverts the process of the down-sampling module and is mainly performed by SA-DeGConv. Skip connections are also used to bridge the corresponding point feature maps (e.g., $U3$ and $D3$) by channel-wise concatenation, except for the top-most point feature map $U4$; $U4$ and $D4$ are connected by an MLP. Thus, GU-net outputs a feature pyramid with three point feature maps (see Fig. 2).
+
+# 3.4. Proposal Generator
+
+Three point feature maps are generated by GU-net (see Fig. 2), containing the multi-level semantics. Some previous methods (e.g., VoteNet [29]) used only one feature map for object prediction. Even though the higher-level features are computed by fusing the lower-level features in the up-sampling pathway, it is more beneficial to use the multi-level features together for proposal generation, as the features of different levels provide different semantics. To this end, we propose the Proposal Generator to predict the object centers (shown in Fig. 1), with an improved voting module as its main structure, which transforms the multi-level features into an identical feature space.
+
+Improved Voting Module. The voting module in VoteNet [29] was proposed to predict the object centers and centralize the object features.
+
+
+Figure 4. An illustration of the up-sampling operation (left) for grid-structured data, and of SA-DeGConv (right). In the right part, the dashed circles denote the padded points and the blue arrows indicate the arithmetic averaging used for feature initialization in SA-DeGConv.
+
+In our paper, we perform the voting operation on all the point feature maps in the feature pyramid. Thus, the improved voting module also helps to transform the multi-level features (which lie in different feature spaces) into an identical feature space (as shown in Fig. 1), which can be directly used to generate proposals. We implement the improved voting module with SA-GConv, since SA-GConv is more efficient. The voting process is specified by:
+
+$$
+\left[ f_v, p_v \right] = [f, p] + [\Delta f, \Delta p] \tag{2}
+$$
+
+$$
+[\Delta f, \Delta p] = \mathrm{SA\text{-}GConv}([f, p])
+$$
+
+where $f \in \mathbb{R}^{F}$ and $p \in \mathbb{R}^{3}$ are the features and geometric positions of the points in the feature pyramid, and $f_{v} \in \mathbb{R}^{F_{v}}$ and $p_{v} \in \mathbb{R}^{3}$ are the features and geometric positions of the votes. $\mathrm{SA\text{-}GConv}(\cdot)$ follows from Eq. (1). We use SA-GConv to implement the improved voting module by adding three additional channels to predict the geometric shifts.
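+
+Schematically, Eq. (2) amounts to predicting offsets with one SA-GConv layer and adding them to the inputs. The sketch below reuses the hypothetical SAGConv class from the sketch in Sec. 3.2 and assumes the vote features keep the input width $F$; it is an illustration, not the released code.
+
+```python
+# Sketch of the improved voting module (Eq. (2)): an SA-GConv layer predicts [Δf, Δp]
+# and the votes are [f, p] + [Δf, Δp].  Assumes the SAGConv sketch from Sec. 3.2 and
+# that vote features keep the input width F (illustrative only).
+import torch
+import torch.nn as nn
+
+
+class VotingModule(nn.Module):
+    def __init__(self, feat_dim, k=16):
+        super().__init__()
+        # F channels for Δf plus three additional channels for the geometric shift Δp
+        self.sa_gconv = SAGConv(in_dim=feat_dim, out_dim=feat_dim + 3, k=k)
+
+    def forward(self, feat, pos):
+        """feat: (B, N, F), pos: (B, N, 3) for one level of the feature pyramid."""
+        # every point votes, so the sampled "centres" are all N points
+        idx = torch.arange(pos.size(1), device=pos.device).expand(pos.size(0), -1)
+        delta = self.sa_gconv(feat, pos, idx)               # (B, N, F + 3)
+        delta_f, delta_p = delta[..., :-3], delta[..., -3:]
+        return feat + delta_f, pos + delta_p                # f_v, p_v as in Eq. (2)
+```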
+
+Generating Proposals. By performing the improved voting module, the features in the feature pyramid are transformed into an identical feature space. To aggregate the features, we retain $N_{p}$ votes by FPS and aggregate the features of all the votes into them, similar to VoteNet ($N_{p} = 256$ by default). Thus, the multi-level features are fully fused to predict bounding boxes and categories.
+
+# 3.5. Proposal Reasoning Module
+
+With the structures presented above, the local semantics and multi-level semantics are captured and fully fused. On one hand, these semantics are learned in the local receptive fields, yet the global scene semantics are not used in object detection. On the other hand, some objects contain very few points on their external surfaces (e.g., see the point clouds of the SUN RGB-D dataset in Fig. 6), and it can be hard to detect those objects with such limited information. Hence, we propose a new GConv based Proposal Reasoning Module (ProRe Module) to reason on proposals by leveraging
+the global scene information. The features of the proposals are updated by a new GConv, incorporating the global semantics and using the relative positions of the proposals as an attention map. We formulate the relation of the proposals as a directed graph $\mathcal{G}_g = (\mathcal{V}_g,\mathcal{E}_g)$. $\mathcal{V}_g$ denotes the vertex set, and each vertex corresponds to a proposal, represented by its high-dimensional features. The edges $\mathcal{E}_g$ in $\mathcal{G}_g$ are initially set as fully-connected with self-loops.
+
+Formally, given a proposal set in which the features of the proposals lie in an $F$ -dimensional space, we consider a proposal-feature tensor $\mathbf{H}_p \in \mathbb{R}^{n \times F}$ and a tensor $\mathbf{P} \in \mathbb{R}^{n \times n \times 3}$ recording the relative positions of the proposals. In $\mathbf{P}$ , an element $P_{i,j,k} = p_{i,k} - p_{j,k}$ , where $p_{i,k}$ and $p_{j,k}$ are the $k$ -th dimension ( $k \in \{x,y,z\}$ ) of the geometric positions of the $i$ -th and $j$ -th proposals, respectively. The reasoning procedure can be specified as:
+
+$$
+\mathbf{H}_p' = \Phi(\mathbf{P}, \mathbf{H}_p) = \gamma(\mathbf{P}) \odot \Psi_c\left(\Psi_v(\mathbf{H}_p)^T + \mathbf{H}_p^T\right)^T \tag{3}
+$$
+
+where the term $+\mathbf{H}_p^T$ is a residual connection [11], $\odot$ denotes the Hadamard product, and the operations $\Psi_{v}$ and $\Psi_{c}$ are mainly implemented by one-dimensional convolutions operating along the vertex-wise and channel-wise directions, respectively. The vertex-wise operation $\Psi_{v}$ incorporates features and propagates information among vertices (proposals), and the channel-wise operation $\Psi_{c}$ updates the features of the proposals. $\mathbf{H}_p'\in \mathbb{R}^{n\times F'}$ denotes the proposal-feature tensor after reasoning. Different from previous GConvs, ProRe considers the relative geometric positions among proposals during feature aggregation via $\gamma$, which transforms $\mathbf{P}$ into size $n\times F'$ for the Hadamard product. After the reasoning, the 3D bounding boxes and corresponding categories are predicted as in VoteNet [29].
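+
+A rough sketch of the reasoning step in Eq. (3) is given below. The exact widths and the forms of $\gamma$, $\Psi_v$, and $\Psi_c$ are not fully specified in the text, so the choices here (1D convolutions over the vertex and channel axes, and a linear-plus-Sigmoid $\gamma$ over the flattened relative positions) are assumptions for illustration.
+
+```python
+# Sketch of the ProRe reasoning step (Eq. (3)); the widths and the exact forms of
+# gamma, Psi_v, and Psi_c are assumptions made for illustration.
+import torch
+import torch.nn as nn
+
+
+class ProReModule(nn.Module):
+    def __init__(self, n_proposals, feat_dim, out_dim):
+        super().__init__()
+        self.psi_v = nn.Conv1d(n_proposals, n_proposals, 1)   # vertex-wise mixing Psi_v
+        self.psi_c = nn.Conv1d(feat_dim, out_dim, 1)          # channel-wise update Psi_c
+        # gamma: relative-position tensor P (n, n, 3) -> (n, F') attention map
+        self.gamma = nn.Sequential(nn.Linear(3 * n_proposals, out_dim), nn.Sigmoid())
+
+    def forward(self, h, p):
+        """h: (n, F) proposal features, p: (n, 3) proposal centres -> (n, F')."""
+        n = h.size(0)
+        P = p[:, None, :] - p[None, :, :]                     # (n, n, 3), P_ijk = p_ik - p_jk
+        mixed = self.psi_v(h.unsqueeze(0)).squeeze(0) + h     # Psi_v(H_p) + H_p (residual)
+        out = self.psi_c(mixed.t().unsqueeze(0)).squeeze(0).t()   # Psi_c(...), shape (n, F')
+        return self.gamma(P.reshape(n, -1)) * out             # gamma(P) ⊙ ..., Hadamard product
+```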
+
+# 3.6. Loss Functions
+
+The improved voting process on the feature pyramid is guided by $\mathcal{L}_{\mathrm{voting}}$:
+
+$$
+\mathcal{L}_{\text{voting}} = \sum_{m} \left( \frac{1}{M_m} \sum_{i} \left| \Delta p_i - \Delta p_i^* \right| \, \mathbb{1}\left[x_i \text{ on object}\right] \right) \tag{4}
+$$
+
+where $\mathbb{1}[x_i \text{ on object}]$ indicates whether a point $x_{i}$ lies on an object surface, $M_{m}$ is the number of points on a given object in the $m$-th level point feature map of the feature pyramid, and $|\cdot|$ denotes the $L_{1}$ loss. The other loss terms $\mathcal{L}_{\text{obj-cls}}$, $\mathcal{L}_{\text{boxes}}$, and $\mathcal{L}_{\text{sem-cls}}$ also follow VoteNet. The loss function of the entire framework is defined by:
+
+$$
+\mathcal{L} = \mathcal{L}_{\text{voting}} + \lambda_1 \mathcal{L}_{\text{obj-cls}} + \lambda_2 \mathcal{L}_{\text{boxes}} + \lambda_3 \mathcal{L}_{\text{sem-cls}} \tag{5}
+$$
+
+where $\lambda_1 = 0.5$, $\lambda_2 = 1$, and $\lambda_3 = 0.1$ by default.
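+
+Assembling the objective is mechanical once the individual terms are available; a sketch (with the voting term of Eq. (4) written out and the remaining terms assumed to be computed as in VoteNet) is:
+
+```python
+# Sketch of Eq. (4) and Eq. (5); the inputs are assumed to be PyTorch tensors computed
+# elsewhere (the obj-cls, boxes, and sem-cls terms follow VoteNet).
+def voting_loss(delta_p_levels, delta_p_gt_levels, on_object_levels):
+    """Eq. (4): masked L1 between predicted and ground-truth centre offsets, one entry
+    per pyramid level; on_object is a boolean mask over the points of that level.
+    Normalizing by the per-level on-object count is a simplification of M_m."""
+    loss = 0.0
+    for dp, dp_gt, mask in zip(delta_p_levels, delta_p_gt_levels, on_object_levels):
+        m = mask.float().sum().clamp(min=1.0)
+        loss = loss + ((dp - dp_gt).abs().sum(-1) * mask.float()).sum() / m
+    return loss
+
+
+def total_loss(l_voting, l_obj_cls, l_boxes, l_sem_cls,
+               lambda1=0.5, lambda2=1.0, lambda3=0.1):
+    """Eq. (5) with the default weights."""
+    return l_voting + lambda1 * l_obj_cls + lambda2 * l_boxes + lambda3 * l_sem_cls
+```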
+
+# 4. Experiments
+
+To evaluate our method, our experiments on HGNet address two key questions.
+
+$\mathbf{Q}_1$ : How does HGNet compare to the state-of-the-art methods for 3D object detection on point clouds?
+$\mathbf{Q}_2$ : How much do SA-(De)GConv (for local shape semantics), GU-net with the Proposal Generator (for multi-level semantics), and the ProRe Module (for global semantics) each contribute to the performance?
+
+# 4.1. Implementation Details
+
+The entire HGNet in Fig. 2 is trained end-to-end. We implement our framework using PyTorch 1.0 on Python 3.6. The framework is trained on 1 GeForce RTX 2080Ti GPU. We train HGNet with the Adam optimizer. With a batch size of 8, the learning rate is $10^{-3}$ initially, is reduced by $10 \times$ after 80 epochs, and is reduced by $10 \times$ again after 120 epochs. Training the whole framework to convergence takes about 18 hours on SUN RGB-D and about 5 hours on ScanNetV2. In our experiments, the evaluation metrics follow those in [29], using the average precision (AP). In addition to the mean average precision (mAP) for evaluating the performance of the frameworks compared, we also use the coefficient of variation for AP (cvAP) to show the adaptability of the frameworks to detect various objects, defined as
+
+$$
+\mathrm{cvAP} = \left[ \frac{\sum_{i}^{N_c} \left(\mathrm{AP}_i - \mathrm{mAP}\right)^2}{N_c \cdot \mathrm{mAP}^2} \right]^{\frac{1}{2}} \tag{6}
+$$
+
+where $N_{c}$ indicates the number of object categories. The lower the cvAP, the better the framework.
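+
+Eq. (6) is simply the coefficient of variation of the per-category APs; a direct NumPy implementation (which reproduces, e.g., the cvAP of 0.40 for the VoteNet row of Table 1) is:
+
+```python
+# Direct implementation of Eq. (6); ap_per_category holds one AP value per category.
+import numpy as np
+
+
+def cv_ap(ap_per_category):
+    ap = np.asarray(ap_per_category, dtype=float)
+    m_ap = ap.mean()                                   # mAP
+    return np.sqrt(((ap - m_ap) ** 2).sum() / (len(ap) * m_ap ** 2))
+
+
+# e.g., the VoteNet row of Table 1 gives cvAP ~= 0.40:
+votenet_ap = [74.4, 83.0, 28.8, 75.3, 22.0, 29.8, 62.2, 64.0, 47.3, 90.1]
+print(round(cv_ap(votenet_ap), 2))                     # 0.4
+```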
+
+# 4.2. Datasets
+
+SUN RGB-D [36] is a single-view dataset of indoor scenes, with 37 object categories in total (the 10 most common categories are used). The whole dataset contains $\sim 5\mathrm{K}$ RGB-D images, with 5,285 images for training. All the images are annotated with oriented 3D bounding boxes and category labels. We convert the depth images into point cloud data before model processing.
+
+ScanNet-V2 [5] is a dataset of indoor scenes, containing RGB-D scans of about 1.5K scenes. To compare with the state-of-the-art frameworks, we prepare the data as in [13].
+
+Input Data. Similar to PointNet [31], we use raw points as input, after randomly sampling 20,000 points from a point cloud in SUN RGB-D or 40,000 points from a 3D scan in ScanNet-V2. We only use the height features and geometric positions as in VoteNet [29], without RGB cues. For data augmentation, we randomly flip point clouds along the $x$-axis and $y$-axis, and randomly scale them by a factor $s \sim U(0.9, 1.1)$.
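+
+The augmentation described above is straightforward; a NumPy sketch (the flip probability of 0.5 is an assumption) is:
+
+```python
+# Sketch of the data augmentation described above: random flips along the x- and y-axes
+# and a global random scale s ~ U(0.9, 1.1).  Flip probability 0.5 is assumed.
+import numpy as np
+
+
+def augment(points, rng=np.random):
+    """points: (N, 3) xyz coordinates of one point cloud (float array)."""
+    pts = points.copy()
+    if rng.rand() < 0.5:
+        pts[:, 0] = -pts[:, 0]            # flip along the x-axis
+    if rng.rand() < 0.5:
+        pts[:, 1] = -pts[:, 1]            # flip along the y-axis
+    pts *= rng.uniform(0.9, 1.1)          # random global scaling
+    return pts
+```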
+
+# 4.3. Evaluation Results
+
+Comparison with State-of-the-art Methods. To answer question $\mathbf{Q}_1$ , we compare on SUN RGB-D and ScanNet-V2 with various state-of-the-art methods: Deep sliding shapes
+(DSS) [37], 3D-SIS [13], 2D-driven [16], F-PointNet [30], GSPN [47], Cloud of gradients descriptor (COGD) [33], and VoteNet [29]. The experimental results are shown in Table 1 and Table 2. The performance results of the previous methods are obtained from either the original papers or [29].
+
+The experimental results show that our HGNet outperforms all the previous methods by a large margin without RGB cues. Specifically, HGNet improves the AP scores for large objects such as desk and bathtub compared to VoteNet [29], which struggled on these categories, as shown in Table 1. Note that HGNet has less bias than the previous methods (even reducing cvAP by $\sim 9\%$ on SUN RGB-D), which illustrates that HGNet is more adaptive to various objects. This is likely due to the proposed feature pyramid and our hierarchical graph modelling (SA-GConv, GU-net, and ProRe Module). It is worth noting that the AP scores cannot completely show the power of HGNet, as discussed in the next paragraph. Besides, the difference in inference time per point cloud between VoteNet and HGNet is within 0.001s on our GPUs, on both SUN RGB-D and ScanNet-V2.
+
+Visualization Results. Fig. 6 gives some visualization examples of point clouds, comparing the predicted bounding boxes and ground truth boxes. These examples show that HGNet has good performance on various objects. Besides, HGNet often detects some objects in the scenes that are not annotated by the ground truth (see the first and second rows for SUN RGB-D in Fig. 6). This implies that the indicator AP might underestimate the ability of HGNet.
+
+# 4.4. Ablation Analysis
+
+Ablation Experiments. To answer question $\mathrm{Q}_2$, we evaluate the contributions of SA-GConv, GU-net, and the ProRe Module via ablation experiments on the SUN RGB-D dataset. Some quantitative results are shown in Table 3. We compare SA-GConv with a simple GConv (SGConv), defined by SGConv$(x_i, x_j) = \mathbf{f}(x_i, x_j)$, i.e., eliminating the position-modelling term $\mathbf{g}(p_i - p_j)$. Also, we compare SA-DeGConv with the arithmetic interpolation (Inter.), which is the initialization method of SA-DeGConv (described in Sec. 3.2), and we compare the feature pyramid (FP) with using only $U2$ (as shown in Fig. 2). The first row in Table 3 is the baseline. As one can see in Table 3, SA-GConv, the ProRe Module, and the feature pyramid each contribute $\sim 2\%$, and SA-DeGConv contributes another $0.4\%$. It is clear that these proposed components are useful. Below we further discuss the effects of the ProRe Module and SA-GConv.
+
+Local Shape Information Capturing. To further illustrate the performance of SA-(De)GConv, we compare it with the set abstraction module (SA) of PointNet++. We replace SA-GConv by SA in HGNet and compare the precision of the voting results on SUN RGB-D; the voting results reflect the power of feature capturing.
+
+| Method | Input style | bathtub | bed | bookshelf | chair | desk | dresser | nightstand | sofa | table | toilet | mAP | cvAP |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| DSS | XYZ + RGB | 44.2 | 78.8 | 11.9 | 61.2 | 20.5 | 6.4 | 15.4 | 53.5 | 50.3 | 78.9 | 42.1 | 0.61 |
+| COGD | XYZ + RGB | 58.3 | 63.7 | 31.8 | 62.2 | 45.2 | 15.5 | 27.4 | 51.0 | 51.3 | 70.1 | 47.7 | 0.35 |
+| 2D-driven | XYZ + RGB | 43.5 | 64.5 | 31.4 | 48.3 | 27.9 | 25.9 | 41.9 | 50.4 | 37.0 | 80.4 | 45.1 | 0.36 |
+| F-PointNet | XYZ + RGB | 43.3 | 81.1 | 33.3 | 64.2 | 24.7 | 32.0 | 58.1 | 61.1 | 51.1 | 90.9 | 54.0 | 0.38 |
+| VoteNet | XYZ | 74.4 | 83.0 | 28.8 | 75.3 | 22.0 | 29.8 | 62.2 | 64.0 | 47.3 | 90.1 | 57.7 | 0.40 |
+| HGNet | XYZ | 78.0 | 84.5 | 35.7 | 75.2 | 34.3 | 37.6 | 61.7 | 65.7 | 51.6 | 91.1 | 61.6 | 0.31 |
+
+Table 1. 3D object detection performance results on the SUN RGB-D V1 dataset. The average precision with a 3D IoU threshold of 0.25 is used. Only the 10 most common categories are shown. cvAP is defined in Eq. (6).
+
+| Method | Input style | mAP@0.25 | mAP@0.5 | cvAP@0.25 | cvAP@0.5 |
+| --- | --- | --- | --- | --- | --- |
+| DSS | XYZ + RGB | 15.2 | 6.8 | - | - |
+| F-PointNet | XYZ + RGB | 19.8 | 10.8 | - | - |
+| GSPN | XYZ + RGB | 30.6 | 17.7 | - | - |
+| 3D-SIS | XYZ | 27.6 | 16.0 | 0.65 | 1.25 |
+| 3D-SIS | XYZ + 5 views | 32.2 | 24.7 | 0.97 | 1.03 |
+| VoteNet | XYZ | 58.6 | 33.5 | 0.40 | 0.84 |
+| HGNet | XYZ | 61.3 | 34.4 | 0.38 | 0.82 |
+
+Table 2. 3D object detection results on the ScanNet-V2 dataset with 3D IoU thresholds of 0.25 and 0.5, respectively. “-” means “not applicable”, since the corresponding data were not available.
+
+| SA-GConv | SGConv | SA-DeGConv | Inter. | FP | U2 | ProRe | mAP |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| | ✓ | | ✓ | | ✓ | | 57.3 |
+| | ✓ | | ✓ | ✓ | | | 59.5 |
+| ✓ | | | ✓ | | ✓ | | 59.7 |
+| ✓ | | ✓ | | | ✓ | | 60.1 |
+| | ✓ | | ✓ | | ✓ | ✓ | 58.9 |
+| ✓ | | ✓ | | ✓ | | | 60.8 |
+| ✓ | | ✓ | | ✓ | | ✓ | 61.6 |
+
+Table 3. Quantitative ablation experiments on SUN RGB-D. "FP" indicates the feature pyramid.
+
+We define a smaller box with the same center as the bounding box of an object, whose side lengths are only $30\%$ of those of the bounding box, and call the votes that lie in this small box "precise votes". We calculate the ratio of "precise votes" among the votes from $U2$ (as in Fig. 2). As shown in Table 4, the points are better clustered (by over $6\%$ in the "precise votes" ratio) around the object centers with SA-GConv. Note that the proposals are generated from the votes, and thus the voting results are very important.
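+
+For reference, the "precise votes" ratio can be computed as follows; the sketch assumes axis-aligned ground-truth boxes for simplicity, whereas SUN RGB-D boxes are oriented.
+
+```python
+# Sketch of the "precise votes" ratio: a vote is "precise" if it falls inside a box with
+# the same centre as the ground-truth box but only 30% of its side lengths.
+# Axis-aligned boxes are assumed here for simplicity (SUN RGB-D boxes are oriented).
+import numpy as np
+
+
+def precise_vote_ratio(votes, box_center, box_size, shrink=0.3):
+    """votes: (M, 3) voted centres, box_center: (3,), box_size: (3,) full side lengths."""
+    half = 0.5 * shrink * np.asarray(box_size, dtype=float)
+    inside = np.all(np.abs(votes - np.asarray(box_center)) <= half, axis=1)
+    return inside.mean()
+```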
+
+To demonstrate the shape-information-capturing capability of SA-GConv, we let SA-GConv $_g(x) = \max_{x_j \in kNN(x_i)} \{ \mathbf{g}(p_i - p_j) \}$, i.e., we set $\mathbf{f}(x_i, x_j) \equiv 1$ in Eq. (1). The parameters of $\mathbf{g}$ in SA-GConv $_g$ are inherited from the $\mathbf{g}$ of the first SA-GConv in GU-net (see Fig. 2). We then apply SA-GConv $_g$ to the SUN RGB-D point clouds. As illustrated in Fig. 7, the object parts that carry obvious shape information (e.g., corners and edges) are highly responsive. Besides, the response heat maps are similar among objects of the same category. This verifies that our SA-GConv (especially $\mathbf{g}$) captures shape information well by modelling the geometric positions.
+
+The ProRe Module helps features propagate among the proposals. This module might not be very useful when the features for detecting an object have already been adequately learned, but it helps in detecting objects with very few points (e.g., when points are sparse or missing on some objects). In each category of SUN RGB-D, we sort the objects by the number of points on them in increasing order and divide them into 10 groups based on the sorted order. We then calculate the total average recall (AR) across the categories in every percentile range (group). As shown in Fig. 5, as the number of points on the objects decreases, the impact of the ProRe Module becomes increasingly apparent. For objects with very few points, the ProRe Module improves the recall rate by over $12\%$.
+
+
+Figure 5. The $x$ -axis is for the percentile ranges in the sorted order of the objects, and the $y$ -axis is for AR with respect to the objects.
+
+# 5. Conclusions
+
+For 3D object detection on point clouds, we proposed a novel framework, HGNet, which learns semantics via hierarchical graph modelling. Specifically, we proposed the novel and lightweight Shape-attentive (De)GConv to capture local shape semantics, which aggregates features while accounting for the relative geometric positions of points. We built GU-net based on SA-GConv and SA-DeGConv, generating a feature pyramid that contains multi-level semantics. The points on the feature pyramid vote for the corresponding object centers, and the multi-level semantics are further aggregated to generate proposals. Then a ProRe Module is employed to incorporate and propagate features among the proposals, promoting the detection performance by leveraging the global scene semantics.
+
+| Ratio of “precise votes” (%) | bathtub | bed | bookshelf | chair | desk | dresser | nightstand | sofa | table | toilet | average |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| HGNet + SA | 32.5 | 51.2 | 14.8 | 46.3 | 26.5 | 24.7 | 32.1 | 44.4 | 36.3 | 55.4 | 36.4 |
+| HGNet + SA-GConv | 42.1 | 59.3 | 19.6 | 48.3 | 31.7 | 32.6 | 38.9 | 53.7 | 41.4 | 57.0 | 42.5 |
+
+Table 4. Comparison of voting results between SA-GConv and the SA module in HGNet on the SUN RGB-D dataset.
+
+
+Figure 6. Comparison between the predicted bounding boxes and ground truth boxes on SUN RGB-D and ScanNet-V2.
+
+
+Figure 7. Visualization examples of the response values of SA-GConv $_g$ on some objects of the SUN RGB-D dataset. One can see that the edges (green arrows), corners (grey arrows), and sharp parts (blue arrows) of the objects are highly responsive.
+
+Finally, the bounding boxes and the categories are predicted. Different from previous methods, HGNet attains better performance by carefully considering shape information and aggregating multi-level semantics.
+
+# 6. Acknowledgements.
+
+The research of Real Doctor AI Research Centre was partially supported by the Zhejiang University Education Foundation under grants No.K18-511120-004, No.K17-511120-017, and No.K17-518051-021, the National Natural Science Foundation of China under grant No.61672453, the National key R&D program sub project "large scale cross-modality medical knowledge management" under grant No.2018AAA0102100, the Zhejiang public welfare technology research project under grant No.LGF20F020013, the National Key R&D Program Project of "Software Testing Evaluation Method Research and its Database Development on Artificial Intelligence Medical Information System" under the Fifth Electronics Research Institute of the Ministry of Industry and Information Technology (No.2019YFC0118802), and The National Key R&D Program Project of "Full Life Cycle Detection Platform and Application Demonstration of Medical Artificial Intelligence Product" under the National Institutes for Food and Drug Control (No.2019YFB1404802), and the Key Laboratory of Medical Neurobiology of Zhejiang Province. D. Chen's research was supported in part by NSF Grant CCF-1617735. We like to thank three anonymous reviewers for their professional suggestions. We also like to thank Maosen Li in SJTU and Wenting Zhang in CSU for their helpful suggestions.
+
+# References
+
+[1] Sami Abu-El-Haija, Bryan Perozzi, Rami Al-Rfou, and Alexander A Alemi. Watch Your Step: Learning Node Embeddings via Graph Attention. In NeurIPS, 2018.
+[2] Eduardo Arnold, Omar Y Al-Jarrah, Mehrdad Dianati, Saber Fallah, David Oxtoby, and Alex Mouzakitis. A Survey on 3D Object Detection Methods for Autonomous Driving Applications. T-ITS, 2019.
+[3] James Atwood and Don Towsley. Diffusion-Convolutional Neural Networks. In NeurIPS, 2016.
+[4] Jorge Beltrán, Carlos Guindel, Francisco Miguel Moreno, Daniel Cruzado, Fernando Garcia, and Arturo De La Escalera. BirdNet: A 3D Object Detection Framework from LiDAR Information. In ITSC, 2018.
+[5] Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes. In CVPR, 2017.
+[6] Michael Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering. In NeurIPS, 2016.
+[7] Yuval Eldar, Michael Lindenbaum, Moshe Porat, and Yehoshua Y Zeevi. The Farthest Point Strategy for Progressive Image Sampling. IEEE Transactions on Image Processing, 1997.
+[8] Martin Engelcke, Dushyant Rao, Dominic Zeng Wang, Chi Hay Tong, and Ingmar Posner. Vote3Deep: Fast Object Detection in 3D Point Clouds Using Efficient Convolutional Neural Networks. In ICRA, 2017.
+[9] Di Feng, Lars Rosenbaum, and Klaus Dietmayer. Towards Safe Autonomous Driving: Capture Uncertainty in the Deep Neural Network for Lidar 3D Vehicle Detection. In ITSC, 2018.
+[10] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive Representation Learning on Large Graphs. In NeurIPS, 2017.
+[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In CVPR, 2016.
+[12] Mikael Henaff, Joan Bruna, and Yann LeCun. Deep Convolutional Networks on Graph-structured Data. arXiv preprint arXiv:1506.05163, 2015.
+[13] Ji Hou, Angela Dai, and Matthias Nießner. 3D-SIS: 3D Semantic Instance Segmentation of RGB-D Scans. In CVPR, 2019.
+[14] Qiangui Huang, Weiyue Wang, and Ulrich Neumann. Recurrent Slice Networks for 3D Segmentation of Point Clouds. In CVPR, 2018.
+[15] Thomas N Kipf and Max Welling. Semi-Supervised Classification with Graph Convolutional Networks. In ICLR, 2017.
+[16] Jean Lahoud and Bernard Ghanem. 2D-Driven 3D Object Detection in RGB-D Images. In ICCV, 2017.
+[17] Alex H. Lang, Sourabh Vora, Holger Caesar, Lubing Zhou, Jiong Yang, and Oscar Beijbom. PointPillars: Fast Encoders for Object Detection From Point Clouds. In CVPR, 2019.
+
+[18] John Boaz Lee, Ryan Rossi, and Xiangnan Kong. Graph Classification Using Structural Attention. In KDD, 2018.
+[19] Bo Li. 3D Fully Convolutional Network for Vehicle Detection in Point Cloud. In IROS, 2017.
+[20] Bo Li, Tianlei Zhang, and Tian Xia. Vehicle Detection from 3D Lidar Using Fully Convolutional Network. arXiv preprint arXiv:1608.07916, 2016.
+[21] Guohao Li, Matthias Müller, Ali Thabet, and Bernard Ghanem. DeepGCNs: Can GCNs Go as Deep as CNNs? In ICCV, 2019.
+[22] Tsung-Yi Lin, Piotr Dollar, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature Pyramid Networks for Object Detection. In CVPR, 2017.
+[23] Or Litany et al. ASIST: Automatic Semantically Invariant Scene Transformation. Computer Vision and Image Understanding, 2017.
+[24] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C Berg. SSD: Single Shot Multibox Detector. In ECCV, 2016.
+[25] Ziqi Liu, Chaochao Chen, Longfei Li, Jun Zhou, Xiaolong Li, Le Song, and Yuan Qi. Geniepath: Graph Neural Networks with Adaptive Receptive Paths. In AAAI, 2019.
+[26] Alessio Micheli. Neural Network for Graphs: A Contextual Constructive Approach. IEEE Transactions on Neural Networks, 2009.
+[27] Carsten Moenning and Neil A Dodgson. Fast Marching Farthest Point Sampling. Technical report, University of Cambridge, Computer Laboratory, 2003.
+[28] Mathias Niepert, Mohamed Ahmed, and Konstantin Kutzkov. Learning Convolutional Neural Networks for Graphs. In ICML, 2016.
+[29] Charles R. Qi, Or Litany, Kaiming He, and Leonidas J Guibas. Deep Hough Voting for 3D Object Detection in Point Clouds. In ICCV, 2019.
+[30] Charles R Qi, Wei Liu, Chenxia Wu, Hao Su, and Leonidas J Guibas. Frustum PointNets for 3D Object Detection from RGB-D Data. In CVPR, 2018.
+[31] Charles R. Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In CVPR, 2017.
+[32] Charles R. Qi, Li Yi, Hao Su, and Leonidas J Guibas. Point-Net++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In NeurIPS, 2017.
+[33] Zhile Ren and Erik B Sudderth. Three-Dimensional Object Detection and Layout Prediction Using Clouds of Oriented Gradients. In CVPR, 2016.
+[34] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. In MICCAI, 2015.
+[35] Martin Simon, Stefan Milz, Karl Amende, and Horst-Michael Gross. Complex-YOLO: An Euler-Region-Proposal for Real-Time 3D Object Detection on Point Clouds. In ECCV, 2018.
+[36] Shuran Song, Samuel P Lichtenberg, and Jianxiong Xiao. SUN RGB-D: A RGB-D Scene Understanding Benchmark Suite. In CVPR, 2015.
+
+[37] Shuran Song and Jianxiong Xiao. Deep Sliding Shapes for Amodal 3D Object Detection in RGB-D Images. In CVPR, 2016.
+[38] Hang Su, Subhransu Maji, Evangelos Kalogerakis, and Erik Learned-Miller. Multi-view Convolutional Neural Networks for 3D Shape Recognition. In ICCV, 2015.
+[39] Zhi Tian et al. Fcos: Fully Convolutional One-stage Object Detection. In ICCV, 2019.
+[40] Petar Velicković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph Attention Networks. In ICLR, 2017.
+[41] Nitika Verma et al. Feastnet: Feature-steered Graph Convolutions for 3D Shape Analysis. In CVPR, 2018.
+[42] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. Dynamic Graph CNN for Learning on Point Clouds. TOG, 2019.
+[43] Kun Wei et al. Adversarial Fine-Grained Composition Learning for Unseen Attribute-Object Recognition. In ICCV, 2019.
+[44] Bichen Wu, Alvin Wan, Xiangyu Yue, and Kurt Keutzer. SqueezeSeg: Convolutional Neural Nets with Recurrent CRF for Real-Time Road-Object Segmentation from 3D LiDAR Point Cloud. In ICRA, 2018.
+[45] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How Powerful are Graph Neural Networks? In ICLR, 2019.
+[46] Zetong Yang, Yanan Sun, Shu Liu, Xiaoyong Shen, and Jiaya Jia. STD: Sparse-to-Dense 3D Object Detector for Point Cloud. In ICCV, 2019.
+[47] Li Yi, Wang Zhao, He Wang, Minhyuk Sung, and Leonidas J Guibas. GSPN: Generative Shape Proposal Network for 3D Instance Segmentation in Point Cloud. In CVPR, 2019.
\ No newline at end of file
diff --git a/ahierarchicalgraphnetworkfor3dobjectdetectiononpointclouds/images.zip b/ahierarchicalgraphnetworkfor3dobjectdetectiononpointclouds/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..ef900e1f5635c4e8fb2b7cca0b1c0fbcf58d4e6a
--- /dev/null
+++ b/ahierarchicalgraphnetworkfor3dobjectdetectiononpointclouds/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a8b6db8de1ef015035c99312fad9c8028e3bba2b564d9d332d93a8ca89caf438
+size 452605
diff --git a/ahierarchicalgraphnetworkfor3dobjectdetectiononpointclouds/layout.json b/ahierarchicalgraphnetworkfor3dobjectdetectiononpointclouds/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..5777532f07d4cfa9eea98700d904e4ae92b80c21
--- /dev/null
+++ b/ahierarchicalgraphnetworkfor3dobjectdetectiononpointclouds/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bdd98244220663515266b0b40709fcc1f33257ed6910a93eefe030cda8328f2b
+size 412238
diff --git a/alightinginvariantpointprocessorforshading/ff91bcb1-2949-4074-bbb4-6c4561b0bfec_content_list.json b/alightinginvariantpointprocessorforshading/ff91bcb1-2949-4074-bbb4-6c4561b0bfec_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..c5df6df2f051fba4685e75ac3ba359fbee270a70
--- /dev/null
+++ b/alightinginvariantpointprocessorforshading/ff91bcb1-2949-4074-bbb4-6c4561b0bfec_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5dff1312205d40ca300dbf6ae2d30306a078da2da18828f7a08586f5df79f177
+size 76442
diff --git a/alightinginvariantpointprocessorforshading/ff91bcb1-2949-4074-bbb4-6c4561b0bfec_model.json b/alightinginvariantpointprocessorforshading/ff91bcb1-2949-4074-bbb4-6c4561b0bfec_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..da7ddbb5e93a55f9b4b290dd1439999e8e1c5a1d
--- /dev/null
+++ b/alightinginvariantpointprocessorforshading/ff91bcb1-2949-4074-bbb4-6c4561b0bfec_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c62759ffa8d0b50cef591efb3266fe09062b36775ec4192d214bd031f7cd6d15
+size 87993
diff --git a/alightinginvariantpointprocessorforshading/ff91bcb1-2949-4074-bbb4-6c4561b0bfec_origin.pdf b/alightinginvariantpointprocessorforshading/ff91bcb1-2949-4074-bbb4-6c4561b0bfec_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..23027db55364505ce2851ff07014bf4f95ad0998
--- /dev/null
+++ b/alightinginvariantpointprocessorforshading/ff91bcb1-2949-4074-bbb4-6c4561b0bfec_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0c50da1e5ed313d8d6d1174dc3a697138e01b8039748e5e4e3a78f6fa8acb244
+size 2016901
diff --git a/alightinginvariantpointprocessorforshading/full.md b/alightinginvariantpointprocessorforshading/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..93f7fc88efc59168ebd8ac1900df7e824c5e5d20
--- /dev/null
+++ b/alightinginvariantpointprocessorforshading/full.md
@@ -0,0 +1,327 @@
+# A Lighting-Invariant Point Processor for Shading
+
+Kathryn Heal Jialiang Wang Steven J. Gortler Todd Zickler
+Harvard University
+
+{kathrynheal@g, jialiangwang@g, sjg@cs, zickler@seas}.harvard.edu
+
+# Abstract
+
+Under the conventional diffuse shading model with unknown directional lighting, the set of quadratic surface shapes that are consistent with the spatial derivatives of intensity at a single image point is a two-dimensional algebraic variety embedded in the five-dimensional space of quadratic shapes. We describe the geometry of this variety, and we introduce a concise feedforward model that computes an explicit, differentiable approximation of the variety from the intensity and its derivatives at any single image point. The result is a parallelizable processor that operates at each image point and produces a lighting-invariant descriptor of the continuous set of compatible surface shapes at the point. We describe two applications of this processor: two-shot uncalibrated photometric stereo and quadratic-surface shape from shading.
+
+# 1. Introduction
+
+The shading variations in an image $I(x, y)$ of a diffuse, curved surface—say, a surface with height function $f(x, y)$ —induce a perception of the surface shape. Mimicking this perceptual capability in machines is referred to as recovering "shape from shading."
+
+There exist established techniques for recovering shape from shading in special cases where the strengths and locations of the light sources around the surface are known a priori, or are somehow accurately inferred. These techniques can be understood as using a connected two-dimensional array of image "point processors", where each point processor reads the intensity $I$ at a single image point and, based on the known or estimated lighting conditions, calculates an intermediate numerical representation of the set of compatible local shapes at that point, comprising a set of (or probability density over) local surface orientations $\{(f_x,f_y)\}$ at the point. Each of the intermediate per-point orientation sets is ambiguous on its own, but when the array of point processors is connected together—by enforcing surface continuity and by including supplementary visual cues like occluding contours or top-down semantics—one can begin to recover
+
+
+Figure 1. The set of local second order surface shapes $\{(f_x, f_y, f_{xx}, f_{xy}, f_{yy})\}$ that are consistent with the derivatives $\mathbf{I} = (I, I_x, I_y, I_{xx}, I_{xy}, I_{yy})$ at one image point (black circle, left) satisfy three polynomial equations. The zero locus (i.e., variety) is two-dimensional and is visualized here projected to three dimensions ( $f_{xx}, f_{xy}, f_{yy}$ ). Each element of the variety is a local shape (four are called out) that produces the image derivatives under some light direction. We show that for any non-degenerate $\mathbf{I}$ the two-dimensional variety has four isomorphic components (colored in this example) and can be efficiently approximated by a coupled pair of shallow neural networks.
+
+shapes $f(x, y)$.
+
+This has been the dominant paradigm for shape from shading for nearly fifty years [10], but it is far from satisfactory. Despite a half-century of research, it remains sensitive to non-idealities and is rarely deployed without substantial aid from a human annotator who first indicates occluding contours in an image or provides a segmentation of a relevant diffuse surface region. One reason for this fragility is that lighting is typically non-uniform across surfaces, due to self-shadowing and other physical effects. This makes it hard to infer the lighting conditions for each image point, which in turn distorts the per-point orientation sets $\{(f_x,f_y)\}$ upon which reconstruction is based. Moreover, even when lighting is uniform across a surface, the veridical location and strength of a scene's dominant light source can be impossible to infer from an image due to inherent mathematical ambiguities [3]. In comparison, monocular human vision
+seems to perform quite well at perceiving diffusely-shaded shape, at least modulo these ambiguities [12], despite being quite poor at inferring lighting [4].
+
+This paper introduces a point processor for shading that might help address these deficiencies, by providing per-point constraints on shape without requiring knowledge of lighting. The input to the processor is a measurement comprising a vector of spatial derivatives of intensity at one point, denoted by $\mathbf{I} \coloneqq (I, I_x, I_y, I_{xx}, I_{xy}, I_{yy})$ , Koenderink's 2-jet [11]. The internal structure of the processor is a coupled pair of shallow neural networks, and the processor's output is a compact representation of a continuous set of compatible local second-order shapes $F(\mathbf{I}) \coloneqq \{(f_x, f_y, f_{xx}, f_{xy}, f_{yy})\}$ in the form of a parametrized two-dimensional manifold in $\mathbb{R}^5$ . The processor provides useful per-point constraints because even though there are many compatible shapes $F(\mathbf{I})$ , the overwhelming majority of shapes are ruled out.
+
+Our main contribution is an algebraic analysis of Lambertian shading that provides the foundation for the point processor's internal structure and the format of its output. Specifically, we prove that the set of compatible local second-order shapes $F(\mathbf{I})$ are contained in the zero-set of three polynomial equations, i.e., are contained in a two-dimensional algebraic variety in $\mathbb{R}^5$ . We show that special properties of this variety allow it to be represented in explicit form by a function from $\mathbb{R}^2$ to $\mathbb{R}^3$ , which in turn can be approximated efficiently by a coupled pair of shallow neural networks.
+
+The most important property of this point processor is that it is "invariant to illumination" in the sense that the output shape-set $F(\mathbf{I})$ always includes the veridical local second-order shape, regardless of how the surface is lit. This means that while a surface lit from different directions will generally induce different measurements $\mathbf{I}$ at a point, and while these different image measurements will in turn produce different shape-sets $F(\mathbf{I})$ , all of the predicted shape-sets will include the true second-order shape at that point.
+
+As examples of how the point processor can be used for image analysis, we describe two scenarios in which the intrinsic two-dimensional shape ambiguities $F(\mathbf{I})$ at each point can be reduced to a discrete four-way choice by exploiting additional constraints or information. One scenario is uncalibrated two-shot photometric stereo, where the input is two images of a surface under two unknown light directions. The other is quadratic shape from shading, where the input is a single image of a shape that is quadratic over an extended region. We demonstrate these using synthetic images, leaving the development of robust algorithms and deployment on captured photographs for future work.
+
+Throughout this paper, we assume a frame of reference such that our measurements are graphs of some polynomial function. We represent these local surface height and image values as vectors of their coefficients (applying the Monge-Taylor map), ignoring the dependence of $f_{xx}$ on $f_x$.
+
+This is to say that we are not attempting to solve any partial differential equations; instead, we are studying algebraic constraints in a local linear coefficient coordinate space.
+
+# 2. Background and Related Work
+
+Most approaches to shape from shading rely on a per-point relationship between scalar intensity $I$ and surface orientation $(f_x, f_y)$. If the lighting is from a single direction, for example, then the set of compatible orientations is a right-circular cone with axis equal to the light direction and apex angle proportional to intensity. Similarly, if the lighting is a positive-valued function on the directional two-sphere then the set of compatible orientations is well approximated by a one-dimensional manifold defined by the light function's spherical harmonic coefficients up to third degree [15, 2]. Regardless, any such relation between intensity and surface orientation necessarily requires prior knowledge of, or accurate estimates of, the lighting at every surface point. Despite substantial recent progress [19, 1, 16, 8], including the abilities to accommodate some amounts of non-uniform lighting and non-uniform surface material properties, obtaining useful results continues to require extensive help from a human, who must first label the region that contains a continuous surface and/or indicate the locations of occluding contours.
+
+In contrast, we follow Kunsberg and Zucker [14] by enhancing the per-point analysis to consider not just the intensity and surface orientation at a point, but also higher order derivatives of intensity and shape. This allows eliminating the dependence on lighting entirely, and it suggests the possibility of a different approach where perceptual grouping and shape reconstruction can occur without explicit knowledge of lighting, and perhaps with lighting being (approximately) inferred later, as a by-product of shape perception. In this paper we consider just the first step toward this possibility: the design of the essential point processor.
+
+We are also motivated by the results of Xiong et al. [17], who consider a lighting-invariant local area processor instead of a pure point processor, and show that the intensity values in an extended image patch determine the extended quadratic shape up to a discrete four-way choice. This four-way choice leads to the automorphism (i.e., a bijection from a space to itself) group that we describe in Section 4.
+
+Our work is complementary to recent learning-based approaches to monocular depth estimation (e.g., [6]) that aim to exploit diffuse shading and many other bottom-up cues while also exploiting contextual cues in large image datasets. Our goal is to explore alternative front-end architectures and interpretable intermediate representations that can improve the generality and efficiency of such systems in the future.
+
+# 3. Local Shape Sets as Algebraic Varieties
+
+Our illumination-invariant point processor is inspired by the work of Kunsberg and Zucker [14], who use differential geometry to derive three lighting-invariant rational equations that relate the image 2-jet $\mathbf{I}$ at a point to the surface height derivatives at that point. We take an algebraic-geometric approach instead, which provides an abbreviated derivation of equivalent equations and also reveals the shape-set to be contained in an algebraic variety (i.e. in the zero-set of certain polynomial equations) that, as will be seen in Section 4, has useful geometric structure.
+
+# 3.1. Shading and Surface Models
+
+Our analysis applies to any point in a 2D image. We assign the coordinates $(0,0)$ to the point of interest and let $I(x,y)$ denote the intensity in a bounded local neighborhood $U\subset \mathbb{R}^2$ of that point. We refer to $U$ as the receptive field. In practice it is no larger than is required to robustly compute a discrete approximation to the first and second spatial derivatives of $I(x,y)$ at the origin.
+
+Within the neighborhood $U$ , we assume that the image is the orthographic projection of a curved Lambertian surface, and that the surface can be represented by a height function $f(x,y)$ . The surface albedo $\rho \in \mathbb{R}^{+}$ is assumed to be constant within $U$ . We also assume that the lighting is uniform and directional within $U$ , so that it can be represented by $\mathbf{L} \in \mathbb{R}^3$ with strength $\| \mathbf{L} \|$ and direction $\mathbf{L} / \| \mathbf{L} \|$ . Under these assumptions the intensity is
+
+$$
+I(x, y) = \rho \, \mathbf{L} \cdot \frac{\mathbf{N}(x, y)}{\|\mathbf{N}(x, y)\|}, \qquad (x, y) \in U, \tag{1}
+$$
+
+where $\mathbf{N}(x,y) := \left(-(\partial f / \partial x)(x,y), \, -(\partial f / \partial y)(x,y), \, 1\right)^T$ is the normal field. Note that we allow the projection, albedo, and lighting to vary outside of the neighborhood $U$.
+
+We assume that the surface $f$ is locally smooth enough around the point $(x, y)$ that we can ignore any third or higher order derivatives at that point, so that
+
+$$
+f(x, y) = f_x x + f_y y + \frac{1}{2}\left(f_{xx} x^2 + 2 f_{xy} x y + f_{yy} y^2\right). \tag{2}
+$$
+
+We refer to $\mathbf{f} := (f_x, f_y, f_{xx}, f_{xy}, f_{yy}) \in \mathbb{R}^5$ as the local shape at the point $(x, y)$ . We assume that all local shapes are not flat or cylindrical, or more precisely are nondegenerate in this sense:
+
+Definition 1. A local shape $\mathbf{f}$ is nondegenerate if $(f_{xx} + f_{yy})(f_{xx}f_{yy} - f_{xy}^2)(4f_{xy}^2 + (f_{xx} - f_{yy})^2) \neq 0$ .
+
+Local shapes can produce many different image intensity patterns depending on the lighting direction. We call the set of all possible image 2-jets generated by any combination of local shape and lighting realizable, and we say that a realizable image 2-jet produced by a particular shape is consistent with that shape.
+
+Definition 2. The set of realizable measurements $\mathcal{I}$ is the set of vectors $\pmb{\nu} \in \mathbb{R}^6$ for which there exists a light direction $\pmb{L} \in \mathbb{R}^3$ and nondegenerate local shape $\mathbf{f}$ such that $\pmb{\nu} = \mathbf{I}$ when shape model (2) is combined with shading model (1).
+
+Definition 3. If for a pair $(\mathbf{I}, \mathbf{f}) \in \mathcal{I} \times \mathbb{R}^5$ there exists such an $L$ , we say that $\mathbf{I}$ and $\mathbf{f}$ are consistent. This means that for some light direction, $\mathbf{f}$ is a valid explanation of image measurements $\mathbf{I}$ .
+
+# 3.2. Sets of Local Shapes
+
+Our immediate goal is to characterize the set of shapes $F(\mathbf{I})$ that are consistent with observation $\mathbf{I}$ for any light direction. This set of admissible shapes turns out to be contained in the locus of real solutions to three polynomial equations. An important feature is that the albedo and lighting do not appear in these equations.
+
+Theorem 1. Assume the shading model of (1) and the surface model of (2), and suppose we are given a measurement $\mathbf{I} \in \mathcal{I}$ generated by some unknown surface/lighting combination. Define polynomials
+
+$$
+\begin{array}{l} C_1(\mathbf{f}; \mathbf{I}) := f_x^4 I_{xx} + 2 f_x^3 f_{xx} I_x + f_x^2 f_{xy}^2 I + 2 f_x^2 f_{xy} f_y I_x + 2 f_x^2 f_y^2 I_{xx} \\ \quad + 2 f_x^2 I_{xx} - 2 f_x f_{xx} f_{xy} f_y I + 2 f_x f_{xx} f_y^2 I_x + 2 f_x f_{xx} I_x \\ \quad + f_{xx}^2 f_y^2 I + f_{xx}^2 I + f_{xy}^2 I + 2 f_{xy} f_y^3 I_x + 2 f_{xy} f_y I_x \\ \quad + f_y^4 I_{xx} + 2 f_y^2 I_{xx} + I_{xx}, \tag{3} \end{array}
+$$
+
+$$
+\begin{array}{l} C_2(\mathbf{f}; \mathbf{I}) := f_x^4 I_{yy} + 2 f_x^3 f_{xy} I_y + 2 f_x^2 f_y^2 I_{yy} + 2 f_x^2 f_y f_{yy} I_y + f_x^2 f_{yy}^2 I \\ \quad + 2 f_x^2 I_{yy} + 2 f_x f_{xy} f_y^2 I_y - 2 f_x f_{xy} f_y f_{yy} I + 2 f_x f_{xy} I_y \\ \quad + f_{xy}^2 f_y^2 I + f_{xy}^2 I + f_y^4 I_{yy} + 2 f_y^3 f_{yy} I_y + 2 f_y^2 I_{yy} \\ \quad + 2 f_y f_{yy} I_y + f_{yy}^2 I + I_{yy}, \tag{4} \end{array}
+$$
+
+$$
+\begin{array}{l} C_3(\mathbf{f}; \mathbf{I}) := f_x^4 I_{xy} + f_x^3 f_{xx} I_y + f_x^3 f_{xy} I_x + f_x^2 f_{xy} f_y I_y + f_x^2 f_{xy} f_{yy} I \\ \quad + 2 f_x^2 f_y^2 I_{xy} + f_x^2 f_y f_{yy} I_x + 2 f_x^2 I_{xy} + f_x f_{xx} f_y^2 I_y \\ \quad - f_x f_{xx} f_y f_{yy} I + f_x f_{xx} I_y - f_x f_{xy}^2 f_y I \\ \quad + f_x f_{xy} f_y^2 I_x + f_x f_{xy} I_x + f_{xx} f_{xy} f_y^2 I + f_{xx} f_{xy} I \\ \quad + f_{xy} f_y^3 I_y + f_{xy} f_y I_y + f_{xy} f_{yy} I + f_y^4 I_{xy} + f_y^3 f_{yy} I_x \\ \quad + 2 f_y^2 I_{xy} + f_y f_{yy} I_x + I_{xy}. \tag{5} \end{array}
+$$
+
+Then, any nondegenerate local shape $\mathbf{f} \in \mathbb{R}^5$ that is a valid explanation of the measurements $\mathbf{I}$ will satisfy $C_i = 0$ for $i = 1, 2, 3$. Equivalently, the affine variety $F := V(C_1, C_2, C_3)$ contains the set of all shapes $\mathbf{f}$ consistent with $\mathbf{I}$.
+
+Proof sketch. We provide a sketch of the proof here, with details in the supplement. Begin with (1), absorbing albedo $\rho$ into (non-unit length) $\mathbf{L}$ . Introduce auxiliary variable $w$ , which plays the role of $1 / ||\mathbf{N}(x,y)||$ . Substitution into (1) yields polynomials $g_{1}(x,y,w,\mathbf{f}) \coloneqq I(x,y) - \mathbf{L} \cdot \mathbf{N}(x,y)$ and $g_{2}(x,y,w,\mathbf{f}) \coloneqq w^{2}||\mathbf{N}(x,y)||^{2} - 1$ . Calculate first and second spatial derivatives of $g_{1}$ with respect to $x,y$ , evaluate all polynomials at $(x,y) = (0,0)$ , and re-arrange to eliminate variables $\mathbf{L}$ and $w$ .
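+
+As a numerical sanity check (not part of the paper), one can sample an arbitrary nondegenerate quadratic shape and light direction, build $I(x,y)$ from models (1)-(2), take its 2-jet at the origin, and confirm that $C_1$ (and likewise $C_2$, $C_3$) vanishes; the light never enters the check. A SymPy sketch:
+
+```python
+# Sanity check of Theorem 1 (illustrative): for an arbitrary nondegenerate shape f and
+# light L, the 2-jet of I built from models (1)-(2) satisfies C1 = 0, independently of L.
+import sympy as sp
+
+x, y = sp.symbols('x y', real=True)
+fx, fy, fxx, fxy, fyy = sp.Rational(3, 10), sp.Rational(-1, 5), 2, sp.Rational(1, 2), 1
+L = sp.Matrix([sp.Rational(1, 5), sp.Rational(1, 2), 1])       # arbitrary light, rho = 1
+
+f = fx*x + fy*y + (fxx*x**2 + 2*fxy*x*y + fyy*y**2) / 2        # surface model (2)
+N = sp.Matrix([-sp.diff(f, x), -sp.diff(f, y), 1])             # normal field
+I_expr = L.dot(N) / sp.sqrt(N.dot(N))                          # shading model (1)
+
+at0 = {x: 0, y: 0}
+I0 = I_expr.subs(at0)
+Ix = sp.diff(I_expr, x).subs(at0)
+Ixx = sp.diff(I_expr, x, 2).subs(at0)
+
+C1 = (fx**4*Ixx + 2*fx**3*fxx*Ix + fx**2*fxy**2*I0 + 2*fx**2*fxy*fy*Ix + 2*fx**2*fy**2*Ixx
+      + 2*fx**2*Ixx - 2*fx*fxx*fxy*fy*I0 + 2*fx*fxx*fy**2*Ix + 2*fx*fxx*Ix
+      + fxx**2*fy**2*I0 + fxx**2*I0 + fxy**2*I0 + 2*fxy*fy**3*Ix + 2*fxy*fy*Ix
+      + fy**4*Ixx + 2*fy**2*Ixx + Ixx)                         # polynomial (3)
+
+print(sp.simplify(C1))                                         # 0; C2 and C3 are analogous
+```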
+
+
+Figure 2. Visualizations of the two-dimensional varieties for different measurements of the form $\mathbf{I} \approx (1 - t, -4.10, -5.87, -12.41, -13.41, -20.30) + t$ . Each variety is projected to the same three dimensions as in Figure 1 and is colored by its isomorphic pieces.
+
+Remark 1. The real solutions to these equations are identical to those of Corollary 4.2 of [14]; we offer our algebraic derivation as an alternative to the differential-geometric approach presented in that work.
+
+Theorem 1 states that the set of local shapes that are consistent with a given measurement $\mathbf{I}$ must satisfy a set of three algebraically independent polynomials and thus, by definition, is contained in a real two-dimensional algebraic variety embedded in the five-dimensional shape space. (We use the notation $\mathbf{V}(\cdot)$ to denote the variety corresponding to a set of polynomials. This is essentially their zero locus.) This variety is analogous to the one-dimensional manifold of surface orientations in classical shape from shading, and it provides substantial constraints on local shape, because although there are still infinitely many admissible local shapes, the vast majority of shapes are disqualified.
+
+The variety for a particular measurement $\mathbf{I}$ is visualized in Figure 1, projected from the five-dimensional shape space to a three dimensional space that corresponds to the second-order shape dimensions $(f_{xx}, f_{xy}, f_{yy})$ . Additional examples are in Figure 2, which shows how the varieties change for different measurements.
+
+# 4. Properties of Local Shape Sets
+
+At this stage we have an implicit description of a shape set $F(\mathbf{I})$ in terms of the generating polynomials (3)-(5). For a useful point processor, we want instead an explicit representation, as well as an efficient way to calculate (and store) that explicit representation for any particular image 2-jet $\mathbf{I}$. An explicit analytic representation remains out of reach, but fortunately the varieties exhibit three properties that make them easy to approximate.
+
+First we show that the variety is equipped with an automorphism group that naturally divides it into four isomorphic pieces, allowing the entire shape set to be represented by
+any one piece (Section 4.1). We then relate the one piece to a continuous function $\phi_{\mathbf{I}}$ from $\mathbb{R}^2$ to $\mathbb{R}^3$ , which implies that the point processor is equivalent to a map from vectors $\mathbf{I} \subset \mathbb{R}^6$ to continuous functions $\phi_{\mathbf{I}}: \mathbb{R}^2 \mapsto \mathbb{R}^3$ (Section 4.2). Finally, we show that any consistent pair of measurement and shape $\mathbf{I}$ , $\mathbf{f}$ can be simultaneously rotated in the image plane without affecting consistency, which allows a lossless compression of the input space $\mathcal{I}$ .
+
+As we will see later in Section 5, these three properties enable an efficient point processor in the form of a neural network approximation of the mapping from 2-jets $\mathbf{I}$ to functions $\phi_{\mathbf{I}}$ (see Figure 3). Examples of how this representation can be used for shape from shading are described in Section 6.
+
+# 4.1. An automorphism group on $F(\mathbf{I})$
+
+The first property follows from the fact that each variety $F(\mathbf{I})$ exhibits two symmetries. These symmetries are useful because they allow each variety $F(\mathbf{I})$ to be partitioned into four isomorphic components and therefore represented more compactly by just a single component. This partition applies everywhere except on what is generically a single pair of points of $F(\mathbf{I})$ . Thus while we must technically define this partition over a "punctured" variety (what we will call $F_0$ below), in practice we can typically ignore this distinction, and may in what follows drop the subscript. The symmetries follow from those described for extended quadratic patches in [17] and can be verified by substitution into (3)-(5).
+
+Observation 1. There exists a subset $F_{+}(\mathbf{I}) \subseteq F(\mathbf{I})$ whose orbit under the automorphism group generated by
+
+$$
+\begin{array}{l} \rho_ {1}: \left(f _ {x}, f _ {y}, f _ {x x}, f _ {x y}, f _ {y y}\right) \mapsto - \left(f _ {x}, f _ {y}, f _ {x x}, f _ {x y}, f _ {y y}\right) \\ \rho_ {2} \colon (f _ {x}, f _ {y}, f _ {x x}, f _ {x y}, f _ {y y}) \mapsto \\ \frac {1}{\sqrt {4 f _ {x y} ^ {2} + \left(f _ {x x} - f _ {y y}\right) ^ {2}}} \left( \begin{array}{c} f _ {x} f _ {x x} - f _ {x} f _ {y y} + 2 f _ {y} f _ {x y} \\ 2 f _ {x} f _ {x y} + f _ {y} f _ {y y} - f _ {y} f _ {x x} \\ f _ {x x} ^ {2} - f _ {x x} f _ {y y} + 2 f _ {x y} ^ {2} \\ f _ {x x} f _ {x y} + f _ {x y} f _ {y y} \\ f _ {y y} ^ {2} - f _ {x x} f _ {y y} + 2 f _ {x y} ^ {2} \end{array} \right) \tag {6} \\ \end{array}
+$$
+
+is precisely $F_0(\mathbf{I})$ , where
+
+$$
+F _ {0} (\mathbf {I}) := F (\mathbf {I}) \backslash V (4 f _ {x y} ^ {2} + (f _ {x x} - f _ {y y}) ^ {2}).
+$$
+
+Thus for fixed $\mathbf{I}$ and $f_{x}, f_{y}$ , there will be zero, two, or four nonzero real solutions to (3)-(5), each of which corresponds to a local shape that is some combination of concave/convex and saddle/spherical. Figure 1 shows an example where the variety's four components are clearly visible, and where the four highlighted surfaces comprise one orbit.
+
+We can choose any of the variety's components to be the representative one. The component that corresponds to shapes with positive curvatures is convenient to characterize, so we choose that one and call it the positive shape set.
+
+Definition 4. We call the semi-algebraic set
+
+$$
+F_{+} := \left\{\mathbf{f} \in F_{0}: f_{xx} + f_{yy} > 0 \ \text{and}\ f_{xx} f_{yy} - f_{xy}^{2} > 0 \right\} \tag{7}
+$$
+
+the positive shape set. This subset $F_{+}(\mathbf{I})$ is the set $F_{0}(\mathbf{I})$ modulo the group action of $\langle \rho_1,\rho_2\rangle$ .
+
+It is easily verified for non-planar images that $\mathbf{0} \notin F_{+}$ , that there exist no real fixed points of $\rho_{2}$ , and that by Definition 1 $4f_{xy}^{2} + (f_{xx} - f_{yy})^{2} \neq 0$ on $F_{+}(\mathbf{I})$ . Therefore the maps $\rho_{1}, \rho_{2}$ are well-defined on $F_{+}(\mathbf{I})$ .
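+
+For concreteness, the two generators in (6) are easy to evaluate numerically. The following is a minimal numpy sketch; the function names and the example shape below are illustrative only.
+
+```python
+import numpy as np
+
+def rho1(f):
+    # f = (f_x, f_y, f_xx, f_xy, f_yy); rho_1 simply negates the shape vector.
+    return -np.asarray(f, dtype=float)
+
+def rho2(f):
+    # rho_2 as written in Eq. (6); defined wherever 4*f_xy^2 + (f_xx - f_yy)^2 != 0,
+    # i.e. on the punctured variety F_0(I).
+    fx, fy, fxx, fxy, fyy = f
+    s = np.sqrt(4.0 * fxy**2 + (fxx - fyy)**2)
+    return np.array([
+        fx * fxx - fx * fyy + 2.0 * fy * fxy,
+        2.0 * fx * fxy + fy * fyy - fy * fxx,
+        fxx**2 - fxx * fyy + 2.0 * fxy**2,
+        fxx * fxy + fxy * fyy,
+        fyy**2 - fxx * fyy + 2.0 * fxy**2,
+    ]) / s
+
+# One orbit under <rho_1, rho_2>: the four combinations discussed above.
+f_plus = np.array([0.1, -0.2, 1.0, 0.3, 1.5])   # an arbitrary shape with positive curvature
+orbit = [f_plus, rho1(f_plus), rho2(f_plus), rho1(rho2(f_plus))]
+```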
+
+# 4.2. $F_{+}(\mathbf{I})$ is a graph
+
+Our aim is to find a parsimonious representation of the positive shape subsets $F_{+}(\mathbf{I})$ (and thus of the entire shape set $F(\mathbf{I})$ ) as well as an efficient way to compute this representation for any particular measurement $\mathbf{I}$ . Since $F(\mathbf{I})$ and its subset $F_{+}(\mathbf{I})$ are determined by $\mathbf{I}$ , we may define a map $\Phi : \mathbf{I} \mapsto F_{+}(\mathbf{I})$ .
+
+In order to simplify the map $\Phi$ from vectors $\mathbf{I}$ to positive subsets $F_{+}(\mathbf{I})$ , we assume that each (two-dimensional) positive subset can be parametrized by surface orientation $(f_x, f_y)$ , so that the map $\Phi(\mathbf{I}) = \{(f_x, f_y, f_{xx}, f_{xy}, f_{yy})\}$ can be decomposed as
+
+$$
+\Phi (\mathbf {I}) = \left\{\left(f _ {x}, f _ {y}, \phi_ {\mathbf {I}} \left(f _ {x}, f _ {y}\right)\right) \right\}, \tag {8}
+$$
+
+with $\phi_{\mathbf{I}}:\mathbb{R}^2\mapsto \mathbb{R}^3$ a continuous function.
+
+While we frame it here as an assumption, this decomposition may in fact be exact. The Implicit Function Theorem guarantees existence (and uniqueness) of a function $\phi(f_x, f_y) = (f_{xx}, f_{xy}, f_{yy})$ in a local neighborhood of every $\mathbf{f}$ for which the Jacobian of system $(C_1, C_2, C_3)$ is nonsingular. While proving that the Jacobian is always nonsingular (that is, nonsingular for any $\mathbf{I} \in \mathcal{I}$ and any real $(f_x, f_y)$) remains an open problem, we conjecture that it is true. Experimentally we have never witnessed a singular Jacobian, and we can prove non-singularity in simplified cases like the following.
+
+Example 1. Consider the case in which the measurements $\mathbf{I}$ satisfy $I_x = I_y = 0$ , i.e. in which the image's normal is parallel to the viewing direction. In this case the determinant of the Jacobian of system $(C_1, C_2, C_3)$ is
+
+$$
+\det J = \gamma ((1 + f _ {y} ^ {2}) f _ {x x} - 2 f _ {x} f _ {y} f _ {x y} + (1 + f _ {x} ^ {2}) f _ {y y})
+$$
+
+where $\gamma = -4(f_{xx}f_{yy} - f_{xy}^2) / (1 + f_x^2 +f_y^2)^5$ . This vanishes for some real $f_{x}$ only if its discriminant with respect to $f_{x}$ ,
+
+$$
+\operatorname {d i s c r} _ {f _ {x}} \det J = \gamma ((1 + f _ {y} ^ {2}) (f _ {x x} f _ {y y} - f _ {x y} ^ {2}) + (f _ {x y} ^ {2} + f _ {y y} ^ {2})),
+$$
+
+is strictly positive. On $F_{+}(\mathbf{I})$ the term $f_{xx}f_{yy} - f_{xy}^{2} > 0$ , so $\gamma < 0$ while the remaining factor is positive, and therefore $\operatorname{discr}_{f_x}\det J < 0$ . It follows that $\det J$ has no real root in $f_x$ , so there are no points in $F_{+}(\mathbf{I})$ where the implicit function fails.
+
+# 4.3. Isomorphisms induced by rotations
+
+A third property is a rotational symmetry about the local viewing direction that allows us to losslessly compress the input space $\mathcal{I}$ . Any local relation that exists between an image $I(x,y)$ and surface $f(x,y)$ must persist for any orthogonal change of basis of their common two-dimensional domain $(x,y)$ . We are therefore free to define a local coordinate system that adapts to each measurement $\mathbf{I}$ .
+
+One choice is the local coordinate system that aligns with the image gradient $(I_x, I_y)$ , using an orthogonal transform that maps $I_y$ to zero and $I_x$ to a non-negative real number. This implies three transformations,
+
+$$
+T _ {\mathbf {I}} := \frac {1}{\sqrt {I _ {x} ^ {2} + I _ {y} ^ {2}}} \left[ \begin{array}{l l} I _ {x} & I _ {y} \\ - I _ {y} & I _ {x} \end{array} \right] =: \left[ \begin{array}{l l} G _ {1 1} & G _ {1 2} \\ G _ {2 1} & G _ {2 2} \end{array} \right], \tag {9}
+$$
+
+$$
+S _ {\mathbf {I}} := \left[ \begin{array}{c c c} G _ {1 1} ^ {2} & 2 G _ {1 1} G _ {2 1} & G _ {2 1} ^ {2} \\ G _ {1 1} G _ {1 2} & G _ {1 1} G _ {2 2} + G _ {1 2} G _ {2 1} & G _ {2 1} G _ {2 2} \\ G _ {1 2} ^ {2} & 2 G _ {1 2} G _ {2 2} & G _ {2 2} ^ {2} \end{array} \right], \tag {10}
+$$
+
+$$
+R _ {\mathbf {I}} := \left[ \begin{array}{c c c} 1 & \mathbf {0} & \mathbf {0} \\ \mathbf {0} & T _ {\mathbf {I}} & \mathbf {0} \\ \mathbf {0} & \mathbf {0} & S _ {\mathbf {I}} ^ {- 1} \end{array} \right], \tag {11}
+$$
+
+for which one can verify that $\mathbf{f} \in F(\mathbf{I})$ if and only if $\hat{R}_{\mathbf{I}}\mathbf{f} \in F(R_{\mathbf{I}}\mathbf{I})$ , with $\hat{R}_{\mathbf{I}}$ the principal submatrix of $R_{\mathbf{I}}$ obtained by removing its first row and column.
+
+By using these transformations to pre-process each of our point processor's inputs $\mathbf{I}$ , and to correspondingly postprocess each output shape $\mathbf{f}$ , we reduce the effective input space from $\mathcal{I} \subset \mathbb{R}^6$ to $\tilde{\mathcal{I}} \subset \mathbb{R}^4 \times \mathbb{R}_+$ . In the supplement, we show that this transformation always maps $I_{yy}$ to a nonpositive value, so $\tilde{\mathcal{I}}$ is actually contained in $\mathbb{R}^3 \times \mathbb{R}_+ \times \mathbb{R}_-$ . The size of $\tilde{\mathcal{I}}$ can be reduced further by exploiting the linearity of (3)-(5) in $\mathbf{I}$ , which implies that $F_+$ is invariant under any positive real scaling of $\mathbf{I}$ . This means we can additionally restrict $\tilde{\mathcal{I}}$ to the unit sphere $S^4$ without loss of generality.
+
+Combined, this reduces the effective input space to $\tilde{\mathcal{I}}\subset (\mathbb{R}^3\times \mathbb{R}_+ \times \mathbb{R}_-)\cap S^4$ . We reap the benefits of this domain simplification when designing and training our neural network.
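+
+As an illustration, the pre-processing implied by (9)-(11) together with the unit-norm scaling can be sketched as follows. This is a minimal numpy sketch assuming $(I_x, I_y) \neq (0,0)$; the helper names are illustrative only.
+
+```python
+import numpy as np
+
+def rotation_blocks(Ix, Iy):
+    # T_I, S_I, R_I from Eqs. (9)-(11); requires (Ix, Iy) != (0, 0).
+    n = np.sqrt(Ix**2 + Iy**2)
+    T = np.array([[Ix, Iy], [-Iy, Ix]]) / n
+    (G11, G12), (G21, G22) = T
+    S = np.array([
+        [G11**2,  2 * G11 * G21,          G21**2],
+        [G11 * G12, G11 * G22 + G12 * G21, G21 * G22],
+        [G12**2,  2 * G12 * G22,          G22**2],
+    ])
+    R = np.zeros((6, 6))
+    R[0, 0] = 1.0
+    R[1:3, 1:3] = T
+    R[3:6, 3:6] = np.linalg.inv(S)
+    return T, S, R
+
+def preprocess(I_jet):
+    # Map a 2-jet I = (I, Ix, Iy, Ixx, Ixy, Iyy) into the reduced input space:
+    # rotate so the transformed I_y vanishes, drop it, and scale to unit norm (Eq. 14).
+    I_jet = np.asarray(I_jet, dtype=float)
+    _, _, R = rotation_blocks(I_jet[1], I_jet[2])
+    t = R @ I_jet
+    reduced = np.delete(t, 2)          # transformed I_y is (numerically) zero
+    return reduced / np.linalg.norm(reduced)
+```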
+
+
+Figure 3. The structure of our two-stage network approximator $\hat{\phi}_{\mathbf{I}}$ for the map from vectors $\mathbf{I}$ to functions $\phi_{\mathbf{I}}$ . The right shows orientation domain and output samples for the same $\mathbf{I}$ as in Figure 1.
+
+# 5. A Neural Network Approximator
+
+Let us ignore for now the pre- and post-processing transformations related to rotations, and consider the task of approximating the mapping from vectors $\mathbf{I} \in \mathcal{I}$ to functions $\phi_{\mathbf{I}}$ . One convenient way to do this is to couple a pair of neural networks, with the output of one network providing the weights of the other. That is, we can use
+
+$$
+\hat {\phi} _ {\mathbf {I}} \left(f _ {x}, f _ {y}\right) := h \left(f _ {x}, f _ {y}; g _ {\theta} (\mathbf {I})\right), \tag {12}
+$$
+
+where $g_{\theta}:\mathbb{R}^{6}\mapsto \mathbb{R}^{M}$ is a (fully-connected, few-layer) neural network with tunable weights $\theta \in \mathbb{R}^N$ and $h_\psi :\mathbb{R}^2\mapsto \mathbb{R}^3$ is a (fully-connected, single layer) neural network whose weights $\psi \in \mathbb{R}^{M}$ are provided by the output of $g$ . This means that under the hood, $\hat{\phi}_{\mathbf{I}}$ is a function of $\theta$ .
+
+This is convenient because it provides a compact representation that can be efficiently fit to a large set of training samples. We can fit the weights $\theta$ by synthetically generating many measurements $\mathbf{I}$ and for each one computing many samples $\mathbf{f}$ from the corresponding semi-algebraic set $F_{+}(\mathbf{I})$ using Theorem 1 and Observation 1. This produces a set of samples $\{(\mathbf{I}^{(j)},\mathbf{f}^{(i,j)})\}_{i,j}$ that we can use to solve
+
+$$
+\theta = \underset{\theta}{\arg\min} \sum_{j} \sum_{i} \left\| \left(f_{xx}^{(i,j)}, f_{xy}^{(i,j)}, f_{yy}^{(i,j)}\right) - h\left(f_{x}^{(i,j)}, f_{y}^{(i,j)}; g_{\theta}\left(\mathbf{I}^{(j)}\right)\right) \right\|^{2} \tag{13}
+$$
+
+via stochastic gradient descent.
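+
+The following is a minimal PyTorch sketch of the coupled networks in (12) and the objective in (13). The class name and the random tensors standing in for synthetic training samples are illustrative only; real data comes from the rendering and root-finding procedure of Section 5.1.
+
+```python
+import torch
+import torch.nn as nn
+
+W_G, W_H = 25, 50                      # hidden widths used in Section 5.1
+M = 3 * (2 * W_H + 1)                  # size of psi = weights and biases of h (303 numbers)
+
+class HyperPointProcessor(nn.Module):
+    """g_theta maps a 2-jet I to the weights psi of a small network h (Eq. 12)."""
+    def __init__(self, in_dim=6):       # 6-dim 2-jet as in Eq. (12); the reduced 5-dim input works the same way
+        super().__init__()
+        self.g = nn.Sequential(nn.Linear(in_dim, W_G), nn.ReLU(), nn.Linear(W_G, M))
+
+    def forward(self, I_jet, fxy):
+        # I_jet: (6,) measurement; fxy: (K, 2) orientation samples.
+        psi = self.g(I_jet)
+        W1 = psi[: 2 * W_H].view(W_H, 2)
+        b1 = psi[2 * W_H: 3 * W_H]
+        W2 = psi[3 * W_H: 6 * W_H].view(3, W_H)
+        b2 = psi[6 * W_H:]
+        hidden = torch.relu(fxy @ W1.t() + b1)
+        return hidden @ W2.t() + b2      # predicted (f_xx, f_xy, f_yy) for each sample
+
+# One gradient step on the squared error of Eq. (13), with placeholder samples.
+model = HyperPointProcessor()
+opt = torch.optim.SGD(model.parameters(), lr=1e-2)
+I_jet = torch.randn(6)                  # stands in for a rendered measurement I^(j)
+fxy = torch.rand(64, 2) * 2 - 1         # orientation samples on [-1, 1]^2
+target = torch.randn(64, 3)             # stands in for curvatures from the root finder
+loss = ((model(I_jet, fxy) - target) ** 2).sum()
+opt.zero_grad(); loss.backward(); opt.step()
+```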
+
+Now, with only small modifications, we can incorporate the rotational transformations of Section 4.3 to make the approximator more efficient and reduce the training burden. This simply requires surrounding the neural network with linear transformation blocks (see Figure 3) that pre-process an
+input measurement $\mathbf{I}$ , and that correspondingly pre-process the orientation domain $(f_x, f_y)$ and post-process the output curvatures $(f_{xx}, f_{xy}, f_{yy})$ using (9-11). This reduces the domain of network $g_\theta$ from $\mathbb{R}^6$ to $(\mathbb{R}^3 \times \mathbb{R}_+ \times \mathbb{R}_-)\cap S^4$ . For example, if the input to the block $\frac{\cdot}{\|\cdot\|} \circ R_{\mathbf{I}}$ in Figure 3 is $\mathbf{I} = (I, I_x, I_y, I_{xx}, I_{xy}, I_{yy})$ then its output is
+
+$$
+(\tilde {I}, \tilde {I} _ {x}, \tilde {I} _ {x x}, \tilde {I} _ {x y}, \tilde {I} _ {y y}) / \| (\tilde {I}, \tilde {I} _ {x}, \tilde {I} _ {x x}, \tilde {I} _ {x y}, \tilde {I} _ {y y}) \| \tag {14}
+$$
+
+with $\tilde{\mathbf{I}} = R_{\mathbf{I}}\mathbf{I}$ (and dropping the now-redundant $\tilde{I}_y = 0$ ).
+
+# 5.1. Training Data and Network Architecture
+
+Training requires samples of 2-jets $\mathbf{I}^{(j)}\in \mathcal{I}$ as well as samples of the positive set $F_{+}(\mathbf{I}^{(j)})$ for each 2-jet. We generate the former by sampling light source directions $\mathbf{L}$ and quadratic patches $\mathbf{f}$ and then applying Eqs. (1) and (2) (and their spatial derivatives) to render 2-jets $\mathbf{I}^{(j)}$ . Specifically, we sample light sources uniformly from the subset of $S^2$ contained in an angular radius of $\pi /4$ from the view direction $(0,0,1)$ , and we sample surface orientations $f_{x},f_{y}$ uniformly from the unit disk $B^2$ . By Observation 1 it is sufficient to sample positive curvatures, so we sample $f_{xx},f_{xy},f_{yy}$ uniformly from a bounded subset of $\mathbb{R}^3\cap \{f_{xx}f_{yy} - (f_{xy})^2 >0\} \cap \{f_{xx} + f_{yy} > 0\}$ .
+
+To create samples of the positive set $F_{+}(\mathbf{I}^{(j)})$ for each 2-jet, we first generate a dense set of sample orientations $\{(f_x^{(i)},f_y^{(i)})\}$ from the unit disk to serve as input to network $h_\psi$ . Then, for each $\mathbf{I}^{(j)}$ and for each $f_x^{(i)},f_y^{(i)}$ the corresponding "ground truth" second order shape values $(f_{xx}^{(i,j)},f_{xy}^{(i,j)},f_{yy}^{(i,j)})$ are computed by applying a numerical root-finder to (3)-(5). The result is a training set $\{(\mathbf{I}^{(j)},\mathbf{f}^{(i,j)})\}_{i,j}$ .
+
+Numerical root finding can be expensive, but the simplification of the domain of $g_{\theta}$ (see the previous section) in our case reduces the computational burden. Rather than generating enough 2-jets $\mathbf{I}^{(j)}$ to sufficiently sample $\mathcal{I}$ , we need only generate enough for the measurements that are pre-processed by the block $\frac{\cdot}{\|\cdot\|}\circ R_{\mathbf{I}}$ to sufficiently sample $\tilde{\mathcal{I}}$ .
+
+For network $g_{\theta}:\mathbb{R}^{5}\mapsto \mathbb{R}^{M}$ we use $d_g = 1$ hidden layer with $w_{g} = 25$ ReLU nodes. For network $h_{\psi}:\mathbb{R}^2\mapsto \mathbb{R}^3$ we use one hidden layer with $w_{h} = 50$ ReLU nodes. The total number of tunable parameters is $N = 6w_{g} + (d_{g}-1)w_{g}(w_{g} + 1) + M(w_{g} + 1) + M$ , and once the model is trained, the output description of the shape-set $F(\mathbf{I})$ for any 2-jet $\mathbf{I}$ consists of $M = 3(2w_{h} + 1)$ rational numbers (the size of vector $\psi$ ). The entire shape set at each image point is therefore summarized by only $M = 303$ numbers. Figure 4 visualizes the quality of fit for a representative test measurement $\mathbf{I}$ that was not used during training.
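+
+For concreteness, these counts can be checked directly from the formula above:
+
+```python
+# Counting parameters with the formula from Section 5.1 (d_g = 1, w_g = 25, w_h = 50).
+d_g, w_g, w_h = 1, 25, 50
+M = 3 * (2 * w_h + 1)                                   # = 303 numbers describing F(I)
+N = 6 * w_g + (d_g - 1) * w_g * (w_g + 1) + M * (w_g + 1) + M
+print(M, N)                                             # 303 8331
+```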
+
+# 6. Applications
+
+The point processor transforms image values at a single point $\mathbf{I}$ into an intermediate representation of the consistent
+local shape-set, in the form of a two-dimensional manifold parametrized by surface orientation, $\left(f_{x},f_{y},\hat{\phi}_{\mathbf{I}}(f_{x},f_{y})\right)$ . To demonstrate how this continuous representation of per-point shapes can be used for image analysis, we consider two simple scenarios. In both cases, the per-point ambiguity is resolved (up to a discrete four-way choice of shapes) by exploiting additional information or assumptions.
+
+Figure 4. Visualization of the approximator's interpolation error. This figure depicts $\hat{\phi}_{\mathbf{I}}$ for an $\mathbf{I}$ that was randomly chosen from the convex hull of the training data set, but that was not used as a training sample. The inset shows the four randomly-chosen solutions for which our approximation performs worst, i.e. those $\mathbf{f}$ that maximize the error $||\mathbf{f} - (f_x,f_y,\hat{\phi}_{\mathbf{I}}(f_x,f_y))||_2^2$ .
+
+Our demonstrations use simple images rendered according to (1) with $1\%$ additive Gaussian noise and 64-bit quantization. We estimate spatial image derivatives using Gaussian derivative filters.
+
+# 6.1. Uncalibrated two-shot photometric stereo
+
+The per-point ambiguity can be resolved by capturing additional images of the same surface under distinct light directions. When the light directions are unknown this is called uncalibrated photometric stereo [18, 7, 3]. In the traditional formulation, which is based purely on surface orientation $(f_{x},f_{y})$ , it requires at least three images under three distinct lights [9]. Our point processor based on second order shape provides a similar capability with only two input images instead of three.
+
+Consider two measurements $\mathbf{I}_1, \mathbf{I}_2$ generated at the same point from two (unknown) light sources $\mathbf{L}_1, \mathbf{L}_2$ . A simulated example is depicted in the top of Figure 5. The first measurement $\mathbf{I}_1$ limits the shape to being in the set $F_+(\mathbf{I}_1)$ ,
+but within this set all shapes are equally likely. Since the set is parametrized by surface orientation $\left(f_{x},f_{y},\hat{\phi}_{\mathbf{I}_{1}}(f_{x},f_{y})\right)$ , we can visualize the (uniform) "likelihood" over some reasonably-sized disk of the orientation domain $(f_x,f_y)$ . This is shown in the left of Figure 5, with the magenta dot indicating the orientation of the latent true shape $\mathbf{f}^*$ that was used for the simulation.
+
+The second measurement $\mathbf{I}_2$ further restricts the shape to being in the intersection of sets $F_{+}(\mathbf{I}_{1})$ and $F_{+}(\mathbf{I}_{2})$ . Thus, we can improve the "likelihood" based on how close each shape is to $F_{+}(\mathbf{I}_{1}) \cap F_{+}(\mathbf{I}_{2})$ . One way to quantify this is
+
+$$
+L \left(f _ {x}, f _ {y}\right) := \left\| \hat {\phi} _ {\mathbf {I} _ {2}} \left(f _ {x}, f _ {y}\right) - \hat {\phi} _ {\mathbf {I} _ {1}} \left(f _ {x}, f _ {y}\right) \right\| ^ {2} \tag {15}
+$$
+
+for $(f_x, f_y)$ in the disk. For our simulation, this updated two-measurement likelihood is shown on the right in Figure 5, where it provides a successful identification of the true shape.
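+
+A sketch of this two-measurement scoring is given below, with placeholder lambdas standing in for the trained approximators $\hat{\phi}_{\mathbf{I}_1}$ , $\hat{\phi}_{\mathbf{I}_2}$ of Section 5; the grid size and placeholder functions are illustrative only.
+
+```python
+import numpy as np
+
+# Placeholder approximators; in practice these come from the trained network of Section 5.
+phi_hat_1 = lambda fx, fy: np.stack([1.0 + 0 * fx, 0 * fx, 1.0 + 0 * fy], axis=-1)
+phi_hat_2 = lambda fx, fy: np.stack([1.0 + fx * fy, 0 * fx, 1.0 - fx * fy], axis=-1)
+
+# Evaluate Eq. (15) over a disk of orientations.
+fx, fy = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
+L = np.sum((phi_hat_2(fx, fy) - phi_hat_1(fx, fy)) ** 2, axis=-1)
+L[fx**2 + fy**2 > 1] = np.nan                  # restrict to the unit disk
+best = np.unravel_index(np.nanargmin(L), L.shape)
+fx_star, fy_star = fx[best], fy[best]          # orientation of the estimated shape
+```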
+
+Recovering the correct per-point shape (up to the four-way choice) by this simple strategy relies on the intersection $F_{+}(\mathbf{I}_{1}) \cap F_{+}(\mathbf{I}_{2})$ being a single point, as seems to be the case for our simulation, as shown in the bottom of Figure 5. Our experiments suggest this is typically the case, but analytically characterizing the conditions for uniqueness may be a worthwhile direction for future work. Also, resolving the four-way choice at each point would require making additional surface continuity assumptions, analogous to how "integrability" is used to reduce the inherent global linear ambiguity in traditional three-shot photometric stereo [18].
+
+# 6.2. Surface continuity
+
+An alternative way to reduce the per-point ambiguity is to design a 2D array of point processors that are connected together by enforcing surface continuity across an extended region of the input image. As a simple example, we consider the scenario in which the entire surface is an extended quadratic function, meaning one that satisfies (2) over the entire image $I(x,y)$ with some "true shape" values $\mathbf{f}^{*} = (f_{x}^{*},f_{y}^{*},f_{xx}^{*},f_{xy}^{*},f_{yy}^{*})$ .
+
+When the surface is known to be an extended quadratic, any single local shape $\mathbf{f} \in F_{+}(\mathbf{I}_{1})$ at one point, say the image origin, immediately predicts a corresponding local shape $\mathbf{f}'$ at every other point $(x,y)$ in the image, via $(f_{xx}', f_{xy}', f_{yy}') = (f_{xx}, f_{xy}, f_{yy})$ and $(f_x', f_y') = (f_x, f_y) + A(x,y) \cdot (f_{xx}, f_{xy}, f_{yy})$ with matrix
+
+$$
+A (x, y) = \left[ \begin{array}{l l l} x & y & 0 \\ 0 & x & y \end{array} \right]. \tag {16}
+$$
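+
+A small numpy sketch of this propagation is given below; the candidate shape and query point are arbitrary.
+
+```python
+import numpy as np
+
+def A(x, y):
+    # Eq. (16): maps curvatures to the change in orientation between the origin and (x, y).
+    return np.array([[x, y, 0.0],
+                     [0.0, x, y]])
+
+def propagate(f, x, y):
+    # Predict the local shape at (x, y) from a candidate shape f at the origin,
+    # assuming the surface is an extended quadratic.
+    f = np.asarray(f, dtype=float)
+    grad, curv = f[:2], f[2:]
+    return np.concatenate([grad + A(x, y) @ curv, curv])
+
+f_origin = np.array([0.1, -0.2, 1.0, 0.3, 1.5])    # arbitrary candidate shape at the origin
+f_other = propagate(f_origin, x=0.5, y=-0.25)
+```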
+
+As before, we begin with a uniform relative likelihood over the shape set $F_{+}(\mathbf{I}_{1})$ obtained by a single measurement at the origin in an input image of an extended quadratic surface (left of Figure 6). Then given a measurement $\mathbf{I}_2$ at one other point $(x_2,y_2)$ , we use that information to update the likelihood
+over the first set using (15), but with the term $\hat{\phi}_{\mathbf{I}_2}(f_x,f_y)$ replaced by $\left(\hat{\phi}_{\mathbf{I}_2}(f_x,f_y) + A(x_2,y_2)\cdot \hat{\phi}_{\mathbf{I}_1}(f_x,f_y)\right)$ . The updated two-measurement likelihood is shown in the second column of Figure 6.
+
+Figure 5. Uncalibrated two-shot photometric stereo. Top row: two simulated images of a surface under different lights, with measurements $\mathbf{I}_1$ , $\mathbf{I}_2$ at the same pixel location. Center row: "Likelihood" of different shapes using only one measurement (left) or both measurements (right), visualized over the orientation domain. Magenta dot indicates true shape used for simulation. Bottom row: Shape sets $F_{+}(\mathbf{I}_{1})$ , $F_{+}(\mathbf{I}_{2})$ and their intersection (open circle).
+
+We continue this process by adding information from additional measurements, $\mathbf{I}_3$ at $(x_3,y_3)$ and $\mathbf{I}_4$ at $(x_4,y_4)$ , each time updating the likelihood over the original set $L(f_x,f_y)$ by accumulating the intersection errors between $F_{+}(\mathbf{I}_{i})$ and $F_{+}(\mathbf{I}_{1})$ . The evolution of this likelihood for three and four points is shown in Figure 6. We see that the composite likelihood function achieves its global maximum at a shape $\mathbf{f}\in F_{+}(\mathbf{I})$ that is very close to $\mathbf{f}^*$ modulo the irreconcilable four-way ambiguity. This is consistent with the area-based analysis of Xiong et al. [17], which proves the uniqueness of shape reconstruction for extended quadratic patches.
+
+
+Figure 6. Combining shape information at multiple points on an extended quadratic surface. Given one measurement $\mathbf{I}_1$ , all quadratic shapes in $F_{+}(\mathbf{I}_{1})$ are equally likely. This is depicted on the left as a constant relative likelihood over the domain of function $\hat{\phi}_{\mathbf{I}}$ . Incorporating measurements $\mathbf{I}_i$ at two or more points modifies the likelihood to have a maximum that is close to the true shape (magenta dot) modulo $\rho_1, \rho_2$ .
+
+# 7. Conclusion
+
+This paper takes preliminary steps toward a deployable point processor for shading that does not require knowledge of lighting at a point or rely on accurate estimates of that lighting. It suggests a new intermediate representation of the set of consistent second-order shapes at each image point, in the form of an explicit differentiable, parametrized two-dimensional manifold. It also provides two simple examples of how this new intermediate representation can be used for shape analysis. The distinguishing feature of this approach is that it has the potential to enable shape processing to succeed in real-world situations where the lighting varies across surfaces and is therefore difficult or impossible to accurately infer.
+
+The contributions of this paper are primarily theoretical, and turning this research into practice will require substantial progress in several directions. This may include combining multi-scale derivatives, creating spatial regularization schemes that are suitable for piecewise smooth surfaces, extending the approach from local second-order shape to local third-order shape, and exploring the ability of the factored network architecture to represent more general (e.g. non-Lambertian) rendering models and to be trained from images instead of algebraic equations.
+
+# References
+
+[1] Jonathan T. Barron and Jitendra Malik. Shape, illumination, and reflectance from shading. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 37(8):1670-1687, 2015. 2
+[2] Ronen Basri and David W Jacobs. Lambertian reflectance and linear subspaces. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), (2):218-233, 2003. 2
+[3] Peter N Belhumeur, David J Kriegman, and Alan L Yuille. The bas-relief ambiguity. International Journal of Computer Vision (IJCV), 35(1):33-44, 1999. 1, 7
+[4] Patrick Cavanagh. The artist as neuroscientist. Nature, 434:301-307, 2005. 2
+[5] Eng-Wee Chionh, Ronald N Goldman, and James R Miller. Using multivariate resultants to find the intersection of three quadric surfaces. ACM Transactions on Graphics (TOG), 10(4):378–400, 1991. 4
+[6] David Eigen, Christian Puhrsch, and Rob Fergus. Depth map prediction from a single image using a multi-scale deep network. In Advances in Neural Information Processing Systems (NeurIPS), pages 2366-2374, 2014. 2
+[7] Joel Fan and Lawrence B Wolff. Surface curvature and shape reconstruction from unknown multiple illumination and integrability. Computer Vision and Image Understanding, 65(2):347-359, 1997. 7
+[8] David A Forsyth. Variable-source shading analysis. International Journal of Computer Vision (IJCV), 91(3):280-302, 2011. 2
+[9] Hideki Hayakawa. Photometric stereo under a light source with arbitrary motion. Journal of the Optical Society of America (JOSA) A, 11(11):3079-3089, 1994. 7
+[10] Berthold KP Horn. Shape from shading: A method for obtaining the shape of a smooth opaque object from one view. Technical Report AITR-232, MIT Artificial Intelligence Laboratory, 1970. 1
+[11] Jan J Koenderink and Andrea J van Doorn. Representation of local geometry in the visual system. Biological cybernetics, 55(6):367-375, 1987. 2
+[12] Jan J Koenderink, Andrea J Van Doorn, Astrid ML Kappers, and James T Todd. Ambiguity and the 'mental eye' in pictorial relief. Perception, 30(4):431-448, 2001. 2
+[13] Zuzana Kukelova, Jan Heller, and Andrew Fitzgibbon. Efficient intersection of three quadrics and applications in computer vision. In Computer Vision and Pattern Recognition (CVPR), pages 1799-1808, June 2016. 4
+[14] Benjamin Kunsberg and Steven W Zucker. How shading constrains surface patches without knowledge of light sources. SIAM Journal on Imaging Sciences, 7(2):641-668, 2014. 2, 3, 4
+[15] Ravi Ramamoorthi and Pat Hanrahan. An efficient representation for irradiance environment maps. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pages 497-500. ACM, 2001. 2
+[16] Stephan R Richter and Stefan Roth. Discriminative shape from shading in uncalibrated illumination. In Computer Vision and Pattern Recognition (CVPR), pages 1128-1136, 2015. 2
+
+[17] Ying Xiong, Ayan Chakrabarti, Ronen Basri, Steven J Gortler, David W Jacobs, and Todd Zickler. From shading to local shape. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 37(1):67-79, 2015. 2, 4, 8
+[18] A Yuille and D Snow. Shape and albedo from multiple images using integrability. In Computer Vision and Pattern Recognition (CVPR), pages 158-164, 1997. 7
+[19] Daniel Zoran, Dilip Krishnan, Jose Bento, and Bill Freeman. Shape and illumination from shading using the generic viewpoint assumption. In Advances in Neural Information Processing Systems (NeurIPS), pages 226-234, 2014. 2
\ No newline at end of file
diff --git a/alightinginvariantpointprocessorforshading/images.zip b/alightinginvariantpointprocessorforshading/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b92c86ec1976e90e60eeb72f1db705b157e90ef9
--- /dev/null
+++ b/alightinginvariantpointprocessorforshading/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:87123d4a66a4a9429bc5488f66310cf2176357ce1a310d0628d746125edb23ea
+size 395881
diff --git a/alightinginvariantpointprocessorforshading/layout.json b/alightinginvariantpointprocessorforshading/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..6282248494b895117c46d63aab81b1b43fa281b6
--- /dev/null
+++ b/alightinginvariantpointprocessorforshading/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c4cba984571c4baa9115c6d1e147e380c5b0a41c1e516ea2e4eba621c6b499b0
+size 507503
diff --git a/alocaltoglobalapproachtomultimodalmoviescenesegmentation/5366456f-397a-45d6-b885-2f5ca634add3_content_list.json b/alocaltoglobalapproachtomultimodalmoviescenesegmentation/5366456f-397a-45d6-b885-2f5ca634add3_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..93f8ef0eb743a2825d1b032c6466d5c94d7ee741
--- /dev/null
+++ b/alocaltoglobalapproachtomultimodalmoviescenesegmentation/5366456f-397a-45d6-b885-2f5ca634add3_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d9c53d2425643169526420e7de4f19e80b22f58d53968055d546e5bdcae3f17a
+size 74993
diff --git a/alocaltoglobalapproachtomultimodalmoviescenesegmentation/5366456f-397a-45d6-b885-2f5ca634add3_model.json b/alocaltoglobalapproachtomultimodalmoviescenesegmentation/5366456f-397a-45d6-b885-2f5ca634add3_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..73e59ad7ab937b22c32e2e60b7b14044a17c5c90
--- /dev/null
+++ b/alocaltoglobalapproachtomultimodalmoviescenesegmentation/5366456f-397a-45d6-b885-2f5ca634add3_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:297e104caa52a9bba659c7a3f9b034bff069c96d25726f2c41be3eff5f57a231
+size 91941
diff --git a/alocaltoglobalapproachtomultimodalmoviescenesegmentation/5366456f-397a-45d6-b885-2f5ca634add3_origin.pdf b/alocaltoglobalapproachtomultimodalmoviescenesegmentation/5366456f-397a-45d6-b885-2f5ca634add3_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a53d8ffbb1e5be47cf94b2e2b6cd839e0cb7b8d1
--- /dev/null
+++ b/alocaltoglobalapproachtomultimodalmoviescenesegmentation/5366456f-397a-45d6-b885-2f5ca634add3_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e50127115a2226ec48e7fe699bb2f193fd47fb866a0192d5437a2f8043c105e2
+size 2598796
diff --git a/alocaltoglobalapproachtomultimodalmoviescenesegmentation/full.md b/alocaltoglobalapproachtomultimodalmoviescenesegmentation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..979d103645cf91dacbeeaa3b933f25f4d02e3420
--- /dev/null
+++ b/alocaltoglobalapproachtomultimodalmoviescenesegmentation/full.md
@@ -0,0 +1,313 @@
+# A Local-to-Global Approach to Multi-modal Movie Scene Segmentation
+
+Anyi Rao $^{1}$ , Linning Xu $^{2}$ , Yu Xiong $^{1}$ , Guodong Xu $^{1}$ , Qingqiu Huang $^{1}$ , Bolei Zhou $^{1}$ , Dahua Lin $^{1}$ $^{1}$ CUHK - SenseTime Joint Lab, The Chinese University of Hong Kong
+ $^{2}$ The Chinese University of Hong Kong, Shenzhen
+
+{anyirao, xy017, xg018, hq016, bzhou, dhlin}@ie.cuhk.edu.hk, linningxu@link.cuhk.edu.cn
+
+# Abstract
+
+Scene, as the crucial unit of storytelling in movies, contains complex activities of actors and their interactions in a physical environment. Identifying the composition of scenes serves as a critical step towards semantic understanding of movies. This is very challenging – compared to the videos studied in conventional vision problems, e.g. action recognition, as scenes in movies usually contain much richer temporal structures and more complex semantic information. Towards this goal, we scale up the scene segmentation task by building a large-scale video dataset MovieScenes, which contains $21K$ annotated scene segments from 150 movies. We further propose a local-to-global scene segmentation framework, which integrates multi-modal information across three levels, i.e. clip, segment, and movie. This framework is able to distill complex semantics from hierarchical temporal structures over a long movie, providing top-down guidance for scene segmentation. Our experiments show that the proposed network is able to segment a movie into scenes with high accuracy, consistently outperforming previous methods. We also found that pretraining on our MovieScenes can bring significant improvements to the existing approaches.1
+
+# 1. Introduction
+
+Imagine you are watching the movie Mission Impossible starred by Tom Cruise: In a fight scene, Ethan leaps onto a helicopter's landing skid and attaches an exploding gum to the windshield to destroy the enemy. Suddenly, the story jumps into an emotional scene where Ethan pulled the trigger and sacrificed his life to save his wife Julia. Such a dramatic change of scenes plays an important role in the movie's storytelling. Generally speaking, a movie is composed of a well-designed series of intriguing scenes with transitions, where the underlying storyline determines the
+order of the scenes being presented. Therefore recognizing the movie scenes, including the detection of scene boundaries and the understanding of the scene content, facilitates a wide range of movie understanding tasks such as scene classification, cross-movie scene retrieval, human interaction graphs and human-centric storyline construction.
+
+Figure 1. When we look at any single shot from figure (a), e.g. the woman in shot B, we cannot infer what the current event is. Only when we consider all the shots 1-6 in this scene, as shown in figure (b), can we recognize that "this woman is inviting a couple to dance with the band."
+
+It is worth noting that scenes and shots are essentially different. In general, a shot is captured by a camera that operates for an uninterrupted period of time and thus is visually continuous; while a scene is a semantic unit at a higher level. As illustrated in Figure 1, a scene comprises a sequence of shots to present a semantically coherent part of the story. Therefore, whereas a movie can be readily divided into shots based on simple visual cues using existing tools [23], the task of identifying those sub-sequences of shots that constitute scenes is challenging, as it requires semantic understanding in order to discover the associations between those shots that are semantically consistent but visually dissimilar.
+
+There have been extensive studies on video understanding. Despite the great progress in this area, most existing works focus on recognizing the categories of certain activities from short videos [28, 6, 14]. More importantly, these works assume a list of pre-defined categories that are visually distinguishable. However, for movie scene segmentation, it is impossible to have such a list of categories. Additionally, shots are grouped into scenes according to their semantic coherence rather than just visual cues. Hence, a new method needs to be developed for this purpose.
+
+To associate visually dissimilar shots, we need semantic understanding. The key question here is "how can we learn semantics without category labels?" Our idea to tackle this problem consists of three aspects: 1) Instead of attempting to categorize the content, we focus on scene boundaries. We can learn what constitutes a boundary between scenes in a supervised way, and thus gain the capability of differentiating between within-scene and cross-scene transitions. 2) We leverage the cues contained in multiple semantic elements, including place, cast, action, and audio, to identify the associations across shots. By integrating these aspects, we can move beyond visual observations and establish the semantic connections more effectively. 3) We also explore the top-down guidance from the overall understanding of the movie, which brings further performance gains.
+
+Based on these ideas, we develop a local-to-global framework that performs scene segmentation through three stages: 1) extracting shot representations from multiple aspects, 2) making local predictions based on the integrated information, and finally 3) optimizing the grouping of shots by solving a global optimization problem. To facilitate this research, we construct MovieScenes, a large-scale dataset that contains over $21K$ scenes comprising over $270K$ shots from 150 movies.
+
+Experiments show that our method raises performance by $68\%$ (from 28.1 to 47.1 in terms of average precision) over the existing best method [1]. Existing methods pretrained on our dataset also gain substantially in performance.
+
+# 2. Related Work
+
+Scene boundary detection and segmentation. The earliest works exploit a variety of unsupervised methods. [22] clusters shots according to shot color similarity. In [17], the author plots a shot response curve from low-level visual features and sets a threshold to cut scenes. [4, 3] further group shots using spectral clustering with a fast global k-means algorithm. [10, 24] predict scene boundaries with dynamic programming by optimizing a predefined objective. Researchers also resort to other modality information, e.g. [13] leverages scripts with an HMM, and [23] uses low-level visual and audio features to build a scene transition graph. These unsupervised methods are not flexible and heavily rely on manually setting parameters for different videos.
+
+Researchers then move on to supervised approaches and start to build new datasets. IBM OVSD [21] consists of 21 short videos with rough scenes, which may contain more than one plot. BBC Planet Earth [1] comes from 11 episodes of BBC documentaries. [15] generates synthetic data from Places205 [31]. However, the videos in these datasets lack rich plots or storylines, which limits their real-world applications. The number of test videos is so small that it cannot reflect the effectiveness of the methods, considering the vast variety of scenes. Additionally, these methods take the shot as the analytical unit and implement scene segmentation recursively in local regions. Because they do not consider the semantics within a scene, it is hard for them to learn high-level semantics and achieve an ideal result.
+
+Scene understanding in images and short videos. Image-based scene analysis [31, 29, 9] can infer some basic knowledge about scenes, e.g. what is contained in an image. However, it is hard to tell the action from a single static image, since it lacks surrounding contextual information. Dynamic scene understanding is further studied with seconds-long short videos [6, 14]. However, all these videos are single-shot videos without enough variation to capture the change of time and place, compared to long videos.
+
+Scene understanding in long videos. There are few datasets focusing on scenes in long videos. Most available long video datasets focus on identifying cast in movies or TV series [2, 12, 16] and on localizing and classifying actions [8]. MovieGraphs [26] focuses on the individual scene clips in a movie and the language structures of a scene. Some transition parts between scenes are discarded, making the information incomplete.
+
+In order to achieve more general scene analysis that could be extended to videos with long time duration, we address scene segmentation in movies with our large-scale MovieScenes dataset. We propose a framework considering both the relationship among shots locally and the relationship among scenes globally using multiple semantic elements, achieving much better segmentation results.
+
+# 3. MovieScenes Dataset
+
+To facilitate scene understanding in movies, we construct MovieScenes, a large-scale scene segmentation dataset that contains $21K$ scenes derived by grouping over $270K$ shots from 150 movies. This dataset provides a foundation for studying the complex semantics within scenes, and facilitates plot-based long video understanding on top of scenes.
+
+# 3.1. Definition of Scenes
+
+Following previous definitions of scene [17, 4, 10, 24], a scene is a plot-based semantic unit, where a certain activity takes place among a certain group of characters. While a scene often happens in a fixed place, it is also possible that a scene traverses multiple places continually, e.g. during a fighting scene in a movie, the characters may move from indoors to outdoors. These complex entanglements make the accurate detection of scenes more difficult and require high-level semantic information.
+
+
+Figure 2. Example of the annotated scenes from movie Bruce Almighty (2003). The blue line in the bottom corresponds to the whole movie timeline where the dark blue and light blue regions represent different scenes. In Scene 10, the characters are having a phone call in two different places, thus it requires a semantic understanding of this scene to prevent it from categorizing them into different scenes. In Scene 11, the task becomes even more difficult, as this live broadcasting scene involves more than three places and groups of characters. In this case, visual cues only are likely to fail, thus the inclusion of other aspects such as the audio cues becomes critical.
+
+Table 1. Data consistency statistics of MovieScenes. We divide all annotations into three categories: high/low consistency cases and unsure cases, according to annotator consistency. Unsure cases are discarded in our experiments. More details are given in the supplementary materials.
+
+| Consist. | High | Low | Unsure |
+| --- | --- | --- | --- |
+| Transit. | 16,392 (76.5%) | 5,036 (23.5%) | - |
+| Non-trans. | 225,836 (92.6%) | 18,048 (7.4%) | - |
+| Total | 242,052 (89.5%) | 23,260 (8.6%) | 5,138 (1.9%) |
+
+Table 2. A comparison of existing scene datasets.
+
+| Dataset | #Shot | #Scene | #Video | Time (h) | Source |
+| --- | --- | --- | --- | --- | --- |
+| OVSD [21] | 10,000 | 300 | 21 | 10 | MiniFilm |
+| BBC [1] | 4,900 | 670 | 11 | 9 | Docu. |
+| MovieScenes | 270,450 | 21,428 | 150 | 297 | Movies |
+
+Figure 2 illustrates some examples of annotated scenes in MovieScenes, demonstrating this difficulty.
+
+The vast diversity of movie scenes makes it hard for the annotators to agree with each other. To ensure the consistency of results from different annotations, during the annotation procedure we provided a list of ambiguous examples with specific guidance clarifying how such cases should be handled. Moreover, all data are annotated independently by different annotators multiple times. In the end, this multi-round annotation with the provided guidance leads to highly consistent results, i.e. $89.5\%$ high-consistency cases in total, as shown in Table 1.
+
+# 3.2. Annotation Tool and Procedure
+
+Our dataset contains 150 movies, and it would be a prohibitive amount of work if the annotators went through the movies frame by frame. We adopt a shot-based approach, based on the understanding that a $\text{shot}^2$ can always be uniquely categorized into one scene. Consequently, the scene boundaries must be a subset of all the shot boundaries. For each movie, we first divide it into shots using off-the-shelf methods [23]. This shot-based approach greatly simplifies the scene segmentation task and speeds up the annotation process. We also developed a web-based annotation tool$^3$ to facilitate annotation. All of the annotators went through a two-round annotation procedure to ensure high consistency. In the first round, we dispatch each chunk of movies to three independent annotators for a later consistency check. In the second round, inconsistent annotations are re-assigned to two additional annotators for extra evaluation.
+
+# 3.3. Annotation Statistics
+
+Large-scale. Table 2 compares MovieScenes with existing similar video scene datasets. We show that MovieScenes is significantly larger than other datasets in terms of the number of shots/scenes and the total time duration. Furthermore, our dataset covers a much wider range of diverse sources of data, capturing all kinds of scenes, compared with short films or documentaries.
+
+Diversity. Most movies in our dataset have a duration between 90 and 120 minutes, providing rich information about individual movie stories. A wide range of genres is covered, including the most popular ones such as dramas, thrillers, and action movies, making our dataset more comprehensive and general. The length of the annotated scenes varies from less than $10s$ to more than $120s$ , where the majority last for $10 \sim 30s$ . This large variability at both the movie level and the scene level makes the movie scene segmentation task more challenging. $^4$
+
+# 4. Local-to-Global Scene Segmentation
+
+As mentioned above, a scene is a series of continuous shots. Therefore, scene segmentation can be formulated as a binary classification problem, i.e. to determine whether a shot boundary is a scene boundary. However, this task is not easy, since segmenting scenes requires the recognition of multiple semantic aspects and usage of the complex temporal information.
+
+To tackle this problem, we propose a Local-to-Global Scene Segmentation framework (LGSS). The overall formulation is shown in Equation 1. A movie with $n$ shots is represented as a shot sequence $[\mathbf{s}_1,\dots ,\mathbf{s}_n]$ , where each shot is represented with multiple semantic aspects. We design a three-level model to incorporate different levels of contextual information, i.e. clip level $(\mathcal{B})$ , segment level $(\mathcal{T})$ and movie level $(\mathcal{G})$ , based on the shot representation $\mathbf{s}_i$ . Our model gives a sequence of predictions $[o_1,\dots ,o_{n - 1}]$ where $o_i\in \{0,1\}$ denotes whether the boundary between the $i$ -th and $(i + 1)$ -th shots is a scene boundary.
+
+$$
+\mathcal {G} \left\{\mathcal {T} \left[ \mathcal {B} \left(\left[ \mathbf {s} _ {1}, \mathbf {s} _ {2}, \dots , \mathbf {s} _ {n} \right]\right) \right] \right\} = \left[ o _ {1}, o _ {2}, \dots , o _ {n - 1} \right] \tag {1}
+$$
+
+In the following parts of this section, we will first introduce how to get $\mathbf{s}_i$ , namely how to represent the shot with multiple semantic elements. Then we will illustrate the details of the three levels of our model, i.e. $\mathcal{B}$ , $\mathcal{T}$ and $\mathcal{G}$ . The overall framework is shown in Figure 3.
+
+# 4.1. Shot Representation with Semantic Elements
+
+A movie is typical multi-modal data that contains different high-level semantic elements. A global feature extracted from a shot by a neural network, which is widely used in previous works [1, 24], is not enough to capture the complex semantic information.
+
+A scene is a sequence of shots sharing some common elements, e.g. place, cast, etc. Thus, it is important to take these related semantic elements into consideration for a better shot representation. In our LGSS framework, a shot is represented with four elements that play important roles in the constitution of a scene, namely place, cast, action, and audio.
+
+To obtain semantic features for each shot $\mathbf{s}_i$ , we utilize 1) ResNet50 [11] pretrained on Places dataset [31] on key frame images to get place features, 2) Faster-RCNN [19] pretrained on CIM dataset [12] to detect cast instances and ResNet50 pretrained on PIPA dataset [30] to extract cast features, 3) TSN [27] pretrained on AVA dataset [8] to get action features, 4) NaverNet [5] pretrained on AVA-ActiveSpeaker dataset [20] to separate speech and background sound, and stft [25] to get their features respectively in a shot with $16\mathrm{KHz}$ sampling rate and 512 windowed signal length, and concatenate them to obtain audio features.
+
+# 4.2. Shot Boundary Representation at Clip Level
+
+As we mentioned before, scene segmentation can be formulated as a binary classification problem on shot boundaries. Therefore, how to represent a shot boundary becomes a crucial question. Here, we propose a Boundary Network (BNet) to model the shot boundary. As shown in Equation 2, BNet, denoted as $\mathcal{B}$ , takes a clip of the movie with $2w_{b}$ shots as input and outputs a boundary representation $\mathbf{b}_i$ . Motivated by the intuition that a boundary representation should capture both the differences and the relations between the shots before and after, BNet consists of two branches, namely $\mathcal{B}_d$ and $\mathcal{B}_r$ . $\mathcal{B}_d$ is modeled by two temporal convolution layers, which embed the shots before and after the boundary respectively, followed by an inner product operation to calculate their difference. $\mathcal{B}_r$ aims to capture the relations of the shots; it is implemented by a temporal convolution layer followed by a max pooling.
+
+$$
+\begin{aligned} \mathbf{b}_{i} &= \mathcal{B}\left(\left[\mathbf{s}_{i-(w_{b}-1)}, \dots, \mathbf{s}_{i+w_{b}}\right]\right) \qquad (\text{window size } 2w_{b}) \\ &= \left[\begin{array}{l} \mathcal{B}_{d}\left(\left[\mathbf{s}_{i-(w_{b}-1)}, \dots, \mathbf{s}_{i}\right], \left[\mathbf{s}_{i+1}, \dots, \mathbf{s}_{i+w_{b}}\right]\right) \\ \mathcal{B}_{r}\left(\left[\mathbf{s}_{i-(w_{b}-1)}, \dots, \mathbf{s}_{i}\right], \left[\mathbf{s}_{i+1}, \dots, \mathbf{s}_{i+w_{b}}\right]\right) \end{array}\right] \end{aligned} \tag{2}
+$$
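+
+One plausible reading of this boundary module is sketched below in PyTorch. The feature and hidden dimensions are illustrative, and the element-wise product stands in for the inner-product operation described above; the exact implementation may differ.
+
+```python
+import torch
+import torch.nn as nn
+
+class BNet(nn.Module):
+    """Rough sketch of the boundary network in Eq. (2): a difference branch B_d
+    and a relation branch B_r over a clip of 2*w_b shot features."""
+    def __init__(self, feat_dim=512, hid_dim=256, w_b=2):
+        super().__init__()
+        self.w_b = w_b
+        # B_d: one temporal conv per half, collapsing w_b shots into one embedding each.
+        self.embed_before = nn.Conv1d(feat_dim, hid_dim, kernel_size=w_b)
+        self.embed_after = nn.Conv1d(feat_dim, hid_dim, kernel_size=w_b)
+        # B_r: temporal conv over the whole clip followed by max pooling.
+        self.relation = nn.Conv1d(feat_dim, hid_dim, kernel_size=w_b, padding=w_b // 2)
+
+    def forward(self, shots):
+        # shots: (batch, 2*w_b, feat_dim) -> (batch, feat_dim, 2*w_b) for Conv1d.
+        x = shots.transpose(1, 2)
+        before = self.embed_before(x[:, :, : self.w_b]).squeeze(-1)
+        after = self.embed_after(x[:, :, self.w_b:]).squeeze(-1)
+        diff = before * after                       # element-wise product as the "inner product" branch
+        rel = self.relation(x).max(dim=-1).values   # max pool over time
+        return torch.cat([diff, rel], dim=1)        # boundary representation b_i
+
+b = BNet()(torch.randn(4, 4, 512))                  # 4 boundaries, w_b = 2 -> output (4, 512)
+```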
+
+# 4.3. Coarse Prediction at Segment Level
+
+After we get the representation of each shot boundary $\mathbf{b}_i$ , the problem becomes predicting a sequence of binary labels $[o_1, o_2, \dots, o_{n-1}]$ based on the sequence of representations $[\mathbf{b}_1, \dots, \mathbf{b}_{n-1}]$ , which can be solved by a sequence-to-sequence model [7]. However, the number of shots $n$ is usually larger than 1000, which is too long for existing sequential models to keep in memory. Therefore, we design a segment-level model to predict coarse results based on a movie segment that consists of $w_t$ shots ( $w_t \ll n$ ). Specifically, we use a sequential model $\mathcal{T}$ , e.g. a Bi-LSTM [7], with a stride of $w_t / 2$ shots to predict a sequence of coarse scores $[p_1, \dots, p_{n-1}]$ , as shown in Equation 3. Here $p_i \in [0,1]$ is the probability of a shot boundary being a scene boundary.
+
+$$
+[ p _ {1}, \dots , p _ {n - 1} ] = \mathcal {T} ([ \mathbf {b} _ {1}, \dots , \mathbf {b} _ {n - 1} ]) \tag {3}
+$$
+
+Then we get a coarse prediction $\bar{o}_i\in \{0,1\}$ , which indicates whether the $i$ -th shot boundary is a scene boundary. By binarizing $p_i$ with a threshold $\tau$ , we get
+
+$$
+\bar{o}_{i} = \begin{cases} 1 & \text{if } p_{i} > \tau, \\ 0 & \text{otherwise.} \end{cases} \tag{4}
+$$
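+
+A minimal PyTorch sketch of the segment-level model and the thresholding in (4) is shown below; the dimensions, threshold value, and the sliding-window handling are illustrative only.
+
+```python
+import torch
+import torch.nn as nn
+
+class SegmentModel(nn.Module):
+    """Sketch of the segment-level sequence model T: a Bi-LSTM over a window of
+    w_t boundary representations, emitting per-boundary scene-cut probabilities."""
+    def __init__(self, in_dim=512, hid_dim=256):
+        super().__init__()
+        self.lstm = nn.LSTM(in_dim, hid_dim, batch_first=True, bidirectional=True)
+        self.head = nn.Linear(2 * hid_dim, 1)
+
+    def forward(self, b_seq):
+        # b_seq: (batch, w_t, in_dim) boundary representations from BNet.
+        h, _ = self.lstm(b_seq)
+        return torch.sigmoid(self.head(h)).squeeze(-1)   # p_i in [0, 1]
+
+model = SegmentModel()
+p = model(torch.randn(1, 10, 512))       # one window of w_t = 10 boundaries
+o_bar = (p > 0.5).long()                 # Eq. (4) with an illustrative tau = 0.5
+```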
+
+# 4.4. Global Optimal Grouping at Movie Level
+
+Figure 3. Local-to-Global Scene Segmentation framework (LGSS). At the clip level, we extract four encodings for each shot and use a BNet to model each shot boundary. The local sequence model outputs rough scene cut results at the segment level. Finally, at the movie level, global optimal grouping is applied to refine the scene segmentation results.
+
+The segmentation result $\bar{o}_i$ obtained by the segment-level model $\mathcal{T}$ is not good enough, since it only considers the local information over $w_{t}$ shots while ignoring the global contextual information over the whole movie. In order to capture the global structure, we develop a global optimal model $\mathcal{G}$ to take movie-level context into consideration. It takes the shot representations $\mathbf{s}_i$ and the coarse predictions $\bar{o}_i$ as inputs and makes the final decision $o_i$ as follows,
+
+$$
+\left[ o _ {1}, \dots , o _ {n - 1} \right] = \mathcal {G} \left(\left[ \mathbf {s} _ {1}, \dots , \mathbf {s} _ {n} \right], \left[ \bar {o} _ {1}, \dots , \bar {o} _ {n - 1} \right]\right) \tag {5}
+$$
+
+The global optimal model $\mathcal{G}$ is formulated as an optimization problem. Before introducing it, we first establish the concept of super shots and the objective function.
+
+The local segmentation gives us an initial rough scene cut set $\mathbf{C} = \{\mathcal{C}_k\}$ , here we denote $\mathcal{C}_k$ as a super shot, i.e. a sequence of consecutive shots determined by the segment-level results $[\bar{o}_1,\dots ,\bar{o}_{n - 1}]$ . Our goal is to merge these super shots into $j$ scenes $\Phi (n = j) = \{\phi_1,\ldots ,\phi_j\}$ , where $\mathbf{C} = \bigcup_{k = 1}^{j}\phi_{k}$ and $|\phi_k|\geq 1$ . Since $j$ is not given, to automatically decide the target scene number $j$ , we need to look through all the possible scene cuts, i.e. $F = \max_{j,j < |\mathbf{C}|}F(n = j)$ . With fixed $j$ , we want to find the optimal scene cut set $\Phi^{\star}(n = j)$ . The overall optimization problem is as follows,
+
+$$
+\begin{aligned} F^{\star} &= \max_{j} F(n = j) \\ &= \max_{j} \left(\max_{\Phi} \sum_{\phi_{k} \in \Phi} g(\phi_{k})\right), \\ &\quad \text{s.t.}\ \ j < |\mathbf{C}|, \ |\Phi| = j. \end{aligned} \tag{6}
+$$
+
+Here, $g(\phi_k)$ is the optimal scene cut score achieved by
+the scene $\phi_{k}$ . It formulates the relationship between a super shot $\mathcal{C}_l\in \phi_k$ and the remaining super shots $\mathcal{P}_{k,l} = \phi_k\backslash \mathcal{C}_l$ . $g(\phi_k)$ consists of two terms that capture a global relationship and a local relationship: $F_{s}(\mathcal{C}_{l},\mathcal{P}_{k,l})$ is the similarity score between $\mathcal{C}_l$ and $\mathcal{P}_{k,l}$ , and $F_{t}(\mathcal{C}_{l},\mathcal{P}_{k,l})$ is an indicator function of whether there is a very high similarity between $\mathcal{C}_l$ and any super shot from $\mathcal{P}_{k,l}$ , aiming to capture shot threads within a scene. Specifically,
+
+$$
+g(\phi_{k}) = \sum_{\mathcal{C}_{l} \in \phi_{k}} f(\mathcal{C}_{l}, \mathcal{P}_{k,l}) = \sum_{\mathcal{C}_{l} \in \phi_{k}} \left(F_{s}(\mathcal{C}_{l}, \mathcal{P}_{k,l}) + F_{t}(\mathcal{C}_{l}, \mathcal{P}_{k,l})\right),
+$$
+
+$$
+F_{s}(\mathcal{C}_{l}, \mathcal{P}_{k,l}) = \frac{1}{|\mathcal{P}_{k,l}|} \sum_{\hat{\mathcal{C}} \in \mathcal{P}_{k,l}} \cos(\mathcal{C}_{l}, \hat{\mathcal{C}}),
+$$
+
+$$
+F_{t}(\mathcal{C}_{l}, \mathcal{P}_{k,l}) = \sigma\left(\max_{\hat{\mathcal{C}} \in \mathcal{P}_{k,l}} \cos(\mathcal{C}_{l}, \hat{\mathcal{C}})\right).
+$$
+
+DP. Solving the optimization problem and determining the target scene number can be carried out effectively by dynamic programming (DP). The update of $F(n = j)$ is
+
+$$
+\max _ {k} \{F ^ {\star} (n = j - 1 | \mathbf {C} _ {1: k}) + g (\phi_ {j} = \{\mathcal {C} _ {k + 1}, \dots , \mathcal {C} _ {| \mathbf {C} |} \}) \},
+$$
+
+where $\mathbf{C}_{1:k}$ is the set containing the first $k$ super shots.
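+
+A simplified, self-contained sketch of this dynamic program over contiguous partitions of the super shots is given below; the similarity terms follow the definitions above, while the exact scoring details and the super shot representation refinement are simplified, and the toy features are random.
+
+```python
+import numpy as np
+
+def sigmoid(x):
+    return 1.0 / (1.0 + np.exp(-x))
+
+def cos(a, b):
+    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
+
+def g(group):
+    # Score of one candidate scene: for each super shot, average similarity to the
+    # rest (F_s) plus a squashed max similarity (F_t).
+    if len(group) < 2:
+        return 0.0
+    total = 0.0
+    for l, c in enumerate(group):
+        sims = [cos(c, group[m]) for m in range(len(group)) if m != l]
+        total += np.mean(sims) + sigmoid(max(sims))
+    return total
+
+def optimal_grouping(C, j):
+    # DP over contiguous partitions of the super shots C into exactly j scenes.
+    n = len(C)
+    score, back = {(0, 0): 0.0}, {}
+    for m in range(1, n + 1):                 # first m super shots ...
+        for s in range(1, min(j, m) + 1):     # ... grouped into s scenes
+            best = None
+            for k in range(s - 1, m):         # last scene is C[k:m]
+                if (k, s - 1) not in score:
+                    continue
+                v = score[(k, s - 1)] + g(C[k:m])
+                if best is None or v > best[0]:
+                    best = (v, k)
+            if best is not None:
+                score[(m, s)], back[(m, s)] = best
+    cuts, m, s = [], n, j                     # recover the best partition
+    while s > 0:
+        k = back[(m, s)]
+        cuts.append((k, m))
+        m, s = k, s - 1
+    return score[(n, j)], list(reversed(cuts))
+
+C = [np.random.rand(128) for _ in range(12)]          # toy super shot features
+best_j = max(range(2, 6), key=lambda j: optimal_grouping(C, j)[0])
+_, scenes = optimal_grouping(C, best_j)
+```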
+
+Iterative optimization. The above DP could give us a scene cut result, but we can further take this result as a new
+super shot set and iteratively merge them to improve the final result. When the super shots update, we also need to update their representations. A simple summation over all the contained shots may not be an ideal representation for a super shot, as some shots contain less information. Therefore, it is better to refine the representations of super shots during the optimal grouping. The details of this refinement of the super shot representation are given in the supplements.
+
+# 5. Experiments
+
+# 5.1. Experimental Setup
+
+Data. We implement all the baseline methods with our MovieScenes dataset. The whole annotation set is split into Train, Val, and Test sets with the ratio 10:2:3 on video level.
+
+Implementation details. We take cross entropy loss for the binary classification. Since the dataset is imbalanced, i.e. non-scene-transition shot boundaries dominate (approximately 9:1), we weight the cross entropy loss 1:9 for non-scene-transition and scene-transition shot boundaries respectively. We train these models for 30 epochs with the Adam optimizer. The initial learning rate is 0.01 and is divided by 10 at the 15th epoch.
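+
+A sketch of this training configuration is shown below; the linear model merely stands in for the full LGSS model, and the random tensors stand in for real batches.
+
+```python
+import torch
+import torch.nn as nn
+
+criterion = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 9.0]))   # [non-transition, transition]
+model = nn.Linear(512, 2)                                           # placeholder for the LGSS model
+optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
+scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[15], gamma=0.1)
+
+for epoch in range(30):
+    logits = model(torch.randn(8, 512))                 # placeholder batch of boundary features
+    loss = criterion(logits, torch.randint(0, 2, (8,)))
+    optimizer.zero_grad(); loss.backward(); optimizer.step()
+    scheduler.step()
+```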
+
+In the global optimal grouping, we take 600 super shots from the local segmentation according to the obtained classification scores for these shot boundaries (a movie usually contains $1k \sim 2k$ shot boundaries). The range of target scene numbers is from 50 to 400, i.e. $j \in [50, 400]$ . These values are estimated based on the MovieScenes statistics.
+
+Evaluation Metrics. We take three commonly used metrics: 1) Average Precision (AP). Specifically, in our experiment it is the mean over movies of the AP for $o_i = 1$ . 2) Miou: a weighted sum of the intersection over union of a detected scene boundary with respect to its distance to the closest ground-truth scene boundary. 3) Recall@3s: recall at 3 seconds, the percentage of annotated scene boundaries that lie within 3s of a predicted boundary.
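+
+A small sketch of the Recall@3s computation; the function name and the example boundary times are illustrative only.
+
+```python
+import numpy as np
+
+def recall_at_3s(gt_boundaries, pred_boundaries, tol=3.0):
+    # Fraction of ground-truth scene boundaries (in seconds) that lie within
+    # `tol` seconds of some predicted boundary.
+    gt = np.asarray(gt_boundaries, dtype=float)
+    pred = np.asarray(pred_boundaries, dtype=float)
+    if len(gt) == 0:
+        return 1.0
+    if len(pred) == 0:
+        return 0.0
+    dists = np.abs(gt[:, None] - pred[None, :]).min(axis=1)
+    return float((dists <= tol).mean())
+
+print(recall_at_3s([12.0, 48.5, 93.2], [11.0, 50.0, 120.0]))   # 2 of 3 within 3 s -> 0.666...
+```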
+
+# 5.2. Quantitative Results
+
+The overall results are shown in Table 3. We reproduce existing methods [18, 4, 10, 21, 24, 1] with deep place features for fair comparison. The base model applies temporal convolution on shots with the place feature, and we gradually add the following four modules to it, i.e., 1) multiple semantic elements (Multi-Semantics), 2) shot boundary representation at clip level (BNet), 3) coarse prediction at segment level with a local sequence model (Local Seq), and 4) global optimal grouping at movie level (Global).
+
+Analysis of overall results. The performance of the random method depends on the ratio of scene-transition to non-scene-transition shot boundaries in the test set, which is approximately $1:9$ . All the conventional methods [18, 4, 10, 21] outperform random guess, yet do not achieve good performance since they only consider local contextual information and fail to capture semantic information. [24, 1] achieve better results than the conventional methods [18, 4, 10, 21] by considering longer-range information.
+
+Analysis of our framework. Our base model applies temporal convolution on shots with the place feature and achieves 19.5 AP. With the help of multiple semantic elements, our method improves from 19.5 (Base) to 24.3 (Multi-Semantics), a $24.6\%$ relative improvement. Adding shot boundary modeling with BNet raises the performance from 24.3 (Multi-Semantics) to 42.2 (Multi-Semantics+BNet), a $73.7\%$ relative improvement, which suggests that in the scene segmentation task, modeling the shot boundary directly is useful. The method with the local sequence model (Multi-Semantics+BNet+Local Seq) achieves a 2.7 absolute and $6.4\%$ relative improvement over the model (Multi-Semantics+BNet), from 42.2 to 44.9. The full model, which includes both the local sequence model and global optimal grouping (Multi-Semantics+BNet+Local Seq+Global), further improves the results from 44.9 to 47.1, which shows that movie-level optimization is important for scene segmentation.
+
+In all, with the help of multiple semantic elements, clip-level shot boundary modeling, the segment-level local sequence model, and movie-level global optimal grouping, our best model outperforms the base model and the former best method [1] by a large margin: it improves over the base model (Base) by 27.6 absolute ($142\%$ relative) and over Siamese [1] by 19.0 absolute ($68\%$ relative). These results verify the effectiveness of the local-to-global framework.
+
+# 5.3. Ablation Studies
+
+Multiple semantic elements. We take the pipeline with shot boundary modeling (BNet), the local sequence model, and global optimal grouping as the base model. As shown in Table 4, gradually adding mid-level semantic elements improves the final results. Starting from the model using place only, adding audio improves AP by 4.4, adding action by 6.5, adding cast by 4.0, and adding all of them together by 8.1. This indicates that place, cast, action, and audio all provide useful information for scene segmentation.
+
+Additionally, with the help of our multi-semantic elements, other methods [21, 24, 1] achieve $20\% \sim 30\%$ relative improvements. This further supports our assumption that multi-semantic elements contribute to scene segmentation.
+
+Influence of temporal length. We choose different window sizes in the shot boundary modeling at clip level (BNet) and different sequence lengths of Bi-LSTM at segment level
+
+Table 3. Scene segmentation results. In our pipeline, Multi-Semantics denotes multiple semantic elements, BNet the shot boundary modeling (boundary net), Local Seq the local sequence model, and Global the global optimal grouping.
+
+| Method | AP (↑) | Miou (↑) | Recall (↑) | Recall@3s (↑) |
+| --- | --- | --- | --- | --- |
+| Random guess | 8.2 | 26.8 | 49.8 | 54.2 |
+| Rasheed et al., GraphCut [18] | 14.1 | 29.7 | 53.7 | 57.2 |
+| Chasanis et al., SCSA [4] | 14.7 | 30.5 | 54.9 | 58.0 |
+| Han et al., DP [10] | 15.5 | 32.0 | 55.6 | 58.4 |
+| Rotman et al., Grouping [21] | 17.6 | 33.1 | 56.6 | 58.7 |
+| Tapaswi et al., StoryGraph [24] | 25.1 | 35.7 | 58.4 | 59.7 |
+| Baraldi et al., Siamese [1] | 28.1 | 36.0 | 60.1 | 61.2 |
+| LGSS (Base) | 19.5 | 34.0 | 57.1 | 58.9 |
+| LGSS (Multi-Semantics) | 24.3 | 34.8 | 57.6 | 59.4 |
+| LGSS (Multi-Semantics+BNet) | 42.2 | 44.7 | 67.5 | 78.1 |
+| LGSS (Multi-Semantics+BNet+Local Seq) | 44.9 | 46.5 | 71.4 | 77.5 |
+| LGSS (all, Multi-Semantics+BNet+Local Seq+Global) | 47.1 | 48.8 | 73.6 | 79.8 |
+| Human upper-bound | 81.0 | 91.0 | 94.1 | 99.5 |
+
+Table 4. Multiple semantic elements scene segmentation ablation results, where four elements are studied including place, cast, action and audio.
+
+| Method | place | cast | act | aud | AP (↑) |
+| --- | --- | --- | --- | --- | --- |
+| Grouping [21] | ✓ | | | | 17.6 |
+| StoryGraph [24] | ✓ | | | | 25.1 |
+| Siamese [1] | ✓ | | | | 28.1 |
+| Grouping [21] | ✓ | ✓ | ✓ | ✓ | 23.8 |
+| StoryGraph [24] | ✓ | ✓ | ✓ | ✓ | 33.2 |
+| Siamese [1] | ✓ | ✓ | ✓ | ✓ | 34.1 |
+| LGSS | | | | ✓ | 17.5 |
+| LGSS | | | ✓ | | 32.1 |
+| LGSS | | ✓ | | | 15.9 |
+| LGSS | ✓ | | | | 39.0 |
+| LGSS | ✓ | | | ✓ | 43.4 |
+| LGSS | ✓ | | ✓ | | 45.5 |
+| LGSS | ✓ | ✓ | | | 43.0 |
+| LGSS | ✓ | ✓ | ✓ | ✓ | 47.1 |
+
+(Local Seq). The results are shown in Table 5. The experiments show that a longer range of information improves the performance. Interestingly, the best results come from using 4 shots for shot boundary modeling and 10 shot boundaries as the input of the local sequence model, involving 14 shots of information in total. This is approximately the length of a scene, which shows that this range of temporal information is helpful for scene segmentation.
+
+Choice of hyper-parameters in global optimal grouping. We vary the iteration number of the optimization (Iter #) and
+
+Table 5. Comparison of different temporal window sizes at the clip and segment levels. Rows vary the window size of the clip-level shot boundary modeling (BNet); columns vary the length of the segment-level sequence model (seq.).
+
+| BNet \ seq | 1 | 2 | 5 | 10 | 20 |
+| --- | --- | --- | --- | --- | --- |
+| 2 | 43.4 | 44.2 | 45.4 | 46.3 | 46.5 |
+| 4 | 44.9 | 45.2 | 45.7 | 47.1 | 46.9 |
+| 6 | 44.7 | 45.0 | 45.8 | 46.7 | 46.6 |
+
+Table 6. Comparison of different hyper-parameters in global optimal grouping and different choices of initial super shot number.
+
+| Iter # \ Init # | 400 | 600 | 800 | 1000 |
+| --- | --- | --- | --- | --- |
+| 2 | 46.5 | 46.3 | 45.9 | 45.1 |
+| 4 | 46.5 | 46.9 | 46.4 | 45.9 |
+| 5 | 46.5 | 47.1 | 46.6 | 46.0 |
+| Converged value | 46.5 | 47.1 | 46.6 | 46.0 |
+
+the initial super shot number (Init #), and show the results in Table 6.
+
+We first look at each row, varying the initial super shot number. The setting with an initial number of 600 achieves the best results, since it is close to the target scene number range ($50 \sim 400$) while still providing a sufficiently large search space. Then, looking at each column, we observe that the setting with an initial number of 400 converges the fastest, reaching its best result after only 2 iterations. All the settings converge within 5 iterations.
+
+
+Figure 4. Multiple semantic elements interpretation, where the norm of similarity of each semantic element is represented by the corresponding bar length. These four movie clips illustrate how different elements contribute to the prediction of a scene.
+
+
+Figure 5. Qualitative results of global optimal grouping in two cases. In each case, the first and second rows show the results before and after the global optimal grouping, respectively. A red line between two shots indicates a scene cut. The ground truth in each case is that these shots belong to the same scene.
+
+# 5.4. Qualitative Results
+
+Qualitative results showing the effectiveness of our multi-modal approach are illustrated in Figure 4, and qualitative results of the global optimal grouping are shown in Figure 5.
+
+Multiple semantic elements. To quantify the importance of the multiple semantic elements, we take the norm of the cosine similarity for each modality. Figure 4 (a) shows an example where the cast is very similar in consecutive shots and contributes to the formation of a scene. In Figure 4 (b), the characters and their actions are hard to recognize: the first shot is a long shot where the character is very small, and the last shot only shows part of the character without a clear face. In these cases, the scene is recognized thanks to the similar audio features shared among these shots. Figure 4 (c) is a typical "phone call" scene where the action in each shot is similar. In Figure 4 (d), only the place is similar, yet we can still conclude it is one scene. From the above observations and analysis of more such cases, we come to the following empirical conclusion: the multi-modal cues are complementary to each other and help scene segmentation.
+
+Table 7. Scene segmentation cross dataset transfer result (AP) on existing datasets.
+
+| Method | OVSD [21] | BBC [1] |
+| --- | --- | --- |
+| DP [10] | 58.3 | 55.1 |
+| Siamese [1] | 65.6 | 62.3 |
+| LGSS | 76.2 | 79.5 |
+| DP-pretrained [10] | 62.9 | 58.7 |
+| Siamese-pretrained [1] | 76.8 | 71.4 |
+| LGSS-pretrained | 85.7 | 90.2 |
+
+Optimal grouping. We show two cases to demonstrate the effectiveness of the optimal grouping. There are two scenes in Figure 5. Without global optimal grouping, a sudden viewpoint change within a scene is likely to be predicted as a scene transition (red line in the figure); e.g., in the first case, the coarse prediction yields two scene cuts when the shot type changes from a full shot to a close shot. In the second case, the coarse prediction yields a scene cut when an extreme close-up shot appears. Our global optimal grouping smooths out these redundant scene cuts as expected.
+
+# 5.5. Cross Dataset Transfer
+
+We test the methods DP [10] and Siamese [1], as well as our LGSS, on the existing datasets OVSD [21] and BBC [1], with and without pretraining on our MovieScenes dataset, and the results are shown in Table 7. With pretraining on our dataset, performance improves significantly, i.e. $\sim 10$ absolute and $\sim 15\%$ relative improvement in AP. The reason is that our dataset covers many more scenes and brings better generalization ability to models pretrained on it.
+
+# 6. Conclusion
+
+In this work, we collect a large-scale annotation set for scene segmentation on 150 movies containing $270K$ annotations. We propose a local-to-global scene segmentation framework that captures hierarchical temporal and semantic information. Experiments show that this framework is very effective and achieves much better performance than existing methods. Successful scene segmentation can support a range of movie understanding applications. All the studies in this paper together show that scene analysis is a challenging but meaningful topic that deserves further research efforts.
+
+Acknowledgment This work is partially supported by the General Research Fund (GRF) of Hong Kong (No. 14203518 & No. 14205719) and SenseTime Collaborative Grant on Large-scale Multi-modality Analysis.
+
+# References
+
+[1] Lorenzo Baraldi, Costantino Grana, and Rita Cucchiara. A deep siamese network for scene detection in broadcast videos. In 23rd ACM International Conference on Multimedia, pages 1199-1202. ACM, 2015. 2, 3, 4, 6, 7, 8
+[2] Piotr Bojanowski, Francis Bach, Ivan Laptev, Jean Ponce, Cordelia Schmid, and Josef Sivic. Finding actors and actions in movies. In Proceedings of the IEEE International Conference on Computer Vision, pages 2280-2287, 2013. 2
+[3] Brandon Castellano. Pyscenedetect: Intelligent scene cut detection and video splitting tool. https://pyscenedetect.readthedocs.io/en/latest/, 2018. 2
+[4] Vasileios T Chasanis, Aristidis C Likas, and Nikolaos P Galatsanos. Scene detection in videos using shot clustering and sequence alignment. IEEE transactions on multimedia, 11(1):89-100, 2008. 2, 6, 7
+[5] Joon Son Chung. Naver at activitynet challenge 2019-task b active speaker detection (ava). arXiv preprint arXiv:1906.10555, 2019. 4
+[6] Fabian Caba Heilbron, Victor Escorcia, Bernard Ghanem, and Juan Carlos Niebles. Activitynet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 961-970, 2015. 1, 2
+[7] Alex Graves and Jürgen Schmidhuber. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural networks, 18(5-6):602-610, 2005. 4
+[8] Chunhui Gu, Chen Sun, David A Ross, Carl Vondrick, Caroline Pantofaru, Yeqing Li, Sudheendra Vijayanarasimhan, George Toderici, Susanna Ricco, Rahul Sukthankar, et al. Ava: A video dataset of spatio-temporally localized atomic visual actions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6047-6056, 2018. 2, 4
+[9] Saurabh Gupta and Jitendra Malik. Visual semantic role labeling. arXiv preprint arXiv:1505.04474, 2015. 2
+[10] Bo Han and Weiguo Wu. Video scene segmentation using a novel boundary evaluation criterion and dynamic programming. In 2011 IEEE International conference on multimedia and expo, pages 1-6. IEEE, 2011. 2, 6, 7, 8
+[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 4
+[12] Qingqiu Huang, Yu Xiong, and Dahua Lin. Unifying identification and context learning for person recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2217-2225, 2018. 2, 4
+[13] Chao Liang, Yifan Zhang, Jian Cheng, Changsheng Xu, and Hanqing Lu. A novel role-based movie scene segmentation method. In Pacific-Rim Conference on Multimedia, pages 917-922. Springer, 2009. 2
+[14] Mathew Monfort, Alex Andonian, Bolei Zhou, Kandan Ramakrishnan, Sarah Adel Bargal, Yan Yan, Lisa Brown, Quanfu Fan, Dan Gutfreund, Carl Vondrick, et al. Moments in time dataset: one million videos for event understanding. IEEE transactions on pattern analysis and machine intelligence, 2019. 1, 2
+[15] Stanislav Protasov, Adil Mehmood Khan, Konstantin Sozykin, and Muhammad Ahmad. Using deep features for video scene detection and annotation. Signal, Image and Video Processing, pages 1-9, 2018. 2
+[16] Vignesh Ramanathan, Armand Joulin, Percy Liang, and Li Fei-Fei. Linking people in videos with "their" names using coreference resolution. In European conference on computer vision, pages 95–110. Springer, 2014. 2
+[17] Zeeshan Rasheed and Mubarak Shah. Scene detection in hollywood movies and tv shows. In 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003. Proceedings., volume 2, pages II-343. IEEE, 2003. 2
+[18] Zeeshan Rasheed and Mubarak Shah. Detection and representation of scenes in videos. IEEE transactions on Multimedia, 7(6):1097-1105, 2005. 6, 7
+[19] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 91-99. Curran Associates, Inc., 2015. 4
+[20] Joseph Roth, Sourish Chaudhuri, Ondrej Klejch, Radhika Marvin, Andrew Gallagher, Liat Kaver, Sharadh Ramaswamy, Arkadiusz Stopczynski, Cordelia Schmid, Zhonghua Xi, et al. Ava-activespeaker: An audiovisual dataset for active speaker detection. arXiv preprint arXiv:1901.01342, 2019. 4
+[21] Daniel Rotman, Dror Porat, and Gal Ashour. Optimal sequential grouping for robust video scene detection using multiple modalities. International Journal of Semantic Computing, 11(02):193-208, 2017. 2, 3, 6, 7, 8
+[22] Yong Rui, Thomas S Huang, and Sharad Mehrotra. Exploring video structure beyond the shots. In Proceedings. IEEE International Conference on Multimedia Computing and Systems (Cat. No. 98TB100241), pages 237-240. IEEE, 1998. 2
+[23] Panagiotis Sidiropoulos, Vasileios Mezaris, Ioannis Kompatsiaris, Hugo Meinedo, Miguel Bugalho, and Isabel Trancoso. Temporal video segmentation to scenes using high-level audiovisual features. IEEE Transactions on Circuits and Systems for Video Technology, 21(8):1163-1177, 2011. 1, 2, 3
+[24] Makarand Tapaswi, Martin Bauml, and Rainer Stiefelhagen. Storygraphs: visualizing character interactions as a timeline. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 827-834, 2014. 2, 4, 6, 7
+[25] Srinivasan Umesh, Leon Cohen, and D Nelson. Fitting the mel scale. In 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings. ICASSP99 (Cat. No. 99CH36258), volume 1, pages 217-220. IEEE, 1999. 4
+[26] Paul Vicol, Makarand Tapaswi, Lluis Castrejon, and Sanja Fidler. Moviegraphs: Towards understanding human-centric situations from videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8581-8590, 2018. 2
+[27] Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In ECCV, 2016. 4
+[28] Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In European conference on computer vision, pages 20-36. Springer, 2016. 1
+[29] Mark Yatskar, Luke Zettlemoyer, and Ali Farhadi. Situation recognition: Visual semantic role labeling for image understanding. In Conference on Computer Vision and Pattern Recognition, 2016. 2
+[30] Ning Zhang, Manohar Paluri, Yaniv Taigman, Rob Fergus, and Lubomir Bourdev. Beyond frontal faces: Improving person recognition using multiple cues. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4804-4813, 2015. 4
+[31] Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 million image database for scene recognition. IEEE transactions on pattern analysis and machine intelligence, 40(6):1452-1464, 2018. 2, 4
\ No newline at end of file
diff --git a/alocaltoglobalapproachtomultimodalmoviescenesegmentation/images.zip b/alocaltoglobalapproachtomultimodalmoviescenesegmentation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..86e2c3ff9f54c140b971e3e83bdef86425db2b47
--- /dev/null
+++ b/alocaltoglobalapproachtomultimodalmoviescenesegmentation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:31a8961df3053adda694b3d28096afb87cbccd449ab9d01bb2a16c814d39bcd5
+size 513895
diff --git a/alocaltoglobalapproachtomultimodalmoviescenesegmentation/layout.json b/alocaltoglobalapproachtomultimodalmoviescenesegmentation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..d0f053a9922822ac816bbd27ce02cb8757a2eac2
--- /dev/null
+++ b/alocaltoglobalapproachtomultimodalmoviescenesegmentation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:90d4ea42cc3f7692854772efce87d7f4de7fafae0f45f11ed416d212e5fca1b4
+size 385529
diff --git a/amodeldrivendeepneuralnetworkforsingleimagerainremoval/5bb58f2c-57a0-4b06-bc1f-ccbf3dd0e05f_content_list.json b/amodeldrivendeepneuralnetworkforsingleimagerainremoval/5bb58f2c-57a0-4b06-bc1f-ccbf3dd0e05f_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..617620c49f786cd1d5c111159b85a339c6bd698c
--- /dev/null
+++ b/amodeldrivendeepneuralnetworkforsingleimagerainremoval/5bb58f2c-57a0-4b06-bc1f-ccbf3dd0e05f_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b621eeb6c92071593b6aa02ba4d26e2f5cb4389deedcbd3e9cc03c32a36cfba9
+size 97373
diff --git a/amodeldrivendeepneuralnetworkforsingleimagerainremoval/5bb58f2c-57a0-4b06-bc1f-ccbf3dd0e05f_model.json b/amodeldrivendeepneuralnetworkforsingleimagerainremoval/5bb58f2c-57a0-4b06-bc1f-ccbf3dd0e05f_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..87939ba7c3e67a5bfb7bff4f819a8c0e1c6e13bd
--- /dev/null
+++ b/amodeldrivendeepneuralnetworkforsingleimagerainremoval/5bb58f2c-57a0-4b06-bc1f-ccbf3dd0e05f_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e62298173a8d4fb77785678db5dbc26aac689a91f6b2946342d76c24425585ab
+size 125098
diff --git a/amodeldrivendeepneuralnetworkforsingleimagerainremoval/5bb58f2c-57a0-4b06-bc1f-ccbf3dd0e05f_origin.pdf b/amodeldrivendeepneuralnetworkforsingleimagerainremoval/5bb58f2c-57a0-4b06-bc1f-ccbf3dd0e05f_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ce02ddc47adb0d3666db8f5cd6511164af146b3f
--- /dev/null
+++ b/amodeldrivendeepneuralnetworkforsingleimagerainremoval/5bb58f2c-57a0-4b06-bc1f-ccbf3dd0e05f_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a1ca51285b3dd4d0f3f29a0af16a8f90c12a132aba44c54fc15a3fbdb110f45e
+size 6858173
diff --git a/amodeldrivendeepneuralnetworkforsingleimagerainremoval/full.md b/amodeldrivendeepneuralnetworkforsingleimagerainremoval/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..86ab09e49df1587c5e0e7ab33993055f3e9433cc
--- /dev/null
+++ b/amodeldrivendeepneuralnetworkforsingleimagerainremoval/full.md
@@ -0,0 +1,487 @@
+# A Model-driven Deep Neural Network for Single Image Rain Removal
+
+Hong Wang $^{1,*}$ , Qi Xie $^{1,*}$ , Qian Zhao $^{1}$ , Deyu Meng $^{2,1,\dagger}$
+
+$^{1}$ Xi'an Jiaotong University; $^{2}$ Macau University of Science and Technology
+
+{hongwang01,xq.liwu}@stu.xjtu.edu.cn timmy.zhaoqian@gmail.com dymeng@mail.xjtu.edu.cn
+
+# Abstract
+
+Deep learning (DL) methods have achieved state-of-the-art performance in the task of single image rain removal. Most current DL architectures, however, still lack sufficient interpretability and are not fully integrated with the physical structures inside general rain streaks. To address this issue, in this paper we propose a model-driven deep neural network for the task, with fully interpretable network structures. Specifically, based on the convolutional dictionary learning mechanism for representing rain, we propose a novel single image deraining model and utilize the proximal gradient descent technique to design an iterative algorithm containing only simple operators for solving the model. Such a simple implementation scheme allows us to unfold it into a new deep network architecture, called the rain convolutional dictionary network (RCDNet), with almost every network module corresponding one-to-one to an operation of the algorithm. By training the proposed RCDNet end to end, all the rain kernels and proximal operators can be automatically extracted, faithfully characterizing the features of both rain and clean background layers, which naturally leads to better deraining performance, especially in real scenarios. Comprehensive experiments substantiate the superiority of the proposed network, especially its good generality to diverse testing scenarios and good interpretability of all its modules, as compared with state-of-the-arts both visually and quantitatively.
+
+# 1. Introduction
+
+Images taken under various rain conditions often suffer from unfavorable visibility, which severely affects the performance of outdoor computer vision tasks, such as object tracking [5], video surveillance [37], and pedestrian detection [31]. Hence, removing rain streaks from rainy images is an important pre-processing task and has drawn much research attention in recent years [39, 26].
+
+In the past years, various methods have been proposed for the single image rain removal task. Many researchers
+
+
+(a) RCD model for rain layer
+
+
+(b) Algorithm for solving the proposed model
+
+
+(c) Illustration of the proposed RCDNet
+Figure 1. (a) Rain convolutional dictionary (RCD) model for rain layer. (b) The formulated optimization model and the corresponding iterative solution algorithm. (c) Visual illustration of the proposed RCDNet one-to-one corresponding to the algorithm (b).
+
+focused on exploring the physical properties of the rain layer and
+
+background layer, and introduced various prior structures to regularize and separate them. Along this research line, the representative methods include layer priors with Gaussian mixture model (GMM) [28], discriminative sparse coding (DSC) [51], and joint convolutional analysis and synthesis sparse representation (JCAS) [13]. Especially, inspired by the fact that rain streaks repeatedly appear at different locations over a rainy image with similar local patterns like shape, thickness, and direction, very recently researchers represented this configuration of rain layer by the convolutional dictionary learning model [15, 16]. Such a representation finely delivers this prior knowledge by imposing rain kernels (conveying repetitive local patterns) on sparse rain maps, as intuitively depicted in Fig. 1 (a). These methods thus achieved state-of-the-art (SOTA) performance when the background can also be well represented, e.g., by low-rank prior in surveillance video sequences [25].
+
+Albeit effective in certain applications, the rationality of these techniques depends on the subjective prior assumptions imposed on the unknown background and rain layers to be recovered. In real scenarios, however, such learning regimes cannot always adapt to rainy images with complex, diverse, and varying structures collected from different sources. Besides, these methods generally require time-consuming iterative computations, often raising efficiency issues in real applications.
+
+Driven by the significant success of deep learning (DL) in low level vision, recent years have also witnessed the rapid progress of deep convolutional neural networks (CNN) for single image rain removal [8, 52, 53, 40]. The current DL-based derainers mainly focus on designing network modules, and then train network parameters based on abundant rainy/clean image pairs to extract the background layer. Typical deraining network structures include deep detail network (DDN) [9], recurrent squeeze-and-excitation context aggregation module (RESCAN) [27], progressive image deraining network (PReNet) [35], spatial attentive unit (SPANet) [41], and many others.
+
+These DL strategies, however, also possess evident deficiencies. The most significant one is their weak interpretability. Network structures are often complicated and diverse, making it difficult to analyze the role of different modules and understand the underlying insights of their mechanisms. Besides, most of them treat the CNN as an encapsulated end-to-end mapping module without examining its rationality, and neglect intrinsic prior knowledge of rain streaks such as sparsity and nonlocal similarity. This makes the methodology prone to overfitting to the training samples.
+
+To alleviate the aforementioned issues, this paper designs an interpretable deep network, which sufficiently considers the characteristics of rain streaks and attempts to combine the advantages of the conventional model-driven
+
+prior-based and current data-driven DL-based methodologies. Specifically, our contributions are mainly three-fold:
+
+Firstly, we propose a concise rain convolutional dictionary (RCD) model for single images by exploiting the intrinsic convolutional dictionary learning mechanism to encode rain shapes, and specifically adopt the proximal gradient technique [2] to design an optimization algorithm for solving it. Different from traditional solvers for the RCD model that contain complex operations (e.g., the Fourier transform), the algorithm only contains simple computations (see Fig. 1 (b)) that are easy to implement with general network modules. This makes the algorithm easy to unfold into a deep network architecture.
+
+Secondly, by unfolding the algorithm, we design a new deep network architecture for image deraining, called RCDNet. The specificity of this network lies in the exact step-by-step correspondence between its modules and the algorithm operators, so its modules inherit the interpretability of the corresponding algorithm steps. Specifically, as shown in Fig. 1 (b) and (c), each iteration of the algorithm contains two sub-steps, respectively updating the rain map (convolved with the learned rain kernels) and the background layer, and each stage of the RCDNet accordingly contains two sub-networks (M-net and B-net). Every intermediate output of the network thus has a clear interpretation, which greatly facilitates a deeper analysis of what happens inside the network during training and a comprehensive understanding of why the network works or not (as the analysis presented in Sec. 5.2).
+
+Thirdly, comprehensive experimental results substantiate the superiority of the RCDNet over SOTA conventional prior-based and current DL-based methods both quantitatively and visually. Especially, thanks to its good interpretability, not only can the underlying rationality and insights of the network be intuitively understood by general users through visualizing the amelioration process (like the gradually rectified background and rain maps) over all network layers, but the network also yields generally useful rain kernels for expressing rain shapes and proximal operators for delivering the prior knowledge of the background and rain maps of a rainy image, facilitating their general applicability to more real-world rainy images.
+
+The paper is organized as follows. Sec. 2 reviews the related rain removal work. Sec. 3 presents the RCD model for rain removal as well as the algorithm designed for solving it. Then Sec. 4 introduces the unfolding deep network for the algorithm. The experimental results are demonstrated in Section 5 and the paper is finally concluded.
+
+# 2. Related work
+
+In this section, we give a brief review on the most related work on rain removal for images. Depending on the input data, the existing algorithms can be categorized into two
+
+groups: video based and single image based ones.
+
+# 2.1. Video deraining methods
+
+Garg and Nayar [10] first tried to analyze the visual effects of raindrops on imaging systems, and utilized a space-time correlation model to capture the dynamics of raindrops and a physics-based motion blur model to illustrate the photometry of rain. For better visual quality, they further proposed to increase the exposure time or reduce the depth of field of a camera [12, 11]. Later, both temporal and chromatic properties of rain were considered and then background layer was extracted from rainy video by utilizing different strategies such as K-means clustering [55], Kalman filter [33], and GMM [3]. Besides, a spatio-temporal frequency based raindrop detection method was provided in [1].
+
+In recent years, researchers introduced more intrinsic characteristics of rainy video to the task, e.g., similarity and repeatability of rain streaks [4], low-rankness among multiframes [20], and sparsity and smoothness of rain streaks [18]. To handle heavy rain and dynamic scenes, a matrix decomposition based video deraining algorithm was presented in [36]. Afterwards, rain streaks were encoded as a patch-based GMM to adapt to a wider range of rain variations [45]. More characteristics of rain streaks in rainy video were explored, including repetitive local patterns and multi-scale configurations, and described with a multiscale convolutional sparse coding model [25]. More recently, some DL-based methods have been proposed for this task. Chen et al. [19] presented a CNN architecture and utilized superpixels to handle torrential rainfall with opaque streak occlusions. To further improve visual quality, Liu et al. [30] designed a joint recurrent rain removal and reconstruction network that integrates rain degradation classification, rain removal, and background detail reconstruction. To handle dynamic video contexts, they further developed a dynamic routing residue recurrent network [29]. Though these methods work well for videos, they cannot be directly applied to single images due to the lack of temporal knowledge.
+
+# 2.2. Single image deraining methods
+
+Compared with the video deraining task over a sequence of images, rain removal from a single image is much more challenging. Early attempts utilized model-driven strategies by decomposing a single rainy image into a low frequency part (LFP) and a high frequency part (HFP) and then extracting the rain layer from the HFP with various processing such as guided filtering [6, 21] and nonlocal means filtering [23]. Later, researchers focused more on exploring the prior knowledge of the rain and rain-free layers of a rainy image, and on designing proper regularizers to extract and separate them [22, 38, 51, 28, 42, 56]. E.g., [13] considered the specific sparsity characteristics of rain-free and rain
+
+parts and expressed them as joint analysis and synthesis sparse representation models, respectively. [15] used a similar manner to capture the local repetitive patterns of rain streaks across the image as an RCD model. Albeit achieving good performance in certain scenarios, these prior-based methods rely on subjective prior assumptions and cannot always work well for the complicated and highly diverse rain shapes in real rainy images collected from different sources.
+
+Recently, a number of DL-based single image rain streak removal methods were proposed by constructing diverse network modules [8, 9, 27, 52, 53]. To handle heavy rain, Yang et al. [49] developed a multi-stage joint rain detection and estimation network for single images (JORDER_E). Very recently, Ren et al. [35] designed PReNet, which repeatedly unfolds several Resblocks and an LSTM layer. Wang et al. [41] presented the attention-unit-based SPANet for removing rain in a local-to-global manner. Using abundant rainy/clean image pairs to train the deep model, these methods achieve favorable visual quality and SOTA quantitative measures of derained results. Most of these methods, however, just assemble network modules from off-the-shelf components in current DL toolkits to directly learn the background layer in an end-to-end way, and largely ignore the intrinsic prior structures inside rain streaks. This leaves their network architectures without evident interpretability and with room for further performance enhancement.
+
+At present, there is a new type of single image derainers that try to combine the prior-based and DL methodologies. For example, Mu et al. [32] utilized a CNN to implicitly learn prior knowledge for background and rain streaks, and formulated them into traditional bi-layer optimization iterations. Wei et al. [44] provided a semi-supervised rain removal method (SIRR) that describes the rain layer prior as a general GMM and jointly trains the backbone DDN. Albeit obtaining initial success, these methods still use CNN architectures as the main modules of the network, which thus still lacks sufficient interpretability.
+
+# 3. RCD model for single image deraining
+
+# 3.1. Model formulation
+
+For an observed color rainy image denoted as $\mathcal{O} \in \mathbb{R}^{H \times W \times 3}$, where $H$ and $W$ are the height and width of the image, respectively, it can be rationally separated as:
+
+$$
+\mathcal {O} = \mathcal {B} + \mathcal {R}, \tag {1}
+$$
+
+where $\mathcal{B}$ and $\mathcal{R}$ represent the background and rain layers of the image, respectively. Then, the aim of most of DL-based deraining methods is to estimate the mapping function (expressed by a deep network) from $\mathcal{O}$ to $\mathcal{B}$ (or $\mathcal{R}$ ).
+
+Instead of heuristically constructing a complex deep network architecture, we first consider the problem under the conventional prior-based methodology through exploiting the prior knowledge for representing rain streaks [13, 15, 25]. Specifically, as shown in Fig. 1 (a), by adopting the RCD mechanism, the rain layer can be modeled as:
+
+$$
+\mathcal {R} ^ {c} = \sum_ {n = 1} ^ {N} C _ {n} ^ {c} \otimes M _ {n}, c = 1, 2, 3, \tag {2}
+$$
+
+where $\mathcal{R}^c$ denotes the $c^{\mathrm{th}}$ color channel of $\mathcal{R}$, $\{C_n^c\}_{n,c} \subset \mathbb{R}^{k \times k}$ is a set of rain kernels describing the repetitive local patterns of rain streaks, and $\{M_n\}_n \subset \mathbb{R}^{H \times W}$ are the corresponding rain maps representing the locations where the local patterns repeatedly appear. $N$ is the number of kernels and $\otimes$ is the 2-dimensional (2D) convolution operation. For conciseness, we rewrite (2) as $\mathcal{R} = \sum_{n=1}^{N} \mathcal{C}_n \otimes M_n$ throughout the paper, where $\mathcal{C}_n \in \mathbb{R}^{k \times k \times 3}$ is the tensor form of the $C_n^c$s and the convolution is performed between $\mathcal{C}_n$ and the matrix $M_n$ channel by channel. Then, we can rewrite the model (1) as:
+
+$$
+\mathcal {O} = \mathcal {B} + \sum_ {n = 1} ^ {N} \mathcal {C} _ {n} \otimes M _ {n}. \tag {3}
+$$
+
+It should be noted that the rain kernels can actually be viewed as a convolutional dictionary [16] for representing the repetitive and similar local patterns underlying rain streaks, and a small number of rain kernels can finely represent a wide range of rain shapes. They are common knowledge for representing different rain types across all rainy images, and thus can be learned from abundant training data by virtue of the strong learning capability of end-to-end deep learning (see more details in Sec. 4). Unlike the rain kernels, the rain maps must vary with the input rainy image since the locations of rain streaks are totally random. Therefore, for predicting the clean image from a test rainy input, the key issue is to estimate the $M_n$s and $\mathcal{B}$ from $\mathcal{O}$ with the rain kernels $\mathcal{C}_n$s fixed, and the corresponding optimization problem is:
+
+$$
+\min _ {\mathcal {M}, \mathcal {B}} \left\| \mathcal {O} - \mathcal {B} - \sum_ {n = 1} ^ {N} \mathcal {C} _ {n} \otimes M _ {n} \right\| _ {F} ^ {2} + \alpha g _ {1} (\mathcal {M}) + \beta g _ {2} (\mathcal {B}), \tag {4}
+$$
+
+where $\mathcal{M} \in \mathbb{R}^{H \times W \times N}$ is the tensor form of $M_{n}$ s. $\alpha$ and $\beta$ are trade-off parameters. $g_{1}(\cdot)$ and $g_{2}(\cdot)$ mean the regularizers to deliver the prior structures of $M_{n}$ and $\mathcal{B}$ , respectively.
+
+# 3.2. Optimization algorithm
+
+Since we want to build a deep unfolding network architecture that corresponds step by step to the solver of problem (4), it is critical to design an algorithm containing only simple computations that are easy to transform into network modules. Traditional solvers for RCD-based models usually contain complicated operations, e.g., the Fourier transform and inverse Fourier transform [16, 46, 25], which make such an exact transformation from algorithm to network structure hard to accomplish. We thus build a new algorithm that solves the problem by alternately updating $\mathcal{M}$ and $\mathcal{B}$ with the proximal gradient method [2]. In this manner, only simple computations are involved. The details are as follows:
+
+Updating $\mathcal{M}$ : The rain maps $\mathcal{M}$ can be updated by solving the quadratic approximation [2] of the problem (4) as:
+
+$$
+\min _ {\mathcal {M}} \frac {1}{2} \left\| \mathcal {M} - \left(\mathcal {M} ^ {(s - 1)} - \eta_ {1} \nabla f \left(\mathcal {M} ^ {(s - 1)}\right)\right) \right\| _ {F} ^ {2} + \alpha \eta_ {1} g _ {1} (\mathcal {M}), \tag {5}
+$$
+
+where $\mathcal{M}^{(s - 1)}$ is the updating result of the last iteration, $\eta_{1}$ is the stepsize parameter, and $f\left(\mathcal{M}^{(s - 1)}\right) = \left\| \mathcal{O} - \mathcal{B}^{(s - 1)} - \sum_{n = 1}^{N}\mathcal{C}_{n}\otimes M_{n}^{(s - 1)}\right\|_{F}^{2}$ . Corresponding to general regularization terms [7], the solution of Eq. (5) is:
+
+$$
+\mathcal {M} ^ {(s)} = \operatorname {p r o x} _ {\alpha \eta_ {1}} \left(\mathcal {M} ^ {(s - 1)} - \eta_ {1} \nabla f \left(\mathcal {M} ^ {(s - 1)}\right)\right). \tag {6}
+$$
+
+Moreover, by substituting
+
+$$
+\nabla f\left(\mathcal{M}^{(s-1)}\right) = \mathcal{C} \otimes^{T} \left(\sum_{n=1}^{N} \mathcal{C}_{n} \otimes M_{n}^{(s-1)} + \mathcal{B}^{(s-1)} - \mathcal{O}\right), \tag{7}
+$$
+
+where $\mathcal{C} \in \mathbb{R}^{k \times k \times N \times 3}$ is a 4-D tensor stacked by $\mathcal{C}_n\mathrm{s}$ , and $\otimes^T$ denotes the transposed convolution, we can obtain the updating formula for $\mathcal{M}$ as:
+
+$$
+\mathcal{M}^{(s)} = \operatorname{prox}_{\alpha\eta_{1}}\left(\mathcal{M}^{(s-1)} - \eta_{1}\, \mathcal{C} \otimes^{T}\left(\sum_{n=1}^{N} \mathcal{C}_{n} \otimes M_{n}^{(s-1)} + \mathcal{B}^{(s-1)} - \mathcal{O}\right)\right), \tag{8}
+$$
+
+where $\mathrm{prox}_{\alpha \eta_1}(\cdot)$ is the proximal operator dependent on the regularization term $g_{1}(\cdot)$ with respect to $\mathcal{M}$ . Instead of choosing a fixed regularizer in the model, the form of the proximal operator can be automatically learned from training data. More details will be presented in the next section.
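+
+In the RCDNet the proximal operator is learned, but as a concrete point of reference: if $g_1$ were chosen as the common $\ell_1$ sparsity penalty on the rain maps (our assumption here, not necessarily the paper's choice), the proximal operator in (6) would reduce to elementwise soft-thresholding [7], as sketched below.
+
+```python
+import torch
+
+def soft_threshold(x, tau):
+    """prox of tau * ||.||_1: elementwise soft-thresholding."""
+    return torch.sign(x) * torch.clamp(x.abs() - tau, min=0.0)
+
+# One proximal-gradient step on the rain maps, cf. Eq. (6), with the prox
+# instantiated by soft-thresholding under the l1 assumption above.
+M_prev = torch.randn(1, 32, 64, 64)
+grad_f = torch.randn_like(M_prev)   # placeholder for the gradient in Eq. (7)
+eta1, alpha = 0.1, 0.01
+M_new = soft_threshold(M_prev - eta1 * grad_f, alpha * eta1)
+```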
+
+Updating $\mathcal{B}$ : Similarly, the quadratic approximation of the problem (4) with respect to $\mathcal{B}$ is:
+
+$$
+\min_{\mathcal{B}} \frac{1}{2} \left\| \mathcal{B} - \left(\mathcal{B}^{(s-1)} - \eta_{2} \nabla h\left(\mathcal{B}^{(s-1)}\right)\right) \right\|_{F}^{2} + \beta\eta_{2}\, g_{2}(\mathcal{B}), \tag{9}
+$$
+
+where $\nabla h\left(\mathcal{B}^{(s - 1)}\right) = \sum_{n = 1}^{N}\mathcal{C}_{n}\otimes M_{n}^{(s)} + \mathcal{B}^{(s - 1)} - \mathcal{O}$ , and it is easy to deduce that the final updating rule for $\mathcal{B}$ is:
+
+$$
+\mathcal{B}^{(s)} = \operatorname{prox}_{\beta\eta_{2}}\left((1-\eta_{2})\, \mathcal{B}^{(s-1)} + \eta_{2}\left(\mathcal{O} - \sum_{n=1}^{N} \mathcal{C}_{n} \otimes M_{n}^{(s)}\right)\right). \tag{10}
+$$
+
+
+
+
+(a) Illustration of the entire RCDNet
+(b) The design of a single stage
+Figure 2. (a) The proposed network with $S$ stages. The network takes a rainy image $\mathcal{O}$ as input and outputs the learned rain kernel $\mathcal{C}$ , rain map $\mathcal{M}$ , and clean background image $\mathcal{B}$ . (b) Illustration of the network architecture at the $s^{\text{th}}$ stage. Each stage consists of M-net and B-net to accomplish the update of rain map $\mathcal{M}$ and background layer $\mathcal{B}$ , respectively. The images are better to be zoomed in on screen.
+
+where $\mathrm{prox}_{\beta \eta_2}(\cdot)$ is the proximal operator correlated to the regularization term $g_{2}(\cdot)$ with respect to $\mathcal{B}$ .
+
+Based on this iterative algorithm, we can then construct our deep unfolding network as follows.
+
+# 4. The rain convolutional dictionary network
+
+Inspired by recently proposed deep unfolding techniques for various tasks such as deconvolution [54], compressed sensing [50], and dehazing [48], we build a network structure for the single image rain removal task by unfolding each iterative step of the aforementioned algorithm into a corresponding network module. We especially focus on making all network modules correspond one-to-one to the algorithm's operators, for better interpretability.
+
+As shown in Fig. 2 (a), the proposed network consists of $S$ stages, corresponding to $S$ iterations of the algorithm for solving (4). Each stage achieves the sequential updates of $\mathcal{M}$ and $\mathcal{B}$ by M-net and B-net. As displayed in Fig. 2 (b), exactly corresponding to each iteration of the algorithm, in each stage of the network, M-net takes the observed rainy image $\mathcal{O}$ and the previous outputs $\mathcal{B}^{(s-1)}$ and $\mathcal{M}^{(s-1)}$ as inputs, and outputs an updated $\mathcal{M}^{(s)}$ , and then B-net takes $\mathcal{O}$ and $\mathcal{M}^{(s)}$ as inputs, and outputs an updated $\mathcal{B}^{(s)}$ .
+
+# 4.1. Network design
+
+The key issue in unrolling the algorithm is how to represent the two proximal operators involved in (8) and (10), while the other operations can be naturally performed with commonly used operators in standard networks [34]. In this work, we simply choose a ResNet [14] to construct the two proximal operators, as many other works did [47, 48]. We can then decompose the updating rules (8) for $\mathcal{M}$ and (10) for $\mathcal{B}$ into sub-steps and obtain the following
+
+procedures for the $s^{\mathrm{th}}$ stage of the RCDNet:
+
+$$
+\text{M-net}: \left\{ \begin{array}{l} \widehat{\mathcal{R}}^{(s)} = \mathcal{O} - \mathcal{B}^{(s-1)}, \\ \widetilde{\mathcal{R}}^{(s)} = \sum_{n=1}^{N} \mathcal{C}_{n} \otimes M_{n}^{(s-1)}, \\ \mathcal{E}^{(s)} = \eta_{1}\, \mathcal{C} \otimes^{T} \left(\widetilde{\mathcal{R}}^{(s)} - \widehat{\mathcal{R}}^{(s)}\right), \\ \mathcal{M}^{(s)} = \operatorname{proxNet}_{\theta_{m}^{(s)}} \left(\mathcal{M}^{(s-1)} - \mathcal{E}^{(s)}\right), \end{array} \right. \tag{11}
+$$
+
+$$
+\text{B-net}: \left\{ \begin{array}{l} \mathcal{R}^{(s)} = \sum_{n=1}^{N} \mathcal{C}_{n} \otimes M_{n}^{(s)}, \\ \widehat{\mathcal{B}}^{(s)} = \mathcal{O} - \mathcal{R}^{(s)}, \\ \mathcal{B}^{(s)} = \operatorname{proxNet}_{\theta_{b}^{(s)}} \left((1-\eta_{2})\, \mathcal{B}^{(s-1)} + \eta_{2}\, \widehat{\mathcal{B}}^{(s)}\right), \end{array} \right. \tag{12}
+$$
+
+where $\mathrm{proxNet}_{\theta_m^{(s)}}(\cdot)$ and $\mathrm{proxNet}_{\theta_b^{(s)}}(\cdot)$ are two ResNets consisting of several Resblocks with the parameters $\theta_m^{(s)}$ and $\theta_b^{(s)}$ at the $s^{\mathrm{th}}$ stage, respectively.
+
+We can then design the network architecture, as shown in Fig. 2, by transforming the operators in (11) and (12) step by step. All the parameters involved can be automatically learned from training data in an end-to-end manner, including $\{\theta_m^{(s)}, \theta_b^{(s)}\}_{s=1}^{S}$, the rain kernels $\mathcal{C}$, $\eta_{1}$, and $\eta_{2}$.
+
+It should be noted that both sub-networks are highly interpretable. As shown in Fig. 2 (b), the M-net extracts the residual information $\mathcal{E}^{(s)}$ of the rain maps. Specifically, $\widehat{\mathcal{R}}^{(s)}$ is the rain layer estimated with the previous background $\mathcal{B}^{(s-1)}$, and $\widetilde{\mathcal{R}}^{(s)}$ is the rain layer produced by the generative model (2) with the estimated $\mathcal{M}^{(s-1)}$. The M-net then computes the residual between the two rain layers obtained in these two ways and maps it to the residual information $\mathcal{E}^{(s)}$ of the rain maps via the transposed convolution with the rain kernels, which is used to update the rain maps. Next, the B-net recovers the background $\widehat{\mathcal{B}}^{(s)}$ estimated with the current rain kernels and rain maps $\mathcal{M}^{(s)}$, and fuses this estimated $\widehat{\mathcal{B}}^{(s)}$ with the previously estimated $\mathcal{B}^{(s-1)}$ using the
+
+weights $\eta_{2}$ and $(1 - \eta_{2})$ to get the updated background $\mathcal{B}^{(s)}$. Here, we set $\mathcal{M}^{(0)}$ to 0 and initialize $\mathcal{B}^{(0)}$ by a convolutional operator on $\mathcal{O}$.
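+
+For readers who prefer code, the sketch below (our own PyTorch illustration, not the released implementation) mirrors one stage of Eqs. (11)-(12). The `ProxNet` modules are simplified stand-ins for the learned proximal networks, all sizes are hypothetical, and the channel expansion described in the Remark below is omitted.
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class ProxNet(nn.Module):
+    """Simplified stand-in for proxNet: a small residual block."""
+    def __init__(self, channels):
+        super().__init__()
+        self.body = nn.Sequential(
+            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
+            nn.Conv2d(channels, channels, 3, padding=1))
+    def forward(self, x):
+        return x + self.body(x)
+
+N, k = 32, 9
+C = torch.randn(3, N, k, k) * 0.01            # rain kernels, shared across stages
+eta1, eta2 = 0.5, 0.5
+prox_m, prox_b = ProxNet(N), ProxNet(3)
+
+def rcd_stage(O, B_prev, M_prev):
+    # ----- M-net, Eq. (11) -----
+    R_hat = O - B_prev                             # rain estimated from the background
+    R_tilde = F.conv2d(M_prev, C, padding=k // 2)  # rain synthesized from the rain maps
+    E = eta1 * F.conv_transpose2d(R_tilde - R_hat, C, padding=k // 2)
+    M = prox_m(M_prev - E)
+    # ----- B-net, Eq. (12) -----
+    R = F.conv2d(M, C, padding=k // 2)
+    B_hat = O - R
+    B = prox_b((1 - eta2) * B_prev + eta2 * B_hat)
+    return B, M
+
+O = torch.rand(1, 3, 64, 64)
+B, M = rcd_stage(O, B_prev=O.clone(), M_prev=torch.zeros(1, N, 64, 64))
+```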
+
+Remark: From Fig. 2, the input tensor of $\mathrm{proxNet}_{\theta_{b}^{(s)}}(\cdot)$ has the same size $H\times W\times 3$ as the to-be-estimated $\mathcal{B}$. This is not beneficial for learning $\mathcal{B}$, since most of the previous updating information would be compressed into only a few channels. To better keep and deliver image features, in our experiments we simply expand the input tensor along the $3^{\mathrm{rd}}$ mode to provide more channels (see more in the supplementary file).
+
+# 4.2. Network training
+
+Training loss. For simplicity, we adopt the mean square error (MSE) [21] for the learned background and rain layer at every stage as the training objective function:
+
+$$
+L = \sum_ {s = 0} ^ {S} \lambda_ {s} \left\| \mathcal {B} ^ {(s)} - \mathcal {B} \right\| _ {F} ^ {2} + \sum_ {s = 1} ^ {S} \gamma_ {s} \left\| \mathcal {O} - \mathcal {B} - \mathcal {R} ^ {(s)} \right\| _ {F} ^ {2}, \tag {13}
+$$
+
+where $\mathcal{B}^{(s)}$ and $\mathcal{R}^{(s)}$ denote the derained result and the extracted rain layer, respectively, as expressed in (12) at the $s^{\mathrm{th}}$ stage $(s = 0,1,\dots,S)$, and $\lambda_{s}$ and $\gamma_{s}$ are tradeoff parameters.
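+
+A minimal sketch of this stage-wise objective (our own illustration; the weight lists and variable names are hypothetical) is:
+
+```python
+import torch.nn.functional as F
+
+def rcdnet_loss(B_stages, R_stages, O, B_gt, lambdas, gammas):
+    """Eq. (13): squared error on the background at stages s = 0..S plus
+    squared error of the extracted rain layer against O - B_gt at s = 1..S."""
+    loss = sum(l * F.mse_loss(Bs, B_gt, reduction='sum')
+               for l, Bs in zip(lambdas, B_stages))
+    loss += sum(g * F.mse_loss(Rs, O - B_gt, reduction='sum')
+                for g, Rs in zip(gammas, R_stages))
+    return loss
+```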
+
+Implementation details. We implement our network on an NVIDIA GeForce GTX 1080Ti GPU. We adopt the Adam optimizer [24] with a batch size of 16 and a patch size of $64 \times 64$. The initial learning rate is $1 \times 10^{-3}$ and is divided by 5 every 25 epochs. The total number of epochs is 100.
+
+# 5. Experimental results
+
+We first conduct ablation study and model visualization to verify the underlying mechanism of the proposed network, and then present experiments on synthesized benchmark datasets and real datasets for performance evaluation.
+
+# 5.1. Ablation study
+
+Dataset and performance metrics. In this section, we use Rain100L for all the ablation studies. This synthesized dataset consists of 200 rainy/clean image pairs for training and 100 pairs for testing [49]. Two performance metrics are employed: peak signal-to-noise ratio (PSNR) [17] and structural similarity (SSIM) [43]. Note that since the human visual system is sensitive to the Y channel of a color image in YCbCr space, we compute PSNR and SSIM on this luminance channel.
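+
+For reference, PSNR on the luminance channel can be computed as in the sketch below (our own illustration); the RGB-to-Y conversion uses the standard BT.601 luma weights, and implementations may differ in the exact YCbCr convention. SSIM would be applied to the same Y channel.
+
+```python
+import numpy as np
+
+def rgb_to_y(img):
+    """Luminance (Y) of an RGB image in [0, 255], using BT.601 weights."""
+    r, g, b = img[..., 0], img[..., 1], img[..., 2]
+    return 0.299 * r + 0.587 * g + 0.114 * b
+
+def psnr_y(pred, gt, peak=255.0):
+    """PSNR between the Y channels of a derained image and its groundtruth."""
+    mse = np.mean((rgb_to_y(pred.astype(np.float64)) -
+                   rgb_to_y(gt.astype(np.float64))) ** 2)
+    return 10.0 * np.log10(peak ** 2 / mse)
+```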
+
+Table 1 reports the effect of the stage number $S$ on the deraining performance of our network. Here, $S = 0$ means that the initialization $\mathcal{B}^{(0)}$ is directly regarded as the recovery result.
+
+Table 1. Effect of stage number $S$ on the performance of RCDNet.
+
+| Stage No. | S=0 | S=2 | S=5 | S=8 | S=11 | S=14 | S=17 | S=20 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| PSNR | 35.93 | 38.46 | 39.35 | 39.60 | 39.81 | 39.90 | 40.00 | 39.91 |
+| SSIM | 0.9689 | 0.9813 | 0.9842 | 0.9850 | 0.9855 | 0.9858 | 0.9860 | 0.9858 |
+
+
+Figure 3. Visualization of the recovery background $\mathcal{B}^{(s)}$ , $\widehat{\mathcal{B}}^{(s)}$ as expressed in Eq. (12), and the rain layer $\mathcal{R}^{(s)}$ at different stages. The stage number $S$ is 17. PSNR/SSIM for reference. The images are better to be zoomed in on screen.
+
+
+Figure 4. At the final stage $s = 17$ , the extracted rain layer, rain kernels $\mathcal{C}_n$ , and rain maps $M_n$ for the input $\mathcal{O}$ in Fig. 3. The lower left is the rain kernels $\mathcal{C}$ learned from Rain100L. The images are better to be zoomed in on screen.
+
+Taking $S = 0$ as a baseline, it is seen that with only 2 stages our method already achieves significant rain removal performance, which validates the essential role of the proposed M-net and B-net. We also observe that at $S = 20$ the deraining performance is slightly lower than at $S = 17$, since a larger $S$ makes gradient propagation more difficult. Based on this observation, we set $S$ to 17 throughout all our experiments. More ablation results and discussions are provided in the supplementary material.
+
+# 5.2. Model verification
+
+We then show how the interpretability of this RCDNet facilitates an easy analysis for the working mechanism inside the network modules.
+
+Fig. 3 presents the extracted background layer $\mathcal{B}^{(s)}$ ($1^{\text{st}}$ row), $\widehat{\mathcal{B}}^{(s)}$ ($2^{\text{nd}}$ row), which reflects the role of the M-net in helping restore the clean background, and the rain layer $\mathcal{R}^{(s)}$ ($3^{\text{rd}}$ row) at different stages. We can see that as $s$ increases, $\mathcal{R}^{(s)}$ covers more rain streaks and fewer image details, and $\widehat{\mathcal{B}}^{(s)}$ and $\mathcal{B}^{(s)}$ are also gradually ameliorated.
+
+
+Figure 5. $1^{\mathrm{st}}$ column: input rainy image (upper) and groundtruth (lower). $2^{\mathrm{nd}} - 12^{\mathrm{th}}$ columns: derained results (upper) and extracted rain layers (lower) by 11 competing methods. PSNR/SSIM for reference. Bold indicates top $1^{\mathrm{st}}$ rank. Per-panel PSNR/SSIM: Input/Groundtruth 27.37/0.8154, DSC 29.34/0.8479, GMM 32.38/0.9306, JCAS 31.45/0.9151, Clear 31.59/0.9380, DDN 37.31/0.9704, RESCAN 41.26/0.9887, PReNet 37.27/0.9793, SPANet 35.67/0.9700, JORDER_E 41.11/0.9894, SIRR 36.99/0.9692, **RCDNet 42.15/0.9912**.
+
+These improvements should be attributed to the proper guidance of the RCD prior for rain streaks and to the mutual promotion of the M-net and B-net, which steer the RCDNet in the right direction.
+
+Fig. 4 presents the learned rain kernels and the rain maps for the input $\mathcal{O}$ in Fig. 3. Clearly, the RCDNet finely extracts proper rain layers explicitly based on the RCD model. This not only verifies the rationality of our method but also highlights the peculiarity of our proposal. On one hand, we utilize an M-net to learn sparse rain maps instead of directly learning rain streaks, which makes the learning process easier. On the other hand, we exploit training data to automatically learn rain kernels representing the general repetitive local patterns of rain with diverse shapes. This facilitates their general applicability to more real-world rainy images.
+
+Table 2. PSNR and SSIM comparisons on four benchmark datasets. Bold and bold italic indicate top $1^{\text{st}}$ and $2^{\text{nd}}$ rank, respectively.
+
+| Method | Rain100L PSNR | Rain100L SSIM | Rain100H PSNR | Rain100H SSIM | Rain1400 PSNR | Rain1400 SSIM | Rain12 PSNR | Rain12 SSIM |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Input | 26.90 | 0.8384 | 13.56 | 0.3709 | 25.24 | 0.8097 | 30.14 | 0.8555 |
+| DSC [51] | 27.34 | 0.8494 | 13.77 | 0.3199 | 27.88 | 0.8394 | 30.07 | 0.8664 |
+| GMM [28] | 29.05 | 0.8717 | 15.23 | 0.4498 | 27.78 | 0.8585 | 32.14 | 0.9145 |
+| JCAS [13] | 28.54 | 0.8524 | 14.62 | 0.4510 | 26.20 | 0.8471 | 33.10 | 0.9305 |
+| Clear [8] | 30.24 | 0.9344 | 15.33 | 0.7421 | 26.21 | 0.8951 | 31.24 | 0.9353 |
+| DDN [9] | 32.38 | 0.9258 | 22.85 | 0.7250 | 28.45 | 0.8888 | 34.04 | 0.9330 |
+| RESCAN [27] | 38.52 | 0.9812 | 29.62 | 0.8720 | 32.03 | 0.9314 | 36.43 | 0.9519 |
+| PReNet [35] | 37.45 | 0.9790 | 30.11 | 0.9053 | 32.55 | 0.9459 | 36.66 | 0.9610 |
+| SPANet [41] | 35.33 | 0.9694 | 25.11 | 0.8332 | 29.85 | 0.9148 | 35.85 | 0.9572 |
+| JORDER_E [49] | 38.59 | 0.9834 | 30.50 | 0.8967 | 32.00 | 0.9347 | 36.69 | 0.9621 |
+| SIRR [44] | 32.37 | 0.9258 | 22.47 | 0.7164 | 28.44 | 0.8893 | 34.02 | 0.9347 |
+| RCDNet | 40.00 | 0.9860 | 31.28 | 0.9093 | 33.04 | 0.9472 | 37.71 | 0.9649 |
+
+# 5.3. Experiments on synthetic data
+
+Comparison methods and datasets. We then compare our network with current SOTA single image derainers, including the model-based DSC [51], GMM [28], and JCAS [13], and the DL-based Clear [8], DDN [9], RESCAN [27], PReNet [35], SPANet [41], JORDER_E [49], and SIRR [44], on four benchmark datasets: Rain100L, Rain100H [49], Rain1400 [9], and Rain12 [28].
+
+Fig. 5 illustrates the deraining performance of all competing methods on a rainy image from Rain100L. As shown, the deraining result of RCDNet is better than those of the other methods in sufficiently removing the rain streaks and finely recovering the image textures. Moreover, the rain layer extracted by RCDNet contains fewer unexpected background details compared with the other competing methods. Our RCDNet thus achieves the best PSNR and SSIM.
+
+Table 2 reports the quantitative results of all competing methods. It is seen that our RCDNet attains the best deraining performance among all methods on every dataset. This substantiates the flexibility and generality of our method across the diverse rain types contained in these datasets.
+
+# 5.4. Experiments on real data
+
+We then analyze the performance of all methods on two real datasets from [41]: the first one (called SPA-Data) contains 638492 rainy/clean image pairs for training and 1000 testing ones, and the second one (called Internet-Data) includes 147 rainy images without groundtruth.
+
+Table 3 and Fig. 6 compare the derained results of all competing methods on SPA-Data visually and quantitatively. It is easy to see that even for such complex rain patterns, the proposed RCDNet still achieves evidently superior performance.
+
+
+Figure 6. Rain removal performance comparisons on a rainy image from SPA-Data (groundtruth, input, and the derained results of DSC, GMM, JCAS, Clear, DDN, RESCAN, PReNet, SPANet, JORDER_E, SIRR, and RCDNet, with PSNR/SSIM for reference). The images are better to be zoomed in on screen.
+
+Figure 7. Derained results for two samples with various rain patterns from Internet-Data (input and the results of DSC, GMM, JCAS, Clear, DDN, RESCAN, PReNet, SPANet, JORDER_E, SIRR, and RCDNet). The images are better to be zoomed in on screen.
+
+Table 3. PSNR and SSIM comparisons on SPA-Data [41].
+
+| Methods | Input | DSC | GMM | JCAS | Clear | DDN | RESCAN | PReNet | SPANet | JORDER_E | SIRR | RCDNet |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| PSNR | 34.15 | 34.95 | 34.30 | 34.95 | 34.39 | 36.16 | 38.11 | 40.16 | 40.24 | 40.78 | 35.31 | 41.47 |
+| SSIM | 0.9269 | 0.9416 | 0.9428 | 0.9453 | 0.9509 | 0.9463 | 0.9707 | 0.9816 | 0.9811 | 0.9811 | 0.9411 | 0.9834 |
+
+Especially, similar to its superiority in the synthetic experiments, our method removes rain streaks and recovers image details better than the other competing ones.
+
+Further, we select two hard real samples with different rain densities to evaluate the generalization ability of all competing methods. From Fig. 7, we find that traditional model-based methods tend to leave obvious rain streaks. Although the DL-based comparison methods remove the apparent rain streaks, they still leave distinct rain marks or blur some image textures. Comparatively, our RCDNet better preserves background details while removing more rain streaks. This shows its good generalization capability to unseen complex rain types.
+
+# 6. Conclusion
+
+In this paper, we have explored the intrinsic prior structure of rain streaks, which can be explicitly expressed as a convolutional dictionary learning model, and proposed a novel interpretable network architecture for single image deraining. Each module in the network corresponds one-to-one to an implementation operator of the algorithm designed for solving the model, and thus the network is almost a "white-box" with easily visualized interpretations for all its modules. Comprehensive experiments on synthetic and real rainy images validate that such interpretability benefits the proposed network, and especially facilitates the analysis of what happens inside the network and why it works in the testing prediction process. The elements extracted through the end-to-end learning of the network, like the rain kernels, are also potentially useful for related tasks on rainy images.
+
+Acknowledgment. This research was supported by the China NSFC projects under contracts 11690011, 61721002, and U1811461, and the MoE-CMCC "Artificial Intelligence" Project No. MCM20190701.
+
+# References
+
+[1] Peter C Barnum, Srinivasa Narasimhan, and Takeo Kanade. Analysis of rain and snow in frequency space. International journal of computer vision, 86(2-3):256, 2010. 3
+[2] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM journal on imaging sciences, 2(1):183-202, 2009. 2, 4
+[3] Jérémie Bossu, Nicolas Hautière, and Jean-Philippe Tarel. Rain or snow detection in image sequences through use of a histogram of orientation of streaks. International journal of computer vision, 93(3):348–367, 2011. 3
+[4] Yi Lei Chen and Chiou Ting Hsu. A generalized low-rank appearance model for spatio-temporally correlated rain streaks. In Proceedings of the IEEE International Conference on Computer Vision, pages 1968-1975, 2013. 3
+[5] Dorin Comaniciu, Visvanathan Ramesh, and Peter Meer. Kernel-based object tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(5):564-575, 2003. 1
+[6] Xinghao Ding, Liqin Chen, Xianhui Zheng, Huang Yue, and Delu Zeng. Single image rain and snow removal via guided 10 smoothing filter. Multimedia Tools and Applications, 75(5):2697-2712, 2016. 3
+[7] David L Donoho. De-noising by soft-thresholding. IEEE transactions on information theory, 41(3):613-627, 1995. 4
+[8] Xueyang Fu, Jiabin Huang, Xinghao Ding, Yinghao Liao, and John Paisley. Clearing the skies: A deep network architecture for single-image rain removal. IEEE Transactions on Image Processing, 26(6):2944-2956, 2017. 2, 3, 7
+[9] Xueyang Fu, Jiabin Huang, Delu Zeng, Huang Yue, Xinghao Ding, and John Paisley. Removing rain from single images via a deep detail network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3855-3863, 2017. 2, 3, 7
+[10] Kshitiz Garg and S. K. Nayar. Detection and removal of rain from videos. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 1, pages I-I, 2004. 3
+[11] Kshitiz Garg and Shree K Nayar. When does a camera see rain? In Tenth IEEE International Conference on Computer Vision, volume 2, pages 1067-1074, 2005. 3
+[12] Kshitiz Garg and Shree K Nayar. Vision and rain. International Journal of Computer Vision, 75(1):3-27, 2007. 3
+[13] Shuhang Gu, Deyu Meng, Wangmeng Zuo, and Lei Zhang. Joint convolutional analysis and synthesis sparse representation for single image layer separation. In Proceedings of the IEEE International Conference on Computer Vision, pages 1708-1716, 2017. 2, 3, 4, 7
+[14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 5
+[15] He Zhang and Vishal M. Patel. Convolutional sparse and low-rank coding-based rain streak removal. In IEEE Winter Conference on Applications of Computer Vision, pages 1259-1267, 2017. 2, 3, 4
+
+[16] Furong Huang and Animashree Anandkumar. Convolutional dictionary learning through tensor factorization. Computer Science, pages 1-30, 2015. 2, 4
+[17] Q. Huynh-Thu and M. Ghanbari. Scope of validity of psnr in image/video quality assessment. Electronics Letters, 44(13):800-801, 2008. 6
+[18] Tai Xiang Jiang, Ting Zhu Huang, Xi Le Zhao, Liang Jian Deng, and Yao Wang. A novel tensor-based video rain streaks removal approach via utilizing discriminatively intrinsic priors. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4057-4066, 2017. 3
+[19] Chen Jie, Cheen Hau Tan, Junhui Hou, Lap Pui Chau, and Li He. Robust video content alignment and compensation for rain removal in a cnn framework. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6286-6295, 2018. 3
+[20] Kim Jin-Hwan, Sim Jae-Young, and Kim Chang-Su. Video deraining and desnowing using temporal correlation and low-rank matrix completion. IEEE Transactions on Image Processing, 24(9):2658-2670, 2015. 3
+[21] Xu Jing, Zhao Wei, Liu Peng, and Xianglong Tang. Removing rain and snow in a single image using guided filter. In IEEE International Conference on Computer Science and Automation Engineering, volume 2, pages 304-307, 2012. 3, 6
+[22] L. W. Kang, C. W. Lin, and Y. H. Fu. Automatic single-image-based rain streaks removal via image decomposition. IEEE Transactions on Image Processing, 21(4):1742-1755, 2012. 3
+[23] Jin Hwan Kim, Chul Lee, Jae Young Sim, and Chang Su Kim. Single-image deraining using an adaptive nonlocal means filter. In IEEE International Conference on Image Processing, pages 914-917, 2014. 3
+[24] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. Computer Science, 2014. 6
+[25] Minghan Li, Qi Xie, Qian Zhao, Wei Wei, Shuhang Gu, Jing Tao, and Deyu Meng. Video rain streak removal by multiscale convolutional sparse coding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6644-6653, 2018. 2, 3, 4
+[26] Siyuan Li, Iago Bruno Araujo, Wenqi Ren, Zhangyang Wang, Eric K Tokuda, Roberto Hirata Junior, Roberto Cesar-Junior, Jiawan Zhang, Xiaojie Guo, and Xiaochun Cao. Single image deraining: A comprehensive benchmark analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3838-3847, 2019. 1
+[27] Xia Li, Jianlong Wu, Zhouchen Lin, Hong Liu, and Hongbin Zha. Recurrent squeeze-and-excitation context aggregation net for single image deraining. In Proceedings of the European Conference on Computer Vision, pages 254-269, 2018. 2, 3, 7
+[28] Yu Li. Rain streak removal using layer priors. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2736-2744, 2016. 2, 3, 7
+[29] Jiaying Liu, Wenhan Yang, Shuai Yang, and Zongming Guo. D3r-net: Dynamic routing residue recurrent network for
+
+video rain removal. IEEE Transactions on Image Processing, 28(2):699-712, 2018. 3
+[30] Jiaying Liu, Wenhan Yang, Shuai Yang, and Zongming Guo. Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3233-3242, 2018. 3
+[31] O. Ludwig, David Delgado, Valter Goncalves, and Urbano Nunes. Trainable classifier-fusion schemes: an application to pedestrian detection. In International IEEE Conference on Intelligent Transportation Systems, pages 1-6, 2009. 1
+[32] Pan Mu, Jian Chen, Risheng Liu, Xin Fan, and Zhongxuan Luo. Learning bilevel layer priors for single image rain streaks removal. IEEE Signal Processing Letters, 26(2):307-311, 2019. 3
+[33] Wan-Joo Park and Kwae-Hi Lee. Rain removal using kalman filter in video. In International Conference on Smart Manufacturing Application, pages 494-497, 2008. 3
+[34] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017. 5
+[35] Dongwei Ren, Wangmeng Zuo, Qinghua Hu, Pengfei Zhu, and Deyu Meng. Progressive image deraining networks: a better and simpler baseline. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3937-3946, 2019. 2, 3, 7
+[36] Weihong Ren, Jiandong Tian, Han Zhi, Antoni Chan, and Yandong Tang. Video desnowing and deraining based on matrix decomposition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4210-4219, 2017. 3
+[37] M. S. Shehata, Jun Cai, W. M. Badawy, T. W. Burr, M. S. Pervez, R. J. Johannesson, and Ahmad Radmanesh. Videobased automatic incident detection for smart roads: The outdoor environmental challenges regarding false alarms. IEEE Transactions on Intelligent Transportation Systems, 9(2):349-360, 2008. 1
+[38] Shao-Hua Sun, Shang-Pu Fan, and Yu-Chiang Frank Wang. Exploiting image structural similarity for single image rain removal. In IEEE International Conference on Image Processing (ICIP), pages 4482-4486, 2014. 3
+[39] Hong Wang, Yichen Wu, Minghan Li, Qian Zhao, and Deyu Meng. A survey on rain removal from video and single image. arXiv:1909.08326, 2019. 1
+[40] Hong Wang, Qi Xie, Yichen Wu, Qian Zhao, and Deyu Meng. Single image rain streaks removal: a review and an exploration. International Journal of Machine Learning and Cybernetics, pages 1-20, 2020. 2
+[41] Tianyu Wang, Xin Yang, Ke Xu, Shaozhe Chen, Qiang Zhang, and Rynson WH Lau. Spatial attentive single-image deraining with a high quality real rain dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 12270-12279, 2019. 2, 3, 7, 8
+[42] Y. Wang, S. Liu, C. Chen, and B. Zeng. A hierarchical approach for rain or snow removing in a single color image. IEEE Transactions on Image Processing, 26(8):3936-3950, 2017. 3
+
+[43] Zhou Wang, Alan Conrad Bovik, Hamid Rahim Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Processing, 13(4):600-612, 2004. 6
+[44] Wei Wei, Deyu Meng, Qian Zhao, Zongben Xu, and Ying Wu. Semi-supervised transfer learning for image rain removal. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3877-3886, 2019. 3, 7
+[45] Wei Wei, Lixuan Yi, Qi Xie, Qian Zhao, Deyu Meng, and Zongben Xu. Should we encode rain streaks in video as deterministic or stochastic? In Proceedings of the IEEE International Conference on Computer Vision, pages 2516-2525, 2017. 3
+[46] Brendt Wohlberg. Efficient convolutional sparse coding. In IEEE International Conference on Acoustics, Speech and Signal Processing, 2014. 4
+[47] Qi Xie, Minghao Zhou, Qian Zhao, Deyu Meng, Wangmeng Zuo, and Zongben Xu. Multispectral and hyperspectral image fusion by ms/hs fusion net. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1585-1594, 2019. 5
+[48] Dong Yang and Jian Sun. Proximal dehaze-net: A prior learning-based deep network for single image dehazing. In Proceedings of the European Conference on Computer Vision (ECCV), pages 702-717, 2018. 5
+[49] Wenhan Yang, Robby T. Tan, Jiashi Feng, Jiaying Liu, Shuicheng Yan, and Zongming Guo. Joint rain detection and removal from a single image with contextualized deep networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, PP(99):1-1, 2019. 3, 6, 7
+[50] Yan Yang, Jian Sun, Huibin Li, and Zongben Xu. Admmnet: A deep learning approach for compressive sensing mri. arXiv preprint arXiv:1705.06869, 2017. 5
+[51] Yu Luo, Yong Xu, and Hui Ji. Removing rain from a single image via discriminative sparse coding. In Proceedings of the IEEE International Conference on Computer Vision, pages 3397-3405, 2015. 2, 3, 7
+[52] He Zhang and Vishal M Patel. Density-aware single image de-raining using a multi-stream dense network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 695-704, 2018. 2, 3
+[53] He Zhang, Vishwanath Sindagi, and Vishal M Patel. Image de-raining using a conditional generative adversarial network. IEEE Transactions on Circuits and Systems for Video Technology, 2019. 2, 3
+[54] Jiawei Zhang, Jinshan Pan, Wei-Sheng Lai, Rynson WH Lau, and Ming-Hsuan Yang. Learning fully convolutional networks for iterative non-blind deconvolution. 2017. 5
+[55] Xiaopeng Zhang, Hao Li, Yingyi Qi, Wee Kheng Leow, and Teck Khim Ng. Rain removal in video by combining temporal and chromatic properties. In IEEE International Conference on Multimedia and Expo, pages 461-464, 2006. 3
+[56] Lei Zhu, Chi Wing Fu, Dani Lischinski, and Pheng Ann Heng. Joint bi-layer optimization for single-image rain streak removal. In Proceedings of the IEEE international conference on computer vision, pages 2526-2534, 2017. 3
\ No newline at end of file
diff --git a/amodeldrivendeepneuralnetworkforsingleimagerainremoval/images.zip b/amodeldrivendeepneuralnetworkforsingleimagerainremoval/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..2639aadde680340d806db5733c2c3b01b9bd49f6
--- /dev/null
+++ b/amodeldrivendeepneuralnetworkforsingleimagerainremoval/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a7a2f09a62285a4d95a8a440b609cc5a5486fd168211ac12afce9e830b175c94
+size 980520
diff --git a/amodeldrivendeepneuralnetworkforsingleimagerainremoval/layout.json b/amodeldrivendeepneuralnetworkforsingleimagerainremoval/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..e5d7638e28fc4cc148bc25909f1ce988a41aa493
--- /dev/null
+++ b/amodeldrivendeepneuralnetworkforsingleimagerainremoval/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e02f256d43501342e32f30c55ed1c60383e258c67c0b28e669076c4402539dad
+size 620233
diff --git a/amorphablefacealbedomodel/f5d9ee84-2dfd-469e-bebc-dd8bdf5a442b_content_list.json b/amorphablefacealbedomodel/f5d9ee84-2dfd-469e-bebc-dd8bdf5a442b_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..f1395d8f2f52df13349b3b6353b309de4bd98844
--- /dev/null
+++ b/amorphablefacealbedomodel/f5d9ee84-2dfd-469e-bebc-dd8bdf5a442b_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ffadafc4e056ce5c8877f7d9230895f4a54de3affafb46978e1460f984706d6f
+size 87668
diff --git a/amorphablefacealbedomodel/f5d9ee84-2dfd-469e-bebc-dd8bdf5a442b_model.json b/amorphablefacealbedomodel/f5d9ee84-2dfd-469e-bebc-dd8bdf5a442b_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..cda06289c63369d7983ce43c8557390d56d36da1
--- /dev/null
+++ b/amorphablefacealbedomodel/f5d9ee84-2dfd-469e-bebc-dd8bdf5a442b_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:274b2c1fff00cd5f6636faaaf2c27152f024f294535d4efe7663ac78a9b3ee01
+size 98514
diff --git a/amorphablefacealbedomodel/f5d9ee84-2dfd-469e-bebc-dd8bdf5a442b_origin.pdf b/amorphablefacealbedomodel/f5d9ee84-2dfd-469e-bebc-dd8bdf5a442b_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..8aef20c00d5652b9ead81cca711caa8a3c6b301e
--- /dev/null
+++ b/amorphablefacealbedomodel/f5d9ee84-2dfd-469e-bebc-dd8bdf5a442b_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6acbe2881a1ef841c012e6b4acb7338d30d0fc5a9918abfd7e0af548ffb4acbc
+size 5841486
diff --git a/amorphablefacealbedomodel/full.md b/amorphablefacealbedomodel/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..c7fbfbec9002f6474c2e6def9fccb87e2014aae4
--- /dev/null
+++ b/amorphablefacealbedomodel/full.md
@@ -0,0 +1,397 @@
+# A Morphable Face Albedo Model
+
+William A. P. Smith¹ Alassane Seck²,³ Hannah Dee³ Bernard Tiddeman³ Joshua B. Tenenbaum⁴ Bernhard Egger⁴
+
+1University of York, UK 2ARM Ltd, UK 3Aberystwyth University, UK 4MIT - BCS, CSAIL & CBMM, USA
+william.smith@york.ac.uk, alou.kces@live.co.uk, {hmd1,bpt}@aber.ac.uk, {jbt,egger}@mit.edu
+
+
+Figure 1: First 3 principal components of our statistical diffuse (left) and specular (middle) albedo models. Both are visualised in linear sRGB space. Right: rendering of the combined model under frontal illumination in nonlinear sRGB space.
+
+# Abstract
+
+In this paper, we bring together two divergent strands of research: photometric face capture and statistical 3D face appearance modelling. We propose a novel lightstage capture and processing pipeline for acquiring ear-to-ear, truly intrinsic diffuse and specular albedo maps that fully factor out the effects of illumination, camera and geometry. Using this pipeline, we capture a dataset of 50 scans and combine them with the only existing publicly available albedo dataset (3DRFE) of 23 scans. This allows us to build the first morphable face albedo model. We believe this is the first statistical analysis of the variability of facial specular albedo maps. This model can be used as a plug in replacement for the texture model of the Basel Face Model and we make our new albedo model publicly available. We ensure careful spectral calibration such that our model is built in a linear sRGB space, suitable for inverse rendering of images taken by typical cameras. We demonstrate our model in a state of the art analysis-by-synthesis 3DMM fitting pipeline, are the first to integrate specular map estimation and outperform the Basel Face Model in albedo reconstruction.
+
+# 1. Introduction
+
+3D Morphable Models (3DMMs) were proposed over 20 years ago [4] as a dense statistical model of 3D face geometry and texture. They can be used as a generative model of 2D face appearance by combining shape and texture parameters with illumination and camera parameters that are provided as input to a graphics renderer.
+
+Using such a model in an analysis-by-synthesis framework allows a principled disentangling of the contributing factors of face appearance in an image. More recently, 3DMMs and differentiable renderers have been used as model-based decoders to train convolutional neural networks (CNNs) to regress 3DMM parameters directly from a single image [29].
+
+The ability of these methods to disentangle intrinsic (geometry and reflectance) from extrinsic (illumination and camera) parameters relies upon the 3DMM capturing only intrinsic parameters, with geometry and reflectance modelled independently. 3DMMs are usually built from captured data [4, 22, 5, 7]. This necessitates a face capture setup in which not only 3D geometry but also intrinsic face reflectance properties, e.g. diffuse albedo, can be measured. A recent large scale survey of 3DMMs [10] identified a lack of intrinsic face appearance datasets as a critical limiting factor in advancing the state-of-the-art. Existing 3DMMs are built using ill-defined "textures" that bake in shading, shadowing, specularities, light source colour, camera spectral sensitivity and colour transformations. Capturing truly intrinsic face appearance parameters is a well studied problem in graphics but this work has been done largely independently of the computer vision and 3DMM communities.
+
+In this paper we present a novel capture setup and processing pipeline for measuring ear-to-ear diffuse and specular albedo maps. We use a lightstage to capture multiple photometric views of a face. We compute geometry using uncalibrated multiview stereo, warp a template to the raw scanned meshes and then stitch seamless per-vertex diffuse and specular albedo maps.
+
+We capture our own dataset of 50 faces, combine this with the 3DRFE dataset [27] and build a statistical albedo model that can be used as a drop-in replacement for existing texture models. We make this model publicly available. To demonstrate the benefits of our model, we use it with a state-of-the-art fitting algorithm and show improvements over existing texture models.
+
+# 1.1. Related work
+
+3D Morphable Face Models The original 3DMM of Blanz and Vetter [4] was built using 200 scans captured in a Cyberware laser scanner which also provides a colour texture map. Ten years later the first publicly available 3DMM, the Basel Face Model (BFM) [22], was released. Again, this was built from 200 scans, this time captured using a structured light system from ABW-3D. Here, texture is captured by three cameras synchronised with three flashes with diffusers, providing relatively consistent illumination. The later BFM 2017 [14] used largely the same data from the same scanning setup. More recently, attempts have been made to scale up training data to better capture variability across the population. Both the large scale face model (LSFM) [5] (10k subjects) and Liverpool-York Head Model (LYHM) [7] (1.2k subjects) use shape and textures captured by a 3DMD multiview structured light scanner under relatively uncontrolled illumination conditions. Ploumpis et al. [24] show how to combine the LSFM and LYHM but do so only for shape, not for texture. All of these previous models use texture maps that are corrupted by shading effects related to geometry and the illumination environment, mix specular and diffuse reflectance and are specific to the camera with which they were captured. Gecer et al. [12] use a Generative Adversarial Network (GAN) to learn a nonlinear texture model from high resolution scanned textures. Although this enables them to capture high frequency details usually lost by linear models, it does not resolve the issues with the source textures.
+
+Recently, there have been attempts to learn 3DMMs directly from in-the-wild data simultaneously with learning to fit the model to images [30, 28]. The advantage of such approaches is that they can exploit the vast resource of available 2D face images. However, the separation of illumination and albedo is ambiguous while non-Lambertian effects are usually neglected and so these methods do not currently provide intrinsic appearance models of a quality comparable with those built from captured textures.
+
+Face Capture Existing methods for face capture fall broadly into two categories: photometric and geometric. Geometric methods rely on finding correspondences between features in multiview images, enabling the triangulation of 3D position.
+
+These methods are relatively robust, can operate in uncontrolled illumination conditions, provide instantaneous capture and can provide high quality shape estimates [3]. They are sufficiently mature that commercial systems are widely available, for example using structured light stereo, multiview stereo or laser scanning. However, the texture maps captured by these systems are nothing other than an image of the face under a particular set of environmental conditions and hence are useless for relighting. Worse, since appearance is view-dependent (the position of specularities changes with viewing direction), no single appearance can explain the set of multiview images.
+
+On the other hand, photometric analysis allows estimation of additional reflectance properties such as diffuse and specular albedo [21], surface roughness [15] and index of refraction [16] through analysis of the intensity and polarisation state of reflected light. This separation of appearance into geometry and reflectance is essential for the construction of 3DMMs that truly disentangle the different factors of appearance. The required setups are usually much more restrictive, complex and not yet widely commercially available. Hence, the availability of datasets has been extremely limited, particularly of the scale required for learning 3DMMs. There is a single publicly available dataset of scans, the 3D Relightable Facial Expression (3DRFE) database [27], captured using the setup of Ma et al. [21].
+
+Ma et al. [21] were the first to propose the use of polarised spherical gradient illumination in a lightstage. This serves two purposes. On the one hand, spherical gradient illumination provides a means to perform photometric stereo that avoids problems caused by binary shadowing in point source photometric stereo. On the other hand, the use of polarising filters on the lights and camera enables separation of diffuse and specular reflectance which, for the constant illumination case, allows measurement of intrinsic albedo. This was extended to realtime performance capture by Wilson et al. [31] who showed how a certain sequence of illumination conditions allowed for temporal upsampling of the photometric shape estimates. The main drawback of the lightstage setup is that the required illumination polariser orientation is view dependent and so diffuse/specular separation is only possible for a single viewpoint which does not permit capturing full ear-to-ear face models. Ghosh et al. [17] made an empirical observation that using two illumination fields with locally orthogonal patterns of polarisation allows approximate specular/diffuse separation from any viewpoint on the equator. Although practically useful, in this configuration specular and diffuse reflectance is not fully separated. More generally, lightstage albedo bakes in ambient occlusion (which depends on geometry) and RGB values are dependent on the light source spectra and camera spectral sensitivities.
+
+3D Morphable Model Fitting The estimation of 3DMM parameters (shape, expression, colour, illumination and camera) is an ongoing inverse rendering challenge.
+
+Most approaches focus on shape estimation only and omit the reconstruction of colour/albedo and illumination, e.g. [20]. The few methods that take colour into account suffer from the ambiguity between albedo and illumination demonstrated in Egger et al. [9]. This ambiguity is especially hard to overcome for two reasons: first, none of the publicly available face models capture real diffuse or specular albedo; second, most models have a strong bias towards Caucasian faces, which results in a strongly biased prior. The reflectance models used for inverse rendering are usually dramatically simplified and the specular term is either omitted or constant. Genova et al. [13] point out the limitation of having no statistics on specularity and use a heuristic for their specular term. Romdhani et al. [25] use the position of specularities as shape cues but again with homogeneous specular maps. The work of Yamaguchi et al. [32] demonstrates the value of separately estimating specular and diffuse albedo; however, they do not explore the statistics or build a generative model, and their approach is not available to the community. Current limitations are mainly caused by the lack of a publicly available diffuse and specular albedo model.
+
+# 2. Data capture
+
+A lightstage exploits the phenomenon that specular reflection from a dielectric material preserves the plane of polarisation of linearly polarised incident light whereas subsurface diffuse reflection randomises it. This allows separation of specular and diffuse reflectance by capturing a pair of images under polarised illumination. A polarising filter on each lightsource is oriented such that a specular reflection towards the viewer has the same plane of polarisation. The first image, $I_{\mathrm{para}}$ , has a polarising filter in front of the camera oriented parallel to the plane of polarisation of the specularly reflected light, allowing both specular and diffuse transmission. The second, $I_{\mathrm{perp}}$ , has the polarising filter oriented perpendicularly, blocking the specular but still permitting transmission of the diffuse reflectance. The difference, $I_{\mathrm{para}} - I_{\mathrm{perp}}$ , gives only the specular reflection.
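+
+For illustration, the separation above amounts to a simple per-pixel computation. The following NumPy sketch (hypothetical function and variable names, not part of the paper's released code) assumes the two captures are linear RGB images:
+
+```python
+import numpy as np
+
+def separate_reflectance(I_para, I_perp):
+    """Polarisation-based separation sketch. I_para / I_perp: linear RGB
+    images (H x W x 3) captured with the camera polariser parallel /
+    perpendicular to the plane of specularly reflected light."""
+    specular = np.clip(I_para - I_perp, 0.0, None)  # difference leaves only specular
+    diffuse = I_perp                                # perpendicular image blocks specular
+    return diffuse, specular
+```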
+
+Setup Our setup comprises a custom built lightstage with polarised LED illumination, a single photometric camera (Nikon D200) with optoelectric polarising filter (LC-Tec FPM-L-AR) and seven additional cameras (Canon 7D) to provide multiview coverage. We use 41 ultra bright white LEDs mounted on a geodesic dome of diameter $1.8\mathrm{m}$. Each LED has a rotatable linear polarising filter in front of it. Their orientation is tuned by placing a sphere of low diffuse albedo and high specular albedo (a black snooker ball) in the centre of the dome and adjusting the filter orientation until the specular reflection is completely cancelled in the photometric camera's view. Since we only seek to estimate albedo maps, we require only the constant illumination condition, in which all LEDs are set to maximum brightness.
+
+In contrast to previous lightstage-based methods, we capture multiple virtual viewpoints by capturing the face in different poses, specifically frontal and left/right profile. This provides full ear-to-ear coverage for the single polarisation-calibrated photometric viewpoint. The optoelectric polarising filter enables the parallel/perpendicular conditions to be captured in rapid succession without requiring mechanical filter rotation. We augment the photometric camera with additional cameras providing multiview, single-shot images captured in sync with the photometric images. We position these additional cameras to provide overlapping coverage of the face. We do not rely on a fixed geometric calibration, so the exact positioning of these cameras is unimportant and we allow the cameras to autofocus between captures. In our setup, we use 7 such cameras in addition to the photometric view giving a total of 8 simultaneous views. Since we repeat the capture three times, we have 24 effective views. For synchronisation, we control camera shutters and the polarisation state of the photometric camera using an MBED micro controller. A complete dataset for a face is shown in Fig. 2.
+
+Participants We captured 50 individuals (13 females) in our setup. Our participants range in age from 18 to 67 and cover skin types I-V of the Fitzpatrick scale [11].
+
+# 3. Data processing
+
+In order to merge these views and to provide a rough base mesh, we perform a multiview reconstruction. We then warp the 3DMM template mesh to the scan geometry. As well as other sources of alignment error, since the three photometric views are not acquired simultaneously, there is likely to be non-rigid deformation of the face between these views. For this reason, in Section 3.3 we propose a robust algorithm for stitching the photometric views without blurring potentially misaligned features. We provide an implementation of our sampling, weighting and blending pipeline as an extension of the MatlabRenderer toolbox [2].
+
+# 3.1. Multiview stereo
+
+We commence by applying uncalibrated structure-from-motion followed by dense multiview stereo [1] to all 24 viewpoints (see Fig. 2, blue boxed images). Solving this uncalibrated multiview reconstruction problem provides both the base mesh (see Fig. 2, bottom left) to which we fit the 3DMM template and also intrinsic and extrinsic camera parameters for the three photometric views. These form the input to our stitching process.
+
+# 3.2. Template fitting
+
+To build a 3DMM from raw scanning data, we establish correspondence to a template.
+
+
+Figure 2: Overview of our capture and blending pipeline. Images within a blue box are captured simultaneously. Photometric image pairs within a dashed orange box are captured sequentially with perpendicular/parallel polarisation state respectively.
+
+We use the Basel Face Pipeline [14], which uses smooth deformations based on Gaussian Processes. We adapted the threshold for excluding vertices from the optimisation at the different levels (set to $32\mathrm{mm}$, $16\mathrm{mm}$, $8\mathrm{mm}$, $4\mathrm{mm}$, $2\mathrm{mm}$, $1\mathrm{mm}$, $0.5\mathrm{mm}$ from coarse to fine) to achieve better performance for missing parts of the scans. Besides this minor change we used the Basel Face Pipeline as is, with between 25 and 45 manually annotated landmarks (eyes: 8, nose: 9, mouth: 6, eyebrows: 4, ears: 18). We used the template of the BFM 2017 for registration, which makes our model compatible with that model.
+
+# 3.3. Sampling and stitching
+
+We stitch the multiple photometric viewpoints into seamless diffuse and specular per-vertex albedo maps using Poisson blending. Blending in the gradient domain via solution of a Poisson equation was first proposed by Pérez et al. [23] for 2D images. The approach allows us to avoid visible seams where texture or geometry from different views are inconsistent.
+
+For each viewpoint, $v\in \mathcal{V} = \{v_1,\ldots ,v_k\}$, we sample RGB intensities onto the $n$ vertices of the mesh, $\mathbf{I}^v\in \mathbb{R}^{n\times 3}$. Then, for each view we compute a per-triangle confidence value for each of the $t$ triangles, $\mathbf{w}^v\in \mathbb{R}^t$.
+
+For each triangle, this is defined as the minimum per-vertex weight over the triangle's vertices, where the per-vertex weights are defined as follows. If the vertex is not visible in that view, the weight is set to zero. We also set the weight to zero if the vertex projection is within a threshold distance of the occluding boundary, to avoid sampling background onto the mesh. Otherwise, we take the dot product between the surface normal and view vectors as the weight, giving preference to observations whose projected resolution is higher.
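+
+As a concrete illustration, the per-vertex and per-triangle weights for one view could be computed as in the following sketch (hypothetical names; visibility and occluding-boundary tests are assumed to be precomputed):
+
+```python
+import numpy as np
+
+def view_weights(normals, view_dirs, visible, near_boundary, faces):
+    """Confidence weights for one view. normals / view_dirs: (n x 3) unit
+    vectors per vertex, visible / near_boundary: boolean masks of length n,
+    faces: (t x 3) vertex indices per triangle."""
+    w_vert = np.clip(np.sum(normals * view_dirs, axis=1), 0.0, None)  # foreshortening
+    w_vert[~visible] = 0.0          # vertex not seen in this view
+    w_vert[near_boundary] = 0.0     # too close to the occluding boundary
+    w_tri = w_vert[faces].min(axis=1)   # per-triangle weight: minimum over its vertices
+    return w_vert, w_tri
+```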
+
+Next, we define a selection matrix for each view, $\mathbf{S}_v\in \{0,1\}^{m_v\times t}$, that selects a triangle if view $v$ has the highest weight for that triangle:
+
+$$
+\left(\mathbf{S}_v^T \mathbf{1}_{m_v}\right)_i = 1 \text{ iff } \forall u \in \mathcal{V} \backslash \{v\},\; w_i^u < w_i^v. \tag{1}
+$$
+
+We define an additional selection matrix $\mathbf{S}_{v_{k + 1}}$ that selects all triangles not selected in any view (i.e. that have no nonzero weight). Hence, every triangle is selected exactly once and $\sum_{i = 1}^{k + 1}m_{v_i} = t$ . We similarly define per-vertex selection matrices $\tilde{\mathbf{S}}_v\in \{0,1\}^{\tilde{m}_v\times n}$ that select the vertices for which view $v$ has the highest per-vertex weights.
+
+We write a screened Poisson equation as a linear system [8] in the unknown stitched RGB intensities $\mathbf{I}^{\mathrm{stitch}}\in \mathbb{R}^{n\times 3}$:
+
+$$
+\left[ \begin{array}{c} \mathbf{S}\mathbf{G} \\ \lambda \tilde{\mathbf{S}}_{v_1} \end{array} \right] \mathbf{I}^{\mathrm{stitch}} = \left[ \begin{array}{c} \left(\mathbf{I}_3 \otimes \mathbf{S}_{v_1}\right) \mathbf{G}\mathbf{I}^{v_1} \\ \vdots \\ \left(\mathbf{I}_3 \otimes \mathbf{S}_{v_k}\right) \mathbf{G}\mathbf{I}^{v_k} \\ \mathbf{0}_{3m_{k+1} \times 3} \\ \lambda \tilde{\mathbf{S}}_{v_1} \mathbf{I}^{v_1} \end{array} \right], \tag{2}
+$$
+
+where $\otimes$ is the Kronecker product,
+
+$$
+\mathbf {S} = \left[ \begin{array}{c} \mathbf {I} _ {3} \otimes \mathbf {S} _ {v _ {1}} \\ \vdots \\ \mathbf {I} _ {3} \otimes \mathbf {S} _ {v _ {k + 1}} \end{array} \right], \tag {3}
+$$
+
+$\mathbf{I}_3$ is the $3 \times 3$ identity matrix and $\mathbf{G} \in \mathbb{R}^{3t \times n}$ computes the per-triangle gradient in the $x, y$ and $z$ directions of a function defined on the $n$ vertices of the mesh. We solve (2) in a least squares sense so that $\mathbf{I}^{\mathrm{stitch}}$ seeks to match the selected gradients in each triangle. Triangles with no selected view are assumed to have zero gradient. View $v_1$ is chosen as the reference in order to resolve colour offset indeterminacies and $\lambda$ is the screening weight. We use $k = 3$ views, the frontal view is chosen as the reference and we set $\lambda = 0.1$ .
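+
+A rough sketch of this stitching step is given below, assuming a precomputed sparse gradient operator G (of size $3t \times n$), per-view sampled colours and the weights described above; function and variable names are hypothetical, and the actual implementation is our released MatlabRenderer extension:
+
+```python
+import numpy as np
+import scipy.sparse as sp
+from scipy.sparse.linalg import lsqr
+
+def stitch_poisson(G, I_views, w_views, vert_weights, lam=0.1):
+    """Screened Poisson stitching sketch. I_views: list of (n x 3) per-view
+    vertex colours, w_views: list of (t,) per-triangle weights,
+    vert_weights: list of (n,) per-vertex weights; view 0 is the reference."""
+    t, n = w_views[0].shape[0], I_views[0].shape[0]
+    W = np.stack(w_views)                                   # (k, t)
+    best = np.where(W.max(0) > 0, W.argmax(0), -1)          # -1: triangle seen by no view
+    rows, rhs = [], []
+    for v, Iv in enumerate(I_views):
+        tri = np.where(best == v)[0]
+        comp = np.concatenate([tri, tri + t, tri + 2 * t])  # x, y, z gradient rows
+        rows.append(G[comp])
+        rhs.append(G[comp] @ Iv)                            # match this view's gradients
+    tri = np.where(best == -1)[0]
+    comp = np.concatenate([tri, tri + t, tri + 2 * t])
+    rows.append(G[comp])
+    rhs.append(np.zeros((comp.size, 3)))                    # unselected triangles: zero gradient
+    ref = np.where(np.stack(vert_weights).argmax(0) == 0)[0]
+    S_ref = sp.eye(n, format="csr")[ref]
+    rows.append(lam * S_ref)                                # screening term pins reference view
+    rhs.append(lam * (S_ref @ I_views[0]))
+    A, b = sp.vstack(rows).tocsr(), np.vstack(rhs)
+    return np.stack([lsqr(A, b[:, c])[0] for c in range(3)], axis=1)
+```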
+
+# 3.4. Calibrated colour transformation
+
+Our photometric camera captures RAW linear images. We transform these to linear sRGB space using a colour transformation matrix computed from light SPD and camera spectral sensitivity calibrations, discretised at $D$ evenly spaced wavelengths. We measure the spectral power distribution of the LEDs used in our lightstage, $\mathbf{e} \in \mathbb{R}^D$ , using a B&W Tek BSR111E-VIS spectroradiometer. We use the spectral sensitivity measurement, $\mathbf{C} \in \mathbb{R}^{D \times 3}$ , for the Nikon D200 as included in the database of Jiang et al. [19]. The overall colour transformation is given by a product of three transformations: $\mathbf{T} = \mathbf{T}_{\mathrm{xyz2rgb}} \mathbf{T}_{\mathrm{raw2xyz}}(\mathbf{C}) \mathbf{T}_{\mathrm{wb}}(\mathbf{C}, \mathbf{e})$ . The first performs white balancing:
+
+$$
+\mathbf {T} _ {\mathrm {w b}} (\mathbf {C}, \mathbf {e}) = \operatorname {d i a g} \left(\mathbf {C} ^ {T} \mathbf {e}\right) ^ {- 1}. \tag {4}
+$$
+
+The second converts from the camera-specific colour space to the standardised XYZ space:
+
+$$
+\mathbf{T}_{\mathrm{raw2xyz}}(\mathbf{C}) = \mathbf{C}_{\mathrm{CIE}} \mathbf{C}^{+}, \tag{5}
+$$
+
+where $\mathbf{C}_{\mathrm{CIE}}\in \mathbb{R}^{D\times 3}$ contains the wavelength-discretised CIE-1931 2-degree colour matching functions and $\mathbf{C}^+$ is the pseudoinverse of $\mathbf{C}$. To preserve white balance we rescale each row such that $\mathbf{T}_{\mathrm{raw2xyz}}(\mathbf{C})\mathbf{1} = \mathbf{1}$. The final transformation, $\mathbf{T}_{\mathrm{xyz2rgb}}$, is a fixed matrix to convert from XYZ to sRGB space. As part of our model we provide $\mathbf{T}$, $\mathbf{C}$ and $\mathbf{e}$.
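+
+The colour calibration can be summarised by the short sketch below (hypothetical names; transposes are arranged so that the matrices act on RGB/XYZ triples, and the CIE matching functions and the fixed XYZ-to-sRGB matrix are assumed to be given):
+
+```python
+import numpy as np
+
+def colour_transform(C, e, C_cie, T_xyz2rgb):
+    """Sketch of T = T_xyz2rgb T_raw2xyz(C) T_wb(C, e). C: (D x 3) camera
+    spectral sensitivities, e: (D,) light SPD, C_cie: (D x 3) CIE-1931
+    colour matching functions, T_xyz2rgb: fixed 3 x 3 matrix."""
+    T_wb = np.diag(1.0 / (C.T @ e))                     # Eq. (4): white balancing
+    T_raw2xyz = C_cie.T @ np.linalg.pinv(C.T)           # Eq. (5): camera RGB -> XYZ
+    T_raw2xyz /= T_raw2xyz.sum(axis=1, keepdims=True)   # rescale rows so that T 1 = 1
+    return T_xyz2rgb @ T_raw2xyz @ T_wb
+```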
+
+# 4. Integrating 3DRFE
+
+We augment our own dataset by additionally including the 23 scans from the 3DRFE dataset [27].
+
+
+Figure 3: (a)-(c): Source geometry and albedo maps from the 3DRFE dataset [27]. (d)-(e): final registered, colour transformed albedo maps on warped template geometry.
+
+
+
+This dataset uses the original capture setup of Ma et al. [21], which means that photometric information is only captured from the one view for which the polariser orientations are calibrated. Scans are provided in the form of single viewpoint specular and diffuse albedo maps and a mesh (see Fig. 3(a)-(c)) whose UV coordinates are the 2D perspective projection of the mesh into the maps. This enables us to estimate geometric camera calibration parameters from the 3D vertex positions and corresponding 2D UV coordinates. We perform the calibration using [6] and estimate both intrinsic and distortion parameters. We fit the BFM template to the meshes in the same way as for our own data (see Section 3.2). We then project the fitted template into the maps using the estimated camera calibration, directly sample diffuse/specular albedo for visible vertices and inpaint vertices with no sample using a zero gradient assumption.
+
+The diffuse and specular albedo maps are stored in a nonlinear colour space so we preprocess them by applying inverse gamma (of value 2.2) to transform them back to a linear space. To account for variation in overall skin brightness, during capture the camera gain (ISO) was adjusted for each subject. This means that albedo maps cannot be directly compared or modelled since their individual scale is different. We obtained from the original authors the ISO setting for each subject and compensate by dividing each albedo map by its ISO number. Finally, the albedo maps differ from those taken in our setup by an unknown overall scale factor and colour transformation. To compensate for this, we find the optimal $3 \times 3$ colour transformation to transform the mean diffuse albedo of the 3DRFE scans onto the mean of our scans. We apply this transformation to all of the linearised, ISO-normalised albedo maps to give the final set of maps used in our model.
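+
+A compact sketch of this harmonisation (hypothetical names; per-vertex diffuse albedo maps are assumed to be stacked into arrays) could be:
+
+```python
+import numpy as np
+
+def normalise_3drfe(albedo_maps, iso_values, our_mean):
+    """albedo_maps: (m x n x 3) 3DRFE diffuse albedo maps in nonlinear space,
+    iso_values: (m,) per-subject ISO settings, our_mean: (n x 3) mean diffuse
+    albedo of our own scans."""
+    linear = albedo_maps ** 2.2                    # undo the gamma of 2.2
+    linear = linear / iso_values[:, None, None]    # remove per-subject camera gain
+    mean_3drfe = linear.mean(axis=0)
+    # optimal 3x3 colour transform mapping the 3DRFE mean onto our mean (least squares)
+    M, *_ = np.linalg.lstsq(mean_3drfe, our_mean, rcond=None)
+    return linear @ M
+```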
+
+# 5. Modelling
+
+We model diffuse and specular albedo using a linear statistical model learnt with PCA:
+
+$$
+\mathbf {x} (\mathbf {b}) = \mathbf {P b} + \bar {\mathbf {x}}, \tag {6}
+$$
+
+where $\mathbf{P} \in \mathbb{R}^{3n \times d}$ contains the $d$ principal components, $\bar{\mathbf{x}} \in \mathbb{R}^{3n}$ is the vectorised average map and $\mathbf{x}: \mathbb{R}^d \mapsto \mathbb{R}^{3n}$ is the generator function that maps from the low dimensional parameter vector $\mathbf{b} \in \mathbb{R}^d$ to a vectorised albedo map. Whilst there are more elaborate techniques to model facial texture, we decided to use PCA because of its very stable performance even in the very low data regime and its quality in terms of generalisation and specificity.
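+
+For clarity, a minimal PCA sketch of Eq. (6) is shown below (hypothetical names; each training map is vectorised into one row of X, and whether the components are additionally scaled by their standard deviations is a convention choice):
+
+```python
+import numpy as np
+
+def build_pca_model(X, d):
+    """X: (m x 3n) matrix of vectorised albedo maps, d: components kept."""
+    x_bar = X.mean(axis=0)
+    _, _, Vt = np.linalg.svd(X - x_bar, full_matrices=False)
+    P = Vt[:d].T                  # (3n x d) principal components
+    return P, x_bar
+
+def generate(P, x_bar, b):
+    """Eq. (6): x(b) = P b + x_bar."""
+    return P @ b + x_bar
+```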
+
+Inpainting The stitched albedo maps produced by the process described in Section 3.3 may still contain artefacts, for example in regions with no observed data, stray hairs across the face, where background is sampled onto the face or due to alignment errors in the pipeline. In addition, some faces in the 3DRFE database have closed eyes which is not desired in our model. For this reason, we manually mask all regions containing artefacts (amounting to $5\%$ of the vertices in our dataset) and complete them using a novel hybrid of statistical inpainting and Poisson blending.
+
+For each sample, we assume the set $\mathcal{M} \subset \{1, \dots, n\}$ contains a subset of the $n$ model vertices that have been masked out. We begin by computing a linear statistical model (6) in which masked out values are replaced by the average over non-missing values.
+
+As before, we define a selection matrix for the masked and non-masked vertices, $\tilde{\mathbf{S}}_{\mathcal{M}}$ and $\tilde{\mathbf{S}}_{\mathcal{M}'}$ respectively. We also define selection matrices for the triangles whose vertices are all masked, $\mathbf{S}_{\mathcal{M}}$ , all non-masked, $\mathbf{S}_{\mathcal{M}'}$ , and the $s$ triangles that contain a mix of masked and non-masked vertices $\mathbf{S}_{\mathrm{mix}} \in \mathbb{R}^{s \times t}$ . We compute the parameters of a least squares fit of the preliminary model to the stitched colours of the non-masked vertices:
+
+$$
+\mathbf{b}^{*} = \left(\left(\mathbf{1}_3 \otimes \tilde{\mathbf{S}}_{\mathcal{M}^{\prime}}\right) \mathbf{P}\right)^{+} \left(\mathbf{1}_3 \otimes \tilde{\mathbf{S}}_{\mathcal{M}^{\prime}}\right) \left(\operatorname{vec}\left(\mathbf{I}^{\mathrm{stitch}}\right) - \bar{\mathbf{x}}\right), \tag{7}
+$$
+
+where $^+$ denotes the pseudoinverse. We compute the final albedo maps by again writing a screened Poisson equation as a linear system:
+
+$$
+\left[ \begin{array}{c} \left(\mathbf{1}_3 \otimes \mathbf{S}_{\mathcal{M}}\right) \mathbf{G} \\ \left(\mathbf{1}_3 \otimes \mathbf{S}_{\mathrm{mix}}\right) \mathbf{G} \\ \tilde{\mathbf{S}}_{\mathcal{M}^{\prime}} \end{array} \right] \mathbf{I}^{\mathrm{complete}} = \left[ \begin{array}{c} \left(\mathbf{1}_3 \otimes \mathbf{S}_{\mathcal{M}}\right) \mathbf{G} \mathbf{I}^{\mathrm{stat}} \\ \mathbf{0}_{s} \\ \tilde{\mathbf{S}}_{\mathcal{M}^{\prime}} \mathbf{I}^{\mathrm{stitch}} \end{array} \right] \tag{8}
+$$
+
+where $\operatorname{vec}(\mathbf{I}^{\mathrm{stat}}) = \mathbf{P}\mathbf{b}^{*} + \bar{\mathbf{x}}$ is the statistically inpainted texture. The solution encourages the texture gradient in the masked out region to match the gradient of the statistically inpainted texture but to match the original texture in the non-masked region. For triangles on the boundary between masked and non-masked regions we encourage zero gradient. In Fig. 4 we show an example for the face with most masking required. Note that simply using the statistical inpainting (middle) leads to seams in the texture. The process can be iterated so that these completed textures are used to rebuild the statistical model, though we note no significant improvement after the first iteration.
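+
+The statistical part of this hybrid, Eq. (7), is an ordinary least squares fit restricted to the non-masked vertices; a sketch (hypothetical names, assuming per-vertex RGB-interleaved vectorisation) is:
+
+```python
+import numpy as np
+
+def statistical_inpaint(P, x_bar, I_stitch, masked):
+    """P: (3n x d), x_bar: (3n,), I_stitch: (n x 3) stitched colours,
+    masked: boolean (n,) mask of artefact vertices."""
+    keep = np.repeat(~masked, 3)                   # vertex mask -> RGB entries
+    x = I_stitch.reshape(-1) - x_bar
+    b_star, *_ = np.linalg.lstsq(P[keep], x[keep], rcond=None)   # Eq. (7)
+    I_stat = (P @ b_star + x_bar).reshape(-1, 3)
+    # Eq. (8) then blends I_stat into the masked region via a screened Poisson
+    # solve so that no seam appears at the mask boundary.
+    return I_stat
+```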
+
+
+Figure 4: Hole filling (subject with most masked vertices). Left: manually masked albedo map. Middle: statistically inpainted. Right: Poisson blend.
+
+
+
+
+
+We apply this masking and blending procedure to both diffuse and specular albedo maps.
+
+We perform an additional final step for the specular maps. Specular albedo is not meaningfully estimated in the eyeball region. This is because the eyeball surface is highly specular compared to skin (i.e. the specular lobe is much narrower). Since the spherical illumination is discretised by a relatively small number of light sources, most points on the eye surface do not specularly reflect towards the viewer (see Fig. 3(c) - zoom for detail). For this reason, we replace specular albedo values in the eyeball region by a robust maximum (95th percentile) of the estimated specular albedo values in that region (see Fig. 3(e)).
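+
+This replacement is a one-line operation per map; a sketch (hypothetical names) is:
+
+```python
+import numpy as np
+
+def fix_eyeball_specular(spec_albedo, eye_mask):
+    """spec_albedo: (n x 3) per-vertex specular albedo, eye_mask: boolean (n,)."""
+    robust_max = np.percentile(spec_albedo[eye_mask], 95, axis=0)  # robust maximum
+    out = spec_albedo.copy()
+    out[eye_mask] = robust_max
+    return out
+```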
+
+Statistical modelling The most straightforward way to model diffuse and specular albedo is with two separate models of the same form as equation (6). However, a drawback of this is that the two maps are not independent and allowing arbitrary combinations of the two model parameters can lead to unrealistic appearance. For example, if the face has a beard in the diffuse albedo map, then the specular albedo should be lower in the beard region. An obvious alternative is to learn a joint model in which diffuse and specular are concatenated and modelled together. A drawback of this model is that it may be desirable to retain different numbers of principal components for the two models or to use the diffuse model alone. Using only the diffuse part of this joint model is no longer orthonormal. In addition, since diffuse albedo conveys most of the information about the identity of a face, it is desirable to have the statistics focused on the diffuse part. For these reasons, we propose an additional third alternative. Here, we learn a diffuse only model and then build a specular model in which the principal components are made from the same linear combinations of training samples as the diffuse modes. This means that the same parameters can be used for both models while retaining orthonormality of the diffuse model:
+
+$$
+\operatorname{vec}\left(\mathbf{I}^{\mathrm{diff}}\right) = \mathbf{P}^{\mathrm{diff}} \mathbf{b} + \bar{\mathbf{x}}^{\mathrm{diff}}, \tag{9}
+$$
+
+$$
+\operatorname{vec}\left(\mathbf{I}^{\mathrm{spec}}\right) = \mathbf{P}^{\mathrm{spec}} \mathbf{b} + \bar{\mathbf{x}}^{\mathrm{spec}}, \tag{10}
+$$
+
+
+
+Figure 5: Leave-one-out generalisation error for three variants of the specular model.
+
+Comparing the three alternatives (see Fig. 5), the independent specular model generalises best, the concatenated model second best, and the proposed model, whose principal component weights are transferred from the diffuse model, worst. However, the difference is negligible, and the combination of having a single set of parameters for both models and retaining optimality of the independent diffuse model makes the proposed variant the best choice.
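+
+The proposed third alternative can be sketched as follows (hypothetical names): PCA is run on the diffuse data only, the coefficients expressing each diffuse component as a linear combination of training samples are recovered, and the same combinations are applied to the specular data:
+
+```python
+import numpy as np
+
+def coupled_albedo_models(X_diff, X_spec, d):
+    """X_diff, X_spec: (m x 3n) row-wise vectorised diffuse/specular maps."""
+    mu_diff, mu_spec = X_diff.mean(axis=0), X_spec.mean(axis=0)
+    A_diff, A_spec = X_diff - mu_diff, X_spec - mu_spec
+    _, _, Vt = np.linalg.svd(A_diff, full_matrices=False)
+    P_diff = Vt[:d].T                                         # orthonormal diffuse basis
+    # coefficients writing each diffuse component as a combination of training samples
+    W = np.linalg.lstsq(A_diff.T, P_diff, rcond=None)[0]      # (m x d)
+    P_spec = A_spec.T @ W                                     # same combinations, specular data
+    return P_diff, mu_diff, P_spec, mu_spec
+```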
+
+We use symmetry augmentation in our modelling. The BFM template is bilaterally symmetric with known symmetry correspondences. Therefore, we include each sample twice, once as captured, once reflected. This gives us a total of 146 training samples. We make all variants of our model publicly available using both the full BFM 2017 template and also a template cropped only to the inner face region.
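+
+The symmetry augmentation is a simple vertex permutation; a sketch (hypothetical names, with the mirror correspondence assumed known for the template) is:
+
+```python
+import numpy as np
+
+def symmetry_augment(X, sym_index):
+    """X: (m x 3n) vectorised maps, sym_index: (n,) index of each vertex's
+    bilateral mirror correspondence on the symmetric template."""
+    maps = X.reshape(X.shape[0], -1, 3)
+    mirrored = maps[:, sym_index, :]              # reorder vertices by symmetry
+    return np.concatenate([X, mirrored.reshape(X.shape[0], -1)], axis=0)
+```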
+
+Image formation model To use our model for synthesis or analysis-by-synthesis requires a slightly different image formation model than is typically used with 3DMMs. Appearance at a vertex $v$ should be computed as follows:
+
+$$
+\mathbf{i}_v = \left[ \mathbf{i}_{\mathrm{diff}} \left(\mathbf{P}_v^{\mathrm{diff}} \mathbf{b} + \bar{\mathbf{x}}_v^{\mathrm{diff}}\right) + \mathbf{i}_{\mathrm{spec}} \left(\mathbf{P}_v^{\mathrm{spec}} \mathbf{b} + \bar{\mathbf{x}}_v^{\mathrm{spec}}\right) \right]^{\frac{1}{2.2}} \tag{11}
+$$
+
+where $\mathbf{i}_{\mathrm{diff}}$ and $\mathbf{i}_{\mathrm{spec}}$ are colour diffuse and specular shading (computed using a chosen reflectance model and dependent on illumination, geometry and viewing direction), $\mathbf{P}_v^{\mathrm{diff}}$ denotes the three rows of $\mathbf{P}^{\mathrm{diff}}$ corresponding to vertex $v$, and similarly for $\mathbf{P}_v^{\mathrm{spec}}$, $\bar{\mathbf{x}}_v^{\mathrm{diff}}$ and $\bar{\mathbf{x}}_v^{\mathrm{spec}}$. See Fig. 1 (right) for a visualisation using this image formation model. In addition, for a camera that does not work in sRGB colour space, an additional transformation to the camera's colour space prior to the nonlinear gamma of 2.2 is required.
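+
+A per-vertex sketch of Eq. (11) (hypothetical names, assuming per-vertex RGB-interleaved vectorisation and shading terms computed elsewhere) is:
+
+```python
+import numpy as np
+
+def render_vertex_colours(P_diff, xbar_diff, P_spec, xbar_spec, b, shade_diff, shade_spec):
+    """shade_diff / shade_spec: (n x 3) colour diffuse/specular shading from a
+    chosen reflectance model (illumination, geometry, viewing direction)."""
+    n = shade_diff.shape[0]
+    albedo_diff = (P_diff @ b + xbar_diff).reshape(n, 3)
+    albedo_spec = (P_spec @ b + xbar_spec).reshape(n, 3)
+    linear = shade_diff * albedo_diff + shade_spec * albedo_spec
+    return np.clip(linear, 0.0, None) ** (1.0 / 2.2)   # nonlinear sRGB encoding
+```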
+
+# 6. Experiments
+
+Our final model is a combination of the proposed diffuse and specular albedo model to model facial appearance and the BFM 2017 to model face shape and expressions. Since the shape part of the model is identical to the BFM 2017, we focus on the evaluation of the appearance model and reconstruction of facial albedo.
+
+
+Figure 6: Comparison with current state-of-the-art and publicly available models. Our full model is shown in Fig. 1
+
+We begin by providing a qualitative comparison between our proposed model and the currently most used publicly available 3DMMs in Fig. 6. We observe that the first mode of our proposed model is more diverse and less biased than that of the BFM. Additionally, we see that the appearance varies dramatically between models, which shows how arbitrary the albedo in the LYHM and BFM is. Our full model, presented in Fig. 1, is unprecedented and there is no other model to compare it to.
+
+Next, we use our model in a standard inverse rendering setting. We adopt the publicly available model adaptation framework1 based on [26] and compare it directly to model adaptation results based on the BFM in Fig. 7. This implementation adapts shape, albedo and camera parameters, as well as the first three bands of a spherical harmonics illumination model, and is based on Markov Chain Monte Carlo sampling. We perform the experiment on the LFW dataset [18] exactly as proposed in [14], simply exchanging the model (including applying gamma) and using statistical specular albedo maps during model adaptation.
+
+Finally, we perform an evaluation in the same inverse rendering setting as the previous experiment but with known ground truth albedo maps. We use six identities from our own dataset and build a model excluding them. We then fit to images from our dataset taken by the non-photometric
+
+
+Figure 7: Qualitative model adaptation results on the LFW dataset [18] (rows: Target, BFM 2017, ours, ours diffuse, ours specular). Our model leads to comparable results whilst explicitly disentangling albedo and estimating diffuse and specular albedo.
+
+Figure 8: Albedo estimation results based on the exact same inverse rendering pipeline for the BFM 2017 and the proposed model (panels: input, BFM17, ours, groundtruth). The proposed model is both visually and in terms of mean squared error (see Table 1) closer to the ground truth.
+
+| Model | reconstruction | model mean |
+| --- | --- | --- |
+| BFM17 | 0.0192 ± 0.0121 | 0.0575 ± 0.0551 |
+| ours | 0.0060 ± 0.0022 | 0.0170 ± 0.0270 |
+
+Table 1: Albedo estimation results for the BFM 2017 and the proposed method. The "model mean" column shows the reconstruction based solely on the respective model mean. These results are based on the reconstructions depicted in Fig. 8.
+
+These are simply SLR cameras in auto mode with no polarisation, representing a realistic image in approximately ambient light. We apply the inverse rendering framework with the same configuration, except that we limit the illumination to an ambient condition, and estimate albedo; we observe better albedo reconstruction performance for our proposed model than for the BFM in every single case. We applied gamma for both models since it leads to better results even for the albedo reconstruction of the BFM.
+
+Visual results can be found in Fig. 8 and quantitative values are shown in Table 1.
+
+# 7. Conclusion
+
+We built and make available the first statistical model of facial diffuse and specular albedo. The model fills a gap in the 3DMM literature and may prove beneficial in various directions. It strengthens the computer graphics side of the inverse rendering setting in which 3DMMs are classically applied, and we demonstrate superior performance compared to the BFM 2017 in terms of albedo reconstruction from facial appearance in a 2D image. Besides the computer vision application of inverse rendering with all its various approaches, we see large potential for de-biasing current face processing pipelines. To the best of our knowledge, this work is the first to combine diffuse and specular albedo and to jointly model different skin types with their matching specular reflection properties. Besides applications in computer graphics and vision, we also see a benefit for studying human face perception. Whilst other 3DMMs have already been used in behavioural experiments, this is the first model that enables the study of human face perception based on a truly disentangled representation of illumination, shading, and reflectance. We make our model and accompanying code publicly available2.
+
+Acknowledgement W. Smith is supported by a Royal Academy of Engineering/The Leverhulme Trust Senior Research Fellowship. B. Egger and J. Tenenbaum are supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. We acknowledge Abhishek Dutta for the original design and construction of our light stage.
+
+# References
+
+[1] LLC Agisoft and Russia St Petersburg. Agisoft metashape. Professional Edition, 7, 2019.
+[2] Anil Bas and William A. P. Smith. What does 2D geometric information really tell us about 3D face shape? International Journal of Computer Vision, 127(10):1455-1473, 2019.
+[3] T. Beeler, B. Bickel, P. Beardsley, B. Sumner, and M. Gross. High-quality single-shot capture of facial geometry. ACM Transactions on Graphics (Proceedings of SIGGRAPH), 29(3), 2010.
+[4] Volker Blanz and Thomas Vetter. A morphable model for the synthesis of 3D faces. In ACM Transactions on Graphics (Proceedings of SIGGRAPH), pages 187-194, 1999.
+[5] James Booth, Anastasios Roussos, Stefanos Zafeiriou, Allan Ponniahy, and David Dunaway. A 3D morphable model learnt from 10,000 faces. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5543-5552, 2016.
+[6] Jean-Yves Bouguet. Camera calibration toolbox for matlab. http://www.vision.caltech.edu/bouguetj/calib_doc/, 2008. Accessed: 2019-10-04.
+[7] Hang Dai, Nick Pears, William A. P. Smith, and Christian Duncan. A 3D morphable model of craniofacial shape and texture variation. In Proc. International Conference on Computer Vision (ICCV), 2017.
+[8] Arnaud Dessein, William AP Smith, Richard C Wilson, and Edwin R Hancock. Seamless texture stitching on a 3D mesh by poisson blending in patches. In Proc. IEEE International Conference on Image Processing, pages 2031-2035. IEEE, 2014.
+[9] Bernhard Egger. Semantic Morphable Models. PhD thesis, University of Basel, 2018.
+[10] Bernhard Egger, William AP Smith, Ayush Tewari, Stefanie Wuhrer, Michael Zollhoefer, Thabo Beeler, Florian Bernard, Timo Bolkart, Adam Kortylewski, Sami Romdhani, et al. 3D morphable face models-past, present and future. arXiv preprint arXiv:1909.01815, 2019.
+[11] Thomas B Fitzpatrick. The validity and practicality of sun-reactive skin types i through vi. Archives of dermatology, 124(6):869-871, 1988.
+[12] Baris Gecer, Stylianos Ploumpis, Irene Kotsia, and Stefanos Zafeiriou. GANFIT: Generative adversarial network fitting for high fidelity 3D face reconstruction. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1155-1164, 2019.
+[13] Kyle Genova, Forrester Cole, Aaron Maschinot, Aaron Sarna, Daniel Vlasic, and William T Freeman. Unsupervised training for 3d morphable model regression. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 8377-8386, 2018.
+[14] Thomas Gerig, Andreas Morel-Forster, Clemens Blumer, Bernhard Egger, Marcel Luthi, Sandro Schonborn, and Thomas Vetter. Morphable face models—an open framework. In Proc. International Conference on Automatic Face and Gesture Recognition, pages 75–82. IEEE, 2018.
+[15] A. Ghosh, T. Chen, P. Peers, C. A. Wilson, and P. Debevec. Estimating specular roughness and anisotropy from second
+
+order spherical gradient illumination. Computer Graphics Forum (Proc. EGSR), 28(4):1161-1170, 2009.
+[16] A. Ghosh, T. Chen, P. Peers, C. A. Wilson, and P. Debevec. Circularly polarized spherical illumination reflectometry. ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia), 29(6), 2010.
+[17] Abhijeet Ghosh, Graham Fyffe, Borom Tunwattanapong, Jay Busch, Xueming Yu, and Paul Debevec. Multiview face capture using polarized spherical gradient illumination. In ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia), volume 30, page 129, 2011.
+[18] Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst, October 2007.
+[19] Jun Jiang, Dengyu Liu, Jinwei Gu, and Sabine Süssstrunk. What is the space of spectral sensitivity functions for digital color cameras? In Proc. IEEE Winter Conference on Applications of Computer Vision (WACV), pages 168-179, 2013.
+[20] Feng Liu, Ronghang Zhu, Dan Zeng, Qijun Zhao, and Xiaoming Liu. Disentangling features in 3d face shapes for joint face reconstruction and recognition. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5216-5225, 2018.
+[21] W. C. Ma, T. Hawkins, P. Peers, C. F. Chabert, M. Weiss, and P. Debevec. Rapid acquisition of specular and diffuse normal maps from polarized spherical gradient illumination. In Proc. Eurographics Symposium on Rendering, pages 183-194, 2007.
+[22] Pascal Paysan, Reinhard Knothe, Brian Amberg, Sami Romdhani, and Thomas Vetter. A 3D face model for pose and illumination invariant face recognition. In Proc. IEEE International Conference on Advanced Video and Signal Based Surveillance, pages 296-301. IEEE, 2009.
+[23] P. Pérez, M. Gangnet, and A. Blake. Poisson image editing. ACM Transactions on Graphics (Proceedings of SIGGRAPH), 22(3):313-318, 2003.
+[24] Stylianos Ploumpis, Haoyang Wang, Nick Pears, William A. P. Smith, and Stefanos Zafeiriou. Combining 3D morphable models: A large scale face-and-head model. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 10934-10943, 2019.
+[25] Sami Romdhani and Thomas Vetter. Estimating 3d shape and texture using pixel intensity, edges, specular highlights, texture constraints and a prior. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pages 986-993 vol. 2, June 2005.
+[26] Sandro Schönborn, Bernhard Egger, Andreas Morel-Forster, and Thomas Vetter. Markov chain Monte Carlo for automated face image analysis. International Journal of Computer Vision, 123(2):160–183, Jun 2017.
+[27] Giota Stratou, Abhijeet Ghosh, Paul Debevec, and Louis-Philippe Morency. Effect of illumination on automatic expression recognition: a novel 3D reconfigurable facial database. In Proc. International Conference on Automatic Face and Gesture Recognition, pages 611–618. IEEE, 2011.
+
+[28] Ayush Tewari, Florian Bernard, Pablo Garrido, Gaurav Bharaj, Mohamed Elgharib, Hans-Peter Seidel, Patrick Pérez, Michael Zollhoefer, and Christian Theobalt. Fml: Face model learning from videos. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
+[29] Ayush Tewari, Michael Zollhoefer, Hyeongwoo Kim, Pablo Garrido, Florian Bernard, Patrick Perez, and Theobalt Christian. MoFA: Model-based Deep Convolutional Face Autoencoder for Unsupervised Monocular Reconstruction. In Proc. International Conference on Computer Vision (ICCV), 2017.
+[30] Luan Tran and Xiaoming Liu. Nonlinear 3D face morphable model. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, June 2018.
+[31] C. A. Wilson, A. Ghosh, P. Peers, J.-Y. Chiang, J. Busch, and P. Debevec. Temporal upsampling of performance geometry using photometric alignment. ACM Transactions on Graphics (Proceedings of SIGGRAPH), 29(2), 2010.
+[32] Shugo Yamaguchi, Shunsuke Saito, Koki Nagano, Yajie Zhao, Weikai Chen, Kyle Olszewski, Shigeo Morishima, and Hao Li. High-fidelity facial reflectance and geometry inference from an unconstrained image. ACM Transactions on Graphics (Proceedings of SIGGRAPH), 37(4):162, 2018.
\ No newline at end of file
diff --git a/amorphablefacealbedomodel/images.zip b/amorphablefacealbedomodel/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..2f9b83869937f67f4795c6e3ac760d7ba48406c4
--- /dev/null
+++ b/amorphablefacealbedomodel/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e7bdab047d8468468d16076d4af47769d56d47f84665ea20eaf8c7795774b46f
+size 586369
diff --git a/amorphablefacealbedomodel/layout.json b/amorphablefacealbedomodel/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..e78b9fdbc674461e0c46d47d69c0ac281b5ba944
--- /dev/null
+++ b/amorphablefacealbedomodel/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:09e68c5c4bae8274820acf1b8644bb256065072d428f146e73726e157912c649
+size 429873
diff --git a/amultigridmethodforefficientlytrainingvideomodels/10b86ce7-347e-470d-ab66-2bb6f7066e67_content_list.json b/amultigridmethodforefficientlytrainingvideomodels/10b86ce7-347e-470d-ab66-2bb6f7066e67_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..01ef9addc89916add29f39b036957aa936f310c0
--- /dev/null
+++ b/amultigridmethodforefficientlytrainingvideomodels/10b86ce7-347e-470d-ab66-2bb6f7066e67_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c7232d05cbad00fb755c8be2619c034b84bb9b2811e7815bd510912a939820a7
+size 82243
diff --git a/amultigridmethodforefficientlytrainingvideomodels/10b86ce7-347e-470d-ab66-2bb6f7066e67_model.json b/amultigridmethodforefficientlytrainingvideomodels/10b86ce7-347e-470d-ab66-2bb6f7066e67_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..b52a3f875584e92af36c7b83c7bea6a214ef1e81
--- /dev/null
+++ b/amultigridmethodforefficientlytrainingvideomodels/10b86ce7-347e-470d-ab66-2bb6f7066e67_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6ba7d504c157a0d5e58c0438d3ac6599d174eb78aa80dc29959e766e83339152
+size 101804
diff --git a/amultigridmethodforefficientlytrainingvideomodels/10b86ce7-347e-470d-ab66-2bb6f7066e67_origin.pdf b/amultigridmethodforefficientlytrainingvideomodels/10b86ce7-347e-470d-ab66-2bb6f7066e67_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..f661a94ed924b55e604fc8e26de24f14f406e857
--- /dev/null
+++ b/amultigridmethodforefficientlytrainingvideomodels/10b86ce7-347e-470d-ab66-2bb6f7066e67_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:83c4f7915eaf5f3758d3e5449ed6da77644054deac151ee6892f31872df816b0
+size 193959
diff --git a/amultigridmethodforefficientlytrainingvideomodels/full.md b/amultigridmethodforefficientlytrainingvideomodels/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..bc889b4b592d2ea78b0a453f828e7573ac2cc0d9
--- /dev/null
+++ b/amultigridmethodforefficientlytrainingvideomodels/full.md
@@ -0,0 +1,319 @@
+# A Multigrid Method for Efficiently Training Video Models
+
+Chao-Yuan Wu$^{1,2}$ Christoph Feichtenhofer$^{2}$ Ross Girshick$^{2}$ Kaiming He$^{2}$ Philipp Krähenbühl$^{1}$
+
+$^{1}$The University of Texas at Austin $^{2}$Facebook AI Research (FAIR)
+
+# Abstract
+
+Training competitive deep video models is an order of magnitude slower than training their counterpart image models. Slow training causes long research cycles, which hinders progress in video understanding research. Following standard practice for training image models, video model training has used a fixed mini-batch shape: a specific number of clips, frames, and spatial size. However, what is the optimal shape? High resolution models perform well, but train slowly. Low resolution models train faster, but are less accurate. Inspired by multigrid methods in numerical optimization, we propose to use variable mini-batch shapes with different spatial-temporal resolutions that are varied according to a schedule. The different shapes arise from resampling the training data on multiple sampling grids. Training is accelerated by scaling up the mini-batch size and learning rate when shrinking the other dimensions. We empirically demonstrate a general and robust grid schedule that yields a significant out-of-the-box training speedup without a loss in accuracy for different models (I3D, non-local, SlowFast), datasets (Kinetics, Something-Something, Charades), and training settings (with and without pretraining, 128 GPUs or 1 GPU). As an illustrative example, the proposed multigrid method trains a ResNet-50 SlowFast network $4.5 \times$ faster (wall-clock time, same hardware) while also improving accuracy ( $+0.8\%$ absolute) on Kinetics-400 compared to baseline training. Code is available online. $^{1}$
+
+# 1. Introduction
+
+Training deep networks (CNNs [27]) on video is more computationally intensive than training 2D CNN image models, potentially by an order of magnitude. Long training time slows progress in video understanding research, hinders scaling out to real-world data sources, and consumes significant amounts of energy and hardware. Is this slow training unavoidable, or might there be video-specific optimization strategies that can accelerate training?
+
+3D CNN video models are trained using mini-batch optimization methods (e.g., SGD) that process one mini-batch
+
+
+Figure 1. Training time vs. top-1 accuracy on Kinetics-400 with a ResNet-50 SlowFast network. Each point corresponds to a model trained for a specific number of epochs. Multigrid training, the method developed in this paper, obtains a significantly better tradeoff than baseline training. For example, under default settings, multigrid training is $4.5 \times$ faster while achieving higher ($+0.8\%$ absolute) top-1 accuracy. All methods here, and throughout the paper, use the same hardware and software implementation.
+
+per iteration. The mini-batch shape $B \times T \times H \times W$ (mini-batch size × number of frames × height × width) is typically constant throughout training. A variety of considerations go into selecting this input shape, but a common heuristic is to make the $T \times H \times W$ dimensions large in order to improve accuracy, e.g., as observed in [9, 45, 47].
+
+This heuristic is only one possible choice, however, and in general there are trade-offs. For example, one may use a smaller number of frames and/or spatial size while simultaneously increasing the mini-batch size $B$ . With such an exchange, it is possible to process the same number of epochs (passes over the dataset) with lower wall-clock time because each iteration processes more examples. The resulting trade-off is faster training with lower accuracy.
+
+The central idea of this paper is to avoid this trade-off—i.e., to have faster training without losing accuracy—by making the mini-batch shape variable during training. By viewing the input video clips in a mini-batch as raw video signals that are sampled on a sampling grid (to be defined), we can draw a connection to multigrid methods for numerical analysis [1]. These methods exploit coarse-to-fine grids to accelerate optimization. Intuitively, if we use large mini-batches with relatively small time and space dimensions (a 'coarse grid') early in training and small mini-batches with large time and space dimensions (a 'fine grid') later, then SGD may be able to scan through the data more quickly on average while finally solving for a high accuracy model, akin to how coarse grids enable solving problems on finer grids more rapidly in multigrid numerical solvers [1].
+
+Multigrid training is possible because video models are compatible with input data of variable space and time dimensions due to weight sharing operations (e.g., convolutions). In addition, CNNs are effective at learning patterns at multiple scales, e.g., as observed when training with data augmentation [18, 26, 38]. We observe similar multi-scale robustness and generalization with multigrid training.
+
+Our proposed multigrid training method is simple and effective. It is easy to implement and typically only requires small changes to a data loader. Empirically, it works with default learning rate schedules and hyper-parameters already in use. No tuning is required. Moreover, multigrid training works robustly out-of-the-box for different models (I3D [3], non-local [47], SlowFast [9]), datasets (Kinetics-400 [23], Something-Something V2 [14], and Charades [36]), initializations (random and pre-trained), and hardware scales (e.g., 128 GPUs or 1 GPU). We observe a consistent speedup and performance gain in all cases without tuning. As an example, we train a SlowFast network $\sim 4.5\times$ faster in wall-clock time on the large-scale Kinetics dataset (Fig. 1) while also reaching a higher accuracy $(+0.8\%$ absolute). We hope these benefits provided by multigrid training will make research on video understanding more accessible, scalable, and economical.
+
+# 2. Related Work
+
+3D CNN video models extend 2D CNNs to model both spatial and temporal patterns. They are currently the state of the art for video understanding [3,9,12,19,30-32,34,43-45,47-49]. These methods are computationally expensive, both for training and inference [43,49]. Some recent studies propose lighter weight models that use efficient temporal modules [2,5,19,21,28-30,32,34,40,44,46] and/or exploit temporal redundancy [9,51]. In this paper, we show that the training time of state-of-the-art efficient models [9] can still be reduced significantly.
+
+Efficient training can also be advanced through, e.g., optimization methods (e.g., [8, 24, 33, 39]), pre-training [3, 11], distributed training [13, 50], or advances in hardware [22] and software design [4, 6]. In this paper, we propose a complementary direction that exploits variable mini-batch shapes for fast training. Related to our method, Wang et al. [47] and Feichtenhofer et al. [10] initialize larger models with smaller, fully-trained ones. These methods can potentially speed up training as well, and (as can be seen later) are a special case of multigrid training.
+
+Multi-scale training in segmentation [16] and classification [18, 38] uses multiple image crop sizes. However, the mini-batch shape remains fixed [16, 18, 38]. Multigrid training on the other hand uses variable mini-batch shapes. He et al. [17] change the input shapes, but fix the mini-batch size. These methods show that training with variable scales can be beneficial. Multigrid training enjoys the same property.
+
+Multigrid methods were originally proposed for numerical boundary value problems, and later developed into an entire field in computational mathematics [1]. They typically involve iterating through cycles of coarse and fine problems, and exploit the fact that a coarse problem can be solved efficiently to speed up the overall problem solving. He and Xu [15] connect multigrid methods to deep networks through identifying the correspondence between steps in traditional multigrid methods and operators in a convolutional neural network. In this paper, we take inspiration from multigrid concepts from a more abstract view to accelerate video model training.
+
+# 3. Multigrid Training for Video Models
+
+To develop our multigrid training method we will consider a reference video model (e.g., C3D [43], I3D [3]) that is trained by a baseline mini-batch optimizer (e.g., SGD) that operates on mini-batches of shape $B \times T \times H \times W$ (mini-batch size × number of frames × height × width) for some number of epochs (e.g., 100). The spatial-temporal shape, $T \times H \times W$ , arises from resampling source videos in the training dataset according to a sampling grid that is specified by a temporal span, a spatial span, a temporal stride, and a spatial stride (defined in §3.1). These concepts intuitively correspond to a grid's duration/area (span) and sampling rate (stride). The baseline optimizer holds the mini-batch shape constant across all training iterations.
+
+Proposed Multigrid Method. Inspired by multigrid methods in numerical analysis, which solve optimization problems on alternating coarse and fine grids, the core observation in this paper is that the underlying sampling grid that is used to train video models need not be constant during training. In fact, we will show in experiments that by varying the sampling grid and the mini-batch size during training it is possible to reduce training complexity substantially (in terms of total FLOPs and wall-clock time) while achieving similar accuracy in comparison with the baseline.
+
+The fundamental concept that enables multigrid training is the balance between computation allocated to processing more examples per mini-batch vs. the computation allocated to processing larger time and space dimensions. To control this balance, we will consider temporal and spatial shapes $t \times h \times w$ that are formed by resampling source videos with a new sampling grid that has its own spans and strides. When changing the input shape we use a scaled mini-batch size $b$ satisfying the relation $b \cdot t \cdot h \cdot w = B \cdot T \cdot H \cdot W$, or
+
+$$
+b = B \frac {T}{t} \frac {H}{h} \frac {W}{w}, \tag {1}
+$$
+
+which yields computation (in FLOPs) that is roughly equal to the computation of the aforementioned baseline minibatch for typical 3D CNNs. $^{3}$
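+
+As a concrete illustration of Equation (1) (the rounding to an integer mini-batch size is an assumption on our part), the scaled mini-batch size can be computed as follows:
+
+```python
+def scaled_batch_size(B, T, H, W, t, h, w):
+    """Equation (1): pick b so that b*t*h*w == B*T*H*W, keeping per-iteration
+    FLOPs roughly constant for typical 3D CNNs. Integer rounding is ours."""
+    return max(1, int(round(B * (T / t) * (H / h) * (W / w))))
+
+
+# Example: halving T and shrinking H and W by sqrt(2) supports a ~4x larger mini-batch.
+print(scaled_batch_size(8, 32, 224, 224, 16, 158, 158))  # -> 32
+```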
+
+Our multigrid method uses a set of sampling grids and a grid schedule that determines which grid to use in each training iteration. If training is run for a similar number of epochs regardless of the choice of grids, then by making $b > B$ on average the entire training process can use fewer total FLOPs and have a lower wall-clock time.
+
+We will experimentally investigate two questions: (i) is there a set of grids with a grid schedule that can lead to faster training without a loss in accuracy? and, (ii) if so, does it robustly generalize to new models and datasets without modification? In the following we will develop the core multigrid training concepts in detail ( $\S 3.1$ ), provide an implementation (i.e., a set of grids and a grid schedule) that work well in practice ( $\S 3.2$ ), and then explore ablation and generalization experiments ( $\S 4$ ).
+
+# 3.1. Multigrid Training Concepts
+
+Sampling Grids. Each video in a dataset is a discrete signal that was sampled from an underlying continuous signal generated by the physical world. The video has some number of frames and pixels per frame, which are related to the physical world by the temporal and spatial resolution of the recording device (which depends on a number of camera properties). When using one of these source videos in a training mini-batch, a sampling grid is used to resample it.
+
+A sampling grid in one dimension (space or time) is defined by two quantities: a span and a stride. Their units are defined w.r.t. the source video being resampled. For the time dimension, the units are frames while for the spatial dimensions the units are pixels. The span is the support size of the grid and defines the duration or area that the grid covers. The stride is the spacing between sampling points. Dividing the span by the stride gives the number of points in the grid, which determines the shape of the input data. Note that different grids can yield the same data shape, which implies that the mini-batch size will only change (Equation (1)) if a change in the sampling grid also changes the data shape.
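+
+To make the span and stride terminology concrete, the sketch below (our own illustration, not the paper's implementation) resamples a 1D discrete signal on such a grid; linear interpolation stands in for a reconstruction filter:
+
+```python
+import numpy as np
+
+
+def resample_1d(signal, start, span, stride):
+    """Resample `signal` on a grid covering `span` source samples from `start`,
+    with sampling points `stride` apart. The grid has span/stride points, so
+    span and stride together determine the output shape."""
+    num_points = int(span / stride)
+    positions = start + stride * np.arange(num_points)
+    return np.interp(positions, np.arange(len(signal)), signal)
+
+
+# A span of 64 frames at stride 4 and a span of 32 frames at stride 2 both
+# yield 16 output frames: different grids, identical data shape.
+frames = np.arange(128.0)
+assert resample_1d(frames, 0, 64, 4).shape == resample_1d(frames, 0, 32, 2).shape
+```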
+
+We note that spatial sampling grids already appear in the baseline optimizer if it uses multi-scale spatial data augmentation [7, 26, 37]. Under our multigrid perspective, multi-scale spatial data augmentation changes the spatial spans and strides of the resampling grid proportionally so that the resulting mini-batch always has the same $H \times W$ spatial shape. In contrast, we will change spans and strides by different factors, which results in a different spatial shape $h \times w$ for each grid (and likewise for the time dimension).
+
+Grid Scheduling. We use mini-batch optimizers, which have as their most basic scheduling unit a single mini-batch iteration in which one model update is performed. The training schedule consists of some number of mini-batch iterations and is often expressed in terms of epochs. For example, training may consist of 100 or 200 epochs worth of iterations. Within this overall training schedule it is common to let the learning rate vary, such as annealing it according to a schedule defined in terms of iterations or epochs.
+
+Scheduling other training properties is also possible. Central to our multigrid method is the idea of scheduling the sampling grids that are used throughout training. When changing grids, the mini-batch size is always scaled according to Equation (1) so that mini-batch FLOPs are held roughly constant. Grid scheduling is highly flexible, admitting a large design space from simply cycling through a sequence of pre-defined grids to using randomized grids. In §3.2 we will present a randomized, hierarchical schedule that works well in practice.
+
+Multigrid Properties. Multigrid training relies on two properties of the data and model. First, resampling the data on different grids requires a suitable operator. For video, this operator can be a reconstruction filter applied to the source discrete signal followed by computing the values at the points specified by the grid (e.g., bilinear interpolation).
+
+Second, the model must be compatible with inputs that are resampled on different grids, and therefore might have different shapes during training. Models that are composed of functions that use weight sharing across the dimensions that are resampled, e.g., 2D and 3D convolutions, recurrent functions, and self-attention, are compatible and cover most of the commonly used architectures; fully-connected layers, unless their inputs are pooled to a fixed size, are not compatible. We will focus on models that use 2D and 3D convolutions, as well as self-attention operations in the form of non-local blocks [47]; all models end with global average pooling and a single fully-connected layer as the classifier, as is common practice.
+
+Figure 2. A general and robust grid schedule (§3.2). We contrast multigrid training with standard baseline training. (a) Baseline training methods typically use a fixed mini-batch shape throughout training. (b) Multigrid long cycles loop over inputs from small shapes (with large mini-batch sizes) to large shapes (with small mini-batch sizes), staying on each shape for several epochs. (c) Multigrid short cycles rapidly move through a variety of spatial shapes, changing at each iteration. (d) Multigrid long + short cycles (our default setting) combines long and short cycles, and moves through shapes at two frequencies simultaneously. Dark green points in (b), (c), and (d) correspond to one full period of a long cycle, a full short cycle, and a long+short cycle, respectively.
+
+Training and Testing Distributions. The focus of this work is on multigrid methods for training and therefore we use a standard inference method that uses a single shape for the testing data. This choice, however, may introduce a mismatch between the data distribution used to train the model and the data distribution used at test time. To close this gap, training may be finished with some number of 'fine-tuning' iterations that use grids more closely aligned with the testing distribution, e.g., see [42]. We find that this fine-tuning gives a small, but consistent improvement.
+
+# 3.2. Implementation Details
+
+Multigrid training involves a choice of sampling grids and a grid schedule, which leads to a rich design space. We use a hierarchical schedule that involves alternating between mini-batch shapes at two different frequencies: a long cycle that moves through a set of base shapes, generated by a variety of grids, staying on each shape for several epochs, and a short cycle that moves through a set of shapes that are 'nearby' the current base shape, staying on each one for a single iteration. This hierarchical grid schedule is described in more detail shortly and illustrated in Fig. 2.
+
+The remainder of this subsection provides details for this design, which we have found to work well in practice. After presenting these details, we will explore what design decisions are important in ablation experiments.
+
+Optimizer. We use SGD with momentum and a stepwise learning rate decay schedule since these are common choices in practice [9, 18, 26, 43]. Using other learning rate schedules and optimizers is also possible. Specific schedules are given in each experimental section.
+
+Long Cycle. We use sampling grids that result in an ordered sequence of $S = 4$ base mini-batch shapes of nondecreasing size along each dimension: $8B \times \frac{T}{4} \times \frac{H}{\sqrt{2}} \times \frac{W}{\sqrt{2}}$ , $4B \times \frac{T}{2} \times \frac{H}{\sqrt{2}} \times \frac{W}{\sqrt{2}}$ , $2B \times \frac{T}{2} \times H \times W$ , and $B \times T \times H \times W$ . These four shapes cover an intuitive range and work well in practice. The long cycle is synchronized with the stepwise learning rate decay schedule: a full cycle over the $S$ shapes occurs exactly once for each learning rate stage. We train on each shape for the same number of iterations.
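+
+A minimal sketch of how the four base shapes can be generated from the baseline shape (the function name and integer rounding are our own choices); each tuple is (mini-batch size, frames, height, width) and satisfies Equation (1) up to rounding:
+
+```python
+import math
+
+
+def long_cycle_shapes(B, T, H, W):
+    """The S = 4 base mini-batch shapes, ordered from coarse grids with large
+    mini-batches to the original fine shape."""
+    r = math.sqrt(2)
+    return [
+        (8 * B, T // 4, round(H / r), round(W / r)),
+        (4 * B, T // 2, round(H / r), round(W / r)),
+        (2 * B, T // 2, H, W),
+        (B, T, H, W),
+    ]
+```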
+
+We use a simple randomized strategy to generate a minibatch with the target input shape for each training iteration. For each video to be used in the mini-batch, we select a random span from a specified range and set the stride such that the desired shape is produced when sampling on the resulting grid. For the spatial dimensions, this strategy amounts to resizing a random crop to the desired shape using bilinear interpolation (similar to random cropping used in image classification [18, 26, 38]). For the temporal dimension, this strategy amounts to selecting a random temporal crop and subsampling its frames. The sampling range for spans is specified in each experimental section.
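+
+To illustrate this randomized span-and-stride strategy, here is a hedged PyTorch sketch; `sample_clip`, the span ranges, and the square-crop assumption are ours, not the authors' data loader:
+
+```python
+import random
+
+import torch
+import torch.nn.functional as F
+
+
+def sample_clip(video, t, h, w, min_t_span, max_t_span):
+    """Randomized grid sampling for one source video (illustrative sketch).
+
+    `video` is a (C, T_src, H_src, W_src) float tensor; span arguments are ints.
+    Temporally we take a random crop of span_t frames and keep t of them;
+    spatially we take a random square crop and bilinear-resize it to h x w.
+    """
+    c, t_src, h_src, w_src = video.shape
+    # Temporal grid: random span; the stride is implied by producing exactly t frames.
+    span_t = random.randint(min(min_t_span, t_src), min(max_t_span, t_src))
+    start_t = random.randint(0, t_src - span_t)
+    idx = torch.linspace(start_t, start_t + span_t - 1, t).long()
+    clip = video[:, idx]
+    # Spatial grid: random square crop, resized with bilinear interpolation.
+    span_s = random.randint(min(h, h_src, w_src), min(h_src, w_src))
+    y = random.randint(0, h_src - span_s)
+    x = random.randint(0, w_src - span_s)
+    crop = clip[:, :, y:y + span_s, x:x + span_s]
+    return F.interpolate(crop, size=(h, w), mode="bilinear", align_corners=False)
+```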
+
+Short Cycle. The short cycle rapidly moves through a variety of spatial shapes, changing at each iteration. By default, we use the following 3-shape short cycle. For iteration $i$ , let $m = i$ (mod 3); if $m = 0$ , then we set the spatial shape to $\frac{H}{2} \times \frac{W}{2}$ ; if $m = 1$ , we use $\frac{H}{\sqrt{2}} \times \frac{W}{\sqrt{2}}$ ; otherwise, the current base spatial shape from the long cycle is used.
+
+The short cycle can be applied on its own or in conjunction with the long cycle. The mini-batch size is again scaled using Equation (1). The same randomized grid strategy is applied to sample data for the target mini-batch shape.
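+
+A minimal sketch of the default 3-shape short cycle (the function name and rounding are our own; the mini-batch size would additionally be rescaled with Equation (1)):
+
+```python
+import math
+
+
+def short_cycle_spatial_shape(iteration, base_h, base_w):
+    """Cycle every iteration through H/2 x W/2, H/sqrt(2) x W/sqrt(2), and the
+    current long-cycle base spatial shape."""
+    m = iteration % 3
+    if m == 0:
+        return base_h // 2, base_w // 2
+    if m == 1:
+        return round(base_h / math.sqrt(2)), round(base_w / math.sqrt(2))
+    return base_h, base_w
+```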
+
+Learning Rate Scaling. When the mini-batch size changes due to the long cycle, we apply the linear scaling rule [13] to adjust the learning rate by the mini-batch size scaling factor (thus either $8 \times$, $4 \times$, $2 \times$, or $1 \times$). We found that this adjustment is harmful if applied to mini-batch size changes due to the short cycle, and therefore we only adjust the learning rate when the long cycle base shape changes.
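+
+In code, this adjustment is a one-liner; the function below is our own illustration rather than the released implementation:
+
+```python
+def long_cycle_lr(base_lr, long_cycle_batch_factor):
+    """Linear scaling rule [13], applied only when the long-cycle base shape
+    changes: the factor is 8, 4, 2, or 1 for the four base shapes. Short-cycle
+    mini-batch changes deliberately leave the learning rate untouched."""
+    return base_lr * long_cycle_batch_factor
+```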
+
+Fine-tuning Phase. If the baseline optimizer uses $L$ learning rate (LR) stages, then we apply the long and short cycles in the first $L - 1$ LR stages. We use the corresponding $L$ -th stage for fine-tuning to help match the training and testing distributions, similar to [42]. In the first half of the fine-tuning iterations we use the $L - 1$ -st learning rate and in the second half we use the final ( $L$ -th) learning rate. While fine-tuning we use the short cycle (as data augmentation), but not the long cycle.
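+
+As a small illustration of the fine-tuning phase (the names and the iteration bookkeeping are our own assumptions), the learning rate inside the final stage could be selected as follows:
+
+```python
+def finetuning_stage_lr(stage_lrs, iters_in_last_stage, i):
+    """Learning rate within the final (L-th) stage: the first half of the
+    fine-tuning iterations reuse the (L-1)-st stage LR, the second half the
+    final LR. The long cycle is off here; the short cycle is kept as data
+    augmentation. `i` indexes iterations within this stage."""
+    return stage_lrs[-2] if i < iters_in_last_stage // 2 else stage_lrs[-1]
+```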
+
+Batch Normalization. The behavior of Batch Normalization (BN) [20] depends on mini-batch statistics. In traditional trainers, the constant mini-batch size is also a hyperparameter that impacts BN behaviors (e.g., the noisiness of the statistics). As our multigrid method uses variable mini-batch sizes, it is desirable to decouple its impact on BN from that of the training speedup. The following heuristic works well in practice: we compute BN statistics with a standardized sub-mini-batch of size 8; when the short cycle increases the overall mini-batch size by $2 \times$ or $4 \times$, we likewise increase the BN sub-mini-batch size to 16 and 32, respectively.
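+
+A simplified stand-in for this heuristic (not the authors' actual implementation) is to split each mini-batch into fixed-size sub-batches and compute BN statistics per split:
+
+```python
+import torch
+import torch.nn as nn
+
+
+class SubBatchNorm3d(nn.Module):
+    """Compute BN statistics over fixed-size sub-mini-batches so that BN is
+    decoupled from the variable overall mini-batch size. Simplified sketch:
+    running statistics are updated once per sub-batch during training."""
+
+    def __init__(self, num_features, sub_batch_size=8):
+        super().__init__()
+        self.sub = sub_batch_size
+        self.bn = nn.BatchNorm3d(num_features)
+
+    def forward(self, x):
+        n = x.shape[0]
+        if self.training and n > self.sub and n % self.sub == 0:
+            chunks = x.split(self.sub, dim=0)
+            return torch.cat([self.bn(c) for c in chunks], dim=0)
+        return self.bn(x)
+```
+
+When the short cycle scales the mini-batch by $2 \times$ or $4 \times$, `sub_batch_size` would be raised to 16 or 32, matching the heuristic above.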
+
+# 4. Experiments on Kinetics
+
+We conduct ablation studies on the Kinetics-400 dataset [23], which is used in prior research and requires classifying each video into one of 400 categories. It contains $\sim 240\mathrm{k}$ training videos and $\sim 20\mathrm{k}$ validation videos on which we report results. Performance is measured by top-1 and top-5 accuracy.
+
+Baseline Model and Training. We use a ResNet-50 (R50) SlowFast network [9, 18] with a 32-frame fast pathway, speed ratio $\alpha = 4$ , and channel ratio $\beta = 1/8$ as our default model. Input frames are sampled at a temporal stride of 2.
+
+Our baseline training recipe follows Feichtenhofer et al. [9]. We run synchronous SGD for 112k iterations on 128 GPUs with a mini-batch size of 4 clips per GPU ( $\sim$ 239 epochs) with initial learning rate of 0.8. (We perform single GPU experiments in §5.) The learning rate is decreased by $10 \times$ at iterations 44k, 72k, and 92k. We use a weight decay of $10^{-4}$ , momentum of 0.9, and a linear learning rate warm-up [13] from 0.002 over 16k iterations. Input clips are random $224 \times 224$ spatial crops from clips that are randomly resized such that the shorter side $\in$ [256, 340] pixels.
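+
+For reference, this baseline schedule can be written as a simple function of the iteration index; the linear warm-up form below is an assumption on our part:
+
+```python
+def baseline_kinetics_lr(i, base_lr=0.8, warmup_start=0.002, warmup_iters=16000,
+                         steps=(44000, 72000, 92000), decay=0.1):
+    """Baseline recipe sketch: linear warm-up from 0.002 to 0.8 over the first
+    16k iterations, then 10x decay at iterations 44k, 72k, and 92k (112k total)."""
+    if i < warmup_iters:
+        return warmup_start + (base_lr - warmup_start) * i / warmup_iters
+    lr = base_lr
+    for step in steps:
+        if i >= step:
+            lr *= decay
+    return lr
+```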
+
+At test time, we sample 10 clips per video with uniform temporal spacing and combine the predictions with average pooling following [9,25,47]. We use $224 \times 224$ center crop testing by default [25,48] and present results with other settings in Supplementary Material.
+
+We select these training and inference procedures based on validation accuracy using the baseline training method. We adopt the exact same recipe for multigrid training experiments, aside from multigrid specific changes. This choice may put multigrid training at a disadvantage, but it reflects the realistic scenario in which one wants to apply multigrid training to accelerate an already known training schedule without further tuning.
+
+Evaluation. Speedup factors are computed from wall-clock training time on P100 GPUs with CUDA 9.2 and cuDNN 7.6.3. For fair comparison, the same hardware and software implementation is used for all methods. We note that multigrid training exploits larger mini-batches, which increases data loading throughput requirements. Training may become IO bound if the data loader is not optimized appropriately or if remote data access is used. With sufficient local disk and an optimized data loader, training is typically not IO bound.
+
+Multigrid Training Details. To sample data with spatial shape $h \times w$ that is smaller than $H \times W$ , we change the default random short-side interval to $[256\frac{h}{H}, 340]$ , noting that $w = h$ in our experiments. For the temporal dimension, we take $t (t < T)$ frames with random stride in $[2, 2\frac{T}{t}]$ .
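+
+A sketch of how a data loader might derive these per-grid sampling ranges (the function and the use of continuous random values are our assumptions; an actual implementation may round to integers):
+
+```python
+import random
+
+
+def grid_sampling_ranges(h, H, t, T):
+    """Per-grid data-loader ranges as described above (w = h in our setup): a
+    random shorter-side target in [256*h/H, 340] for spatial resizing, and a
+    random temporal stride in [2, 2*T/t]."""
+    short_side = random.uniform(256 * h / H, 340)
+    temporal_stride = random.uniform(2, 2 * T / t)
+    return short_side, temporal_stride
+```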
+
+# 4.1. Main Results
+
+We compare multigrid training to baseline training in Fig. 3. In addition to the default baseline, one could speed up training by using a smaller spatial-temporal shape with a larger mini-batch size and learning rate, so we also compare to this baseline variant. For each method, we experiment with training schedules that range from $0.25 \times$ to $3 \times$ the number of baseline epochs ($\sim 239$) to study the trade-off between training time and accuracy. Overall, multigrid training always achieves a better trade-off than baseline training. For example, multigrid training with both the long and short cycles can iterate through $1.5 \times$ more epochs than the baseline method, while requiring only $1/3.4 \times$ the number of iterations and $1/4.5 \times$ the training time, and achieving higher accuracy ($75.6\% \rightarrow 76.4\%$). The wall-clock speedup is greater than the iteration reduction factor, as a larger mini-batch with smaller space/time dimensions is more parallelism-friendly on modern GPUs. Both the long and short cycles improve the trade-off, and using both together performs the best.
+
+In Fig. 3 we also observe that baseline training suffers a decline in accuracy when training for $\geq 1.5 \times$ epochs. With the long and/or short cycles, no decrease in accuracy is observed for schedules up to $2.0 \times$ epochs, indicating that variable grids can help prevent overfitting.
+
+In the following we use multigrid training with long and short cycles and $1.5 \times$ more epochs than the baseline as our default since it obtains a good trade-off.
+
+
+Figure 3. Multigrid vs. baseline training. Each point corresponds to one model trained with a specific schedule choice. Annotations denote training epochs relative to the baseline $1.0 \times$ schedule. For example, $1.5 \times$ denotes training for $1.5 \times$ more epochs than the default $1.0 \times$ baseline schedule (112k iterations or $\sim 239$ epochs). We see that all variants of multigrid training achieve a better trade-off than baseline training, which uses a constant mini-batch shape. Also note that multigrid training can iterate through the same number of epochs more efficiently.
+
+# 4.2. Ablation Experiments
+
+Long Cycle Design. By default, we use $S = 4$ long cycle shapes with a $1.5 \times$ epoch schedule. In Table 1a, we explore using fewer shapes, where we take the last $S' < S$ shapes, for $S' \in \{1, 2, 3\}$ . The short cycle is used in these experiments, and the $S' = 1$ setting is equivalent to using the short cycle only. We run all variants for the same number of training iterations (to roughly preserve total training FLOPs), noting that methods which use fewer shapes will process fewer epochs due to having smaller mini-batches on average compared to the $S = 4$ design.
+
+We see that using each additional shape improves accuracy, saturating at $S = 4$ (default). The improvement in accuracy is possibly due to the larger number of examples seen by the model within the same number of iterations. Compared with $S = 1$ (i.e., short cycle only), our default choice improves the top-1 accuracy by an absolute $2.4\%$ ($74.0\% \rightarrow 76.4\%$), while being slightly faster ($4.0\times \rightarrow 4.5\times$). All results use the fine-tuning phase, which we find is beneficial to varying degrees in different settings. With the default schedule, it leads to a $0.4\%$ absolute gain ($76.0\% \rightarrow 76.4\%$; not shown in table).
+
+Short Cycle Design. Adding each input shape to the short cycle leads to a clear accuracy improvement, Table 1b. Our default short cycle design (3-shape) improves over 1-shape (i.e., no short cycle / long-cycle only) by absolute $1.9\%$ $(74.5\% \rightarrow 76.4\%)$ in top-1 accuracy.
+
+# 4.3. Generalization to Different Training Settings
+
+Next we study how multigrid training generalizes to different training settings that are common in practice.
+
+Pre-training. In our main results, we train models from random initialization. We see in Table 2a that with ImageNet [35] pre-training, our multigrid method obtains a similar speedup and performance gain. (We will present more results on ImageNet-pre-trained models in §4.4.)
+
+Temporal Shape. Next we show generalization of multigrid training for models of different temporal shapes $T$ . We compare models that use 16-frame, 32-frame (default), and 64-frame input clips. In all cases (Table 2b), multigrid training achieves a consistent accuracy gain and speedup. The 64-frame model enjoys the largest performance gain $(75.9\% \rightarrow 77.6\%)$ and the best speedup $(5.5\times)$ .
+
+Spatial Shape. We also demonstrate generalization of our method for models of different spatial shapes $H \times W$. We increase the baseline shape from $224 \times 224$ (default) to $320 \times 320$ and study the impact. Inference for the $320 \times 320$ model is analogous to the $224 \times 224$ case; we resize the shorter side to 352 pixels and test on center $320 \times 320$ crops. In Table 2c, we see that multigrid training leads to an even larger performance gain ($75.1\% \rightarrow 76.8\%$) and a more significant speedup ($6.5 \times$) in the $320 \times 320$ case. Also note that with the baseline method, $320 \times 320$ does not work better than $224 \times 224$, possibly due to overfitting, similar to what is reported in Tan et al. [41]. On the other hand, with multigrid training, spatial scaling brings improvement, possibly due to the data augmentation that multigrid training provides.
+
+# 4.4. Generalization to Different Models
+
+So far we have focused on the state-of-the-art SlowFast network [9] for our analysis. We next demonstrate generalization of multigrid training to different networks by presenting results using a standard R50-I3D model [3, 18] and its extension with non-local blocks (I3D-NL) [47].
+
+| | long cycle design | speedup | top-1 | top-5 |
+|---|---|---|---|---|
+| Baseline | - | - | 75.6 | 91.9 |
+| Multigrid | 1-shape (short cycle only) | 4.0× | 74.0 | 91.4 |
+| Multigrid | 2-shape | 4.3× | 75.5 | 92.1 |
+| Multigrid | 3-shape | 4.4× | 76.2 | 92.4 |
+| Multigrid | 4-shape (default) | 4.5× | 76.4 | 92.4 |
+
+(a) Long cycle design (with default short cycle)
+
+| | short cycle design | speedup | top-1 | top-5 |
+|---|---|---|---|---|
+| Baseline | - | - | 75.6 | 91.9 |
+| Multigrid | 1-shape (long cycle only) | 4.2× | 74.5 | 91.6 |
+| Multigrid | 2-shape | 4.3× | 75.5 | 92.1 |
+| Multigrid | 3-shape (default) | 4.5× | 76.4 | 92.4 |
+
+(b) Short cycle design (with default long cycle)
+Table 1. Ablation Study. We perform ablations on Kinetics-400 using an R50-SlowFast network. We analyze the impact of the long cycle (Table 1a) and short cycle (Table 1b) designs. All variants of multigrid training use the same number of training iterations as our default $1.5 \times$ epoch schedule; this roughly preserves the total training FLOPs. We report wall-clock speedup relative to the baseline trained for $1.0 \times$ epochs.
+
+| | pre-train? | speedup | top-1 | top-5 |
+|---|---|---|---|---|
+| Baseline | | - | 75.6 | 91.9 |
+| Multigrid | | 4.5× | 76.4 | 92.4 |
+| Baseline | ✓ | - | 75.4 | 91.9 |
+| Multigrid | ✓ | 4.5× | 76.0 | 92.4 |
+
+(a) Pre-training
+
+| T | | speedup | top-1 | top-5 |
+|---|---|---|---|---|
+| 16 | Baseline | - | 74.8 | 91.4 |
+| 16 | Multigrid | 4.0× | 75.2 | 91.9 |
+| 32 | Baseline | - | 75.6 | 91.9 |
+| 32 | Multigrid | 4.5× | 76.4 | 92.4 |
+| 64 | Baseline | - | 75.9 | 92.1 |
+| 64 | Multigrid | 5.5× | 77.6 | 93.2 |
+
+(b) Temporal shape $T$
+
+| H×W | | speedup | top-1 | top-5 |
+|---|---|---|---|---|
+| 224 | Baseline | - | 75.6 | 91.9 |
+| 224 | Multigrid | 4.5× | 76.4 | 92.4 |
+| 320 | Baseline | - | 75.1 | 91.8 |
+| 320 | Multigrid | 6.5× | 76.8 | 92.8 |
+
+(c) Spatial shape $H\times W$
+Table 2. Generalization Analysis. We study how multigrid training generalizes to models both with and without ImageNet pre-training (Table 2a) and models of different temporal (Table 2b) and spatial (Table 2c) shapes. All experiments use R50-SlowFast with results on Kinetics-400. We use the default setting for multigrid training ($1.5 \times$ more epochs, corresponding to $3.4 \times$ fewer iterations than baseline) in all settings. We observe that the default choice brings consistent speedup and performance gain in all cases.
+
+Implementation Details. Both models are ImageNet-pretrained with 3D convolutions inflated from 2D convolutions following common practice [3,10,47]. Each input clip consists of 16 frames, sampled at a stride of 4. I3D-NL additionally contains 5 (dot product) non-local blocks [47] in $\mathrm{res}_3$ and $\mathrm{res}_4$ stages. The exact model specification is given in Supplementary Material.
+
+The baseline recipe trains for 100k iterations using 128 GPUs, with a mini-batch size of 2 clips per GPU ($\sim$ 106 epochs) and a learning rate of 0.04, which is decreased by a factor of 10 at iterations 37.5k and 75k. We do not use learning rate warm-up [13] following prior work [47]. Other training details are analogous to SlowFast training. We note again that this training recipe is selected to be the best for the baseline training method and we apply multigrid training on top without further tuning.
+
+Evaluation. We summarize the results in Table 3. For both I3D and I3D-NL, multigrid training with the default schedule ($1.5 \times$ epochs) obtains similar or better accuracy, while being up to $3.9 \times$ faster. We also experiment with a shorter baseline schedule ('baseline $\frac{1}{3.3}$' in the table), which trains for the same number of iterations as multigrid training. The shorter baseline schedule obtains a lower accuracy ($3.7\%$ and $3.2\%$ absolute top-1 lower than multigrid for I3D and I3D-NL, respectively). We also see that I3D-NL has a lower speedup than I3D. This is in part because the NL operator is less optimized than convolution and consumes a large portion of the training time.
+
+| model | | speedup | top-1 | top-5 |
+|---|---|---|---|---|
+| I3D | Baseline | - | 74.4 | 91.4 |
+| I3D | Baseline 1/3.3 | 3.3× | 71.1 | 89.9 |
+| I3D | Multigrid | 3.9× | 74.8 | 91.7 |
+| I3D-NL | Baseline | - | 75.5 | 92.1 |
+| I3D-NL | Baseline 1/3.3 | 3.3× | 72.3 | 90.6 |
+| I3D-NL | Multigrid | 3.3× | 75.5 | 92.4 |
+
+Table 3. Kinetics-400 accuracy with I3D and I3D-NL. While developed on SlowFast [9], multigrid training provides a consistent speedup and performance gain with I3D [3] and I3D-NL [47].
+
+We observe consistent improvements with larger backbone models (R101); see Supplementary Material.
+
+# 5. Case Study: 1-GPU Training on Kinetics
+
+Our experiments thus far use a large number of GPUs (128) in parallel. However, a more common training recipe may use far fewer GPUs (e.g., 1 to 8). Given that one of our goals is to make video research more accessible by reducing computational requirements, it is important to explore the application of our multigrid method in the few-GPU regime, without any tuning.
+
+As a case study, we use a single GPU to train an I3D model on Kinetics-400 using the quick training recipe from the public repository of Wang et al. [47]. We apply multigrid training on top without further tuning.
+
+| | training time (days) | top-1 | top-5 |
+|---|---|---|---|
+| Baseline | 6.7 | 72.5 | 90.4 |
+| Multigrid | 2.0 | 72.5 | 90.4 |
+
+Table 4. Case study: 1-GPU training on Kinetics-400. Multigrid training reduces the training time from nearly 1 week to 2 days on a single GPU. We hope the reduced training time will make video understanding research more accessible and economical.
+
+This schedule trains for 1200k iterations (after adjusting with the linear scaling rule [13]) on one GPU with 8 clips ($\sim$ 40 epochs). The learning rate is 0.00125, which is decreased by a factor of 10 at iterations 600k and 1000k. Dropout and random scaling are disabled to accelerate convergence given the short schedule. Each input clip consists of 8 frames, sampled at a stride of 8, when using the baseline optimizer. Other training details are the same as in the I3D experiments.
+
+Table 4 shows that multigrid training generalizes well out-of-the-box to a few-GPU, short-schedule setting. With multigrid training we are able to achieve $72.5\%$ (73.1% with 30-crop testing [47]) top-1 accuracy in 2 days using only 1 GPU, while the baseline method would need nearly 1 week. (When using a small model, we observe a smaller wall-clock speedup of $\sim 3.3\times$ compared to a larger model, which typically yields a $\sim 4.5\times$ speedup). We hope the reduced training time with multigrid training will make video understanding research more accessible and economical.
+
+# 6. Experiments on Something-Something V2
+
+We next evaluate multigrid training on the Something-Something V2 dataset [14], which contains 169k training and 25k validation videos. Each video shows an interaction with everyday objects. The task is classification with 174 action classes. Performance is evaluated by top-1 and top-5 accuracy. This task is known to require more 'temporal modeling' to solve than Kinetics [49].
+
+Implementation Details. We use an R50-SlowFast model [9, 18] with a 64-frame fast pathway, speed ratio $\alpha = 4$, and channel ratio $\beta = 1/8$. The model is pre-trained on Kinetics-400 following prior work [29]. The baseline training recipe trains for $230k$ iterations on 8 GPUs, with a mini-batch size of 2 clips per GPU and a learning rate of 0.03, which is decreased by a factor of 10 at iterations $150k$ and $190k$. Other training details are analogous to the Kinetics experiments; see Supplementary Material for details.
+
+Results. Similar to what we observe on Kinetics, multigrid training obtains a better trade-off than baseline training on Something-Something V2 (Table 5). With the default $1.5 \times$ epoch training, multigrid training is $5.6 \times$ faster while obtaining a slightly higher accuracy. Multigrid training behaves consistently for the 'spatial heavy' Kinetics dataset and the 'temporal heavy' Something-Something V2 dataset.
+
+| | speedup | top-1 | top-5 |
+|---|---|---|---|
+| Baseline | - | 60.9±0.31 | 87.2±0.13 |
+| Baseline 1/5.2 | 5.2× | 54.6±0.13 | 83.0±0.14 |
+| Multigrid 1.0× | 8.3× | 60.0±0.31 | 86.8±0.05 |
+| Baseline 1/3.4 | 3.4× | 57.3±0.13 | 84.7±0.15 |
+| Multigrid 1.5× (default) | 5.6× | 61.2±0.18 | 87.4±0.12 |
+| Baseline 1/2.6 | 2.6× | 58.7±0.06 | 85.8±0.15 |
+| Multigrid 2.0× | 4.2× | 61.7±0.20 | 87.8±0.12 |
+
+Table 5. Results on Something-Something V2. Multigrid training achieves a better trade-off than baseline training. Results are the mean and standard deviation over 5 runs.
+
+| | speedup | mAP (%) |
+|---|---|---|
+| Baseline | - | 38.0±0.18 |
+| Baseline 1/5.3 | 5.3× | 27.5±0.15 |
+| Multigrid 1.0× | 8.6× | 36.8±0.31 |
+| Baseline 1/3.5 | 3.5× | 31.5±0.26 |
+| Multigrid 1.5× (default) | 5.7× | 38.2±0.06 |
+| Baseline 1/2.6 | 2.6× | 33.6±0.13 |
+| Multigrid 2.0× | 4.3× | 37.4±0.15 |
+
+Table 6. Results on Charades. Multigrid training shows consistent speedups compared with the other datasets. Results are the mean and standard deviation over 5 runs.
+
+# 7. Experiments on Charades
+
+We finally evaluate our method on the Charades dataset [36], which is relatively small, consisting of only 9,848 videos in 157 action classes. The task is to predict all actions in a video. Performance is measured by mAP.
+
+Implementation Details. We use the same R50-SlowFast model [9, 18], with the same Kinetics pre-training as the Something-Something experiments. Training details are available in Supplementary Material.
+
+Results. Overall we observe consistent results compared with Kinetics and Something-Something V2 (Table 6). The default multigrid training is $5.7 \times$ faster, while achieving slightly better mAP. Overall, we see that even for the smaller Charades dataset, with strong large-scale pretraining, multigrid training is beneficial.
+
+# 8. Conclusion
+
+We propose a multigrid method for fast training of video models. Our method varies the sampling grid and the minibatch size during training, and can process the same number of epochs using a small fraction of the computation of the baseline trainer. With a single out-of-the-box setting, it works on multiple datasets and models, and consistently brings a $\sim 3 - 6\times$ speedup with comparable or higher accuracy. It works across a spectrum of hardware settings from 128 GPU distributed training to single GPU training. We hope the reduced training time will make video understanding research more accessible, scalable, and economical.
+
+Acknowledgments. This work was supported in part by the National Science Foundation (Grant No. IIS-1845485) and the Facebook Fellowship.
+
+# References
+
+[1] William L Briggs, Van Emden Henson, and Steve F McCormick. A Multigrid Tutorial, 2nd Edition. SIAM, 2000. 2
+[2] Joao Carreira, Viorica Patraucean, Laurent Mazare, Andrew Zisserman, and Simon Osindero. Massively parallel video networks. In ECCV, 2018. 2
+[3] Joao Carreira and Andrew Zisserman. Quo vadis, action recognition? a new model and the kinetics dataset. In CVPR, 2017. 2, 6, 7
+[4] Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Haichen Shen, Meghan Cowan, Leyuan Wang, Yuwei Hu, Luis Ceze, et al. TVM: An automated end-to-end optimizing compiler for deep learning. In OSDI, 2018. 2
+[5] Yunpeng Chen, Yannis Kalantidis, Jianshu Li, Shuicheng Yan, and Jiashi Feng. Multi-fiber networks for video recognition. In ECCV, 2018. 2
+[6] Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: Efficient primitives for deep learning. arXiv preprint arXiv:1410.0759, 2014. 2
+[7] Dan C Ciresan, Ueli Meier, Jonathan Masci, Luca M Gambardella, and Jurgen Schmidhuber. High-performance neural networks for visual object classification. arXiv preprint arXiv:1102.0183, 2011. 3
+[8] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 2011. 2
+[9] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. SlowFast networks for video recognition. In ICCV, 2019. 1, 2, 4, 5, 6, 7, 8
+[10] Christoph Feichtenhofer, Axel Pinz, and Richard Wildes. Spatiotemporal residual networks for video action recognition. In NIPS, 2016. 2, 7
+[11] Deepti Ghadiyaram, Du Tran, and Dhruv Mahajan. Large-scale weakly-supervised pre-training for video action recognition. In CVPR, 2019. 2
+[12] Rohit Girdhar, Joao Carreira, Carl Doersch, and Andrew Zisserman. Video action transformer network. In CVPR, 2019. 2
+[13] Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017. 2, 4, 5, 7, 8
+[14] Raghav Goyal, Samira Ebrahimi Kahou, Vincent Michalski, Joanna Materzynska, Susanne Westphal, Heuna Kim, Valentin Haenel, Ingo Fruend, Peter Yianilos, Moritz Mueller-Freitag, et al. The "Something Something" video database for learning and evaluating visual common sense. In ICCV, 2017. 2, 8
+[15] Juncai He and Jinchao Xu. MgNet: A unified framework of multigrid and convolutional neural network. Science China Mathematics, 2019. 2
+[16] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In ICCV, 2017. 2
+
+[17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. PAMI, 2015. 2
+[18] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016. 2, 4, 5, 6, 8
+[19] Noureldien Hussein, Efstratios Gavves, and Arnold WM Smeulders. Timeception for complex action recognition. In CVPR, 2019. 2
+[20] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015. 5
+[21] Boyuan Jiang, MengMeng Wang, Weihao Gan, Wei Wu, and Junjie Yan. STM: Spatiotemporal and motion encoding for action recognition. In ICCV, 2019. 2
+[22] Norman P Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, et al. In-datacenter performance analysis of a tensor processing unit. In ISCA, 2017. 2
+[23] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. The Kinetics human action video dataset. arXiv preprint arXiv:1705.06950, 2017. 2, 5
+[24] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. 2
+[25] Bruno Korbar, Du Tran, and Lorenzo Torresani. SCSampler: Sampling salient clips from video for efficient action recognition. In ICCV, 2019. 5
+[26] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012. 2, 3, 4
+[27] Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1989. 1
+[28] Myunggi Lee, Seungeui Lee, Sungjoon Son, Gyutae Park, and Nojun Kwak. Motion feature network: Fixed motion filter for action recognition. In ECCV, 2018. 2
+[29] Ji Lin, Chuang Gan, and Song Han. Temporal shift module for efficient video understanding. In ICCV, 2019. 2, 8
+[30] Chenxu Luo and Alan L Yuille. Grouped spatial-temporal aggregation for efficient action recognition. In ICCV, 2019. 2
+[31] Brais Martinez, Davide Modolo, Yuanjun Xiong, and Joseph Tighe. Action recognition with spatial-temporal discriminative filter banks. In ICCV, 2019. 2
+[32] AJ Piergiovanni, Anelia Angelova, Alexander Toshev, and Michael S Ryoo. Evolving space-time neural architectures for videos. In ICCV, 2019. 2
+[33] Siyuan Qiao, Zhe Lin, Jianming Zhang, and Alan L Yuille. Neural rejuvenation: Improving deep network training by enhancing computational resource utilization. In CVPR, 2019. 2
+
+[34] Zhaofan Qiu, Ting Yao, and Tao Mei. Learning spatiotemporal representation with pseudo-3D residual networks. In ICCV, 2017. 2
+[35] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015. 6
+[36] Gunnar A Sigurdsson, Gül Varol, Xiaolong Wang, Ali Farhadi, Ivan Laptev, and Abhinav Gupta. Hollywood in homes: Crowdsourcing data collection for activity understanding. In ECCV, 2016. 2, 8
+[37] Patrice Y Simard, David Steinkraus, and John C Platt. Best practices for convolutional neural networks applied to visual document analysis. In ICDAR, 2003. 3
+[38] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. 2, 4
+[39] Samuel L Smith, Pieter-Jan Kindermans, Chris Ying, and Quoc V Le. Don't decay the learning rate, increase the batch size. In ICLR, 2018. 2
+[40] Lin Sun, Kui Jia, Dit-Yan Yeung, and Bertram E Shi. Human action recognition using factorized spatio-temporal convolutional networks. In ICCV, 2015. 2
+[41] Mingxing Tan and Quoc V Le. EfficientNet: Rethinking model scaling for convolutional neural networks. In ICML, 2019. 6
+[42] Hugo Touvron, Andrea Vedaldi, Matthijs Douze, and Hervé Jégou. Fixing the train-test resolution discrepancy. In NeurIPS, 2019. 4, 5
+[43] Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3D convolutional networks. In ICCV, 2015. 2, 4
+[44] Du Tran, Heng Wang, Lorenzo Torresani, and Matt Feiszli. Video classification with channel-separated convolutional networks. In ICCV, 2019. 2
+[45] Du Tran, Heng Wang, Lorenzo Torresani, Jamie Ray, Yann LeCun, and Manohar Paluri. A closer look at spatiotemporal convolutions for action recognition. In CVPR, 2018. 1, 2
+[46] Limin Wang, Wei Li, Wen Li, and Luc Van Gool. Appearance-and-relation networks for video classification. In CVPR, 2018. 2
+[47] Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In CVPR, 2018. 1, 2, 4, 5, 6, 7, 8
+[48] Chao-Yuan Wu, Christoph Feichtenhofer, Haoqi Fan, Kaiming He, Philipp Krähenbühl, and Ross Girshick. Long-term feature banks for detailed video understanding. In CVPR, 2019. 2, 5
+[49] Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu, and Kevin Murphy. Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification. In ECCV, 2018. 2, 8
+[50] Yang You, Zhao Zhang, Cho-Jui Hsieh, James Demmel, and Kurt Keutzer. ImageNet training in minutes. In ICPP, 2018. 2
+
+[51] Mohammadreza Zolfaghari, Kamaljeet Singh, and Thomas Brox. ECO: Efficient convolutional network for online video understanding. In ECCV, 2018. 2
\ No newline at end of file
diff --git a/amultigridmethodforefficientlytrainingvideomodels/images.zip b/amultigridmethodforefficientlytrainingvideomodels/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..28bc8ed138378b0d2369a90f5887c255925c8fa9
--- /dev/null
+++ b/amultigridmethodforefficientlytrainingvideomodels/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:581bce811569c5c696ba556d46a603fb9b40bad69c587ab019cf84e6eac93548
+size 279412
diff --git a/amultigridmethodforefficientlytrainingvideomodels/layout.json b/amultigridmethodforefficientlytrainingvideomodels/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..88442722e7a2ac526864f1dfc422f098b32ffaa9
--- /dev/null
+++ b/amultigridmethodforefficientlytrainingvideomodels/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6c631196f5ee6b4aaf08717fc5e435a996124272e2db4111466caadc9f6cd181
+size 436502
diff --git a/amultihypothesisapproachtocolorconstancy/ab4d9ac0-6c3a-4192-960e-216ce5f9e4c3_content_list.json b/amultihypothesisapproachtocolorconstancy/ab4d9ac0-6c3a-4192-960e-216ce5f9e4c3_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a820891bfeac83decb84b1bdebede02576120f60
--- /dev/null
+++ b/amultihypothesisapproachtocolorconstancy/ab4d9ac0-6c3a-4192-960e-216ce5f9e4c3_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:00b687e207d5fc15d810df8d896039e1135278b84788b3aff9dbf6b99175007b
+size 87616
diff --git a/amultihypothesisapproachtocolorconstancy/ab4d9ac0-6c3a-4192-960e-216ce5f9e4c3_model.json b/amultihypothesisapproachtocolorconstancy/ab4d9ac0-6c3a-4192-960e-216ce5f9e4c3_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..69298ee72e057fa0275d86e31cedd7aaf783d090
--- /dev/null
+++ b/amultihypothesisapproachtocolorconstancy/ab4d9ac0-6c3a-4192-960e-216ce5f9e4c3_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:010fd32cdd3c7307b91d5e919188472c3945f1fd79b785b84f3cfebe99270ef3
+size 108386
diff --git a/amultihypothesisapproachtocolorconstancy/ab4d9ac0-6c3a-4192-960e-216ce5f9e4c3_origin.pdf b/amultihypothesisapproachtocolorconstancy/ab4d9ac0-6c3a-4192-960e-216ce5f9e4c3_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..3674c60fbb53c6e9f10534d3647498b5a59b63a4
--- /dev/null
+++ b/amultihypothesisapproachtocolorconstancy/ab4d9ac0-6c3a-4192-960e-216ce5f9e4c3_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e2d07aedd0b2acef8ddffdc9f27ae5975120b25b798b847989e7bf50ea263dfd
+size 516079
diff --git a/amultihypothesisapproachtocolorconstancy/full.md b/amultihypothesisapproachtocolorconstancy/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..cdf2a99b25481dc0884e9119fdc177e805e9b364
--- /dev/null
+++ b/amultihypothesisapproachtocolorconstancy/full.md
@@ -0,0 +1,370 @@
+# A Multi-Hypothesis Approach to Color Constancy
+
+Daniel Hernandez-Juarez1, Sarah Parisot1,2, Benjamin Busam1,3, Aleks Leonardis1, Gregory Slabaugh1, Steven McDonagh1
+
+dhernandez0@gmail.com,
+
+{sarah.parisot,benjamin.busam,ales.leonardis,gregory.slabaugh,steven.mcdonagh}@huawei.com
+
+1Huawei Noah's Ark Lab
+
+2Mila, Montréal
+
+$^{3}$ Technical University of Munich
+
+# Abstract
+
+Contemporary approaches frame the color constancy problem as learning camera specific illuminant mappings. While high accuracy can be achieved on camera specific data, these models depend on camera spectral sensitivity and typically exhibit poor generalisation to new devices. Additionally, regression methods produce point estimates that do not explicitly account for potential ambiguities among plausible illuminant solutions, due to the ill-posed nature of the problem. We propose a Bayesian framework that naturally handles color constancy ambiguity via a multi-hypothesis strategy. Firstly, we select a set of candidate scene illuminants in a data-driven fashion and apply them to a target image to generate a set of corrected images. Secondly, we estimate, for each corrected image, the likelihood of the light source being achromatic using a camera-agnostic CNN. Finally, our method explicitly learns a final illumination estimate from the generated posterior probability distribution. Our likelihood estimator learns to answer a camera-agnostic question and thus enables effective multi-camera training by disentangling illuminant estimation from the supervised learning task. We extensively evaluate our proposed approach and additionally set a benchmark for novel sensor generalisation without re-training. Our method provides state-of-the-art accuracy on multiple public datasets (up to $11\%$ median angular error improvement) while maintaining real-time execution.
+
+# 1. Introduction
+
+Color constancy is an essential part of digital image processing pipelines. When treated as a computational process, this involves estimation of scene light source color, present at capture time, and correcting an image such that its appearance matches that of the scene captured under an achromatic light source. The algorithmic process of recovering the illuminant of a scene is commonly known as computational
+
+
+Figure 1. Our multi-hypothesis strategy allows us to leverage multi-camera datasets. Example image taken from the NUS dataset [14]. Single camera training: (a) the state-of-the-art method FFCC [7] and (b) our method obtain similar angular error. Training with all 8 dataset cameras: aggregate all images to (c) define the FFCC histogram center and (d) use an illuminant candidate set per camera. $\left[\frac{r}{g},\frac{b}{g}\right]$ color space plots show training set illuminant distributions. Each camera is encoded with a different color in (d) to highlight camera-specific illuminants. Our model leverages the extra data to achieve lower angular error. Images are rendered in sRGB color space.
+
+Color Constancy (CC) or Automatic White Balance (AWB). Accurate estimation is essential for visual aesthetics [24], as well as downstream high-level computer vision tasks [2, 4, 13, 17] that typically require color-unbiased and device-independent images.
+
+Under the prevalent assumption that the scene is illuminated by a single or dominant light source, the observed pixels of an image are typically modelled using the physical model of Lambertian image formation captured under a trichromatic photosensor:
+
+$$
+\rho_{k}(X) = \int_{\Omega} E(\lambda)\, S(\lambda, X)\, C_{k}(\lambda)\, d\lambda \quad k \in \{R, G, B\}. \tag{1}
+$$
+
+where $\rho_{k}(X)$ is the intensity of color channel $k$ at pixel location $X$, $\lambda$ the wavelength of light such that $E(\lambda)$ represents the spectrum of the illuminant, $S(\lambda, X)$ the surface reflectance at pixel location $X$, and $C_{k}(\lambda)$ the camera sensitivity function for channel $k$, considered over the spectrum of wavelengths $\Omega$.
+
+The goal of computational CC then becomes estimation of the global illumination color $\rho_{k}^{E}$ where:
+
+$$
+\rho_{k}^{E} = \int_{\Omega} E(\lambda)\, C_{k}(\lambda)\, d\lambda \quad k \in \{R, G, B\}. \tag{2}
+$$
+
+Finding $\rho_{k}^{E}$ in Eq. (2) results in an ill-posed problem due to the existence of infinitely many combinations of illuminant and surface reflectance that result in identical observations at each pixel $X$ .
+
+A natural and popular solution for learning-based color constancy is to frame the problem as a regression task [1, 28, 25, 10, 48, 34, 9]. However, typical regression methods provide a point estimate and do not offer any information regarding possible alternative solutions. Solution ambiguity is present in many vision domains [45, 36] and is particularly problematic in the cases where multi-modal solutions exist [35]. Specifically for color constancy we note that, due to the ill-posed nature of the problem, multiple illuminant solutions are often possible with varying probability. Data-driven approaches that learn to directly estimate the illuminant result in learning tasks that are inherently camera-specific due to the camera sensitivity function, c.f. Eq. (2). This observation will often manifest as a sensor domain gap; models trained on a single device typically exhibit poor generalisation to novel cameras.
+
+In this work, we propose to address the ambiguous nature of the color constancy problem through multiple hypothesis estimation. Using a Bayesian formulation, we discretise the illuminant space and estimate the likelihood that each considered illuminant accurately corrects the observed image. We evaluate how plausible an image is after illuminant correction, and gather a discrete set of plausible solutions in the illuminant space. This strategy can be interpreted as framing color constancy as a classification problem, similar to recent promising work in this direction [6, 7, 38]. Discretisation strategies have also been successfully employed in other computer vision domains, such as 3D pose estimation [35] and object detection [42, 43], resulting in e.g. state of the art accuracy improvement.
+
+In more detail, we propose to decompose the AWB task into three sub-problems: a) selection of a set of candidate illuminants, b) learning to estimate the likelihood that an image, corrected by a candidate, is illuminated achromatically, and c) combining candidate illuminants, using the estimated posterior probability distribution, to produce a final output.
+
+We correct an image with all candidates independently and evaluate the likelihood of each solution with a shallow CNN. Our network learns to estimate the likelihood of white balance correctness for a given image. In contrast to prior work, we disentangle camera-specific illuminant estimation from the learning task, thus allowing us to train a single, device-agnostic AWB model that can effectively leverage multi-device data. We avoid the distribution shift and resulting domain gap problems [1, 41, 22] associated with camera-specific training, and propose a well-founded strategy to leverage multiple datasets. Principled combination of datasets is of high value for learning-based color constancy given the typically small size of individual color constancy datasets (on the order of only hundreds of images). See Figure 1.
+
+Our contributions can be summarised as:
+
+1. We decompose the AWB problem into a novel multi-hypothesis three stage pipeline.
+2. We introduce a multi-camera learning strategy that allows us to leverage multi-device datasets and improves accuracy over single-camera training.
+3. We provide a training-free model adaptation strategy for new cameras.
+4. We report improved state-of-the-art performance on two popular public datasets (NUS [14], Cube+ [5]) and competitive results on Gehler-Shi [47, 23].
+
+# 2. Related work
+
+Classical color constancy methods utilise low-level statistics to realise various instances of the gray-world assumption: the average reflectance in a scene under a neutral light source is achromatic. Gray-World [12] and its extensions [18, 50] are based on these assumptions that tie scene reflectance statistics (e.g. mean, max reflectance) to the achromaticity of scene color.
+
+Related assumptions define perfect reflectance [32, 20] and result in White-Patch methods. Statistical methods are fast and typically contain few free parameters, however their performance is highly dependent on strong scene content assumptions and these methods falter in cases where these assumptions fail to hold.
+
+An early Bayesian framework [19] used Bayes' rule to compute the posterior distribution for the illuminants and scene surfaces. They model the prior of the illuminant and the surface reflectance as a truncated multivariate normal distribution on the weights of a linear model. Other Bayesian works [44, 23] discretise the illuminant space and
+
+
+Figure 2. Method overview: we first generate a list of $n$ candidate illuminants $\ell_{i}$ (candidate illuminants are shown left of the respective corrected images) using $K$ -means clustering [33]. We correct the input image with each of the $n$ candidates independently and then estimate the likelihood $o_{i}$ of each corrected image with our network. We combine illuminant candidates using the posterior probability distribution to generate an illuminant estimation $\ell^{*}$ . The error is back-propagated through the network using angular error loss $\mathcal{L}$ . The $[\frac{r}{g}, \frac{b}{g}]$ plot in the upper-right illustrates the posterior probability distribution (triangles encoded from blue to red) of the candidates $\ell_{i}$ , the final prediction vector $\ell^{*}$ (blue circle) and the ground-truth illuminant $\ell^{GT}$ (green circle). Images are rendered in sRGB color space.
+
+
+
+model the surface reflectance priors by learning real world histogram frequencies; in [44] the prior is modelled as a uniform distribution over a subset of illuminants while [23] uses the empirical distribution of the training illuminants. Our work uses the Bayesian formulation proposed in previous works [44, 19, 23]. We estimate the likelihood probability distribution with a CNN which also explicitly learns to model the prior distribution for each illuminant.
+
+Fully supervised methods. Early learning-based works [21, 53, 52] comprise combinational and direct approaches, typically relying on hand-crafted image features which limited their overall performance. Recent fully supervised convolutional color constancy work offers state-of-the-art estimation accuracy. Both local patch-based [9, 48, 10] and full image input [6, 34, 7, 25, 28] have been considered, investigating different model architectures [9, 10, 48] and the use of semantic information [28, 34, 7].
+
+Some methods frame color constancy as a classification problem, e.g. CCC [6] and the follow-up refinement FFCC [7], by using a color space that identifies image reillumination with a histogram shift. Thus, they elegantly and efficiently evaluate different illuminant candidates. Our method also discretises the illuminant space but we explicitly select the candidate illuminants, allowing for multi-camera training, while FFCC [7] is constrained to use all histogram bins as candidates and to single-camera training.
+
+The method of [38] uses $K$ -means [33] to cluster illuminants of the dataset and then applies a CNN to frame the problem as a classification task; network input is a single (pre-white balanced) image and output results in $K$ class probabilities, representing the prospect of each illuminant (each class) explaining the correct image illumination. Our method first chooses candidate illuminants similarly, however, the key difference is that our model learns to infer whether an image is well white balanced or not. We ask this question $K$ times by correcting the image, independently, with each illuminant candidate. This affords an independent estimation of the likelihood for each illuminant and thus enables multi-device training to improve results.
+
+Multi-device training. The method of [1] introduces a two-CNN approach; the first network learns a 'sensor independent' linear transformation (a $3\times 3$ matrix), the RGB image is transformed to this 'canonical' color space, and then a second network predicts the illuminant. The method is trained on multiple datasets, excluding the test camera, and obtains competitive results.
+
+The work of [37] affords fast adaptation to previously unseen cameras, and robustness to changes in capture device by leveraging annotated samples across different cameras and datasets in a meta-learning framework.
+
+A recent approach [8] assumes that sRGB images collected from the web are well white balanced; therefore, a simple de-gamma correction is applied to approximate an inverse tone mapping, and achromatic pixels are then found with a CNN to predict the illuminant. These web images were captured with unknown cameras, were processed by different ISP pipelines and might have been modified with image editing software. Despite these additional assumptions, the method achieves promising results, although these are not comparable with the supervised state-of-the-art.
+
+In contrast we propose an alternative technique to enable multi-camera training and mitigate well understood sensor domain-gaps. We can train a single CNN using images captured by different cameras through the use of camera-dependent illuminant candidates. This property, of accounting for camera-dependent illuminants, affords fast model adaption; accurate inference is achievable for images captured by cameras not seen during training, if camera illuminant candidates are available (removing the need for model re-training or fine-tuning). We provide further methodological detail of these contributions and evidence towards their efficacy in Sections 3 and 4 respectively.
+
+# 3. Method
+
+Let $\mathbf{y} = (y_r, y_g, y_b)$ be a pixel from an input image $Y$ in linear RGB space. We model the global illumination, Eq. (2), with the standard linear model [51], whereby each pixel $\mathbf{y}$ is the product of the surface reflectance $\mathbf{r} = (r_r, r_g, r_b)$ and a global illuminant $\ell = (\ell_r, \ell_g, \ell_b)$ shared by all pixels:
+
+$$
+y_{k} = r_{k} \cdot \ell_{k} \quad k \in \{R, G, B\}. \tag{3}
+$$
+
+Given $Y = (\mathbf{y}_1, \ldots, \mathbf{y}_m)$ , comprising $m$ pixels, and $R = (\mathbf{r}_1, \ldots, \mathbf{r}_m)$ , our goal is to estimate $\ell$ and produce $R = \mathrm{diag}(\ell)^{-1}Y$ .
+
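+As a concrete illustration, the correction $R = \mathrm{diag}(\ell)^{-1}Y$ is simply a per-channel division of the image by the illuminant colour. A minimal NumPy sketch (ours, for illustration only):
+
+```python
+import numpy as np
+
+def correct_image(image: np.ndarray, illuminant: np.ndarray) -> np.ndarray:
+    """Apply R = diag(l)^-1 Y: divide each linear-RGB channel by the illuminant colour."""
+    ell = np.asarray(illuminant, dtype=float)
+    ell = ell / np.linalg.norm(ell)          # the overall scale only affects exposure
+    return image / ell[None, None, :]        # image: H x W x 3, broadcast over pixels
+```
+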
+In order to estimate the correct illuminant to adjust the input image $Y$, we propose to frame the CC problem with a probabilistic generative model with unknown surface reflectances and illuminant. We consider a set $\ell_i \in \mathbb{R}^3, i \in \{1, \dots, n\}$ of candidate illuminants, each of which is applied to $Y$ to generate a set of $n$ tentatively corrected images $\mathrm{diag}(\ell_i)^{-1}Y$. Using the set of corrected images as inputs, we then train a CNN to identify the most probable illuminants such that the final estimated illuminant is a linear combination of the candidates. In this section, we first introduce our general Bayesian framework, followed by our proposed implementation of the main building blocks of the model. An overview of the method can be seen in Figure 2.
+
+# 3.1. Bayesian approach to color constancy
+
+Following the Bayesian formulation previously considered [44, 19, 23], we assume that the color of the light and the surface reflectance are independent. Formally
+
+$\mathrm{P}(\ell ,R) = \mathrm{P}(\ell)\mathrm{P}(R),$ i.e. knowledge of the surface reflectance provides us with no additional information about the illuminant, $\mathrm{P}(\ell \mid R) = \mathrm{P}(\ell)$ . Based on this assumption we decompose these factors and model them separately.
+
+Using Bayes' rule, we define the posterior distribution of the illuminant $\ell$ given the input image $Y$ as:
+
+$$
+\mathrm{P}(\ell \mid Y) = \frac{\mathrm{P}(Y \mid \ell)\, \mathrm{P}(\ell)}{\mathrm{P}(Y)}. \tag{4}
+$$
+
+We model the likelihood of an observed image $Y$ for a given illuminant $\ell$ :
+
+$$
+\begin{array}{l} \mathrm{P}(Y \mid \ell) = \int_{r} \mathrm{P}(Y \mid \ell, R = r)\, \mathrm{P}(R = r)\, dr \tag{5} \\ = \mathrm{P}(R = \operatorname{diag}(\ell)^{-1} Y) \end{array}
+$$
+
+where $R$ are the surface reflectances and $\mathrm{diag}(\ell)^{-1}Y$ is the image as corrected with illuminant $\ell$ . The term $\mathrm{P}(Y|\ell, R = r)$ is only non-zero for $R = \mathrm{diag}(\ell)^{-1}Y$ . The likelihood rates whether a corrected image looks realistic.
+
+We choose to instantiate the model of our likelihood using a shallow CNN. The network should learn to output a high likelihood if the reflectances look realistic. We model the prior probability $\mathrm{P}(\ell)$ for each candidate illuminant independently as learnable parameters in an end-to-end approach; this effectively acts as a regularisation, favouring more likely real-world illuminants. We note that, in practice, the function modelling the prior also depends on factors such as the environment (indoor / outdoor), the time of day, ISO, etc. However, the size of currently available datasets prevents us from modelling more complex proxies.
+
+In order to estimate the illuminant $\ell^{*}$, we optimise the quadratic cost (the minimum-MSE Bayesian estimator), which is minimised by the mean of the posterior distribution:
+
+$$
+\ell^{*} = \int_{\ell} \ell \cdot \mathrm{P}(\ell \mid Y)\, d\ell \tag{6}
+$$
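+
+With a finite candidate set $\{\ell_i\}_{i=1}^{n}$ (Section 3.2), this posterior mean reduces to a weighted sum over candidates; the restatement below is ours and anticipates Eq. (9):
+
+$$
+\ell^{*} \approx \sum_{i=1}^{n} \ell_{i}\, \mathrm{P}(\ell_{i} \mid Y), \qquad \mathrm{P}(\ell_{i} \mid Y) = \frac{\mathrm{P}(Y \mid \ell_{i})\, \mathrm{P}(\ell_{i})}{\sum_{j=1}^{n} \mathrm{P}(Y \mid \ell_{j})\, \mathrm{P}(\ell_{j})}.
+$$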
+
+This is done in the following three steps (c.f. Figure 2):
+
+1. Candidate selection (Section 3.2): Choose a set of $n$ illuminant candidates to generate $n$ corrected thumbnail $(64 \times 64)$ images.
+2. Likelihood estimation (Section 3.3): Evaluate these $n$ images independently with a CNN, a network designed to estimate the likelihood that an image is well white balanced $\mathrm{P}(Y \mid \ell)$ .
+3. Illuminant determination (Section 3.4): Compute the posterior probability of each candidate illuminant and determine a final illuminant estimation $\ell^{*}$ .
+
+This formulation allows estimation of a posterior probability distribution, allowing us to reason about a set of
+
+probable illuminants rather than produce a single illuminant point estimate (c.f. regression approaches). Regression typically does not provide feedback on a possible set of alternative solutions, which has been shown to be of high value in other vision problems [35].
+
+The second benefit that our decomposition affords is a principled multi-camera training process. A single, device-agnostic CNN estimates illuminant likelihoods, while candidate illuminants are selected independently for each camera. By leveraging image information across multiple datasets we increase model robustness. Additionally, the amalgamation of small available CC datasets provides a step towards harnessing the power of large-capacity models for this problem domain, c.f. contemporary models.
+
+# 3.2. Candidate selection
+
+The goal of candidate selection is to discretise the illuminant space of a specific camera in order to obtain a set of representative illuminants (spanning the illuminant space). Given a collection of ground truth illuminants, measured from images containing calibration objects (i.e. a labelled training set), we compute candidates using $K$ -means clustering [33] on the linear RGB space.
+
+By forming $n$ clusters of our measured illuminants, we define the set of candidates $\ell_i \in \mathbb{R}^3, i \in \{1, \dots, n\}$ as the cluster centers. $K$-means illuminant clustering has previously been shown to be effective for color constancy [38]; however, we additionally evaluate alternative candidate selection strategies (detailed in the supplementary material), and our experimental investigation confirms that a simple $K$-means approach provides strong target-task performance. Further, the effect of $K$ is empirically evaluated in Section 4.4.
+
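+A minimal sketch of this selection step (ours; the paper does not prescribe a particular implementation), using scikit-learn:
+
+```python
+import numpy as np
+from sklearn.cluster import KMeans
+
+def select_candidates(train_illuminants: np.ndarray, n: int = 120) -> np.ndarray:
+    """Cluster the ground-truth illuminants of one camera (m x 3, linear RGB)
+    and return the n cluster centres as the candidate set, l2-normalised."""
+    km = KMeans(n_clusters=n, n_init=10, random_state=0).fit(train_illuminants)
+    centres = km.cluster_centers_
+    return centres / np.linalg.norm(centres, axis=1, keepdims=True)
+```
+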
+Image $Y$ , captured by a given camera, is then used to produce a set of images, corrected using the illuminant candidate set for the camera, on which we evaluate the accuracy of each candidate.
+
+# 3.3. Likelihood estimation
+
+We model the likelihood estimation step using a neural network which, for a given illuminant $\ell$ and image $Y$ , takes the tentatively corrected image $\mathrm{diag}(\ell)^{-1}Y$ as input, and learns to predict the likelihood $P(Y|\ell)$ that the image has been well white balanced i.e. has an appearance of being captured under an achromatic light source.
+
+The success of low capacity histogram based methods [6, 7] and the inference-training tradeoff for small datasets motivate a compact network design. We propose a small CNN with one spatial convolution and subsequent layers constituting $1 \times 1$ convolutions with spatial pooling. Lastly, three fully connected layers gradually reduce the dimensionality to one (see supplementary material for architecture details). Our network output is then a single value that represents the log-likelihood that the image is
+
+well white balanced:
+
+$$
+\log(\mathrm{P}(Y \mid \ell)) = f^{W}(\operatorname{diag}(\ell)^{-1} Y). \tag{7}
+$$
+
+Function $f^{W}$ is our trained CNN parametrised by model weights $W$. Eq. (7) estimates the log-likelihood of each candidate illuminant separately. It is important to note that we only train a single CNN which is used to estimate the likelihood for each candidate illuminant independently. However, in practice, certain candidate illuminants will be more common than others. To account for this, following [7], we compute an affine transformation of our log-likelihood $\log(\mathrm{P}(Y|\ell))$ by introducing learnable, illuminant-specific gain $G_{\ell}$ and bias $B_{\ell}$ parameters. The gain $G_{\ell}$ affords amplification of illuminant likelihoods. The bias term $B_{\ell}$ learns to prefer some illuminants, i.e. a prior distribution in a Bayesian sense: $B_{\ell} = \log(\mathrm{P}(\ell))$. The log-posterior probability can then be formulated as:
+
+$$
+\log(\mathrm{P}(\ell \mid Y)) = G_{\ell} \cdot \log(\mathrm{P}(Y \mid \ell)) + B_{\ell}. \tag{8}
+$$
+
+We highlight that the learned affine transformation parameters depend on the training camera; we provide further discussion of camera-agnostic considerations in Section 3.5.
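+
+A rough PyTorch sketch of Eqs. (7)-(8) follows; the exact architecture is given in the paper's supplementary material, so the toy network and layer widths below are our assumptions:
+
+```python
+import torch
+import torch.nn as nn
+
+class LikelihoodNet(nn.Module):
+    """Toy stand-in for f^W of Eq. (7): corrected 64x64 image -> log P(Y | l)."""
+    def __init__(self):
+        super().__init__()
+        self.features = nn.Sequential(
+            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),  # the single spatial conv
+            nn.Conv2d(64, 64, kernel_size=1), nn.ReLU(),            # 1x1 convolutions
+            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
+        )
+        self.head = nn.Sequential(                                  # three fully connected layers
+            nn.Linear(64, 32), nn.ReLU(),
+            nn.Linear(32, 16), nn.ReLU(),
+            nn.Linear(16, 1),
+        )
+
+    def forward(self, corrected: torch.Tensor) -> torch.Tensor:     # (B, 3, 64, 64)
+        return self.head(self.features(corrected)).squeeze(-1)      # (B,) log-likelihoods
+
+class LogPosterior(nn.Module):
+    """Per-candidate affine transform of Eq. (8): learnable gain G_l and bias B_l."""
+    def __init__(self, n_candidates: int):
+        super().__init__()
+        self.gain = nn.Parameter(torch.ones(n_candidates))
+        self.bias = nn.Parameter(torch.zeros(n_candidates))         # B_l = log P(l), a learned prior
+
+    def forward(self, log_likelihood: torch.Tensor) -> torch.Tensor:  # (B, n)
+        return self.gain * log_likelihood + self.bias
+```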
+
+# 3.4. Illuminant determination
+
+We require a differentiable method in order to train our model end-to-end, and therefore the use of a simple Maximum a Posteriori (MAP) inference strategy is not possible. Instead, to estimate the illuminant $\ell^{*}$, we use the minimum mean square error Bayesian estimator, which is minimised by the posterior mean of $\ell$ (c.f. Eq. (6)):
+
+$$
+\begin{array}{l} \ell^{*} = \sum_{i = 1}^{n} \ell_{i} \cdot \operatorname{softmax}(\log(\mathrm{P}(\ell_{i} \mid Y))) \tag{9} \\ = \frac{1}{\sum_{j = 1}^{n} e^{\log(\mathrm{P}(\ell_{j} \mid Y))}} \sum_{i = 1}^{n} \ell_{i} \cdot e^{\log(\mathrm{P}(\ell_{i} \mid Y))}. \end{array}
+$$
+
+The resulting vector $\ell^{*}$ is $l_{2}$ -normalised. We leverage our $K$ -means centroid representation of the linear RGB space and use linear interpolation within the convex hull of feasible illuminants to determine the estimated scene illuminant $\ell^{*}$ . For Eq. (9), we take inspiration from [29, 38], who have successfully explored similar strategies in CC and stereo regression, e.g. [29] introduced an analogous soft-argmin to estimate disparity values from a set of candidates. We apply a similar strategy for illuminant estimation and use the soft-argmax which provides a linear combination of all candidates weighted by their probabilities.
+
+We train our network end-to-end with the commonly used angular error loss function, where $\ell^{*}$ and $\ell^{GT}$ are the prediction and ground truth illuminant, respectively:
+
+$$
+\mathcal{L}_{\text{error}} = \arccos \left( \frac{\ell^{GT} \cdot \ell^{*}}{\|\ell^{GT}\| \, \|\ell^{*}\|} \right) \tag{10}
+$$
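+
+A compact PyTorch sketch of Eqs. (9) and (10) (variable names are ours):
+
+```python
+import torch
+import torch.nn.functional as F
+
+def estimate_illuminant(log_posterior: torch.Tensor, candidates: torch.Tensor) -> torch.Tensor:
+    """Eq. (9): soft-argmax, a probability-weighted combination of the candidates.
+    log_posterior: (B, n) values of log P(l_i | Y); candidates: (n, 3)."""
+    probs = F.softmax(log_posterior, dim=1)   # posterior over the n candidates
+    ell = probs @ candidates                  # (B, 3) convex combination
+    return F.normalize(ell, dim=1)            # l2-normalise the estimate
+
+def angular_error_loss(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
+    """Eq. (10): mean angular error between predicted and ground-truth illuminants (radians)."""
+    cos = F.cosine_similarity(pred, gt, dim=1).clamp(-1 + 1e-7, 1 - 1e-7)
+    return torch.acos(cos).mean()
+```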
+
+# 3.5. Multi-device training
+
+As discussed in previous work [1, 41, 22], CC models typically fail to train successfully using multiple camera data due to distribution shifts between camera sensors, making them intrinsically device-dependent and limiting model capacity. A device-independent model is highly appealing due to the small number of images commonly available in camera-specific public color constancy datasets. The cost and time associated with collecting and labelling large new datasets for specific novel devices are prohibitive.
+
+Our CNN learns to produce the likelihood that an input image is well white balanced. We claim that framing part of the CC problem in this fashion results in a device-independent learning task. We evaluate the benefit of this hypothesis experimentally in Section 4.
+
+To train with multiple cameras we use camera-specific candidates, yet learn only a single model. Specifically, we train with a different camera for each batch, use camera-specific candidates yet update a single set of CNN parameters during model training. In order to ensure that our CNN is device-independent, we fix previously learnable parameters that depend on sensor specific illuminants, i.e. $B_{\ell} = 0$ and $G_{\ell} = 1$ . The absence of these parameters, learned in a camera-dependent fashion, intuitively restricts model flexibility however we observe this drawback to be compensated by the resulting ability to train using amalgamated multi-camera datasets i.e. more data. This strategy allows our CNN to be camera-agnostic and affords the option to refine existing CNN quality when data from novel cameras becomes available. We however clarify that our overarching strategy for white balancing maintains use of camera-specific candidate illuminants.
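+
+A schematic sketch of this multi-device regime (ours; `camera_loaders`, `candidate_sets` and the network interface are hypothetical, and `estimate_illuminant` and `angular_error_loss` are the helpers sketched in Section 3.4):
+
+```python
+import itertools
+
+def train_multi_device(net, camera_loaders, candidate_sets, optimiser, steps=1000):
+    """One camera per batch, camera-specific candidates, a single shared CNN.
+    With the gain fixed to 1 and the bias to 0, the log-posterior equals the log-likelihood."""
+    cameras = list(camera_loaders)
+    iters = {c: iter(itertools.cycle(camera_loaders[c])) for c in cameras}
+    for step in range(steps):
+        cam = cameras[step % len(cameras)]            # alternate cameras between batches
+        images, gt = next(iters[cam])                 # (B, 3, 64, 64) and (B, 3)
+        cands = candidate_sets[cam]                   # (n, 3) camera-specific candidates
+        # Correct the whole batch with every candidate, then score each corrected image.
+        corrected = images.unsqueeze(1) / cands.view(1, -1, 3, 1, 1)     # (B, n, 3, 64, 64)
+        log_lik = net(corrected.flatten(0, 1)).view(images.size(0), -1)  # (B, n)
+        pred = estimate_illuminant(log_lik, cands)
+        loss = angular_error_loss(pred, gt)
+        optimiser.zero_grad()
+        loss.backward()
+        optimiser.step()
+```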
+
+# 4. Results
+
+# 4.1. Training details
+
+We train our models for 120 epochs and use $K$-means [33] with $K = 120$ candidates. Our batch size is 32; we use the Adam optimiser [30] with an initial learning rate of $5 \times 10^{-3}$, divided by two after 10, 50 and 80 epochs. Dropout [27] of $50\%$ is applied after average pooling. We take the log transform of the input before the first convolution. Efficient inference is feasible by concatenating each candidate-corrected image into the batch dimension. We use PyTorch 1.0 [39] and an Nvidia Tesla V100 for our experiments. The first layer, the only spatial convolution, is adapted from [49] and pretrained on ImageNet [16]. We fix the weights of this first layer to avoid over-fitting. The total number of weights is $22.8K$. For all experiments, calibration objects are masked, the black level is subtracted, and oversaturated pixels are clipped at a $95\%$ threshold. We resize the image to $64\times 64$ and normalise.
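+
+The learning-rate schedule above can be expressed, for instance, as follows (a sketch; the placeholder `model` stands in for the likelihood CNN and the epoch body is elided):
+
+```python
+import torch
+import torch.nn as nn
+
+model = nn.Linear(3, 1)   # placeholder for the likelihood CNN of Section 3.3
+optimiser = torch.optim.Adam(model.parameters(), lr=5e-3)
+# Divide the learning rate by two after epochs 10, 50 and 80.
+scheduler = torch.optim.lr_scheduler.MultiStepLR(optimiser, milestones=[10, 50, 80], gamma=0.5)
+
+for epoch in range(120):
+    # ... one training epoch with batch size 32 and K = 120 candidates ...
+    scheduler.step()
+```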
+
+# 4.2. Datasets
+
+We experiment using three public datasets. The Gehler-Shi dataset [47, 23] contains 568 images of indoor and outdoor scenes. Images were captured using Canon 1D and Canon 5D cameras. We highlight our awareness of the existence of multiple sets of non-identical ground-truth labels for this dataset (see [26] for further detail). Our Gehler-Shi evaluation is conducted using the SFU ground-truth labels [47] (consistent with the label naming convention in [26]). The NUS dataset [14] originally consists of 8 subsets of $\sim 210$ images per camera, providing a total of 1736 images. The Cube+ dataset [5] contains 1707 images captured with a Canon 550D camera, consisting of predominantly outdoor imagery.
+
+For the NUS [14] and Gehler-Shi [47, 23] datasets we perform three-fold cross validation (CV) using the splits provided in previous work [7, 6]. The Cube+ [5] dataset does not provide splits for CV, so we use all images for learning and evaluate using a related set of test images, provided for the recent Cube+ ISPA 2019 challenge [31]. We compare with the results from the challenge leaderboard.
+
+For the NUS dataset [14], we additionally explore training multi-camera models and thus create a new set of CV folds to facilitate this. We are careful to highlight that the NUS dataset consists of eight image subsets, pertaining to eight capture devices. Each of our new folds captures a distinct set of scene content (i.e. sets of up to eight similar images for each captured scene). This avoids testing on similar scene content seen during training. We define our multi-camera CV such that multi-camera fold $i$ is the concatenation of images, pertaining to common scenes, captured from all eight cameras. The folds that we define are made available in our supplementary material.
+
+# 4.3. Evaluation metrics
+
+We use the standard angular error metric for quantitative evaluation (c.f. Eq. (10)). We report standard CC statistics to summarise results over the investigated datasets: Mean, Median, Trimean, Best $25\%$, and Worst $25\%$. We further report method inference time in the supplementary material. Other works' results were taken from the corresponding papers, resulting in missing statistics for some methods. The NUS [14] dataset is composed of 8 cameras; we report the geometric mean of each statistic for each method across all cameras, as is standard in the literature [7, 6, 28].
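+
+For reference, these summary statistics can be computed from per-image angular errors as in the following sketch (ours; the trimean follows its standard definition and the geometric mean aggregates a per-camera statistic):
+
+```python
+import numpy as np
+
+def summary_stats(errors) -> dict:
+    """Mean, median, trimean, and best/worst 25% means of per-image angular errors (degrees)."""
+    e = np.sort(np.asarray(errors, dtype=float))
+    q1, q2, q3 = np.percentile(e, [25, 50, 75])
+    k = max(1, len(e) // 4)
+    return {
+        "mean": e.mean(),
+        "median": q2,
+        "trimean": (q1 + 2 * q2 + q3) / 4.0,
+        "best25": e[:k].mean(),    # mean of the 25% smallest errors
+        "worst25": e[-k:].mean(),  # mean of the 25% largest errors
+    }
+
+def geometric_mean(per_camera_values) -> float:
+    """Aggregate one statistic across the 8 NUS cameras, as reported in Table 2."""
+    v = np.asarray(per_camera_values, dtype=float)
+    return float(np.exp(np.log(v).mean()))
+```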
+
+# 4.4. Quantitative evaluation
+
+Accuracy experiments. We report competitive results on the dataset of Gehler-Shi [47, 23] (c.f. Table 1). This dataset
+
+| Method | Mean | Med. | Tri. | Best 25% | Worst 25% |
+| --- | --- | --- | --- | --- | --- |
+| Gray-world [12] | 6.36 | 6.28 | 6.28 | 2.33 | 10.58 |
+| White-Patch [11] | 7.55 | 5.86 | 6.35 | 1.45 | 16.12 |
+| Bayesian [23] | 4.82 | 3.46 | 3.88 | 1.26 | 10.49 |
+| Quasi-unsupervised [8] | 2.91 | 1.98 | - | - | - |
+| Afifi et al. 2019 [1] | 2.77 | 1.93 | - | 0.55 | 6.53 |
+| Meta-AWB [37] | 2.57 | 1.84 | 1.94 | 0.47 | 6.11 |
+| Cheng et al. 2015 [15] | 2.42 | 1.65 | 1.75 | 0.38 | 5.87 |
+| CM 2019 [25] | 2.48 | 1.61 | 1.80 | 0.47 | 5.97 |
+| Oh et al. [38] | 2.16 | 1.47 | 1.61 | 0.37 | 5.12 |
+| CCC [6] | 1.95 | 1.22 | 1.38 | 0.35 | 4.76 |
+| DS-Net [48] | 1.90 | 1.12 | 1.33 | 0.31 | 4.84 |
+| FC4 [28] (SqueezeNet) | 1.65 | 1.18 | 1.27 | 0.38 | 3.78 |
+| FC4 [28] (AlexNet) | 1.77 | 1.11 | 1.29 | 0.34 | 4.29 |
+| FFCC [7] (model P) | 1.61 | 0.86 | 1.02 | 0.23 | 4.27 |
+| Ours | 2.35 | 1.43 | 1.63 | 0.40 | 5.80 |
+| Ours (pretrained) | 2.10 | 1.32 | 1.53 | 0.36 | 5.10 |
+
+Table 1. Angular error statistics for Gehler-Shi dataset [47, 23].
+
+can be considered very challenging as the number of images per camera is imbalanced: there are 86 Canon $1D$ and 482 Canon $5D$ images. Our method is not able to outperform the state-of-the-art, likely due to the imbalance and the small size of the Canon $1D$ subset. Pretraining on a combination of NUS [14] and Cube+ [5] provides a moderate accuracy improvement despite the fact that the Gehler-Shi dataset has a significantly different illuminant distribution compared to those seen during pre-training. We provide additional experiments exploring the effect of varying $K$ for $K$-means candidate selection in the supplementary material.
+
+Results for NUS [14] are provided in Table 2. Our method obtains competitive accuracy and the previously observed trend, pre-training using additional datasets (here Gehler-Shi [47, 23] and Cube+ [5]), again improves results.
+
+In Table 3, we report results for our multi-device setting on the NUS [14] dataset. For this experiment we introduce a new set of training folds to ensure that scenes are well separated, and refer to Section 3.5 for multi-device training and Section 4.2 for details of the training folds. We draw a multi-device comparison with FFCC [7] by choosing to center the FFCC histogram with the training set (of amalgamated camera datasets). Note that results are not directly comparable with Table 2 due to our redefinition of CV folds. Our method is more accurate than the state-of-the-art when training considers all available cameras at the same time. Note that multi-device training improves the median angular error of each individual camera dataset (we provide results in the supplementary material). Overall performance is improved by $\sim 11\%$ in terms of median accuracy.
+
+We also outperform the state-of-the-art on the recent Cube challenge [31] as shown in Table 4. Pretraining together on Gehler-Shi [47, 23] and NUS [14] improves our Mean and Worst $25\%$ statistics.
+
+In summary, we observe strong generalisation when using multiple camera training (e.g. NUS [14] results c.f. Tables 2 and 3). These experiments illustrate the
+
+| Method | Mean | Med. | Tri. | Best 25% | Worst 25% |
+| --- | --- | --- | --- | --- | --- |
+| White-patch [11] | 9.91 | 7.44 | 8.78 | 1.44 | 21.27 |
+| Gray-world [12] | 4.59 | 3.46 | 3.81 | 1.16 | 9.85 |
+| Bayesian [23] | 3.50 | 2.36 | 2.57 | 0.78 | 8.02 |
+| Oh et al. [38] | 2.36 | 2.09 | - | - | 4.16 |
+| Quasi-unsupervised [8] | 1.97 | 1.91 | - | - | - |
+| CM 2019 [25] | 2.25 | 1.59 | 1.74 | 0.50 | 5.13 |
+| FC4 [28] (SqueezeNet) | 2.23 | 1.57 | 1.72 | 0.47 | 5.15 |
+| FC4 [28] (AlexNet) | 2.12 | 1.53 | 1.67 | 0.48 | 4.78 |
+| Afifi et al. 2019 [1] | 2.05 | 1.50 | - | 0.52 | 4.48 |
+| CCC [6] | 2.38 | 1.48 | 1.69 | 0.45 | 5.85 |
+| Cheng et al. 2015 [15] | 2.18 | 1.48 | 1.64 | 0.46 | 5.03 |
+| DS-Net [48] | 2.21 | 1.46 | 1.68 | 0.48 | 6.08 |
+| Meta-AWB [37] | 1.89 | 1.34 | 1.44 | 0.45 | 4.28 |
+| FFCC [7] (model Q) | 2.06 | 1.39 | 1.53 | 0.39 | 4.80 |
+| FFCC [7] (model M) | 1.99 | 1.31 | 1.43 | 0.35 | 4.75 |
+| Ours | 2.39 | 1.61 | 1.74 | 0.50 | 5.67 |
+| Ours (pretrained) | 2.35 | 1.55 | 1.73 | 0.46 | 5.62 |
+
+Table 2. Angular error statistics for NUS [14].
+
+| Method | Mean | Med. | Tri. | Best 25% | Worst 25% |
+| --- | --- | --- | --- | --- | --- |
+| One model per device | | | | | |
+| FFCC [7] (model Q) | 2.37 | 1.50 | 1.69 | 0.46 | 5.76 |
+| Ours (pretrained) | 2.35 | 1.48 | 1.67 | 0.47 | 5.71 |
+| Multi-device training | | | | | |
+| FFCC [7] (model Q) | 2.59 | 1.77 | 1.94 | 0.52 | 6.14 |
+| Ours (pretrained) | 2.22 | 1.33 | 1.53 | 0.44 | 5.49 |
+
+Table 3. Angular error statistics for NUS [14] using multi-device cross-validation folds (see Section 4.2). FFCC model $Q$ is considered for fair comparison (thumbnail resolution input).
+
+| Method | Mean | Med. | Tri. | Best 25% | Worst 25% |
+| --- | --- | --- | --- | --- | --- |
+| Gray-world [12] | 4.44 | 3.50 | - | 0.77 | 9.64 |
+| 1st-order Gray-Edge [50] | 3.51 | 2.30 | - | 0.56 | 8.53 |
+| V Vuk et al. [31] | 6.00 | 1.96 | 2.25 | 0.99 | 18.81 |
+| Y Qian et al. [31] | 2.21 | 1.32 | 1.41 | 0.43 | 5.65 |
+| K Chen et al. [31] | 1.84 | 1.27 | 1.32 | 0.39 | 4.41 |
+| Y Qian et al. [40] | 2.27 | 1.26 | 1.35 | 0.39 | 6.02 |
+| Afifi et al. 2019 [1] | 2.10 | 1.23 | - | 0.47 | 5.38 |
+| FFCC [7] (model J) | 2.10 | 1.23 | 1.34 | 0.47 | 5.38 |
+| A Savchik et al. [46] | 2.05 | 1.20 | 1.30 | 0.40 | 5.24 |
+| WB-sRGB [3, 1] | 1.83 | 1.15 | - | 0.35 | 4.60 |
+| Ours | 1.99 | 1.06 | 1.14 | 0.35 | 5.35 |
+| Ours (pretrained) | 1.95 | 1.16 | 1.25 | 0.39 | 4.99 |
+
+Table 4. Angular error for Cube challenge [31].
+
+large benefit achievable with multi-camera training when illuminant distributions of the cameras are broadly consistent. Gehler-Shi [47, 23] has a very disparate illuminant distribution with respect to the other datasets, and we are likely unable to exploit the full advantage of multi-camera training there. We note that the state-of-the-art FFCC [7] method is extremely shallow and therefore well suited to small datasets. In contrast, when our model is trained on large and relevant datasets we are able to achieve superior results.
+
+Run time. We measure inference speed at $\sim 10$ milliseconds with an unoptimised PyTorch implementation (see the supplementary material for further detail).
+
+# 4.5. Training on novel sensors
+
+To explore camera agnostic elements of our model, we train on a combination of the full NUS [14] and Gehler-Shi [47, 23] datasets. As described in Section 3.5, the only remaining device dependent component involves performing illuminant candidate selection per device. Once the model is trained, we select candidates from Cube+ [5] and test on the Cube challenge dataset [31]. We highlight that neither Cube+ nor Cube challenge imagery is seen during model training. For meaningful evaluation, we compare against both classical and recent learning-based [1] camera-agnostic methods. Results are shown in Table 5. We obtain results that are comparable to Table 4 without seeing any imagery from our target camera, outperforming both baselines and [1]. We clarify that our method performs candidate selection using Cube+ [5] to adapt the candidate set to the novel device while [1] does not see any information from the new camera.
+
+We provide additional experimental results for differing values of $K$ ($K$-means candidate selection) in the supplementary material. We observe stability for $K \geq 25$. The low number of candidates required is likely linked to the two Cube datasets having reasonably compact distributions.
+
+# 4.6. Qualitative evaluation
+
+We provide visual results for the Gehler-Shi [47, 23] dataset in Figure 3. We sort inference results by increasing angular error and sample 5 images uniformly. For each row, we show (a) the input image, (b) our estimated illuminant color and the resulting white-balanced image, and (c) the ground-truth illuminant color and the resulting white-balanced image. Images are first white-balanced; we then apply an estimated CCM (Color Correction Matrix) and, finally, sRGB gamma correction. We mask out the Macbeth Color Checker calibration object during both training and evaluation.
+
+Our most challenging example (c.f. the last row of Figure 3) is a multi-illuminant scene (indoor and outdoor lights); we observe that our method performs accurate correction for objects illuminated by the outdoor light, yet the ground truth is only measured for the indoor illuminant, hence the high angular error. This highlights the limitation linked to our single global illuminant assumption, common to the majority of CC algorithms. We show additional qualitative results in the supplementary material.
+
+| Method | Mean | Med. | Tri. | Best 25% | Worst 25% |
+| --- | --- | --- | --- | --- | --- |
+| Gray-world [12] | 4.44 | 3.50 | - | 0.77 | 9.64 |
+| 1st-order Gray-Edge [50] | 3.51 | 2.30 | - | 0.56 | 8.53 |
+| Afifi et al. 2019 [1] | 2.89 | 1.72 | - | 0.71 | 7.06 |
+| Ours | 2.07 | 1.31 | 1.43 | 0.41 | 5.12 |
+
+Table 5. Angular error for the Cube challenge [31] trained solely on the dataset of NUS [14] and Gehler-Shi [47, 23]. For our method, candidate selection is performed on Cube+ [5] dataset.
+
+# 5. Conclusion
+
+We propose a novel multi-hypothesis color constancy model capable of effectively learning from image samples that were captured by multiple cameras. We frame the problem under a Bayesian formulation and obtain data-driven likelihood estimates by learning to classify achromatic imagery. We highlight the challenging nature of multi-device learning due to camera color space differences, spectral sensitivity and physical sensor effects. We validate the benefits of our proposed solution for multi-device learning and provide state-of-the-art results on two popular color constancy datasets while maintaining real-time inference constraints. We additionally provide evidence supporting our claim that framing the learning question as a classification task, c.f. regression, can lead to strong performance on novel sensors without requiring model re-training or fine-tuning.
+
+
+(Figure 3 panels per row: (a) input image, (b) our prediction, (c) ground truth; per-row angular errors from top to bottom: $0.03^{\circ}$, $0.65^{\circ}$, $1.33^{\circ}$, $2.82^{\circ}$, $14.62^{\circ}$.)
+
+Figure 3. Example results taken from the Gehler-Shi [47, 23] dataset. Input, our result and ground truth per row. Images to visualise are chosen by sorting all test images using increasing error and evenly sampling images according to that ordering. Images are rendered in sRGB color space.
+
+
+
+# References
+
+[1] Mahmoud Afifi and Michael Brown. Sensor-Independent Illumination Estimation for DNN Models. In Proceedings of the British Machine Vision Conference 2019, BMVC 2019, Cardiff University, Cardiff, UK, September 9-12, 2019, 2019.
+[2] Mahmoud Afifi and Michael S. Brown. What else can fool deep learning? addressing color constancy errors on deep neural network performance. In 2019 IEEE International Conference on Computer Vision, ICCV 2019, Seoul, Korea, October 29-November 1, 2019, 2019.
+[3] Mahmoud Afifi, Brian L. Price, Scott Cohen, and Michael S. Brown. When color constancy goes wrong: Correcting improperly white-balanced images. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 1535-1544, 2019.
+[4] Alexander Andreopoulos and John K. Tsotsos. On sensor bias in experimental methods for comparing interest-point, saliency, and recognition algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(1):110-126, 2012.
+[5] Nikola Banic and Sven Loncaric. Unsupervised learning for color constancy. In Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2018) - Volume 4: VISAPP, Funchal, Madeira, Portugal, January 27-29, 2018, pages 181-188, 2018.
+[6] Jonathan T. Barron. Convolutional color constancy. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015, pages 379-387, 2015.
+[7] Jonathan T. Barron and Yun-Ta Tsai. Fast fourier color constancy. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 6950-6958, 2017.
+[8] Simone Bianco and Claudio Cusano. Quasi-unsupervised color constancy. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 12212-12221, 2019.
+[9] Simone Bianco, Claudio Cusano, and Raimondo Schettini. Color constancy using cnns. In 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2015, Boston, MA, USA, June 7-12, 2015, pages 81-89, 2015.
+[10] Simone Bianco, Claudio Cusano, and Raimondo Schettini. Single and multiple illuminant estimation using convolutional neural networks. IEEE Transactions on Image Processing, 26(9):4347-4362, 2017.
+[11] David H Brainard and Brian A Wandell. Analysis of the retinex theory of color vision. JOSA A, 3(10):1651-1661, 1986.
+[12] Gershon Buchsbaum. A spatial processor model for object colour perception. Journal of the Franklin institute, 310(1):1-26, 1980.
+[13] Alexandra Carlson, Katherine A. Skinner, and Matthew Johnson-Roberson. Modeling camera effects to improve deep vision for real and synthetic data. CoRR, abs/1803.07721, 2018.
+[14] Dongliang Cheng, Dilip K Prasad, and Michael S Brown. Illuminant estimation for color constancy: why spatial-domain methods work and the role of the color distribution. JOSA A, 31(5):1049-1058, 2014.
+[15] Dongliang Cheng, Brian L. Price, Scott Cohen, and Michael S. Brown. Effective learning-based illuminant estimation using simple features. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 1000-1008, 2015.
+[16] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), 20-25 June 2009, Miami, Florida, USA, pages 248-255, 2009.
+[17] Steven Diamond, Vincent Sitzmann, Stephen P. Boyd, Gordon Wetzstein, and Felix Heide. Dirty pixels: Optimizing image classification architectures for raw sensor data. CoRR, abs/1701.06487, 2017.
+[18] Graham D. Finlayson and Elisabetta Trezzi. Shades of gray and colour constancy. In The Twelfth Color Imaging Conference: Color Science and Engineering Systems, Technologies, Applications, CIC 2004, Scottsdale, Arizona, USA, November 9-12, 2004, pages 37-41, 2004.
+[19] William T. Freeman and David H. Brainard. Bayesian decision theory, the maximum local mass estimate, and color constancy. In Proceedings of the Fifth International Conference on Computer Vision (ICCV 95), Massachusetts Institute of Technology, Cambridge, Massachusetts, USA, June 20-23, 1995, pages 210-217, 1995.
+[20] Brian V. Funt and Lilong Shi. The rehabilitation of maxrgb. In 18th Color and Imaging Conference, CIC 2010, San Antonio, Texas, USA, November 8-12, 2010, pages 256-259, 2010.
+[21] Brian V. Funt and Weihua Xiong. Estimating illumination chromaticity via support vector regression. In The Twelfth Color Imaging Conference: Color Science and Engineering Systems, Technologies, Applications, CIC 2004, Scottsdale, Arizona, USA, November 9-12, 2004, pages 47-52, 2004.
+[22] Shao-Bing Gao, Ming Zhang, Chao-Yi Li, and Yong-Jie Li. Improving color constancy by discounting the variation of camera spectral sensitivity. JOSA A, 34(8):1448-1462, 2017.
+[23] Peter V. Gehler, Carsten Rother, Andrew Blake, Thomas P. Minka, and Toby Sharp. Bayesian color constancy revisited. In 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2008), 24-26 June 2008, Anchorage, Alaska, USA, 2008.
+[24] Arjan Gijsenij, Theo Gevers, and Marcel P Lucassen. Perceptual analysis of distance measures for color constancy algorithms. JOSA A, 26(10):2243-2256, 2009.
+[25] Han Gong. Convolutional mean: A simple convolutional neural network for illuminant estimation. In Proceedings of the British Machine Vision Conference 2019, BMVC 2019, Cardiff University, Cardiff, UK, September 9-12, 2019, 2019.
+[26] Ghalia Hemrit, Graham D Finlayson, Arjan Gijsenij, Peter Gehler, Simone Bianco, Brian Funt, Mark Drew, and Lilong Shi. Rehabilitating the colorchecker dataset for illuminant estimation. In Color and Imaging Conference, volume 2018, pages 350-353. Society for Imaging Science and Technology, 2018.
+[27] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580, 2012.
+[28] Yuanming Hu, Baoyuan Wang, and Stephen Lin. Fc $^{4}$ : Fully convolutional color constancy with confidence-weighted pooling. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 330-339, 2017.
+[29] Alex Kendall, Hayk Martirosyan, Saumitro Dasgupta, and Peter Henry. End-to-end learning of geometry and context for deep stereo regression. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 66-75, 2017.
+[30] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
+[31] Karlo Koscevic and Nikola Banic. ISPA 2019 Illumination Estimation Challenge. https://www.isispa.org/illumination-estimation-challenge. Accessed November 14, 2019.
+[32] Edwin H Land and John J McCann. Lightness and retina theory. Josa, 61(1):1-11, 1971.
+[33] Stuart P. Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129-136, 1982.
+[34] Zhongyu Lou, Theo Gevers, Ninghang Hu, and Marcel P. Lucassen. Color constancy by deep learning. In Proceedings of the British Machine Vision Conference 2015, BMVC 2015, Swansea, UK, September 7-10, 2015, pages 76.1-76.12, 2015.
+[35] Siddharth Mahendran, Haider Ali, and René Vidal. A mixed classification-regression framework for 3d pose estimation from 2d images. In *British Machine Vision Conference* 2018, BMVC 2018, Northumbria University, Newcastle, UK, September 3-6, 2018, page 72, 2018.
+[36] Fabian Manhardt, Diego Arroyo, Christian Rupprecht, Benjamin Busam, Tolga Birdal, Nassir Navab, and Federico Tombari. Explaining the ambiguity of object detection and 6d pose from visual data. 2019 IEEE International Conference on Computer Vision, ICCV 2019, Seoul, Korea, October 29-November 1, 2019, 2019.
+[37] Steven McDonagh, Sarah Parisot, Zhenguo Li, and Gregory G. Slabaugh. Meta-learning for few-shot camera-adaptive color constancy. CoRR, abs/1811.11788, 2018.
+[38] Seoung Wug Oh and Seon Joo Kim. Approaching the computational color constancy as a classification problem through deep learning. Pattern Recognition, 61:405-416, 2017.
+[39] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 8024-8035, 2019.
+[40] Yanlin Qian, Ke Chen, and Huanglin Yu. Fast fourier color constancy and grayness index for ISPA illumination estimation challenge. In 11th International Symposium on Image and Signal Processing and Analysis, ISPA 2019, Dubrovnik, Croatia, September 23-25, 2019, pages 352-354, 2019.
+[41] Nguyen Ho Man Rang, Dilip K. Prasad, and Michael S. Brown. Raw-to-raw: Mapping between image sensor color responses. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014, Columbus, OH, USA, June 23-28, 2014, pages 3398-3405, 2014.
+[42] Joseph Redmon, Santosh Kumar Divvala, Ross B. Girshick, and Ali Farhadi. You only look once: Unified, real-time object detection. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 779-788, 2016.
+[43] Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6):1137-1149, 2017.
+[44] Charles R. Rosenberg, Thomas P. Minka, and Alok Ladsariya. Bayesian color constancy with non-gaussian models. In Advances in Neural Information Processing Systems 16 [Neural Information Processing Systems, NIPS 2003, December 8-13, 2003, Vancouver and Whistler, British Columbia, Canada], pages 1595-1602, 2003.
+[45] Christian Rupprecht, Iro Laina, Robert S. DiPietro, and Maximilian Baust. Learning in an uncertain world: Representing ambiguity through multiple hypotheses. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 3611-3620, 2017.
+[46] A. Savchik, Egor I. Ershov, and Simon M. Karpenko. Color cerberus. In 11th International Symposium on Image and Signal Processing and Analysis, ISPA 2019, Dubrovnik, Croatia, September 23-25, 2019, pages 355-359, 2019.
+[47] Lilong Shi and Brian Funt. Re-processed version of the gehler color constancy dataset. https://www2.cs.sfu.ca/~colour/data/shi_gehler/. Accessed November 14, 2019.
+[48] Wu Shi, Chen Change Loy, and Xiaou Tang. Deep specialized network for illuminant estimation. In Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part IV, pages 371-387, 2016.
+[49] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
+[50] Joost van de Weijer, Theo Gevers, and Arjan Gijsenij. Edge-based color constancy. IEEE Transactions on Image Processing, 16(9):2207-2214, 2007.
+
+[51] Johannes Von Kries. Influence of adaptation on the effects produced by luminous stimuli. handbuch der Physiologie des Menschen, 3:109-282, 1905.
+[52] Ning Wang, De Xu, and Bing Li. Edge-based color constancy via support vector regression. IEICE Transactions on Information and Systems, 92-D(11):2279-2282, 2009.
+[53] Weihua Xiong and Brian Funt. Estimating illumination chromaticity via support vector regression. Journal of Imaging Science and Technology, 50(4):341-348, 2006.
\ No newline at end of file
diff --git a/amultihypothesisapproachtocolorconstancy/images.zip b/amultihypothesisapproachtocolorconstancy/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..60ad7ce4324e175f6054c9b6fa6f5732774db10e
--- /dev/null
+++ b/amultihypothesisapproachtocolorconstancy/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6ef0f8b8afa54f15a1a35b1bdc282d305cafea2f7738c82a1ba67b9424b5ec69
+size 476787
diff --git a/amultihypothesisapproachtocolorconstancy/layout.json b/amultihypothesisapproachtocolorconstancy/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..f8c9d245370a071a944fb45be91bc29dbeda836a
--- /dev/null
+++ b/amultihypothesisapproachtocolorconstancy/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c45c9540f521973ad9802fa278e47f2b97f54861689aeae2dd657815620d8e6
+size 462856
diff --git a/amultitaskmeanteacherforsemisupervisedshadowdetection/c6c34185-fdf2-4984-bff9-67800d42a264_content_list.json b/amultitaskmeanteacherforsemisupervisedshadowdetection/c6c34185-fdf2-4984-bff9-67800d42a264_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..b0cc5ec1d44cc9af3096226743d279040e787e07
--- /dev/null
+++ b/amultitaskmeanteacherforsemisupervisedshadowdetection/c6c34185-fdf2-4984-bff9-67800d42a264_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f9acd661784d6417d6516dc415abfb9196fe450fe5e668a35c1b92e97cde1827
+size 79717
diff --git a/amultitaskmeanteacherforsemisupervisedshadowdetection/c6c34185-fdf2-4984-bff9-67800d42a264_model.json b/amultitaskmeanteacherforsemisupervisedshadowdetection/c6c34185-fdf2-4984-bff9-67800d42a264_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..71ced767027c778dcacc43dc05a27fa061590508
--- /dev/null
+++ b/amultitaskmeanteacherforsemisupervisedshadowdetection/c6c34185-fdf2-4984-bff9-67800d42a264_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7ee020830966d07a5c025e616340f75290c6b63ba9b1881ac5c984f5828e34cd
+size 99440
diff --git a/amultitaskmeanteacherforsemisupervisedshadowdetection/c6c34185-fdf2-4984-bff9-67800d42a264_origin.pdf b/amultitaskmeanteacherforsemisupervisedshadowdetection/c6c34185-fdf2-4984-bff9-67800d42a264_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a704ee995b39aa2f60292dfe2b4c828b8899d3d4
--- /dev/null
+++ b/amultitaskmeanteacherforsemisupervisedshadowdetection/c6c34185-fdf2-4984-bff9-67800d42a264_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c02ca400eec8ba1b1e0edc91be4da5e8a443852365f09f05d14267af9b6cbb40
+size 1599153
diff --git a/amultitaskmeanteacherforsemisupervisedshadowdetection/full.md b/amultitaskmeanteacherforsemisupervisedshadowdetection/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..5e45a99ef97b225fa2ff1163ab31ef999538acfd
--- /dev/null
+++ b/amultitaskmeanteacherforsemisupervisedshadowdetection/full.md
@@ -0,0 +1,377 @@
+# A Multi-task Mean Teacher for Semi-supervised Shadow Detection
+
+Zhihao Chen $^{1,\ast}$ , Lei Zhu $^{2,1*}$ , Liang Wan $^{1}$ , Song Wang $^{1,3}$ , Wei Feng $^{1}$ , and Pheng-Ann Heng $^{2,4}$
+
+1 College of Intelligence and Computing, Tianjin University
+
+$^{2}$ Department of Computer Science and Engineering, The Chinese University of Hong Kong
+
+3 Department of Computer Science and Engineering, University of South Carolina
+
+4 Shenzhen Key Laboratory of Virtual Reality and Human Interaction Technology,
+
+Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
+
+# Abstract
+
+Existing shadow detection methods suffer from an intrinsic limitation in relying on limited labeled datasets, and they may produce poor results in some complicated situations. To boost the shadow detection performance, this paper presents a multi-task mean teacher model for semi-supervised shadow detection by leveraging unlabeled data and learning multiple types of shadow information simultaneously. To be specific, we first build a multi-task baseline model to simultaneously detect shadow regions, shadow edges, and shadow count by leveraging their complementary information, and assign this baseline model to both the student and the teacher network. After that, we encourage the predictions of the three tasks from the student and teacher networks to be consistent for computing a consistency loss on unlabeled data, which is then added to the supervised loss on the labeled data from the predictions of the multi-task baseline model. Experimental results on three widely-used benchmark datasets show that our method consistently outperforms all the compared state-of-the-art methods, which verifies that the proposed network can effectively leverage additional unlabeled data to boost the shadow detection performance.
+
+# 1. Introduction
+
+As a common phenomenon in our daily life, shadows in natural images provide hints for extracting the scene geometry [29, 17], light direction [22], and camera location and parameters [16], and they benefit different high-level image understanding tasks, e.g., image segmentation [4], object detection [2], and object tracking [27]. For these applications, we need to detect shadows from images with high accuracy.
+
+Existing methods detect shadows by developing physical models of color and illumination [6, 5], by using data-driven
+
+
+
+
+Figure 1: Shadow detection on two inputs with a soft shadow (the first row) and multiple shadow regions (the second row). Results in the 3rd to 5th columns are produced by our method, DSD [41], and BDRAR [43], respectively. Apparently, our method can more accurately identify the shadow regions, while some dark regions as well as shadow boundaries are mistakenly recognized by DSD and BDRAR.
+
+approaches based on hand-crafted features [13, 23, 42], or by learning discriminative features from a convolutional neural network (CNN) [19, 33, 28, 12, 24, 43, 10, 41]. While the state-of-the-art methods have already achieved high accuracy on benchmark datasets [33, 42, 35, 10], almost all of them require large amounts of annotated data for training, and such training data are usually captured in limited scenes. Creating large labeled datasets for diverse scenes, however, is expensive and time-consuming. Le et al. [24] proposed to augment training images by weakening the shadow area of the original training image, but we notice that these augmented images tend to look unrealistic, and their non-shadow backgrounds are similar to those of the original training image, hindering the generalization capability. Compared with labeled datasets, abundant unlabeled shadow images can be collected easily in real applications. Hence, it is highly desirable to leverage additional unlabeled data to improve the shadow detection performance when training with limited labeled data.
+
+On the other hand, when testing the existing methods on various natural images, we found that they may neglect small shadow regions, misrecognize dark regions as shadows, and miss non-obvious or soft shadows due to their weak boundaries. These situations result in poor shadow boundaries and may alter the number of detected shadow regions (see Fig. 1). Inspired by the success of multi-task learning in many computer vision applications [14, 3, 18, 26], we investigate the complementary information of shadow regions, shadow edges, and shadow count to enhance shadow region detection from both global and detail views. Specifically, shadow count detection sets a global constraint on the total number of shadow regions, while shadow edge detection sets detail-level constraints on the boundaries of shadow regions.
+
+Figure 2: The schematic illustration of our multi-task mean teacher network (MTMT-Net). We first develop a multi-task CNN (MT-CNN; see Fig. 3) to mutually learn three tasks including shadow edge detection, shadow region detection, and shadow count detection. After that, we compute a multi-task supervised loss for labeled data and a multi-task consistency loss for unlabeled data. Finally, we fuse the supervised loss and consistency loss to train our shadow detection network.
+
+In this regard, we develop a multi-task mean teacher framework (MTMT-Net) for boosting the shadow detection performance. We first design a multi-task CNN, denoted as MT-CNN, for mutually learning three tasks (i.e., shadow region detection, shadow edge detection, and shadow count detection), and take this MT-CNN model as both the student network and the teacher network. We then propose a supervised multi-task loss for labeled data to integrate the supervised losses of all three tasks. After that, we enforce the three tasks' results of the student network and the teacher network to be consistent on all the unlabeled data. By adding the supervised loss from the developed MT-CNN and the consistency loss from the three tasks to train the model, our network can detect shadow regions more accurately than the state-of-the-art methods. Our major contributions are summarized as follows:
+
+- First, we develop a multi-task CNN (MT-CNN) for shadow detection by simultaneously detecting shadow regions, shadow edges, and shadow count from a single input image. The MT-CNN produces a better shadow detection result on labeled data than a model trained with only the shadow detection task.
+
+- Second, we design a multi-task mean teacher framework that fuses the consistency losses of the three prediction tasks on unlabeled data for shadow detection. As a self-ensembling model, our framework has the potential to be used for developing semi-supervised frameworks for other vision tasks, including saliency detection, boundary detection, and semantic segmentation.
+- Lastly, we show that the proposed network outperforms the state-of-the-art methods by a large margin on three widely-used benchmark datasets.
+
+# 2. Related Work
+
+Traditional methods. Early attempts [6, 5, 32] explored illumination models and color information to identify shadow regions and most of them work well only on high-quality and well-constrained images [28, 41]. Later data-driven strategies design certain hand-crafted features [42, 23, 7, 13, 34] on annotated data and feed these features into different classifiers [42, 23, 7, 13, 34] for shadow detection. Although achieving accuracy improvements, these strategies usually suffer from degraded performance in complex cases where hand-crafted features are not sufficiently discriminative for detecting shadow regions.
+
+Deep learning based methods. Inspired by the remarkable progress of deep learning in diverse vision tasks, convolutional neural network (CNN) based methods have been developed for shadow detection to learn deep shadow inference features from labeled datasets. Khan et al. [19] formulated the first network to classify image pixels as shadows/non-shadows by building a 7-layer CNN, which extracts deep features from superpixels, and then feeding the features to a conditional random field (CRF) model to smooth the shadow detection results. Vicente et al. [33] learned an image-level shadow prior and combined it with local image patches to train a patch-based CNN for generating a shadow mask. Later, a generative adversarial network based shadow detector, called scGAN [28], predicts a shadow map by formulating a conditional generator on the input image. A fast deep shadow detection network in [8] obtains a shadow prior map from hand-crafted features, applies a patch-level CNN to predict shadow masks of patches, and combines the results from multi-scale patches for predicting the whole shadow map.
+
+Figure 3: The schematic illustration of the proposed MT-CNN in Fig. 2. Taking a shadow image as the input, our MT-CNN predicts a shadow region map, a shadow edge map, and a shadow count (i.e., the number of shadow regions) by fusing their complementary information; see Section 3.1 for details.
+
+Recently, Hu et al. [12] detected shadow pixels by learning direction-aware spatial context features. Zhu et al. [43] designed a recurrent attention residual (RAR) module to combine the contexts of two adjacent CNN layers and then formulated two series of RAR modules to iteratively integrate spatial contexts over the CNN layers. Le et al. [24] combined a shadow detection network (D-Net) with a shadow attenuation network (A-Net) that generated adversarial training examples. Wang et al. [37] stacked multiple parallel fusion branches to fuse global semantic cues and local spatial details in a deeply supervised framework. Zheng et al. [41] presented a distraction-aware shadow (DS) module to predict false positive and false negative pixels, and fused the obtained distraction features in each CNN layer for shadow detection.
+
+Although improving the shadow detection bar, almost all existing methods suffer from an intrinsic limitation that training their detection networks requires a large amount of data with pixel-level annotations. Although ADNet [24] augments training images from a single shadow image by weakening the shadow area, we argue that these augmented images tend to look unrealistic, and their backgrounds are very similar to the original training image, resulting in a limited generalization capability. In this paper, we leverage unlabeled data to help shadow detection. For this purpose, we embed multi-task learning into a self-ensembling framework to enforce a consistency loss on the shadow-detection tasks. Results show that our method outperforms state-of-the-art shadow detectors, as detailed in the experiment section.
+
+# 3. Methodology
+
+Fig. 2 shows the workflow of the proposed MTMT-Net, which integrates labeled data and unlabeled data by using mean teacher semi-supervised learning. Specifically, we develop a multi-task convolutional neural network (MT-CNN) by considering three tasks, i.e., shadow region detection, shadow edge detection, and shadow count detection. MT-CNN is used for both the student network and the teacher network. During training, the labeled data is fed into the student network, and a multi-task supervised loss is computed by fusing the three task losses. Then, for unlabeled data, we produce an auxiliary input from each image and feed the two into the student network and the teacher network, respectively. A multi-task consistency loss is computed on the two groups of predicted shadow information. In the testing stage, we only utilize the student network to predict the shadow map for the input image.
+
+# 3.1. Multi-task Convolutional Neural Network (MT-CNN)
+
+Although achieving remarkable results, existing shadow detection methods suffer from performance degradation when detecting soft shadows due to their weak boundaries. They also tend to neglect small shadow regions or mis-identify dark non-shadow regions, and thereby may significantly alter the count of detected shadow regions. To address these concerns, we argue that explicitly considering shadow edges and shadow count helps augment shadow region detection in both localization accuracy and segmentation quality. In this paper, we propose a multi-task CNN (MT-CNN) to model and fuse the complementary information of shadow edges, shadow count, and shadow regions within a single network in an end-to-end manner, as illustrated in Fig. 3.
+
+# 3.1.1 Shadow Region Detection
+
+Given an input shadow image, we first use a convolutional neural network (ResNeXt-101 [38] in our experiment) to produce a set of feature maps (denoted as $EF_{1}, EF_{2}, EF_{3}, EF_{4}$, and $EF_{5}$) at different scales (see Fig. 3).
+
+Note that there is complementary information among different CNN layers for shadow detection. The shallow CNN layers capture shadow details as well as many non-shadow details, while the deep CNN layers neglect most of the non-shadow pixels but also miss parts of shadow regions. Here, we employ the short connections [9] to merge feature maps at the last four CNN layers, resulting in four new feature maps (denoted as $DF_{2}$, $DF_{3}$, $DF_{4}$, and $DF_{5}$). Specifically, the merged feature map $DF_{k}$ at the $k$-th CNN layer ($k = 2, \dots, 5$) is computed by:
+
+$$
+DF_{k} = \operatorname{Conv}\left(\operatorname{Concat}\left(EF_{k}, \dots, EF_{5}\right)\right). \tag{1}
+$$
+
+We then merge the shallowest features ($EF_{1}$) and the deepest features ($EF_{5}$) to generate a new feature map, denoted as $DF_{1}$, which is used for predicting the shadow edge map (see Section 3.1.2). After that, to integrate the shadow edge and shadow region information, we refine $\{DF_k, k = 2,\dots,5\}$ by first up-sampling them to the spatial resolution of $DF_{1}$ and then element-wise adding $DF_{1}$. The refined feature maps, denoted as $\{RF_k, k = 2,\dots,5\}$, are given by
+
+$$
+RF_{k} = \operatorname{up}\left(DF_{k}\right) + DF_{1}. \tag{2}
+$$
+
+Finally, we predict four shadow region maps from $DF_{2}$, $DF_{3}$, $DF_{4}$, and $DF_{5}$, four shadow region maps from $RF_{2}$, $RF_{3}$, $RF_{4}$, and $RF_{5}$, and a shadow map (denoted as $S_{f}$ in Fig. 3) from the refined feature maps, which is produced by element-wise addition, i.e.,
+
+$$
+S_{f} = \operatorname{Pred}\left(\sum_{k=2}^{5} RF_{k}\right). \tag{3}
+$$
+
+The prediction $Pred(\cdot)$ is realized by using three $3 \times 3$ convolutional layers, a $1 \times 1$ convolutional layer, and a sigmoid activation layer [43] on features.
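+
+To make the data flow of Eqs. (1)-(3) concrete, the following PyTorch-style sketch mirrors the decoder fusion described above. It is only an illustration under simplifying assumptions (all backbone features are first reduced to a common channel width `ch`, per-level predictions are omitted, and the module and variable names are ours), not the released implementation.
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class FusionDecoder(nn.Module):
+    """Illustrative decoder that fuses backbone features EF_1..EF_5 (Eqs. 1-3)."""
+    def __init__(self, ch: int = 64):
+        super().__init__()
+        # One Conv per level k = 2..5 for Conv(Concat(EF_k, ..., EF_5)), Eq. (1).
+        self.merge = nn.ModuleList(
+            [nn.Conv2d(ch * (6 - k), ch, kernel_size=1) for k in range(2, 6)]
+        )
+        # Pred(.): three 3x3 convs, one 1x1 conv, and a sigmoid (as stated in the text).
+        self.pred = nn.Sequential(
+            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
+            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
+            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
+            nn.Conv2d(ch, 1, 1), nn.Sigmoid(),
+        )
+
+    def forward(self, EF):  # EF: list [EF_1, ..., EF_5], each (B, ch, H_k, W_k)
+        size1 = EF[0].shape[-2:]
+        # DF_1: element-wise sum of the shallowest and deepest features (Sec. 3.1.2).
+        DF1 = EF[0] + F.interpolate(EF[4], size=size1, mode='bilinear', align_corners=False)
+        RFs = []
+        for i, k in enumerate(range(2, 6)):
+            # DF_k = Conv(Concat(EF_k, ..., EF_5)), Eq. (1), at EF_k's resolution.
+            size_k = EF[k - 1].shape[-2:]
+            feats = [F.interpolate(EF[j], size=size_k, mode='bilinear', align_corners=False)
+                     for j in range(k - 1, 5)]
+            DFk = self.merge[i](torch.cat(feats, dim=1))
+            # RF_k = up(DF_k) + DF_1, Eq. (2).
+            RFs.append(F.interpolate(DFk, size=size1, mode='bilinear', align_corners=False) + DF1)
+        # S_f = Pred(sum_k RF_k), Eq. (3).
+        return self.pred(torch.stack(RFs, dim=0).sum(dim=0))
+```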
+
+# 3.1.2 Shadow Edge Detection
+
+By observing shadow images, we notice that for soft shadows, the boundaries may not be distinguishable from the surrounding non-shadow regions. This motivates us to utilize edge knowledge to enhance the detection performance. Recent saliency detectors [14, 15] have also demonstrated this point, showing that edge knowledge helps improve the saliency detection quality.
+
+In our MT-CNN, we fuse the low-level CNN features $EF_{1}$ with the high-level features $EF_{5}$ at the deepest CNN layer to produce the feature map $DF_{1}$, which is then used for predicting a shadow edge map. Although the low-level features $EF_{1}$ capture sufficient shadow edge information, detecting shadow edges only with $EF_{1}$ is not sufficient, since $EF_{1}$ also encodes many non-shadow background details. On the other hand, the deep-layer features $EF_{5}$ have the largest receptive field and effectively suppress the non-shadow pixels. Specifically, $DF_{1}$ is computed via an element-wise addition of $EF_{1}$ and $EF_{5}$.
+
+# 3.1.3 Shadow Count Detection
+
+By analyzing the results of existing shadow detection methods, we find three common failure cases: small shadow regions are missed; non-shadow regions are mis-identified; and nearby shadow regions are mistakenly detected together. These cases all result in an inaccurate shadow region number. Therefore, we explore the number of shadow regions for enhancing the shadow detection performance.
+
+Detecting the shadow region number requires a global understanding of the whole image. As shown in Fig. 3, we rely on $EF_{5}$ at the deepest CNN layer for this detection. Specifically, we apply a single fully-connected layer on $EF_{5}$ to obtain a score ($\mathcal{A}$) indicating the shadow count. Since the number of shadow regions can be very large, to make the computation feasible, we set a maximum constraint $N_{max}$ and formulate the scalar $\mathcal{A}$ as the regression target:
+
+$$
+\mathcal{A} = \frac{\min\left(N_{\text{actual}}, N_{\text{max}}\right)}{N_{\text{max}}}, \tag{4}
+$$
+
+where $N_{\text{actual}}$ denotes the actual number of the shadow regions, and we empirically set $N_{\text{max}} = 8$ in our work.
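+
+As a small illustration (the helper name is ours, not from the paper), the regression target in Eq. (4) with the paper's $N_{max} = 8$ can be computed as:
+
+```python
+def shadow_count_score(n_actual: int, n_max: int = 8) -> float:
+    """Normalized shadow-count target A = min(N_actual, N_max) / N_max (Eq. 4)."""
+    return min(n_actual, n_max) / n_max
+
+# An image with 3 annotated shadow regions gets the target 3 / 8 = 0.375,
+# while any image with 8 or more regions saturates at 1.0.
+```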
+
+# 3.2. Multi-task Supervised Loss on Labeled data
+
+For labeled data, we have pairs of input shadow images and the corresponding annotated shadow masks. It is natural that we take the annotated shadow mask as the ground truth of the shadow region detection ($G_r$). Then, we apply the Canny operator [1] on the annotated shadow mask to generate an edge map as the ground truth of the shadow edge detection ($G_e$). We further inspect each labeled image and manually count the number of shadow regions to obtain $\mathcal{A}$ (see Eq. (4)), which is regarded as the ground truth of the shadow count detection ($G_c$).
+
+After obtaining the ground truths, the multi-task supervised loss (denoted as $\mathcal{L}^s$) for a labeled image ($x$) is computed by adding the supervised losses of the shadow region detection ($\mathcal{L}_r^s$), shadow edge detection ($\mathcal{L}_e^s$), and shadow count detection ($\mathcal{L}_c^s$), i.e.,
+
+$$
+\mathcal{L}^{s}(x) = \mathcal{L}_{r}^{s} + \alpha \mathcal{L}_{e}^{s} + \beta \mathcal{L}_{c}^{s}, \tag{5}
+$$
+
+where
+
+$$
+\mathcal{L}_{r}^{s} = \sum_{j=1}^{9} \Phi_{BCE}\left(P_{r}(j), G_{r}\right), \tag{6}
+$$
+
+$$
+\mathcal{L}_{e}^{s} = \Phi_{BCE}(P_{e}, G_{e}),
+$$
+
+$$
+\mathcal{L}_{c}^{s} = \Phi_{MSE}\left(P_{c}, G_{c}\right).
+$$
+
+Here, $P_{r}(j)$ represents one of the nine predicted shadow maps, $P_{e}$ is the predicted shadow edge map, and $P_{c}$ is the predicted shadow count value. $\Phi_{BCE}$ and $\Phi_{MSE}$ are the binary cross-entropy loss and the mean squared error loss, respectively. We empirically set the weights $\alpha = 10$ and $\beta = 1$ in the network training.
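+
+A minimal sketch of this supervised loss, assuming the nine region maps, the edge map, and the count score are already predicted as PyTorch tensors (names and signatures are ours), could read:
+
+```python
+import torch.nn.functional as F
+
+def supervised_loss(P_r, P_e, P_c, G_r, G_e, G_c, alpha=10.0, beta=1.0):
+    """Multi-task supervised loss L^s = L_r^s + alpha * L_e^s + beta * L_c^s (Eqs. 5-6).
+
+    P_r: list of the nine predicted shadow region maps; P_e: predicted edge map;
+    P_c: predicted count score; G_r / G_e / G_c: the corresponding ground truths.
+    """
+    loss_r = sum(F.binary_cross_entropy(p, G_r) for p in P_r)  # Phi_BCE over the 9 maps
+    loss_e = F.binary_cross_entropy(P_e, G_e)                  # shadow edge term
+    loss_c = F.mse_loss(P_c, G_c)                              # shadow count term
+    return loss_r + alpha * loss_e + beta * loss_c
+```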
+
+# 3.3. Multi-task Consistency Loss on Unlabeled Data
+
+For the unlabeled data, we pass each image into both the student and teacher networks to obtain the three tasks' results from each network: the nine shadow region maps (denoted as $S_{r_1}$ to $S_{r_9}$ for the student and $T_{r_1}$ to $T_{r_9}$ for the teacher), a shadow edge map (denoted as $S_e$ and $T_e$), and a shadow count score (denoted as $S_c$ and $T_c$). We then enforce the predictions of the three tasks from the student network and teacher network to be consistent, resulting in a multi-task consistency loss ($\mathcal{L}^c$). Mathematically, $\mathcal{L}^c$ for an unlabeled image (denoted as $y$) is
+
+$$
+\mathcal{L}^{c}(y) = \mathcal{L}_{r}^{c} + \mathcal{L}_{e}^{c} + \mathcal{L}_{c}^{c}, \tag{7}
+$$
+
+where
+
+$$
+\mathcal{L}_{r}^{c} = \sum_{j=1}^{9} \Phi_{MSE}\left(S_{r_{j}}, T_{r_{j}}\right), \tag{8}
+$$
+
+$$
+\mathcal{L}_{e}^{c} = \Phi_{MSE}(S_{e}, T_{e}),
+$$
+
+$$
+\mathcal{L}_{c}^{c} = \Phi_{MSE}\left(S_{c}, T_{c}\right),
+$$
+
+where $\mathcal{L}_r^c$, $\mathcal{L}_e^c$, and $\mathcal{L}_c^c$ denote the consistency losses of the shadow region detection, shadow edge detection, and shadow count detection, respectively.
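+
+Under the usual mean-teacher convention that no gradient flows through the teacher, a sketch of this consistency loss (names are ours) is:
+
+```python
+import torch.nn.functional as F
+
+def consistency_loss(S_r, S_e, S_c, T_r, T_e, T_c):
+    """Multi-task consistency loss L^c = L_r^c + L_e^c + L_c^c (Eqs. 7-8)."""
+    loss_r = sum(F.mse_loss(s, t.detach()) for s, t in zip(S_r, T_r))  # nine region maps
+    loss_e = F.mse_loss(S_e, T_e.detach())                             # edge maps
+    loss_c = F.mse_loss(S_c, T_c.detach())                             # count scores
+    return loss_r + loss_e + loss_c
+```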
+
+# 3.4. Our Network
+
+We combine multi-task learning with the semi-supervised self-ensembling model for shadow detection. The total loss of our network is
+
+$$
+\mathcal{L}_{\text{total}} = \sum_{i=1}^{N} \mathcal{L}^{s}\left(x_{i}\right) + \lambda \sum_{j=1}^{M} \mathcal{L}^{c}\left(y_{j}\right), \tag{9}
+$$
+
+where $N$ and $M$ are the numbers of labeled images and unlabeled images in our training set. $\mathcal{L}^s(x_i)$ denotes the multi-task supervised loss (Eq. (5)) for the $i$-th labeled image, while $\mathcal{L}^c(y_j)$ is the multi-task consistency loss (Eq. (7)) for the $j$-th unlabeled image. The weight $\lambda$ balances the multi-task supervised loss on labeled data and the multi-task consistency loss on unlabeled data. Following [21, 31], we use a time-dependent Gaussian warm-up function to update $\lambda$: $\lambda(t) = \lambda_{max} e^{-5(1 - t / t_{max})^2}$, where $t$ denotes the current training iteration and $t_{max}$ is the maximum training iteration. In our experiments, we empirically set $\lambda_{max} = 10$.
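+
+For reference, the warm-up schedule can be written as a one-line helper (a sketch; the function name is ours):
+
+```python
+import math
+
+def consistency_weight(t: int, t_max: int, lam_max: float = 10.0) -> float:
+    """Gaussian warm-up lambda(t) = lambda_max * exp(-5 * (1 - t / t_max)^2)."""
+    return lam_max * math.exp(-5.0 * (1.0 - t / t_max) ** 2)
+
+# lambda grows from about 10 * exp(-5) ~= 0.067 at t = 0 to 10 at t = t_max.
+```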
+
+We minimize $\mathcal{L}_{total}$ to train the student network, and the parameters of the teacher network are updated in each training step via the exponential moving average (EMA) strategy in [31]. The parameters of the teacher network at the $t$-th training iteration are:
+
+$$
+\theta_{t}^{\prime} = \eta\, \theta_{t-1}^{\prime} + (1 - \eta)\, \theta_{t}, \tag{10}
+$$
+
+where $\theta_{t}$ denotes the student network parameters at the $t$-th training iteration. The EMA decay $\eta$ is empirically set to 0.99, as suggested in [21, 31].
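+
+A sketch of this EMA update for PyTorch modules, under the assumption that the student and teacher share the same architecture (the helper name is ours):
+
+```python
+import torch
+
+@torch.no_grad()
+def update_teacher(teacher, student, eta: float = 0.99):
+    """EMA update theta'_t = eta * theta'_{t-1} + (1 - eta) * theta_t (Eq. 10)."""
+    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
+        t_param.mul_(eta).add_(s_param, alpha=1.0 - eta)
+```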
+
+Our unlabeled data. The unlabeled data in our work has 3,424 images with shadows. It consists of two parts: one is the USR dataset from a recent shadow removal work [11], while the other is our collection of 979 images from the internet. The USR dataset [11] has 2,445 shadow images without shadow detection annotations.
+
+# 3.5. Training and Testing Strategies
+
+Training parameters. To accelerate the training procedure and reduce the risk of overfitting, we initialize the parameters of MT-CNN (the student network) with ResNeXt [38] pre-trained for image classification on ImageNet. Other parameters in the MT-CNN are initialized with random values. Stochastic gradient descent (SGD) with a momentum of 0.9 and a weight decay of 0.0005 is used to optimize the whole network for 10,000 iterations. The learning rate is adjusted by a poly strategy [25] with an initial learning rate of 0.005 and a power of 0.9. We resize all the labeled and unlabeled images to $416 \times 416$ for training our network on a single GTX 2080Ti GPU, and augment the training set by random horizontal flipping. We use a mini-batch size of 6, which consists of 4 labeled images and 2 unlabeled images.
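+
+The poly learning-rate policy referenced above is commonly written as lr = base_lr * (1 - iter / max_iter)^power; a small helper under that reading (the function name is ours):
+
+```python
+def poly_lr(base_lr: float, cur_iter: int, max_iter: int, power: float = 0.9) -> float:
+    """Poly schedule: lr = base_lr * (1 - cur_iter / max_iter) ** power."""
+    return base_lr * (1.0 - cur_iter / max_iter) ** power
+
+# With the settings in the text, poly_lr(0.005, 5000, 10000) ~= 0.0027 at the halfway point.
+```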
+
+Inference. During testing, we resize the input images to $416 \times 416$ , feed the resized image into the student network, and take the rightmost shadow region detection map ( $S_{f}$ in Fig. 3) as the final output of our MTMT-Net. Following recent shadow detection networks [43, 41], we apply a fully connected conditional random field (CRF) [20] to further post-process the predicted result of our network.
+
+# 4. Experimental Results
+
+In this section, we first present the shadow detection benchmark datasets and evaluation metric, then compare the proposed MTMT-Net with the state-of-the-art shadow detectors and with relevant methods for shadow removal, saliency detection, and semantic segmentation, and finally report the ablation study results.
+
+Table 1: Comparing our network (MTMT-Net) against the state-of-the-art shadow detectors.
+
+| Method | Year | SBU [33] BER ↓ | Shadow ↓ | Non Shad. ↓ | UCF [42] BER ↓ | Shadow ↓ | Non Shad. ↓ | ISTD [35] BER ↓ | Shadow ↓ | Non Shad. ↓ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| MTMT-Net (ours) | - | 3.15 | 3.73 | 2.57 | 7.47 | 10.31 | 4.63 | 1.72 | 1.36 | 2.08 |
+| Ours-w/o-CRF | - | 3.15 | 3.72 | 2.58 | 8.06 | 12.23 | 3.90 | 1.77 | 1.16 | 2.39 |
+| DSDNet [41] | 2019 | 3.45 | 3.33 | 3.58 | 7.59 | 9.74 | 5.44 | 2.17 | 1.36 | 2.98 |
+| DC-DSPF [37] | 2019 | 4.90 | 4.70 | 5.10 | 7.90 | 6.50 | 9.30 | - | - | - |
+| BDRAR [43] | 2018 | 3.64 | 3.40 | 3.89 | 7.81 | 9.69 | 5.94 | 2.69 | 0.50 | 4.87 |
+| ADNet [24] | 2018 | 5.37 | 4.45 | 6.30 | 9.25 | 8.37 | 10.14 | - | - | - |
+| DSC [12] | 2018 | 5.59 | 9.76 | 1.42 | 10.54 | 18.08 | 3.00 | 3.42 | 3.85 | 3.00 |
+| ST-CGAN [35] | 2018 | 8.14 | 3.75 | 12.53 | 11.23 | 4.94 | 17.52 | 3.85 | 2.14 | 5.55 |
+| patched-CNN [8] | 2018 | 11.56 | 15.60 | 7.52 | - | - | - | - | - | - |
+| scGAN [28] | 2017 | 9.10 | 8.39 | 9.69 | 11.50 | 7.74 | 15.30 | 4.70 | 3.22 | 6.18 |
+| stacked-CNN [33] | 2016 | 11.00 | 8.84 | 12.76 | 13.00 | 9.00 | 17.10 | 8.60 | 7.69 | 9.23 |
+| Unary-Pairwise [7] | 2011 | 25.03 | 36.26 | 13.80 | - | - | - | - | - | - |
+| DeshadowNet [30] | 2017 | 6.96 | - | - | 8.92 | - | - | - | - | - |
+| EGNet [14] | 2019 | 4.49 | 5.23 | 3.75 | 9.20 | 11.28 | 7.12 | 1.85 | 1.75 | 1.95 |
+| SRM [36] | 2017 | 6.51 | 10.52 | 2.50 | 12.51 | 21.41 | 3.60 | 7.92 | 13.97 | 1.86 |
+| Amulet [39] | 2017 | 15.13 | - | - | 15.17 | - | - | - | - | - |
+| PSPNet [40] | 2017 | 8.57 | - | - | 11.75 | - | - | 4.26 | 4.51 | 4.02 |
+
+Our code, model parameters, and shadow detection results on the three benchmark datasets have been released at https://github.com/eraserNut/MTMT.
+
+# 4.1. Datasets and Evaluation Metrics
+
+Benchmark datasets. We evaluate our method on three widely-used shadow detection benchmark datasets: SBU [33], UCF [42], and ISTD [35]: (i) The SBU dataset is the largest annotated shadow dataset with 4,089 training images and 638 testing images; (ii) The UCF dataset consists of 145 training images and 76 testing images, covering outdoor scenes; and (iii) ISTD is a recently developed dataset for both shadow detection and removal. It has 1,870 triples of shadow images, shadow maps, and shadow-free images, and 540 of them are used for testing. Similar to recent works [12, 24, 43, 41], for SBU and UCF, we obtained the evaluation results by training our network on the SBU training set and our unlabeled dataset. Since ISTD only contains cast shadow images that are different from SBU images, following [41], we re-train our method and most competitors on the ISTD training dataset with our unlabeled data. Our training time for SBU is 1 hour, and that for ISTD is 0.5 hours. The model size of our network is 169 M. In the testing, our MTMT-Net takes around 0.05 seconds to process an image with a $416 \times 416$ image resolution.
+
+Evaluation metric. We employ a commonly-used metric, i.e., balance error rate (BER), to quantitatively evaluate the shadow detection performance. The BER [43, 12] equally considers the quality of shadow and non-shadow regions, which is given by:
+
+$$
+BER = \left(1 - \frac{1}{2}\left(\frac{N_{tp}}{N_{p}} + \frac{N_{tn}}{N_{n}}\right)\right) \times 100, \tag{11}
+$$
+
+where $N_{tp}$, $N_{tn}$, $N_p$, and $N_n$ are the numbers of true positives, true negatives, shadow pixels, and non-shadow pixels of the shadow image, respectively. A smaller BER value indicates better shadow detection performance.
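+
+A direct implementation of Eq. (11) on binary masks (assuming the prediction has already been thresholded, which the text does not spell out here; the function name is ours) is:
+
+```python
+import numpy as np
+
+def balance_error_rate(pred: np.ndarray, gt: np.ndarray) -> float:
+    """BER = (1 - 0.5 * (N_tp / N_p + N_tn / N_n)) * 100 on binary shadow masks (Eq. 11)."""
+    pred, gt = pred.astype(bool), gt.astype(bool)
+    n_p, n_n = gt.sum(), (~gt).sum()      # shadow / non-shadow pixel counts
+    n_tp = (pred & gt).sum()              # correctly detected shadow pixels
+    n_tn = (~pred & ~gt).sum()            # correctly detected non-shadow pixels
+    return (1.0 - 0.5 * (n_tp / n_p + n_tn / n_n)) * 100.0
+```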
+
+# 4.2. Comparison with the State-of-the-art Shadow Detectors
+
+We compare with 10 recent shadow detectors, including DSDNet [41], DC-DSPF [37], BDRAR [43], ADNet [24], DSC [12], ST-CGAN [35], patched-CNN [8], scGAN [28], stacked-CNN [33], and Unary-Pairwise [7]. Among them, the last method is based on hand-crafted features while all the others are deep-learning-based methods. To make the comparisons fair, we adopt the results of the compared methods either directly from the authors or as reported in their published papers.
+
+Quantitative comparison. Table 1 summarizes the quantitative results of different methods on the three benchmark datasets. The BER score is the average of the shadow and non-shadow BER scores. Apparently, the deep learning based methods [33, 12, 8] have much smaller BER values than the hand-crafted detector [7], since they can learn more powerful features for shadow detection from the annotated training images. Among the deep learning based shadow detectors, DSDNet [41] is the second best-performing method, which explicitly learns and integrates the semantics of visual distraction regions to infer shadows. Compared to the best-performing existing method, our method has $8.70\%$, $1.58\%$, and $20.7\%$ lower BER scores on SBU, UCF, and ISTD, respectively. In addition, our method has a better BER score on non-shadow pixels for SBU and UCF and a better BER score on shadow pixels for ISTD. This shows that our network reduces the false positive predictions on non-shadow regions for SBU and UCF and detects more shadow pixels for ISTD. Like the three comparative methods [43, 12, 41], we also use CRF [20] as post-processing. The second row in Table 1 shows the performance of our method without using CRF. The results indicate that using CRF brings a certain degree of improvement, mainly on the UCF dataset, while even without CRF our method still outperforms most state-of-the-art methods.
+
+Figure 4: Visual comparison of shadow maps produced by our method and other methods (4th-10th columns) against the ground truths shown in the 2nd column. Note that "stCNN" and "paCNN" stand for "stacked-CNN" and "patched-CNN", respectively.
+
+Visual comparison. We further visually compare the shadow detection maps produced by our method and the state-of-the-art methods, as shown in Fig. 4. From the results, we can see that our MTMT-Net (3rd column of Fig. 4) has the best performance among all the shadow detectors. It can effectively locate different shadows under various backgrounds, and successfully discriminates true shadows from non-shadow regions with shadow-like appearance. For example, in the 3rd, 5th, and 7th rows, MTMT-Net can accurately detect the shadow regions, while the others mistakenly recognize the road, the sky, and the dark ground as shadows, respectively. Moreover, for high-contrast objects in a large shadow region, MTMT-Net can still recognize them as shadows, as demonstrated in the last two rows.
+
+# 4.3. Comparison with Shadow Removal, Saliency Detection and Semantic Segmentation Methods
+
+It is noted that deep networks designed for shadow removal, saliency detection, and semantic segmentation can be re-trained for shadow detection by using annotated shadow datasets. To further evaluate the effectiveness of our method, we apply a shadow removal model, i.e., DeshadowNet [30], three saliency detection models, i.e., SRM [36], Amulet [39], and EGNet [14], and a semantic segmentation model, i.e., PSPNet [40], to the shadow detection datasets.
+
+We adopt the available results of the compared methods by either re-training the released code or using those reported. For a fair comparison, we try our best to fine-tune their training parameters and select the best shadow detection results. The last five rows in Table 1 report their BER values. We see that these models can achieve superior BER performance over some existing shadow detectors, yet they are still worse than our network.
+
+# 4.4. Ablation Analysis
+
+Baseline network design. We perform ablation study experiments to evaluate the proposed multi-task supervised loss (see Eq. (5)) and multi-task consistency loss (see Eq. (7)) of our MTMT-Net. Here, we consider seven baseline networks.
+
+The first four baseline networks are constructed by removing the teacher model, which means that only the supervised loss on labeled data is used to train MT-CNN. Specifically, the first baseline network (denoted as "basic") only considers the shadow region detection supervised loss ($\mathcal{L}_r^s$ in Eq. (5)). The second (denoted as "basic+SE") adds the shadow edge detection supervised loss ($\mathcal{L}_e^s$ of Eq. (5)), while the third (denoted as "basic+SC") adds the shadow count detection supervised loss ($\mathcal{L}_c^s$ of Eq. (5)). The fourth fuses the supervised losses of the three tasks together.
+
+Another three baseline networks are built to verify the multi-task consistency loss on unlabeled data.
+
+
+Figure 5: Visual comparison of shadow maps produced by our method and other baseline networks (see Table 2 for details).
+
+Table 2: Ablation analysis. Here, "SR" denotes the shadow region detection; "SE" denotes the shadow edge detection; "SC" denotes the shadow count detection; and "MT" denotes the mean teacher.
+
+| Network | SR | SE | SC | MT | SBU [33] BER ↓ | UCF [42] BER ↓ | ISTD [35] BER ↓ |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| basic | ✓ | × | × | × | 5.28 | 9.57 | 2.23 |
+| basic+SE | ✓ | ✓ | × | × | 4.07 | 8.09 | 1.80 |
+| basic+SC | ✓ | × | ✓ | × | 4.72 | 9.34 | 2.04 |
+| basic+three-tasks | ✓ | ✓ | ✓ | × | 3.61 | 7.64 | 2.03 |
+| basic-MT | ✓ | × | × | ✓ | 4.49 | 8.29 | 2.12 |
+| basic-MT+SE-MT | ✓ | ✓ | × | ✓ | 3.83 | 7.81 | 1.75 |
+| basic-MT+SC-MT | ✓ | × | ✓ | ✓ | 4.41 | 8.61 | 2.03 |
+| our method | ✓ | ✓ | ✓ | ✓ | 3.15 | 7.34 | 1.72 |
+
+The first one (denoted as "basic-MT") only considers the mean teacher model on the shadow region task by fusing the supervised loss ($\mathcal{L}_r^s$ of Eq. (5)) and the consistency loss ($\mathcal{L}_r^c$ of Eq. (7)). The second one (denoted as "basic-MT+SE-MT") applies the mean teacher model on the shadow region detection and shadow edge detection, which means that $\mathcal{L}_r^s$ and $\mathcal{L}_e^s$ in Eq. (5) as well as $\mathcal{L}_r^c$ and $\mathcal{L}_e^c$ in Eq. (7) are used to train the network. The last one (denoted as "basic-MT+SC-MT") uses the mean teacher model on the shadow region detection and shadow count detection. In other words, the supervised losses ($\mathcal{L}_r^s$ and $\mathcal{L}_c^s$ in Eq. (5)) and the consistency losses ($\mathcal{L}_r^c$ and $\mathcal{L}_c^c$ in Eq. (7)) are used for the network training.
+
+We train all seven baseline networks using the SBU training set and our unlabeled data to obtain results on SBU and UCF. For ISTD, we use the ISTD training set and our unlabeled data to train all the networks and test them using the ISTD testing set.
+
+Quantitative comparisons. Table 2 summarizes the BER values of our network and the seven baseline networks on the three benchmark datasets. From the results, we have the following observations: (i) "basic+SE" and "basic+SC" have superior BER values over "basic", which means that detecting shadow boundaries and shadow count can provide helpful information for shadow detection. (ii) "basic+three-tasks" has better BER performance than "basic+SE" and "basic+SC", demonstrating that fusing the three tasks together for supervised shadow detection yields better shadow detection performance. (iii) "basic-MT" can more accurately detect shadow pixels than "basic", as shown by its smaller BER values, indicating that the additional consistency loss from the unlabeled data yields superior shadow detection performance. (iv) "basic-MT+SE-MT" and "basic-MT+SC-MT" produce smaller BER results than "basic-MT", showing that the shadow edge detection and shadow count detection benefit the mean teacher model for shadow detection. (v) The shadow edge detection contributes more than the shadow count detection to the success of our method, since "basic-MT+SE-MT" has a better BER result than "basic-MT+SC-MT". (vi) By designing a three-task mean teacher model, our MTMT-Net achieves the best BER performance on all three benchmarks.
+
+Visual comparisons. Moreover, Fig. 5 visually compares the shadow maps produced by our MTMT-Net and the seven baseline networks. Apparently, our method can identify shadows better than all seven baselines in both shadow segmentation quality and localization accuracy. This proves the effectiveness of considering shadow edge information, shadow count information, and unlabeled data within one framework.
+
+# 5. Conclusion
+
+This paper presents a novel network for single-image shadow detection by developing a multi-task mean teacher framework. Our key idea is to first develop a multi-task network for simultaneously performing shadow region detection, shadow edge detection, and shadow count estimation by leveraging their complementary information. Then we employ mean teacher semi-supervised learning to leverage additional unlabeled data for further improving the detection performance. Experimental results on three benchmark datasets show that our network consistently outperforms the state-of-the-art methods by a large margin. Like other works [43, 41, 12], our method might not work well for images with multiple and complex shadows. Resolving this challenging problem is a future direction of our work.
+
+# Acknowledgments
+
+The work is supported by the National Natural Science Foundation of China (Project No. 61572354, 61902275, 61671325, U1803264, 61672376), and CUHK Research Committee Direct Grant for Research 2018/19.
+
+# References
+
+[1] John Canny. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6):679-698, 1986. 4
+[2] Rita Cucchiara, Costantino Grana, Massimo Piccardi, and Andrea Prati. Detecting moving objects, ghosts, and shadows in video streams. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(10):1337-1342, 2003. 1
+[3] Carl Doersch and Andrew Zisserman. Multi-task self-supervised visual learning. In ICCV, pages 2051-2060, 2017. 2
+[4] Aleksandris Ecins, Cornelia Fermuller, and Yiannis Aloimonos. Shadow free segmentation in still images using local density measure. In ICCP, pages 1-8, 2014. 1
+[5] Graham D Finlayson, Mark S Drew, and Cheng Lu. Entropy minimization for shadow removal. International Journal of Computer Vision, 85(1):35-57, 2009. 1, 2
+[6] Graham D Finlayson, Steven D Hordley, Cheng Lu, and Mark S Drew. On the removal of shadows from images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(1):59-68, 2006. 1, 2
+[7] Ruiqi Guo, Qieyun Dai, and Derek Hoiem. Single-image shadow detection and removal using paired regions. In CVPR, pages 2033-2040, 2011. 2, 6
+[8] Sepideh Hosseinzadeh, Moein Shakeri, and Hong Zhang. Fast shadow detection from a single image using a patched convolutional neural network. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 3124-3129, 2018. 3, 6, 7
+[9] Qibin Hou, Ming-Ming Cheng, Xiaowei Hu, Ali Borji, Zhuowen Tu, and Philip Torr. Deeply supervised salient object detection with short connections. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(4):815-828, 2019. 4
+[10] Xiaowei Hu, Chi-Wing Fu, Lei Zhu, Jing Qin, and Pheng-Ann Heng. Direction-aware spatial context features for shadow detection and removal. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019. to appear. 1
+[11] Xiaowei Hu, Yitong Jiang, Chi-Wing Fu, and Pheng-Ann Heng. Mask-ShadowGAN: Learning to remove shadows from unpaired data. In ICCV, pages 2472–2481, 2019. 5
+[12] Xiaowei Hu, Lei Zhu, Chi-Wing Fu, Jing Qin, and Pheng-Ann Heng. Direction-aware spatial context features for shadow detection. In CVPR, pages 7454–7462, 2018. 1, 3, 6, 7, 8
+[13] Xiang Huang, Gang Hua, Jack Tumblin, and Lance Williams. What characterizes a shadow boundary under the sun and sky? In ICCV, pages 898-905, 2011. 1, 2
+[14] Jia-Xing Zhao, Jiang-Jiang Liu, Deng-Ping Fan, Yang Cao, Ju-Feng Yang, and Ming-Ming Cheng. EGNet: Edge guidance network for salient object detection. In ICCV, pages 8779-8788, 2019. 2, 4, 6, 7
+[15] Jiang-Jiang Liu, Qibin Hou, Ming-Ming Cheng, Jiashi Feng, and Jianmin Jiang. A simple pooling-based design for real-time salient object detection. In CVPR, pages 3917-3926, 2019. 4
+
+[16] Imran N Junejo and Hassan Foroosh. Estimating geotemporal location of stationary cameras using shadow trajectories. In ECCV, pages 318-331, 2008. 1
+[17] Kevin Karsch, Varsha Hedau, David Forsyth, and Derek Hoiem. Rendering synthetic objects into legacy photographs. ACM Trans. on Graphics (SIGGRAPH Asia), 30(6):157:1-157:12, 2011. 1
+[18] Alex Kendall, Yarin Gal, and Roberto Cipolla. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In CVPR, pages 7482-7491, 2018. 2
+[19] Salman Hameed Khan, Mohammed Bennamoun, Ferdous Sohel, and Roberto Togneri. Automatic feature learning for robust shadow detection. In CVPR, pages 1939-1946, 2014. 1, 2
+[20] Philipp Krahenbuhl and Vladlen Koltun. Efficient inference in fully connected CRFs with Gaussian edge potentials. In NIPS, pages 109-117, 2011. 5, 6
+[21] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016. 5
+[22] Jean-François Lalonde, Alexei A Efros, and Srinivasa G Narasimhan. Estimating natural illumination from a single outdoor image. In ICCV, pages 183-190, 2009. 1
+[23] Jean-François Lalonde, Alexei A Efros, and Srinivasa G Narasimhan. Detecting ground shadows in outdoor consumer photographs. In ECCV, pages 322-335, 2010. 1, 2
+[24] Hieu Le, Tomas F. Yago Vicente, Vu Nguyen, Minh Hoai, and Dimitris Samaras. A+D Net: Training a shadow detector with adversarial shadow attenuation. In ECCV, pages 662-678, 2018. 1, 3, 6
+[25] Wei Liu, Andrew Rabinovich, and Alexander C Berg. ParseNet: Looking wider to see better. arXiv preprint arXiv:1506.04579, 2015. 5
+[26] Ishan Misra, Abhinav Shrivastava, Abhinav Gupta, and Martial Hebert. Cross-stitch networks for multi-task learning. In CVPR, pages 3994-4003, 2016. 2
+[27] Sohail Nadimi and Bir Bhanu. Physical models for moving shadow and object detection in video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(8):1079-1087, 2004. 1
+[28] Vu Nguyen, Tomas F Yago Vicente, Maozheng Zhao, Minh Hoai, and Dimitris Samaras. Shadow detection with conditional generative adversarial networks. In ICCV, pages 4510-4518, 2017. 1, 2, 3, 6, 7
+[29] Takahiro Okabe, Imari Sato, and Yoichi Sato. Attached shadow coding: Estimating surface normals from shadows under unknown reflectance and lighting conditions. In ICCV, pages 1693-1700, 2009. 1
+[30] Liangqiong Qu, Jiandong Tian, Shengfeng He, Yandong Tang, and Rynson WH Lau. DeshadowNet: A multi-context embedding deep network for shadow removal. In CVPR, pages 4067-4075, 2017. 6, 7
+[31] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In NIPS, pages 1195-1204, 2017. 5
+
+[32] Jiandong Tian, Xiaojun Qi, Liangqiong Qu, and Yandong Tang. New spectrum ratio properties and features for shadow detection. Pattern Recognition, 51:85-96, 2016. 2
+[33] Tomás F Yago Vicente, Le Hou, Chen-Ping Yu, Minh Hoai, and Dimitris Samaras. Large-scale training of shadow detectors with noisily-annotated shadow examples. In ECCV, pages 816–832, 2016. 1, 3, 6, 7, 8
+[34] Tomas F. Yago Vicente, Minh Hoai, and Dimitris Samaras. Leave-one-out kernel optimization for shadow detection. In ICCV, pages 3388-3396, 2015. 2
+[35] Jifeng Wang, Xiang Li, and Jian Yang. Stacked conditional generative adversarial networks for jointly learning shadow detection and shadow removal. In CVPR, pages 1788-1797, 2018. 1, 6, 8
+[36] Tiantian Wang, Ali Borji, Lihe Zhang, Pingping Zhang, and Huchuan Lu. A stagewise refinement model for detecting salient objects in images. In ICCV, pages 4019-4028, 2017. 6, 7
+[37] Yupei Wang, Xin Zhao, Yin Li, Xuecai Hu, Kaiqi Huang, et al. Densely cascaded shadow detection network via deeply supervised parallel fusion. In IJCAI, pages 1007-1013, 2019. 3, 6
+[38] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In CVPR, pages 5987-5995, 2017. 4, 5
+[39] Pingping Zhang, Dong Wang, Huchuan Lu, Hongyu Wang, and Xiang Ruan. Amulet: Aggregating multi-level convolutional features for salient object detection. In CVPR, pages 202-211, 2017. 6, 7
+[40] Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia. Pyramid scene parsing network. In CVPR, pages 2881-2890, 2017. 6, 7
+[41] Quanlong Zheng, Xiaotian Qiao, Ying Cao, and Rynson WH Lau. Distraction-aware shadow detection. In CVPR, pages 5167-5176, 2019. 1, 2, 3, 5, 6, 7, 8
+[42] Jiejie Zhu, Kegan GG Samuel, Syed Z Masood, and Marshall F Tappen. Learning to recognize shadows in monochromatic natural images. In CVPR, pages 223-230, 2010. 1, 2, 6, 8
+[43] Lei Zhu, Zijun Deng, Xiaowei Hu, Chi-Wing Fu, Xuemiao Xu, Jing Qin, and Pheng-Ann Heng. Bidirectional feature pyramid network with recurrent attention residual modules for shadow detection. In ECCV, pages 121–136, 2018. 1, 3, 4, 5, 6, 7, 8
\ No newline at end of file
diff --git a/amultitaskmeanteacherforsemisupervisedshadowdetection/images.zip b/amultitaskmeanteacherforsemisupervisedshadowdetection/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..a5edd783d87a59329aa04f189177934dea818239
--- /dev/null
+++ b/amultitaskmeanteacherforsemisupervisedshadowdetection/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:593f3c421b2b7e6c29c1f16009bfe245726e2dea4a8b91604ee09bf9a6f78f6c
+size 514522
diff --git a/amultitaskmeanteacherforsemisupervisedshadowdetection/layout.json b/amultitaskmeanteacherforsemisupervisedshadowdetection/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..5742d3827e70a464be04ddbc0fcac2b3ac3ebaac
--- /dev/null
+++ b/amultitaskmeanteacherforsemisupervisedshadowdetection/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9ec0ddba9a4ca9bac4f6e08c3d87a2a351a67dd89d46fabffe8d94d6e317a761
+size 441780
diff --git a/aneuralrenderingframeworkforfreeviewpointrelighting/3a0705da-6f15-4a10-89a6-f400c443caaa_content_list.json b/aneuralrenderingframeworkforfreeviewpointrelighting/3a0705da-6f15-4a10-89a6-f400c443caaa_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..c5ffdf13cdc6f251245ad745495b95782ca26e16
--- /dev/null
+++ b/aneuralrenderingframeworkforfreeviewpointrelighting/3a0705da-6f15-4a10-89a6-f400c443caaa_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4c878ac19f15f3f90c11fb53c9be62f14d8b3511e756f8a034ff6a5917080958
+size 82103
diff --git a/aneuralrenderingframeworkforfreeviewpointrelighting/3a0705da-6f15-4a10-89a6-f400c443caaa_model.json b/aneuralrenderingframeworkforfreeviewpointrelighting/3a0705da-6f15-4a10-89a6-f400c443caaa_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..175ae6e79c9f5671e8964974c6a14aa9c6e3fcf2
--- /dev/null
+++ b/aneuralrenderingframeworkforfreeviewpointrelighting/3a0705da-6f15-4a10-89a6-f400c443caaa_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:84bafe618d37a4f4d12e0eaae5af11ba405fe69416b7f801a7f5e2cb437c616e
+size 109011
diff --git a/aneuralrenderingframeworkforfreeviewpointrelighting/3a0705da-6f15-4a10-89a6-f400c443caaa_origin.pdf b/aneuralrenderingframeworkforfreeviewpointrelighting/3a0705da-6f15-4a10-89a6-f400c443caaa_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..6610b20a6f8aa5fab3980e3c9d6550233aef44ff
--- /dev/null
+++ b/aneuralrenderingframeworkforfreeviewpointrelighting/3a0705da-6f15-4a10-89a6-f400c443caaa_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bbf7f0d54dd0f2319351cded4ddf7c4fc94462a272f6337e4dbcb73fcfb05484
+size 2769154
diff --git a/aneuralrenderingframeworkforfreeviewpointrelighting/full.md b/aneuralrenderingframeworkforfreeviewpointrelighting/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..cb2093335ec6272768d9af9225806a15fd6f828c
--- /dev/null
+++ b/aneuralrenderingframeworkforfreeviewpointrelighting/full.md
@@ -0,0 +1,356 @@
+# A Neural Rendering Framework for Free-Viewpoint Relighting
+
+Zhang Chen $^{1,2,3}$, Anpei Chen, Guli Zhang, Chengyuan Wang $^{4}$, Yu Ji $^{5}$, Kiriakos N. Kutulakos, Jingyi Yu
+
+$^{1}$ ShanghaiTech University $^{2}$ Shanghai Institute of Microsystem and Information Technology $^{3}$ University of Chinese Academy of Sciences $^{4}$ Shanghai University $^{5}$ DGene, Inc. $^{6}$ University of Toronto
+
+{chenzhang, chenap, zhanggl, yujingyi}@shanghaitech.edu.cn ericw385@i.shu.edu.cn yu.ji@dgene.com kyros@cs.toronto.edu
+
+github.com/LansburyCH/relighttable-nr
+
+# Abstract
+
+We present a novel Relightable Neural Renderer (RNR) for simultaneous view synthesis and relighting using multi-view image inputs. Existing neural rendering (NR) does not explicitly model the physical rendering process and hence has limited capability for relighting. RNR instead models image formation in terms of environment lighting, object intrinsic attributes, and the light transport function (LTF), each corresponding to a learnable component. In particular, the incorporation of a physically based rendering process not only enables relighting but also improves the quality of view synthesis. Comprehensive experiments on synthetic and real data show that RNR provides a practical and effective solution for conducting free-viewpoint relighting.
+
+# 1. Introduction
+
+Neural rendering (NR) has shown great success in the past few years in producing photorealistic images under complex geometry, surface reflectance, and environment lighting. Unlike traditional modeling and rendering techniques that rely on elaborate setups to capture detailed object geometry and accurate surface reflectance properties, often also with excessive artistic manipulations, NR can produce compelling results by using only images captured under uncontrolled illumination. So far, most existing NR methods have focused on either free-viewpoint rendering under fixed illumination or image-based relighting under a fixed viewpoint. In this paper, we explore the problem of simultaneous novel view synthesis and relighting using NR.
+
+State-of-the-art deep view synthesis techniques follow the pipeline that first extracts deep features from input images and 3D models, then projects the features to the image space via traditional camera projection, and finally applies a rendering network to render the projected features into an RGB image. Such approaches exploit learnable components to both encode 3D representations and model the rendering process. Approaches such as neural point cloud [2], neural volume [63] and neural texture [69] utilize deep representations for 3D content. With rich training data, these methods can tolerate inaccuracies in geometry and maintain reasonable rendering quality. For example, DeepVoxels [63] uses a learnable volume as an alternative to standard 3D representation while combining physically based forward/backward projection operators for view synthesis.
+
+Figure 1. Results from our Relightable Neural Renderer (RNR). Top row shows the relighting results for a synthetic sphere composed of complex materials. Bottom row shows free-viewpoint relighting results for real captured data.
+
+Using NR to produce visually plausible free-viewpoint relighting is more difficult than changing viewpoints under fixed illumination. This is because, under fixed illumination, existing NRs manage to model 2D/3D geometry as learnable components to directly encode the appearance of different views. Relighting, in contrast, requires further separating appearance into object intrinsic attributes and illumination. From an NR perspective, the final rendering step in existing approaches cannot yet achieve such separation.
+
+In this paper, we present a novel Relightable Neural Renderer (RNR) for view synthesis and relighting from multiview inputs. A unique step in our approach is that we model image formation in terms of environment lighting, object intrinsic attributes, and light transport function (LTF). RNR sets out to conduct regression on these three individual components rather than directly translating deep features to appearance as in existing NR. In addition, the use of LTF instead of a parametric BRDF model extends the capability of modeling global illumination. While enabling relighting, RNR can also produce view synthesis using the same network architecture. Comprehensive experiments on synthetic and real data show that RNR provides a practical and effective solution for conducting free-viewpoint relighting.
+
+# 2. Related Work
+
+Image-based Rendering (IBR). Traditional IBR methods [17, 37, 24, 5, 7, 86, 23, 57] synthesize novel views by blending pixels from input images. Compared with physically based rendering, which requires high-resolution geometry and accurate surface reflectance, they can use lower quality geometry as proxies to produce relatively high quality rendering. The ultimate rendering quality, however, is a trade-off between the density of sampled images and geometry: low quality geometry requires dense sampling to reduce artifacts; otherwise the rendering exhibits various artifacts including ghosting, aliasing, misalignment and appearance jumps. The same trade-off applies to image-based relighting, although for low frequency lighting, sparse sampling may suffice to produce realistic appearance. Handcrafted blending schemes [9, 35, 8, 23, 57] have been developed for specific rendering tasks but they generally require extensive parameter tuning.
+
+Deep View Synthesis. Recently, there has been a large corpus of works on learning-based novel view synthesis. [68, 13] learn an implicit 3D representation by training on synthetic datasets. Warping-based methods [88, 55, 67, 90, 28, 11] synthesize novel views by predicting the optical flow field. Flow estimation can also be enhanced with geometry priors [87, 45]. Kalantari et al. [30] separate the synthesis process into disparity and color estimations for light field data. Srinivasan et al. [66] further extend to RGB-D view synthesis on small baseline light fields.
+
+Eslami et al. [14] propose Generative Query Network to embed appearances of different views in latent space. Disentangled understanding of scenes can also be conducted through interpretable transformations [82, 36, 77], Lie groups-based latent variables [15] or attention modules [6]. Instead of 2D latent features, [72, 52, 20] utilize volumetric representations as a stronger multi-view constraint, whereas Sitzmann et al. [64] represent a scene as a continuous mapping from 3D geometry to deep features.
+
+To create more photo-realistic rendering for a wide viewing range, [22, 70, 10, 63, 47, 69, 2, 61, 49, 79] require many more images as input. Hedman et al. [22] learn the blending scheme in IBR. Thies et al. [70] model the view-dependent component with self-supervised learning and then combine it with the diffuse component. Chen et al. [10] apply fully connected networks to model the surface light field by exploiting appearance redundancies. Volume-based methods [63, 47] utilize learnable 3D volume to represent scene and combine with projection or ray marching to enforce geometric constraint. Thies et al. [69] present a novel learnable neural texture to model rendering as image translation. They use coarse geometry for texture projection and offer flexible content editing. Aliev et al. [2] directly use neural point cloud to avoid surface meshing. Auxiliary information such as poses can be used to synthesize more complex objects such as human bodies [61].
+
+To accommodate relighting, Meshry et al. [49] learn an embedding for appearance style whereas Xu et al. [79] use deep image-based relighting [81] on multi-view multilight photometric images captured using specialized gantry. Geometry-differentiable neural rendering [58, 46, 44, 40, 32, 48, 84, 43, 27, 71, 51] can potentially handle relighting but our technique focuses on view synthesis and relighting without modifying 3D geometry.
+
+Free-Viewpoint Relighting. Earlier free-viewpoint relighting of real world objects requires delicate acquisition of reflectance [18, 75, 76], while more recent low-cost approaches still require controlled active illumination or known illumination/geometry [50, 25, 89, 78, 16, 83, 42, 31, 12, 41, 80]. Our work aims to use multi-view images captured under a single unknown natural illumination. Previous approaches solve this ill-posed problem via spherical harmonics (SH) [85] or wavelets [19] or both [39] to represent illumination and a parametric BRDF model to represent reflectance. Imber et al. [26] extract pixel-resolution intrinsic textures. Despite these advances, accurate geometry remains a key component for reliable relighting, whereas our RNR aims to simultaneously compensate for geometric inaccuracy and disentangle intrinsic properties from lighting. Tailored illumination models can support outdoor relighting [59, 21, 60] or indoor inverse rendering [3], whereas our RNR uses a more generic lighting model for learning the light transport process. Specifically, our work uses a set of multi-view images of an object under fixed yet unknown natural illumination as input. To carry out view projection and texture mapping, we assume known camera parameters of the input views and known coarse 3D geometry of the object, where standard structure-from-motion and multi-view stereo reconstruction can provide reliable estimations.
+
+
+Figure 2. The neural rendering pipeline of RNR.
+
+# 3. Image Formation Model
+
+Under the rendering equation [29], the radiance $\mathbf{I}$ emitted from point $\mathbf{x}$ in viewing direction $\boldsymbol{\omega}_{o}$ is computed as:
+
+$$
+\mathbf{I}(\mathbf{x}, \boldsymbol{\omega}_{o}) = \int_{\mathcal{S}^{2}} f_{r}(\mathbf{x}, \boldsymbol{\omega}_{i}, \boldsymbol{\omega}_{o})\, v(\mathbf{x}, \boldsymbol{\omega}_{i})\, L(\mathbf{x}, \boldsymbol{\omega}_{i})\, \mathbf{n} \cdot \boldsymbol{\omega}_{i}\, d\boldsymbol{\omega}_{i}, \tag{1}
+$$
+
+where $L(\mathbf{x}, \boldsymbol{\omega}_i)$ is the radiance that arrives at point $\mathbf{x}$ from direction $\boldsymbol{\omega}_i$, $v(\mathbf{x}, \boldsymbol{\omega}_i)$ denotes the visibility of $\mathbf{x}$ from direction $\boldsymbol{\omega}_i$, and $f_r(\mathbf{x}, \boldsymbol{\omega}_i, \boldsymbol{\omega}_o)$ is the bidirectional reflectance distribution function (BRDF) that describes the ratio of outgoing radiance over the incident irradiance. $\mathcal{S}^2$ is the upper hemisphere surrounding the surface point. For distant illumination, $L(\mathbf{x}, \boldsymbol{\omega}_i)$ can be replaced with $L(\boldsymbol{\omega}_i)$.
+
+Instead of separately conducting regression to recover each individual term in Eq. 1, we learn the light transport function (LTF) $\mathbf{T}(\mathbf{x},\boldsymbol{\omega}_i,\boldsymbol{\omega}_o) = f_r(\mathbf{x},\boldsymbol{\omega}_i,\boldsymbol{\omega}_o)\, v(\mathbf{x},\boldsymbol{\omega}_i)\, \mathbf{n}\cdot \boldsymbol{\omega}_i$. By further separating the view-independent albedo $\rho(\mathbf{x})$ from the LTF (for brevity, we still refer to the counterpart with albedo factored out as the LTF in this paper), we have
+
+$$
+\mathbf {I} (\mathbf {x}, \boldsymbol {\omega} _ {o}) = \int_ {\mathcal {S} ^ {2}} \rho (\mathbf {x}) \mathbf {T} (\mathbf {x}, \boldsymbol {\omega} _ {i}, \boldsymbol {\omega} _ {o}) L (\boldsymbol {\omega} _ {i}) d \boldsymbol {\omega} _ {i}. \qquad (2)
+$$
+
+The key observation here is that, for static objects, the LTF can be decoupled from the illumination. This allows us to decompose photometric attributes into albedo, light transport and illumination for relighting. Specifically, our RNR uses a network to represent the LTF $\mathbf{T}(\cdot)$ . Learning the LTF instead of the BRDF has several advantages. First, it can compensate for outlier effects such as the incorrect visibility caused by the inaccurate 3D proxies common in IBR. Second, under distant illumination, since the LTF can be regarded as the total contribution (with all light paths taken into account) from the incoming radiance along a direction to the outgoing radiance, it can potentially encode non-local effects such as inter-reflection. Finally, it reduces the computation needed to evaluate the radiance of a pixel. It is worth noting that inferring the LTF can be viewed as the inverse problem of precomputed radiance transfer (PRT) [65], which is widely used in physically based rendering.
+
+As with previous relighting techniques, we assume the illumination can be modeled using spherical harmonics (SH) up to order 10. The implicit assumption here is that the object is not highly specular or mirror-like. Following common practice, we further decompose Eq. 2 into diffuse and specular components, which gives:
+
+$$
+\begin{array}{l} \mathbf {I} (\mathbf {x}, \boldsymbol {\omega} _ {o}) = \int_ {\mathcal {S} ^ {2}} \rho_ {d} (\mathbf {x}) \mathbf {T} _ {d} (\mathbf {x}, \boldsymbol {\omega} _ {i}, \boldsymbol {\omega} _ {o}) \sum_ {k} c _ {k} Y _ {k} (\boldsymbol {\omega} _ {i}) d \boldsymbol {\omega} _ {i} + \\ \int_ {\mathcal {S} ^ {2}} \rho_ {s} (\mathbf {x}) \mathbf {T} _ {s} (\mathbf {x}, \boldsymbol {\omega} _ {i}, \boldsymbol {\omega} _ {o}) \sum_ {k} c _ {k} Y _ {k} (\boldsymbol {\omega} _ {i}) d \boldsymbol {\omega} _ {i}, \tag {3} \\ \end{array}
+$$
+
+where $\rho_{d}$ and $\mathbf{T}_d$ are the albedo and LTF of the diffuse component, $\rho_{s}$ and $\mathbf{T}_s$ are the albedo and LTF of the specular component, $Y_{k}$ is the $k$ -th SH basis function and $c_{k}$ its coefficient.
+
+Illumination Initialization. Our SH representation contains 121 coefficients per color channel. We first exploit the background regions of the multi-view images to initialize the illumination. We assume that background pixels lie far away, so we establish an image-to-panorama mapping and fill in the environment map with image pixels. We take the median of the image pixels that map to the same position in the environment map to reduce ghosting artifacts. We then project the environment map onto the SH basis to obtain the initial values of the SH coefficients.
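+As a concrete illustration of this last projection step, the sketch below projects an equirectangular environment map onto real SH basis functions up to order 10 (121 coefficients per color channel). The helper names and the use of SciPy's `sph_harm` are our own; this is a minimal sketch under those assumptions, not the authors' implementation.
+
+```python
+import numpy as np
+from scipy.special import sph_harm
+
+def real_sh(l, m, theta, phi):
+    """Real spherical harmonic Y_{l,m}; theta: polar [0, pi], phi: azimuth [0, 2*pi]."""
+    if m > 0:
+        return np.sqrt(2.0) * (-1) ** m * sph_harm(m, l, phi, theta).real
+    if m < 0:
+        return np.sqrt(2.0) * (-1) ** m * sph_harm(-m, l, phi, theta).imag
+    return sph_harm(0, l, phi, theta).real
+
+def project_envmap_to_sh(envmap, order=10):
+    """Project an equirectangular HxWx3 environment map onto SH coefficients.
+
+    Returns an array of shape ((order + 1)**2, 3): 121 coefficients per color
+    channel for order 10, as assumed in the paper.
+    """
+    H, W, _ = envmap.shape
+    theta = (np.arange(H) + 0.5) / H * np.pi            # polar angle per row
+    phi = (np.arange(W) + 0.5) / W * 2.0 * np.pi        # azimuth per column
+    phi_g, theta_g = np.meshgrid(phi, theta)
+    # solid angle of each texel on the equirectangular grid
+    d_omega = np.sin(theta_g) * (np.pi / H) * (2.0 * np.pi / W)
+    coeffs = []
+    for l in range(order + 1):
+        for m in range(-l, l + 1):
+            Y = real_sh(l, m, theta_g, phi_g)                              # H x W
+            coeffs.append((envmap * (Y * d_omega)[..., None]).sum(axis=(0, 1)))
+    return np.stack(coeffs)                                                # (121, 3)
+```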
+
+Neural Texture. Neural texture [69] provides an efficient encoding of the latent properties of a 3D scene. It can be seen as an extension of traditional texture-space data such as color textures, normal maps, displacement maps, etc. While these data record certain hand-crafted properties of 3D content, neural texture is learnable and can be trained to encode the information critical for a given task (e.g., novel view synthesis). We use the first 3 channels of the neural texture as the diffuse albedo and the next 3 channels as the specular albedo. We leave the remaining channels unconstrained so that they can encode latent properties. To project the neural texture to image space, we first rasterize the 3D proxy using the camera parameters to obtain a uv map (texel-to-pixel mapping) and use bilinear interpolation to sample features from the neural texture. Following [69], we use a 4-level mipmap Laplacian pyramid for the neural texture and set the resolution of the top level to $512 \times 512$ . We also evaluate the first 9 SH coefficients at the per-pixel view direction and multiply them with channels 7-15 of the projected neural texture (neural image).
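+To make the neural-texture lookup concrete, the following sketch shows one way to implement a learnable texture sampled through a uv map with bilinear interpolation, reading the first 3 channels as diffuse albedo and the next 3 as specular albedo. The class name and the sigmoid used to keep the albedos in $[0,1]$ are our assumptions; mipmapping is omitted.
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class NeuralTexture(nn.Module):
+    """Minimal sketch of a learnable neural texture (single pyramid level)."""
+    def __init__(self, channels=24, resolution=512):
+        super().__init__()
+        self.texture = nn.Parameter(torch.randn(1, channels, resolution, resolution) * 0.01)
+
+    def forward(self, uv):
+        # uv: (B, H, W, 2) texel coordinates in [0, 1], rasterized from the 3D proxy
+        grid = uv * 2.0 - 1.0                               # grid_sample expects [-1, 1]
+        neural_image = F.grid_sample(
+            self.texture.expand(uv.shape[0], -1, -1, -1),
+            grid, mode='bilinear', align_corners=False)
+        # sigmoid keeps albedos in [0, 1]; this activation is our assumption
+        diffuse_albedo = torch.sigmoid(neural_image[:, 0:3])
+        specular_albedo = torch.sigmoid(neural_image[:, 3:6])
+        return neural_image, diffuse_albedo, specular_albedo
+```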
+
+# 4. Relightable Neural Renderer (RNR)
+
+We now set out to simultaneously estimate the albedos $\rho_{d}(\cdot), \rho_{s}(\cdot)$ , the LTFs $\mathbf{T}_{d}(\cdot), \mathbf{T}_{s}(\cdot)$ , and the SH coefficients $c_{k}$ . We use the neural texture [69] to encode the albedo and additional latent properties of the object, propose sampling schemes for the light directions used in evaluating Eq. 3, and introduce a Light-Transport-Net (LTN) to predict the light transport at the sampled light directions for each pixel. Note that the entire process is differentiable and only requires 2D supervision from the input multi-view images. Fig. 2 shows our pipeline.
+
+# 4.1. Light Direction Sampling
+
+Instead of densely sampling light directions for each vertex (high angular resolution but low spatial resolution), we resort to sparsely sampling light directions for each pixel (low angular resolution but high spatial resolution). In this case, high rendering quality can be achieved even with a coarse 3D proxy. We argue that under SH lighting, sparse light direction sampling only leads to minor inaccuracy in the radiance evaluated in Eq. 3, which can be effectively compensated by the LTN.
+
+Since diffuse and specular light transport behave differently with respect to the light and view directions, we use different sampling schemes, as shown in Fig. 3. For the diffuse component, we first construct $k_{d}$ cones centered around the surface normal, with half angles of $\{\theta_1^d,\theta_2^d,\dots,\theta_{k_d}^d\}$ , and then uniformly sample directions on each cone. This is motivated by the fact that diffuse light transport (ignoring visibility and other effects) follows a cosine attenuation based on the angle between the light direction and the surface normal; light directions nearer to the surface normal are therefore more likely to contribute more to the radiance at the surface point. For the specular component, we similarly construct $k_{s}$ cones around the surface normal and uniformly sample on these cones to obtain halfway directions. We then reflect the view direction around these halfway directions to obtain the sampled light directions. This is motivated by microfacet theory, which models surfaces as collections of perfect mirror microfacets. The normals of these microfacets follow a normal distribution function, which we assume to cluster around the macro surface normal.
+
+Figure 3. Light direction sampling schemes for the diffuse and specular components.
+
+We carry out the above light direction sampling in tangent space and then transform to world space by
+
+$$
+\boldsymbol {\omega} _ {i} (\mathbf {x}) = \mathbf {R} _ {T B N} (\mathbf {x}) \cdot \boldsymbol {\omega} _ {i} ^ {\prime} (\mathbf {x}), \tag {4}
+$$
+
+where $\omega_{i}^{\prime}(\mathbf{x})$ are the sampled directions in tangent space and $\mathbf{R}_{TBN}(\mathbf{x})$ is the rotation matrix from tangent space to world space. By stacking the sampled light directions $\{\omega_i\}_d, \{\omega_i\}_s$ of the two components along the channel dimension, we form a light direction map, which is then fed to the LTN.
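+The sketch below illustrates this sampling-and-rotation step for the diffuse component: directions are drawn on cones around the tangent-space normal and rotated to world space with the per-pixel TBN matrix (Eq. 4). The function name, the per-cone sample count, and the way the TBN frame is supplied are our assumptions; the specular reflection of the view direction around the halfway vectors is omitted for brevity.
+
+```python
+import torch
+
+def sample_cone_directions(normal_ws, tangent_ws, bitangent_ws,
+                           half_angles_deg=(20.0, 40.0), n_per_cone=8):
+    """Sample light directions on cones around the normal (tangent space) and
+    rotate them to world space with the per-pixel TBN matrix (Eq. 4).
+
+    normal_ws, tangent_ws, bitangent_ws: (B, H, W, 3) world-space frames.
+    Returns sampled light directions of shape (B, H, W, K, 3).
+    """
+    dirs_ts = []
+    for half_angle in half_angles_deg:
+        theta = torch.deg2rad(torch.tensor(half_angle))
+        phis = torch.arange(n_per_cone) / n_per_cone * 2.0 * torch.pi
+        d = torch.stack([torch.sin(theta) * torch.cos(phis),
+                         torch.sin(theta) * torch.sin(phis),
+                         torch.cos(theta).expand(n_per_cone)], dim=-1)   # z axis = normal
+        dirs_ts.append(d)
+    dirs_ts = torch.cat(dirs_ts, dim=0)                                  # (K, 3) in tangent space
+    # R_TBN stacks the tangent frame as columns, so R_TBN @ d maps tangent -> world
+    R = torch.stack([tangent_ws, bitangent_ws, normal_ws], dim=-1)       # (B, H, W, 3, 3)
+    return torch.einsum('bhwij,kj->bhwki', R, dirs_ts)                   # (B, H, W, K, 3)
+```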
+
+# 4.2. Light Transport Estimation
+
+Our LTN consists of a graph convolutional network (GCN) to extract global geometric features and a modified U-Net to predict per-pixel light transports at the sampled light directions for diffuse and specular components.
+
+We first concatenate the neural image with the view direction map, normal map and light direction map as input to the U-Net. Since a 2D convolutional network cannot fully exploit the information in non-Euclidean structured data, we further augment the U-Net with a GCN [34, 4] to extract global features of the 3D geometry. Inspired by [62, 74], we use dynamic edge connections during the training of the GCN to learn better graph representations. Different from [74], which changes the edge connections by finding the nearest neighbors of each vertex, we follow [73, 38] and apply a dilated K-NN in feature space to search the neighborhood of each vertex. Moreover, rather than using a naive GCN, we utilize ResGCN, a much deeper GCN with residual blocks [38], to gather higher-level features of each vertex. At the end of the ResGCN, a fully connected layer is applied to fuse all the features into a global geometric feature vector. We repeat and concatenate this feature vector with the U-Net feature map after the first downsampling layer. This allows the light transport estimation to incorporate global geometric information rather than being limited to features within a single view.
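+As a rough illustration of this fusion, the sketch below pools per-vertex graph features into a single vector, tiles it spatially, and concatenates it with a U-Net feature map. The module name, the mean pooling, and the channel sizes are our assumptions; the actual ResGCN and U-Net are omitted.
+
+```python
+import torch
+import torch.nn as nn
+
+class GlobalGeometryFusion(nn.Module):
+    """Illustrative sketch (not the authors' code): pool per-vertex graph
+    features into one global vector, tile it spatially, and concatenate it
+    with a U-Net feature map after the first downsampling layer."""
+    def __init__(self, global_dim=128):
+        super().__init__()
+        # stands in for the fully connected layer at the end of the ResGCN
+        self.fc = nn.Linear(global_dim, global_dim)
+
+    def forward(self, unet_feat, vertex_feat):
+        # unet_feat:   (B, C, H, W) U-Net feature map after the first downsample
+        # vertex_feat: (B, V, global_dim) per-vertex features from the graph network
+        global_feat = self.fc(vertex_feat.mean(dim=1))             # (B, global_dim)
+        B, _, H, W = unet_feat.shape
+        tiled = global_feat[:, :, None, None].expand(B, -1, H, W)  # repeat spatially
+        return torch.cat([unet_feat, tiled], dim=1)                # (B, C + global_dim, H, W)
+
+# usage: fused = GlobalGeometryFusion()(torch.randn(1, 64, 128, 128), torch.randn(1, 7500, 128))
+```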
+
+The output of the U-Net is a light transport map, which contains the per-pixel light transport at each sampled light direction. To render an image, we retrieve the illumination radiance along each sampled light direction and then integrate it with the albedo and light transport following Eq. 3. Note that we carry out the integration separately for the diffuse and specular components and then sum the two components to obtain the final image.
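+A minimal sketch of this final integration step for one component is given below; normalization and solid-angle weights are assumed to be absorbed into the learned transport, and the tensor shapes are our own conventions rather than the released code.
+
+```python
+import torch
+
+def render_component(albedo, transport, sh_radiance):
+    """Discrete version of Eq. 3 for one component (diffuse or specular).
+
+    albedo:      (B, 3, H, W)    per-pixel albedo from the neural texture
+    transport:   (B, K, H, W)    predicted light transport at K sampled directions
+    sh_radiance: (B, K, 3, H, W) illumination radiance evaluated from the SH
+                                  coefficients along each sampled direction
+    """
+    # integrate the transport against incoming radiance over the sampled directions
+    shaded = (transport.unsqueeze(2) * sh_radiance).sum(dim=1)   # (B, 3, H, W)
+    return albedo * shaded
+
+# final image: diffuse + specular components rendered separately and summed
+# image = render_component(rho_d, T_d, L) + render_component(rho_s, T_s, L)
+```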
+
+# 4.3. Loss Functions
+
+We use $\ell_1$ loss for the difference between rendered images and ground-truth images:
+
+$$
+\mathcal {L} _ {i m} = \frac {1}{n} \sum_ {\mathbf {x}} | | \mathbf {I} (\mathbf {x}) - \mathbf {I} _ {\text {r e n d e r}} (\mathbf {x}) | | _ {1}, \tag {5}
+$$
+
+where $n$ is the number of image pixels. However, with $\ell_1$ loss alone, we cannot guarantee correct relighting. This is due to the ambiguity between albedo, light transport and illumination in Eq. 3: the network can overfit training images with an incorrect combination of the three components. Therefore, we need to apply additional losses to ensure the network learns a physically plausible interpretation.
+
+Chromaticity of Light Transport. To constrain the learned LTFs, we propose a novel loss on the chromaticity of the light transports. For a given pixel, while its light transports at different light directions differ in intensity, they usually share similar chromaticity. An exception is pixels with low intensities, whose light transports may contain zero visibility and hence have no valid chromaticity. We therefore formulate a weighted chromaticity loss on the light transport as:
+
+$$
+\mathcal {L} _ {c h r} = \frac {1}{n m} \sum_ {\mathbf {x}} \sum_ {\boldsymbol {\omega} _ {i}} w (\mathbf {x}) \left(1 - \mathbf {T} ^ {\prime} \left(\mathbf {x}, \boldsymbol {\omega} _ {i}, \boldsymbol {\omega} _ {o}\right) \cdot \mathbf {T} _ {m e a n} ^ {\prime} \left(\mathbf {x}, \boldsymbol {\omega} _ {o}\right)\right), \tag {6}
+$$
+
+where $m$ is the number of sampled light directions, $w(\mathbf{x}) = \min (20\cdot ||\mathbf{I}(\mathbf{x})||_2,1)$ is a weight depending on image intensity. $\mathbf{T}'(\mathbf{x},\boldsymbol {\omega}_i,\boldsymbol {\omega}_o) = \frac{\mathbf{T}(\mathbf{x},\boldsymbol{\omega}_i,\boldsymbol{\omega}_o)}{||\mathbf{T}(\mathbf{x},\boldsymbol{\omega}_i,\boldsymbol{\omega}_o)||_2}$ is the chromaticity of a light transport and $\mathbf{T}_{mean}'(\mathbf{x},\boldsymbol {\omega}_o)$ is the mean chromaticity for the light transports at pixel $\mathbf{x}$ . We compute the loss separately for the diffuse and specular components and then sum together.
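+A sketch of this loss for one component is shown below; we read Eq. 6 as a cosine similarity between per-direction RGB chromaticities and their renormalized per-pixel mean, which is our interpretation of $\mathbf{T}_{mean}^{\prime}$ , and all shapes are assumptions.
+
+```python
+import torch
+
+def chromaticity_loss(transport, image, eps=1e-6):
+    """Weighted chromaticity loss of Eq. 6 for one component.
+
+    transport: (B, K, 3, H, W) RGB light transport at K sampled directions
+    image:     (B, 3, H, W)    input image, used to down-weight dark pixels
+    """
+    chroma = transport / (transport.norm(dim=2, keepdim=True) + eps)       # unit RGB direction
+    chroma_mean = chroma.mean(dim=1, keepdim=True)
+    chroma_mean = chroma_mean / (chroma_mean.norm(dim=2, keepdim=True) + eps)
+    cos_sim = (chroma * chroma_mean).sum(dim=2)                            # (B, K, H, W)
+    w = torch.clamp(20.0 * image.norm(dim=1), max=1.0)                     # w(x) in Eq. 6
+    return (w.unsqueeze(1) * (1.0 - cos_sim)).mean()
+```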
+
+Illumination. Although the initial environment map stitched from the input multi-view images contains artifacts such as ghosting, the corresponding SH-based environment map is smooth and relatively accurate. Therefore, we constrain our final estimated illumination to be close to the initial one within the regions that are initially covered. We first uniformly sample 4096 directions on the unit sphere and then compute a loss based on the SH radiance along these directions:
+
+$$
+\mathcal {L} _ {i l l u m} = \frac {1}{p} \sum_ {\mathbf {p}} \sum_ {k} \left| \left| c _ {k} Y _ {k} (\mathbf {p}) - c _ {k} ^ {\prime} Y _ {k} (\mathbf {p}) \right| \right| _ {1}, \tag {7}
+$$
+
+where $p$ is the number of directions within the initially covered regions, $c_{k}$ are the estimated SH coefficients and $c_{k}^{\prime}$ are the initial SH coefficients.
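+The sketch below evaluates Eq. 7 literally, summing the per-order L1 differences over precomputed SH basis values at the sampled directions; the argument names and the boolean coverage mask are our assumptions.
+
+```python
+import torch
+
+def illumination_loss(c_est, c_init, sh_basis, covered_mask):
+    """Illumination loss of Eq. 7.
+
+    c_est, c_init: (121, 3) SH coefficients (order 10, per color channel)
+    sh_basis:      (P, 121) SH basis values at P uniformly sampled directions
+    covered_mask:  (P,) boolean mask of directions inside the covered regions
+    """
+    # per-direction sum over SH orders of |(c_k - c_k') Y_k(p)|, summed over channels
+    diff = (sh_basis[:, :, None] * (c_est - c_init)[None]).abs().sum(dim=(1, 2))  # (P,)
+    return diff[covered_mask].mean()
+```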
+
+Albedo. From Eq. 3, we can see that there is a scale ambiguity between the albedo and the light transport. Hence, we regularize the albedo so that its mean is close to 0.5:
+
+$$
+\mathcal {L} _ {a l b} = \frac {1}{q} \sum_ {\mathbf {x}} | | \rho (\mathbf {x}) - 0. 5 | | _ {1}, \tag {8}
+$$
+
+where $q$ is the number of texels. This loss is applied to both diffuse and specular albedo.
+
+Our total loss is a weighted combination of the above terms:
+
+$$
+\mathcal {L} = \mathcal {L} _ {i m} + \lambda_ {c h r} \mathcal {L} _ {c h r} + \lambda_ {i l l u m} \mathcal {L} _ {i l l u m} + \lambda_ {a l b} \mathcal {L} _ {a l b}. \tag {9}
+$$
+
+# 5. Experimental Results
+
+We implement our method in PyTorch [56]. Before training, we precompute the uv map along with the view direction map, normal map and per-pixel tangent-space transformation matrix for each training view. We remove the parts of the initial 3D proxy that correspond to the background and use a downsampled mesh with 7,500 vertices per model as input to the ResGCN. For the neural texture, we use 24 channels. For light direction sampling, we set the half angles of the cones to $\{20^{\circ}, 40^{\circ}\}$ for the diffuse component and $\{5^{\circ}, 10^{\circ}\}$ for the specular component. We train our end-to-end network using Adam [33] as the optimizer, with a learning rate of 0.001, $\beta_{1} = 0.9$ , $\beta_{2} = 0.999$ . We set $\lambda_{chr} = \lambda_{illum} = \lambda_{alb} = 1$ and train our models for 20k to 50k iterations depending on object complexity.
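+For reference, a toy version of this optimization setup (Eq. 9 with all weights set to 1 and the Adam hyperparameters above) might look as follows; the placeholder network and random data only stand in for the full RNR pipeline.
+
+```python
+import torch
+import torch.nn as nn
+
+toy_model = nn.Conv2d(24, 3, kernel_size=1)            # placeholder for RNR, not the real network
+optimizer = torch.optim.Adam(toy_model.parameters(), lr=1e-3, betas=(0.9, 0.999))
+lambda_chr = lambda_illum = lambda_alb = 1.0            # all loss weights set to 1
+
+neural_image = torch.randn(1, 24, 64, 64)               # dummy projected neural texture
+target = torch.rand(1, 3, 64, 64)                       # dummy ground-truth view
+for step in range(100):                                 # 20k-50k iterations in practice
+    render = toy_model(neural_image)
+    loss = (render - target).abs().mean()               # L_im (Eq. 5)
+    # + lambda_chr * L_chr + lambda_illum * L_illum + lambda_alb * L_alb   (Eq. 9)
+    optimizer.zero_grad()
+    loss.backward()
+    optimizer.step()
+```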
+
+# 5.1. Evaluations on Synthetic Data
+
+We first evaluate RNR on synthetic data for both novel view synthesis and relighting. We choose 4 objects with different geometric complexity, and for each we render 200 randomly sampled views under 2 different illuminations. We purposely set the illuminations to have different brightness and color tones. We use a physically based renderer, Tungsten [1], and render at a resolution of $1024 \times 1024$ . We further use different material configurations for each object, ranging from nearly diffuse to moderately specular, and from a single material to multiple materials. Example images are shown in the first 4 rows of Fig. 4. As mentioned above, our technique cannot handle highly specular objects.
+
+Novel View Synthesis. We compare our approach with two state-of-the-art view synthesis methods: DeepVoxels [63] and Deferred Neural Rendering (DeferredNR) [69]. All three methods require per-scene training. For each object, we randomly select 180 images under the first illumination as training views and use the remaining 20 for testing. We downsample the images to $512 \times 512$ before feeding them into the three methods and set an equal batch size of 1. At each iteration, DeepVoxels takes one source view and two additional target views as input, whereas DeferredNR and ours only require one view as input. For DeepVoxels, we use the authors' implementation and default hyperparameters. For DeferredNR, we implement our own version since it is not yet open source; we increase the number of neural texture channels as well as the feature channels in the rendering network to match its number of parameters with ours, and notice slight improvements with this modification. Since our goal is to synthesize views of the object instead of the entire scene, we only compute the loss for the pixels on the object for all three methods. For each object, we train our network for an equal or smaller number of iterations than the other two methods. The left 4 columns in Table 1 compare the PSNR and SSIM on the test views. Our proposed method outperforms the two state-of-the-art methods by a noticeable margin in all cases. Qualitative comparisons, close-up views, and error maps are shown in the first 4 rows of Fig. 4. Compared to ours, DeepVoxels produces oversmoothed results whereas DeferredNR introduces higher errors near specular highlights, as shown in the 1st and 3rd rows. This illustrates the benefit of encoding the image formation model in the rendering process.
+
+Figure 4. Comparisons on view synthesis. The top 4 rows are rendered synthetic data and the bottom 3 are captured real data. For each set of close-up views, the first row shows zoomed-in RGB patches while the second row shows the corresponding error maps.
+
+Table 1. Quantitative comparisons (PSNR/SSIM) of our RNR vs. DeepVoxels [63] and DeferredNR [69] on view synthesis.
+
+| Method | Bunny | Horse | Material | Earth | Beauty | Apple | Dyck |
+| DeepVoxels | 26.67/0.86 | 27.98/0.89 | 28.92/0.93 | 21.00/0.75 | 22.05/0.81 | 19.39/0.75 | 29.75/0.94 |
+| DeferredNR | 31.53/0.93 | 36.44/0.97 | 30.93/0.93 | 30.13/0.96 | 28.12/0.87 | 26.05/0.89 | 36.36/0.98 |
+| RNR (Ours) | 39.08/0.98 | 38.48/0.98 | 36.18/0.98 | 31.39/0.97 | 32.82/0.97 | 28.29/0.93 | 37.62/0.99 |
+
+
+Figure 5. Relighting results of RNR on synthetic data. The top row shows ground truth, the second row relighting results, and the bottom row error maps.
+
+Table 2. Quantitative evaluation (PSNR/SSIM) of RNR (w/ GCN) and RNR (no GCN) on relighting synthetic scenes.
+
+| Method | Earth | Bunny | Material | Horse |
+| w/ GCN | 26.29/0.94 | 25.13/0.92 | 28.04/0.89 | 29.56/0.94 |
+| no GCN | 25.87/0.93 | 24.96/0.91 | 27.67/0.81 | 28.76/0.93 |
+
+Free-Viewpoint Relighting. For each object, we use the model trained under the first illumination to carry out free-viewpoint relighting, verified against the second illumination rendered at a novel viewpoint. We compare the synthesized results with the rendered ground truth in Fig. 5. We also conduct an ablation study on the effectiveness of using the GCN to augment the U-Net. Table 2 shows the evaluation metrics with and without the GCN, from which we can see that using the GCN leads to a moderate performance improvement. We further analyze the importance of each loss in Fig. 6. Without the light transport chromaticity loss or the illumination loss, we observe that the learnable components overfit the training data and lead to incorrect relighting results. This illustrates the importance of our regularization terms.
+
+Number of Training Views. To illustrate the effectiveness of RNR in encoding geometric and photometric representations, we further carry out an experiment using sparse input views (20 views in our case). Table 3 shows the PSNR and SSIM measures for view synthesis and relighting. We observe that both DeepVoxels and DeferredNR degrade drastically with sparse training views. In contrast, RNR is less affected in both tasks. This reveals the effectiveness of encoding the image formation model. Specifically, compared with a black-box solution, RNR interprets object appearance following the actual physically based rendering model, which boosts its generalization to unseen views and lighting.
+
+Figure 6. Ablation study on losses. The top left of each result shows its PSNR and SSIM.
+
+Table 3. Quantitative comparisons (PSNR/SSIM) of DeepVoxels [63], DeferredNR [69] and our RNR when using sparse inputs.
+
+| Method | Bunny | Earth |
+| DeepVoxels | 17.97/0.76 | 16.44/0.54 |
+| DeferredNR | 24.38/0.81 | 22.10/0.86 |
+| RNR | 30.87/0.94 | 25.32/0.91 |
+| RNR (Relight) | 22.47/0.85 | 25.41/0.90 |
+
+# 5.2. Evaluations on Real Data
+
+We have also compared RNR with DeepVoxels and DeferredNR on 3 real scenes: Beauty, Apple and Dyck. We captured the first two scenes using a handheld DSLR, with the objects positioned on a tripod. Dyck is directly adopted from DeepVoxels and was captured as a video sequence. Beauty and Apple contain 151 and 144 views, respectively. For Dyck, we first remove images that contain excessive motion blur and use the remaining 224 views. We use the structure-from-motion software Agisoft PhotoScan to estimate camera parameters as well as the 3D proxy. Similar to the synthetic data, we use $90\%$ of the images for training and $10\%$ for testing. The right 3 columns in Table 1 show the performance of each method; our method performs best in all cases. The last 3 rows of Fig. 4 show visual comparisons. DeferredNR produces similar results to RNR, although RNR manages to better preserve sharp details in Beauty. Fig. 7 shows the view extrapolation results of DeferredNR and RNR. We observe that DeferredNR exhibits visual artifacts such as incorrect highlights and color blocks, whereas RNR produces more coherent estimations. Please refer to the supplementary video and material for additional results.
+
+We further apply relighting to Beauty, Apple and Dyck in Fig. 8. It is worth noting that Dyck only contains views from the front, i.e., the initial illumination stitched by our method only covers a small portion of the entire environment map. Yet RNR manages to produce reasonable relighting results.
+
+Figure 7. Comparisons of our RNR vs. DeferredNR [69] on view extrapolation. On the left are two example training views.
+
+Figure 8. Relighting results of RNR on real data.
+
+Figure 9. Comparisons on view synthesis and relighting using data from [53].
+
+To evaluate relighting accuracy, we use an additional Pig scene from the Multi-view Objects Under the Natural Illumination Database [53]. The data contains HDR images captured under 3 different illuminations, each with about 16 calibrated views. We use the images captured under the "outdoor" illumination for training. Since the source images are tightly cropped around the object, we are not able to stitch the initial illumination; hence we use the ground truth illumination in this experiment. The reconstructed geometry in [53] is not publicly available, so we use the laser-scanned mesh followed by smoothing and simplification as our 3D proxy. For testing, we synthesize with the camera parameters and illumination corresponding to a novel view under the "indoor" illumination. The rightmost column of Fig. 9 shows our synthesized results vs. [54] and the ground truth. We observe that the results of RNR appear more realistic than those of [54], although RNR incurs inaccuracies in highlights and color. This is partially attributed to the low number of training views as well as the inaccurate camera parameters provided by the dataset. The left 3 columns in Fig. 9 show that our view synthesis is also more reliable than DeferredNR.
+
+# 6. Conclusions and Future Work
+
+We have presented a new neural rendering scheme called the Relightable Neural Renderer (RNR) for simultaneous view synthesis and relighting. RNR exploits the physically based rendering process and separates appearance into environment lighting, object intrinsic attributes, and a light transport function (LTF). All three components are learnable through deep networks. In particular, we have shown that by incorporating rendering constraints, our method not only enables relighting but also produces better generalization for novel view synthesis.
+
+Our current approach cannot yet refine geometry or adaptively sample light directions. When the 3D proxy contains severe artifacts, they negatively impact rendering quality. We refer readers to the supplementary material for failure cases. We also do not explicitly handle the lack of dynamic range during data capture, which may influence relighting quality; a possible remedy is to learn the conversion from LDR inputs to HDR. In addition, RNR cannot handle highly specular objects. In the future, all-frequency lighting representations could be used in conjunction with the LTF for free-viewpoint relighting of highly specular objects.
+
+# Acknowledgments
+
+This work is supported by the National Key Research and Development Program (2018YFB2100500), the programs of NSFC (61976138 and 61977047), STCSM (2015F0203-000-06), and SHMEC (2019-01-07-00-01-E00003).
+
+# References
+
+[1] https://github.com/tunabrain/tungsten.
+[2] Kara-Ali Aliev, Dmitry Ulyanov, and Victor Lempitsky. Neural point-based graphics. arXiv preprint arXiv:1906.08240, 2019.
+[3] Dejan Azinovic, Tzu-Mao Li, Anton Kaplanyan, and Matthias Nießner. Inverse path tracing for joint material and lighting estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2447-2456, 2019.
+[4] Michael M Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: going beyond euclidean data. IEEE Signal Processing Magazine, 34(4):18-42, 2017.
+[5] Chris Buehler, Michael Bosse, Leonard McMillan, Steven Gortler, and Michael Cohen. Unstructured lumigraph rendering. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pages 425-432. ACM, 2001.
+[6] Christopher P Burgess, Loic Matthey, Nicholas Watters, Rishabh Kabra, Irina Higgins, Matt Botvinick, and Alexander Lerchner. MONet: Unsupervised scene decomposition and representation. arXiv preprint arXiv:1901.11390, 2019.
+[7] Joel Carranza, Christian Theobalt, Marcus A Magnor, and Hans-Peter Seidel. Free-viewpoint video of human actors. ACM transactions on graphics (TOG), 22(3):569-577, 2003.
+[8] Rodrigo Ortiz Cayon, Abdelaziz Djelouah, and George Drettakis. A bayesian approach for selective image-based rendering using superpixels. In 2015 International Conference on 3D Vision, pages 469-477. IEEE, 2015.
+[9] Gaurav Chaurasia, Sylvain Duchene, Olga Sorkine-Hornung, and George Drettakis. Depth synthesis and local warps for plausible image-based navigation. ACM Transactions on Graphics (TOG), 32(3):30, 2013.
+[10] Anpei Chen, Minye Wu, Yingliang Zhang, Nianyi Li, Jie Lu, Shenghua Gao, and Jingyi Yu. Deep surface light fields. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 1(1):14, 2018.
+[11] Xu Chen, Jie Song, and Otmar Hilliges. Nvs machines: Learning novel view synthesis with fine-grained view control. arXiv preprint arXiv:1901.01880, 2019.
+[12] Valentin Deschaintre, Miika Aittala, Fredo Durand, George Drettakis, and Adrien Bousseau. Single-image svbrdf capture with a rendering-aware deep network. ACM Transactions on Graphics (TOG), 37(4):128, 2018.
+[13] Alexey Dosovitskiy, Jost Tobias Springenberg, Maxim Tatarchenko, and Thomas Brox. Learning to generate chairs, tables and cars with convolutional networks. IEEE transactions on pattern analysis and machine intelligence, 39(4):692-705, 2016.
+[14] SM Ali Eslami, Danilo Jimenez Rezende, Frederic Besse, Fabio Viola, Ari S Morcos, Marta Garnelo, Avraham Ruderman, Andrei A Rusu, Ivo Danihelka, Karol Gregor, et al. Neural scene representation and rendering. Science, 360(6394):1204-1210, 2018.
+[15] Luca Falorsi, Pim de Haan, Tim R Davidson, Nicola De Cao, Maurice Weiler, Patrick Forre, and Taco S Cohen. Explorations in homeomorphic variational auto-encoding. arXiv preprint arXiv:1807.04689, 2018.
+[16] Duan Gao, Xiao Li, Yue Dong, Pieter Peers, Kun Xu, and Xin Tong. Deep inverse rendering for high-resolution svbrdf estimation from an arbitrary number of images. ACM Transactions on Graphics (TOG), 38(4):134, 2019.
+[17] Steven J Gortler, Radek Grzeszcuk, Richard Szeliski, and Michael F Cohen. The lumigraph. In Siggraph, volume 96, pages 43-54, 1996.
+[18] Darya Guarnera, Giuseppe Claudio Guarnera, Abhijeet Ghosh, Cornelia Denk, and Mashhuda Glencross. Brdf representation and acquisition. In Computer Graphics Forum, volume 35, pages 625-650. Wiley Online Library, 2016.
+[19] Tom Haber, Christian Fuchs, Philippe Bekaert, Hans-Peter Seidel, Michael Goesele, and Hendrik PA Lensch. Relighting objects from image collections. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 627-634. IEEE, 2009.
+[20] Adam W Harley, Fangyu Li, Shrinidhi K Lakshmikanth, Xian Zhou, Hsiao-Yu Fish Tung, and Katerina Fragkiadaki. Embodied view-contrastive 3d feature learning. arXiv preprint arXiv:1906.03764, 2019.
+[21] Daniel Cabrini Hauagge, Scott Wehrwein, Paul Upchurch, Kavita Bala, and Noah Snavely. Reasoning about photo collections using models of outdoor illumination. In BMVC, 2014.
+[22] Peter Hedman, Julien Philip, True Price, Jan-Michael Frahm, George Drettakis, and Gabriel Brostow. Deep blending for free-viewpoint image-based rendering. In SIGGRAPH Asia 2018 Technical Papers, page 257. ACM, 2018.
+[23] Peter Hedman, Tobias Ritschel, George Drettakis, and Gabriel Brostow. Scalable inside-out image-based rendering. ACM Transactions on Graphics (TOG), 35(6):231, 2016.
+[24] Benno Heigl, Reinhard Koch, Marc Pollefeys, Joachim Denzler, and Luc Van Gool. Plenoptic modeling and rendering from image sequences taken by a hand-held camera. In *Mustererkennung* 1999, pages 94–101. Springer, 1999.
+[25] Zhuo Hui, Kalyan Sunkavalli, Joon-Young Lee, Sunil Hadap, Jian Wang, and Aswin C Sankaranarayanan. Reflectance capture using univariate sampling of brdfs. In Proceedings of the IEEE International Conference on Computer Vision, pages 5362-5370, 2017.
+[26] James Imber, Jean-Yves Guillemaut, and Adrian Hilton. Intrinsic textures for reconfigurable free-viewpoint video. In European Conference on Computer Vision, pages 392-407. Springer, 2014.
+[27] Eldar Insafutdinov and Alexey Dosovitskiy. Unsupervised learning of shape and pose with differentiable point clouds. In Advances in Neural Information Processing Systems, pages 2802-2812, 2018.
+[28] Shi Jin, Ruiyang Liu, Yu Ji, Jinwei Ye, and Jingyi Yu. Learning to dodge a bullet: Concyclic view morphing via deep learning. In Proceedings of the European Conference on Computer Vision (ECCV), pages 218-233, 2018.
+[29] James T Kajiya. The rendering equation. In ACM Siggraph Computer Graphics, volume 20, pages 143-150. ACM, 1986.
+
+[30] Nima Khademi Kalantari, Ting-Chun Wang, and Ravi Ramamoorthi. Learning-based view synthesis for light field cameras. ACM Transactions on Graphics (TOG), 35(6):193, 2016.
+[31] Kaizhang Kang, Zimin Chen, Jiaping Wang, Kun Zhou, and Hongzhi Wu. Efficient reflectance capture using an autoencoder. ACM Trans. Graph., 37(4):127-1, 2018.
+[32] Hiroharu Kato, Yoshitaka Ushiku, and Tatsuya Harada. Neural 3d mesh renderer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3907-3916, 2018.
+[33] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
+[34] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
+[35] Johannes Kopf, Michael F Cohen, and Richard Szeliski. First-person hyper-lapse videos. ACM Transactions on Graphics (TOG), 33(4):78, 2014.
+[36] Tejas D Kulkarni, William F Whitney, Pushmeet Kohli, and Josh Tenenbaum. Deep convolutional inverse graphics network. In Advances in neural information processing systems, pages 2539-2547, 2015.
+[37] Marc Levoy and Pat Hanrahan. Light field rendering. In Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, pages 31-42. ACM, 1996.
+[38] Guohao Li, Matthias Muller, Ali Thabet, and Bernard Ghanem. DeepGCNs: Can GCNs go as deep as CNNs? In Proceedings of the IEEE International Conference on Computer Vision, pages 9267-9276, 2019.
+[39] Guannan Li, Chenglei Wu, Carsten Stoll, Yebin Liu, Kiran Varanasi, Qionghai Dai, and Christian Theobalt. Capturing relightable human performances under general uncontrolled illumination. In Computer Graphics Forum, volume 32, pages 275-284. Wiley Online Library, 2013.
+[40] Tzu-Mao Li, Miika Aittala, Frédo Durand, and Jaakko Lehtinen. Differentiable monte carlo ray tracing through edge sampling. In SIGGRAPH Asia 2018 Technical Papers, page 222. ACM, 2018.
+[41] Xiao Li, Yue Dong, Pieter Peers, and Xin Tong. Modeling surface appearance from a single photograph using self-augmented convolutional neural networks. ACM Transactions on Graphics (TOG), 36(4):45, 2017.
+[42] Zhengqin Li, Kalyan Sunkavalli, and Manmohan Chandraker. Materials for masses: Svbrdf acquisition with a single mobile phone image. In Proceedings of the European Conference on Computer Vision (ECCV), pages 72-87, 2018.
+[43] Chen-Hsuan Lin, Chen Kong, and Simon Lucey. Learning efficient point cloud generation for dense 3d object reconstruction. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
+[44] Hsueh-Ti Derek Liu, Michael Tao, and Alec Jacobson. Paparazzi: surface editing by way of multi-view image processing. ACM Trans. Graph., 37(6):221-1, 2018.
+[45] Miaomiao Liu, Xuming He, and Mathieu Salzmann. Geometry-aware deep network for single-image novel view synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4616-4624, 2018.
+[46] Shichen Liu, Tianye Li, Weikai Chen, and Hao Li. Soft rasterizer: A differentiable renderer for image-based 3d reasoning. In Proceedings of the IEEE International Conference on Computer Vision, pages 7708-7717, 2019.
+[47] Stephen Lombardi, Tomas Simon, Jason Saragih, Gabriel Schwartz, Andreas Lehrmann, and Yaser Sheikh. Neural volumes: Learning dynamic renderable volumes from images. ACM Transactions on Graphics (TOG), 38(4):65, 2019.
+[48] Matthew M Loper and Michael J Black. Opendr: An approximate differentiable renderer. In European Conference on Computer Vision, pages 154-169. Springer, 2014.
+[49] Moustafa Meshry, Dan B Goldman, Sameh Khamis, Hugues Hoppe, Rohit Pandey, Noah Snavely, and Ricardo MartinBrualla. Neural rerendering in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6878-6887, 2019.
+[50] Giljoo Nam, Joo Ho Lee, Diego Gutierrez, and Min H Kim. Practical svbrdf acquisition of 3d objects with unstructured flash photography. In SIGGRAPH Asia 2018 Technical Papers, page 267. ACM, 2018.
+[51] Thu H Nguyen-Phuoc, Chuan Li, Stephen Balaban, and Yongliang Yang. Rendernet: A deep convolutional network for differentiable rendering from 3d shapes. In Advances in Neural Information Processing Systems, pages 7891-7901, 2018.
+[52] Kyle Olszewski, Sergey Tulyakov, Oliver Woodford, Hao Li, and Linjie Luo. Transformable bottleneck networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 7648-7657, 2019.
+[53] Geoffrey Oxholm and Ko Nishino. Multiview shape and reflectance from natural illumination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2155-2162, 2014.
+[54] Geoffrey Oxholm and Ko Nishino. Shape and reflectance estimation in the wild. IEEE transactions on pattern analysis and machine intelligence, 38(2):376-389, 2015.
+[55] Eunbyung Park, Jimei Yang, Ersin Yumer, Duygu Ceylan, and Alexander C Berg. Transformation-grounded image generation network for novel 3d view synthesis. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3500-3509, 2017.
+[56] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017.
+[57] Eric Penner and Li Zhang. Soft 3d reconstruction for view synthesis. ACM Transactions on Graphics (TOG), 36(6):235, 2017.
+[58] Felix Petersen, Amit H Bermano, Oliver Deussen, and Daniel Cohen-Or. Pix2vex: Image-to-geometry reconstruction using a smooth differentiable renderer. arXiv preprint arXiv:1903.11149, 2019.
+[59] Julien Philip, Michael Gharbi, Tinghui Zhou, Alexei A Efros, and George Drettakis. Multi-view relighting using a geometry-aware network. ACM Transactions on Graphics (TOG), 38(4):78, 2019.
+[60] Qi Shan, Riley Adams, Brian Curless, Yasutaka Furukawa, and Steven M Seitz. The visual turing test for scene reconstruction. In 2013 International Conference on 3D Vision-3DV 2013, pages 25–32. IEEE, 2013.
+[61] Aliaksandra Shysheya, Egor Zakharov, Kara-Ali Aliev, Renat Bashirov, Egor Burkov, Karim Iskakov, Aleksei Ivakhnenko, Yury Malkov, Igor Pasechnik, Dmitry Ulyanov, et al. Textured neural avatars. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2387-2397, 2019.
+[62] Martin Simonovsky and Nikos Komodakis. Dynamic edge-conditioned filters in convolutional neural networks on graphs. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3693-3702, 2017.
+[63] Vincent Sitzmann, Justus Thies, Felix Heide, Matthias Nießner, Gordon Wetzstein, and Michael Zollhofer. Deepvoxels: Learning persistent 3d feature embeddings. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2437-2446, 2019.
+[64] Vincent Sitzmann, Michael Zollhöfer, and Gordon Wetzstein. Scene representation networks: Continuous 3d-structure-aware neural scene representations. In Advances in Neural Information Processing Systems, pages 1119–1130, 2019.
+[65] Peter-Pike Sloan, Jan Kautz, and John Snyder. Precomputed radiance transfer for real-time rendering in dynamic, low-frequency lighting environments. In ACM Transactions on Graphics (TOG), volume 21, pages 527-536. ACM, 2002.
+[66] Pratul P Srinivasan, Tongzhou Wang, Ashwin Sreelal, Ravi Ramamoorthi, and Ren Ng. Learning to synthesize a 4d rgbd light field from a single image. In Proceedings of the IEEE International Conference on Computer Vision, pages 2243-2251, 2017.
+[67] Shao-Hua Sun, Minyoung Huh, Yuan-Hong Liao, Ning Zhang, and Joseph J Lim. Multi-view to novel view: Synthesizing novel views with self-learned confidence. In Proceedings of the European Conference on Computer Vision (ECCV), pages 155–171, 2018.
+[68] Maxim Tatarchenko, Alexey Dosovitskiy, and Thomas Brox. Single-view to multi-view: Reconstructing unseen views with a convolutional network. arXiv preprint arXiv:1511.06702, 6, 2015.
+[69] Justus Thies, Michael Zollhöfer, and Matthias Nießner. Deferred neural rendering: Image synthesis using neural textures. ACM Transactions on Graphics (TOG), 38(4):1-12, 2019.
+[70] Justus Thies, Michael Zollhöfer, Christian Theobalt, Marc Stamminger, and Matthias Nießner. Ignor: Image-guided neural object rendering. arXiv preprint arXiv:1811.10720, 2018.
+[71] Shubham Tulsiani, Tinghui Zhou, Alexei A Efros, and Jitendra Malik. Multi-view supervision for single-view reconstruction via differentiable ray consistency. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2626-2634, 2017.
+
+[72] Hsiao-Yu Fish Tung, Ricson Cheng, and Katerina Fragkiadaki. Learning spatial common sense with geometry-aware recurrent networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2595-2603, 2019.
+[73] Diego Valsesia, Giulia Fracastoro, and Enrico Magli. Learning localized generative models for 3d point clouds via graph convolution. In ICLR, 2019.
+[74] Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E Sarma, Michael M Bronstein, and Justin M Solomon. Dynamic graph cnn for learning on point clouds. ACM Transactions on Graphics (TOG), 38(5):146, 2019.
+[75] Michael Weinmann and Reinhard Klein. Advances in geometry and reflectance acquisition (course notes). In SIGGRAPH Asia 2015 Courses, page 1. ACM, 2015.
+[76] Tim Weyrich, Jason Lawrence, Hendrik PA Lensch, Szymon Rusinkiewicz, Todd Zickler, et al. Principles of appearance acquisition and representation. Foundations and Trends in Computer Graphics and Vision, 4(2):75-191, 2009.
+[77] Daniel E Worrall, Stephan J Garbin, Daniyar Turmukhambetov, and Gabriel J Brostow. Interpretable transformations with encoder-decoder networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 5726-5735, 2017.
+[78] Rui Xia, Yue Dong, Pieter Peers, and Xin Tong. Recovering shape and spatially-varying surface reflectance under unknown illumination. ACM Transactions on Graphics (TOG), 35(6):187, 2016.
+[79] Zexiang Xu, Sai Bi, Kalyan Sunkavalli, Sunil Hadap, Hao Su, and Ravi Ramamoorthi. Deep view synthesis from sparse photometric images. ACM Transactions on Graphics (TOG), 38(4):76, 2019.
+[80] Zexiang Xu, Jannik Boll Nielsen, Jiyang Yu, Henrik Wann Jensen, and Ravi Ramamoorthi. Minimal brdf sampling for two-shot near-field reflectance acquisition. ACM Transactions on Graphics (TOG), 35(6):188, 2016.
+[81] Zexiang Xu, Kalyan Sunkavalli, Sunil Hadap, and Ravi Ramamoorthi. Deep image-based relighting from optimal sparse samples. ACM Transactions on Graphics (TOG), 37(4):126, 2018.
+[82] Jimei Yang, Scott E Reed, Ming-Hsuan Yang, and Honglak Lee. Weakly-supervised disentangling with recurrent transformations for 3d view synthesis. In Advances in Neural Information Processing Systems, pages 1099-1107, 2015.
+[83] Wenjie Ye, Xiao Li, Yue Dong, Pieter Peers, and Xin Tong. Single image surface appearance modeling with self-augmented cnns and inexact supervision. In Computer Graphics Forum, volume 37, pages 201-211. Wiley Online Library, 2018.
+[84] Wang Yifan, Felice Serena, Shihao Wu, Cengiz Öztireli, and Olga Sorkine-Hornung. Differentiable surface splatting for point-based geometry processing. ACM Transactions on Graphics (TOG), 38(6):1-14, 2019.
+[85] Tianli Yu, Hongcheng Wang, Narendra Ahuja, and Wei-Chao Chen. Sparse lumigraph relighting by illumination and reflectance estimation from multi-view images. In ACM SIGGRAPH 2006 Sketches, page 175. ACM, 2006.
+
+[86] Ke Colin Zheng, Alex Colburn, Aseem Agarwala, Maneesh Agrawala, David Salesin, Brian Curless, and Michael F Cohen. Parallax photography: creating 3d cinematic effects from stills. In Proceedings of Graphics Interface 2009, pages 111-118. Canadian Information Processing Society, 2009.
+[87] Tinghui Zhou, Richard Tucker, John Flynn, Graham Fyffe, and Noah Snavely. Stereo magnification: Learning view synthesis using multiplane images. arXiv preprint arXiv:1805.09817, 2018.
+[88] Tinghui Zhou, Shubham Tulsiani, Weilun Sun, Jitendra Malik, and Alexei A Efros. View synthesis by appearance flow. In European conference on computer vision, pages 286-301. Springer, 2016.
+[89] Zhiming Zhou, Guojun Chen, Yue Dong, David Wipf, Yong Yu, John Snyder, and Xin Tong. Sparse-as-possible svbrdf acquisition. ACM Transactions on Graphics (TOG), 35(6):189, 2016.
+[90] Hao Zhu, Hao Su, Peng Wang, Xun Cao, and Ruigang Yang. View extrapolation of human body from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4450-4459, 2018.
\ No newline at end of file
diff --git a/aneuralrenderingframeworkforfreeviewpointrelighting/images.zip b/aneuralrenderingframeworkforfreeviewpointrelighting/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..ea17a86011650523f285533769331c6719dce2e0
--- /dev/null
+++ b/aneuralrenderingframeworkforfreeviewpointrelighting/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f1d994ff5cfcd1c32a3a5532bc8f24e04485d7ae2b62dac8fda6449a29dc3797
+size 612607
diff --git a/aneuralrenderingframeworkforfreeviewpointrelighting/layout.json b/aneuralrenderingframeworkforfreeviewpointrelighting/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a762deb077293526231f327d285bc8f20839eb1c
--- /dev/null
+++ b/aneuralrenderingframeworkforfreeviewpointrelighting/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:300438f9ffe8f7c4eca79a10648b65e76d8eefc2b226ec04a85e2813dde4bddf
+size 410903
diff --git a/anovelrecurrentencoderdecoderstructureforlargescalemultiviewstereoreconstructionfromanopenaerialdataset/6a192b9c-8f34-4f40-8b29-8b0d5f367715_content_list.json b/anovelrecurrentencoderdecoderstructureforlargescalemultiviewstereoreconstructionfromanopenaerialdataset/6a192b9c-8f34-4f40-8b29-8b0d5f367715_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..83a21edbda86a4b0971067c21d7e37f27e6585b5
--- /dev/null
+++ b/anovelrecurrentencoderdecoderstructureforlargescalemultiviewstereoreconstructionfromanopenaerialdataset/6a192b9c-8f34-4f40-8b29-8b0d5f367715_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ab2037354990ec4d6fe3e6495d14b45c8cb90cc37163ae7b2d9eaea626dff245
+size 65828
diff --git a/anovelrecurrentencoderdecoderstructureforlargescalemultiviewstereoreconstructionfromanopenaerialdataset/6a192b9c-8f34-4f40-8b29-8b0d5f367715_model.json b/anovelrecurrentencoderdecoderstructureforlargescalemultiviewstereoreconstructionfromanopenaerialdataset/6a192b9c-8f34-4f40-8b29-8b0d5f367715_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..be4b317f131a2bbbf7945898bcce4d6528c8498f
--- /dev/null
+++ b/anovelrecurrentencoderdecoderstructureforlargescalemultiviewstereoreconstructionfromanopenaerialdataset/6a192b9c-8f34-4f40-8b29-8b0d5f367715_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:65771d468759be32e2f0b02f4b1762a49918e3806a71682ddf1cb943d71ae06f
+size 79155
diff --git a/anovelrecurrentencoderdecoderstructureforlargescalemultiviewstereoreconstructionfromanopenaerialdataset/6a192b9c-8f34-4f40-8b29-8b0d5f367715_origin.pdf b/anovelrecurrentencoderdecoderstructureforlargescalemultiviewstereoreconstructionfromanopenaerialdataset/6a192b9c-8f34-4f40-8b29-8b0d5f367715_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..2141438374134c4c2463d548a1a57d0290be5977
--- /dev/null
+++ b/anovelrecurrentencoderdecoderstructureforlargescalemultiviewstereoreconstructionfromanopenaerialdataset/6a192b9c-8f34-4f40-8b29-8b0d5f367715_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:04835f31fe00f16d41e023132c6fbe9af6f8dd0c17bddce3db8c6ecc11396965
+size 1995192
diff --git a/anovelrecurrentencoderdecoderstructureforlargescalemultiviewstereoreconstructionfromanopenaerialdataset/full.md b/anovelrecurrentencoderdecoderstructureforlargescalemultiviewstereoreconstructionfromanopenaerialdataset/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..b7245e320c0fdfa15d33dd3f9827bdd9c5bdb76d
--- /dev/null
+++ b/anovelrecurrentencoderdecoderstructureforlargescalemultiviewstereoreconstructionfromanopenaerialdataset/full.md
@@ -0,0 +1,242 @@
+# A Novel Recurrent Encoder-Decoder Structure for Large-Scale Multi-view Stereo Reconstruction from An Open Aerial Dataset
+
+Jin Liu and Shunping Ji*
+School of Remote Sensing and Information Engineering, Wuhan University
+{liujinwhu, jishunping}@whu.edu.cn
+
+# Abstract
+
+A great deal of research has demonstrated recently that multi-view stereo (MVS) matching can be solved with deep learning methods. However, these efforts were focused on close-range objects and only a very few of the deep learning-based methods were specifically designed for large-scale 3D urban reconstruction due to the lack of multi-view aerial image benchmarks. In this paper, we present a synthetic aerial dataset, called the WHU dataset, created for MVS tasks, which, to our knowledge, is the first large-scale multi-view aerial dataset. It was generated from a highly accurate 3D digital surface model produced from thousands of real aerial images with precise camera parameters. We also introduce in this paper a novel network, called RED-Net, for wide-range depth inference, which we developed from a recurrent encoder-decoder structure to regularize cost maps across depths and a 2D fully convolutional network as framework. RED-Net's low memory requirements and high performance make it suitable for large-scale and highly accurate 3D Earth surface reconstruction. Our experiments confirmed that our method not only outperformed the current state-of-the-art MVS methods by more than $50\%$ in mean absolute error (MAE) with less memory and computational cost, but was also more efficient. It outperformed one of the best commercial software programs based on conventional methods, improving efficiency 16 times over. Moreover, we proved that our RED-Net model pre-trained on the synthetic WHU dataset can be efficiently transferred to very different multi-view aerial image datasets without any fine-tuning. Dataset and code are available at http://gpcv.whu.edu.cn/data.
+
+# 1. Introduction
+
+Large-scale and highly accurate 3D reconstruction of the Earth's surface, including cities, is mainly realized from dense matching of multi-view aerial images implemented
+
+and dominated by commercial software such as Pix4D [24], Smart3D [8], and SURE [27], all of which were developed from conventional methods [33, 3, 13]. Recent attempts at multi-view stereo (MVS) matching with deep learning methods are found in the literature [14, 16, 36, 37, 15]. While these deep learning approaches can produce satisfactory results on close-range object reconstruction, they have two critical limitations when applied to Earth surface reconstruction from multi-view aerial images. The first limitation is the lack of aerial dataset benchmarks, which makes it difficult to train, discover, and improve the appropriate networks through between-method comparison. In addition, most of the existing MVS datasets consist of laboratory images, and models trained on them cannot be satisfactorily transferred to a bird's eye view of a terrestrial scene. The second limitation is the high GPU memory demand of recent MVS networks [36, 15, 25, 34], which makes them less suitable for large-scale and high-resolution scene reconstruction. The state-of-the-art R-MVSNet method [37] has achieved depth inference with unlimited depth-wise resolution; however, the resolution of its results is not high because the output depth map is down-sampled four times.
+
+In this paper, we present a synthetic aerial dataset we created for large-scale MVS matching and Earth surface reconstruction. Each image in the dataset was simulated from a complete and accurate 3D urban scene produced from a real multi-view aerial image collection with software and careful manual editing. The dataset includes thousands of simulated images covering an area of $6.7 \times 2.2 \mathrm{~km}^2$ , along with the ground truth depth and camera parameters for multi-view images, as well as disparity maps for rectified epipolar images. Due to the large size of the aerial images ( $5376 \times 5376$ pixels), subsets consisting of cropped sub-blocks are provided that can be used directly for training CNN models on a single GPU. Note that the simulated camera parameters are unbiased and the provided ground truths are absolutely complete even in occluded regions, which ensures the accuracy and reliability of the dataset for detailed 3D reconstruction.
+
+We also introduce in this paper an MVS network, called RED-Net, which we created for large-scale MVS matching. A recurrent encoder-decoder (RED) architecture is utilized to sequentially regularize cost maps obtained from a series of convolutions on multi-view images. When compared to the state-of-the-art method [37], we achieved higher efficiency and accuracy using less GPU memory while maintaining unlimited depth resolution, which is beneficial to city-scale reconstruction. Our experiments confirmed that RED-Net outperformed all the comparable methods evaluated on the WHU aerial dataset.
+
+We had a third aim for our work beyond addressing the two limitations of the existing methods. That goal was to demonstrate that our MVS network could be generalized for cross-dataset transfer learning. We demonstrate here that RED-Net pre-trained on our WHU dataset could be directly applied to another, quite different aerial dataset, achieving slightly better accuracy than one of the best commercial software programs and a 16-fold improvement in efficiency.
+
+# 2. Related Work
+
+# 2.1. Datasets
+
+Two-view datasets. Middlebury [28] and KITTI [9] are two popular datasets for stereo disparity estimation. However, these datasets are too small for current applications, especially when training deep learning models, and the lack of sufficient samples often leads to overfitting and low generalization. Considering this situation, [21] created a large synthetic dataset that consists of three subsets: FlyingThings3D, Monkaa, and Driving, which provide thousands of stereo images with dense and complete ground truth disparities. However, a model pretrained on this synthetic dataset cannot easily be applied to a real scene dataset due to the heterogeneous data sources.
+
+Multi-view datasets. The Middlebury multi-view dataset [31] was designed for evaluating MVS matching algorithms on equal ground and is a collection of calibrated image sets from only two small scenes in a laboratory environment. The DTU dataset [1] is a large-scale close-range MVS benchmark that contains 124 scenes with a variety of objects and materials under different lighting conditions, which makes it well-suited for evaluating advanced methods. The Tanks and Temples benchmark [18] provides high-resolution data with large-size images acquired in complex outdoor environments. A recent benchmark called ETH3D [30] was created for high-resolution stereo and multi-view reconstruction; it consists of artificial scenes as well as outdoor and indoor scenes and represents various real-world reconstruction challenges.
+
+Reconstructing the Earth's surface and cities is mainly realized by matching multi-view aerial images. The ISPRS Association and the EuroSDR Center jointly provided two small aerial datasets called München and Vaihingen [11], which consist of dozens of aerial images; however, these datasets are currently not publicly accessible. In our work, we created a large-scale synthetic aerial dataset with accurate camera parameters and complete ground truths for MVS method evaluation and urban scene reconstruction.
+
+# 2.2. Networks
+
+Inspired by the success of deep learning based stereo methods [23, 17, 38, 4], some researchers attempted to apply CNNs to the MVS task. Hartmann et al. [12] proposed an N-way Siamese network to learn a similarity score over a set of multiple patches. SurfaceNet [15] was the first end-to-end learning network designed for MVS; it builds colored voxel cubes outside the network to encode the camera parameters through perspective projection and combines the multi-view images into a single cost volume. The Learnt Stereo Machine (LSM) [16] enables end-to-end MVS reconstruction through differentiable projection and unprojection operations: features are unprojected into 3D feature grids with known camera parameters, and a 3D CNN is then used to detect the surface of the 3D object in the voxel grid. Both SurfaceNet and LSM utilize volumetric representations; nevertheless, they only reconstruct low-resolution objects, and the 3D voxel representation incurs huge GPU memory consumption; for example, they create the world grid at a resolution of $32 \times 32 \times 32$ .
+
+The 3D cost volume has advantages in encoding camera parameters and image features. DeepMVS [14] generates a plane-sweep volume for each reference image, and an encoder-decoder structure with skip connections is used to aggregate the cost and estimate depths with a fully-connected conditional random field (Dense-CRF) [19]. [36] built a 3D cost volume by differentiable homography warping; its memory requirement grows cubically with the depth quantization number, which makes it unrealistic for large-scale scenes. The state-of-the-art method, R-MVSNet [37], regularized 2D cost maps sequentially across depths via a convolutional gated recurrent unit (GRU) [5] instead of 3D CNNs, which reduced the memory consumption and made high-resolution reconstruction possible. However, R-MVSNet regularized the cost maps with a small $3 \times 3$ receptive field in the GRUs and down-sampled the output depth four times, which resulted in contextual information loss and coarse reconstruction.
+
+Our RED-Net approach follows the idea of sequentially processing 2D features along the depth direction for wide-depth-range inference. However, we introduce a recurrent encoder-decoder architecture to regularize the 2D cost maps rather than simply stacking the GRU blocks as in [37]. The RED structure provides multi-scale receptive fields to exploit neighborhood information effectively in fine-resolution scenes, which allows us to achieve large-scale and full-resolution reconstruction with higher accuracy and efficiency and lower memory requirements.
+
+Figure 1: The dataset. Area 0: the complete dataset consists of 1,776 virtual aerial images, each $5376 \times 5376$ pixels in size. For facilitating machine learning methods, areas $1/4/5/6$ were allocated to the training set, which consisted of 261 images. Areas 2 and 3, which consisted of 93 images, were used as the test set. In the training and testing areas, the images were also cropped into tiles of $768 \times 384$ pixels to fit on a single GPU.
+
+# 3. WHU Dataset
+
+This section describes the synthetic aerial dataset, called the WHU dataset, we created for large-scale and high-resolution Earth surface reconstruction. The aerial images in the dataset were simulated from a 3D surface model that was produced by software and refined by manual editing. The dataset includes a complete aerial image set and cropped sub-image sets for facilitating deep learning.
+
+# 3.1. Data Source
+
+A 3D digital surface model (DSM) in OSGB format [35] was reconstructed using the Smart3D software [8] from a set of multi-view aerial images captured by an oblique five-view camera rig mounted on an unmanned aerial vehicle (UAV). One camera pointed straight down and the optical axes of the other four surrounding cameras were at a $40^{\circ}$ tilt angle, which guaranteed that most of the scene, including the building façades, was well captured. We manually edited some errors in the surface model to improve its resemblance to the real scene. The model covers an area of about $6.7 \times 2.2\mathrm{km}^2$ over Meitan County, Guizhou Province, China, with about 0.1 m ground resolution. The county contains dense and tall buildings, sparse factories, mountains covered with forests, and some bare ground and rivers.
+
+# 3.2. Synthetic Aerial Dataset
+
+First, a discrete 3D point set on a $0.06 \times 0.06 \times 0.06 \mathrm{~m}^3$ grid covering the whole scene was generated by interpolating the OSGB mesh. Each point includes the object position $(X, Y, Z)$ and the texture $(R, G, B)$ .
+
+Then, we simulated the imaging process of a single-lens camera. Given the camera's intrinsic parameters (focal length $f$ , principal point $(x_0, y_0)$ , image size $W \times H$ , and sensor size) and the exterior orientation (camera center $(X_s, Y_s, Z_s)$ and three rotational angles $(\varphi, \omega, \kappa)$ ), we projected the 3D discrete points onto the camera to obtain a virtual image, and the depth map was simultaneously retrieved from the 3D points. Note that the depth map was complete even on the building façades since the 3D model had a full scene mesh. The virtual images were taken at $550 \mathrm{~m}$ above the ground with $10 \mathrm{~cm}$ ground resolution. A total of 1,776 images $(5376 \times 5376$ in size) were captured in 11 strips with $90\%$ heading overlap and $80\%$ side overlap, with the corresponding 1,776 depth maps as ground truth. We set the rotational angles to $(0,0,0)$ , so two adjacent images can be regarded as a pair of epipolar images. A total of 1,760 disparity maps along the flight direction were also provided for evaluating the chosen stereo matching methods. We provide 8-bit RGB images and 16-bit depth maps in the lossless PNG format, together with text files that record the orientation parameters, namely the camera center $(X_s, Y_s, Z_s)$ and the rotation matrix $\mathbf{R}$ .
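+
+A minimal NumPy sketch of this rendering step is given below, assuming an ideal nadir-looking pinhole camera with identity rotation (the rotational angles $(0,0,0)$ mentioned above) and a focal length expressed in pixels; the function and variable names are illustrative and not the authors' implementation.
+
+```python
+import numpy as np
+
+def render_virtual_view(points_xyz, colors, cam_center, f, x0, y0, W, H):
+    """Project a colored 3D point set into a nadir-looking pinhole camera.
+
+    points_xyz: (N, 3) object coordinates; colors: (N, 3) uint8 textures.
+    Assumes an identity rotation matrix, so depth is the vertical distance
+    between the camera center and each point.
+    """
+    rel = points_xyz - cam_center                    # offsets from the camera center
+    depth = cam_center[2] - points_xyz[:, 2]         # camera looks straight down
+    keep = depth > 0
+    rel, colors, depth = rel[keep], colors[keep], depth[keep]
+
+    # Pinhole projection onto the image plane (f, x0, y0 in pixel units).
+    u = np.round(x0 + f * rel[:, 0] / depth).astype(int)
+    v = np.round(y0 + f * rel[:, 1] / depth).astype(int)
+    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
+    u, v, colors, depth = u[inside], v[inside], colors[inside], depth[inside]
+
+    image = np.zeros((H, W, 3), dtype=np.uint8)
+    depth_map = np.full((H, W), np.inf, dtype=np.float32)
+    order = np.argsort(-depth)                       # far to near, so near points win
+    image[v[order], u[order]] = colors[order]
+    depth_map[v[order], u[order]] = depth[order]
+    return image, depth_map
+```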
+
+# 3.3. Sub-Dataset for Deep Learning
+
+In addition to providing the complete dataset, we selected six representative sub-areas covering different scene types as training and test sets for deep learning methods, as shown in Figure 1. "Area 1" is a flat suburb with large, low factory buildings. "Area 2" contains trees, roads, buildings, and open spaces. "Area 3" is a residential area with a mixture of low and high buildings. "Area 4" and "Area 5" cover the town center, with dense buildings and complex rooftop structures. "Area 6" is a mountainous area covered by agricultural land and forests. A total of 261 virtual images from Areas $1/4/5/6$ were used as the training set, and 93 images from Area 2
+
+
+Figure 2: The images and depth maps from different viewpoints. A five-view unit takes the image with ID 1 as the reference image, and the images with ID 0 and 2 in the heading direction and the images with ID 3 and 4 in the side strips as the search images. The three-view set consists of the images with ID 0, 1, and 2. In the stereo dataset, Image 1 and Image 2 are treated as a pair of stereo epipolar images.
+
+
+Figure 3: (a) A five-view sub-set with size of $768 \times 384$ pixels. The three sub-images in red rectangle comprise the three-view set. (b) The organization of images, depths, and camera files in the MVS dataset.
+
+and Area 3 comprised the test set. The ratio of the training set to the test set was roughly 3:1. For direct application of deep learning-based MVS methods to the sub-dataset, we additionally provided a multi-view and a stereo sub-set by cropping the virtual aerial images into sub-blocks, since an image of $5376 \times 5376$ pixels may be too large to fit on a single current GPU.
+
+Multi-view Dataset. A multi-view unit consists of five images, as shown in Figure 2. The central image with ID 1 was treated as the reference image, and the images with ID 0 and 2 in the heading direction and the images with ID 3 and 4 in the side strips were the search images. We cropped the overlapping regions into sub-blocks at a size
+
+of $768 \times 384$ pixels. A five-view unit yielded 80 groups (400 sub-images) (Figure 3(a)). The depth maps were cropped at the same time. The dataset was ultimately organized as in Figure 3(b). The virtual images, depth maps, and camera parameters are in the first-level folder. The second-level folders take the name of the reference image in a five-view unit; for example, 006_8 represents the eighth image in the sixth strip. The five subfolders are named $0/1/2/3/4$ and store the sub-images generated from the five-view virtual images, respectively. In addition, there is a three-view dataset that consists of the images with ID 0, 1, and 2.
+
+Stereo Dataset. Each pair of adjacent images in a strip also forms a pair of epipolar images. Similar to the multi-view set, we cropped each image and disparity map into $768 \times 384$ pixel sub-blocks and obtained 154 sub-image pairs in a two-view unit.
+
+# 4. RED-Net
+
+We developed a network, which we named RED-Net, that combines a series of weight-shared convolutional layers that extract the features from separate multi-view images and recurrent encoder-decoder (RED) structures that sequentially learn regularized depth maps across both the depth and spatial directions for large-scale and high-resolution multi-view reconstruction. The framework was inspired by [37]. However, instead of using a stack of three GRU blocks, we utilized a 2D recurrent encoder-decoder structure to sequentially regularize the cost maps, which not only significantly reduced the memory consumption and greatly improved the computational efficiency, but also captured the finer structures for depth inference. The output of RED-Net has the same resolution as the input reference images rather than being downsized by four as in [37], which ensures high-resolution reconstruction for large-scale and wide depth range scenes. The network structure is illustrated in Figure 4.
+
+2D Feature Extraction. RED-Net infers a depth map with depth sample number $D$ from $N$ -view images, where $N$ is typically no less than three. 2D convolution layers are first applied separately, with shared weights, to extract features from the $N$ input images, which can be seen as an $N$ -way Siamese network architecture [6]. Each branch consists of five convolutional layers with 8, 8, 16, 16, and 16 channels, respectively, each with a $3 \times 3$ kernel and a stride of 1, except for the third layer, which has a $5 \times 5$ kernel and a stride of 2. All of the layers are followed by a rectified linear unit (ReLU) [10] except for the last one. The 2D network yields a 16-channel feature representation for each input image at half the width and height of the input.
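+
+As a structural illustration of this branch, a minimal PyTorch sketch is shown below (the paper's experiments ran on TensorFlow, so this mirrors only the layer configuration described above and is not the authors' code).
+
+```python
+import torch
+import torch.nn as nn
+
+class FeatureBranch(nn.Module):
+    """One branch of the N-way Siamese 2D feature extractor.
+
+    Five conv layers with 8, 8, 16, 16, 16 channels; 3x3 kernels with stride 1,
+    except the third layer, which uses a 5x5 kernel with stride 2, so the output
+    is a 16-channel map at half the input width and height. A ReLU follows every
+    layer except the last one.
+    """
+    def __init__(self, in_channels: int = 3):
+        super().__init__()
+        self.net = nn.Sequential(
+            nn.Conv2d(in_channels, 8, 3, stride=1, padding=1), nn.ReLU(inplace=True),
+            nn.Conv2d(8, 8, 3, stride=1, padding=1), nn.ReLU(inplace=True),
+            nn.Conv2d(8, 16, 5, stride=2, padding=2), nn.ReLU(inplace=True),
+            nn.Conv2d(16, 16, 3, stride=1, padding=1), nn.ReLU(inplace=True),
+            nn.Conv2d(16, 16, 3, stride=1, padding=1),  # no ReLU after the last layer
+        )
+
+    def forward(self, x: torch.Tensor) -> torch.Tensor:
+        return self.net(x)
+
+# The same branch (shared weights) is applied to each of the N input views:
+# features = [branch(view) for view in views]
+```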
+
+Cost Maps. A group of 2D image features is back-projected onto successive virtual planes in 3D space to build cost maps. The plane sweep method [7] is adopted to warp these features into the reference camera viewpoint, which is described as differentiable homography warping
+
+
+Figure 4: The structure of the RED-Net. $W, H$ , and $D$ are the image width, height, and depth sample number, respectively.
+
+in [36, 37]. The variance operation [36] is adopted to aggregate the multiple feature maps into one cost map at each depth plane in 3D space, so that $D$ cost maps are built in total, one per depth plane.
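+
+The variance fusion itself is simple; the sketch below assumes the $N$ view features have already been warped onto one depth plane of the reference frustum (the homography warping is omitted), and the function name is illustrative.
+
+```python
+import torch
+
+def variance_cost_map(warped_feats: torch.Tensor) -> torch.Tensor:
+    """Fuse N warped feature maps into one cost map via the variance operation [36].
+
+    warped_feats: (N, C, H, W) features of the N views warped onto a single
+    depth plane. Returns a (C, H, W) cost map; repeating this over all depth
+    planes yields the D cost maps fed to the recurrent encoder-decoder.
+    """
+    mean = warped_feats.mean(dim=0, keepdim=True)        # (1, C, H, W)
+    return ((warped_feats - mean) ** 2).mean(dim=0)      # per-channel variance
+```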
+
+Recurrent Encoder-Decoder Regularization. Inspired by the U-Net [26], GRU [5], and RCNN [2], in this paper we introduce a recurrent encoder-decoder architecture to regularize the $D$ cost maps that are obtained from the 2D convolutions and plane sweep methods. In the spatial dimension, one cost map $\mathbf{C}_i$ is the input to the recurrent encoder-decoder structure at a time, which is then processed by a four-scale convolutional encoder. Except for the first convolution layer with stride 1 and channel number 8, we doubled the feature channels at each downsampling step in the encoder. The decoder consists of three up-convolutional layers, and each layer expands the feature map generated by the previous layer and halves the feature channels. At each scale, the encoded feature maps are regularized by a convolutional GRU [37], which are then added to the corresponding feature maps at the same scale in the decoder. After the decoder, an up-convolutional layer is used to upsample the regularized cost maps to the input image size and reduce channel number to 1.
+
+In the depth direction, the contextual information of the sequential cost maps is recorded in the previously regularized GRU states and transferred to the current cost map $\mathbf{C}_i$ . There are four GRU state transitions in the laddered encoder-decoder structure, one per scale, to gather and refine the contextual features at different spatial scales.
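+
+For reference, a standard convolutional GRU cell of the kind used at each scale is sketched below, following the common ConvGRU formulation [5, 37]; the exact kernel sizes and channel counts inside RED-Net may differ, so this is an assumed, illustrative cell rather than the authors' implementation.
+
+```python
+import torch
+import torch.nn as nn
+
+class ConvGRUCell(nn.Module):
+    """A convolutional GRU cell; one such cell regularizes the encoded cost
+    features at each scale while sweeping along the depth direction."""
+    def __init__(self, channels: int, kernel_size: int = 3):
+        super().__init__()
+        pad = kernel_size // 2
+        self.gates = nn.Conv2d(2 * channels, 2 * channels, kernel_size, padding=pad)
+        self.cand = nn.Conv2d(2 * channels, channels, kernel_size, padding=pad)
+
+    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
+        # x: encoded cost features at the current depth plane;
+        # h: hidden state carried over from the previous depth plane.
+        zr = torch.sigmoid(self.gates(torch.cat([x, h], dim=1)))
+        z, r = zr.chunk(2, dim=1)                          # update and reset gates
+        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
+        return (1 - z) * h + z * h_tilde                   # new hidden state
+```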
+
+By regularizing the cost maps in the spatial direction and aggregating geometric and contextual information in the depth direction through the recurrent encoder-decoder, RED-Net obtains globally consistent spatial and contextual representations for multi-view depth inference. Compared to a stack of GRUs [37], our multi-scale recurrent encoder-decoder exploits multi-scale neighborhood information with more detail and fewer parameters.
+
+Loss computation. A cost volume is obtained by stacking all the regularized cost maps together. We turn it into a probability volume by applying a softmax operator along the depth direction, as in previous works [17]. From this probability volume, the depth value can be estimated pixel-wise and compared to the ground truth with the cross-entropy loss, the same as in [37].
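+
+Treated as per-pixel classification over the $D$ depth planes, this step can be sketched as below; the exact loss weighting and any sub-pixel depth refinement used by the authors are not specified, so the formulation is an assumption.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def depth_classification_loss(cost_volume: torch.Tensor,
+                              gt_depth_index: torch.Tensor) -> torch.Tensor:
+    """cost_volume: (B, D, H, W) stacked regularized cost maps.
+    gt_depth_index: (B, H, W) integer index of the true depth plane per pixel.
+    Softmax along the depth dimension gives the probability volume; the loss is
+    the pixel-wise cross entropy against the ground-truth depth plane."""
+    log_prob = F.log_softmax(cost_volume, dim=1)      # log of the probability volume
+    return F.nll_loss(log_prob, gt_depth_index)       # mean cross-entropy over pixels
+```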
+
+To keep the pipeline end-to-end, we did not apply any post-processing. The inferred depth maps are translated into dense 3D points according to the camera parameters, and together they constitute the complete 3D scene. However, many classic post-processing methods [22] can be applied for refinement.
+
+
+Figure 5: The inferred depth maps of three sub-units in the WHU test set. Our method produced the finest depth maps.
+
+# 5. Experiments
+
+# 5.1. Experimental Settings and Results
+
+We evaluated our proposed RED-Net on our WHU dataset and compared it to several recent MVS methods and software, including COLMAP [29] and the commercial software SURE [27] (aerial trial version [32]), which are based on conventional methods, and MVSNet [36] and R-MVSNet [37], which are based on deep neural networks. We directly applied COLMAP and SURE to the WHU test set, which contains 93 images ( $5376 \times 5376$ in size), and output depth maps or dense point clouds. We trained the CNN-based methods, including ours, on the WHU training set, which contains 3,600 sub-units ( $768 \times 384$ in size), and then evaluated them on the WHU test set, which contains 1,360 sub-units of the same image size. The input view numbers were $N = 3$ and $N = 5$ for WHU-3 and WHU-5, respectively, with depth sample number $D = 200$ . Because the depth range can vary between images, we estimated the initial depth with COLMAP and set the depth range accordingly for each image. In the test set, the number of depth samples was variable and we set the depth interval to $0.15\mathrm{m}$ . The performance of the different methods was compared on the depth maps without any post-processing. For SURE, the generated dense point clouds were converted to depth maps in advance.
+
+In the training stages of RED-Net, RMSProp [20] was chosen as the optimizer, and the learning rate was set at 0.001 with a decay of 0.9 for every 5k iterations. The model was trained for three epochs with a batch size of one, which involved about 150k iterations in total. All the experiments were conducted on a 24 GB NVIDIA Titan RTX graphics card and TensorFlow platform.
+
+We used four measures to evaluate the depth quality: 1) Mean absolute error (MAE): the average of the L1 distances between the estimated and true depths, where only distances within 100 depth intervals were counted in order to exclude extreme outliers; 2) $< 0.6\mathbf{m}$ : the percentage of pixels whose L1 error was less than the $0.6\mathrm{m}$ threshold; 3) 3-interval-error ( $< 3$ -interval): the
+
+| Method | Train & Test | MAE (m) | <3-interval (%) | <0.6m (%) | Comp. |
+| --- | --- | --- | --- | --- | --- |
+| COLMAP | / | 0.1548 | 94.95 | 95.67 | 98% |
+| SURE | / | 0.2245 | 92.09 | 93.69 | 94% |
+| MVSNet | WHU-3 | 0.1974 | 93.22 | 94.74 | 100% |
+| MVSNet | WHU-5 | 0.1543 | 95.36 | 95.82 | 100% |
+| R-MVSNet | WHU-3 | 0.1882 | 94.00 | 94.90 | 100% |
+| R-MVSNet | WHU-5 | 0.1505 | 95.64 | 95.99 | 100% |
+| RED-Net | WHU-3 | 0.1120 | 97.90 | 98.10 | 100% |
+| RED-Net | WHU-5 | 0.1041 | 97.93 | 98.08 | 100% |
+
+Table 1: The quantitative results on WHU dataset.
+
+percentage of pixels whose L1 error was less than three depth intervals; 4) Completeness: the percentage of pixels with an estimated depth value in the depth map.
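+
+A sketch of these four measures is given below, assuming pixels without an estimate are marked as NaN and using the $0.15\mathrm{m}$ test-set interval as the default; the helper name and conventions are illustrative.
+
+```python
+import numpy as np
+
+def depth_metrics(est, gt, interval=0.15):
+    """Compute MAE, <0.6m, <3-interval and completeness for one depth map.
+
+    est, gt: HxW depth maps in metres; pixels without an estimate are NaN."""
+    valid = ~np.isnan(est) & ~np.isnan(gt)
+    err = np.abs(est[valid] - gt[valid])
+
+    completeness = valid.sum() / np.isfinite(gt).sum()   # pixels with an estimate
+    inliers = err < 100 * interval                       # exclude extreme outliers
+    return {
+        "MAE": err[inliers].mean(),
+        "<0.6m": (err < 0.6).mean(),
+        "<3-interval": (err < 3 * interval).mean(),
+        "completeness": completeness,
+    }
+```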
+
+Our quantitative results are shown in Table 1. RED-Net outperformed all the other methods on all indicators and obtained at least $50\%$ MAE improvement compared to the second-best method, R-MVSNet. For the 3-interval-error and $0.6\mathrm{m}$ threshold indicators, our method exceeded all the other methods by at least $2\%$ . Our qualitative results in Figure 5 show that RED-Net's reconstructed depth maps were the cleanest and most similar to the ground truth.
+
+# 5.2. GPU Memory and Runtime
+
+The GPU memory requirements and running speeds of RED-Net, MVSNet, and R-MVSNet on the WHU dataset are listed in Table 2. The memory requirement of MVSNet increases with the depth sample number $D$ , whereas those of RED-Net and R-MVSNet are constant with respect to $D$ . The memory occupied by RED-Net was nearly half that of R-MVSNet, and RED-Net reconstructs depth maps at full resolution, which is 16 times larger than the latter.
+
+The runtime is related to the depth sample number, input image size, and image number. Given the same $N$ -view images, (R-)MVSNet generates a depth map downsampled by a factor of 4 and is slightly faster, while RED-Net keeps the same resolution as the input. Therefore, considering the output resolution, our network is much more efficient than the others.
+
+| Methods | Input size | 3-view, D = 800 | 3-view, D = 400 | 3-view, D = 200 | 3-view, D = 128 | 5-view, D = 200 | Output size |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| MVSNet | 384 × 768 | 17085M / 1.1s | 8893M / 0.6s | 4797M / 0.3s | 2749M / 0.2s | 4797M / 0.5s | 96 × 192 |
+| R-MVSNet | 384 × 768 | 4419M / 1.2s | 4419M / 0.6s | 4419M / 0.4s | 4419M / 0.3s | 4547M / 0.6s | 96 × 192 |
+| RED-Net | 384 × 768 | 2493M / 1.8s | 2493M / 0.95s | 2493M / 0.6s | 2493M / 0.5s | 2509M / 0.8s | 384 × 768 |
+
+
+Figure 6: The inferred depth maps of three sub-units on München aerial image set. The deep learning based methods are trained on the WHU-3 training set.
+
+# 5.3. Generalization
+
+The WHU dataset was created under a well-controlled imaging process. To demonstrate how representative the WHU dataset is of aerial datasets and the generalization ability of RED-Net, five methods were tested on the real aerial dataset München [11]. The München dataset differs somewhat from the WHU dataset in that it was captured over a metropolis instead of a town. It comprises 15 aerial images $(7072\times 7776$ in size) with $80\%$ and $60\%$ overlap in the heading and side directions, respectively. The three CNN-based models were pre-trained on the DTU or WHU datasets without any fine-tuning. The input view number for the München dataset was $\mathrm{N} = 3$ and the depth sample resolution was $0.1\mathrm{m}$ . The quantitative results are shown in Table 3 and some qualitative results in Figure 6. Three conclusions can be drawn from Table 3. First, RED-Net trained on the WHU-3 dataset performed the best on all indicators and exceeded the other methods by at least $6\%$ in 3-interval-error; RED-Net trained on the WHU-5 dataset performed almost the same. Second, the WHU dataset guaranteed generalizability while the indoor DTU dataset could not. When trained on the DTU dataset, all the CNN-based methods performed worse than the two conventional methods; for example, (R-)MVSNet was $30\%$ worse than the two conventional methods in 3-interval-error, whereas when trained on the WHU dataset, their performance was comparable to the latter. Finally, the recurrent encoder-decoder structure in RED-Net led to better generalizability than the stack of GRUs in R-MVSNet and the 3D convolutions in MVSNet: when trained on the DTU dataset, our method achieved a $20\%$ improvement over (R-)MVSNet in 3-interval-error.
+
+Table 2: Comparisons of memory requirement and runtime between (R-)MVSNet and RED-Net. Our method requires less memory but achieves full-resolution reconstruction.
+
+| Methods | Train set | MAE (m) | <3-interval (%) | <0.6m (%) |
+| --- | --- | --- | --- | --- |
+| COLMAP | / | 0.5860 | 73.36 | 81.95 |
+| SURE | / | 0.5138 | 73.71 | 85.70 |
+| MVSNet | DTU | 1.1696 | 43.19 | 61.26 |
+| MVSNet | WHU-3 | 0.6169 | 69.33 | 81.36 |
+| MVSNet | WHU-5 | 0.5882 | 70.43 | 83.46 |
+| R-MVSNet | DTU | 0.7809 | 43.22 | 70.26 |
+| R-MVSNet | WHU-3 | 0.6228 | 74.33 | 83.35 |
+| R-MVSNet | WHU-5 | 0.6426 | 74.08 | 83.68 |
+| RED-Net | DTU | 0.6867 | 63.04 | 78.89 |
+| RED-Net | WHU-3 | 0.5063 | 80.67 | 86.98 |
+| RED-Net | WHU-5 | 0.5283 | 80.40 | 86.69 |
+
+Table 3: Quantitative evaluation on the München aerial image set with different MVS methods. The deep learning based methods were trained on the WHU or the DTU training set.
+
+# 6. Discussion
+
+# 6.1. Advantage of the Recurrent Encoder-Decoder
+
+In this section, we evaluate the effectiveness of the recurrent encoder-decoder in an MVS network. We downsampled the feature maps by a factor of four in the 2D extraction stage so that the cost maps in RED-Net were the same size as in R-MVSNet. The final output was also changed to 1/16 the size of the input to remain consistent with R-MVSNet. The results are compared in Table 4. On the three aerial datasets, RED-Net demonstrated clear advantages on all measures, which indicates that the high performance of RED-Net is due not only to the improved output resolution, but also to the encoder-decoder structure, which learns spatial and contextual representations better than stacked GRUs.
+
+
+Figure 7: The point cloud reconstructions of a large area using RED-Net. The right is an enlarged part from the left scene.
+
+# 6.2. Evaluation on DTU
+
+Although RED-Net is mainly developed for the large-scale aerial MVS problem, it surpasses the state-of-the-art R-MVSNet on the close-range DTU dataset. Table 5 shows that, with the same post-processing (photometric and geometric filtering), the overall score of RED-Net outperformed that of R-MVSNet by $18\%$ , and also outperformed the results provided in [37] with the full set of four post-processing methods. The overall score is derived from two representative indicators, accuracy and completeness, suggested by the DTU dataset [1] and used in [37].
+
+# 6.3. Large-scale Reconstruction
+
+RED-Net produces full-resolution depth maps with arbitrary depth sample numbers, which is particularly beneficial for high-resolution large-scale reconstruction of the Earth's surface from multi-view aerial images with a wide depth range. Moreover, RED-Net can handle three-view images with a size of $7040 \times 7040$ pixels on a 24 GB GPU, taking only 58 seconds to infer a depth map with 128 depth samples. When we inferred the depth of a scene covering $1.8 \times 0.85 \mathrm{~km}^2$ (Figure 7), RED-Net with 3-view input and 200 depth samples took 9.3 minutes, while SURE took 150 minutes and COLMAP took 608 minutes.
+
+# 7. Conclusion
+
+In this paper, we introduced and demonstrated a synthetic aerial dataset, called the WHU dataset, created for large-scale and high-resolution MVS reconstruction, which, to our knowledge, is the largest and only openly available multi-view aerial dataset. We expect the WHU dataset to be a beneficial supplement to current close-range multi-view datasets and to help facilitate the study of large-scale reconstruction of the Earth's surface and cities.
+
+We also introduced in this paper a new approach we developed for multi-view reconstruction called RED-Net.
+
+| Dataset | Methods | MAE (m) | <3-interval (%) | <0.6m (%) |
+| --- | --- | --- | --- | --- |
+| München | R-MVSNet | 0.4264 | 81.43 | 88.67 |
+| München | RED-Net* | 0.3677 | 83.63 | 89.95 |
+| WHU-3 | R-MVSNet | 0.1882 | 94.00 | 94.90 |
+| WHU-3 | RED-Net* | 0.1574 | 95.52 | 96.03 |
+| WHU-5 | R-MVSNet | 0.1505 | 95.64 | 95.99 |
+| WHU-5 | RED-Net* | 0.1379 | 95.89 | 96.64 |
+
+Table 4: Results of R-MVSNet and RED-Net with the same size of inferred depth map on three datasets. \* means that the cost maps and outputs of our method are downsampled by four, as in R-MVSNet. Models are trained and tested on the same dataset, respectively.
+
+| Methods (D=256) | Mean Acc. | Mean Comp. | Overall (mm) |
+| --- | --- | --- | --- |
+| R-MVSNet [37] | 0.385 | 0.459 | 0.422 |
+| R-MVSNet* | 0.551 | 0.373 | 0.462 |
+| RED-Net | 0.456 | 0.326 | 0.391 |
+
+Table 5: Results of the R-MVSNet and RED-Net on DTU benchmark. \* means our implementation with only photometric and geometric filtering post-processing, the same as in RED-Net.
+
+This new network was shown to achieve highly efficient large-scale and full-resolution reconstruction with relatively low memory requirements, and its performance exceeded that of both the deep learning-based methods and the commercial software. Our experiments also showed that RED-Net pre-trained on our newly created WHU dataset can be directly applied to a somewhat different aerial dataset, thanks to the proper training data and the model's strong generalizability, which suggests that deep learning based approaches may replace conventional MVS methods in practical large-scale reconstruction.
+
+# Acknowledgement
+
+This work was supported by the Huawei Company, Grant No. YBN2018095106.
+
+# References
+
+[1] H. Aanaes, R. R. Jensen, G. Vogiatzis, E. Tola, and A. B. Dahl. Large-scale data for multiple-view stereopsis. International Journal of Computer Vision, 120(2):153-168, 2016.
+[2] Md Zahangir Alom, Mahmudul Hasan, Chris Yakopcic, Tarek M Taha, and Vijayan K Asari. Recurrent residual convolutional neural network based on u-net (r2unet) for medical image segmentation. arXiv preprint arXiv:1802.06955, 2018.
+[3] Michael Bleyer, Christoph Rhemann, and Carsten Rother. Patchmatch stereo - stereo matching with slanted support windows. In Proceedings of the British Machine Vision Conference 2011, pages 1-11, 2011.
+[4] J. R. Chang and Y. S. Chen. Pyramid stereo matching network. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5410-5418, 2018.
+[5] Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
+[6] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 539-546, 2005.
+[7] R. T. Collins. A space-sweep approach to true multi-image matching. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 358-363, 1996.
+[8] ContextCapture. Available: http://www.bentley.com/en/products/brands/contextcapture.
+[9] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3354-3361, 2012.
+[10] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 315-323, 2011.
+[11] Norbert Haala. The landscape of dense image matching algorithms. 2013.
+[12] W. Hartmann, S. Galliani, M. Havlena, L. Van Gool, and K. Schindler. Learned multi-patch similarity. In IEEE International Conference on Computer Vision (ICCV), pages 1595-1603, 2017.
+[13] H. Hirschmuller. Stereo processing by semiglobal matching and mutual information. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(2):328-341, 2008.
+[14] P. H. Huang, K. Matzen, J. Kopf, N. Ahuja, and J. B. Huang. Deepmvs: Learning multi-view stereopsis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2821-2830, 2018.
+[15] M. Q. Ji, J. R. Gall, H. T. Zheng, Y. B. Liu, and L. Fang. Surfacenet: An end-to-end 3d neural network for multiview stereopsis. In IEEE International Conference on Computer Vision (ICCV), pages 2326-2334, 2017.
+
+[16] Abhishek Kar, Christian Hane, and Jitendra Malik. Learning a multi-view stereo machine. In Advances in Neural Information Processing Systems, pages 365-376, 2017.
+[17] A. Kendall, H. Martirosyan, S. Dasgupta, P. Henry, R. Kennedy, A. Bachrach, and A. Bry. End-to-end learning of geometry and context for deep stereo regression. In IEEE International Conference on Computer Vision (ICCV), pages 66-75, 2017.
+[18] A. Knapitsch, J. Park, Q. Y. Zhou, and V. Koltun. Tanks and temples: Benchmarking large-scale scene reconstruction. ACM Transactions on Graphics, 36(4):78, 2017.
+[19] Philipp Krahenbuhl and Vladlen Koltun. Efficient inference in fully connected crfs with gaussian edge potentials. In Advances in neural information processing systems, pages 109-117, 2011.
+[20] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436, 2015.
+[21] N. Mayer, E. Ilg, P. Hausser, P. Fischer, D. Cremers, A. Dosovitskiy, and T. Brox. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4040-4048, 2016.
+[22] Paul Merrell, Amir Akbarzadeh, Liang Wang, Philippos Mordohai, Jan-Michael Frahm, Ruigang Yang, David Nistér, and Marc Pollefeys. Real-time visibility-based fusion of depth maps. In 2007 IEEE 11th International Conference on Computer Vision, pages 1-8. IEEE, 2007.
+[23] J. H. Pang, W. X. Sun, J. S. J. Ren, C. X. Yang, and Q. Yan. Cascade residual learning: A two-stage convolutional neural network for stereo matching. In IEEE International Conference on Computer Vision Workshops, pages 878-886, 2017.
+[24] Pix4D. Available: https://www.pix4d.com/.
+[25] G. Riegler, A. O. Ulusoy, and A. Geiger. Octnet: Learning deep 3d representations at high resolutions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6620-6629, 2017.
+[26] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-assisted Intervention, volume 9351, pages 234-241, 2015.
+[27] Mathias Rothermel, Konrad Wenzel, Dieter Fritsch, and Norbert Haala. Sure: Photogrammetric surface reconstruction from imagery. In Proceedings LC3D Workshop, Berlin, page 2, 2012.
+[28] D. Scharstein and R. Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. International Journal of Computer Vision, 47(1-3):7-42, 2002.
+[29] Johannes L Schonberger, Enliang Zheng, Jan-Michael Frahm, and Marc Pollefeys. Pixelwise view selection for unstructured multi-view stereo. In European Conference on Computer Vision, pages 501-518. Springer, 2016.
+[30] T. Schops, J. L. Schonberger, S. Galliani, T. Sattler, K. Schindler, M. Pollefeys, and A. Geiger. A multi-view stereo benchmark with high-resolution images and multi-camera videos. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2538-2547, 2017.
+[31] Steven M Seitz, Brian Curless, James Diebel, Daniel Scharstein, and Richard Szeliski. A comparison and evaluation of multi-view stereo reconstruction algorithms. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, pages 519-528. IEEE, 2006.
+[32] SURE-Aerial. Available: http://www.nframes.com/products/sure-aerial/.
+[33] E. Tola, C. Strecha, and P. Fua. Efficient large-scale multiview stereo for ultra high-resolution image sets. Machine Vision and Applications, 23(5):903-920, 2012.
+[34] P. S. Wang, Y. Liu, Y. X. Guo, C. Y. Sun, and X. Tong. O-cnn: Octree-based convolutional neural networks for 3d shape analysis. ACM Transactions on Graphics, 36(4):72, 2017.
+[35] Rui Wang and Xuelei Qian. OpenSceneGraph 3.0: Beginner's Guide. Packt Publishing Ltd, 2010.
+[36] Yao Yao, Zixin Luo, Shiwei Li, Tian Fang, and Long Quan. Mvsnet: Depth inference for unstructured multi-view stereo. In Proceedings of the European Conference on Computer Vision (ECCV), pages 767-783, 2018.
+[37] Yao Yao, Zixin Luo, Shiwei Li, Tianwei Shen, Tian Fang, and Long Quan. Recurrent mvsnet for high-resolution multiview stereo depth inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5525-5534, 2019.
+[38] Feihu Zhang, Victor Prisacariu, Ruigang Yang, and Philip HS Torr. Ga-net: Guided aggregation net for end-to-end stereo matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 185-194, 2019.
\ No newline at end of file
diff --git a/anovelrecurrentencoderdecoderstructureforlargescalemultiviewstereoreconstructionfromanopenaerialdataset/images.zip b/anovelrecurrentencoderdecoderstructureforlargescalemultiviewstereoreconstructionfromanopenaerialdataset/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..befc3df7285f4d4a9d2b77332cd475ac6d0fa828
--- /dev/null
+++ b/anovelrecurrentencoderdecoderstructureforlargescalemultiviewstereoreconstructionfromanopenaerialdataset/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:389701f6ed2a785d3877c8e56a2a0ed3d59f0bf4e828e3b56115cf366b30f201
+size 698527
diff --git a/anovelrecurrentencoderdecoderstructureforlargescalemultiviewstereoreconstructionfromanopenaerialdataset/layout.json b/anovelrecurrentencoderdecoderstructureforlargescalemultiviewstereoreconstructionfromanopenaerialdataset/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..224bddee4b2a1d4d29eabec3db15b095919a1275
--- /dev/null
+++ b/anovelrecurrentencoderdecoderstructureforlargescalemultiviewstereoreconstructionfromanopenaerialdataset/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0234dc27ee0f71f7ff1fcae84d6c30b1a26239b48a83ac89fb6d8f9c4e90619f
+size 316694
diff --git a/aphysicsbasednoiseformationmodelforextremelowlightrawdenoising/734c6453-70f2-45a7-84aa-004f55942770_content_list.json b/aphysicsbasednoiseformationmodelforextremelowlightrawdenoising/734c6453-70f2-45a7-84aa-004f55942770_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..7e4b0bc178211c1a66cc69fe7ec82f5ac93d7e32
--- /dev/null
+++ b/aphysicsbasednoiseformationmodelforextremelowlightrawdenoising/734c6453-70f2-45a7-84aa-004f55942770_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8d02f5460c932a6df447b75577e7ff935acd4c68d87ce839344bd1211dd253e0
+size 83262
diff --git a/aphysicsbasednoiseformationmodelforextremelowlightrawdenoising/734c6453-70f2-45a7-84aa-004f55942770_model.json b/aphysicsbasednoiseformationmodelforextremelowlightrawdenoising/734c6453-70f2-45a7-84aa-004f55942770_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..e159ab61a6457dc2886b0abba1305268d27f8c3d
--- /dev/null
+++ b/aphysicsbasednoiseformationmodelforextremelowlightrawdenoising/734c6453-70f2-45a7-84aa-004f55942770_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:85546726d7455f78840ebc6c7c09a25b9073ff1cc1ef8c9262a01b3a27868032
+size 107447
diff --git a/aphysicsbasednoiseformationmodelforextremelowlightrawdenoising/734c6453-70f2-45a7-84aa-004f55942770_origin.pdf b/aphysicsbasednoiseformationmodelforextremelowlightrawdenoising/734c6453-70f2-45a7-84aa-004f55942770_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..9ed09125380e660e6cfc4a3b4bb9553bd1374b31
--- /dev/null
+++ b/aphysicsbasednoiseformationmodelforextremelowlightrawdenoising/734c6453-70f2-45a7-84aa-004f55942770_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:567ac4529cfb0820b74055ac3c4b29c27e7eac4d7fa934bf07df3f6c6b9d93c1
+size 7559063
diff --git a/aphysicsbasednoiseformationmodelforextremelowlightrawdenoising/full.md b/aphysicsbasednoiseformationmodelforextremelowlightrawdenoising/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..351c3e05fa28d41d742a2ba1a7430ec4a51c278a
--- /dev/null
+++ b/aphysicsbasednoiseformationmodelforextremelowlightrawdenoising/full.md
@@ -0,0 +1,368 @@
+# A Physics-based Noise Formation Model for Extreme Low-light Raw Denoising
+
+Kaixuan Wei $^{1}$ Ying Fu $^{1*}$ Jiaolong Yang $^{2}$ Hua Huang $^{1}$ $^{1}$ Beijing Institute of Technology ${ }^{2}$ Microsoft Research
+
+# Abstract
+
+Lacking rich and realistic data, learned single image denoising algorithms generalize poorly to real raw images that do not resemble the data used for training. Although the problem can be alleviated by the heteroscedastic Gaussian model for noise synthesis, the noise sources caused by digital camera electronics are still largely overlooked, despite their significant effect on raw measurement, especially under extremely low-light condition. To address this issue, we present a highly accurate noise formation model based on the characteristics of CMOS photosensors, thereby enabling us to synthesize realistic samples that better match the physics of image formation process. Given the proposed noise model, we additionally propose a method to calibrate the noise parameters for available modern digital cameras, which is simple and reproducible for any new device. We systematically study the generalizability of a neural network trained with existing schemes, by introducing a new low-light denoising dataset that covers many modern digital cameras from diverse brands. Extensive empirical results collectively show that by utilizing our proposed noise formation model, a network can reach the capability as if it had been trained with rich real data, which demonstrates the effectiveness of our noise formation model.
+
+# 1. Introduction
+
+Light is of paramount importance to photography. Night and low light place very demanding constraints on photography due to limited photon counts and inescapable noise. The natural reaction is to gather more light by, e.g., enlarging the aperture, lengthening the exposure time, or using the flash. However, each method is a tradeoff: a large aperture incurs a small depth of field and is unavailable on smartphone cameras; long exposure can induce blur due to scene variations or camera motion; flash can cause color aberrations and is useful only for nearby objects.
+
+A practical rescue for low-light imaging is to use burst capturing [46, 28, 42, 40], in which a burst of images are aligned and fused to increase the signal-to-noise ratio
+
+
+
+
+
+
+
+
+Figure 1: An image from the See-in-the-Dark (SID) Dataset [9], where we present (a) the short-exposure noisy input image; (f) the long-exposure reference image; (b-e) the outputs of UNets [51] trained with (b) synthetic data generated by the homoscedastic Gaussian noise model (G), (c) synthetic data generated by the signal-dependent heteroscedastic Gaussian noise model $(\mathrm{G} + \mathrm{P})$ [22], (d) paired real data of [9], and (e) synthetic data generated by our proposed noise model respectively. All images were converted from raw Bayer space to sRGB for visualization; similarly hereinafter.
+
+(SNR). However, burst photography can be fragile, suffering from ghosting effects [28, 56] when capturing dynamic scenes with vehicles, humans, etc. An emerging alternative is to employ a neural network to automatically learn the mapping from a low-light noisy image to its long-exposure counterpart [9]. However, such a deep learning approach generally requires a large amount of labelled training data that resembles low-light photographs in the real world. Collecting rich, high-quality training samples from diverse modern camera devices is tremendously labor-intensive and expensive.
+
+In contrast, synthetic data is simple, abundant and inexpensive, but its efficacy is highly contingent upon how accurate the adopted noise formation model is. The heteroscedastic Gaussian noise model [22], instead of the commonly-used homoscedastic one, approximates well the real noise occurring in daylight or moderate low-light settings [5, 27, 28]. However, it cannot delineate the full picture of sensor noise under severely low illuminance. An illustrative example is shown in Fig. 1, where the objectionable banding pattern artifacts, an unmodeled noise component that is exacerbated in dim environments, become clearly noticeable to human eyes.
+
+In this paper, to avoid the influence on the noise model of the image signal processing (ISP) pipeline [9, 5, 46] that converts raw data to sRGB, we focus on the noise formation model for raw images. We propose a physics-based noise formation model for extreme low-light raw denoising, which explicitly leverages the characteristics of CMOS photosensors to better match the physics of noise formation. As shown in Fig. 2, our proposed synthesis pipeline derives from the inherent process of electronic imaging by considering how photons pass through its several stages. It models sensor noise in a fine-grained manner that includes many noise sources such as photon shot noise, pixel circuit noise, and quantization noise. Besides, we provide a method to calibrate the noise parameters from available digital cameras. To investigate the generality of our noise model, we additionally introduce an extreme low-light denoising (ELD) dataset taken with various camera devices to evaluate our model. Extensive experiments show that a network trained only with synthetic data from our noise model can reach the capability it would have if it had been trained with rich real data.
+
+Our main contributions can be summarized as follows:
+
+- We formulate a noise model to synthesize realistic noisy images that can match the quality of real data under extreme low-light conditions.
+- We present a noise parameter calibration method that can adapt our model to a given camera.
+- We collect a dataset with various camera devices to verify the effectiveness and generality of our model.
+
+# 2. Related Work
+
+Noise removal from a single image is an extensively-studied yet still unresolved problem in computer vision and image processing. Single image denoising methods generally rely on the assumption that both signal and noise exhibit particular statistical regularities such that they can be separated from a single observation. Crafting an analytical regularizer associated with image priors (e.g. smoothness, sparsity, self-similarity, low rank), therefore, plays a critical role in traditional design pipeline of denoising algorithms [52, 48, 18, 16, 43, 15, 6, 26]. In the modern era, most single image denoising algorithms are entirely data-driven, which consist of deep neural networks that implicitly learn the statistical regularities to infer clean images from their noisy counterparts [53, 12, 45, 61, 24, 57, 10, 27]. Although simple and powerful, these learning-based approaches are often trained on synthetic image data due to practical constraints. The most widely-used additive, white, Gaussian
+
+
+Figure 2: Overview of electronic imaging pipeline and visualization of noise sources and the resulting image at each stage.
+
+noise model deviates strongly from realistic evaluation scenarios, resulting in significant performance declines on photographs with real noise [49, 2].
+
+To sidestep the domain gap between synthetic images and real photographs, some works have resorted to collecting paired real data not just for evaluation but also for training [2, 9, 54, 8, 33]. Notwithstanding the promising results, collecting sufficient real data with ground-truth labels to prevent overfitting is exceedingly expensive and time-consuming. Recent works exploit the use of paired (Noise2Noise [38]) or single (Noise2Void [37]) noisy images as training data instead of paired noisy and noise-free images. However, they cannot substantially ease the burden of labor required for capturing a massive amount of real-world training data.
+
+Another line of research has focused on improving the realism of synthetic training data to circumvent the difficulties in acquiring real data from cameras. By considering both photon arrival statistics ("shot" noise) and sensor readout effects ("read" noise), the works of [46, 5] employed a signal-dependent heteroscedastic Gaussian model [22] to characterize the noise properties of raw sensor data. Most recently, Wang et al. [59] proposed a noise model that considers dynamic streak noise, color channel heterogeneity, and the clipping effect to simulate high-sensitivity noise on real low-light color images. Concurrently, a flow-based generative model, namely Noiseflow [1], was proposed to formulate the distribution of real noise using latent variables with tractable density. However, these approaches oversimplify the modern sensor imaging pipeline, especially the noise sources caused by camera electronics, which have been extensively studied in the electronic imaging community [36, 29, 25, 3, 17, 19, 30, 31, 4, 58, 14]. In this work, we propose a physics-based noise formation model stemming from the essential process of electronic imaging to synthesize noisy datasets, and show sizeable improvements in denoising performance on real data, particularly under extremely low illuminance.
+
+# 3. Physics-based Noise Formation Model
+
+The creation of a digital sensor raw image $D$ can be generally formulated by a linear model
+
+$$
+D = K I + N, \tag {1}
+$$
+
+where $I$ is the number of photoelectrons, which is proportional to the scene irradiation, $K$ represents the overall system gain composed of analog and digital gains, and $N$ denotes the summation of all noise sources physically caused by light or the camera. We focus on the single raw image denoising problem under extreme low-light conditions. In this context, the characteristics of $N$ are formulated in terms of the sensor's physical process, going beyond existing noise models. Deriving an optimal regularizer to tackle such noise is infeasible, as there is no analytical solver for such a noise distribution. Therefore, we rely on a learning-based neural network pipeline to implicitly learn the regularities from data. Creating training samples for this task requires careful consideration of the characteristics of raw sensor data. In the following, we first describe the detailed procedure of the physical formation of a sensor raw image as well as the noise sources introduced during the whole process. An overview of this process is shown in Fig. 2.
+
+# 3.1. Sensor Raw Image Formation
+
+Our photosensor model is primarily based upon the CMOS sensor, which is the dominating imaging sensor nowadays [50]. We consider the electronic imaging pipeline of how incident light is converted from photons to electrons, from electrons to voltage, and finally from voltage to digital numbers, to model noise.
+
+From Photon to Electrons. During exposure, incident light in the form of photons hits the photosensor pixel area, liberating photon-generated electrons (photoelectrons) in proportion to the light intensity. Due to the quantum nature of light, there exists an inevitable uncertainty in the number of electrons collected. Such uncertainty imposes a Poisson distribution over this number of electrons, which follows
+
+$$
+(I + N_{p}) \sim \mathcal{P}(I), \tag{2}
+$$
+
+where $N_{p}$ is termed the photon shot noise and $\mathcal{P}$ denotes the Poisson distribution. This type of noise depends on the light intensity, i.e., on the signal. Shot noise is a fundamental limitation and cannot be avoided even with a perfect sensor. Other noise sources are introduced during the photon-to-electron stage, such as photo response non-uniformity and dark current noise, as reported in many previous works [29, 25, 58, 3]. Over the last decade, technical advancements in CMOS sensor design and fabrication, e.g., on-sensor dark current suppression, have led to
+
+a new generation of digital single lens reflex (DSLR) cameras with lower dark current and better photo response uniformity [23, 41]. Therefore, we assume a constant photo response and absorb the effect of dark current noise $N_{d}$ into read noise $N_{\text{read}}$ , which will be presented next.
+
+From Electrons to Voltage. After the electrons are collected at each site, they are typically integrated, amplified and read out as measurable charge or voltage at the end of the exposure time. Noise present during the electrons-to-voltage stage depends on the circuit design and processing technology used, and is thus referred to as pixel circuit noise [25]. It includes thermal noise, reset noise [36], source follower noise [39] and banding pattern noise [25]. The physical origin of these noise components can be found in the electronic imaging literature [36, 25, 58, 39]. For instance, source follower noise is attributed to the action of traps in the silicon lattice that randomly capture and emit carriers; banding pattern noise is associated with the CMOS circuit readout pattern and the amplifier.
+
+By leveraging this knowledge, we consider the thermal noise $N_{t}$ , source follower noise $N_{s}$ and banding pattern noise $N_{b}$ in our model. The noise model of $N_{b}$ will be presented later. Here, we absorb multiple noise sources into a unified term, i.e. read noise
+
+$$
+N_{\text{read}} = N_{d} + N_{t} + N_{s}. \tag{3}
+$$
+
+Read noise could be assumed to follow a Gaussian distribution, but analysis of the noise data (in Section 3.2) reveals the long-tailed nature of its distribution. This can be attributed to the flicker and random telegraph signal components of source follower noise [25], or to the dark spikes raised by dark current [36]. Therefore, we propose using a statistical distribution that better characterizes the long-tailed shape. Specifically, we model the read noise by a Tukey lambda distribution $(TL)$ [34], a distributional family that can approximate a number of common distributions (e.g., a heavy-tailed Cauchy distribution):
+
+$$
+N_{\text{read}} \sim TL(\lambda; 0, \sigma_{TL}), \tag{4}
+$$
+
+where $\lambda$ and $\sigma_{TL}$ indicate the shape and scale parameters respectively, while the location parameter is set to zero given the zero-mean noise assumption.
+
+Banding pattern noise $N_{b}$ appears in images as horizontal or vertical lines. We only consider the row noise component (horizontal stripes) in our model, as the column noise component (vertical stripes) is generally negligible when measuring the noise data (Section 3.2). We simulate the row noise $N_{r}$ by sampling a value from a zero-mean Gaussian distribution with scale parameter $\sigma_{r}$ and adding it as an offset to all pixels within a single row.
+
+From Voltage to Digital Numbers. To generate an image that can be stored on a digital storage medium, the analog voltage signal read out during the last stage is quantized
+
+
+Figure 3: Centralized Fourier spectrum of bias frames captured by SonyA7S2 (left) and NikonD850 (right) cameras.
+
+into discrete codes using an ADC. This process introduces quantization noise $N_{q}$ given by
+
+$$
+N_{q} \sim U(-1/2q, 1/2q), \tag{5}
+$$
+
+where $U(\cdot, \cdot)$ denotes the uniform distribution over the range $[-1/2q, 1/2q]$ and $q$ is the quantization step.
+
+To summarize, our noise formation model consists of four major noise components:
+
+$$
+N = K N_{p} + N_{\text{read}} + N_{r} + N_{q}, \tag{6}
+$$
+
+where $K$ , $N_{p}$ , $N_{\text{read}}$ , $N_{r}$ and $N_{q}$ denote the overall system gain, photon shot noise, read noise, row noise and quantization noise, respectively.
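+
+A minimal NumPy/SciPy sketch of this synthesis is shown below, assuming the parameters have already been calibrated as described in Section 3.2; the function name and argument conventions are illustrative, not the authors' code.
+
+```python
+import numpy as np
+from scipy import stats
+
+def synthesize_noisy_raw(photoelectrons, K, lam, sigma_TL, sigma_r, q=1.0):
+    """Apply the noise model of Eq. (6) to a clean photoelectron image I.
+
+    photoelectrons: HxW expected photoelectron counts; K: overall system gain;
+    lam, sigma_TL: Tukey lambda shape/scale of the read noise; sigma_r: row
+    noise scale; q: quantization step."""
+    H, W = photoelectrons.shape
+    # Photon shot noise: Poisson sampling of the electron count, Eq. (2).
+    electrons = np.random.poisson(photoelectrons).astype(np.float64)
+    signal = K * electrons
+    # Read noise: zero-location Tukey lambda sample, Eq. (4).
+    read = stats.tukeylambda.rvs(lam, loc=0.0, scale=sigma_TL, size=(H, W))
+    # Row noise: one Gaussian offset shared by all pixels in a row.
+    row = np.random.normal(0.0, sigma_r, size=(H, 1)) * np.ones((1, W))
+    # Quantization noise: uniform over [-q/2, q/2], Eq. (5).
+    quant = np.random.uniform(-q / 2, q / 2, size=(H, W))
+    return signal + read + row + quant
+```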
+
+# 3.2. Sensor Noise Evaluation
+
+In this section, we present the noise parameter calibration method attached to our proposed noise formation model. According to Eq. (2), (4) and (6), the parameters needed to specify our noise model are the overall system gain $K$ for photon shot noise $N_{p}$ ; the shape and scale parameters ( $\lambda$ and $\sigma_{TL}$ ) for read noise $N_{\text{read}}$ ; and the scale parameter $\sigma_{r}$ for row noise $N_{r}$ . Given a new camera, our noise calibration method consists of two main procedures: (1) estimating the noise parameters at various ISO settings, and (2) modeling the joint distributions of the noise parameters.
+
+Estimating noise parameters. We record two sequences of raw images to estimate $K$ and other noise parameters: flat-field frames and bias frames.
+
+Flat-field frames are images captured when the sensor is uniformly illuminated. They can be used to derive $K$ according to the photon transfer method [32]. Once we have $K$ , we can first convert a raw digital signal $D$ into the number of photoelectrons $I$ , then impose a Poisson distribution on it, and finally revert it to $D$ ; this simulates realistic photon shot noise.
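+
+One common flavor of this analysis fits the temporal variance of flat-field measurements against their mean: under Eq. (1) with Poisson photoelectrons, the slope of that line is $K$ . The sketch below assumes pairs of flat-field frames taken at several illumination levels so that fixed-pattern variation cancels; the exact recipe of [32] may differ.
+
+```python
+import numpy as np
+
+def estimate_gain_photon_transfer(flat_pairs):
+    """Estimate the overall system gain K from flat-field frame pairs.
+
+    flat_pairs: list of (frame_a, frame_b) raw arrays captured back-to-back at
+    the same illumination; differencing each pair removes fixed-pattern effects.
+    Temporal variance = K * mean + const, so K is the slope of a linear fit."""
+    means, variances = [], []
+    for a, b in flat_pairs:
+        a, b = a.astype(np.float64), b.astype(np.float64)
+        means.append(0.5 * (a.mean() + b.mean()))
+        variances.append(np.var(a - b) / 2.0)        # temporal noise variance
+    slope, _ = np.polyfit(means, variances, deg=1)
+    return slope                                      # estimated K
+```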
+
+Bias frames are images captured in a lightless environment with the shortest exposure time. We captured them in a dark room with the camera lens capped. Bias frames delineate the read noise independent of light, blended from the multiple noise sources mentioned above. The banding pattern noise can be detected by performing a discrete Fourier transform on a bias frame. In Fig. 3, the highlighted
+
+
+
+
+
+
+
+
+Figure 4: Distribution fitting of read noise for SonyA7S2 (top) and NikonD850 (bottom) cameras. Left: probability plot against the Gaussian distribution; Middle: Tukey lambda PPCC plot that determines the optimal $\lambda$ (shown in red line); Right: probability plot against the Tukey Lambda distribution. A higher $R^2$ indicates a better fit. (Best viewed with zoom)
+
+
+
+
+
+vertical pattern in the centralized Fourier spectrum reveals the existence of the row noise component. To analyze the distribution of the row noise, we extract the mean value of each row from the raw data. These values serve as good estimates of the underlying row noise intensities, given the zero-mean nature of the other noise sources. The normality of the row noise data is tested with a Shapiro-Wilk test [55]: the resulting $p$ -value is higher than 0.05, so the null hypothesis that the data are normally distributed cannot be rejected. The related scale parameter $\sigma_r$ can then be estimated by maximizing the log-likelihood.
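+
+A minimal sketch of this row-noise analysis is given below; using the sample standard deviation as the Gaussian maximum-likelihood scale estimate and sub-sampling for the Shapiro-Wilk test are our own illustrative choices.
+
+```python
+import numpy as np
+from scipy import stats
+
+def estimate_row_noise(bias_frame, rng=np.random.default_rng(0)):
+    """Estimate the row-noise scale sigma_r from a single bias frame."""
+    row_means = bias_frame.astype(np.float64).mean(axis=1)
+    row_means -= row_means.mean()                     # remove the global offset
+    # Shapiro-Wilk normality test (sub-sample: the test targets n <= 5000).
+    sample = rng.choice(row_means, size=min(row_means.size, 5000), replace=False)
+    _, p_value = stats.shapiro(sample)                # p > 0.05: normality not rejected
+    sigma_r = row_means.std(ddof=0)                   # Gaussian MLE of the scale
+    return sigma_r, p_value
+```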
+
+After subtracting the estimated row noise from a bias frame, statistical models can be fitted to the empirical distribution of the residual read noise. A preliminary diagnosis (Fig. 4, left) shows that the main body of the data may follow a Gaussian distribution, but it also unveils the long-tailed nature of the underlying distribution. Rather than treating the extreme values as outliers, we observe that an appropriate long-tailed statistical distribution characterizes the noise data better.
+
+We generate a probability plot correlation coefficient (PPCC) plot [20] to identify the member of the Tukey lambda distributional family [34] that best describes the data. The Tukey lambda distribution is a family of distributions that can approximate many common distributions by varying its shape parameter $\lambda$ : it approximates a Gaussian distribution when $\lambda = 0.14$ and yields heavy-tailed distributions when $\lambda < 0.14$ . The PPCC plot (Fig. 4, middle) is used to find a good value of $\lambda$ . The probability plot [60] (Fig. 4, right) is then employed to estimate the scale parameter $\sigma_{TL}$ . The goodness of fit can be evaluated by $R^2$ , the coefficient of determination of the resulting probability plot [47]. The $R^2$ of the fitted Tukey lambda distribution is much higher than that of the Gaussian distribution (e.g., 0.972 vs. 0.886), indicating a much better fit to the empirical data.
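+
+SciPy exposes both tools directly, so the read-noise fit can be sketched as follows; the bracketing interval for the shape search is an illustrative choice.
+
+```python
+import numpy as np
+from scipy import stats
+
+def fit_read_noise(residual):
+    """Fit the Tukey lambda read-noise model to bias-frame residuals
+    (bias frame minus the estimated row noise)."""
+    x = residual.ravel().astype(np.float64)
+    # PPCC: pick the shape parameter maximizing the probability-plot correlation.
+    lam = stats.ppcc_max(x, brack=(-0.5, 0.5), dist='tukeylambda')
+    # Probability plot against TL(lam): the fitted slope is the scale parameter,
+    # and r**2 is the goodness-of-fit measure reported above.
+    _, (scale, _, r) = stats.probplot(x, sparams=(lam,), dist='tukeylambda', fit=True)
+    return lam, scale, r ** 2
+```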
+
+Although we use a unified noise model for different cameras,
+
+
+Figure 5: Simulated and real bias frames of two cameras. A higher $R^2$ indicates a better fit quantitatively. (Best viewed with zoom)
+
+
+Figure 6: Linear least squares fitting from estimated noise parameter samples (blue dots) from a NikonD850 camera. Left and right figures show the joint distributions of $(K, \sigma_{TL})$ and $(K, \sigma_r)$ respectively, where we sample the noise parameters from the blue shadow regions.
+
+
+
+the noise parameters estimated from different cameras are highly diverse. Figure 4 shows that the selected optimal shape parameter $\lambda$ differs from camera to camera, implying distributions with varying degrees of heavy-tailedness across cameras. Visual comparisons of real and simulated bias frames are shown in Fig. 5, which shows that our model is capable of synthesizing realistic noise across various cameras and outperforms the Gaussian noise model both in terms of the goodness-of-fit measure (i.e., $R^2$ ) and the visual similarity to real noise.
+
+Modeling joint parameter distributions. To choose noise parameters for our noise formation model, we infer the joint distributions of $(K,\sigma_{TL})$ and $(K,\sigma_r)$ , from the parameter samples estimated at various ISO settings. As shown in Fig. 6, we use the linear least squares method to find the line of best fit for two sets of log-scaled measurements. Our noise parameter sampling procedure is
+
+$$
+\log(K) \sim U\left(\log(\hat{K}_{min}), \log(\hat{K}_{max})\right),
+$$
+
+$$
+\log(\sigma_{TL}) \mid \log(K) \sim \mathcal{N}\left(a_{TL}\log(K) + b_{TL},\ \hat{\sigma}_{TL}\right), \tag{7}
+$$
+
+$$
+\log(\sigma_{r}) \mid \log(K) \sim \mathcal{N}\left(a_{r}\log(K) + b_{r},\ \hat{\sigma}_{r}\right),
+$$
+
+where $U(\cdot ,\cdot)$ denotes a uniform distribution and $\mathcal{N}(\mu ,\sigma)$ denotes a Gaussian distribution with mean $\mu$ and standard deviation $\sigma$ .
+
+
+Figure 7: Capture setup and example images from our dataset.
+
+
+
+Table 1: Quantitative results on the Sony set of the SID dataset. The noise models are indicated as follows. $G$ : the Gaussian model for read noise $N_{\text{read}}$ ; $G^{*}$ : the Tukey lambda model for $N_{\text{read}}$ ; $P$ : the Gaussian approximation for photon shot noise $N_{p}$ ; $P^{*}$ : the true Poisson model for $N_{p}$ ; $R$ : the Gaussian model for row noise $N_{r}$ ; $U$ : the uniform distribution model for quantization noise $N_{q}$ . The best results are indicated in red and the second best in blue.
+
+| Model | ×100 PSNR / SSIM | ×250 PSNR / SSIM | ×300 PSNR / SSIM |
+| --- | --- | --- | --- |
+| BM3D | 32.92 / 0.758 | 29.56 / 0.686 | 28.88 / 0.674 |
+| A-BM3D | 33.79 / 0.743 | 27.24 / 0.518 | 26.52 / 0.558 |
+| Paired real data | 38.60 / 0.912 | 37.08 / 0.886 | 36.29 / 0.874 |
+| Noise2Noise | 37.42 / 0.853 | 33.48 / 0.725 | 32.37 / 0.686 |
+| G | 36.10 / 0.800 | 31.87 / 0.640 | 30.99 / 0.624 |
+| G+P | 37.08 / 0.839 | 32.85 / 0.697 | 31.87 / 0.665 |
+| G+P* | 38.31 / 0.884 | 34.39 / 0.765 | 33.37 / 0.730 |
+| G*+P* | 39.10 / 0.911 | 36.46 / 0.869 | 35.69 / 0.855 |
+| G*+P*+R | 39.23 / 0.912 | 36.89 / 0.877 | 36.01 / 0.864 |
+| G*+P*+R+U | 39.27 / 0.914 | 37.13 / 0.883 | 36.30 / 0.872 |
+
+$\hat{K}_{min}$ and $\hat{K}_{max}$ are the estimated overall system gains at the minimum and maximum ISO of a camera respectively. $a$ and $b$ indicate the fitted line's slope and intercept respectively. $\hat{\sigma}$ is an unbiased estimator of the standard deviation of the linear regression under the Gaussian error assumption. For the shape parameter $\lambda$ , we simply sample it from the empirical distribution of the estimated parameter samples.
+
+Noisy image synthesis. To synthesize noisy images, clean images are chosen and divided by low light factors sampled uniformly from [100, 300] to simulate the low photon count in the dark. Noise is then generated and added to the scaled clean samples according to Eq. (6) and (7). The created noisy images are finally normalized by multiplying by the same low light factors, exposing bright but excessively noisy content.
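+
+The sampling of Eq. (7) and the low-light scaling can be sketched as below; the slopes, intercepts and residual deviations (a_TL, b_TL, and so on) stand in for the values fitted in Fig. 6, and noise_fn is any synthesis routine of the form sketched for Eq. (6) above. All names are illustrative.
+
+```python
+import numpy as np
+
+def sample_noise_params(log_K_min, log_K_max, a_TL, b_TL, s_TL, a_r, b_r, s_r, lam_samples):
+    """Draw one set of noise parameters following Eq. (7)."""
+    log_K = np.random.uniform(log_K_min, log_K_max)
+    sigma_TL = np.exp(np.random.normal(a_TL * log_K + b_TL, s_TL))
+    sigma_r = np.exp(np.random.normal(a_r * log_K + b_r, s_r))
+    lam = np.random.choice(lam_samples)          # empirical distribution of lambda
+    return np.exp(log_K), lam, sigma_TL, sigma_r
+
+def make_training_pair(clean_raw, noise_fn, fit):
+    """Create one (noisy, clean) training pair as described in the text."""
+    factor = np.random.uniform(100, 300)         # low light factor
+    dark = clean_raw / factor                    # simulate low photon count
+    K, lam, sigma_TL, sigma_r = sample_noise_params(**fit)
+    noisy = noise_fn(dark / K, K, lam, sigma_TL, sigma_r)   # add noise per Eq. (6)
+    return noisy * factor, clean_raw             # re-expose the noisy image
+```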
+
+# 4. Extreme Low-light Denoising (ELD) Dataset
+
+To systematically study the generality of the proposed noise formation model, we collect an extreme low-light denoising (ELD) dataset that covers 10 indoor scenes and 4 camera devices from multiple brands (SonyA7S2, NikonD850, CanonEOS70D, CanonEOS700D). We also record bias and flat field frames for each camera to calibrate our noise model. The data capture setup is shown in Fig. 7. For each scene and each camera, a reference image at the base ISO was first taken, followed by noisy images whose exposure time was deliberately decreased by low light factors $f$ to simulate extreme low light conditions. Another reference image was then taken, akin to the first one, to ensure that no accidental error (e.g. drastic illumination change or accidental camera/scene motion) occurred. We choose three ISO levels (800, 1600, 3200) and two low light factors (100, 200) for the noisy images, resulting in 240 ($3 \times 2 \times 10 \times 4$) raw image pairs in total. The hardest example in our dataset resembles an image captured at a "pseudo" ISO of up to 640000 ($3200 \times 200$).
+
+# 5. Experiments
+
+# 5.1. Experimental setting
+
+Implementation details. A learning-based neural network pipeline is constructed to perform low-light raw denoising. We utilize the same U-Net architecture [51] as [9]. Raw Bayer images from the SID Sony training dataset [9] are used to create training data. We pack the raw Bayer images into four channels (R-G-B-G) and crop non-overlapping $512 \times 512$ regions, augmented by random flipping/rotation. Our approach only uses the clean raw images, as the paired noisy images are generated by the proposed noise model on-the-fly. In addition, we also train networks with other training schemes as references, including training with paired real data (short exposure and its long exposure counterpart) and training with pairs of real noisy images (i.e., Noise2Noise [38]).
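+
+A minimal sketch of the packing and augmentation step follows. The RGGB mosaic layout, the helper names, and whether the crop is taken before or after packing are our assumptions for illustration.
+
+```python
+# Illustrative Bayer packing (R-G-B-G) and random crop/flip/rotation.
+import numpy as np
+
+def pack_bayer(raw):
+    """(H, W) Bayer mosaic -> (4, H/2, W/2) packed channels, assuming RGGB."""
+    return np.stack([raw[0::2, 0::2],   # R
+                     raw[0::2, 1::2],   # G1
+                     raw[1::2, 1::2],   # B
+                     raw[1::2, 0::2]])  # G2
+
+def random_crop_augment(packed, size=512, rng=None):
+    rng = rng or np.random.default_rng()
+    _, h, w = packed.shape
+    y = rng.integers(0, h - size + 1)
+    x = rng.integers(0, w - size + 1)
+    patch = packed[:, y:y + size, x:x + size]
+    if rng.random() < 0.5:
+        patch = patch[:, :, ::-1]                                    # horizontal flip
+    patch = np.rot90(patch, k=int(rng.integers(0, 4)), axes=(1, 2))  # random rotation
+    return np.ascontiguousarray(patch)
+```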
+
+Our implementation $^5$ is based on PyTorch. We train the models for 200 epochs using the $L_{1}$ loss and the Adam optimizer [35] with a batch size of 1. The learning rate is initially set to $10^{-4}$, then halved at epoch 100, and finally reduced to $10^{-5}$ at epoch 180.
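+
+A sketch of this schedule is given below; the network stand-in and loop body are placeholders, and only the learning-rate policy and loss/optimizer choices follow the description above.
+
+```python
+# Illustrative wiring of the training schedule: Adam + L1 loss, lr 1e-4,
+# halved at epoch 100, set to 1e-5 at epoch 180, batch size 1.
+import torch
+
+model = torch.nn.Conv2d(4, 4, 3, padding=1)   # stand-in for the U-Net [51]
+optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
+criterion = torch.nn.L1Loss()
+
+def lr_for_epoch(epoch):
+    if epoch >= 180:
+        return 1e-5
+    if epoch >= 100:
+        return 5e-5
+    return 1e-4
+
+for epoch in range(200):
+    for group in optimizer.param_groups:
+        group['lr'] = lr_for_epoch(epoch)
+    # ... iterate over on-the-fly (synthetic noisy, clean) pairs and take
+    # optimizer steps on criterion(model(noisy), clean) ...
+```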
+
+Competing methods. To understand how accurate our proposed noise model is, we compare our method with:
+
+1. The approaches that use real noisy data for training, i.e. "paired real data" [9] and Noise2Noise [38];
+2. Previous noise models, i.e. homoscedastic (G) and heteroscedastic Gaussian noise models $(\mathrm{G} + \mathrm{P})$ [22, 21];
+3. The representative non-deep methods, i.e. BM3D [15] and Anscombe-BM3D (A-BM3D) [44].
+
+Figure 8: Visual result comparison of different training schemes. Panels: (a) Noise2Noise, (b) Paired real data, (c) Ground Truth, (d) $G$, (e) $G + P$, (f) $G^{*} + P^{*} + R + U$. Our final model $(G^{*} + P^{*} + R + U)$ suppresses the "purple" color shift, residual bandings and chroma artifacts compared to other baselines.
+
+# 5.2. Results on SID Sony dataset
+
+A single-image raw denoising experiment is first conducted on images from the SID Sony validation and test sets. For quantitative evaluation, we focus on indoor scenes illuminated by natural light, to avoid the flickering effect of alternating-current lights [2] $^{8}$. To account for the imprecision of shutter speed and analog gain [2], a single scalar is calculated and multiplied into the reconstructed image to minimize the mean squared error with respect to the ground truth.
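+
+This brightness-alignment scalar has a closed form; a minimal sketch follows (the helper name is ours, and we assume the alignment is a plain least-squares fit as described above).
+
+```python
+# Illustrative single-scalar alignment: choose alpha minimizing
+# ||alpha * pred - gt||^2 before computing PSNR/SSIM.
+import numpy as np
+
+def align_brightness(pred, gt):
+    pred = pred.astype(np.float64)
+    gt = gt.astype(np.float64)
+    alpha = (pred * gt).sum() / np.maximum((pred * pred).sum(), 1e-12)
+    return alpha * pred
+```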
+
+Ablation study on noise models. To verify the efficacy of the proposed noise model, we compare the performance of networks trained with different noise models developed in Section 3.1. All noise parameters are calibrated using the ELD dataset, and sampled with a process following (or similar to) Eq. (7). The results of the other methods described in Section 5.1 are also presented as references.
+
+As shown in Table 1, the domain gap is significant between the homoscedastic/heteroscedastic Gaussian models and the de facto noise model (characterized by the model trained with paired real data). This can be attributed to three factors: (1) the Gaussian approximation of the Poisson distribution is not justified under extreme low illuminance; (2) horizontal bandings are not considered in the noise model; and (3) the long-tail nature of read noise is overlooked. By taking all these factors into account, our final model, i.e. $G^{*} + P^{*} + R + U$, gives rise to a striking result: it is comparable to, or sometimes even better than, the model trained with paired real data. Moreover, training only with real low-light noisy data is not effective enough, due to the clipping effects (which violate the zero-mean noise assumption) and the large variance of corruptions (which leads to a large variance of the Noise2Noise solution) [38]. A visual comparison of our final model and other methods is presented in Fig. 8, which shows the effectiveness of our noise formation model.
+
+Table 2: Quantitative results (PSNR/SSIM) of different methods on our ELD dataset containing four representative cameras. BM3D [15] and A-BM3D [44] are non-deep methods; Paired data [9] and Noise2Noise [38] are trained with real data; G, G+P [22], and Ours are trained with synthetic data.
+
+| Camera | f | Index | BM3D [15] | A-BM3D [44] | Paired data [9] | Noise2Noise [38] | G | G+P [22] | Ours |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| SonyA7S2 | ×100 | PSNR | 37.69 | 37.74 | 44.50 | 41.63 | 42.35 | 42.46 | 45.36 |
+| SonyA7S2 | ×100 | SSIM | 0.803 | 0.776 | 0.971 | 0.856 | 0.893 | 0.889 | 0.972 |
+| SonyA7S2 | ×200 | PSNR | 34.06 | 35.26 | 42.45 | 37.98 | 38.93 | 38.88 | 43.27 |
+| SonyA7S2 | ×200 | SSIM | 0.696 | 0.721 | 0.945 | 0.775 | 0.813 | 0.812 | 0.949 |
+| NikonD850 | ×100 | PSNR | 33.97 | 36.60 | 41.28 | 40.47 | 39.57 | 40.29 | 41.79 |
+| NikonD850 | ×100 | SSIM | 0.725 | 0.779 | 0.938 | 0.848 | 0.823 | 0.845 | 0.912 |
+| NikonD850 | ×200 | PSNR | 31.36 | 32.59 | 39.44 | 37.98 | 36.68 | 37.26 | 39.69 |
+| NikonD850 | ×200 | SSIM | 0.618 | 0.723 | 0.910 | 0.820 | 0.757 | 0.786 | 0.875 |
+| CanonEOS70D | ×100 | PSNR | 30.79 | 31.88 | 40.10 | 38.21 | 40.59 | 40.94 | 40.62 |
+| CanonEOS70D | ×100 | SSIM | 0.589 | 0.692 | 0.931 | 0.826 | 0.925 | 0.934 | 0.937 |
+| CanonEOS70D | ×200 | PSNR | 28.06 | 28.66 | 37.32 | 34.33 | 37.49 | 37.64 | 38.17 |
+| CanonEOS70D | ×200 | SSIM | 0.540 | 0.597 | 0.867 | 0.704 | 0.871 | 0.873 | 0.890 |
+| CanonEOS700D | ×100 | PSNR | 29.70 | 30.13 | 39.05 | 38.29 | 39.77 | 40.08 | 39.84 |
+| CanonEOS700D | ×100 | SSIM | 0.556 | 0.640 | 0.906 | 0.859 | 0.884 | 0.897 | 0.921 |
+| CanonEOS700D | ×200 | PSNR | 27.52 | 27.68 | 36.50 | 34.94 | 37.67 | 37.86 | 37.59 |
+| CanonEOS700D | ×200 | SSIM | 0.537 | 0.579 | 0.850 | 0.766 | 0.870 | 0.879 | 0.879 |
+
+
+Figure 9: Raw image denoising results on both indoor and outdoor scenes from SID Sony dataset. (Best viewed with zoom)
+
+
+Figure 10: Raw image denoising results on our ELD dataset. (Best viewed with zoom)
+
+Though we only quantitatively evaluate the results on indoor scenes of the SID Sony set, our method can be applied to outdoor scenes as well. The visual comparisons of both indoor and outdoor scenes from the SID Sony set are presented in Fig. 9. It can be seen that the random noise can be suppressed by the model learned with heteroscedastic Gaussian noise $(\mathrm{G} + \mathrm{P})$ [22], but the resulting colors are distorted, the banding artifacts become conspicuous, and the image details are barely discernible. By contrast, our model produces visually appealing results as if it had been trained with paired real data.
+
+Figure 11: Denoising results of a low-light image captured by a Huawei Honor 10 camera. Panels: (a) Input, (b) Paired real data, (c) Ours.
+
+Figure 12: (a) Performance boost when training with more synthesized data. (b) Noise parameter sensitivity test.
+
+# 5.3. Results on our ELD dataset
+
+Method comparisons. To see whether our noise model is applicable to other camera devices as well, we assess model performance on our ELD dataset. Table 2 and Fig. 10 summarize the results of all competing methods. It can be seen that the non-deep denoising methods, i.e. BM3D and A-BM3D, fail to address the banding residuals, the color bias and the extreme values present in the noisy input, whereas our model recovers vivid image details which can hardly be perceived in the noisy image by human observers. Moreover, our model trained with synthetic data often even outperforms the model trained with paired real data. We note that this finding conforms with the evaluation of sensor noise presented in Section 3.2, especially in Fig. 4 and 5, where we show the underlying noise distribution varies from camera to camera. Consequently, training with paired real data from the SID Sony camera inevitably overfits to the noise pattern that exists only on the Sony camera, leading to suboptimal results on other types of cameras. In contrast, our model relies on a very flexible noise model and a noise calibration process, allowing it to adapt to the noise characteristics of other (calibrated) camera models as well. Additional evidence can be found in Fig. 11, where we apply these two models to an image captured by a smartphone camera. Our reconstructed image is clearer and cleaner than what is restored by the model trained with paired real data.
+
+Figure 13: Denoising results of a low-light image captured by a NikonD850 camera. Panels: (a) SID only, (b) SID + MIT5K, (c) Ground Truth.
+
+Training with more synthesized data. A useful merit of our approach over conventional training with paired real data is that our model can easily incorporate more real clean samples for training. Fig. 12(a) shows the relative improvements of our model when training with the dataset synthesized from additional clean raw images of the MIT5K dataset [7]. We find that the major improvements, as shown in Fig. 13, are owing to more accurate color and brightness restoration. By training with more raw image samples from diverse cameras, the network learns to infer image appearance more naturally and precisely.
+
+Sensitivity to noise calibration. Another benefit of our approach is that we only need clean samples and a noise calibration process to adapt to a new camera, in contrast to capturing real noisy images accompanied by densely-labeled ground truth. Moreover, the noise calibration process can be simplified once we already have a collection of parameter samples from various cameras. Fig. 12(b) shows that models can reach comparable performance on target cameras without noise calibration, by simply sampling parameters from the other three calibrated cameras instead.
+
+# 6. Conclusion
+
+We have presented a physics-based noise formation model together with a noise parameter calibration method to help resolve the difficulty of extreme low-light denoising. We revisit the electronic imaging pipeline and investigate the influential noise sources overlooked by existing noise models. This enables us to synthesize realistic noisy raw data that better match the underlying physical process of noise formation. We systematically study the efficacy of our noise formation model by introducing a new dataset that covers four representative camera devices. By training only with our synthetic data, we demonstrate that a convolutional neural network can compete with, or sometimes even outperform, the network trained with paired real data.
+
+Acknowledgments We thank Tianli Tao for the great help in collecting the ELD dataset. This work was partially supported by the National Natural Science Foundation of China under Grants No. 61425013 and No. 61672096.
+
+# References
+
+[1] Abdelrahman Abdelhamed, Marcus A. Brubaker, and Michael S. Brown. Noise flow: Noise modeling with conditional normalizing flows. In The IEEE International Conference on Computer Vision (ICCV), 2019.
+[2] Abdelrahman Abdelhamed, Stephen Lin, and Michael S. Brown. A high-quality denoising dataset for smartphone cameras. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
+[3] Richard L. Baer. A model for dark current characterization and simulation. Proceedings of SPIE - The International Society for Optical Engineering, 6068:37-48, 2006.
+[4] Robert A. Boie and Ingemar J. Cox. An analysis of camera noise. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(6):671-674, 1992.
+[5] Tim Brooks, Ben Mildenhall, Tianfan Xue, Jiawen Chen, Dillon Sharlet, and Jonathan T Barron. Unprocessing images for learned raw denoising. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 11036-11045, 2019.
+[6] Antoni Buades, Bartomeu Coll, and Jean-Michel Morel. A non-local algorithm for image denoising. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2005.
+[7] Vladimir Bychkovsky, Sylvain Paris, Eric Chan, and Frédo Durand. Learning photographic global tonal adjustment with a database of input / output image pairs. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
+[8] Chen Chen, Qifeng Chen, Minh N. Do, and Vladlen Koltun. Seeing motion in the dark. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
+[9] Chen Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun. Learning to see in the dark. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
+[10] Chang Chen, Zhiwei Xiong, Xinmei Tian, and Feng Wu. Deep boosting for image denoising. In The European Conference on Computer Vision (ECCV), September 2018.
+[11] Guangyong Chen, Fengyuan Zhu, and Pheng Ann Heng. An efficient statistical method for image noise level estimation. In The IEEE International Conference on Computer Vision (ICCV), December 2015.
+[12] Yunjin Chen, Wei Yu, and Thomas Pock. On learning optimized reaction diffusion processes for effective image restoration. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
+[13] Roger N. Clark. Exposure and digital cameras, part 1: What is ISO on a digital camera? When is a camera ISO-less? ISO myths and digital cameras. http://www.clarkvision.com/articles/iso/, 2012.
+[14] Roberto Costantini and Sabine Susstrunk. Virtual sensor design. Proceedings of SPIE - The International Society for Optical Engineering, 5301:408-419, 2004.
+[15] Kostadin Dabov, Alessandro Foi, Vladimir Katkovnik, and Karen Egiazarian. Image denoising by sparse 3-d transform-domain collaborative filtering. IEEE Transactions on Image Processing, 16(8):2080-2095, 2007.
+
+[16] Weisheng Dong, Xin Li, Lei Zhang, and Guangming Shi. Sparsity-based image denoising via dictionary learning and structural clustering. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 457-464. IEEE, 2011.
+[17] Abbas El Gamal and Helmy Eltoukhy. Cmos image sensors. IEEE Circuits and Devices Magazine, 21(3):6-20, 2005.
+[18] Michael Elad and Michal Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image Processing, 15(12):3736-3745, 2006.
+[19] Joyce Farrell and Manu Parmar. Sensor calibration and simulation. Proceedings of SPIE - The International Society for Optical Engineering, 2008.
+[20] James J. Filliben. The probability plot correlation coefficient test for normality. Technometrics, 17(1):111-117, 1975.
+[21] Alessandro Foi. Clipped noisy images: Heteroskedastic modeling and practical denoising. Signal Processing, 89(12):2609-2629, 2009.
+[22] Alessandro Foi, Mejdi Trimeche, Vladimir Katkovnik, and Karen Egiazarian. Practical poissonian-gaussian noise modeling and fitting for single-image raw-data. IEEE Transactions on Image Processing, 17(10):1737-1754, 2008.
+[23] Eric R. Fossum and Donald B. Hondongwa. A review of the pinned photodiode for ccd and cmos image sensors. IEEE Journal of the Electron Devices Society, 2(3):33-43, 2014.
+[24] Michael Gharbi, Gaurav Chaurasia, Sylvain Paris, and Frédo Durand. Deep joint demosaicking and denoising. ACM Transactions on Graphics, 35(6):191:1-191:12, Nov. 2016.
+[25] Ryan D. Gow, David Renshaw, Keith Findlater, Lindsay Grant, Stuart J. Mcleod, John Hart, and Robert L. Nicol. A comprehensive tool for modeling cmos image-sensor-noise performance. IEEE Transactions on Electron Devices, 54(6):1321-1329, 2007.
+[26] Shuhang Gu, Zhang Lei, Wangmeng Zuo, and Xiangchu Feng. Weighted nuclear norm minimization with application to image denoising. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
+[27] Shi Guo, Zifei Yan, Kai Zhang, Wangmeng Zuo, and Lei Zhang. Toward convolutional blind denoising of real photographs. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
+[28] Samuel W. Hasinoff, Dillon Sharlet, Ryan Geiss, Andrew Adams, and Marc Levoy. Burst photography for high dynamic range and low-light imaging on mobile cameras. ACM Transactions on Graphics, 35(6):192, 2016.
+[29] Glenn E. Healey and Raghava Kondepudy. Radiometric ccd camera calibration and noise estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(3):267-276, 1994.
+[30] Kenji Irie, Alan E. McKinnon, Keith Unsworth, and Ian M. Woodhead. A model for measurement of noise in ccd digital-video cameras. Measurement Science and Technology, 19(4):334-340, 2008.
+[31] Kenji Irie, Alan E. McKinnon, Keith Unsworth, and Ian M. Woodhead. A technique for evaluation of ccd video-camera noise. IEEE Transactions on Circuits and Systems for Video Technology, 18(2):280-284, 2008.
+
+[32] James Janesick, Kenneth Klaasen, and Tom Elliott. Ccd charge collection efficiency and the photon transfer technique. Proceedings of SPIE - The International Society for Optical Engineering, 570:7-19, 1985.
+[33] Haiyang Jiang and Yinqiang Zheng. Learning to see moving objects in the dark. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
+[34] Brian L. Joiner and Joan R. Rosenblatt. Some properties of the range in samples from Tukey's symmetric lambda distributions. Publications of the American Statistical Association, 66(334):394-399, 1971.
+[35] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+[36] Mikhail V Konnik and James S Welsh. High-level numerical simulations of noise in ccd and cmos photosensors: review and tutorial. arXiv preprint arXiv:1412.4031, 2014.
+[37] Alexander Krull, Tim-Oliver Buchholz, and Florian Jug. Noise2void - learning denoising from single noisy images. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
+[38] Jaakko Lehtinen, Jacob Munkberg, Jon Hasselgren, Samuli Laine, Tero Karras, Miika Aittala, and Timo Aila. Noise2noise: Learning image restoration without clean data. In International Conference on Machine Learning (ICML), pages 2971-2980, 2018.
+[39] Cedric Leyris, Alain Hoffmann, Matteo Valenza, J.-C. Vildeuil, and F. Roy. Trap competition inducing r.t.s noise in saturation range in n-mosfets. Proceedings of SPIE - The International Society for Optical Engineering, 5844:41-51, 2005.
+[40] Orly Liba, Kiran Murthy, Yun-Ta Tsai, Tim Brooks, Tianfan Xue, Nikhil Karnad, Qiurui He, Jonathan T Barron, Dillon Sharlet, Ryan Geiss, et al. Handheld mobile photography in very low light. ACM Transactions on Graphics (TOG), 38(6):1-16, 2019.
+[41] Wensheng Lin, Guoming Sung, and Jyunlong Lin. High performance cmos light detector with dark current suppression in variable-temperature systems. Sensors, 17(1):15, 2016.
+[42] Ziwei Liu, Yuan Lu, Xiaou Tang, Matt Uytendaele, and Sun Jian. Fast burst images denoising. ACM Transactions on Graphics, 33(6):1-9, 2014.
+[43] Julien Mairal, Michael Elad, and Guillermo Sapiro. Sparse representation for color image restoration. IEEE Transactions on Image Processing, 17(1):53-69, 2008.
+[44] Markku Makitalo and Alessandro Foi. Optimal inversion of the Anscombe transformation in low-count Poisson image denoising. IEEE Transactions on Image Processing, 20(1):99-109, 2011.
+[45] Xiaojiao Mao, Chunhua Shen, and Yu-Bin Yang. Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections. In Advances in Neural Information Processing Systems (NIPS), pages 2802-2810, 2016.
+[46] Ben Mildenhall, Jonathan T. Barron, Jiawen Chen, Dillon Sharlet, Ren Ng, and Robert Carroll. Burst denoising with kernel prediction networks. In The IEEE Conference
+
+on Computer Vision and Pattern Recognition (CVPR), June 2018.
+[47] Eugene C. Morgan, Matthew Lackner, Richard M. Vogel, and Laurie G. Baise. Probability distributions for offshore wind speeds. Energy Conversion and Management, 52(1):15-26, 2011.
+[48] Stanley Osher, Martin Burger, Donald Goldfarb, Jinjun Xu, and Wotao Yin. An iterative regularization method for total variation-based image restoration. Multiscale Modeling and Simulation, 4(2):460-489, 2005.
+[49] Tobias Plotz and Stefan Roth. Benchmarking denoising algorithms with real photographs. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
+[50] Grand View Research. Image sensors market analysis, 2016. [online]. http://www.grandviewresearch.com/industry-analysis/imagesensors-market, 2016.
+[51] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234-241. Springer, 2015.
+[52] Leonid I. Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1-4):259-268, 1992.
+[53] Uwe Schmidt and Stefan Roth. Shrinkage fields for effective image restoration. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014.
+[54] Eli Schwartz, Raja Giryes, and Alex M. Bronstein. Deepisp: Learning end-to-end image processing pipeline. IEEE Transactions on Image Processing, PP(99):1-1, 2018.
+[55] S. S. Shapiro and R. S. Francia. An approximate analysis of variance test for normality. Biometrika, 67(337):215-216, 1975.
+[56] Ziyi Shen, Wenguan Wang, Xiankai Lu, Jianbing Shen, Haibin Ling, Tingfa Xu, and Ling Shao. Human-aware motion deblurring. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
+[57] Ying Tai, Jian Yang, Xiaoming Liu, and Chunyan Xu. Memnet: A persistent memory network for image restoration. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017.
+[58] Hans Wach and Edward R. Dowski Jr. Noise modeling for design and simulation of computational imaging systems. Proceedings of SPIE - The International Society for Optical Engineering, 5438:159-170, 2004.
+[59] Wei Wang, Xin Chen, Cheng Yang, Xiang Li, Xuemei Hu, and Tao Yue. Enhancing low light videos by exploring high sensitivity camera noise. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
+[60] Martin B. Wilk and Ram Gnanadesikan. Probability plotting methods for the analysis of data. Biometrika, 55(1):1-17, 1968.
+[61] Kai Zhang, Wangmeng Zuo, Yunjin Chen, Deyu Meng, and Lei Zhang. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Transactions on Image Processing, 2017.
\ No newline at end of file
diff --git a/aphysicsbasednoiseformationmodelforextremelowlightrawdenoising/images.zip b/aphysicsbasednoiseformationmodelforextremelowlightrawdenoising/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..7f952a12bda1f46d2c06c1cf4faea0b127ae29b5
--- /dev/null
+++ b/aphysicsbasednoiseformationmodelforextremelowlightrawdenoising/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:38c05281948542a03bde2c6ccc7919da21c65cd7b6e2a30e2e2a3f753ebe237b
+size 853138
diff --git a/aphysicsbasednoiseformationmodelforextremelowlightrawdenoising/layout.json b/aphysicsbasednoiseformationmodelforextremelowlightrawdenoising/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..1877e4ed0718277869a49cef3b0475b963a87ddb
--- /dev/null
+++ b/aphysicsbasednoiseformationmodelforextremelowlightrawdenoising/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b6415a926c97c109ee61e052d6b2636a730519057c1dc41a7ff280208c67a85f
+size 476113
diff --git a/aprogrammaticandsemanticapproachtoexplaininganddebuggingneuralnetworkbasedobjectdetectors/271c4737-2147-4993-a485-f63bee8b2b13_content_list.json b/aprogrammaticandsemanticapproachtoexplaininganddebuggingneuralnetworkbasedobjectdetectors/271c4737-2147-4993-a485-f63bee8b2b13_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..c1b687d6a05599ab6f24ec79049325cab37a4dbd
--- /dev/null
+++ b/aprogrammaticandsemanticapproachtoexplaininganddebuggingneuralnetworkbasedobjectdetectors/271c4737-2147-4993-a485-f63bee8b2b13_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0cb779be2b4c57d32f72dec6cb03bd61f544931983d0501315d80a651728c473
+size 68101
diff --git a/aprogrammaticandsemanticapproachtoexplaininganddebuggingneuralnetworkbasedobjectdetectors/271c4737-2147-4993-a485-f63bee8b2b13_model.json b/aprogrammaticandsemanticapproachtoexplaininganddebuggingneuralnetworkbasedobjectdetectors/271c4737-2147-4993-a485-f63bee8b2b13_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..38822f25980579288cb4cde14827766133c5c51e
--- /dev/null
+++ b/aprogrammaticandsemanticapproachtoexplaininganddebuggingneuralnetworkbasedobjectdetectors/271c4737-2147-4993-a485-f63bee8b2b13_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:64ec7112e35e215e9441419deae24c8ee94172d68563f86a58b46ab0c7dddd9c
+size 81286
diff --git a/aprogrammaticandsemanticapproachtoexplaininganddebuggingneuralnetworkbasedobjectdetectors/271c4737-2147-4993-a485-f63bee8b2b13_origin.pdf b/aprogrammaticandsemanticapproachtoexplaininganddebuggingneuralnetworkbasedobjectdetectors/271c4737-2147-4993-a485-f63bee8b2b13_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..d7e8154348ef4d5931d96e049708922760db0ebb
--- /dev/null
+++ b/aprogrammaticandsemanticapproachtoexplaininganddebuggingneuralnetworkbasedobjectdetectors/271c4737-2147-4993-a485-f63bee8b2b13_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7c1c1b4c4ccad76838ea9e3e2c52792213ab26bd407ebb169f53f7c9da5daae5
+size 402070
diff --git a/aprogrammaticandsemanticapproachtoexplaininganddebuggingneuralnetworkbasedobjectdetectors/full.md b/aprogrammaticandsemanticapproachtoexplaininganddebuggingneuralnetworkbasedobjectdetectors/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..0dcdba3ca96dc6635eaea9676bfc9fa5e26f995f
--- /dev/null
+++ b/aprogrammaticandsemanticapproachtoexplaininganddebuggingneuralnetworkbasedobjectdetectors/full.md
@@ -0,0 +1,258 @@
+# A Programmatic and Semantic Approach to Explaining and Debugging Neural Network Based Object Detectors
+
+Edward Kim *1, Divya Gopinath †2, Corina S. Păsăreanu ‡2, and Sanjit A. Seshia §1
+
+1University of California, Berkeley
+2NASA Ames Research Center
+
+
+Figure 1: Overview of the workflow proposed in this paper. The green and red bounding boxes in the images are ground truth and prediction, respectively.
+
+# Abstract
+
+Even as deep neural networks have become very effective for tasks in vision and perception, it remains difficult to explain and debug their behavior. In this paper, we present a programmatic and semantic approach to explaining, understanding, and debugging the correct and incorrect behaviors of a neural network-based perception system. Our approach is semantic in that it employs a high-level representation of the distribution of environment scenarios that the detector is intended to work on. It is programmatic in that the scenario representation is a program in a domain-specific probabilistic programming language which can be used to generate synthetic data to test a given perception module. Our framework assesses the performance of a perception module to identify correct and incorrect detections, extracts rules from those results that semantically characterize the correct and incorrect scenarios, and then specializes the probabilistic program with those rules in order to more precisely characterize the scenarios in which the perception module operates correctly or not. We demonstrate our results using the SCENIC probabilistic programming language and a neural network-based object detector. Our experiments show that it is possible to automatically generate compact rules that significantly increase the correct detection rate (or conversely the incorrect detection rate) of the network and can thus help with understanding and debugging its behavior.
+
+# 1. Introduction
+
+Models produced by Machine Learning (ML) algorithms, especially deep neural networks (DNNs), have proved very effective at performing various tasks in computer vision and perception. Moreover, ML models are being deployed in domains where trustworthiness is a big concern, such as automotive systems [18], health care [3], and cyber-security [6]. Research in adversarial machine learning [11], verification [8, 24], and testing [28] has shown that DNN-based vision/perception systems are not always robust and can be fooled, sometimes leading to unsafe situations for the overall system (e.g., autonomous vehicle).
+
+Given this lack of robustness and potential for unsafe behavior, it is crucial that we develop methods to better understand, debug, and characterize scenarios where DNN-based perception components fail and where they perform correctly. The emerging literature on explaining and understanding ML models provides one approach to address this concern. However, while there are several techniques proposed to explain the behavior of ML-based perception (e.g. [5, 16, 17, 19, 25]), almost all of them operate on the concrete input feature space of the network. For example, attribution-based methods (e.g. [26, 31, 23]) indicate pixels in an input image that are associated with the output of a DNN on that input. These methods, while very useful, do not directly identify the higher-level "semantic" features of the scene that are associated with that decision; they require a human to make that judgment. Additionally, in many cases it is important to generate "population-level" explanations of correct/incorrect behavior on such higher-level features. For example, it would be useful to identify whether the perception module of an autonomous vehicle generally misses cars of a certain model or color, or in a particular region of a road, and leverage this knowledge to describe a high-level success/failure scenario of a perception module without the bottleneck of human intervention.
+
+In this paper, we present a programmatic and semantic approach to explaining and debugging DNN-based perception modules, with a focus on object detection. In this approach, we begin by formalizing the semantic feature space as a distribution over a set of scenes, where a scene is a configuration of objects in three-dimensional space and semantic features are features of the scene that capture its semantics (e.g., the position and orientation of a car, its model and color, the time of day, weather, etc.). We then represent the semantic feature space using a program in a domain-specific programming language - hence the term programmatic. Given such a representation and generated data corresponding to correct and incorrect behaviors of an object detector, we seek to compute specializations of the program corresponding to those correct/incorrect behaviors. The specialized programs serve as interpretable representations of environment scenes that result in those correct/incorrect behaviors, enabling us to debug failure cases and to understand where the object detector succeeds.
+
+We implement our approach using the SCENIC [2, 9] probabilistic programming language. Probabilistic programming has already been demonstrated to be applicable to various computer vision tasks (see, e.g., [14]). SCENIC is a domain-specific language used to model semantic feature spaces, i.e., distributions over scenes. It has a generative back-end that allows one to automatically produce synthetic data when it is connected to a renderer or simulator, such as the Grand Theft Auto V (GTA-V) video game. It is thus a particularly good fit for our approach. Using SCENIC, we implement the workflow shown in Fig. 1. We begin with a SCENIC program $P$ that captures a distribution that we would like our DNN-based detector to work on. Generating test data from $P$ , we evaluate the performance of the detector, partitioning the test set into correct and incorrect detections. For each partition, we use a rule extraction algorithm to generate rules over the semantic features that are highly correlated with successes/failures of the detector. Rule extraction is performed using decision tree learning and anchors [22]. We further propose a novel white-box approach that analyzes the neuron activation patterns of the neural network to get insights into its inner workings. Using these activation patterns, we show how to derive semantically understandable rules over the high-level input features to characterize scenarios.
+
+The generated rules are then used to refine $P$, yielding programs $P^{+}$ and $P^{-}$ that characterize more precisely the correct and incorrect feature spaces, respectively. Using this framework, we evaluate a DNN-based object detector for autonomous vehicles, using data generated with SCENIC and GTA-V. We demonstrate that our approach is very effective, producing rules and refined programs that significantly increase the correct detection rate (from $65.3\%$ to $89.4\%$) and incorrect detection rate (from $34.7\%$ to $87.2\%$) of the network and can thus help with understanding, debugging and retraining the network.
+
+In summary, we make the following contributions:
+
+- Formulation of a programming language-based semantic framework to characterize success/failure scenarios for an ML-based perception module as programs that help delineate its performance boundaries and generate new data in a principled way;
+- An approach based on anchors and decision tree learning for deriving rules for refining scenario programs;
+- A novel white-box technique that uses activation patterns of convolutional neural networks to enhance scenario feature space refinement;
+- A data generation platform enabling research into debugging and explaining DNN-based perception, and
+- Experimental results demonstrating that our framework is effective for a complex convolutional neural network used in autonomous driving.
+
+| Feature | Range |
+| --- | --- |
+| Weather | Neutral, Clear, Extrasunny, Smog, Clouds, Overcast, Rain, Thunder, Clearing, Xmas, Foggy, Snowlight, Blizzard, Snow |
+| Time | [00:00, 24:00) |
+| Car Model | Blisha, Bus, Ninef, Asea, Baller, Bison, Buffalo, Bobcatxl, Dominator, Granger, Jackal, Oracle, Patriot, Pranger |
+| Car Color | R = [0, 255], G = [0, 255], B = [0, 255] |
+| Car Heading | [0, 360) deg |
+| Car Position | Anywhere on a road on GTA-V's map |
+
+Table 1: Environment features and their ranges in GTA-V
+
+# 2. Background
+
+SCENIC is a probabilistic programming language for scenario specification and scene generation. The language can be used to describe environments for various autonomous systems such as autonomous cars or robots. The environments are scenes, i.e. configurations of objects and agents. SCENIC allows assigning distributions to the features of the scenes, as well as hard and soft mathematical constraints over the features in the scene. Generating scenes from a SCENIC program requires sampling from the distributions defined in the program. SCENIC comes with sampling techniques that take advantage of the structure of the program to perform sampling efficiently, using aggressive pruning of the sampling space. The generated scenes are rendered into images with the help of a simulator. In this paper (and similar to [9]) we use the Grand Theft Auto V (GTA-V) game engine [10] to create realistic images, with a case study that uses SqueezeDet [30], a convolutional neural network for object detection in autonomous cars. Note that the framework we put forth is not specific to this network, and can be used with other object detectors as well.
+
+The semantic features that we use in our case study are described in Table 1. These features are determined and limited by the environment parameters that the simulator allows users to control. If distributions over these environment features are not specified in a SCENIC program, then, by default, they are uniformly randomly selected from ranges shown in Table 1. Note that for a different application domain, we would have a different set of features.
+
+SCENIC is designed to be easily understandable, with simple and intuitive syntax. We illustrate it via an example, shown in Figure 2. The formal syntax and semantics can be found in [9].
+
+As shown in Figure 2, the program describes a rare situation where a car is illegally intruding over a white striped traffic island to either cut in or belatedly avoid entering an elevated highway.
+
+```txt
+1 param time = (6*60, 18*60)
+2 ego = Car at -209.091 @ -686.231
+3
+4 spot = OrientedPoint on visible curb
+5 badAngle = (-90, 90) deg
+6
+7 otherCar = Car at spot,
+8 facing badAngle relative to ego.heading
+9
+10 require otherCar in ego.visibleRegion
+11 require ((angle to otherCar) - ego.heading) < 0
+12 require (distance from ego.position to otherCar) >= 5
+13 require (distance from ego.position to otherCar) <= 20
+```
+
+Figure 2: Example SCENIC program
+
+In line 1, "param time = (6*60, 18*60)" means that the time of day is uniformly randomly sampled from 6:00 to 18:00. In line 2, an ego car is placed at a specific x @ y coordinate on GTA-V's map. In line 4, a spot on a traffic island (referred to in SCENIC as a curb) that is within the region visible from a camera mounted on the ego car is selected; of the entire visible region of the traffic island, the spot is uniformly randomly sampled. In lines 7 and 8, otherCar is placed on the spot facing -90 to 90 degrees off of where the ego car is facing, simulating cases when a car may be protruding into the traffic flow. Lastly, SCENIC allows users to define hard and soft constraints using require statements. In this scenario, all four require statements define hard constraints. In line 10, the entire surface of the otherCar must be within the view region of the ego car, so a scene where only the front half of the otherCar is visible is not allowed. In line 11, the otherCar must be positioned in the right half of the ego car's visible region. In lines 12 and 13, the distance of the otherCar from the ego car must be 5 to 20 meters.
+
+# 3. Related Work
+
+Most techniques that aim to provide explainability and interpretability for deep neural networks (DNNs) in the field of computer vision focus on attributing the network's decisions to portions of the input images ([16, 19, 25, 26, 31]). GradCAM [23] is a popular approach for interpreting CNN models that visualizes how parts of the image affect the neural network's output by looking into class activation maps (CAM). Other techniques focus on understanding the internal layers by visualizing their activation patterns [5, 17]. Our approach, on the other hand, aims to provide characterizations at a higher level than raw image pixels, namely at the level of abstract features defined in a SCENIC program.
+
+Rule extraction techniques either aim to represent the entire functionality of the network as a set of rules, making the representation too complex [32], or require a pre-mined set of rules [15], which would be difficult to obtain for the object detection scenario. Anchors [22], which improves on LIME [21], is closest to our work (and we discuss it in more detail later).
+
+Recent work aims to explain the decisions of DNNs in terms of higher-level concepts. The technique in [13] introduces the idea of concept activation vectors, which provide an interpretation of a neural network's internal state in terms of human-friendly concepts. Feature Guided Exploration [29] aims to analyze the robustness of networks used in computer vision applications by applying perturbations over high-level input features extracted from raw images. They use object detection techniques (such as SIFT, the Scale Invariant Feature Transform) to extract the features from an image. In contrast to these techniques, we directly leverage SCENIC, which defines the high-level features in a way that is already understandable for humans. Existing approaches typically use classification networks whose output directly corresponds to the decision being made and rely on the derivative of the output with respect to the input to calculate importance. In our application, there is no direct correlation between the output of the object detector network and the validity of the bounding boxes. Furthermore, unlike all previous work, we can use the synthesized rules to automatically generate more input instances, by refining the original SCENIC program and then using it to generate data. These instances can be used to test, debug and retrain the network.
+
+# 4. Approach
+
+The key idea of our approach is to leverage the high-level semantic features formally encoded in a SCENIC program to derive rules (sufficient conditions) that explain the behavior of a detection module in terms of those features. Our hypothesis is that since these features describe the important characteristics that should be present in an image and furthermore they are much fewer than the raw, low-level pixels, they should lead to small, compact rules that have a clear meaning for the developer.
+
+The problem that our technique aims to address can be formalized as follows. Suppose a function $\mathbf{g}$ defines a mapping from a feature vector, $[f_1, f_2, \dots, f_n] \in D_1 \times D_2 \times \dots \times D_n$, to a matrix of pixels, $m \in M$, of an image, where each $D_i$ represents the feature domain of feature $f_i$ and $M$ is the domain of $m$. Let the function $h$ denote the given perception module. Finally, let $e$ be an evaluation function which compares the perception module's prediction to the ground truths, and outputs a boolean class (correct or incorrect) based on a certain performance threshold. Given a SCENIC program, according to its feature dependencies and hard and soft constraints, the feature space, $D_1 \times D_2 \times \dots \times D_n$, is defined. The problem is to find the subset feature space, $d_1 \times d_2 \times \dots \times d_n \subseteq D_1 \times D_2 \times \dots \times D_n$, such that when we sample a certain number of feature vectors $[f_1, f_2, \dots, f_n] \in d_1 \times d_2 \times \dots \times d_n$, the probability that $e(h(g([f_1, f_2, \dots, f_n])))$ is equal to a target class (correct or incorrect) is maximized.
+
+A high-level overview of our analysis pipeline is illustrated in Figure 3. We start with a SCENIC program that encodes constraints (and distributions) over high-level semantic features that are relevant for a particular application domain, in our case object detection for autonomous driving. Intuitively, the program (henceforth called scenario) encodes the environments that the user wants to focus on in order to test the module. Based on this scenario, SCENIC generates a set of feature vectors by sampling from the specified distributions. A simulator is then used to generate a set of realistic, synthetic images (i.e. raw low-level pixel values) based on those features.
+
+The images are fed to the object detector. Each image is assigned a binary label, correct or incorrect, based on the performance of the object detector on the image (see Section 4.1). The labels obtained for the images are mapped back to the feature vectors that led to the generation of the respective images. The result is a labeled data set that maps each high-level feature vector to the respective label.
+
+We then use off-the-shelf methods to extract rules from this data set. The rule extraction is described in more detail in Sec. 4.2. The result is a set of rules encoding the conditions on high-level features that lead to likely correct or incorrect detection. The obtained rules can be used to refine the SCENIC program, which in turn can be sampled to generate more images that can be used to test, debug or retrain the detection module. This iterative process can continue until one obtains refined rules, and SCENIC programs, of desired precision. In the following we provide more details about our approach.
+
+# 4.1. Labelling
+
+Obtaining the label (correct/incorrect) for an image is performed using the F1 score metric (the harmonic mean of precision and recall), which is commonly used in the statistical analysis of binary classification. The F1 score is computed in the following way. For each image, the true positive count (TP) is the number of ground truth bounding boxes correctly predicted by the detection module; correctly predicted here means the intersection-over-union (IoU) with the prediction is greater than 0.5. The false positive count (FP) is the number of predicted bounding boxes that do not correctly predict any ground truth; this includes duplicate predictions on one ground truth box. The false negative count (FN) is the number of ground truth boxes that are not detected correctly. We computed the F1 score for each image, and if it is greater than a threshold, we assigned the correct label; if not, incorrect. The threshold used in our experiments was 0.8.
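+
+A minimal sketch of this labelling step is given below. The greedy matching strategy and helper names are our own illustration; the paper only specifies the IoU threshold of 0.5 and the F1 threshold of 0.8.
+
+```python
+# Illustrative per-image labelling: IoU matching, then F1 against a threshold.
+def iou(a, b):
+    """IoU of two boxes given as (x1, y1, x2, y2)."""
+    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
+    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
+    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
+    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
+    return inter / (area(a) + area(b) - inter + 1e-12)
+
+def label_image(pred_boxes, gt_boxes, iou_thr=0.5, f1_thr=0.8):
+    matched, tp = set(), 0
+    for p in pred_boxes:
+        hit = next((i for i, g in enumerate(gt_boxes)
+                    if i not in matched and iou(p, g) > iou_thr), None)
+        if hit is not None:              # unmatched/duplicate predictions count as FP
+            matched.add(hit)
+            tp += 1
+    fp = len(pred_boxes) - tp
+    fn = len(gt_boxes) - tp
+    precision = tp / max(tp + fp, 1)
+    recall = tp / max(tp + fn, 1)
+    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
+    return "correct" if f1 > f1_thr else "incorrect"
+```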
+
+# 4.2. Rule Extraction
+
+Methods: We experimented with two methods, decision tree (DT) learning for classification [20] and anchors [22], to extract rules capturing the subspace of the feature space defined in the given SCENIC program.
+
+
+Figure 3: Analysis Pipeline
+
+Decision tree learning is commonly used to extract rules explaining the global behavior of a complex system while the anchors method is a state-of-the-art technique for extracting explanation rules that are locally faithful.
+
+Decision trees encode decisions (and their consequences) in a tree-like structure. They are highly interpretable, provided that the trees are short. One can easily extract rules for explaining different classes by simply following the paths through the trees and conjoining the decisions encoded in the tree nodes. We used the rpart [27] package in the R software, which implements the corresponding algorithm in [4], with default parameters.
+
+The anchor method is a state-of-the-art technique that aims to explain the behavior of complex ML models with high-precision rules called anchors, representing local, sufficient conditions for predictions. The system can efficiently compute these explanations for any black-box model with high-probability guarantees. We used the code from [1] with the default parameters. Applying the method to the object detector directly would result in anchors describing conditions on low-level pixel values, which would be difficult to interpret and use. Instead, what we want is to extract anchors in terms of high-level features. While one could use the simulator together with the object detector as the black-box model, this would be very inefficient. Instead we built a surrogate model mapping high-level SCENIC features to output labels; we used a random forest learning algorithm for this purpose, as in the code of [1]. This surrogate model was then passed to the anchor method to extract the rules.
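+
+The sketch below illustrates the two rule-extraction routes on the labelled feature vectors using scikit-learn stand-ins; the paper itself uses rpart in R for the decision tree and the anchors implementation from [1], so the classes, hyperparameters, and helper name here are our own assumptions.
+
+```python
+# Illustrative rule extraction over (feature vector -> correct/incorrect) data.
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.tree import DecisionTreeClassifier, export_text
+
+def extract_rules(X_train, y_train, feature_names):
+    # Global rules: a short decision tree whose root-to-leaf paths are read
+    # off as conjunctions of feature conditions.
+    tree = DecisionTreeClassifier(max_depth=4, class_weight="balanced")
+    tree.fit(X_train, y_train)
+    print(export_text(tree, feature_names=feature_names))
+
+    # Surrogate model over the high-level features: its predict function is
+    # what an anchor-style explainer would query, avoiding repeated calls to
+    # the simulator + detector.
+    surrogate = RandomForestClassifier(n_estimators=100)
+    surrogate.fit(X_train, y_train)
+    return tree, surrogate
+```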
+
+Blackbox vs Whitebox Analysis: So far we explained how we can obtain rules when treating the detection module as a black box. We also investigated a white-box analysis, to determine whether we can exploit the information about the internal workings of the module to improve the rule inference. The white-box analysis is one of our novel contributions in this paper. We leverage recent work [12] which aims to infer likely properties of neural networks. The properties are in terms of on/off activation patterns (at different internal layers) that lead to the same predictions. These patterns are computed by applying decision-tree learning over the activations observed during the execution of the network on the training or testing set.
+
+We analyzed the architecture of the SqueezeDet network and we determined that there are three maxpool layers which provide a natural decomposition of the network. Furthermore they have relatively low dimensionality making them a good target for property inference.
+
+We consider activation patterns over maxpool neurons based on whether the neuron output is greater than or equal to zero. A decision tree can then be learned over these patterns to fit the prediction labels. For our experiments we selected patterns from maxpool layer 5, which turned out to be highly correlated with images that lead to correct/incorrect predictions.
+
+Then, we augmented the assigned correct and incorrect labels with the corresponding decision patterns in the following way. For example, using a decision pattern for correct labels (i.e. the decision pattern most correlated with images having the correct label), we created two sub-classes for the correct class. By feeding only images with the correct label to the perception module, the images satisfying the decision pattern are re-labelled as "correct-decision-pattern," and the rest as "correct-unlabelled." Likewise, the incorrect class is augmented using the decision pattern that is most correlated with images having the incorrect label. Our intuition is that a decision pattern captures more focused properties (or rules) among images belonging to a target class. Hence, we hypothesize that this label augmentation helps the anchor and decision tree methods better identify rules.
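+
+The sketch below shows how activations could be captured and one class split by a decision pattern. The hooked layer, the representation of a pattern as a full binary vector (in [12] a pattern is a partial conjunction mined with decision-tree learning), and the helper names are our own simplifications, not SqueezeDet-specific code.
+
+```python
+# Illustrative activation-pattern capture and label augmentation.
+import torch
+
+def binary_patterns(model, layer, images):
+    """Return a {0, 1} tensor of per-image on/off patterns at `layer`."""
+    captured = []
+    handle = layer.register_forward_hook(
+        lambda module, inp, out: captured.append((out > 0).flatten(1).float()))
+    with torch.no_grad():
+        model(images)
+    handle.remove()
+    return captured[0]
+
+def augment_labels(patterns, labels, class_pattern, target="correct"):
+    """Split the `target` class by whether each image satisfies `class_pattern`."""
+    new_labels = []
+    for pat, lab in zip(patterns, labels):
+        if lab != target:
+            new_labels.append(lab)
+        elif bool((pat == class_pattern).all()):
+            new_labels.append(f"{target}-decision-pattern")
+        else:
+            new_labels.append(f"{target}-unlabelled")
+    return new_labels
+```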
+
+Rule Selection Criteria: Once we extracted rules with either DT or anchors, we selected the best rule using the following criteria. To best achieve our objective, we first chose the rule with the highest precision on a held-out test set of feature vectors. If there is more than one rule with the same highest precision, we chose the rule with the highest coverage (i.e. the number of feature vectors satisfying the rule). Finally, if more than one rule is still left, we broke the tie by choosing the most compact rule, i.e. the one with the least number of features. The last two criteria are established to select the most general high-precision rule.
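+
+The selection reduces to a lexicographic maximization, sketched below; the rule records and the numbers in the example are illustrative, loosely modelled on the Scenario 1 rule from Table 2.
+
+```python
+# Illustrative rule selection: precision, then coverage, then compactness.
+def select_best_rule(rules):
+    """Each rule: {'rule': str, 'precision': float, 'coverage': int, 'num_features': int}."""
+    return max(rules, key=lambda r: (r['precision'], r['coverage'], -r['num_features']))
+
+candidates = [
+    {'rule': 'x >= -198.1', 'precision': 0.894, 'coverage': 410, 'num_features': 1},
+    {'rule': 'x >= -198.1 and hour >= 7.5', 'precision': 0.894, 'coverage': 350, 'num_features': 2},
+]
+print(select_best_rule(candidates)['rule'])   # the more general single-feature rule wins
+```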
+
+| Scenario # (Baseline→Rule Precision) | Rules |
+| --- | --- |
+| Scenario 1 (65.3% → 89.4%) | x coordinate ≥ -198.1 |
+| Scenario 2 (72.3% → 82.3%) | hour ≥ 7.5 ∧ weather = all except neutral ∧ car0 distance from ego ≥ 11.3m ∧ car0 model = {Asea, Bison, Bista, Buffalo, Dominator, Jackal, Ninef, Oracle} |
+| Scenario 3 (61.7% → 79.4%) | car0 red color ≥ 74.5 ∧ car0 heading ≥ 220.3 deg |
+| Scenario 4 (89.6% → 96.2%) | car0 model = {Asea, Baller, Bista, Buffalo, Dominator, Jackal, Ninef, Oracle} |
+
+Table 2: Rules for correct behaviors of the detection module with the highest precision from Table 6
+
+| Scenario # (Baseline→Rule Precision) | Rules |
+| --- | --- |
+| Scenario 1 (34.7% → 87.2%) | x coordinate ≤ -200.76 ∧ distance ≤ 8.84 ∧ car model = PRANGER |
+| Scenario 2 (27.7% → 44.9%) | hour ≥ 7.5 ∧ weather = all except Neutral ∧ car0 distance from ego < 11.3 |
+| Scenario 3 (38.3% → 83.4%) | weather = neutral ∧ agent0 heading ≤ 218.08 deg ∧ hour ≤ 8.00 ∧ car2 red color ≤ 95.00 |
+| Scenario 4 (10.4% → 57.3%) | car0 model = PATRIOT ∧ car1 model = NINEF ∧ car2 model = BALLER ∧ 92.25 < car0 green color ≤ 158 ∧ car0 blue color ≤ 84.25 ∧ 178.00 < car2 red color ≤ 224 |
+
+Table 3: Rules for incorrect behaviors of detection module with the highest precision from Table 7
+
+# 5. Experiments
+
+In this section we report on our experiments with the proposed approach on the object detector. We investigate whether we can synthesize rules that are effective in generating test inputs that increase the probability of correct/incorrect detection, thus explaining the correct/incorrect behavior of the analyzed module. We evaluate the proposed techniques along the following dimensions: decision tree (DT) vs anchor, black-box (BB) vs white-box (WB).
+
+# 5.1. Scenarios
+
+We experimented with our approach on four different scenarios. Images generated from these scenarios are shown in Figure 4. Scenario 1 (Figure 2) describes the situation where a car is illegally intruding over a white striped traffic island at the entrance of an elevated highway. Scenario 2 describes a two-car scenario where one car occludes the ego car's view of another car at a T-junction intersection on an elevated road. Scenario 3 describes scenes where other cars are merging into the ego car's lane; the location in this scenario is carefully chosen such that the sun rises in front of the ego car, causing glare. Scenario 4 describes a set of scenes where the nearest car is abruptly switching into the ego car's lane while another car in the opposite traffic direction lane is slightly intruding over the middle yellow line into the ego car's lane.
+
+# 5.2. Setup
+
+The object detector was trained on a separate set of 10,000 GTA images with one to four cars in various locations of the map producing different background scenes. The GTA-V simulator provided images, ground truth boxes, and values of the environment features.
+
+For each scenario, we generated 950 new images as a train set and another 950 new images as a test set. We denote the labels corresponding to the maxpool layer 5 decision pattern as p5c (correct) and p5ic (incorrect), and the remaining as correct_unlabelled and incorrect_unlabelled, respectively. We augmented the feature vector with some extra features that are not part of the feature values provided by the simulator but could help with extracting meaningful rules. For example, in Scenario 1, the distance from ego to otherCar is not part of the feature values provided by GTA-V. However, it can be computed with the Euclidean distance metric using the $(\mathrm{x},\mathrm{y})$ location coordinates of ego and otherCar. The difference in heading angle between ego and otherCar is also added as an extra feature to represent the "badAngle" variable in the program.
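+
+As a small illustration of these derived features (the helper name, angle convention, and example coordinates are our own assumptions):
+
+```python
+# Illustrative extra features: ego-to-otherCar distance and heading difference.
+import math
+
+def extra_features(ego_xy, other_xy, ego_heading_deg, other_heading_deg):
+    distance = math.dist(ego_xy, other_xy)
+    # Signed heading difference wrapped to [-180, 180), analogous to "badAngle".
+    heading_diff = (other_heading_deg - ego_heading_deg + 180.0) % 360.0 - 180.0
+    return distance, heading_diff
+
+print(extra_features((-209.1, -686.2), (-201.3, -679.0), 90.0, 20.0))
+```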
+
+From the train set, we extracted rules to predict each label based on the feature vectors. These rules were evaluated on the test set based on the precision, recall, and F1 score metrics. For DT learning we adjusted the label weights to account for the uneven ratio among labels, for both black-box and white-box labels. For the Anchors method, we applied it on each instance of the training set until we had covered a maximum of 50 instances for every label (correct and incorrect for Black Box; p5c, p5ic, correct_unlabelled, and incorrect_unlabelled for White Box). The best anchor rule for every label is selected based on the rule selection criteria mentioned in Section 4.2.
+
+
+Figure 4: From the top-left one-car image, each image corresponds to scenarios 1, 2, 3, and 4 in a clockwise manner. The scenario number is the number of cars in the scene.
+
+# 5.3. Results
+
+Tables 2 and 3 show the best rules (w.r.t. precision) extracted with our proposed framework, along with the baseline correct/incorrect detection rate for each given scenario and the detection rate for the generated rules. The results indicate that our framework can indeed generate rules that significantly increase the correct and incorrect detection rates of the module. Furthermore, the generated rules are compact and easily interpretable.
+
+For example, the rule for correct behavior for Scenario 1 is "x coordinate $\geqslant -198.1$ ." In GTA-V, at ego car's specific location, the condition on x coordinate was equivalent to the otherCar's distance from ego being greater than $11\mathrm{m}$ . On the other hand, the rule for incorrect behavior for Scenario 1 requires the otherCar to be within $8.84\mathrm{m}$ and its car model to be PRANGER. These rules, counter-intuitively, indicate that the object detector fails when the otherCar is close by, and performs well when located further away.
+
+Results for Correct Behavior: Tables 5 and 6 summarize the results for the rules explaining correct behavior. The results indicate that there are clear signals in the heavily abstracted feature space and they can be used effectively for scenario characterization via the generated high-precision rules.
+
+The results also indicate that DT learning extracts rules with better F1 scores for all scenarios as compared to anchors. This could be attributed to the difference in the nature of the techniques. The anchor approach aims to construct rules that have high precision in the locality of a given instance. Decision-trees on the other hand aim to construct global rules that discriminate one label from another. Given that a large proportion of instances were detected correctly by the analyzed module, the decision tree was able to build rules with high precision and coverage for correct behavior.
+
+| Scenario # | 1 | 2 | 3 | 4 |
+| --- | --- | --- | --- | --- |
+| Correct DP | 0.626 | 0.651 | 0.514 | 0.824 |
+| Incorrect DP | 0.276 | 0.175 | 0.234 | 0.212 |
+
+Table 4: Support for correct and incorrect decision patterns
+
+| Scenario # | 1 | 2 | 3 | 4 |
+| --- | --- | --- | --- | --- |
+| BB Decision Tree | 0.723 | 0.342 | 0.631 | 0.622 |
+| WB Decision Tree | 0.727 | 0.696 | 0.601 | 0.778 |
+| BB Anchor | 0.361 | 0.457 | 0.302 | 0.438 |
+| WB Anchor | 0.520 | 0.188 | 0.149 | 0.438 |
+
+Table 5: F1 score of correct rules on the test set
+
+| Scenario # | 1 | 2 | 3 | 4 |
+| --- | --- | --- | --- | --- |
+| Original Program | 0.653 | 0.723 | 0.617 | 0.896 |
+| BB Decision Tree | 0.843 | 0.778 | 0.787 | 0.950 |
+| WB Decision Tree | 0.826 | 0.823 | 0.788 | 0.962 |
+| BB Anchor | 0.727 | 0.811 | 0.652 | 0.928 |
+| WB Anchor | 0.894 | 0.817 | 0.794 | 0.928 |
+
+Table 6: Precision of correct rules on the test set
+
+| Scenario # | 1 | 2 | 3 | 4 |
+| --- | --- | --- | --- | --- |
+| Original Program | 0.347 | 0.277 | 0.383 | 0.104 |
+| BB Decision Tree | 0.703 | 0.418 | 0.506 | 0.375 |
+| WB Decision Tree | 0.73 | 0.449 | 0.494 | 0.099 |
+| BB Anchor | 0.872 | 0.357 | 0.834 | 0.573 |
+| WB Anchor | 0.674 | 0.422 | 0.365 | 0.176 |
+
+Table 7: Precision of incorrect rules on 500 new images generated from each refined SCENIC program
+
+The results also highlight the benefit of using white-box information to extract rules for correct behavior.
+
+Table 4 shows that the support for the correct decision patterns is significant (greater than $65\%$ on average across all scenarios). The support is defined as the correlation of the decision pattern with a specific label. Using this information to augment the labels of the dataset helped improve the precision and F1 score of the rules (w.r.t. SCENIC features) for both DT learning and the anchor method.
+
+Results for Incorrect Behavior: Tables 3 and 7 summarize the results for the rules explaining incorrect behavior. Rule derivation for incorrect behavior is more challenging than for correct behavior due to the low percentage of inputs that lead to incorrect detections for a well-trained network.
+
+In fact, the F1 scores (computed on the test set) for rules predicting incorrect behavior were too low due to very low (in some cases 0) recall values.
+
+Figure 5: The cumulative ratio of incorrectly detected images generated from refined SCENIC programs (using incorrect rules) stabilizes over 500 samples. Each color has four graphs representing four different rule extraction methods.
+
+To properly validate the efficacy of the generated rules, we refined the SCENIC programs by encoding the rules as constraints and generated 500 new images. We then evaluated our module's performance on these new datasets. Figure 5 justifies our choice of 500 as the number of new images generated for evaluation.
+
+All four methods contributed to more precisely identifying the subsets of the feature space in which the module performs worse. Specifically, Table 7 illustrates that the black-box anchor method increased the generation rate of incorrectly detected images by $48\%$ on average in Scenarios 1, 3, and 4 compared to the baseline. This is a significant increase in the ratio of incorrectly labelled images generated from the program, providing evidence that the refined programs more precisely characterize the failure scenarios.
+
+We also note that the anchor method outperforms DT learning. This is expected, because the anchor method extracts rules that are highly precise within a local feature space. The exception is Scenario 2. We conjecture that the anchor method did not perform better than DT learning here due to uncontrollable non-determinism in GTA-V, which generated pedestrians in close vicinity to the ego car's camera even though the SCENIC program did not contain any pedestrians. GTA-V non-deterministically instantiated these pedestrians, and the perception module often incorrectly predicted them as cars. This is an issue with GTA-V, which was not originally built for data generation purposes: it does not allow users to control or eliminate these pedestrians, and it does not provide pedestrian-related features during the data collection process. In future work, we plan to incorporate simulators that allow deterministic control (such as CARLA [7]) for further experimentation.
+
+Unlike the results for correct behavior, the white-box approach tends to perform worse than the black-box approach when focusing on incorrect behavior. This outcome can be attributed to the very low support of the decision patterns computed for incorrect behavior, with a maximum of $27.6\%$ among the four scenarios, as shown in Table 4.
+
+However, we do observe that the white-box approach, for both DT learning and anchors, does in general increase the ratio of incorrectly detected images compared to the original programs, as shown in Table 7.
+
+Limitations: Our technique relies on abstracting a high-resolution image (for instance $1920 \times 1200$ in our example) to a vector of a small set of semantic features. In our experiments we were able to derive compact rules with high precision and coverage. However, in application domains other than autonomous driving, the abstraction may lead to an under-determined representation that does not yield any noticeable patterns. Therefore, selecting an appropriate subset of essential features for a given application domain (facilitated by a suitable definition in SCENIC) is essential. We also note that all the SCENIC programs we experimented with contained only uniform distributions, and that for each of the scenario programs we analyzed, we fixed the location and heading angle of the camera. In these restricted settings, we were able to extract rules that distinguished correctly detected scenes from incorrect ones.
+
+# 6. Conclusion and Future Work
+
+We presented a semantic and programmatic framework for characterizing success and failure scenarios of a given perception module in the form of programs. The technique leverages the SCENIC language to derive rules in terms of high-level, meaningful features and generates new inputs that conform to these rules. For future work, we plan to apply this approach to other domains by looking into more general input distributions and transformations.
+
+# 7. Acknowledgment
+
+We thank Daniel Fremont for his help on our use of SCENIC, Jinkyu Kim and Taesung Park for their thorough comments, and Xiangyu Yue for interfacing GTA-V with SCENIC. This project is supported by an NSF graduate fellowship (Grant#: DGE1752814), NSF grants CNS-1545126 (VeHICaL), CNS-1739816, and CCF-1837132, by the DARPA Assured Autonomy program, by Berkeley Deep Drive, and by Toyota under the iCyPhy center. This work was partially done under the NASA Ames Internship Program, 2019.
+
+# References
+
+[1] Anchor Method Repository. https://github.com/marcotcr/anchor. 5
+[2] The SCENIC Probabilistic Programming Language. https://github.com/BerkeleyLearnVerify/Scenic. 2
+[3] Babak Alipanahi, Andrew Delong, Matthew T Weirauch, and Brendan J Frey. Predicting the sequence specificities of DNA-and RNA-binding proteins by deep learning. Nature biotechnology, 2015. 2
+[4] L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth and Brooks, Monterey, CA, 1984. 5
+[5] Shan Carter, Zan Armstrong, Ludwig Schubert, Ian Johnson, and Chris Olah. Activation atlas. Distill, 2019. https://distill.pub/2019/activation-atlas. 2, 3
+[6] George E Dahl, Jack W Stokes, Li Deng, and Dong Yu. Large-scale malware classification using random projections and neural networks. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3422-3426. IEEE, 2013. 2
+[7] Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. CARLA: An open urban driving simulator. In Proceedings of the 1st Annual Conference on Robot Learning, pages 1-16, 2017. 8
+[8] Tommaso Dreossi, Alexandre Donze, and Sanjit A. Seshia. Compositional falsification of cyber-physical systems with machine learning components. In Proceedings of the NASA Formal Methods Conference (NFM), pages 357–372, May 2017. 2
+[9] Daniel J. Fremont, Tommaso Dreossi, Shromona Ghosh, Xiangyu Yue, Alberto L. Sangiovanni-Vincentelli, and Sanjit A. Seshia. Scenic: A language for scenario specification and scene generation. In Proceedings of the 40th annual ACM SIGPLAN conference on Programming Language Design and Implementation (PLDI), June 2019. 2, 3
+[10] Rockstar Games. Grand theft auto v. Windows PC version, 2015. 3
+[11] Ian Goodfellow, Patrick McDaniel, and Nicolas Papernot. Making machine learning robust against adversarial inputs. Communications of the ACM, 61(7):56-66, 2018. 2
+[12] Divya Gopinath, Hayes Converse, Corina S. Pasareanu, and Ankur Taly. Property inference for neural networks. CoRR, abs/1904.13215, 2019. 5
+[13] Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, Fernanda B. Viégas, and Rory Sayres. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav). In Jennifer G. Dy and Andreas Krause, editors, ICML, volume 80 of Proceedings of Machine Learning Research, pages 2673-2682. PMLR, 2018. 4
+[14] Tejas D Kulkarni, Pushmeet Kohli, Joshua B Tenenbaum, and Vikash Mansinghka. Picture: A probabilistic programming language for scene perception. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4390-4399, 2015. 2
+[15] Himabindu Lakkaraju, Stephen H. Bach, and Jure Leskovec. Interpretable decision sets: A joint framework for description and prediction. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pages 1675-1684, 2016. 3
+[16] Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 4765-4774. Curran Associates, Inc., 2017. 2, 3
+[17] Alexander Mordvintsev, Michael Tyka, and Christopher Olah. DeepDream. https://github.com/google/deepdream. 2, 3
+[18] NVIDIA. Nvidia tegra drive px: Self-driving car computer, 2015. 2
+[19] Zhongang Qi, Saeed Khorram, and Fuxin Li. Visualizing deep networks by optimizing with integrated gradients. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2019. 2, 3
+[20] J. Ross Quinlan. Induction of decision trees. Machine learning, 1(1):81-106, 1986. 4
+[21] Marco Túlio Ribeiro, Sameer Singh, and Carlos Guestrin. "why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pages 1135-1144, 2016. 3
+[22] Marco Túlio Ribeiro, Sameer Singh, and Carlos Guestrin. Anchors: High-precision model-agnostic explanations. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, Louisiana, USA, February 2-7, 2018, pages 1527-1535, 2018. 2, 3, 4
+[23] Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 618-626, 2017. 2, 3
+[24] Sanjit A. Seshia, Dorsa Sadigh, and S. Shankar Sastry. Towards Verified Artificial Intelligence. ArXiv e-prints, July 2016. 2
+[25] Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda B. Viégas, and Martin Wattenberg. Smoothgrad: removing noise by adding noise. CoRR, abs/1706.03825, 2017. 2, 3
+[26] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. CoRR, abs/1703.01365, 2017. 2, 3
+[27] Terry Therneau, Beth Atkinson, and Brian Ripley. rpart: Recursive partitioning and regression trees. 5
+
+[28] Yuchi Tian, Kexin Pei, Suman Jana, and Baishakhi Ray. Deeptest: automated testing of deep-neural-network-driven autonomous cars. In Proceedings of the 40th International Conference on Software Engineering, ICSE 2018, Gothenburg, Sweden, May 27 - June 03, 2018, pages 303-314, 2018. 2
+[29] Matthew Wicker, Xiaowei Huang, and Marta Kwiatkowska. Feature-guided black-box safety testing of deep neural networks. In Tools and Algorithms for the Construction and Analysis of Systems - 24th International Conference, TACAS 2018, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2018, Thessaloniki, Greece, April 14-20, 2018, Proceedings, Part I, pages 408-426, 2018. 4
+[30] Bichen Wu, Forrest N. Iandola, Peter H. Jin, and Kurt Keutzer. Squeezedet: Unified, small, low power fully convolutional neural networks for real-time object detection for autonomous driving. In Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops, pages 446-454, 2017. 3
+[31] Bolei Zhou, David Bau, Aude Oliva, and Antonio Torralba. Interpreting deep visual representations via network dissection. CoRR, abs/1711.05611, 2017. 2, 3
+[32] Jan Zilke, Eneldo Mencia, and Frederik Janssen. DeepRED - rule extraction from deep neural networks. pages 457-473, October 2016. 3
\ No newline at end of file
diff --git a/aprogrammaticandsemanticapproachtoexplaininganddebuggingneuralnetworkbasedobjectdetectors/images.zip b/aprogrammaticandsemanticapproachtoexplaininganddebuggingneuralnetworkbasedobjectdetectors/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..e8c4348b653889e3d3b7e70ed915e9323cb66154
--- /dev/null
+++ b/aprogrammaticandsemanticapproachtoexplaininganddebuggingneuralnetworkbasedobjectdetectors/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0f47cb1a935da204545f037b50c2fc01f092d3b6090edec13c906d8ca96ce7e2
+size 434058
diff --git a/aprogrammaticandsemanticapproachtoexplaininganddebuggingneuralnetworkbasedobjectdetectors/layout.json b/aprogrammaticandsemanticapproachtoexplaininganddebuggingneuralnetworkbasedobjectdetectors/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..e6157804956aa930bff96b2f20d32580a6689be4
--- /dev/null
+++ b/aprogrammaticandsemanticapproachtoexplaininganddebuggingneuralnetworkbasedobjectdetectors/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:be4e479094634b2541d5f80cea3e962949811086d794cd968789af080cf9fbf1
+size 276736
diff --git a/aquantumcomputationalapproachtocorrespondenceproblemsonpointsets/e37e626c-6f80-4800-8f05-af8bf0c0e0b7_content_list.json b/aquantumcomputationalapproachtocorrespondenceproblemsonpointsets/e37e626c-6f80-4800-8f05-af8bf0c0e0b7_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..6d5a36eb894b87294c939a6504cf22c25a56af0a
--- /dev/null
+++ b/aquantumcomputationalapproachtocorrespondenceproblemsonpointsets/e37e626c-6f80-4800-8f05-af8bf0c0e0b7_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d335cdd921b73a6700f6d1f87a7aa7b8c8360456ca8bfd2668b29745eda29d78
+size 93153
diff --git a/aquantumcomputationalapproachtocorrespondenceproblemsonpointsets/e37e626c-6f80-4800-8f05-af8bf0c0e0b7_model.json b/aquantumcomputationalapproachtocorrespondenceproblemsonpointsets/e37e626c-6f80-4800-8f05-af8bf0c0e0b7_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..5e48f7c64673f3e19df8c3dd2715b911e1be424d
--- /dev/null
+++ b/aquantumcomputationalapproachtocorrespondenceproblemsonpointsets/e37e626c-6f80-4800-8f05-af8bf0c0e0b7_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2731ca1e221b65ea4f526543d9d979d0c91f58c2dc06586fc42a20fdc14610a8
+size 116090
diff --git a/aquantumcomputationalapproachtocorrespondenceproblemsonpointsets/e37e626c-6f80-4800-8f05-af8bf0c0e0b7_origin.pdf b/aquantumcomputationalapproachtocorrespondenceproblemsonpointsets/e37e626c-6f80-4800-8f05-af8bf0c0e0b7_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..974e4b3355d59a93f02ccc3ab9c0ae56f1489593
--- /dev/null
+++ b/aquantumcomputationalapproachtocorrespondenceproblemsonpointsets/e37e626c-6f80-4800-8f05-af8bf0c0e0b7_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e1ddc77c6bede1bd57712137cffda2caac2b1761374fa5976b71de97966a26a6
+size 1234881
diff --git a/aquantumcomputationalapproachtocorrespondenceproblemsonpointsets/full.md b/aquantumcomputationalapproachtocorrespondenceproblemsonpointsets/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..7b41c6fbfe5fc3746fe527081a100ef255c40715
--- /dev/null
+++ b/aquantumcomputationalapproachtocorrespondenceproblemsonpointsets/full.md
@@ -0,0 +1,406 @@
+# A Quantum Computational Approach to Correspondence Problems on Point Sets
+
+Vladislav Golyanik
+
+Christian Theobalt
+
+Max Planck Institute for Informatics, Saarland Informatics Campus
+
+# Abstract
+
+Modern adiabatic quantum computers (AQC) are already used to solve difficult combinatorial optimisation problems in various domains of science. Currently, only a few applications of AQC in computer vision have been demonstrated. We review AQC and derive a new algorithm for correspondence problems on point sets suitable for execution on AQC. Our algorithm has subquadratic computational complexity for the state preparation. Examples of successful transformation estimation and point set alignment by simulated sampling are shown in the systematic experimental evaluation. Finally, we analyse the differences in the solutions and the corresponding energy values.
+
+# 1. Introduction
+
+Since their proposal in the early eighties [8, 42, 27], quantum computers have attracted much attention of physicists and computer scientists. Impressive advances both in quantum computing hardware and algorithms have been demonstrated over the last thirty years [39, 30, 60, 57, 41, 19, 25, 48, 64, 47]. Quantum computers are not universally faster than conventional machines, but they can natively execute algorithms relying on quantum parallelism, i.e., the ability to perform operations on exponentially many superimposed memory states simultaneously [58].
+
+To harness these advantages, carefully designed algorithms are required. Nowadays, the motivation to exploit quantum effects in computing is also driven by the classical computing paradigm approaching its limits, since quantum effects are becoming non-negligible when manufacturing and using conventional CPUs. As a result, alternative paradigms such as massively parallel computing devices have been brought into being.
+
+Figure 1: Different 2D point sets — fish [46], qubit, kanji and composer — aligned with our QA approach. For every pair of point sets, the initial misalignment is shown on the left, and the registration is shown on the right. QA is the first transformation estimation and point set alignment method which can be executed on adiabatic quantum computers.
+
+While universal gate quantum computer technology has not yet reached maturity, modern adiabatic quantum annealers (AQA) are already capable of solving difficult real-world combinatorial optimisation problems [15, 14, 23, 48]. The primary difference between universal gate quantum computing and AQA is that the latter can address objectives formulated as quadratic unconstrained binary optimisation problems (QUBOP) defined as
+
+$$
+\arg \min _ {\mathbf {q} \in \mathbf {B} ^ {n}} \mathbf {q} ^ {\top} \mathbf {P} \mathbf {q}, \tag {1}
+$$
+
+where $\mathbf{q}$ is a set of $n$ binary variables, and $\mathbf{P}$ is a symmetric matrix of weights between the variables. The operational principle of AQA is grounded on the adiabatic theorem of quantum mechanics [17] which states that
+
+if a quantum-mechanical system is in the ground state of a time-dependent Hamiltonian and the parameters of this Hamiltonian are changing gradually enough, the system will remain in the ground state during the evolution (2) (see Table 1 for quantum notions).
+
+In their seminal paper, Farhi et al. [26] have shown that the adiabatic principle (2) can be used for solving $\mathcal{NP}$-complete optimisation problems and laid the foundation for adiabatic quantum computing. Several years later, Aharonov et al. [2] theoretically showed the equivalence between classical quantum computing and quantum annealing models. As of 2019-2020, general-purpose quantum computers accessible for research purposes and applications contain up to 20 qubits [19]. In contrast, the latest quantum annealers support up to $2^{10}$ qubits [21]$^1$. Nevertheless, due to design and practical restrictions, quantum algorithms for the gates model such as Shor's prime number factorisation [60] or Grover's search algorithms [30] cannot be implemented on current quantum annealers.
+
+Motivation and Contributions. Considering recent successful applications of AQA in several fields of computational science [41, 48, 64], we are motivated to investigate how useful AQA can be for computer vision and which problems can potentially be solved on the new hardware. The vast majority of available materials about quantum annealers are either oriented towards physicists or lack technical details and clarity. Our goal is to fill this gap: to introduce the reader to modern AQA and provide the notions and background needed to understand, analyse, simulate and design quantum algorithms for computer vision which can potentially run on modern AQA, as well as to interpret the results.
+
+We consider correspondence problems on point sets which have various applications in computer vision. They consist in finding an optimal rigid transformation between inputs [33, 13, 46, 65]. While transformation estimation assumes known matches, point set alignment is more general and targets, in addition, the recovery of correspondences. We consider two inputs, i.e., a fixed reference point set and a template undergoing a rigid transformation. Thus, our goal is to design a quantum approach for point set alignment which can potentially run on AQA and show that it offers advantages compared to the classical counterparts.
+
+Therefore, we adapt the recent progress in rigid point set alignment and formulate a globally multiply-linked energy functional which does not require any intermediate correspondence updates [29]. In the gravitational approach (GA) [29], the optimal alignment is achieved when the gravitational potential energy (GPE) of the system with two interacting particle swarms is locally minimal. Proceeding from GA, we build the weight matrix $\mathbf{P}$ for the associated QUBOP (1), which remains valid throughout the optimisation. Along with that, we target a method which is implementable on classical hardware and can solve real-world problems, cf. Fig. 1. To summarise, the main contributions of this paper are:
+
+- A self-contained and detailed introduction into modern quantum annealers for computer vision problems, including notions from quantum physics and computing (Sec. 2), modern adiabatic quantum annealers (Sec. 3) including D-WAVE (Sec. 3.2), and previous and related works from quantum computing (Sec. 4).
+- The first quantum approach (QA) to transformation estimation (Sec. 5) and point set alignment (Sec. 6) which can run on the upcoming quantum annealers (Sec. 6.2).
+- Experimental analysis of the proposed method in a simulated environment on several datasets (Sec. 7).
+
+| quantum notion | classical counterpart |
+| --- | --- |
+| qubit (states $\vert 0\rangle$ and $\vert 1\rangle$) | bit (states 0 and 1) |
+| (time-dependent) Hamiltonian | energy functional |
+| eigenstate | some energy state |
+| ground state | globally optimal energy state |
+| quantum system evolution | optimisation process |
+| quantum annealing [26] | simulated annealing [38] |
+
+Table 1: Quantum notions and their counterparts in computer vision.
+
+# 2. Preliminaries, Definitions and Notations
+
+In this section, we introduce the reader into the basics of quantum computing. See Table 1 for a lookup of notions specific to AQA which have counterparts and interpretation in the classical optimisation theory for computer vision.
+
+Qubit. Quantum computing encompasses tasks which can be performed on quantum-mechanical systems [52]. Quantum superposition and entanglement are two forms of parallelism evidenced in quantum computers. A qubit is a quantum-mechanical equivalent of a classical bit. A qubit $|\phi \rangle$ — written in the Dirac notation — can be in the state $|0\rangle$, $|1\rangle$ or an arbitrary superposition of both states denoted by $|\phi \rangle = \alpha |0\rangle + \beta |1\rangle$, where $\alpha$ and $\beta$ are the (generally, complex) probability amplitudes satisfying $|\alpha|^2 + |\beta|^2 = 1$. In quantum computing, the state $\frac{|0\rangle + |1\rangle}{\sqrt{2}}$, denoted by $|+ \rangle$, is often used for initialisation of a qubit register. The state of a qubit remains hidden during the entire computation and is revealed only when measured. If qubits are entangled, measuring one of them influences the measurement outcome of the other one [58]. During the measurement, the qubit's state irreversibly collapses to one of the basis states $|0\rangle$ or $|1\rangle$. Efficient physical realisation of a qubit demands very low temperatures. Otherwise, thermal fluctuations will destroy it and lead to arbitrary changes of the measured qubit state.
+
+One possible physical implementation of a qubit is an electron which possesses a spin, i.e., its intrinsic magnetic moment [52, 62]. The spin of an electron can be manipulated and brought to the state spin down, spin up, or a superposition of both. A concrete experimentally realised scheme that uses this property is represented by an atom of phosphorus $^{31}\mathrm{P}$ embedded into a $^{28}\mathrm{Si}$ silicon lattice attached to a transistor [36, 45, 66]. The nucleus of $^{31}\mathrm{P}$ has a positive charge compensated by electrons. The bundle of electrons in the transistor is filled up to the energetic level between the energy of spin-down and spin-up state of $^{31}\mathrm{P}$ . To change a state of a $^{31}\mathrm{P}-^{28}\mathrm{Si}$ qubit, a microwave pulse of the frequency — which is equal to the resonance frequency of the atom — is applied to it. The new state $|\phi\rangle$ depends on the duration of the exposure. A transistor is used to measure a state of the $^{31}\mathrm{P}-^{28}\mathrm{Si}$ qubit. If the extra electron of $^{31}\mathrm{P}$ tunnels into the electron bundle, a positive charge is measured in the transistor indicating the spin-up state (e.g., $|1\rangle$ ).
+
+Fig. 2-(a) visualises a qubit with a so-called Bloch sphere. Every qubit can be both in a superposition and entangled with other qubits. Thus, quantum superposition is the property that calculations are performed on all possible inputs simultaneously, which can result in exponential parallelism in the number of qubits. When entangled, the states of qubits cannot be described independently from each other.
+
+Figure 2: (a): Schematic depiction of a qubit with a Bloch sphere. Spin-up or $|1\rangle$ is located on the north pole, and spin-down or $|0\rangle$ is located on the south pole. The state $\frac{|0\rangle + |1\rangle}{\sqrt{2}}$ with equal probability amplitudes to measure $|1\rangle$ and $|0\rangle$ is equidistant from both poles. A point on the surface of the Bloch sphere corresponds to a valid pure state $|\phi\rangle = \alpha |0\rangle + \beta |1\rangle$. (b): Schematic visualisation of adiabatic quantum annealing (AQA). At the beginning, all qubits are initialised in the state $|+\rangle$. After the annealing is finished, the qubit states are measured and returned. After the measurement, the states of the variables are classical.
+
+Schrödinger Equation. In the universal or gates model, changes are expressed by a series of unitary transformations applied to qubits. This is a useful practical simplification, while the evolution of every quantum-mechanical system can be described more precisely by continuous Schrödinger equation, which in common notation reads:
+
+$$
+i \frac {d}{d t} | \phi (t) \rangle = \hat {\mathcal {H}} (t) | \phi (t) \rangle . \tag {3}
+$$
+
+For simplicity, we denote here by $|\phi(t) \rangle$ the state of $n$ qubits at time $t$, and $\hat{\mathcal{H}}(t)$ is a Hamiltonian which is, in this case, a $2^n \times 2^n$ Hermitian matrix. Thus, a discrete-time evolution of the quantum system is given by a unitary transformation.
+
+Hamiltonian. Hamiltonian $\hat{\mathcal{H}}$ is an energy operator of a system of $n$ qubits. It defines the energy spectrum of a system or, in our case, the space of all possible solutions. The ground state of the system is its lowest energy eigenstate. Finding a ground state of a Hamiltonian is equivalent to finding an optimal solution to the problem. The expectation value of Hamiltonian $\langle \hat{\mathcal{H}}\rangle$ provides an instantaneous energy of a given qubit configuration. In correspondence problems, $\langle \hat{\mathcal{H}}\rangle$ is a quantitative characteristic of point set alignment. We denote by $\Delta (\hat{\mathcal{H}})$ the spectral gap of $\hat{\mathcal{H}}$, i.e., the difference between the energies of the ground state and the second lowest eigenstate. The spectral gap influences the annealing rate and is an important consideration for algorithm design and evaluation in quantum annealing.
+
+Pauli Matrices. An arbitrary Hamiltonian of a $n$ -qubit-system can be expressed by a linear combination of tensor products of Pauli matrices denoted by:
+
+$$
+\boldsymbol {\sigma} ^ {x} = \left( \begin{array}{c c} 0 & 1 \\ 1 & 0 \end{array} \right), \boldsymbol {\sigma} ^ {y} = \left( \begin{array}{c c} 0 & - i \\ i & 0 \end{array} \right), \boldsymbol {\sigma} ^ {z} = \left( \begin{array}{c c} 1 & 0 \\ 0 & - 1 \end{array} \right). \tag {4}
+$$
+
+The Pauli matrices are $2 \times 2$ Hermitian and unitary. Together with the identity $\sigma^0 = \mathbf{I}_{2 \times 2}$ , they form a basis for $\mathbb{C}^{2 \times 2}$ . $\sigma^x$ flips the probabilities to measure $|0\rangle$ and $|1\rangle$ , whereas $\sigma^z |0\rangle = |0\rangle$ , and $\sigma^z |1\rangle = -|1\rangle$ .
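+The stated properties are easy to verify numerically; the following short Python snippet (illustrative only) checks them for the matrices in (4).
+
+```python
+import numpy as np
+
+# Pauli matrices from Eq. (4).
+sx = np.array([[0, 1], [1, 0]], dtype=complex)
+sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
+sz = np.array([[1, 0], [0, -1]], dtype=complex)
+
+ket0 = np.array([1, 0], dtype=complex)
+ket1 = np.array([0, 1], dtype=complex)
+
+# sigma^x flips |0> and |1>; sigma^z leaves |0> unchanged and negates |1>.
+assert np.allclose(sx @ ket0, ket1) and np.allclose(sx @ ket1, ket0)
+assert np.allclose(sz @ ket0, ket0) and np.allclose(sz @ ket1, -ket1)
+
+# All Pauli matrices are Hermitian and unitary.
+for s in (sx, sy, sz):
+    assert np.allclose(s, s.conj().T) and np.allclose(s @ s.conj().T, np.eye(2))
+print("Pauli identities verified")
+```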
+
+Pseudo-Boolean Functions. A pseudo-boolean function is a real vector-valued function of $n$ boolean variables denoted by $\mathbf{x}$ of the form $\mathbf{F}(\mathbf{x}) : \mathbf{B}^n \to \mathbb{R}^M$ , where $M$ is the number of real-valued outputs.
+
+Quantum Annealing. Quantum annealing is a heuristic combinatorial optimisation method for finding global optima which relies on quantum effects (superposition, entanglement and tunnelling) [11, 35]. In particular, it is used to find a ground state of an Ising Hamiltonian [34, 56], which encodes the target computational problem, see Fig. 2-(b).
+
+Quantum annealing is the quantum counterpart of simulated annealing [44, 38]. Starting from the superposition state $[| + \rangle ]^{\otimes n}$ (this is a shorthand for $n$ qubits in the state $| + \rangle$ , cf. (9)), the system evolves according to (3) under an external time-dependent magnetic field (a transverse field). When the external field is faded away, the system reaches the ground state of an Ising model [34]. According to (2), if an external magnetic field is changing gradually enough, the system remains near the ground state with high probability throughout the optimisation. Quantum annealing systems taking advantage of (2) are called adiabatic quantum computers (AQC). QUBOP is the most common problem form which can be mapped to current realisations of AQC.
+
+# 3. Modern Adiabatic Quantum Computation
+
+Adiabatic quantum computation is a form of quantum annealing which relies on the adiabatic theorem of quantum mechanics (2) [17]. Starting from a ground state of an initial default Hamiltonian $\hat{\mathcal{H}}_I$ , an AQC system adiabatically evolves into the ground state of a problem Hamiltonian $\hat{\mathcal{H}}_P$ which encodes a solution to a problem [26]. In the case of adiabatic quantum annealing (AQA), the problem Hamiltonian $\hat{\mathcal{H}}_P$ is given by the Ising model [34]:
+
+$$
+\hat {\mathcal {H}} _ {P} = \sum_ {j \in V} h _ {j} \sigma_ {j} ^ {z} + \sum_ {(j, k) \in E _ {P}} J _ {j, k} \sigma_ {j} ^ {z} \otimes \sigma_ {k} ^ {z}, \tag {5}
+$$
+
+with the Kronecker product $\otimes$, $h_j$ denoting exterior local magnetic fields and $J_{j,k}$ standing for the pairwise connections between the particles. $V$ is the set of particles, and $E_P$ is the set of edges (inter-particle links) of the graph. Eq. (5) is written in a notation common in physics. The first term of (5) on the right side in explicit notation reads
+
+$$
+\hat {\mathcal {H}} _ {P} ^ {j \in V} = \left(\left[ \begin{array}{c} \boldsymbol {\sigma} ^ {z} \otimes \mathbf {I} \otimes \dots \otimes \mathbf {I} \\ \mathbf {I} \otimes \boldsymbol {\sigma} ^ {z} \otimes \dots \otimes \mathbf {I} \\ \vdots \\ \mathbf {I} \otimes \mathbf {I} \otimes \dots \otimes \boldsymbol {\sigma} ^ {z} \end{array} \right] ^ {\mathrm {T}}\right) _ {2 ^ {n} \times n 2 ^ {n}} \left[ \begin{array}{c} h _ {1} \mathbf {I} _ {2 ^ {n} \times 2 ^ {n}} \\ h _ {2} \mathbf {I} _ {2 ^ {n} \times 2 ^ {n}} \\ \vdots \\ h _ {n} \mathbf {I} _ {2 ^ {n} \times 2 ^ {n}} \end{array} \right] _ {n 2 ^ {n} \times 2 ^ {n}}, \tag {6}
+$$
+
+where $\mathbf{I}$ without a subscript is a $2\times 2$ identity matrix. The second term $\hat{\mathcal{H}}_P^{(j,k)\in E_P}$ of (5) can be expressed in a similar manner, involving pairs of $\sigma^z$ in the tensor product depending on the connectivity of the lattice.
+
+Theoretically, each particle can interact with any other particle from the whole set of qubits. In practice, the couplings are restricted to local neighbourhoods (see Sec. 3.2). Thus, (5) describes a system of $N$ interacting spin- $\frac{1}{2}$ particles under the influence of distributed magnetic forces, and in the expanded form, $\hat{\mathcal{H}}_P$ is a $2^n \times 2^n$ matrix. Finding a ground state of an Ising model is an $\mathcal{NP}$ -hard problem [7]. In the ground state, the spin configuration of all particles which minimises Ising energy $\mathbf{E}_{\mathrm{Ising}}$ is given by:
+
+$$
+\mathbf {E} _ {\text {I s i n g}} = \sum_ {i} h _ {i} s _ {i} + \sum_ {i, j} J _ {i, j} s _ {i} s _ {j}, \tag {7}
+$$
+
+where $s_i \in \{1, -1\}$ denotes two possible spin measurement outcomes of a spin- $1/2$ particle.
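+To make the mapping between (1) and (7) concrete, the following minimal Python sketch (illustrative only, with a randomly generated weight matrix) converts a QUBOP into Ising form via the substitution $s_i = 2q_i - 1$ and verifies numerically that both energies agree up to a constant offset.
+
+```python
+import itertools
+import numpy as np
+
+def qubo_to_ising(P):
+    """Map q^T P q (q in {0,1}^n, P symmetric) to Ising fields/couplings via q = (1 + s)/2."""
+    h = 0.5 * P.sum(axis=1)                      # local fields h_i
+    J = 0.25 * (P - np.diag(np.diag(P)))         # couplings J_ij for i != j
+    offset = 0.5 * np.trace(P) + 0.25 * (P.sum() - np.trace(P))
+    return h, J, offset
+
+def qubo_energy(P, q):
+    return q @ P @ q
+
+def ising_energy(h, J, s):
+    return h @ s + s @ J @ s
+
+# Check the equivalence exhaustively on a small random instance.
+rng = np.random.default_rng(0)
+A = rng.normal(size=(4, 4))
+P = (A + A.T) / 2
+h, J, offset = qubo_to_ising(P)
+for bits in itertools.product([0, 1], repeat=4):
+    q = np.array(bits, dtype=float)
+    s = 2 * q - 1
+    assert np.isclose(qubo_energy(P, q), ising_energy(h, J, s) + offset)
+print("QUBO and Ising energies agree up to a constant offset")
+```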
+
+# 3.1. Quantum System Evolution
+
+Solving $\mathcal{NP}$-hard problems such as QUBOP on a classical computer requires exponential time in the size of the input. The main idea of AQC is that a QUBOP (1) can be mapped to the Ising model (5) and optimised by allowing the system to evolve according to the adiabatic principle (2). Once annealing is finished, the qubit register represents the solution to the programmed problem with a high probability [26] (cf. the supplemental material of this paper on the annealing rate criterion). The Hamiltonian of the system is always initialised as
+
+$$
+\hat {\mathcal {H}} _ {I} = - \sum_ {j \in V} B _ {x} \sigma_ {j} ^ {x}, \tag {8}
+$$
+
+where $B_{x} > 0$ stands for a magnetic field pointing in the $x$ direction. The ground state of (8) is a symmetrised superposition with equal normalised probability amplitudes for the states $|0\rangle$ and $|1\rangle$ for all qubits, i.e.,
+
+$$
+[ | + \rangle ] ^ {\otimes n} = \left(\frac {| 0 \rangle + | 1 \rangle}{\sqrt {2}}\right) ^ {\otimes n} = \frac {(| 0 \rangle + | 1 \rangle) ^ {\otimes n}}{\sqrt {2 ^ {n}}}. \tag {9}
+$$
+
+This initial state (9) is comparatively easy to construct by radiating a microwave pulse of the same duration and wavelength onto all qubits. In mathematical terms, (9) is obtained by applying a Hadamard transform $H = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$ to each of the $n$ qubits in the state $|0\rangle$.
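+A small numerical illustration of (9), assuming nothing beyond standard linear algebra: applying the Hadamard transform to each of $n$ qubits in $|0\rangle$ yields a state vector whose $2^n$ measurement probabilities are all equal.
+
+```python
+import numpy as np
+from functools import reduce
+
+H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard transform
+ket0 = np.array([1.0, 0.0])
+
+n = 3
+# Apply H to each of the n |0> qubits and take the tensor product: [|+>]^(tensor n).
+plus_n = reduce(np.kron, [H @ ket0] * n)
+
+# Every basis state is measured with equal probability 1/2^n.
+print(plus_n ** 2)                               # -> [0.125, ..., 0.125]
+assert np.allclose(plus_n ** 2, 1 / 2 ** n)
+```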
+
+The lowest energy $E_{\mathrm{GS}} = -nB_x$ of (8) is achieved when all qubits in the system point in the anti-parallel direction of the magnetic field, so that $\sigma_j^x |s_j\rangle = |s_j\rangle$ . During AQC, the initial Hamiltonian $\hat{\mathcal{H}}_I$ is evolving into the problem Hamiltonian $\hat{\mathcal{H}}_P$ , with a high probability of reaching the ground state of $\hat{\mathcal{H}}_P$ [26]. The interpolation between the Hamiltonians can be written as
+
+$$
+\hat {\boldsymbol {\mathcal {H}}} = [ 1 - s ] \hat {\boldsymbol {\mathcal {H}}} _ {I} + s \hat {\boldsymbol {\mathcal {H}}} _ {P}, \tag {10}
+$$
+
+with $s \in [0;1]$ being the time in relative units from the start of annealing at $s = 0$ until reaching the ground state of $\hat{\mathcal{H}}_P$ at $s = 1$ . The problem Hamiltonian and the final state of the system depend on the objective function $f(x)$ or the matrix of weights between the qubits $\mathbf{P}$ in (1). After the annealing is accomplished, the state of each qubit is measured, and the result corresponds to the solution of the programmed problem with a high probability. At this stage, the states of all binary variables are classical, and not quantum anymore.
+
+To remain in the ground state during the system evolution, the annealing rate has to be carefully chosen. The condition of adiabaticity (2) is derived from the time-dependent perturbation theory of quantum systems. It is achieved when the average energy pumped into the system per time interval $T$ is smaller than the minimal energy difference between the ground state and the first excited state. This statement was quantified in [4], which generalises the original adiabatic theorem [17] to periodic driving; see our supplemental material for further details.
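+As an illustration of how the spectral gap behaves along the interpolation (10), the following Python sketch (with an arbitrary toy problem Hamiltonian, not one from our experiments) diagonalises $\hat{\mathcal{H}}(s)$ on a grid of $s$ values and reports the minimum gap $\Delta(\hat{\mathcal{H}}(s))$.
+
+```python
+import numpy as np
+
+sx = np.array([[0, 1], [1, 0]], dtype=float)
+sz = np.array([[1, 0], [0, -1]], dtype=float)
+I2 = np.eye(2)
+
+def op_on_qubit(op, j, n):
+    out = np.array([[1.0]])
+    for k in range(n):
+        out = np.kron(out, op if k == j else I2)
+    return out
+
+n, Bx = 3, 1.0
+# Initial Hamiltonian (8) and a small toy problem Hamiltonian of the form (5).
+H_I = -Bx * sum(op_on_qubit(sx, j, n) for j in range(n))
+H_P = sum(h * op_on_qubit(sz, j, n) for j, h in enumerate([0.5, -1.0, 0.2])) \
+      - 0.8 * op_on_qubit(sz, 0, n) @ op_on_qubit(sz, 1, n)
+
+# Spectral gap of H(s) = (1 - s) H_I + s H_P along the anneal, cf. Eq. (10).
+gaps = []
+for s in np.linspace(0.0, 1.0, 51):
+    evals = np.linalg.eigvalsh((1 - s) * H_I + s * H_P)   # sorted ascending
+    gaps.append(evals[1] - evals[0])
+print("minimum spectral gap:", min(gaps))
+```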
+
+# 3.2. Quantum Annealer D-WAVE
+
+D-WAVE relies on the adiabatic criterion in its specified form and currently supports up to $\approx 2000$ qubits [22]. It reflects the state of the art in physical realisation of quantum processors. It is relatively inexpensive to bring the system into the superposition state, and every computation on D-WAVE starts with the problem-independent $\hat{\mathcal{H}}_I$ (8). Qubits can interact with a restricted number of other qubits, and it is possible to define qubit equality and entanglement constraints [22]. Possible interactions can be seen from the chimera graph which schematically depicts the layout of the quantum processor [22, 16]. At the same time, the physically realised connectivity can model QUBOP with arbitrary connectivities through an internal conversion [16]. The drawback is that in the worst case, a quadratic increase in the number of variables is required. A fully connected graph with $N$ qubits would require $N^2$ qubits for processing. Some QUBOP cannot be mapped to the chimera graph, and some problems can be mapped in multiple ways [54].
+
+# 4. Previous and Related Work
+
+Universal Quantum Computers. The paradigm of the universal quantum computer originates in the attempts to gain control over individual quantum systems in the early eighties [61, 52]. Later, extending the control to multiple quantum systems attracted the interest of physicists, promising to facilitate discoveries in quantum physics [52]. By that time, it was noticed that simulating a quantum-mechanical system on a classical computer requires exponential time in the number of simulated elements [42, 27]. "Can you do it with a new kind of computer - a quantum computer?" [27] is a famous quote by R. Feynman which triggered research on quantum computers in the subsequent years. The so-called no-cloning theorem [55, 63] belongs to the first discoveries that strongly influenced quantum information theory and quantum computation. Nowadays, quantum computers can be used not only to fulfil their primary goal, i.e., to simulate quantum-mechanical systems for different branches of science, but also to solve other computational problems — such as balanced function decision problems [24], quantum Turing machines for complexity analysis [12], prime number factorisation and discrete logarithms [60], database search [30], graph matching [3], data classification [57] and principal component analysis [41] — faster than on classical machines. The related field of quantum communication and quantum key distribution has already found broad practical use nowadays [10, 9, 59].
+
+Classical Methods using Quantum Analogies. Quantum-mechanical effects inspired multiple techniques for conventional computers including variants of genetic and evolutionary algorithms [31, 32], non-rigid mesh analysis [5] and image segmentation [6], among others.
+
+Quantum Annealers in Computer Vision. Only a few theoretical results and applications of AQC to image processing, machine learning and computer vision are known. Neven et al. [50] have shown how image recognition can be formulated as QUBOP. Image classification on $12 \times 12$ images with AQC was addressed in [51]. The approach of O'Malley et al. can learn facial features and reproduce facial image collections [53]. Boyda et al. [18] propose an AQC method to detect areas with trees from aerial images. Several methods target classification, dimensionality reduction and training of deep neural networks [49, 37, 1]. Not all theoretical findings of these works can be tested on real AQC hardware yet. Nonetheless, we believe that it is essential to explore the theory and highlight the advantages of the upcoming hardware for computer vision tasks.
+
+# 5. Quantum Transformation Estimation
+
+In this section, we introduce our QA to transformation estimation. The inputs are a reference point set $[\mathbf{x}_n] \in \mathbf{X} \in \mathbb{R}^{D \times N}$ and a template point set $[\mathbf{y}_n] \in \mathbf{Y} \in \mathbb{R}^{D \times N}$ , $n \in \{1, \dots, N\}$ . $N$ is the number of points in both point sets and $D$ is the dimensionality of the points. We assume that translation is resolved, the centroids of the point sets coincide, and points are in correspondence.
+
+# 5.1. Transformation Estimation in 2D
+
+To obtain an advantage in solving transformation estimation on a quantum annealer, we should avoid uniform sampling of rotations applied to $\mathbf{Y}$. Elements of the rotation group are non-commutative, and it is not possible to formulate multiplication of basis rotations as QUBOP. Instead, we propose to represent the transformation matrix as a linear combination of basis elements. Recall that for any rotation matrix, $\mathbf{R}^{-1} = \mathbf{R}^{\mathsf{T}}$. A rotation in 2D consists of four elements, i.e., $\mathbf{R} = \begin{pmatrix} r_{1,1} & r_{1,2} \\ r_{2,1} & r_{2,2} \end{pmatrix}$. Additively, we can create a basis for all possible values of $\mathbf{R}$ and encode the influence of the additive elements as binary variables. Consider instead the power series of $\mathbf{R}$ in 2D. Every such matrix has a corresponding skew-symmetric matrix of the form
+
+$$
+\mathbf {S} = \theta \mathbf {M}, \quad \mathbf {M} = \left[ \begin{array}{c c} 0 & - 1 \\ 1 & 0 \end{array} \right], \tag {11}
+$$
+
+with a real number $\theta$ . According to the Cayley-Hamilton theorem, $\mathbf{S}^2 + \theta^2\mathbf{I} = 0$ which leads to the following exponential map for $\mathbf{R}$ with power series:
+
+$$
+\begin{array}{l} \mathbf {R} = \exp (\mathbf {S}) = \\ \cos (\theta) \mathbf {I} + \left(\frac {\sin (\theta)}{\theta}\right) \mathbf {S} = \cos (\theta) \mathbf {I} + \sin (\theta) \mathbf {M}. \tag {12} \\ \end{array}
+$$
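+As a quick numerical sanity check of (12), assuming only NumPy and SciPy:
+
+```python
+import numpy as np
+from scipy.linalg import expm
+
+theta = 0.7
+M = np.array([[0.0, -1.0], [1.0, 0.0]])
+
+# Exponential map of Eq. (12): exp(theta * M) equals cos(theta) I + sin(theta) M.
+R_exp = expm(theta * M)
+R_series = np.cos(theta) * np.eye(2) + np.sin(theta) * M
+
+assert np.allclose(R_exp, R_series)
+# The result is a proper rotation: orthogonal with determinant 1.
+assert np.allclose(R_exp @ R_exp.T, np.eye(2)) and np.isclose(np.linalg.det(R_exp), 1.0)
+print(R_exp)
+```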
+
+From (12) we see that $\mathbf{R}$ is composed of the identity weighted by $\cos(\theta)$ and $\mathbf{M}$ weighted by $\sin(\theta)$. If the basis resembles the additive elements $\mathbf{I}$ and $\mathbf{M}$ of the exponential map, we can constrain the resulting $\mathbf{R}$ more strongly. We see that $r_{1,1}$ is entangled with $r_{2,2}$, and $r_{1,2}$ is entangled with $r_{2,1}$. Eventually, we need fewer basis elements, the optimisation finishes faster, and the method can also be implemented and tested on a classical computer. Thus, our basis $\mathbf{Q} = \{\mathbf{Q}_k\}$ for $\mathbf{R}$ is a compound of $K = 20$ elements:
+
+$$
+\begin{array}{l} \left\{\mathbf {Q} _ {k} = \omega \mathbf {C} \in \mathbb {R} ^ {2 \times 2}, \forall \omega \in \{0. 5, 0. 2, 0. 1, 0. 1, 0. 0 5 \}, \right. \tag {13} \\ \forall \mathbf {C} \in \{\mathbf {I}, \mathbf {M}, - \mathbf {I}, - \mathbf {M} \} \}. \\ \end{array}
+$$
+
+Since we want to find $\mathbf{R}$ which minimises the distances between the corresponding points $(\mathbf{x}_n,\mathbf{y}_n)$ , we multiply each template point with a negative sign $-\mathbf{y}_n$ with each basis element $\mathbf{Q}_k$ and stack the result into $\Phi$ :
+
+$$
+\boldsymbol {\Phi} = \left[ \begin{array}{c c c c} \mathbf {x} _ {1} ^ {\top} & \mathbf {x} _ {2} ^ {\top} & \dots & \mathbf {x} _ {N} ^ {\top} \\ - [ \mathbf {Q} _ {1} \mathbf {y} _ {1} ] ^ {\top} & - [ \mathbf {Q} _ {1} \mathbf {y} _ {2} ] ^ {\top} & \dots & - [ \mathbf {Q} _ {1} \mathbf {y} _ {N} ] ^ {\top} \\ - [ \mathbf {Q} _ {2} \mathbf {y} _ {1} ] ^ {\top} & - [ \mathbf {Q} _ {2} \mathbf {y} _ {2} ] ^ {\top} & \dots & - [ \mathbf {Q} _ {2} \mathbf {y} _ {N} ] ^ {\top} \\ \vdots & \vdots & \ddots & \vdots \\ - [ \mathbf {Q} _ {K} \mathbf {y} _ {1} ] ^ {\top} & - [ \mathbf {Q} _ {K} \mathbf {y} _ {2} ] ^ {\top} & \dots & - [ \mathbf {Q} _ {K} \mathbf {y} _ {N} ] ^ {\top} \end{array} \right]. \tag {14}
+$$
+
+Next, we set the weight matrix in (1) as
+
+$$
+\mathbf {P} = \boldsymbol {\Phi} \boldsymbol {\Phi} ^ {\top}, \tag {15}
+$$
+
+and the final QUBOP reads
+
+$$
+\arg \min _ {\mathbf {q} \in \mathbf {B} ^ {2 1}} \mathbf {q} ^ {\top} \boldsymbol {\Phi} \boldsymbol {\Phi} ^ {\top} \mathbf {q}. \tag {16}
+$$
+
+In total, 21 qubits are required to resolve the transformation on AQC in 2D, with the first qubit of $\mathbf{q}$ being fixed to $|1\rangle$ . After solving (16) with quantum annealing and measuring $\mathbf{q}$ , we obtain a classical bitstring $\hat{\mathbf{q}}$ . The resulting (perhaps approximate) $\mathbf{R}$ is then obtained by unembedding as
+
+$$
+\mathbf {R} = \sum_ {k = 1} ^ {K} \hat {\mathbf {q}} _ {k + 1} \mathbf {Q} _ {k}. \tag {17}
+$$
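+The following Python sketch illustrates the pipeline (13)-(17) end to end on synthetic data. It is not our annealer implementation: the annealer is replaced by an exhaustive search over the binary variables, and a reduced basis with only three $\omega$ values is used (an illustrative choice) so that the search stays cheap.
+
+```python
+import itertools
+import numpy as np
+
+# Reduced 2D rotation basis in the spirit of Eq. (13): omega * C for C in {I, M, -I, -M}.
+I2 = np.eye(2)
+M = np.array([[0.0, -1.0], [1.0, 0.0]])
+basis = [w * C for w in (0.5, 0.2, 0.1) for C in (I2, M, -I2, -M)]
+K = len(basis)                                   # 12 instead of 20 for this demo
+
+# Synthetic corresponding point sets: X = R_gt Y for a known rotation.
+rng = np.random.default_rng(2)
+Y = rng.normal(size=(2, 30))
+theta = 0.6
+R_gt = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
+X = R_gt @ Y
+
+# Phi as in Eq. (14): reference points stacked over -Q_k y_n rows; P = Phi Phi^T as in (15).
+Phi = np.vstack([X.T.reshape(1, -1)] +
+                [(-(Qk @ Y)).T.reshape(1, -1) for Qk in basis])
+P = Phi @ Phi.T
+
+# Brute-force stand-in for the annealer: minimise q^T P q with q_0 fixed to 1, cf. (16).
+best_bits, best_e = None, np.inf
+for bits in itertools.product([0, 1], repeat=K):
+    q = np.array((1,) + bits, dtype=float)
+    e = q @ P @ q
+    if e < best_e:
+        best_bits, best_e = bits, e
+
+# Unembedding as in Eq. (17): sum the selected basis elements.
+R_est = sum(b * Qk for b, Qk in zip(best_bits, basis))
+print("recovered R:\n", R_est, "\nground truth:\n", R_gt)
+```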
+
+# 5.2. Transformation Estimation in 3D
+
+In 3D, a skew-symmetric matrix can be represented as
+
+$$
+\mathbf {S} = \theta \mathbf {M}, \mathbf {M} = \left[ \begin{array}{l l l} m _ {1, 1} & m _ {1, 2} & m _ {1, 3} \\ m _ {2, 1} & m _ {2, 2} & m _ {2, 3} \\ m _ {3, 1} & m _ {3, 2} & m _ {3, 3} \end{array} \right] = \left[ \begin{array}{c c c} 0 & a & b \\ - a & 0 & c \\ - b & - c & 0 \end{array} \right], \tag {18}
+$$
+
+where $\theta$ , $a$ , $b$ and $c$ are real numbers, and $a^2 + b^2 + c^2 = 1$ . In the 3D case, the Cayley-Hamilton theorem states that $-\mathbf{S}^3 - \theta^2\mathbf{S} = 0$ . The exponential map for $\mathbf{R}$ in 3D with power series reads
+
+$$
+\begin{array}{l} \mathbf {R} = \exp (\mathbf {S}) = \mathbf {I} + \left(\frac {\sin \theta}{\theta}\right) \mathbf {S} + \left(\frac {1 - \cos \theta}{\theta^ {2}}\right) \mathbf {S} ^ {2} = \\ \mathbf {I} + \sin \theta \mathbf {M} + (1 - \cos \theta) \mathbf {M} ^ {2}. \tag {19} \\ \end{array}
+$$
+
+Next, $\mathbf{M}$ can be decomposed as follows:
+
+$$
+\mathbf {M} = \underbrace {a \left[ \begin{array}{r r r} 0 & 1 & 0 \\ - 1 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right]} _ {\mathbf {M} _ {a}} + \underbrace {b \left[ \begin{array}{r r r} 0 & 0 & 1 \\ 0 & 0 & 0 \\ - 1 & 0 & 0 \end{array} \right]} _ {\mathbf {M} _ {b}} + \underbrace {c \left[ \begin{array}{r r r} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & - 1 & 0 \end{array} \right]} _ {\mathbf {M} _ {c}}. \tag {20}
+$$
+
+Regarding $\mathbf{M}$ , we see that
+
+- $\{m_{1,2}; m_{2,1}\}, \{m_{1,3}; m_{3,1}\}$ and $\{m_{2,3}; m_{3,2}\}$ are mutually dependent or entangled,
+- $m_{i,j} \in [-1; 1]$ ,
+- $\mathbf{M} = -\mathbf{M}^{\mathrm{T}}, \mathbf{M}_a = -\mathbf{M}_a^{\mathrm{T}}, \mathbf{M}_b = -\mathbf{M}_b^{\mathrm{T}}$ and $\mathbf{M}_c = -\mathbf{M}_c^{\mathrm{T}}$ , i.e., they are anti-symmetric, and
+- $\mathbf{M}^2 = \left[ \begin{array}{ccc}v_1^{-} & d & e\\ d & v_2^{-} & f\\ e & f & v_3^{-} \end{array} \right]$ is symmetric negative semi-definite, with $\{v_1^-,v_2^-,v_3^-\} \in \mathbb{R}^-$, and $\{d,e,f\} \in \mathbb{R}$.
+
+The basis for rotation in 3D is comprised of the identity matrix $\mathbf{I}$ , $\mathbf{M}_a$ , $\mathbf{M}_b$ , $\mathbf{M}_c$ as well as the basis for $\mathbf{M}^2$ :
+
+$$
+\mathbf {M} _ {d} = \left[ \begin{array}{l l l} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right], \mathbf {M} _ {e} = \left[ \begin{array}{l l l} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{array} \right], \mathbf {M} _ {f} = \left[ \begin{array}{l l l} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array} \right]. \tag {21}
+$$
+
+Thus, our basis $\mathbf{Q}^{3D} = \{\mathbf{Q}_k^{3D}\}$ for $\mathbf{R}$ in 3D is a compound of $K = 80$ elements:
+
+$$
+\begin{array}{l} \left\{\mathbf {Q} _ {k} ^ {3 D} = \omega \mathbf {C} ^ {3 D} \in \mathbb {R} ^ {3 \times 3}, \forall \omega \in \{0. 5, 0. 2, 0. 1, 0. 1, 0. 0 5 \}, \right. \\ \forall \mathbf {C} ^ {3 D} \in \left\{\mathbf {I}, - \mathbf {I}, \mathbf {M} _ {a}, - \mathbf {M} _ {a}, \mathbf {M} _ {b}, - \mathbf {M} _ {b}, \mathbf {M} _ {c}, - \mathbf {M} _ {c}, \right. \\ \left. \mathbf {M} _ {d}, - \mathbf {M} _ {d}, \mathbf {M} _ {e}, - \mathbf {M} _ {e}, \mathbf {M} _ {f}, - \mathbf {M} _ {f} \right\}. \tag {22} \\ \end{array}
+$$
+
+The final QUBOP and the unembedding (i.e., decoding the solution to QUBOP) after quantum annealing for the 3D case are obtained similarly to (14)-(17) with $\mathbf{q} \in \mathbf{B}^{81}$ ( $\mathbf{q}_0$ remains fixed to $|1\rangle$ and $\mathbf{Q}_k$ are replaced by $\mathbf{Q}_k^{3D}$ in (17)).
+
+# 6. Quantum Point Set Registration
+
+In point set registration, the input point sets are of different cardinalities, and correspondences between points are, generally, not known, i.e., $[\mathbf{x}_n] \in \mathbf{X} \in \mathbb{R}^{D \times N}$ and $[\mathbf{y}_m] \in \mathbf{Y} \in \mathbb{R}^{D \times M}$ , $m \in \{1, \ldots, M\}$ . $N$ and $M$ are the numbers of points in the reference and template, respectively, while $D$ is the point dimensionality. The objective of point set alignment is to recover rotation $\mathbf{R}$ $(\mathbf{R}^{-1} = \mathbf{R}^{\top}, \det(\mathbf{R}) = 1)$ and translation $\mathbf{t}$ aligning $\mathbf{Y}$ to $\mathbf{X}$ . We assume that the translation is resolved in the pre-processing step by bringing the point set centroids into coincidence.
+
+Point set alignment can be solved on AQC in an alternating fashion by finding some point matches and estimating the transformation for the given correspondences in the ICP manner [13]. This would result in a sequence of QUBOP of the form (16). To express alignment as a single QUBOP, we have to find an energy functional which is correspondence-free and which, when minimised in one shot on AQC, results in an optimal alignment. The desired form of the energy functional has recently been shown in the literature [29].
+
+# 6.1. Particle Dynamics Based Alignment
+
+Barnes-Hut Rigid Gravitational Approach (BHRGA) [29] is a recent point set alignment method with a single energy functional which remains unchanged during the entire optimisation. BHRGA is a globally multiply-linked approach, i.e., all $\mathbf{y}_m$ interact with all $\mathbf{x}_n$ . In [29], point sets are aligned by minimising the mutual gravitational potential energy (GPE) $\mathbf{E}$ of the corresponding system of particles in the force field induced by $\mathbf{X}$ :
+
+$$
+\mathbf {E} (\mathbf {R}, \mathbf {t}) = \sum_ {m} \sum_ {n} \mu_ {\mathbf {y} _ {m}} \mu_ {\mathbf {x} _ {n}} \| \mathbf {R} \mathbf {y} _ {m} + \mathbf {t} - \mathbf {x} _ {n} \| _ {2}, \tag {23}
+$$
+
+where $\mu_{\mathbf{y}_m}$ and $\mu_{\mathbf{x}_n}$ denote masses of $\mathbf{y}_m$ and $\mathbf{x}_n$ , respectively. With no imposed boundary conditions, particles are initialised with unit masses. In [29], (23) is optimised with the Levenberg-Marquardt algorithm [40, 43], and the optimum is achieved when the system's GPE is locally minimal. Without acceleration by a $2^D$ -tree, the method has quadratic complexity and (23) involves all possible interactions between the template and reference points.
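+For reference, the GPE of (23) can be evaluated directly; the short sketch below (unit masses, synthetic points, not part of the original pipeline) shows that an aligned template typically attains a lower energy than a misaligned one.
+
+```python
+import numpy as np
+
+def gpe(R, t, X, Y, mu_x=None, mu_y=None):
+    """Mutual GPE of Eq. (23) for a D x N reference X and a D x M template Y."""
+    mu_x = np.ones(X.shape[1]) if mu_x is None else mu_x
+    mu_y = np.ones(Y.shape[1]) if mu_y is None else mu_y
+    Yt = R @ Y + t[:, None]                                       # transformed template
+    d = np.linalg.norm(Yt[:, :, None] - X[:, None, :], axis=0)    # (M, N) pairwise distances
+    return np.sum(mu_y[:, None] * mu_x[None, :] * d)
+
+# The aligned configuration typically attains a lower GPE than a misaligned one.
+rng = np.random.default_rng(3)
+X = rng.normal(size=(2, 40))
+theta = 0.4
+R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
+Y = R.T @ X                                                       # template = de-rotated reference
+print(gpe(R, np.zeros(2), X, Y), "vs.", gpe(np.eye(2), np.zeros(2), X, Y))
+```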
+
+We can now derive a QUBOP in a similar fashion to Sec. 5 for the transformation estimation. Note, however, that the bases (13) and (22) allow for affine transformations and scaling. Thus, implicitly, we would optimise
+
+$$
+\mathbf {E} (\mathbf {R}, \mathbf {t}, s) = \sum_ {m} \sum_ {n} \mu_ {\mathbf {y} _ {m}} \mu_ {\mathbf {x} _ {n}} \| \mathbf {R} \mathbf {y} _ {m} s + \mathbf {t} - \mathbf {x} _ {n} \| _ {2}, \tag {24}
+$$
+
+where the scalar $s$ is the scaling of the template. As proven in [28], allowing for scale in globally multiply-linked point set alignment results in the shrinkage of the template to a single point with a very high probability. To remedy the problem, either prior correspondences can be used, or point interactions can be restricted to local vicinities [28]. In our QA, we opt for the second solution which allows us to use the rotational bases (13) and (22) elaborated in Sec. 5. Eventually, the matrix $\Phi \in \mathbb{R}^{(K + 1)\times D(L(1) + L(2) + \ldots + L(N))}$ encoding point interactions for point set alignment reads
+
+$$
+\boldsymbol {\Phi} = \left[ \boldsymbol {\Phi} _ {1} \boldsymbol {\Phi} _ {2} \dots \boldsymbol {\Phi} _ {N} \right], \tag {25}
+$$
+
+with $\Phi_n, n \in \{1, \ldots, N\}$ , of the form
+
+$$
+\left[ \begin{array}{c c c c} \mathbf {x} _ {n} ^ {\top} & \mathbf {x} _ {n} ^ {\top} & \dots & \mathbf {x} _ {n} ^ {\top} \\ - \left[ \mathbf {Q} _ {1} \mathbf {y} _ {1} ^ {n} \right] ^ {\top} & - \left[ \mathbf {Q} _ {1} \mathbf {y} _ {2} ^ {n} \right] ^ {\top} & \dots & - \left[ \mathbf {Q} _ {1} \mathbf {y} _ {L (n)} ^ {n} \right] ^ {\top} \\ - \left[ \mathbf {Q} _ {2} \mathbf {y} _ {1} ^ {n} \right] ^ {\top} & - \left[ \mathbf {Q} _ {2} \mathbf {y} _ {2} ^ {n} \right] ^ {\top} & \dots & - \left[ \mathbf {Q} _ {2} \mathbf {y} _ {L (n)} ^ {n} \right] ^ {\top} \\ \vdots & \vdots & \ddots & \vdots \\ - \left[ \mathbf {Q} _ {K} \mathbf {y} _ {1} ^ {n} \right] ^ {\top} & - \left[ \mathbf {Q} _ {K} \mathbf {y} _ {2} ^ {n} \right] ^ {\top} & \dots & - \left[ \mathbf {Q} _ {K} \mathbf {y} _ {L (n)} ^ {n} \right] ^ {\top} \end{array} \right], \tag {26}
+$$
+
+with $\mathbf{Q}_k$ being as in (13) or (22) for the 2D and 3D case, respectively. $\Phi_1, \Phi_2, \ldots, \Phi_N$ encode point interactions between every $\mathbf{x}_n$ and the corresponding $L(n)\ll M$ points of the template denoted by superscripted $\{\mathbf{y}_1^n,\mathbf{y}_2^n,\dots ,\mathbf{y}_{L(n)}^n\}$. Note that the latter build $N$ subsets of $\{\mathbf{y}_1,\mathbf{y}_2,\ldots ,\mathbf{y}_M\}$ of different cardinalities $L(n)$, $\bar{L}$ on average. If different $\mathbf{x}_n$ interact with the same $\mathbf{y}_m$, the corresponding subcolumns $\left[[\mathbf{Q}_1\mathbf{y}_m]^{\mathrm{T}}[\mathbf{Q}_2\mathbf{y}_m]^{\mathrm{T}}\dots [\mathbf{Q}_K\mathbf{y}_m]^{\mathrm{T}}\right]^{\mathrm{T}}$ of $\Phi_n$ can be computed only once and reused. The final QUBOP for point set alignment with $\Phi_{n}$ as in (26) reads
+
+$$
+\arg \min _ {\mathbf {q} \in \mathbf {B} ^ {K + 1}} \mathbf {q} ^ {\top} \boldsymbol {\Phi} \boldsymbol {\Phi} ^ {\top} \mathbf {q}. \tag {27}
+$$
+
+In total, $K + 1 = 21$ and $K + 1 = 81$ qubits are required to align point sets on AQC in the 2D and 3D case, respectively. Both transformation estimation and point set alignment need the same number of qubits in the same dimensions, and the difference lies in the complexity to construct $\mathbf{P}$ (see Sec. 6.2). Note that if the same template has to be aligned to multiple references, the corresponding $\Phi$ can be obtained by reusing $\left[\left[\mathbf{Q}_1\mathbf{y}_m\right]^{\top}\left[\mathbf{Q}_2\mathbf{y}_m\right]^{\top}\dots \left[\mathbf{Q}_K\mathbf{y}_m\right]^{\top}\right]^{\top}$ (which has to be computed only once). The first qubit of $\mathbf{q}$ has to be fixed to $|1\rangle$ , since the first element of every column contains a reference point which has to be active during the entire optimisation. The unembedding is performed similarly to the case of transformation estimation, see Sec. 5.
+
+# 6.2. Complexity to Prepare $\mathbf{P} = \Phi \mathbf{\Phi}^{\top}$
+
+To prepare $\Phi$ , $\mathcal{O}(KDN\xi)$ and $\mathcal{O}(KDN\bar{L}\xi)$ operations are required for the transformation estimation and point set alignment, respectively. $\xi$ denotes the number of operations for multiplying $\mathbf{y}_m$ with one element of the additive basis $\mathbf{Q}_k$ . To obtain the final $\mathbf{P}$ , we need to transpose $\Phi$ and multiply $\Phi$ with $\Phi^{\top}$ which, in the worst case, takes $\mathcal{O}(K^2 DN)$ operations for the transformation estimation and $\mathcal{O}(K^2 DN\bar{L})$ operations for the point set alignment. There are also slightly faster algorithms for matrix multiplication compared to the naive way [20].
+
+| | TE | $K=10$ | $K=20$ | $K=30$ | $K=40$ | $K=50$ |
+| --- | --- | --- | --- | --- | --- | --- |
+| $e_{2D}$ | 0.023 | 0.026 | 0.041 | 0.078 | 0.17 | 0.3 |
+| $\sigma_{2D}$ | 0.012 | 0.013 | 0.012 | 0.012 | 0.012 | 0.013 |
+| $e_{\mathbf{R}}$ | 0.058 | 0.062 | 0.083 | 0.22 | 0.47 | 0.764 |
+| $\sigma_{\mathbf{R}}$ | 0.041 | 0.044 | 0.041 | 0.036 | 0.031 | 0.03 |
+
+Table 2: The accuracy of QA under random initial misalignments, for the transformation estimation ("TE") and point set alignment ( $K > 1$ ).
+
+# 7. Experimental Evaluation
+
+The current generation of D-WAVE annealers does not support the precision of weights in $\mathbf{P}$ necessary for our method $[22]^3$ . It is foreseeable that future generations will enable a higher accuracy for couplings. We thus implement and test QA with an AQC sampler on a conventional computer (Intel i7-6700K CPU with 32GB RAM). All quantitative tests are performed with 21 binary variables corresponding to the size of the $\mathbf{Q}$ basis in 2D.
+
+We report two error metrics, i.e., the alignment error $e_{2D}$ and the transformation discrepancy $e_{\mathbf{R}}$ , together with their standard deviations denoted by $\sigma_{2D}$ and $\sigma_{\mathbf{R}}$ , respectively. The alignment error $e_{2D} = \frac{\|\mathbf{RY} - \mathbf{X}\|_{\mathcal{HS}}}{\|\mathbf{X}\|_{\mathcal{HS}}}$ ( $\| \cdot \|_{\mathcal{HS}}$ denotes the Hilbert-Schmidt norm) measures how accurately the aligned shape coincides with the reference and requires ground truth correspondences. The transformation discrepancy is defined as $e_{\mathbf{R}} = \left\| \mathbf{I} - \mathbf{RR}^{\mathrm{T}} \right\|_{\mathcal{HS}}$ , where $\mathbf{R}$ is the recovered rotation. It measures how closely the recovered transformation resembles a valid rigid transformation. The usage of two complementary metrics is necessary because a low $e_{\mathbf{R}}$ does not automatically imply an accurate registration. On the other hand, a low $e_{2D}$ does not quantify how rigid the recovered transformation is.
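+
+Both metrics are straightforward to compute; a small NumPy sketch (with points stored as matrix rows, an assumption of this sketch, and the Hilbert-Schmidt norm realised as the Frobenius norm) is:
+
+```python
+import numpy as np
+
+def alignment_error(R, Y, X):
+    """e_2D = ||R Y - X||_HS / ||X||_HS; rows of X and Y are assumed to be
+    corresponding points (ground-truth correspondences)."""
+    return np.linalg.norm(Y @ R.T - X) / np.linalg.norm(X)
+
+def transformation_discrepancy(R):
+    """e_R = ||I - R R^T||_HS, the deviation of R from a valid rotation."""
+    return np.linalg.norm(np.eye(R.shape[0]) - R @ R.T)
+```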
+
+Datasets and Proof of Concept. We use four 2D datasets, i.e., fish [46], qubit, kanji and composer, with cardinalities varying from 91 (fish) to 7676 (composer); see Fig. 1 for qualitative registration results. For point sets with up to a few thousand points, the simulation time is $\tau_{\mathbf{P}} < 1$ sec. For $\sim 7.7k$ points, $\tau_{\mathbf{P}}$ grows to 20.178 sec (by a factor of $\sim 10^4$ ). A simulation with $n = 30$ already takes $\sim 2.5$ days. More binary variables allow for more elements in the basis $\mathbf{Q}$ , resulting in more accurate alignment. Note that even with 80 qubits, i.e., for problems with $n = 80$ , annealing on AQC takes around 100 ms. A simulation with $n = 80$ is not possible even on a conventional supercomputer in a reasonable time.
+
+Initial Misalignment and Point Linking. We test how accurately our method recovers the transformation under a random angle of initial misalignment $\theta$ and different sizes of the point linking region. We generate 500 random transformations in the range $\theta \in [0;2\pi]$ of the fish dataset and resolve them with QA, for each $K \in \{1,10,20,30\}$ . The results are summarised in Table 2. We see that $e_{2D}$ correlates
+
+
+Figure 3: The metrics as functions of A) the size of the point interaction region parametrised by $K$ ; B) the angle of initial misalignment $\theta$ ; C) the template noise ratio.
+
+with $e_{\mathbf{R}}$ for all tested $K$ . For $K = 30$ , which corresponds to one third of the template points, both metrics are still comparably low. We also study how the choice of the point interaction region, or $K$ , affects the accuracy of the transformation recovery and plot $e_{2D}$ and $e_{\mathbf{R}}$ as functions of $K$ for several angles of initial misalignment $\theta$ in Fig. 3-A. Interacting points are determined with the $K$ nearest neighbour rule for each $\mathbf{x}_n$ . Recall that, according to the singularity theorem [28], the globally multiply-linked alignment (here, $K = 91$ ) results in a shrinkage of the template to a single point, which is observed experimentally.
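+
+The $K$ nearest neighbour linking used here can be realised, for instance, as in the following sketch (reference points $\mathbf{x}_n$ and template points $\mathbf{y}_m$ stored as rows, an assumption of this sketch):
+
+```python
+import numpy as np
+
+def knn_links(X, Y, K):
+    """For each reference point x_n, return the indices of its K nearest
+    template points y_m (the interaction sets used to build Phi_n)."""
+    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # (N, M) squared distances
+    return np.argsort(d2, axis=1)[:, :K]
+```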
+
+Next, we systematically vary the angle of initial misalignment $\theta$ in the range $[0;2\pi]$ with the angular step $\frac{\pi}{36}$ and report $e_{2D}$ and $e_{\mathbf{R}}$ as the functions of $\theta$ , for $K \in \{1, 10, 20, 30, 40, 50\}$ . This test reveals the differences in the transformations caused by $\theta$ , which arise due to the composition and the expressiveness of the chosen basis $M$ , see Fig. 3-B. QA is almost agnostic to $\theta$ , which is a desirable property of every point set alignment method.
+
+Sensitivity to Noise. We systematically add uniformly distributed noise to the template and test the robustness of the proposed QA to outliers in the data, since real data often contains outliers. The highest template noise ratio amounts to $50\%$ . Each metric for every noise ratio and every $K$ is averaged over 50 runs, see Fig. 3-C. $\sigma_{\mathbf{R}}$ and $\sigma_{2D}$ do not exceed 0.057 and 0.03, respectively. We observe both the increasing alignment error and the discrepancy in the obtained transformations with the increasing noise level. For small $K$ , nonetheless, even large noise ratios seem not to influence the metrics significantly.
+
+Spectral Gap Analysis. The spectral gap $\Delta (\hat{\mathcal{H}})$ is the difference between the energy of the ground state and the second-lowest eigenstate. Each problem has an intrinsic and unique $\Delta (\hat{\mathcal{H}})$ . Even though a rigorous analysis of the spectral gap is out of the scope of this paper, we make several qualitative observations about the energy landscape of QA, the differences in the energy values and the corresponding registrations for one exemplary problem. In Fig. 4, we plot the
+
+
+Figure 4: The sequences of energy-decreasing transitions and the corresponding energy values observed in our sampler, for transformation estimation $(K = 1)$ and point set alignment with $K = 30$ interactions per $\mathbf{x}_n$ . Besides the graphs, we visualise alignment results for selected energy values and the angle of initial misalignment $\theta \in \{\frac{\pi}{8}, \frac{\pi}{4}, \frac{\pi}{2}\}$ .
+
+sequences of energy-decreasing transitions together with the energy values in the experiment with fish, for three $\theta$ values. We notice that some solutions have very small differences in their energies and are qualitatively indistinguishable from each other. This is accounted for by the choice of the additive basis, i.e., the same alignment can be encoded in different ways. In contrast, we see significant differences in the energy values of the qualitatively different solutions (orders of magnitude larger in the analysed experiment).
+
+We conclude that even though $\Delta (\hat{\mathcal{H}})$ is small, the alignments corresponding to the few lowest eigenstates are qualitatively similar. This suggests that our selection of the basis leads to problems with sufficient spectral gaps.
+
+# 8. Conclusions
+
+This paper introduces AQC for the computer vision community and shows that fundamental low-level problems can be brought to a representation suitable for solving on AQC. In simulations on a classical computer and in a wide range of scenarios, our QA is shown to successfully recover 2D transformations which are close approximations of globally optimal transformations. With the chosen basis of 20 elements, the solutions result in low transformation discrepancy and alignment errors. Observations on how to avoid singularities as well as the noise sensitivity and spectral gap analysis complement the experimental section.
+
+In future work, our technique can be extended to affine transformations and other related computer vision problems. We hope to see more research on computer vision methods with quantum hardware in the next decades.
+
+Acknowledgements. This work was supported by the ERC Consolidator Grant 770784. VG is grateful to Polina Matveeva for many enlightening discussions on the physical foundations of adiabatic quantum computing. The authors thank Bertram Taetz and Hanno Ackermann for reviewing an earlier version of this paper.
+
+# References
+
+[1] Steven H. Adachi and Maxwell P. Henderson. Application of Quantum Annealing to Training of Deep Neural Networks. arXiv e-prints, 2015.
+[2] Dorit Aharonov, Wim van Dam, Julia Kempe, Zeph Landau, Seth Lloyd, and Oded Regev. Adiabatic quantum computation is equivalent to standard quantum computation. SIAM J. Comput., 37(1):166-194, 2007.
+[3] Andris Ambainis and Robert Špalek. Quantum algorithms for matching and network flows. In STACS, 2006.
+[4] Mohammad H. S. Amin. Consistency of the adiabatic theorem. Physical Review Letters, 102:220401, 2009.
+[5] Mathieu Aubry, Ulrich Schlickewei, and Daniel Cremers. The wave kernel signature: A quantum mechanical approach to shape analysis. In International Conference on Computer Vision (ICCV) Workshops, pages 1626-1633, 2011.
+[6] Caglar Aytekin, Serkan Kiranyaz, and Moncef Gabbouj. Automatic object segmentation by quantum cuts. In International Conference on Pattern Recognition (ICPR), 2014.
+[7] Francisco Barahona. On the computational complexity of ising spin glass models. Journal of Physics A: Mathematical and General, 15(10):3241-3253, 1982.
+[8] Paul Benioff. The computer as a physical system: A microscopic quantum mechanical hamiltonian model of computers as represented by Turing machines. Journal of Statistical Physics, 22(5):563-591, 1980.
+[9] Charles H. Bennett, François Bessette, Gilles Brassard, Louis Salvail, and John Smolin. Experimental quantum cryptography. J. Cryptol., 5(1):3-28, 1992.
+[10] Charles H. Bennett and Gilles Brassard. Quantum cryptography: Public key distribution and coin tossing. In International Conference on Computers, Systems, and Signal Processing, 1984.
+[11] Aleta Berk Finnila, Maria Gomez, C. Sebenik, Catherine Stenson, and Jimmie D. Doll. Quantum annealing: A new method for minimizing multidimensional functions. Chemical Physics Letters, 219, 1994.
+[12] Ethan Bernstein and Umesh Vazirani. Quantum complexity theory. In Symposium on Theory of Computing (STOC), 1993.
+[13] Paul J. Besl and Neil D. McKay. A method for registration of 3-d shapes. Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 14(2):239-256, 1992.
+[14] Zhengbing Bian, Fabian Chudak, Robert Brian Israel, Brad Lackey, William G. Macready, and Aidan Roy. Mapping constrained optimization problems to quantum annealing with application to fault diagnosis. Frontiers in ICT, 3:14, 2016.
+[15] Zhengbing Bian, Fabian Chudak, William G. MacReady, Lane Clark, and Frank Gaitan. Experimental determination of ramsey numbers with quantum annealing. Physical Review Letters, 111(13):130505, 2013.
+[16] Tomas Boothby, Andrew D. King, and Aidan Roy. Fast clique minor generation in chimera qubit connectivity graphs. Quantum Information Processing, 15(1):495-508, 2016.
+
+[17] Max Born and Vladimir Fock. Beweis des adiabatensatzes. Zeitschrift für Physik, 51(3):165-180, 1928.
+[18] Edward Boyda, Saikat Basu, Sangram Ganguly, Andrew Michaelis, Supratik Mukhopadhyay, and Ramakrishna R. Nemani. Deploying a quantum annealing processor to detect tree cover in aerial imagery of california. PLOS ONE, 12(2), 2017.
+[19] Patrick J. Coles et al. Quantum Algorithm Implementations for Beginners. arXiv e-prints, 2018.
+[20] Don Coppersmith and Shmuel Winograd. Matrix multiplication via arithmetic progressions. Journal of Symbolic Computation, 9(3):251-280, 1990.
+[21] D-Wave Systems. D-Wave Announces D-Wave 2000Q Quantum Computer and First System Order. https://www.dwavesys.com/press-releases/., 2019. online; accessed 15 October 2019.
+[22] D-Wave Systems. Technical Description of the D-Wave Quantum Processing Unit. https://docs.dwavesys.com/docs/latest/doc_qpu.html, 2019. online; accessed 5 November 2019.
+[23] Vasil S. Denchev, Sergio Boixo, Sergei V. Isakov, Nan Ding, Ryan Babbush, Vadim Smelyanskiy, John Martinis, and Hartmut Neven. What is the computational value of finite-range tunneling? Phys. Rev. X, 6:031015, 2016.
+[24] David Deutsch and Richard Jozsa. Rapid solution of problems by quantum computation. Proceedings of the Royal Society of London Series A, 439(1907):553-558, 1992.
+[25] Simon J. Devitt. Performing quantum computing experiments in the cloud. Phys. Rev. A, 94:032329, 2016.
+[26] Edward Farhi, Jeffrey Goldstone, Sam Gutmann, Joshua Lapan, Andrew Lundgren, and Daniel Preda. A quantum adiabatic evolution algorithm applied to random instances of an np-complete problem. Science, 292(5516):472-475, 2001.
+[27] Richard P. Feynman. Simulating physics with computers. International Journal of Theoretical Physics, 21(6):467-488, 1982.
+[28] Vladislav Golyanik and Christian Theobalt. Optimising for scale in globally multiply-linked gravitational point set registration leads to singularities. In International Conference on 3D Vision (3DV), 2019.
+[29] Vladislav Golyanik, Christian Theobalt, and Didier Stricker. Accelerated gravitational point set alignment with altered physical laws. In International Conference on Computer Vision (ICCV), 2019.
+[30] Lov K. Grover. A fast quantum mechanical algorithm for database search. In Annual ACM Symposium on Theory of Computing, pages 212-219, 1996.
+[31] Kuk-Hyun Han and Jong-Hwan Kim. Genetic quantum algorithm and its application to combinatorial optimization problem. In Congress on Evolutionary Computation, 2000.
+[32] Kuk-Hyun Han and Jong-Hwan Kim. Quantum-inspired evolutionary algorithm for a class of combinatorial optimization. Trans. Evol. Comp, 6(6):580-593, 2002.
+[33] Berthold K. P. Horn, Hugh M. Hilden, and Shahriar Negahdaripour. Closed-form solution of absolute orientation using orthonormal matrices. J. Opt. Soc. Am. A, 5(7):1127-1135, 1988.
+
+[34] Ernst Ising. Beitrag zur theorie des ferromagnetismus. Zeitschrift für Physik, 31(1):253-258, 1925.
+[35] Tadashi Kadowaki and Hidetoshi Nishimori. Quantum annealing in the transverse ising model. Phys. Rev. E, 58:5355-5363, 1998.
+[36] Bruce E. Kane. A silicon-based nuclear spin quantum computer. Nature, 393:133-137, 1998.
+[37] Amir Khoshaman, Walter Vinci, Brandon Denis, Evgeny Andriyash, Hossein Sadeghi, and Mohammad H. Amin. Quantum variational autoencoder. Quantum Science and Technology, 4(1), 2018.
+[38] Scott Kirkpatrick, C. Daniel Gelatt, and Mario P. Vecchi. Optimization by simulated annealing. Science, 220(4598):671-680, 1983.
+[39] Trevor Lanting et al. Entanglement in a quantum annealing processor. Phys. Rev. X, 4:021041, 2014.
+[40] Kenneth Levenberg. A method for the solution of certain non-linear problems in least squares. Quarterly Journal of Applied Mathematics, II(2):164-168, 1944.
+[41] Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. Quantum principal component analysis. Nature Physics, 10, 2013.
+[42] Yuri Manin. Computable and Noncomputable. Sov. Radio., 1980.
+[43] Donald W. Marquardt. An algorithm for least-squares estimation of nonlinear parameters. SIAM Journal on Applied Mathematics, 11(2):431-441, 1963.
+[44] Nicholas Metropolis, Arianna W. Rosenbluth, Marshall N. Rosenbluth, Augusta H. Teller, and Edward Teller. Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6):1087-1092, 1953.
+[45] Andrea Morello et al. Single-shot readout of an electron spin in silicon. Nature, 467:687-691, 2010.
+[46] Andriy Myronenko and Xubo Song. Point-set registration: Coherent point drift. Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2010.
+[47] Kae Nemoto, Michael Trupke, Simon J. Devitt, Ashley M. Stephens, Burkhard Scharfenberger, Kathrin Buczak, Tobias Nobauer, Mark S. Everitt, Jorg Schmiedmayer, and William J. Munro. Photonic architecture for scalable quantum information processing in diamond. Phys. Rev. X, 4:031022, 2014.
+[48] Florian Neukart, Gabriele Compostella, Christian Seidel, David von Dollen, Sheir Yarkoni, and Bob Parney. Traffic flow optimization using a quantum annealer. Frontiers in ICT, 4:29, 2017.
+[49] Hartmut Neven, Vasil S. Denchev, Georgie Rose, and William G. Macready. Qboost: Large scale classifier training with adiabatic quantum optimization. In Asian Conference on Machine Learning (ACML), 2012.
+[50] Hartmut Neven, Georgie Rose, and William G. Macready. Image recognition with an adiabatic quantum computer I. Mapping to quadratic unconstrained binary optimization. arXiv e-prints, 2008.
+[51] Nga T. T. Nguyen and Garrett T. Kenyon. Image classification using quantum inference on the D-Wave 2X. arXiv e-prints, 2019.
+
+[52] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge University Press, 2011.
+[53] Daniel O'Malley, Velimir V. Vesselinov, Boian S. Alexandrov, and Ludmil B. Alexandrov. Nonnegative/binary matrix factorization with a d-wave quantum annealer. PLOS ONE, 13, 2018.
+[54] Ojas Parekh, Jeremy Wendt, Luke Shulenburger, Andrew Landahl, Jonathan Moussa, and John Aidun. Benchmarking Adiabatic Quantum Optimization for Complex Network Analysis. arXiv e-prints, 2016.
+[55] James L. Park. The concept of transition in quantum mechanics. Foundations of Physics, 1(1):23-33, 1970.
+[56] Rudolph Peierls. On ising's model of ferromagnetism. Mathematical Proceedings of the Cambridge Philosophical Society, 32(3):477-481, 1936.
+[57] Patrick Rebentrost, Masoud Mohseni, and Seth Lloyd. Quantum support vector machine for big data classification. Physical Review Letters, 113, 2013.
+[58] Eleanor Rieffel and Wolfgang Polak. An introduction to quantum computing for non-physicists. ACM Computing Surveys (CSUR), 32(3):300–335, 2000.
+[59] Tobias Schmitt-Manderbach, Henning Weier, Martin Fürst, Rupert Ursin, Felix Tiefenbacher, Thomas Scheidl, Josep Perdigues, Zoran Sodnik, Christian Kurtsiefer, John G. Rarity, Anton Zeilinger, and Harald Weinfurter. Experimental demonstration of free-space decoy-state quantum key distribution over $144\mathrm{km}$ . In European Conference on Lasers and Electro-Optics and the International Quantum Electronics Conference, 2007.
+[60] Peter W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J. Comput., 26(5):1484-1509, 1997.
+[61] Peter W. Shor. Introduction to Quantum Algorithms. arXiv e-prints, 2001.
+[62] Janus H. Wesenberg, Arzhang Ardavan, G. Andrew D. Briggs, John J. L. Morton, Robert J. Schoelkopf, David I. Schuster, and Klaus Mølmer. Quantum computing with an electron spin ensemble. Phys. Rev. Lett., 103:070502, 2009.
+[63] William K. Wootters and Wojciech Hubert Zurek. A single quantum cannot be cloned. Nature, 299(5886), 1982.
+[64] Richard Y. Li, Rosa Di Felice, Remo Rohs, and Daniel Lidar. Quantum annealing versus classical machine learning applied to a simplified computational biology problem. npj Quantum Information, 4:14, 02 2018.
+[65] Jiaolong Yang, Hongdong Li, Dylan Campbell, and Yunde Jia. Go-icp: A globally optimal solution to 3d icp point-set registration. Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 38(11):2241-2254, 2016.
+[66] Floris A. Zwanenburg, Andrew S. Dzurak, Andrea Morello, Michelle Y. Simmons, Lloyd C. L. Hollenberg, Gerhard Klimeck, Sven Rogge, Susan N. Coppersmith, and Mark A. Eriksson. Silicon quantum electronics. Rev. Mod. Phys., 85:961-1019, 2013.
\ No newline at end of file
diff --git a/aquantumcomputationalapproachtocorrespondenceproblemsonpointsets/images.zip b/aquantumcomputationalapproachtocorrespondenceproblemsonpointsets/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..a47db7a0754fb8ad5e8e0f04db473434025b92f2
--- /dev/null
+++ b/aquantumcomputationalapproachtocorrespondenceproblemsonpointsets/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d9fa87029791384c95ff81929b38f06b8c4732a1476235bd38daf82eb53df615
+size 413750
diff --git a/aquantumcomputationalapproachtocorrespondenceproblemsonpointsets/layout.json b/aquantumcomputationalapproachtocorrespondenceproblemsonpointsets/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..deee985103267de088100348934e97430e9eba05
--- /dev/null
+++ b/aquantumcomputationalapproachtocorrespondenceproblemsonpointsets/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:56dd6eef54f665c0066c8a09a1d05999543cf694539fe5ca5e7e5c84181f2a71
+size 615398
diff --git a/arealtimecrossmodalitycorrelationfilteringmethodforreferringexpressioncomprehension/0e10a00d-1309-42d2-bb95-49da6622b0ad_content_list.json b/arealtimecrossmodalitycorrelationfilteringmethodforreferringexpressioncomprehension/0e10a00d-1309-42d2-bb95-49da6622b0ad_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..091884cb3b8be143a76172e79fac6825d4ee9154
--- /dev/null
+++ b/arealtimecrossmodalitycorrelationfilteringmethodforreferringexpressioncomprehension/0e10a00d-1309-42d2-bb95-49da6622b0ad_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:851bcb5b7c6dab0f55938422e31fb5dc4c59771c9785f66c66ebff9fe655c687
+size 74520
diff --git a/arealtimecrossmodalitycorrelationfilteringmethodforreferringexpressioncomprehension/0e10a00d-1309-42d2-bb95-49da6622b0ad_model.json b/arealtimecrossmodalitycorrelationfilteringmethodforreferringexpressioncomprehension/0e10a00d-1309-42d2-bb95-49da6622b0ad_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..2efbaf754d2d1758c0b91d009b76ed0f6b6480ee
--- /dev/null
+++ b/arealtimecrossmodalitycorrelationfilteringmethodforreferringexpressioncomprehension/0e10a00d-1309-42d2-bb95-49da6622b0ad_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1856dace461dbaa58a2e372208df8d15cbd837a01b173e759110c582f437e8fc
+size 91175
diff --git a/arealtimecrossmodalitycorrelationfilteringmethodforreferringexpressioncomprehension/0e10a00d-1309-42d2-bb95-49da6622b0ad_origin.pdf b/arealtimecrossmodalitycorrelationfilteringmethodforreferringexpressioncomprehension/0e10a00d-1309-42d2-bb95-49da6622b0ad_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..1bd3cf0ee38a5ab289f9c9414b5bc97206cede37
--- /dev/null
+++ b/arealtimecrossmodalitycorrelationfilteringmethodforreferringexpressioncomprehension/0e10a00d-1309-42d2-bb95-49da6622b0ad_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d2215e50533555ceef427c0ed08b24deef71b2d28e39c6e9b81ed42abaf1eec1
+size 2118998
diff --git a/arealtimecrossmodalitycorrelationfilteringmethodforreferringexpressioncomprehension/full.md b/arealtimecrossmodalitycorrelationfilteringmethodforreferringexpressioncomprehension/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..f2b733d1a175e3d448b37a5f86ba3f881106deb6
--- /dev/null
+++ b/arealtimecrossmodalitycorrelationfilteringmethodforreferringexpressioncomprehension/full.md
@@ -0,0 +1,310 @@
+# A Real-Time Cross-modality Correlation Filtering Method for Referring Expression Comprehension
+
+Yue Liao $^{1,3}$ Si Liu $^{1*}$ Guanbin Li $^{2}$ Fei Wang $^{3}$ Yanjie Chen $^{3}$ Chen Qian $^{3}$ Bo Li $^{1}$ $^{1}$ School of Computer Science and Engineering, Beihang University
+ $^{2}$ Sun Yat-sen University $^{3}$ SenseTime Research
+
+liaoyue.ai@gmail.com; {liusi, boli}@buaa.edu.cn; liguanbin@mail.sysu.edu.cn; {wangfei, chenyanjie, qianchen}@sensetime.com
+
+# Abstract
+
+Referring expression comprehension aims to localize the object instance described by a natural language expression. Current referring expression methods have achieved good performance. However, none of them is able to achieve real-time inference without an accuracy drop. The reason for the relatively slow inference speed is that these methods artificially split referring expression comprehension into two sequential stages: proposal generation and proposal ranking. This split does not conform well to the habit of human cognition. To this end, we propose a novel Real-time Cross-modality Correlation Filtering method (RCCF). RCCF reformulates referring expression comprehension as a correlation filtering process. The expression is first mapped from the language domain to the visual domain and then treated as a template (kernel) to perform correlation filtering on the image feature map. The peak value in the correlation heatmap indicates the center point of the target box. In addition, RCCF also regresses a 2-D object size and a 2-D offset. The center point coordinates, object size and center point offset together form the target bounding box. Our method runs at 40 FPS while achieving leading performance on the RefClef, RefCOCO, RefCOCO+ and RefCOCOg benchmarks. On the challenging RefClef dataset, our method almost doubles the state-of-the-art performance (from 34.70% to 63.79%). We hope this work can draw more attention and studies to the new cross-modality correlation filtering framework as well as the one-stage framework for referring expression comprehension.
+
+# 1. Introduction
+
+Referring expression comprehension [34, 32, 27] has attracted much attention in recent years. A referring expression is a natural language description of a particular object
+
+
+Figure 1. Precision (IoU>0.5) versus inference time on the RefCOCO testA set on a single Titan Xp GPU. Our method RCCF achieves 40 fps (25 ms per image), which exceeds the real-time threshold of 25 fps and is faster than existing methods by a significant margin (12 times). The precision of RCCF also outperforms the state-of-the-art methods.
+
+in an image. Given such a referring expression, the target of referring expression comprehension is to localize the object instance in the image. It is one of the key tasks in the field of machine intelligence to realize human-computer interaction, robotics and early education.
+
+Conventional methods for referring expression comprehension mostly formulate this problem as an object retrieval task, where an object that best matches the referring expression is retrieved from a set of object proposals. These methods [32, 29, 28, 27] are mainly composed of two stages. In the first stage, given an input image, a pre-trained object detection network is applied to generate a set of object proposals. In the second stage, given an input expression, the best matching region from the detected object proposals is selected. Although existing two-stage methods have achieved great advances, there are still some problems. 1) The performance of the two-stage methods is heavily limited by the quality of object proposals generated in the first stage. If the target object is not accurately detected, it is impossible to match the language in the second stage. 2) In the first stage, a lot of extra object detection data, i.e., COCO [17] and Visual Genome [13], are indispensable to achieve satisfactory results. 3) Two-stage methods are usually computationally costly. For each object proposal, both feature extraction and cross-modality similarity computation must be conducted, yet only the proposal with the highest similarity is finally selected. As we can see in Figure 1, the accuracy of current two-stage methods is reasonable, while the inference speed is still far from real-time.
+
+The three aforementioned problems are difficult to solve in existing two-stage frameworks. We reformulate referring expression comprehension as a cross-modality template matching problem, where the language serves as the template (filter kernel) and the image feature map is the search space on which correlation filtering is performed. Mathematically, referring expression comprehension aims to learn a function $f(z,x)$ that compares an expression $z$ to a candidate image $x$ and returns a high score in the corresponding region. The region is represented by a 2-dim center point, a 2-dim object size (height and width) and a 2-dim offset to recover the discretization error [15, 36, 6]. Our proposed RCCF is end-to-end trainable. The language embedding is used as a correlation filter and applied to the feature map to produce the heatmap for the center point. For more accurate localization, we compute the correlation map on multi-level image features and fuse the output maps to produce the final heatmap of the object center. Moreover, the width, height and offset heatmaps are regressed from the visual features only. During inference, the text is first embedded into the visual space and then slid over the image feature maps. The peak point in the object center heatmap is selected as the center of the target. The corresponding width, height and offset are collected to form the target bounding box, which is the referring expression comprehension result.
+
+The advantages of our proposed RCCF method can be summarized as three-folds:
+
+- The inference speed of our method reaches real-time (40 FPS) on a single GPU, which is 12 times faster than the two-stage methods.
+- Our method can be trained with referring expression dataset only, with no need for any additional object detection data. Moreover, our one-stage model can avoid error accumulation from the object detector in traditional two-stage methods.
+- RCCF achieves state-of-the-art performance on the RefClef, RefCOCO, RefCOCO+ and RefCOCOg datasets. Especially, on the RefClef dataset, our method outperforms the state-of-the-art methods by a significant margin, from $34.70\%$ to $63.79\%$ , almost doubling the previous best result.
+
+# 2. Related Work
+
+# 2.1. Referring Expression Comprehension
+
+Conventional methods for referring expression comprehension are mostly composed of two stages. In the first stage, given an input image, a pre-trained object detection network or an unsupervised method is applied to generate a set of object proposals. In the second stage, given an input expression, the best matching region is selected from the detected object proposals. With the development of deep learning, the two-stage methods have achieved great progress. Most two-stage methods focus on improving the second stage: they [20, 9, 35, 32, 27, 28] mainly explore how to mine context information from the language and the image or how to model the relationships between referents; for example, MAttNet [32] proposed a modular attention model to capture multi-modality context information.
+
+Though existing two-stage methods have achieved fairly good performance, there are some common problems. Firstly, the performance of two-stage methods is limited by the object detectors. Secondly, these methods spend a lot of time on object proposal generation and feature extraction for each proposal. Therefore, we propose to localize the target object directly given an expression with our correlation filtering based method.
+
+# 2.2. Correlation Filtering
+
+Correlation filtering was first proposed to train a linear template that discriminates between images and their translations. It is widely used in different areas of computer vision. Object classification [14, 7, 26] can be seen as a correlation filtering task, where the output image feature vector serves as a filter kernel that performs correlation filtering on the weight matrix of the last multi-layer perceptron. For single object tracking, which aims to localize an object in a video given the object region in the first frame, correlation filtering can be used to compare the first frame with the remaining ones. Early works [2, 8] in tracking first transfer the image into the Fourier domain and perform correlation filtering there. Siamese FC [1] proposed to directly learn a correlation layer in the spatial domain, comparing two image features extracted from a Siamese network.
+
+Inspired by the human visual perception mechanism, we believe that the process of performing language-based visual grounding can be analogized to the process of filter-based visual response activation. Specifically, people generally comprehend the semantic information of a sentence in a global way and form a feature template of the sentence description in their mind, then quickly perform attention matching on the image based on this template, wherein the salient region with the highest response value is considered as the target matching region. To this end, we formulate the problem of referring expression comprehension as a cross-modality correlation filtering process and solve it with a single-stage joint optimization paradigm.
+
+# 3. Method
+
+In this section, we introduce our proposed RCCF method for referring expression comprehension. Our goal is to localize the object described by the referring expression directly, without a proposal generation step. To this end, we formulate the referring expression comprehension task as a cross-modality template matching problem. In RCCF, we first localize the center point of the object described by the expression by performing correlation filtering on the image feature with a language-guided filter kernel. Then, we apply a regression module to regress the object size and center point offset. The peak value in the correlation heatmap, the regressed object size and the center point offset together form the target bounding box.
+
+# 3.1. Framework
+
+Let $Q$ represent a query sentence and $I \in \mathbb{R}^{H \times W \times 3}$ denote the image of width $W$ and height $H$ . Our aim is to find the object region described by the expression. The target object region is represented by its center point $(x_{t}, y_{t})$ and the object size $(w_{t}, h_{t})$ . Additionally, to recover the discretization error caused by the output stride, we predict a local offset $(\delta x_{t}, \delta y_{t})$ for the center point $t$ . To sum up, the referring expression comprehension can be formulated as a mapping function $(x_{t}, y_{t}, w_{t}, h_{t}, \delta x_{t}, \delta y_{t}) = \phi(Q, I)$ .
+
+As shown in Figure 2, our proposed RCCF is composed of three modules, i.e., expression and image encoder, correlation filtering as well as size and offset regression modules. The expression and image encoder module includes the language feature extractor $L(\cdot)$ and visual feature extractor $E(\cdot)$ . The extracted features are represented as $L_{Q}$ and $E_{I}$ respectively. The expression feature $L_{Q}$ is then mapped from the language domain to the visual domain by the cross-modality mapping function $M(\cdot)$ . The correlation filtering module treats the mapping result $M(L_{Q})$ as the filter (kernel) to convolve with the visual feature map $E_{I}$ and produces a heatmap $C \in \mathbb{R}^{\frac{H}{d} \times \frac{W}{d}}$ , where $d$ is the output stride. The peak value of $C$ indicates the center point of the object $(x,y)$ depicted by the expression. Moreover, the size and offset regression module predicts the object size $(w,h)$ and local offset of the center point $(\delta x,\delta y)$ . Next, we will introduce the three modules in detail.
+
+# 3.2. Expression and Image Encoder
+
+The expression encoder $L(\cdot)$ takes the expression as input, and produces a 512-D feature vector. We first embed the expression into a 1024-D vector, followed by a fully connected layer to transform the vector into 512-D. Then we feed the transformed feature into a Bi-LSTM to get the expression feature $L_{Q}$ .
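+
+A minimal PyTorch sketch of this encoder is given below; the vocabulary size, LSTM width and the way the final 512-D vector is pooled are illustrative assumptions, not details specified in the paper.
+
+```python
+import torch
+import torch.nn as nn
+
+class ExpressionEncoder(nn.Module):
+    """Word embedding (1024-D) -> fully connected layer (512-D) -> Bi-LSTM."""
+    def __init__(self, vocab_size=10000, hidden=256):
+        super().__init__()
+        self.embed = nn.Embedding(vocab_size, 1024)
+        self.fc = nn.Linear(1024, 512)
+        # bidirectional LSTM: 2 * 256 = 512-D expression feature
+        self.lstm = nn.LSTM(512, hidden, batch_first=True, bidirectional=True)
+
+    def forward(self, tokens):                 # tokens: (B, T) word indices
+        x = self.fc(self.embed(tokens))        # (B, T, 512)
+        _, (h, _) = self.lstm(x)
+        # concatenate the final forward/backward hidden states -> (B, 512)
+        return torch.cat([h[-2], h[-1]], dim=-1)
+```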
+
+The image encoder $E(\cdot)$ adopts the Deep Layer Aggregation (DLA) [31] architecture with deformable convolution [4]. DLA is an image classification network with hierarchical skip connections. Following CenterNet [36], we use the modified DLA network with 34 layers, which replaces the skip connections with deformable convolutions. Because a referring expression may contain various kinds of semantic information, such as attributes, relationships and spatial locations, we use three levels of visual features to match the expression well. As shown in Figure 2, we extract three levels of features $[E_I^1,E_I^2,E_I^3 ] = E(I)$ from the DLA net, which are transformed into a unified size $\frac{H}{d}\times \frac{W}{d}$ from $\frac{H}{8d}\times \frac{W}{8d}$ , $\frac{H}{4d}\times \frac{W}{4d}$ and $\frac{H}{2d}\times \frac{W}{2d}$ , respectively. The sizes of $[E_I^1,E_I^2,E_I^3 ]$ are all $64\times \frac{H}{d}\times \frac{W}{d}$ . When computing the correlation map $\hat{C}$ , all three levels of features are utilized. During the regression process, only $E_I^1$ , the feature with the highest resolution, is used for computational efficiency.
+
+# 3.3. Cross-modality Correlation Filtering
+
+The aim of cross-modality correlation filtering is to localize the center of the target box $(x,y)$ . It contains three steps: language-guided kernel generation, the cross-modality correlation operation and correlation map fusion. Firstly, we utilize three different linear functions to generate three filter kernels $[k_{1},k_{2},k_{3}] = [M_{1}(L_{Q}),M_{2}(L_{Q}),M_{3}(L_{Q})]$ from the expression feature $L_{Q}$ . The three fully connected layers $M_1(\cdot),M_2(\cdot)$ and $M_3(\cdot)$ serve as the cross-modality mapping functions that project from the expression space to the visual space. Each kernel is a 64-D feature vector which is then reshaped into a $64\times 1\times 1$ filter for the subsequent operations. Secondly, we perform the correlation operation on the three levels of visual features with their corresponding language-mapped kernels $[C^1,C^2,C^3] = [k_1*E_I^1,k_2*E_I^2,k_3*E_I^3]$ , where $*$ denotes the convolution operation. Thirdly, the three correlation maps are pixel-wise averaged and fed into an activation function, $\hat{C} = \operatorname{Sigmoid}(\frac{C^1 + C^2 + C^3}{3})$ . The sizes of $\hat{C}$ , $C^1$ , $C^2$ and $C^3$ are all $\frac{H}{d}\times \frac{W}{d}$ . The location with the highest score in $\hat{C}$ is the center point of the target object.
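+
+The following PyTorch sketch illustrates this step under our reading of the text; treating each language kernel as a per-sample $1\times 1$ convolution via a grouped convolution is an implementation choice of this sketch, not necessarily the authors'.
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class CorrelationFiltering(nn.Module):
+    """Map the 512-D expression feature to three 64-D kernels, correlate each
+    kernel with its level of visual features, then average and apply a sigmoid."""
+    def __init__(self, lang_dim=512, vis_dim=64):
+        super().__init__()
+        self.mappers = nn.ModuleList(nn.Linear(lang_dim, vis_dim) for _ in range(3))
+
+    def forward(self, lang_feat, vis_feats):
+        # lang_feat: (B, 512); vis_feats: list of three (B, 64, H/d, W/d) maps
+        corr_maps = []
+        for mapper, feat in zip(self.mappers, vis_feats):
+            b, c, h, w = feat.shape
+            kernel = mapper(lang_feat).view(b, c, 1, 1)          # (B, 64, 1, 1)
+            # one kernel per sample: grouped convolution over a reshaped batch
+            corr = F.conv2d(feat.reshape(1, b * c, h, w), kernel, groups=b)
+            corr_maps.append(corr.view(b, 1, h, w))
+        return torch.sigmoid(sum(corr_maps) / 3.0)               # (B, 1, H/d, W/d)
+```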
+
+We train the center point prediction network following [15, 36]. For the ground-truth center point $(\tilde{x}^g,\tilde{y}^g)$ , we compute a low-resolution equivalent $(x^{g},y^{g}) = \lfloor \frac{(\tilde{x}^{g},\tilde{y}^{g})}{d}\rfloor$ by considering the output stride $d$ . We use the Gaussian kernel $C_{xy} = \exp \left(-\frac{(x - x^g)^2 + (y - y^g)^2}{2\sigma_t^2}\right)$ to splat the ground-truth center point
+
+
+Figure 2. Overview of the proposed RCCF framework. a) Expression and Image Encoder: Bi-LSTM and DLA structure are used for expression and visual feature extraction. b) Cross-modality Correlation Filtering: the extracted language feature is mapped into three different filter kernels. Then we perform correlation filtering on three levels of image features with the corresponding kernel to generate three correlation maps respectively. Finally, we fuse the three correlation maps by pixel-wise averaging. The center point corresponds to the peak value of the fused heatmap. c) Size and Offset Regression: the 2-dim object size and the local offset for the center point are regressed based on the last-level image feature only. The target object region is obtained by combining the estimated center point, the object size and the local offset.
+
+into a heatmap $C \in [0,1]^{\frac{W}{d} \times \frac{H}{d}}$ , where $C_{xy}$ is the value of $C$ at the spatial location $(x,y)$ and $\sigma_t$ is the standard deviation corresponding to the object size. The training objective is a penalty-reduced pixel-wise logistic regression with focal loss [16]:
+
+$$
+L_{c} = - \sum_{x y} \begin{cases} \left(1 - \hat{C}_{x y}\right)^{\alpha} \log \left(\hat{C}_{x y}\right) & \text{if } C_{x y} = 1 \\ \left(1 - C_{x y}\right)^{\beta} \left(\hat{C}_{x y}\right)^{\alpha} \log \left(1 - \hat{C}_{x y}\right) & \text{otherwise} \end{cases} \tag{1}
+$$
+
+where $\alpha$ and $\beta$ are hyper-parameters of the focal loss. We empirically set $\alpha$ to 2, and $\beta$ to 4 in our experiments.
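+
+A compact sketch of the Gaussian target construction and the loss in Eq. (1) is shown below; the normalisation of the loss is kept exactly as in the equation, and any batching or per-object normalisation is an assumption left out here.
+
+```python
+import torch
+
+def gaussian_target(h, w, cx, cy, sigma):
+    """Splat a ground-truth centre (cx, cy) into an (h, w) heatmap C."""
+    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
+    return torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
+
+def center_focal_loss(pred, target, alpha=2, beta=4, eps=1e-6):
+    """Penalty-reduced pixel-wise focal loss of Eq. (1); `pred` is the sigmoid
+    heatmap C_hat and `target` the Gaussian ground truth C."""
+    pos = target.eq(1).float()
+    neg = 1.0 - pos
+    pos_loss = pos * (1 - pred) ** alpha * torch.log(pred + eps)
+    neg_loss = neg * (1 - target) ** beta * pred ** alpha * torch.log(1 - pred + eps)
+    return -(pos_loss + neg_loss).sum()
+```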
+
+# 3.4. Size and Offset Regression
+
+As shown in Figure 2, the module contains two parallel branches. The size regression branch predicts $\hat{W} \in \mathbb{R}^{\frac{H}{d} \times \frac{W}{d}}$ and $\hat{H} \in \mathbb{R}^{\frac{H}{d} \times \frac{W}{d}}$ , while the offset regression branch estimates $\hat{\Delta} x \in \mathbb{R}^{\frac{H}{d} \times \frac{W}{d}}$ and $\hat{\Delta} y \in \mathbb{R}^{\frac{H}{d} \times \frac{W}{d}}$ . The regressed size and offset maps correspond pixel-wise to the estimated center point heatmap $\hat{C}$ .
+
+Both branches take the visual feature $E_{I}^{1}$ as input. The regression is conducted without using any expression features. The reason is that the spatial structure information is important for the regression, and adding expression features may destroy the rich spatial information in the visual features. Both the size and offset regression branches contain a $3 \times 3$ convolutional layer with ReLU followed by a $1 \times 1$ convolutional layer.
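+
+A sketch of one such branch is given below; the hidden channel width (256) is an assumption, since the paper only specifies the kernel sizes.
+
+```python
+import torch.nn as nn
+
+def regression_head(in_ch=64, out_ch=2, mid_ch=256):
+    """One regression branch: 3x3 convolution + ReLU followed by a 1x1 convolution."""
+    return nn.Sequential(
+        nn.Conv2d(in_ch, mid_ch, kernel_size=3, padding=1),
+        nn.ReLU(inplace=True),
+        nn.Conv2d(mid_ch, out_ch, kernel_size=1),
+    )
+
+# one 2-channel branch for (W_hat, H_hat) and one for the (dx_hat, dy_hat) offsets
+size_head, offset_head = regression_head(), regression_head()
+```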
+
+$L1$ loss function is used during training. The object size loss $L_{size}$ and the local offset regression loss $L_{off}$ are defined as:
+
+$$
+L _ {s i z e} = \left| \hat {W} _ {x ^ {g} y ^ {g}} - w ^ {g} \right| + \left| \hat {H} _ {x ^ {g} y ^ {g}} - h ^ {g} \right| \tag {2}
+$$
+
+$$
+L _ {o f f} = \left| \hat {\Delta} x _ {x ^ {g} y ^ {g}} - \delta x ^ {g} \right| + \left| \hat {\Delta} y _ {x ^ {g} y ^ {g}} - \delta y ^ {g} \right|,
+$$
+
+where $w^g$ and $h^g$ are the ground-truth width and height of the target box, and $\delta x^g = \frac{\tilde{x}^g}{d} - x^g$ and $\delta y^g = \frac{\tilde{y}^g}{d} - y^g$ form the ground-truth offset vector. $\hat{W}_{x^g y^g}$ is the value of $\hat{W}$ at the spatial location $(x^g, y^g)$ , while $\hat{H}_{x^g y^g}$ , $\hat{\Delta} x_{x^g y^g}$ and $\hat{\Delta} y_{x^g y^g}$ are defined similarly. Note that the regression loss acts only at the location of the center point $(x^g, y^g)$ ; all other locations are ignored.
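+
+A sketch of these losses evaluated only at the ground-truth centre locations (batched form; gathering the per-sample ground-truth values is assumed to happen beforehand) could look as follows.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def regression_loss(size_map, offset_map, centers, sizes_gt, offsets_gt):
+    """L1 losses of Eq. (2), taken only at the centre locations (x^g, y^g).
+    size_map, offset_map: (B, 2, H/d, W/d); centers: (B, 2) integer (x^g, y^g);
+    sizes_gt, offsets_gt: (B, 2) ground-truth sizes and offsets."""
+    idx = torch.arange(centers.shape[0])
+    xs, ys = centers[:, 0], centers[:, 1]
+    pred_size = size_map[idx, :, ys, xs]        # (B, 2) -> (w_hat, h_hat)
+    pred_off = offset_map[idx, :, ys, xs]       # (B, 2) -> (dx_hat, dy_hat)
+    return F.l1_loss(pred_size, sizes_gt), F.l1_loss(pred_off, offsets_gt)
+```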
+
+# 3.5. Loss and Inference
+
+The final loss is the weighted summation of three loss terms:
+
+$$
+L o s s = L _ {c} + \lambda_ {\text {s i z e}} L _ {\text {s i z e}} + \lambda_ {\text {o f f}} L _ {\text {o f f}} \tag {3}
+$$
+
+where we set $\lambda_{size}$ to 0.1 and $\lambda_{off}$ to 1. $\lambda_{size}$ is equivalent to a normalized coefficient for the object size.
+
+During inference, we select the point $(x_{t},y_{t})$ with the highest confidence score in the heatmap $\hat{C}$ as the target center point. The target size and offset are obtained from the corresponding position in the $\hat{W}$ , $\hat{H}$ , $\hat{\Delta} x$ and $\hat{\Delta} y$ as $\hat{W}_{x_t,y_t}$ , $\hat{H}_{x_t,y_t}$ , $\hat{\Delta} x_{x_t,y_t}$ and $\hat{\Delta} y_{x_t,y_t}$ . The coordinates of the top-left and bottom-right corner of the target box are obtained by:
+
+$$
+\left( x_{t} + \hat{\Delta} x_{x_{t}, y_{t}} - \frac{\hat{W}_{x_{t}, y_{t}}}{2},\; y_{t} + \hat{\Delta} y_{x_{t}, y_{t}} - \frac{\hat{H}_{x_{t}, y_{t}}}{2},\; x_{t} + \hat{\Delta} x_{x_{t}, y_{t}} + \frac{\hat{W}_{x_{t}, y_{t}}}{2},\; y_{t} + \hat{\Delta} y_{x_{t}, y_{t}} + \frac{\hat{H}_{x_{t}, y_{t}}}{2} \right). \tag{4}
+$$
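+
+Decoding a prediction at inference time then amounts to the following sketch; scaling the box back to the input resolution by the output stride is an assumption of this sketch rather than something Eq. (4) states explicitly.
+
+```python
+import torch
+
+def decode_box(center_heatmap, size_map, offset_map, stride=4):
+    """Decode one box from the peak of the centre heatmap following Eq. (4).
+    All maps are assumed to have batch size 1."""
+    c = center_heatmap[0, 0]                      # (H/d, W/d)
+    y_t, x_t = divmod(int(torch.argmax(c)), c.shape[1])
+    w, h = size_map[0, :, y_t, x_t].tolist()
+    dx, dy = offset_map[0, :, y_t, x_t].tolist()
+    cx, cy = x_t + dx, y_t + dy
+    box = [cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2]
+    return [v * stride for v in box]              # map back to input-image pixels
+```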
+
+# 4. Experiments
+
+In this section, we first introduce the experimental setting and implementation details, and then evaluate our method on four public benchmarks comparing to the state-of-the-art methods. After that, we analyze in detail the effectiveness of each component in our framework through a set of ablation experiments. Finally, we conduct an efficiency analysis followed by the qualitative results analysis.
+
+# 4.1. Experimental Setting
+
+Dataset. The experiments are conducted and evaluated on four common referring expression benchmarks, including RefClef [11], RefCOCO [11], RefCOCO+ [11] and RefCOCOg [20]. RefClef is also known as ReferItGame and is a subset of the ImageCLEF dataset. The other three datasets are all built on MS COCO images. RefCOCO and RefCOCO+ were collected in an interactive game, where the referring expressions tend to be short phrases. Compared to RefCOCO, RefCOCO+ forbids using absolute location words and pays more attention to appearance descriptions. To produce longer expressions, RefCOCOg was collected in a non-interactive setting. RefClef has 130,363 expressions for 99,296 objects in 19,997 images. RefCOCO has 142,210 expressions for 50,000 objects in 19,994 images, RefCOCO+ has 141,565 expressions for 49,856 objects in 19,992 images, and RefCOCOg has 104,560 expressions for 54,822 objects in 26,711 images.
+
+Both RefCOCO and RefCOCO+ are divided into four subsets: 'train', 'val', 'testA' and 'testB'. The focus of 'testA' and 'testB' is different: an image contains multiple people in 'testA' and multiple objects in 'testB'. For RefCOCOg, we follow the split in [32]. For a fair comparison, we used the split released by [35] for RefClef.
+
+Evaluation Metric. Following the detection proposal setting in the previous works, we use the Prec@0.5 to evaluate our method, where a predicted region is correct if its intersection over union (IOU) with the ground-truth bounding box is greater than 0.5.
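+
+For reference, a minimal sketch of this metric on (x1, y1, x2, y2) boxes; this is an illustrative helper, not the original evaluation code.
+
+```python
+def iou(box_a, box_b):
+    """Intersection over union of two (x1, y1, x2, y2) boxes."""
+    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
+    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
+    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
+    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
+    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
+    return inter / (area_a + area_b - inter)
+
+def prec_at_05(pred_boxes, gt_boxes):
+    """Prec@0.5: fraction of predictions with IoU > 0.5 against the ground truth."""
+    hits = sum(iou(p, g) > 0.5 for p, g in zip(pred_boxes, gt_boxes))
+    return hits / len(gt_boxes)
+```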
+
+| Backbone | Params (Million) | FLOPs (Billion) | Top-1 Error (%) |
+| --- | --- | --- | --- |
+| VGG16 | 138 | 15.3 | 28.07 |
+| ResNet-101 | 44.5 | 7.6 | 21.75 |
+| DLA-34 | 18.4 | 3.5 | 25.32 |
+
+# 4.2. Implementation Details
+
+We set the hyper-parameters following CenterNet [36]. Our RCCF method is also robust to these hyper-parameters. All experiments are conducted on a Titan Xp GPU with CUDA 9.0 and an Intel Xeon CPU E5-2680v4@2.4G.
+
+The resolution of the input image is $512 \times 512$ , and we set the output stride to 4. Thereby the output resolution is $128 \times 128$ . Our proposed model is trained with Adam [12]. We train on 8 GPUs with a batch size of 128 for 80 epochs, with a learning rate of 5e-4 which is decreased by a factor of 10 at epoch 60 and again at epoch 70. We use random shift and random scaling as data augmentation. No augmentation is used during inference. The visual encoder is initialized with weights pretrained on COCO's training images excluding the val/test sets of the RefCOCO series datasets, and the language encoder and the output heads are randomly initialized. For the ablation study, we also conduct experiments with the visual encoder initialized with an ImageNet [5] pretrained model.
+
+Table 1. The parameters, computation and top-1 error on ImageNet validation of the three backbone networks used in referring expression comprehension methods.
+
+| Method | Prec@0.5 (%) |
+| --- | --- |
| SCRC [10] | 17.93 |
| GroundR [25] | 26.93 |
| MCB [3] | 26.54 |
| CMN [9] | 28.33 |
| VC [35] | 31.13 |
| GGRE [19] | 31.85 |
| MNN [3] | 32.21 |
| CITE [23] | 34.13 |
| IGOP [30] | 34.70 |
| Ours | 63.79 |
+
+Table 2. Comparison with the state-of-the-art methods on RefClef.
+
+# 4.3. Comparison to the State-of-the-art
+
+We compare RCCF to the state-of-the-art methods on four public benchmarks. The comparison results on the RefClef dataset are shown in Table 2, while the results on the other three datasets are presented in Table 3. The previous methods use a 16-layer VGGNet [26] or a 101-layer ResNet [7] as the image encoder, while our proposed RCCF adopts DLA-34 [31] to encode images. The reason is that VGG16 and ResNet-101 are not suitable for keypoint-estimation-like tasks according to [15, 6].
+
+For a fair comparison, we compare the two backbone networks with DLA-34 from three aspects in Table 1. We can see that DLA-34 has the fewest parameters and computations (FLOPs), and its performance in image classification on ImageNet [5] is worse than that of ResNet-101.
+
+Therefore, the performance gain of our RCCF comes from the framework itself, instead of more parameters or a more complex backbone network. The baselines we compare with mainly use Faster-RCNN [24], pretrained on object detection datasets, i.e., COCO and Visual Genome, to generate object proposals first and then match the expression with all object proposals.
+
+RefClef. The results on RefClef are presented in Table 2. Compared to the state-of-the-art methods on RefClef, our method improves the state of the art by a significant margin, from $34.70\%$ to $63.79\%$ , almost doubling the precision.
+
+RefCOCO, RefCOCO+ and RefCOCOg. As shown in Table 3, our method outperforms existing methods on all evaluation sets of RefCOCO and RefCOCO+, and achieves comparable performance with the state-of-the-art method on RefCOCOg. Our result is slightly inferior to MAttNet [32] on the RefCOCOg dataset. The performance gain of MAttNet partly comes from additional supervision, such as attributes and class labels of region proposals, while our method only utilizes the language-image pairs. Additionally, MAttNet uses a more complex backbone, ResNet-101, while we only use DLA-34.
+
+In conclusion, our method achieves strong performance on all four datasets. In addition, the two-stage methods achieve much higher precision on the three RefCOCO series datasets than on RefClef. This is because all three RefCOCO series datasets are subsets of COCO, so the two-stage methods can train a very accurate detector on the COCO object detection dataset, while RefClef does not have such a large corresponding object detection dataset. Therefore, traditional two-stage methods depend heavily on the object detector performance and the object detection dataset, while our novel RCCF framework avoids the explicit object detection stage and tackles the referring expression problem directly.
+
+# 4.4. Ablation Studies
+
+In this section, we perform ablation studies from five different aspects on RefCOCO dataset to analyse the rationality and effectiveness of the proposed components in RCCF. The results are shown in Table 4.
+
+Fusion Strategy. In the first two rows, we report the results of two different fusion manners for the output correlation maps. In the first manner, we fuse the correlation maps by taking the pixel-wise maximum value: we concatenate the three output correlation maps and take the maximum across all channels at each pixel. In the second manner, we generate the output heatmap by concatenating the three correlation maps, followed by a $1 \times 1$ convolutional layer. The results can be seen in the first and second rows of Table 4. We conclude that both the maximum fusion and the concatenation are not as good as the average fusion shown in row 10.
+
+Filter Kernel Setting. Here we perform ablation studies on different variations of the language filters (kernels). The $3 \times 3$ Filter (row 3) is obtained by expanding the language filter channels by 9 times and reshaping the result into a $3 \times 3$ kernel. Then, we perform correlation filtering using the $3 \times 3$ kernels. The result is almost the same as 'Ours' with the $1 \times 1$ kernel (row 10). Considering the additional computational cost, we choose to use the $1 \times 1$ kernel.
+
+In row 4, we generate only one filter from the language feature and perform correlation filtering on the three levels of visual features with the same kernel. In this case, the precision drops by about 3 points. This shows that the diversity of the language kernels is important to match the visual features of different levels.
+
+Single Level Visual Feature. In row 5, we perform the correlation filtering based only on the last level of the visual feature, $E_{I}^{1}$ , with a single language kernel. The performance drops a lot from "Ours", but only a little from the single-language-filter, multi-level-visual-features setting in row 4. Therefore, it can be concluded that the different language filters are sensitive to the different levels of visual features.
+
+Language-guided Regression. To verify whether the feature filtered by the language filter is suitable for the regression, we feed the concatenated feature of the three correlation maps into two convolutional layers in two regression branches. As shown in row 6, the performance drops a lot, about 6 points. Therefore, it is not a good choice to use language-guided features to regress the object size and offset in our RCCF framework.
+
+Expression & Image Encoder. Rows 7 to 9 of Table 4 show our method with various encoders. In row 7, to explore the effect of the visual encoder pretrained model on the performance, we initialize the DLA-34 with ImageNet pretraining instead of COCO object detection pretraining. The result drops by about 2 points, but is still comparable to the state-of-the-art method. This proves that our method can also work well without any prior knowledge from object detection. In row 8, we use GloVe [22] as the word embedding. There is little change in the performance, so our method is robust to the two different language embeddings. In row 9, we replace the visual encoder with
+
+| | Method | Visual Encoder | RefCOCO testA | RefCOCO testB | RefCOCO+ testA | RefCOCO+ testB | RefCOCOg test | Time (ms) |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | MMI [20] | VGG16 | 64.90 | 54.51 | 54.03 | 42.81 | - | - |
| 2 | NegBag [21] | VGG16 | 58.60 | 56.40 | - | - | 49.50 | - |
| 3 | CG [19] | VGG16 | 67.94 | 55.18 | 57.05 | 43.33 | - | - |
| 4 | Attr [18] | VGG16 | 72.08 | 57.29 | 57.97 | 46.20 | - | - |
| 5 | CMN [9] | VGG16 | 71.03 | 65.77 | 54.32 | 47.76 | - | - |
| 6 | Speaker [33] | VGG16 | 67.64 | 55.16 | 55.81 | 43.43 | - | - |
| 7 | Speaker+Listener+Reinforcer [34] | VGG16 | 72.94 | 62.98 | 58.68 | 47.68 | - | 1235 |
| 8 | Speaker+Listener+Reinforcer [34] | VGG16 | 72.88 | 63.43 | 60.43 | 48.74 | - | 1332 |
| 9 | VC[35] | VGG16 | 73.33 | 67.44 | 58.40 | 53.18 | - | 383 |
| 10 | ParallelAttn [37] | VGG16 | 75.31 | 65.52 | 61.34 | 50.86 | - | - |
| 11 | LGRANs [27] | VGG16 | 76.6 | 66.4 | 64.0 | 53.4 | - | - |
| 12 | DGA [29] | VGG16 | 78.42 | 65.53 | 69.07 | 51.99 | 63.28 | 330 |
| 13 | Speaker+Listener+Reinforcer [34] | ResNet-101 | 73.71 | 64.96 | 60.74 | 48.80 | 59.63 | - |
| 14 | Speaker+Listener+Reinforcer [34] | ResNet-101 | 73.10 | 64.85 | 60.04 | 49.56 | 59.21 | - |
| 15 | MAttNet [32] | ResNet-101 | 80.43 | 69.28 | 70.26 | 56.00 | 67.01 | 314 |
| 16 | Ours | DLA-34 | 81.06 | 71.85 | 70.35 | 56.32 | 65.73 | 25 |
+
+Table 3. Comparison with state-of-the-art approaches on RefCOCO, RefCOCO+ and RefCOCOg.
+
+
+Figure 3. Visualization results on the RefCOCO series datasets. The first row (a-f) shows comparisons of our approach with the state-of-the-art method MAttNet; the second row (g-l) shows representative failure cases of our method. The red bounding box is the prediction of our method, the blue bounding box is the prediction of MAttNet, and the green bounding box is the corresponding ground truth. Panel expressions: (a) "the middle piece of the chicken rollup"; (b) "man's hand with ring on it"; (c) "table behind pizza box"; (d) "the corner of the gray table visible to the right of the hand"; (e) "a steel chair near a lady and back of the man"; (f) "space between two train cars"; (g) "baseball player holding the bat"; (h) "front guy in white"; (i) "woman under umbrella left"; (j) "woman"; (k) "person behind fence on the left white hair"; (l) "blond lady standing behind girl sitting with glasses".
+
+a deeper network, Hourglass-104 [15], in a single-level setting. Compared to row 5, this setting improves the results only a little, but it is much slower than our basic setting with DLA-34 during both inference and training: more than 100 hours are needed for training and the inference speed is much lower.
+
+# 4.5. Efficiency Analysis
+
+Inference. As can be seen in Figure 1, our model runs at $25\mathrm{ms}$ per image on a single Titan Xp GPU and is the only real-time method in the referring expression comprehension area. In comparison, our method is 12 times faster than the state-of-the-art two-stage method MAttNet, which costs $314\mathrm{ms}$ per image. For a more detailed comparison, the inference times per image of the first and second stages of MAttNet are $262\mathrm{ms}$ and $52\mathrm{ms}$ , respectively. The cost of either stage alone is longer than the total inference time of our method. More comparisons of timing and precision can be found in Figure 1.
+
+Training. Our method is also fast to train. Training with DLA-34 on RefCOCO takes 35 hours in our synchronized 8-GPU implementation (1.78s per mini-batch of 128 image-
+
+| | Method | RefCOCO testA | RefCOCO testB | Time (ms) |
+| --- | --- | --- | --- | --- |
| 1 | Maximum Fusion | 77.16 | 69.15 | 25 |
| 2 | Concatenation | 79.85 | 69.83 | 26 |
| 3 | 3x3 Filter | 80.83 | 72.01 | 26 |
| 4 | Single Language Filter | 77.66 | 68.87 | 24 |
| 5 | Single Level Visual Feature | 77.14 | 68.50 | 23 |
| 6 | Language-guided Regression | 75.13 | 66.16 | 24 |
| 7 | ImageNet Pretrained | 78.93 | 66.73 | 25 |
| 8 | Glove Expression Encoder | 81.05 | 71.17 | 25 |
| 9 | Hourglass Image Encoder | 78.12 | 69.38 | 80 |
| 10 | Ours | 81.06 | 71.85 | 25 |
+
+Table 4. Ablation experiments on RefCOCO dataset.
+
+language pairs).
+
+# 4.6. Qualitative Results Analyses
+
+Correlation Map. Figure 4 shows the correlation map for the object center. Given different expressions for the same image, the correlation map responds at different locations. Moreover, the response is very high in the area near the center of the object described by the expression and very small elsewhere. This shows that our model is able to match the expression and visual features well.
+
+Comparison to the State-of-the-art. In the first row of Figure 3, we compare our method with the state-of-the-art method MAttNet. Our method can accurately localize the target objects under the guidance of the language, even when the objects are hard for common object detectors to detect. For example, although the described objects "piece" (Figure 3(a)) and "space" (Figure 3(f)) are very abstract and not included in the COCO categories, our method can still find them through the expression. This shows that our method matches expressions and visual features well. In contrast, MAttNet depends on an object detector and fails when the object category is beyond the scope of the detector's category set.
+
+Failure Case Analysis. The second row of Figure 3 illustrates some representative failure cases. As shown in Figure 3(g), we find the right object but fail to locate the bounding-box accurately. In Figure 3(h), the target object is heavily occluded and the model cannot capture enough appearance information. Ground-truth errors also occur; for example, in Figure 3(j) more than one object matches the expression. Some failures arise when the target object lies in the background, making it difficult to find the appearance features described by the expression. Finally, when the expression is very long and complex, such as in Figure 3(l), our model may fail to understand it well. We leave solving these failure cases as interesting future work.
+
+Figure 4. Visualization of visual grounding results and correlation maps for example expressions such as "right bottom partial black", "guy with red pants standing", "guy all the way right in front", "guy in the center most to the front", "the green cup on the top right has the word after on it", and "tall bottle with yellow tag". On the left image, the red bounding-box represents the prediction of our method while the green bounding-box represents the ground-truth. The right image shows the corresponding predicted correlation map for the center point of the object (pointed by the blue arrow).
+
+# 5. Conclusion and Future Works
+
+In this paper, we propose a real-time and high-performance framework for referring expression comprehension. Unlike previous two-stage methods, our proposed RCCF directly localizes the object given an expression by predicting the object center from a correlation map computed between the referent and the image. RCCF achieves state-of-the-art performance on four referring expression datasets at real-time speed. For future work, on the one hand, we plan to explore how to capture more context information from the expression and the image, and thus understand the expression better. On the other hand, referring expressions are difficult to annotate, so we want to explore how to utilize other, more easily annotated types of datasets, such as object detection and image captioning, to train our method.
+
+Acknowledgement This work was partially supported by the State Key Development Program (Grant 2016YFB1001004), Sensetime Ltd. Group, the National Natural Science Foundation of China (Grant 61876177, Grant 61976250), Beijing Natural Science Foundation (L182013, 4202034), Zhejiang Lab (No. 2019KD0AB04), and Fundamental Research Funds for the Central Universities.
+
+# References
+
+[1] Luca Bertinetto, Jack Valmadre, Joao F Henriques, Andrea Vedaldi, and Philip HS Torr. Fully-convolutional siamese networks for object tracking. In ECCV, 2016.
+[2] David S Bolme, J Ross Beveridge, Bruce A Draper, and Yui Man Lui. Visual object tracking using adaptive correlation filters. In CVPR, 2010.
+[3] Kan Chen, Rama Kovvuri, Jiyang Gao, and Ram Nevatia. Msrc: Multimodal spatial regression with semantic context for phrase grounding. In ICMR, 2017.
+[4] Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolutional networks. In ICCV, 2017.
+[5] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009.
+[6] Kaiwen Duan, Song Bai, Lingxi Xie, Honggang Qi, Qingming Huang, and Qi Tian. Centernet: Object detection with keypoint triplets. arXiv preprint arXiv:1904.08189, 2019.
+[7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
+[8] João F Henriques, Rui Caseiro, Pedro Martins, and Jorge Batista. High-speed tracking with kernelized correlation filters. TPAMI, 2014.
+[9] Ronghang Hu, Marcus Rohrbach, Jacob Andreas, Trevor Darrell, and Kate Saenko. Modeling relationships in referential expressions with compositional modular networks. In CVPR, 2017.
+[10] Ronghang Hu, Huazhe Xu, Marcus Rohrbach, Jiashi Feng, Kate Saenko, and Trevor Darrell. Natural language object retrieval. In CVPR, 2016.
+[11] Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara Berg. ReferItGame: Referring to objects in photographs of natural scenes. In EMNLP, 2014.
+[12] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+[13] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. IJCV, 2017.
+[14] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
+[15] Hei Law and Jia Deng. Cornernet: Detecting objects as paired keypoints. In ECCV, 2018.
+[16] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In ICCV, 2017.
+[17] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014.
+
+[18] Jingyu Liu, Liang Wang, and Ming-Hsuan Yang. Referring expression generation and comprehension via attributes. In ICCV, 2017.
+[19] Ruotian Luo and Gregory Shakhnarovich. Comprehension-guided referring expressions. In CVPR, 2017.
+[20] Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan L Yuille, and Kevin Murphy. Generation and comprehension of unambiguous object descriptions. In CVPR, 2016.
+[21] Varun K Nagaraja, Vlad I Morariu, and Larry S Davis. Modeling context between objects for referring expression understanding. In ECCV, 2016.
+[22] Jeffrey Pennington, Richard Socher, and Christopher Manning. Glove: Global vectors for word representation. In EMNLP, 2014.
+[23] Bryan A Plummer, Paige Kordas, M Hadi Kiapour, Shuai Zheng, Robinson Piramuthu, and Svetlana Lazebnik. Conditional image-text embedding networks. In ECCV, 2018.
+[24] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In NIPS, 2015.
+[25] Anna Rohrbach, Marcus Rohrbach, Ronghang Hu, Trevor Darrell, and Bernt Schiele. Grounding of textual phrases in images by reconstruction. In ECCV, 2016.
+[26] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
+[27] Peng Wang, Qi Wu, Jiewei Cao, Chunhua Shen, Lianli Gao, and Anton van den Hengel. Neighbourhood watch: Referring expression comprehension via language-guided graph attention networks. In CVPR, 2019.
+[28] Sibei Yang, Guanbin Li, and Yizhou Yu. Cross-modal relationship inference for grounding referring expressions. In CVPR, 2019.
+[29] Sibei Yang, Guanbin Li, and Yizhou Yu. Dynamic graph attention for referring expression comprehension. In ICCV, 2019.
+[30] Raymond Yeh, Jinjun Xiong, Wen-Mei Hwu, Minh Do, and Alexander Schwing. Interpretable and globally optimal prediction for textual grounding using image concepts. In NIPS, 2017.
+[31] Fisher Yu, Dequan Wang, Evan Shelhamer, and Trevor Darrell. Deep layer aggregation. In CVPR, 2018.
+[32] Licheng Yu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, Mohit Bansal, and Tamara L Berg. Mattnet: Modular attention network for referring expression comprehension. In CVPR, 2018.
+[33] Licheng Yu, Patrick Poirson, Shan Yang, Alexander C Berg, and Tamara L Berg. Modeling context in referring expressions. In ECCV, 2016.
+[34] Licheng Yu, Hao Tan, Mohit Bansal, and Tamara L Berg. A joint speaker-listener-reinforcer model for referring expressions. In CVPR, 2017.
+[35] Hanwang Zhang, Yulei Niu, and Shih-Fu Chang. Grounding referring expressions in images by variational context. In CVPR, 2018.
+
+[36] Xingyi Zhou, Dequan Wang, and Philipp Krahenbuhl. Objects as points. arXiv preprint arXiv:1904.07850, 2019.
+[37] Bohan Zhuang, Qi Wu, Chunhua Shen, Ian Reid, and Anton van den Hengel. Parallel attention: A unified framework for visual object discovery through dialogs and queries. In CVPR, 2018.
\ No newline at end of file
diff --git a/arealtimecrossmodalitycorrelationfilteringmethodforreferringexpressioncomprehension/images.zip b/arealtimecrossmodalitycorrelationfilteringmethodforreferringexpressioncomprehension/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..5991e73800d1c368d3ecc34149e70007d845d5bb
--- /dev/null
+++ b/arealtimecrossmodalitycorrelationfilteringmethodforreferringexpressioncomprehension/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:944cfae8b55094e53ca4995faaa4f2b225f0c048fd7c11e2ec469b5f821e2212
+size 584602
diff --git a/arealtimecrossmodalitycorrelationfilteringmethodforreferringexpressioncomprehension/layout.json b/arealtimecrossmodalitycorrelationfilteringmethodforreferringexpressioncomprehension/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..a4de9c4630beaabca0baa34462302eb368025471
--- /dev/null
+++ b/arealtimecrossmodalitycorrelationfilteringmethodforreferringexpressioncomprehension/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:21ce434422936fb68a6c94071eb81cf82eabd48ff216af793ab356446482f30d
+size 411079
diff --git a/aselfsupervisedapproachforadversarialrobustness/ca2055e5-23a2-4290-a121-4b5cd31fedae_content_list.json b/aselfsupervisedapproachforadversarialrobustness/ca2055e5-23a2-4290-a121-4b5cd31fedae_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a748d3594c2d931dda7a059e5875dc57a576d6d8
--- /dev/null
+++ b/aselfsupervisedapproachforadversarialrobustness/ca2055e5-23a2-4290-a121-4b5cd31fedae_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec14a7d1a88636d2cbb4118e3730aeb25b22772fe78741b49bc8129f9526cac9
+size 84773
diff --git a/aselfsupervisedapproachforadversarialrobustness/ca2055e5-23a2-4290-a121-4b5cd31fedae_model.json b/aselfsupervisedapproachforadversarialrobustness/ca2055e5-23a2-4290-a121-4b5cd31fedae_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..6ff1b7ee055c455adaf68e0eec56cba4b349231d
--- /dev/null
+++ b/aselfsupervisedapproachforadversarialrobustness/ca2055e5-23a2-4290-a121-4b5cd31fedae_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ac53061404c4f098d8d3c02a6a7d5a3167b8148a9e5b2744365e951610587a31
+size 108290
diff --git a/aselfsupervisedapproachforadversarialrobustness/ca2055e5-23a2-4290-a121-4b5cd31fedae_origin.pdf b/aselfsupervisedapproachforadversarialrobustness/ca2055e5-23a2-4290-a121-4b5cd31fedae_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c2933829fbb81dd805ad0c97fe3a3c2d73a51199
--- /dev/null
+++ b/aselfsupervisedapproachforadversarialrobustness/ca2055e5-23a2-4290-a121-4b5cd31fedae_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0d65a29a220568644514134cd484446f9b1f401b88e97f4fe9f363e5f06884f0
+size 991612
diff --git a/aselfsupervisedapproachforadversarialrobustness/full.md b/aselfsupervisedapproachforadversarialrobustness/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..7faedee73b6b5a207f1b54b6cc3ef901b049aac9
--- /dev/null
+++ b/aselfsupervisedapproachforadversarialrobustness/full.md
@@ -0,0 +1,330 @@
+# A Self-supervised Approach for Adversarial Robustness
+
+Muzammal Naseer\*, Salman Khan†, Munawar Hayat†, Fahad Shahbaz Khan†§, Fatih Porikli*
+*Australian National University, Australia, ‡Data61-CSIRO, Australia
+†Inception Institute of Artificial Intelligence, UAE, §CVL, Linköping University, Sweden
+{muzammal.naseer, fatih.porikli}@anu.edu.au
+{salman.khan, munawar.hayat, fahad.khan}@inceptioniai.org
+
+# Abstract
+
+Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems, e.g., for classification, segmentation and object detection. The vulnerability of DNNs against such attacks can prove a major roadblock towards their real-world deployment. The transferability of adversarial examples demands generalizable defenses that can provide cross-task protection. Adversarial training that enhances robustness by modifying the target model's parameters lacks such generalizability. On the other hand, different input processing based defenses fall short in the face of continuously evolving attacks. In this paper, we take the first step to combine the benefits of both approaches and propose a self-supervised adversarial training mechanism in the input space. By design, our defense is a generalizable approach and provides significant robustness against unseen adversarial attacks (e.g. by reducing the success rate of the translation-invariant ensemble attack from $82.6\%$ to $31.9\%$ in comparison to previous state-of-the-art). It can be deployed as a plug-and-play solution to protect a variety of vision systems, as we demonstrate for the case of classification, segmentation and detection. Code is available at: https://github.com/Muzammal-Naseer/NRP.
+
+# 1. Introduction
+
+Adversarial training (AT) has shown great potential to safeguard neural networks from adversarial attacks [29, 35]. So far in literature, AT is performed in the model space i.e., a model's parameters are modified by minimizing empirical risk for a given data distribution as well as the perturbed images. Such AT strategy results in the following challenges. (a) Task dependency: AT is task-dependent e.g. robust classification models cannot directly be incorporated into an object detection or a segmentation pipeline, since the overall system would still require further training
+
+
+Figure 1: Our main idea is to train a Purifier Network in a self-supervised manner. We generate perturbed images using our proposed Self-supervised Perturbation (SSP) attack that disrupts the deep perceptual features. The Purifier Network projects back the perturbed images close to the perceptual space of clean images. This creates a training loop independent of the task or label space.
+
+with modified task-dependent loss functions. (b) Computational cost: AT is computationally expensive [29], which restricts its applicability to high-dimensional and large-scale datasets such as ImageNet [34]. (c) Accuracy drop: models trained with AT lose significant accuracy on the original distribution, e.g., ResNet50 [17] accuracy on the ImageNet validation set drops from $76\%$ to $64\%$ when robustified against the PGD attack [29] at a perturbation budget of only $\epsilon \leq 2$ (i.e. the maximum change in each pixel can be 2/255). (d) Label leakage: supervised AT suffers from label leakage [23], which allows the model to overfit on perturbations, thus affecting model generalization to unseen adversaries [50].
+
+In comparison to AT, input processing methods [14, 45] for adversarial defense are scalable and can work across different tasks. However, they have been broken in white-box
+
+settings [2] and shown to be least effective in black-box settings. For example, [10] successfully transfer their attack against multiple input processing based defenses even when the backbone architecture is adversarially trained using [43]. Furthermore, input transformations (e.g., Gaussian smoothing and JPEG compression) can maximize the attack strength instead of minimizing it [32, 10].
+
+Motivated by the complementary strengths of AT and input processing methods, we propose a self-supervised AT mechanism in the input space. Our approach (Fig. 1) uses a min-max (saddle point) formulation to learn an optimal input processing function that enhances model robustness. In this way, our optimization rule implicitly performs AT. The main advantage of our approach is its generalization ability: once trained on a dataset, it can be applied off-the-shelf to safeguard a completely different model. This makes it a more attractive solution compared to popular AT approaches that are computationally expensive (and thus less scalable to large-scale datasets). Furthermore, in comparison to previous pre-processing based defenses that have been found vulnerable to recent attacks, our defense demonstrates better robustness. Our main contributions are:
+
+- Task Generalizability: To ensure a task independent AT mechanism, we propose to adversarially train a purifying model named Neural Representation Purifier (NRP). Once trained, NRP can be deployed to safeguard across different tasks, e.g., classification, detection and segmentation, without any additional training (Sec. 3).
+- Self-Supervision: The supervisory signal used for AT should be self-supervised to make it independent of label space. To this end, we propose an algorithm to train NRP on adversaries found in the feature space in random directions to avoid any label leakage (Sec. 3.1).
+- Defense against strong perturbations: Attacks are continuously evolving. In order for NRP to generalize, it should be trained on worst-case perturbations that are transferable across different tasks. We propose to find highly transferable perceptual adversaries (Sec. 4.3).
+- Maintaining Accuracy: A strong defense must concurrently maintain accuracy on the original data distribution. We propose to train the NRP with an additional discriminator to bring adversarial examples close to original samples by recovering the fine texture details (Sec. 4.2).
+
+# 2. Related Work
+
+Defenses: A major class of adversarial defenses processes the input images to achieve robustness against adversarial patterns. For example, [14] used JPEG compression to remove high-frequency components that are less important to human vision using discrete cosine transform. A compressed sensing approach called Total Variation Minimization (TVM) was proposed in [14] to remove the small localized changes caused by adversarial perturbations. Xie
+
+et al. [46] introduced the process of Random Resizing and Padding (R&P) as a pre-processing step to mitigate the adversarial effect. A High-level representation Guided Denoiser (HGD) [26] framework was used as a pre-processing step to remove perturbations. The NeurIPS 2017 Defense Competition Rank-3 (NeurIPS-r3) approach [42] introduced a two-step pre-processing pipeline where the images first undergo a series of transformations (JPEG, rotation, zoom, shift and shear) and are then passed through an ensemble of adversarially trained models to obtain the weighted output response as a prediction. [36] proposed to recover adversaries using a GAN and [31] super-resolves images to minimize the adversarial effect. As compared to the above defenses, we design an input processing model that derives a self-supervised signal from the deep feature space to adversarially train the defense model. Our results show significantly superior performance to all input processing based defenses developed so far.
+
+Attacks: The self-supervised perturbation signal obtained to adversarially train our proposed approach can also be used as an adversarial attack. Since the seminal work of Szegedy et al. [41], many adversarial attack algorithms [12, 13, 3, 9] have been proposed to show the vulnerability of neural networks against imperceptible changes to inputs. A single-step attack, called Fast Gradient Sign Method (FGSM), was proposed in [12]. In a follow-up work, Kurakin et al. [13] proposed a robust multi-step attack, called Iterative Fast Gradient Sign Method (I-FGSM) that iteratively searches the loss surface of a network under a given metric norm. To improve transferability, a variant of I-FGSM, called momentum iterative fast gradient sign method (MI-FGSM), was introduced [9], which significantly enhanced the transferability of untargeted attacks on ImageNet dataset [34] under a $l_{\infty}$ norm budget. More recently, [47] proposed a data augmentation technique named input diversity method (DIM) to further boost the transferability of these attack methods. In contrast to our self-supervised attack approach, all of these methods are supervised adversarial attacks that rely on cross-entropy loss to find the deceptive gradient direction.
+
+# 3. Neural Representation Purifier
+
+Our defense aims to combine the benefits of adversarial training and input processing methods in a single framework that is computationally efficient, generalizable across different tasks and retains the clean image accuracy. The basic intuition behind our defense mechanism is to effectively use information contained in the feature space of deep networks to obtain an automatic supervisory signal. To this end, we design a Neural Representation Purifier (NRP) model that learns to clean adversarially perturbed images based on the automatically derived (self) supervision.
+
+The objective is to recover the original benign image $\pmb{x}$
+
+
+Figure 2: Neural Representation Purifier. Using a self-supervision signal, the proposed defense learns to purify perturbed images, such that their corresponding perceptual representation in deep feature space becomes close to clean natural images.
+
+
+
+given an input adversarial image $\pmb{x}^{\prime}$. We wish to remove the adversarial patterns by training a neural network $\mathcal{P}_{\theta}$ parameterized by $\pmb{\theta}$, which we refer to as the purifier network. The main objective is to be independent of the task-specific objective function, such that once trained, the proposed defense is transferable to other models (even across tasks). Towards this end, the network $\mathcal{P}_{\theta}$ is trained in an adversarial manner by playing a game with the critic network $\mathcal{C}_{\phi}$ and a feature extractor $\mathcal{F}_{\psi}$ (see Fig. 2). The function of the purifier and critic networks is similar to that of the generator and discriminator in a traditional Generative Adversarial Network (GAN) framework, with the key difference that in our case $\mathcal{P}_{\theta}$ performs image restoration instead of image generation. The feature extractor, $\mathcal{F}_{\psi}$, is pretrained on ImageNet and remains fixed, while the other two networks are optimized during training. Adversarial examples $\pmb{x}^{\prime}$ are created by maximizing $\mathcal{F}_{\psi}$'s response in random directions defined by a distance measure (Algorithm 1), while at the minimization step, $\mathcal{P}_{\theta}$ tries to recover the original sample $\pmb{x}$ by minimizing the same distance (Algorithm 2).
+
+# 3.1. Self-Supervision
+
+The automatic supervision signal to train NRP defense is obtained via a loss-agnostic attack approach. Below, we first outline why such a Self-Supervised Perturbation (SSP) is needed and then describe our approach.
+
+Motivation: Strong white-box attacks [13, 6], which are generally used for AT, assume known network parameters $\theta$ and perturb the inputs to create $x^{\prime}$, such that they are misclassified by the target model, i.e. $\mathcal{T}(\boldsymbol{x}';\boldsymbol{\theta}) \neq y$. Since the perturbations are calculated using gradient directions specific to $\theta$, the resulting perturbed images $x^{\prime}$ do not generalize well to other networks [9, 38, 47, 52]. This dependency limits these attacks to a specific network and task. In contrast, our goal is to design a self-supervised perturbation mechanism that can generalize across networks and tasks, thus enabling a transferable defense approach.
+
+
+Figure 3: Fooling rate of Inc-v4 and average feature distortion is shown for adversaries generated on Inc-v3 (black-box setting) by I-FGSM and MI-FGSM. As the number of iterations increases, fooling rate of I-FGSM decreases along with its feature distortion while MI-FGSM maintains its distortion as iterations increase.
+
+The self-supervised perturbation is based on the concept of 'feature distortion', introduced next.
+
+Feature Distortion: Given a clean image $\pmb{x}$ and its perturbed counterpart $\pmb{x}'$ that is crafted to fool the target model $\mathcal{T}(\cdot)$ , the feature distortion refers to the change that $\pmb{x}'$ causes to the internal representations of a neural network $\mathcal{F}(\cdot)$ relative to $\pmb{x}$ . This can be represented by,
+
+$$
+\Delta (\boldsymbol {x}, \boldsymbol {x} ^ {\prime}) = \boldsymbol {d} \left(\mathcal {F} (\boldsymbol {x}; \boldsymbol {\theta}) | _ {n}, \mathcal {F} \left(\boldsymbol {x} ^ {\prime}; \boldsymbol {\theta}\right) | _ {n}\right), \tag {1}
+$$
+
+where, $\mathcal{F}(\pmb{x};\pmb{\theta})|_n$ denotes the internal representation obtained from the $n^{th}$ layer of a pretrained deep network $\mathcal{F}(\cdot)$ and $\pmb{d}(\cdot)$ is a distance metric which can be $\ell_p$ [12], Wasserstein distance [1] or cosine similarity between the features of the original and perturbed sample.
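+
+To make Eq. 1 concrete, the following is a minimal PyTorch-style sketch of the feature distortion measure. It assumes torchvision's pretrained VGG-16 as the feature extractor, mean absolute error as the distance $\boldsymbol{d}(\cdot)$, and an arbitrary intermediate layer cut-off; these choices are illustrative and not taken from the released implementation.
+
+```python
+import torch
+import torchvision.models as models
+
+# Assumption: a VGG-16 slice up to an intermediate conv layer plays the role of F(.;theta)|_n.
+# The cut-off index (16, roughly conv3_3) and the MAE distance are illustrative choices.
+vgg_slice = models.vgg16(pretrained=True).features[:16].eval()
+for p in vgg_slice.parameters():
+    p.requires_grad_(False)
+
+def feature_distortion(x, x_adv):
+    """Eq. 1: Delta(x, x') = d(F(x)|_n, F(x')|_n) with d = mean absolute error."""
+    return torch.nn.functional.l1_loss(vgg_slice(x), vgg_slice(x_adv))
+```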
+
+The reason why we base our self-supervised perturbation on feature distortion is its direct impact on the perturbation transferability. To show this, we conduct a proof-of-concept experiment by generating adversarial examples
+
+# Algorithm 1 SSP: Self-Supervised Perturbation
+
+Require: A feature extractor $\mathcal{F}_{\psi}$ , batch of clean samples $\pmb{x}$ , input transformation $\mathcal{R}$ , perturbation budget $\epsilon$ , step-size $\kappa$ , and number of iterations $T$ .
+Ensure: Perturbed sample $\pmb{x}^{\prime}$ with $\| \pmb{x}^{\prime} - \pmb {x}\|_{\infty}\leq \epsilon$
+1: $g_0 = 0$ ; $x' = \mathcal{R}(x)$ ;
+2: for $t = 1$ to $T$ do
+3: Forward pass $\pmb{x}_t^\prime$ to $\mathcal{F}_{\psi}$ and compute $\Delta$ using Eq. 1;
+4: Compute gradients $\pmb{g}_t = \nabla_{\pmb{x}'}\Delta (\pmb{x},\pmb{x}_t')$;
+5: Generate adversaries using;
+$\pmb{x}_{t + 1}^{\prime} = \pmb{x}_{t}^{\prime} + \kappa \cdot \mathrm{sign}(\pmb{g}_{t})$ (2)
+6: Project adversaries in the vicinity of $\pmb{x}$
+$\pmb{x}_{t + 1}^{\prime} = \mathrm{clip}(\pmb{x}_{t + 1}^{\prime}, \pmb{x} - \epsilon, \pmb{x} + \epsilon)$ (3)
+7: end for
+8: return $\pmb{x}^{\prime} = \pmb{x}_{T}^{\prime}$ .
+
+on ImageNet-NeurIPS [7]. We consider two popular attack methods, MI-FGSM [9] and I-FGSM [13], among which MI-FGSM has higher transferability compared to I-FGSM. Interestingly, feature distortion strength of I-FGSM decreases as the number of attack iterations increases, compared to MI-FGSM (Fig. 3). MI-FGSM maintains its perturbation strength with increasing number of iterations. This indicates that feature distortion has a direct impact on transferability and therefore maximizing the objective in Eq. 1 (signifying feature-space distortion) can boost the transferability of adversarial examples without using any decision boundary information. Based on this observation, our proposed perturbation generation approach directly maximizes the distortion in deep feature space to create strong, highly generalizable and task-independent adversarial examples.
+
+Self-supervised Perturbation: Conventional black-box attacks operate in the logit-space of deep networks. The objective of 'logit-based' adversarial attacks is to change the target model's prediction for a clean image $\mathcal{T}(\boldsymbol{x}) \neq \mathcal{T}(\boldsymbol{x}')$ such that $\boldsymbol{x}'$ is bounded: $\| \boldsymbol{x} - \boldsymbol{x}' \| \leq \epsilon$ . In contrast to these methods, we propose to find adversaries by maximizing the feature loss (Sec. 3.2) of neural networks. Our approach does not rely on decision-boundary information since our 'representation-based' attack directly perturbs the feature space by solving the following optimization problem:
+
+$$
+\max_{\boldsymbol{x}^{\prime}} \Delta\left(\boldsymbol{x}, \boldsymbol{x}^{\prime}\right) \quad \text{subject to:} \quad \|\boldsymbol{x} - \boldsymbol{x}^{\prime}\|_{\infty} \leq \epsilon, \tag{4}
+$$
+
+Our proposed method to maximize feature distortion for a given input sample is summarized in Algorithm 1. We apply a transformation $\mathcal{R}$ to input $\pmb{x}$ at the first iteration (Algorithm 1) to create a neural representation difference between an adversarial and benign example and then maximize the difference within a given perturbation budget. There can be different choices for $\mathcal{R}$ but in this work, $\mathcal{R}$ simply adds random noise to the input sample, i.e. our algorithm takes a random step at the first iteration.
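+
+For illustration, a minimal sketch of Algorithm 1 under the same assumptions (a frozen VGG slice as $\mathcal{F}_{\psi}$, MAE distance, and an additive random-noise transformation $\mathcal{R}$) could look as follows; the step size, perturbation budget and iteration count are placeholder values rather than the paper's configuration.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def ssp_attack(x, feature_extractor, eps=16/255, step=2/255, iters=100):
+    """Self-Supervised Perturbation (Algorithm 1, sketch): maximize the feature
+    distortion of Eq. 1 inside an l_inf ball of radius eps around x."""
+    # R(x): take a random step at the first iteration.
+    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
+    clean_feats = feature_extractor(x).detach()
+    for _ in range(iters):
+        x_adv.requires_grad_(True)
+        delta = F.l1_loss(feature_extractor(x_adv), clean_feats)
+        grad = torch.autograd.grad(delta, x_adv)[0]
+        # Gradient ascent on the feature distortion (Eq. 2) ...
+        x_adv = x_adv.detach() + step * grad.sign()
+        # ... followed by projection into the vicinity of x (Eq. 3).
+        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
+    return x_adv.detach()
+```
+
+Since the loop only queries the frozen feature extractor, the same call can perturb inputs for classification, detection or segmentation models alike.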
+
+Algorithm 2 NRP: Neural Representation Purification via Self-Supervised Adversarial Training
+
+Require: Training data $\mathcal{D}$ , Purifier $\mathcal{P}_{\theta}$ , feature extractor $\mathcal{F}_{\psi}$ , critic network $\mathcal{C}_{\phi}$ , perturbation budget $\epsilon$ and loss criteria $\mathcal{L}$ .
+Ensure: Randomly initialize $\mathcal{P}_{\theta}$ and $\mathcal{C}_{\phi}$ .
+
+1: repeat
+
+2: Sample mini-batch of data, $\mathbf{x}$ , from the training set.
+3: Find adversaries, $\pmb{x}^{\prime}$ , at a given perturbation budget $\epsilon$ by maximizing distance, $\Delta$ (Eq. 1), using Algorithm 1.
+4: Forward-pass $\pmb{x}^{\prime}$ through $\mathcal{P}_{\theta}$ and calculate $\mathcal{L}_{\mathcal{P}_{\theta}}$ (Eq. 8).
+5: Back-pass and update $\theta$ to minimize $\mathcal{L}_{\mathcal{P}_\theta}$ (Eq. 8).
+6: Update $\mathcal{C}_{\phi}$ to classify $\pmb{x}$ from $\mathcal{P}_{\theta}(\pmb{x}^{\prime})$
+7: until $\mathcal{P}_{\theta}$ converges.
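+
+Sketching Algorithm 2 as a training loop, using the `ssp_attack` function above and a `purifier_loss` that combines the Sec. 3.2 objectives (sketched after Eq. 8 below), might look as follows; the critic update shown is a standard relativistic-discriminator step and is an assumption rather than the released code.
+
+```python
+import torch
+
+def train_nrp_epoch(loader, purifier, critic, feature_extractor, opt_p, opt_c, eps=16/255):
+    """One epoch of Algorithm 2 (sketch). `loader` yields batches of clean images."""
+    for x in loader:
+        # Step 3: craft perceptual adversaries with SSP (Algorithm 1).
+        x_adv = ssp_attack(x, feature_extractor, eps=eps)
+        # Steps 4-5: update the purifier P_theta by minimizing Eq. 8.
+        opt_p.zero_grad()
+        purifier_loss(x, x_adv, purifier, critic, feature_extractor).backward()
+        opt_p.step()
+        # Step 6: update the critic C_phi to tell clean x apart from purified x'.
+        opt_c.zero_grad()
+        x_pur = purifier(x_adv).detach()
+        critic_obj = -torch.log(torch.sigmoid(critic(x) - critic(x_pur)) + 1e-8).mean()
+        critic_obj.backward()
+        opt_c.step()
+```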
+
+# 3.2. NRP Loss functions
+
+We propose a hybrid loss function that is used to train the purifier network (see Algorithm 2). This loss function consists of three terms that we explain below:
+
+Feature loss: The Self-supervised Perturbation (SSP) generated by Algorithm 1 is the direct result of increasing the feature loss function, $\Delta$ , defined on the feature extractor $\mathcal{F}_{\psi}$ . In order to learn the purifier network, we must decrease this distance as follows:
+
+$$
+\mathcal{L}_{\text{feat}} = \Delta\left(\mathcal{F}_{\psi}(\boldsymbol{x}), \mathcal{F}_{\psi}\left(\mathcal{P}_{\theta}\left(\boldsymbol{x}^{\prime}\right)\right)\right), \tag{5}
+$$
+
+where, $\Delta$ is formally defined in Eq. 1, and the distance measure used to compute $\Delta$ is the mean absolute error (MAE). We empirically observe that removing $\mathcal{L}_{\text {feat }}$ loss leads to a network that does not converge to a meaningful state and produces weaker defense (see Fig. 5).
+
+Pixel loss: Smoothing images can help in mitigating the adversarial effect since the perturbation patterns resemble noise. Therefore, in order to encourage smoothness, we apply an $l_{2}$ loss in the image pixel space,
+
+$$
+\mathcal {L} _ {i m g} = \left\| \mathcal {P} _ {\theta} \left(\boldsymbol {x} ^ {\prime}\right) - \boldsymbol {x} \right\| _ {2}. \tag {6}
+$$
+
+Adversarial loss: Instead of using the vanilla GAN objective, we use the relativistic average GAN, which has shown better convergence properties [20, 32]. For a given batch of original examples, $x$, and adversarial examples, $x'$, the relativistic loss for the purifier network $\mathcal{P}_{\theta}$ is given as:
+
+$$
+\mathcal {L} _ {a d v} = - \log \left(\sigma \left(\mathcal {C} _ {\phi} \left(\mathcal {P} _ {\theta} \left(\boldsymbol {x} ^ {\prime}\right)\right) - \mathcal {C} _ {\phi} (\boldsymbol {x})\right)\right), \tag {7}
+$$
+
+where $\sigma$ represents the sigmoid layer. The overall loss objective for $\mathcal{P}_{\theta}$ is the combination of losses defined on pixel and feature spaces as well as the relativistic loss:
+
+$$
+\mathcal{L}_{\mathcal{P}_{\theta}} = \underbrace{\alpha \cdot \mathcal{L}_{adv}}_{\text{Adversarial loss}} + \underbrace{\gamma \cdot \mathcal{L}_{img}}_{\text{Pixel loss}} + \underbrace{\lambda \cdot \mathcal{L}_{feat}}_{\text{Feature loss}}. \tag{8}
+$$
+
+The pixel and feature losses focus on restoring image content and style, while adversarial loss restores texture details.
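+
+A compact sketch of how the three terms of Eq. 8 could be combined for one purifier update is given below; here `purifier`, `critic` and `vgg` stand for $\mathcal{P}_{\theta}$, $\mathcal{C}_{\phi}$ and $\mathcal{F}_{\psi}$, the default weights follow the values reported in Sec. 4.1, and the exact reductions are assumptions rather than the released implementation.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def purifier_loss(x, x_adv, purifier, critic, vgg, alpha=5e-3, gamma=1e-2, lam=1.0):
+    """Hybrid NRP objective (Eq. 8): alpha*L_adv + gamma*L_img + lambda*L_feat."""
+    x_pur = purifier(x_adv)
+    # Feature loss (Eq. 5): MAE between VGG features of clean and purified images.
+    l_feat = F.l1_loss(vgg(x_pur), vgg(x))
+    # Pixel loss (Eq. 6): l2 distance in image space.
+    l_img = torch.norm(x_pur - x, p=2)
+    # Relativistic adversarial loss (Eq. 7).
+    l_adv = -torch.log(torch.sigmoid(critic(x_pur) - critic(x)) + 1e-8).mean()
+    return alpha * l_adv + gamma * l_img + lam * l_feat
+```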
+
+
+Figure 4: Fooling rate for Inc-v3 [23] on the ImageNet-NeurIPS dataset. Adversaries are created by applying SSP (Algorithm 1) at different layers and the best results for each model are selected. Perceptual adversaries found in the VGG feature space have the highest transferability (further analysis is in the supplementary material).
+
+
+
+# 3.3. NRP Architecture
+
+Here, we outline the architecture of the generator, feature extractor and discriminator blocks.
+
+Generator $(\mathcal{P}_{\theta})$: Our generator architecture is inspired by [24, 44]. It consists of a convolution layer followed by multiple "basic blocks". Each basic block is composed of 3 "dense blocks", and each dense block contains five convolutional layers followed by leaky-ReLU [48] and finally a convolutional layer whose output has the same dimension as the input. Generally, adding a skip connection from the input to the generator's output helps in restoration tasks, e.g., image super-resolution [24] and deblurring [22]. However, in our case an important design criterion is to avoid such a skip connection, since our objective is to remove adversarial noise and a direct skip connection could reintroduce harmful noise patterns.
+
+Feature Extractor $(\mathcal{F}_{\psi})$: It is a VGG [37] network pretrained on ImageNet. During training, $\mathcal{F}_{\psi}$ remains fixed while its response is maximized in random directions (adversary generation process) and minimized (purification process) using a predefined distance metric. In our experiments, we demonstrate the effectiveness of the VGG feature space for creating strong adversaries as compared to other deep architectures.
+
+Discriminator $(\mathcal{C}_{\phi})$: Our discriminator architecture is also based on the VGG network [37]. It consists of five convolutional blocks containing convolutional layers followed by batch-norm and leaky-ReLU, and then a fully connected layer.
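+
+As a rough illustration of the generator's building block described above, one possible "dense block" (five convolution + leaky-ReLU layers with densely concatenated inputs, closed by a convolution that maps back to the input width) is sketched below; the dense connectivity pattern, channel count and growth rate are assumptions inspired by [24, 44] rather than the released model.
+
+```python
+import torch
+import torch.nn as nn
+
+class DenseBlock(nn.Module):
+    """Sketch of one 'dense block' of the purifier generator (illustrative sizes)."""
+    def __init__(self, channels=64, growth=32):
+        super().__init__()
+        self.convs = nn.ModuleList(
+            nn.Conv2d(channels + i * growth, growth, kernel_size=3, padding=1)
+            for i in range(5)
+        )
+        self.fuse = nn.Conv2d(channels + 5 * growth, channels, kernel_size=3, padding=1)
+        self.act = nn.LeakyReLU(0.2, inplace=True)
+
+    def forward(self, x):
+        feats = [x]
+        for conv in self.convs:
+            feats.append(self.act(conv(torch.cat(feats, dim=1))))
+        # The closing convolution restores the input channel width.
+        return self.fuse(torch.cat(feats, dim=1))
+```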
+
+# 3.4. On Suitable Perceptual Adversaries
+
+The intuition to train NRP on boundary-agnostic perceptual adversaries is based on the extensive study [51] that found a correlation of deep features with human perception. Specifically, [51] compares three models, i.e., VGG [37], AlexNet [21] and SqueezeNet [19]. Following [51], we study these models from an adversarial perspective by applying feature distortion at different layers in Fig. 4. Our findings are as follows: (a) VGG's perceptual adversaries are more transferable than those of AlexNet and SqueezeNet (a detailed transferability analysis on seen/unseen perturbations of VGG is in the supplementary material); (b) under the same feature distortion settings, adversaries found at different layers are not equally transferable, e.g., conv3.3 (block 3, layer 3) features offer better adversarial transferability than the rest of the network. We believe this is because the initial VGG layers learn low-level features while the deeper ones become too specific to the label space. Further, we found that increasing the representation loss at multiple network layers does not notably increase the attack success rate but adds a significant computational overhead. Since the NRP training process is agnostic to the label space of the source model, i.e., it neither depends on a particular task-specific loss function (e.g., cross entropy) nor on the ground-truth labels, it is a generic algorithm that can defend a totally unseen model. Furthermore, we demonstrate that perturbations discovered with our SSP approach offer high transferability across models trained on different datasets and tasks.
+
+# 4. Experiments
+
+# 4.1. Training Details
+
+Training is done on $25\mathrm{k}$ randomly selected images from the MS-COCO dataset. These images are resized to $480\times 480\times 3$. Adversaries created using SSP are fed as inputs to NRP with their corresponding clean images used as target labels. During training, we randomly crop images to $128\times 128\times 3$. The batch size is set to 16 and training is done on four Tesla V100 GPUs. Learning rates for the generator and discriminator are set to $10^{-4}$, with $\alpha = 5\times 10^{-3}$, $\gamma = 1\times 10^{-2}$ and $\lambda = 1$. We study eight models trained on ImageNet [34]. Five of these models are naturally trained: Inception-v3 (Inc-v3) [40], Inception-v4 (Inc-v4), Inception ResNet v2 (IncRes-v2) [39], ResNet v2-152 (Res-152) [18] and VGG-19 [37]. The other three models, Adv-v3 [23], Inc-v3ens3 and IncRes-v2ens [43], are adversarially trained. Specific details about these models can be found in [23, 43].
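+
+For quick reference, the training setup stated above can be collected into a small configuration sketch (the dictionary form and key names are illustrative; the values are those reported in this section):
+
+```python
+# Training configuration as described in Sec. 4.1 (key names are illustrative).
+nrp_train_config = dict(
+    num_train_images=25_000,      # randomly selected from MS-COCO
+    resize=(480, 480, 3),         # image size before cropping
+    random_crop=(128, 128, 3),    # crops fed to the networks during training
+    batch_size=16,
+    gpus=4,                       # Tesla V100
+    lr_generator=1e-4,
+    lr_discriminator=1e-4,
+    alpha=5e-3,                   # adversarial loss weight in Eq. 8
+    gamma=1e-2,                   # pixel loss weight
+    lam=1.0,                      # feature loss weight
+)
+```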
+
+# 4.2. Defense Results and Insights
+
+(a) Generalizability Across Attacks: Figs. 6, 7 & 8 demonstrate generalization ability of NRP to recover images from strong adversarial noise. Quantitative analysis in Table 1 shows that compared to previously broken defenses [10], NRP achieves strong robustness against state-of-the-art attacks [47, 10], bringing down the effectiveness of the ensemble translation-invariant attack with input diversity $(\mathrm{DIM}_{TI})$ [10] from $79.8\%$ to $31.9\%$ .
+
+(b) NRP as Cross-task Defense: In order to measure the cross-task defense capabilities, we deploy NRP against cross-domain attack (CDA) [32], a state-of-the-art attack that generates diverse cross-domain adversarial perturbations. Results in Table 2 demonstrate that NRP successfully removes all unseen perturbations and proves a generic cross-task defense for classification, object detection and in-
+
+Table 1: Robustness of different defense methods against state-of-the-art black-box attacks (lower is better). IncRes-v2ens is used as backbone model following [10]. NRP significantly reduces the attack success rate. Adversaries ( $\epsilon \leq 16$ ) are created against Inc-v3, Inc-v4, IncRes-v2, Res-v2-152 and Ensemble.
+
| Source | Defense | FGSM | FGSM$_{TI}$ | MI-FGSM | MI-FGSM$_{TI}$ | DIM | DIM$_{TI}$ |
| Inc-v3 | JPEG [15] | 19.9 | 25.5 | 20.3 | 28.2 | 30.7 | 37.0 |
| TVM [15] | 18.8 | 30.7 | 19.4 | 34.9 | 24.4 | 44.2 |
| NIPS-r3 [42] | 9.8 | 24.5 | 12.9 | 26.7 | 18.0 | 41.4 |
| R&P [45] | 6.5 | 19.8 | 8.7 | 23.9 | 13.3 | 36.8 |
| HGD [25] | 2.1 | 18.4 | 6.9 | 25.7 | 9.7 | 38.3 |
| APE-GAN [36] | 19.6 | 28.0 | 17.9 | 30.4 | 23.6 | 38.6 |
| SR [31] | 23.0 | 36.7 | 23.6 | 38.3 | 32.5 | 49.0 |
| NRP | 3.2 | 4.8 | 4.5 | 9.1 | 5.1 | 11.0 |
| Inc-v4 | JPEG [15] | 21.8 | 27.9 | 26.0 | 31.6 | 38.6 | 43.5 |
| TVM [15] | 19.9 | 31.8 | 24.8 | 38.4 | 29.1 | 45.6 |
| NIPS-r3 [42] | 11.5 | 24.6 | 15.6 | 29.5 | 14.1 | 41.9 |
| R&P [45] | 7.9 | 21.6 | 12.1 | 28.0 | 17.2 | 39.3 |
| HGD [25] | 2.6 | 18.1 | 9.6 | 27.8 | 32.4 | 58.7 |
| APE-GAN [36] | 21.1 | 28.8 | 20.7 | 32.8 | 25.0 | 39.0 |
| SR [31] | 25.3 | 34.1 | 29.2 | 42.3 | 39.3 | 52.3 |
| NRP | 3.1 | 4.4 | 4.8 | 10.3 | 5.2 | 12.5 |
| IncRes-v2 | JPEG [15] | 24.7 | 32.4 | 31.6 | 45.9 | 47.2 | 55.7 |
| TVM [15] | 23.4 | 38.5 | 34.4 | 55.4 | 41.7 | 66.2 |
| NIPS-r3 [42] | 13.3 | 31.4 | 22.7 | 46.2 | 37.6 | 61.5 |
| R&P [45] | 9.9 | 28.1 | 18.6 | 45.2 | 30.2 | 61.4 |
| HGD [25] | 3.9 | 25.4 | 19.6 | 45.1 | 32.4 | 58.7 |
| APE-GAN [36] | 24.7 | 36.8 | 30.4 | 50.5 | 36.3 | 60.5 |
| SR [31] | 27.6 | 42.4 | 42.6 | 62.1 | 54.3 | 72.2 |
| NRP | 3.5 | 6.9 | 7.6 | 18.7 | 7.5 | 20.8 |
| Res-v2-152 | JPEG [15] | 24.0 | 32.7 | 31.2 | 38.3 | 42.4 | 50.8 |
| TVM [15] | 22.0 | 38.1 | 24.5 | 41.2 | 36.8 | 55.7 |
| NIPS-r3 [42] | 12.5 | 30.1 | 18.0 | 34.4 | 34.4 | 52.9 |
| R&P [45] | 8.6 | 27.4 | 14.6 | 31.1 | 26.4 | 50.4 |
| HGD [25] | 3.6 | 24.4 | 15.1 | 31.8 | 32.6 | 51.8 |
| APE-GAN [36] | 24.3 | 37.1 | 23.2 | 38.6 | 34.3 | 53.8 |
| SR [31] | 26.3 | 41.8 | 30.2 | 49.2 | 48.4 | 63.9 |
| NRP | 3.4 | 6.5 | 5.8 | 11.9 | 6.3 | 17.8 |
| Ensemble | JPEG [15] | 38.1 | 43.3 | 67.7 | 77.2 | 82.5 | 83.4 |
| TVM [15] | 30.0 | 39.8 | 50.1 | 72.1 | 64.1 | 79.8 |
| NIPS-r3 [42] | 19.8 | 33.9 | 43.9 | 71.4 | 63.7 | 83.1 |
| R&P [45] | 13.8 | 31.2 | 32.8 | 68.3 | 51.7 | 81.4 |
| HGD [25] | 4.9 | 29.9 | 38.6 | 73.3 | 57.7 | 82.6 |
| APE-GAN [36] | 32.0 | 42.1 | 44.6 | 69.3 | 59.6 | 74.5 |
| SR [31] | 38.1 | 45.8 | 65.2 | 79.9 | 79.3 | 84.9 |
| NRP | 3.7 | 7.9 | 10.1 | 27.8 | 11.4 | 31.9 |
+
+stance level segmentation against CDA.
+
+(c) Ablation: Fig. 5 thoroughly investigates the impact of different training mechanisms in combination with our defense, and provides the following insights: (i) Relativistic GAN loss offers a more robust solution than vanilla GAN, (ii) NRP performance decreases slightly without pixel loss, (iii) NRP without feature loss loses supervisory signal defined by perceptual-space boundary, hence the generator
+
+Table 2: NRP generalizability across different adversarial attacks. The classification model is defended against CDA trained against Inc-v3, while the detection and segmentation models are defended against CDA trained against Res-v2-152 (higher is better). (q=quality, w=weights, win=window size)
+
+| Classification: Defending IncRes-v2ens [43] against CDA [32] |
| Method | No Attack | ImageNet l∞≤8 | ImageNet l∞≤16 | Comics l∞≤8 | Comics l∞≤16 | Paintings l∞≤8 | Paintings l∞≤16 |
| No Defense | 97.8 | 83.0 | 30.9 | 94.0 | 56.6 | 71.6 | 23.7 |
| JPEG (q=75) | 97.6 | 74.9 | 18.6 | 90.1 | 42.6 | 68.0 | 18.0 |
| JPEG (q=50) | 96.2 | 74.2 | 19.0 | 90.1 | 43.4 | 66.0 | 19.2 |
| JPEG (q=20) | 94.1 | 73.4 | 21.7 | 87.0 | 51.3 | 62.7 | 18.8 |
| TVM (w=10) | 93.1 | 82.3 | 30.2 | 91.0 | 77.2 | 72.7 | 27.4 |
| TVM (w=30) | 96.0 | 81.1 | 27.3 | 93.4 | 66.4 | 70.6 | 24.1 |
| MF (win=3) | 95.4 | 77.3 | 27.7 | 92.4 | 66.8 | 65.0 | 22.1 |
| NRP | 95.6 | 95.7 | 96.0 | 95.4 | 94.2 | 95.3 | 94.1 |
| Detection: Defending Mask-RCNN [16] against CDA [32] |
| No Defense | 59.9 | 35.2 | 8.1 | 40.5 | 16.8 | 41.7 | 14.8 |
| JPEG (q=75) | 57.6 | 41.3 | 11.9 | 41.6 | 19.4 | 44.5 | 18.3 |
| JPEG (q=50) | 54.6 | 41.7 | 14.5 | 39.5 | 18.5 | 47.7 | 19.9 |
| JPEG (q=20) | 39.7 | 30.7 | 15.1 | 28.2 | 14.7 | 30.5 | 15.3 |
| TVM (w=10) | 54.1 | 32.1 | 14.3 | 40.5 | 28.9 | 37.6 | 21.5 |
| TVM (w=30) | 58.0 | 39.9 | 10.1 | 46.8 | 21.0 | 45.4 | 17.2 |
| MF (win=3) | 54.7 | 32.1 | 9.0 | 41.1 | 20.4 | 37.6 | 15.2 |
| NRP | 54.4 | 51.5 | 50.3 | 53.5 | 53.7 | 53.2 | 54.3 |
| Segmentation: Mask-RCNN [16] defense against CDA [32] |
| No Defense | 56.8 | 32.4 | 7.3 | 37.6 | 15.5 | 39.1 | 13.8 |
| JPEG (q=75) | 54.4 | 38.5 | 11 | 38.5 | 17.8 | 41.7 | 16.9 |
| JPEG (q=50) | 51.5 | 38.9 | 13.4 | 36.6 | 17.3 | 40 | 18.2 |
| JPEG (q=20) | 37.1 | 28.8 | 14.0 | 26.3 | 13.8 | 28.3 | 14.3 |
| TVM (w=10) | 50.8 | 29.8 | 13.2 | 37.6 | 26.6 | 34.9 | 19.8 |
| TVM (w=30) | 54.4 | 37.1 | 9.3 | 43.7 | 19.3 | 42.3 | 15.9 |
| MF (win=3) | 51.5 | 29.8 | 8.3 | 36.0 | 18.8 | 34.9 | 13.9 |
| NRP | 51.3 | 48.4 | 47.3 | 50.3 | 50.8 | 50.2 | 51.4 |
+
+
+Figure 5: Ablation. Proposed NRP is able to recover input samples from the strong black-box ensemble attack [10] as compared to GNP and FGSP. NRP trained without $\mathcal{L}_{\text {feat }}$ performs poorly indicating the importance of perceptual loss. Top-1 accuracy (higher is better) is reported for IncRes-v2ens [43] on ImageNet-NeurIPS.
+
+does not converge to a meaningful state, (iv) Gaussian smoothing (Gaussian noise data augmentation) proves to be useful in reducing adversarial vulnerability of classifier [8, 49]. Training NRP as a Gaussian denoiser, named Gaus-
+
+Table 3: Success rate (lower is better) of BPDA [6] and $\mathrm{DIM}_{TI}$ [10] attacks against NRP. Res-v2-152 [18] is combined with other purifier networks (ResG [24], UNet [33]). Adversaries are then transferred to the naturally and adversarially trained models. NRP protects the backbone network even when the attacker tries to bypass using BPDA technique. (attack iterations: $10, \epsilon \leq 16$ )
+
+| Source | Attack | NRP | Inc-v3 | Inc-v4 | IncRes-v2 | Adv-v3 | Inc-v3ens3 | IncRes-v2ens |
| Res-v2-152 | DIMTI | × | 77.4 | 77.9 | 74.2 | 51.2 | 56.2 | 47.7 |
| ResG ⊕ Res-v2-152 | DIMTI ⊕ BPDA | ✓ | 29.7 | 26.2 | 19.6 | 22.3 | 22.1 | 16.1 |
| UNet ⊕ Res-v2-152 | DIMTI ⊕ BPDA | ✓ | 29.0 | 27.1 | 19.5 | 26.9 | 27.7 | 18.8 |
+
+Figure 6: A visual illustration of NRP generalizability to different adversaries ($\epsilon \leq 16$) (top: attacked; bottom: purified). Example predictions (attacked → purified): Afghan Hound $(0.73, \times)$ → Monarch Butterfly $(0.65, \checkmark)$; Porcupine $(0.64, \times)$ → Dung Beetle $(0.90, \checkmark)$; Erythrocebus Patas $(0.53, \times)$ → Lycaenid $(0.94, \checkmark)$; Guenon Monkey $(0.77, \times)$ → Lorikeet $(0.94, \checkmark)$; Crane $(0.55, \times)$ → Flamingo $(0.90, \checkmark)$. Our method can clean challenging adversarial patterns resulting from SSP applied to an adversarially robust model [11]. Previous denoising methods are not designed for this type of structured noise. The IncRes-v2ens backbone is used here. (see supplementary material for more examples)
+
+sian Noise Purifier (GNP) does not prove effective against translation-invariant attacks [10], and (v) Training NRP to stabilize FGSM adversaries (termed FGSP in Fig. 5) performs relatively better than GNP.
+
+(d) What if the Attacker Knows about the Defense: We study this difficult scenario under the following assumptions: (i) the attacker knows that the defense is deployed and has access to its training data and training mechanism, and (ii) the attacker trains a local defense similar to NRP and then uses BPDA [6] to bypass the defense. To simulate this attack, we train a residual generator (ResG) [24] and a UNet [33] with the same training mechanism as described in Sec. 4.1. We then combine BPDA [2] with the translation-invariant attack to bypass NRP. Under these challenging settings, NRP shows a relative gain of $74\%$ and $66\%$ for IncRes-v2 and IncRes-v2ens, respectively (see Table 3).
+
+# 4.3. Self Supervised Perturbation as an Attack
+
+Next, we evaluate the strength of SSP as an attack for the tasks of classification, detection and segmentation.
+
+Classification: Table 5 compares SSP with FGSM [12], R-FGSM [43], I-FGSM [13], MI-FGSM [9], TAP [52] and DIM [47] using their standard hyper-parameters (see sup
+
+Table 4: Cross-task SSP Attack: Pixel-level accuracy is shown for SegNet-Basic [4] on Camvid testset [5], while mAP (with IoU = 0.5) is reported for Mask-RCNN.
+
+| Problem | Method | No Attack | SSP (l∞ ≤ 8) | SSP (l∞ ≤ 16) |
| Semantic Seg. | SegNet [4] | 79.70 | 52.48 | 32.59 |
| Instance Seg. | Mask-RCNN [16] | 56.8 | 29.4 | 8.8 |
| Object Det. | RetinaNet [27] | 53.78 | 22.75 | 5.16 |
| Object Det. | Mask-RCNN [16] | 59.50 | 31.8 | 9.7 |
+
+plementary material). The results in Table 5 provide the following insights. (i) SSP consistently demonstrates strong black-box adversarial transferability on both naturally and adversarially trained models, bringing down the top-1 accuracy of IncRes-v2 [39] from $100.0\%$ to $14.1\%$, (ii) while MI-FGSM [9] and DIM [47] perform slightly better on adversarially trained ensemble models [43] in terms of top-1 accuracy, SSP shows a comparable top-1 rate and surpasses them in terms of top-5 accuracy, and (iii) these results indicate that decision-boundary based attacks flip the label of an input sample to a nearby class category, while SSP, being agnostic to decision-level information, pushes the adversaries far from the original input category.
+
+Figure 7: NRP successfully recovers diverse patterns from the strongest black-box attacks $(l_{\infty} \leq 16)$, with IncRes-v2ens used as backbone. Example predictions (attacked → purified): DIM [47]: Welsh Springer $(0.52, \times)$ → Pomeranian $(0.88, \checkmark)$; $\mathrm{DIM}_{TI}$ [10]: Cocker $(0.71, \times)$ → Pomeranian $(0.86, \checkmark)$.
+
+Figure 8: NRP successfully removes perturbations generated by CDA [32] ($\epsilon \leq 16$) and stabilizes Mask-RCNN [16] predictions. Panels: CDA [32] adversarial input, prediction for the adversarial input, purified image, and prediction for the purified image.
+
+Table 5: SSP as an attack for Classification. Top-1 (T-1) and Top-5 (T-5) accuracies are reported under untargeted $l_{\infty}$ adversarial attacks on ImageNet-NeurIPS with perturbation budget $l_{\infty} \leq 16$. * indicates white-box attacks.
+
| Source | Attack | Inc-v3 T-1 | Inc-v3 T-5 | Inc-v4 T-1 | Inc-v4 T-5 | Res-152 T-1 | Res-152 T-5 | IncRes-v2 T-1 | IncRes-v2 T-5 | VGG-19 T-1 | VGG-19 T-5 | Adv-v3 T-1 | Adv-v3 T-5 | Inc-v3ens3 T-1 | Inc-v3ens3 T-5 | IncRes-v2ens T-1 | IncRes-v2ens T-5 |
| Res 152 | FGSM [12] | 55.1 | 81.1 | 62.6 | 85.1 | 18.9* | 44.7* | 65.0 | 86.5 | 43.9 | 70.4 | 64.6 | 85.8 | 76.9 | 93.5 | 87.9 | 98.2 |
| R-FGSM [43] | 60.8 | 84.3 | 68.4 | 88.1 | 14.6* | 40.3* | 71.9 | 90.3 | 55.8 | 71.4 | 74.8 | 92.3 | 81.1 | 96.0 | 87.1 | 97.5 |
| I-FGSM [13] | 80.9 | 96.7 | 85.3 | 97.8 | 0.9* | 10.8* | 93.1 | 98.8 | 75.9 | 94.8 | 89.2 | 99.2 | 90.5 | 97.9 | 94.6 | 99.5 |
| MI-FGSM [9] | 38.9 | 72.7 | 44.8 | 76.5 | 0.6* | 2.9* | 47.7 | 79.6 | 42.1 | 71.8 | 67.0 | 89.9 | 69.4 | 93.3 | 81.5 | 96.4 |
| TAP [52] | 48.2 | - | 55.7 | - | 7.6* | - | 55.2 | - | - | - | 49.2 | - | 57.8 | - | 64.1 | - |
| DIM [47] | 15.9 | 44.0 | 17.3 | 48.4 | 0.8* | 3.0* | 20.0 | 50.2 | 25.6 | 56.3 | 55.8 | 82.8 | 54.9 | 84.2 | 71.5 | 93.1 |
| VGG16 | FGSM [12] | 32.6 | 58.6 | 38.4 | 62.6 | 38.5 | 66.3 | 44.5 | 68.5 | 8.8 | 25.1 | 51.7 | 75.3 | 54.9 | 81.7 | 70.8 | 90.7 |
| R-FGSM [43] | 44.4 | 69.5 | 47.6 | 75.1 | 51.1 | 78.8 | 56.4 | 78.8 | 11.2 | 31.8 | 65.5 | 87.4 | 66.7 | 89.2 | 77.5 | 93.6 |
| I-FGSM [13] | 69.2 | 93.0 | 75.2 | 93.7 | 79.0 | 96.2 | 85.6 | 96.8 | 14.4 | 49.3 | 83.5 | 97.7 | 83.9 | 96.7 | 92.1 | 98.8 |
| MI-FGSM [9] | 20.4 | 45.0 | 19.7 | 43.2 | 25.2 | 53.8 | 26.8 | 53.8 | 1.5 | 12.1 | 43.0 | 70.9 | 42.0 | 72.7 | 62.0 | 86.8 |
| TAP [52] | 23.9 | - | 28.1 | - | 23.9 | - | 32.3 | - | - | - | 38.8 | - | 41.9 | - | 63.8 | - |
| DIM [47] | 14.7 | 38.8 | 16.6 | 39.0 | 21.0 | 48.0 | 21.5 | 45.7 | 0.6 | 7.6 | 35.8 | 65.8 | 31.8 | 60.8 | 53.7 | 79.5 |
| FFF [30] | 61.7 | 80.7 | 60.8 | 78.7 | 72.8 | 90.1 | 76.1 | 90.1 | 44.0 | 68.0 | 79.6 | 93.1 | 83.1 | 93.1 | 92.8 | 98.5 |
| SSP | 5.3 | 11.0 | 5.9 | 11.9 | 16.5 | 29.5 | 14.1 | 25.5 | 2.7 | 6.8 | 25.9 | 43.2 | 40.2 | 58.3 | 58.0 | 75.0 |
+
+Cross-task Adversarial Attack: Since SSP is loss-agnostic, it enables attacks on altogether different tasks. Table 4 explores SSP for object detection and image segmentation. For Segmentation, the self-supervised perturbations created on CAMVID [5] in VGG-16 feature space are able to bring down the per pixel accuracy of Segnet-Basic by $47.11\%$ within $l_{\infty} \leq 16$ . For object detection, on MS-COCO validation set [28], mean Average Precision (mAP) with 0.5 intersection over union (IOU) of RetinaNet [27] and Mask-RCNN [16] drop from $53.78\%$ to $5.16\%$ and $59.5\%$ to $9.7\%$ , respectively, under $l_{\infty} \leq 16$ .
+
+# 5. Conclusion
+
+We propose a novel defense approach that removes harmful perturbations using an adversarially trained purifier. Our defense does not require large training data and is independent of the label-space. It exhibits a high generalizability to the unseen state-of-the-art attacks and successfully defends a variety of tasks including classification, segmentation and object detection. Notably, our defense is able to remove structured noise patterns where an adversarial image is maliciously embedded into the original image.
+
+# References
+
+[1] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein gan. arXiv preprint arXiv:1701.07875, 2017. 3
+[2] Anish Athalye, Nicholas Carlini, and David A. Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International Conference on Machine Learning (ICML), 2018. 2, 7
+[3] Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok. Synthesizing robust adversarial examples. In International Conference on Machine Learning (ICML), 2017. 2
+[4] Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39:2481-2495, 2017. 7
+[5] Gabriel J Brostow, Julien Fauqueur, and Roberto Cipolla. Semantic object classes in video: A high-definition ground truth database. Pattern Recognition Letters, 30(2):88-97, 2009. 7, 8
+[6] Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39-57. IEEE, 2017. 3, 7
+[7] NeurIPS Challenge. https://www.kaggle.com/c/nips-2017-defense-against-adversarial-attack/data. Kaggle, 2017. 4
+[8] Jeremy M Cohen, Elan Rosenfeld, and J Zico Kolter. Certified adversarial robustness via randomized smoothing. arXiv preprint arXiv:1902.02918, 2019. 6
+[9] Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. Boosting adversarial attacks with momentum. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. 2, 3, 4, 7, 8
+[10] Yinpeng Dong, Tianyu Pang, Hang Su, and Jun Zhu. Evading defenses to transferable adversarial examples by translation-invariant attacks. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2019. 2, 5, 6, 7, 8
+[11] Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, and Aleksander Madry. Adversarial robustness as a prior for learned representations, 2019. 7
+[12] Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICRL), 2015. 2, 3, 7, 8
+[13] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Adversarial examples in the physical world. In International Conference on Learning Representations (ICRL), 2017. 2, 3, 4, 7, 8
+[14] Chuan Guo, Mayank Rana, Moustapha Cissé, and Laurens van der Maaten. Countering adversarial images using input transformations. In International Conference on Learning Representations (ICRL), 2017. 1, 2
+[15] Chuan Guo, Mayank Rana, Moustapha Cisse, and Laurens van der Maaten. Countering adversarial images using input transformations. In International Conference on Learning Representations, 2018. 6
+
+[16] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross B. Girshick. Mask r-cnn. 2017 IEEE International Conference on Computer Vision (ICCV), pages 2980-2988, 2017. 6, 7, 8
+[17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 1
+[18] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, pages 630-645. Springer, 2016. 5, 7
+[19] Forrest N. Iandola, Matthew W. Moskewicz, Khalid Ashraf, Song Han, William J. Dally, and Kurt Keutzer. SqueezeNet: Alexnet-level accuracy with 50x fewer parameters and <1MB model size. ArXiv, abs/1602.07360, 2017. 5
+[20] Alexia Jolicoeur-Martineau. The relativistic discriminator: a key element missing from standard gan. arXiv preprint arXiv:1807.00734, 2018. 4
+[21] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. Commun. ACM, 60:84-90, 2012. 5
+[22] Orest Kupyn, Volodymyr Budzan, Mykola Mykhailych, Dmytro Mishkin, and Jiří Matas. Deblurgan: Blind motion deblurring using conditional adversarial networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. 5
+[23] Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016. 1, 5
+[24] Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photorealistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4681-4690, 2017. 5, 7
+[25] Fangzhou Liao, Ming Liang, Yinpeng Dong, Tianyu Pang, Xiaolin Hu, and Jun Zhu. Defense against adversarial attacks using high-level representation guided denoiser. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. 6
+[26] Fangzhou Liao, Ming Liang, Yinpeng Dong, Tianyu Pang, Jun Zhu, and Xiaolin Hu. Defense against adversarial attacks using high-level representation guided denoiser. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1778-1787, 2017. 2
+[27] Tsung-Yi Lin, Priyal Goyal, Ross Girshick, Kaiming He, and Piotr Dólar. Focal loss for dense object detection. IEEE transactions on pattern analysis and machine intelligence, 2018. 7, 8
+[28] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dólar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pages 740-755. Springer, 2014. 8
+[29] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learn
+
+ing models resistant to adversarial attacks. In International Conference on Learning Representations, 2018. 1
+[30] Konda Reddy Mopuri, Utsav Garg, and R Venkatesh Babu. Fast feature fool: A data independent approach to universal adversarial perturbations. In Proceedings of the British Machine Vision Conference (BMVC), 2017. 8
+[31] Aamir Mustafa, Salman H Khan, Munawar Hayat, Jianbing Shen, and Ling Shao. Image super-resolution as a defense against adversarial attacks. arXiv preprint arXiv:1901.01677, 2019. 2, 6
+[32] Muzammal Naseer, Salman H Khan, Harris Khan, Fahad Shahbaz Khan, and Fatih Porikli. Cross-domain transferability of adversarial perturbations. Advances in Neural Information Processing Systems, 2019. 2, 4, 5, 6, 8
+[33] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234-241. Springer, 2015. 7
+[34] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252, 2015. 1, 2, 5
+[35] Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S Davis, Gavin Taylor, and Tom Goldstein. Adversarial training for free! arXiv preprint arXiv:1904.12843, 2019. 1
+[36] Shiwei Shen, Guoqing Jin, Ke Gao, and Yongdong Zhang. Ape-gan: Adversarial perturbation elimination with gan. arXiv preprint arXiv:1707.05474, 2017. 2, 6
+[37] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition, 2014. 5
+[38] Dong Su, Huan Zhang, Hongge Chen, Jinfeng Yi, Pin-Yu Chen, and Yupeng Gao. Is robustness the cost of accuracy? - a comprehensive study on the robustness of 18 deep image classification models. In Vittorio Ferrari, Martial Hebert, Cristian Sminchisescu, and Yair Weiss, editors, Computer Vision - ECCV 2018, pages 644-661, Cham, 2018. Springer International Publishing. 3
+[39] Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. In AAAI, volume 4, page 12, 2017. 5, 7
+[40] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818-2826, 2016. 5
+[41] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations (ICRL), 2014. 2
+[42] Anil Thomas and Oguz Elibol. Defense against adversarial attacks-3rd place. https://github.com/anlthms/nips-2017/blob/master/poster/defense.pdf, 2017.2,6
+
+[43] Florian Tramér, Alexey Kurakin, Nicolas Papernot, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. In International Conference on Learning Representations (ICRL), 2018. 2, 5, 6, 7, 8
+[44] Xintao Wang, Kelvin C.K. Chan, Ke Yu, Chao Dong, and Chen Change Loy. Edvr: Video restoration with enhanced deformable convolutional networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2019. 5
+[45] Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, and Alan Yuille. Mitigating adversarial effects through randomization. arXiv preprint arXiv:1711.01991, 2017. 1, 6
+[46] Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, and Alan Yuille. Mitigating adversarial effects through randomization. In International Conference on Learning Representations, 2018. 2
+[47] Cihang Xie, Zhishuai Zhang, Jianyu Wang, Yuyin Zhou, Zhou Ren, and Alan Yuille. Improving transferability of adversarial examples with input diversity. arXiv preprint arXiv:1803.06978, 2018. 2, 3, 5, 7, 8
+[48] Bing Xu, Naiyan Wang, Tianqi Chen, and Mu Li. Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:1505.00853, 2015. 5
+[49] Valentina Zantedeschi, Maria-Irina Nicolae, and Ambrish Rawat. Efficient defenses against adversarial attacks. ArXiv, abs/1707.06728, 2017. 6
+[50] Haichao Zhang and Jianyu Wang. Defense against adversarial attacks using feature scattering-based adversarial training. arXiv preprint arXiv:1907.10764, 2019. 1
+[51] Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 586-595, 2018. 5
+[52] Wen Zhou, Xin Hou, Yongjun Chen, Mengyun Tang, Xiangqi Huang, Xiang Gan, and Yong Yang. Transferable adversarial perturbations. In The European Conference on Computer Vision (ECCV), September 2018. 3, 7, 8
\ No newline at end of file
diff --git a/aselfsupervisedapproachforadversarialrobustness/images.zip b/aselfsupervisedapproachforadversarialrobustness/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..04aacde6dcd765a33ba08429f957aec3f9f6713a
--- /dev/null
+++ b/aselfsupervisedapproachforadversarialrobustness/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9fd13d4b7bbc83c6dfd3c7e1e2d81b47cfc02f3b7fa904c34346ff8a9ee46e47
+size 996202
diff --git a/aselfsupervisedapproachforadversarialrobustness/layout.json b/aselfsupervisedapproachforadversarialrobustness/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..fd72459d7430c805b7990e0832c6ab2fb47ca70c
--- /dev/null
+++ b/aselfsupervisedapproachforadversarialrobustness/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0ced03fc82d5a7fbce0fb71de2d3156c56d54717fe04c2797b52664c81172d29
+size 480576
diff --git a/asemisupervisedassessorofneuralarchitectures/235e754f-73c8-4778-84cf-f029cc2bbe69_content_list.json b/asemisupervisedassessorofneuralarchitectures/235e754f-73c8-4778-84cf-f029cc2bbe69_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..c77a78b406a8a598aad1f4ac9ca157c75932d3a3
--- /dev/null
+++ b/asemisupervisedassessorofneuralarchitectures/235e754f-73c8-4778-84cf-f029cc2bbe69_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b813b5740813a94193a9a609b4228477353751d4e235c8f05761001f4ffdab35
+size 78885
diff --git a/asemisupervisedassessorofneuralarchitectures/235e754f-73c8-4778-84cf-f029cc2bbe69_model.json b/asemisupervisedassessorofneuralarchitectures/235e754f-73c8-4778-84cf-f029cc2bbe69_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..8eed53e39978cdcc91f187a4280287fa1193bd15
--- /dev/null
+++ b/asemisupervisedassessorofneuralarchitectures/235e754f-73c8-4778-84cf-f029cc2bbe69_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8631e066fa9e2900875a3fbc98603824c200adb356517dfb2ff348f702651b0e
+size 101048
diff --git a/asemisupervisedassessorofneuralarchitectures/235e754f-73c8-4778-84cf-f029cc2bbe69_origin.pdf b/asemisupervisedassessorofneuralarchitectures/235e754f-73c8-4778-84cf-f029cc2bbe69_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..6b4a59c4679fd7ec5ef52f32c9776ce286942718
--- /dev/null
+++ b/asemisupervisedassessorofneuralarchitectures/235e754f-73c8-4778-84cf-f029cc2bbe69_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:388bcd8f2007bd0321ce24231ebcf99431e1465af47b3809aa4871a2bac6e804
+size 819168
diff --git a/asemisupervisedassessorofneuralarchitectures/full.md b/asemisupervisedassessorofneuralarchitectures/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..87b8186f1035e0a87c7f5c14fc3e703391f8d945
--- /dev/null
+++ b/asemisupervisedassessorofneuralarchitectures/full.md
@@ -0,0 +1,328 @@
+# A Semi-Supervised Assessor of Neural Architectures
+
+Yehui Tang $^{1,2}$ , Yunhe Wang $^{2}$ , Yixing Xu $^{2}$ , Hanting Chen $^{1,2}$ , Boxin Shi $^{3,4}$ , Chao Xu $^{1}$ , Chunjing Xu $^{2*}$ , Qi Tian $^{2}$ , Chang Xu $^{5}$
+
+1 Key Lab of Machine Perception (MOE), Dept. of Machine Intelligence, Peking University.
+
+$^{2}$ Noah's Ark Lab, Huawei Technologies. $^{3}$ NELVT, Dept. of CS, Peking University. $^{4}$ Peng Cheng Laboratory.
+
+$^{5}$ School of Computer Science, Faculty of Engineering, University of Sydney.
+
+{yhtang,chenhanting,shiboxin}@pku.edu.cn;xuchao@cis.pku.edu.cn
+
+{yunhe.wang,xuyixing,xuchunjing,tian.qil}@huawei.com;c.xu@sydney.edu.au
+
+# Abstract
+
+Neural architecture search (NAS) aims to automatically design deep neural networks of satisfactory performance. In NAS, an architecture performance predictor is critical for efficiently evaluating intermediate neural architectures, but training such a predictor usually requires collecting a number of neural architectures together with their real performance. In contrast with classical performance predictors optimized in a fully supervised way, this paper proposes a semi-supervised assessor of neural architectures. We employ an auto-encoder to discover meaningful representations of neural architectures. Taking each neural architecture as an individual instance in the search space, we construct a graph to capture their intrinsic similarities, where both labeled and unlabeled architectures are involved. A graph convolutional neural network is introduced to predict the performance of architectures based on the learned representations and their relations modeled by the graph. Extensive experimental results on the NAS-Benchmark-101 dataset demonstrate that our method significantly reduces the number of fully trained architectures required for finding efficient architectures.
+
+# 1. Introduction
+
+The impressive successes in computer vision tasks, such as image classification [11, 10], detection [4] and segmentation [43], heavily depend on an effective design of the backbone deep neural networks, which are usually over-parameterized for the sake of effectiveness. Instead of resorting to human expert experience, the Neural Architecture Search (NAS) framework focuses on automatically selecting hyper-parameters and designing appropriate network architectures.
+
+There is a large body of work on NAS, and it can be roughly divided into two categories. Combinatorial optimization methods search architectures in a discrete space by generating, evaluating and selecting different architectures, e.g., Evolutionary Algorithm (EA) based methods [29] and Reinforcement Learning (RL) based methods [44]. The other category is continuous optimization based, which relaxes the original search space to a continuous space and usually applies gradient-based optimization [24, 20, 3, 35, 36]. In NAS, obtaining the exact performance of an architecture often takes hours or even days of training. Reducing the number of training epochs or introducing a weight-sharing mechanism can alleviate this prohibitive computational cost, but it results in inaccurate performance estimates for the architectures. Recently, several studies collect many network architectures with known real performance on specific tasks and train a performance predictor [5, 32]. This one-off training of the predictor can then be applied to evaluate the performance of intermediate architectures searched in NAS, reducing the overall evaluation cost of an individual architecture from hours to milliseconds.
+
+A major bottleneck in obtaining a satisfactory architecture performance predictor is the collection of a large annotated training set. Given the expensive cost of annotating a neural architecture with its real performance, the training set for the performance predictor is often small, which would lead to undesirable over-fitting. Existing methods insist on training the performance predictor in a fully supervised way, neglecting the significance of those neural architectures without annotations. In the search space of NAS, a large number of valid neural architectures can be sampled with ease. Though their real performance may be unknown, their architectural similarity to the annotated architectures conveys invaluable information for optimizing the performance predictor.
+
+In this paper, we propose to assess neural architectures in a semi-supervised way, training the architecture predictor with as few well-trained networks as possible. Specifically, a very small proportion of architectures are randomly selected and trained on the target dataset to obtain ground-truth labels. With the help of massive unlabeled architectures, an auto-encoder is used to discover meaningful representations. Then we construct a relation graph involving both labeled and unlabeled architectures to capture the intrinsic similarities between architectures. The GCN assessor takes the learned representations of all these architectures and the relation graph as input to predict the performance of unlabeled architectures. The entire system containing the auto-encoder and the GCN assessor can be trained in an end-to-end manner. Extensive experimental results on the NAS-Bench-101 dataset [40] demonstrate the superiority of the proposed semi-supervised assessor for searching efficient neural architectures.
+
+This paper is organized as follows: in Sec. 2 we briefly review several performance predictors, analyze their pros and cons, and introduce NAS, GCNs and auto-encoders. Sec. 3 gives a detailed description of the proposed method. Experiments conducted on the NAS-Bench dataset and their results are presented in Sec. 4. Finally, Sec. 5 summarizes the conclusions.
+
+# 2. Related Works
+
+In this section, we first review current methods of NAS and performance predictor, and then introduce the classical GCN and auto-encoder.
+
+# 2.1. Neural Architecture Search (NAS)
+
+The current NAS framework for obtaining desired DNNs can be divided into two sub-problems, i.e., the search space and the search method.
+
+A well-defined search space is extremely important for NAS, and there are mainly three kinds of search spaces in state-of-the-art NAS methods. The first is the cell-based search space [28, 44, 45, 22]. Once a cell structure is searched, it is used in all layers across the network by stacking multiple cells. Each cell contains several blocks, and each block contains two branches, with each branch applying an operation to the output of one of the former blocks. The outputs of the two branches are added to get the final output of the block. The second is the Directed Acyclic Graph (DAG) based search space [40]. The difference between the cell-based and DAG-based search spaces is that the latter does not restrict the number of branches, and the number of inputs and outputs of a node in the cell is not limited. The third is the factorized hierarchical search space [34, 35, 9], which allows different layer architectures in different blocks.
+
+Besides the search space, most NAS research focuses on developing efficient search methods, which can be divided into combinatorial optimization methods and continuous optimization methods [23, 38, 37, 24]. Combinatorial optimization methods include Evolutionary Algorithm (EA) based methods [23, 26, 29, 30, 39] and Reinforcement Learning (RL) based methods [44, 45, 1]. Continuous optimization methods include DARTS [24], which makes the search space continuous by relaxing the categorical choice of a particular operation to a softmax over all possible operations, and several one-shot methods that solve the problem in a one-shot procedure [28]. Recently, architecture datasets with substantial numbers of fully trained neural architectures have also been proposed to compare different NAS methods conveniently and fairly [40, 7, 41].
+
+# 2.2. NAS Predictor
+
+There are limited works focusing on predicting network performance. Some previous works address hyper-parameter optimization with Gaussian Processes [33], focusing on optimization functions to better evaluate hyper-parameters. Other methods directly predict the performance of a given network architecture. The first way is to predict the final accuracy from part of the learning curve with a mixture of parametric functions [6], a Bayesian Neural Network [16] or v-SVR [2]. The second way is to predict the performance of a network with a predictor. Deng et al. [5] extract the features of a given network architecture layer by layer, and the variable-length features are sent to an LSTM to predict the final accuracy. Istrate et al. [12] predict the accuracy in a similar manner with a random forest, arguing that a random forest requires little training data. Luo et al. [25] propose an end-to-end manner by using an encoder to extract features of the networks; the learned features are optimized with gradient descent and then decoded into new architectures with a decoder, and the architecture derived in this way is regarded as an optimal architecture with high performance.
+
+# 2.3. Graph Convolutional Network (GCN)
+
+GCN is a prevalent technique for tackling data generated from non-Euclidean domains and represented as graphs with complex relations. Sperduti et al. [31] first tackled DAGs with neural networks, and recently GCNs have achieved state-of-the-art performance in multiple tasks, such as citation networks [15], social networks [19] and point cloud data analysis [42]. Both graph-level and node-level tasks can be tackled with GCNs. For a graph-level task, each graph is treated as an individual example and the GCN predicts the labels of those graphs. For node-level tasks, the examples are treated as vertices of a graph which reflects the relations between them, and the labels of examples are predicted by the GCN with the help of the graph. Beyond the features of the examples, the graph provides extra valuable information and improves prediction accuracy.
+
+
+Figure 1. Performance prediction pipeline of the proposed semi-supervised assessor. Both labeled and unlabeled architectures are sent to the auto-encoder to obtain meaningful representations. Then a relation graph is constructed to capture architecture similarities based on the learned representations. Both the representations and the relation graph are sent to the GCN assessor, which outputs the estimated performance of the architectures. The entire system can be trained end-to-end.
+
+
+# 3. Approach
+
+Consider the search space $X = X^{l} \bigcup X^{u}$ with $N = N_{l} + N_{u}$ architectures, where $X^{l} = \{x_{1}^{l}, x_{2}^{l}, \dots, x_{N_{l}}^{l}\}$ are annotated architectures with the corresponding ground-truth performance $y^{l} = \{y_{1}^{l}, y_{2}^{l}, \dots, y_{N_{l}}^{l}\}$, and $X^{u} = \{x_{1}^{u}, x_{2}^{u}, \dots, x_{N_{u}}^{u}\}$ are the remaining massive unlabeled architectures. The assessor $\mathcal{P}$ takes an architecture $x_{i} \in X$ as input and outputs the estimated performance $\hat{y}_{i} = \mathcal{P}(W_{p}, x_{i})$, where $W_{p}$ denotes the trainable parameters of the assessor $\mathcal{P}$. Given a sufficiently large labeled architecture set as training data, the assessor $\mathcal{P}$ can be trained in a supervised manner to fit the ground-truth performance [5, 32], i.e.,
+
+$$
+\min _ {W _ {p}} \frac {1}{N _ {l}} \sum_ {i = 1} ^ {N _ {l}} \left| \left| \mathcal {P} \left(W _ {p}, \boldsymbol {x} _ {i} ^ {l}\right) - y _ {i} ^ {l} \right| \right| _ {2} ^ {2}, \tag {1}
+$$
+
+where $||\cdot ||_2$ denotes the $\ell_2$ norm. However, due to the limitation of time and computational resources, only very limited architectures can be trained from scratch to get the ground-truth performance, which is not enough to support the training of a predictor with high accuracy. Actually, there are massive architectures without annotations, and they can participate in the prediction process. The similarity between architectures can provide extra information to make up for the insufficiency of labeled architectures and help train a performance predictor with higher accuracy.
+
+# 3.1. Architecture Embedding
+
+Before sending neural architectures to the performance predictor, we need an encoder $\mathcal{E}$ to obtain an appropriate embedding of the architectures. There are already some common hand-crafted representations of architectures for specific search spaces. For example, Ying et al. [40] represent the architectures in a Directed Acyclic Graph (DAG) based search space with adjacency matrices, where 0 represents no connection between two nodes and the non-zero integers denote the operation types. Though these hand-crafted representations can describe different architectures, they are usually redundant and noisy with respect to the intrinsic properties of architectures. In contrast with this manual approach, we aim to discover more effective representations of neural architectures with an auto-encoder.
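+
+As a rough illustration of such a hand-crafted encoding (a hypothetical sketch, not the exact format of [40]; the operation vocabulary and helper name are ours), a cell can be flattened into a single vector from its adjacency matrix and per-node operation labels:
+
+```python
+import numpy as np
+
+# Hypothetical flattening of a (adjacency, operations) cell encoding used as input to E.
+OPS = ["input", "conv1x1-bn-relu", "conv3x3-bn-relu", "maxpool3x3", "output"]
+
+def encode_cell(adjacency: np.ndarray, ops: list) -> np.ndarray:
+    """Flatten an upper-triangular adjacency matrix and one-hot op labels into one vector."""
+    assert adjacency.shape[0] == adjacency.shape[1] == len(ops)
+    one_hot = np.zeros((len(ops), len(OPS)), dtype=np.float32)
+    for i, op in enumerate(ops):
+        one_hot[i, OPS.index(op)] = 1.0
+    return np.concatenate([adjacency.astype(np.float32).ravel(), one_hot.ravel()])
+
+# Toy cell: input -> 3x3 conv -> output, plus a skip edge from input to output.
+adj = np.zeros((3, 3), dtype=np.int64)
+adj[0, 1] = adj[1, 2] = adj[0, 2] = 1
+x = encode_cell(adj, ["input", "conv3x3-bn-relu", "output"])
+print(x.shape)  # (9 + 15,) = (24,)
+```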
+
+A classical auto-encoder [13] contains two modules: the encoder $\mathcal{E}$ and the decoder $\mathcal{D}$. $\mathcal{E}$ takes the hand-crafted representations of both labeled architectures $x^{l} \in X^{l}$ and unlabeled architectures $x^{u} \in X^{u}$ as input and maps them to a low-dimensional space. The learned compact representation is then sent to the decoder $\mathcal{D}$ to reconstruct the original input. The auto-encoder is trained as:
+
+$$
+\min_{W_e, W_d} \mathcal{L}_{rc} = \frac{1}{N_l} \sum_{i=1}^{N_l} \left\| \mathcal{D}\left(\mathcal{E}\left(\boldsymbol{x}_i^l; W_e\right); W_d\right) - \boldsymbol{x}_i^l \right\|_2^2 + \frac{1}{N_u} \sum_{j=1}^{N_u} \left\| \mathcal{D}\left(\mathcal{E}\left(\boldsymbol{x}_j^u; W_e\right); W_d\right) - \boldsymbol{x}_j^u \right\|_2^2, \tag{2}
+$$
+
+where $W_{e}$ and $W_{d}$ are the trainable parameters of the encoder $\mathcal{E}$ and decoder $\mathcal{D}$, respectively. The feature $\mathcal{E}(\boldsymbol{x}_{i})$ learned by the auto-encoder for an architecture $\boldsymbol{x}_{i} \in X$ is a more compact representation of the architecture. Most importantly, the auto-encoder can be optimized together with the predictor $\mathcal{P}$ in an end-to-end manner, which makes the feature $\mathcal{E}(\boldsymbol{x}_i)$ more compatible with $\mathcal{P}$ for predicting the performance of architectures.
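+
+A minimal PyTorch sketch of this auto-encoder is given below. The MLP layer sizes are illustrative assumptions (the paper's encoder uses two convolutional layers and a fully-connected layer), and `reconstruction_loss` mirrors Eq. (2) for one batch of labeled and one batch of unlabeled encodings.
+
+```python
+import torch
+import torch.nn as nn
+
+class ArchAutoEncoder(nn.Module):
+    """Illustrative auto-encoder over flattened architecture encodings."""
+    def __init__(self, in_dim: int, hid_dim: int = 64, z_dim: int = 16):
+        super().__init__()
+        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
+                                     nn.Linear(hid_dim, z_dim))
+        self.decoder = nn.Sequential(nn.Linear(z_dim, hid_dim), nn.ReLU(),
+                                     nn.Linear(hid_dim, in_dim))
+
+    def forward(self, x):
+        z = self.encoder(x)           # learned representation E(x)
+        return z, self.decoder(z)     # and its reconstruction D(E(x))
+
+def reconstruction_loss(model, x_labeled, x_unlabeled):
+    """L_rc of Eq. (2): squared reconstruction error on labeled and unlabeled encodings."""
+    _, rec_l = model(x_labeled)
+    _, rec_u = model(x_unlabeled)
+    return ((rec_l - x_labeled) ** 2).sum(dim=1).mean() + \
+           ((rec_u - x_unlabeled) ** 2).sum(dim=1).mean()
+```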
+
+# 3.2. Semi-supervised Architecture Assessor
+
+The architectures in a search space are not independent, and there are intrinsic relations between them. For example, an architecture can always be obtained by slightly modifying a very 'similar' architecture, such as replacing an operation type, adding/removing an edge, changing the width/depth and so on. Most importantly, beyond the limited labeled architectures, the massive unlabeled architectures in the search space would also be helpful for the training of the assessor $\mathcal{P}$, because of their underlying connections with the labeled architectures. Though obtaining the real performance of all architectures is impossible, exploiting the large volume of unlabeled architectures and exploring the intrinsic constraints underlying different architectures makes up for the insufficiency of labeled architectures.
+
+Based on the learned representation $\mathcal{E}(\pmb{x}_i)$ of architectures, we adopt the common Radial Basis Function (RBF) [8] to define the similarity measure $s(\pmb{x}_i,\pmb{x}_j)$ between architectures $\pmb{x}_i\in X$ and $\pmb{x}_j\in X$ , i.e.,
+
+$$
+s \left(\boldsymbol {x} _ {i}, \boldsymbol {x} _ {j}\right) = \exp \left(- \frac {d \left(\mathcal {E} \left(\boldsymbol {x} _ {i}\right) , \mathcal {E} \left(\boldsymbol {x} _ {j}\right)\right)}{2 \sigma^ {2}}\right), \tag {3}
+$$
+
+where $d(\cdot, \cdot)$ denotes the distance measure (e.g., Euclidean distance) and $\sigma$ is a scale factor. $s(\pmb{x}_i, \pmb{x}_j)$ ranges in [0,1] and $s(\pmb{x}_i, \pmb{x}_i) = 1$ . When the distance between representation $\mathcal{E}(\pmb{x}_i)$ and $\mathcal{E}(\pmb{x}_j)$ becomes larger, the similarity $s(\pmb{x}_i, \pmb{x}_j)$ decreases rapidly.
+
+Given this similarity measurement, the relation between architectures can be easily modeled by a graph $G$, where each vertex denotes an architecture $\boldsymbol{x}_i \in X$ and an edge reflects the similarity between architectures. Both labeled and unlabeled architectures are involved in the graph $G$. Denote the adjacency matrix of graph $G$ as $A \in \mathbb{R}^{N \times N}$, where $A_{ij} = s(\boldsymbol{x}_i, \boldsymbol{x}_j)$ if $s(\boldsymbol{x}_i, \boldsymbol{x}_j)$ exceeds a threshold $\tau$ and zero otherwise. Note that $A_{ii} = 1$, so there are self-connections in graph $G$. Two similar architectures thus tend to be located close to each other in the graph and are connected by an edge with a large weight. Architectures connected by edges have a direct relation, while disconnected architectures interact with each other implicitly via other vertices. This accords with the intuition that two very different architectures can be connected by some intermediate architectures.
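+
+A short sketch of this graph construction, following Eq. (3) with Euclidean distance and the threshold $\tau$ described above (the function name is ours):
+
+```python
+import torch
+
+def build_relation_graph(z: torch.Tensor, sigma: float, tau: float) -> torch.Tensor:
+    """Adjacency A with A_ij = exp(-d(E(x_i), E(x_j)) / (2*sigma^2)), zeroed below tau."""
+    d = torch.cdist(z, z, p=2)                     # pairwise Euclidean distances
+    a = torch.exp(-d / (2 * sigma ** 2))           # RBF similarity, Eq. (3)
+    return torch.where(a >= tau, a, torch.zeros_like(a))  # A_ii = 1 since d(z_i, z_i) = 0
+```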
+
+To utilize both the limited labeled architectures and the massive unlabeled architectures, with their similarity modeled by the graph $G$, we construct the assessor $\mathcal{P}$ by stacking multiple graph convolutional layers [15], which takes the learned representations of both labeled and unlabeled architectures as inputs. The graph $G$ is also embedded into each layer and guides the information propagation between the features of different architectures. Taking all these architectures as a whole and utilizing the relations between architectures, the assessor $\mathcal{P}$ outputs their estimated performance. An assessor $\mathcal{P}$ composed of two graph convolutional layers is:
+
+$$
+[\hat{\boldsymbol{y}}^{l}, \hat{\boldsymbol{y}}^{u}] = \mathcal{P}(\mathcal{E}([X^{l}, X^{u}]), G, W_{p}) = \hat{A}\,\mathrm{ReLU}\left(\hat{A}\,\mathcal{E}([X^{l}, X^{u}])\, W_{p}^{(0)}\right) W_{p}^{(1)}, \tag{4}
+$$
+
+where $\mathcal{E}([X^l,X^u ])$ denotes the learned representations of both labeled and unlabeled architectures, and $\hat{\pmb{y}}^{l} = \{\hat{y}_{1}^{l},\hat{y}_{2}^{l},\dots ,\hat{y}_{N_{l}}^{l}\}$ and $\hat{\pmb{y}}^u = \{\hat{y}_1^u,\hat{y}_2^u,\dots ,\hat{y}_{N_u}^u\}$ are their estimated performance, respectively. $D$ is a diagonal matrix where $D_{ii} = \sum_{j}A_{ij}$ , and $\hat{A} = D^{-\frac{1}{2}}AD^{-\frac{1}{2}}$ . $W_{p}^{(0)}$ , $W_{p}^{(1)}$ are the weight matrices.
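+
+The sketch below is a minimal instantiation of Eq. (4) with the symmetric normalization $\hat{A} = D^{-1/2} A D^{-1/2}$; the hidden width is an assumed value and the module name is ours.
+
+```python
+import torch
+import torch.nn as nn
+
+class GCNAssessor(nn.Module):
+    """Two-layer GCN of Eq. (4): [y_l, y_u] = A_hat ReLU(A_hat Z W0) W1."""
+    def __init__(self, z_dim: int, hid_dim: int = 32):
+        super().__init__()
+        self.w0 = nn.Linear(z_dim, hid_dim, bias=False)
+        self.w1 = nn.Linear(hid_dim, 1, bias=False)
+
+    @staticmethod
+    def normalize(a: torch.Tensor) -> torch.Tensor:
+        # A_hat = D^{-1/2} A D^{-1/2} with D_ii = sum_j A_ij
+        d_inv_sqrt = a.sum(dim=1).clamp(min=1e-12).pow(-0.5)
+        return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)
+
+    def forward(self, z: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
+        a_hat = self.normalize(a)
+        h = torch.relu(a_hat @ self.w0(z))
+        return (a_hat @ self.w1(h)).squeeze(-1)   # one predicted score per architecture
+```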
+
+As shown in Eq. (4), the output of the assessor $\mathcal{P}$ depends not only on the input representations but also on the neighboring architectures in the graph $G$ through the adjacency matrix $A$, and thus the performance prediction processes of labeled and unlabeled architectures interact with each other. In fact, a GCN can be considered a Laplacian smoothing operator [21], and intuitively two connected nodes on the graph tend to have similar features and produce similar outputs. As both labeled and unlabeled architectures are sent to the predictor simultaneously, their intermediate features interrelate with each other.
+
+The assessor $\mathcal{P}$ is trained to fit the ground-truth performance of the labeled architectures based on both the architectures themselves and the relations between them, i.e.,
+
+$$
+\min _ {W _ {p}} \mathcal {L} _ {r g} = \frac {1}{N _ {l}} \sum_ {i = 1} ^ {N _ {l}} | | \hat {y} _ {i} ^ {l} - y _ {i} ^ {l} | | _ {2} ^ {2}, \tag {5}
+$$
+
+where $W_{p}$ denotes the trainable parameters of the assessor $\mathcal{P}$. Though the supervised loss is only applied to labeled architectures, the unlabeled architectures also participate in the performance prediction of the labeled architectures via the relation graph $G$, and thus the supervision information from those limited performance labels can guide the feature generation process of the unlabeled architectures. Intuitively, the labels propagate along the edges of the relation graph $G$, taking into account the length of paths and the weights of edges. Moreover, the training process helps the predictor learn to predict the performance of a given architecture with the assistance of its neighbors in the graph $G$, which makes the prediction more robust and improves the prediction accuracy.
+
+# 3.3. Optimization
+
+The auto-encoder and the assessor constitute an end-to-end system, which learns the representations of architectures and predicts their performance simultaneously. As shown in Figure 1, the hand-crafted representations of both labeled architectures $X^{l}$ and unlabeled architectures $X^{u}$ are first delivered to the encoder $\mathcal{E}$ to produce the learned representations $\mathcal{E}([X^l, X^u])$, and then the relation graph $G$ is constructed based on the representations $\mathcal{E}([X^l, X^u])$ via Eq. (3). Both the representations $\mathcal{E}([X^l, X^u])$ and the relation graph $G$ are sent to the GCN assessor $\mathcal{P}$ to get the estimated performance $\hat{\pmb{y}}$. In the training phase, the learned representations $\mathcal{E}([X^l, X^u])$ are also sent to the decoder $\mathcal{D}$ to reconstruct the original input. Combining the regression loss $\mathcal{L}_{rg}$ that fits the ground-truth performance and the reconstruction loss $\mathcal{L}_{rc}$, the entire system is trained as:
+
+$$
+\min_{W_e, W_d, W_p} \mathcal{L} = (1 - \lambda) \mathcal{L}_{rg} + \lambda \mathcal{L}_{rc}, \tag{6}
+$$
+
+where $\lambda \in [0,1]$ is a hyper-parameter that balances the two loss terms. In this end-to-end system, the learning of architecture representations and performance prediction promote each other. The regression loss $\mathcal{L}_{rg}$ focuses on fitting the ground-truth performance of labeled architectures and propagating labels to the unlabeled architectures, which also makes the learned representations $\mathcal{E}([X^l,X^u])$ more strongly correlated with the ground-truth performance. The reconstruction loss $\mathcal{L}_{rc}$ distills information from the massive unlabeled architectures to supplement the limited labeled examples and makes the training process more robust. Note that for both the regression loss $\mathcal{L}_{rg}$ and the reconstruction loss $\mathcal{L}_{rc}$, the unlabeled architectures participate in the optimization process and play an important role.
+
+When applying the proposed semi-supervised assessor to a large search space containing massive architectures, it is inefficient to construct a single large graph containing all $N$ architectures. Constructing such a graph requires calculating the similarity between every pair of architectures, which is time-consuming, and storing it also requires a large amount of memory. Mini-batching is a common strategy for tackling big data in deep learning [18], and we propose to construct the graph and train the entire system with mini-batches. For each mini-batch, labeled and unlabeled architectures are randomly sampled from $X^l$ and $X^u$, and the graph is constructed with those examples. Thus the entire system can be trained efficiently with stochastic gradient descent on memory-limited GPUs. The mini-batch training algorithm is presented in Algorithm 1.
+
+# 4. Experiments
+
+In this section, we conduct extensive experiments to validate the effectiveness of the proposed semi-supervised assessor. First, the performance prediction accuracies of our method are compared with several state-of-the-art methods. Then we embed the proposed assessor and peer competitors into a combinatorial search algorithm (an evolutionary algorithm) to identify architectures with good performance. Ablation studies are also conducted to further analyze the proposed method.
+
+# Algorithm 1 Training of the semi-supervised assessor.
+
+Input: Search space $X = X^l \bigcup X^u$ , and the ground-truth performance $\pmb{y}^l$ for labeled architectures.
+
+# 1: repeat
+
+2: Randomly select labeled and unlabeled architectures from $X^l$ and $X^u$ respectively to form a mini-batch $\mathcal{B}$ ;
+3: Send the architectures $\pmb{x} \in \mathcal{B}$ to feature extractor $\mathcal{E}$ and get the learned representation $\mathcal{E}(\pmb{x})$ ;
+4: Calculate the similarity between architectures via Eq. (3) and construct the relation graph $G$ ;
+5: Send the learned representation $\mathcal{E}(\pmb{x})$ and relation graph $G$ to the GCN assessor $\mathcal{P}$ and output the approximate performance $\hat{y}$ ;
+6: Calculate the regression loss $\mathcal{L}_{rg}$ via Eq. (5);
+7: Send the learned representation $\mathcal{E}(\pmb{x})$ to the decoder $\mathcal{D}$ and calculate the reconstruction loss $\mathcal{L}_{rc}$ via Eq. (2);
+8: Calculate the final loss $\mathcal{L} = (1 - \lambda)\mathcal{L}_{rg} + \lambda \mathcal{L}_{rc}$;
+9: Backward and update the parameters of encoder $\mathcal{E}$ , assessor $\mathcal{P}$ and decoder $\mathcal{D}$ ;
+10: until Convergence;
+
+Output: The trained encoder $\mathcal{E}$ and assessor $\mathcal{P}$ .
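+
+Assuming the illustrative modules sketched earlier (`ArchAutoEncoder`, `build_relation_graph`, `GCNAssessor`), one mini-batch update following Algorithm 1 and Eq. (6) might look as follows; the hyper-parameter defaults mirror Sec. 4, but the function itself is only a sketch.
+
+```python
+import torch
+
+def train_step(autoencoder, assessor, optimizer, x_l, y_l, x_u,
+               sigma=0.01, tau=1e-5, lam=0.5):
+    """One mini-batch update of the end-to-end system (steps 2-9 of Algorithm 1)."""
+    optimizer.zero_grad()
+    z_l, rec_l = autoencoder(x_l)                  # labeled representations E(x^l)
+    z_u, rec_u = autoencoder(x_u)                  # unlabeled representations E(x^u)
+    z = torch.cat([z_l, z_u], dim=0)
+    a = build_relation_graph(z, sigma, tau)        # relation graph G, Eq. (3)
+    y_hat = assessor(z, a)                         # predicted performance, Eq. (4)
+    n_l = x_l.shape[0]
+    loss_rg = ((y_hat[:n_l] - y_l) ** 2).mean()    # regression loss, Eq. (5)
+    loss_rc = ((rec_l - x_l) ** 2).sum(1).mean() + \
+              ((rec_u - x_u) ** 2).sum(1).mean()   # reconstruction loss, Eq. (2)
+    loss = (1 - lam) * loss_rg + lam * loss_rc     # combined loss, Eq. (6)
+    loss.backward()
+    optimizer.step()
+    return float(loss.detach())
+```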
+
+
+Dataset. Nas-Bench-101 [40] is the largest public architecture dataset for NAS research proposed recently, containing 423k unique CNN architectures trained on CIFAR-10 [17] for image classification, where the best architecture achieves a test accuracy of $94.23\%$. The search space of Nas-Bench-101 is a feed-forward structure stacked by blocks, and each block is constructed by stacking the same cell 3 times. As all the network architectures in the search space are trained completely to get their ground-truth performance, it is fair and convenient to compare different performance prediction methods comprehensively on Nas-Bench-101. A more detailed description of the dataset can be found in [40]. Besides Nas-Bench-101, we also construct a small architecture dataset on CIFAR-100 to verify the effectiveness of the methods on different datasets.
+
+Implementation details. The encoder $\mathcal{E}$ is constructed by stacking two convolutional layers followed by a fully-connected layer, and the decoder $\mathcal{D}$ is the reverse. The inputs of $\mathcal{E}$ are the matrix representations of architectures following [40, 37]. The assessor $\mathcal{P}$ consists of two graph convolutional layers and outputs the predicted performance. The scale factor $\sigma$ and threshold $\tau$ for constructing the graph are set to 0.01 and $10^{-5}$, respectively, and $\lambda$ in Eq. (6) is set to 0.5 empirically. The entire system is trained end-to-end with the Adam optimizer [14] without weight decay for 200 epochs. The batch size and initial learning rate are set to 1024 and 0.001, respectively.
+
+Table 1. Comparison of performance prediction results on Nas-Bench-101 dataset.
+
+| $N_l$ | Criteria | Peephole [5] | E2EPP [32] | Ours |
+| --- | --- | --- | --- | --- |
+| 1k | KTau | 0.4373±0.0112 | 0.5705±0.0082 | 0.6541±0.0078 |
+| 1k | MSE | 0.0071±0.0005 | 0.0042±0.0003 | 0.0031±0.0003 |
+| 1k | r | 0.4013±0.0092 | 0.4467±0.0071 | 0.5240±0.0068 |
+| 10k | KTau | 0.4870±0.0096 | 0.6941±0.0058 | 0.7814±0.0042 |
+| 10k | MSE | 0.0037±0.0004 | 0.0032±0.0003 | 0.0026±0.0002 |
+| 10k | r | 0.4672±0.0075 | 0.6164±0.0063 | 0.6812±0.0051 |
+| 100k | KTau | 0.4976±0.0055 | 0.7004±0.0051 | 0.8456±0.0031 |
+| 100k | MSE | 0.0036±0.0003 | 0.0024±0.0002 | 0.0016±0.0002 |
+| 100k | r | 0.4804±0.0074 | 0.5874±0.0051 | 0.8047±0.0049 |
+
+All the experiments are conducted with the PyTorch library [27] on NVIDIA V100 GPUs.
+
+# 4.1. Comparison of Prediction Accuracies
+
+We compare the proposed method with the state-of-the-art predictor-based methods Peephole [5] and E2EPP [32]. Since the main function of a performance predictor is to identify better architectures in a search space, an accurate performance ranking of architectures is more important than their absolute values. $\mathrm{KTau} \in [-1, 1]$ is a common indicator measuring the correlation between the ranking of predicted values and that of the actual labels, and higher values mean more accurate prediction. Two other common criteria, mean squared error (MSE) and the correlation coefficient (r), are also reported for completeness. MSE measures the deviation of predictions from the ground truth directly, and $\mathbf{r} \in [-1, 1]$ measures the correlation between predicted values and true labels.
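+
+For reference, the three criteria can be computed with SciPy as in this short sketch (the variable names are ours):
+
+```python
+import numpy as np
+from scipy import stats
+
+def prediction_metrics(y_pred: np.ndarray, y_true: np.ndarray) -> dict:
+    """Kendall's Tau, MSE and Pearson r between predicted and true performance."""
+    ktau, _ = stats.kendalltau(y_pred, y_true)
+    r, _ = stats.pearsonr(y_pred, y_true)
+    mse = float(np.mean((y_pred - y_true) ** 2))
+    return {"KTau": ktau, "MSE": mse, "r": r}
+```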
+
+The experimental results are shown in Table 1. We randomly sample $N_{l}$ architectures from the search space (containing 423k architectures) as labeled examples, and vary $N_{l}$ over $\{1\mathrm{k}, 10\mathrm{k}, 100\mathrm{k}\}$. All possible architectures are available once the search space is given, and thus the remaining architectures are used as unlabeled architectures, i.e., $N_{u} = N - N_{l}$. As shown in Table 1, the proposed semi-supervised assessor surpasses the state-of-the-art methods on all three criteria for different numbers of labeled examples. For example, with 1k labeled architectures, the KTau of our method reaches 0.6541, which is 0.2168 higher than Peephole (0.4373) and 0.0836 higher than E2EPP (0.5705), meaning a more accurate predicted ranking. The correlation coefficient $r$ is also improved by 0.1227 and 0.0773, respectively, indicating a higher correlation between predicted values and ground-truth labels using our method. The improved performance comes from a more thorough exploitation of the information in the search space, which makes up for the insufficiency of labeled data. Note that increasing $N_{l}$ improves the performance of all these methods, but the computational cost of training these architectures is also increased. Thus, the balance between the performance of the predictor and the computational cost of obtaining labeled examples needs to be considered in practice.
+
+The qualitative results are shown in Figure 2. For clarity, 5k architectures are randomly sampled and shown in the scatter diagrams. The $x$-coordinate of each point (architecture) is its ground-truth ranking and the $y$-coordinate is its predicted ranking. For our method, the points are much closer to the diagonal line, implying stronger consistency between the predicted ranking and the ground-truth ranking. Both the numerical criteria and the intuitive diagrams show that our method surpasses the state-of-the-art methods.
+
+Figure 2. Predicted ranking of architectures versus the corresponding true ranking on the Nas-Bench-101 dataset for (a) Peephole, (b) E2EPP and (c) Ours. The $x$-axis denotes the true ranking and the $y$-axis denotes the predicted ranking.
+
+# 4.2. Searching Results on NAS-Bench-101
+
+The performance predictors can be embedded into various architecture search algorithms [32], such as random search, Reinforcement Learning (RL) based methods [44] and Evolutionary Algorithm (EA) based methods [29]. Taking EA-based methods as an example, the performance predicted by the predictor can be used as the fitness, while the other procedures, including population generation, crossover and mutation, remain unchanged. Since we focus on the design of performance predictors, we embed different prediction methods into an EA to find architectures with high performance. Concretely, we compare the best performance among the top-10 architectures selected by different methods, and all the methods are repeated 20 times with different random seeds.
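+
+As a rough illustration of plugging a trained predictor into an evolutionary search as the fitness function (a toy sketch only; the population handling, the mutation routine and all names are placeholders, not the paper's exact EA):
+
+```python
+import random
+
+def evolutionary_search(initial_population, mutate, predict_performance,
+                        generations=100, top_k=10):
+    """Toy EA loop where the assessor's predicted performance serves as fitness."""
+    population = list(initial_population)
+    for _ in range(generations):
+        # Rank the current population by predicted performance (the fitness).
+        population.sort(key=predict_performance, reverse=True)
+        parents = population[:top_k]
+        # Produce the next generation by mutating randomly chosen parents.
+        children = [mutate(random.choice(parents))
+                    for _ in range(len(population) - top_k)]
+        population = parents + children
+    population.sort(key=predict_performance, reverse=True)
+    return population[:top_k]  # e.g., report the best among these top-10 architectures
+```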
+
+The performance of the best architecture selected by different methods is shown in Table 2.
+
+Table 2. Classification accuracies on CIFAR-10 and the performance ranking among all the architectures of Nas-Bench-101. 1k architectures randomly selected from Nas-Bench-101 are used as annotated examples.
+
+| Method | Top-1 Accuracy (%) | Ranking (%) |
+| --- | --- | --- |
+| Peephole [5] | 93.41±0.34 | 1.64 |
+| E2EPP [32] | 93.77±0.13 | 0.15 |
+| Ours | 94.01±0.12 | 0.01 |
+
+Figure 3. Visualization of the best network architectures selected by (a) Peephole, (b) E2EPP and (c) Ours (node legend: in = Input, out = Output, 1x1 = 1x1 Conv, 3x3 = 3x3 Conv, MP = Max-pool). 1k architectures randomly selected from Nas-Bench-101 are used as annotated examples.
+
+The second column reports the accuracies of the architectures on the CIFAR-10 dataset and the third column gives their real performance rankings among all the architectures of Nas-Bench-101. The best network identified by the proposed semi-supervised assessor achieves a performance of $94.01\%$, outperforming the compared methods ($93.41\%$ for Peephole and $93.77\%$ for E2EPP) by a large margin, since the proposed method makes a more accurate estimation of performance and thus identifies architectures with better performance. Though only 1k architectures are sampled to train the predictor, it can still find architectures whose real performance is in the top $0.01\%$ of the search space. Compared to the global best architecture with performance $94.23\%$, which is obtained by exhaustively enumerating all the possible architectures in the search space, the performance of $94.01\%$ obtained by our method with only 1k labeled architectures is comparable.
+
+We further show the best architectures identified by different methods in Figure 3. These well-performing architectures share some common characteristics, e.g., each contains both a very short path (e.g., of length 1) and a long path from the first node to the last. The long path consisting of multiple operations ensures the representation ability of the network, and the short path allows gradients to propagate easily to the shallow layers. The architecture identified by our method (Figure 3(c)) also contains a max-pooling layer in the longest path to enlarge the receptive field, which may be a reason for its better performance.
+
+Table 3. Classification accuracies of the best network architectures on CIFAR-100 selected by different methods. 1k network architectures trained on CIFAR-100 are used as annotated examples.
+
+| Method | Top-1 Accuracy (%) | Top-5 Accuracy (%) |
+| --- | --- | --- |
+| Peephole [5] | 74.21±0.32 | 92.04±0.15 |
+| E2EPP [32] | 75.86±0.19 | 93.11±0.10 |
+| Ours | 78.64±0.16 | 94.23±0.08 |
+
+Figure 4. Visualization of the best network architectures selected by (a) Peephole, (b) E2EPP and (c) Ours (node legend: in = Input, out = Output, 1x1 = 1x1 Conv, 3x3 = 3x3 Conv, MP = Max-pool). 1k network architectures trained on CIFAR-100 are used as annotated examples.
+
+# 4.3. Experiments on CIFAR-100 Dataset
+
+To verify the effectiveness of the proposed semi-supervised assessor on different datasets, we further conduct experiments on the common object classification dataset CIFAR-100. Since there is no architecture dataset with ground-truth performance based on CIFAR-100, we randomly sample 1k architectures from the search space of NAS-Bench-101 and train them completely from scratch using the same training hyper-parameters as in [40]. With these 1k labeled architectures, different performance prediction methods are embedded into the EA to find the best architectures. As CIFAR-100 contains 100 categories, we compare both the top-1 and top-5 accuracies. The best performance among the top-10 architectures is compared, and all the methods are repeated 20 times with different random seeds.
+
+The accuracies and diagrams of the selected architectures are shown in Table 3 and Figure 4, respectively. The best architecture identified by our method achieves much higher performance (78.64% top-1 and 94.23% top-5) compared with the state-of-the-art methods (e.g., 75.86% top-1 and 93.11% top-5 for E2EPP). This implies that exploring the relations between architectures and utilizing the massive unlabeled examples, as the proposed method does, works well across different datasets.
+
+# 4.4. Ablation study
+
+The impact of the scale factor $\sigma$. The hyper-parameter $\sigma$ affects the similarity measurement in Eq. (3) and thereby the construction of the graph.
+
+
+Figure 5. Performance prediction results of the proposed semi-supervised assessor w.r.t. different scale factor $\sigma$ , weight $\lambda$ , and the number of unlabeled architectures $N^u$ .
+
+Table 4. Comparison of prediction accuracies (Ktau) with or without the auto-encoder on Nas-Bench-101 dataset.
+
+| $N_l$ | W/o Auto-encoder | Ours |
+| --- | --- | --- |
+| 1k | 0.5302±0.0081 | 0.6541±0.0078 |
+| 10k | 0.7188±0.0025 | 0.7814±0.0042 |
+| 100k | 0.7578±0.0038 | 0.8456±0.0031 |
+
+With a fixed threshold $\tau$, a denser graph $G$ is constructed when $\sigma$ is larger, and more interaction between different architectures takes place when predicting performance with the GCN assessor. The prediction results with different scale factors $\sigma$ are shown in Figure 5(a), which verifies the effectiveness of utilizing unlabeled architectures via a relation graph to train a more accurate performance predictor. An excessive $\sigma$ also causes accuracy to drop in Figure 5(a), as putting too much attention on other architectures disturbs the supervised training process.
+
+The impact of the weight $\lambda$. The weight $\lambda$ balances the regression loss $\mathcal{L}_{rg}$ and the reconstruction loss $\mathcal{L}_{rc}$. When the reconstruction loss does not participate in the training process ($\lambda = 0$), the prediction accuracies (KTau and $r$) are lower than those obtained with the reconstruction loss, as shown in Figure 5(b), since the information in the massive unlabeled architectures is not well preserved when constructing the learned architecture representations.
+
+The number of unlabeled architectures $N_u$. The unlabeled architectures provide extra information that helps train the architecture assessor to make accurate predictions. As shown in Figure 5, as the number of unlabeled architectures increases, both criteria KTau and $r$ increase correspondingly, indicating more accurate performance prediction. The improvement comes from the additional information provided by the unlabeled architectures. When the number of unlabeled architectures is enough to reflect the property of the search space (e.g., $N_u = 50\mathrm{k}$), adding extra unlabeled architectures brings only limited accuracy improvement.
+
+The effect of the auto-encoder. To show the superiority of the learned representations over the hand-crafted representations, the prediction results with and without the auto-encoder are shown in Table 4. The prediction accuracies (KTau) are improved substantially by the auto-encoder (e.g., 0.6541 vs. 0.5302 with 1k labeled architectures), which indicates that the learned representations reflect the intrinsic characteristics of architectures and are thus well suited both for measuring architecture similarity and for serving as inputs to the performance predictor.
+
+# 5. Conclusion
+
+This paper proposes a semi-supervised assessor to evaluate network architectures by predicting their performance directly. Different from conventional performance predictors trained in a fully supervised way, the proposed semi-supervised assessor takes advantage of the massive unlabeled architectures in the search space by exploring the intrinsic similarity between architectures. Meaningful representations of architectures are discovered by an auto-encoder, and a relation graph involving both labeled and unlabeled architectures is constructed based on the learned representations. The GCN assessor takes both the representations and the relation graph as input to predict the performance. With only 1k architectures randomly sampled from the large NAS-Benchmark-101 dataset [40], an architecture with $94.01\%$ accuracy (top $0.01\%$ of the entire search space) can be found with the proposed method. In the future, we plan to investigate sampling strategies to construct more representative training sets for the assessor and to identify better architectures with fewer labeled architectures.
+
+# Acknowledgment
+
+This work is supported by National Natural Science Foundation of China under Grant No. 61876007, 61872012, National Key R&D Program of China (2019YFF0302902), Australian Research Council under Project DE-180101438, and Beijing Academy of Artificial Intelligence (BAAI).
+
+# References
+
+[1] Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar. Designing neural network architectures using reinforcement learning. arXiv preprint arXiv:1611.02167, 2016.
+[2] Bowen Baker, Otkrist Gupta, Ramesh Raskar, and Nikhil Naik. Practical neural network performance prediction for early stopping. arXiv preprint arXiv:1705.10823, 2(3):6, 2017.
+[3] Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, and Quoc Le. Understanding and simplifying one-shot architecture search. In International Conference on Machine Learning, pages 550-559, 2018.
+[4] Zhaowei Cai and Nuno Vasconcelos. Cascade r-cnn: Delving into high quality object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6154-6162, 2018.
+[5] Boyang Deng, Junjie Yan, and Dahua Lin. Peephole: Predicting network performance before training. arXiv preprint arXiv:1712.03351, 2017.
+[6] Tobias Domhan, Jost Tobias Springenberg, and Frank Hutter. Speeding up automatic hyperparameter optimization of deep neural networks by extrapolation of learning curves. In Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015.
+[7] Xuanyi Dong and Yi Yang. Nas-bench-102: Extending the scope of reproducible neural architecture search. arXiv preprint arXiv:2001.00326, 2020.
+[8] Andrew C Good and W Graham Richards. Rapid evaluation of shape similarity using gaussian functions. Journal of chemical information and computer sciences, 33(1):112-116, 1993.
+[9] Zichao Guo, Xiangyu Zhang, Haoyuan Mu, Wen Heng, Zechun Liu, Yichen Wei, and Jian Sun. Single path one-shot neural architecture search with uniform sampling. arXiv preprint arXiv:1904.00420, 2019.
+[10] Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In Advances in neural information processing systems, pages 8527-8537, 2018.
+[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016.
+[12] Roxana Istrate, Florian Scheidegger, Giovanni Mariani, Dimitrios Nikolopoulos, Costas Bekas, and A Cristiano I Malossi. Tapas: Train-less accuracy predictor for architecture search. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3927-3934, 2019.
+[13] Memoona Khanum, Tahira Mahboob, Warda Imtiaz, Humaraia Abdul Ghafoor, and Rabeea Sehar. A survey on unsupervised machine learning algorithms for automation, classification and maintenance. International Journal of Computer Applications, 119(13), 2015.
+[14] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+
+[15] Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
+[16] Aaron Klein, Stefan Falkner, Jost Tobias Springenberg, and Frank Hutter. Learning curve prediction with bayesian neural networks. 2016.
+[17] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
+[18] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097-1105, 2012.
+[19] Jia Li, Yu Rong, Hong Cheng, Helen Meng, Wenbing Huang, and Junzhou Huang. Semi-supervised graph classification: A hierarchical graph perspective. In The World Wide Web Conference, pages 972-982. ACM, 2019.
+[20] Liam Li and Ameet Talwalkar. Random search and reproducibility for neural architecture search. arXiv preprint arXiv:1902.07638, 2019.
+[21] Qimai Li, Zhichao Han, and Xiao-Ming Wu. Deeper insights into graph convolutional networks for semi-supervised learning. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
+[22] Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy. Progressive neural architecture search. In Proceedings of the European Conference on Computer Vision (ECCV), pages 19–34, 2018.
+[23] Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, and Koray Kavukcuoglu. Hierarchical representations for efficient architecture search. arXiv preprint arXiv:1711.00436, 2017.
+[24] Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055, 2018.
+[25] Renqian Luo, Fei Tian, Tao Qin, Enhong Chen, and Tie-Yan Liu. Neural architecture optimization. In Advances in neural information processing systems, pages 7816-7827, 2018.
+[26] Risto Miikkulainen, Jason Liang, Elliot Meyerson, Aditya Rawal, Daniel Fink, Olivier Francon, Bala Raju, Hormoz Shahrzad, Arshak Navruzyan, Nigel Duffy, et al. Evolving deep neural networks. In Artificial Intelligence in the Age of Neural Networks and Brain Computing, pages 293-312. Elsevier, 2019.
+[27] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017.
+[28] Hieu Pham, Melody Y Guan, Barret Zoph, Quoc V Le, and Jeff Dean. Efficient neural architecture search via parameter sharing. arXiv preprint arXiv:1802.03268, 2018.
+[29] Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V Le. Regularized evolution for image classifier architecture search. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 4780-4789, 2019.
+
+[30] Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc V Le, and Alexey Kurakin. Large-scale evolution of image classifiers. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 2902–2911. JMLR.org, 2017.
+[31] Alessandro Sperduti and Antonina Starita. Supervised neural networks for the classification of structures. IEEE Transactions on Neural Networks, 8(3):714-735, 1997.
+[32] Yanan Sun, Handing Wang, Bing Xue, Yaochu Jin, Gary G Yen, and Mengjie Zhang. Surrogate-assisted evolutionary deep learning using an end-to-end random forest-based performance predictor. IEEE Transactions on Evolutionary Computation, 2019.
+[33] Kevin Swersky, Jasper Snoek, and Ryan Prescott Adams. Freeze-thaw bayesian optimization. arXiv preprint arXiv:1406.3896, 2014.
+[34] Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V Le. Mnasnet: Platform-aware neural architecture search for mobile. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2820-2828, 2019.
+[35] Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, and Kurt Keutzer. Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10734-10742, 2019.
+[36] Sirui Xie, Hehui Zheng, Chunxiao Liu, and Liang Lin. Snas: stochastic neural architecture search. arXiv preprint arXiv:1812.09926, 2018.
+[37] Yixing Xu, Yunhe Wang, Kai Han, Hanting Chen, Yehui Tang, Shangling Jui, Chunjing Xu, Qi Tian, and Chang Xu. Rnas: Architecture ranking for powerful networks. arXiv preprint arXiv:1910.01523, 2019.
+[38] Chao Xue, Junchi Yan, Rong Yan, Stephen M Chu, Yonggang Hu, and Yonghua Lin. Transferable automl by model sharing over grouped datasets. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9002-9011, 2019.
+[39] Zhaohui Yang, Yunhe Wang, Xinghao Chen, Boxin Shi, Chao Xu, Chunjing Xu, Qi Tian, and Chang Xu. Cars: Continuous evolution for efficient neural architecture search. arXiv preprint arXiv:1909.04977, 2019.
+[40] Chris Ying, Aaron Klein, Esteban Real, Eric Christiansen, Kevin Murphy, and Frank Hutter. Nas-bench-101: Towards reproducible neural architecture search. arXiv preprint arXiv:1902.09635, 2019.
+[41] Arber Zela, Julien Siems, and Frank Hutter. Nas-bench-1shot1: Benchmarking and dissecting one-shot neural architecture search. arXiv preprint arXiv:2001.10422, 2020.
+[42] Yingxue Zhang and Michael Rabbat. A graph-cnn for 3d point cloud classification. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6279-6283. IEEE, 2018.
+[43] Yizhou Zhou, Xiaoyan Sun, Zheng-Jun Zha, and Wenjun Zeng. Context-reinforced semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4046-4055, 2019.
+
+[44] Barret Zoph and Quoc V Le. Neural architecture search with reinforcement learning. arXiv preprint arXiv:1611.01578, 2016.
+[45] Barret Zoph, Vijay Vasudevan, Jonathon Shlens, and Quoc V Le. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8697-8710, 2018.
\ No newline at end of file
diff --git a/asemisupervisedassessorofneuralarchitectures/images.zip b/asemisupervisedassessorofneuralarchitectures/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..cb528d309d8b9e6616083dde1aa9413be20f17ba
--- /dev/null
+++ b/asemisupervisedassessorofneuralarchitectures/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b3159687fbe90fc845e7102feec096f2829a142dd475547887a9cd458a0c0a90
+size 322054
diff --git a/asemisupervisedassessorofneuralarchitectures/layout.json b/asemisupervisedassessorofneuralarchitectures/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..5a8f3f17d212bbb82dc72333ae4f23cf33e7bcdb
--- /dev/null
+++ b/asemisupervisedassessorofneuralarchitectures/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f7c179654853165a70296030fcb559370474b550cd54ae1f99d0265ed2966895
+size 490733
diff --git a/asharedmultiattentionframeworkformultilabelzeroshotlearning/cf6097f9-8042-4e1c-bcd3-4617ce789559_content_list.json b/asharedmultiattentionframeworkformultilabelzeroshotlearning/cf6097f9-8042-4e1c-bcd3-4617ce789559_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..5beb3c303608015959dd0373a244e92dabed3d37
--- /dev/null
+++ b/asharedmultiattentionframeworkformultilabelzeroshotlearning/cf6097f9-8042-4e1c-bcd3-4617ce789559_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:652059cf63c1ea9b77dba8984a6d4bea2e41aee49d7e864e542e5d5bd775020a
+size 88531
diff --git a/asharedmultiattentionframeworkformultilabelzeroshotlearning/cf6097f9-8042-4e1c-bcd3-4617ce789559_model.json b/asharedmultiattentionframeworkformultilabelzeroshotlearning/cf6097f9-8042-4e1c-bcd3-4617ce789559_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..606e5c9b90bfda182375ed30768d75dcb60104cb
--- /dev/null
+++ b/asharedmultiattentionframeworkformultilabelzeroshotlearning/cf6097f9-8042-4e1c-bcd3-4617ce789559_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f5b8cabd1fe506a14a04a644012df0bcffd8258bb47c6298c80af5240c357f4d
+size 111590
diff --git a/asharedmultiattentionframeworkformultilabelzeroshotlearning/cf6097f9-8042-4e1c-bcd3-4617ce789559_origin.pdf b/asharedmultiattentionframeworkformultilabelzeroshotlearning/cf6097f9-8042-4e1c-bcd3-4617ce789559_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..44c8dfeacbe081435e6176c4e285e2c6d38fa2cb
--- /dev/null
+++ b/asharedmultiattentionframeworkformultilabelzeroshotlearning/cf6097f9-8042-4e1c-bcd3-4617ce789559_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:74c8e7c9cf0e2babdf0979837a97a648d645f715e9369b235289a4e2290346d4
+size 1131304
diff --git a/asharedmultiattentionframeworkformultilabelzeroshotlearning/full.md b/asharedmultiattentionframeworkformultilabelzeroshotlearning/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..672c3cd2f8888982af2d5292ee36662ffef98de6
--- /dev/null
+++ b/asharedmultiattentionframeworkformultilabelzeroshotlearning/full.md
@@ -0,0 +1,347 @@
+# A Shared Multi-Attention Framework for Multi-Label Zero-Shot Learning
+
+Dat Huynh Northeastern University
+
+huynh.dat@husky.neu.edu
+
+Ehsan Elhamifar Northeastern University
+
+eelhami@ccs.neu.edu
+
+# Abstract
+
+In this work, we develop a shared multi-attention model for multi-label zero-shot learning. We argue that designing an attention mechanism for recognizing multiple seen and unseen labels in an image is a non-trivial task, as there is no training signal to localize unseen labels and an image contains only a few present labels, out of thousands of possible ones, that need attention. Therefore, instead of generating attentions for unseen labels, which have unknown behaviors and could focus on irrelevant regions due to the lack of any training sample, we let the unseen labels select among a set of shared attentions which are trained to be label-agnostic and to focus on only relevant/foreground regions through our novel loss. Finally, we learn a compatibility function to distinguish labels based on the selected attention. We further propose a novel loss function that consists of three components guiding the attention to focus on diverse and relevant image regions while utilizing all attention features. By extensive experiments, we show that our method improves the state of the art by $2.9\%$ and $1.4\%$ F1 score on the NUS-WIDE and the large-scale Open Images datasets, respectively.
+
+# 1. Introduction
+
+Recognition of all labels in an image, referred to as multi-label recognition, is a fundamental problem in computer vision with applications in self-driving cars, surveillance systems and assistive robots, among others. To successfully support real-world tasks, multi-label recognition systems must accurately learn tens of thousands of labels, handle unseen labels and localize them in images. Despite advances, in particular using deep neural networks [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], to date there is no multi-label learning algorithm that can achieve all these goals. This paper takes steps towards addressing large-scale multi-label zero-shot learning and localization.
+
+The majority of existing work on multi-label learning has focused on exploiting dependencies among labels to improve the recognition performance of methods that learn
+
+
+Figure 1: Visualization of attentions learned by a single attention for all labels, one attention per label and our shared multi-attention model. Our method successfully attends to relevant image regions for both seen and unseen labels while producing only a small number of attentions, which significantly reduces the memory and computational complexity of predicting thousands of labels.
+
+a separate classifier for each label [1, 11, 3, 12, 13, 14, 3, 8, 15]. However, they cannot handle the classification of (multiple) unseen labels in an image and cannot localize labels. A few recent works have incorporated attention mechanisms into multi-label learning to focus on relevant image regions [16, 17, 18, 10], yet they lack the ability to handle unseen labels. Moreover, the recurrent neural network employed in [16, 18], which has to sequentially compute the attention regions for the subsequent label to be predicted, incurs large training and inference times and limits the scalability to classify a large number of labels in an image.
+
+On the other hand, a large body of work has focused on zero-shot learning with the goal of recognizing unseen labels [19, 20, 21, 7, 9, 22, 23, 24], with some of the recent work taking advantage of attention models [25, 26, 27] to improve the prediction accuracy. However, these methods address multi-class zero-shot learning, where each image is assumed to have one label, hence cannot handle the multi-label setting, where an image contains several labels, some of which could be unseen. Moreover, as observed by [28, 29, 30], using a single feature vector to encode
+
+
+Figure 2: The overview of our shared multi-attention zero-shot learning. Image features of $R$ regions are extracted and fed into our shared multi-attention mechanism to compute multiple attention features. The attention features are projected into the joint visual-label semantic embedding space to determine their labels.
+
+discriminative information about all labels is restrictive, especially when dealing with a large number of labels.
+
+A few works have addressed the problem of multi-label zero-shot learning [7, 9, 31] by taking advantage of the correlation between unseen and seen labels, which is inferred from a global image representation. However, they only capture dominant labels and ignore the ones in smaller regions of images. To overcome this issue, [28, 29, 30] use pre-trained object detection modules and learn to select bounding boxes of seen and unseen labels. However, this approach is costly and not scalable to a large number of labels as it requires ground-truth bounding boxes for training. Moreover, it cannot handle abstract concepts, e.g., 'travel' or 'singing', which often do not have a clear bounding box. On the other hand, one can naively generalize attention techniques to the multi-label zero-shot setting by computing one attention per label [32]. However, this is not only computationally and memory expensive but, more importantly, prone to overfitting, due to the small number of training images for each label.
+
+Paper Contributions. In this paper, we develop a framework for multi-label zero-shot learning based on a novel shared multi-attention mechanism that handles recognition of a large number of labels, can recognize multiple unseen labels in an image and finds the regions relevant to each label. Our method consists of multiple label-agnostic attention modules that generate multiple attention features simultaneously and uses the semantic vector of each label to select the most suitable feature to compute the prediction score of the label, see Figure 2. Thus, instead of generating one attention feature for all labels, which cannot encode discriminative information about labels, and instead of generating one attention feature per label, which cannot generalize well to unseen labels, we generate multiple shared attention features that capture both common and discriminative information about labels, hence not only perform well in predicting seen labels, but also transfer attention to unseen labels.
+
+Our method automatically discovers related labels and assigns them to the same attention module. Moreover, it dynamically allocates an appropriate number of attention modules to each label depending on its complexity. By eliminating any recurrent attention structure and using a small number of attention modules compared to the large number of labels, our method significantly reduces the time and memory complexity of computing one attention per label or of recurrent attention mechanisms.
+
+Given that each training image only contains the list of present labels without ground-truth bounding box information, to effectively train our shared multi-attention method, we propose a novel loss function that enforces that i) different attention modules focus on diverse regions of an image, covering different labels; ii) attention is placed on relevant regions that lead to high prediction scores for present labels; and iii) all attention modules are used effectively. We conduct experiments on both multi-label zero-shot and generalized zero-shot learning on the NUS-WIDE and the large-scale Open Images datasets, showing the effectiveness of our method, which improves the F1 score of the state of the art by $2.9\%$ and $1.4\%$, respectively.
+
+# 2. Related Work
+
+Multi-label learning can be naively addressed by learning a binary classifier for each label [33, 34], which does not incorporate correlations among labels. Thus, the majority of multi-label learning methods have focused on incorporating label dependencies [11, 2, 15, 8, 35, 36]. However, some methods require training data with the full annotation of images [10, 18], some cannot generalize to unseen labels [11, 2, 15, 8, 35, 36, 33, 34], and some use a global feature representation of images, which is restrictive when dealing with a large number of labels and cannot find the regions of labels [35, 37, 38].
+
+To localize labels, [39, 40, 28, 29, 30] find region proposals followed by applying CNN-based recognition on each proposal. This can recognize few labels for foreground objects (not concepts, e.g., 'travel' or 'singing') and requires costly bounding box annotations. On the other hand, attention modeling [10, 41, 42, 43, 32] has provided powerful tools to address the localization of labels by learning to focus on relevant parts of images. However, most existing methods cannot generalize to unseen labels [42, 10]. While image captioning [44, 45, 46] can be thought of as multi-label learning (labels are words in the generated caption), it requires training on and predicting labels in a sequential order. While [3, 47, 48] have proposed methods to find semantic orders of labels, their sequential nature does not allow fast training or inference (e.g., via parallelization), and they cannot localize labels or generalize to unseen labels.
+
+Zero-shot learning, on the other hand, addresses the problem of generalizing learning to unseen labels [49, 50, 5, 51, 52, 53, 54, 55]. This often requires using semantic information from seen and unseen labels in the form of attribute vectors [56, 57, 58] or word vector representations [51, 59, 57, 20, 60, 52]. The semantic label vectors are often combined with the image features via learning a compatibility score between the two, which then allows classifying unseen labels [20, 60, 50]. Despite their success, the majority of zero-shot learning methods find only the dominant label in each image [5, 51, 20, 21, 25, 26, 27] and rely on using a global feature without localizing labels. Recently, both single-attention [25] and double-attention [26] mechanisms have been employed for single-class zero-shot learning. However, these works learn a single representation to predict all classes and cannot recognize diverse labels in an image.
+
+The recent works in [7, 61, 62] address the problem of zero-shot multi-label learning by finding the joint embedding space of image and labels while optimizing the standard zero-shot ranking loss modified for multi-label learning. These works, however, do not localize labels and neglect the importance of discriminative features from local image regions by using global features. Moreover, [9] requires access to a knowledge graph between seen and unseen labels. [63, 28, 30] use multiple features generated by an object proposal algorithm for zero-shot prediction. However, the proposal is designed for objects and cannot generalize to abstract concepts. Multi-modal attention [32] can be used to generate a specific attention for each label and generalize to unseen labels through label semantics. However, this has large time and memory complexity when computing thousands of attentions for thousands of classes. Moreover, the extrapolation of seen to unseen attentions often focuses on irrelevant regions as there is no supervision on unseen attentions (see the experiments).
+
+Finally, our proposed method is different from [64, 27], which have proposed multi-attention models without an attention sharing mechanism and thus cannot effectively generalize to unseen labels. Moreover, they fuse all attention features into a single global feature, which discards discriminative information obtained by each attention model.
+
+# 3. Visual Attention Review
+
+Visual attention generates a feature from the most relevant region of an image and has been shown to be effective for image classification, saliency detection and captioning, among others [42, 18, 10, 65, 44, 66]. More specifically, one divides an image $I$ into $R$ regions denoted by $I^1,\dots,I^R$, which can be arbitrary [41] or equal-size grid cells [44]. For simplicity and reproducibility, we use the latter approach. Let $\pmb{f}^{r} = f_{\Theta}(I^{r})$ denote the feature vector of the region $r$, extracted using a CNN parametrized by $\Theta$. Given region features $\{\pmb{f}^r\}_{r = 1}^R$, the goal of the attention module, $g(\cdot)$, is to find the most relevant regions for the task. This is done by finding an attention feature, $\boldsymbol{z}$, defined as
+
+$$
+\boldsymbol {z} = g \left(\boldsymbol {f} ^ {1}, \dots , \boldsymbol {f} ^ {R}\right) = \sum_ {r = 1} ^ {R} \alpha_ {r} \left(\boldsymbol {f} ^ {r}\right) \boldsymbol {f} ^ {r}, \tag {1}
+$$
+
+where $\alpha_{r}(\pmb{f}^{r})$ denotes the weight or preference of selecting the region $r$ . These weights are unknown and the task of the attention module is to find them for an input image. In the soft-attention mechanism [44], which we use in the paper, one assumes that $\alpha_{r}\in [0,1]$ and $\sum_{r = 1}^{R}\alpha_{r} = 1$ to select different regions with different degrees of importance. The attention weights are often modeled by the output of a neural network, normalized using the softmax function.
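+
+To make the soft-attention computation of (1) concrete, the following is a minimal NumPy sketch; the function name, shapes, and toy inputs are illustrative assumptions rather than the authors' implementation, and the unnormalized scores $e$ are assumed to come from whatever network produces the attention logits.
+
+```python
+import numpy as np
+
+def soft_attention(F, e):
+    """F: (R, d) region features; e: (R,) unnormalized attention scores.
+    Returns the attended feature z = sum_r alpha_r * f_r as in Eq. (1)."""
+    e = e - e.max()                      # numerical stability before softmax
+    alpha = np.exp(e) / np.exp(e).sum()  # alpha_r in [0, 1], summing to 1
+    return alpha @ F                     # weighted combination of region features
+
+# toy example: R = 4 regions with d = 5 dimensional features
+rng = np.random.default_rng(0)
+F = rng.normal(size=(4, 5))
+e = rng.normal(size=4)
+z = soft_attention(F, e)
+print(z.shape)  # (5,)
+```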
+
+# 4. Multi-Label Zero-Shot Learning via Attention Sharing
+
+In this section, we discuss our proposed framework for multi-label zero-shot learning. We first define the problem settings and then present our approach based on a shared multi-attention mechanism.
+
+# 4.1. Problem Setting
+
+Assume we have two sets of labels $\mathcal{C}_s$ and $\mathcal{C}_u$, where $\mathcal{C}_s$ denotes seen labels that have training images and $\mathcal{C}_u$ denotes unseen labels without training annotations. We denote the set of all labels by $\mathcal{C} \triangleq \mathcal{C}_s \cup \mathcal{C}_u$. Let $(I_1, \mathcal{Y}_1), \ldots, (I_N, \mathcal{Y}_N)$ be $N$ training samples, where $I_i$ denotes the $i$-th training image and $\mathcal{Y}_i \subseteq \mathcal{C}_s$ denotes the set of labels present in the image. The goal of multi-label zero-shot learning is to find the labels in $\mathcal{C}$ that appear in a new test image. Given that there are no training images for the unseen labels, $\mathcal{C}_u$, similar to existing work on zero-shot learning [67, 57, 20, 59], we assume access to semantic vectors $\{\pmb{v}^c\}_{c \in \mathcal{C}}$ that provide descriptions of labels, e.g., using attributes or word embeddings [67, 57, 20, 59, 4, 50].
+
+Naive Approach. To address the problem using the attention mechanism, we can consider two naive extreme cases. First, we can generate one attention feature $\mathbf{z}_i^c$ for each label $c \in \mathcal{C}$ in an image $i$ . Thus, the prediction score for the label $c$ in image $i$ can be computed as
+
+$$
+s _ {i} ^ {c} = \left\langle \boldsymbol {\theta} ^ {c}, \boldsymbol {z} _ {i} ^ {c} \right\rangle , \tag {2}
+$$
+
+where $\langle \cdot, \cdot \rangle$ denotes the inner product and $\theta^c$ denotes the parameters of the logistic classifier for the label $c$ (the exact form of $\theta^c$ will be defined later, see (12)). We can then determine the labels in the image $i$ by ranking and picking the top prediction scores $\{s_i^c\}_{c \in \mathcal{C}}$ across all labels. This has two major drawbacks. First, it is not clear how to learn an attention model for an unseen label that has no training images. While we can extend and employ methods such as [32] by using the label semantic vector to generate an attention feature, learning would be prone to overfitting as a large number of attention models have to be learned with no or few training images, hence they will often focus on irrelevant regions (see Figure 1). Second, training and computing a separate attention feature for each label is computationally and memory expensive, especially when dealing with thousands of labels, e.g., in the Open Images dataset. However, for seen labels with a sufficient number of training images, this approach allows learning informative features that focus on label-relevant regions of an image.
+
+In the second approach, instead of learning label-specific attention features as above, we can compute a single attention feature for all labels. This approach has the benefit that it does not suffer from overfitting and is memory and computationally efficient. However, the model will not have enough capacity to represent and localize a large number of possibly diverse labels (see Figure 1).
+
+# 4.2. Proposed Shared Multi-Attention Approach
+
+We develop a multi-label zero-shot learning method based on attention mechanism that overcomes the limitations of the two above approaches and enjoys the advantages of both, i.e., learns informative features that focus on label-relevant regions of an image, does not suffer from overfitting and generalizes well to unseen labels, and is computationally and memory efficient. To do so, we propose a shared multi-attention mechanism that consists of $M \ll |\mathcal{C}|$ attention modules generating $M$ attention features, where each feature will be used for the prediction of a subset of related labels, which are determined automatically. We also propose an efficient learning scheme that uses label semantic vectors and training images that contain seen labels without access to their ground-truth localization.
+
+For an image $i$ with region features $\{\pmb{f}_i^r\}_{r=1}^R$ , let $\{\pmb{z}_i^m\}_{m=1}^M$ denote $M$ attention features obtained via the attention modules $\{g_m(\cdot)\}_{m=1}^M$ . We define
+
+$$
+\boldsymbol{F}_i \triangleq \left[\begin{array}{cccc} \boldsymbol{f}_i^1 & \boldsymbol{f}_i^2 & \dots & \boldsymbol{f}_i^R \end{array}\right], \qquad \boldsymbol{\alpha}^m\left(\boldsymbol{F}_i\right) \triangleq \left[\begin{array}{ccc} \alpha_1^m\left(\boldsymbol{f}_i^1\right) & \dots & \alpha_R^m\left(\boldsymbol{f}_i^R\right) \end{array}\right]^{\top}, \tag{3}
+$$
+
+where $\boldsymbol{F}_i$ denotes a matrix whose columns are the $R$ region features and $\boldsymbol{\alpha}^m(\boldsymbol{F}_i)$ denotes the $R$-dimensional weight vector of the attention module $m$ for the image $i$. Using the model (1), we can write the $m$-th attention feature of the image $i$, denoted by $\boldsymbol{z}_{i}^{m}$, as a linear combination of all region features as
+
+$$
+\boldsymbol {z} _ {i} ^ {m} = \boldsymbol {F} _ {i} \boldsymbol {\alpha} ^ {m} (\boldsymbol {F} _ {i}). \tag {4}
+$$
+
+To learn and infer $\alpha^m (F_i)$ , we use a simple two-layer neural network model
+
+$$
+\boldsymbol {\alpha} ^ {m} \left(\boldsymbol {F} _ {i}\right) = \frac {\exp \left(\boldsymbol {e} _ {i} ^ {m}\right)}{\sum_ {r = 1} ^ {R} \exp \left(e _ {i , r} ^ {m}\right)}, \boldsymbol {e} _ {i} ^ {m} = \tanh \left(\boldsymbol {F} _ {i} ^ {\top} \boldsymbol {W} _ {1} ^ {m}\right) \boldsymbol {w} _ {2} ^ {m}, \tag {5}
+$$
+
+where $\{\pmb{W}_1^m, \pmb{w}_2^m\}_{m=1}^M$ are the model parameters, $\tanh(\cdot)$ is the element-wise hyperbolic tangent function, and $\alpha^m(\pmb{F}_i)$ is the softmax normalization of the $R$-dimensional vector of unnormalized attention weights $e_i^m$ (i.e., the weights before applying the softmax) of the attention module $m$, whose elements are $e_{i,r}^m$.
+
+Given $M$ attention features $\{z_i^m\}_{m = 1}^M$ , we propose a model in which the score of each label $c\in \mathcal{C}$ is obtained by the maximum response of the classifier $c$ over the $M$ attention features, i.e.,
+
+$$
+s _ {i} ^ {c} \triangleq \max _ {m = 1, \dots , M} \langle \boldsymbol {\theta} ^ {c}, \boldsymbol {z} _ {i} ^ {m} \rangle . \tag {6}
+$$
+
+Thus, different attention features can be used for the prediction of different labels. To learn the parameters of the $M$ attention modules, we propose an efficient learning scheme with a novel loss function, which we discuss next.
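+
+Before turning to the losses, the forward pass of Eqs. (3)-(6) can be summarized in a short sketch: each of the $M$ shared modules produces attention weights with the two-layer network of (5), forms its attention feature $\boldsymbol{z}^m = \boldsymbol{F}\boldsymbol{\alpha}^m$ as in (4), and every label takes the maximum response over the $M$ features as in (6). This is a minimal NumPy sketch under assumed shapes and randomly drawn parameters, not the authors' released code.
+
+```python
+import numpy as np
+
+def softmax(x, axis=-1):
+    x = x - x.max(axis=axis, keepdims=True)
+    e = np.exp(x)
+    return e / e.sum(axis=axis, keepdims=True)
+
+def shared_multi_attention(F, W1, w2, Theta):
+    """F: (R, d) regions; W1: (M, d, h); w2: (M, h); Theta: (C, d) label classifiers.
+    Returns per-label scores s (C,), attention features Z (M, d) and logits e (M, R)."""
+    # Eq. (5): unnormalized weights e^m, then softmax over the R regions
+    E = np.einsum('rd,mdh->mrh', F, W1)          # (M, R, h)
+    e = np.einsum('mrh,mh->mr', np.tanh(E), w2)  # (M, R)
+    alpha = softmax(e, axis=1)                   # (M, R)
+    # Eq. (4): z^m is the alpha^m-weighted combination of region features
+    Z = alpha @ F                                # (M, d)
+    # Eq. (6): score of each label is the max response over the M features
+    s = (Theta @ Z.T).max(axis=1)                # (C,)
+    return s, Z, e
+
+R, d, h, M, C = 196, 512, 64, 10, 925
+rng = np.random.default_rng(0)
+s, Z, e = shared_multi_attention(rng.normal(size=(R, d)),
+                                 rng.normal(size=(M, d, h)) * 0.01,
+                                 rng.normal(size=(M, h)) * 0.01,
+                                 rng.normal(size=(C, d)) * 0.01)
+print(s.shape, Z.shape)  # (925,) (10, 512)
+```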
+
+Diverse Multi-Attention Features: We ideally want different attention modules to attend to different regions of an image. Thus, we define a diversity loss that promotes obtaining diverse attention features for an image. More specifically, using the cosine similarity between distinct pairs of unnormalized attention weight vectors, we define
+
+$$
+\mathcal{L}_{\text{div}} \triangleq \sum_{i} \sum_{m \neq n} \frac{\left\langle \boldsymbol{e}_i^m, \boldsymbol{e}_i^n \right\rangle}{\left\| \boldsymbol{e}_i^m \right\|_2 \left\| \boldsymbol{e}_i^n \right\|_2}, \tag{7}
+$$
+
+whose minimization promotes small or no overlap in the focus regions of different attention modules. For efficient learning, we use the unnormalized attention weights $\boldsymbol{e}$ instead of the normalized weights $\boldsymbol{\alpha}$, since the gradient of $\boldsymbol{\alpha}$ vanishes when the softmax function saturates. Also, we do not minimize $\langle \boldsymbol{e}_i^m, \boldsymbol{e}_i^n \rangle$, since it reduces not only the cosine similarity but also the $\ell_2$-norm of each weight vector, which prevents the weights of an attention module from concentrating on a single region. Notice that our diversity loss is less restrictive than [44], as we do not enforce the attention model to attend to all regions of an image, but only to regions that are diverse and relevant for prediction.
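+
+A minimal NumPy sketch of the diversity loss (7) for a single image follows; as discussed above, it operates on the unnormalized weights $\boldsymbol{e}^m$, and all names and shapes are illustrative.
+
+```python
+import numpy as np
+
+def diversity_loss(e):
+    """e: (M, R) unnormalized attention weights for one image (Eq. 7).
+    Sums cosine similarities over distinct module pairs m != n."""
+    e_norm = e / np.linalg.norm(e, axis=1, keepdims=True)  # row-normalize each module
+    cos = e_norm @ e_norm.T                                # (M, M) cosine similarities
+    return cos.sum() - np.trace(cos)                       # drop the m == n terms
+
+rng = np.random.default_rng(0)
+print(diversity_loss(rng.normal(size=(10, 196))))
+```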
+
+Relevant Multi-Attention Features: Given that the training data does not include information about the locations of labels in images, unlike existing work [25, 29], we cannot learn attention models by enforcing that attention weights on ground-truth regions be larger than weights on irrelevant regions. Here, we are only given the set of existing labels in each image. To tackle the problem, we use the prediction scores as surrogates for relevant regions to attend.
+
+Our key observation is that when a seen label $o \in \mathcal{C}_s$ is present in an image, there must be a region containing $o$ on which we have a high score for the label $o$. Thus, when it successfully focuses on the region of a label, the score of our multi-attention mechanism must be larger than the score obtained by simply weighting all regions equally. More specifically, let $\bar{s}_i^o \triangleq \frac{1}{R} \sum_r \langle \pmb{\theta}^o, \pmb{f}_i^r \rangle$ be the average score of the label $o$ across all regions, i.e., the score when all regions contribute equally. We define a region relevance loss function that promotes our multi-attention mechanism to produce higher scores than $\bar{s}_i^o$ for present labels and lower scores for absent labels. In other words, we define
+
+$$
+\mathcal{L}_{\text{rel}} \triangleq \sum_{i} \sum_{o \in \mathcal{C}_s} \max\left(\left(\bar{s}_i^o - s_i^o\right) y_i^o, 0\right), \tag{8}
+$$
+
+where $y_{i}^{o} \triangleq 1$ for $o \in \mathcal{Y}_i$ and $y_{i}^{o} \triangleq -1$ otherwise. Notice that with the above loss, attention modules find not only regions of present labels, but also indicative regions of absent labels, e.g., to predict the absence of the label 'desert', the attention may focus on a region with the label 'ocean'.
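+
+The relevance loss (8) for one image can be sketched as follows, assuming the per-region classifier scores $\langle\boldsymbol{\theta}^o,\boldsymbol{f}^r\rangle$ and the multi-attention scores $s^o$ of (6) have already been computed; the function and variable names are illustrative.
+
+```python
+import numpy as np
+
+def relevance_loss(region_scores, s, y):
+    """region_scores: (C_s, R) per-region scores <theta^o, f^r>;
+    s: (C_s,) multi-attention scores from Eq. (6);
+    y: (C_s,) +1 for present labels, -1 for absent labels (Eq. 8)."""
+    s_bar = region_scores.mean(axis=1)           # average score over all regions
+    return np.maximum((s_bar - s) * y, 0).sum()  # hinge on the score gap
+
+rng = np.random.default_rng(0)
+C_s, R = 925, 196
+print(relevance_loss(rng.normal(size=(C_s, R)), rng.normal(size=C_s),
+                     rng.choice([-1.0, 1.0], size=C_s)))
+```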
+
+Using All Multi-Attention Modules: Given the ability to select among $M$ different attention features in (6) and the non-convexity of learning, the model could potentially learn to use only some attention modules for the prediction of all labels and not use the rest. Thus, we propose a loss function to encourage each of the $M$ attention modules to be used for the prediction of some of the seen labels. We start by defining a score $\ell_{m}$ that measures the utility of the $m$-th attention module by computing the number of labels across training images that use the attention module $m$,
+
+$$
+\ell_m \triangleq \sum_i \sum_{o \in \mathcal{Y}_i} \mathrm{I}_m\left(\operatorname{argmax}_n \left\langle \boldsymbol{\theta}^o, \boldsymbol{z}_i^n \right\rangle\right), \tag{9}
+$$
+
+where $\operatorname{I}_m(x)$ is the indicator function, which outputs 1 when $x = m$ and 0 otherwise. Notice that the term inside the first sum in (9) corresponds to the number of labels of the image $i$ that use the attention module $m$, hence $\ell_m$ measures the utility of the attention module $m$ across all training images. Ideally, we want every attention module to be used for predictions, hence we want to avoid having a few large $\ell_m$'s while the rest are zero. Thus, we propose to minimize the attention distribution loss,
+
+$$
+\mathcal{L}_{\text{dist}} \triangleq \sum_{m = 1}^{M} \ell_m^2. \tag{10}
+$$
+
+The difficulty of minimizing $\mathcal{L}_{\text{dist}}$ is that the $\ell_m$ defined in (9) is non-differentiable, due to the indicator function. We tackle this by using a softmax function instead, where
+
+$$
+\ell_ {m} \triangleq \sum_ {i} \sum_ {o \in \mathcal {Y} _ {i}} \frac {\exp \left(\langle \boldsymbol {\theta} ^ {o} , \boldsymbol {z} _ {i} ^ {m} \rangle\right)}{\sum_ {n = 1} ^ {M} \exp \left(\langle \boldsymbol {\theta} ^ {o} , \boldsymbol {z} _ {i} ^ {n} \rangle\right)}. \tag {11}
+$$
+
+Notice that the softmax function approximates the indicator of the argmax, with the two coinciding when the magnitude of $\langle \pmb{\theta}^o, \pmb{z}_i^m \rangle$ is significantly larger than that of the other $\langle \pmb{\theta}^o, \pmb{z}_i^n \rangle$.
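+
+A minimal NumPy sketch of the relaxed distribution loss (10)-(11) follows; note that $\ell_m$ accumulates the soft label counts over images (here, over a small batch) before being squared, and the shapes and names are illustrative assumptions.
+
+```python
+import numpy as np
+
+def distribution_loss(batch):
+    """batch: list of (scores, present) pairs, one per image, where scores is
+    (C_s, M) with entries <theta^o, z_i^m> and present is a boolean mask of the
+    labels in the image. Implements Eqs. (10)-(11)."""
+    M = batch[0][0].shape[1]
+    l = np.zeros(M)
+    for scores, present in batch:
+        x = scores[present]                               # only present labels contribute
+        x = x - x.max(axis=1, keepdims=True)               # stable softmax over modules
+        p = np.exp(x) / np.exp(x).sum(axis=1, keepdims=True)
+        l += p.sum(axis=0)                                 # Eq. (11): soft label counts
+    return (l ** 2).sum()                                  # Eq. (10)
+
+rng = np.random.default_rng(0)
+batch = [(rng.normal(size=(925, 10)), rng.random(925) < 0.01) for _ in range(4)]
+print(distribution_loss(batch))
+```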
+
+Bilinear Compatibility Function: Given that we do not have training images for $\mathcal{C}_u$, we cannot directly optimize over and learn $\theta^u$ for $u \in \mathcal{C}_u$. Thus, similar to previous work on zero-shot learning [57, 59, 51], we use the semantic vectors $\{\pmb{v}^c\}_{c \in \mathcal{C}}$ of labels, allowing us to transfer knowledge from seen to unseen labels. More specifically, we express the parameters of each classifier as a function of its semantic vector, $\pmb{\theta}^c = \pmb{W}_3 \pmb{v}^c$, and, substituting into (6), compute the compatibility score of each label $c \in \mathcal{C}$ in an image $i$ as
+
+$$
+s _ {i} ^ {c} = \max _ {m = 1, \dots , M} \left\langle \boldsymbol {W} _ {3} \boldsymbol {v} ^ {c}, \boldsymbol {z} _ {i} ^ {m} \right\rangle . \tag {12}
+$$
+
+Once we learn $W_{3}$ , as discussed below, we can determine the labels in an image $i$ by ranking and picking the top prediction scores $\{s_i^c\}_{c \in \mathcal{C}}$ across all labels.
+
+To learn the parameters of the compatibility function, $W_{3}$, and the attention models, we use a ranking loss that requires the scores of present labels in each image to be larger, by a margin, than the scores of absent labels. More specifically, we define the ranking loss as
+
+$$
+\mathcal{L}_{\text{rank}} \triangleq \sum_i \sum_{o \in \mathcal{Y}_i, o^{\prime} \notin \mathcal{Y}_i} \max\left(1 + s_i^{o^{\prime}} - s_i^{o}, 0\right), \tag{13}
+$$
+
+in which the margin is set to one.
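+
+The compatibility scoring of (12) and the ranking loss of (13) can be sketched as follows for a single image; the word-vector dimensionality and all names are illustrative assumptions rather than the paper's exact configuration.
+
+```python
+import numpy as np
+
+def label_scores(Z, V, W3):
+    """Z: (M, d) attention features; V: (C, k) label semantic vectors; W3: (d, k).
+    Eq. (12): s^c = max_m <W3 v^c, z^m>."""
+    Theta = V @ W3.T              # (C, d) classifiers generated from label semantics
+    return (Theta @ Z.T).max(axis=1)
+
+def ranking_loss(s, present):
+    """s: (C_s,) scores of seen labels; present: boolean mask of labels in the image.
+    Eq. (13): hinge ranking loss with margin 1 over (present, absent) pairs."""
+    pos, neg = s[present], s[~present]
+    margins = 1.0 + neg[None, :] - pos[:, None]   # (n_pos, n_neg)
+    return np.maximum(margins, 0).sum()
+
+rng = np.random.default_rng(0)
+Z = rng.normal(size=(10, 512))
+V = rng.normal(size=(925, 300))
+W3 = rng.normal(size=(512, 300)) * 0.01
+print(ranking_loss(label_scores(Z, V, W3), rng.random(925) < 0.01))
+```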
+
+Final Loss Function: Putting together all the loss functions discussed above, we propose to minimize
+
+$$
+\min_{\Theta, \{\boldsymbol{W}_1^m, \boldsymbol{w}_2^m\}_m, \boldsymbol{W}_3} \mathcal{L}_{\text{rank}} + \lambda_{\text{div}} \mathcal{L}_{\text{div}} + \lambda_{\text{rel}} \mathcal{L}_{\text{rel}} + \lambda_{\text{dist}} \mathcal{L}_{\text{dist}}, \tag{14}
+$$
+
+where $\lambda_{div},\lambda_{rel},\lambda_{dist}\geq 0$ are regularization parameters. We minimize this loss function using stochastic gradient descent (see the experiments for details). In the experiments, we investigate the effectiveness of each loss function term and show the robustness of our method with respect to the values of the regularization parameters.
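+
+As a small illustration of how the terms combine, the sketch below evaluates the objective (14) with the regularization weights reported in Section 5.1; the individual loss values stand for minibatch evaluations of the sketches above, and optimization itself (the paper uses RMSprop) would be handled by an autodiff framework.
+
+```python
+# Eq. (14): total objective with the regularization weights from Section 5.1.
+lambda_div, lambda_rel, lambda_dist = 1e-2, 1e-3, 1e-1
+
+def total_loss(L_rank, L_div, L_rel, L_dist):
+    # L_* stand for minibatch values of the ranking, diversity, relevance
+    # and distribution losses sketched earlier.
+    return L_rank + lambda_div * L_div + lambda_rel * L_rel + lambda_dist * L_dist
+
+print(total_loss(2.3, 0.5, 1.1, 4.0))  # toy values
+```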
+
+# 5. Experiments
+
+We evaluate our proposed shared multi-attention framework for multi-label zero-shot learning on the NUS-WIDE [68] and the large-scale Open Images [69] datasets. Below, we discuss the datasets, evaluation metrics and baseline methods, and then present and analyze the results on both datasets. Given that our method handles multi-label learning, we also report the multi-label learning performance on both datasets in the supplementary material.
+
+# 5.1. Experimental Setup
+
+Datasets: We perform experiments on the NUS-WIDE [68] and the Open Images [69] datasets. In NUS-WIDE, each image has 81 labels, called 'ground-truth' labels, which are carefully labeled by human annotators, in addition to 925 labels extracted from Flickr user tags. Similar to [7], we use the 925 labels as seen and the other 81 labels as unseen. We run all methods on the full dataset, which has $20\%$ more training and testing samples than the data used in [7].
+
+To demonstrate the effectiveness of our method on a larger number of labels and images and to investigate the localization performance of our method, we use the large-scale Open Images (v4) dataset, which consists of 9 million training images in addition to 41,620 and 125,436 images for validation and testing, respectively. For the seen labels, we use 7,186 labels in the training set, where each label has at least 100 training samples. We select the 400 most frequent test set labels that are not observed in the training data as the unseen labels. Each unseen label has at least 75 test samples for evaluation. Due to the large number of classes, each image may contain unannotated labels.
+
+Evaluation Metrics: Similar to other work on multi-label learning [2, 10], for evaluation, we use the mean Average Precision (mAP) [70] and the F1 score at the top $K$ predictions [7] in each image. The details of computing the scores are provided in the supplementary material. Notice that the mAP score captures how accurately the model ranks images for each label, while the F1 score measures how accurately the model ranks the present labels in each image.
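+
+The precise metric definitions are in the supplementary material; as a rough illustration, the sketch below computes a micro-averaged F1 at the top-$K$ predictions per image, which is one common way such an F1@K is computed, with the names and the averaging convention being assumptions on our part.
+
+```python
+import numpy as np
+
+def f1_at_k(scores, labels, k):
+    """scores: (N, C) predicted label scores; labels: (N, C) binary ground truth.
+    Sketch of F1@K: precision/recall over the top-K predictions of every image."""
+    topk = np.argsort(-scores, axis=1)[:, :k]                 # K highest-scoring labels
+    hits = np.take_along_axis(labels, topk, axis=1).sum()     # correct top-K predictions
+    precision = hits / (labels.shape[0] * k)
+    recall = hits / labels.sum()
+    return 2 * precision * recall / (precision + recall + 1e-12)
+
+rng = np.random.default_rng(0)
+print(f1_at_k(rng.normal(size=(100, 81)),
+              (rng.random((100, 81)) < 0.05).astype(float), k=3))
+```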
+
+Baselines: We compare with CONSE [59] (ensemble of classifiers), LabelEM [57] (joint image-label embedding) and Fast0Tag [7], which is a state-of-the-art multi-label zero-shot learning method. We also compare with [32], which uses one attention per label, hence learning a total of $|\mathcal{C}|$ attention modules. This allows us to investigate the effectiveness of our method, which uses a small number of attention modules and shares them across labels.
+
+We refer to our method as LEarning by Sharing Attentions (LESA) and train the following variants of our method: i) LESA $(M = 1)$, where we use a single attention module learned by the combination of the ranking and relevance losses (since there is one attention, there are no diversity and distribution losses). This allows us to demonstrate the effectiveness of sharing multiple attention modules; ii) LESA with $M$ attention modules, learned via our proposed loss function in (14). In addition, we use the semantic vectors of labels to cluster them into $M$ groups via k-means and learn an attention module for the labels in each group using the combination of the ranking and relevance losses (referred to as One Attention per Cluster). This allows us to investigate the effectiveness of our multi-attention sharing framework, which automatically allocates an appropriate number of attention modules for each label.
+
+Implementation Details: Similar to other works [7], we use a pretrained VGG-19 for feature extraction in all methods. We extract the feature map at the last convolutional layer, whose size is $14 \times 14 \times 512$, and treat it as a set of features from $14 \times 14$ regions. For all variants of our method, we freeze the VGG network and learn an additional convolutional layer of size $2 \times 2 \times 512$ on top of the VGG's last convolutional layer. Thus, our convolutional layer has a significantly smaller number of parameters than [7], which learns three fully connected layers. We extract the semantic vectors $\{v^c\}_{c \in \mathcal{C}}$ using the GloVe model [71] trained on Wikipedia articles.
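+
+For concreteness, the sketch below shows how a $14 \times 14 \times 512$ feature map can be flattened into $R = 196$ region features $\{\boldsymbol{f}^r\}$; the additional $2 \times 2 \times 512$ convolution mentioned above is omitted, and this is an illustration rather than the released code.
+
+```python
+import numpy as np
+
+# feature map from the last VGG-19 conv layer for one image: 14 x 14 x 512
+fmap = np.random.default_rng(0).normal(size=(14, 14, 512))
+
+# treat every spatial location as one region feature f^r (R = 196, d = 512)
+F = fmap.reshape(-1, 512)
+print(F.shape)  # (196, 512)
+```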
+
+We implement all methods in TensorFlow and optimize with the default setting of RMSprop [72] with a learning rate of 0.001 and a batch size of 32, and use an exponential learning rate decay of 0.8 whenever the model's performance on the validation set degrades. We also use early stopping [73] as a form of regularization in all models. We train all models on an NVIDIA V100 GPU for 40 epochs for NUS-WIDE and 2 epochs for Open Images. In our method, we do not perform heavy hyperparameter tuning and set $(\lambda_{div},\lambda_{rel},\lambda_{dist})$ to $(1e^{-2},1e^{-3},1e^{-1})$ for both datasets. We set the number of attention modules to $M = 10$, unless stated otherwise. For simplicity, we share the parameters $W_1^m$ across the attention modules.
+
+# 5.2. Experimental Results
+
+Multi-Label Zero-Shot Learning: We consider both multi-label zero-shot learning, where models are trained on seen labels and tested only on unseen labels, and multi-label generalized zero-shot learning, where models are tested on both seen and unseen labels. Table 1 shows the mAP score and F1 score at $K \in \{3,5\}$ for NUS-WIDE and at $K \in \{10,20\}$ for Open Images. We use a larger $K$ for Open Images, since models need to make a larger number of predictions due to a much larger number of labels. From the results, we make the following observations:
+
+- Our method outperforms the state of the art on both datasets, improving the mAP score on NUS-WIDE by $4.3\%$ for zero-shot and by $2.9\%$ for generalized zero-shot learning. On Open Images, our method improves F1@10 by $0.7\%$ for zero-shot learning and by $1.4\%$ for generalized zero-shot learning; similarly, for F1@20 we obtain $0.4\%$ and $1.4\%$ improvements, respectively.
+- Learning one attention module per label cannot scale to thousands of labels as in Open Images, due to its significantly large memory requirement; hence we do not report it there. Moreover, on NUS-WIDE, it does not do as well as our method or Fast0Tag, due to its class-myopic nature and its inability to capture shared characteristics of different labels to transfer to unseen ones.
+- Clustering labels based on semantic vectors and learning an attention for each cluster, as well as only learning one attention module for all labels, do not do as well as our LESA $(M = 10)$, which shows the importance of allowing each label to use more than one attention module, and generally a number of attentions depending on the complexity of the label (i.e., visual variations across images).
+
+| Method | Task | P (K=3) | R (K=3) | F1 (K=3) | P (K=5) | R (K=5) | F1 (K=5) | mAP | P (K=10) | R (K=10) | F1 (K=10) | P (K=20) | R (K=20) | F1 (K=20) | mAP |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| CONSE [59] | ZS | 17.5 | 28.0 | 21.6 | 13.9 | 37.0 | 20.2 | 9.4 | 0.2 | 7.3 | 0.4 | 0.2 | 11.3 | 0.3 | 40.4 |
+| CONSE [59] | GZS | 11.5 | 5.1 | 7.0 | 9.6 | 7.1 | 8.1 | 2.1 | 2.4 | 2.8 | 2.6 | 1.7 | 3.9 | 2.4 | 43.5 |
+| LabelEM [57] | ZS | 15.6 | 25.0 | 19.2 | 13.4 | 35.7 | 19.5 | 7.1 | 0.2 | 8.7 | 0.5 | 0.2 | 15.8 | 0.4 | 40.5 |
+| LabelEM [57] | GZS | 15.5 | 6.8 | 9.5 | 13.4 | 9.8 | 11.3 | 2.2 | 4.8 | 5.6 | 5.2 | 3.7 | 8.5 | 5.1 | 45.2 |
+| Fast0Tag [7] | ZS | 22.6 | 36.2 | 27.8 | 18.2 | 48.4 | 26.4 | 15.1 | 0.3 | 12.6 | 0.7 | 0.3 | 21.3 | 0.6 | 41.2 |
+| Fast0Tag [7] | GZS | 18.8 | 8.3 | 11.5 | 15.9 | 11.7 | 13.5 | 3.7 | 14.8 | 17.3 | 16.0 | 9.3 | 21.5 | 12.9 | 45.2 |
+| One Attention per Label [32] | ZS | 20.9 | 33.5 | 25.8 | 16.2 | 43.2 | 23.6 | 10.4 | - | - | - | - | - | - | - |
+| One Attention per Label [32] | GZS | 17.9 | 7.9 | 10.9 | 15.6 | 11.5 | 13.2 | 3.7 | - | - | - | - | - | - | - |
+| One Attention per Cluster (M=10) | ZS | 20.0 | 31.9 | 24.6 | 15.7 | 41.9 | 22.9 | 12.9 | 0.6 | 22.9 | 1.2 | 0.4 | 32.4 | 0.9 | 40.7 |
+| One Attention per Cluster (M=10) | GZS | 10.4 | 4.6 | 6.4 | 9.1 | 6.7 | 7.7 | 2.6 | 15.7 | 18.3 | 16.9 | 9.6 | 22.4 | 13.5 | 44.9 |
+| LESA (M=1) | ZS | 24.3 | 38.8 | 29.8 | 18.9 | 50.3 | 27.5 | 17.6 | 0.6 | 23.2 | 1.2 | 0.5 | 35.3 | 1.0 | 41.2 |
+| LESA (M=1) | GZS | 22.6 | 10.0 | 13.8 | 19.1 | 14.0 | 16.2 | 5.1 | 15.2 | 17.7 | 16.4 | 9.6 | 22.3 | 13.4 | 45.3 |
+| LESA (M=10) | ZS | 25.7 | 41.1 | 31.6 | 19.7 | 52.5 | 28.7 | 19.4 | 0.7 | 25.6 | 1.4 | 0.5 | 37.4 | 1.0 | 41.7 |
+| LESA (M=10) | GZS | 23.6 | 10.4 | 14.4 | 19.8 | 14.6 | 16.8 | 5.6 | 16.2 | 18.9 | 17.4 | 10.2 | 23.9 | 14.3 | 45.4 |
+
+Table 1: Multi-Label Zero-Shot (ZS) and Multi-Label Generalized Zero-Shot (GZS) performance on NUS-WIDE (#seen/#unseen = 925/81; P/R/F1 at K=3 and K=5, and mAP) and Open Images (#seen/#unseen = 7186/400; P/R/F1 at K=10 and K=20, and mAP).
+
+| Method | Task | F1 (K=3) | F1 (K=5) | mAP |
+| --- | --- | --- | --- | --- |
+| $\mathcal{L}_{\text{rank}}$ | ZS | 28.3 | 26.1 | 13.5 |
+| $\mathcal{L}_{\text{rank}}$ | GZS | 12.4 | 14.8 | 3.8 |
+| $\mathcal{L}_{\text{rank}} + \mathcal{L}_{\text{rel}}$ | ZS | 31.0 | 28.1 | 16.9 |
+| $\mathcal{L}_{\text{rank}} + \mathcal{L}_{\text{rel}}$ | GZS | 14.5 | 16.8 | 5.3 |
+| $\mathcal{L}_{\text{rank}} + \mathcal{L}_{\text{rel}} + \mathcal{L}_{\text{div}}$ | ZS | 31.3 | 28.6 | 18.0 |
+| $\mathcal{L}_{\text{rank}} + \mathcal{L}_{\text{rel}} + \mathcal{L}_{\text{div}}$ | GZS | 14.4 | 16.8 | 5.0 |
+| $\mathcal{L}_{\text{rank}} + \mathcal{L}_{\text{rel}} + \mathcal{L}_{\text{div}} + \mathcal{L}_{\text{dist}}$ | ZS | 31.6 | 28.7 | 19.4 |
+| $\mathcal{L}_{\text{rank}} + \mathcal{L}_{\text{rel}} + \mathcal{L}_{\text{div}} + \mathcal{L}_{\text{dist}}$ | GZS | 14.4 | 16.8 | 5.6 |
+
+Table 2: Ablation study for multi-label zero-shot (ZS) and multi-label generalized zero-shot (GZS) performance on NUS-WIDE.
+
+- The F1 score of all methods on Open Images is much smaller than on NUS-WIDE. This comes from the fact that Open Images has a significantly larger number of labels, hence ranking the right labels within an image becomes more challenging, which results in the drop in F1 score. On the other hand, the mAP scores of all methods are larger on Open Images than on NUS-WIDE. This is because Open Images has a larger number of positive samples per label, hence the model has a higher chance of retrieving relevant images.
+
+Figure 4 (top) shows the frequency of using each attention module for each of the 81 unseen labels in NUS-WIDE. Notice that our method learns one main attention module (attention module 6) to predict most unseen labels and, depending on the complexity of each label, uses more attention modules if needed. In particular, simple labels such as 'window' and 'road' use only one attention module, 'flowers' and 'tree' use two attention modules, while more visually varying labels such as 'cityscape' and 'coral' use multiple attentions. This is a unique property of our framework, which dynamically allocates the right number of attention modules to labels and allows different labels to be predicted by different modules, if needed; the quantitative results in Table 1 verify its importance.
+
+
+Figure 3: F1/mAP improvement $(\%)$ over Fast0Tag for different numbers of attention features (left) and effect of $\lambda_{rel}, \lambda_{div}, \lambda_{dist}$ on multi-label zero-shot mAP $(\%)$ (right) on NUS-WIDE.
+
+
+
+Ablation Studies: Table 2 shows the F1 and mAP scores of our method on NUS-WIDE for multi-label zero-shot and multi-label generalized zero-shot learning using different components of our proposed loss function. Notice that only using $\mathcal{L}_{\text{rank}}$, as in standard zero-shot learning, performs worst. We obtain $2.7\%$ ($3.4\%$) improvement in F1@3 (mAP) scores when using $\mathcal{L}_{\text{rel}}$, which promotes selecting regions relevant to the labels in images. Enforcing attention diversity further improves F1@3 (mAP) by $0.3\%$ ($1.1\%$). Finally, adding the distribution loss $\mathcal{L}_{\text{dist}}$ obtains the best result, with $31.6\%$ F1@3 and $19.4\%$ mAP score.
+
+Effect of Hyperparameters: Figure 3 (left) shows our model's improvement over Fast0Tag for (generalized) zero-shot learning with different numbers of attention features on test images in NUS-WIDE. We observe improvement by using shared attention regardless of the number of attention modules (for one attention, we use our new loss $\mathcal{L}_{\text{rank}} + \mathcal{L}_{\text{rel}}$). Notice that in all cases the performance saturates or peaks at 10 attention features and drops if more attention features are used. This again verifies that a large number of attention features can hurt performance by overfitting to seen labels.
+
+Figure 3 (right) shows the effect of the hyperparameters for multi-label zero-shot learning on NUS-WIDE test images. We first set $(\lambda_{div}, \lambda_{rel}, \lambda_{dist})$ to $(1e^{-2}, 1e^{-3}, 1e^{-1})$ and, fixing two of the regularization parameters, change the remaining one by the magnitude shown on the horizontal axis. Notice that, generally, the score improves as the regularization parameters increase and is stable around the nominal values.
+
+
+Figure 4: Top: Visualization of the frequency of using attention modules. For each label, we count, over all training images, the number of times that a prediction is made using each attention module. Each column shows the frequency for one label. Bottom: Visualization of learned attentions for a few images from NUS-WIDE.
+
+| Method | mAP (seen) | mAP (unseen) | Harmonic mean (seen+unseen) |
+| --- | --- | --- | --- |
+| One attention per label | 10.8 | 2.0 | 3.4 |
+| One attention for all labels | 8.7 | 2.4 | 3.8 |
+| Ours | 9.4 | 2.7 | 4.2 |
+
+Table 3: mAP for label localization on the Open Images test set.
+
+Zero-Shot Label Localization: To demonstrate the effectiveness of our shared multi-attention method on the localization of labels, we measure the mean average precision score of localization. We follow [74], which uses this score to measure localization on Pascal VOC and MSCOCO. Roughly speaking, the score captures whether the attention(s) put maximal weight on the ground-truth bounding box(es) of the label(s) and whether the model is confident about the label(s) (see the supplementary material for the precise definition). We report the mAP on seen and unseen labels as well as the harmonic mean of seen and unseen predictions to measure the seen/unseen trade-off.
+
+In Open Images, out of all trainable labels with at least 100 training samples each, there are 558 labels that have bounding box annotations in the test set. Thus, we divide these labels into 420 seen labels and 138 unseen labels. We train our method, one attention for all labels, and one attention per label on the 420 seen labels in the training set and evaluate their localization scores on the test set.
+
+Table 3 shows the localization score of our method compared with the other baselines. Notice that, as expected and discussed earlier, one attention per label does well on seen labels and performs worst on unseen labels, while one attention for all labels generalizes better to unseen labels, yet performs poorly on seen labels. On the other hand, our shared multi-attention model, which combines the advantages of both, does well on both seen and unseen labels and achieves the best overall performance, measured by the harmonic mean. Finally, Figure 5 further shows the localization mAP improvement of our method with respect to one attention per label for the 20 unseen labels with the largest improvement and the 20 unseen labels with the largest drop.
+
+
+Figure 5: Localization mAP improvement over one attention per label on unseen labels in Open Images test set.
+
+Notice that our method significantly improves (by more than 15%) on some unseen labels such as 'Drawer', 'Lizard' and 'Coffee cup', while having a much smaller negative impact (less than 6%) on labels such as 'Dress' or 'Kitchen appliance', which exhibit wide appearance variation and hence are better captured by a specialized attention module.
+
+Qualitative results: Figure 4 (bottom) visualizes attentions learned by our method for a few images from NUS-WIDE. Notice that our method learns to successfully focus on both seen and unseen labels, including abstract concepts. For instance, in the first image, the model focuses on the person and the surrounding wave to recognize the seen label 'action', while using the same attention feature to predict the unseen label 'surf'.
+
+# 6. Conclusion
+
+We proposed a novel shared multi-attention mechanism that predicts all labels in an image, including multiple unseen ones. We proposed a novel loss function that consists of three components guiding the attention to focus on diverse and relevant image regions while utilizing all attention features. By extensive experiments on the NUS-WIDE and the large-scale Open Images datasets, we showed that our framework improves the state of the art.
+
+# Acknowledgements
+
+This work is partially supported by DARPA Young Faculty Award (D18AP00050), NSF (IIS-1657197), ONR (N000141812132) and ARO (W911NF1810300).
+
+# References
+
+[1] D. Huynh and E. Elhamifar, "Interactive multi-label CNN learning with partial labels," IEEE Conference on Computer Vision and Pattern Recognition, 2020. 1
+[2] Y. Gong, Y. Jia, T. Leung, A. Toshev, and S. Ioffe, "Deep convolutional ranking for multilabel image annotation," 2013. 1, 2, 6
+[3] J. Wang, Y. Yang, J. Mao, Z. Huang, C. Huang, and W. Xu, "Cnn-rnn: A unified framework for multi-label image classification," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 1, 3
+[4] M. Bucher, S. Herbin, and F. Jurie, "Generating visual representations for zero-shot classification," IEEE International Conference on Computer Vision Workshops, 2017. 1, 3
+[5] Y. Xian, T. Lorenz, B. Schiele, and Z. Akata, “Feature generating networks for zero-shot learning,” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5542–5551, 2018. 1, 3
+[6] H. F. Yu, P. Jain, P. Kar, and I. S. Dhillon, “Large-scale multi-label learning with missing labels,” International Conference on Machine Learning, 2014. 1
+[7] Y. Zhang, B. Gong, and M. Shah, "Fast zero-shot image tagging," IEEE Conference on Computer Vision and Pattern Recognition, 2016. 1, 2, 3, 6, 7
+[8] J. Deng, N. Ding, Y. Jia, A. Frome, K. Murphy, S. Bengio, Y. Li, H. Neven, and H. Adam, "Large-scale object classification using label relation graphs," European Conference on Computer Vision, 2014. 1, 2
+[9] C. W. Lee, W. Fang, C. K. Yeh, and Y. C. F. Wang, "Multilabel zero-shot learning with structured knowledge graphs," IEEE Conference on Computer Vision and Pattern Recognition, 2012. 1, 2, 3
+[10] Z. Wang, T. Chen, G. Li, G. Li, and L. Lin, "Multi-label image recognition by recurrently discovering attentional regions," IEEE International Conference on Computer Vision, 2017. 1, 2, 3, 6
+[11] J. Weston, S. Bengio, and N. Usunier, "Wsabie: Scaling up to large vocabulary image annotation," IJCAI, 2011. 1, 2
+[12] T. Durand, N. Mehrasa, and G. Mori, “Learning a deep convnet for multi-label classification with partial labels,” IEEE Conference on Computer Vision and Pattern Recognition, 2019. 1
+[13] Z. M. Chen, X. S. Wei, P. Wang, and Y. Guo, "Multi-label image recognition with graph convolutional networks," IEEE Conference on Computer Vision and Pattern Recognition, vol. abs/1904.03582, 2019. 1
+[14] L. Feng, B. An, and S. He, "Collaboration based multi-label learning," AAAI Conference on Artificial Intelligence, 2019. 1
+[15] S. Behpour, W. Xing, and B. D. Ziebart, “Arc: Adversarial robust cuts for semi-supervised and multi-label classification,” AAAI Conference on Artificial Intelligence, 2018. 1, 2
+
+[16] T. Chen, Z. Wang, G. Li, and L. Lin, "Recurrent attentional reinforcement learning for multi-label image recognition," AAAI Conference on Artificial Intelligence, 2018. 1
+[17] S. F. Chen, Y. C. Chen, C. K. Yeh, and Y. C. F. Wang, "Order-free rnn with visual attention for multi-label classification," AAAI Conference on Artificial Intelligence, 2018. 1
+[18] J. Ba, V. Mnih, and K. Kavukcuoglu, "Multiple object recognition with visual attention," International Conference on Learning Representations, vol. abs/1412.7755, 2015. 1, 2, 3
+[19] D. Huynh and E. Elhamifar, “Fine-grained generalized zero-shot learning via dense attribute-based attention,” IEEE Conference on Computer Vision and Pattern Recognition, 2020. 1
+[20] Y. Xian, B. Schiele, and Z. Akata, “Zero-shot learning — the good, the bad and the ugly,” IEEE Conference on Computer Vision and Pattern Recognition, 2017. 1, 3
+[21] E. Schonfeld, S. Ebrahimi, S. Sinha, T. Darrell, and Z. Akata, "Generalized zero- and few-shot learning via aligned variational autoencoders," IEEE Conference on Computer Vision and Pattern Recognition, 2019. 1, 3
+[22] R. Felix, B. G. V. Kumar, I. D. Reid, and G. Carneiro, “Multimodal cycle-consistent generalized zero-shot learning,” European Conference on Computer Vision, 2018. 1
+[23] H. Jiang, R. Wang, S. Shan, and X. Chen, "Transferable contrastive network for generalized zero-shot learning," IEEE International Conference on Computer Vision, 2019. 1
+[24] Y. Atzmon and G. Chechik, “Adaptive confidence smoothing for generalized zero-shot learning,” IEEE Conference on Computer Vision and Pattern Recognition, 2018. 1
+[25] Y. Yu, Z. Ji, Y. Fu, J. Guo, Y. Pang, and Z. Zhang, "Stacked semantics-guided attention model for fine-grained zero-shot learning," Neural Information Processing Systems, 2018. 1, 3, 4
+[26] Y. Zhu, J. Xie, Z. Tang, X. Peng, and A. Elgammal, "Learning where to look: Semantic-guided multi-attention localization for zero-shot learning," Neural Information Processing Systems, 2019. 1, 3
+[27] P. Wang, L. Liu, C. Shen, Z. Huang, A. v. d. Hengel, and H. T. Shen, "Multi-attention network for one shot learning," IEEE Conference on Computer Vision and Pattern Recognition, 2017. 1, 3
+[28] A. Bansal, K. Sikka, G. Sharma, R. Chellappa, and A. Divakaran, “Zero-shot object detection,” European Conference on Computer Vision, 2018. 1, 2, 3
+[29] S. Rahman, S. Khan, and F. Porikli, “Zero-shot object detection: Learning to simultaneously recognize and localize novel concepts,” Asian Conference on Computer Vision, 2018. 1, 2, 4
+[30] S. Rahman, S. Khan, and N. Barnes, "Transductive learning for zero-shot object detection," IEEE International Conference on Computer Vision, 2019. 1, 2, 3
+
+[31] T. Mensink, E. Gavves, and C. G. Snoek, "Costa: Co-occurrence statistics for zero-shot classification," IEEE Conference on Computer Vision and Pattern Recognition, 2014. 2
+[32] J. w. Kim, J. Jun, and B. Zhang, “Bilinear attention networks,” Neural Information Processing Systems, 2018. 2, 3, 4, 6, 7
+[33] G. Tsoumakas and I. Katakis, "Multi-label classification: An overview," International Journal Data Warehousing and Mining, vol. 3, 2007. 2
+[34] I. Misra, C. L. Zitnick, M. Mitchell, and R. Girshick, "Seeing through the human reporting bias: Visual classifiers from noisy human-centric labels," IEEE Conference on Computer Vision and Pattern Recognition, 2016. 2
+[35] C. Wang, S. Yan, L. Zhang, and H. J. Zhang, "Multi-label sparse coding for automatic image annotation," IEEE Conference on Computer Vision and Pattern Recognition, 2009. 2
+[36] L. Jing, L. Yang, and J. Y. M. K. Ng, "Semi-supervised low-rank mapping learning for multi-label classification," IEEE Conference on Computer Vision and Pattern Recognition, 2015. 2
+[37] A. Joulin, L. van der Maaten, A. Jabri, and N. Vasilache, "Learning visual features from large weakly supervised data," European Conference on Computer Vision, 2016. 2
+[38] T. Xiao, T. Xia, Y. Yang, C. Huang, and X. Wang, "Learning from massive noisy labeled data for image classification," IEEE Conference on Computer Vision and Pattern Recognition, 2015. 2
+[39] J. Redmon and A. Farhadi, “Yolo9000: Better, faster, stronger,” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6517–6525, 2017. 2
+[40] S. Ren, K. He, R. B. Girshick, and J. Sun, "Faster r-cnn: Towards real-time object detection with region proposal networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, pp. 1137-1149, 2015. 2
+[41] M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu, “Spatial transformer networks,” Neural Information Processing Systems, 2015. 2, 3
+[42] V. Mnih, N. Heess, A. Graves, and K. Kavukcuoglu, "Recurrent models of visual attention," in Neural Information Processing Systems, 2014. 2, 3
+[43] W. Kim, B. Goyal, K. Chawla, J. Lee, and K. Kwon, "Attention-based ensemble for deep metric learning," in European Conference on Computer Vision, 2018. 2
+[44] K. Xu, J. Ba, R. Kiros, K. Cho, A. C. Courville, R. R. Salakhutdinov, R. S. Zemel, and Y. Bengio, "Show, attend and tell: Neural image caption generation with visual attention," 2015. 3, 4
+[45] J. Lu, C. Xiong, D. Parikh, and R. Socher, "Knowing when to look: Adaptive attention via a visual sentinel for image captioning," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3242-3250, 2017. 3
+
+[46] A. Karpathy and L. Fei-Fei, "Deep visual-semantic alignments for generating image descriptions," IEEE Conference on Computer Vision and Pattern Recognition, pp. 3128-3137, 2015. 3
+[47] J. Nam, E. L. Mencia, H. J. Kim, and J. Furnkranz, “Maximizing subset accuracy with recurrent neural networks in multi-label classification,” Neural Information Processing Systems, 2017. 3
+[48] W. Liu and I. W. H. Tsang, “On the optimality of classifier chain for multi-label classification,” Neural Information Processing Systems, 2015. 3
+[49] C. H. Lampert, H. Nickisch, and S. Harmeling, "Attribute-based classification for zero-shot visual object categorization," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013. 3
+[50] B. Romera-Paredes and P. H. Torr, “An embarrassingly simple approach to zero-shot learning,” International Conference on Machine learning, 2015. 3
+[51] A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, M. A. Ranzato, and T. Mikolov, "Devise: A deep visual-semantic embedding model," Neural Information Processing Systems, 2013, 3, 5
+[52] R. Socher, M. Ganjoo, C. D. Manning, and A. Y. Ng, “Zero-shot learning through cross-modal transfer,” Neural Information Processing Systems, 2013. 3
+[53] T. Mensink, J. Verbeek, F. Perronnin, and G. Csurka, "Metric learning for large scale image classification: Generalizing to new classes at near-zero cost," European Conference on Computer Vision, 2012. 3
+[54] S. Liu, M. Long, J. Wang, and M. I. Jordan, "Generalized zero-shot learning with deep calibration network," Neural Information Processing Systems, 2018. 3
+[55] A. Zhao, M. Ding, J. Guan, Z. Lu, T. Xiang, and J. R. Wen, "Domain-invariant projection learning for zero-shot recognition," Neural Information Processing Systems, 2018. 3
+[56] C. Lampert, M. Blaschko, and T. Hofmann, "Beyond sliding windows: Object localization by efficient subwindow search," 2008. 3
+[57] Z. Akata, F. Perronnin, Z. Harchaoui, and C. Schmid, "Label-embedding for image classification," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016. 3, 5, 6, 7
+[58] D. Jayaraman and K. Grauman, “Zero-shot recognition with unreliable attributes,” Neural Information Processing Systems, 2014. 3
+[59] M. Norouzi, T. Mikolov, S. Bengio, Y. Singer, J. Shlens, A. Frome, G. S. Corrado, and J. Dean, “Zero-shot learning by convex combination of semantic embeddings,” International Conference on Learning Representations, 2014. 3, 5, 6, 7
+[60] Y. Xian, Z. Akata, G. Sharma, Q. Nguyen, M. Hein, and B. Schiele, “Latent embeddings for zero-shot classification,” IEEE Conference on Computer Vision and Pattern Recognition, 2016. 3
+
+[61] M. Ye and Y. Guo, “Multi-label zero-shot learning with transfer-aware label embedding projection,” CoRR, vol. abs/1808.02474, 2018. 3
+[62] Z. Lu, J. Zeng, S. Shan, and X. Chen, “Zero-shot facial expression recognition with multi-label label propagation,” Asian Conference on Computer Vision, vol. abs/1512.06963, 2018. 3
+[63] Z. Ren, H. Jin, Z. L. Lin, C. Fang, and A. L. Yuille, "Multi-instance visual-semantic embedding," British Machine Vision Conference, vol. abs/1512.06963, 2015. 3
+[64] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," Neural Information Processing Systems, 2017. 3
+[65] J. Kuen, Z. Wang, and G. Wang, "Recurrent attentional networks for saliency detection," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3668-3677, 2016. 3
+[66] P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and L. Zhang, "Bottom-up and top-down attention for image captioning and visual question answering," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6077-6086, 2018. 3
+[67] C. H. Lampert, H. Nickisch, and S. Harmeling, “Learning to detect unseen object classes by between-class attribute transfer,” IEEE Conference on Computer Vision and Pattern Recognition, 2009. 3
+[68] T. S. Chua, J. Tang, R. Hong, H. Li, Z. Luo, and Y. T. Zheng, "Nus-wide: A real-world web image database from national university of singapore," ACM International Conference on Image and Video Retrieval, 2009. 5
+[69] I. Krasin, T. Duerig, N. Alldrin, A. Veit, S. Abu-El-Haija, S. Belongie, D. Cai, Z. Feng, V. Ferrari, V. Gomes, A. Gupta, D. Narayanan, C. Sun, G. Chechik, and K. Murphy, "Open images: A public dataset for large-scale multi-label and multi-class image classification," available from https://github.com/openimages, 2016.5
+[70] A. Veit, N. Alldrin, I. K. G. Chechik, A. Gupta, and S. Belongie, “Learning from noisy large-scale datasets with minimal supervision,” IEEE Conference on Computer Vision and Pattern Recognition, 2017. 6
+[71] J. Pennington, R. Socher, and C. D. Manning, “Glove: Global vectors for word representation,” Empirical Methods in Natural Language Processing (EMNLP), 2014. 6
+[72] T. Tieleman and G. Hinton, "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude," COURSERA: Neural networks for machine learning 4.2, 2012. 6
+[73] L. Prechelt, “Early stopping-but when?” Neural Networks: Tricks of the Trade, 1996. 6
+[74] M. Oquab, L. Bottou, I. Laptev, and J. Sivic, "Is object localization for free? - weakly-supervised learning with convolutional neural networks," IEEE Conference on Computer Vision and Pattern Recognition, 2015. 8
\ No newline at end of file
diff --git a/asharedmultiattentionframeworkformultilabelzeroshotlearning/images.zip b/asharedmultiattentionframeworkformultilabelzeroshotlearning/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b4a0038a32877c82f27e661b47d78b6f5f03cfee
--- /dev/null
+++ b/asharedmultiattentionframeworkformultilabelzeroshotlearning/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5976af94ac54c18189ed7c6b7d68d4980e42a32c50b4944c55f4abfea184c8a1
+size 432751
diff --git a/asharedmultiattentionframeworkformultilabelzeroshotlearning/layout.json b/asharedmultiattentionframeworkformultilabelzeroshotlearning/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..2e918d528a27c93c380c19f36bfa80eb2864b292
--- /dev/null
+++ b/asharedmultiattentionframeworkformultilabelzeroshotlearning/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a77aa3eb0fa2ca3991f436f739341eec6713ceeba29a4249daf0855d8b79ed3e
+size 473984
diff --git a/asparseresultantbasedmethodforefficientminimalsolvers/ba5e4039-294f-4310-ba39-0f5becaefe16_content_list.json b/asparseresultantbasedmethodforefficientminimalsolvers/ba5e4039-294f-4310-ba39-0f5becaefe16_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..01074f186d2c135f2f3081ca08e929b1f5ec529e
--- /dev/null
+++ b/asparseresultantbasedmethodforefficientminimalsolvers/ba5e4039-294f-4310-ba39-0f5becaefe16_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5d62b635a4a28c9e35622e8983c74e83d5cd38da63d5956f751e1b1a541fd02e
+size 84443
diff --git a/asparseresultantbasedmethodforefficientminimalsolvers/ba5e4039-294f-4310-ba39-0f5becaefe16_model.json b/asparseresultantbasedmethodforefficientminimalsolvers/ba5e4039-294f-4310-ba39-0f5becaefe16_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c1070e7573de74bfe3164c14518c98f746a7778e
--- /dev/null
+++ b/asparseresultantbasedmethodforefficientminimalsolvers/ba5e4039-294f-4310-ba39-0f5becaefe16_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c738c8b98e82e9aa8de17030a409ce7cead9fb1d331f5f7d2039af5ffad70147
+size 102926
diff --git a/asparseresultantbasedmethodforefficientminimalsolvers/ba5e4039-294f-4310-ba39-0f5becaefe16_origin.pdf b/asparseresultantbasedmethodforefficientminimalsolvers/ba5e4039-294f-4310-ba39-0f5becaefe16_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..b55138887dde35b7c085d569824f5a1f5100b3b1
--- /dev/null
+++ b/asparseresultantbasedmethodforefficientminimalsolvers/ba5e4039-294f-4310-ba39-0f5becaefe16_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dbf24dfb877415961ead7db6586c80f02599c19af57807b0bb4d42a8e191794d
+size 423977
diff --git a/asparseresultantbasedmethodforefficientminimalsolvers/full.md b/asparseresultantbasedmethodforefficientminimalsolvers/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..af46bdf5e8465fd24ad202fb451536d22a930427
--- /dev/null
+++ b/asparseresultantbasedmethodforefficientminimalsolvers/full.md
@@ -0,0 +1,308 @@
+# A sparse resultant based method for efficient minimal solvers
+
+Snehal Bhayani
+
+Zuzana Kukelova
+
+Janne Heikkilä
+
+$^{1}$ Center for Machine Vision and Signal Analysis, University of Oulu, Finland $^{2}$ Visual Recognition Group, Faculty of Electrical Engineering, Czech Technical University in Prague
+
+# Abstract
+
+Many computer vision applications require robust and efficient estimation of camera geometry. The robust estimation is usually based on solving camera geometry problems from a minimal number of input data measurements, i.e. solving minimal problems in a RANSAC framework. Minimal problems often result in complex systems of polynomial equations. Many state-of-the-art efficient polynomial solvers to these problems are based on Gröbner bases and the action-matrix method that has been automated and highly optimized in recent years. In this paper we study an alternative algebraic method for solving systems of polynomial equations, i.e., the sparse resultant-based method and propose a novel approach to convert the resultant constraint to an eigenvalue problem. This technique can significantly improve the efficiency and stability of existing resultant-based solvers. We applied our new resultant-based method to a large variety of computer vision problems and show that for most of the considered problems, the new method leads to solvers that are the same size as the best available Gröbner basis solvers and of similar accuracy. For some problems the new sparse-resultant based method leads to even smaller and more stable solvers than the state-of-the-art Gröbner basis solvers. Our new method can be fully automated and incorporated into existing tools for automatic generation of efficient polynomial solvers and as such it represents a competitive alternative to popular Gröbner basis methods for minimal problems in computer vision.
+
+# 1. Introduction
+
+Computing camera geometry is one of the most important tasks in computer vision [17] with many applications e.g. in structure from motion [39], visual navigation [38], large scale 3D reconstruction [19] and image localization [37].
+
+The robust estimation of camera geometry is usually based on solving so-called minimal problems [35, 24, 23], i.e. problems that are solved from minimal samples of input data, inside a RANSAC framework [14, 9, 36]. Since the camera geometry estimation has to be performed many
+times in RANSAC [14], fast solvers to minimal problems are of high importance. Minimal problems often result in complex systems of polynomial equations in several variables. A popular approach for solving minimal problems is to design procedures that can efficiently solve only a special class of systems of equations, e.g. systems resulting from the 5-pt relative pose problem [35], and move as much computation as possible from the "online" stage of solving equations to an earlier pre-processing "offline" stage.
+
+Most of the state-of-the-art specific minimal solvers are based on Gröbner bases and the action-matrix method [10]. The Gröbner basis method was popularized in computer vision by Stewenius [40]. The first efficient Gröbner basis solvers were mostly handcrafted [41, 42] and sometimes very unstable [43]. However, in the last 15 years much effort has been put into making the process of constructing the solvers more automatic [24, 29, 30] and the solvers stable [6, 7] and more efficient [29, 30, 28, 5, 32]. There are now powerful tools available for the automatic generation of efficient Gröbner basis solvers [24, 29].
+
+While the Gröbner basis method for generating efficient minimal solvers was thoroughly studied in computer vision and all recently generated Gröbner basis solvers are highly optimized in terms of efficiency and stability, little attention has been paid to an alternative algebraic method for solving systems of polynomial equations, i.e. the resultant-based method. The resultant-based method was manually applied to several computer vision problems [16, 20, 23, 25]. However, in contrast to the Gröbner basis method, there is no general method for automatically generating efficient resultant-based minimal solvers. The most promising results in this direction were proposed by Emiris [12] and Heikkilä [18], where methods based on sparse resultants were applied to camera geometry problems. While these methods can be extended to general minimal problems that appear in computer vision and can be automated, they usually lead (due to linearizations) to larger and less efficient solvers than Gröbner basis solvers.
+
+In this paper, we propose a novel approach to generating minimal solvers using sparse resultants, which is based on adding an extra equation of a special form to the input system.
+
+Our algorithm is inspired by the ideas explored in [18, 12], but thanks to the special form of the added equation and to solving the resultant as a small eigenvalue problem, in contrast to the polynomial eigenvalue problem in [18], the new approach achieves significant improvements over [18, 12] in terms of the efficiency of the generated solvers. Specifically, our contributions include:
+
+- A novel sparse resultant-based approach to generating polynomial solvers based on adding an extra equation of a special form and transforming the resultant matrix constraint to a regular eigenvalue problem.
+- Two procedures to reduce the size of the resultant matrix that lead to faster solvers than the best available state-of-the-art solvers for some minimal problems.
+- A general method for automatic generation of efficient resultant-based polynomial solvers for many important minimal problems that achieves competitive performance in terms of speed and stability with respect to the best available state-of-the-art solvers generated by highly optimized Gröbner basis techniques [29, 32]. The automatic generator of resultant-based solvers will be made publicly available at [45].
+
+# 2. Theoretical background and related work
+
+In this paper we use notation and basic concepts from the book by Cox et al. [10]. Our objective is to solve $m$ polynomial equations,
+
+$$
+\left\{f _ {1} \left(x _ {1}, \dots , x _ {n}\right) = 0, \dots , f _ {m} \left(x _ {1}, \dots , x _ {n}\right) = 0 \right\} \tag {1}
+$$
+
+in $n$ unknowns, $X = \{x_{1},\ldots ,x_{n}\}$ . Let $\mathbb{C}[X]$ denote the set of all polynomials in unknowns $X$ with coefficients in $\mathbb{C}$ . The ideal $I = \langle f_1,\dots ,f_m\rangle \subset \mathbb{C}[X]$ is the set of all polynomial combinations of our generators $f_{1},\ldots ,f_{m}$ . The set $V$ of all solutions of the system (1) is called the affine variety. Each polynomial $f\in I$ vanishes on the solutions of (1). Here we assume that the ideal $I$ generates a zero-dimensional variety, i.e. the system (1) has a finite number of solutions. Using the ideal $I$ we can define the quotient ring $A = \mathbb{C}[X] / I$ which is the set of equivalence classes over $\mathbb{C}[X]$ defined by the relation $a\sim b\iff (a - b)\in I$ . If $I$ has a zero-dimensional variety then the quotient ring $A = \mathbb{C}[X] / I$ is a finite-dimensional vector space over $\mathbb{C}$ . For an ideal $I$ there exist special sets of generators called Gröbner bases which have the nice property that the remainder after division is unique. Using a Gröbner basis we can define a linear basis for the quotient ring $A = \mathbb{C}[X] / I$ .
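+
+As a small concrete illustration of these objects (our own toy example, assuming sympy is available), the snippet below computes a Gröbner basis of the ideal $I = \langle x^2 + y^2 - 5,\; xy - 2\rangle$, whose variety consists of four points; the same toy system is reused in the examples that follow.
+
+```python
+from sympy import symbols, groebner
+
+x, y = symbols('x y')
+# Toy zero-dimensional ideal I = <x^2 + y^2 - 5, x*y - 2>; its variety has 4 points:
+# (1, 2), (2, 1), (-1, -2), (-2, -1), so C[x, y]/I is a 4-dimensional vector space.
+G = groebner([x**2 + y**2 - 5, x*y - 2], x, y, order='grevlex')
+print(G)  # a Groebner basis of I w.r.t. the grevlex ordering
+# The monomials not divisible by any leading term of G (here 1, x, y, y^2)
+# form a linear basis of the quotient ring A = C[x, y]/I.
+```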
+
+# 2.1. Gröbner Basis method
+
+Gröbner bases can be used to solve our system of polynomial equations (1). One of the popular approaches for solving systems of equations using Gröbner bases is the multiplication matrix method, known also as the action matrix
+method [10, 44]. This method was recently used to efficiently solve many of the minimal problems in computer vision [23, 24, 29, 32]. The goal of this method is to transform the problem of finding the solutions to (1) to a problem of eigendecomposition of a special multiplication matrix [11]. Let us consider the mapping $T_f: A \to A$ of the multiplication by a polynomial $f \in \mathbb{C}[X]$ . $T_f$ is a linear mapping for which $T_f = T_g$ iff $f - g \in I$ . In our case $A$ is a finite-dimensional vector space over $\mathbb{C}$ and therefore we can represent $T_f$ by its matrix with respect to some linear basis $B$ of $A$ . For a basis $B = ([b_1], \ldots, [b_k])$ consisting of $k$ monomials, $T_f$ can be represented by $k \times k$ multiplication (action) matrix $\mathsf{M}_f := (m_{ij})$ such that $T_f([b_j]) = [fb_j] = \sum_{i=1}^k m_{ij}[b_i]$ . It can be shown [11] that $\lambda \in \mathbb{C}$ is an eigenvalue of the matrix $\mathsf{M}_f$ iff $\lambda$ is a value of the function $f$ on the variety $V$ . In other words, if $f$ is e.g. $x_n$ then the eigenvalues of $\mathsf{M}_f$ are the $x_n$ -coordinates of the solutions of (1). The solutions to the remaining variables can be obtained from the eigenvectors of $\mathsf{M}_f$ . This means that after finding the multiplication matrix $\mathsf{M}_f$ , we can recover the solutions by solving the eigendecomposition of $\mathsf{M}_f$ for which efficient algorithms exist. Moreover, if the ideal $I$ is a radical ideal, i.e. $I = \sqrt{I}$ , [11], then $k$ is equal to the number of solutions to the system (1). Therefore, Gröbner basis methods usually solve an eigenvalue problem of a size that is equivalent to the number of solutions of the problem. For more details and proofs we refer the reader to [10].
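+
+To make the action-matrix mechanics concrete, the following numpy sketch (our own toy example, continuing the ideal above and not code from the paper) hard-codes the multiplication matrix of $x$ with respect to the quotient-ring basis $\{1, x, y, y^2\}$, derived by hand from the Gröbner basis, and reads the solutions off its eigendecomposition.
+
+```python
+import numpy as np
+
+# Action of multiplication by x on the basis (1, x, y, y^2) of C[x, y]/I for
+# I = <x^2 + y^2 - 5, x*y - 2>:  x*1 = x,  x*x = 5 - y^2,  x*y = 2,  x*y^2 = 2*y.
+M_x = np.array([[0., 5., 2., 0.],
+                [1., 0., 0., 0.],
+                [0., 0., 0., 2.],
+                [0., -1., 0., 0.]])
+
+# Eigenvalues of M_x are the x-coordinates of the solutions; eigenvectors of its
+# transpose are the basis monomials evaluated at the solutions.
+vals, vecs = np.linalg.eig(M_x.T)
+for lam, v in zip(vals, vecs.T):
+    v = v / v[0]   # scale so that the entry of the monomial "1" equals 1
+    print(f"x = {lam.real:+.3f}, y = {v[2].real:+.3f}")   # (1,2), (2,1), (-1,-2), (-2,-1)
+```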
+
+The coefficients of the multiplication matrix $\mathbb{M}_f$ are polynomial combinations of coefficients of the input polynomials (1). For computer vision problems these polynomial combinations are often found "offline" in a pre-processing step. In this step, a so-called elimination template is generated, which is actually an expanded set of equations constructed by multiplying original equations with different monomials. This template matrix is constructed such that after filling it with coefficients from the input equations and performing Gauss-Jordan(G-J) elimination of this matrix, the coefficients of the multiplication matrix $\mathbb{M}_f$ can be obtained from this eliminated template matrix.
+
+The first automatic approach for generating elimination templates and Gröbner basis solvers was presented in [24]. Recently an improvement to the automatic generator [24] was proposed in [29] to exploit the inherent relations between the input polynomial equations and it results in more efficient solvers than [24]. The automatic method from [29] was later extended by a method for dealing with saturated ideals [30] and a method for detecting symmetries in polynomial systems [28].
+
+In general, the answer to the question "What is the smallest elimination template for a given problem?" is not known. In [32] the authors showed that the method [29], which is based on the grevlex ordering of monomials and the so-called standard bases of the quotient ring $A$ is not
+optimal in terms of template sizes. The authors of [32] proposed two methods for generating smaller elimination templates. The first is based on enumerating and testing all Gröbner bases w.r.t. different monomial orderings, i.e., the so-called Gröbner fan. By generating solvers w.r.t. all these Gröbner bases and using standard bases of the quotient ring $A$, smaller solvers were obtained for many problems. The second method goes "beyond Gröbner bases" and it uses a manually designed heuristic sampling scheme for generating "non-standard" monomial bases $B$ of $A = \mathbb{C}[X] / I$. This heuristic leads to more efficient solvers than the Gröbner fan method in many cases. While the Gröbner fan method will provably generate at least as efficient solvers as the grevlex-based method from [29], no proof can in general be given for the "heuristic-based" method. The proposed heuristic sampling scheme uses only empirical observations on which basis monomials will likely result in small templates and it samples a fixed number (1000 in the paper) of candidate bases consisting of these monomials. Even though, e.g., the standard grevlex monomial basis will most likely be sampled during the sampling, it is in general not clear how large templates it will generate for a particular problem. The results will also depend on the number of bases tested inside the heuristic.
+
+# 2.2. Sparse Resultants
+
+An alternate approach towards solving polynomial equations is that of using resultants. Simply put, a resultant is an irreducible polynomial constraining coefficients of a set of $n + 1$ polynomials, $F = \{f_{1}(x_{1},\ldots ,x_{n}),\ldots ,f_{n + 1}(x_{1},\ldots ,x_{n})\}$ in $n$ variables to have a non-trivial solution. One can refer to Cox et al. [10] for a more formal theory on resultants. We have $n + 1$ equations in $n$ variables because resultants were initially developed to determine whether a system of polynomial equations has a common root or not. If a coefficient of monomial $\mathbf{x}^{\alpha}$ in the $i^{th}$ polynomial of $F$ is denoted as $u_{i,\alpha}$ the resultant is a polynomial $Res([u_{i,\alpha}])$ with $u_{i,\alpha}$ as variables.
+
+Using this terminology, the basic idea for a resultant based method is to expand $F$ to a set of linearly independent polynomials which can be linearised as $\mathbb{M}([u_{i,\alpha}])\mathbf{b}$ , where $\mathbf{b}$ is a vector of monomials of form $\mathbf{x}^{\alpha}$ and $\mathbb{M}([u_{i,\alpha}])$ has to be a square matrix that has full rank for generic values of $u_{i,\alpha}$ , i.e. $\det \mathbb{M}([u_{i,\alpha}]) \neq 0$ . The determinant of the matrix $\mathbb{M}([u_{i,\alpha}])$ is a non-trivial multiple of the resultant $Res([u_{i,\alpha}])$ [10]. Thus $\det \mathbb{M}([u_{i,\alpha}])$ must vanish, if the resultant vanishes, i.e. $Res([u_{i,\alpha}]) = 0 \Rightarrow \det \mathbb{M}([u_{i,\alpha}]) = 0$ . It is known that $Res([u_{i,\alpha}])$ vanishes iff the polynomial system $F$ has a solution [10]. This gives us the necessary condition for the existence of roots of $F = 0$ . Hence the equation $\det \mathbb{M}([u_{i,\alpha}]) = 0$ gives us those values of $u_{i,\alpha}$ such that $F = 0$ have a common root.
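+
+The simplest instance of this determinant condition is the classical (dense) Sylvester resultant of two univariate polynomials. The sketch below (our own example, illustrating the dense case rather than the sparse construction used later) shows that the determinant vanishes exactly when the two polynomials share a root.
+
+```python
+import numpy as np
+
+def sylvester(f, g):
+    """Sylvester matrix of two univariate polynomials given by descending coefficients."""
+    m, n = len(f) - 1, len(g) - 1
+    S = np.zeros((m + n, m + n))
+    for i in range(n):                 # n shifted copies of f
+        S[i, i:i + m + 1] = f
+    for i in range(m):                 # m shifted copies of g
+        S[n + i, i:i + n + 1] = g
+    return S
+
+f = [1, -3, 2]                                 # x^2 - 3x + 2 = (x - 1)(x - 2)
+print(np.linalg.det(sylvester(f, [1, -1])))    # common root x = 1  -> det ~ 0
+print(np.linalg.det(sylvester(f, [1, -5])))    # no common root     -> det = 12
+```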
+
+Resultants can be used to solve $n$ polynomial equations
+in $n$ unknowns. The most common approach used for this purpose is to hide a variable by considering it as a constant. By hiding, say $x_{n}$ , we obtain $n$ polynomials in $n - 1$ variables, so we can use the concept of resultants and compute $Res([u_{i,\alpha}], x_{n})$ which now becomes a function of $u_{i,\alpha}$ as well as $x_{n}$ . Algorithms based on hiding a variable attempt to expand $F$ to a linearly independent set of polynomials that can be re-written in a matrix form as
+
+$$
+\mathbb{M}([u_{i,\alpha}], x_{n}) \mathbf{b} = \mathbf{0}, \tag{2}
+$$
+
+where $\mathbb{M}([u_{i,\alpha}], x_n)$ is a square matrix whose elements are polynomials in $x_n$ and coefficients $u_{i,\alpha}$ and $\mathbf{b}$ is the vector of monomials in $x_1, \ldots, x_{n-1}$ . For simplicity we will denote the matrix $\mathbb{M}([u_{i,\alpha}], x_n)$ as $\mathbb{M}(x_n)$ in the rest of this section. Here we actually estimate a multiple of the actual resultant via the determinant of the matrix $\mathbb{M}(x_n)$ in (2). This resultant is known as a hidden variable resultant and it is a polynomial in $x_n$ whose roots are the $x_n$ -coordinates of the solutions of the system of polynomial equations. For theoretical details and proofs see [10]. Such a hidden variable approach has been used in the past to solve various minimal problems [16, 20, 23, 25].
+
+The most common way to solve the original system of polynomial equations is to transform (2) to a polynomial eigenvalue problem (PEP) [11] that transforms (2) as
+
+$$
+\left(\mathrm {M} _ {0} + \mathrm {M} _ {1} x _ {n} + \dots + \mathrm {M} _ {l} x _ {n} ^ {l}\right) \mathbf {b} = \mathbf {0}, \tag {3}
+$$
+
+where $l$ is the degree of the matrix $\mathbb{M}(x_n)$ in the hidden variable $x_{n}$ and matrices $\mathbb{M}_0,\ldots ,\mathbb{M}_l$ are matrices that depend only on the coefficients $u_{i,\alpha}$ of the original system of polynomials. The PEP (3) can be easily converted to a generalized eigenvalue problem (GEP):
+
+$$
+\mathbf {A} \mathbf {y} = x _ {n} \mathbf {B} \mathbf {y}, \tag {4}
+$$
+
+and solved using standard efficient eigenvalue algorithms [25]. Basically, the eigenvalues give us the solution to $x_{n}$ and the rest of the variables can be solved from the corresponding eigenvectors, $\mathbf{y}$ [10]. But this transformation to a GEP relaxes the original problem of finding the solutions to our input system and computes eigenvectors that do not satisfy the monomial dependencies induced by the monomial vector $\mathbf{b}$ . And many times it also introduces extra parasitic (zero) eigenvalues leading to slower polynomial solvers.
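+
+The sketch below illustrates one standard companion-style linearization of a degree-2 PEP (3); it uses random matrices and assumes $\mathrm{M}_2$ is invertible so that plain numpy suffices, and it only demonstrates the conversion step, not the construction of $\mathbb{M}(x_n)$ itself.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(1)
+k = 5
+M0, M1, M2 = (rng.standard_normal((k, k)) for _ in range(3))   # degree-2 toy PEP
+
+# Linearization of (M0 + x*M1 + x^2*M2) b = 0 into A y = x B y with y = [b; x*b].
+I = np.eye(k)
+A = np.block([[np.zeros((k, k)), I], [-M0, -M1]])
+B = np.block([[I, np.zeros((k, k))], [np.zeros((k, k)), M2]])
+
+xs, Y = np.linalg.eig(np.linalg.solve(B, A))    # GEP solved via B^{-1} A (B invertible here)
+for x, y in zip(xs, Y.T):
+    b = y[:k]                                   # top block of y is the monomial vector b
+    print(np.linalg.norm((M0 + x * M1 + x * x * M2) @ b))   # ~1e-13 for every root x
+```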
+
+Alternatively, we can add a new polynomial
+
+$$
+f _ {n + 1} = u _ {0} + u _ {1} x _ {1} + \dots + u _ {n} x _ {n} \tag {5}
+$$
+
+to $F$ and compute a so-called $u$ -resultant [10] by hiding $u_0, \ldots, u_n$ . In general random values are assigned to $u_1, \ldots, u_n$ . The $u$ -resultant matrix is computed from these $n + 1$ polynomials in $n$ variables in a way similar to the one explored above. For more details one can refer to [10].
+
+For sparse polynomial systems it is possible to obtain more compact resultants using specialized algorithms. Such resultants are commonly referred to as Sparse Resultants. A sparse resultant would mostly lead to a more compact matrix $\mathsf{M}(x_n)$ and hence a smaller eigendecomposition problem. Emiris et al. [13, 8] have proposed a generalised approach for computing sparse resultants using mixed-subdivision of polytopes. Based on [13, 8] Emiris proposed a method for generating a resultant-based solver for sparse systems of polynomial equations, that was divided in "offline" and "online" computations. The resulting solvers were based either on the hidden-variable trick (2) or the u-resultant of the general form (5). As such the resulting solvers were usually quite large and not very efficient. More recently Heikkilä [18] have proposed an improved approach to test and extract smaller $\mathsf{M}(x_n)$ . This method transforms (2) to a GEP (4) and solves for eigenvalues and eigenvectors to compute solutions to unknowns. The methods [8, 12, 13, 18] suffer from the drawback that they require the input system to have as many polynomials as unknowns to be able to compute a resultant. Additionally, the algorithm [18] suffers from other drawbacks and can not be directly applied to most of the minimal problems. These drawbacks can be overcome, as we describe in the supplementary material. However, even with our proposed improvements the resultant-based method [18], which is based on hiding one of the input variables in the coefficient field, would result in a GEP with unwanted eigenvalues and in turn unwanted solutions to original system (1). This leads to slower solvers for most of the studied minimal problems.
+
+Therefore, we investigate an alternate approach where instead of hiding one of the input variables [12, 18] or using $u$ -resultant of a general form (5) [12], we introduce an extra variable $\lambda$ and a new polynomial of a special form, i.e., $x_{i} - \lambda$ . The augmented polynomial system is solved by hiding $\lambda$ and reducing a constraint similar to (2) into a regular eigenvalue problem that leads to smaller solvers than [12, 18]. Next section lays the theoretical foundation of our approach and outlines the algorithm along with the steps for computing a sparse resultant matrix $\mathsf{M}(\lambda)$ .
+
+# 3. Sparse resultants using an extra equation
+
+We start with a set of $m$ polynomials from (1) in $n$ variables $x_{1},\ldots ,x_{n}$ to be solved. Introducing an extra variable $\lambda$ we define $\mathbf{x}^{\prime} = [x_{1},\dots,x_{n},\lambda ]$ and an extra polynomial $f_{m + 1}(\mathbf{x}^{\prime}) = x_{i} - \lambda$ . Using this, we propose an algorithm inspired by [18] and [12] to solve the following augmented polynomial system for $\mathbf{x}^{\prime}$ ,
+
+$$
+f _ {1} \left(\mathbf {x} ^ {\prime}\right) = 0, \dots , f _ {m} \left(\mathbf {x} ^ {\prime}\right) = 0, f _ {m + 1} \left(\mathbf {x} ^ {\prime}\right) = 0. \tag {6}
+$$
+
+Our idea is to compute its sparse resultant matrix $\mathsf{M} (= \mathsf{M}(\lambda))$ by hiding $\lambda$ in a way that allows us to solve (6) by reducing its linearization (similar to (2)) to an eigenvalue problem.
+
+# 3.1. Sparse resultant and eigenvalue problem
+
+Our algorithm computes the monomial multiples of the polynomials in (6) in the form of a set $T = \{T_1, \ldots, T_m, T_{m+1}\}$ where each $T_i$ denotes the set of monomials to be multiplied by $f_i(\mathbf{x}')$ . We may order monomials in each $T_i$ to obtain a vector form, $\mathbf{T}_i = \mathrm{vec}(T_i)$ and stack these vectors as $\mathbf{T} = [\mathbf{T}_1, \ldots, \mathbf{T}_m, \mathbf{T}_{m+1}]$ . The set of all monomials present in the resulting extended set of polynomials $\{\mathbf{x}^{\alpha_i} f_i(\mathbf{x}') \mid \forall \mathbf{x}^{\alpha_i} \in T_i, i = 1, \ldots, m+1\}$ is called the monomial basis and is denoted as $B = \{\mathbf{x}^\alpha \mid \alpha \in \mathbb{Z}_{\geq 0}^n\}$ . The vector form of $B$ w.r.t. some monomial ordering is denoted as $\mathbf{b}$ . Then the extended set of polynomials can be written in a matrix form,
+
+$$
+\mathrm {M} \mathbf {b} = 0, \tag {7}
+$$
+
+The coefficient matrix $\mathsf{M}$ is a function of $\lambda$ as well as the coefficients of the input polynomials (6). Let $\varepsilon = |B|$. Then by construction [18] $\mathsf{M}$ is a tall matrix with $p\geq \varepsilon$ rows. We can remove extra rows and form an invertible square matrix, which is the sparse resultant matrix mentioned in the previous section. While Heikkilä [18] solves a problem similar to (7) as a GEP, we exploit the structure of the newly added polynomial $f_{m + 1}(\mathbf{x}^{\prime})$ and propose a block partition of $\mathsf{M}$ to reduce the matrix equation (7) to a regular eigenvalue problem.
+
+Proposition 3.1. Let $f_{m + 1}(\mathbf{x}^{\prime}) = x_{i} - \lambda$ , then there exists a block partitioning of M in (7) as:
+
+$$
+\mathrm {M} = \left[ \begin{array}{l l} \mathrm {M} _ {1 1} & \mathrm {M} _ {1 2} \\ \mathrm {M} _ {2 1} & \mathrm {M} _ {2 2} \end{array} \right], \tag {8}
+$$
+
+such that (7) can be converted to an eigenvalue problem of the form $\mathbf{X}\mathbf{b}' = \lambda \mathbf{b}'$ .
+
+Proof: In order to block partition the columns in (8) we need to partition $B$ as $B = B_{\lambda} \sqcup B_{c}$ where
+
+$$
+B _ {\lambda} = B \cap T _ {m + 1}, \quad B _ {c} = B - B _ {\lambda}. \tag {9}
+$$
+
+Let us order the monomials in $B$ , such that $\mathbf{b} = \operatorname{vec}(B) = \left[\operatorname{vec}(B_{\lambda})\operatorname{vec}(B_c)\right]^T = \left[\mathbf{b}_1\mathbf{b}_2\right]^T$ . Such a partition of $\mathbf{b}$ induces a column partition of $\mathbb{M}$ (7). We row partition $\mathbb{M}$ such that the lower block is row-indexed by monomial multiples of $f_{m+1}(\mathbf{x}')$ which are linear in $\lambda$ (i.e. $\mathbf{x}^{\alpha_{\mathrm{j}}}(x_i - \lambda), \mathbf{x}^{\alpha_{\mathrm{j}}} \in T_{m+1}$ ) while the upper block is indexed by monomial multiples of $f_1(\mathbf{x}')$ , $\ldots$ , $f_m(\mathbf{x}')$ . Such a row and column partition of $\mathbb{M}$ gives us a block partition as in (8). As $\left[\mathbb{M}_{11} \quad \mathbb{M}_{12}\right]$ contains polynomials independent of the $\lambda$ and $\left[\mathbb{M}_{21} \quad \mathbb{M}_{22}\right]$ contains polynomials of the form $\mathbf{x}^{\alpha_{\mathrm{j}}}(x_i - \lambda)$ we obtain
+
+$$
+\begin{array}{l} \mathrm {M} _ {1 1} = \mathrm {A} _ {1 1}, \quad \mathrm {M} _ {1 2} = \mathrm {A} _ {1 2} \\ \mathrm {M} _ {2 1} = \mathrm {A} _ {2 1} + \lambda \mathrm {B} _ {2 1}, \quad \mathrm {M} _ {2 2} = \mathrm {A} _ {2 2} + \lambda \mathrm {B} _ {2 2}, \tag {10} \\ \end{array}
+$$
+
+where $\mathsf{A}_{11},\mathsf{A}_{12},\mathsf{A}_{21}$ and $\mathsf{A}_{22}$ are matrices dependent only on the coefficients of input polynomials in (6). We assume here
+that $\mathsf{A}_{12}$ has full column rank. Substituting (10) in (8) gives
+
+$$
+M = \left[ \begin{array}{l l} M _ {1 1} & M _ {1 2} \\ M _ {2 1} & M _ {2 2} \end{array} \right] = \left[ \begin{array}{l l} A _ {1 1} & A _ {1 2} \\ A _ {2 1} & A _ {2 2} \end{array} \right] + \lambda \left[ \begin{array}{c c} 0 & 0 \\ B _ {2 1} & B _ {2 2} \end{array} \right] \tag {11}
+$$
+
+We can order monomials so that $\mathbf{T}_{m + 1} = \mathbf{b}_1$. Now the chosen partition of $\mathbb{M}$ implies that $\mathbb{M}_{21}$ is column indexed by $\mathbf{b}_1$ and row indexed by $\mathbf{T}_{m + 1}$. As $\left[\mathbb{M}_{21} \quad \mathbb{M}_{22}\right]$ has rows of the form $\mathbf{x}^{\alpha_{\mathrm{j}}}(x_i - \lambda)$, $\mathbf{x}^{\alpha_{\mathrm{j}}} \in T_{m + 1} \implies \mathbf{x}^{\alpha_{\mathrm{j}}} \in B_{\lambda}$. This gives us $\mathsf{B}_{21} = -\mathsf{I}$, where $\mathsf{I}$ is an identity matrix of size $|B_{\lambda}|$ and $\mathsf{B}_{22}$ is a zero matrix of size $|B_{\lambda}| \times |B_{c}|$. This also means that $\mathsf{A}_{21}$ is a square matrix of the same size as $\mathsf{B}_{21}$. Thus we have a decomposition as
+
+$$
+\mathbb {M} = \mathbb {M} _ {0} + \lambda \mathbb {M} _ {1} = \left[ \begin{array}{l l} \mathrm {A} _ {1 1} & \mathrm {A} _ {1 2} \\ \mathrm {A} _ {2 1} & \mathrm {A} _ {2 2} \end{array} \right] + \lambda \left[ \begin{array}{l l} 0 & 0 \\ - \mathrm {I} & 0 \end{array} \right], \tag {12}
+$$
+
+where $\mathsf{M}$ is a $p\times \varepsilon$ matrix. If $\mathsf{M}$ is a tall matrix, so is $\mathsf{A}_{12}$ from which we can eliminate extra rows to obtain a square invertible matrix $\hat{\mathsf{A}}_{12}$ while preserving the above mentioned structure, as discussed in Section 3.3. Let $\mathbf{b} = \left[\mathbf{b}_1\quad \mathbf{b}_2\right]^T$ . Then from (7) and (12) we have
+
+$$
+\begin{array}{rl} \left[ \begin{array}{ll} \hat{\mathbf{A}}_{11} & \hat{\mathbf{A}}_{12} \\ \mathbf{A}_{21} & \mathbf{A}_{22} \end{array} \right] \left[ \begin{array}{l} \mathbf{b}_{1} \\ \mathbf{b}_{2} \end{array} \right] + \lambda \left[ \begin{array}{ll} 0 & 0 \\ -\mathbf{I} & 0 \end{array} \right] \left[ \begin{array}{l} \mathbf{b}_{1} \\ \mathbf{b}_{2} \end{array} \right] &= \mathbf{0} \\ \Rightarrow \;\; \hat{\mathbf{A}}_{11} \mathbf{b}_{1} + \hat{\mathbf{A}}_{12} \mathbf{b}_{2} &= \mathbf{0}, \\ \mathbf{A}_{21} \mathbf{b}_{1} + \mathbf{A}_{22} \mathbf{b}_{2} - \lambda \mathbf{b}_{1} &= \mathbf{0} \end{array} \tag{13}
+$$
+
+Eliminating $\mathbf{b}_2$ from the above pair of equations we obtain
+
+$$
+\overbrace{\left(\mathbf{A}_{21} - \mathbf{A}_{22} \hat{\mathbf{A}}_{12}^{-1} \hat{\mathbf{A}}_{11}\right)}^{\mathbf{X}} \mathbf{b}_{1} = \lambda \mathbf{b}_{1}, \tag{14}
+$$
+
+which is the Schur complement of $\hat{\mathsf{A}}_{12}$. If $\mathsf{A}_{12}$ does not have full column rank, we change the partitioning of the columns of $\mathsf{M}$ by changing the partitions to $B_{\lambda} = \{\mathbf{x}^{m}\in T_{m + 1}\mid x_{i}\mathbf{x}^{m}\in B\}$ and $B_{c} = B - B_{\lambda}$, exploiting the form of $f_{m + 1}(\mathbf{x}^{\prime})$. This gives us $\mathsf{A}_{21} = \mathsf{I}$ and $\mathsf{A}_{22} = 0$. It also results in a different $\mathsf{A}_{12}$ and a different $\hat{\mathsf{A}}_{12}$ after removing extra rows. Hence from (12) we have
+
+$$
+M = M _ {0} + \lambda M _ {1} = \left[ \begin{array}{c c} \hat {\mathbf {A}} _ {1 1} & \hat {\mathbf {A}} _ {1 2} \\ \mathbf {I} & 0 \end{array} \right] + \lambda \left[ \begin{array}{c c} 0 & 0 \\ \mathsf {B} _ {2 1} & \mathsf {B} _ {2 2} \end{array} \right], \tag {15}
+$$
+
+which is substituted in (7) to get $\hat{\mathbf{A}}_{11}\mathbf{b}_1 + \hat{\mathbf{A}}_{12}\mathbf{b}_2 = \mathbf{0}$ and $\lambda (\mathsf{B}_{21}\mathbf{b}_1 + \mathsf{B}_{22}\mathbf{b}_2) + \mathbf{b}_1 = \mathbf{0}$. Eliminating $\mathbf{b}_2$ from these equations we get an alternate eigenvalue formulation:
+
+$$
+\left(\mathrm {B} _ {2 1} - \mathrm {B} _ {2 2} \hat {\mathbf {A}} _ {1 2} ^ {- 1} \hat {\mathbf {A}} _ {1 1}\right) \mathbf {b} _ {1} = - (1 / \lambda) \mathbf {b} _ {1}. \quad \square \tag {16}
+$$
+
+We note that (14) defines our proposed solver: we can extract the solutions to $x_{1}, \ldots, x_{n}$ by computing the eigenvectors of $\mathbf{X}$. In case $\hat{\mathbf{A}}_{12}$ is not invertible, we can use the alternate formulation (16) and extract solutions in a similar manner. It is worth noting that the speed of the solver depends on the size of $\mathbf{b}_{1} (= |B_{\lambda}|)$ as well as the size of $\hat{\mathbf{A}}_{12}$, while the accuracy of the solver largely depends on the matrix to be inverted, i.e. $\hat{\mathbf{A}}_{12}$. Hence, in the next section we outline a generalized algorithm for computing a set of monomial multiples $T$ as well as a monomial basis $B$ that leads to a matrix $\mathsf{M}$ satisfying Proposition 3.1.
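+
+The following numpy sketch mimics the decomposition (12) with random blocks of toy sizes (our own illustration, not the paper's generator): it forms the matrix $\mathbf{X}$ of (14) as a Schur complement and verifies that every eigenpair yields a null vector of $\mathsf{M}(\lambda)$.
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+n_lam, n_c = 4, 6                        # toy sizes |B_lambda| and |B_c|
+A11 = rng.standard_normal((n_c, n_lam))  # upper blocks: rows from multiples of f_1..f_m
+A12 = rng.standard_normal((n_c, n_c))    # assumed square and invertible
+A21 = rng.standard_normal((n_lam, n_lam))
+A22 = rng.standard_normal((n_lam, n_c))  # lower blocks: rows from multiples of x_i - lambda
+
+# M(lambda) = M0 + lambda*M1 with the block structure of Eq. (12)
+M0 = np.block([[A11, A12], [A21, A22]])
+M1 = np.block([[np.zeros((n_c, n_lam)), np.zeros((n_c, n_c))],
+               [-np.eye(n_lam),         np.zeros((n_lam, n_c))]])
+
+# Schur-complement reduction of Eq. (14): X b1 = lambda b1
+X = A21 - A22 @ np.linalg.solve(A12, A11)
+lams, B1 = np.linalg.eig(X)
+
+# Each eigenpair yields a null vector b = [b1; b2] of M(lambda)
+for lam, b1 in zip(lams, B1.T):
+    b2 = -np.linalg.solve(A12, A11 @ b1)
+    b = np.concatenate([b1, b2])
+    print(np.linalg.norm((M0 + lam * M1) @ b))   # ~1e-14: b lies in the null space
+```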
+
+# 3.2. Computing a monomial basis
+
+Our approach is based on the algorithm explored in [18] for computing a monomial basis $B$ for a sparse resultant.
+
+We briefly define the basic terms related to convex polytopes used for computing a monomial basis $B$. The Newton polytope of a polynomial, $\mathrm{NP}(f)$, is defined as the convex hull of the exponent vectors of the monomials occurring in the polynomial (also known as the support of the polynomial). Hence, we have $\mathrm{NP}(f_i) = \mathrm{Conv}(A_i)$ where $A_i = \{\alpha | \alpha \in \mathbb{Z}_{\geq 0}^n\}$ is the set of all integer vectors that are exponents of monomials with non-zero coefficients in $f_i$. A Minkowski sum of any two convex polytopes $P_1, P_2$ is defined as $P_1 + P_2 = \{p_1 + p_2 \mid \forall p_1 \in P_1, p_2 \in P_2\}$. An extensive treatment of polytopes can be found in [10]. The algorithm by Heikkilä [18] basically computes the Minkowski sum of the Newton polytopes of a subset of input polynomials, $Q = \Sigma_i \mathrm{NP}(f_i(\mathbf{x}))$. The set of integer points in the interior of the shifted polytope, $B = \mathbb{Z}^{n-1} \cap (Q + \delta)$, where $\delta$ is a small random displacement vector, can provide a monomial basis $B$ satisfying the constraint (2). Our proposed approach computes $B$ as a prospective monomial basis in a similar way, albeit for a modified polynomial system (6). Next we describe our approach and provide a detailed algorithm for the same in the supplementary material.
+
+Given a system of $m(\geq n)$ polynomials (1) in $n$ variables $X = \{x_{1},\ldots ,x_{n}\}$ we introduce a new variable $\lambda$ and create $n$ augmented systems $F^{\prime} = \{f_{1},\dots ,f_{m},x_{i} - \lambda \}$ for each variable $x_{i}\in X$ . Then we compute the support $A_{j} = \operatorname {supp}(f_{j})$ and the Newton polytope $\mathrm{NP}(f_j) = \mathrm{conv}(A_j)$ for each polynomial $f_{j}\in F^{\prime}$ . The unit simplex $\mathrm{NP}_0\subset \mathbb{Z}^n$ is also computed. For each polynomial system $F^{\prime}$ , we consider each subset of polynomials $F_{\mathrm{sub}}\subset F^{\prime}$ and compute its Minkowski sum, $Q = \mathrm{NP}_0 + \Sigma_{f\in F_{\mathrm{sub}}}\mathrm{NP}(f)$ . Then for various displacement vectors $\delta$ we try to compute a candidate monomial basis $B$ as the set of integer points inside $Q + \delta$ .
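+
+A toy sketch of this polytope bookkeeping is given below (our own simplification, assuming scipy is available): it forms the Minkowski sum of the supports of two small polynomials and the unit simplex, and enumerates the integer points inside the $\delta$-shifted sum. The actual generator additionally builds the multiplier sets $T_j$ and runs the rank tests described next.
+
+```python
+import itertools
+import numpy as np
+from scipy.spatial import ConvexHull
+
+def minkowski_sum(P, Q):
+    """Pointwise Minkowski sum of two supports (rows are exponent vectors)."""
+    return np.array([p + q for p in np.asarray(P) for q in np.asarray(Q)])
+
+def integer_points(points, delta):
+    """Integer points lying inside conv(points) + delta (a candidate monomial basis B)."""
+    hull = ConvexHull(points)
+    lo = np.floor(points.min(axis=0) + delta).astype(int)
+    hi = np.ceil(points.max(axis=0) + delta).astype(int)
+    cand = np.array(list(itertools.product(*[range(l, h + 1) for l, h in zip(lo, hi)])))
+    # qhull facet equations: a point z is inside the hull iff normals @ z + offsets <= 0
+    inside = (cand - delta) @ hull.equations[:, :-1].T + hull.equations[:, -1] <= 1e-9
+    return cand[np.all(inside, axis=1)]
+
+# Toy supports (exponent vectors in (x, y)) of f1 = x^2 + y^2 - 5 and f2 = x*y - 2,
+# plus the unit simplex NP_0; conv of the summed supports equals the Minkowski sum of hulls.
+NP0 = [(0, 0), (1, 0), (0, 1)]
+Q = minkowski_sum(minkowski_sum([(2, 0), (0, 2), (0, 0)], [(1, 1), (0, 0)]), NP0)
+print(integer_points(Q, delta=np.array([0.1, 0.3])))
+```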
+
+From $B$ we compute $T_{j} = \{t\in \mathbb{Z}^{n}\mid t + \mathrm{supp}(f_{j})\subset B\}, \forall f_{j}\in F^{\prime}$ . Assuming $T$ to be the set of monomial multiples for input polynomials, our approach tests that $\Sigma_{j = 1}^{m + 1}|T_j|\geq |B|$ , $\min_{j}|T_{j}| > 0$ and $\operatorname {rank}(\mathbb{M}) = |B|$ . If successful, we compute the coefficient matrix $\mathbb{M}$ indexed by $B$ and $T$ as in Section 3.1 and partition $B$ into sets $B_{\lambda} = B\cap T_{m + 1}$ (or $B_{\lambda} = \{\mathbf{x}^{m}\in T_{m + 1}\mid x_{i}\mathbf{x}^{m}\in B\}$ if we need to use the alternate formulation (16)) and $B_{c} = B - B_{\lambda}$ . If the submatrix of $\mathbb{M}$ column indexed by $B_{c}$ and row indexed by $T_{1}\cup \dots \cup T_{m}$ has full column rank then we add $B$ to the list of favourable monomial bases.
+
+Our algorithm then goes through all of the favorable
+monomial bases so computed and selects the smallest monomial basis $B$ among them along with the corresponding set of monomial multiples $T$ from which the coefficient matrix $\mathbb{M}$ is constructed as described in Section 3.1.
+
+Next, we list the prominent features of our approach and how they seek to address the shortcomings of [12, 18]:
+
+1. We attempt to generate the smallest basis $B$ by adding an extra polynomial of the special form $x_{i} - \lambda$ (a special case of (5)) for each $i \in \{1, \ldots, n\}$ and testing each choice.
+2. We explicitly test the rank of $\mathsf{M}$ for each candidate basis $B$ to ensure that we obtain a full-rank solver. This addresses the issue of rank-deficient solvers in [18].
+3. The partition of the monomial basis, $B = B_{\lambda} \sqcup B_{c}$ (9) (or the alternate partition of $B$ described in Proposition 3.1), leads to the favourable decomposition of the coefficient matrix $\mathsf{M}$ in (12), which lets us solve (7) as an eigenvalue problem. This helps us compute much smaller and more stable solvers compared to those generated in [12, 13, 18].
+4. The special form of the extra polynomial helps us construct an $\mathsf{M}$ that is considerably smaller than the one constructed by the general $u$-resultant approach in [12].
+5. Our method can generate solvers for $m \geq n$ in (1).
+
+# 3.3. Removing columns from coefficient matrix
+
+The next step in our method is to attempt to reduce the size of the coefficient matrix $\mathbb{M}$ computed in the previous section. For this, we select columns of $\mathbb{M}$ one by one in a random order to test for their removal. For each such column, we select rows (say $r_1,\ldots ,r_k$ ) that contain non-zero entries in the column and also consider all columns (say $c_{1},\dots ,c_{l}$ ) that have non-zero entries in $r_1,\ldots ,r_k$ . Then we can remove these $k$ rows and $l$ columns from $\mathbb{M}$ only if the following conditions hold true for the resulting reduced matrix $\mathbb{M}_{\mathrm{red}}$ . This also means that we would be removing monomials from $B$ that index $c_{1},\ldots ,c_{l}$ and removing monomials from $T$ that index $r_1,\ldots ,r_k$ .
+
+1. After eliminating the monomials from $T$ , we require that there is at least one monomial left in each $T_{i}$ .
+2. If $\mathbb{M}$ is of size $p \times \varepsilon$ , the reduced matrix $\mathbb{M}_{\mathrm{red}}$ would be of size $(p - k) \times (\varepsilon - l)$ . Then we require $p - k \geq \varepsilon - l$ and $\operatorname{rank}(\mathbb{M}_{\mathrm{red}}) = \varepsilon - l$ .
+3. $\mathrm{M}_{\mathrm{red}}$ must be block partitioned and decomposed as in Proposition 3.1.
+
+We repeat the above process until there are no more columns that can be removed. We note that the last condition is important as it ensures that, at each stage, the reduced matrix can still be partitioned and decomposed into the eigenvalue formulation (14) (or, alternatively, (16)). Reusing the notation, we let $\mathsf{M}$ denote the reduced coefficient matrix and $B$ and $T$ the reduced monomial basis and set of monomial multiples, respectively.
+
+If $\mathbb{M}$ still has more rows than columns, we transform it into a square matrix by removing extra rows (say $q_{1},\ldots ,q_{j}$) and the monomials from $T$ indexing these rows. These rows are chosen so that the three conditions mentioned above are still satisfied. Moreover, our proposed approach first tries to remove as many rows as possible from the lower block, indexed by $T_{m + 1}$. This is to reduce $|T_{m + 1}| (= |B_{\lambda}|)$ as much as possible and ensure that the matrix $\mathsf{A}_{21}$, and hence the matrix $\mathbf{X}$ of the eigenvalue problem in (14) (or in (16)), is as small as possible. Then, if more rows still have to be removed, the rest are randomly chosen from the upper block indexed by $\{T_1,\dots,T_m\}$. Detailed algorithms for these two steps of matrix reduction are provided in the supplementary material. We note that at the end of these two steps we have the sparse resultant matrix $\mathbb{M}$ satisfying (7), which is then reduced to the eigenvalue formulation (14) or to the alternate formulation (16).
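+
+A dense numpy sketch of a single step of this greedy reduction is given below (our own simplified illustration): it only checks the rank and shape conditions, while the bookkeeping that keeps every $T_i$ non-empty and re-checks the block partition of Proposition 3.1 is omitted.
+
+```python
+import numpy as np
+
+def try_remove_column(M, col, tol=1e-9):
+    """Attempt to remove `col` from a (dense) coefficient matrix M.
+
+    Removing the column drags along every row with a non-zero entry in it and every
+    other column touched by those rows; the step is accepted only if the reduced
+    matrix is still at least square-tall and keeps full column rank."""
+    rows = np.flatnonzero(np.abs(M[:, col]) > tol)
+    if rows.size == 0:
+        cols = np.array([col])        # an all-zero column can simply be dropped
+    else:
+        cols = np.unique(np.concatenate([np.flatnonzero(np.abs(M[r]) > tol) for r in rows]))
+    keep_r = np.setdiff1d(np.arange(M.shape[0]), rows)
+    keep_c = np.setdiff1d(np.arange(M.shape[1]), cols)
+    M_red = M[np.ix_(keep_r, keep_c)]
+    if M_red.shape[0] >= M_red.shape[1] and np.linalg.matrix_rank(M_red) == M_red.shape[1]:
+        return M_red, keep_r, keep_c  # accept the removal
+    return None                       # reject: keep the original matrix
+```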
+
+# 4. Experiments
+
+We evaluate the performance of our method by comparing the stabilities as well as the computational complexities of the solvers generated using our method with the state-of-the-art Gröbner basis solvers for many interesting minimal problems. The minimal problems selected for comparison represent a wide variety of relative and absolute pose problems and correspond to those studied in [32]. Results for additional problems are provided in the supplementary material.
+
+# 4.1. Evaluation
+
+The comparison of the computational complexity of minimal solvers is based on the sizes of the matrix templates to be solved. E.g. a solver of size $11 \times 20$ in the table means inverting an $11 \times 11$ matrix and then computing $20 - 11 = 9$ eigenvalues and eigenvectors. So in Table 1 we compare the size of the templates in our resultant-based solvers with the templates used in state-of-the-art Gröbner basis solvers as well as in the original solvers proposed by the respective authors (see column 3). The Gröbner basis solvers used for comparison include the solvers generated using the approach in [29], and the Gröbner fan and heuristic-based approaches in [32]. As we can see from Table 1, our new resultant-based approach leads to the smallest templates and hence the fastest solvers for most of the minimal problems, while for only a few problems our generated solver is slightly larger than the state-of-the-art solver based on the Gröbner fan or the heuristic-based method [32]. For some solvers, though we have a slightly larger eigenvalue problem, the overall template size is considerably smaller. E.g. in the problem of estimating the relative pose and radial distortion parameter from 6pt correspondences [24] we have an eigenvalue problem of size $56 \times 56$ and a matrix inversion of size $39 \times 39$, whereas the heuristic-based solver has a $52 \times 52$ eigenvalue problem but inversion of a larger matrix of size $53 \times 53$. For this problem the resultant-based solver is slightly faster than the state-of-the-art heuristic-based solver [32]. Note that for this problem we failed to generate a Gröbner fan solver [32] in reasonable time. It is worth noting that here we do not compare our solvers' sizes with the resultant-based solvers generated by the original versions of [18] and [12]. These methods cannot be directly applied to most of the studied minimal problems as they cannot handle more equations than unknowns. With [18] we also failed to generate full-rank solvers for some problems. Even after proposing extensions to these methods [18, 12], the generated solvers were larger than ours, and the GEP involved in [18] also led to many unwanted solutions. We give the sizes of these solvers in the supplementary material along with a brief description of our proposed improvements to [18].
+
+Figure 1. Top row: Example of an input image (left). Undistorted image using the proposed resultant-based P4Pfr solver (middle). Input 3D point cloud and an example of registered camera (right). Bottom row: Histograms of errors for 62 images. The measured errors are (left) the $\log_{10}$ relative focal length error $|f - f_{GT}| / f_{GT}$, the radial distortion error $|k - k_{GT}| / |k_{GT}|$, and the relative translation error $\| \vec{t} - \vec{t}_{GT} \| / \| \vec{t}_{GT} \|$, and (right) the rotation error in degrees.
+
+We evaluate and compare the stabilities of our solvers from Table 1 with the Gröbner basis solvers. As it is not feasible to generate scene setups for all considered problems, we instead evaluate the stability of the minimal solvers using 5K instances of random data points. The stability measures include the mean and median of the $\log_{10}$ of normalized equation residuals for the computed solutions, as well as the solver failure rate, i.e. the percentage of the 5K instances for which at least one solution has a normalized residual $> 10^{-3}$. These measures on randomly generated inputs have been shown to be sufficiently good indicators of solver stabilities [29]. Table 2 shows stabilities of solvers for six minimal problems selected from Table 1. We note that for the "Rel. pose $\lambda + \mathrm{E} + \lambda$" problem, our solver is not only faster, but also more stable than the state-of-the-art solvers; the corresponding histogram is provided in the supplementary material, along with the stabilities for the remaining problems and the histograms of their residuals. In general, our new method generates solvers that are stable with only very few failures.
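+
+For reference, the stability numbers reported in Table 2 can be computed from per-instance residuals roughly as follows (a sketch under our reading of the protocol; `residuals` is assumed to hold, for each of the 5K runs, the worst normalized equation residual over that run's solutions).
+
+```python
+import numpy as np
+
+def stability_stats(residuals, fail_tol=1e-3):
+    """Mean/median of log10 residuals and failure rate (in %) over random instances."""
+    residuals = np.maximum(np.asarray(residuals), np.finfo(float).tiny)  # avoid log10(0)
+    log_res = np.log10(residuals)
+    fail_pct = 100.0 * np.mean(residuals > fail_tol)
+    return log_res.mean(), np.median(log_res), fail_pct
+
+# e.g. stability_stats(np.random.lognormal(mean=-28, sigma=2, size=5000))
+```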
+
+Note that as our new solvers are solving the same formulations of problems as the existing state-of-the-art solvers, the performance on noisy measurements and real data would be the same as the performance of the state-of-the-art solvers. The only difference in the performance comes from numerical instabilities that already appear in the noise-less case and are detailed in Table 2 (fail%). For performance of the solvers in real applications we refer the reader to papers where the original formulations of the studied problems were presented (see Table 1, column 3). Here we select two interesting problems, i.e. one relative and one absolute pose problem, and perform experiments on synthetically generated scenes and on real images, respectively.
+
+$\mathbf{E} + f\lambda$ solver on synthetic scenes: We study the numerical stability of the new resultant-based solver for the problem of estimating the relative pose of one calibrated camera and one camera with unknown focal length and radial distortion from 7-point correspondences, i.e. the Rel. pose $\mathrm{E} + f\lambda$ 7pt problem from Table 1. We considered the formulation "elim. $\lambda$" proposed in [32] that leads to the smallest solvers. We studied the performance on noise-free data and compared it to the results of the Gröbner basis solvers from Table 1.
+
+We generated 10K scenes with 3D points drawn uniformly from a $[-10,10]^3$ cube. Each 3D point was projected by two cameras with random feasible orientation and position. The focal length of the first camera was randomly drawn from the interval $f_{gt} \in [0.5,2.5]$ and the focal length of the second camera was set to 1, i.e., the second camera was calibrated. The image points in the first camera were corrupted by radial distortion following the one-parameter division model. The radial distortion parameter $\lambda_{gt}$ was drawn at random from the interval $[-0.7,0]$, representing distortions ranging from a small distortion up to slightly more than that of GoPro-style cameras. Graphs of the $\log_{10}$ relative errors of the distortion parameter $\lambda$ and the focal length $f$ are provided in the supplementary material.
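+
+For completeness, the one-parameter division model used to corrupt the image points can be sketched as below (our own code, following the standard form of the model rather than anything released with the paper); points are expressed relative to the distortion centre, and the forward (distorting) map inverts the model per point, which is well defined for $\lambda \in [-0.7, 0)$.
+
+```python
+import numpy as np
+
+def undistort_div(p, lam):
+    """One-parameter division model: p_undistorted = p / (1 + lam * ||p||^2)."""
+    return p / (1.0 + lam * np.sum(p**2, axis=-1, keepdims=True))
+
+def distort_div(p_u, lam):
+    """Inverse map used to synthetically distort ideal projections (lam != 0, p_u != 0)."""
+    r_u = np.linalg.norm(p_u, axis=-1, keepdims=True)
+    r_d = (1.0 - np.sqrt(1.0 - 4.0 * lam * r_u**2)) / (2.0 * lam * r_u)
+    return p_u * (r_d / r_u)   # distorted point has radius r_d, same direction
+```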
+
+P4Pfr solver on real images: We evaluated the resultant-based solver for the practical problem of estimating the absolute pose of a camera with unknown focal length and radial distortion from four 2D-to-3D point correspondences, i.e. the P4Pfr solver, on real data. We consider the Rotunda dataset, which was proposed in [26] and was used in [31] for evaluating P4Pfr solvers. This dataset consists of 62 images captured by a GoPro Hero4 camera. An example input image from this dataset (left), together with the undistorted (middle) and registered (right) results obtained using our new solver, is shown in Figure 1 (top). The Reality Capture software [1] was used to build a 3D reconstruction of this scene. We used the 3D model to estimate the pose of each image using the new P4Pfr resultant-based solver $(28 \times 40)$ in a RANSAC framework. Similar to [31], we used the camera and distortion parameters obtained from [1] as ground truth for the experiment.
+
+| Problem | Our | Original* | [29] | GFan [32] | (#GB) | Heuristic [32] |
| --- | --- | --- | --- | --- | --- | --- |
| Rel. pose F+λ 8pt(‡)(8 sols.) | 7 × 16 | 12 × 24 [22] | 11 × 19 | 11 × 19 | (10) | 7 × 15 |
| Rel. pose E+f 6pt (9 sols.) | 11 × 20 | 21 × 30 [3] | 21 × 30 | 11 × 20 | (66) | 11 × 20 |
| Rel. pose f+E+f 6pt (15 sols.) | 12 × 30 | 31 × 46 [24] | 31 × 46 | 31 × 46 | (218) | 21 × 36 |
| Rel. pose E+λ 6pt (26 sols.) | 14 × 40 | 48 × 70 [22] | 34 × 60 | 34 × 60 | (846) | 14 × 40 |
| Stitching fλ+R+fλ 3pt (18 sols.) | 18 × 36 | 54 × 77 [34] | 48 × 66 | 48 × 66 | (26) | 18 × 36 |
| Abs. Pose P4Pfr (16 sols.) | 52 × 68 | 136 × 152 [4] | 140 × 156 | 54 × 70 | (1745) | 54 × 70 |
| Abs. Pose P4Pfr (elim. f) (12 sols.) | 28 × 40 | 28 × 40 [31] | 48 × 60 | 28 × 40 | (699) | 28 × 40 |
| Rel. pose λ+E+λ 6pt(‡)(52 sols.) | 39 × 95 | 238 × 290 [24] | 149 × 201 | - | ? | 53 × 105 |
| Rel. pose λ1+F+λ2 9pt (24 sols.) | 90 × 117 | 179 × 203 [24] | 189 × 213 | 87 × 111 | (6896) | 87 × 111 |
| Rel. pose E+fλ 7pt (19 sols.) | 61 × 80 | 200 × 231[22] | 181 × 200 | 69 × 88 | (3190) | 69 × 88 |
| Rel. pose E+fλ 7pt (elim. λ) (19 sols.) | 22 × 41 | - | 52 × 71 | 37 × 56 | (332) | 24 × 43 |
| Rel. pose E+fλ 7pt (elim. fλ) (19 sols.) | 51 × 70 | 51 × 70 [27] | 51 × 70 | 51 × 70 | (3416) | 51 × 70 |
| Abs. pose quivers(†)(20 sols.) | 68 × 92 | 372 × 386 [21] | 203 × 223 | - | ? | 68 × 88 |
| Rel. pose E angle+4pt (20 sols.) | - | 270 × 290 [33] | 266 × 286 | - | ? | 183 × 203 |
| Abs. pose refractive P5P(†)(16 sols.) | 68 × 93 | 280 × 399 [15] | 199 × 215 | 112 × 128 | (8659) | 199 × 215 |
| Unsynch. Rel. pose (16 sols.) | 150 × 168 | 633 × 649[2] | 467 × 483 | - | ? | 299 × 315 |
+
+Table 1. Comparison of solver sizes for some minimal problems. Missing entries are when we failed to generate a solver. $(\ast)$ : Sizes for the original formulations. $(\dagger)$ : Input polynomials were eliminated using G-J elimination before generating a solver using our resultant method as well as solvers based on [29], the Gröbner fan-based solver [32] and the heuristic-based solver [32]. $(\ddagger)$ :Solved using the alternate eigenvalue formulation (16).
+
+| Problem | Our mean | Our med. | Our fail (%) | [29] mean | [29] med. | [29] fail (%) | Heuristic [32] mean | Heuristic [32] med. | Heuristic [32] fail (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Rel. pose f+E+f 6pt | -12.55 | -12.90 | 0.52 | -12.09 | -12.53 | 2.36 | -12.05 | -12.48 | 1.44 |
| Abs. Pose P4Pfr (elim. f) | -12.86 | -13.08 | 0 | -12.59 | -12.85 | 0 | -12.73 | -13.00 | 0.02 |
| Rel. pose λ+E+λ 6pt | -8.99 | -9.33 | 14.66 | -6.92 | -7.45 | 25.9 | -8.13 | -8.73 | 26.46 |
| Rel. pose E+fλ 7pt(‡) | -11.29 | -11.59 | 0.36 | -10.69 | -11.13 | 7.58 | - | - | - |
| Rel. pose E+fλ 7pt (elim. λ) | -12.53 | -12.95 | 2.34 | -11.99 | -12.35 | 0.44 | -11.05 | -11.84 | 5.70 |
| Abs. pose refractive P5P(†) | -13.03 | -13.25 | 0 | -12.45 | -12.79 | 0.10 | -12.23 | -12.53 | 0.08 |
+
+Table 2. Stability comparison for solvers generated by our new method, solvers generated using [29], and heuristic-based solvers [32] on some interesting minimal problems. Mean and median are computed from the $\log_{10}$ of normalized equation residuals. (†): Solvers generated after Gauss-Jordan (G-J) elimination of the input polynomials. (‡): Failed to extract solutions to all variables with the heuristic-based solver [32].
+
+Figure 1 (bottom) shows the errors for the focal length, radial distortion, and the camera pose. Overall the errors are quite small, e.g. most of the focal lengths are within $0.1\%$ of the ground truth and almost all rotation errors are less than 0.1 degrees, which shows that our new solver works well for real data. These results (summarized in the supplementary material) are consistent with the results of the P4Pfr solver presented in [31], which was tested on the same dataset. The slightly different results reported in [31] are due to RANSAC's random nature and a slightly different P4Pfr formulation ($40 \times 50$) used in [31].
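+
+The way such a minimal solver is typically wrapped in RANSAC can be sketched as follows (a generic loop of our own; `solve_minimal` and `reproj_err` are hypothetical callbacks standing in for the P4Pfr solver and its reprojection-error function, and the threshold and iteration count are illustrative).
+
+```python
+import numpy as np
+
+def ransac_pose(pts2d, pts3d, solve_minimal, reproj_err,
+                sample_size=4, thresh=2.0, iters=1000, rng=None):
+    """Generic RANSAC around a minimal solver that returns several candidate models."""
+    rng = rng if rng is not None else np.random.default_rng(0)
+    best_model, best_inliers = None, np.zeros(len(pts2d), dtype=bool)
+    for _ in range(iters):
+        idx = rng.choice(len(pts2d), size=sample_size, replace=False)
+        for model in solve_minimal(pts2d[idx], pts3d[idx]):     # all real solutions
+            inliers = reproj_err(model, pts2d, pts3d) < thresh  # per-point errors (pixels)
+            if inliers.sum() > best_inliers.sum():
+                best_model, best_inliers = model, inliers
+    return best_model, best_inliers
+```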
+
+# 5. Conclusion
+
+Here we propose a novel algorithm for generating efficient minimal solvers based on sparse resultants that achieves significant improvements over existing resultant-based methods in terms of efficiency of the generated solvers. Our experiments on many minimal problems on real and synthetic
+scenes show that the new method is a competitive alternative to the highly optimised Gröbner basis methods. The fact that the new resultant-based solvers have, for many problems, the same size as the state-of-the-art heuristic or GFan solvers suggests that these solvers may already be "optimal" w.r.t. template sizes. On the other hand, there is no single general method (GFan/heuristic/resultant) that provably returns the smallest solver for every problem, and we believe that, especially for complex problems, all methods have to be tested when trying to generate the "best" solver.
+
+# 6. Acknowledgement
+
+The authors would like to thank the Academy of Finland for the financial support of this research (grant no. 297732). Zuzana Kukelova was supported by OP RDE project International Mobility of Researchers MSCA-IF at CTU Reg. No. CZ.02.2.69/0.0/0.0/17_050/0008025 and by the ERC-CZ grant MSMT LL1901.
+
+# References
+
+[1] RealityCapture. www.capturingreality.com.
+[2] Cenek Albl, Zuzana Kukelova, Andrew W. Fitzgibbon, Jan Heller, Matej Smid, and Tomás Pajdla. On the two-view geometry of unsynchronized cameras. CoRR, abs/1704.06843, 2017.
+[3] Martin Bujnak, Zuzana Kukelova, and Tomas Pajdla. 3d reconstruction from image collections with a single known focal length. In International Conference on Computer Vision (ICCV), pages 1803-1810. IEEE, 2009.
+[4] Martin Bujnak, Zuzana Kukelova, and Tomas Pajdla. New efficient solution to the absolute pose problem for camera with unknown focal length and radial distortion. In Asian Conference on Computer Vision (ACCV), pages 11-24. Springer, 2010.
+[5] M. Bujnak, Z. Kukelova, and T. Pajdla. Making minimal solvers fast. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2012), pages 1506-1513, June 2012.
+[6] Martin Byröd, Klas Josephson, and Kalle Åström. Improving numerical accuracy of gröbner basis polynomial equation solvers. In International Conference on Computer Vision (ICCV). IEEE, 2007.
+[7] Martin Byröd, Klas Josephson, and Kalle Åström. A column-pivoting based strategy for monomial ordering in numerical gröbner basis calculations. In European Conference on Computer Vision (ECCV). Springer Berlin Heidelberg, 2008.
+[8] John F. Canny and Ioannis Z. Emiris. A subdivision-based algorithm for the sparse resultant. J. ACM, 47(3):417-451, 2000.
+[9] Ondrej Chum, Jiří Matas, and Josef Kittler. Locally optimized ransac. In Pattern Recognition, pages 236-243. Springer Berlin Heidelberg, 2003.
+[10] D. Cox, J. Little, and D. O'Shea. Using Algebraic Geometry. Springer, 2nd edition, 2005.
+[11] D. A. Cox, J. Little, and D. O'Shea. Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra. Springer, 2015.
+[12] Ioannis Z. Emiris. A general solver based on sparse resultants. CoRR, abs/1201.5810, 2012.
+[13] Ioannis Z. Emiris and John F. Canny. A practical method for the sparse resultant. In Proceedings of the 1993 International Symposium on Symbolic and Algebraic Computation, ISSAC '93, Kiev, Ukraine, July 6-8, 1993, pages 183-192, 1993.
+[14] M. A. Fischler and R. C. Bolles. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM, 24(6):381-395, June 1981.
+[15] Sebastian Haner and Kalle Åström. Absolute pose for cameras under flat refractive interfaces. In Computer Vision and Pattern Recognition (CVPR), pages 1428-1436, 2015.
+[16] R. Hartley and Hongdong Li. An efficient hidden variable approach to minimal-case camera motion estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(12):2303-2314, 2012.
+
+[17] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge, 2nd edition, 2003.
+[18] Janne Heikkila. Using sparse elimination for solving minimal problems in computer vision. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 76-84, 2017.
+[19] J. Heinly, J. L. Schonberger, E. Dunn, and J.-M. Frahm. Reconstructing the world* in six days. In IEEE Conference on Computer Vision and Pattern Recognition, (CVPR 2015), pages 3287-3295, 2015.
+[20] Yoni Kasten, Meirav Galun, and Ronen Basri. Resultant based incremental recovery of camera pose from pairwise matches. In IEEE Winter Conference on Applications of Computer Vision, WACV 2019, Waikoloa Village, HI, USA, January 7-11, 2019, pages 1080-1088, 2019.
+[21] Yubin Kuang and Kalle Åström. Pose estimation with unknown focal length using points, directions and lines. In International Conference on Computer Vision (ICCV), pages 529-536, 2013.
+[22] Yubin Kuang, Jan Erik Solem, Fredrik Kahl, and Kalle Åström. Minimal solvers for relative pose with a single unknown radial distortion. In Computer Vision and Pattern Recognition (CVPR), pages 33-40. IEEE, 2014.
+[23] Z. Kukelova. Algebraic Methods in Computer Vision. PhD thesis, Czech Technical University in Prague, 2013.
+[24] Z. Kukelova, M. Bujnak, and T. Pajdla. Automatic generator of minimal problem solvers. In European Conference on Computer Vision (ECCV 2008), Proceedings, Part III, volume 5304 of Lecture Notes in Computer Science, 2008.
+[25] Zuzana Kukelova, Martin Bujnak, and Tomas Pajdla. Polynomial eigenvalue solutions to minimal problems in computer vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012.
+[26] Zuzana Kukelova, Jan Heller, Martin Bujnak, Andrew Fitzgibbon, and Tomas Pajdla. Efficient solution to the epipolar geometry for radially distorted cameras. In Proceedings of the IEEE international conference on computer vision, pages 2309-2317, 2015.
+[27] Zuzana Kukelova, Joe Kileel, Bernd Sturmfels, and Tomas Pajdla. A clever elimination strategy for efficient minimal solvers. In Computer Vision and Pattern Recognition (CVPR). IEEE, 2017.
+[28] Viktor Larsson and Kalle Åström. Uncovering symmetries in polynomial systems. In European Conference on Computer Vision (ECCV). Springer, 2016.
+[29] Viktor Larsson, Kalle Åström, and Magnus Oskarsson. Efficient solvers for minimal problems by syzygy-based reduction. In Computer Vision and Pattern Recognition (CVPR), 2017.
+[30] Viktor Larsson, Kalle Åström, and Magnus Oskarsson. Polynomial solvers for saturated ideals. In International Conference on Computer Vision (ICCV), 2017.
+[31] Viktor Larsson, Zuzana Kukelova, and Yinqiang Zheng. Making minimal solvers for absolute pose estimation compact and robust. In International Conference on Computer Vision (ICCV), 2017.
+
+[32] Viktor Larsson, Magnus Oskarsson, Kalle Åström, Alge Wallis, Zuzana Kukelova, and Tomás Pajdla. Beyond Gröbner bases: Basis selection for minimal solvers. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 3945-3954, 2018.
+[33] Bo Li, Lionel Heng, Gim Hee Lee, and Marc Pollefeys. A 4-point algorithm for relative pose estimation of a calibrated camera with a known relative rotation angle. In 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1595-1601. IEEE, 2013.
+[34] Oleg Naroditsky and Kostas Daniilidis. Optimizing polynomial solvers for minimal geometry problems. In International Conference on Computer Vision (ICCV). IEEE, 2011.
+[35] D. Nister. An efficient solution to the five-point relative pose problem. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(6):756-770, June 2004.
+[36] R. Raguram, O. Chum, M. Pollefeys, J. Matas, and J.-M. Frahm. USAC: A universal framework for random sample consensus. IEEE Transactions on Pattern Recognition and Machine Intelligence, 35(8):2022-2038, 2013.
+[37] T. Sattler, B. Leibe, and L. Kobbelt. Efficient & Effective Prioritized Matching for Large-Scale Image-Based Localization. IEEE Transactions on Pattern Recognition and Machine Intelligence, 2016. (To appear).
+[38] D. Scaramuzza and F. Fraundorfer. Visual odometry [tutorial]. IEEE Robot. Automat. Mag., 18(4):80-92, 2011.
+[39] N. Snavely, S. M. Seitz, and R. Szeliski. Modeling the world from internet photo collections. International Journal Computer Vision, 80(2):189-210, Nov. 2008.
+[40] Henrik Stewenius. Gröbner Basis Methods for Minimal Problems in Computer Vision. PhD thesis, Lund University, Sweden, 2005.
+[41] H. Stewenius, C. Engels, and D. Nister. Recent developments on direct relative orientation. ISPRS J. of Photogrammetry and Remote Sensing, 60:284-294, 2006.
+[42] H. Stewenius, D. Nister, F. Kahl, and F. Schaffalitzky. A minimal solution for relative pose with unknown focal length. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2005), 2005.
+[43] Henrik Stewenius, Frederik Schaffalitzky, and David Nister. How hard is 3-view triangulation really? In International Conference on Computer Vision (ICCV). IEEE, 2005.
+[44] B. Sturmfels. Solving systems of polynomial equations. In American Mathematical Society, CBMS Regional Conferences Series, No 97, 2002.
+[45] Automatic Generator for Sparse Resultant Solvers. https://github.com/snehalbhayani/aut_gen_sparse_resSolver.
\ No newline at end of file
diff --git a/asparseresultantbasedmethodforefficientminimalsolvers/images.zip b/asparseresultantbasedmethodforefficientminimalsolvers/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b0274af8345263abf0b8d1a01d786d8981c562ff
--- /dev/null
+++ b/asparseresultantbasedmethodforefficientminimalsolvers/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d88919ffc6f7b9323e561cd851c690710e9532a5ecb5dc22a4d033fcb6bca052
+size 341421
diff --git a/asparseresultantbasedmethodforefficientminimalsolvers/layout.json b/asparseresultantbasedmethodforefficientminimalsolvers/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..ed310319ed2fc921b9e43bfee425a1f27dc8e4ae
--- /dev/null
+++ b/asparseresultantbasedmethodforefficientminimalsolvers/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4461ed84e8d6a77f854ec604c2d6c22491aafe4210de0683c4296c7a6cac775d
+size 591173
diff --git a/aspatialrnncodecforendtoendimagecompression/3e58de57-54bf-4c37-901d-f0545d9ebe68_content_list.json b/aspatialrnncodecforendtoendimagecompression/3e58de57-54bf-4c37-901d-f0545d9ebe68_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..84a26c874ebfd694f919f65430f6eb6efe56c9af
--- /dev/null
+++ b/aspatialrnncodecforendtoendimagecompression/3e58de57-54bf-4c37-901d-f0545d9ebe68_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9f71737b00aa7cf66b44ff5d64ab65be5166abd586cf5c1f5be60d4fd79c33e4
+size 61953
diff --git a/aspatialrnncodecforendtoendimagecompression/3e58de57-54bf-4c37-901d-f0545d9ebe68_model.json b/aspatialrnncodecforendtoendimagecompression/3e58de57-54bf-4c37-901d-f0545d9ebe68_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..c413488d490d14c82395d4bf6cb37744a60388c5
--- /dev/null
+++ b/aspatialrnncodecforendtoendimagecompression/3e58de57-54bf-4c37-901d-f0545d9ebe68_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bfe70e5b40403e0ce37f7e14c8e34073baf0dfe7c63645ad1033bdcc7e790143
+size 76370
diff --git a/aspatialrnncodecforendtoendimagecompression/3e58de57-54bf-4c37-901d-f0545d9ebe68_origin.pdf b/aspatialrnncodecforendtoendimagecompression/3e58de57-54bf-4c37-901d-f0545d9ebe68_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..488465146c182b5c1621b02a6caede89235982f7
--- /dev/null
+++ b/aspatialrnncodecforendtoendimagecompression/3e58de57-54bf-4c37-901d-f0545d9ebe68_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fefc6c25417a466612360c8684bb08d0ab67995dcae0607fd8dc961d2d843c1e
+size 1426517
diff --git a/aspatialrnncodecforendtoendimagecompression/full.md b/aspatialrnncodecforendtoendimagecompression/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..6c7dbadc09327e2a438291a7a51d23efa1517b4e
--- /dev/null
+++ b/aspatialrnncodecforendtoendimagecompression/full.md
@@ -0,0 +1,264 @@
+# A Spatial RNN Codec for End-To-End Image Compression
+
+Chaoyi Lin, Jiabao Yao, Fangdong Chen, Li Wang
+Hikvision Research Institute
+Hangzhou, China
+
+{linchaoyi, yaojiabao, chenfangdong, wangli7}@hikvision.com
+
+# Abstract
+
+Recently, deep learning has been explored as a promising direction for image compression. Removing the spatial redundancy of the image is crucial for image compression, and most learning based methods focus on removing the redundancy between adjacent pixels. Intuitively, exploring a larger pixel range beyond adjacent pixels is beneficial for removing this redundancy. In this paper, we propose a fast yet effective method for end-to-end image compression by incorporating a novel spatial recurrent neural network. Block based LSTM is utilized to remove the redundant information between adjacent pixels and blocks. Moreover, the proposed method lends itself to an efficient system in which individual blocks can be computed in parallel. Experimental results demonstrate that the proposed model outperforms state-of-the-art traditional image compression standards and learning based image compression models in terms of both PSNR and MS-SSIM metrics. It provides a $26.73\%$ bit saving over High Efficiency Video Coding (HEVC), which is the current official state-of-the-art video codec.
+
+# 1. Introduction
+
+Image compression is an important technique for reducing communication traffic and saving data storage. Most traditional lossy image compression standards such as JPEG [25], WebP [4] and Better Portable Graphics (BPG) [5] are based on the transform coding [9] framework. In this framework, a prediction transform module is used to map image pixels into a quantized latent representation, and the latents are then compressed by entropy coding.
+
+Recently, deep neural networks (DNNs) have shown great advantages in various areas. Along with this progress of deep learning, learning based image compression models have also attracted significant interest [14, 8, 19, 17, 3, 18, 10, 1, 11]. Auto-encoders are usually applied in image compression: an encoder transforms the input image into a latent representation, and the decoder inversely transforms a quantized latent representation into the reconstruction of the input image.
+
+
+Figure 1. The effectiveness of block based methods. In the blue region, the correlations between both adjacent pixels and adjacent blocks are large. In the red region, due to the similar texture in different blocks, the correlation between adjacent blocks is larger than that between adjacent pixels.
+
+The neural networks in the auto-encoder approximate nonlinear functions, which can map pixels into a more compressible latent space than the linear transform used by traditional image compression standards. Another advantage of learning based image compression models is that they can be easily optimized for a specific metric such as SSIM [26] and MS-SSIM [27] by changing the loss function.
+
+Very recently, a few learning based image compression models have outperformed the state-of-the-art traditional image compression standard BPG in terms of the PSNR metric [28, 13, 18, 7]. These works focus on removing the redundant information between adjacent pixels by CNNs. However, in the latest developing image/video compression standards, such as Versatile Video Coding (VVC) [6], block based processing is preferred. By using block based processing, the redundant information for both adjacent pixels and blocks can be removed through a block based prediction transform [15]. Figure 1 illustrates the effectiveness of block based methods. High correlation for adjacent pixels and blocks can be found in the region marked by the blue line. In this case, both pixel based methods and block based methods are effective.
+
+However, in the red region, pixel based methods can barely capture the redundancy because the correlation between adjacent pixels is low. By using block based methods, similar textures can be found between adjacent blocks and the spatial redundancy can be removed effectively in this case. This demonstrates that block based methods can further improve compression performance. However, they are seldom explored in learning based image compression models.
+
+Inspired by the latest compression standards, we propose a spatial RNN architecture for a lossy image compression model. The spatial RNN architecture fully exploits the spatial correlations existing in adjacent blocks through block based LSTM, which can further remove spatially redundant information. Besides, adaptive quantization is adopted in our model, where the network learns to automatically allocate bits for the latent map according to its contents. Moreover, two hyperprior networks are adopted instead of a context model in the proposed entropy model, considering both performance and efficiency. Experimental results demonstrate that the proposed image compression model can outperform the state-of-the-art traditional compression standard BPG and other deep learning based image compression models. Moreover, the proposed method is amenable to parallel computing, which makes it highly efficient.
+
+# 2. Related work
+
+Many standard codecs have been developed for lossy image compression. The most widely used lossy compression standard is JPEG. More sophisticated standards such as WebP and BPG are developed to be portable and more compression-efficient than JPEG. To our knowledge, BPG has the highest compression performance among existing lossy image compression standards.
+
+Recently, applying neural networks to image compression has attracted considerable attention. Neural network architectures for image compression are usually based on the auto-encoder framework. In this framework, both recurrent neural network (RNN) [5, 24, 28] and convolutional neural network (CNN) [1, 17, 2, 22, 10] based models have been developed. Toderici et al. [5] propose an RNN architecture for a variable-rate image compression framework which compresses a $32 \times 32$ image in a progressive manner. In [24], a general architecture for compressing full resolution images with an RNN, residual scaling and a variation of the gated recurrent unit (GRU) is presented. Weber et al. [28] utilize an RNN based architecture for image compression and classification. Different from [5, 24], which only focus on removing the redundant information within each block, the redundancy between adjacent blocks is explored in our block based LSTM recurrent network.
+
+Entropy model, which approximates the distribution of discrete latent representation, improves the image compression performance significantly. Thus, recent methods have
+
+given increasing focus to the entropy model to improve compression performance. Balle et al. [3] propose to use a hyperprior to effectively capture the spatial dependencies in the latent representation. They model the distribution of the latent representation as a zero-mean Gaussian distribution with standard deviation $\sigma$ . A scale hyperprior is introduced to estimate $\sigma$ by stacking another auto-encoder on the latent representation. Minnen et al. [18] further utilize the hyperprior to estimate the mean and standard deviation of the learned latent representation to help remove spatial dependencies from the latents. Besides, a context model is adopted in their model to achieve a higher compression rate, and it is the first learning based model that outperforms BPG on the PSNR metric. Lee et al. [13] also present a lossy image compression model with a context model and a parametric model for the entropy model of the hyperprior. In the above works, only one hyperprior network is used to estimate the entropy parameters $\mu$ and $\sigma$ . However, in the proposed model, we find that using two joint hyperprior networks to estimate the entropy parameters separately can further improve compression performance. Besides, though a context model can improve the performance, it is time-consuming during the decoding process. Thus, a context model is not included in the proposed model in order to achieve lower computational complexity.
+
+# 3. Proposed method
+
+# 3.1. Overall framework
+
+The overall framework is shown in Figure 2, where encoder $E$ , decoder $D$ , quantization net $Q^z$ , and hyperprior networks $E^{h^1}$ , $D^{h^1}$ , $E^{h^2}$ , $D^{h^2}$ are neural networks. The proposed method incorporates analysis and synthesis transform, adaptive quantization and entropy model. The analysis transform generates the latent representation of the raw image while the synthesis transform maps the quantized latent back to reconstructed image. Firstly, the analysis transform $E$ maps a block of one image $x$ to the latent representation $z$ . It is in this module that most spatial redundancy is removed. The quantization network $Q^z$ generates the quantization steps $s$ adaptively, which is then quantized to form the quantized latent $\hat{z} = Q(z;s)$ . To achieve a higher compression performance, the latent is modeled as Gaussian distribution in our entropy model and two hyperprior networks are used to estimate the entropy parameters mean $m$ and variance $v$ of the distribution, respectively. Encoder then uses estimated entropy parameters to compress and transmit the quantized latent representation $\hat{z}$ . It is worth noting that quantization steps $s$ , quantized hyperprior $\hat{h}_1$ , and $\hat{h}_2$ are also transmitted as side information. On the decoder side, The quantization steps $s$ is first recovered to decode the hyperprior $\hat{h}_1$ and $\hat{h}_2$ . The two hyperpriors are used to estimate the entropy parameters and then the estimated entropy parameters are utilized to recover the quan-
+
+
+Figure 2. Network architecture of the proposed method. $Q^z$ represents the quantization net, $Q$ represents the quantization operation. $z$ represents the full-precision latent representation of $x$ , $s$ represents the quantization step of $Q$ , $\hat{z}$ is the integer-precision value of $z / s$ . $AE$ and $AD$ represent the arithmetic encoder and arithmetic decoder respectively. $\hat{h}_1$ and $\hat{h}_2$ represent the quantized latent representation of the mean value $m$ and variance value $v$ of the Gaussian probabilistic density model $N$ , $d$ represents the pixel wise subtraction.
+
+tized $\hat{z}$ . Finally, synthesis transform maps the latent into the reconstructed image $\hat{x}$ .
+
+# 3.2. Analysis and synthesis transform
+
+To make full use of adjacent blocks and reduce the dependency between blocks, we propose a block based Long Short-Term Memory (LSTM) architecture for image compression.
+
+Figure 3 (a) presents the proposed individual block based RNN (BRNN) process for removing the redundancy between adjacent sub-blocks. The input image is first divided into non-overlapping blocks $\chi$ . The i-th block $\chi_{i}$ is then split into four sub-blocks $\chi_{i}^{T}:\{\chi_{i}^{t},\chi_{i}^{t + 1},\chi_{i}^{t + 2},\chi_{i}^{t + 3}\}$ with the size of $h\times w$ for temporal processing. The LSTM is used to process these sub-blocks recurrently in the order TopLeft $\rightarrow$ TopRight $\rightarrow$ BottomRight $\rightarrow$ BottomLeft. The redundancy between sub-blocks can be removed in this recurrent process since the previous sub-block is used as the reference for the current sub-block. In addition to redundancy removal, each block $\chi_{i}$ is mapped to its latent representation individually, which demonstrates the potential for parallel computation to accelerate the encoding and decoding process.
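+
+As a concrete illustration of this partitioning, the sketch below (an illustrative assumption, not the authors' released code) splits an image tensor into non-overlapping blocks, visits the four sub-blocks of each block in the TopLeft → TopRight → BottomRight → BottomLeft order, and feeds them through a shared recurrent step; `lstm_step` is a hypothetical stand-in for one pass of the shared LSTM.
+
+```python
+import torch
+
+def split_blocks(img, block):
+    # img: (C, H, W) tensor -> list of non-overlapping (C, block, block) blocks;
+    # H and W are assumed to be multiples of the block size.
+    c, h, w = img.shape
+    return [img[:, i:i + block, j:j + block]
+            for i in range(0, h, block) for j in range(0, w, block)]
+
+def sub_block_order(blk):
+    # TopLeft -> TopRight -> BottomRight -> BottomLeft, each of size h/2 x w/2
+    _, h, w = blk.shape
+    hh, hw = h // 2, w // 2
+    return [blk[:, :hh, :hw], blk[:, :hh, hw:], blk[:, hh:, hw:], blk[:, hh:, :hw]]
+
+def encode_block(blk, lstm_step, state):
+    # BRNN: the state only carries information from the previous sub-block of the
+    # same block, so different blocks can be encoded independently (in parallel).
+    latents = []
+    for sub in sub_block_order(blk):
+        z, state = lstm_step(sub, state)   # hypothetical shared-weight LSTM pass
+        latents.append(z)
+    return latents
+```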
+
+Figure 3 (b) shows the highly recurrent RNN (HRNN) process, which explores redundancy removal between adjacent blocks. It is similar to the process in (a) except that
+
+there are some sub-blocks $\chi_i^{t + 1}$ that take the state of previous sub-blocks $\chi_{i - m}^{t_n}(0 < m < 4,0 < n < 4)$ as the input hidden state. For example, sub-block $\chi_i^2$ is the input of $\chi_{i + 1}^{1}$ , which lies in the second block $\chi_{i + 1}$ . This method can further exploit the correlation between adjacent blocks and achieve higher performance. However, in this method, the blocks cannot be computed concurrently and thus it is slower than the method described in (a). Based on the above analysis, method (a) is adopted in our model.
+
+Let the split sub-block $\chi_i^t$ denote the input, and let $hid^t$ and $c^t$ denote the hidden and cell states, respectively. After passing through the LSTM layer, the new hidden state $hid^{t + 1}$ and cell state $c^{t + 1}$ are computed as:
+
+$$
+\left\{o^{t}, f^{t}, i^{t}, g^{t}\right\} = \mathrm{act}\left(W_{s} \otimes hid^{t} + W_{i} \otimes x^{t}\right)
+$$
+
+$$
+\begin{array}{l} c^{t+1} = f^{t} \odot c^{t} + i^{t} \odot g^{t} \tag{1} \\ hid^{t+1} = o^{t} \odot \tanh \left(c^{t+1}\right) \\ \end{array}
+$$
+
+where $\otimes$ and $\odot$ represent the convolution and element-wise multiplication operations, and $W_{i}$ and $W_{s}$ are the weights of the convolution layers $C_i$ and $C_s$ for the input components and state-to-state gates, respectively. Here $o^t,f^t,i^t,g^t$ denote the output, forget, input and content gates, respectively. act() is the activation function, which is the sigmoid function for $o^t,f^t,i^t$ and the tanh function for $g^{t}$ .
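+
+A minimal convolutional LSTM cell consistent with Eq. (1) might look as follows; this is a sketch under the stated gate layout (one $4c$-channel convolution per path, chunked into the four gates), not the authors' implementation.
+
+```python
+import torch
+import torch.nn as nn
+
+class ConvLSTMCell(nn.Module):
+    def __init__(self, in_ch, c, k=3, stride=1):
+        super().__init__()
+        # C_i: input-to-state convolution (may downsample with `stride`),
+        # C_s: state-to-state convolution; both output 4*c channels, one chunk per gate.
+        self.conv_i = nn.Conv2d(in_ch, 4 * c, k, stride=stride, padding=k // 2)
+        self.conv_s = nn.Conv2d(c, 4 * c, k, stride=1, padding=k // 2)
+
+    def forward(self, x, hid, cell):
+        # hid and cell are assumed to already be at the resolution produced by conv_i
+        gates = self.conv_s(hid) + self.conv_i(x)
+        o, f, i, g = torch.chunk(gates, 4, dim=1)
+        o, f, i = torch.sigmoid(o), torch.sigmoid(f), torch.sigmoid(i)
+        g = torch.tanh(g)
+        cell = f * cell + i * g           # Eq. (1), cell update
+        hid = o * torch.tanh(cell)        # Eq. (1), hidden update
+        return hid, cell
+```
+
+The same cell (with shared weights) would be applied to each sub-block in turn, which is what makes the per-block computation independent in the BRNN variant.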
+
+
+Figure 3. Visualization of the input-to-state and state-to-state mappings for the proposed partitions. The left shows the process of BRNN, the right shows the process of HRNN.
+
+Figure 4. For LSTM in encoder side, the k represents the kernel size for both state-to-state convolutional layer $C_s$ and input convolutional layer $C_i$ , the c represents the output channel number of the hidden cell and output cell, the channel number of $C_s$ and $C_i$ must be set to the quadruple of c, the s represents the stride number of $C_i$ . For LSTM in decoder side, the $C_i$ is set to de-convolution operation with the up-sample factor s. For the last convolution layer in encoder, the channel number M is chosen based on the $\lambda$ .
+
+The layer details of the encoder $E$ and decoder $D$ are shown in Figure 4. We denote $C_i$ as the input convolutional layer and $C_s$ as the state-to-state convolutional layer. The size of the tensors $T_i$ and $T_s$ , which are the outputs of $C_i$ and $C_s$ respectively, is $4c \times \frac{h}{2} \times \frac{w}{2}$ (where $c$ corresponds to the number of channels). After the activation operation, the tensor is split into 4 chunks, which are the inputs of the four gates, respectively. The size of each chunk is $c \times \frac{h}{2} \times \frac{w}{2}$ . It should be noted that each sub-block $\chi_i^t$ shares the same LSTM weights to ensure the invariance of the computed features. Finally, the output latent representations $Z\{z^t, z^{t+1}, z^{t+2}, z^{t+3}\}$ for each block are generated with the size of $4 \times M \times \frac{h}{16} \times \frac{w}{16}$ ( $M$ represents the output channel number of the last layer in the encoder).
+
+# 3.3. Quantization
+
+It is found that great variability exists in the latent representation across channels, which means the importance of each channel should be different. Figure 5 shows the latent map of each channel for a specific image. It can be seen that the
+
+first latent map preserves the high frequency characteristics since it preserves the details of the original images. Meanwhile, the last latent map represents the low frequency information. It can also be seen that the low frequency latent map is usually smooth and exhibits larger spatial redundancy than the high frequency latent map. In practice, the smooth regions usually require fewer coding bits. Thus, fewer bits are allocated to the low frequency features by applying a larger quantization step. On the contrary, the high frequency features need smaller quantization steps to achieve a better reconstruction quality. A quantization network is proposed to learn the quantization step adaptively in our model. As shown on the right side of Figure 5, the learned quantization steps are highly correlated with the latent feature maps.
+
+Mentzer et al. [17] use an importance map to allocate different amounts of bits to different regions. In contrast, we train the quantization step $s^i$ for each channel of the latent representation. The quantization step $s^i$ can be obtained by the quantization net $Q^z$ :
+
+$$
+s^{i} = Q^{z}\left(\tilde{z}^{i}; \theta_{q}\right) \tag{2}
+$$
+
+where the $\theta_q$ represents the weights of quantization network $Q^z$ as shown in Figure 6.
+
+The quantization of the latent representation $z$ is a challenge for end-to-end training, since the quantization operation is non-differentiable. Here we adopt Balle's quantization operation [2], in which additive uniform noise is added to the latent during training to replace the non-differentiable quantization. This relaxed quantization is denoted by $\tilde{z}^i$ . In the testing stage, the actual quantization, represented by $\hat{z}^i$ , is used. The equations are as follows:
+
+$$
+\begin{array}{l} \tilde{z}^{i} = Q\left(z^{i}, s^{i}\right) = z^{i} + \mu\left(-\frac{s^{i}}{2}, \frac{s^{i}}{2}\right) \tag{3} \\ \hat{z}^{i} = Q\left(z^{i}, s^{i}\right) = \left\lfloor \frac{z^{i}}{s^{i}} + 0.5 \right\rfloor \times s^{i} \\ \end{array}
+$$
+
+where $\mu \left(-\frac{s^i}{2},\frac{s^i}{2}\right)$ represents uniform noise ranging from $-\frac{s^i}{2}$ to $\frac{s^i}{2}$ , and $\lfloor \cdot \rfloor$ represents the floor operation.
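+
+A small sketch of Eq. (3), assuming `s` is a positive per-channel step broadcast over the latent tensor:
+
+```python
+import torch
+
+def quantize(z, s, training):
+    # z: latents (N, C, H, W); s: per-channel quantization step (1, C, 1, 1), s > 0
+    if training:
+        # additive uniform noise in [-s/2, s/2) as a differentiable proxy (Eq. (3), top)
+        return z + (torch.rand_like(z) - 0.5) * s
+    # test time: round to the nearest multiple of the step (Eq. (3), bottom)
+    return torch.floor(z / s + 0.5) * s
+```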
+
+# 3.4. Advanced multiple correlated hyperprior model
+
+To improve efficiency and reduce the dependency on context, we adopt the hyperprior model [3] to estimate the probability of the current element individually instead of using the context-based method. Furthermore, we extend the hyperprior model by analyzing the correlation among the mean $m^i$ and the variance $\sigma^i$ of the Gaussian probability density model and the latent representations $z^i$ . Two experiments are designed to control each variable. Firstly, we fix the variance $\sigma^i$ and quantization step $s$ of each latent representation, and derive the optimal mean $m^i$ for the latent $z^i$ . Then we fix the mean $m^i$ and quantization step $s$ to derive
+
+
+Figure 5. The visualization of the latent representation in each channel, and the average quantization step of the corresponding latent representation. (a) The original image. (b) The latent maps. (c) The trained quantization steps.
+
+
+Figure 6. The architecture of $Q^z$ . The weights of the fully connected layer are shared between $z_{m}$ and $z_{a}$ , which is inspired by [30]. The green arrows represent the data flow of $z_{m}$ and the blue arrows represent the data flow of $z_{a}$ . Each fully connected layer is followed by sigmoid function and the number of channels is denoted by C. For the last fully connected layer, the number of channels is equal to that of z.
+
+
+Figure 7. The correlation coefficient curve from training samples.
+
+the optimal variance $\sigma^i$ . The green line of Figure 7 presents the correlation coefficient between $m^i$ and $z^i$ , which indicates that the latent representation and the mean value are highly correlated. The red line represents the correlation coefficient between $m^i$ and $\sigma^i$ , which shows a weaker correlation than the green line. We then further analyze the correlation between $\sigma^i$ and the distance $d^i = z^i - m^i$ . As shown in Figure 7, $\sigma^i$ is much more correlated with $d^i$ than with $m^i$ . Therefore, different from previous works on estimating the entropy parameters of the Gaussian distribution, the hyperprior is split into two sub-hyperpriors: one adopts the latent representation $\hat{z}^i$ as the input to estimate the mean value, and the other estimates $\sigma^i$ based on the distance $d^i$ .
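+
+The correlation analysis above can be summarized in a few lines; the helper below is a sketch in which `z`, `m` and `sigma` are assumed to be 1-D arrays of samples collected during training.
+
+```python
+import numpy as np
+
+def hyperprior_correlations(z, m, sigma):
+    # z, m, sigma: 1-D arrays of latent samples, estimated means and standard
+    # deviations collected from training data (shapes are an assumption here).
+    d = z - m
+    return {
+        "corr(m, z)": np.corrcoef(m, z)[0, 1],          # reported as high in Figure 7
+        "corr(m, sigma)": np.corrcoef(m, sigma)[0, 1],  # reported as lower
+        "corr(d, sigma)": np.corrcoef(d, sigma)[0, 1],  # reported as high
+    }
+```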
+
+
+
+
+Figure 8. The figure on the top is the architecture of $H^{1}$ , the bottom is the architecture of $H^{2}$ . Q represents the quantization with the step $s^{i}$ trained by $Q^{z}$ . The Encoder LSTMs and Decoder LSTMs share all the weights of the LSTM layers in the encoder $E$ and decoder $D$ , respectively.
+
+We use two sub-modules $H^1$ and $H^2$ , which generate the mean $m^i$ and variance $v^i$ respectively to estimate the probability of the current latent value. For the Gaussian distribution model, as the quantization step $s$ is fixed, the maximum probability is obtained only when $\hat{z}^i = m^i$ . $H^1$ is treated as a data compression process in the same way as the encoder $E$ and decoder $D$ ; therefore we share the same LSTM weights with them to save memory allocation and training time. For $H^2$ , the element-wise distance $d(\hat{z}^i, m^i)$ between $\hat{z}^i$ and $m^i$ is taken as the input to generate the variance $v^i$ . The probability mass function can be described as:
+
+$$
+p_{\hat{z}^{i}}\left(\hat{z}^{i} \mid m^{i}, v^{i}, s^{i}\right) = \left(N\left(m^{i}, v^{i}\right) * \mu\left(-\frac{s^{i}}{2}, \frac{s^{i}}{2}\right)\right)\left(\hat{z}^{i}\right) \tag{4}
+$$
+
+
+Figure 9. The parallel procedure in decoder side. $AD$ represents the arithmetic decoding, $D$ corresponds to Decoder, $D^{h^1}$ and $D^{h^2}$ represents the decoder of the two sub-hyperprior separately. The bits for each element module contain the side information $\hat{h}_1^i$ , $\hat{h}_2^i$ , $s^i$ and the compact representation $b^i$ . $hid_i^t$ represents the hidden states of each block.
+
+which can be evaluated in the closed form. Note that the $s^i$ , $\hat{h}_1$ and $\hat{h}_2$ are all transmitted in bitstreams as the side information.
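+
+Because Eq. (4) convolves a Gaussian with the uniform quantization noise, the closed form is a difference of Gaussian CDFs at the bin edges. A sketch of this evaluation (assuming `v` is the variance and `s` the per-element quantization step; `rate_bits` is a hypothetical helper for the rate term):
+
+```python
+import torch
+from torch.distributions import Normal
+
+def likelihood(z_hat, m, v, s, eps=1e-9):
+    # p(z_hat) = Phi((z_hat - m + s/2)/sqrt(v)) - Phi((z_hat - m - s/2)/sqrt(v))
+    gauss = Normal(m, v.sqrt())
+    p = gauss.cdf(z_hat + s / 2) - gauss.cdf(z_hat - s / 2)
+    return p.clamp(min=eps)  # keep the rate term finite
+
+def rate_bits(z_hat, m, v, s):
+    # total expected code length (in bits) for the arithmetic coder
+    return -torch.log2(likelihood(z_hat, m, v, s)).sum()
+```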
+
+
+Figure 10. The RD curves of different models. The BRNN model is tested with the block size of $64 \times 64$ and $128 \times 128$ , the HRNN model is tested with the block size of $64 \times 64$ .
+
+# 3.5. Parallel procedure
+
+For the algorithm implementation on hardware, long dependencies between elements are not desirable. On the encoder side, $H^{1}$ can only start when the raw sub-block $\chi_{i}^{t}$ has been encoded to the latent representation $\hat{z}^{i}$ , and $H^{2}$ can only start when the latent representation has been encoded and decoded by $H^{1}$ to obtain $m^{i}$ . On the decoder side, all the blocks $\chi_{i}^{t}$ and modules, including the decoder $D$ and the hyperprior decoders $D^{h^{1}}$ and $D^{h^{2}}$ , can work concurrently. During the arithmetic decoding process, all elements can also be processed concurrently, since there is no dependency between them. Figure 9 shows the described parallel procedure in the decoder.
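+
+Since the latent blocks carry no mutual dependency, per-block decoding can be fanned out across workers. A minimal sketch, where `decode_block` is a hypothetical routine that turns one block's bitstream (side information plus payload) into pixels:
+
+```python
+from concurrent.futures import ThreadPoolExecutor
+
+def decode_image(block_bitstreams, decode_block, workers=4):
+    # Each entry in block_bitstreams carries one block's side information
+    # (h1, h2, quantization steps) and its payload; blocks are independent,
+    # so they can be decoded concurrently and stitched together afterwards.
+    with ThreadPoolExecutor(max_workers=workers) as pool:
+        return list(pool.map(decode_block, block_bitstreams))
+```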
+
+
+Figure 11. The RD curves of different hyperprior models.
+
+# 3.6. Loss function
+
+The goal of image compression is to generate a latent representation with the shortest bitstream which is evaluated by the entropy of quantized latent under a given distortion. The resulting objective function is shown in Equation 5:
+
+$$
+\begin{array}{l} L = \lambda \cdot d + r = \lambda \cdot \mathbb{E}_{x \sim p_{x}}[d(x, \hat{x})] + \mathbb{E}_{x \sim p_{x}}\left[-\log_{2} p_{\hat{z}}(\hat{z})\right] \\ + \mathbb{E}_{x \sim p_{x}}\left[-\log_{2} p_{\hat{h}_{2}}\left(\hat{h}_{2}\right)\right] + \mathbb{E}_{x \sim p_{x}}\left[-\log_{2} p_{\hat{h}_{1}}\left(\hat{h}_{1}\right)\right] \tag{5} \\ \end{array}
+
+where $d$ is the distortion measure between the reconstructed block $\hat{x}$ and the original block $x$ , and $r$ denotes the cost of encoding $\hat{h}_1, \hat{h}_2$ and $\hat{z}$ . The hyper-parameter $\lambda$ controls the trade-off between the distortion $d$ and the rate $r$ in the loss function during training; $\lambda$ is not a trainable variable, but a manually configured hyperparameter. It is worth noting that the rate of the quantization step $s$ is not included in Equation 5. We use context-based adaptive binary arithmetic coding (CABAC) from HEVC [29] to encode the quantization step, which is more compression-efficient. Our goal is to minimize Equation 5 over the training set $\chi$ by optimizing the encoder $E$ , decoder $D$ , quantizer net $Q^z$ and hyperpriors $E^{h^1}$ , $D^{h^1}$ , $E^{h^2}$ , $D^{h^2}$ . In this paper, we use the mean square error (MSE) and MS-SSIM to measure the distortion.
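+
+A compact sketch of the training objective in Eq. (5); the bit counts `bits_z`, `bits_h1` and `bits_h2` are assumed to come from the entropy model (e.g., a helper like `rate_bits` above), and `lam` is the hyper-parameter $\lambda$ :
+
+```python
+import torch
+import torch.nn.functional as F
+
+def rd_loss(x, x_hat, bits_z, bits_h1, bits_h2, lam):
+    # lam trades distortion against rate; bits_* are total code lengths
+    # (in bits) estimated by the entropy model for one training batch.
+    num_pixels = x.numel() / x.shape[1]          # N * H * W
+    distortion = F.mse_loss(x_hat, x)            # or (1 - MS-SSIM) for that metric
+    rate_bpp = (bits_z + bits_h1 + bits_h2) / num_pixels
+    return lam * distortion + rate_bpp
+```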
+
+# 4. Experiments
+
+We use PyTorch as the training platform on an NVIDIA Titan Xp GPU with 12 GB of memory. Our training dataset contains 30 thousand images chosen from DIV2K [23] and ILSVRC2012 [21]. Before training, each image is randomly cropped to $128 \times 128$ pixels. We use the Adam algorithm with a mini-batch size of 16 to train the proposed models. The learning rate is fixed at 1e-4. The output channel number M of the encoder is set to an integral multiple of 32, varying from 32 to 256. To enable comparison with other approaches, we present our performance on the Kodak PhotoCD [12] dataset.
+
+# 4.1. Spatial RNN architecture
+
+In this section, different neural network architectures proposed in this paper are compared. We compare the
+
+Table 1. The average time of different models with different block sizes on RGB Kodak.
+
+| Codecs | BRNN (128×128) | BRNN (64×64) | HRNN (64×64) | FCNN (128×128) |
| Encoding(ms) | 8.719 | 10.304 | 27.214 | 2.519 |
| Decoding(ms) | 7.495 | 11.355 | 34.549 | 2.541 |
+
+
+Figure 12. The RD curves of different methods for PSNR.
+
+highly recurrent RNN (HRNN), BRNN, and a fully convolutional neural network (FCNN). The architecture of FCNN is the same as the proposed network except that the RNN layers are replaced with convolution layers. Figure 10 shows the rate-distortion (RD) curves of the different models tested on the Kodak dataset. It can be seen that HRNN achieves the best performance. This is unsurprising because HRNN can fully use the information from adjacent blocks. BRNN also performs well, while FCNN performs the worst. This demonstrates that the RNN plays an important role in our model. Besides, we find that as the block size increases, the performance of BRNN tends to get worse. This may mean that an appropriate block size is important for image compression, and we set the block size to 128 in our model.
+
+Table 1 shows the average encoding and decoding time of the different models with four threads for the block module. FCNN is the fastest architecture among these models because no recurrent structure exists in FCNN. BRNN is also fast due to the parallelization between blocks, and we find that it becomes faster as the block size increases. HRNN takes the most encoding and decoding time compared to the other architectures. HRNN is slow because of the dependency between adjacent blocks: blocks must be encoded or decoded consecutively, so parallelization between blocks cannot be implemented in HRNN.
+
+In general, although the HRNN shows the best performance in Figure 10, we adopt the BRNN with the block size of $128 \times 128$ in our model by considering the efficiency in practical application.
+
+# 4.2. Performance results
+
+Figures 12 and 13 show the RD curves of our BRNN model in terms of the PSNR and MS-SSIM metrics, respectively. The state-of-the-art image codec BPG and other deep
+
+
+Figure 13. The RD curves of different methods for MS-SSIM.
+
+learning based models are compared with our model. It can be seen from the results that our model not only outperforms the existing conventional state-of-the-art image codec BPG, but also the other deep learning based methods.
+
+In the BRNN process, multiple threads can be applied to improve the coding efficiency because the recurrent operation is performed within each individual block. To simulate the multi-threaded operation, we divide input images into four non-overlapping sub-images and use four GPU cards to compress the sub-images. The average decoding time is compared with Minnen's method [3] in Table 3. As shown in Table 3, the proposed model is more efficient than Minnen [3] on the decoder side.
+
+To evaluate the improvement of our methods accurately, we compare our results with HEVC using the BD-Rate metric [20]. The BD-Rate metric represents the average percentage saving in bitrate between two methods for a given objective quality. HEVC is configured in intra high throughput mode and is tested on the HM16.14 [16] platform. As shown in Table 2, our method demonstrates a $26.73\%$ bit saving over HEVC. In Figure 14, we compare our subjective result to HEVC at a low bitrate on kodim24 from Kodak. It can be seen that the output of our network has no obvious artifacts, even though our processing is block based and has no post-processing filters like HEVC. The soft structures (like the texture on the wall) in our results are better preserved than in HEVC. We refer to the supplementary material for further visual examples.
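+
+The BD-Rate numbers in Table 2 follow Bjøntegaard's procedure [20]: fit each RD curve with a low-order polynomial in log-rate, integrate the gap over the overlapping quality range, and convert back to a percentage. A sketch under these assumptions (not the exact Excel add-in of [20]):
+
+```python
+import numpy as np
+
+def bd_rate(rate_ref, psnr_ref, rate_test, psnr_test):
+    """Average bitrate difference (%) of the test codec vs. the reference;
+    negative values mean bit savings. rate_* in bits, psnr_* in dB."""
+    lr_ref, lr_test = np.log(rate_ref), np.log(rate_test)
+    p_ref = np.polyfit(psnr_ref, lr_ref, 3)       # log-rate as a cubic in PSNR
+    p_test = np.polyfit(psnr_test, lr_test, 3)
+    lo = max(min(psnr_ref), min(psnr_test))       # overlapping quality range
+    hi = min(max(psnr_ref), max(psnr_test))
+    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
+    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
+    avg_diff = (int_test - int_ref) / (hi - lo)   # mean log-rate gap
+    return (np.exp(avg_diff) - 1) * 100.0
+```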
+
+# 4.3. Ablation study
+
+To verify the contributions of the proposed hyperprior network and adaptive quantization network, we conduct the following ablation study. We train an image compression network with a single hyperprior network, in which only one sub-hyperprior network is used to estimate the mean and vari-
+
+Table 2. The BD-Rate results compared with HM16.14.
+
+| Images | QP (I slice) | HM16.14 Bits | HM16.14 PSNR | Our Method Bits | Our Method PSNR | BD-Rate (RGB Average) |
+| Kodak | 48 | 25594.00 | 25.79 | 159248.89 | 33.45 | -26.73% |
+| | 44 | 49815.67 | 27.58 | 219397.43 | 34.63 | |
+| | 40 | 93788.33 | 29.59 | 275256.94 | 35.80 | |
+| | 36 | 168263.33 | 31.90 | 324618.71 | 36.89 | |
+| | 32 | 283362.33 | 34.43 | 366991.17 | 37.92 | |
+| | 28 | 452545.67 | 37.19 | 399079.66 | 38.76 | |
+| | 24 | 688151.67 | 40.07 | 568386.18 | 40.35 | |
+
+Table 3. The average decoding time of different codecs on RGB Kodak.
+
+| Codecs | Minnen et al. [3] | Our Method |
| Decoding(ms) | 78124.802 | 28.421 |
+
+ance of the latent. The structure of this single hyperprior network is the same as the hyperprior network $H^{1}$ described in section 3.4, except that the channel number of the last convolution layer on the decoder side is twice that of $H^{1}$ in order to obtain the estimated mean and variance. We also train the image compression model with the proposed two sub-hyperprior networks, denoted as the joint hyperprior. It should be noted that the adaptive quantization module is removed in the above models. The proposed model with two sub-hyperprior networks and adaptive quantization, denoted as the adaptive joint hyperprior, is compared to the above two models. The RD curves of these models are shown in Figure 11. The results demonstrate that with adaptive quantization and joint hyperpriors, the results are much better than with a single hyperprior at high bitrates (greater than 0.3 bpp). However, it performs worse at low bitrates.
+
+# 5. Conclusion
+
+In this paper, we propose a novel spatial recurrent neural network for end-to-end image compression. Block based LSTM is utilized in the spatial RNN to fully exploit spatial redundancy; the redundancy between both adjacent pixels and adjacent blocks is removed. Besides, an adaptive quantization step is adopted in our model, which can automatically account for the trade-off between the entropy and the distortion. Considering both performance and efficiency, two hyperprior networks are adopted to replace the context model in the proposed entropy model. Experimental results show that the proposed method outperforms state-of-the-art methods such as HEVC, BPG and other learning based compression methods. Results on decoding time show that the proposed method is efficient and demonstrate the effectiveness of the proposed parallel system.
+
+# References
+
+[1] Johannes Balle, Valero Laparra, and Eero P Simoncelli. End-to-end optimization of nonlinear transform codes for perceptual quality. In Picture Coding Symposium (PCS), 2016, pages 1-5. IEEE, 2016. 1, 2
+
+
+
+
+Figure 14. The reconstruction from the decoder generated by the adaptive joint hyperprior auto-encoder and HEVC. (a) Ours, 0.272 bpp. (b) HEVC, 0.270 bpp. (c) Original Kodak 24 image.
+
+[2] Johannes Balle, Valero Laparra, and Eero P Simoncelli. End-to-end optimized image compression. arXiv preprint arXiv:1611.01704, 2016. 2, 4
+[3] Johannes Balle, David Minnen, Saurabh Singh, Sung Jin Hwang, and Nick Johnston. Variational image compression with a scale hyperprior. arXiv preprint arXiv:1802.01436, 2018. 1, 2, 4, 7, 8
+[4] Somnath Banerjee and Vikas Arora. Webp compression study." code. google. com/speed/webp/docs/webp study. html, 2011. 1
+[5] Fabrice Bellard. Bpg image format (2017). URL http://bellard.org/bpg/.[Online, Accessed 2016-08-05]. 1, 2
+[6] J Chen and E Alshina. Algorithm description for versatile video coding and test model 1 (vtm1). In Document JVET-J1002 10th JVET Meeting, 2016. 1
+[7] Yoojin Choi, Mostafa El-Khamy, and Jungwon Lee. Variable rate deep image compression with a conditional autoencoder. In Proceedings of the IEEE International Conference on Computer Vision, pages 3146-3154, 2019. 1
+[8] Gergely Flamich, Marton Havasi, and José Miguel Hernández-Lobato. Compression without quantization, 2020. 1
+[9] Vivek K Goyal. Theoretical foundations of transform coding. IEEE Signal Processing Magazine, 18(5):9-21, 2001. 1
+[10] Feng Jiang, Wen Tao, Shaohui Liu, Jie Ren, Xun Guo, and Debin Zhao. An end-to-end compression framework based on convolutional neural networks. IEEE Transactions on Circuits and Systems for Video Technology, 2017. 1, 2
+[11] Nick Johnston, Damien Vincent, David Minnen, Michele Covell, Saurabh Singh, Troy Chinen, Sung Jin Hwang, Joel Shor, and George Toderici. Improved lossy image compression with priming and spatially adaptive bit rates for recurrent networks. structure, 10:23, 2017. 1
+[12] Eastman Kodak. Kodak lossless true color image suite (photocd pcd0992). URL http://r0k.us/graphics/kodak, 1993. 6
+[13] Jooyoung Lee, Seunghyun Cho, and Seung-Kwon Beack. Context-adaptive entropy model for end-to-end optimized image compression. In International Conference on Learning Representations, 2019. 1, 2
+[14] Xiang Li and Shihao Ji. Neural image compression and explanation, 2019. 1
+[15] I. Matsuda, K. Unno, H. Aomori, and S. Itoh. Block-based spatio-temporal prediction for video coding. In 2010 18th European Signal Processing Conference, pages 2052-2056, Aug 2010. 1
+[16] K McCann, C Rosewarne, B Bross, M Naccari, K Sharman, and G Sullivan. High efficiency video coding (hevc) encoder description v16 (hm16). JCT-VC High Efficiency Video Coding N, 14:703, 2014. 7
+[17] Fabian Mentzer, Eirikur Agustsson, Michael Tschannen, Radu Timofte, and Luc Van Gool. Conditional probability models for deep image compression. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, page 3, 2018. 1, 2, 4
+[18] David Minnen, Johannes Balle, and George D Toderici. Joint autoregressive and hierarchical priors for learned image compression. In Advances in Neural Information Processing Systems, pages 10771-10780, 2018. 1, 2
+
+[19] Ken Nakanishi, Shin ichi Maeda, Takeru Miyato, and Daisuke Okanohara. Neural multi-scale image compression, 2018. 1
+[20] Stéphane Pateux and Joel Jung. An excel add-in for computing bjontegaard metric and its evolution. ITU-T SG16 Q, 6, 2007. 7
+[21] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, and Michael Bernstein. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015. 6
+[22] Lucas Theis, Wenzhe Shi, Andrew Cunningham, and Ferenc Huszár. Lossy image compression with compressive autoencoders. arXiv preprint arXiv:1703.00395, 2017. 2
+[23] Radu Timofte, Kyoung Mu Lee, Xintao Wang, Yapeng Tian, Ke Yu, Yulun Zhang, Shixiang Wu, Chao Dong, Liang Lin, and Yu Qiao. Ntire 2017 challenge on single image superresolution: Methods and results. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 1122-1131, 2017. 6
+[24] George Toderici, Damien Vincent, Nick Johnston, Sung Jin Hwang, David Minnen, Joel Shor, and Michele Covell. Full resolution image compression with recurrent neural networks. In CVPR, pages 5435-5443, 2017. 2
+[25] G. K. Wallace. The JPEG still picture compression standard. IEEE Transactions on Consumer Electronics, 38(1), Feb 1992. 1
+[26] Zhou Wang, Alan C Bovik, Hamid R Sheikh, Eero P Simoncelli, et al. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600-612, 2004. 1
+[27] Zhou Wang, Eero P Simoncelli, and Alan C Bovik. Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, volume 2, pages 1398-1402. IEEE, 2003. 1
+[28] Maurice Weber, Cedric Renggli, Helmut Grabner, and Ce Zhang. Lossy image compression with recurrent neural networks: from human perceived visual quality to classification accuracy. 2019. 1, 2
+[29] Mathias Wien. High efficiency video coding. Coding Tools and specification, pages 133-160, 2015. 6
+[30] Sanghyun Woo, Jongchan Park, Joon Young Lee, and In So Kweon. Cbam: Convolutional block attention module. 2018. 5
\ No newline at end of file
diff --git a/aspatialrnncodecforendtoendimagecompression/images.zip b/aspatialrnncodecforendtoendimagecompression/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..4bf7ca39961f6d20bb17c5a13e5f6fe2d1a795f0
--- /dev/null
+++ b/aspatialrnncodecforendtoendimagecompression/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:129924b21695e00d23cf64d422d8e184ee24eb4303e1ec66acd6db96d3395b47
+size 652890
diff --git a/aspatialrnncodecforendtoendimagecompression/layout.json b/aspatialrnncodecforendtoendimagecompression/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..365982012b3f1f8e725596bc75acacfd9f95f2af
--- /dev/null
+++ b/aspatialrnncodecforendtoendimagecompression/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c68d57ee7fda229af615cfbbf366bd525fdcb46f8b5f8ef57822610e4076b22c
+size 441182
diff --git a/aspatiotemporalvolumetricinterpolationnetworkfor4ddynamicmedicalimage/15a87c8f-8cb9-487c-917a-1c6796f44f76_content_list.json b/aspatiotemporalvolumetricinterpolationnetworkfor4ddynamicmedicalimage/15a87c8f-8cb9-487c-917a-1c6796f44f76_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..a10918ca79792b11e1c88c7af3fea15a378f8a3e
--- /dev/null
+++ b/aspatiotemporalvolumetricinterpolationnetworkfor4ddynamicmedicalimage/15a87c8f-8cb9-487c-917a-1c6796f44f76_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ad167d5f5935bb2d0681ad646b832db4401c85aedb632578ec59350c06b8bb59
+size 68096
diff --git a/aspatiotemporalvolumetricinterpolationnetworkfor4ddynamicmedicalimage/15a87c8f-8cb9-487c-917a-1c6796f44f76_model.json b/aspatiotemporalvolumetricinterpolationnetworkfor4ddynamicmedicalimage/15a87c8f-8cb9-487c-917a-1c6796f44f76_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..f90c818eb51b9be5198634091ae1c79ba83f4d44
--- /dev/null
+++ b/aspatiotemporalvolumetricinterpolationnetworkfor4ddynamicmedicalimage/15a87c8f-8cb9-487c-917a-1c6796f44f76_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:75c9698fa197a865303293721e3596092a58d031e0e4dc649e67eb566f83ff9e
+size 84352
diff --git a/aspatiotemporalvolumetricinterpolationnetworkfor4ddynamicmedicalimage/15a87c8f-8cb9-487c-917a-1c6796f44f76_origin.pdf b/aspatiotemporalvolumetricinterpolationnetworkfor4ddynamicmedicalimage/15a87c8f-8cb9-487c-917a-1c6796f44f76_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..c8611ae860cd0796d182fa05392cb2a71f99b73a
--- /dev/null
+++ b/aspatiotemporalvolumetricinterpolationnetworkfor4ddynamicmedicalimage/15a87c8f-8cb9-487c-917a-1c6796f44f76_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:612ca06b53a5038b6f4c582e638230a8fb23f61fcae29a97a4452f424390e877
+size 1581834
diff --git a/aspatiotemporalvolumetricinterpolationnetworkfor4ddynamicmedicalimage/full.md b/aspatiotemporalvolumetricinterpolationnetworkfor4ddynamicmedicalimage/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..721a22bb37179860d00e1db1ac6d7cae7b7f965e
--- /dev/null
+++ b/aspatiotemporalvolumetricinterpolationnetworkfor4ddynamicmedicalimage/full.md
@@ -0,0 +1,321 @@
+# A Spatiotemporal Volumetric Interpolation Network for 4D Dynamic Medical Image
+
+Yuyu Guo $^{1,2}$ , Lei Bi $^{2}$ , Euijoon Ahn $^{2}$ , Dagan Feng $^{2}$ , Qian Wang $^{1*}$ , and Jinman Kim $^{2*}$
+
+$^{1}$ Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, China
+
+$^{2}$ School of Computer Science, University of Sydney, Australia
+
+{yu.quo, wang.qian}@sjtu.edu.cn, {lei.bi, euijoon.ahn, dagan.feng, jinman.kim}@sydney.edu.au
+
+# Abstract
+
+Dynamic medical images are often limited in their application due to the large radiation doses and longer image scanning and reconstruction times. Existing methods attempt to reduce the volume samples in the dynamic sequence by interpolating the volumes between the acquired samples. However, these methods are limited to either 2D images and/or are unable to support large but periodic variations in the functional motion between the image volume samples. In this paper, we present a spatiotemporal volumetric interpolation network (SVIN) designed for 4D dynamic medical images. SVIN introduces dual networks: the first is the spatiotemporal motion network that leverages the 3D convolutional neural network (CNN) for unsupervised parametric volumetric registration to derive a spatiotemporal motion field from a pair of image volumes; the second is the sequential volumetric interpolation network, which uses the derived motion field to interpolate image volumes, together with a new regression-based module to characterize the periodic motion cycles in functional organ structures. We also introduce an adaptive multi-scale architecture to capture large volumetric anatomical motions. Experimental results demonstrated that our SVIN outperformed state-of-the-art temporal medical interpolation methods and a natural video interpolation method that has been extended to support volumetric images. Code is available at1.
+
+# 1. Introduction
+
+Dynamic medical imaging modalities enable the examination of functional and mechanical properties of the human body and are used for clinical applications, e.g., four
+
+dimensional (4D) computed tomography (CT) for respiratory organ motion modelling [33], 4D magnetic resonance (MR) imaging for functional heart analysis [9], and 4D ultrasound (US) for echocardiography analysis [40]. These 4D modalities have high spatial (volumetric) and temporal (time sequence) sampling rate to capture the periodic motion cycles of organ activities, and this information is used for clinical decision making. However, the acquisition of these dynamic images requires larger radiation doses which may cause harm to humans, and longer image scanning and reconstruction times; these factors limit the use of 4D imaging modalities to broader clinical applications [32, 10].
+
+
+Figure 1. The cardiac motions in two-time phases: End-Systole (ES) and End-Diastole (ED). The red bounding boxes highlight the heart structure. All images are shown in transaxial views, cropped in varying scales, to enlarge the heart.
+
+To mitigate these factors, reducing the temporal sampling has been widely employed but this compromises valuable temporal information [23, 15]. In these approaches, the intermediary temporal frames can be used to improve visual inspection, quantitative modelling (e.g., dynamic motion
+
+trajectory) and accurate interpretation / diagnosis. These benefits apply to various clinical applications, e.g., motion compensation in image-guided therapy [37], organ motion modeling [1] and alignment or fusion of 4D multi-modal images [1, 40]. Such interpolation methods are reliant on either non-rigid registration [6, 28, 40] or optical flow-based [30, 19] algorithms. Non-rigid registration approaches calculate the dense image volume correspondences that occur from one volume to another, and then uses the calculated correspondences to generate the intermediary volumes. Such approaches, however, often generate artifacts or fuzzy boundaries and do not perform well when the variations in anatomy or organ activity (e.g., size and shape) are large. An alternative approach was to use optical flow-based methods (using deep learning) [19, 38] to estimate a dense motion (i.e., deformation) field between image pairs. However, these methods were limited to 2D image interpolation and therefore did not utilize the rich spatial information inherent in medical image volumes. They are also limited when the motion between the image sequences are not in linear trajectory and are not changing in a constant velocity. Therefore, these approaches are not applicable to volumetric temporal imaging modalities that exhibit large non-linear motions in spatiotemporal space.
+
+In this paper we propose a spatiotemporal volumetric interpolation network (SVIN) designed for 4D dynamic medical images. To the best of our knowledge, this is the first deep learning-based method for 4D dynamic medical image interpolation. An overview of our model is illustrated in Fig. 2 which comprises of two main networks. Our first spatiotemporal motion network leverages the 3D convolutional neural network (CNN) for unsupervised parametric volumetric registration to derive spatiotemporal motion field from two-image volumes. In the second sequential volumetric interpolation network, the derived motion field is used to interpolate the image volume, together with a new regression-based module to characterize the periodic motion cycles in functional organ structures. We also propose an adaptive multi-scale architecture that learns the spatial and appearance deformation in multiple volumes to capture large motion characteristics. We demonstrate the application of our method on cardiac motion interpolation, which is acquired using both 4D CT and 4D MR images. These images are characterized by twisting action during contraction to relaxation of the heart structure, and has complex changes in muscle morphology, as depicted in Fig. 1. Our method was used to increase the temporal resolution in both the CT and MR image volumes. We evaluate our method in comparison to the state-of-the-art interpolation method. We further conducted an ablation study to demonstrate the effectiveness of our motion network.
+
+# 2. Related Works
+
+We partition the related works into three categories relevant to our research: (1) dynamic medical image interpolation; (2) spatiotemporal motion field calculation for medical images; and (3) natural video interpolation approaches.
+
+# 2.1. Dynamic medical image interpolation
+
+Many existing medical image interpolation methods rely upon optical flow-based or non-rigid registration methods to generate a linearly interpolated image by averaging pixel values between adjacent image sequences [6, 22, 30, 28, 40, 37]. For instance, Ehrhardt et al. [14] presented an optical flow-based method to establish spatial correspondence between adjacent slices of cardiac temporal images. Zhang et al. [40] used a non-rigid registration-based method to synthesize echocardiography and cardiovascular MR image sequences. The main advantage of these approaches is that they track the spatiotemporal motion field, in a pixel-wise manner, between neighboring images to estimate the interpolation. However, they assume that the spatiotemporal motion between adjacent images follows a linear trajectory, and thus disregard the complex, non-linear motions apparent in functional organ structures. Recently, two CNN-based methods for temporal interpolation of MR images via motion fields were proposed by Zhang et al. [25] and Karani et al. [19]. They achieved outstanding performance compared with previous works. However, these methods did not support full 3D volumetric information and did not perform well when there were large variations in motion.
+
+# 2.2. Learning spatiotemporal motion fields from volume image sequence
+
+Many studies used deformable medical image registration techniques to estimate the motion field between input image sequences. Deformable medical image registration techniques can be divided into two categories: non-learning-based [36, 2, 20, 5, 13] and learning-based methods [21, 39, 35]. Typical non-learning-based approaches are free-form deformations with B-splines [2], Demons [36] and ANTs [3]. These approaches optimize displacement vector fields by calculating the similarity of the topological structures. Deep learning-based methods have, in recent years, used labelled spatiotemporal motion field data and shown great performance [21, 39, 35]. However, their performance depends on the availability of large-scale labelled data. To address this, several unsupervised methods were proposed to predict the spatiotemporal motion field [12, 24, 4]. Although these methods demonstrated promising results, [12] and [24] were only applicable to patch-based volumes or 2D slices. Balakrishnan et al. [4] recently developed VoxelMorph, a CNN that uses full 3D volumetric information.
+
+
+Figure 2. An overview of the proposed method, which contains a motion network and an interpolation network. An adaptive multi-scale architecture is used in both the motion and interpolation networks to capture large motions. A regression module is integrated in our interpolation network to constrain the intermediate motion field.
+
+However, VoxelMorph was not designed for dynamic image sequences that exhibit large variations in motion.
+
+# 2.3. Natural video interpolation approaches
+
+Video interpolation is an active research area for natural scenes, e.g., in model-based tracking, patch identification and matching, and frame-rate upsampling [17, 11, 27, 29]. Niklaus et al. [31] developed a spatially-adaptive convolution kernel to estimate the motion of each pixel. Liu et al. [26] divided frame interpolation into two steps, optical flow estimation and image interpolation; their network learns from an input pair of consecutive frames in an unsupervised manner and then refines the interpolation based on the outputs of the estimation. Jiang et al. [18] presented Slomo, a technique which interpolates frame motion by linearly combining bi-directional optical flows and then further refines the estimated motion flow field through an end-to-end CNN. Recently, Peleg et al. [34] presented a multi-scale structured neural network architecture to better capture local details in high-resolution frames. However, applying these methods to dynamic medical image interpolation is challenging because the temporal sampling in medical image volume sequences is much lower than in natural scene videos. In addition, the deformation and visual differences in dynamic medical images are comparatively more complex and non-trivial than in natural scene videos.
+
+# 3. Proposed Method
+
+Let $\{I_T, T = 1,2\ldots,N\}$ be a sequence of volumetric images representing the cardiac motion from the end-diastole (ED) $(T = 1)$ to the end-systole (ES) $(T = N)$ phase, and let $\{I_i, I_j \mid (i,j) \in T\}$ be a pair of cardiac images at two random time points within the cardiac motion. Our aim is to interpolate the intermediate image $I_t$, $(t \in T)$. In this work, we use the images at the ED (denoted $I_{ED}$) and ES (denoted $I_{ES}$) phases to interpolate the complete cardiac motion. $\{\phi_{i \rightarrow j}, \phi_{j \rightarrow i}\}$ denotes the bi-directional motion fields between $I_i$ and $I_j$.
+
+Fig. 2 shows the overall proposed method. Initially, the spatiotemporal motion network learns and captures the bi-directional motion fields between $I_{ED}$ and $I_{ES}$ in an unsupervised manner. Two linearly interpolated intermediate images are then coarsely created using the learned spatiotemporal motion fields $\phi_{ED\rightarrow ES}$ and $\phi_{ES\rightarrow ED}$. Using the coarsely interpolated intermediate images and their corresponding deformation fields, we further refine the coarse intermediate images with the volumetric interpolation network, where a regression-based module constrains the interpolation to follow the patterns of cardiac biological motion. Both the volumetric motion estimation and interpolation networks use an adaptive multi-scale architecture that captures various types of motions - both small and large volume spatiotemporal deformations (see Figs. 2 and 3).
+
+
+Figure 3. The architecture of our spatiotemporal volumetric motion network with an adaptive multi-scale architecture.
+
+# 3.1. Spatiotemporal volumetric motion field estimation
+
+Fig. 3 presents the architecture of the 3D CNN for spatiotemporal motion field estimation. We estimate a motion field that represents the voxel-wise motion flow between volume images at two individual time points. This can be represented as a function $D_{\theta}(I_i, I_j) = \phi_{i \leftrightarrow j}(\Delta x, \Delta y, \Delta z)$, where $\phi_{i \leftrightarrow j}(\Delta x, \Delta y, \Delta z)$ indicates the vectors that represent the movement in 3D space and $\theta$ are the learnable parameters of the network. We use an encoder-decoder architecture with skip connections to generate $\phi_{i \leftrightarrow j}$ given $I_i$ and $I_j$.
+
+To produce a volumetric motion field that can cover various types of deformations, we propose an adaptive multi-scale architecture that embeds both global and local learning. More specifically, for global learning, the motion field estimation network focuses on large deformations: volumetric images at a low scale ignore local details, while more detailed information is covered when the volumetric image is at a high scale. In addition, the global deformation from the low scale is integrated into the high scale, which reduces the difficulty of learning and constrains the network to pay more attention to detailed deformation. Our deformation field can be defined as:
+
+$$
+\phi_{i \rightarrow j} = D_{\theta}\left(I_{i}, I_{j}\right) \quad \text{or} \quad \phi_{j \rightarrow i} = D_{\theta}\left(I_{j}, I_{i}\right) \tag{1}
+$$
+
+$$
+I_{j} = \zeta\left(I_{i} \mid \phi_{i \rightarrow j}\right) \quad \text{or} \quad I_{i} = \zeta\left(I_{j} \mid \phi_{j \rightarrow i}\right) \tag{2}
+$$
+
+where $\zeta(I_i|\phi_{i\rightarrow j})$ represents the image warped by the spatial vector field $\phi_{i\rightarrow j}$ with bilinear interpolation.
+
+For training our motion field estimation network, we use an image-wise similarity loss and a motion field smoothness regularization loss within the adaptive multi-scale network architecture (as shown in Fig. 3). Given the network outputs $\phi_{i\rightarrow j}^{c}$, $c = 1,2,3$, where $c$ indexes the volumetric image scale (we use 3 different scales in total), we define the motion field smoothness regularization loss as:
+
+$$
+\mathcal{L}_{\phi}\left(D_{\theta}\left(I_{i}, I_{j}\right)\right) = \sum_{c=1}^{3} \left\| \nabla \phi_{i \rightarrow j}^{c} \right\|_{1} \tag{3}
+$$
+
+where $\nabla$ is the gradient operator. The image-wise similarity loss is adapted from VoxelMorph [4] and is defined as:
+
+$$
+\mathcal{L}_{s}\left(\zeta\left(I_{i} \mid \phi_{i \rightarrow j}\right), I_{j}\right) = \sum_{c=1}^{3} \left\| \zeta\left(I_{i}^{c} \mid \phi_{i \rightarrow j}^{c}\right) - I_{j}^{c} \right\|_{2} \tag{4}
+$$
+
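+A minimal PyTorch sketch of the warping operator $\zeta$ and the two motion-network losses (Eqs. 3 and 4, at a single scale) is given below. The tensor shapes ((B, C, D, H, W) volumes, 3-channel displacement fields in voxel units with (x, y, z) channel order), the use of trilinear grid sampling, and the averaging of the norms are our own assumptions for illustration, not the authors' released implementation.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def warp(volume, flow):
+    """Warp a volume (B, C, D, H, W) by a dense displacement field
+    flow (B, 3, D, H, W), assumed to be in voxel units with (x, y, z) channels."""
+    B, _, D, H, W = volume.shape
+    # Identity sampling grid in normalized [-1, 1] coordinates, (x, y, z) order.
+    zz, yy, xx = torch.meshgrid(
+        torch.linspace(-1, 1, D), torch.linspace(-1, 1, H),
+        torch.linspace(-1, 1, W), indexing="ij")
+    grid = torch.stack((xx, yy, zz), dim=-1).to(volume)            # (D, H, W, 3)
+    grid = grid.unsqueeze(0).expand(B, -1, -1, -1, -1)             # (B, D, H, W, 3)
+    # Convert voxel displacements to normalized coordinate offsets.
+    scale = torch.tensor([2.0 / (W - 1), 2.0 / (H - 1), 2.0 / (D - 1)]).to(volume)
+    offset = flow.permute(0, 2, 3, 4, 1) * scale                   # (B, D, H, W, 3)
+    return F.grid_sample(volume, grid + offset, align_corners=True)
+
+def smoothness_loss(flow):
+    """L1 norm of the spatial gradient of the flow field (Eq. 3, one scale)."""
+    dz = (flow[:, :, 1:] - flow[:, :, :-1]).abs().mean()
+    dy = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean()
+    dx = (flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]).abs().mean()
+    return dx + dy + dz
+
+def similarity_loss(warped, target):
+    """L2 image-wise similarity (Eq. 4, one scale)."""
+    return ((warped - target) ** 2).mean()
+```
+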
+# 3.2. Sequential volumetric interpolation network
+
+Based on the derived deformation fields $\phi_{ED\rightarrow ES}$ and $\phi_{ES\rightarrow ED}$, we use a linear interpolation approach to synthesize the intermediate deformation fields as follows:
+
+$$
+\phi_{ED \rightarrow t} = t\, \phi_{ED \rightarrow ES} \tag{5}
+$$
+
+$$
+\phi_{ES \rightarrow t} = (1 - t)\, \phi_{ES \rightarrow ED} \tag{6}
+$$
+
+Based on Eqs. 1 and 2, the linearly interpolated intermediate image $\tilde{I}_{t}$ can then be approximated as:
+
+$$
+\tilde{I}_{t} = (1 - t)\, \zeta\left(I_{ED} \mid \phi_{ED \rightarrow t}\right) + t\, \zeta\left(I_{ES} \mid \phi_{ES \rightarrow t}\right) \tag{7}
+$$
+
+To improve the bi-directional consistency, Eqs. 5 and 6 can be modified as follows:
+
+$$
+\phi_{ED \rightarrow t} = t(1 - t)\, \phi_{ED \rightarrow ES} - t^{2}\, \zeta\left(\phi_{ES \rightarrow ED} \mid \phi_{ES \rightarrow ED}\right) \tag{8}
+$$
+
+$$
+\phi_{ES \rightarrow t} = -(1 - t)^{2}\, \zeta\left(\phi_{ED \rightarrow ES} \mid \phi_{ED \rightarrow ES}\right) + t(1 - t)\, \phi_{ES \rightarrow ED} \tag{9}
+$$
+
+In addition, we introduce a hyper-weight map $\gamma$ to balance the importance of the deformations from the two directions (forward and backward); it is defined such that:
+
+$$
+\gamma_{ES} = 1 - \gamma_{ED} \tag{10}
+$$
+
+Thus, the linear image interpolation based on bidirectional deformation and $\gamma$ can be defined as:
+
+$$
+\tilde{I}_{t} = (1 - t)\, \gamma_{ED}\, \zeta\left(I_{ED} \mid \phi_{ED \rightarrow t}\right) + t\, \gamma_{ES}\, \zeta\left(I_{ES} \mid \phi_{ES \rightarrow t}\right) \tag{11}
+$$
+
+As shown on the right side of Fig. 2, we use an adaptive multi-scale network architecture to ensure that the synthesized intermediate volumetric images have a high spatiotemporal resolution.
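+
+As a rough illustration (not the authors' released code), the linear synthesis of the intermediate fields and the $\gamma$-weighted fusion of Eqs. 8-11 can be sketched as follows, reusing the hypothetical `warp` helper from the earlier sketch:
+
+```python
+def intermediate_fields(phi_ed_es, phi_es_ed, t):
+    """Approximate the intermediate deformation fields of Eqs. 8 and 9."""
+    phi_ed_t = t * (1 - t) * phi_ed_es - t ** 2 * warp(phi_es_ed, phi_es_ed)
+    phi_es_t = -(1 - t) ** 2 * warp(phi_ed_es, phi_ed_es) + t * (1 - t) * phi_es_ed
+    return phi_ed_t, phi_es_t
+
+def linear_interpolation(i_ed, i_es, phi_ed_t, phi_es_t, gamma_ed, t):
+    """Coarse bi-directional linear interpolation of the image at time t (Eqs. 10-11)."""
+    gamma_es = 1.0 - gamma_ed                          # Eq. 10
+    return ((1 - t) * gamma_ed * warp(i_ed, phi_ed_t)
+            + t * gamma_es * warp(i_es, phi_es_t))
+```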
+
+# 3.3. Regression-based module for interpolation constraints
+
+Since most biological movements have a relatively fixed motion pattern, especially cardiac motion [8], we present a regression-based module to model the relationship between the cardiac motion over the cardiac cycle and the time phase (as shown in Fig. 4). Specifically, we build a regression model representing the population-based cardiac motion vector, which indicates the shape variability at each individual time point. The population-based cardiac motion at each time point is then used to constrain the appearance of the synthetic intermediate volumetric images. Our regression estimate $R_{\theta}$ of the time point $\tilde{t}$ is defined as:
+
+$$
+\tilde{t} = R_{\theta}\left(\phi_{ED \rightarrow t} - t\, \phi_{ED \rightarrow ES},\ \phi_{ES \rightarrow t} - (1 - t)\, \phi_{ES \rightarrow ED}\right) \tag{12}
+$$
+
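+A minimal sketch of a regression head $R_{\theta}$ that maps the two residual deformation fields of Eq. 12 to a scalar time phase is shown below; the layer sizes and the sigmoid output are our own illustrative choices, not the architecture reported in the paper.
+
+```python
+import torch
+import torch.nn as nn
+
+class TimePhaseRegressor(nn.Module):
+    """Predict the time phase t~ in [0, 1] from the two residual deformation fields."""
+    def __init__(self, channels=6):
+        super().__init__()
+        self.net = nn.Sequential(
+            nn.Conv3d(channels, 16, kernel_size=3, stride=2, padding=1),
+            nn.ReLU(inplace=True),
+            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1),
+            nn.ReLU(inplace=True),
+            nn.AdaptiveAvgPool3d(1),
+            nn.Flatten(),
+            nn.Linear(32, 1),
+            nn.Sigmoid(),
+        )
+
+    def forward(self, res_ed, res_es):
+        # res_ed = phi_ED->t - t * phi_ED->ES,  res_es = phi_ES->t - (1 - t) * phi_ES->ED
+        return self.net(torch.cat((res_ed, res_es), dim=1)).squeeze(1)
+```
+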
+# 3.4. Training details for volumetric interpolation
+
+For training our sequential volumetric interpolation network, the loss function is defined as the sum of an image-wise similarity loss $\mathcal{L}_{\text{similar}}$, a regression loss $\mathcal{L}_r$ and a regularization loss $\mathcal{L}_g$:
+
+$$
+\mathcal{L} = \lambda_{s} \mathcal{L}_{\text{similar}} + \lambda_{r} \mathcal{L}_{r} + \lambda_{g} \mathcal{L}_{g} \tag{13}
+$$
+
+where the image-wise similarity loss $\mathcal{L}_{\text{similar}}$ evaluates the similarity between the predicted synthetic intermediate images and the real intermediate images at multiple image scales and is defined as:
+
+
+Figure 4. Illustration of the left ventricle (LV) volume change during the cardiac contraction period. The brown curve shows the real motion flow of the LV, and the blue dashed line shows the simple linear assumption. The blue and green points represent the intermediate time points.
+
+$$
+\mathcal{L}_{\text{similar}} = \sum_{c=1}^{3} \sum_{k=1}^{N} \left\| \tilde{I}_{t_{k}}^{c} - I_{t_{k}}^{c} \right\|_{2} \tag{14}
+$$
+
+where $\sum_{c=1}^{3}(\cdot)$ denotes a 3-scale volumetric image loss, $\{I_{t_k}\}_{k=1}^{N}$ represents the real intermediate volumetric images and $\{\tilde{I}_{t_k}\}_{k=1}^{N}$ the predicted synthetic intermediate volumetric images. The regression loss $\mathcal{L}_r$ is defined as the difference between the estimated and true time phases at each time point:
+
+$$
+\mathcal{L}_{r} = \sum_{k=1}^{N} \left\| \tilde{t}_{k} - t_{k} \right\|_{1} \tag{15}
+$$
+
+The regularization loss $\mathcal{L}_{g}$ constrains the predicted motions to be consistent in both directions and is defined as:
+
+$$
+\mathcal{L}_{g} = \sum_{c=1}^{3} \left\| \nabla \phi_{ED \rightarrow t}^{c} + \nabla \phi_{ES \rightarrow t}^{c} \right\|_{1} \tag{16}
+$$
+
+The weights $\lambda_r = 1$, $\lambda_s = 500$ and $\lambda_g = 50$ were set empirically using a validation set.
+
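+For illustration only (the pyramid handling and helper names are ours, not the released code), the total objective of Eq. 13 could be assembled as follows, reusing the hypothetical `smoothness_loss` helper from the earlier sketch:
+
+```python
+def interpolation_loss(pred_pyramid, target_pyramid, t_pred, t_true,
+                       phi_ed_t_pyramid, phi_es_t_pyramid,
+                       lambda_s=500.0, lambda_r=1.0, lambda_g=50.0):
+    """Total loss of Eq. 13: multi-scale similarity (Eq. 14), time-phase
+    regression (Eq. 15) and bi-directional consistency regularization (Eq. 16)."""
+    l_similar = sum(((p - g) ** 2).mean()
+                    for p, g in zip(pred_pyramid, target_pyramid))
+    l_r = (t_pred - t_true).abs().mean()
+    # grad(phi_ED->t) + grad(phi_ES->t) equals grad(phi_ED->t + phi_ES->t)
+    l_g = sum(smoothness_loss(fe + fs)
+              for fe, fs in zip(phi_ed_t_pyramid, phi_es_t_pyramid))
+    return lambda_s * l_similar + lambda_r * l_r + lambda_g * l_g
+```
+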
+# 4. Experiments
+
+# 4.1. Materials and implementation details
+
+We demonstrate our method on two datasets: 4D Cardiac CT (4D-C-CT) and ACDC (4D-MR cardiac cine or tagged MR imaging) [7]. Fig. 5 shows a snapshot of randomly sampled cardiac sequence volume slices. The 4D-C-CT dataset consists of 18 patients, each with 5 time points (image volumes) from ED to ES. Each image volume has a high resolution ranging from 0.32 to 0.45 mm intra-slice (x- and y-resolution) and from 0.37 mm to 0.82 mm inter-slice (z-resolution). The ACDC dataset contains 100 patients. On average, each patient has 10.93 time points from ED to ES, with an imaging resolution from 1.37 to 1.68 mm in x- and y-resolution and 5 to 10 mm in z-resolution. All 4D-C-CT scans were resampled to a $128\times 128\times 96$ grid and the resulting images were cropped to $96\times 96\times 96$. For the ACDC dataset, we resampled all scans to $160\times 160\times 10$ and zero-padded the z-axis to $160\times 160\times 12$ to reduce the border effects of 3D convolution. We randomly selected 80 patients for training and 20 for testing and applied contrast normalization to both datasets, consistent with other similar research [16].
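+
+For illustration, the resampling, cropping and padding described above could be implemented as follows (a sketch under our own assumptions: tensors of shape (1, 1, D, H, W), trilinear resampling and centre cropping, none of which are specified by the paper beyond the target sizes):
+
+```python
+import torch.nn.functional as F
+
+def preprocess_4dcct(volume):
+    """Resample a 4D-C-CT volume (1, 1, D, H, W) to 96x128x128 (z, y, x),
+    then centre-crop the in-plane dimensions to obtain 96x96x96."""
+    v = F.interpolate(volume, size=(96, 128, 128), mode="trilinear", align_corners=False)
+    return v[:, :, :, 16:112, 16:112]
+
+def preprocess_acdc(volume):
+    """Resample an ACDC volume to 10x160x160 and zero-pad the z-axis to 12 slices."""
+    v = F.interpolate(volume, size=(10, 160, 160), mode="trilinear", align_corners=False)
+    return F.pad(v, (0, 0, 0, 0, 1, 1))    # pad depth (z) by one slice on each side
+```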
+
+We implemented all the networks using the PyTorch library and trained them on two 11GB Nvidia 1080Ti GPUs. All models were trained with a learning rate of 0.0001. In all our evaluations, we used 3-fold cross-validation on both datasets.
+
+
+Figure 5. A snapshot of our training data showing the 4D-C-CT dataset (CT, left) and the ACDC dataset (MRI, right).
+
+
+Figure 6. Comparison of spatiotemporal volumetric motion estimation results. The intensity image is warped using the estimated spatiotemporal motion field. The red curve represents the real segmentation results while the green curve shows the warped segmentation results (indicated by the yellow arrows). The red arrows indicate some organ boundaries.
+
+# 4.2. Evaluation and metrics
+
+In order to evaluate the two networks in our SVIN, we conducted an ablation study. For the unsupervised spatiotemporal motion network, we compared it with the state-of-the-art CNN-based deformable medical image registration method, VoxelMorph [4]. For the interpolation network, state-of-the-art image interpolation methods were used in the comparison, including (i) RVLI [40] - registration-based volume linear interpolation for medical images, (ii) MFIN [25] - CNN-based medical image interpolation (2D slice-based), and (iii) Slomo - natural video interpolation [18] in 2D, as well as its extension to medical image volumes (3D-Slomo). For image volume interpolation, we interpolated 3 intermediate volumes between the ED and ES frames (see Fig. 5), evenly distributed across the time points.
+
+We used the standard image interpolation evaluation metrics, including Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Mean Squared Error (MSE), and Normalized Root Mean Square Error (NRMSE). We used the same evaluation metrics for the spatiotemporal motion field estimation, consistent with other medical image registration approaches [25]. In addition, we used the Dice Similarity Coefficient (DSC) to measure the usefulness of our interpolation in medical imaging applications.
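+
+For reference, minimal NumPy implementations of these metrics are sketched below; the peak value used for PSNR and the normalization used for NRMSE are our own assumptions, as the paper does not specify them.
+
+```python
+import numpy as np
+
+def psnr(pred, gt):
+    mse = np.mean((pred - gt) ** 2)
+    return 10.0 * np.log10((gt.max() ** 2) / mse)     # peak taken as max ground-truth intensity
+
+def nrmse(pred, gt):
+    return np.sqrt(np.mean((pred - gt) ** 2)) / (gt.max() - gt.min())
+
+def dice(seg_a, seg_b):
+    """DSC between two binary segmentation masks."""
+    inter = np.logical_and(seg_a, seg_b).sum()
+    return 2.0 * inter / (seg_a.sum() + seg_b.sum())
+```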
+
+# 5. Results and Discussion
+
+# 5.1. Ablation study - spatiotemporal volumetric motion field estimation
+
+The results of motion field estimation on the two datasets, 4D-C-CT and ACDC, are shown in Tables 1 and 2.
+
+
+(Figure panel labels: warped $I_{1}$ from $I_{0}$ with label; pair images with label; warped $I_{0}$ from $I_{1}$ with label.)
+
+Table 1. The performance of spatiotemporal motion field estimation on the 4D-C-CT dataset.
+
+| Method | MSE ($10^{-2}$) | PSNR | NRMSE | SSIM | DSC |
+| --- | --- | --- | --- | --- | --- |
+| VoxelMorph | 0.787 | 27.10 | 0.276 | 0.807 | 0.880 |
+| Ours | 0.197 | 33.17 | 0.138 | 0.918 | 0.944 |
+
+Table 2. The performance of spatiotemporal motion field estimation on the ACDC dataset.
+
+| Method | MSE ($10^{-1}$) | PSNR | NRMSE | SSIM | DSC |
+| --- | --- | --- | --- | --- | --- |
+| VoxelMorph | 0.194 | 38.06 | 0.132 | 0.912 | 0.920 |
+| Ours | 0.168 | 38.93 | 0.121 | 0.914 | 0.936 |
+
+Our results show that the motion estimation network with our adaptive architecture outperforms the recent VoxelMorph [4] across all metrics on the 4D-C-CT dataset, achieving a PSNR of 33.176, NRMSE of 0.1388, SSIM of 0.9185 and MSE of 0.00197. Similarly, it also had better scores across all metrics on the ACDC dataset. Our motion estimation architecture achieved larger improvements over VoxelMorph on the 4D-C-CT dataset than on the ACDC dataset. We attribute this to our robust multi-scale adaptive 3D CNN, which can effectively learn both large and small variations in motion.
+
+Fig. 6 shows the synthesized volumes based on the derived motion field and their corresponding warped segmentation results. It clearly shows that the warped segmentation results from the motion field learnt by our motion architecture are more similar to the ground truth.
+
+# 5.2. Comparison with the state-of-the-art interpolation methods
+
+Tables 3 and 4 present the interpolation results at different time points from ED to ES on the 4D-C-CT and ACDC datasets, respectively.
+
+
+Figure 7. Qualitative results for two 4D-C-CT samples (top two rows: first sample; bottom two rows: second sample). The leftmost column shows the paired input volumes (ED and ES) and the rightmost column shows the real intermediate volume. The remaining columns show the interpolated intermediary volumes reconstructed using the different approaches.
+
+As expected, the results show that the intermediate volumes at later time points achieve better performance. This is because the earlier time points have larger motion variations, which lowers their accuracy.
+
+Table 3. Multi-volume cardiac sequence interpolation results on the 4D-C-CT dataset.
+
+| Time point | MSE ($10^{-2}$) | PSNR | NRMSE | SSIM |
+| --- | --- | --- | --- | --- |
+| 1st-point | 0.45 | 29.45 | 0.211 | 0.830 |
+| 2nd-point | 0.43 | 29.47 | 0.210 | 0.825 |
+| 3rd-point | 0.28 | 31.52 | 0.165 | 0.863 |
+
+Table 4. Multi-volume cardiac sequence interpolation results on the ACDC dataset.
+
+| Time point | MSE ($10^{-2}$) | PSNR | NRMSE | SSIM |
+| --- | --- | --- | --- | --- |
+| 1st-point | 1.22 | 39.34 | 0.109 | 0.934 |
+| 2nd-point | 0.95 | 40.42 | 0.087 | 0.950 |
+| 3rd-point | 0.28 | 45.86 | 0.052 | 0.977 |
+
+The comparative quantitative results for volume interpolation are shown in Tables 5 and 6. SVIN outperformed all the other state-of-the-art interpolation methods on the 4D-C-CT dataset across all measures.
+
+Similarly, it also had the best scores across all metrics on the ACDC dataset. We attribute this to our adaptive multi-scale architecture, which captures the various types of motions, and to our regression-based module, which effectively constrains the intermediate volumetric motions and learns the relevant inherent functional motion patterns (see Figs. 7 and 8). Our results show that RVLI was the closest to our results. However, RVLI was not able to accurately interpolate the volumes when there were artifacts, as evident in Figs. 7 and 8. MFIN and Slomo also did not consider full 3D volumetric information, i.e., they were limited to 2D space, which contributed to their lower scores. As expected, our implemented 3D-Slomo produced better results than the 2D methods. The 3D-Slomo, however, was not able to accurately synthesize clear organ boundaries or estimate the motion trajectory when there were large changes in cardiac activity (see Fig. 7).
+
+Table 5. Performance comparisons on the 4D-C-CT dataset.
+
+| Method | MSE ($10^{-2}$) | PSNR | NRMSE | SSIM | DSC |
+| --- | --- | --- | --- | --- | --- |
+| MFIN | 1.06 | 26.84 | 0.308 | 0.709 | 0.844 |
+| Slomo | 1.13 | 26.52 | 0.308 | 0.704 | 0.839 |
+| 3D-Slomo | 0.92 | 26.33 | 0.303 | 0.713 | 0.872 |
+| RVLI | 0.54 | 28.70 | 0.237 | 0.806 | - |
+| Ours | 0.39 | 30.15 | 0.196 | 0.840 | 0.917 |
+
+
+Figure 8. Qualitative results of two samples from the ACDC dataset. The leftmost column shows the paired input volumes (ED and ES) and the rightmost column shows the real intermediate volume. The remaining columns show the interpolated intermediary volumes of the different approaches.
+
+Table 6. Performance comparisons on the ACDC dataset.
+
+| Method | MSE ($10^{-1}$) | PSNR | NRMSE | SSIM |
+| --- | --- | --- | --- | --- |
+| MFIN | 1.082 | 30.69 | 0.309 | 0.607 |
+| Slomo | 1.001 | 31.08 | 0.296 | 0.630 |
+| 3D-Slomo | 0.341 | 35.27 | 0.178 | 0.845 |
+| RVLI | 0.331 | 35.66 | 0.173 | 0.860 |
+| Ours | 0.081 | 41.87 | 0.085 | 0.953 |
+
+# 6. Conclusion
+
+# 6.1. Summary
+
+We presented a novel interpolation method for 4D dynamic medical images. Our proposed two-stage network was designed to exploit volumetric medical images that exhibit large variations between the motion sequences. Our SVIN outperformed state-of-the-art temporal medical interpolation methods as well as natural video interpolation methods that have been extended to support volumetric images. Our ablation study further showed that the motion network within our SVIN better represents large functional motion than state-of-the-art unsupervised medical registration methods.
+
+# 6.2. Extensions
+
+We discussed our multi-scale architecture for learning the spatial appearance volume at different scales to retain the spatial information for volume synthesis. Rather than learning a spatial transform model, as future work, we will apply our architecture to other volume synthesis tasks.
+
+We leveraged a regression-based constraint module to explore the potential 'rules' associated with functional motion. This could be extended to other 4D volumetric medical imaging tasks that exhibit periodic cyclic motion.
+
+Although we demonstrated our SVIN model on cardiac imaging modalities, our method is not restricted to them. We suggest that our method is broadly applicable to other 4D medical images, as well as to non-medical image volume interpolation problems where the motion field can be modeled.
+
+# References
+
+[1] Puyol-Anton, et al. A multimodal spatiotemporal cardiac motion atlas from mr and ultrasound data. Medical image analysis, 40:96-110, 2017.
+[2] John Ashburner. A fast diffeomorphic image registration algorithm. Neuroimage, 38(1):95-113, 2007.
+[3] Brian B Avants, Nicholas J Tustison, Gang Song, Philip A Cook, Arno Klein, and James C Gee. A reproducible evaluation of ants similarity metric performance in brain image registration. Neuroimage, 54(3):2033-2044, 2011.
+[4] Guha Balakrishnan, Amy Zhao, Mert R Sabuncu, John Guttag, and Adrian V Dalca. An unsupervised learning model for deformable medical image registration. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 9252-9260, 2018.
+[5] Serdar K Balci, Polina Golland, Martha Elizabeth Shenton, and William Mercer Wells. Free-form b-spline deformation model for groupwise registration. Med Image Comput Comput Assist Interv, 2007.
+[6] Christian F Baumgartner, Christoph Kolbitsch, Jamie R McClelland, Daniel Rueckert, and Andrew P King. Groupwise simultaneous manifold alignment for high-resolution dynamic mr imaging of respiratory motion. In International Conference on Information Processing in Medical Imaging, pages 232-243. Springer, 2013.
+[7] Olivier Bernard, Alain Lalande, Clement Zotti, Frederick Cervenansky, Xin Yang, Pheng-Ann Heng, Irem Cetin, Karim Lekadir, Oscar Camara, Miguel Angel Gonzalez Ballester, et al. Deep learning techniques for automatic mri cardiac multi-structures segmentation and diagnosis: Is the problem solved? IEEE transactions on medical imaging, 37(11):2514-2525, 2018.
+[8] Benedetta Biffi, Jan L Bruse, Maria A Zuluaga, Hopewell N Ntsinjana, Andrew M Taylor, and Silvia Schievano. Investigating cardiac motion patterns using synthetic high-resolution 3d cardiovascular magnetic resonance images and statistical shape analysis. Frontiers in pediatrics, 5:34, 2017.
+[9] Axel Bornstedt, Eike Nagel, Simon Schalla, Bernhard Schnackenburg, Christoph Klein, and Eckart Fleck. Multislice dynamic imaging: Complete functional cardiac mr examination within 15 seconds. Journal of Magnetic Resonance Imaging: An Official Journal of the International Society for Magnetic Resonance in Medicine, 14(3):300-305, 2001.
+[10] Federico Canè, Benedict Verhegghe, Matthieu De Beule, Philippe B Bertrand, Rob J Van der Geest, Patrick Segers, and Gianluca De Santis. From 4d medical images (ct, mri, and ultrasound) to 4d structured mesh models of the left ventricular endocardium for patient-specific simulations. BioMed research international, 2018, 2018.
+[11] Byeong-Doo Choi, Jong-Woo Han, Chang-Su Kim, and Sung-Jea Ko. Motion-compensated frame interpolation using bilateral motion estimation and adaptive overlapped block motion compensation. IEEE Transactions on Circuits and Systems for Video Technology, 17(4):407-416, 2007.
+[12] Bob D de Vos, Floris F Berendsen, Max A Viergever, Marius Staring, and Ivana Isgum. End-to-end unsupervised deformable image registration with a convolutional neural network. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pages 204-212. Springer, 2017.
+[13] Ian L Dryden. Shape analysis. Wiley StatsRef: Statistics Reference Online, 2014.
+[14] Jan Ehrhardt, Dennis Säring, and Heinz Handels. Optical flow based interpolation of temporal image sequences. In Medical Imaging 2006: Image Processing, volume 6144, page 61442K. International Society for Optics and Photonics, 2006.
+[15] Kieren Grant Hollingsworth. Reducing acquisition time in clinical mri by data undersampling and compressed sensing reconstruction. Physics in Medicine & Biology, 60(21):R297, 2015.
+[16] Yeonggul Jang, Yoonmi Hong, Seongmin Ha, Sekeun Kim, and Hyuk-Jae Chang. Automatic segmentation of lv and rv in cardiac mri. In International Workshop on Statistical Atlases and Computational Models of the Heart, pages 161-169. Springer, 2017.
+[17] Bo-Won Jeon, Gun-Ill Lee, Sung-Hee Lee, and Rae-Hong Park. Coarse-to-fine frame interpolation for frame rate upconversion using pyramid structure. IEEE Transactions on Consumer Electronics, 49(3):499-508, 2003.
+[18] Huaizu Jiang, Deqing Sun, Varun Jampani, Ming-Hsuan Yang, Erik Learned-Miller, and Jan Kautz. Super slomo: High quality estimation of multiple intermediate frames for video interpolation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9000-9008, 2018.
+[19] Neerav Karani, Christine Tanner, Sebastian Kozerke, and Ender Konukoglu. Temporal interpolation of abdominal mris acquired during free-breathing. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 359–367. Springer, 2017.
+[20] Arno Klein, Jesper Andersson, Babak A Ardekani, John Ashburner, Brian Avants, Ming-Chang Chiang, Gary E Christensen, D Louis Collins, James Gee, Pierre Hellier, et al. Evaluation of 14 nonlinear deformation algorithms applied to human brain mri registration. Neuroimage, 46(3):786-802, 2009.
+[21] Julian Krebs, Tommaso Mansi, Herve Delingette, Li Zhang, Florin C Ghesu, Shun Miao, Andreas K Maier, Nicholas Ayache, Rui Liao, and Ali Kamen. Robust non-rigid registration through agent-based action learning. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 344–352. Springer, 2017.
+[22] Gun-Ill Lee, Rae-Hong Park, Young-Seuk Song, Cheol-An Kim, and Jae-Sub Hwang. Real-time 3d ultrasound fetal image enhancement techniques using motion-compensated frame rate up-conversion. In Medical Imaging 2003: Ultrasonic Imaging and Signal Processing, volume 5035, pages 375–385. International Society for Optics and Photonics, 2003.
+[23] Guang Li, Deborah Citrin, Kevin Camphausen, Boris Mueller, Chandra Burman, Borys Mychalczak, Robert W Miller, and Yulin Song. Advances in 4d medical imaging and 4d radiation therapy. Technology in cancer research & treatment, 7(1):67-81, 2008.
+[24] Hongming Li and Yong Fan. Non-rigid image registration using fully convolutional networks with deep self-supervision. arXiv preprint arXiv:1709.00799, 2017.
+[25] Lin Zhang, Neerav Karani, Christine Tanner, and Ender Konukoglu. Temporal interpolation via motion field prediction. 2018.
+[26] Ziwei Liu, Raymond A Yeh, Xiaou Tang, Yiming Liu, and Aseem Agarwala. Video frame synthesis using deep voxel flow. In Proceedings of the IEEE International Conference on Computer Vision, pages 4463-4471, 2017.
+[27] Gucan Long, Laurent Kneip, Jose M Alvarez, Hongdong Li, Xiaohu Zhang, and Qifeng Yu. Learning image matching by simply watching video. In European Conference on Computer Vision, pages 434-450. Springer, 2016.
+[28] Coert T Metz, Stefan Klein, Michiel Schaap, Theo van Walsum, and Wiro J Niessen. Nonrigid registration of dynamic medical imaging data using nd+ t b-splines and a groupwise optimization approach. Medical image analysis, 15(2):238-249, 2011.
+[29] Simone Meyer, Abdelaziz Djelouah, Brian McWilliams, Alexander Sorkine-Hornung, Markus Gross, and Christopher Schroers. Phasenet for video frame interpolation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 498-507, 2018.
+[30] Tae-Jin Nam, Rae-Hong Park, and Jae-Ho Yun. Optical flow based frame interpolation of ultrasound images. In International Conference Image Analysis and Recognition, pages 792-803. Springer, 2006.
+[31] Simon Niklaus, Long Mai, and Feng Liu. Video frame interpolation via adaptive separable convolution. In Proceedings of the IEEE International Conference on Computer Vision, pages 261-270, 2017.
+[32] Yoshiharu Ohno, Hiroto Hatabu, Daisuke Takenaka, Shuji Adachi, Michio Kono, and Kazuro Sugimura. Solitary pulmonary nodules: potential role of dynamic mr imaging in management—initial experience. Radiology, 224(2):503-511, 2002.
+[33] Tinsu Pan, Ting-Yim Lee, Eike Rietzel, and George TY Chen. 4d-ct imaging of a volume influenced by respiratory motion on multi-slice ct. Medical physics, 31(2):333-340, 2004.
+[34] Tomer Peleg, Pablo Szekely, Doron Sabo, and Omry Sendik. Im-net for high resolution video frame interpolation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2398-2407, 2019.
+[35] Hessam Sokooti, Bob de Vos, Floris Berendsen, Boudewijn PF Lelieveldt, Ivana Isgum, and Marius Staring. Nonrigid image registration using multi-scale 3d convolutional neural networks. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 232-239. Springer, 2017.
+[36] J-P Thirion. Image matching as a diffusion process: an analogy with maxwell's demons. Medical image analysis, 2(3):243-260, 1998.
+[37] Erik Tryggestad, Aaron Flammang, Sarah Han-Oh, Russell Hales, Joseph Herman, Todd McNutt, Teboh Roland, Steven M Shea, and John Wong. Respiration-based sorting of dynamic mri to derive representative 4d-mri for radiotherapy planning. Medical physics, 40(5):051909, 2013.
+[38] Wenjun Yan, Yuanyuan Wang, Zeju Li, Rob J Van Der Geest, and Qian Tao. Left ventricle segmentation via optical-flow-net from short-axis cine mri: preserving the temporal coherence of cardiac motion. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 613-621. Springer, 2018.
+[39] Xiao Yang, Roland Kwitt, Martin Styner, and Marc Niethammer. Quicksilver: Fast predictive image registration-a deep learning approach. NeuroImage, 158:378-396, 2017.
+[40] Weiwei Zhang, J Michael Brady, Harald Becher, and J Alison Noble. Spatio-temporal (2d+t) non-rigid registration of real-time 3d echocardiography and cardiovascular mr image sequences. Physics in Medicine & Biology, 56(5):1341, 2011.
\ No newline at end of file
diff --git a/aspatiotemporalvolumetricinterpolationnetworkfor4ddynamicmedicalimage/images.zip b/aspatiotemporalvolumetricinterpolationnetworkfor4ddynamicmedicalimage/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..770df03cb74cfe40295b77e16f4183dc0dbf4a81
--- /dev/null
+++ b/aspatiotemporalvolumetricinterpolationnetworkfor4ddynamicmedicalimage/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4e6a426de00fe3074fb23937b72ddf05d045370ce658c3422955cbbdb421eae5
+size 657474
diff --git a/aspatiotemporalvolumetricinterpolationnetworkfor4ddynamicmedicalimage/layout.json b/aspatiotemporalvolumetricinterpolationnetworkfor4ddynamicmedicalimage/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..9b6beb37991e961423a722866ff4221c516c8aad
--- /dev/null
+++ b/aspatiotemporalvolumetricinterpolationnetworkfor4ddynamicmedicalimage/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4f46710b33e376f3a13ff92a03f27ed2240b22eddac381b073e18735dde2dddb
+size 333341
diff --git a/astochasticconditioningschemefordiversehumanmotionprediction/0800977c-278d-4a6d-af03-eaa83b276475_content_list.json b/astochasticconditioningschemefordiversehumanmotionprediction/0800977c-278d-4a6d-af03-eaa83b276475_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..905f7b6c962d63d593a2e5d543a6c4dd6c0d1234
--- /dev/null
+++ b/astochasticconditioningschemefordiversehumanmotionprediction/0800977c-278d-4a6d-af03-eaa83b276475_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8d0fa2860af75cf2b5b2e77f097a5234c014135687b441d49a77a382540954e4
+size 83044
diff --git a/astochasticconditioningschemefordiversehumanmotionprediction/0800977c-278d-4a6d-af03-eaa83b276475_model.json b/astochasticconditioningschemefordiversehumanmotionprediction/0800977c-278d-4a6d-af03-eaa83b276475_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..6713ea43a0059000221ce37a64486de2cfb0e252
--- /dev/null
+++ b/astochasticconditioningschemefordiversehumanmotionprediction/0800977c-278d-4a6d-af03-eaa83b276475_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fb7e827c332cd0f961c443d0a23abf85101a9db6ed980bc62684c993d4ff77e1
+size 97257
diff --git a/astochasticconditioningschemefordiversehumanmotionprediction/0800977c-278d-4a6d-af03-eaa83b276475_origin.pdf b/astochasticconditioningschemefordiversehumanmotionprediction/0800977c-278d-4a6d-af03-eaa83b276475_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..66ab4390630234a780e7a710332e603f2fba57eb
--- /dev/null
+++ b/astochasticconditioningschemefordiversehumanmotionprediction/0800977c-278d-4a6d-af03-eaa83b276475_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6d4499a1ac0efb39cac9f7f150c72c510b8656917fe99ee26895ced52b0f6553
+size 703280
diff --git a/astochasticconditioningschemefordiversehumanmotionprediction/full.md b/astochasticconditioningschemefordiversehumanmotionprediction/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..a7aa4b94a752037e61db457e713140ce822d53bc
--- /dev/null
+++ b/astochasticconditioningschemefordiversehumanmotionprediction/full.md
@@ -0,0 +1,264 @@
+# A Stochastic Conditioning Scheme for Diverse Human Motion Prediction*
+
+Sadegh Aliakbarian $^{1,2,4}$, Fatemeh Sadat Saleh $^{1,2}$, Mathieu Salzmann $^{3}$, Lars Petersson $^{1,4}$, Stephen Gould $^{1,2}$ $^{1}$ Australian National University, $^{2}$ ACRV, $^{3}$ CVLab, EPFL, $^{4}$ Data61, CSIRO
+
+{fname.lname}@anu.edu.au, mathieu.salzmann@epfl.ch, lars.petersson@data61.csiro.au https://mix-and-match.github.io/
+
+# Abstract
+
+Human motion prediction, the task of predicting future 3D human poses given a sequence of observed ones, has been mostly treated as a deterministic problem. However, human motion is a stochastic process: Given an observed sequence of poses, multiple future motions are plausible. Existing approaches to modeling this stochasticity typically combine a random noise vector with information about the previous poses. This combination, however, is done in a deterministic manner, which gives the network the flexibility to learn to ignore the random noise. Alternatively, in this paper, we propose to stochastically combine the root of variations with previous pose information, so as to force the model to take the noise into account. We exploit this idea for motion prediction by incorporating it into a recurrent encoder-decoder network with a conditional variational autoencoder block that learns to exploit the perturbations. Our experiments on two large-scale motion prediction datasets demonstrate that our model yields high-quality pose sequences that are much more diverse than those from state-of-the-art stochastic motion prediction techniques.
+
+# 1. Introduction
+
+Human motion prediction aims to forecast the sequence of future poses of a person given past observations of such poses. To achieve this, existing methods typically rely on recurrent neural networks (RNNs) that encode the person's motion [28, 14, 37, 22, 5, 31, 32]. While they predict reasonable motions, RNNs are deterministic models and thus cannot account for the highly stochastic nature of human motion; given the beginning of a sequence, multiple, diverse futures are plausible. To correctly model this, it is therefore critical to develop algorithms that can learn the multiple modes of human motion, even when presented with only deterministic training samples.
+
+*This research was supported by the Australian Government through the Australian Research Council (ARC).
+
+Equal contribution.
+
+Recently, several attempts have been made at modeling the stochastic nature of human motion [40, 5, 37, 22, 26]. These methods rely on sampling a random vector that is then combined with an encoding of the observed pose sequence. In essence, this combination is similar to the conditioning of generative networks; the resulting models aim to generate an output from a random vector while taking into account additional information about the content.
+
+While standard conditioning strategies, i.e., concatenating the condition to the latent variable, may be effective for many tasks, as in [41, 21, 11, 10, 4, 23], they are ill-suited for motion prediction. The reason is the following: In other tasks, the conditioning variable only provides auxiliary information about the output to produce, such as the fact that a generated face should be smiling. By contrast, in motion prediction, it typically contains the core signal to produce the output, i.e., the information about the previous poses. We empirically observed that, since the prediction model is trained using deterministic samples (i.e., one condition per sample), it can then simply learn to ignore the random vector and still produce a meaningful output based on the conditioning variable only. In other words, the model can ignore the root of variations, and thus essentially become deterministic. This problem was discussed in [6] in the context of unconditional text generation, and we identified it in our own motion prediction experiments.
+
+We introduce a simple yet effective approach to counteracting this loss of diversity and thus to generating truly diverse future pose sequences. At the heart of our approach lies the idea of Mix-and-Match perturbations: Instead of combining a noise vector with the conditioning variables in a deterministic manner, we randomly select and perturb a subset of these variables. By randomly changing this subset at every iteration, our strategy prevents training from identifying the root of variations and forces the model to take it into account in the generation process. Consequently, as supported by our experiments, our approach produces not only high-quality predictions but also truly diverse ones.
+
+In short, our contributions are (i) a novel way of imposing diversity into conditional VAEs, called Mix-and-Match perturbations; (ii) a new motion prediction model capable of generating multiple likely future pose sequences from an observed motion; (iii) a new set of evaluation metrics for quantitatively measuring the quality and the diversity of generated motions, thus facilitating the comparison of different stochastic approaches; and (iv) a curriculum learning paradigm for training generative models that use Mix-and-Match perturbation as the stochastic conditioning scheme. Despite its simplicity, curriculum learning of variation is essential to achieve optimal performance in the case of large variations.
+
+# 2. Related Work
+
+Deterministic Motion Prediction. Most motion prediction approaches are based on deterministic models [32, 31, 14, 18, 28, 15, 12, 13, 27], casting motion prediction as a regression task where only one outcome is possible given the observations. Due to the success of RNN-based methods at modeling sequence-to-sequence learning problems, many attempts have been made to address motion prediction within a recurrent framework [28, 14, 37, 22, 5, 31, 32]. Typically, these approaches try to learn a mapping from the observed sequence of poses to the future sequence. Another group of studies addresses this problem with feed-forward models [27, 24, 7], either fully-connected [7], convolutional [24], or, more recently, graph neural networks [27]. While a deterministic approach may produce accurate predictions, it fails to reflect the stochastic nature of human motion, where multiple plausible outcomes can be highly likely for a single given series of observations. Modeling this diversity is the topic of this paper, and we therefore focus the discussion below on the other methods that have attempted to do so.
+
+Stochastic Motion Prediction. The general trend to incorporate variations in the predicted motions consists of combining information about the observed pose sequence with a random vector. In this context, two types of approaches have been studied: The techniques that directly incorporate the random vector into the RNN decoder, e.g., as in GANs, and those that make use of an additional Conditional Variational Autoencoder (CVAE) [36] to learn a latent variable that acts as the root of variation.
+
+In the first class of methods, [26] sample a random vector $z_{t} \sim \mathcal{N}(0,I)$ at each time step and add it to the pose input to the RNN decoder. By relying on different random vectors at each time step, however, this strategy is prone to generating discontinuous motions. To overcome this, [22] make use of a single random vector to generate the entire sequence. This vector is both employed to alter the initialization of the decoder and concatenated with a pose embedding at each iteration of the RNN. By relying on concatenation as a means to fuse the condition and the random vector, these two methods contain parameters that are specific to the random vector, and thus give the model the flexibility to ignore this information.
+
+In [5], instead of using concatenation, the random vector is added to the hidden state produced by the RNN encoder. While addition prevents having parameters that are specific to the random vector, this vector is first transformed by multiplication with a parameter matrix, and thus can again be zeroed out so as to remove the source of diversity, as we observe empirically in Section 4.2.
+
+The second category of stochastic methods introduces an additional CVAE between the RNN encoder and decoder. This allows them to learn a more meaningful transformation of the noise, combined with the conditioning variables, before passing the resulting information to the RNN decoder. In this context, [37] propose to directly use the pose as the conditioning variable. As will be shown in our experiments, while this approach is able to maintain some degree of diversity, albeit less than ours, it yields motions of lower quality because of its use of independent random vectors at each time step. In [8], an approach similar to that of [37] is proposed, but with one CVAE per limb. As such, this method suffers from the same discontinuity problem as [37, 26]. Finally, instead of perturbing the pose, the recent work of [40] uses the RNN decoder hidden state as the conditioning variable in the CVAE, concatenating it with the random vector. While this approach generates high-quality motions, it suffers from the fact that the CVAE decoder gives the model the flexibility to ignore the random vector.
+
+Ultimately, both classes of methods suffer from the fact that they allow the model to ignore the random vector, thus relying entirely on the conditioning information to generate future poses. Here, we introduce an effective way to maintain the root of diversity by randomizing the combination of the random vector with the conditioning variable.
+
+# 3. Proposed Method
+
+In this section, we first introduce our Mix-and-Match approach to introducing diversity in CVAE-based motion prediction. We then describe the motion prediction architecture we used in our experiments and propose a novel evaluation metric to quantitatively measure the diversity and quality of generated motions.
+
+# 3.1. Mix-and-Match Perturbation
+
+The main limitation of prior work in the area of stochastic motion modeling, such as [37, 5, 40], lies in the way they fuse the random vector with the conditioning variable, i.e., the RNN hidden state or the pose, which causes the model to learn to ignore the randomness and solely exploit the deterministic conditioning information to generate motion. To overcome this, we propose to make it harder for the model to decouple the random variable from the deterministic information. Specifically, we observe that the way the random variable and the conditioning one are combined in existing methods is deterministic.
+
+
+Figure 1. Mix-and-Match perturbation. (Top) Illustration of the Sampling operation (left) and of the Resampling one (right). Given a sampling rate $\alpha$ and a vector length $L$ , the Sampling operation samples $\lceil \alpha L \rceil$ indices, say $\mathcal{I}$ . The complementary, unsampled indices are denoted by $\overline{\mathcal{I}}$ . Then, given two $L$ -dimensional vectors and the corresponding $\lceil \alpha L \rceil$ and $\lfloor (1 - \alpha)L \rfloor$ indices, the Resampling operation mixes the two vectors to form a new $L$ -dimensional one. (Middle) Example of Mix-and-Match perturbation. (Bottom) Example of perturbation by concatenation, as in [40]. Note that, in Mix-and-Match perturbations, sampling is stochastic; the indices are sampled uniformly randomly for each mini-batch. By contrast, in [40], sampling is deterministic, and the indices in $\mathcal{I}$ are fixed and correspond to $\mathcal{I} = \{1,\dots,\frac{L}{2}\}$ .
+
+We therefore propose to make this process stochastic.
+
+Similarly to [40], we propose to make use of the hidden state as the conditioning variable and generate a perturbed hidden state by combining a part of the original hidden state with the random vector. However, as illustrated in Fig. 1, instead of assigning predefined, deterministic indices to each piece of information, such as the first half for the hidden state and the second one for the random vector, we assign the values of the hidden state to random indices and the random vector to the complementary ones.
+
+More specifically, as depicted in Fig. 1, a mix-and-match perturbation takes two vectors of size $L$ as input, say $h_t$ and $z$, and combines them in a stochastic manner. To this end, it relies on two operations. The first one, called Sampling, chooses $\lceil \alpha L \rceil$ indices uniformly at random among the $L$ possible values, given a sampling rate $0 \leq \alpha \leq 1$. Let us denote by $\mathcal{I} \subseteq \{1, \dots, L\}$ the resulting set of indices and by $\bar{\mathcal{I}}$ the complementary set. The second operation, called Resampling, then creates a new $L$-dimensional vector whose values at the indices in $\mathcal{I}$ are taken from the first input vector and whose values at the complementary indices in $\bar{\mathcal{I}}$, of dimension $\lfloor (1 - \alpha)L \rfloor$, are taken from the second input vector.
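+
+A minimal sketch of the Sampling and Resampling operations (our own illustrative Python/PyTorch code, operating on batched $L$-dimensional vectors) is given below:
+
+```python
+import math
+import torch
+
+def sample_indices(L, alpha):
+    """Sampling: draw ceil(alpha * L) indices uniformly at random (per mini-batch)."""
+    k = math.ceil(alpha * L)
+    perm = torch.randperm(L)
+    return perm[:k], perm[k:]              # I and its complement I-bar
+
+def resample(a, b, idx, idx_bar):
+    """Resampling: build an L-dim vector taking values of a at idx and of b at idx_bar."""
+    out = torch.empty_like(a)
+    out[..., idx] = a[..., idx]
+    out[..., idx_bar] = b[..., idx_bar]
+    return out
+```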
+
+# 3.2. M&M Perturbation for Motion Prediction
+
+Let us now describe the way we use our mix-and-match perturbation strategy for motion prediction. To this end, we first discuss the network we rely on during inference, and then explain our training strategy.
+
+Inference. The high-level architecture we use at inference time is depicted in Fig. 2 (Top). It consists of an RNN encoder that takes $t$ poses $x_{1:t}$ as input and outputs an $L$-dimensional hidden vector $h_t$. A random $\lceil \alpha L \rceil$-dimensional portion of this hidden vector, $h_t^{\mathcal{I}}$, is then combined with an $\lfloor (1 - \alpha)L\rfloor$-dimensional random vector $z \sim \mathcal{N}(0,I)$ via our mix-and-match perturbation strategy. The resulting $L$-dimensional output is passed through a small neural network (i.e., ResBlock2 in Fig. 2) that reduces its size to $\lceil \alpha L \rceil$, and then fused with the remaining $\lfloor (1 - \alpha)L\rfloor$-dimensional portion of the hidden state, $h_t^{\bar{\mathcal{I}}}$. This, in turn, is passed through the VAE decoder to produce the final hidden state $h_z$, from which the future poses $x_{t + 1:T}$ are obtained via the RNN decoder.
+
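+Putting these pieces together, one inference step can be sketched as follows; the module names, signatures and the simplified fusion are placeholders (the actual model additionally uses ResBlock2 and the unperturbed portion $h_t^{\bar{\mathcal{I}}}$, as described above), and the sketch reuses the hypothetical `sample_indices` and `resample` helpers from the earlier sketch:
+
+```python
+import torch
+
+def predict_future(x_past, rnn_encoder, cvae_decoder, rnn_decoder, alpha=0.5):
+    """Hypothetical inference step: perturb the encoder hidden state with noise,
+    decode it through the CVAE, then roll out future poses with the RNN decoder."""
+    h_t = rnn_encoder(x_past)                        # (B, L) hidden state of the past
+    z = torch.randn_like(h_t)                        # z ~ N(0, I)
+    idx, idx_bar = sample_indices(h_t.size(-1), alpha)
+    h_perturbed = resample(h_t, z, idx, idx_bar)     # Mix-and-Match perturbation
+    h_z = cvae_decoder(h_perturbed, h_t)             # CVAE decoding, conditioned on h_t
+    return rnn_decoder(h_z)                          # predicted poses x_{t+1:T}
+```
+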
+Training. During training, we aim to learn both the RNN parameters and the CVAE ones. Because the CVAE is an autoencoder, it needs to take as input information about future poses. To this end, we complement our inference architecture with an additional RNN future encoder, yielding the training architecture depicted in Fig. 2 (Bottom). Note that, in this architecture, we incorporate an additional mix-and-match perturbation that fuses the hidden state of the RNN past encoder $h_t$ with that of the RNN future encoder $h_T$ and forms $h_{tT}^p$ . This allows us to condition the VAE encoder in a manner similar to the decoder. Note that, for each mini batch, we use the same set of sampled indices for all mix-and-match perturbation steps throughout the network. Furthermore, following the standard CVAE strategy, during training, the random vector $z_p$ is sampled from the approximate posterior distribution $\mathcal{N}(\mu_\theta(x), \Sigma_\theta(x))$ , whose mean $\mu_\theta(x)$ and covariance matrix $\Sigma_\theta(x)$ are produced by the CVAE encoder with parameters $\theta$ . This, in practice, is done by the reparameterization technique [20]. Note that, during inference, $z_p = \epsilon \sim \mathcal{N}(0, I)$ since we do not have access to $x$ , hence to $\mu_\theta(x)$ and $\Sigma_\theta(x)$ .
+
+To learn the parameters of our model, we rely on the availability of a dataset $D = \{X_{1}, X_{2}, \ldots, X_{N}\}$ containing $N$ videos $X_{i}$ depicting a human performing an action. Each video consists of a sequence of $T$ poses, $X_{i} = \{x_{i}^{1}, x_{i}^{2}, \ldots, x_{i}^{T}\}$ , and each pose comprises $J$ joints forming a skeleton, $x_{i}^{t} = \{x_{i,1}^{t}, x_{i,2}^{t}, \ldots, x_{i,J}^{t}\}$ . The pose of each joint is represented as a 4D quaternion. Given this data, we train our model by minimizing a loss function of the form
+
+$$
+\mathcal{L} = \frac{1}{N} \sum_{i=1}^{N} \left( \mathcal{L}_{\text{rot}}\left(X_{i}\right) + \mathcal{L}_{\text{skl}}\left(X_{i}\right) \right) + \lambda \mathcal{L}_{\text{prior}}. \tag{1}
+$$
+
+The first term in this loss compares the output of the network with the ground-truth motion using the squared loss. That is,
+
+$$
+\mathcal{L}_{\text{rot}}\left(X_{i}\right) = -\sum_{k=t+1}^{T} \sum_{j=1}^{J} \left\| \hat{x}_{i,j}^{k} - x_{i,j}^{k} \right\|^{2}, \tag{2}
+$$
+
+where $\hat{x}_{i,j}^{k}$ is the predicted 4D quaternion for the $j^{th}$ joint at time $k$ in sample $i$, and $x_{i,j}^{k}$ the corresponding ground-truth one.
+
+
+
+
+Figure 2. Overview of our approach. (Top): Overview of the model during inference. During inference, given past information and a random vector sampled from a Normal distribution, the model generates new motions. (Bottom): Overview of the model during training. During training, we use a future pose autoencoder with a CVAE between the encoder and the decoder. The RNN encoder-decoder network mapping the past to the future then aims to generate good conditioning variables for the CVAE.
+
+
+Figure 3. Example of curriculum perturbation of the hidden state.
+
+The main weakness of this loss is that it treats all joints equally. However, when working with angles, some joints have a much larger influence on the pose than others. For example, because of the kinematic chain, the pose of the shoulder affects that of the rest of the arm, whereas the pose of the wrists has only a minor effect. To take this into account, we define our second loss term as the error in 3D space. That is,
+
+$$
+\mathcal{L}_{\text{skl}}\left(X_{i}\right) = -\sum_{k=t+1}^{T} \sum_{j=1}^{J} \left\| \hat{p}_{i,j}^{k} - p_{i,j}^{k} \right\|^{2}, \tag{3}
+$$
+
+where $\hat{p}_{i,j}^{k}$ is the predicted 3D position of joint $j$ at time $k$ in sample $i$ and $p_{i,j}^{k}$ the corresponding ground-truth one. These 3D positions can be computed using forward kinematics, as in [32, 31]. Note that, to compute this loss, we first perform a global alignment of the predicted pose and the ground-truth one by rotating the root joint to face [0, 0, 0]. Finally, following standard practice in training VAEs, we define our third loss term as the KL divergence
+
+$$
+\begin{array}{l} \mathcal{L}_{\text{prior}} = -KL\left(\mathcal{N}\left(\mu_{\theta}(x), \Sigma_{\theta}(x)\right) \| \mathcal{N}(0, I)\right) \\ = -\frac{1}{2} \sum_{j=1}^{d}\left(1 + \log\left(\sigma_{\theta}(x)_{j}^{2}\right) - \mu_{\theta}(x)_{j}^{2} - \sigma_{\theta}(x)_{j}^{2}\right). \tag{4} \\ \end{array}
+$$
+
+where $\Sigma_{\theta}(x) = \text{diag}(\sigma_{\theta}(x)^2)I$ and $d$ is the length of the diagonal of the covariance matrix. In practice, since our VAE appears within a recurrent model, we weigh $\mathcal{L}_{prior}$ by a function $\lambda$ corresponding to the KL annealing weight of [6]. We start from $\lambda = 0$ , forcing the model to encode as much information in $z$ as possible, and gradually increase it to $\lambda = 1$ , following a logistic curve.
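+
+A compact sketch of this prior term (written here with the standard sign convention, i.e., as the KL divergence that is minimized) and of a logistic annealing schedule for $\lambda$ is given below; the schedule's steepness `k` and midpoint `x0` are our own illustrative choices.
+
+```python
+import math
+import torch
+
+def kl_prior(mu, log_var):
+    """KL divergence between N(mu, diag(exp(log_var))) and N(0, I), batch-averaged."""
+    return -0.5 * torch.mean(torch.sum(1 + log_var - mu ** 2 - log_var.exp(), dim=-1))
+
+def kl_weight(step, k=0.002, x0=5000):
+    """Logistic KL-annealing weight: close to 0 early in training, approaching 1 later."""
+    return 1.0 / (1.0 + math.exp(-k * (step - x0)))
+```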
+
+# 3.3. Curriculum Learning of Variation
+
+The parameter $\alpha$ in our mix-and-match perturbation scheme determines a trade-off between stochasticity and motion quality. The larger $\alpha$ , the larger the portion of the original hidden state that will be perturbed. Thus, the model incorporates more randomness and less information from the original hidden state. As such, given a large $\alpha$ , it becomes harder for the model to deliver motion information from the observation to the future representation since a large portion of the hidden state is changing randomly. In particular, we observed that training becomes unstable if we use a large $\alpha$ from the beginning, with the motion-related loss terms fluctuating while the prior loss $\mathcal{L}_{prior}$ quickly converges to zero. To overcome this while still enabling the use of sufficiently large values of $\alpha$ to achieve high diversity, we introduce the curriculum learning strategy depicted by Fig. 3. In essence, we initially select $\lceil \alpha L \rceil$ indices in a deterministic manner and gradually increase the randomness of these indices as training progresses. More specifically, given a set of $\lceil \alpha L \rceil$ indices, we replace $c$ indices from the sampled ones with the corresponding ones from the remaining $\lfloor (1 - \alpha)L \rfloor$ indices. Starting from $c = 0$ , we gradually increase $c$ to the point where all $\lceil \alpha L \rceil$ indices are sampled uniformly randomly. More details, including the pseudo-code of this approach, are provided in the supplementary material. This strategy helps the motion decoder to initially learn and incorporate information about the observations (as in [40]), yet, in the long run, still prevents it from ignoring the random vector.
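+
+One way to realize this curriculum (a hedged sketch under our reading of the procedure, not the authors' pseudo-code from the supplementary material) is to start from a fixed index set and swap in $c$ randomly drawn indices from the complement, increasing $c$ over training:
+
+```python
+import math
+import random
+
+def curriculum_indices(L, alpha, c):
+    """Select ceil(alpha * L) indices: fully deterministic when c = 0,
+    increasingly random as c grows."""
+    k = math.ceil(alpha * L)
+    fixed = list(range(k))                         # deterministic base set
+    rest = list(range(k, L))                       # complementary indices
+    c = min(c, k, len(rest))
+    swapped_out = random.sample(fixed, c)          # drop c indices from the base set
+    swapped_in = random.sample(rest, c)            # replace them with random ones
+    keep = [i for i in fixed if i not in swapped_out]
+    return sorted(keep + swapped_in)
+```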
+
+# 3.4. Quality and Diversity Metrics
+
+When dealing with multiple plausible motions, or in general diverse solutions to a problem, evaluation is a challenge. The standard metrics used for deterministic motion prediction models are ill-suited to this task, because they typically compare the predictions to the ground truth, thus inherently penalizing diversity. For multiple motions, two aspects are important: the diversity and the quality, or realism, of each individual motion. Prior work typically evaluates these aspects via human judgement. While human evaluation is highly valuable, and we will also report human results, it is very costly and time-consuming. Here, we therefore introduce two metrics that facilitate the quantitative evaluation of both quality and diversity of generated human motions. We additionally extend the Inception-Score [34] to our task.
+
+To measure the quality of generated motions, we propose to rely on a binary classifier trained to discriminate real (ground-truth) samples from fake (generated) ones. The accuracy of this classifier on the test set is thus inversely proportional to the quality of the generated motions. In other words, high-quality motions are those that are not distinguishable from real ones. Note that we do not rely on adversarial training, i.e., we do not define a loss based on this classifier when training our model. To measure the diversity of the generated motions, a naive approach would consist of relying on the distance between the generated motion and a reference one. However, generating identical motions that are all far from the reference one would yield a high value while not reflecting diversity at all. To prevent this, we propose to make use of the average distance between all pairs of generated motions. A similar idea has been investigated to measure the diversity of solutions in other domains [43, 42].
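+
+A minimal numpy sketch of these two metrics is given below. The diversity computation follows the description directly (average pairwise Euclidean distance over $K$ generated motions); mapping classifier accuracy to a quality score as $1 - \text{accuracy}$ is an illustrative assumption.
+
+```python
+import numpy as np
+
+def diversity(motions):
+    """Average pairwise Euclidean distance between K generated motions.
+
+    `motions` has shape (K, T, D): K motions of T frames with D pose values each.
+    """
+    K = motions.shape[0]
+    flat = motions.reshape(K, -1)
+    dists = [np.linalg.norm(flat[i] - flat[j])
+             for i in range(K) for j in range(i + 1, K)]
+    return float(np.mean(dists))
+
+def quality_from_classifier(test_accuracy):
+    """Quality proxy from a real-vs-generated binary classifier.
+
+    The text states that quality is inversely related to the classifier's
+    test accuracy; using 1 - accuracy is only one possible mapping.
+    """
+    return 1.0 - test_accuracy
+
+# Example with random data standing in for K = 50 generated motions.
+fake_motions = np.random.randn(50, 60, 72)
+print(diversity(fake_motions), quality_from_classifier(0.55))
+```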
+
+The quality and diversity metrics can reliably evaluate a stochastic motion prediction model. While they provide valuable information, drawing conclusions about the performance of a model is always easier with a single measure. To this end, we extend the Inception-Score (IS) [34] used to measure the quality of images produced by a generative model. Our extension to IS is twofold: (1) Inspired by [16], we extend IS to the conditional case, where the condition provides the core signal to generate the sample; (2) Our extended IS measures the quality and diversity of sequential solutions. To this end, we first train a strong skeleton-based action classifier [25] on ground-truth motions. We then compute the IS of each of the multiple motions generated for a given condition (observed motion), and report the mean IS and its standard deviation over all conditions. The reason for reporting the mean IS over all conditions is to evaluate the diversity of generated motions given each observation. Note that IS alone makes it hard to evaluate quality and diversity separately, and thus we still believe that all three metrics are required. Importantly, we show empirically that our proposed metrics are in line with human judgement, at considerably lower cost.
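+
+The following numpy sketch shows how such a conditional IS could be computed from the action classifier's posteriors; the data shapes are illustrative assumptions.
+
+```python
+import numpy as np
+
+def inception_score(class_probs, eps=1e-12):
+    """IS for one condition: exp of the mean KL(p(y|x) || p(y)) over K samples.
+
+    `class_probs` has shape (K, C): action-classifier posteriors for the
+    K motions generated from a single observed motion.
+    """
+    marginal = class_probs.mean(axis=0, keepdims=True)             # p(y)
+    kl = np.sum(class_probs * (np.log(class_probs + eps)
+                               - np.log(marginal + eps)), axis=1)  # per-sample KL
+    return float(np.exp(kl.mean()))
+
+def conditional_inception_score(per_condition_probs):
+    """Mean and standard deviation of the IS over all conditions."""
+    scores = [inception_score(p) for p in per_condition_probs]
+    return float(np.mean(scores)), float(np.std(scores))
+
+# Example: 50 conditions, K = 50 samples each, a 15-class action classifier.
+probs = np.random.dirichlet(np.ones(15), size=(50, 50))
+print(conditional_inception_score(probs))
+```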
+
+# 4. Experiments
+
+We now evaluate the effectiveness of our approach at generating multiple plausible motions. To this end, we use Human3.6M [17] and the CMU Mocap dataset, two large publicly available motion capture datasets. In this section, we introduce the baselines and give information about the implementation details and evaluation metrics. We then provide all the experimental results.
+
+Baselines. We compare our Mix-and-Match approach with the different means of imposing variation in motion prediction discussed in Section 2, i.e., concatenating the hidden state to a learned latent variable (Yan et al. [40]), concatenating the pose to a learned latent variable at each time-step (Walker et al. [37]), and adding a (transformed) random noise to the hidden state (Barsoum et al. [5]). For the comparison to be fair, we use 16 frames (i.e., $640\mathrm{ms}$) as observation to generate the next 60 frames (i.e., 2.4sec) for all baselines. All models are trained with the same motion representation, annealing strategy, backbone network, and losses, except for Barsoum et al. [5], which cannot make use of $\mathcal{L}_{prior}$.
+
+Implementation Details. The motion encoders and decoders in our model are single-layer GRU [9] networks, comprising 1024 hidden units each. For the decoders, we use a teacher forcing technique [39] to decode motion. At each time-step, the network chooses with probability $P_{tf}$ whether to use its own output at the previous time-step or the ground-truth pose as input. We initialize $P_{tf} = 1$, and decrease it linearly at each training epoch such that, after a certain number of epochs, the model becomes completely autoregressive, i.e., uses only its own output as input to the next time-step. We train our model on a single GPU with the Adam optimizer [19] for 100K iterations. We use a learning rate of 0.001 and a mini-batch size of 64. To avoid exploding gradients, we use the gradient-clipping technique of [29] for all layers in the network. We implemented our model using the PyTorch framework [30].
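+
+A minimal sketch of the scheduled teacher forcing described above is shown below; the number of epochs over which $P_{tf}$ decays to zero is an illustrative assumption.
+
+```python
+import random
+
+def teacher_forcing_prob(epoch, decay_epochs=20):
+    """Linearly decay P_tf from 1 to 0; after `decay_epochs` (an assumed
+    value) the decoder becomes fully autoregressive."""
+    return max(0.0, 1.0 - epoch / decay_epochs)
+
+def choose_decoder_input(gt_pose, predicted_pose, p_tf):
+    """At each time-step, feed the ground-truth pose with probability P_tf,
+    otherwise reuse the model's own previous output."""
+    return gt_pose if random.random() < p_tf else predicted_pose
+
+# Example: the schedule over the first few epochs.
+print([round(teacher_forcing_prob(e), 2) for e in range(0, 25, 5)])
+```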
+
+Evaluation Metrics. In addition to the metrics discussed in Section 3.4, we also report the standard ELBO metric (approximated by the reconstruction loss and the KL on the test set) and the sampling loss (S-MSE) of our approach and the state-of-the-art stochastic motion prediction techniques. However, evaluating only against one ground-truth motion (i.e., one sample from the multi-modal distribution), as in MSE or S-MSE, can lead to a high score for one sample while penalizing other plausible modes. This behavior is undesirable since it cannot differentiate a multi-modal solution from a good, but uni-modal one. Similarly, the metrics in [40] or the approximate ELBO only evaluate quality given one single ground truth. While the ground truth has high quality, there exist multiple high-quality continuations of an observation, which our proposed metric accounts for. As discussed in Section 3.4, we evaluate the quality and diversity of the predicted motions. Note that these metrics should be considered together, since each one taken separately does not provide a complete picture of how well a model can
+
+Quantitative results on the Human3.6M dataset
+
+| Method | ELBO ↓ (KL ↑) | Diversity ↑ | Quality ↑ | IS ↑ | Tr KL ↑ |
+| --- | --- | --- | --- | --- | --- |
+| Yan et al. [40] | 0.51 (0.06) | 0.26 | 0.45 | 1.9±0.4 | 0.08 |
+| Walker et al. [37] | 2.08 (N/A) | 1.70 | 0.13 | 1.8±0.6 | N/A |
+| Barsoum et al. [5] | 0.61 (N/A) | 0.48 | 0.47 | 2.1±1.3 | N/A |
+| Mix-and-Match | 0.55 (2.03) | 3.52 | 0.42 | 7.3±1.4 | 1.98 |
+
+Quantitative results on the CMU Mocap dataset
+
+| Method | ELBO ↓ (KL ↑) | Diversity ↑ | Quality ↑ | IS ↑ | Tr KL ↑ |
+| --- | --- | --- | --- | --- | --- |
+| Yan et al. [40] | 0.25 (0.08) | 0.41 | 0.46 | 2.4±0.1 | 0.01 |
+| Walker et al. [37] | 1.93 (N/A) | 3.00 | 0.18 | 1.4±0.4 | N/A |
+| Barsoum et al. [5] | 0.24 (N/A) | 0.43 | 0.45 | 2.0±1.0 | N/A |
+| Mix-and-Match | 0.25 (2.92) | 2.63 | 0.46 | 9.0±1.7 | 2.20 |
+
+Table 1. Comparison of our approach with the stochastic motion prediction baselines on the Human3.6M dataset (top) and the CMU Mocap dataset (bottom). Tr KL stands for the KL term at training convergence.
+
+
+
+Figure 4. Qualitative evaluation of diversity. The first row (black box) shows the ground-truth motion. The next six rows depict six randomly generated motions (not cherry-picked) given the same observations (the first four poses of each motion). The green box shows the last observed frame and the first generated one, illustrating the consistency of the generated motions. The orange boxes show the diversity of the generated motions in different temporal windows. The blue box shows a randomly sampled motion whose poses are similar to the ground-truth ones. Best seen in color and zoomed in.
+
+
+Figure 5. Diversity of $K$ RNN decoder inputs, generated with $K = 50$ different random vectors. We report the mean diversity over $N = 50$ samples and the corresponding standard deviation.
+
+predict multiple plausible future motions. For example, a model can generate diverse but unnatural motions, or, conversely, realistic but identical motions. To evaluate quality, as discussed in Section 3.4, we use a recurrent binary classifier whose task is to determine whether a sample comes from the ground-truth data or was generated by the model. We train such a classifier for each method, using $25\mathrm{K}$ samples generated at different training steps together with $25\mathrm{K}$ real samples, forming a binary dataset of $50\mathrm{K}$ motions for each method. To evaluate diversity, as discussed in Section 3.4, we compute the mean Euclidean distance from each motion to all other $K - 1$ motions when generating $K = 50$ motions. To compute IS, we trained an action classifier [25] with $50\mathrm{K}$ real motions. We then compute the IS for $K = 50$ samples per condition for 50 different conditions. We followed Section 3.4 to report IS. Furthermore, we also performed a human evaluation to measure the quality of the motions generated by each method. To this end, we asked eight users to rate the quality of 50 motions generated by each method, for a total of 200 motions. The ratings were defined on a scale of 1-5, 1 representing a low-quality motion and 5 a high-quality, realistic one. We then scaled the values to the range 0-50 to make them comparable with those of the binary classifier.
+
+# 4.1. Comparison to the State-of-the-Art
+
+In this section, we quantitatively compare our approach to the state-of-the-art stochastic motion prediction techniques in terms of approximate ELBO, Diversity, Quality, and IS on a held-out test set, as well as the training KL term at convergence. Table 1 shows the results on the Human3.6M and CMU Mocap datasets.
+
+These results show that Mix-and-Match is highly capable of learning the variation in human motion while maintaining a good motion quality. This is shown by the IS, Diversity, and Quality metrics, which should be considered together. It is also evidenced by the low reconstruction loss and higher KL term on the test set. The training KL term at convergence also shows that, in Mix-and-Match, the posterior does not collapse to the prior distribution, i.e., the model does not ignore the latent variable. While the MSE of our approach is slightly higher than that of Yan et al. [40] on Human3.6M and Barsoum et al. [5] on the CMU Mocap dataset, we effectively exploit the latent variables, as demonstrated by the KL term on the test set, the IS and diversity metrics, and the qualitative results provided in Fig. 4 and in the supplementary material. As evidenced by the examples of diverse motions generated by our model in Fig. 4, given a single observation, Mix-and-Match is able to gener
+
+
+Figure 6. (Left) Diversity of our approach and the stochastic baselines. (Middle) Quality of our approach and the stochastic baselines. (Right) Comparing classifier-based and human evaluation of quality for our approach and the baselines, where the statistics correspond to evaluation after the models are fully trained. The numbers are provided in the supplementary material to facilitate future comparisons.
+
+ate diverse, but natural motions.
+
+# 4.2. Analysis on Diversity and Quality
+
+To provide a deeper understanding of our approach, we evaluate different aspects of Mix-and-Match. All these experiments were done on Human3.6M. In the following, we first analyze the diversity in the hidden state space, i.e., the first part of the model where variation is imposed. We then evaluate the quality and diversity of prediction when tested at different stages of the training. We also perform a human evaluation on the quality of the generated motions, comparing it with our inexpensive, automatic quality metric. Finally, we compare Mix-and-Match with other stochastic techniques in terms of sampling error (S-MSE), i.e., by computing the error of the best of $K$ generated motions given the ground-truth one. More experiments and visualizations are provided in the supplementary material.
+
+Diversity in Hidden State Space. In Fig. 5, we plot the diversity of the representations used as input to the RNN decoders of [40] and [5], two state-of-the-art methods that are closest in spirit to our approach. Here, diversity is measured as the average pairwise distance across the $K = 50$ representations produced for a single series of observations. We report the mean diversity over 50 samples and the corresponding standard deviation. As can be seen from the figure, the diversity of [40] and [5] decreases as training progresses, thus supporting our observation that these models learn to ignore the perturbations. As evidenced by the black curve, which shows an increasing diversity as training progresses, our approach produces not only high-quality predictions but also truly diverse ones. The gradual but steady increase in diversity of our approach is due to our curriculum learning strategy described in Section 3.3. Without it, training is less stable, with large diversity variations.
+
+Diversity and Quality in Motion Space. Now, we thoroughly compare our approach with state-of-the-art stochastic motion prediction models in terms of quality and diversity. The results of the metrics of Section 3.4 are provided in Fig. 6 (Left and Middle) and those of the human evaluation in Fig. 6 (Right). Below, we analyze the results of the different models.
+
+As can be seen from Fig. 6, [40] tends to ignore the random variable $z$, thus ignoring the root of variation. As a consequence, it achieves a low diversity, much lower than ours, but produces samples of high quality, albeit almost identical ones, which is also shown qualitatively in Fig. 3 of the supplementary material. We empirically observed the magnitude of the weights acting on $z$ to be orders of magnitude smaller than that of the weights acting on the condition, 0.008 versus 232.85, respectively. Note that this decrease in diversity occurs after 16K iterations, indicating that the model takes time to identify the part of the hidden state that contains the randomness. Nevertheless, at iteration 16K, prediction quality is low, and thus one could not simply stop training at this stage. Note that the lack of diversity of [40] is also evidenced by Fig. 5. As can be verified in Fig. 6 (Right), where [40] appears in a region of high quality but low diversity, the results of human evaluation match those of our classifier-based quality metric.
+
+Fig. 6 also evidences the limited diversity of the motions produced by [5] despite its use of random noise during inference. Note that the authors of [5] mentioned in their paper that the random noise was added to the hidden state. Only by studying their publicly available code did we understand the precise way this combination was done. In fact, the addition relies on a parametric, linear transformation of the noise vector. That is, the perturbed hidden state is obtained as $h_{perturbed} = h_{original} + W^{z \to h} z$. Because the parameters $W^{z \to h}$ are learned, the model has the flexibility to ignore $z$ (the magnitude of $W^{z \to h}$ is on the order of $10^{-3}$), which causes the behavior observed in Figs. 5 and 6. Note that the authors of [5] acknowledged that, despite their best efforts, they noticed very little variation between predictions obtained with different $z$ values. By depicting [5] in a region of high quality but low diversity, the human evaluation results in Fig. 6 (Right) again match those of our classifier-based quality metric.
+
+As can be seen in Fig. 6(Left and Middle), [37] pro
+
+Walking
+
+| Method | 80ms | 160ms | 320ms | 400ms | 560ms | 1000ms |
+| --- | --- | --- | --- | --- | --- | --- |
+| Yan et al. [40] | 0.73 | 0.79 | 0.90 | 0.93 | 0.95 | 1.05 |
+| Barsoum et al. [5] | 0.61 | 0.62 | 0.71 | 0.79 | 0.83 | 1.07 |
+| Walker et al. [37] | 0.56 | 0.66 | 0.98 | 1.05 | 1.28 | 1.60 |
+| Mix-and-Match | 0.33 | 0.48 | 0.56 | 0.58 | 0.64 | 0.68 |
+
+Smoking
+
+| Method | 80ms | 160ms | 320ms | 400ms | 560ms | 1000ms |
+| --- | --- | --- | --- | --- | --- | --- |
+| Yan et al. [40] | 1.00 | 1.14 | 1.43 | 1.44 | 1.68 | 1.99 |
+| Barsoum et al. [5] | 0.64 | 0.78 | 1.05 | 1.12 | 1.64 | 1.84 |
+| Walker et al. [37] | 0.59 | 0.83 | 1.25 | 1.36 | 1.67 | 2.03 |
+| Mix-and-Match | 0.23 | 0.42 | 0.79 | 0.77 | 0.82 | 1.25 |
+
+Eating
+
+| Method | 80ms | 160ms | 320ms | 400ms | 560ms | 1000ms |
+| --- | --- | --- | --- | --- | --- | --- |
+| Yan et al. [40] | 0.68 | 0.74 | 0.95 | 1.00 | 1.03 | 1.38 |
+| Barsoum et al. [5] | 0.53 | 0.67 | 0.79 | 0.88 | 0.97 | 1.12 |
+| Walker et al. [37] | 0.44 | 0.60 | 0.71 | 0.84 | 1.05 | 1.54 |
+| Mix-and-Match | 0.23 | 0.34 | 0.41 | 0.50 | 0.61 | 0.91 |
+
+Discussion
+
+| Method | 80ms | 160ms | 320ms | 400ms | 560ms | 1000ms |
+| --- | --- | --- | --- | --- | --- | --- |
+| Yan et al. [40] | 0.80 | 1.01 | 1.22 | 1.35 | 1.56 | 1.69 |
+| Barsoum et al. [5] | 0.79 | 1.00 | 1.12 | 1.29 | 1.43 | 1.71 |
+| Walker et al. [37] | 0.73 | 1.10 | 1.33 | 1.34 | 1.45 | 1.85 |
+| Mix-and-Match | 0.25 | 0.60 | 0.83 | 0.89 | 1.12 | 1.30 |
+
+Table 2. Quantitative comparison of the S-MSE against stochastic baselines for four actions of the Human3.6M dataset.
+
+Walking
+
+| Method | 80ms | 160ms | 320ms | 400ms | 560ms | 1000ms |
+| --- | --- | --- | --- | --- | --- | --- |
+| Zero Velocity | 0.39 | 0.86 | 0.99 | 1.15 | 1.35 | 1.32 |
+| AGED [14] | 0.22 | 0.36 | 0.55 | 0.67 | 0.78 | 0.91 |
+| Imitation [38] | 0.21 | 0.34 | 0.53 | 0.59 | 0.67 | 0.69 |
+| LSTM-3LR [12] | 1.18 | 1.50 | 1.67 | 1.76 | 1.81 | 2.20 |
+| SRNN [18] | 1.08 | 1.34 | 1.60 | 1.80 | 1.90 | 2.13 |
+| DAE-LSTM [13] | 1.00 | 1.11 | 1.39 | 1.48 | 1.55 | 1.39 |
+| GRU [28] | 0.28 | 0.49 | 0.72 | 0.81 | 0.93 | 1.03 |
+| LTD [27] | 0.18 | 0.31 | 0.49 | 0.56 | 0.65 | 0.67 |
+| Mix-and-Match | 0.33 | 0.48 | 0.56 | 0.58 | 0.64 | 0.68 |
+
+Smoking
+
+| Method | 80ms | 160ms | 320ms | 400ms | 560ms | 1000ms |
+| --- | --- | --- | --- | --- | --- | --- |
+| Zero Velocity | 0.26 | 0.48 | 0.97 | 0.95 | 1.02 | 1.69 |
+| AGED [14] | 0.27 | 0.43 | 0.82 | 0.84 | 1.06 | 1.21 |
+| Imitation [38] | 0.23 | 0.44 | 0.86 | 0.85 | 0.95 | 1.63 |
+| LSTM-3LR [12] | 2.05 | 2.34 | 3.10 | 3.18 | 3.24 | 3.42 |
+| SRNN [18] | 1.90 | 2.30 | 2.90 | 3.10 | 3.21 | 3.23 |
+| DAE-LSTM [13] | 0.92 | 1.03 | 1.15 | 1.25 | 1.38 | 1.77 |
+| GRU [28] | 0.33 | 0.61 | 1.05 | 1.15 | 1.25 | 1.50 |
+| LTD [27] | 0.22 | 0.41 | 0.86 | 0.80 | 0.87 | 1.57 |
+| Mix-and-Match | 0.23 | 0.42 | 0.79 | 0.77 | 0.82 | 1.25 |
+
+Eating
+
+| Method | 80ms | 160ms | 320ms | 400ms | 560ms | 1000ms |
+| --- | --- | --- | --- | --- | --- | --- |
+| Zero Velocity | 0.27 | 0.48 | 0.73 | 0.86 | 1.04 | 1.38 |
+| AGED [14] | 0.17 | 0.28 | 0.51 | 0.64 | 0.86 | 0.93 |
+| Imitation [38] | 0.17 | 0.30 | 0.52 | 0.65 | 0.79 | 1.13 |
+| LSTM-3LR [12] | 1.36 | 1.79 | 2.29 | 2.42 | 2.49 | 2.82 |
+| SRNN [18] | 1.35 | 1.71 | 2.12 | 2.21 | 2.28 | 2.58 |
+| DAE-LSTM [13] | 1.31 | 1.49 | 1.86 | 1.89 | 1.76 | 2.01 |
+| GRU [28] | 0.23 | 0.39 | 0.62 | 0.76 | 0.95 | 1.08 |
+| LTD [27] | 0.16 | 0.29 | 0.50 | 0.62 | 0.76 | 1.12 |
+| Mix-and-Match | 0.23 | 0.34 | 0.41 | 0.50 | 0.61 | 0.91 |
+
+Discussion
+
+| Method | 80ms | 160ms | 320ms | 400ms | 560ms | 1000ms |
+| --- | --- | --- | --- | --- | --- | --- |
+| Zero Velocity | 0.31 | 0.67 | 0.94 | 1.04 | 1.41 | 1.96 |
+| AGED [14] | 0.27 | 0.56 | 0.76 | 0.83 | 1.25 | 1.30 |
+| Imitation [38] | 0.27 | 0.56 | 0.82 | 0.91 | 1.34 | 1.81 |
+| LSTM-3LR [12] | 2.25 | 2.33 | 2.45 | 2.46 | 2.48 | 2.93 |
+| SRNN [18] | 1.67 | 2.03 | 2.20 | 2.31 | 2.39 | 2.43 |
+| DAE-LSTM [13] | 1.11 | 1.20 | 1.38 | 1.42 | 1.53 | 1.73 |
+| GRU [28] | 0.31 | 0.68 | 1.01 | 1.09 | 1.43 | 1.69 |
+| LTD [27] | 0.20 | 0.51 | 0.77 | 0.85 | 1.33 | 1.70 |
+| Mix-and-Match | 0.25 | 0.60 | 0.83 | 0.89 | 1.12 | 1.30 |
+
+Table 3. Comparison against deterministic motion prediction techniques for four actions of the Human3.6M dataset.
+
+duces motions with higher diversity than [5, 40], but of much lower quality. The main reason behind this is that the random vectors that are concatenated to the poses at each time-step are sampled independently of each other, which translates to discontinuities in the generated motions. The human evaluation in Fig. 6 (Right) further confirms that the results of [37] lie in a low-quality, medium-diversity region.
+
+The success of our approach is confirmed by Fig. 6(Left and Middle). Our model generates diverse motions, even after a long training time, and the quality of these motions is high. While this quality is slightly lower than that of [5, 40] when looking at our classifier-based metric, it is rated higher by IS and humans, as can be verified from Fig. 6(Right) and Table 1. Altogether, these results confirm the ability of our approach to generate highly diverse yet realistic motions.
+
+Evaluating the Sampling Error. We now quantitatively compare our approach with other stochastic baselines in terms of sampling error (S-MSE). To this end, we follow the evaluation setting of deterministic motion prediction (as in [12, 31, 32, 28, 14]), which allows further comparisons to deterministic baselines. We report the standard metric, i.e., the Euclidean distance between the generated and ground-truth Euler angles (MAE). To evaluate this metric for our method and the stochastic motion prediction models, which generate multiple, diverse predictions, we make use of the best sample among the $K$ generated ones, with $K = 50$ for the stochastic baselines and for our approach. This evaluation procedure aims to show that, among the $K$ generated motions, at least one is close to the ground truth. As shown in Table 2, by providing higher diversity, our approach outperforms the baselines. Similarly, in Table 3, we compare the best of $K = 50$ sampled motions for our approach with the deterministic motion prediction techniques. Note that the goal of this experiment is not to provide a fair comparison to deterministic models, but to show that, among the diverse set of motions generated by our model, there exists at least one motion that is very close to the ground-truth one. The point of reporting the MAE of deterministic methods is to show how well deterministic models, with sophisticated architectures and complicated loss functions, perform on this task.
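+
+A minimal numpy sketch of this best-of-$K$ evaluation is given below; the use of a per-frame Euclidean distance on the angle representation is an illustrative proxy for the standard MAE protocol.
+
+```python
+import numpy as np
+
+def best_of_k_error(generated, ground_truth):
+    """Best-of-K sampling error (S-MSE-style evaluation).
+
+    `generated`    : (K, T, D) angle sequences for the K sampled motions.
+    `ground_truth` : (T, D) ground-truth angle sequence.
+    Returns the error of the sample closest to the ground truth.
+    """
+    per_frame = np.linalg.norm(generated - ground_truth[None], axis=-1)  # (K, T)
+    errors = per_frame.mean(axis=-1)                                     # (K,)
+    return float(errors.min())
+
+# Example with random data standing in for K = 50 sampled motions.
+preds = np.random.randn(50, 60, 54)
+gt = np.random.randn(60, 54)
+print(best_of_k_error(preds, gt))
+```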
+
+# 5. Conclusion
+
+In this paper, we have proposed an effective way of perturbing the hidden state of an RNN such that it becomes capable of learning the multiple modes of human motions. Our evaluation of quality and diversity, based on both new quantitative metrics and human judgment, has evidenced that our approach outperforms existing stochastic methods. Generating diverse plausible motions given limited observations has many applications, especially when the motions are generated in an action-agnostic manner, as done here. For instance, our model can be used for human action forecasting [33, 2, 35, 1, 3], where one seeks to anticipate the action as early as possible, or for motion inpainting, where, given partial observations, one aims to generate multiple in-between solutions. In the future, we will therefore investigate the use of our approach in such applications.
+
+# References
+
+[1] Mohammad Sadegh Aliakbarian, Fatemeh Sadat Saleh, Mathieu Salzmann, Basura Fernando, Lars Petersson, and Lars Andersson. Encouraging lstms to anticipate actions very early. In The IEEE International Conference on Computer Vision (ICCV), Oct 2017.
+[2] Mohammad Sadegh Aliakbarian, Fatemehsadat Saleh, Basura Fernando, Mathieu Salzmann, Lars Petersson, and Lars Andersson. Deep action-and context-aware sequence learning for activity recognition and anticipation. arXiv preprint arXiv:1611.05520, 2016.
+[3] Mohammad Sadegh Aliakbarian, Fatemeh Sadat Saleh, Mathieu Salzmann, Basura Fernando, Lars Petersson, and Lars Andersson. Viena: A driving anticipation dataset. In Asian Conference on Computer Vision, pages 449-466. Springer, 2018.
+[4] Jianmin Bao, Dong Chen, Fang Wen, Houqiang Li, and Gang Hua. Cvae-gan: fine-grained image generation through asymmetric training. In Proceedings of the IEEE International Conference on Computer Vision, pages 2745-2754, 2017.
+[5] Emad Barsoum, John Kender, and Zicheng Liu. Hp-gan: Probabilistic 3d human motion prediction via gan. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 1418-1427, 2018.
+[6] Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew M Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349, 2015.
+[7] Judith Butepage, Michael J Black, Danica Kragic, and Hedvig Kjellstrom. Deep representation learning for human motion prediction and classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6158-6166, 2017.
+[8] Judith Butepage, Hedvig Kjellström, and Danica Kragic. Anticipating many futures: Online human motion prediction and generation for human-robot interaction. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 1-9. IEEE, 2018.
+[9] Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
+[10] Jesse Engel, Matthew Hoffman, and Adam Roberts. Latent constraints: Learning to generate conditionally from unconditional generative models. arXiv preprint arXiv:1711.05772, 2017.
+[11] Patrick Esser, Ekaterina Sutter, and Björn Ommer. A variational u-net for conditional appearance and shape generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8857-8866, 2018.
+[12] Katerina Fragkiadaki, Sergey Levine, Panna Felsen, and Jitendra Malik. Recurrent network models for human dynamics. In Proceedings of the IEEE International Conference on Computer Vision, pages 4346-4354, 2015.
+
+[13] Partha Ghosh, Jie Song, Emre Aksan, and Otmar Hilliges. Learning human motion models for long-term predictions. In 2017 International Conference on 3D Vision (3DV), pages 458-466. IEEE, 2017.
+[14] Liang-Yan Gui, Yu-Xiong Wang, Xiaodan Liang, and José MF Moura. Adversarial geometry-aware human motion prediction. In Proceedings of the European Conference on Computer Vision (ECCV), pages 786-803, 2018.
+[15] Liang-Yan Gui, Yu-Xiong Wang, Deva Ramanan, and José MF Moura. Few-shot human motion prediction via meta-learning. In Proceedings of the European Conference on Computer Vision (ECCV), pages 432-450, 2018.
+[16] Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. Multimodal unsupervised image-to-image translation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 172-189, 2018.
+[17] Catalin Ionescu, Dragos Papava, Vlad Olaru, and Cristian Sminchisescu. Human3.6m: Large scale datasets and predictive methods for 3d human sensing in natural environments. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36(7):1325-1339, jul 2014.
+[18] Ashesh Jain, Amir R Zamir, Silvio Savarese, and Ashutosh Saxena. Structural-rnn: Deep learning on spatio-temporal graphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5308-5317, 2016.
+[19] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
+[20] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
+[21] Tejas D Kulkarni, William F Whitney, Pushmeet Kohli, and Josh Tenenbaum. Deep convolutional inverse graphics network. In Advances in neural information processing systems, pages 2539-2547, 2015.
+[22] Jogendra Nath Kundu, Maharshi Gor, and R Venkatesh Babu. Bihmp-gan: Bidirectional 3d human motion prediction gan. arXiv preprint arXiv:1812.02591, 2018.
+[23] Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, Hugo Larochelle, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015.
+[24] Chen Li, Zhen Zhang, Wee Sun Lee, and Gim Hee Lee. Convolutional sequence to sequence model for human dynamics. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5226-5234, 2018.
+[25] Chao Li, Qiaoyong Zhong, Di Xie, and Shiliang Pu. Co-occurrence feature learning from skeleton data for action recognition and detection with hierarchical aggregation. International Joint Conferences on Artificial Intelligence, 2018.
+[26] Xiao Lin and Mohamed R Amer. Human motion modeling using dvgans. arXiv preprint arXiv:1804.10652, 2018.
+[27] Wei Mao, Miaomiao Liu, Mathieu Salzmann, and Hongdong Li. Learning trajectory dependencies for human motion prediction. arXiv preprint arXiv:1908.05436, 2019.
+[28] Julieta Martinez, Michael J Black, and Javier Romero. On human motion prediction using recurrent neural networks.
+
+In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4674-4683. IEEE, 2017.
+[29] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In International conference on machine learning, pages 1310-1318, 2013.
+[30] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017.
+[31] Dario Pavllo, Christoph Feichtenhofer, Michael Auli, and David Grangier. Modeling human motion with quaternion-based neural networks. arXiv preprint arXiv:1901.07677, 2019.
+[32] Dario Pavllo, David Grangier, and Michael Auli. Quaternet: A quaternion-based recurrent model for human motion. arXiv preprint arXiv:1805.06485, 2018.
+[33] Cristian Rodriguez, Basura Fernando, and Hongdong Li. Action anticipation by predicting future dynamic images. In European Conference on Computer Vision, pages 89-105. Springer, 2018.
+[34] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In Advances in neural information processing systems, pages 2234-2242, 2016.
+[35] Yuge Shi, Basura Fernando, and Richard Hartley. Action anticipation with rbf kernelized feature mapping rnn. In Proceedings of the European Conference on Computer Vision (ECCV), pages 301-317, 2018.
+[36] Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. In Advances in neural information processing systems, pages 3483-3491, 2015.
+[37] Jacob Walker, Kenneth Marino, Abhinav Gupta, and Martial Hebert. The pose knows: Video forecasting by generating pose futures. In Computer Vision (ICCV), 2017 IEEE International Conference on, pages 3352-3361. IEEE, 2017.
+[38] Borui Wang, Ehsan Adeli, Hsu kuang Chiu, De-An Huang, and Juan Carlos Niebles. Imitation learning for human pose prediction. arXiv preprint arXiv:1909.03449, 2019.
+[39] Ronald J Williams and David Zipser. A learning algorithm for continually running fully recurrent neural networks. Neural computation, 1(2):270-280, 1989.
+[40] Xinchen Yan, Akash Rastogi, Ruben Villegas, Kalyan Sunkavalli, Eli Shechtman, Sunil Hadap, Ersin Yumer, and Honglak Lee. Mt-vae: Learning motion transformations to generate multimodal human dynamics. In European Conference on Computer Vision, pages 276-293. Springer, 2018.
+[41] Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. Attribute2image: Conditional image generation from visual attributes. In European Conference on Computer Vision, pages 776-791. Springer, 2016.
+[42] Dingdong Yang, Seunghoon Hong, Yunseok Jang, Tiangchen Zhao, and Honglak Lee. Diversity-sensitive conditional generative adversarial networks. In International Conference on Learning Representations, 2019.
+
+[43] Ye Yuan and Kris Kitani. Diverse trajectory forecasting with determinantal point processes. arXiv preprint arXiv:1907.04967, 2019.
\ No newline at end of file
diff --git a/astochasticconditioningschemefordiversehumanmotionprediction/images.zip b/astochasticconditioningschemefordiversehumanmotionprediction/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..34e160ac1f631548d592b8a76f956d8634bcaba4
--- /dev/null
+++ b/astochasticconditioningschemefordiversehumanmotionprediction/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:13ee477756328fed544c8027584675ee110f8a98446d7d47e8fa73c59f9fc519
+size 536696
diff --git a/astochasticconditioningschemefordiversehumanmotionprediction/layout.json b/astochasticconditioningschemefordiversehumanmotionprediction/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..8ff9ccaafc710011957e2705448b84f6521fd376
--- /dev/null
+++ b/astochasticconditioningschemefordiversehumanmotionprediction/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a71e44075fc23327e0b784e8ae9de21a4dbd03850d83b9fc0cab3203719d3482
+size 387480
diff --git a/atransductiveapproachforvideoobjectsegmentation/97c60c57-9c75-4e33-ac34-d62ff0e132b5_content_list.json b/atransductiveapproachforvideoobjectsegmentation/97c60c57-9c75-4e33-ac34-d62ff0e132b5_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..357c11040bf6479da290fd475f4763cef2e5b1ad
--- /dev/null
+++ b/atransductiveapproachforvideoobjectsegmentation/97c60c57-9c75-4e33-ac34-d62ff0e132b5_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:881afed1900450961d31c48b50df306ba3563636595b69b64b0c42e343f7c542
+size 78090
diff --git a/atransductiveapproachforvideoobjectsegmentation/97c60c57-9c75-4e33-ac34-d62ff0e132b5_model.json b/atransductiveapproachforvideoobjectsegmentation/97c60c57-9c75-4e33-ac34-d62ff0e132b5_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..d887f68f260a34da9b040ba3e5b34b6e89011544
--- /dev/null
+++ b/atransductiveapproachforvideoobjectsegmentation/97c60c57-9c75-4e33-ac34-d62ff0e132b5_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fdb7687fd09c7f7f74c36f22962a7a229c7fd4f806ae31180f6a32f21a717a71
+size 95467
diff --git a/atransductiveapproachforvideoobjectsegmentation/97c60c57-9c75-4e33-ac34-d62ff0e132b5_origin.pdf b/atransductiveapproachforvideoobjectsegmentation/97c60c57-9c75-4e33-ac34-d62ff0e132b5_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..a4a60178468b27c02e2567ceafebe83d816c9562
--- /dev/null
+++ b/atransductiveapproachforvideoobjectsegmentation/97c60c57-9c75-4e33-ac34-d62ff0e132b5_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e5d355f227a0cf18ec58c0dc9f012240eb66fe5d82e7446158a7c24a80573f2b
+size 3557568
diff --git a/atransductiveapproachforvideoobjectsegmentation/full.md b/atransductiveapproachforvideoobjectsegmentation/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..d165ea5c34b84076b8a1500bd019916ed703a2ed
--- /dev/null
+++ b/atransductiveapproachforvideoobjectsegmentation/full.md
@@ -0,0 +1,339 @@
+# A Transductive Approach for Video Object Segmentation
+
+Yizhuo Zhang $^{2\star}$ Zhirong Wu $^{1\star}$ Houwen Peng $^{1}$ Stephen Lin $^{1}$
+
+$^{1}$ Microsoft Research Asia $^{2}$ Carnegie Mellon University
+
+# Abstract
+
+Semi-supervised video object segmentation aims to separate a target object from a video sequence, given the mask in the first frame. Most current prevailing methods utilize information from additional modules trained in other domains like optical flow and instance segmentation, and as a result they do not compete with other methods on common ground. To address this issue, we propose a simple yet strong transductive method, in which additional modules, datasets, and dedicated architectural designs are not needed. Our method takes a label propagation approach where pixel labels are passed forward based on feature similarity in an embedding space. Different from other propagation methods, ours diffuses temporal information in a holistic manner which takes account of long-term object appearance. In addition, our method incurs little additional computational overhead, and runs at a fast $\sim 37$ fps speed. Our single model with a vanilla ResNet50 backbone achieves an overall score of $72.3\%$ on the DAVIS 2017 validation set and $63.1\%$ on the test set. This simple yet high-performing and efficient method can serve as a solid baseline that facilitates future research. Code and models are available at https://github.com/microsoft/transductive-vos.pytorch.
+
+# 1. Introduction
+
+Video object segmentation addresses the problem of extracting object segments from a video sequence given the annotations in the starting frame. This semi-supervised setting is challenging as it requires the system to generalize to various objects, deformations, and occlusions. Nevertheless, video object segmentation has received considerable attention because of its broad practical applications in surveillance, self-driving cars, robotics, and video editing.
+
+Despite the simplicity of the formulation, video object segmentation is closely related to many other visual problems, such as instance segmentation [19], object re-identification [13], optical flow estimation [15], and object
+
+
+Figure 1: A comparison of performance and speed for semi-supervised video object segmentation methods on the DAVIS 2017 validation set. Ours performs comparably to the state-of-the-art methods, while running at an online speed ( $>30$ fps).
+
+tracking [5]. As these tasks share similar challenges with video object segmentation, previous efforts [29, 30] attempt to transfer the modules trained for such tasks into the video object segmentation pipeline. More specifically, optical flow and tracking encourage local dependencies by estimating displacements in nearby frames, while instance segmentation and object re-identification enforce global dependencies by learning invariances to large appearance changes. The integration of such modules allows a significant performance improvement in video object segmentation.
+
+The idea of enforcing local and global dependencies has been a central topic in general semi-supervised learning [49, 14] (also known as transductive inference). The basic assumptions are: 1) nearby samples tend to have the same label and 2) samples that lie on the same manifold should have the same label. The local and global dependencies describe a sufficiently smooth affinity distribution, so that label propagation on the unlabeled data gives reliable estimates. Prior classical approaches that realize this idea include random walk [37], graph-cut [6] and spectral methods [4].
+
+This inspires us to explore a unified approach for semi-supervised video object segmentation without the integration of the modules derived from other domains. We model the local dependency through a spatial prior and a motion prior. This is based on the assumption that spatially nearby pixels are likely to have the same labels and that temporally distant frames weaken the spatial continuity. On the other hand, we model the global dependency through visual appearance, which is learned by convolutional neural networks on the training data.
+
+The inference follows the regularization framework [49] which propagates labels in the constructed spatio-temporal dependency graph. While label propagation algorithms have been explored in the recent literature for video object segmentation [41, 9, 22, 36, 34], the manner in which they learn and propagate affinity is sparse and local, i.e., learning pixel affinities either between adjacent frames or between the first frame and a distant frame. We observe that there exists much smooth unlabeled structure in a temporal volume that these methods do not exploit. This may cause failures when handling deformations and occlusions. In contrast, our label propagation approach attempts to capture all frames which span the video sequence from the first frame to the frame preceding the current frame. To limit the computational overhead, sampling is performed densely within the recent history and sparsely in the more distant history, yielding a model that accounts for object appearance variation while reducing temporal redundancy.
+
+In its implementation, our model does not rely on any other task modules, additional datasets, nor dedicated architectural designs beyond a pretrained ResNet-50 model from the ImageNet model zoo [20]. During inference, per-frame prediction involves only a feed-forward pass through the base network plus an inner product with the prediction history. Thus the inference is fast and also not affected by the number of objects. Experimentally, our model runs at a frame rate of 37 per second, achieving an overall score of $72.3\%$ on the Davis 2017 validation set, as well as $63.1\%$ on the Davis 2017 test set. Our model also achieves a competitive overall score of $67.8\%$ on the recent Youtube-VOS validation set. Our method is competitive with current prevailing methods while being substantially simpler and faster. We hope the model can serve as a simple baseline for future work.
+
+# 2. Related Work
+
+We review related work on video object segmentation in the semi-supervised setting. For an overview of unsupervised and interactive video object segmentation, we refer readers to other papers [11, 12, 2, 39, 27, 28].
+
+Single frame models. In the past few years, methods with leading performance have been based on finetuning the model on the single annotated frame and performing inference on individual test frames. These methods basically learn an objectness prior and spatial continuity without considering temporal information. The convolutional neural network architecture plays an important role in making finetuning on a single frame effective. OSVOS [7] is the pioneering work in this direction. Lucid [25] seeks to augment the data specifically for each video from only a single frame of ground truth. OnAVOS [42] mines confident regions in the testing sequence to augment the training data. The later work of OSVOS-S [31] integrates semantic information from an instance segmentation model to boost performance. PReMVOS [30], CNN-MRF [3], and DyeNet [29] also build on top of single frame models.
+
+The effectiveness of single frame models demonstrates that optimizing a domain-specific spatial smoothness term greatly enhances performance. However, finetuning via gradient descent generally takes tens of seconds per video, which can make it impractical for many applications.
+
+Propagation-based models. Propagation-based methods embed image pixels into a feature space and utilize pixel similarity in the feature space to guide label propagation. In methods such as VideoMatch [22, 9], only pixels in the first frame are used for reference in computing pixel similarity. Since no finetuning at run time is involved, propagation-based models run much faster than the aforementioned single frame models, but the lack of domain-specific finetuning leads to performance that is much worse. Later works [36, 45, 34, 41] explore adding the preceding frame to the first frame as reference, which significantly improves performance and leads to greater temporal smoothness. However, this local and sparse propagation scheme suffers from the drifting problem [16].
+
+Long-range spatio-temporal models. There are two lines of work which attempt to optimize over a dense long-range spatio-temporal volume. The first [45, 21] builds a recurrent neural network which uses the estimate from the previous frame to predict the object segmentation in the current frame. The whole model is learned via backpropagation through time. However, such models are sensitive to estimation errors in the previous frame.
+
+The second direction is based on graphical models [17, 38, 26, 48, 8, 32] (i.e., Markov Random Fields) defined over the spatio-temporal domain. These works were popular prior to deep learning, and employed edge potentials defined by handcrafted features such as SIFT. The models are computationally expensive and no longer competitive to learning-based methods.
+
+Relation to other vision problems. As the above methods suggest, video object segmentation is closely related to a variety of computer vision problems such as instance segmentation, object re-identification, and optical flow estimation and tracking. Many recent methods integrate components for these other tasks into the video object segmentation pipeline. For example, OSVOS-S [31] includes an instance
+
+
+(a) Previous Induction Model
+
+
+(b) Our Transduction Model
+Figure 2: We pose video object segmentation from a transductive inference perspective, where dense long-term similarity dependencies are constructed to discover structures in the spatio-temporal volume. a) Previous induction model transfers knowledge from the first frame to other frames. b) Our transduction model considers holistic dependencies in the unlabeled spatio-temporal volume for joint inference.
+
+segmentation module; PReMVOS [30] and DyeNet [29] incorporate an object re-identification module; CNN-MRF [3], MaskTrack [34] and MaskRNN [21] rely on optical flow estimation. The integration of other modules heavily depends on transfer learning from other datasets. Though performance improvement is observed, it usually involves further complications. For example, instance segmentation becomes less useful when the video encounters a new object category which is not present in the instance segmentation model. Optical flow [15] suffers from occlusions, which can mislead label propagation.
+
+Most relevant works. The space-time memory network (STM) [33] is a significant work and the one most similar to ours. Ours was developed independently of STM, although STM was published earlier. The insight of exploiting dense long-term information is similar. However, the transductive framework in the proposed approach, which stems from classical semi-supervised learning, brings theoretical foundations to video object segmentation. Moreover, in its implementation, ours is much simpler and more efficient: it does not require additional datasets and infers all objects simultaneously.
+
+# 3. Approach
+
+In contrast to much prior work on finetuning a model on a single annotated frame or transferring knowledge from other related tasks, our approach focuses on fully exploiting the unlabeled structure in a video sequence. This enables us to build a simple model that is both strong in performance and fast in inference.
+
+We first describe a generic semi-supervised classification framework [49] and then adapt it to online video object segmentation in a manner that follows our ideas.
+
+# 3.1. A Transductive Inference Framework
+
+Let us first consider a general semi-supervised classification problem. Suppose that we have a dataset $\mathcal{D} = \{(x_1, y_1), (x_2, y_2), \ldots, (x_l, y_l), x_{l+1}, \ldots, x_n\}$, which contains $l$ labeled data pairs and $n - l$ unlabeled data points. The task is to infer the labels $\{\hat{y}_i\}_{i=l+1}^n$ for the unlabeled data $\{x_{l+1}, \ldots, x_n\}$ based on all the observations $\mathcal{D}$. Inference of the unlabeled data is formulated in prior work [49] as a transductive regularization framework,
+
+$$
+\mathcal{Q}(\hat{\mathbf{y}}) = \sum_{i,j}^{n} w_{ij} \left\| \frac{\hat{y}_i}{\sqrt{d_i}} - \frac{\hat{y}_j}{\sqrt{d_j}} \right\|^{2} + \mu \sum_{i=1}^{l} \| \hat{y}_i - y_i \|^{2}, \tag{1}
+$$
+
+where $w_{ij}$ encodes the similarity between data points $(x_i, x_j)$, and $d_i$ denotes the degree $d_i = \sum_j w_{ij}$ for pixel $i$. The first term is a smoothness constraint that enforces similar points to have identical labels. The second term is a fitting constraint, which penalizes solutions that deviate from the initial observations. The parameter $\mu$ balances these two terms. Semi-supervised classification amounts to solving the following optimization problem,
+
+$$
+\hat{\mathbf{y}} = \operatorname{argmin} \mathcal{Q}(\mathbf{y}). \tag{2}
+$$
+
+It is shown in [49] that the above energy minimization problem can be solved by an iterative algorithm as follows. Let $\mathbf{S} = \mathbf{D}^{-1/2}\mathbf{W}\mathbf{D}^{-1/2}$ be the normalized similarity matrix constructed from $w_{ij}$. Iteratively solve for $\hat{\mathbf{y}}(k)$ until convergence, as
+
+$$
+\hat{\mathbf{y}}(k+1) = \alpha \mathbf{S} \hat{\mathbf{y}}(k) + (1 - \alpha) \mathbf{y}(0), \tag{3}
+$$
+
+where $\alpha = \mu / (\mu + 1)$, and $\mathbf{y}(0) = [y_1, y_2, \dots, y_n]^T$ is the initial label observation, clamped to the supervised labels. The typical value of $\alpha$ is 0.99. The power of this transduction model comes from the globalized model it builds over the dense structures in the unlabeled data.
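+
+A minimal numpy sketch of this iteration is shown below; the toy affinity matrix and label matrix are illustrative.
+
+```python
+import numpy as np
+
+def propagate_labels(W, y0, alpha=0.99, n_iters=100):
+    """Iterative transductive label propagation, Eqn. 3.
+
+    W  : (n, n) symmetric non-negative affinity matrix.
+    y0 : (n, C) initial label matrix; rows of unlabeled points are all zeros.
+    """
+    d = W.sum(axis=1)
+    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
+    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]   # D^{-1/2} W D^{-1/2}
+    y = y0.copy()
+    for _ in range(n_iters):
+        y = alpha * (S @ y) + (1.0 - alpha) * y0
+    return y
+
+# Toy example: 5 points, 2 classes, only the first two points labeled.
+W = np.random.rand(5, 5); W = (W + W.T) / 2
+y0 = np.zeros((5, 2)); y0[0, 0] = 1.0; y0[1, 1] = 1.0
+print(propagate_labels(W, y0).argmax(axis=1))
+```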
+
+# 3.2. Online Video Object Segmentation
+
+Based on this general framework [49], we build a transductive model for semi-supervised video object segmentation that accounts for dense long-range interactions.
+
+This gives rise to three challenges. First, video frames stream sequentially, so the model must work in an online fashion, where the inference of one frame should not depend on future frames. Second, the number of pixels in one video can scale into the tens of millions. A similarity matrix over all the pixels would thus be intractable to compute. Third, an effective similarity measure $W$ needs to be learned between pixels in a video sequence.
+
+For the algorithm to run online, it is assumed that the predictions on all prior frames have been determined when the current frame $t$ arrives. We therefore approximate Eqn. 3 by expanding the inference procedure through time,
+
+$$
+\hat{\mathbf{y}}(t+1) = \mathbf{S}_{1:t \rightarrow t+1} \, \hat{\mathbf{y}}(t). \tag{4}
+$$
+
+$\mathbf{S}_{1:t\to t + 1}$ represents the similarity matrix $\mathbf{S}$ constructed only between the pixels up to the $t$-th frame and the pixels in the $(t+1)$-th frame. Since no labels are provided beyond the first frame, the prior term $\mathbf{y}(0)$ is omitted for frame $t + 1$.
+
+For time $t + 1$, the above propagation procedure equivalently minimizes a set of smoothness terms in the spatio-temporal volume,
+
+$$
+\mathcal{Q}^{t+1}(\hat{\mathbf{y}}) = \sum_{i} \sum_{j} w_{ij} \left\| \frac{\hat{y}_i}{\sqrt{d_i}} - \frac{\hat{y}_j}{\sqrt{d_j}} \right\|^{2}, \tag{5}
+$$
+
+where $i$ indexes the pixels at the target time $t + 1$ , $j$ indexes the pixels in all frames prior to and including time $t$ .
+
+# 3.3. Label Propagation
+
+Given the annotations on the starting frame of a video, we process the remaining frames sequentially, propagating labels to each frame based on Eqn. 4. The quality of video object segmentation heavily depends on the similarity metric $\mathbf{S}$, whose core component is the affinity matrix $\mathbf{W}$.
+
+
+Figure 3: Sampling strategy for label propagation. We sample densely in the recent history, and more sparsely in the distant history.
+
+Similarity metric. In order to build a smooth classification function, the similarity metric should account for global high-level semantics and local low-level spatial continuity. Our similarity measure $w_{ij}$ includes an appearance term and a spatial term,
+
+$$
+w_{ij} = \exp\left(f_i^T f_j\right) \cdot \exp\left(-\frac{\left\| \operatorname{loc}(i) - \operatorname{loc}(j) \right\|^{2}}{\sigma^{2}}\right), \tag{6}
+$$
+
+where $f_{i}, f_{j}$ are the feature embeddings for pixels $p_{i}, p_{j}$ through a convolutional neural network. $\mathrm{loc}(i)$ is the spatial location of pixel $i$ . The spatial term is controlled by a locality parameter $\sigma$ . Learning of the appearance model is described in the next section.
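+
+A small numpy sketch of this affinity computation is given below; feature and coordinate shapes are illustrative, and in practice the embeddings come from the network of Section 3.4.
+
+```python
+import numpy as np
+
+def pixel_affinity(f_target, f_ref, loc_target, loc_ref, sigma):
+    """Pairwise affinities of Eqn. 6 between target and reference pixels.
+
+    f_target : (n, d) embeddings of the target-frame pixels.
+    f_ref    : (m, d) embeddings of the reference-frame pixels.
+    loc_*    : (n, 2) / (m, 2) pixel coordinates on the feature grid.
+    """
+    appearance = np.exp(f_target @ f_ref.T)                                # exp(f_i^T f_j)
+    sq_dist = ((loc_target[:, None, :] - loc_ref[None, :, :]) ** 2).sum(-1)
+    spatial = np.exp(-sq_dist / sigma ** 2)
+    return appearance * spatial                                            # (n, m)
+
+# Example with random stand-ins for embeddings and locations.
+f_t, f_r = np.random.randn(4, 256), np.random.randn(6, 256)
+loc_t, loc_r = np.random.rand(4, 2) * 60, np.random.rand(6, 2) * 60
+print(pixel_affinity(f_t, f_r, loc_t, loc_r, sigma=8.0).shape)
+```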
+
+Frame sampling. Computing a similarity matrix $\mathbf{S}$ over all the previous frames is computationally infeasible, as long videos can span hundreds of frames or more. Inspired by Temporal Segment Networks [43], we sample a small number of frames, exploiting the temporal redundancy in videos.
+
+Specifically, as in Figure 3, we sample a total of 9 frames from the preceding 40 frames: the 4 consecutive frames before the target frame to model the local motion, and 5 more frames sparsely sampled from the remaining 36 frames to model long-term interactions. We find this sampling strategy to strike a good balance between efficiency and effectiveness. Detailed ablations about the choice of frame sampling are presented in the experiments.
+
+A simple motion prior. Pixels that are more distant in the temporal domain have weaker spatial dependencies. To integrate this knowledge, we use a simple motion prior where a smaller $\sigma = 8$ is used when the temporal references are sampled locally and densely, and a larger $\sigma = 21$ is employed when the reference frames are distant. We find this simple motion model to be effective for finding long-term dependencies.
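+
+The sketch below combines the sampling strategy and this motion prior: it returns reference-frame indices together with the $\sigma$ used for each. The exact way the 5 sparse frames are spread over the distant history is an assumption.
+
+```python
+def sample_reference_frames(t, n_recent=4, n_sparse=5, window=40):
+    """Reference frames for predicting frame t (cf. Figure 3).
+
+    Takes the `n_recent` frames immediately preceding t, plus `n_sparse`
+    frames spread over the rest of the preceding `window` frames, and
+    pairs each index with the locality parameter of the motion prior:
+    sigma = 8 for the dense recent frames, 21 for the distant ones.
+    """
+    start = max(0, t - window)
+    recent = list(range(max(start, t - n_recent), t))
+    remaining = list(range(start, t - n_recent))
+    sparse = []
+    if remaining:
+        step = max(1, len(remaining) // n_sparse)
+        sparse = remaining[::step][:n_sparse]
+    return [(k, 21.0) for k in sparse] + [(k, 8.0) for k in recent]
+
+print(sample_reference_frames(t=50))   # 5 distant + 4 recent references
+print(sample_reference_frames(t=3))    # early frames: fewer references available
+```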
+
+| Methods | Architecture | Optical Flow | Proposal | Tracking | Re-ID |
+| --- | --- | --- | --- | --- | --- |
+| DyeNet [29] | ResNet-101 | ✓ | ✓ | ✓ | ✓ |
+| CNN-MRF [3] | Deeplab | ✓ | ✗ | ✗ | ✗ |
+| PReMVOS [30] | Deeplab-V3+ | ✓ | ✓ | ✓ | ✓ |
+| FEELVOS [41] | Deeplab-V3+ | ✗ | ✗ | ✗ | ✗ |
+| STM [33] | 2×ResNet-50 | ✗ | ✗ | ✗ | ✗ |
+| TVOS (ours) | ResNet-50 | ✗ | ✗ | ✗ | ✗ |
+
+Table 1: A brief overview of leading VOS methods with dependent modules for other related vision tasks.
+
+# 3.4. Learning the appearance embedding
+
+We learn the appearance embedding in a data-driven fashion using a 2D convolutional neural network. The embedding aims to capture both short-term and long-term variations due to motion, scale and deformations. The embedding is learned from the training data in which each frame from the video is annotated with the segmented object and the object identity.
+
+Given a target pixel $x_{i}$, we consider all pixels in the prior frames as references. Denote by $f_{i}$ and $f_{j}$ the feature embeddings of the target pixel $x_{i}$ and of a reference pixel $x_{j}$, respectively. Then the predicted label $\hat{y}_{i}$ of $x_{i}$ is given by
+
+$$
+\hat{y}_i = \sum_{j} \frac{\exp\left(f_i^T f_j\right)}{\sum_{k} \exp\left(f_i^T f_k\right)} \cdot y_j, \tag{7}
+$$
+
+where the reference indexes $j, k$ span the temporal history before the current frame. We show detailed ablations on how sampling the historical frames affects the learning quality.
+
+We optimize the embedding via a standard cross-entropy loss on all pixels in the target frame,
+
+$$
+\mathcal{L} = -\sum_{i} \log P\left(\hat{y}_i = y_i \mid x_i\right). \tag{8}
+$$
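+
+A minimal PyTorch sketch of this propagation and training loss is shown below. For clarity it omits the spatial term of Eqn. 6 and any batching; tensor shapes are illustrative.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def propagate_and_loss(f_target, f_ref, y_ref, y_target):
+    """Soft label propagation (Eqn. 7) and cross-entropy loss (Eqn. 8).
+
+    f_target : (n, d) target-pixel embeddings.
+    f_ref    : (m, d) reference-pixel embeddings.
+    y_ref    : (m, C) one-hot (or soft) labels of the reference pixels.
+    y_target : (n,)   ground-truth class indices of the target pixels.
+    """
+    attn = F.softmax(f_target @ f_ref.t(), dim=1)         # (n, m) normalized affinities
+    y_hat = attn @ y_ref                                   # (n, C) propagated soft labels
+    loss = F.nll_loss(torch.log(y_hat + 1e-12), y_target)  # Eqn. 8
+    return y_hat, loss
+
+# Example with random stand-ins for embeddings and labels.
+f_t, f_r = torch.randn(100, 256), torch.randn(300, 256)
+y_r = F.one_hot(torch.randint(0, 3, (300,)), num_classes=3).float()
+y_t = torch.randint(0, 3, (100,))
+y_hat, loss = propagate_and_loss(f_t, f_r, y_r, y_t)
+print(y_hat.shape, loss.item())
+```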
+
+# 3.5. Implementation Details
+
+We use a ResNet-50 to train the embedding model. The convolution stride of the third and the fourth residual blocks is set to 1 to maintain a high-resolution output. We add one additional $1 \times 1$ convolutional layer to project the feature to a final embedding of 256 dimensions. The embedding model produces a feature with a total stride of 8.
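+
+A sketch of such an embedding network is shown below. Removing the strides via dilated convolutions (torchvision's `replace_stride_with_dilation`) is one common way to obtain an output stride of 8; whether the original implementation uses dilation or plain stride-1 convolutions is not specified here, and ImageNet weights would be loaded in practice.
+
+```python
+import torch
+import torch.nn as nn
+from torchvision.models import resnet50
+
+class EmbeddingNet(nn.Module):
+    """ResNet-50 backbone with output stride 8 plus a 1x1 projection to 256-D."""
+
+    def __init__(self, dim=256):
+        super().__init__()
+        backbone = resnet50(weights=None,  # ImageNet-pretrained weights in practice
+                            replace_stride_with_dilation=[False, True, True])
+        self.features = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool/fc
+        self.project = nn.Conv2d(2048, dim, kernel_size=1)
+
+    def forward(self, x):
+        return self.project(self.features(x))
+
+net = EmbeddingNet()
+print(net(torch.randn(1, 3, 256, 256)).shape)   # torch.Size([1, 256, 32, 32])
+```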
+
+During training, we take the pretrained weights from the ImageNet model zoo, and finetune the model on the Davis 2017 [35] training set for 240 epochs and on Youtube-VOS [46] for 30 epochs. We apply the standard augmentations of random flipping and random cropping of size $256 \times 256$ on the input images. We use an SGD solver with an initial learning rate of 0.02 and a cosine annealing scheduler. The optimization takes 16 hours on 4 Tesla P100 GPUs, with a batch size of 16, each containing 10 snippets from a video sequence.
+
+
+Figure 4: The effect of our simple motion model. Distant frames have weaker spatial priors on the location of objects, thus reducing the drifting problem.
+
+During tracking, we extract features at the original image resolution of $480\mathrm{p}$ . The results of each video frame are predicted sequentially online.
+
+# 4. Results
+
+In this section, we first describe our experimental settings and datasets. Then we show detailed ablations on how the transductive approach takes advantage of unlabeled structures in the temporal sequence to significantly improve the performance. Experiments are conducted on various datasets to compare with the state of the art. Finally, we discuss temporal stability and the relationship to optical flow. Our method is abbreviated as TVOS in the result tables.
+
+# 4.1. Experimental Setup
+
+Datasets. We evaluate our method on the Davis 2017 [35], and Youtube-VOS [46] datasets. Our model is trained on the respective training set and evaluated on the validation set. For Davis 2017, we also train our model on the combined train-val set, and submit the results on the testing set on the evaluation server.
+
+Davis 2017 contains 150 video sequences and it involves multiple objects with drastic deformations, heavy and prolonged occlusions, and very fast motions. High-definition annotations are available for all frames in the training sequences. Youtube-VOS is the largest dataset for this task to-date, containing 4453 training sequences and 474 validation sequences. It captures a comprehensive collection of 94 daily object categories. However, the frame rate of the video is much lower than videos in Davis (5 fps compared to 24 fps).
+
+Evaluation metrics. We use the standard evaluation metric of mean intersection over union (mIoU), averaged across objects and summed over all frames. The mIoU is evaluated on both the full objects (J measure) and only on the object boundaries (F measure). The global metric (G measure) is the average of the J and F measures.
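+
+For reference, a minimal sketch of the J measure (per-object IoU) described above; the boundary-based F measure requires contour extraction and is omitted.
+
+```python
+import numpy as np
+
+def j_measure(pred, gt, object_ids):
+    """Mean intersection-over-union over the given object ids for one frame.
+
+    pred, gt: (H, W) integer label maps.
+    """
+    ious = []
+    for obj in object_ids:
+        p, g = pred == obj, gt == obj
+        union = np.logical_or(p, g).sum()
+        if union > 0:
+            ious.append(np.logical_and(p, g).sum() / union)
+    return float(np.mean(ious)) if ious else 0.0
+```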
+
+
+Figure 5: Using dense long-range dependencies improves the tracking performance. The spatial term smooths the object boundaries, while long-term dependencies up to 40 frames help to re-detect objects.
+
+| train / tracking | 1 frame | 3 frames | 9 frames | uniform sample | sparse sample | sparse + motion |
+| --- | --- | --- | --- | --- | --- | --- |
+| 1 frame | 55.8 | 60.4 | 63.4 | 63.8 | 64.0 | 64.3 |
+| 3 frames | 56.0 | 61.4 | 65.4 | 65.5 | 66.1 | 66.7 |
+| 9 frames | 60.7 | 63.4 | 68.6 | 68.6 | 69.0 | 69.9 |
+| uniform sample | 55.8 | 60.2 | 64.4 | 65.0 | 65.1 | 65.3 |
+| sparse sample | 59.9 | 62.9 | 66.2 | 67.2 | 68.5 | 68.6 |
+| Supervised | 47.5 | 52.2 | 53.8 | 54.0 | 54.5 | 54.8 |
+| InstDisc [44] | 42.4 | 47.3 | 51.3 | 51.3 | 52.1 | 52.2 |
+| MoCo [18] | 43.5 | 48.7 | 53.0 | 53.2 | 53.8 | 54.0 |
+
+Table 2: Ablation study on the range of temporal dependencies and the simple motion component. The mean $J$ measure on the Davis 2017 validation set is reported. See text for details.
+
+Youtube-VOS also includes separate measures for seen and unseen objects to assess generalization ability. In Section 4.4, we provide a discussion of temporal stability.
+
+# 4.2. Ablation Study
+
+Dense local and global dependencies. While most prior works focus on optimizing single-frame models, the key idea of this paper is to build dense long-term models over the spatio-temporal volume. In Table 2, we summarize the effect of such long-term potentials, which capture both local and global dependencies. Each row is an appearance embedding model trained with a different reference-frame sampling strategy. Each column corresponds to a tracking sampling strategy. We study the following settings: one reference frame preceding the target frame, 3 consecutive frames preceding the target frame, 9 consecutive frames preceding the target frame, uniform sampling of 9 frames in the preceding 40 frames, and our sparse sampling of 9 frames in the preceding 40 frames as in Figure 3. We find that tracking over a longer term generally improves the performance, and denser sampling near the target frame is helpful. For learning the appearance embedding, training with 9 consecutive frames produces the best results, while longer ranges do not always lead to improvements. This may be because a very long range covers almost the entire video, which reduces the variation in the training data and leads to worse generalization.
+
+In Figure 5, we show some qualitative examples for long range tracking. Using 9 consecutive frames yields more stable predictions than using only the previous frame. Adding the spatial term smoothes the object boundaries. A long range of 40 frames enables the model to re-detect objects after heavy occlusions.
+
+Transferred representations. In the last rows of Table 2, we also test the tracking performance for models pretrained on ImageNet but without further training on the DAVIS dataset. The transferred ImageNet model obtains a mean $J$ measure of $54.8\%$ , which is actually better than some prior methods [47, 10] trained with additional Davis data. Also, even an unsupervised pretrained model on images obtains performance competitive with network modulation [47] using our transductive inference algorithm. Two recent unsupervised pretrained models on ImageNet are investigated [44, 18]. Since no domain-specific training is involved for the appearance embedding, the evaluation of transferred representations clearly validates the effectiveness of dense long-term modeling.
+
+| Methods | FT | J | F | J&F | Speed |
+| --- | --- | --- | --- | --- | --- |
+| OnAVOS [42] | ✓ | 61.0 | 66.1 | 63.6 | 0.08 |
+| DyeNet [29] | ✓ | 67.3 | 71.0 | 69.1 | 0.43 |
+| CNN-MRF [3] | ✓ | 67.2 | 74.2 | 70.7 | 0.03 |
+| PReMVOS [30] | ✓ | 73.9 | 81.7 | 77.8 | 0.03 |
+| Modulation [47] | ✗ | 52.5 | 57.1 | 54.8 | 3.57 |
+| FAVOS [10] | ✗ | 54.6 | 61.8 | 58.2 | 0.83 |
+| VideoMatch [22] | ✗ | 56.5 | 68.2 | 62.4 | 2.86 |
+| RGMP [45] | ✗ | 64.8 | 68.8 | 66.7 | 3.57 |
+| FEELVOS [41] | ✗ | 65.9 | 72.3 | 69.1 | 1.96 |
+| STM [33] | ✗ | 69.2 | 74.0 | 71.6 | 6.25 |
+| STM [33]+Pretrain | ✗ | 79.2 | 84.3 | 81.7 | 6.25 |
+| TVOS | ✗ | 69.9 | 74.7 | 72.3 | 37 |
+
+
+The simple motion prior. As a weak spatial prior for modeling the dependency between distant frames, our simple motion model reduces noise from the model predictions and leads to about $1\%$ improvement. Figure 4 displays two concrete examples. More complicated motion models, such as a linear motion model [1], may be even more effective.
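+
+The exact form of the motion term is defined earlier in the paper; the sketch below only illustrates the general idea of down-weighting references that are spatially far from the target pixel, using an assumed Gaussian falloff with a placeholder bandwidth.
+
+```python
+import torch
+
+def spatial_prior(coords_tgt, coords_ref, sigma=8.0):
+    """Illustrative Gaussian spatial prior (not the paper's exact formulation).
+
+    coords_tgt: (N, 2) pixel coordinates in the target frame
+    coords_ref: (M, 2) pixel coordinates in the reference frames
+    returns:    (N, M) log-prior to be added to the affinity logits before the softmax
+    """
+    d2 = torch.cdist(coords_tgt.float(), coords_ref.float()) ** 2
+    return -d2 / (2.0 * sigma ** 2)
+
+# usage sketch: logits = feat_tgt @ feat_ref.t() + spatial_prior(coords_tgt, coords_ref)
+```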
+
+# 4.3. Quantitative Results
+
+In Table 1, we first give a brief overview of the current leading methods, including those that use first-frame finetuning (CNN-MRF [3], DyeNet [29], PReMVOS [30]) and those that do not (FEELVOS [41], STM [33] and our TVOS). For DyeNet and PReMVOS, their sub-modules are learned on dedicated datasets such as optical flow on Flying Chairs, object proposal on MSCOCO, and object segmentation on PASCAL VOC. Since Davis is much smaller than the large-scale datasets, it remains unknown how much of the gains can be attributed to knowledge transfer or to the methods themselves. Therefore, the mentioned methods are not directly comparable with our method. FEELVOS, STM and ours are much simpler, as they do not rely on additional modules for this problem. STM additionally requires heavy pretraining on large-scale image datasets.
+
+It is also important to note that PReMVOS, DyeNet, and CNN-MRF are not able to run tracking in an online fashion. They use information from future frames to stabilize the prediction for the target frame. Also, instead of using the first frame from the given video for training, they use the first frames from the entire test-dev set for training. Propagation-based methods, in contrast, are able to track objects sequentially online.
+
+DAVIS 2017. We summarize our results on the Davis 2017 validation set in Table 3, and on the Davis 2017 test-dev set in Table 4. On the validation set, our method performs slightly better than STM [33] under the same amount
+
+Table 3: Quantitative evaluation on the Davis 2017 validation set. FT denotes methods that perform online training.
+
+| Methods | FT | J | F | J&F | Speed |
+| --- | --- | --- | --- | --- | --- |
+| OnAVOS [42] | ✓ | 53.4 | 59.6 | 56.5 | 0.08 |
+| DyeNet [29] | ✓ | 65.8 | 70.5 | 68.2 | 0.43 |
+| CNN-MRF [3] | ✓ | 64.5 | 70.5 | 67.5 | 0.02 |
+| PReMVOS [30] | ✓ | 67.5 | 75.7 | 71.6 | 0.02 |
+| RGMP [45] | ✗ | 51.4 | 54.4 | 52.9 | 2.38 |
+| FEELVOS [41] | ✗ | 51.2 | 57.5 | 54.4 | 1.96 |
+| TVOS | ✗ | 58.8 | 67.4 | 63.1 | 37 |
+
+Table 4: Quantitative evaluation on the Davis 2017 test-dev set. FT denotes methods that perform online training.
+
+| Methods | Overall | Seen J | Seen F | Unseen J | Unseen F |
+| --- | --- | --- | --- | --- | --- |
+| RGMP [45] | 53.8 | 59.5 | - | 45.2 | - |
+| OnAVOS [42] | 55.2 | 60.1 | 62.7 | 46.6 | 51.4 |
+| RVOS [40] | 56.8 | 63.6 | 67.2 | 45.5 | 51.0 |
+| OSVOS [7] | 58.8 | 59.8 | 60.5 | 54.2 | 60.7 |
+| S2S [46] | 64.4 | 71.0 | 70.0 | 55.5 | 61.2 |
+| PReMVOS [30] | 66.9 | 71.4 | 75.9 | 56.5 | 63.7 |
+| STM [33]+Pretrain | 79.4 | 79.7 | 84.2 | 72.8 | 80.9 |
+| TVOS | 67.8 | 67.1 | 69.4 | 63.0 | 71.6 |
+| TVOS (from DAVIS) | 67.4 | 66.7 | 69.8 | 62.5 | 70.6 |
+
+Table 5: Quantitative evaluation on the Youtube-VOS validation set.
+
+of training data, while surpassing other propagation-based methods which do not need fine-tuning, by $4\%$ for mean $J$ and $3\%$ for mean $J \& F$ . In comparison to finetuning based methods, our TVOS also outperforms DyeNet and CNN-MRF by $2\%$ while being significantly simpler and faster.
+
+We train our model on the combined training and validation set for evaluating on the test-dev set. We find that there is a large distribution gap between the Davis 2017 test-dev and validation sets. Heavy and prolonged occlusions among objects belonging to the same category are more frequent in the test-dev set, which favors methods with re-identification modules. As a result, we are $4 - 5\%$ lower than DyeNet and CNN-MRF on the test-dev set. FEELVOS is even more negatively affected, performing $8\%$ lower than ours in terms of mean $J\& F$ . STM [33] does not provide an evaluation on the test set.
+
+Youtube-VOS. We summarize the results on the Youtube-VOS validation set in Table 5. Ours surpasses all prior works except STM [33], which relies on heavy pretraining on a variety of segmentation datasets such as saliency detection and instance segmentation. Without pretraining, STM obtains a comparable result of $68.1\%$ . We also test the generalization ability of our model trained on the DAVIS train-val set and evaluated on the Youtube-VOS validation set. The transferred model shows strong generalization ability with an overall score of $67.4\%$ .
+
+Speed analysis. During tracking, we cache the appearance embeddings for a history of up to 40 frames.
+
+Figure 6: Per-frame IoU over time for PReMVOS and our method on example video sequences from the DAVIS validation set. PReMVOS switches object identities frequently, while our predictions are temporally smooth. The color of each IoU curve matches its corresponding object segment.
+
+Figure 7: An example of optical flow computed from our model compared to FlowNet2 (panels: Overlaid Image Pair, Ours, Ours + Smoothness Constraint).
+
+Inference per frame thus only involves a feed-forward pass of the target frame through the base network, and an additional dot product between the target embeddings and the cached prior embeddings. Computation is also constant in the number of objects. This makes our algorithm extremely fast, with a runtime of 37 frames per second on a single Titan Xp GPU. Figure 1 compares current algorithms on their trade-off between speed and performance. Ours is an order of magnitude faster than prior methods, while achieving results comparable to state-of-the-art methods.
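+
+The online tracking loop implied by this description can be sketched as follows; the deque-based cache, the use of all cached frames (instead of the sparse 9-frame sampling of Section 4.2), and the omission of the motion prior are simplifications of ours.
+
+```python
+from collections import deque
+import torch
+
+HISTORY = 40  # maximum number of cached history frames
+
+def track_video(model, frames, first_frame_labels):
+    """Sequential online inference: one forward pass plus one dot product per frame.
+
+    frames:             iterable of (3, H, W) image tensors
+    first_frame_labels: (h*w, K) one-hot labels of the annotated first frame (stride-8 grid)
+    """
+    cache = deque(maxlen=HISTORY)                  # (embedding, soft labels) per past frame
+    results = []
+    for t, frame in enumerate(frames):
+        feat = model(frame.unsqueeze(0))[0]        # (C, h, w)
+        feat = feat.flatten(1).t()                 # (h*w, C)
+        if t == 0:
+            labels = first_frame_labels
+        else:
+            feat_ref = torch.cat([f for f, _ in cache])
+            labels_ref = torch.cat([l for _, l in cache])
+            labels = propagate_labels(feat, feat_ref, labels_ref)   # Eq. 7
+        cache.append((feat, labels))
+        results.append(labels.argmax(dim=1))       # hard per-pixel object ids
+    return results
+```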
+
+# 4.4. Discussions
+
+Temporal stability. Temporal stability is often a desirable property in video object segmentation, as sharp inconsistencies may be disruptive to downstream video analysis. However, temporal stability is typically not included as an evaluation criterion. Here, we give qualitative examples showing the difference in temporal stability between our model and the state-of-the-art PreMVOS [30].
+
+In Figure 6, we show examples of per-frame evaluation along video sequences. Although the state-of-the-art integrates various temporal smoothing modules, such as optical flow, merging and tracking, we observe that the detection-based method is prone to noise. For example, objects are lost suddenly, or are tagged with a different identity. Our method, on the other hand, makes temporally consistent predictions.
+
+Does our model learn optical flow? Our method learns a soft mechanism for associating pixels in the target frame with pixels in the history frames. This is similar to optical flow where hard correspondences are computed between pixels. We examine how much our learned model aligns with optical flow.
+
+We take two adjacent frames and calculate the optical flow from our model as $\Delta d_{i} = \sum_{j}s_{ij}\Delta d_{ij}$ , where $s_{ij}$ is the normalized similarity, and $\Delta d_{ij}$ is the displacement between pixels $i$ and $j$ . Figure 7 shows an example visualization of the flow. Compared to the optical flow computed by FlowNet2 [23], our flow makes sense on objects that would be segmented, but is much noisier on the background. We further added a spatial smoothness constraint on the computed optical flow for jointly learning the embeddings, as widely used for optical flow estimation [15, 24]. We observe that the constraint smooths the optical flow on the background, but fails to regularize the model for tracking. Adding the term consistently hurts the performance of video object segmentation.
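+
+A minimal sketch of this flow read-out, following $\Delta d_{i} = \sum_{j} s_{ij}\Delta d_{ij}$; the coordinate-grid construction and the dense pairwise displacement tensor are illustrative and only practical at low resolution.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def soft_flow(feat_tgt, feat_prev, h, w):
+    """Expected displacement per target pixel between two adjacent frames.
+
+    feat_tgt, feat_prev: (h*w, C) embeddings of the target and previous frame
+    returns:             (h*w, 2) expected (dy, dx) displacement per target pixel
+    """
+    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
+    coords = torch.stack([ys, xs], dim=-1).reshape(-1, 2).float()   # (h*w, 2)
+    s = F.softmax(feat_tgt @ feat_prev.t(), dim=1)                  # normalized similarity s_ij
+    delta = coords.unsqueeze(0) - coords.unsqueeze(1)               # delta[i, j] = d_j - d_i
+    return (s.unsqueeze(-1) * delta).sum(dim=1)                     # sum_j s_ij * delta_ij
+```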
+
+# 5. Conclusion
+
+We present a simple approach to semi-supervised video object segmentation. Our main insight is that much more unlabeled structure in the spatio-temporal volume can be exploited for video object segmentation. Our model finds such structure via transductive inference. The approach is learned end-to-end, without the need of additional modules, additional datasets, or dedicated architectural designs. Our vanilla ResNet50 model achieves competitive performance with a compelling speed of 37 frames per second. We hope our model can serve as a solid baseline for future research.
+
+# References
+
+[1] A. Andriyenko and K. Schindler. Multi-target tracking by continuous energy minimization. In CVPR 2011, pages 1265-1272. IEEE, 2011. 7
+[2] X. Bai, J. Wang, D. Simons, and G. Sapiro. Video snapcut: robust video object cutout using localized classifiers. In ACM Transactions on Graphics (ToG), volume 28, page 70. ACM, 2009. 2
+[3] L. Bao, B. Wu, and W. Liu. Cnn in mrf: Video object segmentation via inference in a cnn-based higher-order spatio-temporal mrf. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5977-5986, 2018. 2, 3, 5, 7
+[4] M. Belkin and P. Niyogi. Semi-supervised learning on manifolds. Machine Learning Journal, 2002. 1
+[5] K. Bernardin and R. Stiefelhagen. Evaluating multiple object tracking performance: the clear mot metrics. Journal on Image and Video Processing, 2008:1, 2008. 1
+[6] A. Blum and S. Chawla. Learning from labeled and unlabeled data using graph mincuts. 2001. 1
+[7] S. Caelles, K.-K. Maninis, J. Pont-Tuset, L. Leal-Taixe, D. Cremers, and L. Van Gool. One-shot video object segmentation. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5320-5329. IEEE, 2017. 2, 7
+[8] S. Chandra, C. Couprie, and I. Kokkinos. Deep spatiotemporal random fields for efficient video segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8915-8924, 2018. 2
+[9] Y. Chen, J. Pont-Tuset, A. Montes, and L. Van Gool. Blazingly fast video object segmentation with pixel-wise metric learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1189-1198, 2018. 2
+[10] J. Cheng, Y.-H. Tsai, W.-C. Hung, S. Wang, and M.-H. Yang. Fast and accurate online video object segmentation via tracking parts. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7415-7424, 2018. 6, 7
+[11] A. Faktor and M. Irani. Video segmentation by non-local consensus voting. In BMVC, volume 2, page 8, 2014. 2
+[12] Q. Fan, F. Zhong, D. Lischinski, D. Cohen-Or, and B. Chen. Jumpcut: non-successive mask transfer and interpolation for video cutout. ACM Trans. Graph., 34(6):195-1, 2015. 2
+[13] M. Farenzena, L. Bazzani, A. Perina, V. Murino, and M. Cristani. Person re-identification by symmetry-driven accumulation of local features. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 2360–2367. IEEE, 2010. 1
+[14] R. Fergus, Y. Weiss, and A. Torralba. Semi-supervised learning in gigantic image collections. In Advances in neural information processing systems, pages 522-530, 2009. 1
+[15] P. Fischer, A. Dosovitskiy, E. Ilg, P. Häusser, C. Hazirbaş, V. Golkov, P. Van der Smagt, D. Cremers, and T. Brox. Flownet: Learning optical flow with convolutional networks. arXiv preprint arXiv:1504.06852, 2015. 1, 3, 8
+[16] H. Grabner, C. Leistner, and H. Bischof. Semi-supervised on-line boosting for robust tracking. In European conference on computer vision, pages 234-247. Springer, 2008. 2
+
+[17] M. Grundmann, V. Kwatra, M. Han, and I. Essa. Efficient hierarchical graph-based video segmentation. In 2010 IEEE computer society conference on computer vision and pattern recognition, pages 2141-2148. IEEE, 2010. 2
+[18] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick. Momentum contrast for unsupervised visual representation learning. arXiv preprint arXiv:1911.05722, 2019. 6
+[19] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961-2969, 2017. 1
+[20] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016. 2
+[21] Y.-T. Hu, J.-B. Huang, and A. Schwing. Maskrnn: Instance level video object segmentation. In Advances in Neural Information Processing Systems, pages 325-334, 2017. 2, 3
+[22] Y.-T. Hu, J.-B. Huang, and A. G. Schwing. Videomatch: Matching based video object segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 54-70, 2018. 2, 7
+[23] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox. Flownet 2.0: Evolution of optical flow estimation with deep networks. In 2017 IEEE conference on computer vision and pattern recognition (CVPR), pages 1647-1655. IEEE, 2017. 8
+[24] J. Janai, F. Güney, A. Ranjan, M. Black, and A. Geiger. Unsupervised learning of multi-frame optical flow with occlusions. In Proceedings of the European Conference on Computer Vision (ECCV), pages 690-706, 2018. 8
+[25] A. Khoreva, R. Benenson, E. Ilg, T. Brox, and B. Schiele. Lucid data dreaming for video object segmentation, 2018. 2
+[26] Y. J. Lee, J. Kim, and K. Grauman. Key-segments for video object segmentation. In 2011 International conference on computer vision, pages 1995-2002. IEEE, 2011. 2
+[27] S. Li, B. Seybold, A. Vorobyov, A. Fathi, Q. Huang, and C.-C. Jay Kuo. Instance embedding transfer to unsupervised video object segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6526-6535, 2018. 2
+[28] S. Li, B. Seybold, A. Vorobyov, X. Lei, and C.-C. Jay Kuo. Unsupervised video object segmentation with motion-based bilateral networks. In Proceedings of the European Conference on Computer Vision (ECCV), pages 207-223, 2018. 2
+[29] X. Li and C. Change Loy. Video object segmentation with joint re-identification and attention-aware mask propagation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 90–105, 2018. 1, 2, 3, 5, 7
+[30] J. Luiten, P. Voigtlaender, and B. Leibe. Premvos: Proposal generation, refinement and merging for video object segmentation. arXiv preprint arXiv:1807.09190, 2018. 1, 2, 3, 5, 7, 8
+[31] K.-K. Maninis, S. Caelles, Y. Chen, J. Pont-Tuset, L. Leal-Taixe, D. Cremers, and L. Van Gool. Video object segmentation without temporal information. arXiv preprint arXiv:1709.06031, 2017. 2
+[32] N. Märki, F. Perazzi, O. Wang, and A. Sorkine-Hornung. Bilateral space video segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 743-751, 2016. 2
+[33] S. W. Oh, J.-Y. Lee, N. Xu, and S. J. Kim. Video object segmentation using space-time memory networks. arXiv preprint arXiv:1904.00607, 2019. 3, 5, 7
+[34] F. Perazzi, A. Khoreva, R. Benenson, B. Schiele, and A. Sorkine-Hornung. Learning video object segmentation from static images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2663-2672, 2017. 2, 3
+[35] J. Pont-Tuset, F. Perazzi, S. Caelles, P. Arbeláez, A. Sorkine-Hornung, and L. Van Gool. The 2017 davis challenge on video object segmentation. arXiv:1704.00675, 2017. 5
+[36] J. Shin Yoon, F. Rameau, J. Kim, S. Lee, S. Shin, and I. So Kweon. Pixel-level matching for video object segmentation using convolutional neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 2167-2176, 2017. 2
+[37] M. Szummer and T. Jaakkola. Partially labeled classification with markov random walks. In Advances in neural information processing systems, pages 945-952, 2002. 1
+[38] D. Tsai, M. Flagg, A. Nakazawa, and J. M. Rehg. Motion coherent tracking using multi-label mrf optimization. International journal of computer vision, 100(2):190-202, 2012. 2
+[39] Y.-H. Tsai, M.-H. Yang, and M. J. Black. Video segmentation via object flow. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3899-3908, 2016. 2
+[40] C. Ventura, M. Bellver, A. Girbau, A. Salvador, F. Marques, and X. Giro-i Nieto. Rvos: End-to-end recurrent network for video object segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5277-5286, 2019. 7
+[41] P. Voigtlaender, Y. Chai, F. Schroff, H. Adam, B. Leibe, and L.-C. Chen. Feelvos: Fast end-to-end embedding learning for video object segmentation. arXiv preprint arXiv:1902.09513, 2019. 2, 5, 7
+[42] P. Voigtlaender and B. Leibe. Online adaptation of convolutional neural networks for video object segmentation. arXiv preprint arXiv:1706.09364, 2017. 2, 7
+[43] L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In European conference on computer vision, pages 20-36. Springer, 2016. 4
+[44] Z. Wu, Y. Xiong, S. X. Yu, and D. Lin. Unsupervised feature learning via non-parametric instance discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3733-3742, 2018. 6
+[45] S. Wug Oh, J.-Y. Lee, K. Sunkavalli, and S. Joo Kim. Fast video object segmentation by reference-guided mask propagation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7376-7385, 2018. 2, 7
+[46] N. Xu, L. Yang, Y. Fan, D. Yue, Y. Liang, J. Yang, and T. Huang. Youtube-vos: A large-scale video object segmentation benchmark. arXiv preprint arXiv:1809.03327, 2018. 5, 7
+[47] L. Yang, Y. Wang, X. Xiong, J. Yang, and A. Katsaggelos. Efficient video object segmentation via network modulation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6499-6507, 2018. 6, 7
+[48] D. Zhang, O. Javed, and M. Shah. Video object segmentation through spatially accurate and temporally dense extraction of primary object regions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 628-635, 2013. 2
+[49] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In Advances in neural information processing systems, pages 321-328, 2004. 1, 2, 3, 4
\ No newline at end of file
diff --git a/atransductiveapproachforvideoobjectsegmentation/images.zip b/atransductiveapproachforvideoobjectsegmentation/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..33130ed4e0512c6731ea72c642ef6a7a803c8390
--- /dev/null
+++ b/atransductiveapproachforvideoobjectsegmentation/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4ddeef8ce1d5af3c7f6acd98dd4398afd0eb23384bc817969785c16a0475d62c
+size 714358
diff --git a/atransductiveapproachforvideoobjectsegmentation/layout.json b/atransductiveapproachforvideoobjectsegmentation/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..de815a75e44335c76c5f2133937797daeefd4349
--- /dev/null
+++ b/atransductiveapproachforvideoobjectsegmentation/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4739b17cc5ed07528c91671595d5cc7803c0f483a5dbe4dcd11cf7b385260fb2
+size 389101
diff --git a/aunetbaseddiscriminatorforgenerativeadversarialnetworks/8a4616ac-445d-4b3f-9830-eba1fd186f58_content_list.json b/aunetbaseddiscriminatorforgenerativeadversarialnetworks/8a4616ac-445d-4b3f-9830-eba1fd186f58_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..4a0cfd2b52167ac7d698abbbdc37f4bc81f72277
--- /dev/null
+++ b/aunetbaseddiscriminatorforgenerativeadversarialnetworks/8a4616ac-445d-4b3f-9830-eba1fd186f58_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6d1d3598a6ee2f7f8e97935c59e8ac8ba4114bb0bbb8fd77ed0e25380871b3bb
+size 77990
diff --git a/aunetbaseddiscriminatorforgenerativeadversarialnetworks/8a4616ac-445d-4b3f-9830-eba1fd186f58_model.json b/aunetbaseddiscriminatorforgenerativeadversarialnetworks/8a4616ac-445d-4b3f-9830-eba1fd186f58_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..efe6db03c4286e33b3b2e4758efca04903d4d7a6
--- /dev/null
+++ b/aunetbaseddiscriminatorforgenerativeadversarialnetworks/8a4616ac-445d-4b3f-9830-eba1fd186f58_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ce80e7bbb4a4a8b2d97aee64f624008f42ff4a2082386af79f17e0aa2e0dda2e
+size 95084
diff --git a/aunetbaseddiscriminatorforgenerativeadversarialnetworks/8a4616ac-445d-4b3f-9830-eba1fd186f58_origin.pdf b/aunetbaseddiscriminatorforgenerativeadversarialnetworks/8a4616ac-445d-4b3f-9830-eba1fd186f58_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..cd773f977c1ddbeb5274cd4c2c4526af8b5b16f8
--- /dev/null
+++ b/aunetbaseddiscriminatorforgenerativeadversarialnetworks/8a4616ac-445d-4b3f-9830-eba1fd186f58_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7a39186555ed70d04d9d3cba87ca74fef1fe8b8f1812b17286e0cd452feedfad
+size 922756
diff --git a/aunetbaseddiscriminatorforgenerativeadversarialnetworks/full.md b/aunetbaseddiscriminatorforgenerativeadversarialnetworks/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..14132b44ec9c67f39befca0682ba5b206584c15e
--- /dev/null
+++ b/aunetbaseddiscriminatorforgenerativeadversarialnetworks/full.md
@@ -0,0 +1,309 @@
+# A U-Net Based Discriminator for Generative Adversarial Networks
+
+Edgar Schonfeld
+
+Bosch Center for Artificial Intelligence
+
+edgar.schoenfeld@bosch.com
+
+Bernt Schiele
+
+Max Planck Institute for Informatics
+
+schiele@mpi-inf.mpg.com
+
+Anna Khoreva
+
+Bosch Center for Artificial Intelligence
+
+anna.khoreva@bosch.com
+
+# Abstract
+
+Among the major remaining challenges for generative adversarial networks (GANs) is the capacity to synthesize globally and locally coherent images with object shapes and textures indistinguishable from real images. To target this issue we propose an alternative U-Net based discriminator architecture, borrowing the insights from the segmentation literature. The proposed U-Net based architecture allows to provide detailed per-pixel feedback to the generator while maintaining the global coherence of synthesized images, by providing the global image feedback as well. Empowered by the per-pixel response of the discriminator, we further propose a per-pixel consistency regularization technique based on the CutMix data augmentation, encouraging the U-Net discriminator to focus more on semantic and structural changes between real and fake images. This improves the U-Net discriminator training, further enhancing the quality of generated samples. The novel discriminator improves over the state of the art in terms of the standard distribution and image quality metrics, enabling the generator to synthesize images with varying structure, appearance and levels of detail, maintaining global and local realism. Compared to the BigGAN baseline, we achieve an average improvement of 2.7 FID points across FFHQ, CelebA, and the proposed COCO-Animals dataset.
+
+# 1. Introduction
+
+The quality of synthetic images produced by generative adversarial networks (GANs) has seen tremendous improvement recently [5, 20]. The progress is attributed to large-scale training [32, 5], architectural modifications [50, 19, 20, 27], and improved training stability via the use of different regularization techniques [34, 51]. However, despite the recent advances, learning to synthesize images with global semantic coherence, long-range structure and the exactness of detail remains challenging.
+
+One source of the problem lies potentially in the discriminator network.
+
+
+Progression during training
+Figure 1: Images produced throughout the training by our U-Net GAN model (top row) and their corresponding per-pixel feedback of the U-Net discriminator (bottom row). The synthetic image samples are obtained from a fixed noise vector at different training iterations. Brighter colors correspond to the discriminator confidence of pixel being real (and darker of being fake). Note that the U-Net discriminator provides very detailed and spatially coherent response to the generator, enabling it to further improve the image quality, e.g. the unnaturally large man's forehead is recognized as fake by the discriminator and is corrected by the generator throughout the training.
+
+The discriminator aims to model the data distribution, acting as a loss function that provides the generator with a learning signal to synthesize realistic image samples. The stronger the discriminator is, the better the generator has to become. In current state-of-the-art GAN models, the discriminator, being a classification network, learns only a representation that allows it to efficiently penalize the generator based on the most discriminative difference between real and synthetic images. Thus, it often focuses either on the global structure or on local details. The problem is amplified as the discriminator has to learn in a non-stationary environment: the distribution of synthetic samples shifts as the generator constantly changes through training, and the discriminator is prone to forgetting previous tasks [7] (in the context of discriminator training, learning semantics, structures, and textures can be considered different tasks). The discriminator is thus not incentivized to maintain a more powerful data representation that captures both global and local image differences. This often results in generated images with discontinued and mottled local structures [27] or images with incoherent geometric and structural patterns (e.g. asymmetric faces or animals with missing legs) [50].
+
+To mitigate this problem, we propose an alternative discriminator architecture, which outputs simultaneously both global (over the whole image) and local (per-pixel) decision of the image belonging to either the real or fake class, see Figure 1. Motivated by the ideas from the segmentation literature, we re-design the discriminator to take a role of both a classifier and segmenter. We change the architecture of the discriminator network to a U-Net [39], where the encoder module performs per-image classification, as in the standard GAN setting, and the decoder module outputs per-pixel class decision, providing spatially coherent feedback to the generator, see Figure 2. This architectural change leads to a stronger discriminator, which is encouraged to maintain a more powerful data representation, making the generator task of fooling the discriminator more challenging and thus improving the quality of generated samples (as also reflected in the generator and discriminator loss behavior in Figure S1). Note that we do not modify the generator in any way, and our work is orthogonal to the ongoing research on architectural changes of the generator [20, 27], divergence measures [25, 1, 37], and regularizations [40, 15, 34].
+
+The proposed U-Net based discriminator allows us to employ the recently introduced CutMix [47] augmentation, which has been shown to be effective for classification networks, for consistency regularization in the two-dimensional output space of the decoder. Inspired by [47], we cut and mix patches from real and synthetic images together, where the ground truth label maps are spatially combined with respect to the real and fake patch class for the segmenter (U-Net decoder), and the class label is set to fake for the classifier (U-Net encoder), as globally the CutMix image should be recognized as fake, see Figure 3. Empowered by the per-pixel feedback of the U-Net discriminator, we further employ these CutMix images for consistency regularization, penalizing per-pixel inconsistent predictions of the discriminator under the CutMix transformations. This fosters the discriminator to focus more on semantic and structural changes between real and fake images and to attend less to domain-preserving perturbations. Moreover, it also helps to improve the localization ability of the decoder. Employing the proposed consistency regularization leads to a stronger generator, which pays more attention to local and global image realism. We call our model U-Net GAN.
+
+We evaluate the proposed U-Net GAN model across several datasets using the state-of-the-art BigGAN model [5] as a baseline and observe an improved quality of the generated samples in terms of the FID and IS metrics. For unconditional image synthesis on FFHQ [20] at resolution $256 \times 256$ , our U-Net GAN model improves 4 FID points over the BigGAN model, synthesizing high quality human faces (see Figure 4). On CelebA [29] at resolution $128 \times 128$ we achieve a 1.6 point FID gain, yielding, to the best of our knowledge, the lowest known FID score of 2.95. For class-conditional image synthesis on the introduced COCO-Animals dataset [28, 24] at resolution $128 \times 128$ we observe an improvement in FID from 16.37 to 13.73, synthesizing diverse images of different animal classes (see Figure 5).
+
+# 2. Related work
+
+Generative adversarial networks. GAN [14] and its conditional variant [33] have recently demonstrated impressive results on different computer vision tasks, including image synthesis [38, 50, 19, 5, 20, 27, 10]. Plenty of effort has been made to improve the training and performance of GANs, from reformulation of the objective function [31, 1, 26, 37] and integration of different regularization techniques [51, 34, 40, 48] to architectural changes [38, 19, 13, 27]. To enhance the quality of generated samples, [38] introduced the DCGAN architecture that employs strided and transposed convolutions. In SAGAN [50] a self-attention block was added to improve the network's ability to model global structure. PG-GAN [19] proposed to grow both the generator and discriminator networks to increase the resolution of generated images. Other lines of work focused mainly on improving the discriminator by exploiting multiple [36, 13, 11] and multi-resolution [45, 42] discriminators, using spatial feedback of the discriminator [17], an auto-encoder architecture with reconstruction-based feedback to the generator [52], or self-supervision to avoid catastrophic forgetting [7]. Most recently, attention has switched back to the generator network. StyleGAN [20] proposed to alter the generator architecture by injecting latent codes into each convolution layer, thus allowing more control over the image synthesis process. COCO-GAN [27] integrated a conditional coordination mechanism into the generator, making image synthesis highly parallelizable. In this paper, we propose to alter the discriminator network to a U-Net based architecture, empowering the discriminator to better capture both global and local structures, enabled by per-pixel discriminator feedback. Local discriminator feedback is also commonly applied through PatchGAN discriminators [18]. Our U-Net GAN extends this idea to dense prediction over the whole image plane, with visual information being integrated over up- and downsampling pathways and through the encoder-decoder skip connections, without trading off local over global realism.
+
+Mix&Cut regularizations. Recently, a few simple yet effective regularization techniques have been proposed, which are based on augmenting the training data by creating synthetic images via mixing or/and cutting samples from different classes. In MixUp [49] the input images and their target labels are interpolated using the same randomly chosen factor. [43] extends [49] by performing interpolation not only in the input layer but also in the intermediate layers. CutOut [9] augments an image by masking a rectangular region to zero. Differently, CutMix [47] augments training data by creating synthetic images via cutting and pasting patches from image samples of different classes, marrying the best aspects of MixUp and CutOut. Other works employ the Mix&Cut approaches for consistency regularization [44, 4, 51], i.e. penalizing the classification network sensitivity to samples generated via MixUp or CutOut [49, 9]. In our work, we propose the consistency regularization under the CutMix transformation in the pixel output space of our U-Net discriminator. This helps to improve its localization quality and induce it to attend to nondiscriminative differences between real and fake regions.
+
+# 3. U-Net GAN Model
+
+A "vanilla" GAN consists of two networks: a generator $G$ and a discriminator $D$ , trained by minimizing the following competing objectives in an alternating manner:
+
+$$
+\begin{array}{l} \mathcal{L}_{D} = -\mathbb{E}_{x}[\log D(x)] - \mathbb{E}_{z}[\log(1 - D(G(z)))], \tag{1} \\ \mathcal{L}_{G} = -\mathbb{E}_{z}[\log D(G(z))]. \end{array}
+$$
+
+$G$ aims to map a latent variable $z \sim p(z)$ sampled from a prior distribution to a realistic-looking image, while $D$ aims to distinguish between real $x$ and generated $G(z)$ images. Ordinarily, $G$ and $D$ are modeled as a decoder and an encoder convolutional network, respectively.
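+
+As a reference point, a minimal sketch of the two objectives in Eq. 1, written with binary cross-entropy on raw logits; the helper names are ours and the choice of logits over probabilities is an implementation convenience.
+
+```python
+import torch
+import torch.nn.functional as F
+
+def d_loss(d_real_logits, d_fake_logits):
+    """Discriminator loss of Eq. 1 (binary cross-entropy on logits)."""
+    real = F.binary_cross_entropy_with_logits(d_real_logits, torch.ones_like(d_real_logits))
+    fake = F.binary_cross_entropy_with_logits(d_fake_logits, torch.zeros_like(d_fake_logits))
+    return real + fake
+
+def g_loss(d_fake_logits):
+    """Non-saturating generator loss of Eq. 1: maximize log D(G(z))."""
+    return F.binary_cross_entropy_with_logits(d_fake_logits, torch.ones_like(d_fake_logits))
+```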
+
+While there are many variations of the GAN objective function and its network architectures [23, 30], in this paper we focus on improving the discriminator network. In Section 3.1, we propose to alter the $D$ architecture from a standard classification network to an encoder-decoder network - U-Net [39], leaving the underlying basic architecture of $D$ - the encoder part - untouched. The proposed discriminator allows to maintain both global and local data representation, providing more informative feedback to the generator. Empowered by local per-pixel feedback of the U-Net decoder module, in Section 3.2 we further propose a consistency regularization technique, penalizing per-pixel inconsistent predictions of the discriminator under the CutMix transformations [47] of real and fake images. This helps to improve
+
+
+Figure 2: U-Net GAN. The proposed U-Net discriminator classifies the input images on a global and local per-pixel level. Due to the skip-connections between the encoder and the decoder (dashed line), the channels in the output layer contain both high- and low-level information. Brighter colors in the decoder output correspond to the discriminator confidence of pixel being real (and darker of being fake).
+
+the localization quality of the U-Net discriminator and induce it to attend more to semantic and structural changes between real and fake samples. We call our model $U$ -Net GAN. Note that our method is compatible with most GAN models as it does not modify the generator in any way and leaves the original GAN objective intact.
+
+# 3.1. U-Net Based Discriminator
+
+Encoder-decoder networks [2, 39] constitute a powerful method for dense prediction. U-Nets [39] in particular have demonstrated state-of-the-art performance in many complex image segmentation tasks. In these methods, similarly to image classification networks, the encoder progressively downsamples the input, capturing the global image context. The decoder performs progressive upsampling, matching the output resolution to the input one and thus enabling precise localization. Skip connections route data between the matching resolutions of the two modules, further improving the ability of the network to accurately segment fine details.
+
+Analogously, in this work, we propose to extend a discriminator to form a U-Net, by reusing building blocks of the original discriminator classification network as an encoder part and building blocks of the generator network as the decoder part. In other words, the discriminator now consists of the original downsampling network and a new upsampling network. The two modules are connected via a bottleneck, as well as skip-connections that copy and concatenate feature maps from the encoder and the decoder modules, following [39]. We will refer to this discriminator as $D^{U}$ . While the original $D(x)$ classifies the input image $x$ into being real and fake, the U-Net discriminator $D^{U}(x)$ additionally performs this classification on a per-pixel basis, segmenting image $x$ into real and fake regions, along with the original image classification of $x$ from the encoder,
+
+see Figure 2. This enables the discriminator to learn both global and local differences between real and fake images.
+
+Hereafter, we refer to the original encoder module of the discriminator as $D_{enc}^{U}$ and to the introduced decoder module as $D_{dec}^{U}$ . The new discriminator loss can now be computed by taking the decisions from both $D_{enc}^{U}$ and $D_{dec}^{U}$ :
+
+$$
+\mathcal{L}_{D^{U}} = \mathcal{L}_{D_{enc}^{U}} + \mathcal{L}_{D_{dec}^{U}}, \tag{2}
+$$
+
+where, similarly to Eq. 1, the loss for the encoder $\mathcal{L}_{D_{enc}^{U}}$ is computed from the scalar output of $D_{enc}^{U}$ :
+
+$$
+\mathcal{L}_{D_{enc}^{U}} = -\mathbb{E}_{x}\left[\log D_{enc}^{U}(x)\right] - \mathbb{E}_{z}\left[\log\left(1 - D_{enc}^{U}(G(z))\right)\right], \tag{3}
+$$
+
+and the loss for the decoder $\mathcal{L}_{D_{dec}^{U}}$ is computed as the mean decision over all pixels:
+
+$$
+\begin{array}{l} \mathcal{L}_{D_{dec}^{U}} = -\mathbb{E}_{x}\left[\sum_{i,j} \log\left[D_{dec}^{U}(x)\right]_{i,j}\right] \\ \quad -\mathbb{E}_{z}\left[\sum_{i,j} \log\left(1 - \left[D_{dec}^{U}(G(z))\right]_{i,j}\right)\right]. \tag{4} \end{array}
+$$
+
+Here, $[D_{dec}^{U}(x)]_{i,j}$ and $[D_{dec}^{U}(G(z))]_{i,j}$ refer to the discriminator decision at pixel $(i,j)$ . These per-pixel outputs of $D_{dec}^{U}$ are derived based on global information from high-level features, enabled through the process of upsampling from the bottleneck, as well as more local information from low-level features, mediated by the skip connections from the intermediate layers of the encoder network.
+
+Correspondingly, the generator objective becomes:
+
+$$
+\begin{array}{l} \mathcal{L}_{G} = -\mathbb{E}_{z}\left[\log D_{enc}^{U}(G(z)) \right. \\ \quad \left. + \sum_{i,j} \log\left[D_{dec}^{U}(G(z))\right]_{i,j}\right], \tag{5} \end{array}
+$$
+
+encouraging the generator to focus on both global structures and local details while synthesizing images in order to fool the more powerful discriminator $D^{U}$ .
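+
+A hedged sketch of the two-headed objective in Eqs. 2-5, reusing the hypothetical `d_loss` and `g_loss` helpers above; it assumes the discriminator returns a scalar encoder logit per image and a per-pixel decoder logit map, and it averages rather than sums over pixels (a constant-factor difference from Eq. 4).
+
+```python
+def unet_d_loss(enc_real, dec_real, enc_fake, dec_fake):
+    """L_{D^U} = L_{D_enc^U} + L_{D_dec^U} (Eqs. 2-4).
+
+    enc_*: (B,)       scalar encoder logits (global real/fake decision)
+    dec_*: (B, H, W)  per-pixel decoder logits
+    """
+    return d_loss(enc_real, enc_fake) + d_loss(dec_real, dec_fake)
+
+def unet_g_loss(enc_fake, dec_fake):
+    """Generator objective of Eq. 5: fool both the global and the per-pixel head."""
+    return g_loss(enc_fake) + g_loss(dec_fake)
+```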
+
+# 3.2. Consistency Regularization
+
+Here we present the consistency regularization technique for the U-Net based discriminator introduced in the previous section. The per-pixel decision of the well-trained $D^{U}$ discriminator should be equivariant under any class-domain-altering transformations of images. However, this property is not explicitly guaranteed. To enable it, the discriminator should be regularized to focus more on semantic and structural changes between real and fake samples and to pay less attention to arbitrary class-domain-preserving perturbations. Therefore, we propose the consistency regularization of the $D^{U}$ discriminator, explicitly encouraging the decoder module $D_{dec}^{U}$ to output equivariant predictions under the CutMix transformations [47] of real and fake samples. The
+
+
+Figure 3: Visualization of the CutMix augmentation and the predictions of the U-Net discriminator on CutMix images. 1st row: real and fake samples. 2nd&3rd rows: sampled real/fake CutMix ratio $r$ and corresponding binary masks M (color code: white for real, black for fake). 4th row: generated CutMix images from real and fake samples. 5th&6th row: the corresponding real/fake segmentation maps of $D^{U}$ with its predicted classification scores.
+
+CutMix augmentation creates synthetic images via cutting and pasting patches from images of different classes. We choose CutMix among other Mix&Cut strategies (cf. Section 2) as it does not alter the real and fake image patches used for mixing, in contrast to [49], preserving their original class domain, and provides a large variety of possible outputs. We visualize the CutMix augmentation strategy and the $D^{U}$ predictions in Figure 3.
+
+Following [47], we synthesize a new training sample $\tilde{x}$ for the discriminator $D^{U}$ by mixing $x$ and $G(z)\in$ $\mathbb{R}^{W\times H\times C}$ with the mask M:
+
+$$
+\begin{array}{l} \tilde{x} = \operatorname{mix}(x, G(z), \mathrm{M}), \\ \operatorname{mix}(x, G(z), \mathrm{M}) = \mathrm{M} \odot x + (1 - \mathrm{M}) \odot G(z), \tag{6} \end{array}
+$$
+
+where $\mathrm{M} \in \{0,1\}^{W \times H}$ is the binary mask indicating if the pixel $(i,j)$ comes from the real $(\mathrm{M}_{i,j} = 1)$ or fake $(\mathrm{M}_{i,j} = 0)$ image, $1$ is a binary mask filled with ones, and $\odot$ is an element-wise multiplication. In contrast to [47], the class label $c \in \{0,1\}$ for the new CutMix image $\tilde{x}$ is set to be fake, i.e. $c = 0$ . Globally the mixed synthetic image should be recognized as fake by the encoder $D_{enc}^{U}$ , otherwise the generator can learn to introduce the CutMix augmentation into generated samples, causing undesirable artifacts. Note that for the synthetic sample $\tilde{x}$ , $c = 0$ and $\mathrm{M}$
+
+are the ground truth for the encoder and decoder modules of the discriminator $D^{U}$ , respectively.
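+
+A minimal sketch of the mixing operation in Eq. 6; the mask is assumed to be a (B, 1, H, W) tensor broadcast over channels.
+
+```python
+def cutmix(real, fake, mask):
+    """Eq. 6: M * x + (1 - M) * G(z).
+
+    real, fake: (B, C, H, W) images
+    mask:       (B, 1, H, W) binary mask, 1 where the pixel comes from the real image
+    """
+    return mask * real + (1.0 - mask) * fake
+```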
+
+Given the CutMix operation in Eq. 6, we train the discriminator to provide consistent per-pixel predictions, i.e. $D_{dec}^{U}\big(\mathrm{mix}(x,G(z),\mathrm{M})\big)\approx \mathrm{mix}\big(D_{dec}^{U}(x),D_{dec}^{U}(G(z)),\mathrm{M}\big)$ , by introducing the consistency regularization loss term in the discriminator objective:
+
+$$
+\begin{array}{l} \mathcal{L}_{D_{dec}^{U}}^{cons} = \left\| D_{dec}^{U}\left(\operatorname{mix}(x, G(z), \mathrm{M})\right) \right. \\ \quad \left. - \operatorname{mix}\left(D_{dec}^{U}(x), D_{dec}^{U}(G(z)), \mathrm{M}\right) \right\|^{2}, \tag{7} \end{array}
+$$
+
+where $\| \cdot \|$ denotes the $L^2$ norm. This consistency loss is then taken between the per-pixel output of $D_{dec}^{U}$ on the CutMix image and the CutMix between the outputs of $D_{dec}^{U}$ on real and fake images, penalizing the discriminator for inconsistent predictions.
+
+We add the loss term in Eq. 7 to the discriminator objective in Eq. 2 with a weighting hyper-parameter $\lambda$ :
+
+$$
+\mathcal{L}_{D^{U}} = \mathcal{L}_{D_{enc}^{U}} + \mathcal{L}_{D_{dec}^{U}} + \lambda \mathcal{L}_{D_{dec}^{U}}^{cons}. \tag{8}
+$$
+
+The generator objective $\mathcal{L}_G$ remains unchanged, see Eq. 5.
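+
+A hedged sketch of the consistency term in Eq. 7 and the combined objective in Eq. 8, reusing the hypothetical `cutmix` and `unet_d_loss` helpers above; stopping gradients through the mixed target and summing (rather than averaging) the squared error are our own choices.
+
+```python
+def consistency_loss(dec_mixed, dec_real, dec_fake, mask):
+    """|| D_dec(mix(x, G(z), M)) - mix(D_dec(x), D_dec(G(z)), M) ||^2 (Eq. 7).
+
+    dec_*: (B, 1, H, W) per-pixel decoder outputs (after the sigmoid)
+    """
+    target = cutmix(dec_real, dec_fake, mask).detach()   # gradient stop is an assumption
+    return ((dec_mixed - target) ** 2).sum()
+
+# combined discriminator objective (Eq. 8):
+# loss_d = unet_d_loss(enc_real, dec_real, enc_fake, dec_fake) + lam * consistency_loss(...)
+```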
+
+In addition to the proposed consistency regularization, we also use CutMix samples for training both the encoder and decoder modules of $D^{U}$ . Note that for the U-Net GAN we use the non-saturating GAN objective formulation [14]. However, the introduced consistency regularization as well as the U-Net architecture of the discriminator can be combined with any other adversarial losses [1, 26, 37].
+
+# 3.3. Implementation
+
+Here we discuss implementation details of the U-Net GAN model proposed in Section 3.1 and 3.2.
+
+U-Net based discriminator. We build upon the recent state-of-the-art BigGAN model [5], and extend its discriminator with our proposed changes. We adopt the BigGAN generator and discriminator architectures for the $256 \times 256$ (and $128 \times 128$ ) resolution with a channel multiplier $ch = 64$ , as described in detail in [5]. The original BigGAN discriminator downsamples the input image to a feature map of dimensions $16ch \times 4 \times 4$ , on which global sum pooling is applied to derive a $16ch$ dimensional feature vector that is classified into real or fake. In order to turn the discriminator into a U-Net, we copy the generator architecture and append it to the $4 \times 4$ output of the discriminator. In effect, the features are successively upsampled via ResNet blocks until the original image resolution $(H \times W)$ is reached. To complete the U-Net, the input to every decoder ResNet block is concatenated to the output features of the encoder blocks that share the same intermediate resolution. In this way, high-level and low-level information are effectively integrated on the way to the output feature map.
+
+Hereby, the decoder architecture is almost identical to the generator, with the exception that we change the number of channels of the final output from 3 to $ch$ , append a final block of $1 \times 1$ convolutions to produce the $1 \times H \times W$ output map, and do not use class-conditional BatchNorm [8, 12] in either the decoder or the encoder. Similarly to [5], we provide class information to $D^{U}$ with projection [35] to the $ch$ -dimensional channel features of the U-Net encoder and decoder output. In contrast to [5] and in alignment with [6], we find it beneficial not to use a hierarchical latent space, but to directly feed the same input vector $z$ to BatchNorm at every layer in the generator. Lastly, we also remove the self-attention layer in both the encoder and the decoder, as in our experiments it did not contribute to the performance but led to memory overhead. While the original BigGAN is a class-conditional model, we additionally devise an unconditional version for our experiments. For the unconditional model, we replace class-conditional BatchNorm with self-modulation [6], where the BatchNorm parameters are conditioned only on the latent vector $z$ , and do not use the class projection of [35] in the discriminator.
+
+All these modifications leave us with a two-headed discriminator. We compute the GAN loss at both heads with equal weight. Analogously to BigGAN, we keep the hinge loss [50] in all basic U-Net models, while the models that also employ the consistency regularization in the decoder output space benefit from using the non-saturating loss [14]. Our implementation builds on top of the original BigGAN PyTorch implementation.
+
+Consistency regularization. For each training iteration a mini-batch of CutMix images $(\tilde{x}, c = 0, \mathrm{M})$ is created with probability $p_{mix}$ . This probability is increased linearly from 0 to 0.5 over the first $n$ epochs in order to give the generator time to learn how to synthesize more real looking samples and not to give the discriminator too much power from the start. CutMix images are created from the existing real and fake images in the mini-batch using binary masks M. For sampling M, we follow the original CutMix implementation: first sampling the combination ratio $r$ between the real and generated images from the uniform distribution (0,1), and then uniformly sampling the bounding box coordinates for the cropping regions of $x$ and $G(z)$ to preserve the ratio $r$ , i.e. $r = \frac{|M|}{W * H}$ (see Figure 3). Binary masks M also serve as the target for the decoder $D_{dec}^{U}$ , while we use fake, i.e. $c = 0$ , as the target for the encoder $D_{enc}^{U}$ . We set $\lambda = 1.0$ as it empirically showed to be a good choice. Note that the consistency regularization does not impose much overhead during training. Extra computational cost comes only from feeding additional CutMix images through the discriminator while updating its parameters.
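+
+A sketch of this mask-sampling procedure (ratio $r \sim U(0,1)$, then a box whose area preserves $r = |M| / (W \cdot H)$); as in the original CutMix, clipping the box at the image border can slightly change the realized ratio, and all names are ours.
+
+```python
+import torch
+
+def sample_cutmix_masks(batch, height, width):
+    """One binary mask per image: 1 = pixel taken from the real image, 0 = from the fake."""
+    masks = torch.ones(batch, 1, height, width)
+    for b in range(batch):
+        r = torch.rand(1).item()                    # fraction of pixels kept real
+        cut_h = int(height * (1.0 - r) ** 0.5)      # fake box area ~ (1 - r) * H * W
+        cut_w = int(width * (1.0 - r) ** 0.5)
+        cy = torch.randint(0, height, (1,)).item()
+        cx = torch.randint(0, width, (1,)).item()
+        y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, height)
+        x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, width)
+        masks[b, :, y1:y2, x1:x2] = 0.0             # paste the fake patch here
+    return masks
+```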
+
+
+Figure 4: Images generated with U-Net GAN on FFHQ with resolution $256 \times 256$ when interpolating in the latent space (left to right). Note the high quality of synthetic samples and very smooth interpolations, maintaining global and local realism.
+
+# 4. Experiments
+
+# 4.1. Experimental Setup
+
+Datasets. We consider three datasets: FFHQ [20], CelebA [29] and the subset of the COCO [28] and OpenImages [24] images containing animal classes, which we will further on refer to as COCO-Animals. We use FFHQ and CelebA for unconditional image synthesis and COCO-Animals for class-conditional image synthesis, where the class label is used. We experiment with $256 \times 256$ resolution for FFHQ and $128 \times 128$ for CelebA and COCO-Animals.
+
+CelebA is a human face dataset of 200k images, featuring $\sim 10\mathrm{k}$ different celebrities with a variety of facial poses and expressions. Similarly, FFHQ is a more recent dataset of human faces, consisting of 70k high-quality images with higher variation in terms of age, ethnicity, accessories, and viewpoints. The proposed COCO-Animals dataset consists of $\sim 38\mathrm{k}$ training images belonging to 10 animal classes, where we choose COCO and OpenImages (using the human verified subset with mask annotations) samples in the categories bird, cat, dog, horse, cow, sheep, giraffe, zebra, elephant, and monkey. With its relatively small size and imbalanced number of images per class, as well as its variation in poses, shapes, number of objects, and backgrounds, COCO-Animals presents a challenging task for class-conditional image synthesis. We choose to compose this dataset in order to perform conditional image generation in the mid- to high-resolution regime, with a reasonable computational budget and feasible training time. Other datasets in this order of size either have too few examples per class (e.g. AwA [46]) or too little inter- and intra-class variability. In contrast, the intra-class variability of COCO-Animals is very high for certain classes, e.g. bird and monkey, which span many subspecies. For more details we refer to Sections D and E in the supplementary material.
+
+Evaluation metrics. For quantitative evaluation we use the Fréchet Inception distance (FID) [16] as the main metric, and additionally consider the Inception score (IS) [41]. Between the two, FID is a more comprehensive metric, which has been shown to be more consistent with human evaluation in assessing the realism and variation of the generated images [16], while IS is limited by what the Inception classifier can recognise, which is directly linked to its training data [3]. If one learns to generate something not present in the classifier's training data (e.g. human faces) then IS can still be low despite generating high quality images since that image does not get classified as a distinct class.
+
+In all our experiments, FID and IS are computed using 50k synthetic images, following [19]. By default all reported numbers correspond to the best or median FID of five independent runs achieved with 400k training iterations for FFHQ and COCO-Animals, and 800k training iterations for CelebA. For evaluation, we employ moving averages of the generator weights following [5, 19], with a decay of 0.9999.
+
+Figure 5: Images generated with U-Net GAN trained on COCO-Animals with resolution $128 \times 128$ .
+
+Note that we do not use any truncation tricks or rejection sampling for image generation.
+
+Training details. We adopt the original training parameters of [5]. In particular, we use a uniformly distributed noise vector $z \in [-1, 1]^{140}$ as input to the generator, and the Adam optimizer [22] with learning rates of $1e-4$ and $5e-4$ for $G$ and $D^U$ . The number of warmup epochs $n$ for consistency regularization is set to 200 for COCO-Animals and 20 for FFHQ and CelebA. In contrast to [5], we operate with considerably smaller mini-batch sizes: 20 for FFHQ, 50 for CelebA and 80 for COCO-Animals. See Section F and B in the supplementary material for more details.
+
+# 4.2. Results
+
+We first test our proposed U-Net discriminator in two settings: unconditional image synthesis on FFHQ and class-conditional image synthesis on COCO-Animals, using the BigGAN model [5] as a baseline for comparison. We report our key results in Table 1 and Figure 6.
+
+In the unconditional case, our model achieves an FID score of 7.48, which is an improvement of 4.0 FID points over the canonical BigGAN discriminator (see Table 1). In addition, the new U-Net discriminator also improves over the baseline in terms of the IS metric (3.97 vs. 4.46). The same effect is observed for the conditional image generation setting. Here, our U-Net GAN achieves an FID of 13.73, improving 2.64 points over BigGAN, and increases the IS score from 11.77 to 12.29. Figure 6 visualizes the mean FID behaviour over training across 5 independent runs. From Figure 6 it is evident that the FID score drops for both models at a similar rate, with a constant offset and a smaller standard deviation for the U-Net GAN model. These results showcase the high potential of the new U-Net based discriminator. For a detailed comparison of the FID mean, median and standard deviation across 5 runs we refer to Table S2 in the supplementary material.
+
+Qualitative results on FFHQ and COCO-Animals are shown in Figure 4 and Figure 5. Figure 4 displays human faces generated by U-Net GAN through linear interpolation in the latent space between two synthetic samples. We
+
+| Method | FFHQ Best FID↓ / IS↑ | FFHQ Median FID↓ / IS↑ | COCO-Animals Best FID↓ / IS↑ | COCO-Animals Median FID↓ / IS↑ |
+| --- | --- | --- | --- | --- |
+| BigGAN [5] | 11.48 / 3.97 | 12.42 / 4.02 | 16.37 / 11.77 | 16.55 / 11.78 |
+| U-Net GAN | 7.48 / 4.46 | 7.63 / 4.47 | 13.73 / 12.29 | 13.87 / 12.31 |
+
+Table 1: Evaluation results on FFHQ and COCO-Animals. We report the best and median FID score across 5 runs and its corresponding IS, see Section 4.2 for discussion.
+
+
+Figure 6: FID curves over iterations of the BigGAN model (blue) and the proposed U-Net GAN (red). Depicted are the FID mean and standard deviation across 5 runs per setting.
+
+observe that the interpolations are semantically smooth between faces, i.e. an open mouth gradually becomes a closed mouth, hair progressively grows in length, beards smoothly fade or appear, and hair color changes seamlessly. Furthermore, we notice that on several occasions men appear with pink beards. As FFHQ contains a fair share of people with pink hair, we suspect that our generator extrapolates hair color to beards, enabled by the global and local $D^{U}$ feedback during the training. Figure 5 shows generated samples on COCO-Animals. We observe diverse images of high quality. We further notice that employing the class-conditional projection (as used in BigGAN) in the pixel output space of the decoder does not introduce class leakage or influence the class separation in any other way. These observations confirm that our U-Net GAN is effective in both unconditional and class-conditional image generation.
+
+Ablation Study. In Table 2 we next analyze the individual effect of each of the proposed components of the U-Net
+
+| Method | COCO-Animals | FFHQ |
+| --- | --- | --- |
+| BigGAN [5] | 16.55 | 12.42 |
+| U-Net based discriminator | 15.86 | 10.86 |
+| + CutMix augmentation | 14.95 | 10.30 |
+| + Consistency regularization | 13.87 | 7.63 |
+
+Table 2: Ablation study of U-Net GAN on FFHQ and COCO-Animals. Shown are the median FID scores. The proposed components lead to better performance, on average improving the median FID by 3.7 points over BigGAN.
+
+| Method | FID↓ | IS↑ |
+| --- | --- | --- |
+| PG-GAN [19] | 7.30 | - |
+| COCO-GAN [27] | 5.74 | - |
+| BigGAN [5] | 4.54 | 3.23 |
+| U-Net GAN | 2.95 | 3.43 |
+
+Table 3: Comparison with the state-of-the-art models on CelebA (128 × 128). See Section 4.2 for discussion.
+
+GAN model (see Section 3 for details) to the baseline BigGAN architecture on the FFHQ and COCO-Animals datasets, comparing median FID scores. Note that these components build on each other. As shown in Table 2, employing the U-Net architecture for the discriminator alone improves the median FID from 12.42 to 10.86 for FFHQ and from 16.55 to 15.86 for COCO-Animals. Adding the CutMix augmentation improves these scores further, achieving an FID of 10.30 for FFHQ and 14.95 for COCO-Animals. Employing the proposed consistency regularization in the output space of the segmenter $D_{dec}^{U}$ on the CutMix images allows us to get the most out of the CutMix augmentation and to better leverage the per-pixel feedback of the U-Net discriminator, without imposing much computational or memory cost. In effect, the median FID drops to 7.63 for FFHQ and 13.87 for COCO-Animals. Overall, the proposed components of U-Net GAN consistently improve performance in terms of FID.
+
+Comparison with the state of the art. Table 3 shows that U-Net GAN compares favourably with the state of the art on CelebA. The BigGAN baseline already outperforms COCO-GAN, to our knowledge the best result reported in the literature, lowering FID from 5.74 to 4.54, and U-Net GAN further improves FID to 2.95. Note that BigGAN belongs to just one of the two state-of-the-art GAN families, led by BigGAN and StyleGAN, and their respective further improvements [51, 53, 21]. While in this paper we base our model on BigGAN, it would be interesting to also apply the U-Net discriminator to StyleGAN.
+
+Discriminator response visualization. Experimentally we observe that $D_{enc}^{U}$ and $D_{dec}^{U}$ often assign different real/fake scores per sample.
+
+
+Figure 7: Predictions of the encoder $D_{enc}^{U}$ and decoder $D_{dec}^{U}$ during training, in a batch of 50 generated samples. For visualization, the $D_{dec}^{U}$ score is averaged over all pixels. Note that quite often decisions of $D_{enc}^{U}$ and $D_{dec}^{U}$ are not coherent. As judged by the U-Net discriminator, samples in the upper left are locally plausible but not globally coherent (in orange), whereas samples in the lower right look globally coherent but have local inconsistencies (example in purple: giraffe with too many legs and vague background).
+
+Figure 7 visualizes the per-sample predictions for a complete training batch. Here, the decoder score is computed as the average per-pixel prediction. The scores correlate with each other but exhibit high variance. Points in the upper left quadrant correspond to samples that are assigned a high probability of being real by the decoder, but a low probability by the encoder. This implies realism on a local level, but not necessarily on a global one. Similarly, the lower right quadrant represents samples that are identified as realistic by the encoder, but contain unrealistic patches which cause a low decoder score. The fact that the encoder and decoder predictions are not tightly coupled further implies that these two components are complementary. In other words, the generator receives more pronounced feedback from the proposed U-Net discriminator than it would from a standard GAN discriminator.
+
+# 5. Conclusion
+
+In this paper, we propose an alternative U-Net based architecture for the discriminator, which provides both global and local feedback to the generator. In addition, we introduce a consistency regularization technique for the U-Net discriminator based on the CutMix data augmentation. The proposed changes result in a stronger discriminator, enabling the generator to synthesize images with varying levels of detail while maintaining global and local realism. We demonstrate the improvement in FID over the state-of-the-art BigGAN model [5] on three different datasets.
+
+# References
+
+[1] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In Advances in Neural Information Processing Systems (NeurIPS), 2017. 2,5
+[2] Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. Transactions on Pattern Analysis and Machine Intelligence, 2017. 3
+[3] Shane T. Barratt and Rishi Sharma. A note on the inception score. arXiv:1801.01973, 2018. 6
+[4] David Berthelot, Nicholas Carlini, Ian G Goodfellow, Nicolas Papernot, Avital Oliver, and Colin Raffel. Mixmatch: A holistic approach to semi-supervised learning. In Advances in Neural Information Processing Systems (NeurIPS), 2019. 3
+[5] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. In International Conference on Learning Representations (ICLR), 2019. 1, 2, 5, 6, 7, 8
+[6] Ting Chen, Mario Lucic, Neil Houlsby, and Sylvain Gelly. On self modulation for generative adversarial networks. In International Conference on Learning Representations (ICLR), 2018. 5
+[7] Ting Chen, Xiaohua Zhai, Marvin Ritter, Mario Lucic, and Neil Houlsby. Self-supervised gans via auxiliary rotation loss. In Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 2
+[8] Harm de Vries, Florian Strub, Jérémie Mary, Hugo Larochelle, Olivier Pietquin, and Aaron C. Courville. Modulating early visual processing by language. In Advances in Neural Information Processing Systems (NeurIPS), 2017. 5
+[9] Terrance Devries and Graham W. Taylor. Improved regularization of convolutional neural networks with cutout. arXiv:1708.04552, 2017. 3
+[10] Rahul Dey, Felix Juefei-Xu, Vishnu Naresh Boddeti, and Marios Savvides. Rankgan: A maximum margin ranking gan for generating faces. In Asian Conference on Computer Vision (ACCV), 2018. 2
+[11] Thang Doan, João Monteiro, Isabela Albuquerque, Bogdan Mazoure, Audrey Durand, Joelle Pineau, and R. Devon Hjelm. Online adaptative curriculum learning for gans. In AAAI, 2018. 2
+[12] Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur. A learned representation for artistic style. In International Conference on Learning Representations (ICLR), 2017. 5
+[13] Ishan P. Durugkar, Ian Gemp, and Sridhar Mahadevan. Generative multi-adversarial networks. In International Conference on Learning Representations (ICLR), 2017. 2
+[14] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NeurIPS), 2014. 2, 3, 5
+[15] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of
+
+Wasserstein GANs. In Advances in Neural Information Processing Systems (NeurIPS), 2017. 2
+[16] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems (NeurIPS), 2017. 6
+[17] Minyoung Huh, Shao-Hua Sun, and Ning Zhang. Feedback adversarial learning: Spatial feedback for improving generative adversarial networks. In Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 2
+[18] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1125-1134, 2017. 2
+[19] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In International Conference on Learning Representations (ICLR), 2018. 1, 2, 6, 8
+[20] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 1, 2, 6
+[21] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of stylegan. arXiv preprint arXiv:1912.04958, 2019. 8
+[22] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), 2015. 7
+[23] Karol Kurach, Mario Lucic, Xiaohua Zhai, Marcin Michalski, and Sylvain Gelly. The GAN landscape: Losses, architectures, regularization, and normalization. arXiv:1807.04720, 2018. 3
+[24] Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Tom Duerig, and Vittorio Ferrari. The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. arXiv:1811.00982, 2018. 2, 6
+[25] Chun-Liang Li, Wei-Cheng Chang, Yu Cheng, Yiming Yang, and Barnabás Póczos. MMD GAN: Towards deeper understanding of moment matching network. In Advances in Neural Information Processing Systems (NeurIPS), 2017. 2
+[26] Jae Hyun Lim and Jong Chul Ye. Geometric gan. arXiv:1705.02894, 2017. 2, 5
+[27] Chieh Hubert Lin, Chia-Che Chang, Yu-Sheng Chen, Da-Cheng Juan, Wei Wei, and Hwann-Tzong Chen. Coco-gan: Generation by parts via conditional coordinating. In International Conference on Computer Vision (ICCV), 2019. 1, 2, 8
+[28] Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir D. Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollar, and C. Lawrence Zitnick. Microsoft coco: Common objects in context. In European Conference on Computer Vision (ECCV), 2014. 2, 6
+
+[29] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In International Conference on Computer Vision (ICCV), 2015. 2, 6
+[30] Mario Lučić, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet. Are GANs created equal? A large-scale study. In Advances in Neural Information Processing Systems (NeurIPS), 2018. 3
+[31] Xudong Mao, Qing Li, Haoran Xie, Raymond Y. K. Lau, and Zhen Wang. Multi-class generative adversarial networks with the L2 loss function. arXiv:1611.04076, 2016. 2
+[32] Jacob Menick and Nal Kalchbrenner. Generating high fidelity images with subscale pixel networks and multidimensional upscaling. In International Conference on Learning Representations (ICLR), 2019. 1
+[33] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv:1411.1784, 2014. 2
+[34] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations (ICLR), 2018. 1, 2
+[35] Takeru Miyato and Masanori Koyama. cGANs with projection discriminator. In International Conference on Learning Representations (ICLR), 2018. 5
+[36] Gonçalo Mordido, Haojin Yang, and Christoph Meinel. Dropout-gan: Learning from a dynamic ensemble of discriminators. arXiv:1807.11346, 2018. 2
+[37] Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. fGAN: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems (NeurIPS), 2016. 2, 5
+[38] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In International Conference on Learning Representations (ICLR), 2016. 2
+[39] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, 2015. 2, 3
+[40] Kevin Roth, Aurelien Lucchi, Sebastian Nowozin, and Thomas Hofmann. Stabilizing training of generative adversarial networks through regularization. In Advances in Neural Information Processing Systems (NeurIPS), 2017. 2
+[41] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen, and Xi Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems (NeurIPS), 2016. 6
+[42] Rishi Sharma, Shane T. Barratt, Stefano Ermon, and Vijay S. Pande. Improved training with curriculum gans. arXiv:1807.09295, 2018. 2
+[43] Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitlagkas, David Lopez-Paz, and Yoshua Bengio. Manifold mixup: Better representations by interpolating hidden states. In International Conference on Machine Learning (ICML), 2019. 3
+[44] Vikas Verma, Alex Lamb, Juho Kannala, Yoshua Bengio, and David Lopez-Paz. Interpolation consistency training for semi-supervised learning. In IJCAI, 2019. 3
+
+[45] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional gans. In Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 2
+[46] Yongqin Xian, Christoph H. Lampert, Bernt Schiele, and Zeynep Akata. Zero-shot learning a comprehensive evaluation of the good, the bad and the ugly. Transactions on Pattern Analysis and Machine Intelligence, 2018. 6
+[47] Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. In International Conference on Computer Vision (ICCV), 2019. 2, 3, 4
+[48] Dan Zhang and Anna Khoreva. PA-GAN: Improving GAN training by progressive augmentation. In Advances in Neural Information Processing Systems (NeurIPS), 2019. 2
+[49] Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In International Conference on Learning Representations (ICLR), 2018. 3, 4
+[50] Han Zhang, Ian J. Goodfellow, Dimitris N. Metaxas, and Augustus Odena. Self-attention generative adversarial networks. In International Conference on Machine learning (ICML), 2019. 1, 2, 5
+[51] Han Zhang, Zizhao Zhang, Augustus Odena, and Honglak Lee. Consistency regularization for generative adversarial networks. In International Conference on Learning Representations (ICLR), 2020. 1, 2, 3, 8
+[52] Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126, 2016. 2
+[53] Zhengli Zhao, Sameer Singh, Honglak Lee, Zizhao Zhang, Augustus Odena, and Han Zhang. Improved consistency regularization for gans. arXiv preprint arXiv:2002.04724, 2020. 8
\ No newline at end of file
diff --git a/aunetbaseddiscriminatorforgenerativeadversarialnetworks/images.zip b/aunetbaseddiscriminatorforgenerativeadversarialnetworks/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..b50f0a7e2e5e24232618f1bf08db22c464a4df22
--- /dev/null
+++ b/aunetbaseddiscriminatorforgenerativeadversarialnetworks/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:440b62e50effb669f9e6e81a95f6e6c3af86577867676ade13533cd6890ecb9b
+size 558370
diff --git a/aunetbaseddiscriminatorforgenerativeadversarialnetworks/layout.json b/aunetbaseddiscriminatorforgenerativeadversarialnetworks/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..7eefb7c606b71e3b240f506463ed862712fea1dd
--- /dev/null
+++ b/aunetbaseddiscriminatorforgenerativeadversarialnetworks/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:41010273515ce801ecf3894b0417e2f2b9bd7d7b68b1208f31092fcac1940946
+size 406431
diff --git a/aunifiedobjectmotionandaffinitymodelforonlinemultiobjecttracking/483c87df-560e-4124-853a-694b8386abce_content_list.json b/aunifiedobjectmotionandaffinitymodelforonlinemultiobjecttracking/483c87df-560e-4124-853a-694b8386abce_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..e95651645b3d505477d033dfc5c4cd8275f30b30
--- /dev/null
+++ b/aunifiedobjectmotionandaffinitymodelforonlinemultiobjecttracking/483c87df-560e-4124-853a-694b8386abce_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e3e1564d162f46c44ea48172d6ce1f6d52af731c556236ca5621b533f6f2bfde
+size 80944
diff --git a/aunifiedobjectmotionandaffinitymodelforonlinemultiobjecttracking/483c87df-560e-4124-853a-694b8386abce_model.json b/aunifiedobjectmotionandaffinitymodelforonlinemultiobjecttracking/483c87df-560e-4124-853a-694b8386abce_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..00728700257ea884075861daf7099d95dd442c88
--- /dev/null
+++ b/aunifiedobjectmotionandaffinitymodelforonlinemultiobjecttracking/483c87df-560e-4124-853a-694b8386abce_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b04ce806720ffbb23c6cd09033c5eeed97d09d85e289ae219ab3507e33910b8a
+size 101824
diff --git a/aunifiedobjectmotionandaffinitymodelforonlinemultiobjecttracking/483c87df-560e-4124-853a-694b8386abce_origin.pdf b/aunifiedobjectmotionandaffinitymodelforonlinemultiobjecttracking/483c87df-560e-4124-853a-694b8386abce_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..e11b26d04f8fe02d9e9fb5b0af37816f0f37b693
--- /dev/null
+++ b/aunifiedobjectmotionandaffinitymodelforonlinemultiobjecttracking/483c87df-560e-4124-853a-694b8386abce_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ba4e2b97aa87681b054223b2de0d25f41003c89f9361b42b4ced0c440801f229
+size 1272241
diff --git a/aunifiedobjectmotionandaffinitymodelforonlinemultiobjecttracking/full.md b/aunifiedobjectmotionandaffinitymodelforonlinemultiobjecttracking/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..4f9b131f9c8900fee8213a7d406733bf6cb7d060
--- /dev/null
+++ b/aunifiedobjectmotionandaffinitymodelforonlinemultiobjecttracking/full.md
@@ -0,0 +1,311 @@
+# A Unified Object Motion and Affinity Model for Online Multi-Object Tracking
+
+Junbo Yin $^{1}$ , Wenguan Wang $^{2*}$ , Qinghao Meng $^{1}$ , Ruigang Yang $^{3,4,6}$ , Jianbing Shen $^{5,1}$
+
+$^{1}$ Beijing Lab of Intelligent Information Technology, School of Computer Science, Beijing Institute of Technology, China
+ $^{2}$ ETH Zurich, Switzerland ${}^{3}$ Baidu Research ${}^{4}$ National Engineering Laboratory of Deep Learning Technology and Application, China
+ $^{5}$ Inception Institute of Artificial Intelligence, UAE ${}^{6}$ University of Kentucky, Kentucky, USA
+yinjunbo@bit.edu.cn wenguanwang.ai@gmail.com
+https://github.com/yinjunbo/UMA-MOT
+
+# Abstract
+
+Current popular online multi-object tracking (MOT) solutions apply single object trackers (SOTs) to capture object motions, while often requiring an extra affinity network to associate objects, especially occluded ones. This brings extra computational overhead due to repetitive feature extraction for SOT and affinity computation. Meanwhile, the model size of the sophisticated affinity network is usually non-trivial. In this paper, we propose a novel MOT framework that unifies the object motion and affinity models into a single network, named UMA, in order to learn a compact feature that is discriminative for both object motion and affinity measure. In particular, UMA integrates single object tracking and metric learning into a unified triplet network by means of multi-task learning. This design brings the advantages of improved computational efficiency, low memory requirements and a simplified training procedure. In addition, we equip our model with a task-specific attention module, which is used to boost task-aware feature learning. The proposed UMA can be easily trained end-to-end, and is elegant - requiring only one training stage. Experimental results show that it achieves promising performance on several MOT Challenge benchmarks.
+
+# 1. Introduction
+
+Online multi-object tracking (MOT) aims to accurately locate trajectory of each target while maintaining their identities with information accumulated up to the current frame. In the last decades, MOT has attracted increasing attention, as it benefits a wide range of applications, such as video surveillance analyses and autonomous driving [48, 56, 57].
+
+Current MOT solutions typically involve an object motion model and an affinity model. The former leverages temporal information for object instance localization and tracklet generation, while the latter deals with distractors
+
+(e.g., targets with similar appearance) or occlusions by measuring object similarity in data association. Specifically, some online MOT algorithms are based on a tracking-by-detection paradigm [25, 39, 52, 1, 29], i.e., associating detections across frames by computing pairwise affinity, and thus mainly focus on the design of the affinity model. However, as temporal cues are not explored in the object detection phase, the quality of detections is often limited, which in turn degrades MOT performance. MOT scenarios, e.g., the video sequences in MOT Challenge [32, 38], often contain crowded scenes with people in rare poses and of various sizes. In such cases, even leading detectors [43] may produce many False Positive (FP) and False Negative (FN) results, adversely affecting the subsequent data association stage.
+
+This calls for better leveraging motion cues in MOT. Thus another trend is to apply single object trackers (SOTs) in online MOT [11, 61]. These methods take advantage of SOTs to exploit temporal information and recover missing candidate detections. Such a paradigm yields more natural tracklets and usually leads to better tracking results according to the FN metric. However, crowded distractors and their frequent interactions often lead to occlusions, which are quite challenging for these solutions. To tackle this issue, follow-up methods [44, 11, 68, 9, 10] integrate the SOT based motion model with affinity estimation. In particular, they first recognize the state of the targets according to the confidence of the SOTs, then update the tracked targets and maintain the identities of the occluded targets through affinity measures for tracklet-detection pairs in the data association phase. Though inspiring, these methods still suffer from several limitations. First, features used for SOTs and affinity measure are extracted from two separate models, which incurs expensive computational overhead. Second, since they do not make use of SOT features in affinity computation, they have to train an extra affinity network (e.g., ResNet50 in [68] and ResNet101 in [10]), which further increases their memory demand and critically limits their applicability in resource-constrained environments.
+
+Third, the independent feature extraction of the SOT and affinity models, together with the complicated affinity network design, makes the training procedure sophisticated, often requiring multiple training alternations or a cascaded training strategy. Moreover, these methods do not explore the relation between the SOT and the affinity model, i.e., the affinity model could provide the SOT with identity information and thus help it learn more discriminative features to better handle occlusions.
+
+To alleviate the above issues, we propose a multi-task learning based online MOT model, UMA, which integrates the SOT based motion model and the affinity network into a unified, end-to-end framework. The learnt features are encouraged to capture more identity-discriminative information, which simplifies both the training and testing procedures. In particular, UMA unifies a Siamese SOT and a ranking network into a triplet architecture. Two branches of the triplet network, i.e., the anchor and positive branches, account for the SOT based motion prediction task, while all three branches address the target identity-aware ranking task through metric learning. This provides several benefits. First, the metric learning within the ranking task gives the learnt features identity-discriminative ability, helping the SOT model to better locate targets and handle occlusions. Second, it enables feature sharing between the SOT based tracklet generation stage and the affinity-dependent data association stage, eliminating the need for an additional affinity network and improving computational efficiency. Third, it provides a more straightforward, one-step training protocol instead of the previous sophisticated, multi-alternation or cascaded training strategies. Furthermore, our UMA model is equipped with a task-specific attention (TSA) module, which addresses the specific nature of the two tasks and boosts task-specific feature learning. It adaptively exploits context on the shared features extracted by the multi-task network, is lightweight with a modest computational cost, and yields better performance.
+
+To summarize, we propose a triplet network, UMA, which unifies the object motion prediction and affinity measure tasks in online MOT. UMA addresses SOT-applicable as well as association-discriminative feature learning through an attentive multi-task learning mechanism. This yields an elegant, effective yet efficient MOT model with a lower memory requirement and a simple, end-to-end training protocol. Together with a carefully designed online tracking pipeline, our lightweight model reaches state-of-the-art performance against most online and even offline algorithms on several MOT Challenge benchmarks.
+
+# 2. Related Work
+
+MOT. Existing MOT approaches can be categorized into offline and online modes. Offline methods [41, 14, 51, 54, 52]
+
+can leverage both past and future frames for batch processing. They typically consider MOT as a global optimization problem in various forms such as multi-cut [51, 52], k-partite graph [66, 13] and network flow [67, 14]. Though favored in handling ambiguous tracking results, they are not suitable for causal applications such as autonomous driving.
+
+Online MOT methods can only access the information available up to the current frame, thus easily suffering from target occlusions or noisy detections. The majority of previous approaches [1, 2, 25, 39, 63] adopt a tracking-by-detection pipeline, whose performance is largely limited by the detection results. Some others [68, 44, 11, 9, 10] instead apply SOTs [22, 4, 34, 17, 16] to carry out online MOT and generally gain better results.
+
+Object Motion Models in Online MOT. Basically, an object motion model helps deal with noisy detections. For instance, Xiang et al. [62] employ an optical flow-based SOT, TLD [27], to track each individual target. Sadeghian et al. [44] further extend this pipeline with a multi-LSTM network for exploiting different long-term cues. After that, Zhu et al. [68] equip their framework with a more advanced tracker, ECO [12], and design an attention-based network to handle occlusions. Their promising results demonstrate the advantages of applying SOTs as motion models. However, all these approaches require an additional affinity model to address occlusions, and often learn features for the SOTs and affinity models independently, leading to increased computation cost, non-trivial memory demand and sophisticated training protocols. Though [11] uses a shared backbone to extract features for all the targets, multiple online updating sub-nets are further added to specifically handle each target. In sharp contrast, we attempt to learn a 'universal' feature that preserves enough information for both the motion and affinity models, which essentially simplifies the training and testing procedures.
+
+Object Affinity Models in Online MOT. In the data association phase, the object affinity model is usually used to link tracklets or detections across frames in terms of pairwise affinity, which is a crucial way to handle occlusions in online MOT. To produce reliable affinity estimations, object appearance cues are indispensable, and Siamese or triplet networks [8, 36, 55] with metric learning provide powerful tools for acquiring a discriminative and robust feature embedding. In particular, Leal-Taixe et al. [31] apply a Siamese network to estimate the affinity of the provided detections by aggregating target appearance and optical flow information. Son et al. [47] propose a quadruplet loss to stress target appearance together with temporal adjacency. In [52], a Siamese network is used to leverage human pose information for long-range target-relation modeling. Voigtlaender et al. [53] extend Mask R-CNN [20] with 3D convolutional layers and propose an association head that extracts an embedding vector for each region proposal using the batch hard triplet loss [24].
+
+Bergmann et al. [2] also present a short-term re-identification model based on a Siamese network. Xu et al. [63] jointly utilize appearance, location and topology information to compute the affinity by employing relation networks [58] in both the spatial and temporal domains. Notably, all these methods work in the tracking-by-detection mode. Differently, we deeply inject metric learning into an SOT model through a unified triplet network. It learns a discriminative feature for both the object motion prediction and affinity measure sub-tasks, yielding an effective yet efficient solution.
+
+# 3. Our Algorithm
+
+In this section, we first give a brief review of the Siamese SOT [4] ( $\S 3.1$ ), as it is used as the backbone of our model. Then, the details of our UMA model are presented in $\S 3.2$ . Finally, in $\S 3.3$ , we elaborate on our whole online MOT pipeline. As UMA utilizes a single feature extraction network for both SOT based tracklet generation and object affinity measure, it presents a more efficient online solution with many non-trivial technical improvements.
+
+# 3.1. Preliminary of Siamese SOT
+
+Our backbone model is a recently proposed deep tracker: SiamFC [4], which is based on a Siamese network and shows promising performance in the single object tracking field. It operates at around 120 fps on one GPU, built upon the lightweight AlexNet [30].
+
+Basically, SiamFC casts tracking as patch matching in an embedding space. The Siamese network is learnt as the matching function, which is used to find the patch in a new frame that is most similar to the initial target patch given in the first frame. Specifically, as shown in Fig. 1, the Siamese tracker comprises two parameter-sharing branches, each of which is a 5-layer convolutional network $\phi$ . One branch takes as input the target detection given in the first frame, called the exemplar. The other takes as input the instance, i.e., the search region in each subsequent frame that contains the candidate patches. Given the feature embeddings $\phi(z)$ and $\phi(x)$ of the exemplar $z$ and instance $x$ , a cross correlation layer $\tau$ is applied to compare their similarity and obtain a response map $v$ :
+
+$$
+v = \tau (x, z) = \phi (x) * \phi (z) + b, \tag {1}
+$$
+
+where $*$ denotes the cross-correlation (convolution) operator and $b$ is a bias term. Then, given a ground-truth map $y$ , a logistic loss is applied on $v$ for training:
+
+$$
+L _ {\mathrm {S O T}} = \sum_ {p \in \mathcal {P}} \frac {1}{| \mathcal {P} |} \log \left(1 + e ^ {- v _ {p} y _ {p}}\right), \tag {2}
+$$
+
+where $p$ indicates a candidate position in the lattice $\mathcal{P}$ of $x$ . For each candidate $x_{p} \in x$ from the instance input $x$ , $v_{p}$ is the response value of an exemplar-candidate pair, i.e.,
+
+
+Figure 1: Illustration of the network architecture of the Siamese SOT during the training phase.
+
+$v_{p} = \tau (x_{p},z)$ , and $y_{p}\in \{+1, - 1\}$ is the corresponding ground-truth label for $v_{p}$ .
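+
+For illustration, the response map of Eq. 1 and the logistic loss of Eq. 2 can be sketched in PyTorch as follows, assuming feature maps already produced by $\phi$ and a scalar bias term; tensor shapes and function names are illustrative.
+
+```python
+# Sketch of the Siamese tracker's response map (Eq. 1) and logistic loss (Eq. 2).
+import torch
+import torch.nn.functional as F
+
+def response_map(f_x, f_z, b=0.0):
+    # f_x: (B, C, Hx, Wx) instance features, f_z: (B, C, Hz, Wz) exemplar features.
+    # Cross-correlation implemented as a grouped convolution, with each sample's
+    # exemplar feature acting as the kernel for its own instance feature.
+    B, C, Hx, Wx = f_x.shape
+    v = F.conv2d(f_x.reshape(1, B * C, Hx, Wx), f_z, groups=B)
+    return v.reshape(B, 1, v.shape[-2], v.shape[-1]) + b
+
+def sot_loss(v, y):
+    # v, y: (B, 1, H, W) response map and {+1, -1} labels; mean logistic loss
+    # over the lattice P (softplus(-x) == log(1 + exp(-x))).
+    return F.softplus(-v * y).mean()
+```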
+
+# 3.2. Our UMA Model for Online MOT
+
+Main Idea. Previous SOT based online MOT methods typically design an extra network for affinity measure, in addition to the SOT network. In contrast, we integrate the object motion and affinity networks into a unified model, which brings several advantages, as mentioned in §1. The core idea is to enforce the network to simultaneously learn two tasks, single object tracking and affinity prediction, forming a unified multi-task learning framework. One may be concerned that the features obtained from top-performing SOTs are already good enough for affinity measure. Actually, though SOT features are powerful, they are not discriminative enough to estimate a reliable affinity. This is because SOTs rarely access identity information during training, so their features typically distinguish targets from the background well, while capturing relatively little identity information. From the perspective of data association, SOT features have already encoded some useful information, thus it is more desirable and efficient to make use of these features instead of learning extra 'affinity features' from scratch. These considerations motivate us to learn a unified yet powerful feature that is applicable to both tasks, yielding an elegant online MOT framework.
+
+Triplet-based MOT Framework. To achieve our goal, our UMA model is designed as a triplet network architecture, as shown in Fig. 2, where the triplet network comprises three weight-sharing branches, i.e., an exemplar branch, a positive-instance branch and a negative-instance branch. We adopt the exemplar as the anchor. The instances from the same targets are used as positive samples, while the ones from different targets as negative. The integration of the exemplar branch and positive-instance branch can be viewed as a Siamese tracking network, while the whole triplet network yields a unified metric learning framework.
+
+Specifically, for the $i^{th}$ target, given an exemplar $z_{i}$ , a positive-instance $x_{i}$ , and a negative-instance $x_{j}$ sampled from a different target $j$ , we extract their features from the backbone AlexNet: $\pmb{f}_{z_i} = \phi (z_i)\in \mathbb{R}^{6\times 6\times 256}$ , $\pmb{f}_{x_i} = \phi (x_i)\in$ $\mathbb{R}^{20\times 20\times 256}$ , and $\pmb{f}_{x_j} = \phi (\pmb {x}_j)\in \mathbb{R}^{20\times 20\times 256}$ . Then, for the single object tracking task, it can be trained over $(\pmb{f}_{z_i},\pmb{f}_{x_i})$ using Eq. 2.
+
+For the whole triplet-based model, it is designed to learn
+
+
+Figure 2: Illustration of our proposed UMA model, which is built upon a triplet architecture with multi-task learning. UMA simultaneously learns two tasks: SOT based object motion prediction and affinity-dependent ranking, producing a strong feature that is applicable to both the tracklet generation as well as the affinity measure phases.
+
+the ranking task for affinity estimation. This is achieved through a metric learning paradigm, i.e., enforcing the features of positive examples to be closer to the anchor than those of negative examples. Specifically, we first apply an ROI-Align [20] layer on $f_{x_i}$ and $f_{x_j}$ respectively, to extract two $6 \times 6 \times 256$ target features from the centers of $x_i$ and $x_j$ (as the targets are centred in the instance examples during training [4]). This operation allows the model to focus on learning more identity-discriminative features for affinity measure, suppresses information from the cluttered background, and produces feature maps with the same resolution as the anchor feature $f_{z_i}$ . Then global average pooling (GAP) is applied to the anchor feature $f_{z_i}$ , as well as to the aligned features of $f_{x_i}$ and $f_{x_j}$ , producing three 256-d features, denoted as $w_{z_i}$ , $w_{x_i}$ and $w_{x_j}$ , respectively. This regularizes the network and reduces the model size.
+
+Given a mini-batch of $N$ training sample pairs, e.g., $\mathcal{B} = \{(x_i,z_i)\}_{i = 1}^N$ , a standard triplet loss [59] works in the following format:
+
+$$
+L _ {\mathrm {T r i}} = \frac {1}{N} \sum_ {i, j} ^ {N} \max (0, \| \boldsymbol {w} _ {z _ {i}} - \boldsymbol {w} _ {x _ {i}} \| _ {2} ^ {2} - \| \boldsymbol {w} _ {z _ {i}} - \boldsymbol {w} _ {x _ {j}} \| _ {2} ^ {2} + m), \tag {3}
+$$
+
+where $m$ is a margin that is enforced between positive and negative pairs. The objective of this loss is to keep the distance between the anchor and positive smaller than the distance between the anchor and negative. However, in our batch construction, the number of positive samples is significantly smaller than the number of negative ones, which restricts the ability of the triplet loss $L_{\mathrm{Tri}}$ to mine hard examples [24]. To overcome this hurdle, we replace Eq. 3 with an $N$ -pair loss [46]:
+
+$$
+L _ {\mathrm {N} - \text {p a i r}} = \frac {1}{N} \sum_ {i = 1} ^ {N} \log \left(1 + \sum_ {i \neq j} ^ {N} \exp \left(\boldsymbol {w} _ {z _ {i}} ^ {\top} \boldsymbol {w} _ {x _ {j}} - \boldsymbol {w} _ {z _ {i}} ^ {\top} \boldsymbol {w} _ {x _ {i}}\right)\right). \tag {4}
+$$
+
+The rationale is that, after looping over all the triplets in $\mathcal{B}$ , the final distance metric can be balanced correctly.
+
+Additionally, with the target identity at hand, we can further minimize a cross-entropy based identification loss [9]:
+
+$$
+L _ {\text {I d e n}} = - \left(\frac {1}{N} \sum_ {i = 1} ^ {N} \log \hat {p} _ {z _ {i}} + \frac {1}{N} \sum_ {i = 1} ^ {N} \log \hat {p} _ {x _ {i}}\right), \tag {5}
+$$
+
+where $\hat{p}_{z_i} \in [0,1]$ is the predicted probability of $z_i$ for the $i^{th}$ identity class. The identity prediction score is obtained by applying two fully connected layers (with dimension 512 and 439) and a softmax layer over $\boldsymbol{w}_{z_i}$ or $\boldsymbol{w}_{x_i}$ . Please note that there are a total of 439 identities in our training set.
+
+Hence, the final loss is computed as a combination of the SOT loss $L_{\mathrm{SOT}}$ , defined over the Siamese network, and the affinity-related losses $L_{\mathrm{N - pair}}$ and $L_{\mathrm{Iden}}$ , defined over the whole triplet network:
+
+$$
+L = L _ {\text {S O T}} + \left(\lambda_ {1} L _ {\text {N - p a i r}} + \lambda_ {2} L _ {\text {I d e n}}\right), \tag {6}
+$$
+
+where $\lambda$ s are the coefficients to balance different losses. In this way, both the SOT based motion model and the ranking based affinity model can be trained end-to-end in a unified triplet network, which facilitates the training process.
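+
+The affinity-related losses and the combined objective (Eqs. 4-6) can be sketched as follows, assuming the 256-d embeddings and the 439-way identity logits have already been computed; function and variable names are illustrative, and the default $\lambda$ values follow §4.
+
+```python
+# Sketch of the N-pair loss (Eq. 4), identification loss (Eq. 5) and combined
+# objective (Eq. 6) for one mini-batch.
+import torch
+import torch.nn.functional as F
+
+def n_pair_loss(w_z, w_x):
+    # w_z, w_x: (N, 256) anchor / instance embeddings; row i of w_x is the
+    # positive of row i of w_z, all other rows act as negatives.
+    sim = w_z @ w_x.t()                         # sim[i, j] = w_{z_i}^T w_{x_j}
+    logits = sim - sim.diag().unsqueeze(1)      # subtract the positive term per row
+    eye = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
+    logits = logits.masked_fill(eye, float("-inf"))   # exclude j == i from the sum
+    return torch.log1p(logits.exp().sum(dim=1)).mean()
+
+def identification_loss(logits_z, logits_x, ids):
+    # logits_*: (N, 439) identity scores from the two FC layers; ids: (N,) labels.
+    return F.cross_entropy(logits_z, ids) + F.cross_entropy(logits_x, ids)
+
+def total_loss(l_sot, w_z, w_x, logits_z, logits_x, ids, lam1=0.1, lam2=0.1):
+    # Combined objective of Eq. 6.
+    return l_sot + lam1 * n_pair_loss(w_z, w_x) + lam2 * identification_loss(logits_z, logits_x, ids)
+```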
+
+Furthermore, with our multi-task design, we can derive a reliable affinity from the features extracted from our model:
+
+$$
+c = \boldsymbol {w} _ {I} ^ {\top} \boldsymbol {w} _ {I ^ {\prime}}, \tag {7}
+$$
+
+where $I$ and $I^{\prime}$ are two image patch inputs, e.g., an exemplar with an instance region or a detection patch, and $c$ is the affinity. To analyze in depth the advantage of the features learnt by our model for affinity measure, we also compute the affinity using features extracted from the plain Siamese SOT model, which uses neither the additional branch nor the extra losses (i.e., $L_{\mathrm{N - pair}}$ and $L_{\mathrm{Iden}}$ ). Fig. 3 compares the two models on hard cases, e.g., the affinity between negative sample pairs with similar appearance and between positive sample pairs with changing appearance. From Fig. 3 (a) we find that, when only using $L_{\mathrm{SOT}}$ , the affinity between negative sample pairs is even larger than the one between
+
+
+Figure 3: Affinities measured using the features (a) $f$ from the Siamese SOT with $L_{\mathrm{SOT}}$ loss, (b) $w$ from the triplet network with multi-task learning, and (c) $w^{\mathrm{AFF}}$ from our whole UMA with multi-task learning and TSA module, respectively.
+
+positive sample pairs. This clearly demonstrates the weak discriminability of the SOT features. In Fig. 3 (b) and (c), the affinity between positive sample pairs is substantially larger than that between negative ones even in hard cases, which shows that our multi-task features $\pmb{w}$ are highly applicable for affinity measure. More detailed quantitative experiments can be found in $\S 4$ .
+
+Task-Specific Attention Module. In the triplet-based model described above, an identical feature produced by the backbone AlexNet $\phi(\cdot)$ is used for both the SOT based motion prediction and affinity measure tasks. A potential problem of such a design is that it is insensitive to subtle distinctions between the two tasks and ignores their task-specific factors. A feature that is meaningful for SOT may not best fit affinity measure, and vice versa. For example, context information is often stressed in SOT, e.g., auxiliary objects approaching the target may provide correlation information for tracking [65, 46]. However, for affinity measure, local semantic features around key points are more informative for identifying the query target [7, 49], and the auxiliary objects may interfere with this determination. To address this issue, we further equip our model with a task-specific attention (TSA) module that emphasizes task-aware feature learning at very low computational cost.
+
+Our TSA module is based on the Squeeze-and-Excitation Network (SENet) [26], as it does not rely on extra input and adds negligible runtime, both of which are essential for online MOT. It re-weights the feature response across channels using squeeze and excitation operations. More specifically, the squeeze operator acquires global information by aggregating the features across all spatial locations through channel-wise global average pooling:
+
+$$
+s _ {l} = \operatorname {G A P} _ {l} (\boldsymbol {f}) = \operatorname {G A P} _ {l} (\phi (\cdot)) \in \mathbb {R}, \tag {8}
+$$
+
+where $\mathrm{GAP}_l$ indicates global average pooling over the feature $f$ in the $l^{th}$ channel. In the excitation step, a gating mechanism is employed on the channel-wise descriptor $s = [s_1,s_2,\dots ,s_{256}]\in \mathbb{R}^{256}$ :
+
+$$
+\boldsymbol {a} = \sigma (\boldsymbol {W} _ {2} \delta (\boldsymbol {W} _ {1} \boldsymbol {s})) = [ a _ {1}, a _ {2}, \dots , a _ {2 5 6} ] \in [ 0, 1 ] ^ {2 5 6}. \tag {9}
+$$
+
+
+Figure 4: Illustration of the TSA module, which enables our model to stress task-specific features.
+
+
+Figure 5: Visualization of task-specific features stressed by TSA module. Features extracted for tracking part are shown in the $2^{nd}$ row, while those for affinity measure part are in the $3^{rd}$ row.
+
+$\sigma$ and $\delta$ are sigmoid and ReLU functions, respectively. Through the dimensionality reduction and increasing operations (parameterized by two fully connected layers $W_{1} \in \mathbb{R}^{64 \times 256}$ and $W_{2} \in \mathbb{R}^{256 \times 64}$ ), the attention vector $\pmb{a}$ encodes non-mutually-exclusive relations among the 256 channels.
+
+Using the SENet framework, our TSA module learns two kinds of attentions: $\pmb{a}^{\mathrm{SOT}}$ and $\pmb{a}^{\mathrm{AFF}}$ for addressing different tasks (see Fig. 4). $\pmb{a}^{\mathrm{SOT}}$ and $\pmb{a}^{\mathrm{AFF}}$ are first applied to re-weight the channels of the 'universal' feature $\pmb{f} = [f_{1},\dots,f_{256}]$ extracted from the backbone AlexNet:
+
+$$
+\boldsymbol {f} ^ {\mathrm {S O T}} = \left[ a _ {1} ^ {\text {S O T}} \cdot \boldsymbol {f} _ {1}, \dots , a _ {2 5 6} ^ {\text {S O T}} \cdot \boldsymbol {f} _ {2 5 6} \right], \tag {10}
+$$
+
+$$
+\boldsymbol {f} ^ {\mathrm {A F F}} = \left[ a _ {1} ^ {\mathrm {A F F}} \cdot \boldsymbol {f} _ {1}, \dots , a _ {2 5 6} ^ {\mathrm {A F F}} \cdot \boldsymbol {f} _ {2 5 6} \right].
+$$
+
+Then we apply the supervision of $L_{\mathrm{SOT}}$ to the SOT-aware feature $f^{\mathrm{SOT}}$ , while adding the $L_{\mathrm{N - pair}}$ and $L_{\mathrm{Iden}}$ losses to the affinity-related feature $w^{\mathrm{AFF}}$ (derived from $f^{\mathrm{AFF}}$ , as described before). In this way, the TSA module learns to generate task-specific attentions. Through this lightweight TSA mechanism, our model is able to produce task-specific features while using the same backbone network $\phi(\cdot)$ . For the single object tracking task, the SOT-aware attention $a^{\mathrm{SOT}}$ can stress useful context for boosting tracking accuracy. For affinity measure, the affinity-aware attention $a^{\mathrm{AFF}}$ is employed to capture fine-grained local semantic features, so targets with changing appearance can be better aligned. From Fig. 3 (c), we observe further improved affinity estimations when using the affinity-specific attention enhanced feature $w^{\mathrm{AFF}}$ . Visualization of the attention-enhanced features for each task is presented in Fig. 5. More detailed quantitative analyses can be found in §4.2.
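+
+A compact sketch of the TSA module, built on the SE-style gating of Eqs. 8-10, is given below; the channel sizes (256 channels, reduced to 64) follow the equations above, while the module and class names are illustrative.
+
+```python
+# Sketch of the task-specific attention (TSA) module: two SE-style gates over
+# the shared 256-channel backbone feature, one per task.
+import torch
+import torch.nn as nn
+
+class SEGate(nn.Module):
+    def __init__(self, channels=256, reduced=64):
+        super().__init__()
+        self.fc1 = nn.Linear(channels, reduced)   # W1 in Eq. 9
+        self.fc2 = nn.Linear(reduced, channels)   # W2 in Eq. 9
+
+    def forward(self, f):
+        s = f.mean(dim=(2, 3))                                  # squeeze (Eq. 8)
+        a = torch.sigmoid(self.fc2(torch.relu(self.fc1(s))))    # excite (Eq. 9)
+        return f * a.unsqueeze(-1).unsqueeze(-1)                # re-weight (Eq. 10)
+
+class TSAModule(nn.Module):
+    def __init__(self, channels=256):
+        super().__init__()
+        self.sot_gate = SEGate(channels)   # yields f^SOT
+        self.aff_gate = SEGate(channels)   # yields f^AFF
+
+    def forward(self, f):
+        return self.sot_gate(f), self.aff_gate(f)
+```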
+
+# 3.3. Our Online MOT Pipeline
+
+We have elaborated on our network architecture. Next we will detail our whole pipeline for online MOT. Basically,
+
+each target is associated with one of two states, i.e., tracked or occluded, decided by occlusion detection. We first apply our UMA (working in SOT mode) to generate tracklets for the tracked targets. Then we perform data association based on the affinity produced by UMA (working in ranking mode), to recover the occluded targets.
+
+Tracklet Generation and Occlusion Detection: During tracking, our UMA is applied to each target (exemplar $z$ ), which is initialized from the provided detections. UMA updates the position of each target, acting as an SOT (relying on the SOT-specific feature $f^{\mathrm{SOT}}$ ). Simultaneously, it measures the affinity between the exemplar and instances in subsequent frames to detect occlusions (using the affinity estimation-related feature $f^{\mathrm{AFF}}$ ).
+
+Concretely, we use a search region centred at the previous position of the target as the instance $x$ . Given $z$ and $x$ , we get a response map $v$ via Eq. 1 with SOT-specific feature $f^{\mathrm{SOT}}$ . Then the target bounding box (bbox) is obtained according to the position with the maximum score in $v$ [4].
+
+Meanwhile, with the exemplar $z$ and instance $x$ , UMA computes the affinity for detecting occlusions. It works in the ranking mode and uses the affinity-specific features $\boldsymbol{w}_{z}^{\mathrm{AFF}}$ and $\boldsymbol{w}_{x}^{\mathrm{AFF}}$ to get the affinity $c$ (Eq. 7). Note that the target may appear in any part of $x$ during the tracking stage, thus we apply ROI-Align [20] on the instance feature ( $f_{x}^{\mathrm{AFF}} \in \mathbb{R}^{22 \times 22 \times 256}$ during testing) to obtain an aligned target feature, using the bbox provided by the SOT. We then obtain $\boldsymbol{w}_{x}^{\mathrm{AFF}}$ through GAP and compute the affinity $c$ . Compared with previous works [62, 44, 68] that use the confidence produced by the SOT to detect occlusions, our method gives a more robust result, as illustrated in Fig. 6. Additionally, following [68], we combine the affinity with the historic average intersection-over-union (IOU) between the tracklet and the nearest detection, to filter out FP tracking results and detect occlusions more reliably. Once the affinity $c$ falls below a threshold $\alpha$ or the average IOU falls below $\beta$ , the target is recognized as occluded; otherwise it remains tracked, as sketched below. We further refine the tracked bboxes by averaging with the nearest detection using a greedy algorithm. The refined bboxes are then gathered as the tracklet of the target $z$ . Detections whose IOU with every tracking bbox is below a threshold $\gamma$ are regarded as candidate detections, e.g., a reappearing occluded target or an entirely new target.
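+
+The state decision reduces to a simple threshold test, sketched below with the thresholds $\alpha = 0.6$ and $\beta = 0.5$ used in §4; the affinity and average IOU are assumed to be computed as described above.
+
+```python
+def target_state(affinity, avg_iou, alpha=0.6, beta=0.5):
+    # A target is marked occluded once its affinity drops below alpha or its
+    # historic average IOU with the nearest detection drops below beta.
+    return "occluded" if (affinity < alpha or avg_iou < beta) else "tracked"
+```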
+
+Data Association: During data association, we deal with those candidate detections and address the occluded targets, i.e., recognizing a candidate detection as a reappearing occluded or an entirely new target, and then recovering its identity (if the first case) or assigning a new identity (if the second). Different from prior work designing complicated strategies [52, 13, 5], we use a relatively simple data association method, due to the reliable affinity measured from our UMA. Given the candidate detection set $\mathcal{D}$ and tracklet
+
+
+Figure 6: Illustration of the occlusion handling. The red line denotes the affinity produced by our UMA model, while the blue line signifies the confidence of the Siamese SOT. Our proposed model is more robust in detecting and addressing the occlusions.
+
+set $\mathcal{T}$ of occluded targets, produced from the previous stage, we build an affinity matrix $C\in \mathbb{R}^{|\mathcal{D}|\times |\mathcal{T}|}$ to obtain the optimal assignment. More specifically, for a tracklet $T\in \mathcal{T}$ , we uniformly sample $K$ samples from $T$ , i.e., $\{t_1,t_2,\dots,t_K\}$ . Then the affinity between $T$ and a candidate detection $d\in \mathcal{D}$ is calculated by:
+
+$$
+c ^ {\prime} = \frac {1}{K} \sum_ {k = 1} ^ {K} \boldsymbol {w} _ {d} ^ {\top} \boldsymbol {w} _ {t _ {k}}. \tag {11}
+$$
+
+After computing all affinities, we construct the cost matrix $C$ (affinity matrix) and obtain the optimal assignment by applying the Hungarian algorithm [40] to $C$ (see the sketch below). According to the assignment result, a candidate detection is assigned the identity of the occluded target it is linked to. If a candidate detection is not linked to any occluded target, we view it as an entirely new target and assign it a new identity.
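+
+This association step (Eq. 11 followed by the Hungarian assignment) can be sketched as follows; embeddings are assumed to be 256-d vectors, and the gating threshold used to reject weak matches is an illustrative assumption.
+
+```python
+# Sketch of data association: tracklet-detection affinities via Eq. 11,
+# followed by the Hungarian algorithm on the (negated) affinity matrix.
+import numpy as np
+from scipy.optimize import linear_sum_assignment
+
+def associate(det_embs, tracklet_embs_list, min_affinity=0.5):
+    # det_embs: (|D|, 256) candidate-detection embeddings; tracklet_embs_list:
+    # one (K, 256) array of sampled embeddings per occluded tracklet.
+    C = np.zeros((len(det_embs), len(tracklet_embs_list)))
+    for j, t_embs in enumerate(tracklet_embs_list):
+        C[:, j] = (det_embs @ t_embs.T).mean(axis=1)   # Eq. 11: mean of w_d^T w_{t_k}
+    rows, cols = linear_sum_assignment(-C)             # Hungarian: maximize total affinity
+    # min_affinity is an illustrative gate: weak matches become new identities.
+    matches = [(d, t) for d, t in zip(rows, cols) if C[d, t] >= min_affinity]
+    matched = {d for d, _ in matches}
+    new_targets = [d for d in range(len(det_embs)) if d not in matched]
+    return matches, new_targets
+```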
+
+Trajectory Management: For trajectory initialization, we adopt the method in [35] to alleviate the influence of FP detections. Besides, a target is terminated if it moves out of view or remains occluded for more than a certain number of frames.
+
+# 4. Experiments
+
+Datasets: We evaluate our approach on the MOT16 and MOT17 datasets from MOT Challenge [38], a standardized benchmark focusing on multiple people tracking. The MOT16 dataset contains 14 video sequences (7 for training and 7 for testing) from unconstrained environments filmed with both static and moving cameras. It provides ground-truth annotations for the training set and detections [18] for both sets. MOT17 contains more video sequences than MOT16, and provides accurate annotations and richer detections from different detectors, i.e., DPM [18], SDP [64] and FRCNN [43]. For evaluation on the two test sets, results are submitted to the benchmark server.
+
+Evaluation Metrics: For quantitative performance evaluation, we adopt the widely used CLEAR MOT metrics [3], i.e., multiple object tracking accuracy (MOTA), multiple object tracking precision (MOTP), false positives (FP), false negatives (FN), identity switches (IDS) and the IDF1 score. In addition, the metrics defined in [33] are also used, including the percentage of mostly tracked targets (MT) and
+
+| Mode | Method | Publication | Year | MOTA↑ | IDF1↑ | MOTP↑ | MT↑ | ML↓ | FP↓ | FN↓ | IDS↓ | Hz↑ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Online | STAM [11] | ICCV | 2017 | 46.0 | 50.0 | 74.9 | 14.60% | 43.60% | 6,895 | 91,117 | 473 | 0.2 |
+| Online | AMIR [44] | ICCV | 2017 | 47.2 | 46.3 | 75.8 | 14.00% | 41.60% | 2,681 | 92,856 | 774 | 1.0 |
+| Online | DMAN [68] | ECCV | 2018 | 46.1 | 54.8 | 73.8 | 17.40% | 42.70% | 7,909 | 89,874 | 532 | 0.3 |
+| Online | C-DRL [42] | ECCV | 2018 | 47.3 | - | 74.6 | 17.40% | 39.90% | 6,375 | 88,543 | - | 1.0 |
+| Online | KCF16 [9] | WACV | 2019 | 48.8 | 47.2 | 75.7 | 15.80% | 38.10% | 5,875 | 86,567 | 906 | 0.1 |
+| Online | Tracktor++ [2] | ICCV | 2019 | 54.4 | 52.5 | 78.2 | 19.00% | 36.90% | 3,280 | 79,149 | 682 | 2.0 |
+| Online | UMA (ours) | CVPR | 2020 | 50.5 | 52.8 | 74.1 | 17.80% | 33.70% | 7,587 | 81,924 | 685 | 5.0 |
+| Offline | QuadMOT [47] | CVPR | 2017 | 44.1 | 38.3 | 76.4 | 14.60% | 44.90% | 6,388 | 94,775 | 745 | 1.8 |
+| Offline | FWT [23] | CVPRW | 2018 | 47.8 | 47.8 | 77.0 | 19.10% | 38.20% | 8,886 | 85,487 | 852 | 0.2 |
+| Offline | MHTBLSTM [29] | ECCV | 2018 | 42.1 | 47.8 | 75.9 | 14.90% | 44.40% | 11,637 | 93,172 | 753 | 1.8 |
+| Offline | JCC [28] | TPAMI | 2018 | 47.1 | 52.3 | - | 20.40% | 46.90% | 6,703 | 89,368 | 370 | 1.8 |
+| Offline | TLMHT [45] | TCSVT | 2018 | 48.7 | 55.3 | 76.4 | 15.70% | 44.50% | 6,632 | 86,504 | 413 | 4.8 |
+| Offline | LNUH [60] | AAAI | 2019 | 47.5 | 43.6 | - | 19.40% | 36.90% | 13,002 | 81,762 | 1,035 | 0.8 |
+
+Table 1: Quantitative results on MOT16. The best scores of online and offline MOT methods are marked in red and blue, respectively.
+
+the percentage of mostly lost targets (ML). MT refers to the ratio of ground-truth trajectories that are covered by any track hypothesis for at least $80\%$ of their respective life span. ML is computed as the ratio of ground-truth trajectories that are covered by any track hypothesis for at most $20\%$ of their respective life span.
+
+Implementation Details: We adopt the sequences in MOT17 for training. An exemplar-positive instance pair $\langle z_{i}, x_{i} \rangle$ is composed of image patches of the same target from different frames. Patches from other targets are then chosen as negative instances. During training, the sizes of the exemplar and instance are set to $127 \times 127$ and $239 \times 239$ , respectively. An AlexNet model pre-trained on ImageNet [15] is used to initialize the shared part of our UMA model, while the other layers are initialized with He initialization [21]. We use the learning rate configuration of [4]. The coefficients in Eq. 6 are set to $\lambda_{1} = \lambda_{2} = 0.1$ . The total loss is minimized through momentum optimization [50] with a mini-batch size of 8. The thresholds $\alpha$ and $\beta$ used for detecting occlusions are set to 0.6 and 0.5, respectively. The threshold $\gamma$ , which decides whether a detection is selected as a candidate for data association, is set to 0.5. We empirically set the threshold for terminating an occluded target to 30 frames.
+
+# 4.1. Performance on MOT Benchmark Datasets
+
+Quantitative and Qualitative Performance: We evaluate our approach on the test sets of the MOT16 and MOT17 benchmarks. The performance of our algorithm and other recent MOT algorithms is presented in Table 1 and Table 2, where our lightweight UMA model outperforms most online and even offline MOT algorithms according to the MOTA and IDF1 metrics. For instance, as shown in Table 1, we improve MOTA by $1.7\%$ and IDF1 by $5.6\%$ compared with KCF16 [9], an online algorithm that takes KCF [22] as its motion model. The results in Table 2 lend further support to our approach: we simultaneously achieve better
+
+
+Figure 7: Qualitative tracking results on the test sequences of MOT17 benchmark. The color of each bounding box indicates the target identity. The dotted line under each bounding box denotes the recent tracklet of each target.
+
+MOTA, MT, ML and FN than most published online and offline methods. In particular, FAMNet [10] is a recent work that also applies a Siamese SOT, and we surpass it in terms of both MOTA and IDF1. Additionally, we improve over Tracktor++ [2] by $2.1\%$ on the IDF1 metric, which validates the effectiveness of our unified model in dealing with occlusions and distractors. In a nutshell, our lightweight UMA model achieves state-of-the-art performance, benefiting from the multi-task learning framework. Qualitative results for each sequence of the MOT17 test set are illustrated in Fig. 7.
+
+Tracking Speed and Model Size: Our online MOT
+
+| Mode | Method | Publication | Year | MOTA↑ | IDF1↑ | MOTP↑ | MT↑ | ML↓ | FP↓ | FN↓ | IDS↓ | Hz↑ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Online | DMAN [68] | ECCV | 2018 | 48.2 | 55.7 | 75.5 | 19.30% | 38.30% | 26,218 | 263,608 | 2,194 | 0.3 |
+| | MTDF [19] | TMM | 2019 | 49.6 | 45.2 | 74.5 | 18.90% | 33.10% | 37,124 | 241,768 | 5,567 | 1.2 |
+| | FAMNet [10] | ICCV | 2019 | 52.0 | 48.7 | 76.5 | 19.10% | 33.40% | 14,138 | 253,616 | 3,072 | 0.6 |
+| | Tracktor++ [2] | ICCV | 2019 | 53.5 | 52.3 | 78.0 | 19.50% | 36.60% | 12,201 | 248,047 | 2,072 | 2.0 |
+| | UMA (ours) | CVPR | 2020 | 53.1 | 54.4 | 75.5 | 21.50% | 31.80% | 22,893 | 239,534 | 2,251 | 5.0 |
+| Offline | EDMT [6] | CVPRW | 2017 | 50.0 | 51.3 | 77.3 | 21.60% | 36.30% | 32,279 | 247,297 | 2,264 | 0.6 |
+| | FWT [23] | CVPRW | 2018 | 51.3 | 47.6 | 77.0 | 21.40% | 35.20% | 24,101 | 247,921 | 2,648 | 0.2 |
+| | MOTBLSTM [29] | ECCV | 2018 | 47.5 | 51.9 | 77.5 | 18.20% | 41.70% | 25,981 | 268,042 | 2,069 | 1.9 |
+| | TLMHT [45] | TCSVT | 2018 | 50.6 | 56.5 | 77.6 | 17.60% | 43.40% | 22,213 | 255,030 | 1,407 | 2.6 |
+| | JCC [28] | TPAMI | 2018 | 51.2 | 54.5 | - | 20.90% | 37.00% | 25,937 | 247,822 | 1,802 | 1.8 |
+| | SAS [37] | CVPR | 2019 | 44.2 | 57.2 | 76.4 | 16.10% | 44.30% | 29,473 | 283,611 | 1,529 | 4.8 |
+
+Table 2: Quantitative results on MOT17. The best scores of online and offline MOT methods are marked in red and blue, respectively.
+
+| Aspect | Module | MOTA↑ | IDF1↑ | MT↑ | ML↓ | FP↓ | FN↓ | IDS↓ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Full model | UMA w. TSA (Loss: LSOT (Eq. 2) + LN-pair (Eq. 4) + LIden (Eq. 5)) | 53.0 | 61.9 | 26.51% | 20.48% | 969 | 7,236 | 56 |
+| Loss (w/o. TSA) | LSOT (Eq. 2) + LTri (Eq. 3) | 51.7 | 50.9 | 24.10% | 19.28% | 922 | 7,483 | 87 |
+| | LSOT (Eq. 2) + LN-pair (Eq. 4) | 52.3 | 53.1 | 25.30% | 20.48% | 851 | 7,456 | 73 |
+| | LSOT (Eq. 2) + LN-pair (Eq. 4) + LIden (Eq. 5) | 52.4 | 58.6 | 25.30% | 20.48% | 853 | 7,458 | 58 |
+| Structure | SOT only | 50.4 | 48.1 | 27.71% | 24.10% | 1,189 | 7,675 | 138 |
+
+Table 3: Ablation studies on the two validation sequences of MOT17 (validation set: {MOT17-09, MOT17-10}).
+
+pipeline operates at around 5.0 fps on the test sequences of MOT17 with a 1080Ti GPU, which is more efficient than most previous work without sacrificing accuracy. Regarding model size, for [2] and [68] it is 270M and 300M, respectively. In contrast, the whole size of our UMA model is only around 30M, making it more suitable for resource-constrained environments.
+
+# 4.2. Ablation Studies
+
+To verify the effectiveness of the proposed model, we conduct ablation studies on two sequences of the MOT17 training set, i.e., $\{\mathrm{MOT17 - 09},\mathrm{MOT17 - 10}\}$, and use the remaining sequences for training.
+
+As shown in Table 3, compared with the basic SOT (last row), our full model achieves significantly better MOTA, which verifies the effectiveness of the whole framework ($\S 3.3$). As our main motivation is to jointly learn the object motion prediction and affinity measurement tasks, we next validate the multi-task learning ability of our model (without the TSA module). Compared with the framework trained only with the SOT loss, our method with the full loss (next-to-last row) improves IDF1, MOTA and IDS by $13.8\%$, $2.3\%$ and $59.4\%$, respectively. This demonstrates the significance of the integrated metric learning in addressing occlusions. Third, we compare different combinations of losses within our UMA to validate the discriminative ability of the learnt feature embedding. Considering the results from the $2^{nd}$ to $4^{th}$ rows, we find that jointly applying the N-pair loss and the identification loss gives the best results according to
+
+the IDF1 and IDS metrics. Finally, we observe that the full model with the TSA module produces the best results, further improving the MOTA, IDF1 and FN metrics. This indicates that the original shared features are less compatible with the individual tasks, while the TSA module effectively emphasizes the task-aware context.
+
+# 5. Conclusions
+
+This work proposed a novel online MOT model, named UMA, which performs the object motion prediction and affinity measurement tasks in a unified network via multi-task learning. This is achieved by integrating the tracking loss and the metric learning losses into a triplet network during the training stage. The learnt feature not only enables the model to effectively measure affinity in the data association phase, but also helps the SOT to distinguish distractors during the tracklet generation phase. Compared with previous SOT-based MOT approaches that train separate networks for the motion and affinity models, our method improves computational efficiency and simplifies the training procedure. Additionally, we extended our model with a TSA module to boost task-specific feature learning by emphasizing different feature contexts. Extensive experimental results on the MOT16 and MOT17 benchmarks demonstrate the effectiveness of our lightweight model, which achieves competitive performance against many state-of-the-art approaches.
+
+Acknowledgements This work was sponsored by Zhejiang Lab's Open Fund (No. 2019KD0AB04), Zhejiang Lab's International Talent Fund for Young Professionals, CCF-Tencent Open Fund and ARO grant W911NF-18-1-0296.
+
+# References
+
+[1] Seung-Hwan Bae and Kuk-Jin Yoon. Robust online multi-object tracking based on tracklet confidence and online discriminative appearance learning. In CVPR, 2014. 1, 2
+[2] Philipp Bergmann, Tim Meinhardt, and Laura Leal-Taixe. Tracking without bells and whistles. In ICCV, 2019. 2, 3, 7, 8
+[3] Keni Bernardin and Rainer Stiefelhagen. Evaluating multiple object tracking performance: the clear mot metrics. Journal on Image and Video Processing, 2008:1, 2008. 6
+[4] Luca Bertinetto, Jack Valmadre, Joao F Henriques, Andrea Vedaldi, and Philip HS Torr. Fully-convolutional siamese networks for object tracking. In ECCV, 2016. 2, 3, 4, 6, 7
+[5] Visesh Chari, Simon Lacoste-Julien, Ivan Laptev, and Josef Sivic. On pairwise costs for network flow multi-object tracking. In CVPR, 2015. 6
+[6] Jiahui Chen, Hao Sheng, Yang Zhang, and Zhang Xiong. Enhancing detection model for multiple hypothesis tracking. In CVPR, 2017. 8
+[7] De Cheng, Yihong Gong, Sanping Zhou, Jinjun Wang, and Nanning Zheng. Person re-identification by multi-channel parts-based cnn with improved triplet loss function. In CVPR, 2016. 5
+[8] Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. In CVPR, 2005. 2
+[9] Peng Chu, Heng Fan, Chiu C Tan, and Haibin Ling. Online multi-object tracking with instance-aware tracker and dynamic model refreshment. In WACV, 2019. 1, 2, 4, 7
+[10] Peng Chu and Haibin Ling. Famnet: Joint learning of feature, affinity and multi-dimensional assignment for online multiple object tracking. In ICCV, 2019. 1, 2, 7, 8
+[11] Qi Chu, Wanli Ouyang, Hongsheng Li, Xiaogang Wang, Bin Liu, and Nenghai Yu. Online multi-object tracking using cnn-based single object tracker with spatial-temporal attention mechanism. In ICCV, 2017. 1, 2, 7
+[12] Martin Danelljan, Goutam Bhat, Fahad Shahbaz Khan, and Michael Felsberg. Eco: Efficient convolution operators for tracking. In CVPR, 2017. 2
+[13] Afshin Dehghan, Shayan Modiri Assari, and Mubarak Shah. Gmmmcp tracker: Globally optimal generalized maximum multi clique problem for multiple object tracking. In CVPR, 2015. 2, 6
+[14] Afshin Dehghan, Yicong Tian, Philip HS Torr, and Mubarak Shah. Target identity-aware network flow for online multiple target tracking. In CVPR, 2015. 2
+[15] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009. 7
+[16] Xingping Dong, Jianbing Shen, Wenguan Wang, Ling Shao, Haibin Ling, and Fatih Porikli. Dynamical hyperparameter optimization via deep reinforcement learning in tracking. TPAMI, 2019. 2
+[17] Xingping Dong, Jianbing Shen, Dajiang Yu, Wenguan Wang, Jianhong Liu, and Hua Huang. Occlusion-aware real-time object tracking. TMM, 19(4):763-771, 2016. 2
+[18] Pedro F Felzenszwalb, Ross B Girshick, David McAllester, and Deva Ramanan. Object detection with discriminatively trained part-based models. TPAMI, 32(9):1627-1645, 2010. 6
+[19] Zeyu Fu, Federico Angelini, Jonathon Chambers, and Syed Mohsen Naqvi. Multi-level cooperative fusion of gm-phd filters for online multiple human tracking. TMM, 21(9):2277-2291, 2019. 8
+[20] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In ICCV, 2017. 2, 4, 6
+[21] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In ICCV, 2015. 7
+[22] João F Henriques, Rui Caseiro, Pedro Martins, and Jorge Batista. High-speed tracking with kernelized correlation filters. TPAMI, 37(3):583-596, 2014. 2, 7
+[23] Roberto Henschel, Laura Leal-Taixe, Daniel Cremers, and Bodo Rosenhahn. Fusion of head and full-body detectors for multi-object tracking. In CVPRW, 2018. 7, 8
+[24] Alexander Hermans, Lucas Beyer, and Bastian Leibe. In defense of the triplet loss for person re-identification. arXiv preprint arXiv:1703.07737, 2017. 3, 4
+[25] Ju Hong Yoon, Chang-Ryeol Lee, Ming-Hsuan Yang, and Kuk-Jin Yoon. Online multi-object tracking via structural constraint event aggregation. In CVPR, 2016. 1, 2
+[26] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In CVPR, 2018. 5
+[27] Zdenek Kalal, Krystian Mikolajczyk, Jiri Matas, et al. Tracking-learning-detection. TPAMI, 34(7):1409, 2012. 2
+[28] Margret Keuper, Siyu Tang, Bjoern Andres, Thomas Brox, and Bernt Schiele. Motion segmentation & multiple object tracking by correlation co-clustering. TPAMI, 42(1):140-153, 2018. 7, 8
+[29] Chanho Kim, Fuxin Li, and James M Rehg. Multi-object tracking with neural gating using bilinear lstm. In ECCV, 2018. 1, 7, 8
+[30] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NeurIPS, 2012. 3
+[31] Laura Leal-Taixe, Cristian Canton-Ferrer, and Konrad Schindler. Learning by tracking: Siamese cnn for robust target association. In CVPRW, 2016. 2
+[32] Laura Leal-Taixe, Anton Milan, Ian Reid, Stefan Roth, and Konrad Schindler. Motchallenge 2015: Towards a benchmark for multi-target tracking. arXiv preprint arXiv:1504.01942, 2015. 1
+[33] Yuan Li, Chang Huang, and Ram Nevatia. Learning to associate: Hybridboosted multi-target tracker for crowded scene. In CVPR, 2009. 6
+[34] Si Liu, Tianzhu Zhang, Xiaochun Cao, and Changsheng Xu. Structural correlation filter for robust visual tracking. In CVPR, 2016. 2
+[35] Yuanpei Liu, Junbo Yin, Dajiang Yu, Sanyuan Zhao, and Jianbing Shen. Multiple people tracking with articulation detection and stitching strategy. Neurocomputing, 2019. 6
+[36] Xiankai Lu, Wenguan Wang, Chao Ma, Jianbing Shen, Ling Shao, and Fatih Porikli. See more, know more: Unsupervised video object segmentation with co-attention siamese networks. In CVPR, 2019. 2
+[37] Andrii Maksai and Pascal Fua. Eliminating exposure bias and loss-evaluation mismatch in multiple object tracking. In CVPR, 2019. 8
+[38] Anton Milan, Laura Leal-Taixe, Ian Reid, Stefan Roth, and Konrad Schindler. Mot16: A benchmark for multi-object tracking. arXiv preprint arXiv:1603.00831, 2016. 1, 6
+[39] Anton Milan, Seyed Hamid Rezatofighi, Anthony R Dick, Ian D Reid, and Konrad Schindler. Online multi-target tracking using recurrent neural networks. In AAAI, 2016. 1, 2
+[40] James Munkres. Algorithms for the assignment and transportation problems. Journal of the society for industrial and applied mathematics, 5(1):32-38, 1957. 6
+[41] Hamed Pirsiavash, Deva Ramanan, and Charless C Fowlkes. Globally-optimal greedy algorithms for tracking a variable number of objects. In CVPR, 2011. 2
+[42] Liangliang Ren, Jiwen Lu, Zifeng Wang, Qi Tian, and Jie Zhou. Collaborative deep reinforcement learning for multi-object tracking. In ECCV, 2018. 7
+[43] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In NeurIPS, 2015. 1, 6
+[44] Amir Sadeghian, Alexandre Alahi, and Silvio Savarese. Tracking the untrackable: Learning to track multiple cues with long-term dependencies. In ICCV, 2017. 1, 2, 6, 7
+[45] Hao Sheng, Jiahui Chen, Yang Zhang, Wei Ke, Zhang Xiong, and Jingyi Yu. Iterative multiple hypothesis tracking with tracklet-level association. TCSVT, 29(12):3660-3672, 2018. 7, 8
+[46] Kihyuk Sohn. Improved deep metric learning with multiclass n-pair loss objective. In NeurIPS, 2016. 4, 5
+[47] Jeany Son, Mooyeol Baek, Minsu Cho, and Bohyung Han. Multi-object tracking with quadruplet convolutional neural networks. In CVPR, 2017. 2, 7
+[48] Xibin Song, Peng Wang, Dingfu Zhou, Rui Zhu, Chenye Guan, Yuchao Dai, Hao Su, Hongdong Li, and Ruigang Yang. Apollocar3d: A large 3d car instance understanding benchmark for autonomous driving. In CVPR, 2019. 1
+[49] Chi Su, Jianing Li, Shiliang Zhang, Junliang Xing, Wen Gao, and Qi Tian. Pose-driven deep convolutional model for person re-identification. In ICCV, 2017. 5
+[50] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In ICML, 2013. 7
+[51] Siyu Tang, Bjoern Andres, Mykhaylo Andriluka, and Bernt Schiele. Multi-person tracking by multicut and deep matching. In ECCV, 2016. 2
+[52] Siyu Tang, Mykhaylo Andriluka, Bjoern Andres, and Bernt Schiele. Multiple people tracking by lifted multicut and person re-identification. In CVPR, 2017. 1, 2, 6
+[53] Paul Voigtlaender, Michael Krause, Aljosa Osep, Jonathon Luiten, Berin Balachandar Gnana Sekar, Andreas Geiger, and Bastian Leibe. Mots: Multi-object tracking and segmentation. In CVPR, 2019. 2
+[54] Bing Wang, Li Wang, Bing Shuai, Zhen Zuo, Ting Liu, Kap Luk Chan, and Gang Wang. Joint learning of convolutional neural networks and temporally constrained metrics for tracklet association. In CVPRW, 2016. 2
+[55] Jiang Wang, Yang Song, Thomas Leung, Chuck Rosenberg, Jingbin Wang, James Philbin, Bo Chen, and Ying Wu. Learning fine-grained image similarity with deep ranking. In CVPR, 2014. 2
+[56] Wenguan Wang, Jianbing Shen, Fatih Porikli, and Ruigang Yang. Semi-supervised video object segmentation with super-trajectories. TPAMI, 41(4):985-998, 2018. 1
+
+[57] Wenguan Wang, Zhijie Zhang, Siyuan Qi, Jianbing Shen, Yanwei Pang, and Ling Shao. Learning compositional neural information fusion for human parsing. In ICCV, 2019. 1
+[58] Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In CVPR, 2018. 3
+[59] Kilian Q Weinberger and Lawrence K Saul. Distance metric learning for large margin nearest neighbor classification. In NeurIPS, 2006. 4
+[60] Longyin Wen, Dawei Du, Shengkun Li, Xiao Bian, and Siwei Lyu. Learning non-uniform hypergraph for multi-object tracking. In AAAI, 2019. 7
+[61] Nicolai Wojke, Alex Bewley, and Dietrich Paulus. Simple online and realtime tracking with a deep association metric. In ICIP, 2017. 1
+[62] Yu Xiang, Alexandre Alahi, and Silvio Savarese. Learning to track: Online multi-object tracking by decision making. In ICCV, 2015. 2, 6
+[63] Jiarui Xu, Yue Cao, Zheng Zhang, and Han Hu. Spatial-temporal relation networks for multi-object tracking. In ICCV, 2019. 2, 3
+[64] Fan Yang, Wongun Choi, and Yuanqing Lin. Exploit all the layers: Fast and accurate cnn object detector with scale dependent pooling and cascaded rejection classifiers. In CVPR, 2016. 6
+[65] Ming Yang, Ying Wu, and Gang Hua. Context-aware visual tracking. TPAMI, 31(7):1195-1209, 2009. 5
+[66] Amir Roshan Zamir, Afshin Dehghan, and Mubarak Shah. Gmcp-tracker: Global multi-object tracking using generalized minimum clique graphs. In ECCV. 2012. 2
+[67] Li Zhang, Yuan Li, and R. Nevatia. Global data association for multi-object tracking using network flows. In CVPR, 2008. 2
+[68] Ji Zhu, Hua Yang, Nian Liu, Minyoung Kim, Wenjun Zhang, and Ming-Hsuan Yang. Online multi-object tracking with dual matching attention networks. In ECCV, 2018. 1, 2, 6, 7, 8
\ No newline at end of file
diff --git a/aunifiedobjectmotionandaffinitymodelforonlinemultiobjecttracking/images.zip b/aunifiedobjectmotionandaffinitymodelforonlinemultiobjecttracking/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..79e72a6e867b38655f1a04a9ba9daedcf9ef7056
--- /dev/null
+++ b/aunifiedobjectmotionandaffinitymodelforonlinemultiobjecttracking/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3f03d100e0b99d9a841be27f0b8a14a1267b47be25ea37b288c346f559d28bcd
+size 659981
diff --git a/aunifiedobjectmotionandaffinitymodelforonlinemultiobjecttracking/layout.json b/aunifiedobjectmotionandaffinitymodelforonlinemultiobjecttracking/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..abd4a79dd550e699e89efd613a972569b09d7774
--- /dev/null
+++ b/aunifiedobjectmotionandaffinitymodelforonlinemultiobjecttracking/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:329655ca1ddb6637774e269b94b0058d0e5b77e0f4cb1e13c9b8c820cc66610e
+size 447533
diff --git a/aunifiedoptimizationframeworkforlowrankinducingpenalties/bf1f91eb-261f-4de6-be41-cf73c19fea57_content_list.json b/aunifiedoptimizationframeworkforlowrankinducingpenalties/bf1f91eb-261f-4de6-be41-cf73c19fea57_content_list.json
new file mode 100644
index 0000000000000000000000000000000000000000..4a53abe506210db8995ed9b64a97fc52b4c137a5
--- /dev/null
+++ b/aunifiedoptimizationframeworkforlowrankinducingpenalties/bf1f91eb-261f-4de6-be41-cf73c19fea57_content_list.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:636c861d2caf8b5b3240569b002cceb00c20ffdd09a5a8f7af4a1f09aebfb4e3
+size 84709
diff --git a/aunifiedoptimizationframeworkforlowrankinducingpenalties/bf1f91eb-261f-4de6-be41-cf73c19fea57_model.json b/aunifiedoptimizationframeworkforlowrankinducingpenalties/bf1f91eb-261f-4de6-be41-cf73c19fea57_model.json
new file mode 100644
index 0000000000000000000000000000000000000000..29ff74854c23be58d029f3d77bc8ad9c886aa9c7
--- /dev/null
+++ b/aunifiedoptimizationframeworkforlowrankinducingpenalties/bf1f91eb-261f-4de6-be41-cf73c19fea57_model.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bcecfe3a107fb23dca67e94254829e226e46f8a5d81c3d24ff9ed7d47263dbff
+size 101684
diff --git a/aunifiedoptimizationframeworkforlowrankinducingpenalties/bf1f91eb-261f-4de6-be41-cf73c19fea57_origin.pdf b/aunifiedoptimizationframeworkforlowrankinducingpenalties/bf1f91eb-261f-4de6-be41-cf73c19fea57_origin.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..90decb8fcc44937277d92a487d218115c4c37445
--- /dev/null
+++ b/aunifiedoptimizationframeworkforlowrankinducingpenalties/bf1f91eb-261f-4de6-be41-cf73c19fea57_origin.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9a65c653be5f27e82faa4b9523bd295d7f93ea174adf7c66dcc9ffbc3d53c712
+size 783407
diff --git a/aunifiedoptimizationframeworkforlowrankinducingpenalties/full.md b/aunifiedoptimizationframeworkforlowrankinducingpenalties/full.md
new file mode 100644
index 0000000000000000000000000000000000000000..bb7aeb7aec92102014c79e32ee21f2f034adb963
--- /dev/null
+++ b/aunifiedoptimizationframeworkforlowrankinducingpenalties/full.md
@@ -0,0 +1,506 @@
+# A Unified Optimization Framework for Low-Rank Inducing Penalties
+
+Marcus Valtonen Örnhag1 Carl Olsson1,2
+
+1Centre for Mathematical Sciences Lund University
+
+$^{2}$ Department of Electrical Engineering Chalmers University of Technology
+
+{marcus.valtonen.ornhag, carl.olsson}@math.lth.se
+
+# Abstract
+
+In this paper we study the convex envelopes of a new class of functions. Using this approach, we are able to unify two important classes of regularizers: unbiased nonconvex formulations and weighted nuclear norm penalties. This opens up the possibility of combining the best of both worlds, and of leveraging each method's strengths in cases where enforcing only one of the regularizers is insufficient.
+
+We show that the proposed regularizers can be incorporated in standard splitting schemes such as Alternating Direction Methods of Multipliers (ADMM), and other subgradient methods. Furthermore, we provide an efficient way of computing the proximal operator.
+
+Lastly, we show, on real non-rigid structure-from-motion (NRSfM) datasets, the issues that arise from using weighted nuclear norm penalties, and how these can be remedied using our proposed method.
+
+# 1. Introduction
+
+Dimensionality reduction using Principal Component Analysis (PCA) is widely used for all types of data analysis, classification and clustering. In recent years, numerous subspace clustering methods have been proposed, to address the shortcomings of traditional PCA methods. The work on Robust PCA by Candès et al. [6] is one of the most influential papers on the subject, which sparked a large research interest from various fields including computer vision. Applications include, but are not limited to, rigid and non-rigid structure-from-motion [4, 1], photometric stereo [2] and optical flow [13].
+
+It is well-known that the solution to
+
+$$
+\min _ {\operatorname {r a n k} (X) \leq r} \| X - X _ {0} \| _ {F} ^ {2}, \tag {1}
+$$
+
+where $\| \cdot \| _F$ is the Frobenius norm, can be given in closed form using the singular value decomposition (SVD) of the measurement matrix $X_0$ . The character of the problem changes drastically, when considering objectives such as
+
+$$
+\min _ {\operatorname {r a n k} (X) \leq r} \| \mathcal {A} (X) - \mathbf {b} \| ^ {2}, \tag {2}
+$$
+
+where $\mathcal{A}:\mathbb{R}^{m\times n}\to \mathbb{R}^p$ is a linear operator, $\mathbf{b}\in \mathbb{R}^p$ , and $\| \cdot \|$ is the standard Euclidean norm. In fact, such problems are in general known to be NP hard [14]. In many cases, however, the rank is not known a priori, and a "soft rank" penalty can be used instead
+
+$$
+\min _ {X} \mu \operatorname {r a n k} (X) + \| \mathcal {A} (X) - \mathbf {b} \| ^ {2}. \tag {3}
+$$
+
+Here, the regularization parameter $\mu$ controls the trade-off between enforcing the rank and minimizing the residual error, and can be tuned to problem specific applications.
+
+In order to treat objectives of the form (2) and (3), a convex surrogate of the rank penalty is often used. One popular approach is to use the nuclear norm [30, 6]
+
+$$
+\| X \| _ {*} = \sum_ {i = 1} ^ {n} \sigma_ {i} (X), \tag {4}
+$$
+
+where $\sigma_{i}(X)$, $i = 1, \ldots, n$, is the $i$-th singular value of $X$. One of the drawbacks of using the nuclear norm penalty is that both large and small singular values are penalized equally hard. This is referred to as shrinking bias, and to counteract such behavior, methods penalizing small singular values (assumed to be noise) harder have been proposed [29, 23, 16, 26, 27, 20, 9, 32].
+
+# 1.1. Related Work
+
+Our work is a generalization of Larsson and Olsson [20]. They considered problems of the form
+
+$$
+\min _ {X} g \left(\operatorname {r a n k} (X)\right) + \| X - X _ {0} \| _ {F} ^ {2}, \tag {5}
+$$
+
+where the regularizer $g$ is non-decreasing and piecewise constant,
+
+$$
+g (k) = \sum_ {i = 1} ^ {k} g _ {i}. \tag {6}
+$$
+
+Note, that for $g_{i} \equiv \mu$ we obtain (3). Furthermore, if we let $g_{i} = 0$ for $i \leq r_0$ , and $\infty$ otherwise, (2) is obtained. The objectives (5) are difficult to optimize, as they, in general, are non-convex and discontinuous. Thus, it is natural to consider a relaxation
+
+$$
+\min _ {X} \mathcal {R} _ {g} (X) + \| X - X _ {0} \| _ {F} ^ {2}, \tag {7}
+$$
+
+where
+
+$$
+\mathcal {R} _ {g} (X) = \max _ {Z} \left(\sum_ {i = 1} ^ {n} \min \left(g _ {i}, \sigma_ {i} ^ {2} (Z)\right) - \| X - Z \| _ {F} ^ {2}\right). \tag {8}
+$$
+
+It was shown in [20] that this is the convex envelope of (5), and hence shares the same global minimizers.
+
+Another type of regularizer that has been successfully used in low-level imaging applications [15, 37, 36] is the weighted nuclear norm (WNNM),
+
+$$
+\| X \| _ {\mathbf {w}, *} = \sum_ {i = 1} ^ {k} w _ {i} \sigma_ {i} (X), \tag {9}
+$$
+
+where $\mathbf{w} = (w_{1},\dots ,w_{k})$ is a weight vector. Note that the WNNM formulation does not fit the assumptions (6), hence cannot be considered in this framework.
+
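+For illustration, both penalties are simple functions of the singular values; a minimal sketch of evaluating (4) and (9):
+
+```python
+import numpy as np
+
+def nuclear_norm(X):
+    # ||X||_*: sum of singular values, Eq. (4).
+    return float(np.linalg.svd(X, compute_uv=False).sum())
+
+def weighted_nuclear_norm(X, w):
+    # ||X||_{w,*}: weighted sum of singular values, Eq. (9).
+    s = np.linalg.svd(X, compute_uv=False)
+    return float(np.dot(np.asarray(w)[:len(s)], s))
+
+X = np.random.randn(5, 8)
+w = np.linspace(1.0, 3.0, 5)  # non-decreasing weights penalize small singular values harder
+print(nuclear_norm(X), weighted_nuclear_norm(X, w))
+```
+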
+For certain applications, it is of interest to include both regularizers, as we will show in Section 6. Typically, this is preferable when the rank constraint alone is not strong enough to yield accurate reconstructions, and further penalization is necessary to restrict the solutions. To this end, we suggest to merge these penalties. In [28] a similar approach was suggested, but it is not general enough to include penalties like WNNM.
+
+Our main contributions are:
+
+- A novel method for combining bias reduction and nonconvex low-rank inducing objectives,
+- An efficient and fast algorithm to compute the proposed regularizer,
+- Theoretical insight in the quality of reconstructed missing data using WNNM, and practical demonstrations on how shrinking bias is perceived in these applications,
+- A new objective for Non-Rigid Structure from Motion (NRSfM), with improved performance, compared to state-of-the-art prior-free methods, capable of working in cases where the image sequences are unordered.
+
+First, however, we will lay the ground for the theory of a common framework of low-rank inducing objectives.
+
+# 2. Problem Formulation and Motivation
+
+In this paper we propose a new class of regularization terms for low rank matrix recovery problems that controls both the rank and the magnitude of the singular values of the recovered matrix. Our objective function has the form
+
+$$
+f _ {h} (X) = h (\boldsymbol {\sigma} (X)) + \left\| \mathcal {A} (X) - b \right\| ^ {2}, \tag {10}
+$$
+
+where $h(\sigma(X)) = \sum_{i=1}^{k} h_i(\sigma_i(X))$ and
+
+$$
+h _ {i} \left(\sigma_ {i} (X)\right) = \left\{ \begin{array}{l l} 2 a _ {i} \sigma_ {i} (X) + b _ {i} & \sigma_ {i} (X) \neq 0, \\ 0 & \text {o t h e r w i s e .} \end{array} \right. \tag {11}
+$$
+
+We assume that the sequences $\{a_i\}_{i=1}^k$ and $\{b_i\}_{i=1}^k$ are both non-decreasing.
+
+Our approach unifies the formulation of [19] with the weighted nuclear norm. The terms $2a_{i}\sigma_{i}(X)$ correspond to the singular value penalties of a weighted nuclear norm [15]. These can be used to control the sizes of the non-zero singular values. In contrast, the constants $b_{i}$ correspond to a rank penalization that is independent of these sizes and, as we will see in the next section, enables bias-free rank selection.
+
+# 2.1. Controlled Bias and Rank Selection
+
+To motivate the use of both sets of variables $\{a_i\}_{i=1}^k$ and $\{b_i\}_{i=1}^k$ , and to understand their purpose, we first consider the simple recovery problem $\min_X f_h(X)$ , where
+
+$$
+f _ {h} (X) := h (\boldsymbol {\sigma} (X)) + \| X - X _ {0} \| _ {F} ^ {2}. \tag {12}
+$$
+
+Here $X_0$ is assumed to consist of a set of large singular values $\sigma_i(X_0)$ , $i = 1, \dots, r$ , corresponding to the matrix we wish to recover, and a set of small ones $\sigma_i(X_0)$ , $i = r + 1, \dots, k$ , corresponding to noise that we want to suppress.
+
+Due to von Neumann's trace theorem [22], the solution can be computed in closed form by considering each singular value separately and minimizing
+
+$$
+\left\{ \begin{array}{l l} 2 a _ {i} \sigma_ {i} (X) + b _ {i} + \left(\sigma_ {i} (X) - \sigma_ {i} \left(X _ {0}\right)\right) ^ {2} & \sigma_ {i} (X) \neq 0, \\ \sigma_ {i} \left(X _ {0}\right) ^ {2} & \sigma_ {i} (X) = 0, \end{array} \right. \tag {13}
+$$
+
+over $\sigma_{i}(X) \geq 0$. Differentiating for the case $\sigma_{i}(X) \neq 0$ gives a stationary point at $\sigma_{i}(X) = \sigma_{i}(X_{0}) - a_{i}$ if $\sigma_{i}(X_{0}) - a_{i} > 0$. Since this point has objective value $2a_{i}\sigma_{i}(X_{0}) - a_{i}^{2} + b_{i}$, it is clear that this point will be optimal if
+
+$$
+2 a _ {i} \sigma_ {i} \left(X _ {0}\right) - a _ {i} ^ {2} + b _ {i} \leq \sigma_ {i} \left(X _ {0}\right) ^ {2}, \tag {14}
+$$
+
+or equivalently (completing the square) $(\sigma_{i}(X_{0}) - a_{i})^{2} \geq b_{i}$, i.e. $\sigma_{i}(X_{0}) - a_{i}\geq \sqrt{b_{i}}$. Summarizing, we thus get the optimal singular values
+
+$$
+\sigma_ {i} (X) = \left\{ \begin{array}{l l} \sigma_ {i} \left(X _ {0}\right) - a _ {i} & \sigma_ {i} \left(X _ {0}\right) - a _ {i} \geq \sqrt {b _ {i}}, \\ 0 & \text {o t h e r w i s e .} \end{array} \right. \tag {15}
+$$
+
+
+Figure 1. The optimal recovered singular value $\sigma_{i}(X)$ as a function (red curve) of the observed $\sigma_{i}(X_{0})$ .
+
+Note that this is a valid sequence of singular values, since under our assumptions $\sigma_{i}(X_{0}) - a_{i}$ is non-increasing and $\sqrt{b_{i}}$ is non-decreasing. The red curve of Figure 1 shows the recovered singular value as a function of the corresponding observed singular value. For comparison, we also plot the dotted blue curve, which shows hard thresholding at $a_{i} + \sqrt{b_{i}}$, i.e. singular values smaller than $a_{i} + \sqrt{b_{i}}$ vanish while the rest are left unaltered.
+
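+Since (15) is given entirely in terms of the SVD of $X_0$, the minimizer of (12) can be written down directly; a minimal sketch, assuming non-decreasing sequences `a` and `b`:
+
+```python
+import numpy as np
+
+def recover(X0, a, b):
+    """Closed-form minimizer of h(sigma(X)) + ||X - X0||_F^2, following Eq. (15)."""
+    U, s, Vt = np.linalg.svd(X0, full_matrices=False)
+    a = np.asarray(a, dtype=float)[:len(s)]
+    b = np.asarray(b, dtype=float)[:len(s)]
+    shrunk = s - a
+    s_new = np.where(shrunk >= np.sqrt(b), shrunk, 0.0)  # keep (shrunk) value or set to zero
+    return U @ np.diag(s_new) @ Vt
+
+# Example: a_i = 0 with a jump in b_i acts as unbiased rank selection
+# (at most rank 2, surviving singular values left unshrunk).
+X0 = np.random.randn(6, 10)
+a = np.zeros(6)
+b = np.r_[np.zeros(2), np.full(4, 1e3)]
+print(np.linalg.matrix_rank(recover(X0, a, b)))  # typically 2
+```
+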
+Now, suppose that we want to recover the largest singular values unchanged. Using the weighted nuclear norm ($b_{i} = 0$) it is clear that this can only be done if we know that the sought matrix has rank $r$ and let $a_{i} = 0$ for $i = 1,\dots,r$. For any other setting the regularization will subtract $a_{i}$ from the corresponding non-zero singular value. In contrast, letting $a_{i} = 0$ allows exact recovery of the large singular values by selecting $b_{i}$ appropriately, even when the rank is unknown. Hence, in the presence of only a weak prior on the rank of the matrix, using only the $b_{i}$ (the framework in [20]) allows exact recovery for a more general set of problems than using the $a_{i}$ (weighted nuclear norm formulations).
+
+The above class of problems is well posed thanks to the strong data term $\| X - X_0\| _F^2$. For problems with weaker data terms, priors on the magnitude of the singular values can still be very useful. In the context of NRSfM it has been observed [27, 12] that adding a bias can improve the distance to the ground truth reconstruction, even though it does not alter the rank. The reason is that, when the scene is not rigid, several reconstructions with the same rank may co-exist, thus resulting in similar projections. By introducing a bias on the singular values, further regularization is enforced on the deformations, which may aid in the search for correct reconstructions. For example, with $a_1 = 0$ and $a_{i} > 0$, $i > 1$, we obtain a penalty that favors matrices that are "close to" rank 1. In the formulation of [12], where rank 1 corresponds to a rigid scene, this can be thought of as an "as rigid as possible" prior, which is realistic in many cases [31, 24, 33, 18], but has yet to be considered in the context of factorization methods.
+
+# 2.2. The Quadratic Envelope
+
+As discussed above, the two sets of parameters $\{a_i\}$ and $\{b_i\}$ have complementary regularization effects. The main purpose of unifying them is to create more flexible priors, allowing us to do accurate rank selection with a controlled bias. In the following sections, we also show that they admit relaxations that can be reliably optimized. Specifically, the resulting formulation $h(\pmb{\sigma}(X))$, which is generally non-convex and discontinuous, can be relaxed by computing the so-called quadratic envelope [10, 11]. The resulting relaxation $\mathcal{R}_h(\pmb{\sigma}(X))$ is continuous, and in addition $\mathcal{R}_h(\pmb{\sigma}(X)) + \|X - X_0\|_F^2$ is convex. For a more general data term there may be multiple local minimizers. However, it is known that
+
+$$
+h (\sigma (X)) + \| \mathcal {A} (X) - b \| ^ {2}, \tag {16}
+$$
+
+and
+
+$$
+\mathcal {R} _ {h} (\sigma (X)) + \| \mathcal {A} (X) - b \| ^ {2}, \tag {17}
+$$
+
+have the same global minimizer when $\| \mathcal{A} \| < 1$ [10]. In addition, potential local minima of (17) are also local minima of (16); however, the converse does not hold. We also show that the proximal operator of $\mathcal{R}_h(\sigma(X))$ can be efficiently computed which allows simple optimization using splitting methods such as ADMM [3].
+
+# 3. A New Family of Functions
+
+Consider functions of the form (12). This is a generalization of [20], and the derivation for our objective follows a similar structure. We outline it in detail in the supplementary material, where we show that the convex envelope $f_h^{**}$ is given by
+
+$$
+f _ {h} ^ {* *} (X) = \mathcal {R} _ {h} (X) + \| X - X _ {0} \| _ {F} ^ {2}, \tag {18}
+$$
+
+where
+
+$$
+\begin{array}{l} \mathcal {R} _ {h} (X) := \max _ {Z} \left(\sum_ {i = 1} ^ {n} \min \left(b _ {i}, [ \sigma_ {i} (Z) - a _ {i} ] _ {+} ^ {2}\right) + \| Z \| _ {F} ^ {2} \right. \\ \left. - \| X - Z \| _ {F} ^ {2} - \sum_ {i = 1} ^ {n} \left[ \sigma_ {i} (Z) - a _ {i} \right] _ {+} ^ {2}\right). \tag {19} \\ \end{array}
+$$
+
+As in [20], the optimization can be reduced to the singular values only,
+
+$$
+\begin{array}{l} \mathcal {R} _ {h} (X) = \max _ {\sigma (Z)} \left(\sum_ {i = 1} ^ {n} \min \left(b _ {i}, [ \sigma_ {i} (Z) - a _ {i} ] _ {+} ^ {2}\right) + \sigma_ {i} ^ {2} (Z) \right. \\ \left. - \left(\sigma_ {i} (X) - \sigma_ {i} (Z)\right) ^ {2} - \left[ \sigma_ {i} (Z) - a _ {i} \right] _ {+} ^ {2}\right). \tag {20} \\ \end{array}
+$$
+
+This can be achieved by applying von Neumann's trace theorem (see the supplementary material). The optimization problem is concave, and hence can be solved with standard convex solvers such as MOSEK or CVX; however, in the next section we show that the problem can be turned into a series of one-dimensional problems, and the resulting algorithm for computing (19) is orders of magnitude faster than a general purpose solver.
+
+# 4. Finding the Maximizing Sequence
+
+Following the approach used in [20], consider the program
+
+$$
+\max _ {s} f (s) \tag {21}
+$$
+
+$$
+s. t. \quad \sigma_ {i + 1} (Z) \leq s \leq \sigma_ {i - 1} (Z).
+$$
+
+where $\sigma_{i}(Z)$ is the $i$-th singular value of the maximizing sequence in (20), and
+
+$$
+f (s) = \min \left\{b _ {i}, [ s - a _ {i} ] _ {+} ^ {2} \right\} - (s - \sigma_ {i} (X)) ^ {2} + s ^ {2} - [ s - a _ {i} ] _ {+} ^ {2}. \tag {22}
+$$
+
+The objective function $f$ can be seen as the pointwise minimum of two concave functions, namely, $f_{1}(s) = b_{i} + 2\sigma_{i}(X)s - \sigma_{i}^{2}(X) - [s - a_{i}]_{+}^{2}$ and $f_{2}(s) = 2\sigma_{i}(X)s - \sigma_{i}(X)^{2}$ , i.e. $f(s) = \min \{f_1(s),f_2(s)\}$ , hence $f$ is concave.
+
+The individual unconstrained optimizers are given by $s_i = a_i + \max \{\sqrt{b_i}, \sigma_i(X)\}$. In previous work [20], where $a_i \equiv 0$, an algorithm was devised to find the maximizing singular value vector by turning the problem into an optimization over a single variable. This method is not directly applicable here, as the sequence $\{s_i\}_{i=1}^k$ does not, in general, satisfy the necessary conditions. In fact, the number of local extrema in the sequence $\{s_i\}_{i=1}^k$ is limited only by its length. We show an example of such a sequence, and the corresponding maximizing sequence, in Figure 2. Nevertheless, it is possible to devise an algorithm that returns the maximizing singular value vector, as we show shortly.
+
+To do so, we can apply some of the ideas behind the proof in [20]. Consider the more general optimization problem of maximizing $g(\pmb{\sigma}) = \sum_{i=1}^{k} f_i(\sigma_i)$, subject to $\sigma_1 \geq \sigma_2 \geq \dots \geq \sigma_k \geq 0$, where the $f_i$ are concave. Then, given the unconstrained sequence of maximizers $\{s_i\}_{i=1}^k$, the elements of the constrained sequence $\{\sigma_i\}_{i=1}^k$ can be limited to three choices
+
+$$
+\sigma_ {i} = \left\{ \begin{array}{l l} s _ {i} & \text {i f} \sigma_ {i + 1} \leq s _ {i} \leq \sigma_ {i - 1}, \\ \sigma_ {i - 1} & \text {i f} \sigma_ {i - 1} < s _ {i}, \\ \sigma_ {i + 1} & \text {i f} s _ {i} < \sigma_ {i + 1}. \end{array} \right. \tag {23}
+$$
+
+Furthermore, the constrained sequence is constant on the regions between local extreme points of the unconstrained maximizers.
+
+Data: weights $\mathbf{a}$, $\mathbf{b}$, and $\boldsymbol{\sigma}(X)$. Result: the maximizing vector $\sigma(Z^{*}) = \{\sigma_{i}\}_{i}$.
+
+- Initialize with the unconstrained maximizers $\sigma_{i} = a_{i} + \max\{\sqrt{b_{i}}, \sigma_{i}(X)\}$.
+- While $\sigma(Z^{*})$ is not a valid (non-increasing) singular value vector:
+  - Find the subintervals $\{\iota_{k}\}_{k\in\mathcal{I}}$ on which $\sigma(Z^{*})$ violates the ordering constraints.
+  - For each $k\in \mathcal{I}$: find the scalar $s^{*} = \arg\max_{s} f(s)$, with $f$ as defined in (22), and update $\sigma_{i} = s^{*}$ for all $i \in \iota_{k}$.
+
+Lemma 1. Assume $s_p$ and $s_q$ are local extrema of $\{s_i\}_{i=1}^k$ and that the subsequence $\{s_i\}_{i=p}^q$ is non-decreasing. Then the corresponding subsequence of the constrained problem, $\{\sigma_i\}_{i=p}^q$, is constant.
+
+Proof. Consider $\sigma_{i}$ for some $p\leq i\leq q - 1$. If $\sigma_{i} > s_{i}$, then by (23) we have $\sigma_{i + 1} = \sigma_{i}$. If instead $\sigma_{i}\leq s_{i}$, we have $\sigma_{i + 1}\leq \sigma_i\leq s_i\leq s_{i + 1}$ and, by (23), $\sigma_{i + 1} = \sigma_{i}$.
+
+We can now devise an algorithm that returns the maximizing sequence, see Algorithm 1. Essentially, the algorithm starts at the unconstrained solution, and then adds more constraints, by utilizing Lemma 1, until all of them are fulfilled.
+
+Theorem 1. Algorithm 1 returns the maximizing sequence.
+
+Proof. See the supplementary material.
+
+
+Algorithm 1: Algorithm for finding the maximizing singular value vector.
+
+
+Figure 2. Example of a sequence of unconstrained maximizers (blue line), local extrema (green and red) and the maximizing sequences (dashed black) obtained by Algorithm 1.
+
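+One compact way to realize the pooling idea behind Algorithm 1 and Lemma 1 is a pool-adjacent-violators style procedure: start from the unconstrained maximizers and repeatedly merge adjacent subintervals that violate the non-increasing ordering, re-maximizing the pooled (still concave) objective on each merged block. The sketch below illustrates this principle with a generic bounded 1-D solver for the pooled blocks; it is an illustration, not the authors' exact Algorithm 1.
+
+```python
+import numpy as np
+from scipy.optimize import minimize_scalar
+
+def f_single(s, a_i, b_i, sx_i):
+    # Per-index concave objective from Eq. (22).
+    hinge = max(s - a_i, 0.0) ** 2
+    return min(b_i, hinge) - (s - sx_i) ** 2 + s ** 2 - hinge
+
+def maximizing_sequence(a, b, sigma_x):
+    """Pool-adjacent-violators style search for the constrained maximizer of (20)."""
+    k = len(sigma_x)
+    blocks = [[i] for i in range(k)]
+    # Unconstrained per-index maximizers s_i = a_i + max(sqrt(b_i), sigma_i(X)).
+    vals = [a[i] + max(np.sqrt(b[i]), sigma_x[i]) for i in range(k)]
+
+    def block_max(idx):
+        # Numerically maximize the pooled concave objective over one block.
+        obj = lambda s: -sum(f_single(s, a[i], b[i], sigma_x[i]) for i in idx)
+        upper = max(a[i] + max(np.sqrt(b[i]), sigma_x[i]) for i in idx)
+        return minimize_scalar(obj, bounds=(0.0, upper + 1e-9), method="bounded").x
+
+    merged = True
+    while merged:
+        merged = False
+        for j in range(len(blocks) - 1):
+            if vals[j] < vals[j + 1]:          # ordering violated: pool the two blocks
+                blocks[j] += blocks.pop(j + 1)
+                vals.pop(j + 1)
+                vals[j] = block_max(blocks[j])
+                merged = True
+                break
+
+    sigma_z = np.empty(k)
+    for blk, v in zip(blocks, vals):
+        sigma_z[blk] = v                        # constant on each pooled subinterval
+    return sigma_z
+
+# Small example: the unconstrained maximizers are not monotone, so pooling kicks in.
+print(maximizing_sequence(np.array([0.5, 1.0, 1.5]),
+                          np.array([0.25, 0.25, 4.0]),
+                          np.array([3.0, 2.0, 1.0])))
+```
+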
+# 5. ADMM and the Proximal Operator
+
+We employ the splitting method ADMM [3], which is a standard tool for problems of this type. Thus, consider the
+
+Table 1. Distance to ground truth (normalized), averaged over 20 problem instances, for different percentages of missing data and data patterns. The standard deviation of the noise is kept constant at $\sigma = 0.1$. Best results are marked in bold.
+
+| | Missing data (%) | PCP [7] | WNNM [15] | Unifying [5] | LpSq [25] | S12L12 [32] | S23L23 [32] | IRNN [9] | APGL [34] | $\|\cdot\|_*$ [3] | Rμ [20] | Our |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Uniform | 0 | 0.0400 | 0.0246 | 0.0406 | 0.0501 | 0.0544 | 0.0545 | 0.0551 | 0.0229 | 0.1959 | 0.0198 | 0.0199 |
+| | 20 | 0.3707 | 0.2990 | 0.3751 | 0.1236 | 0.1322 | 0.0972 | 0.0440 | 0.0233 | 0.2287 | 0.0257 | 0.0198 |
+| | 40 | 1.0000 | 0.6185 | 0.9355 | 0.1265 | 0.1222 | 0.1137 | 0.0497 | 0.0291 | 0.3183 | 0.2105 | 0.0248 |
+| | 60 | 1.0000 | 0.8278 | 1.0000 | 0.1354 | 0.1809 | 0.1349 | 0.0697 | 0.0826 | 0.5444 | 0.3716 | 0.0466 |
+| | 80 | 1.0000 | 0.9810 | 1.0000 | 0.7775 | 0.6573 | 0.5945 | 0.2305 | 0.4648 | 0.8581 | 0.9007 | 0.3117 |
+| Tracking | 0 | 0.0399 | 0.0220 | 0.0399 | 0.0491 | 0.0352 | 0.0344 | 0.0491 | 0.0205 | 0.1762 | 0.0176 | 0.0177 |
+| | 10 | 0.3155 | 0.2769 | 0.1897 | 0.1171 | 0.0881 | 0.0874 | 0.0926 | 0.1039 | 0.2607 | 0.0829 | 0.0802 |
+| | 20 | 0.4681 | 0.4250 | 0.3695 | 0.1893 | 0.1346 | 0.1340 | 0.1430 | 0.1686 | 0.3425 | 0.2146 | 0.1343 |
+| | 30 | 0.5940 | 0.5143 | 0.4147 | 0.1681 | 0.2822 | 0.3081 | 0.1316 | 0.1594 | 0.3435 | 0.4137 | 0.1277 |
+| | 40 | 0.7295 | 0.6362 | 0.9331 | 0.2854 | 0.4262 | 0.4089 | 0.1731 | 0.2800 | 0.5028 | 0.5072 | 0.1705 |
+| | 50 | 0.7977 | 0.7228 | 0.9162 | 0.4439 | 0.5646 | 0.5523 | 0.2847 | 0.4219 | 0.5831 | 0.6464 | 0.3128 |
+
+augmented Lagrangian
+
+$$
+L (X, Y, \Lambda) = f _ {h} ^ {* *} (X) + \rho \| X - Y + \Lambda \| _ {F} ^ {2} + \mathcal {C} (Y) - \rho \| \Lambda \| _ {F} ^ {2}, \tag {24}
+$$
+
+where $X$ and $Y$ are minimized sequentially, and $\Lambda$ is the dual variable. All variables are of the same dimensionality. The function $\mathcal{C}$ is assumed to be convex and incorporates additional priors. In each iteration, we solve
+
+$$
+X _ {t + 1} = \underset {X} {\arg \min } f _ {h} ^ {* *} (X) + \rho \| X - Y _ {t} + \Lambda_ {t} \| _ {F} ^ {2}, \tag {25}
+$$
+
+$$
+Y _ {t + 1} = \underset {Y} {\arg \min } \rho \| X _ {t + 1} - Y + \Lambda_ {t} \| _ {F} ^ {2} + \mathcal {C} (Y), \tag {26}
+$$
+
+$$
+\Lambda_ {t + 1} = X _ {t + 1} - Y _ {t + 1} + \Lambda_ {t}. \tag {27}
+$$
+
+To evaluate the proximal operator of $f_{h}^{**}$ one must solve
+
+$$
+\min _ {X} \mathcal {R} _ {h} (X) + \| X - X _ {0} \| _ {F} ^ {2} + \rho \| X - M \| _ {F} ^ {2}. \tag {28}
+$$
+
+Note that, due to the definition of (19), this can be seen as a convex-concave min-max problem by restricting the minimization over $X$ to a compact set. Solving first for $X$, one obtains
+
+$$
+X = M + \frac {X _ {0} - Z}{\rho} = \frac {(\rho + 1) Y - Z}{\rho}, \tag {29}
+$$
+
+where $Y = \frac{X_0 + \rho M}{1 + \rho}$ . Similarly, as in [20], we get a program of the type (excluding constants)
+
+$$
+\begin{array}{l} \max _ {Z} \left(\sum_ {i = 1} ^ {n} \min \left(b _ {i}, [ \sigma_ {i} (Z) - a _ {i} ] _ {+} ^ {2}\right) - \frac {\rho + 1}{\rho} \| Z - Y \| _ {F} ^ {2} \right. \\ \left. + \| Z \| _ {F} ^ {2} - \sum_ {i = 1} ^ {n} \left[ \sigma_ {i} (Z) - a _ {i} \right] _ {+} ^ {2}\right). \tag {30} \\ \end{array}
+$$
+
+Again, the optimization can be reduced to the singular values only. This bears strong resemblance to (21), and we show in the supplementary material that Algorithm 1 can be modified, with minimal effort, to solve this problem as well.
+
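+To make the splitting scheme (25)-(27) concrete, the sketch below shows a generic ADMM loop for a matrix-completion-style problem, where $\mathcal{C}(Y)$ plays the role of the data term on observed entries. For simplicity the $X$-update uses plain singular-value soft-thresholding (the nuclear-norm prox) as a stand-in for the $\mathcal{R}_h$-prox, which in the paper is evaluated via the modified Algorithm 1; all names and parameter values here are illustrative assumptions, not the authors' implementation.
+
+```python
+import numpy as np
+
+def svt(V, tau):
+    # Singular-value soft-thresholding: prox of tau * ||X||_* (stand-in for the R_h prox).
+    U, s, Vt = np.linalg.svd(V, full_matrices=False)
+    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
+
+def admm_complete(M, W, mu=1.0, rho=1.0, iters=200):
+    """ADMM in the spirit of (25)-(27) for min_X mu*||X||_* + ||W o (X - M)||_F^2."""
+    X = np.zeros_like(M)
+    Y = np.zeros_like(M)
+    Lam = np.zeros_like(M)
+    for _ in range(iters):
+        X = svt(Y - Lam, mu / (2.0 * rho))          # X-update: prox step, cf. (25)
+        Y = (rho * (X + Lam) + W * M) / (rho + W)   # Y-update: closed form per entry, cf. (26)
+        Lam = Lam + X - Y                           # dual update, cf. (27)
+    return X
+
+# Toy usage: complete a rank-4 matrix from roughly 60% of its entries.
+rng = np.random.default_rng(0)
+M0 = rng.standard_normal((32, 4)) @ rng.standard_normal((4, 512))
+W = (rng.random(M0.shape) < 0.6).astype(float)
+X_hat = admm_complete(W * M0, W, mu=5.0)
+print(np.linalg.norm(X_hat - M0) / np.linalg.norm(M0))
+```
+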
+# 6. Experiments
+
+We demonstrate the shortcomings of using WNNM for non-rigid reconstruction and structure-from-motion, and show that our proposed method performs as well as or better than the current state-of-the-art. In all applications, we apply the popular approach [8, 15, 17] of choosing the weights inversely proportional to the singular values,
+
+$$
+w _ {i} = \frac {C}{\sigma_ {i} \left(X _ {0}\right) + \epsilon}, \tag {31}
+$$
+
+where $\epsilon > 0$ is a small number (to avoid division by zero), and $X_0$ is an initial estimate of the matrix $X$. The trade-off parameter $C$ is tuned to the specific application. In the experiments, we use $w_i = 2a_i$, and choose $b_i$ depending on the application at hand. This allows us to control the rank of the obtained solution without excessive penalization of the non-zero singular values.
+
+# 6.1. Synthetic Missing Data
+
+In this section we consider the missing data problem with unknown rank
+
+$$
+\min _ {X} \mu \operatorname {r a n k} (X) + \| W \odot (X - M) \| _ {F} ^ {2}, \tag {32}
+$$
+
+where $M$ is a measurement matrix, $\odot$ denotes the Hadamard (or element-wise) product, and $W$ is a missing data mask, with $w_{ij} = 1$ if the entry $(i,j)$ is known, and zero otherwise.
+
+Ground truth matrices $M_0$ of size $32 \times 512$ with $\mathrm{rank}(M_0) = 4$ are generated, and to simulate noise, a matrix $N$ is added to obtain the measurement matrix $M = M_0 + N$ . The entries of the noise matrix are normally distributed with zero mean and standard deviation $\sigma = 0.1$ .
+
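+A minimal sketch of this synthetic setup (sizes, rank and noise level as stated above; generating the rank-4 matrix as a product of Gaussian factors and using a uniformly random mask are illustrative choices):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+m, n, r, noise_std = 32, 512, 4, 0.1
+
+# Rank-4 ground truth plus i.i.d. Gaussian noise.
+M0 = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
+M = M0 + noise_std * rng.standard_normal((m, n))
+
+# Uniformly random missing-data mask W (here with 40% of the entries missing).
+missing_frac = 0.4
+W = (rng.random((m, n)) >= missing_frac).astype(float)
+```
+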
+When benchmarking image inpainting and deblurring, it is common to assume a uniformly distributed missing data pattern. This assumption, however, is not applicable in many other subfields of computer vision. In structure-from-motion the missing data pattern is typically very structured,
+
+due to tracking failures. For comparison, we show the reconstruction results of several methods on both uniformly random missing data patterns and tracking-failure patterns. The tracking-failure patterns were generated as in [21]. The results are shown in Table 1. Here we use $a_{i} = \frac{\sqrt{\mu}}{\sigma_{i}(M) + \epsilon}$ and $b_{i} = \frac{\mu}{\sigma_{i}(M) + \epsilon}$, with $\epsilon = 10^{-6}$. All other parameters are set as proposed by the respective authors.
+
+# 6.2. Non-Rigid Deformation with Missing Data
+
+This experiment is constructed to highlight the downsides of using WNNM, and to illustrate how shrinking bias can manifest itself in a real-world application. Non-rigid deformation can be cast as a low-rank recovery problem by assuming that the tracked image points move in a low-dimensional subspace. This allows us to model the points using a linear shape basis, where the complexity of the motion is limited by the number of basis elements. This, in turn, requires making accurate trade-offs while enforcing a low (and unknown) rank, leading to the problem formulation
+
+$$
+\min _ {X} \mu \operatorname {r a n k} (X) + \| W \odot (X - M) \| _ {F} ^ {2}, \tag {33}
+$$
+
+where $X = CB^T$ , with $B$ being concatenated basis elements and $C$ the corresponding coefficient matrix. We use the experimental setup from [19], where a KLT tracker is used on a video sequence. The usage of the tracker naturally induces a structured missing data pattern, due to the inability to track the points through the entire sequence.
+
+We consider the relaxation of (33)
+
+$$
+\min _ {X} \mathcal {R} _ {h} (X) + \| W \odot (X - M) \| _ {F} ^ {2}, \tag {34}
+$$
+
+and choose $a_{i} = \frac{C}{\sigma_{i}(M) + \epsilon}$, and $b_{i} = 0$ for $i \leq 3$ and $b_{i} = 1 / (C + \epsilon)$ otherwise. This choice of $\mathbf{b}$ encourages a rank 3 solution without penalizing the large singular values. By choosing the parameter $C$, one can vary the strength of the fixed-rank regularization versus the weighted nuclear norm penalty. The datafit versus the parameter $C$ is shown in Table 2, and the reconstructed points for four frames of the book sequence are shown in Figure 3.
+
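+The regularizer used here is fully specified by the two sequences above; a small sketch of constructing them (with $\epsilon$ a small constant carried over from (31), and the target rank of 3 as stated):
+
+```python
+import numpy as np
+
+def rank3_sequences(M, C, eps=1e-6, target_rank=3):
+    """a_i = C / (sigma_i(M) + eps); b_i = 0 for i <= target_rank, 1/(C + eps) otherwise."""
+    s = np.linalg.svd(M, compute_uv=False)
+    a = C / (s + eps)
+    b = np.where(np.arange(1, len(s) + 1) <= target_rank, 0.0, 1.0 / (C + eps))
+    return a, b
+```
+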
+Notice that, despite the superior datafit for $C = 1$ (encouraging the WNNM penalty), it is clear by visual inspection that the missing points are poorly recovered. In Figure 3 the white center marker is the origin, and we note a tendency for the WNNM penalty to favor solutions where the missing points are drawn towards the origin. This is a consequence of shrinking bias, and is only remedied by leaving the larger singular values intact, which excludes WNNM as a viable option for such applications.
+
+Table 2. Datafit for different values of $C$ . Note that the datafit for $C = 1$ is better than for $C = 10^{-2}$ . This comes at the cost of incorrectly reconstructing the missing points, as is shown in Figure 3. The datafit is measured as $\| W \odot (X - M) \|_F$ .
+
+| $C$ | $10^{-2}$ | $1$ | $100$ |
+| --- | --- | --- | --- |
+| Datafit | 0.8354 | 0.4485 | 6.5221 |
+
+# 6.3. Motion Capture
+
+The popular prior-free objective for NRSfM proposed by Dai et al. [12],
+
+$$
+\min _ {X} \mu \left\| X ^ {\sharp} \right\| _ {*} + \| R X - M \| _ {F} ^ {2}, \tag {35}
+$$
+
+where $X^{\sharp}$ is a stacked version of $X$ (see [12] for details), suffers from shrinking bias due to the nuclear norm penalty. Essentially, the nuclear norm penalty is a relaxation of the soft rank penalty,
+
+$$
+\min _ {X} \mu \operatorname {r a n k} \left(X ^ {\sharp}\right) + \| R X - M \| _ {F} ^ {2}, \tag {36}
+$$
+
+however, it was shown in [27] that simply using the convex envelope of the rank function leads to non-physical reconstructions. To tackle this, it was proposed to penalize the 3D trajectories using a difference operator $D$,
+
+$$
+\min _ {X} \mu \operatorname {r a n k} \left(X ^ {\sharp}\right) + \| R X - M \| _ {F} ^ {2} + \left\| D X ^ {\sharp} \right\| _ {F} ^ {2}. \tag {37}
+$$
+
+While such an objective leads to more physical solutions [27], it also restricts the method to ordered sequences of images. To allow for unordered sequences, we replace the difference operator with an increasing penalty for smaller singular values, modelled by an increasing sequence of weights $\{a_i\}$ . More specifically, we consider the problem of minimizing
+
+$$
+\min _ {X} \mathcal {R} _ {h} \left(X ^ {\sharp}\right) + \| R X - M \| _ {F} ^ {2}, \tag {38}
+$$
+
+where the sequences $\{a_i\}$ and $\{b_i\}$ are non-decreasing. This bears resemblance to the weighted nuclear norm approach recently presented in [17], which coincides with ours in the special case $b_i \equiv 0$. Furthermore, this modified approach exhibits far superior reconstruction results compared to the original method proposed by Dai et al. [12]. In our comparison, we employ the same initialization heuristic for the weights $w_i$ on the singular values as in [15, 17], namely
+
+$$
+w _ {i} = \frac {C}{\sigma_ {i} \left(X _ {0} ^ {\sharp}\right) + \epsilon}, \tag {39}
+$$
+
+where $\epsilon = 10^{-6}$ and $C > 0$ . The matrix $X_0^\sharp = R^+ M$ , where $R^+$ is the pseudo-inverse of $R$ , has successfully been used as an initialization scheme for NRSfM by others [12, 35, 17].
+
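+The initialization and the corresponding weights (39) are straightforward to compute; a minimal sketch (following the text's $X_0^{\sharp} = R^{+} M$ convention):
+
+```python
+import numpy as np
+
+def init_and_weights(R, M, C, eps=1e-6):
+    """X0_sharp = pinv(R) @ M (pseudo-inverse initialization), then w_i as in (39)."""
+    X0_sharp = np.linalg.pinv(R) @ M
+    w = C / (np.linalg.svd(X0_sharp, compute_uv=False) + eps)
+    return X0_sharp, w
+```
+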
+Figure 3 panels, from left to right: Frame 1, Frame 121, Frame 380, Frame 668.
+
+Figure 3. From top to bottom, $C = 10^{-2}$ , $C = 1$ and $C = 100$ . The white center dot is the origin in the chosen coordinate system. The green crosses show the observed data, and the blue dots the reconstruction of these points. The yellow dots correspond to the recovered (and missing) data. Notice the shrinking bias which is evident due to the recovered missing data being drawn towards the center of the image as the WNNM penalty increases.
+
+In practice, we choose $2a_{i} = w_{i}$, as in (39), with $C = 2\sqrt{\mu}$, and $b_{i} = w_{i}$, with $C = \mu$. This enforces a mix of soft and hard rank thresholding.
+
+We select four sequences from the CMU MOCAP dataset, and compare to the original method proposed by Dai et al. [12], the newly proposed weighted approach by Kumar [17], the method by Larsson and Olsson [20] and our proposed objective (38), all of which are prior-free, and do not assume that the image sequences are ordered. For the nuclear norm approach by Dai et al. we use the regularization parameter $\lambda = 2\sqrt{\mu}$ and for Kumar, we set $C = 2\sqrt{\mu}$ (as for $\mathcal{R}_h$ ) and run the different methods for a wide range of values for $\mu$ using the same random initial solutions. We then measure the datafit, defined as $\|RX - M\|_F$ and the distance to ground truth $\|X - X_{\mathrm{gt}}\|_F$ , and show how these depend on the output rank (here defined as the number of singular values larger than $10^{-6}$ ). By doing so, we see the ability of the method to make accurate trade-offs between fitting the data and enforcing the rank. The results are shown in Figure 4.
+
+Note that the datafit for all methods decreases as the rank increases, which is to be expected; however, we immediately note that the "soft rank" penalty (3) is, in this case, too weak. This manifests itself by mostly fitting to the data, and the distance to ground truth does not correlate with the datafit for solutions with rank larger than three. For the revised method by Kumar [17], as well as ours, the correlation between the two quantities is much stronger. What is interesting to see is that our method consistently performs better than the WNNM approach at lower rank levels, suggesting that the shrinking bias affects the quality of these reconstructions. Note, however, that the minimum distance to ground truth obtained using the WNNM approach is as good as (or better than) the one obtained using $\mathcal{R}_h$. Obtaining such a solution, however, requires careful tuning of the $\mu$ parameter and is unlikely to transfer to other datasets.
+
+# 7. Conclusions
+
+Despite its success in many low-level imaging applications, WNNM has limited applicability in other low-rank regularization problems. In this paper, we have provided theoretical insight into the issues surrounding shrinking bias, and proposed a solution where the shrinking bias can be partly or completely eliminated, while keeping the rank low. This can be done using the proposed $\mathcal{R}_h$ regularizer, which has the benefit of unifying weighted nuclear norm regularization with another class of low-rank inducing penalties. Furthermore, an efficient way of computing the regularizer has been proposed, as well as the related proximal operator, which makes it suitable for optimization using splitting schemes such as ADMM.
+
+Figure 4 rows, from top to bottom: Drink, Pickup, Stretch, Yoga.
+
+Figure 4. Results for the experiment on the CMU MOCAP dataset. First column: Example images with skeleton added for visualization. Second column: The datafit, measured as $\|RX - M\|_F$ , as a function of the rank. Last column: Distance to ground truth, measured as $\|X - X_{\mathrm{gt}}\|_F$ , as a function of the rank.
+
+# References
+
+[1] Roland Angst, Christopher Zach, and Marc Pollefeys. The generalized trace-norm and its application to structure-from-motion problems. In International Conference on Computer Vision, 2011. 1
+[2] Ronen Basri, David Jacobs, and Ira Kemelmacher. Photometric stereo with general, unknown lighting. International Journal of Computer Vision, 72(3):239-257, May 2007. 1
+[3] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn., 3(1):1-122, 2011. 3, 4, 5
+[4] C. Bregler, A. Hertzmann, and H. Biermann. Recovering non-rigid 3d shape from image streams. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2000. 1
+[5] R. Cabral, F. De la Torre, J. P. Costeira, and A. Bernardino. Unifying nuclear norm and bilinear factorization approaches for low-rank matrix decomposition. In International Conference on Computer Vision (ICCV), 2013. 5
+[6] Emmanuel J Candès, Xiaodong Li, Yi Ma, and John Wright. Robust principal component analysis? Journal of the ACM (JACM), 58(3):11, 2011. 1
+[7] Emmanuel J. Candès, Xiaodong Li, Yi Ma, and John Wright. Robust principal component analysis? J. ACM, 58(3):11:1-11:37, 2011. 5
+[8] Emmanuel J Candes, Michael B Wakin, and Stephen P Boyd. Enhancing sparsity by reweighted $l^1$ minimization. Journal of Fourier analysis and applications, 14(5-6):877-905, 2008. 5
+[9] Lu Canyi, Jinhui Tang, Shuicheng Yan, and Zhouchen Lin. Nonconvex nonsmooth low-rank minimization via iteratively reweighted nuclear norm. IEEE Transactions on Image Processing, 25, 10 2015. 1, 5
+[10] Marcus Carlsson. On convexification/optimization of functionals including an $l^2$-misfit term. arXiv preprint arXiv:1609.09378, 2016. 3
+[11] Marcus Carlsson, Daniele Gerosa, and Carl Olsson. An unbiased approach to compressed sensing. arXiv preprint, arXiv:1806.05283, 2018. 3
+[12] Yuchao Dai, Hongdong Li, and Mingyi He. A simple prior-free method for non-rigid structure-from-motion factorization. International Journal of Computer Vision, 107(2):101-122, 2014. 3, 6, 7
+[13] Ravi Garg, Anastasios Roussos, and Lourdes Agapito. A variational approach to video registration with subspace constraints. International Journal of Computer Vision, 104(3):286-314, 2013. 1
+[14] N. Gillis and F. Glineur. Low-rank matrix approximation with weights or missing data is np-hard. SIAM Journal on Matrix Analysis and Applications, 32(4), 2011. 1
+[15] Shuhang Gu, Qi Xie, Deyu Meng, Wangmeng Zuo, Xi-angchu Feng, and Lei Zhang. Weighted nuclear norm minimization and its applications to low level vision. International Journal of Computer Vision, 121, 07 2016. 2, 5, 6
+[16] Y. Hu, D. Zhang, J. Ye, X. Li, and X. He. Fast and accurate matrix completion via truncated nuclear norm regularization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(9):2117-2130, 2013. 1
+[17] Suryansh Kumar. Non-rigid structure from motion: Prior-free factorization method revisited. In The IEEE Winter Conference on Applications of Computer Vision (WACV), March 2020. 5, 6, 7
+[18] Suryansh Kumar, Yuchao Dai, and Hongdong Li. Superpixel soup: Monocular dense 3d reconstruction of a complex dynamic scene. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11 2019. 3
+[19] Viktor Larsson, Erik Bylow, Carl Olsson, and Fredrik Kahl. Rank minimization with structured data patterns. In European Conference on Computer Vision, 2014. 2, 6
+[20] Viktor Larsson and Carl Olsson. Convex low rank approximation. International Journal of Computer Vision, 120(2):194-214, 2016. 1, 2, 3, 4, 5, 7
+[21] Viktor Larsson and Carl Olsson. Compact matrix factorization with dependent subspaces. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4361-4370, 07 2017. 6
+[22] L. Mirsky. A trace inequality of john von neumann. Monatshefte fr Mathematik, 79:303-306, 1975. 2
+[23] Karthik Mohan and Maryam Fazel. Iterative reweighted least squares for matrix rank minimization. In Annual Allerton Conference on Communication, Control, and Computing, pages 653-661, 2010. 1
+[24] R. A. Newcombe, D. Fox, and S. M. Seitz. Dynamicfusion: Reconstruction and tracking of non-rigid scenes in real-time. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 343-352, June 2015. 3
+[25] Feiping Nie, Hua Wang, Xiao Cai, Heng Huang, and Chris H. Q. Ding. Robust matrix completion via joint schatten p-norm and lp-norm minimization. In ICDM, pages 566-574, 2012. 5
+[26] T. H. Oh, Y. W. Tai, J. C. Bazin, H. Kim, and I. S. Kweon. Partial sum minimization of singular values in robust PCA: Algorithm and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(4):744-758, 2016. 1
+[27] Carl Olsson, Marcus Carlsson, Fredrik Andersson, and Viktor Larsson. Non-convex rank/sparsity regularization and local minima. Proceedings of the International Conference on Computer Vision, 2017. 1, 3, 6
+[28] Carl Olsson, Marcus Carlsson, and Daniele Gerosa. Bias reduction in compressed sensing. arXiv preprint, arxiv:1812.11329, 2018. 2
+[29] S. Oymak, A. Jalali, M. Fazel, Y. C. Eldar, and B. Hassibi. Simultaneously structured models with application to sparse and low-rank matrices. IEEE Transactions on Information Theory, 61(5):2886-2908, 2015. 1
+[30] Benjamin Recht, Maryam Fazel, and Pablo A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Rev., 52(3):471-501, Aug. 2010. 1
+[31] C. Russell, R. Yu, and L. Agapito. Video-popup: Monocular 3d reconstruction of dynamic scenes. In European Conference on Computer Vision (ECCV), 2014. 3
+
+[32] F. Shang, J. Cheng, Y. Liu, Z. Luo, and Z. Lin. Bilinear factor matrix norm minimization for robust PCA: Algorithms and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(9):2066-2080, Sep. 2018. 1, 5
+[33] Jonathan Taylor, Allan Jepson, and Kiriakos Kutulakos. Non-rigid structure from locally-rigid motion. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2761-2768, 2010. 3
+[34] Kim-Chuan Toh and Sangwoon Yun. An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems. Pacific Journal of Optimization, 6, 09-2010. 5
+[35] J. Valmadre, S. Sridharan, S. Denman, C. Fookes, and S. Lucey. Closed-form solutions for low-rank non-rigid reconstruction. In 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), pages 1-6, Nov 2015. 6
+[36] Jun Xu, Lei Zhang, David Zhang, and Xiangchu Feng. Multi-channel weighted nuclear norm minimization for real color image denoising. International Conference on Computer Vision (ICCV), 2017. 2
+[37] Noam Yair and Tomer Michaeli. Multi-scale weighted nuclear norm image restoration. In Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 2
\ No newline at end of file
diff --git a/aunifiedoptimizationframeworkforlowrankinducingpenalties/images.zip b/aunifiedoptimizationframeworkforlowrankinducingpenalties/images.zip
new file mode 100644
index 0000000000000000000000000000000000000000..6ad0f7ce7c84428de6fd376b396a5b15124e1a67
--- /dev/null
+++ b/aunifiedoptimizationframeworkforlowrankinducingpenalties/images.zip
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:01e2b33c5259b821c34d03494857de6e5137301bf7380452555fb51d13e476f1
+size 674847
diff --git a/aunifiedoptimizationframeworkforlowrankinducingpenalties/layout.json b/aunifiedoptimizationframeworkforlowrankinducingpenalties/layout.json
new file mode 100644
index 0000000000000000000000000000000000000000..242986df93010ff4e3dba5b453ed68b338ab9576
--- /dev/null
+++ b/aunifiedoptimizationframeworkforlowrankinducingpenalties/layout.json
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1c66f923f02b0e56b1773b91586f1fd987bfe0a890ae7264af25ded4d33a1af0
+size 524941