citing_id: string (9-16 chars)
cited_id: string (9-16 chars)
section_title: string (0-2.25k chars)
citation: string (52-442 chars)
text_before_citation: sequence
text_after_citation: sequence
keywords: sequence
citation_intent: string class (3 values)
citing_paper_content: dict
cited_paper_content: dict
1803.04848
1501.07418
INTRODUCTION
Although the robust approach is computationally efficient when the uncertainty set is state-wise independent, compact and convex, it can lead to overly conservative results #REFR .
[ "A strategy that maximizes the accumulated expected reward is then considered as optimal and can be learned from sampling.", "However, besides the uncertainty that results from stochasticity of the environment, model parameters are often estimated from noisy data or can change during testing #OTHEREFR Roy et al., 2017] .", "This second type of uncertainty can significantly degrade the performance of the optimal strategy from the model's prediction.", "Robust MDPs were proposed to address this problem #OTHEREFR Nilim and El Ghaoui, 2005; #OTHEREFR .", "In this framework, a transition model is assumed to belong to a known uncertainty set and an optimal strategy is learned under the worst parameter realizations." ]
[ "For example, consider a business scenario where an agent's goal is to make as much money as possible.", "It can either create a startup which may make a fortune but may also result in bankruptcy.", "Alternatively, it can choose to live off school teaching and have almost no risk but low reward.", "By choosing the teaching strategy, the agent may be overly conservative and not account for opportunities to invest in his own promising projects.", "Our claim is that one could relax this conservativeness and construct a softer behavior that interpolates between being aggressive and robust." ]
[ "robust approach" ]
background
{ "title": "Soft-Robust Actor-Critic Policy-Gradient", "abstract": "Robust Reinforcement Learning aims to derive an optimal behavior that accounts for model uncertainty in dynamical systems. However, previous studies have shown that by considering the worst case scenario, robust policies can be overly conservative. Our soft-robust framework is an attempt to overcome this issue. In this paper, we present a novel Soft-Robust ActorCritic algorithm (SR-AC). It learns an optimal policy with respect to a distribution over an uncertainty set and stays robust to model uncertainty but avoids the conservativeness of robust strategies. We show the convergence of SR-AC and test the efficiency of our approach on different domains by comparing it against regular learning methods and their robust formulations." }
{ "title": "Distributionally Robust Counterpart in Markov Decision Processes", "abstract": "This technical note studies Markov decision processes under parameter uncertainty. We adapt the distributionally robust optimization framework, assume that the uncertain parameters are random variables following an unknown distribution, and seek the strategy which maximizes the expected performance under the most adversarial distribution. In particular, we generalize a previous study [1] which concentrates on distribution sets with very special structure to a considerably more generic class of distribution sets, and show that the optimal strategy can be obtained efficiently under mild technical conditions. This significantly extends the applicability of distributionally robust MDPs by incorporating probabilistic information of uncertainty in a more flexible way. Index Terms-Distributional robustness, Markov decision processes, parameter uncertainty." }
1803.04848
1501.07418
RELATED WORK
These works led to distributionally robust MDPs #REFR, in which the optimal strategy maximizes the expected reward under the most adversarial distribution over the uncertainty set.
[ "Our work solves the problem of conservativeness encountered in robust MDPs by incorporating a variational form of distributional robustness.", "The SR-AC algorithm combines scalability to large scale state-spaces and online estimation of the optimal policy in an actor-critic algorithm. Table 1 compares our proposed algorithm with previous approaches.", "Many solutions have been addressed to mitigate conservativeness of robust MDP.", "relax the state-wise independence property of the uncertainty set and assume it to be coupled in a way such that the planning problem stays tracktable.", "Another approach tends to assume a priori information on the parameter set." ]
[ "For finite and known MDPs, under some structural assumptions on the considered set of distributions, this max-min problem reduces to classical robust MDPs and can be solved efficiently by dynamic programming [Puterman, 2009] .", "However, besides becoming untracktable under largesized MDPs, these methods use an offline learning approach which cannot adapt its level of protection against model uncertainty and may lead to overly conservative results. The work of Lim et al.", "[2016] solutions this issue and addresses an online algorithm that learns the transitions that are purely stochastic and those that are adversarial.", "Although it ensures less conservative results as well as low regret, this method sticks to the robust objective while strongly relying on the finite structure of the state-space.", "To alleviate the curse of dimensionality, we incorporate function approximation of the objective value and define it as a linear functional of features." ]
[ "optimal strategy", "adversarial distribution" ]
background
{ "title": "Soft-Robust Actor-Critic Policy-Gradient", "abstract": "Robust Reinforcement Learning aims to derive an optimal behavior that accounts for model uncertainty in dynamical systems. However, previous studies have shown that by considering the worst case scenario, robust policies can be overly conservative. Our soft-robust framework is an attempt to overcome this issue. In this paper, we present a novel Soft-Robust ActorCritic algorithm (SR-AC). It learns an optimal policy with respect to a distribution over an uncertainty set and stays robust to model uncertainty but avoids the conservativeness of robust strategies. We show the convergence of SR-AC and test the efficiency of our approach on different domains by comparing it against regular learning methods and their robust formulations." }
{ "title": "Distributionally Robust Counterpart in Markov Decision Processes", "abstract": "This technical note studies Markov decision processes under parameter uncertainty. We adapt the distributionally robust optimization framework, assume that the uncertain parameters are random variables following an unknown distribution, and seek the strategy which maximizes the expected performance under the most adversarial distribution. In particular, we generalize a previous study [1] which concentrates on distribution sets with very special structure to a considerably more generic class of distribution sets, and show that the optimal strategy can be obtained efficiently under mild technical conditions. This significantly extends the applicability of distributionally robust MDPs by incorporating probabilistic information of uncertainty in a more flexible way. Index Terms-Distributional robustness, Markov decision processes, parameter uncertainty." }
1906.05988
1501.07418
In Section 3, we formulate the DR Bellman equation, show that the value function is convex when the ambiguity set is characterized by moments as in #REFR, and introduce several examples of moment-based ambiguity sets.
[ "The state then makes a transition according to p and DM's production decision, and the DM receives a reward according to how much demand he/she is able to satisfy, or pays a stocking cost.", "Assuming a family of distributions of unknown climate, the DM aims to maximize the worst-case revenue given the nature being an adversary.", "The above problem is especially important for planning orders or production in agriculture.", "The main research in this paper is to develop a DR formulation of POMDP and analyze its properties, as well as to investigate efficient computational methods, when assuming the accessibility of transition-observation probability at the end of each time.", "Section 2 provides a comprehensive review of the related literature in MDP, POMDP, and distributionally robust optimization." ]
[ "In Section 4, we present an approximation algorithm for DR-POMDP for infinite-horizon case by using a DR variant of the heuristic value search iteration (HVSI) algorithm.", "Numerical studies are presented in Section 5 to compare DR-POMDP with", "POMDP, and to demonstrate properties of DR-POMDP solutions based on randomly generated observation outcomes.", "We conclude the paper and describe future research in Section 6.", "2 Literature Review" ]
[ "moment-based ambiguity" ]
background
{ "title": "Distributionally Robust Partially Observable Markov Decision Process with Moment-based Ambiguity", "abstract": "We consider a distributionally robust (DR) formulation of partially observable Markov decision process (POMDP), where the transition probabilities and observation probabilities are random and unknown, only revealed at the end of every time step. We construct the ambiguity set of the joint distribution of the two types of probabilities using moment information bounded via conic constraints and show that the value function of DR-POMDP is convex with respect to the belief state. We propose a heuristic search value iteration method to solve DR-POMDP, which finds lower and upper bounds of the optimal value function. Computational analysis is conducted to compare DR-POMDP with the standard POMDP using random instances of dynamic machine repair and a ROCKSAMPLE benchmark." }
{ "title": "Distributionally Robust Counterpart in Markov Decision Processes", "abstract": "This technical note studies Markov decision processes under parameter uncertainty. We adapt the distributionally robust optimization framework, assume that the uncertain parameters are random variables following an unknown distribution, and seek the strategy which maximizes the expected performance under the most adversarial distribution. In particular, we generalize a previous study [1] which concentrates on distribution sets with very special structure to a considerably more generic class of distribution sets, and show that the optimal strategy can be obtained efficiently under mild technical conditions. This significantly extends the applicability of distributionally robust MDPs by incorporating probabilistic information of uncertainty in a more flexible way. Index Terms-Distributional robustness, Markov decision processes, parameter uncertainty." }
1712.02228
1406.7611
Introduction
These indicators were developed because evidence has been published that this data is, similar to bibliometric data, field- and time-dependent (see, e.g., #REFR).
[ "(3) The publication of the altmetrics manifesto by #OTHEREFR gave this new area in scientometrics a name and thus a focal point.", "Today, many publishers add altmetrics to papers in their collections (e.g., Wiley", "and Springer) #OTHEREFR .", "Altmetrics are also recommended by Snowball Metrics #OTHEREFR for research evaluation purposes -an initiative publishing global standards for institutional benchmarking in the academic sector (www.snowballmetrics.com).", "In recent years, some altmetrics indicators have been proposed which are field-and time-normalized." ]
[ "Obviously, some fields are more relevant to a broader audience or general public than others #OTHEREFR .", "and #OTHEREFR introduced the mean discipline normalized reader score (MDNRS) and the mean normalized reader score (MNRS) based on", "Mendeley data (see also #OTHEREFR .", "#OTHEREFR propose the Twitter Percentile (TP) -a field-and time-normalized indicator for Twitter data.", "This indicator was developed against the backdrop of a problem with altmetrics data which is also addressed in this study -the inflation of the data with zero counts. The overview of #OTHEREFR" ]
[ "indicators", "data" ]
method
{ "title": "Normalization of zero-inflated data: An empirical analysis of a new indicator family and its use with altmetrics data", "abstract": "Recently, two new indicators (Equalized Mean-based Normalized Proportion Cited, EMNPC, and Mean-based Normalized Proportion Cited, MNPC) were proposed which are intended for sparse data. We propose a third indicator (Mantel-Haenszel quotient, MHq) belonging to the same indicator family. The MHq is based on the MH analysis - an established method for polling the data from multiple 2x2 contingency tables based on different subgroups. We test (using citations and assessments by peers) if the three indicators can distinguish between different quality levels as defined on the basis of the assessments by peers (convergent validity). We find that the indicator MHq is able to distinguish between the quality levels in most cases while MNPC and EMNPC are not." }
{ "title": "Validity of altmetrics data for measuring societal impact: A study using data from Altmetric and F1000Prime", "abstract": "Can altmetric data be validly used for the measurement of societal impact? The current study seeks to answer this question with a comprehensive dataset (about 100,000 records) from very disparate sources (F1000, Altmetric, and an in-house database based on Web of Science). In the F1000 peer review system, experts attach particular tags to scientific papers which indicate whether a paper could be of interest for science or rather for other segments of society. The results show that papers with the tag \"good for teaching\" do achieve higher altmetric counts than papers without this tag - if the quality of the papers is controlled. At the same time, a higher citation count is shown especially by papers with a tag that is specifically scientifically oriented (\"new finding\"). The findings indicate that papers tailored for a readership outside the area of research should lead to societal impact. If altmetric data is to be used for the measurement of societal impact, the question arises of its normalization. In bibliometrics, citations are normalized for the papers' subject area and publication year. This study has taken a second analytic step involving a possible normalization of altmetric data. As the results show there are particular scientific topics which are of especial interest for a wide audience. Since these more or less interesting topics are not completely reflected in Thomson Reuters' journal sets, a normalization of altmetric data should not be based on the level of subject categories, but on the level of topics." }
1803.08423
1209.1730
It is known #REFR that G admits two edge-Kempe inequivalent colorings c 1 and c 2 .
[ "The degree of the covering p constructed explicitly in Lemma 4 is precisely d − 1.", "Note that we pass to a further cover twice when relying on Lemma 3 and the covering degree increases by a factor of β(d − 1) each time.", "As explained in Remark 2 no further covers are necessery for the proof. This establishes the claim.", "An example.", "Let G = K 3,3 denote the complete bipartite graph on six vertices. The graph G is 3-regular." ]
[ "These are illustrated in the bottom row of Figure 1 .", "The colors 1, 2 and 3 correspond to blue, red and black, respectively.", "The required graph covering G and edge-Kempe switches are described in the top row of Figure 1 .", "These are performed along the bold cycles and indicated by the sign.", "The value of the function κ : V (G) → C = Z/2Z = {1, 2 = 0} is indicated on the vertices of (G, c 1 ) in the left bottom graph." ]
[ "two edge-Kempe inequivalent" ]
background
{ "title": "Edge Kempe equivalence of regular graph covers", "abstract": "Abstract. Let G be a finite d-regular graph with a legal edge coloring. An edge Kempe switch is a new legal edge coloring of G obtained by switching the two colors along some bi-chromatic cycle. We prove that any other edge coloring can be obtained by performing finitely many edge Kempe switches, provided that G is replaced with a suitable finite covering graph. The required covering degree is bounded above by a constant depending only on d." }
{ "title": "Counting edge-Kempe-equivalence classes for 3-edge-colored cubic graphs", "abstract": "Two edge colorings of a graph are edge-Kempe equivalent if one can be obtained from the other by a series of edge-Kempe switches. This work gives some results for the number of edge-Kempe equivalence classes for cubic graphs. In particular we show every 2-connected planar bipartite cubic graph has exactly one edge-Kempe equivalence class. Additionally, we exhibit infinite families of nonplanar bipartite cubic graphs with a range of numbers of edge-Kempe equivalence classes. Techniques are developed that will be useful for analyzing other classes of graphs as well." }
1702.08166
1610.05507
Related work
Work #REFR used a different analysis and showed a global linear convergence rate in the iterate point error, i.e., ||x k − x * ||.
[ "Work #OTHEREFR is the first study that establishes a global linear convergence rate for the PIAG method in function value error, i.e., Φ(x k ) − Φ(x * ), where x * denotes the minimizer point of Φ(x)." ]
[ "The authors of #OTHEREFR combined the results presented in #OTHEREFR and #OTHEREFR and provided a stronger linear convergence rate for the PIAG method in the recent paper #OTHEREFR .", "However, all these mentioned works are built on the strongly convex assumption, which is actually not satisfied by many application problems and hence motives lots of research to find weaker alternatives.", "Influential weaker conditions include the error bound property, the restricted strongly convex property, the quadratic growth condition, and the Polyak-Lojasiewicz inequality; the interested reader could refer to #OTHEREFR .", "Works #OTHEREFR studied the linear convergence of the FBS method under these weaker conditions.", "But to our knowledge, there is no work of studying the global linear convergence of the PIAG method under these weaker conditions." ]
[ "global linear convergence" ]
method
{ "title": "Linear Convergence of the Proximal Incremental Aggregated Gradient Method under Quadratic Growth Condition", "abstract": "Under the strongly convex assumption, several recent works studied the global linear convergence rate of the proximal incremental aggregated gradient (PIAG) method for minimizing the sum of a large number of smooth component functions and a non-smooth convex function. In this paper, under the quadratic growth condition-a strictly weaker condition than the strongly convex assumption, we derive a new global linear convergence rate result, which implies that the PIAG method attains global linear convergence rates in both the function value and iterate point errors. The main idea behind is to construct a certain Lyapunov function." }
{ "title": "Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server", "abstract": "This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems. The algorithm can handle both general convex (possibly non-smooth) regularizers and general convex constraints. When the empirical data loss is strongly convex, we establish linear convergence rate, give explicit expressions for step-size choices that guarantee convergence to the optimum, and bound the associated convergence factors. The expressions have an explicit dependence on the degree of asynchrony and recover classical results under synchronous operation. Simulations and implementations on commercial compute clouds validate our findings." }
1702.08166
1610.05507
Proof of Lemma 2
The second part is a standard argument, which differs from the optimality-condition-based method adopted in the proof of the Theorem in #REFR . Part 1.
[ "We divide the proof into two parts.", "The first part can be found from the proof of Theorem 1 in [1]; we include it here for completion." ]
[ "Since each component function f n (x) is convex with L n -continuous gradient, we have the following upper bound estimations:", "Summing (15) over all components functions and using the expression of g k , we obtain", "The last term of the inequality above can be upper-bounded using Jensen's inequality as follows:", "Therefore,", "Part 2." ]
[ "optimality condition", "based method" ]
method
{ "title": "Linear Convergence of the Proximal Incremental Aggregated Gradient Method under Quadratic Growth Condition", "abstract": "Under the strongly convex assumption, several recent works studied the global linear convergence rate of the proximal incremental aggregated gradient (PIAG) method for minimizing the sum of a large number of smooth component functions and a non-smooth convex function. In this paper, under the quadratic growth condition-a strictly weaker condition than the strongly convex assumption, we derive a new global linear convergence rate result, which implies that the PIAG method attains global linear convergence rates in both the function value and iterate point errors. The main idea behind is to construct a certain Lyapunov function." }
{ "title": "Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server", "abstract": "This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems. The algorithm can handle both general convex (possibly non-smooth) regularizers and general convex constraints. When the empirical data loss is strongly convex, we establish linear convergence rate, give explicit expressions for step-size choices that guarantee convergence to the optimum, and bound the associated convergence factors. The expressions have an explicit dependence on the degree of asynchrony and recover classical results under synchronous operation. Simulations and implementations on commercial compute clouds validate our findings." }
1807.00110
1610.05507
Introduction
We note that the approach in #REFR is essentially a primal algorithm that allows for one proximal term (and hence one constrained set).
[ "(This largely rules out primal-only methods since they usually allow just one proximal term.) Hence, the algorithm would be able to allow for constrained optimization, where the feasible region is the intersection of several sets.", "(6) able to allow for time-varying graphs in the sense of #OTHEREFR (to be robust against failures of communication between two agents). (7) able to use simpler subproblems for subdifferentiable functions. (8) able to use simpler subproblems for smooth functions. (9) able to allow for partial communication of data.", "Since Dykstra's algorithm is also dual block coordinate ascent, the following property is obtained:", "(10) choosing a large number of dual variables to be maximized over gives a greedier increase of the dual objective value.", "We are not aware of other algorithms that satisfy properties 1-5 at the same time." ]
[ "Due to technical difficulties (see Remark 4.3), a dual or primal-dual method seems necessary to handle the case of more than one constrained set.", "Algorithms derived from the primal dual algorithm #OTHEREFR , like #OTHEREFR , are very much different from what we study in this paper.", "The most notable difference is that they study ergodic convergence rates, which is not directly comparable with our results.", "1.2.1. Convergence rates.", "Since the subproblems in our case are strongly convex, standard techniques for block coordinate minimization, like #OTHEREFR , can be used to prove the O(1/k) convergence rate when a dual solution exists and all functions are treated as proximable functions." ]
[ "primal algorithm" ]
background
{ "title": "Linear and sublinear convergence rates for a subdifferentiable distributed deterministic asynchronous Dykstra's algorithm", "abstract": "Abstract. In [Pan18a, Pan18b], we designed a distributed deterministic asynchronous algorithm for minimizing the sum of subdifferentiable and proximable functions and a regularizing quadratic on time-varying graphs based on Dykstra's algorithm, or block coordinate dual ascent. Each node in the distributed optimization problem is the sum of a known regularizing quadratic and a function to be minimized. In this paper, we prove sublinear convergence rates for the general algorithm, and a linear rate of convergence if the function on each node is smooth with Lipschitz gradient. Our numerical experiments also verify these rates." }
{ "title": "Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server", "abstract": "This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems. The algorithm can handle both general convex (possibly non-smooth) regularizers and general convex constraints. When the empirical data loss is strongly convex, we establish linear convergence rate, give explicit expressions for step-size choices that guarantee convergence to the optimum, and bound the associated convergence factors. The expressions have an explicit dependence on the degree of asynchrony and recover classical results under synchronous operation. Simulations and implementations on commercial compute clouds validate our findings." }
1806.09429
1610.05507
Comparison of the results with the literature
In the case of uniformly bounded delays, the derived link between epoch and time sequences enables us to compare our rates in the strongly convex case (Theorem 3.1) with the ones obtained for PIAG [#REFR, 27, 28].
[ "This simple but powerful remark is one of the main technical contributions of this paper.", "In order to get comparisons with the literature, the following result provides explicit bounds on our epoch sequence for our framework with two different kind of bounds on delays uniformly in time.", "The proof of this proposition is basic and reported in Appendix C. The detailed results are summarized in the following table. uniform bound average bound", "Bounding the average delay among the workers is an attractive assumption which is however much less common in the literature.", "The defined epoch sequence and associated analysis subsumes this kind of assumption." ]
[ "To simply the comparison, let us consider the case where all the workers share the same strong convexity and smoothness constants µ and L.", "The first thing to notice is that the admissible stepsize for PIAG depend on the delays uniform upper bound d which is practically concerning, while the usual proximal gradient stepsizes are used for the proposed DAve-RPG.", "Using the optimal stepsizes in each case, the convergence rates in terms of time k are: Stepsize", "We notice in both cases the exponent inversely proportional to the maximal delay d but the term inside the parenthesis is a hundred times smaller for PIAG.", "Even if our algorithm is made for handling the flexible delays, this comparison illustrates the interest of our approach over PIAG for distributed asynchronous optimization in the case of bounded delays." ]
[ "uniformly bounded delays" ]
result
{ "title": "A Distributed Flexible Delay-tolerant Proximal Gradient Algorithm", "abstract": "We develop and analyze an asynchronous algorithm for distributed convex optimization when the objective writes a sum of smooth functions, local to each worker, and a non-smooth function. Unlike many existing methods, our distributed algorithm is adjustable to various levels of communication cost, delays, machines computational power, and functions smoothness. A unique feature is that the stepsizes do not depend on communication delays nor number of machines, which is highly desirable for scalability. We prove that the algorithm converges linearly in the strongly convex case, and provide guarantees of convergence for the non-strongly convex case. The obtained rates are the same as the vanilla proximal gradient algorithm over some introduced epoch sequence that subsumes the delays of the system. We provide numerical results on large-scale machine learning problems to demonstrate the merits of the proposed method. • the gradient of its local function ∇f i ; • the proximity operator of the common non-smooth function prox ). 1 Our preliminary work in a machine learning context [17] presents briefly the asynchronous framework and a theoretical study in the strongly convex case. We extend this work on several aspects with in particular a deeper analysis of the asynchronous setting, the use of local stepsizes, and the study of the general convex case. We further consider a master slave framework where the workers exchange information with a master machine which has no global information about the problem but only coordinates the computation of agents in order to minimize (1). Having asynchronous exchanges between the workers and the master is of paramount importance for practical efficiency as it eliminates idle times (see e.g. 
the recent [10]): in the optimization algorithm, at each moment when the master receives an update from some worker, updates its master variable, and sends it back so that the worker carries on its computation from the updated iterate. This distributed setting covers a variety of scenarios when computation are scattered over distributed devices (computer clusters, mobiles), each having a local part of the data (the locality arising from the prohibitive size of the data, or its privacy [23]), as in federated learning [12] . In the large-scale machine learning applications for instance, data points can be split across the M workers, so that each worker i has a local function f i with properties that may be different due to data distribution unevenness. This context of optimization over distributed devices requires paying a special attention to delays, [16] . Indeed some worker may update more frequently than others, due to heterogeneity of machines, data distribution, communication instability, etc. For example, in the mobile context, users cannot have their cellphone send updates without internet connection, or perform computations when not charging. In this distributed setting, we provide an asynchronous algorithm and the associated analysis that adapts to local functions parameters and can handle any kind of delays. The algorithm is based on fully asynchronous proximal gradient iterations with different stepsizes, which makes it adaptive to the functions properties. In order to subsume delays, we develop a new epoch-based mathematical analysis, encompassing computation times and communication delays, to refocus the theory on algorithmics. We show convergence in the general convex case and linear convergence in the strongly convex case, with a rate independent of the computing system, which is highly desirable for scalability. This algorithm thus handles the diversity of the previously-discussed applications. The paper is organized as follows. 
In Section 2, we give a description of the algorithm, split into the communication and the optimization scheme, as well as a comparison with the most related algorithm. In Section 3, we develop our epoch-based analysis of convergence, separating the general and the strongly convex case. In Section 4, we provide illustrative computational experiments on standard 1 -regularized problems showing the efficiency of the algorithm and its resilience to delays." }
{ "title": "Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server", "abstract": "This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems. The algorithm can handle both general convex (possibly non-smooth) regularizers and general convex constraints. When the empirical data loss is strongly convex, we establish linear convergence rate, give explicit expressions for step-size choices that guarantee convergence to the optimum, and bound the associated convergence factors. The expressions have an explicit dependence on the degree of asynchrony and recover classical results under synchronous operation. Simulations and implementations on commercial compute clouds validate our findings." }
1611.08022
1610.05507
Assumption 2.2. (Strong Convexity)
Before presenting the main result of this work, we introduce the following lemma, which was presented in #REFR , in a slightly different form.
[ "3. Main Result.", "In this section, we characterize the global linear convergence rate of the PIAG algorithm. Let", "denote the suboptimality in the objective value at iteration k.", "The paper #OTHEREFR presented two lemmas regarding the evolution of F_k and ||d_k||^2 .", "In particular, the first lemma investigates how the suboptimality in the objective value evolves over the iterations and the second lemma relates the update direction to the suboptimality in the objective value at a given iteration k." ]
[ "This lemma shows a linear convergence rate for a nonnegative sequence Z_k that satisfies a contraction relation perturbed by shocks (represented by Y_k in the lemma).", "Lemma 3.3.", "[1, Lemma 1] Let {Z_k} and {Y_k} be sequences of non-negative real numbers satisfying", "for any k ≥ 0, for some constants α > 1, β ≥ 0, γ ≥ 0 and A ∈ Z_+ . If", "We next present the main theorem of this paper, which characterizes the linear convergence rate of the PIAG algorithm." ]
[ "following lemma" ]
background
{ "title": "A Stronger Convergence Result on the Proximal Incremental Aggregated Gradient Method", "abstract": "We study the convergence rate of the proximal incremental aggregated gradient (PIAG) method for minimizing the sum of a large number of smooth component functions (where the sum is strongly convex) and a non-smooth convex function. At each iteration, the PIAG method moves along an aggregated gradient formed by incrementally updating gradients of component functions at least once in the last K iterations and takes a proximal step with respect to the non-smooth function. We show that the PIAG algorithm attains an iteration complexity that grows linearly in the condition number of the problem and the delay parameter K. This improves upon the previously best known global linear convergence rate of the PIAG algorithm in the literature, which has a quadratic dependence on K." }
{ "title": "Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server", "abstract": "This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems. The algorithm can handle both general convex (possibly non-smooth) regularizers and general convex constraints. When the empirical data loss is strongly convex, we establish linear convergence rate, give explicit expressions for step-size choices that guarantee convergence to the optimum, and bound the associated convergence factors. The expressions have an explicit dependence on the degree of asynchrony and recover classical results under synchronous operation. Simulations and implementations on commercial compute clouds validate our findings." }
1711.01136
1610.05507
Key Lemmas and Main Results
First of all, we introduce a key result, which was given in #REFR . Lemma 1.
[ "Throughout this section, we remind the reader that for simplicity we consider the sequence {x_k} generated by the PLIAG method with α_k ≡ α.", "All the obtained results and the proofs are also valid for the PLIAG method with different α_k ." ]
[ "Assume that the nonnegative sequences {V_k} and {w_k} satisfy", "for some real numbers a ∈ (0, 1), b ≥ 0, c ≥ 0, and some nonnegative integer k_0 .", "Assume also that w_k = 0 for k < 0, and the following holds:", "In addition, we need another crucial result, which can be viewed as a generalization of the standard descent lemma (i.e., [4, Lemma 2.3]) for the PG method.", "Lemma 2." ]
[ "Lemma" ]
background
{ "title": "Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence under Bregman Distance Growth Conditions", "abstract": "We introduce a unified algorithmic framework, called the proximal-like incremental aggregated gradient (PLIAG) method, for minimizing the sum of smooth convex component functions and a proper closed convex regularization function that is possibly non-smooth and extended-valued, with an additional abstract feasible set whose geometry can be captured by using the domain of a Legendre function. The PLIAG method includes many existing algorithms in the literature as special cases, such as the proximal gradient (PG) method, the incremental aggregated gradient (IAG) method, the incremental aggregated proximal (IAP) method, and the proximal incremental aggregated gradient (PIAG) method. By making use of special Lyapunov functions constructed by embedding growth-type conditions into descent-type lemmas, we show that the PLIAG method is globally convergent with a linear rate provided that the step-size is not greater than some positive constant. Our results recover existing linear convergence results for incremental aggregated methods even under strictly weaker conditions than the standard assumptions in the literature." }
{ "title": "Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server", "abstract": "This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems. The algorithm can handle both general convex (possibly non-smooth) regularizers and general convex constraints. When the empirical data loss is strongly convex, we establish linear convergence rate, give explicit expressions for step-size choices that guarantee convergence to the optimum, and bound the associated convergence factors. The expressions have an explicit dependence on the degree of asynchrony and recover classical results under synchronous operation. Simulations and implementations on commercial compute clouds validate our findings." }
1810.10328
1606.06511
Time Complexity
As with other well-established machine learning algorithms that share this bottleneck, one could make use of approximations that trade off accuracy for computational expense #REFR .
[ "The algorithm requires the computation of a similarity matrix, which would require O(N^2), where N is the number of data points, and then computes the generalized Laplacian.", "The bottleneck is computing its inverse, which has complexity O(N^3)." ]
[ "We also note that the per-iteration complexity scales linearly in N, due to the normalization step." ]
[ "well-established machine learning", "algorithms" ]
background
{ "title": "LABEL PROPAGATION FOR LEARNING WITH LABEL PROPORTIONS", "abstract": "Learning with Label Proportions (LLP) is the problem of recovering the underlying true labels given a dataset when the data is presented in the form of bags. This paradigm is particularly suitable in contexts where providing individual labels is expensive and label aggregates are more easily obtained. In the healthcare domain, it is a burden for a patient to keep a detailed diary of their daily routines, but often they will be amenable to provide higher level summaries of daily behavior. We present a novel and efficient graph-based algorithm that encourages local smoothness and exploits the global structure of the data, while preserving the 'mass' of each bag." }
{ "title": "Literature survey on low rank approximation of matrices", "abstract": "Low rank approximation of matrices has been well studied in literature. Singular value decomposition, QR decomposition with column pivoting, rank revealing QR factorization (RRQR), interpolative decomposition etc. are classical deterministic algorithms for low rank approximation. But these techniques are very expensive (O(n^3) operations are required for n × n matrices). There are several randomized algorithms available in the literature which are not as expensive as the classical techniques (but the complexity is not linear in n). So, it is very expensive to construct the low rank approximation of a matrix if the dimension of the matrix is very large. There are alternative techniques like Cross/Skeleton approximation which give the low-rank approximation with linear complexity in n. In this article we review low rank approximation techniques briefly and give extensive references of many techniques." }
1802.08901
1606.06511
Hermitian Space -Dynamic Mode Decomposition with control
The use of E-SVD reduces the complexity to O(mnr) (#REFR) by computing only the first r singular values and vectors.
[ "Because the solar cycle lasts over a decade, this requires a large data set of more than m ≈ 400,000 snapshots with a 0.25 hr resolution.", "A 5 degree grid resolution in TIE-GCM results in a state vector size of n ≈ 75,000, with a 2.5 degree grid resolution resulting in n ≈ 300,000.", "Large data has motivated extensions to DMD even beyond E-SVD #OTHEREFR , but these have been limited to systems with no exogenous inputs.", "The theoretical computational complexity of full rank SVD of X_1 ∈ R^{n×m} used in DMDc is O(mn^2) with n ≤ m, making its application intractable for the problem at hand." ]
[ "HS-DMDc reduces the computation of the pseudoinverse ( † ) to the Hermitian space by performing an eigendecomposition of the correlation matrix X_1 X_1^T ∈ R^{n×n}, reducing the full rank complexity to O(n^3).", "The complexity can be reduced to O(n^2 r) using an economy eigendecomposition (E-ED).", "In theory, the computation of the correlation matrix X_1 X_1^T also introduces linear scaling with m, i.e., O(mn^2).", "Although formulating the problem in the Hermitian space is somewhat of a common practice, motivated in part by the method of snapshots formalism of POD, it is important to note that using eigendecomposition to compute the singular values and vectors can be more sensitive to numerical roundoff errors." ]
[ "E-SVD" ]
method
{ "title": "A quasi-physical dynamic reduced order model for thermospheric mass density via Hermitian Space Dynamic Mode Decomposition", "abstract": "Thermospheric mass density is a major driver of satellite drag, the largest source of uncertainty in accurately predicting the orbit of satellites in low Earth orbit (LEO), pertinent to space situational awareness. Most existing models for the thermosphere are either physics-based or empirical. Physics-based models offer the potential for good predictive/forecast capabilities but require dedicated parallel resources for real-time evaluation and data assimilative capabilities that have yet to be developed. Empirical models are fast to evaluate, but offer very limited forecasting abilities. This paper presents methodology for developing a reduced-order dynamic model from high-dimensional physics-based models by capturing the underlying dynamical behavior. The quasi-physical reduced order model (ROM) for thermospheric mass density is developed using a large dataset of TIE-GCM (Thermosphere-Ionosphere-Electrodynamics General Circulation Model) simulations spanning 12 years and covering a complete solar cycle. Towards this end, a new reduced order modeling approach, based on Dynamic Mode Decomposition with control (DMDc), is developed that uses the Hermitian space of the problem to derive the dynamics and input matrices in a tractable manner. Results show that the ROM performs well in serving as a reduced order surrogate for TIE-GCM while almost always maintaining the forecast error to within 5% of the simulated densities after 24 hours." }
{ "title": "Literature survey on low rank approximation of matrices", "abstract": "Low rank approximation of matrices has been well studied in literature. Singular value decomposition, QR decomposition with column pivoting, rank revealing QR factorization (RRQR), interpolative decomposition etc. are classical deterministic algorithms for low rank approximation. But these techniques are very expensive (O(n^3) operations are required for n × n matrices). There are several randomized algorithms available in the literature which are not as expensive as the classical techniques (but the complexity is not linear in n). So, it is very expensive to construct the low rank approximation of a matrix if the dimension of the matrix is very large. There are alternative techniques like Cross/Skeleton approximation which give the low-rank approximation with linear complexity in n. In this article we review low rank approximation techniques briefly and give extensive references of many techniques." }
2004.03623
1704.00648
Experiments
For the relaxed Bernoulli in Q_O , we start with a temperature of 1.0 and an annealing rate of 3 × 10^{-5} (following the details in #REFR ).
[ "For ImageNet, φ(x) is a ResNet18 model (a conv layer followed by four residual blocks).", "For all datasets, Q_A and Q_O have a single conv layer each.", "For classification, we start from φ(x), and add a fully-connected layer with 512 hidden units and a final fully-connected layer as classifier. More details can be found in the supplemental material.", "During the unsupervised learning phase of training, all methods are trained for 90 epochs for CIFAR100 and Indoor67, 2 epochs for Places205, and 30 epochs for the ImageNet dataset.", "All methods use the ADAM optimizer for training, with an initial learning rate of 1 × 10^{-4} and a minibatch size of 128." ]
[ "For training the classifier, all methods use stochastic gradient descent (SGD) with momentum with a minibatch size of 128.", "The initial learning rate is 1 × 10^{-2} and we reduce it by a factor of 10 every 30 epochs.", "All experiments are trained for 90 epochs for CIFAR100 and Indoor67, 5 epochs for Places205, and 30 epochs for the ImageNet dataset.", "Baselines.", "We use the β-VAE model (Section 3.1) as our primary baseline." ]
[ "details", "relaxed bernoulli" ]
method
{ "title": "PatchVAE: Learning Local Latent Codes for Recognition", "abstract": "Unsupervised representation learning holds the promise of exploiting large amounts of unlabeled data to learn general representations. A promising technique for unsupervised learning is the framework of Variational Auto-encoders (VAEs). However, unsupervised representations learned by VAEs are significantly outperformed by those learned by supervised learning for recognition. Our hypothesis is that to learn useful representations for recognition the model needs to be encouraged to learn about repeating and consistent patterns in data. Drawing inspiration from the mid-level representation discovery work, we propose PatchVAE, that reasons about images at patch level. Our key contribution is a bottleneck formulation that encourages mid-level style representations in the VAE framework. Our experiments demonstrate that representations learned by our method perform much better on the recognition tasks compared to those learned by vanilla VAEs." }
{ "title": "Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations", "abstract": "We present a new approach to learn compressible representations in deep architectures with an end-to-end training strategy. Our method is based on a soft (continuous) relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two challenging applications: Image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives results competitive with the state-of-the-art for both." }
1811.12817
1704.00648
Loss
Thereby, the z^{(s)} = F^{(s)}(x) are defined using the learned feature extractor blocks E^{(s)} , and p(x, z #REFR , . . .
[ "We are now ready to define the loss, which is a generalization of the discrete logistic mixture loss introduced in #OTHEREFR . Recall from Sec.", "3.1 that our goal is to model the true joint distribution of x and the representations z^{(s)}, i.e.", "p(x, z #OTHEREFR , . . .", ", z^{(s)}) as accurately as possible using our model p(x, z #OTHEREFR , . . . , z^{(s)})." ]
[ ", z^{(s)}) is a product of discretized (conditional) logistic mixture models with parameters defined through the f^{(s)}, which are in turn computed using the learned predictor blocks D^{(s)}. As discussed in Sec.", "3.1, the expected coding cost incurred by coding x, z #OTHEREFR", "Note that the loss decomposes into the sum of the cross-entropies of the different representations.", "Also note that this loss corresponds to the negative log-likelihood of the data w.r.t.", "our model, which is typically the perspective taken in the generative modeling literature (see, e.g., #OTHEREFR" ]
[ "learned feature extractor" ]
method
{ "title": "Practical Full Resolution Learned Lossless Image Compression", "abstract": "We propose the first practical learned lossless image compression system, L3C, and" }
{ "title": "Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations", "abstract": "We present a new approach to learn compressible representations in deep architectures with an end-to-end training strategy. Our method is based on a soft (continuous) relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two challenging applications: Image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives results competitive with the state-of-the-art for both." }
2001.09417
1704.00648
Image Compression based on DNN
In #REFR , similar to the soft quantization strategy, a soft entropy is designed by summing up the partial assignments to each center instead of counting.
[ "With the quantizer being differentiable, in order to jointly minimize the bitrate and distortion, we also need to make the entropy differentiable.", "For example, in #OTHEREFR , the quantizer is relaxed by adding uniform noise.", "The density function of this relaxed formulation is continuous and can be used as an approximation of the entropy of the quantized values." ]
[ "In #OTHEREFR , an entropy coding scheme is trained to learn the dependencies among the symbols in the latent representation by using a context model. These methods allow jointly optimizing the R-D function." ]
[ "soft quantization strategy" ]
method
{ "title": "Deep Learning-based Image Compression with Trellis Coded Quantization", "abstract": "Recently many works attempt to develop image compression models based on deep learning architectures, where the uniform scalar quantizer (SQ) is commonly applied to the feature maps between the encoder and decoder. In this paper, we propose to incorporate trellis coded quantizer (TCQ) into a deep learning based image compression framework. A soft-tohard strategy is applied to allow for back propagation during training. We develop a simple image compression model that consists of three subnetworks (encoder, decoder and entropy estimation), and optimize all of the components in an end-to-end manner. We experiment on two high resolution image datasets and both show that our model can achieve superior performance at low bit rates. We also show the comparisons between TCQ and SQ based on our proposed baseline model and demonstrate the advantage of TCQ." }
{ "title": "Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations", "abstract": "We present a new approach to learn compressible representations in deep architectures with an end-to-end training strategy. Our method is based on a soft (continuous) relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two challenging applications: Image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives results competitive with the state-of-the-art for both." }
2002.10032
1704.00648
INTRODUCTION
In #REFR , a soft-to-hard vector quantization approach was introduced, and a unified framework was developed for image compression.
[ "Deep learning-based image compression #OTHEREFR has shown the potential to outperform standard codecs such as JPEG2000, the H.265/HEVC-based BPG image codec #OTHEREFR , and the new versatile video coding test model (VTM) #OTHEREFR .", "Learned image compression was first used in #OTHEREFR to compress thumbnail images using long short-term memory (LSTM)-based recurrent neural networks (RNNs), in which better SSIM results than JPEG and WebP were reported.", "This approach was generalized in #OTHEREFR , which utilized spatially adaptive bit allocation to further improve the performance.", "In #OTHEREFR , a scheme based on generalized divisive normalization (GDN) and inverse GDN (IGDN) was proposed, which outperformed JPEG2000 in both PSNR and SSIM.", "A compressive autoencoder framework with residual connections as in ResNet was proposed in #OTHEREFR , where the quantization was replaced by a smooth approximation, and a scaling approach was used to get different rates." ]
[ "In order to take the spatial variation of image content into account, a contentweighted framework was also introduced in #OTHEREFR , where an importance map for locally adaptive bit rate allocation was employed to handle the spatial variation of image content.", "A learned channel-wise quantization along with arithmetic coding was also used to reduce the quantization error.", "There have also been some efforts in taking advantage of other computer vision tasks in image compression frameworks.", "For example, in #OTHEREFR , a deep semantic segmentation-based layered image compression (DSSLIC) was proposed, by taking advantage of the Generative Adversarial Network (GAN) and BPG-based residual coding.", "It outperformed the BPG codec (in RGB444 format) in both PSNR and MS-SSIM #OTHEREFR ." ]
[ "image compression", "soft-to-hard vector quantization" ]
method
{ "title": "Generalized Octave Convolutions for Learned Multi-Frequency Image Compression", "abstract": "Learned image compression has recently shown the potential to outperform all standard codecs. The state-of-the-art rate-distortion performance has been achieved by context-adaptive entropy approaches in which hyperprior and autoregressive models are jointly utilized to effectively capture the spatial dependencies in the latent representations. However, the latents contain a mixture of high and low frequency information, which has inefficiently been represented by feature maps of the same spatial resolution in previous works. In this paper, we propose the first learned multi-frequency image compression approach that uses the recently developed octave convolutions to factorize the latents into high and low frequencies. Since the low frequency is represented by a lower resolution, their spatial redundancy is reduced, which improves the compression rate. Moreover, octave convolutions impose effective high and low frequency communication, which can improve the reconstruction quality. We also develop novel generalized octave convolution and octave transposed-convolution architectures with internal activation layers to preserve the spatial structure of the information. Our experiments show that the proposed scheme outperforms all standard codecs and learning-based methods in both PSNR and MS-SSIM metrics, and establishes the new state of the art for learned image compression. Index Terms: generalized octave convolutions, multi-frequency autoencoder, learned image compression, learned entropy model" }
{ "title": "Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations", "abstract": "We present a new approach to learn compressible representations in deep architectures with an end-to-end training strategy. Our method is based on a soft (continuous) relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two challenging applications: Image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives results competitive with the state-of-the-art for both." }
2002.01416
1803.06893
2D Kelvin-Helmholtz simulation
The energy and enstrophy for EMAC and SKEW agree well with each other, and with results in #REFR .
[ "For Re = 100, solutions are computed up to T = 10 on a uniform triangulation with h = 1/96 and a time step size of ∆t = 0.01.", "For Re = 1000, solutions are computed up to T = 20 on a uniform triangulation with h = 1/196 and ∆t = 0.005.", "The nonlinear problems were resolved with Newton's method, and in most cases converged in 2 to 3 iterations.", "We first present results for the Re = 100 simulations.", "Plots of energy, enstrophy, absolute total momentum (defining |M| = |M_1 + M_2|), and angular momentum versus time are shown in figure 6." ]
[ "For momentum, the initial condition has 0 momentum in both the x and y directions; EMAC maintains this momentum up to roundoff error, while SKEW produces solutions with momentum near 10^{-7}, which is still quite small.", "The plots of angular momentum versus time are quite interesting, as EMAC agrees with SKEW up to around t = 2, at which point it deviates significantly.", "This deviation coincides with the differences in the absolute vorticity contours in figure 7 (we show the domain extended once periodically to the right, to aid in presentation of the results), where we see that EMAC joins the middle 2 eddies from the t=2.1 solution to form a bigger eddy, while SKEW joins the left eddies together and the right eddies together.", "Since the solution is periodic in the horizontal direction, we believe both of these solutions to be correct; however, it is still interesting how the different formulations find different solutions.", "We note that the solution plots from figure 7 are in good qualitative agreement with those shown in #OTHEREFR , although as discussed in #OTHEREFR the times at which eddy combining happens are very sensitive, and so some minor differences in the evolution in time are both expected and observed." ]
[ "EMAC", "enstrophy" ]
result
{ "title": "Longer time accuracy for incompressible Navier-Stokes simulations with the EMAC formulation", "abstract": "In this paper, we consider the recently introduced EMAC formulation for the incompressible Navier-Stokes (NS) equations, which is the only known NS formulation that conserves energy, momentum and angular momentum when the divergence constraint is only weakly enforced. Since its introduction, the EMAC formulation has been successfully used for a wide variety of fluid dynamics problems. We prove that discretizations using the EMAC formulation are potentially better than those built on the commonly used skew-symmetric formulation, by deriving a better longer time error estimate for EMAC: while the classical results for schemes using the skew-symmetric formulation have Gronwall constants dependent on exp(C · Re · T ) with Re the Reynolds number, it turns out that the EMAC error estimate is free from this explicit exponential dependence on the Reynolds number. Additionally, it is demonstrated how EMAC admits smaller lower bounds on its velocity error, since incorrect treatment of linear momentum, angular momentum and energy induces lower bounds for the L^2 velocity error, and EMAC treats these quantities more accurately. Results of numerical tests for channel flow past a cylinder and 2D Kelvin-Helmholtz instability are also given, both of which show that the advantages of EMAC over the skew-symmetric formulation increase as the Reynolds number gets larger and for longer simulation times. in a domain Ω ⊂ R^d, d = 2 or 3, with polyhedral and Lipschitz boundary, u and p representing the unknown velocity and pressure, f an external force, u_0 the initial velocity, and ν the kinematic viscosity which is inversely proportional to the Reynolds number Re. Appropriate boundary conditions are required to close the system, and for simplicity we will consider the case of homogeneous Dirichlet boundary conditions, u|_{∂Ω} = 0. 
In the recent work [6], the authors showed that due to the divergence constraint, the NSE nonlinearity could equivalently be written as u · ∇u + ∇p = 2D(u)u + (div u)u + ∇P, with P = p − (1/2)|u|^2 and D denoting the rate of deformation tensor. Reformulating in this way was named in [6] to be the energy, momentum and angular momentum conserving (EMAC) formulation of the NSE, since when discretized with a Galerkin method that only weakly enforces the divergence constraint, the EMAC formulation still produces a scheme that conserves each of energy, momentum, and angular momentum, as well as properly defined 2D enstrophy, helicity, and total vorticity. This is in contrast to the well-known convective, conservative, rotational, and skew-symmetric formulations, which are each shown in [6] to not conserve at least one of energy, momentum or angular momentum. The EMAC formulation, and its related numerical schemes, is part of a long line of research extending back at least to Arakawa that has the theme \"incorporating more accurate physics into discretizations leads to more stable and accurate numerical solutions, especially over long time intervals.\" There are typically many ways to discretize a particular PDE, but choosing (or developing) a method that more accurately reproduces important physical balances or conservation laws will often lead to better solutions. Arakawa recognized this when he designed an energy and enstrophy conserving scheme for the 2D Navier-Stokes equations in [2], as did Fix for ocean circulation models in [11] , Arakawa and Lamb for the shallow water equations [3], and many others for various evolutionary systems from physics, e.g. [24, 1, 38, 34, 32, 30, 3] . 
It is important to note that if divergence-free elements are used, such as those recently developed in [16, 40, 15, 4] , then the finite element velocity found with the EMAC formulation is the same vector field as recovered from more traditional convective and skew-symmetric formulations, and all of these conservation properties will hold for those formulations as well. However, the development of strongly divergence-free methods is still quite new, often requires non-standard meshing and elements, and is not yet included in most major software packages. Since its original development in 2017 in [6] , the EMAC formulation has gained considerable attention from the CFD community. It has been used for a wide variety of problems, including vortex-induced vibration [31] , turbulent flow simulation [22] , cardiovascular simulations and hemodynamics [10, 9] , noise radiated by an open cavity [25] , and others [29, 23] . It has proven successful in these simulations, and a common theme reported for it has been that it exhibits low dissipation compared to other common schemes, which is likely due to EMAC's better adherence to physical conservation laws and balances. Not surprisingly, less has been done from an analysis viewpoint, as only one paper has appeared in this direction; in [7], the authors analyzed conservation properties of various time stepping methods for EMAC. In particular, no analysis for EMAC has been found which improves upon the well-known analysis of mixed finite elements for the incompressible NSE in skew-symmetric form. The present paper addresses the challenge of providing such new analysis. This paper extends the study of the EMAC formulation both analytically and computationally. Analytically, we show how the better convergence properties of EMAC unlock the potential for decreasing the approximation error of FE methods. 
In particular, we show that while the classical semidiscrete error bound for the skew-symmetric formulation has a Gronwall constant exp(C · Re · T ) [18], where T is the simulation end time, the analogous EMAC scheme has a Gronwall constant exp(C · T ), i.e. with no explicit exponential dependence on Re (and the rest of the terms in the error bound are similar). We note that previously, such ν-uniform error bounds were believed to be an exclusive property of finite element methods that enforced the divergence constraint strongly through divergence-free elements [37] or through stabilization/penalization of the divergence error [8] . Additionally, we show how the lack of momentum conservation in convective, skew-symmetric and rotational forms produces a lower bound on the error, which EMAC is free from. Numeri-" }
{ "title": "On reference solutions and the sensitivity of the 2D Kelvin-Helmholtz instability problem", "abstract": "Two-dimensional Kelvin-Helmholtz instability problems are popular examples for assessing discretizations for incompressible flows at high Reynolds number. Unfortunately, the results in the literature differ considerably. This paper presents computational studies of a Kelvin-Helmholtz instability problem with high order divergence-free finite element methods. Reference results in several quantities of interest are obtained for three different Reynolds numbers up to the beginning of the final vortex pairing. A meshindependent prediction of the final pairing is not achieved due to the sensitivity of the considered problem with respect to small perturbations. A theoretical explanation of this sensitivity to small perturbations is provided based on the theory of self-organization of 2D turbulence. Possible sources of perturbations that arise in almost any numerical simulation are discussed." }
2002.01416
1803.06893
2D Kelvin-Helmholtz simulation
The plots of energy and enstrophy are in agreement with those in #REFR (after adjusting time units).
[ "The plots of angular momentum versus time are quite interesting, as EMAC agrees with SKEW up to around t = 2, at which point it deviates significantly.", "This deviation coincides with the differences in the absolute vorticity contours in figure 7 (we show the domain extended once periodically to the right, to aid in presentation of the results), where we see that EMAC joins the middle 2 eddies from the t=2.1 solution to form a bigger eddy, while SKEW joins the left eddies together and the right eddies together.", "Since the solution is periodic in the horizontal direction, we believe both of these solutions to be correct; however, it is still interesting how the different formulations find different solutions.", "We note that the solution plots from figure 7 are in good qualitative agreement with those shown in #OTHEREFR , although as discussed in #OTHEREFR the times at which eddy combining happens are very sensitive, and so some minor differences in the evolution-in-time are both expected and observed.", "For Re = 1000, plots of energy, absolute total momentum, angular momentum, and enstrophy versus time are shown in figure 9, and we observe very similar results for EMAC and SKEW, except for momentum, where EMAC gives close to round-off error while SKEW is O(10^−5), which is still quite small." ]
[ "Contours of absolute vorticity for EMAC and SKEW are shown in figure 9, and they both display qualitative behavior consistent with results of #OTHEREFR , although with some minor differences: the max absolute vorticity for SKEW is slightly higher (notice the colorbar scale), and perhaps more important is that the centers of the SKEW eddies at later times show oscillations while those for EMAC do not." ]
[ "time units", "enstrophy" ]
result
{ "title": "Longer time accuracy for incompressible Navier-Stokes simulations with the EMAC formulation", "abstract": "In this paper, we consider the recently introduced EMAC formulation for the incompressible Navier-Stokes (NS) equations, which is the only known NS formulation that conserves energy, momentum and angular momentum when the divergence constraint is only weakly enforced. Since its introduction, the EMAC formulation has been successfully used for a wide variety of fluid dynamics problems. We prove that discretizations using the EMAC formulation are potentially better than those built on the commonly used skew-symmetric formulation, by deriving a better longer time error estimate for EMAC: while the classical results for schemes using the skew-symmetric formulation have Gronwall constants dependent on exp(C · Re · T ) with Re the Reynolds number, it turns out that the EMAC error estimate is free from this explicit exponential dependence on the Reynolds number. Additionally, it is demonstrated how EMAC admits smaller lower bounds on its velocity error, since incorrect treatment of linear momentum, angular momentum and energy induces lower bounds for L 2 velocity error, and EMAC treats these quantities more accurately. Results of numerical tests for channel flow past a cylinder and 2D Kelvin-Helmholtz instability are also given, both of which show that the advantages of EMAC over the skew-symmetric formulation increase as the Reynolds number gets larger and for longer simulation times. in a domain Ω ⊂ R d , d=2 or 3, with polyhedral and Lipschitz boundary, u and p representing the unknown velocity and pressure, f an external force, u 0 the initial velocity, and ν the kinematic viscosity which is inversely proportional to the Reynolds number Re. Appropriate boundary conditions are required to close the system, and for simplicity we will consider the case of homogeneous Dirichlet boundary conditions, u| ∂Ω = 0. 
In the recent work [6], the authors showed that due to the divergence constraint, the NSE nonlinearity could equivalently be written as u · ∇u + ∇p = 2D(u)u + (div u)u + ∇P, with P = p − (1/2)|u|^2 and D denoting the rate of deformation tensor. Reformulating in this way was named in [6] to be the energy, momentum and angular momentum conserving (EMAC) formulation of the NSE, since when discretized with a Galerkin method that only weakly enforces the divergence constraint, the EMAC formulation still produces a scheme that conserves each of energy, momentum, and angular momentum, as well as properly defined 2D enstrophy, helicity, and total vorticity. This is in contrast to the well-known convective, conservative, rotational, and skew-symmetric formulations, which are each shown in [6] to not conserve at least one of energy, momentum or angular momentum. The EMAC formulation, and its related numerical schemes, is part of a long line of research extending back at least to Arakawa that has the theme \"incorporating more accurate physics into discretizations leads to more stable and accurate numerical solutions, especially over long time intervals.\" There are typically many ways to discretize a particular PDE, but choosing (or developing) a method that more accurately reproduces important physical balances or conservation laws will often lead to better solutions. Arakawa recognized this when he designed an energy and enstrophy conserving scheme for the 2D Navier-Stokes equations in [2], as did Fix for ocean circulation models in [11], Arakawa and Lamb for the shallow water equations [3], and many others for various evolutionary systems from physics, e.g. [24, 1, 38, 34, 32, 30, 3] . 
It is important to note that if divergence-free elements are used, such as those recently developed in [16, 40, 15, 4] , then the finite element velocity found with the EMAC formulation is the same vector field as recovered from more traditional convective and skew-symmetric formulations, and all of these conservation properties will hold for those formulations as well. However, the development of strongly divergence-free methods is still quite new, often requires non-standard meshing and elements, and is not yet included into most major software packages. Since its original development in 2017 in [6] , the EMAC formulation has gained considerable attention by the CFD community. It has been used for a wide variety of problems, including vortex-induced vibration [31] , turbulent flow simulation [22] , cardiovascular simulations and hemodynamics [10, 9] , noise radiated by an open cavity [25] , and others [29, 23] . It has proven successful in these simulations, and a common theme reported for it has been that it exhibits low dissipation compared to other common schemes, which is likely due to EMAC's better adherence to physical conservation laws and balances. Not surprisingly, less has been done from an analysis viewpoint, as only one paper has appeared in this direction; in [7], the authors analyzed conservation properties of various time stepping methods for EMAC. In particular, no analysis for EMAC has been found which improves upon the well-known analysis of mixed finite elements for the incompressible NSE in skew-symmetric form. The present paper addresses the challenge of providing such new analysis. This paper extends the study of the EMAC formulation both analytically and computationally. Analytically, we show how the better convergence properties of EMAC unlock the potential for decreasing the approximation error of FE methods. 
In particular, we show that while the classical semidiscrete error bound for the skew-symmetric formulation has a Gronwall constant exp(C · Re · T ) [18], where T is the simulation end time, the analogous EMAC scheme has a Gronwall constant exp(C · T ), i.e. with no explicit exponential dependence on Re (and the rest of the terms in the error bound are similar). We note that previously, such ν-uniform error bounds were believed to be an exclusive property of finite element methods that enforced the divergence constraint strongly through divergence-free elements [37] or through stabilization/penalization of the divergence error [8] . Additionally, we show how the lack of momentum conservation in convective, skew-symmetric and rotational forms produce a lower bound on the error, which EMAC is free from. Numeri-" }
{ "title": "On reference solutions and the sensitivity of the 2D Kelvin-Helmholtz instability problem", "abstract": "Two-dimensional Kelvin-Helmholtz instability problems are popular examples for assessing discretizations for incompressible flows at high Reynolds number. Unfortunately, the results in the literature differ considerably. This paper presents computational studies of a Kelvin-Helmholtz instability problem with high order divergence-free finite element methods. Reference results in several quantities of interest are obtained for three different Reynolds numbers up to the beginning of the final vortex pairing. A meshindependent prediction of the final pairing is not achieved due to the sensitivity of the considered problem with respect to small perturbations. A theoretical explanation of this sensitivity to small perturbations is provided based on the theory of self-organization of 2D turbulence. Possible sources of perturbations that arise in almost any numerical simulation are discussed." }
2003.06972
1803.06893
Consistency error bounds.
This compares well to results computed with a higher order method in #REFR for the planar case with Re = 10^4.
[ "resulting in C_K(Γ)^2 ≤ 1/2.", "Substituting this in the above estimate for the kinetic energy, we arrive at the bound E(t) ≤ E(0) exp(−8ν t) = E(0) exp(−4 · 10^−5 t).", "In Figure 7.3 we show the kinetic energy plots for the computed solutions together with exponential fitting.", "There are two obvious reasons for the computed energy to decay faster than the upper estimate (7.5) suggests: the presence of numerical diffusion and the persistence of higher harmonics in the true solution.", "On the finest mesh the numerical solution loses about 0.5% of kinetic energy up to the point when the solution is dominated by two counter-rotating vortices." ]
[]
[ "higher order method" ]
result
{ "title": "Error analysis of higher order trace finite element methods for the surface Stokes equations", "abstract": "The paper studies a higher order unfitted finite element method for the Stokes system posed on a surface in R 3 . The method employs parametric P k -P k−1 finite element pairs on tetrahedral bulk mesh to discretize the Stokes system on embedded surface. Stability and optimal order convergence results are proved. The proofs include a complete quantification of geometric errors stemming from approximate parametric representation of the surface. Numerical experiments include formal convergence studies and an example of the Kelvin-Helmholtz instability problem on the unit sphere." }
{ "title": "On reference solutions and the sensitivity of the 2D Kelvin-Helmholtz instability problem", "abstract": "Two-dimensional Kelvin-Helmholtz instability problems are popular examples for assessing discretizations for incompressible flows at high Reynolds number. Unfortunately, the results in the literature differ considerably. This paper presents computational studies of a Kelvin-Helmholtz instability problem with high order divergence-free finite element methods. Reference results in several quantities of interest are obtained for three different Reynolds numbers up to the beginning of the final vortex pairing. A meshindependent prediction of the final pairing is not achieved due to the sensitivity of the considered problem with respect to small perturbations. A theoretical explanation of this sensitivity to small perturbations is provided based on the theory of self-organization of 2D turbulence. Possible sources of perturbations that arise in almost any numerical simulation are discussed." }
1808.04669
1803.06893
Numerical results
Good agreement of the kinetic energy can clearly be seen, while the enstrophy agrees well until time t̂ = 150, when the last vortex merging took place in our simulation; this happens at a much later time t̂ = 250 for the scheme used in #REFR .
[ "The numerical dissipation in our simulation triggered the last vortex merging at a much earlier time, since we use a lower-order method on a coarser mesh compared with #OTHEREFR .", "We notice that a numerical simulation at the scale of #OTHEREFR is out of reach for our desktop-based simulation.", "However, notice that the scheme #OTHEREFR involves one Stokes solver per time step while ours involves one (hybrid-)mixed Poisson solve per time step.", "Hence, using the same-order approximation on the same mesh, our scheme shall be faster than that used in #OTHEREFR for the current high-Reynolds number flow simulation. In Fig.", "5, we plot the evolution of kinetic energy and enstrophy of our simulation, together with the reference data provided in #OTHEREFR ." ]
[ "Example 4: flow around a cylinder.", "We consider the 2D-2 benchmark problem proposed in #OTHEREFR , where a laminar flow around a cylinder is considered.", "The domain is a rectangular channel with an almost vertically centered circular obstacle, cf.", "The boundary is decomposed into Γ in := {x = 0}, the inflow boundary, Γ out := {x = 2.2}, the outflow boundary, and Γ wall := ∂Ω\\(Γ in ∪ Γ out ), the wall boundary.", "On Γ out we prescribe natural boundary conditions (−ν∇u + pI)n = 0, on Γ wall homogeneous Dirichlet boundary conditions for the velocity (no-slip), and on Γ in the inflow Dirichlet boundary conditions u(0, y, t) = 6ū y(0.41 − y)/0.41^2 · (1, 0), with ū = 1 the average inflow velocity." ]
[ "simulation", "kinetic energy" ]
result
{ "title": "An explicit divergence-free DG method for incompressible flow", "abstract": "We present an explicit divergence-free DG method for incompressible flow based on velocity formulation only. A globally divergence-free finite element space is used for the velocity field, and the pressure field is eliminated from the equations by design. The resulting ODE system can be discretized using any explicit time stepping method. We use the third-order strong-stability-preserving Runge-Kutta method in our numerical experiments. Our spatial discretization produces the identical velocity field as the divergence-conforming DG method of Cockburn et al. [5] based on a velocity-pressure formulation, when the same DG operators are used for the convective and viscous parts. Due to the global nature of the divergence-free constraint, there exist no local bases for our finite element space. We present a key result on the efficient implementation of the scheme by identifying the equivalence of the (dense) mass matrix inversion of the globally divergence-free finite element space to a standard (hybrid-)mixed Poisson solver. Hence, in each time step, a (hybrid-)mixed Poisson solver is used, which reflects the global nature of the incompressibility condition. Since we treat viscosity explicitly for the Navier-Stokes equation, our method shall be best suited for unsteady high-Reynolds number flows so that the CFL constraint is not too restrictive." }
{ "title": "On reference solutions and the sensitivity of the 2D Kelvin-Helmholtz instability problem", "abstract": "Two-dimensional Kelvin-Helmholtz instability problems are popular examples for assessing discretizations for incompressible flows at high Reynolds number. Unfortunately, the results in the literature differ considerably. This paper presents computational studies of a Kelvin-Helmholtz instability problem with high order divergence-free finite element methods. Reference results in several quantities of interest are obtained for three different Reynolds numbers up to the beginning of the final vortex pairing. A meshindependent prediction of the final pairing is not achieved due to the sensitivity of the considered problem with respect to small perturbations. A theoretical explanation of this sensitivity to small perturbations is provided based on the theory of self-organization of 2D turbulence. Possible sources of perturbations that arise in almost any numerical simulation are discussed." }
1805.01706
1803.06893
Numerical tests
In addition, a qualitative comparison against benchmark data from #REFR is presented in terms of the temporal evolution of the enstrophy E(t) (here we rescale ω_h with √ν to match the real vorticity again).
[ "The characteristic time is t̂ = δ_0/u_∞, the Reynolds number is Re = 10000, and the kinematic viscosity is ν = δ_0 u_∞/Re.", "We use a structured mesh of 128 segments per side, representing 131072 triangular elements, and we solve the problem using our first-order DG scheme, setting again the stabilisation constants to a_11 = c_11 = σ = 1/∆t and d_11 = ν, where the timestep is taken as ∆t = t̂/20.", "The specification of this problem implies that the solutions will be quite sensitive to the initial perturbations present in the velocity, which will amplify and consequently vortices will appear.", "We proceed to compute numerical solutions until the dimensionless time t = 7, and present in Figure 4 sample solutions at three different simulation times.", "For visualisation purposes we zoom into the region 0.25 ≤ y ≤ 0.75, where all flow patterns are concentrated." ]
[ "We also record the evolution of the palinstrophy P(t), a quantity that encodes the dissipation process.", "These quantities are defined, and we remark that for the palinstrophy we use the discrete gradient associated with the DG discretisation.", "We show these quantities in Figure 5, where we also include results from #OTHEREFR that correspond to coarse and fine mesh solutions of the Navier-Stokes equations using a high-order scheme based on Brezzi-Douglas-Marini elements." ]
[ "real vorticity" ]
method
{ "title": "Analysis and approximation of a vorticity-velocity-pressure formulation for the Oseen equations", "abstract": "We introduce a family of mixed methods and discontinuous Galerkin discretisations designed to numerically solve the Oseen equations written in terms of velocity, vorticity, and Bernoulli pressure. The unique solvability of the continuous problem is addressed by invoking a global inf-sup property in an adequate abstract setting for non-symmetric systems. The proposed finite element schemes, which produce exactly divergence-free discrete velocities, are shown to be well-defined and optimal convergence rates are derived in suitable norms. In addition, we establish optimal rates of convergence for a class of discontinuous Galerkin schemes, which employ stabilisation. A set of numerical examples serves to illustrate salient features of these methods." }
{ "title": "On reference solutions and the sensitivity of the 2D Kelvin-Helmholtz instability problem", "abstract": "Two-dimensional Kelvin-Helmholtz instability problems are popular examples for assessing discretizations for incompressible flows at high Reynolds number. Unfortunately, the results in the literature differ considerably. This paper presents computational studies of a Kelvin-Helmholtz instability problem with high order divergence-free finite element methods. Reference results in several quantities of interest are obtained for three different Reynolds numbers up to the beginning of the final vortex pairing. A meshindependent prediction of the final pairing is not achieved due to the sensitivity of the considered problem with respect to small perturbations. A theoretical explanation of this sensitivity to small perturbations is provided based on the theory of self-organization of 2D turbulence. Possible sources of perturbations that arise in almost any numerical simulation are discussed." }
1909.06229
1803.06893
Piecewise smooth manifolds
We can hence compare our numerical solution on Γ 0 to the results in the literature #REFR .
[ "In this subsection we consider 4 similar but different cylindrical setups in the following: Γ 0 is an open cylinder of height 1 with radius r = 1/(2π), i.e.", "perimeter 1, and we can isometrically map the unit square (periodic in one direction) onto Γ 0 . On the boundary we prescribe free-slip boundary conditions.", "As the surface Navier-Stokes equations are invariant under isometric maps, we know that the solution to the corresponding 2D Kelvin-Helmholtz problem is identical." ]
[ "Γ 1 is a corresponding closed cylinder with bottom and top added, i.e. without boundary.", "Γ 2 is similar to Γ 1 except for the decreased height of 1 − 2r, where r is the cylinder radius.", "Hence, the geodesics from the center of the top of the cylinder to the center of the bottom of the cylinder have length 1.", "The last case, case 3, considers an even shorter closed cylinder with height 1/2. In Fig.", "13 the geometries and used meshes are sketched alongside the decay of energy and enstrophy over time, whereas in Fig." ]
[ "numerical solution" ]
result
{ "title": "Divergence-free tangential finite element methods for incompressible flows on surfaces", "abstract": "In this work we consider the numerical solution of incompressible flows on two-dimensional manifolds. Whereas the compatibility demands of the velocity and the pressure spaces are known from the flat case, one further has to deal with the approximation of a velocity field that lies only in the tangential space of the given geometry. Abandoning H^1-conformity allows us to construct finite elements which are - due to an application of the Piola transformation - exactly tangential. To reintroduce continuity (in a weak sense) we make use of (hybrid) discontinuous Galerkin techniques. To further improve this approach, H(div_Γ)-conforming finite elements can be used to obtain exactly divergence-free velocity solutions. We present several new finite element discretizations. On a number of numerical examples we examine and compare their qualitative properties and accuracy." }
{ "title": "On reference solutions and the sensitivity of the 2D Kelvin-Helmholtz instability problem", "abstract": "Two-dimensional Kelvin-Helmholtz instability problems are popular examples for assessing discretizations for incompressible flows at high Reynolds number. Unfortunately, the results in the literature differ considerably. This paper presents computational studies of a Kelvin-Helmholtz instability problem with high order divergence-free finite element methods. Reference results in several quantities of interest are obtained for three different Reynolds numbers up to the beginning of the final vortex pairing. A meshindependent prediction of the final pairing is not achieved due to the sensitivity of the considered problem with respect to small perturbations. A theoretical explanation of this sensitivity to small perturbations is provided based on the theory of self-organization of 2D turbulence. Possible sources of perturbations that arise in almost any numerical simulation are discussed." }
1703.05135
0909.2735
Notations and Description of the Phase Transition Model
In this section we fix notation and recall some properties of the 2-Phase traffic model introduced in #REFR .
[]
[ "As already said, the model (1) is an extension of the classical LWR model, given by the following scalar conservation law", "where ρ is the traffic density and V = V (t, x, ρ) is the speed.", "We consider the following two assumptions on the speed:", "• We assume that, at a given density, different drivers may choose different velocities, that is, we assume that V = w ψ(ρ), where ψ = ψ(ρ) is a C 2 function and w = w(t, x) is the maximal speed of a driver, located at position x at time t.", "• We impose an overall bound on the speed V max . We get the following 2 × 2 system" ]
[ "2-Phase traffic model" ]
background
{ "title": "The Godunov method for a 2-phase model", "abstract": "We consider the Godunov numerical method to the phase-transition traffic model, proposed in [1], by Colombo, Marcellini, and Rascle. Numerical tests are shown to prove the validity of the method. Moreover we highlight the differences between such model and the one proposed in [2], by Blandin, Work, Goatin, Piccoli, and Bayen." }
{ "title": "A 2-phase traffic model based on a speed bound", "abstract": "We extend the classical LWR traffic model allowing different maximal speeds to different vehicles. Then, we add a uniform bound on the traffic speed. The result, presented in this paper, is a new macroscopic model displaying 2 phases, based on a non-smooth 2 × 2 system of conservation laws. This model is compared with other models of the same type in the current literature, as well as with a kinetic one. Moreover, we establish a rigorous connection between a microscopic Follow-The-Leader model based on ordinary differential equations and this macroscopic continuum model. Mathematics Subject Classification: 35L65, 90B20" }
1811.02514
1711.04819
III. PROPOSED UNCERTAINTY QUANTIFICATION METHODS
Firstly, we now consider UQ strategies for general image/signal processing problems, rather than only the special application to RI imaging in #REFR .
[ "Then a local credible interval (ξ −,Ωi , ξ +,Ωi ) for region Ω i is defined by #OTHEREFR where", "N is the index operator on Ω i , with value 1 for pixels in Ω i and 0 otherwise.", "Note that ξ −,Ωi and ξ +,Ωi are actually the values that saturate the HPD credible region C α from above and from below at Ω i .", "Then the local credible interval (ξ − , ξ + ) for the whole image/signal is obtained by gathering all the (ξ −,Ωi , ξ +,Ωi ), ∀i, i.e.,", "We briefly clarify the distinctions between this work and #OTHEREFR ." ]
[ "Secondly, here we adjust µ automatically, but #OTHEREFR assumes µ is known beforehand.", "Finally, we consider the over-complete bases Ψ (such as SARA #OTHEREFR , #OTHEREFR ) and explore their influence in UQ with synthesis and analysis priors, which is not considered in #OTHEREFR . Fig. 3. HPD credible region.", "Plots on the left and right are the results using the orthonormal basis and the SARA dictionary, respectively.", "An MRI brain image is used as an example here (results for the RI image M31 are similar)." ]
[ "general image/signal processing", "RI imaging" ]
background
{ "title": "Quantifying Uncertainty in High Dimensional Inverse Problems by Convex Optimisation", "abstract": "Abstract-Inverse problems play a key role in modern image/signal processing methods. However, since they are generally ill-conditioned or ill-posed due to lack of observations, their solutions may have significant intrinsic uncertainty. Analysing and quantifying this uncertainty is very challenging, particularly in high-dimensional problems and problems with non-smooth objective functionals (e.g. sparsity-promoting priors). In this article, a series of strategies to visualise this uncertainty are presented, e.g. highest posterior density credible regions, and local credible intervals (cf. error bars) for individual pixels and superpixels. Our methods support non-smooth priors for inverse problems and can be scaled to high-dimensional settings. Moreover, we present strategies to automatically set regularisation parameters so that the proposed uncertainty quantification (UQ) strategies become much easier to use. Also, different kinds of dictionaries (complete and over-complete) are used to represent the image/signal and their performance in the proposed UQ methodology is investigated." }
{ "title": "Uncertainty quantification for radio interferometric imaging: II. MAP estimation", "abstract": "Uncertainty quantification is a critical missing component in radio interferometric imaging that will only become increasingly important as the big-data era of radio interferometry emerges. Statistical sampling approaches to perform Bayesian inference, like Markov Chain Monte Carlo (MCMC) sampling, can in principle recover the full posterior distribution of the image, from which uncertainties can then be quantified. However, for massive data sizes, like those anticipated from the Square Kilometre Array (SKA), it will be difficult if not impossible to apply any MCMC technique due to its inherent computational cost. We formulate Bayesian inference problems with sparsity-promoting priors (motivated by compressive sensing), for which we recover maximum a posteriori (MAP) point estimators of radio interferometric images by convex optimisation. Exploiting recent developments in the theory of probability concentration, we quantify uncertainties by post-processing the recovered MAP estimate. Three strategies to quantify uncertainties are developed: (i) highest posterior density credible regions; (ii) local credible intervals (cf. error bars) for individual pixels and superpixels; and (iii) hypothesis testing of image structure. These forms of uncertainty quantification provide rich information for analysing radio interferometric observations in a statistically robust manner. Our MAP-based methods are approximately 10 5 times faster computationally than state-of-the-art MCMC methods and, in addition, support highly distributed and parallelised algorithmic structures. For the first time, our MAP-based techniques provide a means of quantifying uncertainties for radio interferometric imaging for realistic data volumes and practical use, and scale to the emerging big-data era of radio astronomy." }
1105.4449
1011.1350
1.4.
In #REFR a geometric complexity theory (GCT) study of M_Mult and its GL(V_1) × GL(V_2) × GL(V_3) orbit closure is considered.
[ "Connections to the GCT program.", "The triangle case is especially interesting because we remark below that in the critical dimension case it corresponds to", "where, setting", "M_Mult_{e_3,e_2,e_1} ∈ V_1 ⊗ V_2 ⊗ V_3 is the matrix multiplication operator, that is, as a tensor, M_Mult_{e_3,e_2,e_1} = Id_{E_3} ⊗ Id_{E_2} ⊗ Id_{E_1}." ]
[ "One sets e_1 = e_2 = e_3 = n and studies the geometry as n → ∞.", "It is a toy case of the varieties introduced by Mulmuley and Sohoni #OTHEREFR 13, #OTHEREFR : letting S^d C^k denote the homogeneous polynomials of degree d on (C^k)^*, the varieties are GL_{n^2} · det_n ⊂ S^n C^{n^2} and GL_{n^2} · ℓ^{n−m} perm_m ⊂ S^n C^{n^2}.", "Here det_n ∈ S^n C^{n^2} is the determinant, a homogeneous polynomial of degree n in n^2 variables, n > m, ℓ ∈ S^1 C^1, perm_m ∈ S^m C^{m^2} is the permanent, and an inclusion C^{m^2+1} ⊂ C^{n^2} has been chosen.", "In #OTHEREFR it was shown that End(C^{n^2}) · det_n = GL_{n^2} · det_n , and determining the difference between these sets is a subject of current research.", "The critical loop case with e_s = 3 for all s is also related to the GCT program, as it corresponds to the multiplication of n matrices of size three." ]
[ "geometric complexity theory" ]
background
{ "title": "On the geometry of tensor network states", "abstract": "Abstract. We answer a question of L. Grasedyck that arose in quantum information theory, showing that the limit of tensors in a space of tensor network states need not be a tensor network state. We also give geometric descriptions of spaces of tensor networks states corresponding to trees and loops. Grasedyck's question has a surprising connection to the area of Geometric Complexity Theory, in that the result is equivalent to the statement that the boundary of the Mulmuley-Sohoni type variety associated to matrix multiplication is strictly larger than the projections of matrix multiplication (and re-expressions of matrix multiplication and its projections after changes of bases). Tensor Network States are also related to graphical models in algebraic statistics." }
{ "title": "Geometric complexity theory and tensor rank", "abstract": "Mulmuley and Sohoni [25, 26] proposed to view the permanent versus determinant problem as a specific orbit closure problem and to attack it by methods from geometric invariant and representation theory. We adopt these ideas towards the goal of showing lower bounds on the border rank of specific tensors, in particular for matrix multiplication. We thus study specific orbit closure problems for the group. A key idea from [26] is that the irreducible Gs-representations occurring in the coordinate ring of the G-orbit closure of a stable tensor w ∈ W are exactly those having a nonzero invariant with respect to the stabilizer group of w. However, we prove that by considering Gs-representations, only trivial lower bounds on border rank can be shown. It is thus necessary to study G-representations, which leads to geometric extension problems that are beyond the scope of the subgroup restriction problems emphasized in [25, 26] and its follow-up papers. We prove a very modest lower bound on the border rank of matrix multiplication tensors using G-representations. This shows at least that the barrier for Gs-representations can be overcome. To advance, we suggest the coarser approach to replace the semigroup of representations of a tensor by its moment polytope. We prove first results towards determining the moment polytopes of matrix multiplication and unit tensors." }
1210.8368
1011.1350
HWV Obstructions
But the converse is not true in general, see for instance the discussion on Strassen's invariant in #REFR .
[ "f_{λ,i}(gh) = 0, which proves the proposition.", "We call such f_λ a HWV obstruction against h ∈ Gc.", "We will show that some HWVs have a succinct encoding, which is linear in their degree d. These properties can be rephrased as follows:", "• There exists some HWV f_λ in C[V] of weight λ that does not vanish on Gh.", "If λ is an occurrence obstruction against h ∈ Gc, then there exists a HWV obstruction f_λ of weight λ." ]
[ "Clearly, if the irreducible representation corresponding to λ occurs in C[V] with high multiplicity, then item one above is much harder to satisfy for occurrence obstructions.", "While Proposition 3.3 tells us that h ∈ Gc can, in principle, always be proven by exhibiting a HWV obstruction, it is unclear whether this is also the case for occurrence obstructions. We state this as an important open problem.", "h_{m,n} ∉ Gc_n, is there an occurrence obstruction proving this?", "Mulmuley and Sohoni conjecture that (2.2) can be proved with occurrence obstructions, see [22, §3]." ]
[ "discussion" ]
background
{ "title": "Explicit lower bounds via geometric complexity theory", "abstract": "We prove the lower bound R(M_m) ≥ (3/2)m^2 − 2 on the border rank of m × m matrix multiplication by exhibiting explicit representation theoretic (occurrence) obstructions in the sense of the geometric complexity theory (GCT) program. While this bound is weaker than the one recently obtained by Landsberg and Ottaviani, these are the first significant lower bounds obtained within the GCT program. Behind the proof is an explicit description of the highest weight vectors in Sym^* in terms of combinatorial objects, called obstruction designs. This description results from analyzing the process of polarization and Schur-Weyl duality." }
{ "title": "Geometric complexity theory and tensor rank", "abstract": "Mulmuley and Sohoni [25, 26] proposed to view the permanent versus determinant problem as a specific orbit closure problem and to attack it by methods from geometric invariant and representation theory. We adopt these ideas towards the goal of showing lower bounds on the border rank of specific tensors, in particular for matrix multiplication. We thus study specific orbit closure problems for the group. A key idea from [26] is that the irreducible Gs-representations occurring in the coordinate ring of the G-orbit closure of a stable tensor w ∈ W are exactly those having a nonzero invariant with respect to the stabilizer group of w. However, we prove that by considering Gs-representations, only trivial lower bounds on border rank can be shown. It is thus necessary to study G-representations, which leads to geometric extension problems that are beyond the scope of the subgroup restriction problems emphasized in [25, 26] and its follow-up papers. We prove a very modest lower bound on the border rank of matrix multiplication tensors using G-representations. This shows at least that the barrier for Gs-representations can be overcome. To advance, we suggest the coarser approach to replace the semigroup of representations of a tensor by its moment polytope. We prove first results towards determining the moment polytopes of matrix multiplication and unit tensors." }
1911.03990
1011.1350
Result details
The proof technique is based on that of #REFR . The proof is postponed to Section 10.
[ "The following Proposition 4.1 writes the multiplicity mult_{λ*} C[Gp] as a nonnegative sum of products of multi-Littlewood-Richardson coefficients and plethysm coefficients.", "Then" ]
[ "We remark that if Problem 9 in [Sta00] is resolved positively, then Proposition 4.1 implies that the multiplicity mult_{λ*} C[Gp] has a combinatorial description, i.e., the map (λ, m, d, D) → mult_{λ*} C[Gp] is in #P.", "The same holds also for its summands b(λ, ̺, D, d).", "It is known that mult_{λ*} C[Gq] = a_λ(D, d) (see e.g. [Lan17, Sec. 9.2.3]), so the same holds for mult_{λ*} C[Gq].", "Our main technical theorem that enables us to find obstructions is the following." ]
[ "proof technique", "proof" ]
method
{ "title": "Implementing geometric complexity theory: On the separation of orbit closures via symmetries", "abstract": "Understanding the difference between group orbits and their closures is a key difficulty in geometric complexity theory (GCT): While the GCT program is set up to separate certain orbit closures, many beautiful mathematical properties are only known for the group orbits, in particular close relations with symmetry groups and invariant spaces, while the orbit closures seem much more difficult to understand. However, in order to prove lower bounds in algebraic complexity theory, considering group orbits is not enough. In this paper we tighten the relationship between the orbit of the power sum polynomial and its closure, so that we can separate this orbit closure from the orbit closure of the product of variables by just considering the symmetry groups of both polynomials and their representation theoretic decomposition coefficients. In a natural way our construction yields a multiplicity obstruction that is neither an occurrence obstruction, nor a so-called vanishing ideal occurrence obstruction. All multiplicity obstructions so far have been of one of these two types. Our paper is the first implementation of the ambitious approach that was originally suggested in the first papers on geometric complexity theory by Mulmuley and Sohoni (SIAM J Comput 2001, 2008): Before our paper, all existence proofs of obstructions only took into account the symmetry group of one of the two polynomials (or tensors) that were to be separated. In our paper the multiplicity obstruction is obtained by comparing the representation theoretic decomposition coefficients of both symmetry groups. Our proof uses a semi-explicit description of the coordinate ring of the orbit closure of the power sum polynomial in terms of Young tableaux, which enables its comparison to the coordinate ring of the orbit." }
{ "title": "Geometric complexity theory and tensor rank", "abstract": "Mulmuley and Sohoni [25, 26] proposed to view the permanent versus determinant problem as a specific orbit closure problem and to attack it by methods from geometric invariant and representation theory. We adopt these ideas towards the goal of showing lower bounds on the border rank of specific tensors, in particular for matrix multiplication. We thus study specific orbit closure problems for the group. A key idea from [26] is that the irreducible Gs-representations occurring in the coordinate ring of the G-orbit closure of a stable tensor w ∈ W are exactly those having a nonzero invariant with respect to the stabilizer group of w. However, we prove that by considering Gs-representations, only trivial lower bounds on border rank can be shown. It is thus necessary to study G-representations, which leads to geometric extension problems that are beyond the scope of the subgroup restriction problems emphasized in [25, 26] and its follow-up papers. We prove a very modest lower bound on the border rank of matrix multiplication tensors using G-representations. This shows at least that the barrier for Gs-representations can be overcome. To advance, we suggest the coarser approach to replace the semigroup of representations of a tensor by its moment polytope. We prove first results towards determining the moment polytopes of matrix multiplication and unit tensors." }
1702.07486
1508.00271
Related work
An encoding scheme is also applied by #REFR , who use an encoder-recurrent-decoder (ERD) model to predict human motion, among other tasks.
[ "The experiments are restricted to walking, jogging and running motions.", "Instead, we seek a more general model that can capture a large variety of actions.", "In #OTHEREFR , a low-dimensional manifold of human motion is learned using a one-layer convolutional autoencoder.", "For motion synthesis, the learned features and high-level action commands form the input to a feed-forward network that is trained to reconstruct the desired motion pattern.", "While the idea of manifold learning resembles our approach, the use of convolutional and pooling layers prevents the implementation of deeper hierarchies due to blurring effects #OTHEREFR ." ]
[ "The encoder-decoder framework learns to reconstruct joint angles, while the recurrent middle layer represents the temporal dynamics.", "As the whole framework is jointly trained, the learned representation is tuned towards the dynamics of the recurrent network and might not be generalizable to new tasks.", "Finally, a combination of recurrent networks and the structural hierarchy of the human body for motion prediction has been introduced by #OTHEREFR in form of structural RNNs (S-RNN).", "By constructing a structural graph in which both nodes and edges consist of LSTMs, the temporal dynamics of both individual limbs and the whole body are modelled.", "Without the aid of a low-dimensional representation, a single model is trained for each motion." ]
[ "human motion", "encoder-recurrent-decoder (ERD) model" ]
method
{ "title": "Deep Representation Learning for Human Motion Prediction and Classification", "abstract": "Generative models of 3D human motion are often restricted to a small number of activities and can therefore not generalize well to novel movements or applications. In this work we propose a deep learning framework for human motion capture data that learns a generic representation from a large corpus of motion capture data and generalizes well to new, unseen, motions. Using an encoding-decoding network that learns to predict future 3D poses from the most recent past, we extract a feature representation of human motion. Most work on deep learning for sequence prediction focuses on video and speech. Since skeletal data has a different structure, we present and evaluate different network architectures that make different assumptions about time dependencies and limb correlations. To quantify the learned features, we use the output of different layers for action classification and visualize the receptive fields of the network units. Our method outperforms the recent state of the art in skeletal motion prediction even though these use action specific training data. Our results show that deep feedforward networks, trained from a generic mocap database, can successfully be used for feature extraction from human motion data and that this representation can be used as a foundation for classification and prediction." }
{ "title": "Recurrent Network Models for Human Dynamics", "abstract": "We propose the Encoder-Recurrent-Decoder (ERD)" }
1702.07486
1508.00271
Motion prediction of specific actions
Note that predictions over 560 ms can diverge from the ground truth substantially due to stochasticity in human motion #REFR while remaining meaningful to a human observer.
[ "This indicates that a structural prior is beneficial to motion prediction.", "As expected, the fine-tuning to specific actions decreases the prediction error and is especially effective during long-term prediction and for actions that are not contained in the original training data, such as \"smoking\".", "We depict the prediction for a walking sequence contained in the H3.6M dataset for the whole range of around 1600 ms in Figure 4 .", "The fine-tuned model (middle) predicts the ground truth (top) with a high level of accuracy.", "The prediction by the general model is accurate up to around 600 ms." ]
[]
[ "human motion" ]
background
{ "title": "Deep Representation Learning for Human Motion Prediction and Classification", "abstract": "Generative models of 3D human motion are often restricted to a small number of activities and can therefore not generalize well to novel movements or applications. In this work we propose a deep learning framework for human motion capture data that learns a generic representation from a large corpus of motion capture data and generalizes well to new, unseen, motions. Using an encoding-decoding network that learns to predict future 3D poses from the most recent past, we extract a feature representation of human motion. Most work on deep learning for sequence prediction focuses on video and speech. Since skeletal data has a different structure, we present and evaluate different network architectures that make different assumptions about time dependencies and limb correlations. To quantify the learned features, we use the output of different layers for action classification and visualize the receptive fields of the network units. Our method outperforms the recent state of the art in skeletal motion prediction even though these use action specific training data. Our results show that deep feedforward networks, trained from a generic mocap database, can successfully be used for feature extraction from human motion data and that this representation can be used as a foundation for classification and prediction." }
{ "title": "Recurrent Network Models for Human Dynamics", "abstract": "We propose the Encoder-Recurrent-Decoder (ERD)" }
1908.07214
1508.00271
Spatio-temporal Recurrent Neural Network (STRNN)
Note that unlike some RNNs #REFR , decoding and prediction start only after the TEncoder has consumed the entire input, making it a sequence-to-sequence model.
[ "It also enables us to impose constraints in a longer time span to stabilize the network.", "The temporal network is named Two-way Bidirectional Temporal Network (TBTN), consisting of three parts: the temporal encoder (TEncoder), the temporal decoder (TDecoder) and the temporal predictor (TPredictor) (Figure 3 ).", "The training is done by iterations of forward and backward passes.", "The forward pass goes through an encoding phase and then a decoding/predicting phase.", "It starts by taking m + 1 frames into TEncoder." ]
[ "After the encoding phase, the internal state of TEncoder is copied to TDecoder and TPredictor as a good/reasonable initialization.", "Then, the forward pass continues on TDecoder and TPredictor simultaneously.", "The decoding in TBTN unrolls in both directions in time.", "The task of TDecoder is to decode the frames backwards in time and the task of the TPredictor is to predict the frames forwards into the future.", "The backward decoding improves the convergence speed as it first decodes the last few frames that the encoder just sees." ]
[ "RNNs" ]
background
{ "title": "Spatio-temporal Manifold Learning for Human Motions via Long-horizon Modeling", "abstract": "Data-driven modeling of human motions is ubiquitous in computer graphics and computer vision applications, such as synthesizing realistic motions or recognizing actions. Recent research has shown that such problems can be approached by learning a natural motion manifold using deep learning on a large amount of data, to address the shortcomings of traditional data-driven approaches. However, previous deep learning methods can be sub-optimal for two reasons. First, the skeletal information has not been fully utilized for feature extraction. Unlike images, it is difficult to define spatial proximity in skeletal motions in the way that deep networks can be applied for feature extraction. Second, motion is time-series data with strong multi-modal temporal correlations between frames. On the one hand, a frame could be followed by several candidate frames leading to different motions; on the other hand, long-range dependencies exist where a number of frames in the beginning correlate to a number of frames later. Ineffective temporal modeling would either under-estimate the multi-modality and variance, resulting in featureless mean motion, or over-estimate them, resulting in jittery motions, which is a major source of visual artifacts. In this paper, we propose a new deep network to tackle these challenges by creating a natural motion manifold that is versatile for many applications. The network has a new spatial component for feature extraction. It is also equipped with a new batch prediction model that predicts a large number of frames at once, such that long-term temporally-based objective functions can be employed to correctly learn the motion multi-modality and variances. With our system, long-duration motions can be predicted/synthesized using an open-loop setup where the motion retains the dynamics accurately. It can also be used for denoising corrupted motions and synthesizing new motions with given control signals. We demonstrate that our system can create superior results compared to existing work in multiple applications." }
{ "title": "Recurrent Network Models for Human Dynamics", "abstract": "We propose the Encoder-Recurrent-Decoder (ERD)" }
1904.00442
1508.00271
B. Human motion forecasting
In order to generate predictions for a joint (node) y starting from a given prefix sequence X_pref, we build the distribution p(X | X_pref, y) (see details in Section C) and we sample sequences from that posterior. Our evaluation method and metric again followed #REFR .
[ "For SpaMHMM, we used these same values of M and S and we did 3-fold cross-validation on the training data of the action \"walking\" to finetune the value of λ in the range [10^-4, 1]. We ended up using λ = 0.05.", "The number of hidden states in 1-HMM was set to 51 and in K-HMM it was set to 11 hidden states per HMM.", "The same values were then used to train the models for the remaining actions.", "Every model was trained for 100 iterations of EM or until the loss plateaus.", "For SpaMHMM, we did 100 iterations of the inner loop on each M-step, using a learning rate ρ = 10^-2." ]
[ "We fed our model with 8 prefix subsequences with 50 frames each (corresponding to 2 seconds) for each joint from the test subject and we predicted the following 10 frames (corresponding to 400 milliseconds).", "Each prediction was built by sampling 100 sequences from the posterior and averaging.", "We then computed the average mean angle error for the 8 sequences at different time horizons.", "Results are in Table II.", "Among our models (1-HMM, K-HMM, MHMM and SpaMHMM), SpaMHMM outperformed the rest in all actions except \"eating\"." ]
[ "given prefix sequence", "sequences" ]
method
{ "title": "SpaMHMM: Sparse Mixture of Hidden Markov Models for Graph Connected Entities", "abstract": "Abstract-We propose a framework to model the distribution of sequential data coming from a set of entities connected in a graph with a known topology. The method is based on a mixture of shared hidden Markov models (HMMs), which are jointly trained in order to exploit the knowledge of the graph structure and in such a way that the obtained mixtures tend to be sparse. Experiments in different application domains demonstrate the effectiveness and versatility of the method." }
{ "title": "Recurrent Network Models for Human Dynamics", "abstract": "We propose the Encoder-Recurrent-Decoder (ERD)" }
1705.02082
1508.00271
Related Work
Work of #REFR uses recurrent networks to predict a set of body joint heatmaps at a future frame.
[ "However, it is hard to train many mixtures of high-dimensional output spaces, and, as it has been observed, many components often remain untrained, with one component dominating the rest #OTHEREFR , unless careful mixture balancing is designed #OTHEREFR .", "Many recent data-driven approaches predict motion directly from image pixels.", "In #OTHEREFR , a large, nonparametric image patch vocabulary is built for patch motion regression.", "In #OTHEREFR , dense optical flow is predicted from a single image and the multimodality of motion is handled by considering a different softmax loss for every pixel.", "Work of #OTHEREFR predicts ball trajectories in synthetic \"billiard-like\" worlds directly from a sequence of visual glimpses using a regression loss." ]
[ "Such a representation, though, cannot possibly group the heatmap peaks into coherent 2D pose proposals.", "Work of #OTHEREFR casts frame prediction as sequential conditional prediction, and samples from a categorical distribution of 255 pixel values at every pixel location, conditioning on the past history and the image generated so far.", "It is unclear how to handle the computational overhead of such models effectively.", "Stochastic neural networks.", "Stochastic variables have been used in a variety of settings in the deep learning literature, e.g., for generative modeling, regularization, reinforcement learning, etc." ]
[ "recurrent networks" ]
background
{ "title": "Motion Prediction Under Multimodality with Conditional Stochastic Networks", "abstract": "Given a visual history, multiple future outcomes for a video scene are equally probable; in other words, the distribution of future outcomes has multiple modes. Multimodality is notoriously hard to handle by standard regressors or classifiers: the former regress to the mean and the latter discretize a continuous high-dimensional output space. In this work, we present stochastic neural network architectures that handle such multimodality through stochasticity: future trajectories of objects, body joints or frames are represented as deep, non-linear transformations of random (as opposed to deterministic) variables. Such random variables are sampled from simple Gaussian distributions whose means and variances are parametrized by the output of convolutional encoders over the visual history. We introduce novel convolutional architectures for predicting future body joint trajectories that outperform fully connected alternatives [29]. We introduce stochastic spatial transformers through optical flow warping for predicting future frames, which outperform their deterministic equivalents [17]. Training stochastic networks involves an intractable marginalization over stochastic variables. We compare various training schemes that handle such marginalization through a) straightforward sampling from the prior, b) conditional variational autoencoders [23, 29], and c) a proposed K-best-sample loss that penalizes the best prediction under a fixed \"prediction budget\". We show experimental results on object trajectory prediction, human body joint trajectory prediction and video prediction under varying future uncertainty, validating quantitatively and qualitatively our architectural choices and training schemes." }
{ "title": "Recurrent Network Models for Human Dynamics", "abstract": "We propose the Encoder-Recurrent-Decoder (ERD)" }
1511.05298
1508.00271
Human motion modeling and forecasting
We show that our structured approach outperforms the state-of-the-art unstructured deep architecture #REFR on motion forecasting from motion capture (mocap) data.
[ "Human body is a good example of separate but well related components.", "Its motion involves complex spatiotemporal interactions between the components (arms, legs, spine), resulting in sensible motion styles like walking, eating etc.", "In this experiment, we represent the complex motion of humans over st-graphs and learn to model them with S-RNN." ]
[ "Several approaches based on Gaussian processes #OTHEREFR , Restricted Boltzmann Machines (RBMs) #OTHEREFR , and RNNs #OTHEREFR have been proposed to model human motion. Recently, Fragkiadaki et al.", "#OTHEREFR proposed an encoder-RNN-decoder (ERD) which gets state-of-the-art forecasting results on H3.6m mocap data set #OTHEREFR . S-RNN architecture for human motion.", "Our S-RNN architecture follows the st-graph shown in Figure 5a .", "According to the st-graph, the spine interacts with all the body parts, and the arms and legs interact with each other.", "The st-graph is automatically transformed to S-RNN following Section 3.2." ]
[ "motion forecasting" ]
method
{ "title": "Structural-RNN: Deep Learning on Spatio-Temporal Graphs", "abstract": "Deep Recurrent Neural Network architectures, though remarkably capable at modeling sequences, lack an intuitive high-level spatio-temporal structure. That is while many problems in computer vision inherently have an underlying high-level structure and can benefit from it. Spatio-temporal graphs are a popular tool for imposing such high-level intuitions in the formulation of real world problems. In this paper, we propose an approach for combining the power of high-level spatio-temporal graphs and the sequence learning success of Recurrent Neural Networks (RNNs). We develop a scalable method for casting an arbitrary spatio-temporal graph as a rich RNN mixture that is feedforward, fully differentiable, and jointly trainable. The proposed method is generic and principled as it can be used for transforming any spatio-temporal graph through employing a certain set of well defined steps. The evaluations of the proposed approach on a diverse set of problems, ranging from modeling human motion to object interactions, show improvement over the state of the art by a large margin. We expect this method to empower new approaches to problem formulation through high-level spatio-temporal graphs and Recurrent Neural Networks." }
{ "title": "Recurrent Network Models for Human Dynamics", "abstract": "We propose the Encoder-Recurrent-Decoder (ERD)" }
1511.05298
1508.00271
Human motion modeling and forecasting
The motion generated by ERD #REFR stays human-like in the short term but drifts away to non-human-like motion in the long term.
[ "Figure 6 shows forecasting 1000ms of human motion on the \"eating\" activity -- the subject drinks while walking.", "S-RNN stays close to the ground truth in the short term and generates human-like motion in the long term.", "On removing edgeRNNs, the parts of the human body become independent and stop interacting through parameters.", "Hence without edgeRNNs the skeleton freezes to some mean position. LSTM-3LR suffers from a drifting problem.", "On many test examples it drifts to the mean position of a walking human ( #OTHEREFR made similar observations about LSTM-3LR)." ]
[ "This was a common outcome of ERD on complex aperiodic activities, unlike S-RNN.", "Furthermore, ERD-produced human motion was non-smooth on many test examples.", "See the video on the project web page for more examples #OTHEREFR . Quantitative evaluation. We follow the evaluation metric of Fragkiadaki et al.", "#OTHEREFR and present the 3D angle error between the forecasted mocap frame and the ground truth in Table 1. Qualitatively, ERD models human motion better than LSTM-3LR.", "However, in the short term, it does not mimic the ground truth as well as LSTM-3LR. Fragkiadaki et al. #OTHEREFR also note this trade-off between ERD and LSTM-3LR." ]
[ "motion", "ERD" ]
background
{ "title": "Structural-RNN: Deep Learning on Spatio-Temporal Graphs", "abstract": "Deep Recurrent Neural Network architectures, though remarkably capable at modeling sequences, lack an intuitive high-level spatio-temporal structure. That is while many problems in computer vision inherently have an underlying high-level structure and can benefit from it. Spatio-temporal graphs are a popular tool for imposing such high-level intuitions in the formulation of real world problems. In this paper, we propose an approach for combining the power of high-level spatio-temporal graphs and the sequence learning success of Recurrent Neural Networks (RNNs). We develop a scalable method for casting an arbitrary spatio-temporal graph as a rich RNN mixture that is feedforward, fully differentiable, and jointly trainable. The proposed method is generic and principled as it can be used for transforming any spatio-temporal graph through employing a certain set of well defined steps. The evaluations of the proposed approach on a diverse set of problems, ranging from modeling human motion to object interactions, show improvement over the state of the art by a large margin. We expect this method to empower new approaches to problem formulation through high-level spatio-temporal graphs and Recurrent Neural Networks." }
{ "title": "Recurrent Network Models for Human Dynamics", "abstract": "We propose the Encoder-Recurrent-Decoder (ERD)" }
1806.08666
1508.00271
BACKGROUND
For example, Fragkiadaki and colleagues #REFR proposed two architectures: LSTM-3LR (3 layers of Long Short-Term Memory cells) and ERD (Encoder-Recurrent-Decoder) that concatenate LSTM units to model the dynamics of human motions.
[ "Therefore, we will focus our discussion on generative motion models and their application in human motion generation and control.", "Our work builds upon a significant body of previous work on constructing generative statistical models for human motion analysis and synthesis.", "Generative statistical motion models are often represented as a set of mathematical functions, which describe human movement using a small number of hidden parameters and their associated probability distributions.", "Previous generative statistical models include Hidden Markov Models (HMMs) #OTHEREFR , variants of statistical dynamic models for modeling spatial-temporal variations within a temporal window #OTHEREFR , and concatenating statistical motion models into finite graphs of deformable motion models #OTHEREFR .", "Most recent work on generative modeling has been focused on employing deep recurrent neural networks (RNNs) to model dynamic temporal behavior of human motions for motion prediction #OTHEREFR ." ]
[ "Jain and colleagues #OTHEREFR introduced structural RNNs (SRNNs) for human motion prediction and generation by combining high-level spatio-temporal graphs with sequence modeling success of RNNs.", "RNNs is appealing to human motion modeling because it can handle nonlinear dynamics and long-term temporal dependencies in human motions.", "However, as observed by other researchers #OTHEREFR , current deep RNN based methods often have difficulty obtaining good performance for long term motion generation.", "They tend to fail when generating long sequences of motion as the errors in their prediction are fed back into the input and accumulate.", "As a result, their long-term results suffer from occasional unrealistic artifacts such as foot sliding and gradually converge to a static pose." ]
[ "Encoder-Recurrent-Decoder" ]
background
{ "title": "Combining Recurrent Neural Networks and Adversarial Training for Human Motion Synthesis and Control", "abstract": "This paper introduces a new generative deep learning network for human motion synthesis and control. Our key idea is to combine recurrent neural networks (RNNs) and adversarial training for human motion modeling. We first describe an efficient method for training a RNNs model from prerecorded motion data. We implement recurrent neural networks with long short-term memory (LSTM) cells because they are capable of handling nonlinear dynamics and long term temporal dependencies present in human motions. Next, we train a refiner network using an adversarial loss, similar to Generative Adversarial Networks (GANs), such that the refined motion sequences are indistinguishable from real motion capture data using a discriminative network. We embed contact information into the generative deep learning model to further improve the performance of our generative model. The resulting model is appealing to motion synthesis and control because it is compact, contact-aware, and can generate an infinite number of naturally looking motions with infinite lengths. Our experiments show that motions generated by our deep learning model are always highly realistic and comparable to high-quality motion capture data. We demonstrate the power and effectiveness of our models by exploring a variety of applications, ranging from random motion synthesis, online/offline motion control, and motion filtering. We show the superiority of our generative model by comparison against baseline models." }
{ "title": "Recurrent Network Models for Human Dynamics", "abstract": "We propose the Encoder-Recurrent-Decoder (ERD)" }
1807.02350
1508.00271
I. INTRODUCTION
These models slightly outperform the results in #REFR , have lower computational complexity once trained, and are therefore applicable to online tasks, but may overfit training data due to their deterministic mapping between subsequences.
[ "proposed a generative model for human motion generation using a deep neural architecture with Variational Inference (VI) #OTHEREFR and Bayesian filtering with Dynamic Movement Primitives (DMP) #OTHEREFR which ensures local space-time continuity in movement representation in a reduced space.", "This latent space provides plausibility in data reconstruction, while being generalizable to new samples of movements.", "In #OTHEREFR , the authors compared three different generative structures of encoder-decoder networks with temporal encod-ing, enabling action prediction in the reduced feature space.", "Two used a fully connected DNN, while the last one used a Convolutional Neural Networks (CNN).", "In all these encoderdecoder networks, the encoder learns a smaller representation of an input subsequence x t:t+S while the decoder learns to predict the next data subsequence x t+S+1:t+2S+1 ." ]
[ "[17] proposed a method for motion prediction that outperforms #OTHEREFR by far, and is similar to #OTHEREFR , with the exception that a noise was applied to training samples, by feeding the network with its own generated predicted sequences.", "This noise injection at training time prevents the system overfitting.", "Nevertheless, the learned representation remains biased by the application, i.e.", "prediction, and thus might not learn useful features for recognition purposes.", "The same phenomenon may appear in #OTHEREFR , where a Recurrent Neural Network (RNN) was employed in a generative model, alongside Variational Auto-Encoders (VAE) #OTHEREFR , which generalizes features encoding while being biased by the integration of the RNN internal state variable." ]
[ "training data" ]
result
{ "title": "A Variational Time Series Feature Extractor for Action Prediction", "abstract": "Abstract-We propose a Variational Time Series Feature Extractor (VTSFE), inspired by the VAE-DMP model of Chen et al. [1] , to be used for action recognition and prediction. Our method is based on variational autoencoders. It improves VAE-DMP in that it has a better noise inference model, a simpler transition model constraining the acceleration in the trajectories of the latent space, and a tighter lower bound for the variational inference. We apply the method for classification and prediction of whole-body movements on a dataset with 7 tasks and 10 demonstrations per task, recorded with a wearable motion capture suit. The comparison with VAE and VAE-DMP suggests the better performance of our method for feature extraction. An open-source software implementation of each method with TensorFlow is also provided. In addition, a more detailed version of this work can be found in the indicated code repository. Although it was meant to, the VTSFE hasn't been tested for action prediction, due to a lack of time in the context of Maxime Chaveroche's Master thesis at INRIA." }
{ "title": "Recurrent Network Models for Human Dynamics", "abstract": "We propose the Encoder-Recurrent-Decoder (ERD)" }
1702.08212
1508.00271
B. Online human motion prediction
Note that due to the stochasticity in human motion, an accurate long-term prediction (> 560 ms) is often not possible #REFR .
[ "Additionally, we report a variance estimate for each time step in the predicted time window ∆t as the average sum of variances of the limb and spatial dimensions. In Fig.", "4 a)-c) we visualize the motion prediction errors of the torso, right arm and left arm model for the duration of 1660 ms.", "Since the skeleton is represented in a local reference frame, any natural movement of the torso is restricted to rotations. Therefore, the prediction error is comparatively low.", "The MPE for both arms is similar and grows more strongly than for the torso.", "Interestingly, the model seems to learn that there is less uncertainty for the initial position of the predictions." ]
[ "For HRI it is important to represent these uncertainties about motion predictions such that the robot can take these into account during motion planning.", "In comparison to our CVAE models, a simple linear extrapolation in Fig.", "4 d) showcases the The samples were generated by propagating the past motion window through the network, sampling from the encoder and transitioner distributions and visualizing the mean output of the decoder.", "We depict the past 800 ms and samples of the next 800 ms.", "importance of modeling dynamics." ]
[ "human motion" ]
background
{ "title": "Anticipating many futures: Online human motion prediction and synthesis for human-robot collaboration", "abstract": "Abstract-Fluent and safe interactions of humans and robots require both partners to anticipate the others' actions. A common approach to human intention inference is to model specific trajectories towards known goals with supervised classifiers. However, these approaches do not take possible future movements into account nor do they make use of kinematic cues, such as legible and predictable motion. The bottleneck of these methods is the lack of an accurate model of general human motion. In this work, we present a conditional variational autoencoder that is trained to predict a window of future human motion given a window of past frames. Using skeletal data obtained from RGB depth images, we show how this unsupervised approach can be used for online motion prediction for up to 1660 ms. Additionally, we demonstrate online target prediction within the first 300-500 ms after motion onset without the use of target specific training data. The advantage of our probabilistic approach is the possibility to draw samples of possible future motions. Finally, we investigate how movements and kinematic cues are represented on the learned low dimensional manifold." }
{ "title": "Recurrent Network Models for Human Dynamics", "abstract": "We propose the Encoder-Recurrent-Decoder (ERD)" }
1912.10150
1508.00271
Related Works
For deep-learning-based methods, RNNs are probably one of the most successful models #REFR .
[ "Restricted Boltzmann Machine (RBM) also has been applied for motion generation #OTHEREFR ).", "However, inference for RBM is known to be particularly challenging.", "Gaussian-process latent variable models #OTHEREFR Urtasun et al.", "2008 ) and its variants #OTHEREFR have been applied for this task.", "One problem with such methods, however, is that they are not scalable enough to deal with large-scale data." ]
[ "However, most existing models assume output distributions as Gaussian or Gaussian mixture.", "Different from our implicit representation, these methods are not expressive enough to capture the diversity of human actions.", "In contrast to action prediction, limited work has been done for diverse action generation, apart from some preliminary work.", "Specifically, the motion graph approach #OTHEREFR needs to extract motion primitives from prerecorded data; the diversity and quality of action will be restricted by way of defining the primitives and transitions between the primitives.", "Variational autoencoder and GAN have also been applied in #OTHEREFR" ]
[ "RNNs" ]
method
{ "title": "Learning Diverse Stochastic Human-Action Generators by Learning Smooth Latent Transitions", "abstract": "Human-motion generation is a long-standing challenging task due to the requirement of accurately modeling complex and diverse dynamic patterns. Most existing methods adopt sequence models such as RNN to directly model transitions in the original action space. Due to high dimensionality and potential noise, such modeling of action transitions is particularly challenging. In this paper, we focus on skeleton-based action generation and propose to model smooth and diverse transitions on a latent space of action sequences with much lower dimensionality. Conditioned on a latent sequence, actions are generated by a frame-wise decoder shared by all latent action-poses. Specifically, an implicit RNN is defined to model smooth latent sequences, whose randomness (diversity) is controlled by noise from the input. Different from standard action-prediction methods, our model can generate action sequences from pure noise without any conditional action poses. Remarkably, it can also generate unseen actions from mixed classes during training. Our model is learned with a bi-directional generative-adversarial-net framework, which not only can generate diverse action sequences of a particular class or mix classes, but also learns to classify action sequences within the same model. Experimental results show the superiority of our method in both diverse action-sequence generation and classification, relative to existing methods." }
{ "title": "Recurrent Network Models for Human Dynamics", "abstract": "We propose the Encoder-Recurrent-Decoder (ERD)" }
1804.10692
1512.03012
Policy learning with perceptual rewards
Using the subject and object categories extracted from the natural language utterance, we retrieve corresponding 3D models from external 3D databases (3D ShapeNet #REFR and 3D Warehouse [2]) and import them into a physics simulator (Bullet).
[ "Model-free policy search with binary rewards has notoriously high sample complexity due to the lack of informative gradients for the overwhelming majority of the sampled actions #OTHEREFR .", "Efficient policy search requires shaped rewards, either explicitly #OTHEREFR , or more recently, implicitly [5] , by encoding the goal configuration in a continuous space where similarity can be measured against alternative goals achieved during training.", "If we were able to visually picture the desired 3D object configuration to be achieved by our pick-and-place policies, then Euclidean distances to the pictured objects would provide an effective (approximate) shaping of the true rewards.", "We do so using analysis-by-synthesis, where our trained detector is used to select or discard sampled hypotheses.", "Given an initial configuration of two objects that we are supposed to manipulate towards a desired configuration, we seek a physically-plausible 3D object configuration which renders to an image that scores high with our corresponding reward detector." ]
[ "We sample 3D locations for the objects, render the scene and evaluate the score of our detector.", "Note that since we know the object identities, the relation module is the only one that needs to be considered for this scoring.", "We pick the highest scoring 3D configuration as our goal configuration.", "It is used at training time to provide effective shaping using 3D Euclidean distances between desired and current object locations and drastically reduces the number of samples needed for policy learning.", "However, our policy network takes 2D bounding box information as input, and does not need any 3D lifting, but rather operates reactively given the RGB images." ]
[ "natural language utterance", "3D Shapenet" ]
method
{ "title": "Reward Learning from Narrated Demonstrations", "abstract": "Humans effortlessly\"program\"one another by communicating goals and desires in natural language. In contrast, humans program robotic behaviours by indicating desired object locations and poses to be achieved, by providing RGB images of goal configurations, or supplying a demonstration to be imitated. None of these methods generalize across environment variations, and they convey the goal in awkward technical terms. This work proposes joint learning of natural language grounding and instructable behavioural policies reinforced by perceptual detectors of natural language expressions, grounded to the sensory inputs of the robotic agent. Our supervision is narrated visual demonstrations(NVD), which are visual demonstrations paired with verbal narration (as opposed to being silent). We introduce a dataset of NVD where teachers perform activities while describing them in detail. We map the teachers' descriptions to perceptual reward detectors, and use them to train corresponding behavioural policies in simulation.We empirically show that our instructable agents (i) learn visual reward detectors using a small number of examples by exploiting hard negative mined configurations from demonstration dynamics, (ii) develop pick-and place policies using learned visual reward detectors, (iii) benefit from object-factorized state representations that mimic the syntactic structure of natural language goal expressions, and (iv) can execute behaviours that involve novel objects in novel locations at test time, instructed by natural language." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
2002.03892
1512.03012
A. Dataset and Evaluation Metrics
In these experiments, we mainly used a subset of ShapeNetCore #REFR containing 500 models from five categories including Mug, Chair, Knife, Guitar, and Lamp.
[]
[ "For each category, we randomly selected 100 object models and convert them into complete point clouds with the pyntcloud package.", "We then shift and resize the point clouds data and convert them into a 32 × 32 × 32 array as the input size of networks.", "To the best of our knowledge, there are no existing similar researches done before.", "Therefore, we manually labeled an affordance part for each object to provide ground truth data. Part annotations are represented as point labels.", "A set of examples of labeled affordance part for different objects is depicted in Fig. 6 (affordance parts are highlighted by orange color)." ]
[ "Chair", "500 models" ]
method
{ "title": "Learning to Grasp 3D Objects using Deep Residual U-Nets", "abstract": "Affordance detection is one of the challenging tasks in robotics because it must predict the grasp configuration for the object of interest in real-time to enable the robot to interact with the environment. In this paper, we present a new deep learning approach to detect object affordances for a given 3D object. The method trains a Convolutional Neural Network (CNN) to learn a set of grasping features from RGB-D images. We named our approach Res-U-Net since the architecture of the network is designed based on U-Net structure and residual network-styled blocks. It devised to be robust and efficient to compute and use. A set of experiments has been performed to assess the performance of the proposed approach regarding grasp success rate on simulated robotic scenarios. Experiments validate the promising performance of the proposed architecture on a subset of ShapeNetCore dataset and simulated robot scenarios." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1505.05641
1512.03012
3D Model Dataset
We download 3D models from ShapeNet #REFR , which organizes common daily objects with categorization labels and joint alignment.
[ "As we discussed in Sec 2, there are several largescale 3D model repositories online." ]
[ "Since we evaluate our method on the PASCAL 3D+ benchmark, we download 3D models belonging to the 12 categories of PASCAL 3D+, including 30K models in total.", "After symmetry-preserving model set augmentation (Sec 4.1), we make sure that every category has 10K models. For more details, please refer to supplementary material." ]
[ "3D models", "ShapeNet" ]
method
{ "title": "Render for CNN: Viewpoint Estimation in Images Using CNNs Trained with Rendered 3D Model Views", "abstract": "Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1811.11187
1512.03012
Alignment
Figure 6 : Unconstrained scenario where instead of having a ground truth set of CAD models given, we use a set of 400 randomly selected CAD models from ShapeNetCore #REFR , more closely mimicking a real-world application scenario.
[ "6 shows the capability of our method to align in an unconstrained real-world setting where ground truth CAD models are not given, we instead provide a set of 400 random CAD models from ShapeNet #OTHEREFR . #OTHEREFR scenes.", "Our approach to learning geometric features between real and synthetic data produce much more reliable keypoint correspondences, which coupled with our alignment optimization, produces significantly more accurate alignments.", "Table 2 : Accuracy comparison (%) on our CAD alignment benchmark.", "While handcrafted feature descriptors can achieve some alignment on more featureful objects (e.g., chairs, sofas), they do not tolerate well the geometric discrepancies between scan and CAD data -which remains difficult for the learned keypoint descriptors of 3DMatch.", "Scan2CAD directly addresses this problem of learning features that generalize across these domains, thus significantly outperforming state of the art." ]
[]
[ "CAD models" ]
method
{ "title": "Scan2CAD: Learning CAD Model Alignment in RGB-D Scans", "abstract": "Figure 1: Scan2CAD takes as input an RGB-D scan and a set of 3D CAD models (left). We then propose a novel 3D CNN approach to predict heatmap correspondences between the scan and the CAD models (middle). From these predictions, we formulate an energy minimization to find optimal 9 DoF object poses for CAD model alignment to the scan (right). Abstract" }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1907.09381
1512.03012
Implementation details
From ShapeNet #REFR , we select 401 different classes of vehicles and, for each vehicle, we screenshot each rendered image from 80 different viewpoints.
[ "3D model pool." ]
[ "Since the background of the rendered image is very clean, we can simply extract the accurate silhouettes by thresholding.", "In this way, we collect 32,080 silhouettes to form the auxiliary 3D model pool.", "Network structure and training.", "In practice, as encoder-decoder structure, both G 1 and G 2 downsample the resolution from 256 to 64 and then upsample to the original spatial resolution.", "As the middle layers, there are 8 residual blocks with the dilation rate 2." ]
[ "vehicle", "ShapeNet" ]
method
{ "title": "Visualizing the Invisible: Occluded Vehicle Segmentation and Recovery", "abstract": "In this paper, we propose a novel iterative multi-task framework to complete the segmentation mask of an occluded vehicle and recover the appearance of its invisible parts. In particular, firstly, to improve the quality of the segmentation completion, we present two coupled discriminators that introduce an auxiliary 3D model pool for sampling authentic silhouettes as adversarial samples. In addition, we propose a two-path structure with a shared network to enhance the appearance recovery capability. By iteratively performing the segmentation completion and the appearance recovery, the results will be progressively refined. To evaluate our method, we present a dataset, Occluded Vehicle dataset, containing synthetic and real-world occluded vehicle images. Based on this dataset, we conduct comparison experiments and demonstrate that our model outperforms the state-of-the-arts in both tasks of recovering segmentation mask and appearance for occluded vehicles. Moreover, we also demonstrate that our appearance recovery approach can benefit the occluded vehicle tracking in real-world videos." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1803.08457
1512.03012
3-D Point Cloud Clustering
For these experiments, we use objects from ShapeNet, #REFR which are sampled to create point clouds with 2048 points.
[ "Contrary to the datasets we have shown so far, the feature representation of the point clouds must be permutation-invariant and the reconstruction should match the shape outline and not the exact point coordinates.", "Therefore, a different autoencoder architecture and loss need to be used.", "We use the architecture in the work by Achlioptas et al.", "#OTHEREFR The number of filters in each layer of the encoder is 64-128-128-256-128 and the number of filters in each layer of the decoder is 256-256-3 Â #points in the point cloud. The loss function must be invariant to permutations. Therefore, the MSE loss function is not suitable. Instead, we used Chamfer loss.", "We perform two sets of experiments on 3d data-inter-class clustering, where the dataset contains different classes of 3-D objects, and intra-class clustering, where the dataset contains subcategories of the same class." ]
[ "The autoencoder is first trained for 1000 iterations using an Adam optimizer with a learning rate of 0.0005.", "During the clustering stage, the autoencoder learning rate is set to 0.0001 and the learning rate of U is set to 0.0001.", "The number of epochs between m update is set to 30." ]
[ "ShapeNet" ]
method
{ "title": "Clustering-Driven Deep Embedding With Pairwise Constraints", "abstract": "Recently, there has been increasing interest to leverage the competence of neural networks to analyze data. In particular, new clustering methods that employ deep embeddings have been presented. In this paper, we depart from centroid-based models and suggest a new framework, called Clustering-driven deep embedding with PAirwise Constraints (CPAC), for nonparametric clustering using a neural network. We present a clustering-driven embedding based on a Siamese network that encourages pairs of data points to output similar representations in the latent space. Our pair-based model allows augmenting the information with labeled pairs to constitute a semi-supervised framework. Our approach is based on analyzing the losses associated with each pair to refine the set of constraints. We show that clustering performance increases when using this scheme, even with a limited amount of user queries. We demonstrate how our architecture is adapted for various types of data and present the first deep framework to cluster three-dimensional (3-D) shapes." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1812.02725
1512.03012
Introduction
This advantage allows us to leverage both 2D image datasets and 3D shape collections #REFR and to synthesize objects of diverse shapes and texture.
[ "Finally, it learns to add diverse, realistic texture to 2.5D sketches and produce 2D images that are indistinguishable from real photos. We call our model Visual Object Networks (VON).", "32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. #OTHEREFR .", "(b) Our model produces three outputs: a 3D shape, its 2.5D projection given a viewpoint, and a final image with realistic texture.", "(c) Given this disentangled 3D representation, our method allows several 3D applications including changing viewpoint and editing shape or texture independently. Please see our code and website for more details.", "Wiring in conditional independence reduces our need for densely annotated data: unlike classic morphable face models #OTHEREFR , our training does not require paired data between 2D images and 3D shapes, nor dense correspondence annotations in 3D data." ]
[ "Through extensive experiments, we show that VON produce more realistic image samples than recent 2D deep generative models.", "We also demonstrate many 3D applications that are enabled by our disentangled representation, including rotating an object, adjusting object shape and texture, interpolating between two objects in texture and shape space independently, and transferring the appearance of a real image to new objects and viewpoints." ]
[ "2D image datasets", "3D shape collections" ]
background
{ "title": "Visual Object Networks: Image Generation with Disentangled 3D Representation", "abstract": "Recent progress in deep generative models has led to tremendous breakthroughs in image generation. However, while existing models can synthesize photorealistic images, they lack an understanding of our underlying 3D world. We present a new generative model, Visual Object Networks (VON), synthesizing natural images of objects with a disentangled 3D representation. Inspired by classic graphics rendering pipelines, we unravel our image formation process into three conditionally independent factors-shape, viewpoint, and texture-and present an end-to-end adversarial learning framework that jointly models 3D shapes and 2D images. Our model first learns to synthesize 3D shapes that are indistinguishable from real shapes. It then renders the object's 2.5D sketches (i.e., silhouette and depth map) from its shape under a sampled viewpoint. Finally, it learns to add realistic texture to these 2.5D sketches to generate natural images. The VON not only generates images that are more realistic than state-of-the-art 2D image synthesis methods, but also enables many 3D operations such as changing the viewpoint of a generated image, editing of shape and texture, linear interpolation in texture and shape space, and transferring appearance across different objects and viewpoints." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1907.13236
1512.03012
Introduction
Since collecting a large dataset with ground truth annotations is expensive and time-consuming, it is appealing to utilize synthetic data for training, such as using the ShapeNet repository which contains thousands of 3D shapes of different objects #REFR .
[ "A common environment in which manipulation tasks take place is on tabletops.", "Thus, in this paper, we approach this by focusing on the problem of unseen object instance segmentation (UOIS), where the goal is to separately segment every arbitrary (and potentially unseen) object instance, in tabletop environments.", "Training a perception module requires a large amount of data.", "In order to ensure the generalization capability of the module to recognize unseen objects, we need to learn from data that contains many various objects.", "However, in many robot environments, large-scale datasets with this property do not exist." ]
[ "However, there exists a domain gap between synthetic data and real world data.", "Training directly on synthetic data only usually does not work well in the real world #OTHEREFR .", "Consequently, recent efforts in robot perception have been devoted to the problem of Sim2Real, where the goal is to transfer capabilities learned in simulation to real world settings.", "For instance, some works have used domain adaptation techniques to bridge the domain gap when unlabeled real data is available #OTHEREFR .", "Domain randomization #OTHEREFR was proposed to diversify the rendering of synthetic data for training." ]
[ "ground truth annotations", "ShapeNet repository" ]
method
{ "title": "The Best of Both Modes: Separately Leveraging RGB and Depth for Unseen Object Instance Segmentation", "abstract": "Abstract: In order to function in unstructured environments, robots need the ability to recognize unseen novel objects. We take a step in this direction by tackling the problem of segmenting unseen object instances in tabletop environments. However, the type of large-scale real-world dataset required for this task typically does not exist for most robotic settings, which motivates the use of synthetic data. We propose a novel method that separately leverages synthetic RGB and synthetic depth for unseen object instance segmentation. Our method is comprised of two stages where the first stage operates only on depth to produce rough initial masks, and the second stage refines these masks with RGB. Surprisingly, our framework is able to learn from synthetic RGB-D data where the RGB is non-photorealistic. To train our method, we introduce a large-scale synthetic dataset of random objects on tabletops. We show that our method, trained on this dataset, can produce sharp and accurate masks, outperforming state-of-the-art methods on unseen object instance segmentation. We also show that our method can segment unseen objects for robot grasping. Code, models and video can be found at https://rse-lab.cs. washington.edu/projects/unseen-object-instance-segmentation/." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1808.09351
1512.03012
Implementation Details
For object meshes, we choose eight CAD models from ShapeNet #REFR including cars, vans, and buses.
[ "Semantic branch.", "Our semantic branch adopts Dilated Residual Networks (DRN) for semantic segmentation . We train the network for 25 epochs.", "Geometric branch.", "We use Mask-RCNN for object proposal generation #OTHEREFR ." ]
[ "Given an object proposal, we predict its scale, rotation, translation, 4 3 FFD grid point coefficients, and an 8-dimensional distribution across candidate meshes with a ResNet-18 network .", "The translation t can be recovered using the estimated offset e, the normalized distance log τ , and the ground truth focal length of the image.", "They are then fed to a differentiable renderer #OTHEREFR to render the instance map and normal map.", "We empirically set λ reproj = 0.1.", "We first train the network with L pred using Adam #OTHEREFR with a learning rate of 10 −3 for 256 epochs and then fine-tune the model with L pred + λ reproj L reproj and REINFORCE with a learning rate of 10 −4 for another 64 epochs." ]
[ "ShapeNet" ]
method
{ "title": "3D-Aware Scene Manipulation via Inverse Graphics", "abstract": "We aim to obtain an interpretable, expressive, and disentangled scene representation that contains comprehensive structural and textural information for each object. Previous scene representations learned by neural networks are often uninterpretable, limited to a single object, or lacking 3D knowledge. In this work, we propose 3D scene de-rendering networks (3D-SDN) to address the above issues by integrating disentangled representations for semantics, geometry, and appearance into a deep generative model. Our scene encoder performs inverse graphics, translating a scene into a structured object-wise representation. Our decoder has two components: a differentiable shape renderer and a neural texture generator. The disentanglement of semantics, geometry, and appearance supports 3D-aware scene manipulation, e.g., rotating and moving objects freely while keeping the consistent shape and texture, and changing the object appearance without affecting its shape. Experiments demonstrate that our editing scheme based on 3D-SDN is superior to its 2D counterpart." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1803.07289
1512.03012
Experiments
To evaluate the effectiveness of our approach, we participate in two benchmarks that arise from the ShapeNet #REFR dataset, which consists of synthetic 3D models created by digital artists.
[ "We conducted several experiments to validate our approach.", "These show that our flex-convolution-based neural network yields competitive performance to previous work on synthetic data for single object classification ( #OTHEREFR , 1024 points) using fewer resources and provide some insights about human performance on this dataset.", "We improve single instance part segmentation ([32], 2048 points) .", "Furthermore, we demonstrate the effectiveness of our approach by performing semantic point cloud segmentation on a large-scale real-world 3D scan ( #OTHEREFR , 270 Mio. points) improving previous methods in both accuracy and speed." ]
[]
[ "ShapeNet dataset" ]
method
{ "title": "Flex-Convolution (Million-Scale Point-Cloud Learning Beyond Grid-Worlds)", "abstract": "Traditional convolution layers are specifically designed to exploit the natural data representation of images -- a fixed and regular grid. However, unstructured data like 3D point clouds containing irregular neighborhoods constantly breaks the grid-based data assumption. Therefore applying best-practices and design choices from 2D-image learning methods towards processing point clouds are not readily possible. In this work, we introduce a natural generalization flex-convolution of the conventional convolution layer along with an efficient GPU implementation. We demonstrate competitive performance on rather small benchmark sets using fewer parameters and lower memory consumption and obtain significant improvements on a million-scale real-world dataset. Ours is the first which allows to efficiently process 7 million points concurrently." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1806.04807
1512.03012
Network Architecture Details
Our results are poorer on the 'Scenes11' dataset, because the images there are synthesized with random objects from the ShapeNet #REFR without physically correct scale.
[ "APPENDIX C: EVALUATION ON DEMON DATASET Table 5 summarizes our results on the DeMoN dataset.", "For a comparison, we also cite the results from DeMoN #OTHEREFR and the most recent work LS-Net .", "We further cite the results from some conventional approaches as reported in DeMoN, indicated as Oracle, SIFT, FF, and Matlab respectively.", "Here, Oracle uses ground truth camera poses to solve the multi-view stereo by SGM #OTHEREFR , while SIFT, FF, and Matlab further use sparse features, optical flow, and KLT tracking respectively for feature correspondence to solve camera poses by the 8-pt algorithm #OTHEREFR Table 5 : Quantitative comparisons on the DeMoN dataset.", "Our method consistently outperforms DeMoN #OTHEREFR at both camera motion and scene depth, except on the 'Scenes11' data, because we enforce multi-view geometry constraint in the BA-Layer." ]
[ "This setting is inconsistent with real data and makes it harder for our method to learn the basis depth map generator.", "When compared with LS-Net , our method achieves similar accuracy on camera poses but better scene depth.", "It proves our feature-metric BA with learned feature is superior than the photometric BA in the LS-Net." ]
[ "'Scene11' dataset", "ShapeNet" ]
method
{ "title": "BA-Net: Dense Bundle Adjustment Network", "abstract": "This paper introduces a network architecture to solve the structure-from-motion (SfM) problem via feature-metric bundle adjustment (BA), which explicitly enforces multi-view geometry constraints in the form of feature-metric error. The whole pipeline is differentiable, so that the network can learn suitable features that make the BA problem more tractable. Furthermore, this work introduces a novel depth parameterization to recover dense per-pixel depth. The network first generates several basis depth maps according to the input image, and optimizes the final depth as a linear combination of these basis depth maps via feature-metric BA. The basis depth maps generator is also learned via end-to-end training. The whole system nicely combines domain knowledge (i.e. hard-coded multi-view geometry constraints) and deep learning (i.e. feature learning and basis depth maps learning) to address the challenging dense SfM problem. Experiments on large scale real data prove the success of the proposed method." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1912.05237
1512.03012
Experiments
Dataset: We render synthetic datasets using objects from ShapeNet #REFR , considering three datasets with varying difficulty.
[ "In this section, we first compare our approach to several baselines on the task of 3D controllable image generation, both on synthetic and real data.", "Next, we conduct a thorough ablation study to better understand the influence of different representations and architecture components." ]
[ "Two datasets contain cars, one with and the other without background.", "For both datasets, we randomly sample 1 to 3 cars from a total of 10 different car models.", "Our third dataset is the most challenging of these three.", "It comprises indoor scenes containing objects of different categories, including chairs, tables and sofas.", "As background we use empty room images from Structured3D #OTHEREFR , a synthetic dataset with photo-realistic 2D images." ]
[ "ShapeNet" ]
method
{ "title": "Towards Unsupervised Learning of Generative Models for 3D Controllable Image Synthesis", "abstract": "In recent years, Generative Adversarial Networks have achieved impressive results in photorealistic image synthesis. This progress nurtures hopes that one day the classical rendering pipeline can be replaced by efficient models that are learned directly from images. However, current image synthesis models operate in the 2D domain where disentangling 3D properties such as camera viewpoint or object pose is challenging. Furthermore, they lack an interpretable and controllable representation. Our key hypothesis is that the image generation process should be modeled in 3D space as the physical world surrounding us is intrinsically three-dimensional. We define the new task of 3D controllable image synthesis and propose an approach for solving it by reasoning both in 3D space and in the 2D image domain. We demonstrate that our model is able to disentangle latent 3D factors of simple multi-object scenes in an unsupervised fashion from raw images. Compared to pure 2D baselines, it allows for synthesizing scenes that are consistent wrt. changes in viewpoint or object pose. We further evaluate various 3D representations in terms of their usefulness for this challenging task." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1809.05068
1512.03012
Training Paradigm
Our 2.5D sketch estimation network and 3D completion network are trained with images rendered with ShapeNet #REFR objects (see Sections 4.1 and 5 for details).
[ "We train our network in two stages.", "We first pre-train the three components of our model separately.", "The shape completion network is then fine-tuned with both voxel loss and naturalness losses." ]
[ "We train the 2.5D sketch estimator using a L2 loss and SGD with a learning rate of 0.001 for 120 epochs.", "We only use the supervised loss L voxel for training the 3D estimator at this stage, again with SGD, a learning rate of 0.1, and a momentum of 0.9 for 80 epochs.", "The naturalness network is trained in an adversarial manner, where we use Adam #OTHEREFR with a learning rate of 0.001 and a batch size of 4 for 80 epochs.", "We set λ = 10 as suggested in Gulrajani et al . #OTHEREFR .", "We then fine-tune our completion network with both voxel loss and naturalness losses as L = L voxel + αL natural ." ]
[ "3D completion network", "ShapeNet objects" ]
method
{ "title": "Learning Shape Priors for Single-View 3D Completion and Reconstruction", "abstract": "Abstract. The problem of single-view 3D shape completion or reconstruction is challenging, because among the many possible shapes that explain an observation, most are implausible and do not correspond to natural objects. Recent research in the field has tackled this problem by exploiting the expressiveness of deep convolutional networks. In fact, there is another level of ambiguity that is often overlooked: among plausible shapes, there are still multiple shapes that fit the 2D image equally well; i.e., the ground truth shape is non-deterministic given a single-view input. Existing fully supervised approaches fail to address this issue, and often produce blurry mean shapes with smooth surfaces but no fine details. In this paper, we propose ShapeHD, pushing the limit of single-view shape completion and reconstruction by integrating deep generative models with adversarially learned shape priors. The learned priors serve as a regularizer, penalizing the model only if its output is unrealistic, not if it deviates from the ground truth. Our design thus overcomes both levels of ambiguity aforementioned. Experiments demonstrate that ShapeHD outperforms state of the art by a large margin in both shape completion and shape reconstruction on multiple real datasets." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1802.09292
1512.03012
C. Render Pipeline
Render Pipeline for 3D Category-level Models: First, we choose a collection of CAD models of chairs, comprising about 250 chairs sampled from the ShapeNet #REFR repository.
[ "Inspired by RenderForCNN #OTHEREFR , we implement our customized render pipeline for generating huge amounts of synthetic keypoint annotated chair images using a small set of 3D annotated keypoints.", "We briefly summarize the steps in our render pipeline, and how we exploit its advantages for learning 3D category-models as well as discriminative 2D feature extractors." ]
[ "For each chair, we synthesize a few (typically 8) 2D images with predetermined viewpoints (azimuth, elevation, and camera-tilt angles).", "Keypoints in these images are then annotated (in 2D) manually, and then triangulated to 3D to obtain 3D keypoint locations on the CAD model.", "Since the models are already assumed to be aligned, performing a Principal Component Analysis (PCA) over the (mean-subtracted) 3D keypoint locations results in the deformation basis (eigenvector obtained from PCA). This constitutes the category-level model learning phase." ]
[ "3D Category-level Models", "ShapeNet repository" ]
method
{ "title": "Constructing Category-Specific Models for Monocular Object-SLAM", "abstract": "We present a new paradigm for real-time objectoriented SLAM with a monocular camera. Contrary to previous approaches, that rely on object-level models, we construct category-level models from CAD collections which are now widely available. To alleviate the need for huge amounts of labeled data, we develop a rendering pipeline that enables synthesis of large datasets from a limited amount of manually labeled data. Using data thus synthesized, we learn categorylevel models for object deformations in 3D, as well as discriminative object features in 2D. These category models are instance-independent and aid in the design of object landmark observations that can be incorporated into a generic monocular SLAM framework. Where typical object-SLAM approaches usually solve only for object and camera poses, we also estimate object shape on-the-fly, allowing for a wide range of objects from the category to be present in the scene. Moreover, since our 2D object features are learned discriminatively, the proposed object-SLAM system succeeds in several scenarios where sparse feature-based monocular SLAM fails due to insufficient features or parallax. Also, the proposed categorymodels help in object instance retrieval, useful for Augmented Reality (AR) applications. We evaluate the proposed framework on multiple challenging real-world scenes and show -to the best of our knowledge -first results of an instance-independent monocular object-SLAM system and the benefits it enjoys over feature-based SLAM methods." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1703.04079
1512.03012
Rigid or man-made shapes
We create data for car and aeroplane mesh models from the ShapeNet database #REFR to feed into our neural network architecture.
[]
[ "We discuss the preprocessing steps and the correspondence development to create robust geometry image data for these synsets.", "Preprocessing: There are two constraints for the spherical parametrization technique of #OTHEREFR to work on a mesh model.", "First, the surface mesh needs to follow the Euler characteristic.", "Almost all mesh models in ShapeNet do not follow the Euler characteristic, and hence, we first voxelize all mesh models at resolution 128 × 128 × 128, and then create a α-shape at α-radius √ 3.", "This α-radius preserves the holes and sharp edges in the derived surface mesh from the voxelized model." ]
[ "ShapeNet database" ]
method
{ "title": "SurfNet: Generating 3D Shape Surfaces Using Deep Residual Networks", "abstract": "3D shape models are naturally parameterized using vertices and faces, i.e., composed of polygons forming a surface. However, current 3D learning paradigms for predictive and generative tasks using convolutional neural networks focus on a voxelized representation of the object. Lifting convolution operators from the traditional 2D to 3D results in high computational overhead with little additional benefit as most of the geometry information is contained on the surface boundary. Here we study the problem of directly generating the 3D shape surface of rigid and non-rigid shapes using deep convolutional neural networks. We develop a procedure to create consistent ‘geometry images’ representing the shape surface of a category of 3D objects. We then use this consistent representation for category-specific shape surface generation from a parametric representation or an image by developing novel extensions of deep residual networks for the task of geometry image generation. Our experiments indicate that our network learns a meaningful representation of shape surfaces allowing it to interpolate between shape orientations and poses, invent new shape surfaces and reconstruct 3D shape surfaces from previously unseen images. Our code is available at https://github.com/sinhayan/surfnet." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1906.01568
1512.03012
Generalization to other objects
For car images we render ShapeNet's #REFR synthetic car models from various viewpoints and textures.
[ "To understand the generalization of the method to other symmetric objects, we train on two additional datasets.", "We use the cat dataset provided by #OTHEREFR , crop the cat heads using the keypoint annotations and split the images by 8:1:1 into train, validation and test sets." ]
[ "We are able to reconstruct both object categories well and the results are visualized in fig. 4 .", "Although we assume Lambertian surfaces to estimate the shading, our model can reconstruct cat faces convincingly despite their fur which has complicated light transport mechanics.", "This shows that the other parts of the model constrain the shape enough to still converge to meaningful representations.", "Overall, the model is able to reconstruct cats and cars as well as human faces, showing that the method generalizes over object categories." ]
[ "ShapeNet's synthetic car" ]
method
{ "title": "Photo-Geometric Autoencoding to Learn 3D Objects from Unlabelled Images", "abstract": "We show that generative models can be used to capture visual geometry constraints statistically. We use this fact to infer the 3D shape of object categories from raw single-view images. Differently from prior work, we use no external supervision, nor do we use multiple views or videos of the objects. We achieve this by a simple reconstruction task, exploiting the symmetry of the objects' shape and albedo. Specifically, given a single image of the object seen from an arbitrary viewpoint, our model predicts a symmetric canonical view, the corresponding 3D shape and a viewpoint transformation, and trains with the goal of reconstructing the input view, resembling an auto-encoder. Our experiments show that this method can recover the 3D shape of human faces, cat faces, and cars from single view images, without supervision. On benchmarks, we demonstrate superior accuracy compared to other methods that use supervision at the level of 2D image correspondences." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
2003.12753
1512.03012
A Baseline Approach for Single-view Reconstruction
Unlike the general objects in ShapeNet #REFR , the garment shape typically appears as a thin layer with open boundary.
[ "To demonstrate the usefulness of Deep Fashion3D, we propose a novel baseline approach for single-view garment reconstruction.", "Specifically, taking a single image I of a garment as input, we aim to reconstruct its 3D shape represented as a triangular mesh.", "Although recent advances in 3D deep learning techniques have achieved promising progress in single-view reconstruction on general objects, we found all existing approaches have difficulty scaling to cloth reconstruction. The main reasons are threefolds: (1) Non-closed surfaces." ]
[ "While implicit representation #OTHEREFR can only model closed surface, voxel based approach #OTHEREFR is not suited for recovering shell-like structure like the garment surface. (2) Complex shape topologies.", "As all existing mesh-based approaches #OTHEREFR rely on deforming a fixed template, they fail to handle the highly diversified topologies introduced by different clothing categories. (3) Complicated geometric details.", "While general man-made objects typically consist of smooth surfaces, the clothing dynamics often introduces intricate high-frequency surface deformations that are challenging to capture.", "Overview.", "To address the above issues, we propose to employ a hybrid representation that leverages the merits of each embedding." ]
[ "garment shape", "ShapeNet" ]
background
{ "title": "Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction from Single Images", "abstract": "High-fidelity clothing reconstruction is the key to achieving photorealism in a wide range of applications including human digitization, virtual try-on, etc. Recent advances in learning-based approaches have accomplished unprecedented accuracy in recovering unclothed human shape and pose from single images, thanks to the availability of powerful statistical models, e.g. SMPL [38] , learned from a large number of body scans. In contrast, modeling and recovering clothed human and 3D garments remains notoriously difficult, mostly due to the lack of large-scale clothing models available for the research community. We propose to fill this gap by introducing Deep Fashion3D, the largest collection to date of 3D garment models, with the goal of establishing a novel benchmark and dataset for the evaluation of image-based garment reconstruction systems. Deep Fashion3D contains 2078 models reconstructed from real garments, which covers 10 different categories and 563 garment instances. It provides rich annotations including 3D feature lines, 3D body pose and the corresponded multi-view real images. In addition, each garment is randomly posed to enhance the variety of real clothing deformations. To demonstrate the advantage of Deep Fashion3D, we propose a novel baseline approach for single-view garment reconstruction, which leverages the merits of both mesh and implicit representations. A novel adaptable template is proposed to enable the learning of all types of clothing in a single network. Extensive experiments have been conducted on the proposed dataset to verify its significance and usefulness." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1701.06507
1512.03012
Training data
Shape geometry comprises 300 random cars from ShapeNet #REFR . Note that the models were assumed to be upright.
[ "Training data comprises of synthetic images that show a random shape, with partially random reflectance shaded by random environment map illumination.", "Shape." ]
[ "This class was chosen, as it presents both smooth surfaces as well as hard edges typical for mechanical objects.", "Note that our results show many classes very different from cars, such as fruits, statues, mechanical appliances, etc.", "Please note that we specifically restricted training to only cars to evaluate how the CNN generalizes to other object classes.", "Other problems like optical flow have been solved using CNNs on general scenes despite being trained on very limited geometry, such as training exclusively on chairs #OTHEREFR .", "Reflectance." ]
[ "ShapeNet" ]
method
{ "title": "Plausible Shading Decomposition For Layered Photo Retouching", "abstract": "Figure 1: Our approach automatically splits input images into layers motivated by light transport, such as (a): occlusion, albedo, irradiance and specular, or (b): the six major spatial light directions, which can then be manipulated independently using off-the-shelf photo manipulation software and composed back to an improved image. For (a) shadows were made deeper, albedo hue changed, saturation of irradiance increased and the specular was blurred for a more glossy material. For (b) The front lighting was made weaker and light from the left had been tinted red. Photographers routinely compose multiple manipulated photos of the same scene (layers) into a single image, which is better than any individual photo could be alone. Similarly, 3D artists set up rendering systems to produce layered images to contain only individual aspects of the light transport, which are composed into the final result in post-production. Regrettably, both approaches either take considerable time to capture, or remain limited to synthetic scenes. In this paper, we suggest a system to allow decomposing a single image into a plausible shading decomposition (PSD) that approximates effects such as shadow, diffuse illumination, albedo, and specular shading. This decomposition can then be manipulated in any off-the-shelf image manipulation software and recomposited back. We do so by learning a convolutional neural network trained using synthetic data. We demonstrate the effectiveness of our decomposition on synthetic (i.e., rendered) and real data (i.e., photographs), and use them for common photo manipulation, which are nearly impossible to perform otherwise from single images." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
2004.00543
1512.03012
Results and Discussion
We choose couch, chair, table, bike, and canoe as six common object classes, and we take the object meshes from ShapeNet #REFR .
[ "For a more concrete comparison, we consider a variant of PIXOR using PointPillar's pillar representation instead of voxelization and keep the backbone architecture identical.", "We compare this variant, PIXOR* against PIXOR and show the results in Figure 8 .", "Here, we can see that with even with identical backbones and training routines, PIXOR* is significantly more robust to our attack purely due to a different input representation.", "Common Objects In this section, to make attacks more realistic, we learn adversaries that resemble common objects that may appear on top of a vehicle in the real world.", "Instead of deforming an icosphere, we initialize from a common object mesh and deform the vertices while constraining the maximum perturbation distances." ]
[ "We apply uniform mesh re-sampling in meshlab #OTHEREFR to reduce the number of faces and produce regular geometry prior to deformation.", "In these experiments we limit the maximum vertex perturbation to 0.03m so that the adversary will resemble the common object, and limit translation to 0.1m, and allow free rotation.", "In Table 3 , we present the visualizations, results, and dimensions of the common objects.", "Moreover, the identity of the adversarial objects are unambiguous to a human, and we also verify that a Point-Net #OTHEREFR classifier trained on ShapeNet #OTHEREFR is also able to correctly classify our perturbed objects.", "This confirms the possibility that the placement of common objects can also hurt LiDAR detectors." ]
[ "ShapeNet" ]
method
{ "title": "Physically Realizable Adversarial Examples for LiDAR Object Detection", "abstract": "Modern autonomous driving systems rely heavily on deep learning models to process point cloud sensory data; meanwhile, deep models have been shown to be susceptible to adversarial attacks with visually imperceptible perturbations. Despite the fact that this poses a security concern for the self-driving industry, there has been very little exploration in terms of 3D perception, as most adversarial attacks have only been applied to 2D flat images. In this paper, we address this issue and present a method to generate universal 3D adversarial objects to fool LiDAR detectors. In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle to hide the vehicle entirely from LiDAR detectors with a success rate of 80%. We report attack results on a suite of detectors using various input representation of point clouds. We also conduct a pilot study on adversarial defense using data augmentation. This is one step closer towards safer self-driving under unseen conditions from limited training data." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1809.05070
1512.03012
Decomposing Tools
Because of the absence of tool data in the ShapeNet Core #REFR dataset, we download the tools from 3D Warehouse and manually remove all unrelated models.
[ "We then demonstrate the practical applicability of our model by decomposing synthetic real-world tools. Table 4 . Quantitative results of physical parameter estimation on tools.", "Combining visual appearance with physics observations helps our model to perform much better on physical parameter estimation, and compared to all other baselines, our model performs significantly better on this dataset.", "Tools." ]
[ "In total, there are 204 valid tools, and we use Blender to remesh and clean up these tools to fix the issues with missing faces and normals. Following Chang et al .", "#OTHEREFR , we perform PCA on the point clouds and align models by their PCA axes.", "Sample tools in our dataset are shown in Figure 6 . Primitives. Similar to Zou et al .", "#OTHEREFR , we first use the energy-based optimization to fit the primitives from the point clouds, and then, we assign each vertex to its nearest primitive and refine each primitive with the minimum oriented bounding box of vertices assigned to it. Other Setups.", "We make use of the same set of materials and densities as in Table 1 and the same textures for materials as described in Section 5.1." ]
[ "tool data", "ShapeNet Core dataset" ]
method
{ "title": "Physical Primitive Decomposition", "abstract": "Abstract. Objects are made of parts, each with distinct geometry, physics, functionality, and affordances. Developing such a distributed, physical, interpretable representation of objects will facilitate intelligent agents to better explore and interact with the world. In this paper, we study physical primitive decomposition-understanding an object through its components, each with physical and geometric attributes. As annotated data for object parts and physics are rare, we propose a novel formulation that learns physical primitives by explaining both an object's appearance and its behaviors in physical events. Our model performs well on block towers and tools in both synthetic and real scenarios; we also demonstrate that visual and physical observations often provide complementary signals. We further present ablation and behavioral studies to better understand our model and contrast it with human performance." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1908.06277
1512.03012
Previous work
Propelled by the availability of large scale CAD collections such as ShapeNet #REFR and the increase in GPU parallel computing capabilities, learning based solutions have become the method of choice for reconstructing 3D shapes from single images.
[]
[ "Generally speaking, the 3D representations currently in use fall into three main categories: (i) grid based methods, such as voxel, which are 3D extensions of Pixels, (ii) topology preserving geometric methods, such as polygon meshes, and (iii) un-ordered geometric structures such as point clouds.", "Grid based methods form the largest body of work in the current literature.", "Voxels, however, do not scale well, due to their cubic memory to resolution ratio.", "To address this issue, researchers have come up with more efficient memory structures. Riegler et al. #OTHEREFR , Tatarchenko et al. #OTHEREFR and Häne et al.", "#OTHEREFR use nested tree structures (Octrees) to leverage the inherent sparsity of the voxel representation. Richter et al." ]
[ "ShapeNet" ]
method
{ "title": "Deep Meta Functionals for Shape Representation", "abstract": "We present a new method for 3D shape reconstruction from a single image, in which a deep neural network directly maps an image to a vector of network weights. The network parametrized by these weights represents a 3D shape by classifying every point in the volume as either within or outside the shape. The new representation has virtually unlimited capacity and resolution, and can have an arbitrary topology. Our experiments show that it leads to more accurate shape inference from a 2D projection than the existing methods, including voxel-, silhouette-, and meshbased methods. The code will be available at: https: //github.com/gidilittwin/Deep-Meta." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1612.00404
1512.03012
Experiments
We perform our experiments primarily using the ShapeNet #REFR dataset which has a large collection of 3D models.
[ "Dataset." ]
[ "In particular, we use the 'airplane' and 'chair' object categories which have thousands of meshes available.", "The ShapeNet models are already aligned in a canonical frame and are of a fixed scale.", "Additionally, in order to demonstrate applicability beyond rigid objects, we also manually download and similarly preprocess a set of around 100 models corresponding to four-legged animals.", "Network Architecture and Training.", "The dataset described above gives us a set of 3D objects {O i }." ]
[ "ShapeNet dataset" ]
method
{ "title": "Learning Shape Abstractions by Assembling Volumetric Primitives", "abstract": ": Examples of chair and animal shapes assembled by composing simple volumetric primitives (cuboids). The obtained reconstructions allows an interpretable representation for each object and provides a consistent parsing across shapes e.g. chair seats are captured by the same primitive across the category. We present a learning framework for abstracting complex shapes by learning to assemble objects using 3D volumetric primitives. In addition to generating simple and geometrically interpretable explanations of 3D objects, our framework also allows us to automatically discover and exploit consistent structure in the data. We demonstrate that using our method allows predicting shape representations which can be leveraged for obtaining a consistent parsing across the instances of a shape collection and constructing an interpretable shape similarity measure. We also examine applications for image-based prediction as well as shape manipulation." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
2002.10342
1512.03012
III. METHOD
The scenario for our experimental comparison is table-top reconstruction and semantic labelling of a scene containing scattered objects, selected from a number of ShapeNet categories #REFR , as a depth camera browses the scene in an ad hoc way.
[]
[ "Since our focus is on a fundamental comparison of view-based and map-based labelling, we choose a height map representation for our scenes whose 2.5D nature allows us to use the same CNN network architecture designed for RGB-D input for both labelling methods.", "We use Height Map Fusion #OTHEREFR as our scene reconstruction backend.", "For our experiments, we opt for a synthetic environment based on rendered RGB-D data using the methodology from SceneNet RGB-D #OTHEREFR .", "A key reason for this decision is the need for a wide variety of scene configurations with semantic label ground truth in order to train high-performing view-based and map-based semantic segmentation networks.", "Furthermore, synthetic data gives us a high level of control over multiple experimental factors, such as the variety of viewpoints, noise, and ground truth for RGB, depth and camera poses." ]
[ "semantic labelling", "ShapeNet categories" ]
method
{ "title": "Comparing View-Based and Map-Based Semantic Labelling in Real-Time SLAM", "abstract": "Generally capable Spatial AI systems must build persistent scene representations where geometric models are combined with meaningful semantic labels. The many approaches to labelling scenes can be divided into two clear groups: view-based which estimate labels from the input viewwise data and then incrementally fuse them into the scene model as it is built; and map-based which label the generated scene model. However, there has so far been no attempt to quantitatively compare view-based and map-based labelling. Here, we present an experimental framework and comparison which uses real-time height map fusion as an accessible platform for a fair comparison, opening up the route to further systematic research in this area." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1903.10170
1512.03012
Shape transform results and ablation studies
The first domain pair on which we test our network is the chair and table datasets from ShapeNet #REFR , which contain mesh models.
[]
[ "The chair dataset consists of 4,768 training shapes and 2,010 test shapes, while the table dataset has 5,933 training shapes and 2,526 test shapes.", "We normalize each chair/table mesh to make the diagonal of its bounding box equal to unit length and sample the normalized mesh uniformly at random to obtain 2,048 points for our point-set shape representation.", "All output point clouds, e.g., in Figures 8-11 , are in the same resolution of 2,048 points.", "Comparing autoencoding.", "With the chair-table domain pair, we first compare our autoencoder, which produces multi-scale and overcomplete latent codes, with two baseline alternatives:" ]
[ "ShapeNet" ]
method
{ "title": "LOGAN: Unpaired Shape Transform in Latent Overcomplete Space", "abstract": "We present LOGAN, a deep neural network aimed at learning generic shape transforms from unpaired domains. The network is trained on two sets of shapes, e.g., tables and chairs, but there is neither a pairing between shapes in the two domains to supervise the shape translation nor any point-wise correspondence between any shapes. Once trained, LOGAN takes a shape from one domain and transforms it into the other. Our network consists of an autoencoder to encode shapes from the two input domains into a common latent space, where the latent codes encode multi-scale shape features in an overcomplete manner. The translator is based on a generative adversarial network (GAN), operating in the latent space, where an adversarial loss enforces cross-domain translation while a feature preservation loss ensures that the right shape features are preserved for a natural shape transform. We conduct various ablation studies to validate each of our key network designs and demonstrate superior capabilities in unpaired shape transforms on a variety of examples over baselines and state-of-the-art approaches. We show that our network is able to learn what shape features to preserve during shape translations, either local or non-local, whether content or style, depending solely on the input domain pairs." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
2003.12397
1512.03012
Related Work
However, it only handles synthetic 3D shapes composed of the most basic geometries, while our method is evaluated on ShapeNet #REFR models.
[ "grammar parsing for shape analysis and modeling. Teboul et al.", "#OTHEREFR use RL to parse the shape grammar of the building facade. Ruiz-Montiel et al.", "#OTHEREFR propose an approach to complement the generative power of shape grammars with reinforcement learning techniques.", "These methods all focus on the 2D domain, while our method targets 3D shape modeling, which is under-explored and more challenging. Sharma et al.", "#OTHEREFR present CSG-Net, which is a neural architecture to parse a 2D or 3D input into a collection of modeling primitives with operations." ]
[ "High-level shape understanding There has been growing interest in highlevel shape analysis, where the ideas are central to part-based segmentation #OTHEREFR and structure-based shape understanding #OTHEREFR .", "Primitive-based shape abstraction #OTHEREFR , in particular, is well-researched for producing structurally simple representation and reconstruction. Zou et al.", "#OTHEREFR introduce a supervised method that uses a generative RNN to predict a set of primitives step-by-step to synthesize a target shape. Li et al. #OTHEREFR and Sun et al.", "#OTHEREFR propose neural architectures to infer the symmetry hierarchy of a 3D shape. Tian et al.", "#OTHEREFR propose a neural program generator to represent 3D shapes as 3D programs, which can reflect shape regularity such as symmetry and repetition." ]
[ "synthetic 3D shapes", "ShapeNet models" ]
method
{ "title": "Modeling 3D Shapes by Reinforcement Learning", "abstract": "We explore how to enable machines to model 3D shapes like human modelers using reinforcement learning (RL). In 3D modeling software like Maya, a modeler usually creates a mesh model in two steps: (1) approximating the shape using a set of primitives; (2) editing the meshes of the primitives to create detailed geometry. Inspired by such artist-based modeling, we propose a two-step neural framework based on RL to learn 3D modeling policies. By taking actions and collecting rewards in an interactive environment, the agents first learn to parse a target shape into primitives and then to edit the geometry. To effectively train the modeling agents, we introduce a novel training algorithm that combines heuristic policy, imitation learning and reinforcement learning. Our experiments show that the agents can learn good policies to produce regular and structure-aware mesh models, which demonstrates the feasibility and effectiveness of the proposed RL framework." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1803.07252
1512.03012
VI. EXPERIMENTAL RESULTS
We first empirically tune parameters on a small dataset with 8 models, then generalize the parameter setting learned from the small dataset to a larger dataset, i.e., 100 models from the ShapeNetCore dataset #REFR for validation.
[ "The proposed scheme GLR is compared with existing works covering the four categories of point cloud denoising methods mentioned in Section II: APSS #OTHEREFR and RIMLS #OTHEREFR for MLS-based methods, AWLOP #OTHEREFR for LOP-based methods, MRPCA #OTHEREFR for sparsity-based methods, non-local denoising (NLD) algorithm #OTHEREFR and LR #OTHEREFR for non-local methods.", "APSS and RIMLS are implemented with MeshLab software #OTHEREFR , AWLOP is implemented with EAR software #OTHEREFR , MRPCA source code is provided by the author, NLD and LR are implemented by ourselves in MATLAB." ]
[ "Comparison with existing methods on both dataset are detailed as follows." ]
[ "ShapeNetCore dataset" ]
method
{ "title": "3D Point Cloud Denoising Using Graph Laplacian Regularization of a Low Dimensional Manifold Model", "abstract": "Abstract-3D point cloud-a new signal representation of volumetric objects-is a discrete collection of triples marking exterior object surface locations in 3D space. Conventional imperfect acquisition processes of 3D point cloud-e.g., stereomatching from multiple viewpoint images or depth data acquired directly from active light sensors-imply non-negligible noise in the data. In this paper, we extend a previously proposed lowdimensional manifold model for the image patches to surface patches in the point cloud, and seek self-similar patches to denoise them simultaneously using the patch manifold prior. Due to discrete observations of the patches on the manifold, we approximate the manifold dimension computation defined in the continuous domain with a patch-based graph Laplacian regularizer, and propose a new discrete patch distance measure to quantify the similarity between two same-sized surface patches for graph construction that is robust to noise. We show that our graph Laplacian regularizer leads to speedy implementation and has desirable numerical stability properties given its natural graph spectral interpretation. Extensive simulation results show that our proposed denoising scheme outperforms state-of-the-art methods in objective metrics and better preserves visually salient structural features like edges. Index Terms-Graph signal processing, point cloud denoising, low-dimensional manifold." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1604.06079
1512.03012
Experimental Results
In this paper, we propose to generate the ground-truth data through rendering ShapeNet models #REFR .
[ "We consider four popular object categories where the underlying reflectional symmetry is salient: chair, car, table, and sofa.", "An important challenge is to obtain ground-truth data to train each individual network.", "Standard dataset creation approaches such as human labeling or scanning are inappropriate for us due to the limitations in cost and in collecting diverse physical objects." ]
[ "We employ an open-source physically-based rendering software, Mitsuba, to generate realistic renderings.", "We use 700−2500 models for each category to generate training data.", "For each selected object, we choose 36 random views, each of which provides an image with ground-truth geometric information.", "For each training dataset, we leave out 20% of the data for validation. Figure 2 shows some example renderings." ]
[ "ShapeNet models" ]
method
{ "title": "DeepSymmetry: Joint Symmetry and Depth Estimation using Deep Neural Networks", "abstract": "Abstract. Due to the abundance of 2D product images from the internet, developing efficient and scalable algorithms to recover the missing depth information is central to many applications. Recent works have addressed the single-view depth estimation problem by utilizing convolutional neural networks. In this paper, we show that exploring symmetry information, which is ubiquitous in man made objects, can significantly boost the quality of such depth predictions. Specifically, we propose a new convolutional neural network architecture to first estimate dense symmetric correspondences in a product image and then propose an optimization which utilizes this information explicitly to significantly improve the quality of single-view depth estimations. We have evaluated our approach extensively, and experimental results show that this approach outperforms state-of-the-art depth estimation techniques." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1904.00230
1512.03012
Dataset Details
This synthetic dataset consists of about 32000 3D CAD models belonging to 16 shape categories from the original ShapeNetCore 3D data repository #REFR .
[ "vKITTI #OTHEREFR .", "Virtual-KITTI (vKITTI) is a synthetic large outdoor dataset with 13 semantic classes from urban scenes. vKITTI imitates data from the real-world KITTI dataset.", "It contains data from 5 different simulated worlds, resulting in 50 high resolution scenes.", "This dataset is used for a variety of tasks, where the most common one in regards to point clouds is semantic segmentation.", "ShapeNet #OTHEREFR ." ]
[ "Each point in a 3D model is annotated with a part label (e.g.", "a plane is segmented into body, wing, engine, and tail parts).", "We consider the subset used for the ShapeNet part segmentation challenge, which contains 17775 models with 50 parts in total.", "Each model is normalized into the 3D cube [−1, 1]^3 and contains a maximum of 3000 points." ]
[ "16 shape categories" ]
method
{ "title": "MortonNet: Self-Supervised Learning of Local Features in 3D Point Clouds", "abstract": "We present a self-supervised task on point clouds, in order to learn meaningful point-wise features that encode local structure around each point. Our self-supervised network, named MortonNet, operates directly on unstructured/unordered point clouds. Using a multi-layer RNN, MortonNet predicts the next point in a point sequence created by a popular and fast Space Filling Curve, the Mortonorder curve. The final RNN state (coined Morton feature) is versatile and can be used in generic 3D tasks on point clouds. In fact, we show how Morton features can be used to significantly improve performance (+3% for 2 popular semantic segmentation algorithms) in the task of semantic segmentation of point clouds on the challenging and large-scale S3DIS dataset. We also show how MortonNet trained on S3DIS transfers well to another large-scale dataset, vKITTI, leading to an improvement over state-ofthe-art of 3.8%. Finally, we use Morton features to train a much simpler and more stable model for part segmentation in ShapeNet. Our results show how our self-supervised task results in features that are useful for 3D segmentation tasks, and generalize well to other datasets." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1802.00411
1512.03012
Overview
To generate ground truth training and evaluation pairs, we virtually scan 3D objects from ShapeNet #REFR . Fig.
[ "To achieve this task, each object model is represented by a high resolution 3D voxel grid.", "We use the simple occupancy grid for shape encoding, where 1 represents an occupied cell and 0 an empty cell.", "Specifically, the input 2.5D partial view, denoted as x, is a 64^3 occupancy grid, while the output 3D shape, denoted as y, is a high resolution 256^3 probabilistic voxel grid.", "The input partial shape is directly calculated from a single depth image given camera parameters.", "We use the ground truth dense 3D shape with aligned orientation as same as the input partial 2.5D depth view to supervise our network." ]
[ "1 is the t-SNE visualization #OTHEREFR of partial 2.5D views and the corresponding full 3D shapes for multiple general chair and bed models.", "Each green dot represents the t-SNE embedding of a 2.5D view, whilst a red dot is the embedding of the corresponding 3D shape.", "It can be seen that multiple categories inherently have similar 2.5D to 3D mapping relationships.", "Essentially, our neural network is to learn a smooth function, denoted as f, which maps green dots to red dots as close as possible in high dimensional space as shown in Equation #OTHEREFR .", "The function f is parametrized by neural layers in general." ]
[ "ShapeNet" ]
method
{ "title": "Dense 3D Object Reconstruction from a Single Depth View", "abstract": "In this paper, we propose a novel approach, 3D-RecGAN++, which reconstructs the complete 3D structure of a given object from a single arbitrary depth view using generative adversarial networks. Unlike existing work which typically requires multiple views of the same object or class labels to recover the full 3D geometry, the proposed 3D-RecGAN++ only takes the voxel grid representation of a depth view of the object as input, and is able to generate the complete 3D occupancy grid with a high resolution of 256^3 by recovering the occluded/missing regions. The key idea is to combine the generative capabilities of 3D encoder-decoder and the conditional adversarial networks framework, to infer accurate and fine-grained 3D structures of objects in high-dimensional voxel space. Extensive experiments on large synthetic datasets and real-world Kinect datasets show that the proposed 3D-RecGAN++ significantly outperforms the state of the art in single view 3D object reconstruction, and is able to reconstruct unseen types of objects." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1807.03407
1512.03012
Latent Denoising Optimization with DAE (DAE+LDO):
We use ShapeNetCore, a subset of the full ShapeNet #REFR dataset with manually verified category and alignment annotations.
[ "To show the transferability of LDO, we also apply it on DAE, with a GAN trained on GFVs produced by the DAE on clean training data.", "We show that LDO is able to capitalize on the more robust representations learnt by DAE to improve performance even further than AE+LDO.", "Dataset." ]
[ "It covers 55 common object categories with about 51,300 unique 3D models.", "For the purposes of our experiments, we use 4 classes with the most available data from the dataset, namely: airplane, car, chair and table.", "For each class, we split the models into 85/5/10 train-validation-test sets for our experiments and results.", "We use the models without any pose or scale augmentations.", "We uniformly sample the point clouds (2048 points each) from these models, which serve as the ground truth for our training." ]
[ "full ShapeNet" ]
method
{ "title": "High Fidelity Semantic Shape Completion for Point Clouds Using Latent Optimization", "abstract": "Semantic shape completion is a challenging problem in 3D computer vision where the task is to generate a complete 3D shape using a partial 3D shape as input. We propose a learning-based approach to complete incomplete 3D shapes through generative modeling and latent manifold optimization. Our algorithm works directly on point clouds. We use an autoencoder and a GAN to learn a distribution of embeddings for point clouds of object classes. An input point cloud with missing regions is first encoded to a feature vector. The representations learnt by the GAN are then used to find the best latent vector on the manifold using a combined optimization that finds a vector in the manifold of plausible vectors that is close to the original input (both in the feature space and the output space of the decoder). Experiments show that our algorithm is capable of successfully reconstructing point clouds with large missing regions with very high fidelity without having to rely on exemplar-based database retrieval." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
2003.12641
1512.03012
Architecture
The network was implemented using four 1D convolution layers of sizes [256, 256, 128, #REFR .
[ "For a feature extractor, we used DGCNN #OTHEREFR with the same configurations as in the official PyTorch implementation: Four point-cloud convolution layers of sizes [64, 64, 128, 256] respectively and a 1D convolution layer with kernel size 1 (featurewise fully connected) with a size of 1024 before extracting a global feature vector by max-pooling.", "The classification head h PCM was implemented using three fully connected layers with sizes [512, 256, 10] respectively (where 10 is the number of classes).", "A dropout of 0.5 was applied to the two hidden layers.", "We implemented a spatial transformation network to align the input point set to a canonical space using two point-cloud convolution layers with sizes [64, 128] respectively, a 1D convolution layer of size 1024 and three fully connected layers of sizes [512, 256, 3] respectively.", "The SSL head h SSL takes as input the global feature vector (of size 1024) concatenated to the feature representations of each point from the initial four layers of the backbone network." ]
[ "We applied batch normalization [18] after all convolution layers and used leaky relu activation with a slope of 0.2." ]
[ "four 1D convolution", "network" ]
method
{ "title": "Self-Supervised Learning for Domain Adaptation on Point-Clouds", "abstract": "Self-supervised learning (SSL) allows to learn useful representations from unlabeled data and has been applied effectively for domain adaptation (DA) on images. It is still unknown if and how it can be leveraged for domain adaptation for 3D perception. Here we describe the first study of SSL for DA on point-clouds. We introduce a new pretext task, Region Reconstruction, motivated by the deformations encountered in sim-to-real transformation. We also demonstrate how it can be combined with a training procedure motivated by the MixUp method. Evaluations on six domain adaptations across synthetic and real furniture data, demonstrate large improvement over previous work." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1904.00817
1512.03012
Dataset
We construct our dataset from the Shapenet Core dataset #REFR with nearly 17,000 shapes from 16 categories.
[]
[ "The dataset consists of labelled corresponding parts across various segmented 3D models.", "Therefore, we construct a set of corresponding pairs using the part-based registration from #OTHEREFR .", "Further, we extract ISS keypoints #OTHEREFR from the resultant parts and add to dataset, the points which have a corresponding keypoint in the corresponding pairs obtained previously.", "We construct the training set from 80% of the models, while 20% of the models are used for testing.", "For training the network, we need a set of positive and negative pairs." ]
[ "Shapenet Core" ]
method
{ "title": "DeepPoint3D: Learning Discriminative Local Descriptors using Deep Metric Learning on 3D Point Clouds", "abstract": "Learning local descriptors is an important problem in computer vision. While there are many techniques for learning local patch descriptors for 2D images, recently efforts have been made for learning local descriptors for 3D points. The recent progress towards solving this problem in 3D leverages the strong feature representation capability of image based convolutional neural networks by utilizing RGB-D or multi-view representations. However, in this paper, we propose to learn 3D local descriptors by directly processing unstructured 3D point clouds without needing any intermediate representation. The method constitutes a deep network for learning permutation invariant representation of 3D points. To learn the local descriptors, we use a multi-margin contrastive loss which discriminates between similar and dissimilar points on a surface while also leveraging the extent of dissimilarity among the negative samples at the time of training. With comprehensive evaluation against strong baselines, we show that the proposed method outperforms state-of-the-art methods for matching points in 3D point clouds. Further, we demonstrate the effectiveness of the proposed method on various applications achieving state-of-the-art results." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1907.09786
1512.03012
C. Ablation studies
However, we must clarify that there exists a major difference between the datasets of the two tasks: unlike the 2-D road layout hallucinating, the prior knowledge and partial observation samples are from an identical distribution (ShapeNet dataset #REFR ) in the 3-D vehicle shape completion task.
[ "The performance without the observation pair degrades in terms of the contour accuracy with certain margins (-2.4% and -2.1% F -measure), due to the fact that the observation pair provides a clear supervision at boundary regions.", "With three supervisions, the network exhibits the best contour accuracy and an optimal overall performance.", "3-D vehicle shape completion: In terms of the 3-D vehicle shape completion task, similar conclusions can be drawn: with all the supervisions enabled, our approach achieves the optimal Hamming distance (0.035).", "Also, with only the masked prior knowledge pair or the pre-selection pair applied, the performance is already as good as the baselines (0.043) and even outperforms them (0.040).", "These two pairs are already sufficient for providing a valid supervision independently." ]
[ "This leads to some different observations in these ablation studies: 1) without the masked prior knowledge pair, the performance still remains optimal, and 2) adding the masked prior knowledge pair to the pre-selection pair degrades the performance.", "Due to the absence of the domain gap between the partially observed and prior knowledge dataset, the pre-selection pair provides a significantly stronger explicit supervision than the masked prior knowledge pair, which is not the case with more challenging data that have domain gaps, such as our 2-D road layout hallucinating benchmark." ]
[ "ShapeNet" ]
background
{ "title": "Hallucinating Beyond Observation: Learning to Complete with Partial Observation and Unpaired Prior Knowledge", "abstract": "We propose a novel single-step training strategy that allows convolutional encoder-decoder networks that use skip connections, to complete partially observed data by means of hallucination. This strategy is demonstrated for the task of completing 2-D road layouts as well as 3-D vehicle shapes. As input, it takes data from a partially observed domain, for which no ground truth is available, and data from an unpaired prior knowledge domain and trains the network in an endto-end manner. Our single-step training strategy is compared against two state-of-the-art baselines, one using a two-step autoencoder training strategy and one using an adversarial strategy. Our novel strategy achieves an improvement up to +12.2% F-measure on the Cityscapes dataset. The learned network intrinsically generalizes better than the baselines on unseen datasets, which is demonstrated by an improvement up to +23.8% F-measure on the unseen KITTI dataset. Moreover, our approach outperforms the baselines using the same backbone network on the 3-D shape completion benchmark by a margin of 0.006 Hamming distance." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1911.11130
1512.03012
Setup
For cars, we render 35k images of synthetic cars from ShapeNet #REFR with random viewpoints and illumination, and randomly split them by 8:1:1 into train, validation and test sets.
[ "We follow the protocol of #OTHEREFR to generate a dataset, sampling shapes, poses, textures, and illumination randomly.", "We use images from SUN Database #OTHEREFR as background and save ground truth depth maps for evaluation.", "We also test our method on cat faces and synthetic cars. We use two cat datasets #OTHEREFR .", "The first one has 10k cat images with nine keypoint annotations, and the second one is a collection of dog and cat images, containing 1.2k cat images with bounding box annotations.", "We combine the two datasets, crop the images around the cat heads, and split them by 8:1:1 into train, validation and test sets." ]
[ "Metrics.", "Since the scale of 3D reconstruction from projective cameras is inherently ambiguous #OTHEREFR , we discount it in the evaluation.", "Specifically, given the depth map d predicted by our model in the canonical view, we warp it to a depth map d̄ in the actual view using the predicted viewpoint and compare the latter to the ground-truth depth map d* using the scale-invariant depth error (SIDE) #OTHEREFR", "where Δ_uv = log d̄_uv − log d*_uv .", "We compare only valid depth pixels and erode the foreground mask by one pixel to discount rendering artefacts at object boundaries." ]
[ "ShapeNet" ]
method
{ "title": "Unsupervised Learning of Probably Symmetric Deformable 3D Objects from Images in the Wild", "abstract": "We propose a method to learn 3D deformable object categories from raw single-view images, without external supervision. The method is based on an autoencoder that factors each input image into depth, albedo, viewpoint and illumination. In order to disentangle these components without supervision, we use the fact that many object categories have, at least in principle, a symmetric structure. We show that reasoning about illumination allows us to exploit the underlying object symmetry even if the appearance is not symmetric due to shading. Furthermore, we model objects that are probably, but not certainly, symmetric by predicting a symmetry probability map, learned end-to-end with the other components of the model. Our experiments show that this method can recover very accurately the 3D shape of human faces, cat faces and cars from single-view images, without any supervision or a prior shape model. On benchmarks, we demonstrate superior accuracy compared to another method that uses supervision at the level of 2D image correspondences." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1904.01326
1512.03012
Experiment settings
For this dataset, because of the limited number of ShapeNet #REFR 3D chair models (6778 shapes), we render images from 60 randomly sampled views for each chair.
[ "Data We train HoloGAN using a variety of datasets: Basel Face #OTHEREFR , CelebA #OTHEREFR , Cats #OTHEREFR , Chairs #OTHEREFR , Cars #OTHEREFR , and LSUN bedroom #OTHEREFR .", "We train HoloGAN on resolutions of 64×64 pixels for Cats and Chairs, and 128×128 pixels for Basel Face, CelebA, Cars and LSUN bedroom.", "More details on the datasets and network architecture can be found in the supplemental document.", "Note that only the Chairs dataset contains multiple views of the same object; all other datasets only contain unique single views." ]
[ "During training, we ensure that each batch contains completely different types of chairs to prevent the network from using set supervision, i.e., looking at the same chair from different viewpoints in the same batch, to cheat." ]
[ "ShapeNet 3D chair" ]
method
{ "title": "HoloGAN: Unsupervised Learning of 3D Representations From Natural Images", "abstract": "We propose a novel generative adversarial network (GAN) for the task of unsupervised learning of 3D representations from natural images. Most generative models rely on 2D kernels to generate images and make few assumptions about the 3D world. These models therefore tend to create blurry images or artefacts in tasks that require a strong 3D understanding, such as novel-view synthesis. HoloGAN instead learns a 3D representation of the world, and to render this representation in a realistic manner. Unlike other GANs, HoloGAN provides explicit control over the pose of generated objects through rigid-body transformations of the learnt 3D features. Our experiments show that using explicit 3D features enables HoloGAN to disentangle 3D pose and identity, which is further decomposed into shape and appearance, while still being able to generate images with similar or higher visual quality than other generative models. HoloGAN can be trained end-to-end from unlabelled 2D images only. In particular, we do not require pose labels, 3D shapes, or multiple views of the same objects. This shows that HoloGAN is the first generative model that learns 3D representations from natural images in an entirely unsupervised manner." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
2003.03551
1512.03012
With the availability of a large-scale 3D shape database #REFR , shape information can be efficiently encoded in a deep neural network, enabling faithful 3D reconstruction even from a single image.
[ "by the input views.", "Such limitation causes the single-view reconstruction particularly tricky due to the lack of correspondence with other views and large occlusions." ]
[ "Although many 3D representations (such as voxel-based and point cloud representations) have been utilized for 3D reconstruction, they are not efficient at expressing the surface details of a shape and may generate part-missing or broken structures due to their high computational cost and memory storage.", "On the contrary, the triangle mesh is well known for its high efficiency in modelling geometric details, and it has attracted considerable attention in computer vision and computer graphics.", "Recently, mesh-based 3D methods have been explored with deep learning technology #OTHEREFR .", "The triangle mesh can be represented by a graph-based neural network #OTHEREFR .", "Although these methods can reconstruct the surface of the object, the reconstruction results are still limited to some categories of 3D models and miss structural information of the object." ]
[ "large-scale 3D shape", "shape information" ]
background
{ "title": "STD-Net: Structure-preserving and Topology-adaptive Deformation Network for 3D Reconstruction from a Single Image", "abstract": "3D reconstruction from a single-view image is a long-standing problem in computer vision. Various methods based on different shape representations (such as point cloud or volumetric representations) have been proposed. However, 3D shape reconstruction with fine details and complex structures is still challenging and has not yet been solved. Thanks to recent advances in deep shape representations, it becomes promising to learn the structure and detail representation using deep neural networks. In this paper, we propose a novel method called STD-Net to reconstruct 3D models utilizing the mesh representation, which is well suited for characterizing complex structure and geometry details. To reconstruct complex 3D mesh models with fine details, our method consists of (1) an auto-encoder network for recovering the structure of an object with bounding box representation from a single image, (2) a topology-adaptive graph CNN for updating vertex position for meshes of complex topology, and (3) a unified mesh deformation block that deforms the structural boxes into structure-aware meshed models. Experimental results on images from ShapeNet show that our proposed STD-Net has better performance than other state-of-the-art methods on reconstructing 3D objects with complex structures and fine geometric details." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1812.06861
1512.03012
Datasets
MovingObjects3D: For the purpose of systematically evaluating highly varying object motions, we downloaded six categories of 3D models from ShapeNet #REFR .
[ "We systematically train and evaluate our method on four datasets which we now briefly describe." ]
[ "For each object category, we rendered 200 video sequences with 100 frames in each sequence using Blender.", "We use data rendered from the categories 'boat' and 'motorbike' as test set and data from categories 'aeroplane', 'bicycle', 'bus', 'car' as training set.", "From the training set we use the first 95% of the videos for training and the remaining 5% for validation.", "In total, we obtain 75K images for training, 5K images for validation, and 25K for testing.", "We further subsample the sequences using sampling intervals {1, 2, 4} in order to obtain small, medium and large motion subsets." ]
[ "ShapeNet" ]
method
{ "title": "Taking a Deeper Look at the Inverse Compositional Algorithm", "abstract": "In this paper, we provide a modern synthesis of the classic inverse compositional algorithm for dense image alignment. We first discuss the assumptions made by this well-established technique, and subsequently propose to relax these assumptions by incorporating data-driven priors into this model. More specifically, we unroll a robust version of the inverse compositional algorithm and replace multiple components of this algorithm using more expressive models whose parameters we train in an end-to-end fashion from data. Our experiments on several challenging 3D rigid motion estimation tasks demonstrate the advantages of combining optimization with learning-based techniques, outperforming the classic inverse compositional algorithm as well as data-driven image-to-pose regression approaches." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1904.06699
1512.03012
Introduction
By modeling the uncertainty in single-view reconstruction via a partially supervised architecture, our model achieves state-of-the-art 3D reconstruction test error on ShapeNetCore #REFR dataset.
[ "Furthermore, we propose a synthesis pipeline to transfer the single-view conditional model onto the task of multiview shape generation.", "Different from most existing methods which utilize a recurrent unit to ensemble multi-view features, we consider multi-view reconstruction as taking the intersection of the predicted shape space on each singleview image.", "By introducing a simple paired distance metric to constrain the multi-view consistency, we perform online optimization with respect to the multiple input vectors in each individual conditional model.", "Finally, we concatenate the multi-view point cloud results to obtain the final predictions.", "Our training pipeline benefits from pre-rendered depth image and the camera pose without explicit 3D supervision." ]
[ "Detailed ablation studies are performed to show the effectiveness of our proposed pipeline.", "Additional experiments demonstrate that our generative approach has promising generalization ability on real world images." ]
[ "single-view reconstruction" ]
method
{ "title": "Conditional Single-View Shape Generation for Multi-View Stereo Reconstruction", "abstract": "In this paper, we present a new perspective towards image-based shape generation. Most existing deep learning based shape reconstruction methods employ a single-view deterministic model which is sometimes insufficient to determine a single groundtruth shape because the back part is occluded. In this work, we first introduce a conditional generative network to model the uncertainty for single-view reconstruction. Then, we formulate the task of multi-view reconstruction as taking the intersection of the predicted shape spaces on each single image. We design new differentiable guidance including the front constraint, the diversity constraint, and the consistency loss to enable effective single-view conditional generation and multi-view synthesis. Experimental results and ablation studies show that our proposed approach outperforms state-of-the-art methods on 3D reconstruction test error and demonstrates its generalization ability on real world data." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1904.06699
1512.03012
Multi-category Experiments:
We tested our model in multi-category experiments following [4] on 13 popular categories on ShapeNet #REFR dataset.
[]
[ "As shown in Table 3 , our proposed method outperforms two baseline methods 3D-R2N2 #OTHEREFR and PSGN #OTHEREFR by a relatively large margin.", "Qualitative Results: For qualitative analysis, in Figure 7 we visualize the predicted shapes for two state-of-the-art baseline methods: 3D-R2N2 #OTHEREFR and PSGN #OTHEREFR .", "It is shown that our partially supervised conditional generative model can infer reasonable shapes which are dense and accurate.", "More details are generated due to the specific aim on the front parts of the objects." ]
[ "ShapeNet dataset" ]
method
{ "title": "Conditional Single-View Shape Generation for Multi-View Stereo Reconstruction", "abstract": "In this paper, we present a new perspective towards image-based shape generation. Most existing deep learning based shape reconstruction methods employ a single-view deterministic model which is sometimes insufficient to determine a single groundtruth shape because the back part is occluded. In this work, we first introduce a conditional generative network to model the uncertainty for single-view reconstruction. Then, we formulate the task of multi-view reconstruction as taking the intersection of the predicted shape spaces on each single image. We design new differentiable guidance including the front constraint, the diversity constraint, and the consistency loss to enable effective single-view conditional generation and multi-view synthesis. Experimental results and ablation studies show that our proposed approach outperforms state-of-the-art methods on 3D reconstruction test error and demonstrates its generalization ability on real world data." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1904.06699
1512.03012
Ablation Studies
Although the shape in ShapeNet #REFR dataset often has symmetric structure, the conditional generative model outperforms the deterministic counterpart by 0.25 on CD.
[ "Conditional vs.", "Deterministic: To demonstrate the effectiveness of the conditional model, we implemented a deterministic model S = f_d(I).", "For fair comparison, we used an encoder-decoder structure similar to our network and trained the deterministic model for two stages with the front constraint. A single-category experiment was conducted on the deterministic model. Table 4 shows the results." ]
[ "Analysis on different features in the framework: We performed ablation analysis on three different features: twostage training, diversity constraint at multi-view training stage and consistency loss during inference.", "As shown in Table 5 , all features achieve consistent gain on the final performance.", "Front constraint vs.", "Projection loss: Our conditional model can be trained on single-view images with the front constraint and the diversity constraints.", "For comparison, we directly applied the projection loss used on multi-view images training in #OTHEREFR on single-view images, the training did not converge." ]
[ "ShapeNet" ]
method
{ "title": "Conditional Single-View Shape Generation for Multi-View Stereo Reconstruction", "abstract": "In this paper, we present a new perspective towards image-based shape generation. Most existing deep learning based shape reconstruction methods employ a single-view deterministic model which is sometimes insufficient to determine a single groundtruth shape because the back part is occluded. In this work, we first introduce a conditional generative network to model the uncertainty for single-view reconstruction. Then, we formulate the task of multi-view reconstruction as taking the intersection of the predicted shape spaces on each single image. We design new differentiable guidance including the front constraint, the diversity constraint, and the consistency loss to enable effective single-view conditional generation and multi-view synthesis. Experimental results and ablation studies show that our proposed approach outperforms state-of-the-art methods on 3D reconstruction test error and demonstrates its generalization ability on real world data." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1910.14442
1512.03012
B. Interactive Gibson Assets
The annotator is queried to retrieve the most similar CAD model from a list of possible shapes from ShapeNet #REFR .
[ "In areas with low reconstruction precision, the automatic instance segmentation results may contain duplicates as well as missing entries. These were manually corrected by in-house annotators ( Fig. 3.4 ).", "In total, over 4,000 objects proposals resulted from this stage.", "Object Alignment: The goal of this stage is to 1) select the most similar CAD model from a set of possibilities #OTHEREFR , and 2) obtain the scale and the pose to align the CAD model to the reconstructed mesh.", "To obtain the alignments we use a modification of the Scan2CAD #OTHEREFR annotation tool.", "We crowdsourced each object region proposal from the previous stage as HITs (Human Intelligence Tasks) on the Amazon's Mechanical Turk crowdsourcing market #OTHEREFR ." ]
[ "Then, the human has to annotate at least 6 keypoint correspondences between the CAD model and the scan object ( Fig. 3.4) .", "The scale and pose alignment is solved by minimizing the point-to-point distance among correspondences over seven parameters of a transformation matrix: scale (three), position (three), and rotation (one).", "Pitch and roll rotation parameters are predefined since the objects of interest almost always stand up-straight on the floor.", "Object Replacement and Re-texturing: Based on the alignment data, we process the corresponding region of the original mesh.", "We eliminate the vertices and triangular faces close to or inside the aligned CAD model. The resulting mesh contains discontinuities and holes." ]
[ "ShapeNet" ]
method
{ "title": "Interactive Gibson: A Benchmark for Interactive Navigation in Cluttered Environments", "abstract": "We present Interactive Gibson, the first comprehensive benchmark for training and evaluating Interactive Navigation: robot navigation strategies where physical interaction with objects is allowed and even encouraged to accomplish a task. For example, the robot can move objects if needed in order to clear a path leading to the goal location. Our benchmark comprises two novel elements: 1) a new experimental setup, the Interactive Gibson Environment, which simulates high fidelity visuals of indoor scenes, and high fidelity physical dynamics of the robot and common objects found in these scenes; 2) a set of Interactive Navigation metrics which allows one to study the interplay between navigation and physical interaction. We present and evaluate multiple learning-based baselines in Interactive Gibson, and provide insights into regimes of navigation with different trade-offs between navigation path efficiency and disturbance of surrounding objects. We make our benchmark publicly available and encourage researchers from all disciplines in robotics (e.g. planning, learning, control) to propose, evaluate, and compare their Interactive Navigation solutions in Interactive Gibson." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1705.10904
1512.03012
Ablation Study on ShapeNet [5]
In this section, we perform ablation study and compare McRecon with the baseline methods on the ShapeNet #REFR dataset.
[]
[ "The synthetic dataset allows us to control external factors such as the number of viewpoints, quality of mask and is ideal for ablation study.", "Specifically, we use the renderings from #OTHEREFR since it contains a large number of images from various viewpoints and the camera model has more degree of freedom.", "In order to train the network on multiple categories while maintaining a semantically meaningful manifold across different classes, we divide the categories into furniture (sofa, chair, bench, table) and vehicles (car, airplane) classes and trained networks separately.", "We use the alpha channel of the renderings image to generate 2D mask supervisions (finite depth to indicate foreground silhouette).", "For the unlabeled 3D shapes, we simply voxelized the 3D shapes." ]
[ "ShapeNet dataset" ]
method
{ "title": "Weakly Supervised 3D Reconstruction with Adversarial Constraint", "abstract": "Supervised 3D reconstruction has witnessed a significant progress through the use of deep neural networks. However, this increase in performance requires large scale annotations of 2D/3D data. In this paper, we explore inexpensive 2D supervision as an alternative for expensive 3D CAD annotation. Specifically, we use foreground masks as weak supervision through a raytrace pooling layer that enables perspective projection and backpropagation. Additionally, since the 3D reconstruction from masks is an ill posed problem, we propose to constrain the 3D reconstruction to the manifold of unlabeled realistic 3D shapes that match mask observations. We demonstrate that learning a log-barrier solution to this constrained optimization problem resembles the GAN objective, enabling the use of existing tools for training GANs. We evaluate and analyze the manifold constrained reconstruction on various datasets for single and multi-view reconstruction of both synthetic and real images." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1608.05137
1512.03012
CAD Model Alignment
Specifically, we consider all 3D models in the ShapeNet repository #REFR associated with our object categories of interest, i.e., chair, table, sofa, bookshelf, bed, night table, chest yielding 9193 models in total.
[ "The object detection results from Section 3.3 identify the presence of a \"chair\" (e.g.,) in a certain region of the image with high probability.", "Now we wish to determine what kind of chair it is, its shape, and approximate 3D pose.", "Inspired by #OTHEREFR , we solve this retrieval problem by searching for 3D models that are most similar in appearance to the detected objects in the image." ]
[ "Each 3D model is rendered to 32 quantized viewpoints, consisting of 16 uniformly sampled azimuth angles and two elevation angles (15 and 30 degrees above horizontal).", "Robust comparison of photos with CAD models renderings is not straightforward; simple norms like L2 do not work well in practice, due to differences in shape, appearance, shading, and the presence of occluders.", "We achieve good results, once again, by using convolutional nets; we compute deep features for each of the rendered images and the detected image bounding boxes and use cosine similarity as our distance metric.", "More specifically, we use the convolution filter response in the ROI-pooling layer of the fine-tuned Faster-RCNN network #OTHEREFR explained in Section 3.3.", "A benefit of using the ROI-pooling layer is that the length of its feature vector does not depend on the size and the aspect ratio of the bounding box, thus avoiding the need for non-uniform rescaling (a source of artifacts in general)." ]
[ "3D models", "ShapeNet repository" ]
method
{ "title": "IM2CAD", "abstract": "We introduce IM2CAD, a new system that takes a single photograph of a real scene and automatically reconstructs a 3D CAD model that is similar to the real scene. Given a single photo of a room and a large database of furniture CAD models, our goal is to reconstruct a scene that is as similar as possible to the scene depicted in the photograph, and composed of objects drawn from the database. We present a completely automatic system to address this IM2CAD problem that produces high quality results on challenging imagery from real estate web sites. Our approach iteratively optimizes the placement and scale of objects in the room to best match scene renderings to the input photo, using image comparison metrics trained with deep convolutional neural nets. By operating jointly on the full scene at once, we account for inter-object occlusions." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
2003.02920
1512.03012
Segmentation
The segmentation methods based on point clouds obtained good results and maintained the same level as their results on ShapeNet #REFR .
[ "Methods based on points." ]
[ "SO-Net showed excellent performance on IOU and DSC of aneurysms, while PointConv had the best result on parent blood vessels.", "PN++ had the third-best performance and had the fastest training speed (5s per epoch, and converged at approximately an epoch of 115 on GTX 1080 Ti).", "Meanwhile, PointCNN had the slowest training speed (24s per epoch, and converged at approximately an epoch of 500 on GTX 1080 Ti) and moderate segmentation accuracy.", "SpiderCNN did not have the same performance as it had on the ShapeNet, but CI95 was unusually high.", "Besides the methods mentioned in Section 4.2, we also tried 3D CapsuleNet #OTHEREFR , but it classified every point into the healthy blood vessel, which shows its limited generalization crossing datasets." ]
[ "ShapeNet" ]
result
{ "title": "IntrA: 3D Intracranial Aneurysm Dataset for Deep Learning", "abstract": ": 3D models of intracranial aneurysm segments with segmentation annotation in our dataset. Hot pink shows the healthy blood vessel part, and aqua shows the aneurysm part for each model. Medicine is an important application area for deep learning models. Research in this field is a combination of medical expertise and data science knowledge. In this paper, instead of 2D medical images, we introduce an open-access 3D intracranial aneurysm dataset, IntrA, that makes the application of points-based and mesh-based classification and segmentation models available. Our dataset can be used to diagnose intracranial aneurysms and to extract the neck for a clipping operation in medicine and other areas of deep learning, such as normal estimation and surface reconstruction. We provide a large-scale benchmark of classification and part segmentation by testing state-of-the-art networks. We also discuss the performance of each method and demonstrate the challenges of our dataset. The published dataset can be accessed here: https://github.com/intra3d2019/IntrA." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1902.10840
1512.03012
NRSf M on CMU Motion Capture
One can see that our method gets far more precise reconstructions even when adding up #REFR Paladini et al.
[ "We randomly select a frame for each subject and render the reconstructed human skeleton in Figure 5 (a) to 5 (j).", "To give a sense of the quality of reconstructions when our method fails, we go through all ten subjects in a total of 140,606 frames and select the frames with the largest errors as shown in Figure 5 (k) and 5 (l).", "Even in the worst cases, our method grasps a rough 3D geometry of the human body instead of completely diverging.", "Noise performance: To analyze the robustness of our method, we re-train the neural network for Subject 70 using projected points with Gaussian noise perturbation. The results are summarized in Figure 3.", "The noise ratio is defined as ||noise||_F / ||W||_F." ]
[ "#OTHEREFR fails on all sequences and is therefore removed from the table.", "Works #OTHEREFR did not release code.", "Works #OTHEREFR: mean point distance (cm) with up to 20% noise added to our image coordinates, compared to baselines with no noise perturbation.", "This experiment clearly demonstrates the robustness of our model and its high accuracy against state-of-the-art works.", "Missing data: Landmarks are not always visible from the camera owing to occlusion by other objects or by the object itself." ]
[ "precise reconstructions" ]
method
{ "title": "Deep Interpretable Non-Rigid Structure from Motion", "abstract": "All current non-rigid structure from motion (NRSfM) algorithms are limited with respect to: (i) the number of images, and (ii) the type of shape variability they can handle. This has hampered the practical utility of NRSfM for many applications within vision. In this paper we propose a novel deep neural network to recover camera poses and 3D points solely from an ensemble of 2D image coordinates. The proposed neural network is mathematically interpretable as a multi-layer block sparse dictionary learning problem, and can handle problems of unprecedented scale and shape complexity. Extensive experiments demonstrate the impressive performance of our approach where we exhibit superior precision and robustness against all available state-of-the-art works. The considerable model capacity of our approach affords remarkable generalization to unseen data. We propose a quality measure (based on the network weights) which circumvents the need for 3D ground-truth to ascertain the confidence we have in the reconstruction. Once the network's weights are estimated (for a non-rigid object) we show how our approach can effectively recover 3D shape from a single image -outperforming comparable methods that rely on direct 3D supervision." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1801.03399
1512.03012
2D and 3D Keypoint Localization
For data synthesis, we sample CAD models of 472 cars, 100 sofas, 100 chairs and 62 beds from ShapeNet #REFR .
[ "In this Section, we demonstrate the performance of the deep supervision network (Fig. 4) for predicting the locations of object keypoints in 2D images and 3D space.", "Dataset." ]
[ "Each car model is annotated with 36 keypoints #OTHEREFR and each furniture model (chair, sofa or bed) with 14 keypoints #OTHEREFR .", "#OTHEREFR We synthesize 600k car images including occluded instances and 300k images of fully visible furniture (chair+sofa+bed).", "We pick rendered images of 5 CAD models from each object category as the validation set.", "We introduce KITTI-3D with annotations of 3D keypoints and occlusion type on 2040 car images from #OTHEREFR .", "We label car images with one of four occlusion types: no occlusion (or fully visible cars), truncation, multi-car occlusion (target car is occluded by other cars) and occlusion caused by other objects." ]
[ "ShapeNet" ]
method
{ "title": "Deep Supervision with Intermediate Concepts", "abstract": "Recent data-driven approaches to scene interpretation predominantly pose inference as an end-to-end black-box mapping, commonly performed by a Convolutional Neural Network (CNN). However, decades of work on perceptual organization in both human and machine vision suggest that there are often intermediate representations that are intrinsic to an inference task, and which provide essential structure to improve generalization. In this work, we explore an approach for injecting prior domain structure into neural network training by supervising hidden layers of a CNN with intermediate concepts that normally are not observed in practice. We formulate a probabilistic framework which formalizes these notions and predicts improved generalization via this deep supervision method. One advantage of this approach is that we are able to train only from synthetic CAD renderings of cluttered scenes, where concept values can be extracted, but apply the results to real images. Our implementation achieves the state-of-the-art performance of 2D/3D keypoint localization and image classification on real image benchmarks including KITTI, PASCAL VOC, PASCAL3D+, IKEA, and CIFAR100. We provide additional evidence that our approach outperforms alternative forms of supervision, such as multi-task networks." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1811.12016
1512.03012
Dataset
We consider the ShapeNet dataset #REFR which contains a rich collection of 3D CAD models, and is widely used in recent research works related to 2D/3D data.
[]
[ "Three categories, airplane, car, and chair, are selected for our experiments. For fair comparisons, we consider two different data settings.", "For supervised learning of our model and to perform comparisons, we follow the works of 3D-R2N2 #OTHEREFR , Octree Generating Network (OGN) #OTHEREFR , Point Set Generation Network (PSGN) #OTHEREFR , voxel tube network and Matryoshka network #OTHEREFR , which scale the ground truth voxels to fit into 32 × 32 × 32 grids.", "This makes ground truth voxels larger than those considered in MVC #OTHEREFR and DRC #OTHEREFR .", "We use the same rendered images, ground truth voxels, and data split as used in these works.", "For semi-supervised learning, we generate 24 rendered images of size 64 × 64 × 3 pixels and the corresponding ground truth 2D masks, using the same camera pose information and data split as used in Perspective Transformer Nets (PTN) #OTHEREFR ." ]
[ "ShapeNet" ]
method
{ "title": "3D Shape Reconstruction from a Single 2D Image via 2D-3D Self-Consistency", "abstract": "Aiming at inferring 3D shapes from 2D images, 3D shape reconstruction has drawn huge attention from researchers in computer vision and deep learning communities. However, it is not practical to assume that 2D input images and their associated ground truth 3D shapes are always available during training. In this paper, we propose a framework for semi-supervised 3D reconstruction. This is realized by our introduced 2D-3D self-consistency, which aligns the predicted 3D models and the projected 2D foreground segmentation masks. Moreover, our model not only enables recovering 3D shapes with the corresponding 2D masks, camera pose information can be jointly disentangled and predicted, even such supervision is never available during training. In the experiments, we qualitatively and quantitatively demonstrate the effectiveness of our model, which performs favorably against state-of-the-art approaches in either supervised or semi-supervised settings." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1811.01068
1512.03012
Overview
Training: At training time, our method takes as input a class-specific collection of 3D shapes (we used ShapeNet #REFR ) for which part label annotations are available.
[ "In this section we provide a high level overview of our 3D Pick & Mix retrieval system.", "Our system requires a training stage in which: (i) manifolds of 3D shapes of object parts are built (see Fig.", "4 ) and (ii) a CNN is trained to take as input an image and regress the coordinates of each of its constituent parts on the shape manifolds (illustrated in Fig. 2 ).", "At query time the system receives an image or set of images as input and obtains the corresponding coordinates on the part manifolds.", "If the user chooses object parts from different images a cross-manifold optimization is carried out to retrieve a single shape that blends together properties from different images." ]
[ "The first step at training time is to learn a separate shape manifold for each object part (see Fig. 4).", "Each shape is represented with a Light Field descriptor #OTHEREFR and characterized with a pyramid of HoG features.", "The manifolds are then built using non-linear multi-dimensional scaling (MDS).", "Fig. 2: Summary of the architecture of ManifoldNet, our new deep network that takes an image as input and learns to regress the coordinates of each object part in the different part manifolds.", "The architecture has 3 sections: the first set of layers performs semantic segmentation of the image pixels into different semantic parts (such as \"backrest\", \"seat\", \"armrests\" or \"legs\" in the case of chairs)." ]
[ "3D shapes" ]
method
{ "title": "3D Pick&Mix: Object Part Blending in Joint Shape and Image Manifolds", "abstract": "Abstract. We present 3D Pick & Mix, a new 3D shape retrieval system that provides users with a new level of freedom to explore 3D shape and Internet image collections by introducing the ability to reason about objects at the level of their constituent parts. While classic retrieval systems can only formulate simple searches such as \"find the 3D model that is most similar to the input image\" our new approach can formulate advanced and semantically meaningful search queries such as: \"find me the 3D model that best combines the design of the legs of the chair in image 1 but with no armrests, like the chair in image 2\". Many applications could benefit from such rich queries, users could browse through catalogues of furniture and pick and mix parts, combining for example the legs of a chair from one shop and the armrests from another shop." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1809.10468
1512.03012
IV. EXPERIMENTAL RESULTS
In the next section we evaluate the repeatability and accuracy of the corner detector on 3D models of washers from the ShapeNet dataset #REFR .
[ "In the first section of our results, we evaluate the proposed algorithm for edge detection against state-of-the-art edge detection algorithms for organized and unorganized point clouds.", "We demonstrate our results on the RGB-D semantic segmentation dataset #OTHEREFR for comparison." ]
[ "Finally, we show how the algorithms proposed above can be used to automate welding of a panel workpiece.", "All experiments described in the following sections are run on an Intel i7-4600M CPU with 2.9 GHz and 8GB RAM.", "No multithreading or any other parallelism such as OpenMP or GPU was used in our implementation." ]
[ "ShapeNet" ]
method
{ "title": "Edge and Corner Detection for Unorganized 3D Point Clouds with Application to Robotic Welding", "abstract": "In this paper, we propose novel edge and corner detection algorithms for unorganized point clouds. Our edge detection method evaluates symmetry in a local neighborhood and uses an adaptive density based threshold to differentiate 3D edge points. We extend this algorithm to propose a novel corner detector that clusters curvature vectors and uses their geometrical statistics to classify a point as corner. We perform rigorous evaluation of the algorithms on RGB-D semantic segmentation and 3D washer models from the ShapeNet dataset and report higher precision and recall scores. Finally, we also demonstrate how our edge and corner detectors can be used as a novel approach towards automatic weld seam detection for robotic welding. We propose to generate weld seams directly from a point cloud as opposed to using 3D models for offline planning of welding paths. For this application, we show a comparison between Harris 3D and our proposed approach on a panel workpiece." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1612.00101
1512.03012
Method Overview
At test time, we use the ShapeNet database #REFR as a powerful geometric prior, where we retrieve high-resolution geometry that respects the high-level structure of the previously obtained predictions.
[ "Specifically, we input the probability class vector of a 3D-CNN classification output into the latent space of the 3D-EPN.", "Another important challenge on 3D shape completion is the high dimensionality; one of the insights here is that we use a (mostly) continuous distance field representation over an occupancy grid; this allows us to formulate a well-suited loss function for this specific task.", "Since regressing high-dimensional output with deep networks is challenging for high resolutions, particularly in 3D space, we expect the 3D-EPN to operate on a relatively low voxel resolution (e.g., 32^3 voxel volumes).", "Although it lacks fine geometric detail, it facilitates the prediction of (missing) global structures of partially-scanned objects (e.g., chair legs, airplane wings, etc.)." ]
[ "We establish correlations between the low-resolution 3D-EPN output and the database geometry by learning a geometry lookup with volumetric features.", "Here, we utilize the feature learning of volumetric convolutional networks with a modified version of Qi et al.", "#OTHEREFR whose learned features are the byproduct of a supervised classification network.", "For a given 3D-EPN prediction, we then run the 3D feature extraction and look up the three nearest shape neighbors in the database which are most similar regarding the underlying geometric structure.", "As a final step of our completion pipeline, we correlate the coarse geometric predictions from the 3D-EPN output with the retrieved shape models." ]
[ "ShapeNet database" ]
method
{ "title": "Shape Completion Using 3D-Encoder-Predictor CNNs and Shape Synthesis", "abstract": "Our method completes a partial 3D scan using a 3D Encoder-Predictor network that leverages semantic features from a 3D classification network. The predictions are correlated with a shape database, which we use in a multi-resolution 3D shape synthesis step. We obtain completed high-resolution meshes that are inferred from partial, low-resolution input scans. We" }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1807.02740
1512.03012
Introduction
In our experiments, models from seven categories in ShapeNetCore #REFR are utilized in the training and testing process.
[ "In their recent work, they mention shape completion as one of the potential applications for their network.", "Nevertheless, there is no further exploration in upsampling conditions and the categories of objects.", "In this work, we build on and extend Achlioptas et al.'s work to deploy an upsampling method designed for different object categories and different upsampling amplification factors (AF).", "Furthermore, we study the attributes of the input clouds that lead to the most accurate upsampled point clouds.", "Finally, we expand the encoded input point information to incorporate the vertex normals obtained from the original mesh files and evaluate its influence on the reconstruction performance." ]
[ "The results reveal that data-driven upsampling of sparse point clouds can indeed benefit significantly from categorical class information and moreover, the richness in the data (as obtained through multi-class training) results in high-quality upsampled models for a variety of object categories.", "The key contributions of our work are as follows:", "• We propose a deep learning algorithm for learning point cloud upsampling using entire object models (rather than patches) as input;", "• We demonstrate the effect of input point distribution on upsampling quality;", "• We demonstrate the performance of our approach with diverse amplification factors and the flexibility of our algorithm with single and multiple category training scenarios." ]
[ "models" ]
method
{ "title": "Data-driven Upsampling of Point Clouds", "abstract": "High quality upsampling of sparse 3D point clouds is critically useful for a wide range of geometric operations such as reconstruction, rendering, meshing, and analysis. In this paper, we propose a data-driven algorithm that enables an upsampling of 3D point clouds without the need for hard-coded rules. Our approach uses a deep network with Chamfer distance as the loss function, capable of learning the latent features in point clouds belonging to different object categories. We evaluate our algorithm across different amplification factors, with upsampling learned and performed on objects belonging to the same category as well as different categories. We also explore the desirable characteristics of input point clouds as a function of the distribution of the point samples. Finally, we demonstrate the performance of our algorithm in single-category training versus multi-category training scenarios. The final proposed model is compared against a baseline, optimization-based upsampling method. Results indicate that our algorithm is capable of generating more accurate upsamplings with less Chamfer loss." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1912.03663
1512.03012
Results
The reconstruction task is evaluated with point sets of 2048 points, sampled from ShapeNet Core55 database #REFR .
[ "In this section, we present the results of our sampling approach for various applications: point cloud classification, retrieval, registration, and reconstruction.", "The performance with point clouds sampled by our method is contrasted with the commonly used FPS and the learned sampling method, S-NET, proposed by Dovrat et al. #OTHEREFR .", "Classification, retrieval, and registration are benchmarked on ModelNet40 #OTHEREFR .", "We use point clouds of 1024 points that were uniformly sampled from the dataset models.", "The official train-test split #OTHEREFR is used for training and evaluation." ]
[ "We use four shape classes with the largest number of examples: table, car, chair, and airplane. Each class is split to 85%/5%/10% for train/validation/test sets.", "Our sampling network SampleNet is based on PointNet architecture.", "It operates directly on point clouds and is invariant to permutations of the points.", "SampleNet applies MLPs to the input points, followed by a global max pooling.", "Then, a simplified point cloud is computed from the pooled feature vector and projected on the input point cloud. The complete experimental settings are detailed in the supplemental." ]
[ "ShapeNet Core55 database" ]
method
{ "title": "SampleNet: Differentiable Point Cloud Sampling", "abstract": "There is a growing number of tasks that work directly on point clouds. As the size of the point cloud grows, so do the computational demands of these tasks. A possible solution is to sample the point cloud first. Classic sampling approaches, such as farthest point sampling (FPS), do not consider the downstream task. A recent work showed that learning a task-specific sampling can improve results significantly. However, the proposed technique did not deal with the non-differentiability of the sampling operation and offered a workaround instead. We introduce a novel differentiable relaxation for point cloud sampling. Our approach employs a soft projection operation that approximates sampled points as a mixture of points in the primary input cloud. The approximation is controlled by a temperature parameter and converges to regular sampling when the temperature goes to zero. During training, we use a projection loss that encourages the temperature to drop, thereby driving every sample point to be close to one of the input points. This approximation scheme leads to consistently good results on various applications such as classification, retrieval, and geometric reconstruction. We also show that the proposed sampling network can be used as a front to a point cloud registration network. This is a challenging task since sampling must be consistent across two different point clouds. In all cases, our method works better than existing non-learned and learned sampling alternatives. Our code is publicly available." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1709.00849
1512.03012
Comparison of CRF and Grab-Cut segmentation from bounding box labels
We choose the ShapeNet #REFR database since it provides a large variety of models in the 20 categories of the PASCAL segmentation challenge.
[ "We use Cycles Render Engine available with Blender since it supports ray-tracing to render synthetic images.", "Since all the required information for annotation is available, we use the PASCAL Segmentation label format with labelled pixels for 20 classes.", "Real world images have a lot of information embedded about the environment, illumination, surface materials, shapes etc.", "Since the trained model, at test time, must be able to generalize to the real world images, we take into consideration the following aspects during generation of each scenario:", "• Number of objects • Shape, Texture, and Materials of the objects • Background of the object • Position, Orientation of camera • Illumination via light sources In order to simulate the scenario, we need 3D models, their texture information and metadata. Thousands of 3D CAD models are available online." ]
[ "Figure 3a shows few of the models used for rendering images.", "The variety helps randomize the aspect of shape, texture and materials of the objects.", "We use images from SUN database #OTHEREFR as background images.", "From the large categories of images, we select few categories relevant as background to the classes of objects to be recognized.", "For generating training set with rendered images, the 3D scenes need to be distinct." ]
[ "PASCAL segmentation challenge", "ShapeNet database" ]
method
{ "title": "Dataset Augmentation with Synthetic Images Improves Semantic Segmentation", "abstract": "Although Deep Convolutional Neural Networks trained with strong pixel-level annotations have significantly pushed the performance in semantic segmentation, annotation efforts required for the creation of training data remains a roadblock for further improvements. We show that augmentation of the weakly annotated training dataset with synthetic images minimizes both the annotation efforts and also the cost of capturing images with sufficient variety. Evaluation on the PASCAL 2012 validation dataset shows an increase in mean IOU from 52.80% to 55.47% by adding just 100 synthetic images per object class. Our approach is thus a promising solution to the problems of annotation and dataset collection." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1904.04094
1512.03012
INTRODUCTION
The most obvious attempt for a 3D ImageNet comes in the form of ShapeNet #REFR .
[ "It is immediately clear that the number of training samples becomes an issue.", "However, the success of 2D deep learning is often largely accredited to the release of large open-access labelled datasets such as ImageNet #OTHEREFR , which contains > 14 × 10^6 images.", "It is now largely standard procedure to pre-train deep CNNs on the ImageNet benchmark dataset for initial model weight tuning.", "Achieving a similar dataset for 3D point cloud processing would be a substantially more challenging feat, and as such open training datasets on the scale of ImageNet do not exist for 3D point clouds. Regardless,", "there has been a range of efforts to address this issue." ]
[ "ShapeNet contains over 3 million models with 220,000 classified into 3,135 classes arranged using WordNet hypernym-hyponym relationships.", "Similarly, ScanNet #OTHEREFR contains over 1500 indoor scene scans, with each scan containing 400-600k points.", "With respect to outdoor point cloud processing there have also been significant efforts to address this problem, most notably: iQmulus/TerraMobilita #OTHEREFR , TUM City Campus #OTHEREFR and the current largest, Semantic3D #OTHEREFR which contains 4 billion points.", "Although these datasets offer large point counts in absolute terms, they contain very large class imbalances.", "This is due to the natural class imbalances present in both urban and sub-urban environments." ]
[ "3D ImageNet", "ShapeNet" ]
background
{ "title": "Weighted Point Cloud Augmentation for Neural Network Training Data Class-Imbalance", "abstract": "Recent developments in the field of deep learning for 3D data have demonstrated promising potential for end-to-end learning directly from point clouds. However, many real-world point clouds contain a large class imbalance due to the natural class imbalance observed in nature. For example, a 3D scan of an urban environment will consist mostly of road and façade, whereas other objects such as poles will be under-represented. In this paper we address this issue by employing a weighted augmentation to increase classes that contain fewer points. By mitigating the class imbalance present in the data we demonstrate that a standard PointNet++ deep neural network can achieve higher performance at inference on validation data. This was observed as an increase of F1 score of 19% and 25% on two test benchmark datasets; ScanNet and Semantic3D respectively where no class imbalance pre-processing had been performed. Our networks performed better on both highly-represented and under-represented classes, which indicates that the network is learning more robust and meaningful features when the loss function is not overly exposed to only a few classes." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1807.06010
1512.03012
Experiments
To perform the experiments presented in this section, we do not reuse the models in the training data set but download additional 3D models from ShapeNet #REFR .
[ "Dataset overview.", "Since most models in #OTHEREFR are manifolds without sharp edges, we collected 24 CAD models and 12 everyday objects as our training data set, and manually annotate sharp edges on them; see supplemental material.", "Then, we randomly crop 2,400 patches from the models (see Figure 2 ) to train our network; see the procedure in Sec. 2.1." ]
[ "For each testing model, we also use the procedure in Sec.", "2.1 to generate the virtual scanned point clouds as input." ]
[ "ShapeNet" ]
method
{ "title": "EC-Net: an Edge-aware Point set Consolidation Network", "abstract": "Point clouds obtained from 3D scans are typically sparse, irregular, and noisy, and required to be consolidated. In this paper, we present the first deep learning based edge-aware technique to facilitate the consolidation of point clouds. We design our network to process points grouped in local patches, and train it to learn and help consolidate points, deliberately for edges. To achieve this, we formulate a regression component to simultaneously recover 3D point coordinates and point-to-edge distances from upsampled features, and an edge-aware joint loss function to directly minimize distances from output points to 3D meshes and to edges. Compared with previous neural network based works, our consolidation is edge-aware. During the synthesis, our network can attend to the detected sharp edges and enable more accurate 3D reconstructions. Also, we trained our network on virtual scanned point clouds, demonstrated the performance of our method on both synthetic and real point clouds, presented various surface reconstruction results, and showed how our method outperforms the state-of-the-arts." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1812.03441
1512.03012
Overview of user studies
When a piece of furniture was mismatched, it was replaced with another similar object of the same type taken from the ShapeNet database #REFR .
[ "• Furniture quality reduced.", "For both the couch and the stuffed chair, models were generated that contained 10%, 25%, 50%, and 75% of the original number of vertices.", "If \"decimated furniture\" was one of the errors in a given room, the sofa and the stuffed chair always varied together.", "• Furniture mismatched.", "One of the following was replaced at a time: coffee table, side tables, stuffed chair, couch, lamps, and a wooden chair." ]
[ "• Furniture repositioned.", "Furniture objects were either globally raised by 10cm, globally lowered by 10cm, or globally moved outward (away from room center) by 10%.", "The corresponding \"moved inward by 10% condition\" was not tested due to experimenter error.", "• Furniture rescaled.", "One of the following scaling errors occurred: All furniture was 25% larger, all furniture was 10% larger, all furniture was 10% smaller, the sofa was 25% larger, the sofa was 10% smaller, the coffee table was 25% larger." ]
[ "furniture", "ShapeNet database" ]
method
{ "title": "Virtual replicas of real places: Experimental investigations", "abstract": "The emergence of social virtual reality (VR) experiences, such as Facebook Spaces, Oculus Rooms, and Oculus Venues, will generate increased interest from users who want to share real places (both personal and public) with their fellow users in VR. At the same time, advances in scanning and reconstruction technology are making the realistic capture of real places more and more feasible. These complementary pressures mean that the representation of real places in virtual reality will be an increasingly common use case for VR. Despite this, there has been very little research into how users perceive such replicated spaces. This paper reports the results from a series of three user studies investigating this topic. Taken together, these results show that getting the scale of the space correct is the most important factor for generating a \"feeling of reality\", that it is important to avoid incoherent behaviors (such as floating objects), and that lighting makes little difference to perceptual similarity." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1804.10975
1512.03012
Prior Work
With the advent of large-scale shape collections #REFR , data-driven methods, and especially CNNs, have become the method of choice for predicting 3D shapes.
[]
[ "Inspired by the success of CNNs for dense 2D prediction tasks, Wu et al. #OTHEREFR adapted CNNs to volumetric outputs. Yan et al. #OTHEREFR and Zhu et al.", "#OTHEREFR showed that optimizing projections of the predicted shape benefits the reconstruction. Choy et al.", "#OTHEREFR developed a joint approach for shape reconstruction from one or multiple views. Girdhar et al.", "#OTHEREFR combined an autoencoder and a convolutional network to learn an embedding of images and 3D shapes. Wu et al.", "#OTHEREFR trained a generative adversarial network to synthesize 3D shapes. Tulsiani et al. #OTHEREFR learned a shape decoder from 2D supervision. Wu et al." ]
[ "large-scale shape collections" ]
method
{ "title": "Matryoshka Networks: Predicting 3D Geometry via Nested Shape Layers", "abstract": "(Only the figure caption was extracted:) Exemplary shape reconstructions from a single image by our Matryoshka network based on nested shape layers." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1904.08921
1512.03012
Introduction
With surface primitives in place of curves, we perform volumetric abstraction on ShapeNet #REFR , inputting an image or a distance field and outputting parametric primitives that approximate the model.
[ "We apply our new framework in the 2D context to a diverse dataset of fonts.", "We train a network that takes in a raster image of a glyph and outputs a representation as a collection of Bézier curves.", "This maps glyphs onto a common set of parameters that can be traversed intuitively.", "We use this embedding for font exploration and retrieval, correspondence, and unsupervised interpolation.", "We also show that our approach works in 3D." ]
[ "This output can be rendered at any resolution or converted to a mesh; it also can be used for segmentation. Contributions.", "We present a technique for predicting parametric shapes from 2D and 3D raster data, including:", "• a general distance field loss function allowing definition of several losses based on a common formulation; • application to 2D font glyph vectorization, with application to correspondence, exploration, retrieval, and repair; • application to 3D surface abstraction, with results for different primitives and constructive solid geometry (CSG) as well as application to segmentation." ]
[ "ShapeNet" ]
method
{ "title": "Deep Parametric Shape Predictions using Distance Fields", "abstract": "Many tasks in graphics and vision demand machinery for converting shapes into representations with sparse sets of parameters; these representations facilitate rendering, editing, and storage. When the source data is noisy or ambiguous, however, artists and engineers often manually construct such representations, a tedious and potentially time-consuming process. While advances in deep learning have been successfully applied to noisy geometric data, the task of generating parametric shapes has so far been difficult for these methods. Hence, we propose a new framework for predicting parametric shape primitives using deep learning. We use distance fields to transition between shape parameters like control points and input data on a raster grid. We demonstrate efficacy on 2D and 3D tasks, including font vectorization and surface abstraction." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
2003.10983
1512.03012
Object Reconstruction
We quantitatively evaluate the surface reconstruction accuracy of DeepLS and other shape learning methods on various classes from the ShapeNet #REFR dataset.
[]
[ "Quantitative results for the chamfer distance error are shown in Table 1 .", "As can be seen DeepLS improves over related approaches by approximately one order of magnitude.", "It should be noted that this is not a comparison between equal methods since the other methods infer a global, object-level representation that comes with other advantages. Also, the parameter distribution varies significantly (c.f. Tab. 1).", "Nonetheless, it proves that local shapes lead to superior reconstruction quality and that implicit functions modeled by a deep neural network are capable of representing fine details.", "Qualitatively, DeepLS encodes and reconstructs much finer surface details as can be seen in Fig. 4 ." ]
[ "ShapeNet dataset" ]
method
{ "title": "Deep Local Shapes: Learning Local SDF Priors for Detailed 3D Reconstruction", "abstract": "(Figure caption: Reconstruction performed by our Deep Local Shapes (DeepLS) of the Burghers of Calais scene [56]. DeepLS represents surface geometry as a sparse set of local latent codes in a voxel grid, as shown on the right. Each code compresses a local volumetric SDF function, which is reconstructed by an implicit neural network decoder.) Efficiently reconstructing complex and intricate surfaces at scale is a long-standing goal in machine perception. To address this problem we introduce Deep Local Shapes (DeepLS), a deep shape representation that enables encoding and reconstruction of high-quality 3D shapes without prohibitive memory requirements. DeepLS replaces the dense volumetric signed distance function (SDF) representation used in traditional surface reconstruction systems with a set of locally learned continuous SDFs defined by a neural network, inspired by recent work such as DeepSDF [38]. Unlike DeepSDF, which represents an object-level SDF with a neural network and a single latent code, we store a grid of independent latent codes, each responsible for storing information about surfaces in a small local neighborhood. This decomposition of scenes into local shapes simplifies the prior distribution that the network must learn, and also enables efficient inference. We demonstrate the effectiveness and generalization power of DeepLS by showing object shape encoding and reconstructions of full scenes, where DeepLS delivers high compression, accuracy, and local shape completion. Work performed during an internship at Facebook Reality Labs." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1911.06971
1512.03012
Auto-encoding 2D shapes
The order of the three shapes is sorted so that the diamond is always on the left and the hollow diamond is always on the right -this is to mimic the structure of shape datasets such as ShapeNet #REFR .
[ "To illustrate how our network works, we created a synthetic 2D dataset.", "We place a diamond, a cross, and a hollow diamond with varying sizes over 64 × 64 images; see Figure 4(a). (Interleaved caption residue: Figure 6, segmentation and correspondence, shows semantics implied from autoencoding by BSP-Net; colors are the result of a manual grouping of learned convexes, and once a convex is colored in one shape, the color is propagated to the other shapes via the learnt convex id. Table 2: segmentation comparison in per-label IoU.)" ]
[ "After training Stage 1, our network has already achieved a good approximate S^+ reconstruction; however, by inspecting S^*, the output of our inference, we can see there are several imperfections.", "After the fine-tuning in Stage 2, our network achieves near perfect reconstructions.", "Finally, the use of overlap losses significantly improves the compactness of representation, reducing the number of convexes per part; see Figure 4(d).", "Figure 5 visualizes the planes used to construct individual convexes: we visualize plane i in convex j such that T_{ij} = 1 and P_{i1}^2 + P_{i2}^2 + P_{i3}^2 > ε for a small threshold ε (to ignore planes with near-zero gradients).", "Note how BSP-Net creates a natural semantic correspondence across inferred convexes." ]
[ "ShapeNet" ]
method
{ "title": "BSP-Net: Generating Compact Meshes via Binary Space Partitioning", "abstract": "Polygonal meshes are ubiquitous in the digital 3D domain, yet they have only played a minor role in the deep learning revolution. Leading methods for learning generative models of shapes rely on implicit functions, and generate meshes only after expensive iso-surfacing routines. To overcome these challenges, we are inspired by a classical spatial data structure from computer graphics, Binary Space Partitioning (BSP), to facilitate 3D learning. The core ingredient of BSP is an operation for recursive subdivision of space to obtain convex sets. By exploiting this property, we devise BSP-Net, a network that learns to represent a 3D shape via convex decomposition. Importantly, BSP-Net is unsupervised since no convex shape decompositions are needed for training. The network is trained to reconstruct a shape using a set of convexes obtained from a BSP-tree built on a set of planes. The convexes inferred by BSP-Net can be easily extracted to form a polygon mesh, without any need for iso-surfacing. The generated meshes are compact (i.e., low-poly) and well suited to represent sharp geometry; they are guaranteed to be watertight and can be easily parameterized. We also show that the reconstruction quality by BSP-Net is competitive with state-of-the-art methods while using much fewer primitives." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1904.05767
1512.03012
Related work
Unlike existing methods that typically input simple renderings of CAD models, such as ShapeNet #REFR , we work with complex images in the presence of hand occlusions.
[ "We employ the latter since meshes allow better modeling of the interaction with the hand.", "AtlasNet #OTHEREFR inputs vertex coordinates concatenated with image features and outputs a deformed mesh.", "More recently, Pixel2Mesh #OTHEREFR explores regularizations to improve the perceptual quality of predicted meshes.", "Previous works mostly focus on producing accurate shape and they output the object in a normalized coordinate frame in a categoryspecific canonical pose.", "We employ a view-centered variant of #OTHEREFR to handle generic object categories, without any category-specific knowledge." ]
[ "In-hand scanning #OTHEREFR , while performed in the context of manipulation, focuses on object reconstruction and requires RGB-D video inputs. Hand-object reconstruction.", "Joint reconstruction of hands and objects has been studied with multi-view RGB #OTHEREFR and RGB-D input with either optimization #OTHEREFR 57, #OTHEREFR or classification #OTHEREFR approaches.", "These works use rigid objects, except for a few that use articulated #OTHEREFR or deformable objects #OTHEREFR .", "Focusing on contact points, most works employ proximity metrics [57, #OTHEREFR , while #OTHEREFR directly regresses them from images, and #OTHEREFR uses contact measurements on instrumented objects.", "#OTHEREFR integrates physical constraints for penetration and contact, attracting fingers onto the object uni-directionally." ]
[ "ShapeNet" ]
method
{ "title": "Learning Joint Reconstruction of Hands and Manipulated Objects", "abstract": "Estimating hand-object manipulations is essential for interpreting and imitating human actions. Previous work has made significant progress towards reconstruction of hand poses and object shapes in isolation. Yet, reconstructing hands and objects during manipulation is a more challenging task due to significant occlusions of both the hand and object. While presenting challenges, manipulations may also simplify the problem since the physics of contact restricts the space of valid hand-object configurations. For example, during manipulation, the hand and object should be in contact but not interpenetrate. In this work, we regularize the joint reconstruction of hands and objects with manipulation constraints. We present an end-to-end learnable model that exploits a novel contact loss that favors physically plausible hand-object constellations. Our approach improves grasp quality metrics over baselines, using RGB images as input. To train and evaluate the model, we also propose a new large-scale synthetic dataset, ObMan, with hand-object manipulations. We demonstrate the transferability of ObMan-trained models to real data." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }