citing_id: stringlengths (9 to 16)
cited_id: stringlengths (9 to 16)
section_title: stringlengths (0 to 2.25k)
citation: stringlengths (52 to 442)
text_before_citation: sequence
text_after_citation: sequence
keywords: sequence
citation_intent: stringclasses (3 values)
citing_paper_content: dict
cited_paper_content: dict
1803.04848
1501.07418
INTRODUCTION
Although the robust approach is computationally efficient when the uncertainty set is state-wise independent, compact and convex, it can lead to overly conservative results #REFR .
[ "A strategy that maximizes the accumulated expected reward is then considered as optimal and can be learned from sampling.", "However, besides the uncertainty that results from stochasticity of the environment, model parameters are often estimated from noisy data or can change during testing #OTHEREFR Roy et al., 2017] .", "This second type of uncertainty can significantly degrade the performance of the optimal strategy from the model's prediction.", "Robust MDPs were proposed to address this problem #OTHEREFR Nilim and El Ghaoui, 2005; #OTHEREFR .", "In this framework, a transition model is assumed to belong to a known uncertainty set and an optimal strategy is learned under the worst parameter realizations." ]
[ "For example, consider a business scenario where an agent's goal is to make as much money as possible.", "It can either create a startup which may make a fortune but may also result in bankruptcy.", "Alternatively, it can choose to live off school teaching and have almost no risk but low reward.", "By choosing the teaching strategy, the agent may be overly conservative and not account for opportunities to invest in his own promising projects.", "Our claim is that one could relax this conservativeness and construct a softer behavior that interpolates between being aggressive and robust." ]
[ "robust approach" ]
background
{ "title": "Soft-Robust Actor-Critic Policy-Gradient", "abstract": "Robust Reinforcement Learning aims to derive an optimal behavior that accounts for model uncertainty in dynamical systems. However, previous studies have shown that by considering the worst case scenario, robust policies can be overly conservative. Our soft-robust framework is an attempt to overcome this issue. In this paper, we present a novel Soft-Robust ActorCritic algorithm (SR-AC). It learns an optimal policy with respect to a distribution over an uncertainty set and stays robust to model uncertainty but avoids the conservativeness of robust strategies. We show the convergence of SR-AC and test the efficiency of our approach on different domains by comparing it against regular learning methods and their robust formulations." }
{ "title": "Distributionally Robust Counterpart in Markov Decision Processes", "abstract": "This technical note studies Markov decision processes under parameter uncertainty. We adapt the distributionally robust optimization framework, assume that the uncertain parameters are random variables following an unknown distribution, and seek the strategy which maximizes the expected performance under the most adversarial distribution. In particular, we generalize a previous study [1] which concentrates on distribution sets with very special structure to a considerably more generic class of distribution sets, and show that the optimal strategy can be obtained efficiently under mild technical conditions. This significantly extends the applicability of distributionally robust MDPs by incorporating probabilistic information of uncertainty in a more flexible way. Index Terms-Distributional robustness, Markov decision processes, parameter uncertainty." }
1803.04848
1501.07418
RELATED WORK
These include #REFR , in which the optimal strategy maximizes the expected reward under the most adversarial distribution over the uncertainty set.
[ "Our work solves the problem of conservativeness encountered in robust MDPs by incorporating a variational form of distributional robustness.", "The SR-AC algorithm combines scalability to large scale state-spaces and online estimation of the optimal policy in an actor-critic algorithm. Table 1 compares our proposed algorithm with previous approaches.", "Many solutions have been addressed to mitigate conservativeness of robust MDP.", "relax the state-wise independence property of the uncertainty set and assume it to be coupled in a way such that the planning problem stays tracktable.", "Another approach tends to assume a priori information on the parameter set." ]
[ "For finite and known MDPs, under some structural assumptions on the considered set of distributions, this max-min problem reduces to classical robust MDPs and can be solved efficiently by dynamic programming [Puterman, 2009] .", "However, besides becoming untracktable under largesized MDPs, these methods use an offline learning approach which cannot adapt its level of protection against model uncertainty and may lead to overly conservative results. The work of Lim et al.", "[2016] solutions this issue and addresses an online algorithm that learns the transitions that are purely stochastic and those that are adversarial.", "Although it ensures less conservative results as well as low regret, this method sticks to the robust objective while strongly relying on the finite structure of the state-space.", "To alleviate the curse of dimensionality, we incorporate function approximation of the objective value and define it as a linear functional of features." ]
[ "optimal strategy", "adversarial distribution" ]
background
{ "title": "Soft-Robust Actor-Critic Policy-Gradient", "abstract": "Robust Reinforcement Learning aims to derive an optimal behavior that accounts for model uncertainty in dynamical systems. However, previous studies have shown that by considering the worst case scenario, robust policies can be overly conservative. Our soft-robust framework is an attempt to overcome this issue. In this paper, we present a novel Soft-Robust ActorCritic algorithm (SR-AC). It learns an optimal policy with respect to a distribution over an uncertainty set and stays robust to model uncertainty but avoids the conservativeness of robust strategies. We show the convergence of SR-AC and test the efficiency of our approach on different domains by comparing it against regular learning methods and their robust formulations." }
{ "title": "Distributionally Robust Counterpart in Markov Decision Processes", "abstract": "This technical note studies Markov decision processes under parameter uncertainty. We adapt the distributionally robust optimization framework, assume that the uncertain parameters are random variables following an unknown distribution, and seek the strategy which maximizes the expected performance under the most adversarial distribution. In particular, we generalize a previous study [1] which concentrates on distribution sets with very special structure to a considerably more generic class of distribution sets, and show that the optimal strategy can be obtained efficiently under mild technical conditions. This significantly extends the applicability of distributionally robust MDPs by incorporating probabilistic information of uncertainty in a more flexible way. Index Terms-Distributional robustness, Markov decision processes, parameter uncertainty." }
1906.05988
1501.07418
In Section 3, we formulate the DR Bellman equation, show that the value function is convex when the ambiguity set is characterized by moments as in #REFR , and introduce several examples of moment-based ambiguity sets.
[ "The state then makes a transition according to p and DM's production decision, and the DM receives a reward according to how much demand he/she is able to satisfy, or pays a stocking cost.", "Assuming a family of distributions of unknown climate, the DM aims to maximize the worst-case revenue given the nature being an adversary.", "The above problem is especially important for planning orders or production in agriculture.", "The main research in this paper is to develop a DR formulation of POMDP and analyze its properties, as well as to investigate efficient computational methods, when assuming the accessibility of transition-observation probability at the end of each time.", "Section 2 provides a comprehensive review of the related literature in MDP, POMDP, and distributionally robust optimization." ]
[ "In Section 4, we present an approximation algorithm for DR-POMDP for infinite-horizon case by using a DR variant of the heuristic value search iteration (HVSI) algorithm.", "Numerical studies are presented in Section 5 to compare DR-POMDP with", "POMDP, and to demonstrate properties of DR-POMDP solutions based on randomly generated observation outcomes.", "We conclude the paper and describe future research in Section 6.", "2 Literature Review" ]
[ "moment-based ambiguity" ]
background
{ "title": "Distributionally Robust Partially Observable Markov Decision Process with Moment-based Ambiguity", "abstract": "We consider a distributionally robust (DR) formulation of partially observable Markov decision process (POMDP), where the transition probabilities and observation probabilities are random and unknown, only revealed at the end of every time step. We construct the ambiguity set of the joint distribution of the two types of probabilities using moment information bounded via conic constraints and show that the value function of DR-POMDP is convex with respect to the belief state. We propose a heuristic search value iteration method to solve DR-POMDP, which finds lower and upper bounds of the optimal value function. Computational analysis is conducted to compare DR-POMDP with the standard POMDP using random instances of dynamic machine repair and a ROCKSAMPLE benchmark." }
{ "title": "Distributionally Robust Counterpart in Markov Decision Processes", "abstract": "This technical note studies Markov decision processes under parameter uncertainty. We adapt the distributionally robust optimization framework, assume that the uncertain parameters are random variables following an unknown distribution, and seek the strategy which maximizes the expected performance under the most adversarial distribution. In particular, we generalize a previous study [1] which concentrates on distribution sets with very special structure to a considerably more generic class of distribution sets, and show that the optimal strategy can be obtained efficiently under mild technical conditions. This significantly extends the applicability of distributionally robust MDPs by incorporating probabilistic information of uncertainty in a more flexible way. Index Terms-Distributional robustness, Markov decision processes, parameter uncertainty." }
1712.02228
1406.7611
Introduction
These indicators were developed because evidence has been published that this data is - similar to bibliometric data - field- and time-dependent (see, e.g., #REFR ).
[ "(3) The publication of the altmetrics manifesto by #OTHEREFR gave this new area in scientometrics a name and thus a focal point.", "Today, many publishers add altmetrics to papers in their collections (e.g., Wiley", "and Springer) #OTHEREFR .", "Altmetrics are also recommended by Snowball Metrics #OTHEREFR for research evaluation purposes -an initiative publishing global standards for institutional benchmarking in the academic sector (www.snowballmetrics.com).", "In recent years, some altmetrics indicators have been proposed which are field-and time-normalized." ]
[ "Obviously, some fields are more relevant to a broader audience or general public than others #OTHEREFR .", "and #OTHEREFR introduced the mean discipline normalized reader score (MDNRS) and the mean normalized reader score (MNRS) based on", "Mendeley data (see also #OTHEREFR .", "#OTHEREFR propose the Twitter Percentile (TP) -a field-and time-normalized indicator for Twitter data.", "This indicator was developed against the backdrop of a problem with altmetrics data which is also addressed in this study -the inflation of the data with zero counts. The overview of #OTHEREFR" ]
[ "indicators", "data" ]
method
{ "title": "Normalization of zero-inflated data: An empirical analysis of a new indicator family and its use with altmetrics data", "abstract": "Recently, two new indicators (Equalized Mean-based Normalized Proportion Cited, EMNPC, and Mean-based Normalized Proportion Cited, MNPC) were proposed which are intended for sparse data. We propose a third indicator (Mantel-Haenszel quotient, MHq) belonging to the same indicator family. The MHq is based on the MH analysis - an established method for polling the data from multiple 2x2 contingency tables based on different subgroups. We test (using citations and assessments by peers) if the three indicators can distinguish between different quality levels as defined on the basis of the assessments by peers (convergent validity). We find that the indicator MHq is able to distinguish between the quality levels in most cases while MNPC and EMNPC are not." }
{ "title": "Validity of altmetrics data for measuring societal impact: A study using data from Altmetric and F1000Prime", "abstract": "Can altmetric data be validly used for the measurement of societal impact? The current study seeks to answer this question with a comprehensive dataset (about 100,000 records) from very disparate sources (F1000, Altmetric, and an in-house database based on Web of Science). In the F1000 peer review system, experts attach particular tags to scientific papers which indicate whether a paper could be of interest for science or rather for other segments of society. The results show that papers with the tag \"good for teaching\" do achieve higher altmetric counts than papers without this tag - if the quality of the papers is controlled. At the same time, a higher citation count is shown especially by papers with a tag that is specifically scientifically oriented (\"new finding\"). The findings indicate that papers tailored for a readership outside the area of research should lead to societal impact. If altmetric data is to be used for the measurement of societal impact, the question arises of its normalization. In bibliometrics, citations are normalized for the papers' subject area and publication year. This study has taken a second analytic step involving a possible normalization of altmetric data. As the results show there are particular scientific topics which are of especial interest for a wide audience. Since these more or less interesting topics are not completely reflected in Thomson Reuters' journal sets, a normalization of altmetric data should not be based on the level of subject categories, but on the level of topics." }
1803.08423
1209.1730
It is known #REFR that G admits two edge-Kempe inequivalent colorings c_1 and c_2 .
[ "The degree of the covering p constructed explicitly in Lemma 4 is precisely d − 1.", "Note that we pass to a further cover twice when relying on Lemma 3 and the covering degree increases by a factor of β(d − 1) each time.", "As explained in Remark 2 no further covers are necessery for the proof. This establishes the claim.", "An example.", "Let G = K 3,3 denote the complete bipartite graph on six vertices. The graph G is 3-regular." ]
[ "These are illustrated in the bottom row of Figure 1 .", "The colors 1, 2 and 3 correspond to blue, red and black, respectively.", "The required graph covering G and edge-Kempe switches are described in the top row of Figure 1 .", "These are performed along the bold cycles and indicated by the sign.", "The value of the function κ : V (G) → C = Z/2Z = {1, 2 = 0} is indicated on the vertices of (G, c 1 ) in the left bottom graph." ]
[ "two edge-Kempe inequivalent" ]
background
{ "title": "Edge Kempe equivalence of regular graph covers", "abstract": "Abstract. Let G be a finite d-regular graph with a legal edge coloring. An edge Kempe switch is a new legal edge coloring of G obtained by switching the two colors along some bi-chromatic cycle. We prove that any other edge coloring can be obtained by performing finitely many edge Kempe switches, provided that G is replaced with a suitable finite covering graph. The required covering degree is bounded above by a constant depending only on d." }
{ "title": "Counting edge-Kempe-equivalence classes for 3-edge-colored cubic graphs", "abstract": "Two edge colorings of a graph are edge-Kempe equivalent if one can be obtained from the other by a series of edge-Kempe switches. This work gives some results for the number of edge-Kempe equivalence classes for cubic graphs. In particular we show every 2-connected planar bipartite cubic graph has exactly one edge-Kempe equivalence class. Additionally, we exhibit infinite families of nonplanar bipartite cubic graphs with a range of numbers of edge-Kempe equivalence classes. Techniques are developed that will be useful for analyzing other classes of graphs as well." }
1702.08166
1610.05507
Related work
Work #REFR used a different analysis and showed a global linear convergence rate in the iterate point error, i.e., ||x_k − x*||.
[ "Work #OTHEREFR is the first study that establishes a global linear convergence rate for the PIAG method in function value error, i.e., Φ(x k ) − Φ(x * ), where x * denotes the minimizer point of Φ(x)." ]
[ "The authors of #OTHEREFR combined the results presented in #OTHEREFR and #OTHEREFR and provided a stronger linear convergence rate for the PIAG method in the recent paper #OTHEREFR .", "However, all these mentioned works are built on the strongly convex assumption, which is actually not satisfied by many application problems and hence motives lots of research to find weaker alternatives.", "Influential weaker conditions include the error bound property, the restricted strongly convex property, the quadratic growth condition, and the Polyak-Lojasiewicz inequality; the interested reader could refer to #OTHEREFR .", "Works #OTHEREFR studied the linear convergence of the FBS method under these weaker conditions.", "But to our knowledge, there is no work of studying the global linear convergence of the PIAG method under these weaker conditions." ]
[ "global linear convergence" ]
method
{ "title": "Linear Convergence of the Proximal Incremental Aggregated Gradient Method under Quadratic Growth Condition", "abstract": "Under the strongly convex assumption, several recent works studied the global linear convergence rate of the proximal incremental aggregated gradient (PIAG) method for minimizing the sum of a large number of smooth component functions and a non-smooth convex function. In this paper, under the quadratic growth condition-a strictly weaker condition than the strongly convex assumption, we derive a new global linear convergence rate result, which implies that the PIAG method attains global linear convergence rates in both the function value and iterate point errors. The main idea behind is to construct a certain Lyapunov function." }
{ "title": "Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server", "abstract": "This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems. The algorithm can handle both general convex (possibly non-smooth) regularizers and general convex constraints. When the empirical data loss is strongly convex, we establish linear convergence rate, give explicit expressions for step-size choices that guarantee convergence to the optimum, and bound the associated convergence factors. The expressions have an explicit dependence on the degree of asynchrony and recover classical results under synchronous operation. Simulations and implementations on commercial compute clouds validate our findings." }
1702.08166
1610.05507
Proof of Lemma 2
The second part is a standard argument, which is different from the optimality-condition-based method adopted in the proof of the theorem in #REFR . Part 1.
[ "We divide the proof into two parts.", "The first part can be found from the proof of Theorem 1 in [1]; we include it here for completion." ]
[ "Since each component function f n (x) is convex with L n -continuous gradient, we have the following upper bound estimations:", "Summing (15) over all components functions and using the expression of g k , we obtain", "The last term of the inequality above can be upper-bounded using Jensen's inequality as follows:", "Therefore,", "Part 2." ]
[ "optimality condition", "based method" ]
method
{ "title": "Linear Convergence of the Proximal Incremental Aggregated Gradient Method under Quadratic Growth Condition", "abstract": "Under the strongly convex assumption, several recent works studied the global linear convergence rate of the proximal incremental aggregated gradient (PIAG) method for minimizing the sum of a large number of smooth component functions and a non-smooth convex function. In this paper, under the quadratic growth condition-a strictly weaker condition than the strongly convex assumption, we derive a new global linear convergence rate result, which implies that the PIAG method attains global linear convergence rates in both the function value and iterate point errors. The main idea behind is to construct a certain Lyapunov function." }
{ "title": "Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server", "abstract": "This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems. The algorithm can handle both general convex (possibly non-smooth) regularizers and general convex constraints. When the empirical data loss is strongly convex, we establish linear convergence rate, give explicit expressions for step-size choices that guarantee convergence to the optimum, and bound the associated convergence factors. The expressions have an explicit dependence on the degree of asynchrony and recover classical results under synchronous operation. Simulations and implementations on commercial compute clouds validate our findings." }
1807.00110
1610.05507
Introduction
We note that the approach in #REFR is essentially a primal algorithm that allows for one proximal term (and hence one constrained set).
[ "(This largely rules out primal-only methods since they usually allow just one proximal term.) Hence, the algorithm would be able to allow for constrained optimization, where the feasible region is the intersection of several sets.", "(6) able to allow for time-varying graphs in the sense of #OTHEREFR (to be robust against failures of communication between two agents). (7) able to use simpler subproblems for subdifferentiable functions. (8) able to use simpler subproblems for smooth functions. (9) able to allow for partial communication of data.", "Since Dykstra's algorithm is also dual block coordinate ascent, the following property is obtained:", "(10) choosing a large number of dual variables to be maximized over gives a greedier increase of the dual objective value.", "We are not aware of other algorithms that satisfy properties 1-5 at the same time." ]
[ "Due to technical difficulties (see Remark 4.3), a dual or primal-dual method seems necessary to handle the case of more than one constrained set.", "Algorithms derived from the primal dual algorithm #OTHEREFR , like #OTHEREFR , are very much different from what we study in this paper.", "The most notable difference is that they study ergodic convergence rates, which is not directly comparable with our results.", "1.2.1. Convergence rates.", "Since the subproblems in our case are strongly convex, standard techniques for block coordinate minimization, like #OTHEREFR , can be used to prove the O(1/k) convergence rate when a dual solution exists and all functions are treated as proximable functions." ]
[ "primal algorithm" ]
background
{ "title": "Linear and sublinear convergence rates for a subdifferentiable distributed deterministic asynchronous Dykstra's algorithm", "abstract": "Abstract. In [Pan18a, Pan18b], we designed a distributed deterministic asynchronous algorithm for minimizing the sum of subdifferentiable and proximable functions and a regularizing quadratic on time-varying graphs based on Dykstra's algorithm, or block coordinate dual ascent. Each node in the distributed optimization problem is the sum of a known regularizing quadratic and a function to be minimized. In this paper, we prove sublinear convergence rates for the general algorithm, and a linear rate of convergence if the function on each node is smooth with Lipschitz gradient. Our numerical experiments also verify these rates." }
{ "title": "Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server", "abstract": "This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems. The algorithm can handle both general convex (possibly non-smooth) regularizers and general convex constraints. When the empirical data loss is strongly convex, we establish linear convergence rate, give explicit expressions for step-size choices that guarantee convergence to the optimum, and bound the associated convergence factors. The expressions have an explicit dependence on the degree of asynchrony and recover classical results under synchronous operation. Simulations and implementations on commercial compute clouds validate our findings." }
1806.09429
1610.05507
Comparison of the results with the literature
In the case of uniformly bounded delays, the derived link between the epoch and time sequences enables us to compare our rates in the strongly convex case (Theorem 3.1) with the ones obtained for PIAG [ #REFR , 27, 28].
[ "This simple but powerful remark is one of the main technical contributions of this paper.", "In order to get comparisons with the literature, the following result provides explicit bounds on our epoch sequence for our framework with two different kind of bounds on delays uniformly in time.", "The proof of this proposition is basic and reported in Appendix C. The detailed results are summarized in the following table. uniform bound average bound", "Bounding the average delay among the workers is an attractive assumption which is however much less common in the literature.", "The defined epoch sequence and associated analysis subsumes this kind of assumption." ]
[ "To simply the comparison, let us consider the case where all the workers share the same strong convexity and smoothness constants µ and L.", "The first thing to notice is that the admissible stepsize for PIAG depend on the delays uniform upper bound d which is practically concerning, while the usual proximal gradient stepsizes are used for the proposed DAve-RPG.", "Using the optimal stepsizes in each case, the convergence rates in terms of time k are: Stepsize", "We notice in both cases the exponent inversely proportional to the maximal delay d but the term inside the parenthesis is a hundred times smaller for PIAG.", "Even if our algorithm is made for handling the flexible delays, this comparison illustrates the interest of our approach over PIAG for distributed asynchronous optimization in the case of bounded delays." ]
[ "uniformly bounded delays" ]
result
{ "title": "A Distributed Flexible Delay-tolerant Proximal Gradient Algorithm", "abstract": "We develop and analyze an asynchronous algorithm for distributed convex optimization when the objective writes a sum of smooth functions, local to each worker, and a non-smooth function. Unlike many existing methods, our distributed algorithm is adjustable to various levels of communication cost, delays, machines computational power, and functions smoothness. A unique feature is that the stepsizes do not depend on communication delays nor number of machines, which is highly desirable for scalability. We prove that the algorithm converges linearly in the strongly convex case, and provide guarantees of convergence for the non-strongly convex case. The obtained rates are the same as the vanilla proximal gradient algorithm over some introduced epoch sequence that subsumes the delays of the system. We provide numerical results on large-scale machine learning problems to demonstrate the merits of the proposed method. • the gradient of its local function ∇f i ; • the proximity operator of the common non-smooth function prox ). 1 Our preliminary work in a machine learning context [17] presents briefly the asynchronous framework and a theoretical study in the strongly convex case. We extend this work on several aspects with in particular a deeper analysis of the asynchronous setting, the use of local stepsizes, and the study of the general convex case. We further consider a master slave framework where the workers exchange information with a master machine which has no global information about the problem but only coordinates the computation of agents in order to minimize (1). Having asynchronous exchanges between the workers and the master is of paramount importance for practical efficiency as it eliminates idle times (see e.g. the recent [10]): in the optimization algorithm, at each moment when the master receives an update from some worker, updates its master variable, and sends it back so that the worker carries on its computation from the updated iterate. This distributed setting covers a variety of scenarios when computation are scattered over distributed devices (computer clusters, mobiles), each having a local part of the data (the locality arising from the prohibitive size of the data, or its privacy [23]), as in federated learning [12] . In the large-scale machine learning applications for instance, data points can be split across the M workers, so that each worker i has a local function f i with properties that may be different due to data distribution unevenness. This context of optimization over distributed devices requires paying a special attention to delays, [16] . Indeed some worker may update more frequently than others, due to heterogeneity of machines, data distribution, communication instability, etc. For example, in the mobile context, users cannot have their cellphone send updates without internet connection, or perform computations when not charging. In this distributed setting, we provide an asynchronous algorithm and the associated analysis that adapts to local functions parameters and can handle any kind of delays. The algorithm is based on fully asynchronous proximal gradient iterations with different stepsizes, which makes it adaptive to the functions properties. In order to subsume delays, we develop a new epoch-based mathematical analysis, encompassing computation times and communication delays, to refocus the theory on algorithmics. 
We show convergence in the general convex case and linear convergence in the strongly convex case, with a rate independent of the computing system, which is highly desirable for scalability. This algorithm thus handles the diversity of the previously-discussed applications. The paper is organized as follows. In Section 2, we give a description of the algorithm, split into the communication and the optimization scheme, as well as a comparison with the most related algorithm. In Section 3, we develop our epoch-based analysis of convergence, separating the general and the strongly convex case. In Section 4, we provide illustrative computational experiments on standard 1 -regularized problems showing the efficiency of the algorithm and its resilience to delays." }
{ "title": "Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server", "abstract": "This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems. The algorithm can handle both general convex (possibly non-smooth) regularizers and general convex constraints. When the empirical data loss is strongly convex, we establish linear convergence rate, give explicit expressions for step-size choices that guarantee convergence to the optimum, and bound the associated convergence factors. The expressions have an explicit dependence on the degree of asynchrony and recover classical results under synchronous operation. Simulations and implementations on commercial compute clouds validate our findings." }
1611.08022
1610.05507
Assumption 2.2. (Strong Convexity)
Before presenting the main result of this work, we introduce the following lemma, which was presented in #REFR , in a slightly different form.
[ "3. Main Result.", "In this section, we characterize the global linear convergence rate of the PIAG algorithm. Let", "denote the suboptimality in the objective value at iteration k.", "The paper #OTHEREFR presented two lemmas regarding the evolution of F k and ||d k || 2 .", "In particular, the first lemma investigates how the suboptimality in the objective value evolves over the iterations and the second lemma relates the direction of update to the suboptimality in the objective value at a given iteration k." ]
[ "This lemma shows linear convergence rate for a nonnegative sequence Z k that satisfies a contraction relation perturbed by shocks (represented by Y k in the lemma).", "Lemma 3.3.", "[1, Lemma 1] Let {Z k } and {Y k } be a sequence of non-negative real numbers satisfying", "for any k ≥ 0 for some constants α > 1, β ≥ 0, γ ≥ 0 and A ∈ Z + . If", "We next present the main theorem of this paper, which characterizes the linear convergence rate of the PIAG algorithm." ]
[ "following lemma" ]
background
{ "title": "A Stronger Convergence Result on the Proximal Incremental Aggregated Gradient Method", "abstract": "Abstract. We study the convergence rate of the proximal incremental aggregated gradient (PIAG) method for minimizing the sum of a large number of smooth component functions (where the sum is strongly convex) and a non-smooth convex function. At each iteration, the PIAG method moves along an aggregated gradient formed by incrementally updating gradients of component functions at least once in the last K iterations and takes a proximal step with respect to the non-smooth function. We show that the PIAG algorithm attains an iteration complexity that grows linear in the condition number of the problem and the delay parameter K. This improves upon the previously best known global linear convergence rate of the PIAG algorithm in the literature which has a quadratic dependence on K." }
{ "title": "Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server", "abstract": "This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems. The algorithm can handle both general convex (possibly non-smooth) regularizers and general convex constraints. When the empirical data loss is strongly convex, we establish linear convergence rate, give explicit expressions for step-size choices that guarantee convergence to the optimum, and bound the associated convergence factors. The expressions have an explicit dependence on the degree of asynchrony and recover classical results under synchronous operation. Simulations and implementations on commercial compute clouds validate our findings." }
1711.01136
1610.05507
Key Lemmas and Main Results
First of all, we introduce a key result, which was given in #REFR . Lemma 1.
[ "Throughout this section, we remind the reader that for simplicity we consider the sequence {x k } generated by the PLIAG method with α k ≡ α.", "All the obtained results and the proofs are also valid for the PLIAG method with different α k ." ]
[ "Assume that the nonnegative sequences {V k } and {w k } satisfy", "for some real numbers a ∈ (0, 1), b ≥ 0, c ≥ 0, and some nonnegative integer k 0 .", "Assume also that w k = 0 for k < 0, and the following holds:", "In addition, we need another crucial result, which can be viewed as a generalization of the standard descent lemma (i.e., [4, Lemma 2.3]) for the PG method.", "Lemma 2." ]
[ "Lemma" ]
background
{ "title": "Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence under Bregman Distance Growth Conditions", "abstract": "We introduce a unified algorithmic framework, called proximal-like incremental aggregated gradient (PLIAG) method, for minimizing the sum of smooth convex component functions and a proper closed convex regularization function that is possibly non-smooth and extendedvalued, with an additional abstract feasible set whose geometry can be captured by using the domain of a Legendre function. The PLIAG method includes many existing algorithms in the literature as special cases such as the proximal gradient (PG) method, the incremental aggregated gradient (IAG) method, the incremental aggregated proximal (IAP) method, and the proximal incremental aggregated gradient (PIAG) method. By making use of special Lyapunov functions constructed by embedding growth-type conditions into descent-type lemmas, we show that the PLIAG method is globally convergent with a linear rate provided that the step-size is not greater than some positive constant. Our results recover existing linear convergence results for incremental aggregated methods even under strictly weaker conditions than the standard assumptions in the literature." }
{ "title": "Analysis and Implementation of an Asynchronous Optimization Algorithm for the Parameter Server", "abstract": "This paper presents an asynchronous incremental aggregated gradient algorithm and its implementation in a parameter server framework for solving regularized optimization problems. The algorithm can handle both general convex (possibly non-smooth) regularizers and general convex constraints. When the empirical data loss is strongly convex, we establish linear convergence rate, give explicit expressions for step-size choices that guarantee convergence to the optimum, and bound the associated convergence factors. The expressions have an explicit dependence on the degree of asynchrony and recover classical results under synchronous operation. Simulations and implementations on commercial compute clouds validate our findings." }
1810.10328
1606.06511
Time Complexity
Similarly to other well-established machine learning algorithms that share this bottleneck, one could make use of approximations that trade off accuracy for computational expense #REFR .
[ "The algorithm requires the computation of a similarity matrix which would require O(N 2 ), where N is the number of data points, and then compute the generalized Laplacian.", "The bottleneck is computing its inverse which has complexity O(N 3 )." ]
[ "We also note that the per iteration complexity scales linearly in N , due to the normalization step." ]
[ "well-established machine learning", "algorithms" ]
background
{ "title": "LABEL PROPAGATION FOR LEARNING WITH LABEL PROPORTIONS", "abstract": "Learning with Label Proportions (LLP) is the problem of recovering the underlying true labels given a dataset when the data is presented in the form of bags. This paradigm is particularly suitable in contexts where providing individual labels is expensive and label aggregates are more easily obtained. In the healthcare domain, it is a burden for a patient to keep a detailed diary of their daily routines, but often they will be amenable to provide higher level summaries of daily behavior. We present a novel and efficient graph-based algorithm that encourages local smoothness and exploits the global structure of the data, while preserving the 'mass' of each bag. 978-1-5386-5477-4/18/$31.00 c 2018 IEEE" }
{ "title": "Literature survey on low rank approximation of matrices", "abstract": "Low rank approximation of matrices has been well studied in literature. Singular value decomposition, QR decomposition with column pivoting, rank revealing QR factorization (RRQR), Interpolative decomposition etc are classical deterministic algorithms for low rank approximation. But these techniques are very expensive (O(n 3 ) operations are required for n × n matrices). There are several randomized algorithms available in the literature which are not so expensive as the classical techniques (but the complexity is not linear in n). So, it is very expensive to construct the low rank approximation of a matrix if the dimension of the matrix is very large. There are alternative techniques like Cross/Skeleton approximation which gives the low-rank approximation with linear complexity in n. In this article we review low rank approximation techniques briefly and give extensive references of many techniques." }
1802.08901
1606.06511
Hermitian Space -Dynamic Mode Decomposition with control
The use of E-SVD reduces the complexity to O(mnr) ( #REFR ) by computing only the first r singular values and vectors.
[ "Because the solar cycle lasts over a decade, this requires a large data set of more than (m ≈) 400,000 snapshots with a 0.25 hr resolution.", "A 5 degree grid resolution in TIE-GCM results in a state vector size of (n ≈) 75,000 with a 2.5 degree grid resolution resulting in n ≈ 300, 000.", "Large data has motivated extensions to DMD even beyond E-SVD ( #OTHEREFR", "al.,(2017 ]), but have been limited to systems with no exogenous inputs.", "The theoretical computational complexity of full rank SVD of X 1 ∈ R n×m used in DMDc is O(mn 2 ) with n ≤ m, making its application intractable for the problem at hand." ]
[ "HS-DMDc reduces the computation of the psuedoinverse ( † ) to the Hermitian space by performing an eigendecomposition of the correlation matrix,", "n×n , reducing the full rank complexity to O(nn 2 ).", "The complexity can be reduced to O(n 2 r) using an economy EigenDecomposition (E-ED).", "In theory, the computation of the correlation matrix X 1 X T 1 also introduces linear scaling with m -O(mn 2 ).", "Although formulating the problem in the Hermitian space is somewhat of a common practice, motivated in part by the method of snapshot formalism of POD, it is important to note that using Eigendecomposition to compute the singular values and vectors can be more sensitive to numerical roundoff errors." ]
[ "E-SVD" ]
method
{ "title": "M ar 2 01 8 A quasi-physical dynamic reduced order model for thermospheric mass density via Hermitian Space Dynamic Mode Decomposition", "abstract": "Thermospheric mass density is a major driver of satellite drag, the largest source of uncertainty in accurately predicting the orbit of satellites in low Earth orbit (LEO) pertinent to space situational awareness. Most existing models for thermosphere are either physics-based or empirical. Physics-based models offer the potential for good predictive/forecast capabilities but require dedicated parallel resources for real-time evaluation and data assimilative capabilities that have yet to be developed. Empirical models are fast to evaluate, but offer very limited forecasting abilities. This paper presents methodology for developing a reduced-order dynamic model from high-dimensional physics-based models by capturing the underlying dynamical behavior. The quasi-physical reduced order model (ROM) for thermospheric mass density is developed using a large dataset of TIE-GCM (Thermosphere-Ionosphere-Electrodynamics General Circular Model) simulations spanning 12 years and covering a complete solar cycle. Towards this end, a new reduced order modeling approach, based on Dynamic Mode Decomposition with control (DMDc), that uses the Hermitian space of the problem to derive the dynamics and input matrices in a tractable manner is developed. Results show that the ROM performs well in serving as a reduced order surrogate for TIE-GCM while almost always maintaining the forecast error to within 5% of the simulated densities after 24 hours." }
{ "title": "Literature survey on low rank approximation of matrices", "abstract": "Low rank approximation of matrices has been well studied in literature. Singular value decomposition, QR decomposition with column pivoting, rank revealing QR factorization (RRQR), Interpolative decomposition etc are classical deterministic algorithms for low rank approximation. But these techniques are very expensive (O(n 3 ) operations are required for n × n matrices). There are several randomized algorithms available in the literature which are not so expensive as the classical techniques (but the complexity is not linear in n). So, it is very expensive to construct the low rank approximation of a matrix if the dimension of the matrix is very large. There are alternative techniques like Cross/Skeleton approximation which gives the low-rank approximation with linear complexity in n. In this article we review low rank approximation techniques briefly and give extensive references of many techniques." }
2004.03623
1704.00648
Experiments
For the relaxed Bernoulli in Q_O , we start with a temperature of 1.0 and an annealing rate of 3 × 10^-5 (following the details in #REFR ).
[ "For ImageNet, φ(x) is a ResNet18 model (a conv layer followed by four residual blocks).", "For all datasets, Q A and Q O have a single conv layer each.", "For classification, we start from φ(x), and add a fully-connected layer with 512 hidden units and a final fully-connected layer as classifier. More details can be found in the supplemental material.", "During the unsupervised learning phase of training, all methods are trained for 90 epochs for CIFAR100 and Indoor67, 2 epochs for Places205, and 30 epochs for ImageNet dataset.", "All methods use ADAM optimizer for training, with initial learning rate of 1 × 10 −4 and a minibatch size of 128." ]
[ "For training the classifier, all methods use stochastic gradient descent (SGD) with momentum with a minibatch size of 128.", "Initial learning rate is 1 × 10 −2 and we reduce it by a factor of 10 every 30 epochs.", "All experiments are trained for 90 epochs for CIFAR100 and Indoor67, 5 epochs for Places205, and 30 epochs for ImageNet datasets.", "Baselines.", "We use the β-VAE model (Section 3.1) as our primary baseline." ]
[ "details", "relaxed bernoulli" ]
method
{ "title": "PatchVAE: Learning Local Latent Codes for Recognition", "abstract": "Unsupervised representation learning holds the promise of exploiting large amounts of unlabeled data to learn general representations. A promising technique for unsupervised learning is the framework of Variational Auto-encoders (VAEs). However, unsupervised representations learned by VAEs are significantly outperformed by those learned by supervised learning for recognition. Our hypothesis is that to learn useful representations for recognition the model needs to be encouraged to learn about repeating and consistent patterns in data. Drawing inspiration from the mid-level representation discovery work, we propose PatchVAE, that reasons about images at patch level. Our key contribution is a bottleneck formulation that encourages mid-level style representations in the VAE framework. Our experiments demonstrate that representations learned by our method perform much better on the recognition tasks compared to those learned by vanilla VAEs." }
{ "title": "Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations", "abstract": "We present a new approach to learn compressible representations in deep architectures with an end-to-end training strategy. Our method is based on a soft (continuous) relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two challenging applications: Image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives results competitive with the state-of-the-art for both." }
1811.12817
1704.00648
Loss
Thereby, the z^(s) = F^(s)(x) are defined using the learned feature extractor blocks E^(s) , and p(x, z #REFR , . . .
[ "We are now ready to define the loss, which is a generalization of the discrete logistic mixture loss introduced in #OTHEREFR . Recall from Sec.", "3.1 that our goal is to model the true joint distribution of x and the representations z (s) , i.e., p(x, z #OTHEREFR , . . .", ", z (s) ) as accurately as possible using our model p(x, z #OTHEREFR , . . . , z (s) )." ]
[ ", z (s) ) is a product of discretized (conditional) logistic mixture models with parameters defined through the f (s) , which are in turn computed using the learned predictor blocks D (s) . As discussed in Sec.", "3.1, the expected coding cost incurred by coding x, z #OTHEREFR", "Note that the loss decomposes into the sum of the crossentropies of the different representations.", "Also note that this loss corresponds to the negative log-likelihood of the data w.r.t.", "our model which is typically the perspective taken in the generative modeling literature (see, e.g., #OTHEREFR" ]
[ "learned feature extractor" ]
method
{ "title": "Practical Full Resolution Learned Lossless Image Compression", "abstract": "We propose the first practical learned lossless image compression system, L3C, and" }
{ "title": "Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations", "abstract": "We present a new approach to learn compressible representations in deep architectures with an end-to-end training strategy. Our method is based on a soft (continuous) relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two challenging applications: Image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives results competitive with the state-of-the-art for both." }
2001.09417
1704.00648
Image Compression based on DNN
In #REFR , similar to the soft quantization strategy, a soft entropy is designed by summing up the partial assignments to each center instead of counting.
[ "With the quantizer being differentiable, in order to jointly minimize the bitrate and distortion, we also need to make the entropy differentiable.", "For example, in #OTHEREFR , the quantizer is added with uniform noise.", "The density function of this relaxed formulation is continuous and can be used as an approximation of the entropy of the quantized values." ]
[ "In #OTHEREFR , an entropy coding scheme is trained to learn the dependencies among the symbols in the latent representation by using a context model. These methods allow jointly optimizing the R-D function." ]
[ "soft quantization strategy" ]
method
{ "title": "Deep Learning-based Image Compression with Trellis Coded Quantization", "abstract": "Recently many works attempt to develop image compression models based on deep learning architectures, where the uniform scalar quantizer (SQ) is commonly applied to the feature maps between the encoder and decoder. In this paper, we propose to incorporate trellis coded quantizer (TCQ) into a deep learning based image compression framework. A soft-tohard strategy is applied to allow for back propagation during training. We develop a simple image compression model that consists of three subnetworks (encoder, decoder and entropy estimation), and optimize all of the components in an end-to-end manner. We experiment on two high resolution image datasets and both show that our model can achieve superior performance at low bit rates. We also show the comparisons between TCQ and SQ based on our proposed baseline model and demonstrate the advantage of TCQ." }
{ "title": "Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations", "abstract": "We present a new approach to learn compressible representations in deep architectures with an end-to-end training strategy. Our method is based on a soft (continuous) relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two challenging applications: Image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives results competitive with the state-of-the-art for both." }
2002.10032
1704.00648
INTRODUCTION
In #REFR , a soft-to-hard vector quantization approach was introduced, and a unified framework was developed for image compression.
[ "Deep learning-based image compression #OTHEREFR has shown the potential to outperform standard codecs such as JPEG2000, the H.265/HEVC-based BPG image codec #OTHEREFR , and the new versatile video coding test model (VTM) #OTHEREFR .", "Learned image compression was first used in #OTHEREFR to compress thumbnail images using long short-term memory (LSTM)-based recurrent neural networks (RNNs) in which better SSIM results than JPEG and WebP were reported.", "This approach was generalized in #OTHEREFR , which utilized spatially adaptive bit allocation to further improve the performance.", "In #OTHEREFR , a scheme based on generalized divisive normalization (GDN) and inverse GDN (IGDN) were proposed, which outperformed JPEG2000 in both PSNR and SSIM.", "A compressive autoencoder framework with residual connection as in ResNet was proposed in #OTHEREFR , where the quantization was replaced by a smooth approximation, and a scaling approach was used to get different rates." ]
[ "In order to take the spatial variation of image content into account, a contentweighted framework was also introduced in #OTHEREFR , where an importance map for locally adaptive bit rate allocation was employed to handle the spatial variation of image content.", "A learned channel-wise quantization along with arithmetic coding was also used to reduce the quantization error.", "There have also been some efforts in taking advantage of other computer vision tasks in image compression frameworks.", "For example, in #OTHEREFR , a deep semantic segmentation-based layered image compression (DSSLIC) was proposed, by taking advantage of the Generative Adversarial Network (GAN) and BPG-based residual coding.", "It outperformed the BPG codec (in RGB444 format) in both PSNR and MS-SSIM #OTHEREFR ." ]
[ "image compression", "soft-to-hard vector quantization" ]
method
{ "title": "Generalized Octave Convolutions for Learned Multi-Frequency Image Compression", "abstract": "Learned image compression has recently shown the potential to outperform all standard codecs. The state-of-the-art ratedistortion performance has been achieved by context-adaptive entropy approaches in which hyperprior and autoregressive models are jointly utilized to effectively capture the spatial dependencies in the latent representations. However, the latents contain a mixture of high and low frequency information, which has inefficiently been represented by features maps of the same spatial resolution in previous works. In this paper, we propose the first learned multi-frequency image compression approach that uses the recently developed octave convolutions to factorize the latents into high and low frequencies. Since the low frequency is represented by a lower resolution, their spatial redundancy is reduced, which improves the compression rate. Moreover, octave convolutions impose effective high and low frequency communication, which can improve the reconstruction quality. We also develop novel generalized octave convolution and octave transposed-convolution architectures with internal activation layers to preserve the spatial structure of the information. Our experiments show that the proposed scheme outperforms all standard codecs and learning-based methods in both PSNR and MS-SSIM metrics, and establishes the new state of the art for learned image compression. Index Termsgeneralized octave convolutions, multifrequency autoencoder, learned image compression, learned entropy model" }
{ "title": "Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations", "abstract": "We present a new approach to learn compressible representations in deep architectures with an end-to-end training strategy. Our method is based on a soft (continuous) relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two challenging applications: Image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives results competitive with the state-of-the-art for both." }
2002.01416
1803.06893
2D Kelvin-Helmholtz simulation
The energy and enstrophy for EMAC and SKEW agree well with each other, and with results in #REFR .
[ "For Re = 100, solutions are computed up to T = 10 on a uniform triangulation with h = 1 96 is used with a time step size of ∆t = 0.01.", "For Re = 1000, solutions are computed up to T = 20 on a uniform triangulation with h = 1 196 and ∆t = 0.005.", "The nonlinear problems were resolved with Newton's method, and in most cases converged in 2 to 3 iterations.", "We first present results for the Re = 100 simulations.", "Plots of energy, enstrophy, absolute total momentum (defining |M | = |M 1 + M 2 |), and angular momentum versus time are shown in figure 6." ]
[ "For momentum, the initial condition has 0 momentum in both the x and y directions; EMAC maintains this momentum up to roundoff error, while SKEW produces solutions with momentum near 10 −7 which is still quite small.", "The plots of angular momentum versus time are quite interesting, as EMAC agrees with SKEW up to around t = 2, at which point it deviates significantly.", "This deviation coincides with the differences in the absolute vorticity contours in figure 7 (we show the domain extended once periodically to the right, to aid in presentation of the results), where we see that EMAC joins the middle 2 eddies from the t=2.1 solution to form a bigger eddy, while SKEW joins the left eddies together and the right eddies together.", "Since the solution is periodic in the horizontal direction, we believe both of these solutions to be correct, however it is still interesting how the different formulations find different solutions.", "We note that the solution plots from figure 7 are in good qualitative agreement with those shown in #OTHEREFR , although as discussed in #OTHEREFR the times at which eddy combining happens is very sensitive and so some minor differences for evolution-in-time is both expected and observed." ]
[ "EMAC", "enstrophy" ]
result
{ "title": "Longer time accuracy for incompressible Navier-Stokes simulations with the EMAC formulation", "abstract": "In this paper, we consider the recently introduced EMAC formulation for the incompressible Navier-Stokes (NS) equations, which is the only known NS formulation that conserves energy, momentum and angular momentum when the divergence constraint is only weakly enforced. Since its introduction, the EMAC formulation has been successfully used for a wide variety of fluid dynamics problems. We prove that discretizations using the EMAC formulation are potentially better than those built on the commonly used skew-symmetric formulation, by deriving a better longer time error estimate for EMAC: while the classical results for schemes using the skew-symmetric formulation have Gronwall constants dependent on exp(C · Re · T ) with Re the Reynolds number, it turns out that the EMAC error estimate is free from this explicit exponential dependence on the Reynolds number. Additionally, it is demonstrated how EMAC admits smaller lower bounds on its velocity error, since incorrect treatment of linear momentum, angular momentum and energy induces lower bounds for L 2 velocity error, and EMAC treats these quantities more accurately. Results of numerical tests for channel flow past a cylinder and 2D Kelvin-Helmholtz instability are also given, both of which show that the advantages of EMAC over the skew-symmetric formulation increase as the Reynolds number gets larger and for longer simulation times. in a domain Ω ⊂ R d , d=2 or 3, with polyhedral and Lipschitz boundary, u and p representing the unknown velocity and pressure, f an external force, u 0 the initial velocity, and ν the kinematic viscosity which is inversely proportional to the Reynolds number Re. Appropriate boundary conditions are required to close the system, and for simplicity we will consider the case of homogeneous Dirichlet boundary conditions, u| ∂Ω = 0. In the recent work [6], the authors showed that due to the divergence constraint, the NSE nonlinearity could be equivalently be written as u · ∇u + ∇p = 2D(u)u + (div u)u + ∇P, with P = p− 1 2 |u| 2 and D denoting the rate of deformation tensor. Reformulating in this way was named in [6] to be the energy, momentum and angular momentum conserving (EMAC) formulation of the NSE, since when discretized with a Galerkin method that only weakly enforces the divergence constraint, the EMAC formulation still produces a scheme that conserves each of energy, momentum, and angular-momentum, as well as properly defined 2D enstrophy, helicity, and total vorticity. This is in contrast to the well-known convective, conservative, rotational, and skew-symmetric formulations, which are each shown in [6] to not conserve at least one of energy, momentum or angular momentum. The EMAC formulation, and its related numerical schemes, is part of a long line of research extending back at least to Arakawa that has the theme \"incorporating more accurate physics into discretizations leads to more stable and accurate numerical solutions, especially over long time intervals.\" There are typically many ways to discretize a particular PDE, but choosing (or developing) a method that more accurately reproduces important physical balances or conservation laws will often lead to better solutions. 
Arakawa recognized this when he designed an energy and enstrophy conserving scheme for the 2D Navier-Stokes equations in [2], as did Fix for ocean circulation models in [11] , Arakawa and Lamb for the shallow water equations [3], and many others for various evolutionary systems from physics, e.g. [24, 1, 38, 34, 32, 30, 3] . It is important to note that if divergence-free elements are used, such as those recently developed in [16, 40, 15, 4] , then the finite element velocity found with the EMAC formulation is the same vector field as recovered from more traditional convective and skew-symmetric formulations, and all of these conservation properties will hold for those formulations as well. However, the development of strongly divergence-free methods is still quite new, often requires non-standard meshing and elements, and is not yet included into most major software packages. Since its original development in 2017 in [6] , the EMAC formulation has gained considerable attention by the CFD community. It has been used for a wide variety of problems, including vortex-induced vibration [31] , turbulent flow simulation [22] , cardiovascular simulations and hemodynamics [10, 9] , noise radiated by an open cavity [25] , and others [29, 23] . It has proven successful in these simulations, and a common theme reported for it has been that it exhibits low dissipation compared to other common schemes, which is likely due to EMAC's better adherence to physical conservation laws and balances. Not surprisingly, less has been done from an analysis viewpoint, as only one paper has appeared in this direction; in [7], the authors analyzed conservation properties of various time stepping methods for EMAC. In particular, no analysis for EMAC has been found which improves upon the well-known analysis of mixed finite elements for the incompressible NSE in skew-symmetric form. The present paper addresses the challenge of providing such new analysis. This paper extends the study of the EMAC formulation both analytically and computationally. Analytically, we show how the better convergence properties of EMAC unlock the potential for decreasing the approximation error of FE methods. In particular, we show that while the classical semidiscrete error bound for the skew-symmetric formulation has a Gronwall constant exp(C · Re · T ) [18], where T is the simulation end time, the analogous EMAC scheme has a Gronwall constant exp(C · T ), i.e. with no explicit exponential dependence on Re (and the rest of the terms in the error bound are similar). We note that previously, such ν-uniform error bounds were believed to be an exclusive property of finite element methods that enforced the divergence constraint strongly through divergence-free elements [37] or through stabilization/penalization of the divergence error [8] . Additionally, we show how the lack of momentum conservation in convective, skew-symmetric and rotational forms produce a lower bound on the error, which EMAC is free from. Numeri-" }
{ "title": "On reference solutions and the sensitivity of the 2D Kelvin-Helmholtz instability problem", "abstract": "Two-dimensional Kelvin-Helmholtz instability problems are popular examples for assessing discretizations for incompressible flows at high Reynolds number. Unfortunately, the results in the literature differ considerably. This paper presents computational studies of a Kelvin-Helmholtz instability problem with high order divergence-free finite element methods. Reference results in several quantities of interest are obtained for three different Reynolds numbers up to the beginning of the final vortex pairing. A meshindependent prediction of the final pairing is not achieved due to the sensitivity of the considered problem with respect to small perturbations. A theoretical explanation of this sensitivity to small perturbations is provided based on the theory of self-organization of 2D turbulence. Possible sources of perturbations that arise in almost any numerical simulation are discussed." }
2002.01416
1803.06893
2D Kelvin-Helmholtz simulation
The plots of energy and enstrophy are in agreement with those in #REFR (after adjusting time units).
[ "The plots of angular momentum versus time are quite interesting, as EMAC agrees with SKEW up to around t = 2, at which point it deviates significantly.", "This deviation coincides with the differences in the absolute vorticity contours in figure 7 (we show the domain extended once periodically to the right, to aid in presentation of the results), where we see that EMAC joins the middle 2 eddies from the t=2.1 solution to form a bigger eddy, while SKEW joins the left eddies together and the right eddies together.", "Since the solution is periodic in the horizontal direction, we believe both of these solutions to be correct, however it is still interesting how the different formulations find different solutions.", "We note that the solution plots from figure 7 are in good qualitative agreement with those shown in #OTHEREFR , although as discussed in #OTHEREFR the times at which eddy combining happens is very sensitive and so some minor differences for evolution-in-time is both expected and observed.", "For Re = 1000, plots of energy, absolute total momentum, angular momentum, and enstrophy versus time are shown in figure 9, and we observe very similar results for EMAC and SKEW, except for momentum where EMAC gives close to round off error while SKEW is O(10 −5 ), which is still quite small." ]
[ "Contours of absolute vorticity for EMAC and SKEW are shown in figure 9 , and they both display qualitative behavior consistent with results of #OTHEREFR , although with some minor differences being that the max absolute vorticity for SKEW is slightly higher (notice the colorbar scale), and perhaps more important is that the center of the SKEW eddies at later times show oscillations while those for EMAC do not." ]
[ "time units", "enstrophy" ]
result
{ "title": "Longer time accuracy for incompressible Navier-Stokes simulations with the EMAC formulation", "abstract": "In this paper, we consider the recently introduced EMAC formulation for the incompressible Navier-Stokes (NS) equations, which is the only known NS formulation that conserves energy, momentum and angular momentum when the divergence constraint is only weakly enforced. Since its introduction, the EMAC formulation has been successfully used for a wide variety of fluid dynamics problems. We prove that discretizations using the EMAC formulation are potentially better than those built on the commonly used skew-symmetric formulation, by deriving a better longer time error estimate for EMAC: while the classical results for schemes using the skew-symmetric formulation have Gronwall constants dependent on exp(C · Re · T ) with Re the Reynolds number, it turns out that the EMAC error estimate is free from this explicit exponential dependence on the Reynolds number. Additionally, it is demonstrated how EMAC admits smaller lower bounds on its velocity error, since incorrect treatment of linear momentum, angular momentum and energy induces lower bounds for L 2 velocity error, and EMAC treats these quantities more accurately. Results of numerical tests for channel flow past a cylinder and 2D Kelvin-Helmholtz instability are also given, both of which show that the advantages of EMAC over the skew-symmetric formulation increase as the Reynolds number gets larger and for longer simulation times. in a domain Ω ⊂ R d , d=2 or 3, with polyhedral and Lipschitz boundary, u and p representing the unknown velocity and pressure, f an external force, u 0 the initial velocity, and ν the kinematic viscosity which is inversely proportional to the Reynolds number Re. Appropriate boundary conditions are required to close the system, and for simplicity we will consider the case of homogeneous Dirichlet boundary conditions, u| ∂Ω = 0. In the recent work [6], the authors showed that due to the divergence constraint, the NSE nonlinearity could be equivalently be written as u · ∇u + ∇p = 2D(u)u + (div u)u + ∇P, with P = p− 1 2 |u| 2 and D denoting the rate of deformation tensor. Reformulating in this way was named in [6] to be the energy, momentum and angular momentum conserving (EMAC) formulation of the NSE, since when discretized with a Galerkin method that only weakly enforces the divergence constraint, the EMAC formulation still produces a scheme that conserves each of energy, momentum, and angular-momentum, as well as properly defined 2D enstrophy, helicity, and total vorticity. This is in contrast to the well-known convective, conservative, rotational, and skew-symmetric formulations, which are each shown in [6] to not conserve at least one of energy, momentum or angular momentum. The EMAC formulation, and its related numerical schemes, is part of a long line of research extending back at least to Arakawa that has the theme \"incorporating more accurate physics into discretizations leads to more stable and accurate numerical solutions, especially over long time intervals.\" There are typically many ways to discretize a particular PDE, but choosing (or developing) a method that more accurately reproduces important physical balances or conservation laws will often lead to better solutions. 
Arakawa recognized this when he designed an energy and enstrophy conserving scheme for the 2D Navier-Stokes equations in [2], as did Fix for ocean circulation models in [11] , Arakawa and Lamb for the shallow water equations [3], and many others for various evolutionary systems from physics, e.g. [24, 1, 38, 34, 32, 30, 3] . It is important to note that if divergence-free elements are used, such as those recently developed in [16, 40, 15, 4] , then the finite element velocity found with the EMAC formulation is the same vector field as recovered from more traditional convective and skew-symmetric formulations, and all of these conservation properties will hold for those formulations as well. However, the development of strongly divergence-free methods is still quite new, often requires non-standard meshing and elements, and is not yet included into most major software packages. Since its original development in 2017 in [6] , the EMAC formulation has gained considerable attention by the CFD community. It has been used for a wide variety of problems, including vortex-induced vibration [31] , turbulent flow simulation [22] , cardiovascular simulations and hemodynamics [10, 9] , noise radiated by an open cavity [25] , and others [29, 23] . It has proven successful in these simulations, and a common theme reported for it has been that it exhibits low dissipation compared to other common schemes, which is likely due to EMAC's better adherence to physical conservation laws and balances. Not surprisingly, less has been done from an analysis viewpoint, as only one paper has appeared in this direction; in [7], the authors analyzed conservation properties of various time stepping methods for EMAC. In particular, no analysis for EMAC has been found which improves upon the well-known analysis of mixed finite elements for the incompressible NSE in skew-symmetric form. The present paper addresses the challenge of providing such new analysis. This paper extends the study of the EMAC formulation both analytically and computationally. Analytically, we show how the better convergence properties of EMAC unlock the potential for decreasing the approximation error of FE methods. In particular, we show that while the classical semidiscrete error bound for the skew-symmetric formulation has a Gronwall constant exp(C · Re · T ) [18], where T is the simulation end time, the analogous EMAC scheme has a Gronwall constant exp(C · T ), i.e. with no explicit exponential dependence on Re (and the rest of the terms in the error bound are similar). We note that previously, such ν-uniform error bounds were believed to be an exclusive property of finite element methods that enforced the divergence constraint strongly through divergence-free elements [37] or through stabilization/penalization of the divergence error [8] . Additionally, we show how the lack of momentum conservation in convective, skew-symmetric and rotational forms produce a lower bound on the error, which EMAC is free from. Numeri-" }
{ "title": "On reference solutions and the sensitivity of the 2D Kelvin-Helmholtz instability problem", "abstract": "Two-dimensional Kelvin-Helmholtz instability problems are popular examples for assessing discretizations for incompressible flows at high Reynolds number. Unfortunately, the results in the literature differ considerably. This paper presents computational studies of a Kelvin-Helmholtz instability problem with high order divergence-free finite element methods. Reference results in several quantities of interest are obtained for three different Reynolds numbers up to the beginning of the final vortex pairing. A meshindependent prediction of the final pairing is not achieved due to the sensitivity of the considered problem with respect to small perturbations. A theoretical explanation of this sensitivity to small perturbations is provided based on the theory of self-organization of 2D turbulence. Possible sources of perturbations that arise in almost any numerical simulation are discussed." }
2003.06972
1803.06893
Consistency error bounds.
This compares well to results computed with a higher-order method in #REFR for the planar case with Re = 10⁴.
[ "resulting in C K (Γ) 2 ≤ 1 2 1 .", "Substituting this in the above estimate for the kinetic energy, we arrive at the bound E(t) ≤ E(0) exp (−8ν t) = E(0) exp −4 · 10 −5 t .", "In Figure 7 .3 we show the kinetic energy plots for the computed solutions together with exponential fitting.", "There are two obvious reasons for the computed energy to decay faster than the upper estimate (7.5) suggests: the presence of numerical diffusion and the persistence of higher harmonics in the true solution.", "On the finest mesh the numerical solution looses about 0.5% of kinetic energy up to the point when the solution is dominated by two counter-rotating vortices." ]
[]
[ "higher order method" ]
result
{ "title": "Error analysis of higher order trace finite element methods for the surface Stokes equations", "abstract": "The paper studies a higher order unfitted finite element method for the Stokes system posed on a surface in R 3 . The method employs parametric P k -P k−1 finite element pairs on tetrahedral bulk mesh to discretize the Stokes system on embedded surface. Stability and optimal order convergence results are proved. The proofs include a complete quantification of geometric errors stemming from approximate parametric representation of the surface. Numerical experiments include formal convergence studies and an example of the Kelvin-Helmholtz instability problem on the unit sphere." }
{ "title": "On reference solutions and the sensitivity of the 2D Kelvin-Helmholtz instability problem", "abstract": "Two-dimensional Kelvin-Helmholtz instability problems are popular examples for assessing discretizations for incompressible flows at high Reynolds number. Unfortunately, the results in the literature differ considerably. This paper presents computational studies of a Kelvin-Helmholtz instability problem with high order divergence-free finite element methods. Reference results in several quantities of interest are obtained for three different Reynolds numbers up to the beginning of the final vortex pairing. A meshindependent prediction of the final pairing is not achieved due to the sensitivity of the considered problem with respect to small perturbations. A theoretical explanation of this sensitivity to small perturbations is provided based on the theory of self-organization of 2D turbulence. Possible sources of perturbations that arise in almost any numerical simulation are discussed." }
1808.04669
1803.06893
Numerical results
Good agreement in the kinetic energy can be clearly seen, while the enstrophy agrees well until time t̄ = 150, when the last vortex merging took place in our simulation; this merging happens at a much later time t̄ = 250 for the scheme used in #REFR .
[ "The numerical dissipation in our simulation triggered the last vortex merging in a much earlier time, since we use a lower order method on a coarser mesh compared with #OTHEREFR .", "We notice that a numerical simulation at the scale of #OTHEREFR is out of reach for our desktop-based simulation.", "However, notice that the scheme #OTHEREFR involves one Stokes solver per time step while ours involve one (hybrid-)mixed Poisson per time step.", "Hence, using the same order approximation on the same mesh, our scheme shall be faster than that used in #OTHEREFR for the current high-Reynolds number flow simulation. In Fig.", "5 , we plot the evolution of kinetic energy and enstrophy of our simulation, together with the reference data provided in #OTHEREFR ." ]
[ "Example 4: flow around a cylinder.", "We consider the 2D-2 benchmark problem proposed in #OTHEREFR where a laminar flow around a cylinder is considered.", "The domain is a rectangular channel without an almost vertically centered circular obstacle, c.f.", "The boundary is decomposed into Γ in := {x = 0}, the inflow boundary, Γ out := {x = 2.2}, the outflow boundary, and Γ wall := ∂Ω\\(Γ in ∪ Γ out ), the wall boundary.", "On Γ out we prescribe natural boundary conditions (−ν∇u + pI)n = 0, on Γ wall homogeneous Dirichlet boundary conditions for the velocity (no-slip) and on Γ in the inflow Dirichlet boundary conditions u(0, y, t) = 6ū y(0.41 − y)/0.41 2 · (1, 0), withū = 1 the average inflow velocity." ]
[ "simulation", "kinetic energy" ]
result
{ "title": "An explicit divergence-free DG method for incompressible flow", "abstract": "Abstract. We present an explicit divergence-free DG method for incompressible flow based on velocity formulation only. A globally divergence-free finite element space is used for the velocity field, and the pressure field is eliminated from the equations by design. The resulting ODE system can be discretized using any explicit time stepping methods. We use the third order strongstability preserving Runge-Kutta method in our numerical experiments. Our spatial discretization produces the identical velocity field as the divergenceconforming DG method of Cockburn et al. [5] based on a velocity-pressure formulation, when the same DG operators are used for the convective and viscous parts. Due to the global nature of the divergence-free constraint, there exist no local bases for our finite element space. We present a key result on the efficient implementation of the scheme by identifying the equivalence of the (dense) mass matrix inversion of the globally divergence-free finite element space to a standard (hybrid-)mixed Poisson solver. Hence, in each time step, a (hybrid-)mixed Poisson solver is used, which reflects the global nature of the incompressibility condition. Since we treat viscosity explicitly for the NavierStokes equation, our method shall be best suited for unsteady high-Reynolds number flows so that the CFL constraint is not too restrictive." }
{ "title": "On reference solutions and the sensitivity of the 2D Kelvin-Helmholtz instability problem", "abstract": "Two-dimensional Kelvin-Helmholtz instability problems are popular examples for assessing discretizations for incompressible flows at high Reynolds number. Unfortunately, the results in the literature differ considerably. This paper presents computational studies of a Kelvin-Helmholtz instability problem with high order divergence-free finite element methods. Reference results in several quantities of interest are obtained for three different Reynolds numbers up to the beginning of the final vortex pairing. A meshindependent prediction of the final pairing is not achieved due to the sensitivity of the considered problem with respect to small perturbations. A theoretical explanation of this sensitivity to small perturbations is provided based on the theory of self-organization of 2D turbulence. Possible sources of perturbations that arise in almost any numerical simulation are discussed." }
1805.01706
1803.06893
Numerical tests
In addition, a qualitative comparison against benchmark data from #REFR is presented in terms of the temporal evolution of the enstrophy E(t) (here we rescale ω_h with √ν to again match the real vorticity).
[ "The characteristic time ist = δ 0 /u ∞ , the Reynolds number is Re= 10000, and the kinematic viscosity is ν = δ 0 u ∞ /Re.", "We use a structured mesh of 128 segments per side, representing 131072 triangular elements, and we solve the problem using our first-order DG scheme, setting again the stabilisation constants to a 11 = c 11 = σ = 1/∆t and d 11 = ν, where the timestep is taken as ∆t =t/20.", "The specification of this problem implies that the solutions will be quite sensitive to the initial perturbations present in the velocity, which will amplify and consequently vortices will appear.", "We proceed to compute numerical solutions until the dimensionless time t = 7, and present in Figure 4 sample solutions at three different simulation times.", "For visualisation purposes we zoom into the region 0.25 ≤ y ≤ 0.75, where all flow patterns are concentrated." ]
[ "We also record the evolution of the palinstrophy P (t), a quantity that encodes the dissipation process.", "These quantities are defined, and we remark that for the palinstrophy we use the discrete gradient associated with the DG discretisation.", "We show these quantities in Figure 5 , where also include results from #OTHEREFR that correspond to coarse and fine mesh solutions of the Navier-Stokes equations using a high order scheme based on Brezzi-Douglas-Marini elements." ]
[ "real vorticity" ]
method
{ "title": "Analysis and approximation of a vorticity-velocity-pressure formulation for the Oseen equations", "abstract": "We introduce a family of mixed methods and discontinuous Galerkin discretisations designed to numerically solve the Oseen equations written in terms of velocity, vorticity, and Bernoulli pressure. The unique solvability of the continuous problem is addressed by invoking a global inf-sup property in an adequate abstract setting for non-symmetric systems. The proposed finite element schemes, which produce exactly divergence-free discrete velocities, are shown to be well-defined and optimal convergence rates are derived in suitable norms. In addition, we establish optimal rates of convergence for a class of discontinuous Galerkin schemes, which employ stabilisation. A set of numerical examples serves to illustrate salient features of these methods." }
{ "title": "On reference solutions and the sensitivity of the 2D Kelvin-Helmholtz instability problem", "abstract": "Two-dimensional Kelvin-Helmholtz instability problems are popular examples for assessing discretizations for incompressible flows at high Reynolds number. Unfortunately, the results in the literature differ considerably. This paper presents computational studies of a Kelvin-Helmholtz instability problem with high order divergence-free finite element methods. Reference results in several quantities of interest are obtained for three different Reynolds numbers up to the beginning of the final vortex pairing. A meshindependent prediction of the final pairing is not achieved due to the sensitivity of the considered problem with respect to small perturbations. A theoretical explanation of this sensitivity to small perturbations is provided based on the theory of self-organization of 2D turbulence. Possible sources of perturbations that arise in almost any numerical simulation are discussed." }
1909.06229
1803.06893
Piecewise smooth manifolds
We can hence compare our numerical solution on Γ_0 to the results in the literature #REFR .
[ "In this subsection we consider 4 similar but different cylindrical setups in the following: is an open cylinder of height 1 with radius = (2 ) −1 , i.e.", "perimeter 1 and we can isometrically map the unit square (periodic in -direction) on Γ 0 . On the boundary we prescribe free slip boundary condition.", "As the surface Navier-Stokes equations are invariant under isometric maps we know that the solution to the corresponding 2D Kelvin-Helmholtz problem is identical." ]
[ "Γ 1 is a corresponding closed cylinder with bottom and top added, i.e. without boundary.", "Γ 2 is similar to Γ 1 except for the decreased height of 1 − 2 .", "Hence, the geodesics from the center of the top of the cylinder to the center of the bottom of the cylinder have length 1.", "The last case, case 3 considers an even shorter closed cylinder with height 1 2 . In Fig.", "13 the geometries and used meshes are sketched alongside with the decay of energy and enstrophy over time whereas in Fig." ]
[ "numerical solution" ]
result
{ "title": "Divergence-free tangential finite element methods for incompressible flows on surfaces", "abstract": "In this work we consider the numerical solution of incompressible flows on twodimensional manifolds. Whereas the compatibility demands of the velocity and the pressure spaces are known from the flat case one further has to deal with the approximation of a velocity field that lies only in the tangential space of the given geometry. Abandoning 1 -conformity allows us to construct finite elements which are -due to an application of the Piola transformation -exactly tangential. To reintroduce continuity (in a weak sense) we make use of (hybrid) discontinuous Galerkin techniques. To further improve this approach, (div Γ )-conforming finite elements can be used to obtain exactly divergence-free velocity solutions. We present several new finite element discretizations. On a number of numerical examples we examine and compare their qualitative properties and accuracy." }
{ "title": "On reference solutions and the sensitivity of the 2D Kelvin-Helmholtz instability problem", "abstract": "Two-dimensional Kelvin-Helmholtz instability problems are popular examples for assessing discretizations for incompressible flows at high Reynolds number. Unfortunately, the results in the literature differ considerably. This paper presents computational studies of a Kelvin-Helmholtz instability problem with high order divergence-free finite element methods. Reference results in several quantities of interest are obtained for three different Reynolds numbers up to the beginning of the final vortex pairing. A meshindependent prediction of the final pairing is not achieved due to the sensitivity of the considered problem with respect to small perturbations. A theoretical explanation of this sensitivity to small perturbations is provided based on the theory of self-organization of 2D turbulence. Possible sources of perturbations that arise in almost any numerical simulation are discussed." }
1703.05135
0909.2735
Notations and Description of the Phase Transition Model
In this section we fix notation and recall some properties of the 2-phase traffic model introduced in #REFR .
[]
[ "As already said, the model (1) is an extension of the classical LWR model, given by the following scalar conservation law", "where ρ is the traffic density and V = V (t, x, ρ) is the speed.", "We consider the following two assumptions on the speed:", "• We assume that, at a given density, different drivers may choose different velocities, that is, we assume that V = w ψ(ρ), where ψ = ψ(ρ) is a C 2 function and w = w(t, x) is the maximal speed of a driver, located at position x at time t.", "• We impose an overall bound on the speed V max . We get the following 2 × 2 system" ]
[ "2-Phase traffic model" ]
background
{ "title": "The Godunov method for a 2-phase model", "abstract": "We consider the Godunov numerical method to the phase-transition traffic model, proposed in [1], by Colombo, Marcellini, and Rascle. Numerical tests are shown to prove the validity of the method. Moreover we highlight the differences between such model and the one proposed in [2], by Blandin, Work, Goatin, Piccoli, and Bayen." }
{ "title": "A 2-phase traffic model based on a speed bound", "abstract": "We extend the classical LWR traffic model allowing different maximal speeds to different vehicles. Then, we add a uniform bound on the traffic speed. The result, presented in this paper, is a new macroscopic model displaying 2 phases, based on a non-smooth 2 × 2 system of conservation laws. This model is compared with other models of the same type in the current literature, as well as with a kinetic one. Moreover, we establish a rigorous connection between a microscopic Follow-The-Leader model based on ordinary differential equations and this macroscopic continuum model. Mathematics Subject Classification: 35L65, 90B20" }
1811.02514
1711.04819
III. PROPOSED UNCERTAINTY QUANTIFICATION METHODS
Firstly, we are now concerned with UQ strategies for general image/signal processing problems, instead of just the special application to RI imaging in #REFR .
[ "Then a local credible interval (ξ −,Ωi , ξ +,Ωi ) for region Ω i is defined by #OTHEREFR where", "N is the index operator on Ω i with value 1 for pixels in Ω i otherwise 0.", "Note that ξ −,Ωi and ξ +,Ωi are actually the values that saturate the HPD credible region C α from above and from below at Ω i .", "Then the local credible interval (ξ − , ξ + ) for the whole image/signal is obtained by gathering all the (ξ −,Ωi , ξ +,Ωi ), ∀i, i.e.,", "We hereby briefly clarify the distinctions of this work from #OTHEREFR ." ]
[ "Secondly, here we adjust µ automatically, but #OTHEREFR assumes µ is known beforehand.", "Finally, we consider the over-complete bases Ψ (such as SARA #OTHEREFR , #OTHEREFR ) and explore their influence in UQ with synthesis and analysis priors, which is not considered in #OTHEREFR . 1 − α 1 − α Fig. 3 . HPD credible region.", "Plots on the left and right are the results using orthonormal basis and SARA dictionary, respectively.", "MRI brain image is used as an example here (results for RI image M31 are similar)." ]
[ "general image/signal processing", "RI imaging" ]
background
{ "title": "Quantifying Uncertainty in High Dimensional Inverse Problems by Convex Optimisation", "abstract": "Abstract-Inverse problems play a key role in modern image/signal processing methods. However, since they are generally ill-conditioned or ill-posed due to lack of observations, their solutions may have significant intrinsic uncertainty. Analysing and quantifying this uncertainty is very challenging, particularly in high-dimensional problems and problems with non-smooth objective functionals (e.g. sparsity-promoting priors). In this article, a series of strategies to visualise this uncertainty are presented, e.g. highest posterior density credible regions, and local credible intervals (cf. error bars) for individual pixels and superpixels. Our methods support non-smooth priors for inverse problems and can be scaled to high-dimensional settings. Moreover, we present strategies to automatically set regularisation parameters so that the proposed uncertainty quantification (UQ) strategies become much easier to use. Also, different kinds of dictionaries (complete and over-complete) are used to represent the image/signal and their performance in the proposed UQ methodology is investigated." }
{ "title": "Uncertainty quantification for radio interferometric imaging: II. MAP estimation", "abstract": "Uncertainty quantification is a critical missing component in radio interferometric imaging that will only become increasingly important as the big-data era of radio interferometry emerges. Statistical sampling approaches to perform Bayesian inference, like Markov Chain Monte Carlo (MCMC) sampling, can in principle recover the full posterior distribution of the image, from which uncertainties can then be quantified. However, for massive data sizes, like those anticipated from the Square Kilometre Array (SKA), it will be difficult if not impossible to apply any MCMC technique due to its inherent computational cost. We formulate Bayesian inference problems with sparsity-promoting priors (motivated by compressive sensing), for which we recover maximum a posteriori (MAP) point estimators of radio interferometric images by convex optimisation. Exploiting recent developments in the theory of probability concentration, we quantify uncertainties by post-processing the recovered MAP estimate. Three strategies to quantify uncertainties are developed: (i) highest posterior density credible regions; (ii) local credible intervals (cf. error bars) for individual pixels and superpixels; and (iii) hypothesis testing of image structure. These forms of uncertainty quantification provide rich information for analysing radio interferometric observations in a statistically robust manner. Our MAP-based methods are approximately 10 5 times faster computationally than state-of-the-art MCMC methods and, in addition, support highly distributed and parallelised algorithmic structures. For the first time, our MAP-based techniques provide a means of quantifying uncertainties for radio interferometric imaging for realistic data volumes and practical use, and scale to the emerging big-data era of radio astronomy." }
1105.4449
1011.1350
1.4.
In #REFR a geometric complexity theory (GCT) study of MMult and its GL(V_1) × GL(V_2) × GL(V_3)-orbit closure is considered.
[ "Connections to the GCT program.", "The triangle case is especially interesting because we remark below that in the critical dimension case it corresponds to", "where, setting", ",e 2 ,e 1 ∈ V 1 ⊗V 2 ⊗V 3 is the matrix multiplication operator, that is, as a tensor, M M ult e 3 ,e 2 ,e 1 = Id E 3 ⊗Id E 2 ⊗Id E 1 ." ]
[ "One sets e 1 = e 2 = e 3 = n and studies the geometry as n → ∞.", "It is a toy case of the varieties introduced by Mulmuley and Sohoni #OTHEREFR 13, #OTHEREFR , letting S d C k denote the homogeneous polynomials of degree d on (C k ) * , the varieties are GL n 2 · det n ⊂ S n C n 2 and GL n 2 · ℓ n−m perm m ⊂ S n C n 2 .", "Here det n ∈ S n C n 2 is the determinant, a homogeneous polynomial of degree n in n 2 variables, n > m, ℓ ∈ S 1 C 1 , perm m ∈ S m C m 2 is the permanent and an inclusion C m 2 +1 ⊂ C n 2 has been chosen.", "In #OTHEREFR it was shown that End C n 2 ·det n = GL n 2 · det n , and determining the difference between these sets is a subject of current research.", "The critical loop case with e s = 3 for all s is also related to the GCT program, as it corresponds to the multiplication of n matrices of size three." ]
[ "geometric complexity theory" ]
background
{ "title": "On the geometry of tensor network states", "abstract": "Abstract. We answer a question of L. Grasedyck that arose in quantum information theory, showing that the limit of tensors in a space of tensor network states need not be a tensor network state. We also give geometric descriptions of spaces of tensor networks states corresponding to trees and loops. Grasedyck's question has a surprising connection to the area of Geometric Complexity Theory, in that the result is equivalent to the statement that the boundary of the Mulmuley-Sohoni type variety associated to matrix multiplication is strictly larger than the projections of matrix multiplication (and re-expressions of matrix multiplication and its projections after changes of bases). Tensor Network States are also related to graphical models in algebraic statistics." }
{ "title": "Geometric complexity theory and tensor rank", "abstract": "Mulmuley and Sohoni [25, 26] proposed to view the permanent versus determinant problem as a specific orbit closure problem and to attack it by methods from geometric invariant and representation theory. We adopt these ideas towards the goal of showing lower bounds on the border rank of specific tensors, in particular for matrix multiplication. We thus study specific orbit closure problems for the group A key idea from [26] is that the irreducible Gs-representations occurring in the coordinate ring of the G-orbit closure of a stable tensor w ∈ W are exactly those having a nonzero invariant with respect to the stabilizer group of w. However, we prove that by considering Gs-representations, only trivial lower bounds on border rank can be shown. It is thus necessary to study G-representations, which leads to geometric extension problems that are beyond the scope of the subgroup restriction problems emphasized in [25, 26] and its follow up papers. We prove a very modest lower bound on the border rank of matrix multiplication tensors using G-representations. This shows at least that the barrier for Gs-representations can be overcome. To advance, we suggest the coarser approach to replace the semigroup of representations of a tensor by its moment polytope. We prove first results towards determining the moment polytopes of matrix multiplication and unit tensors. * A full version of this paper is available at arxiv.org/abs/1011. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Categories and Subject Descriptors Algorithms, Theory Keywords geometric complexity theory, tensor rank, matrix multiplication, orbit closures, multiplicities, Kronecker coefficients We thank Matthias Christandl, Shrawan Kumar, Joseph Landsberg, Laurent Manivel, Ketan Mulmuley, and Jerzy Weyman for helpful discussions. We are grateful to the Fields Institute in Toronto for providing a forum for discussing some questions of GCT in the fall of 2009. Moreover, we thank the Center for Computational Intractability in Princeton for making possible the workshop in Geometric Complexity Theory in 2010, where some of the results presented here were discussed." }
1210.8368
1011.1350
HWV Obstructions
But the converse is not true in general; see, for instance, the discussion of Strassen's invariant in #REFR .
[ ",λ,i gh) = 0, which proves the proposition.", "We call such f λ a HWV obstruction against h ∈ Gc.", "We will show that some HWVs have a succinct encoding, which is linear in their degree d. . These properties can be rephrased as follows:", "• There exists some HWV f λ in C[V ] of weight λ that does not vanish on Gh.", "If λ is an occurence obstruction against h ∈ Gc, then there exists a HWV obstruction f λ of weight λ." ]
[ "Clearly, if the irreducible represenation corresponding to λ occurs in C[V ] with high multiplicity, then item one above is much harder to satisfy for occurence obstructions.", "While Proposition 3.3 tells us that h ∈ Gc can, in principle, always be proven by exhibiting a HWV obstruction, it is unclear whether this is also the case for occurence obstructions. We state this as an important open problem.", "hm,n / ∈ Gcn, is there an occurence obstruction proving this?", "Mulmuley and Sohoni conjecture that (2.2) can be proved with occurence obstructions, see [22, §3] ." ]
[ "discussion" ]
background
{ "title": "Explicit lower bounds via geometric complexity theory", "abstract": "We prove the lower bound R(Mm) ≥ 3 2 m 2 − 2 on the border rank of m × m matrix multiplication by exhibiting explicit representation theoretic (occurence) obstructions in the sense the geometric complexity theory (GCT) program. While this bound is weaker than the one recently obtained by Landsberg and Ottaviani, these are the first significant lower bounds obtained within the GCT program. Behind the proof is an explicit description of the highest weight vectors in Sym * in terms of combinatorial objects, called obstruction designs. This description results from analyzing the process of polarization and Schur-Weyl duality." }
{ "title": "Geometric complexity theory and tensor rank", "abstract": "Mulmuley and Sohoni [25, 26] proposed to view the permanent versus determinant problem as a specific orbit closure problem and to attack it by methods from geometric invariant and representation theory. We adopt these ideas towards the goal of showing lower bounds on the border rank of specific tensors, in particular for matrix multiplication. We thus study specific orbit closure problems for the group A key idea from [26] is that the irreducible Gs-representations occurring in the coordinate ring of the G-orbit closure of a stable tensor w ∈ W are exactly those having a nonzero invariant with respect to the stabilizer group of w. However, we prove that by considering Gs-representations, only trivial lower bounds on border rank can be shown. It is thus necessary to study G-representations, which leads to geometric extension problems that are beyond the scope of the subgroup restriction problems emphasized in [25, 26] and its follow up papers. We prove a very modest lower bound on the border rank of matrix multiplication tensors using G-representations. This shows at least that the barrier for Gs-representations can be overcome. To advance, we suggest the coarser approach to replace the semigroup of representations of a tensor by its moment polytope. We prove first results towards determining the moment polytopes of matrix multiplication and unit tensors. * A full version of this paper is available at arxiv.org/abs/1011. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Categories and Subject Descriptors Algorithms, Theory Keywords geometric complexity theory, tensor rank, matrix multiplication, orbit closures, multiplicities, Kronecker coefficients We thank Matthias Christandl, Shrawan Kumar, Joseph Landsberg, Laurent Manivel, Ketan Mulmuley, and Jerzy Weyman for helpful discussions. We are grateful to the Fields Institute in Toronto for providing a forum for discussing some questions of GCT in the fall of 2009. Moreover, we thank the Center for Computational Intractability in Princeton for making possible the workshop in Geometric Complexity Theory in 2010, where some of the results presented here were discussed." }
1911.03990
1011.1350
Result details
The proof is based on the technique in #REFR and is postponed to Section 10.
[ "The following Proposition 4.1 writes the multiplicity mult λ * C[Gp] as a nonnegative sum of products of multi-Littlewood-Richardson coefficients and plethysm coefficients.", "Then" ]
[ "We remark that if Problem 9 in [Sta00] is resolved positively, then Proposition 4.1 implies that the multiplicity mult λ * C[Gp] has a combinatorial description, i.e., the map (λ, m, d, D) → mult λ * C[Gp] is in #P.", "The same holds also for its summands b(λ, ̺, D, d).", "It is known that mult λ * C[Gq] = a λ (D, d) (see e.g. [Lan17, Sec.", "9.2.3]), so the same holds for mult λ * C[Gq].", "Our main technical theorem that enables us to find obstructions is the following." ]
[ "proof technique", "proof" ]
method
{ "title": "Implementing geometric complexity theory: On the separation of orbit closures via symmetries", "abstract": "Understanding the difference between group orbits and their closures is a key difficulty in geometric complexity theory (GCT): While the GCT program is set up to separate certain orbit closures, many beautiful mathematical properties are only known for the group orbits, in particular close relations with symmetry groups and invariant spaces, while the orbit closures seem much more difficult to understand. However, in order to prove lower bounds in algebraic complexity theory, considering group orbits is not enough. In this paper we tighten the relationship between the orbit of the power sum polynomial and its closure, so that we can separate this orbit closure from the orbit closure of the product of variables by just considering the symmetry groups of both polynomials and their representation theoretic decomposition coefficients. In a natural way our construction yields a multiplicity obstruction that is neither an occurrence obstruction, nor a so-called vanishing ideal occurrence obstruction. All multiplicity obstructions so far have been of one of these two types. Our paper is the first implementation of the ambitious approach that was originally suggested in the first papers on geometric complexity theory by Sohoni (SIAM J Comput 2001, 2008): Before our paper, all existence proofs of obstructions only took into account the symmetry group of one of the two polynomials (or tensors) that were to be separated. In our paper the multiplicity obstruction is obtained by comparing the representation theoretic decomposition coefficients of both symmetry groups. Our proof uses a semi-explicit description of the coordinate ring of the orbit closure of the power sum polynomial in terms of Young tableaux, which enables its comparison to the coordinate ring of the orbit." }
{ "title": "Geometric complexity theory and tensor rank", "abstract": "Mulmuley and Sohoni [25, 26] proposed to view the permanent versus determinant problem as a specific orbit closure problem and to attack it by methods from geometric invariant and representation theory. We adopt these ideas towards the goal of showing lower bounds on the border rank of specific tensors, in particular for matrix multiplication. We thus study specific orbit closure problems for the group A key idea from [26] is that the irreducible Gs-representations occurring in the coordinate ring of the G-orbit closure of a stable tensor w ∈ W are exactly those having a nonzero invariant with respect to the stabilizer group of w. However, we prove that by considering Gs-representations, only trivial lower bounds on border rank can be shown. It is thus necessary to study G-representations, which leads to geometric extension problems that are beyond the scope of the subgroup restriction problems emphasized in [25, 26] and its follow up papers. We prove a very modest lower bound on the border rank of matrix multiplication tensors using G-representations. This shows at least that the barrier for Gs-representations can be overcome. To advance, we suggest the coarser approach to replace the semigroup of representations of a tensor by its moment polytope. We prove first results towards determining the moment polytopes of matrix multiplication and unit tensors. * A full version of this paper is available at arxiv.org/abs/1011. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Categories and Subject Descriptors Algorithms, Theory Keywords geometric complexity theory, tensor rank, matrix multiplication, orbit closures, multiplicities, Kronecker coefficients We thank Matthias Christandl, Shrawan Kumar, Joseph Landsberg, Laurent Manivel, Ketan Mulmuley, and Jerzy Weyman for helpful discussions. We are grateful to the Fields Institute in Toronto for providing a forum for discussing some questions of GCT in the fall of 2009. Moreover, we thank the Center for Computational Intractability in Princeton for making possible the workshop in Geometric Complexity Theory in 2010, where some of the results presented here were discussed." }
1702.07486
1508.00271
Related work
An encoding scheme is also applied by #REFR , who use an encoder-recurrent-decoder (ERD) model to, among other things, predict human motion.
[ "The experiments are restricted to walking, jogging and running motions.", "Instead, we seek a more general model that can capture a large variety of actions.", "In #OTHEREFR , a low-dimensional manifold of human motion is learned using a one-layer convolutional autoencoder.", "For motion synthesis, the learned features and high-level action commands form the input to a feed-forward network that is trained to reconstruct the desired motion pattern.", "While the idea of manifold learning resembles our approach, the use of convolutional and pooling layers prevents the implementation of deeper hierarchies due to blurring effects #OTHEREFR ." ]
[ "The encoder-decoder framework learns to reconstruct joint angles, while the recurrent middle layer represents the temporal dynamics.", "As the whole framework is jointly trained, the learned representation is tuned towards the dynamics of the recurrent network and might not be generalizable to new tasks.", "Finally, a combination of recurrent networks and the structural hierarchy of the human body for motion prediction has been introduced by #OTHEREFR in form of structural RNNs (S-RNN).", "By constructing a structural graph in which both nodes and edges consist of LSTMs, the temporal dynamics of both individual limbs and the whole body are modelled.", "Without the aid of a low-dimensional representation, a single model is trained for each motion." ]
[ "human motion", "encoder-recurrent-decoder (ERD) model" ]
method
{ "title": "Deep Representation Learning for Human Motion Prediction and Classification", "abstract": "Generative models of 3D human motion are often restricted to a small number of activities and can therefore not generalize well to novel movements or applications. In this work we propose a deep learning framework for human motion capture data that learns a generic representation from a large corpus of motion capture data and generalizes well to new, unseen, motions. Using an encoding-decoding network that learns to predict future 3D poses from the most recent past, we extract a feature representation of human motion. Most work on deep learning for sequence prediction focuses on video and speech. Since skeletal data has a different structure, we present and evaluate different network architectures that make different assumptions about time dependencies and limb correlations. To quantify the learned features, we use the output of different layers for action classification and visualize the receptive fields of the network units. Our method outperforms the recent state of the art in skeletal motion prediction even though these use action specific training data. Our results show that deep feedforward networks, trained from a generic mocap database, can successfully be used for feature extraction from human motion data and that this representation can be used as a foundation for classification and prediction." }
{ "title": "Recurrent Network Models for Human Dynamics", "abstract": "We propose the Encoder-Recurrent-Decoder (ERD)" }
1702.07486
1508.00271
Motion prediction of specific actions
Note that predictions beyond 560 ms can diverge substantially from the ground truth due to the stochasticity of human motion #REFR while remaining meaningful to a human observer.
[ "This indicates that a structural prior is beneficial to motion prediction.", "As expected, the fine-tuning to specific actions decreases the prediction error and is especially effective during long-term prediction and for actions that are not contained in the original training data, such as \"smoking\".", "We depict the prediction for a walking sequence contained in the H3.6M dataset for the whole range of around 1600 ms in Figure 4 .", "The fine-tuned model (middle) predicts the ground truth (top) with a high level of accuracy.", "The prediction by the general model is accurate up to around 600 ms." ]
[]
[ "human motion" ]
background
{ "title": "Deep Representation Learning for Human Motion Prediction and Classification", "abstract": "Generative models of 3D human motion are often restricted to a small number of activities and can therefore not generalize well to novel movements or applications. In this work we propose a deep learning framework for human motion capture data that learns a generic representation from a large corpus of motion capture data and generalizes well to new, unseen, motions. Using an encoding-decoding network that learns to predict future 3D poses from the most recent past, we extract a feature representation of human motion. Most work on deep learning for sequence prediction focuses on video and speech. Since skeletal data has a different structure, we present and evaluate different network architectures that make different assumptions about time dependencies and limb correlations. To quantify the learned features, we use the output of different layers for action classification and visualize the receptive fields of the network units. Our method outperforms the recent state of the art in skeletal motion prediction even though these use action specific training data. Our results show that deep feedforward networks, trained from a generic mocap database, can successfully be used for feature extraction from human motion data and that this representation can be used as a foundation for classification and prediction." }
{ "title": "Recurrent Network Models for Human Dynamics", "abstract": "We propose the Encoder-Recurrent-Decoder (ERD)" }
1908.07214
1508.00271
Spatio-temporal Recurrent Neural Network (STRNN)
Note that unlike some RNNs #REFR , the decoding and predicting only start after the TEncoder takes all the input, making it a sequence-to-sequence model.
[ "It also enables us to impose constraints in a longer time span to stabilize the network.", "The temporal network is named Two-way Bidirectional Temporal Network (TBTN), consisting of three parts: the temporal encoder (TEncoder), the temporal decoder (TDecoder) and the temporal predictor (TPredictor) (Figure 3 ).", "The training is done by iterations of forward and backward passes.", "The forward pass goes through an encoding phase and then a decoding/predicting phase.", "It starts by taking m + 1 frames into TEncoder." ]
[ "After the encoding phase, the internal state of TEncoder is copied to TDecoder and TPredictor as a good/reasonable initialization.", "Then, the forward pass continues on TDecoder and TPredictor simultaneously.", "The decoding in TBTN unrolls in both directions in time.", "The task of TDecoder is to decode the frames backwards in time and the task of the TPredictor is to predict the frames forwards into the future.", "The backward decoding improves the convergence speed as it first decodes the last few frames that the encoder just sees." ]
[ "RNNs" ]
background
{ "title": "Spatio-temporal Manifold Learning for Human Motions via Long-horizon Modeling", "abstract": "Abstract-Data-driven modeling of human motions is ubiquitous in computer graphics and computer vision applications, such as synthesizing realistic motions or recognizing actions. Recent research has shown that such problems can be approached by learning a natural motion manifold using deep learning on a large amount data, to address the shortcomings of traditional data-driven approaches. However, previous deep learning methods can be sub-optimal for two reasons. First, the skeletal information has not been fully utilized for feature extraction. Unlike images, it is difficult to define spatial proximity in skeletal motions in the way that deep networks can be applied for feature extraction. Second, motion is time-series data with strong multi-modal temporal correlations between frames. On the one hand, a frame could be followed by several candidate frames leading to different motions; on the other hand, long-range dependencies exist where a number of frames in the beginning correlate to a number of frames later. Ineffective temporal modeling would either under-estimate the multi-modality and variance, resulting in featureless mean motion or over-estimate them resulting in jittery motions, which is a major source of visual artifacts. In this paper, we propose a new deep network to tackle these challenges by creating a natural motion manifold that is versatile for many applications. The network has a new spatial component for feature extraction. It is also equipped with a new batch prediction model that predicts a large number of frames at once, such that long-term temporally-based objective functions can be employed to correctly learn the motion multi-modality and variances. With our system, long-duration motions can be predicted/synthesized using an open-loop setup where the motion retains the dynamics accurately. It can also be used for denoising corrupted motions and synthesizing new motions with given control signals. We demonstrate that our system can create superior results comparing to existing work in multiple applications." }
{ "title": "Recurrent Network Models for Human Dynamics", "abstract": "We propose the Encoder-Recurrent-Decoder (ERD)" }
1904.00442
1508.00271
B. Human motion forecasting
In order to generate predictions for a joint (node) y starting from a given prefix sequence X_pref, we build the distribution p(X | X_pref, y) (see details in Section C) and we sample sequences from that posterior. Our evaluation method and metric again followed #REFR .
[ "For SpaMHMM, we used these same values of M and S and we did 3-fold cross validation on the training data of the action \"walking\" to finetune the value of λ in the range r10´4, 1s. We ended up using λ \" 0.05.", "The number of hidden states in 1-HMM was set to 51 and in K-HMM it was set to 11 hidden states per HMM.", "The same values were then used to train the models for the remaining actions.", "Every model was trained for 100 iterations of EM or until the loss plateaus.", "For SpaMHMM, we did 100 iterations of the inner loop on each M-step, using a learning rate ρ \" 10´2." ]
[ "We fed our model with 8 prefix subsequences with 50 frames each (corresponding to 2 seconds) for each joint from the test subject and we predicted the following 10 frames (corresponding to 400 miliseconds).", "Each prediction was built by sampling 100 sequences from the posterior and averaging.", "We then computed the average mean angle error for the 8 sequences at different time horizons.", "Results are in Table II .", "Among our models (1-HMM, K-HMM, MHMM and SpaMHMM), SpaMHMM outperformed the remaining in all actions except \"eating\"." ]
[ "given prefix sequence", "sequences" ]
method
{ "title": "SpaMHMM: Sparse Mixture of Hidden Markov Models for Graph Connected Entities", "abstract": "Abstract-We propose a framework to model the distribution of sequential data coming from a set of entities connected in a graph with a known topology. The method is based on a mixture of shared hidden Markov models (HMMs), which are jointly trained in order to exploit the knowledge of the graph structure and in such a way that the obtained mixtures tend to be sparse. Experiments in different application domains demonstrate the effectiveness and versatility of the method." }
{ "title": "Recurrent Network Models for Human Dynamics", "abstract": "We propose the Encoder-Recurrent-Decoder (ERD)" }
1705.02082
1508.00271
Related Work
Work of #REFR uses recurrent networks to predict a set of body joint heatmaps at a future frame.
[ "However, it is hard to train many mixtures of high dimensional output spaces, and, as it has been observed, many components often remain un-trained, with one component dominating the rest #OTHEREFR , unless careful mixture balancing is designed #OTHEREFR .", "Many recent data driven approaches predict motion directly from image pixels.", "In #OTHEREFR , a large, nonparametric image patch vocabulary is built for patch motion regression.", "In #OTHEREFR , dense optical flow is predicted from a single image and the multimodality of motion is handled by considering a different softmax loss for every pixel.", "Work of #OTHEREFR predicts ball trajectories in synthetic \"billiard-like\" worlds directly from a sequence of visual glimpses using a regression loss." ]
[ "Such representation though cannot possibly group the heatmap peaks into coherent 2D pose proposals.", "Work of #OTHEREFR casts frame prediction as sequential conditional prediction, and samples from a categorical distribution of 255 pixel values at every pixel location, conditioning at the past history and image generated so far.", "It is unclear how to handle the computational overhead of such models effectively.", "Stochastic neural networks.", "Stochastic variables have been used in a variety of settings in the deep learning literature e.g., for generative modeling, regularization, reinforcement learning, etc." ]
[ "recurrent networks" ]
background
{ "title": "Motion Prediction Under Multimodality with Conditional Stochastic Networks", "abstract": "Given a visual history, multiple future outcomes for a video scene are equally probable, in other words, the distribution of future outcomes has multiple modes. Multimodality is notoriously hard to handle by standard regressors or classifiers: the former regress to the mean and the latter discretize a continuous high dimensional output space. In this work, we present stochastic neural network architectures that handle such multimodality through stochasticity: future trajectories of objects, body joints or frames are represented as deep, non-linear transformations of random (as opposed to deterministic) variables. Such random variables are sampled from simple Gaussian distributions whose means and variances are parametrized by the output of convolutional encoders over the visual history. We introduce novel convolutional architectures for predicting future body joint trajectories that outperform fully connected alternatives [29] . We introduce stochastic spatial transformers through optical flow warping for predicting future frames, which outperform their deterministic equivalents [17] . Training stochastic networks involves an intractable marginalization over stochastic variables. We compare various training schemes that handle such marginalization through a) straightforward sampling from the prior, b) conditional variational autoencoders [23, 29] , and, c) a proposed K-best-sample loss that penalizes the best prediction under a fixed \" prediction budget\". We show experimental results on object trajectory prediction, human body joint trajectory prediction and video prediction under varying future uncertainty, validating quantitatively and qualitatively our architectural choices and training schemes." }
{ "title": "Recurrent Network Models for Human Dynamics", "abstract": "We propose the Encoder-Recurrent-Decoder (ERD)" }
1511.05298
1508.00271
Human motion modeling and forecasting
We show that our structured approach outperforms the state-of-the-art unstructured deep architecture #REFR on motion forecasting from motion capture (mocap) data.
[ "Human body is a good example of separate but well related components.", "Its motion involves complex spatiotemporal interactions between the components (arms, legs, spine), resulting in sensible motion styles like walking, eating etc.", "In this experiment, we represent the complex motion of humans over st-graphs and learn to model them with S-RNN." ]
[ "Several approaches based on Gaussian processes #OTHEREFR , Restricted Boltzmann Machines (RBMs) #OTHEREFR , and RNNs #OTHEREFR have been proposed to model human motion. Recently, Fragkiadaki et al.", "#OTHEREFR proposed an encoder-RNN-decoder (ERD) which gets state-of-the-art forecasting results on H3.6m mocap data set #OTHEREFR . S-RNN architecture for human motion.", "Our S-RNN architecture follows the st-graph shown in Figure 5a .", "According to the st-graph, the spine interacts with all the body parts, and the arms and legs interact with each other.", "The st-graph is automatically transformed to S-RNN following Section 3.2." ]
[ "motion forecasting" ]
method
{ "title": "Structural-RNN: Deep Learning on Spatio-Temporal Graphs", "abstract": "Deep Recurrent Neural Network architectures, though remarkably capable at modeling sequences, lack an intuitive high-level spatio-temporal structure. That is while many problems in computer vision inherently have an underlying high-level structure and can benefit from it. Spatiotemporal graphs are a popular tool for imposing such highlevel intuitions in the formulation of real world problems. In this paper, we propose an approach for combining the power of high-level spatio-temporal graphs and sequence learning success of Recurrent Neural Networks (RNNs). We develop a scalable method for casting an arbitrary spatiotemporal graph as a rich RNN mixture that is feedforward, fully differentiable, and jointly trainable. The proposed method is generic and principled as it can be used for transforming any spatio-temporal graph through employing a certain set of well defined steps. The evaluations of the proposed approach on a diverse set of problems, ranging from modeling human motion to object interactions, shows improvement over the state-of-the-art with a large margin. We expect this method to empower new approaches to problem formulation through high-level spatio-temporal graphs and Recurrent Neural Networks. Links: Web" }
{ "title": "Recurrent Network Models for Human Dynamics", "abstract": "We propose the Encoder-Recurrent-Decoder (ERD)" }
1511.05298
1508.00271
Human motion modeling and forecasting
The motion generated by ERD #REFR stays human-like in the short term but drifts to non-human-like motion in the long term.
[ "Figure 6 shows forecasting 1000ms of human motion on \"eating\" activity -the subject drinks while walking.", "S-RNN stays close to the ground-truth in the short-term and generates human like motion in the long-term.", "On removing edgeRNNs, the parts of human body become independent and stops interacting through parameters.", "Hence without edgeRNNs the skeleton freezes to some mean position. LSTM-3LR suffers with a drifting problem.", "On many test examples it drifts to the mean position of walking human ( #OTHEREFR made similar observations about LSTM-3LR)." ]
[ "This was a common outcome of ERD on complex aperiodic activities, unlike S-RNN.", "Furthermore, ERD produced human motion was non-smooth on many test examples.", "See the video on the project web page for more examples #OTHEREFR . Quantitative evaluation. We follow the evaluation metric of Fragkiadaki et al.", "#OTHEREFR and present the 3D angle error between the forecasted mocap frame and the ground truth in Table 1 . Qualitatively, ERD models human motion better than LSTM-3LR.", "However, in the short-term, it does not mimic the ground-truth as well as LSTM-3LR. Fragkiadaki et al. #OTHEREFR also note this trade-off between ERD and LSTM-3LR." ]
[ "motion", "ERD" ]
background
{ "title": "Structural-RNN: Deep Learning on Spatio-Temporal Graphs", "abstract": "Deep Recurrent Neural Network architectures, though remarkably capable at modeling sequences, lack an intuitive high-level spatio-temporal structure. That is while many problems in computer vision inherently have an underlying high-level structure and can benefit from it. Spatiotemporal graphs are a popular tool for imposing such highlevel intuitions in the formulation of real world problems. In this paper, we propose an approach for combining the power of high-level spatio-temporal graphs and sequence learning success of Recurrent Neural Networks (RNNs). We develop a scalable method for casting an arbitrary spatiotemporal graph as a rich RNN mixture that is feedforward, fully differentiable, and jointly trainable. The proposed method is generic and principled as it can be used for transforming any spatio-temporal graph through employing a certain set of well defined steps. The evaluations of the proposed approach on a diverse set of problems, ranging from modeling human motion to object interactions, shows improvement over the state-of-the-art with a large margin. We expect this method to empower new approaches to problem formulation through high-level spatio-temporal graphs and Recurrent Neural Networks. Links: Web" }
{ "title": "Recurrent Network Models for Human Dynamics", "abstract": "We propose the Encoder-Recurrent-Decoder (ERD)" }
1806.08666
1508.00271
BACKGROUND
For example, Fragkiadaki and colleagues #REFR proposed two architectures: LSTM-3LR (3 layers of Long Short-Term Memory cells) and ERD (Encoder-Recurrent-Decoder), which concatenate LSTM units to model the dynamics of human motion.
[ "Therefore, we will focus our discussion on generative motion models and their application in human motion generation and control.", "Our work builds upon a significant body of previous work on constructing generative statistical models for human motion analysis and synthesis.", "Generative statistical motion models are often represented as a set of mathematical functions, which describe human movement using a small number of hidden parameters and their associated probability distributions.", "Previous generative statistical models include Hidden Markov Models (HMMs) #OTHEREFR , variants of statistical dynamic models for modeling spatial-temporal variations within a temporal window #OTHEREFR , and concatenating statistical motion models into finite graphs of deformable motion models #OTHEREFR .", "Most recent work on generative modeling has been focused on employing deep recurrent neural networks (RNNs) to model dynamic temporal behavior of human motions for motion prediction #OTHEREFR ." ]
[ "Jain and colleagues #OTHEREFR introduced structural RNNs (SRNNs) for human motion prediction and generation by combining high-level spatio-temporal graphs with sequence modeling success of RNNs.", "RNNs is appealing to human motion modeling because it can handle nonlinear dynamics and long-term temporal dependencies in human motions.", "However, as observed by other researchers #OTHEREFR , current deep RNN based methods often have difficulty obtaining good performance for long term motion generation.", "They tend to fail when generating long sequences of motion as the errors in their prediction are fed back into the input and accumulate.", "As a result, their long-term results suffer from occasional unrealistic artifacts such as foot sliding and gradually converge to a static pose." ]
[ "Encoder-Recurrent-Decoder" ]
background
{ "title": "Combining Recurrent Neural Networks and Adversarial Training for Human Motion Synthesis and Control", "abstract": "This paper introduces a new generative deep learning network for human motion synthesis and control. Our key idea is to combine recurrent neural networks (RNNs) and adversarial training for human motion modeling. We first describe an efficient method for training a RNNs model from prerecorded motion data. We implement recurrent neural networks with long short-term memory (LSTM) cells because they are capable of handling nonlinear dynamics and long term temporal dependencies present in human motions. Next, we train a refiner network using an adversarial loss, similar to Generative Adversarial Networks (GANs), such that the refined motion sequences are indistinguishable from real motion capture data using a discriminative network. We embed contact information into the generative deep learning model to further improve the performance of our generative model. The resulting model is appealing to motion synthesis and control because it is compact, contact-aware, and can generate an infinite number of naturally looking motions with infinite lengths. Our experiments show that motions generated by our deep learning model are always highly realistic and comparable to high-quality motion capture data. We demonstrate the power and effectiveness of our models by exploring a variety of applications, ranging from random motion synthesis, online/offline motion control, and motion filtering. We show the superiority of our generative model by comparison against baseline models." }
{ "title": "Recurrent Network Models for Human Dynamics", "abstract": "We propose the Encoder-Recurrent-Decoder (ERD)" }
1807.02350
1508.00271
I. INTRODUCTION
These models slightly outperform the results in #REFR , have lower computational complexity once trained, and are therefore applicable to online tasks, but may overfit the training data due to their deterministic mapping between subsequences.
[ "proposed a generative model for human motion generation using a deep neural architecture with Variational Inference (VI) #OTHEREFR and Bayesian filtering with Dynamic Movement Primitives (DMP) #OTHEREFR which ensures local space-time continuity in movement representation in a reduced space.", "This latent space provides plausibility in data reconstruction, while being generalizable to new samples of movements.", "In #OTHEREFR , the authors compared three different generative structures of encoder-decoder networks with temporal encod-ing, enabling action prediction in the reduced feature space.", "Two used a fully connected DNN, while the last one used a Convolutional Neural Networks (CNN).", "In all these encoderdecoder networks, the encoder learns a smaller representation of an input subsequence x t:t+S while the decoder learns to predict the next data subsequence x t+S+1:t+2S+1 ." ]
[ "[17] proposed a method for motion prediction that outperforms #OTHEREFR by far, and is similar to #OTHEREFR , with the exception that a noise was applied to training samples, by feeding the network with its own generated predicted sequences.", "This noise injection at training time prevents the system overfitting.", "Nevertheless, the learned representation remains biased by the application, i.e.", "prediction, and thus might not learn useful features for recognition purposes.", "The same phenomenon may appear in #OTHEREFR , where a Recurrent Neural Network (RNN) was employed in a generative model, alongside Variational Auto-Encoders (VAE) #OTHEREFR , which generalizes features encoding while being biased by the integration of the RNN internal state variable." ]
[ "training data" ]
result
{ "title": "A Variational Time Series Feature Extractor for Action Prediction", "abstract": "Abstract-We propose a Variational Time Series Feature Extractor (VTSFE), inspired by the VAE-DMP model of Chen et al. [1] , to be used for action recognition and prediction. Our method is based on variational autoencoders. It improves VAE-DMP in that it has a better noise inference model, a simpler transition model constraining the acceleration in the trajectories of the latent space, and a tighter lower bound for the variational inference. We apply the method for classification and prediction of whole-body movements on a dataset with 7 tasks and 10 demonstrations per task, recorded with a wearable motion capture suit. The comparison with VAE and VAE-DMP suggests the better performance of our method for feature extraction. An open-source software implementation of each method with TensorFlow is also provided. In addition, a more detailed version of this work can be found in the indicated code repository. Although it was meant to, the VTSFE hasn't been tested for action prediction, due to a lack of time in the context of Maxime Chaveroche's Master thesis at INRIA." }
{ "title": "Recurrent Network Models for Human Dynamics", "abstract": "We propose the Encoder-Recurrent-Decoder (ERD)" }
1702.08212
1508.00271
B. Online human motion prediction
Note that due to the stochasticity in human motion, an accurate long-term prediction (> 560 ms) is often not possible #REFR .
[ "Additionally, we report a variance estimate for each time step in the predicted time window ∆t as the average sum of variances of the limb and spatial dimensions. In Fig.", "4 a)-c) we visualize the motion prediction errors of the torso, right arm and left arm model for the duration of 1660 ms.", "Since the skeleton is represented in a local reference frame, any natural movement of the torso is restricted to rotations. Therefore, the prediction error is comparatively low.", "The MPE for both arms is similar and grows more strongly than for the torso.", "Interestingly, the model seems to learn that there is less uncertainty for the initial position of the predictions." ]
[ "For HRI it is important to represent these uncertainties about motion predictions such that the robot can take these into account during motion planning.", "In comparison to our CVAE models, a simple linear extrapolation in Fig.", "4 d) showcases the The samples were generated by propagating the past motion window through the network, sampling from the encoder and transitioner distributions and visualizing the mean output of the decoder.", "We depict the past 800 ms and samples of the next 800 ms.", "importance of modeling dynamics." ]
[ "human motion" ]
background
{ "title": "Anticipating many futures: Online human motion prediction and synthesis for human-robot collaboration", "abstract": "Abstract-Fluent and safe interactions of humans and robots require both partners to anticipate the others' actions. A common approach to human intention inference is to model specific trajectories towards known goals with supervised classifiers. However, these approaches do not take possible future movements into account nor do they make use of kinematic cues, such as legible and predictable motion. The bottleneck of these methods is the lack of an accurate model of general human motion. In this work, we present a conditional variational autoencoder that is trained to predict a window of future human motion given a window of past frames. Using skeletal data obtained from RGB depth images, we show how this unsupervised approach can be used for online motion prediction for up to 1660 ms. Additionally, we demonstrate online target prediction within the first 300-500 ms after motion onset without the use of target specific training data. The advantage of our probabilistic approach is the possibility to draw samples of possible future motions. Finally, we investigate how movements and kinematic cues are represented on the learned low dimensional manifold." }
{ "title": "Recurrent Network Models for Human Dynamics", "abstract": "We propose the Encoder-Recurrent-Decoder (ERD)" }
1912.10150
1508.00271
Related Works
For deep-learning-based methods, RNNs are probably one of the most successful models #REFR .
[ "Restricted Boltzmann Machine (RBM) also has been applied for motion generation #OTHEREFR ).", "However, inference for RBM is known to be particularly challenging.", "Gaussian-process latent variable models #OTHEREFR Urtasun et al.", "2008 ) and its variants #OTHEREFR have been applied for this task.", "One problem with such methods, however, is that they are not scalable enough to deal with large-scale data." ]
[ "However, most existing models assume output distributions as Gaussian or Gaussian mixture.", "Different from our implicit representation, these methods are not expressive enough to capture the diversity of human actions.", "In contrast to action prediction, limited work has been done for diverse action generation, apart from some preliminary work.", "Specifically, the motion graph approach #OTHEREFR needs to extract motion primitives from prerecorded data; the diversity and quality of action will be restricted by way of defining the primitives and transitions between the primitives.", "Variational autoencoder and GAN have also been applied in #OTHEREFR" ]
[ "RNNs" ]
method
{ "title": "Learning Diverse Stochastic Human-Action Generators by Learning Smooth Latent Transitions", "abstract": "Human-motion generation is a long-standing challenging task due to the requirement of accurately modeling complex and diverse dynamic patterns. Most existing methods adopt sequence models such as RNN to directly model transitions in the original action space. Due to high dimensionality and potential noise, such modeling of action transitions is particularly challenging. In this paper, we focus on skeleton-based action generation and propose to model smooth and diverse transitions on a latent space of action sequences with much lower dimensionality. Conditioned on a latent sequence, actions are generated by a frame-wise decoder shared by all latent action-poses. Specifically, an implicit RNN is defined to model smooth latent sequences, whose randomness (diversity) is controlled by noise from the input. Different from standard action-prediction methods, our model can generate action sequences from pure noise without any conditional action poses. Remarkably, it can also generate unseen actions from mixed classes during training. Our model is learned with a bi-directional generative-adversarial-net framework, which not only can generate diverse action sequences of a particular class or mix classes, but also learns to classify action sequences within the same model. Experimental results show the superiority of our method in both diverse action-sequence generation and classification, relative to existing methods." }
{ "title": "Recurrent Network Models for Human Dynamics", "abstract": "We propose the Encoder-Recurrent-Decoder (ERD)" }
1804.10692
1512.03012
Policy learning with perceptual rewards
Using the subject and object categories extracted from the natural language utterance, we retrieve corresponding 3D models from external 3D databases (ShapeNet #REFR and 3D Warehouse [2]) and import them into a physics simulator (Bullet).
[ "Model-free policy search with binary rewards has notoriously high sample complexity due to the lack of informative gradients for the overwhelming majority of the sampled actions #OTHEREFR .", "Efficient policy search requires shaped rewards, either explicitly #OTHEREFR , or more recently, implicitly [5] , by encoding the goal configuration in a continuous space where similarity can be measured against alternative goals achieved during training.", "If we were able to visually picture the desired 3D object configuration to be achieved by our pick-and-place policies, then Euclidean distances to the pictured objects would provide an effective (approximate) shaping of the true rewards.", "We do so using analysis-by-synthesis, where our trained detector is used to select or discard sampled hypotheses.", "Given an initial configuration of two objects that we are supposed to manipulate towards a desired configuration, we seek a physically-plausible 3D object configuration which renders to an image that scores high with our corresponding reward detector." ]
[ "We sample 3D locations for the objects, render the scene and evaluate the score of our detector.", "Note that since we know the object identities, the relation module is the only one that needs to be considered for this scoring.", "We pick the highest scoring 3D configuration as our goal configuration.", "It is used at training time to provide effective shaping using 3D Euclidean distances between desired and current object locations and drastically reduces the number of samples needed for policy learning.", "However, our policy network takes 2D bounding box information as input, and does not need any 3D lifting, but rather operates reactively given the RGB images." ]
[ "natural language utterance", "3D Shapenet" ]
method
{ "title": "Reward Learning from Narrated Demonstrations", "abstract": "Humans effortlessly\"program\"one another by communicating goals and desires in natural language. In contrast, humans program robotic behaviours by indicating desired object locations and poses to be achieved, by providing RGB images of goal configurations, or supplying a demonstration to be imitated. None of these methods generalize across environment variations, and they convey the goal in awkward technical terms. This work proposes joint learning of natural language grounding and instructable behavioural policies reinforced by perceptual detectors of natural language expressions, grounded to the sensory inputs of the robotic agent. Our supervision is narrated visual demonstrations(NVD), which are visual demonstrations paired with verbal narration (as opposed to being silent). We introduce a dataset of NVD where teachers perform activities while describing them in detail. We map the teachers' descriptions to perceptual reward detectors, and use them to train corresponding behavioural policies in simulation.We empirically show that our instructable agents (i) learn visual reward detectors using a small number of examples by exploiting hard negative mined configurations from demonstration dynamics, (ii) develop pick-and place policies using learned visual reward detectors, (iii) benefit from object-factorized state representations that mimic the syntactic structure of natural language goal expressions, and (iv) can execute behaviours that involve novel objects in novel locations at test time, instructed by natural language." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
2002.03892
1512.03012
A. Dataset and Evaluation Metrics
In these experiments, we mainly used a subset of ShapeNetCore #REFR containing 500 models from five categories: Mug, Chair, Knife, Guitar, and Lamp.
[]
[ "For each category, we randomly selected 100 object models and convert them into complete point clouds with the pyntcloud package.", "We then shift and resize the point clouds data and convert them into a 32 × 32 × 32 array as the input size of networks.", "To the best of our knowledge, there are no existing similar researches done before.", "Therefore, we manually labeled an affordance part for each object to provide ground truth data. Part annotations are represented as point labels.", "A set of examples of labeled affordance part for different objects is depicted in Fig. 6 (affordance parts are highlighted by orange color)." ]
[ "Chair", "500 models" ]
method
{ "title": "Learning to Grasp 3D Objects using Deep Residual U-Nets", "abstract": "Affordance detection is one of the challenging tasks in robotics because it must predict the grasp configuration for the object of interest in real-time to enable the robot to interact with the environment. In this paper, we present a new deep learning approach to detect object affordances for a given 3D object. The method trains a Convolutional Neural Network (CNN) to learn a set of grasping features from RGB-D images. We named our approach Res-U-Net since the architecture of the network is designed based on U-Net structure and residual network-styled blocks. It devised to be robust and efficient to compute and use. A set of experiments has been performed to assess the performance of the proposed approach regarding grasp success rate on simulated robotic scenarios. Experiments validate the promising performance of the proposed architecture on a subset of ShapeNetCore dataset and simulated robot scenarios." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1505.05641
1512.03012
3D Model Dataset
We download 3D models from ShapeNet #REFR , which organizes common everyday objects with categorization labels and joint alignment.
[ "As we discussed in Sec 2, there are several largescale 3D model repositories online." ]
[ "Since we evaluate our method on the PASCAL 3D+ benchmark, we download 3D models belonging to the 12 categories of PASCAL 3D+, including 30K models in total.", "After symmetry-preserving model set augmentation (Sec 4.1), we make sure that every category has 10K models. For more details, please refer to supplementary material." ]
[ "3D models", "ShapeNet" ]
method
{ "title": "Render for CNN: Viewpoint Estimation in Images Using CNNs Trained with Rendered 3D Model Views", "abstract": "Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1811.11187
1512.03012
Alignment
Figure 6 : Unconstrained scenario where, instead of being given a ground-truth set of CAD models, we use a set of 400 randomly selected CAD models from ShapeNetCore #REFR , more closely mimicking a real-world application scenario.
[ "6 shows the capability of our method to align in an unconstrained real-world setting where ground truth CAD models are not given, we instead provide a set of 400 random CAD models from ShapeNet #OTHEREFR . #OTHEREFR scenes.", "Our approach to learning geometric features between real and synthetic data produce much more reliable keypoint correspondences, which coupled with our alignment optimization, produces significantly more accurate alignments.", "Table 2 : Accuracy comparison (%) on our CAD alignment benchmark.", "While handcrafted feature descriptors can achieve some alignment on more featureful objects (e.g., chairs, sofas), they do not tolerate well the geometric discrepancies between scan and CAD data -which remains difficult for the learned keypoint descriptors of 3DMatch.", "Scan2CAD directly addresses this problem of learning features that generalize across these domains, thus significantly outperforming state of the art." ]
[]
[ "CAD models" ]
method
{ "title": "Scan2CAD: Learning CAD Model Alignment in RGB-D Scans", "abstract": "Figure 1: Scan2CAD takes as input an RGB-D scan and a set of 3D CAD models (left). We then propose a novel 3D CNN approach to predict heatmap correspondences between the scan and the CAD models (middle). From these predictions, we formulate an energy minimization to find optimal 9 DoF object poses for CAD model alignment to the scan (right). Abstract" }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1907.09381
1512.03012
Implementation details
From ShapeNet #REFR , we select 401 different classes of vehicles and, for each vehicle, we capture a rendered image from each of 80 different viewpoints.
[ "3D model pool." ]
[ "Since the background of the rendered image is very clean, we can simply extract the accurate silhouettes by thresholding.", "In this way, we collect 32,080 silhouettes to form the auxiliary 3D model pool.", "Network structure and training.", "In practice, as encoder-decoder structure, both G 1 and G 2 downsample the resolution from 256 to 64 and then upsample to the original spatial resolution.", "As the middle layers, there are 8 residual blocks with the dilation rate 2." ]
[ "vehicle", "ShapeNet" ]
method
{ "title": "Visualizing the Invisible: Occluded Vehicle Segmentation and Recovery", "abstract": "In this paper, we propose a novel iterative multi-task framework to complete the segmentation mask of an occluded vehicle and recover the appearance of its invisible parts. In particular, firstly, to improve the quality of the segmentation completion, we present two coupled discriminators that introduce an auxiliary 3D model pool for sampling authentic silhouettes as adversarial samples. In addition, we propose a two-path structure with a shared network to enhance the appearance recovery capability. By iteratively performing the segmentation completion and the appearance recovery, the results will be progressively refined. To evaluate our method, we present a dataset, Occluded Vehicle dataset, containing synthetic and real-world occluded vehicle images. Based on this dataset, we conduct comparison experiments and demonstrate that our model outperforms the state-of-the-arts in both tasks of recovering segmentation mask and appearance for occluded vehicles. Moreover, we also demonstrate that our appearance recovery approach can benefit the occluded vehicle tracking in real-world videos." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1803.08457
1512.03012
3-D Point Cloud Clustering
For these experiments, we use objects from ShapeNet #REFR , which are sampled to create point clouds with 2048 points.
[ "Contrary to the datasets we have shown so far, the feature representation of the point clouds must be permutation-invariant and the reconstruction should match the shape outline and not the exact point coordinates.", "Therefore, a different autoencoder architecture and loss need to be used.", "We use the architecture in the work by Achlioptas et al.", "#OTHEREFR The number of filters in each layer of the encoder is 64-128-128-256-128 and the number of filters in each layer of the decoder is 256-256-3 Â #points in the point cloud. The loss function must be invariant to permutations. Therefore, the MSE loss function is not suitable. Instead, we used Chamfer loss.", "We perform two sets of experiments on 3d data-inter-class clustering, where the dataset contains different classes of 3-D objects, and intra-class clustering, where the dataset contains subcategories of the same class." ]
[ "The autoencoder is first trained for 1000 iterations using an Adam optimizer with a learning rate of 0.0005.", "During the clustering stage, the autoencoder learning rate is set to 0.0001 and the learning rate of U is set to 0.0001.", "The number of epochs between m update is set to 30." ]
[ "ShapeNet" ]
method
{ "title": "Clustering-Driven Deep Embedding With Pairwise Constraints", "abstract": "Recently, there has been increasing interest to leverage the competence of neural networks to analyze data. In particular, new clustering methods that employ deep embeddings have been presented. In this paper, we depart from centroid-based models and suggest a new framework, called Clustering-driven deep embedding with PAirwise Constraints (CPAC), for nonparametric clustering using a neural network. We present a clustering-driven embedding based on a Siamese network that encourages pairs of data points to output similar representations in the latent space. Our pair-based model allows augmenting the information with labeled pairs to constitute a semi-supervised framework. Our approach is based on analyzing the losses associated with each pair to refine the set of constraints. We show that clustering performance increases when using this scheme, even with a limited amount of user queries. We demonstrate how our architecture is adapted for various types of data and present the first deep framework to cluster three-dimensional (3-D) shapes." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1812.02725
1512.03012
Introduction
This advantage allows us to leverage both 2D image datasets and 3D shape collections #REFR and to synthesize objects of diverse shapes and textures.
[ "Finally, it learns to add diverse, realistic texture to 2.5D sketches and produce 2D images that are indistinguishable from real photos. We call our model Visual Object Networks (VON).", "32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. #OTHEREFR .", "(b) Our model produces three outputs: a 3D shape, its 2.5D projection given a viewpoint, and a final image with realistic texture.", "(c) Given this disentangled 3D representation, our method allows several 3D applications including changing viewpoint and editing shape or texture independently. Please see our code and website for more details.", "Wiring in conditional independence reduces our need for densely annotated data: unlike classic morphable face models #OTHEREFR , our training does not require paired data between 2D images and 3D shapes, nor dense correspondence annotations in 3D data." ]
[ "Through extensive experiments, we show that VON produce more realistic image samples than recent 2D deep generative models.", "We also demonstrate many 3D applications that are enabled by our disentangled representation, including rotating an object, adjusting object shape and texture, interpolating between two objects in texture and shape space independently, and transferring the appearance of a real image to new objects and viewpoints." ]
[ "2D image datasets", "3D shape collections" ]
background
{ "title": "Visual Object Networks: Image Generation with Disentangled 3D Representation", "abstract": "Recent progress in deep generative models has led to tremendous breakthroughs in image generation. However, while existing models can synthesize photorealistic images, they lack an understanding of our underlying 3D world. We present a new generative model, Visual Object Networks (VON), synthesizing natural images of objects with a disentangled 3D representation. Inspired by classic graphics rendering pipelines, we unravel our image formation process into three conditionally independent factors-shape, viewpoint, and texture-and present an end-to-end adversarial learning framework that jointly models 3D shapes and 2D images. Our model first learns to synthesize 3D shapes that are indistinguishable from real shapes. It then renders the object's 2.5D sketches (i.e., silhouette and depth map) from its shape under a sampled viewpoint. Finally, it learns to add realistic texture to these 2.5D sketches to generate natural images. The VON not only generates images that are more realistic than state-of-the-art 2D image synthesis methods, but also enables many 3D operations such as changing the viewpoint of a generated image, editing of shape and texture, linear interpolation in texture and shape space, and transferring appearance across different objects and viewpoints." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1907.13236
1512.03012
Introduction
Since collecting a large dataset with ground truth annotations is expensive and time-consuming, it is appealing to utilize synthetic data for training, such as the ShapeNet repository, which contains thousands of 3D shapes of different objects #REFR .
[ "A common environment in which manipulation tasks take place is on tabletops.", "Thus, in this paper, we approach this by focusing on the problem of unseen object instance segmentation (UOIS), where the goal is to separately segment every arbitrary (and potentially unseen) object instance, in tabletop environments.", "Training a perception module requires a large amount of data.", "In order to ensure the generalization capability of the module to recognize unseen objects, we need to learn from data that contains many various objects.", "However, in many robot environments, large-scale datasets with this property do not exist." ]
[ "However, there exists a domain gap between synthetic data and real world data.", "Training directly on synthetic data only usually does not work well in the real world #OTHEREFR .", "Consequently, recent efforts in robot perception have been devoted to the problem of Sim2Real, where the goal is to transfer capabilities learned in simulation to real world settings.", "For instance, some works have used domain adaptation techniques to bridge the domain gap when unlabeled real data is available #OTHEREFR .", "Domain randomization #OTHEREFR was proposed to diversify the rendering of synthetic data for training." ]
[ "ground truth annotations", "ShapeNet repository" ]
method
{ "title": "The Best of Both Modes: Separately Leveraging RGB and Depth for Unseen Object Instance Segmentation", "abstract": "Abstract: In order to function in unstructured environments, robots need the ability to recognize unseen novel objects. We take a step in this direction by tackling the problem of segmenting unseen object instances in tabletop environments. However, the type of large-scale real-world dataset required for this task typically does not exist for most robotic settings, which motivates the use of synthetic data. We propose a novel method that separately leverages synthetic RGB and synthetic depth for unseen object instance segmentation. Our method is comprised of two stages where the first stage operates only on depth to produce rough initial masks, and the second stage refines these masks with RGB. Surprisingly, our framework is able to learn from synthetic RGB-D data where the RGB is non-photorealistic. To train our method, we introduce a large-scale synthetic dataset of random objects on tabletops. We show that our method, trained on this dataset, can produce sharp and accurate masks, outperforming state-of-the-art methods on unseen object instance segmentation. We also show that our method can segment unseen objects for robot grasping. Code, models and video can be found at https://rse-lab.cs. washington.edu/projects/unseen-object-instance-segmentation/." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1808.09351
1512.03012
Implementation Details
For object meshes, we choose eight CAD models from ShapeNet #REFR , including cars, vans, and buses.
[ "Semantic branch.", "Our semantic branch adopts Dilated Residual Networks (DRN) for semantic segmentation . We train the network for 25 epochs.", "Geometric branch.", "We use Mask-RCNN for object proposal generation #OTHEREFR ." ]
[ "Given an object proposal, we predict its scale, rotation, translation, 4 3 FFD grid point coefficients, and an 8-dimensional distribution across candidate meshes with a ResNet-18 network .", "The translation t can be recovered using the estimated offset e, the normalized distance log τ , and the ground truth focal length of the image.", "They are then fed to a differentiable renderer #OTHEREFR to render the instance map and normal map.", "We empirically set λ reproj = 0.1.", "We first train the network with L pred using Adam #OTHEREFR with a learning rate of 10 −3 for 256 epochs and then fine-tune the model with L pred + λ reproj L reproj and REINFORCE with a learning rate of 10 −4 for another 64 epochs." ]
[ "ShapeNet" ]
method
{ "title": "3D-Aware Scene Manipulation via Inverse Graphics", "abstract": "We aim to obtain an interpretable, expressive, and disentangled scene representation that contains comprehensive structural and textural information for each object. Previous scene representations learned by neural networks are often uninterpretable, limited to a single object, or lacking 3D knowledge. In this work, we propose 3D scene de-rendering networks (3D-SDN) to address the above issues by integrating disentangled representations for semantics, geometry, and appearance into a deep generative model. Our scene encoder performs inverse graphics, translating a scene into a structured object-wise representation. Our decoder has two components: a differentiable shape renderer and a neural texture generator. The disentanglement of semantics, geometry, and appearance supports 3D-aware scene manipulation, e.g., rotating and moving objects freely while keeping the consistent shape and texture, and changing the object appearance without affecting its shape. Experiments demonstrate that our editing scheme based on 3D-SDN is superior to its 2D counterpart." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1803.07289
1512.03012
Experiments
To evaluate the effectiveness of our approach, we participate in two benchmarks derived from the ShapeNet #REFR dataset, which consists of synthetic 3D models created by digital artists.
[ "We conducted several experiments to validate our approach.", "These show that our flex-convolution-based neural network yields competitive performance to previous work on synthetic data for single object classification ( #OTHEREFR , 1024 points) using fewer resources and provide some insights about human performance on this dataset.", "We improve single instance part segmentation ([32], 2048 points) .", "Furthermore, we demonstrate the effectiveness of our approach by performing semantic point cloud segmentation on a large-scale real-world 3D scan ( #OTHEREFR , 270 Mio. points) improving previous methods in both accuracy and speed." ]
[]
[ "ShapeNet dataset" ]
method
{ "title": "Flex-Convolution (Million-Scale Point-Cloud Learning Beyond Grid-Worlds)", "abstract": "Traditional convolution layers are specifically designed to exploit the natural data representation of images -- a fixed and regular grid. However, unstructured data like 3D point clouds containing irregular neighborhoods constantly breaks the grid-based data assumption. Therefore applying best-practices and design choices from 2D-image learning methods towards processing point clouds are not readily possible. In this work, we introduce a natural generalization flex-convolution of the conventional convolution layer along with an efficient GPU implementation. We demonstrate competitive performance on rather small benchmark sets using fewer parameters and lower memory consumption and obtain significant improvements on a million-scale real-world dataset. Ours is the first which allows to efficiently process 7 million points concurrently." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1806.04807
1512.03012
Network Architecture Details
Our results are poorer on the 'Scenes11' dataset, because the images there are synthesized with random objects from ShapeNet #REFR without physically correct scales.
[ "APPENDIX C: EVALUATION ON DEMON DATASET Table 5 summarizes our results on the DeMoN dataset.", "For a comparison, we also cite the results from DeMoN #OTHEREFR and the most recent work LS-Net .", "We further cite the results from some conventional approaches as reported in DeMoN, indicated as Oracle, SIFT, FF, and Matlab respectively.", "Here, Oracle uses ground truth camera poses to solve the multi-view stereo by SGM #OTHEREFR , while SIFT, FF, and Matlab further use sparse features, optical flow, and KLT tracking respectively for feature correspondence to solve camera poses by the 8-pt algorithm #OTHEREFR Table 5 : Quantitative comparisons on the DeMoN dataset.", "Our method consistently outperforms DeMoN #OTHEREFR at both camera motion and scene depth, except on the 'Scenes11' data, because we enforce multi-view geometry constraint in the BA-Layer." ]
[ "This setting is inconsistent with real data and makes it harder for our method to learn the basis depth map generator.", "When compared with LS-Net , our method achieves similar accuracy on camera poses but better scene depth.", "It proves our feature-metric BA with learned feature is superior than the photometric BA in the LS-Net." ]
[ "'Scene11' dataset", "ShapeNet" ]
method
{ "title": "BA-Net: Dense Bundle Adjustment Network", "abstract": "This paper introduces a network architecture to solve the structure-from-motion (SfM) problem via feature-metric bundle adjustment (BA), which explicitly enforces multi-view geometry constraints in the form of feature-metric error. The whole pipeline is differentiable, so that the network can learn suitable features that make the BA problem more tractable. Furthermore, this work introduces a novel depth parameterization to recover dense per-pixel depth. The network first generates several basis depth maps according to the input image, and optimizes the final depth as a linear combination of these basis depth maps via feature-metric BA. The basis depth maps generator is also learned via end-to-end training. The whole system nicely combines domain knowledge (i.e. hard-coded multi-view geometry constraints) and deep learning (i.e. feature learning and basis depth maps learning) to address the challenging dense SfM problem. Experiments on large scale real data prove the success of the proposed method." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }
1912.05237
1512.03012
Experiments
Dataset: We render synthetic datasets using objects from ShapeNet #REFR , considering three datasets of varying difficulty.
[ "In this section, we first compare our approach to several baselines on the task of 3D controllable image generation, both on synthetic and real data.", "Next, we conduct a thorough ablation study to better understand the influence of different representations and architecture components." ]
[ "Two datasets contain cars, one with and the other without background.", "For both datasets, we randomly sample 1 to 3 cars from a total of 10 different car models.", "Our third dataset is the most challenging of these three.", "It comprises indoor scenes containing objects of different categories, including chairs, tables and sofas.", "As background we use empty room images from Structured3D #OTHEREFR , a synthetic dataset with photo-realistic 2D images." ]
[ "ShapeNet" ]
method
{ "title": "Towards Unsupervised Learning of Generative Models for 3D Controllable Image Synthesis", "abstract": "In recent years, Generative Adversarial Networks have achieved impressive results in photorealistic image synthesis. This progress nurtures hopes that one day the classical rendering pipeline can be replaced by efficient models that are learned directly from images. However, current image synthesis models operate in the 2D domain where disentangling 3D properties such as camera viewpoint or object pose is challenging. Furthermore, they lack an interpretable and controllable representation. Our key hypothesis is that the image generation process should be modeled in 3D space as the physical world surrounding us is intrinsically three-dimensional. We define the new task of 3D controllable image synthesis and propose an approach for solving it by reasoning both in 3D space and in the 2D image domain. We demonstrate that our model is able to disentangle latent 3D factors of simple multi-object scenes in an unsupervised fashion from raw images. Compared to pure 2D baselines, it allows for synthesizing scenes that are consistent wrt. changes in viewpoint or object pose. We further evaluate various 3D representations in terms of their usefulness for this challenging task." }
{ "title": "ShapeNet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans." }