venue (stringclasses, 2 values) | paper_content (stringlengths, 7.54k–83.7k) | prompt (stringlengths, 161–2.5k) | format (stringclasses, 5 values) | review (stringlengths, 293–9.84k)
---|---|---|---|---
NIPS | Title
Improved Error Bounds for Tree Representations of Metric Spaces
Abstract
Estimating optimal phylogenetic trees or hierarchical clustering trees from metric data is an important problem in evolutionary biology and data analysis. Intuitively, the goodness-of-fit of a metric space to a tree depends on its inherent treeness, as well as other metric properties such as intrinsic dimension. Existing algorithms for embedding metric spaces into tree metrics provide distortion bounds depending on cardinality. Because cardinality is a simple property of any set, we argue that such bounds do not fully capture the rich structure endowed by the metric. We consider an embedding of a metric space into a tree proposed by Gromov. By proving a stability result, we obtain an improved additive distortion bound depending only on the hyperbolicity and doubling dimension of the metric. We observe that Gromov’s method is dual to the well-known single linkage hierarchical clustering (SLHC) method. By means of this duality, we are able to transport our results to the setting of SLHC, where such additive distortion bounds were previously unknown.
1 Introduction
Numerous problems in data analysis are formulated as the question of embedding high-dimensional metric spaces into “simpler” spaces, typically of lower dimension. In classical multidimensional scaling (MDS) techniques [18], the goal is to embed a space into two or three dimensional Euclidean space while preserving interpoint distances. Classical MDS is helpful in exploratory data analysis, because it allows one to find hidden groupings in amorphous data by simple visual inspection. Generalizations of MDS exist for which the target space can be a tree metric space—see [13] for a summary of some of these approaches, written from the point of view of metric embeddings. The metric embeddings literature, which grew out of MDS, typically highlights the algorithmic gains made possible by embedding a complicated metric space into a simpler one [13].
The special case of MDS where the target space is a tree has been of interest in phylogenetics for quite some time [19, 5]; the numerical taxonomy problem (NTP) is that of finding an optimal tree embedding for a given metric space (X, dX), i.e. a tree (X, tX) such that the additive distortion, defined as ‖dX − tX‖ℓ∞(X×X), is minimal over all possible tree metrics on X. This problem turns out to be NP-hard [3]; however, a 3-approximation algorithm exists [3], and a variant of this problem,
that of finding an optimal ultrametric tree, can be solved in polynomial time [11]. An ultrametric tree is a rooted tree where every point is equidistant from the root—for example, ultrametric trees are the outputs of hierarchical clustering (HC) methods that show groupings in data across different resolutions. A known connection between HC and MDS is that the output ultrametric of single linkage hierarchical clustering (SLHC) is a 2-approximation to the optimal ultrametric tree embedding [16], thus providing a partial answer to the NTP. However, it appears that the existing line of work regarding NTP does not address the question of quantifying the ℓ∞ distance between a metric (X, dX) and its optimal tree metric, or even the optimal ultrametric. More specifically, we can ask:

Question 1. Given a set X, a metric dX, and an optimal tree metric t^opt_X (or an optimal ultrametric u^opt_X), can one find a nontrivial upper bound on ‖dX − t^opt_X‖ℓ∞(X×X) (resp. ‖dX − u^opt_X‖ℓ∞(X×X)) depending on properties of the metric dX?
The question of distortion bounds is treated from a different perspective in the discrete algorithms literature. In this domain, tree embeddings are typically described with multiplicative distortion bounds (described in §2) depending on the cardinality of the underlying metric space, along with (typically) pathological counterexamples showing that these bounds are tight [4, 10]. We remark immediately that (1) multiplicative distortion is distinct from the additive distortion encountered in the NTP, and (2) these embeddings are rarely used in machine learning, where HC and MDS methods are the main workhorses. Moreover, such multiplicative distortion bounds do not take two considerations into account: (1) the ubiquitousness of very large data sets means that a bound dependent on cardinality is not desirable, and (2) “nice” properties such as low intrinsic dimensionality or treeness of real-world datasets are not exploited in cardinality bounds.
We prove novel additive distortion bounds for two methods of tree embeddings: one into general trees, and one into ultrametric trees. These additive distortion bounds take into account (1) whether the data is treelike, and (2) whether the data has low doubling dimension, which is a measure of its intrinsic dimension. Thus we prove an answer to Question 1 above, namely, that the approximation error made by an optimal tree metric (or optimal ultrametric) can be bounded nontrivially.

Remark 1. The trivial upper bound is ‖dX − t^opt_X‖ℓ∞(X×X) ≤ diam(X, dX). To see this, observe that any ultrametric is a tree, and that SLHC yields an ultrametric uX that is bounded above by dX.
An overview of our approach. A common measure of treeness is Gromov’s δ-hyperbolicity, which is a local condition on 4-point subsets of a metric space. Hyperbolicity has been shown to be a useful statistic for evaluating the quality of trees in [7]. The starting point for our work is a method used by Gromov to embed metric spaces into trees, which we call Gromov’s embedding [12]. A known result, which we call Gromov’s embedding theorem, is that if every 4-point subset of an n-point metric space is δ-hyperbolic, then the metric space embeds into a tree with ℓ∞ distortion bounded above by 2δ log2(2n). The proof proceeds by a linkage argument, i.e. by invoking the definition of hyperbolicity at different scales along chains of points. By virtue of the embedding theorem, one can argue that hyperbolicity is a measure of the “treeness” of a given metric space. It has been shown in [1, 2] that various real-world data sets, such as Internet latencies and biological, social, and collaboration networks are inherently treelike, i.e. have low hyperbolicity. Thus, by Gromov’s result, these real-world data sets can be embedded into trees with additive distortion controlled by their respective cardinalities. The cardinality bound might of course be undesirable, especially for very large data sets such as the Internet. However, it has been claimed without proof in [1] that Gromov’s embedding can yield a 3-approximation to the NTP, independent of [3].
We note that the assumption of a metric input is not apparent in Gromov’s embedding theorem. Moreover, the proof of the theorem does not utilize any metric property. This leads one to hope for bounds where the dependence on cardinality is replaced by a dependence on some metric notion. A natural candidate for such a metric notion is the doubling dimension of a space [15], which has already found applications in learning [17] and algorithm design [15]. In this paper, we present novel upper bounds on the additive distortion of a Gromov embedding, depending only on the hyperbolicity and doubling dimension of the metric space.
Our main tool is a stability theorem that we prove using a metric induced by a Voronoi partition. This result is then combined with the results of Gromov’s linkage argument. Both the stability theorem and Gromov’s theorem rely on the embedding satisfying a particular linkage condition, which can be described as follows: for any embedding f : (X, dX) → (X, tX), and any x, x′ ∈ X, we have tX(x, x′) = max_c min_i Ψ(xi, xi+1), where c = {xi}_{i=0}^{k} is a chain of points joining x to x′ and Ψ is some function of dX. A dual notion is to flip the order of the max and min operations. Interestingly, under the correct objective function Ψ, this leads to the well-studied notion of SLHC. By virtue of this duality, the arguments of both the stability theorem and the scaling theorem apply in the SLHC setting. We introduce a new metric space statistic that we call ultrametricity (analogous to hyperbolicity), and are then able to obtain novel upper bounds, depending only on doubling dimension and ultrametricity, for the distortion incurred by a metric space when embedding into an ultrametric tree via SLHC.
We remark that just by virtue of the duality between Gromov’s embedding and the SLHC embedding, it is possible to obtain a distortion bound for SLHC depending on cardinality. We were unable to find such a bound in the existing HC literature, so it appears that even the knowledge of this duality, which bridges the domains of HC and MDS methods, is not prevalent in the community.
The paper is organized as follows. The main thrust of our work is explained in §1. In §2 we develop the context of our work by highlighting some of the surrounding literature. We provide all definitions and notation, including the Voronoi partition construction, in §3. In §4 we describe Gromov’s embedding and present Gromov’s distortion bound in Theorem 3. Our contributions begin with Theorem 4 in §4 and include all the results that follow: namely the stability results in §5, the improved distortion bounds in §6, and the proof of tightness in §7.
The supplementary material contains (1) an appendix with proofs omitted from the body, (2) a practical demonstration in §A where we apply Gromov’s embedding to a bitmap image of a tree and show that our upper bounds perform better than the bounds suggested by Gromov’s embedding theorem, and (3) Matlab .m files containing demos of Gromov’s embedding being applied to various images of trees.
2 Related Literature
MDS is explained thoroughly in [18]. In metric MDS [18] one attempts to find an embedding of the data X into a low dimensional Euclidean space given by a point cloud Y ⊂ Rd (where often d = 2 or d = 3) such that the metric distortion (measured by the Frobenius norm of the difference of the Gram matrices of X and Y ) is minimized. The most common non-metric variant of MDS is referred to as ordinal embedding, and has been studied in [14].
A common problem with metric MDS is that when the intrinsic dimension of the data is higher than the embedding dimension, the clustering in the original data may not be preserved [21]. One variant of MDS that preserves clusters is the tree preserving embedding [20], where the goal is to preserve the single linkage (SL) dendrogram from the original data. This is especially important for certain types of biological data, for the following reasons: (1) due to speciation, many biological datasets are inherently “treelike”, and (2) the SL dendrogram is a 2-approximation to the optimal ultrametric tree embedding [16], so intuitively, preserving the SL dendrogram preserves the “treeness” of the data. Preserving the treeness of a metric space is related to the notion of finding an optimal embedding into a tree, which ties back to the numerical taxonomy problem. The SL dendrogram is an embedding of a metric space into an ultrametric tree, and can be used to find the optimal ultrametric tree [8].
The quality of an embedding is measured by computing its distortion, which has different definitions in different domain areas. Typically, a tree embedding is defined to be an injective map f : X → Y between metric spaces (X, dX) and (Y, tY ), where the target space is a tree. We have defined the additive distortion of a tree embedding in an `∞ setting above, but `p notions, for p ∈ [1,∞), can also be defined. Past efforts to embed a metric into a tree with low additive distortion are described in [19, Chapter 7]. One can also define a multiplicative distortion [4, 10], but this is studied in the domain of discrete algorithms and is not our focus in the current work.
3 Preliminaries on metric spaces, distances, and doubling dimension
A finite metric space (X, dX) is a finite set X together with a function dX : X × X → R+ such that: (1) dX(x, x′) = 0 ⇐⇒ x = x′, (2) dX(x, x′) = dX(x′, x), and (3) dX(x, x′) ≤ dX(x, x′′) + dX(x′′, x′) for any x, x′, x′′ ∈ X. A pointed metric space is a triple (X, dX, p), where (X, dX) is a finite metric space and p ∈ X. All the spaces we consider are assumed to be finite.
For a metric space (X, dX), the diameter is defined to be diam(X, dX) := max_{x,x′∈X} dX(x, x′). The hyperbolicity of (X, dX) was defined by Gromov [12] as follows:

hyp(X, dX) := max_{x1,x2,x3,x4∈X} Ψ^hyp_X(x1, x2, x3, x4), where

Ψ^hyp_X(x1, x2, x3, x4) := (1/2) ( dX(x1, x2) + dX(x3, x4) − max( dX(x1, x3) + dX(x2, x4), dX(x1, x4) + dX(x2, x3) ) ).
A tree metric space (X, tX) is a finite metric space such that hyp(X, tX) = 0 [19]. In our work, we strengthen the preceding characterization of trees to the special class of ultrametric trees. Recall that an ultrametric space (X, uX) is a metric space satisfying the strong triangle inequality:

uX(x, x′) ≤ max( uX(x, x′′), uX(x′′, x′) ), ∀ x, x′, x′′ ∈ X.
Definition 1. We define the ultrametricity of a metric space (X, dX) as:

ult(X, dX) := max_{x1,x2,x3∈X} Ψ^ult_X(x1, x2, x3), where Ψ^ult_X(x1, x2, x3) := dX(x1, x3) − max( dX(x1, x2), dX(x2, x3) ).
We introduce ultrametricity to quantify the deviation of a metric space from being ultrametric. Notice that (X,uX) is an ultrametric space if and only if ult(X,uX) = 0. One can verify that an ultrametric space is a tree metric space.
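To make these two statistics concrete, here is a minimal brute-force sketch (our own illustration, not code from the paper) computing hyp(X, dX) and ult(X, dX) from a distance matrix; both maxima are at least 0 since degenerate tuples contribute 0.

```python
import itertools
import numpy as np

def hyperbolicity(d: np.ndarray) -> float:
    """hyp(X, d_X): maximize Psi_hyp over all 4-tuples (O(n^4) brute force)."""
    n = len(d)
    best = 0.0
    for x1, x2, x3, x4 in itertools.product(range(n), repeat=4):
        psi = 0.5 * (d[x1, x2] + d[x3, x4]
                     - max(d[x1, x3] + d[x2, x4], d[x1, x4] + d[x2, x3]))
        best = max(best, psi)
    return best

def ultrametricity(d: np.ndarray) -> float:
    """ult(X, d_X): maximize Psi_ult over all 3-tuples (O(n^3) brute force)."""
    n = len(d)
    best = 0.0
    for x1, x2, x3 in itertools.product(range(n), repeat=3):
        best = max(best, d[x1, x3] - max(d[x1, x2], d[x2, x3]))
    return best
```

A tree metric returns hyperbolicity 0 and an ultrametric returns ultrametricity 0, matching the characterizations above.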
We will denote the cardinality of a set X by writing |X|. Given a set X and two metrics dX, d′X defined on X × X, we denote the ℓ∞ distance between dX and d′X as follows:

‖dX − d′X‖ℓ∞(X×X) := max_{x,x′∈X} |dX(x, x′) − d′X(x, x′)|.

We use the shorthand ‖dX − d′X‖∞ to mean ‖dX − d′X‖ℓ∞(X×X). We write ≈ to mean “approximately equal to.” Given two functions f, g : N → R, we will write f ≍ g to mean asymptotic tightness, i.e. that there exist constants c1, c2 such that c1|f(n)| ≤ |g(n)| ≤ c2|f(n)| for sufficiently large n ∈ N.
Induced metrics from Voronoi partitions. A key ingredient of our stability result involves a Voronoi partition construction. Given a metric space (X, dX) and a subset A ⊆ X, possibly with its own metric dA, we can define a new metric d^A_X on X × X using a Voronoi partition. First write A = {x1, . . . , xn}. For each 1 ≤ i ≤ n, we define Ṽi := {x ∈ X : dX(x, xi) ≤ min_{j≠i} dX(x, xj)}. Then X = ⋃_{i=1}^{n} Ṽi. Next we perform the following disjointification trick:

V1 := Ṽ1, V2 := Ṽ2 \ Ṽ1, . . . , Vn := Ṽn \ ( ⋃_{i=1}^{n−1} Ṽi ).

Then X = ⊔_{i=1}^{n} Vi, a disjoint union of Voronoi cells Vi.
Next define the nearest-neighbor map η : X → A by η(x) = xi for each x ∈ Vi. The map η simply sends each x ∈ X to its closest neighbor in A, up to a choice when there are multiple nearest neighbors. Then we can define a new (pseudo)metric d^A_X : X × X → R+ as follows:

d^A_X(x, x′) := dA(η(x), η(x′)).

Observe that d^A_X(x, x′) = 0 if and only if x, x′ ∈ Vi for some 1 ≤ i ≤ n. Symmetry also holds, as does the triangle inequality.
A special case of this construction occurs when A is an ε-net of X endowed with a restriction of the metric dX. Given a finite metric space (X, dX), an ε-net is a subset Xε ⊂ X such that: (1) for any x ∈ X, there exists s ∈ Xε such that dX(x, s) < ε, and (2) for any s, s′ ∈ Xε, we have dX(s, s′) ≥ ε [15]. For notational convenience, we write d^ε_X to refer to d^{Xε}_X. In this case, we obtain:
‖dX − d^ε_X‖ℓ∞(X×X) = max_{x,x′∈X} |dX(x, x′) − d^ε_X(x, x′)|
= max_{1≤i,j≤n} max_{x∈Vi, x′∈Vj} |dX(x, x′) − d^ε_X(x, x′)|
= max_{1≤i,j≤n} max_{x∈Vi, x′∈Vj} |dX(x, x′) − dX(xi, xj)|
≤ max_{1≤i,j≤n} max_{x∈Vi, x′∈Vj} ( dX(x, xi) + dX(x′, xj) ) ≤ 2ε. (1)
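As a concrete illustration, the following sketch (our own; the greedy net construction and nearest-neighbor tie-breaking are implementation choices, not prescribed above) builds an ε-net, the map η, and the induced metric d^ε_X, and verifies Inequality (1) on random points:

```python
import numpy as np

def greedy_eps_net(d: np.ndarray, eps: float) -> list:
    """Indices of an eps-net: pairwise >= eps apart, covering X within < eps."""
    net = []
    for x in range(len(d)):
        if all(d[x, s] >= eps for s in net):
            net.append(x)
    return net

def induced_metric(d: np.ndarray, eps: float) -> np.ndarray:
    net = greedy_eps_net(d, eps)
    # eta(x): the net point nearest to x (first one in case of ties)
    eta = [net[int(np.argmin(d[x, net]))] for x in range(len(d))]
    n = len(d)
    return np.array([[d[eta[x], eta[y]] for y in range(n)] for x in range(n)])

rng = np.random.default_rng(0)
pts = rng.random((30, 2))
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
eps = 0.2
assert np.abs(d - induced_metric(d, eps)).max() <= 2 * eps  # Inequality (1)
```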
Covering numbers and doubling dimension. For a finite metric space (X, dX), the open ball of radius ε centered at x ∈ X is denoted B(x, ε). The ε-covering number of (X, dX) is defined as:
NX(ε) := min{ n ∈ N : X ⊂ ⋃_{i=1}^{n} B(xi, ε) for x1, . . . , xn ∈ X }.

Notice that the ε-covering number of X is always bounded above by the cardinality of an ε-net. A related quantity is the doubling dimension ddim(X, dX) of a metric space (X, dX), which is defined to be the minimal value ρ such that any ε-ball in X can be covered by at most 2^ρ ε/2-balls [15]. The covering number and doubling dimension of a metric space (X, dX) are related as follows:

Lemma 2. Let (X, dX) be a finite metric space with doubling dimension bounded above by ρ > 0. Then for all ε ∈ (0, diam(X)], we have NX(ε) ≤ (8 diam(X)/ε)^ρ.
4 Duality between Gromov’s embedding and SLHC
Given a metric space (X, dX) and any two points x, x′ ∈ X , we define a chain from x to x′ over X as an ordered set of points in X starting at x and ending at x′:
c = {x0, x1, x2, . . . , xn : x0 = x, xn = x′, xi ∈ X for all 0 ≤ i ≤ n}.

The set of all chains from x to x′ over X will be denoted CX(x, x′). The cost of a chain c = {x0, . . . , xn} over X is defined to be costX(c) := max_{0≤i<n} dX(xi, xi+1). For any metric space (X, dX) and any p ∈ X, the Gromov product of X with respect to p is a map gX,p : X × X → R+ defined by:

gX,p(x, x′) := (1/2) ( dX(x, p) + dX(x′, p) − dX(x, x′) ).
We can define a map g^T_{X,p} : X × X → R+ as follows:

g^T_{X,p}(x, x′) := max_{c∈CX(x,x′)} min_{xi,xi+1∈c} gX,p(xi, xi+1).
This induces a new metric tX,p : X × X → R+:

tX,p(x, x′) := dX(x, p) + dX(x′, p) − 2 g^T_{X,p}(x, x′).

Gromov observed that the space (X, tX,p) is a tree metric space, and that tX,p(x, x′) ≤ dX(x, x′) for any x, x′ ∈ X [12]. This yields the trivial upper bound:

‖dX − tX‖∞ ≤ diam(X, dX).

The Gromov embedding T is defined for any pointed metric space (X, dX, p) as T(X, dX, p) := (X, tX,p). Note that each choice of p ∈ X will yield a tree metric tX,p that depends, a priori, on p.

Theorem 3 (Gromov’s embedding theorem [12]). Let (X, dX, p) be an n-point pointed metric space, and let (X, tX,p) = T(X, dX, p). Then,

‖tX,p − dX‖ℓ∞(X×X) ≤ 2 log2(2n) hyp(X, dX).
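Since the max-min over chains is exactly a widest-path (maximum-bottleneck) closure, Gromov’s embedding can be computed with a Floyd-Warshall-style recursion. Below is a minimal sketch of T (our own rendering, not the authors’ code); the commented assertion checks the bound of Theorem 3 using the hyperbolicity routine sketched earlier.

```python
import numpy as np

def gromov_embedding(d: np.ndarray, p: int) -> np.ndarray:
    """Return the tree metric t_{X,p} for distance matrix d and base point p."""
    g = 0.5 * (d[:, [p]] + d[[p], :] - d)          # Gromov products g_{X,p}
    gT = g.copy()
    for k in range(len(d)):                        # max-min (widest path) closure
        gT = np.maximum(gT, np.minimum(gT[:, [k]], gT[[k], :]))
    return d[:, [p]] + d[[p], :] - 2.0 * gT        # t_{X,p}

# Theorem 3 check, e.g.:
# t = gromov_embedding(d, 0)
# assert np.abs(t - d).max() <= 2 * np.log2(2 * len(d)) * hyperbolicity(d) + 1e-9
```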
Gromov’s embedding is an MDS method where the target is a tree. We observe that its construction is dual—in the sense of swapping max and min operations—to the construction of the ultrametric space produced as an output of SLHC. Recall that the SLHC method H is defined for any metric space (X, dX) as H(X, dX) = (X, uX), where uX : X × X → R+ is the ultrametric defined below:

uX(x, x′) := min_{c∈CX(x,x′)} costX(c).
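The duality is immediate in code: swapping max and min in the closure above yields the SLHC ultrametric (the min over chains of the maximum hop). A minimal sketch of H, again our own rendering:

```python
import numpy as np

def slhc_ultrametric(d: np.ndarray) -> np.ndarray:
    """u_X(x, x') = min over chains of the max edge cost, via min-max closure."""
    u = d.copy()
    for k in range(len(d)):                        # minimax path closure
        u = np.minimum(u, np.maximum(u[:, [k]], u[[k], :]))
    return u
```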
As a consequence of this duality, we can bound the additive distortion of SLHC as below:

Theorem 4. Let (X, dX) be an n-point metric space, and let (X, uX) = H(X, dX). Then we have:

‖dX − uX‖ℓ∞(X×X) ≤ log2(2n) ult(X, dX).

Moreover, this bound is asymptotically tight.
The proof of Theorem 4 proceeds by invoking the definition of ultrametricity at various scales along chains of points; we provide details in Appendix B. We remark that the bounds in Theorems 3, 4 depend on both a local (ultrametricity/hyperbolicity) and a global property (cardinality); however, a natural improvement would be to exploit a global property that takes into account the metric structure of the underlying space. The first step in this improvement is to prove a set of stability theorems.
5 Stability of SLHC and Gromov’s embedding
It is known that SLHC is robust to small perturbations of the input data with respect to the Gromov-Hausdorff distance between metric spaces, whereas other HC methods, such as average linkage and complete linkage, do not enjoy this stability [6]. We prove a particular stability result for SLHC involving the ℓ∞ distance, and then we exploit the duality observed in §4 to prove a similar stability result for Gromov’s embedding.

Theorem 5. Let (X, dX) be a metric space, and let (A, dA) be any subspace with the restriction metric dA := dX|A×A. Let H denote the SLHC method. Write (X, uX) = H(X, dX) and (A, uA) = H(A, dA). Also write u^A_X(x, x′) := uA(η(x), η(x′)) for x, x′ ∈ X. Then we have:

‖H(X, dX) − H(A, dA)‖∞ := ‖uX − u^A_X‖∞ ≤ ‖dX − d^A_X‖∞.

Theorem 6. Let (X, dX, p) be a pointed metric space, and let (A, dA, a) be any subspace with the restriction metric dA := dX|A×A such that η(p) = a. Let T denote the Gromov embedding. Write (X, tX,p) = T(X, dX, p) and (A, tA,a) = T(A, dA, a). Also write t^A_{X,p}(x, x′) := tA,a(η(x), η(x′)) for x, x′ ∈ X. Then we have:

‖T(X, dX, p) − T(A, dA, a)‖∞ := ‖tX,p − t^A_{X,p}‖∞ ≤ 5‖dX − d^A_X‖∞.
The proofs for both of these results use similar techniques, and we present them in Appendix B.
6 Improvement via Doubling Dimension
Our main theorems, providing additive distortion bounds for Gromov’s embedding and for SLHC, are stated below. The proofs for both theorems are similar, so we only present that of the former.

Theorem 7. Let (X, dX) be an n-point metric space with doubling dimension ρ, hyperbolicity hyp(X, dX) = δ, and diameter D. Let p ∈ X, and write (X, tX) = T(X, dX, p). Then we obtain:

Covering number bound: ‖dX − tX‖∞ ≤ min_{ε∈(0,D]} ( 12ε + 2δ log2(2NX(ε)) ). (2)

Also suppose D ≥ δρ/(6 ln 2). Then,

Doubling dimension bound: ‖dX − tX‖∞ ≤ 2δ + 2δρ ( 13/2 + log2( D/(δρ) ) ). (3)
Theorem 8. Let (X, dX) be an n-point metric space with doubling dimension ρ, ultrametricity ult(X, dX) = ν, and diameter D. Write (X, uX) = H(X, dX). Then we obtain:

Covering number bound: ‖dX − uX‖∞ ≤ min_{ε∈(0,D]} ( 4ε + ν log2(2NX(ε)) ). (4)

Also suppose D ≥ νρ/(4 ln 2). Then,

Doubling dimension bound: ‖dX − uX‖∞ ≤ ν + νρ ( 6 + log2( D/(νρ) ) ). (5)
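Both covering number bounds are directly computable: sweep ε over (0, D], estimate NX(ε) (here by the size of a greedy ε-net, which upper-bounds it), and take the minimum. A sketch for bound (2), reusing greedy_eps_net from the earlier snippet; the constants are from Theorem 7 and this is our own illustration:

```python
import numpy as np

def covering_bound(d: np.ndarray, delta: float, num_eps: int = 100) -> float:
    """Evaluate min over eps of 12*eps + 2*delta*log2(2*N_X(eps)), cf. (2)."""
    D = d.max()
    best = np.inf
    for eps in np.linspace(D / num_eps, D, num_eps):
        n_eps = len(greedy_eps_net(d, eps))   # |eps-net| >= N_X(eps)
        best = min(best, 12 * eps + 2 * delta * np.log2(2 * n_eps))
    return best
```

For the SLHC bound (4), replace the constants 12 and 2δ with 4 and ν.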
Remark 9 (A remark on the NTP). We are now able to return to Question 1 and provide some answers. Consider a metric space (X, dX). We can upper bound ‖dX − u^opt_X‖∞ using the bounds in Theorem 8, and ‖dX − t^opt_X‖∞ using the bounds in Theorem 7.

Remark 10 (A remark on parameters). Notice that as hyperbolicity δ approaches 0 (or ultrametricity approaches 0), the doubling dimension bounds (hence the covering number bounds) approach 0. Also note that as ε ↓ 0, we get NX(ε) ↑ |X|, so Theorems 7, 8 reduce to Theorems 3, 4. Experiments lead us to believe that the interesting range of ε values is typically a subinterval of (0, D].
Proof of Theorem 7. Fix ε ∈ (0, D] and let Xε = {x1, x2, . . . , xk} be a collection of k = NX(ε) points that form an ε-net of X. Then we may define d^ε_X and t^ε_X on X × X as in §3. Subsequent application of Theorem 3 and Lemma 2 gives the bound

‖d^ε_X − t^ε_X‖ℓ∞(X×X) ≤ ‖dXε − tXε‖ℓ∞(Xε×Xε) ≤ 2δ log2(2k) ≤ 2δ log2(2Cε^−ρ),

where we define C := (8D)^ρ. Then by the triangle inequality for the ℓ∞ distance, the stability of T (Theorem 6), and using the result that ‖dX − d^ε_X‖ℓ∞(X×X) ≤ 2ε (Inequality 1), we get:

‖dX − tX‖∞ ≤ ‖dX − d^ε_X‖∞ + ‖d^ε_X − t^ε_X‖∞ + ‖t^ε_X − tX‖∞ ≤ 6‖dX − d^ε_X‖∞ + ‖d^ε_X − t^ε_X‖∞ ≤ 12ε + 2δ log2(2NX(ε)).
Since ε ∈ (0, D] was arbitrary, this suffices to prove Inequality 2. Applying Lemma 2 yields:

‖dX − tX‖∞ ≤ 12ε + 2δ log2(2Cε^−ρ).
Notice that Cε^−ρ ≥ NX(ε) ≥ 1, so the term on the right of the inequality above is positive. Consider the function f(ε) = 12ε + 2δ + 2δ log2 C − 2δρ log2 ε. The minimizer of this function is obtained by taking a derivative with respect to ε:

f′(ε) = 12 − 2δρ/(ε ln 2) = 0 =⇒ ε = δρ/(6 ln 2).

Since ε takes values in (0, D], and lim_{ε→0} f(ε) = +∞, the value of f(ε) is minimized at min(D, δρ/(6 ln 2)). By assumption, D ≥ δρ/(6 ln 2). Since ‖dX − tX‖∞ ≤ f(ε) for all ε ∈ (0, D], it follows that:
‖dX − tX‖∞ ≤ f( δρ/(6 ln 2) ) = 2δρ/ln 2 + 2δ + 2δρ log2( 48D ln 2/(δρ) ) ≤ 2δ + 2δρ ( 13/2 + log2( D/(δρ) ) ).
7 Tightness of our bounds in Theorems 7 and 8
By the construction provided below, we show that our covering number bound for the distortion of SLHC is asymptotically tight. A similar construction can be used to show that our covering number bound for Gromov’s embedding is also asymptotically tight.
Proposition 11. There exists a sequence (Xn, dXn)_{n∈N} of finite metric spaces such that as n → ∞,

‖dXn − uXn‖∞ ≍ min_{ε∈(0,Dn]} ( 4ε + νn log2(2NXn(ε)) ) → 0.
Here we have written (Xn, uXn) = H(Xn, dXn), νn = ult(Xn, dXn), and Dn = diam(Xn, dXn).
Proof of Proposition 11. After defining Xn for n ∈ N below, we will denote the error term, our covering number upper bound, and our Gromov-style upper bound as follows:
En := ‖dXn − uXn‖∞, Bn := min_{ε∈(0,Dn]} ρ(n, ε), Gn := log2(2|Xn|) ult(Xn, dXn),

where ρ : N × [0,∞) → R is defined by ρ(n, ε) := 4ε + νn log2(2NXn(ε)).
Here we write |S| to denote the cardinality of a set S. Recall that the separation of a finite metric space (X, dX) is the quantity sep(X, dX) := min_{x≠x′∈X} dX(x, x′). Let (V, uV) be the finite ultrametric space consisting of two equidistant points with common distance 1. For each n ∈ N, let Ln denote the line metric space obtained by choosing (n + 1) equally spaced points with separation 1/n² from the interval [0, 1/n], and endowing this set with the restriction of the Euclidean metric, denoted dLn. One can verify that ult(Ln, dLn) ≈ 1/(2n). Finally, for each n ∈ N we define Xn := V × Ln, and endow Xn with the following metric:

dXn( (v, l), (v′, l′) ) := max( dV(v, v′), dLn(l, l′) ), v, v′ ∈ V, l, l′ ∈ Ln.

Claim 1. ult(Xn, dXn) = ult(Ln, dLn) ≈ 1/(2n). For a proof, see Appendix B.
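The construction is easy to reproduce numerically. A sketch (our own; it reuses slhc_ultrametric from the earlier snippet) that builds Xn = V × Ln with the max product metric and compares the error En with the rate of Claim 2 below:

```python
import numpy as np

def build_Xn(n: int) -> np.ndarray:
    dV = np.array([[0.0, 1.0], [1.0, 0.0]])        # two points at distance 1
    L = np.linspace(0.0, 1.0 / n, n + 1)           # (n+1) points, spacing 1/n^2
    dL = np.abs(L[:, None] - L[None, :])
    # d_{X_n}((v,l),(v',l')) = max(d_V(v,v'), d_{L_n}(l,l'))
    return np.maximum(np.kron(dV, np.ones_like(dL)),
                      np.kron(np.ones_like(dV), dL))

for n in (5, 10, 20):
    d = build_Xn(n)
    E_n = np.abs(d - slhc_ultrametric(d)).max()
    print(n, E_n, 1 / n - 1 / n ** 2)              # matches 1/n - 1/n^2 (Claim 2)
```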
Claim 2. En ≍ diam(Ln, dLn) = 1/n. To see this, let n ∈ N, and let x = (v, l), x′ = (v′, l′) ∈ Xn be two points realizing En. Suppose first that v = v′. Then an optimal chain from (v, l) to (v, l′) only has to incur the cost of moving along the Ln coordinate. As such, we obtain uXn(x, x′) ≤ 1/n², with equality if and only if l ≠ l′. Then,

En = max_{x,x′∈Xn} |dXn(x, x′) − uXn(x, x′)| = max_{l,l′∈Ln} |dLn(l, l′) − 1/n²| = 1/n − 1/n² ≍ 1/n.

Note that the case v ≠ v′ is not allowed, because then we would obtain dXn(x, x′) = dV(v, v′) = uXn(x, x′), since sep(V, dV) ≥ diam(Ln, dLn) and all the points in V are equidistant. Thus we would obtain |dXn(x, x′) − uXn(x, x′)| = 0, which is a contradiction because we assumed that x, x′ realize En.
Claim 3. For each n ∈ N, ε ∈ (0, Dn], we have:

NXn(ε) =
  NV(ε),          ε > sep(V, dV),
  |V|,            diam(Ln, dLn) < ε ≤ sep(V, dV),
  |V| · NLn(ε),   ε ≤ diam(Ln, dLn).
To see this, note that in the first two cases, any ε-ball centered at a point (v, l) automatically contains all of {v} × Ln, so NXn(ε) = NV (ε). Specifically in the range diam(Ln, dLn) < ε ≤ sep(V, dV ), we need exactly one ε-ball for each v ∈ V to cover Xn. Finally in the third case, we need NLn(ε) ε-balls to cover {v} × Ln for each v ∈ V . This yields the stated estimate. By the preceding claims, we now have the following for each n ∈ N, ε ∈ (0, Dn]:
ρ(n, ε) ≈ 4ε + (1/(2n)) log2(2NXn(ε)) =
  4ε + (1/(2n)) log2(2NV(ε)),          ε > sep(V),
  4ε + (1/(2n)) log2(2|V|),            diam(Ln) < ε ≤ sep(V),
  4ε + (1/(2n)) log2(2|V| NLn(ε)),     ε ≤ diam(Ln).
Notice that for sufficiently large n, inf_{ε>diam(Ln)} ρ(n, ε) = ρ(n, 1/n). Then we have:

1/n ≤ En ≤ Bn = min_{ε∈(0,Dn]} ρ(n, ε) ≤ ρ(n, 1/n) ≈ C/n,

for some constant C > 0. Here the first inequality follows from the proof of Claim 2, the second from Theorem 8, and the third from our observation above. It follows that En ≍ Bn ≍ 1/n → 0.
Remark 12. Given the setup of the preceding proof, note that the Gromov-style bound behaves as:

Gn = ρ(n, 0) = (1/(2n)) log2(2|V|(n + 1)) ≈ C′ log2(n + 1)/n,

for some constant C′ > 0. So Gn approaches 0 at a rate strictly slower than that of En and Bn.
8 Discussion
We are motivated by a particular aspect of the numerical taxonomy problem, namely, the distortion incurred when passing from a metric to its optimal tree embedding. We describe and explore a duality between a tree embedding method proposed by Gromov and the well known SLHC method for embedding a metric space into an ultrametric tree. Motivated by this duality, we propose a novel metric space statistic that we call ultrametricity, and give a novel, tight bound on the distortion of the SLHC method depending on cardinality and ultrametricity. We improve this Gromov-style bound by replacing the dependence on cardinality by a dependence on doubling dimension, and produce a family of examples proving tightness of this dimension-based bound. By invoking duality again, we are able to improve Gromov’s original bound on the distortion of his tree embedding method. More specifically, we replace the dependence on cardinality—a set-theoretic notion—by a dependence on doubling dimension, which is truly a metric notion.
Through Proposition 11, we are able to prove that our bound is not just asymptotically tight, but that it is strictly better than the corresponding Gromov-style bound. Indeed, Gromov’s bound can perform arbitrarily worse than our dimension-based bound. We construct an explicit example to verify this claim in Appendix A, Remark 14, where we also provide a practical demonstration of our methods.
| 1. What is the focus of the paper regarding embedding high-dimensional metric spaces?
2. What are the strengths of the proposed approach, particularly in terms of the bound on additive distortion?
3. Do you have any concerns or questions about the motivation of the problem, specifically regarding phylogenetic trees construction?
4. What are your thoughts on the connection between Single Linkage Hierarchical Clustering and the ultrametric tree?
5. Are there any parts of the paper that need further clarification or details, such as the equation between lines 187 and 188? | Review | Review
While the most familiar form of embedding is Multi-dimensional Scaling, in which high-dimensional metric spaces are embedded into 2 or 3 dimensional Euclidean space, embedding into other spaces is also possible. This paper looks at embedding into a tree metric space, where the tree may or may not be ultrametric (having the same height from each leaf to the root), using Gromov's method. There is already a bound on the additive distortion between the original metric space and the embedding, but it depends on the cardinality of the data. The authors give a bound on the additive distortion that depends only on the hyperbolicity and doubling dimension of the metric space.

I find motivating this problem by phylogenetic trees construction (as in the first sentence of the abstract) to be a little misleading. While there are some distance-based methods for constructing phylogenetic trees, these are primarily used as guide trees for sequence alignment or starting trees for maximum likelihood estimation or Bayesian methods based on evolutionary models. Thus getting the correct tree shape matters more than bounds on the distances between leaves. That being said, I think determining better additive bounds for Gromov's tree embedding method is a worthwhile goal, as there is a lot of data that is hyperbolic in nature, and thus embeds better into a tree than a low-dimensional Euclidean space. Indeed the authors mention this at line 66.

The connection to Single Linkage Hierarchical Clustering is also interesting. I had trouble understanding the equation between lines 187 and 188, and how you are going from the clusters produced by SLHC to an ultrametric tree. What are the interior vertices of the tree? Are they always points from X? How does minimizing over the set of all chains in X fit into this?

Minor comments:
- l. 32: "tree" missing after "ultrametric"
- l. 125: fix parentheses in fraction
- l. 291: journal volume?
- l. 300: capitalization and two "in"s
NIPS | Title
TOIST: Task Oriented Instance Segmentation Transformer with Noun-Pronoun Distillation
Abstract
Current referring expression comprehension algorithms can effectively detect or segment objects indicated by nouns, but how to understand verb reference is still under-explored. As such, we study the challenging problem of task oriented detection, which aims to find objects that best afford an action indicated by verbs like sit comfortably on. Towards a finer localization that better serves downstream applications like robot interaction, we extend the problem into task oriented instance segmentation. A unique requirement of this task is to select preferred candidates among possible alternatives. Thus we resort to the transformer architecture which naturally models pair-wise query relationships with attention, leading to the TOIST method. In order to leverage pre-trained noun referring expression comprehension models and the fact that we can access privileged noun ground truth during training, a novel noun-pronoun distillation framework is proposed. Noun prototypes are generated in an unsupervised manner and contextual pronoun features are trained to select prototypes. As such, the network remains noun-agnostic during inference. We evaluate TOIST on the large-scale task oriented dataset COCO-Tasks and achieve +10.9% higher mAPbox than the best-reported results. The proposed noun-pronoun distillation can boost mAPbox and mAPmask by +2.8% and +3.8%. Codes and models are publicly available at https://github.com/AIR-DISCOVER/TOIST.
1 Introduction
As benchmarked by the RefCOCO, RefCOCO+ [27][61] and RefCOCO-g [45] datasets, noun referring expression comprehension models have seen tremendous progress, thanks to large-scale vision-language pre-training models like VL-BERT [57], VilBERT [43], OSCAR [33], UNITER [10], 12-in-1 [44] and MDETR [26]. As shown in the left top part of Fig. 1, these algorithms take noun prompts like hatchback car as inputs and generate a bounding box or an instance mask of that car. However, in real-world applications like intelligent service robots, system inputs usually come in the form of affordance (i.e., the capability to support an action or say a verb phrase). Whether modern vision-language model designs can effectively understand verb reference remains under-explored.
To this end, we focus on the challenging problem of task oriented detection, as introduced by the COCO-Tasks benchmark [55]. As shown in the right top part of Fig. 1, a task oriented detector outputs three boxes of forks as they can be used to smear butter. We also extend the problem to an upgraded instance segmentation version using existing COCO masks [40], as the masks can provide finer localization. When RGB-D pairs are available, instance masks can be used to obtain object point clouds. When image sequences are available, instance masks can be used to reconstruct objects using visual hull [31][11][67]. As such, the newly proposed task oriented instance segmentation formulation (Fig. 1 bottom) is useful for down-stream robot interaction applications.
While noun referring expression comprehension datasets aim to minimize ambiguity [45], an interesting and challenging feature of task oriented detection/segmentation is the intrinsic ambiguity. For example, in the right top panel of Fig. 1, the pizza peel can also be used to smear butter. If we have neither forks nor pizza peels at hand, it is still possible to use the plate to smear butter. Another example is shown in Fig. 1 bottom. When we consider an object to step on, the chair is a better choice because the sofa is soft and the table is heavy to move. When the need switches to sit comfortably on, sofas are obviously the best candidates. In one word, objects that afford a verb are ambiguous and the algorithm needs to model preference.
To this end, current models [55] use a two-stage pipeline, in which objects are firstly detected then relatively ranked. Inspired by the success of DETR-like methods [6][65][41] and the advantage of the attention mechanism in revealing
the relationship between visual elements [34][64], we resort to the transformer architecture as it imposes self-attention on object queries thus naturally models the pair-wise relative preference between object candidates. Our one-stage method is named as Task Oriented Instance Segmentation Transformer and abbreviated as TOIST. Transformers are considered to be data hungry [5][14], but obtaining large-scale visually grounded verb reference data with relative preference (e.g., COCO-Tasks [55]) is difficult. This inspires us to explore the possibility of reusing knowledge in noun referring expression comprehension models. We propose to use pronouns like something as a proxy and distill knowledge from noun embedding prototypes generated by clustering.
Specifically, we first train a TOIST model with verb-noun input (e.g., step on chair for the bottom panel of Fig. 1 bottom), using the privileged noun ground truth. But during inference, we cannot access the noun chair, thus we train the second TOIST model with verb-pronoun input (e.g., step on something) and distill knowledge from the first TOIST model. As such, the second TOIST model remains noun-agnostic during inference and achieves better performance than directly training a model with verb-pronoun input. This framework is named as noun-pronoun distillation. Although leveraging knowledge from models with privileged information has been used in robotics research like autonomous driving [7] and quadrupedal locomotion [32], the proposed paradigm of distilling privileged noun information into pronoun features is novel, to the best of our knowledge.
To summarize, this paper has the following four contributions:
• We upgrade the task oriented detection task into task oriented instance segmentation and provide the first solution to it. Although this is a natural extension, this new formulation is of practical value to robotics applications.
• Unlike existing two-stage models that firstly detect objects then rank them, we propose the first transformer-based method TOIST, for task oriented detection/segmentation. It has only one stage and naturally models relative preference with self-attention on object queries.
• In order to leverage the privileged information in noun referring expression understanding models, we propose a novel noun-pronoun distillation framework. It improves TOIST by +2.8% and +3.8% for mAPbox and mAPmask, respectively.
• We achieve new state-of-the-art (SOTA) results on the COCO-Tasks dataset, out-performing the best reported results by +10.9% mAPbox. Codes and models are publicly available.
2 Related Works
Vision and Language. Connecting vision and language is a long-existing topic for visual scene understanding. Barnard et al. [4] propose a system that translates image regions into nouns. Babytalk
[30] is an early method that turns images into sentences, based upon conditional random fields. Visual question answering [3] aims to answer questions about an image, with potential applications in helping visually impaired people [19]. The CLEVR dataset [25] focuses on the reasoning ability of question answering models, thanks to a full control over the synthetic data. The Flickr30k benchmark [51] addresses the phrase grounding task that links image regions and descriptions. Vision-language navigation [2][16] aims to learn navigation policies that fulfill language commands. DALL-E [52] shows impressive text-to-image generation capability. Video dense captioning [28][15] generates language descriptions for detected salient regions. Visual madlibs [60] focuses on fill-the-blank question answering. Visual commonsense reasoning [62] proposes the more challenging task of justifying an answer. Referring expression detection [61][45] and segmentation [24] localize objects specified by nouns. Although this literature is very large with many problem formulations proposed, the task of detecting objects that afford verbs (e.g., COCO-Tasks[55]) is still under-explored.
Action and Affordance. Verbs, as the link between subjects and objects, have been extensively studied in both the vision [58][48] and language [23][46] communities before, while we focus on the vision side. It is difficult to define standalone verb recognition tasks, so existing problem formulations depend on the focus on subjects or objects. A simple taxonomy can be considered as such: recognizing subjects and verbs is named as action recognition [56][17]; recognizing verbs and objects is named as affordance recognition [66][59][13][9]; recognizing triplets is named as human-object-interaction recognition [35][63]. Task oriented object detection/segmentation, as an affordance understanding task, is very challenging and this study explores the noun-pronoun distillation framework to borrow rich knowledge from more visually grounded noun targets.
Knowledge Distillation. This technique is proposed in the deep learning literature by Hinton et al. [22] to distill knowledge from large models to small models. The insight is that soft logit targets generated by large models contain richer information that better serves as a supervision signal than hard one-hot labels. Knowledge distillation has been extended to show effectiveness in other domains like continual learning [37] and object detection [8]. The survey of Guo et al. [18] provides a comprehensive summary of knowledge distillation variants and applications. Most related to our method is the privileged knowledge distillation methods in robotics research [7][32], in which the teacher model has access to privileged information that the student cannot access during inference. Our noun-pronoun distillation method is tailored for the task oriented detection/segmentation problem which borrows rich knowledge from noun referring expression compression teacher models while still allowing the student model to be noun-agnostic.
3 Formulation
The problem is to detect and segment objects that are preferred to afford a specific task indicated by verb phrases, from an input image. Yet clearly defining affordance and preference is actually challenging so we follow the existing annotation protocol of the COCO-Tasks dataset [55]:
Affordance. Firstly, the target objects afford a specific task. In an input image, it is possible that no objects or multiple objects afford the task. And in the latter case, the objects may belong to multiple classes. For example, in the right top panel of Fig.1, nothing affords the task sit comfortably on. Instead, in the bottom panel, there are at least two sofas, a table and a chair that afford the task.
Preference. Secondly, we need to find the best ones from the objects which afford the task. In other words, the preference among multiple objects needs to be understood. In Fig.1 bottom, the two sofas are obviously more suitable for the task sit comfortably on than other objects enumerated above. Thus the ground truth objects for this task are the two sofas (covered in blue).
Now we formally define the task. The input is an RGB image Xv ∈ R^{3×H0×W0} (v represents visual) and a piece of text Xl (l represents language). Xl describes a specific task like sit comfortably on. The targets are bounding boxes Bgt = [b1, . . . , b_{ngt}] ∈ [0, 1]^{ngt×4} and instance segmentation masks Mgt = [m1, . . . , m_{ngt}] ∈ R^{ngt×H0×W0} of target objects Ogt = ⟨Bgt, Mgt⟩, where ngt ≥ 0 is the count of targets. The four components of bi ∈ [0, 1]^4 are normalized center coordinates, height and width of the i-th box. An algorithm f that addresses this problem works as:

f(⟨Xv, Xl⟩) = ⟨Bpred, Mpred, Spred⟩, (1)

where Bpred and Mpred are predicted boxes and masks, respectively. Spred = [ŝ1, . . . , ŝ_{npred}] ∈ [0, 1]^{npred} gives the probability scores of the predicted objects being selected, which reflect preference. We denote the predicted objects as Opred = ⟨Bpred, Mpred, Spred⟩.
4 Method
We propose an end-to-end Task Oriented Instance Segmentation Transformer, abbreviated as TOIST (Section 4.1). Leveraging pre-trained noun referring expression comprehension models, we further adopt a teacher-student framework for noun-pronoun distillation (Section 4.2).
4.1 Task Oriented Instance Segmentation Transformer
The SOTA method [55] uses a two-stage pipeline to solve the problem. Taking a single image Xv as input, it first detects the bounding boxes Bpred of all objects with a Faster-RCNN [53]. Then it ranks Bpred with a GNN [36], predicting the probabilities Spred of the objects being selected for a task. Our method differs from it in three ways: (1) We address the task with a one-stage architecture, allowing joint representation learning for detection and preference modeling. (2) We specify tasks using the text Xl. (3) We predict instance masks Mpred along with bounding boxes.
We choose to build our method upon the transformer architecture [6], because the self-attention operators in the decoder can naturally model pair-wise relative preference between object candidates. As shown in Fig.2 bottom, TOIST contains three main components: a multi-modal encoder (color brown) to extract tokenized features, a transformer encoder (color green) to aggregate features of two modalities and a transformer decoder (color blue) to predict the most suitable objects with attention.
Two Input Forms. To find an object that affords the task of dig hole, the first step is to construct a task description input Xl. To achieve this goal, we can extend the task name with the ground truth object category to the verb-noun form like dig hole with skateboard or with a pronoun to the verb-pronoun form like dig hole with something. While the former violates the noun-agnostic constraint during inference, it can be leveraged to improve the latter within the proposed noun-pronoun distillation framework, which will be detailed later. For a plain TOIST, the verb-pronoun form is selected as task description Xl, which is fed into the multi-modal encoder along with visual input Xv .
Multi-Modal Encoder. For Xv, a pre-trained CNN-based backbone and a one-layer feed forward network (FFN) are leveraged to extract a low-resolution feature map Fv ∈ R^{d×H×W}. Flattening the spatial dimensions of Fv into one dimension, we obtain a sequence of tokenized feature vectors V = [v1, . . . , v_{nv}] ∈ R^{nv×d} (light blue squares ■ on the left of the transformer encoder in Fig.2), where nv = H × W. To preserve the spatial information, 2D positional embeddings are added to V. For Xl, we use a pre-trained text encoder and another FFN to produce corresponding feature vectors L = [l1, . . . , l_{nl}] ∈ R^{nl×d} (light orange squares ■), where nl is the total count of language tokens. Among these nl features, we denote the one corresponding to the pronoun (or noun) token as lpron (or lnoun) (dark orange squares ■). We concatenate these vectors and obtain the final feature sequence [V, L] = [v1, . . . , v_{nv}, l1, . . . , l_{nl}] ∈ R^{(nv+nl)×d}.

Transformer Encoder. The transformer encoder consists of ntr sequential blocks of multi-head self-attention layers. Given the sequence of features [V, L], it outputs a processed feature sequence [Vtr, Ltr] = [v^tr_1, . . . , v^tr_{nv}, l^tr_1, . . . , l^tr_{nl}] ∈ R^{(nv+nl)×d}. The pronoun (or noun) feature lpron (or lnoun) is encoded into l^tr_pron (or l^tr_noun), which will be used for noun-pronoun distillation (Section 4.2). Here for the plain TOIST, we directly use the features [Vtr, Ltr] for later processing.
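A shape-level sketch of how the two token sequences are assembled (our own illustration; the dimensions below are made up, and real backbone/text-encoder outputs are replaced by random features):

```python
import numpy as np

d, H, W, n_l = 256, 25, 34, 12        # hypothetical feature dim and token counts
F_v = np.random.randn(d, H, W)        # backbone feature map for X_v
V = F_v.reshape(d, H * W).T           # n_v x d visual tokens, n_v = H * W
L = np.random.randn(n_l, d)           # n_l x d language tokens for X_l
VL = np.concatenate([V, L], axis=0)   # (n_v + n_l) x d sequence fed to the encoder
assert VL.shape == (H * W + n_l, d)
```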
Transformer Decoder. The transformer decoder consists of ntr blocks of self-attention and cross-attention layers. It takes as input a set of learnable parameters serving as an object query sequence Q = [q1, . . . , q_{npred}] ∈ R^{npred×d}. [Vtr, Ltr] are used as keys and values for cross-attention layers. The outputs of the transformer decoder are feature vectors Qtr = [q^tr_1, . . . , q^tr_{npred}] ∈ R^{npred×d}, which are projected to final results by three prediction heads. Specifically, the detect head predicts bounding boxes Bpred and the segment head outputs binary segmentation masks Mpred. The logit head outputs logits Gpred = [ĝ1, . . . , ĝ_{npred}] ∈ R^{npred×nmax}, where ĝi = [ĝ^i_1, . . . , ĝ^i_{nmax}]. The first entries [ĝ^i_1, . . . , ĝ^i_{nl}] correspond to the text tokens L = [l1, . . . , l_{nl}] ∈ R^{nl×d}. The entries [ĝ^i_{nl+1}, . . . , ĝ^i_{nmax−1}] are used to pad ĝi to length nmax (by default nmax = 256) and the last one ĝ^i_{nmax} stands for the logit of "no-object". With the output logits, we define the preference score ŝi ∈ Spred of each predicted object as:
ŝi = 1 − exp(ĝ^i_{nmax}) / Σ_{j=1}^{nmax} exp(ĝ^i_j). (2)
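In code, the preference score is simply one minus the softmax probability assigned to the "no-object" logit. A minimal numpy sketch of Eq. (2), our own rendering:

```python
import numpy as np

def preference_scores(G: np.ndarray) -> np.ndarray:
    """G: n_pred x n_max logits; returns the n_pred scores s_i of Eq. (2)."""
    e = np.exp(G - G.max(axis=1, keepdims=True))   # numerically stable softmax
    return 1.0 - e[:, -1] / e.sum(axis=1)          # last column = "no-object"
```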
During training, as in DETR [6], a bipartite matching is computed between npred predicted objects Opred and ground truth objects Ogt with the Hungarian algorithm [29]. The matched object predictions are supervised with L1 loss and Generalized Intersection over Union (GIoU) loss [54] for localization while Dice/F-1 loss [47] and Focal cross-entropy loss [39] for segmentation. We also adopt the soft-token prediction loss and the contrastive alignment loss used in MDETR [26]. But different from them, we do not use a single noun or pronoun as the ground truth token span for a matched object prediction. Instead, we use the whole verb-pronoun description as token span such that the network can understand the verbs rather than noun/pronoun only. The total loss for TOIST is:
LTOIST = λ1 L_l1 + λ2 L_giou + λ3 L_dice + λ4 L_cross + λ5 L_token + λ6 L_align, (3)
where λ1~λ6 are the weights of losses.
4.2 Noun-Pronoun Distillation
Table 1: TOIST performance under different text-related settings.

Text related settings | mAPbox | mAPmask
---|---|---
verb-pronoun input | 41.3 | 35.2
verb-noun input | 53.1 (+11.8) | 47.2 (+12.0)
replace lpron with lnoun | 43.7 (+2.4) | 37.3 (+2.1)
replace l^tr_pron with l^tr_noun | 41.9 (+0.6) | 35.6 (+0.4)
noun-pronoun distillation | 44.1 (+2.8) | 39.0 (+3.8)

Replacing the pronoun in the text input with the ground truth noun when training a TOIST model can boost performance (see Table 1). However, during inference, the noun of the ground truth object is unavailable. Thus we believe a properly designed noun-pronoun distillation framework can leverage rich knowledge from the verb-noun model without violating the noun-agnostic constraint.
Distillation Framework Overview. Two TOIST models are trained simultaneously. The teacher (Fig.2 top) and the student (Fig.2 bottom) take as input verb-noun and verb-pronoun descriptions, respectively. A clustering distillation method with a memory bank and a tailored cluster loss is used to distill privileged object-centric knowledge from noun to pronoun (Fig.2 middle left). Besides, we also use a soft binary target loss imposed on Gpred to distill preference knowledge (Fig.2 middle right), in which Gpred are logits used to calculate preference scores Spred.
Clustering Distillation. Since one task can be afforded by objects of many different categories, we build a text feature memory bank to store noun features, with which a prototype can be selected and used to replace the pronoun feature and distill knowledge. We term this process clustering distillation. Specifically, we use l^tr_pron and l^tr_noun instead of lpron and lnoun for this process. The reason is that the former ones are conditioned on the image input and verb tokens of the task by self-attention layers, and thus it is meaningful to select a cluster center that suits the image and the task input.
Memory and Selector. The size of the memory bank is ntask × nmem × d. It consists of ntask queues of length nmem for ntask tasks. During training, for each sample from task j, we update the j-th queue L^j_mem = [l^j_1, . . . , l^j_{nmem}] by adding the noun feature l^tr_noun generated by the teacher model and removing the existing one closest to l^tr_noun. The updated queue is clustered with the K-means clustering method, leading to K cluster centers L^j_c = {l^j_{c1}, . . . , l^j_{cK}}. Then the student model uses a cluster selector, which is implemented as the nearest neighbor classifier, to select a prototype l^j_{cs} ∈ L^j_c according to the pronoun feature l^tr_pron and replace l^tr_pron with l^j_{cs}. Concatenating the other tokens and the selected prototype together, the output of the student transformer encoder [v^tr_{s1}, . . . , v^tr_{sn}, l^tr_{s1}, . . . , l^tr_pron, . . . , l^tr_{snl}] is modified into [v^tr_{s1}, . . . , v^tr_{sn}, l^tr_{s1}, . . . , l^j_{cs}, . . . , l^tr_{snl}] and fed into the transformer decoder. To distill knowledge to the student transformer encoder, we define the cluster loss as:

Lcluster = ‖l^tr_pron − l^j_{cs}‖2, (4)

with which the privileged object-centric knowledge is distilled from clustered noun features to the pronoun feature and further to the student TOIST encoder.
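A simplified sketch of the memory bank and prototype selection (our own reading of the text: the class name, random queue initialization, and K = 3 are hypothetical, and the paper's exact K-means settings are not specified):

```python
import numpy as np
from sklearn.cluster import KMeans

class NounMemory:
    def __init__(self, n_task: int, n_mem: int, dim: int, K: int = 3):
        self.queues = [np.random.randn(n_mem, dim) for _ in range(n_task)]
        self.K = K

    def update(self, task: int, l_noun: np.ndarray) -> None:
        # Add the teacher's noun feature; drop the existing entry closest to it.
        q = self.queues[task]
        q[np.argmin(np.linalg.norm(q - l_noun, axis=1))] = l_noun

    def select_prototype(self, task: int, l_pron: np.ndarray) -> np.ndarray:
        # K-means centers are the noun prototypes; pick the nearest to l_pron.
        km = KMeans(n_clusters=self.K, n_init=10).fit(self.queues[task])
        centers = km.cluster_centers_
        return centers[np.argmin(np.linalg.norm(centers - l_pron, axis=1))]

def cluster_loss(l_pron: np.ndarray, prototype: np.ndarray) -> float:
    return float(np.linalg.norm(l_pron - prototype))   # Eq. (4)
```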
Preference Distillation. We use a soft binary target loss to distill preference knowledge from teacher to student. For an object query, we first define binary query probabilities of being positive-query or negative-query, which denote whether or not the query is matched to a ground truth object target, as p = [p^pos, p^neg] ∈ R^{1×2}. The probabilities can be calculated by the softmax function:

p^pos = Σ_{j=1}^{nmax−1} exp(ĝj) / Σ_{j=1}^{nmax} exp(ĝj), p^neg = exp(ĝ_{nmax}) / Σ_{j=1}^{nmax} exp(ĝj), (5)
where ĝj and ĝ_{nmax} represent the logits corresponding to the j-th text token and the "no-object" token, respectively. For all the object queries in teacher and student, the probability sequences are denoted by Pt = [p_{t1}, . . . , p_{t_{npred}}] and Ps = [p_{s1}, . . . , p_{s_{npred}}]. Then we use the Hungarian algorithm to find a bipartite matching between the two sequences of object queries with Pt and Ps. Formally, we search for a permutation of npred elements σ ∈ S_{npred} which minimizes the matching loss:

σ̂ = argmin_{σ∈S_{npred}} Σ_{i=1}^{npred} Lmatch(y_{ti}, y_{s_{σ(i)}}), (6)

where y_{ti} = (b̂_{ti}, p_{ti}) and b̂_{ti} is the predicted bounding box. Lmatch is a linear combination of box prediction losses (L1 & GIoU) and KL-Divergence. The KL-Divergence LKL can be written as:

LKL(p_{ti}, p_{s_{σ(i)}}) = KL( p_{ti} ‖ p_{s_{σ(i)}} ) = p^pos_{ti} log( p^pos_{ti} / p^pos_{s_{σ(i)}} ) + p^neg_{ti} log( p^neg_{ti} / p^neg_{s_{σ(i)}} ). (7)
With the optimal permutation σ̂, we define the soft binary target loss as:

Lbinary = Σ_{i=1}^{npred} LKL(p_{ti}, p_{s_{σ̂(i)}}). (8)
It makes the binary query probabilities of the student model similar to the matched ones of the teacher model. And because the preference score ŝ (Eq.2) is defined in the same way as the probability ppos (Eq.5), the preference knowledge is distilled from teacher to student as the loss decreases.
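A minimal sketch of the preference distillation losses (our own simplification: the matching cost below uses only the KL term of Lmatch, whereas the paper also includes the L1 and GIoU box terms):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def binary_probs(G: np.ndarray) -> np.ndarray:
    """Eq. (5): per-query (p_pos, p_neg) from n_pred x n_max logits."""
    e = np.exp(G - G.max(axis=1, keepdims=True))
    p_neg = e[:, -1] / e.sum(axis=1)
    return np.stack([1.0 - p_neg, p_neg], axis=1)

def soft_binary_target_loss(G_t: np.ndarray, G_s: np.ndarray) -> float:
    P_t, P_s = binary_probs(G_t), binary_probs(G_s)
    # Pairwise KL(p_t_i || p_s_j), Eq. (7), as an n_pred x n_pred cost matrix.
    kl = (P_t[:, None, :] * np.log(P_t[:, None, :] / P_s[None, :, :])).sum(-1)
    rows, cols = linear_sum_assignment(kl)   # Hungarian matching, cf. Eq. (6)
    return float(kl[rows, cols].sum())       # Eq. (8)
```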
Summary. The final training loss function for TOIST with noun-pronoun distillation is:
L_{TOIST−NP} = L^t_TOIST + L^s_TOIST + λ7 L^s_cluster + λ8 L^s_binary, (9)

where λ7 and λ8 are the weights of losses. L^t_TOIST and L^s_TOIST are separate TOIST loss terms defined by Eq.3 for teacher and student, respectively. Note that the cluster loss L^s_cluster and the soft binary target loss L^s_binary are only used for supervising the student model.
As a reminder, during inference, we only use the student TOIST model and the fixed memory bank to find the most suitable objects, without violating the noun-agnostic constraint.
5 Experiments
Dataset. We conduct experiments on the COCO-Tasks dataset [55] which re-annotates the COCO dataset [40] with preference-aware affordance labels. This is the only dataset that involves instance-level preference in affordance. Though there are other datasets for affordance detection, they neither distinguish between instances nor involve preference, such as ADE-Affordance [12] and IIT-AFF [49]. The COCO-Tasks dataset contains 14 tasks. For each task, there are 3600 train images and 900 test images. In each image, the boxes of preferred objects (one or more) are taken as ground truth labels for detection. Using existing COCO masks, we extend the dataset to an instance segmentation version. In Appendix B and D, we present more dataset details and show its diversity.

Metric. We use the AP@0.5 metric for both detection and segmentation, where predicted preference scores Spred are used to rank objects. Averaging the AP@0.5 values of all tasks leads to mAP@0.5. The implementation details for TOIST and distillation can be found in Appendix A.
5.1 Comparisons with State-of-the-art Methods
Table 2 shows that TOIST with noun-pronoun distillation achieves state-of-the-art results compared to existing methods on COCO-Tasks. For object detection, we use the results reported by [55] as baselines. For instance segmentation, following the same experiment settings of [55], we build new baselines using Mask-RCNN [20]. The methods in the first row treat the problem as a standard detection or segmentation task. All other baselines use two-stage pipelines, in which objects are firstly detected or segmented then ranked. The proposed one-stage method achieves 41.3% mAPbox and 35.2% mAPmask, which are +8.1% and +2.8% better than the previous best results (Yolo+GGNN and Mask-RCNN+GGNN). Noun-pronoun distillation further boosts the performance of TOIST to 44.1% (+10.9%) mAPbox and 39.0% (+6.6%) mAPmask. Our method also out-performs another
baseline with the same backbone, as shown in Table 9 of Appendix C. These results demonstrate the effectiveness of the proposed method for the new problem of task oriented instance segmentation. Per-task quantitative results and precision-recall curves are provided in Appendix C and D.
5.2 Preference Modeling with Self-Attention
Protocol. Our design principle is that the self-attention layers in transformers can naturally model preference. But is this really the case? Fig.3 (a) shows its effect. Two plain TOIST models are trained separately, with the only difference being that one model does not contain self-attention operators in the decoder. Note that the removal of self-attention does not impact the number of parameters. The mAP metric is impacted by two kinds of errors: inaccurate box/mask localization or improper preference. To analyze the preference scores alone, for all object queries, we use the boxes and masks predicted by the last block of TOIST decoder, which are arguably the most accurate. We use the corresponding preference scores predicted by each block to calculate mAP values.
Interpretation. For the TOIST with self-attention, performance is gradually boosted as the source of the preference scores becomes deeper: from 29.6% mAP^box and 25.0% mAP^mask to 41.3% and 35.2%. For the model without self-attention, the preference scores from the first block give the best performance: 33.9% mAP^box and 28.7% mAP^mask, which is 7.5% and 6.5% lower than the TOIST with self-attention. The results demonstrate that the self-attention in the TOIST decoder models pair-wise relative preference between object candidates: as the decoder deepens, the preference relationship between candidates is gradually extracted by self-attention.
5.3 Effect of Clustering Distillation

Table 3: Ablations for distillation settings. CCR, CL and SBTL are short for cluster center replacement, cluster loss and soft binary target loss, respectively.

Index  CCR  CL  SBTL  mAP^box       mAP^mask
(a)    ×    ×   ×     41.3          35.2
(b)    ×    ×   ✓     43.4 (+2.1)   38.0 (+2.8)
(c)    ×    ✓   ×     42.0 (+0.7)   37.1 (+1.9)
(d)    ×    ✓   ✓     43.8 (+2.5)   38.6 (+3.4)
(e)    ✓    ×   ×     42.0 (+0.7)   37.0 (+1.8)
(f)    ✓    ×   ✓     42.3 (+1.0)   37.3 (+2.1)
(g)    ✓    ✓   ×     42.3 (+1.0)   37.5 (+2.3)
(h)    ✓    ✓   ✓     44.1 (+2.8)   39.0 (+3.8)

In Table 3, we show the effects of using cluster loss and replacing pronoun features with cluster centers (noun prototypes). In (c) and (e), leveraging the two components alone brings an increase of +0.7% mAP^box, +1.9% mAP^mask and +0.7% mAP^box, +1.8% mAP^mask over baseline (a), respectively. In (g), the complete clustering distillation leads to a performance improvement of +1.0% mAP^box and +2.3% mAP^mask. These results show that the clustering distillation method can improve student TOIST and enhance verb referring expression understanding.
In Fig.4, we visualize the predicted results (filtered by a preference threshold of 0.9) and the attention maps of the pronoun tokens. In the first row, without clustering distillation, TOIST wrongly prefers the flower to the cup, which is also confirmed by the attention map. With clustering distillation, TOIST correctly selects the cup, and the attention on the flower is weakened. This shows that clustering distillation enables the student TOIST to reduce the ambiguity of the verb-pronoun referring expression. In the second row, the bounding box of the knife is correctly detected by both models. However, in the absence of the distillation, extra instance masks are predicted on the spoon and fork within the box. With the distillation, the masks predicted by TOIST are concentrated on the knife and the attention is more focused on it. This demonstrates that, with clustering distillation, TOIST can better ground the task into pixels within an object box.
Meanwhile, the fact that predicted masks may be inaccurate even when the box is correct makes it challenging for a robot to accurately grasp the preferred object when performing a specific task. This underscores the importance of extending task oriented object detection to instance segmentation.
5.4 Effect of Preference Distillation

[Figure 5: Examples of three scenarios where preference distillation clearly works: (1) pound carpet, (2) smear butter, (3) open parcel; TOIST w/ and w/o preference distillation compared against the ground truth.]
In Table 3 (b), preference distillation with the soft binary target loss achieves results +2.1% mAP^box and +2.8% mAP^mask higher than baseline (a). This loss acts on the preference probabilities of each object candidate in the student TOIST, and these probabilities are used as scores to rank the object candidates when computing mAP. Therefore, the result of Table 3 (b) strongly supports that the preference information is distilled to the student TOIST.
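For illustration, a minimal sketch of the soft binary target loss is given below, assuming the teacher-student query matching has already produced aligned logit rows; the helper names are our own.

```python
import torch

def soft_binary_target_loss(teacher_logits: torch.Tensor,
                            student_logits: torch.Tensor) -> torch.Tensor:
    """KL divergence between teacher and student binary query
    probabilities (object vs. "no-object"), summed over matched queries.
    *_logits: [num_queries, num_tokens]; the last token is "no-object"."""
    def binary_probs(logits):
        p = logits.softmax(dim=-1)
        p_neg = p[..., -1:]          # "no-object" probability
        p_pos = 1.0 - p_neg          # the preference score of Eq. 2
        return torch.cat([p_pos, p_neg], dim=-1).clamp_min(1e-8)

    p_t = binary_probs(teacher_logits)
    p_s = binary_probs(student_logits)
    # KL(p_t || p_s) per matched query, summed
    return (p_t * (p_t.log() - p_s.log())).sum()
```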
A simple taxonomy distinguishes three scenarios where preference distillation works. As shown in Fig.5, the predicted results (filtered by a preference threshold of 0.9) of the TOIST models w/ and w/o preference distillation are compared. (1) Preference distillation pushes the preference score of a false positive object (the baseball in the left picture) below the threshold. (2) The preference score of a false negative object (the spoon in the middle) is raised above the threshold by the distillation. (3) Without distillation, a false positive object (fork) scores higher than the true positive object (knife) (0.9822 > 0.9808). Although the distillation fails to push the false positive's score below the threshold, its score is updated to be lower than that of the true positive (0.9495 < 0.9680). These results demonstrate that the information of the noun referring expression is distilled to the noun-agnostic student model in the form of preference scores.
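Scenario (3) is purely about ranking, which the toy check below restates using the scores quoted above.

```python
threshold = 0.9
# w/o distillation: the false-positive fork outranks the true-positive knife
fp_wo, tp_wo = 0.9822, 0.9808
# w/ distillation: both stay above the threshold, but the order is fixed
fp_w, tp_w = 0.9495, 0.9680
assert fp_wo > threshold and tp_wo > threshold and fp_wo > tp_wo
assert fp_w > threshold and tp_w > threshold and fp_w < tp_w
```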
5.5 Ablation Study and Qualitative Results
Distillation Methods. Instead of minimizing the distance between l^tr_pron and l^j_cs, a straightforward alternative is to directly minimize the distance between l^tr_pron and l^tr_noun. As shown in Table 4, this simplified method does not work well, which prompted us to develop the distillation framework.
Table 4: Different distillation methods.

Method                               mAP^box       mAP^mask
TOIST                                41.3          35.2
distill from l^j_cs to l^tr_pron     44.1 (+2.8)   39.0 (+3.8)
distill from l^tr_noun to l^tr_pron  41.9 (+0.6)   36.0 (+0.8)

Table 5: Results without pre-training.

Method                     mAP^box        mAP^mask
verb-pronoun input         3.65           5.74
verb-noun input            11.19          12.67
noun-pronoun distillation  7.43 (+3.78)   11.28 (+5.54)
Interaction of the Two Distillation Components. In Table 3 (d) and (f), we show the effects of the cluster loss or cluster center replacement combined with the soft binary target loss. (d) achieves a +2.5% mAP^box and +3.4% mAP^mask improvement, which demonstrates that the two distillation losses collaborate well. (f) only achieves a +1.0% mAP^box and +2.1% mAP^mask improvement, slightly higher than (e) (using cluster center replacement only) but lower than (b) (using soft binary target loss only). This shows that preference distillation effectively improves object preference modeling, but solely replacing pronoun features to indicate target objects weakens its effect.
Table 6: Ablations for pronoun input.

Method                 Pronoun    mAP^box  mAP^mask
TOIST                  something  41.3     35.2
TOIST                  it         41.3     35.2
TOIST                  them       41.4     35.0
TOIST                  abcd       39.0     33.2
TOIST w/ distillation  something  44.1     39.0
TOIST w/ distillation  it         43.8     38.4
TOIST w/ distillation  them       43.8     38.1
TOIST w/ distillation  abcd       42.8     37.4
Ablations for Cluster Number K. Fig. 3(b) shows the ablations for the cluster number K. We perform distillation experiments with K between 1 and 10, since increasing K further makes the clustering task more difficult. All of these experiments yield better results than the plain TOIST (41.3% mAP^box, 35.2% mAP^mask), and K = 3 works best. This demonstrates that a modest K best clusters the information of the noun features for distillation to the student TOIST.
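As a concrete illustration of the clustering step, here is a simplified sketch, assuming a frozen snapshot of one task's memory bank; the scikit-learn call and helper name are our own, and the online bank update is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_prototype(noun_features: np.ndarray,
                     pronoun_feature: np.ndarray,
                     k: int = 3) -> np.ndarray:
    """noun_features: [n_mem, d] bank entries for one task;
    pronoun_feature: [d] contextual pronoun embedding.
    Returns the nearest of the k cluster centers (noun prototypes),
    which would replace the pronoun token's encoder output."""
    kmeans = KMeans(n_clusters=k, n_init=10).fit(noun_features)
    centers = kmeans.cluster_centers_                        # [k, d]
    dists = np.linalg.norm(centers - pronoun_feature[None], axis=1)
    return centers[int(np.argmin(dists))]
```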
Ablations for Pronoun Input. Table 6 shows the results of TOIST with different pronoun inputs. For both the plain TOIST and TOIST with distillation, using something, it or them leads to similar results, while the meaningless string abcd performs worse. Nevertheless, the proposed distillation framework still works well even in this last case, which demonstrates the robustness of our method.
Results without Pre-training. In our architecture, pre-trained noun referring expression comprehension models are leveraged. To investigate whether the noun-pronoun distillation framework is a standalone technical contribution, we conduct experiments without pre-training. The models are trained from scratch on the COCO-Tasks dataset, and the results in Table 5 demonstrate that the proposed distillation still improves performance even without pre-training.
Ablations for Task Number. Table 7 shows the ablation study on the number of tasks n_task, in which the first row corresponds to the plain TOIST without distillation and the others show the results with distillation under different n_task. The results demonstrate that the proposed distillation works for different n_task, even when n_task = 1. Overall, a smaller n_task leads to better performance. We attribute this to the reduced problem complexity stemming from less interaction between different tasks, which makes it easier to improve the model's ability to understand verbs through noun-pronoun distillation.
Qualitative Results. Fig.6 shows more qualitative results. In (a), two toilets are target objects, annotated partially or totally in the ground truth, while TOIST predicts both kinds of results for each toilet. In (b), no object is annotated, yet TOIST keenly detects two water bottles that afford the task. In (c), TOIST predicts a more accurate mask than the ground truth. In (d), the table is selected, and interestingly it does afford the task, as the table edge can be used to open beer bottles. More qualitative results can be found in Appendix E.
6 Conclusion and Discussion
We explore the problem of task oriented instance segmentation and propose a transformer-based method named TOIST with a novel noun-pronoun distillation framework. Experiments show that our method successfully models affordance and preference, achieving SOTA results on the COCO-Tasks dataset. Limitations. Due to the lack of large-scale datasets with more abundant tasks, TOIST is only evaluated on a limited set of tasks. While this is sufficient for many robotics applications, it would be interesting to explore general verb reference understanding on more tasks. Potential Negative Social Impact. Because TOIST is not perfect, when it is used in robotics applications, robots may have difficulty selecting the most suitable object to carry out a task, or may even cause damage.
Summary Of The Paper
The authors propose a novel way to do task oriented object detection, evaluated on the COCO-Tasks dataset. They modify the backbone used in the DETR model to feed the transformer encoder with textual features along with image features, so that better contextualized representations are obtained. The loss function is optimized to perform accurate bounding box localization along with instance segmentation. The paper leverages knowledge from a model trained with verb-noun captions (using the ground truth nouns) to train a student model with verb-pronoun captions. This way the model still remains noun-agnostic at inference time, but it can detect the noun from the verb-pronoun input if trained properly.
Strengths And Weaknesses
Strengths:
The authors propose a novel approach for task oriented detection by introducing verb-pronoun captions and leveraging knowledge distillation to learn from noun ground truth.
The paper presents state-of-the-art results on the said task.
Ablations are provided to show the utility of the distillation components, which are among the novelties of the paper.
Weaknesses:
The memory bank is a queue and is updated in a FIFO fashion. However, this might lead to the removal of a noun feature not adequately represented in the rest of the list. Wouldn't it make more sense to update the queue by removing elements in a smarter way to reduce the occurrence of similar features? For example, for any new object feature, remove the past object feature whose representation is closest to it.
Some ablation studies regarding why certain loss terms are useful are missing.
Questions
How many unique pronouns are used in the captions to train the student model? How many objects are in the COCO-Tasks dataset? Are the COCO-Tasks captions modified in any way (e.g., by replacing objects with pronouns) to train the student model?
It would be good to see an analysis of which verbs are associated with which objects (comparing ground truth and model predictions), e.g., a distribution plot showing that the verb "sit on" was associated with "chair" 10 out of 20 times, with "table" 5 out of 20 times, etc. That would indicate whether the model fails on some verbs more frequently than others.
How is the score s_i used in the loss function in Eq. 3? Is it used in the localization loss terms or the segmentation loss terms? Or is it used in some other way?
How is the default value of n_max = 256 decided? What is the value of n_pred? I am assuming it should be greater than the total number of objects in the COCO dataset. Is that the case?
Are there ablations for including vs. not including the loss terms L_token and L_align?
It is not clear what G_npred means in line 245. Please define what it stands for.
L_match was a loss term in the DETR model to encourage matching the classes and bounding boxes of ground truth and prediction. However, it is not included in the loss functions here (Eqs. 3 and 9). Why is L_match not used in the loss function? In line 247, the authors mention that KL divergence is also a part of L_match; however, the original DETR paper doesn't mention that.
Instead of minimizing the distance between l^tr_pron and l^j_cs in Eq. 4, why can't one minimize the distance between l^tr_pron and l^tr_noun for knowledge distillation?
Limitations
Limitations are briefly discussed in the "Conclusion" section.
Summary Of The Paper
In order to handle the affordance recognition task, this paper proposes a Task-Oriented Instance Segmentation Transformer (TOIST) to find objects that best afford an action indicated by verbs. TOIST is a teacher-student knowledge distillation model that leverages a referring expression comprehension algorithm as the teacher module to guide the student module in learning the noun-pronoun transformation. The experiments show the positive effect of the knowledge distillation mechanism.
Strengths And Weaknesses
[Strengths]
The idea of utilizing the referring expression comprehension algorithm as the teacher module is interesting.
The manuscript is well organized and has several interesting analyses.
[Weaknesses]
The contribution of upgrading task-oriented detection to task-oriented instance segmentation on top of an existing transformer model is weak.
Although the existing method [48] is a two-stage model, the proposed TOIST needs to separately fine-tune the pre-trained student and teacher TOIST models, followed by a final knowledge distillation. It seems that the pre-trained models employ extra data for training; hence, the extra training data and the extra training procedure make the advantage of the claimed one-stage model somewhat weak.
When comparing with the state-of-the-art methods, it is not fair to compare methods that extract features with different backbones. It would be interesting to see whether the 'TOIST w/ distillation' in Table 2 can still surpass a baseline with the same backbone, i.e., 'MDETR+GGNN'.
Questions
The tackled affordance recognition task is not a well-explored research topic; hence, the compared baseline method [48] is not advanced. In order to demonstrate the performance gain of the proposed TOIST, it would be better to train the model without the extra training data and compare it with a baseline using an advanced backbone network, for example, 'MDETR+GGNN'. Please see [Weaknesses] for reference.
Limitations
The authors described the limitations and potential negative societal impact of their work. |
NIPS | Title
TOIST: Task Oriented Instance Segmentation Transformer with Noun-Pronoun Distillation
Abstract
Current referring expression comprehension algorithms can effectively detect or segment objects indicated by nouns, but how to understand verb reference is still under-explored. As such, we study the challenging problem of task oriented detection, which aims to find objects that best afford an action indicated by verbs like sit comfortably on. Towards a finer localization that better serves downstream applications like robot interaction, we extend the problem into task oriented instance segmentation. A unique requirement of this task is to select preferred candidates among possible alternatives. Thus we resort to the transformer architecture which naturally models pair-wise query relationships with attention, leading to the TOIST method. In order to leverage pre-trained noun referring expression comprehension models and the fact that we can access privileged noun ground truth during training, a novel noun-pronoun distillation framework is proposed. Noun prototypes are generated in an unsupervised manner and contextual pronoun features are trained to select prototypes. As such, the network remains noun-agnostic during inference. We evaluate TOIST on the large-scale task oriented dataset COCO-Tasks and achieve +10.9% higher mAP than the best-reported results. The proposed noun-pronoun distillation can boost mAP and mAP by +2.8% and +3.8%. Codes and models are publicly available at https://github.com/AIR-DISCOVER/TOIST.
1 Introduction
As benchmarked by the RefCOCO, RefCOCO+ [27][61] and RefCOCO-g [45] datasets, noun referring expression comprehension models have seen tremendous progress, thanks to large-scale vision-language pre-training models like VL-BERT [57], VilBERT [43], OSCAR [33], UNITER [10], 12-in-1 [44] and MDETR [26]. As shown in the left top part of Fig. 1, these algorithms take noun prompts like hatchback car as inputs and generate a bounding box or an instance mask of that car. However, in real-world applications like intelligent service robots, system inputs usually come in the form of affordance (i.e., the capability to support an action or say a verb phrase). Whether modern vision-language model designs can effectively understand verb reference remains under-explored.
To this end, we focus on the challenging problem of task oriented detection, as introduced by the COCO-Tasks benchmark [55]. As shown in the right top part of Fig. 1, a task oriented detector outputs three boxes of forks as they can be used to smear butter. We also extend the problem to an upgraded instance segmentation version using existing COCO masks [40], as the masks can provide finer localization. When RGB-D pairs are available, instance masks can be used to obtain object point clouds. When image sequences are available, instance masks can be used to reconstruct objects using visual hull [31][11][67]. As such, the newly proposed task oriented instance segmentation formulation (Fig. 1 bottom) is useful for down-stream robot interaction applications.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
While noun referring expression comprehension datasets aim to minimize ambiguity [45], an interesting and challenging feature of task oriented detection/segmentation is the intrinsic ambiguity. For example, in the right top panel of Fig. 1, the pizza peel can also be used to smear butter. If we have neither forks nor pizza peels at hand, it is still possible to use the plate to smear butter. Another example is shown in Fig. 1 bottom. When we consider an object to step on, the chair is a better choice because the sofa is soft and the table is heavy to move. When the need switches to sit comfortably on, sofas are obviously the best candidates. In one word, objects that afford a verb are ambiguous and the algorithm needs to model preference.
To this end, current models [55] use a two-stage pipeline, in which objects are firstly detected then relatively ranked. Inspired by the success of DETR-like methods [6][65][41] and the advantage of the attention mechanism in revealing
the relationship between visual elements [34][64], we resort to the transformer architecture as it imposes self-attention on object queries thus naturally models the pair-wise relative preference between object candidates. Our one-stage method is named as Task Oriented Instance Segmentation Transformer and abbreviated as TOIST. Transformers are considered to be data hungry [5][14], but obtaining large-scale visually grounded verb reference data with relative preference (e.g., COCOTasks [55]) is difficult. This inspires us to explore the possibility of reusing knowledge in noun referring expression comprehension models. We propose to use pronouns like something as a proxy and distill knowledge from noun embedding prototypes generated by clustering.
Specifically, we first train a TOIST model with verb-noun input (e.g., step on chair for the bottom panel of Fig. 1 bottom), using the privileged noun ground truth. But during inference, we cannot access the noun chair, thus we train the second TOIST model with verb-pronoun input (e.g., step on something) and distill knowledge from the first TOIST model. As such, the second TOIST model remains noun-agnostic during inference and achieves better performance than directly training a model with verb-pronoun input. This framework is named as noun-pronoun distillation. Although leveraging knowledge from models with privileged information has been used in robotics research like autonomous driving [7] and quadrupedal locomotion [32], the proposed paradigm of distilling privileged noun information into pronoun features is novel, to the best of our knowledge.
To summarize, this paper has the following four contributions:
• We upgrade the task oriented detection task into task oriented instance segmentation and provide the first solution to it. Although this is a natural extension, this new formulation is of practical value to robotics applications.
• Unlike existing two-stage models that first detect objects and then rank them, we propose the first transformer-based method, TOIST, for task oriented detection/segmentation. It has only one stage and naturally models relative preference with self-attention on object queries.
• In order to leverage the privileged information in noun referring expression understanding models, we propose a novel noun-pronoun distillation framework. It improves TOIST by +2.8% and +3.8% for mAPbox and mAPmask, respectively.
• We achieve new state-of-the-art (SOTA) results on the COCO-Tasks dataset, outperforming the best reported results by +10.9% mAPbox. Codes and models are publicly available.
2 Related Works
Vision and Language. Connecting vision and language is a long-existing topic for visual scene understanding. Barnard et al. [4] propose a system that translates image regions into nouns. Babytalk
[30] is an early method that turns images into sentences, based upon conditional random fields. Visual question answering [3] aims to answer questions about an image, with potential applications in helping visually impaired people [19]. The CLEVR dataset [25] focuses on the reasoning ability of question answering models, thanks to full control over the synthetic data. The Flickr30k benchmark [51] addresses the phrase grounding task that links image regions and descriptions. Vision-language navigation [2][16] aims to learn navigation policies that fulfill language commands. DALL-E [52] shows impressive text-to-image generation capability. Video dense captioning [28][15] generates language descriptions for detected salient regions. Visual madlibs [60] focuses on fill-in-the-blank question answering. Visual commonsense reasoning [62] proposes the more challenging task of justifying an answer. Referring expression detection [61][45] and segmentation [24] localize objects specified by nouns. Although this literature is very large with many problem formulations proposed, the task of detecting objects that afford verbs (e.g., COCO-Tasks [55]) is still under-explored.
Action and Affordance. Verbs, as the link between subjects and objects, have been extensively studied in both the vision [58][48] and language [23][46] communities; here we focus on the vision side. It is difficult to define standalone verb recognition tasks, so existing problem formulations depend on whether the focus is on subjects or objects. A simple taxonomy is as follows: recognizing subjects and verbs is named action recognition [56][17]; recognizing verbs and objects is named affordance recognition [66][59][13][9]; recognizing triplets is named human-object-interaction recognition [35][63]. Task oriented object detection/segmentation, as an affordance understanding task, is very challenging, and this study explores the noun-pronoun distillation framework to borrow rich knowledge from more visually grounded noun targets.
Knowledge Distillation. This technique was proposed in the deep learning literature by Hinton et al. [22] to distill knowledge from large models to small models. The insight is that soft logit targets generated by large models contain richer information and thus serve as a better supervision signal than hard one-hot labels. Knowledge distillation has been extended to show effectiveness in other domains like continual learning [37] and object detection [8]. The survey of Guo et al. [18] provides a comprehensive summary of knowledge distillation variants and applications. Most related to our method are the privileged knowledge distillation methods in robotics research [7][32], in which the teacher model has access to privileged information that the student cannot access during inference. Our noun-pronoun distillation method is tailored for the task oriented detection/segmentation problem: it borrows rich knowledge from noun referring expression comprehension teacher models while still allowing the student model to be noun-agnostic.
3 Formulation
The problem is to detect and segment the objects that are preferred to afford a specific task indicated by a verb phrase, from an input image. Yet clearly defining affordance and preference is challenging, so we follow the existing annotation protocol of the COCO-Tasks dataset [55]:
Affordance. Firstly, the target objects afford a specific task. In an input image, it is possible that no objects or multiple objects afford the task. And in the latter case, the objects may belong to multiple classes. For example, in the right top panel of Fig.1, nothing affords the task sit comfortably on. Instead, in the bottom panel, there are at least two sofas, a table and a chair that afford the task.
Preference. Secondly, we need to find the best ones from the objects which afford the task. In other words, the preference among multiple objects needs to be understood. In Fig.1 bottom, the two sofas are obviously more suitable for the task sit comfortably on than other objects enumerated above. Thus the ground truth objects for this task are the two sofas (covered in blue).
Now we formally define the task. The input is an RGB image $X_v \in \mathbb{R}^{3 \times H_0 \times W_0}$ ($v$ for visual) and a piece of text $X_l$ ($l$ for language). $X_l$ describes a specific task like sit comfortably on. The targets are bounding boxes $B^{gt} = [b_1, \ldots, b_{n_{gt}}] \in [0,1]^{n_{gt} \times 4}$ and instance segmentation masks $M^{gt} = [m_1, \ldots, m_{n_{gt}}] \in \mathbb{R}^{n_{gt} \times H_0 \times W_0}$ of target objects $O^{gt} = \langle B^{gt}, M^{gt} \rangle$, where $n_{gt} \geq 0$ is the count of targets. The four components of $b_i \in [0,1]^4$ are the normalized center coordinates, height and width of the $i$-th box. An algorithm $f$ that addresses this problem works as:

$f(\langle X_v, X_l \rangle) = \langle B^{pred}, M^{pred}, S^{pred} \rangle$,  (1)

where $B^{pred}$ and $M^{pred}$ are predicted boxes and masks, respectively. $S^{pred} = [\hat{s}_1, \ldots, \hat{s}_{n_{pred}}] \in [0,1]^{n_{pred}}$ contains the probability scores of the predicted objects being selected, which reflect preference. We denote the predicted objects as $O^{pred} = \langle B^{pred}, M^{pred}, S^{pred} \rangle$.
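To make this formulation concrete, the following is a minimal Python sketch of the input-output contract of Eq. (1); the type names and the function signature are illustrative assumptions rather than a released API.

```python
# A minimal sketch of the task interface; names and shapes are illustrative
# assumptions, not the authors' released API.
from dataclasses import dataclass
import numpy as np

@dataclass
class TaskOrientedOutput:
    boxes: np.ndarray   # (n_pred, 4): normalized center coords, height, width
    masks: np.ndarray   # (n_pred, H0, W0): instance segmentation masks
    scores: np.ndarray  # (n_pred,): preference scores in [0, 1]

def task_oriented_segmentation(image: np.ndarray, task_text: str) -> TaskOrientedOutput:
    """f(<X_v, X_l>) -> <B_pred, M_pred, S_pred>, as in Eq. (1)."""
    raise NotImplementedError  # placeholder for a trained model such as TOIST

# Usage: rank predictions by preference, then threshold.
# out = task_oriented_segmentation(img, "sit comfortably on")
# selected = out.scores > 0.9
```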
4 Method
We propose an end-to-end Task Oriented Instance Segmentation Transformer, abbreviated as TOIST (Section 4.1). Leveraging pre-trained noun referring expression comprehension models, we further adopt a teacher-student framework for noun-pronoun distillation (Section 4.2).
4.1 Task Oriented Instance Segmentation Transformer
The SOTA method [55] uses a two-stage pipeline to solve the problem. Taking a single image Xv as input, it first detects the bounding boxes Bpred of all objects with a Faster-RCNN [53]. Then it ranks Bpred with a GNN [36], predicting the probabilities Spred of the objects being selected for a task. Our method differs from it in three ways: (1) We address the task with a one-stage architecture, allowing joint representation learning for detection and preference modeling. (2) We specify tasks using the text Xl. (3) We predict instance masks Mpred along with bounding boxes.
We choose to build our method upon the transformer architecture [6], because the self-attention operators in the decoder can naturally model pair-wise relative preference between object candidates. As shown in Fig.2 bottom, TOIST contains three main components: a multi-modal encoder (color brown) to extract tokenized features, a transformer encoder (color green) to aggregate features of two modalities and a transformer decoder (color blue) to predict the most suitable objects with attention.
Two Input Forms. To find an object that affords the task of dig hole, the first step is to construct a task description input Xl. To achieve this goal, we can extend the task name with the ground truth object category to the verb-noun form like dig hole with skateboard or with a pronoun to the verb-pronoun form like dig hole with something. While the former violates the noun-agnostic constraint during inference, it can be leveraged to improve the latter within the proposed noun-pronoun distillation framework, which will be detailed later. For a plain TOIST, the verb-pronoun form is selected as task description Xl, which is fed into the multi-modal encoder along with visual input Xv .
Multi-Modal Encoder. For $X_v$, a pre-trained CNN-based backbone and a one-layer feed-forward network (FFN) are leveraged to extract a low-resolution feature map $F_v \in \mathbb{R}^{d \times H \times W}$. Flattening the spatial dimensions of $F_v$ into one dimension, we obtain a sequence of tokenized feature vectors $V = [v_1, \ldots, v_{n_v}] \in \mathbb{R}^{n_v \times d}$ (light blue squares ■ on the left of the transformer encoder in Fig. 2), where $n_v = H \times W$. To preserve the spatial information, 2D positional embeddings are added to $V$. For $X_l$, we use a pre-trained text encoder and another FFN to produce corresponding feature vectors $L = [l_1, \ldots, l_{n_l}] \in \mathbb{R}^{n_l \times d}$ (light orange squares ■), where $n_l$ is the total count of language tokens. Among these $n_l$ features, we denote the one corresponding to the pronoun (or noun) token as $l_{pron}$ (or $l_{noun}$) (dark orange squares ■). We concatenate these vectors and obtain the final feature sequence $[V, L] = [v_1, \ldots, v_{n_v}, l_1, \ldots, l_{n_l}] \in \mathbb{R}^{(n_v+n_l) \times d}$.

Transformer Encoder. The transformer encoder consists of $n_{tr}$ sequential blocks of multi-head self-attention layers. Given the sequence of features $[V, L]$, it outputs the processed feature sequence $[V^{tr}, L^{tr}] = [v^{tr}_1, \ldots, v^{tr}_{n_v}, l^{tr}_1, \ldots, l^{tr}_{n_l}] \in \mathbb{R}^{(n_v+n_l) \times d}$. The pronoun (or noun) feature $l_{pron}$ (or $l_{noun}$) is encoded into $l^{tr}_{pron}$ (or $l^{tr}_{noun}$), which will be used for noun-pronoun distillation (Section 4.2). Here, for the plain TOIST, we directly use the features $[V^{tr}, L^{tr}]$ for later processing.
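For concreteness, a minimal PyTorch-style sketch of the tokenization and concatenation step is given below; the exact backbone and text-encoder interfaces are assumptions.

```python
# Sketch of building the joint token sequence [V, L] from a visual feature
# map and text features (shapes follow the notation above; interfaces assumed).
import torch

def build_token_sequence(feat_map, text_feats, pos_embed_2d):
    # feat_map: (B, d, H, W) from the CNN backbone + FFN
    # text_feats: (B, n_l, d) from the text encoder + FFN
    # pos_embed_2d: (n_v, d) 2D positional embeddings, n_v = H * W
    V = feat_map.flatten(2).transpose(1, 2)   # (B, n_v, d) visual tokens
    V = V + pos_embed_2d                      # preserve spatial information
    return torch.cat([V, text_feats], dim=1)  # (B, n_v + n_l, d)
```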
Transformer Decoder. The transformer decoder consists of $n_{tr}$ blocks of self-attention and cross-attention layers. It takes as input a set of learnable parameters serving as an object query sequence $Q = [q_1, \ldots, q_{n_{pred}}] \in \mathbb{R}^{n_{pred} \times d}$. $[V^{tr}, L^{tr}]$ are used as keys and values for the cross-attention layers. The outputs of the transformer decoder are feature vectors $Q^{tr} = [q^{tr}_1, \ldots, q^{tr}_{n_{pred}}] \in \mathbb{R}^{n_{pred} \times d}$, which are projected to final results by three prediction heads. Specifically, the detect head predicts bounding boxes $B^{pred}$ and the segment head outputs binary segmentation masks $M^{pred}$. The logit head outputs logits $G^{pred} = [\hat{g}^1, \ldots, \hat{g}^{n_{pred}}] \in \mathbb{R}^{n_{pred} \times n_{max}}$, where $\hat{g}^i = [\hat{g}^i_1, \ldots, \hat{g}^i_{n_{max}}]$. $[\hat{g}^i_1, \ldots, \hat{g}^i_{n_l}]$ corresponds to the text tokens $L = [l_1, \ldots, l_{n_l}] \in \mathbb{R}^{n_l \times d}$, $[\hat{g}^i_{n_l+1}, \ldots, \hat{g}^i_{n_{max}-1}]$ pads $\hat{g}^i$ to length $n_{max}$ (by default $n_{max} = 256$), and the last entry $\hat{g}^i_{n_{max}}$ stands for the logit of "no-object". With the output logits, we define the preference score $\hat{s}_i \in S^{pred}$ of each predicted object as:
$\hat{s}_i = 1 - \frac{\exp(\hat{g}^i_{n_{max}})}{\sum_{j=1}^{n_{max}} \exp(\hat{g}^i_j)}$.  (2)
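A minimal sketch of Eq. (2), assuming a logits tensor whose last slot is the "no-object" entry:

```python
# Preference score: one minus the softmax probability of the "no-object" slot.
import torch

def preference_scores(g: torch.Tensor) -> torch.Tensor:
    # g: (n_pred, n_max) logits from the logit head
    probs = torch.softmax(g, dim=-1)
    return 1.0 - probs[:, -1]   # Eq. (2)
```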
During training, as in DETR [6], a bipartite matching is computed between the $n_{pred}$ predicted objects $O^{pred}$ and the ground truth objects $O^{gt}$ with the Hungarian algorithm [29]. The matched object predictions are supervised with the L1 loss and the Generalized Intersection over Union (GIoU) loss [54] for localization, and with the Dice/F-1 loss [47] and the focal cross-entropy loss [39] for segmentation. We also adopt the soft-token prediction loss and the contrastive alignment loss used in MDETR [26]. But different from MDETR, we do not use a single noun or pronoun as the ground truth token span for a matched object prediction. Instead, we use the whole verb-pronoun description as the token span so that the network understands the verbs rather than the noun/pronoun only. The total loss for TOIST is:
$\mathcal{L}_{TOIST} = \lambda_1 \mathcal{L}_{l1} + \lambda_2 \mathcal{L}_{giou} + \lambda_3 \mathcal{L}_{dice} + \lambda_4 \mathcal{L}_{cross} + \lambda_5 \mathcal{L}_{token} + \lambda_6 \mathcal{L}_{align}$,  (3)

where $\lambda_1 \sim \lambda_6$ are the weights of the losses.
4.2 Noun-Pronoun Distillation
Table 1: Results with different text-related settings.

Text related settings | mAPbox | mAPmask
verb-pronoun input | 41.3 | 35.2
verb-noun input | 53.1 (+11.8) | 47.2 (+12.0)
replace $l_{pron}$ with $l_{noun}$ | 43.7 (+2.4) | 37.3 (+2.1)
replace $l^{tr}_{pron}$ with $l^{tr}_{noun}$ | 41.9 (+0.6) | 35.6 (+0.4)
noun-pronoun distillation | 44.1 (+2.8) | 39.0 (+3.8)
As shown in Table 1, exposing noun information to the model can boost performance. However, during inference, the noun of the ground truth object is unavailable. Thus we believe a properly designed noun-pronoun distillation framework can leverage rich knowledge from the verb-noun model without violating the noun-agnostic constraint.
Distillation Framework Overview. Two TOIST models are trained simultaneously. The teacher (Fig.2 top) and the student (Fig.2 bottom) take as input verb-noun and verb-pronoun descriptions, respectively. A clustering distillation method with a memory bank and a tailored cluster loss is used to distill privileged object-centric knowledge from noun to pronoun (Fig.2 middle left). Besides, we also use a soft binary target loss imposed on Gpred to distill preference knowledge (Fig.2 middle right), in which Gpred are logits used to calculate preference scores Spred.
Clustering Distillation. Since one task can be afforded by objects of many different categories, we build a text feature memory bank to store noun features, from which a prototype can be selected to replace the pronoun feature and distill knowledge. We term this process clustering distillation. Specifically, we use $l^{tr}_{pron}$ and $l^{tr}_{noun}$ instead of $l_{pron}$ and $l_{noun}$ for this process. The reason is that the former are conditioned on the image input and the verb tokens of the task by the self-attention layers, so it is meaningful to select a cluster center that suits the image and task inputs.
Memory and Selector. The size of the memory bank is $n_{task} \times n_{mem} \times d$. It consists of $n_{task}$ queues of length $n_{mem}$ for $n_{task}$ tasks. During training, for each sample from task $j$, we update the $j$-th queue $L^j_{mem} = [l^j_1, \ldots, l^j_{n_{mem}}]$ by adding the noun feature $l^{tr}_{noun}$ generated by the teacher model and removing the existing entry closest to $l^{tr}_{noun}$. The updated queue is clustered with the K-means method, leading to $K$ cluster centers $L^j_c = \{l^j_{c_1}, \ldots, l^j_{c_K}\}$. Then the student model uses a cluster selector, implemented as a nearest neighbor classifier, to select a prototype $l^j_{c_s} \in L^j_c$ according to the pronoun feature $l^{tr}_{pron}$ and replace $l^{tr}_{pron}$ with $l^j_{c_s}$. Concatenating the other tokens and the selected prototype together, the output of the student transformer encoder $[v^{tr}_{s_1}, \ldots, v^{tr}_{s_n}, l^{tr}_{s_1}, \ldots, l^{tr}_{pron}, \ldots, l^{tr}_{s_{n_l}}]$ is modified into $[v^{tr}_{s_1}, \ldots, v^{tr}_{s_n}, l^{tr}_{s_1}, \ldots, l^j_{c_s}, \ldots, l^{tr}_{s_{n_l}}]$ and fed into the transformer decoder. To distill knowledge to the student transformer encoder, we define the cluster loss as:

$\mathcal{L}_{cluster} = \| l^{tr}_{pron} - l^j_{c_s} \|_2$,  (4)

with which the privileged object-centric knowledge is distilled from clustered noun features to the pronoun feature and further to the student TOIST encoder.
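Below is a hedged sketch of the memory bank and cluster selector; the queue-replacement rule follows the text, while the scikit-learn K-means call and class layout are our assumptions about one reasonable implementation.

```python
# Sketch of the per-task noun-feature memory bank with K-means prototypes.
import torch
from sklearn.cluster import KMeans

class NounMemoryBank:
    def __init__(self, n_task, n_mem, d, k=3):
        self.queues = [torch.zeros(n_mem, d) for _ in range(n_task)]
        self.k = k

    def update(self, task_id, l_noun):
        # replace the stored feature closest to the incoming noun feature
        q = self.queues[task_id]
        idx = torch.cdist(l_noun[None], q).argmin()
        q[idx] = l_noun

    def select_prototype(self, task_id, l_pron):
        # K-means over the queue, then nearest-neighbor prototype selection
        q = self.queues[task_id].numpy()
        km = KMeans(n_clusters=self.k, n_init=10).fit(q)
        centers = torch.from_numpy(km.cluster_centers_).float()   # (K, d)
        return centers[torch.cdist(l_pron[None], centers).argmin()]
```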
Preference Distillation. We use a soft binary target loss to distill preference knowledge from teacher to student. For an object query, we first define the binary query probabilities of being a positive query or a negative query, i.e., whether or not the query is matched to a ground truth object, as $p = [p^{pos}, p^{neg}] \in \mathbb{R}^{1 \times 2}$. The probabilities are calculated by the softmax function:

$p^{pos} = \frac{\sum_{j=1}^{n_{max}-1} \exp(\hat{g}_j)}{\sum_{j=1}^{n_{max}} \exp(\hat{g}_j)}, \quad p^{neg} = \frac{\exp(\hat{g}_{n_{max}})}{\sum_{j=1}^{n_{max}} \exp(\hat{g}_j)}$,  (5)
where $\hat{g}_j$ and $\hat{g}_{n_{max}}$ represent the logits corresponding to the $j$-th text token and the "no-object" token, respectively. For all the object queries in the teacher and the student, the probability sequences are denoted by $P_t = [p_{t_1}, \ldots, p_{t_{n_{pred}}}]$ and $P_s = [p_{s_1}, \ldots, p_{s_{n_{pred}}}]$. Then we use the Hungarian algorithm to find a bipartite matching between the two sequences of object queries with $P_t$ and $P_s$. Formally, we search for a permutation of $n_{pred}$ elements $\sigma \in \mathfrak{S}_{n_{pred}}$ which minimizes the matching loss:

$\hat{\sigma} = \arg\min_{\sigma \in \mathfrak{S}_{n_{pred}}} \sum_i^{n_{pred}} \mathcal{L}_{match}(y_{t_i}, y_{s_{\sigma(i)}})$,  (6)
where $y_{t_i} = (\hat{b}_{t_i}, p_{t_i})$ and $\hat{b}_{t_i}$ is the predicted bounding box. $\mathcal{L}_{match}$ is a linear combination of box prediction losses (L1 & GIoU) and a KL-divergence. The KL-divergence $\mathcal{L}_{KL}$ can be written as:

$\mathcal{L}_{KL}(p_{t_i}, p_{s_{\sigma(i)}}) = \mathrm{KL}(p_{t_i} \,\|\, p_{s_{\sigma(i)}}) = p^{pos}_{t_i} \log\left(\frac{p^{pos}_{t_i}}{p^{pos}_{s_{\sigma(i)}}}\right) + p^{neg}_{t_i} \log\left(\frac{p^{neg}_{t_i}}{p^{neg}_{s_{\sigma(i)}}}\right)$.  (7)
With the optimal permutation $\hat{\sigma}$, we define the soft binary target loss as:

$\mathcal{L}_{binary} = \sum_i^{n_{pred}} \mathcal{L}_{KL}(p_{t_i}, p_{s_{\hat{\sigma}(i)}})$.  (8)
It makes the binary query probabilities of the student model similar to the matched ones of the teacher model. And because the preference score ŝ (Eq.2) is defined in the same way as the probability ppos (Eq.5), the preference knowledge is distilled from teacher to student as the loss decreases.
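A sketch of this preference distillation step follows; it simplifies $\mathcal{L}_{match}$ of Eq. (6) to the KL term only and uses SciPy's Hungarian solver, so it is an illustration rather than the full matching cost.

```python
# Sketch of the soft binary target loss (Eqs. 5-8) with KL-only matching.
import torch
from scipy.optimize import linear_sum_assignment

def binary_probs(g):
    # g: (n_pred, n_max) logits; last slot is "no-object"
    p = torch.softmax(g, dim=-1)
    p_neg = p[:, -1]
    return torch.stack([1.0 - p_neg, p_neg], dim=-1)   # (n_pred, 2)

def soft_binary_target_loss(g_teacher, g_student, eps=1e-8):
    pt, ps = binary_probs(g_teacher), binary_probs(g_student)
    # pairwise KL(p_t_i || p_s_j) used as the matching cost
    cost = (pt[:, None] * (torch.log(pt[:, None] + eps)
                           - torch.log(ps[None] + eps))).sum(-1)   # (n, n)
    rows, cols = linear_sum_assignment(cost.detach().numpy())
    rows, cols = torch.as_tensor(rows), torch.as_tensor(cols)
    return cost[rows, cols].sum()
```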
Summary. The final training loss function for TOIST with noun-pronoun distillation is:
$\mathcal{L}_{TOIST\text{-}NP} = \mathcal{L}^t_{TOIST} + \mathcal{L}^s_{TOIST} + \lambda_7 \mathcal{L}^s_{cluster} + \lambda_8 \mathcal{L}^s_{binary}$,  (9)

where $\lambda_7$ and $\lambda_8$ are the weights of the losses. $\mathcal{L}^t_{TOIST}$ and $\mathcal{L}^s_{TOIST}$ are separate TOIST loss terms defined by Eq. 3 for the teacher and the student, respectively. Note that the cluster loss $\mathcal{L}^s_{cluster}$ and the soft binary target loss $\mathcal{L}^s_{binary}$ only supervise the student model.
As a reminder, during inference, we only use the student TOIST model and the fixed memory bank to find the most suitable objects, without violating the noun-agnostic constraint.
5 Experiments
Dataset. We conduct experiments on the COCO-Tasks dataset [55], which re-annotates the COCO dataset [40] with preference-aware affordance labels. This is the only dataset that involves instance-level preference in affordance. Though there are other datasets for affordance detection, such as ADE-Affordance [12] and IIT-AFF [49], they neither distinguish between instances nor involve preference. The COCO-Tasks dataset contains 14 tasks. For each task, there are 3600 train images and 900 test images. In each image, the boxes of the preferred objects (one or more) are taken as ground truth labels for detection. Using existing COCO masks, we extend the dataset to an instance segmentation version. In Appendix B and D, we present more dataset details and show its diversity.

Metric. We use the AP@0.5 metric for both detection and segmentation, where the predicted preference scores $S^{pred}$ are used to rank objects. Averaging the AP@0.5 values of all tasks leads to mAP@0.5. The implementation details for TOIST and distillation can be found in Appendix A.
5.1 Comparisons with State-of-the-art Methods
Table 2 shows that TOIST with noun-pronoun distillation achieves state-of-the-art results compared to existing methods on COCO-Tasks. For object detection, we use the results reported by [55] as baselines. For instance segmentation, following the same experiment settings of [55], we build new baselines using Mask-RCNN [20]. The methods in the first row treat the problem as a standard detection or segmentation task. All other baselines use two-stage pipelines, in which objects are first detected or segmented and then ranked. The proposed one-stage method achieves 41.3% mAPbox and 35.2% mAPmask, which are +8.1% and +2.8% better than the previous best results (Yolo+GGNN and Mask-RCNN+GGNN). Noun-pronoun distillation further boosts the performance of TOIST to 44.1% (+10.9%) mAPbox and 39.0% (+6.6%) mAPmask. Our method also outperforms another baseline with the same backbone, as shown in Table 9 of Appendix C. These results demonstrate the effectiveness of the proposed method for the new problem of task oriented instance segmentation. Per-task quantitative results and precision-recall curves are provided in Appendix C and D.
5.2 Preference Modeling with Self-Attention
Protocol. Our design principle is that the self-attention layers in transformers can naturally model preference. But is this really the case? Fig.3 (a) shows its effect. Two plain TOIST models are trained separately, with the only difference being that one model does not contain self-attention operators in the decoder. Note that the removal of self-attention does not impact the number of parameters. The mAP metric is impacted by two kinds of errors: inaccurate box/mask localization or improper preference. To analyze the preference scores alone, for all object queries, we use the boxes and masks predicted by the last block of TOIST decoder, which are arguably the most accurate. We use the corresponding preference scores predicted by each block to calculate mAP values.
Interpretation. For the TOIST with self-attention, the performance is gradually boosted as the source of preference scores becomes deeper: from 29.6% mAPbox and 25.0% mAPmask to 41.3% and 35.2%. For the one without self-attention, the preference scores from the first block lead to the best performance: 33.9% mAPbox and 28.7% mAPmask, which is -7.5% and -6.5% lower than another TOIST. The results demonstrate that the self-attention in TOIST decoder models pair-wise relative preference between object candidates. As the decoder deepens, the preference relationship between object candidates is gradually extracted by self-attention.
5.3 Effect of Clustering Distillation

Table 3: Ablations for distillation settings. CCR, CL and SBTL are short for cluster center replacement, cluster loss and soft binary target loss, respectively.

Index | CCR | CL | SBTL | mAPbox | mAPmask
(a) | × | × | × | 41.3 | 35.2
(b) | × | × | ✓ | 43.4 (+2.1) | 38.0 (+2.8)
(c) | × | ✓ | × | 42.0 (+0.7) | 37.1 (+1.9)
(d) | × | ✓ | ✓ | 43.8 (+2.5) | 38.6 (+3.4)
(e) | ✓ | × | × | 42.0 (+0.7) | 37.0 (+1.8)
(f) | ✓ | × | ✓ | 42.3 (+1.0) | 37.3 (+2.1)
(g) | ✓ | ✓ | × | 42.3 (+1.0) | 37.5 (+2.3)
(h) | ✓ | ✓ | ✓ | 44.1 (+2.8) | 39.0 (+3.8)

In Table 3, we show the effects of using the cluster loss and of replacing pronoun features with cluster centers (noun prototypes). In (c) and (e), leveraging the two components alone brings increases of +0.7% mAPbox / +1.9% mAPmask and +0.7% mAPbox / +1.8% mAPmask over baseline (a), respectively. In (g), the complete clustering distillation leads to a performance improvement of +1.0% mAPbox and +2.3% mAPmask. These results show that the clustering distillation method can improve the student TOIST and enhance verb referring expression understanding.
In Fig.4, we visualize the predicted results (filtered by a preference threshold of 0.9) and the attention maps of pronoun tokens. In the first row, when there is no clustering distillation, TOIST wrongly prefers the flower to the cup, which is also confirmed by the attention map. But the TOIST with clustering distillation correctly selects the cup, and the attention on the flower is weakened. This shows that clustering distillation enables the student TOIST to reduce the ambiguity of verb-pronoun referring expression. In the second row, the bounding box of the knife is correctly detected by both two models. However, in the absence of the distillation, extra instance masks are predicted on the spoon and fork within the box. Instead, with the distillation, the masks predicted by TOIST are concentrated on the knife and the attention is more focused on it. This demonstrates that in the case of clustering distillation, TOIST can better ground the task into pixels within an object box.
Meanwhile, the fact that predicted masks may be inaccurate even when the box is correct makes it challenging for a robot to accurately grasp the preferred object when performing a specific task. This underlines the importance of extending task oriented object detection to instance segmentation.
5.4 Effect of Preference Distillation

(Figure 5: examples of three scenarios where preference distillation clearly works: (1) pound carpet, (2) smear butter, (3) open parcel; TOIST w/ and w/o preference distillation compared against ground truth.)
In Table 3 (b), preference distillation with soft binary target loss achieves +2.1% mAPbox and +2.8% mAPmask higher results than baseline (a). This loss acts on the preference probabilities of each object candidate in student TOIST. And the probabilities are used as scores to sort the object candidates for the calculation of mAP values. Therefore, the result of Table 3 (b) strongly supports that the preference information is distilled to the student TOIST.
A simple taxonomy distinguishes three scenarios where preference distillation works. As shown in Fig. 5, the predicted results (filtered by a preference threshold of 0.9) of the TOIST models w/ and w/o preference distillation are compared. (1) Preference distillation makes the preference score of the false positive object (the baseball in the left picture) lower than the threshold. (2) The preference score of the false negative object (the spoon in the middle) is raised above the threshold with the distillation. (3) When there is no distillation, the false positive object (fork) scores higher than the true positive object (knife) (0.9822 > 0.9808). Although the distillation fails to lower the preference score of the false positive object below the threshold, its score is updated to be lower than that of the true positive one (0.9495 < 0.9680). These specific results demonstrate that the information of noun referring expression is distilled to the noun-agnostic student model in the form of preference scores.
5.5 Ablation Study and Qualitative Results
Distillation Methods. Instead of minimizing the distance between $l^{tr}_{pron}$ and $l^j_{c_s}$, a straightforward alternative is to directly minimize the distance between $l^{tr}_{pron}$ and $l^{tr}_{noun}$. As shown in Table 4, this simplified method does not work well, which prompted us to develop the distillation framework.
Table 4: Different distillation methods.

Method | mAPbox | mAPmask
TOIST | 41.3 | 35.2
distill from $l^j_{c_s}$ to $l^{tr}_{pron}$ | 44.1 (+2.8) | 39.0 (+3.8)
distill from $l^{tr}_{noun}$ to $l^{tr}_{pron}$ | 41.9 (+0.6) | 36.0 (+0.8)
Table 5: Results without pre-training.

Method | mAPbox | mAPmask
verb-pronoun input | 3.65 | 5.74
verb-noun input | 11.19 | 12.67
noun-pronoun distillation | 7.43 (+3.78) | 11.28 (+5.54)
Interaction of the Two Distillation Components. In Table 3 (d) and (f), we show the effects of cluster loss or cluster center replacement together with soft binary target loss. (d) achieves +2.5% mAPbox and +3.4% mAPmask improvement, which demonstrates the two distillation losses collaborate well. (f) only achieves +1.0% mAPbox and +2.1% mAPmask improvement, slightly higher than (e) (using cluster loss only) but lower than (b) (using soft binary target loss only). This shows that preference distillation effectively improves object preference modeling. But solely replacing pronoun features to indicate target objects weakens the effect of preference distillation.
Table 6: Ablations for pronoun input.

Method | Pronoun | mAPbox | mAPmask
TOIST | something | 41.3 | 35.2
TOIST | it | 41.3 | 35.2
TOIST | them | 41.4 | 35.0
TOIST | abcd | 39.0 | 33.2
TOIST w/ distillation | something | 44.1 | 39.0
TOIST w/ distillation | it | 43.8 | 38.4
TOIST w/ distillation | them | 43.8 | 38.1
TOIST w/ distillation | abcd | 42.8 | 37.4
Ablations for Cluster Number K. Fig. 3 (b) shows the ablations for the cluster number K. We perform distillation experiments with K between 1 and 10, because increasing K to an even higher value makes the clustering task more difficult. All of the experiments yield better results than the plain TOIST (41.3% mAPbox, 35.2% mAPmask), and K = 3 works best. This demonstrates that a modest K can better cluster the information of noun features and distill it to the student TOIST.
Ablations for Pronoun Input. Table 6 shows the results of TOIST with different pronoun input. In the plain TOIST and TOIST with distillation, the usage of something, it or them leads to similar results,
while a meaningless string abcd yields less improvement. Nevertheless, the proposed distillation framework can still work well in the last case, which demonstrates the robustness of our method.
Results without Pre-training. In our architecture, the pre-trained noun referring expression comprehension models are leveraged. To investigate whether the noun-pronoun distillation framework is a standalone technical contribution, we conduct experiments without pre-training. The models are trained from scratch on the COCO-Tasks dataset and the results are shown in Table 5, which demonstrates that the proposed distillation can still improve performance even without pre-training.
Ablations for Task Number. Table 7 shows the ablation study of different task numbers, in which the first row corresponds to the plain TOIST without distillation and the others show the results with distillation under different ntask. The results demonstrate our proposed distillation works for different ntask, even if ntask = 1. And overall, smaller ntask leads to better performance. We attribute this to the reduced problem complexity due to the less interaction between different tasks, which makes it easier to improve the ability of the model to understand verbs through noun-pronoun distillation.
Qualitative Results. Fig.6 shows more qualitative results. In (a), two toilets are taken as target objects and annotated partially or totally. But TOIST simultaneously predicts the two kinds of results for each toilet. In (b), no object is annotated, while TOIST keenly detects two water bottles that afford the task. In (c), TOIST predicts more accurate mask result than ground truth. In (d), the table is selected and interestingly it does afford the task as the table edge can be used to open beer bottles. More qualitative results can be found in Appendix E.
6 Conclusion and Discussion
We explore the problem of task oriented instance segmentation and propose a transformer-based method named TOIST with a novel noun-pronoun distillation framework. Experiments show our method successfully models affordance and preference, achieving SOTA results on the COCO-Tasks dataset.

Limitations. Due to the lack of large-scale datasets with more abundant tasks, TOIST is only evaluated on a limited set of tasks. While this is sufficient for many robotics applications, it would be interesting to explore general verb reference understanding on more tasks.

Potential Negative Social Impact. Because TOIST is not perfect, when it is used in robotics applications, robots may have difficulty selecting the most suitable object to carry out a task, or may even cause damage.
Summary Of The Paper
This paper aims at task oriented detection. Instead of specifying the type of object to detect, this problem requires detection of the objects that best fits the task description. The authors proposed a largely Transformer based model, TOIST, that outperformed the previous state-of-the-art. They then proposed two distillation techniques to distill the type of object into the student model, and performance is further boosted. Experiments are performed on COCO-Tasks.
POST-REBUTTAL UPDATE:
I have read the authors' rebuttal. The authors added a lot of experiments to justify their design and showcase their big improvement over the previous baseline. However I don't think my non-result related questions are well-addressed, such as "posing questions about the baseline and the dataset in general". Overall I decided to slightly increase my score from 4 to 5. Regardless of the final result, I suggest the authors to simplify the proposed approach if possible, e.g. throwing away the notion of "task", which seems to deliver the best performance according to the new experiment.
Strengths And Weaknesses
Strengths:
The base model, TOIST, is fairly well motivated and well described.
The performance gain over the previous state-of-the-art seems significant.
The idea of distilling the object type into "something", i.e. distilling noun into pronoun, is interesting and novel in my opinion.
Weaknesses:
The distillation version of TOIST may be a bit overly complicated, and I do not understand why it has to be designed the way it is.
The baseline [48] is more than 3 years old, and as someone who is not extremely familiar with COCO-Tasks, it is concerning that there is no follow-up works in 3 years, posing questions about the baseline and the dataset in general.
Overall, I think originality is fairly good; quality, clarity, significance is medium.
Questions
L169 mentioned that the text encoder is "pre-trained". What data is this text encoder pre-trained on? This is the reason why the pronoun in Table 4 makes a difference, right? What would the performance be if this part is trained from scratch? Does the distillation still work?
The "clustering distillation" component requires the notion of "number of tasks", and a clustering algorithm is done for each task. However, I don't think the concept of "task" is well defined in this paper (e.g. in Section 3). Does "task" equal to verb, like "dig hole" is one task, and "sit comfortably on" is another? If so, how many training examples are in COCO-Tasks, and how many tasks? Dividing the former by the latter can give the reader a rough sense of how many training examples per task, and that will also inform how the number of clusters, K, ought to be chosen.
Following the question above, Section 5.5 ablated the cluster number K. What about n_{task}? Does the distillation still work when n_{task} = 1, i.e. throwing away the notion of "task"?
I do not understand why distillation has to be done the way it is in "clustering distillation". What about loading the same data for Teacher and Student, and simply distill l_{noun} into l_{pronoun} (say in the same way as Equation 4)? This gets rid of introducing "number of tasks" and "memory bank", which greatly simplifies the proposed method.
Limitations
The authors used 3 lines in Conclusion to talk about limitations. I feel it can be expanded to talk about some of the angles in my Questions section above.
Learning Superpoint Graph Cut for 3D Instance Segmentation
Abstract
3D instance segmentation is a challenging task due to the complex local geometric structures of objects in point clouds. In this paper, we propose a learning-based superpoint graph cut method that explicitly learns the local geometric structures of the point cloud for 3D instance segmentation. Specifically, we first oversegment the raw point clouds into superpoints and construct the superpoint graph. Then, we propose an edge score prediction network to predict the edge scores of the superpoint graph, where the similarity vectors of two adjacent nodes learned through cross-graph attention in the coordinate and feature spaces are used for regressing edge scores. By forcing two adjacent nodes of the same instance to be close to the instance center in the coordinate and feature spaces, we formulate a geometry-aware edge loss to train the edge score prediction network. Finally, we develop a superpoint graph cut network that employs the learned edge scores and the predicted semantic classes of nodes to generate instances, where bilateral graph attention is proposed to extract discriminative features on both the coordinate and feature spaces for predicting semantic labels and scores of instances. Extensive experiments on two challenging datasets, ScanNet v2 and S3DIS, show that our method achieves new state-of-the-art performance on 3D instance segmentation. Code is available at https://github.com/fpthink/GraphCut.
1 Introduction
In recent years, with the development of 3D sensors such as LiDAR and Kinect cameras, various 3D computer vision tasks have been receiving more and more attention. 3D instance segmentation is a fundamental task in 3D scene understanding and has been widely used in many applications such as self-driving cars, virtual reality, and robotic navigation. Although recent progress in 3D instance segmentation is encouraging, it is still a challenging task due to the irregularity and context uncertainty of 3D points in scenes with complex geometric structures.
Many efforts have been dedicated to 3D instance segmentation and achieved promising performance. These methods can be mainly divided into two categories: detection-based methods [44, 45] and clustering-based methods [40, 18]. Among detection-based methods, 3D-BoNet [44] first detects the 3D bounding boxes and then employs a mask prediction network to predict the object mask for 3D instance segmentation. However, for objects with complex geometric structures, detectionbased methods [45] cannot obtain accurate 3D bounding boxes, thereby degrading the instance segmentation performance. The clustering-based method SGPN [40] clusters 3D points based
on semantic segmentation to generate instances. Unlike SGPN, Jiang et al. [18] developed an offset branch to cluster points based on semantic predictions in dual coordinate spaces, including the original and shifted coordinate spaces. Besides, some follow-up methods utilize tree structures [25], hierarchical aggregation [3], and soft semantic segmentation [37] to boost the performance of 3D instance segmentation. However, most of these clustering-based methods rely on center offsets and semantics to segment instances, which cannot effectively capture the geometric context information of point clouds. Therefore, the performance of instance segmentation is usually limited by objects with complex geometric structures in point clouds.

†Equal contributions, ∗corresponding authors. Le Hui, Linghua Tang, Yaqi Shen, Jin Xie, and Jian Yang are with PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, and Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology, China.
In this paper, we propose a learning-based superpoint graph cut method that explicitly learns the local geometric structures of point clouds to segment 3D instances. Specifically, we construct the superpoint graph to learn the geometric context similarities of superpoints and convert the instance segmentation into a binary classification of edges. Our method consists of an edge score prediction network to predict edge scores and a superpoint graph cut network to generate instances. In our method, we oversegment the raw point clouds into superpoints and construct the superpoint graph by linking the k-nearest superpoints in the coordinate space. In the edge score prediction network, we first perform cross-graph attention on the local neighborhoods of two adjacent nodes to extract local geometric features for measuring the similarity of the nodes. Then, based on the learned similarity vectors from the coordinate and feature spaces, we adopt an edge score branch to predict the edge scores. In addition, we propose a geometry-aware edge loss to train the edge score prediction network by forcing the adjacent nodes of the same instance to be close to the instance center in both the coordinate and feature spaces. In the superpoint graph cut network, we use the learned edge scores combined with semantic classes of the nodes to cut the edges for forming object proposals. The proposals are obtained by applying the breadth-first-search algorithm on the superpoint graph to aggregate nodes in the same connected component. In each proposal, we apply bilateral graph attention to aggregate local geometric features to extract discriminative features for predicting classes and scores of proposals. Furthermore, we adopt a mask learning branch to filter the low-confidence superpoints within the proposal to generate instance.
In summary, we present an edge score prediction network that learns the local geometric features of adjacent nodes for producing edge scores. To train it, we propose a geometry-aware edge loss to keep the instance compact in the coordinate and feature space simultaneously. We present a superpoint graph cut network that extracts discriminative instance features to generate accurate instances by using bilateral graph attention in the coordinate and feature spaces. Extensive experiments on the ScanNet v2 [7] and S3DIS [1] datasets show that our method achieves new state-of-the-art performance on 3D instance segmentation. On the online test set of ScanNet v2, our method achieves the performance of 55.2% in terms of mAP, which is 4.6% higher than the current best results [25]. For S3DIS, our method outperforms the current best results [37] over 2% in terms of mAP.
2 Related Work
3D semantic segmentation. Extracting features from irregular 3D point clouds is crucial for 3D semantic segmentation. Qi et al. [30] first proposed PointNet to learn point-wise features from a point set for semantic segmentation through the multi-layer perceptron network. Following it, many efforts [24, 34, 46, 15, 47, 4, 2] have been proposed to improve semantic segmentation performance. Early point-based methods [36, 43, 31, 14] design various local feature aggregation strategies to extract discriminative point-wise features for semantic segmentation. Inspired by successful 2D convolution networks, view-based methods [23, 35, 17, 19] project the point cloud into multiple regular 2D views, where the regular 2D convolution is applied to extract features. In addition to viewbased methods, volumetric-based methods [27, 39, 10, 6] first voxelize the point cloud into regular 3D grids and then apply 3D convolution to extract local features of point clouds. In order to capture the local geometric structures of point clouds, graph-based methods [42, 22, 38, 5, 16] construct the graph on point clouds and utilize graph convolution to aggregate local geometric information for semantic segmentation.
3D instance segmentation. 3D instance segmentation is a more challenging task, which further needs to identify each instance. Current methods can be roughly grouped into two classes: detection-based methods and clustering-based methods.
Detection-based methods [45, 13, 26] first detect 3D bounding boxes of each object in point clouds, and then apply a mask prediction network on each box to predict the object mask for 3D instance segmentation. In [44], a 3D instance segmentation framework dubbed 3D-BoNet is proposed, which directly regresses the 3D bounding boxes for all instances and predicts point-level masks for each instance. Yi et al. [45] proposed a generative shape proposal network that generates proposals by reconstructing shapes from noisy observations in a scene for 3D instance segmentation. In addition, using both geometry and RGB inputs, [13] develops a joint 2D-3D feature learning network that combines the 2D and 3D features to regress 3D object bounding boxes and predict instance masks.
Clustering-based methods usually use point similarity [40], semantic maps [11, 20], or geometric shifts [18, 3, 25, 33] to cluster 3D points into object instances. A similarity group proposal network was proposed in [40] to cluster points by learning point-wise similarity for generating instances. [29] proposes a multi-task learning framework that simultaneously learns semantic classes and high-dimensional embeddings of 3D points to cluster the points into object instances. In [41], a segmentation framework is introduced to learn semantic-aware point-wise instance embeddings for associatively segmenting instances and semantics of point clouds. Han et al. [11] proposed an occupancy-aware method to predict the number of occupied voxels for each instance. PointGroup [18] clusters points by using predicted point-wise center offset vectors and point-wise semantic labels. The follow-up method [3] adopts a hierarchical aggregation strategy for 3D instance segmentation, which first performs point aggregation to cluster points into preliminary sets and then performs set aggregation to cluster sets into instances. Lately, Vu et al. [37] proposed a soft grouping strategy to mitigate the problem of semantic prediction errors by associating each point with multiple classes, yielding significant performance gains in 3D instance segmentation. In addition, a semantic superpoint tree network, called SSTNet, is proposed in [25] for segmenting point clouds into instances. It first groups superpoints with similar semantic features to build a binary tree and then generates instances by tree traversal and splitting. To make the network more efficient, a dynamic convolution network combined with a small Transformer network is constructed to propose a lightweight 3D instance segmentation method [12].
3 Method
An overview of our learning-based superpoint graph cut method is illustrated in Figure 1. Based on the superpoint graph, the edge score prediction network (Sec. 3.1) extracts edge embeddings from the coordinate and feature spaces for predicting edge scores. After that, the superpoint graph cut network (Sec. 3.2) generates accurate object instances by learning discriminative instance features to
predict classes and scores of instances. Finally, in Sec. 3.3, we describe how to train our method and inference instances from point clouds.
3.1 Edge Score Prediction Network
Given a raw point cloud, we oversegment it into superpoints and construct the superpoint graph $G = (V, E)$, where $V$ represents the node set of superpoints and $E$ represents the edge set. Since the superpoint representation is coarser than the point representation, learning features directly from the superpoint representation cannot effectively capture the local geometric structures of point clouds. Therefore, we apply submanifold sparse convolution [10] on the point cloud to extract point-level features and use the point-level features to initialize superpoint-level features by average pooling. After that, we apply edge-conditioned convolution [32] to extract superpoint features, denoted as $F \in \mathbb{R}^{|V| \times C}$, where $C$ is the feature dimension.
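A minimal sketch of this average-pooling initialization, assuming the torch_scatter package; the function name is illustrative.

```python
# Initialize superpoint features by averaging point-level backbone features.
import torch
from torch_scatter import scatter_mean

def init_superpoint_features(point_feats, sp_index):
    # point_feats: (N, C) point-level features from sparse convolution
    # sp_index: (N,) superpoint id of each point, values in [0, |V|)
    return scatter_mean(point_feats, sp_index, dim=0)   # (|V|, C)
```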
3.1.1 Edge Feature Embedding
Once we obtain superpoint features, the edge score prediction network learns edge embeddings to predict edge scores for segmenting instances. Given adjacent nodes $(u, v) \in E$, it is desired that the learned edge embedding can effectively identify whether nodes $u$ and $v$ belong to the same instance. To this end, we apply cross-graph attention to the superpoint graph in two spaces (the coordinate and feature spaces) to learn superpoint similarities. The learned similarity vectors of nodes $u$ and $v$ form the edge embedding for predicting edge scores.
Edge embedding in coordinate space. To characterize the similarity of nodes $u$ and $v$, we first shift them toward their corresponding instance centroids in the coordinate space. Here, a multi-layer perceptron (MLP) network encodes $F$ to produce $|V|$ offset vectors $O = \{o_1, \ldots, o_{|V|}\} \in \mathbb{R}^{|V| \times 3}$. Given the original superpoint coordinates $X = \{x_1, \ldots, x_{|V|}\} \in \mathbb{R}^{|V| \times 3}$, the shifted superpoint coordinates $\hat{X} = \{\hat{x}_1, \ldots, \hat{x}_{|V|}\}$ are obtained by $\hat{X} = X + O$. In this way, the geometric distance between nodes belonging to different instances is increased, enhancing the discrimination of the superpoints. After that, based on the shifted coordinate space, for node $u$, we leverage its $k$-nearest superpoints (i.e., $N_u$) to construct the local $k$-NN graph $G_u$. Similarly, we obtain the graph $G_v$ for node $v$. Then, we perform cross-graph attention across $G_u$ and $G_v$ to characterize the similarity of nodes through the learned feature vectors, as shown in Figure 1. Taking node $u$ as an example, the weight $\alpha$ of cross-graph attention is defined as:
$\alpha_{u,i} = \mathrm{MLP}(\hat{x}_i - \hat{x}_u), \quad \forall i \in N_u \cup N_v$,  (1)

where $\hat{x}_i$ and $\hat{x}_u$ are the shifted coordinates. Note that $i$ enumerates all $2k$ neighbors across the two graphs. Therefore, the final output feature vector is formulated as:
$h_u = \sum_{i \in N_u \cup N_v} \hat{\alpha}_{u,i} * \mathrm{MLP}(\hat{x}_i) + b_i$,  (2)

where $\hat{\alpha}_{u,i}$ is the weight $\alpha_{u,i}$ after softmax and $b_i$ is a learnable bias. The learned feature vector $h_u \in \mathbb{R}^C$ characterizes the geometric similarity by adaptively learning the geometric differences across the two graphs. In the same way, we obtain the feature vector $h_v$ for the other node $v$. We combine the feature vectors $h_u$ and $h_v$ as the edge embedding in the coordinate space: $e_{u,v} = [h_u, h_v]$.
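A minimal sketch of the cross-graph attention of Eqs. (1)-(2) follows; the two MLPs are hypothetical modules, and placing the learnable bias outside the sum is a small simplification of the formula above.

```python
# Cross-graph attention over the union of the two k-NN neighborhoods.
import torch

def cross_graph_attention(x_u, neigh_xyz, mlp_w, mlp_f, bias):
    # x_u: (3,) shifted coordinate of node u
    # neigh_xyz: (2k, 3) shifted coordinates of N_u ∪ N_v
    # mlp_w, mlp_f: assumed MLPs mapping 3 -> C; bias: learnable (C,)
    alpha = mlp_w(neigh_xyz - x_u)              # Eq. (1): (2k, C) weights
    alpha = torch.softmax(alpha, dim=0)         # normalize over neighbors
    return (alpha * mlp_f(neigh_xyz)).sum(0) + bias   # Eq. (2): h_u in R^C
```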
Edge embedding in feature space. In addition to the coordinate space, we also consider the feature space to extract discriminative edge embeddings. First, an MLP network encodes $F$ to produce the initial feature embedding $Z \in \mathbb{R}^D$. By pushing the feature embeddings of instances away from each other, we enlarge the gap between different instances in the feature space. Given a pair of nodes $(u, v) \in E$, we construct the $k$-NN graphs $\hat{G}_u$ and $\hat{G}_v$ in the feature space, respectively. In this way, it is expected that each graph aggregates superpoints within the same instance. Then, we execute cross-graph attention across $\hat{G}_u$ and $\hat{G}_v$ to characterize the similarity of the nodes in the feature space. Finally, we obtain the feature vectors $\hat{h}_u \in \mathbb{R}^C$ and $\hat{h}_v \in \mathbb{R}^C$. If $u$ and $v$ belong to the same instance, they share similar $k$-NN graphs in the feature space, so the learned feature vectors $\hat{h}_u$ and $\hat{h}_v$ are similar to each other. Here, we also combine the feature vectors to obtain the edge embedding in the feature space: $\hat{e}_{u,v} = [\hat{h}_u, \hat{h}_v]$.
(Figure 2: illustration of the geometry-aware edge loss, built from superpoint coordinates, superpoint embeddings, and learned weights for a node pair (u, v) in the same instance.)
Edge score prediction. After obtaining the edge embeddings in the coordinate and feature spaces, we utilize a simple MLP network to generate the edge score, which is defined as:

$a_{u,v} = \sigma(\mathrm{MLP}([e_{u,v}, \hat{e}_{u,v}, d_{u,v}]))$,  (3)

where $[\cdot, \cdot, \cdot]$ indicates concatenation, $\sigma$ denotes the sigmoid function, and $d_{u,v}$ represents the geometric distance between nodes $u$ and $v$ in the shifted coordinate space. In the experiment, if the edge score $a_{u,v} > 0.5$, the edge between nodes $u$ and $v$ is cut from the superpoint graph. We use the binary cross-entropy loss $\mathcal{L}_{edge}$ to supervise the edge scores.
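For concreteness, a sketch of the edge score head of Eq. (3), with an assumed MLP mapping the concatenated input (4C+1 dimensions) to a scalar:

```python
# Edge score from the two edge embeddings and the shifted-space distance.
import torch

def edge_score(e_coord, e_feat, dist, mlp_edge):
    # e_coord, e_feat: (2C,) edge embeddings; dist: 0-dim tensor d_{u,v}
    inp = torch.cat([e_coord, e_feat, dist.view(1)])
    return torch.sigmoid(mlp_edge(inp))   # cut the edge if score > 0.5
```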
3.1.2 Geometry-Aware Edge Loss
To train the edge score prediction network, we employ the geometric structures of the superpoint graph to form a geometry-aware edge loss, as shown in Figure 2.
Specifically, given nodes $u$, $v$ and their corresponding instance centroids $c_u$ and $c_v$, we draw the nodes toward their instance centroids by minimizing the L2 distances $d_{u,c_u}$ and $d_{v,c_v}$. Furthermore, when $(u, v) \in E$ belong to the same instance, it is expected that they collaboratively shift to the same instance centroid $c$ by minimizing the area of the triangle $\triangle uvc$. When $(u, v) \in E$ belong to different instances, it is expected that they collaboratively shift to their own instance centroids $c_u$ and $c_v$ by minimizing the areas of the triangles $\triangle uvc_u$ and $\triangle uvc_v$. The area constraint in the coordinate space is written as:
$\mathcal{L}_{area} = \frac{1}{|E|} \sum_{(u,v) \in E} \Big[ \|\hat{x}_u - c_u\|_2 + \|\hat{x}_v - c_v\|_2 + \frac{1}{2}\big( |(c - \hat{x}_u) \times (c - \hat{x}_v)| \, \mathbb{I}(u,v) + \big( |(c_u - \hat{x}_u) \times (\hat{x}_v - \hat{x}_u)| + |(c_v - \hat{x}_v) \times (\hat{x}_u - \hat{x}_v)| \big)(1 - \mathbb{I}(u,v)) \big) \Big]$,  (4)
where $\mathbb{I}(u,v)$ is the indicator function: $\mathbb{I}(u,v)$ equals 1 if $u$ and $v$ belong to the same instance, and 0 otherwise. Note that "×" denotes the vector cross product, used for computing the triangle areas. For nodes $u$ and $v$ from the same instance, $u$ and $v$ are simultaneously drawn close to the common instance centroid. Therefore, they are pulled close to each other in the coordinate space, which helps group $u$ and $v$ into the same instance. For nodes $u$ and $v$ from different instances, $u$ and $v$ are drawn toward their respective instance centroids. Thus, they are pushed apart in the coordinate space, which helps divide $u$ and $v$ into two different instances.
Likewise, we expect the nodes in the same instance to be compact in the feature space by constraining their feature embeddings. For $(u, v) \in E$ belonging to the same instance, we draw the embeddings of $u$ and $v$ toward the mean embedding of the instance, and also pull them toward each other. For $(u, v) \in E$ belonging to different instances, we push the embeddings of $u$ and $v$ away from each other. In addition, instances are pushed apart by increasing the distance between their mean embeddings. Thus, the constraint in the feature space is written as:
$\mathcal{L}_{feat} = \frac{1}{|E|} \sum_{(u,v) \in E} \big( [\|z_u - z_v\|_2 - \delta]_+^2 + [\|z_u - g_u\|_2 - \delta]_+^2 \big)\mathbb{I}(u,v) + \big( [2\beta - \|z_u - z_v\|_2]_+^2 + [2\beta - \|g_u - g_v\|_2]_+^2 \big)(1 - \mathbb{I}(u,v))$,  (5)
where $z_u \in \mathbb{R}^D$ and $z_v \in \mathbb{R}^D$ are the feature embeddings. Note that $g_u \in \mathbb{R}^D$ and $g_v \in \mathbb{R}^D$ indicate the mean feature embeddings of the instances that $u$ and $v$ belong to, respectively. The thresholds $\delta$ and $\beta$ are set to 0.1 and 1.5 to ensure that the inter-instance distance is larger than the intra-instance distance. Finally, the geometry-aware edge loss is defined as:

$\mathcal{L}_{geo} = \mathcal{L}_{area} + \mathcal{L}_{feat} + \mathcal{L}_{edge}$.  (6)
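A sketch of the per-edge loss terms of Eqs. (4)-(5) follows; tensor shapes and the use of torch.linalg.cross for the triangle areas are our assumptions.

```python
# Geometry-aware loss terms for a single edge (u, v); `same` marks whether
# u and v share an instance; delta=0.1 and beta=1.5 as in the text.
import torch

def tri_area(a, b):
    # area of the triangle spanned by 3D vectors a and b
    return 0.5 * torch.linalg.cross(a, b).norm()

def edge_geo_terms(xu, xv, cu, cv, zu, zv, gu, gv, same, delta=0.1, beta=1.5):
    area = (xu - cu).norm() + (xv - cv).norm()
    if same:                       # shared centroid c = cu = cv, Eq. (4)
        area = area + tri_area(cu - xu, cu - xv)
        feat = ((zu - zv).norm() - delta).clamp(min=0) ** 2 \
             + ((zu - gu).norm() - delta).clamp(min=0) ** 2   # Eq. (5)
    else:
        area = area + tri_area(cu - xu, xv - xu) + tri_area(cv - xv, xu - xv)
        feat = (2 * beta - (zu - zv).norm()).clamp(min=0) ** 2 \
             + (2 * beta - (gu - gv).norm()).clamp(min=0) ** 2
    return area, feat
```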
3.2 Superpoint Graph Cut Network
3.2.1 Proposal Generation via Superpoint Graph Cut
Given the edge scores $A = \{a_{u,v}\} \in \mathbb{R}^{|E| \times 1}$, we propose a proposal generation algorithm that generates candidate proposals by simultaneously employing the learned edge scores and the predicted semantic classes of the nodes (i.e., superpoints). Specifically, to mitigate semantic prediction errors, we follow [37] and adopt a soft threshold $\theta$ to associate the nodes with multiple classes. Given semantic scores of superpoints $S = \{s_1, \ldots, s_{|V|}\}$ with $s_i \in \mathbb{R}^N$, where $N$ is the number of classes, if $s^i_v > \theta$, the $v$-th superpoint can be associated with the $i$-th class. In this way, for the $i$-th class, we can slice a superpoint subset $C_i$ on the superpoint graph, where the semantic score of each superpoint on the $i$-th class index is higher than $\theta$. Then, for an edge $(u, v) \in E$, if nodes $u \in C_i$ and $v \in C_i$, the edge $(u, v)$ is preserved; otherwise it is deleted. In other words, we remove edges between superpoint nodes with different semantics. After that, for the preserved edges $(u, v)$, we utilize the edge score $a_{u,v}$ to determine whether the edge should be cut from the superpoint graph. In the experiment, the threshold for cutting an edge is set to 0.5: if the edge score is higher than 0.5, the edge is cut. Finally, we apply the breadth-first-search algorithm on the superpoint graph to aggregate the nodes in the same connected component into proposals for the $i$-th class. Iterating over the $N$ classes yields proposals for all classes. The details are shown in Algorithm 1.

Algorithm 1 Proposal Generation Algorithm
Input: node semantic scores $S = \{s_1, \ldots, s_{|V|}\}$, $s_i \in \mathbb{R}^N$, with $N$ the number of classes; semantic threshold $\theta$; edge scores $A = \{a_{u,v}\} \in \mathbb{R}^{|E| \times 1}$, where $a_{u,v}$ is the score of the edge connecting nodes $u$ and $v$.
Output: proposals $\mathcal{I} = \{I_1, \ldots, I_m\}$, with $m$ the number of proposals.
1: initialize an empty instance set $\mathcal{I}$
2: for $i = 1$ to $N$ do
3:   if $i$ is a valid class (excluding wall, floor) then
4:     initialize a visited array $f$ of length $|V|$ with all zeros
5:     for $v = 1$ to $|V|$ do
6:       if $f_v == 0$ and $s^i_v > \theta$ then
7:         initialize an empty queue $Q$ and an empty set $I$
8:         $f_v = 1$; $Q$.pushBack($v$); add $v$ to $I$
9:         while $Q$ is not empty do
10:          $h = Q$.popFront()
11:          for each $k \in \{k \mid a_{h,k} < 0.5\}$ do
12:            if $f_k == 0$ and $s^i_k > \theta$ then
13:              $f_k = 1$; $Q$.pushBack($k$); add $k$ to $I$
14:        add $I$ to $\mathcal{I}$
15: return $\mathcal{I}$
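A runnable Python rendering of Algorithm 1 is sketched below; the adjacency and score containers are illustrative data structures rather than the released implementation.

```python
# Per-class BFS over superpoint nodes, crossing only uncut edges (score < 0.5).
from collections import deque

def generate_proposals(sem_scores, adj, edge_scores, theta, valid_classes):
    # sem_scores: {node: [N class scores]}; adj: {node: [(nbr, edge_id)]}
    # edge_scores: {edge_id: score}; returns a list of node sets (proposals)
    proposals = []
    for cls in valid_classes:
        visited = set()
        for v in sem_scores:
            if v in visited or sem_scores[v][cls] <= theta:
                continue
            comp, queue = {v}, deque([v])
            visited.add(v)
            while queue:
                h = queue.popleft()
                for k, eid in adj.get(h, []):
                    if (edge_scores[eid] < 0.5 and k not in visited
                            and sem_scores[k][cls] > theta):
                        visited.add(k)
                        queue.append(k)
                        comp.add(k)
            proposals.append(comp)
    return proposals
```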
3.2.2 Bilateral Graph Attention for Proposal Embedding
As we obtain proposals $\mathcal{I} = \{I_1, \ldots, I_m\}$ from the point cloud, we propose bilateral graph attention to extract proposal embeddings for generating instances by applying the attention mechanism in both the coordinate and feature spaces. Specifically, given the $i$-th proposal, we first compute the proposal centroid $c_i$ by averaging the shifted superpoint coordinates. Then, we adopt the inverse distance weighted average of the corresponding superpoints to interpolate the embedding of the proposal centroid, which is formulated as:
$f'_i(c_i) = \frac{\sum_{j \in I_i} \psi_j(c_i) * f_j}{\sum_{j \in I_i} \psi_j(c_i)}, \quad \psi_j(c_i) = \frac{1}{\|x_j - c_i\|_2}$,  (7)
where $I_i$ represents the superpoints within the $i$-th proposal and $x_j$ indicates the original coordinates of the superpoints. Note that $*$ indicates the Hadamard product, i.e., the element-wise product of two vectors. After obtaining the coordinate $c_i$ and embedding $f'_i$ for the $i$-th proposal, we link the superpoints to the proposal centroid to construct a $k$-NN graph. To extract a discriminative embedding of the proposal, we develop bilateral graph attention. The bilateral weight $w_{i,j}$ between superpoint $j \in I_i$ and the $i$-th proposal is formulated as:
$w_{i,j} = \varphi(f'_i, f_j) * \phi(c_i, x_j)$,  (8)

where $\varphi(\cdot,\cdot): \mathbb{R}^C \times \mathbb{R}^C \to \mathbb{R}^C$ and $\phi(\cdot,\cdot): \mathbb{R}^3 \times \mathbb{R}^3 \to \mathbb{R}^C$ are two mapping functions implemented by MLP networks. $\varphi(f'_i, f_j) = \mathrm{ReLU}(W^\top_\varphi (f'_i - f_j))$ encodes the difference between the superpoint and the proposal centroid in the feature space, while $\phi(c_i, x_j) = \mathrm{ReLU}(W^\top_\phi (c_i - x_j))$ encodes the difference between the superpoint and the proposal centroid in the coordinate space. Thus, $w_{i,j} \in \mathbb{R}^C$ captures the channel-wise relationship between the superpoint and the proposal in the coordinate and feature spaces. We use the softmax function to obtain the normalized weight $\hat{w}_{i,j}$ across the proposal $I_i$, which is written as:
ŵi,j = exp(wi,j)∑ k∈Ii exp(wi,k)
(9)
Finally, we sum the weighted superpoint embeddings to obtain the proposal embedding, which is given by: f̂i = ∑
j∈Ii wi,j ∗ fj (10)
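As a concrete illustration, here is a condensed PyTorch-style sketch of Eqs. (7)–(10) for a single proposal; the module and tensor names are illustrative assumptions, and the small epsilon for numerical stability is our addition.

```python
import torch
import torch.nn as nn

class BilateralGraphAttention(nn.Module):
    """Condensed sketch of Eqs. (7)-(10) for a single proposal."""
    def __init__(self, feat_dim):
        super().__init__()
        # bias=False so each mapping matches the form ReLU(W^T(.)) above.
        self.phi = nn.Linear(feat_dim, feat_dim, bias=False)  # feature space
        self.vphi = nn.Linear(3, feat_dim, bias=False)        # coordinate space

    def forward(self, f, x, c):
        # f: (n, C) superpoint embeddings; x: (n, 3) original coordinates;
        # c: (3,) proposal centroid from the shifted coordinates.
        psi = 1.0 / (torch.norm(x - c, dim=1, keepdim=True) + 1e-8)       # Eq. (7)
        f_c = (psi * f).sum(0) / psi.sum(0)     # interpolated centroid embedding
        w = torch.relu(self.phi(f_c - f)) * torch.relu(self.vphi(c - x))  # Eq. (8)
        w_hat = torch.softmax(w, dim=0)         # Eq. (9): softmax over superpoints
        return (w_hat * f).sum(0)               # Eq. (10): proposal embedding
```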
After obtaining the proposal embedding, we adopt a classification head and a score head to predict the class and score of the proposal Ii. In addition, we use a superpoint mask head to predict superpoint scores for masking the low-confidence superpoints within the proposal; applying this mask turns a candidate proposal into a final instance. For these three heads, we use the cross-entropy as the classification loss Lcls, the binary cross-entropy as the score loss Lscore, and the mean squared error as the mask loss Lmask, and form the instance loss Lins = Lcls + Lscore + Lmask for training the superpoint graph cut network. A sketch of the three heads is given below.
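A minimal sketch of the three heads might look as follows; the input to the mask head (per-superpoint embeddings of the proposal) and all dimensions are our assumptions.

```python
import torch.nn as nn

class ProposalHeads(nn.Module):
    """Sketch of the classification, score, and superpoint mask heads."""
    def __init__(self, dim, num_classes):
        super().__init__()
        self.cls_head = nn.Linear(dim, num_classes)                       # class logits
        self.score_head = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())  # proposal score
        self.mask_head = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())   # superpoint scores

    def forward(self, prop_emb, sp_emb):
        # prop_emb: (C,) proposal embedding; sp_emb: (n, C) embeddings of
        # the superpoints inside the proposal (our assumed mask-head input).
        return (self.cls_head(prop_emb), self.score_head(prop_emb),
                self.mask_head(sp_emb))
```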
3.3 Training and Inference
In the training process, the whole framework is optimized by a joint loss, which is defined as:

$$L_{joint} = L_{sem} + L_{geo} + L_{ins} \tag{11}$$

where Lsem is the conventional cross-entropy loss for semantic scores, Lgeo is the geometry-aware edge loss for edge scores, and Lins is the instance loss for instance classification, score prediction, and superpoint mask prediction. In the inference process, our method directly outputs instances after a forward pass of the network. Note that non-maximum suppression is not necessary for our method.
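Assembled in code, the training objective might look like the following sketch; the tensor names are placeholders, and the score head is assumed to output probabilities (so binary cross-entropy applies directly).

```python
import torch.nn.functional as F

def instance_loss(cls_logits, cls_tgt, score_prob, score_tgt, mask_pred, mask_tgt):
    """L_ins = L_cls + L_score + L_mask, as described in Sec. 3.2.2."""
    l_cls = F.cross_entropy(cls_logits, cls_tgt)             # classification loss
    l_score = F.binary_cross_entropy(score_prob, score_tgt)  # score loss
    l_mask = F.mse_loss(mask_pred, mask_tgt)                 # superpoint mask loss
    return l_cls + l_score + l_mask

def joint_loss(sem_logits, sem_tgt, l_geo, l_ins):
    """Eq. (11): L_joint = L_sem + L_geo + L_ins; L_geo is the
    geometry-aware edge loss of Eq. (6)."""
    return F.cross_entropy(sem_logits, sem_tgt) + l_geo + l_ins
```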
4 Experiments
4.1 Experimental Settings
Datasets. We conduct experiments on two benchmark datasets, ScanNet v2 [7] and S3DIS [1]. The ScanNet v2 dataset contains 1,613 3D scenes, split into 1,201 training, 312 validation, and 100 test scenes. Instance segmentation results are evaluated on 18 object categories. We report results on the validation set and the hidden test set; the ablation study is conducted on the validation set. The S3DIS dataset has 272 3D scans in 6 different areas with 13 object classes, and instance segmentation is evaluated on all classes. We report results on Area 5 and under 6-fold cross-validation.
Evaluation metrics. Following the ScanNet v2 official protocol, we use the mean average precision as the evaluation metric for both ScanNet v2 and S3DIS. The mean average precision with IoU
thresholds of 50% and 25% are denoted as AP50 and AP25, respectively. AP denotes the mean average precision averaged over IoU thresholds from 50% to 95% with a step size of 5%. Additionally, following existing methods [41, 3, 37], we use mean coverage (mCov), mean weighted coverage (mWCov), mean precision (mPrec), and mean recall (mRec) for S3DIS evaluation.
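For reference, the relationship between these metrics can be sketched as follows; ap_at_iou is a hypothetical helper that computes average precision at a single IoU threshold.

```python
import numpy as np

def summarize_ap(ap_at_iou, preds, gts):
    """AP averaged over IoU thresholds 0.50-0.95 (step 0.05), plus AP50/AP25."""
    thresholds = np.arange(0.50, 0.951, 0.05)
    ap = float(np.mean([ap_at_iou(preds, gts, t) for t in thresholds]))
    return ap, ap_at_iou(preds, gts, 0.50), ap_at_iou(preds, gts, 0.25)
```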
Implementation details. Our model is trained on a single TITAN RTX GPU. We use the Adam optimizer with a base learning rate of 0.001, scheduled by cosine annealing. The voxel size is set to 0.02m. A graph-based segmentation method [9] and SSP+SPG [22, 21] are used to generate superpoints for ScanNet scenes and S3DIS rooms, respectively. At training time, we limit the maximum number of points in a scene to 250k and randomly crop the excess. Due to the high point density of S3DIS, we randomly downsample each scene to 1/4 of its points before cropping. At inference, the whole scene is fed into the network without downsampling or cropping. Note that we follow [3, 37] and use the statistical average instance radius of the specific class to refine the instances.
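A minimal sketch of the stated optimization setup, assuming PyTorch; the model stub and epoch count are placeholders (the number of training epochs is not stated here).

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 8)   # placeholder for the full segmentation network
num_epochs = 100          # illustrative; the epoch count is an assumption

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # base lr 0.001
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs)

for epoch in range(num_epochs):
    # ... one training epoch over voxelized (0.02 m), cropped (<= 250k points) scenes ...
    scheduler.step()
```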
4.2 Benchmarking Results
ScanNet v2. We compare our model with recent state-of-the-art methods on the hidden test set of ScanNet v2. Table 1 reports the results on the leaderboard of the official testing server (http://kaldir.vc.in.tum.de/scannet_benchmark/semantic_instance_3d.php?metric=ap). It can be observed that our method achieves the highest performance in terms of AP. The leaderboard results demonstrate the effectiveness of our method for 3D instance segmentation.
Moreover, we evaluate our method on the validation set of ScanNet v2. From the results in Table 2, one can observe that the proposed GraphCut achieves better results. In particular, our method brings gains of 2.8% in AP and 1.5% in AP50 over the second-best methods. In addition, we provide the visualization results of our GraphCut and SoftGroup [37] in Figure 3, using red rectangular boxes to highlight the differences between them. It can be observed that our method generates good instances with clear boundaries for objects clustered together, such as chairs. SoftGroup relies on point grouping using offset-shifted point coordinates, which cannot make full use of the local geometric information of point clouds. In contrast, our method fully utilizes this local geometric information through the edge score prediction network and the superpoint graph cut network, and thus achieves better results than SoftGroup on these clustered objects.
S3DIS. In Table 3, we list the results of Area 5 and 6-fold cross-validation on S3DIS. Regarding the evaluation of Area 5, our method can outperform all compared methods. It is worth noting that our model improves SoftGroup by 2.5% in terms of AP. For the 6-fold cross-validation of S3DIS, our method is superior to the state-of-the-art methods on most metrics.
4.3 Ablation Studies and Analysis
Different k in edge score prediction network. In our edge score prediction network, we learn the similarity from the local k-NN graphs of two adjacent nodes to identify whether they belong to the same instance. Here, we study the impact of different k on the instance segmentation performance. We select k ∈ {0, 2, 4, 8, 16}. Notably, k=0 means that we only concatenate two adjacent superpoint features as edge embedding. The results of AP, AP50, and AP25 are 52.0%, 68.8%, 78.7% (k=2), 52.2%, 69.1%, 79.3% (k=4), 51.4%, 68.1%, 79.1% (k=8), and 50.8%, 67.7%, 79.1% (k=16), respectively. Since k=4 achieves the best results, we set k=4 in our experiment.
Effectiveness of edge feature embedding. To verify the effectiveness of our edge feature embedding, we consider three cases: (1) only the edge embedding in the coordinate space (dubbed "Coordinate"), (2) only the edge embedding in the feature space (dubbed "Feature"), and (3) only the embeddings of the adjacent nodes as the edge embedding. From the instance segmentation results on the ScanNet v2 validation set listed in Table 4, the best performance is achieved by combining the edge embeddings in both the coordinate and feature spaces. This shows that employing both the geometry and feature embeddings of point clouds improves instance segmentation performance.
Ablation study on geometry-aware edge loss. Here, we conduct experiments on the ScanNet v2 validation set to verify the effectiveness of the proposed geometry-aware edge loss. Specifically, we consider three ablations: (1) only the area constraint in the coordinate space (i.e., "Larea"), (2) only the instance constraint in the feature space (i.e., "Lfeat"), and (3) only the binary cross-entropy loss, i.e., Ledge. The results are listed in Table 5. It can be observed that the geometry constraints bring substantial gains to our method. By using the area constraint, it is easier to draw the nodes of an instance toward the instance center, making the boundaries between different instances clearer.
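For intuition, the area constraint penalizes triangle areas computed from cross products; a minimal sketch of this building block (same-instance case only, with illustrative names, and without batching) is:

```python
import torch

def triangle_area(a, b, c):
    """Area of the triangle spanned by points a, b, c in R^3 via the cross
    product; this is the basic term penalized by the area constraint L_area."""
    return 0.5 * torch.linalg.cross(b - a, c - a).norm()
```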
Effectiveness of bilateral graph attention. To validate the effectiveness of the proposed bilateral graph attention, we replace it with a simple MLP network followed by max-pooling and conduct experiments on the ScanNet v2 validation set. The results of AP, AP50, and AP25 are 49.9%, 66.8%, and 77.3% (MLP network) versus 52.2%, 69.1%, and 79.3% (our bilateral graph attention). Without the designed bilateral graph attention, the performance drops considerably. This is because bilateral graph attention adaptively aggregates the information of superpoints in the same instance, which is more effective than a simple max-pooling operation for instance embedding.
5 Conclusion
In this paper, we proposed a learning-based superpoint graph cut method for 3D instance segmentation, which prunes edges off the superpoint graph to generate instances. Specifically, we proposed an edge score prediction network with cross-graph attention in the coordinate and feature spaces to capture the local geometric information of two adjacent nodes and predict edge scores. A geometry-aware edge loss was proposed to train the edge score prediction network, which encourages two adjacent nodes in the same instance to be close to the instance center in both the coordinate and feature spaces. Based on the learned edge scores, a superpoint graph cut network was developed to cut irrelevant edges for instance generation. For the generated instances, we further adopted bilateral graph attention to predict semantic classes and scores of instances. Extensive experiments on the ScanNet v2 and S3DIS benchmarks show that our method achieves new state-of-the-art performance on 3D instance segmentation.
Acknowledgments
The authors would like to thank the reviewers for their detailed comments and instructive suggestions. This work was supported by the National Science Fund of China (Grant Nos. U1713208, 61876084).
Summary Of The Paper
This is a good paper with a novel idea and solid experiments.
Strengths And Weaknesses
Strengths: 1. The idea is novel. 2. The experiments are sufficient. 3. The results are good. Weaknesses: Refer to the Questions.
Questions
1. Line 133: the authors design an MLP to predict the offset vectors. Is there any constraint on the MLP? Could you please add a figure (in the supplementary material) to show the shifted points or superpoints? Could you please add an experiment without the offset vectors? 2. Line 172: for nodes u and v from different instances, the triangle areas are also minimized, just as when they are from the same instance. Please explain this in detail.
Limitations
Please refer to the Questions. |
Summary Of The Paper
This work tackles the task of 3D semantic instance segmentation by exploiting the local geometry. For that purpose, a graph is defined over geometric super-points. Then a series of graph cut networks, edge scoring functions and attention mechanisms is used to extract, classify and score the final instance mask. Experiments on S3DIS A5/6fCV and ScanNet test outperform prior published methods. Ablation studies on ScanNet validation show the effect of the individual model components.
Strengths And Weaknesses
Strengths
The proposed approach is original in terms of the model architecture which is inspired by traditional graph cut approaches and achieves state-of-the-art 3D instance segmentation scores on two popular indoor scenes datasets (ScanNet, S3DIS).
Weaknesses
The model itself consists of numerous components that are not always clearly explained and/or motivated. For example, l.122 mentions sparse convolutions, but it is not very clear if they are applied on points or on superpoints. Then additional edge-conditioned convolutions are applied. What is the motivation for two types of convolutions; wouldn't one type of convolution be enough? This could be further motivated and also evaluated to show that it actually improves performance. Overall, the model consists of numerous components, which increases complexity and raises the question of whether similar performance could be achieved with a simpler model, allowing the community to draw more general conclusions. At this stage, it is an interesting model that performs very well, but I'm unsure about its significance to the community, since it is unclear which conclusions can be drawn to push the field forward.
The approach also relies on a large number of hyper-parameters (l.184, l.192, l.201, l.292 …), requiring manual tuning. The paper contains parameter studies only for a subset of them, e.g., k in k-NN. From the paper it is unclear whether the same hyperparameters are used for both datasets and how sensitive the approach is to them.
Questions
Question
The method introduces some new triangle losses, but do they help? Table 3 in the supplementary material seems to suggest that L_area hurts performance; however, Table 5 in the main paper suggests the opposite conclusion. What is the correct answer?
Minor suggestions
l.133/149 - repeat meaning of F.
Fig. 1 - add the losses to the model figure (now it's a bit unclear where they are applied); similarly, add the used notation to the figure.
l.184 Parameter study for delta and beta?
l.191 Where do the semantic classes come from?
l.201 Parameter study?
l.230 How is the scalar score obtained from the cross entropy loss / per-?
l.235 Where is the semantic loss applied?
l.278 Highlighting that the model produces clear boundaries is not that insightful - this is clear from the fact that the model is based on an over-segmentation which always produces clear boundaries.
Training details: is the model trained on the train and val splits for the test set submission?
Limitations
Limitations and negative societal impact are adequately discussed in the rebuttal. In particular, the limitations include sensitivity to the long-tail problem, e.g., refrigerators, which appear sparsely in the training data and exhibit large intra-class variety. The paper also suggests potential solutions, such as mining context information or data augmentation.
NIPS | Title
Learning Superpoint Graph Cut for 3D Instance Segmentation
Abstract
3D instance segmentation is a challenging task due to the complex local geometric structures of objects in point clouds. In this paper, we propose a learning-based superpoint graph cut method that explicitly learns the local geometric structures of the point cloud for 3D instance segmentation. Specifically, we first oversegment the raw point clouds into superpoints and construct the superpoint graph. Then, we propose an edge score prediction network to predict the edge scores of the superpoint graph, where the similarity vectors of two adjacent nodes learned through cross-graph attention in the coordinate and feature spaces are used for regressing edge scores. By forcing two adjacent nodes of the same instance to be close to the instance center in the coordinate and feature spaces, we formulate a geometry-aware edge loss to train the edge score prediction network. Finally, we develop a superpoint graph cut network that employs the learned edge scores and the predicted semantic classes of nodes to generate instances, where bilateral graph attention is proposed to extract discriminative features on both the coordinate and feature spaces for predicting semantic labels and scores of instances. Extensive experiments on two challenging datasets, ScanNet v2 and S3DIS, show that our method achieves new state-of-the-art performance on 3D instance segmentation. Code is available at https://github.com/fpthink/GraphCut.
1 Introduction
In recent years, with the development of 3D sensors, such as LiDAR and Kinect camera, various 3D computer vision tasks have been receiving more and more attention. 3D instance segmentation is a fundamental task in 3D scene understanding and has been widely used in kinds of applications such as self-driving cars, virtual reality, and robotic navigation. Although recent progress in 3D instance segmentation is encouraging, it is still a challenging task due to irregularities and context uncertainties of 3D points in 3D scenes with complex geometric structures.
Many efforts have been dedicated to 3D instance segmentation and achieved promising performance. These methods can be mainly divided into two categories: detection-based methods [44, 45] and clustering-based methods [40, 18]. Among detection-based methods, 3D-BoNet [44] first detects the 3D bounding boxes and then employs a mask prediction network to predict the object mask for 3D instance segmentation. However, for objects with complex geometric structures, detectionbased methods [45] cannot obtain accurate 3D bounding boxes, thereby degrading the instance segmentation performance. The clustering-based method SGPN [40] clusters 3D points based
†Equal Contributions, ∗Corresponding authors. Le Hui, Linghua Tang, Yaqi Shen, Jin Xie, and Jian Yang are with PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, and Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology, China.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
on semantic segmentation to generate instances. Unlike SGPN, Jiang et al. [18] developed an offset branch to cluster points based on semantic predictions in dual coordinate spaces, including original and shifted coordinate spaces. Besides, some follow-up methods utilize tree structures [25], hierarchical aggregation [3], and soft semantic segmentation [37] to boost the performance of 3D instance segmentation. However, most of these clustering-based methods rely on center offsets and semantics to segment instances, which cannot effectively capture the geometric context information of point clouds. Therefore, the performance of instance segmentation is usually limited by objects with complex geometric structures in point clouds.
In this paper, we propose a learning-based superpoint graph cut method that explicitly learns the local geometric structures of point clouds to segment 3D instances. Specifically, we construct the superpoint graph to learn the geometric context similarities of superpoints and convert the instance segmentation into a binary classification of edges. Our method consists of an edge score prediction network to predict edge scores and a superpoint graph cut network to generate instances. In our method, we oversegment the raw point clouds into superpoints and construct the superpoint graph by linking the k-nearest superpoints in the coordinate space. In the edge score prediction network, we first perform cross-graph attention on the local neighborhoods of two adjacent nodes to extract local geometric features for measuring the similarity of the nodes. Then, based on the learned similarity vectors from the coordinate and feature spaces, we adopt an edge score branch to predict the edge scores. In addition, we propose a geometry-aware edge loss to train the edge score prediction network by forcing the adjacent nodes of the same instance to be close to the instance center in both the coordinate and feature spaces. In the superpoint graph cut network, we use the learned edge scores combined with semantic classes of the nodes to cut the edges for forming object proposals. The proposals are obtained by applying the breadth-first-search algorithm on the superpoint graph to aggregate nodes in the same connected component. In each proposal, we apply bilateral graph attention to aggregate local geometric features to extract discriminative features for predicting classes and scores of proposals. Furthermore, we adopt a mask learning branch to filter the low-confidence superpoints within the proposal to generate instance.
In summary, we present an edge score prediction network that learns the local geometric features of adjacent nodes for producing edge scores. To train it, we propose a geometry-aware edge loss to keep the instance compact in the coordinate and feature space simultaneously. We present a superpoint graph cut network that extracts discriminative instance features to generate accurate instances by using bilateral graph attention in the coordinate and feature spaces. Extensive experiments on the ScanNet v2 [7] and S3DIS [1] datasets show that our method achieves new state-of-the-art performance on 3D instance segmentation. On the online test set of ScanNet v2, our method achieves the performance of 55.2% in terms of mAP, which is 4.6% higher than the current best results [25]. For S3DIS, our method outperforms the current best results [37] over 2% in terms of mAP.
2 Related Work
3D semantic segmentation. Extracting features from irregular 3D point clouds is crucial for 3D semantic segmentation. Qi et al. [30] first proposed PointNet to learn point-wise features from a point set for semantic segmentation through the multi-layer perceptron network. Following it, many efforts [24, 34, 46, 15, 47, 4, 2] have been proposed to improve semantic segmentation performance. Early point-based methods [36, 43, 31, 14] design various local feature aggregation strategies to extract discriminative point-wise features for semantic segmentation. Inspired by successful 2D convolution networks, view-based methods [23, 35, 17, 19] project the point cloud into multiple regular 2D views, where the regular 2D convolution is applied to extract features. In addition to viewbased methods, volumetric-based methods [27, 39, 10, 6] first voxelize the point cloud into regular 3D grids and then apply 3D convolution to extract local features of point clouds. In order to capture the local geometric structures of point clouds, graph-based methods [42, 22, 38, 5, 16] construct the graph on point clouds and utilize graph convolution to aggregate local geometric information for semantic segmentation.
3D instance segmentation. 3D instance segmentation is a more challenging task, which further needs to identify each instance. Current methods can be roughly grouped into two classes: detection-based methods and clustering-based methods.
Detection-based methods [45, 13, 26] first detect 3D bounding boxes of each object in point clouds, and then apply a mask prediction network on each box to predict the object mask for 3D instance segmentation. In [44], a 3D instance segmentation framework dubbed 3D-BoNet is proposed, which directly regresses the 3D bounding boxes for all instances and predicts point-level masks for each instance. Yi et al. [45] proposed a generative shape proposal network that generates proposals by reconstructing shapes from noisy observations in a scene for 3D instance segmentation. In addition, using both geometry and RGB inputs, [13] develops a joint 2D-3D feature learning network that combines the 2D and 3D features to regress 3D object bounding boxes and predict instance masks.
Clustering-based methods usually use point similarity [40], semantic maps [11, 20], or geometric shifts [18, 3, 25, 33] to cluster 3D points into object instances. A similarity group proposal network was proposed in [40] to cluster points by learning point-wise similarity for generating instances. [29] proposes a multi-task learning framework that simultaneously learns semantic classes and high-dimensional embeddings of 3D points to cluster the points into object instances. In [41], a segmentation framework is introduced to learn semantic-aware point-wise instance embedding for associatively segmenting instances and semantics of point clouds. Han et al. [11] proposed an occupancy-aware method to predict the number of occupied voxels for each instance. PointGroup [18] clusters points by using predicted point-wise center offset vectors and point-wise semantic labels. The follow-up method [3] adopts a hierarchical aggregation strategy for 3D instance segmentation, which first performs point aggregation to cluster points into preliminarily sets and then performs set aggregation to cluster sets into instances. Lately, Vu et al. [37] proposed a soft grouping strategy to mitigate the problem of semantic prediction errors by associating each point with multiple classes, yielding in significant performance gains in 3D instance segmentation. In addition, a semantic superpoint tree network, called SSTNet, is proposed in [25] for segmenting point clouds in instances. It first groups superpoints with similar semantic features to build a binary tree and then generates instances by tree traversal and splitting. To make the network more efficient, a dynamic convolution network combined with a small Transformer network is constructed to propose a lightweight 3D instance segmentation method [12].
3 Method
An overview of our learning-based superpoint graph cut method is illustrated in Figure 1. Based on the superpoint graph, the edge score prediction network (Sec. 3.1) extracts edge embeddings from the coordinate and feature spaces to predict edge scores. After that, the superpoint graph cut network (Sec. 3.2) generates accurate object instances by learning discriminative instance features to predict the classes and scores of instances. Finally, in Sec. 3.3, we describe how to train our method and how to infer instances from point clouds.
3.1 Edge Score Prediction Network
Given a raw point cloud, we oversegment it into superpoints and construct the superpoint graph G = (V, E), where V is the set of superpoint nodes and E is the edge set. Since the superpoint representation is coarser than the point representation, learning features directly at the superpoint level cannot effectively capture the local geometric structures of point clouds. Therefore, we apply submanifold sparse convolution [10] to the point cloud to extract point-level features and initialize the superpoint-level features by average pooling the point-level features within each superpoint. After that, we apply edge-conditioned convolution [32] to extract superpoint features, denoted as F ∈ R^{|V|×C}, where C is the feature dimension.
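To make the pooling step concrete, the following is a minimal NumPy sketch of the superpoint-feature initialization; the function name and the use of `np.add.at` for scatter-averaging are illustrative choices, not the paper's implementation.

```python
import numpy as np

def init_superpoint_features(point_feats, superpoint_ids, num_superpoints):
    """Average-pool point-level features into superpoint-level features.

    point_feats:    (num_points, C) features from the point-level backbone.
    superpoint_ids: (num_points,) index of the superpoint each point belongs to.
    """
    C = point_feats.shape[1]
    sums = np.zeros((num_superpoints, C))
    counts = np.zeros(num_superpoints)
    np.add.at(sums, superpoint_ids, point_feats)  # scatter-add features per superpoint
    np.add.at(counts, superpoint_ids, 1.0)        # count points per superpoint
    return sums / np.maximum(counts, 1.0)[:, None]
```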
3.1.1 Edge Feature Embedding
Once we obtain superpoint features, the edge score prediction network learns edge embeddings to predict edge scores for segmenting instances. Given adjacent nodes (u, v) ∈ E, it is desirable that the learned edge embedding can effectively identify whether nodes u and v belong to the same instance. To this end, we apply cross-graph attention to the superpoint graph in two spaces (the coordinate space and the feature space) to learn superpoint similarities. The learned similarity vectors of nodes u and v are combined to form the edge embedding for predicting edge scores.
Edge embedding in coordinate space. To characterize the similarity of nodes u and v, we first shift them toward their corresponding instance centroids in the coordinate space. Here, a multi-layer perceptron (MLP) network encodes F to produce |V| offset vectors O = {o1, . . . , o|V|} ∈ R^{|V|×3}. Given the original superpoint coordinates X = {x1, . . . , x|V|} ∈ R^{|V|×3}, the shifted superpoint coordinates X̂ = {x̂1, . . . , x̂|V|} are obtained as X̂ = X + O. In this way, the geometric distance between nodes belonging to different instances is increased, which enhances the discriminability of the superpoints. After that, based on the shifted coordinate space, for node u we leverage its k nearest superpoints (i.e., Nu) to construct a local k-NN graph Gu. Similarly, we obtain the graph Gv for node v. Then, we perform cross-graph attention across Gu and Gv to characterize the similarity of the nodes through the learned feature vectors, as shown in Figure 1. Taking node u as an example, the weight α of cross-graph attention is defined as:
αu,i = MLP(x̂i − x̂u),∀i ∈ Nu ∪Nv (1)
where x̂i and x̂u are the shifted coordinates. Note that i enumerates all 2k neighbors across the two graphs. The final output feature vector is then formulated as:
hu = ∑_{i∈Nu∪Nv} α̂u,i ∗ MLP(x̂i) + bi    (2)
where α̂u,i is the weight αu,i after a softmax over the neighbors and bi is a learnable bias. The learned feature vector hu ∈ R^C characterizes the geometric similarity by adaptively learning the geometric differences between the two graphs. In the same way, we obtain the feature vector hv for node v. We concatenate the feature vectors hu and hv to form the edge embedding in the coordinate space: eu,v = [hu, hv].
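The following NumPy sketch illustrates the cross-graph attention of Eqs. (1)-(2) for a single node u; `mlp_alpha` and `mlp_feat` are placeholders for the learned MLPs, and the per-neighbor bias of Eq. (2) is simplified to a single vector, both assumptions made for illustration.

```python
import numpy as np

def coord_similarity_vector(u, x_hat, nbrs_u, nbrs_v, mlp_alpha, mlp_feat, bias):
    """Compute h_u of Eq. (2) from the shifted coordinates x_hat (|V|, 3).

    nbrs_u, nbrs_v: index arrays of the k nearest superpoints of u and v,
    so that i ranges over all 2k neighbors across the two local graphs.
    """
    idx = np.concatenate([nbrs_u, nbrs_v])        # the 2k cross-graph neighbors
    alpha = mlp_alpha(x_hat[idx] - x_hat[u])      # Eq. (1): (2k, C) raw weights
    alpha = np.exp(alpha - alpha.max(axis=0))     # softmax over the neighbors
    alpha /= alpha.sum(axis=0)
    return (alpha * mlp_feat(x_hat[idx])).sum(axis=0) + bias  # Eq. (2)
```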
Edge embedding in feature space. In addition to the coordinate space, we also exploit the feature space to extract discriminative edge embeddings. First, an MLP network encodes F to produce an initial feature embedding Z ∈ R^D. By pushing the feature embeddings of different instances away from each other, we enlarge the gap between instances in the feature space. Given a pair of nodes (u, v) ∈ E, we construct k-NN graphs Ĝu and Ĝv in the feature space, respectively. In this way, each graph is expected to aggregate superpoints within the same instance. Then, we perform cross-graph attention across Ĝu and Ĝv to characterize the similarity of the nodes in the feature space, obtaining feature vectors ĥu ∈ R^C and ĥv ∈ R^C. If u and v belong to the same instance, they share similar k-NN graphs in the feature space, so the learned feature vectors ĥu and ĥv are similar to each other. As before, we concatenate the feature vectors to obtain the edge embedding in the feature space: êu,v = [ĥu, ĥv].
Edge score prediction. After obtaining edge embeddings in the coordinate and feature spaces, we utilize a simple MLP network to generate the edge score, which is defined as:
au,v = σ(MLP([eu,v, êu,v, du,v]))    (3)

where [·, ·, ·] denotes concatenation, σ denotes the sigmoid function, and du,v is the geometric distance between nodes u and v in the shifted coordinate space. In the experiments, an edge score au,v > 0.5 indicates that the edge between nodes u and v should be cut from the superpoint graph. We supervise the edge scores with a binary cross-entropy loss Ledge.
3.1.2 Geometry-Aware Edge Loss
To train the edge score prediction network, we employ the geometric structures of the superpoint graph to form a geometry-aware edge loss, as shown in Figure 2.
Specifically, given nodes u, v and their corresponding instance centroids cu and cv, we draw the nodes toward their instance centroids by minimizing the L2 distances du,cu and dv,cv. Furthermore, when (u, v) ∈ E belong to the same instance, we expect them to collaboratively shift to the common instance centroid c by minimizing the area of the triangle △uvc. When (u, v) ∈ E belong to different instances, we expect them to collaboratively shift to their own instance centroids cu and cv by minimizing the areas of the triangles △uvcu and △uvcv. The area constraint in the coordinate space is written as:
Larea = (1/|E|) ∑_{(u,v)∈E} [ ‖x̂u − cu‖2 + ‖x̂v − cv‖2 + (1/2) ( |(c − x̂u) × (c − x̂v)| · I(u, v) + ( |(cu − x̂u) × (x̂v − x̂u)| + |(cv − x̂v) × (x̂u − x̂v)| ) · (1 − I(u, v)) ) ]    (4)
where I(u, v) is the indicator function: I(u, v) equals 1 if u and v belong to the same instance, and 0 otherwise. Note that "×" denotes the vector cross product, used to compute the triangle areas. For nodes u and v from the same instance, both are drawn toward the common instance centroid, so they are pulled close to each other in the coordinate space, which helps group u and v into the same instance. For nodes u and v from different instances, each is drawn toward its own instance centroid, so they are pushed apart in the coordinate space, which helps separate u and v into different instances.
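As a small worked example of the area terms in Eq. (4), the area of a triangle spanned by three points is half the norm of a cross product; the coordinates below are purely illustrative.

```python
import numpy as np

def triangle_area(p, q, r):
    """Area of the triangle with vertices p, q, r in R^3 via the cross product."""
    return 0.5 * np.linalg.norm(np.cross(q - p, r - p))

# Different-instance case of Eq. (4): both triangle terms must shrink, which,
# together with the centroid-distance terms, pushes each shifted node toward
# its own instance centroid.
x_u, x_v = np.array([0., 0., 0.]), np.array([1., 0., 0.])
c_u, c_v = np.array([0., 1., 0.]), np.array([1., 1., 0.])
penalty = triangle_area(x_u, x_v, c_u) + triangle_area(x_v, x_u, c_v)  # = 1.0
```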
Likewise, we expect nodes in the same instance to be compact in the feature space by constraining their feature embeddings. When (u, v) ∈ E belong to the same instance, we draw the embeddings of u and v toward the mean embedding of the instance and also pull them toward each other. When (u, v) ∈ E belong to different instances, we push the embeddings of u and v away from each other. In addition, instances are pushed apart by increasing the distance between their mean embeddings. The constraint in the feature space is thus written as:
Lfeat = (1/|E|) ∑_{(u,v)∈E} [ ( [‖zu − zv‖2 − δ]_+^2 + [‖zu − gu‖2 − δ]_+^2 ) · I(u, v) + ( [2β − ‖zu − zv‖2]_+^2 + [2β − ‖gu − gv‖2]_+^2 ) · (1 − I(u, v)) ]    (5)
where zu ∈ R^D and zv ∈ R^D are the feature embeddings. Note that gu ∈ R^D and gv ∈ R^D denote the mean feature embeddings of the instances that u and v belong to, respectively.
Algorithm 1 Proposal Generation Algorithm
Input: node semantic scores S = {s1, . . . , s|V|} with si ∈ R^N for i = 1, . . . , |V|, where N is the number of classes; semantic threshold θ; edge scores A = {au,v} ∈ R^{|E|×1}, where au,v is the score of the edge connecting nodes u and v.
Output: proposals 𝓘 = {I1, . . . , Im}, where m is the number of proposals.
1:  initialize an empty proposal set 𝓘
2:  for i = 1 to N do
3:      if i is a valid class (excluding wall and floor) then
4:          initialize a visited array f of length |V| with all zeros
5:          for v = 1 to |V| do
6:              if fv == 0 and s_v^i > θ then
7:                  initialize an empty queue Q
8:                  initialize an empty set I
9:                  fv = 1; Q.pushBack(v); add v to I
10:                 while Q is not empty do
11:                     h = Q.popFront()
12:                     for each k ∈ {k | ah,k < 0.5} do
13:                         if fk == 0 and s_k^i > θ then
14:                             fk = 1; Q.pushBack(k); add k to I
15:                 add I to 𝓘
16: return 𝓘
The thresholds δ and β are set to 0.1 and 1.5, respectively, to ensure that the inter-instance distance is larger than the intra-instance distance. Finally, the geometry-aware edge loss is defined as:
Lgeo = Larea + Lfeat + Ledge (6)
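Before moving on, a minimal NumPy sketch of the per-edge feature-space term in Eq. (5) is given below; δ and β follow the values stated above, and the function signature is an illustrative assumption.

```python
import numpy as np

def feat_term(z_u, z_v, g_u, g_v, same_instance, delta=0.1, beta=1.5):
    """Per-edge feature-space constraint of Eq. (5).

    z_u, z_v: embeddings of the two nodes; g_u, g_v: mean embeddings of
    their instances; same_instance: the indicator I(u, v).
    """
    hinge_sq = lambda x: max(x, 0.0) ** 2                 # [.]_+^2
    if same_instance:   # pull u, v together and toward the instance mean
        return hinge_sq(np.linalg.norm(z_u - z_v) - delta) + \
               hinge_sq(np.linalg.norm(z_u - g_u) - delta)
    # push node embeddings and instance means apart by a margin of 2*beta
    return hinge_sq(2 * beta - np.linalg.norm(z_u - z_v)) + \
           hinge_sq(2 * beta - np.linalg.norm(g_u - g_v))
```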
3.2 Superpoint Graph Cut Network
3.2.1 Proposal Generation via Superpoint Graph Cut
Given the edge scores A = {au,v} ∈ R^{|E|×1}, we propose a proposal generation algorithm that produces candidate proposals by jointly employing the learned edge scores and the predicted semantic classes of the nodes (i.e., superpoints). Specifically, to mitigate semantic prediction errors, we follow [37] and adopt a soft threshold θ to associate each node with multiple classes. Given the semantic scores of the superpoints S = {s1, . . . , s|V|} with si ∈ R^N, where N is the number of classes, the v-th superpoint is associated with the i-th class if s_v^i > θ. In this way, for the i-th class, we can slice a superpoint subset Ci from the superpoint graph containing the superpoints whose semantic score for the i-th class exceeds θ. Then, an edge (u, v) ∈ E is preserved if both u ∈ Ci and v ∈ Ci, and deleted otherwise; in other words, we remove edges between superpoint nodes with different semantics. After that, for each preserved edge (u, v), we use the edge score au,v to determine whether the edge should be cut from the superpoint graph: an edge is cut if its score exceeds the threshold of 0.5. Finally, we apply breadth-first search on the superpoint graph to aggregate the nodes of each connected component into a proposal for the i-th class. Iterating over all N classes yields proposals for every class. The details are shown in Algorithm 1, and a runnable sketch follows below.
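Below is a runnable Python sketch of Algorithm 1; representing edge scores as a dictionary and the graph as adjacency lists is our own choice for illustration, not the paper's data layout.

```python
from collections import deque

def generate_proposals(sem_scores, edge_scores, adj, theta, valid_classes):
    """Breadth-first proposal generation over the superpoint graph (Algorithm 1).

    sem_scores:  (|V|, N) semantic scores per superpoint.
    edge_scores: dict mapping an edge (u, v) to its cut score a_{u,v}.
    adj:         dict mapping each node to the list of its graph neighbors.
    """
    num_nodes = len(adj)
    proposals = []
    for i in valid_classes:                       # iterate over the N classes
        visited = [False] * num_nodes
        for v in range(num_nodes):
            if visited[v] or sem_scores[v][i] <= theta:
                continue
            visited[v] = True
            queue, component = deque([v]), [v]
            while queue:                          # BFS over preserved, uncut edges
                h = queue.popleft()
                for k in adj[h]:
                    a = edge_scores.get((h, k), edge_scores.get((k, h), 1.0))
                    if a < 0.5 and not visited[k] and sem_scores[k][i] > theta:
                        visited[k] = True
                        queue.append(k)
                        component.append(k)
            proposals.append((i, component))
    return proposals
```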
3.2.2 Bilateral Graph Attention for Proposal Embedding
As we obtain proposals I = {I1, . . . , Im} from the point cloud, we propose bilateral graph attention to extract proposal embeddings for generating instances by applying an attention mechanism in both the coordinate and feature spaces. Specifically, given the i-th proposal, we first compute the proposal centroid ci by averaging the shifted superpoint coordinates. Then, we interpolate the embedding of the proposal centroid using an inverse-distance-weighted average over the corresponding superpoints, which is formulated as:
f′i(ci) = ( ∑_{j∈Ii} ψj(ci) ∗ fj ) / ( ∑_{j∈Ii} ψj(ci) ),    ψj(ci) = 1 / ‖xj − ci‖2    (7)

where Ii denotes the superpoints within the i-th proposal and xj denotes the original coordinates of the superpoints. Note that ∗ denotes the Hadamard product, i.e., the element-wise product of two vectors. After obtaining the coordinate ci and embedding f′i of the i-th proposal, we link the superpoints to the proposal centroid to construct a k-NN graph. To extract a discriminative embedding of the proposal, we develop bilateral graph attention. The bilateral weight wi,j between a superpoint j ∈ Ii and the i-th proposal is formulated as:
wi,j = φ(f′i, fj) ∗ ϕ(ci, xj)    (8)

where φ(·, ·) : R^C × R^C → R^C and ϕ(·, ·) : R^3 × R^3 → R^C are two mapping functions implemented by MLP networks. φ(f′i, fj) = ReLU(Wφ⊤(f′i − fj)) encodes the difference between the superpoint and the proposal centroid in the feature space, while ϕ(ci, xj) = ReLU(Wϕ⊤(ci − xj)) encodes the difference between the superpoint and the proposal centroid in the coordinate space. Thus, wi,j ∈ R^C captures the channel-wise relationship between the superpoint and the proposal in the coordinate and feature spaces. We use the softmax function to obtain the normalized weight ŵi,j across the proposal Ii, which is written as:
ŵi,j = exp(wi,j) / ∑_{k∈Ii} exp(wi,k)    (9)
Finally, we sum the weighted superpoint embeddings to obtain the proposal embedding, which is given by:

f̂i = ∑_{j∈Ii} ŵi,j ∗ fj    (10)
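A NumPy sketch of Eqs. (7)-(10) for one proposal follows; `mlp_phi` and `mlp_varphi` stand in for the two learned mappings, and the small epsilon guarding the inverse distance is our own addition.

```python
import numpy as np

def proposal_embedding(F, X, X_hat, members, mlp_phi, mlp_varphi, eps=1e-8):
    """Bilateral graph attention for one proposal (Eqs. 7-10).

    F: (|V|, C) superpoint embeddings; X / X_hat: original and shifted
    coordinates; members: indices of the superpoints in the proposal.
    """
    c_i = X_hat[members].mean(axis=0)                              # proposal centroid
    psi = 1.0 / (np.linalg.norm(X[members] - c_i, axis=1) + eps)   # Eq. (7) weights
    f_i = (psi[:, None] * F[members]).sum(axis=0) / psi.sum()      # centroid embedding
    w = mlp_phi(f_i - F[members]) * mlp_varphi(c_i - X[members])   # Eq. (8), (n, C)
    w_hat = np.exp(w - w.max(axis=0))                              # Eq. (9): softmax
    w_hat /= w_hat.sum(axis=0)                                     # over the proposal
    return (w_hat * F[members]).sum(axis=0)                        # Eq. (10)
```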
After obtaining the proposal embedding, we adopt a classification head and a score head to predict the class and the score of the proposal Ii. In addition, we use a superpoint mask head to predict a score for each superpoint, masking out low-confidence superpoints within the proposal; by applying this mask, we generate the final instance from the candidate proposal. For these three heads, we use the cross-entropy loss as the classification loss Lcls, the binary cross-entropy loss as the score loss Lscore, and the mean squared error as the mask loss Lmask, forming the instance loss Lins = Lcls + Lscore + Lmask for training the superpoint graph cut network.
3.3 Training and Inference
In the training process, the whole framework is optimized by a joint loss, which is defined as:
Ljoint = Lsem + Lgeo + Lins    (11)

where Lsem is the conventional cross-entropy loss on semantic scores, Lgeo is the geometry-aware edge loss on edge scores, and Lins is the instance loss for instance classification, score prediction, and superpoint mask prediction. At inference, our method directly outputs instances after a single forward pass of the network; note that non-maximum suppression is not required.
4 Experiments
4.1 Experimental Settings
Datasets. We conduct experiments on two benchmark datasets, ScanNet v2 [7] and S3DIS [1]. The ScanNet v2 dataset contains 1,613 3D scenes, split into 1,201 training, 312 validation, and 100 test scenes. Instance segmentation results are evaluated on 18 object categories. We report results on the validation and hidden test sets; the ablation study is conducted on the validation set. The S3DIS dataset has 272 3D scans covering 6 different areas with 13 object classes. Instance segmentation is evaluated on all classes, and we report both Area 5 and 6-fold cross-validation results.
Evaluation metrics. Following the ScanNet v2 official protocol, we use the mean average precision as the evaluation metric for both ScanNet v2 and S3DIS. The mean average precision with IoU
thresholds of 50% and 25% are denoted as AP50 and AP25, respectively, while AP denotes the mean average precision averaged over IoU thresholds from 50% to 95% in steps of 5%. Additionally, following existing methods [41, 3, 37], we use mean coverage (mCov), mean weighted coverage (mWCov), mean precision (mPrec), and mean recall (mRec) for S3DIS evaluation.
Implementation details. Our model is trained on a single TITAN RTX GPU. We use the Adam optimizer with a base learning rate of 0.001, scheduled by cosine annealing. The voxel size is set to 0.02 m. A graph-based segmentation method [9] and SSP+SPG [22, 21] are used to generate superpoints for ScanNet scenes and S3DIS rooms, respectively. At training time, we limit the maximum number of points in a scene to 250k and randomly crop the excess. Due to the high point density of S3DIS, we randomly downsample its points by a factor of 4 before cropping. At inference, the whole scene is fed into the network without downsampling or cropping. Note that, following [3, 37], we use the statistical average instance radius of each class to refine the instances.
4.2 Benchmarking Results
ScanNet v2. We compare our model with recent state-of-the-art methods on the hidden test set of ScanNet v2. Table 1 reports the results from the leaderboard of the official testing server (http://kaldir.vc.in.tum.de/scannet_benchmark/semantic_instance_3d.php?metric=ap). Our method achieves the highest performance in terms of AP, demonstrating its effectiveness for 3D instance segmentation.
Moreover, we evaluate our method on the validation set of ScanNet v2. From the results in Table 2, one can observe that the proposed GraphCut achieves better results: our method improves over the second-best method by 2.8% in AP and by 1.5% in AP50. In addition, we provide visualization results of our GraphCut and SoftGroup [37] in Figure 3, using red rectangular boxes to highlight the differences. It can be observed that our method generates good instances with clear boundaries for objects clustered together, such as chairs. SoftGroup relies on point grouping using offset-shifted point coordinates, which cannot make full use of the local geometric information of point clouds. Since our method fully exploits this local geometric information through the edge score prediction network and the superpoint graph cut network, it achieves better results than SoftGroup on these clustered objects.
S3DIS. In Table 3, we list the results of Area 5 and 6-fold cross-validation on S3DIS. Regarding the evaluation of Area 5, our method can outperform all compared methods. It is worth noting that our model improves SoftGroup by 2.5% in terms of AP. For the 6-fold cross-validation of S3DIS, our method is superior to the state-of-the-art methods on most metrics.
4.3 Ablation Studies and Analysis
Different k in the edge score prediction network. In our edge score prediction network, we learn similarity from the local k-NN graphs of two adjacent nodes to identify whether they belong to the same instance. Here, we study the impact of different k on instance segmentation performance, selecting k ∈ {0, 2, 4, 8, 16}; notably, k = 0 means that we simply concatenate the two adjacent superpoint features as the edge embedding. The results of AP, AP50, and AP25 are 52.0%, 68.8%, 78.7% (k = 2); 52.2%, 69.1%, 79.3% (k = 4); 51.4%, 68.1%, 79.1% (k = 8); and 50.8%, 67.7%, 79.1% (k = 16). Since k = 4 achieves the best results, we set k = 4 in our experiments.
Effectiveness of the edge feature embedding. To verify the effectiveness of our edge feature embedding, we consider three cases: (1) only the edge embedding in the coordinate space (dubbed "Coordinate"); (2) only the edge embedding in the feature space (dubbed "Feature"); (3) only the concatenated embeddings of the two adjacent nodes as the edge embedding. From the instance segmentation results on the ScanNet v2 validation set listed in Table 4, the best performance is achieved by combining the edge embeddings from both the coordinate and feature spaces. This shows that exploiting both the geometric and feature information of point clouds improves the instance segmentation of point clouds.
Ablation study on the geometry-aware edge loss. We conduct experiments on the ScanNet v2 validation set to verify the effectiveness of the proposed geometry-aware edge loss. Specifically, we consider three ablations: (1) only the area constraint in the coordinate space (i.e., Larea); (2) only the instance constraint in the feature space (i.e., Lfeat); (3) only the binary cross-entropy loss Ledge. The results are listed in Table 5. It can be observed that the geometric constraints bring substantial gains to our method. The area constraint makes it easier to draw the nodes of an instance toward the instance center, making the boundaries between different instances clearer.
Effectiveness of bilateral graph attention. To validate the effectiveness of the proposed bilateral graph attention, we replace it with a simple MLP network followed by max-pooling and conduct experiments on the ScanNet v2 validation set. The results of AP, AP50, and AP25 are 49.9%, 66.8%, 77.3% (MLP network) and 52.2%, 69.1%, 79.3% (our bilateral graph attention). Without the designed bilateral graph attention, performance drops considerably. This is because bilateral graph attention adaptively aggregates the information of superpoints within the same instance, which is more effective than a simple max-pooling operation for instance embedding.
5 Conclusion
In this paper, we proposed a learning-based superpoint graph cut method for 3D instance segmentation, which prunes edges off the superpoint graph to generate instances. Specifically, we proposed an edge score prediction network with cross-graph attention in the coordinate and feature spaces to capture the local geometric information of two adjacent nodes and predict edge scores. A geometry-aware edge loss was proposed to train the edge score prediction network, encouraging two adjacent nodes in the same instance to be close to the instance center in both the coordinate and feature spaces. Based on the learned edge scores, a superpoint graph cut network was developed to cut irrelevant edges for instance generation. For the generated instances, we further adopted bilateral graph attention to predict semantic classes and scores. Extensive experiments on the ScanNet v2 and S3DIS benchmarks show that our method achieves new state-of-the-art performance on 3D instance segmentation.
Acknowledgments
The authors would like to thank reviewers for their detailed comments and instructive suggestions. This work was supported by the National Science Fund of China (Grant Nos. U1713208, 61876084). | 1. What is the focus and contribution of the paper on 3D point cloud instance segmentation?
2. What are the strengths of the proposed approach, particularly in its novelty and efficacy?
3. What are the weaknesses of the paper, especially regarding its limitations in runtime efficiency and comparisons with other works?
4. Do you have any concerns or suggestions regarding the initial superpoint graph configuration and sensitivity analysis?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper proposes a learnable superpoint-based graph cut method to explicitly learn the geometric structures for the 3D point cloud instance segmentation problem. Experiments on benchmark datasets show that the proposed method outperforms current state-of-the-art methods.
Strengths And Weaknesses
Strengths: The proposed approach, which dynamically prunes unrelated edges for better instance segmentation, is relatively new. The paper presents a detailed description of the main building blocks: (i) the edge score prediction network and (ii) the superpoint graph cut network. The ablation analysis validates the efficacy of the proposed network structure and the geometry-aware edge loss.
Weaknesses: Strictly speaking, the proposed approach is not completely learnable, since the initial superpoint graph oversegmented from the point cloud is determined empirically. What is the sensitivity to different configurations of the initial superpoint graph?
Questions
Using attention to capture the similarities between points or nodes in a graph is not a completely novel approach; some existing works on 3D point clouds [1] have proposed this before. What are the main differences of the authors' attention-based approach? It is suggested to add this discussion to the Related Work section.
[1] Wang, Lei, et al. "Graph attention convolution for point cloud semantic segmentation." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019.
Limitations
What I'm more concerned about is the runtime efficiency of the proposed learnable graph cut method. For deploying the proposed framework in the autonomous driving application mentioned in the Introduction, a short response time is desired. From the supplemental material, however, the runtime efficiency comparison is only performed on indoor datasets (S3DIS, ScanNet v2). An outdoor dataset specific to autonomous driving, such as SemanticKITTI, would be more suitable for this evaluation.
NIPS | Title
OnACID: Online Analysis of Calcium Imaging Data in Real Time
Abstract
Optical imaging methods using calcium indicators are critical for monitoring the activity of large neuronal populations in vivo. Imaging experiments typically generate a large amount of data that needs to be processed to extract the activity of the imaged neuronal sources. While deriving such processing algorithms is an active area of research, most existing methods require the processing of large amounts of data at a time, rendering them vulnerable to the volume of the recorded data, and preventing real-time experimental interrogation. Here we introduce OnACID, an Online framework for the Analysis of streaming Calcium Imaging Data, including i) motion artifact correction, ii) neuronal source extraction, and iii) activity denoising and deconvolution. Our approach combines and extends previous work on online dictionary learning and calcium imaging data analysis, to deliver an automated pipeline that can discover and track the activity of hundreds of cells in real time, thereby enabling new types of closed-loop experiments. We apply our algorithm on two large scale experimental datasets, benchmark its performance on manually annotated data, and show that it outperforms a popular offline approach.
1 Introduction
Calcium imaging methods continue to gain traction among experimental neuroscientists due to their capability of monitoring large targeted neuronal populations across multiple days or weeks with decisecond temporal and single-neuron spatial resolution. To infer the neural population activity from the raw imaging data, an analysis pipeline is employed which typically involves solving the following problems (all of which are still areas of active research): i) correcting for motion artifacts during the imaging experiment, ii) identifying/extracting the sources (neurons and axonal or dendritic processes) in the imaged field of view (FOV), and iii) denoising and deconvolving the neural activity from the dynamics of the expressed calcium indicator.
1These authors contributed equally to this work. 2To whom correspondence should be addressed.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
The fine spatiotemporal resolution of calcium imaging comes at a data rate cost; a typical two-photon (2p) experiment on a 512×512 pixel large FOV imaged at 30Hz, generates ∼50GB of data (in 16-bit integer format) per hour. These rates can be significantly higher for other planar and volumetric imaging techniques, e.g., light-sheet [1] or SCAPE imaging [4], where the data rates can exceed 1TB per hour. The resulting data deluge poses a significant challenge.
Of the three basic pre-processing problems described above, the problem of source extraction faces the most severe scalability issues. Popular approaches reshape the data movies into a large array with dimensions (#pixels)× (#timesteps), that is then factorized (e.g., via independent component analysis [20] or constrained non-negative matrix factorization (CNMF) [26]) to produce the locations in the FOV and temporal activities of the imaged sources. While effective for small or medium datasets, direct factorization can be impractical, since a typical experiment can quickly produce datasets larger than the available RAM. Several strategies have been proposed to enhance scalability, including parallel processing [9], spatiotemporal decimation [10], dimensionality reduction [23], and out-of-core processing [13]. While these approaches enable efficient processing of larger datasets, they still require significant storage, power, time, and memory resources.
Apart from recording large neural populations, optical methods can also be used for stimulation [5]. Combining optogenetic methods for recording and perturbing neural ensembles opens the door to exciting closed-loop experiments [24, 15, 8], where the pattern of the stimulation can be determined based on the recorded activity during behavior. In a typical closed-loop experiment, the monitored/perturbed regions of interest (ROIs) have been preselected by analyzing offline a previous dataset from the same FOV. Monitoring the activity of a ROI, which usually corresponds to a soma, typically entails averaging the fluorescence over the corresponding ROI, resulting in a signal that is only a proxy for the actual neural activity and which can be sensitive to motion artifacts and drifts, as well as spatially overlapping sources, background/neuropil contamination, and noise. Furthermore, by preselecting the ROIs, the experimenter is unable to detect and incorporate new sources that become active later during the experiment, which prevents the execution of truly closed-loop experiments.
In this paper, we present an Online, single-pass, algorithmic framework for the Analysis of Calcium Imaging Data (OnACID). Our framework is highly scalable with minimal memory requirements, as it processes the data in a streaming fashion one frame at a time, while keeping in memory a set of low dimensional sufficient statistics and a small minibatch of the last data frames. Every frame is processed in four sequential steps: i) The frame is registered against the previous denoised (and registered) frame to correct for motion artifacts. ii) The fluorescence activity of the already detected sources is tracked. iii) Newly appearing neurons and processes are detected and incorporated to the set of existing sources. iv) The fluorescence trace of each source is denoised and deconvolved to provide an estimate of the underlying spiking activity.
Our algorithm integrates and extends the online NMF algorithm of [19], the CNMF source extraction algorithm of [26], and the near-online deconvolution algorithm of [11], to provide a framework capable of real time identification and processing of hundreds of neurons in a typical 2p experiment (512×512 pixel wide FOV imaged at 30Hz), enabling novel designs of closed-loop experiments. We apply OnACID to two large-scale (50 and 65 minute long) mouse in vivo 2p datasets; our algorithm can find and track hundreds of neurons faster than real-time, and outperforms the CNMF algorithm of [26] benchmarked on multiple manual annotations using a precision-recall framework.
2 Methods
We illustrate OnACID in process in Fig. 1. At the beginning of the experiment (Fig. 1-left), only a few components are active, as shown in the panel A by the max-correlation image3, and these are detected by the algorithm (Fig. 1B). As the experiment proceeds more neurons activate and are subsequently detected by OnACID (Fig. 1 middle, right) which also tracks their activity across time (Fig. 1C). See also Supplementary Movie 1 for an example in simulated data.
Next, we present the steps of OnACID in more detail.
3The correlation image (CI) at every pixel is equal to the average temporal correlation coefficient between that pixel and its neighbors [28] (8 neighbors were used for our analysis). The max-correlation image is obtained by computing the CI for each batch of 1000 frames, and then taking the maximum over all these images.
Motion correction: Our online approach allows us to employ a very simple yet effective motion correction scheme: each denoised dataframe can be used to register the next incoming noisy dataframe. To enhance robustness we use the denoised background/neuropil signal (defined in the next section) as a template to align the next dataframe. We use rigid, sub-pixel registration [16], although piecewise rigid registration can also be used at an additional computational cost. This simple alignment process is not suitable for offline algorithms due to noise in the raw data, leading to the development of various algorithms based on template matching [14, 23, 25] or Hidden Markov Models [7, 18].
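A minimal sketch of this registration step using off-the-shelf tools is given below; scikit-image's phase cross-correlation is assumed as a stand-in for the rigid, sub-pixel method of [16], and the exact choices (upsampling factor, boundary handling) are illustrative rather than the paper's implementation.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def register_frame(frame, template, upsample_factor=10):
    """Rigidly align an incoming noisy frame to a template (here, the
    denoised background of the previous registered frame)."""
    displacement, _, _ = phase_cross_correlation(
        template, frame, upsample_factor=upsample_factor)
    registered = nd_shift(frame, displacement, mode='nearest')
    return registered, displacement
```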
Source extraction: A standard approach for source extraction is to model the fluorescence within a matrix factorization framework [20, 26]. Let Y ∈ Rd×T denote the observed fluorescence across space and time in a matrix format, where d denotes the number of imaged pixels, and T the length of the experiment in timepoints. If the number of imaged sources is K, then let A ∈ Rd×K denote the matrix where column i encodes the "spatial footprint" of the source i. Similarly, let C ∈ RK×T denote the matrix where each row encodes the temporal activity of the corresponding source. The observed data matrix can then be expressed as
Y = AC + B + E,    (1)

where B, E ∈ R^{d×T} denote matrices for background/neuropil activity and observation noise, respectively. A common approach, introduced in [26], is to express the background matrix B as a low rank matrix, i.e., B = bf, where b ∈ R^{d×nb} and f ∈ R^{nb×T} denote the spatial and temporal components of the low rank background signal, and nb is a small integer, e.g., nb = 1, 2. The CNMF framework of [26] operates by alternating optimization of [A,b] given the data Y and estimates of [C; f ], and vice versa, where each column of A is constrained to be zero outside of a neighborhood around its previous estimate. This strategy exploits the spatial locality of each neuron to reduce the computational complexity. This framework can be adapted to a data streaming setup using the online NMF algorithm of [19], where the observed fluorescence at time t can be written as
yt = A ct + b ft + εt.    (2)

Proceeding in a similar alternating way, the activity of all neurons at time t, ct, and the temporal background ft, given yt and the spatial footprints and background [A,b], can be found by solving a nonnegative least squares problem, whereas [A,b] can be estimated efficiently as in [19] by only keeping in memory the sufficient statistics (where we define c̃t = [ct; ft])
Wt = ((t−1)/t) Wt−1 + (1/t) yt c̃t⊤,    Mt = ((t−1)/t) Mt−1 + (1/t) c̃t c̃t⊤,    (3)
while at the same time enforcing the same spatial locality constraints as in the CNMF framework.
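The rank-1 updates of Eq. (3) are cheap to implement; below is a direct NumPy transcription (the function name is ours).

```python
import numpy as np

def update_suff_stats(W, M, y_t, c_tilde, t):
    """Streaming update of the sufficient statistics in Eq. (3).

    W: (d, K+n_b); M: (K+n_b, K+n_b); y_t: (d,) current frame;
    c_tilde: (K+n_b,) concatenated traces [c_t; f_t]; t: current timestep.
    """
    W = ((t - 1) / t) * W + np.outer(y_t, c_tilde) / t
    M = ((t - 1) / t) * M + np.outer(c_tilde, c_tilde) / t
    return W, M
```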
Deconvolution: The online framework presented above estimates the demixed fluorescence traces c1, . . . , cK of each neuronal source. The fluorescence is a filtered version of the underlying neural activity that we want to infer. To further denoise and deconvolve the neural activity from the dynamics of the indicator we use the OASIS algorithm [11] that implements the popular spike deconvolution algorithm of [30] in a nearly online fashion by adapting the highly efficient Pool Adjacent Violators Algorithm used in isotonic regression [3]. The calcium dynamics is modeled with a stable autoregressive process of order p, ct = ∑p i=1 γict−i + st. We use p = 1 here, but can extend to p = 2 to incorporate the indicator rise time [11]. OASIS solves a modified LASSO problem
minimize_{ĉ, ŝ}  (1/2) ‖ĉ − y‖² + λ ‖ŝ‖1    subject to    ŝt = ĉt − γ ĉt−1 ≥ smin  or  ŝt = 0    (4)
where the ℓ1 penalty on ŝ or the minimal spike size smin can be used to enforce sparsity of the neural activity. The algorithm progresses through each time series sequentially from beginning to end and backtracks only to the most recent spike. We can further restrict the lag to a few frames to obtain a good approximate solution applicable to real-time experiments.
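For intuition, the forward model that Eq. (4) inverts is easy to simulate; the sketch below generates a noisy AR(1) fluorescence trace from a sparse spike train (all parameter values are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
T, gamma = 500, 0.95                          # trace length, AR(1) coefficient
s = (rng.random(T) < 0.02).astype(float)      # sparse spike train
c = np.zeros(T)
for t in range(1, T):                         # calcium: c_t = gamma*c_{t-1} + s_t
    c[t] = gamma * c[t - 1] + s[t]
y = c + 0.1 * rng.standard_normal(T)          # noisy observed fluorescence
# OASIS recovers (c_hat, s_hat) from y by solving the constrained problem in Eq. (4).
```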
Detecting new components: The approach explained above enables tracking the activity of a fixed number of sources, and will ignore neurons that become active later in the experiment. To account for a variable number of sources in an online NMF setting, [12] proposes to add a new random component when the correlation coefficient between each data frame and its representation in terms of the current factors is lower than a threshold. This approach is insufficient here since the footprint of a new neuron in the whole FOV is typically too small to modify the correlation coefficient significantly.
We approach the problem by introducing a buffer that contains the last lb instances of the residual signal rt = yt − Act − bft, where lb is a reasonably small number, e.g., lb = 100. On this buffer, similarly to [26], we perform spatial smoothing with a Gaussian kernel with radius similar to the expected neuron radius, and then search for the point in space that explains the maximum variance. New candidate components anew, and cnew are estimated by performing a local rank-1 NMF of the residual matrix restricted to a fixed neighborhood around the point of maximal variance.
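A sketch of this detection step follows; the patch size, the single alternating rank-1 pass (a real implementation would iterate to convergence), and the function signature are our own simplifications.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detect_candidate(R_buf, dims, sigma=2.0, half_width=10):
    """Propose (a_new, c_new) from the residual buffer R_buf of shape (d, l_b)."""
    h, w = dims
    R = R_buf.reshape(h, w, -1)
    smoothed = gaussian_filter(R, sigma=(sigma, sigma, 0))     # spatial smoothing
    py, px = np.unravel_index(smoothed.var(axis=2).argmax(),
                              (h, w))                          # max-variance pixel
    ys = slice(max(py - half_width, 0), min(py + half_width, h))
    xs = slice(max(px - half_width, 0), min(px + half_width, w))
    P = np.maximum(R[ys, xs], 0).reshape(-1, R.shape[-1])      # local patch, (p, l_b)
    a = P.mean(axis=1)                                         # init spatial footprint
    c = P.T @ a / (a @ a + 1e-12)                              # rank-1 NMF half-step
    a = np.maximum(P @ c / (c @ c + 1e-12), 0)                 # and the other half
    return (ys, xs), a, np.maximum(c, 0)
```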
To limit false positives, the candidate component is screened for quality. First, to prevent overfitting to noise, the shape anew must be significantly correlated (e.g., θs ∼ 0.8-0.9) with the residual buffer averaged over time and restricted to the spatial extent of anew. Moreover, if anew significantly overlaps with any of the existing components, then its temporal component cnew must not be highly correlated with the corresponding temporal components; otherwise we reject it as a possible duplicate of an existing component. Once a new component is accepted, [A,b] and [C; f ] are augmented with anew and cnew respectively, and the sufficient statistics are updated as follows:
Wt = [ Wt,  (1/t) Ybuf cnew⊤ ],    Mt = (1/t) [ t Mt,  C̃buf cnew⊤ ;  cnew C̃buf⊤,  ‖cnew‖² ],    (5)

(the 2×2 block matrix for Mt is written row by row, separated by ";")
where Ybuf, C̃buf denote the matrices Y, [C; f ], restricted to the last lb frames that the buffer stores. This process is repeated until no new components are accepted, at which point the next frame is read and processed. The whole online procedure is described in Algorithm 1; the supplement includes pseudocode description of all the referenced routines.
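The block update of Eq. (5) in NumPy form is shown below; cnew is treated as a length-lb vector and the function name is ours.

```python
import numpy as np

def add_component_stats(W, M, Y_buf, C_buf, c_new, t):
    """Augment the sufficient statistics with an accepted component, Eq. (5).

    Y_buf: (d, l_b) buffered frames; C_buf: (K+n_b, l_b) buffered traces
    [C; f]; c_new: (l_b,) trace of the new component.
    """
    W = np.hstack([W, (Y_buf @ c_new / t)[:, None]])
    cross = C_buf @ c_new                                  # C_buf c_new^T
    top = np.hstack([t * M, cross[:, None]])
    bottom = np.concatenate([cross, [c_new @ c_new]])[None, :]
    M = np.vstack([top, bottom]) / t
    return W, M
```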
Initialization: To initialize our algorithm we use the CNMF algorithm on a short initial batch of data of length Tb, (e.g., Tb = 1000). The sufficient statistics are initialized from the components that the offline algorithm finds according to (3). To ensure that new components are also initialized in the darker parts of the FOV, each data frame is normalized with the (running) mean for every pixel, during both the offline and the online phases.
Algorithmic Speedups: Several algorithmic and computational schemes are employed to boost the speed of the algorithm and make it applicable to real-time large-scale experiments. In [19] block coordinate descent is used to update the factors A, warm started at the value from the previous iteration. The same trick is used here not only for A, but also for C, since the calcium traces are continuous and typically change slowly. Moreover, the temporal traces of components that do not spatially overlap with each other can be updated simultaneously in vector form; we use a simple greedy scheme to partition the components into spatially non-overlapping groups.
Since neurons’ shapes are not expected to change at a fast timescale, updating their values (i.e., recomputing A and b) is not required at every timepoint; in practice we update every lb timesteps.
Algorithm 1 ONACID
Require: Data matrix Y, initial estimates A, b, C, f, S, current number of components K, current timestep t′, rest of parameters.
1:  W = Y[:, 1:t′] C⊤ / t′
2:  M = C C⊤ / t′                                    ▷ Initialize sufficient statistics
3:  G = DETERMINEGROUPS([A, b], K)                   ▷ Alg. S1-S2
4:  Rbuf = [Y − [A, b][C; f]][:, t′−lb+1 : t′]       ▷ Initialize residual buffer
5:  t = t′
6:  while there is more data do
7:      t ← t + 1
8:      yt ← ALIGNFRAME(yt, b ft−1)                  ▷ [16]
9:      [ct; ft] ← UPDATETRACES([A, b], [ct−1; ft−1], yt, G)   ▷ Alg. S3
10:     C, S ← OASIS(C, γ, smin, λ)                  ▷ [11]
11:     [A, b], [C, f], K, G, Rbuf, W, M ←
12:         DETECTNEWCOMPONENTS([A, b], [C, f], K, G, Rbuf, yt, W, M)   ▷ Alg. S4
13:     Rbuf ← [Rbuf[:, 2:lb], yt − A ct − b ft]     ▷ Update residual buffer
14:     if mod(t − t′, lb) = 0 then                  ▷ Update W, M, [A, b] every lb timesteps
15:         W, M ← UPDATESUFFSTATISTICS(W, M, yt, [ct; ft])   ▷ Equation (3)
16:         [A, b] ← UPDATESHAPES(W, M, [A, b])      ▷ Alg. S5
17: return A, b, C, f, S
Additionally, the sufficient statistics Wt,Mt are only needed for updating the estimates of [A,b] so they can be updated only when required. Motion correction can be sped up by estimating the motion only on a small (active) contiguous part of the FOV. Finally, as shown in [10], spatial decimation can bring significant speed benefits without compromising the quality of the results.
Software: OnACID is implemented in Python and is available at https://github.com/ simonsfoundation/caiman as part of the CaImAn package [13].
3 Results
Benchmarking on simulated data: To compare to ground truth spike trains, we simulated a 2000 frame dataset taken at 30Hz over a 256×256 pixel wide FOV containing 400 "donut" shaped neurons with Poisson spike trains (see supplement for details). OnACID was initialized on the first 500 frames. During initialization, 265 active sources were accurately detected (Fig. S2). After the full 2000 frames, the algorithm had detected and tracked all active sources, plus one false positive (Fig. 2A).
After detecting a neuron, we need to extract its spikes with a short time lag to enable interesting closed-loop experiments. To quantify performance we measured the correlation of the inferred spike train with the ground truth (Fig. 2B). We varied the lag in the online estimator, i.e., the number of future samples observed before assigning a spike at time zero. Lags of 2-5 already yield results similar to the solution with unrestricted lag. A further requirement for online closed-loop experiments is that the computational processing is fast enough. To balance the computational load over frames, we distributed the shape update over the frames, while still updating each neuron every 30 frames on average. Because the shape update is the last step of the loop in Algorithm 1, we keep track of the time already spent in the iteration and increase or decrease the number of updated neurons accordingly. In this way the frame processing rate always remained above 30Hz (Fig. 2C).
Application to in vivo 2p mouse hippocampal data: Next we considered a larger scale (90K frames, 480×480 pixels) real 2p calcium imaging dataset taken at 30Hz (i.e., 50 minute experiment). Motion artifacts were corrected prior to the analysis described below. The online algorithm was initialized on the first 1000 frames of the dataset using a Python implementation of the CNMF algorithm found in the CaImAn package [13]. During initialization 139 active sources were detected; by the end of all 90K frames, 727 active sources had been detected and tracked (5 of which were discarded due to their small size).
Benchmarking against offline processing and manual annotations: We collected manual annotations from two independent labelers who were instructed to find round or donut shaped neurons of similar size using the ImageJ Cell Magic Wand tool [31] given i) a movie obtained by removing a running 20th percentile (as a crude background approximation) and downsampling in time by a factor of 10, and ii) the max-correlation image. The goal of this pre-processing was to suppress silent and promote active cells. The labelers found respectively 872 and 880 ROIs. We also compared with the CNMF algorithm applied to the whole dataset which found 904 sources (805 after filtering for size).
To quantify performance we used a precision/recall framework similar to [2]. As a distance metric between two cells we used the Jaccard distance, and the pairing between different annotations was computed using the Hungarian algorithm, where matches with distance > 0.7 were discarded4. Table. 1 summarizes the results within the precision/recall framework. The online algorithm not only matches but outperforms the offline approach of CNMF, reaching high performance values (F1 = 0.79 and 0.78 against the two manual annotations, as opposed to 0.71 against both annotations for CNMF). The two annotations matched closely with each other (F1 = 0.89), indicating high reliability, whereas OnACID vs CNMF also produced a high score (F1 = 0.79), suggesting significant overlap in the mismatches between the two algorithms against manual annotations.
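The matching procedure can be reproduced in a few lines with SciPy; the binary-mask representation, its flattened layout, and the helper name below are our assumptions. Precision and recall then follow from the counts of matched and unmatched ROIs.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_rois(masks_a, masks_b, max_dist=0.7):
    """Pair two sets of binary ROI masks by Jaccard distance + Hungarian matching.

    masks_a: (n_a, d), masks_b: (n_b, d) boolean arrays of flattened masks.
    Returns index pairs (i, j) with Jaccard distance <= max_dist.
    """
    A, B = masks_a.astype(float), masks_b.astype(float)
    inter = A @ B.T                                        # pairwise intersections
    union = A.sum(1)[:, None] + B.sum(1)[None, :] - inter
    D = 1.0 - inter / np.maximum(union, 1.0)               # Jaccard distance matrix
    rows, cols = linear_sum_assignment(D)                  # Hungarian algorithm
    return [(r, c) for r, c in zip(rows, cols) if D[r, c] <= max_dist]
```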
Fig. 3 offers a more detailed view, where contour plots of the detected components are superimposed on the max-correlation image for the online (Fig. 3A) and offline (Fig. 3B) algorithms (white) and the annotations of Labeler 1 (red) restricted to a 200×200 pixel part of the FOV. Annotations of matches and mismatches between the online algorithm and the two labelers, as well as between the two labelers in the entire FOV are shown in Figs. S3-S8. For the automated procedures binary masks and contour plots were constructed by thresholding the spatial footprint of each component at a level equal to 0.2 times its maximum value. A close inspection at the matches between the online algorithm and the manual annotation (Fig. 3A-left) indicates that neurons with a strong footprint in the max-correlation image (indicating calcium transients with high amplitude compared to noise and background/neuropil activity) are reliably detected, despite the high neuron density and level of overlap. On the other hand, mismatches (Fig. 3B-left) can sometimes be attributed to shape mismatches, manually selected components with no signature in the max-correlation image (indicating faint or possibly unclear activity) that are not detected by the online algorithm (false negatives), or small partially visible processes detected by OnACID but ignored by the labelers ("false" positives).
4Note that the Cell Magic Wand Tool, by construction, tends to select circular ROI shapes, whereas the results of the online algorithm place no restrictions on the shapes. As a result the computed Jaccard distances tend to be overestimated, which explains our choice of a seemingly high mismatch threshold.
Fig. 3C shows examples of the traces from three selected neurons. OnACID can detect and track neurons with very sparse spiking over the course of the entire 50 minute experiment (Fig. 3C-top), and produce traces that are highly correlated with their offline counterparts. To examine the quality of the inferred traces (where ground truth collection at such scale is both very strenuous and severely impeded by the presence of background signals and neuropil activity), we compared the traces between the online algorithm and the CNMF approach on matched pairs of components. Fig. 3D shows the empirical cumulative distribution function (CDF) of the correlation coefficients from this comparison. The majority of the coefficients attain values close to 1, suggesting that the online algorithm can detect new neurons once they become active and then reliably track their activity.
OnACID is faster than real time on average: In addition to being more accurate, OnACID is also considerably faster as it required ∼27 minutes, i.e., ∼ 2× faster than real time on average, to analyze the full dataset (2 minutes for initialization and 25 for the online processing) as opposed to ∼1.5 hours for the offline approach and ∼10 hours for each of the annotators (who only select ROIs). Fig. 3E illustrates the time consumption of the various steps. In the majority of the frames where no spatial shapes are being updated and no new neurons are being incorporated, OnACID processing speed exceeds the data rate of 30Hz (Fig. 3E-top), and this processing time scales only mildly with the inclusion of new neurons. The cost of updating shapes and sufficient statistics per neuron is also very low (< 1ms), and only scales mildly with the number of existing neurons (Fig. 3E-middle). As argued before this cost can be distributed among all the frames while maintaining faster than real time processing rates. The expensive step appears when detecting and including one or possibly more new neurons in the algorithm (Fig. 3E-bottom). Although this occurs only sporadically, several speedups can be potentially employed here to achieve beyond real time at every frame (see also Discussion section), which would facilitate zero-lag closed-loop experiments.
Application to in vivo 2p mouse parietal cortex data: As a second application to 2p data we used a 116,000 frame dataset, taken at 30Hz over a 512×512 FOV (64min long). The first 3000 frames were used for initialization during which the CNMF algorithm found 442 neurons, before switching to OnACID, which by the end of the experiment found a total of 752 neurons (734 after filtering for size). Compared to two independent manual annotations of 928 and 875 ROIs respectively, OnACID achieved F1 = 0.76, 0.79 significantly outperforming CNMF (F1 = 0.65, 0.66 respectively). The matches and mismatches between OnACID and Labeler 1 on a 200×200 pixel part of the FOV are shown in Fig. 4A. Full FOV pairings as well as precision/recall metrics are given in Table 2.
For this dataset, rigid motion correction was also performed according to the simple method of aligning each frame to the denoised (and registered) background from the previous frame. Fig. 4B shows that this approach produced strikingly similar results to an offline template based, rigid motion correction method [25]. The difference in the displacements produced by the two methods was less than 1 pixel for all 116,000 frames with standard deviations 0.11 and 0.12 pixel for the x and y directions, respectively. In terms of timing, OnACID processed the dataset in 48 minutes, again faster than real time on average. This also includes the time needed for motion correction, which on average took 5ms per frame (a bit less than 10 minutes in total).
4 Discussion - Future Work
Although at first striking, the superior performance of OnACID compared to offline CNMF on the datasets presented in this work can be attributed to several factors. Calcium transient events are localized both in space (the spatial footprint of a neuron) and in time (typically 0.3-1s for genetic indicators). By looking at a short rolling buffer OnACID is able to more robustly detect activity compared to offline approaches that look at all the data simultaneously. Moreover, OnACID searches for new activity in the residual buffer, which excludes the activity of already detected neurons, making it easier to detect new overlapping components. Finally, offline CNMF requires the a priori specification of the number of components, making it more prone to either false positive or false negative components.
For both the datasets presented above, the analysis was done using the same space correlation threshold θs = 0.9. This strict choice leads to results with high precision and lower recall (see Tables 1 and 2). Results can be moderately improved by allowing a second pass of the data that can identify neurons that were initially not selected. Moreover, by relaxing the threshold the discrepancy between the precision and recall scores can be reduced, with only marginal modifications to the F1 scores (data not shown).
Our current implementation performs all processing serially. In principle, significant speed gains can be obtained by performing computations not needed at each timestep (updating shapes and sufficient statistics) or occur only sporadically (incorporating a new neuron) in a parallel thread with shared memory. Moreover, different online dictionary learning algorithms that do not require the solution of an inverse problem at each timestep can potentially further speed up our framework [17].
For detecting centroids of new sources OnACID examines a static image obtained by computing the variance across time of the spatially smoother residual buffer. While this approach works very well in practice it effectively favors shapes looking similar to a pre-defined Gaussian blob (when spatially smoothed). Different approaches for detecting neurons in static images can be possibly used here, e.g., [22], [2], [29], [27].
Apart from facilitating closed-loop behavioral experiments and rapid general calcium imaging data analysis, our online pipeline can be potentially employed to future, optical-based, brain computer interfaces [6, 21] where high quality real-time processing is critical to their performance. These directions will be pursued in future work.
Acknowledgments
We thank Sue Ann Koay, Jeff Gauthier and David Tank (Princeton University) for sharing their cortex and hippocampal data with us. We thank Lindsey Myers, Sonia Villani and Natalia Roumelioti for providing manual annotations. We thank Daniel Barabasi (Cold Spring Harbor Laboratory) for useful discussions. AG, DC, and EAP were internally funded by the Simons Foundation. Additional support was provided by SNSF P300P2_158428 (JF), and NIH BRAIN Initiative R01EB22913, DARPA N66001-15-C-4032, IARPA MICRONS D16PC00003 (LP). | 1. What is the main contribution of the paper regarding calcium imaging data analysis?
2. What are the strengths of the proposed approach, particularly in its application to large datasets?
3. Do you have any concerns about the method's reliance on specific thresholds or parameters?
4. How does the threshold used to include new components affect performance?
5. Can you provide more information on the use of isotonic regression for deconvolution, and how it relates to cell segmentation?
6. How does the online method differ from the original CNMF method, and what are the advantages of the former?
7. Can you clarify the idea you want to communicate in lines 40-50 of the review?
8. Are there any simulation results verifying the convergence rate of the algorithm?
9. How do you handle the permutation invariance of A?
10. Any questions regarding the paper that were not addressed in the review? | Review | Review
This paper proposes an online framework for analyzing calcium imaging data. This framework is built upon the popular and now widely used constrained non-negative matrix factorization (CNMF) method for cell segmentation and calcium time-series analysis (Pnevmatikakis, et al., 2016). While the existing CNMF approach is now being used by many labs across the country, Iâve talked to many neuroscientists that complain that this method cannot be applied to large datasets and thus its application has been limited. This work extends this method to a real-time decoding setting, making it an extremely useful contribution for the neuroscience community.
The paper is well written and the results are compelling. My only concern is that the paper appears to combine multiple existing methods to achieve their result. Nonetheless, I think the performance evaluation is solid and an online extension of CNMF for calcium image data analysis and will likely be useful to the neuroscience community.
Major comments:
- There are many steps in the described method that rely on specific thresholds or parameters. It would be useful to understand the sensitivity of the method to these different hyperparameters. In particular, how does the threshold used to include new components affect performance?
- Lines 40-50: This paragraph seems to wander from the point that you nicely set up before this paragraph (and continue afterwards). Not sure why you're bringing up closed loop, except to say that by doing this in real time you can do closed-loop experiments. I'm not sure what idea you want to communicate here.
- The performance evaluations do not really address the use of isotonic regression for deconvolution. In Figure 3C, the traces appear to be demixed calcium after segmentation and not the deconvolved spike trains. Many of the other comparisons focus on cell segmentation. Please comment on the use of deconvolution and how it possibly might help in cell segmentation.
- The results show that the online method outperforms the original CNMF method. Can the authors comment on where the two differ?
Minor comments:
Figure 1: The contours cannot be seen in B. Perhaps a white background in B (and later in the results) can help to see the components and contours. C is hard to see and interpret initially as the traces overlap significantly. |
NIPS | Title
OnACID: Online Analysis of Calcium Imaging Data in Real Time
Abstract
Optical imaging methods using calcium indicators are critical for monitoring the activity of large neuronal populations in vivo. Imaging experiments typically generate a large amount of data that needs to be processed to extract the activity of the imaged neuronal sources. While deriving such processing algorithms is an active area of research, most existing methods require the processing of large amounts of data at a time, rendering them vulnerable to the volume of the recorded data, and preventing real-time experimental interrogation. Here we introduce OnACID, an Online framework for the Analysis of streaming Calcium Imaging Data, including i) motion artifact correction, ii) neuronal source extraction, and iii) activity denoising and deconvolution. Our approach combines and extends previous work on online dictionary learning and calcium imaging data analysis, to deliver an automated pipeline that can discover and track the activity of hundreds of cells in real time, thereby enabling new types of closed-loop experiments. We apply our algorithm on two large scale experimental datasets, benchmark its performance on manually annotated data, and show that it outperforms a popular offline approach.
1 Introduction
Calcium imaging methods continue to gain traction among experimental neuroscientists due to their capability of monitoring large targeted neuronal populations across multiple days or weeks with decisecond temporal and single-neuron spatial resolution. To infer the neural population activity from the raw imaging data, an analysis pipeline is employed which typically involves solving the following problems (all of which are still areas of active research): i) correcting for motion artifacts during the imaging experiment, ii) identifying/extracting the sources (neurons and axonal or dendritic processes) in the imaged field of view (FOV), and iii) denoising and deconvolving the neural activity from the dynamics of the expressed calcium indicator.
1These authors contributed equally to this work. 2To whom correspondence should be addressed.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
The fine spatiotemporal resolution of calcium imaging comes at a data rate cost; a typical two-photon (2p) experiment on a 512×512 pixel large FOV imaged at 30Hz, generates ∼50GB of data (in 16-bit integer format) per hour. These rates can be significantly higher for other planar and volumetric imaging techniques, e.g., light-sheet [1] or SCAPE imaging [4], where the data rates can exceed 1TB per hour. The resulting data deluge poses a significant challenge.
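A quick back-of-envelope check of the quoted rate (our arithmetic, not from the paper): 512 × 512 pixels × 2 bytes/pixel × 30 frames/s × 3600 s/h ≈ 56.6 GB of raw 16-bit data per hour, consistent with the ∼50GB figure once cropping and file-format details are accounted for.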
Of the three basic pre-processing problems described above, the problem of source extraction faces the most severe scalability issues. Popular approaches reshape the data movies into a large array with dimensions (#pixels)× (#timesteps), that is then factorized (e.g., via independent component analysis [20] or constrained non-negative matrix factorization (CNMF) [26]) to produce the locations in the FOV and temporal activities of the imaged sources. While effective for small or medium datasets, direct factorization can be impractical, since a typical experiment can quickly produce datasets larger than the available RAM. Several strategies have been proposed to enhance scalability, including parallel processing [9], spatiotemporal decimation [10], dimensionality reduction [23], and out-of-core processing [13]. While these approaches enable efficient processing of larger datasets, they still require significant storage, power, time, and memory resources.
Apart from recording large neural populations, optical methods can also be used for stimulation [5]. Combining optogenetic methods for recording and perturbing neural ensembles opens the door to exciting closed-loop experiments [24, 15, 8], where the pattern of the stimulation can be determined based on the recorded activity during behavior. In a typical closed-loop experiment, the monitored/perturbed regions of interest (ROIs) have been preselected by analyzing offline a previous dataset from the same FOV. Monitoring the activity of a ROI, which usually corresponds to a soma, typically entails averaging the fluorescence over the corresponding ROI, resulting in a signal that is only a proxy for the actual neural activity and which can be sensitive to motion artifacts and drifts, as well as spatially overlapping sources, background/neuropil contamination, and noise. Furthermore, by preselecting the ROIs, the experimenter is unable to detect and incorporate new sources that become active later during the experiment, which prevents the execution of truly closed-loop experiments.
In this paper, we present an Online, single-pass, algorithmic framework for the Analysis of Calcium Imaging Data (OnACID). Our framework is highly scalable with minimal memory requirements, as it processes the data in a streaming fashion one frame at a time, while keeping in memory a set of low dimensional sufficient statistics and a small minibatch of the last data frames. Every frame is processed in four sequential steps: i) The frame is registered against the previous denoised (and registered) frame to correct for motion artifacts. ii) The fluorescence activity of the already detected sources is tracked. iii) Newly appearing neurons and processes are detected and incorporated to the set of existing sources. iv) The fluorescence trace of each source is denoised and deconvolved to provide an estimate of the underlying spiking activity.
Our algorithm integrates and extends the online NMF algorithm of [19], the CNMF source extraction algorithm of [26], and the near-online deconvolution algorithm of [11], to provide a framework capable of real time identification and processing of hundreds of neurons in a typical 2p experiment (512×512 pixel wide FOV imaged at 30Hz), enabling novel designs of closed-loop experiments. We apply OnACID to two large-scale (50 and 65 minute long) mouse in vivo 2p datasets; our algorithm can find and track hundreds of neurons faster than real-time, and outperforms the CNMF algorithm of [26] benchmarked on multiple manual annotations using a precision-recall framework.
2 Methods
We illustrate OnACID in process in Fig. 1. At the beginning of the experiment (Fig. 1-left), only a few components are active, as shown in the panel A by the max-correlation image3, and these are detected by the algorithm (Fig. 1B). As the experiment proceeds more neurons activate and are subsequently detected by OnACID (Fig. 1 middle, right) which also tracks their activity across time (Fig. 1C). See also Supplementary Movie 1 for an example in simulated data.
Next, we present the steps of OnACID in more detail.
3The correlation image (CI) at every pixel is equal to the average temporal correlation coefficient between that pixel and its neighbors [28] (8 neighbors were used for our analysis). The max-correlation image is obtained by computing the CI for each batch of 1000 frames, and then taking the maximum over all these images.
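To make the footnote concrete, here is a minimal NumPy sketch of the correlation image and max-correlation image; function and variable names are ours, not from the OnACID codebase.

import numpy as np

def correlation_image(Y):
    # Y: (T, H, W) batch of frames. Returns the (H, W) correlation image:
    # average temporal correlation of each pixel with its 8 neighbors.
    Yc = Y - Y.mean(axis=0)
    Yc = Yc / (Yc.std(axis=0) + 1e-12)   # z-score so mean products are correlations
    _, H, W = Y.shape
    ci = np.zeros((H, W))
    counts = np.zeros((H, W))
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            a = Yc[:, max(dy, 0):H + min(dy, 0), max(dx, 0):W + min(dx, 0)]
            b = Yc[:, max(-dy, 0):H + min(-dy, 0), max(-dx, 0):W + min(-dx, 0)]
            ci[max(-dy, 0):H + min(-dy, 0), max(-dx, 0):W + min(-dx, 0)] += (a * b).mean(axis=0)
            counts[max(-dy, 0):H + min(-dy, 0), max(-dx, 0):W + min(-dx, 0)] += 1
    return ci / counts                   # border pixels simply have fewer neighbors

def max_correlation_image(movie, batch=1000):
    # maximum of the CI over consecutive batches of `batch` frames
    return np.max([correlation_image(movie[i:i + batch])
                   for i in range(0, len(movie), batch)], axis=0)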
Motion correction: Our online approach allows us to employ a very simple yet effective motion correction scheme: each denoised dataframe can be used to register the next incoming noisy dataframe. To enhance robustness we use the denoised background/neuropil signal (defined in the next section) as a template to align the next dataframe. We use rigid, sub-pixel registration [16], although piecewise rigid registration can also be used at an additional computational cost. This simple alignment process is not suitable for offline algorithms due to noise in the raw data, leading to the development of various algorithms based on template matching [14, 23, 25] or Hidden Markov Models [7, 18].
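A minimal sketch of this registration step, assuming scikit-image's phase_cross_correlation (register_translation in older releases) for the rigid, sub-pixel displacement estimate; using the reshaped denoised background as template follows the text, but the implementation details are ours.

import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_frame(frame, template, upsample_factor=10):
    # estimate the (dy, dx) displacement of the noisy incoming frame relative
    # to the denoised template (e.g., the background b*f_{t-1} reshaped to the FOV)
    displacement, _, _ = phase_cross_correlation(template, frame,
                                                 upsample_factor=upsample_factor)
    # apply the estimated shift to register the frame to the template
    return nd_shift(frame, displacement, order=1, mode='nearest'), displacement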
Source extraction: A standard approach for source extraction is to model the fluorescence within a matrix factorization framework [20, 26]. Let Y ∈ Rd×T denote the observed fluorescence across space and time in a matrix format, where d denotes the number of imaged pixels, and T the length of the experiment in timepoints. If the number of imaged sources is K, then let A ∈ Rd×K denote the matrix where column i encodes the "spatial footprint" of the source i. Similarly, let C ∈ RK×T denote the matrix where each row encodes the temporal activity of the corresponding source. The observed data matrix can then be expressed as
Y = AC + B + E,    (1)

where B, E ∈ R^{d×T} denote matrices for background/neuropil activity and observation noise, respectively. A common approach, introduced in [26], is to express the background matrix B as a low rank matrix, i.e., B = bf, where b ∈ R^{d×n_b} and f ∈ R^{n_b×T} denote the spatial and temporal components of the low rank background signal, and n_b is a small integer, e.g., n_b = 1, 2. The CNMF framework of [26] operates by alternating optimization of [A, b] given the data Y and estimates of [C; f], and vice versa, where each column of A is constrained to be zero outside of a neighborhood around its previous estimate. This strategy exploits the spatial locality of each neuron to reduce the computational complexity. This framework can be adapted to a data streaming setup using the online NMF algorithm of [19], where the observed fluorescence at time t can be written as

y_t = A c_t + b f_t + ε_t.    (2)

Proceeding in a similar alternating way, the activity of all neurons at time t, c_t, and temporal background f_t, given y_t and the spatial footprints and background [A, b], can be found by solving a nonnegative least squares problem, whereas [A, b] can be estimated efficiently as in [19] by only keeping in memory the sufficient statistics (where we define c̃_t = [c_t; f_t])

W_t = ((t − 1)/t) W_{t−1} + (1/t) y_t c̃_t^⊤,    M_t = ((t − 1)/t) M_{t−1} + (1/t) c̃_t c̃_t^⊤,    (3)
while at the same time enforcing the same spatial locality constraints as in the CNMF framework.
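A minimal NumPy sketch of the running updates in Eq. (3); variable names are ours.

import numpy as np

def update_sufficient_statistics(W, M, y_t, c_tilde_t, t):
    # W: (d, K+n_b) running average of y_t c~_t^T; M: (K+n_b, K+n_b)
    # running average of c~_t c~_t^T, with c~_t = [c_t; f_t]
    W = ((t - 1) / t) * W + np.outer(y_t, c_tilde_t) / t
    M = ((t - 1) / t) * M + np.outer(c_tilde_t, c_tilde_t) / t
    return W, M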
Deconvolution: The online framework presented above estimates the demixed fluorescence traces c_1, . . . , c_K of each neuronal source. The fluorescence is a filtered version of the underlying neural activity that we want to infer. To further denoise and deconvolve the neural activity from the dynamics of the indicator we use the OASIS algorithm [11], which implements the popular spike deconvolution algorithm of [30] in a nearly online fashion by adapting the highly efficient Pool Adjacent Violators Algorithm used in isotonic regression [3]. The calcium dynamics is modeled with a stable autoregressive process of order p, c_t = Σ_{i=1}^{p} γ_i c_{t−i} + s_t. We use p = 1 here, but can extend to p = 2 to incorporate the indicator rise time [11]. OASIS solves a modified LASSO problem

minimize_{ĉ, ŝ}  (1/2) ‖ĉ − y‖² + λ ‖ŝ‖_1    subject to    ŝ_t = ĉ_t − γ ĉ_{t−1} ≥ s_min  or  ŝ_t = 0,    (4)

where the ℓ_1 penalty on ŝ or the minimal spike size s_min can be used to enforce sparsity of the neural activity. The algorithm progresses through each time series sequentially from beginning to end and backtracks only to the most recent spike. We can further restrict the lag to a few frames, to obtain a good approximate solution applicable for real-time experiments.
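OASIS itself is more involved than a few lines, but the AR(1) model above is easy to state in code. The sketch below pairs the forward model with a deliberately crude inverse (thresholded first-order differences); it is a stand-in to make the model concrete, not the OASIS algorithm.

import numpy as np

def ar1_calcium(s, gamma=0.95):
    # forward model: c_t = gamma * c_{t-1} + s_t
    c = np.zeros(len(s))
    for t in range(len(s)):
        c[t] = (gamma * c[t - 1] if t > 0 else 0.0) + s[t]
    return c

def naive_deconvolve(y, gamma=0.95, s_min=0.5):
    # crude inverse: s_t ~ y_t - gamma * y_{t-1}, thresholded at s_min;
    # OASIS instead solves Eq. (4), which is far more robust to noise
    s = y - gamma * np.concatenate(([0.0], y[:-1]))
    s[s < s_min] = 0.0
    return s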
Detecting new components: The approach explained above enables tracking the activity of a fixed number of sources, and will ignore neurons that become active later in the experiment. To account for a variable number of sources in an online NMF setting, [12] proposes to add a new random component when the correlation coefficient between each data frame and its representation in terms of the current factors is lower than a threshold. This approach is insufficient here since the footprint of a new neuron in the whole FOV is typically too small to modify the correlation coefficient significantly.
We approach the problem by introducing a buffer that contains the last lb instances of the residual signal rt = yt − Act − bft, where lb is a reasonably small number, e.g., lb = 100. On this buffer, similarly to [26], we perform spatial smoothing with a Gaussian kernel with radius similar to the expected neuron radius, and then search for the point in space that explains the maximum variance. New candidate components anew, and cnew are estimated by performing a local rank-1 NMF of the residual matrix restricted to a fixed neighborhood around the point of maximal variance.
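A sketch of this candidate-proposal step (buffer shapes, patch size, and iteration count are our choices):

import numpy as np
from scipy.ndimage import gaussian_filter

def propose_new_component(R_buf, fov_shape, sigma=3.0, patch=15, n_iter=10):
    # R_buf: (l_b, d) residual buffer; fov_shape = (H, W) with H*W = d
    H, W = fov_shape
    R = R_buf.reshape(len(R_buf), H, W)
    # smooth each residual frame with a neuron-sized Gaussian kernel
    Rs = np.stack([gaussian_filter(r, sigma) for r in R])
    # pixel explaining the maximum variance across the buffer
    y0, x0 = np.unravel_index(Rs.var(axis=0).argmax(), (H, W))
    ys = slice(max(y0 - patch, 0), min(y0 + patch + 1, H))
    xs = slice(max(x0 - patch, 0), min(x0 + patch + 1, W))
    P = R[:, ys, xs].reshape(len(R), -1)     # residual restricted to the patch
    # local rank-1 NMF by alternating nonnegative least squares
    c_new = np.maximum(P.mean(axis=1), 0) + 1e-12
    for _ in range(n_iter):
        a_new = np.maximum(P.T @ c_new, 0) / (c_new @ c_new)
        c_new = np.maximum(P @ a_new, 0) / (a_new @ a_new + 1e-12)
    # a_new is flattened over the patch; reshape with (ys, xs) to view it
    return a_new, c_new, (ys, xs)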
To limit false positives, the candidate component is screened for quality. First, to prevent noise overfitting, the shape anew must be significantly correlated (e.g., θs ∼ 0.8− 0.9) to the residual buffer averaged over time and restricted to the spatial extent of anew. Moreover, if anew significantly overlaps with any of the existing components, then its temporal component cnew must not be highly correlated with the corresponding temporal components; otherwise we reject it as a possible duplicate of an existing component. Once a new component is accepted, [A,b], [C; f ] are augmented with anew and cnew respectively, and the sufficient statistics are updated as follows:
W_t = [ W_t,  (1/t) Y_buf c_new^⊤ ],
M_t = (1/t) [ t M_t            C̃_buf c_new^⊤
              c_new C̃_buf^⊤    ‖c_new‖²      ],    (5)
where Ybuf, C̃buf denote the matrices Y, [C; f ], restricted to the last lb frames that the buffer stores. This process is repeated until no new components are accepted, at which point the next frame is read and processed. The whole online procedure is described in Algorithm 1; the supplement includes pseudocode description of all the referenced routines.
Initialization: To initialize our algorithm we use the CNMF algorithm on a short initial batch of data of length Tb, (e.g., Tb = 1000). The sufficient statistics are initialized from the components that the offline algorithm finds according to (3). To ensure that new components are also initialized in the darker parts of the FOV, each data frame is normalized with the (running) mean for every pixel, during both the offline and the online phases.
Algorithmic Speedups: Several algorithmic and computational schemes are employed to boost the speed of the algorithm and make it applicable to real-time large-scale experiments. In [19] block coordinate descent is used to update the factors A, warm started at the value from the previous iteration. The same trick is used here not only for A, but also for C, since the calcium traces are continuous and typically change slowly. Moreover, the temporal traces of components that do not spatially overlap with each other can be updated simultaneously in vector form; we use a simple greedy scheme to partition the components into spatially non-overlapping groups.
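A sketch of such a greedy first-fit partition (names ours); traces of the components within each group can then be updated simultaneously in vector form:

import numpy as np

def determine_groups(A):
    # A: (d, K) spatial footprints; two components overlap if their
    # supports share at least one pixel
    K = A.shape[1]
    S = (A > 0).astype(float)
    overlaps = (S.T @ S) > 0
    groups = []   # each group is a list of mutually non-overlapping components
    for k in range(K):
        for g in groups:
            if not any(overlaps[k, j] for j in g):
                g.append(k)   # first group with no spatial conflict
                break
        else:
            groups.append([k])
    return groups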
Since neurons’ shapes are not expected to change at a fast timescale, updating their values (i.e., recomputing A and b) is not required at every timepoint; in practice we update every lb timesteps.
Algorithm 1 ONACID
Require: Data matrix Y, initial estimates A, b, C, f, S, current number of components K, current timestep t′, rest of parameters.
 1: W = Y[:, 1 : t′] C^⊤ / t′
 2: M = C C^⊤ / t′                                       ▷ Initialize sufficient statistics
 3: G = DETERMINEGROUPS([A, b], K)                       ▷ Alg. S1-S2
 4: R_buf = [Y − [A, b][C; f]][:, t′ − l_b + 1 : t′]     ▷ Initialize residual buffer
 5: t = t′
 6: while there is more data do
 7:     t ← t + 1
 8:     y_t ← ALIGNFRAME(y_t, b f_{t−1})                 ▷ [16]
 9:     [c_t; f_t] ← UPDATETRACES([A, b], [c_{t−1}; f_{t−1}], y_t, G)    ▷ Alg. S3
10:     C, S ← OASIS(C, γ, s_min, λ)                     ▷ [11]
11:     [A, b], [C, f], K, G, R_buf, W, M ←
12:         DETECTNEWCOMPONENTS([A, b], [C, f], K, G, R_buf, y_t, W, M)  ▷ Alg. S4
13:     R_buf ← [R_buf[:, 2 : l_b], y_t − A c_t − b f_t] ▷ Update residual buffer
14:     if mod(t − t′, l_b) = 0 then                     ▷ Update W, M, [A, b] every l_b timesteps
15:         W, M ← UPDATESUFFSTATISTICS(W, M, y_t, [c_t; f_t])           ▷ Equation (3)
16:         [A, b] ← UPDATESHAPES(W, M, [A, b])          ▷ Alg. S5
17: return A, b, C, f, S
Additionally, the sufficient statistics Wt,Mt are only needed for updating the estimates of [A,b] so they can be updated only when required. Motion correction can be sped up by estimating the motion only on a small (active) contiguous part of the FOV. Finally, as shown in [10], spatial decimation can bring significant speed benefits without compromising the quality of the results.
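For instance, spatial decimation by a factor q can be implemented as simple block averaging (a sketch; q = 2 halves each spatial dimension):

import numpy as np

def decimate(movie, q=2):
    # movie: (T, H, W); crop to multiples of q, then average q-by-q blocks
    T, H, W = movie.shape
    cropped = movie[:, :H - H % q, :W - W % q]
    return cropped.reshape(T, H // q, q, W // q, q).mean(axis=(2, 4))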
Software: OnACID is implemented in Python and is available at https://github.com/simonsfoundation/caiman as part of the CaImAn package [13].
3 Results
Benchmarking on simulated data: To compare to ground truth spike trains, we simulated a 2000 frame dataset taken at 30Hz over a 256×256 pixel wide FOV containing 400 "donut" shaped neurons with Poisson spike trains (see supplement for details). OnACID was initialized on the first 500 frames. During initialization, 265 active sources were accurately detected (Fig. S2). After the full 2000 frames, the algorithm had detected and tracked all active sources, plus one false positive (Fig. 2A).
After detecting a neuron, we need to extract its spikes with a short time-lag, to enable interesting closed-loop experiments. To quantify performance we measured the correlation of the inferred spike train with the ground truth (Fig. 2B). We varied the lag in the online estimator, i.e., the number of future samples observed before assigning a spike at time zero. Lags of 2-5 already yield results similar to the solution with unrestricted lag. A further requirement for online closed-loop experiments is that the computational processing is fast enough. To balance the computational load over frames, we distributed the shape update over the frames, while still updating each neuron every 30 frames on average. Because the shape update is the last step of the loop in Algorithm 1, we keep track of the time already spent in the iteration and increase or decrease the number of updated neurons accordingly. In this way the frame processing rate always remained higher than 30Hz (Fig. 2C).
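One way to implement this load balancing (a sketch; the exact bookkeeping is our assumption, the paper does not spell it out):

import time

FRAME_BUDGET = 1.0 / 30.0    # seconds available per frame at 30Hz

def budgeted_shape_updates(due_queue, update_one_shape, t_start):
    # update as many queued neuron shapes as the remaining budget allows;
    # leftovers stay queued, so each neuron is still visited every ~30 frames
    n_updated = 0
    while due_queue and (time.perf_counter() - t_start) < FRAME_BUDGET:
        update_one_shape(due_queue.pop(0))
        n_updated += 1
    return n_updated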
Application to in vivo 2p mouse hippocampal data: Next we considered a larger scale (90K frames, 480×480 pixels) real 2p calcium imaging dataset taken at 30Hz (i.e., 50 minute experiment). Motion artifacts were corrected prior to the analysis described below. The online algorithm was initialized on the first 1000 frames of the dataset using a Python implementation of the CNMF algorithm found in the CaImAn package [13]. During initialization 139 active sources were detected; by the end of all 90K frames, 727 active sources had been detected and tracked (5 of which were discarded due to their small size).
Benchmarking against offline processing and manual annotations: We collected manual annotations from two independent labelers who were instructed to find round or donut shaped neurons of similar size using the ImageJ Cell Magic Wand tool [31], given i) a movie obtained by removing a running 20th percentile (as a crude background approximation) and downsampling in time by a factor of 10, and ii) the max-correlation image. The goal of this pre-processing was to suppress silent cells and promote active ones. The labelers found 872 and 880 ROIs, respectively. We also compared with the CNMF algorithm applied to the whole dataset, which found 904 sources (805 after filtering for size).
To quantify performance we used a precision/recall framework similar to [2]. As a distance metric between two cells we used the Jaccard distance, and the pairing between different annotations was computed using the Hungarian algorithm, where matches with distance > 0.7 were discarded4. Table 1 summarizes the results within the precision/recall framework. The online algorithm not only matches but outperforms the offline approach of CNMF, reaching high performance values (F1 = 0.79 and 0.78 against the two manual annotations, as opposed to 0.71 against both annotations for CNMF). The two annotations matched closely with each other (F1 = 0.89), indicating high reliability, whereas OnACID vs. CNMF also produced a high score (F1 = 0.79), suggesting significant overlap in the mismatches between the two algorithms against manual annotations.
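A sketch of this matching procedure using SciPy's Hungarian-algorithm implementation (treating one annotation as detections and the other as ground truth; names ours):

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_rois(det_masks, gt_masks, max_dist=0.7):
    # det_masks: (n_det, d) and gt_masks: (n_gt, d) boolean flattened masks
    inter = det_masks.astype(float) @ gt_masks.astype(float).T
    union = det_masks.sum(1)[:, None] + gt_masks.sum(1)[None, :] - inter
    dist = 1.0 - inter / np.maximum(union, 1)       # Jaccard distance
    rows, cols = linear_sum_assignment(dist)        # optimal pairing
    keep = dist[rows, cols] <= max_dist             # discard poor matches
    tp = int(keep.sum())
    precision, recall = tp / len(det_masks), tp / len(gt_masks)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return list(zip(rows[keep], cols[keep])), precision, recall, f1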
Fig. 3 offers a more detailed view, where contour plots of the detected components are superimposed on the max-correlation image for the online (Fig. 3A) and offline (Fig. 3B) algorithms (white) and the annotations of Labeler 1 (red), restricted to a 200×200 pixel part of the FOV. Annotations of matches and mismatches between the online algorithm and the two labelers, as well as between the two labelers in the entire FOV, are shown in Figs. S3-S8. For the automated procedures, binary masks and contour plots were constructed by thresholding the spatial footprint of each component at a level equal to 0.2 times its maximum value. A close inspection of the matches between the online algorithm and the manual annotation (Fig. 3A-left) indicates that neurons with a strong footprint in the max-correlation image (indicating calcium transients with high amplitude compared to noise and background/neuropil activity) are reliably detected, despite the high neuron density and level of overlap. On the other hand, mismatches (Fig. 3B-left) can sometimes be attributed to shape mismatches, manually selected components with no signature in the max-correlation image (indicating faint or possibly unclear activity) that are not detected by the online algorithm (false negatives), or small partially visible processes detected by OnACID but ignored by the labelers ("false" positives).
4Note that the Cell Magic Wand Tool, by construction, tends to select circular ROI shapes, whereas the results of the online algorithm do not pose restrictions on the shapes. As a result the computed Jaccard distances tend to be overestimated. This explains our choice of a seemingly high mismatch threshold.
Fig. 3C shows examples of the traces from three selected neurons. OnACID can detect and track neurons with very sparse spiking over the course of the entire 50 minute experiment (Fig. 3C-top), and produce traces that are highly correlated with their offline counterparts. To examine the quality of the inferred traces (where ground truth collection at such scale is both very strenuous and severely impeded by the presence of background signals and neuropil activity), we compared the traces between the online algorithm and the CNMF approach on matched pairs of components. Fig. 3D shows the empirical cumulative distribution function (CDF) of the correlation coefficients from this comparison. The majority of the coefficients attain values close to 1, suggesting that the online algorithm can detect new neurons once they become active and then reliably track their activity.
OnACID is faster than real time on average: In addition to being more accurate, OnACID is also considerably faster, as it required ∼27 minutes, i.e., ∼2× faster than real time on average, to analyze the full dataset (2 minutes for initialization and 25 for the online processing) as opposed to ∼1.5 hours for the offline approach and ∼10 hours for each of the annotators (who only select ROIs). Fig. 3E illustrates the time consumption of the various steps. In the majority of the frames, where no spatial shapes are being updated and no new neurons are being incorporated, OnACID processing speed exceeds the data rate of 30Hz (Fig. 3E-top), and this processing time scales only mildly with the inclusion of new neurons. The cost of updating shapes and sufficient statistics per neuron is also very low (< 1ms), and only scales mildly with the number of existing neurons (Fig. 3E-middle). As argued before, this cost can be distributed among all the frames while maintaining faster than real time processing rates. The expensive step appears when detecting and including one or possibly more new neurons in the algorithm (Fig. 3E-bottom). Although this occurs only sporadically, several speedups can potentially be employed here to achieve faster-than-real-time processing at every frame (see also Discussion section), which would facilitate zero-lag closed-loop experiments.
Application to in vivo 2p mouse parietal cortex data: As a second application to 2p data we used a 116,000 frame dataset, taken at 30Hz over a 512×512 FOV (64min long). The first 3000 frames were used for initialization during which the CNMF algorithm found 442 neurons, before switching to OnACID, which by the end of the experiment found a total of 752 neurons (734 after filtering for size). Compared to two independent manual annotations of 928 and 875 ROIs respectively, OnACID achieved F1 = 0.76, 0.79 significantly outperforming CNMF (F1 = 0.65, 0.66 respectively). The matches and mismatches between OnACID and Labeler 1 on a 200×200 pixel part of the FOV are shown in Fig. 4A. Full FOV pairings as well as precision/recall metrics are given in Table 2.
For this dataset, rigid motion correction was also performed according to the simple method of aligning each frame to the denoised (and registered) background from the previous frame. Fig. 4B shows that this approach produced strikingly similar results to an offline template based, rigid motion correction method [25]. The difference in the displacements produced by the two methods was less than 1 pixel for all 116,000 frames with standard deviations 0.11 and 0.12 pixel for the x and y directions, respectively. In terms of timing, OnACID processed the dataset in 48 minutes, again faster than real time on average. This also includes the time needed for motion correction, which on average took 5ms per frame (a bit less than 10 minutes in total).
4 Discussion - Future Work
Although at first striking, the superior performance of OnACID compared to offline CNMF, for the datasets presented in this work, can be attributed to several factors. Calcium transient events are localized both in space (spatial footprint of a neuron) and in time (typically 0.3-1s for genetic indicators). By looking at a short rolling buffer OnACID is able to more robustly detect activity compared to offline approaches that look at all the data simultaneously. Moreover, OnACID searches for new activity in the residual buffer that excludes the activity of already detected neurons, making it easier to detect new overlapping components. Finally, offline CNMF requires the a priori specification of the number of components, making it more prone to either false positive or false negative components.
For both the datasets presented above, the analysis was done using the same space correlation threshold θs = 0.9. This strict choice leads to results with high precision and lower recall (see Tables 1 and 2). Results can be moderately improved by allowing a second pass of the data that can identify neurons that were initially not selected. Moreover, by relaxing the threshold the discrepancy between the precision and recall scores can be reduced, with only marginal modifications to the F1 scores (data not shown).
Our current implementation performs all processing serially. In principle, significant speed gains can be obtained by performing computations that are not needed at each timestep (updating shapes and sufficient statistics) or that occur only sporadically (incorporating a new neuron) in a parallel thread with shared memory. Moreover, different online dictionary learning algorithms that do not require the solution of an inverse problem at each timestep can potentially further speed up our framework [17].
For detecting centroids of new sources OnACID examines a static image obtained by computing the variance across time of the spatially smoothed residual buffer. While this approach works very well in practice, it effectively favors shapes looking similar to a pre-defined Gaussian blob (when spatially smoothed). Different approaches for detecting neurons in static images could possibly be used here, e.g., [22], [2], [29], [27].
Apart from facilitating closed-loop behavioral experiments and rapid general calcium imaging data analysis, our online pipeline can be potentially employed to future, optical-based, brain computer interfaces [6, 21] where high quality real-time processing is critical to their performance. These directions will be pursued in future work.
Acknowledgments
We thank Sue Ann Koay, Jeff Gauthier and David Tank (Princeton University) for sharing their cortex and hippocampal data with us. We thank Lindsey Myers, Sonia Villani and Natalia Roumelioti for providing manual annotations. We thank Daniel Barabasi (Cold Spring Harbor Laboratory) for useful discussions. AG, DC, and EAP were internally funded by the Simons Foundation. Additional support was provided by SNSF P300P2_158428 (JF), and NIH BRAIN Initiative R01EB22913, DARPA N66001-15-C-4032, IARPA MICRONS D16PC00003 (LP). | 1. What is the focus of the paper regarding calcium imaging data analysis?
2. What are the strengths of the proposed approach, particularly in terms of its components such as motion artifact removal, source extraction, and activity denoising and deconvolution?
3. Do you have any concerns or questions regarding the methodology, such as the motion correction techniques?
4. How does the reviewer assess the quality and clarity of the writing in the paper?
5. What are some minor comments or suggestions for improvements in the paper? | Review | Review
The authors present an online analysis pipeline for analyzing calcium imaging data, including motion artifact removal, source extraction, and activity denoising and deconvolution. The authors apply their technique to two 2-photon calcium imaging datasets in mouse.
The presented work is of high quality. The writing is clear and does a great job of explaining the method as well as how it relates to previous work. The results are compelling, and the authors compare to a number of benchmarks, including human annotation.
I encourage the authors to release source code for their analysis pipeline (I did not see a reference to the source code in the paper).
Minor comments
-----------------
- I was confused by the Methods section on motion correction. A few different methods are proposed: which is the one actually used in the paper? Are the others proposed as possible extensions/alternatives?
- Unless I am mistaken, the claim that "OnACID is faster than real time on average" depends on the details of the dataset, namely the spatial and temporal resolution of the raw data? Perhaps this can be clarified in the text.
- fig 3E: change the y-labels from "Time [ms]" to "Time per frame [ms]" (again, this depends on the spatial resolution of the frame?)
- line 137: are the units for T_b in frames or seconds? |
NIPS | Title
OnACID: Online Analysis of Calcium Imaging Data in Real Time | 1. What is the main contribution of the paper in terms of ideas and innovations?
2. What are the concerns regarding the novelty and advancement of the proposed method compared to prior works?
3. How reliable are the manual annotations, and what other labels could be used to improve their accuracy?
4. How well does the method handle drifting in the z-axis, and what information is needed to understand its performance in vivo?
5. What is the current implementation's latency, and how could it be optimized for real-time applications?
6. Are there any additional results or visualizations that could support the paper's findings and enhance its impact? | Review | Review
This paper describes a framework for online motion correction and signal extraction for calcium imaging data. This work combines ideas from several previous studies, and introduces a few innovations as well.
The most pressing concern here is that there isn't a core new idea beyond the CNMF and OASIS algorithms. What is new here is the method for adding a new component, the initialization method, and the manually labeled datasets. This is likely quite a useful tool for biologists, but the existence of a mathematical, computational, or scientific advance is not so clear.
Additionally, it's not so clear why we should trust the manual annotations. Ideally neurons would be identified by an additional unambiguous label (mCherry, DAPI, etc.). This, together with the relative similarity of CNMF and the current method, calls into question how much of an advance has been made here.
The in vivo datasets need to be more clearly described. For example, which calcium indicator was used?
What happens if and when the focal plane drifts slowly along the z axis?
It's also frequently mentioned that this is an online algorithm, but it's not clear what sort of latency can actually be achieved with the current implementation, or how it could be linked up with existing microscope software. If this is only possible in principle, the paper should stand better on its conceptual or computational merits.
The spatial profiles extracted for individual neurons should also be shown.
NIPS | Title
Depth-Limited Solving for Imperfect-Information Games
Abstract
A fundamental challenge in imperfect-information games is that states do not have well-defined values. As a result, depth-limited search algorithms used in single-agent settings and perfect-information games do not apply. This paper introduces a principled way to conduct depth-limited solving in imperfect-information games by allowing the opponent to choose among a number of strategies for the remainder of the game at the depth limit. Each one of these strategies results in a different set of values for leaf nodes. This forces an agent to be robust to the different strategies an opponent may employ. We demonstrate the effectiveness of this approach by building a master-level heads-up no-limit Texas hold’em poker AI that defeats two prior top agents using only a 4-core CPU and 16 GB of memory. Developing such a powerful agent would have previously required a supercomputer.
1 Introduction
Imperfect-information games model strategic interactions between agents with hidden information. The primary benchmark for this class of games is poker, specifically heads-up no-limit Texas hold’em (HUNL), in which Libratus defeated top humans in 2017 [6]. The key breakthrough that led to superhuman performance was nested solving, in which the agent repeatedly calculates a finer-grained strategy in real time (for just a portion of the full game) as play proceeds down the game tree [5, 27, 6].
However, real-time subgame solving was too expensive for Libratus in the first half of the game because the portion of the game tree Libratus solved in real time, known as the subgame, always extended to the end of the game. Instead, for the first half of the game Libratus pre-computed a fine-grained strategy that was used as a lookup table. While this pre-computed strategy was successful, it required millions of core hours and terabytes of memory to calculate. Moreover, in deeper sequential games the computational cost of this approach would be even more expensive because either longer subgames or a larger pre-computed strategy would need to be solved. A more general approach would be to solve depth-limited subgames, which may not extend to the end of the game. These could be solved even in the early portions of a game.
The poker AI DeepStack does this using a technique similar to nested solving that was developed independently [27]. However, while DeepStack defeated a set of non-elite human professionals in HUNL, it never defeated prior top AIs despite using over one million core hours to train the agent, suggesting its approach may not be sufficiently efficient in domains like poker. We discuss this in more detail in Section 7. This paper introduces a different approach to depth-limited solving that defeats prior top AIs and is computationally orders of magnitude less expensive.
When conducting depth-limited solving, a primary challenge is determining what values to substitute at the leaf nodes of the depth-limited subgame. In perfect-information depth-limited subgames, the value substituted at leaf nodes is simply an estimate of the state’s value when all players play an
32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.
equilibrium [35, 33]. For example, this approach was used to achieve superhuman performance in backgammon [39], chess [9], and Go [36, 37]. The same approach is also widely used in single-agent settings such as heuristic search [30, 24, 31, 15]. Indeed, in single-agent and perfect-information multi-agent settings, knowing the values of states when all agents play an equilibrium is sufficient to reconstruct an equilibrium. However, this does not work in imperfect-information games, as we demonstrate in the next section.
2 The Challenge of Depth-Limited Solving in Imperfect-Information Games
In imperfect-information games (also referred to as partially-observable games), an optimal strategy cannot be determined in a subgame simply by knowing the values of states (i.e., game-tree nodes) when all players play an equilibrium strategy. A simple demonstration is in Figure 1a, which shows a sequential game we call Rock-Paper-Scissors+ (RPS+). RPS+ is identical to traditional Rock-Paper-Scissors, except if either player plays Scissors, the winner receives 2 points instead of 1 (and the loser loses 2 points). Figure 1a shows RPS+ as a sequential game in which P1 acts first but does not reveal the action to P2 [7, 13]. The optimal strategy (Minmax strategy, which is also a Nash equilibrium in two-player zero-sum games) for both players in this game is to choose Rock and Paper each with 40% probability, and Scissors with 20% probability. In this equilibrium, the expected value to P1 of choosing Rock is 0, as is the value of choosing Scissors or Paper. In other words, all the red states in Figure 1a have an expected value of 0 for P1 in equilibrium.
In the RPS+ example, the core problem is that we incorrectly assumed P2 would always play a fixed strategy. If indeed P2 were to always play Rock, Paper, and Scissors with probability 〈0.4, 0.4, 0.2〉, then P1 could choose any arbitrary strategy and receive an expected value of 0. However, by assuming P2 is playing a fixed strategy, P1 may not find a strategy that is robust to P2 adapting. In reality, P2’s optimal strategy depends on the probability that P1 chooses Rock, Paper, and Scissors. In general, in imperfect-information games a player’s optimal strategy at a decision point depends on the player’s belief distribution over states as well as the strategy of all other agents beyond that decision point.
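These equilibrium values are easy to verify numerically; a minimal check (the payoff matrix follows directly from the RPS+ rules above):

```python
import numpy as np

# P1's payoff matrix for RPS+; rows = P1's action, cols = P2's action,
# ordered (Rock, Paper, Scissors). Scissors outcomes are worth 2 points.
A = np.array([[ 0, -1,  2],
              [ 1,  0, -2],
              [-2,  2,  0]])
sigma2 = np.array([0.4, 0.4, 0.2])  # P2's equilibrium mixture
print(A @ sigma2)                   # [0. 0. 0.]: every P1 action has expected value 0
```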
In this paper we introduce a method for depth-limited solving that ensures a player is robust to such opponent adaptations. Rather than simply substitute a single state value at a depth limit, we instead allow the opponent one final choice of action at the depth limit, where each action corresponds to a strategy the opponent will play in the remainder of the game. The choice of strategy determines the value of the state. The opponent does not make this choice in a way that is specific to the state (in which case he would trivially choose the maximum value for himself). Instead, naturally, the opponent must make the same choice at all states that are indistinguishable to him. We prove that if the opponent is given a choice between a sufficient number of strategies at the depth limit, then any solution to the depth-limited subgame is part of a Nash equilibrium strategy in the full game. We also show experimentally that when only a few choices are offered (for computational speed), performance of the method is extremely strong.
3 Notation and Background
In an imperfect-information extensive-form game there is a finite set of players, P. A state (also called a node) is defined by all information of the current situation, including private knowledge known to only one player. A unique player P(h) acts at state h. H is the set of all states in the game tree. The state h′ reached after an action is taken in h is a child of h, represented by h · a = h′, while h is the parent of h′. If there exists a sequence of actions from h to h′, then h is an ancestor of h′ (and h′ is a descendant of h), represented as h ⊏ h′. Z ⊆ H are terminal states for which no actions are available. For each player i ∈ P, there is a payoff function ui : Z → R. If P = {1, 2} and u1 = −u2, the game is two-player zero-sum. In this paper we assume the game is two-player zero-sum, though many of the ideas extend to general sum and more than two players.
Imperfect information is represented by information sets (infosets) for each player i ∈ P. For any infoset I belonging to player i, all states h, h′ ∈ I are indistinguishable to player i. Moreover, every non-terminal state h ∈ H belongs to exactly one infoset for each player i. A strategy σi(I) (also known as a policy) is a probability vector over actions for player i in infoset I. The probability of a particular action a is denoted by σi(I, a). Since all states in an infoset belonging to player i are indistinguishable, the strategies in each of them must be identical. We define σi to be a strategy for player i in every infoset in the game where player i acts. A strategy is pure if all probabilities in it are 0 or 1. All strategies are a linear combination of pure strategies. A strategy profile σ is a tuple of strategies, one for each player. The strategy of every player other than i is represented as σ−i. ui(σi, σ−i) is the expected payoff for player i if all players play according to the strategy profile 〈σi, σ−i〉. The value to player i at state h given that all players play according to strategy profile σ is defined as $v^\sigma_i(h)$, and the value to player i at infoset I is defined as $v^\sigma_i(I) = \sum_{h \in I} p(h)\, v^\sigma_i(h)$, where p(h) is player i's believed probability that they are in state h, conditional on being in infoset I, based on the other players' strategies and chance's probabilities.
A best response to σ−i is a strategy BR(σ−i) such that $u_i(BR(\sigma_{-i}), \sigma_{-i}) = \max_{\sigma'_i} u_i(\sigma'_i, \sigma_{-i})$. A Nash equilibrium σ∗ is a strategy profile where every player plays a best response: $\forall i,\ u_i(\sigma^*_i, \sigma^*_{-i}) = \max_{\sigma'_i} u_i(\sigma'_i, \sigma^*_{-i})$ [29]. A Nash equilibrium strategy for player i is a strategy σ∗i that is part of any Nash equilibrium. In two-player zero-sum games, if σi and σ−i are both Nash equilibrium strategies, then 〈σi, σ−i〉 is a Nash equilibrium. A depth-limited imperfect-information subgame, which we refer to simply as a subgame, is a contiguous portion of the game tree that does not divide infosets. Formally, a subgame S is a set of states such that for all h ∈ S, if h ∈ Ii and h′ ∈ Ii for some player i, then h′ ∈ S. Moreover, if x ∈ S and z ∈ S and x ⊏ y ⊏ z, then y ∈ S. If h ∈ S but no descendant of h is in S, then h is a leaf node. Additionally, the infosets containing h are leaf infosets. Finally, if h ∈ S but no ancestor of h is in S, then h is a root node and the infosets containing h are root infosets.
4 Multi-Valued States in Imperfect-Information Games
In this section we describe our new method for depth-limited solving in imperfect-information games, which we refer to as multi-valued states. Our general approach is to first precompute an approximate Nash equilibrium for the entire game. We refer to this precomputed strategy profile as a blueprint strategy. Since the blueprint is precomputed for the entire game, it is likely just a coarse approximation of a true Nash equilibrium. Our goal is to compute a better approximation in real time for just a depth-limited subgame S that we find ourselves in during play. For the remainder of this paper, we assume that player P1 is attempting to approximate a Nash equilibrium strategy in S.
Let σ∗ be an exact Nash equilibrium. To present the intuition for our approach, we begin by considering what information about σ∗ would, in theory, be sufficient in order to compute a P1 Nash equilibrium strategy in S. For ease of understanding, when considering the intuition for multi-valued states we suggest the reader first focus on the case where S is rooted at the start of the game (that is, no prior actions have occurred).
As explained in Section 2, knowing the values of leaf nodes in S when both players play according to σ∗ (that is, $v^{\sigma^*}_i(h)$ for leaf node h and player Pi) is insufficient to compute a Nash equilibrium in S (even though this is sufficient in perfect-information games), because it assumes P2 would not adapt their strategy outside S. But what if P2 could adapt? Specifically, suppose hypothetically that P2 could choose any strategy in the entire game, while P1 could only play according to σ∗1 outside of S. In this case, what strategy should P1 choose in S? Since σ∗1 is a Nash equilibrium strategy and P2 can choose any strategy in the game (including a best response to P1's strategy), by definition P1 cannot do better than playing σ∗1 in S. Thus, P1 should play σ∗1 (or some equally good Nash equilibrium strategy) in S.
Another way to describe this setup is that upon reaching a leaf node h in infoset I in subgame S, rather than simply substituting $v^{\sigma^*}_2(h)$ (which assumes P2 plays according to σ∗2 for the remainder of the game), P2 could instead choose any mixture of pure strategies for the remainder of the game. So if there are N possible pure strategies following I, P2 would choose among N actions upon reaching I, where action n would correspond to playing pure strategy $\sigma^n_2$ for the remainder of the game. Since this choice is made separately at each infoset I and since P2 may mix between pure strategies, this allows P2 to choose any strategy below S.
Since the choice of action would define a P2 strategy for the remainder of the game, and since P1 is known to play according to σ∗1 outside S, the chosen action could immediately reward the expected value $v_i^{\langle\sigma^*_1, \sigma^n_2\rangle}(h)$ to Pi. Therefore, in order to reconstruct a P1 Nash equilibrium in S, it is sufficient to know for every leaf node the expected value of every pure P2 strategy against σ∗1 (stated formally in Proposition 1). This is in contrast to perfect-information games, in which it is sufficient to know for every leaf node just the expected value of σ∗2 against σ∗1. Critically, it is not necessary to know the strategy σ∗1 itself, just the values of σ∗1 played against every pure opponent strategy in each leaf node. Proposition 1 adds the condition that we know $v_2^{\langle\sigma^*_1, BR(\sigma^*_1)\rangle}(I)$ for every root infoset I ∈ S. This condition is used if S does not begin at the start of the game. Knowledge of $v_2^{\langle\sigma^*_1, BR(\sigma^*_1)\rangle}(I)$ is needed to ensure that any strategy σ1 that P1 computes in S cannot be exploited by P2 changing their strategy earlier in the game. Specifically, we add the constraint that $v_2^{\langle\sigma_1, BR(\sigma^*_1)\rangle}(I) \le v_2^{\langle\sigma^*_1, BR(\sigma^*_1)\rangle}(I)$ for all P2 root infosets I. This makes our technique safe:

Proposition 1. Assume P1 has played according to Nash equilibrium strategy σ∗1 prior to reaching a depth-limited subgame S of a two-player zero-sum game. In order to calculate the portion of a P1 Nash equilibrium strategy that is in S, it is sufficient to know $v_2^{\langle\sigma^*_1, BR(\sigma^*_1)\rangle}(I)$ for every root P2 infoset I ∈ S and $v_1^{\langle\sigma^*_1, \sigma_2\rangle}(h)$ for every pure undominated P2 strategy σ2 and every leaf node h ∈ S.
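To make the construction concrete, the sketch below shows how a depth-limit leaf infoset could be evaluated during subgame solving once such value vectors have been stored; the data layout and names are our assumptions, not a prescribed API.

```python
import numpy as np

def leaf_infoset_value(states, beliefs, V, p2_mix):
    """P1's expected value at a P2 leaf infoset of the depth-limited subgame.
    states : the leaf nodes h in the infoset
    beliefs: P2's belief over those states (sums to 1)
    V      : dict h -> length-K array, V[h][k] = P1's value of h when P2 plays
             continuation strategy k against sigma_1^* (precomputed offline)
    p2_mix : P2's current mixture over the K continuation strategies
    """
    return sum(b * float(V[h] @ p2_mix) for h, b in zip(states, beliefs))
```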
Other safe subgame solving techniques have been developed in recent papers, but those techniques require solving to the end of the full game [7, 17, 28, 5, 6] (except one [27], which we will compare to in Section 7).
Of course, it is impractical to know the expected value in every state of every pure P2 strategy against σ∗1, especially since we do not know σ∗1 itself. To deal with this, we first compute a blueprint strategy σ̂∗ (that is, a precomputed approximate Nash equilibrium for the full game). Next, rather than consider every pure P2 strategy, we instead consider just a small number of different P2 strategies (that may or may not be pure). Indeed, in many complex games, the possible opponent strategies at a decision point can be approximately grouped into just a few “meta-strategies”, such as which highway lane a car will choose in a driving simulation. In our experiments, we find that excellent performance is obtained in poker with fewer than ten opponent strategies. In part, excellent performance is possible with a small number of strategies because the choice of strategy beyond the depth limit is made separately at each leaf infoset. Thus, if the opponent chooses between ten strategies at the depth limit, but makes this choice independently in each of 100 leaf infosets, then the opponent is actually choosing between $10^{100}$ different strategies. We now consider two questions. First, how do we compute the blueprint strategy σ̂∗1? Second, how do we determine the set of P2 strategies? We answer each of these in turn.
There exist several methods for constructing a blueprint. One option, which achieves the best empirical results and is what we use, involves first abstracting the game by bucketing together similar situations [19, 12] and then applying the iterative algorithm Monte Carlo Counterfactual Regret Minimization [22]. Several alternatives exist that do not use a distinct abstraction step [3, 16, 10]. The agent will never actually play according to the blueprint σ̂∗. It is only used to estimate $v^{\langle\sigma^*_1, \sigma_2\rangle}(h)$.
We now discuss two different ways to select a set of P2 strategies. Ultimately we would like the set of P2 strategies to contain a diverse set of intelligent strategies the opponent might play, so that P1’s solution in a subgame is robust to possible P2 adaptation. One option is to bias the P2 blueprint
strategy σ̂∗2 in a few different ways. For example, in poker the blueprint strategy should be a mixed strategy involving some probability of folding, calling, or raising. We could define a new strategy σ′2 in which the probability of folding is multiplied by 10 (and then all the probabilities renormalized). If the blueprint strategy σ̂∗ were an exact Nash equilibrium, then any such “biased” strategy σ′2 in which the probabilities are arbitrarily multiplied would still be a best response to σ̂∗1 . In our experiments, we use this biasing of the blueprint strategy to construct a set of four opponent strategies on the second betting round. We refer to this as the bias approach.
Another option is to construct the set of P2 strategies via self-play. The set begins with just one P2 strategy: the blueprint strategy σ̂∗2. We then solve a depth-limited subgame rooted at the start of the game and going to whatever depth is feasible to solve, giving P2 only the choice of this strategy at leaf infosets. That is, at leaf node h we simply substitute $v^{\hat\sigma^*}_i(h)$ for Pi. Let the P1 solution to this depth-limited subgame be σ1. We then approximate a P2 best response assuming P1 plays according to σ1 in the depth-limited subgame and according to σ̂∗1 in the remainder of the game. Since P1 plays according to this fixed strategy, approximating a P2 best response is equivalent to solving a Markov Decision Process, which is far easier to solve than an imperfect-information game. This P2 approximate best response is added to the set of strategies that P2 may choose at the depth limit, and the depth-limited subgame is solved again. This process repeats until the set of P2 strategies grows to the desired size. This self-generative approach bears some resemblance to the double oracle algorithm [26] and recent work on generation of opponent strategies in multi-agent RL [23]. In our experiments, we use this self-generative method to construct a set of ten opponent strategies on the first betting round. We refer to this as the self-generative approach.
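A high-level sketch of this loop is below; the two callables stand in for the subgame solver (e.g., CFR) and the MDP-style best-response computation.

```python
def self_generate_opponent_set(blueprint_p2, k, solve_depth_limited, best_response):
    """Grow a set of k opponent continuation strategies by iterated best response."""
    p2_set = [blueprint_p2]                   # start from the blueprint strategy
    while len(p2_set) < k:
        sigma1 = solve_depth_limited(p2_set)  # P1 solution vs. the current set
        p2_set.append(best_response(sigma1))  # approximate P2 best response (MDP)
    return p2_set
```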
One practical consideration is that since σ̂∗1 is not an exact Nash equilibrium, a generated P2 strategy σ2 may do better than σ̂∗2 against σ̂∗1. In that case, P1 may play more conservatively than σ∗1 in a depth-limited subgame. To correct for this, one can balance the players by also giving P1 a choice between multiple strategies for the remainder of the game at the depth limit. Alternatively, one can “weaken” the generated P2 strategies so that they do no better than σ̂∗2 against σ̂∗1. Formally, if $v_2^{\langle\hat\sigma^*_1, \sigma_2\rangle}(I) > v_2^{\langle\hat\sigma^*_1, \hat\sigma^*_2\rangle}(I)$, we uniformly lower $v_2^{\langle\hat\sigma^*_1, \sigma_2\rangle}(h)$ for h ∈ I by $v_2^{\langle\hat\sigma^*_1, \sigma_2\rangle}(I) - v_2^{\langle\hat\sigma^*_1, \hat\sigma^*_2\rangle}(I)$. Another alternative (or additional) solution would be to simply reduce $v_2^{\langle\hat\sigma^*_1, \sigma_2\rangle}(h)$ for σ2 ≠ σ̂∗2 by some heuristic amount, such as a small percentage of the pot in poker.
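The weakening correction transcribes directly into code; the sketch below assumes state values already carry P2's beliefs, so infoset values are plain sums.

```python
def weaken(v_gen, v_bp, infosets):
    """Lower a generated strategy's values so it does no better than the blueprint.
    v_gen, v_bp : dict h -> belief-weighted v_2(h) for the generated / blueprint
                  P2 strategy played against the P1 blueprint
    infosets    : list of infosets, each a list of states h
    """
    for I in infosets:
        surplus = sum(v_gen[h] for h in I) - sum(v_bp[h] for h in I)
        if surplus > 0:
            for h in I:
                v_gen[h] -= surplus      # uniform lowering over all h in I
    return v_gen
```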
Once a P1 strategy σ̂∗1 and a set of P2 strategies have been generated, we need some way to calculate and store $v_2^{\langle\hat\sigma^*_1, \sigma_2\rangle}(h)$. Calculating the state values can be done by traversing the entire game tree once. However, that may not be feasible in large games. Instead, one can use Monte Carlo simulations to approximate the values. For storage, if the number of states is small (such as in the early part of the game tree), one could simply store the values in a table. More generally, one could train a function to predict the values corresponding to a state, taking as input a description of the state and outputting a value for each P2 strategy. Alternatively, one could simply store σ̂∗1 and the set of P2 strategies. Then, in real time, the value of a state could be estimated via Monte Carlo rollouts. We present results for both of these approaches in Section 6.
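For the rollout option, a minimal sketch (the `play_out` simulator is a placeholder supplied by the caller):

```python
def rollout_value(h, sigma1, sigma2_k, play_out, n=100):
    """Monte Carlo estimate of v^{<sigma1, sigma2_k>}(h) from n sampled playouts."""
    return sum(play_out(h, sigma1, sigma2_k) for _ in range(n)) / n
```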
5 Nested Solving of Imperfect-Information Games
We use the new idea discussed in the previous section in the context of nested solving, which is a way to repeatedly solve subgames as play descends down the game tree [5]. Whenever an opponent chooses an action, a subgame is generated following that action. This subgame is solved, and its solution determines the strategy to play until the next opponent action is taken.
Nested solving is particularly useful in dealing with large or continuous action spaces, such as an auction that allows any bid in dollar increments up to $10,000. To make these games feasible to solve, it is common to apply action abstraction, in which the game is simplified by considering only a few actions (both for ourselves and for the opponent) in the full action space. For example, an action abstraction might only consider bid increments of $100. However, if the opponent chooses an action that is not in the action abstraction (called an off-tree action), the optimal response to that opponent action is undefined.
Prior to the introduction of nested solving, it was standard to simply round off-tree actions to a nearby in-abstraction action (such as treating an opponent bid of $150 as a bid of $200) [14, 34, 11]. Nested solving allows a response to be calculated for off-tree actions by constructing and solving a subgame
that immediately follows that action. The goal is to find a strategy in the subgame that makes the opponent no better off for having chosen the off-tree action than an action already in the abstraction.
Depth-limited solving makes nested solving feasible even in the early game, so it is possible to play without acting according to a precomputed strategy or using action translation. At the start of the game, we solve a depth-limited subgame (using action abstraction) to whatever depth is feasible. This determines our first action. After every opponent action, we solve a new depth-limited subgame that attempts to make the opponent no better off for having chosen that action than an action that was in our previous subgame’s action abstraction. This new subgame determines our next action, and so on.
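Schematically, play then proceeds as in the sketch below, where every name is a placeholder for the corresponding component of the agent:

```python
def play_game(game, p2_strategy_set, solve_depth_limited, build_subgame):
    """Nested depth-limited solving: re-solve after every opponent action."""
    strategy = solve_depth_limited(game.root_subgame(), p2_strategy_set)
    while not game.is_over():
        if game.our_turn():
            game.act(strategy.sample(game.current_infoset()))
        else:
            a = game.observe_opponent_action()        # may be an off-tree action
            subgame = build_subgame(game.state(), a)  # rooted just after action a
            strategy = solve_depth_limited(subgame, p2_strategy_set)
```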
6 Experiments
We conducted experiments on the games of heads-up no-limit Texas hold’em poker (HUNL) and heads-up no-limit flop hold’em poker (NLFH). Appendix B reminds the reader of the rules of these games. HUNL is the main large-scale benchmark for imperfect-information game AIs. NLFH is similar to HUNL, except the game ends immediately after the second betting round, which makes it small enough to precisely calculate best responses and Nash equilibria. Performance is measured in terms of mbb/g, which is a standard win rate measure in the literature. It stands for milli-big blinds per game and represents how many thousandths of a big blind (the initial money a player must commit to the pot) a player wins on average per hand of poker played.
6.1 Exploitability Experiments in No-Limit Flop Hold’em (NLFH)
Our first experiment measured the exploitability of our technique in NLFH. Exploitability of a strategy in a two-player zero-sum game is how much worse the strategy would do against a best response than a Nash equilibrium strategy would do against a best response. Formally, the exploitability of σ1 is $\min_{\sigma_2} u_1(\sigma^*_1, \sigma_2) - \min_{\sigma_2} u_1(\sigma_1, \sigma_2)$, where σ∗1 is a Nash equilibrium strategy.
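In matrix-game form this quantity is simple to compute; the snippet below reuses the RPS+ matrix from Section 2 as an illustration (it is not the NLFH computation):

```python
import numpy as np

def exploitability(A, sigma1, game_value=0.0):
    """Gap between the game value and sigma1's payoff vs. a best-responding P2."""
    return game_value - (sigma1 @ A).min()  # P2 best-responds by picking the worst column

A = np.array([[0, -1, 2], [1, 0, -2], [-2, 2, 0]])   # RPS+ payoffs for P1
print(exploitability(A, np.array([0.4, 0.4, 0.2])))  # ~0: equilibrium strategy
print(exploitability(A, np.array([1/3, 1/3, 1/3])))  # ~0.33: uniform is exploitable
```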
We considered the case of P1 betting 0.75× the pot at the start of the game, when the action abstraction only contains bets of 0.5× and 1× the pot. We compared our depth-limited solving technique to the randomized pseudoharmonic action translation (RPAT) [11], in which the bet of 0.75× is simply treated as either a bet of 0.5× or 1×. RPAT is the lowest-exploitability known technique for responding to off-tree actions that does not involve real-time computation.
We began by calculating an approximate Nash equilibrium in an action abstraction that does not include the 0.75× bet. This was done by running the CFR+ equilibrium-approximation algorithm [38] for 1,000 iterations, which resulted in less than 1 mbb/g of exploitability within the action abstraction. Next, values for the states at the end of the first betting round within the action abstraction were determined using the self-generative method discussed in Section 4. Since the first betting round is a small portion of the entire game, storing a value for each state in a table required just 42 MB.
To determine a P2 strategy in response to the 0.75× bet, we constructed a depth-limited subgame rooted after the 0.75× bet with leaf nodes at the end of the first betting round. The values of a leaf node in this subgame were set by first determining the in-abstraction leaf nodes corresponding to the exact same sequence of actions, except P1 initially bets 0.5× or 1× the pot. The leaf node values in the 0.75× subgame were set to the average of those two corresponding value vectors. When the end of the first betting round was reached and the board cards were dealt, the remaining game was solved using safe subgame solving.
Figure 2 shows how exploitability decreases as we add state values (that is, as we give P1 more best responses to choose from at the depth limit). When using only one state value at the depth limit (that is, assuming P1 would always play according to the blueprint strategy for the remainder of the game), it is actually better to use RPAT. However, after that our technique becomes significantly better and at 16 values its performance is close to having had the 0.75× action in the abstraction in the first place.
While one could have calculated a (slightly better) P2 strategy in response to the 0.75× bet by solving to the end of the game, that subgame would have been about 10,000× larger than the subgames solved in this experiment. Thus, depth-limited solving dramatically reduces the computational cost of nested subgame solving while giving up very little solution quality.
Figure 2: Exploitability of depth-limited solving in NLFH.
6.2 Experiments Against Top AIs in Heads-Up No-Limit Texas Hold’em (HUNL)
Our main experiment uses depth-limited solving to produce a master-level HUNL poker AI called Modicum using computing resources found in a typical laptop. We test Modicum against Baby Tartanian8 [4], the winner of the 2016 Annual Computer Poker Competition, and against Slumbot [18], the winner of the 2018 Annual Computer Poker Competition. Neither Baby Tartanian8 nor Slumbot uses real-time computation; their strategies are a precomputed lookup table. Baby Tartanian8 used about 2 million core hours and 18 TB of RAM to compute its strategy. Slumbot used about 250,000 core hours and 2 TB of RAM to compute its strategy. In contrast, Modicum used just 700 core hours and 16 GB of RAM to compute its strategy and can play in real time at the speed of human professionals (an average of 20 seconds for an entire hand of poker) using just a 4-core CPU. We now describe Modicum and provide details of its construction in Appendix A.
The blueprint strategy for Modicum was constructed by first generating an abstraction of HUNL using state-of-the-art abstraction techniques [12, 20]. Storing a strategy for this abstraction as 4-byte floats requires just 5 GB. This abstraction was approximately solved by running Monte Carlo Counterfactual Regret Minimization for 700 core hours [22].
HUNL consists of four betting rounds. We conduct depth-limited solving on the first two rounds by solving to the end of that round using MCCFR. Once the third betting round is reached, the remaining game is small enough that we solve to the end of the game using an enhanced form of CFR+ described in the appendix.
We generated 10 values for each state at the end of the first betting round using the self-generative approach. The first betting round was small enough to store all of these state values in a table using 240 MB. For the second betting round, we used the bias approach to generate four opponent best responses. The first best response is simply the opponent’s blueprint strategy. For the second, we biased the opponent’s blueprint strategy toward folding by multiplying the probability of fold actions by 10 and then renormalizing. For the third, we biased the opponent’s blueprint strategy toward checking and calling. Finally for the fourth, we biased the opponent’s blueprint strategy toward betting and raising. To estimate the values of a state when the depth limit is reached on the second round, we sample rollouts of each of the stored best-response strategies.
The performance of Modicum is shown in Table 1. For the evaluation, we used AIVAT to reduce variance [8]. Our new agent defeats both Baby Tartanian8 and Slumbot with statistical significance. For comparison, Baby Tartanian8 defeated Slumbot by 36 ± 12 mbb/g, Libratus defeated Baby Tartanian8 by 63 ± 28 mbb/g, and Libratus defeated top human professionals by 147 ± 77 mbb/g. In addition to head-to-head performance against prior top AIs, we also tested Modicum against two versions of Local Best Response (LBR) [25]. An LBR agent is given full access to its opponent’s full-game strategy and uses that knowledge to exactly calculate the probability the LBR agent is in each possible state. Given that probability distribution and a heuristic for how the opposing agent will play thereafter, the LBR agent chooses a best response action. LBR is a way to calculate a lower bound on exploitability and has been shown to be effective in exploiting agents that do not use real-time solving.
In the first version of LBR we tested against, the LBR agent was limited to either folding or betting 0.75× the pot on the first action, and thereafter was limited to either folding or calling. Modicum beat this version of LBR by 570 ± 42 mbb/g. The second version of LBR we tested against could bet 10 different amounts on the flop that Modicum did not include in its blueprint strategy. Much like the experiment in Section 6.1, this was intended to measure how vulnerable Modicum is to unanticipated bet sizes. The LBR agent was limited to betting 0.75× the pot for the first action of the game and calling for the remaining actions on the preflop. On the flop, the LBR agent could either fold, call, or bet 0.33 × 2^x times the pot for x ∈ {0, 1, ..., 10}. On the remaining rounds the LBR agent could either fold or call. Modicum beat this version of LBR by 1377 ± 115 mbb/g. In contrast, similar forms of LBR have been shown to defeat prior top poker AIs that do not use real-time solving by hundreds or thousands of mbb/g [25].
While our new agent is probably not as strong as Libratus, it was produced with less than 0.1% of the computing resources and memory, and is never vulnerable to off-tree opponent actions.
While the rollout method used on the second betting round worked well, rollouts may be significantly more expensive in deeper games. To demonstrate the generality of our approach, we also trained a deep neural network (DNN) to predict the values of states at the end of the second betting round as an alternative to using rollouts. The DNN takes as input a 34-float vector of features describing the state, and outputs four floats representing the values of the state for the four possible opponent strategies (represented as a fraction of the size of the pot). The DNN was trained using 180 million examples per player by optimizing the Huber loss with Adam [21], which we implemented using PyTorch [32]. In order for the network to run sufficiently fast on just a 4-core CPU, the DNN has just 4 hidden layers with 256 nodes in the first hidden layer and 128 nodes in the remaining hidden layers. This achieved a Huber loss of 0.02. Using a DNN rather than rollouts resulted in the agent beating Baby Tartanian8 by 2 ± 9 mbb/g. However, the average time taken using a 4-core CPU increased from 20 seconds to 31 seconds per hand. Still, these results demonstrate the generality of our approach.
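As a sketch, the network described above could be written as follows in PyTorch; the layer sizes, Huber loss, and Adam optimizer come from the text, while the ReLU activations are an assumption on our part.

```python
import torch.nn as nn
import torch.optim as optim

value_net = nn.Sequential(      # 34 input features -> 4 state values
    nn.Linear(34, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 4),          # one value per opponent strategy, in pot fractions
)
loss_fn = nn.SmoothL1Loss()     # Huber loss
optimizer = optim.Adam(value_net.parameters())
```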
7 Comparison to Prior Work
Section 2 demonstrated that in imperfect-information games, states do not have unique values and therefore the techniques common in perfect-information games and single-agent settings do not apply. This paper introduced a way to overcome this challenge by assigning multiple values to states. A different approach is to modify the definition of a “state” to instead be all players’ belief probability distributions over states, which we refer to as a joint belief state. This technique was previously used to develop the poker AI DeepStack [27]. While DeepStack defeated non-elite human professionals in HUNL, it was never shown to defeat prior top AIs even though it used over 1,000,000 core hours of computation. In contrast, Modicum defeated two prior top AIs with less than 1,000 core hours of computation. Still, there are benefits and drawbacks to both approaches, which we now describe in detail. The right choice may depend on the domain and future research may change the competitiveness of either approach.
A joint belief state is defined by a probability (belief) distribution for each player over states that are indistinguishable to the player. In poker, for example, a joint belief state is defined by each players’ belief about what cards the other players are holding. Joint belief states maintain some of the properties that regular states have in perfect-information games. In particular, it is possible to determine an optimal strategy in a subgame rooted at a joint belief state independently from the rest of the game. Therefore, joint belief states have unique, well-defined values that are not influenced by the strategies played in disjoint portions of the game tree. Given a joint belief state, it is also possible
to define the value of each root infoset for each player. In the example of poker, this would be the value of a player holding a particular poker hand given the joint belief state.
One way to do depth-limited subgame solving, other than the method we describe in this paper, is to learn a function that maps joint belief states to infoset values. When conducting depth-limited solving, one could then set the value of a leaf infoset based on the joint belief state at that leaf infoset.
One drawback is that because a player’s belief distribution partly defines a joint belief state, the values of the leaf infosets must be recalculated each time the strategy in the subgame changes. With the best domain-specific iterative algorithms, this would require recalculating the leaf infosets about 500 times. Monte Carlo algorithms, which are the preferred domain-independent method of solving imperfect-information games, may change the strategy millions of times in a subgame, which poses a problem for the joint belief state approach. In contrast, our multi-valued state approach requires only a single function call for each leaf node regardless of the number of iterations conducted.
Moreover, evaluating multi-valued states with a function approximator is cheaper and more scalable to large games than joint belief states. The input to a function that predicts the value of a multi-valued state is simply the state description (for example, the sequence of actions), and the output is several values. In our experiments, the input was 34 floats and the output was 4 floats. In contrast, the input to a function that predicts the values of a joint belief state is a probability vector for each player over the possible states they may be in. For example, in HUNL, the input is more than 2,000 floats and the output is more than 1,000 floats. The input would be even larger in games with more states per infoset.
Another drawback is that learning a mapping from joint belief states to infoset values is computationally more expensive than learning a mapping from states to a set of values. For example, Modicum required less than 1,000 core hours to create this mapping. In contrast, DeepStack required over 1,000,000 core hours to create its mapping. The increased cost is partly because computing training data for a joint belief state value mapping is inherently more expensive. The multi-valued states approach is learning the values of best responses to a particular strategy (namely, the approximate Nash equilibrium strategy σ̂∗1). In contrast, a joint belief state value mapping is learning the value of all players playing an equilibrium strategy given that joint belief state. As a rough guideline, computing an equilibrium is about 1,000× more expensive than computing a best response in large games [1].
On the other hand, the multi-valued state approach requires knowledge of a blueprint strategy that is already an approximate Nash equilibrium. A benefit of the joint belief state approach is that rather than simply learning best responses to a particular strategy, it is learning best responses against every possible strategy. This may be particularly useful in self-play settings where the blueprint strategy is unknown, because it may lead to increasingly more sophisticated strategies.
Another benefit of the joint belief state approach is that in many games (but not all) it obviates the need to keep track of the sequence of actions played. For example, in poker if there are two different sequences of actions that result in the same amount of money in the pot and all players having the same belief distribution over what their opponents’ cards are, then the optimal strategy in both of those situations is the same. This is similar to how in Go it is not necessary to know the exact sequence of actions that were played. Rather, it is only necessary to know the current configuration of the board (and, in certain situations, also the last few actions played).
A further benefit of the joint belief state approach is that its run-time complexity does not increase with the degree of precision other than needing a better (possibly more computationally expensive) function approximator. In contrast, for our algorithm the computational complexity of finding a solution to a depth-limited subgame grows linearly with the number of values per state.
8 Conclusions
We introduced a principled method for conducting depth-limited solving in imperfect-information games. Experimental results show that this leads to stronger performance than the best precomputed-strategy AIs in HUNL while using orders of magnitude less computational resources, and is also orders of magnitude more efficient than past approaches that use real-time solving. Additionally, the method exhibits low exploitability. In addition to using fewer resources, this approach broadens the applicability of nested real-time solving to longer games.
9 Acknowledgments
This material is based on work supported by the National Science Foundation under grants IIS1718457, IIS-1617590, and CCF-1733556, and the ARO under award W911NF-17-1-0082, as well as XSEDE computing resources provided by the Pittsburgh Supercomputing Center. We thank Thore Graepel, Marc Lanctot, David Silver, Ariel Procaccia, Fei Fang, and our anonymous reviewers for helpful inspiration, feedback, suggestions, and support. | 1. What is the main contribution of the paper regarding finding approximate Nash equilibria in two-player zero-sum games?
2. What are the strengths of the proposed method, particularly in its ability to achieve strong performance with constrained resources?
3. What are some weaknesses or limitations of the paper, such as the choice of strategies and the limited scope of the experiments?
4. How does the paper address the issue of imperfect information in depth-limited solving techniques?
5. Can you explain the key idea of the paper, which assumes that player 1's strategy can be reconstructed with depth-limited solving if at the leaves of a subgame, player 1's value is stored for every pure player 2 strategy?
6. How does the paper demonstrate the effectiveness of the proposed approach through empirical results?
7. What are some potential future directions for research related to this paper, such as exploring different heuristics and learning algorithms for generating a useful set of player 2 strategies? | Review | Review
This paper presents a method for finding approximate Nash equilibria in two player zero-sum games with imperfect information (using poker as the motivating example) using a depth-limited solving technique. Depth-limited techniques avoid the need to traverse an entire game tree by replacing nodes in the tree with an estimate of the expected value of their corresponding subtree (for given strategies). This approach doesn't work in imperfect information games because optimal strategies can't be reconstructed from values (this idea is clearly demonstrated in a simple rock-paper-scissors example in Section 2 of the paper). This paper presents a technique for avoiding this problem by building player 1's strategy in a depth-limited sub-game with respect to a set of candidate player 2 strategies (and their corresponding values) that define how player 2 may play in the subgame below the depth limit. It addresses the lack of depth-limited solving in Libratus [Brown & Sandholm 2017], which necessitated a pre-computed strategy that was extremely computationally expensive, and presents an alternative to DeepStack's [Moravčík et al. 2017] depth-limited solving technique.

# Quality

The key idea of this paper is that you can reconstruct player 1's Nash strategy with depth-limited solving if at the leaves of a subgame you store player 1's value for every pure player 2 strategy (as summarized by Proposition 1). This proposition doesn't appear immediately useful: it assumes we already have player 1's strategy, and the set of all pure strategies for player 2 grows at the same rate as the full search space, so it doesn't seem helpful to use depth limiting if you need to store an exponentially large object at every node. But the authors find that it is sufficient to instead use only a constant number of well-chosen strategies with an approximation of player 1's Nash strategy and still get strong performance.

Pros:
- Empirically they're able to get very good performance with very constrained resources. This is the strongest support of the claim that a small number of strategies is sufficient.
- There is potentially a large design space in choosing P2's strategies. I suspect there will be a lot of future work in this area.

Cons:
- Despite presenting two approaches for building the set of P2 strategies (bias and self-generating), the paper gives very little guidance on what constitutes a good set of strategies. I would expect performance to be very sensitive to your choice of strategies, but this isn't evaluated. The implementation used in the experiments uses the self-generative approach for the first betting round and the biased approach thereafter. Why? What happens if you just used the self-generated approach or the biased approach?
- The computational restriction to a 4-core CPU and 16 GB of memory makes for a flashy abstract, but I really would have liked to see what happens with a less arbitrarily constrained model. Do improvements in performance saturate as you keep increasing the size of the set of P2 strategies? What does the DNN performance look like if you used the largest model that you could fit on the GPU it was trained on instead of limiting it for fast CPU performance? How does the quality of the blueprint strategy affect performance? All of these questions could easily have been answered without needing massive compute resources, so it seems weird to limit oneself to the resources of a MacBook Air.
# Clarity

Overall I thought the paper was very clearly written and I enjoyed the simple examples shown in section 2 to describe the problems associated with depth limited search. There were a couple of ambiguous / misleading / etc. statements that could be improved:
- lines 147-150: the claim around 10^100 strategies seems very misleading. Surely what matters is not the size of the set of the potential number of independent choices, but the coverage of the subset you choose? Unless you can prove something about how the number of independent strategies relates to the quality of the solution I would remove this line.
- line 166 "Thus \sigma_2' certainly qualifies as an intelligent strategy to play" - by that definition any arbitrary strategy is "intelligent" because it has expected value zero against a Nash strategy. I'd argue that's just a property of a Nash equilibrium, not about how intelligent the strategy is...
- lines 358-359: I don't think there's a relationship between the computational complexity of computing a function (such as best response or equilibrium) and the difficulty in learning it (given the same amount of training examples)? Function approximation quality depends on the properties of the function (Lipschitz constants, etc.) rather than how expensive it is to collect the data points.

# Significance

Despite my reservations about the empirical evaluation of this work, I do think that it is significant. I suspect a number of authors will explore the space of heuristics and learning algorithms for generating a useful set of P2 strategies. It is for this reason that I am recommending its acceptance. But I do think it would have been a far stronger paper if the implementation details were more explicitly evaluated.

Typos:
- line 190 - should be "\sigma_1^*" not "\sigma_2^*"
- line 118 - delete "so" in "so clearly P1 cannot do better"
NIPS | Title
Depth-Limited Solving for Imperfect-Information Games
Abstract
A fundamental challenge in imperfect-information games is that states do not have well-defined values. As a result, depth-limited search algorithms used in singleagent settings and perfect-information games do not apply. This paper introduces a principled way to conduct depth-limited solving in imperfect-information games by allowing the opponent to choose among a number of strategies for the remainder of the game at the depth limit. Each one of these strategies results in a different set of values for leaf nodes. This forces an agent to be robust to the different strategies an opponent may employ. We demonstrate the effectiveness of this approach by building a master-level heads-up no-limit Texas hold’em poker AI that defeats two prior top agents using only a 4-core CPU and 16 GB of memory. Developing such a powerful agent would have previously required a supercomputer.
1 Introduction
Imperfect-information games model strategic interactions between agents with hidden information. The primary benchmark for this class of games is poker, specifically heads-up no-limit Texas hold’em (HUNL), in which Libratus defeated top humans in 2017 [6]. The key breakthrough that led to superhuman performance was nested solving, in which the agent repeatedly calculates a finer-grained strategy in real time (for just a portion of the full game) as play proceeds down the game tree [5, 27, 6].
However, real-time subgame solving was too expensive for Libratus in the first half of the game because the portion of the game tree Libratus solved in real time, known as the subgame, always extended to the end of the game. Instead, for the first half of the game Libratus pre-computed a finegrained strategy that was used as a lookup table. While this pre-computed strategy was successful, it required millions of core hours and terabytes of memory to calculate. Moreover, in deeper sequential games the computational cost of this approach would be even more expensive because either longer subgames or a larger pre-computed strategy would need to be solved. A more general approach would be to solve depth-limited subgames, which may not extend to the end of the game. These could be solved even in the early portions of a game.
The poker AI DeepStack does this using a technique similar to nested solving that was developed independently [27]. However, while DeepStack defeated a set of non-elite human professionals in HUNL, it never defeated prior top AIs despite using over one million core hours to train the agent, suggesting its approach may not be sufficiently efficient in domains like poker. We discuss this in more detail in Section 7. This paper introduces a different approach to depth-limited solving that defeats prior top AIs and is computationally orders of magnitude less expensive.
When conducting depth-limited solving, a primary challenge is determining what values to substitute at the leaf nodes of the depth-limited subgame. In perfect-information depth-limited subgames, the value substituted at leaf nodes is simply an estimate of the state’s value when all players play an
32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.
equilibrium [35, 33]. For example, this approach was used to achieve superhuman performance in backgammon [39], chess [9], and Go [36, 37]. The same approach is also widely used in single-agent settings such as heuristic search [30, 24, 31, 15]. Indeed, in single-agent and perfect-information multi-agent settings, knowing the values of states when all agents play an equilibrium is sufficient to reconstruct an equilibrium. However, this does not work in imperfect-information games, as we demonstrate in the next section.
2 The Challenge of Depth-Limited Solving in Imperfect-Information Games
In imperfect-information games (also referred to as partially-observable games), an optimal strategy cannot be determined in a subgame simply by knowing the values of states (i.e., game-tree nodes) when all players play an equilibrium strategy. A simple demonstration is in Figure 1a, which shows a sequential game we call Rock-Paper-Scissors+ (RPS+). RPS+ is identical to traditional Rock-PaperScissors, except if either player plays Scissors, the winner receives 2 points instead of 1 (and the loser loses 2 points). Figure 1a shows RPS+ as a sequential game in which P1 acts first but does not reveal the action to P2 [7, 13]. The optimal strategy (Minmax strategy, which is also a Nash equilibrium in two-player zero-sum games) for both players in this game is to choose Rock and Paper each with 40% probability, and Scissors with 20% probability. In this equilibrium, the expected value to P1 of choosing Rock is 0, as is the value of choosing Scissors or Paper. In other words, all the red states in
In the RPS+ example, the core problem is that we incorrectly assumed P2 would always play a fixed strategy. If indeed P2 were to always play Rock, Paper, and Scissors with probability 〈0.4, 0.4, 0.2〉, then P1 could choose any arbitrary strategy and receive an expected value of 0. However, by assuming P2 is playing a fixed strategy, P1 may not find a strategy that is robust to P2 adapting. In reality, P2’s optimal strategy depends on the probability that P1 chooses Rock, Paper, and Scissors. In general, in imperfect-information games a player’s optimal strategy at a decision point depends on the player’s belief distribution over states as well as the strategy of all other agents beyond that decision point.
In this paper we introduce a method for depth-limited solving that ensures a player is robust to such opponent adaptations. Rather than simply substitute a single state value at a depth limit, we instead allow the opponent one final choice of action at the depth limit, where each action corresponds to a strategy the opponent will play in the remainder of the game. The choice of strategy determines the value of the state. The opponent does not make this choice in a way that is specific to the state (in which case he would trivially choose the maximum value for himself). Instead, naturally, the opponent must make the same choice at all states that are indistinguishable to him. We prove that if the opponent is given a choice between a sufficient number of strategies at the depth limit, then any solution to the depth-limited subgame is part of a Nash equilibrium strategy in the full game. We also show experimentally that when only a few choices are offered (for computational speed), performance of the method is extremely strong.
3 Notation and Background
In an imperfect-information extensive-form game there is a finite set of players, P . A state (also called a node) is defined by all information of the current situation, including private knowledge known to only one player. A unique player P (h) acts at state h. H is the set of all states in the game tree. The state h′ reached after an action is taken in h is a child of h, represented by h · a = h′, while h is the parent of h′. If there exists a sequence of actions from h to h′, then h is an ancestor of h′ (and h′ is a descendant of h), represented as h @ h′. Z ⊆ H are terminal states for which no actions are available. For each player i ∈ P , there is a payoff function ui : Z → R. If P = {1, 2} and u1 = −u2, the game is two-player zero-sum. In this paper we assume the game is two-player zero-sum, though many of the ideas extend to general sum and more than two players.
Imperfect information is represented by information sets (infosets) for each player i ∈ P . For any infoset I belonging to player i, all states h, h′ ∈ I are indistinguishable to player i. Moreover, every non-terminal state h ∈ H belongs to exactly one infoset for each player i. A strategy σi(I) (also known as a policy) is a probability vector over actions for player i in infoset I . The probability of a particular action a is denoted by σi(I, a). Since all states in an infoset belonging to player i are indistinguishable, the strategies in each of them must be identical. We define σi to be a strategy for player i in every infoset in the game where player i acts. A strategy is pure if all probabilities in it are 0 or 1. All strategies are a linear combination of pure strategies. A strategy profile σ is a tuple of strategies, one for each player. The strategy of every player other than i is represented as σ−i. ui(σi, σ−i) is the expected payoff for player i if all players play according to the strategy profile 〈σi, σ−i〉. The value to player i at state h given that all players play according to strategy profile σ is defined as vσi (h), and the value to player i at infoset I is defined as vσ(I) = ∑ h∈I ( p(h)vσi (h) ) , where p(h) is player i’s believed probability that they are in state h, conditional on being in infoset I , based on the other players’ strategies and chance’s probabilities.
A best response to σ−i is a strategy BR(σ−i) such that ui(BR(σ−i), σ−i) = maxσ′i ui(σ ′ i, σ−i). A Nash equilibrium σ∗ is a strategy profile where every player plays a best response: ∀i, ui(σ∗i , σ∗−i) = maxσ′i ui(σ ′ i, σ ∗ −i) [29]. A Nash equilibrium strategy for player i is a strategy σ ∗ i that is part of any Nash equilibrium. In two-player zero-sum games, if σi and σ−i are both Nash equilibrium strategies, then 〈σi, σ−i〉 is a Nash equilibrium. A depth-limited imperfect-information subgame, which we refer to simply as a subgame, is a contiguous portion of the game tree that does not divide infosets. Formally, a subgame S is a set of states such that for all h ∈ S, if h ∈ Ii and h′ ∈ Ii for some player i, then h′ ∈ S. Moreover, if x ∈ S and z ∈ S and x @ y @ z, then y ∈ S. If h ∈ S but no descendant of h is in S, then h is a leaf node. Additionally, the infosets containing h are leaf infosets. Finally, if h ∈ S but no ancestor of h is in S, then h is a root node and the infosets containing h are root infosets.
4 Multi-Valued States in Imperfect-Information Games
In this section we describe our new method for depth-limited solving in imperfect-information games, which we refer to as multi-valued states. Our general approach is to first precompute an approximate Nash equilibrium for the entire game. We refer to this precomputed strategy profile as a blueprint strategy. Since the blueprint is precomputed for the entire game, it is likely just a coarse approximation of a true Nash equilibrium. Our goal is to compute a better approximation in real time for just a depth-limited subgame S that we find ourselves in during play. For the remainder of this paper, we assume that player P1 is attempting to approximate a Nash equilibrium strategy in S.
Let σ∗ be an exact Nash equilibrium. To present the intuition for our approach, we begin by considering what information about σ∗ would, in theory, be sufficient in order to compute a P1 Nash equilibrium strategy in S. For ease of understanding, when considering the intuition for multi-valued states we suggest the reader first focus on the case where S is rooted at the start of the game (that is, no prior actions have occurred).
As explained in Section 2, knowing the values of leaf nodes in S when both players play according to $\sigma^*$ (that is, $v_i^{\sigma^*}(h)$ for leaf node h and player $P_i$) is insufficient to compute a Nash equilibrium in S (even though this is sufficient in perfect-information games), because it assumes P2 would not adapt their strategy outside S. But what if P2 could adapt? Specifically, suppose hypothetically that P2 could choose any strategy in the entire game, while P1 could only play according to $\sigma_1^*$ outside of S. In this case, what strategy should P1 choose in S? Since $\sigma_1^*$ is a Nash equilibrium strategy and P2 can choose any strategy in the game (including a best response to P1’s strategy), by definition P1 cannot do better than playing $\sigma_1^*$ in S. Thus, P1 should play $\sigma_1^*$ (or some equally good Nash equilibrium) in S.
Another way to describe this setup is that upon reaching a leaf node h in infoset I in subgame S, rather than simply substituting $v_2^{\sigma^*}(h)$ (which assumes P2 plays according to $\sigma_2^*$ for the remainder of the game), P2 could instead choose any mixture of pure strategies for the remainder of the game. So if there are N possible pure strategies following I, P2 would choose among N actions upon reaching I, where action n would correspond to playing pure strategy $\sigma_2^n$ for the remainder of the game. Because this choice is made separately at each infoset I and P2 may mix between pure strategies, this allows P2 to choose any strategy below S.
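The following sketch illustrates the key point of this gadget: the opponent's continuation choice is made once per leaf infoset, not once per state. The class layout and the assumption that each leaf stores one continuation value per opponent strategy are illustrative.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LeafState:
    """A leaf node h, annotated with values[n] ~ P2's value under <sigma_1*, sigma_2^n>."""
    values: List[float]

@dataclass
class LeafInfoset:
    """A P2 leaf infoset I with P2's beliefs p(h) over its states."""
    states: List[LeafState]
    beliefs: List[float]

def best_continuation_value(infoset: LeafInfoset) -> float:
    """P2's value at I under its best single continuation strategy. The same
    strategy index n is used for every h in I, since P2 cannot tell them apart;
    no mixture can beat the best pure choice, so a max over n suffices."""
    n_strategies = len(infoset.states[0].values)
    return max(
        sum(p * h.values[n] for p, h in zip(infoset.beliefs, infoset.states))
        for n in range(n_strategies)
    )
```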
Since the choice of action would define a P2 strategy for the remainder of the game, and since P1 is known to play according to $\sigma_1^*$ outside S, the chosen action can immediately reward the expected value $v_i^{\langle \sigma_1^*, \sigma_2^n \rangle}(h)$ to $P_i$. Therefore, in order to reconstruct a P1 Nash equilibrium in S, it is sufficient to know, for every leaf node, the expected value of every pure P2 strategy against $\sigma_1^*$ (stated formally in Proposition 1). This is in contrast to perfect-information games, in which it is sufficient to know for every leaf node just the expected value of $\sigma_2^*$ against $\sigma_1^*$. Critically, it is not necessary to know the strategy $\sigma_1^*$ itself, just the values of $\sigma_1^*$ played against every pure opponent strategy at each leaf node. Proposition 1 adds the condition that we know $v_2^{\langle \sigma_1^*, \mathrm{BR}(\sigma_1^*) \rangle}(I)$ for every root infoset $I \in S$. This condition is used if S does not begin at the start of the game. Knowledge of $v_2^{\langle \sigma_1^*, \mathrm{BR}(\sigma_1^*) \rangle}(I)$ is needed to ensure that any strategy $\sigma_1$ that P1 computes in S cannot be exploited by P2 changing their strategy earlier in the game. Specifically, we add the constraint that $v_2^{\langle \sigma_1, \mathrm{BR}(\sigma_1^*) \rangle}(I) \le v_2^{\langle \sigma_1^*, \mathrm{BR}(\sigma_1^*) \rangle}(I)$ for all P2 root infosets I. This makes our technique safe:
Proposition 1. Assume P1 has played according to Nash equilibrium strategy $\sigma_1^*$ prior to reaching a depth-limited subgame S of a two-player zero-sum game. In order to calculate the portion of a P1 Nash equilibrium strategy that is in S, it is sufficient to know $v_2^{\langle \sigma_1^*, \mathrm{BR}(\sigma_1^*) \rangle}(I)$ for every root P2 infoset $I \in S$ and $v_1^{\langle \sigma_1^*, \sigma_2 \rangle}(h)$ for every pure undominated P2 strategy $\sigma_2$ and every leaf node $h \in S$.
Other safe subgame solving techniques have been developed in recent papers, but those techniques require solving to the end of the full game [7, 17, 28, 5, 6] (except one [27], which we will compare to in Section 7).
Of course, it is impractical to know the expected value in every state of every pure P2 strategy against σ∗1 , especially since we do not know σ ∗ 1 itself. To deal with this, we first compute a blueprint strategy σ̂∗ (that is, a precomputed approximate Nash equilibrium for the full game). Next, rather than consider every pure P2 strategy, we instead consider just a small number of different P2 strategies (that may or may not be pure). Indeed, in many complex games, the possible opponent strategies at a decision point can be approximately grouped into just a few “meta-strategies”, such as which highway lane a car will choose in a driving simulation. In our experiments, we find that excellent performance is obtained in poker with fewer than ten opponent strategies. In part, excellent performance is possible with a small number of strategies because the choice of strategy beyond the depth limit is made separately at each leaf infoset. Thus, if the opponent chooses between ten strategies at the depth limit, but makes this choice independently in each of 100 leaf infosets, then the opponent is actually choosing between 10100 different strategies. We now consider two questions. First, how do we compute the blueprint strategy σ̂∗1? Second, how do we determine the set of P2 strategies? We answer each of these in turn.
There exist several methods for constructing a blueprint. One option, which achieves the best empirical results and is what we use, involves first abstracting the game by bucketing together similar situations [19, 12] and then applying the iterative algorithm Monte Carlo Counterfactual Regret Minimization [22]. Several alternatives exist that do not use a distinct abstraction step [3, 16, 10]. The agent will never actually play according to the blueprint $\hat\sigma^*$. It is only used to estimate $v^{\langle \sigma_1^*, \sigma_2 \rangle}(h)$.
We now discuss two different ways to select a set of P2 strategies. Ultimately we would like the set of P2 strategies to contain a diverse set of intelligent strategies the opponent might play, so that P1’s solution in a subgame is robust to possible P2 adaptation. One option is to bias the P2 blueprint
strategy σ̂∗2 in a few different ways. For example, in poker the blueprint strategy should be a mixed strategy involving some probability of folding, calling, or raising. We could define a new strategy σ′2 in which the probability of folding is multiplied by 10 (and then all the probabilities renormalized). If the blueprint strategy σ̂∗ were an exact Nash equilibrium, then any such “biased” strategy σ′2 in which the probabilities are arbitrarily multiplied would still be a best response to σ̂∗1 . In our experiments, we use this biasing of the blueprint strategy to construct a set of four opponent strategies on the second betting round. We refer to this as the bias approach.
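A minimal sketch of this biasing operation on a single infoset's action probabilities, assuming strategies are represented as dictionaries (the function name is hypothetical):

```python
def bias_strategy(probs: dict, action: str, factor: float = 10.0) -> dict:
    """Multiply the probability of `action` by `factor`, then renormalize."""
    biased = {a: p * (factor if a == action else 1.0) for a, p in probs.items()}
    total = sum(biased.values())
    return {a: p / total for a, p in biased.items()}

blueprint_infoset = {"fold": 0.2, "call": 0.5, "raise": 0.3}
fold_biased = bias_strategy(blueprint_infoset, "fold")
# -> {'fold': 0.714..., 'call': 0.178..., 'raise': 0.107...}
```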
Another option is to construct the set of P2 strategies via self-play. The set begins with just one P2 strategy: the blueprint strategy $\hat\sigma_2^*$. We then solve a depth-limited subgame rooted at the start of the game and going to whatever depth is feasible to solve, giving P2 only the choice of this P2 strategy at leaf infosets. That is, at leaf node h we simply substitute $v_i^{\hat\sigma^*}(h)$ for $P_i$. Let the P1 solution to this depth-limited subgame be $\sigma_1$. We then approximate a P2 best response assuming P1 plays according to $\sigma_1$ in the depth-limited subgame and according to $\hat\sigma_1^*$ in the remainder of the game. Since P1 plays according to this fixed strategy, approximating a P2 best response is equivalent to solving a Markov Decision Process, which is far easier than solving an imperfect-information game. This P2 approximate best response is added to the set of strategies that P2 may choose from at the depth limit, and the depth-limited subgame is solved again. This process repeats until the set of P2 strategies grows to the desired size. This self-generative approach bears some resemblance to the double oracle algorithm [26] and recent work on generation of opponent strategies in multi-agent RL [23]. In our experiments, we use this self-generative method to construct a set of ten opponent strategies on the first betting round. We refer to this as the self-generative approach.
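In outline, the self-generative loop looks as follows. `solve_depth_limited` and `approximate_best_response` are hypothetical placeholders for the subgame solver and the MDP-style best-response computation described above.

```python
def generate_p2_strategies(blueprint_p1, blueprint_p2, target_size: int):
    """Grow the set of P2 continuation strategies by iterated best response."""
    p2_strategies = [blueprint_p2]
    while len(p2_strategies) < target_size:
        # Solve the depth-limited subgame in which P2 chooses among the
        # current strategy set at each leaf infoset.
        sigma_1 = solve_depth_limited(blueprint_p1, p2_strategies)
        # P1 is fixed (sigma_1 inside the subgame, blueprint_p1 below it),
        # so P2's best response reduces to solving an MDP.
        best_response = approximate_best_response(sigma_1, blueprint_p1)
        p2_strategies.append(best_response)
    return p2_strategies
```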
One practical consideration is that since $\hat\sigma_1^*$ is not an exact Nash equilibrium, a generated P2 strategy $\sigma_2$ may do better than $\hat\sigma_2^*$ against $\hat\sigma_1^*$. In that case, P1 may play more conservatively than $\sigma_1^*$ in a depth-limited subgame. To correct for this, one can balance the players by also giving P1 a choice between multiple strategies for the remainder of the game at the depth limit. Alternatively, one can “weaken” the generated P2 strategies so that they do no better than $\hat\sigma_2^*$ against $\hat\sigma_1^*$. Formally, if $v_2^{\langle \hat\sigma_1^*, \sigma_2 \rangle}(I) > v_2^{\langle \hat\sigma_1^*, \hat\sigma_2^* \rangle}(I)$, we uniformly lower $v_2^{\langle \hat\sigma_1^*, \sigma_2 \rangle}(h)$ for $h \in I$ by $v_2^{\langle \hat\sigma_1^*, \sigma_2 \rangle}(I) - v_2^{\langle \hat\sigma_1^*, \hat\sigma_2^* \rangle}(I)$. Another alternative (or additional) solution is to simply reduce $v_2^{\langle \hat\sigma_1^*, \sigma_2 \rangle}(h)$ for $\sigma_2 \ne \hat\sigma_2^*$ by some heuristic amount, such as a small percentage of the pot in poker.
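A sketch of the uniform lowering just described, assuming leaf values are stored per state and P2's beliefs p(h) over the infoset are available and normalized:

```python
def weaken(values_generated: dict, values_blueprint: dict, beliefs: dict) -> dict:
    """Shift every v(h) of a generated strategy down by the infoset-level excess,
    so the generated strategy does no better than the blueprint at this infoset."""
    v_gen = sum(beliefs[h] * values_generated[h] for h in beliefs)
    v_bp = sum(beliefs[h] * values_blueprint[h] for h in beliefs)
    excess = v_gen - v_bp
    if excess <= 0:
        return dict(values_generated)
    return {h: v - excess for h, v in values_generated.items()}
```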
Once a P1 strategy $\hat\sigma_1^*$ and a set of P2 strategies have been generated, we need some way to calculate and store $v_2^{\langle \hat\sigma_1^*, \sigma_2 \rangle}(h)$. Calculating the state values can be done by traversing the entire game tree once. However, that may not be feasible in large games. Instead, one can use Monte Carlo simulations to approximate the values. For storage, if the number of states is small (such as in the early part of the game tree), one could simply store the values in a table. More generally, one could train a function to predict the values corresponding to a state, taking as input a description of the state and outputting a value for each P2 strategy. Alternatively, one could simply store $\hat\sigma_1^*$ and the set of P2 strategies. Then, in real time, the value of a state could be estimated via Monte Carlo rollouts. We present results for both of these approaches in Section 6.
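The two options could be sketched as follows; `play_out` stands in for an assumed game simulator, and all names are illustrative.

```python
from typing import Callable, Dict, List

# Option 1: a table mapping a state key to one value per P2 strategy.
ValueTable = Dict[str, List[float]]

def tabular_leaf_value(table: ValueTable, state_key: str, n: int) -> float:
    return table[state_key][n]

# Option 2: store the strategies themselves and estimate values at play time.
def rollout_leaf_value(state, sigma1_hat, sigma2_n,
                       play_out: Callable, num_samples: int = 100) -> float:
    """Average P2 payoff of <sigma1_hat, sigma2_n> played out from `state`."""
    total = sum(play_out(state, sigma1_hat, sigma2_n) for _ in range(num_samples))
    return total / num_samples
```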
5 Nested Solving of Imperfect-Information Games
We use the new idea discussed in the previous section in the context of nested solving, which is a way to repeatedly solve subgames as play descends down the game tree [5]. Whenever an opponent chooses an action, a subgame is generated following that action. This subgame is solved, and its solution determines the strategy to play until the next opponent action is taken.
Nested solving is particularly useful in dealing with large or continuous action spaces, such as an auction that allows any bid in dollar increments up to $10,000. To make these games feasible to solve, it is common to apply action abstraction, in which the game is simplified by considering only a few actions (both for ourselves and for the opponent) in the full action space. For example, an action abstraction might only consider bid increments of $100. However, if the opponent chooses an action that is not in the action abstraction (called an off-tree action), the optimal response to that opponent action is undefined.
Prior to the introduction of nested solving, it was standard to simply round off-tree actions to a nearby in-abstraction action (such as treating an opponent bid of $150 as a bid of $200) [14, 34, 11]. Nested solving allows a response to be calculated for off-tree actions by constructing and solving a subgame
that immediately follows that action. The goal is to find a strategy in the subgame that makes the opponent no better off for having chosen the off-tree action than an action already in the abstraction.
Depth-limited solving makes nested solving feasible even in the early game, so it is possible to play without acting according to a precomputed strategy or using action translation. At the start of the game, we solve a depth-limited subgame (using action abstraction) to whatever depth is feasible. This determines our first action. After every opponent action, we solve a new depth-limited subgame that attempts to make the opponent no better off for having chosen that action than an action that was in our previous subgame’s action abstraction. This new subgame determines our next action, and so on.
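Schematically, play with nested depth-limited solving follows the loop below. The `game` interface, `solve_subgame`, and `choose_action` are all hypothetical placeholders; the bookkeeping needed for safety guarantees is omitted.

```python
def play(game, blueprint):
    # Solve an initial depth-limited subgame to get our opening strategy.
    subgame = game.depth_limited_subgame(root=game.start(), depth=game.feasible_depth())
    strategy = solve_subgame(subgame, blueprint)
    while not game.over():
        if game.our_turn():
            game.apply(choose_action(strategy, game.current_infoset()))
        else:
            opp_action = game.observe_opponent_action()  # possibly off-tree
            # Re-solve rooted after the opponent's action, constraining the
            # solution so that action is no better for the opponent than an
            # action in our previous subgame's abstraction.
            subgame = game.depth_limited_subgame(root=game.after(opp_action),
                                                 depth=game.feasible_depth())
            strategy = solve_subgame(subgame, blueprint)
    return game.payoff()
```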
6 Experiments
We conducted experiments on the games of heads-up no-limit Texas hold’em poker (HUNL) and heads-up no-limit flop hold’em poker (NLFH). Appendix B reminds the reader of the rules of these games. HUNL is the main large-scale benchmark for imperfect-information game AIs. NLFH is similar to HUNL, except the game ends immediately after the second betting round, which makes it small enough to precisely calculate best responses and Nash equilibria. Performance is measured in terms of mbb/g, which is a standard win rate measure in the literature. It stands for milli-big blinds per game and represents how many thousandths of a big blind (the initial money a player must commit to the pot) a player wins on average per hand of poker played.
6.1 Exploitability Experiments in No-Limit Flop Hold’em (NLFH)
Our first experiment measured the exploitability of our technique in NLFH. The exploitability of a strategy in a two-player zero-sum game is how much worse the strategy does against a best response than a Nash equilibrium strategy does against a best response. Formally, the exploitability of $\sigma_1$ is $\min_{\sigma_2} u_1(\sigma_1^*, \sigma_2) - \min_{\sigma_2} u_1(\sigma_1, \sigma_2)$, where $\sigma_1^*$ is a Nash equilibrium strategy.
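For a normal-form (matrix) game this definition collapses to a one-liner, since against a fixed strategy some pure strategy is a best response. The sketch below assumes `u1` is P1's payoff matrix with rows indexed by P1's actions and columns by P2's.

```python
import numpy as np

def exploitability(sigma1: np.ndarray, nash_sigma1: np.ndarray, u1: np.ndarray) -> float:
    """min over P2 responses of u1(nash, .) minus the same min for sigma1.
    sigma1 @ u1 gives P1's expected payoff against each P2 pure action."""
    return float((nash_sigma1 @ u1).min() - (sigma1 @ u1).min())
```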
We considered the case of P1 betting 0.75× the pot at the start of the game, when the action abstraction only contains bets of 0.5× and 1× the pot. We compared our depth-limited solving technique to the randomized pseudoharmonic action translation (RPAT) [11], in which the bet of 0.75× is simply treated as either a bet of 0.5× or 1×. RPAT is the lowest-exploitability known technique for responding to off-tree actions that does not involve real-time computation.
We began by calculating an approximate Nash equilibrium in an action abstraction that does not include the 0.75× bet. This was done by running the CFR+ equilibrium-approximation algorithm [38] for 1,000 iterations, which resulted in less than 1 mbb/g of exploitability within the action abstraction. Next, values for the states at the end of the first betting round within the action abstraction were determined using the self-generative method discussed in Section 4. Since the first betting round is a small portion of the entire game, storing a value for each state in a table required just 42 MB.
To determine a P2 strategy in response to the 0.75× bet, we constructed a depth-limited subgame rooted after the 0.75× bet with leaf nodes at the end of the first betting round. The values of a leaf node in this subgame were set by first determining the in-abstraction leaf nodes corresponding to the exact same sequence of actions, except P1 initially bets 0.5× or 1× the pot. The leaf node values in the 0.75× subgame were set to the average of those two corresponding value vectors. When the end of the first betting round was reached and the board cards were dealt, the remaining game was solved using safe subgame solving.
Figure 2 shows how exploitability decreases as we add state values (that is, as we give P1 more best responses to choose from at the depth limit). When using only one state value at the depth limit (that is, assuming P1 would always play according to the blueprint strategy for the remainder of the game), it is actually better to use RPAT. However, after that our technique becomes significantly better and at 16 values its performance is close to having had the 0.75× action in the abstraction in the first place.
While one could have calculated a (slightly better) P2 strategy in response to the 0.75× bet by solving to the end of the game, that subgame would have been about 10,000× larger than the subgames solved in this experiment. Thus, depth-limited solving dramatically reduces the computational cost of nested subgame solving while giving up very little solution quality.
Figure 2: Exploitability of depth-limited solving in NLFH.
6.2 Experiments Against Top AIs in Heads-Up No-Limit Texas Hold’em (HUNL)
Our main experiment uses depth-limited solving to produce a master-level HUNL poker AI called Modicum using computing resources found in a typical laptop. We test Modicum against Baby Tartanian8 [4], the winner of the 2016 Annual Computer Poker Competition, and against Slumbot [18], the winner of the 2018 Annual Computer Poker Competition. Neither Baby Tartanian8 nor Slumbot uses real-time computation; their strategies are precomputed lookup tables. Baby Tartanian8 used about 2 million core hours and 18 TB of RAM to compute its strategy. Slumbot used about 250,000 core hours and 2 TB of RAM to compute its strategy. In contrast, Modicum used just 700 core hours and 16 GB of RAM to compute its strategy and can play in real time at the speed of human professionals (an average of 20 seconds for an entire hand of poker) using just a 4-core CPU. We now describe Modicum and provide details of its construction in Appendix A.
The blueprint strategy for Modicum was constructed by first generating an abstraction of HUNL using state-of-the-art abstraction techniques [12, 20]. Storing a strategy for this abstraction as 4-byte floats requires just 5 GB. This abstraction was approximately solved by running Monte Carlo Counterfactual Regret Minimization (MCCFR) for 700 core hours [22].
HUNL consists of four betting rounds. We conduct depth-limited solving on the first two rounds by solving to the end of that round using MCCFR. Once the third betting round is reached, the remaining game is small enough that we solve to the end of the game using an enhanced form of CFR+ described in the appendix.
We generated 10 values for each state at the end of the first betting round using the self-generative approach. The first betting round was small enough to store all of these state values in a table using 240 MB. For the second betting round, we used the bias approach to generate four opponent best responses. The first best response is simply the opponent’s blueprint strategy. For the second, we biased the opponent’s blueprint strategy toward folding by multiplying the probability of fold actions by 10 and then renormalizing. For the third, we biased the opponent’s blueprint strategy toward checking and calling. Finally for the fourth, we biased the opponent’s blueprint strategy toward betting and raising. To estimate the values of a state when the depth limit is reached on the second round, we sample rollouts of each of the stored best-response strategies.
The performance of Modicum is shown in Table 1. For the evaluation, we used AIVAT to reduce variance [8]. Our new agent defeats both Baby Tartanian8 and Slumbot with statistical significance. For comparison, Baby Tartanian8 defeated Slumbot by 36 ± 12 mbb/g, Libratus defeated Baby Tartanian8 by 63 ± 28 mbb/g, and Libratus defeated top human professionals by 147 ± 77 mbb/g.

In addition to head-to-head performance against prior top AIs, we also tested Modicum against two versions of Local Best Response (LBR) [25]. An LBR agent is given full access to its opponent’s full-game strategy and uses that knowledge to exactly calculate the probability that the LBR agent is in each possible state. Given that probability distribution and a heuristic for how the opposing agent will play thereafter, the LBR agent chooses a best-response action. LBR is a way to calculate a lower bound on exploitability and has been shown to be effective in exploiting agents that do not use real-time solving.
In the first version of LBR we tested against, the LBR agent was limited to either folding or betting 0.75× the pot on the first action, and thereafter was limited to either folding or calling. Modicum beat this version of LBR by 570 ± 42 mbb/g. The second version of LBR we tested against could bet 10 different amounts on the flop that Modicum did not include in its blueprint strategy. Much like the experiment in Section 6.1, this was intended to measure how vulnerable Modicum is to unanticipated bet sizes. The LBR agent was limited to betting 0.75× the pot for the first action of the game and calling for the remaining actions on the preflop. On the flop, the LBR agent could either fold, call, or bet $0.33 \times 2^x$ times the pot for $x \in \{0, 1, \ldots, 10\}$. On the remaining rounds the LBR agent could either fold or call. Modicum beat this version of LBR by 1377 ± 115 mbb/g. In contrast, similar forms of LBR have been shown to defeat prior top poker AIs that do not use real-time solving by hundreds or thousands of mbb/g [25].
While our new agent is probably not as strong as Libratus, it was produced with less than 0.1% of the computing resources and memory, and is never vulnerable to off-tree opponent actions.
While the rollout method used on the second betting round worked well, rollouts may be significantly more expensive in deeper games. To demonstrate the generality of our approach, we also trained a deep neural network (DNN) to predict the values of states at the end of the second betting round as an alternative to using rollouts. The DNN takes as input a 34-float vector of features describing the state, and outputs four floats representing the values of the state for the four possible opponent strategies (represented as a fraction of the size of the pot). The DNN was trained using 180 million examples per player by optimizing the Huber loss with Adam [21], which we implemented using PyTorch [32]. In order for the network to run sufficiently fast on just a 4-core CPU, the DNN has just 4 hidden layers with 256 nodes in the first hidden layer and 128 nodes in the remaining hidden layers. This achieved a Huber loss of 0.02. Using a DNN rather than rollouts resulted in the agent beating Baby Tartanian8 by 2± 9 mbb/g. However, the average time taken using a 4-core CPU increased from 20 seconds to 31 seconds per hand. Still, these results demonstrate the generality of our approach.
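A sketch of a network matching the stated sizes is below. The text specifies only the layer widths, the Huber loss, and the Adam optimizer, so the ReLU activations and other settings are assumptions; `nn.HuberLoss` requires a recent PyTorch version.

```python
import torch
import torch.nn as nn

class StateValueNet(nn.Module):
    """34 input features -> 4 values, one per opponent strategy (pot fractions)."""
    def __init__(self, n_features: int = 34, n_strategies: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 256), nn.ReLU(),   # hidden layer 1
            nn.Linear(256, 128), nn.ReLU(),          # hidden layer 2
            nn.Linear(128, 128), nn.ReLU(),          # hidden layer 3
            nn.Linear(128, 128), nn.ReLU(),          # hidden layer 4
            nn.Linear(128, n_strategies),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = StateValueNet()
optimizer = torch.optim.Adam(model.parameters())
loss_fn = nn.HuberLoss()
```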
7 Comparison to Prior Work
Section 2 demonstrated that in imperfect-information games, states do not have unique values and therefore the techniques common in perfect-information games and single-agent settings do not apply. This paper introduced a way to overcome this challenge by assigning multiple values to states. A different approach is to modify the definition of a “state” to instead be all players’ belief probability distributions over states, which we refer to as a joint belief state. This technique was previously used to develop the poker AI DeepStack [27]. While DeepStack defeated non-elite human professionals in HUNL, it was never shown to defeat prior top AIs even though it used over 1,000,000 core hours of computation. In contrast, Modicum defeated two prior top AIs with less than 1,000 core hours of computation. Still, there are benefits and drawbacks to both approaches, which we now describe in detail. The right choice may depend on the domain and future research may change the competitiveness of either approach.
A joint belief state is defined by a probability (belief) distribution for each player over states that are indistinguishable to the player. In poker, for example, a joint belief state is defined by each player’s belief about what cards the other players are holding. Joint belief states maintain some of the properties that regular states have in perfect-information games. In particular, it is possible to determine an optimal strategy in a subgame rooted at a joint belief state independently from the rest of the game. Therefore, joint belief states have unique, well-defined values that are not influenced by the strategies played in disjoint portions of the game tree. Given a joint belief state, it is also possible
to define the value of each root infoset for each player. In the example of poker, this would be the value of a player holding a particular poker hand given the joint belief state.
One way to do depth-limited subgame solving, other than the method we describe in this paper, is to learn a function that maps joint belief states to infoset values. When conducting depth-limited solving, one could then set the value of a leaf infoset based on the joint belief state at that leaf infoset.
One drawback is that because a player’s belief distribution partly defines a joint belief state, the values of the leaf infosets must be recalculated each time the strategy in the subgame changes. With the best domain-specific iterative algorithms, this would require recalculating the leaf infoset values about 500 times. Monte Carlo algorithms, which are the preferred domain-independent method of solving imperfect-information games, may change the strategy millions of times in a subgame, which poses a problem for the joint belief state approach. In contrast, our multi-valued state approach requires only a single function call for each leaf node regardless of the number of iterations conducted.
Moreover, evaluating multi-valued states with a function approximator is cheaper and more scalable to large games than joint belief states. The input to a function that predicts the value of a multi-valued state is simply the state description (for example, the sequence of actions), and the output is several values. In our experiments, the input was 34 floats and the output was 4 floats. In contrast, the input to a function that predicts the values of a joint belief state is a probability vector for each player over the possible states they may be in. For example, in HUNL, the input is more than 2,000 floats and the output is more than 1,000 floats. The input would be even larger in games with more states per infoset.
Another drawback is that learning a mapping from joint belief states to infoset values is computationally more expensive than learning a mapping from states to a set of values. For example, Modicum required less than 1,000 core hours to create this mapping. In contrast, DeepStack required over 1,000,000 core hours to create its mapping. The increased cost is partly because computing training data for a joint belief state value mapping is inherently more expensive. The multi-valued states approach is learning the values of best responses to a particular strategy (namely, the approximate Nash equilibrium strategy σ̂∗1). In contrast, a joint belief state value mapping is learning the value of all players playing an equilibrium strategy given that joint belief state. As a rough guideline, computing an equilibrium is about 1,000× more expensive than computing a best response in large games [1].
On the other hand, the multi-valued state approach requires knowledge of a blueprint strategy that is already an approximate Nash equilibrium. A benefit of the joint belief state approach is that rather than simply learning best responses to a particular strategy, it is learning best responses against every possible strategy. This may be particularly useful in self-play settings where the blueprint strategy is unknown, because it may lead to increasingly more sophisticated strategies.
Another benefit of the joint belief state approach is that in many games (but not all) it obviates the need to keep track of the sequence of actions played. For example, in poker if there are two different sequences of actions that result in the same amount of money in the pot and all players having the same belief distribution over what their opponents’ cards are, then the optimal strategy in both of those situations is the same. This is similar to how in Go it is not necessary to know the exact sequence of actions that were played. Rather, it is only necessary to know the current configuration of the board (and, in certain situations, also the last few actions played).
A further benefit of the joint belief state approach is that its run-time complexity does not increase with the degree of precision other than needing a better (possibly more computationally expensive) function approximator. In contrast, for our algorithm the computational complexity of finding a solution to a depth-limited subgame grows linearly with the number of values per state.
8 Conclusions
We introduced a principled method for conducting depth-limited solving in imperfect-information games. Experimental results show that this leads to stronger performance than the best precomputed-strategy AIs in HUNL while using orders of magnitude less computational resources, and that it is also orders of magnitude more efficient than past approaches that use real-time solving. Additionally, the method exhibits low exploitability. In addition to using fewer resources, this approach broadens the applicability of nested real-time solving to longer games.
9 Acknowledgments
This material is based on work supported by the National Science Foundation under grants IIS-1718457, IIS-1617590, and CCF-1733556, and by the ARO under award W911NF-17-1-0082, as well as XSEDE computing resources provided by the Pittsburgh Supercomputing Center. We thank Thore Graepel, Marc Lanctot, David Silver, Ariel Procaccia, Fei Fang, and our anonymous reviewers for helpful inspiration, feedback, suggestions, and support.
Main ideas
==========
This paper proposes a method for solving two-player games with imperfect information. The technique is to perform a depth-limited game-tree search and, at the depth limit, consider a range of values (possible outcomes of following a fixed strategy), then check their relation to the best response. The experimental results show that the resulting solver outperforms state-of-the-art poker AI players and requires significantly fewer computational resources. The authors also experimented with a deep-learning-based approach for the estimation part of their algorithm, with so-so results.

Weaknesses
==========
Clarity: Up to Section 4, the paper is clear. From Section 4 onwards, I found it very difficult to understand, because it is not clear what simplifying assumptions are being made, what are implementation details, and what are the key technical contributions. Also, the authors focus on two-player games, but in several places in the paper go back to the more general n-player definitions. This is very confusing and unhelpful. For examples, see lines 94 and 125.

Strengths
==========
Significance: It seems to me that Proposition 1 is key to the theoretical correctness of the proposed approach, but it is not given a formal proof. Also, the lack of clarity of the proposed method will make it very hard to verify or understand the main ideas underlying this work. However, the state-of-the-art results are extremely impressive.

Originality: Novel as far as I could tell.

Others:
Line 94: \Sigma_i was not defined.
Line 145: "... of three lanes to choose ...": this sentence is not clear to me at all.
NIPS | Title
Depth-Limited Solving for Imperfect-Information Games
Abstract
A fundamental challenge in imperfect-information games is that states do not have well-defined values. As a result, depth-limited search algorithms used in singleagent settings and perfect-information games do not apply. This paper introduces a principled way to conduct depth-limited solving in imperfect-information games by allowing the opponent to choose among a number of strategies for the remainder of the game at the depth limit. Each one of these strategies results in a different set of values for leaf nodes. This forces an agent to be robust to the different strategies an opponent may employ. We demonstrate the effectiveness of this approach by building a master-level heads-up no-limit Texas hold’em poker AI that defeats two prior top agents using only a 4-core CPU and 16 GB of memory. Developing such a powerful agent would have previously required a supercomputer.
1 Introduction
Imperfect-information games model strategic interactions between agents with hidden information. The primary benchmark for this class of games is poker, specifically heads-up no-limit Texas hold’em (HUNL), in which Libratus defeated top humans in 2017 [6]. The key breakthrough that led to superhuman performance was nested solving, in which the agent repeatedly calculates a finer-grained strategy in real time (for just a portion of the full game) as play proceeds down the game tree [5, 27, 6].
However, real-time subgame solving was too expensive for Libratus in the first half of the game because the portion of the game tree Libratus solved in real time, known as the subgame, always extended to the end of the game. Instead, for the first half of the game Libratus pre-computed a finegrained strategy that was used as a lookup table. While this pre-computed strategy was successful, it required millions of core hours and terabytes of memory to calculate. Moreover, in deeper sequential games the computational cost of this approach would be even more expensive because either longer subgames or a larger pre-computed strategy would need to be solved. A more general approach would be to solve depth-limited subgames, which may not extend to the end of the game. These could be solved even in the early portions of a game.
The poker AI DeepStack does this using a technique similar to nested solving that was developed independently [27]. However, while DeepStack defeated a set of non-elite human professionals in HUNL, it never defeated prior top AIs despite using over one million core hours to train the agent, suggesting its approach may not be sufficiently efficient in domains like poker. We discuss this in more detail in Section 7. This paper introduces a different approach to depth-limited solving that defeats prior top AIs and is computationally orders of magnitude less expensive.
When conducting depth-limited solving, a primary challenge is determining what values to substitute at the leaf nodes of the depth-limited subgame. In perfect-information depth-limited subgames, the value substituted at leaf nodes is simply an estimate of the state’s value when all players play an
32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.
equilibrium [35, 33]. For example, this approach was used to achieve superhuman performance in backgammon [39], chess [9], and Go [36, 37]. The same approach is also widely used in single-agent settings such as heuristic search [30, 24, 31, 15]. Indeed, in single-agent and perfect-information multi-agent settings, knowing the values of states when all agents play an equilibrium is sufficient to reconstruct an equilibrium. However, this does not work in imperfect-information games, as we demonstrate in the next section.
2 The Challenge of Depth-Limited Solving in Imperfect-Information Games
In imperfect-information games (also referred to as partially-observable games), an optimal strategy cannot be determined in a subgame simply by knowing the values of states (i.e., game-tree nodes) when all players play an equilibrium strategy. A simple demonstration is in Figure 1a, which shows a sequential game we call Rock-Paper-Scissors+ (RPS+). RPS+ is identical to traditional Rock-PaperScissors, except if either player plays Scissors, the winner receives 2 points instead of 1 (and the loser loses 2 points). Figure 1a shows RPS+ as a sequential game in which P1 acts first but does not reveal the action to P2 [7, 13]. The optimal strategy (Minmax strategy, which is also a Nash equilibrium in two-player zero-sum games) for both players in this game is to choose Rock and Paper each with 40% probability, and Scissors with 20% probability. In this equilibrium, the expected value to P1 of choosing Rock is 0, as is the value of choosing Scissors or Paper. In other words, all the red states in
In the RPS+ example, the core problem is that we incorrectly assumed P2 would always play a fixed strategy. If indeed P2 were to always play Rock, Paper, and Scissors with probability 〈0.4, 0.4, 0.2〉, then P1 could choose any arbitrary strategy and receive an expected value of 0. However, by assuming P2 is playing a fixed strategy, P1 may not find a strategy that is robust to P2 adapting. In reality, P2’s optimal strategy depends on the probability that P1 chooses Rock, Paper, and Scissors. In general, in imperfect-information games a player’s optimal strategy at a decision point depends on the player’s belief distribution over states as well as the strategy of all other agents beyond that decision point.
In this paper we introduce a method for depth-limited solving that ensures a player is robust to such opponent adaptations. Rather than simply substitute a single state value at a depth limit, we instead allow the opponent one final choice of action at the depth limit, where each action corresponds to a strategy the opponent will play in the remainder of the game. The choice of strategy determines the value of the state. The opponent does not make this choice in a way that is specific to the state (in which case he would trivially choose the maximum value for himself). Instead, naturally, the opponent must make the same choice at all states that are indistinguishable to him. We prove that if the opponent is given a choice between a sufficient number of strategies at the depth limit, then any solution to the depth-limited subgame is part of a Nash equilibrium strategy in the full game. We also show experimentally that when only a few choices are offered (for computational speed), performance of the method is extremely strong.
3 Notation and Background
In an imperfect-information extensive-form game there is a finite set of players, P . A state (also called a node) is defined by all information of the current situation, including private knowledge known to only one player. A unique player P (h) acts at state h. H is the set of all states in the game tree. The state h′ reached after an action is taken in h is a child of h, represented by h · a = h′, while h is the parent of h′. If there exists a sequence of actions from h to h′, then h is an ancestor of h′ (and h′ is a descendant of h), represented as h @ h′. Z ⊆ H are terminal states for which no actions are available. For each player i ∈ P , there is a payoff function ui : Z → R. If P = {1, 2} and u1 = −u2, the game is two-player zero-sum. In this paper we assume the game is two-player zero-sum, though many of the ideas extend to general sum and more than two players.
Imperfect information is represented by information sets (infosets) for each player i ∈ P . For any infoset I belonging to player i, all states h, h′ ∈ I are indistinguishable to player i. Moreover, every non-terminal state h ∈ H belongs to exactly one infoset for each player i. A strategy σi(I) (also known as a policy) is a probability vector over actions for player i in infoset I . The probability of a particular action a is denoted by σi(I, a). Since all states in an infoset belonging to player i are indistinguishable, the strategies in each of them must be identical. We define σi to be a strategy for player i in every infoset in the game where player i acts. A strategy is pure if all probabilities in it are 0 or 1. All strategies are a linear combination of pure strategies. A strategy profile σ is a tuple of strategies, one for each player. The strategy of every player other than i is represented as σ−i. ui(σi, σ−i) is the expected payoff for player i if all players play according to the strategy profile 〈σi, σ−i〉. The value to player i at state h given that all players play according to strategy profile σ is defined as vσi (h), and the value to player i at infoset I is defined as vσ(I) = ∑ h∈I ( p(h)vσi (h) ) , where p(h) is player i’s believed probability that they are in state h, conditional on being in infoset I , based on the other players’ strategies and chance’s probabilities.
A best response to σ−i is a strategy BR(σ−i) such that ui(BR(σ−i), σ−i) = maxσ′i ui(σ ′ i, σ−i). A Nash equilibrium σ∗ is a strategy profile where every player plays a best response: ∀i, ui(σ∗i , σ∗−i) = maxσ′i ui(σ ′ i, σ ∗ −i) [29]. A Nash equilibrium strategy for player i is a strategy σ ∗ i that is part of any Nash equilibrium. In two-player zero-sum games, if σi and σ−i are both Nash equilibrium strategies, then 〈σi, σ−i〉 is a Nash equilibrium. A depth-limited imperfect-information subgame, which we refer to simply as a subgame, is a contiguous portion of the game tree that does not divide infosets. Formally, a subgame S is a set of states such that for all h ∈ S, if h ∈ Ii and h′ ∈ Ii for some player i, then h′ ∈ S. Moreover, if x ∈ S and z ∈ S and x @ y @ z, then y ∈ S. If h ∈ S but no descendant of h is in S, then h is a leaf node. Additionally, the infosets containing h are leaf infosets. Finally, if h ∈ S but no ancestor of h is in S, then h is a root node and the infosets containing h are root infosets.
4 Multi-Valued States in Imperfect-Information Games
In this section we describe our new method for depth-limited solving in imperfect-information games, which we refer to as multi-valued states. Our general approach is to first precompute an approximate Nash equilibrium for the entire game. We refer to this precomputed strategy profile as a blueprint strategy. Since the blueprint is precomputed for the entire game, it is likely just a coarse approximation of a true Nash equilibrium. Our goal is to compute a better approximation in real time for just a depth-limited subgame S that we find ourselves in during play. For the remainder of this paper, we assume that player P1 is attempting to approximate a Nash equilibrium strategy in S.
Let σ∗ be an exact Nash equilibrium. To present the intuition for our approach, we begin by considering what information about σ∗ would, in theory, be sufficient in order to compute a P1 Nash equilibrium strategy in S. For ease of understanding, when considering the intuition for multi-valued states we suggest the reader first focus on the case where S is rooted at the start of the game (that is, no prior actions have occurred).
As explained in Section 2, knowing the values of leaf nodes in S when both players play according to σ∗ (that is, vσ ∗
i (h) for leaf node h and player Pi) is insufficient to compute a Nash equilibrium in S (even though this is sufficient in perfect-information games), because it assumes P2 would not adapt their strategy outside S. But what if P2 could adapt? Specifically, suppose hypothetically that P2
could choose any strategy in the entire game, while P1 could only play according to σ∗1 outside of S. In this case, what strategy should P1 choose in S? Since σ∗1 is a Nash equilibrium strategy and P2 can choose any strategy in the game (including a best response to P1’s strategy), so by definition P1 cannot do better than playing σ∗1 in S. Thus, P1 should play σ ∗ 1 (or some equally good Nash equilibrium) in S.
Another way to describe this setup is that upon reaching a leaf node h in infoset I in subgame S, rather than simply substituting vσ ∗
2 (h) (which assumes P2 plays according to σ ∗ 2 for the remainder of
the game), P2 could instead choose any mixture of pure strategies for the remainder of the game. So if there are N possible pure strategies following I , P2 would choose among N actions upon reaching I , where action n would correspond to playing pure strategy σn2 for the remainder of the game. Since this choice is made separately at each infoset I and since P2 may mix between pure strategies, so this allows P2 to choose any strategy below S.
Since the choice of action would define a P2 strategy for the remainder of the game and since P1 is known to play according to σ∗1 outside S, so the chosen action could immediately reward the expected value v〈σ ∗ 1 ,σ n 2 〉
i (h) to Pi. Therefore, in order to reconstruct a P1 Nash equilibrium in S, it is sufficient to know for every leaf node the expected value of every pure P2 strategy against σ∗1 (stated formally in Proposition 1). This is in contrast to perfect-information games, in which it is sufficient to know for every leaf node just the expected value of σ∗2 against σ ∗ 1 . Critically, it is not necessary to know the strategy σ∗1 , just the values of σ ∗ 1 played against every pure opponent strategy in each leaf node. Proposition 1 adds the condition that we know v〈σ ∗ 1 ,BR(σ ∗ 1 )〉
2 (I) for every root infoset I ∈ S. This condition is used if S does not begin at the start of the game. Knowledge of v〈σ ∗ 1 ,BR(σ ∗ 1 )〉
2 (I) is needed to ensure that any strategy σ1 that P1 computes in S cannot be exploited by P2 changing their strategy earlier in the game. Specifically, we add a constraint that v〈σ1,BR(σ ∗ 1 )〉 2 (I) ≤ v 〈σ∗1 ,BR(σ ∗ 1 )〉
2 (I) for all P2 root infosets I . This makes our technique safe:
Proposition 1. Assume P1 has played according to Nash equilibrium strategy σ∗1 prior to reaching a depth-limited subgame S of a two-player zero-sum game. In order to calculate the portion of a P1 Nash equilibrium strategy that is in S, it is sufficient to know v 〈σ∗1 ,BR(σ ∗ 1 )〉
2 (I) for every root P2 infoset I ∈ S and v〈σ ∗ 1 ,σ2〉
1 (h) for every pure undominated P2 strategy σ2 and every leaf node h ∈ S.
Other safe subgame solving techniques have been developed in recent papers, but those techniques require solving to the end of the full game [7, 17, 28, 5, 6] (except one [27], which we will compare to in Section 7).
Of course, it is impractical to know the expected value in every state of every pure P2 strategy against σ∗1 , especially since we do not know σ ∗ 1 itself. To deal with this, we first compute a blueprint strategy σ̂∗ (that is, a precomputed approximate Nash equilibrium for the full game). Next, rather than consider every pure P2 strategy, we instead consider just a small number of different P2 strategies (that may or may not be pure). Indeed, in many complex games, the possible opponent strategies at a decision point can be approximately grouped into just a few “meta-strategies”, such as which highway lane a car will choose in a driving simulation. In our experiments, we find that excellent performance is obtained in poker with fewer than ten opponent strategies. In part, excellent performance is possible with a small number of strategies because the choice of strategy beyond the depth limit is made separately at each leaf infoset. Thus, if the opponent chooses between ten strategies at the depth limit, but makes this choice independently in each of 100 leaf infosets, then the opponent is actually choosing between 10100 different strategies. We now consider two questions. First, how do we compute the blueprint strategy σ̂∗1? Second, how do we determine the set of P2 strategies? We answer each of these in turn.
There exist several methods for constructing a blueprint. One option, which achieves the best empirical results and is what we use, involves first abstracting the game by bucketing together similar situations [19, 12] and then applying the iterative algorithm Monte Carlo Counterfactual Regret Minimization [22]. Several alternatives exist that do not use a distinct abstraction step [3, 16, 10]. The agent will never actually play according to the blueprint σ̂∗. It is only used to estimate v〈σ ∗ 1 ,σ2〉(h).
We now discuss two different ways to select a set of P2 strategies. Ultimately we would like the set of P2 strategies to contain a diverse set of intelligent strategies the opponent might play, so that P1’s solution in a subgame is robust to possible P2 adaptation. One option is to bias the P2 blueprint
strategy σ̂∗2 in a few different ways. For example, in poker the blueprint strategy should be a mixed strategy involving some probability of folding, calling, or raising. We could define a new strategy σ′2 in which the probability of folding is multiplied by 10 (and then all the probabilities renormalized). If the blueprint strategy σ̂∗ were an exact Nash equilibrium, then any such “biased” strategy σ′2 in which the probabilities are arbitrarily multiplied would still be a best response to σ̂∗1 . In our experiments, we use this biasing of the blueprint strategy to construct a set of four opponent strategies on the second betting round. We refer to this as the bias approach.
Another option is to construct the set of P2 strategies via self-play. The set begins with just one P2 strategy: the blueprint strategy σ̂∗2 . We then solve a depth-limited subgame rooted at the start of the game and going to whatever depth is feasible to solve, giving P2 only the choice of this P2 strategy at leaf infosets. That is, at leaf node h we simply substitute vσ̂ ∗
i (h) for Pi. Let the P1 solution to this depth-limited subgame be σ1. We then approximate a P2 best response assuming P1 plays according to σ1 in the depth-limited subgame and according to σ̂∗1 in the remainder of the game. Since P1 plays according to this fixed strategy, approximating a P2 best response is equivalent to solving a Markov Decision Process, which is far easier to solve than an imperfect-information game. This P2 approximate best response is added to the set of strategies that P2 may choose at the depth limit, and the depth-limited subgame is solved again. This process repeats until the set of P2 strategies grows to the desired size. This self-generative approach bears some resemblance to the double oracle algorithm [26] and recent work on generation of opponent strategies in multi-agent RL [23]. In our experiments, we use this self-generative method to construct a set of ten opponent strategies on the first betting round. We refer to this as the self-generative approach.
One practical consideration is that since σ̂∗1 is not an exact Nash equilibrium, a generated P2 strategy σ2 may do better than σ̂∗2 against σ̂ ∗ 1 . In that case, P1 may play more conservatively than σ ∗ 1 in a depth-limited subgame. To correct for this, one can balance the players by also giving P1 a choice between multiple strategies for the remainder of the game at the depth limit. Alternatively, one can “weaken” the generated P2 strategies so that they do no better than σ̂∗2 against σ̂ ∗ 1 . Formally, if v 〈σ̂∗1 ,σ2〉 2 (I) > v 〈σ̂∗1 ,σ̂ ∗ 2 〉 2 (I), we uniformly lower v 〈σ̂∗1 ,σ2〉 2 (h) for h ∈ I by v 〈σ̂∗1 ,σ2〉 2 (I)− v 〈σ̂∗1 ,σ̂ ∗ 2 〉
2 (I). Another alternative (or additional) solution would be to simply reduce v〈σ̂ ∗ 1 ,σ2〉
2 (h) for σ2 6= σ̂∗2 by some heuristic amount, such as a small percentage of the pot in poker.
Once a P1 strategy σ̂∗1 and a set of P2 strategies have been generated, we need some way to calculate and store v〈σ̂ ∗ 1 ,σ2〉
2 (h). Calculating the state values can be done by traversing the entire game tree once. However, that may not be feasible in large games. Instead, one can use Monte Carlo simulations to approximate the values. For storage, if the number of states is small (such as in the early part of the game tree), one could simply store the values in a table. More generally, one could train a function to predict the values corresponding to a state, taking as input a description of the state and outputting a value for each P2 strategy. Alternatively, one could simply store σ̂∗1 and the set of P2 strategies. Then, in real time, the value of a state could be estimated via Monte Carlo rollouts. We present results for both of these approaches in Section 6.
5 Nested Solving of Imperfect-Information Games
We use the new idea discussed in the previous section in the context of nested solving, which is a way to repeatedly solve subgames as play descends down the game tree [5]. Whenever an opponent chooses an action, a subgame is generated following that action. This subgame is solved, and its solution determines the strategy to play until the next opponent action is taken.
Nested solving is particularly useful in dealing with large or continuous action spaces, such as an auction that allows any bid in dollar increments up to $10,000. To make these games feasible to solve, it is common to apply action abstraction, in which the game is simplified by considering only a few actions (both for ourselves and for the opponent) in the full action space. For example, an action abstraction might only consider bid increments of $100. However, if the opponent chooses an action that is not in the action abstraction (called an off-tree action), the optimal response to that opponent action is undefined.
Prior to the introduction of nested solving, it was standard to simply round off-tree actions to a nearby in-abstraction action (such as treating an opponent bid of $150 as a bid of $200) [14, 34, 11]. Nested solving allows a response to be calculated for off-tree actions by constructing and solving a subgame
that immediately follows that action. The goal is to find a strategy in the subgame that makes the opponent no better off for having chosen the off-tree action than an action already in the abstraction.
Depth-limited solving makes nested solving feasible even in the early game, so it is possible to play without acting according to a precomputed strategy or using action translation. At the start of the game, we solve a depth-limited subgame (using action abstraction) to whatever depth is feasible. This determines our first action. After every opponent action, we solve a new depth-limited subgame that attempts to make the opponent no better off for having chosen that action than an action that was in our previous subgame’s action abstraction. This new subgame determines our next action, and so on.
6 Experiments
We conducted experiments on the games of heads-up no-limit Texas hold’em poker (HUNL) and heads-up no-limit flop hold’em poker (NLFH). Appendix B reminds the reader of the rules of these games. HUNL is the main large-scale benchmark for imperfect-information game AIs. NLFH is similar to HUNL, except the game ends immediately after the second betting round, which makes it small enough to precisely calculate best responses and Nash equilibria. Performance is measured in terms of mbb/g, which is a standard win rate measure in the literature. It stands for milli-big blinds per game and represents how many thousandths of a big blind (the initial money a player must commit to the pot) a player wins on average per hand of poker played.
6.1 Exploitability Experiments in No-Limit Flop Hold’em (NLFH)
Our first experiment measured the exploitability of our technique in NLFH. Exploitability of a strategy in a two-player zero-sum game is how much worse the strategy would do against a best response than a Nash equilibrium strategy would do against a best response. Formally, the exploitability of σ1 is minσ2 u1(σ ∗ 1 , σ2)−minσ2 u1(σ1, σ2), where σ∗1 is a Nash equilibrium strategy.
We considered the case of P1 betting 0.75× the pot at the start of the game, when the action abstraction only contains bets of 0.5× and 1× the pot. We compared our depth-limited solving technique to the randomized pseudoharmonic action translation (RPAT) [11], in which the bet of 0.75× is simply treated as either a bet of 0.5× or 1×. RPAT is the lowest-exploitability known technique for responding to off-tree actions that does not involve real-time computation.
We began by calculating an approximate Nash equilibrium in an action abstraction that does not include the 0.75× bet. This was done by running the CFR+ equilibrium-approximation algorithm [38] for 1,000 iterations, which resulted in less than 1 mbb/g of exploitability within the action abstraction. Next, values for the states at the end of the first betting round within the action abstraction were determined using the self-generative method discussed in Section 4. Since the first betting round is a small portion of the entire game, storing a value for each state in a table required just 42 MB.
To determine a P2 strategy in response to the 0.75× bet, we constructed a depth-limited subgame rooted after the 0.75× bet with leaf nodes at the end of the first betting round. The values of a leaf node in this subgame were set by first determining the in-abstraction leaf nodes corresponding to the exact same sequence of actions, except P1 initially bets 0.5× or 1× the pot. The leaf node values in the 0.75× subgame were set to the average of those two corresponding value vectors. When the end of the first betting round was reached and the board cards were dealt, the remaining game was solved using safe subgame solving.
Figure 2 shows how exploitability decreases as we add state values (that is, as we give P1 more best responses to choose from at the depth limit). When using only one state value at the depth limit (that is, assuming P1 would always play according to the blueprint strategy for the remainder of the game), it is actually better to use RPAT. However, after that our technique becomes significantly better and at 16 values its performance is close to having had the 0.75× action in the abstraction in the first place.
While one could have calculated a (slightly better) P2 strategy in response to the 0.75× bet by solving to the end of the game, that subgame would have been about 10,000× larger than the subgames solved in this experiment. Thus, depth-limited solving dramatically reduces the computational cost of nested subgame solving while giving up very little solution quality.
Figure 2: Exploitability of depth-limited solving in NLFH.
6.2 Experiments Against Top AIs in Heads-Up No-Limit Texas Hold’em (HUNL)
Our main experiment uses depth-limited solving to produce a master-level HUNL poker AI called Modicum using computing resources found in a typical laptop. We test Modicum against Baby Tartanian8 [4], the winner of the 2016 Annual Computer Poker Competition, and against Slumbot [18], the winner of the 2018 Annual Computer Poker Competition. Neither Baby Tartanian8 nor Slumbot uses real-time computation; their strategies are precomputed lookup tables. Baby Tartanian8 used about 2 million core hours and 18 TB of RAM to compute its strategy. Slumbot used about 250,000 core hours and 2 TB of RAM to compute its strategy. In contrast, Modicum used just 700 core hours and 16 GB of RAM to compute its strategy and can play in real time at the speed of human professionals (an average of 20 seconds for an entire hand of poker) using just a 4-core CPU. We now describe Modicum; further details of its construction are provided in Appendix A.
The blueprint strategy for Modicum was constructed by first generating an abstraction of HUNL using state-of-the-art abstraction techniques [12, 20]. Storing a strategy for this abstraction as 4-byte floats requires just 5 GB. This abstraction was approximately solved by running Monte Carlo Counterfactual Regret Minimization (MCCFR) for 700 core hours [22].
HUNL consists of four betting rounds. We conduct depth-limited solving on the first two rounds by solving to the end of that round using MCCFR. Once the third betting round is reached, the remaining game is small enough that we solve to the end of the game using an enhanced form of CFR+ described in the appendix.
We generated 10 values for each state at the end of the first betting round using the self-generative approach. The first betting round was small enough to store all of these state values in a table using 240 MB. For the second betting round, we used the bias approach to generate four opponent best responses. The first best response is simply the opponent’s blueprint strategy. For the second, we biased the opponent’s blueprint strategy toward folding by multiplying the probability of fold actions by 10 and then renormalizing. For the third, we biased the opponent’s blueprint strategy toward checking and calling. Finally for the fourth, we biased the opponent’s blueprint strategy toward betting and raising. To estimate the values of a state when the depth limit is reached on the second round, we sample rollouts of each of the stored best-response strategies.
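The biasing step is a simple reweight-and-renormalize of the blueprint's action probabilities at each information set. Below is a minimal sketch under that reading; the function and action names are illustrative assumptions:

```python
def bias_strategy(action_probs, favored_actions, factor=10.0):
    # action_probs: dict mapping action name -> blueprint probability
    # at one information set; favored_actions: actions to bias toward.
    scaled = {a: p * (factor if a in favored_actions else 1.0)
              for a, p in action_probs.items()}
    total = sum(scaled.values())
    return {a: p / total for a, p in scaled.items()}

blueprint = {"fold": 0.2, "call": 0.5, "raise": 0.3}
fold_biased = bias_strategy(blueprint, {"fold"})
# {'fold': 0.714..., 'call': 0.178..., 'raise': 0.107...}
```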
The performance of Modicum is shown in Table 1. For the evaluation, we used AIVAT to reduce variance [8]. Our new agent defeats both Baby Tartanian8 and Slumbot with statistical significance. For comparison, Baby Tartanian8 defeated Slumbot by 36 ± 12 mbb/g, Libratus defeated Baby Tartanian8 by 63± 28 mbb/g, and Libratus defeated top human professionals by 147± 77 mbb/g. In addition to head-to-head performance against prior top AIs, we also tested Modicum against two versions of Local Best Response (LBR) [25]. An LBR agent is given full access to its opponent’s full-game strategy and uses that knowledge to exactly calculate the probability the LBR agent is in each possible state. Given that probability distribution and a heuristic for how the opposing agent will play thereafter, the LBR agent chooses a best response action. LBR is a way to calculate a lower bound on exploitability and has been shown to be effective in exploiting agents that do not use real-time solving.
In the first version of LBR we tested against, the LBR agent was limited to either folding or betting 0.75× the pot on the first action, and thereafter was limited to either folding or calling. Modicum beat this version of LBR by 570± 42 mbb/g. The second version of LBR we tested against could bet 10 different amounts on the flop that Modicum did not include in its blueprint strategy. Much like the experiment in Section 6.1, this was intended to measure how vulnerable Modicum is to unanticipated bet sizes. The LBR agent was limited to betting 0.75× the pot for the first action of the game and calling for the remaining actions on the preflop. On the flop, the LBR agent could either fold, call, or bet 0.33 × 2^x times the pot for x ∈ {0, 1, ..., 10}. On the remaining rounds the LBR agent could either fold or call. Modicum beat this version of LBR by 1377 ± 115 mbb/g. In contrast, similar forms of LBR have been shown to defeat prior top poker AIs that do not use real-time solving by hundreds or thousands of mbb/g [25].
While our new agent is probably not as strong as Libratus, it was produced with less than 0.1% of the computing resources and memory, and is never vulnerable to off-tree opponent actions.
While the rollout method used on the second betting round worked well, rollouts may be significantly more expensive in deeper games. To demonstrate the generality of our approach, we also trained a deep neural network (DNN) to predict the values of states at the end of the second betting round as an alternative to using rollouts. The DNN takes as input a 34-float vector of features describing the state, and outputs four floats representing the values of the state for the four possible opponent strategies (represented as a fraction of the size of the pot). The DNN was trained using 180 million examples per player by optimizing the Huber loss with Adam [21], which we implemented using PyTorch [32]. In order for the network to run sufficiently fast on just a 4-core CPU, the DNN has just 4 hidden layers with 256 nodes in the first hidden layer and 128 nodes in the remaining hidden layers. This achieved a Huber loss of 0.02. Using a DNN rather than rollouts resulted in the agent beating Baby Tartanian8 by 2± 9 mbb/g. However, the average time taken using a 4-core CPU increased from 20 seconds to 31 seconds per hand. Still, these results demonstrate the generality of our approach.
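For concreteness, a minimal PyTorch sketch of a value network with the architecture described above is given below. The activation function, batch size, and training-loop details are our assumptions, since the text does not specify them:

```python
import torch
import torch.nn as nn

class StateValueNet(nn.Module):
    """Maps a 34-float state description to the four state values
    (as fractions of the pot), one per stored opponent strategy."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(34, 256), nn.ReLU(),   # first hidden layer: 256 nodes
            nn.Linear(256, 128), nn.ReLU(),  # remaining hidden layers: 128
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 4),
        )

    def forward(self, x):
        return self.net(x)

model = StateValueNet()
optimizer = torch.optim.Adam(model.parameters())
loss_fn = nn.SmoothL1Loss()          # a Huber-style loss

features = torch.randn(64, 34)       # placeholder training batch
targets = torch.randn(64, 4)
optimizer.zero_grad()
loss = loss_fn(model(features), targets)
loss.backward()
optimizer.step()
```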
7 Comparison to Prior Work
Section 2 demonstrated that in imperfect-information games, states do not have unique values and therefore the techniques common in perfect-information games and single-agent settings do not apply. This paper introduced a way to overcome this challenge by assigning multiple values to states. A different approach is to modify the definition of a “state” to instead be all players’ belief probability distributions over states, which we refer to as a joint belief state. This technique was previously used to develop the poker AI DeepStack [27]. While DeepStack defeated non-elite human professionals in HUNL, it was never shown to defeat prior top AIs even though it used over 1,000,000 core hours of computation. In contrast, Modicum defeated two prior top AIs with less than 1,000 core hours of computation. Still, there are benefits and drawbacks to both approaches, which we now describe in detail. The right choice may depend on the domain and future research may change the competitiveness of either approach.
A joint belief state is defined by a probability (belief) distribution for each player over states that are indistinguishable to the player. In poker, for example, a joint belief state is defined by each players’ belief about what cards the other players are holding. Joint belief states maintain some of the properties that regular states have in perfect-information games. In particular, it is possible to determine an optimal strategy in a subgame rooted at a joint belief state independently from the rest of the game. Therefore, joint belief states have unique, well-defined values that are not influenced by the strategies played in disjoint portions of the game tree. Given a joint belief state, it is also possible
to define the value of each root infoset for each player. In the example of poker, this would be the value of a player holding a particular poker hand given the joint belief state.
One way to do depth-limited subgame solving, other than the method we describe in this paper, is to learn a function that maps joint belief states to infoset values. When conducting depth-limited solving, one could then set the value of a leaf infoset based on the joint belief state at that leaf infoset.
One drawback is that because a player's belief distribution partly defines a joint belief state, the values of the leaf infosets must be recalculated each time the strategy in the subgame changes. With the best domain-specific iterative algorithms, this would require recalculating the leaf infoset values about 500 times. Monte Carlo algorithms, which are the preferred domain-independent method of solving imperfect-information games, may change the strategy millions of times in a subgame, which poses a problem for the joint belief state approach. In contrast, our multi-valued state approach requires only a single function call for each leaf node regardless of the number of iterations conducted.
Moreover, evaluating multi-valued states with a function approximator is cheaper and more scalable to large games than joint belief states. The input to a function that predicts the value of a multi-valued state is simply the state description (for example, the sequence of actions), and the output is several values. In our experiments, the input was 34 floats and the output was 4 floats. In contrast, the input to a function that predicts the values of a joint belief state is a probability vector for each player over the possible states they may be in. For example, in HUNL, the input is more than 2,000 floats and the output is more than 1,000 floats. The input would be even larger in games with more states per infoset.
Another drawback is that learning a mapping from joint belief states to infoset values is computationally more expensive than learning a mapping from states to a set of values. For example, Modicum required less than 1,000 core hours to create this mapping. In contrast, DeepStack required over 1,000,000 core hours to create its mapping. The increased cost is partly because computing training data for a joint belief state value mapping is inherently more expensive. The multi-valued states approach is learning the values of best responses to a particular strategy (namely, the approximate Nash equilibrium strategy σ̂*_1). In contrast, a joint belief state value mapping is learning the value of all players playing an equilibrium strategy given that joint belief state. As a rough guideline, computing an equilibrium is about 1,000× more expensive than computing a best response in large games [1].
On the other hand, the multi-valued state approach requires knowledge of a blueprint strategy that is already an approximate Nash equilibrium. A benefit of the joint belief state approach is that rather than simply learning best responses to a particular strategy, it is learning best responses against every possible strategy. This may be particularly useful in self-play settings where the blueprint strategy is unknown, because it may lead to increasingly more sophisticated strategies.
Another benefit of the joint belief state approach is that in many games (but not all) it obviates the need to keep track of the sequence of actions played. For example, in poker if there are two different sequences of actions that result in the same amount of money in the pot and all players having the same belief distribution over what their opponents’ cards are, then the optimal strategy in both of those situations is the same. This is similar to how in Go it is not necessary to know the exact sequence of actions that were played. Rather, it is only necessary to know the current configuration of the board (and, in certain situations, also the last few actions played).
A further benefit of the joint belief state approach is that its run-time complexity does not increase with the degree of precision other than needing a better (possibly more computationally expensive) function approximator. In contrast, for our algorithm the computational complexity of finding a solution to a depth-limited subgame grows linearly with the number of values per state.
8 Conclusions
We introduced a principled method for conducting depth-limited solving in imperfect-information games. Experimental results show that this leads to stronger performance than the best precomputed-strategy AIs in HUNL while using orders of magnitude fewer computational resources, and is also orders of magnitude more efficient than past approaches that use real-time solving. Additionally, the method exhibits low exploitability. In addition to using fewer resources, this approach broadens the applicability of nested real-time solving to longer games.
9 Acknowledgments
This material is based on work supported by the National Science Foundation under grants IIS1718457, IIS-1617590, and CCF-1733556, and the ARO under award W911NF-17-1-0082, as well as XSEDE computing resources provided by the Pittsburgh Supercomputing Center. We thank Thore Graepel, Marc Lanctot, David Silver, Ariel Procaccia, Fei Fang, and our anonymous reviewers for helpful inspiration, feedback, suggestions, and support. | 1. What is the main contribution of the paper regarding the modification of the continual resolving algorithm?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the clarity, quality, originality, and significance of the paper's content?
4. What are the concerns regarding the theoretical guarantees and empirical comparisons with state-of-the-art methods?
5. Are there any questions or suggestions for improving the paper or its approach? | Review | Review
Summary: The paper proposes a modification of the continual resolving algorithm for depth-limited search in imperfect-information games. Instead of a counterfactual value network mapping joint beliefs to counterfactual values, it suggests using a function that maps a public state to a set of values of diverse best-response strategies against an approximate Nash equilibrium strategy. This function can be precomputed, or realized either by function approximation or by Monte Carlo simulation based on pre-computed strategies. The authors show that this approach performs well in poker with small computational requirements.

Quality: The core ideas of the paper are correct and interesting. There are several technical details that are unclear or overstated.

1) The most important problem is the inconsistency regarding the blueprint strategy. The paper assumes at several places that the player plays based on a precomputed strategy \sigma^*_1. At the same time, the paper says that the player never plays based on the blueprint strategy. Since the blueprint strategy is created using abstraction, which leads to huge exploitability in HUNL, it is likely substantially different from the strategy computed and played online, especially when the opponent uses actions not included in the abstraction. All the theoretical guarantees rely on the player playing based on the blueprint strategy. Therefore, they make only very limited sense and serve only as an inspiration for the heuristic approach actually evaluated in the paper. This is not clearly stated. I do not see how the theory can be useful unless the blueprint strategy already solves the game and therefore there is no need for online computation. I ask the authors to elaborate on this in the rebuttal.

2) The second important weakness of the paper is the lack of experimental comparison with the state of the art. The paper spends a whole page explaining reasons why the presented approach might perform better under some circumstances, but there is no hard evidence at all. What is the reason not to perform an empirical comparison to the joint belief state approach and show the real impact of the claimed advantages and disadvantages? Since this is the main point of the paper, it should be clear when the new modification is useful.

3) Furthermore, there is an incorrect statement about the performance of the state-of-the-art method. The paper claims that "The evidence suggests that in the domain we tested on, using multi-valued states leads to better performance." because the alternative approach "was never shown to defeat prior top AIs". This is simply incorrect. Lack of an experiment is not evidence for the superiority of the method that performed the experiment without any comparison.

4) The rock-paper-scissors example is clearly inspired by an example that appeared in many previous works. Please cite the source appropriately.

5) As explained in 1), the presented method is quite heuristic. The algorithm does not actually play the blueprint strategy, and only a few values are used in the leaf states, which cannot cover the whole variety of best-response values. In order to assess whether the presented approach might be applicable also to other games, it would be very useful to evaluate it on some substantially different domains besides poker.

Clarity: The paper is well written and organized, and it is reasonably easy to understand. The impact of the key differences between the theoretical inspiration and the practical implementation should be explained more clearly.

Originality: The presented method is a novel modification of continual resolving. The paper clearly explains the main distinction from the existing method.

Significance: The presented method seems to substantially reduce the computational requirements of creating a strong poker bot. If this proves to be the case also for some other imperfect-information games, it would be a very significant advancement in creating algorithms for playing these games.

Detailed comments:
Line 190: I guess the index should be 1.
Line 339: I would not say MCCFR is currently the preferred solution method, since CFR+ does not work well with sampling.
Line 349: There is no evidence the presented method would work better in Stratego. It would depend on the specific representation and how well the NN would generalize over the types of heuristics.

Reaction to rebuttal:
1) The formulation of the formal statement should be clearer. Still, while you are using the BR values from the blueprint strategy in the computation, I do not see how the theory can give you any real bounds the way you use the algorithm. One way to get more realistic bounds would be to analyze the function approximation version and use error estimates from cross-validation.
2) I do not believe head-to-head evaluation makes much sense because of well-known intransitivity effects. However, since the key difference between your algorithm and DeepStack is the form of the leaf evaluation function used, it would certainly not take man-years to replace the evaluation function with the joint belief in your framework. It would be very interesting to see a comparison of exploitability and other trade-offs on smaller games, where we can still compute it.
4) I meant the use of the example for safe resolving.
5) There is no need for strong agents for particular games to make a rigorous evaluation of equilibrium-solving algorithms. You can compute exploitability in sufficiently large games to evaluate how close your approach is to the equilibrium. Furthermore, there are many domain-independent algorithms for approximating equilibria in these games you can compare to. Especially the small number of best-response values necessary for the presented approach is something that would be very interesting to evaluate in other games.
Line 339: I just meant that I consider CFR+ to be "the preferred domain-independent method of solving imperfect-information games", but it is not really important, it was a detailed comment.
NIPS | Title
Exploiting Local and Global Structure for Point Cloud Semantic Segmentation with Contextual Point Representations
Abstract
In this paper, we propose one novel model for point cloud semantic segmentation, which exploits both the local and global structures within the point cloud based on the contextual point representations. Specifically, we enrich each point representation by performing one novel gated fusion on the point itself and its contextual points. Afterwards, based on the enriched representation, we propose one novel graph pointnet module, relying on the graph attention block to dynamically compose and update each point representation within the local point cloud structure. Finally, we resort to the spatial-wise and channel-wise attention strategies to exploit the point cloud global structure and thereby yield the resulting semantic label for each point. Extensive results on the public point cloud databases, namely the S3DIS and ScanNet datasets, demonstrate the effectiveness of our proposed model, outperforming the state-of-the-art approaches. Our code for this paper is available at https://github.com/fly519/ELGS.
1 Introduction
The point cloud captured by 3D scanners has attracted increasing research interest, especially for point cloud understanding tasks, including 3D object classification [13, 14, 10, 11], 3D object detection [21, 27], and 3D semantic segmentation [25, 13, 14, 23, 10]. 3D semantic segmentation, which aims at providing a class label for each point in the 3D space, is a prevalent and challenging problem. First, the points captured by 3D scanners are usually sparse, which hinders the design of an effective and efficient deep model for semantic segmentation. Second, the points are unstructured and unordered. As such, the relationships between points are hard to capture and model.
As points are not in a regular format, some existing approaches first transform the point clouds into regular 3D voxel grids or collections of images, and then feed them into a traditional convolutional neural network (CNN) to yield the resulting semantic segmentation [25, 5, 22]. Such a transformation process can partially capture the structural information of the points and thereby exploit their relationships. However, such approaches, especially those in the format of 3D volumetric data, require high memory and computation costs. Recently, another thread of deep learning architectures on point clouds, namely PointNet [13] and PointNet++ [14], has been proposed to handle the points in an efficient and effective way. Specifically, PointNet learns a spatial encoding of each point and then aggregates all individual point features as one global representation. However, PointNet does not consider the
local structures. In order to further exploit the local structures, PointNet++ processes a set of points in a hierarchical manner. Specifically, the points are partitioned into overlapping local regions to capture fine geometric structures. The obtained local features are then aggregated into larger units to generate higher-level features until the global representation is obtained. Although promising results have been achieved on public datasets, several open issues still remain.
First, each point is characterized by its own coordinate information and extra attribute values, i.e., color, normal, reflectance, etc. Such a representation only expresses the physical meaning of the point itself and does not consider its neighboring and contextual points. Second, we argue that the local structures within a point cloud are complicated, and the simple partitioning process in PointNet++ cannot effectively capture such complicated relationships. Third, the label of each point not only depends on its own representation but also relates to the other points. Although a global representation is obtained in PointNet and PointNet++, the complicated global relationships within the point cloud have not been explicitly exploited and characterized.
In this paper, we propose one novel model for point cloud semantic segmentation. First, for each point, we construct one contextual representation by considering its neighboring points to enrich its semantic meaning by one novel gated fusion strategy. Based on the enriched semantic representations, we propose one novel graph pointnet module (GPM), which relies on one graph attention block (GAB) to compose and update the feature representation of each point within the local structure. Multiple GPMs can be stacked together to generate the compact representation of the point cloud. Finally, the global point cloud structure is exploited by the spatial-wise and channel-wise attention strategies to generate the semantic label for each point.
2 Related Work
Recently, deep models have demonstrated strong feature learning abilities on computer vision tasks with regular data structures. However, due to the limitations of existing data representation methods, many challenges remain for 3D point cloud tasks, where the data structure is irregular. According to the 3D data representation method, existing approaches can be roughly categorized as 3D voxel-based [5, 25, 22, 7, 9], multiview-based [18, 12], and set-based approaches [13, 14].
3D Voxel-based Approach. The 3D voxel-based methods first transform the point clouds into regular 3D voxel grids, on which a 3D CNN can be directly applied, similarly as for images or videos. Wu et al. [25] propose the full-voxel-based 3D ShapeNets network to store and process 3D data. Due to the constraints of representation resolution, information loss is inevitable during the discretization process. Meanwhile, memory and computational consumption increase dramatically with the voxel resolution. Recently, Oct-Net [16], Kd-Net [7], and O-CNN [22] have been proposed to reduce the computational cost by skipping the operations on empty voxels.
Multiview-based Approach. The multiview-based methods need to render multiple images from the target point cloud based on different view angle settings. Afterwards, each image can be processed by traditional 2D CNN operations [18]. Recently, the multiview image CNN [12] has been applied to 3D shape segmentation and has obtained satisfactory results. The multiview-based approaches help reduce the computational cost and running memory. However, converting the 3D point cloud into images also introduces information loss. Moreover, how to determine the number of views and how to allocate the views to better represent the 3D shape remains an open problem.
Set-based Approach. PointNet [13] is the first set-based method, which learns the representation directly on the unordered and unstructured point clouds. PointNet++ [14] relies on the hierarchical learning strategy to extend PointNet for capturing local structures information. PointCNN [10] is further proposed to exploit the canonical order of points for local context information extraction.
Recently, there have been several attempts in the literature to model the point cloud as structured graphs. For example, Qi et al. [15] propose to build a k-nearest neighbor directed graph on top of point cloud to boost the performance on the semantic segmentation task. SPGraph [8] is proposed to deal with large scale point clouds. The points are adaptively partitioned into geometrically homogeneous elements to build a superpoint graph, which is then fed into a graph convolutional network (GCN) for predicting the semantic labels. DGCNN [24] relies on the edge convolution operation to dynamically capture the local shapes. RS-CNN [11] extends regular grid CNN to irregular configuration, which encodes the geometric relation of points to achieve contextual shape-aware learning of point cloud.
These approaches mainly focus on exploiting local relationships among points and neglect the global relationships.
Unlike previous set-based methods that only consider the raw coordinate and attribute information of each single point, we pay more attention to the spatial context information of neighboring points. Our proposed contextual representation is able to express more fine-grained structural information. We also rely on one novel graph pointnet module to compose and update each point representation within the local point cloud structure. Moreover, the global structure information of the point cloud is considered via the spatial-wise and channel-wise attention strategies.
3 Approach
Point cloud semantic segmentation aims to take the 3D point cloud as input and assign one semantic class label to each point. We propose one novel model for point cloud semantic segmentation, as shown in Fig. 1. Specifically, our proposed network consists of three components, namely the point enrichment, the feature representation, and the prediction. These three components are fully coupled, enabling end-to-end training.
Point Enrichment. To make accurate class predictions for each point within the complicated point cloud structure, we need to consider not only the information of each point itself but also that of its neighboring or contextual points. Different from existing approaches, which rely on the information of each point itself, such as geometry, color, etc., we propose one novel point enrichment layer to enrich each point representation by taking its neighboring or contextual points into consideration. With the incorporated contextual information, each point is able to sense the complicated point cloud structure. As will be demonstrated in Sec. 4.4, the contextual information, enriching the semantic information of each point, can help boost the final segmentation performance.
Feature Representation. With the enriched point representation, we resort to the conventional encoder-decoder architecture with lateral connections to learn the feature representation of each point. To further exploit the local structure information of the point cloud, the GPM is employed in the encoder, which relies on the GAB to dynamically compose and update the feature representation of each point within its local region. The decoder with lateral connections works on the compact representation obtained from the encoder to generate the semantic feature representation of each point.
Prediction. Based on the obtained semantic representations, we resort to both the channel-wise and spatial-wise attentions to further exploit the global structure of the point cloud. Afterwards, the semantic label is predicted for each point.
3.1 Point Enrichment
The raw representation of each point usually consists of its 3D position and associated attributes, such as color, reflectance, surface normal, etc. Existing approaches usually take such a representation as input directly, neglecting the neighboring or contextual information, which is believed to play an essential role [17] in characterizing the point cloud structure, especially from the local perspective. In this paper, besides the point itself, we incorporate its neighboring points as contextual information to enrich the point's semantic representation. With such incorporated contextual information, each point is aware of the complicated point cloud structure.
As illustrated in Fig. 2, a point cloud consists of N points, represented as {P_1, P_2, ..., P_N}, with P_i ∈ R^{C_f} denoting the attribute values of the i-th point, such as position coordinates, color, normal, etc. To characterize the contextual information of each point, the k-nearest neighbor set N_i within the local region centered on the i-th point is selected and concatenated, yielding the contextual representation R_i ∈ R^{kC_f} of point i as follows:
R_i = ‖_{j∈N_i} P_j.   (1)
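A minimal PyTorch sketch of this contextual representation is given below; the brute-force neighbor search is our own simplification, and the fixed neighbor-distance cap used in the experiments (Sec. 4.1) is omitted for brevity:

```python
import torch

def contextual_representation(points, k=3):
    # points: (N, C_f) per-point attributes, with xyz assumed in the
    # first three channels. Returns R: (N, k*C_f), the concatenation of
    # each point's k nearest neighbors (Eq. 1).
    coords = points[:, :3]
    dist = torch.cdist(coords, coords)                    # (N, N) distances
    knn = dist.topk(k + 1, largest=False).indices[:, 1:]  # drop the point itself
    neighbors = points[knn]                               # (N, k, C_f)
    return neighbors.reshape(points.shape[0], -1)         # (N, k*C_f)
```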
For each point, we thus have two different representations, namely P_i and R_i. However, these two representations have different dimensions and different characteristics. How to effectively fuse them to produce one more representative feature for each point remains an open issue. In this paper, we propose a novel gated fusion strategy. We first feed P_i into one fully-connected (FC) layer to obtain a new feature vector P̃_i ∈ R^{kC_f}. Afterwards, the gated fusion operation is performed:
g_i = σ(w_i R_i + b_i),        P̂_i = g_i ⊙ P̃_i,
g_i^R = σ(w_i^R P̃_i + b_i^R),   R̂_i = g_i^R ⊙ R_i,   (2)

where w_i, w_i^R ∈ R^{kC_f×kC_f} and b_i, b_i^R ∈ R^{kC_f} are learnable parameters, σ is the non-linear sigmoid function, and ⊙ denotes element-wise multiplication. The gated fusion aims to mutually absorb the useful and meaningful information of P_i and R_i: the interactions between P_i and R_i are updated, yielding P̂_i and R̂_i. The representation of the i-th point is then enriched by concatenating them together as P̂_i ‖ R̂_i. To ease the following presentation, we will reuse P_i to denote the enriched representation of the i-th point.
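The following is a minimal PyTorch sketch of the gated fusion in Eq. (2). Note that, unlike the per-point notation above, the sketch shares the gating weights across points, which is a simplifying assumption on our part:

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, point_dim, ctx_dim):   # point_dim = C_f, ctx_dim = k*C_f
        super().__init__()
        self.proj = nn.Linear(point_dim, ctx_dim)  # P_i -> P~_i
        self.gate_p = nn.Linear(ctx_dim, ctx_dim)  # w_i,   b_i
        self.gate_r = nn.Linear(ctx_dim, ctx_dim)  # w_i^R, b_i^R

    def forward(self, p, r):
        # p: (N, C_f) raw point features; r: (N, k*C_f) contextual features
        p_tilde = self.proj(p)
        p_hat = torch.sigmoid(self.gate_p(r)) * p_tilde   # R_i gates P~_i
        r_hat = torch.sigmoid(self.gate_r(p_tilde)) * r   # P~_i gates R_i
        return torch.cat([p_hat, r_hat], dim=-1)          # enriched point
```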
3.2 Feature Representation
Based on the enriched point representation, we rely on a conventional encoder-decoder architecture with lateral connections to learn the feature representation of each point.
3.2.1 Encoder
Although the enriched point representation already encodes some local structure information, the complicated relationships among points, especially from the local perspective, need to be further exploited. To tackle this challenge, we propose one novel GPM in the encoder, which aims to learn the composition ability between points and thereby capture the local structural information within the point cloud more effectively.
Graph Pointnet Module. Same as [14], we first use the sampling and grouping layers to divide the point set into several local groups. Within each group, the GPM is used to exploit the local relationships between points, and thereby update the point representation by aggregating the point information within the local structure.
As illustrated in Fig. 3, the proposed GPM consists of one multi-layer perceptron (MLP) and a GAB. The MLP in conventional PointNet [13] and PointNet++ [14] operates independently on each point to mine the information within the point itself, while neglecting the correlations and relationships among points. In order to more comprehensively exploit the point relationships, we rely on the GAB to aggregate the neighboring point representations and thereby update each point representation.
For each local structure obtained by the sampling and grouping layers, the GAB [20] first defines one fully connected undirected graph to measure the similarities between any two points within that local structure. Given the output feature map G ∈ R^{C_e×N_e} of the MLP layer in the GPM module, we first linearly project each point to one common space through an FC layer to obtain a new feature map Ĝ ∈ R^{C_e×N_e}. The similarity α_{ij} between point i and point j is measured as follows:
α_{ij} = Ĝ_i · Ĝ_j.   (3)
Afterwards, we calculate the influence factor of point j on point i:
β_{ij} = softmax_j(LeakyReLU(α_{ij})),   (4)

where β_{ij} is regarded as the normalized attentive weight, representing how point j relates to point i. The representation of each point is updated by attentively aggregating the point representations with reference to β_{ij}:

G̃_i = Σ_{j=1}^{N_e} β_{ij} Ĝ_j.   (5)
It can be observed that the GAB dynamically updates the local feature representations by referring to the similarities between points and thereby captures their relationships. Moreover, in order to preserve the original information, the point feature after the MLP is combined with the updated one via a skip connection realized by a gated fusion operation, as shown in Fig. 3.
Please note that multiple GPMs can be stacked, as shown in Fig. 3, to further exploit the complicated non-linear relationships within each local structure. Afterwards, one max pooling layer is used to aggregate the feature map into a one-dimensional feature vector, which not only lowers the dimensionality of the representation, making it possible to quickly generate a compact representation of the point cloud, but also helps filter out unreliable noise.
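A minimal PyTorch sketch of the graph attention block (Eqs. 3-5) operating on one local group is given below; treating points as rows rather than columns is a notational simplification on our part, and the skip connection with gated fusion shown in Fig. 3 is omitted for brevity:

```python
import torch
import torch.nn.functional as F
from torch import nn

class GraphAttentionBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Linear(channels, channels)  # projection to a common space

    def forward(self, g):
        # g: (N_e, C_e) MLP features of the points in one local group
        g_hat = self.fc(g)
        alpha = g_hat @ g_hat.t()                      # similarities (Eq. 3)
        beta = F.softmax(F.leaky_relu(alpha), dim=1)   # influence factors (Eq. 4)
        return beta @ g_hat                            # attentive update (Eq. 5)
```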
3.2.2 Decoder
For the decoder, we use the same architecture as [14]. Specifically, we progressively upsample the compact features obtained from the encoder until the original resolution is reached. Please note that, to preserve the information generated in the encoder as much as possible, lateral connections are also used.
3.3 Prediction
After the feature representation stage, a rich semantic representation is obtained for each point. Note that our previous operations, including the contextual representation and the feature representation, only mine local relationships among points. However, the global information is also important and needs to be considered when determining the label of each individual point. For the semantic segmentation task, two points far apart in space may belong to the same semantic category and can be jointly considered to mutually enhance their feature representations. Moreover, for high-dimensional feature representations, inter-dependencies between feature channels also exist. As such, in order to capture the global context information for each point, we introduce two attention modules, namely spatial-wise and channel-wise attention [4], to model the global relationships between points.
Spatial-wise Attention. To model rich global contextual relationships among points, the spatial-wise attention module is employed to adaptively aggregate spatial contexts of local features. Given the feature map F ∈ R^{C_d×N_d} from the decoder, we first feed it into two FC layers to obtain two new feature maps A and B, respectively, where A, B ∈ R^{C_d×N_d}. N_d is the number of points and C_d is the number of feature channels. The normalized spatial-wise attentive weight v_{ij} measures the influence factor of point j on point i as follows:
v_{ij} = softmax_j(A_i · B_j).   (6)
Afterwards, the feature map F is fed into another FC layer to generate a new feature map D ∈ R^{C_d×N_d}. The output feature map F̂ ∈ R^{C_d×N_d} after spatial-wise attention is obtained as:

F̂_i = Σ_{j=1}^{N_d} v_{ij} D_j + F_i.   (7)
As such, the global spatial structure information is attentively aggregated with each point representation.
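A minimal PyTorch sketch of the spatial-wise attention (Eqs. 6-7) follows; again, points are treated as rows for clarity, which is our own notational choice:

```python
import torch
import torch.nn.functional as F
from torch import nn

class SpatialAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.fc_a = nn.Linear(channels, channels)
        self.fc_b = nn.Linear(channels, channels)
        self.fc_d = nn.Linear(channels, channels)

    def forward(self, f):
        # f: (N_d, C_d) decoder features, one row per point
        a, b, d = self.fc_a(f), self.fc_b(f), self.fc_d(f)
        v = F.softmax(a @ b.t(), dim=1)  # v_ij = softmax_j(A_i . B_j), Eq. (6)
        return v @ d + f                 # attentive sum plus residual, Eq. (7)
```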
Channel-wise Attention. The channel-wise attention performs similarly to the spatial-wise attention, with the channel attention map explicitly modeling the interdependencies between channels and thereby boosting the feature discriminability. Similarly to the spatial-wise attention module, the output feature map F̃ ∈ R^{C_d×N_d} is obtained by aggregating the global channel structure information with each channel representation.
After summing the feature maps F̂ and F̃, the semantic label for each point can be obtained with one additional FC layer. With such attention processes from the global perspective, the feature representation of each point is updated. As such, the complicated relationships between the points can be comprehensively exploited, yielding more accurate segmentation results.
4 Experiment
4.1 Experiment Setting
Dataset. To evaluate the performance of the proposed model and compare with the state of the art, we conduct experiments on two publicly available datasets, the Stanford 3D Indoor Semantics (S3DIS) dataset [1] and the ScanNet dataset [2]. The S3DIS dataset comes from real scans of indoor environments, including 3D scans from Matterport scanners in 6 areas, with 271 rooms in total. ScanNet is a point cloud dataset of scanned indoor scenes. It has 22 categories of semantic labels and 1513 scenes, covering a wide variety of spaces. Each point is annotated with an instance-level semantic category label.
Implementation Details. The number of neighboring points k in the contextual representation is set to 3, and the farthest distance for a neighboring point is fixed to 0.06. For feature extraction, a four-layer encoder is used, where the spatial scale of each layer is set to 1024, 256, 64, and 16, respectively. The GPM is enabled in the first two layers of the encoder to exploit the local relationships between points. The maximum numbers of training epochs for S3DIS and ScanNet are set to 120 and 500, respectively.
Evaluation Metric. Two widely used metrics, namely overall accuracy (OA) and mean intersection over union (mIoU), are used to measure semantic segmentation performance. OA is the prediction accuracy over all points. IoU measures the ratio of the area of overlap to the area of union between the ground truth and the segmentation result. mIoU is the average IoU over all categories.
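For reference, a minimal NumPy sketch of these two metrics is given below; the handling of classes absent from both prediction and ground truth is our own convention:

```python
import numpy as np

def overall_accuracy(pred, gt):
    # pred, gt: integer label arrays of the same shape
    return float(np.mean(pred == gt))

def mean_iou(pred, gt, num_classes):
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        if union > 0:          # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))
```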
Competitor Methods. For the S3DIS dataset, we compare our method with PointNet [13], PointNet++ [14], SEGCloud [19], RSNet [6], SPGraph [8], SGPN [23], Engelmann et al. [3], A-SCN [26], and DGCNN [24]. For the ScanNet dataset, we compare with 3DCNN [2], PointNet [13], PointNet++ [14], RSNet [6], and PointCNN [10].
4.2 S3DIS Semantic Segmentation
We perform semantic segmentation experiments on the S3DIS dataset to evaluate our performance on indoor real-world scene scans, and we perform ablation experiments on this dataset. Following the experimental setup in PointNet [13], we divide each room evenly into several 1m³ cubes, with 4096 points uniformly sampled from each cube.
Same as [13, 3, 8], we perform 6-fold cross validation with micro-averaging. In order to compare with more methods, we also report the performance on the fifth fold only (Area 5). The OA and mIoU results are summarized in Table 1. From the results we can see that our algorithm performs better than other competitor methods in terms of both OA and mIoU metrics.
Besides, the IoU values of each category are summarized in Table 2; it can be observed that our proposed method achieves the best performance for several categories. For simple shapes such as "floor" and "ceiling", every model performs well, with our approach performing better. This is mainly because the prediction layer of our proposed method incorporates the global structure information between points, which enhances the point representations in flat areas. For categories with complex local structure, such as "chair" and "bookcase", our model shows the best performance,
since we consider the contextual representation to enhance the relationship between each point and its neighbors, and use the GPM module to exploit the local structure information. However, the "window" and "board" categories are more difficult to distinguish from the "wall", as they are close to the "wall" in position and appear similar. The key to distinguishing them is to find subtle shape differences and detect the edges. It can be observed that our model performs well on the "window" and "board" categories. In order to further demonstrate the effectiveness of our model, some qualitative examples from the S3DIS dataset are provided in Fig. 4 and Fig. 5, showing that our model can yield more accurate segmentation results.
4.3 ScanNet Semantic Segmentation

Table 3: The segmentation results on the ScanNet dataset in terms of both OA and mIoU.

Method           OA    mIoU
3DCNN [2]        73.0  -
PointNet [13]    73.9  -
PointNet++ [14]  84.5  38.28
RSNet [6]        -     39.35
PointCNN [10]    85.1  -
Ours             85.3  40.6
For the ScanNet dataset, the numbers of training and testing scenes are 1201 and 312, the same as in [14, 10]. We only use the XYZ coordinate information. The results are illustrated in Table 3. Compared with other competitive methods, our proposed model achieves better performance in terms of both the OA and mIoU metrics.
4.4 Ablation Study
To validate the contribution of each module in our framework, we conduct ablation studies to demonstrate their effectiveness. Detailed experimental results are provided in Table 4.
Contextual Representation Module. After removing the contextual representation module in the input layer (denoted as w/o CR), the mIoU value drops from 60.06 to 56.15, as shown in Table 4. Based on the per-category results in Table 5, some categories show significant drops in IoU, such as "column", "sofa", and "door". The contextual representation can enhance the point features of categories with complex local structures. We also replace the gating operation in the contextual representation with a simple concatenation operation. Due to the inequality of the two kinds of information, the OA and mIoU decrease. Thus, the proposed gating operation is useful for fusing the information of the point itself and its neighborhood.
Graph Pointnet Module. The segmentation performance of our model without the GPM module (denoted as w/o GPM) also drops significantly, which indicates that both the proposed GPM and CR are important for performance improvement. Specifically, without the GPM, the mIoU of categories such as "column" and "sofa" drops significantly.
Attention Module. Removing the attention module (denoted as w/o AM) decreases both OA and mIoU. Moreover, the performance on categories with large flat areas, such as "ceiling", "floor", "wall", and "window", drops significantly. As aforementioned, the attention module aims to mine the global relationships between points. Two points of the same category may be separated by a large spatial distance; with the attention module, the features of these points are mutually aggregated.
Table 6: Performances of DGCNN with our proposed modules in terms of OA.

Model             OA
DGCNN             84.31
DGCNN+CR          85.35
DGCNN+GPM         84.90
DGCNN+AM          85.17
DGCNN+CR+GPM+AM   86.07
We further incorporate the proposed CR, GPM, and AM with DGCNN [24] for point cloud semantic segmentation, with the performances illustrated in Table 6. It can be observed that CR, GPM, and AM each help improve the performance, demonstrating the effectiveness of each module.
Model Complexity. Table 7 illustrates the model complexity comparisons. The sample size for all models is fixed at 4096. It can be observed that the inference time of our model (28 ms) is less than that of the other competitor models, except for PointNet (5.3 ms) and PointNet++ (24 ms). The model size is comparable to the other models, except for PointCNN, which has the largest model.
Robustness under Noise. We further demonstrate the robustness of our proposed model with respect to PointNet++. As for scaling, when the scaling ratio is 50%, the OA of our proposed model and PointNet++ on the segmentation task decreases by 3.0% and 4.5%, respectively. As for rotation, when the rotation angle is π/10, the OA of our proposed model and PointNet++ on the segmentation task decreases by 1.7% and 1.0%, respectively. As such, our model is more robust to scaling but less robust to rotation.
5 Conclusion
In this paper, we proposed one novel network for point cloud semantic segmentation. Different from existing approaches, we enrich each point representation by incorporating its neighboring and contextual points. Moreover, we proposed one novel graph pointnet module to exploit the point cloud local structure, and rely on the spatial-wise and channel-wise attention strategies to exploit the point cloud global structure. Extensive experiments on two public point cloud semantic segmentation datasets demonstrate the superiority of our proposed model.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China (Grant 61871270 and Grant 61672443), in part by the Natural Science Foundation of SZU (grant no. 827000144) and in part by the National Engineering Laboratory for Big Data System Computing Technology of China. | 1. What is the focus of the review?
2. What are the strengths and weaknesses of the proposed approach in the paper?
3. Do you have any concerns regarding the novelty and significance of the paper's contribution?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any questions or concerns regarding the experimental design and results? | Review | Review
1. Originality: The method is a combination of existing techniques; attention has been well explored in GNNs, which solve a problem similar to point cloud analysis. Actually, a point cloud is a kind of graph data. The contextual representation is just a fusion of neighboring features with the central one, which is quite straightforward. Meanwhile, the choice of fusion operations (Eqs. 2, 5, 7) is not well explained and motivated. Local information has been explored in the point cloud community. Some related works are not cited and discussed, for example: "Dynamic Graph CNN for Learning on Point Clouds" and "Relation-Shape Convolutional Neural Network for Point Cloud Analysis" (both officially accepted by peer-reviewed journals/conferences, with papers on arXiv before the NeurIPS submission deadline), as well as "Mining Point Cloud Local Structures by Kernel Correlation and Graph Pooling" and "Pointwise Convolutional Neural Networks".

2. Quality: The paper is technically sound. Some designs are not well supported by experiments or not well motivated, as I mentioned above. No results on running time complexity.

3. Clarity: The paper is well written and easy to follow.

4. Significance: The paper is incremental to previous work. While considering both global and local information is beneficial for point cloud segmentation, in my opinion, the method does not show its advantages over previous work. For example, both DGCNN and PointNet++ incorporate contextual information to enrich the point feature, either in feature space (DGCNN) or in geometric space (PointNet++; if we do not use sampling to reduce the point number in the original PointNet++, it is a kind of feature extractor that considers neighboring information), and the method does not show its advantage over these methods. Actually, the method only shows that by combining many factors it can achieve better performance; however, it is unclear what would happen if we replaced the proposed modules with existing techniques. This would be of more interest to readers in order to understand the advance of the proposed method.
NIPS | Title
Exploiting Local and Global Structure for Point Cloud Semantic Segmentation with Contextual Point Representations
Abstract
In this paper, we propose one novel model for point cloud semantic segmentation, which exploits both the local and global structures within the point cloud based on the contextual point representations. Specifically, we enrich each point representation by performing one novel gated fusion on the point itself and its contextual points. Afterwards, based on the enriched representation, we propose one novel graph pointnet module, relying on the graph attention block to dynamically compose and update each point representation within the local point cloud structure. Finally, we resort to the spatial-wise and channel-wise attention strategies to exploit the point cloud global structure and thereby yield the resulting semantic label for each point. Extensive results on the public point cloud databases, namely the S3DIS and ScanNet datasets, demonstrate the effectiveness of our proposed model, outperforming the state-of-the-art approaches. Our code for this paper is available at https://github.com/fly519/ELGS.
1 Introduction
The point cloud captured by 3D scanners has attracted more and more research interests, especially for the point cloud understanding tasks, including the 3D object classification [13, 14, 10, 11], 3D object detection [21, 27], and 3D semantic segmentation [25, 13, 14, 23, 10]. 3D semantic segmentation, aiming at providing class labels for each point in the 3D space, is a prevalent challenging problem. First, the points captured by the 3D scanners are usually sparse, which hinders the design of one effective and efficient deep model for semantic segmentation. Second, the points always appear unstructured and unordered. As such, the relationship between the points is hard to be captured and modeled.
As points are not in a regular format, some existing approaches first transform the point clouds into regular 3D voxel grids or collections of images, and then feed them into traditional convolutional neural network (CNN) to yield the resulting semantic segmentation [25, 5, 22]. Such a transformation process can somehow capture the structure information of the points and thereby exploit their relationships. However, such approaches, especially in the format of 3D volumetric data, require high memory and computation cost. Recently, another thread of deep learning architectures on point clouds, namely PointNet [13] and PointNet++ [14], is proposed to handle the points in an efficient and effective way. Specifically, PointNet learns a spatial encoding of each point and then aggregates all individual point features as one global representation. However, PointNet does not consider the ∗Corresponding author.
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
local structures. In order to further exploit the local structures, PointNet++ processes a set of points in a hierarchical manner. Specifically, the points are partitioned into overlapping local regions to capture the fine geometric structures. And the obtained local features are further aggregated into larger units to generate higher level features until the global representation is obtained. Although promising results have been achieved on the public datasets, there still remains some opening issues.
First, each point is characterized by its own coordinate information and extra attribute values, i.e. color, normal, reflectance, etc. Such representation only expresses the physical meaning of the point itself, which does not consider its neighbouring and contextual ones. Second, we argue that the local structures within point cloud are complicated, while the simple partitioning process in PointNet++ cannot effectively capture such complicated relationships. Third, each point labeling not only depends on its own representation, but also relates to the other points. Although the global representation is obtained in PointNet and PointNet++, the complicated global relationships within the point cloud have not been explicitly exploited and characterized.
In this paper, we propose one novel model for point cloud semantic segmentation. First, for each point, we construct one contextual representation by considering its neighboring points to enrich its semantic meaning by one novel gated fusion strategy. Based on the enriched semantic representations, we propose one novel graph pointnet module (GPM), which relies on one graph attention block (GAB) to compose and update the feature representation of each point within the local structure. Multiple GPMs can be stacked together to generate the compact representation of the point cloud. Finally, the global point cloud structure is exploited by the spatial-wise and channel-wise attention strategies to generate the semantic label for each point.
2 Related Work
Recently, deep models have demonstrated the feature learning abilities on computer vision tasks with regular data structure. However, due to the limitation of data representation method, there are still many challenges for 3D point cloud task, which is of irregular data structures. According to the 3D data representation methods, existing approaches can be roughly categorized as 3D voxelbased [5, 25, 22, 7, 9], multiview-based [18, 12], and set-based approaches [13, 14].
3D Voxel-based Approach. The 3D voxel-based methods first transform the point clouds into regular 3D voxel grids, and then the 3D CNN can be directly applied similarly as the image or video. Wu et al. [25] propose the full-voxels based 3D ShapeNets network to store and process 3D data. Due to the constraints of representation resolution, information loss are inevitable during the discretization process. Meanwhile, the memory and computational consumption are increases dramatically with respect to the resolution of voxel. Recently, Oct-Net [16], Kd-Net [7], and O-CNN [22] have been proposed to reduce the computational cost by skipping the operations on empty voxels.
Multiview-based Approach. The multiview-based methods need to render multiple images from the target point cloud based on different view angle settings. Afterwards, each image can be processed by the traditional 2D CNN operations [18]. Recently, the multiview image CNN [12] has been applied to 3D shape segmentation, and has obtained satisfactory results. The multiview-based approaches help reducing the computational cost and running memory. However, converting the 3D point cloud into images also introduce information loss. And how to determine the number of views and how to allocate the view to better represent the 3D shape still remains as an intractable problem.
Set-based Approach. PointNet [13] is the first set-based method, which learns the representation directly on unordered and unstructured point clouds. PointNet++ [14] relies on a hierarchical learning strategy to extend PointNet for capturing local structure information. PointCNN [10] is further proposed to exploit the canonical order of points for local context information extraction.
Recently, there have been several attempts in the literature to model the point cloud as a structured graph. For example, Qi et al. [15] propose to build a k-nearest-neighbor directed graph on top of the point cloud to boost performance on the semantic segmentation task. SPGraph [8] is proposed to deal with large-scale point clouds. The points are adaptively partitioned into geometrically homogeneous elements to build a superpoint graph, which is then fed into a graph convolutional network (GCN) for predicting the semantic labels. DGCNN [24] relies on the edge convolution operation to dynamically capture local shapes. RS-CNN [11] extends regular grid CNNs to irregular configurations by encoding the geometric relations of points, achieving contextual shape-aware learning of the point cloud.
These approaches mainly focus on exploiting local relationships among points, and neglect the global relationships.
Unlike previous set-based methods that only consider the raw coordinate and attribute information of each single point, we pay more attention to the spatial context information within neighboring points. Our proposed contextual representation is able to express more fine-grained structural information. We also rely on a novel graph pointnet module to compose and update each point representation within the local point cloud structure. Moreover, the global structure information of the point cloud is considered through the spatial-wise and channel-wise attention strategies.
3 Approach
Point cloud semantic segmentation takes a 3D point cloud as input and assigns one semantic class label to each point. We propose a novel model for this task, as shown in Fig. 1. Specifically, our proposed network consists of three components, namely the point enrichment, the feature representation, and the prediction. These three components are fully coupled together, enabling end-to-end training.
Point Enrichment. To make an accurate class prediction for each point within the complicated point cloud structure, we need to consider not only the information of each point itself but also its neighboring or contextual points. Different from the existing approaches, which rely on the information of each point itself, such as geometry, color, etc., we propose a novel point enrichment layer to enrich each point representation by taking its neighboring or contextual points into consideration. With the incorporated contextual information, each point is able to sense the complicated point cloud structure. As will be demonstrated in Sec. 4.4, the contextual information, enriching the semantic information of each point, helps boost the final segmentation performance.
Feature Representation. With the enriched point representation, we resort to the conventional encoder-decoder architecture with lateral connections to learn the feature representation of each point. To further exploit the local structure information of the point cloud, the GPM is employed in the encoder, which relies on the GAB to dynamically compose and update the feature representation of each point within its local region. The decoder with lateral connections works on the compact representation obtained from the encoder to generate the semantic feature representation for each point.
Prediction. Based on the obtained semantic representations, we resort to both the channel-wise and spatial-wise attentions to further exploit the global structure of the point cloud. Afterwards, the semantic label is predicted for each point.
3.1 Point Enrichment
The raw representation of each point usually comprises its 3D position and associated attributes, such as color, reflectance, surface normal, etc. Existing approaches usually take such representations as input directly, neglecting the neighboring or contextual information, which is believed to play an essential role [17] in characterizing the point cloud structure, especially from the local perspective. In this paper, besides the point itself, we incorporate its neighboring points as contextual information to enrich the point's semantic representation. With such incorporated contextual information, each point becomes aware of the complicated point cloud structure.
As illustrated in Fig. 2, a point cloud consists of N points, represented as {P_1, P_2, ..., P_N}, with P_i ∈ R^{C_f} denoting the attribute values of the i-th point, such as position coordinates, color, normal, etc. To characterize the contextual information of each point, the k-nearest-neighbor set N_i within the local region centered on the i-th point is selected, and the neighboring points are concatenated together, giving the contextual representation R_i ∈ R^{kC_f} of point i as follows:
$$R_i = \big\Vert_{j \in N_i} P_j. \tag{1}$$
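To make Eq. (1) concrete, the following is a minimal sketch of how the contextual representation could be gathered with a brute-force k-nearest-neighbor search; the function name and the use of all attribute channels for the distance computation are our own simplifications (the paper additionally restricts neighbors to a fixed radius, omitted here).

```python
import torch

def contextual_representation(points: torch.Tensor, k: int = 3) -> torch.Tensor:
    # points: (N, C_f) attribute matrix; returns R of shape (N, k * C_f),
    # i.e. Eq. (1) with the k nearest neighbors' attributes concatenated.
    dist = torch.cdist(points, points)         # (N, N) pairwise distances
    dist.fill_diagonal_(float("inf"))          # a point is not its own neighbor
    idx = dist.topk(k, largest=False).indices  # (N, k) neighbor indices
    return points[idx].reshape(points.size(0), -1)

R = contextual_representation(torch.randn(1024, 9), k=3)  # -> (1024, 27)
```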
For each point, we now have two different representations, namely P_i and R_i. However, these two representations have different dimensions and different characteristics. How to effectively fuse them into a more representative feature for each point remains an open issue. In this paper, we propose a novel gated fusion strategy. We first feed P_i into one fully-connected (FC) layer to obtain a new feature vector P̃_i ∈ R^{kC_f}. Afterwards, the gated fusion operation is performed:
$$g_i = \sigma(w_i R_i + b_i), \qquad \hat{P}_i = g_i \odot \tilde{P}_i,$$
$$g_i^R = \sigma\big(w_i^R \tilde{P}_i + b_i^R\big), \qquad \hat{R}_i = g_i^R \odot R_i, \tag{2}$$
where w_i, w_i^R ∈ R^{kC_f×kC_f} and b_i, b_i^R ∈ R^{kC_f} are learnable parameters, σ is the non-linear sigmoid function, and ⊙ is element-wise multiplication. The gated fusion aims to mutually absorb the useful and meaningful information of P_i and R_i; the interactions between P_i and R_i are updated, yielding P̂_i and R̂_i. The i-th point representation is then enriched by concatenating them together as P̂_i ‖ R̂_i. For ease of presentation, we will re-use P_i to denote the enriched representation of the i-th point.
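A minimal sketch of the gated fusion of Eq. (2) is given below, assuming plain nn.Linear layers for the FC lift and the two gates; the module and argument names are illustrative rather than taken from the paper.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    # Eq. (2): two sigmoid gates let P_i and R_i mutually filter each other.
    def __init__(self, c_f: int, k: int):
        super().__init__()
        self.lift = nn.Linear(c_f, k * c_f)        # FC producing \tilde{P}_i
        self.gate_p = nn.Linear(k * c_f, k * c_f)  # w_i, b_i
        self.gate_r = nn.Linear(k * c_f, k * c_f)  # w_i^R, b_i^R

    def forward(self, p: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
        p_tilde = self.lift(p)                            # (N, k*C_f)
        p_hat = torch.sigmoid(self.gate_p(r)) * p_tilde   # g_i ⊙ \tilde{P}_i
        r_hat = torch.sigmoid(self.gate_r(p_tilde)) * r   # g_i^R ⊙ R_i
        return torch.cat([p_hat, r_hat], dim=-1)          # enriched point

enriched = GatedFusion(c_f=9, k=3)(torch.randn(1024, 9), torch.randn(1024, 27))
```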
3.2 Feature Representation
Based on the enriched point representation, we rely on a conventional encoder-decoder architecture with lateral connections to learn the feature representation of each point.
3.2.1 Encoder
Although the enriched point representation already accounts for some local structure information, the complicated relationships among points, especially from the local perspective, need to be further exploited. To tackle this challenge, we propose a novel GPM in the encoder, which aims to learn the composition ability between points and thereby capture the local structural information within the point cloud more effectively.
Graph Pointnet Module. As in [14], we first use sampling and grouping layers to divide the point set into several local groups; a sketch of these standard operations is given below. Within each group, the GPM is used to exploit the local relationships between points and thereby update each point representation by aggregating the point information within the local structure.
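The sketch below illustrates the standard farthest point sampling and ball-query grouping in the spirit of PointNet++ [14]; the radius and group-size defaults are placeholders, not the authors' settings.

```python
import torch

def farthest_point_sampling(xyz: torch.Tensor, m: int) -> torch.Tensor:
    # Greedily pick m centroids from xyz (N, 3) that are maximally spread out.
    n = xyz.size(0)
    idx = torch.zeros(m, dtype=torch.long)
    min_dist = torch.full((n,), float("inf"))
    farthest = int(torch.randint(n, (1,)))
    for i in range(m):
        idx[i] = farthest
        d = ((xyz - xyz[farthest]) ** 2).sum(dim=-1)
        min_dist = torch.minimum(min_dist, d)  # distance to nearest chosen centroid
        farthest = int(min_dist.argmax())      # next centroid: farthest remaining point
    return idx

def ball_query(xyz: torch.Tensor, centroid_idx: torch.Tensor,
               radius: float = 0.2, max_pts: int = 32):
    # Collect up to max_pts point indices within `radius` of each centroid.
    d = torch.cdist(xyz[centroid_idx], xyz)  # (m, N)
    return [(row <= radius).nonzero().flatten()[:max_pts] for row in d]

pts = torch.rand(1024, 3)
groups = ball_query(pts, farthest_point_sampling(pts, 16))
```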
As illustrated in Fig. 3, the proposed GPM consists of one multi-layer perceptron (MLP) and one GAB. The MLP in conventional PointNet [13] and PointNet++ [14] performs independently on each point to mine the information within the point itself, while neglecting the correlations and relationships among points. In order to exploit the point relationships more comprehensively, we rely on the GAB to aggregate the neighboring point representations and thereby update each point representation.
For each local structure obtained by the sampling and grouping layers, the GAB [20] first defines a fully connected undirected graph to measure the similarities between any two points within that local structure. Given the output feature map G ∈ R^{C_e×N_e} of the MLP layer in the GPM module, we first linearly project each point to a common space through an FC layer to obtain a new feature map Ĝ ∈ R^{C_e×N_e}. The similarity α_{ij} between point i and point j is measured as follows:
$$\alpha_{ij} = \hat{G}_i \cdot \hat{G}_j. \tag{3}$$
Afterwards, we calculate the influence factor of point j on point i:
$$\beta_{ij} = \mathrm{softmax}_j\big(\mathrm{LeakyReLU}(\alpha_{ij})\big), \tag{4}$$
where β_{ij} is regarded as the normalized attentive weight, representing how point j relates to point i. The representation of each point is updated by attentively aggregating the point representations with reference to β_{ij}:
$$\tilde{G}_i = \sum_{j=1}^{N_e} \beta_{ij} \hat{G}_j. \tag{5}$$
It can be observed that the GAB dynamically updates the local feature representation by referring to the similarities between points, capturing their relationships. Moreover, in order to preserve the original information, the point feature after the MLP is concatenated with the updated one via a skip connection through a gated fusion operation, as shown in Fig. 3.
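A compact sketch of the GAB computation of Eqs. (3)-(5) on a single local group is shown below; the skip connection and gated fusion mentioned above are omitted for brevity, and the class name is ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionBlock(nn.Module):
    # Eqs. (3)-(5): project, score all point pairs, softmax over j, aggregate.
    def __init__(self, c_e: int):
        super().__init__()
        self.proj = nn.Linear(c_e, c_e)  # FC projecting G to \hat{G}

    def forward(self, g: torch.Tensor) -> torch.Tensor:
        # g: (N_e, C_e) features of one local group after the MLP.
        g_hat = self.proj(g)
        alpha = g_hat @ g_hat.t()                      # Eq. (3)
        beta = F.softmax(F.leaky_relu(alpha), dim=-1)  # Eq. (4)
        return beta @ g_hat                            # Eq. (5)

g_updated = GraphAttentionBlock(c_e=64)(torch.randn(32, 64))
```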
Please note that multiple GPMs can be stacked, as shown in Fig. 3, to further exploit the complicated non-linear relationships within each local structure. Afterwards, one max pooling layer is used to aggregate the feature map into a one-dimensional feature vector, which not only lowers the dimensionality of the representation, making it possible to quickly generate a compact representation of the point cloud, but also helps filter out unreliable noise.
3.2.2 Decoder
For the decoder, we use the same architecture as [14]. Specifically, we progressively upsample the compact feature obtained from the encoder until the original resolution is restored. Please note that, to preserve the information generated in the encoder as much as possible, lateral connections are also used.
3.3 Prediction
After the feature representation stage, a rich semantic representation for each point is obtained. Note that our previous operations, including the contextual representation and the feature representation, only mine local relationships between points. However, the global information is also important and needs to be considered when determining the label of each individual point. For the semantic segmentation task, two points that are far apart in space may belong to the same semantic category, and can be considered jointly to mutually enhance their feature representations. Moreover, for high-dimensional feature representations, inter-dependencies between feature channels also exist. As such, in order to capture the global context information for each point, we introduce two attention modules, namely spatial-wise and channel-wise attention [4], to model the global relationships between points.
Spatial-wise Attention. To model rich global contextual relationships among points, the spatial-wise attention module is employed to adaptively aggregate spatial contexts of local features. Given the feature map F ∈ R^{C_d×N_d} from the decoder, we first feed it into two FC layers to obtain two new feature maps A and B, respectively, where A, B ∈ R^{C_d×N_d}. N_d is the number of points and C_d is the number of feature channels. The normalized spatial-wise attentive weight v_{ij} measures the influence of point j on point i as follows:
$$v_{ij} = \mathrm{softmax}_j(A_i \cdot B_j). \tag{6}$$
Afterwards, the feature map F is fed into another FC layer to generate a new feature map D ∈ R^{C_d×N_d}. The output feature map F̂ ∈ R^{C_d×N_d} after spatial-wise attention is obtained as:
$$\hat{F}_i = \sum_{j=1}^{N_d} v_{ij} D_j + F_i. \tag{7}$$
As such, the global spatial structure information is attentively aggregated with each point representation.
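The following sketch implements Eqs. (6)-(7), written point-major (rows are points) for readability; the channel-wise branch is the analogous computation over the transposed feature map. Module and variable names are ours, not the paper's.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttention(nn.Module):
    # Eqs. (6)-(7): every point attends to every other point, residual output.
    def __init__(self, c_d: int):
        super().__init__()
        self.to_a = nn.Linear(c_d, c_d)  # produces A
        self.to_b = nn.Linear(c_d, c_d)  # produces B
        self.to_d = nn.Linear(c_d, c_d)  # produces D

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        # f: (N_d, C_d), one row per point.
        v = F.softmax(self.to_a(f) @ self.to_b(f).t(), dim=-1)  # Eq. (6)
        return v @ self.to_d(f) + f                             # Eq. (7)

f_hat = SpatialAttention(c_d=128)(torch.randn(4096, 128))
```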
Channel-wise Attention. The channel-wise attention performs similarly to the spatial-wise attention, with the channel attention map explicitly modeling the interdependencies between channels and thereby boosting the feature discriminability. Analogously to the spatial-wise attention module, the output feature map F̃ ∈ R^{C_d×N_d} is obtained by aggregating the global channel structure information with each channel representation.
After summing the feature maps F̂ and F̃, the semantic label of each point is obtained with one additional FC layer. Through these attention processes from the global perspective, the feature representation of each point is updated. As such, the complicated relationships between points can be comprehensively exploited, yielding more accurate segmentation results.
4 Experiment
4.1 Experiment Setting
Dataset. To evaluate the performance of the proposed model and compare with the state of the art, we conduct experiments on two publicly available datasets, the Stanford 3D Indoor Semantics (S3DIS) dataset [1] and the ScanNet dataset [2]. The S3DIS dataset comes from real scans of indoor environments, including 3D Matterport scans from 6 areas comprising 271 rooms. ScanNet is a point cloud dataset of scanned indoor scenes, with 22 categories of semantic tags and 1513 scenes covering a wide variety of spaces. Each point is annotated with an instance-level semantic category label.
Implementation Details. The number of neighboring points k in the contextual representation is set to 3, with the farthest allowed distance for a neighboring point fixed to 0.06. For feature extraction, a four-layer encoder is used, where the spatial scales of the layers are set to 1024, 256, 64, and 16, respectively. The GPM is enabled in the first two layers of the encoder to exploit the local relationships between points. The maximum numbers of training epochs for S3DIS and ScanNet are set to 120 and 500, respectively.
Evaluation Metric. Two widely used metrics, namely overall accuracy (OA) and mean intersection over union (mIoU), are used to measure the semantic segmentation performance. OA is the prediction accuracy over all points. IoU measures the ratio of the area of overlap to the area of union between the ground truth and the segmentation result, and mIoU is the average of IoU over all categories.
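For reference, the two metrics can be computed as follows; this is a generic sketch, not the authors' evaluation code.

```python
import numpy as np

def overall_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    # Fraction of points whose predicted label matches the ground truth.
    return float((pred == gt).mean())

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    # Average, over categories, of |pred ∩ gt| / |pred ∪ gt| per class.
    ious = []
    for c in range(num_classes):
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # ignore classes absent from both prediction and label
            inter = np.logical_and(pred == c, gt == c).sum()
            ious.append(inter / union)
    return float(np.mean(ious))
```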
Competitor Methods. For the S3DIS dataset, we compare our method with PointNet [13], PointNet++ [14], SEGCloud [19], RSNet [6], SPGraph [8], SGPN [23], Engelmann et al. [3], A-SCN [26], and DGCNN [24]. For the ScanNet dataset, we compare with 3DCNN [2], PointNet [13], PointNet++ [14], RSNet [6], and PointCNN [10].
4.2 S3DIS Semantic Segmentation
We perform semantic segmentation experiments on the S3DIS dataset to evaluate our performance on real-world indoor scene scans, and we perform ablation experiments on this dataset. Following the experimental setup of PointNet [13], we divide each room evenly into 1m³ cubes, with 4096 points uniformly sampled from each cube.
As in [13, 3, 8], we perform 6-fold cross validation with micro-averaging. In order to compare with more methods, we also report the performance on the fifth fold only (Area 5). The OA and mIoU results are summarized in Table 1, from which we can see that our algorithm performs better than the other competitor methods in terms of both the OA and mIoU metrics.
Besides, the IoU values for each category are summarized in Table 2. It can be observed that our proposed method achieves the best performance for several categories. For simple shapes such as “floor” and “ceiling”, every model performs well, with our approach performing better. This is mainly because the prediction layer of our proposed method incorporates the global structure information between points, which enhances the point representations in flat areas. For categories with complex local structure, such as “chair” and “bookcase”, our model shows the best performance, since we consider the contextual representation to enhance the relationship between each point and its neighbors, and use the GPM module to exploit the local structure information. However, the “window” and “board” categories are more difficult to distinguish from “wall”, as they are close to the “wall” in position and appear similar. The key to distinguishing them is to find subtle shape differences and detect the edges. It can be observed that our model performs well on the “window” and “board” categories. To further demonstrate the effectiveness of our model, some qualitative examples from the S3DIS dataset are provided in Fig. 4 and Fig. 5, showing that our model yields more accurate segmentation results.
4.3 ScanNet Semantic Segmentation

Table 3: The segmentation results on the ScanNet dataset in terms of both OA and mIoU.
Method           OA     mIoU
3DCNN [2]        73.0   -
PointNet [13]    73.9   -
PointNet++ [14]  84.5   38.28
RSNet [6]        -      39.35
PointCNN [10]    85.1   -
Ours             85.3   40.6
For the ScanNet dataset, the numbers of training and testing scenes are 1201 and 312, the same as [14, 10]. We only use the XYZ coordinate information. The results are presented in Table 3. Compared with the other competitor methods, our proposed model achieves better performance in terms of both the OA and mIoU metrics.
4.4 Ablation Study
To validate the contribution of each module in our framework, we conduct ablation studies to demonstrate their effectiveness. Detailed experimental results are provided in Table 4.
Contextual Representation Module. After removing the contextual representation module in the input layer (denoted as w/o CR), the mIoU value drops from 60.06 to 56.15, as shown in Table 4. Based on the per-category results in Table 5, some categories show significant drops in IoU, such as “column”, “sofa”, and “door”. The contextual representation can enhance the point features of categories with complex local structures. We also replace the gating operation in the contextual representation with a simple concatenation operation. Owing to the disparity between the two kinds of information, the OA and mIoU decrease. Thus, the proposed gating operation is useful for fusing the information of the point itself and its neighborhood.
Graph Pointnet Module. The segmentation performance of our model without the GPM module (denoted as w/o GPM) also drops significantly, which indicates that both the proposed GPM and CR are important for the performance improvement. Specifically, without the GPM, the mIoU of categories such as “column” and “sofa” drops significantly.
Attention Module. Removing the attention module (denoted as w/o AM) decreases both OA and mIoU. Moreover, the performance on categories with large flat areas, such as “ceiling”, “floor”, “wall”, and “window”, drops significantly. As aforementioned, the attention module aims to mine the global relationships between points. Two points within the same category may be far apart in space; with the attention module, the features of these points are mutually aggregated.
Table 6: Performance of DGCNN with our proposed modules in terms of OA.

Model              OA
DGCNN              84.31
DGCNN+CR           85.35
DGCNN+GPM          84.90
DGCNN+AM           85.17
DGCNN+CR+GPM+AM    86.07
We further incorporate the proposed CR, AM, and GPM together with DGCNN [24] for point cloud semantic segmentation, with the performance shown in Table 6. It can be observed that CR, AM, and GPM each help improve the performance, demonstrating the effectiveness of every module.
Model Complexity. Table 7 presents the model complexity comparisons. The sample size for all the models is fixed at 4096. It can be observed that the inference time of our model (28ms) is lower than that of the other competitor models, except for PointNet (5.3ms) and PointNet++ (24ms). The model size is comparable to those of the other models except PointCNN, which has the largest model.
Robustness under Noise. We further compare the robustness of our proposed model with that of PointNet++. For scaling, when the scaling ratio is 50%, the OA of our proposed model and PointNet++ on the segmentation task decreases by 3.0% and 4.5%, respectively. For rotation, when the rotation angle is π/10, the OA of our proposed model and PointNet++ on the segmentation task decreases by 1.7% and 1.0%, respectively. As such, our model is more robust to scaling but less robust to rotation.
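For illustration, the perturbations used in this robustness test could be applied as below; the paper does not specify the rotation axis, so rotating about the z (up) axis is our assumption.

```python
import numpy as np

def scale_cloud(xyz: np.ndarray, ratio: float = 0.5) -> np.ndarray:
    # Uniform scaling, e.g. ratio = 0.5 for the 50% scaling test above.
    return xyz * ratio

def rotate_cloud_z(xyz: np.ndarray, angle: float = np.pi / 10) -> np.ndarray:
    # Rotate all points about the z axis by `angle` radians (assumed axis).
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return xyz @ rot.T
```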
5 Conclusion
In this paper, we proposed a novel network for point cloud semantic segmentation. Different from existing approaches, we enrich each point representation by incorporating its neighboring and contextual points. Moreover, we proposed a novel graph pointnet module to exploit the local structure of the point cloud, and rely on spatial-wise and channel-wise attention strategies to exploit its global structure. Extensive experiments on two public point cloud semantic segmentation datasets demonstrate the superiority of our proposed model.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China (Grant 61871270 and Grant 61672443), in part by the Natural Science Foundation of SZU (grant no. 827000144) and in part by the National Engineering Laboratory for Big Data System Computing Technology of China. | 1. What is the focus of the paper regarding point cloud processing?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and technical soundness?
3. What are the weaknesses of the paper, especially regarding its claims and failure modes?
4. How does the reviewer assess the clarity, quality, and novelty of the paper's content?
5. Are there any concerns or suggestions for improvement regarding the proposed method's applicability and potential failure modes? | Review | Review
The authors present a novel network architecture and encoding for point clouds. Specifically, they propose to use, in addition to the plain point position, an enriched representation that includes the positions of the nearest neighbors of the point. The paper is technically sound, though I have some additional questions which I state below (see "Improvements"). The paper does not include a discussion of the failure modes of the proposed algorithm (especially the invariance properties w.r.t. rotation and scale could be interesting), but is reasonably well evaluated. The paper is clearly written and well-structured. I do find the described idea moderately interesting, though my main concerns are 1) general applicability of the method and 2) the failure modes of the proposed approach. I will go more in depth on these issues in the "Improvements" section. Additional comments after the rebuttal: I thank the authors for the additional insightful experiments and their detailed response. On the grounds of the classification scores, I tend to accept the paper. However, I still feel that there is not a good explanation for the globally optimized feature combination function. |
NIPS | Title
Exploiting Local and Global Structure for Point Cloud Semantic Segmentation with Contextual Point Representations
1. How do the authors propose to exploit the structure relationships between point clouds?
2. What is the novel contextual representation of each point proposed in the paper?
3. Can the authors provide more information on the ablation studies they conducted?
4. How does the performance of the proposed method compare to other state-of-the-art methods?
5. Are there any limitations or areas for improvement in the proposed approach? | Review | Review
The authors' proposal to exploit the structural relationships between points from both global and local perspectives is very enlightening. As such, the complicated relationships within point clouds can be more comprehensively exploited. Moreover, a novel contextual representation of each point is proposed, which considers its neighboring points to enrich the semantic meaning of each point. Such contextual representations are clearly motivated, with ablation studies demonstrating the corresponding contributions. The corresponding novelties and contributions have been summarized in the “Contributions” part. The questions and some detailed comments are listed in the following. 1. I am wondering about the results when considering the spatial-wise and channel-wise attention within each GPM. How does it perform compared with the proposed GPM? 2. For the ablation studies in Table 5, it seems that the different components, namely CR, AM, and GPM, perform differently across different categories. Please provide more explanations. 3. What about the performance when stacking different numbers of GPMs? 4. Some more qualitative results should be provided. |
NIPS | Title
Efficient Communication in Multi-Agent Reinforcement Learning via Variance Based Control
Abstract
Multi-agent reinforcement learning (MARL) has recently received considerable attention due to its applicability to a wide range of real-world applications. However, achieving efficient communication among agents has always been an overarching problem in MARL. In this work, we propose Variance Based Control (VBC), a simple yet efficient technique to improve communication efficiency in MARL. By limiting the variance of the exchanged messages between agents during the training phase, the noisy component in the messages can be eliminated effectively, while the useful part can be preserved and utilized by the agents for better performance. Our evaluation using multiple MARL benchmarks indicates that our method achieves 2− 10× lower in communication overhead than state-of-the-art MARL algorithms, while allowing agents to achieve better overall performance.
1 Introduction
Many real-world applications (e.g., autonomous driving [16], game playing [12], and robotics control [9]) today require reinforcement learning tasks to be carried out in multi-agent settings. In MARL, multiple agents interact with each other in a shared environment. Each agent has access only to partial observations of the environment, and needs to make local decisions based on these partial observations as well as both direct and indirect interactions with the other agents. This complex interaction model has introduced numerous challenges for MARL. In particular, during the training phase, each agent may dynamically change its strategy, causing dynamics in the surrounding environment and instability in the training process. Worse still, each agent can easily overfit its strategy to the behaviours of other agents [11], which may seriously deteriorate the overall performance.
In the research literature, there have been three lines of research that try to mitigate the instability and inefficiency caused by decentralized execution. The most common approach is independent Q-learning (IQL) [20], which breaks down a multi-agent learning problem into multiple independent single-agent learning problems, thus allowing each agent to learn and act independently. Unfortunately, this approach does not account for instability caused by environment dynamics, and therefore often suffers from poor convergence. The second approach adopts the centralized training and decentralized execution paradigm [18], where a joint action value function is learned during the training phase to better coordinate the agents' behaviours; during execution, each agent acts independently without direct communication. The third approach introduces communication among agents during execution [17, 3]. This approach allows each agent to dynamically adjust its strategy based on its local observation along with the information received from the other agents. Nonetheless, it introduces additional communication overhead in terms of latency and bandwidth during execution, and its effectiveness depends heavily on the usefulness of the received information.
In this work, we leverage the advantages of both the second and third approaches. Specifically, we consider a fully cooperative scenario where multiple agents collaborate to achieve a common objective. The agents are trained in a centralized fashion within the multi-agent Q-learning framework, and are allowed to communicate with each other during execution. However, unlike previous work,
we make a few key observations. First, for many applications, it is often superfluous for an agent to wait for feedback from all surrounding agents before making an action decision. For instance, when the front camera on an autonomous vehicle detects an obstacle within the dangerous distance limit, it triggers the 'brake' signal without waiting for feedback from the other parts of the vehicle. Second, the feedback received from the other agents may not always provide useful information. For example, the navigation system of the autonomous vehicle should pay more attention to the messages sent by the perception system (e.g., camera, radar), and less attention to the entertainment system inside the vehicle, before taking its action. The full (i.e., all-to-all) communication pattern among the agents can lead to significant communication overhead in terms of both bandwidth and latency, which limits its practicality and effectiveness in real applications with strict latency requirements and bandwidth constraints (e.g., real-time traffic signal control, autonomous driving, etc.). In addition, as pointed out by Jiang et al. [7], an excessive amount of communication may introduce useless and even harmful information, which can impair the convergence of the learning process.
Motivated by these observations, we design a novel deep MARL architecture that can significantly improve inter-agent communication efficiency. Specifically, we introduce Variance Based Control (VBC), a simple yet efficient approach to reduce the amount of information transferred between agents. By inserting an extra loss term on the variance of the exchanged information, the meaningful parts of the messages can be effectively extracted and utilized to benefit the training of each individual agent. Furthermore, unlike previous work, we do not require an extra decision module to dynamically adjust the communication pattern, which allows us to reduce the model complexity significantly. Instead, each agent first makes a preliminary decision based on its local information, and initiates communication only when its confidence level in this preliminary decision is low. Similarly, upon receiving a communication request, an agent replies only when its message is informative. By exchanging only useful information among the agents, VBC not only improves agent performance, but also substantially reduces communication overhead during execution. Lastly, it can be shown theoretically that the resulting training algorithm provides guaranteed stability.
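While the paper's exact formulation appears later, a heavily simplified sketch of the core idea (an auxiliary loss that penalizes the variance of exchanged messages) might look as follows; the penalty form, weighting, and names are our assumptions, not VBC's definition.

```python
import torch

def message_variance_penalty(messages: torch.Tensor, lam: float = 0.1) -> torch.Tensor:
    # messages: (num_agents, msg_dim) batch of exchanged messages.
    # Penalizing their variance pushes uninformative (noisy) components toward
    # a constant, so they can be dropped at execution time. `lam` is assumed.
    return lam * messages.var(dim=0).mean()

# Hypothetical use: total_loss = td_loss + message_variance_penalty(messages)
```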
For evaluation, we test VBC on several MARL benchmarks, including the StarCraft Multi-Agent Challenge [15], Cooperative Navigation (CN) [10] and Predator-prey (PP) [8]. For the StarCraft Multi-Agent Challenge, VBC achieves a 20% higher winning rate and 2–10× lower communication overhead on average compared with the other benchmark algorithms. For both the CN and PP scenarios, VBC outperforms the existing algorithms and incurs much lower overhead than existing communication-enabled approaches. A video demo is available at [2] for a better illustration of the VBC performance. The code is available at https://github.com/saizhang0218/VBC.
2 Related Work
The simplest training method for MARL is to make each agent learn independently using Independent Q-Learning (IQL) [20]. Although IQL is successful in solving simple tasks such as Pong [19], it ignores the environment dynamics arising from the interactions among the agents. As a result, it suffers from poor convergence, making it difficult to handle advanced tasks.
Given the recent success of deep Q-learning [12], some recent studies explore the scheme of centralized training and decentralized execution. Sunehag et al. [18] propose the Value Decomposition Network (VDN), a method that acquires the joint action value function by summing up the action value functions of each agent. All the agents are trained as a whole by updating the joint action value function iteratively. QMIX [14] builds on VDN and utilizes a neural network to represent the joint action value function as a function of the individual action value functions and the global state information. The authors of [10] extend actor-critic methods to the multi-agent scenario. By performing centralized training and decentralized execution over the agents, the agents can better adapt to changes in the environment and collaborate with each other. Foerster et al. [5] propose counterfactual multi-agent policy gradient (COMA), which employs a centralized critic function to estimate the joint action value function, and decentralized actor functions to make each agent execute independently. All the aforementioned methods assume no communication between the agents during execution. As a result, many subsequent approaches, including ours, can be applied to improve the performance of these methods.
Learning the communication pattern for MARL was first proposed by Sukhbaatar et al. [17]. The authors introduce CommNet, a framework that adopts continuous communication for fully cooperative tasks. During execution, each agent takes its internal state as well as the mean of the internal states of the other agents as input to decide on its action. BiCNet [13] uses a bidirectional coordinated network to connect the agents. However, both schemes require all-to-all communication among the agents, which can cause significant communication overhead and latency.
Several other proposals [3, 7, 8] use a selection module to dynamically adjust the communication pattern among the agents. In Differentiable Inter-Agent Learning (DIAL) [3], the messages produced by an agent are selectively sent to the neighboring agents through a discretize/regularise unit (DRU). By jointly training the DRU with the agent network, the communication overhead can be efficiently reduced. Jiang et al. [7] propose an attentional communication model that learns when communication is required and how to aggregate the shared information. However, an agent can only talk to the agents within its observable range at each timestep. This limits the speed of information propagation, and restricts the possible communication patterns when the local observable field is small. Kim et al. [8] propose a communication scheduling scheme for wireless environments, but only a fraction of the agents can broadcast their messages at each time. In comparison, our approach does not impose hard constraints on the communication pattern, which is beneficial to the learning process. Moreover, our method does not adopt an additional decision module for communication scheduling, which greatly reduces the model complexity.
3 Background
Deep Q-networks: We consider a standard reinforcement learning problem based on a Markov Decision Process (MDP). At each timestep t, the agent observes the state s_t and chooses an action a_t. It then receives a reward r_t for its action and proceeds to the next state s_{t+1}. The goal is to maximize the total expected discounted reward R = Σ_{t=1}^{T} γ^t r_t, where γ ∈ [0, 1] is the discount factor. A Deep Q-Network (DQN) uses a deep neural network to represent the action value function Q_θ(s, a) = E[R_t | s_t = s, a_t = a], where θ represents the parameters of the neural network and R_t is the total reward received at and after t. During the training phase, a replay buffer is used to store the transition tuples ⟨s_t, a_t, s_{t+1}, r_t⟩. The action value function Q_θ(s, a) can be trained recursively by minimizing the loss L = E_{s_t, a_t, r_t, s_{t+1}} [y_t − Q_θ(s_t, a_t)]², where y_t = r_t + γ max_{a_{t+1}} Q_θ′(s_{t+1}, a_{t+1}) and θ′ represents the parameters of the target network. Actions are usually selected with an ε-greedy policy: the action with the maximum action value is chosen with probability 1 − ε, and a random action with probability ε.
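To make this concrete, the following is a minimal PyTorch sketch of the DQN loss and the ε-greedy rule described above; q_net and target_net are assumed to be networks mapping a batch of states to per-action values, and the batch tensors are assumed to come from the replay buffer.

import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, s, a, r, s_next, gamma=0.99):
    # Q_theta(s_t, a_t) for the actions actually taken in the batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # y_t = r_t + gamma * max_a' Q_theta'(s_{t+1}, a')
        y = r + gamma * target_net(s_next).max(dim=1).values
    return F.mse_loss(q_sa, y)

def epsilon_greedy(q_values, eps):
    # Random action with probability eps, greedy action otherwise.
    if torch.rand(()).item() < eps:
        return torch.randint(q_values.shape[-1], ()).item()
    return int(q_values.argmax())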
Multi-agent deep reinforcement learning: We consider an environment in which N agents work cooperatively to fulfill a given task. At timestep t, each agent i (1 ≤ i ≤ N) receives a local observation o_i^t and executes an action a_i^t. The agents then receive a joint reward r_t and proceed to the next state. We use a vector a^t = {a_i^t} to represent the joint actions taken by all the agents. The agents aim to maximize the joint reward by choosing the best joint actions a^t at each timestep t.
Deep recurrent Q-networks: Traditional DQNs generate actions based solely on a limited number of local observations, without exploiting knowledge from earlier timesteps. Hausknecht et al. [6] introduce Deep Recurrent Q-Networks (DRQN), which model the action value function with a recurrent neural network (RNN). The DRQN leverages its recurrent structure to integrate previous observations and knowledge for better decision-making. At each timestep t, the DRQN Q_θ(o_i^t, h_i^{t−1}, a_i^t) takes the local observation o_i^t and the hidden state h_i^{t−1} from the previous step as input to yield action values.
Learning the joint Q-function: Recent research effort has been devoted to learning a joint action value function for multi-agent Q-learning. Two representative works are VDN [18] and QMIX [14]. In VDN, the joint action value function Q_tot(o^t, h^{t−1}, a^t) is assumed to be the sum of all the individual action value functions, i.e. Q_tot(o^t, h^{t−1}, a^t) = Σ_i Q_i(o_i^t, h_i^{t−1}, a_i^t), where o^t = {o_i^t}, h^t = {h_i^t} and a^t = {a_i^t} are the collections of the observations, hidden states and actions of all the agents at timestep t, respectively. QMIX employs a neural network to represent the joint value function Q_tot(o^t, h^{t−1}, a^t) as a nonlinear function of the Q_i(o_i^t, h_i^{t−1}, a_i^t) and the global state s_t.
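As a small illustration of the VDN decomposition, the sketch below (with assumed tensor shapes) sums per-agent chosen-action values into Q_tot; a QMIX-style mixer would instead pass the same inputs through a state-conditioned network.

import torch

def vdn_mix(per_agent_q):
    # per_agent_q: list of N tensors of shape (batch,), each holding
    # Q_i(o_i^t, h_i^{t-1}, a_i^t) for the action chosen by agent i.
    return torch.stack(per_agent_q, dim=0).sum(dim=0)  # Q_tot, shape (batch,)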
4 Variance Based Control
In this section, we present the detailed design of VBC in the context of multi-agent Q-learning. The main idea of VBC is to improve agent performance and communication efficiency by limiting the
variance of the transferred messages. During execution, each agent communicates with other agents only when its local decision is ambiguous. The degree of ambiguity is measured by the difference between the top two largest action values. Upon receiving a communication request from another agent, an agent replies only if its feedback is informative, i.e., when the variance of the feedback is high.
4.1 Agent Network Design
The agent network consists of three components: a local action generator, a message encoder and a combiner. Figure 1(a) describes the network architecture for agent 1. The local action generator consists of a Gated Recurrent Unit (GRU) and a fully connected layer (FC). For agent i, the GRU takes the local observation o_i^t and the hidden state h_i^{t−1} as inputs, and generates the intermediate result c_i^t. c_i^t is then sent to the FC layer, which outputs the local action values Q_i(o_i^t, h_i^{t−1}, a_i^t) for each action a_i^t ∈ A, where A is the set of possible actions. The message encoder, f_enc^{ij}(·), is a multi-layer perceptron (MLP) that contains two FC layers and a leaky ReLU layer. The agent network involves multiple independent message encoders; each accepts c_j^t from another agent j (j ≠ i) and outputs f_enc^{ij}(c_j^t). The outputs from the local action generator and the message encoders are then sent to the combiner, which produces the global action value function Q_i(o^t, h^{t−1}, a_i^t) of agent i by taking into account the global observation o^t and global history h^{t−1}. To simplify the design and reduce model complexity, we do not introduce extra parameters for the combiner. Instead, we make the dimension of f_enc^{ij}(c_j^t) the same as that of the local action values Q_i(o_i^t, h_i^{t−1}, ·), so the combiner can simply perform elementwise summation over its inputs, namely Q_i(o^t, h^{t−1}, ·) = Q_i(o_i^t, h_i^{t−1}, ·) + Σ_{j≠i} f_enc^{ij}(c_j^t). The combiner chooses the action with the ε-greedy policy π(·). Let θ_local^i and θ_enc^{ij} denote the sets of parameters of the local action generators and the message encoders, respectively. To prevent the lazy agent problem [18] and decrease the model complexity, we make θ_local^i the same for all i, and make θ_enc^{ij} the same for all i and j (j ≠ i). Accordingly, we can drop the superscripts and use θ = {θ_local, θ_enc} and f_enc(·) to denote the agent network parameters and the message encoder.
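A minimal PyTorch sketch of this agent network is given below; the class name AgentNet and the layer dimensions are illustrative assumptions, and the parameter sharing across agent pairs is reflected by using a single f_enc module.

import torch
import torch.nn as nn

class AgentNet(nn.Module):
    # Sketch of the VBC agent network: local action generator (GRU + FC),
    # shared message encoder (2-layer MLP with leaky ReLU), and a
    # parameter-free elementwise-sum combiner.
    def __init__(self, obs_dim, hid_dim, n_actions):
        super().__init__()
        self.gru = nn.GRUCell(obs_dim, hid_dim)   # produces c_i^t
        self.fc = nn.Linear(hid_dim, n_actions)   # local Q_i values
        self.f_enc = nn.Sequential(               # shared message encoder
            nn.Linear(hid_dim, hid_dim),
            nn.LeakyReLU(),
            nn.Linear(hid_dim, n_actions),        # same dimension as Q_i
        )

    def forward(self, obs_i, h_prev, c_others):
        c_i = self.gru(obs_i, h_prev)                      # intermediate result c_i^t
        q_local = self.fc(c_i)                             # Q_i(o_i^t, h_i^{t-1}, .)
        messages = [self.f_enc(c_j) for c_j in c_others]   # f_enc(c_j^t), j != i
        q_global = q_local + sum(messages)                 # combiner: elementwise sum
        return q_global, q_local, c_i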
4.2 Loss Function Definition
During the training phase, the message encoder and local action generator jointly learn to generate the best estimate of the action values. More specifically, we employ a mixing network (shown in Figure 1(b)) to aggregate the global action value functions Q_i(o^t, h^{t−1}, a_i^t) of each agent i, which yields the joint action value function Q_tot(o^t, h^{t−1}, a^t). To limit the variance of the messages from the other agents, we introduce an extra loss term on the variance of the outputs of the message encoders, f_enc(c_j^t). The loss function during the training phase is defined as:
L(θ_local, θ_enc) = Σ_{b=1}^{B} Σ_{t=1}^{T} [ (y_tot^b − Q_tot(o_t^b, h_{t−1}^b, a_t^b; θ))² + λ Σ_{i=1}^{N} Var(f_enc(c_i^{t,b})) ]    (1)
where y_tot^b = r_t^b + γ max_{a^{t+1}} Q_tot(o_{t+1}^b, h_t^b, a^{t+1}; θ⁻), θ⁻ is the parameter of the target network, which is copied from θ periodically, Var(·) is the variance function, λ is the weight of the variance loss, and b is the batch index. The replay buffer is refreshed periodically by running each agent network and selecting the action that maximizes Q_i(o^t, h^{t−1}, ·).

Algorithm 1: Communication protocol at agent i
Input: confidence threshold on local actions δ_1; threshold on the variance of the message encoder output δ_2; total number of agents N.
for t ∈ T do
    // Decision on its own action:
    Compute the local action values Q_i(o_i^t, h_i^{t−1}, ·). Denote by m_1, m_2 the top two largest values of Q_i(o_i^t, h_i^{t−1}, ·).
    if m_1 − m_2 ≥ δ_1 then
        Let Q_i(o^t, h^{t−1}, ·) = Q_i(o_i^t, h_i^{t−1}, ·).
    else
        Broadcast a request to the other agents, and receive f_enc(c_j^t) from N_reply (N_reply ≤ N) agents.
        Let Q_i(o^t, h^{t−1}, ·) = Q_i(o_i^t, h_i^{t−1}, ·) + Σ_{j=1}^{N_reply} f_enc(c_j^t).
    // Generating reply messages for the other agents:
    Compute Var(f_enc(c_i^t)); if Var(f_enc(c_i^t)) ≥ δ_2, store f_enc(c_i^t) in the buffer.
    if Var(f_enc(c_i^t)) ≥ δ_2 and a request from agent j is received then
        Reply to agent j with f_enc(c_i^t).
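The sketch below shows how the loss in equation (1) can be assembled, assuming the TD targets and encoder outputs have already been computed; here Var(·) is taken over the components of each message vector, matching the per-message variance test in Algorithm 1.

import torch

def vbc_loss(q_tot, y_tot, messages, lam):
    # q_tot, y_tot: tensors of shape (B, T)
    # messages: encoder outputs f_enc(c_i^{t,b}), shape (B, T, N, n_actions)
    td_loss = ((y_tot - q_tot) ** 2).sum()
    var_penalty = messages.var(dim=-1).sum()  # variance over message components
    return td_loss + lam * var_penalty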
4.3 Communication Protocol Design
During the execution phase, at every timestep t, agent i first computes the local action value function Q_i(o_i^t, h_i^{t−1}, ·) and f_enc(c_i^t). It then measures the confidence level of the local decision by computing the difference between the largest and the second largest elements of the action values. An example is given in Figure 2(a). Assume agent 1 has three actions to select from, and the output of the local action generator of agent 1 is Q_1(o_1^t, h_1^{t−1}, ·) = (0.1, 1.6, 3.8); the difference between the largest and the second largest action values is 3.8 − 1.6 = 2.2, which is greater than the threshold δ_1 = 1.0. Given that the variance of the message encoder outputs f_enc(c_j^t) from agents 2 and 3 is relatively small due to the additional variance penalty in equation (1), it is highly likely that the global action value function Q_1(o^t, h^{t−1}, ·) also has its largest value in the third element. Therefore agent 1 does not have to talk to the other agents to acquire f_enc(c_j^t). Otherwise, agent 1 broadcasts a request for help if its confidence level in the local decision is low. Because the request does not contain any actual data, it consumes very little bandwidth. Upon receiving the request, only the agents whose messages have a large variance reply (Figure 2(b)), because their messages may change the current action decision of agent 1. This protocol not only reduces the communication overhead considerably, but also eliminates noisy, less informative messages that may impair overall performance. The detailed protocol and the operations performed by an agent i are summarized in Algorithm 1.
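The decision logic of Algorithm 1 can be sketched as follows; the helper names are hypothetical, and agent_net is assumed to expose the f_enc module of the agent network from Section 4.1.

import torch

def maybe_reply(agent_net, c_j, delta2):
    # Peer side: reply only when the message is informative (high variance).
    msg = agent_net.f_enc(c_j)
    return msg if msg.var().item() >= delta2 else None

def choose_action(agent_net, q_local, peer_states, delta1, delta2):
    # q_local: 1-D tensor of local action values Q_i(o_i^t, h_i^{t-1}, .)
    m1, m2 = torch.topk(q_local, 2).values
    q = q_local
    if (m1 - m2).item() < delta1:          # low confidence: broadcast a request
        for c_j in peer_states:            # peers apply the delta2 test themselves
            r = maybe_reply(agent_net, c_j, delta2)
            if r is not None:
                q = q + r
    return int(q.argmax())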
5 Convergence Analysis
In this section, we analyze the convergence of the learning process with the loss function defined in equation (1) under the tabular setting. For simplicity, we ignore the dependency of the action value function on the previous knowledge h^t. To minimize equation (1), given the initial state Q^0, at iteration k the q values in the table are updated according to the following rule:
Q_tot^{k+1}(o_t, a_t) = Q_tot^k(o_t, a_t) + η_k [ r_t + γ max_a Q_tot^k(o_{t+1}, a) − Q_tot^k(o_t, a_t) − λ Σ_{i=1}^{N} ∂Var(f_enc(c_i^t)) / ∂Q_tot^k(o_t, a_t) ]    (2)
where η_k and Q_tot^k(·) are the learning rate and the joint action value function at iteration k, respectively. Let Q*_tot(·) denote the optimal joint action value function. We have the following result on the convergence of the learning process. A detailed proof is given in the supplementary materials.
Theorem 1. Assume 0 ≤ η_k ≤ 1, Σ_k η_k = ∞ and Σ_k η_k² < ∞. Also assume that the numbers of possible actions and states are finite. By performing equation (2) iteratively, we have ||Q_tot^k(o_t, a_t) − Q*_tot(o_t, a_t)|| ≤ λNG for all o_t, a_t as k → ∞, where G satisfies ||∂Var(f_enc(c_i^t)) / ∂Q_tot^k(o_t, a_t)|| ≤ G for all i, k, t, o_t, a_t.
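Under the tabular setting, one application of equation (2) can be sketched in pure Python; the gradient term is assumed to be supplied externally (e.g., computed from the encoder outputs), since its exact form depends on the message parameterization.

def tabular_vbc_update(Q, s, a, r, s_next, actions, eta, gamma, lam, var_grad):
    # Q: dict mapping (state, action) -> value; var_grad approximates
    # sum_i dVar(f_enc(c_i^t)) / dQ_tot^k(o_t, a_t) at the current entry.
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    td_error = r + gamma * best_next - Q.get((s, a), 0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + eta * (td_error - lam * var_grad)
    return Q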
6 Experiment
We evaluated the performance of VBC on the StarCraft Multi-Agent Challenge (SMAC) [15]. StarCraft II [1] is a real-time strategy (RTS) game that has recently been used as a benchmark by the reinforcement learning community [14, 5, 13, 4]. In this work, we focus on the decentralized micromanagement problem in StarCraft II, which involves two armies, one controlled by the user (i.e., a group of agents) and the other controlled by the built-in StarCraft II AI. The goal of the user is to control its allied units to destroy all enemy units, while minimizing the damage received by each unit. We consider six different battle settings. Three of them are symmetrical battles, where both the user and the enemy groups consist of 2 Stalkers and 3 Zealots (2s3z), 3 Stalkers and 5 Zealots (3s5z), and 1 Medivac, 2 Marauders and 7 Marines (MMM), respectively. The other three are unsymmetrical battles, where the user and enemy groups have different army compositions: 3 Stalkers for the user versus 4 Zealots for the enemy (3s_vs_4z), 6 Hydralisks for the user versus 8 Zealots for the enemy (6h_vs_8z), and 6 Zealots for the user versus 24 Zerglings for the enemy (6z_vs_24zerg). The unsymmetrical battles are considered harder than the symmetrical ones because of the difference in army size.
At each timestep, each agent controls a single unit to perform an action from move[direction], attack[enemy_id], stop and no-op. Each agent has a limited sight range and a shooting range, where the shooting range is smaller than the sight range. The attack operation is available only when the enemies are within the shooting range. The joint reward received by the allied units equals the total damage inflicted on the enemy units. Additionally, the agents are rewarded 100 extra points after killing each enemy unit, and 200 extra points for killing the entire army. The user wins the battle only if the allied units kill all the enemies within the time limit; otherwise the built-in AI wins. The input observation of each agent is a vector consisting of the following information for each allied and enemy unit within its sight range: relative x and y coordinates, relative distance and agent type. For the detailed game settings, hyperparameters, and additional experimental evaluation on other test environments, please refer to the supplementary materials.
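For illustration, a per-agent observation vector of this form could be assembled as in the sketch below; the unit attributes (x, y, type_id) are hypothetical field names standing in for the SMAC feature interface.

def build_observation(agent, units, sight_range):
    # Concatenate (relative x, relative y, distance, unit type) for every
    # allied or enemy unit within the agent's sight range.
    obs = []
    for u in units:
        if u is agent:
            continue
        dx, dy = u.x - agent.x, u.y - agent.y
        dist = (dx * dx + dy * dy) ** 0.5
        if dist <= sight_range:
            obs.extend([dx, dy, dist, u.type_id])
    return obs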
6.1 Results
We compare VBC with several benchmark algorithms, including VDN [18], QMIX [14] and SchedNet [8], for controlling the allied units. We consider two variants of VBC that adopt the mixing networks of VDN and QMIX, denoted VBC+VDN and VBC+QMIX. The mixing network of VDN simply computes the elementwise summation across all the inputs, while the mixing network of QMIX deploys a neural network whose weights are derived from the global state s_t. The detailed architecture of this mixing network can be found in [14]. Additionally, we create an algorithm FC (full communication) by removing the penalty in equation (1) and dropping the limits on variance during the execution phase (i.e., δ_1 = ∞ and δ_2 = −∞). Its agents are trained with the same network architecture shown in Figure 1, and the mixing network of VDN is used. For SchedNet, at every timestep only K out of N agents can broadcast their messages using the Top(k) scheduling policy [8]. We set K close to 0.5N, so that each time roughly half of the allied units can broadcast their messages. The VBC models are trained for different numbers of episodes depending on the difficulty of the battles, which we describe in detail next.
To measure the convergence speed of each algorithm, we stop the training process and save the current model every 200 training episodes. We then run 20 test episodes and measure the winning rate over these 20 episodes. For VBC+VDN and VBC+QMIX, the winning rates are measured by running the communication protocol described in Algorithm 1. For the easy tasks, namely MMM and 2s3z, we train the algorithms with 2 million and 4 million episodes, respectively. For all the other tasks, we train the algorithms with 10 million episodes. Each algorithm is trained 15 times. Figure 3 shows the average winning rate and 95% confidence interval of each algorithm for all six tasks. For the hyperparameters used by VBC (i.e., λ in equation (1), and δ_1 and δ_2 in Algorithm 1), we first search for a coarse parameter range based on random trials, experience and message statistics. We then perform a random search within this smaller hyperparameter space. The best selections are shown in the legend of each figure.
We observe that the algorithms that involve communication (i.e., SchedNet, FC, VBC) outperform the algorithms without communication (i.e., VDN, QMIX) in all six tasks. This is a clear indication that communication benefits performance. Moreover, both VBC+VDN and VBC+QMIX achieve better winning rates than SchedNet, because SchedNet only allows a fixed number of agents to talk at every timestep, which prevents some key information from being exchanged in a timely fashion. Finally, VBC achieves similar performance to FC, and even outperforms FC on some tasks (e.g., 2s3z, 6h_vs_8z, 6z_vs_24zerg). This is because a fair amount of the communication between the agents is noisy and redundant. By eliminating these undesired messages, VBC achieves both communication efficiency and a performance gain.
6.2 Communication Overhead
We now evaluate the communication overhead of VBC. To quantify the amount of communication involved, we run Algorithm 1 and count the total number of pairs of agents g_t that communicate at each timestep t, and then divide by the total number of pairs of agents in the user group, R. In other words, the communication overhead is β = Σ_{t=1}^{T} g_t / (RT). As an example, for the task 3s_vs_4z, the user controls 3 Stalkers, so the total number of agent pairs is R = 3 × 2 = 6. If, among these 6 pairs of agents, 2 pairs communicate at timestep t, then g_t = 2. Table 1 shows the β of VBC+VDN, VBC+QMIX and SchedNet across all test episodes at the end of the training phase of each battle. For SchedNet, β simply equals the ratio between the number of allied agents that are allowed to talk and the total number of allied agents. As shown in Table 1, in contrast to SchedNet,
VBC+VDN and VBC+QMIX produce 10× lower communication overhead for MMM and 2s3z, and 2–6× less traffic for the rest of the tasks.
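The overhead metric β defined above can be computed directly from a per-timestep log of communicating pairs; a minimal sketch:

def comm_overhead(g, n_agents):
    # g: list where g[t] is the number of agent pairs that communicated at
    # timestep t; R is the total number of ordered agent pairs (e.g., 3 x 2 = 6).
    R = n_agents * (n_agents - 1)
    T = len(g)
    return sum(g) / (R * T)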
6.3 Learned Strategy
In this section, we examine the behaviors of the agents in order to better understand the strategies adopted by the different algorithms. We have made a video demo available at [2] for better illustration.
For the unsymmetrical battles, the number of allied units is smaller than the number of enemy units, and therefore the agents are prone to being attacked by the enemies. This is exactly what happens to the QMIX and VDN agents on 6h_vs_8z, as shown in Figure 4(c). Figure 4(b) shows the strategy of VBC: all the Hydralisks are placed in a row at the bottom margin of the map. Due to the limited size of the map, the Zealots cannot go beyond the margin to surround the Hydralisks. The Hydralisks then focus their fire to kill each Zealot. Figure 4(a) shows the change of β over a sample test episode. We observe that most of the communication appears at the beginning of the episode. This is because the Hydralisks need to communicate in order to arrange themselves into a row formation. After the formation is established, no communication is needed until the formation is broken by the deaths of some Hydralisks, as indicated by the short spikes near the end of the episode. Finally, SchedNet and FC use a similar strategy to VBC. Nonetheless, due to the restrictions on the communication pattern, the row formed by the allied agents is usually not well formed, and can easily be broken by the enemies.
For the 3s_vs_4z scenario, the Stalkers have a larger attack range than the Zealots. All the algorithms adopt a kiting strategy, in which the Stalkers form a group and attack the Zealots while kiting them. For VBC and FC, at each timestep only the agents that are far from the enemies attack, and the rest of the agents (usually the healthier ones) are used as a shield to protect the firing agents (Figure 4(d)). Communication occurs only when the group is broken and needs to realign. In contrast, VDN and QMIX do not exhibit this attacking pattern, and all the Stalkers always fire simultaneously, so the Stalkers closest to the Zealots get killed first. SchedNet and FC also adopt a similar policy to VBC, but the attacking pattern of their Stalkers is less regular, i.e., the Stalkers close to the Zealots also fire occasionally.
6z_vs_24zerg is the toughest scenario in our experiments. For QMIX and VDN, the 6 Zealots are surrounded and killed by the 24 Zerglings shortly after the episode starts. In contrast, VBC first separates the agents into two groups of two and four Zealots, respectively (Figure 4(e)). The two Zealots attract most of the Zerglings to a place far away from the other four Zealots, and are killed shortly afterwards. Due to the limited sight range of the Zerglings, they cannot find the remaining four Zealots. Meanwhile, the four Zealots easily kill the small group of Zerglings near them and then search for the rest. The four Zealots take advantage of the short sight range of the Zerglings: each time, the four Zealots adjust their positions so that they can be seen by only a small number of Zerglings, and the baited Zerglings are then killed easily (Figure 4(f)). For VBC, communication occurs only at the beginning of the episode, when the Zealots are separated into two groups, and near the end of the episode, when the four Zealots adjust their positions. Both FC and SchedNet learn the strategy of splitting the Zealots into two groups, but they fail to fine-tune their positions to kill the remaining Zerglings.
For the symmetrical battles, the tasks are less challenging, and we see smaller disparities in the performance of the algorithms. For 2s3z and 3s5z, the VDN agents attack the enemies blindly without any cooperation. The QMIX agents learn to focus fire and protect the Stalkers. The agents of VBC, FC and SchedNet adopt a more aggressive policy, where the allied Zealots try to surround and kill the enemy Zealots first, and then attack the enemy Stalkers in collaboration with the allied Stalkers. This is extremely effective because Zealots counter Stalkers, so it is important to kill the enemy Zealots before they damage the allied Stalkers. For VBC, communication occurs mostly when the allied Zealots try to surround the enemy Zealots. For MMM, almost all the methods learn the optimal policy, namely killing the Medivac first and then attacking the rest of the enemy units cooperatively.
6.4 Evaluation on Cooperative Navigation and Predator-prey
To demonstrate the applicability of VBC in more general settings, we test VBC in two more scenarios: (1) Cooperative Navigation (CN), a cooperative scenario, and (2) Predator-prey (PP), a competitive scenario. The game settings are the same as those used in [10] and [8], respectively. We train each method until convergence and test the resulting models for 2000 episodes. For PP, we make the agents of VBC compete against the agents of the other methods, and report the normalized score of the Predator (Figure 5(a)). For CN, we report the average distance between the agents and their destinations, and the average number of collisions (Figure 5(b)). We notice that the methods that allow communication (i.e., SchedNet, FC, VBC) outperform the others in both tasks, and VBC achieves the best performance. Moreover, in both scenarios, VBC incurs 10× and 3× lower communication overhead than FC and SchedNet, respectively. In CN, most of the communication of VBC occurs when the agents are close to each other, to prevent collisions. In PP, the communication of VBC occurs mainly to rearrange agent positions for better coordination. These observations confirm that VBC can be applied to a variety of MARL scenarios with great effectiveness.
7 Conclusion
In this work, we propose VBC, a simple and effective approach to achieve efficient communication among agents in MARL. By constraining the variance of the exchanged messages during the training phase, VBC improves communication efficiency while enabling better cooperation among the agents. The test results on multiple MARL benchmarks indicate that VBC significantly outperforms the other state-of-the-art methods in terms of both performance and communication overhead.
NIPS | Title
Efficient Communication in Multi-Agent Reinforcement Learning via Variance Based Control
Abstract
Multi-agent reinforcement learning (MARL) has recently received considerable attention due to its applicability to a wide range of real-world applications. However, achieving efficient communication among agents has always been an overarching problem in MARL. In this work, we propose Variance Based Control (VBC), a simple yet efficient technique to improve communication efficiency in MARL. By limiting the variance of the exchanged messages between agents during the training phase, the noisy component in the messages can be eliminated effectively, while the useful part can be preserved and utilized by the agents for better performance. Our evaluation using multiple MARL benchmarks indicates that our method achieves 2− 10× lower in communication overhead than state-of-the-art MARL algorithms, while allowing agents to achieve better overall performance.
1 Introduction
Many real-world applications (e.g., autonomous driving [16], game playing [12] and robotics control [9]) today require reinforcement learning tasks to be carried out in multi-agent settings. In MARL, multiple agents interact with each other in a shared environment. Each agent only has access to partial observations of the environment, and needs to make local decisions based on partial observations as well as both direct and indirect interactions with the other agents. This complex interaction model has introduced numerous challenges for MARL. In particular, during the training phase, each agent may dynamically change its strategy, causing dynamics in the surrounding environment and instability in the training process. Worse still, each agent can easily overfit its strategy to the behaviours of other agents [11], which may seriously deteriorate the overall performance.
In the research literature, there have been three lines of research that try to mitigate the instability and inefficiency caused by decentralized execution. The most common approach is independent Q-learning (IQL) [20], which breaks down a multi-agent learning problem into multiple independent single-agent learning problems, thus allowing each agent to learn and act independently. Unfortunately, this approach does not account for instability caused by environment dynamics, and therefore often suffer from the problem of poor convergence. The second approach adopts the centralized training and decentralized execution [18] paradigm, where a joint action value function is learned during the training phase to better coordinate the agents’ behaviours. During execution, each agent acts independently without direct communication. The third approach introduces communication among agents during execution [17, 3]. This approach allows each agent to dynamically adjusts its strategy based on its local observation along with the information received from the other agents. Nonetheless, it introduces additional communication overhead in terms of latency and bandwidth during execution, and its effectiveness is heavily dependent on the usefulness of the received information.
In this work, we leverage the advantages of both the second and third approaches. Specifically, we consider a fully cooperative scenario where multiple agents collaborate to achieve a common objective. The agents are trained in a centralized fashion within the multi-agent Q-learning framework, and are allowed to communicate with each other during execution. However, unlike previous work,
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
we make a few key observations. First, for many applications, it is often superfluous for an agent to wait for feedback from all surrounding agents before making an action decision. For instance, when the front camera on an autonomous vehicle detects an obstacle within the dangerous distance limit, it triggers the ‘brake‘ signal 2 without waiting for the feedback from the other parts of the vehicle. Second, the feedback received from the other agents may not always provide useful information. For example, the navigation system of the autonomous vehicle should pay more attention to the messages sent by the perception system (e.g., camera, radar), and less attention to the entertainment system inside the vehicle before taking its action. The full (i.e., all-to-all) communication pattern among the agents can lead to a significant communication overhead in terms of both bandwidth and latency, which limits its practicality and effectiveness in real applications with strict latency requirements and bandwidth constraints (e.g., real-time traffic signal control, autonomous driving, etc). In addition, as pointed out by Jiang et al. [7], an excessive amount of communication may introduce useless and even harmful information which can even impair the convergence of the learning process.
Motivated by these observations, we design a novel deep MARL architecture that can significantly improve inter-agent communication efficiency. Specifically, we introduce Variance Based Control (VBC), a simple yet efficient approach to reduce the among of information transferred between agents. By inserting an extra loss term on the variance of the exchanged information, the meaningful part of the messages can be effectively extracted and utilized to benefit the training of each individual agent. Furthermore, unlike previous work, we do not require an extra decision module to dynamically adjust the communication pattern. This allows us to reduce the model complexity significantly. Instead, each agent first makes a preliminary decision based on its local information, and initiates communication only when its confidence level on this preliminary decision is low. Similarly, upon receiving the communication request, the agent replies to the request only when its message is informative. By only exchanging useful information among the agents, VBC not only improves agent performance, but also substantially reduces communication overhead during execution. Lastly, it can be theoretically shown that the resulting training algorithm provides guaranteed stability.
For evaluation, we test VBC on several MARL benchmarks, including StarCraft Multi-Agent Challenge [15], Cooperative Navigation (CN) [10] and Predator-prey (PP) [8]. For StarCraft Multi-Agent Challenge, VBC achieves 20% higher winning rate and 2− 10× lower communication overhead on average compared with the other benchmark algorithms. For both CN and PP scenarios, VBC outperforms the existing algorithms and incurs much lower overhead than existing communication-enabled approaches. A video demo is available at [2] for a better illustration of the VBC performance. The code is available at https://github.com/saizhang0218/VBC.
2 Related Work
The simplest training method for MARL is to make each agent learn independently using Independent Q-Learning (IQL) [20]. Although IQL is successful in solving simple tasks such as Pong [19], it ignores the environment dynamics arose from the interactions among the agents. As a result, it suffers from the problem of poor convergence, making it difficult to handle advanced tasks.
Given the recent success on deep Q-learning [12], some recent studies explore the scheme of centralized training and decentralized execution. Sunehag et al. [18] propose Value Decomposition Network (VDN), a method that acquires the joint action value function by summing up all the action value functions of each agent. All the agents are trained as a whole by updating the joint action value functions iteratively. QMIX [14] sheds some light on VDN, and utilizes a neural network to represent the joint action value function as a function of the individual action value functions and the global state information. The authors of [10] extend the actor-critic methods to the multi-agent scenario. By performing centralized training and decentralized execution over the agents, the agents can better adapt to the changes in the environment and collaborate with each other. Foerster et al. [5] propose counterfactual multi-agent policy gradient (COMA), which employs a centralized critic function to estimate the action value function of the joint, and decentralized actor functions to make each agent execute independently. All the aforementioned methods assume no communication between the agents during the execution. As a result, many subsequent approaches, including ours, can be applied to improve the performance of these methods.
Learning the communication pattern for MARL is first proposed by Sukhbaatar et. al. [17]. The authors introduce CommNet, a framework that adopts continuous communication for fully cooperative
tasks. During the execution, each agent takes their internal states as well as the means of the internal states of the rest agents as the input to make decision on its action. The BiCNet [13] uses a bidirectional coordinated network to connect the agents. However, both schemes require all-to-all communication among the agents, which can cause a significant communication overhead and latency.
Several other proposals [3, 7, 8] use a selection module to dynamically adjust the communication pattern among the agents. In Differentiable Inter-Agent Learning (DIAL) [3], the messages produced by an agent are selectively sent to the neighboring agents through the discretize/regularise unit (DRU). By jointly training DRU with the agent network, the communication overhead can be efficiently reduced. Jiang et. al. [7] propose an attentional communication model that learns when the communication is required and how to aggregate the shared information. However, an agent can only talk to the agents within its observable range at each timestep. This limits the speed of information propagation, and restricts the possible communication patterns when the local observable field is small. Kim et. al. [8] propose a communication scheduling scheme for wireless environment, but only a fraction of the agents can broadcast their messages at each time. In comparison, our approach does not impose hard constraints on the communication pattern, which is beneficial to the learning process. Also our method does not adopt additional decision module for the communication scheduling, which greatly reduces the model complexity.
3 Background
Deep Q-networks: We consider a standard reinforcement learning problem based on Markov Decision Process (MDP). At each timestamp t, the agent observes the state st, and chooses an action at. It then receives a reward rt for its action at and proceeds to the next state st+1. The goal is to maximize the total expected discounted reward R = ∑T t=1 γ
trt, where γ ∈ [0, 1] is the discount factor. A Deep Q-Network (DQN) use a deep neural network to represent the action value function Qθ(s, a) = E[Rt|st = s, at = a], where θ represents the parameters of the neural network, and Rt is the total rewards received at and after t. During the training phase, a replay buffer is used to store the transition tuples 〈 st, at, st+1, rt 〉 . The action value function Qθ(s, a) can be trained recursively by minimizing the loss L = Est,at,rt,st+1 [yt −Qθ(st, at)]2, where yt = rt + γmaxat+1Qθ′(st, at+1) and θ′ represents the parameters of the target network. An action is usually selected with -greedy policy. Namely, selecting the action with maximum action value with probability 1− , and choosing a random action with probability .
Multi-agent deep reinforcement learning: We consider an environment with N agents work cooperatively to fulfill a given task. At timestep t, each agent i (1 ≤ i ≤ N ) receives a local observation oti and executes an action a t i. They then receive a joint reward rt and proceed to the next state. We use a vector at = {ati} to represent the joint actions taken by all the agents. The agents aim to maximize the joint reward by choosing the best joint actions at at each timestep t.
Deep recurrent Q-networks: Traditional DQNs generate action solely based on a limited number of local observations without considering the prior knowledge. Hausknecht et al. [6] introduce Deep Recurrent Q-Networks (DRQN), which models the action value function with a recurrent neural network (RNN). The DRQN leverages its recurrent structure to integrate the previous observations and knowledge for better decision-making. At each timestep t, the DRQN Qθ(oti, h t−1 i , a t i) takes the local observation oti and hidden state h t−1 i from the previous steps as input to yield action values.
Learning the joint Q-function: Recent research effort has been made on the learning of joint action value function for multi-agent Q-learning. Two representative works are VDN [18] and QMIX [14]. In VDN, the joint action value function Qtot(ot,ht−1, at) is assumed to be the sum of all the individual action value functions, i.e. Qtot(ot,ht−1, at) = ∑ iQi(o t i, h t−1 i , a t i), where ot = {oti}, ht = {hti} and at = {ati} are the collection of the observations, hidden states and actions of all the agents at timestep t respectively. QMIX employs a neural network to represent the joint value function Qtot(ot,ht−1, at) as a nonlinear function of Qi(oti, h t−1 i , a t i) and global state st.
4 Variance Based Control
In this section, we present the detailed design of VBC in the context of multi-agent Q-learning. The main idea of VBC is to improve agent performance and communication efficiency by limiting the
variance of the transferred messages. During execution, each agent communicates with other agents only when its local decision is ambiguous. The degree of ambiguity is measured by the difference between the top two largest action values. Upon receiving the communication request from other agents, the agent replies only if its feedback is informative, namely the variance of the feedback is high.
4.1 Agent Network Design
The agent network consists of the following three networks: local action generator, message encoder and combiner. Figure 1(a) describes the network architecture for agent 1. The local action generator consists of a Gated Recurrent Unit (GRU) and a fully connected layer (FC). For agent i, the GRU takes the local observation oti and the hidden state h t−1 i as the inputs, and generates the intermediate results cti. c t i is then sent to the FC layer, which outputs the local action values Qi(o t i, h t−1 i , a t i) for each action ati ∈ A, where A is the set of possible actions. The message encoder, f ijenc(.), is a multi-layer perceptron (MLP) which contains two FC layers and a leaky ReLU layer. The agent network involves multiple independent message encoders, each accepts ctj from another agent j (j 6= i), and outputs f ijenc(c t j). The outputs from local action generator and message encoder are then sent to the combiner, which produces the global action value function Qi(ot,ht−1, ati) of agent i by taking into account the global observation ot and global history ht−1. To simplify the design and reduce model complexity, we do not introduce extra parameters for the combiner. Instead, we make the dimension of the f ijenc(c t j) the same as the local action values Qi(oti, h t−1 i , .), and hence the combiner can simply perform elementwise summation over its inputs, namely Qi(ot,ht−1, .) = Qi(oti, h t−1 i , .) + ∑ j 6=i f ij enc(c t j). The combiner chooses the action with the -greedy policy π(.). Let θilocal and θ ij enc denote the set of parameters of the local action generators and the message encoders, respectively. To prevent the lazy agent problem [18] and decrease the model complexity, we make θilocal the same for all i, and make θijenc the same for all i and j(j 6= i). Accordingly, we can drop the corner scripts and use θ = {θlocal, θenc} and fenc(.) to denote the agent network parameters and the message encoder.
4.2 Loss Function Definition
During the training phase, the message encoder and local action generator jointly learn to generate the best estimation on the action values. More specifically, we employ a mixing network (shown in Figure 1(b)) to aggregate the global action value functions Qi(ot,ht−1, ati) from each agents i, and yields the joint action value function, Qtot(ot,ht−1, at). To limit the variance of the messages from the other agents, we introduce an extra loss term on the variance of the outputs of the message encoders fenc(ctj). The loss function during the training phase is defined as:
L(θlocal, θenc) = B∑ b=1 T∑ t=1 [ (ybtot −Qtot(obt , hbt−1, abt ;θ))2 + λ N∑ i=1 V ar(fenc(c t,b i )) ] (1)
Algorithm 1: Communication protocol at agent i 1 Input: Confidence threshold of local actions δ1, threshold on variance of message encoder output δ2. Total
number of agents N. 2 for t ∈ T do 3 // Decision on the action of itself: 4 Compute local action values Qi(oti, h t−1 i , .). Denote m1,m2 the top two largest values of Qi(o t i, h t−1 i , .). 5 if m1 −m2 ≥ δ1 then 6 Let Qi(ot, ht−1, .) = Qi(oti, ht−1i , .). 7 else 8 Broadcast a request to the other agents, and receive the fenc(ctj) from Nreply(Nreply ≤ N) agents. 9 Let Qi(ot, ht−1, .) = Qi(oti, ht−1i , .) + ∑Nreply j=1 fenc(c t j).
10 // Generating reply messages for the other agents: 11 Calculate variance of fenc(cti), if V ar(fenc(c t i)) ≥ δ2, store fenc(cti) in the buffer. 12 if V ar(fenc(cti)) ≥ δ2 and Receive a request from agent j then 13 Reply the request from agent j with fenc(cti).
where ybtot = r b t + γmaxat+1Qtot(obt+1,h b t , at+1;θ −), θ− is the parameter of the target network which is copied from the θ periodically, V ar(.) is the variance function and λ is the weight of the loss on it. b is the batch index. The replay buffer is refreshed periodically by running each agent network and selecting the action which maximizes Qi(ot,ht−1, .).
4.3 Communication Protocol Design
During the execution phase, at every timestep t, the agent i first computes the local action value function Qi(oti, h t−1 i , .) and fenc(c t i). It then measures the confidence level on the local decision by computing the difference between the largest and the second largest element within the action values. An example is given in Figure 2(a). Assume agent 1 has three actions to select, and the output of the local action generator of agent 1 is Q1(ot1, h t−1 1 , .) = (0.1, 1.6, 3.8), and the difference between the largest and the second largest action values is 3.8− 1.6 = 2.2, which is greater than the threshold δ1 = 1.0. Given the fact that the variance of message encoder outputs fenc(ctj) from the agent 2 and 3 is relatively small due to the additional penalty term on variance in equation 1, it is highly possible that the global action value function Q1(ot,ht−1, .) also has the largest value in its third element. Therefore agent 1 does not have to talk to other agents to acquire fenc(ctj). Otherwise, agent 1 broadcasts a request to ask for help if its confidence level on the local decision is low. Because the request does not contain any actual data, it consumes very low bandwidth. Upon receiving the request, only the agents whose message has a large variance reply (Figure 2(b)), because their messages may change the current action decision of agent 1. This protocol not only reduces the communication overhead considerably, but also eliminates noisy, less informative messages that may impair the overall performance. The detailed protocol and operations performed at an agent i is summarized in Algorithm 1.
5 Convergence Analysis
In this section, we analyze convergence of the learning process with the loss function defined in equation (1) under the tabular setting. For the sake of simplicity, we ignore the dependency of the action value function on the previous knowledge ht. To minimize equation (1), given the initial state Q0, at iteration k, the q values in the table is updated according to the following rule:
Qk+1tot (ot, at) = Q k tot(ot, at)+ηk [ rt+γmaxaQ k tot(ot+1, a)−Qktot(ot, at)−λ
N∑ i=1 ∂V ar(fenc(c t i)) ∂Qktot(ot, at)
] (2)
where ηk, Qktot(.) are the learning rate and the joint action value function at iteration k respectively. Let Q∗tot(.) denote the optimal joint action value function. We have the following result on the convergence of the learning process. A detailed proof is given in the supplementary materials.
Theorem 1. Assume 0 ≤ ηk ≤ 1, ∑ k ηk = ∞, ∑ k η 2 k < ∞. Also assume the number of possible actions and states are finite. By performing equation 2 iteratively, we have ||Qktot(ot, at)− Q∗tot(ot, at)|| ≤ λNG ∀ot, at, as k →∞, where G satisfies || ∂V ar(fenc(c t i))
∂Qktot(ot,at) || ≤ G,∀i, k, t, ot, at.
6 Experiment
We evaluated the performance of VBC on the StarCraft Multi-Agent Challenge (SMAC) [15]. StarCraft II [1] is a real-time strategy (RTS) game that has recently been utilized as a benchmark by the reinforcement learning community [14, 5, 13, 4]. In this work, we focus on the decentralized micromanagement problem in StarCraft II, which involves two armies, one controlled by the user (i.e. a group of agents), and the other controlled by the build-in StarCraft II AI. The goal of the user is to control its allied units to destroy all enemy units, while minimizing received damage on each unit. We consider six different battle settings. Three of them are symmetrical battles, where both the user and the enemy groups consist of 2 Stalkers and 3 Zealots (2s3z), 3 Stalkers and 5 Zealots (2s5z), and 1 Medivac, 2 Marauders and 7 Marines (MMM) respectively. The other three are unsymmetrical battles, where the user and enemy groups have different army unit compositions, including: 3 Stalkers for user versus 4 Zealots for enemy (3s_vs_4z), 6 Hydralisks for user versus 8 Zealots for enemy (6s_vs_8z), and 6 Zealot for user versus 24 Zerglings for enemy (6z_vs_24zerg). The unsymmetrical battles are considered to be harder than the symmetrical battles because of the difference in army size.
At each timestep, each agent controls a single unit to perform an action, including move[direction], attack[enemy_id], stop and no-op. Each agent has a limited sight range and shooting range, where shooting range is less than the sight range. The attack operation is available only when the enemies are within the shooting range. The joint reward received by the allied units equals to the total damage inflicted on enemy units. Additionally, the agents are rewarded 100 extra points after killing each enemy unit, and 200 extra points for killing the entire army. The user wins the battle only when the allied units kill all the enemies within the time limit. Otherwise the built-in AI wins. The input observation of each agent is a vector that consists of the following information of each allied unit and enemy unit in its sight range: relative x, y coordinates, relative distance and agent type. For the detailed game settings, hyperparameters, and additional experiment evaluation over other test environments, please refer to supplementary materials.
6.1 Results
We compare VBC and several benchmark algorithms, including VDN [18], QMIX [14] and SchedNet [8] for controlling allied units. We consider two types of VBCs by adopting the mixing networks of VDN and QMIX, denoted as VBC+VDN and VBC+QMIX. The mixing network of VDN simply computes the elementwise summation across all the inputs, and the mixing network of QMIX deploys a neural network whose weight is derived from the global state st. The detailed architecture of this mixing network can be found in [14]. Additionally, we create an algorithm FC (full communication) by removing the penalty in Equation (1), and dropping the limit on variance during the execution phase (i.e., δ1 =∞ and δ2 = −∞). The agents are trained with the same network architecture shown in Figure (1), and the mixing network of VDN is used. For SchedNet, at every timestep only K out of N agents can broadcast their messages by using Top(k) scheduling policy [8]. We usually set K close to 0.5N , that is, each time roughly half of the allied units can broadcast their messages. The
VBC are trained for different number of episodes based on the difficulties of the battles, which we describe in detail next.
To measure the convergence speed of each algorithm, we stop the training process and save the current model every 200 training episodes. We then run 20 test episodes and measure the winning rates for these 20 episodes. For VBC+VDN and VBC+QMIX, the winning rates are measured by running the communication protocol described in Algorithm 1. For easy tasks, namely MMM and 2s_vs_3z, we train the algorithms with 2 million and 4 million episodes respectively. For all the other tasks, we train the algorithms with 10 million episodes. Each algorithm is trained 15 times. Figure 3 shows the average winning rate and 95% confidence interval of each algorithm for all the six tasks. For hyperparameters used by VBC (i.e., λ used in equation (1), δ1andδ2 in Algorithm 1), we first search for a coarse parameter range based on random trial, experience and message statistics. We then perform a random search within a smaller hyperparameter space. Best selections are shown in the legend of each figure.
We observe that the algorithms that involve communication (i.e., SchedNet, FC, VBC) outperform the algorithms without communication (i.e., VDN, QMIX) in all the six tasks. This is a clear indication that communication benefits the performance. Moreover, both VBC+VDN and VBC+QMIX achieve better winning rates than SchedNet, because SchedNet only allows a fixed number of agents to talk at every timestep, which prohibits some key information to exchange in a timely fashion. Finally, VBC achieves similar performance as FC and even outplays FC for some tasks (e.g., 2s3z,6h_vs_8z, 6z_vs_24zerg). This is because a fair amount of communication between the agents are noisy and redundant. By eliminating these undesired messages, VBC is able to achieve both communication efficiency and performance gain.
6.2 Communication Overhead
We now evaluate the communication overhead of VBC. To quantify the amount of communication involved, we run Algorithm 1 and count the total number of pairs of agents gt that conduct communication for each timestep t, then divided by the total number of pairs of agents in the user group, R. In other words, the communication overhead β = ∑T t=1 gt/RT . An an example, for the task 3s_vs_4z, since the user controls 3 Stalkers, and the total number of agent pairs is R = 3× 2 = 6. Within these 6 pairs of agents, suppose that 2 pairs involve communication, then gt = 2. Table 1 shows the β of VBC+VDN, VBC+QMIX and SchedNet across all the test episodes at the end of the training phase of each battle. For SchedNet, β simply equals the ratio between the number of allied agents that are allowed to talk and the total number of allied agents. As shown in Table 1, in contrast to ScheNet,
VBC+VDN and VBC+QMIX produce 10× lower communication overhead for MMM and 2s3z, and 2− 6× less traffic for the rest of tasks.
6.3 Learned Strategy
In this section, we examine the behaviors of the agents in order to better understand the strategies adopted by the different algorithms. We have made a video demo available at [2] for better illustration.
For unsymmetrical battles, the number of allied units is less than the enemy units, and therefore the agents are prone to be attacked by the enemies. This is exactly what happened for the QMIX and VDN agents on 6h_vs_8z, as shown in (Figure 4(c)). Figure 4(b) shows the strategy of VBC, all the Hydralisks are placed in a row at the bottom margin of the map. Due to the limited size of the map, the Zealots can not go beyond the margin to surround the Hydralisks. The Hydralisks then focus their fire to kill each Zealot. Figure 4(a) shows the change on β for a sample test episode. We observe that most of the communication appears in the beginning of the episode. This is due to the fact that Hydralisks need to talk in order to arrange in a row formation. After the arrangement is formed, no communication is needed until the arrangement is broken due to the deaths of some Hydralisks, as indicated by the short spikes near the end of the episode. Finally, SchedNet and FC utilize a similar strategy as VBC. Nonetheless, due to the restriction on communication pattern, the row formed by the allied agents are usually not well formed, and can be easily broken by the enemies.
For 3s_vs_4z scenario, the Stalkers have a larger attack range than Zealots. All the algorithms adopt a kiting strategy where the Stalkers form a group and attack the Zealots while kiting them. For VBC and FC, at each timestep only the agents that are far from the enemies attack, and the rest of the agents (usually the healthier ones) are used as a shield to protect the firing agents (Figure 4(d)). Communication only occurs when the group are broken and need to realign. In contrast, VDN and QMIX do not have this attacking pattern, and all the Stalkers always fire simultaneously, therefore the Stalkers closest to the Zealots are get killed first. SchedNet and FC also adopt a similar policy as VBC, but the attacking pattern of the Stalkers is less regular, i.e., the Stalkers close to the Zealots also fire occasionally.
6z_vs_24zerg is the toughest scenario in our experiments. For QMIX and VDN, the 6 Zealots are surrounded and killed by the 24 Zerglings shortly after the episode starts. In contrast, VBC first separates the agents into two groups of two and four Zealots, respectively (Figure 4(e)). The two Zealots attract most of the Zerglings to a place far away from the other four Zealots, and are killed shortly after. Due to the limited sight range of the Zerglings, they cannot find the remaining four Zealots. Meanwhile, the four Zealots easily kill the small group of Zerglings on their side and then search for the remaining ones. The four Zealots take advantage of the Zerglings' short sight: each time, they adjust their positions so that they can only be seen by a small number of Zerglings, and the baited Zerglings are then killed easily (Figure 4(f)). For VBC, communication only occurs at the beginning of the episode, when the Zealots are separated into two groups, and near the end of the episode, when the four Zealots adjust their positions. Both FC and SchedNet learn the strategy of splitting the Zealots into two groups, but they fail to fine-tune their positions to kill the remaining Zerglings.
For symmetrical battles, the tasks are less challenging, and we see smaller disparities in the performance of the algorithms. For 2s3z and 3s5z, the VDN agents attack the enemies blindly, without any cooperation. The QMIX agents learn to focus fire and protect the Stalkers. The agents of VBC, FC and SchedNet adopt a more aggressive policy: the allied Zealots try to surround and kill the enemy Zealots first, and then attack the enemy Stalkers in collaboration with the allied Stalkers. This is extremely effective because Zealots counter Stalkers, so it is important to kill the enemy Zealots before they damage the allied Stalkers. For VBC, communication occurs mostly while the allied Zealots try to surround the enemy Zealots. For MMM, almost all the methods learn the optimal policy, namely killing the Medivac first and then attacking the rest of the enemy units cooperatively.
6.4 Evaluation on Cooperative Navigation and Predator-prey
To demonstrate the applicability of VBC in more general settings, we have tested VBC on two more scenarios: (1) Cooperative Navigation (CN), a cooperative scenario, and (2) Predator-prey (PP), a competitive scenario. The game settings are the same as those used in [10] and [8], respectively. We train each method until convergence and test the resulting models for 2000 episodes. For PP, we make the agents of VBC compete against the agents of the other methods and report the normalized score of the predators (Figure 5(a)). For CN, we report the average distance between agents and their destinations, and the average number of collisions (Figure 5(b)). We notice that the methods that allow communication (i.e., SchedNet, FC, VBC) outperform the others for both tasks, and VBC achieves the best performance. Moreover, in both scenarios, VBC incurs 10× and 3× lower communication overhead than FC and SchedNet, respectively. In CN, most of VBC's communication occurs when agents are close to each other, to prevent collisions. In PP, VBC's communication occurs mainly to rearrange agent positions for better coordination. These observations confirm that VBC can be applied to a variety of MARL scenarios with great effectiveness.
7 Conclusion
In this work, we propose VBC, a simple and effective approach to achieve efficient communication among agents in MARL. By constraining the variance of the exchanged messages during the training phase, VBC improves communication efficiency while enabling better cooperation among the agents. The test results on multiple MARL benchmarks indicate that VBC significantly outperforms the other state-of-the-art methods in terms of both performance and communication overhead.

1. What is the focus and contribution of the paper regarding MARL algorithms?
2. What are the strengths and weaknesses of the proposed approach, particularly in its application and comparison with other works?
3. Do you have any concerns or questions regarding the method's narrow application, typos, or omitted references?
4. How did the authors choose the learning hyperparameters and messaging thresholds, and how sensitive is the method to these values?
5. What additional discussions or analyses would be helpful to better understand the method's performance and settings?

Review
The paper contributes to the overall class of MARL algorithms as another simple communication method that improves performance with reduced communication costs.
- I am a bit worried about the method's narrow application. It was only evaluated on a collection of similar StarCraft II environments. It also only works on cooperative environments.
- Line 111: the Q function targets should be optimized over s_{t+1}, not s_t. I think this is just a typo and is not reflected in the results.
- I do find it odd that MADDPG (Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments) was not referenced in this paper. It is very related and has a form of implicit communication.
- The change to the learning loss is simple.
- There is little discussion of the learning hyperparameter introduced and the messaging thresholds. How are these chosen? How sensitive is the method to these values? It is not explicitly said what values are used for the experiments. I assume the same from the figures.

After going over the author response, I appreciate the extra analysis put into comparing the method to MADDPG to make sure it is state of the art. It is good to compare these methods across previous benchmarks to show improvement. While the additional hyperparameter analysis is helpful, it is a bit obvious, being what is normally done. Some discussion of the effects of specific settings might shed more light on how the method works. I have updated my scoring.
NIPS | Title
Efficient Communication in Multi-Agent Reinforcement Learning via Variance Based Control
Abstract
Multi-agent reinforcement learning (MARL) has recently received considerable attention due to its applicability to a wide range of real-world applications. However, achieving efficient communication among agents has always been an overarching problem in MARL. In this work, we propose Variance Based Control (VBC), a simple yet efficient technique to improve communication efficiency in MARL. By limiting the variance of the exchanged messages between agents during the training phase, the noisy components of the messages can be eliminated effectively, while the useful parts can be preserved and utilized by the agents for better performance. Our evaluation using multiple MARL benchmarks indicates that our method achieves 2–10× lower communication overhead than state-of-the-art MARL algorithms, while allowing the agents to achieve better overall performance.
1 Introduction
Many real-world applications (e.g., autonomous driving [16], game playing [12] and robotics control [9]) today require reinforcement learning tasks to be carried out in multi-agent settings. In MARL, multiple agents interact with each other in a shared environment. Each agent only has access to partial observations of the environment, and needs to make local decisions based on partial observations as well as both direct and indirect interactions with the other agents. This complex interaction model has introduced numerous challenges for MARL. In particular, during the training phase, each agent may dynamically change its strategy, causing dynamics in the surrounding environment and instability in the training process. Worse still, each agent can easily overfit its strategy to the behaviours of other agents [11], which may seriously deteriorate the overall performance.
In the research literature, there have been three lines of research that try to mitigate the instability and inefficiency caused by decentralized execution. The most common approach is independent Q-learning (IQL) [20], which breaks down a multi-agent learning problem into multiple independent single-agent learning problems, thus allowing each agent to learn and act independently. Unfortunately, this approach does not account for the instability caused by environment dynamics, and therefore often suffers from poor convergence. The second approach adopts the centralized training and decentralized execution paradigm [18], where a joint action value function is learned during the training phase to better coordinate the agents' behaviours. During execution, each agent acts independently without direct communication. The third approach introduces communication among agents during execution [17, 3]. This approach allows each agent to dynamically adjust its strategy based on its local observation along with the information received from the other agents. Nonetheless, it introduces additional communication overhead in terms of latency and bandwidth during execution, and its effectiveness is heavily dependent on the usefulness of the received information.
In this work, we leverage the advantages of both the second and third approaches. Specifically, we consider a fully cooperative scenario where multiple agents collaborate to achieve a common objective. The agents are trained in a centralized fashion within the multi-agent Q-learning framework, and are allowed to communicate with each other during execution. However, unlike previous work, we make a few key observations. First, for many applications, it is often superfluous for an agent to wait for feedback from all surrounding agents before making an action decision. For instance, when the front camera on an autonomous vehicle detects an obstacle within the dangerous distance limit, it triggers the 'brake' signal without waiting for feedback from the other parts of the vehicle. Second, the feedback received from the other agents may not always provide useful information. For example, the navigation system of the autonomous vehicle should pay more attention to the messages sent by the perception system (e.g., camera, radar), and less attention to the entertainment system inside the vehicle before taking its action. The full (i.e., all-to-all) communication pattern among the agents can lead to a significant communication overhead in terms of both bandwidth and latency, which limits its practicality and effectiveness in real applications with strict latency requirements and bandwidth constraints (e.g., real-time traffic signal control, autonomous driving, etc.). In addition, as pointed out by Jiang et al. [7], an excessive amount of communication may introduce useless and even harmful information that can impair the convergence of the learning process.

33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
Motivated by these observations, we design a novel deep MARL architecture that can significantly improve inter-agent communication efficiency. Specifically, we introduce Variance Based Control (VBC), a simple yet efficient approach to reduce the amount of information transferred between agents. By inserting an extra loss term on the variance of the exchanged information, the meaningful part of the messages can be effectively extracted and utilized to benefit the training of each individual agent. Furthermore, unlike previous work, we do not require an extra decision module to dynamically adjust the communication pattern. This allows us to reduce the model complexity significantly. Instead, each agent first makes a preliminary decision based on its local information, and initiates communication only when its confidence level in this preliminary decision is low. Similarly, upon receiving a communication request, an agent replies only when its message is informative. By exchanging only useful information among the agents, VBC not only improves agent performance, but also substantially reduces the communication overhead during execution. Lastly, it can be shown theoretically that the resulting training algorithm provides guaranteed stability.
For evaluation, we test VBC on several MARL benchmarks, including the StarCraft Multi-Agent Challenge [15], Cooperative Navigation (CN) [10] and Predator-prey (PP) [8]. For the StarCraft Multi-Agent Challenge, VBC achieves a 20% higher winning rate and 2–10× lower communication overhead on average compared with the other benchmark algorithms. For both the CN and PP scenarios, VBC outperforms the existing algorithms and incurs much lower overhead than existing communication-enabled approaches. A video demo is available at [2] for a better illustration of VBC's performance. The code is available at https://github.com/saizhang0218/VBC.
2 Related Work
The simplest training method for MARL is to make each agent learn independently using Independent Q-Learning (IQL) [20]. Although IQL is successful in solving simple tasks such as Pong [19], it ignores the environment dynamics arising from the interactions among the agents. As a result, it suffers from poor convergence, making it difficult to handle advanced tasks.
Given the recent success of deep Q-learning [12], some recent studies explore the scheme of centralized training and decentralized execution. Sunehag et al. [18] propose the Value Decomposition Network (VDN), a method that acquires the joint action value function by summing up the action value functions of all agents. All the agents are trained as a whole by updating the joint action value function iteratively. QMIX [14] builds on VDN and utilizes a neural network to represent the joint action value function as a function of the individual action value functions and the global state information. The authors of [10] extend actor-critic methods to the multi-agent scenario. By performing centralized training and decentralized execution over the agents, the agents can better adapt to changes in the environment and collaborate with each other. Foerster et al. [5] propose counterfactual multi-agent policy gradients (COMA), which employ a centralized critic to estimate the joint action value function and decentralized actor functions to make each agent execute independently. All the aforementioned methods assume no communication between the agents during execution. As a result, many subsequent approaches, including ours, can be applied to improve the performance of these methods.
Learning the communication pattern for MARL was first proposed by Sukhbaatar et al. [17]. The authors introduce CommNet, a framework that adopts continuous communication for fully cooperative tasks. During execution, each agent takes its own internal state, as well as the mean of the internal states of the other agents, as input to decide on its action. BiCNet [13] uses a bidirectional coordinated network to connect the agents. However, both schemes require all-to-all communication among the agents, which can cause significant communication overhead and latency.
Several other proposals [3, 7, 8] use a selection module to dynamically adjust the communication pattern among the agents. In Differentiable Inter-Agent Learning (DIAL) [3], the messages produced by an agent are selectively sent to the neighboring agents through the discretize/regularise unit (DRU). By jointly training the DRU with the agent network, the communication overhead can be efficiently reduced. Jiang et al. [7] propose an attentional communication model that learns when communication is required and how to aggregate the shared information. However, an agent can only talk to the agents within its observable range at each timestep. This limits the speed of information propagation and restricts the possible communication patterns when the local observable field is small. Kim et al. [8] propose a communication scheduling scheme for wireless environments, but only a fraction of the agents can broadcast their messages at each timestep. In comparison, our approach does not impose hard constraints on the communication pattern, which is beneficial to the learning process. Moreover, our method does not adopt an additional decision module for communication scheduling, which greatly reduces the model complexity.
3 Background
Deep Q-networks: We consider a standard reinforcement learning problem based on a Markov Decision Process (MDP). At each timestep t, the agent observes the state s_t and chooses an action a_t. It then receives a reward r_t for its action and proceeds to the next state s_{t+1}. The goal is to maximize the total expected discounted reward R = ∑_{t=1}^T γ^t r_t, where γ ∈ [0, 1] is the discount factor. A Deep Q-Network (DQN) uses a deep neural network to represent the action value function Q_θ(s, a) = E[R_t | s_t = s, a_t = a], where θ denotes the parameters of the neural network and R_t is the total reward received at and after t. During the training phase, a replay buffer is used to store the transition tuples ⟨s_t, a_t, s_{t+1}, r_t⟩. The action value function Q_θ(s, a) can be trained recursively by minimizing the loss L = E_{s_t, a_t, r_t, s_{t+1}} [y_t − Q_θ(s_t, a_t)]², where y_t = r_t + γ max_{a_{t+1}} Q_{θ′}(s_{t+1}, a_{t+1}) and θ′ denotes the parameters of the target network. Actions are usually selected with an ε-greedy policy: with probability 1 − ε, the action with the maximum action value is selected, and with probability ε a random action is chosen.
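As a concrete illustration of the above, a minimal PyTorch-style sketch of the one-step TD loss with a target network (all names are ours; note that the target bootstraps from the next state s_{t+1}):

import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    # batch holds tensors s, a, r, s_next sampled from the replay buffer.
    q = q_net(batch["s"]).gather(1, batch["a"].unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrap from the *next* state s_{t+1} via the target network.
        y = batch["r"] + gamma * target_net(batch["s_next"]).max(dim=1).values
    return F.mse_loss(q, y)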
Multi-agent deep reinforcement learning: We consider an environment in which N agents work cooperatively to fulfill a given task. At timestep t, each agent i (1 ≤ i ≤ N) receives a local observation o_i^t and executes an action a_i^t. The agents then receive a joint reward r_t and proceed to the next state. We use the vector a_t = {a_i^t} to represent the joint action taken by all the agents. The agents aim to maximize the joint reward by choosing the best joint action a_t at each timestep t.
Deep recurrent Q-networks: Traditional DQNs generate actions solely based on a limited number of local observations, without considering prior knowledge. Hausknecht et al. [6] introduce Deep Recurrent Q-Networks (DRQN), which model the action value function with a recurrent neural network (RNN). The DRQN leverages its recurrent structure to integrate previous observations and knowledge for better decision-making. At each timestep t, the DRQN Q_θ(o_i^t, h_i^{t−1}, a_i^t) takes the local observation o_i^t and the hidden state h_i^{t−1} from the previous step as input to yield action values.
Learning the joint Q-function: Recent research effort has been devoted to learning the joint action value function for multi-agent Q-learning. Two representative works are VDN [18] and QMIX [14]. In VDN, the joint action value function Q_tot(o_t, h_{t−1}, a_t) is assumed to be the sum of all individual action value functions, i.e., Q_tot(o_t, h_{t−1}, a_t) = ∑_i Q_i(o_i^t, h_i^{t−1}, a_i^t), where o_t = {o_i^t}, h_t = {h_i^t} and a_t = {a_i^t} are the collections of observations, hidden states and actions of all the agents at timestep t, respectively. QMIX employs a neural network to represent the joint value function Q_tot(o_t, h_{t−1}, a_t) as a nonlinear function of the Q_i(o_i^t, h_i^{t−1}, a_i^t) and the global state s_t.
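To make the contrast concrete, a minimal sketch of the two joint-value constructions (the QMIX mixing network is left abstract here; its monotonic architecture is described in [14]):

import torch

def vdn_joint_q(agent_qs):
    # VDN: Q_tot is the sum of the chosen per-agent action values.
    # agent_qs: tensor of shape (batch, N).
    return agent_qs.sum(dim=1)

def qmix_joint_q(agent_qs, mixing_net, state):
    # QMIX: a monotonic mixing network, with weights generated from the
    # global state s_t, combines the per-agent values nonlinearly.
    return mixing_net(agent_qs, state)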
4 Variance Based Control
In this section, we present the detailed design of VBC in the context of multi-agent Q-learning. The main idea of VBC is to improve agent performance and communication efficiency by limiting the
variance of the transferred messages. During execution, each agent communicates with the other agents only when its local decision is ambiguous. The degree of ambiguity is measured by the difference between the two largest action values. Upon receiving a communication request from other agents, an agent replies only if its feedback is informative, namely, if the variance of the feedback is high.
4.1 Agent Network Design
The agent network consists of the following three components: a local action generator, message encoders, and a combiner. Figure 1(a) shows the network architecture for agent 1. The local action generator consists of a Gated Recurrent Unit (GRU) and a fully connected layer (FC). For agent i, the GRU takes the local observation o_i^t and the hidden state h_i^{t−1} as inputs and generates the intermediate result c_i^t. c_i^t is then sent to the FC layer, which outputs the local action values Q_i(o_i^t, h_i^{t−1}, a_i^t) for each action a_i^t ∈ A, where A is the set of possible actions. The message encoder f_enc^{ij}(·) is a multi-layer perceptron (MLP) that contains two FC layers and a leaky ReLU layer. The agent network involves multiple independent message encoders, each of which accepts c_j^t from another agent j (j ≠ i) and outputs f_enc^{ij}(c_j^t). The outputs of the local action generator and the message encoders are then sent to the combiner, which produces the global action value function Q_i(o_t, h_{t−1}, a_i^t) of agent i by taking into account the global observation o_t and the global history h_{t−1}. To simplify the design and reduce model complexity, we do not introduce extra parameters for the combiner. Instead, we make the dimension of f_enc^{ij}(c_j^t) the same as that of the local action values Q_i(o_i^t, h_i^{t−1}, ·), so the combiner can simply perform an elementwise summation over its inputs, namely Q_i(o_t, h_{t−1}, ·) = Q_i(o_i^t, h_i^{t−1}, ·) + ∑_{j≠i} f_enc^{ij}(c_j^t). The combiner chooses the action with the ε-greedy policy π(·). Let θ_local^i and θ_enc^{ij} denote the parameters of the local action generators and the message encoders, respectively. To prevent the lazy agent problem [18] and decrease model complexity, we make θ_local^i the same for all i, and θ_enc^{ij} the same for all i and j (j ≠ i). Accordingly, we can drop the superscripts and use θ = {θ_local, θ_enc} and f_enc(·) to denote the agent network parameters and the message encoder.
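A minimal PyTorch-style sketch of this architecture is given below; the layer sizes and names are our assumptions, not the paper's exact hyperparameters.

import torch
import torch.nn as nn

class AgentNet(nn.Module):
    # Local action generator (GRU + FC), shared message encoder
    # (two FC layers with a leaky ReLU), and an element-wise-sum combiner.
    def __init__(self, obs_dim, hidden_dim, n_actions, msg_hidden=32):
        super().__init__()
        self.gru = nn.GRUCell(obs_dim, hidden_dim)
        self.fc = nn.Linear(hidden_dim, n_actions)   # local action values
        self.enc = nn.Sequential(
            nn.Linear(hidden_dim, msg_hidden),
            nn.LeakyReLU(),
            nn.Linear(msg_hidden, n_actions),        # f_enc(c_i^t)
        )

    def forward(self, obs, h_prev, incoming):
        c = self.gru(obs, h_prev)      # intermediate result c_i^t
        q_local = self.fc(c)           # Q_i(o_i^t, h_i^{t-1}, .)
        msg = self.enc(c)              # message sent to the other agents
        # Combiner: element-wise sum of local values and received messages.
        q_global = q_local + sum(incoming) if incoming else q_local
        return q_global, msg, c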
4.2 Loss Function Definition
During the training phase, the message encoder and the local action generator jointly learn to produce the best estimate of the action values. More specifically, we employ a mixing network (shown in Figure 1(b)) to aggregate the global action value functions Q_i(o_t, h_{t−1}, a_i^t) from each agent i and yield the joint action value function Q_tot(o_t, h_{t−1}, a_t). To limit the variance of the messages from the other agents, we introduce an extra loss term on the variance of the outputs of the message encoders f_enc(c_j^t). The loss function during the training phase is defined as:
L(θ_local, θ_enc) = ∑_{b=1}^B ∑_{t=1}^T [ (y_tot^b − Q_tot(o_t^b, h_{t−1}^b, a_t^b; θ))² + λ ∑_{i=1}^N Var(f_enc(c_i^{t,b})) ]   (1)
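A sketch of Equation (1) as a training loss; the exact reduction over batch and time, and the dimension over which Var(·) is taken, are our assumptions:

import torch

def vbc_loss(q_tot, y_tot, msgs, lam):
    # msgs: encoder outputs f_enc(c_i^{t,b}) of shape (B, T, N, n_actions);
    # the variance penalty is taken over the action-value dimension.
    td = ((y_tot - q_tot) ** 2).sum()
    return td + lam * msgs.var(dim=-1).sum()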
Algorithm 1: Communication protocol at agent i
1:  Input: confidence threshold δ1 on local actions, threshold δ2 on the variance of the message encoder output, total number of agents N.
2:  for t ∈ T do
3:      // Decision on its own action:
4:      Compute the local action values Q_i(o_i^t, h_i^{t−1}, ·). Denote by m1, m2 the two largest values of Q_i(o_i^t, h_i^{t−1}, ·).
5:      if m1 − m2 ≥ δ1 then
6:          Let Q_i(o_t, h_{t−1}, ·) = Q_i(o_i^t, h_i^{t−1}, ·).
7:      else
8:          Broadcast a request to the other agents, and receive f_enc(c_j^t) from N_reply (N_reply ≤ N) agents.
9:          Let Q_i(o_t, h_{t−1}, ·) = Q_i(o_i^t, h_i^{t−1}, ·) + ∑_{j=1}^{N_reply} f_enc(c_j^t).
10:     // Generating reply messages for the other agents:
11:     Compute the variance of f_enc(c_i^t); if Var(f_enc(c_i^t)) ≥ δ2, store f_enc(c_i^t) in the buffer.
12:     if Var(f_enc(c_i^t)) ≥ δ2 and a request from agent j is received then
13:         Reply to agent j with f_enc(c_i^t).
where y_tot^b = r_t^b + γ max_{a_{t+1}} Q_tot(o_{t+1}^b, h_t^b, a_{t+1}; θ^−), θ^− are the parameters of the target network, which are copied from θ periodically, Var(·) is the variance function, λ is the weight of the variance loss, and b is the batch index. The replay buffer is refreshed periodically by running each agent network and selecting the action that maximizes Q_i(o_t, h_{t−1}, ·).
4.3 Communication Protocol Design
During the execution phase, at every timestep t, agent i first computes the local action values Q_i(o_i^t, h_i^{t−1}, ·) and f_enc(c_i^t). It then measures its confidence in the local decision by computing the difference between the largest and second largest action values. An example is given in Figure 2(a). Assume agent 1 has three actions to select from, and the output of its local action generator is Q_1(o_1^t, h_1^{t−1}, ·) = (0.1, 1.6, 3.8); the difference between the two largest action values is 3.8 − 1.6 = 2.2, which is greater than the threshold δ1 = 1.0. Given that the variance of the message encoder outputs f_enc(c_j^t) from agents 2 and 3 is relatively small, due to the additional variance penalty in Equation (1), it is highly likely that the global action value function Q_1(o_t, h_{t−1}, ·) also attains its largest value in the third element. Therefore agent 1 does not have to talk to the other agents to acquire f_enc(c_j^t). Otherwise, if its confidence in the local decision is low, agent 1 broadcasts a request for help. Because the request does not contain any actual data, it consumes very little bandwidth. Upon receiving the request, only the agents whose messages have a large variance reply (Figure 2(b)), because only their messages may change agent 1's current action decision. This protocol not only reduces the communication overhead considerably, but also eliminates noisy, less informative messages that may impair overall performance. The detailed protocol and the operations performed by agent i are summarized in Algorithm 1.
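The two threshold tests of Algorithm 1 amount to a few lines of code; a minimal sketch (1-D tensors per agent; names are ours):

import torch

def needs_help(q_local, delta1):
    # Request messages only if the gap between the two largest local
    # action values is below the confidence threshold delta1.
    top2 = torch.topk(q_local, k=2).values
    return (top2[0] - top2[1]) < delta1

def is_informative(msg, delta2):
    # Reply to requests only if the encoded message has high variance.
    return msg.var() >= delta2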
5 Convergence Analysis
In this section, we analyze the convergence of the learning process with the loss function defined in Equation (1) under the tabular setting. For the sake of simplicity, we ignore the dependency of the action value function on the previous knowledge h_t. To minimize Equation (1), given the initial state Q⁰, the Q values in the table are updated at iteration k according to the following rule:

Q_tot^{k+1}(o_t, a_t) = Q_tot^k(o_t, a_t) + η_k [ r_t + γ max_a Q_tot^k(o_{t+1}, a) − Q_tot^k(o_t, a_t) − λ ∑_{i=1}^N ∂Var(f_enc(c_i^t)) / ∂Q_tot^k(o_t, a_t) ]   (2)
where η_k and Q_tot^k(·) are the learning rate and the joint action value function at iteration k, respectively. Let Q_tot^*(·) denote the optimal joint action value function. We have the following result on the convergence of the learning process; a detailed proof is given in the supplementary materials.
Theorem 1. Assume 0 ≤ η_k ≤ 1, ∑_k η_k = ∞, and ∑_k η_k² < ∞. Also assume that the numbers of possible actions and states are finite. By performing Equation (2) iteratively, we have ||Q_tot^k(o_t, a_t) − Q_tot^*(o_t, a_t)|| ≤ λNG for all o_t, a_t as k → ∞, where G satisfies ||∂Var(f_enc(c_i^t)) / ∂Q_tot^k(o_t, a_t)|| ≤ G for all i, k, t, o_t, a_t.
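For intuition, one iteration of the update rule (2) can be sketched as follows; the gradient of the variance penalty is abstracted into a scalar penalty_grad, assumed bounded by G as in Theorem 1 (names are ours):

import numpy as np

def tabular_update(Q, s, a, r, s_next, eta, gamma, lam, penalty_grad):
    # penalty_grad stands in for sum_i dVar(f_enc(c_i^t)) / dQ[s, a].
    td = r + gamma * np.max(Q[s_next]) - Q[s, a]
    Q[s, a] += eta * (td - lam * penalty_grad)
    return Q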
6 Experiment
We evaluated the performance of VBC on the StarCraft Multi-Agent Challenge (SMAC) [15]. StarCraft II [1] is a real-time strategy (RTS) game that has recently been utilized as a benchmark by the reinforcement learning community [14, 5, 13, 4]. In this work, we focus on the decentralized micromanagement problem in StarCraft II, which involves two armies, one controlled by the user (i.e., a group of agents), and the other controlled by the built-in StarCraft II AI. The goal of the user is to control its allied units to destroy all enemy units, while minimizing the damage received by each unit. We consider six different battle settings. Three of them are symmetrical battles, where both the user and the enemy groups consist of 2 Stalkers and 3 Zealots (2s3z), 3 Stalkers and 5 Zealots (3s5z), and 1 Medivac, 2 Marauders and 7 Marines (MMM), respectively. The other three are unsymmetrical battles, where the user and enemy groups have different army unit compositions: 3 Stalkers for the user versus 4 Zealots for the enemy (3s_vs_4z), 6 Hydralisks for the user versus 8 Zealots for the enemy (6h_vs_8z), and 6 Zealots for the user versus 24 Zerglings for the enemy (6z_vs_24zerg). The unsymmetrical battles are considered harder than the symmetrical battles because of the difference in army size.
At each timestep, each agent controls a single unit to perform an action, including move[direction], attack[enemy_id], stop and no-op. Each agent has a limited sight range and shooting range, where the shooting range is smaller than the sight range. The attack operation is available only when the enemies are within the shooting range. The joint reward received by the allied units equals the total damage inflicted on enemy units. Additionally, the agents are rewarded 100 extra points for killing each enemy unit, and 200 extra points for killing the entire army. The user wins the battle only when the allied units kill all the enemies within the time limit; otherwise the built-in AI wins. The input observation of each agent is a vector that consists of the following information for each allied unit and enemy unit in its sight range: relative x, y coordinates, relative distance and agent type. For the detailed game settings, hyperparameters, and additional experimental evaluations on other test environments, please refer to the supplementary materials.
6.1 Results
We compare VBC with several benchmark algorithms, including VDN [18], QMIX [14] and SchedNet [8], for controlling the allied units. We consider two variants of VBC, adopting the mixing networks of VDN and QMIX, denoted VBC+VDN and VBC+QMIX. The mixing network of VDN simply computes the elementwise summation across all the inputs, and the mixing network of QMIX deploys a neural network whose weights are derived from the global state s_t. The detailed architecture of this mixing network can be found in [14]. Additionally, we create an algorithm FC (full communication) by removing the penalty in Equation (1) and dropping the limits on variance during the execution phase (i.e., δ1 = ∞ and δ2 = −∞). The agents are trained with the same network architecture shown in Figure 1, and the mixing network of VDN is used. For SchedNet, at every timestep only K out of N agents can broadcast their messages, using the Top(k) scheduling policy [8]. We usually set K close to 0.5N, that is, each time roughly half of the allied units can broadcast their messages. The models are trained for different numbers of episodes depending on the difficulty of the battle, which we describe in detail next.
To measure the convergence speed of each algorithm, we stop the training process and save the current model every 200 training episodes. We then run 20 test episodes and measure the winning rate over these 20 episodes. For VBC+VDN and VBC+QMIX, the winning rates are measured by running the communication protocol described in Algorithm 1. For the easy tasks, namely MMM and 2s3z, we train the algorithms with 2 million and 4 million episodes, respectively. For all the other tasks, we train the algorithms with 10 million episodes. Each algorithm is trained 15 times. Figure 3 shows the average winning rate and the 95% confidence interval of each algorithm for all six tasks. For the hyperparameters used by VBC (i.e., λ in Equation (1), and δ1 and δ2 in Algorithm 1), we first search for a coarse parameter range based on random trials, experience, and message statistics. We then perform a random search within this smaller hyperparameter space. The best selections are shown in the legend of each figure.
We observe that the algorithms that involve communication (i.e., SchedNet, FC, VBC) outperform the algorithms without communication (i.e., VDN, QMIX) in all six tasks. This is a clear indication that communication benefits performance. Moreover, both VBC+VDN and VBC+QMIX achieve better winning rates than SchedNet, because SchedNet only allows a fixed number of agents to talk at every timestep, which prevents some key information from being exchanged in a timely fashion. Finally, VBC achieves performance similar to FC and even outperforms FC on some tasks (e.g., 2s3z, 6h_vs_8z, 6z_vs_24zerg). This is because a fair amount of the communication between the agents is noisy and redundant. By eliminating these undesired messages, VBC achieves both communication efficiency and a performance gain.
6.2 Communication Overhead
We now evaluate the communication overhead of VBC. To quantify the amount of communication involved, we run Algorithm 1 and count the total number of pairs of agents, g_t, that communicate at each timestep t, and then divide by the total number of pairs of agents in the user group, R. In other words, the communication overhead is β = (∑_{t=1}^T g_t) / (RT). As an example, for the task 3s_vs_4z the user controls 3 Stalkers, so the total number of (ordered) agent pairs is R = 3 × 2 = 6. If 2 of these 6 pairs communicate at timestep t, then g_t = 2. Table 1 shows the β of VBC+VDN, VBC+QMIX and SchedNet across all the test episodes at the end of the training phase of each battle. For SchedNet, β simply equals the ratio between the number of allied agents that are allowed to talk and the total number of allied agents. As shown in Table 1, in contrast to SchedNet, VBC+VDN and VBC+QMIX produce 10× lower communication overhead for MMM and 2s3z, and 2–6× less traffic for the remaining tasks.
6.3 Learned Strategy
In this section, we examine the behaviors of the agents in order to better understand the strategies adopted by the different algorithms. We have made a video demo available at [2] for better illustration.
For unsymmetrical battles, the number of allied units is smaller than the number of enemy units, and therefore the agents are prone to being attacked by the enemies. This is exactly what happens to the QMIX and VDN agents on 6h_vs_8z, as shown in Figure 4(c). Figure 4(b) shows the strategy of VBC: all the Hydralisks are placed in a row at the bottom margin of the map. Due to the limited size of the map, the Zealots cannot go beyond the margin to surround the Hydralisks. The Hydralisks then focus their fire to kill the Zealots one at a time. Figure 4(a) shows the change in β over a sample test episode. We observe that most of the communication occurs at the beginning of the episode. This is because the Hydralisks need to talk in order to arrange themselves into the row formation. After the formation is established, no communication is needed until it is broken by the deaths of some Hydralisks, as indicated by the short spikes near the end of the episode. Finally, SchedNet and FC utilize a similar strategy to VBC. Nonetheless, due to the restrictions on the communication pattern, the row formed by the allied agents is usually not well formed and can be easily broken by the enemies.
In the 3s_vs_4z scenario, the Stalkers have a larger attack range than the Zealots. All the algorithms adopt a kiting strategy, where the Stalkers form a group and attack the Zealots while kiting them. For VBC and FC, at each timestep only the agents that are far from the enemies attack, and the rest of the agents (usually the healthier ones) are used as a shield to protect the firing agents (Figure 4(d)). Communication only occurs when the group is broken and needs to realign. In contrast, VDN and QMIX do not exhibit this attacking pattern: all the Stalkers always fire simultaneously, so the Stalkers closest to the Zealots get killed first. SchedNet and FC also adopt a policy similar to VBC's, but the attacking pattern of the Stalkers is less regular, i.e., the Stalkers close to the Zealots also fire occasionally.
6z_vs_24zerg is the toughest scenario in our experiments. For QMIX and VDN, the 6 Zealots are surrounded and killed by the 24 Zerglings shortly after the episode starts. In contrast, VBC first separates the agents into two groups of two and four Zealots, respectively (Figure 4(e)). The two Zealots attract most of the Zerglings to a place far away from the other four Zealots, and are killed shortly after. Due to the limited sight range of the Zerglings, they cannot find the remaining four Zealots. Meanwhile, the four Zealots easily kill the small group of Zerglings on their side and then search for the remaining ones. The four Zealots take advantage of the Zerglings' short sight: each time, they adjust their positions so that they can only be seen by a small number of Zerglings, and the baited Zerglings are then killed easily (Figure 4(f)). For VBC, communication only occurs at the beginning of the episode, when the Zealots are separated into two groups, and near the end of the episode, when the four Zealots adjust their positions. Both FC and SchedNet learn the strategy of splitting the Zealots into two groups, but they fail to fine-tune their positions to kill the remaining Zerglings.
For symmetrical battles, the tasks are less challenging, and we see smaller disparities in the performance of the algorithms. For 2s3z and 3s5z, the VDN agents attack the enemies blindly, without any cooperation. The QMIX agents learn to focus fire and protect the Stalkers. The agents of VBC, FC and SchedNet adopt a more aggressive policy: the allied Zealots try to surround and kill the enemy Zealots first, and then attack the enemy Stalkers in collaboration with the allied Stalkers. This is extremely effective because Zealots counter Stalkers, so it is important to kill the enemy Zealots before they damage the allied Stalkers. For VBC, communication occurs mostly while the allied Zealots try to surround the enemy Zealots. For MMM, almost all the methods learn the optimal policy, namely killing the Medivac first and then attacking the rest of the enemy units cooperatively.
6.4 Evaluation on Cooperative Navigation and Predator-prey
To demonstrate the applicability of VBC in more general settings, we have tested VBC on two more scenarios: (1) Cooperative Navigation (CN), a cooperative scenario, and (2) Predator-prey (PP), a competitive scenario. The game settings are the same as those used in [10] and [8], respectively. We train each method until convergence and test the resulting models for 2000 episodes. For PP, we make the agents of VBC compete against the agents of the other methods and report the normalized score of the predators (Figure 5(a)). For CN, we report the average distance between agents and their destinations, and the average number of collisions (Figure 5(b)). We notice that the methods that allow communication (i.e., SchedNet, FC, VBC) outperform the others for both tasks, and VBC achieves the best performance. Moreover, in both scenarios, VBC incurs 10× and 3× lower communication overhead than FC and SchedNet, respectively. In CN, most of VBC's communication occurs when agents are close to each other, to prevent collisions. In PP, VBC's communication occurs mainly to rearrange agent positions for better coordination. These observations confirm that VBC can be applied to a variety of MARL scenarios with great effectiveness.
7 Conclusion
In this work, we propose VBC, a simple and effective approach to achieve efficient communication among agents in MARL. By constraining the variance of the exchanged messages during the training phase, VBC improves communication efficiency while enabling better cooperation among the agents. The test results on multiple MARL benchmarks indicate that VBC significantly outperforms the other state-of-the-art methods in terms of both performance and communication overhead.

1. What is the focus of the review on the paper regarding MARL?
2. What are the strengths of the proposed approach, particularly in reducing communication overhead?
3. What are the weaknesses of the paper, especially regarding the training process?
4. Do you have any concerns about the clarity and presentation of the content?
5. How does the reviewer assess the originality, quality, clarity, significance, and minor aspects of the paper?

Review
The paper is well written and easy to read. I very much enjoyed reading it.

1. Line 151: an individual agent can access the global observation and global history only through the conditioned messages. Is that right? If so, please make it explicit for better clarity.
2. Line 154: the fact that the combiner is just doing element-wise addition can also be motivated as each agent trying to pass a message that could be the value of each action from that agent's point of view. This could also motivate the variance-based control loss, because when there is not much variance in a message, that agent does not have any preference over which action to choose, and hence its message can be safely ignored.
3. It is not clear whether the communication protocol is used during training or only at test time. I assume that you are using the same communication protocol even during training. Please explain this.

I did not verify the correctness of the proof.

Originality: The paper proposes a novel variance-based loss to reduce communication overhead in a MARL setting.
Quality: The work is good enough to be accepted at NeurIPS.
Clarity: The paper is well written. I have given a few comments above to improve the clarity of the presentation.
Significance: Definitely a significant contribution to MARL.

Minor comments:
1. Line 232: derivation -> deviation.
NIPS | Title
Distributional Gradient Matching for Learning Uncertain Neural Dynamics Models
Abstract
Differential equations in general and neural ODEs in particular are an essential technique in continuous-time system identification. While many deterministic learning algorithms have been designed based on numerical integration via the adjoint method, many downstream tasks such as active learning, exploration in reinforcement learning, robust control, or filtering require accurate estimates of predictive uncertainties. In this work, we propose a novel approach towards estimating epistemically uncertain neural ODEs, avoiding the numerical integration bottleneck. Instead of modeling uncertainty in the ODE parameters, we directly model uncertainties in the state space. Our algorithm – distributional gradient matching (DGM) – jointly trains a smoother and a dynamics model and matches their gradients via minimizing a Wasserstein loss. Our experiments show that, compared to traditional approximate inference methods based on numerical integration, our approach is faster to train, faster at predicting previously unseen trajectories, and in the context of neural ODEs, significantly more accurate.
1 Introduction
For continuous-time system identification and control, ordinary differential equations form an essential class of models, deployed in applications ranging from robotics (Spong et al., 2006) to biology (Jones et al., 2009). Here, it is assumed that the evolution of a system is described by the evolution of continuous state variables, whose time-derivative is given by a set of parametrized equations. Often, these equations are derived from first principles, e.g., rigid body dynamics (Wittenburg, 2013), mass action kinetics (Ingalls, 2013), or Hamiltonian dynamics (Greydanus et al., 2019), or chosen for computational convenience (e.g., linear systems (Ljung, 1998)) or parametrized to facilitate system identification (Brunton et al., 2016).
Such construction methods lead to intriguing properties, including guarantees on physical realizability (Wensing et al., 2017), favorable convergence properties (Ortega et al., 2018), or a structure suitable for downstream tasks such as control design (Ortega et al., 2002). However, such models often capture the system dynamics only approximately, leading to a potentially significant discrepancy between the model and reality (Ljung, 1999). Moreover, when expert knowledge is not available, or precise parameter values are cumbersome to obtain, system identification from raw time series data becomes necessary. In this case, one may seek more expressive nonparametric models instead (Rackauckas et al., 2020; Pillonetto et al., 2014). If the model is completely replaced by a neural network, the resulting model is called a neural ODE (Chen et al., 2018). Despite their large number of parameters, as demonstrated by Chen et al. (2018); Kidger et al. (2020); Zhuang et al. (2020, 2021), deterministic neural ODEs can be efficiently trained, enabling accurate deterministic trajectory predictions. For many practical applications, however, accurate uncertainty estimates are essential, as they guide downstream tasks like reinforcement learning (Deisenroth and Rasmussen, 2011; Schulman et al., 2015), safety guarantees (Berkenkamp et al., 2017), robust control design (Hjalmarsson, 2005), planning under uncertainty (LaValle, 2006), probabilistic forecasting in meteorology (Fanfarillo et al., 2021), or active learning / experimental design (Srinivas et al., 2010). A common way of obtaining such uncertainties is via a Bayesian framework. However, as observed by Dandekar et al. (2021), Bayesian training of neural ODEs in a dynamics setting remains largely unexplored. They demonstrate that initial variational-based inference schemes for Bayesian neural ODEs suffer from several serious drawbacks and thus propose sampling-based alternatives. However, as surfaced by our experiments in Section 4, sampling-based approaches still exhibit serious challenges. These pertain both to robustness (even if highly informed priors are supplied) and to the reliance on frequent numerical integration of large neural networks, which poses severe computational challenges for many downstream tasks like sampling-based planning (Karaman and Frazzoli, 2011) or uncertainty propagation in prediction.

∗Equal Contribution. Correspondence to trevenl@ethz.ch, wenkph@ethz.ch.

35th Conference on Neural Information Processing Systems (NeurIPS 2021).
Contributions In this work, we propose a novel approach for uncertainty quantification in nonlinear dynamical systems (cf. Figure 1). Crucially, our approach avoids explicit, costly and non-robust numerical integration by employing a probabilistic smoother of the observational data, whose representation we learn jointly across multiple trajectories. To capture dynamics, we regularize our smoother with a dynamics model. The latter captures epistemic uncertainty in the gradients of the ODE, which we match with the smoother's gradients by minimizing a Wasserstein loss; hence, we call our approach Distributional Gradient Matching (DGM). In summary, our main contributions are:
• We develop DGM, an approach² for capturing epistemic uncertainty about nonlinear dynamical systems by jointly training a smoother and a neural dynamics model;
• We provide a computationally efficient and statistically accurate mechanism for prediction, by focusing directly on the posterior / predictive state distribution.
• We experimentally demonstrate the effectiveness of our approach on learning challenging, chaotic dynamical systems, and generalizing to new unseen initial conditions.
High-level overview A high-level depiction of our algorithm is shown in Figure 2. In principle, DGM jointly learns a smoother (S) and a dynamics model (D). The smoother model, chosen to be a Gaussian process, maps an initial condition x₀ and a time t to the state distribution p_S(x(t)) and the state-derivative distribution p_S(ẋ(t)) reached at that time. The dynamics model, chosen to be a neural network, represents an ODE that maps states x(t) to the derivative distribution p_D(ẋ(t)). Both models are evaluated at some training times, and all their output distributions are collected in the random variables X_S, Ẋ_S and Ẋ_D. The parameters of these models are then jointly trained using a Wasserstein-distance-based objective directly on the level of distributions. For more details on each of these components, we refer to Section 3. There, we introduce all components individually and then present how they interplay. Section 3 builds on known concepts from the literature, which we summarize in Section 2. Finally, in Section 4, we present an empirical study of the DGM algorithm, where we benchmark it against state-of-the-art uncertainty-aware dynamics models.

²Code is available at: https://github.com/lenarttreven/dgm
2 Background
2.1 Problem Statement
Consider a continuous-time dynamical system whose K-dimensional state x ∈ ℝ^K evolves according to an unknown ordinary differential equation of the form
ẋ = f∗(x). (1)
Here, f* is an arbitrary, unknown function assumed to be locally Lipschitz continuous, to guarantee existence and uniqueness of trajectories for every initial condition. In our experiments, we initialize the system at M different initial conditions x_m(0), m ∈ {1, . . . , M}, and let it evolve to generate M trajectories. Each trajectory is then observed at discrete (but not necessarily uniformly spaced) time points, where the number of observations (N_m)_{m∈{1,...,M}} can vary from trajectory to trajectory. Thus, a trajectory m is described by its initial condition x_m(0) and the observations y_m := [x_m(t_{n,m}) + ε_{n,m}]_{n∈{1,...,N_m}} at times t_m := [t_{n,m}]_{n∈{1,...,N_m}}, where the additive observation noise ε_{n,m} is assumed to be drawn i.i.d. from a zero-mean Gaussian whose covariance is given by Σ_ε := diag(σ_1², . . . , σ_K²). We denote by D the dataset, consisting of the M initial conditions x_m(0), observation times t_m, and observations y_m. To model the unknown dynamical system, we choose a parametric Ansatz ẋ = f(x, θ). Depending on the amount of expert knowledge, this parameterization can follow a white-box, gray-box, or black-box methodology (Bohlin, 2006). In any case, the parametric form of f is fixed a priori (e.g., a neural network), and the key challenge is to infer a reasonable distribution over the parameters θ, conditioned on the data D. For later tasks, we are particularly interested in the predictive posterior state distribution

p(x_new(t_new) | D, t_new, x_new(0)),   (2)

i.e., the posterior distribution of the states starting from a potentially unseen initial condition x_new(0) and evaluated at times t_new. This posterior would then be used by the downstream or prediction tasks described in the introduction.
2.2 Bayesian Parameter Inference
In the case of Bayesian parameter inference, an additional prior p(θ) is imposed on the parameters θ so that the posterior distribution of Equation (2) can be inferred. Unfortunately, this distribution is not analytically tractable for most choices of f(x,θ), which is especially true when we model f with a neural network. Formally, for fixed parameters θ, initial condition x(0) and observation time t, the likelihood of an observation y is given by
p(y(t) | x(0), t, θ, Σ_obs) = N( y(t) | x(0) + ∫₀ᵗ f(x(τ), θ) dτ, Σ_obs ).   (3)
Using the fact that all noise realizations are independent, the expression (3) can be used to calculate the likelihood of all observations in D. Most state-of-the-art parameter inference schemes use this fact to create samples θ̂_s of the posterior over parameters p(θ | D) using various Monte Carlo methods. Given a new initial condition x(0) and observation time t, these samples θ̂_s can then be turned into samples of the predictive posterior state again by numerically integrating
x̂_s(t) = x(0) + ∫₀ᵗ f(x(τ), θ̂_s) dτ.   (4)
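To make the cost concrete, a minimal sketch of Equation (4): every posterior sample θ̂_s requires one full numerical ODE solve (here via SciPy; all names are ours):

import numpy as np
from scipy.integrate import solve_ivp

def predictive_samples(f, theta_samples, x0, t_eval):
    # One ODE solve per posterior sample -- the bottleneck discussed below.
    trajs = []
    for theta in theta_samples:
        sol = solve_ivp(lambda t, x: f(x, theta),
                        (t_eval[0], t_eval[-1]), x0, t_eval=t_eval)
        trajs.append(sol.y)            # shape (K, len(t_eval))
    return np.stack(trajs)             # (num_samples, K, len(t_eval))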
Clearly, both training (i.e., obtaining the samples θ̂_s) and prediction (i.e., evaluating Equation (4)) require integrating the system dynamics f many times. Especially when we model f with a neural network, this can be a huge burden, both numerically and computationally (Kelly et al., 2020). As an alternative approach, we can approximate the posterior p(θ | D) with variational inference (Bishop, 2006). However, we run into similar bottlenecks. While optimizing the variational objective, e.g., the ELBO, many integration steps are necessary to evaluate the unnormalized posterior. Also, at inference time, to obtain a distribution over the state x̂_s(t), we still need to integrate f several times. Furthermore, Dandekar et al. (2021) report poor forecasting performance for the variational approach.
3 Distributional Gradient Matching
In both the Monte Carlo sampling-based and variational approaches, all information about the dynamical system is stored in the estimates of the system parameters θ̂. This makes these approaches rather cumbersome: both for obtaining estimates of θ̂ and for obtaining the predictive posterior over states, once θ̂ is found, we need multiple rounds of numerically integrating a potentially complicated (neural) differential equation. We have thus identified two bottlenecks limiting the performance and applicability of these algorithms: namely, numerical integration of f and inference of the system parameters θ. In our proposed algorithm, we avoid both of these bottlenecks by directly working with the posterior distribution in the state space. To this end, we introduce a probabilistic, differentiable smoother model that directly maps a tuple (t, x(0)), consisting of a time point t and an initial condition x(0), to the corresponding distribution over x(t). Thus, the smoother directly replaces the costly numerical integration steps needed, e.g., to evaluate Equation (2). Albeit computationally attractive, this approach has one serious drawback. Since the smoother no longer explicitly integrates differential equations, there is no guarantee that the obtained smoother model follows any vector field. Thus, the smoother model is strictly more general than the systems described by Equation (1). Unlike ODEs, it is able to capture mappings whose underlying functions violate, e.g., Lipschitz or Markovianity properties, which is clearly not desirable. To address this issue, we introduce a regularization term, L_dynamics, which ensures that a trajectory predicted by the smoother is encouraged to follow some underlying system of the form of Equation (1). The smoother is then trained with the multi-objective loss function
L := L_data + λ · L_dynamics,   (5)

where L_data is a smoother-dependent loss function that ensures a sufficiently accurate data fit, and λ is a trade-off parameter.
3.1 Regularization by Matching Distributions over Gradients
To ultimately define L_dynamics, we first choose a parametric dynamics model, similar to f(x, θ) in Equation (3), that maps states to their derivatives. Second, we define a set of supporting points T with the corresponding supporting gradients Ẋ as

T := {(t_supp,l, x_supp,l(0))}_{l∈{1,...,N_supp}},   Ẋ := {ẋ_supp,l}_{l∈{1,...,N_supp}}.

Here, the l-th element represents the event that the dynamical system's derivative at time t_supp,l is ẋ_supp,l, after being initialized at time 0 at the initial condition x_supp,l(0). Given both the smoother and the dynamics model, we now have two different ways to calculate distributions over Ẋ given some data D and supporting points T. First, we can directly leverage the differentiability and global nature of our smoother model to extract a distribution p_S(Ẋ | D, T) from the smoother model. Second, we can first use the smoother to obtain state estimates and then plug these state estimates into the dynamics model to obtain a second distribution p_D(Ẋ | D, T). Clearly, if the solution proposed by the smoother follows the dynamics, these two distributions should match. Thus, we can regularize the smoother to follow a solution of the form of Equation (1) by defining L_dynamics as a distance between p_D(Ẋ | D, T) and p_S(Ẋ | D, T) in some metric, which training then drives to be small. By minimizing the overall loss, we thus match the distributions over the gradients of the smoother and the dynamics model.
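Since both distributions turn out to be Gaussian in DGM (cf. Sections 3.2 and 3.3), a natural choice for this metric is the closed-form 2-Wasserstein distance between Gaussians, sketched below; whether DGM applies exactly this multivariate form or a per-dimension variant is an assumption on our part:

import numpy as np
from scipy.linalg import sqrtm

def w2_squared(mu1, cov1, mu2, cov2):
    # Squared 2-Wasserstein distance between N(mu1, cov1) and N(mu2, cov2).
    s2 = sqrtm(cov2)
    cross = sqrtm(s2 @ cov1 @ s2)
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(cov1 + cov2 - 2.0 * cross.real))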
3.2 Smoothing jointly over Trajectories with Deep Gaussian Processes
The core of DGM is formed by a smoother model. In principle, the posterior state distribution of Equation (2) could be modeled by any Bayesian regression technique. However, calculating pS(Ẋ |D, T ) is generally more involved. Here, the key challenge is evaluating this posterior, which is already computationally challenging, e.g., for simple Bayesian neural networks. For Gaussian processes, however, this becomes straightforward, since derivatives of GPs remain GPs (Solak et al., 2003). Thus, DGM uses a GP smoother. For scalability and simplicity, we keep K different, independent smoothers, one for each state dimension. However, if computational complexity is not a concern, our approach generalizes directly to multi-output Gaussian processes. Below, we focus on the one-dimensional case, for clarity of exposition. For notational compactness, all vectors with a
superscript should be interpreted as vectors over time in this subsection. For example, the vector x^(k) consists of all the k-th elements of the state vectors x(t_{n,m}), n ∈ {1, . . . , N_m}, m ∈ {1, . . . , M}. We define a Gaussian process with a differentiable mean function μ(x_m(0), t_{n,m}) as well as a differentiable and positive-definite kernel function K_RBF(φ(x_m(0), t_{n,m}), φ(x_{m'}(0), t_{n',m'})). Here, the kernel is given by the composition of a standard ARD-RBF kernel (Rasmussen, 2004) and a differentiable feature extractor φ parametrized by a deep neural network, as introduced by Wilson et al. (2016). Following Solak et al. (2003), given fixed x_supp, we can now calculate the joint density of (ẋ_supp^(k), y^(k)) for each state dimension k. Concatenating vectors accordingly across time and trajectories, let
\[
\begin{aligned}
\mu^{(k)} &:= \mu^{(k)}(x(0), t), & \dot{\mu}^{(k)} &:= \frac{\partial}{\partial t}\,\mu^{(k)}(x_{\mathrm{supp}}(0), t_{\mathrm{supp}}),\\
z^{(k)} &:= \phi^{(k)}(x(0), t), & z^{(k)}_{\mathrm{supp}} &:= \phi^{(k)}(x_{\mathrm{supp}}(0), t_{\mathrm{supp}}),\\
K^{(k)} &:= \mathcal{K}^{(k)}_{\mathrm{RBF}}(z^{(k)}, z^{(k)}), & \dot{K}^{(k)} &:= \frac{\partial}{\partial t_1}\,\mathcal{K}^{(k)}_{\mathrm{RBF}}(z^{(k)}_{\mathrm{supp}}, z^{(k)}),\\
& & \ddot{K}^{(k)} &:= \frac{\partial^2}{\partial t_1 \partial t_2}\,\mathcal{K}^{(k)}_{\mathrm{RBF}}(z^{(k)}_{\mathrm{supp}}, z^{(k)}_{\mathrm{supp}}).
\end{aligned}
\]
Then the joint density of $(\dot{x}^{(k)}_{\mathrm{supp}}, y^{(k)})$ can be written as
\[
\begin{pmatrix} \dot{x}^{(k)}_{\mathrm{supp}} \\ y^{(k)} \end{pmatrix}
\sim \mathcal{N}\!\left( \begin{pmatrix} \dot{\mu}^{(k)} \\ \mu^{(k)} \end{pmatrix},
\begin{pmatrix} \ddot{K}^{(k)} & \dot{K}^{(k)} \\ \bigl(\dot{K}^{(k)}\bigr)^{\top} & K^{(k)} + \sigma_k^2 I \end{pmatrix} \right). \tag{6}
\]
Here, we denote by $\partial/\partial t_1$ the partial derivative with respect to time in the first coordinate, by $\partial/\partial t_2$ the partial derivative with respect to time in the second coordinate, and by $\sigma_k^2$ the corresponding noise variance of $\Sigma_{\mathrm{obs}}$. Since the conditionals of a joint Gaussian random variable are again Gaussian distributed, pS is again Gaussian, i.e., $p_S(\dot{\mathcal{X}}^{(k)} \mid \mathcal{D}, \mathcal{T}) = \mathcal{N}\bigl(\dot{x}^{(k)}_{\mathrm{supp}} \mid \mu_S, \Sigma_S\bigr)$ with
\[
\begin{aligned}
\mu_S &:= \dot{\mu}^{(k)} + \dot{K}^{(k)}\bigl(K^{(k)} + \sigma_k^2 I\bigr)^{-1}\bigl(y^{(k)} - \mu^{(k)}\bigr),\\
\Sigma_S &:= \ddot{K}^{(k)} - \dot{K}^{(k)}\bigl(K^{(k)} + \sigma_k^2 I\bigr)^{-1}\bigl(\dot{K}^{(k)}\bigr)^{\top}.
\end{aligned} \tag{7}
\]
Here, the index k is used to highlight that this is just the distribution for one state dimension. To obtain the final pS(Ẋ |D, T ), we take the product over all state dimensions k. To fit our model to the data, we minimize the negative marginal log likelihood of our observations, neglecting purely additive terms (Rasmussen, 2004), i.e.,
\[
\mathcal{L}_{\mathrm{data}} := \sum_{k=1}^{K} \frac{1}{2}\bigl(y^{(k)} - \mu^{(k)}\bigr)^{\top}\bigl(K^{(k)} + \sigma_k^2 I\bigr)^{-1}\bigl(y^{(k)} - \mu^{(k)}\bigr) + \frac{1}{2}\log\det\bigl(K^{(k)} + \sigma_k^2 I\bigr). \tag{8}
\]
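To make Equation (8) concrete, the following minimal JAX sketch computes this term for one state dimension; the precomputed Gram matrix, mean vector, and all variable names are illustrative assumptions rather than the paper's actual implementation.

```python
import jax.numpy as jnp

# Minimal sketch of the data-fit term in Eq. (8) for one state dimension.
# K, mu, and noise_var are assumed to be precomputed from the kernel, the
# mean function, and Sigma_obs, respectively.
def neg_marginal_log_lik(y, mu, K, noise_var):
    A = K + noise_var * jnp.eye(K.shape[0])   # K^(k) + sigma_k^2 I
    r = y - mu                                # y^(k) - mu^(k)
    _, logdet = jnp.linalg.slogdet(A)
    return 0.5 * r @ jnp.linalg.solve(A, r) + 0.5 * logdet
```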
Furthermore, the predictive posterior for a new point $x^{(k)}_{\mathrm{test}}$, given time $t_{\mathrm{test}}$ and initial condition $x_{\mathrm{test}}(0)$, has the closed form
\[
p_S\bigl(x^{(k)}_{\mathrm{test}} \mid \mathcal{D}_k, t_{\mathrm{test}}, x_{\mathrm{test}}\bigr) = \mathcal{N}\!\left( x^{(k)}_{\mathrm{test}} \,\middle|\, \mu^{(k)}_{\mathrm{post}},\, \sigma^2_{\mathrm{post},k} \right), \tag{9}
\]
where
\[
\mu^{(k)}_{\mathrm{post}} = \mu^{(k)}(x_{\mathrm{test}}(0), t_{\mathrm{test}}) + \mathcal{K}^{(k)}_{\mathrm{RBF}}(z^{(k)}_{\mathrm{test}}, z^{(k)})^{\top}\bigl(K^{(k)} + \sigma_k^2 I\bigr)^{-1}\bigl(y^{(k)} - \mu^{(k)}\bigr), \tag{10}
\]
\[
\sigma^2_{\mathrm{post},k} = \mathcal{K}^{(k)}_{\mathrm{RBF}}(z^{(k)}_{\mathrm{test}}, z^{(k)}_{\mathrm{test}}) - \mathcal{K}^{(k)}_{\mathrm{RBF}}(z^{(k)}_{\mathrm{test}}, z^{(k)})^{\top}\bigl(K^{(k)} + \sigma_k^2 I\bigr)^{-1}\mathcal{K}^{(k)}_{\mathrm{RBF}}(z^{(k)}_{\mathrm{test}}, z^{(k)}). \tag{11}
\]
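The derivative-GP computations of Equations (6)-(11) can be sketched in a few lines of JAX. The sketch below assumes a plain RBF kernel on scalar time inputs and a zero prior mean, i.e., it omits the neural feature extractor and the trainable mean function; all names are hypothetical.

```python
import jax.numpy as jnp
from jax import grad, vmap

def k_rbf(t1, t2, ell=1.0, var=1.0):
    return var * jnp.exp(-0.5 * (t1 - t2) ** 2 / ell ** 2)

def gram(f, ts1, ts2):
    return vmap(lambda a: vmap(lambda b: f(a, b))(ts2))(ts1)

def derivative_posterior(t_supp, t_obs, y, noise_var=0.01):
    """Posterior mean and covariance of the GP derivative, cf. Eq. (7)."""
    dk = grad(k_rbf, argnums=0)                    # dK/dt1
    ddk = grad(grad(k_rbf, argnums=0), argnums=1)  # d^2K/(dt1 dt2)
    K = gram(k_rbf, t_obs, t_obs) + noise_var * jnp.eye(t_obs.shape[0])
    Kd = gram(dk, t_supp, t_obs)                   # \dot{K}
    Kdd = gram(ddk, t_supp, t_supp)                # \ddot{K}
    A = jnp.linalg.solve(K, Kd.T).T                # \dot{K} (K + s^2 I)^{-1}
    mu_S = A @ y                                   # zero prior mean assumed
    Sigma_S = Kdd - A @ Kd.T
    return mu_S, Sigma_S
```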
3.3 Representing Uncertainty in the Dynamics Model via the Reparametrization Trick
As described at the beginning of this section, a key bottleneck of standard Bayesian approaches is the potentially high dimensionality of the dynamics parameter vector θ. The same is true for our approach. If we were to keep track of the distributions over all parameters of our dynamics model, calculating pD(Ẋ |D, T ) quickly becomes infeasible.
However, especially in the case of modeling f with a neural network, the benefits of keeping distributions directly over θ are unclear due to overparametrization. For both the downstream tasks and our training method, we are mainly interested in the distributions in the state space. Usually, the state space is significantly lower dimensional compared to the parameter space of θ. Furthermore, since the exact posterior state distributions are generally intractable, they normally have to be approximated anyway with simpler distributions for downstream tasks (Schulman et al., 2015; Houthooft et al., 2016; Berkenkamp et al., 2017). Thus, we change the parametrization of our dynamics model as follows. Instead of working directly with ẋ(t) = f(x(t), θ) and keeping a distribution over θ, we model uncertainty directly on the level of the vector field as
\[
\dot{x}(t) = f(x(t), \psi) + \Sigma_D^{1/2}(x(t), \psi)\,\epsilon, \tag{12}
\]
where ϵ ∼ N(0, I_K) is drawn once per rollout (i.e., fixed within a trajectory) and Σ_D is a state-dependent and positive semi-definite matrix parametrized by a neural network. Here, ψ are the parameters of the new dynamics model, consisting of both the original parameters θ and the weights of the neural network parametrizing Σ_D. To keep the number of parameters reasonable, we employ a weight sharing scheme, detailed in Appendix B. In spirit, this modeling paradigm is very closely related to standard Bayesian training of NODEs. In both cases, the random distributions capture a distribution over a set of deterministic, ordinary differential equations. This should be seen in stark contrast to stochastic differential equations, where the randomness in the state space, i.e., diffusion, is modeled with a stochastic process. In comparison to (12), the latter is a time-varying disturbance added to the vector field. In that sense, our model still captures the epistemic uncertainty about our system dynamics, while an SDE model captures the intrinsic process noise, i.e., aleatoric uncertainty. While this reparametrization does not allow us to directly calculate pD(Ẋ|D,T), we obtain a Gaussian distribution for the marginals pD(ẋ_supp|x_supp). To retrieve pD(Ẋ|D,T), we use the smoother model's predictive state posterior to obtain
\[
p_D(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T}) = \int p_D(\dot{x}_{\mathrm{supp}}, x_{\mathrm{supp}} \mid \mathcal{D}, \mathcal{T})\,\mathrm{d}x_{\mathrm{supp}} \tag{13}
\]
\[
\approx \int p_D(\dot{x}_{\mathrm{supp}} \mid x_{\mathrm{supp}})\, p_S(x_{\mathrm{supp}} \mid \mathcal{T}, \mathcal{D})\,\mathrm{d}x_{\mathrm{supp}}. \tag{14}
\]
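A minimal sketch of the reparametrized dynamics model of Equation (12) is given below. The tiny one-hidden-layer networks, the shared trunk, and the diagonal parametrization of Σ_D are illustrative assumptions, not the paper's exact architecture or weight sharing scheme.

```python
import jax
import jax.numpy as jnp

def init_params(key, dim, hidden=32):
    k1, k2, k3 = jax.random.split(key, 3)
    s = lambda k, m, n: 0.1 * jax.random.normal(k, (m, n))
    return {"W1": s(k1, dim, hidden), "Wf": s(k2, hidden, dim),
            "Ws": s(k3, hidden, dim), "b": jnp.zeros(hidden)}

def dynamics_sample(p, x, eps):
    """One sampled vector field: f(x, psi) + Sigma_D^{1/2}(x, psi) eps."""
    h = jnp.tanh(x @ p["W1"] + p["b"])          # shared trunk
    mean = h @ p["Wf"]                          # f(x, psi)
    sigma_sqrt = jax.nn.softplus(h @ p["Ws"])   # diagonal Sigma_D^{1/2} >= 0
    return mean + sigma_sqrt * eps

def rollout_field(p, key, xs):
    # eps is drawn once per rollout and held fixed along the trajectory,
    # so it encodes epistemic uncertainty rather than process noise.
    eps = jax.random.normal(key, (xs.shape[-1],))
    return jax.vmap(lambda x: dynamics_sample(p, x, eps))(xs)
```

Holding ϵ fixed within a trajectory is exactly what distinguishes this model from an SDE's diffusion term, as discussed above.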
3.4 Comparing Gradient Distributions via the Wasserstein Distance
To compare and eventually match pD(Ẋ |D, T ) and pS(Ẋ |D, T ), we propose to use the Wasserstein distance (Kantorovich, 1939), since it allows for an analytic, closed-form representation, and since it outperforms similar measures (like forward, backward and symmetric KL divergence) in our exploratory experiments. The squared type-2 Wasserstein distance gives rise to the term
\[
\mathcal{W}_2^2\!\left[ p_S(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T}),\, p_D(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T}) \right]
= \mathcal{W}_2^2\!\left[ p_S(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T}),\, \mathbb{E}_{x_{\mathrm{supp}} \sim p_{\mathrm{GP}}(x_{\mathrm{supp}} \mid \mathcal{D}, \mathcal{T})}\!\left[ p_D(\dot{x}_{\mathrm{supp}} \mid x_{\mathrm{supp}}) \right] \right] \tag{15}
\]
that we will later use to regularize the smoothing process. To render the calculation of this regularization term computationally feasible, we introduce two approximations. First, observe that an exact calculation of the expectation in Equation (15) requires mapping a multivariate Gaussian through the deterministic neural networks parametrizing f and Σ_D in Equation (12). To avoid complex sampling schemes, we carry out a certainty-equivalence approximation of the expectation, that is, we evaluate the dynamics model on the posterior smoother mean µ_{S,supp}. As a result of this approximation, both pD(Ẋ|D,T) and pS(Ẋ|D,T) become Gaussian. However, the covariance structures of these two distributions are very different. Since we use independent GPs for different state dimensions, the smoother only models the covariance between the state values within the same dimension, across different time points. Furthermore, since ϵ, the random variable that captures the randomness of the dynamics across all time points, is only K-dimensional, the covariance of pD will be degenerate. Thus, we do not match the distributions directly, but instead match the marginals of each state coordinate independently at the different supporting time points. Hence,
using marginalization first and then certainty equivalence, Equation (15) reduces to
\[
\begin{aligned}
\mathcal{W}_2^2\!\left[ p_S(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T}), p_D(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T}) \right]
&\approx \sum_{k=1}^{K} \sum_{i=1}^{|\dot{\mathcal{X}}|} \mathcal{W}_2^2\!\left[ p_S\bigl(\dot{x}^{(k)}_{\mathrm{supp}}(t_{\mathrm{supp},i}) \mid \mathcal{D}, \mathcal{T}\bigr),\, p_D\bigl(\dot{x}^{(k)}_{\mathrm{supp}}(t_{\mathrm{supp},i}) \mid \mathcal{D}, \mathcal{T}\bigr) \right]\\
&\approx \sum_{k=1}^{K} \sum_{i=1}^{|\dot{\mathcal{X}}|} \mathcal{W}_2^2\!\left[ p_S\bigl(\dot{x}^{(k)}_{\mathrm{supp}}(t_{\mathrm{supp},i}) \mid \mathcal{D}, \mathcal{T}\bigr),\, p_D\bigl(\dot{x}^{(k)}_{\mathrm{supp}}(t_{\mathrm{supp},i}) \mid \mu_{S,\mathrm{supp}}\bigr) \right]. \tag{16}
\end{aligned}
\]
Conveniently, the Wasserstein distance can now be calculated analytically, since for two one-dimensional Gaussians $a \sim \mathcal{N}(\mu_a, \sigma_a^2)$ and $b \sim \mathcal{N}(\mu_b, \sigma_b^2)$, we have $\mathcal{W}_2^2[a, b] = (\mu_a - \mu_b)^2 + (\sigma_a - \sigma_b)^2$.
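This closed form makes the regularizer trivial to implement; a sketch (with illustrative names, operating on arrays of matched marginal means and standard deviations from the smoother and the dynamics model) could read:

```python
import jax.numpy as jnp

# Closed-form squared 2-Wasserstein distance between one-dimensional
# Gaussians, applied elementwise to every marginal in Eq. (16).
def w2_squared_1d(mu_a, sigma_a, mu_b, sigma_b):
    return (mu_a - mu_b) ** 2 + (sigma_a - sigma_b) ** 2

def dynamics_regularizer(mu_s, std_s, mu_d, std_d):
    # Sum over all state dimensions and supporting time points.
    return jnp.sum(w2_squared_1d(mu_s, std_s, mu_d, std_d))
```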
3.5 Final Loss Function
As explained in the previous paragraphs, distributional gradient matching trains a smoother regularized by a dynamics model. Both the parameters of the smoother φ, consisting of the trainable parameters of the GP prior mean µ, the feature map ϕ, and the kernel K, and the parameters of the dynamics model ψ are trained concurrently, using the same loss function. This loss consists of two terms, of which the regularization term was already described in Equation (16). While this term ensures that the smoother follows the dynamics, we need a second term ensuring that the smoother also follows the data. To this end, we follow standard GP regression literature, where it is common to learn the GP hyperparameters by maximizing the marginal log likelihood of the observations, i.e. Ldata (Rasmussen, 2004). Combining these terms, we obtain the final objective
\[
\mathcal{L}(\varphi, \psi) := \mathcal{L}_{\mathrm{data}} + \lambda \cdot \sum_{k=1}^{K} \sum_{i=1}^{|\dot{\mathcal{X}}|} \mathcal{W}_2^2\!\left[ p_S\bigl(\dot{x}^{(k)}_{\mathrm{supp}}(t_{\mathrm{supp},i}) \mid \mathcal{D}, \mathcal{T}\bigr),\, p_D\bigl(\dot{x}^{(k)}_{\mathrm{supp}}(t_{\mathrm{supp},i}) \mid \mu_{S,\mathrm{supp}}\bigr) \right].
\]
This loss function is a multi-criteria objective, where fitting the data (via the smoother) and identifying the dynamics model (by matching the marginals) regularize each other. In our preliminary experiments, we found the objective to be quite robust w.r.t. different choices of λ. In the interest of simplicity, we thus set it in all our experiments in Section 4 to a default value of λ = |D|/|Ẋ|, accounting only for the possibility of having different numbers of supporting points and observations. One special case worth mentioning is λ → 0, which corresponds to conventional sequential smoothing, where the second part would be used for identification in a second step, as proposed by Pillonetto and De Nicolao (2010). However, as can be seen in Figure 1, the smoother fails to properly identify the system without any knowledge about the dynamics and thus fails to provide meaningful state or derivative estimates. Thus, especially in the case of sparse observations, joint training is strictly superior. In its final form, unlike its pure Bayesian counterparts, DGM does not require any prior knowledge about the system dynamics. Nevertheless, if some prior knowledge is available, one could add an additional, additive term log(p(ψ)) to L(φ,ψ). It should be noted, however, that this was not done in any of our experiments, and excellent performance can be achieved without it.
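Putting the pieces together, the joint objective can be sketched as follows; the inputs are assumed to be produced by the smoother and the dynamics model at the supporting points, and the default trade-off λ = |D|/|Ẋ| is hard-coded.

```python
import jax.numpy as jnp

# Sketch of the joint DGM objective: data fit plus the summed marginal
# Wasserstein terms of Eq. (16), weighted by lam = n_obs / n_supp.
def dgm_loss(l_data, mu_s, std_s, mu_d, std_d, n_obs, n_supp):
    lam = n_obs / n_supp
    w2 = jnp.sum((mu_s - mu_d) ** 2 + (std_s - std_d) ** 2)
    return l_data + lam * w2
```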
4 Experiments
We now compare DGM against state-of-the-art methods. In a first experiment, we demonstrate the effects of an overparametrized, simple dynamics model on the performance of DGM as well as the traditional, MC-based algorithms SGLD (Stochastic Gradient Langevin Dynamics, (Welling and Teh, 2011)) and SGHMC (Stochastic Gradient Hamiltonian Monte Carlo, (Chen et al., 2014)). We select our baselines based on the results of Dandekar et al. (2021), who demonstrate that both a variational approach and NUTS (No U-Turn Sampler, Hoffman and Gelman (2014)) are inferior to these two. Subsequently, we will investigate and benchmark the ability of DGM to correctly identify neural dynamics models and to generalize across different initial conditions. Since SGLD and SGHMC reach their computational limits in the generalization experiments, we compare against Neural ODE Processes (NDP). Lastly, we will conclude by demonstrating the necessity of all of DGM's components. For all comparisons, we use the Julia implementations of SGLD and SGHMC provided by Dandekar et al. (2021), the PyTorch implementation of NDP provided by Norcliffe et al. (2021), and our own JAX (Bradbury et al., 2018) implementation of DGM.
4.1 Setup
We use known parametric systems from the literature to generate simulated, noisy trajectories. For these benchmarks, we use the two-dimensional Lotka Volterra (LV) system, the three-dimensional, chaotic Lorenz (LO) system, a four-dimensional double pendulum (DP) and a twelve-dimensional quadrocopter (QU) model. For all systems, the exact equations and ground truth parameters are provided in Appendix A. For each system, we create two different data sets. In the first, we include just one densely observed trajectory, taking the computational limitations of the benchmarks into consideration. In the second, we include many, sparsely observed trajectories (5 for LV and DP, 10 for LO, 15 for QU). This setting aims to study generalization over different initial conditions.
4.2 Metric
We use the log likelihood as a metric to compare the accuracy of our probabilistic models. In the 1-trajectory setting, we take a grid of 100 equidistant time points along the training trajectory. We then calculate the ground truth and evaluate its likelihood under the predictive distributions of our models. When testing for generalization, we repeat the same procedure for unseen initial conditions.
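Concretely, with Gaussian per-dimension predictive marginals, this metric can be sketched as below; whether per-point log likelihoods are summed or averaged is an assumption here.

```python
import jax.numpy as jnp

# Gaussian log likelihood of the ground-truth states under the model's
# per-dimension predictive marginals, averaged over the evaluation grid.
def gaussian_loglik(x_true, mu_pred, std_pred):
    ll = (-0.5 * ((x_true - mu_pred) / std_pred) ** 2
          - jnp.log(std_pred) - 0.5 * jnp.log(2.0 * jnp.pi))
    return jnp.mean(ll)
```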
4.3 Effects of Overparametrization
[Figure: Mean and standard deviation of the log likelihood of the ground truth over 10 different noise realizations, shown for SGLD, SGHMC, and DGM as the dynamics model grows from layer configuration (3,3) to (3,12,9,6,3); y-axis: log likelihood.]
The exact procedure for one noise realization is described in Appendix C. While SGLD runs into numerical issues beyond a medium model complexity and the performance of SGHMC continuously disintegrates, DGM is unaffected. This foreshadows the results of the next two experiments, where we observe that the MC-based approaches are not suitable for the more complicated settings.
4.4 Single Trajectory Benchmarks
In Table 1, we evaluate the log-likelihood of the ground truth for the four benchmark systems, obtained when learning these systems using a neural ODE as a dynamics model (for more details, see Appendix B). Clearly, DGM performs best on all systems, even though we supplied both SGLD and SGHMC with very strong priors and fine-tuned them with an extensive hyperparameter sweep (see Appendix C for more details). Despite this effort, we failed to get SGLD to work on Quadrocopter 1, where it always returned NaNs. This is in stark contrast to DGM, which performs reliably without any pre-training or priors.
4.5 Prediction Speed
To evaluate prediction speed, we consider the task of predicting 100 points on a previously unseen trajectory. To obtain a fair comparison, all algorithms' prediction routines were implemented in JAX (Bradbury et al., 2018). Furthermore, while we used 1000 MC samples when evaluating the predictive posterior for the log likelihood to guarantee maximal accuracy, we only used 200 samples for the prediction times reported in Table 1. Here, 200 was chosen as a minimal sample size guaranteeing reasonable accuracy, following a preliminary experiment visualized in Appendix C. Nevertheless, the predictions of DGM are 1-2 orders of magnitude faster, as can be seen in Table 1. This further illustrates the advantage of relying on a smoother instead of costly numerical integration to obtain predictive posteriors in the state space.
4.6 Multi-Trajectory Benchmarks
Next, we take a set of trajectories starting on an equidistant grid of initial conditions. Each trajectory is then observed at 5 equidistant observation times for LV and DP, and 10 equidistant observation times for the chaotic Lorenz and more complicated Quadrocopter. We test generalization by randomly sampling a new initial condition and evaluating the negative log likelihood of the ground truth at 100 equidistant time points. In Table 2, we compare the generalization performance of DGM against NDP, since despite serious tuning efforts, the MC methods failed to produce meaningful results in this setting. DGM clearly outperforms NDP, a fact which is further exemplified in Figure 4. There, we show the test log likelihood for Lotka Volterra trained on an increasing set of trajectories. Even though the time grid is fixed and we only decrease the distance between initial condition samples, the dynamics model helps the smoother to generalize across time as well. In stark contrast, NDP fails to improve with increasing data after an initial jump.
4.7 Ablation Study
We next study the importance of different elements of our approach via an ablation study on the Lorenz 125 dataset, shown in Figure 1. Comparing the two rows, we see that joint smoothing across trajectories is essential to transfer knowledge between different training trajectories. Similarly, comparing the two columns, we see that the dynamics model enables the smoother to reduce its uncertainty in between observation points.
4.8 Computational Requirements
For the one-trajectory setting, all DGM-related experiments were run on an Nvidia RTX 2080 Ti, where the longest ones took 15 minutes. The comparison methods were given 24h, on Intel Xeon Gold 6140 CPUs. For the multi-trajectory setting, we used an Nvidia Titan RTX, where all experiments finished in less than 3 hours. A more detailed compilation of run times can be found in Appendix B. Using careful implementation, the run time of DGM scales linearly in the number of dimensions K. However, since we use an exact RBF kernel for all our experiments reported in this section, we have cubic run time complexity in $\sum_{m=1}^{M} N_m$. In principle, this can be alleviated by deploying standard feature approximation methods (Rahimi et al., 2007; Liu et al., 2020). While this is a well-known fact, we nevertheless refer the interested reader to a more detailed discussion of the subject in Appendix D.
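For illustration, one such feature approximation, random Fourier features for the RBF kernel (Rahimi et al., 2007), could be sketched as follows; the feature count and lengthscale are illustrative, and this is not part of the reported experiments.

```python
import jax
import jax.numpy as jnp

# Random Fourier features approximating an RBF kernel: K(z1, z2) is
# approximated by rff(key, z1) @ rff(key, z2).T with the same key.
def rff(key, z, num_features=256, lengthscale=1.0):
    d = z.shape[-1]
    omega = jax.random.normal(key, (d, num_features)) / lengthscale
    proj = z @ omega
    feats = jnp.concatenate([jnp.cos(proj), jnp.sin(proj)], axis=-1)
    return feats / jnp.sqrt(num_features)
```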
5 Related Work
5.1 Bayesian Parameter Inference with Gaussian Processes
The idea of matching gradients of a (spline-based) smoother and a dynamics model goes back to the work of Varah (1982). For GPs, this idea is introduced by Calderhead et al. (2009), who first fit a GP to the data and then match the parameters of the dynamics. Dondelinger et al. (2013) introduce concurrent training, while Gorbach et al. (2017) introduce an efficient variational inference procedure for systems with a locally-linear parametric form. All these works claim to match the distributions of the gradients of the smoother and dynamics models by relying on a product-of-experts heuristic. However, Wenk et al. (2019) demonstrate that this product of experts in fact leads to statistical independence between the observations and the dynamics parameters, and that these algorithms essentially match point estimates of the gradients instead. Thus, DGM is the first algorithm to actually match gradients on the level of distributions for ODEs. In the context of stochastic differential equations (SDEs) with constant diffusion terms, Abbati et al. (2019) deploy MMD and GANs to match their gradient distributions. However, it should be noted that their algorithm treats the parameters of the dynamics model deterministically and thus, they cannot provide the epistemic uncertainty estimates that we seek here. Note that our work is not related to the growing literature investigating SDE approximations of Bayesian Neural ODEs in the context of classification (Xu et al., 2021). Similarly to Chen et al. (2018), these works emphasize learning a terminal state of the ODE used for other downstream tasks.
5.2 Gaussian Processes with Operator Constraints
Gradient matching approaches mainly use the smoother as a proxy to infer dynamics parameters. This is in stark contrast to our work, where we treat the smoother as the main model used for prediction. While the regularizing properties of the dynamics on the smoother are explored by Wenk et al. (2020), Jidling et al. (2017) introduce an algorithm to incorporate linear operator constraints directly on the kernel level. Unlike in our work, they can provide strong guarantees that the posterior always follows these constraints. However, it remains unclear how to generalize their approach to the case of complex, nonlinear operators, potentially parametrized by neural dynamics models.
5.3 Other Related Approaches
In some sense, the smoother is mimicking a probabilistic numerical integration step, but without explicitly integrating. In spirit, this approach is similar to the solution networks used in the context of PDEs, as presented by Raissi et al. (2019), which however typically disregard uncertainty. In the context of classical ODE parameter inference, Kersting et al. (2020) deploy a GP to directly mimic a numerical integrator in a probabilistic, differentiable manner. Albeit promising in a classical, parametric ODE setting, it remains unclear how these methods can be scaled up, as there is still the numerical integration bottleneck. Unrelated to their work, Ghosh et al. (2021) present a variational inference scheme in the same, classical ODE setting. However, they still keep distributions over all weights of the neural network (Norcliffe et al., 2021). A similar approach is investigated by Dandekar et al. (2021), who found it to be inferior to the MC methods we use as a benchmark. Variational inference was previously employed by Yildiz et al. (2019) in the context of latent neural ODEs parametrized by a Bayesian neural network, but their work mainly focuses on dimensionality reduction. Nevertheless, their work inspired a model called Neural ODE Processes by Norcliffe et al. (2021). This work is similar to ours in the sense that it avoids keeping distributions over network weights and models an ensemble of deterministic ODEs via a global context variable. Consequently, we use it as a benchmark in Section 4, showing that it does not properly capture epistemic uncertainty in a low data setting, which might be problematic for downstream tasks like reinforcement learning.
6 Conclusion
In this work, we introduced a novel, GP-based collocation method that matches gradients of a smoother and a dynamics model on the distribution level using a Wasserstein loss. Through careful parametrization of the dynamics model, we manage to train complicated, neural ODE models where state-of-the-art methods struggle. We then demonstrate that these models are able to accurately predict unseen trajectories, while capturing epistemic uncertainty relevant for downstream tasks. In future work, we are excited to see how our training regime can be leveraged in the context of active learning of Bayesian neural ordinary differential equations for continuous-time reinforcement learning.
Acknowledgments
This research was supported by the Max Planck ETH Center for Learning Systems. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme grant agreement No 815943 as well as from the Swiss National Science Foundation under NCCR Automation, grant agreement 51NF40 180545. | 1. How does the reviewer perceive the novelty and contribution of the paper regarding uncertainty modeling and trajectory data?
2. What are the major issues that the reviewer has identified in the methodology description, specifically concerning the role of supporting gradients, notation confusion, and state noise correlation?
3. How would you address the concern about the certainty-equivalence approximation discarding GP model uncertainty, especially when comparing quadrature approximations or sampling-based methods?
4. Can you provide more experimental details or references for SGLD, SGHMC, NUTS, and simulator-based inference approaches for better evaluation and comparison?
5. Could you clarify the relationship between parametric and non-parametric models in this work, particularly regarding neural networks and GPs?
6. Are there any minor issues mentioned by the reviewer, such as missing details, typos, or unclear terms, that could be addressed or clarified? | Summary Of The Paper
Review | Summary Of The Paper
This paper presents a novel approach to model and capture the uncertainty of dynamical systems by learning from trajectory data. The method is composed of a Gaussian process smoother model, used for predictions, and a neural dynamics model, used for training. The novelty in the method comes from the use of the distributions over time-derivatives of the state vectors (via a Wasserstein-distance penalty between the distributions from the smoother and the dynamics model) during the training process. Basing the approach on integrals over the state space, instead of over the model parameter space, allows for computational efficiency and more robustness when compared to previous sampling-based approaches, as evidenced by experiments.
Review
The paper is mostly well written, providing a good motivation, an appropriate use of citation and interesting insights in the derivation of the method. However, there are a few issues, especially regarding methodological details, which make it hard to understand the method and to assess its soundness.
Major issues:
The description of the methodology is somewhat confusing in a few important aspects. For instance, how does the set of "supporting gradients" play a role in the estimation of these two distributions? Are they random variables to be inferred as part of Ẋ? Or are they additional observations which will be used somehow in the calculation of the losses?
Regarding z in Eq. 6, I get the idea, but it's a bit confusing at first reading to use both z and the (x, t)-tuples as inputs to the GP when the feature map "phi" is supposed to be part of the kernel. It'd be better to consider k := k ∘ ϕ (composition), since the composition of a kernel with an input map is still a kernel, and then represent the inputs as simply (x, t), without the extra z notation.
In line 196, it is mentioned that the (instant) state noise vector in Eq. 12 is drawn only once per rollout. Does it mean that the noise term epsilon is drawn once and repeated for all time points along a trajectory? If so, the state noise is not independent, but actually correlated across time and that would possibly not allow the model in Eq. 12 to properly capture uncertainty in the state transitions.
I'm not sure about the effects of this approximation in (14), since later on the difference between p_D and p_S will be minimised. Having p_D as a function of p_S might have negative side effects. Any ideas on the drawbacks?
The certainty-equivalence approximation (see line 220) discards the uncertainty captured by the GP model in the expectation in Eq. 15. Why not use a quadrature approximation to compute Eq. 15, like the unscented transform, or anything else which could capture the states' covariance matrix on the inputs?
In the experiments with SGLD and SGHMC, how is the likelihood for these sampling-based methods computed? These methods don't directly produce a probability density, only empirical approximations.
The experimental evaluation could be more complete if they included comparisons against methods which encode (inexact) prior knowledge about the dynamics, such as simulator-based inference approaches [e.g., A, B, C, below]. The relationship to these methods is also not discussed in the related work section. In my view, the proposed approach could be combined with simulator-based models by using a physics-informed mean function for the GP smoother, which is not explored/discussed in the paper.
Minor issues:
Parametric vs. Non-parametric: Neural networks are "parametric" models, though in the main paper and in the appendix they are referred to as "non-parametric" models. The only non-parametric model in this work is the GP.
The equation for
z
i
in line 164 is missing
ϕ
.
I couldn't find details of this weight sharing scheme in Sec. 4, despite the mention in line 200.
Preliminary experiments: Are these results available? If not, it'd be better to add them to the appendix or give more details/references to back it up. For example, the Wasserstein GANs paper provides some experiments and theoretical justification on why the Wasserstein distance is more appropriate than the KL divergence for models over high-dimensional space, which usually have their data distribution concentrated on a (latent) lower-dimensional subspace.
Experiments: At the first mention of SGLD, SGHMC and NUTS, please spell out and/or add references to these methods for readers who are unfamiliar with the literature. Also, please, consider adding a reference to NDP in line 265.
Experiments: Line 306, any idea why SGLD failed with the Quadrocopter 1?
Please add a (foot)note on what an "Ansatz" is. At first, I thought it was a typo, as I was unfamiliar with the term.
References:
[A] Cranmer, Kyle, Johann Brehmer, and Gilles Louppe. 2020. “The Frontier of Simulation-Based Inference.” Proceedings of the National Academy of Sciences.
[B] Ramos, Fabio, Rafael Carvalhaes Possas, and Dieter Fox. 2019. “BayesSim: Adaptive Domain Randomization via Probabilistic Inference for Robotics Simulators.” In Robotics: Science and Systems (RSS). Freiburg im Breisgau, Germany.
[C] Okada, Masashi, and Tadahiro Taniguchi. 2019. “Variational Inference MPC for Bayesian Model-Based Reinforcement Learning.” In 3rd Conference on Robot Learning (CoRL 2019). |
NIPS | Title
Distributional Gradient Matching for Learning Uncertain Neural Dynamics Models
Abstract
Differential equations in general and neural ODEs in particular are an essential technique in continuous-time system identification. While many deterministic learning algorithms have been designed based on numerical integration via the adjoint method, many downstream tasks such as active learning, exploration in reinforcement learning, robust control, or filtering require accurate estimates of predictive uncertainties. In this work, we propose a novel approach towards estimating epistemically uncertain neural ODEs, avoiding the numerical integration bottleneck. Instead of modeling uncertainty in the ODE parameters, we directly model uncertainties in the state space. Our algorithm – distributional gradient matching (DGM) – jointly trains a smoother and a dynamics model and matches their gradients via minimizing a Wasserstein loss. Our experiments show that, compared to traditional approximate inference methods based on numerical integration, our approach is faster to train, faster at predicting previously unseen trajectories, and in the context of neural ODEs, significantly more accurate.
N/A
1 Introduction
For continuous-time system identification and control, ordinary differential equations form an essential class of models, deployed in applications ranging from robotics (Spong et al., 2006) to biology (Jones et al., 2009). Here, it is assumed that the evolution of a system is described by the evolution of continuous state variables, whose time-derivative is given by a set of parametrized equations. Often, these equations are derived from first principles, e.g., rigid body dynamics (Wittenburg, 2013), mass action kinetics (Ingalls, 2013), or Hamiltonian dynamics (Greydanus et al., 2019), or chosen for computational convenience (e.g., linear systems (Ljung, 1998)) or parametrized to facilitate system identification (Brunton et al., 2016).
Such construction methods lead to intriguing properties, including guarantees on physical realizability (Wensing et al., 2017), favorable convergence properties (Ortega et al., 2018), or a structure suitable for downstream tasks such as control design (Ortega et al., 2002). However, such models often capture the system dynamics only approximately, leading to a potentially significant discrepancy between the model and reality (Ljung, 1999). Moreover, when expert knowledge is not available, or precise parameter values are cumbersome to obtain, system identification from raw time series data becomes
∗Equal Contribution. Correspondence to trevenl@ethz.ch, wenkph@ethz.ch.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
necessary. In this case, one may seek more expressive nonparametric models instead (Rackauckas et al., 2020; Pillonetto et al., 2014). If the model is completely replaced by a neural network, the resulting model is called neural ODE (Chen et al., 2018). Despite their large number of parameters, as demonstrated by Chen et al. (2018); Kidger et al. (2020); Zhuang et al. (2020, 2021), deterministic neural ODEs can be efficiently trained, enabling accurate deterministic trajectory predictions. For many practical applications however, accurate uncertainty estimates are essential, as they guide downstream tasks like reinforcement learning (Deisenroth and Rasmussen, 2011; Schulman et al., 2015), safety guarantees (Berkenkamp et al., 2017), robust control design (Hjalmarsson, 2005), planning under uncertainty (LaValle, 2006), probabilistic forecasting in meteorology (Fanfarillo et al., 2021), or active learning / experimental design (Srinivas et al., 2010). A common way of obtaining such uncertainties is via a Bayesian framework. However, as observed by Dandekar et al. (2021), Bayesian training of neural ODEs in a dynamics setting remains largely unexplored. They demonstrate that initial variational-based inference schemes for Bayesian neural ODEs suffer from several serious drawbacks and thus propose sampling-based alternatives. However, as surfaced by our experiments in Section 4, sampling-based approaches still exhibit serious challenges. These pertain both to robustness (even if highly informed priors are supplied), and reliance on frequent numerical integration of large neural networks, which poses severe computational challenges for many downstream tasks like sampling-based planning (Karaman and Frazzoli, 2011) or uncertainty propagation in prediction.
Contributions In this work, we propose a novel approach for uncertainty quantification in nonlinear dynamical systems (cf. Figure 1). Crucially, our approach avoids explicit costly and non-robust numerical integration, by employing a probabilistic smoother of the observational data, whose representation we learn jointly across multiple trajectories. To capture dynamics, we regularize our smoother with a dynamics model. The latter captures epistemic uncertainty in the gradients of the ODE, which we match with the smoother's gradients by minimizing a Wasserstein loss; hence we call our approach Distributional Gradient Matching (DGM). In summary, our main contributions are:
• We develop DGM, an approach for capturing epistemic uncertainty about nonlinear dynamical systems by jointly training a smoother and a neural dynamics model;
• We provide a computationally efficient and statistically accurate mechanism for prediction, by focusing directly on the posterior / predictive state distribution.
• We experimentally demonstrate the effectiveness of our approach on learning challenging, chaotic dynamical systems, and generalizing to new unseen initial conditions.
High-level overview A high-level depiction of our algorithm is shown in Figure 2. In principle, DGM jointly learns a smoother (S) and a dynamics model (D). The smoother model, chosen to be a Gaussian process, maps an initial condition x0 and a time t to the state distribution pS(x(t)) and state derivatives distribution pS(ẋ(t)) reached at that time. The dynamics model, chosen to be a neural network, represents an ODE that maps states x(t) to the derivative distribution pD(ẋ(t)). Both models are evaluated at some training times, and all their output distributions are collected in the random variables XS, ẊS and ẊD. The parameters of these models are then jointly trained using a Wasserstein-distance-based objective directly on the level of distributions. For more details on every one of these components, we refer to Section 3. There, we introduce all components individually and then present how they interplay. Section 3 builds on known concepts from the literature, which we
Code is available at: https://github.com/lenarttreven/dgm
summarize in Section 2. Finally, in Section 4, we present the empirical study of the DGM algorithm, where we benchmark it against state-of-the-art, uncertainty-aware dynamics models.
2 Background
2.1 Problem Statement
Consider a continuous-time dynamical system whose K-dimensional state x ∈ RK evolves according to an unknown ordinary differential equation of the form
\[
\dot{x} = f^{*}(x). \tag{1}
\]
Here, f∗ is an arbitrary, unknown function assumed to be locally Lipschitz continuous, to guarantee existence and uniqueness of trajectories for every initial condition. In our experiment, we initialize the system at M different initial conditions $x_m(0)$, $m \in \{1, \dots, M\}$, and let it evolve to generate M trajectories. Each trajectory is then observed at discrete (but not necessarily uniformly spaced) time points, where the number of observations $(N_m)_{m \in \{1,\dots,M\}}$ can vary from trajectory to trajectory. Thus, a trajectory m is described by its initial condition $x_m(0)$, and the observations $y_m := [x_m(t_{n,m}) + \epsilon_{n,m}]_{n \in \{1,\dots,N_m\}}$ at times $t_m := [t_{n,m}]_{n \in \{1,\dots,N_m\}}$, where the additive observation noise $\epsilon_{n,m}$ is assumed to be drawn i.i.d. from a zero-mean Gaussian whose covariance is given by $\Sigma_\epsilon := \mathrm{diag}(\sigma_1^2, \dots, \sigma_K^2)$. We denote by D the dataset, consisting of M initial conditions $x_m(0)$, observation times $t_m$, and observations $y_m$. To model the unknown dynamical system, we choose a parametric Ansatz $\dot{x} = f(x, \theta)$. Depending on the amount of expert knowledge, this parameterization can follow a white-box, gray-box, or black-box methodology (Bohlin, 2006). In any case, the parametric form of f is fixed a priori (e.g., a neural network), and the key challenge is to infer a reasonable distribution over the parameters θ, conditioned on the data D. For later tasks, we are particularly interested in the predictive posterior state distribution
\[
p\bigl(x_{\mathrm{new}}(t_{\mathrm{new}}) \mid \mathcal{D}, t_{\mathrm{new}}, x_{\mathrm{new}}(0)\bigr), \tag{2}
\]
i.e., the posterior distribution of the states starting from a potentially unseen initial condition $x_{\mathrm{new}}(0)$ and evaluated at times $t_{\mathrm{new}}$. This posterior would then be used by the downstream or prediction tasks described in the introduction.
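For concreteness, the data-generating process just described could be simulated as in the following sketch; `f_true` stands in for the unknown f* and all names are illustrative.

```python
import jax
import jax.numpy as jnp
from jax.experimental.ode import odeint

# Generate M noisy trajectories of Eq. (1): integrate each initial
# condition, then add i.i.d. Gaussian observation noise.
def generate_dataset(key, f_true, x0s, obs_times, sigma):
    ys = []
    for x0, ts in zip(x0s, obs_times):
        key, sub = jax.random.split(key)
        xs = odeint(lambda x, t: f_true(x), x0, ts)
        ys.append(xs + sigma * jax.random.normal(sub, xs.shape))
    return ys
```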
2.2 Bayesian Parameter Inference
In the case of Bayesian parameter inference, an additional prior p(θ) is imposed on the parameters θ so that the posterior distribution of Equation (2) can be inferred. Unfortunately, this distribution is not analytically tractable for most choices of f(x,θ), which is especially true when we model f with a neural network. Formally, for fixed parameters θ, initial condition x(0) and observation time t, the likelihood of an observation y is given by
\[
p\bigl(y(t) \mid x(0), t, \theta, \Sigma_{\mathrm{obs}}\bigr) = \mathcal{N}\!\left( y(t) \,\middle|\, x(0) + \int_0^t f(x(\tau), \theta)\,\mathrm{d}\tau,\; \Sigma_{\mathrm{obs}} \right). \tag{3}
\]
Using the fact that all noise realizations are independent, the expression (3) can be used to calculate the likelihood of all observations in D. Most state-of-the-art parameter inference schemes use this fact to create samples θ̂s of the posterior over parameters p(θ|D) using various Monte Carlo methods. Given a new initial condition x(0) and observation time t, these samples θ̂s can then be turned into samples of the predictive posterior state again by numerically integrating
\[
\hat{x}_s(t) = x(0) + \int_0^t f\bigl(x(\tau), \hat{\theta}_s\bigr)\,\mathrm{d}\tau. \tag{4}
\]
Clearly, both training (i.e., obtaining the samples θ̂s) and prediction (i.e., evaluating Equation (4)) require integrating the system dynamics f many times. Especially when we model f with a neural network, this can be a huge burden, both numerically and computationally (Kelly et al., 2020). As an alternative approach, we can approximate the posterior p(θ|D) with variational inference (Bishop, 2006). However, we run into similar bottlenecks. While optimizing the variational objective, e.g., the ELBO, many integration steps are necessary to evaluate the unnormalized posterior. Also, at inference time, to obtain a distribution over state x̂s(t), we still need to integrate f several times. Furthermore, Dandekar et al. (2021) report poor forecasting performance by the variational approach.
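To make this bottleneck concrete, a sampling-based predictive posterior along the lines of Equation (4) could be sketched as follows; `f` and `theta_samples` are placeholders for a concrete dynamics model and posterior samples. Every sample requires a full numerical integration, which is precisely what DGM avoids.

```python
import jax.numpy as jnp
from jax.experimental.ode import odeint

# Predictive state samples via Eq. (4): one ODE solve per posterior sample.
def predictive_samples(f, theta_samples, x0, ts):
    def one_rollout(theta):
        return odeint(lambda x, t: f(x, theta), x0, ts)
    return jnp.stack([one_rollout(th) for th in theta_samples])
```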
3 Distributional Gradient Matching
In both the Monte Carlo sampling-based and variational approaches, all information about the dynamical system is stored in the estimates of the system parameters θ̂. This makes these approaches rather cumbersome: both for obtaining estimates of θ̂ and for obtaining the predictive posterior over states once θ̂ is found, we need multiple rounds of numerically integrating a potentially complicated (neural) differential equation. We thus have identified two bottlenecks limiting the performance and applicability of these algorithms: namely, numerical integration of f and inference of the system parameters θ. In our proposed algorithm, we avoid both of these bottlenecks by directly working with the posterior distribution in the state space. To this end, we introduce a probabilistic, differentiable smoother model that takes a tuple (t, x(0)), consisting of a time point t and an initial condition x(0), and maps it to the corresponding distribution over x(t). Thus, the smoother directly replaces the costly numerical integration steps needed, e.g., to evaluate Equation (2). Albeit computationally attractive, this approach has one serious drawback. Since the smoother no longer explicitly integrates differential equations, there is no guarantee that the obtained smoother model follows any vector field. Thus, the smoother model is strictly more general than the systems described by Equation (1). Unlike ODEs, it is able to capture mappings whose underlying functions violate, e.g., Lipschitz or Markovianity properties, which is clearly not desirable. To address this issue, we introduce a regularization term, Ldynamics, which encourages trajectories predicted by the smoother to follow some underlying system of the form of Equation (1). The smoother is then trained with the multi-objective loss function
\[
\mathcal{L} := \mathcal{L}_{\mathrm{data}} + \lambda \cdot \mathcal{L}_{\mathrm{dynamics}}, \tag{5}
\]
where Ldata is a smoother-dependent loss function that ensures a sufficiently accurate data fit, and λ is a trade-off parameter.
3.1 Regularization by Matching Distributions over Gradients
To ultimately define Ldynamics, we first choose a parametric dynamics model similar to f(x, θ) in Equation (3) that maps states to their derivatives. Second, we define a set of supporting points T with the corresponding supporting gradients Ẋ as
\[
\mathcal{T} := \left\{ \left(t_{\mathrm{supp},l},\, x_{\mathrm{supp},l}(0)\right) \right\}_{l \in \{1,\dots,N_{\mathrm{supp}}\}}, \qquad \dot{\mathcal{X}} := \left\{ \dot{x}_{\mathrm{supp},l} \right\}_{l \in \{1,\dots,N_{\mathrm{supp}}\}}.
\]
Here, the l-th element represents the event that the dynamical system's derivative at time tsupp,l is ẋsupp,l, after being initialized at time 0 at initial condition xsupp,l(0). Given both the smoother and the dynamics model, we now have two different ways to calculate distributions over Ẋ given some data D and supporting points T. First, we can directly leverage the differentiability and global nature of our smoother model to extract a distribution pS(Ẋ|D,T) from the smoother model. Second, we can first use the smoother to obtain state estimates and then plug these state estimates into the dynamics model to obtain a second distribution pD(Ẋ|D,T). Clearly, if the solution proposed by the smoother follows the dynamics, these two distributions should match. Thus, we can regularize the smoother to follow the solution of Equation (3) by defining Ldynamics to penalize the distance between pD(Ẋ|D,T) and pS(Ẋ|D,T) in some metric. By minimizing the overall loss, we thus match the distributions over the gradients of the smoother and the dynamics model.
3.2 Smoothing jointly over Trajectories with Deep Gaussian Processes
The core of DGM is formed by a smoother model. In principle, the posterior state distribution of Equation (2) could be modeled by any Bayesian regression technique. However, calculating pS(Ẋ|D,T) is generally more involved; evaluating this posterior over derivatives is computationally demanding, e.g., even for simple Bayesian neural networks. For Gaussian processes, however, this becomes straightforward, since derivatives of GPs remain GPs (Solak et al., 2003). Thus, DGM uses a GP smoother. For scalability and simplicity, we keep K different, independent smoothers, one for each state dimension. However, if computational complexity is not a concern, our approach generalizes directly to multi-output Gaussian processes. Below, we focus on the one-dimensional case, for clarity of exposition. For notational compactness, all vectors with a superscript should be interpreted as vectors over time in this subsection. For example, the vector x^(k) consists of all the k-th elements of the state vectors x(t_{n,m}), n ∈ {1, . . . , N_m}, m ∈ {1, . . . , M}. We define a Gaussian process with a differentiable mean function µ(x_m(0), t_{n,m}) as well as a differentiable and positive-definite kernel function K_RBF(ϕ(x_m(0), t_{n,m}), ϕ(x_{m′}(0), t_{n′,m′})). Here, the kernel is given by the composition of a standard ARD-RBF kernel (Rasmussen, 2004) and a differentiable feature extractor ϕ parametrized by a deep neural network, as introduced by Wilson et al. (2016). Following Solak et al. (2003), given fixed x_supp, we can now calculate the joint density of (ẋ^(k)_supp, y^(k)) for each state dimension k. Concatenating vectors accordingly across time and trajectories, let
\[
\begin{aligned}
\mu^{(k)} &:= \mu^{(k)}(x(0), t), & \dot{\mu}^{(k)} &:= \frac{\partial}{\partial t}\,\mu^{(k)}(x_{\mathrm{supp}}(0), t_{\mathrm{supp}}),\\
z^{(k)} &:= \phi^{(k)}(x(0), t), & z^{(k)}_{\mathrm{supp}} &:= \phi^{(k)}(x_{\mathrm{supp}}(0), t_{\mathrm{supp}}),\\
K^{(k)} &:= \mathcal{K}^{(k)}_{\mathrm{RBF}}(z^{(k)}, z^{(k)}), & \dot{K}^{(k)} &:= \frac{\partial}{\partial t_1}\,\mathcal{K}^{(k)}_{\mathrm{RBF}}(z^{(k)}_{\mathrm{supp}}, z^{(k)}),\\
& & \ddot{K}^{(k)} &:= \frac{\partial^2}{\partial t_1 \partial t_2}\,\mathcal{K}^{(k)}_{\mathrm{RBF}}(z^{(k)}_{\mathrm{supp}}, z^{(k)}_{\mathrm{supp}}).
\end{aligned}
\]
Then the joint density of $(\dot{x}^{(k)}_{\mathrm{supp}}, y^{(k)})$ can be written as
\[
\begin{pmatrix} \dot{x}^{(k)}_{\mathrm{supp}} \\ y^{(k)} \end{pmatrix}
\sim \mathcal{N}\!\left( \begin{pmatrix} \dot{\mu}^{(k)} \\ \mu^{(k)} \end{pmatrix},
\begin{pmatrix} \ddot{K}^{(k)} & \dot{K}^{(k)} \\ \bigl(\dot{K}^{(k)}\bigr)^{\top} & K^{(k)} + \sigma_k^2 I \end{pmatrix} \right). \tag{6}
\]
Here, we denote by $\partial/\partial t_1$ the partial derivative with respect to time in the first coordinate, by $\partial/\partial t_2$ the partial derivative with respect to time in the second coordinate, and by $\sigma_k^2$ the corresponding noise variance of $\Sigma_{\mathrm{obs}}$. Since the conditionals of a joint Gaussian random variable are again Gaussian distributed, pS is again Gaussian, i.e., $p_S(\dot{\mathcal{X}}^{(k)} \mid \mathcal{D}, \mathcal{T}) = \mathcal{N}\bigl(\dot{x}^{(k)}_{\mathrm{supp}} \mid \mu_S, \Sigma_S\bigr)$ with
\[
\begin{aligned}
\mu_S &:= \dot{\mu}^{(k)} + \dot{K}^{(k)}\bigl(K^{(k)} + \sigma_k^2 I\bigr)^{-1}\bigl(y^{(k)} - \mu^{(k)}\bigr),\\
\Sigma_S &:= \ddot{K}^{(k)} - \dot{K}^{(k)}\bigl(K^{(k)} + \sigma_k^2 I\bigr)^{-1}\bigl(\dot{K}^{(k)}\bigr)^{\top}.
\end{aligned} \tag{7}
\]
Here, the index k is used to highlight that this is just the distribution for one state dimension. To obtain the final pS(Ẋ |D, T ), we take the product over all state dimensions k. To fit our model to the data, we minimize the negative marginal log likelihood of our observations, neglecting purely additive terms (Rasmussen, 2004), i.e.,
\[
\mathcal{L}_{\mathrm{data}} := \sum_{k=1}^{K} \frac{1}{2}\bigl(y^{(k)} - \mu^{(k)}\bigr)^{\top}\bigl(K^{(k)} + \sigma_k^2 I\bigr)^{-1}\bigl(y^{(k)} - \mu^{(k)}\bigr) + \frac{1}{2}\log\det\bigl(K^{(k)} + \sigma_k^2 I\bigr). \tag{8}
\]
Furthermore, the predictive posterior for a new point $x^{(k)}_{\mathrm{test}}$, given time $t_{\mathrm{test}}$ and initial condition $x_{\mathrm{test}}(0)$, has the closed form
\[
p_S\bigl(x^{(k)}_{\mathrm{test}} \mid \mathcal{D}_k, t_{\mathrm{test}}, x_{\mathrm{test}}\bigr) = \mathcal{N}\!\left( x^{(k)}_{\mathrm{test}} \,\middle|\, \mu^{(k)}_{\mathrm{post}},\, \sigma^2_{\mathrm{post},k} \right), \tag{9}
\]
where
\[
\mu^{(k)}_{\mathrm{post}} = \mu^{(k)}(x_{\mathrm{test}}(0), t_{\mathrm{test}}) + \mathcal{K}^{(k)}_{\mathrm{RBF}}(z^{(k)}_{\mathrm{test}}, z^{(k)})^{\top}\bigl(K^{(k)} + \sigma_k^2 I\bigr)^{-1}\bigl(y^{(k)} - \mu^{(k)}\bigr), \tag{10}
\]
\[
\sigma^2_{\mathrm{post},k} = \mathcal{K}^{(k)}_{\mathrm{RBF}}(z^{(k)}_{\mathrm{test}}, z^{(k)}_{\mathrm{test}}) - \mathcal{K}^{(k)}_{\mathrm{RBF}}(z^{(k)}_{\mathrm{test}}, z^{(k)})^{\top}\bigl(K^{(k)} + \sigma_k^2 I\bigr)^{-1}\mathcal{K}^{(k)}_{\mathrm{RBF}}(z^{(k)}_{\mathrm{test}}, z^{(k)}). \tag{11}
\]
3.3 Representing Uncertainty in the Dynamics Model via the Reparametrization Trick
As described at the beginning of this section, a key bottleneck of standard Bayesian approaches is the potentially high dimensionality of the dynamics parameter vector θ. The same is true for our approach. If we were to keep track of the distributions over all parameters of our dynamics model, calculating pD(Ẋ |D, T ) quickly becomes infeasible.
However, especially in the case of modeling f with a neural network, the benefits of keeping distributions directly over θ are unclear due to overparametrization. For both the downstream tasks and our training method, we are mainly interested in the distributions in the state space. Usually, the state space is significantly lower dimensional compared to the parameter space of θ. Furthermore, since the exact posterior state distributions are generally intractable, they normally have to be approximated anyway with simpler distributions for downstream tasks (Schulman et al., 2015; Houthooft et al., 2016; Berkenkamp et al., 2017). Thus, we change the parametrization of our dynamics model as follows. Instead of working directly with ẋ(t) = f(x(t), θ) and keeping a distribution over θ, we model uncertainty directly on the level of the vector field as
\[
\dot{x}(t) = f(x(t), \psi) + \Sigma_D^{1/2}(x(t), \psi)\,\epsilon, \tag{12}
\]
where ϵ ∼ N(0, I_K) is drawn once per rollout (i.e., fixed within a trajectory) and Σ_D is a state-dependent and positive semi-definite matrix parametrized by a neural network. Here, ψ are the parameters of the new dynamics model, consisting of both the original parameters θ and the weights of the neural network parametrizing Σ_D. To keep the number of parameters reasonable, we employ a weight sharing scheme, detailed in Appendix B. In spirit, this modeling paradigm is very closely related to standard Bayesian training of NODEs. In both cases, the random distributions capture a distribution over a set of deterministic, ordinary differential equations. This should be seen in stark contrast to stochastic differential equations, where the randomness in the state space, i.e., diffusion, is modeled with a stochastic process. In comparison to (12), the latter is a time-varying disturbance added to the vector field. In that sense, our model still captures the epistemic uncertainty about our system dynamics, while an SDE model captures the intrinsic process noise, i.e., aleatoric uncertainty. While this reparametrization does not allow us to directly calculate pD(Ẋ|D,T), we obtain a Gaussian distribution for the marginals pD(ẋ_supp|x_supp). To retrieve pD(Ẋ|D,T), we use the smoother model's predictive state posterior to obtain
\[
p_D(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T}) = \int p_D(\dot{x}_{\mathrm{supp}}, x_{\mathrm{supp}} \mid \mathcal{D}, \mathcal{T})\,\mathrm{d}x_{\mathrm{supp}} \tag{13}
\]
\[
\approx \int p_D(\dot{x}_{\mathrm{supp}} \mid x_{\mathrm{supp}})\, p_S(x_{\mathrm{supp}} \mid \mathcal{T}, \mathcal{D})\,\mathrm{d}x_{\mathrm{supp}}. \tag{14}
\]
3.4 Comparing Gradient Distributions via the Wasserstein Distance
To compare and eventually match pD(Ẋ |D, T ) and pS(Ẋ |D, T ), we propose to use the Wasserstein distance (Kantorovich, 1939), since it allows for an analytic, closed-form representation, and since it outperforms similar measures (like forward, backward and symmetric KL divergence) in our exploratory experiments. The squared type-2 Wasserstein distance gives rise to the term
\[
\mathcal{W}_2^2\!\left[ p_S(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T}),\, p_D(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T}) \right]
= \mathcal{W}_2^2\!\left[ p_S(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T}),\, \mathbb{E}_{x_{\mathrm{supp}} \sim p_{\mathrm{GP}}(x_{\mathrm{supp}} \mid \mathcal{D}, \mathcal{T})}\!\left[ p_D(\dot{x}_{\mathrm{supp}} \mid x_{\mathrm{supp}}) \right] \right] \tag{15}
\]
that we will later use to regularize the smoothing process. To render the calculation of this regularization term computationally feasible, we introduce two approximations. First, observe that an exact calculation of the expectation in Equation (15) requires mapping a multivariate Gaussian through the deterministic neural networks parametrizing f and Σ_D in Equation (12). To avoid complex sampling schemes, we carry out a certainty-equivalence approximation of the expectation, that is, we evaluate the dynamics model on the posterior smoother mean µ_{S,supp}. As a result of this approximation, both pD(Ẋ|D,T) and pS(Ẋ|D,T) become Gaussian. However, the covariance structures of these two distributions are very different. Since we use independent GPs for different state dimensions, the smoother only models the covariance between the state values within the same dimension, across different time points. Furthermore, since ϵ, the random variable that captures the randomness of the dynamics across all time points, is only K-dimensional, the covariance of pD will be degenerate. Thus, we do not match the distributions directly, but instead match the marginals of each state coordinate independently at the different supporting time points. Hence,
using marginalization first and then certainty equivalence, Equation (15) reduces to
\[
\begin{aligned}
\mathcal{W}_2^2\!\left[ p_S(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T}), p_D(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T}) \right]
&\approx \sum_{k=1}^{K} \sum_{i=1}^{|\dot{\mathcal{X}}|} \mathcal{W}_2^2\!\left[ p_S\bigl(\dot{x}^{(k)}_{\mathrm{supp}}(t_{\mathrm{supp},i}) \mid \mathcal{D}, \mathcal{T}\bigr),\, p_D\bigl(\dot{x}^{(k)}_{\mathrm{supp}}(t_{\mathrm{supp},i}) \mid \mathcal{D}, \mathcal{T}\bigr) \right]\\
&\approx \sum_{k=1}^{K} \sum_{i=1}^{|\dot{\mathcal{X}}|} \mathcal{W}_2^2\!\left[ p_S\bigl(\dot{x}^{(k)}_{\mathrm{supp}}(t_{\mathrm{supp},i}) \mid \mathcal{D}, \mathcal{T}\bigr),\, p_D\bigl(\dot{x}^{(k)}_{\mathrm{supp}}(t_{\mathrm{supp},i}) \mid \mu_{S,\mathrm{supp}}\bigr) \right]. \tag{16}
\end{aligned}
\]
Conveniently, the Wasserstein distance can now be calculated analytically, since for two one-dimensional Gaussians $a \sim \mathcal{N}(\mu_a, \sigma_a^2)$ and $b \sim \mathcal{N}(\mu_b, \sigma_b^2)$, we have $\mathcal{W}_2^2[a, b] = (\mu_a - \mu_b)^2 + (\sigma_a - \sigma_b)^2$.
3.5 Final Loss Function
As explained in the previous paragraphs, distributional gradient matching trains a smoother regularized by a dynamics model. Both the parameters of the smoother φ, consisting of the trainable parameters of the GP prior mean µ, the feature map ϕ, and the kernel K, and the parameters of the dynamics model ψ are trained concurrently, using the same loss function. This loss consists of two terms, of which the regularization term was already described in Equation (16). While this term ensures that the smoother follows the dynamics, we need a second term ensuring that the smoother also follows the data. To this end, we follow standard GP regression literature, where it is common to learn the GP hyperparameters by maximizing the marginal log likelihood of the observations, i.e. Ldata (Rasmussen, 2004). Combining these terms, we obtain the final objective
\[
\mathcal{L}(\varphi, \psi) := \mathcal{L}_{\mathrm{data}} + \lambda \cdot \sum_{k=1}^{K} \sum_{i=1}^{|\dot{\mathcal{X}}|} \mathcal{W}_2^2\!\left[ p_S\bigl(\dot{x}^{(k)}_{\mathrm{supp}}(t_{\mathrm{supp},i}) \mid \mathcal{D}, \mathcal{T}\bigr),\, p_D\bigl(\dot{x}^{(k)}_{\mathrm{supp}}(t_{\mathrm{supp},i}) \mid \mu_{S,\mathrm{supp}}\bigr) \right].
\]
This loss function is a multi-criteria objective, where fitting the data (via the smoother) and identifying the dynamics model (by matching the marginals) regularize each other. In our preliminary experiments, we found the objective to be quite robust w.r.t. different choices of λ. In the interest of simplicity, we thus set it in all our experiments in Section 4 to a default value of λ = |D|/|Ẋ|, accounting only for the possibility of having different numbers of supporting points and observations. One special case worth mentioning is λ → 0, which corresponds to conventional sequential smoothing, where the second part would be used for identification in a second step, as proposed by Pillonetto and De Nicolao (2010). However, as can be seen in Figure 1, the smoother fails to properly identify the system without any knowledge about the dynamics and thus fails to provide meaningful state or derivative estimates. Thus, especially in the case of sparse observations, joint training is strictly superior. In its final form, unlike its pure Bayesian counterparts, DGM does not require any prior knowledge about the system dynamics. Nevertheless, if some prior knowledge is available, one could add an additional, additive term log(p(ψ)) to L(φ,ψ). It should be noted, however, that this was not done in any of our experiments, and excellent performance can be achieved without it.
4 Experiments
We now compare DGM against state-of-the-art methods. In a first experiment, we demonstrate the effects of an overparametrized, simple dynamics model on the performance of DGM as well as the traditional, MC-based algorithms SGLD (Stochastic Gradient Langevin Dynamics, (Welling and Teh, 2011)) and SGHMC (Stochastic Gradient Hamiltonian Monte Carlo, (Chen et al., 2014)). We select our baselines based on the results of Dandekar et al. (2021), who demonstrate that both a variational approach and NUTS (No U-Turn Sampler, Hoffman and Gelman (2014)) are inferior to these two. Subsequently, we will investigate and benchmark the ability of DGM to correctly identify neural dynamics models and to generalize across different initial conditions. Since SGLD and SGHMC reach their computational limits in the generalization experiments, we compare against Neural ODE Processes (NDP). Lastly, we will conclude by demonstrating the necessity of all of DGM's components. For all comparisons, we use the Julia implementations of SGLD and SGHMC provided by Dandekar et al. (2021), the PyTorch implementation of NDP provided by Norcliffe et al. (2021), and our own JAX (Bradbury et al., 2018) implementation of DGM.
4.1 Setup
We use known parametric systems from the literature to generate simulated, noisy trajectories. For these benchmarks, we use the two-dimensional Lotka Volterra (LV) system, the three-dimensional, chaotic Lorenz (LO) system, a four-dimensional double pendulum (DP) and a twelve-dimensional quadrocopter (QU) model. For all systems, the exact equations and ground truth parameters are provided in Appendix A. For each system, we create two different data sets. In the first, we include just one densely observed trajectory, taking the computational limitations of the benchmarks into consideration. In the second, we include many, sparsely observed trajectories (5 for LV and DP, 10 for LO, 15 for QU). This setting aims to study generalization over different initial conditions.
4.2 Metric
We use the log likelihood as a metric to compare the accuracy of our probabilistic models. In the 1-trajectory setting, we take a grid of 100 equidistant time points along the training trajectory. We then calculate the ground truth and evaluate its likelihood under the predictive distributions of our models. When testing for generalization, we repeat the same procedure for unseen initial conditions.
4.3 Effects of Overparametrization
[Figure: Mean and standard deviation of the log likelihood of the ground truth over 10 different noise realizations, shown for SGLD, SGHMC, and DGM as the dynamics model grows from layer configuration (3,3) to (3,12,9,6,3); y-axis: log likelihood.]
The exact procedure for one noise realization is described in Appendix C. While SGLD runs into numerical issues beyond a medium model complexity and the performance of SGHMC continuously disintegrates, DGM is unaffected. This foreshadows the results of the next two experiments, where we observe that the MC-based approaches are not suitable for the more complicated settings.
4.4 Single Trajectory Benchmarks
In Table 1, we evaluate the log-likelihood of the ground truth for the four benchmark systems, obtained when learning these systems using a neural ODE as a dynamics model (for more details, see Appendix B). Clearly, DGM performs best on all systems, even though we supplied both SGLD and SGHMC with very strong priors and fine-tuned them with an extensive hyperparameter sweep (see Appendix C for more details). Despite this effort, we failed to get SGLD to work on Quadrocopter 1, where it always returned NaNs. This is in stark contrast to DGM, which performs reliably without any pre-training or priors.
4.5 Prediction Speed
To evaluate prediction speed, we consider the task of predicting 100 points on a previously unseen trajectory. To obtain a fair comparison, all algorithms' prediction routines were implemented in JAX (Bradbury et al., 2018). Furthermore, while we used 1000 MC samples when evaluating the predictive posterior for the log likelihood to guarantee maximal accuracy, we only used 200 samples for the prediction times reported in Table 1. Here, 200 was chosen as a minimal sample size guaranteeing reasonable accuracy, following a preliminary experiment visualized in Appendix C. Nevertheless, the predictions of DGM are 1-2 orders of magnitude faster, as can be seen in Table 1. This further illustrates the advantage of relying on a smoother instead of costly numerical integration to obtain predictive posteriors in the state space.
4.6 Multi-Trajectory Benchmarks
Next, we take a set of trajectories starting on an equidistant grid of initial conditions. Each trajectory is then observed at 5 equidistant observation times for LV and DP, and at 10 equidistant observation times for the chaotic Lorenz and the more complicated quadrocopter systems. We test generalization by randomly sampling a new initial condition and evaluating the negative log likelihood of the ground truth at 100 equidistant time points. In Table 2, we compare the generalization performance of DGM against NDP, since, despite serious tuning efforts, the MC methods failed to produce meaningful results in this setting. DGM clearly outperforms NDP, a fact which is further exemplified in Figure 4. There, we show the test log likelihood for Lotka Volterra trained on an increasing set of trajectories. Even though the time grid is fixed and we only decrease the distance between initial condition samples, the dynamics model helps the smoother to generalize across time as well. In stark contrast, NDP fails to improve with increasing data after an initial jump.
4.7 Ablation Study
We next study the importance of different elements of our approach via an ablation study on the Lorenz 125 dataset, shown in Figure 1. Comparing the two rows, we see that joint smoothing across trajectories is essential to transfer knowledge between different training trajectories. Similarly, comparing the two columns, we see that the dynamics model enables the smoother to reduce its uncertainty in between observation points.
4.8 Computational Requirements
For the one-trajectory setting, all DGM-related experiments were run on an Nvidia RTX 2080 Ti, where the longest ones took 15 minutes. The comparison methods were given 24h on Intel Xeon Gold 6140 CPUs. For the multi-trajectory setting, we used an Nvidia Titan RTX, where all experiments finished in less than 3 hours. A more detailed run time compilation can be found in Appendix B. With careful implementation, the run time of DGM scales linearly in the number of dimensions $K$. However, since we use an accurate RBF kernel for all our experiments reported in this section, we have cubic run time complexity in $\sum_{m=1}^{M} N_m$, the total number of observations. In principle, this can be alleviated by deploying standard feature approximation methods (Rahimi et al., 2007; Liu et al., 2020). While this is a well-known fact, we nevertheless refer the interested reader to a more detailed discussion of the subject in Appendix D.
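As an illustration, a minimal random Fourier feature sketch in the spirit of Rahimi et al. (2007), which would replace the exact RBF kernel to break the cubic cost; the feature count and lengthscale are placeholder assumptions:

```python
import jax.numpy as jnp
from jax import random

def rff_features(z, key, num_features=256, lengthscale=1.0):
    # z: (N, d). Returns phi(z) such that phi(z) @ phi(z').T approximates
    # the RBF kernel k(z, z') with the given lengthscale.
    kw, kb = random.split(key)
    w = random.normal(kw, (z.shape[-1], num_features)) / lengthscale
    b = random.uniform(kb, (num_features,), maxval=2.0 * jnp.pi)
    return jnp.sqrt(2.0 / num_features) * jnp.cos(z @ w + b)
```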
5 Related Work
5.1 Bayesian Parameter Inference with Gaussian Processes
The idea of matching gradients of a (spline-based) smoother and a dynamics model goes back to the work of Varah (1982). For GPs, this idea is introduced by Calderhead et al. (2009), who first fit a GP to the data and then match the parameters of the dynamics. Dondelinger et al. (2013) introduce concurrent training, while Gorbach et al. (2017) introduce an efficient variational inference procedure for systems with a locally-linear parametric form. All these works claim to match the distributions of the gradients of the smoother and dynamics models by relying on a product-of-experts heuristic. However, Wenk et al. (2019) demonstrate that this product of experts in fact leads to statistical independence between the observations and the dynamics parameters, and that these algorithms essentially match point estimates of the gradients instead. Thus, DGM is the first algorithm to actually match gradients on the level of distributions for ODEs. In the context of stochastic differential equations (SDEs) with constant diffusion terms, Abbati et al. (2019) deploy MMD and GANs to match their gradient distributions. However, it should be noted that their algorithm treats the parameters of the dynamics model deterministically and thus cannot provide the epistemic uncertainty estimates that we seek here. Note that our work is not related to the growing literature investigating SDE approximations of Bayesian neural ODEs in the context of classification (Xu et al., 2021). Similarly to Chen et al. (2018), these works emphasize learning a terminal state of the ODE used for other downstream tasks.
5.2 Gaussian Processes with Operator Constraints
Gradient matching approaches mainly use the smoother as a proxy to infer dynamics parameters. This is in stark contrast to our work, where we treat the smoother as the main model used for prediction. While the regularizing properties of the dynamics on the smoother are explored by Wenk et al. (2020), Jidling et al. (2017) introduce an algorithm to incorporate linear operator constraints directly on the kernel level. Unlike in our work, they can provide strong guarantees that the posterior always follows these constraints. However, it remains unclear how to generalize their approach to the case of complex, nonlinear operators, potentially parametrized by neural dynamics models.
5.3 Other Related Approaches
In some sense, the smoother is mimicking a probabilistic numerical integration step, but without explicitly integrating. In spirit, this approach is similar to the solution networks used in the context of PDEs, as presented by Raissi et al. (2019), which however typically disregard uncertainty. In the context of classical ODE parameter inference, Kersting et al. (2020) deploy a GP to directly mimic a numerical integrator in a probabilistic, differentiable manner. Albeit promising in a classical, parametric ODE setting, it remains unclear how these methods can be scaled up, as there is still the numerical integration bottleneck. Unrelated to their work, Ghosh et al. (2021) present a variational inference scheme in the same, classical ODE setting. However, they still keep distributions over all weights of the neural network (Norcliffe et al., 2021). A similar approach is investigated by Dandekar et al. (2021), who found it to be inferior to the MC methods we use as a benchmark. Variational inference was previously employed by Yildiz et al. (2019) in the context of latent neural ODEs parametrized by a Bayesian neural network, but their work mainly focuses on dimensionality reduction. Nevertheless, their work inspired a model called Neural ODE Processes by Norcliffe et al. (2021). This work is similar to ours in the sense that it avoids keeping distributions over network weights and models an ensemble of deterministic ODEs via a global context variable. Consequently, we use it as a benchmark in Section 4, showing that it does not properly capture epistemic uncertainty in a low data setting, which might be problematic for downstream tasks like reinforcement learning.
6 Conclusion
In this work, we introduced a novel, GP-based collocation method that matches gradients of a smoother and a dynamics model on the distribution level using a Wasserstein loss. Through careful parametrization of the dynamics model, we manage to train complicated, neural ODE models where state-of-the-art methods struggle. We then demonstrate that these models are able to accurately predict unseen trajectories, while capturing epistemic uncertainty relevant for downstream tasks. In future work, we are excited to see how our training regime can be leveraged in the context of active learning of Bayesian neural ordinary differential equations for continuous-time reinforcement learning.
Acknowledgments
This research was supported by the Max Planck ETH Center for Learning Systems. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme grant agreement No 815943 as well as from the Swiss National Science Foundation under NCCR Automation, grant agreement 51NF40 180545. | 1. What is the main contribution of the paper, and how does it differ from previous work in learning ODEs?
2. How effective is the proposed method in comparison to other existing methods, particularly in terms of accuracy and speed?
3. What are some potential weaknesses or areas for improvement in the presentation or content of the paper?
4. Are there any questions regarding the notation or explanations used in the paper that could benefit from further clarification?
5. Is there sufficient detail provided in the paper to replicate the experiments, and what minor comments or typos can be noted? | Summary Of The Paper
Review | Summary Of The Paper
This paper develops a method called distributional gradient matching for learning unknown differential equations from data. The method is based on a combination of a neural ODE model and a Gaussian process model (with a deep covariance kernel) which is encouraged to produce solutions which follow the neural ODE model by introducing a regularisation term based on Wasserstein loss. Numerical examples show that the proposed method outperforms some existing alternatives.
Review
I am not an expert on learning ODEs, but the literature reviews in Sections 1 and 5 do make it seem like the approach is new. It would be interesting to have more commentary on how the use of the GP model in this paper differs from prior work - I presume that most prior work does not use a deep GP model.
The numerical examples seem impressive, the quadrocopter system in particular being highly non-trivial. In these examples the proposed method clearly outperforms some existing methods, both in terms of accuracy and speed. I lack expertise to say if the two MC-based algorithms, SGLD and SGHMC, that the comparisons are made to are representative of the state-of-the-art in this setting.
In my opinion the main weakness of this paper is that the presentation of the method is often quite confusing and the notation not always properly explained:
Line 141: What does \dot{\mathcal{X}} stand for?
p4: It is not told what the GP model is supposed to be modelling, just that "we define a Gaussian process".
p4: The use of a mapping \phi is not standard in GP regression (at least yet), so it would be good to make it clearer that this part of the GP model comes from Wilson et al. (2016).
p4: To call \phi a "feature map" seems potentially confusing if one is used to defining covariance kernels via inner products of feature maps.
p4: Is the GP covariance kernel supposed to be potentially different for each k (and hence the notation \mathcal{K}_k)? If so, is it intentional that there is only a single prior mean function?
Remarks on p6 that it has "trainable parameters" and in Appendix B.2 that it is "deep" appear to be the extent of exposition on the GP prior mean function \mu. More should perhaps be said.
What are t and t_{supp} in the definitions on p4?
\mathcal{L}_{data} defined in Equation (8) is never used afterwards; it is denoted p_{GP}(y | \varphi) later.
Is \mu_{S,supp} on p6 equal to \mu_S in Equation (7)?
It would be helpful to tell explicitly that the Wasserstein loss in Equation (16) will be used as \mathcal{L}_{dynamics}.
I am not sure if the paper contains sufficient details to replicate the experiments.
Other minor comments:
The sentence on lines 119-121 is quite awkward. There is also an extra closing parenthesis on line 120.
Line 131: "smother-dependent"
I believe the dot over \mu in Equation (7) is larger than that in the equation between lines 166 and 167.
The GP prior mean needs to be subtracted from the second term in Equation (10).
Transpose notation is not consistent: E.g., Equations (10) and (11) use "T" while (6) and (7) use "\top".
\mathcal{D} in Equation (9) should probably have subscript k.
It is confusing that the noise variance matrix is \Sigma_\epsilon, given that Equation (12) has a different random variable \epsilon.
Line 196: \epsilon should probably be bold.
Lines 216-17: "regularizaton"
Line 223: "indepdent"
"Lahdesmaki" - "Lähdesmäki" in one of the references |
NIPS | Title
Distributional Gradient Matching for Learning Uncertain Neural Dynamics Models
Abstract
Differential equations in general and neural ODEs in particular are an essential technique in continuous-time system identification. While many deterministic learning algorithms have been designed based on numerical integration via the adjoint method, many downstream tasks such as active learning, exploration in reinforcement learning, robust control, or filtering require accurate estimates of predictive uncertainties. In this work, we propose a novel approach towards estimating epistemically uncertain neural ODEs, avoiding the numerical integration bottleneck. Instead of modeling uncertainty in the ODE parameters, we directly model uncertainties in the state space. Our algorithm – distributional gradient matching (DGM) – jointly trains a smoother and a dynamics model and matches their gradients via minimizing a Wasserstein loss. Our experiments show that, compared to traditional approximate inference methods based on numerical integration, our approach is faster to train, faster at predicting previously unseen trajectories, and in the context of neural ODEs, significantly more accurate.
1 Introduction
For continuous-time system identification and control, ordinary differential equations form an essential class of models, deployed in applications ranging from robotics (Spong et al., 2006) to biology (Jones et al., 2009). Here, it is assumed that the evolution of a system is described by the evolution of continuous state variables, whose time-derivative is given by a set of parametrized equations. Often, these equations are derived from first principles, e.g., rigid body dynamics (Wittenburg, 2013), mass action kinetics (Ingalls, 2013), or Hamiltonian dynamics (Greydanus et al., 2019), or chosen for computational convenience (e.g., linear systems (Ljung, 1998)) or parametrized to facilitate system identification (Brunton et al., 2016).
Such construction methods lead to intriguing properties, including guarantees on physical realizability (Wensing et al., 2017), favorable convergence properties (Ortega et al., 2018), or a structure suitable for downstream tasks such as control design (Ortega et al., 2002). However, such models often capture the system dynamics only approximately, leading to a potentially significant discrepancy between the model and reality (Ljung, 1999). Moreover, when expert knowledge is not available, or precise parameter values are cumbersome to obtain, system identification from raw time series data becomes necessary. In this case, one may seek more expressive nonparametric models instead (Rackauckas et al., 2020; Pillonetto et al., 2014). If the model is completely replaced by a neural network, the resulting model is called neural ODE (Chen et al., 2018). Despite their large number of parameters, as demonstrated by Chen et al. (2018); Kidger et al. (2020); Zhuang et al. (2020, 2021), deterministic neural ODEs can be efficiently trained, enabling accurate deterministic trajectory predictions. For many practical applications however, accurate uncertainty estimates are essential, as they guide downstream tasks like reinforcement learning (Deisenroth and Rasmussen, 2011; Schulman et al., 2015), safety guarantees (Berkenkamp et al., 2017), robust control design (Hjalmarsson, 2005), planning under uncertainty (LaValle, 2006), probabilistic forecasting in meteorology (Fanfarillo et al., 2021), or active learning / experimental design (Srinivas et al., 2010). A common way of obtaining such uncertainties is via a Bayesian framework. However, as observed by Dandekar et al. (2021), Bayesian training of neural ODEs in a dynamics setting remains largely unexplored. They demonstrate that initial variational-based inference schemes for Bayesian neural ODEs suffer from several serious drawbacks and thus propose sampling-based alternatives. However, as surfaced by our experiments in Section 4, sampling-based approaches still exhibit serious challenges. These pertain both to robustness (even if highly informed priors are supplied), and reliance on frequent numerical integration of large neural networks, which poses severe computational challenges for many downstream tasks like sampling-based planning (Karaman and Frazzoli, 2011) or uncertainty propagation in prediction.
∗Equal Contribution. Correspondence to trevenl@ethz.ch, wenkph@ethz.ch.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
Contributions In this work, we propose a novel approach for uncertainty quantification in nonlinear dynamical systems (cf. Figure 1). Crucially, our approach avoids explicit costly and non-robust numerical integration, by employing a probabilistic smoother of the observational data, whose representation we learn jointly across multiple trajectories. To capture dynamics, we regularize our smoother with a dynamics model. The latter captures epistemic uncertainty in the gradients of the ODE, which we match with the smoother's gradients by minimizing a Wasserstein loss; hence we call our approach Distributional Gradient Matching (DGM). In summary, our main contributions are:
• We develop DGM, an approach for capturing epistemic uncertainty about nonlinear dynamical systems by jointly training a smoother and a neural dynamics model;
• We provide a computationally efficient and statistically accurate mechanism for prediction, by focusing directly on the posterior / predictive state distribution.
• We experimentally demonstrate the effectiveness of our approach on learning challenging, chaotic dynamical systems, and generalizing to new unseen initial conditions.
High-level overview A high-level depiction of our algorithm is shown in Figure 2. In principle, DGM jointly learns a smoother (S) and a dynamics model (D). The smoother model, chosen to be a Gaussian process, maps an initial condition x0 and a time t to the state distribution pS(x(t)) and state derivatives distribution pS(ẋ(t)) reached at that time. The dynamics model, chosen to be a neural network, represents an ODE that maps states x(t) to the derivative distribution pD(ẋ(t)). Both models are evaluated at some training times and all of their output distributions are collected in the random variables XS, ẊS and ẊD. The parameters of these models are then jointly trained using a Wasserstein-distance-based objective directly on the level of distributions. For more details on each of these components, we refer to Section 3. There, we introduce all components individually and then present how they interplay. Section 3 builds on known concepts from the literature, which we summarize in Section 2. Finally, in Section 4, we present the empirical study of the DGM algorithm, where we benchmark it against state-of-the-art, uncertainty-aware dynamics models.
Code is available at: https://github.com/lenarttreven/dgm
2 Background
2.1 Problem Statement
Consider a continuous-time dynamical system whose $K$-dimensional state $x \in \mathbb{R}^K$ evolves according to an unknown ordinary differential equation of the form
$$\dot{x} = f^*(x). \qquad (1)$$
Here, $f^*$ is an arbitrary, unknown function assumed to be locally Lipschitz continuous, to guarantee existence and uniqueness of trajectories for every initial condition. In our experiments, we initialize the system at $M$ different initial conditions $x_m(0)$, $m \in \{1,\dots,M\}$, and let it evolve to generate $M$ trajectories. Each trajectory is then observed at discrete (but not necessarily uniformly spaced) time points, where the number of observations $(N_m)_{m \in \{1,\dots,M\}}$ can vary from trajectory to trajectory. Thus, a trajectory $m$ is described by its initial condition $x_m(0)$ and the observations $y_m := [x_m(t_{n,m}) + \epsilon_{n,m}]_{n \in \{1,\dots,N_m\}}$ at times $t_m := [t_{n,m}]_{n \in \{1,\dots,N_m\}}$, where the additive observation noise $\epsilon_{n,m}$ is assumed to be drawn i.i.d. from a zero-mean Gaussian whose covariance is given by $\Sigma_\epsilon := \mathrm{diag}(\sigma_1^2, \dots, \sigma_K^2)$. We denote by $\mathcal{D}$ the dataset, consisting of $M$ initial conditions $x_m(0)$, observation times $t_m$, and observations $y_m$. To model the unknown dynamical system, we choose a parametric ansatz $\dot{x} = f(x, \theta)$. Depending on the amount of expert knowledge, this parameterization can follow a white-box, gray-box, or black-box methodology (Bohlin, 2006). In any case, the parametric form of $f$ is fixed a priori (e.g., a neural network), and the key challenge is to infer a reasonable distribution over the parameters $\theta$, conditioned on the data $\mathcal{D}$. For later tasks, we are particularly interested in the predictive posterior state distribution
$$p(x_{\text{new}}(t_{\text{new}}) \mid \mathcal{D}, t_{\text{new}}, x_{\text{new}}(0)), \qquad (2)$$
i.e., the posterior distribution of the states starting from a potentially unseen initial condition $x_{\text{new}}(0)$ and evaluated at times $t_{\text{new}}$. This posterior would then be used by the downstream or prediction tasks described in the introduction.
2.2 Bayesian Parameter Inference
In the case of Bayesian parameter inference, an additional prior p(θ) is imposed on the parameters θ so that the posterior distribution of Equation (2) can be inferred. Unfortunately, this distribution is not analytically tractable for most choices of f(x,θ), which is especially true when we model f with a neural network. Formally, for fixed parameters θ, initial condition x(0) and observation time t, the likelihood of an observation y is given by
$$p(y(t) \mid x(0), t, \theta, \Sigma_{\text{obs}}) = \mathcal{N}\left(y(t) \,\middle|\, x(0) + \int_0^t f(x(\tau), \theta)\, d\tau,\; \Sigma_{\text{obs}}\right). \qquad (3)$$
Using the fact that all noise realizations are independent, the expression (3) can be used to calculate the likelihood of all observations in D. Most state-of-the-art parameter inference schemes use this fact to create samples θ̂s of the posterior over parameters p(θ|D) using various Monte Carlo methods. Given a new initial condition x(0) and observation time t, these samples θ̂s can then be turned into samples of the predictive posterior state again by numerically integrating
$$\hat{x}_s(t) = x(0) + \int_0^t f(x(\tau), \hat{\theta}_s)\, d\tau. \qquad (4)$$
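A minimal sketch of this sampling-based prediction step; `f` and `theta_samples` are placeholders for the dynamics model and the posterior parameter draws, and every sample triggers a full numerical integration:

```python
import jax.numpy as jnp
from jax.experimental.ode import odeint

def predictive_state_samples(f, theta_samples, x0, t_grid):
    # One full numerical integration per posterior parameter sample.
    trajs = [odeint(lambda x, t: f(x, theta), x0, t_grid)
             for theta in theta_samples]
    return jnp.stack(trajs)  # shape (num_samples, len(t_grid), K)
```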
Clearly, both training (i.e., obtaining the samples θ̂s) and prediction (i.e., evaluating Equation (4)) require integrating the system dynamics f many times. Especially when we model f with a neural network, this can be a huge burden, both numerically and computationally (Kelly et al., 2020). As an alternative approach, we can approximate the posterior p(θ|D) with variational inference (Bishop, 2006). However, we run into similar bottlenecks. While optimizing the variational objective, e.g., the ELBO, many integration steps are necessary to evaluate the unnormalized posterior. Also, at inference time, to obtain a distribution over state x̂s(t), we still need to integrate f several times. Furthermore, Dandekar et al. (2021) report poor forecasting performance by the variational approach.
3 Distributional Gradient Matching
In both the Monte Carlo sampling-based and variational approaches, all information about the dynamical system is stored in the estimates of the system parameters θ̂. This makes these approaches rather cumbersome: both for obtaining estimates of θ̂ and for obtaining the predictive posterior over states once θ̂ is found, we need multiple rounds of numerically integrating a potentially complicated (neural) differential equation. We thus have identified two bottlenecks limiting the performance and applicability of these algorithms: namely, numerical integration of f and inference of the system parameters θ. In our proposed algorithm, we avoid both of these bottlenecks by directly working with the posterior distribution in the state space. To this end, we introduce a probabilistic, differentiable smoother model that takes a tuple (t, x(0)), consisting of a time point t and an initial condition x(0), as input and maps it to the corresponding distribution over x(t). Thus, the smoother directly replaces the costly, numerical integration steps needed, e.g., to evaluate Equation (2). Albeit computationally attractive, this approach has one serious drawback. Since the smoother no longer explicitly integrates differential equations, there is no guarantee that the obtained smoother model follows any vector field. Thus, the smoother model is strictly more general than the systems described by Equation (1). Unlike ODEs, it is able to capture mappings whose underlying functions violate, e.g., Lipschitz or Markovianity properties, which is clearly not desirable. To address this issue, we introduce a regularization term, $\mathcal{L}_{\text{dynamics}}$, which ensures that a trajectory predicted by the smoother is encouraged to follow some underlying system of the form of Equation (1). The smoother is then trained with the multi-objective loss function
$$\mathcal{L} := \mathcal{L}_{\text{data}} + \lambda \cdot \mathcal{L}_{\text{dynamics}}, \qquad (5)$$
where $\mathcal{L}_{\text{data}}$ is a smoother-dependent loss function that ensures a sufficiently accurate data fit, and $\lambda$ is a trade-off parameter.
3.1 Regularization by Matching Distributions over Gradients
To ultimately define $\mathcal{L}_{\text{dynamics}}$, first choose a parametric dynamics model similar to $f(x, \theta)$ in Equation (3) that maps states to their derivatives. Second, define a set of supporting points $\mathcal{T}$ with the corresponding supporting gradients $\dot{\mathcal{X}}$ as
$$\mathcal{T} := \left\{ \left(t_{\text{supp},l},\, x_{\text{supp},l}(0)\right) \right\}_{l \in \{1,\dots,N_{\text{supp}}\}}, \qquad \dot{\mathcal{X}} := \left\{ \dot{x}_{\text{supp},l} \right\}_{l \in \{1,\dots,N_{\text{supp}}\}}.$$
Here, the $l$-th element represents the event that the dynamical system's derivative at time $t_{\text{supp},l}$ is $\dot{x}_{\text{supp},l}$, after being initialized at time 0 at initial condition $x_{\text{supp},l}(0)$. Given both the smoother and the dynamics model, we now have two different ways to calculate distributions over $\dot{\mathcal{X}}$ given some data $\mathcal{D}$ and supporting points $\mathcal{T}$. First, we can directly leverage the differentiability and global nature of our smoother model to extract a distribution $p_S(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T})$ from the smoother model. Second, we can first use the smoother to obtain state estimates and then plug these state estimates into the dynamics model to obtain a second distribution $p_D(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T})$. Clearly, if the solution proposed by the smoother follows the dynamics, these two distributions should match. Thus, we can regularize the smoother to follow the solution of Equation (3) by defining $\mathcal{L}_{\text{dynamics}}$ to encode the distance between $p_D(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T})$ and $p_S(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T})$ in some metric. By minimizing the overall loss, we thus match the distributions over the gradients of the smoother and the dynamics model.
3.2 Smoothing jointly over Trajectories with Deep Gaussian Processes
The core of DGM is formed by a smoother model. In principle, the posterior state distribution of Equation (2) could be modeled by any Bayesian regression technique. However, calculating $p_S(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T})$ is generally more involved. Here, the key challenge is evaluating this posterior, which is already computationally challenging, e.g., for simple Bayesian neural networks. For Gaussian processes, however, this becomes straightforward, since derivatives of GPs remain GPs (Solak et al., 2003). Thus, DGM uses a GP smoother. For scalability and simplicity, we keep $K$ different, independent smoothers, one for each state dimension. However, if computational complexity is not a concern, our approach generalizes directly to multi-output Gaussian processes. Below, we focus on the one-dimensional case, for clarity of exposition. For notational compactness, all vectors with a superscript should be interpreted as vectors over time in this subsection. For example, the vector $x^{(k)}$ consists of all the $k$-th elements of the state vectors $x(t_{n,m})$, $n \in \{1,\dots,N_m\}$, $m \in \{1,\dots,M\}$. We define a Gaussian process with a differentiable mean function $\mu(x_m(0), t_{n,m})$ as well as a differentiable and positive-definite kernel function $\mathcal{K}_{\text{RBF}}(\phi(x_m(0), t_{n,m}), \phi(x_{m'}(0), t_{n',m'}))$. Here, the kernel is given by the composition of a standard ARD-RBF kernel (Rasmussen, 2004) and a differentiable feature extractor $\phi$ parametrized by a deep neural network, as introduced by Wilson et al. (2016). Following Solak et al. (2003), given fixed $x_{\text{supp}}$, we can now calculate the joint density of $(\dot{x}^{(k)}_{\text{supp}}, y^{(k)})$ for each state dimension $k$. Concatenating vectors accordingly across time and trajectories, let
$$\mu^{(k)} := \mu^{(k)}(x(0), t), \qquad \dot{\mu}^{(k)} := \frac{\partial}{\partial t}\,\mu^{(k)}(x_{\text{supp}}(0), t_{\text{supp}}),$$
$$z^{(k)} := \phi^{(k)}(x(0), t), \qquad z^{(k)}_{\text{supp}} := \phi^{(k)}(x_{\text{supp}}(0), t_{\text{supp}}),$$
$$\mathcal{K}^{(k)} := \mathcal{K}^{(k)}_{\text{RBF}}(z^{(k)}, z^{(k)}), \qquad \dot{\mathcal{K}}^{(k)} := \frac{\partial}{\partial t_1}\,\mathcal{K}^{(k)}_{\text{RBF}}(z^{(k)}_{\text{supp}}, z^{(k)}), \qquad \ddot{\mathcal{K}}^{(k)} := \frac{\partial^2}{\partial t_1 \partial t_2}\,\mathcal{K}^{(k)}_{\text{RBF}}(z^{(k)}_{\text{supp}}, z^{(k)}_{\text{supp}}).$$
Then the joint density of $(\dot{x}^{(k)}_{\text{supp}}, y^{(k)})$ can be written as
$$\begin{pmatrix} \dot{x}^{(k)}_{\text{supp}} \\ y^{(k)} \end{pmatrix} \sim \mathcal{N}\left( \begin{pmatrix} \dot{\mu}^{(k)} \\ \mu^{(k)} \end{pmatrix}, \begin{pmatrix} \ddot{\mathcal{K}}^{(k)} & \dot{\mathcal{K}}^{(k)} \\ (\dot{\mathcal{K}}^{(k)})^\top & \mathcal{K}^{(k)} + \sigma_k^2 I \end{pmatrix} \right). \qquad (6)$$
Here we denote by $\frac{\partial}{\partial t_1}$ the partial derivative with respect to time in the first coordinate, by $\frac{\partial}{\partial t_2}$ the partial derivative with respect to time in the second coordinate, and by $\sigma_k^2$ the corresponding noise variance of $\Sigma_{\text{obs}}$. Since the conditionals of a joint Gaussian random variable are again Gaussian distributed, $p_S$ is again Gaussian, i.e., $p_S(\dot{\mathcal{X}}_k \mid \mathcal{D}, \mathcal{T}) = \mathcal{N}(\dot{x}^{(k)}_{\text{supp}} \mid \mu_S, \Sigma_S)$ with
$$\mu_S := \dot{\mu}^{(k)} + \dot{\mathcal{K}}^{(k)}\left(\mathcal{K}^{(k)} + \sigma_k^2 I\right)^{-1}\left(y^{(k)} - \mu^{(k)}\right), \qquad \Sigma_S := \ddot{\mathcal{K}}^{(k)} - \dot{\mathcal{K}}^{(k)}\left(\mathcal{K}^{(k)} + \sigma_k^2 I\right)^{-1}\left(\dot{\mathcal{K}}^{(k)}\right)^\top. \qquad (7)$$
The index $k$ in Equation (7) is used to highlight that this is just the distribution for one state dimension. To obtain the final $p_S(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T})$, we take the product over all state dimensions $k$. To fit our model to the data, we minimize the negative marginal log likelihood of our observations, neglecting purely additive terms (Rasmussen, 2004), i.e.,
$$\mathcal{L}_{\text{data}} := \sum_{k=1}^{K} \frac{1}{2}\left(y^{(k)} - \mu^{(k)}\right)^\top \left(\mathcal{K}^{(k)} + \sigma_k^2 I\right)^{-1} \left(y^{(k)} - \mu^{(k)}\right) + \frac{1}{2}\log\det\left(\mathcal{K}^{(k)} + \sigma_k^2 I\right). \qquad (8)$$
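A minimal sketch of this data-fit term for one state dimension; summing over all $K$ dimensions yields $\mathcal{L}_{\text{data}}$:

```python
import jax.numpy as jnp
from jax.scipy.linalg import cho_factor, cho_solve

def neg_marginal_log_likelihood(K, y, mu, sigma2):
    A = K + sigma2 * jnp.eye(K.shape[0])
    c, lower = cho_factor(A)
    quad = 0.5 * (y - mu) @ cho_solve((c, lower), y - mu)
    half_logdet = jnp.sum(jnp.log(jnp.diag(c)))  # equals 0.5 * logdet(A)
    return quad + half_logdet
```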
Furthermore, the predictive posterior for a new point $x^{(k)}_{\text{test}}$, given time $t_{\text{test}}$ and initial condition $x_{\text{test}}(0)$, has the closed form
$$p_S(x^{(k)}_{\text{test}} \mid \mathcal{D}_k, t_{\text{test}}, x_{\text{test}}) = \mathcal{N}\left(x^{(k)}_{\text{test}} \,\middle|\, \mu^{(k)}_{\text{post}}, \sigma^2_{\text{post},k}\right), \qquad (9)$$
where
$$\mu^{(k)}_{\text{post}} = \mu^{(k)}(x_{\text{test}}(0), t_{\text{test}}) + \mathcal{K}^{(k)}_{\text{RBF}}(z^{(k)}_{\text{test}}, z^{(k)})^\top (\mathcal{K}^{(k)} + \sigma_k^2 I)^{-1}\left(y^{(k)} - \mu^{(k)}\right), \qquad (10)$$
$$\sigma^2_{\text{post},k} = \mathcal{K}^{(k)}_{\text{RBF}}(z^{(k)}_{\text{test}}, z^{(k)}_{\text{test}}) - \mathcal{K}^{(k)}_{\text{RBF}}(z^{(k)}_{\text{test}}, z^{(k)})^\top (\mathcal{K}^{(k)} + \sigma_k^2 I)^{-1} \mathcal{K}^{(k)}_{\text{RBF}}(z^{(k)}_{\text{test}}, z^{(k)}). \qquad (11)$$
3.3 Representing Uncertainty in the Dynamics Model via the Reparametrization Trick
As described at the beginning of this section, a key bottleneck of standard Bayesian approaches is the potentially high dimensionality of the dynamics parameter vector θ. The same is true for our approach. If we were to keep track of the distributions over all parameters of our dynamics model, calculating $p_D(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T})$ would quickly become infeasible.
However, especially in the case of modeling f with a neural network, the benefits of keeping distributions directly over θ are unclear due to overparametrization. For both the downstream tasks and our training method, we are mainly interested in the distributions in the state space. Usually, the state space is significantly lower dimensional compared to the parameter space of θ. Furthermore, since the exact posterior state distributions are generally intractable, they normally have to be approximated anyway with simpler distributions for downstream tasks (Schulman et al., 2015; Houthooft et al., 2016; Berkenkamp et al., 2017). Thus, we change the parametrization of our dynamics model as follows. Instead of working directly with ẋ(t) = f(x(t), θ) and keeping a distribution over θ, we model uncertainty directly on the level of the vector field as
$$\dot{x}(t) = f(x(t), \psi) + \Sigma_D^{1/2}(x(t), \psi)\,\epsilon, \qquad (12)$$
where $\epsilon \sim \mathcal{N}(0, I_K)$ is drawn once per rollout (i.e., fixed within a trajectory) and $\Sigma_D$ is a state-dependent and positive semi-definite matrix parametrized by a neural network. Here, $\psi$ are the parameters of the new dynamics model, consisting of both the original parameters $\theta$ and the weights of the neural network parametrizing $\Sigma_D$. To keep the number of parameters reasonable, we employ a weight sharing scheme, detailed in Appendix B. In spirit, this modeling paradigm is very closely related to standard Bayesian training of NODEs. In both cases, the random distributions capture a distribution over a set of deterministic, ordinary differential equations. This should be seen in stark contrast to stochastic differential equations, where the randomness in the state space, i.e., diffusion, is modeled with a stochastic process. In comparison to (12), the latter is a time-varying disturbance added to the vector field. In that sense, our model still captures the epistemic uncertainty about our system dynamics, while an SDE model captures the intrinsic process noise, i.e., aleatoric uncertainty. While this reparametrization does not allow us to directly calculate $p_D(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T})$, we obtain a Gaussian distribution for the marginals $p_D(\dot{x}_{\text{supp}} \mid x_{\text{supp}})$. To retrieve $p_D(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T})$, we use the smoother model's predictive state posterior to obtain
$$p_D(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T}) = \int p_D(\dot{x}_{\text{supp}}, x_{\text{supp}} \mid \mathcal{D}, \mathcal{T})\, dx_{\text{supp}} \qquad (13)$$
$$\approx \int p_D(\dot{x}_{\text{supp}} \mid x_{\text{supp}})\, p_S(x_{\text{supp}} \mid \mathcal{T}, \mathcal{D})\, dx_{\text{supp}}. \qquad (14)$$
3.4 Comparing Gradient Distributions via the Wasserstein Distance
To compare and eventually match pD(Ẋ |D, T ) and pS(Ẋ |D, T ), we propose to use the Wasserstein distance (Kantorovich, 1939), since it allows for an analytic, closed-form representation, and since it outperforms similar measures (like forward, backward and symmetric KL divergence) in our exploratory experiments. The squared type-2 Wasserstein distance gives rise to the term
$$\mathcal{W}_2^2\left[p_S(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T}),\, p_D(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T})\right] = \mathcal{W}_2^2\left[p_S(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T}),\, \mathbb{E}_{x_{\text{supp}} \sim p_{GP}(x_{\text{supp}} \mid \mathcal{D}, \mathcal{T})}\left[p_D(\dot{x}_{\text{supp}} \mid x_{\text{supp}})\right]\right] \qquad (15)$$
that we will later use to regularize the smoothing process. To render the calculation of this regularization term computationally feasible, we introduce two approximations. First, observe that an exact calculation of the expectation in Equation (15) requires mapping a multivariate Gaussian through the deterministic neural networks parametrizing $f$ and $\Sigma_D$ in Equation (12). To avoid complex sampling schemes, we carry out a certainty-equivalence approximation of the expectation; that is, we evaluate the dynamics model on the posterior smoother mean $\mu_{S,\text{supp}}$. As a result of this approximation, both $p_D(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T})$ and $p_S(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T})$ become Gaussians. However, their covariance structures are very different. Since we use independent GPs for the different state dimensions, the smoother only models the covariance between the state values within the same dimension, across different time points. Furthermore, since $\epsilon$, the random variable that captures the randomness of the dynamics across all time points, is only $K$-dimensional, the covariance of $p_D$ will be degenerate. Thus, we do not match the distributions directly, but instead match the marginals of each state coordinate independently at the different supporting time points. Hence,
using first marginalization and then the certainty equivalence, Equation (15) reduces to
$$\mathcal{W}_2^2\left[p_S(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T}),\, p_D(\dot{\mathcal{X}} \mid \mathcal{D}, \mathcal{T})\right] \approx \sum_{k=1}^{K} \sum_{i=1}^{|\dot{\mathcal{X}}|} \mathcal{W}_2^2\left[p_S(\dot{x}^{(k)}_{\text{supp}}(t_{\text{supp},i}) \mid \mathcal{D}, \mathcal{T}),\, p_D(\dot{x}^{(k)}_{\text{supp}}(t_{\text{supp},i}) \mid \mathcal{D}, \mathcal{T})\right]$$
$$\approx \sum_{k=1}^{K} \sum_{i=1}^{|\dot{\mathcal{X}}|} \mathcal{W}_2^2\left[p_S(\dot{x}^{(k)}_{\text{supp}}(t_{\text{supp},i}) \mid \mathcal{D}, \mathcal{T}),\, p_D(\dot{x}^{(k)}_{\text{supp}}(t_{\text{supp},i}) \mid \mu_{S,\text{supp}})\right]. \qquad (16)$$
Conveniently, the Wasserstein distance can now be calculated analytically, since for two one-dimensional Gaussians $a \sim \mathcal{N}(\mu_a, \sigma_a^2)$ and $b \sim \mathcal{N}(\mu_b, \sigma_b^2)$, we have $\mathcal{W}_2^2[a, b] = (\mu_a - \mu_b)^2 + (\sigma_a - \sigma_b)^2$.
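In code, this closed form is a one-liner that can be applied elementwise to the matched marginals:

```python
import jax.numpy as jnp

def w2_squared(mu_a, std_a, mu_b, std_b):
    # Closed-form squared 2-Wasserstein distance between 1-D Gaussians.
    return (mu_a - mu_b) ** 2 + (std_a - std_b) ** 2

# Summing over all K x |X_dot| marginal pairs gives the regularizer of Eq. (16):
# reg = jnp.sum(w2_squared(mu_s, std_s, mu_d, std_d))
```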
3.5 Final Loss Function
As explained in the previous paragraphs, distributional gradient matching trains a smoother regularized by a dynamics model. Both the parameters of the smoother $\varphi$, consisting of the trainable parameters of the GP prior mean $\mu$, the feature map $\phi$, and the kernel $\mathcal{K}$, and the parameters of the dynamics model $\psi$ are trained concurrently, using the same loss function. This loss consists of two terms, of which the regularization term was already described in Equation (16). While this term ensures that the smoother follows the dynamics, we need a second term ensuring that the smoother also follows the data. To this end, we follow the standard GP regression literature, where it is common to learn the GP hyperparameters by maximizing the marginal log likelihood of the observations, i.e., by minimizing $\mathcal{L}_{\text{data}}$ (Rasmussen, 2004). Combining these terms, we obtain the final objective
$$\mathcal{L}(\varphi, \psi) := \mathcal{L}_{\text{data}} + \lambda \cdot \sum_{k=1}^{K} \sum_{i=1}^{|\dot{\mathcal{X}}|} \mathcal{W}_2^2\left[p_S(\dot{x}^{(k)}_{\text{supp}}(t_{\text{supp},i}) \mid \mathcal{D}, \mathcal{T}),\, p_D(\dot{x}^{(k)}_{\text{supp}}(t_{\text{supp},i}) \mid \mu_{S,\text{supp}})\right].$$
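A minimal sketch of how the pieces assemble, written with the additive form of Equation (5); the helper inputs are illustrative, not the authors' API:

```python
def dgm_objective(l_data, smoother_marginals, dynamics_marginals, lam):
    # l_data: value of Equation (8), summed over the K state dimensions.
    # *_marginals: (mean, std) arrays over all supporting points, from the
    # smoother (Equation (7)) and the certainty-equivalence dynamics model.
    (mu_s, std_s), (mu_d, std_d) = smoother_marginals, dynamics_marginals
    w2 = ((mu_s - mu_d) ** 2 + (std_s - std_d) ** 2).sum()
    return l_data + lam * w2
```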
This loss function is a multi-criteria objective, where fitting the data (via the smoother) and identifying the dynamics model (by matching the marginals) regularize each other. In our preliminary experiments, we found the objective to be quite robust w.r.t. different choices of $\lambda$. In the interest of simplicity, we thus set it in all our experiments in Section 4 to a default value of $\lambda = \frac{|\mathcal{D}|}{|\dot{\mathcal{X}}|}$, accounting only for the possibility of having different numbers of supporting points and observations. One special case worth mentioning is $\lambda \to 0$, which corresponds to conventional sequential smoothing, where the second part would be used for identification in a second step, as proposed by Pillonetto and De Nicolao (2010). However, as can be seen in Figure 1, the smoother fails to properly identify the system without any knowledge about the dynamics and thus fails to provide meaningful state or derivative estimates. Thus, especially in the case of sparse observations, joint training is strictly superior. In its final form, unlike its pure Bayesian counterparts, DGM does not require any prior knowledge about the system dynamics. Nevertheless, if some prior knowledge is available, one could add an additional, additive term $\log(p(\psi))$ to $\mathcal{L}(\varphi, \psi)$. It should be noted, however, that this was not done in any of our experiments, and excellent performance can be achieved without.
| 1. What is the focus of the paper regarding nonlinear dynamical systems?
2. What are the strengths of the proposed approach compared to existing methods?
3. How does the reviewer assess the quality and significance of the work presented?
4. What are the weaknesses or limitations of the method, according to the reviewer?
5. Can you provide additional explanations or justifications for certain aspects of the proposed method? | Summary Of The Paper
Review | Summary Of The Paper
The paper tackles the problem of learning nonlinear dynamical systems from historical trajectories, while estimating predictive uncertainties. The authors propose a new method based on a Gaussian process smoother which is trained with maximum likelihood to fit the data of all training trajectories jointly, while constrained to follow in some sense the underlying dynamics. The latter is performed thanks to a regularizer forcing the smoother's gradient distribution to match the distribution of gradients of the dynamics model over a fixed set of support points. The Wasserstein-2 distance is approximated to evaluate the distribution discrepancies. Then, instead of deriving uncertainty estimates over the predicted parameters, as done in standard Bayesian methodology, uncertainty is directly estimated over the predicted states. For this, a Gaussian distribution is used to model the probability of the state derivatives conditioned on the states. Thanks to these tricks and modeling choices, the authors avoid having to integrate the complex dynamics model, unlike previous Monte Carlo methods. The proposed approach is showcased and compared to sampling-based methods in several experiments with simulated data, including overparametrized linear dynamics, as well as single and multi-trajectory cases of 4 well-known dynamical systems. An ablation study is also presented showing the benefits of joint smoothing and gradient-distribution-matching regularization on one of the four simulated datasets.
Review
Post-rebuttal additions
I would like to thank the authors for their detailed answers, which address all concerns I had.
Originality
The paper seems original to me and the related work seems quite well referenced to me.
Significance
The proposed approach is very complex, but not more than existing ones, so I believe it could be a good alternative for continuous-time nonlinear system identification requiring uncertainty estimates. It indeed solves a real bottleneck of existing approaches when used with NODE models (numerical integration). Also, instead of delivering parameter uncertainties as previous methods do, it focuses on direct estimation of state uncertainties, which is what matters at the end of the day. Hence I believe that the work presented could be quite significant in the field, both for practitioners and for researchers who could build on these ideas.
Quality
Overall quality of the explanations and experiments is quite good, as the method was showcased in varied settings and compared to relevant state-of-the-art alternatives. Also, the code seems clean (+1 for reproducibility). I do have some remarks though concerning some points that do not seem explained in the paper, as well as a few questions listed below:
While it is said in section 2 that trajectories can be sampled at non-equally spaced times, all simulated experiments use uniform time grids, which is a shame.
How are the supporting points chosen and how do they influence the estimates’ quality? For example, having a set of supporting points not well spread or too small should lead to degraded performance should it not?
In equation 12 and line 196, it is said that the dynamics noise \epsilon is drawn once per trajectory, while in the problem statement, observations y are supposed to have i.i.d. additive noise (one per sample in each trajectory). While I realize these quantities are not the same (one is in the states and the other in their derivatives), I wonder why make these different modeling choices and not also have a different disturbance realization for each instant in the dynamics model.
Concerning the ablation study, you say on line 330 that by comparing the rows of Figure 1 it can be seen that joint smoothing of multiple trajectories helps. While I do see it clearly when comparing rows of the second column (i.e. with the dynamics regularizer), I see the opposite when looking at the first column. Namely, joint smoothing seems to lead to a noisier and less accurate predicted trajectory, with a larger confidence region. As this is very curious and contradicts the text, I think it should be commented on, and also that the sentence should be clarified (by saying, for example, that the improvement is visible in the second column).
How many training and testing trajectories are used to compute results for figure 2?
Likewise, while it is said that 5 to 10 points per trajectory are used to train the models in the multi-trajectory experiment, how many trajectories are used? Is that the meaning of the numbers next to the systems considered in the first column of Table 2? If that’s the case, please add it to the legend or paragraph text.
Could you please comment on the fact that SGLD and SGHMC outperform DGM for configurations 3,3 and 3,6,3 in figure 2?
Could you please elaborate on the reason SGLD failed on the quadcopter single-trajectory experiment, leading to NaNs? Did you try to understand what these NaNs meant?
Clarity
Although the paper is well written, the proposed method is very complicated and difficult to grasp (I had to read the paper many times before I started to get it). I cannot say it is not well explained though, but it might help to put more structure in section 3 (replace paragraphs by subsections + paragraphs for example).
Furthermore, I found some explanation was missing concerning the certainty-equivalence approximation of the Wasserstein distance. Maybe add that the expectation you are approximating is the one in (15) and that this approximation is carried out in the second line of (16).
In the paragraph Final loss (page 6), the first data fitting term of the loss is denoted quite differently than in equation (8). It took me some time to realize they were the same, so I think some referencing and more consistent notation could help with the clarity here.
In line 285, you denote the model used to fit the dynamics with the same notation as in line 282, where you explain the real dynamics that generated the data. I find this confusing and believe that less ambiguous notation here would help with the clarity (by using for example the dynamics function f introduced in (1)).
I could not find any introduction of the abbreviations SGLD and SGHMC.
On line 164, I believe z_i := should not be there, as it is defined differently in the text and in equations below line 166.
Sentence at lines 119-120 has a problem as the word maps is repeated twice.
Line 244: another -> each other
NIPS | Title
Distributional Gradient Matching for Learning Uncertain Neural Dynamics Models
Abstract
Differential equations in general and neural ODEs in particular are an essential technique in continuous-time system identification. While many deterministic learning algorithms have been designed based on numerical integration via the adjoint method, many downstream tasks such as active learning, exploration in reinforcement learning, robust control, or filtering require accurate estimates of predictive uncertainties. In this work, we propose a novel approach towards estimating epistemically uncertain neural ODEs, avoiding the numerical integration bottleneck. Instead of modeling uncertainty in the ODE parameters, we directly model uncertainties in the state space. Our algorithm – distributional gradient matching (DGM) – jointly trains a smoother and a dynamics model and matches their gradients via minimizing a Wasserstein loss. Our experiments show that, compared to traditional approximate inference methods based on numerical integration, our approach is faster to train, faster at predicting previously unseen trajectories, and in the context of neural ODEs, significantly more accurate.
N/A
1 Introduction
For continuous-time system identification and control, ordinary differential equations form an essential class of models, deployed in applications ranging from robotics (Spong et al., 2006) to biology (Jones et al., 2009). Here, it is assumed that the evolution of a system is described by the evolution of continuous state variables, whose time-derivative is given by a set of parametrized equations. Often, these equations are derived from first principles, e.g., rigid body dynamics (Wittenburg, 2013), mass action kinetics (Ingalls, 2013), or Hamiltonian dynamics (Greydanus et al., 2019), or chosen for computational convenience (e.g., linear systems (Ljung, 1998)) or parametrized to facilitate system identification (Brunton et al., 2016).
Such construction methods lead to intriguing properties, including guarantees on physical realizability (Wensing et al., 2017), favorable convergence properties (Ortega et al., 2018), or a structure suitable for downstream tasks such as control design (Ortega et al., 2002). However, such models often capture the system dynamics only approximately, leading to a potentially significant discrepancy between the model and reality (Ljung, 1999). Moreover, when expert knowledge is not available, or precise parameter values are cumbersome to obtain, system identification from raw time series data becomes
∗Equal Contribution. Correspondence to trevenl@ethz.ch, wenkph@ethz.ch.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
necessary. In this case, one may seek more expressive nonparametric models instead (Rackauckas et al., 2020; Pillonetto et al., 2014). If the model is completely replaced by a neural network, the resulting model is called neural ODE (Chen et al., 2018). Despite their large number of parameters, as demonstrated by Chen et al. (2018); Kidger et al. (2020); Zhuang et al. (2020, 2021), deterministic neural ODEs can be efficiently trained, enabling accurate deterministic trajectory predictions. For many practical applications however, accurate uncertainty estimates are essential, as they guide downstream tasks like reinforcement learning (Deisenroth and Rasmussen, 2011; Schulman et al., 2015), safety guarantees (Berkenkamp et al., 2017), robust control design (Hjalmarsson, 2005), planning under uncertainty (LaValle, 2006), probabilistic forecasting in meteorology (Fanfarillo et al., 2021), or active learning / experimental design (Srinivas et al., 2010). A common way of obtaining such uncertainties is via a Bayesian framework. However, as observed by Dandekar et al. (2021), Bayesian training of neural ODEs in a dynamics setting remains largely unexplored. They demonstrate that initial variational-based inference schemes for Bayesian neural ODEs suffer from several serious drawbacks and thus propose sampling-based alternatives. However, as surfaced by our experiments in Section 4, sampling-based approaches still exhibit serious challenges. These pertain both to robustness (even if highly informed priors are supplied), and reliance on frequent numerical integration of large neural networks, which poses severe computational challenges for many downstream tasks like sampling-based planning (Karaman and Frazzoli, 2011) or uncertainty propagation in prediction.
Contributions In this work, we propose a novel approach for uncertainty quantification in nonlinear dynamical systems (cf. Figure 1). Crucially, our approach avoids explicit costly and non-robust numerical integration, by employing a probabilistic smoother of the observational data, whose representation we learn jointly across multiple trajectories. To capture dynamics, we regularize our smoother with a dynamics model. The latter captures epistemic uncertainty in the gradients of the ODE, which we match with the smoother’s gradients by minimizing a Wasserstein loss, hence we call our approach Distributional Gradient Matching (DGM). In summary, our main contributions are:
• We develop DGM, an approach2 for capturing epistemic uncertainty about nonlinear dynamical systems by jointly training a smoother and a neural dynamics model;
• We provide a computationally efficient and statistically accurate mechanism for prediction, by focusing directly on the posterior / predictive state distribution.
• We experimentally demonstrate the effectiveness of our approach on learning challenging, chaotic dynamical systems, and generalizing to new unseen initial conditions.
High-level overview A high-level depiction of our algorithm is shown in Figure 2. In principle, DGM jointly learns a smoother (S) and a dynamics model (D). The smoother model, chosen to be a Gaussian process, maps an initial condition x0 and a time t to the state distribution pS(x(t)) and state derivatives distribution pS(ẋ(t)) reached at that time. The dynamics model, chosen to be a neural network, represents an ODE that maps states x(t) to the derivative distribution pD(ẋ(t)). Both models are evaluated at some training times and all their output distributions are collected in the random variables XS, ẊS and ẊD. The parameters of these models are then jointly trained using a Wasserstein-distance-based objective directly on the level of distributions. For more details on every one of these components, we refer to Section 3. There, we introduce all components individually and then present how they interplay. Section 3 builds on known concepts from the literature, which we
2Code is available at: https://github.com/lenarttreven/dgm
summarize in Section 2. Finally, in Section 4, we present the empirical study of the DGM algorithm, where we benchmark it against the state-of-the-art, uncertainty aware dynamics models.
2 Background
2.1 Problem Statement
Consider a continuous-time dynamical system whose K-dimensional state x ∈ RK evolves according to an unknown ordinary differential equation of the form
ẋ = f∗(x). (1)
Here, f∗ is an arbitrary, unknown function assumed to be locally Lipschitz continuous, to guarantee existence and uniqueness of trajectories for every initial condition. In our experiments, we initialize the system at M different initial conditions xm(0), m ∈ {1, . . . ,M}, and let it evolve to generate M trajectories. Each trajectory is then observed at discrete (but not necessarily uniformly spaced) time-points, where the number of observations (Nm)m∈{1...M} can vary from trajectory to trajectory. Thus, a trajectory m is described by its initial condition xm(0), and the observations ym := [xm(tn,m) + ϵn,m]n∈{1...Nm} at times tm := [tn,m]n∈{1...Nm}, where the additive observation noise ϵn,m is assumed to be drawn i.i.d. from a zero mean Gaussian, whose covariance is given by Σϵ := diag(σ²_1, . . . , σ²_K). We denote by D the dataset, consisting of M initial conditions xm(0), observation times tm, and observations ym. To model the unknown dynamical system, we choose a parametric Ansatz ẋ = f(x,θ). Depending on the amount of expert knowledge, this parameterization can follow a white-box, gray-box, or black-box methodology (Bohlin, 2006). In any case, the parametric form of f is fixed a priori (e.g., a neural network), and the key challenge is to infer a reasonable distribution over the parameters θ, conditioned on the data D. For later tasks, we are particularly interested in the predictive posterior state distribution
p(x_new(t_new) | D, t_new, x_new(0)),   (2)
i.e., the posterior distribution of the states starting from a potentially unseen initial condition x_new(0) and evaluated at times t_new. This posterior would then be used by the downstream or prediction tasks described in the introduction.
2.2 Bayesian Parameter Inference
In the case of Bayesian parameter inference, an additional prior p(θ) is imposed on the parameters θ so that the posterior distribution of Equation (2) can be inferred. Unfortunately, this distribution is not analytically tractable for most choices of f(x,θ), which is especially true when we model f with a neural network. Formally, for fixed parameters θ, initial condition x(0) and observation time t, the likelihood of an observation y is given by
p(y(t) | x(0), t, θ, Σ_obs) = N( y(t) | x(0) + ∫_0^t f(x(τ), θ) dτ, Σ_obs ).   (3)
Using the fact that all noise realizations are independent, the expression (3) can be used to calculate the likelihood of all observations in D. Most state-of-the-art parameter inference schemes use this fact to create samples θ̂s of the posterior over parameters p(θ|D) using various Monte Carlo methods. Given a new initial condition x(0) and observation time t, these samples θ̂s can then be turned into samples of the predictive posterior state again by numerically integrating
x̂_s(t) = x(0) + ∫_0^t f(x(τ), θ̂_s) dτ.   (4)
Clearly, both training (i.e., obtaining the samples θ̂s) and prediction (i.e., evaluating Equation (4)) require integrating the system dynamics f many times. Especially when we model f with a neural network, this can be a huge burden, both numerically and computationally (Kelly et al., 2020). As an alternative approach, we can approximate the posterior p(θ|D) with variational inference (Bishop, 2006). However, we run into similar bottlenecks. While optimizing the variational objective, e.g., the ELBO, many integration steps are necessary to evaluate the unnormalized posterior. Also, at inference time, to obtain a distribution over state x̂s(t), we still need to integrate f several times. Furthermore, Dandekar et al. (2021) report poor forecasting performance by the variational approach.
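To make this bottleneck concrete, here is a minimal sketch (not the authors' implementation) of the sampling-based predictive posterior of Equation (4); the posterior samples theta_samples and the dynamics function f are assumed to be given, and every sample requires a full ODE solve.

```python
import numpy as np
from scipy.integrate import solve_ivp

def mc_predictive_posterior(f, theta_samples, x0, t_eval):
    """Monte Carlo predictive state posterior via repeated numerical integration.

    Each posterior parameter sample triggers one ODE solve (Eq. (4)); with a
    neural dynamics model f, this is the computational bottleneck in question.
    """
    rollouts = []
    for theta in theta_samples:
        sol = solve_ivp(lambda t, x: f(x, theta), (t_eval[0], t_eval[-1]),
                        x0, t_eval=t_eval)
        rollouts.append(sol.y.T)                      # shape (T, K)
    rollouts = np.stack(rollouts)                     # shape (S, T, K)
    return rollouts.mean(axis=0), rollouts.std(axis=0)
```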
3 Distributional Gradient Matching
In both the Monte Carlo sampling-based and variational approaches, all information about the dynamical system is stored in the estimates of the system parameters θ̂. This makes these approaches rather cumbersome: Both for obtaining estimates of θ̂ and for obtaining the predictive posterior over states, once θ̂ is found, we need multiple rounds of numerically integrating a potentially complicated (neural) differential equation. We thus have identified two bottlenecks limiting the performance and applicability of these algorithms: namely, numerical integration of f and inference of the system parameters θ. In our proposed algorithm, we avoid both of these bottlenecks by directly working with the posterior distribution in the state space. To this end, we introduce a probabilistic, differentiable smoother model that directly maps a tuple (t, x(0)), consisting of a time point t and an initial condition x(0), to the corresponding distribution over x(t). Thus, the smoother directly replaces the costly, numerical integration steps needed, e.g., to evaluate Equation (2). Albeit computationally attractive, this approach has one serious drawback. Since the smoother no longer explicitly integrates differential equations, there is no guarantee that the obtained smoother model follows any vector field. Thus, the smoother model is strictly more general than the systems described by Equation (1). Unlike ODEs, it is able to capture mappings whose underlying functions violate, e.g., Lipschitz or Markovianity properties, which is clearly not desirable. To address this issue, we introduce a regularization term, Ldynamics, which ensures that a trajectory predicted by the smoother is encouraged to follow some underlying system of the form of Equation (1). The smoother is then trained with the multi-objective loss function
L := L_data + λ · L_dynamics,   (5)
where L_data is a smoother-dependent loss function that ensures a sufficiently accurate data fit, and λ is a trade-off parameter.
3.1 Regularization by Matching Distributions over Gradients
To ultimately define Ldynamics, first choose a parametric dynamics model similar to f(x,θ) in Equation (3), that maps states to their derivatives. Second, define a set of supporting points T with the corresponding supporting gradients Ẋ as
T := { (t_supp,l, x_supp,l(0)) }_{l ∈ {1...N_supp}},   Ẋ := { ẋ_supp,l }_{l ∈ {1...N_supp}}.
Here, the l-th element represents the event that the dynamical system’s derivative at time t_supp,l is ẋ_supp,l, after being initialized at time 0 at initial condition x_supp,l(0). Given both the smoother and the dynamics model, we now have two different ways to calculate distributions over Ẋ given some data D and supporting points T. First, we can directly leverage the differentiability and global nature of our smoother model to extract a distribution pS(Ẋ|D, T) from the smoother model. Second, we can first use the smoother to obtain state estimates and then plug these state estimates into the dynamics model, to obtain a second distribution pD(Ẋ|D, T). Clearly, if the solution proposed by the smoother follows the dynamics, these two distributions should match. Thus, we can regularize the smoother to follow the dynamics by defining Ldynamics as a distance, in some metric, between pD(Ẋ|D, T) and pS(Ẋ|D, T). By minimizing the overall loss, we thus match the distributions over the gradients of the smoother and the dynamics model.
3.2 Smoothing jointly over Trajectories with Deep Gaussian Processes
The core of DGM is formed by a smoother model. In principle, the posterior state distribution of Equation (2) could be modeled by any Bayesian regression technique. However, calculating pS(Ẋ |D, T ) is generally more involved. Here, the key challenge is evaluating this posterior, which is already computationally challenging, e.g., for simple Bayesian neural networks. For Gaussian processes, however, this becomes straightforward, since derivatives of GPs remain GPs (Solak et al., 2003). Thus, DGM uses a GP smoother. For scalability and simplicity, we keep K different, independent smoothers, one for each state dimension. However, if computational complexity is not a concern, our approach generalizes directly to multi-output Gaussian processes. Below, we focus on the one-dimensional case, for clarity of exposition. For notational compactness, all vectors with a
superscript should be interpreted as vectors over time in this subsection. For example, the vector x^(k) consists of all the k-th elements of the state vectors x(t_{n,m}), n ∈ {1, . . . , N_m}, m ∈ {1, . . . , M}. We define a Gaussian process with a differentiable mean function µ(x_m(0), t_{n,m}) as well as a differentiable and positive-definite kernel function K_RBF(ϕ(x_m(0), t_{n,m}), ϕ(x_{m'}(0), t_{n',m'})). Here, the kernel is given by the composition of a standard ARD-RBF kernel (Rasmussen, 2004) and a differentiable feature extractor ϕ parametrized by a deep neural network, as introduced by Wilson et al. (2016). Following Solak et al. (2003), given fixed x_supp, we can now calculate the joint density of (ẋ^(k)_supp, y^(k)) for each state dimension k. Concatenating vectors accordingly across time and trajectories, let
µ^(k) := µ^(k)(x(0), t),   µ̇^(k) := (∂/∂t) µ^(k)(x_supp(0), t_supp),
z^(k) := ϕ^(k)(x(0), t),   z^(k)_supp := ϕ^(k)(x_supp(0), t_supp),
K^(k) := K^(k)_RBF(z^(k), z^(k)),   K̇^(k) := (∂/∂t_1) K^(k)_RBF(z^(k)_supp, z^(k)),   K̈^(k) := (∂²/∂t_1 ∂t_2) K^(k)_RBF(z^(k)_supp, z^(k)_supp).
Then the joint density of (ẋ^(k)_supp, y^(k)) can be written as
[ ẋ^(k)_supp ; y^(k) ] ∼ N( [ µ̇^(k) ; µ^(k) ],  [ K̈^(k)  K̇^(k) ; (K̇^(k))ᵀ  K^(k) + σ²_k I ] ).   (6)
Here we denote by ∂/∂t_1 the partial derivative with respect to time in the first coordinate, by ∂/∂t_2 the partial derivative with respect to time in the second coordinate, and by σ²_k the corresponding noise variance of Σ_obs. Since the conditionals of a joint Gaussian random variable are again Gaussian distributed, p_S is again Gaussian, i.e., p_S(Ẋ_k|D, T) = N( ẋ^(k)_supp | µ_S, Σ_S ) with
µ_S := µ̇^(k) + K̇^(k)(K^(k) + σ²_k I)⁻¹ (y^(k) − µ^(k)),
Σ_S := K̈^(k) − K̇^(k)(K^(k) + σ²_k I)⁻¹ (K̇^(k))ᵀ.   (7)
Here, the index k is used to highlight that this is just the distribution for one state dimension. To obtain the final pS(Ẋ |D, T ), we take the product over all state dimensions k. To fit our model to the data, we minimize the negative marginal log likelihood of our observations, neglecting purely additive terms (Rasmussen, 2004), i.e.,
L_data := Σ_{k=1}^K [ ½ (y^(k) − µ^(k))ᵀ (K^(k) + σ²_k I)⁻¹ (y^(k) − µ^(k)) + ½ logdet(K^(k) + σ²_k I) ].   (8)
Furthermore, the predictive posterior for a new point x^(k)_test given time t_test and initial condition x^(k)_test(0) has the closed form
p_S(x^(k)_test | D_k, t_test, x_test) = N( x^(k)_test | µ^(k)_post, σ²_post,k ),   (9)
where
µ^(k)_post = µ^(k)(x_test(0), t_test) + K^(k)_RBF(z^(k)_test, z^(k))ᵀ (K^(k) + σ²_k I)⁻¹ (y^(k) − µ^(k)),   (10)
σ²_post,k = K^(k)_RBF(z_test, z_test) − K^(k)_RBF(z^(k)_test, z^(k))ᵀ (K^(k) + σ²_k I)⁻¹ K^(k)_RBF(z^(k)_test, z^(k)).   (11)
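As a minimal illustration of how the derivative posterior of Equation (7) is evaluated in practice, the numpy sketch below handles one state dimension; the kernel blocks and mean vectors are assumed to be precomputed (e.g., by differentiating the kernel with automatic differentiation), so all names are placeholders.

```python
import numpy as np

def gp_derivative_posterior(K, K_dot, K_ddot, y, mu, mu_dot, sigma2):
    """Posterior over supporting-point derivatives, one state dimension (Eq. (7)).

    K      : (N, N) kernel between training inputs
    K_dot  : (L, N) time-derivative of the kernel in its first argument
    K_ddot : (L, L) second mixed time-derivative of the kernel
    """
    A = K + sigma2 * np.eye(K.shape[0])
    # Solve linear systems instead of forming the inverse, for stability.
    mu_S = mu_dot + K_dot @ np.linalg.solve(A, y - mu)
    Sigma_S = K_ddot - K_dot @ np.linalg.solve(A, K_dot.T)
    return mu_S, Sigma_S
```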
3.3 Representing Uncertainty in the Dynamics Model via the Reparametrization Trick
As described at the beginning of this section, a key bottleneck of standard Bayesian approaches is the potentially high dimensionality of the dynamics parameter vector θ. The same is true for our approach. If we were to keep track of the distributions over all parameters of our dynamics model, calculating pD(Ẋ |D, T ) quickly becomes infeasible.
However, especially in the case of modeling f with a neural network, the benefits of keeping distributions directly over θ are unclear due to overparametrization. For both the downstream tasks and our training method, we are mainly interested in the distributions in the state space. Usually, the state space is significantly lower dimensional compared to the parameter space of θ. Furthermore, since the exact posterior state distributions are generally intractable, they normally have to be approximated anyway with simpler distributions for downstream tasks (Schulman et al., 2015; Houthooft et al., 2016; Berkenkamp et al., 2017). Thus, we change the parametrization of our dynamics model as follows. Instead of working directly with ẋ(t) = f(x(t), θ) and keeping a distribution over θ, we model uncertainty directly on the level of the vector field as
ẋ(t) = f(x(t), ψ) + Σ_D^{1/2}(x(t), ψ) ϵ,   (12)
where ϵ ∼ N(0, I_K) is drawn once per rollout (i.e., fixed within a trajectory) and Σ_D is a state-dependent and positive semi-definite matrix parametrized by a neural network. Here, ψ are the parameters of the new dynamics model, consisting of both the original parameters θ and the weights of the neural network parametrizing Σ_D. To keep the number of parameters reasonable, we employ a weight sharing scheme, detailed in Appendix B. In spirit, this modeling paradigm is very closely related to standard Bayesian training of NODEs. In both cases, the random distributions capture a distribution over a set of deterministic, ordinary differential equations. This should be seen in stark contrast to stochastic differential equations, where the randomness in the state space, i.e., diffusion, is modeled with a stochastic process. In comparison to (12), the latter is a time-varying disturbance added to the vector field. In that sense, our model still captures the epistemic uncertainty about our system dynamics, while an SDE model captures the intrinsic process noise, i.e., aleatoric uncertainty. While this reparametrization does not allow us to directly calculate pD(Ẋ|D, T), we obtain a Gaussian distribution for the marginals pD(ẋ_supp|x_supp). To retrieve pD(Ẋ|D, T), we use the smoother model’s predictive state posterior to obtain
pD(Ẋ | D, T) = ∫ pD(ẋ_supp, x_supp | D, T) dx_supp   (13)
             ≈ ∫ pD(ẋ_supp | x_supp) pS(x_supp | T, D) dx_supp.   (14)
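A minimal sketch of the reparametrized dynamics model of Equation (12) is given below; f, chol_Sigma_D and psi are hypothetical stand-ins for the neural networks and their parameters, not the authors' code.

```python
import numpy as np

def sample_vector_field(f, chol_Sigma_D, psi, state_dim, rng):
    """Draw one member of the ensemble of deterministic ODEs (Eq. (12)).

    epsilon is sampled once per rollout and then held fixed, so every draw
    is itself a deterministic vector field; the randomness over draws
    captures epistemic (not process) uncertainty.
    """
    eps = rng.standard_normal(size=state_dim)   # fixed within a trajectory
    def x_dot(x):
        # chol_Sigma_D(x, psi) is assumed to return a factor of Sigma_D(x, psi)
        return f(x, psi) + chol_Sigma_D(x, psi) @ eps
    return x_dot
```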
3.4 Comparing Gradient Distributions via the Wasserstein Distance
To compare and eventually match pD(Ẋ |D, T ) and pS(Ẋ |D, T ), we propose to use the Wasserstein distance (Kantorovich, 1939), since it allows for an analytic, closed-form representation, and since it outperforms similar measures (like forward, backward and symmetric KL divergence) in our exploratory experiments. The squared type-2 Wasserstein distance gives rise to the term
W₂²[ pS(Ẋ|D, T), pD(Ẋ|D, T) ] = W₂²[ pS(Ẋ|D, T), E_{x_supp ∼ p_GP(x_supp|D,T)}[ pD(ẋ_supp|x_supp) ] ]   (15)
that we will later use to regularize the smoothing process. To render the calculation of this regularization term computationally feasible, we introduce two approximations. First, observe that an exact calculation of the expectation in Equation (15) requires mapping a multivariate Gaussian through the deterministic neural networks parametrizing f and Σ_D in Equation (12). To avoid complex sampling schemes, we carry out a certainty-equivalence approximation of the expectation, that is, we evaluate the dynamics model on the posterior smoother mean µ_{S,supp}. As a result of this approximation, observe that both pD(Ẋ|D, T) and pS(Ẋ|D, T) become Gaussians. However, the covariance structure of these matrices is very different. Since we use independent GPs for different state dimensions, the smoother only models the covariance between the state values within the same dimension, across different time points. Furthermore, since ϵ, the random variable that captures the randomness of the dynamics across all time-points, is only K-dimensional, the covariance of pD will be degenerate. Thus, we do not match the distributions directly, but instead match the marginals of each state coordinate at each time point independently at the different supporting time points. Hence,
using first marginalization and then the certainty equivalence, Equation (15) reduces to
W₂²[ pS(Ẋ|D, T), pD(Ẋ|D, T) ] ≈ Σ_{k=1}^K Σ_{i=1}^{|Ẋ|} W₂²[ pS(ẋ^(k)_supp(t_supp,i)|D, T), pD(ẋ^(k)_supp(t_supp,i)|D, T) ]
                              ≈ Σ_{k=1}^K Σ_{i=1}^{|Ẋ|} W₂²[ pS(ẋ^(k)_supp(t_supp,i)|D, T), pD(ẋ^(k)_supp(t_supp,i)|µ_{S,supp}) ].   (16)
Conveniently, the Wasserstein distance can now be calculated analytically, since for two one-dimensional Gaussians a ∼ N(µ_a, σ²_a) and b ∼ N(µ_b, σ²_b), we have W₂²[a, b] = (µ_a − µ_b)² + (σ_a − σ_b)².
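Because only one-dimensional Gaussians are involved, the resulting regularizer of Equation (16) is a few lines of code; in this sketch the marginal means and standard deviations of both models are assumed to be given as arrays.

```python
import numpy as np

def w2_gauss_1d(mu_a, sigma_a, mu_b, sigma_b):
    """Squared type-2 Wasserstein distance between two 1-D Gaussians."""
    return (mu_a - mu_b) ** 2 + (sigma_a - sigma_b) ** 2

def dynamics_regularizer(mu_S, std_S, mu_D, std_D):
    """Marginal-matching term of Eq. (16).

    All arguments have shape (K, L): K state dimensions, L supporting time
    points; entries are the marginal means / std devs of the smoother (S)
    and the dynamics model (D) gradient distributions.
    """
    return np.sum(w2_gauss_1d(mu_S, std_S, mu_D, std_D))
```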
3.5 Final Loss Function
As explained in the previous paragraphs, distributional gradient matching trains a smoother regularized by a dynamics model. Both the parameters of the smoother φ, consisting of the trainable parameters of the GP prior mean µ, the feature map ϕ, and the kernel K, and the parameters of the dynamics model ψ are trained concurrently, using the same loss function. This loss consists of two terms, of which the regularization term was already described in Equation (16). While this term ensures that the smoother follows the dynamics, we need a second term ensuring that the smoother also follows the data. To this end, we follow standard GP regression literature, where it is common to learn the GP hyperparameters by maximizing the marginal log likelihood of the observations, i.e. Ldata (Rasmussen, 2004). Combining these terms, we obtain the final objective
L(φ, ψ) := L_data − λ · Σ_{k=1}^K Σ_{i=1}^{|Ẋ|} W₂²[ pS(ẋ^(k)_supp(t_supp,i)|D, T), pD(ẋ^(k)_supp(t_supp,i)|µ_{S,supp}) ].
This loss function is a multi-criteria objective, where fitting the data (via the smoother) and identifying the dynamics model (by matching the marginals) regularize each other. In our preliminary experiments, we found the objective to be quite robust w.r.t. different choices of λ. In the interest of simplicity, we thus set it in all our experiments in Section 4 to a default value of λ = |D|/|Ẋ|, accounting only for the possibility of having different numbers of supporting points and observations. One special case worth mentioning is λ → 0, which corresponds to conventional sequential smoothing, where the second part would be used for identification in a second step, as proposed by Pillonetto and De Nicolao (2010). However, as can be seen in Figure 1, the smoother fails to properly identify the system without any knowledge about the dynamics and thus fails to provide meaningful state or derivative estimates. Thus, especially in the case of sparse observations, joint training is strictly superior. In its final form, unlike its pure Bayesian counterparts, DGM does not require any prior knowledge about the system dynamics. Nevertheless, if some prior knowledge is available, one could add an additional, additive term log(p(ψ)) to L(φ, ψ). It should be noted however that this was not done in any of our experiments, and excellent performance can be achieved without it.
4 Experiments
We now compare DGM against state-of-the-art methods. In a first experiment, we demonstrate the effects of an overparametrized, simple dynamics model on the performance of DGM as well as the traditional, MC-based algorithms SGLD (Stochastic Gradient Langevin Dynamics, (Welling and Teh, 2011)) and SGHMC (Stochastic Gradient Hamiltonian Monte Carlo, (Chen et al., 2014)). We select our baselines based on the results of Dandekar et al. (2021), who demonstrate that both a variational approach and NUTS (No U-Turn Sampler, Hoffman and Gelman (2014)) are inferior to these two. Subsequently, we will investigate and benchmark the ability of DGM to correctly identify neural dynamics models and to generalize across different initial conditions. Since SGLD and SGHMC reach their computational limits in the generalization experiments, we compare against Neural ODE Processes (NDP). Lastly, we will conclude by demonstrating the necessity of all of its components. For all comparisons, we use the Julia implementations of SGLD and SGHMC provided by Dandekar et al. (2021), the PyTorch implementation of NDP provided by Norcliffe et al. (2021), and our own JAX (Bradbury et al., 2018) implementation of DGM.
4.1 Setup
We use known parametric systems from the literature to generate simulated, noisy trajectories. For these benchmarks, we use the two-dimensional Lotka Volterra (LV) system, the three-dimensional, chaotic Lorenz (LO) system, a four-dimensional double pendulum (DP) and a twelve-dimensional quadrocopter (QU) model. For all systems, the exact equations and ground truth parameters are provided in the Appendix A. For each system, we create two different data sets. In the first, we include just one densely observed trajectory, taking the computational limitations of the benchmarks into consideration. In the second, we include many, but sparsely observed trajectories (5 for LV and DP, 10 for LO, 15 for QU). This setting aims to study generalization over different initial conditions.
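As an illustration of this setup, the sketch below generates sparse, noisy Lotka-Volterra trajectories; the parameter values are illustrative placeholders, while the exact equations and ground-truth parameters used in the paper are listed in Appendix A.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, x, a=1.0, b=0.1, c=1.5, d=0.75):
    """Illustrative LV parameters; the paper's values are in Appendix A."""
    prey, pred = x
    return [a * prey - b * prey * pred, -c * pred + d * b * prey * pred]

rng = np.random.default_rng(0)
t_obs = np.linspace(0.0, 10.0, 10)              # sparse observation grid
trajectories = []
for x0 in rng.uniform(1.0, 5.0, size=(5, 2)):   # 5 initial conditions
    sol = solve_ivp(lotka_volterra, (0.0, 10.0), x0, t_eval=t_obs)
    y = sol.y.T + rng.normal(scale=0.1, size=sol.y.T.shape)  # additive noise
    trajectories.append((x0, t_obs, y))
```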
4.2 Metric
We use the log likelihood as a metric to compare the accuracy of our probabilistic models. In the 1-trajectory setting, we take a grid of 100 time points equidistantly on the training trajectory. We then calculate the ground truth and evaluate its likelihood using the predictive distributions of our models. When testing for generalization, we repeat the same procedure for unseen initial conditions.
4.3 Effects of Overparametrization
[Figure: log likelihood of the ground truth as the dynamics model architecture grows from (3,3) to (3,12,9,6,3), comparing SGLD, SGHMC and DGM.]
We report the log likelihood of the ground truth over 10 different noise realizations. The exact procedure for one noise realization is described in Appendix C. While SGLD runs into numerical issues beyond a medium model complexity, the performance of SGHMC continuously disintegrates, while DGM is unaffected. This foreshadows the results of the next two experiments, where we observe that the MC-based approaches are not suitable for the more complicated settings.
4.4 Single Trajectory Benchmarks
In Table 1, we evaluate the log-likelihood of the ground truth for the four benchmark systems, obtained when learning these systems using a neural ODE as a dynamics model (for more details, see appendix B). Clearly, DGM performs the best on all systems, even though we supplied both SGLD and SGHMC with very strong priors and fine-tuned them with an extensive hyperparameter sweep (see Appendix C for more details). Despite this effort, we failed to get SGLD to work on Quadrocopter 1, where it always returned NaNs. This is in stark contrast to DGM, which performs reliably without any pre-training or priors.
4.5 Prediction speed
To evaluate prediction speed, we consider the task of predicting 100 points on a previously unseen trajectory. To obtain a fair comparison, all algorithms’ prediction routines were implemented in JAX (Bradbury et al., 2018). Furthermore, while we used 1000 MC samples when evaluating the predictive posterior for the log likelihood to guarantee maximal accuracy, we only used 200 samples in Table 1. Here, 200 was chosen as a minimal sample size guaranteeing reasonable accuracy, following a preliminary experiment visualized in Appendix C. Nevertheless, the predictions of DGM are 1-2 orders of magnitude faster, as can be seen in Table 1. This further illustrates the advantage of relying on a smoother instead of costly, numerical integration to obtain predictive posteriors in the state space.
4.6 Multi-Trajectory Benchmarks
Next, we take a set of trajectories starting on an equidistant grid of the initial conditions. Each trajectory is then observed at 5 equidistant observation times for LV and DP, and 10 equidistant observation times for the chaotic Lorenz and more complicated Quadrocopter. We test generalization by randomly sampling a new initial condition and evaluating the negative log likelihood of the ground truth at 100 equidistant time points. In Table 2, we compare the generalization performance of DGM against NDP, since despite serious tuning efforts, the MC methods failed to produce meaningful results in this setting. DGM clearly outperforms NDP, a fact which is further exemplified in Figure 4. There, we show the test log likelihood for Lotka Volterra trained on an increasing set of trajectories. Even though the time grid is fixed and we only decrease the distance between initial condition samples, the dynamics model helps the smoother to generalize across time as well. In stark contrast, NDP fails to improve with increasing data after an initial jump.
4.7 Ablation study
We next study the importance of different elements of our approach via an ablation study on the Lorenz 125 dataset, shown in Figure 1. Comparing the two rows, we see that joint smoothing across trajectories is essential to transfer knowledge between different training trajectories. Similarly, comparing the two columns, we see that the dynamics model enables the smoother to reduce its uncertainty in between observation points.
4.8 Computational Requirements
For the one trajectory setting, all DGM related experiments were run on a Nvidia RTX 2080 Ti, where the longest ones took 15 minutes. The comparison methods were given 24h, on Intel Xeon Gold 6140 CPUs. For the multi-trajectory setting, we used Nvidia Titan RTX, where all experiments finished in less than 3 hours. A more detailed run time compilation can be found in Appendix B. Using careful implementation, the run time of DGM scales linearly in the number of dimensions K. However, since we use an accurate RBF kernel for all our experiments reported in this section, we have cubic run time complexity in Σ_{m=1}^M N_m. In principle, this can be alleviated by deploying standard feature approximation methods (Rahimi et al., 2007; Liu et al., 2020). While this is a well known fact, we nevertheless refer the interested reader to a more detailed discussion of the subject in Appendix D.
5 Related work
5.1 Bayesian Parameter Inference with Gaussian Processes
The idea of matching gradients of a (spline-based) smoother and a dynamics model goes back to the work of Varah (1982). For GPs, this idea is introduced by Calderhead et al. (2009), who first fit a GP to the data and then match the parameters of the dynamics. Dondelinger et al. (2013) introduce concurrent training, while Gorbach et al. (2017) introduce an efficient variational inference procedure for systems with a locally-linear parametric form. All these works claim to match the distributions of the gradients of the smoother and dynamics models, by relying on a product of experts heuristics. However, Wenk et al. (2019) demonstrate that this product of experts in fact leads to statistical independence between the observations and the dynamics parameters, and that these algorithms essentially match point estimates of the gradients instead. Thus, DGM is the first algorithm to actually match gradients on the level of distributions for ODEs. In the context of stochastic differential equations (SDEs) with constant diffusion terms, Abbati et al. (2019) deploy MMD and GANs to match their gradient distributions. However, it should be noted that their algorithm treats the parameters of the dynamics model deterministically and thus, they can not provide the epistemic uncertainty estimates that we seek here. Note that our work is not related to the growing literature investigating SDE approximations of Bayesian Neural ODEs in the context of classification (Xu et al., 2021). Similarly to Chen et al. (2018), these works emphasize learning a terminal state of the ODE used for other downstream tasks.
5.2 Gaussian Processes with Operator Constraints
Gradient matching approaches mainly use the smoother as a proxy to infer dynamics parameters. This is in stark contrast to our work, where we treat the smoother as the main model used for prediction. While the regularizing properties of the dynamics on the smoother are explored by Wenk et al. (2020), Jidling et al. (2017) introduce an algorithm to incorporate linear operator constraints directly on the kernel level. Unlike in our work, they can provide strong guarantees that the posterior always follows these constraints. However, it remains unclear how to generalize their approach to the case of complex, nonlinear operators, potentially parametrized by neural dynamics models.
5.3 Other Related Approaches
In some sense, the smoother is mimicking a probabilistic numerical integration step, but without explicitly integrating. In spirit, this approach is similar to the solution networks used in the context of PDEs, as presented by Raissi et al. (2019), which however typically disregard uncertainty. In the context of classical ODE parameter inference, Kersting et al. (2020) deploy a GP to directly mimic a numerical integrator in a probabilistic, differentiable manner. Albeit promising in a classical, parametric ODE setting, it remains unclear how these methods can be scaled up, as there is still the numerical integration bottleneck. Unrelated to their work, Ghosh et al. (2021) present a variational inference scheme in the same, classical ODE setting. However, they still keep distributions over all weights of the neural network (Norcliffe et al., 2021). A similar approach is investigated by Dandekar et al. (2021), who found it to be inferior to the MC methods we use as a benchmark. Variational inference was previously employed by Yildiz et al. (2019) in the context of latent neural ODEs parametrized by a Bayesian neural network, but their work mainly focuses on dimensionality reduction. Nevertheless, their work inspired a model called Neural ODE Processes by Norcliffe et al. (2021). This work is similar to ours in the sense that it avoids keeping distributions over network weights and models an ensemble of deterministic ODEs via a global context variable. Consequently, we use it as a benchmark in Section 4, showing that it does not properly capture epistemic uncertainty in a low data setting, which might be problematic for downstream tasks like reinforcement learning.
6 Conclusion
In this work, we introduced a novel, GP-based collocation method that matches gradients of a smoother and a dynamics model on the distribution level using a Wasserstein loss. Through careful parametrization of the dynamics model, we manage to train complicated, neural ODE models where state-of-the-art methods struggle. We then demonstrate that these models are able to accurately predict unseen trajectories, while capturing epistemic uncertainty relevant for downstream tasks. In future work, we are excited to see how our training regime can be leveraged in the context of active learning of Bayesian neural ordinary differential equations for continuous-time reinforcement learning.
Acknowledgments
This research was supported by the Max Planck ETH Center for Learning Systems. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme grant agreement No 815943 as well as from the Swiss National Science Foundation under NCCR Automation, grant agreement 51NF40 180545. | 1. What is the main contribution of the paper regarding learning dynamics models?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its ability to handle complex systems and uncertainty estimation?
3. Do you have any concerns regarding the combination of different techniques, such as GP's, neural networks, and the Wasserstein metric?
4. How does the reviewer assess the clarity and quality of the paper's content, particularly in terms of providing intuition and understanding of the proposed approach?
5. What are the limitations of the experimental evaluation, and how could it be improved to provide more insightful results? | Summary Of The Paper
Review | Summary Of The Paper
The paper proposes to skip the integration of continuous-time models and directly learn a model that predicts the posterior of the state conditioned on the initial state and the desired time. To learn such a model with 'good' uncertainty estimation, the authors mix GPs, neural networks and the Wasserstein metric (I did not understand how the authors combine these approaches). Within the experiments, the paper shows that the log-likelihood is better for the proposed model on uncontrolled toy systems, e.g. Lotka Volterra, chaotic Lorenz, double pendulum and quadcopter.
Review
First of all, I am an expert on learning dynamics models but not an expert on Bayesian models. My main problem with the paper is that I cannot understand the proposed approach. The authors describe the problem and their approach; however, they then mix all the concepts of Bayesian networks, GPs and the Wasserstein distance and stir it all together, and I personally do not have any understanding of how all of this plays together. Therefore, the paper needs a major revision that restructures the writing such that the reader gets a good understanding of the approach. A good approach is to provide the intuition of the proposed approach and only afterwards dive deep into the details. However, the authors skip the intuitive approach and directly dive deep into the theory, losing the reader in the process. Besides the writing, the performed experiments are not really impressive and only try a few toy domains. Even for the toy domains, the experimental evaluation does not provide much insight except "my number is bigger than yours". It would be beneficial to incorporate more qualitative evaluations and explain to the reader what to look out for. Furthermore, the authors motivate their approach using control and RL but then only model uncontrolled dynamics and never scale the proposed approach to interesting control problems. How does the model work for high-dimensional systems with contacts? Therefore, it would be necessary to evaluate the proposed approach on more complex domains such as walker or humanoid.
As the writing and structure of the paper needs a major revision and the performed experiments do not evaluate whether this model actually improves learned models for control, I cannot recommend the paper for acceptance. |
NIPS | Title
Selecting the independent coordinates of manifolds with large aspect ratios
Abstract
Many manifold embedding algorithms fail apparently when the data manifold has a large aspect ratio (such as a long, thin strip). Here, we formulate success and failure in terms of finding a smooth embedding, showing also that the problem is pervasive and more complex than previously recognized. Mathematically, success is possible under very broad conditions, provided that embedding is done by carefully selected eigenfunctions of the Laplace-Beltrami operator ∆_M. Hence, we propose a bicriterial Independent Eigencoordinate Selection (IES) algorithm that selects smooth embeddings with few eigenvectors. The algorithm is grounded in theory, has low computational overhead, and is successful on synthetic and large real data.
N/A
1 Motivation
We study a well-documented deficiency of manifold learning algorithms. Namely, as shown in [GZKR08], algorithms such as Laplacian Eigenmaps (LE), Local Tangent Space Alignment (LTSA), Hessian Eigenmaps (HLLE), and Diffusion Maps (DM) fail spectacularly when the data has a large aspect ratio, that is, it extends much more in one geodesic direction than in others. This problem, illustrated by the strip in Figure 1, was studied in [GZKR08] from a linear algebraic perspective; [GZKR08] show that, especially when noise is present, the problem is pervasive.
In the present paper, we revisit the problem from a differential geometric perspective. First, we define failure not as distortion, but as drop in the rank of the mapping represented by the embedding algorithm. In other words, the algorithm fails when the map is not invertible, or, equivalently, when the dimension dim φ(M) < dim M = d, where M represents the idealized data manifold, φ the embedding map, and dim denotes the intrinsic dimension. Figure 1 demonstrates that the problem is fixed by choosing the eigenvectors with care. We call this problem the Independent Eigencoordinate Selection (IES) problem, formulate it and explain its challenges in Section 3.
Our second main contribution (Section 4) is to design a bicriterial method that will select, from a set of coordinate functions φ_1, . . . , φ_m, a subset S of small size that provides a smooth full-dimensional embedding of the data. The IES problem requires searching over a combinatorial number of sets. We show (Section 4) how to drastically reduce the computational burden per set for our algorithm. Third, we analyze the proposed criterion in the asymptotic limit (Section 5). Finally (Section 6), we show examples of successful selection on real and synthetic data. The experiments also demonstrate that users of manifold learning for other than toy data must be aware of the IES problem and have tools for handling it. Notations table, proofs, a library of hard examples, extra experiments and analyses are in Supplements A–H; Figure/Table/Equation references with prefix S are in the Supplement.
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
2 Background on manifold learning
Manifold learning (ML) and intrinsic geometry Suppose we observe data X ∈ R^{n×D}, with data points denoted by x_i ∈ R^D for all i ∈ [n], that are sampled from a smooth¹ d-dimensional submanifold M ⊂ R^D. Manifold Learning algorithms map x_i, i ∈ [n] to y_i = φ(x_i) ∈ R^s, where d ≤ s ≪ D, thus reducing the dimension of the data X while preserving (some of) its properties. Here we present the LE/DM algorithm, but our results can be applied to other ML methods with slight modification. The DM [CL06, NLCK06] algorithm embeds the data by solving the minimum eigen-problem of the renormalized graph Laplacian [CL06] matrix L. The desired m dimensional embedding coordinates are obtained from the second to (m + 1)-th principal eigenvectors of the graph Laplacian L, with 0 = λ_0 < λ_1 ≤ . . . ≤ λ_m, i.e., y_i = (φ_1(x_i), . . . , φ_m(x_i)) (see also Supplement B).
To analyze ML algorithms, it is useful to consider the limit of the mapping when the data is the entire manifold M. We denote this limit also by φ, and its image by φ(M) ⊂ R^m. For standard algorithms such as LE/DM, it is known that this limit exists [CL06, BN07, HAvL05, HAvL07, THJ10]. One of the fundamental requirements of ML is to preserve the neighborhood relations in the original data. In mathematical terms, we require that φ : M → φ(M) is a smooth embedding, i.e., that φ is a smooth function (i.e. does not break existing neighborhood relations) whose Jacobian Dφ(x) is full rank d at each x ∈ M (i.e. does not create new neighborhood relations).
The pushforward Riemannian metric A smooth φ does not typically preserve geometric quantities such as distances along curves in M. These concepts are captured by Riemannian geometry, and we additionally assume that (M, g) is a Riemannian manifold, with the metric g induced from R^D. One can always associate with φ(M) a Riemannian metric g*_φ, called the pushforward Riemannian metric [Lee03], which preserves the geometry of (M, g); g*_φ is defined by
⟨u, v⟩_{g*_φ(x)} = ⟨Dφ⁻¹(x)u, Dφ⁻¹(x)v⟩_{g(x)}  for all u, v ∈ T_{φ(x)}φ(M).   (1)
Algorithm 1: RMETRIC
Input: Embedding Y ∈ R^{n×m}, Laplacian L, intrinsic dimension d
1 for all y_i ∈ Y, k = 1 → m, l = 1 → m do
2   [H̃(i)]_{kl} = Σ_{j≠i} L_{ij}(y_{jl} − y_{il})(y_{jk} − y_{ik})
3 end
4 for i = 1 → n do
5   U(i), Σ(i) ← REDUCEDRANKSVD(H̃(i), d)
6   H(i) = U(i)Σ(i)U(i)ᵀ
7   G(i) = U(i)Σ(i)⁻¹U(i)ᵀ
8 end
Return: G(i), H(i) ∈ R^{m×m}, U(i) ∈ R^{m×d}, Σ(i) ∈ R^{d×d}, for i ∈ [n]
In the above, T_x M, T_{φ(x)}φ(M) are tangent subspaces, Dφ⁻¹(x) maps vectors from T_{φ(x)}φ(M) to T_x M, and ⟨·, ·⟩ is the Euclidean scalar product. For each φ(x_i), the associated pushforward Riemannian metric expressed in the coordinates of R^m is a symmetric, semi-positive definite m × m matrix G(i) of rank d. The scalar product ⟨u, v⟩_{g*_φ(x_i)} takes the form uᵀG(i)v. Given an embedding Y = φ(X), G(i) can be estimated by Algorithm 1 (RMETRIC) of [PM13]. The RMETRIC also returns the co-metric H(i), which is the pseudo-inverse of the metric G(i), and its Singular Value Decomposition Σ(i), U(i) ∈ R^{m×d}. The latter represents an orthogonal basis of T_{φ(x)}(φ(M)).
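For illustration, a compact numpy sketch of Algorithm 1 is given below; it assumes the sign and normalization conventions of the Laplacian L match those of [PM13] and is meant to show the computation, not to replace the reference implementation.

```python
import numpy as np

def rmetric(Y, L, d):
    """Sketch of Algorithm 1 (RMETRIC): per-point co-metric H, metric G,
    and reduced-rank bases U, Sigma.

    Y : (n, m) embedding coordinates; L : (n, n) graph Laplacian.
    """
    n, m = Y.shape
    G, H, U, S = [], [], [], []
    for i in range(n):
        diff = Y - Y[i]                             # rows are y_j - y_i
        H_dual = diff.T @ (L[i][:, None] * diff)    # [H~(i)]_kl; j = i adds zero
        u, s, _ = np.linalg.svd(H_dual)             # reduced-rank SVD
        u, s = u[:, :d], s[:d]                      # keep the top-d spectrum
        H.append(u @ np.diag(s) @ u.T)
        G.append(u @ np.diag(1.0 / s) @ u.T)        # pseudo-inverse of H(i)
        U.append(u); S.append(np.diag(s))
    return G, H, U, S
```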
3 IES problem, related work, and challenges
An example Consider a continuous two dimensional strip with width W, height H, and aspect ratio W/H ≥ 1, parametrized by coordinates w ∈ [0, W], h ∈ [0, H]. The eigenvalues and eigenfunctions of the Laplace-Beltrami operator with von Neumann boundary conditions [Str07] are λ_{k1,k2} = (k1π/W)² + (k2π/H)², respectively φ_{k1,k2}(w, h) = cos(k1πw/W) cos(k2πh/H).
Eigenfunctions φ_{1,0}, φ_{0,1} are in bijection with the w, h coordinates (and give a full rank embedding), while the mapping by φ_{1,0}, φ_{2,0} provides no extra information regarding the second dimension h in the underlying manifold (and is rank 1). Theoretically, one can choose as coordinates eigenfunctions indexed by (k1, 0), (0, k2), but, in practice, k1 and k2 are usually unknown, as the eigenvalues are indexed by their rank 0 = λ_0 < λ_1 ≤ λ_2 ≤ · · ·. For a two dimensional strip, it is known [Str07] that φ_{1,0} always corresponds to φ_1 and φ_{0,1} corresponds to φ_{⌈W/H⌉}. Therefore, when W/H > 2, the mapping of the strip to R² by φ_1, φ_2 is low rank, while the mapping by φ_1, φ_{⌈W/H⌉} is full rank. Note that other mappings of rank 2 exist, e.g., φ_1, φ_{⌈W/H⌉+2} (k1 = k2 = 1 in Figure 1b). These embeddings reflect progressively higher frequencies, as the corresponding eigenvalues grow larger.
¹In this paper, a smooth function or manifold will be assumed to be of class at least C³.
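This ordering is easy to verify numerically from the analytic spectrum; the following sketch sorts the (k1, k2) index pairs by eigenvalue and reads off the rank of φ_{0,1}.

```python
import numpy as np

def strip_eigenpairs(W, H, kmax=10):
    """Analytic Laplace-Beltrami spectrum of a [0,W]x[0,H] strip (Neumann BC).

    Returns (k1, k2) index pairs sorted by eigenvalue, so the rank that the
    h-direction eigenfunction phi_{0,1} receives can be read off directly.
    """
    pairs = [(k1, k2) for k1 in range(kmax) for k2 in range(kmax)]
    lam = {p: (p[0] * np.pi / W) ** 2 + (p[1] * np.pi / H) ** 2 for p in pairs}
    return sorted(pairs, key=lam.get)

ordered = strip_eigenpairs(W=4.0, H=1.0)
print(ordered[:6])            # (0,0), then (1,0), (2,0), (3,0), then (0,1), ...
print(ordered.index((0, 1)))  # 4 here, i.e. ceil(W/H), matching the text
```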
Prior work [GZKR08] is the first work to give the IES problem a rigorous analysis. Their paper focuses on rectangles, and the failure illustrated in Figure 1a is defined as obtaining a mapping Y = φ(X) that is not affinely equivalent with the original data. They call this the Price of Normalization and explain it in terms of the variances along w and h. [DTCK18] is the first to frame the failure in terms of the rank of φ_S = {φ_k : k ∈ S ⊆ [m]}, calling it the repeated eigendirection problem. They propose a heuristic, LLRCOORDSEARCH, based on the observation that if φ_k is a repeated eigendirection of φ_1, · · · , φ_{k−1}, one can fit φ_k with local linear regression on the predictors φ_{[k−1]} with low leave-one-out errors r_k. A sequential algorithm [BM17] with an unpredictability constraint in the eigenproblem has also been proposed. Under their framework, the k-th coordinate φ_k is obtained from the top eigenvector of the modified kernel matrix K̃_k, which is constructed from the original kernel K and φ_1, · · · , φ_{k−1}.
Existence of solution Before trying to find an algorithmic solution to the IES problem, we ask the question whether this is even possible, in the smooth manifold setting. Positive answers are given in [Por16], which proves that isometric embeddings by DM with finite m are possible, and more recently in [Bat14], which proves that any closed, connected Riemannian manifold M can be smoothly embedded by its Laplacian eigenfunctions φ_{[m]} into R^m for some m, which depends only on the intrinsic dimension d of M, the volume of M, and lower bounds for injectivity radius and Ricci curvature. The example in Figure 1a demonstrates that, typically, not all m eigenfunctions are needed. I.e., there exists a set S ⊂ [m], so that φ_S is also a smooth embedding. We follow [DTCK18] in calling such a set S independent. It is not known how to find an independent S analytically for a given M, except in special cases such as the strip. In this paper, we propose a finite sample and algorithmic solution, and we support it with asymptotic theoretical analysis.
The IES Problem We are given data X, and the output of an embedding algorithm (DM for simplicity) Y = φ(X) = [φ_1, · · · , φ_m] ∈ R^{n×m}. We assume that X is sampled from a d-dimensional manifold M, with known d, and that m is sufficiently large so that φ(M) is a smooth embedding. Further, we assume that there is a set S ⊆ [m], with |S| = s ≤ m, so that φ_S is also a smooth embedding of M. We propose to find such a set S so that the rank of φ_S is d on M and φ_S varies as slowly as possible.
Challenges (1) Numerically, and on a finite sample, distinguishing between a full rank mapping and a rank-defective one is imprecise. Therefore, we substitute for rank the volume of a unit parallelogram in T_{φ(x_i)}φ(M). (2) Since φ is not an isometry, we must separate the local distortions introduced by φ from the estimated rank of φ at x. (3) Finding the optimal balance between the above desired properties. (4) In [Bat14] it is strongly suggested that s, the number of eigenfunctions needed, may exceed the Whitney embedding dimension (2d), and that this number may depend on injectivity radius, aspect ratio, and so on. Supplement G shows an example of a flat 2-manifold, the strip with cavity, for which s > 2. In this paper, we assume that s and m are given and focus on selecting S with |S| = s; for completeness, in Supplement G we present a heuristic to select s.
(Global) functional dependencies, knots and crossings Before we proceed, we describe three different ways a mapping φ(M) can fail to be invertible. The first, (global) functional dependency, is the case when rank Dφ < d on an open subset of M, or on all of M (yellow curve in Figure 1a); this is the case most widely recognized in the literature (e.g., [GZKR08, DTCK18]). The knot is the case when rank Dφ < d at an isolated point (Figure 1b). Third, the crossing (Figure S8 in Supplement H) is the case when φ : M → φ(M) is not invertible at x, but M can be covered with open sets U such that the restriction φ : U → φ(U) has full rank d. Combinations of these three exemplary cases can occur. The criteria and approach we define are based on the (surrogate) rank of φ, therefore they will not rule out all crossings. We leave the problem of crossings in manifold embeddings to future work, as we believe that it requires an entirely separate approach (based, e.g., on the injectivity radius or density in the co-tangent bundle rather than differential structure).
4 Criteria and algorithm
A geometric criterion We start with the main idea in evaluating the quality of a subset S of coordinate functions. At each data point i, we consider the orthogonal basis U(i) ∈ R^{m×d} of the d dimensional tangent subspace T_{φ(x_i)}φ(M). The projection of the columns of U(i) onto the subspace T_{φ(x_i)}φ_S(M) is U(i)[S, :] ≡ U_S(i). The following Lemma connects U_S(i) and the co-metric H_S(i) defined by φ_S with the full H(i).
Lemma 1. Let H(i) = U(i)Σ(i)U(i)ᵀ be the co-metric defined by the embedding φ, S ⊆ [m], and H_S(i) and U_S(i) defined above. Then H_S(i) = U_S(i)Σ(i)U_S(i)ᵀ = H(i)[S, S].
The proof is straightforward and left to the reader. Note that Lemma 1 is responsible for the efficiency of the search over sets S, given that the push-forward co-metric H_S can be readily obtained as a submatrix of H. Denote by u^S_k(i) the k-th column of U_S(i). We further normalize each u^S_k to length 1 and define the normalized projected volume Vol_norm(S, i) = √(det(U_S(i)ᵀU_S(i))) / ∏_{k=1}^d ‖u^S_k(i)‖₂. Conceptually, Vol_norm(S, i) is the volume spanned by a (non-orthonormal) “basis” of unit vectors in T_{φ_S(x_i)}φ_S(M); Vol_norm(S, i) = 1 when U_S(i) is orthogonal, and it is 0 when rank H_S(i) < d. In Figure 1a, Vol_norm({1, 2}) with φ_{1,2} = {φ_{1,0}, φ_{2,0}} is close to zero, since the projection of the two tangent vectors is parallel to the yellow curve; however Vol_norm({1, ⌈W/H⌉}, i) is almost 1, because the projections of the tangent vectors U(i) will be (approximately) orthogonal. Hence, Vol_norm(S, i) away from 0 indicates a non-singular φ_S at i, and we use the average log Vol_norm(S, i), which penalizes values near 0 highly, as the rank quality R(S) of φ_S.
Higher frequency φ_S maps with high R(S) may exist, being either smooth, such as the embeddings of the strip mentioned previously, or containing knots involving only a small fraction of points, such as φ_{1,0}, φ_{1,1} in Figure 1a. To choose the lowest frequency, slowest varying smooth map, a regularization term consisting of the eigenvalues λ_k, k ∈ S, of the graph Laplacian L is added, obtaining the criterion
L(S; ζ) = (1/n) Σ_{i=1}^n log √(det(U_S(i)ᵀU_S(i))) − (1/n) Σ_{i=1}^n Σ_{k=1}^d log ‖u^S_k(i)‖₂ − ζ Σ_{k∈S} λ_k,   (2)
where the first term is R1(S) = (1/n) Σ_{i=1}^n R1(S; i) and the second term is R2(S) = (1/n) Σ_{i=1}^n R2(S; i).
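For one data point, the ingredient of (2) can be computed directly from the output of RMETRIC, as in the following sketch.

```python
import numpy as np

def vol_norm(U_i, S):
    """Normalized projected volume Vol_norm(S, i) for one data point.

    U_i : (m, d) orthogonal tangent basis U(i) returned by RMETRIC;
    S   : list of selected coordinate indices (0-based).
    """
    U_S = U_i[S, :]                         # project rows onto phi_S
    norms = np.linalg.norm(U_S, axis=0)     # ||u_k^S(i)||_2, k = 1..d
    gram = U_S.T @ U_S
    return np.sqrt(np.linalg.det(gram)) / np.prod(norms)
```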
Algorithm 2: INDEIGENSEARCH
Input: Data X, bandwidth ε, intrinsic dimension d, embedding dimension s, regularizer ζ
1 Y ∈ R^{n×m}, L, λ ∈ R^m ← DIFFMAP(X, ε)
2 U(1), · · · , U(n) ← RMETRIC(Y, L, d)
3 for S ∈ {S′ ⊆ [m] : |S′| = s, 1 ∈ S′} do
4   R1(S) ← 0; R2(S) ← 0
5   for i = 1, · · · , n do
6     U_S(i) ← U(i)[S, :]
7     R1(S) += (1/2n) · log det(U_S(i)ᵀU_S(i))
8     R2(S) += (1/n) · Σ_{k=1}^d log ‖u^S_k(i)‖₂
9   end
10  L(S; ζ) = R1(S) − R2(S) − ζ Σ_{k∈S} λ_k
11 end
12 S* = argmax_S L(S; ζ)
Return: Independent eigencoordinate set S*
Search algorithm With this criterion, the IES problem turns into a subset selection problem parametrized by ζ:
S*(ζ) = argmax_{S⊆[m]; |S|=s; 1∈S} L(S; ζ).   (3)
Note that we force the first coordinate φ_1 to always be chosen, since this coordinate cannot be functionally dependent on previous ones, and, in the case of DM, it also has the lowest frequency. Note also that R1 and R2 are both submodular set functions (proof in Supplement C.3). For large s and d, algorithms for optimizing over the difference of submodular functions can be used (e.g., see [IB12]). For the experiments in this paper, we have m = 20 and d, s = 2 to 4, which enables us to use exhaustive search to handle (3). The exact search algorithm is summarized in Algorithm 2 (INDEIGENSEARCH); a minimal sketch is given below. A greedy variant is also proposed and analyzed in Supplement D. Note that one might be able to search in the continuous space of all s-projections. We conjecture that the objective function (2) will be a difference of convex functions and leave the details as future work².
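The sketch below assumes that the tangent bases U(i) and the Laplacian eigenvalues have already been computed (e.g., with the RMETRIC sketch above); coordinate 1 is index 0 here.

```python
import numpy as np
from itertools import combinations

def ind_eigen_search(U, lams, d, s, zeta):
    """Exhaustive IES search, a sketch of Algorithm 2.

    U    : list of n (m, d) tangent bases from RMETRIC;
    lams : (m,) Laplacian eigenvalues; the first coordinate is forced in.
    """
    n, m = len(U), U[0].shape[0]
    best, best_val = None, -np.inf
    for rest in combinations(range(1, m), s - 1):
        S = [0] + list(rest)
        R1 = R2 = 0.0
        for Ui in U:
            US = Ui[S, :]
            R1 += 0.5 * np.log(np.linalg.det(US.T @ US)) / n
            R2 += np.sum(np.log(np.linalg.norm(US, axis=0))) / n
        val = R1 - R2 - zeta * np.sum(lams[S])
        if val > best_val:
            best, best_val = S, val
    return best
```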
Regularization path and choosing ζ According to (2), the optimal subset S* depends on the parameter ζ. The regularization path ℓ(ζ) = max_{S⊆[m]; |S|=s; 1∈S} L(S; ζ) is the upper envelope of multiple lines (each corresponding to a set S) with slopes −Σ_{k∈S} λ_k and intercepts R(S). The larger ζ is, the more the lower frequency subset penalty prevails, and for sufficiently large ζ the algorithm will output [s]. In the supervised learning framework, regularization parameters are often chosen by cross validation. Here we propose a second criterion that effectively limits how much R(S) may be ignored, or alternatively, bounds ζ by a data dependent quantity. Define the leave-one-out regret of point i as follows:
D(S, i) = R(S^i_*; [n]\{i}) − R(S; [n]\{i}),  with  S^i_* = argmax_{S⊆[m]; |S|=s; 1∈S} R(S; i)   (4)
In the above, we denote $R(S;T) = \frac{1}{|T|}\sum_{i\in T} \big[R_1(S;i) - R_2(S;i)\big]$ for some subset $T \subseteq [n]$. The quantity $D(S,i)$ in (4) measures the gain in $R$ if all the other points $[n]\setminus\{i\}$ choose the optimal subset $S^i_*$. If the regret $D(S,i)$ is larger than zero, it indicates that the alternative choice might be better compared to the original choice $S$. Note that the mean value over all $i$, i.e., $\frac{1}{n}\sum_i D(S,i)$, depends also on the variability of the optimal per-point choices $S^i_*$. Therefore, it might not favor an $S$ even if $S$ is optimal for every $i \in [n]$. Instead, we propose to inspect the distribution of $D(S,i)$, and remove the sets $S$ whose $\alpha$-th percentile is larger than zero, e.g., $\alpha = 75\%$, recursively from $\zeta = \infty$ in decreasing order. Namely, the chosen set is $S^* = S^*(\zeta')$ with $\zeta' = \max\{\zeta : \mathrm{PERCENTILE}(\{D(S^*(\zeta), i)\}_{i=1}^n, \alpha) \le 0\}$. The optimal $\zeta^*$ value is simply chosen to be the midpoint of all the $\zeta$'s that output set $S^*$, i.e., $\zeta^* = \frac{1}{2}(\zeta' + \zeta'')$, where $\zeta'' = \min\{\zeta : S^*(\zeta) = S^*(\zeta')\}$. The procedure REGUPARAMSEARCH is summarized in Algorithm S5; a sketch of the regret computation is given below.
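A minimal sketch of (4) and the percentile rule (our illustration, not Algorithm S5 itself), assuming the per-point scores $R_1(S_j;i) - R_2(S_j;i)$ for every candidate set $S_j$ have been cached row-wise:

```python
import numpy as np

def loo_regret(R_pt, j):
    """Leave-one-out regret D(S_j, i) of eq. (4), for every point i.

    R_pt : (n_sets, n) array with R_pt[j, i] = R1(S_j; i) - R2(S_j; i)
    j    : row index of the candidate set S
    """
    n = R_pt.shape[1]
    # R(S; [n] \ {i}) = (total over all points - point i) / (n - 1)
    R_loo = (R_pt.sum(axis=1, keepdims=True) - R_pt) / (n - 1)
    S_star = R_pt.argmax(axis=0)             # per-point optimal set S_*^i
    return R_loo[S_star, np.arange(n)] - R_loo[j]

def survives(R_pt, j, alpha=75):
    """Keep S_j only if its alpha-th regret percentile is at most zero."""
    return np.percentile(loo_regret(R_pt, j), alpha) <= 0
```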
5 R as Kullback-Leibler divergence
In this section we analyze $R$ in its population version, and show that it is reminiscent of a Kullback-Leibler divergence between unnormalized measures on $\phi_S(\mathcal{M})$. The population version of the regularization term takes the form of a well-known smoothness penalty on the embedding coordinates $\phi_S$. Proofs of the theorems can be found in Supplement C.
Volume element and the Riemannian metric Consider a Riemannian manifold $(\mathcal{M}, g)$ mapped by a smooth embedding $\phi_S$ into $(\phi_S(\mathcal{M}), g^*_{\phi_S})$, $\phi_S : \mathcal{M} \to \mathbb{R}^s$, where $g^*_{\phi_S}$ is the push-forward metric defined in (1). A Riemannian metric $g$ induces a Riemannian measure on $\mathcal{M}$, with volume element $\sqrt{\det g}$. Denote now by $\mu_\mathcal{M}$, respectively $\mu_{\phi_S(\mathcal{M})}$, the Riemannian measures corresponding to the metrics induced on $\mathcal{M}$, $\phi_S(\mathcal{M})$ by the ambient spaces $\mathbb{R}^D$, $\mathbb{R}^s$; let $g$ be the former metric.

Lemma 2. Let $S$, $\phi$, $\phi_S$, $\mathbf{H}_S(x)$, $\mathbf{U}_S(x)$, $\Sigma(x)$ be defined as in Section 4 and Lemma 1. For simplicity, we denote by $\mathbf{H}_S(y) \equiv \mathbf{H}_S(\phi_S^{-1}(y))$, and similarly for $\mathbf{U}_S(y)$, $\Sigma(y)$. Assume that $\phi_S$ is a smooth embedding. Then, for any measurable function $f : \mathcal{M} \to \mathbb{R}$,

$$\int_\mathcal{M} f(x)\, d\mu_\mathcal{M}(x) = \int_{\phi_S(\mathcal{M})} f(\phi_S^{-1}(y))\, j_S(y)\, d\mu_{\phi_S(\mathcal{M})}(y), \qquad (5)$$

with

$$j_S(y) = 1/\mathrm{Vol}\big(\mathbf{U}_S(y)\,\Sigma_S^{1/2}(y)\big). \qquad (6)$$
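As a pointwise sanity check of (6), $j_S$ can be computed directly from the RMETRIC outputs at a single point. A short sketch, under the assumption that $\mathrm{Vol}(A)$ denotes the $d$-dimensional parallelepiped volume $\sqrt{\det(A^\top A)}$ and that $\Sigma$ is passed as its diagonal:

```python
import numpy as np

def volume_element(U_S, sigma):
    """j_S(y) = 1 / Vol(U_S(y) Sigma_S^{1/2}(y)), eq. (6).

    U_S   : (s, d) projected tangent basis at this point
    sigma : (d,) singular values of the co-metric H at this point
    """
    A = U_S * np.sqrt(sigma)        # scale column k by sigma_k^{1/2}
    return 1.0 / np.sqrt(np.linalg.det(A.T @ A))
```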
Asymptotic limit of R We now study the first term of our criterion in the limit of infinite sample size. We make the following assumptions.

Assumption 1. The manifold $\mathcal{M}$ is compact of class $C^3$, and there exists a set $S$, with $|S| = s$, so that $\phi_S$ is a smooth embedding of $\mathcal{M}$ in $\mathbb{R}^s$.
² We thank the anonymous reviewer who made this suggestion.
Assumption 2. The data are sampled from a distribution on $\mathcal{M}$ continuous with respect to $\mu_\mathcal{M}$, whose density is denoted by $p$.

Assumption 3. The estimate of $\mathbf{H}_S$ in Algorithm 1 computed w.r.t. the embedding $\phi_S$ is consistent.
We know from [Bat14] that Assumption 1 is satisfied for the DM/LE embedding. The remaining assumptions are minimal requirements ensuring that limits of our quantities exist. Now consider the setting in Section 3, in which we have a larger set of eigenfunctions, $[m]$, so that $[m]$ contains the set $S$ of Assumption 1. Denote by

$$\tilde{j}_S(y) = \prod_{k=1}^{d} \Big(\|\mathbf{u}^S_k(y)\|\, \sigma_k(y)^{1/2}\Big)^{-1}$$

a new volume element, where $\sigma_k = [\Sigma]_{kk}$.

Theorem 3 (Limit of R). Under Assumptions 1–3,
$$\lim_{n\to\infty} \frac{1}{n}\sum_{i} \ln R(S, \mathbf{x}_i) = R(S, \mathcal{M}), \qquad (7)$$

and

$$R(S,\mathcal{M}) = \int_{\phi_S(\mathcal{M})} \ln\frac{j_S(y)}{\tilde{j}_S(y)}\; j_S(y)\, p(\phi_S^{-1}(y))\, d\mu_{\phi_S(\mathcal{M})}(y) \;\overset{\mathrm{def}}{=}\; D(p\,j_S \,\|\, p\,\tilde{j}_S) \qquad (8)$$
The expression $D(\cdot\|\cdot)$ represents a Kullback-Leibler divergence. Note that $j_S \ge \tilde{j}_S$, which implies that $D$ is always positive, and that the measures defined by $p\,j_S$, $p\,\tilde{j}_S$ normalize to different values. By definition, local injectivity is related to the volume element $j$. Intuitively, $p\,j_S$ is the observation and $p\,\tilde{j}_S$, where $\tilde{j}_S$ is the minimum attainable for $j_S$, is the model; the objective itself is looking for a view $\phi_S$ of the data that agrees with the model.
It is known that $\lambda_k$, the $k$-th eigenvalue of the Laplacian, converges under certain technical conditions [BN07] to an eigenvalue of the Laplace-Beltrami operator $\Delta_\mathcal{M}$, and that

$$\lambda_k(\Delta_\mathcal{M}) = \langle \phi_k, \Delta_\mathcal{M}\phi_k \rangle = \int_\mathcal{M} \|\operatorname{grad}\phi_k(x)\|_2^2\, d\mu(\mathcal{M}). \qquad (9)$$
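As a concrete instance of (9), consider the strip eigenfunction $\phi_{k,0}(w,h) = \cos(k\pi w/W)$ from Section 3 (a hand computation of ours, for illustration):

$$\int_\mathcal{M} \|\operatorname{grad}\phi_{k,0}\|_2^2\, d\mu = \int_0^H\!\!\int_0^W \Big(\frac{k\pi}{W}\Big)^2 \sin^2\Big(\frac{k\pi w}{W}\Big)\, dw\, dh = \Big(\frac{k\pi}{W}\Big)^2 \frac{WH}{2},$$

while $\int_\mathcal{M} \phi_{k,0}^2\, d\mu = WH/2$, so for the normalized eigenfunction the right-hand side of (9) equals $(k\pi/W)^2 = \lambda_{k,0}$: higher-frequency coordinates carry proportionally larger gradient energy, which is exactly what the regularizer penalizes.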
Hence, a smaller value of the regularization term encourages the use of slowly varying coordinate functions, as measured by the squared norms of their gradients, as in equation (9). Thus, under Assumptions 1, 2, 3, $\mathcal{L}$ converges to

$$\mathcal{L}(S,\mathcal{M}) = D(p\,j_S\,\|\,p\,\tilde{j}_S) - \left(\frac{\zeta}{\lambda_1(\mathcal{M})}\right)\sum_{k\in S}\lambda_k(\mathcal{M}). \qquad (10)$$
Since eigenvalues scale with the volume of $\mathcal{M}$, the rescaling of $\zeta$ in comparison with equation (2) makes the $\zeta$ above dimensionless.
6 Experiments
We demonstrate the proposed algorithm on three synthetic datasets, one where the minimum embedding dimension $s$ equals $d$ (D1, long strip), and two (D7, high torus, and D13, three-torus) where $s > d$. The complete list of synthetic manifolds investigated (transformations of 2-dimensional strips, 3-dimensional cubes, two- and three-tori, etc.) can be found in Supplement H and Table S2. The examples have (i) aspect ratio of at least 4, (ii) points sampled non-uniformly from the underlying manifold $\mathcal{M}$, and (iii) Gaussian noise added. The sample size of the synthetic datasets is $n = 10{,}000$ unless otherwise stated. Additionally, we analyze several real datasets from chemistry and astronomy. All embeddings are computed with the DM algorithm, which outputs $m = 20$ eigenvectors. Hence, we examine 171 sets for $s = 3$ and 969 sets for $s = 4$. No more than 2 to 5 of these sets appear on the regularization path. Detailed experimental results are in Table S3. In this section, we show the original dataset $\mathbf{X}$, the embedding $\phi_{S^*}$, with $S^*$ selected by INDEIGENSEARCH and $\zeta^*$ from REGUPARAMSEARCH, and the maximizer sets on the regularization path with box plots of $D(S,i)$ as discussed in Section 4. The $\alpha$ threshold for REGUPARAMSEARCH is set to 75%. The kernel bandwidth $\varepsilon$ for synthetic datasets is chosen manually. For real datasets, $\varepsilon$ is optimized as in [JMM17]. All experiments are replicated more than 5 times, and the outputs are similar because of the large sample size $n$.
Synthetic manifolds The results for the synthetic manifolds are in Figure 2. (i) Manifold with $s = d$. The first synthetic dataset we considered, D1, is a two-dimensional strip with aspect ratio $W/H = 2\pi$. The left panel of the top row shows the scatter plot of this dataset. From the theoretical analysis in Section 3, the coordinate set that corresponds to the slowest varying unique eigendirections is $S = \{1, \lceil W/H \rceil\} = \{1, 7\}$. The middle panel, with $S^* = \{1, 7\}$ selected by INDEIGENSEARCH with $\zeta$ chosen by REGUPARAMSEARCH, confirms this. The right panel shows the box plot of $\{D(S,i)\}_{i=1}^n$. According to the proposed procedure, we eliminate $S' = \{1, 2\}$ since $D(S', i) \ge 0$ for almost all points. (ii) Manifold with $s > d$. The second dataset, D7, is displayed in the left panel of the second row. Due to the mechanism we used to generate the data, the resulting torus is non-uniformly distributed along the $z$ axis. The middle panel is the embedding of the optimal coordinate set $S^* = \{1, 4, 5\}$ selected by INDEIGENSEARCH. Note that the middle region (in red) is indeed a two-dimensional narrow tube when zoomed in. The right panel indicates that both $\{1, 2, 3\}$ and $\{1, 2, 4\}$ (median around zero) should be removed. The optimal regularization parameter is $\zeta^* \approx 7$. The result for the third dataset, D13, the three-torus, is in the third row of the figure. We display only the projections onto the penultimate and last coordinates ($\phi_5, \phi_{10}$) of the original data $\mathbf{X}$ and of the embedding $\phi_{S^*}$, colored by $\alpha_1$ of (S15), in the left and middle panels to conserve space. The full combinations of coordinates can be found in Figure S5. The right panel implies one should eliminate the sets $\{1, 2, 3, 4\}$ and $\{1, 2, 3, 5\}$, since both of them have more than 75% of points with $D(S, i) \ge 0$. The first remaining subset is $\{1, 2, 5, 10\}$, which yields an optimal regularization parameter $\zeta^* \approx 5$.
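The index $\lceil W/H \rceil = 7$ used for D1 can be verified by enumerating the analytic rectangle spectrum from Section 3; a quick check of ours (variable names hypothetical):

```python
import numpy as np

W, H = 2 * np.pi, 1.0   # aspect ratio W/H = 2*pi, as in dataset D1
# Neumann strip eigenvalues: lambda_{k1,k2} = (k1*pi/W)^2 + (k2*pi/H)^2
modes = [(k1, k2) for k1 in range(8) for k2 in range(2) if (k1, k2) != (0, 0)]
modes.sort(key=lambda m: (m[0] * np.pi / W) ** 2 + (m[1] * np.pi / H) ** 2)
print(modes[:7])
# [(1, 0), (2, 0), (3, 0), (4, 0), (5, 0), (6, 0), (0, 1)]
# phi_{0,1}, the first eigenfunction varying along h, is indeed phi_7
```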
Molecular dynamics dataset [FTP16] The dataset has size $n \approx 30{,}000$ and ambient dimension $D = 40$, with the intrinsic dimension estimated as $\hat{d} = 2$ (see Supplement H.1 for details). The embedding with coordinate set $S = [3]$ is shown in Figure 3a. The first three eigenvectors parameterize the same directions, which yields a one-dimensional manifold in the figure. The top view ($S = [2]$) of the figure is a u-shaped structure similar to the yellow curve in Figure 1a. The heat map of $\mathcal{L}(\{1, i, j\})$ for different combinations of coordinates in Figure 3b confirms that $\mathcal{L}$ for $S = [3]$ is low and that $\phi_1$, $\phi_2$ and $\phi_3$ give a low rank mapping. The heat map also shows high $\mathcal{L}$ values for $S_1 = \{1, 4, 6\}$ and $S_2 = \{1, 5, 7\}$, which correspond to the top two ranked subsets. The embeddings with $S_1$, $S_2$ are in Figures 3c and 3d, respectively. In this case, we obtain two optimal $S$ sets due to the data symmetry.
Galaxy spectra from the Sloan Digital Sky Survey (SDSS)³ [AAMA+09], preprocessed as in [MMVZ16]. We display a sample of $n = 50{,}000$ points from the first 0.3 million points, which correspond to closer galaxies. Figures 3e and 3f show that the first two coordinates are almost dependent; the embedding with $S^* = \{1, 3\}$ is selected by INDEIGENSEARCH with $d = 2$. Both plots are colored by the blue spectrum magnitude, which is correlated to the number of young stars in the galaxy, showing that this galaxy property varies smoothly and non-linearly with $\phi_1, \phi_3$, but is not smooth w.r.t. $\phi_1, \phi_2$.
Comparison with [DTCK18] The LLRCOORDSEARCH method outputs similar candidate coordinates to our proposed algorithm most of the time (see Table S3). However, the results differ for the high torus, as in Figure 3. Figure 3h shows the leave-one-out (LOO) error $r_k$ versus coordinates. The coordinate set chosen by LLRCOORDSEARCH was $S = \{1, 2, 5\}$, as in Figure 3g. The embedding is clearly shown to be suboptimal, for it failed to capture the cavity within the torus. This is because the algorithm searches in a sequential fashion; the noise eigenvector $\phi_2$ in this example appears before the signal eigenvectors, e.g., $\phi_4$ and $\phi_5$.
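For reference, the score underlying LLRCOORDSEARCH can be sketched as a leave-one-out kernel-weighted linear regression of $\phi_k$ on $\phi_1, \dots, \phi_{k-1}$; this is a simplified rendering of ours, and the actual procedure in [DTCK18] differs in details:

```python
import numpy as np

def llr_loo_error(Phi, k, eps):
    """LOO error r_k of predicting phi_k from phi_1..phi_{k-1} by
    locally (kernel-) weighted linear regression; simplified sketch."""
    X, y = Phi[:, :k - 1], Phi[:, k - 1]      # Phi[:, j-1] holds phi_j
    n = len(y)
    A = np.hstack([np.ones((n, 1)), X])       # affine design matrix
    err = 0.0
    for i in range(n):
        w = np.exp(-np.sum((X - X[i]) ** 2, axis=1) / eps)
        w[i] = 0.0                            # leave point i out of the fit
        WA = A * w[:, None]
        beta = np.linalg.lstsq(WA.T @ A, WA.T @ y, rcond=None)[0]
        err += (y[i] - A[i] @ beta) ** 2
    return np.sqrt(err / n)
```

The double pass over points (the LOO loop and the kernel weights) makes this score quadratic in $n$, consistent with the runtime comparison below.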
Additional experiments with real data are shown in Table 1. Not surprisingly, for most real datasets we examined, the independent coordinates are not the first $s$. These experiments also show that the algorithm scales well and is robust to the noise present in real data.
The asymptotic runtime of LLRCOORDSEARCH has a quadratic dependency on $n$, while that of our algorithm is linear in $n$. Details of the runtime analysis are in Supplement F. LLRCOORDSEARCH was too slow to be tested on the four larger datasets (see also Figure S1).
³ The Sloan Digital Sky Survey data can be downloaded from https://www.sdss.org
7 Conclusion
Algorithms that use eigenvectors, such as DM, are among the most promising and well studied in ML. It is known since [GZKR08] that when the aspect ratio of a low dimensional manifold exceeds a threshold, the choice of eigenvectors becomes non-trivial, and that this threshold can be as low as 2. Our experimental results confirm the need to augment ML algorithms with IES methods in order to successfully apply ML to real world problems. Surprisingly, the IES problem has received little attention in the ML literature, to the extent that the difficulty and complexity of the problem have not been recognized. Our paper advances the state of the art by (i) introducing for the first time a differential geometric definition of the problem, (ii) highlighting geometric factors such as injectivity radius that, in addition to aspect ratio, influence the number of eigenfunctions needed for a smooth embedding, (iii) constructing selection criteria based on intrinsic manifold quantities, (iv) which have analyzable asymptotic limits, (v) can be computed efficiently, and (vi) are also robust to the noise present in real scientific data. The library of hard synthetic examples we constructed will be made available along with the python software implementation of our algorithms.
Acknowledgements
The authors acknowledge partial support from the U.S. Department of Energy, Solar Energy Technology Office award DE-EE0008563 and from the NSF DMS PD 08-1269 and NSF IIS-0313339 awards. They are grateful to the Tkatchenko and Pfaendtner labs and in particular to Stefan Chmiela and Chris Fu for providing the molecular dynamics data and for many hours of brainstorming and advice. | 1. What is the focus of the paper regarding manifold learning?
2. What are the strengths of the proposed method, particularly its originality and scientific soundness?
3. What are the weaknesses of the paper, especially regarding the choice of kernel bandwidth and its impact on results?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns or suggestions for improving the paper, such as relating to previous works or addressing specific aspects of the method? | Review | Review
The authors propose a criterion and method for selecting independent diffusion coordinates to capture the structure of a manifold with a large aspect ratio. The ideas presented in the paper are original, and the paper is clearly written, well organized and scientifically sound. The theoretical background and new analysis are provided in a clear and well-written form. The authors provide sufficient information to allow reproducibility of the method. Simulations are provided to support the success of the method; furthermore, the method is compared to an alternative approach. The paper indeed addresses a real problem in manifold learning, and the proposed method might be used by others in the future.

I have a few minor concerns:

- The authors do not relate to "Non-Redundant Spectral Dimensionality Reduction", Michaeli et al., probably unintentionally. However, I believe that this method provides a true alternative to the proposed method and this should be addressed.
- The choice of the kernel bandwidth ($\epsilon$) is not addressed; this parameter could dramatically affect the results. Moreover, in some cases, if $\epsilon$ is chosen as a diagonal matrix (i.e., a different number for each coordinate), the aspect ratio problem could be fixed (see for example "Kernel Scaling for Manifold Learning and Classification").

To summarize, I think the paper should be accepted and hope that these minor changes could be easily addressed to improve this manuscript.

Response to rebuttal: The authors have addressed all my comments in the rebuttal; my opinion is unchanged. I think that the paper should be accepted with the appropriate edits included in the final version.
NIPS | Title
Selecting the independent coordinates of manifolds with large aspect ratios
| 1. What is the novel solution proposed by the authors to identify a parsimonious subset of eigenvectors from a diffusion map embedding?
2. How does the new criterion for evaluating the independence of a set of eigenvectors improve upon existing work?
3. Can you provide examples or experiments that demonstrate the effectiveness and scalability of the proposed method?
4. How does the method handle noisy data, and what are the limitations of its robustness?
5. How can the utility of the embeddings provided for real datasets be assessed, and what additional interpretations or visualizations could be provided to enhance understanding?
6. Are there any typos or errors in the text or figures that need correction? | Review | Review
The authors provide a novel solution to the problem first identified in [DTCK18], that of identifying a parsimonious subset of eigenvectors from a diffusion map embedding. From the perspective of differential geometry, the authors identify a new criterion for evaluating the independence of a set of eigenvectors and use this to identify suitable independent subsets of eigenvectors of the diffusion map. This area of manifold embedding is relatively understudied, and the solution by the authors seems elegant, improves on existing work, and is scalable to large datasets. The paper is also accompanied by an impressive number of experiments.

Comments:

1. The authors claim that their method is robust to "noise present in real scientific data". However, it is hard to determine whether or not this is the case given the examples provided. An experiment on synthetic data with added noise would improve this claim.
2. Some of the figures in the main text were difficult to parse. It appears that in Figure 1a the y-axis is mislabeled and contains multiple overlaid plots. It is also difficult to assess the utility of the embeddings provided for the real datasets in Figure 3 as there is no ground truth geometry that we can reference. It would be useful to know how the coloring is used for the Chloromethane dataset (or what the data actually is) and to have some more interpretation of the utility of the embedding.

Typographical comments:

1. Line 98: "regreesion" -> regression
2. "Chloromethane" is misspelled in the Fig. 3 legend

Based on the strength of the experimental results and theoretical interpretation of this problem I recommend accepting this paper.

Update (Aug 11): Reading the other reviews and the author feedback, my opinion of this paper has not changed. I agree with reviewer three that considering the problem of selecting the ideal subspace for dimensionality reduction (as opposed to the ideal subset of eigenvectors) is an interesting problem and perhaps will yield interesting progress in the field. However, that does not detract from the significance of the problem considered in this work. The authors are thorough and their response to the request for analysis of robustness to noise is satisfactory. I do not wish to revise my score.
NIPS | Title
Selecting the independent coordinates of manifolds with large aspect ratios
Abstract
Many manifold embedding algorithms fail apparently when the data manifold has a large aspect ratio (such as a long, thin strip). Here, we formulate success and failure in terms of finding a smooth embedding, showing also that the problem is pervasive and more complex than previously recognized. Mathematically, success is possible under very broad conditions, provided that embedding is done by carefully selected eigenfunctions of the Laplace-Beltrami operator M. Hence, we propose a bicriterial Independent Eigencoordinate Selection (IES) algorithm that selects smooth embeddings with few eigenvectors. The algorithm is grounded in theory, has low computational overhead, and is successful on synthetic and large real data.
N/A
Many manifold embedding algorithms fail apparently when the data manifold has a large aspect ratio (such as a long, thin strip). Here, we formulate success and failure in terms of finding a smooth embedding, showing also that the problem is pervasive and more complex than previously recognized. Mathematically, success is possible under very broad conditions, provided that embedding is done by carefully selected eigenfunctions of the Laplace-Beltrami operator M. Hence, we propose a bicriterial Independent Eigencoordinate Selection (IES) algorithm that selects smooth embeddings with few eigenvectors. The algorithm is grounded in theory, has low computational overhead, and is successful on synthetic and large real data.
1 Motivation
We study a well-documented deficiency of manifold learning algorithms. Namely, as shown in [GZKR08], algorithms such as Laplacian Eigenmaps (LE), Local Tangent Space Alignment (LTSA), Hessian Eigenmaps (HLLE), and Diffusion Maps (DM) fail spectacularly when the data has a large aspect ratio, that is, it extends much more in one geodesic direction than in others. This problem, illustrated by the strip in Figure 1, was studied in [GZKR08] from a linear algebraic perspective; [GZKR08] show that, especially when noise is present, the problem is pervasive.
In the present paper, we revisit the problem from a differential geometric perspective. First, we define failure not as distortion, but as drop in the rank of the mapping represented by the embedding algorithm. In other words, the algorithm fails when the map is not invertible, or, equivalently, when the dimension dim (M) < dimM = d, where M represents the idealized data manifold, and dim denotes the intrinsic dimension. Figure 1 demonstrates that the problem is fixed by choosing the eigenvectors with care. We call this problem the Independent Eigencoordinate Selection (IES) problem, formulate it and explain its challenges in Section 3.
Our second main contribution (Section 4) is to design a bicriterial method that will select from a set of coordinate functions 1, . . . m, a subset S of small size that provides a smooth full-dimensional embedding of the data. The IES problem requires searching over a combinatorial number of sets. We show (Section 4) how to drastically reduce the computational burden per set for our algorithm. Third, we analyze the proposed criterion under asymptotic limit (Section 5). Finally (Section 6), we show examples of successful selection on real and synthetic data. The experiments also demonstrate that users of manifold learning for other than toy data must be aware of the IES problem and have tools for handling it. Notations table, proofs, a library of hard examples, extra experiments and analyses are in Supplements A–H; Figure/Table/Equation references with prefix S are in the Supplement.
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
2 Background on manifold learning
Manifold learning (ML) and intrinsic geometry Suppose we observe data X 2 Rn⇥D, with data points denoted by xi 2 RD 8 i 2 [n], that are sampled from a smooth1 d-dimensional submanifold M ⇢ RD. Manifold Learning algorithms map xi, i 2 [n] to yi = (xi) 2 Rs, where d s⌧ D, thus reducing the dimension of the data X while preserving (some of) its properties. Here we present the LE/DM algorithm, but our results can be applied to other ML methods with slight modification. The DM [CL06, NLCK06] algorithm embeds the data by solving the minimum eigen-problem of the renormalized graph Laplacian [CL06] matrix L. The desired m dimensional embedding coordinates are obtained from the second to m + 1-th principal eigenvectors of graph Laplacian L, with 0 = 0 < 1 . . . m, i.e., yi = ( 1(xi), . . . m(xi)) (see also Supplement B).
To analyze ML algorithms, it is useful to consider the limit of the mapping when the data is the entire manifold M. We denote this limit also by , and its image by (M) 2 Rm. For standard algorithms such as LE/DM, it is known that this limit exists [CL06, BN07, HAvL05, HAvL07, THJ10]. One of the fundamental requirements of ML is to preserve the neighborhood relations in the original data. In mathematical terms, we require that : M ! (M) is a smooth embedding, i.e., that is a smooth function (i.e. does not break existing neighborhood relations) whose Jacobian D (x) is full rank d at each x 2M (i.e. does not create new neighborhood relations).
The pushforward Riemannian metric A smooth does not typically preserve geometric quantities such as distances along curves in M. These concepts are captured by Riemannian geometry, and we additionally assume that (M, g) is a Riemannian manifold, with the metric g induced from RD. One can always associate with (M) a Riemannian metric g⇤ , called the pushforward Riemannian metric [Lee03], which preserves the geometry of (M, g); g⇤ is defined by
hu,vig⇤ (x) = ⌦ D 1(x)u,D 1(x)v ↵ g(x) for all u,v 2 T (x) (M) (1)
Algorithm 1: RMETRIC Input : Embedding Y 2 Rn⇥m, Laplacian L,
intrinsic dimension d 1 for all yi 2 Y, k = 1! m, l = 1! m do 2 [H̃(i)]kl = P j 6=i Lij(yjl yil)(yjk yik)
3 end
4 for i = 1! n do 5 U(i), ⌃(i) REDUCEDRANKSVD(H̃(i), d) 6 H(i) = U(i)⌃(i)U(i)> 7 G(i) = U(i)⌃ 1(i)U(i)>
8 end
Return: G(i),H(i) 2 Rm⇥m, U(i) 2 Rm⇥d, ⌃(i) 2 Rd⇥d, for i 2 [n]
In the above, TxM, T (x) (M) are tangent subspaces, D 1(x) maps vectors from T (x) (M) to TxM, and h, i is the Euclidean scalar product. For each (xi), the associated pushforward Riemannian metric expressed in the coordinates of Rm, is a symmetric, semi-positive definite m ⇥ m matrix G(i) of rank d. The scalar product hu,vig⇤ (xi) takes the form u > G(i)v. Given an embedding Y =
(X), G(i) can be estimated by Algorithm 1 (RMETRIC) of [PM13]. The RMETRIC also returns the co-metric H(i), which is the pseudo-inverse of the metric G(i), and its Singular Value Decomposition ⌃(i),U(i) 2 Rm⇥d. The latter represents an orthogonal basis of T (x)( (M)).
3 IES problem, related work, and challenges
An example Consider a continuous two dimensional strip with width W , height H , and aspect ratio W/H 1, parametrized by coordinates w 2 [0,W ], h 2 [0, H]. The eigenvalues and eigenfunctions of the Laplace-Beltrami operator with von Neumann boundary conditions [Str07] are k1,k2 = k1⇡
W
2 + k2⇡
H
2 , respectively k1,k2(w, h) = cos k1⇡w
W
cos
k2⇡h
H
.
Eigenfunctions 1,0, 0,1 are in bijection with the w, h coordinates (and give a full rank embedding), while the mapping by 1,0, 2,0 provides no extra information regarding the second dimension h in the underlying manifold (and is rank 1). Theoretically, one can choose as coordinates eigenfunctions indexed by (k1, 0), (0, k2), but, in practice, k1, and k2 are usually
1In this paper, a smooth function or manifold will be assumed to be of class at least C3.
unknown, as the eigenvalues are index by their rank 0 = 0 < 1 2 · · · . For a two dimensional strip, it is known [Str07] that 1,0 always corresponds to 1 and 0,1 corresponds to (dW/He). Therefore, when W/H > 2, the mapping of the strip to R2 by 1, 2 is low rank, while the mapping by 1, dW/He is full rank. Note that other mappings of rank 2 exist, e.g., 1, dW/He+2 (k1 = k2 = 1 in Figure 1b). These embeddings reflect progressively higher frequencies, as the corresponding eigenvalues grow larger.
Prior work [GZKR08] is the first work to give the IES problem a rigurous analysis. Their paper focuses on rectangles, and the failure illustrated in Figure 1a is defined as obtaining a mapping Y = (X) that is not affinely equivalent with the original data. They call this the Price of Normalization and explain it in terms of the variances along w and h. [DTCK18] is the first to frame the failure in terms of the rank of S = { k : k 2 S ✓ [m]}, calling it the repeated eigendirection problem. They propose a heuristic, LLRCOORDSEARCH, based on the observation that if k is a repeated eigendirection of 1, · · · , k 1, one can fit k with local linear regression on predictors [k 1] with low leave-one-out errors rk. A sequential algorithm [BM17] with an unpredictability constraint in the eigenproblem has also been proposed. Under their framework, the k-th coordinate k is obtained from the top eigenvector of the modified kernel matrix K̃k, which is constructed by the original kernel K and 1, · · · , k 1.
Existence of solution Before trying to find an algorithmic solution to the IES problem, we ask the question whether this is even possible, in the smooth manifold setting. Positive answers are given in [Por16], which proves that isometric embeddings by DM with finite m are possible, and more recently in [Bat14], which proves that any closed, connected Riemannian manifold M can be smoothly embedded by its Laplacian eigenfunctions [m] into Rm for some m, which depends only on the intrinsic dimension d of M, the volume of M, and lower bounds for injectivity radius and Ricci curvature. The example in Figure 1a demonstrates that, typically, not all m eigenfunctions are needed. I.e., there exists a set S ⇢ [m], so that S is also a smooth embedding. We follow [DTCK18] in calling such a set S independent. It is not known how to find an independent S analytically for a given M, except in special cases such as the strip. In this paper, we propose a finite sample and algorithmic solution, and we support it with asymptotic theoretical analysis.
The IES Problem We are given data X, and the output of an embedding algorithm (DM for simplicity) Y = (X) = [ 1, · · · , m] 2 Rn⇥m. We assume that X is sampled from a d-dimensional manifold M, with known d, and that m is sufficiently large so that (M) is a smooth embedding. Further, we assume that there is a set S ✓ [m], with |S| = s m, so that S is also a smooth embedding of M. We propose to find such set S so that the rank of S is d on M and S varies as slowly as possible.
Challenges (1) Numerically, and on a finite sample, distiguishing between a full rank mapping and a rank-defective one is imprecise. Therefore, we substitute for rank the volume of a unit parallelogram in T (xi) (M). (2) Since is not an isometry, we must separate the local distortions introduced by from the estimated rank of at x. (3) Finding the optimal balance between the above desired properties. (4) In [Bat14] it is strongly suggested that s the number of eigenfunctions needed may exceed the Whitney embedding dimension ( 2d), and that this number may depend on injectivity radius, aspect ratio, and so on. Supplement G shows an example of a flat 2-manifold, the strip with cavity, for which s > 2. In this paper, we assume that s and m are given and focus on selecting S with |S| = s; for completeness, in Supplement G we present a heuristic to select s.
(Global) functional dependencies, knots and crossings Before we proceed, we describe three different ways a mapping Φ(M) can fail to be invertible. The first, (global) functional dependency, is the case when rank DΦ < d on an open subset of M, or on all of M (yellow curve in Figure 1a); this is the case most widely recognized in the literature (e.g., [GZKR08, DTCK18]). The knot is the case when rank DΦ < d at an isolated point (Figure 1b). Third, the crossing (Figure S8 in Supplement H) is the case when Φ : M → Φ(M) is not invertible at x, but M can be covered with open sets U such that the restriction Φ : U → Φ(U) has full rank d. Combinations of these three exemplary cases can occur. The criteria and approach we define are based on the (surrogate) rank of Φ, so they will not rule out all crossings. We leave the problem of crossings in manifold embeddings to future work, as we believe that it requires an entirely separate approach (based, e.g., on the injectivity radius or the density in the co-tangent bundle rather than on differential structure).
4 Criteria and algorithm
A geometric criterion We start with the main idea in evaluating the quality of a subset S of coordinate functions. At each data point i, we consider the orthogonal basis U(i) ∈ R^{m×d} of the d-dimensional tangent subspace T_{φ(x_i)}Φ(M). The projection of the columns of U(i) onto the subspace T_{φ(x_i)}Φ_S(M) is U(i)[S, :] ≡ U_S(i). The following Lemma connects U_S(i) and the co-metric H_S(i) defined by Φ_S with the full H(i). Lemma 1. Let H(i) = U(i)Σ(i)U(i)^⊤ be the co-metric defined by the embedding Φ, S ⊆ [m], and H_S(i), U_S(i) defined above. Then H_S(i) = U_S(i)Σ(i)U_S(i)^⊤ = H(i)[S, S].
The proof is straightforward and left to the reader. Note that Lemma 1 is responsible for the efficiency of the search over sets S, given that the push-forward co-metric H_S can be readily obtained as a submatrix of H. Denote by u_k^S(i) the k-th column of U_S(i). We further normalize each u_k^S to length 1 and define the normalized projected volume
Vol_norm(S, i) = √det(U_S(i)^⊤ U_S(i)) / ∏_{k=1}^d ‖u_k^S(i)‖₂.
Conceptually, Vol_norm(S, i) is the volume spanned by a (non-orthonormal) “basis” of unit vectors in T_{φ_S(x_i)}Φ_S(M); Vol_norm(S, i) = 1 when U_S(i) is orthogonal, and it is 0 when rank H_S(i) < d. In Figure 1a, Vol_norm({1, 2}) with Φ_{{1,2}} = {φ_{1,0}, φ_{2,0}} is close to zero, since the projection of the two tangent vectors is parallel to the yellow curve; however Vol_norm({1, ⌈w/h⌉}, i) is almost 1, because the projections of the tangent vectors U(i) will be (approximately) orthogonal. Hence, Vol_norm(S, i) away from 0 indicates a non-singular Φ_S at i, and we use the average log Vol_norm(S, i), which heavily penalizes values near 0, as the rank quality R(S) of Φ_S.
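A minimal sketch of computing Vol_norm(S, i) from a tangent basis (array shapes are our assumptions; this is not the authors' implementation). U is the m × d tangent basis at point i in embedding coordinates; selecting rows S gives U_S(i) = U[S, :]:

import numpy as np

def vol_norm(U, S):
    US = U[S, :]                                  # |S| x d projection of the basis
    norms = np.linalg.norm(US, axis=0)            # lengths of the d projected columns
    if np.any(norms == 0):
        return 0.0                                # a tangent direction collapses: rank < d
    gram = US.T @ US                              # d x d Gram matrix
    return np.sqrt(np.linalg.det(gram)) / np.prod(norms)

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(5, 2)))      # toy U(i) with m = 5, d = 2
print(vol_norm(Q, [0, 1, 2, 3, 4]))               # 1.0: U_S has orthonormal columns
print(vol_norm(Q, [0, 1]))                        # in (0, 1]: a generic 2-coordinate projection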
Higher-frequency Φ_S maps with high R(S) may exist, being either smooth, such as the embeddings of the strip mentioned previously, or containing knots involving only a small fraction of points, such as φ_{1,0}, φ_{1,1} in Figure 1a. To choose the lowest-frequency, slowest-varying smooth map, a regularization term consisting of the eigenvalues λ_k, k ∈ S, of the graph Laplacian L is added, obtaining the criterion
L(S; ζ) = (1/n) ∑_{i=1}^n log √det(U_S(i)^⊤ U_S(i)) − (1/n) ∑_{i=1}^n ∑_{k=1}^d log ‖u_k^S(i)‖₂ − ζ ∑_{k∈S} λ_k    (2)
where the first term is R₁(S) = (1/n) ∑_{i=1}^n R₁(S; i) and the second is R₂(S) = (1/n) ∑_{i=1}^n R₂(S; i).
Algorithm 2: INDEIGENSEARCH
Input: Data X, bandwidth ε, intrinsic dimension d, embedding dimension s, regularizer ζ
1  Y ∈ R^{n×m}, L, λ ∈ R^m ← DIFFMAP(X, ε)
2  U(1), . . . , U(n) ← RMETRIC(Y, L, d)
3  for S ∈ {S′ ⊆ [m] : |S′| = s, 1 ∈ S′} do
4    R₁(S) ← 0; R₂(S) ← 0
5    for i = 1, . . . , n do
6      U_S(i) ← U(i)[S, :]
7      R₁(S) += (1/2n) · log det(U_S(i)^⊤ U_S(i))
8      R₂(S) += (1/n) · ∑_{k=1}^d log ‖u_k^S(i)‖₂
9    end
10   L(S; ζ) ← R₁(S) − R₂(S) − ζ ∑_{k∈S} λ_k
11 end
12 S* ← argmax_S L(S; ζ)
Return: Independent eigencoordinate set S*
Search algorithm With this criterion, the IES problem turns into a subset selection problem parametrized by ζ:
S*(ζ) = argmax_{S⊆[m]; |S|=s; 1∈S} L(S; ζ)    (3)
Note that we force the first coordinate φ₁ to always be chosen, since this coordinate cannot be functionally dependent on previous ones, and, in the case of DM, it also has the lowest frequency. Note also that R₁ and R₂ are both submodular set functions (proof in Supplement C.3). For large s and d, algorithms for optimizing over the difference of submodular functions can be used (e.g., see [IB12]). For the experiments in this paper, we have m = 20 and d, s = 2 ∼ 4, which enables us to use exhaustive search to handle (3). The exact search algorithm is summarized in Algorithm 2, INDEIGENSEARCH. A greedy variant is also proposed and analyzed in Supplement D. Note that one might be able to search in the continuous space of all s-projections. We conjecture that the objective function (2) will be a difference of convex functions and leave the details as future work².
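For concreteness, here is a Python sketch of the exhaustive search in Algorithm 2 (our variable names; the tangent bases U_list and eigenvalues lam are assumed to come from DIFFMAP/RMETRIC, and the paper's coordinate φ₁ is index 0 here):

from itertools import combinations
import numpy as np

def ind_eigen_search(U_list, lam, d, s, zeta):
    """U_list: length-n list of m x d tangent bases; lam: the m eigenvalues."""
    m, n = U_list[0].shape[0], len(U_list)
    best, best_val = None, -np.inf
    for rest in combinations(range(1, m), s - 1):
        S = (0,) + rest                            # phi_1 (index 0) is always kept
        R1 = R2 = 0.0
        for U in U_list:
            US = U[list(S), :]
            R1 += 0.5 * np.linalg.slogdet(US.T @ US)[1] / n   # (1/2n) log det
            R2 += np.sum(np.log(np.linalg.norm(US, axis=0))) / n
        val = R1 - R2 - zeta * sum(lam[k] for k in S)
        if val > best_val:
            best, best_val = S, val
    return best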
Regularization path and choosing ζ According to (2), the optimal subset S* depends on the parameter ζ. The regularization path ℓ(ζ) = max_{S⊆[m]; |S|=s; 1∈S} L(S; ζ) is the upper envelope of multiple lines (each corresponding to a set S) with slopes −∑_{k∈S} λ_k and intercepts R(S). The larger ζ is, the more the lower-frequency subset penalty prevails, and for sufficiently large ζ the algorithm will output [s]. In the supervised learning framework, regularization parameters are often chosen by cross validation. Here we propose a second criterion that effectively limits how much R(S) may be ignored, or alternatively, bounds ζ by a data-dependent quantity. Define the leave-one-out regret of point i as follows:
D(S, i) = R(S_*^i; [n]\{i}) − R(S; [n]\{i}),  with  S_*^i = argmax_{S⊆[m]; |S|=s; 1∈S} R(S; i)    (4)
In the above, we denote R(S; T) = (1/|T|) ∑_{i∈T} R₁(S; i) − R₂(S; i) for a subset T ⊆ [n]. The quantity D(S, i) in (4) measures the gain in R if all the other points [n]\{i} choose the optimal subset S_*^i. If the regret D(S, i) is larger than zero, it indicates that the alternative choice might be better than the original choice S. Note that the mean value over all i, i.e., (1/n) ∑_i D(S, i), depends also on the variability of the optimal choices S_*^i across points i. Therefore, it might not favor an S even if S is optimal for every i ∈ [n]. Instead, we propose to inspect the distribution of D(S, i) and remove the sets S for which the α-percentile is larger than zero, e.g., α = 75%, recursively from ζ = ∞ in decreasing order. Namely, the chosen set is S* = S*(ζ′) with ζ′ = max{ζ ≥ 0 : PERCENTILE({D(S*(ζ), i)}_{i=1}^n, α) ≤ 0}. The optimal ζ* value is simply chosen to be the midpoint of the range of ζ's that output the set S*, i.e., ζ* = (ζ′ + ζ″)/2, where ζ″ = min{ζ ≥ 0 : S*(ζ) = S*(ζ′)}. The procedure REGUPARAMSEARCH is summarized in Algorithm S5.
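A sketch of the percentile rule in our notation; R_per_point is assumed to map each candidate set S to the array of per-point values R(S; i) = R₁(S; i) − R₂(S; i):

import numpy as np

def regret(R_per_point, S, i):
    n = len(R_per_point[S])
    others = [j for j in range(n) if j != i]
    S_star_i = max(R_per_point, key=lambda T: R_per_point[T][i])   # argmax_T R(T; i)
    r = lambda T: np.mean([R_per_point[T][j] for j in others])     # R(T; [n] \ {i})
    return r(S_star_i) - r(S)                                      # D(S, i) of (4)

def keep(R_per_point, S, alpha=75):
    D = [regret(R_per_point, S, i) for i in range(len(R_per_point[S]))]
    return np.percentile(D, alpha) <= 0   # discard S if the alpha-percentile regret > 0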
5 R as Kullback-Leibler divergence
In this section we analyze R in its population version, and show that it is reminiscent of a Kullback-Leibler divergence between unnormalized measures on Φ_S(M). The population version of the regularization term takes the form of a well-known smoothness penalty on the embedding coordinates Φ_S. Proofs of the theorems can be found in Supplement C.
Volume element and the Riemannian metric Consider a Riemannian manifold (M, g) mapped by a smooth embedding Φ_S into (Φ_S(M), g*_{Φ_S}), Φ_S : M → R^s, where g*_{Φ_S} is the push-forward metric defined in (1). A Riemannian metric g induces a Riemannian measure on M, with volume element √det g. Denote now by µ_M, respectively µ_{Φ_S(M)}, the Riemannian measures corresponding to the metrics induced on M, Φ_S(M) by the ambient spaces R^D, R^s; let g be the former metric. Lemma 2. Let S, Φ, Φ_S, H_S(x), U_S(x), Σ(x) be defined as in Section 4 and Lemma 1. For simplicity, we denote H_S(y) ≡ H_S(Φ_S^{−1}(y)), and similarly for U_S(y), Σ(y). Assume that Φ_S is a smooth embedding. Then, for any measurable function f : M → R,
∫_M f(x) dµ_M(x) = ∫_{Φ_S(M)} f(Φ_S^{−1}(y)) j_S(y) dµ_{Φ_S(M)}(y),    (5)
with j_S(y) = 1/Vol(U_S(y) Σ^{1/2}(y)).    (6)
Asymptotic limit of R We now study the first term of our criterion in the limit of infinite sample size. We make the following assumptions. Assumption 1. The manifold M is compact of class C³, and there exists a set S, with |S| = s, so that Φ_S is a smooth embedding of M in R^s.
2We thank the anonymous reviewer who made this suggestion.
Assumption 2. The data are sampled from a distribution on M continuous with respect to µ_M, whose density is denoted by p. Assumption 3. The estimate of H_S in Algorithm 1 computed w.r.t. the embedding Φ_S is consistent.
We know from [Bat14] that Assumption 1 is satisfied for the DM/LE embedding. The remaining assumptions are minimal requirements ensuring that limits of our quantities exist. Now consider the setting of Section 3, in which we have a larger set of eigenfunctions [m], so that [m] contains the set S of Assumption 1. Denote by
j̃_S(y) = (∏_{k=1}^d ‖u_k^S(y)‖ σ_k(y)^{1/2})^{−1}
a new volume element, where σ_k = [Σ]_{kk}.
Theorem 3 (Limit of R). Under Assumptions 1–3,
lim_{n→∞} (1/n) ∑_i ln R(S, x_i) = R(S, M),    (7)
and
R(S, M) = ∫_{Φ_S(M)} ln( j_S(y) / j̃_S(y) ) j_S(y) p(Φ_S^{−1}(y)) dµ_{Φ_S(M)}(y) =: D(p j_S ‖ p j̃_S)    (8)
The expression D(·‖·) represents a Kullback-Leibler divergence. Note that j_S ≥ j̃_S, which implies that D is always positive, and that the measures defined by p j_S, p j̃_S normalize to different values. By definition, local injectivity is related to the volume element j. Intuitively, p j_S is the observation and p j̃_S, where j̃_S is the minimum attainable value of j_S, is the model; the objective itself looks for a view Φ_S of the data that agrees with the model.
It is known that λ_k, the k-th eigenvalue of the Laplacian, converges under certain technical conditions [BN07] to an eigenvalue of the Laplace-Beltrami operator Δ_M, and that
λ_k(Δ_M) = ⟨φ_k, Δ_M φ_k⟩ = ∫_M ‖grad φ_k(x)‖₂² dµ(M).    (9)
Hence, a smaller value of the regularization term encourages the use of slowly varying coordinate functions, as measured by the squared norms of their gradients, as in equation (9). Under Assumptions 1, 2, 3, L therefore converges to
L(S, M) = D(p j_S ‖ p j̃_S) − (ζ/λ₁(M)) ∑_{k∈S} λ_k(M).    (10)
Since eigenvalues scale with the volume of M, the rescaling of ζ in comparison with equation (2) makes the ζ above adimensional.
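As a toy discrete check of identity (9), the eigenvalue λ_k of a graph Laplacian equals the Dirichlet energy φ_k^⊤ L φ_k of the corresponding eigenvector; the path graph below is our stand-in for a one-dimensional strip:

import numpy as np

n = 200
A = np.zeros((n, n))
idx = np.arange(n - 1)
A[idx, idx + 1] = A[idx + 1, idx] = 1.0      # path graph adjacency
L = np.diag(A.sum(axis=1)) - A               # unnormalized graph Laplacian
lam, phi = np.linalg.eigh(L)                 # eigenpairs, ascending eigenvalues
k = 3
print(lam[k], phi[:, k] @ L @ phi[:, k])     # the two values coincide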
6 Experiments
We demonstrate the proposed algorithm on three synthetic datasets, one where the minimum embedding dimension s equals d (D1, long strip), and two (D7, high torus, and D13, three-torus) where s > d. The complete list of synthetic manifolds investigated (transformations of 2-dimensional strips, 3-dimensional cubes, two- and three-tori, etc.) can be found in Supplement H and Table S2. The examples have (i) aspect ratio of at least 4, (ii) points sampled non-uniformly from the underlying manifold M, and (iii) Gaussian noise added. The sample size of the synthetic datasets is n = 10,000 unless otherwise stated. Additionally, we analyze several real datasets from chemistry and astronomy. All embeddings are computed with the DM algorithm, which outputs m = 20 eigenvectors. Hence, we examine 171 sets for s = 3 and 969 sets for s = 4. No more than 2 to 5 of these sets appear on the regularization path. Detailed experimental results are in Table S3. In this section, we show the original dataset X, the embedding Φ_{S*}, with S* selected by INDEIGENSEARCH and ζ* from REGUPARAMSEARCH, and the maximizer sets on the regularization path with box plots of D(S, i) as discussed in Section 4. The α threshold for REGUPARAMSEARCH is set to 75%. The kernel bandwidth ε for synthetic datasets is chosen manually. For real datasets, ε is optimized as in [JMM17]. All the experiments are replicated more than 5 times, and the outputs are similar because of the large sample size n.
Synthetic manifolds The results for the synthetic manifolds are in Figure 2. (i) Manifold with s = d. The first synthetic dataset we considered, D1, is a two-dimensional strip with aspect ratio W/H = 2π. The left panel of the top row shows the scatter plot of this dataset. From the theoretical analysis in Section 3, the coordinate set that corresponds to the slowest varying unique eigendirections is S = {1, ⌈W/H⌉} = {1, 7}. The middle panel, with S* = {1, 7} selected by INDEIGENSEARCH with ζ chosen by REGUPARAMSEARCH, confirms this. The right panel shows the box plot of {D(S, i)}_{i=1}^n. According to the proposed procedure, we eliminate S′ = {1, 2} since D(S′, i) ≥ 0 for almost all the points. (ii) Manifold with s > d. The second dataset, D7, is displayed in the left panel of the second row. Due to the mechanism we used to generate the data, the resulting torus is non-uniformly distributed along the z axis. The middle panel is the embedding of the optimal coordinate set S* = {1, 4, 5} selected by INDEIGENSEARCH. Note that the middle region (in red) is indeed a two-dimensional narrow tube when zoomed in. The right panel indicates that both {1, 2, 3} and {1, 2, 4} (median is around zero) should be removed. The optimal regularization parameter is ζ* ≈ 7. The result for the third dataset, D13, the three-torus, is in the third row of the figure. To conserve space, we display only the projections onto the penultimate and last coordinates of the original data X and of the embedding Φ_{S*} (which is {5, 10}), colored by α₁ of (S15), in the left and middle panels. The full set of coordinate combinations can be found in Figure S5. The right panel implies that one should eliminate the sets {1, 2, 3, 4} and {1, 2, 3, 5}, since both have more than 75% of the points with D(S, i) ≥ 0. The first remaining subset is {1, 2, 5, 10}, which yields an optimal regularization parameter ζ* ≈ 5.
Molecular dynamics dataset [FTP16] The dataset has size n ≈ 30,000 and ambient dimension D = 40, with the intrinsic dimension estimated to be d̂ = 2 (see Supplement H.1 for details). The embedding with coordinate set S = [3] is shown in Figure 3a. The first three eigenvectors parameterize the same directions, which yields a one-dimensional manifold in the figure. The top view (S = [2]) of the figure is a u-shaped structure similar to the yellow curve in Figure 1a. The heat map of L({1, i, j}) for different combinations of coordinates in Figure 3b confirms that L for S = [3] is low and that φ₁, φ₂, and φ₃ give a low-rank mapping. The heat map also shows high L values for S₁ = {1, 4, 6} and S₂ = {1, 5, 7}, which correspond to the top two ranked subsets. The embeddings with S₁, S₂ are in Figures 3c and 3d, respectively. In this case, we obtain two optimal sets S due to the symmetry of the data.
Galaxy spectra from the Sloan Digital Sky Survey (SDSS)³ [AAMA+09], preprocessed as in [MMVZ16]. We display a sample of n = 50,000 points from the first 0.3 million points, which correspond to closer galaxies. Figures 3e and 3f show that the first two coordinates are almost dependent; the embedding with S* = {1, 3} is selected by INDEIGENSEARCH with d = 2. Both plots are colored by the blue spectrum magnitude, which is correlated with the number of young stars in the galaxy, showing that this galaxy property varies smoothly and non-linearly with φ₁, φ₃, but is not smooth w.r.t. φ₁, φ₂.
Comparison with [DTCK18] The LLRCOORDSEARCH method outputs candidate coordinates similar to those of our proposed algorithm most of the time (see Table S3). However, the results differ for the high torus, as in Figure 3. Figure 3h shows the leave-one-out (LOO) error r_k versus coordinates. The coordinates chosen by LLRCOORDSEARCH were S = {1, 2, 5}, as in Figure 3g. The embedding is clearly suboptimal, for it fails to capture the cavity within the torus. This is because the algorithm searches in a sequential fashion; the noise eigenvector φ₂ in this example appears before signal eigenvectors such as φ₄ and φ₅.
Additional experiments with real data are shown in Table 1. Not surprisingly, for most real data sets we examined, the independent coordinates are not the first s. They also show that the algorithm scales well and is robust to the noise present in real data.
The asymptotic runtime of LLRCOORDSEARCH has a quadratic dependency on n, while that of our algorithm is linear in n. Details of the runtime analysis are in Supplement F. LLRCOORDSEARCH was too slow to be tested on the four larger datasets (see also Figure S1).
3The Sloan Digital Sky Survey data can be downloaded from https://www.sdss.org
7 Conclusion
Algorithms that use eigenvectors, such as DM, are among the most promising and well studied in ML. It is known since [GZKR08] that when the aspect ratio of a low dimensional manifold exceeds a threshold, the choice of eigenvectors becomes non-trivial, and that this threshold can be as low as 2. Our experimental results confirm the need to augment ML algorithms with IES methods in order to successfully apply ML to real world problems. Surprisingly, the IES problem has received little attention in the ML literature, to the extent that the difficulty and complexity of the problem have not been recognized. Our paper advances the state of the art by (i) introducing for the first time a differential geometric definition of the problem, (ii) highlighting geometric factors such as injectivity radius that, in addition to aspect ratio, influence the number of eigenfunctions needed for a smooth embedding, (iii) constructing selection criteria based on intrinsic manifold quantities, (iv) which have analyzable asymptotic limits, (v) can be computed efficiently, and (vi) are also robust to the noise present in real scientific data. The library of hard synthetic examples we constructed will be made available along with the python software implementation of our algorithms.
Acknowledgements
The authors acknowledge partial support from the U.S. Department of Energy, Solar Energy Technology Office award DE-EE0008563 and from the NSF DMS PD 08-1269 and NSF IIS-0313339 awards. They are grateful to the Tkatchenko and Pfaendtner labs and in particular to Stefan Chmiela and Chris Fu for providing the molecular dynamics data and for many hours of brainstorming and advice. | 1. What is the main contribution of the paper regarding the Independent Eigencoordinate Selection problem?
2. What are the strengths and weaknesses of the proposed objective function and regularization term?
3. How does the reviewer assess the choice of objective function and its relation to the K-L divergence limit?
4. Why does the IES problem choose a specific Euclidean projection, and how does it compare to searching over all projections?
5. What are some of the mathematical notation and terminology issues in the paper?
6. How does the reviewer evaluate the section on the regularization path and choosing $\zeta$?
7. Are there any inconsistencies or typos in the paper's notation and characterization of the regularization parameter $\zeta$? | Review | Review
This paper studies the problem of selecting coordinates of a map into a high-dimensional Euclidean space (assumed to be a smooth embedding) to produce a smooth immersion into a lower-dimensional Euclidean space. As the original map is composed of the eigenfunctions of a Laplacian, the authors call this the Independent Eigencoordinate Selection problem. The main contribution of the paper is to design an objective function to encourage the projected map to be locally injective and a regularization term encouraging use of slowly-varying lower eigenvalues. The IES problem is naturally phrased as a subset selection problem given these choices. The paper does not focus on how to optimize this objective function; rather, the authors study the behavior of the exact solution (found via exhaustive search of small subsets) under changes in the regularization parameter as a *regularization path*.

I would have liked to see more discussion of the particular objective function chosen. Section 5 of the paper states that in the limit of infinitely many samples, the objective function converges to a K-L divergence between two Riemannian volume forms, one of them a pullback and the other cooked up to rescale the pullback. It seems like this limit is intended to motivate the choice of objective function. In that case, it would have been helpful to introduce it earlier and to discuss it more: e.g, why is K-L between these two volume forms a good way to encourage local injectivity.

More fundamentally, the IES problem chooses a composition of the original map with a very specific Euclidean projection: a projection along coordinate axes. Searching over subsets of coordinates seems hard in general (this paper mainly uses exhaustive search). Why is it better to search among subsets of the coordinates than to search over all projections, which would be more amenable to continuous optimization techniques (e.g. manifold optimization on the Grassmannian)?

I found the section on the regularization path and choosing $\zeta$ hard to follow. It seems to use notation introduced in the supplementary material without referring to it.

Some of the mathematical terminology and notation in the paper is non-standard. For example, the pullback of a metric is normally denoted $\phi^*g$, not $g_{*\phi}$. The paper refers to the pushforward of the metric, which is really the pullback by the inverse map, $(\phi^{-1})^*g$. Of course this only makes sense where the inverse is well-defined. Similarly, the classification of functional dependencies/knots and crossings may be standard in machine learning, but as far as I know mathematicians would call these failures of local injectivity and failures of injectivity, respectively. A map that is locally (infinitesimally) injective but not necessarily globally injective is an immersion. It would be helpful to use this standard term as this is what the paper is seeking.

In Section 5, some notation is used without being introduced. For example, I do not see where $p$ is defined, nor what $\sigma_k(y)$ is. The jacobian determinant is defined as the volume of a matrix, which seems like a typo. The characterization of the regularization parameter $\zeta$ is inconsistent. For example, section 5 states that "a smaller value for the regularization term encourages the use of slow varying coordinate functions." In fact, increasing $\zeta$ should put more emphasis on low-frequency modes.
The paper states that "The rescaling of $\zeta$ [in equation(10)] in comparison with equation (2) aims to make $\zeta$ adimensional." But it is also stated that the objective function $\mathfrak{L}$ from equation (2) converges to that in equation (10). In that case, the scaling should be consistent between the two equations. If adimensionality is desirable, why not aim for that in the original definition of the objective function? |
NIPS | Title
Quantum Algorithms for Sampling Log-Concave Distributions and Estimating Normalizing Constants
Abstract
Given a convex function f : R^d → R, the problem of sampling from a distribution ∝ e^{−f(x)} is called log-concave sampling. This task has wide applications in machine learning, physics, statistics, etc. In this work, we develop quantum algorithms for sampling log-concave distributions and for estimating their normalizing constants ∫_{R^d} e^{−f(x)} dx. First, we use underdamped Langevin diffusion to develop quantum algorithms that match the query complexity (in terms of the condition number κ and dimension d) of analogous classical algorithms that use gradient (first-order) queries, even though the quantum algorithms use only evaluation (zeroth-order) queries. For estimating normalizing constants, these algorithms also achieve quadratic speedup in the multiplicative error ε. Second, we develop quantum Metropolis-adjusted Langevin algorithms with query complexity Õ(κ^{1/2}d) and Õ(κ^{1/2}d^{3/2}/ε) for log-concave sampling and normalizing constant estimation, respectively, achieving polynomial speedups in κ, d, ε over the best known classical algorithms by exploiting quantum analogs of the Monte Carlo method and quantum walks. We also prove a 1/ε^{1−o(1)} quantum lower bound for estimating normalizing constants, implying near-optimality of our quantum algorithms in ε.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
1 Introduction
Sampling from a given distribution is a fundamental computational problem. For example, in statistics, samples can determine confidence intervals or explore posterior distributions. In machine learning, samples are used for regression and to train supervised learning models. In optimization, samples from well-chosen distributions can produce points near local or even global optima.
Sampling can be nontrivial even when the distribution is known. Indeed, efficient sampling is often a challenging computational problem, and bottlenecks the running time in many applications. Many efforts have been made to develop fast sampling methods. Among those, one of the most successful tools is Markov Chain Monte Carlo (MCMC), which uses a Markov chain that converges to the desired distribution to (approximately) sample from it.
Here we focus on the fundamental task of log-concave sampling, i.e., sampling from a distribution proportional to e−f where f : Rd → R is a convex function. This covers many practical applications such as multivariate Gaussian distributions and exponential distributions. Provable performance guarantees for log-concave sampling have been widely studied [15]. A closely related problem is estimating the normalizing constants of log-concave distributions, which also has many applications [16].
Quantum computing has been applied to speed up many classical algorithms based on Markov processes, so it is natural to investigate quantum algorithms for log-concave sampling. If we can prepare a quantum state whose amplitudes are the square roots of the corresponding probabilities, then measurement yields a random sample from the desired distribution. In this approach, the number of required qubits is only poly-logarithmic in the size of the sample space. Unfortunately, such a quantum state probably cannot be efficiently prepared in general, since this would imply SZK ⊆ BQP [1]. Nevertheless, in some cases, quantum algorithms can achieve polynomial speedup over classical algorithms. Examples include uniform sampling on a 2D lattice [35], estimating partition functions [4, 22, 31, 45, 46], and estimating volumes of convex bodies [6]. However, despite the importance of sampling log-concave distributions and estimating normalizing constants, we are not aware of any previous quantum speedups for general instances of these problems.
Formulation In this paper, we consider a d-dimensional convex function f : R^d → R which is L-smooth and µ-strongly convex, i.e., µ, L > 0 and for any x, y ∈ R^d, x ≠ y,
(f(y) − f(x) − ⟨∇f(x), y − x⟩) / (‖x − y‖₂²/2) ∈ [µ, L].    (1.1)
We denote by κ := L/µ the condition number of f. The corresponding log-concave distribution has probability density ρ_f : R^d → R with
ρ_f(x) := e^{−f(x)} / Z_f,    (1.2)
where the normalizing constant is
Z_f := ∫_{x∈R^d} e^{−f(x)} dx.    (1.3)
When there is no ambiguity, we abbreviate ρ_f and Z_f as ρ and Z, respectively. Given an ε ∈ (0, 1),
• the goal of log-concave sampling is to output a random variable with distribution ρ̃ such that ‖ρ̃ − ρ‖ ≤ ε, and
• the goal of normalizing constant estimation is to output a value Z̃ such that with probability at least 2/3, (1 − ε)Z ≤ Z̃ ≤ (1 + ε)Z.
Here ‖ · ‖ is a certain norm. We consider the general setting where the function f is specified by an oracle. In particular, we consider the quantum evaluation oracle Of , a standard model in the quantum computing literature [3, 6, 7, 50]. The evaluation oracle acts as
Of |x, y〉 = |x, f(x) + y〉 ∀x ∈ Rd, y ∈ R. (1.4)
(Quantum computing notations are briefly explained in Section 2.) We also consider the quantum gradient oracle O∇f with
O∇f |x, z〉 = |x,∇f(x) + z〉 ∀x, z ∈ Rd. (1.5)
In other words, we allow superpositions of queries to both function evaluations and gradients. The essence of quantum speedup is the ability to compute with carefully designed superpositions.
Contributions Our main results are quantum algorithms that speed up log-concave sampling and normalizing constant estimation.
Theorem 1.1 (Main log-concave sampling result). Let ρ denote the log-concave distribution (1.2). There exist quantum algorithms that output a random variable distributed according to ρ̃ such that
• W₂(ρ̃, ρ) ≤ ε, where W₂ is the Wasserstein 2-norm (2.4), using Õ(κ^{7/6}d^{1/6}ε^{−1/3} + κd^{1/3}ε^{−2/3}) queries to the quantum evaluation oracle (1.4); or
• ‖ρ̃ − ρ‖_TV ≤ ε, where ‖·‖_TV is the total variation distance (2.3), using Õ(κ^{1/2}d) queries to the quantum gradient oracle (1.5), or Õ(κ^{1/2}d^{1/4}) queries when the initial distribution is warm (formally defined in Appendix C.2.1).
In the above results, the query complexity Õ(κ^{7/6}d^{1/6}ε^{−1/3} + κd^{1/3}ε^{−2/3}) is achieved by our quantum ULD-RMM algorithm. Although the quantum query complexity is the same as the best known classical result [37], we emphasize that our quantum algorithm uses a zeroth-order oracle while [37] uses a first-order oracle. The query complexity Õ(κ^{1/2}d) is achieved by our quantum MALA algorithm that uses a first-order oracle (as in classical algorithms). This is a quadratic speedup in κ compared with the best known classical algorithm [28]. With a warm start, our quantum speedup is even more significant: we achieve quadratic speedups in κ and d as compared with the best known classical algorithm with a warm start [47].
Theorem 1.2 (Main normalizing constant estimation result). There exist quantum algorithms that estimate the normalizing constant by Z̃ within multiplicative error ε with probability at least 3/4,
• using Õ(κ^{7/6}d^{7/6}ε^{−1} + κd^{4/3}ε^{−1}) queries to the quantum evaluation oracle (1.4); or
• using Õ(κ^{1/2}d^{3/2}ε^{−1}) queries to the quantum gradient oracle (1.5).
Furthermore, this task has quantum query complexity at least Ω(ε^{−1+o(1)}) (Theorem 5.1).
Our query complexity Õ(κ^{7/6}d^{7/6}ε^{−1} + κd^{4/3}ε^{−1}) for normalizing constant estimation achieves a quadratic speedup in precision compared with the best known classical algorithm [16]. More remarkably, our quantum ULD-RMM algorithm again uses a zeroth-order oracle while the slower best known classical algorithm uses a first-order oracle [16]. Our quantum algorithm working with a first-order oracle achieves polynomial speedups in all parameters compared with the best known classical algorithm [16]. Moreover, the precision dependence of our quantum algorithms is nearly optimal, which is quadratically better than the classical lower bound in 1/ε [16].
To the best of our knowledge, these are the first quantum algorithms with quantum speedup for the fundamental problems of log-concave sampling and estimating normalizing constants. We explore multiple classical techniques including the underdamped Langevin diffusion (ULD) method [12– 14, 43], the randomized midpoint method for underdamped Langevin diffusion (ULD-RMM) [36, 37], and the Metropolis adjusted Langevin algorithm (MALA) [8, 11, 15, 28, 29, 47], and achieve quantum speedups. Our main contributions are as follows.
• Log-concave sampling. For this problem, our quantum algorithms based on ULD and ULDRMM have the same query complexity as the best known classical algorithms, but our quantum algorithms only use a zeroth-order (evaluation) oracle, while the classical algorithms use the firstorder (gradient) oracle. For MALA, this improvement on the order of oracles is nontrivial, but we can use the quantum gradient oracle in our quantum MALA algorithm to achieve a quadratic speedup in the condition number κ. Furthermore, given a warm-start distribution, our quantum algorithm achieves a quadratic speedup in all parameters.
• Normalizing constant estimation. For this problem, our quantum algorithms provide larger speedups. In particular, our quantum algorithms based on ULD and ULD-RMM achieve quadratic
speedup in the multiplicative precision (while using a zeroth-order oracle) compared with the corresponding best-known classical algorithms (using a first-order oracle). Our quantum algorithm based on MALA achieves polynomial speedups in all parameters. Furthermore, we prove that our quantum algorithm is nearly optimal in terms of .
We summarize our results and compare them to previous classical algorithms in Table 1 and Table 2. See Appendix A for more detailed comparisons to related classical and quantum work.
Techniques In this work, we develop a systematic approach for studying the complexity of quantum walk mixing and show that for any reversible classical Markov chain, we can obtain quadratic speedup for the mixing time as long as the initial distribution is warm. In particular, we apply the quantum walk and quantum annealing in the context of Langevin dynamics and achieve polynomial quantum speedups.
The technical ingredients of our quantum algorithms are highlighted below.
• Quantum simulated annealing (Lemma 3.2). Our quantum algorithm for estimating normalizing constants combines the quantum simulated annealing framework of [45] and the quantum mean estimation algorithm of [31]. For each type of Langevin dynamics (which are random walks), we build a corresponding quantum walk. Crucially, the spectral gap of the random walk is quadratically amplified in the phase gap of the corresponding quantum walk. This allows us to use a Grover-like procedure to produce the stationary distribution state given a sufficiently good initial state. In the simulated annealing framework, this initial state is the stationary distribution state of the previous Markov chain.
• Effective spectral gap (Lemma C.7). We show how to leverage a “warm” initial distribution to achieve a quantum speedup for sampling. Classically, a warm start leads to faster mixing even if
the spectral gap is small. Quantumly, we generalize the notion of “effective spectral gap” [6, 27, 34] to our more general sampling problem. We show that with a bounded warmness parameter, quantum algorithms can achieve a quadratic speedup in the mixing time. By viewing the sampling problem as a simulated annealing process with only one Markov chain, we prove a quadratic speedup for quantum MALA by analyzing the effective spectral gap.
• Quantum gradient estimation (Lemma C.1). We adapt Jordan’s quantum gradient algorithm [24] to the ULD and ULD-RMM algorithms and give rigorous proofs to bound the sampling error due to gradient estimation errors.
Open questions Our work raises several natural questions for future investigation:
• Can we achieve quantum speedup in d and κ for unadjusted Langevin algorithms such as ULD and ULD-RMM? The main difficulty is that ULD and ULD-RMM are irreversible, while most available quantum walk techniques only apply to reversible Markov chains. New techniques might be required to resolve this question.
• Can we achieve further quantum speedup for estimating normalizing constants with a warm start distribution? This might require a more refined version of quantum mean estimation.
• Can we give quantum algorithms for estimating normalizing constants with query complexity sublinear in d? Such a result would give a provable quantum-classical separation due to the Ω(d^{1−o(1)}/ε^{2−o(1)}) classical lower bound proved in [16].
Limitations and societal impacts Researchers working on theoretical aspects of quantum computing or Monte Carlo methods may benefit from our results. In the long term, once fault-tolerant quantum computers have been built, our results may find practical applications in MCMC methods arising in the real world. As far as we are aware, our work does not have negative societal impacts.
2 Preliminaries
Basic definitions of quantum computation Quantum mechanics is formulated in terms of linear algebra. The computational basis of C^d is {e⃗₀, . . . , e⃗_{d−1}}, where e⃗_i = (0, . . . , 1, . . . , 0)^⊤ with the 1 in the (i+1)st position. We use Dirac notation, writing |i⟩ (called a “ket”) for e⃗_i and ⟨i| (a “bra”) for e⃗_i^⊤.
The tensor product of quantum states is their Kronecker product: if |u⟩ ∈ C^{d₁} and |v⟩ ∈ C^{d₂}, then we have |u⟩ ⊗ |v⟩ ∈ C^{d₁} ⊗ C^{d₂} with
|u⟩ ⊗ |v⟩ = (u₀v₀, u₀v₁, . . . , u_{d₁−1}v_{d₂−1})^⊤.    (2.1)
The basic element of quantum information is a qubit, a quantum state in C², which can be written as a|0⟩ + b|1⟩ for some a, b ∈ C with |a|² + |b|² = 1. An n-qubit tensor product state can be written as |v₁⟩ ⊗ · · · ⊗ |v_n⟩ ∈ C^{2^n}, where for any i ∈ [n], |v_i⟩ is a one-qubit state. Note however that most states in C^{2^n} are not product states. We sometimes abbreviate |u⟩ ⊗ |v⟩ as |u⟩|v⟩. Operations on quantum states are unitary transformations. They are typically stated in the circuit model, where a k-qubit gate is a unitary matrix in C^{2^k}. Two-qubit gates are universal, i.e., every n-qubit gate can be decomposed into a product of gates that act as the identity on n−2 qubits and as some two-qubit gate on the other 2 qubits. The gate complexity of an operation refers to the number of two-qubit gates used in a quantum circuit for realizing it.
Quantum access to a function, referred to as a quantum oracle, must be reversible and allow access to different values of the function in superposition (i.e., for linear combinations of computational basis states). For example, consider the unitary evaluation oracle O_f defined in (1.4). Given a probability distribution {p_i}_{i=1}^n and a set of points {x_i}_{i=1}^n, we have
O_f ∑_{i=1}^n √p_i |x_i⟩|0⟩ = ∑_{i=1}^n √p_i |x_i⟩|f(x_i)⟩.    (2.2)
Then a measurement would give f(xi) with probability pi. However, a quantum oracle can not only simulate random sampling, but can enable uniquely quantum behavior through interference. Examples include amplitude amplification—the main idea behind Grover’s search algorithm [20] and
the amplitude estimation procedure used in this paper—and many other quantum algorithms relying on coherent quantum access to a function. Similar arguments apply to the quantum gradient oracle (1.5). If a classical oracle can be computed by an explicit classical circuit, then the corresponding quantum oracle can be implemented by a quantum circuit of approximately the same size. Therefore, these quantum oracles provide a useful framework for understanding the quantum complexity of log-concave sampling and normalizing constant estimation.
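A classical toy simulation (finite domain of our choosing) of the oracle action (2.2): the amplitude √p_i on |x_i⟩|0⟩ moves to |x_i⟩|f(x_i)⟩, so measuring the second register yields f(x_i) with probability p_i:

import numpy as np

f = lambda x: x * x
xs, p = [0, 1, 2], np.array([0.5, 0.3, 0.2])
state = {(x, 0): np.sqrt(pi) for x, pi in zip(xs, p)}      # sum_i sqrt(p_i) |x_i>|0>
after = {(x, y + f(x)): a for (x, y), a in state.items()}  # apply O_f: |x, y> -> |x, f(x)+y>
print({k: round(a ** 2, 2) for k, a in after.items()})     # measurement probabilities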
To sample from a distribution π, it suffices to prepare the state |π〉 := ∑ x √ πx|x〉 and then measure it. For a Markov chain specified by a transition matrix P with stationary distribution π, one can construct a corresponding quantum walk operator W (P ). Intuitively, quantum walks can be viewed as applying a sequence of quantum unitaries on a quantum state encoding the initial distribution to rotate it to the subspace of stationary distribution |π〉. The number of rotations needed (i.e., the angle between the initial distribution and stationary distribution) depends on the spectral gap of P , and a quantum algorithm can achieve a quadratic speedup via quantum phase estimation and amplification algorithms. More background on quantum walk is given in Appendix C.2.2.
Notations Throughout the paper, the big-O notations O(·), o(·), Ω(·), and Θ(·) follow common definitions. The Õ notation omits poly-logarithmic terms, i.e., Õ(f) := O(f poly(log f)). We say a function f is L-Lipschitz continuous at x if |f(x) − f(y)| ≤ L‖x − y‖ for all y sufficiently near x. The total variation distance (TV-distance) between two functions f, g : R^d → R is defined as
‖f − g‖_TV := (1/2) ∫_{R^d} |f(x) − g(x)| dx.    (2.3)
Let B(R^d) denote the Borel σ-field of R^d. Given probability measures µ and ν on (R^d, B(R^d)), a transference plan ζ between µ and ν is defined as a probability measure on (R^d × R^d, B(R^d) × B(R^d)) such that for any A ⊆ R^d, ζ(A × R^d) = µ(A) and ζ(R^d × A) = ν(A). We let Γ(µ, ν) denote the set of all transference plans. We let
W₂(µ, ν) := ( inf_{ζ∈Γ(µ,ν)} ∫_{R^d×R^d} ‖x − y‖₂² dζ(x, y) )^{1/2}    (2.4)
denote the Wasserstein 2-norm between µ and ν.
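As a quick sanity check of (2.4): for Gaussians the Wasserstein 2-norm has the standard closed form W₂² = ‖m₁ − m₂‖² + tr(S₁ + S₂ − 2(S₂^{1/2} S₁ S₂^{1/2})^{1/2}) (a textbook fact, not a result of this paper), which can be evaluated directly:

import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian(m1, S1, m2, S2):
    cross = sqrtm(sqrtm(S2) @ S1 @ sqrtm(S2))
    return np.sqrt(np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2 * cross).real)

d = 3
print(w2_gaussian(np.zeros(d), np.eye(d), np.ones(d), 4 * np.eye(d)))  # sqrt(3 + 3) ~ 2.449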
3 Quantum Algorithm for Log-Concave Sampling
In this section, we describe several quantum algorithms for sampling log-concave distributions.
Quantum inexact ULD and ULD-RMM We first show that the gradient oracle in the classical ULD and ULD-RMM algorithms can be efficiently simulated by the quantum evaluation oracle via quantum gradient estimation. Suppose we are given access to the evaluation oracle (1.4) for f(x). Then by Jordan's algorithm [24] (see Lemma C.1 for details), there is a quantum algorithm that can compute ∇f(x) with a polynomially small ℓ₁-error by querying the evaluation oracle O(1) times. Using this, we can prove the following theorem (see Appendix C.1 for details). Theorem 3.1 (Informal version of Theorem C.1 and Theorem C.2). Let ρ ∝ e^{−f} be a d-dimensional log-concave distribution with f satisfying (1.1). Given a quantum evaluation oracle for f,
• the quantum inexact ULD algorithm uses Õ(κ²d^{1/2}ε^{−1}) queries, and
• the quantum inexact ULD-RMM algorithm uses Õ(κ^{7/6}d^{1/6}ε^{−1/3} + κd^{1/3}ε^{−2/3}) queries,
to quantumly sample from a distribution that is ε-close to ρ in W₂-distance.
We note that the query complexities of our quantum algorithms using a zeroth-order oracle match the state-of-the-art classical ULD [10] and ULD-RMM [37] complexities with a first-order oracle. The main technical difficulty of applying the quantum gradient algorithm is that it produces a stochastic gradient oracle in which the output g of the quantum algorithm satisfies ‖E[g] − ∇f(x)‖₁ ≤ d^{−Ω(1)}. In particular, the randomness of the gradient computation is “entangled” with the randomness of the Markov chain. We use the classical analysis of ULD and ULD-RMM processes [36] to prove that the stochastic gradient does not significantly slow down the mixing of ULD processes, and that the error caused by the quantum gradient algorithm can be controlled.
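For intuition, here is a crude classical discretization (ours; cruder than the ULD/ULD-RMM integrators the paper analyzes) of the underdamped Langevin SDE dx = v dt, dv = −γv dt − ∇f(x) dt + √(2γ) dB_t, with a noisy gradient standing in for the quantum gradient estimate:

import numpy as np

def uld_euler(grad_f, x0, gamma=2.0, eta=1e-2, steps=5000, noise=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    x, v = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        g = grad_f(x) + noise * rng.normal(size=x.shape)   # stochastic gradient
        x = x + eta * v
        v = v - eta * (gamma * v + g) + np.sqrt(2 * gamma * eta) * rng.normal(size=x.shape)
    return x

grad_f = lambda x: x                       # f(x) = ||x||^2 / 2, so the target is N(0, I)
samples = np.stack([uld_euler(grad_f, np.ones(2), seed=s) for s in range(200)])
print(samples.mean(axis=0), samples.var(axis=0))   # near-zero mean, near-unit variance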
Quantum MALA We next propose two quantum algorithms with lower query complexity than classical MALA, one with a Gaussian initial distribution and another with a warm-start distribution. The main technical tool we use is a quantum walk in continuous space.
The classical MALA (i.e., Metropolized HMC) starts from a Gaussian distribution N(0, L⁻¹I_d) and performs a leapfrog step in each iteration. It is well-known that the initial Gaussian state
|ρ₀⟩ = ∫_{R^d} (L/2π)^{d/4} e^{−(L/4)‖z − x*‖₂²} |z⟩ dz    (3.1)
can be efficiently prepared. We show that the quantum walk update operator
U := ∫_{R^d} dx ∫_{R^d} dy √p_{x→y} |x⟩⟨x| ⊗ |y⟩⟨0|    (3.2)
can be efficiently implemented, where p_{x→y} := p(x, y) is the transition density from x to y, and the density p satisfies ∫_{R^d} p(x, y) dy = 1 for any x ∈ R^d.
Lemma 3.1 (Informal version of Lemma C.6). The continuous-space quantum walk operator corresponding to the MALA Markov chain can be implemented with O(1) gradient and evaluation queries.
In general, it is difficult to quantumly speed up the mixing time of a classical Markov chain, which is upper bounded by O(δ⁻¹ log(ρ_min⁻¹)), where δ is the spectral gap. However, [45] shows that a quadratic speedup is possible when following a sequence of slowly-varying Markov chains. More specifically, let ρ₀, . . . , ρ_r be the stationary distributions of the reversible Markov chains M₀, . . . , M_r and let |ρ₀⟩, . . . , |ρ_r⟩ be the corresponding quantum states. Suppose |⟨ρ_i|ρ_{i+1}⟩| ≥ p for all i ∈ {0, . . . , r − 1}, and suppose the spectral gaps of M₀, . . . , M_r are lower-bounded by δ. Then we can prepare a quantum state |ρ̃_r⟩ that is ε-close to |ρ_r⟩ using Õ(δ^{−1/2} r p⁻¹) quantum walk steps. To fulfill the slowly-varying condition, we consider an annealing process that goes from ρ₀ = N(0, L⁻¹I_d) to the target distribution ρ_{M+1} = ρ in M = Õ(√d) stages. At the ith stage, the stationary distribution is ρ_i ∝ e^{−f_i} with f_i := f + (1/2)σ_i⁻²‖x‖². By properly choosing σ₁ ≤ · · · ≤ σ_M, we prove that this sequence of Markov chains is slowly varying. Lemma 3.2 (Informal version of Lemma B.6). If we take σ₁² = 1/(2dL) and σ²_{i+1} = (1 + 1/√d)σ_i², then for 0 ≤ i ≤ M, we have |⟨ρ_i|ρ_{i+1}⟩| ≥ Ω(1).
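A sketch of the annealing schedule of Lemma 3.2; the initial value σ₁² = 1/(2dL) follows the lemma as reconstructed above, while the stopping rule 1/σ_M² ≤ µ (anneal until the Gaussian term is dominated by the strong convexity of f) is our simplification:

import numpy as np

def schedule(d, L, mu):
    sigma2 = [1.0 / (2 * d * L)]
    while 1.0 / sigma2[-1] > mu:                       # Gaussian term still dominant
        sigma2.append((1 + 1 / np.sqrt(d)) * sigma2[-1])
    return sigma2

s = schedule(d=100, L=1.0, mu=0.1)
print(len(s))   # M grows like sqrt(d) * log(d * kappa), i.e., O~(sqrt(d)) stages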
Combining Lemma 3.1, Lemma 3.2, and the effective spectral gap of MALA (Lemma C.7), we have: Theorem 3.2 (Informal version of Theorem C.7). Let ρ ∝ e^{−f} be a d-dimensional log-concave distribution with f satisfying (1.1). There is a quantum algorithm (Algorithm 1) that prepares a state |ρ̃⟩ with ‖|ρ̃⟩ − |ρ⟩‖ ≤ ε using Õ(κ^{1/2}d) gradient and evaluation oracle queries.
Algorithm 1: QUANTUMMALAFORLOG-CONCAVESAMPLING (Informal)
Input: Evaluation oracle O_f, gradient oracle O_{∇f}, smoothness parameter L, convexity parameter µ
Output: Quantum state |ρ̃⟩ close to the stationary distribution state ∝ ∫_{R^d} e^{−f(x)/2} d|x⟩
1 Compute the cooling schedule parameters σ₁, . . . , σ_M
2 Prepare the state |ρ₀⟩ ∝ ∫_{R^d} e^{−(1/4)‖x‖²/σ₁²} d|x⟩
3 for i ← 1, . . . , M do
4   Construct O_{f_i} and O_{∇f_i}, where f_i(x) = f(x) + (1/2)‖x‖²/σ_i²
5   Construct the quantum walk update unitary U with O_{f_i} and O_{∇f_i}
6   Implement the quantum walk operator and the approximate reflection R̃_i
7   Prepare |ρ_i⟩ by performing π/3-amplitude amplification with R̃_i on the state |ρ_{i−1}⟩|0⟩
8 return |ρ_M⟩
For the classical MALA with a Gaussian initial distribution, it was shown by [29] that the mixing time is at least Ω̃(κd). Theorem 3.2 quadratically reduces the κ dependence.
Note that Algorithm 1 uses a first-order oracle, instead of the zeroth-order oracle used in the quantum ULD algorithms. The technical barrier to applying the quantum gradient algorithm (Lemma C.1) in the quantum MALA is to analyze the classical MALA with a stochastic gradient oracle. We currently do not know whether the “entangled randomness” dramatically increases the mixing time.
More technical details and proofs are provided in Appendix C.
4 Quantum Algorithm for Estimating Normalizing Constants
In this section, we apply our quantum log-concave sampling algorithms to the normalizing constant estimation problem. A very natural approach to this problem is via MCMC, which constructs a multi-stage annealing process and uses a sampler at each stage to solve a mean estimation problem. We show how to quantumly speed up these annealing processes and improve the query complexity of estimating normalizing constants.
Quantum speedup for the standard annealing process We first consider the standard annealing process for log-concave distributions, as already applied in the previous section. Recall that we pick parameters σ₁ < σ₂ < · · · < σ_M and construct a sequence of Markov chains with stationary distributions ρ_i ∝ e^{−f_i}, where f_i = f + ‖x‖²/(2σ_i²). Then, at the ith stage, we estimate the expectation E_{ρ_i}[g_i], where
g_i = exp( (1/2)(σ_i⁻² − σ_{i+1}⁻²)‖x‖² ).    (4.1)
If we can estimate each expectation with relative error at most O(ε/M), then the product of these M quantities estimates the normalizing constant Z = ∫_{R^d} e^{−f(x)} dx with relative error at most ε.
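A classical Monte Carlo illustration (ours) of this telescoping product for the simple case f(x) = ‖x‖²/2, where each annealed distribution ρ_i is Gaussian and the target Z = (2π)^{d/2} is known exactly:

import numpy as np

rng = np.random.default_rng(1)
d, n = 2, 200000
sigma2 = [0.5, 1.0, 2.0, 4.0]                  # a toy schedule

def sample_rho(s2, size):                      # rho_i is N(0, (1 + 1/s2)^{-1} I) here
    return rng.normal(scale=np.sqrt(1 / (1 + 1 / s2)), size=(size, d))

Z = (2 * np.pi / (1 + 1 / sigma2[0])) ** (d / 2)        # Z_1 in closed form
for s2, s2n in zip(sigma2[:-1], sigma2[1:]):
    x = sample_rho(s2, n)
    g = np.exp(0.5 * (1 / s2 - 1 / s2n) * np.sum(x ** 2, axis=1))   # g_i of (4.1)
    Z *= g.mean()                               # E_{rho_i}[g_i] = Z_{i+1} / Z_i
x = sample_rho(sigma2[-1], n)                   # one last factor removes the Gaussian term
Z *= np.exp(0.5 * np.sum(x ** 2, axis=1) / sigma2[-1]).mean()
print(Z, (2 * np.pi) ** (d / 2))                # estimate vs exact value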
For the mean estimation problem, [31] showed that when the relative variance Var_{ρ_i}[g_i]/E_{ρ_i}[g_i]² is constant, there is a quantum algorithm for estimating the expectation E_{ρ_i}[g_i] within relative error at most ε using Õ(1/ε) quantum samples from the distribution ρ_i. Our annealing schedule satisfies the bounded relative variance condition. Therefore, by the quantum mean estimation algorithm, we improve the sampling complexity of the standard annealing process from Õ(M²ε⁻²) to Õ(Mε⁻¹).
To further improve the query complexity, we consider using the quantum MALAs developed in the previous section to generate samples. Observe that Algorithm 1 outputs a quantum state corresponding to some distribution that is close to ρ_i, instead of an individual sample. If we can estimate the expectation without destroying the quantum state, then we can reuse the state and evolve it for the (i + 1)st Markov chain. Fortunately, we can use non-destructive mean estimation to estimate the expectation and restore the initial states. A detailed error analysis of this algorithm can be found in [6, 22]. We first prepare Õ(Mε⁻¹) copies of initial states corresponding to the Gaussian distribution N(0, L⁻¹I_d). Then, for each stage, we apply the non-destructive mean estimation algorithm to estimate the expectation E_{ρ_i}[g_i] and then run quantum MALA to evolve the states |ρ_i⟩ to |ρ_{i+1}⟩. This gives our first quantum algorithm for estimating normalizing constants. Theorem 4.1 (Informal version of Theorem D.2). Let Z be the normalizing constant in (1.3). There is a quantum algorithm (Algorithm 2) that outputs an estimate Z̃ with relative error at most ε using Õ(d^{3/2}κ^{1/2}ε⁻¹) queries to the quantum gradient and evaluation oracles.
Quantum speedup for MLMC Now we consider using multilevel Monte Carlo (MLMC) as the annealing process and show how to achieve quantum speedup. MLMC was originally developed by [23] for parametric integration; then [17] applied MLMC to simulate stochastic differential equations (SDEs). The idea of MLMC is natural: we choose a different number of samples at each stage based on the cost and variance of that stage.
To estimate normalizing constants, a variant of MLMC was proposed in [16]. Unlike the standard MLMC for bounding the mean-squared error, they upper bound the bias and the variance separately, and the analysis is technically difficult. The first quantum algorithm based on MLMC was subsequently developed by [2] based on the quantum mean estimation algorithm. Roughly speaking, the quantum algorithm can quadratically reduce the ε-dependence of the sample complexity compared with classical MLMC.
Algorithm 2: QUANTUMMALAFORESTIMATINGNORMALIZINGCONSTANT (Informal)
Input: Evaluation oracle O_f, gradient oracle O_{∇f}
Output: Estimate Z̃ of Z with relative error at most ε
1 M ← Õ(√d), K ← Õ(ε⁻¹)
2 Compute the cooling schedule parameters σ₁, . . . , σ_M
3 for j ← 1, . . . , K do
4   Prepare the state |ρ_{1,j}⟩ ∝ ∫_{R^d} e^{−(1/4)‖x‖²/σ₁²} |x⟩ dx
5 Z̃ ← (2πσ₁²)^{d/2}
6 for i ← 1, . . . , M do
7   g̃_i ← non-destructive mean estimation for g_i using {|ρ_{i,1}⟩, . . . , |ρ_{i,K}⟩}
8   Z̃ ← Z̃ g̃_i
9   for j ← 1, . . . , K do
10    |ρ_{i+1,j}⟩ ← QUANTUMMALA(O_{f_{i+1}}, O_{∇f_{i+1}}, |ρ_{i,j}⟩)
11 return Z̃
In this work, we apply the quantum accelerated MLMC (QA-MLMC) scheme [2] to simulate underdamped Langevin dynamics as the SDE. One challenge in using QA-MLMC is that g_i in our setting is not Lipschitz. Fortunately, as suggested by [16], this issue can be resolved by truncating large x and replacing g_i by h_i := min{ g_i, exp( (r_i⁺)² / (σ_i²(1 + α⁻¹)) ) }, with the choice
α = Õ( 1 / (√d log(1/ε)) ),    r_i⁺ = E_{ρ_{i+1}}‖x‖ + Θ(σ_i √((1 + α) log(1/ε)))    (4.2)
to ensure that h_i/E_{ρ_i}g_i is O(σ_i⁻¹)-Lipschitz. Furthermore, |E_{ρ_i}(h_i − g_i)| < ε by Lemmas C.7 and C.8 in [16]. For simplicity, we regard g_i as a Lipschitz continuous function in our main results. We present QA-MLMC in Algorithm 3, where the sampling algorithm A can be chosen to be quantum inexact ULD/ULD-RMM or quantum MALA.
Algorithm 3: QA-MLMC (Informal)
Input: Evaluation oracle O_f, function g, error ε, a quantum sampler A(x₀, f, η) for ρ
Output: An estimate of R̃ = E_ρ h
1 K ← Õ(ε⁻¹)
2 Compute the initial point x₀ and the step size η₀
3 Compute the numbers of samples N₁, . . . , N_K
4 for j ← 1, . . . , K do
5   Let η_j = η/2^{j−1}
6   for i ← 1, . . . , N_j do
7     Sample X_i^{η_j} by A(f, x₀, η_j), and sample X_i^{η_j/2} by A(f, x₀, η_j/2)
8   G̃_j⁻ ← QMEANEST({g(X_i^{η_j})}_{i∈[N_j]}), and G̃_j⁺ ← QMEANEST({g(X_i^{η_j/2})}_{i∈[N_j]})
9 return R̃ = G̃₀ + ∑_{j=0}^K (G̃_j⁻ − G̃_j⁺)
This QA-MLMC framework reduces the ε-dependence of the sampling complexity for estimating normalizing constants from ε⁻² to ε⁻¹ in both the ULD and ULD-RMM cases, as compared with the state-of-the-art classical results [16].
Using the quantum inexact ULD and ULD-RMM algorithms (Theorem 3.1) to generate samples, we obtain our second quantum algorithm for estimating normalizing constants (see Appendix D for proofs). Theorem 4.2 (Informal version of Theorem D.3 and Theorem D.4). Let Z be the normalizing constant in (1.3). There exist quantum algorithms for estimating Z with relative error at most ε using
• quantum inexact ULD with Õ(d^{3/2}κ²ε⁻¹) queries to the evaluation oracle, and
• quantum inexact ULD-RMM with Õ((d^{7/6}κ^{7/6} + d^{4/3}κ)ε⁻¹) queries to the evaluation oracle.
5 Quantum Lower Bound
Finally, we lower bound the quantum query complexity of normalizing constant estimation.
Theorem 5.1. For any fixed positive integer k, given query access (1.4) to a function f : R^k → R that is 1.5-smooth and 0.5-strongly convex, the quantum query complexity of estimating the partition function Z = ∫_{R^k} e^{−f(x)} dx within multiplicative error ε with probability at least 2/3 is Ω(ε^{−1/(1+4/k)}).
The proof of our quantum lower bound is inspired by the construction in Section 5 of [16]. They consider a log-concave function whose value is negligible outside a hypercube centered at 0. The interior of the hypercube is decomposed into cells of two types. The function takes different values on each type, and the normalizing constant estimation problem reduces to determining the number of cells of each type. Quantumly, we follow the same construction and reduce the cell counting problem to the Hamming weight problem: given an n-bit Boolean string and two integers ℓ₁ < ℓ₂, decide whether the Hamming weight (i.e., the number of ones) of this string is ℓ₁ or ℓ₂. This problem has a known quantum query lower bound [32], which implies the quantum hardness of estimating the normalizing constant. The full proof of Theorem 5.1 appears in Appendix E.
Acknowledgements
AMC acknowledges support from the Army Research Office (grant W911NF-20-1-0015); the Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Accelerated Research in Quantum Computing program; and the National Science Foundation (grant CCF-1813814). TL was supported by a startup fund from Peking University, and the Advanced Institute of Information Technology, Peking University. JPL was supported by the National Science Foundation (grant CCF-1813814), an NSF Quantum Information Science and Engineering Network (QISE-NET) triplet award (DMR-1747426), a Simons Foundation award (No. 825053), and the Simons Quantum Postdoctoral Fellowship. RZ was supported by the University Graduate Continuing Fellowship from UT Austin. | 1. What are the main contributions and strengths of the paper regarding quantum algorithms for sampling and normalizing constant estimation?
2. What are the weaknesses and limitations of the paper, particularly in terms of mathematical formalism and preparation of quantum states?
3. Do you have any questions or suggestions regarding the quantum tools used in the paper, such as Jordan's algorithm and the square-root speedup?
4. How does the reviewer assess the significance and potential impact of the paper's findings in the field of quantum computing and MCMC algorithms?
5. Are there any minor notes or typos in the review that should be addressed? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The authors consider the problem of sampling and normalizing constant estimation for log-concave distributions, and derive query complexity bounds for quantum algorithms. The authors use state-of-the-art bounds for classical algorithms as a starting point, and show how to achieve quantum speedup.
They show that for strongly log-concave distributions with condition number κ:
Sampling: Using underdamped Langevin diffusion, quantum algorithms can obtain the same query complexity as classical algorithms while using zeroth rather than first-order (gradient) queries. Moreover, quantum MALA has mixing time that is the square root of classical MALA, Õ(κ^{1/2}d) and Õ(κ^{1/2}d^{1/4}) from a cold or warm start, respectively.
Normalizing constant estimation: Quantum algorithms can speed up normalizing constant estimation, obtaining query complexity Õ(κ^{1/2}d^{3/2}/ϵ) (with MALA) or Õ((κ^{7/6}d^{7/6} + κd^{4/3})/ϵ) (using ULD-RMM), which improves the classical ϵ² dependence.
They also show a 1/ϵ^{1−o(1)} quantum lower bound for normalizing constant estimation, giving near-optimality in ϵ.
Key quantum tools include:
Jordan's algorithm which gives an estimate of the gradient using O(1) queries to a quantum evaluation oracle.
A general square-root speedup for quantum algorithms based on reversible Markov chains, when using a sequence of Markov chains with slowly varying stationary distributions, starting from the initial distribution. (This is applied to MALA.)
Quantum algorithm for mean estimation, and quantum version of multilevel Monte Carlo whose query complexity has dependence O(1/ϵ).
Strengths And Weaknesses
This paper initiates work on the very natural idea of using quantum algorithms to speed up gradient-based MCMC algorithms, establishing basic quantum analogues of the well-studied Langevin-based algorithms and obtaining the signature "square-root" speedups in certain cases. This lays a good foundation for a promising area of research. The observation that quantum algorithms can make do with zeroth-order information is useful and surprising (for me).
Some of the math regarding the quantum "objects" involved can be made more mathematically precise, especially for those who are not as familiar. I suggest adding more exposition on the quantum formalism.
Questions
How are quantum states over R^d defined; what "space" do they live in? It seems that they have to be "square roots" of probability distributions over R^d. Some care (and measure theory...) is required to make this mathematically formal, but the preliminaries section only discusses the case of finite-dimensional spaces.
In what sense can one expect to prepare these states (in e.g., a discrete quantum circuit, or another reasonable model of a quantum computer)?
Minor notes:
line 51: delete "log-concave" - that is a description of the distribution, not the problem. Add: random variable "with distribution ρ̃".
(A.3): missing - sign
line 527: Besides... also have -> In addition... also has.
line 637: There seems to be a missing factor of 2: ‖x‖² / (2σ_i²)
line 643: Missing ⋅; ≥ should be ≤.
666: speedup -> sped up
704: oracle "satisfying" -> satisfies
Algorithm 6-7: It seems that the computation of g̃ can be put at the beginning of the for loop to avoid writing it twice.
Algorithm 8, line 6: missing tilde.
790: register -> registers
797: What does "uncompute" mean?
849: What is T in the following equation?
855: O(ϵ) should be O(βϵ).
867: Given a state... -> Let ... be a state
929: "speedup" the query complexity -> reduce
974: "much" complicated -> more
Limitations
The authors acknowledge some limitations, which they present as open questions. Most significantly, do the quantum algorithms based on underdamped Langevin have a better dependence on d and κ? (Known speedups don't apply to irreversible chains.)
NIPS | Title
Quantum Algorithms for Sampling Log-Concave Distributions and Estimating Normalizing Constants
Abstract
Given a convex function f : R^d → R, the problem of sampling from a distribution ∝ e^{−f(x)} is called log-concave sampling. This task has wide applications in machine learning, physics, statistics, etc. In this work, we develop quantum algorithms for sampling log-concave distributions and for estimating their normalizing constants ∫_{R^d} e^{−f(x)} dx. First, we use underdamped Langevin diffusion to develop quantum algorithms that match the query complexity (in terms of the condition number κ and dimension d) of analogous classical algorithms that use gradient (first-order) queries, even though the quantum algorithms use only evaluation (zeroth-order) queries. For estimating normalizing constants, these algorithms also achieve quadratic speedup in the multiplicative error ε. Second, we develop quantum Metropolis-adjusted Langevin algorithms with query complexity Õ(κ^{1/2}d) and Õ(κ^{1/2}d^{3/2}/ε) for log-concave sampling and normalizing constant estimation, respectively, achieving polynomial speedups in κ, d, ε over the best known classical algorithms by exploiting quantum analogs of the Monte Carlo method and quantum walks. We also prove a 1/ε^{1−o(1)} quantum lower bound for estimating normalizing constants, implying near-optimality of our quantum algorithms in ε.
1 Introduction
Sampling from a given distribution is a fundamental computational problem. For example, in statistics, samples can determine confidence intervals or explore posterior distributions. In machine learning, samples are used for regression and to train supervised learning models. In optimization, samples from well-chosen distributions can produce points near local or even global optima.
Sampling can be nontrivial even when the distribution is known. Indeed, efficient sampling is often a challenging computational problem, and bottlenecks the running time in many applications. Many efforts have been made to develop fast sampling methods. Among those, one of the most successful tools is Markov Chain Monte Carlo (MCMC), which uses a Markov chain that converges to the desired distribution to (approximately) sample from it.
Here we focus on the fundamental task of log-concave sampling, i.e., sampling from a distribution proportional to e−f where f : Rd → R is a convex function. This covers many practical applications such as multivariate Gaussian distributions and exponential distributions. Provable performance guarantees for log-concave sampling have been widely studied [15]. A closely related problem is estimating the normalizing constants of log-concave distributions, which also has many applications [16].
Quantum computing has been applied to speed up many classical algorithms based on Markov processes, so it is natural to investigate quantum algorithms for log-concave sampling. If we can prepare a quantum state whose amplitudes are the square roots of the corresponding probabilities, then measurement yields a random sample from the desired distribution. In this approach, the number of required qubits is only poly-logarithmic in the size of the sample space. Unfortunately, such a quantum state probably cannot be efficiently prepared in general, since this would imply SZK ⊆ BQP [1]. Nevertheless, in some cases, quantum algorithms can achieve polynomial speedup over classical algorithms. Examples include uniform sampling on a 2D lattice [35], estimating partition functions [4, 22, 31, 45, 46], and estimating volumes of convex bodies [6]. However, despite the importance of sampling log-concave distributions and estimating normalizing constants, we are not aware of any previous quantum speedups for general instances of these problems.
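To make the amplitude-encoding idea above concrete, here is a minimal classical simulation on a small discretized sample space; the specific distribution and grid size are illustrative choices, not anything from the paper.

```python
import numpy as np

# Amplitude encoding: |psi> = sum_i sqrt(p_i) |i>. Measuring |psi> in the
# computational basis returns outcome i with probability |sqrt(p_i)|^2 = p_i.
rng = np.random.default_rng(0)
p = np.array([0.1, 0.2, 0.4, 0.3])            # target probabilities p_i
amplitudes = np.sqrt(p)                        # amplitudes of |psi>

samples = rng.choice(len(p), size=100_000, p=np.abs(amplitudes) ** 2)
print(np.bincount(samples) / len(samples))     # empirical frequencies ~ p
```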
Formulation In this paper, we consider a d-dimensional convex function f : R^d → R which is L-smooth and µ-strongly convex, i.e., µ, L > 0 and for any x, y ∈ R^d, x ≠ y,

(f(y) − f(x) − 〈∇f(x), y − x〉) / (‖x − y‖_2^2 / 2) ∈ [µ, L]. (1.1)
We denote by κ := L/µ the condition number of f. The corresponding log-concave distribution has probability density ρ_f : R^d → R with

ρ_f(x) := e^{−f(x)} / Z_f, (1.2)

where the normalizing constant is

Z_f := ∫_{x∈R^d} e^{−f(x)} dx. (1.3)
When there is no ambiguity, we abbreviate ρ_f and Z_f as ρ and Z, respectively. Given an ε ∈ (0, 1),
• the goal of log-concave sampling is to output a random variable with distribution ρ̃ such that ‖ρ̃ − ρ‖ ≤ ε, and

• the goal of normalizing constant estimation is to output a value Z̃ such that with probability at least 2/3, (1 − ε)Z ≤ Z̃ ≤ (1 + ε)Z.
Here ‖ · ‖ is a certain norm. We consider the general setting where the function f is specified by an oracle. In particular, we consider the quantum evaluation oracle Of , a standard model in the quantum computing literature [3, 6, 7, 50]. The evaluation oracle acts as
Of |x, y〉 = |x, f(x) + y〉 ∀x ∈ Rd, y ∈ R. (1.4)
(Quantum computing notations are briefly explained in Section 2.) We also consider the quantum gradient oracle O∇f with
O∇f |x, z〉 = |x,∇f(x) + z〉 ∀x, z ∈ Rd. (1.5)
In other words, we allow superpositions of queries to both function evaluations and gradients. The essence of quantum speedup is the ability to compute with carefully designed superpositions.
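As a concrete (classical) illustration of this formulation, the following sketch checks condition (1.1) numerically and computes Z_f by quadrature for a simple one-dimensional f; the choice f(x) = x²/2 + log cosh x, with (µ, L) = (1, 2), is our own illustrative example.

```python
import numpy as np
from scipy.integrate import quad

# 1-D example: f(x) = x^2/2 + log(cosh x) has f''(x) = 2 - tanh(x)^2 in [1, 2],
# so it satisfies (1.1) with (mu, L) = (1, 2) and condition number kappa = 2.
f = lambda x: 0.5 * x**2 + np.log(np.cosh(x))
grad = lambda x: x + np.tanh(x)

# Empirically check that the ratio in (1.1) stays inside [mu, L].
rng = np.random.default_rng(1)
xs, ys = rng.normal(size=1000), rng.normal(size=1000)
ratio = (f(ys) - f(xs) - grad(xs) * (ys - xs)) / ((ys - xs) ** 2 / 2)
print(ratio.min(), ratio.max())                # lies inside [1, 2]

# Normalizing constant Z_f = int e^{-f(x)} dx by quadrature; the integrand is
# negligible outside [-30, 30], so finite limits avoid overflow in cosh.
Z, _ = quad(lambda x: np.exp(-f(x)), -30, 30)
print("Z_f ≈", Z)
```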
Contributions Our main results are quantum algorithms that speed up log-concave sampling and normalizing constant estimation.
Theorem 1.1 (Main log-concave sampling result). Let ρ denote the log-concave distribution (1.2). There exist quantum algorithms that output a random variable distributed according to ρ̃ such that
• W_2(ρ̃, ρ) ≤ ε, where W_2 is the Wasserstein 2-norm (2.4), using Õ(κ^{7/6}d^{1/6}ε^{−1/3} + κd^{1/3}ε^{−2/3}) queries to the quantum evaluation oracle (1.4); or

• ‖ρ̃ − ρ‖_TV ≤ ε, where ‖·‖_TV is the total variation distance (2.3), using Õ(κ^{1/2}d) queries to the quantum gradient oracle (1.5), or Õ(κ^{1/2}d^{1/4}) queries when the initial distribution is warm (formally defined in Appendix C.2.1).
In the above results, the query complexity Õ(κ^{7/6}d^{1/6}ε^{−1/3} + κd^{1/3}ε^{−2/3}) is achieved by our quantum ULD-RMM algorithm. Although the quantum query complexity is the same as the best known classical result [37], we emphasize that our quantum algorithm uses a zeroth-order oracle while [37] uses a first-order oracle. The query complexity Õ(κ^{1/2}d) is achieved by our quantum MALA algorithm that uses a first-order oracle (as in classical algorithms). This is a quadratic speedup in κ compared with the best known classical algorithm [28]. With a warm start, our quantum speedup is even more significant: we achieve quadratic speedups in κ and d as compared with the best known classical algorithm with a warm start [47].
Theorem 1.2 (Main normalizing constant estimation result). There exist quantum algorithms that estimate the normalizing constant by Z̃ within multiplicative error ε with probability at least 3/4,
• using Õ(κ^{7/6}d^{7/6}ε^{−1} + κd^{4/3}ε^{−1}) queries to the quantum evaluation oracle (1.4); or
• using Õ(κ^{1/2}d^{3/2}ε^{−1}) queries to the quantum gradient oracle (1.5).
Furthermore, this task has quantum query complexity at least Ω(ε^{−1+o(1)}) (Theorem 5.1).
Our query complexity Õ(κ^{7/6}d^{7/6}ε^{−1} + κd^{4/3}ε^{−1}) for normalizing constant estimation achieves a quadratic speedup in the precision ε compared with the best known classical algorithm [16]. More remarkably, our quantum ULD-RMM algorithm again uses a zeroth-order oracle while the slower best known classical algorithm uses a first-order oracle [16]. Our quantum algorithm working with a first-order oracle achieves polynomial speedups in all parameters compared with the best known classical algorithm [16]. Moreover, the precision-dependence of our quantum algorithms is nearly optimal, which is quadratically better than the classical lower bound in 1/ε [16].
To the best of our knowledge, these are the first quantum algorithms with quantum speedup for the fundamental problems of log-concave sampling and estimating normalizing constants. We explore multiple classical techniques including the underdamped Langevin diffusion (ULD) method [12–14, 43], the randomized midpoint method for underdamped Langevin diffusion (ULD-RMM) [36, 37], and the Metropolis-adjusted Langevin algorithm (MALA) [8, 11, 15, 28, 29, 47], and achieve quantum speedups. Our main contributions are as follows.
• Log-concave sampling. For this problem, our quantum algorithms based on ULD and ULD-RMM have the same query complexity as the best known classical algorithms, but our quantum algorithms only use a zeroth-order (evaluation) oracle, while the classical algorithms use the first-order (gradient) oracle. For MALA, this improvement on the order of oracles is nontrivial, but we can use the quantum gradient oracle in our quantum MALA algorithm to achieve a quadratic speedup in the condition number κ. Furthermore, given a warm-start distribution, our quantum algorithm achieves a quadratic speedup in all parameters.
• Normalizing constant estimation. For this problem, our quantum algorithms provide larger speedups. In particular, our quantum algorithms based on ULD and ULD-RMM achieve quadratic speedup in the multiplicative precision ε (while using a zeroth-order oracle) compared with the corresponding best-known classical algorithms (using a first-order oracle). Our quantum algorithm based on MALA achieves polynomial speedups in all parameters. Furthermore, we prove that our quantum algorithm is nearly optimal in terms of ε.
We summarize our results and compare them to previous classical algorithms in Table 1 and Table 2. See Appendix A for more detailed comparisons to related classical and quantum work.
Techniques In this work, we develop a systematic approach for studying the complexity of quantum walk mixing and show that for any reversible classical Markov chain, we can obtain quadratic speedup for the mixing time as long as the initial distribution is warm. In particular, we apply the quantum walk and quantum annealing in the context of Langevin dynamics and achieve polynomial quantum speedups.
The technical ingredients of our quantum algorithms are highlighted below.
• Quantum simulated annealing (Lemma 3.2). Our quantum algorithm for estimating normalizing constants combines the quantum simulated annealing framework of [45] and the quantum mean estimation algorithm of [31]. For each type of Langevin dynamics (which are random walks), we build a corresponding quantum walk. Crucially, the spectral gap of the random walk is quadratically amplified in the phase gap of the corresponding quantum walk. This allows us to use a Grover-like procedure to produce the stationary distribution state given a sufficiently good initial state. In the simulated annealing framework, this initial state is the stationary distribution state of the previous Markov chain.
• Effective spectral gap (Lemma C.7). We show how to leverage a “warm” initial distribution to achieve a quantum speedup for sampling. Classically, a warm start leads to faster mixing even if
the spectral gap is small. Quantumly, we generalize the notion of “effective spectral gap” [6, 27, 34] to our more general sampling problem. We show that with a bounded warmness parameter, quantum algorithms can achieve a quadratic speedup in the mixing time. By viewing the sampling problem as a simulated annealing process with only one Markov chain, we prove a quadratic speedup for quantum MALA by analyzing the effective spectral gap.
• Quantum gradient estimation (Lemma C.1). We adapt Jordan’s quantum gradient algorithm [24] to the ULD and ULD-RMM algorithms and give rigorous proofs to bound the sampling error due to gradient estimation errors.
Open questions Our work raises several natural questions for future investigation:
• Can we achieve quantum speedup in d and κ for unadjusted Langevin algorithms such as ULD and ULD-RMM? The main difficulty is that ULD and ULD-RMM are irreversible, while most available quantum walk techniques only apply to reversible Markov chains. New techniques might be required to resolve this question.
• Can we achieve further quantum speedup for estimating normalizing constants with a warm start distribution? This might require a more refined version of quantum mean estimation.
• Can we give quantum algorithms for estimating normalizing constants with query complexity sublinear in d? Such a result would give a provable quantum-classical separation due to the Ω(d^{1−o(1)}/ε^{2−o(1)}) classical lower bound proved in [16].
Limitations and societal impacts Researchers working on theoretical aspects of quantum computing or Monte Carlo methods may benefit from our results. In the long term, once fault-tolerant quantum computers have been built, our results may find practical applications in MCMC methods arising in the real world. As far as we are aware, our work does not have negative societal impacts.
2 Preliminaries
Basic definitions of quantum computation Quantum mechanics is formulated in terms of linear algebra. The computational basis of C^d is {e_0, . . . , e_{d−1}}, where e_i = (0, . . . , 1, . . . , 0)^⊤ with the 1 in the (i + 1)st position. We use Dirac notation, writing |i〉 (called a "ket") for e_i and 〈i| (a "bra") for e_i^⊤.
The tensor product of quantum states is their Kronecker product: if |u〉 ∈ C^{d_1} and |v〉 ∈ C^{d_2}, then we have |u〉 ⊗ |v〉 ∈ C^{d_1} ⊗ C^{d_2} with

|u〉 ⊗ |v〉 = (u_0v_0, u_0v_1, . . . , u_{d_1−1}v_{d_2−1})^⊤. (2.1)

The basic element of quantum information is a qubit, a quantum state in C², which can be written as a|0〉 + b|1〉 for some a, b ∈ C with |a|² + |b|² = 1. An n-qubit tensor product state can be written as |v_1〉 ⊗ · · · ⊗ |v_n〉 ∈ C^{2^n}, where for any i ∈ [n], |v_i〉 is a one-qubit state. Note however that most states in C^{2^n} are not product states. We sometimes abbreviate |u〉 ⊗ |v〉 as |u〉|v〉. Operations on quantum states are unitary transformations. They are typically stated in the circuit model, where a k-qubit gate is a unitary matrix in C^{2^k}. Two-qubit gates are universal, i.e., every n-qubit gate can be decomposed into a product of gates that act as the identity on n − 2 qubits and as some two-qubit gate on the other 2 qubits. The gate complexity of an operation refers to the number of two-qubit gates used in a quantum circuit for realizing it.
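A small numerical illustration of (2.1), and of the remark that most states are not product states; the specific states below are illustrative choices.

```python
import numpy as np

# Tensor product of two qubits, as in (2.1): the Kronecker product.
u = np.array([1, 1]) / np.sqrt(2)              # |u> = (|0> + |1>)/sqrt(2)
v = np.array([1, 0])                           # |v> = |0>
uv = np.kron(u, v)                             # |u> ⊗ |v> in C^4
print(uv)                                      # [0.7071, 0, 0.7071, 0]

# A state that is NOT a product state: the Bell state (|00> + |11>)/sqrt(2).
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
# Product states have rank-1 2x2 reshapes; the Bell state has rank 2.
print(np.linalg.matrix_rank(uv.reshape(2, 2)),
      np.linalg.matrix_rank(bell.reshape(2, 2)))
```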
Quantum access to a function, referred to as a quantum oracle, must be reversible and allow access to different values of the function in superposition (i.e., for linear combinations of computational basis states). For example, consider the unitary evaluation oracle O_f defined in (1.4). Given a probability distribution {p_i}_{i=1}^n and a set of points {x_i}_{i=1}^n, we have

O_f ∑_{i=1}^{n} √p_i |x_i〉|0〉 = ∑_{i=1}^{n} √p_i |x_i〉|f(x_i)〉. (2.2)
Then a measurement would give f(xi) with probability pi. However, a quantum oracle can not only simulate random sampling, but can enable uniquely quantum behavior through interference. Examples include amplitude amplification—the main idea behind Grover’s search algorithm [20] and
the amplitude estimation procedure used in this paper—and many other quantum algorithms relying on coherent quantum access to a function. Similar arguments apply to the quantum gradient oracle (1.5). If a classical oracle can be computed by an explicit classical circuit, then the corresponding quantum oracle can be implemented by a quantum circuit of approximately the same size. Therefore, these quantum oracles provide a useful framework for understanding the quantum complexity of log-concave sampling and normalizing constant estimation.
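To see why O_f in (1.4) is a legitimate unitary, here is a toy construction on a fully discretized domain; the grid sizes, the modular addition on the output register, and the table of f-values are simplifying assumptions for illustration only.

```python
import numpy as np

# Toy evaluation oracle O_f|x, y> = |x, f(x) + y> on a discretized domain:
# x in {0,...,Nx-1}, y in Z_{Ny}. Addition modulo Ny keeps O_f a permutation
# of basis states, hence unitary. The f-values are an illustrative table.
Nx, Ny = 4, 8
f_vals = np.array([3, 1, 0, 2])                # f(x) on the grid

O_f = np.zeros((Nx * Ny, Nx * Ny))
for x in range(Nx):
    for y in range(Ny):
        O_f[x * Ny + (y + f_vals[x]) % Ny, x * Ny + y] = 1.0
assert np.allclose(O_f @ O_f.T, np.eye(Nx * Ny))   # permutation => unitary

# On a superposition sum_x sqrt(p_x)|x>|0>, O_f yields sum_x sqrt(p_x)|x>|f(x)>,
# matching (2.2).
p = np.array([0.1, 0.2, 0.3, 0.4])
state = np.zeros(Nx * Ny)
state[np.arange(Nx) * Ny] = np.sqrt(p)         # |x>|0> components
out = O_f @ state
for x in range(Nx):
    assert np.isclose(out[x * Ny + f_vals[x]], np.sqrt(p[x]))
print("oracle acts as (2.2) on the superposition")
```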
To sample from a distribution π, it suffices to prepare the state |π〉 := ∑ x √ πx|x〉 and then measure it. For a Markov chain specified by a transition matrix P with stationary distribution π, one can construct a corresponding quantum walk operator W (P ). Intuitively, quantum walks can be viewed as applying a sequence of quantum unitaries on a quantum state encoding the initial distribution to rotate it to the subspace of stationary distribution |π〉. The number of rotations needed (i.e., the angle between the initial distribution and stationary distribution) depends on the spectral gap of P , and a quantum algorithm can achieve a quadratic speedup via quantum phase estimation and amplification algorithms. More background on quantum walk is given in Appendix C.2.2.
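The quadratic relation between the spectral gap and the phase gap can be checked numerically without building the full walk operator: for a reversible chain, the Szegedy walk's eigenphases θ satisfy cos θ = λ for each eigenvalue λ of P, so the phase gap is arccos(1 − δ) ≈ √(2δ). The small random chain below is an illustrative example.

```python
import numpy as np

# Small reversible chain: symmetric weights A give P reversible w.r.t.
# pi proportional to the row sums of A.
rng = np.random.default_rng(2)
A = rng.random((5, 5)); A = A + A.T
P = A / A.sum(axis=1, keepdims=True)

eigs = np.sort(np.linalg.eigvals(P).real)[::-1]
delta = 1 - eigs[1]                            # classical spectral gap
phase_gap = np.arccos(eigs[1])                 # quantum-walk phase gap
print(f"delta = {delta:.4f}, phase gap = {phase_gap:.4f}, "
      f"sqrt(2*delta) = {np.sqrt(2 * delta):.4f}")
```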
Notations Throughout the paper, the big-O notations O(·), o(·), Ω(·), and Θ(·) follow common definitions. The Õ notation omits poly-logarithmic terms, i.e., Õ(f) := O(fpoly(log f)). We say a function f is L-Lipschitz continuous at x if |f(x)− f(y)| ≤ L‖x− y‖ for all y sufficiently near x. The total variation distance (TV-distance) between two functions f, g : Rd → R is defined as
‖f − g‖_TV := (1/2) ∫_{R^d} |f(x) − g(x)| dx. (2.3)
Let B(Rd) denote the Borel σ-field of Rd. Given probability measures µ and ν on (Rd,B(Rd)), a transference plan ζ between µ and ν is defined as a probability measure on (Rd × Rd,B(Rd) × B(Rd)) such that for any A ⊆ Rd, ζ(A × Rd) = µ(A) and ζ(Rd × A) = ν(A). We let Γ(µ, ν) denote the set of all transference plans. We let
W_2(µ, ν) := ( inf_{ζ∈Γ(µ,ν)} ∫_{R^d×R^d} ‖x − y‖_2^2 dζ(x, y) )^{1/2} (2.4)

denote the Wasserstein 2-norm between µ and ν.
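A quick numerical check of (2.3) and (2.4) for two one-dimensional Gaussians, where both quantities have closed forms (in 1-D the infimum in (2.4) is attained by the monotone quantile coupling); the parameters are illustrative.

```python
import numpy as np
from scipy.stats import norm

# TV by (2.3) on a grid; W2 by the 1-D quantile-coupling formula
#   W2^2 = int_0^1 (F^{-1}(u) - G^{-1}(u))^2 du.
m1, m2, s = 0.0, 1.0, 1.0
xs, dx = np.linspace(-10, 11, 200_001, retstep=True)
tv = 0.5 * np.sum(np.abs(norm.pdf(xs, m1, s) - norm.pdf(xs, m2, s))) * dx

us, du = np.linspace(1e-6, 1 - 1e-6, 100_001, retstep=True)
w2 = np.sqrt(np.sum((norm.ppf(us, m1, s) - norm.ppf(us, m2, s)) ** 2) * du)

print(f"TV ≈ {tv:.4f}")   # exact value: 2*Phi(1/2) - 1 ≈ 0.3829
print(f"W2 ≈ {w2:.4f}")   # exact value: |m1 - m2| = 1 for equal variances
```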
3 Quantum Algorithm for Log-Concave Sampling
In this section, we describe several quantum algorithms for sampling log-concave distributions.
Quantum inexact ULD and ULD-RMM We first show that the gradient oracle in the classical ULD and ULD-RMM algorithms can be efficiently simulated by the quantum evaluation oracle via quantum gradient estimation. Suppose we are given access to the evaluation oracle (1.4) for f(x). Then by Jordan's algorithm [24] (see Lemma C.1 for details), there is a quantum algorithm that can compute ∇f(x) with a polynomially small ℓ_1-error by querying the evaluation oracle O(1) times. Using this, we can prove the following theorem (see Appendix C.1 for details). Theorem 3.1 (Informal version of Theorem C.1 and Theorem C.2). Let ρ ∝ e^{−f} be a d-dimensional log-concave distribution with f satisfying (1.1). Given a quantum evaluation oracle for f,

• the quantum inexact ULD algorithm uses Õ(κ²d^{1/2}ε^{−1}) queries, and
• the quantum inexact ULD-RMM algorithm uses Õ(κ^{7/6}d^{1/6}ε^{−1/3} + κd^{1/3}ε^{−2/3}) queries,

to quantumly sample from a distribution that is ε-close to ρ in W_2-distance.
We note that the query complexities of our quantum algorithms using a zeroth-order oracle match the state-of-the-art classical ULD [10] and ULD-RMM [37] complexities with a first-order oracle. The main technical difficulty of applying the quantum gradient algorithm is that it produces a stochastic gradient oracle in which the output g of the quantum algorithm satisfies ‖E[g] − ∇f(x)‖_1 ≤ d^{−Ω(1)}. In particular, the randomness of the gradient computation is "entangled" with the randomness of the Markov chain. We use the classical analysis of ULD and ULD-RMM processes [36] to prove that the stochastic gradient will not significantly slow down the mixing of ULD processes, and that the error caused by the quantum gradient algorithm can be controlled.
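To illustrate the kind of inexactness being controlled here, the sketch below runs a classical Euler discretization of underdamped Langevin dynamics with a deliberately perturbed gradient; the Gaussian perturbation is a stand-in noise model for an inexact gradient query, not Jordan's algorithm, and all parameters are illustrative.

```python
import numpy as np

# Euler step for underdamped Langevin dynamics:
#   x' = x + h*v,   v' = v - h*(gamma*v + grad(x)) + sqrt(2*gamma*h)*xi.
# Target f(x) = ||x||^2/2, so the stationary x-marginal is N(0, I).
rng = np.random.default_rng(3)
d, h, gamma, noise, T = 2, 0.01, 2.0, 1e-3, 20_000
grad = lambda x: x                             # exact gradient of f
x, v = np.zeros(d), np.zeros(d)
traj = np.empty((T, d))
for t in range(T):
    g = grad(x) + noise * rng.normal(size=d)   # inexact gradient query
    x = x + h * v
    v = v - h * (gamma * v + g) + np.sqrt(2 * gamma * h) * rng.normal(size=d)
    traj[t] = x
print("sample mean ≈", traj[T // 2:].mean(axis=0))   # ≈ 0
print("sample cov  ≈", np.cov(traj[T // 2:].T))      # ≈ identity, up to O(h) bias
```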
Quantum MALA We next propose two quantum algorithms with lower query complexity than classical MALA, one with a Gaussian initial distribution and another with a warm-start distribution. The main technical tool we use is a quantum walk in continuous space.
The classical MALA (i.e., Metropolized HMC) starts from a Gaussian distribution N(0, L^{−1}I_d) and performs a leapfrog step in each iteration. It is well-known that the initial Gaussian state

|ρ_0〉 = ∫_{R^d} (L/(2π))^{d/4} e^{−(L/4)‖z−x*‖_2^2} |z〉 dz (3.1)

can be efficiently prepared. We show that the quantum walk update operator
U := ∫_{R^d} dx ∫_{R^d} dy √(p_{x→y}) |x〉〈x| ⊗ |y〉〈0| (3.2)

can be efficiently implemented, where p_{x→y} := p(x, y) is the transition density from x to y, and the density p satisfies ∫_{R^d} p(x, y) dy = 1 for any x ∈ R^d.

Lemma 3.1 (Informal version of Lemma C.6). The continuous-space quantum walk operator corresponding to the MALA Markov chain can be implemented with O(1) gradient and evaluation queries.
In general, it is difficult to quantumly speed up the mixing time of a classical Markov chain, which is upper bounded by O(δ^{−1} log(ρ_min^{−1})), where δ is the spectral gap. However, [45] shows that a quadratic speedup is possible when following a sequence of slowly-varying Markov chains. More specifically, let ρ_0, . . . , ρ_r be the stationary distributions of the reversible Markov chains M_0, . . . , M_r and let |ρ_0〉, . . . , |ρ_r〉 be the corresponding quantum states. Suppose |〈ρ_i|ρ_{i+1}〉| ≥ p for all i ∈ {0, . . . , r − 1}, and suppose the spectral gaps of M_0, . . . , M_r are lower-bounded by δ. Then we can prepare a quantum state |ρ̃_r〉 that is ε-close to |ρ_r〉 using Õ(δ^{−1/2} r p^{−1}) quantum walk steps. To fulfill the slowly-varying condition, we consider an annealing process that goes from ρ_0 = N(0, L^{−1}I_d) to the target distribution ρ_{M+1} = ρ in M = Õ(√d) stages. At the ith stage, the stationary distribution is ρ_i ∝ e^{−f_i} with f_i := f + (1/2)σ_i^{−2}‖x‖². By properly choosing σ_1 ≤ · · · ≤ σ_M, we prove that this sequence of Markov chains is slowly varying. Lemma 3.2 (Informal version of Lemma B.6). If we take σ_1^2 = 1/(2dL) and σ_{i+1}^2 = (1 + 1/√d)σ_i^2, then for 0 ≤ i ≤ M, we have |〈ρ_i|ρ_{i+1}〉| ≥ Ω(1).
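A few lines suffice to compute this schedule and see that the number of stages scales like Õ(√d); the value σ_1² = 1/(2dL) follows the reconstruction of Lemma 3.2 above, and the stopping rule σ_M² ≥ 1/µ is an illustrative choice.

```python
import numpy as np

# Annealing schedule: sigma_1^2 = 1/(2dL), sigma_{i+1}^2 = (1 + 1/sqrt(d)) * sigma_i^2,
# run until the Gaussian term ||x||^2/(2 sigma_i^2) no longer dominates f.
# The stopping threshold 1/mu is an illustrative choice, giving
# M = O(sqrt(d) log(kappa d)) stages.
d, L, mu = 100, 2.0, 1.0
sigma2 = [1.0 / (2 * d * L)]
while sigma2[-1] < 1.0 / mu:
    sigma2.append((1 + 1 / np.sqrt(d)) * sigma2[-1])
print(f"M = {len(sigma2)} stages for d = {d}")   # scales like sqrt(d)*log(kappa*d)
```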
Combining Lemma 3.1, Lemma 3.2, and the effective spectral gap of MALA (Lemma C.7), we have: Theorem 3.2 (Informal version of Theorem C.7). Let ρ ∝ e^{−f} be a d-dimensional log-concave distribution with f satisfying (1.1). There is a quantum algorithm (Algorithm 1) that prepares a state |ρ̃〉 with ‖|ρ̃〉 − |ρ〉‖ ≤ ε using Õ(κ^{1/2}d) gradient and evaluation oracle queries.
Algorithm 1: QUANTUMMALAFORLOG-CONCAVESAMPLING (Informal)
Input: Evaluation oracle O_f, gradient oracle O_∇f, smoothness parameter L, convexity parameter µ
Output: Quantum state |ρ̃〉 close to the stationary distribution state ∝ ∫_{R^d} e^{−f(x)/2} |x〉 dx
1 Compute the cooling schedule parameters σ_1, . . . , σ_M
2 Prepare the state |ρ_0〉 ∝ ∫_{R^d} e^{−‖x‖²/(4σ_1²)} |x〉 dx
3 for i ← 1, . . . , M do
4   Construct O_{f_i} and O_{∇f_i}, where f_i(x) = f(x) + ‖x‖²/(2σ_i²)
5   Construct the quantum walk update unitary U with O_{f_i} and O_{∇f_i}
6   Implement the quantum walk operator and the approximate reflection R̃_i
7   Prepare |ρ_i〉 by performing π/3-amplitude amplification with R̃_i on the state |ρ_{i−1}〉|0〉
8 return |ρ_M〉
For the classical MALA with a Gaussian initial distribution, it was shown by [29] that the mixing time is at least Ω̃(κd). Theorem 3.2 quadratically reduces the κ dependence.
Note that Algorithm 1 uses a first-order oracle, instead of the zeroth-order oracle used in the quantum ULD algorithms. The technical barrier to applying the quantum gradient algorithm (Lemma C.1) in the quantum MALA is to analyze the classical MALA with a stochastic gradient oracle. We currently do not know whether the “entangled randomness” dramatically increases the mixing time.
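For reference, here is a single step of the classical MALA chain whose transition density p(x, y) enters the walk operator (3.2); the quadratic target and step size are illustrative choices.

```python
import numpy as np

# One MALA step: propose y = x - h*grad(x) + sqrt(2h)*xi, then Metropolis
# accept/reject against the target density proportional to exp(-f).
rng = np.random.default_rng(4)
f = lambda x: 0.5 * np.dot(x, x)               # illustrative target: N(0, I)
grad = lambda x: x
log_q = lambda y, x, h: -np.dot(y - x + h * grad(x), y - x + h * grad(x)) / (4 * h)

def mala_step(x, h):
    y = x - h * grad(x) + np.sqrt(2 * h) * rng.normal(size=x.shape)
    log_acc = (f(x) - f(y)) + (log_q(x, y, h) - log_q(y, x, h))
    return y if np.log(rng.random()) < min(0.0, log_acc) else x

x = np.full(2, 5.0)                            # cold start far from the mode
for _ in range(5_000):
    x = mala_step(x, h=0.1)
print("final iterate:", x)                     # a typical draw from N(0, I)
```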
More technical details and proofs are provided in Appendix C.
4 Quantum Algorithm for Estimating Normalizing Constants
In this section, we apply our quantum log-concave sampling algorithms to the normalizing constant estimation problem. A very natural approach to this problem is via MCMC, which constructs a multi-stage annealing process and uses a sampler at each stage to solve a mean estimation problem. We show how to quantumly speed up these annealing processes and improve the query complexity of estimating normalizing constants.
Quantum speedup for the standard annealing process We first consider the standard annealing process for log-concave distributions, as already applied in the previous section. Recall that we pick parameters σ_1 < σ_2 < · · · < σ_M and construct a sequence of Markov chains with stationary distributions ρ_i ∝ e^{−f_i}, where f_i = f + ‖x‖²/(2σ_i²). Then, at the ith stage, we estimate the expectation

E_{ρ_i}[g_i], where g_i = exp( (1/2)(σ_i^{−2} − σ_{i+1}^{−2})‖x‖² ). (4.1)
If we can estimate each expectation with relative error at most O(ε/M), then the product of these M quantities estimates the normalizing constant Z = ∫_{R^d} e^{−f(x)} dx with relative error at most ε.
For the mean estimation problem, [31] showed that when the relative variance Var_{ρ_i}[g_i] / E_{ρ_i}[g_i]² is constant, there is a quantum algorithm for estimating the expectation E_{ρ_i}[g_i] within relative error at most ε using Õ(1/ε) quantum samples from the distribution ρ_i. Our annealing schedule satisfies the bounded relative variance condition. Therefore, by the quantum mean estimation algorithm, we improve the sampling complexity of the standard annealing process from Õ(M²ε^{−2}) to Õ(Mε^{−1}).
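The telescoping-product structure can be seen in a purely classical 1-D caricature, where f(x) = x²/2 makes every annealing distribution Gaussian and exact sampling stands in for MALA; the schedule and sample sizes are illustrative.

```python
import numpy as np

# With rho_i ∝ exp(-f(x) - x^2/(2 sigma_i^2)) and g_i as in (4.1)
# (taking sigma_{M+1} = infinity at the last stage), the telescoping identity
# Z = Z_1 * prod_i E_{rho_i}[g_i] holds. For f(x) = x^2/2, every rho_i is
# Gaussian with precision 1 + 1/sigma_i^2, and Z = sqrt(2*pi) exactly.
rng = np.random.default_rng(5)
n = 200_000
sigma2 = np.append(0.05 * 1.5 ** np.arange(10), np.inf)  # illustrative schedule
prec = 1.0 + 1.0 / sigma2[:-1]                 # precision of rho_i
Z_hat = np.sqrt(2 * np.pi / prec[0])           # Z_1 in closed form
for i in range(len(prec)):
    x = rng.normal(scale=1 / np.sqrt(prec[i]), size=n)   # exact samples ~ rho_i
    g = np.exp(0.5 * (1 / sigma2[i] - 1 / sigma2[i + 1]) * x**2)
    Z_hat *= g.mean()
print(f"Z_hat = {Z_hat:.4f}, exact = {np.sqrt(2 * np.pi):.4f}")
```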
To further improve the query complexity, we consider using the quantum MALAs developed in the previous section to generate samples. Observe that Algorithm 1 outputs a quantum state corresponding to some distribution that is close to ρ_i, instead of an individual sample. If we can estimate the expectation without destroying the quantum state, then we can reuse the state and evolve it for the (i + 1)st Markov chain. Fortunately, we can use non-destructive mean estimation to estimate the expectation and restore the initial states. A detailed error analysis of this algorithm can be found in [6, 22]. We first prepare Õ(Mε^{−1}) copies of initial states corresponding to the Gaussian distribution N(0, L^{−1}I_d). Then, for each stage, we apply the non-destructive mean estimation algorithm to estimate the expectation E_{ρ_i}[g_i] and then run quantum MALA to evolve the states |ρ_i〉 to |ρ_{i+1}〉. This gives our first quantum algorithm for estimating normalizing constants. Theorem 4.1 (Informal version of Theorem D.2). Let Z be the normalizing constant in (1.3). There is a quantum algorithm (Algorithm 2) that outputs an estimate Z̃ with relative error at most ε using Õ(d^{3/2}κ^{1/2}ε^{−1}) queries to the quantum gradient and evaluation oracles.
Quantum speedup for MLMC Now we consider using multilevel Monte Carlo (MLMC) as the annealing process and show how to achieve quantum speedup. MLMC was originally developed by [23] for parametric integration; then [17] applied MLMC to simulate stochastic differential equations (SDEs). The idea of MLMC is natural: we choose a different number of samples at each stage based on the cost and variance of that stage.
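The MLMC telescope itself can be illustrated with a toy model in a few lines; the coupled "level-l approximation" below is a deliberately simple stand-in for coupled SDE discretizations, not the estimator of [16].

```python
import numpy as np

# MLMC telescope: E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}]. Coupling
# consecutive levels with shared randomness makes Var[P_l - P_{l-1}] small,
# so the fine (expensive) levels need few samples. Toy model:
# P_l = (X + 2^{-l} B)^2 with shared (X, B), so E[P_l] = 1 + 4^{-l}.
rng = np.random.default_rng(6)
L = 6
est = 0.0
for l in range(L + 1):
    n = 40_000 // 2**l                         # fewer samples at finer levels
    x, b = rng.normal(size=n), rng.normal(size=n)
    p = lambda lev: (x + 2.0**-lev * b) ** 2
    est += p(0).mean() if l == 0 else (p(l) - p(l - 1)).mean()
print(f"MLMC estimate = {est:.4f}")            # E[P_L] = 1 + 4^{-L} ≈ 1
```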
To estimate normalizing constants, a variant of MLMC was proposed in [16]. Unlike the standard MLMC for bounding the mean-squared error, they upper bound the bias and the variance separately, and the analysis is technically difficult. The first quantum algorithm based on MLMC was subsequently developed by [2] based on the quantum mean estimation algorithm. Roughly speaking, the quantum algorithm can quadratically reduce the ε-dependence of the sample complexity compared with classical MLMC.
Algorithm 2: QUANTUMMALAFORESTIMATINGNORMALIZINGCONSTANT (Informal)
Input: Evaluation oracle O_f, gradient oracle O_∇f
Output: Estimate Z̃ of Z with relative error at most ε
1 M ← Õ(√d), K ← Õ(ε^{−1})
2 Compute the cooling schedule parameters σ_1, . . . , σ_M
3 for j ← 1, . . . , K do
4   Prepare the state |ρ_{1,j}〉 ∝ ∫_{R^d} e^{−‖x‖²/(4σ_1²)} |x〉 dx
5 Z̃ ← (2πσ_1²)^{d/2}
6 for i ← 1, . . . , M do
7   g̃_i ← Non-destructive mean estimation for g_i using {|ρ_{i,1}〉, . . . , |ρ_{i,K}〉}
8   Z̃ ← Z̃ · g̃_i
9   for j ← 1, . . . , K do
10    |ρ_{i+1,j}〉 ← QUANTUMMALA(O_{f_{i+1}}, O_{∇f_{i+1}}, |ρ_{i,j}〉)
11 return Z̃
In this work, we apply the quantum accelerated MLMC (QA-MLMC) scheme [2] to simulate underdamped Langevin dynamics as the SDE. One challenge in using QA-MLMC is that g_i in our setting is not Lipschitz. Fortunately, as suggested by [16], this issue can be resolved by truncating large x and replacing g_i by h_i := min{ g_i, exp( (r_i^+)² / (σ_i²(1 + α^{−1})) ) }, with the choice

α = Õ( 1/(√d · log(1/ε)) ), r_i^+ = E_{ρ_{i+1}}[‖x‖] + Θ( σ_i √((1 + α) log(1/ε)) ) (4.2)

to ensure h_i / E_{ρ_i}[g_i] is O(σ_i^{−1})-Lipschitz. Furthermore, |E_{ρ_i}(h_i − g_i)| < ε by Lemmas C.7 and C.8 in [16]. For simplicity, we regard g_i as a Lipschitz continuous function in our main results. We present QA-MLMC in Algorithm 3, where the sampling algorithm A can be chosen to be quantum inexact ULD/ULD-RMM or quantum MALA.
Algorithm 3: QA-MLMC (Informal)
Input: Evaluation oracle O_f, function g, error ε, a quantum sampler A(x_0, f, η) for ρ
Output: An estimate of R̃ = E_ρ[h]
1 K ← Õ(ε^{−1})
2 Compute the initial point x_0 and the step size η_0
3 Compute the number of samples N_1, . . . , N_K
4 for j ← 1, . . . , K do
5   Let η_j = η/2^{j−1}
6   for i ← 1, . . . , N_j do
7     Sample X_i^{η_j} by A(f, x_0, η_j), and sample X_i^{η_j/2} by A(f, x_0, η_j/2)
8   G̃_j^− ← QMEANEST({g(X_i^{η_j})}_{i∈[N_j]}), and G̃_j^+ ← QMEANEST({g(X_i^{η_j/2})}_{i∈[N_j]})
9 return R̃ = G̃_0 + ∑_{j=0}^{K} (G̃_j^− − G̃_j^+)
This QA-MLMC framework reduces the ε-dependence of the sampling complexity for estimating normalizing constants from ε^{−2} to ε^{−1} in both the ULD and ULD-RMM cases, as compared with the state-of-the-art classical results [16].
Using the quantum inexact ULD and ULD-RMM algorithms (Theorem 3.1) to generate samples, we obtain our second quantum algorithm for estimating normalizing constants (see Appendix D for proofs). Theorem 4.2 (Informal version of Theorem D.3 and Theorem D.4). Let Z be the normalizing constant in (1.3). There exist quantum algorithms for estimating Z with relative error at most ε using

• quantum inexact ULD with Õ(d^{3/2}κ²ε^{−1}) queries to the evaluation oracle, and
• quantum inexact ULD-RMM with Õ((d^{7/6}κ^{7/6} + d^{4/3}κ)ε^{−1}) queries to the evaluation oracle.
5 Quantum Lower Bound
Finally, we lower bound the quantum query complexity of normalizing constant estimation.
Theorem 5.1. For any fixed positive integer k, given query access (1.4) to a function f : R^k → R that is 1.5-smooth and 0.5-strongly convex, the quantum query complexity of estimating the partition function Z = ∫_{R^k} e^{−f(x)} dx within multiplicative error ε with probability at least 2/3 is Ω(ε^{−1/(1+4/k)}).
The proof of our quantum lower bound is inspired by the construction in Section 5 of [16]. They consider a log-concave function whose value is negligible outside a hypercube centered at 0. The interior of the hypercube is decomposed into cells of two types. The function takes different values on each type, and the normalizing constant estimation problem reduces to determining the number of cells of each type. Quantumly, we follow the same construction and reduce the cell counting problem to the Hamming weight problem: given an n-bit Boolean string and two integers ℓ_1 < ℓ_2, decide whether the Hamming weight (i.e., the number of ones) of this string is ℓ_1 or ℓ_2. This problem has a known quantum query lower bound [32], which implies the quantum hardness of estimating the normalizing constant. The full proof of Theorem 5.1 appears in Appendix E.
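As a sanity check on the shape of this reduction, here is a toy one-dimensional caricature (not the paper's hypercube construction) in which the normalizing constant is affine in the Hamming weight, so a sufficiently small relative error suffices to decide the weight; the cell weights are illustrative choices.

```python
import numpy as np

# Encode a Boolean string b into piecewise cell weights so that the
# "normalizing constant" is affine in the Hamming weight |b|. Any estimator
# with relative error below half the gap then distinguishes |b| = l1 from
# |b| = l2, transferring the Hamming-weight lower bound.
n, A, B = 16, 1.0, 1.2                         # cell weights A < B (illustrative)

def Z_of(bits):
    # one unit-width cell per bit: weight A on 0-cells, B on 1-cells
    return float(np.sum(np.where(bits, B, A)))

b1 = np.r_[np.ones(5, bool), np.zeros(n - 5, bool)]   # Hamming weight l1 = 5
b2 = np.r_[np.ones(6, bool), np.zeros(n - 6, bool)]   # Hamming weight l2 = 6
gap = abs(Z_of(b2) - Z_of(b1)) / Z_of(b1)
print(f"relative gap: {gap:.4f}")   # an estimator with error < gap/2 decides |b|
```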
Acknowledgements
AMC acknowledges support from the Army Research Office (grant W911NF-20-1-0015); the Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Accelerated Research in Quantum Computing program; and the National Science Foundation (grant CCF-1813814). TL was supported by a startup fund from Peking University, and the Advanced Institute of Information Technology, Peking University. JPL was supported by the National Science Foundation (grant CCF-1813814), an NSF Quantum Information Science and Engineering Network (QISE-NET) triplet award (DMR-1747426), a Simons Foundation award (No. 825053), and the Simons Quantum Postdoctoral Fellowship. RZ was supported by the University Graduate Continuing Fellowship from UT Austin.
Summary Of The Paper
This paper discusses quantum-type algorithms for sampling from log-concave probability measures. Convergence rates in total variation and W_2 norm are provided.
Strengths And Weaknesses
The paper is well presented and I mostly enjoyed reading it. The results in the paper are scientifically correct and make sense to me. However, the tools used in the paper to prove the convergence rate are quite standard. The novelty here is to put all the analysis in a quantum lens. Moreover, as we can see in Table 1, the quantum versions of ULA or MALA do not really differ from the standard ULA and MALA regarding the dependence on the dimension d. This raises the question of whether the quantum setting is really beneficial.
Questions
The paper is well written and I do not have questions.
Limitations
NA |
NIPS | Title
Quantum Algorithms for Sampling Log-Concave Distributions and Estimating Normalizing Constants
Abstract
Given a convex function f : R → R, the problem of sampling from a distribution ∝ e−f(x) is called log-concave sampling. This task has wide applications in machine learning, physics, statistics, etc. In this work, we develop quantum algorithms for sampling log-concave distributions and for estimating their normalizing constants ∫ Rd e −f(x)dx. First, we use underdamped Langevin diffusion to develop quantum algorithms that match the query complexity (in terms of the condition number κ and dimension d) of analogous classical algorithms that use gradient (first-order) queries, even though the quantum algorithms use only evaluation (zeroth-order) queries. For estimating normalizing constants, these algorithms also achieve quadratic speedup in the multiplicative error . Second, we develop quantum Metropolis-adjusted Langevin algorithms with query complexity Õ(κd) and Õ(κd/ ) for log-concave sampling and normalizing constant estimation, respectively, achieving polynomial speedups in κ, d, over the best known classical algorithms by exploiting quantum analogs of the Monte Carlo method and quantum walks. We also prove a 1/ 1−o(1) quantum lower bound for estimating normalizing constants, implying near-optimality of our quantum algorithms in . 36th Conference on Neural Information Processing Systems (NeurIPS 2022).
N/A
Given a convex function f : Rd → R, the problem of sampling from a distribution ∝ e−f(x) is called log-concave sampling. This task has wide applications in machine learning, physics, statistics, etc. In this work, we develop quantum algorithms for sampling log-concave distributions and for estimating their normalizing constants ∫ Rd e
−f(x)dx. First, we use underdamped Langevin diffusion to develop quantum algorithms that match the query complexity (in terms of the condition number κ and dimension d) of analogous classical algorithms that use gradient (first-order) queries, even though the quantum algorithms use only evaluation (zeroth-order) queries. For estimating normalizing constants, these algorithms also achieve quadratic speedup in the multiplicative error . Second, we develop quantum Metropolis-adjusted Langevin algorithms with query complexity Õ(κ1/2d) and Õ(κ1/2d3/2/ ) for log-concave sampling and normalizing constant estimation, respectively, achieving polynomial speedups in κ, d, over the best known classical algorithms by exploiting quantum analogs of the Monte Carlo method and quantum walks. We also prove a 1/ 1−o(1) quantum lower bound for estimating normalizing constants, implying near-optimality of our quantum algorithms in .
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
1 Introduction
Sampling from a given distribution is a fundamental computational problem. For example, in statistics, samples can determine confidence intervals or explore posterior distributions. In machine learning, samples are used for regression and to train supervised learning models. In optimization, samples from well-chosen distributions can produce points near local or even global optima.
Sampling can be nontrivial even when the distribution is known. Indeed, efficient sampling is often a challenging computational problem, and bottlenecks the running time in many applications. Many efforts have been made to develop fast sampling methods. Among those, one of the most successful tools is Markov Chain Monte Carlo (MCMC), which uses a Markov chain that converges to the desired distribution to (approximately) sample from it.
Here we focus on the fundamental task of log-concave sampling, i.e., sampling from a distribution proportional to e−f where f : Rd → R is a convex function. This covers many practical applications such as multivariate Gaussian distributions and exponential distributions. Provable performance guarantees for log-concave sampling have been widely studied [15]. A closely related problem is estimating the normalizing constants of log-concave distributions, which also has many applications [16].
Quantum computing has been applied to speed up many classical algorithms based on Markov processes, so it is natural to investigate quantum algorithms for log-concave sampling. If we can prepare a quantum state whose amplitudes are the square roots of the corresponding probabilities, then measurement yields a random sample from the desired distribution. In this approach, the number of required qubits is only poly-logarithmic in the size of the sample space. Unfortunately, such a quantum state probably cannot be efficiently prepared in general, since this would imply SZK ⊆ BQP [1]. Nevertheless, in some cases, quantum algorithms can achieve polynomial speedup over classical algorithms. Examples include uniform sampling on a 2D lattice [35], estimating partition functions [4, 22, 31, 45, 46], and estimating volumes of convex bodies [6]. However, despite the importance of sampling log-concave distributions and estimating normalizing constants, we are not aware of any previous quantum speedups for general instances of these problems.
Formulation In this paper, we consider a d-dimensional convex function f : Rd → R which is L-smooth and µ-strongly convex, i.e., µ,L > 0 and for any x, y ∈ Rd, x 6= y,
f(y)− f(x)− 〈∇f(x), y − x〉 ‖x− y‖22/2 ∈ [µ,L]. (1.1)
We denote by κ := L/µ the condition number of f . The corresponding log-concave distribution has probability density ρf : Rd → R with
ρf (x) := e−f(x)
Zf , (1.2)
where the normalizing constant is
Zf := ∫ x∈Rd e−f(x) dx. (1.3)
When there is no ambiguity, we abbreviate ρf and Zf as ρ and Z, respectively. Given an ∈ (0, 1),
• the goal of log-concave sampling is to output a random variable with distribution ρ̃ such that ‖ρ̃− ρ‖ ≤ , and
• the goal of normalizing constant estimation is to output a value Z̃ such that with probability at least 2/3, (1− )Z ≤ Z̃ ≤ (1 + )Z.
Here ‖ · ‖ is a certain norm. We consider the general setting where the function f is specified by an oracle. In particular, we consider the quantum evaluation oracle Of , a standard model in the quantum computing literature [3, 6, 7, 50]. The evaluation oracle acts as
Of |x, y〉 = |x, f(x) + y〉 ∀x ∈ Rd, y ∈ R. (1.4)
(Quantum computing notations are briefly explained in Section 2.) We also consider the quantum gradient oracle O∇f with
O∇f |x, z〉 = |x,∇f(x) + z〉 ∀x, z ∈ Rd. (1.5)
In other words, we allow superpositions of queries to both function evaluations and gradients. The essence of quantum speedup is the ability to compute with carefully designed superpositions.
Contributions Our main results are quantum algorithms that speed up log-concave sampling and normalizing constant estimation.
Theorem 1.1 (Main log-concave sampling result). Let ρ denote the log-concave distribution (1.2). There exist quantum algorithms that output a random variable distributed according to ρ̃ such that
• W2(ρ̃, ρ) ≤ where W2 is the Wasserstein 2-norm (2.4), using Õ(κ7/6d1/6 −1/3 + κd1/3 −2/3) queries to the quantum evaluation oracle (1.4); or
• ‖ρ̃ − ρ‖TV ≤ where ‖ · ‖TV is the total variation distance (2.3), using Õ ( κ1/2d ) queries to
the quantum gradient oracle (1.5), or Õ ( κ1/2d1/4 ) queries when the initial distribution is warm (formally defined in Appendix C.2.1).
In the above results, the query complexity Õ(κ7/6d1/6 −1/3 +κd1/3 −2/3) is achieved by our quantum ULD-RMM algorithm. Although the quantum query complexity is the same as the best known classical result [37], we emphasize that our quantum algorithm uses a zeroth-order oracle while [37] uses a first-order oracle. The query complexity Õ ( κ1/2d ) is achieved by our quantum MALA algorithm that uses a first-order oracle (as in classical algorithms). This is a quadratic speedup in κ compared with the best known classical algorithm [28]. With a warm start, our quantum speedup is even more significant: we achieve quadratic speedups in κ and d as compared with the best known classical algorithm with a warm start [47].
Theorem 1.2 (Main normalizing constant estimation result). There exist quantum algorithms that estimate the normalizing constant by Z̃ within multiplicative error with probability at least 3/4,
• using Õ(κ7/6d7/6 −1 + κd4/3 −1) queries to the quantum evaluation oracle (1.4); or • using Õ(κ1/2d3/2 −1) queries to the quantum gradient oracle (1.5).
Furthermore, this task has quantum query complexity at least Ω( −1+o(1)) (Theorem 5.1).
Our query complexity Õ(κ7/6d7/6 −1 + κd4/3 −1) for normalizing constant estimation achieves a quadratic speedup in precision compared with the best known classical algorithm [16]. More remarkably, our quantum ULD-RMM algorithm again uses a zeroth-order oracle while the slower best known classical algorithm uses a first-order oracle [16]. Our quantum algorithm working with a first-order oracle achieves polynomial speedups in all parameters compared with the best known classical algorithm [16]. Moreover, the precision-dependence of our quantum algorithms is nearly optimal, which is quadratically better than the classical lower bound in 1/ [16].
To the best of our knowledge, these are the first quantum algorithms with quantum speedup for the fundamental problems of log-concave sampling and estimating normalizing constants. We explore multiple classical techniques including the underdamped Langevin diffusion (ULD) method [12– 14, 43], the randomized midpoint method for underdamped Langevin diffusion (ULD-RMM) [36, 37], and the Metropolis adjusted Langevin algorithm (MALA) [8, 11, 15, 28, 29, 47], and achieve quantum speedups. Our main contributions are as follows.
• Log-concave sampling. For this problem, our quantum algorithms based on ULD and ULDRMM have the same query complexity as the best known classical algorithms, but our quantum algorithms only use a zeroth-order (evaluation) oracle, while the classical algorithms use the firstorder (gradient) oracle. For MALA, this improvement on the order of oracles is nontrivial, but we can use the quantum gradient oracle in our quantum MALA algorithm to achieve a quadratic speedup in the condition number κ. Furthermore, given a warm-start distribution, our quantum algorithm achieves a quadratic speedup in all parameters.
• Normalizing constant estimation. For this problem, our quantum algorithms provide larger speedups. In particular, our quantum algorithms based on ULD and ULD-RMM achieve quadratic
speedup in the multiplicative precision (while using a zeroth-order oracle) compared with the corresponding best-known classical algorithms (using a first-order oracle). Our quantum algorithm based on MALA achieves polynomial speedups in all parameters. Furthermore, we prove that our quantum algorithm is nearly optimal in terms of .
We summarize our results and compare them to previous classical algorithms in Table 1 and Table 2. See Appendix A for more detailed comparisons to related classical and quantum work.
Techniques In this work, we develop a systematic approach for studying the complexity of quantum walk mixing and show that for any reversible classical Markov chain, we can obtain quadratic speedup for the mixing time as long as the initial distribution is warm. In particular, we apply the quantum walk and quantum annealing in the context of Langevin dynamics and achieve polynomial quantum speedups.
The technical ingredients of our quantum algorithms are highlighted below.
• Quantum simulated annealing (Lemma 3.2). Our quantum algorithm for estimating normalizing constants combines the quantum simulated annealing framework of [45] and the quantum mean estimation algorithm of [31]. For each type of Langevin dynamics (which are random walks), we build a corresponding quantum walk. Crucially, the spectral gap of the random walk is quadratically amplified in the phase gap of the corresponding quantum walk. This allows us to use a Grover-like procedure to produce the stationary distribution state given a sufficiently good initial state. In the simulated annealing framework, this initial state is the stationary distribution state of the previous Markov chain.
• Effective spectral gap (Lemma C.7). We show how to leverage a “warm” initial distribution to achieve a quantum speedup for sampling. Classically, a warm start leads to faster mixing even if
the spectral gap is small. Quantumly, we generalize the notion of “effective spectral gap” [6, 27, 34] to our more general sampling problem. We show that with a bounded warmness parameter, quantum algorithms can achieve a quadratic speedup in the mixing time. By viewing the sampling problem as a simulated annealing process with only one Markov chain, we prove a quadratic speedup for quantum MALA by analyzing the effective spectral gap.
• Quantum gradient estimation (Lemma C.1). We adapt Jordan’s quantum gradient algorithm [24] to the ULD and ULD-RMM algorithms and give rigorous proofs to bound the sampling error due to gradient estimation errors.
Open questions Our work raises several natural questions for future investigation:
• Can we achieve quantum speedup in d and κ for unadjusted Langevin algorithms such as ULD and ULD-RMM? The main difficulty is that ULD and ULD-RMM are irreversible, while most available quantum walk techniques only apply to reversible Markov chains. New techniques might be required to resolve this question.
• Can we achieve further quantum speedup for estimating normalizing constants with a warm start distribution? This might require a more refined version of quantum mean estimation.
• Can we give quantum algorithms for estimating normalizing constants with query complexity sublinear in d? Such a result would give a provable quantum-classical separation due to the Ω(d1−o(1)/ 2−o(1)) classical lower bound proved in [16].
Limitations and societal impacts Researchers working on theoretical aspects of quantum computing or Monte Carlo methods may benefit from our results. In the long term, once fault-tolerant quantum computers have been built, our results may find practical applications in MCMC methods arising in the real world. As far as we are aware, our work does not have negative societal impacts.
2 Preliminaries
Basic definitions of quantum computation Quantum mechanics is formulated in terms of linear algebra. The computational basis of Cd is {~e0, . . . , ~ed−1}, where ~ei = (0, . . . , 1, . . . , 0)> with the 1 in the (i+ 1)st position. We use Dirac notation, writing |i〉 (called a “ket”) for ~ei and 〈i| (a “bra”) for ~e>i .
The tensor product of quantum states is their Kronecker product: if |u〉 ∈ Cd1 and |v〉 ∈ Cd2 , then we have |u〉 ⊗ |v〉 ∈ Cd1 ⊗ Cd2 with
|u〉 ⊗ |v〉 = (u0v0, u0v1, . . . , ud1−1vd2−1)>. (2.1) The basic element of quantum information is a qubit, a quantum state in C2, which can be written as a|0〉+ b|1〉 for some a, b ∈ C with |a|2 + |b|2 = 1. An n-qubit tensor product state can be written as |v1〉 ⊗ · · · ⊗ |vn〉 ∈ C2 n
, where for any i ∈ [n], |vi〉 is a one-qubit state. Note however that most states in C2n are not product states. We sometimes abbreviate |u〉 ⊗ |v〉 as |u〉|v〉. Operations on quantum states are unitary transformations. They are typically stated in the circuit model, where a k-qubit gate is a unitary matrix in C2k . Two-qubit gates are universal, i.e., every n-qubit gate can be decomposed into a product of gates that act as the identity on n−2 qubits and as some two-qubit gate on the other 2 qubits. The gate complexity of an operation refers to the number of two-qubit gates used in a quantum circuit for realizing it.
Quantum access to a function, referred to as a quantum oracle, must be reversible and allow access to different values of the function in superposition (i.e., for linear combinations of computational basis states). For example, consider the unitary evaluation oracle Of defined in (1.4). Given a probability distribution {pi}ni=1 and a set of points {xi}ni=1, we have
Of n∑ i=1 √ pi|xi〉|0〉 = n∑ i=1 √ pi|xi〉|f(xi)〉. (2.2)
Then a measurement would give f(xi) with probability pi. However, a quantum oracle can not only simulate random sampling, but can enable uniquely quantum behavior through interference. Examples include amplitude amplification—the main idea behind Grover’s search algorithm [20] and
the amplitude estimation procedure used in this paper—and many other quantum algorithms relying on coherent quantum access to a function. Similar arguments apply to the quantum gradient oracle (1.5). If a classical oracle can be computed by an explicit classical circuit, then the corresponding quantum oracle can be implemented by a quantum circuit of approximately the same size. Therefore, these quantum oracles provide a useful framework for understanding the quantum complexity of log-concave sampling and normalizing constant estimation.
To sample from a distribution π, it suffices to prepare the state |π〉 := ∑ x √ πx|x〉 and then measure it. For a Markov chain specified by a transition matrix P with stationary distribution π, one can construct a corresponding quantum walk operator W (P ). Intuitively, quantum walks can be viewed as applying a sequence of quantum unitaries on a quantum state encoding the initial distribution to rotate it to the subspace of stationary distribution |π〉. The number of rotations needed (i.e., the angle between the initial distribution and stationary distribution) depends on the spectral gap of P , and a quantum algorithm can achieve a quadratic speedup via quantum phase estimation and amplification algorithms. More background on quantum walk is given in Appendix C.2.2.
Notations Throughout the paper, the big-O notations O(·), o(·), Ω(·), and Θ(·) follow common definitions. The Õ notation omits poly-logarithmic terms, i.e., Õ(f) := O(fpoly(log f)). We say a function f is L-Lipschitz continuous at x if |f(x)− f(y)| ≤ L‖x− y‖ for all y sufficiently near x. The total variation distance (TV-distance) between two functions f, g : Rd → R is defined as
‖f − g‖TV := 1
2 ∫ Rd |f(x)− g(x)|dx. (2.3)
Let B(Rd) denote the Borel σ-field of Rd. Given probability measures µ and ν on (Rd,B(Rd)), a transference plan ζ between µ and ν is defined as a probability measure on (Rd × Rd,B(Rd) × B(Rd)) such that for any A ⊆ Rd, ζ(A × Rd) = µ(A) and ζ(Rd × A) = ν(A). We let Γ(µ, ν) denote the set of all transference plans. We let
W2(µ, ν) :=
( inf
ζ∈Γ(µ,ν) ∫ Rd×Rd ‖x− y‖22 dζ(x, y) ) 1 2
(2.4)
denote the Wasserstein 2-norm between µ and ν.
3 Quantum Algorithm for Log-Concave Sampling
In this section, we describe several quantum algorithms for sampling log-concave distributions.
Quantum inexact ULD and ULD-RMM We first show that the gradient oracle in the classical ULD and ULD-RMM algorithms can be efficiently simulated by the quantum evaluation oracle via quantum gradient estimation. Suppose we are given access to the evaluation oracle (1.4) for f(x). Then by Jordan’s algorithm [24] (see Lemma C.1 for details), there is a quantum algorithm that can compute ∇f(x) with a polynomially small `1-error by querying the evaluation oracle O(1) times. Using this, we can prove the following theorem (see Appendix C.1 for details). Theorem 3.1 (Informal version of Theorem C.1 and Theorem C.2). Let ρ ∝ e−f be a d-dimensional log-concave distribution with f satisfying (1.1). Given a quantum evaluation oracle for f ,
• the quantum inexact ULD algorithm uses Õ(κ2d1/2 −1) queries, and • the quantum inexact ULD-RMM algorithm uses Õ(κ7/6d1/6 −1/3 + κd1/3 −2/3) queries,
to quantumly sample from a distribution that is -close to ρ in W2-distance.
We note that the query complexities of our quantum algorithms using a zeroth-order oracle match the state-of-the-art classical ULD [10] and ULD-RMM [37] complexities with a first-order oracle. The main technical difficulty of applying the quantum gradient algorithm is that it produces a stochastic gradient oracle in which the output of the quantum algorithm g satisfies ‖E[g]−∇f(x)‖1 ≤ d−Ω(1). In particular, the randomness of the gradient computation is “entangled” with the randomness of the Markov chain. We use the classical analysis of ULD and ULD-RMM processes [36] to prove that the stochastic gradient will not significantly slow down the mixing of ULD processes, and that the error caused by the quantum gradient algorithm can be controlled.
Quantum MALA We next propose two quantum algorithms with lower query complexity than classical MALA, one with a Gaussian initial distribution and another with a warm-start distribution. The main technical tool we use is a quantum walk in continuous space.
The classical MALA (i.e., Metropolized HMC) starts from a Gaussian distribution N(0, L^{−1}I_d) and performs a leapfrog step in each iteration. It is well-known that the initial Gaussian state
|ρ_0⟩ = ∫_{R^d} (L/(2π))^{d/4} e^{−(L/4)‖z−x*‖₂²} |z⟩ dz (3.1)
can be efficiently prepared. We show that the quantum walk update operator
U := ∫ Rd dx ∫ Rd dy √ px→y|x〉〈x| ⊗ |y〉〈0| (3.2)
can be efficiently implemented, where px→y := p(x, y) is the transition density from x to y, and the density p satisfies ∫ Rd p(x, y) dy = 1 for any x ∈ R
d. Lemma 3.1 (Informal version of Lemma C.6). The continuous-space quantum walk operator corresponding to the MALA Markov chain can be implemented with O(1) gradient and evaluation queries.
In general, it is difficult to quantumly speed up the mixing time of a classical Markov chain, which is upper bounded by O(δ^{−1} log(ρ_min^{−1})), where δ is the spectral gap. However, [45] shows that a quadratic speedup is possible when following a sequence of slowly-varying Markov chains. More specifically, let ρ_0, . . . , ρ_r be the stationary distributions of the reversible Markov chains M_0, . . . , M_r and let |ρ_0⟩, . . . , |ρ_r⟩ be the corresponding quantum states. Suppose |⟨ρ_i|ρ_{i+1}⟩| ≥ p for all i ∈ {0, . . . , r − 1}, and suppose the spectral gaps of M_0, . . . , M_r are lower-bounded by δ. Then we can prepare a quantum state |ρ̃_r⟩ that is ε-close to |ρ_r⟩ using Õ(δ^{−1/2} r p^{−1}) quantum walk steps. To fulfill the slowly-varying condition, we consider an annealing process that goes from ρ_0 = N(0, L^{−1}I_d) to the target distribution ρ_{M+1} = ρ in M = Õ(√d) stages. At the ith stage, the stationary distribution is ρ_i ∝ e^{−f_i} with f_i := f + ‖x‖²/(2σ_i²). By properly choosing σ_1 ≤ · · · ≤ σ_M, we prove that this sequence of Markov chains is slowly varying.
Lemma 3.2 (Informal version of Lemma B.6). If we take σ_1² = 1/(2dL) and σ_{i+1}² = (1 + 1/√d)σ_i², then for 0 ≤ i ≤ M, we have |⟨ρ_i|ρ_{i+1}⟩| ≥ Ω(1).
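As a sanity check of the schedule in the pure-Gaussian case (ignoring f; the lemma itself handles general strongly log-concave f), the overlap of consecutive zero-mean isotropic Gaussian states has a closed form that stays bounded below uniformly in d under the (1 + 1/√d) variance ratio:

```python
# Overlap of the states of N(0, a I_d) and N(0, b I_d): (2*sqrt(a*b)/(a+b))^(d/2).
# With b = (1 + 1/sqrt(d)) a this tends to exp(-1/16) ~ 0.94, i.e., Omega(1).
import numpy as np

for d in [10, 100, 10_000]:
    a = 1.0
    b = (1 + 1 / np.sqrt(d)) * a
    overlap = (2 * np.sqrt(a * b) / (a + b)) ** (d / 2)
    print(d, overlap)
```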
Combining Lemma 3.1, Lemma 3.2, and the effective spectral gap of MALA (Lemma C.7), we have:
Theorem 3.2 (Informal version of Theorem C.7). Let ρ ∝ e^{−f} be a d-dimensional log-concave distribution with f satisfying (1.1). There is a quantum algorithm (Algorithm 1) that prepares a state |ρ̃⟩ with ‖|ρ̃⟩ − |ρ⟩‖ ≤ ε using Õ(κ^{1/2} d) gradient and evaluation oracle queries.
Algorithm 1: QUANTUMMALAFORLOG-CONCAVESAMPLING (Informal)
Input: Evaluation oracle O_f, gradient oracle O_{∇f}, smoothness parameter L, convexity parameter µ
Output: Quantum state |ρ̃⟩ close to the stationary distribution state ∫_{R^d} e^{−f(x)/2} d|x⟩
1. Compute the cooling schedule parameters σ_1, . . . , σ_M
2. Prepare the state |ρ_0⟩ ∝ ∫_{R^d} e^{−‖x‖²/(4σ_1²)} d|x⟩
3. for i ← 1, . . . , M do
4.   Construct O_{f_i} and O_{∇f_i}, where f_i(x) = f(x) + ‖x‖²/(2σ_i²)
5.   Construct the quantum walk update unitary U with O_{f_i} and O_{∇f_i}
6.   Implement the quantum walk operator and the approximate reflection R̃_i
7.   Prepare |ρ_i⟩ by performing π/3-amplitude amplification with R̃_i on the state |ρ_{i−1}⟩|0⟩
8. return |ρ_M⟩
For the classical MALA with a Gaussian initial distribution, it was shown by [29] that the mixing time is at least Ω̃(κd). Theorem 3.2 quadratically reduces the κ dependence.
Note that Algorithm 1 uses a first-order oracle, instead of the zeroth-order oracle used in the quantum ULD algorithms. The technical barrier to applying the quantum gradient algorithm (Lemma C.1) in the quantum MALA is to analyze the classical MALA with a stochastic gradient oracle. We currently do not know whether the “entangled randomness” dramatically increases the mixing time.
More technical details and proofs are provided in Appendix C.
4 Quantum Algorithm for Estimating Normalizing Constants
In this section, we apply our quantum log-concave sampling algorithms to the normalizing constant estimation problem. A very natural approach to this problem is via MCMC, which constructs a multi-stage annealing process and uses a sampler at each stage to solve a mean estimation problem. We show how to quantumly speed up these annealing processes and improve the query complexity of estimating normalizing constants.
Quantum speedup for the standard annealing process We first consider the standard annealing process for log-concave distributions, as already applied in the previous section. Recall that we pick parameters σ_1 < σ_2 < · · · < σ_M and construct a sequence of Markov chains with stationary distributions ρ_i ∝ e^{−f_i}, where f_i = f + ‖x‖²/(2σ_i²). Then, at the ith stage, we estimate the expectation
E_{ρ_i}[g_i], where g_i = exp( (1/2)(σ_i^{−2} − σ_{i+1}^{−2}) ‖x‖² ). (4.1)
If we can estimate each expectation with relative error at most O(ε/M), then the product of these M quantities estimates the normalizing constant Z = ∫_{R^d} e^{−f(x)} dx with relative error at most ε.
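To make the telescoping explicit (writing Z_i := ∫_{R^d} e^{−f_i(x)} dx, with the convention that f_{M+1} = f), the definitions of f_i and g_i give e^{−f_i(x)} g_i(x) = e^{−f_{i+1}(x)}, so each stage's expectation is a ratio of consecutive normalizing constants:

```latex
\mathbb{E}_{\rho_i}[g_i]
  = \frac{\int_{\mathbb{R}^d} e^{-f_i(x)}\, g_i(x)\,\mathrm{d}x}{Z_i}
  = \frac{\int_{\mathbb{R}^d} e^{-f_{i+1}(x)}\,\mathrm{d}x}{Z_i}
  = \frac{Z_{i+1}}{Z_i},
\qquad
Z = Z_{M+1} = Z_1 \prod_{i=1}^{M} \mathbb{E}_{\rho_i}[g_i].
```

Hence, if each factor is estimated within relative error ε/(2M), the product carries relative error at most e^{ε/2} − 1 ≤ ε for ε ≤ 1.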
For the mean estimation problem, [31] showed that when the relative variance Var_{ρ_i}[g_i] / E_{ρ_i}[g_i]² is constant, there is a quantum algorithm for estimating the expectation E_{ρ_i}[g_i] within relative error at most ε using Õ(1/ε) quantum samples from the distribution ρ_i. Our annealing schedule satisfies the bounded relative variance condition. Therefore, by the quantum mean estimation algorithm, we improve the sampling complexity of the standard annealing process from Õ(M² ε^{−2}) to Õ(M ε^{−1}).
To further improve the query complexity, we consider using the quantum MALAs developed in the previous section to generate samples. Observe that Algorithm 1 outputs a quantum state corresponding to some distribution that is close to ρ_i, instead of an individual sample. If we can estimate the expectation without destroying the quantum state, then we can reuse the state and evolve it for the (i + 1)st Markov chain. Fortunately, we can use non-destructive mean estimation to estimate the expectation and restore the initial states. A detailed error analysis of this algorithm can be found in [6, 22]. We first prepare Õ(M ε^{−1}) copies of initial states corresponding to the Gaussian distribution N(0, L^{−1}I_d). Then, for each stage, we apply the non-destructive mean estimation algorithm to estimate the expectation E_{ρ_i}[g_i] and then run quantum MALA to evolve the states |ρ_i⟩ to |ρ_{i+1}⟩. This gives our first quantum algorithm for estimating normalizing constants.
Theorem 4.1 (Informal version of Theorem D.2). Let Z be the normalizing constant in (1.3). There is a quantum algorithm (Algorithm 2) that outputs an estimate Z̃ with relative error at most ε using Õ(d^{3/2} κ^{1/2} ε^{−1}) queries to the quantum gradient and evaluation oracles.
Quantum speedup for MLMC Now we consider using multilevel Monte Carlo (MLMC) as the annealing process and show how to achieve quantum speedup. MLMC was originally developed by [23] for parametric integration; then [17] applied MLMC to simulate stochastic differential equations (SDEs). The idea of MLMC is natural: we choose a different number of samples at each stage based on the cost and variance of that stage.
To estimate normalizing constants, a variant of MLMC was proposed in [16]. Unlike the standard MLMC for bounding the mean-squared error, they upper bound the bias and the variance separately, and the analysis is technically difficult. The first quantum algorithm based on MLMC was subsequently developed by [2] based on the quantum mean estimation algorithm. Roughly speaking, the quantum algorithm can quadratically reduce the -dependence of the sample complexity compared with classical MLMC.
Algorithm 2: QUANTUMMALAFORESTIMATINGNORMALIZINGCONSTANT (Informal)
Input: Evaluation oracle O_f, gradient oracle O_{∇f}
Output: Estimate Z̃ of Z with relative error at most ε
1. M ← Õ(√d), K ← Õ(ε^{−1})
2. Compute the cooling schedule parameters σ_1, . . . , σ_M
3. for j ← 1, . . . , K do
4.   Prepare the state |ρ_{1,j}⟩ ∝ ∫_{R^d} e^{−‖x‖²/(4σ_1²)} |x⟩ dx
5. Z̃ ← (2πσ_1²)^{d/2}
6. for i ← 1, . . . , M do
7.   g̃_i ← non-destructive mean estimation for g_i using {|ρ_{i,1}⟩, . . . , |ρ_{i,K}⟩}
8.   Z̃ ← Z̃ · g̃_i
9.   for j ← 1, . . . , K do
10.    |ρ_{i+1,j}⟩ ← QUANTUMMALA(O_{f_{i+1}}, O_{∇f_{i+1}}, |ρ_{i,j}⟩)
11. return Z̃
In this work, we apply the quantum accelerated MLMC (QA-MLMC) scheme [2] to simulate underdamped Langevin dynamics as the SDE. One challenge in using QA-MLMC is that g_i in our setting is not Lipschitz. Fortunately, as suggested by [16], this issue can be resolved by truncating large x and replacing g_i by h_i := min{ g_i, exp( (r_i^+)² / (σ_i²(1 + α^{−1})) ) }, with the choice
α = Õ( 1/(√d log(1/ε)) ), r_i^+ = E_{ρ_{i+1}}‖x‖ + Θ( σ_i √((1 + α) log(1/ε)) ) (4.2)
to ensure that h_i / E_{ρ_i}[g_i] is O(σ_i^{−1})-Lipschitz. Furthermore, |E_{ρ_i}(h_i − g_i)| < ε by Lemmas C.7 and C.8 in [16]. For simplicity, we regard g_i as a Lipschitz continuous function in our main results. We present QA-MLMC in Algorithm 3, where the sampling algorithm A can be chosen to be quantum inexact ULD/ULD-RMM or quantum MALA.
Algorithm 3: QA-MLMC (Informal)
Input: Evaluation oracle O_f, function g, error ε, a quantum sampler A(x_0, f, η) for ρ
Output: An estimate of R̃ = E_ρ[h]
1. K ← Õ(ε^{−1})
2. Compute the initial point x_0 and the step size η_0
3. Compute the numbers of samples N_1, . . . , N_K
4. for j ← 1, . . . , K do
5.   Let η_j = η/2^{j−1}
6.   for i ← 1, . . . , N_j do
7.     Sample X_i^{η_j} by A(f, x_0, η_j), and sample X_i^{η_j/2} by A(f, x_0, η_j/2)
8.   G̃_j^− ← QMEANEST({g(X_i^{η_j})}_{i∈[N_j]}), and G̃_j^+ ← QMEANEST({g(X_i^{η_j/2})}_{i∈[N_j]})
9. return R̃ = G̃_0 + Σ_{j=1}^{K} (G̃_j^− − G̃_j^+)
This QA-MLMC framework reduces the ε-dependence of the sampling complexity for estimating normalizing constants from ε^{−2} to ε^{−1} in both the ULD and ULD-RMM cases, as compared with the state-of-the-art classical results [16].
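For contrast, here is a minimal classical MLMC telescope of the kind QA-MLMC accelerates — a sketch only: `sample_path` is a hypothetical sampler (e.g., one full ULD run at the given step size), we use the standard fine-minus-coarse sign convention, and coarse/fine paths are drawn independently for clarity, whereas practical MLMC couples them through shared Brownian increments to reduce variance.

```python
# Classical MLMC telescope for E[g(X)] under a step-size-eta discretized sampler.
import numpy as np

def mlmc_estimate(sample_path, g, f, x0, eta, Ns):
    rng = np.random.default_rng(0)
    # Base level: plain Monte Carlo at the coarsest step size eta.
    est = np.mean([g(sample_path(f, x0, eta, rng)) for _ in range(Ns[0])])
    # Correction levels: E[g(X^{eta_j/2}) - g(X^{eta_j})], eta_j = eta / 2^{j-1}.
    for j in range(1, len(Ns)):
        eta_j = eta / 2 ** (j - 1)
        corr = [g(sample_path(f, x0, eta_j / 2, rng))
                - g(sample_path(f, x0, eta_j, rng)) for _ in range(Ns[j])]
        est += np.mean(corr)
    return est  # telescopes to the finest-level mean E[g(X^{eta / 2^{len(Ns)-1}})]
```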
Using the quantum inexact ULD and ULD-RMM algorithms (Theorem 3.1) to generate samples, we obtain our second quantum algorithm for estimating normalizing constants (see Appendix D for proofs).
Theorem 4.2 (Informal version of Theorem D.3 and Theorem D.4). Let Z be the normalizing constant in (1.3). There exist quantum algorithms for estimating Z with relative error at most ε using
• quantum inexact ULD with Õ(d^{3/2} κ² ε^{−1}) queries to the evaluation oracle, and
• quantum inexact ULD-RMM with Õ((d^{7/6} κ^{7/6} + d^{4/3} κ) ε^{−1}) queries to the evaluation oracle.
5 Quantum Lower Bound
Finally, we lower bound the quantum query complexity of normalizing constant estimation.
Theorem 5.1. For any fixed positive integer k, given query access (1.4) to a function f : R^k → R that is 1.5-smooth and 0.5-strongly convex, the quantum query complexity of estimating the partition function Z = ∫_{R^k} e^{−f(x)} dx within multiplicative error ε with probability at least 2/3 is Ω(ε^{−1/(1+4/k)}).
The proof of our quantum lower bound is inspired by the construction in Section 5 of [16]. They consider a log-concave function whose value is negligible outside a hypercube centered at 0. The interior of the hypercube is decomposed into cells of two types. The function takes different values on each type, and the normalizing constant estimation problem reduces to determining the number of cells of each type. Quantumly, we follow the same construction and reduce the cell counting problem to the Hamming weight problem: given an n-bit Boolean string and two integers ℓ < ℓ′, decide whether the Hamming weight (i.e., the number of ones) of the string is ℓ or ℓ′. This problem has a known quantum query lower bound [32], which implies the quantum hardness of estimating the normalizing constant. The full proof of Theorem 5.1 appears in Appendix E.
Acknowledgements
AMC acknowledges support from the Army Research Office (grant W911NF-20-1-0015); the Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Accelerated Research in Quantum Computing program; and the National Science Foundation (grant CCF-1813814). TL was supported by a startup fund from Peking University, and the Advanced Institute of Information Technology, Peking University. JPL was supported by the National Science Foundation (grant CCF-1813814), an NSF Quantum Information Science and Engineering Network (QISE-NET) triplet award (DMR-1747426), a Simons Foundation award (No. 825053), and the Simons Quantum Postdoctoral Fellowship. RZ was supported by the University Graduate Continuing Fellowship from UT Austin. | 1. What are the main contributions of the paper regarding quantum estimators and their applications?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its query complexity and presentation quality?
3. Do you have any questions or concerns about the paper's content, such as the lack of details in the quantum computations and estimation algorithm?
4. How does the reviewer assess the limitations of the work, including its current impracticality and comparison to existing Langevin sampling methods? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The main contributions of the paper are two-fold. The first contribution consists of applying quantum estimators of the gradient of the potential function ∇f in Langevin-type algorithms that sample from a target distribution π(⋅) = exp(−f(⋅))/Z_f. The authors use the Underdamped Langevin algorithm, the Metropolis-adjusted Langevin algorithm, and ULA with the randomized midpoint discretization method. The second part of the paper aims at the estimation of the normalizing constant Z_f using what they call quantum speedup for MLMC. Essentially, this paper is (one of) the first to use quantum estimation techniques in the problem of MCMC sampling.
Strengths And Weaknesses
STRENGTHS The main advantage of this approach is the low query complexity for each iteration which implements a known zeroth-order gradient estimation algorithm.
WEAKNESSES The weakness of the paper is the quality of presentation and the applicability of the algorithm. The paper is hard to read, as most notions are presented very briefly. The quantum computations and the estimation algorithm are not well written and are not well defined. The main sampling algorithms are not presented either. This is very inconvenient for a reader inexperienced in either of these two topics.
Questions
The paper is very hard to read. There are not many proofs, and the claims are hard to check, as most lemmas and propositions are taken from other papers. The main algorithms should be presented in detail, and proofs of the main claims should be stated clearly.
Limitations
The application of this work is not possible at this moment, and it is not clear when it will be possible to implement such an algorithm. Meanwhile, the existing work on Langevin sampling is simple to implement compared to the proposed methods. |
NIPS | Title
Adaptive Online Estimation of Piecewise Polynomial Trends
Abstract
We consider the framework of non-stationary stochastic optimization [Besbes et al., 2015] with squared error losses and noisy gradient feedback where the dynamic regret of an online learner against a time varying comparator sequence is studied. Motivated from the theory of non-parametric regression, we introduce a new variational constraint that enforces the comparator sequence to belong to a discrete kth order Total Variation ball of radius C_n. This variational constraint models comparators that have piecewise polynomial structure which has many relevant practical applications [Tibshirani, 2014]. By establishing connections to the theory of wavelet based non-parametric regression, we design a polynomial time algorithm that achieves the nearly optimal dynamic regret of Õ(n^{1/(2k+3)} C_n^{2/(2k+3)}). The proposed policy is adaptive to the unknown radius C_n. Further, we show that the same policy is minimax optimal for several other non-parametric families of interest.
1 Introduction
In time series analysis, estimating and removing the trend are often the first steps taken to make the sequence “stationary”. The non-parametric assumption that the underlying trend is a piecewise polynomial or a spline [de Boor, 1978] is one of the most popular choices, especially when we do not know where the “change points” are and how many of them are appropriate. The higher order Total Variation (see Assumption A3) of the trend can capture, in some sense, both the sparsity and the intensity of changes in the underlying dynamics. A non-parametric regression method that penalizes this quantity — trend filtering [Tibshirani, 2014] — enjoys superior local adaptivity over traditional methods such as the Hodrick-Prescott Filter [Hodrick and Prescott, 1997]. However, trend filtering is an offline algorithm, which limits its applicability to the inherently online time series forecasting problem. In this paper, we are interested in designing an online forecasting strategy that can essentially match the performance of the offline methods for trend estimation, hence allowing us to fit time series forecasting models on-the-fly. In particular, our problem setup (see Figure 1) and algorithm are applicable to all online variants of the trend filtering problem, such as predicting stock prices, server payloads, sales, etc.
Let’s describe the notations that will be used throughout the paper. All vectors and matrices will be written in bold face letters. For a vector x ∈ Rm, x[i] or xi denotes its value at the ith coordinate. x[a : b] or xa:b is the vector [x[a], . . . ,x[b]]. ‖·‖p denotes finite dimensional Lp norms. ‖x‖0 is the number of non-zero coordinates of a vector x. [n] represents the set {1, . . . , n}. Di ∈ R(n−i)×n denotes the discrete difference operator of order i defined as in [Tibshirani, 2014] and reproduced
below:
D¹ = [ −1 1 0 · · · 0 0 ; 0 −1 1 · · · 0 0 ; · · · ; 0 0 0 · · · −1 1 ] ∈ R^{(n−1)×n}, and D^i = D̃¹ · D^{i−1} for all i ≥ 2, where D̃¹ is the (n − i) × (n − i + 1) truncation of D¹.
The theme of this paper builds on the non-parametric online forecasting model developed in [Baby and Wang, 2019]. We consider a sequential n step interaction process between an agent and an adversary as shown in Figure 1.
A forecasting strategy S is defined as an algorithm that outputs a prediction S(t) at time t only based on the information available after the completion of time t − 1. The random variables ε_t for t ∈ [n] are independent and subgaussian with parameter σ². This sequential game can be regarded as an online version of the non-parametric regression setup well studied in the statistics community.
In this paper, we consider the problem of forecasting sequences that obey n^k‖D^{k+1}θ_{1:n}‖₁ ≤ C_n, k ≥ 0, and ‖θ_{1:n}‖_∞ ≤ B. The constraint n^k‖D^{k+1}θ_{1:n}‖₁ ≤ C_n has been widely used in the rich literature on non-parametric regression. For example, the offline problem of estimating sequences obeying such a higher order difference constraint from noisy labels under squared error loss is studied in [Mammen and van de Geer, 1997, Donoho et al., 1998, Tibshirani, 2014, Wang et al., 2016, Sadhanala et al., 2016, Guntuboyina et al., 2017], to cite a few. We aim to design forecasters whose predictions are only based on past history and still perform as well as a batch estimator that sees the entire observations ahead of time.
Scaling of n^k. The family {θ_{1:n} | n^k‖D^{k+1}θ_{1:n}‖₁ ≤ C_n} may appear to be alarmingly restrictive for a constant C_n due to the scaling factor n^k, but let us argue why this is actually a natural construct. The continuous TV^k distance of a function f : [0, 1] → R is defined as ∫₀¹ |f^{(k+1)}(x)| dx, where f^{(k+1)} is the (k + 1)th order (weak) derivative. A sequence can be obtained by sampling the function at x_i = i/n, i ∈ [n]. Discretizing the integral yields the TV^k distance of this sequence to be n^k‖D^{k+1}θ_{1:n}‖₁. Thus, the n^k‖D^{k+1}θ_{1:n}‖₁ term can be interpreted as the discrete approximation to the continuous higher order TV distance of a function. See Figure 2 for an illustration of the case k = 1.
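As a quick numerical illustration of this discretization — a standalone sketch, not part of the paper's code — the discrete TV¹ distance n‖D²θ_{1:n}‖₁ of samples of f(x) = |x − 1/2| converges to the continuous value ∫₀¹ |f″(x)| dx = 2, the total slope change at the single kink:

```python
# Discrete TV^1 distance of samples theta_i = f(i/n) for f(x) = |x - 1/2|.
import numpy as np

def Dk(theta, k):
    # k-th order discrete difference D^k theta via repeated forward differences,
    # matching the paper's D^1 rows [-1, 1].
    for _ in range(k):
        theta = np.diff(theta)
    return theta

for n in [100, 1000, 10000]:
    theta = np.abs(np.arange(1, n + 1) / n - 0.5)
    print(n, n * np.abs(Dk(theta, 2)).sum())   # -> 2.0 for each n
```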
Non-stationary Stochastic Optimization. The setting above can also be viewed under the framework of non-stationary stochastic optimization as studied in [Besbes et al., 2015, Chen et al., 2018b] with squared error loss and noisy gradient feedback. At each time step, the adversary chooses a loss function ft(x) = (x − θt)2. Since ∇ft(x) = 2(x − θt), the feedback ∇̃ft(x) = 2(x − yt) constitutes an unbiased estimate of the gradient ∇ft(x). [Besbes et al., 2015, Chen et al., 2018b] quantifies the performance of a forecasting strategy S in terms of dynamic regret as follows.
R_dynamic(S, θ_{1:n}) := E[ Σ_{t=1}^n f_t(S(t)) ] − Σ_{t=1}^n inf_{x_t} f_t(x_t) = E[ Σ_{t=1}^n (S(t) − θ_{1:n}[t])² ], (1)
where the last equality follows from the fact that when f_t(x) = (x − θ_{1:n}[t])², inf_x (x − θ_{1:n}[t])² = 0. The expectation above is taken over the randomness in the noisy gradient feedback and that of the agent’s forecasting strategy. It is impossible to achieve sublinear dynamic regret against arbitrary ground truth sequences. However, if the sequence of minimizers of the loss functions f_t(x) = (x − θ_t)² obeys a path variational constraint, then we can parameterize the dynamic regret as a function of the path length, which could be sublinear when the path length is sublinear. Typical variational constraints considered in the existing work include Σ_t |θ_t − θ_{t−1}|, Σ_t |θ_t − θ_{t−1}|², and (Σ_t ‖f_t − f_{t−1}‖_p^q)^{1/q} [see Baby and Wang, 2019, for a review]. These are all useful in their respective contexts, but do not capture higher order smoothness.
The purpose of this work is to connect ideas from batch non-parametric regression to the framework of online stochastic optimization and define a natural family of higher order variational functionals of the form ‖Dk+1θ1:n‖1 to track a comparator sequence with piecewise polynomial structure. To the best of our knowledge such higher order path variationals for k ≥ 1 are vastly unexplored in the domain of non-stationary stochastic optimization. In this work, we take the first steps in introducing such variational constraints to online non-stationary stochastic optimization and exploiting them to get sub-linear dynamic regret.
2 Summary of results
In this section, we summarize the assumptions and main results of the paper.
Assumptions. We start by listing the assumptions made and provide justifications for them.
(A1) The time horizon is known to be n.
(A2) The parameter σ2 of subgaussian noise in the observations is a known fixed positive constant.
(A3) The ground truth denoted by θ1:n has its kth order total variation bounded by some positive Cn, i.e., we consider ground truth sequences that belongs to the class
TV^k(C_n) := {θ_{1:n} ∈ R^n : n^k‖D^{k+1}θ_{1:n}‖₁ ≤ C_n}
We refer to nk‖Dk+1θ1:n‖1 as TV k distance of the sequence θ1:n. To avoid trivial cases, we assume Cn = Ω(1).
(A4) The TV order k is a known fixed positive constant.
(A5) ‖θ1:n‖∞≤ B for a known fixed positive constant B.
Though we require the time horizon to be known in advance in assumption (A1), this can be easily lifted using standard doubling trick arguments. The knowledge of time horizon helps us to present the policy in a most transparent way. If standard deviation of sub-gaussian noise is unknown, contrary to assumption (A2), then it can be robustly estimated by a Median Absolute Deviation estimator using first few observations, see for eg. Johnstone [2017]. This is indeed facilitated by the sparsity of wavelet coefficients of TV k bounded sequences. Assumption (A3) characterizes the ground truth
sequences whose forecasting is the main theme of this paper. The TVk(Cn) class features a rich family of sequences that can potentially exhibit spatially non-homogeneous smoothness. For example it can capture sequences that are piecewise polynomials of degree at most k. This poses a challenge to design forecasters that are locally adaptive and can efficiently detect and make predictions under the presence of the non-homogeneous trends. Though knowledge of the TV order k is required in assumption (A4), most of the practical interest is often limited to the lower orders k = 0, 1, 2, 3, see for eg. [Kim et al., 2009, Tibshirani, 2014] and we present (in Appendix D) a meta-policy based on exponential weighted averages [Cesa-Bianchi and Lugosi, 2006] to adapt to these lower orders. Finally assumption (A5) is standard in the online learning literature.
Our contributions. We summarize our main results below.
• When the revealed labels are noisy realizations of sequences that belong to TV^k(C_n), we propose a polynomial time policy called Ada-VAW (Adaptive Vovk–Azoury–Warmuth forecaster) that achieves the nearly minimax optimal rate of Õ(n^{1/(2k+3)} C_n^{2/(2k+3)}) for R_dynamic with high probability. The proposed policy optimally adapts to the unknown radius C_n.
• We show that the proposed policy achieves the optimal R_dynamic when the revealed labels are noisy realizations of sequences residing in higher order discrete Holder and discrete Sobolev classes.
• When the revealed labels are noisy realizations of sequences that obey ‖D^{k+1}θ_{1:n}‖₀ ≤ J_n and ‖θ_{1:n}‖_∞ ≤ B, we show that the same policy achieves the minimax optimal Õ(J_n) rate for R_dynamic with high probability. The policy optimally adapts to the unknown J_n.
Notes on key novelties. It is known that the VAW forecaster is an optimal algorithm for online polynomial regression with squared error losses [Cesa-Bianchi and Lugosi, 2006]. With the side information of change points where the underlying ground truth switches from one polynomial to another, we can run a VAW forecaster on each of the stable polynomial sections to control the cumulative squared error of the policy. We use the machinery of wavelets to mimic an oracle that can provide side information of the change points. For detecting change points, a restart rule is formulated by exploiting connections between wavelet coefficients and locally adaptive regression splines. This is a more general strategy than that used in [Baby and Wang, 2019]. To the best of our knowledge, this is the first time an interplay between VAW forecaster and theory of wavelets along with its adaptive minimaxity [Donoho et al., 1998] has been used in the literature.
Wavelet computations require that the length of the data whose wavelet transform is computed be a power of 2. In practice this is achieved by a padding strategy in cases where the original data length is not a power of 2. We show that the most commonly used padding strategies – e.g., zero padding as in [Baby and Wang, 2019] – are not useful for the current problem, and we propose a novel packing strategy that alleviates the need to pad. This will be useful to many applications that use wavelets, well beyond the scope of the current paper.
Our proof techniques for bounding the regret use properties of the CDJV wavelet construction [Cohen et al., 1993]. To the best of our knowledge, this is the first time ideas from a general CDJV construction scheme imply useful results in an online learning paradigm. Optimally controlling the bias of VAW demands carefully bounding the ℓ₂ norm of the coefficients computed by polynomial regression. This is done by using ideas from number theory and symbolic determinant evaluation of polynomial matrices. This could be of independent interest in both offline and online polynomial regression.
3 Related Work
In this section, we briefly discuss the related work. A discussion of preliminaries and a detailed exposition of the related literature are deferred to Appendix A and Appendix B, respectively. Throughout this paper, when we refer to Õ(n^{1/(2k+3)}) as the optimal regret, we assume that C_n = n^k‖D^{k+1}θ_{1:n}‖₁ is O(1).
Non-parametric Regression As noted in Section 1, the problem setup we consider can be regarded as an online version of the batch non-parametric regression framework. It has been established (see, e.g., [Mammen and van de Geer, 1997, Donoho et al., 1998, Tibshirani, 2014]) that the minimax rate for estimating sequences with bounded TV^k distance under squared error loss scales as n^{1/(2k+3)} (n^k‖D^{k+1}θ_{1:n}‖₁)^{2/(2k+3)}, modulo logarithmic factors of n. In this work, we aim to achieve the same rate for the minimax dynamic regret in the online setting.
Non-stationary Stochastic Optimization Our forecasting framework can be considered as a special case of the non-stationary stochastic optimization setting studied in [Besbes et al., 2015, Chen et al., 2018b]. It can be shown that their proposed algorithm, namely restarting Online Gradient Descent (OGD), yields a suboptimal dynamic regret of O(n^{1/2}(‖Dθ_{1:n}‖₁)^{1/2}) for our problem. However, it should be noted that their algorithm works with general strongly convex and convex losses. A summary of the dynamic regret of various algorithms is presented in Table 1. The rationale behind how to translate existing regret bounds to our setting is elaborated in Appendix B.
Prediction of Bounded Variation sequences Our problem setup is identical to that of [Baby and Wang, 2019], except for the fact that they consider forecasting sequences whose zeroth order Total Variation is bounded. Our work can be considered as a generalization to any TV order k. Their algorithm gives a suboptimal regret of O(n^{1/3}‖Dθ_{1:n}‖₁^{2/3}) for k ≥ 1.
Competitive Online Non-parametric Regression [Rakhlin and Sridharan, 2014] considers an online learning framework with squared error losses where the learner competes against the best function in a non-parametric function class. Their results imply, via a non-constructive argument, the existence of an algorithm that achieves the regret of Õ(n^{1/(2k+3)}) for our problem.
4 Main results
We present below the main results of the paper. All proofs are deferred to the appendix.
4.1 Limitations of linear forecasters
We exhibit a lower-bound on the dynamic regret that is implied by [Donoho et al., 1998] in the batch regression setting.
Proposition 1 (Minimax Regret). Let y_t = θ_{1:n}[t] + ε_t for t = 1, . . . , n, where θ_{1:n} ∈ TV^k(C_n), |θ_{1:n}[t]| ≤ B, and ε_t are iid σ²-subgaussian random variables. Let A_F be the class of all forecasting strategies whose prediction at time t only depends on y_1, . . . , y_{t−1}. Let s_t denote the prediction at time t for a strategy s ∈ A_F. Then,
inf_{s∈A_F} sup_{θ_{1:n}∈TV^k(C_n)} Σ_{t=1}^n E[(s_t − θ_{1:n}[t])²] = Ω( min{ n, n^{1/(2k+3)} C_n^{2/(2k+3)} } ),
where the expectation is taken with respect to the randomness in the strategy of the player and in ε_t.
We define linear forecasters to be strategies that predict a fixed linear function of the history. This includes a large family of polices including the ARIMA family, Exponential Smoothers for Time Series forecasting, Restarting OGD etc. However in the presence of spatially inhomogeneous
smoothness – which is the case with TV bounded sequences – these policies are doomed to perform sub-optimally. This can be made precise by providing a lower-bound on the minimax regret for linear forecasters. Since the offline problem of smoothing is easier than that of forecasting, a lower-bound on the minimax MSE of a linear smoother will directly imply a lower-bound on the regret of linear forecasting strategies. By the results of [Donoho et al., 1998], we have the following proposition:
Proposition 2 (Minimax regret for linear forecasters). Linear forecasters will suffer a dynamic regret of at least Ω(n^{1/(2k+2)}) for forecasting sequences that belong to TV^k(1).
Thus we must look in the space of policies that are non-linear functions of past labels to achieve a minimax dynamic regret that can potentially match the lower-bound in Proposition 1.
4.2 Policy
In this section, we present our policy and capture the intuition behind its design. First, we introduce the following notations.
• The policy works by partitioning the time horizon into several bins. th denotes start time of the current bin and t be the current time point.
• W denotes the orthonormal Discrete Wavelet Transform (DWT) matrix obtained from a CDJV wavelet construction [Cohen et al., 1993] using wavelets of regularity k + 1. • T (y) denotes the vector obtained by elementwise soft-thresholding of y at level σ √ β log l
where l is the length of input vector.
• x_t ∈ R^{k+1} denotes the vector [1, t − t_h + k + 1, . . . , (t − t_h + k + 1)^k]^T.
• A_t = I + Σ_{s=t_h−k}^{t} x_s x_s^T.
• recenter(y[s : e]) first computes the Ordinary Least Squares (OLS) polynomial fit with features x_s, . . . , x_e. It then outputs the residual vector obtained by subtracting the best polynomial fit from the input vector y[s : e].
• Let L be the length of a vector u_{1:t}. pack(u) first computes l = ⌊log₂ L⌋. It then returns the pair (u_{1:2^l}, u_{t−2^l+1:t}). We call the elements of this pair segments of u.
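These two preprocessing primitives admit a direct NumPy rendering — a sketch only: the shifted time features x_s from the text are replaced by an equivalent 0-based parameterization, which leaves the OLS residuals unchanged.

```python
# recenter and pack, as defined in the notation above.
import numpy as np

def recenter(y, k):
    # Residual of the best degree-k polynomial (OLS) fit to y over times 0..len(y)-1.
    t = np.arange(len(y))
    X = np.vander(t, k + 1, increasing=True)       # features [1, t, ..., t^k]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coef

def pack(u):
    # Two (possibly overlapping) dyadic segments covering u, avoiding any padding.
    L = len(u)
    l = int(np.floor(np.log2(L)))
    return u[: 2 ** l], u[L - 2 ** l :]
```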
Ada-VAW: inputs – observed y values, TV order k, time horizon n, sub-gaussian parameter σ, hyper-parameters β > 24 and δ ∈ (0, 1]
1. For t = 1 to k − 1, predict 0.
2. Initialize t_h = k.
3. For t = k to n:
 (a) Predict ŷ_t = ⟨x_t, A_t^{−1} Σ_{s=t_h−k}^{t−1} y_s x_s⟩.
 (b) Observe y_t and suffer loss (ŷ_t − θ_{1:n}[t])².
 (c) Let y_r = recenter(y[t_h − k : t]) and let L be its length.
 (d) Let (y₁, y₂) = pack(y_r).
 (e) Let (α̂₁, α̂₂) = (T(Wy₁), T(Wy₂)).
 (f) Restart Rule: If ‖α̂₁‖₂ + ‖α̂₂‖₂ > σ, then
  i. set t_h = t + 1.
The basic idea behind the policy is to adaptively detect intervals that have low TV^k distance. If the TV^k distance within an interval is guaranteed to be low enough, then outputting a polynomial fit can suffice to obtain low prediction errors. Here we use the polynomial fit from the VAW [Vovk, 2001] forecaster in step 3(a) to make predictions in such low TV^k intervals. Step 3(e) computes denoised wavelet coefficients. It can be shown that the expression on the LHS of the inequality in step 3(f) can be used to lower bound √L times the TV^k distance of the underlying ground truth with high probability. Informally speaking, this is expected, as the wavelet coefficients for a CDJV system with regularity k are computed using higher order differences of the underlying signal. A restart is triggered when the scaled TV^k lower-bound within a bin exceeds the threshold of σ. Thus we use the energy of the denoised wavelet coefficients as a device to detect low TV^k intervals. In Appendix E we show that popular padding strategies, such as zero padding, greatly inflate the TV^k distance of the recentered sequence for k ≥ 1. This hurts the dynamic regret of our policy. To obviate the necessity to pad for performing the DWT, we employ the packing strategy described in the policy.
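For concreteness, the restart test of step 3(f) can be sketched as follows, assuming the PyWavelets package. CDJV boundary-corrected wavelets of regularity k + 1 are not available in PyWavelets, so a Daubechies filter with periodized boundaries stands in purely for illustration; the threshold level follows the T(·) definition above.

```python
# Sketch of the Ada-VAW restart test with a Daubechies stand-in for CDJV wavelets.
import numpy as np
import pywt

def pack(u):
    l = 2 ** int(np.floor(np.log2(len(u))))
    return u[:l], u[-l:]

def coef_energy(seg, k, sigma, beta):
    # Soft-threshold all wavelet coefficients at sigma * sqrt(beta * log(length)).
    thr = sigma * np.sqrt(beta * np.log(len(seg)))
    flat = np.concatenate(pywt.wavedec(seg, f"db{k + 2}", mode="periodization"))
    return np.linalg.norm(pywt.threshold(flat, thr, mode="soft"))

def should_restart(y_recentered, k, sigma, beta=24.1):
    y1, y2 = pack(y_recentered)
    return coef_energy(y1, k, sigma, beta) + coef_energy(y2, k, sigma, beta) > sigma
```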
4.3 Performance Guarantees
Theorem 3. Consider the feedback model y_t = θ_{1:n}[t] + ε_t, t = 1, . . . , n, where ε_t are independent σ²-subgaussian noise and |θ_{1:n}[t]| ≤ B. If β = 24 + 8 log(8/δ)/log(n), then with probability at least 1 − δ, Ada-VAW achieves a dynamic regret of Õ( n^{1/(2k+3)} (n^k‖D^{k+1}θ_{1:n}‖₁)^{2/(2k+3)} ), where Õ hides poly-logarithmic factors of n and 1/δ, and constants k, σ, B that do not depend on n.
Proof Sketch. Our proof strategy proceeds via the following steps.
1. Obtain a high probability bound of bias variance decomposition type on the total squared error incurred by the policy within a bin.
2. Bound the variance by optimally bounding the number of bins spawned.
3. Bound the squared bias using the restart criterion.
Step 1 is achieved by using the subgaussian behaviour of the revealed labels y_t. For step 2, we first connect the wavelet coefficients of a recentered signal to its TV^k distance using ideas from the theory of regression splines. Then we invoke the “uniform shrinkage” property of the soft thresholding estimator to construct a lower bound on the TV^k distance within a bin. Such a lower bound, when summed across all bins, leads to an upper bound on the number of bins spawned. Finally, for step 3, we use a reduction from the squared bias within a bin to the regret of the VAW forecaster, and exploit the restart criterion and the adaptive minimaxity of the soft thresholding estimator [Donoho et al., 1998] that uses a CDJV wavelet system.
Corollary 4. Consider the setup of Theorem 3. For the problem of forecasting sequences θ_{1:n} with n^k‖D^{k+1}θ_{1:n}‖₁ ≤ C_n and ‖θ_{1:n}‖_∞ ≤ B, Ada-VAW, when run with β = 24 + 8 log(8/δ)/log(n), yields a dynamic regret of Õ( n^{1/(2k+3)} C_n^{2/(2k+3)} ) with probability at least 1 − δ.
Remark 5. (Adaptive Optimality) By combining with the trivial regret bound of O(n), we see that the dynamic regret of Ada-VAW matches the lower-bound provided in Proposition 1. Ada-VAW optimally adapts to the variational budget C_n. Adaptivity to the time horizon n can be achieved by the standard doubling trick.
Remark 6. (Extension to higher dimensions) Let the ground truth θ_{1:n}[t] ∈ R^d and let v_i = [θ_{1:n}[1][i], . . . , θ_{1:n}[n][i]], ∆_i = n^k‖D^{k+1}v_i‖₁ for each i ∈ [d]. Let Σ_{i=1}^d ∆_i ≤ C_n. Then by running d instances of Ada-VAW in parallel, where instance i predicts the ground truth sequence along coordinate i, a regret bound of Õ( d^{(2k+1)/(2k+3)} n^{1/(2k+3)} C_n^{2/(2k+3)} ) can be achieved.
Remark 7. (Generalization to other losses) Consider the protocol in Figure 1. Instead of squared error losses in step (5), suppose we use loss functions f_t(x) such that argmin f_t(x) = θ_{1:n}[t] and f′_t(x) is γ-Lipschitz. Under this setting, Ada-VAW yields a dynamic regret of Õ( γ n^{1/(2k+3)} C_n^{2/(2k+3)} ) with probability at least 1 − δ. Concrete examples include (but are not limited to) the following; a small numerical sketch of these losses appears after the list.
1. Huber loss, f_t^{(ω)}(x) = 0.5(x − θ_{1:n}[t])² if |x − θ_{1:n}[t]| ≤ ω, and ω(|x − θ_{1:n}[t]| − ω/2) otherwise, is 1-Lipschitz in gradient.
2. Log-Cosh loss, f_t(x) = log(cosh(x − θ_{1:n}[t])), is 1-Lipschitz in gradient.
3. ε-insensitive logistic loss [Dekel et al., 2005], f_t^{(ε)}(x) = log(1 + e^{x−θ_{1:n}[t]−ε}) + log(1 + e^{−x+θ_{1:n}[t]−ε}) − 2 log(1 + e^{−ε}), is 1/2-Lipschitz in gradient.
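The promised sketch, written in terms of the residual r = x − θ_{1:n}[t]; the parameter values are illustrative, not the paper's.

```python
# The three surrogate losses from Remark 7 as functions of the residual r.
import numpy as np

def huber(r, omega=1.0):
    return np.where(np.abs(r) <= omega,
                    0.5 * r ** 2,
                    omega * (np.abs(r) - omega / 2))

def log_cosh(r):
    return np.log(np.cosh(r))

def eps_insensitive_logistic(r, eps=0.1):
    return (np.log1p(np.exp(r - eps)) + np.log1p(np.exp(-r - eps))
            - 2 * np.log1p(np.exp(-eps)))

rs = np.linspace(-3, 3, 7)
print(huber(rs), log_cosh(rs), eps_insensitive_logistic(rs), sep="\n")
```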
The rationale behind both Remark 6 and Remark 7 is described at the end of Appendix C.2.
Proposition 8. There exists an O(((k + 1)n)²) run-time implementation of Ada-VAW.
The run-time of O(n²) is larger than the O(n log n) run-time of the more specialized algorithm of [Baby and Wang, 2019] for k = 0. This is due to the more complex structure of higher order CDJV wavelets, which invalidates their trick that updates the Haar wavelets in an amortized O(1) time.
5 Extensions
In this section, we discuss the potential applications of the proposed algorithm which broadens its generalizability to several interesting use cases.
5.1 Optimality for Higher Order Sobolev and Holder Classes
So far we have been dealing with total variation classes, which can be thought of as the ℓ₁-norm of the (k + 1)th order derivatives. An interesting question to ask is “how does Ada-VAW behave under smoothness metrics defined in other norms, e.g., the ℓ₂-norm and ℓ∞-norm?” Following [Tibshirani, 2014], we define the higher order discrete Sobolev class S^{k+1}(C′_n) and discrete Holder class H^{k+1}(L′_n) as follows.
S^{k+1}(C′_n) = {θ_{1:n} : n^k‖D^{k+1}θ_{1:n}‖₂ ≤ C′_n}, H^{k+1}(L′_n) = {θ_{1:n} : n^k‖D^{k+1}θ_{1:n}‖_∞ ≤ L′_n},
where k ≥ 0. These classes feature sequences that are spatially more regular in comparison to the higher order TV^k class. It is well known (see, e.g., [Gyorfi et al., 2002]) that the following embedding holds:
H^{k+1}(C_n/n) ⊆ S^{k+1}(C_n/√n) ⊆ TV^k(C_n).
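This embedding follows from standard norm comparisons on v := D^{k+1}θ_{1:n} ∈ R^{n−k−1}; a one-line derivation, bounding the length n − k − 1 by n:

```latex
\|v\|_1 \le \sqrt{n}\,\|v\|_2
\quad\text{and}\quad
\|v\|_2 \le \sqrt{n}\,\|v\|_\infty,
\qquad\text{hence}\qquad
n^k\|v\|_\infty \le \frac{C_n}{n}
\;\Rightarrow\;
n^k\|v\|_2 \le \frac{C_n}{\sqrt{n}}
\;\Rightarrow\;
n^k\|v\|_1 \le C_n .
```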
Here C_n/√n and C_n/n are, respectively, the maximal radii of a Sobolev ball and a Holder ball enclosed within a TV^k(C_n) ball. Hence we have the following corollary.
Corollary 9. Assume the observation model of Theorem 3 and that θ_{1:n} ∈ S^{k+1}(C′_n). If β = 24 + 8 log(8/δ)/log(n), then with probability at least 1 − δ, Ada-VAW achieves a dynamic regret of Õ( n^{2/(2k+3)} [C′_n]^{2/(2k+3)} ).
It turns out that this is the optimal rate for the Sobolev classes, even in the easier, offline nonparametric regression setting [Gyorfi et al., 2002]. Since a Holder class can be sandwiched between two Sobolev balls of same minimax rates [see, e.g., Gyorfi et al., 2002], this also implies the adaptive optimality for the Holder class. We emphasize that Ada-VAW does not need to know the Cn, C ′n or L′n parameters, which implies that it will achieve the smallest error permitted by the right norm that captures the smoothness structure of the unknown sequence θ1:n.
5.2 Optimality for the case of Exact Sparsity
Next, we consider the performance of Ada-VAW on sequences satisfying an ℓ₀-(pseudo)norm measure of the smoothness, defined as
E^{k+1}(J_n) = {θ_{1:n} : ‖D^{k+1}θ_{1:n}‖₀ ≤ J_n, ‖θ_{1:n}‖_∞ ≤ B}.
This class captures sequences that have at most J_n jumps in their (k + 1)th order differences, which covers (modulo the boundedness) kth order discrete splines [see, e.g., Schumaker, 2007, Chapter 8.5] with exactly J_n knots, and arbitrary piecewise polynomials with O(J_n/k) polynomial pieces.
The techniques we developed in this paper allow us to establish the following performance guarantee for Ada-VAW when applied to sequences in this family.
Theorem 10. Let y_t = θ_{1:n}[t] + ε_t for t = 1, . . . , n, where ε_t are iid sub-gaussian with parameter σ² and ‖D^{k+1}θ_{1:n}‖₀ ≤ J_n, with |θ_{1:n}[t]| ≤ B and J_n ≥ 1. If β = 24 + 8 log(8/δ)/log(n), then with probability at least 1 − δ, Ada-VAW achieves a dynamic regret of Õ(J_n), where Õ hides polynomial factors of log(n) and log(1/δ).
We also establish an information-theoretic lower bound that applies to all algorithms.
Proposition 11. Under the interaction model in Figure 1, the minimax dynamic regret for forecasting sequences in E^{k+1}(J_n) is Ω(J_n).
Remark 12. Theorem 10 and Proposition 11 imply that Ada-VAW is optimal (up to logarithmic factors) for the sequence family E^{k+1}(J_n). It is noteworthy that Ada-VAW is adaptive in J_n, so it essentially performs as well as an oracle that knows in advance how many knots are enough to represent the input sequence as a discrete spline and where they are (which leaves only the J_n polynomials to be fitted).
6 Conclusion
In this paper, we considered the problem of forecasting TV^k bounded sequences and proposed the first efficient algorithm – Ada-VAW – that is adaptively minimax optimal. We also discussed the adaptive optimality of Ada-VAW in various parameters and other function classes. In establishing strong connections between the locally adaptive nonparametric regression literature and the adaptive online learning literature in a concrete problem, this paper could serve as a stepping stone for future exchanges of ideas between the research communities, and hopefully spark new theory and practical algorithms.
Acknowledgment
The research is partially supported by a start-up grant from UCSB CS department, NSF Award #2029626 and generous gifts from Adobe and Amazon Web Services.
Broader Impact
1. Who may benefit from the research? This work can be applied to the task of estimating trends in time series forecasting. For example, financial firms can use it for stock market predictions, the distribution sector can use it for inventory planning, meteorological observatories can use it for weather forecasting, and the health and planning sector can use it to forecast the spread of contagious diseases.
2. Who may be put at a disadvantage? Not applicable.
3. What are the consequences of failure of the system? There is no system to speak of, but failure of the strategy can lead to financial losses for the firms deploying the strategy to do forecasting. Under the assumptions stated in the paper, though, the technical results are formally proven and come with the stated mathematical guarantee.
4. Method leverages the biases in data? Not applicable. | 1. What is the focus of the paper in terms of the problem it addresses?
2. What is the main contribution of the paper, particularly in connecting two seemingly unrelated areas?
3. What are the strengths of the paper, including its ability to close a previously existing gap?
4. How could the writing quality of the paper be improved, specifically regarding its introduction and explanation of certain techniques? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper studies the problem of forecasting TV^k bounded sequences and gives an algorithm that has the optimal regret bound. The main contribution is to connect batch non-parametric regression to online stochastic optimization. Also, the authors use techniques from wavelet computation, which is a very interesting connection.
Strengths
The main contribution of this paper is closing the gap between the previous upper and lower bound. Moreover, the techniques and connections developed in the paper are also of independent interest and might be useful for future study in related problems.
Weaknesses
The writing quality of this paper can be improved. Specifically, the introduction could start with a broader picture of the problem and explain more of the intuition behind the connection between online estimation and the wavelet techniques, etc. |
NIPS | Title
Adaptive Online Estimation of Piecewise Polynomial Trends
Abstract
We consider the framework of non-stationary stochastic optimization [Besbes et al., 2015] with squared error losses and noisy gradient feedback where the dynamic regret of an online learner against a time varying comparator sequence is studied. Motivated from the theory of non-parametric regression, we introduce a new variational constraint that enforces the comparator sequence to belong to a discrete kth order Total Variation ball of radius C_n. This variational constraint models comparators that have piecewise polynomial structure which has many relevant practical applications [Tibshirani, 2014]. By establishing connections to the theory of wavelet based non-parametric regression, we design a polynomial time algorithm that achieves the nearly optimal dynamic regret of Õ(n^{1/(2k+3)} C_n^{2/(2k+3)}). The proposed policy is adaptive to the unknown radius C_n. Further, we show that the same policy is minimax optimal for several other non-parametric families of interest.
1 Introduction
In time series analysis, estimating and removing the trend are often the first steps taken to make the sequence “stationary”. The non-parametric assumption that the underlying trend is a piecewise polynomial or a spline [de Boor, 1978] is one of the most popular choices, especially when we do not know where the “change points” are and how many of them are appropriate. The higher order Total Variation (see Assumption A3) of the trend can capture, in some sense, both the sparsity and the intensity of changes in the underlying dynamics. A non-parametric regression method that penalizes this quantity — trend filtering [Tibshirani, 2014] — enjoys superior local adaptivity over traditional methods such as the Hodrick-Prescott Filter [Hodrick and Prescott, 1997]. However, trend filtering is an offline algorithm, which limits its applicability to the inherently online time series forecasting problem. In this paper, we are interested in designing an online forecasting strategy that can essentially match the performance of the offline methods for trend estimation, hence allowing us to fit time series forecasting models on-the-fly. In particular, our problem setup (see Figure 1) and algorithm are applicable to all online variants of the trend filtering problem, such as predicting stock prices, server payloads, sales, etc.
Let’s describe the notations that will be used throughout the paper. All vectors and matrices will be written in bold face letters. For a vector x ∈ Rm, x[i] or xi denotes its value at the ith coordinate. x[a : b] or xa:b is the vector [x[a], . . . ,x[b]]. ‖·‖p denotes finite dimensional Lp norms. ‖x‖0 is the number of non-zero coordinates of a vector x. [n] represents the set {1, . . . , n}. Di ∈ R(n−i)×n denotes the discrete difference operator of order i defined as in [Tibshirani, 2014] and reproduced
below:
D¹ = [ −1 1 0 · · · 0 0 ; 0 −1 1 · · · 0 0 ; · · · ; 0 0 0 · · · −1 1 ] ∈ R^{(n−1)×n}, and D^i = D̃¹ · D^{i−1} for all i ≥ 2, where D̃¹ is the (n − i) × (n − i + 1) truncation of D¹.
The theme of this paper builds on the non-parametric online forecasting model developed in [Baby and Wang, 2019]. We consider a sequential n step interaction process between an agent and an adversary as shown in Figure 1.
A forecasting strategy S is defined as an algorithm that outputs a prediction S(t) at time t only based on the information available after the completion of time t − 1. The random variables ε_t for t ∈ [n] are independent and subgaussian with parameter σ². This sequential game can be regarded as an online version of the non-parametric regression setup well studied in the statistics community.
In this paper, we consider the problem of forecasting sequences that obey n^k‖D^{k+1}θ_{1:n}‖₁ ≤ C_n, k ≥ 0, and ‖θ_{1:n}‖_∞ ≤ B. The constraint n^k‖D^{k+1}θ_{1:n}‖₁ ≤ C_n has been widely used in the rich literature on non-parametric regression. For example, the offline problem of estimating sequences obeying such a higher order difference constraint from noisy labels under squared error loss is studied in [Mammen and van de Geer, 1997, Donoho et al., 1998, Tibshirani, 2014, Wang et al., 2016, Sadhanala et al., 2016, Guntuboyina et al., 2017], to cite a few. We aim to design forecasters whose predictions are only based on past history and still perform as well as a batch estimator that sees the entire observations ahead of time.
Scaling of n^k. The family {θ_{1:n} | n^k‖D^{k+1}θ_{1:n}‖₁ ≤ C_n} may appear to be alarmingly restrictive for a constant C_n due to the scaling factor n^k, but let us argue why this is actually a natural construct. The continuous TV^k distance of a function f : [0, 1] → R is defined as ∫₀¹ |f^{(k+1)}(x)| dx, where f^{(k+1)} is the (k + 1)th order (weak) derivative. A sequence can be obtained by sampling the function at x_i = i/n, i ∈ [n]. Discretizing the integral yields the TV^k distance of this sequence to be n^k‖D^{k+1}θ_{1:n}‖₁. Thus, the n^k‖D^{k+1}θ_{1:n}‖₁ term can be interpreted as the discrete approximation to the continuous higher order TV distance of a function. See Figure 2 for an illustration of the case k = 1.
Non-stationary Stochastic Optimization. The setting above can also be viewed under the framework of non-stationary stochastic optimization as studied in [Besbes et al., 2015, Chen et al., 2018b] with squared error loss and noisy gradient feedback. At each time step, the adversary chooses a loss function ft(x) = (x − θt)2. Since ∇ft(x) = 2(x − θt), the feedback ∇̃ft(x) = 2(x − yt) constitutes an unbiased estimate of the gradient ∇ft(x). [Besbes et al., 2015, Chen et al., 2018b] quantifies the performance of a forecasting strategy S in terms of dynamic regret as follows.
R_dynamic(S, θ_{1:n}) := E[ Σ_{t=1}^n f_t(S(t)) ] − Σ_{t=1}^n inf_{x_t} f_t(x_t) = E[ Σ_{t=1}^n (S(t) − θ_{1:n}[t])² ], (1)
where the last equality follows from the fact that when f_t(x) = (x − θ_{1:n}[t])², inf_x (x − θ_{1:n}[t])² = 0. The expectation above is taken over the randomness in the noisy gradient feedback and that of the agent’s forecasting strategy. It is impossible to achieve sublinear dynamic regret against arbitrary ground truth sequences. However, if the sequence of minimizers of the loss functions f_t(x) = (x − θ_t)² obeys a path variational constraint, then we can parameterize the dynamic regret as a function of the path length, which could be sublinear when the path length is sublinear. Typical variational constraints considered in the existing work include Σ_t |θ_t − θ_{t−1}|, Σ_t |θ_t − θ_{t−1}|², and (Σ_t ‖f_t − f_{t−1}‖_p^q)^{1/q} [see Baby and Wang, 2019, for a review]. These are all useful in their respective contexts, but do not capture higher order smoothness.
The purpose of this work is to connect ideas from batch non-parametric regression to the framework of online stochastic optimization and define a natural family of higher order variational functionals of the form ‖Dk+1θ1:n‖1 to track a comparator sequence with piecewise polynomial structure. To the best of our knowledge such higher order path variationals for k ≥ 1 are vastly unexplored in the domain of non-stationary stochastic optimization. In this work, we take the first steps in introducing such variational constraints to online non-stationary stochastic optimization and exploiting them to get sub-linear dynamic regret.
2 Summary of results
In this section, we summarize the assumptions and main results of the paper.
Assumptions. We start by listing the assumptions made and provide justifications for them.
(A1) The time horizon is known to be n.
(A2) The parameter σ2 of subgaussian noise in the observations is a known fixed positive constant.
(A3) The ground truth denoted by θ1:n has its kth order total variation bounded by some positive Cn, i.e., we consider ground truth sequences that belongs to the class
TV^k(C_n) := {θ_{1:n} ∈ R^n : n^k‖D^{k+1}θ_{1:n}‖₁ ≤ C_n}
We refer to nk‖Dk+1θ1:n‖1 as TV k distance of the sequence θ1:n. To avoid trivial cases, we assume Cn = Ω(1).
(A4) The TV order k is a known fixed positive constant.
(A5) ‖θ1:n‖∞≤ B for a known fixed positive constant B.
Though we require the time horizon to be known in advance in assumption (A1), this can be easily lifted using standard doubling trick arguments. The knowledge of the time horizon helps us to present the policy in the most transparent way. If the standard deviation of the sub-gaussian noise is unknown, contrary to assumption (A2), then it can be robustly estimated by a Median Absolute Deviation estimator using the first few observations; see, e.g., Johnstone [2017]. This is indeed facilitated by the sparsity of the wavelet coefficients of TV^k bounded sequences. Assumption (A3) characterizes the ground truth
sequences whose forecasting is the main theme of this paper. The TVk(Cn) class features a rich family of sequences that can potentially exhibit spatially non-homogeneous smoothness. For example it can capture sequences that are piecewise polynomials of degree at most k. This poses a challenge to design forecasters that are locally adaptive and can efficiently detect and make predictions under the presence of the non-homogeneous trends. Though knowledge of the TV order k is required in assumption (A4), most of the practical interest is often limited to the lower orders k = 0, 1, 2, 3, see for eg. [Kim et al., 2009, Tibshirani, 2014] and we present (in Appendix D) a meta-policy based on exponential weighted averages [Cesa-Bianchi and Lugosi, 2006] to adapt to these lower orders. Finally assumption (A5) is standard in the online learning literature.
Our contributions. We summarize our main results below.
• When the revealed labels are noisy realizations of sequences that belong to TV^k(C_n), we propose a polynomial time policy called Ada-VAW (Adaptive Vovk–Azoury–Warmuth forecaster) that achieves the nearly minimax optimal rate of Õ(n^{1/(2k+3)} C_n^{2/(2k+3)}) for R_dynamic with high probability. The proposed policy optimally adapts to the unknown radius C_n.
• We show that the proposed policy achieves the optimal R_dynamic when the revealed labels are noisy realizations of sequences residing in higher order discrete Holder and discrete Sobolev classes.
• When the revealed labels are noisy realizations of sequences that obey ‖D^{k+1}θ_{1:n}‖₀ ≤ J_n and ‖θ_{1:n}‖_∞ ≤ B, we show that the same policy achieves the minimax optimal Õ(J_n) rate for R_dynamic with high probability. The policy optimally adapts to the unknown J_n.
Notes on key novelties. It is known that the VAW forecaster is an optimal algorithm for online polynomial regression with squared error losses [Cesa-Bianchi and Lugosi, 2006]. With the side information of change points where the underlying ground truth switches from one polynomial to another, we can run a VAW forecaster on each of the stable polynomial sections to control the cumulative squared error of the policy. We use the machinery of wavelets to mimic an oracle that can provide side information of the change points. For detecting change points, a restart rule is formulated by exploiting connections between wavelet coefficients and locally adaptive regression splines. This is a more general strategy than that used in [Baby and Wang, 2019]. To the best of our knowledge, this is the first time an interplay between VAW forecaster and theory of wavelets along with its adaptive minimaxity [Donoho et al., 1998] has been used in the literature.
Wavelet computations require that the length of the data whose transform is computed be a power of 2. In practice this is achieved by padding when the original data length is not a power of 2. We show that the most commonly used padding strategies – e.g., zero padding as in [Baby and Wang, 2019] – are not useful for the current problem, and we propose a novel packing strategy that removes the need to pad. This should be useful in many applications of wavelets well beyond the scope of the current paper.
Our proof techniques for bounding the regret use properties of the CDJV wavelet construction [Cohen et al., 1993]. To the best of our knowledge, this is the first time ideas from the general CDJV construction scheme have been shown to imply useful results in an online learning paradigm. Optimally controlling the bias of VAW demands carefully bounding the ℓ_2 norm of the coefficients computed by polynomial regression. This is done using ideas from number theory and symbolic determinant evaluation of polynomial matrices, which could be of independent interest in both offline and online polynomial regression.
3 Related Work
In this section, we briefly discuss the related work. A discussion of preliminaries and a detailed exposition of the related literature are deferred to Appendices A and B respectively. Throughout this paper, when we refer to Õ(n^{1/(2k+3)}) as the optimal regret, we assume that C_n = n^k ‖D^{k+1} θ_{1:n}‖_1 is O(1).
Non-parametric Regression As noted in Section 1, the problem setup we consider can be regarded as an online version of the batch non-parametric regression framework. It has been established (see, e.g., [Mammen and van de Geer, 1997, Donoho et al., 1998, Tibshirani, 2014]) that the minimax rate for estimating sequences with bounded TV^k distance under squared error loss scales as n^{1/(2k+3)} (n^k ‖D^{k+1} θ_{1:n}‖_1)^{2/(2k+3)}, modulo logarithmic factors of n. In this work, we aim to achieve the same rate for the minimax dynamic regret in the online setting.
Non-stationary Stochastic Optimization Our forecasting framework can be considered a special case of the non-stationary stochastic optimization setting studied in [Besbes et al., 2015, Chen et al., 2018b]. It can be shown that their proposed algorithm, restarting Online Gradient Descent (OGD), yields a suboptimal dynamic regret of O(n^{1/2} (‖Dθ_{1:n}‖_1)^{1/2}) for our problem. It should be noted, however, that their algorithm works with general strongly convex and convex losses. A summary of the dynamic regret of various algorithms is presented in Table 1; the rationale for translating existing regret bounds to our setting is elaborated in Appendix B.
Prediction of Bounded Variation sequences Our problem setup is identical to that of [Baby and Wang, 2019] except that they consider forecasting sequences whose zeroth order Total Variation is bounded. Our work can be considered a generalization to any TV order k. Their algorithm gives a suboptimal regret of O(n^{1/3} ‖Dθ_{1:n}‖_1^{2/3}) for k ≥ 1.

Competitive Online Non-parametric Regression [Rakhlin and Sridharan, 2014] considers an online learning framework with squared error losses where the learner competes against the best function in a non-parametric function class. Their results imply, via a non-constructive argument, the existence of an algorithm that achieves a regret of Õ(n^{1/(2k+3)}) for our problem.
4 Main results
We present below the main results of the paper. All proofs are deferred to the appendix.
4.1 Limitations of linear forecasters
We exhibit a lower bound on the dynamic regret that is implied by [Donoho et al., 1998] in the batch regression setting.
Proposition 1 (Minimax Regret). Let y_t = θ_{1:n}[t] + ε_t for t = 1, . . . , n, where θ_{1:n} ∈ TV^k(C_n), |θ_{1:n}[t]| ≤ B, and the ε_t are iid σ²-subgaussian random variables. Let A_F be the class of all forecasting strategies whose prediction at time t depends only on y_1, . . . , y_{t−1}. Let s_t denote the prediction at time t of a strategy s ∈ A_F. Then,
inf_{s ∈ A_F} sup_{θ_{1:n} ∈ TV^k(C_n)} Σ_{t=1}^{n} E[(s_t − θ_{1:n}[t])²] = Ω( min{ n, n^{1/(2k+3)} C_n^{2/(2k+3)} } ),
where the expectation is taken with respect to the randomness in the player's strategy and in ε_t.
We define linear forecasters to be strategies that predict a fixed linear function of the history. This includes a large family of policies, including the ARIMA family, exponential smoothers for time series forecasting, restarting OGD, etc. However, in the presence of spatially inhomogeneous smoothness – which is the case for TV bounded sequences – these policies are doomed to perform sub-optimally. This can be made precise by a lower bound on the minimax regret of linear forecasters. Since the offline problem of smoothing is easier than that of forecasting, a lower bound on the minimax MSE of linear smoothers directly implies a lower bound on the regret of linear forecasting strategies. By the results of [Donoho et al., 1998], we have the following proposition:
Proposition 2 (Minimax regret for linear forecasters). Linear forecasters suffer a dynamic regret of at least Ω(n^{1/(2k+2)}) for forecasting sequences that belong to TV^k(1).
Thus we must look in the space of policies that are non-linear functions of past labels to achieve a minimax dynamic regret that can potentially match the lower-bound in Proposition 1.
4.2 Policy
In this section, we present our policy and capture the intuition behind its design. First, we introduce the following notations.
• The policy works by partitioning the time horizon into several bins. t_h denotes the start time of the current bin and t denotes the current time point.
• W denotes the orthonormal Discrete Wavelet Transform (DWT) matrix obtained from a CDJV wavelet construction [Cohen et al., 1993] using wavelets of regularity k + 1.
• T(y) denotes the vector obtained by elementwise soft-thresholding of y at level σ√(β log l), where l is the length of the input vector.
• x_t ∈ R^{k+1} denotes the vector [1, t − t_h + k + 1, . . . , (t − t_h + k + 1)^k]^T.
• A_t = I + Σ_{s = t_h − k}^{t} x_s x_s^T.
• The recenter(y[s : e]) function first computes the Ordinary Least Squares (OLS) polynomial fit with features x_s, . . . , x_e. It then outputs the residual vector obtained by subtracting the best polynomial fit from the input vector y[s : e].
• Let L be the length of a vector u_{1:t}. pack(u) first computes l = ⌊log₂ L⌋ and then returns the pair (u_{1:2^l}, u_{t−2^l+1:t}). We call the elements of this pair the segments of u. (A NumPy sketch of both helpers follows this list.)
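A minimal NumPy sketch of the two helpers just defined; everything beyond what the bullets state (e.g., using plain indices 0, …, len(y)−1 inside recenter, which is harmless since the polynomial column space is shift-invariant) is our own implementation choice:

```python
import numpy as np

def recenter(y, k):
    """Residual after the best degree-k polynomial (OLS) fit to y."""
    s = np.arange(len(y))
    X = np.vander(s, k + 1, increasing=True).astype(float)  # 1, s, ..., s^k
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coef

def pack(u):
    """Two dyadic segments covering u, each of length 2^floor(log2 len(u))."""
    L = len(u)
    m = 2 ** int(np.floor(np.log2(L)))
    return u[:m], u[L - m:]
```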
Ada-VAW: inputs – observed y values, TV order k, time horizon n, sub-gaussian parameter σ, hyper-parameter β > 24 and δ ∈ (0, 1]

1. For t = 1 to k − 1, predict 0.
2. Initialize t_h = k.
3. For t = k to n:
   (a) Predict ŷ_t = ⟨x_t, A_t^{−1} Σ_{s = t_h − k}^{t−1} y_s x_s⟩.
   (b) Observe y_t and suffer loss (ŷ_t − θ_{1:n}[t])².
   (c) Let y_r = recenter(y[t_h − k : t]) and let L be its length.
   (d) Let (y¹, y²) = pack(y_r).
   (e) Let (α̂¹, α̂²) = (T(W y¹), T(W y²)).
   (f) Restart Rule: if ‖α̂¹‖₂ + ‖α̂²‖₂ > σ, then set t_h = t + 1.
The basic idea behind the policy is to adaptively detect intervals that have low TV^k distance. If the TV^k distance within an interval is guaranteed to be low enough, then outputting a polynomial fit suffices to obtain low prediction errors. We use the polynomial fit from the VAW forecaster [Vovk, 2001] in step 3(a) to make predictions in such low-TV^k intervals. Step 3(e) computes denoised wavelet coefficients. It can be shown that the expression on the LHS of the inequality in step 3(f) lower bounds √L times the TV^k distance of the underlying ground truth with high probability. Informally, this is expected because the wavelet coefficients of a CDJV system with regularity k are computed using higher order differences of the underlying signal. A restart is triggered when this scaled TV^k lower bound within a bin exceeds the threshold σ. Thus we use the energy of the denoised wavelet coefficients as a device for detecting low-TV^k intervals. In Appendix E we show that popular padding strategies, such as zero padding, greatly inflate the TV^k distance of the recentered sequence for k ≥ 1, which hurts the dynamic regret of our policy. To obviate the need to pad before performing the DWT, we employ the packing strategy described above.
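Putting the pieces together, here is a hedged sketch of the main loop. Two caveats: it reuses the recenter/pack helpers from the previous snippet, and it substitutes PyWavelets' Daubechies family ("db" + str(k+1)) for the CDJV boundary construction, which is not available in common libraries, so the restart statistic only approximates the one analyzed in the paper:

```python
import numpy as np
import pywt  # assumption: PyWavelets is installed

def denoised_energy(seg, k, sigma, beta):
    """||T(W seg)||_2 for one segment: DWT, soft-threshold, take the norm.
    'db{k+1}' is a stand-in for CDJV wavelets of regularity k + 1."""
    coeffs = pywt.wavedec(seg, f"db{k + 1}", mode="periodization")
    thr = sigma * np.sqrt(beta * np.log(max(len(seg), 2)))  # guard len-1 segs
    flat = np.concatenate(coeffs)
    return np.linalg.norm(pywt.threshold(flat, thr, mode="soft"))

def ada_vaw(y, k, sigma, beta):
    """Sketch of steps 1-3; uses recenter/pack from the previous snippet."""
    n = len(y)
    preds = np.zeros(n)
    th = k                                       # start of the current bin
    for t in range(k, n):
        s = np.arange(th - k, t)                 # past times used in this bin
        X = np.vander(s - th + k + 1, k + 1, increasing=True).astype(float)
        x_t = np.vander([t - th + k + 1], k + 1, increasing=True).ravel()
        A = np.eye(k + 1) + X.T @ X + np.outer(x_t, x_t)  # A_t includes x_t
        preds[t] = x_t @ np.linalg.solve(A, X.T @ y[s])   # VAW prediction
        y1, y2 = pack(recenter(y[th - k: t + 1], k))
        if (denoised_energy(y1, k, sigma, beta)
                + denoised_energy(y2, k, sigma, beta)) > sigma:
            th = t + 1                           # restart: open a new bin
    return preds
```

The restart comparison against σ mirrors step 3(f) of the pseudocode above; only the wavelet basis differs from the analyzed policy.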
4.3 Performance Guarantees
Theorem 3. Consider the feedback model y_t = θ_{1:n}[t] + ε_t, t = 1, . . . , n, where the ε_t are independent σ²-subgaussian noise variables and |θ_{1:n}[t]| ≤ B. If β = 24 + 8 log(8/δ)/log(n), then with probability at least 1 − δ, Ada-VAW achieves a dynamic regret of Õ( n^{1/(2k+3)} ( n^k ‖D^{k+1} θ_{1:n}‖_1 )^{2/(2k+3)} ), where Õ hides poly-logarithmic factors of n and 1/δ and the constants k, σ, B that do not depend on n.
Proof Sketch. Our proof strategy proceeds through the following steps.
1. Obtain a high probability bound of bias variance decomposition type on the total squared error incurred by the policy within a bin.
2. Bound the variance by optimally bounding the number of bins spawned.
3. Bound the squared bias using the restart criterion.
Step 1 is achieved using the subgaussian behaviour of the revealed labels y_t. For step 2, we first connect the wavelet coefficients of a recentered signal to its TV^k distance using ideas from the theory of regression splines. Then we invoke the “uniform shrinkage” property of the soft thresholding estimator to construct a lower bound on the TV^k distance within a bin. Summing this lower bound across all bins yields an upper bound on the number of bins spawned. Finally, for step 3, we use a reduction from the squared bias within a bin to the regret of the VAW forecaster, and exploit the restart criterion and the adaptive minimaxity of the soft thresholding estimator [Donoho et al., 1998] built on a CDJV wavelet system.
Corollary 4. Consider the setup of Theorem 3. For the problem of forecasting sequences θ_{1:n} with n^k ‖D^{k+1} θ_{1:n}‖_1 ≤ C_n and ‖θ_{1:n}‖_∞ ≤ B, Ada-VAW run with β = 24 + 8 log(8/δ)/log(n) yields a dynamic regret of Õ( n^{1/(2k+3)} C_n^{2/(2k+3)} ) with probability at least 1 − δ.
Remark 5. (Adaptive Optimality) Combining with the trivial regret bound of O(n), we see that the dynamic regret of Ada-VAW matches the lower bound provided in Proposition 1. Ada-VAW optimally adapts to the variational budget C_n; adaptivity to the time horizon n can be achieved by the standard doubling trick.
Remark 6. (Extension to higher dimensions) Let the ground truth θ_{1:n}[t] ∈ R^d, and for each i ∈ [d] let v_i = [θ_{1:n}[1][i], . . . , θ_{1:n}[n][i]] and Δ_i = n^k ‖D^{k+1} v_i‖_1. Suppose Σ_{i=1}^{d} Δ_i ≤ C_n. Then by running d instances of Ada-VAW in parallel, where instance i predicts the ground truth sequence along coordinate i, a regret bound of Õ( d^{(2k+1)/(2k+3)} n^{1/(2k+3)} C_n^{2/(2k+3)} ) can be achieved.
Remark 7. (Generalization to other losses) Consider the protocol in Figure 1. Instead of squared error losses in step (5), suppose we use loss functions f_t(x) such that argmin f_t(x) = θ_{1:n}[t] and f′_t(x) is γ-Lipschitz. Under this setting, Ada-VAW yields a dynamic regret of Õ( γ n^{1/(2k+3)} C_n^{2/(2k+3)} ) with probability at least 1 − δ. Concrete examples include (but are not limited to) the following; a small code sketch of all three follows this list.

1. Huber loss: f_t^{(ω)}(x) = 0.5 (x − θ_{1:n}[t])² if |x − θ_{1:n}[t]| ≤ ω, and ω(|x − θ_{1:n}[t]| − ω/2) otherwise; it is 1-Lipschitz in gradient.

2. Log-Cosh loss: f_t(x) = log(cosh(x − θ_{1:n}[t])) is 1-Lipschitz in gradient.

3. ε-insensitive logistic loss [Dekel et al., 2005]: f_t^{(ε)}(x) = log(1 + e^{x − θ_{1:n}[t] − ε}) + log(1 + e^{−x + θ_{1:n}[t] − ε}) − 2 log(1 + e^{−ε}) is 1/2-Lipschitz in gradient.
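For concreteness, the three losses of Remark 7 in NumPy (theta stands for θ_{1:n}[t]; a sketch, not the paper's code):

```python
import numpy as np

def huber(x, theta, omega):
    """f_t^{(omega)}; its gradient is 1-Lipschitz."""
    r = np.abs(x - theta)
    return np.where(r <= omega, 0.5 * (x - theta) ** 2, omega * (r - omega / 2))

def log_cosh(x, theta):
    """f_t; the gradient tanh(x - theta) is 1-Lipschitz."""
    return np.log(np.cosh(x - theta))

def eps_logistic(x, theta, eps):
    """epsilon-insensitive logistic loss; its gradient is 1/2-Lipschitz."""
    return (np.log1p(np.exp(x - theta - eps))
            + np.log1p(np.exp(theta - x - eps))
            - 2.0 * np.log1p(np.exp(-eps)))

# Each loss is minimized exactly at x = theta, as Remark 7 requires:
assert huber(0.0, 0.0, 1.0) == 0.0 and log_cosh(0.0, 0.0) == 0.0
assert abs(eps_logistic(0.0, 0.0, 0.5)) < 1e-12
```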
The rationale behind both Remark 6 and Remark 7 is described at the end of Appendix C.2.
Proposition 8. There exists an O(((k + 1)n)²) run-time implementation of Ada-VAW.
The run-time of O(n²) is larger than the O(n log n) run-time of the more specialized algorithm of [Baby and Wang, 2019] for k = 0. This is due to the more complex structure of the higher order CDJV wavelets, which invalidates their trick of updating the Haar wavelet coefficients in amortized O(1) time.
5 Extensions
In this section, we discuss potential applications of the proposed algorithm that broaden its applicability to several interesting use cases.
5.1 Optimality for Higher Order Sobolev and Hölder Classes
So far we have been dealing with total variation classes, which can be thought of as bounding the ℓ_1-norm of the (k + 1)th order derivatives. A natural question to ask is: how does Ada-VAW behave under smoothness measured in other norms, e.g., the ℓ_2-norm and the ℓ_∞-norm? Following [Tibshirani, 2014], we define the higher order discrete Sobolev class S^{k+1}(C′_n) and discrete Hölder class H^{k+1}(L′_n) as follows.
S^{k+1}(C′_n) = {θ_{1:n} : n^k ‖D^{k+1} θ_{1:n}‖_2 ≤ C′_n},   H^{k+1}(L′_n) = {θ_{1:n} : n^k ‖D^{k+1} θ_{1:n}‖_∞ ≤ L′_n},
where k ≥ 0. These classes feature sequences that are spatially more regular than those in the higher order TV^k class. It is well known (see, e.g., [Gyorfi et al., 2002]) that the following embedding holds:

H^{k+1}(C_n / n) ⊆ S^{k+1}(C_n / √n) ⊆ TV^k(C_n).
Here C_n/√n and C_n/n are, respectively, the maximal radii of a Sobolev ball and a Hölder ball enclosed within a TV^k(C_n) ball. Hence we have the following corollary.
Corollary 9. Assume the observation model of Theorem 3 and that θ_{1:n} ∈ S^{k+1}(C′_n). If β = 24 + 8 log(8/δ)/log(n), then with probability at least 1 − δ, Ada-VAW achieves a dynamic regret of Õ( n^{2/(2k+3)} [C′_n]^{2/(2k+3)} ).
It turns out that this is the optimal rate for the Sobolev classes, even in the easier, offline non-parametric regression setting [Gyorfi et al., 2002]. Since a Hölder class can be sandwiched between two Sobolev balls with the same minimax rates [see, e.g., Gyorfi et al., 2002], this also implies adaptive optimality for the Hölder class. We emphasize that Ada-VAW does not need to know the parameters C_n, C′_n or L′_n, which implies that it achieves the smallest error permitted by whichever norm best captures the smoothness structure of the unknown sequence θ_{1:n}.
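For completeness, here is a short derivation of the embedding chain above (our own sketch; the paper cites [Gyorfi et al., 2002] for the full statement). Writing v = D^{k+1} θ_{1:n} ∈ R^{n−k−1} and using the standard norm inequalities ‖v‖_1 ≤ √n ‖v‖_2 ≤ n ‖v‖_∞:

```latex
\theta_{1:n} \in \mathcal{H}^{k+1}\!\left(\tfrac{C_n}{n}\right)
  \;\Longrightarrow\; n^k\|v\|_2 \le \sqrt{n}\, n^k\|v\|_\infty \le \tfrac{C_n}{\sqrt{n}}
  \;\Longrightarrow\; \theta_{1:n} \in \mathcal{S}^{k+1}\!\left(\tfrac{C_n}{\sqrt{n}}\right),
\qquad
n^k\|v\|_1 \le \sqrt{n}\, n^k\|v\|_2 \le C_n
  \;\Longrightarrow\; \theta_{1:n} \in TV^{k}(C_n).
```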
5.2 Optimality for the case of Exact Sparsity
Next, we consider the performance of Ada-VAW on sequences satisfying an ℓ_0-(pseudo)norm measure of smoothness, defined as

E^{k+1}(J_n) = {θ_{1:n} : ‖D^{k+1} θ_{1:n}‖_0 ≤ J_n, ‖θ_{1:n}‖_∞ ≤ B}.

This class captures sequences that have at most J_n jumps in their (k + 1)th order differences, which covers (modulo the boundedness) kth order discrete splines [see, e.g., Schumaker, 2007, Chapter 8.5] with exactly J_n knots, and arbitrary piecewise polynomials with O(J_n/k) polynomial pieces.
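A quick numerical illustration of the class E^{k+1}(J_n) (knot positions and jump values are arbitrary demo choices of ours; the ‖θ‖_∞ ≤ B condition is not enforced in this toy):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, Jn = 512, 1, 5

# Discrete spline: integrate a Jn-sparse integer "jump" sequence (k+1) times.
jumps = np.zeros(n, dtype=np.int64)
knots = rng.choice(np.arange(k + 1, n), size=Jn, replace=False)
jumps[knots] = rng.integers(1, 4, size=Jn)
theta = jumps.copy()
for _ in range(k + 1):
    theta = np.cumsum(theta)

# D^{k+1} inverts the repeated cumulative sums, recovering exactly Jn knots:
print(np.count_nonzero(np.diff(theta, n=k + 1)))   # -> 5
```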
The techniques developed in this paper allow us to establish the following performance guarantee for Ada-VAW when applied to sequences in this family.
Theorem 10. Let y_t = θ_{1:n}[t] + ε_t for t = 1, . . . , n, where the ε_t are iid sub-gaussian with parameter σ², ‖D^{k+1} θ_{1:n}‖_0 ≤ J_n, |θ_{1:n}[t]| ≤ B and J_n ≥ 1. If β = 24 + 8 log(8/δ)/log(n), then with probability at least 1 − δ, Ada-VAW achieves a dynamic regret of Õ(J_n), where Õ hides polynomial factors of log(n) and log(1/δ).
We also establish an information-theoretic lower bound that applies to all algorithms.
Proposition 11. Under the interaction model in Figure 1, the minimax dynamic regret for forecasting sequences in E^{k+1}(J_n) is Ω(J_n).
Remark 12. Theorem 10 and Proposition 11 imply that Ada-VAW is optimal (up to logarithmic factors) for the sequence family E^{k+1}(J_n). It is noteworthy that Ada-VAW is adaptive in J_n, so it essentially performs as well as an oracle that knows in advance how many knots are needed to represent the input sequence as a discrete spline and where they are (which leaves only the J_n polynomials to be fitted).
6 Conclusion
In this paper, we considered the problem of forecasting TV^k bounded sequences and proposed the first efficient algorithm – Ada-VAW – that is adaptively minimax optimal. We also discussed the adaptive optimality of Ada-VAW in various parameters and other function classes. By establishing strong connections between the locally adaptive nonparametric regression literature and the adaptive online learning literature in a concrete problem, this paper could serve as a stepping stone for future exchanges of ideas between the research communities, and hopefully spark new theory and practical algorithms.
Acknowledgment
The research is partially supported by a start-up grant from UCSB CS department, NSF Award #2029626 and generous gifts from Adobe and Amazon Web Services.
Broader Impact
1. Who may benefit from the research? This work can be applied to the task of estimating trends in time series forecasting. For example, financial firms can use it for stock market predictions, the distribution sector can use it for inventory planning, meteorological observatories can use it for weather forecasts, and the health and planning sectors can forecast the spread of contagious diseases.
2. Who may be put at a disadvantage? Not applicable.
3. What are the consequences of failure of the system? There is no system to speak of, but failure of the strategy can lead to financial losses for the firms deploying it for forecasting. Under the assumptions stated in the paper, though, the technical results are formally proven and come with the stated mathematical guarantees.
4. Method leverages the biases in data? Not applicable. | 1. What is the focus and contribution of the paper regarding online trend estimation?
2. What are the strengths of the proposed method, particularly in terms of its generalization and theoretical analysis?
3. Do you have any concerns or questions about the method's extension to more complex problems?
4. How does the reviewer assess the clarity and quality of the paper's content? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper proposes a new method for online trend estimation. The method is designed to operate in a streaming data setting, where it sequentially predicts the elements of a time series, given observations of all previous points. The objective is to minimise the cumulative error in the predictions over some fixed horizon. The authors' proposed method operates in a framework similar to that of Baby and Wang (2019, NeurIPS). In that paper, the time series being predicted should have bounded total variation difference. In the present paper, this restriction is generalised and the authors consider time series with bounds on higher order total variation - i.e. they difference the time series k>1 times and assume a bound on the magnitude of the resulting sequence. In line 246 the authors state this is equivalent to a bound on the l_1 norm of the (k+1)^{th} derivative. The proposed method predicts the next observation as an exponentially weighted function of previous data. Rather than use all of the previously observed data, the algorithm incorporates a wavelet based change detection method to detect substantial non-stationarities and limits how far back in the data to go based on this. This approach is similar in structure to the algorithm in Baby and Wang (2019), but the wavelet based procedure is different. The first difference is in the way that sequences of length 2^p (p an integer) are constructed, as required for the wavelet transform: the present paper creates two overlapping sequences instead of padding with extra 0s. The second difference is the threshold for declaring a change, which is optimised for the more general framework of the present paper. The authors prove an optimal order bound on the accumulated regret of the approach, and discuss its extensions to more complex problems: alternative norm bounds on the (k+1)^{th} derivative, and the setting of sparse changes.
Strengths
The proposed method is an interesting generalisation of the ARROWS algorithm from Baby and Wang (2019). The theory is, as far as I can tell, correct and guarantees that the authors have an order-optimal approach. The authors present the theoretical contributions nicely alongside those of other work to make their contribution in this regard clear. I am not the most familiar with this area and as such find it a little difficult to comment on the novelty of the contribution - it seems to make material improvements and extensions to the approach of Baby and Wang (2019), and the proof seems to use new ideas related to the particular wavelet transform.
Weaknesses
As I will discuss below, I think the principal weaknesses of the paper are in the clarity of its exposition.
NIPS | Title
Adaptive Online Estimation of Piecewise Polynomial Trends
Abstract
We consider the framework of non-stationary stochastic optimization [Besbes et al., 2015] with squared error losses and noisy gradient feedback, where the dynamic regret of an online learner against a time varying comparator sequence is studied. Motivated by the theory of non-parametric regression, we introduce a new variational constraint that enforces the comparator sequence to belong to a discrete kth order Total Variation ball of radius C_n. This variational constraint models comparators that have piecewise polynomial structure, which has many relevant practical applications [Tibshirani, 2014]. By establishing connections to the theory of wavelet based non-parametric regression, we design a polynomial time algorithm that achieves the nearly optimal dynamic regret of Õ(n^{1/(2k+3)} C_n^{2/(2k+3)}). The proposed policy is adaptive to the unknown radius C_n. Further, we show that the same policy is minimax optimal for several other non-parametric families of interest.
1 Introduction
In time series analysis, estimating and removing the trend are often the first steps taken to make the sequence “stationary”. The non-parametric assumption that the underlying trend is a piecewise polynomial or a spline [de Boor, 1978] is one of the most popular choices, especially when we do not know where the “change points” are or how many of them are appropriate. The higher order Total Variation (see Assumption (A3)) of the trend captures, in some sense, both the sparsity and the intensity of changes in the underlying dynamics. A non-parametric regression method that penalizes this quantity – trend filtering [Tibshirani, 2014] – enjoys superior local adaptivity over traditional methods such as the Hodrick-Prescott filter [Hodrick and Prescott, 1997]. However, trend filtering is an offline algorithm, which limits its applicability to the inherently online time series forecasting problem. In this paper, we are interested in designing an online forecasting strategy that can essentially match the performance of the offline methods for trend estimation, hence allowing us to perform time series forecasting on-the-fly. In particular, our problem setup (see Figure 1) and algorithm are applicable to all online variants of the trend filtering problem, such as predicting stock prices, server payloads, sales, etc.
Let us describe the notation used throughout the paper. All vectors and matrices are written in bold face letters. For a vector x ∈ R^m, x[i] or x_i denotes its value at the ith coordinate, and x[a : b] or x_{a:b} denotes the vector [x[a], . . . , x[b]]. ‖·‖_p denotes the finite dimensional ℓ_p norm, and ‖x‖_0 is the number of non-zero coordinates of a vector x. [n] represents the set {1, . . . , n}. D^i ∈ R^{(n−i)×n} denotes the discrete difference operator of order i, defined as in [Tibshirani, 2014] and reproduced
below.
D^1 = [ −1   1   0  · · ·   0   0
          0  −1   1  · · ·   0   0
          ⋮
          0   0   0  · · ·  −1   1 ]  ∈ R^{(n−1)×n},

and D^i = D̃^1 · D^{i−1} for all i ≥ 2, where D̃^1 is the (n − i) × (n − i + 1) truncation of D^1.
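A direct transcription of this recursion into code (a sketch; the helper name diff_op is ours):

```python
import numpy as np

def diff_op(order, n):
    """The discrete difference operator D^order in R^{(n-order) x n},
    built by the recursion D^i = tilde(D^1) @ D^{i-1}."""
    def d1(rows, cols):                          # bidiagonal [-1, 1] block
        return np.eye(rows, cols, k=1) - np.eye(rows, cols)
    M = d1(n - 1, n)
    for i in range(2, order + 1):
        M = d1(n - i, n - i + 1) @ M
    return M

theta = np.random.default_rng(0).standard_normal(10)
assert np.allclose(diff_op(3, 10) @ theta, np.diff(theta, n=3))
```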
The theme of this paper builds on the non-parametric online forecasting model developed in [Baby and Wang, 2019]. We consider a sequential n step interaction process between an agent and an adversary as shown in Figure 1.
A forecasting strategy S is defined as an algorithm that outputs a prediction S(t) at time t based only on the information available after the completion of time t − 1. The random variables ε_t for t ∈ [n] are independent and subgaussian with parameter σ². This sequential game can be regarded as an online version of the non-parametric regression setup well studied in the statistics community.
In this paper, we consider the problem of forecasting sequences that obey n^k ‖D^{k+1} θ_{1:n}‖_1 ≤ C_n for some k ≥ 0 and ‖θ_{1:n}‖_∞ ≤ B. The constraint n^k ‖D^{k+1} θ_{1:n}‖_1 ≤ C_n has been widely used in the rich literature on non-parametric regression. For example, the offline problem of estimating sequences obeying such a higher order difference constraint from noisy labels under squared error loss is studied in [Mammen and van de Geer, 1997, Donoho et al., 1998, Tibshirani, 2014, Wang et al., 2016, Sadhanala et al., 2016, Guntuboyina et al., 2017], to cite a few. We aim to design forecasters whose predictions are based only on past history and yet perform as well as a batch estimator that sees all the observations ahead of time.
Scaling of n^k. The family {θ_{1:n} : n^k ‖D^{k+1} θ_{1:n}‖_1 ≤ C_n} may appear alarmingly restrictive for a constant C_n due to the scaling factor n^k, but this is actually a natural construct. The continuous TV^k distance of a function f : [0, 1] → R is defined as ∫_0^1 |f^{(k+1)}(x)| dx, where f^{(k+1)} is the (k + 1)th order (weak) derivative. A sequence can be obtained by sampling the function at x_i = i/n, i ∈ [n]. Discretizing the integral yields the TV^k distance of this sequence as n^k ‖D^{k+1} θ_{1:n}‖_1. Thus, the n^k ‖D^{k+1} θ_{1:n}‖_1 term can be interpreted as the discrete approximation to the continuous higher order TV distance of a function. See Figure 2 for an illustration of the case k = 1.
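The discretization claim can be checked numerically (f and k below are arbitrary test choices of ours): for f(x) = sin(2πx) and k = 1, the continuous distance is ∫_0^1 |f″(x)| dx = 8π, and the discrete quantity converges to it:

```python
import numpy as np

f = lambda x: np.sin(2 * np.pi * x)
for n in (100, 1000, 10000):
    theta = f(np.arange(n) / n)
    # n^1 * ||D^2 theta||_1 approximates the integral of |f''|
    print(n, n * np.abs(np.diff(theta, n=2)).sum())   # -> 8*pi ~ 25.13
```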
Non-stationary Stochastic Optimization. The setting above can also be viewed under the framework of non-stationary stochastic optimization as studied in [Besbes et al., 2015, Chen et al., 2018b], with squared error loss and noisy gradient feedback. At each time step, the adversary chooses a loss function f_t(x) = (x − θ_t)². Since ∇f_t(x) = 2(x − θ_t), the feedback ∇̃f_t(x) = 2(x − y_t) constitutes an unbiased estimate of the gradient ∇f_t(x). [Besbes et al., 2015, Chen et al., 2018b] quantify the performance of a forecasting strategy S in terms of the dynamic regret, as follows.
R_dynamic(S, θ_{1:n}) := E[ Σ_{t=1}^{n} f_t(S(t)) ] − Σ_{t=1}^{n} inf_{x_t} f_t(x_t) = E[ Σ_{t=1}^{n} (S(t) − θ_{1:n}[t])² ],   (1)
where the last equality follows from the fact that when f_t(x) = (x − θ_{1:n}[t])², inf_x (x − θ_{1:n}[t])² = 0. The expectation is taken over the randomness in the noisy gradient feedback and in the agent's forecasting strategy. It is impossible to achieve sublinear dynamic regret against arbitrary ground truth sequences. However, if the sequence of minimizers of the loss functions f_t(x) = (x − θ_t)² obeys a path variational constraint, then we can parameterize the dynamic regret as a function of the path length, which could be sublinear when the path length is sublinear. Typical variational constraints considered in existing work include Σ_t |θ_t − θ_{t−1}|, Σ_t |θ_t − θ_{t−1}|², and (Σ_t ‖f_t − f_{t−1}‖_p^q)^{1/q} [see Baby and Wang, 2019, for a review]. These are all useful in their respective contexts, but do not capture higher order smoothness.
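As a quick illustration of Eq. (1), here is a sketch that evaluates the empirical dynamic regret of a naive last-label forecaster on a noisy smooth trend (all numbers are arbitrary demo choices):

```python
import numpy as np

def dynamic_regret(preds, theta):
    """Empirical version of Eq. (1) for squared error losses."""
    return np.sum((preds - theta) ** 2)

rng = np.random.default_rng(1)
n, sigma = 1000, 0.1
theta = np.linspace(0.0, 1.0, n) ** 2          # a smooth comparator path
y = theta + sigma * rng.standard_normal(n)
preds = np.concatenate(([0.0], y[:-1]))        # predict the previous label
print(dynamic_regret(preds, theta))            # ~ n * sigma^2 here
```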
The purpose of this work is to connect ideas from batch non-parametric regression to the framework of online stochastic optimization, and to define a natural family of higher order variational functionals of the form ‖D^{k+1} θ_{1:n}‖_1 for tracking a comparator sequence with piecewise polynomial structure. To the best of our knowledge, such higher order path variationals for k ≥ 1 are vastly unexplored in the domain of non-stationary stochastic optimization. In this work, we take the first steps in introducing such variational constraints to online non-stationary stochastic optimization and exploiting them to obtain sub-linear dynamic regret.
2 Summary of results
In this section, we summarize the assumptions and main results of the paper.
Assumptions. We start by listing the assumptions made and provide justifications for them.
(A1) The time horizon is known to be n.
(A2) The parameter σ² of the subgaussian noise in the observations is a known fixed positive constant.
| 1. What is the main contribution of the paper regarding non-stationary stochastic optimization?
2. What are the strengths of the proposed variational constraint and its ability to capture changes in underlying dynamics?
3. How does the reviewer assess the combination of different components from various communities and their potential usefulness for other studies?
4. What are the weaknesses of the paper regarding the scope of the results and the limitation to square loss?
5. Does the reviewer have any questions or concerns about the scalability of the proposed method? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper considers non-stationary stochastic optimization with squared error losses and noisy gradient feedback. The main proposal is a new variational constraint that generalizes the previous one used for dynamic regret analysis. The new constraint captures both the sparsity and the intensity of changes in the underlying dynamics. On the algorithm side, several seemingly disparate components (the VAW forecaster and the CDJV wavelet construction) are combined, which is of interest to both the online learning and statistical learning communities.
Strengths
+ The new variational budget strictly generalizes that of previous studies [Besbes et al., 2015; Baby and Wang, 2019]. It can capture both the sparsity and intensity of changes in underlying dynamics. So in scenarios that indeed satisfy the piecewise stationarity assumptions, the proposed algorithm will enjoy the desired guarantees, both empirically and theoretically. + Several new pieces from the offline statistical community are combined and introduced to the online nonparametric community (to the best of my knowledge), and the analysis is non-trivial and likely useful for other studies. + The justification and the empirical demonstration of the scaling of n^k are nice; both of them ease my concerns about the scaling issue.
Weaknesses
As far as I can see, the results of this paper only hold for the square loss, because the algorithm and analysis heavily rely on access to an unbiased gradient estimate. This access is due to the special form of the square loss (as shown in line 60). So my question is: can the results be generalized to more general function classes? Is there any lower bound justification for the optimality of the results? It seems that the minimax lower bound of Proposition 11 is only for a special case of k=0.
NIPS | Title
Finding Bipartite Components in Hypergraphs
Abstract
Hypergraphs are important objects to model ternary or higher-order relations of objects, and have a number of applications in analysing many complex datasets occurring in practice. In this work we study a new heat diffusion process in hypergraphs, and employ this process to design a polynomial-time algorithm that approximately finds bipartite components in a hypergraph. We theoretically prove the performance of our proposed algorithm, and compare it against the previous state-of-the-art through extensive experimental analysis on both synthetic and real-world datasets. We find that our new algorithm consistently and significantly outperforms the previous state-of-the-art across a wide range of hypergraphs.
1 Introduction
Spectral methods study the efficient matrix representation of graphs and datasets, and apply the algebraic properties of these matrices to design efficient algorithms. Over the last three decades, spectral methods have become one of the most powerful techniques in machine learning, and have had comprehensive applications in a wide range of domains, including clustering [24, 31], image and video segmentation [26], and network analysis [25], among many others. While the success of this line of research is based on our rich understanding of Laplacian operators of graphs, there has been a sequence of very recent work studying non-linear Laplacian operators for more complex objects (i.e., hypergraphs) and employing these non-linear operators to design hypergraph algorithms with better performance.
1.1 Our contribution
In this work, we study the non-linear Laplacian-type operators for hypergraphs, and employ such an operator to design a polynomial-time algorithm for finding bipartite components in hypergraphs. The main contribution of our work is as follows:
First of all, we introduce and study a non-linear Laplacian-type operator J_H for any hypergraph H. While we formally define the operator J_H in Section 3, one can informally think of J_H as a variant of the standard non-linear hypergraph Laplacian L_H studied in [5, 20, 27]; this variation is needed to study the other end of the spectrum of L_H. We present a polynomial-time algorithm that finds some eigenvalue λ and its associated eigenvector of J_H. Our algorithm is based on the following heat diffusion process: starting from an arbitrary vector f_0 ∈ R^n that describes the initial heat distribution among the vertices, we use f_0 to construct some 2-graph¹ G_0, and use the diffusion process in G_0 to represent the one in the original hypergraph H while updating f_t; this continues until the time at which G_0 can no longer appropriately simulate the diffusion process in H. At this point, we use the currently maintained f_t to construct another 2-graph G_t to simulate the diffusion process in H, and continue updating f_t. This process repeats until the vector f_t converges; see Figure 1 for an illustration. We theoretically prove that this heat diffusion process is unique and well-defined, and that our maintained vector f_t converges to some eigenvector of J_H. While this result is quite interesting on its own and forms the basis of our second result, our analysis shows that, for certain hypergraphs H, both the operator J_H and L_H can have ω(1) eigenvectors. This result answers an open question in [5], which asks whether L_H could have more than 2 eigenvectors².

¹ Throughout the paper, we refer to non-hyper graphs as 2-graphs. Similarly, we always use L_H to refer to the non-linear hypergraph Laplacian operator, and L_G to the standard 2-graph Laplacian.
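The primitive underlying this simulation is a heat-diffusion step on the current 2-graph approximation. The sketch below is our own simplified stand-in (explicit Euler on a random-walk normalization of J_G; the precise hypergraph process is defined in Section 3):

```python
import numpy as np

def diffusion_step(A, f, h=0.01):
    """One explicit-Euler step of df/dt = -(I + D^{-1} A) f on a 2-graph
    with adjacency matrix A (assumes no isolated vertices). This mirrors
    J_G = D_G + A_G up to normalization and is only an illustrative stand-in
    for the hypergraph process of the paper."""
    d = A.sum(axis=1)
    return f - h * (f + (A @ f) / d)
```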
Secondly, we present a polynomial-time algorithm that, given a hypergraph H = (V_H, E_H, w) as input, finds disjoint subsets L, R ⊂ V_H that are highly connected with each other. The key to our algorithm is a Cheeger-type inequality for hypergraphs that relates the spectrum of J_H to the bipartiteness ratio of H, an analog of the quantity β_G studied in [28] for 2-graphs. Both the design and the analysis of our algorithm are inspired by [28]; however, our analysis is much more involved because of the non-linear operator J_H and hyperedges of different ranks. Our second result alone answers an open question posed by [33], which asks whether there is a hypergraph operator that satisfies a Cheeger-type inequality for bipartiteness.
The significance of our work is further demonstrated by extensive experimental studies of our algorithms on both synthetic and real-world datasets. In particular, on the well-known Penn Treebank corpus, which contains 49,208 sentences and over 1 million words, our purely unsupervised algorithm is able to separate a significant fraction of verbs from non-verbs in its two output clusters. Hence, we believe that our work could have many applications in unsupervised learning for hypergraphs. Using the publicly available code of our implementation, we welcome the reader to explore further applications of our work on even more diverse datasets.
1.2 Related work
The spectral theory of hypergraphs using non-linear operators is introduced in [5] and generalised in [33]. The operator they describe is applied to hypergraph clustering in [20, 27]. There are many approaches for finding clusters in hypergraphs by constructing a 2-graph which approximates the hypergraph and using a 2-graph clustering algorithm directly [7, 19, 35]. Another approach for hypergraph clustering is based on tensor spectral decomposition [15, 18]. [21, 23, 36] consider the problem of finding densely connected clusters in 2-graphs. Heat diffusion processes are used for clustering 2-graphs in [10, 17]. [14] studies a different, flow-based diffusion process for finding clusters in 2-graphs, and [13] generalises this to hypergraphs. We note that all of these methods solve a different problem to ours, and cannot be compared directly. Our algorithm is related to the hypergraph max cut problem, for which the state-of-the-art approximation algorithm is given by [34]. [28] introduces graph bipartiteness and gives an approximation algorithm for the 2-graph max cut problem. To the best of our knowledge, we are the first to generalise this notion of bipartiteness to hypergraphs. Finally, we note that there have been recent improvements in the time complexity for solving linear programs [11, 29], although we do not take these into account in our analysis since the goal of this paper is not to obtain the fastest algorithm possible.
2We underline that, while the operator LG of a 2-graph G has n eigenvalues, the number of eigenvalues of LH is unknown because of its non-linearity. As answering this open question isn't the main point of our work, we refer the reader to Appendix C for detailed discussion.
2 Notation
2-graphs. Throughout the paper, we call a non-hyper graph a 2-graph [4, 8]. We always use G = (VG, EG, w) to express a 2-graph, in which every edge e ∈ EG consists of two vertices in VG, and we let n = |VG|. The degree of any vertex u ∈ VG is defined by $d_G(u) \triangleq \sum_{v \in V_G} w(u, v)$, and for any S ⊆ V the volume of S is defined by $\mathrm{vol}_G(S) \triangleq \sum_{u \in S} d_G(u)$. Following [28], the bipartiteness ratio of any disjoint sets L, R ⊂ VG is defined by
$$\beta_G(L, R) \triangleq \frac{2w(L, L) + 2w(R, R) + w(L \cup R, V \setminus (L \cup R))}{\mathrm{vol}_G(L \cup R)},$$
where $w(A, B) = \sum_{(u,v) \in A \times B} w(u, v)$, and we further define $\beta_G \triangleq \min_{S \subset V} \beta_G(S, V \setminus S)$. Notice that a low βG-value means that there is a dense cut between L and R, and there is a sparse cut between L ∪ R and V \ (L ∪ R). In particular, βG = 0 implies that (L, R) forms a bipartite component of G. We use DG to denote the n × n diagonal matrix whose entries are (DG)uu = dG(u), for all u ∈ V. Moreover, we use AG to denote the n × n adjacency matrix whose entries are (AG)uv = w(u, v), for all u, v ∈ V. The Laplacian matrix is defined by $L_G \triangleq D_G - A_G$. In addition, we define $J_G \triangleq D_G + A_G$, and its normalised version $\mathcal{J}_G \triangleq D_G^{-1/2} J_G D_G^{-1/2}$. For any real and symmetric matrix A, the eigenvalues of A are denoted by λ1(A) ≤ · · · ≤ λn(A), and the eigenvector associated with λi(A) is denoted by fi(A) for 1 ≤ i ≤ n.
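To make these definitions concrete, here is a minimal numpy sketch of the bipartiteness ratio and the normalised operator $\mathcal{J}_G$; the code and function names are ours for illustration and are not taken from the paper's implementation.

```python
import numpy as np

def bipartiteness_2graph(A, L, R):
    """Bipartiteness ratio beta_G(L, R) for a weighted adjacency matrix A.

    L and R are disjoint lists of vertex indices; this is a direct
    transcription of the definition above, not optimised code.
    """
    n = A.shape[0]
    S = list(L) + list(R)
    rest = [v for v in range(n) if v not in set(S)]

    def w(X, Y):
        # w(X, Y) = sum of w(u, v) over pairs (u, v) in X x Y
        return A[np.ix_(np.asarray(X, int), np.asarray(Y, int))].sum()

    numerator = 2 * w(L, L) + 2 * w(R, R) + w(S, rest)
    volume = A.sum(axis=1)[S].sum()          # vol_G(L u R)
    return numerator / volume

def normalised_J(A):
    """The operator D^{-1/2} (D + A) D^{-1/2} of a weighted 2-graph."""
    deg = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    return D_inv_sqrt @ (np.diag(deg) + A) @ D_inv_sqrt
```

On a bipartite component, for instance the two sides of an unweighted 4-cycle, bipartiteness_2graph returns 0, matching the remark above.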
Hypergraphs. Let H = (VH, EH, w) be a hypergraph with n = |VH| vertices and weight function $w : E_H \to \mathbb{R}^+$. For any vertex v ∈ VH, the degree of v is defined by $d_H(v) \triangleq \sum_{e \in E_H} w(e) \cdot I[v \in e]$, where I[X] = 1 if event X holds and I[X] = 0 otherwise. The rank of edge e ∈ EH is the total number of vertices in e. For any A, B ⊂ VH, the cut value between A and B is defined by
$$w(A, B) \triangleq \sum_{e \in E_H} w(e) \cdot I[e \cap A \neq \emptyset \wedge e \cap B \neq \emptyset].$$
Sometimes, we are required to analyse the weights of edges that intersect some vertex sets and not others. To this end, we define for any A, B, C ⊆ VH that
$$w(A, B \mid C) \triangleq \sum_{e \in E_H} w(e) \cdot I[e \cap A \neq \emptyset \wedge e \cap B \neq \emptyset \wedge e \cap C = \emptyset],$$
and we sometimes write $w(A \mid C) \triangleq w(A, A \mid C)$ for simplicity. Generalising the notion of the bipartiteness ratio of a 2-graph, the bipartiteness ratio of sets L, R in a hypergraph H is defined by
$$\beta_H(L, R) \triangleq \frac{2w(L \mid \overline{L}) + 2w(R \mid \overline{R}) + w(L, \overline{L \cup R} \mid R) + w(R, \overline{L \cup R} \mid L)}{\mathrm{vol}(L \cup R)},$$
where $\overline{S} = V_H \setminus S$, and we define $\beta_H \triangleq \min_{S \subset V} \beta_H(S, V \setminus S)$. For any hypergraph H and f ∈ Rn, we define the discrepancy of an edge e ∈ EH with respect to f as
$$\Delta_f(e) \triangleq \max_{u \in e} f(u) + \min_{v \in e} f(v).$$
For any non-linear operator $J : \mathbb{R}^n \to \mathbb{R}^n$, we say that (λ, f) is an eigen-pair if and only if Jf = λf, and note that in general a non-linear operator can have any number of eigenvalues and eigenvectors.
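The hypergraph quantities translate just as directly. The sketch below (ours) assumes a hypergraph stored as a list of (vertex set, weight) pairs, which is our own lightweight representation rather than the paper's data structure.

```python
def degree(hyperedges, v):
    """d_H(v): total weight of the hyperedges containing v."""
    return sum(we for e, we in hyperedges if v in e)

def beta_H(vertices, hyperedges, L, R):
    """Hypergraph bipartiteness ratio beta_H(L, R) as defined above."""
    V, L, R = set(vertices), set(L), set(R)
    S = L | R

    def w(A, B, C):
        # w(A, B | C): weight of edges meeting both A and B while avoiding C
        return sum(we for e, we in hyperedges
                   if e & A and e & B and not e & C)

    numerator = (2 * w(L, L, V - L) + 2 * w(R, R, V - R)
                 + w(L, V - S, R) + w(R, V - S, L))
    volume = sum(we * len(e & S) for e, we in hyperedges)  # vol(L u R)
    return numerator / volume

def discrepancy(e, f):
    """Delta_f(e) = max_{u in e} f(u) + min_{v in e} f(v)."""
    return max(f[u] for u in e) + min(f[v] for v in e)
```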
It is important to remember that throughout the paper, we always use the letter H to represent a hypergraph, and G to represent a 2-graph.
Clique reduction. The clique reduction of a hypergraph H is a 2-graph G such that VG = VH and for every edge e ∈ EH , G contains a clique on the vertices in e with edge weights 1/(re − 1) where re is the rank of the edge e. The clique reduction is a common tool for designing hypergraph algorithms [1, 7, 35], and for this reason we use it as a baseline algorithm in this paper. We note that hypergraph algorithms based on the clique reduction often perform less well when there are edges with large rank in the hypergraph. Specifically, in Appendix C we use two r-uniform hypergraphs as examples to show that no matter how we weight the edges in the clique reduction, some cuts cannot be approximated better than a factor of O(r). This is one of the main reasons to develop spectral theory for hypergraphs through heat diffusion processes [5, 27, 33].
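For concreteness, a small sketch (ours) of the clique reduction with the 1/(r_e − 1) weighting; the dictionary-of-edge-weights output is an assumed format for illustration.

```python
from collections import defaultdict
from itertools import combinations

def clique_reduction(hyperedges):
    """Weighted 2-graph edges of the clique reduction of a hypergraph.

    Every hyperedge e of rank r_e contributes a clique whose edges each
    receive weight w(e) / (r_e - 1).
    """
    adj = defaultdict(float)                  # frozenset({u, v}) -> weight
    for e, we in hyperedges:
        r = len(e)
        for u, v in combinations(sorted(e), 2):
            adj[frozenset((u, v))] += we / (r - 1)
    return dict(adj)
```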
3 Diffusion process and the algorithm
In this section, we propose a new diffusion process in hypergraphs and use it to design a polynomial-time algorithm for finding bipartite components in hypergraphs. We first study 2-graphs to give some intuition, and then generalise to hypergraphs and describe our algorithm. Finally, we sketch some of the detailed analysis which proves that the diffusion process is well defined.
3.1 The diffusion process in 2-graphs
To discuss the intuition behind our designed diffusion process, let us look at the case of 2-graphs. Let G = (V, E, w) be a 2-graph, and we have for any x ∈ Rn that
$$\frac{x^\top \mathcal{J}_G x}{x^\top x} = \frac{x^\top (I + D_G^{-1/2} A_G D_G^{-1/2}) x}{x^\top x}.$$
By setting $x = D_G^{1/2} y$, we have that
$$\frac{x^\top \mathcal{J}_G x}{x^\top x} = \frac{y^\top D_G^{1/2} \mathcal{J}_G D_G^{1/2} y}{y^\top D_G y} = \frac{y^\top (D_G + A_G) y}{y^\top D_G y} = \frac{\sum_{\{u,v\} \in E_G} w(u, v) \cdot (y(u) + y(v))^2}{\sum_{u \in V_G} d_G(u) \cdot y(u)^2}. \qquad (1)$$
It is easy to see that $\lambda_1(\mathcal{J}_G) = 0$ if G is bipartite, and it is known that $\lambda_1(\mathcal{J}_G)$ and its corresponding eigenvector $f_1(\mathcal{J}_G)$ are closely related to two densely connected components of G [28]. Moreover, similar to the heat equation for graph Laplacians LG, if DGft ∈ Rn is some measure on the vertices of G, then a diffusion process defined by the differential equation
$$\frac{\mathrm{d} f_t}{\mathrm{d} t} = -D_G^{-1} J_G f_t \qquad (2)$$
will converge to the eigenvector associated with the minimum eigenvalue of $D_G^{-1} J_G$, and can be employed to find two densely connected components of the underlying 2-graph.3
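As a sanity check on equation (2), the following sketch (ours; the step size, iteration count, and per-step renormalisation are our own choices) simulates the diffusion with explicit Euler steps.

```python
import numpy as np

def diffuse_2graph(A, steps=2000, eps=0.01, seed=0):
    """Simulate df/dt = -D^{-1} (D + A) f for a weighted adjacency matrix A."""
    deg = A.sum(axis=1)
    J = np.diag(deg) + A
    f = np.random.default_rng(seed).standard_normal(A.shape[0])
    for _ in range(steps):
        f = f - eps * (J @ f) / deg          # explicit Euler step
        f = f / np.linalg.norm(f)            # keep f from vanishing
    return f      # approximately the bottom eigenvector of D^{-1} J
```

On a bipartite 2-graph the returned vector has opposite signs on the two sides, which is how the eigenvector is used for clustering below.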
3.2 The hypergraph diffusion and our algorithm
Now we study whether one can construct a new hypergraph operator JH which generalises the diffusion in 2-graphs to hypergraphs. First of all, we focus on a fixed time t with measure vector DHft ∈ Rn and ask whether we can follow (2) and define the rate of change
$$\frac{\mathrm{d} f_t}{\mathrm{d} t} = -D_H^{-1} J_H f_t$$
so that the diffusion can proceed for an infinitesimal time step. Our intuition is that the rate of change due to some edge e ∈ EH should involve only the vertices in e with the maximum or minimum value in the normalised measure ft. To formalise this, for any edge e ∈ EH , we define
$$S_f(e) \triangleq \{v \in e : f_t(v) = \max_{u \in e} f_t(u)\} \qquad \text{and} \qquad I_f(e) \triangleq \{v \in e : f_t(v) = \min_{u \in e} f_t(u)\}.$$
3For the reader familiar with the heat diffusion process of 2-graphs (e.g., [9, 17]), we remark that the above-defined process essentially employs the operator JG to replace the Laplacian LG when defining the heat diffusion: through JG, the heat diffusion can be used to find two densely connected components of G.
That is, for any edge e and normalised measure ft, Sf(e) ⊆ e consists of the vertices v of e whose ft(v) values are maximum, and If(e) ⊆ e consists of the vertices v of e whose ft(v) values are minimum. See Figure 2 for an example. Then, applying the JH operator to a vector ft should be equivalent to applying the operator JG for some 2-graph G which we construct by splitting the weight of each hyperedge e ∈ EH between the edges in Sf(e) × If(e). Similar to the case for 2-graphs and (1), for any $x = D_H^{1/2} f_t$ this will give us the quadratic form
$$\frac{x^\top D_H^{-1/2} J_H D_H^{-1/2} x}{x^\top x} = \frac{f_t^\top J_G f_t}{f_t^\top D_H f_t} = \frac{\sum_{\{u,v\} \in E_G} w_G(u, v) \cdot (f_t(u) + f_t(v))^2}{\sum_{u \in V_G} d_H(u) \cdot f_t(u)^2} = \frac{\sum_{e \in E_H} w_H(e) \left( \max_{u \in e} f_t(u) + \min_{v \in e} f_t(v) \right)^2}{\sum_{u \in V_H} d_H(u) \cdot f_t(u)^2},$$
where wG(u, v) is the weight of the edge {u, v} in G, and wH(e) is the weight of the edge e in H . We will show in the proof of Theorem 1 that JH has an eigenvalue of 0 if the hypergraph is 2-colourable4, and that the spectrum of JH is closely related to the hypergraph bipartiteness.
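To illustrate, a short sketch (ours) of Sf(e), If(e) and the hypergraph Rayleigh quotient above, reusing the (vertex set, weight) representation from Section 2.

```python
def S_and_I(e, f):
    """Vertex sets of hyperedge e attaining the max / min value of f."""
    hi, lo = max(f[v] for v in e), min(f[v] for v in e)
    return {v for v in e if f[v] == hi}, {v for v in e if f[v] == lo}

def rayleigh_quotient(hyperedges, f):
    """sum_e w(e) (max_e f + min_e f)^2  /  sum_u d(u) f(u)^2."""
    numerator = sum(we * (max(f[v] for v in e) + min(f[v] for v in e)) ** 2
                    for e, we in hyperedges)
    # sum_u d(u) f(u)^2, regrouped as a sum over edges
    denominator = sum(we * sum(f[v] ** 2 for v in e) for e, we in hyperedges)
    return numerator / denominator
```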
For this reason, we would expect that the diffusion process based on the operator JH can be used to find sets with small hypergraph bipartiteness. However, one needs to be very cautious here as, by the nature of the diffusion process, the values ft(v) of all the vertices v change over time and, as a result, the sets Sf(e) and If(e) that consist of the vertices with the maximum and minimum ft-value might change after an infinitesimal time step; this will prevent the process from continuing. We will discuss this issue in detail through the so-called Diffusion Continuity Condition in Section 3.3. In essence, the diffusion continuity condition ensures that one can always construct a 2-graph G by allocating the weight of each hyperedge e to the edges in Sf(e) × If(e) such that the sets Sf(e) and If(e) will not change in infinitesimal time although ft changes according to $\mathrm{d}f_t/\mathrm{d}t = -D_H^{-1} J_G f_t$. We will also present an efficient
procedure in Section 3.3 to compute the weights of edges in Sf (e)× If (e). All of these guarantee that (i) every 2-graph that corresponds to the hypergraph diffusion process at any time step can be efficiently constructed; (ii) with this sequence of constructed 2-graphs, the diffusion process defined by JH is able to continue until the heat distribution converges. With this, we summarise the main idea of our presented algorithm as follows:
• First of all, we introduce some arbitrary f0 ∈ Rn as the initial diffusion vector, and a step size parameter ε > 0 to discretise the diffusion process. At each step, the algorithm constructs the 2-graph G guaranteed by the diffusion continuity condition, and updates ft ∈ Rn according to the rate of change $\mathrm{d}f_t/\mathrm{d}t = -D_H^{-1} J_G f_t$. The algorithm terminates when ft has converged, i.e., when the ratio between the current Rayleigh quotient $(f_t^\top J_G f_t)/(f_t^\top D_H f_t)$ and the one in the previous time step is bounded by some predefined constant.
• Secondly, similar to many previous spectral graph clustering algorithms (e.g. [3, 27, 28]), the algorithm constructs the sweep sets defined by ft and returns the two sets with the minimum βH-value among all the constructed sweep sets (see the sketch after this list). Specifically, for every 1 ≤ j ≤ n, the algorithm constructs $L_j = \{v_i : |f_t(v_i)| \geq |f_t(v_j)| \wedge f_t(v_i) < 0\}$ and $R_j = \{v_i : |f_t(v_i)| \geq |f_t(v_j)| \wedge f_t(v_i) \geq 0\}$. Then, between the n pairs (Lj, Rj), the algorithm returns the one with the minimum βH-value.
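A minimal sketch (ours) of this two-sided sweep-set step; vertices are ordered by decreasing |ft(v)|, and beta is any callable scoring a pair of vertex sets, for instance the beta_H sketch from Section 2.

```python
def two_sided_sweep(f, beta):
    """Return the pair (L, R) of minimum beta-value over all sweep sets.

    f: dict mapping vertex -> value; beta: callable on a pair of sets.
    """
    order = sorted(f, key=lambda v: -abs(f[v]))   # decreasing |f(v)|
    best, best_val = None, float("inf")
    for j in range(1, len(order) + 1):
        prefix = order[:j]
        L = {v for v in prefix if f[v] < 0}
        R = {v for v in prefix if f[v] >= 0}
        val = beta(L, R)
        if val < best_val:
            best, best_val = (L, R), val
    return best
```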
See Algorithm 1 for the formal description, and its performance is summarised in Theorem 1.
Theorem 1 (Main Result). Given a hypergraph H = (VH, EH, w) and parameter ε > 0, the following holds:
1. There is an algorithm that finds an eigen-pair (λ, f) of the operator JH such that $\lambda \leq \lambda_1(\mathcal{J}_G)$, where G is the clique reduction of H, and the inequality is strict if $\min_{e \in E_H} r_e > 2$, where re is the rank of e. The algorithm runs in poly(|VH|, |EH|, 1/ε) time.
2. Given an eigen-pair (λ, f) of the operator JH, there is an algorithm that constructs the two-sided sweep sets defined on f, and finds sets L and R such that $\beta_H(L, R) \leq \sqrt{2\lambda}$. The algorithm runs in poly(|VH|, |EH|) time.
4Hypergraph H is 2-colourable if there are disjoint sets L, R ⊂ VH such that every edge intersects L and R.
Algorithm 1: FINDBIPARTITECOMPONENTS
Input: Hypergraph H, starting vector f0 ∈ Rn, step size ε > 0
Output: Sets L and R
  t := 0
  while ft has not converged do
    Use ft to construct a 2-graph G satisfying the diffusion continuity condition
    $f_{t+\epsilon} := f_t - \epsilon \, D_H^{-1} J_G f_t$
    t := t + ε
  end
  Set $j := \arg\min_{1 \leq i \leq n} \beta_H(L_i, R_i)$
  return (Lj, Rj)
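Putting the pieces together, here is a simplified sketch (ours) of the main loop of Algorithm 1. It reuses the helper functions sketched earlier and splits each hyperedge's weight evenly over Sf(e) × If(e), so it corresponds to the FBCA variant evaluated in Section 4 rather than the exact linear-programming construction; tie-handling corner cases are ignored.

```python
import numpy as np

def find_bipartite_components(vertices, hyperedges, f0, eps=1.0,
                              tol=1e-6, max_iters=1000):
    """Discretised diffusion f <- f - eps * D_H^{-1} J_G f, then a sweep."""
    idx = {v: i for i, v in enumerate(vertices)}
    d = np.array([degree(hyperedges, v) for v in vertices], dtype=float)
    f = np.array([f0[v] for v in vertices], dtype=float)
    prev_rq = np.inf
    for _ in range(max_iters):
        Jf = np.zeros_like(f)                 # accumulate J_G f edge by edge
        for e, we in hyperedges:
            S, I = S_and_I(e, {v: f[idx[v]] for v in e})
            share = we / (len(S) * len(I))    # even split over S_f(e) x I_f(e)
            for u in S:
                for v in I:
                    Jf[idx[u]] += share * (f[idx[u]] + f[idx[v]])
                    Jf[idx[v]] += share * (f[idx[v]] + f[idx[u]])
        rq = (f @ Jf) / (f @ (d * f))         # current Rayleigh quotient
        if abs(prev_rq - rq) < tol:
            break
        prev_rq = rq
        f -= eps * Jf / d
        f /= np.linalg.norm(f)                # renormalise for stability
    fd = {v: f[idx[v]] for v in vertices}
    return two_sided_sweep(fd, lambda L, R: beta_H(vertices, hyperedges, L, R))
```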
Remark 1. We make the important remark that there is no polynomial-time algorithm which guarantees any multiplicative approximation of the minimum hypergraph bipartiteness value βH, unless P = NP. We prove this in Appendix C by a reduction from the NP-complete HYPERGRAPH 2-COLOURABILITY problem. This means that the problem we consider in this work is fundamentally more difficult than the equivalent problem for 2-graphs, as well as the problem of finding a sparse cut in a hypergraph. For this reason, the analysis of the non-linear hypergraph Laplacian operator [5, 27] cannot be applied in our case.
3.3 Dealing with the diffusion continuity condition
It remains for us to discuss the diffusion continuity condition, which guarantees that Sf(e) and If(e) will not change in infinitesimal time and the diffusion process will eventually converge to some stable distribution. Formally, let ft be the normalised measure on the vertices of H, and let
$$r \triangleq \frac{\mathrm{d} f_t}{\mathrm{d} t} = -D_H^{-1} J_H f_t$$
be the derivative of ft, which describes the rate of change for every vertex at the current time t. We write r(v) for any v ∈ VH as $r(v) = \sum_{e \in E_H} r_e(v)$, where re(v) is the contribution of edge e towards the rate of change of v. Now we discuss three rules that we expect the diffusion process to satisfy, and later prove that these three rules uniquely define the rate of change r.
First of all, as we mentioned in Section 3.2, we expect that only the vertices in Sf (e) ∪ If (e) will participate in the diffusion process, i.e., re(u) = 0 unless u ∈ Sf (e) ∪ If (e). Moreover, any vertex u participating in the diffusion process must satisfy the following:
• Rule (0a): if |re(u)| > 0 and u ∈ Sf(e), then $r(u) = \max_{v \in S_f(e)}\{r(v)\}$.
• Rule (0b): if |re(u)| > 0 and u ∈ If(e), then $r(u) = \min_{v \in I_f(e)}\{r(v)\}$.
To explain Rule (0), notice that for an infinitesimal time, ft(u) will be increased according to (dft/dt) (u) = r(u). Hence, by Rule (0) we know that, if u ∈ Sf (e) (resp. u ∈ If (e)) participates in the diffusion process in edge e, then in an infinitesimal time f(u) will remain the maximum (resp. minimum) among the vertices in e. Such a rule is necessary to ensure that the vertices involved in the diffusion in edge e do not change in infinitesimal time, and the diffusion process is able to continue.
Our next rule states that the total rate of change of the measure due to edge e is equal to $-w(e) \cdot \Delta_f(e)$:
• Rule (1): $\sum_{v \in S_f(e)} d(v) r_e(v) = \sum_{v \in I_f(e)} d(v) r_e(v) = -w(e) \cdot \Delta_f(e)$ for all e ∈ EH.
This rule is a generalisation from the operator JG in 2-graphs. In particular, since $D_G^{-1} J_G f_t(u) = \sum_{\{u,v\} \in E_G} w_G(u, v)(f_t(u) + f_t(v))/d_G(u)$, the rate of change of ft(u) due to the edge {u, v} ∈ EG is $-w_G(u, v)(f_t(u) + f_t(v))/d_G(u)$. Rule (1) states that in the hypergraph case the rates of change of the vertices in Sf(e) and If(e) together behave like the rates of change of u and v in the 2-graph case.
One might have expected that these two rules together would define a unique process. Unfortunately, this isn't the case, and we present a counterexample in Appendix A. To overcome this, we introduce the following stronger rule to replace Rule (0):
• Rule (2a): Assume that |re(u)| > 0 and u ∈ Sf(e).
  – If $\Delta_f(e) > 0$, then $r(u) = \max_{v \in S_f(e)}\{r(v)\}$;
  – If $\Delta_f(e) < 0$, then $r(u) = r(v)$ for all $v \in S_f(e)$.
• Rule (2b): Assume that |re(u)| > 0 and u ∈ If(e).
  – If $\Delta_f(e) < 0$, then $r(u) = \min_{v \in I_f(e)}\{r(v)\}$;
  – If $\Delta_f(e) > 0$, then $r(u) = r(v)$ for all $v \in I_f(e)$.
Notice that the first conditions of Rules (2a) and (2b) correspond to Rules (0a) and (0b) respectively; the second conditions are introduced for purely technical reasons: they state that, if the discrepancy of e is negative (resp. positive), then all the vertices u ∈ Sf (e) (resp. u ∈ If (e)) will have the same value of r(u). Theorem 2 shows that there is a unique r ∈ Rn that satisfies Rules (1) and (2), and r can be computed in polynomial time. Therefore, our two rules uniquely define a diffusion process, and we can use the computed r to simulate the continuous diffusion process with a discretised version.5
Theorem 2. For any given ft ∈ Rn, there is a unique $r = \mathrm{d}f_t/\mathrm{d}t$ and associated $\{r_e(v)\}_{e \in E, v \in V}$ that satisfy Rules (1) and (2), and r can be computed in polynomial time by linear programming.
Remark 2. The rules we define and the proof of Theorem 2 are more involved than those used in [5] to define the hypergraph Laplacian operator. In particular, in contrast to [5], in our case the discrepancy Δf(e) within a hyperedge e can be either positive or negative. This results in the four different cases in Rule (2), which must be carefully considered throughout the proof of Theorem 2.
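For intuition, the following sketch (ours) computes a rate vector by splitting each edge's contribution −w(e) · Δf(e) evenly within Sf(e) and If(e). By construction it satisfies Rule (1), but not in general Rule (2); enforcing Rule (2) as well is exactly what the linear program of Theorem 2 adds.

```python
def rate_of_change(vertices, hyperedges, f):
    """Even-splitting rate r; satisfies Rule (1) but not necessarily Rule (2)."""
    r = {v: 0.0 for v in vertices}
    d = {v: degree(hyperedges, v) for v in vertices}
    for e, we in hyperedges:
        S, I = S_and_I(e, f)
        delta = max(f[v] for v in e) + min(f[v] for v in e)   # Delta_f(e)
        for u in S:
            r[u] -= we * delta / (len(S) * d[u])
        for v in I:
            r[v] -= we * delta / (len(I) * d[v])
    return r
```

One can check that $\sum_{v \in S_f(e)} d(v) r_e(v) = -w(e) \cdot \Delta_f(e)$ for every edge, so Rule (1) holds term by term.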
4 Experiments
In this section, we evaluate the performance of our new algorithm on synthetic and real-world datasets. All algorithms are implemented in Python 3.6, using the scipy library for sparse matrix representations and linear programs. The experiments are performed using an Intel(R) Core(TM) i5-8500 CPU @ 3.00GHz processor, with 16 GB RAM. Our code can be downloaded from https://github.com/pmacg/hypergraph-bipartite-components.
Since ours is the first proposed algorithm for approximating hypergraph bipartiteness, we will compare it to a simple and natural baseline algorithm, which we call CLIQUECUT (CC). In this algorithm, we construct the clique reduction of the hypergraph and use the two-sided sweep-set algorithm described in [28] to find a set with low bipartiteness in the clique reduction.6
Additionally, we will compare two versions of our proposed algorithm. FINDBIPARTITECOMPONENTS (FBC) is our new algorithm described in Algorithm 1, and FBCAPPROX (FBCA) is an approximate version in which we do not solve the linear programs in Theorem 2 to compute the graph G. Instead, at each step of the algorithm, we construct G by splitting the weight of each hyperedge e evenly between the edges in S(e) × I(e). We always set the parameter ε = 1 for FBC and FBCA, and we set the starting vector f0 ∈ Rn for the diffusion to be the eigenvector corresponding to the minimum eigenvalue of $\mathcal{J}_G$, where G is the clique reduction of the hypergraph H.
4.1 Synthetic datasets
We first evaluate the algorithms using a random hypergraph model. Given the parameters n, r, p, and q, we generate an n-vertex r-uniform hypergraph in the following way: the vertex set V is divided into two clusters L and R of size n/2. For every set S ⊂ V with |S| = r, if S ⊂ L or S ⊂ R we add the hyperedge S with probability p, and otherwise we add the hyperedge with probability q. We remark that this is a special case of the hypergraph stochastic block model (e.g., [6]). We limit the number of free parameters for simplicity while maintaining enough flexibility to generate random hypergraphs with a wide range of optimal βH-values.
5Note that the graph G used for the diffusion at time t can be easily computed from the {re(v)} values, although in practice this is not actually needed since the r(u) values can be used to update the diffusion directly.
6We choose to use the algorithm in [28] here since, as far as we know, this is the only non-SDP based algorithm for solving the MAX-CUT problem for 2-graphs. Notice that, although SDP-based algorithms achieve a better approximation ratio for the MAX-CUT problem, they are not practical even for hypergraphs of medium sizes.
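A sketch (ours) of this generator; it enumerates all $\binom{n}{r}$ candidate sets, so it is only practical for small n and r, which also explains why very small p and q are needed at larger scales (see footnote 7).

```python
import random
from itertools import combinations

def random_hypergraph(n, r, p, q, seed=0):
    """Two-cluster r-uniform model described above (small n only)."""
    rng = random.Random(seed)
    L, R = set(range(n // 2)), set(range(n // 2, n))
    hyperedges = []
    for S in combinations(range(n), r):
        prob = p if set(S) <= L or set(S) <= R else q
        if rng.random() < prob:
            hyperedges.append((frozenset(S), 1.0))
    return sorted(L), sorted(R), hyperedges
```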
We will compare the algorithms’ performance using four metrics: the hypergraph bipartiteness ratio βH(L,R), the clique graph bipartiteness ratio βG(L,R), the F1-score [30] of the returned clustering, and the runtime of the algorithm. Throughout this subsection, we always report the average result on 10 hypergraphs randomly generated with each parameter configuration.
Comparison of FBC and FBCA. We first fix the values n = 200, r = 3, and $p = 10^{-4}$, and vary the ratio q/p from 2 to 6, which produces hypergraphs with 250 to 650 edges. The performance of each algorithm on these hypergraphs is shown in Figure 3, from which we can make the following observations:
• From Figure 3 (a) we observe that FBC and FBCA find sets with very similar bipartiteness, and they perform better than the CLIQUECUT baseline.
• From Figure 3 (b) we can see that our proposed algorithms produce output with a lower βG-value than the output of the CLIQUECUT algorithm. This is a surprising result given that CLIQUECUT operates directly on the clique graph.
• Figure 3 (c) shows that the FBCA algorithm is much faster than FBC.
From these observations, we conclude that in practice it is sufficient to use the much faster FBCA algorithm in place of the FBC algorithm.
Experiments on larger graphs. We now compare only the FBCA and CLIQUECUT algorithms, which allows us to run on hypergraphs with higher rank and number of vertices. We fix the parameters n = 2000, r = 5, and $p = 10^{-11}$, producing hypergraphs with between 5,000 and 75,000 edges,7 and show the results in Figure 4. Our algorithm consistently and significantly outperforms the baseline on every metric and across a wide variety of input hypergraphs.
To compare the algorithms' runtime, we fix the parameter n = 2000 and the ratio q = 2p, and report the runtime of the FBCA and CC algorithms on a variety of hypergraphs in Table 1. Our proposed algorithm takes more time than the baseline CC algorithm, but both appear to scale linearly in the size of the input hypergraph,8 which suggests that our algorithm's runtime is roughly a constant factor multiple of the baseline's.
7In our model, a very small value of p and q is needed since in an n-vertex, r-uniform hypergraph there are $\binom{n}{r}$ possible edges, which can be a very large number. In this case, $\binom{2000}{5} \approx 2.6 \times 10^{14}$.
Figure 4: The average performance of each algorithm when n = 2000, r = 5, and $p = 10^{-11}$: (a) βH-value, (b) βG-value, (c) F1-score. We omit the error bars because they are too small to read.
4.2 Real-world datasets
Next, we demonstrate the broad utility of our algorithm on complex real-world datasets with higher-order relationships, which are most naturally represented by hypergraphs. Moreover, the hypergraphs are inhomogeneous, meaning that they contain vertices of different types, although this information is not available to the algorithm, which therefore has to treat every vertex identically. We demonstrate that our algorithm is able to find clusters which correspond to the vertices of different types. Table 2 shows the F1-score of the clustering produced by our algorithm on each dataset and demonstrates that it consistently outperforms the CLIQUECUT algorithm.
Penn Treebank. The Penn Treebank dataset is an English-language corpus with examples of written American English from several sources, including fiction and journalism [22]. The dataset contains 49,208 sentences and over 1 million words, which are labelled with their part of speech. We construct a hypergraph in the following way: the vertex set consists of all the verbs, adverbs, and adjectives which occur at least 10 times in the corpus, and for every 4-gram (a sequence of 4 words) we add a hyperedge containing the co-occurring words. This results in a hypergraph with 4,686 vertices and 176,286 edges. The clustering returned by our algorithm correctly distinguishes between verbs and non-verbs with an accuracy of 67%. This experiment demonstrates that our unsupervised general-purpose algorithm is capable of recovering non-trivial structure in a dataset which would ordinarily be clustered using significant domain knowledge, or a complex pre-trained model [2, 16].
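A sketch (ours) of this construction from a part-of-speech-tagged corpus; the tag names and the (word, tag) input format are assumptions for illustration, not the paper's code.

```python
from collections import Counter

def treebank_hypergraph(tagged_words, keep_tags=("VERB", "ADV", "ADJ"),
                        min_count=10, gram=4):
    """Build the 4-gram co-occurrence hypergraph described above.

    tagged_words: list of (word, pos_tag) pairs in corpus order.
    """
    counts = Counter(w for w, t in tagged_words if t in keep_tags)
    vocab = {w for w, c in counts.items() if c >= min_count}
    hyperedges = []
    for i in range(len(tagged_words) - gram + 1):
        window = {w for w, _ in tagged_words[i:i + gram] if w in vocab}
        if len(window) >= 2:                  # keep edges with >= 2 vertices
            hyperedges.append((frozenset(window), 1.0))
    return sorted(vocab), hyperedges
```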
DBLP. We construct a hypergraph from a subset of the DBLP network consisting of 14,376 papers published in artificial intelligence and machine learning conferences [12, 32]. For each paper, we include a hyperedge linking the authors of the paper with the conference in which it was published, giving a hypergraph with 14,495 vertices and 14,376 edges. The clusters returned by our algorithm successfully separate the authors from the conferences with an accuracy of 100%.
5 Concluding remarks
In this paper, we introduce a new hypergraph Laplacian-type operator and apply this operator to design an algorithm that finds almost bipartite components in hypergraphs. Our experimental results demonstrate the potentially wide applications of spectral hypergraph theory, and so we believe that designing faster spectral hypergraph algorithms is an important future research direction in algorithms and machine learning. This will allow spectral hypergraph techniques to be applied more effectively to analyse the complex datasets which occur with increasing frequency in the real world.
8Although n is fixed, the CLIQUECUT algorithm's runtime is not constant since the time to compute an eigenvalue of the sparse adjacency matrix scales with the number and rank of the hyperedges.
Acknowledgements
Peter Macgregor is supported by the Langmuir PhD Scholarship, and He Sun is supported by an EPSRC Early Career Fellowship (EP/T00729X/1). | 1. What is the focus of the paper regarding finding bipartite components in a hypergraph?
2. What are the strengths of the proposed approach, particularly in terms of its formal treatment and experimental exploration?
3. What are the weaknesses of the paper, especially regarding its lack of clarity on applications and computational complexity compared to a baseline clique cut algorithm? | Summary Of The Paper
Review | Summary Of The Paper
The paper discusses how to find bipartite components in a hypergraph. A bipartite component consists of two subsets of vertices L and R: "bipartite" implies there is minimal connection within L as well as within R, and "component" implies that there is minimal connection between vertices in L ∪ R and vertices outside.
The algorithm is based on diffusion process to find sets with small hyper graph bipartiteness. The paper provides a couple of theorems for the existence of a polynomial algorithm. Experiments are conducted on both synthetic as well as real-world datasets, including Penn Treebank and DBLP.
Review
On the plus side, the paper gives a formal treatment of the problem. It is written rigorously, if rather densely, but the key ideas come across. It is also good that the experiments explore both synthetic datasets (to look at the runtime with increasing size) as well as real-world datasets (to explore potential applications).
One weakness of the paper is that it is light on applications. It is not clear to me whether there will be many occasions in which we'd be looking for bipartite components in a hypergraph. While the paper gives an example on the Penn Treebank, claiming that it is achieving good performance given that it is an unsupervised general-purpose algorithm, this seems to still be a contrived example. The discussion on DBLP is similarly vague, saying that the algorithm can separate authors from conferences, but I imagine that task wouldn't really be an actual application.
Another point is the computational complexity relative to the baseline clique cut algorithm, which is much faster (by an order of magnitude or thereabouts). While the proposed method produces "some" improvement, some discussion on whether the difference justifies the large increase in computational cost would be helpful.
NIPS | Title
Finding Bipartite Components in Hypergraphs
Abstract
Hypergraphs are important objects to model ternary or higher-order relations of objects, and have a number of applications in analysing many complex datasets occurring in practice. In this work we study a new heat diffusion process in hypergraphs, and employ this process to design a polynomial-time algorithm that approximately finds bipartite components in a hypergraph. We theoretically prove the performance of our proposed algorithm, and compare it against the previous state-of-the-art through extensive experimental analysis on both synthetic and real-world datasets. We find that our new algorithm consistently and significantly outperforms the previous state-of-the-art across a wide range of hypergraphs.
1 Introduction
Spectral methods study the efficient matrix representation of graphs and datasets, and apply the algebraic properties of these matrices to design efficient algorithms. Over the last three decades, spectral methods have become one of the most powerful techniques in machine learning, and have had comprehensive applications in a wide range of domains, including clustering [24, 31], image and video segmentation [26], and network analysis [25], among many others. While the success of this line of research is based on our rich understanding of Laplacian operators of graphs, there has been a sequence of very recent work studying non-linear Laplacian operators for more complex objects (i.e., hypergraphs) and employing these non-linear operators to design hypergraph algorithms with better performance.
1.1 Our contribution
In this work, we study the non-linear Laplacian-type operators for hypergraphs, and employ such an operator to design a polynomial-time algorithm for finding bipartite components in hypergraphs. The main contribution of our work is as follows:
First of all, we introduce and study a non-linear Laplacian-type operator JH for any hypergraph H . While we’ll formally define the operator JH in Section 3, one can informally think about JH as a variant of the standard non-linear hypergraph Laplacian LH studied in [5, 20, 27], and this variation is needed to study the other end of the spectrum of LH . We present a polynomial-time algorithm that finds some eigenvalue λ and its associated eigenvector of JH , and our algorithm is based on the following heat diffusion process: starting from an arbitrary vector f0 ∈ Rn that describes the initial heat distribution among the vertices, we use f0 to construct some 2-graph1 G0, and use the diffusion process in G0 to represent the one in the original hypergraph H and update ft; this process continues until the time at which G0 cannot be used to appropriately simulate the diffusion process in H any more. At this point, we use the currently maintained ft to construct another 2-graph Gt
1Throughout the paper, we refer to non-hyper graphs as 2-graphs. Similarly, we always use LH to refer to the non-linear hypergraph Laplacian operator, and use LG as the standard 2-graph Laplacian.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
to simulate the diffusion process in H , and update ft. This process continues until the vector ft converges; see Figure 1 for illustration. We theoretically prove that this heat diffusion process is unique, well-defined, and our maintained vector ft converges to some eigenvector of JH . While this result is quite interesting on its own and forms the basis of our second result, our analysis shows that, for certain hypergraphs H , both the operator JH and LH could have ω(1) eigenvectors. This result answers an open question in [5], which asks whether LH could have more than 2 eigenvectors2.
Secondly, we present a polynomial-time algorithm that, given a hypergraph H = (VH , EH , w) as input, finds disjoint subsets L,R ⊂ VH that are highly connected with each other. The key to our algorithm is a Cheeger-type inequality for hypergraphs that relates the spectrum of JH and the bipartiteness ratio of H , an analog of βG studied in [28] for 2-graphs. Both the design and analysis of our algorithm is inspired by [28], however our analysis is much more involved because of the non-linear operator JH and hyperedges of different ranks. Our second result alone answers an open question posed by [33], which asks whether there is a hypergraph operator which satisfies a Cheeger-type inequality for bipartiteness.
The significance of our work is further demonstrated by extensive experimental studies of our algorithms on both synthetic and real-world datasets. In particular, on the well-known Penn Treebank corpus that contains 49, 208 sentences and over 1 million words, our purely unsupervised algorithm is able to identify a significant fraction of verbs from non-verbs in its two output clusters. Hence, we believe that our work could potentially have many applications in unsupervised learning for hypergraphs. Using the publicly available code of our implementation, we welcome the reader to explore further applications of our work in even more diverse datasets.
1.2 Related work
The spectral theory of hypergraphs using non-linear operators is introduced in [5] and generalised in [33]. The operator they describe is applied for hypergraph clustering applications in [20, 27]. There are many approaches for finding clusters in hypergraphs by constructing a 2-graph which approximates the hypergraph and using a 2-graph clustering algorithm directly [7, 19, 35]. Another
2We underline that, while the operator LG of a 2-graph G has n eigenvalues, the number of eigenvalues of LH is unknown because of its non-linearity. As answering this open question isn’t the main point of our work, we refer the reader to Appendix C for detailed discussion.
approach for hypergraph clustering is based on tensor spectral decomposition [15, 18]. [21, 23, 36] consider the problem of finding densely connected clusters in 2-graphs. Heat diffusion processes are used for clustering 2-graphs in [10, 17]. [14] studies a different, flow-based diffusion process for finding clusters in 2-graphs, and [13] generalises this to hypergraphs. We note that all of these methods solve a different problem to ours, and cannot be compared directly. Our algorithm is related to the hypergraph max cut problem, and the state-of-the-art approximation algorithm is given by [34]. [28] introduces graph bipartiteness and gives an approximation algorithm for the 2-graph max cut problem. To the best of our knowledge, we are the first to generalise this notion of bipartiteness to hypergraphs. Finally, we note that there have been recent improvements in the time complexity for solving linear programs [11, 29] although we do not take these into account in our analysis since the goal of this paper is not to obtain the fastest algorithm possible.
2 Notation
2-graphs. Throughout the paper, we call a non-hyper graph a 2-graph [4, 8]. We always use G = (VG, EG, w) to express a 2-graph, in which every edge e ∈ EG consists of two vertices in VG and we let n = |VG|. The degree of any vertex u ∈ VG is defined by dG(u) , ∑ v∈VG w(u, v),
and for any S ⊆ V the volume of S is defined by volG(S) , ∑ u∈S dG(u). Following [28], the bipartiteness ratio of any disjoint sets L,R ⊂ VG is defined by
βG(L,R) , 2w(L,L) + 2w(R,R) + w(L ∪R,L ∪R)
volG(L ∪R) where w(A,B) = ∑ (u,v)∈A×B w(u, v), and we further define βG , minS⊂V βG(S, V \S). Notice that a low βG-value means that there is a dense cut between L andR, and there is a sparse cut between L ∪R and V \ (L ∪R). In particular, βG = 0 implies that (L,R) forms a bipartite component of G. We use DG to denote the n× n diagonal matrix whose entries are (DG)uu = dG(u), for all u ∈ V . Moreover, we use AG to denote the n× n adjacency matrix whose entries are (AG)uv = w(u, v), for all u, v ∈ V . The Laplacian matrix is defined by LG , DG − AG. In addition, we define JG , DG +AG, and JG , D−1/2G JGD −1/2 G . For any real and symmetric matrix A, the eigenvalues of A are denoted by λ1(A) ≤ · · · ≤ λn(A), and the eigenvector associated with λi(A) is denoted by fi(A) for 1 ≤ i ≤ n.
Hypergraphs. Let H = (VH , EH , w) be a hypergraph with n = |VH | vertices and weight function w : EH 7→ R+. For any vertex v ∈ VH , the degree of v is defined by dH(v) , ∑ e∈EH w(e) · I [v ∈ e], where I[X] = 1 if event X holds and I[X] = 0 otherwise. The rank of edge e ∈ EH is the total number of vertices in e. For any A,B ⊂ VH , the cut value between A and B is defined by
w(A,B) , ∑ e∈EH w(e) · I [e ∩A 6= ∅ ∧ e ∩B 6= ∅] .
Sometimes, we are required to analyse the weights of edges that intersect some vertex sets and not others. To this end, we define for any A,B,C ⊆ VH that
w(A,B | C) , ∑ e∈EH w(e) · I [e ∩A 6= ∅ ∧ e ∩B 6= ∅ ∧ e ∩ C = ∅] ,
and we sometimes write w(A | C) , w(A,A | C) for simplicity. Generalising the notion of the bipartiteness ratio of a 2-graph, the bipartiteness ratio of sets L,R in a hypergraph H is defined by
βH(L,R) , 2w(L|L) + 2w(R|R) + w(L,L ∪R|R) + w(R,L ∪R|L)
vol(L ∪R) ,
and we define βH , minS⊂V βH(S, V \ S). For any hypergraph H and f ∈ Rn, we define the discrepancy of an edge e ∈ EH with respect to f as
∆f (e) , max u∈e f(u) + min v∈e f(v).
For any non-linear operator J : Rn 7→ Rn, we say that (λ, f) is an eigen-pair if and only if Jf = λf and note that in general, a non-linear operator can have any number of eigenvalues and eigenvectors.
It is important to remember that throughout the paper, we always use the letter H to represent a hypergraph, and G to represent a 2-graph.
Clique reduction. The clique reduction of a hypergraph H is a 2-graph G such that VG = VH and for every edge e ∈ EH , G contains a clique on the vertices in e with edge weights 1/(re − 1) where re is the rank of the edge e. The clique reduction is a common tool for designing hypergraph algorithms [1, 7, 35], and for this reason we use it as a baseline algorithm in this paper. We note that hypergraph algorithms based on the clique reduction often perform less well when there are edges with large rank in the hypergraph. Specifically, in Appendix C we use two r-uniform hypergraphs as examples to show that no matter how we weight the edges in the clique reduction, some cuts cannot be approximated better than a factor of O(r). This is one of the main reasons to develop spectral theory for hypergraphs through heat diffusion processes [5, 27, 33].
3 Diffusion process and the algorithm
In this section, we propose a new diffusion process in hypergraphs and use it to design a polynomialtime algorithm for finding bipartite components in hypergraphs. We first study 2-graphs to give some intuition, and then generalise to hypergraphs and describe our algorithm. Finally, we sketch some of the detailed analysis which proves that the diffusion process is well defined.
3.1 The diffusion process in 2-graphs
To discuss the intuition behind our designed diffusion process, let us look at the case of 2-graphs. Let G = (V,E,w) be a 2-graph, and we have for any x ∈ Rn that
xᵀJGx xᵀx = xᵀ(I +D −1/2 G AGD −1/2 G )x xᵀx .
By setting x = D1/2G y, we have that
xᵀJGx xᵀx = yᵀD 1/2 G JGD 1/2 G y yᵀDGy = yᵀ(DG +AG)y yᵀDGy =
∑ {u,v}∈EG w(u, v) · (y(u) + y(v))
2∑ u∈VG dG(u) · y(u) 2 . (1)
It is easy to see that λ1(JG) = 0 if G is bipartite, and it is known that λ1(JG) and its corresponding eigenvector f1(JG) are closely related to two densely connected components of G [28]. Moreover, similar to the heat equation for graph Laplacians LG, suppose DGft ∈ Rn is some measure on the vertices of G, then a diffusion process defined by the differential equation
dft dt = −D−1G JGft (2)
will converge to the minimum eigenvalue of D−1G JG and can be employed to find two densely connected components of the underlying 2-graph.3
3.2 The hypergraph diffusion and our algorithm
Now we study whether one can construct a new hypergraph operator JH which generalises the diffusion in 2-graphs to hypergraphs. First of all, we focus on a fixed time t with measure vector DHft ∈ Rn and ask whether we can follow (2) and define the rate of change
dft dt = −D−1H JHft
so that the diffusion can proceed for an infinitesimal time step. Our intuition is that the rate of change due to some edge e ∈ EH should involve only the vertices in e with the maximum or minimum value in the normalised measure ft. To formalise this, for any edge e ∈ EH , we define
Sf (e) , {v ∈ e : ft(v) = max u∈e ft(u)} and If (e) , {v ∈ e : ft(v) = min u∈e ft(u)}.
3For the reader familiar with the heat diffusion process of 2-graphs (e.g., [9, 17]), we remark that the above-defined process essentially employs the operation JG to replace the Laplacian LG when defining the heat diffusion: through JG, the heat diffusion can be used to find two densely connected components of G.
That is, for any edge e and normalised measure ft, Sf (e) ⊆ e consists of the vertices v adjacent to e whose ft(v) values are maximum and If (e) ⊆ e consists of the vertices v adjacent to e whose ft(v) values are minimum. See Figure 2 for an example. Then, applying the JH operator to a vector ft should be equivalent to applying the operator JG for some 2-graph G which we construct by splitting the weight of each hyperedge e ∈ EH between the edges in Sf (e)× If (e). Similar to the case for 2-graphs and (1), for any x = D1/2H ft this will give us the quadratic form
xᵀD −1/2 H JHD −1/2 H x
xᵀx = fᵀt JGft fᵀt DHft =
∑ {u,v}∈EG wG(u, v) · (ft(u) + ft(v))
2∑ u∈VG dH(u) · ft(u) 2
=
∑ e∈EH wH(e)(maxu∈e ft(u) + minv∈e ft(v))
2∑ u∈VH dH(u) · ft(u) 2 ,
where wG(u, v) is the weight of the edge {u, v} in G, and wH(e) is the weight of the edge e in H . We will show in the proof of Theorem 1 that JH has an eigenvalue of 0 if the hypergraph is 2-colourable4, and that the spectrum of JH is closely related to the hypergraph bipartiteness.
For this reason, we would expect that the diffusion process based on the operator JH can be used to find sets with small hypergraph bipartiteness. However, one needs to be very cautious here as, by the nature of the diffusion process, the values ft(v) of all the vertices v change over time and, as a result, the sets Sf (e) and If (e) that consist of the vertices with the maximum and minimum ft-value might change after an infinitesimal time step; this will prevent the process from continuing. We will discuss this issue in detail through the so-called Diffusion Continuity Condition in Section 3.3. In essence, the diffusion continuity condition ensures that one can always construct a 2-graph G by allocating the weight of each hyperedge e to the edges in Sf (e)× If (e) such that the sets Sf (e) and If (e) will not change in infinitesimal time although ft changes according to (dft)/(dt) = −D−1H JGft. We will also present an efficient
procedure in Section 3.3 to compute the weights of edges in Sf (e)× If (e). All of these guarantee that (i) every 2-graph that corresponds to the hypergraph diffusion process at any time step can be efficiently constructed; (ii) with this sequence of constructed 2-graphs, the diffusion process defined by JH is able to continue until the heat distribution converges. With this, we summarise the main idea of our presented algorithm as follows:
• First of all, we introduce some arbitrary f0 ∈ Rn as the initial diffusion vector, and a step size parameter > 0 to discretise the diffusion process. At each step, the algorithm constructs the 2-graph G guaranteed by the diffusion continuity condition, and updates ft ∈ Rn according to the rate of change (dft)/(dt) = −D−1H JGft. The algorithm terminates when ft has converged, i.e., the ratio between the current Rayleigh quotient (fᵀt JGft)/(f ᵀ t DHft) and
the one in the previous time step is bounded by some predefined constant. • Secondly, similar to many previous spectral graph clustering algorithms (e.g. [3, 27, 28]),
the algorithm constructs the sweep sets defined by ft and returns the two sets with minimum βH -value among all the constructed sweep sets. Specifically, for every 1 ≤ i ≤ n, the algorithm constructs Lj = {vi : |ft(vi)| ≥ |ft(vj)| ∧ ft(vi) < 0} and Rj = {vi : |ft(vi)| ≥ |ft(vj)|∧ft(vi) ≥ 0}. Then, between the n pairs (Lj , Rj), the algorithm returns the one with the minimum βH -value.
See Algorithm 1 for the formal description, and its performance is summarised in Theorem 1. Theorem 1 (Main Result). Given a hypergraph H = (VH , EH , w) and parameter > 0, the following holds:
1. There is an algorithm that finds an eigen-pair (λ, f ) of the operator JH such that λ ≤ λ1(JG), where G is the clique reduction of H and the inequality is strict if mine∈EH re > 2 where re is the rank of e. The algorithm runs in poly(|VH |, |EH |, 1/ ) time.
4Hypergraph H is 2-colourable if there are disjoint sets L,R ⊂ VH such that every edge intersects L and R.
2. Given an eigen-pair (λ, f) of the operator JH , there is an algorithm that constructs the two-sided sweep sets defined on f , and finds sets L and R such that βH(L,R) ≤ √ 2λ. The
algorithm runs in poly(|VH |, |EH |) time.
Algorithm 1: FINDBIPARTITECOMPONENTS Input :Hypergraph H , starting vector f0 ∈ Rn, step size > 0 Output :Sets L and R t := 0 while ft has not converged do
Use ft to construct 2-graph G satisfying the diffusion continuity condition ft+ := ft − D−1H JGft t := t+
end Set j := arg min1≤i≤n βH(Li, Ri) return (Lj , Rj)
Remark 1. We make the important remark that there is no polynomial-time algorithm which guarantees any multiplicative approximation of the minimum hypergraph bipartiteness value βH , unless P = NP. We prove this in Appendix C by a reduction from the NP-complete HYPERGRAPH 2- COLOURABILITY problem. This means that the problem we consider in this work is fundamentally more difficult than the equivalent problem for 2-graphs, as well as the problem of finding a sparse cut in a hypergraph. For this reason, the analysis of the non-linear hypergraph Laplacian operator [5, 27] cannot be applied in our case.
3.3 Dealing with the diffusion continuity condition
It remains for us to discuss the diffusion continuity condition, which guarantees that Sf (e) and If (e) will not change in infinitesimal time and the diffusion process will eventually converge to some stable distribution. Formally, let ft be the normalised measure on the vertices of H , and let
r , dft dt = −D−1H JHft
be the derivative of ft, which describes the rate of change for every vertex at the current time t. We write r(v) for any v ∈ VH as r(v) = ∑ e∈EH re(v), where re(v) is the contribution of edge e towards the rate of change of v. Now we discuss three rules that we expect the diffusion process to satisfy, and later prove that these three rules uniquely define the rate of change r.
First of all, as we mentioned in Section 3.2, we expect that only the vertices in Sf (e) ∪ If (e) will participate in the diffusion process, i.e., re(u) = 0 unless u ∈ Sf (e) ∪ If (e). Moreover, any vertex u participating in the diffusion process must satisfy the following:
• Rule (0a): if |re(u)| > 0 and u ∈ Sf (e), then r(u) = maxv∈Sf (e){r(v)}. • Rule (0b): if |re(u)| > 0 and u ∈ If (e), then r(u) = minv∈If (e){r(v)}.
To explain Rule (0), notice that for an infinitesimal time, ft(u) will be increased according to (dft/dt) (u) = r(u). Hence, by Rule (0) we know that, if u ∈ Sf (e) (resp. u ∈ If (e)) participates in the diffusion process in edge e, then in an infinitesimal time f(u) will remain the maximum (resp. minimum) among the vertices in e. Such a rule is necessary to ensure that the vertices involved in the diffusion in edge e do not change in infinitesimal time, and the diffusion process is able to continue.
Our next rule states that the total rate of change of the measure due to edge e is equal to−w(e)·∆f (e): • Rule (1): ∑ v∈Sf (e) d(v)re(v) = ∑ v∈If (e) d(v)re(v) = −w(e) ·∆f (e) for all e ∈ EH . This rule is a generalisation from the operator JG in 2-graphs. In particular, since D−1G JGft(u) =∑ {u,v}∈EG wG(u, v)(ft(u)+ft(v))/dG(u), the rate of change of ft(u) due to the edge {u, v} ∈ EG is−wG(u, v)(ft(u) + ft(v))/dG(u). Rule (1) states that in the hypergraph case the rate of change of the vertices in Sf (e) and If (e) together behave like the rate of change of u and v in the 2-graph case.
One might have expected that these two rules together will define a unique process. Unfortunately, this isn’t the case and we present a counterexample in Appendix A. To overcome this, we introduce the following stronger rule to replace Rule (0):
• Rule (2a): Assume that |re(u)| > 0 and u ∈ Sf (e).
– If ∆f (e) > 0, then r(u) = maxv∈Sf (e){r(v)}; – If ∆f (e) < 0, then r(u) = r(v) for all v ∈ Sf (e).
• Rule (2b): Assume that |re(u)| > 0 and u ∈ If (e):
– If ∆f (e) < 0, then r(u) = minv∈If (e){r(v)}; – If ∆f (e) > 0, then r(u) = r(v) for all v ∈ If (e).
Notice that the first conditions of Rules (2a) and (2b) correspond to Rules (0a) and (0b) respectively; the second conditions are introduced for purely technical reasons: they state that, if the discrepancy of e is negative (resp. positive), then all the vertices u ∈ Sf (e) (resp. u ∈ If (e)) will have the same value of r(u). Theorem 2 shows that there is a unique r ∈ Rn that satisfies Rules (1) and (2), and r can be computed in polynomial time. Therefore, our two rules uniquely define a diffusion process, and we can use the computed r to simulate the continuous diffusion process with a discretised version.5
Theorem 2. For any given ft ∈ Rn, there is a unique r = dft/dt and associated {re(v)}e∈E,v∈V that satisfy Rule (1) and (2), and r can be computed in polynomial time by linear programming. Remark 2. The rules we define and the proof of Theorem 2 are more involved than those used in [5] to define the hypergraph Laplacian operator. In particular, in contrast to [5], in our case the discrepancy ∆f (e) within a hyperedge e can be either positive or negative. This results in the four different cases in Rule (2) which must be carefully considered throughout the proof of Theorem 2.
4 Experiments
In this section, we evaluate the performance of our new algorithm on synthetic and real-world datasets. All algorithms are implemented in Python 3.6, using the scipy library for sparse matrix representations and linear programs. The experiments are performed using an Intel(R) Core(TM) i5-8500 CPU @ 3.00GHz processor, with 16 GB RAM. Our code can be downloaded from https://github.com/pmacg/hypergraph-bipartite-components.
Since ours is the first proposed algorithm for approximating hypergraph bipartiteness, we will compare it to a simple and natural baseline algorithm, which we call CLIQUECUT (CC). In this algorithm, we construct the clique reduction of the hypergraph and use the two-sided sweep-set algorithm described in [28] to find a set with low bipartiteness in the clique reduction.6
Additionally, we will compare two versions of our proposed algorithm. FINDBIPARTITECOMPONENTS (FBC) is our new algorithm described in Algorithm 1 and FBCAPPROX (FBCA) is an approximate version in which we do not solve the linear programs in Theorem 2 to compute the graph G. Instead, at each step of the algorithm, we construct G by splitting the weight of each hyperedge e evenly between the edges in S(e)× I(e). We always set the parameter = 1 for FBC and FBCA, and we set the starting vector f0 ∈ Rn for the diffusion to be the eigenvector corresponding to the minimum eigenvalue of JG, where G is the clique reduction of the hypergraph H .
4.1 Synthetic datasets
We first evaluate the algorithms using a random hypergraph model. Given the parameters n, r, p, and q, we generate an n-vertex r-uniform hypergraph in the following way: the vertex set V is divided
5Note that the graph G used for the diffusion at time t can be easily computed from the {re(v)} values, although in practice this is not actually needed since the r(u) values can be used to update the diffusion directly.
6We choose to use the algorithm in [28] here since, as far we know, this is the only non-SDP based algorithm for solving the MAX-CUT problem for 2-graphs. Notice that, although SDP-based algorithms achieve a better approximation ratio for the MAX-CUT problem, they are not practical even for hypergraphs of medium sizes.
into two clusters L and R of size n/2. For every set S ⊂ V with |S| = r, if S ⊂ L or S ⊂ R we add the hyperedge S with probability p and otherwise we add the hyperedge with probability q. We remark that this is a special case of the hypergraph stochastic block model (e.g., [6]). We limit the number of free parameters for simplicity while maintaining enough flexibility to generate random hypergraphs with a wide range of optimal βH -values.
We will compare the algorithms’ performance using four metrics: the hypergraph bipartiteness ratio βH(L,R), the clique graph bipartiteness ratio βG(L,R), the F1-score [30] of the returned clustering, and the runtime of the algorithm. Throughout this subsection, we always report the average result on 10 hypergraphs randomly generated with each parameter configuration.
Comparison of FBC and FBCA. We first fix the values n = 200, r = 3, and p = 10−4 and vary the ratio of q/p from 2 to 6 which produces hypergraphs with 250 to 650 edges. The performance of each algorithm on these hypergraphs is shown in Figure 3 from which we can make the following observations:
• From Figure 3 (a) we observe that FBC and FBCA find sets with very similar bipartiteness and they perform better than the CLIQUECUT baseline. • From Figure 3 (b) we can see that our proposed algorithms produce output with a lower βG-value than the output of the CLIQUECUT algorithm. This is a surprising result given that CLIQUECUT operates directly on the clique graph. • Figure 3 (c) shows that the FBCA algorithm is much faster than FBC.
From these observations, we conclude that in practice it is sufficient to use the much faster FBCA algorithm in place of the FBC algorithm.
Experiments on larger graphs. We now compare only the FBCA and CLIQUECUT algorithms, which allows us to run on hypergraphs with higher rank and number of vertices. We fix the parameters n = 2000, r = 5, and p = 10−11, producing hypergraphs with between 5000 and 75000 edges7 and show the results in Figure 4. Our algorithm consistently and significantly outperforms the baseline on every metric and across a wide variety of input hypergraphs.
To compare the algorithms’ runtime, we fix the parameter n = 2000 and the ratio q = 2p and report the runtime of the FBCA and CC algorithms on a variety of hypergraphs in Table 1. Our proposed algorithm takes more time than
7In our model, a very small value of p and q is needed since in an n-vertex, r-uniform hypergraph there are( n r ) possible edges which can be a very large number. In this case, ( 2000 5 ) ≈ 2.6× 1014.
the baseline CC algorithm but both appear to scale linearly in the size of the input hypergraph8 which suggests that our algorithm’s runtime is roughly a constant factor multiple of the baseline.
16 18 20 22 24 26 28 30 0.000
0.002
0.004
0.006
0.008
0.010
(a) βH -value
16 18 20 22 24 26 28 30
0.435
0.440
0.445
0.450
0.455
0.460
0.465
0.470
(b) βG-value
16 18 20 22 24 26 28 30
0.5
0.6
0.7
0.8
0.9
1.0
(c) F1-score
Figure 4: The average performance of each algorithm when n = 2000, r = 5, and p = 10−11. We omit the error bars because they are too small to read.
4.2 Real-world datasets
Next, we demonstrate the broad utility of our algorithm on complex real-world datasets with higherorder relationships which are most naturally represented by hypergraphs. Moreover, the hypergraphs are inhomogeneous, meaning that they contain vertices of different types, although this information is not available to the algorithm and so an algorithm has to treat every vertex identically. We demonstrate that our algorithm is able to find clusters which correspond to the vertices of different types. Table 2 shows the F1-score of the clustering produced by our algorithm on each dataset and demonstrates that it consistently outperforms the CLIQUECUT algorithm.
Penn Treebank. The Penn Treebank dataset is an English-language corpus with examples of written American English from several sources, including fiction and journalism [22]. The dataset contains 49, 208 sentences and over 1 million words, which are labelled with their part of speech. We construct a hypergraph in the following way: the vertex set consists of all the verbs, adverbs, and adjectives which occur at least 10 times in the corpus, and for every 4-gram (a sequence of 4 words) we add a hyperedge containing the co-occurring words. This results in a hypergraph with 4, 686 vertices and
176, 286 edges. The clustering returned by our algorithm correctly distinguishes between verbs and non-verbs with an accuracy of 67%. This experiment demonstrates that our unsupervised general purpose algorithm is capable of recovering non-trivial structure in a dataset which would ordinarily be clustered using significant domain knowledge, or a complex pre-trained model [2, 16].
DBLP. We construct a hypergraph from a subset of the DBLP network consisting of 14, 376 papers published in artificial intelligence and machine learning conferences [12, 32]. For each paper, we include a hyperedge linking the authors of the paper with the conference in which it was published, giving a hypergraph with 14, 495 vertices and 14, 376 edges. The clusters returned by our algorithm successfully separate the authors from the conferences with an accuracy of 100%.
5 Concluding remarks
In this paper, we introduce a new hypergraph Laplacian-type operator and apply this operator to design an algorithm that finds almost bipartite components in hypergraphs. Our experimental results demonstrate the potentially wide applications of spectral hypergraph theory, and so we believe that designing faster spectral hypergraph algorithms is an important future research direction in algorithms and machine learning. This will allow spectral hypergraph techniques to be applied more effectively to analyse the complex datasets which occur with increasing frequency in the real world.
8 Although n is fixed, the CLIQUECUT algorithm's runtime is not constant since the time to compute an eigenvalue of the sparse adjacency matrix scales with the number and rank of the hyperedges.
Acknowledgements
Peter Macgregor is supported by the Langmuir PhD Scholarship, and He Sun is supported by an EPSRC Early Career Fellowship (EP/T00729X/1). | 1. What is the focus of the paper, and what are the key contributions?
2. What are the strengths of the proposed algorithm, particularly in terms of its theoretical analysis?
3. What are the weaknesses of the paper, especially regarding its computational complexity and scalability?
4. How does the reviewer assess the practical significance of the work, and what additional information would they like to see included? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a diffusion-based algorithm for finding bipartite components in hypergraphs. The work rigorously analyzes the algorithm and provides theoretical guarantees.
Review
This paper uses a diffusion-based algorithm that requires multiple iterations for convergence. Therefore, it comes with high computational complexity. The complexity analysis of the algorithm is missing. Further, the empirical time complexity (Table 1) shows that the algorithm is not scalable to large-scale networks with millions of nodes, where the problem is much more relevant. More experimental results and computational complexity justification are required to assess the practical significance of the work. The work extends the theoretical understanding of the diffusion process, especially for hypergraphs, which is a major strength of this work.
NIPS | Title
Finding Bipartite Components in Hypergraphs
Abstract
Hypergraphs are important objects to model ternary or higher-order relations of objects, and have a number of applications in analysing many complex datasets occurring in practice. In this work we study a new heat diffusion process in hypergraphs, and employ this process to design a polynomial-time algorithm that approximately finds bipartite components in a hypergraph. We theoretically prove the performance of our proposed algorithm, and compare it against the previous state-of-the-art through extensive experimental analysis on both synthetic and real-world datasets. We find that our new algorithm consistently and significantly outperforms the previous state-of-the-art across a wide range of hypergraphs.
1 Introduction
Spectral methods study the efficient matrix representation of graphs and datasets, and apply the algebraic properties of these matrices to design efficient algorithms. Over the last three decades, spectral methods have become one of the most powerful techniques in machine learning, and have had comprehensive applications in a wide range of domains, including clustering [24, 31], image and video segmentation [26], and network analysis [25], among many others. While the success of this line of research is based on our rich understanding of Laplacian operators of graphs, there has been a sequence of very recent work studying non-linear Laplacian operators for more complex objects (i.e., hypergraphs) and employing these non-linear operators to design hypergraph algorithms with better performance.
1.1 Our contribution
In this work, we study the non-linear Laplacian-type operators for hypergraphs, and employ such an operator to design a polynomial-time algorithm for finding bipartite components in hypergraphs. The main contribution of our work is as follows:
First of all, we introduce and study a non-linear Laplacian-type operator JH for any hypergraph H . While we’ll formally define the operator JH in Section 3, one can informally think about JH as a variant of the standard non-linear hypergraph Laplacian LH studied in [5, 20, 27], and this variation is needed to study the other end of the spectrum of LH . We present a polynomial-time algorithm that finds some eigenvalue λ and its associated eigenvector of JH , and our algorithm is based on the following heat diffusion process: starting from an arbitrary vector f0 ∈ Rn that describes the initial heat distribution among the vertices, we use f0 to construct some 2-graph1 G0, and use the diffusion process in G0 to represent the one in the original hypergraph H and update ft; this process continues until the time at which G0 cannot be used to appropriately simulate the diffusion process in H any more. At this point, we use the currently maintained ft to construct another 2-graph Gt
1Throughout the paper, we refer to non-hyper graphs as 2-graphs. Similarly, we always use LH to refer to the non-linear hypergraph Laplacian operator, and use LG as the standard 2-graph Laplacian.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
to simulate the diffusion process in H , and update ft. This process continues until the vector ft converges; see Figure 1 for illustration. We theoretically prove that this heat diffusion process is unique, well-defined, and our maintained vector ft converges to some eigenvector of JH . While this result is quite interesting on its own and forms the basis of our second result, our analysis shows that, for certain hypergraphs H , both the operator JH and LH could have ω(1) eigenvectors. This result answers an open question in [5], which asks whether LH could have more than 2 eigenvectors2.
Secondly, we present a polynomial-time algorithm that, given a hypergraph H = (VH , EH , w) as input, finds disjoint subsets L,R ⊂ VH that are highly connected with each other. The key to our algorithm is a Cheeger-type inequality for hypergraphs that relates the spectrum of JH and the bipartiteness ratio of H , an analog of βG studied in [28] for 2-graphs. Both the design and analysis of our algorithm is inspired by [28], however our analysis is much more involved because of the non-linear operator JH and hyperedges of different ranks. Our second result alone answers an open question posed by [33], which asks whether there is a hypergraph operator which satisfies a Cheeger-type inequality for bipartiteness.
The significance of our work is further demonstrated by extensive experimental studies of our algorithms on both synthetic and real-world datasets. In particular, on the well-known Penn Treebank corpus that contains 49,208 sentences and over 1 million words, our purely unsupervised algorithm is able to identify a significant fraction of verbs from non-verbs in its two output clusters. Hence, we believe that our work could potentially have many applications in unsupervised learning for hypergraphs. Using the publicly available code of our implementation, we welcome the reader to explore further applications of our work in even more diverse datasets.
1.2 Related work
The spectral theory of hypergraphs using non-linear operators is introduced in [5] and generalised in [33]. The operator they describe is applied for hypergraph clustering applications in [20, 27]. There are many approaches for finding clusters in hypergraphs by constructing a 2-graph which approximates the hypergraph and using a 2-graph clustering algorithm directly [7, 19, 35]. Another
2We underline that, while the operator LG of a 2-graph G has n eigenvalues, the number of eigenvalues of LH is unknown because of its non-linearity. As answering this open question isn’t the main point of our work, we refer the reader to Appendix C for detailed discussion.
approach for hypergraph clustering is based on tensor spectral decomposition [15, 18]. [21, 23, 36] consider the problem of finding densely connected clusters in 2-graphs. Heat diffusion processes are used for clustering 2-graphs in [10, 17]. [14] studies a different, flow-based diffusion process for finding clusters in 2-graphs, and [13] generalises this to hypergraphs. We note that all of these methods solve a different problem to ours, and cannot be compared directly. Our algorithm is related to the hypergraph max cut problem, and the state-of-the-art approximation algorithm is given by [34]. [28] introduces graph bipartiteness and gives an approximation algorithm for the 2-graph max cut problem. To the best of our knowledge, we are the first to generalise this notion of bipartiteness to hypergraphs. Finally, we note that there have been recent improvements in the time complexity for solving linear programs [11, 29] although we do not take these into account in our analysis since the goal of this paper is not to obtain the fastest algorithm possible.
2 Notation
2-graphs. Throughout the paper, we call a non-hyper graph a 2-graph [4, 8]. We always use G = (VG, EG, w) to express a 2-graph, in which every edge e ∈ EG consists of two vertices in VG and we let n = |VG|. The degree of any vertex u ∈ VG is defined by dG(u) ≜ ∑_{v∈VG} w(u, v), and for any S ⊆ V the volume of S is defined by volG(S) ≜ ∑_{u∈S} dG(u). Following [28], the bipartiteness ratio of any disjoint sets L, R ⊂ VG is defined by

βG(L, R) ≜ ( 2w(L, L) + 2w(R, R) + w(L ∪ R, V \ (L ∪ R)) ) / volG(L ∪ R),

where w(A, B) = ∑_{(u,v)∈A×B} w(u, v), and we further define βG ≜ min_{S⊂V} βG(S, V \ S). Notice that a low βG-value means that there is a dense cut between L and R, and there is a sparse cut between L ∪ R and V \ (L ∪ R). In particular, βG = 0 implies that (L, R) forms a bipartite component of G. We use DG to denote the n × n diagonal matrix whose entries are (DG)uu = dG(u), for all u ∈ V. Moreover, we use AG to denote the n × n adjacency matrix whose entries are (AG)uv = w(u, v), for all u, v ∈ V. The Laplacian matrix is defined by LG ≜ DG − AG. In addition, we define JG ≜ DG + AG, and 𝒥G ≜ DG^{-1/2} JG DG^{-1/2}. For any real and symmetric matrix A, the eigenvalues of A are denoted by λ1(A) ≤ ··· ≤ λn(A), and the eigenvector associated with λi(A) is denoted by fi(A) for 1 ≤ i ≤ n.
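As a concrete reference for this notation, the following sketch (our own illustration) evaluates βG(L, R) directly from the definition, counting each internal edge once, which is our reading of the w(·, ·) convention; on a bipartite component the value is 0, as stated above.

def beta_G(edges, weights, degrees, L, R):
    # Bipartiteness ratio beta_G(L, R) of a weighted 2-graph given as an
    # edge list of unordered pairs.
    L, R = set(L), set(R)
    S = L | R
    w_LL = w_RR = w_cut = 0.0
    for (u, v), w in zip(edges, weights):
        if u in L and v in L:
            w_LL += w
        elif u in R and v in R:
            w_RR += w
        elif (u in S) != (v in S):  # exactly one endpoint inside L ∪ R
            w_cut += w
    vol = sum(degrees[v] for v in S)
    return (2 * w_LL + 2 * w_RR + w_cut) / vol

# A 4-cycle is bipartite, so beta_G of its two sides is 0.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(beta_G(edges, [1.0] * 4, {v: 2.0 for v in range(4)}, {0, 2}, {1, 3}))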
Hypergraphs. Let H = (VH, EH, w) be a hypergraph with n = |VH| vertices and weight function w : EH → R+. For any vertex v ∈ VH, the degree of v is defined by dH(v) ≜ ∑_{e∈EH} w(e) · I[v ∈ e], where I[X] = 1 if event X holds and I[X] = 0 otherwise. The rank of edge e ∈ EH is the total number of vertices in e. For any A, B ⊂ VH, the cut value between A and B is defined by

w(A, B) ≜ ∑_{e∈EH} w(e) · I[e ∩ A ≠ ∅ ∧ e ∩ B ≠ ∅].

Sometimes, we are required to analyse the weights of edges that intersect some vertex sets and not others. To this end, we define for any A, B, C ⊆ VH that

w(A, B | C) ≜ ∑_{e∈EH} w(e) · I[e ∩ A ≠ ∅ ∧ e ∩ B ≠ ∅ ∧ e ∩ C = ∅],
and we sometimes write w(A | C) ≜ w(A, A | C) for simplicity. Generalising the notion of the bipartiteness ratio of a 2-graph, the bipartiteness ratio of sets L, R in a hypergraph H is defined by

βH(L, R) ≜ ( 2w(L | V \ L) + 2w(R | V \ R) + w(L, V \ (L ∪ R) | R) + w(R, V \ (L ∪ R) | L) ) / vol(L ∪ R),

and we define βH ≜ min_{S⊂V} βH(S, V \ S). For any hypergraph H and f ∈ Rn, we define the discrepancy of an edge e ∈ EH with respect to f as

∆f(e) ≜ max_{u∈e} f(u) + min_{v∈e} f(v).
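To make these definitions concrete, the sketch below (our own illustration) evaluates βH(L, R) and the discrepancy ∆f(e) for a hypergraph stored as a list of vertex sets with weights; the four branches mirror the four terms of the definition above.

def beta_H(edges, w, degrees, L, R):
    # Hypergraph bipartiteness ratio; `edges` is a list of vertex sets.
    L, R = set(L), set(R)
    num = 0.0
    for e, we in zip(edges, w):
        hits_L, hits_R = bool(e & L), bool(e & R)
        outside = bool(e - (L | R))
        if hits_L and not hits_R and not outside:
            num += 2 * we  # edge entirely inside L
        elif hits_R and not hits_L and not outside:
            num += 2 * we  # edge entirely inside R
        elif hits_L and outside and not hits_R:
            num += we      # touches L and the outside, avoids R
        elif hits_R and outside and not hits_L:
            num += we      # touches R and the outside, avoids L
    return num / sum(degrees[v] for v in L | R)

def discrepancy(e, f):
    # Delta_f(e) = max_{u in e} f(u) + min_{v in e} f(v)
    vals = [f[v] for v in e]
    return max(vals) + min(vals)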
For any non-linear operator J : Rn → Rn, we say that (λ, f) is an eigen-pair if and only if Jf = λf, and note that in general a non-linear operator can have any number of eigenvalues and eigenvectors.
It is important to remember that throughout the paper, we always use the letter H to represent a hypergraph, and G to represent a 2-graph.
Clique reduction. The clique reduction of a hypergraph H is a 2-graph G such that VG = VH and for every edge e ∈ EH , G contains a clique on the vertices in e with edge weights 1/(re − 1) where re is the rank of the edge e. The clique reduction is a common tool for designing hypergraph algorithms [1, 7, 35], and for this reason we use it as a baseline algorithm in this paper. We note that hypergraph algorithms based on the clique reduction often perform less well when there are edges with large rank in the hypergraph. Specifically, in Appendix C we use two r-uniform hypergraphs as examples to show that no matter how we weight the edges in the clique reduction, some cuts cannot be approximated better than a factor of O(r). This is one of the main reasons to develop spectral theory for hypergraphs through heat diffusion processes [5, 27, 33].
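A minimal sketch of the clique reduction used as the baseline (our own illustration): each rank-re hyperedge of weight w(e) becomes a clique whose 2-graph edges each receive weight w(e)/(re − 1).

from collections import defaultdict
from itertools import combinations

def clique_reduction(hyperedges, weights):
    # Returns the weighted adjacency of the clique reduction as a dict
    # mapping vertex pairs to accumulated edge weights.
    A = defaultdict(float)
    for e, we in zip(hyperedges, weights):
        r = len(e)  # rank of the hyperedge
        for u, v in combinations(sorted(e), 2):
            A[(u, v)] += we / (r - 1)
    return A

# Edge (1, 2) lies in both triangles, so it accumulates weight 1/2 + 1/2.
print(clique_reduction([{0, 1, 2}, {1, 2, 3}], [1.0, 1.0])[(1, 2)])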
3 Diffusion process and the algorithm
In this section, we propose a new diffusion process in hypergraphs and use it to design a polynomialtime algorithm for finding bipartite components in hypergraphs. We first study 2-graphs to give some intuition, and then generalise to hypergraphs and describe our algorithm. Finally, we sketch some of the detailed analysis which proves that the diffusion process is well defined.
3.1 The diffusion process in 2-graphs
To discuss the intuition behind our designed diffusion process, let us look at the case of 2-graphs. Let G = (V,E,w) be a 2-graph, and we have for any x ∈ Rn that
xᵀ𝒥G x / (xᵀx) = xᵀ(I + DG^{-1/2} AG DG^{-1/2}) x / (xᵀx).

By setting x = DG^{1/2} y, we have that

xᵀ𝒥G x / (xᵀx) = yᵀ DG^{1/2} 𝒥G DG^{1/2} y / (yᵀ DG y) = yᵀ(DG + AG) y / (yᵀ DG y) = [ ∑_{{u,v}∈EG} w(u, v) · (y(u) + y(v))² ] / [ ∑_{u∈VG} dG(u) · y(u)² ]. (1)
It is easy to see that λ1(𝒥G) = 0 if G is bipartite, and it is known that λ1(𝒥G) and its corresponding eigenvector f1(𝒥G) are closely related to two densely connected components of G [28]. Moreover, similar to the heat equation for graph Laplacians LG, suppose DG ft ∈ Rn is some measure on the vertices of G; then a diffusion process defined by the differential equation

dft/dt = −DG^{-1} JG ft (2)

will converge to the minimum eigenvalue of DG^{-1} JG and can be employed to find two densely connected components of the underlying 2-graph.3
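The 2-graph diffusion (2) is easy to simulate; a minimal sketch follows (our own illustration, using explicit Euler steps of size eps, which plays the role of the step size ε in Algorithm 1 below):

import numpy as np

def diffuse_2graph(A, f0, eps=0.1, steps=500):
    # Simulate df/dt = -D^{-1} (D + A) f and return the final vector and
    # its Rayleigh quotient f^T J_G f / f^T D_G f, which tends to lambda_1.
    d = A.sum(axis=1)
    J = np.diag(d) + A
    f = f0.astype(float)
    for _ in range(steps):
        f = f - eps * (J @ f) / d      # f_{t+eps} = f_t - eps * D^{-1} J_G f_t
        f = f / np.sqrt(f @ (d * f))   # renormalise in the D-inner product
    return f, (f @ (J @ f)) / (f @ (d * f))

# On a 4-cycle (bipartite) the quotient converges to lambda_1 = 0.
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
print(diffuse_2graph(A, np.array([1.0, -0.3, 0.8, -0.9]))[1])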
3.2 The hypergraph diffusion and our algorithm
Now we study whether one can construct a new hypergraph operator JH which generalises the diffusion in 2-graphs to hypergraphs. First of all, we focus on a fixed time t with measure vector DHft ∈ Rn and ask whether we can follow (2) and define the rate of change
dft/dt = −DH^{-1} JH ft
so that the diffusion can proceed for an infinitesimal time step. Our intuition is that the rate of change due to some edge e ∈ EH should involve only the vertices in e with the maximum or minimum value in the normalised measure ft. To formalise this, for any edge e ∈ EH , we define
Sf(e) ≜ {v ∈ e : ft(v) = max_{u∈e} ft(u)} and If(e) ≜ {v ∈ e : ft(v) = min_{u∈e} ft(u)}.
3For the reader familiar with the heat diffusion process of 2-graphs (e.g., [9, 17]), we remark that the above-defined process essentially employs the operation JG to replace the Laplacian LG when defining the heat diffusion: through JG, the heat diffusion can be used to find two densely connected components of G.
That is, for any edge e and normalised measure ft, Sf(e) ⊆ e consists of the vertices v adjacent to e whose ft(v) values are maximum and If(e) ⊆ e consists of the vertices v adjacent to e whose ft(v) values are minimum. See Figure 2 for an example. Then, applying the JH operator to a vector ft should be equivalent to applying the operator JG for some 2-graph G which we construct by splitting the weight of each hyperedge e ∈ EH between the edges in Sf(e) × If(e). Similar to the case for 2-graphs and (1), for any x = DH^{1/2} ft this will give us the quadratic form

xᵀ DH^{-1/2} JH DH^{-1/2} x / (xᵀx) = ftᵀ JG ft / (ftᵀ DH ft) = [ ∑_{{u,v}∈EG} wG(u, v) · (ft(u) + ft(v))² ] / [ ∑_{u∈VG} dH(u) · ft(u)² ] = [ ∑_{e∈EH} wH(e) (max_{u∈e} ft(u) + min_{v∈e} ft(v))² ] / [ ∑_{u∈VH} dH(u) · ft(u)² ],
where wG(u, v) is the weight of the edge {u, v} in G, and wH(e) is the weight of the edge e in H . We will show in the proof of Theorem 1 that JH has an eigenvalue of 0 if the hypergraph is 2-colourable4, and that the spectrum of JH is closely related to the hypergraph bipartiteness.
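Computing Sf(e) and If(e) is straightforward; below is a short helper (our own illustration, reused in the diffusion-step sketch of Section 4):

def extrema_sets(e, f, tol=1e-12):
    # Return (S_f(e), I_f(e)): vertices of e attaining the max / min of f.
    vals = {v: f[v] for v in e}
    hi, lo = max(vals.values()), min(vals.values())
    S = {v for v, x in vals.items() if x >= hi - tol}
    I = {v for v, x in vals.items() if x <= lo + tol}
    return S, I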
For this reason, we would expect that the diffusion process based on the operator JH can be used to find sets with small hypergraph bipartiteness. However, one needs to be very cautious here as, by the nature of the diffusion process, the values ft(v) of all the vertices v change over time and, as a result, the sets Sf(e) and If(e) that consist of the vertices with the maximum and minimum ft-value might change after an infinitesimal time step; this will prevent the process from continuing. We will discuss this issue in detail through the so-called Diffusion Continuity Condition in Section 3.3. In essence, the diffusion continuity condition ensures that one can always construct a 2-graph G by allocating the weight of each hyperedge e to the edges in Sf(e) × If(e) such that the sets Sf(e) and If(e) will not change in infinitesimal time although ft changes according to dft/dt = −DH^{-1} JG ft. We will also present an efficient procedure in Section 3.3 to compute the weights of edges in Sf(e) × If(e). All of these guarantee that (i) every 2-graph that corresponds to the hypergraph diffusion process at any time step can be efficiently constructed; (ii) with this sequence of constructed 2-graphs, the diffusion process defined by JH is able to continue until the heat distribution converges. With this, we summarise the main idea of our presented algorithm as follows:
• First of all, we introduce some arbitrary f0 ∈ Rn as the initial diffusion vector, and a step size parameter ε > 0 to discretise the diffusion process. At each step, the algorithm constructs the 2-graph G guaranteed by the diffusion continuity condition, and updates ft ∈ Rn according to the rate of change dft/dt = −DH^{-1} JG ft. The algorithm terminates when ft has converged, i.e., the ratio between the current Rayleigh quotient (ftᵀ JG ft)/(ftᵀ DH ft) and the one in the previous time step is bounded by some predefined constant.
• Secondly, similar to many previous spectral graph clustering algorithms (e.g. [3, 27, 28]), the algorithm constructs the sweep sets defined by ft and returns the two sets with minimum βH-value among all the constructed sweep sets (see the sketch after this list). Specifically, for every 1 ≤ j ≤ n, the algorithm constructs Lj = {vi : |ft(vi)| ≥ |ft(vj)| ∧ ft(vi) < 0} and Rj = {vi : |ft(vi)| ≥ |ft(vj)| ∧ ft(vi) ≥ 0}. Then, between the n pairs (Lj, Rj), the algorithm returns the one with the minimum βH-value.
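A sketch of the two-sided sweep (our own illustration; beta_H is the helper defined in the Section 2 example):

import numpy as np

def two_sided_sweep(f, edges, w, degrees):
    # Return the pair (L, R) of two-sided sweep sets minimising beta_H.
    order = np.argsort(-np.abs(f))  # vertices by decreasing |f(v)|
    best, best_beta = None, float("inf")
    for j in range(1, len(f) + 1):
        top = order[:j]
        L = {int(v) for v in top if f[v] < 0}
        R = {int(v) for v in top if f[v] >= 0}
        if L and R:
            b = beta_H(edges, w, degrees, L, R)
            if b < best_beta:
                best, best_beta = (L, R), b
    return best, best_beta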
See Algorithm 1 for the formal description, and its performance is summarised in Theorem 1.
Theorem 1 (Main Result). Given a hypergraph H = (VH, EH, w) and parameter ε > 0, the following holds:
1. There is an algorithm that finds an eigen-pair (λ, f) of the operator JH such that λ ≤ λ1(JG), where G is the clique reduction of H, and the inequality is strict if min_{e∈EH} re > 2, where re is the rank of e. The algorithm runs in poly(|VH|, |EH|, 1/ε) time.
2. Given an eigen-pair (λ, f) of the operator JH, there is an algorithm that constructs the two-sided sweep sets defined on f, and finds sets L and R such that βH(L, R) ≤ √(2λ). The algorithm runs in poly(|VH|, |EH|) time.
4 Hypergraph H is 2-colourable if there are disjoint sets L, R ⊂ VH such that every edge intersects L and R.
Algorithm 1: FINDBIPARTITECOMPONENTS
Input: Hypergraph H, starting vector f0 ∈ Rn, step size ε > 0
Output: Sets L and R
t := 0
while ft has not converged do
    Use ft to construct a 2-graph G satisfying the diffusion continuity condition
    ft+ε := ft − ε · DH^{-1} JG ft
    t := t + ε
end
Set j := arg min_{1≤i≤n} βH(Li, Ri)
return (Lj, Rj)
Remark 1. We make the important remark that there is no polynomial-time algorithm which guarantees any multiplicative approximation of the minimum hypergraph bipartiteness value βH, unless P = NP. We prove this in Appendix C by a reduction from the NP-complete HYPERGRAPH 2-COLOURABILITY problem. This means that the problem we consider in this work is fundamentally more difficult than the equivalent problem for 2-graphs, as well as the problem of finding a sparse cut in a hypergraph. For this reason, the analysis of the non-linear hypergraph Laplacian operator [5, 27] cannot be applied in our case.
3.3 Dealing with the diffusion continuity condition
It remains for us to discuss the diffusion continuity condition, which guarantees that Sf (e) and If (e) will not change in infinitesimal time and the diffusion process will eventually converge to some stable distribution. Formally, let ft be the normalised measure on the vertices of H , and let
r ≜ dft/dt = −DH^{-1} JH ft
be the derivative of ft, which describes the rate of change for every vertex at the current time t. We write r(v) for any v ∈ VH as r(v) = ∑_{e∈EH} re(v), where re(v) is the contribution of edge e towards the rate of change of v. Now we discuss three rules that we expect the diffusion process to satisfy, and later prove that these three rules uniquely define the rate of change r.
First of all, as we mentioned in Section 3.2, we expect that only the vertices in Sf (e) ∪ If (e) will participate in the diffusion process, i.e., re(u) = 0 unless u ∈ Sf (e) ∪ If (e). Moreover, any vertex u participating in the diffusion process must satisfy the following:
• Rule (0a): if |re(u)| > 0 and u ∈ Sf(e), then r(u) = max_{v∈Sf(e)} r(v).
• Rule (0b): if |re(u)| > 0 and u ∈ If(e), then r(u) = min_{v∈If(e)} r(v).
To explain Rule (0), notice that for an infinitesimal time, ft(u) will be increased according to (dft/dt) (u) = r(u). Hence, by Rule (0) we know that, if u ∈ Sf (e) (resp. u ∈ If (e)) participates in the diffusion process in edge e, then in an infinitesimal time f(u) will remain the maximum (resp. minimum) among the vertices in e. Such a rule is necessary to ensure that the vertices involved in the diffusion in edge e do not change in infinitesimal time, and the diffusion process is able to continue.
Our next rule states that the total rate of change of the measure due to edge e is equal to −w(e) · ∆f(e):
• Rule (1): ∑_{v∈Sf(e)} d(v) re(v) = ∑_{v∈If(e)} d(v) re(v) = −w(e) · ∆f(e) for all e ∈ EH.
This rule is a generalisation from the operator JG in 2-graphs. In particular, since DG^{-1} JG ft(u) = ∑_{{u,v}∈EG} wG(u, v)(ft(u) + ft(v))/dG(u), the rate of change of ft(u) due to the edge {u, v} ∈ EG is −wG(u, v)(ft(u) + ft(v))/dG(u). Rule (1) states that in the hypergraph case the rate of change of the vertices in Sf(e) and If(e) together behave like the rate of change of u and v in the 2-graph case.
One might have expected that these two rules together will define a unique process. Unfortunately, this isn’t the case and we present a counterexample in Appendix A. To overcome this, we introduce the following stronger rule to replace Rule (0):
• Rule (2a): Assume that |re(u)| > 0 and u ∈ Sf(e).
  – If ∆f(e) > 0, then r(u) = max_{v∈Sf(e)} r(v);
  – If ∆f(e) < 0, then r(u) = r(v) for all v ∈ Sf(e).
• Rule (2b): Assume that |re(u)| > 0 and u ∈ If(e):
  – If ∆f(e) < 0, then r(u) = min_{v∈If(e)} r(v);
  – If ∆f(e) > 0, then r(u) = r(v) for all v ∈ If(e).
Notice that the first conditions of Rules (2a) and (2b) correspond to Rules (0a) and (0b) respectively; the second conditions are introduced for purely technical reasons: they state that, if the discrepancy of e is negative (resp. positive), then all the vertices u ∈ Sf (e) (resp. u ∈ If (e)) will have the same value of r(u). Theorem 2 shows that there is a unique r ∈ Rn that satisfies Rules (1) and (2), and r can be computed in polynomial time. Therefore, our two rules uniquely define a diffusion process, and we can use the computed r to simulate the continuous diffusion process with a discretised version.5
Theorem 2. For any given ft ∈ Rn, there is a unique r = dft/dt and associated {re(v)}e∈E,v∈V that satisfy Rule (1) and (2), and r can be computed in polynomial time by linear programming. Remark 2. The rules we define and the proof of Theorem 2 are more involved than those used in [5] to define the hypergraph Laplacian operator. In particular, in contrast to [5], in our case the discrepancy ∆f (e) within a hyperedge e can be either positive or negative. This results in the four different cases in Rule (2) which must be carefully considered throughout the proof of Theorem 2.
4 Experiments
In this section, we evaluate the performance of our new algorithm on synthetic and real-world datasets. All algorithms are implemented in Python 3.6, using the scipy library for sparse matrix representations and linear programs. The experiments are performed using an Intel(R) Core(TM) i5-8500 CPU @ 3.00GHz processor, with 16 GB RAM. Our code can be downloaded from https://github.com/pmacg/hypergraph-bipartite-components.
Since ours is the first proposed algorithm for approximating hypergraph bipartiteness, we will compare it to a simple and natural baseline algorithm, which we call CLIQUECUT (CC). In this algorithm, we construct the clique reduction of the hypergraph and use the two-sided sweep-set algorithm described in [28] to find a set with low bipartiteness in the clique reduction.6
Additionally, we will compare two versions of our proposed algorithm. FINDBIPARTITECOMPONENTS (FBC) is our new algorithm described in Algorithm 1 and FBCAPPROX (FBCA) is an approximate version in which we do not solve the linear programs in Theorem 2 to compute the graph G. Instead, at each step of the algorithm, we construct G by splitting the weight of each hyperedge e evenly between the edges in S(e) × I(e). We always set the parameter ε = 1 for FBC and FBCA, and we set the starting vector f0 ∈ Rn for the diffusion to be the eigenvector corresponding to the minimum eigenvalue of JG, where G is the clique reduction of the hypergraph H.
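One FBCA-style diffusion step can be sketched as follows (our own illustration, reusing extrema_sets from Section 3.2; here d holds the hypergraph degrees dH(u)):

import numpy as np

def fbca_step(f, edges, w, d, eps=1.0):
    # Approximate update f <- f - eps * D_H^{-1} J_G f, where G splits each
    # hyperedge's weight evenly over the pairs in S_f(e) x I_f(e).
    Jf = np.zeros(len(f))
    for e, we in zip(edges, w):
        S, I = extrema_sets(e, f)
        share = we / (len(S) * len(I))  # even split over S_f(e) x I_f(e)
        for u in S:
            for v in I:
                # edge {u, v} contributes w(u,v)(f(u)+f(v)) to (J_G f) at u and v
                Jf[u] += share * (f[u] + f[v])
                Jf[v] += share * (f[u] + f[v])
    return f - eps * Jf / d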
4.1 Synthetic datasets
We first evaluate the algorithms using a random hypergraph model. Given the parameters n, r, p, and q, we generate an n-vertex r-uniform hypergraph in the following way: the vertex set V is divided into two clusters L and R of size n/2. For every set S ⊂ V with |S| = r, if S ⊂ L or S ⊂ R we add the hyperedge S with probability p, and otherwise we add the hyperedge with probability q. We remark that this is a special case of the hypergraph stochastic block model (e.g., [6]). We limit the number of free parameters for simplicity while maintaining enough flexibility to generate random hypergraphs with a wide range of optimal βH-values.
5 Note that the graph G used for the diffusion at time t can be easily computed from the {re(v)} values, although in practice this is not actually needed since the r(u) values can be used to update the diffusion directly.
6 We choose to use the algorithm in [28] here since, as far as we know, this is the only non-SDP based algorithm for solving the MAX-CUT problem for 2-graphs. Notice that, although SDP-based algorithms achieve a better approximation ratio for the MAX-CUT problem, they are not practical even for hypergraphs of medium sizes.
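A small-n sketch of this generator (our own illustration; enumerating all (n choose r) candidate sets is only feasible for tiny instances, so for n = 2000 one would instead sample the binomially distributed number of edges of each type):

import random
from itertools import combinations

def random_hypergraph(n, r, p, q, seed=0):
    rng = random.Random(seed)
    L = set(range(n // 2))
    edges = []
    for S in combinations(range(n), r):
        S = set(S)
        within = S <= L or not (S & L)  # S lies inside one cluster
        if rng.random() < (p if within else q):
            edges.append(S)
    return edges

edges = random_hypergraph(n=20, r=3, p=0.2, q=0.05)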
We will compare the algorithms’ performance using four metrics: the hypergraph bipartiteness ratio βH(L,R), the clique graph bipartiteness ratio βG(L,R), the F1-score [30] of the returned clustering, and the runtime of the algorithm. Throughout this subsection, we always report the average result on 10 hypergraphs randomly generated with each parameter configuration.
Comparison of FBC and FBCA. We first fix the values n = 200, r = 3, and p = 10⁻⁴ and vary the ratio of q/p from 2 to 6, which produces hypergraphs with 250 to 650 edges. The performance of each algorithm on these hypergraphs is shown in Figure 3, from which we can make the following observations:
• From Figure 3 (a) we observe that FBC and FBCA find sets with very similar bipartiteness, and they perform better than the CLIQUECUT baseline.
• From Figure 3 (b) we can see that our proposed algorithms produce output with a lower βG-value than the output of the CLIQUECUT algorithm. This is a surprising result given that CLIQUECUT operates directly on the clique graph.
• Figure 3 (c) shows that the FBCA algorithm is much faster than FBC.
From these observations, we conclude that in practice it is sufficient to use the much faster FBCA algorithm in place of the FBC algorithm.
Experiments on larger graphs. We now compare only the FBCA and CLIQUECUT algorithms, which allows us to run on hypergraphs with higher rank and number of vertices. We fix the parameters n = 2000, r = 5, and p = 10⁻¹¹, producing hypergraphs with between 5000 and 75000 edges7, and show the results in Figure 4. Our algorithm consistently and significantly outperforms the baseline on every metric and across a wide variety of input hypergraphs.
To compare the algorithms' runtime, we fix the parameter n = 2000 and the ratio q = 2p and report the runtime of the FBCA and CC algorithms on a variety of hypergraphs in Table 1. Our proposed algorithm takes more time than the baseline CC algorithm, but both appear to scale linearly in the size of the input hypergraph8, which suggests that our algorithm's runtime is roughly a constant factor multiple of the baseline.
7 In our model, very small values of p and q are needed since an n-vertex, r-uniform hypergraph has (n choose r) possible edges, which can be a very large number. In this case, (2000 choose 5) ≈ 2.6 × 10¹⁴.
Figure 4: The average performance of each algorithm when n = 2000, r = 5, and p = 10⁻¹¹; the three panels show (a) the βH-value, (b) the βG-value, and (c) the F1-score. We omit the error bars because they are too small to read.
4.2 Real-world datasets
Next, we demonstrate the broad utility of our algorithm on complex real-world datasets with higher-order relationships which are most naturally represented by hypergraphs. Moreover, the hypergraphs are inhomogeneous, meaning that they contain vertices of different types, although this information is not available to the algorithm, and so an algorithm has to treat every vertex identically. We demonstrate that our algorithm is able to find clusters which correspond to the vertices of different types. Table 2 shows the F1-score of the clustering produced by our algorithm on each dataset and demonstrates that it consistently outperforms the CLIQUECUT algorithm.
Penn Treebank. The Penn Treebank dataset is an English-language corpus with examples of written American English from several sources, including fiction and journalism [22]. The dataset contains 49,208 sentences and over 1 million words, which are labelled with their part of speech. We construct a hypergraph in the following way: the vertex set consists of all the verbs, adverbs, and adjectives which occur at least 10 times in the corpus, and for every 4-gram (a sequence of 4 words) we add a hyperedge containing the co-occurring words. This results in a hypergraph with 4,686 vertices and 176,286 edges. The clustering returned by our algorithm correctly distinguishes between verbs and non-verbs with an accuracy of 67%. This experiment demonstrates that our unsupervised general-purpose algorithm is capable of recovering non-trivial structure in a dataset which would ordinarily be clustered using significant domain knowledge, or a complex pre-trained model [2, 16].
DBLP. We construct a hypergraph from a subset of the DBLP network consisting of 14,376 papers published in artificial intelligence and machine learning conferences [12, 32]. For each paper, we include a hyperedge linking the authors of the paper with the conference in which it was published, giving a hypergraph with 14,495 vertices and 14,376 edges. The clusters returned by our algorithm successfully separate the authors from the conferences with an accuracy of 100%.
5 Concluding remarks
In this paper, we introduce a new hypergraph Laplacian-type operator and apply this operator to design an algorithm that finds almost bipartite components in hypergraphs. Our experimental results demonstrate the potentially wide applications of spectral hypergraph theory, and so we believe that designing faster spectral hypergraph algorithms is an important future research direction in algorithms and machine learning. This will allow spectral hypergraph techniques to be applied more effectively to analyse the complex datasets which occur with increasing frequency in the real world.
8 Although n is fixed, the CLIQUECUT algorithm's runtime is not constant since the time to compute an eigenvalue of the sparse adjacency matrix scales with the number and rank of the hyperedges.
Acknowledgements
Peter Macgregor is supported by the Langmuir PhD Scholarship, and He Sun is supported by an EPSRC Early Career Fellowship (EP/T00729X/1). | 1. What is the focus and contribution of the paper on finding densely connected bipartite components in hypergraphs?
2. What are the strengths of the proposed algorithm, particularly in its application and technical analysis?
3. Do you have any concerns or questions regarding the heat diffusion process and its limitation to simple nonlinear operators?
4. How does the reviewer assess the clarity and solidity of the paper's technical analyses?
5. What are the reviewer's suggestions for improving the paper, such as providing more explanations or considering more complex higher-order relations? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a polynomial-time algorithm that finds densely connected bipartite components in a hypergraph. The algorithm is based on a heat diffusion process generalized from graphs to hypergraphs. Cheeger-type approximation guarantee is established. Empirical results show that the new method has superior performance for both synthetic and real hypergraphs.
Review
Hypergraphs and related nonlinear Laplacian operators have received renewed interest recently in both the machine learning and applied mathematics communities. The heat diffusion process studied in this paper is a nice addition; in particular, it applies to finding bipartite components in hypergraphs, which is a different application from most previous works. At the same time, the nonlinear operator considered in this work is simple and intuitive.
Technical analyses in this paper appear to be clear and solid. Overall I think the paper is well written and it solves an interesting problem. Empirical results are promising and further support the usefulness of the proposed method.
I have a few minor comments:
line 129-131: Why is it intuitive that the rate of change should involve only the maximum and minimum values? Apparently this is the simplest case, but in more complex settings (e.g. when one models more complex higher-order relations as in [22]) can it depend on other node values as well? The formulation considered in this paper seems to correspond to hypergraphs modelled by all-or-nothing cut (or equivalently the unit cut-cost in [13]). Can the heat diffusion process be generalized to more complex higher-order relations among nodes within a hypergraph? More explanations are needed.
Baseline: Following footnote 4, another baseline could be f1(JG). Although the authors explained in the supplementary material that the worst-case performance guarantee using f1(JG) is scaled by a factor of r, the rank of the hyperedge, it would provide additional information to the reader about the practical importance of the new method.
Synthetic experiments: Since you have the ground-truth information can you also show the F1 scores? |
NIPS | Title
Training Image Estimators without Image Ground Truth
Abstract
Deep neural networks have been very successful in image estimation applications such as compressive-sensing and image restoration, as a means to estimate images from partial, blurry, or otherwise degraded measurements. These networks are trained on a large number of corresponding pairs of measurements and ground-truth images, and thus implicitly learn to exploit domain-specific image statistics. But unlike measurement data, it is often expensive or impractical to collect a large training set of ground-truth images in many application settings. In this paper, we introduce an unsupervised framework for training image estimation networks, from a training set that contains only measurements—with two varied measurements per image—but no ground-truth for the full images desired as output. We demonstrate that our framework can be applied for both regular and blind image estimation tasks, where in the latter case parameters of the measurement model (e.g., the blur kernel) are unknown: during inference, and potentially, also during training. We evaluate our method for training networks for compressive-sensing and blind deconvolution, considering both non-blind and blind training for the latter. Our unsupervised framework yields models that are nearly as accurate as those from fully supervised training, despite not having access to any ground-truth images.
1 Introduction
Reconstructing images from imperfect observations is a classic inference task in many imaging applications. In compressive sensing [8], a sensor makes partial measurements for efficient acquisition. These measurements correspond to a low-dimensional projection of the higher-dimensional image signal, and the system relies on computational inference for recovering the full-dimensional image. In other cases, cameras capture degraded images that are low-resolution, blurry, etc., and require a restoration algorithm [10, 29, 34] to recover a corresponding un-corrupted image. Deep convolutional neural networks (CNNs) have recently emerged as an effective tool for such image estimation tasks [4, 6, 7, 12, 27, 30, 31]. Specifically, a CNN for a given application is trained on a large dataset that consists of pairs of ground-truth images and observed measurements (in many cases where the measurement or degradation process is well characterized, having a set of ground-truth images is sufficient to generate corresponding measurements). This training set allows the CNN to learn to exploit the expected statistical properties of images in that application domain, to solve what is essentially an ill-posed inverse problem.
But for many domains, it is impractical or prohibitively expensive to capture full-dimensional or un-corrupted images, and construct such a large representative training set. Unfortunately, it is often in such domains that a computational imaging solution is most useful. Recently, Lehtinen et al. [14] proposed a solution to this issue for denoising, with a method that trains with only pairs of noisy observations. While their method yields remarkably accurate network models without needing any
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
ground-truth images for training, it is applicable only to the specific case of estimation from noisy measurements—when each image intensity is observed as a sample from a (potentially unknown) distribution with mean or mode equal to its corresponding true value.
In this work, we introduce an unsupervised method for training image estimation networks that can be applied to a general class of observation models—where measurements are a linear function of the true image, potentially with additive noise. As training data, it only requires two observations for the same image but not the underlying image itself1. The two measurements in each pair are made with different parameters (such as different compressive measurement matrices or different blur kernels), and these parameters vary across different pairs. Collecting such a training set provides a practical alternative to the more laborious one of collecting full image ground-truth. Given these measurements, our method trains an image estimation network by requiring that its prediction from one measurement of a pair be consistent with the other measurement, when observed with the corresponding parameter. With sufficient diversity in measurement parameters for different training pairs, we show this is sufficient to train an accurate network model despite lacking direct ground-truth supervision.
While our method requires knowledge of the measurement model (e.g., blur by convolution), it also incorporates a novel mechanism to handle the blind setting during training—when the measurement parameters (e.g., the blur kernels) for training observations are unknown. To be able to enforce consistency as above, we use an estimator for measurement parameters that is trained simultaneously using a “proxy” training set. This set is created on-the-fly by taking predictions from the image network even as it trains, and pairing them with observations synthetically created using randomly sampled, and thus known, parameters. The proxy set provides supervision for training the parameter estimator, and to augment training of the image estimator as well. This mechanism allows our method to nearly match the accuracy of fully supervised training on image and parameter ground-truth.
We validate our method with experiments on image reconstruction from compressive measurements and on blind deblurring of face images, with blind and non-blind training for the latter, and compare to fully-supervised baselines with state-of-the-art performance. The supervised baselines use a training set of ground-truth images and generate observations with random parameters on the fly in each epoch, to create a much larger number of effective image-measurement pairs. In contrast, our method is trained with only two measurements per image from the same training set (but not the image itself), with the pairs kept fixed through all epochs of training. Despite this, our unsupervised training method yields models with test accuracy close to that of the supervised baselines, and thus presents a practical way to train CNNs for image estimation when lacking access to image ground truth.
1 Note that at test time, the trained network only requires one observation as input as usual.
2 Related Work
CNN-based Image Estimation. Many imaging tasks require inverting the measurement process to obtain a clean image from the partial or degraded observations—denoising [3], deblurring [29], super-resolution [10], compressive sensing [8], etc. While traditionally solved using statistical image priors [9, 25, 34], CNN-based estimators have been successfully employed for many of these tasks. Most methods [4, 6, 7, 12, 22, 27, 30, 31] learn a network to map measurements to corresponding images from a large training set of pairs of measurements and ideal ground-truth images. Some learn CNN-based image priors, as denoisers [5, 23, 31] or GANs [1], that are agnostic to the inference task (denoising, deblurring, etc.), but still tailored to a chosen class of images. All these methods require access to a large domain-specific dataset of ground-truth images for training. However, capturing image ground-truth is burdensome or simply infeasible in many settings (e.g., for MRI scans [18] and other biomedical imaging applications). In such settings, our method provides a practical alternative by allowing estimation networks to be trained from measurement data alone.
Unsupervised Learning. Unsupervised learning for CNNs is broadly useful in many applications where large-scale training data is hard to collect. Accordingly, researchers have proposed unsupervised and weakly-supervised methods for such applications, such as depth estimation [11, 32], intrinsic image decomposition [16, 19], etc. However, these methods are closely tied to their specific applications. In this work, we seek to enable unsupervised learning for image estimation networks. In the context of image modeling, Bora et al. [2] propose a method to learn a GAN model from only degraded observations. Their method, like ours, includes a measurement model with its discriminator for training (but requires knowledge of measurement parameters, while we are able to handle the blind setting). Their method proves successful in training a generator for ideal images. We seek a similar unsupervised means for training image reconstruction and restoration networks.
The closest work to ours is the recent Noise2Noise method of Lehtinen et al. [14], who propose an unsupervised framework for training denoising networks by training on pairs of noisy observations of the same image. In their case, supervision comes from requiring the denoised output from one observation be close to the other. This works surprisingly well, but is based on the assumption that the expected or median value of the noisy observations is the image itself. We focus on a more general class of observation models, which requires injecting the measurement process in loss computation. We also introduce a proxy training approach to handle blind image estimation applications.
Also related are the works of Metzler et al. [21] and Zhussip et al. [33], that use Stein’s unbiased risk estimator for unsupervised training from only measurement data, for applications in compressive sensing. However, these methods are specific to estimators based on D-AMP estimation [20], since they essentially train denoiser networks for use in unrolled AMP iterations for recovery from compressive measurements. In contrast, ours is a more general framework that can be used to train generic neural network estimators.
3 Proposed Approach
Given a measurement y ∈ RM of an ideal image x ∈ RN that are related as
y = θ x + ε, (1)

our goal is to train a CNN to produce an estimate x̂ of the image from y. Here, ε ∼ pε is random noise with distribution pε(·) that is assumed to be zero-mean and independent of the image x, and the parameter θ is an M × N matrix that models the linear measurement operation. Often, the measurement matrix θ is structured with fewer than MN degrees of freedom based on the measurement model—e.g., it is block-Toeplitz for deblurring with entries defined by the blur kernel. We consider both non-blind estimation when the measurement parameter θ is known for a given measurement during inference, and the blind setting where θ is unavailable but we know the distribution pθ(·). For blind estimators, we address both non-blind and blind training—when θ is known for each measurement in the training set but not at test time, and when it is unknown during training as well.
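As a concrete instance of (1), the following sketch (our own illustration) simulates a compressive measurement of a vectorised 33 × 33 patch with a random Gaussian θ and additive noise:

import numpy as np

rng = np.random.default_rng(0)
N, M = 33 * 33, 109                       # roughly a 10% compression ratio
x = rng.random(N)                         # stand-in for a vectorised patch
theta = rng.standard_normal((M, N)) / np.sqrt(N)
eps = 0.01 * rng.standard_normal(M)
y = theta @ x + eps                       # the measurement model of Eq. (1)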
Since (1) is typically non-invertible, image estimation requires reasoning with the statistical distribution px(·) of images for the application domain, and conventionally, this is provided by a large training set of typical ground-truth images x. In particular, CNN-based image estimation methods train a network f : y → x̂ on a large training set {(xt, yt)}Tt=1 of pairs of corresponding images and measurements, based on a loss that measures error ρ(x̂t − xt) between predicted and true images across the training set. In the non-blind setting, the measurement parameter θ is known and provided as input to the network f (we omit this in the notation for convenience), while in the blind setting, the network must also reason about the unknown measurement parameter θ.
To avoid the need for a large number of ground-truth training images, we propose an unsupervised learning method that is able to train an image estimation network using measurements alone. Specifically, we assume we are given a training set of two measurements (yt:1, yt:2) for each image xt:
yt:1 = θt:1 xt + t:1, yt:2 = θt:2 xt + t:2, (2) but not the images {xt} themselves. We require the corresponding measurement parameters θt:1 and θt:2 to be different for each pair, and further, to also vary across different training pairs. These parameters are assumed to be known for the non-blind training setting, but not for blind training.
3.1 Unsupervised Training for Non-Blind Image Estimation
We begin with the simpler case of non-blind estimation, when the parameter θ for a given measurement y is known, both during inference and training. Given pairs of measurements with known parameters, our method trains the network f(·) using a “swap-measurement” loss on each pair, defined as:
Lswap = (1/T) ∑_t [ ρ( θt:2 f(yt:1) − yt:2 ) + ρ( θt:1 f(yt:2) − yt:1 ) ]. (3)
This loss evaluates the accuracy of the full images predicted by the network from each measurement in a pair, by comparing it to the other measurement—using an error function ρ(·)—after simulating observation with the corresponding measurement parameter. Note that Noise2Noise [14] can be seen as a special case of (3) when measurements are degraded only by noise, with θt:1 = θt:2 = I.
When the parameters θt:1, θt:2 used to acquire the training set are sufficiently diverse and statistically independent for each underlying xt, this loss provides sufficient supervision to train the network f(·). To see this, we consider using the L2 distance for the error function ρ(z) = ‖z‖², and note that (3) represents an empirical approximation of the expected loss over image, parameter, and noise distributions. Assuming the training measurement pairs are obtained using (2) with xt ∼ px, θt:1, θt:2 ∼ pθ, and εt:1, εt:2 ∼ pε drawn i.i.d. from their respective distributions, we have

Lswap ≈ 2 E_{x∼px} E_{θ1∼pθ} E_{ε1∼pε} E_{θ2∼pθ} E_{ε2∼pε} ‖θ2 f(θ1 x + ε1) − (θ2 x + ε2)‖²
      = 2σε² + 2 E_{x∼px} E_{θ∼pθ} E_{ε∼pε} ( f(θx + ε) − x )ᵀ Q ( f(θx + ε) − x ),  where Q = E_{θ′∼pθ}[ θ′ᵀ θ′ ]. (4)
Therefore, because the measurement matrices are independent, we find that in expectation the swap-measurement loss is equivalent to supervised training against the true image x, with an L2 loss that is weighted by the N × N matrix Q (up to an additive constant given by the noise variance). When the matrix Q is full-rank, the swap-measurement loss will provide supervision along all image dimensions, and will reach its theoretical minimum (2σε²) iff the network makes exact predictions.
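The full-rank condition on Q is easy to probe numerically; this toy sketch (our own illustration) averages θᵀθ over random row-subsampling matrices and confirms the result has rank N:

import numpy as np

rng = np.random.default_rng(1)
N, M, draws = 16, 4, 2000
Q = np.zeros((N, N))
for _ in range(draws):
    rows = rng.choice(N, size=M, replace=False)  # a random subsampling theta
    theta = np.eye(N)[rows]
    Q += theta.T @ theta
Q /= draws
print(np.linalg.matrix_rank(Q))  # N: diverse thetas jointly cover R^N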
The requirement that Q be full-rank implies that the distribution pθ of measurement parameters must be sufficiently diverse, such that the full set of parameters {θ}, used for training measurements, together span the entire domain RN of full images. Therefore, even though measurements made by individual θ—and even pairs of (θt:1, θt:2)—are incomplete, our method relies on the fact that the full set of measurement parameters used during training is complete. Indeed, for Q to be full-rank, it is important that there be no systematic deficiency in pθ (e.g., no vector direction in RN left unobserved by all measurement parameters used in training). Also note that while we derived (4) for the L2 loss, the argument applies to any error function ρ(·) that is minimized only when its input is 0. In addition to the swap loss, we also find it useful to train with an additional “self-measurement” loss that measures consistency between an image prediction and its own corresponding input measurement:
Lself = (1/T) ∑_t [ ρ( θt:1 f(yt:1) − yt:1 ) + ρ( θt:2 f(yt:2) − yt:2 ) ]. (5)
While not sufficient by itself, we find the additional supervision it provides to be practically useful in yielding more accurate network models since it provides more direct supervision for each training sample. Therefore, our overall unsupervised training objective is a weighted version of the two losses Lswap + γLself, with weight γ chosen on a validation set.
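A minimal PyTorch-style sketch of the combined objective (our own illustration; f is any image-estimation network and ρ is taken to be the squared error):

import torch

def unsup_loss(f, y1, y2, theta1, theta2, gamma=0.05):
    # Swap- and self-measurement losses of Eqs. (3) and (5) for one batch;
    # theta_i are (M, N) matrices and y_i are (B, M) measurement batches.
    x1, x2 = f(y1), f(y2)  # predicted (B, N) images
    swap = ((x1 @ theta2.T - y2) ** 2).mean() + \
           ((x2 @ theta1.T - y1) ** 2).mean()
    self_ = ((x1 @ theta1.T - y1) ** 2).mean() + \
            ((x2 @ theta2.T - y2) ** 2).mean()
    return swap + gamma * self_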
3.2 Unsupervised Training for Blind Image Estimation
We next consider the more challenging case of blind estimation, when the measurement parameter θ for an observation y is unknown—and specifically, the blind training setting, when it is unknown even during training. The blind training setting complicates the use of our unsupervised losses in (3) and (5), since the values of θt:1 and θt:2 used there are unknown. Also, blind estimation tasks often have a more diverse set of possible parameters θ. While supervised training methods with access to ground-truth images can generate a very large database of synthetic image-measurement pairs by pairing the same image with many different θ (assuming pθ(·) is known), our unsupervised framework has access only to two measurements per image.
However, in many blind estimation applications (such as deblurring), the parameter θ has comparatively limited degrees of freedom and the distribution pθ(·) is known. Consequently, it is feasible to train estimators for θ from an observation y with sufficient supervision. With these assumptions, we propose a “proxy training” approach for unsupervised training of blind image estimators. This approach treats estimates from our network during training as a source of image ground-truth to train an estimator g : y → θ̂ for measurement parameters. We use the image network’s predictions to construct synthetic observations as:
x⁺t:i ← f(yt:i),  θ⁺t:i ∼ pθ,  ε⁺t:i ∼ pε,  y⁺t:i = θ⁺t:i x⁺t:i + ε⁺t:i,  for i ∈ {1, 2}, (6)

where θ⁺t:i and ε⁺t:i are sampled on the fly from the parameter and noise distributions, and ← indicates an assignment with a "stop-gradient" operation (to prevent loss gradients on the proxy images from affecting the image estimator f(·)). We use these synthetic observations y⁺t:i, with known sampled parameters θ⁺t:i, to train the parameter estimation network g(·) based on the loss:
Lprox:θ = (1/T) ∑_t ∑_{i=1}^{2} ρ( g(y⁺t:i) − θ⁺t:i ). (7)
As the parameter network g(·) trains with augmented data, we simultaneously use it to compute estimates of parameters for the original observations: θ̂t:i ← g(yt:i), for i ∈ {1, 2}, and compute the swap- and self-measurement losses in (3) and (5) on the original observations using these estimated, instead of true, parameters. Notice that we use a stop-gradient here as well, since we do not wish to train the parameter estimator g(·) based on the swap- or self-measurement losses—the behavior observed in (4) no longer holds in this case, and we empirically observe that removing the stop-gradient leads to instability and often causes training to fail.
In addition to training the parameter estimator g(·), the proxy training data in (6) can be used to augment training for the image estimator f(·), now with full supervision from the proxy images as:
Lprox:x = (1/T) ∑_t ∑_{i=1}^{2} ρ( f(y⁺t:i) − x⁺t:i ). (8)
This loss can be used even in the non-blind training setting, and provides a means of generating additional training data with more pairings of image and measurement parameters. Also note that although our proxy images x⁺t:i are approximate estimates of the true images, they represent the ground-truth for the synthetically generated observations y⁺t:i. Hence, the losses Lprox:θ and Lprox:x are approximate only in the sense that they are based on images that are not sampled from the true image distribution px(·). And the effect of this approximation diminishes as training progresses, and the image estimation network produces better image predictions (especially on the training set).
Our overall method randomly initializes the weights of the image and parameter networks f(·) and g(·), and then trains them with a weighted combination of all losses: Lswap + γLself + αLprox:θ + βLprox:x, where the scalar weights α, β, γ are hyper-parameters determined on a validation set. For non-blind training (of blind estimators), only the image estimator f(·) needs to be trained, and α can be set to 0.
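One blind-training step with the proxy mechanism can be sketched as follows (our own illustration; sample_theta and apply_theta are hypothetical stand-ins for the application-specific parameter sampler and measurement operator, and detach() implements the stop-gradient ←):

import torch

def blind_step(f, g, y1, y2, sample_theta, apply_theta, noise_std,
               alpha=1.0, beta=1.0, gamma=1.0):
    # Proxy data: current predictions treated as ground truth (stop-gradient).
    x_plus = torch.cat([f(y1), f(y2)]).detach()
    theta_plus = sample_theta(x_plus.shape[0])
    y_plus = apply_theta(theta_plus, x_plus)
    y_plus = y_plus + noise_std * torch.randn_like(y_plus)
    L_prox_theta = ((g(y_plus) - theta_plus) ** 2).mean()  # Eq. (7)
    L_prox_x = ((f(y_plus) - x_plus) ** 2).mean()          # Eq. (8)
    # Measurement losses on the real pair, using estimated parameters.
    th1, th2 = g(y1).detach(), g(y2).detach()              # stop-gradient
    x1, x2 = f(y1), f(y2)
    L_swap = ((apply_theta(th2, x1) - y2) ** 2).mean() + \
             ((apply_theta(th1, x2) - y1) ** 2).mean()
    L_self = ((apply_theta(th1, x1) - y1) ** 2).mean() + \
             ((apply_theta(th2, x2) - y2) ** 2).mean()
    return L_swap + gamma * L_self + alpha * L_prox_theta + beta * L_prox_x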
4 Experiments
We evaluate our framework on two well-established tasks: non-blind image reconstruction from compressive measurements, and blind deblurring of face images. These tasks were chosen since large training sets of ground-truth images is available in both cases, which allows us to demonstrate the effectiveness of our approach through comparisons to fully supervised baselines. The source code of our implementation is available at https://projects.ayanc.org/unsupimg/.
4.1 Reconstruction from Compressive Measurements
We consider the task of training a CNN to reconstruct images from compressive measurements. We follow the measurement model of [12, 30], where all non-overlapping 33× 33 patches in an image are measured individually by the same low-dimensional orthonormal matrix. Like [12, 30], we train CNN models that operate on individual patches at a time, and assume ideal observations without noise (the supplementary includes additional results for noisy measurements). We train models for compression ratios of 1%, 4%, and 10% (using corresponding matrices provided by [12]).
We generate a training and validation set, of 100k and 256 images respectively, by taking 363× 363 crops from images in the ImageNet database [26]. We use a CNN architecture that stacks two UNets [24], with a residual connection between the two (see supplementary). We begin by training our architecture with full supervision, using all overlapping patches from the training images, and an L2 loss between the network’s predictions and the ground-truth image patches. For unsupervised training
with our approach, we create two partitions of the original image, each containing non-overlapping patches. The partitions themselves overlap, with patches in one partition being shifted from those in the other (see supplementary). We measure patches in both partitions with the same measurement matrix, to yield two sets of measurements. These provide the diversity required by our method as each pixel is measured with a different patch in the two partitions. Moreover, this measurement scheme can be simply implemented in practice by camera translation. The shifts for each image are randomly selected, but kept fixed throughout training. Since the network operates independently on patches, it can be used on measurements from both partitions. To compute the swap-measurement loss, we take the network’s individual patch predictions from one partition, arrange them to form the image, and extract and then apply the measurement matrix to shifted patches corresponding to the other partition. The weight γ for the self-measurement loss is set to 0.05 based on the validation set.
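The two-partition measurement scheme can be sketched as follows (our own illustration; phi is the compressive matrix applied per 33 × 33 patch, and the shift (dy, dx) is the fixed random offset chosen per image):

import numpy as np

def partition_measurements(img, phi, dy, dx, P=33):
    # Measure all non-overlapping PxP patches starting at offset (dy, dx);
    # returns one compressive measurement per patch, stacked row-wise.
    H, W = img.shape
    patches = [img[y:y + P, x:x + P].reshape(-1)
               for y in range(dy, H - P + 1, P)
               for x in range(dx, W - P + 1, P)]
    return np.stack(patches) @ phi.T

# Two overlapping partitions of one image give a training measurement pair:
# y_a = partition_measurements(img, phi, 0, 0)
# y_b = partition_measurements(img, phi, 7, 12)  # hypothetical random shift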
In Table 1, we report results for existing compressive sensing methods that use supervised training [12, 30], as well as two methods that do not require any training [15, 20]. We report numbers for these methods from the evaluation in [30] that, like us, reconstructs each patch in an image individually. We also report results for the algorithm in [20] by running it on entire images (i.e., using the entire image for regularization while still using the per-patch measurement model). Note that [20] is a D-AMP-based estimator (and while slower, performs similarly to the learned D-AMP estimators proposed in [21, 33] as per their own evaluation).
Evaluating our fully supervised baseline against these methods, we find that it achieves state-of-the-art performance. We then report results for training with our unsupervised framework, and find that this leads to accurate models that lag our supervised baseline by only 0.4 dB or less in terms of average PSNR on both test sets, and in most cases actually outperform previous methods. This is despite the fact that these models have been trained without any access to ground-truth images. In addition to our full unsupervised method with both the self- and swap-measurement losses, Table 1 also contains an ablation without the self-loss, which leads to a slight drop in performance. Figure 2 provides example reconstructions for some images, and we find that results from our unsupervised method are extremely close in visual quality to those of the baseline model trained with full supervision.
4.2 Blind Face Image Deblurring
We next consider the problem of blind motion deblurring of face images. Like [27], we consider the problem of restoring 128 × 128 aligned and cropped face images that have been affected by motion blur, through convolution with motion blur kernels of size up to 27 × 27, and Gaussian noise with a standard deviation of two gray levels. We use all 160k images in the CelebA training set [17] and 1.8k images from the Helen training set [13] to construct our training set, and 2k images from the CelebA validation set and 200 from the Helen training set for our validation set. We use a set of 18k and 2k random motion kernels for training and validation respectively, generated using the method described in [4]. We evaluate our method on the official blurred test images provided by [27] (derived from the CelebA and Helen test sets). Note that unlike [27], we do not use any semantic labels for training.
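In this task the measurement parameter θ is a motion-blur kernel. A sketch of the corresponding measurement operator (per-image convolution plus Gaussian noise of two gray levels, i.e., σ = 2/255) might look as follows; the grouped-convolution idiom applies a different kernel to each image in a batch. This is an illustrative implementation, not code from the paper.

```python
import torch
import torch.nn.functional as F

def blur_measure(x, kernels, sigma=2.0 / 255.0):
    """Blur each image in the batch with its own kernel, then add noise.

    x: (B, C, H, W) images; kernels: (B, 1, k, k) motion kernels, k odd, k <= 27.
    """
    B, C, H, W = x.shape
    k = kernels.shape[-1]
    w = kernels.repeat_interleave(C, dim=0)            # (B*C, 1, k, k)
    y = F.conv2d(x.reshape(1, B * C, H, W), w, padding=k // 2, groups=B * C)
    return y.reshape(B, C, H, W) + sigma * torch.randn_like(x)
```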
In this case, we use a single U-Net architecture to map blurry observations to sharp images. We again train a model for this architecture with full supervision, generating blurry-sharp training pairs on the fly by pairing random blur kernels from the training set with the sharp images. Then, for unsupervised training with our approach, we choose two kernels for each training image to form a training set of measurement pairs that are kept fixed (including the added Gaussian noise) across all epochs of training. We first consider non-blind training, using the true blur kernels to compute the swap- and self-measurement losses. Here, we consider training with and without the proxy loss Lprox:x for the network. Then, we consider the blind training case, where we also learn an estimator for blur kernels and use its predictions to compute the measurement losses. Instead of training an entirely separate network, we share the initial layers with the image U-Net, and form a separate decoder path going from the bottleneck to the blur kernel (see the sketch below). The weights α, β, γ are all set to one in this case.
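A sketch of this weight sharing is shown below. Here `encoder` and `image_decoder` are hypothetical stand-ins for the two halves of the U-Net, `feat_ch` is an assumed bottleneck width, and normalizing the kernel head with a softmax (so predicted kernels are non-negative and sum to one) is our assumption rather than a detail stated in the paper.

```python
import torch.nn as nn

class BlindDeblurNet(nn.Module):
    """Image U-Net with an extra decoder head for the blur kernel (sketch)."""

    def __init__(self, encoder, image_decoder, feat_ch=512, ksize=27):
        super().__init__()
        self.encoder, self.image_decoder = encoder, image_decoder
        self.ksize = ksize
        # Separate decoder path from the bottleneck to the blur kernel.
        self.kernel_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat_ch, ksize * ksize), nn.Softmax(dim=-1))

    def forward(self, y):
        feats, skips = self.encoder(y)             # bottleneck + skip features
        x_hat = self.image_decoder(feats, skips)   # sharp-image estimate
        k_hat = self.kernel_head(feats).view(-1, 1, self.ksize, self.ksize)
        return x_hat, k_hat
```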
We report results for all versions of our method in Table 2, and compare to [27] as well as a traditional deblurring method that is not trained on face images [28]. We find that with full supervision, our architecture achieves state-of-the-art performance. With non-blind training, our method comes close to supervised performance when using the proxy loss, but does worse without it, highlighting the loss's utility even in the non-blind setting. Finally, we note that models derived using blind training with our approach also produce results nearly as accurate as those trained with full supervision, despite lacking access both to ground-truth image data and to knowledge of the blur kernels in their training measurements. Figure 3 illustrates this performance qualitatively, with example deblurred results from various models on the official test images. We also visualize the blur kernel estimator learned during blind training with our approach in Fig. 4, on images from our validation set. Additional results, including those on real images, are included in the supplementary.
5 Conclusion
We presented an unsupervised method to train image estimation networks from only measurement pairs, without access to ground-truth images, and in blind settings, without knowledge of measurement parameters. In this paper, we validated this approach on well-established tasks where sufficient ground-truth data (for natural and face images) was available, since this allowed us to compare to training with full supervision and study the performance gap between the supervised and unsupervised settings. But we believe that our method's real utility will be in opening up the use of CNNs for image estimation to new domains, such as medical imaging and applications in astronomy, where such use has so far been infeasible due to the difficulty of collecting large ground-truth datasets.
Acknowledgments. This work was supported by the NSF under award no. IIS-1820693.

1. What is the main contribution of the paper?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the quality and originality of the paper's content?
4. Are there any concerns regarding the experimental setup and comparisons with other methods?
5. How does the reviewer evaluate the clarity and significance of the paper?

Review
Originality: The paper is mainly based on the idea presented in [14] and could be considered a generalization of it. Section 3.2 is the part which makes this paper's originality clear.

Quality: Quality is the issue which makes the reviewer believe this paper is not ready for publication yet. Here are the issues:
- First of all, there are a few previous works on the exact same problem that are neither cited nor compared against in this manuscript. These papers do not even need ground-truth data or two sets of measurements (unlike the submitted paper) and have shown impressive results. Examples include, but are not limited to:
-- "Simultaneous compressive image recovery and deep denoiser learning from undersampled measurements" by Zhussip et al.
-- "Unsupervised Learning with Stein's Unbiased Risk Estimator" by Metzler et al.
- Another major problem with this manuscript is that all the methods presented in the experiments section are patch-based methods and are not state-of-the-art in learning-based image recovery. Instead, there are other approaches that recover images whole (instead of patch by patch) and have better performance than ISTA-Net and ReconNet. However, the authors have not compared their work against any of the non-patch-based methods. Please note that one problem limiting the applicability of patch-based methods is that they cannot be applied in certain applications such as medical imaging. In other words, depending on the structure of the sensing matrices (e.g., Fourier matrices), we are sometimes unable to sense and reconstruct images patch by patch. The authors should at least compare against non-patch-based CS recovery methods, even if their method cannot be adapted to work in non-patch-based applications.
- If you take a look at Fig. 8 in the supplementary, there is an overlap between the two partitions during the training phase. Now if the authors claim a compression ratio of 10%, then because of the overlap between partitions, the effective compression ratio for the area common to both partitions is more than 10%, and that makes the comparison presented in Table 1 unfair. In general, if you use two sets of measurements
Y1 = A X + e1, Y2 = B X + e2,
you can merge the two and have
[Y1; Y2] = [A; B] X + [e1; e2].
Although the authors do not use the exact same X during the training phase for the two sets of measurements, these two partitions have considerable overlap, and that makes the comparison with other patch-based methods problematic.

Clarity: The paper is very well written and well organized.

Significance: The problem studied by the authors is an important one. However, there are issues with the experimental results that should be addressed before the proposed approach can be considered significant, and the reviewer believes this paper is not ready for publication yet.

------------------------------------------------------------
Post Rebuttal Comments: Thanks to the authors for the clarifications in their rebuttal, specifically for providing the new experimental results in this short amount of time. It would be great if the authors could add the points mentioned in the rebuttal to the final version of the paper as well. My score is now updated.
1. What is the focus and contribution of the paper on image degradations?
2. What are the strengths of the proposed approach, particularly in terms of handling parametric linear image degradations?
3. Do you have any concerns regarding the evaluation of the method, particularly when compared to supervised state-of-the-art baselines?
4. How does the reviewer assess the clarity, quality, significance, and originality of the paper's content?
5. Are there any open questions or potential avenues for future research related to the presented method?

Review
Summary: The authors present an extension of Noise2Noise that is able to deal with (in addition to noise) parametric linear image degradations such as blur. As in Noise2Noise, the authors require only pairs of corresponding noisy and degraded images. Here, the two images are assumed to be created from the same clean image by applying two different parametric distortions (such as convolution with two different blur kernels) and then adding different instantiations of zero-centered noise. The authors distinguish between a non-blind and a blind setting. In the non-blind setting, the parameters of the degradations are known. During training, the authors use their network to process both images and then apply the corresponding other parametric distortion to each of the two processed images. The loss is then calculated as the squared error between the final results. In the blind setting, the authors employ a second network to estimate the parameters of the degradation. It is trained in a supervised fashion on simulated training data, which is created from results of the first network obtained during the training process. The authors evaluate their method on two different datasets for the tasks of reconstruction from compressed measurements and face deblurring. The results are comparable to the supervised state-of-the-art baselines.

Originality: + The manuscript presents an original and important extension of Noise2Noise.

Clarity: + The paper is extremely clear in its presentation and well written.

Significance: + I think the overall significance of the manuscript is quite high. The general idea of applying Noise2Noise in a setting which includes (next to the noise) additional linear distortions of the image could open the door for new applications. Especially if the assumption of linearity were to be dropped in future research, such a method might be used to reduce difficult reconstruction artifacts in methods like tomography or super-resolution microscopy.

Quality: + I think the manuscript and the presented experiments are technically and theoretically sound. However, there are some open questions regarding the evaluation of the method:
- I find it remarkable that the fully supervised baseline outperforms the other state-of-the-art methods in both experiments. How can this be explained? Is this due to the unconventional architecture (two concatenated U-Nets) used by the authors? If this is the case, it seems like a separate additional contribution, which is not clearly stated.

Final Recommendation: Considering that the paper introduces an original, highly significant extension to Noise2Noise and validates the approach via sufficient experiments, I recommend to accept the paper.

------------------------------------------
Post Rebuttal: I feel that my questions have been adequately addressed and I will stick with my initial rating.
NIPS | Title
Training Image Estimators without Image Ground Truth
Abstract
Deep neural networks have been very successful in image estimation applications such as compressive-sensing and image restoration, as a means to estimate images from partial, blurry, or otherwise degraded measurements. These networks are trained on a large number of corresponding pairs of measurements and ground-truth images, and thus implicitly learn to exploit domain-specific image statistics. But unlike measurement data, it is often expensive or impractical to collect a large training set of ground-truth images in many application settings. In this paper, we introduce an unsupervised framework for training image estimation networks, from a training set that contains only measurements—with two varied measurements per image—but no ground-truth for the full images desired as output. We demonstrate that our framework can be applied for both regular and blind image estimation tasks, where in the latter case parameters of the measurement model (e.g., the blur kernel) are unknown: during inference, and potentially, also during training. We evaluate our method for training networks for compressive-sensing and blind deconvolution, considering both non-blind and blind training for the latter. Our unsupervised framework yields models that are nearly as accurate as those from fully supervised training, despite not having access to any ground-truth images.
1 Introduction
Reconstructing images from imperfect observations is a classic inference task in many imaging applications. In compressive sensing [8], a sensor makes partial measurements for efficient acquisition. These measurements correspond to a low-dimensional projection of the higher-dimensional image signal, and the system relies on computational inference for recovering the full-dimensional image. In other cases, cameras capture degraded images that are low-resolution, blurry, etc., and require a restoration algorithm [10, 29, 34] to recover a corresponding un-corrupted image. Deep convolutional neural networks (CNNs) have recently emerged as an effective tool for such image estimation tasks [4, 6, 7, 12, 27, 30, 31]. Specifically, a CNN for a given application is trained on a large dataset that consists of pairs of ground-truth images and observed measurements (in many cases where the measurement or degradation process is well characterized, having a set of ground-truth images is sufficient to generate corresponding measurements). This training set allows the CNN to learn to exploit the expected statistical properties of images in that application domain, to solve what is essentially an ill-posed inverse problem.
But for many domains, it is impractical or prohibitively expensive to capture full-dimensional or un-corrupted images, and construct such a large representative training set. Unfortunately, it is often in such domains that a computational imaging solution is most useful. Recently, Lehtinen et al. [14] proposed a solution to this issue for denoising, with a method that trains with only pairs of noisy observations. While their method yields remarkably accurate network models without needing any
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
ground-truth images for training, it is applicable only to the specific case of estimation from noisy measurements—when each image intensity is observed as a sample from a (potentially unknown) distribution with mean or mode equal to its corresponding true value.
In this work, we introduce an unsupervised method for training image estimation networks that can be applied to a general class of observation models—where measurements are a linear function of the true image, potentially with additive noise. As training data, it only requires two observations for the same image but not the underlying image itself1. The two measurements in each pair are made with different parameters (such as different compressive measurement matrices or different blur kernels), and these parameters vary across different pairs. Collecting such a training set provides a practical alternative to the more laborious one of collecting full image ground-truth. Given these measurements, our method trains an image estimation network by requiring that its prediction from one measurement of a pair be consistent with the other measurement, when observed with the corresponding parameter. With sufficient diversity in measurement parameters for different training pairs, we show this is sufficient to train an accurate network model despite lacking direct ground-truth supervision.
While our method requires knowledge of the measurement model (e.g., blur by convolution), it also incorporates a novel mechanism to handle the blind setting during training—when the measurement parameters (e.g., the blur kernels) for training observations are unknown. To be able to enforce consistency as above, we use an estimator for measurement parameters that is trained simultaneously using a “proxy” training set. This set is created on-the-fly by taking predictions from the image network even as it trains, and pairing them with observations synthetically created using randomly sampled, and thus known, parameters. The proxy set provides supervision for training the parameter estimator, and to augment training of the image estimator as well. This mechanism allows our method to nearly match the accuracy of fully supervised training on image and parameter ground-truth.
We validate our method with experiments on image reconstruction from compressive measurements and on blind deblurring of face images, with blind and non-blind training for the latter, and compare to fully-supervised baselines with state-of-the-art performance. The supervised baselines use a training set of ground-truth images and generate observations with random parameters on the fly in each epoch, to create a much larger number of effective image-measurement pairs. In contrast, our method is trained with only two measurements per image from the same training set (but not the image
1Note that at test time, the trained network only requires one observation as input as usual.
itself), with the pairs kept fixed through all epochs of training. Despite this, our unsupervised training method yields models with test accuracy close to that of the supervised baselines, and thus presents a practical way to train CNNs for image estimation when lacking access to image ground truth.
2 Related Work
CNN-based Image Estimation. Many imaging tasks require inverting the measurement process to obtain a clean image from the partial or degraded observations—denoising [3], deblurring [29], super-resolution [10], compressive sensing [8], etc. While traditionally solved using statistical image priors [9, 25, 34], CNN-based estimators have been successfully employed for many of these tasks. Most methods [4, 6, 7, 12, 22, 27, 30, 31] learn a network to map measurements to corresponding images from a large training set of pairs of measurements and ideal ground-truth images. Some learn CNN-based image priors, as denoisers [5, 23, 31] or GANs [1], that are agnostic to the inference task (denoising, deblurring, etc.), but still tailored to a chosen class of images. All these methods require access to a large domain-specific dataset of ground-truth images for training. However, capturing image ground-truth is burdensome or simply infeasible in many settings (e.g., for MRI scans [18] and other biomedical imaging applications). In such settings, our method provides a practical alternative by allowing estimation networks to be trained from measurement data alone.
Unsupervised Learning. Unsupervised learning for CNNs is broadly useful in many applications where large-scale training data is hard to collect. Accordingly, researchers have proposed unsupervised and weakly-supervised methods for such applications, such as depth estimation [11, 32], intrinsic image decomposition [16, 19], etc. However, these methods are closely tied to their specific applications. In this work, we seek to enable unsupervised learning for image estimation networks. In the context of image modeling, Bora et al. [2] propose a method to learn a GAN model from only degraded observations. Their method, like ours, includes a measurement model with its discriminator for training (but requires knowledge of measurement parameters, while we are able to handle the blind setting). Their method proves successful in training a generator for ideal images. We seek a similar unsupervised means for training image reconstruction and restoration networks.
The closest work to ours is the recent Noise2Noise method of Lehtinen et al. [14], who propose an unsupervised framework for training denoising networks by training on pairs of noisy observations of the same image. In their case, supervision comes from requiring the denoised output from one observation be close to the other. This works surprisingly well, but is based on the assumption that the expected or median value of the noisy observations is the image itself. We focus on a more general class of observation models, which requires injecting the measurement process in loss computation. We also introduce a proxy training approach to handle blind image estimation applications.
Also related are the works of Metzler et al. [21] and Zhussip et al. [33], that use Stein’s unbiased risk estimator for unsupervised training from only measurement data, for applications in compressive sensing. However, these methods are specific to estimators based on D-AMP estimation [20], since they essentially train denoiser networks for use in unrolled AMP iterations for recovery from compressive measurements. In contrast, ours is a more general framework that can be used to train generic neural network estimators.
3 Proposed Approach
Given a measurement y ∈ RM of an ideal image x ∈ RN that are related as
y = θ x+ , (1)
our goal is to train a CNN to produce an estimate x̂ of the image from y. Here, ∼ p is random noise with distribution p (·) that is assumed to be zero-mean and independent of the image x, and the parameter θ is an M ×N matrix that models the linear measurement operation. Often, the measurement matrix θ is structured with fewer than MN degrees of freedom based on the measurement model—e.g., it is block-Toeplitz for deblurring with entries defined by the blur kernel. We consider both non-blind estimation when the measurement parameter θ is known for a given measurement during inference, and the blind setting where θ is unavailable but we know the distribution pθ(·). For blind estimators, we address both non-blind and blind training—when θ is known for each measurement in the training set but not at test time, and when it is unknown during training as well.
Since (1) is typically non-invertible, image estimation requires reasoning with the statistical distribution px(·) of images for the application domain, and conventionally, this is provided by a large training set of typical ground-truth images x. In particular, CNN-based image estimation methods train a network f : y → x̂ on a large training set {(xt, yt)}Tt=1 of pairs of corresponding images and measurements, based on a loss that measures error ρ(x̂t − xt) between predicted and true images across the training set. In the non-blind setting, the measurement parameter θ is known and provided as input to the network f (we omit this in the notation for convenience), while in the blind setting, the network must also reason about the unknown measurement parameter θ.
To avoid the need for a large number of ground-truth training images, we propose an unsupervised learning method that is able to train an image estimation network using measurements alone. Specifically, we assume we are given a training set of two measurements (yt:1, yt:2) for each image xt:
yt:1 = θt:1 xt + t:1, yt:2 = θt:2 xt + t:2, (2) but not the images {xt} themselves. We require the corresponding measurement parameters θt:1 and θt:2 to be different for each pair, and further, to also vary across different training pairs. These parameters are assumed to be known for the non-blind training setting, but not for blind training.
3.1 Unsupervised Training for Non-Blind Image Estimation
We begin with the simpler case of non-blind estimation, when the parameter θ for a given measurement y is known, both during inference and training. Given pairs of measurements with known parameters, our method trains the network f(·) using a “swap-measurement” loss on each pair, defined as:
Lswap = 1
T ∑ t ρ ( θt:2 f(yt:1) − yt:2 ) + ρ ( θt:1 f(yt:2) − yt:1 ) . (3)
This loss evaluates the accuracy of the full images predicted by the network from each measurement in a pair, by comparing it to the other measurement—using an error function ρ(·)—after simulating observation with the corresponding measurement parameter. Note Noise2Noise [14] can be seen as a special case of (3) for measurements are degraded only by noise, with θt:1 = θt:2 = I .
When the parameters θt:1, θt:2 used to acquire the training set are sufficiently diverse and statistically independent for each underlying xt, this loss provides sufficient supervision to train the network f(·). To see this, we consider using the L2 distance for the error function ρ(z) = ‖z‖2, and note that (3) represents an empirical approximation of the expected loss over image, parameter, and noise distributions. Assuming the training measurement pairs are obtained using (2) with xt ∼ px, θt:1, θt:2 ∼ pθ, and t:1, t:2 ∼ p drawn i.i.d. from their respective distributions, we have Lswap ≈ 2 E
x∼px E θ1∼pθ E 1∼p E θ2∼pθ E 2∼p ‖θ2f(θ1x+ 1)− (θ2x+ 2)‖2
= 2σ2 + 2 E x∼px E θ∼pθ E ∼p
( f(θx+ ) − x )T Q ( f(θx+ ) − x ) , Q = E
θ′∼pθ (θ′ T θ′). (4)
Therefore, because the measurement matrices are independent, we find that in expectation the swapmeasurement loss is equivalent to supervised training against the true image x, with an L2 loss that is weighted by theN×N matrixQ (upto an additive constant given by noise variance). When the matrix Q is full-rank, the swap-measurement loss will provide supervision along all image dimensions, and will reach its theoretical minimum (2σ2 ) iff the network makes exact predictions.
The requirement that Q be full-rank implies that the distribution pθ of measurement parameters must be sufficiently diverse, such that the full set of parameters {θ}, used for training measurements, together span the entire domain RN of full images. Therefore, even though measurements made by individual θ—and even pairs of (θt:1, θt:2)—are incomplete, our method relies on the fact that the full set of measurement parameters used during training is complete. Indeed, for Q to be full-rank, it is important that there be no systematic deficiency in pθ (e.g., no vector direction in RN left unobserved by all measurement parameters used in training). Also note that while we derived (4) for the L2 loss, the argument applies to any error function ρ(·) that is minimized only when its input is 0. In addition to the swap loss, we also find it useful to train with an additional “self-measurement” loss that measures consistency between an image prediction and its own corresponding input measurement:
Lself = 1
T ∑ t ρ ( θt:1 f(yt:1) − yt:1 ) + ρ ( θt:2 f(yt:2) − yt:2 ) . (5)
While not sufficient by itself, we find the additional supervision it provides to be practically useful in yielding more accurate network models since it provides more direct supervision for each training sample. Therefore, our overall unsupervised training objective is a weighted version of the two losses Lswap + γLself, with weight γ chosen on a validation set.
3.2 Unsupervised Training for Blind Image Estimation
We next consider the more challenging case of blind estimation, when the measurement parameter θ for an observation y is unknown—and specifically, the blind training setting, when it is unknown even during training. The blind training setting complicates the use of our unsupervised losses in (3) and (5), since the values of θt:1 and θt:2 used there are unknown. Also, blind estimation tasks often have a more diverse set of possible parameters θ. While supervised training methods with access to ground-truth images can generate a very large database of synthetic image-measurement pairs by pairing the same image with many different θ (assuming pθ(·) is known), our unsupervised framework has access only to two measurements per image.
However, in many blind estimation applications (such as deblurring), the parameter θ has comparatively limited degrees of freedom and the distribution pθ(·) is known. Consequently, it is feasible to train estimators for θ from an observation y with sufficient supervision. With these assumptions, we propose a “proxy training” approach for unsupervised training of blind image estimators. This approach treats estimates from our network during training as a source of image ground-truth to train an estimator g : y → θ̂ for measurement parameters. We use the image network’s predictions to construct synthetic observations as:
x⁺_{t:i} ← f(y_{t:i}),  θ⁺_{t:i} ∼ p_θ,  ε⁺_{t:i} ∼ p_ε,  y⁺_{t:i} = θ⁺_{t:i} x⁺_{t:i} + ε⁺_{t:i},  for i ∈ {1, 2}, (6)
where θ⁺_{t:i} and ε⁺_{t:i} are sampled on the fly from the parameter and noise distributions, and ← indicates an assignment with a “stop-gradient” operation (to prevent loss gradients on the proxy images from affecting the image estimator f(·)). We use these synthetic observations y⁺_{t:i}, with known sampled parameters θ⁺_{t:i}, to train the parameter estimation network g(·) based on the loss:
L_prox:θ = (1/T) Σ_t Σ_{i=1}² ρ(g(y⁺_{t:i}) − θ⁺_{t:i}). (7)
As the parameter network g(·) trains with augmented data, we simultaneously use it to compute estimates of parameters for the original observations: θ̂t:i ← g(yt:i), for i ∈ {1, 2}, and compute the swap- and self-measurement losses in (3) and (5) on the original observations using these estimated, instead of true, parameters. Notice that we use a stop-gradient here as well, since we do not wish to train the parameter estimator g(·) based on the swap- or self-measurement losses—the behavior observed in (4) no longer holds in this case, and we empirically observe that removing the stop-gradient leads to instability and often causes training to fail.
In addition to training the parameter estimator g(·), the proxy training data in (6) can be used to augment training for the image estimator f(·), now with full supervision from the proxy images as:
L_prox:x = (1/T) Σ_t Σ_{i=1}² ρ(f(y⁺_{t:i}) − x⁺_{t:i}). (8)
This loss can be used even in the non-blind training setting, and provides a means of generating additional training data with more pairings of image and measurement parameters. Also note that although our proxy images x⁺_{t:i} are approximate estimates of the true images, they represent the ground truth for the synthetically generated observations y⁺_{t:i}. Hence, the losses L_prox:θ and L_prox:x are approximate only in the sense that they are based on images that are not sampled from the true image distribution p_x(·). The effect of this approximation diminishes as training progresses and the image estimation network produces better predictions (especially on the training set).
Our overall method randomly initializes the weights of the image and parameter networks f(·) and g(·), and then trains them with a weighted combination of all losses: L_swap + γ L_self + α L_prox:θ + β L_prox:x, where the scalar weights α, β, γ are hyper-parameters determined on a validation set. For non-blind training (of blind estimators), only the image estimator f(·) needs to be trained, and α can be set to 0.
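As a sketch of how proxy training could be wired up, the snippet below combines the measurement losses with the proxy losses (7) and (8) for one training step, using `detach()` as the stop-gradient. It reuses `unsupervised_loss` from the earlier sketch; `sample_theta` and `sample_noise` are hypothetical helpers standing in for draws from p_θ and p_ε, and `g` is assumed to output a flattened parameter vector.

```python
def blind_training_step(f, g, y1, y2, sample_theta, sample_noise,
                        alpha=1.0, beta=1.0, gamma=1.0):
    """One unsupervised training step in the blind setting (Eqs. 3, 5-8).
    f: image estimator, g: measurement-parameter estimator."""
    # Estimate parameters for the original observations. Stop-gradient: the
    # measurement losses must not train g, so its outputs are detached.
    th1_hat, th2_hat = g(y1).detach(), g(y2).detach()
    loss = unsupervised_loss(f, y1, y2, th1_hat, th2_hat, gamma)

    # Proxy data: treat current image predictions as ground truth (detached),
    # and synthesize new observations with freshly sampled theta and noise.
    for y in (y1, y2):
        x_plus = f(y).detach()                       # proxy "ground-truth" images
        th_plus = sample_theta()                     # known sampled parameter (M x N)
        y_plus = x_plus @ th_plus.T + sample_noise() # synthetic observation
        loss = loss + alpha * ((g(y_plus) - th_plus.flatten()) ** 2).mean()  # L_prox:theta
        loss = loss + beta * ((f(y_plus) - x_plus) ** 2).mean()              # L_prox:x
    return loss
```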
4 Experiments
We evaluate our framework on two well-established tasks: non-blind image reconstruction from compressive measurements, and blind deblurring of face images. These tasks were chosen since large training sets of ground-truth images are available in both cases, which allows us to demonstrate the effectiveness of our approach through comparisons to fully supervised baselines. The source code of our implementation is available at https://projects.ayanc.org/unsupimg/.
4.1 Reconstruction from Compressive Measurements
We consider the task of training a CNN to reconstruct images from compressive measurements. We follow the measurement model of [12, 30], where all non-overlapping 33× 33 patches in an image are measured individually by the same low-dimensional orthonormal matrix. Like [12, 30], we train CNN models that operate on individual patches at a time, and assume ideal observations without noise (the supplementary includes additional results for noisy measurements). We train models for compression ratios of 1%, 4%, and 10% (using corresponding matrices provided by [12]).
We generate training and validation sets, of 100k and 256 images respectively, by taking 363×363 crops from images in the ImageNet database [26]. We use a CNN architecture that stacks two U-Nets [24], with a residual connection between the two (see supplementary). We begin by training our architecture with full supervision, using all overlapping patches from the training images, and an L2 loss between the network’s predictions and the ground-truth image patches. For unsupervised training
with our approach, we create two partitions of the original image, each containing non-overlapping patches. The partitions themselves overlap, with patches in one partition being shifted from those in the other (see supplementary). We measure patches in both partitions with the same measurement matrix, to yield two sets of measurements. These provide the diversity required by our method, as each pixel is measured by a different patch in the two partitions. Moreover, this measurement scheme can be simply implemented in practice by camera translation. The shifts for each image are randomly selected, but kept fixed throughout training. Since the network operates independently on patches, it can be used on measurements from both partitions. To compute the swap-measurement loss, we take the network’s individual patch predictions from one partition, arrange them to form the image, then extract the shifted patches corresponding to the other partition and apply the measurement matrix to them. The weight γ for the self-measurement loss is set to 0.05 based on the validation set.
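For illustration, a sketch of how the two shifted partitions could be measured is given below; the 16-pixel shift and the helper name are hypothetical, and `phi` is the M×33² measurement matrix.

```python
import numpy as np

def measure_partition(img, phi, patch=33, shift=(0, 0)):
    """Measure all non-overlapping `patch` x `patch` patches of `img`,
    starting at offset `shift`, with the (M x patch^2) matrix `phi`."""
    H, W = img.shape
    ys, xs = shift
    meas = []
    for r in range(ys, H - patch + 1, patch):
        for c in range(xs, W - patch + 1, patch):
            p = img[r:r + patch, c:c + patch].reshape(-1)
            meas.append(phi @ p)
    return np.stack(meas)

# Two partitions of the same image, offset from each other; each pixel is
# covered by a different patch in the two partitions.
# y1 = measure_partition(img, phi, shift=(0, 0))
# y2 = measure_partition(img, phi, shift=(16, 16))
```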
In Table 1, we report results for existing compressive sensing methods that use supervised training [12, 30], as well as two methods that do not require any training [15, 20]. We report numbers for these methods from the evaluation in [30] that, like us, reconstructs each patch in an image individually. We also report results for the algorithm in [20] by running it on entire images (i.e., using the entire image for regularization while still using the per-patch measurement model). Note that [20] is a D-AMP-based estimator (and while slower, performs similarly to the learned D-AMP estimators proposed in [21, 33] as per their own evaluation).
Evaluating our fully supervised baseline against these methods, we find that it achieves state-of-the-art performance. We then report results for training with our unsupervised framework, and find that this leads to accurate models that lag our supervised baseline by only 0.4 dB or less in terms of average PSNR on both test sets—and in most cases actually outperform previous methods. This is despite the fact that these models have been trained without any access to ground-truth images. In addition to our full unsupervised method with both the self- and swap-measurement losses, Table 1 also contains an ablation without the self-loss, which is found to lead to a slight drop in performance. Figure 2 provides example reconstructions for some images, and we find that results from our unsupervised method are extremely close in visual quality to those of the baseline model trained with full supervision.
4.2 Blind Face Image Deblurring
We next consider the problem of blind motion deblurring of face images. Like [27], we consider the problem of restoring 128×128 aligned and cropped face images that have been affected by motion blur, through convolution with motion blur kernels of size up to 27×27, and Gaussian noise with a standard deviation of two gray levels. We use all 160k images in the CelebA training set [17] and 1.8k images from the Helen training set [13] to construct our training set, and 2k images from the CelebA validation set and 200 from the Helen training set for our validation set. We use a set of 18k and 2k random motion kernels for training and validation respectively, generated using the method described in [4]. We evaluate our method on the official blurred test images provided by [27] (derived from the CelebA and Helen test sets). Note that unlike [27], we do not use any semantic labels for training.
In this case, we use a single U-Net architecture to map blurry observations to sharp images. We again train a model for this architecture with full supervision, generating blurry-sharp training pairs on the fly by pairing random blur kernels from the training set with the sharp images. Then, for unsupervised training with our approach, we choose two kernels for each training image to form a training set of measurement pairs, which are kept fixed (including the added Gaussian noise) across all epochs of training. We first consider non-blind training, using the true blur kernels to compute the swap- and self-measurement losses. Here, we consider training with and without the proxy loss L_prox:x for the network. Then, we consider the blind training case, where we also learn an estimator for blur kernels and use its predictions to compute the measurement losses. Instead of training an entirely separate network, we share the initial layers with the image U-Net, and form a separate decoder path going from the bottleneck to the blur kernel. The weights α, β, γ are all set to one in this case.
We report results for all versions of our method in Table 2, and compare to [27], as well as a traditional deblurring method that is not trained on face images [28]. We find that with full supervision, our architecture achieves state-of-the-art performance. Then with non-blind training, we find that our method is able to come close to supervised performance when using the proxy loss, but does worse without—highlighting its utility even in the non-blind setting. Finally, we note that models derived using blind training with our approach are also able to produce results nearly as accurate as those trained with full supervision—despite lacking access both to ground-truth image data and to knowledge
of the blur kernels in their training measurements. Figure 3 illustrates this performance qualitatively, with example deblurred results from various models on the official test images. We also visualize the blur kernel estimator learned during blind training with our approach in Fig. 4 on images from our validation set. Additional results, including those on real images, are included in the supplementary.
5 Conclusion
We presented an unsupervised method to train image estimation networks from only measurement pairs, without access to ground-truth images, and in blind settings, without knowledge of measurement parameters. In this paper, we validated this approach on well-established tasks where sufficient ground-truth data (for natural and face images) was available, since this allowed us to compare to training with full supervision and study the performance gap between the supervised and unsupervised settings. But we believe that our method’s real utility will be in opening up the use of CNNs for image estimation to new domains—such as medical imaging, applications in astronomy, etc.—where such use has been so far infeasible due to the difficulty of collecting large ground-truth datasets.
Acknowledgments. This work was supported by the NSF under award no. IIS-1820693.

1. What is the focus and contribution of the paper on training deep image estimators?
2. What are the strengths of the proposed method, particularly in dealing with linear operators and noise realizations?
3. What are the weaknesses of the paper regarding the lack of analysis on certain aspects, such as the swap loss and self-loss?
4. How does the reviewer assess the effectiveness of the proposed method in various scenarios, including deblurring face images and natural images?
5. What are some potential future research directions that can be explored based on the limitations of the current approach?

Review
In this paper, the authors introduce a method to train deep image estimators in an unsupervised fashion. The method is based on the same ideas as the recently introduced noise2noise method (Lehtinen et al, 2018) but is generalized to deal with a degradation model that produces linear noisy observations from the latent clean image that one wants to recover. The training avoids using ground-truth images by using two observations of the same latent image under different linear operators (i.e., compressive measurements or motion blur) and different noise realizations. The paper first introduces a non-blind training scheme where the linear operators are known. Then, for blind training the linear operators are estimated, using a second network (subnetwork), and used as a proxy to emulate the non-blind scheme. The whole system is trained by carefully avoiding updating all the parameters together (to avoid a vicious loop). Experimental evidence on two (synthetic) problems shows that the unsupervised (non-blind and blind) training scheme produces similar results to the fully supervised training.

Training neural networks to reconstruct images without having access to ground truth data is a major challenge. Recent work (Soltanayev and Chun, 2018; Lehtinen et al, 2018) has shown that this is possible in the particular setting of image denoising. This work generalizes the ideas behind Lehtinen et al, 2018 to cope with a linear model (where denoising can be seen as a particular case). This is an interesting paper that proposes several ideas to avoid the restrictions imposed in the Lehtinen et al. procedure. The paper is generally well-written, but there are a few sections that could be improved with more discussion. In fact, the major weakness of the paper is the lack of analysis.

-- Swap Loss (Eq (3) and Eq (4)). The performance of the method strongly depends on the matrix Q, but there's no analysis on this. In particular, I can imagine that in the case where the linear operator is formed using random orthogonal matrices, Q will be close to the identity. But, what about other cases? For example, in the case of motion blur, the Q matrix will be related to the power spectrum of the motion kernel process (Q is the autocorrelation). Since it is essentially a random walk, I can imagine that it will have a linear decay with the frequency. This implies that the swap loss is not penalizing high-frequency errors, so the training scheme won't help to recover high-frequency details. The paper does not discuss any of this but claims that if the measurements are different/complementary enough then this matrix will be full rank. More analysis is needed regarding Q. For example, the authors could plot the spectrum of this matrix in the given experiments. Also, this matrix Q is probably one major limitation of the method. This is not discussed in the paper.

-- Self Loss (Eq (5)). To complement the swap loss, the authors introduce a "self loss" that enforces self-consistency. This is not analyzed and it is not clearly motivated. The justification is just that it produces better results. For instance, in the particular scenario where the linear operator is the identity (i.e., denoising), this loss will enforce that the network f is close to the identity, so it learns to do nothing (keep the noise!). I understand that this is a particular case of the more general model, but this particular case is covered within the proposed framework.
I would like to have more information regarding this loss, and in general what this loss is doing. For instance, the paper could provide results when this self-loss is not used.

-- Prox:\theta Loss (Eq (7)). The authors propose to cope with the blind case by learning the linear operator (degradation) using a convnet (a separate branch of the network). I find it quite surprising how well this works (according to the experiments). I would like to see more analysis regarding the estimation of the blur (or more generally, the linear operator), in particular, a comparison to other works doing this (e.g., motion blur estimation using deep learning). This is not mentioned much, but this is a very difficult problem in itself.

-- Experiments. Deblurring Face images. Why is the method trained for deblurring face images? Would it work if trained on a more complex distribution, e.g., natural images? Noise. The performance gap between supervised and unsupervised training seems to increase with the level of noise. Could you elaborate on this? Also, in the experiment regarding deblurring face images, the level of noise is 2/255, which is pretty low. Have you tried with higher noise levels? Loss. When deblurring face images, all adopted losses are L1. In this case, the mathematical analysis in Eq (4) doesn't hold. Could you comment on this? Also, why did you choose L1 for this case? Are results with the L2 norm much worse? Compressive measurements. The shifted partitions are generated in a very particular way (Figure 8 in supplementary material). Is this really important? How sensitive are the results to this pattern? This is related to the matrix Q.

Other comments: It could be interesting to compare the proposed loss with the losses used in multi-image variational deblurring, for example, see: Zhang, H., Wipf, D. and Zhang, Y., 2013. Multi-image blind deblurring using a coupled adaptive sparse prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1051-1058).

-------- After rebuttal. I believe that the manuscript will improve with the changes that the authors have committed to do. Additionally, I would like the authors to clearly list the limitations of the current approach and briefly discuss them (see e.g., in rebuttal document, l42 to l44, estimation of \theta, motion kernel estimation). This will allow defining clearer future research directions for those who are willing to pursue this line of work. Since the authors have carefully addressed most of the questions and comments I am therefore updating my score to 7. I would like to see this paper presented at this venue!
Wasserstein Learning of Deep Generative Point Process Models
Abstract
Point processes are becoming very popular in modeling asynchronous sequential data due to their sound mathematical foundation and strength in modeling a variety of real-world phenomena. Currently, they are often characterized via an intensity function, which limits the model’s expressiveness due to unrealistic assumptions on its parametric form used in practice. Furthermore, they are learned via the maximum likelihood approach, which is prone to failure in multi-modal distributions of sequences. In this paper, we propose an intensity-free approach for point process modeling that transforms nuisance processes to a target one. Furthermore, we train the model using a likelihood-free approach leveraging the Wasserstein distance between point processes. Experiments on various synthetic and real-world data substantiate the superiority of the proposed point process model over conventional ones.
1 Introduction
Event sequences are ubiquitous in areas such as e-commerce, social networks, and health informatics. For example, events in e-commerce are the times a customer purchases a product from an online vendor such as Amazon. In social networks, event sequences are the times a user signs on or generates posts, clicks, and likes. In health informatics, events can be the times when a patient exhibits symptoms or receives treatments. Bidding and asking orders also comprise events in the stock market. In all of these applications, understanding and predicting user behaviors exhibited by the event dynamics are of great practical, economic, and societal interest.
Temporal point processes [1] are an effective mathematical tool for modeling event data. They have been applied to sequences arising from social networks [2, 3, 4], electronic health records [5], e-commerce [6], and finance [7]. A temporal point process is a random process whose realization consists of a list of discrete events localized in (continuous) time. The point process representation of sequence data is fundamentally different from the discrete time representation typically used in time series analysis. It directly models the time period between events as random variables, and allows temporal events to be modeled accurately, without requiring the choice of a time window to aggregate events, which may cause discretization errors. Moreover, it has a remarkably extensive theoretical foundation [8].
∗Authors contributed equally. Work completed at Georgia Tech.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
However, conventional point process models often make strong, unrealistic assumptions about the generative processes of the event sequences. In fact, a point process is characterized by its conditional intensity function – a stochastic model for the time of the next event given all the times of previous events. The functional form of the intensity is often designed to capture the phenomena of interest [9]. Some examples are homogeneous and non-homogeneous Poisson processes [10], self-exciting point processes [11], self-correcting point process models [12], and survival processes [8]. Unfortunately, they make various parametric assumptions about the latent dynamics governing the generation of the observed point patterns. As a consequence, model misspecification can cause significantly degraded performance using point process models, which is also shown by our experimental results later.
To address the aforementioned problem, the authors in [13, 14, 15] propose to learn a general representation of the underlying dynamics from the event history without assuming a fixed parametric form in advance. The intensity function of the temporal point process is viewed as a nonlinear function of the history of the process and is parameterized using a recurrent neural network. Attentional mechanisms have been explored to discover the underlying structure [16]. However, this line of work still relies on explicit modeling of the intensity function. Yet in many tasks, such as data generation or event prediction, knowledge of the whole intensity function is unnecessary. On the other hand, sampling sequences from intensity-based models is usually performed via a thinning algorithm [17], which is computationally expensive; many sample events might be rejected because of the rejection step, especially when the intensity exhibits high variation. More importantly, most of the methods based on the intensity function are trained by maximizing log likelihood or a lower bound on it. They are asymptotically equivalent to minimizing the Kullback-Leibler (KL) divergence between the data and model distributions, which suffers from serious issues such as mode dropping [18, 19]. Alternatively, Generative Adversarial Networks (GAN) [20] have proven to be a promising alternative to traditional maximum likelihood approaches [21, 22].
In this paper, for the first time, we bypass the intensity-based modeling and likelihood-based estimation of temporal point processes and propose a neural network-based model with a generative adversarial learning scheme for point processes. In GANs, two models are used to solve a minimax game: a generator which samples synthetic data from the model, and a discriminator which classifies the data as real or synthetic. Theoretically speaking, these models are capable of modeling an arbitrarily complex probability distribution, including distributions over discrete events. They achieve state-of-the-art results on a variety of generative tasks such as image generation, image super-resolution, 3D object generation, and video prediction [23, 24].
The original GAN in [20] minimizes the Jensen-Shannon (JS) divergence and is regarded as highly unstable and prone to missing modes. Recently, the Wasserstein GAN (WGAN) [25] was proposed, which uses the Earth Mover’s distance (EM) as an objective for training GANs. It has been shown that the EM objective, as a metric between probability distributions [26], has many advantages: the loss function correlates with the quality of the generated samples and reduces mode dropping [27]. Moreover, it leverages the geometry of the space of event sequences in terms of their distance, which is not the case for an MLE-based approach. In this paper we extend the notion of WGAN to temporal point processes and adopt a Recurrent Neural Network (RNN) for training. Importantly, we are able to demonstrate that Wasserstein-distance training of RNN point process models outperforms the same architecture trained using MLE.
In a nutshell, the contributions of the paper are: i) We propose the first intensity-free generative model for point processes and introduce the first (to our best knowledge) corresponding likelihood-free learning method; ii) We extend WGAN to point processes with a Recurrent Neural Network architecture for sequence generation learning; iii) In contrast to the usual subjective measures for evaluating GANs, we use a statistical and a quantitative measure to compare the performance of the model to conventional ones; iv) Extensive experiments involving various types of point processes on both synthetic and real datasets show the promising performance of our approach.
2 Proposed Framework
In this section, we define point processes in a way that is suitable for combination with WGANs.
2.1 Point Processes
Let S be a compact space equipped with a Borel σ-algebra B. Take Ξ as the set of counting measures on S with C as the smallest σ-algebra on it. Let (Ω,F ,P) be a probability space. A point process on S is a measurable map ξ : Ω→ Ξ from the probability space (Ω,F ,P) to the measurable space (Ξ, C). Figure 1-a illustrates this mapping.
Every realization of a point process ξ can be written as ξ = Σ_{i=1}^n δ_{X_i}, where δ is the Dirac measure, n is an integer-valued random variable, and the X_i’s are random elements of S, or events. A point process can be equivalently represented by a counting process: N(B) := ∫_B ξ(x) dx, which is simply the number of events in each Borel subset B ∈ B of S. The mean measure M of a point process ξ is a measure on S that assigns to every B ∈ B the expected number of events of ξ in B, i.e., M(B) := E[N(B)] for all B ∈ B. For an inhomogeneous Poisson process, M(B) = ∫_B λ(x) dx, where the intensity function λ(x) yields a positive measurable function on S. Intuitively speaking, λ(x) dx is the expected number of events in the infinitesimal dx. For the most common type of point process, the homogeneous Poisson process, λ(x) = λ and M(B) = λ|B|, where |·| is the Lebesgue measure on (S, B). More generally, in Cox point processes, λ(x) can be a random density, possibly depending on the history of the process. For any point process, given λ(·), N(B) ∼ Poisson(∫_B λ(x) dx). In addition, if B₁, ..., B_k ∈ B are disjoint, then N(B₁), ..., N(B_k) are independent conditioning on λ(·). For ease of exposition, we will present the framework for the case where the events happen on the real half-line of time, but the framework is easily extensible to the general space.
2.2 Temporal Point Processes
A particularly interesting case of point processes is given when S is the time interval [0, T), which we will call a temporal point process. Here, a realization is simply a set of time points: ξ = Σ_{i=1}^n δ_{t_i}. With a slight abuse of notation, we will write ξ = {t₁, ..., t_n}, where each t_i is a random time before T. Using a conditional intensity (rate) function is the usual way to characterize point processes.
For an inhomogeneous Poisson process (IP), the intensity λ(t) is a fixed non-negative function supported on [0, T). For example, it can be a multi-modal function comprised of k Gaussian kernels: λ(t) = Σ_{i=1}^k α_i (2πσ_i²)^{−1/2} exp(−(t − c_i)²/σ_i²), for t ∈ [0, T), where c_i and σ_i are the fixed center and standard deviation, respectively, and α_i is the weight (or importance) for kernel i.
A self-exciting (Hawkes) process (SE) is a Cox process where the intensity is determined by previous (random) events in a special parametric form: λ(t) = μ + β Σ_{t_i < t} g(t − t_i), where g is a nonnegative kernel function, e.g., g(t) = exp(−ωt) for some ω > 0. This process has the implication that the occurrence of an event will increase the probability of near-future events, and its influence will (usually) decrease over time, as captured by the (usually) decaying fixed kernel g. μ is the exogenous rate of firing events and β is the coefficient for the endogenous rate.
In contrast, in self-correcting processes (SC), an event will decrease the probability of future events: λ(t) = exp(ηt − Σ_{t_i < t} γ). The exponential ensures that the intensity is positive, while η and γ are the exogenous and endogenous rates.
We can utilize more flexible ways to model the intensity, e.g., by a Recurrent Neural Network (RNN): λ(t) = g_w(t, h_{t_i}), where h_{t_i} is the feedback loop capturing the influence of previous events (last updated at the latest event) and is updated by h_{t_i} = h_v(t_i, h_{t_{i−1}}). Here w, v are network weights.
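For concreteness, the three classical intensities above translate directly into code; the following sketch mirrors the formulas in this subsection (the default parameter values are arbitrary examples, not prescriptions).

```python
import numpy as np

def ip_intensity(t, alpha, c, sigma):
    """Inhomogeneous Poisson: mixture of k Gaussian kernels."""
    return sum(a * (2 * np.pi * s**2) ** -0.5 * np.exp(-(t - ci)**2 / s**2)
               for a, ci, s in zip(alpha, c, sigma))

def hawkes_intensity(t, history, mu=1.0, beta=0.8, omega=1.0):
    """Self-exciting: past events raise the rate, with exponentially decaying influence."""
    return mu + beta * sum(np.exp(-omega * (t - ti)) for ti in history if ti < t)

def self_correcting_intensity(t, history, eta=1.0, gamma=0.2):
    """Self-correcting: the rate grows with time and drops at each event occurrence."""
    return np.exp(eta * t - gamma * sum(1 for ti in history if ti < t))
```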
2.3 Wasserstein-Distance for Temporal Point Processes
Given samples from a point process, one way to estimate the process is to find a model (Ω_g, F_g, P_g) → (Ξ, C) that is close enough to the real data (Ω_r, F_r, P_r) → (Ξ, C). As mentioned in the introduction, the Wasserstein distance [25] is our choice of proximity measure. The Wasserstein distance between the distributions of two point processes is:
W(P_r, P_g) = inf_{ψ∈Ψ(P_r,P_g)} E_{(ξ,ρ)∼ψ}[‖ξ − ρ‖_⋆], (1)
where Ψ(P_r, P_g) denotes the set of all joint distributions ψ(ξ, ρ) whose marginals are P_r and P_g. The distance between two sequences, ‖ξ − ρ‖_⋆, is tricky and needs further attention. Take ξ = {x₁, x₂, ..., x_n} and ρ = {y₁, ..., y_m}, where for simplicity we first consider the case m = n. The two sequences can be thought of as discrete distributions μ^ξ = Σ_{i=1}^n (1/n) δ_{x_i} and μ^ρ = Σ_{i=1}^n (1/n) δ_{y_i}. Then, the distance between these two is an optimal transport problem argmin_{π∈Σ} ⟨π, C⟩, where Σ is the set of doubly stochastic matrices (rows and columns sum to one), ⟨·,·⟩ is the Frobenius dot product, and C is the cost matrix. C_{ij} captures the energy needed to move a probability mass from x_i to y_j. We take C_{ij} = ‖x_i − y_j‖_◦, where ‖·‖_◦ is the norm in S. It can be seen that the optimal solution is attained at extreme points and, by Birkhoff’s theorem, the extreme points of the set of doubly stochastic matrices are permutations [28]. In other words, the mass is transferred from a unique source event to a unique target event. Therefore, we have ‖ξ − ρ‖_⋆ = min_σ Σ_{i=1}^n ‖x_i − y_{σ(i)}‖_◦, where the minimum is taken among all n! permutations of 1 ... n. For the case m ≠ n, without loss of generality we assume n ≤ m and define the distance as follows:
‖ξ − ρ‖_⋆ = min_σ Σ_{i=1}^n ‖x_i − y_{σ(i)}‖_◦ + Σ_{i=n+1}^m ‖s − y_{σ(i)}‖, (2)
where s is a fixed limiting point on the border of the compact space S, and the minimum is over all permutations of 1 ... m. The second term penalizes unmatched points in a very special way, which will be clarified later. Appendix B proves that it is indeed a valid distance measure.
Interestingly, in the case of a temporal point process on [0, T), the distance between ξ = {t₁, ..., t_n} and ρ = {τ₁, ..., τ_m} reduces to
‖ξ − ρ‖_⋆ = Σ_{i=1}^n |t_i − τ_i| + (m − n) × T − Σ_{i=n+1}^m τ_i, (3)
where the time points are ordered increasingly, s = T is chosen as the anchor point, and |·| is the Lebesgue measure on the real line. A proof is given in Appendix C. This choice of distance is significant in two senses. First, it is computationally efficient, involving no excessive computation. Second, in terms of point processes, it is interpreted as the volume by which the two counting measures differ. Figure 1-b demonstrates this intuition and justifies our choice of metric on Ξ, and Appendix D contains the proof. The distance used in our current work is the simplest yet effective distance that exhibits high interpretability and efficient computability. More robust distances, such as local alignment distance and dynamic time warping [29], are great avenues for future work.
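With the anchor point s = T, the distance in (3) takes only a few lines of code; the sketch below assumes both inputs are sorted lists (or 1-D arrays) of event times in [0, T).

```python
def seq_distance(xi, rho, T):
    """||xi - rho||_* of Eq. (3) for sorted event-time sequences."""
    if len(xi) > len(rho):
        xi, rho = rho, xi          # ensure n = len(xi) <= m = len(rho)
    n, m = len(xi), len(rho)
    matched = sum(abs(t - tau) for t, tau in zip(xi, rho))
    unmatched = (m - n) * T - sum(rho[n:])   # extra events matched to the anchor s = T
    return matched + unmatched
```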
Equation (1) is computationally highly intractable and its dual form is usually utilized [25]:
W(P_r, P_g) = sup_{‖f‖_L≤1} E_{ξ∼P_r}[f(ξ)] − E_{ρ∼P_g}[f(ρ)], (4)
where the supremum is taken over all Lipschitz functions f : Ξ → ℝ, i.e., functions that assign a value to a sequence of events (points) and satisfy |f(ξ) − f(ρ)| ≤ ‖ξ − ρ‖_⋆ for all ξ and ρ. However, solving the dual form is still highly nontrivial. Enumerating all Lipschitz functions over point process realizations is impossible. Instead, we choose a parametric family of functions f_w to approximate the search space, and consider solving the problem
max_{w∈W, ‖f_w‖_L≤1} E_{ξ∼P_r}[f_w(ξ)] − E_{ρ∼P_g}[f_w(ρ)] (5)
where w ∈ W is the parameter. The more flexible f_w, the more accurate the approximation will be. It is notable that the W-distance leverages the geometry of the space of event sequences in terms of their distance, which is not the case for an MLE-based approach. It in turn requires functions of event sequences f(x₁, x₂, ...), rather than functions of the time stamps f(x_i). Furthermore, Stein’s method to approximate Poisson processes [30, 31] is also relevant, as it defines distances between a Poisson process and an arbitrary point process.
2.4 WGAN for Temporal Point Processes
Equipped with a way to approximately compute the Wasserstein distance, we will look for a model P_g that is close to the distribution of real sequences. Again, we choose a sufficiently flexible parametric family of models, g_θ, parameterized by θ. Inspired by GAN [20], this generator takes noise and turns it into a sample to mimic the real samples. In a conventional GAN or WGAN, a Gaussian or uniform distribution is chosen. In point processes, a homogeneous Poisson process plays the role of a
non-informative and uniform-like distribution: the probability of events in every region is independent of the rest and is proportional to its volume. Define the noise process as (Ω_z, F_z, P_z) → (Ξ, C); then ζ ∼ P_z is a sample from a Poisson process on S = [0, T) with constant rate λ_z > 0. Therefore, g_θ : Ξ → Ξ is a transformation in the space of counting measures. Note that λ_z is part of the prior knowledge and belief about the problem domain. Therefore, the objective of learning the generative model can be written as min_θ W(P_r, P_g), or equivalently:
min_θ max_{w∈W, ‖f_w‖_L≤1} E_{ξ∼P_r}[f_w(ξ)] − E_{ζ∼P_z}[f_w(g_θ(ζ))] (6)
In GAN terminology, f_w is called the discriminator and g_θ is known as the generator model. We estimate the generative model by enforcing that the sample sequences from the model have the same distribution as training sequences. Given L sample sequences from the real data D_r = {ξ₁, ..., ξ_L} and from the noise D_z = {ζ₁, ..., ζ_L}, the two expectations are estimated empirically as E_{ξ∼P_r}[f_w(ξ)] ≈ (1/L) Σ_{l=1}^L f_w(ξ_l) and E_{ζ∼P_z}[f_w(g_θ(ζ))] ≈ (1/L) Σ_{l=1}^L f_w(g_θ(ζ_l)).
2.5 Ingredients of WGANTPP
To proceed with our point-process-based WGAN, we need the generator function g_θ : Ξ → Ξ, the discriminator function f_w : Ξ → ℝ, and a way to enforce the Lipschitz constraint on f_w. Figure 4 in Appendix A illustrates the data flow for WGANTPP.
The generator transforms a given sequence to another sequence. Similar to [32, 33], we use Recurrent Neural Networks (RNNs) to model the generator. For clarity, we use the vanilla RNN to illustrate the computational process below; an LSTM is used in our experiments for its capacity to capture long-range dependency. If the input and output sequences are ζ = {z₁, ..., z_n} and ρ = {t₁, ..., t_n}, then the generator g_θ(ζ) = ρ works according to
h_i = φ_g^h(A_g^h z_i + B_g^h h_{i−1} + b_g^h),  t_i = φ_g^x(B_g^x h_i + b_g^x) (7)

Here h_i is the k-dimensional history embedding vector and φ_g^h and φ_g^x are the activation functions. The parameter set of the generator is θ = {(A_g^h)_{k×1}, (B_g^h)_{k×k}, (b_g^h)_{k×1}, (B_g^x)_{1×k}, (b_g^x)_{1×1}}.
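A minimal PyTorch sketch of the generator recurrence (7) follows; the paper uses an LSTM in practice, so this vanilla-RNN version is purely illustrative, and it already folds in the elu + 1 trick described in Sec. 3.2 to keep inter-arrival times positive.

```python
import torch
import torch.nn as nn

class TPPGenerator(nn.Module):
    """Vanilla-RNN generator g_theta: maps a noise sequence {z_i} to event times {t_i} (Eq. 7)."""
    def __init__(self, k=32):
        super().__init__()
        self.cell = nn.RNNCell(1, k)   # h_i = tanh(A z_i + B h_{i-1} + b)
        self.out = nn.Linear(k, 1)     # t_i = phi(B^x h_i + b^x)

    def forward(self, z):              # z: (seq_len, batch, 1) noise inter-arrivals
        h = torch.zeros(z.size(1), self.cell.hidden_size)
        gaps = []
        for z_i in z:
            h = self.cell(z_i, h)
            gaps.append(nn.functional.elu(self.out(h)) + 1)   # positive inter-arrival times
        return torch.cumsum(torch.cat(gaps, dim=1), dim=1)    # accumulate into event times
```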
Similarly, we define the discriminator function, which assigns a scalar value f_w(ρ) = Σ_{i=1}^n a_i to the sequence ρ = {t₁, ..., t_n} according to

h_i = φ_d^h(A_d^h t_i + B_d^h h_{i−1} + b_d^h),  a_i = φ_d^a(B_d^a h_i + b_d^a) (8)
where the parameter set is comprised of w = {(A_d^h)_{k×1}, (B_d^h)_{k×k}, (b_d^h)_{k×1}, (B_d^a)_{1×k}, (b_d^a)_{1×1}}. Note that both the generator and discriminator RNNs are causal networks: each event is only influenced by the previous events. To enforce the Lipschitz constraint, the original WGAN paper [18] adopts weight clipping. However, our initial experiments showed inferior performance using weight clipping; this is also reported by the same authors in their follow-up paper [27] to the original work. The poor performance of weight clipping for enforcing 1-Lipschitzness can be seen theoretically as well: consider a simple neural network with one input, one neuron, and one output, f(x) = σ(wx + b), with weight clipping |w| < c. Then,
|f′(x)| ≤ 1 ⟺ |w σ′(wx + b)| ≤ 1 ⟺ |w| ≤ 1/|σ′(wx + b)|. (9)

It is clear that when 1/|σ′(wx + b)| < c, which is quite likely to happen, the Lipschitz constraint is not necessarily satisfied. In our work, we use a novel approach for enforcing the Lipschitz constraint, avoiding the computation of the gradient, which can be costly and difficult for point processes. We add the Lipschitz constraint as a regularization term to the empirical loss of the RNN:
min_θ max_{w∈W} (1/L) [ Σ_{l=1}^L f_w(ξ_l) − Σ_{l=1}^L f_w(g_θ(ζ_l)) ] − ν Σ_{l,m=1}^L | |f_w(ξ_l) − f_w(g_θ(ζ_m))| / ‖ξ_l − g_θ(ζ_m)‖_⋆ − 1 | (10)
We can take each of the C(2L, 2) pairs of real and generated sequences and regularize based on them; however, we have seen that only a small portion of pairs (O(L)), randomly selected, is sufficient. The procedure of WGANTPP learning is given in Algorithm 1.
Remark. The significance of the Lipschitz constraint and regularization (or, more generally, any capacity control) is more apparent when we consider the connection of the W-distance to the optimal transport problem [28]. Basically, minimizing the W-distance between the empirical distribution and the model distribution is equivalent to a semidiscrete optimal transport [28]. Without capacity control for the generator and discriminator, the optimal solution simply maps a partition of the sample space to the set of data points, in effect memorizing the data points.
Algorithm 1 WGANTPP for Temporal Point Processes. Default values: α = 1e−4, β₁ = 0.5, β₂ = 0.9, m = 256, n_critic = 5.
Require: the regularization coefficient ν for the direct Lipschitz constraint; the batch size m; the number of critic iterations per generator iteration, n_critic; Adam hyper-parameters α, β₁, β₂.
Require: w₀, initial critic parameters; θ₀, initial generator parameters.
1: Set the prior λ_z to the expected event rate of the real data.
2: while θ has not converged do
3:   for t = 0, ..., n_critic do
4:     Sample point process realizations {ξ⁽ⁱ⁾}_{i=1}^m ∼ P_r from real data.
5:     Sample {ζ⁽ⁱ⁾}_{i=1}^m ∼ P_z from a Poisson process with rate λ_z.
6:     L′ ← [ (1/m) Σ_{i=1}^m f_w(g_θ(ζ⁽ⁱ⁾)) − (1/m) Σ_{i=1}^m f_w(ξ⁽ⁱ⁾) ] + ν Σ_{i,j=1}^m | |f_w(ξ_i) − f_w(g_θ(ζ_j))| / ‖ξ_i − g_θ(ζ_j)‖_⋆ − 1 |
7:     w ← Adam(∇_w L′, w, α, β₁, β₂)
8:   end for
9:   Sample {ζ⁽ⁱ⁾}_{i=1}^m ∼ P_z from a Poisson process with rate λ_z.
10:  θ ← Adam(−∇_θ (1/m) Σ_{i=1}^m f_w(g_θ(ζ⁽ⁱ⁾)), θ, α, β₁, β₂)
11: end while
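As a sketch, the critic objective of line 6 might be computed as follows; `f_w` and `g_theta` are the discriminator and generator networks, `seq_distance` is the metric of Eq. (3), sequences are assumed to be 1-D tensors of increasing event times, and the regularizer is evaluated on a random O(L) subset of pairs as described above.

```python
import random
import torch

def critic_loss(f_w, g_theta, real_seqs, noise_seqs, nu, T):
    """One critic objective L' (Algorithm 1, line 6) with the direct Lipschitz regularizer."""
    fake_seqs = [g_theta(z) for z in noise_seqs]
    loss = (sum(f_w(s) for s in fake_seqs)
            - sum(f_w(s) for s in real_seqs)) / len(real_seqs)

    # Direct Lipschitz penalty on randomly chosen real/fake pairs (Eq. 10);
    # only O(L) pairs, rather than all of them, are used.
    reg = 0.0
    for _ in range(len(real_seqs)):
        xi = random.choice(real_seqs)
        rho = random.choice(fake_seqs)
        ratio = (f_w(xi) - f_w(rho)).abs() / (seq_distance(xi, rho, T) + 1e-8)
        reg = reg + (ratio - 1).abs()
    return loss + nu * reg
```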
3 Experiments
The current work aims at exploring the feasibility of modeling point processes without prior knowledge of their underlying generating mechanism. To this end, we compare against the most widely used parametrized point processes (self-exciting, self-correcting, and inhomogeneous Poisson processes) and one flexible neural network model, the neural point process. We use the most general forms for a simple and clear exposition, and propose the very first adversarially trained point process model, in contrast to likelihood-based models.
3.1 Datasets and Protocol
Synthetic datasets. We simulate 20,000 sequences over time [0, T), where T = 15, for the inhomogeneous process (IP), self-exciting process (SE), self-correcting process (SC), and recurrent neural point process (NN). We also create another 4 (= C(4, 3)) datasets from the above 4 synthetic datasets by uniformly mixing triplets. The new datasets IP+SE+SC, IP+SE+NN, IP+SC+NN, and SE+SC+NN are created to test the mode-dropping problem of learning a generative model. The parameter settings are as follows:
i) Inhomogeneous process. The intensity function is independent of history and given in Sec. 2.2, where k = 3, α = [3, 7, 11], c = [1, 1, 1], σ = [2, 3, 2]. ii) Self-exciting process. Past events increase the rate of future events. The conditional intensity function is given in Sec. 2.2, where μ = 1.0, β = 0.8, and the decaying kernel is g(t − t_i) = e^{−(t−t_i)}. iii) Self-correcting process. The conditional intensity function is defined in Sec. 2.2. It increases with time and decreases with each event occurrence. We set η = 1.0, γ = 0.2. iv) Recurrent Neural Network process. The conditional intensity is given in Sec. 2.2, where the neural network’s parameters are set randomly and fixed. We first feed in a random variable from the uniform distribution on [0, 1], and then iteratively sample events from the intensity and feed the output into the RNN to get the new intensity for the next step.
Real datasets. We collect sequences separately from four publicly available datasets, namely the health-care dataset MIMIC-III, the public media dataset MemeTracker, NYSE stock exchanges, and publication citations. The time scale for all real data is scaled to [0, 15], and the details are as follows:
i) MIMIC. MIMIC-III (Medical Information Mart for Intensive Care III) is a large, publicly available dataset, which contains de-identified health-related data from 2001 to 2012 for more than 40,000 patients. We work with patients who appear at least 3 times, which yields 2,246 patients; their visiting timestamps are collected as the sequences. ii) Meme. MemeTracker tracks meme diffusion across public media, covering more than 172 million news articles or blog posts. The memes are sentences, such as ideas or proverbs, and the time is recorded when a meme spreads to certain websites. We randomly sample 22,000 cascades. iii) MAS. Microsoft Academic Search provides access to its data, including publication venues, time, citations, etc. We collect citation records for 50,000 papers. iv) NYSE. We use 0.7 million high-frequency transaction records from the NYSE for one stock in one day. The transactions are evenly divided into 3,200 sequences with equal durations.
3.2 Experimental Setup
Details. We can feed the temporal sequences to the generator and discriminator directly. In practice, all temporal sequences are transformed into time durations between two consecutive events, i.e., transforming the sequence ξ = {t₁, ..., t_n} into {τ₁, ..., τ_{n−1}}, where τ_i = t_{i+1} − t_i. This approach makes the model train easily and perform robustly. The transformed sequences are statistically identical to the original sequences and can be used as the inputs of our neural network. To make sure that the times are increasing, we use the elu + 1 activation function to produce positive inter-arrival times for the generator, and accumulate the intervals to obtain the sequence. The Adam optimization method with learning rate 1e−4, β₁ = 0.5, β₂ = 0.9 is applied. The code is available online.²
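The inter-arrival transformation is a one-liner in each direction; a small illustrative sketch:

```python
import numpy as np

def to_gaps(times):
    """{t_1, ..., t_n} -> {tau_1, ..., tau_{n-1}}, tau_i = t_{i+1} - t_i."""
    return np.diff(times)

def to_times(gaps, t0=0.0):
    """Invert the transform by accumulating gaps from an initial time t0."""
    return t0 + np.cumsum(gaps)
```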
Baselines. We compare the proposed method of learning point processes (i.e., minimizing sample distance) with maximum-likelihood-based methods for point processes. To use MLE inference for a point process, we have to specify its parametric model. The parametric models used are the inhomogeneous Poisson process (mixture of Gaussians), the self-exciting process, the self-correcting process, and the RNN. For each dataset, we use all the above solvers to learn the model and generate new sequences, and then we compare the generated sequences with real ones.

²https://github.com/xiaoshuai09/Wasserstein-Learning-For-Point-Process
Evaluation metrics. Although our model is an intensity-free approach, we will evaluate the performance by metrics that are computed via intensity. For all models, we work with the empirical intensity instead. Note that our objective measures are in sharp contrast with the best practices in GANs, in which performance is usually evaluated subjectively, e.g., by visual quality assessment. We evaluate the performance of the different methods to learn the underlying processes via two measures. 1) The first one is the well-known QQ plot of sequences generated from the learned model. The quantile-quantile (q-q) plot is the graphical representation of the quantiles of the first data set against the quantiles of the second data set. From the time-change property [10] of point processes, if the sequences come from the point process λ(t), then the integrals Λ = ∫_{t_i}^{t_{i+1}} λ(s) ds between consecutive events should be exponentially distributed with parameter 1. Therefore, the QQ plot of Λ against the exponential distribution with rate 1 should fall approximately along a 45-degree reference line. The evaluation procedure is as follows: i) The ground-truth data is generated from a model, say IP; ii) All 5 methods are used to learn the unknown process using the ground-truth data; iii) The learned model is used to generate a sequence; iv) The sequence is checked against the theoretical quantiles from the model to see whether the sequence really comes from the ground-truth generator or not; v) The deviation from slope 1 is visualized or reported as a performance measure. 2) The second metric is the deviation between the empirical intensity from the learned model and the ground-truth intensity. We can estimate the empirical intensity λ′(t) = E(N(t + δt) − N(t))/δt from a sufficient number of realizations of the point process by counting the average number of events during [t, t + δt], where N(t) is the counting process for λ(t). The L1 distance between the ground-truth empirical intensity and the learned empirical intensity is reported as a performance measure.
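The time-change check can be sketched as follows: numerically integrate the learned intensity between consecutive events and compare the resulting samples against unit-rate exponential quantiles (plotting omitted); `lam` is assumed to be a vectorized intensity function.

```python
import numpy as np

def rescaled_intervals(times, lam, grid=1000):
    """Lambda_i = integral of lam(s) ds over [t_i, t_{i+1}], by numerical quadrature."""
    out = []
    for a, b in zip(times[:-1], times[1:]):
        s = np.linspace(a, b, grid)
        out.append(np.trapz(lam(s), s))
    return np.asarray(out)

def qq_points(samples):
    """(theoretical, empirical) quantile pairs against Exp(1); should lie near y = x."""
    emp = np.sort(samples)
    probs = (np.arange(1, len(emp) + 1) - 0.5) / len(emp)
    theo = -np.log(1 - probs)   # inverse CDF of Exp(1)
    return theo, emp
```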
3.3 Results and Discussion
Synthetic data. Figure 2 presents the learning ability of WGANTPP when the ground-truth data is generated via different types of point processes. We first compare the QQ plots in the top row from the micro-perspective view, where the QQ plot describes the dependency between events. The red dots labeled Real are the optimal QQ distribution, for which the intensity function generating the sequences is known. We can observe that even though WGANTPP has no prior information about the ground-truth point process, it estimates the model better than every estimator except the one that knows the true parametric form of the data. This is quite expected: when we are training a model and know the parametric form of the generating model, we can find it better. However, whenever the model is misspecified (i.e., we don’t know the parametric form a priori), WGANTPP outperforms the other parametric forms and the RNN approach. The middle row of Figure 2 compares the empirical intensity. The Real line is the optimal empirical intensity estimated from the real data. An estimator can recover the empirical intensity well in the case that we know the parametric form the data comes from; otherwise, the estimated intensity degrades considerably when the model is misspecified. We observe that our WGANTPP reproduces the empirical intensity better and performs robustly across different types of point process data. For MLE-IP, different numbers of kernels were tested: the empirical intensity results improve but the QQ plot results degrade as the number of kernels increases, so only the result with 3 kernels is shown, mainly for clarity of presentation. The fact that the empirical intensities estimated by the MLE-IP method are good while its QQ plots are very bad indicates that the inhomogeneous Poisson process can capture the average intensity (macro dynamics) accurately but is incapable of capturing the dependency between events (micro dynamics). To test whether WGANTPP can cope with mode dropping, we generate mixtures of data from three different point processes and use this data to train the different models. Models with a specified form can handle limited types of data and fail to learn from diverse data sources. The last row of Figure 2 shows the learned intensity from mixtures of data. WGANTPP produces better empirical intensity than the alternatives, which fail to capture the heterogeneity in the data. To verify the robustness of WGANTPP, we randomly initialize the generator parameters and run 10 rounds to get the mean and std of the deviations of both the empirical intensity and the QQ plot from the ground truth. For the empirical intensity, we compute the integral of the difference between the learned intensity and the ground-truth intensity; Table 1 reports the mean and std of these intensity deviations. For each estimator, we obtain the slope of the regression line for its QQ plot; Table 1 also reports the mean and std of the deviations of the QQ-plot slope. Compared to the MLE estimators, WGANTPP consistently outperforms, even without prior knowledge about the parametric form of the true underlying generative point process. Note that for mixture models the QQ plot is not feasible.

Real-world data. We evaluate WGANTPP on diverse real-world data from health care, public media, scientific activities, and stock exchanges. For these real-world data, the underlying generative process is unknown; previous works usually assume certain types of point processes based on domain knowledge. Figure 3 shows the intensity learned from different models, where Real is estimated from the real-world data itself. Table 2 reports the intensity deviation. When all models have no prior knowledge about the true generative process, WGANTPP recovers the intensity better than all the other models across the data sets.
Analysis. We have observed that when the generating model is misspecified, WGANTPP outperforms the other methods without leveraging a priori knowledge of the parametric form. However, when the exact parametric form of the data is known and is utilized to learn the parameters, MLE with this full knowledge performs better. However, this is generally a strong assumption. As we have observed from the real-world experiments, WGANTPP is superior in terms of performance. Somewhat surprising is the observation that WGANTPP tends to outperform the MLE-NN approach, which basically uses the same RNN architecture but is trained using MLE. The superior performance of our approach compared to MLE-NN is another witness of the benefits of using the W-distance in finding a generator that fits the observed sequences well. Even though the expressive power of the estimators is the same for WGANTPP and MLE-NN, MLE-NN may suffer from mode dropping or get stuck in an inferior local minimum, since maximizing likelihood is asymptotically equivalent to minimizing the Kullback-Leibler (KL) divergence between the data and model distributions. The inherent weakness of the KL divergence [25] renders MLE-NN unstable, and the large variances of the deviations empirically demonstrate this point.
4 Conclusion and Future Work
We have presented a novel approach for Wasserstein learning of deep generative point processes which requires no prior knowledge about the underlying true process and can estimate it accurately across a wide scope of theoretical and real-world processes. For future work, we would like to explore the connection of the WGAN with the optimal transport problem. We will also explore other possible distance metrics over the realizations of point processes, and more sophisticated transforms of point processes, particularly those that are causal. Extending the current work to marked point processes and processes over structured spaces is another interesting avenue for future work.
Acknowledgements. This project was supported in part by NSF (IIS-1639792, IIS-1218749, IIS-1717916, CMMI-1745382), NIH BIGDATA 1R01GM108341, NSF CAREER IIS-1350983, NSF CNS-1704701, ONR N00014-15-1-2340, NSFC 61602176, Intel ISTC, NVIDIA and Amazon AWS.

1. What is the main contribution of the paper regarding Wasserstein generative adversarial networks (WGAN)?
2. What are the strengths and weaknesses of the proposed method compared to other approaches in the field?
3. How does the reviewer assess the clarity and reproducibility of the paper's content?
4. What are the suggestions for improving the distance measure used in the proposed approach?
5. Are there any typos or errors in the paper that need to be addressed?

Review
The paper proposes using Wasserstein generative adversarial networks (WGAN) for point process intensity or hazard estimation. The paper demonstrates the usefulness of objective functions beyond MLE and proposes a computationally efficient distance measure. It uses a regularized dual formulation for optimization and an RNN for generator and discriminator functions. Details on the methods and data used in the real analyses are light and could be documented more clearly for reproducibility.
The results demonstrate that the WGANTPP performs reasonably in a variety of tasks using both simulated and real data. The comparison baselines appear unreasonably weak, e.g. the kernel mixture process uses 3 components and visibly has 3 humps in all results, and e.g. the self-exciting and self correcting processes use fixed baselines when there is a large literature on settings for semi-parametric hazard functions. Using more flexible and analogous comparison models, e.g. Goulding ICDM 2016, Weiss ECML 2013, Jing WSDM 2017, would make for better comparison.
The distance defined would not deal with noise in the form of extraneous events, due to the alignment of the ith element with the ith element of each sequence. Might a distance based on local alignment be more robust than the one proposed? In what specific applications do you expect this distance to be more useful than log likelihood?
Eq 3. perhaps the second summation should read \sum_{j=i+1}^m y_j.
Section 3.1 inhomogeneous process: \alpha and c settings switched?
Spelling e.g. demosntrate |
NIPS | Title
Wasserstein Learning of Deep Generative Point Process Models
Abstract
Point processes are becoming very popular in modeling asynchronous sequential data due to their sound mathematical foundation and strength in modeling a variety of real-world phenomena. Currently, they are often characterized via intensity function which limits model’s expressiveness due to unrealistic assumptions on its parametric form used in practice. Furthermore, they are learned via maximum likelihood approach which is prone to failure in multi-modal distributions of sequences. In this paper, we propose an intensity-free approach for point processes modeling that transforms nuisance processes to a target one. Furthermore, we train the model using a likelihood-free leveraging Wasserstein distance between point processes. Experiments on various synthetic and real-world data substantiate the superiority of the proposed point process model over conventional ones.
1 Introduction
Event sequences are ubiquitous in areas such as e-commerce, social networks, and health informatics. For example, events in e-commerce are the times a customer purchases a product from an online vendor such as Amazon. In social networks, event sequences are the times a user signs on or generates posts, clicks, and likes. In health informatics, events can be the times when a patient exhibits symptoms or receives treatments. Bidding and asking orders also comprise events in the stock market. In all of these applications, understanding and predicting user behaviors exhibited by the event dynamics are of great practical, economic, and societal interest.
Temporal point processes [1] is an effective mathematical tool for modeling events data. It has been applied to sequences arising from social networks [2, 3, 4], electronic health records [5], ecommerce [6], and finance [7]. A temporal point process is a random process whose realization consists of a list of discrete events localized in (continuous) time. The point process representation of sequence data is fundamentally different from the discrete time representation typically used in time series analysis. It directly models the time period between events as random variables, and allows temporal events to be modeled accurately, without requiring the choice of a time window to aggregate events, which may cause discretization errors. Moreover, it has a remarkably extensive theoretical foundation [8].
∗Authors contributed equally. Work completed at Georgia Tech.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
However, conventional point process models often make strong unrealistic assumptions about the generative processes of the event sequences. In fact, a point process is characterized by its conditional intensity function – a stochastic model for the time of the next event given all the times of previous events. The functional form of the intensity is often designed to capture the phenomena of interests [9]. Some examples are homogeneous and non-homogeneous Poisson processes [10], self-exciting point processes [11], self-correcting point process models [12], and survival processes [8]. Unfortunately, they make various parametric assumptions about the latent dynamics governing the generation of the observed point patterns. As a consequence, model misspecification can cause significantly degraded performance using point process models, which is also shown by our experimental results later.
To address the aforementioned problem, the authors in [13, 14, 15] propose to learn a general representation of the underlying dynamics from the event history without assuming a fixed parametric form in advance. The intensity function of the temporal point process is viewed as a nonlinear function of the history of the process and is parameterized using a recurrent neural network. Attentional mechanisms have been explored to discover the underlying structure [16]. However, this line of work still relies on explicit modeling of the intensity function. Yet in many tasks, such as data generation or event prediction, knowledge of the whole intensity function is unnecessary. On the other hand, sampling sequences from intensity-based models is usually performed via a thinning algorithm [17], which is computationally expensive; many sample events might be rejected because of the rejection step, especially when the intensity exhibits high variation. More importantly, most methods based on the intensity function are trained by maximizing the log likelihood or a lower bound on it. They are asymptotically equivalent to minimizing the Kullback-Leibler (KL) divergence between the data and model distributions, which suffers from serious issues such as mode dropping [18, 19]. Alternatively, Generative Adversarial Networks (GAN) [20] have proven to be a promising alternative to traditional maximum likelihood approaches [21, 22].
In this paper, for the first time, we bypass the intensity-based modeling and likelihood-based estimation of temporal point processes and propose a neural network-based model with a generative adversarial learning scheme for point processes. In GANs, two models are used to solve a minimax game: a generator which samples synthetic data from the model, and a discriminator which classifies the data as real or synthetic. Theoretically speaking, these models are capable of modeling an arbitrarily complex probability distribution, including distributions over discrete events. They achieve state-of-the-art results on a variety of generative tasks such as image generation, image super-resolution, 3D object generation, and video prediction [23, 24].
The original GAN in [20] minimizes the Jensen-Shannon (JS) divergence and is regarded as highly unstable and prone to missing modes. Recently, the Wasserstein GAN (WGAN) [25] was proposed, which uses the Earth Mover's (EM) distance as an objective for training GANs. It has been shown that the EM objective, as a metric between probability distributions [26], has many advantages: the loss correlates with the quality of the generated samples and mode dropping is reduced [27]. Moreover, it leverages the geometry of the space of event sequences in terms of their distance, which is not the case for an MLE-based approach. In this paper we extend the notion of WGAN to temporal point processes and adopt a Recurrent Neural Network (RNN) for training. Importantly, we are able to demonstrate that Wasserstein-distance training of RNN point process models outperforms the same architecture trained using MLE.
In a nutshell, the contributions of the paper are: i) We propose the first intensity-free generative model for point processes and introduce the first (to the best of our knowledge) likelihood-free method for learning it; ii) We extend WGAN to point processes with a recurrent neural network architecture for sequence generation; iii) In contrast to the usual subjective measures for evaluating GANs, we use statistical and quantitative measures to compare the performance of the model to conventional ones; iv) Extensive experiments involving various types of point processes on both synthetic and real datasets show the promising performance of our approach.
2 Proposed Framework
In this section, we define point processes in a way that is suitable for combination with WGANs.
2.1 Point Processes
Let S be a compact space equipped with a Borel σ-algebra B. Take Ξ as the set of counting measures on S with C as the smallest σ-algebra on it. Let (Ω,F ,P) be a probability space. A point process on S is a measurable map ξ : Ω→ Ξ from the probability space (Ω,F ,P) to the measurable space (Ξ, C). Figure 1-a illustrates this mapping.
Every realization of a point process ξ can be written as ξ = ∑_{i=1}^{n} δ_{X_i}, where δ is the Dirac measure, n is an integer-valued random variable, and the X_i's are random elements of S, or events. A point process can be equivalently represented by a counting process: N(B) := ∫_B ξ(x) dx, which is simply the number of events in each Borel subset B ∈ B of S. The mean measure M of a point process ξ is a measure on S that assigns to every B ∈ B the expected number of events of ξ in B, i.e., M(B) := E[N(B)] for all B ∈ B. For an inhomogeneous Poisson process, M(B) = ∫_B λ(x) dx, where the intensity function λ(x) is a positive measurable function on S. Intuitively speaking, λ(x) dx is the expected number of events in the infinitesimal dx. For the most common type of point process, a homogeneous Poisson process, λ(x) = λ and M(B) = λ|B|, where | · | is the Lebesgue measure on (S, B). More generally, in Cox point processes, λ(x) can be a random density, possibly depending on the history of the process. For any point process, given λ(·), N(B) ∼ Poisson(∫_B λ(x) dx). In addition, if B_1, . . . , B_k ∈ B are disjoint, then N(B_1), . . . , N(B_k) are independent conditioned on λ(·). For ease of exposition, we will present the framework for the case where the events occur on the real half-line of time, but the framework is easily extensible to the general space.
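As a quick illustration of these definitions, the following minimal Python/NumPy sketch (the names are ours, not from the paper) draws one realization of a homogeneous Poisson process on [0, T) and evaluates the counting measure N(B) on an interval. It uses the standard fact that, conditioned on the total count, the events of a homogeneous Poisson process are i.i.d. uniform on [0, T).

```python
import numpy as np

def sample_homogeneous_poisson(lam, T, rng=np.random.default_rng()):
    """One realization of a homogeneous Poisson process on [0, T).

    The number of events is Poisson(lam * T); given the count,
    the event locations are i.i.d. uniform on [0, T)."""
    n = rng.poisson(lam * T)
    return np.sort(rng.uniform(0.0, T, size=n))

xi = sample_homogeneous_poisson(lam=2.0, T=15.0)

# Counting measure N(B) for B = [a, b): count the events in the interval.
a, b = 3.0, 7.0
N_B = np.sum((xi >= a) & (xi < b))   # E[N(B)] = lam * (b - a) = 8 here
```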
2.2 Temporal Point Processes
A particularly interesting case of point processes is given when S is the time interval [0, T), which we will call a temporal point process. Here, a realization is simply a set of time points: ξ = ∑_{i=1}^{n} δ_{t_i}. With a slight abuse of notation we will write ξ = {t_1, . . . , t_n}, where each t_i is a random time before T. Using a conditional intensity (rate) function is the usual way to characterize point processes.
For an inhomogeneous Poisson process (IP), the intensity λ(t) is a fixed non-negative function supported on [0, T). For example, it can be a multi-modal function comprised of k Gaussian kernels: λ(t) = ∑_{i=1}^{k} α_i (2πσ_i²)^{−1/2} exp(−(t − c_i)²/σ_i²), for t ∈ [0, T), where c_i and σ_i are fixed centers and standard deviations, respectively, and α_i is the weight (or importance) of kernel i.
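A minimal NumPy sketch of this Gaussian-mixture intensity (our own helper, keeping the exponent −(t − c_i)²/σ_i² exactly as written above):

```python
import numpy as np

def ip_intensity(t, alpha, c, sigma):
    """Gaussian-mixture intensity lambda(t) from Sec. 2.2, vectorized in t."""
    t = np.atleast_1d(np.asarray(t, float))[:, None]      # (len(t), 1)
    a, c, s = map(np.asarray, (alpha, c, sigma))          # each (k,)
    comps = a * (2 * np.pi * s**2) ** -0.5 * np.exp(-(t - c) ** 2 / s**2)
    return comps.sum(axis=1)

# e.g. the k = 3 setting used later in Sec. 3.1
lam = ip_intensity(np.linspace(0, 15, 5),
                   alpha=[3, 7, 11], c=[1, 1, 1], sigma=[2, 3, 2])
```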
A self-exciting (Hawkes) process (SE) is a Cox process where the intensity is determined by previous (random) events in a special parametric form: λ(t) = µ + β ∑_{t_i < t} g(t − t_i), where g is a nonnegative kernel function, e.g., g(t) = exp(−ωt) for some ω > 0. This process implies that the occurrence of an event increases the probability of near-future events, and its influence (usually) decreases over time, as captured by the (usually) decaying fixed kernel g. Here µ is the exogenous rate of firing events and β is the coefficient of the endogenous rate.
In contrast, in self-correcting processes (SC), an event decreases the probability of future events: λ(t) = exp(ηt − ∑_{t_i < t} γ). The exponential ensures that the intensity is positive, while η and γ are the exogenous and endogenous rates.
We can utilize more flexible ways to model the intensity, e.g., by a Recurrent Neural Network (RNN): λ(t) = g_w(t, h_{t_i}), where h_{t_i} is the feedback loop capturing the influence of previous events (last updated at the latest event) and is updated by h_{t_i} = h_v(t_i, h_{t_{i−1}}). Here w, v are network weights.
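For the history-dependent processes above (SE, SC, and the RNN intensity), sampling is typically done by thinning, as mentioned in the introduction. The sketch below is a simplified Ogata-style thinning sampler that assumes a global upper bound lam_bar on the conditional intensity; in practice the bound is usually updated adaptively. All names and parameter values are ours, for illustration only.

```python
import numpy as np

def sample_by_thinning(cond_intensity, lam_bar, T, rng=np.random.default_rng()):
    """Thinning: simulate a point process on [0, T) whose conditional
    intensity cond_intensity(t, history) is dominated by the constant lam_bar."""
    t, events = 0.0, []
    while True:
        # Candidate from the dominating homogeneous process with rate lam_bar.
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            return np.array(events)
        # Accept the candidate with probability lambda(t | history) / lam_bar.
        if rng.uniform() < cond_intensity(t, events) / lam_bar:
            events.append(t)

# Hawkes intensity with exponential kernel g(t) = exp(-t), mu = 1.0, beta = 0.8.
def hawkes(t, history, mu=1.0, beta=0.8):
    return mu + beta * sum(np.exp(-(t - ti)) for ti in history)

seq = sample_by_thinning(hawkes, lam_bar=10.0, T=15.0)
```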
2.3 Wasserstein-Distance for Temporal Point Processes
Given samples from a point process, one way to estimate the process is to find a model (Ω_g, F_g, P_g) → (Ξ, C) that is close enough to the real data (Ω_r, F_r, P_r) → (Ξ, C). As mentioned in the introduction, the Wasserstein distance [25] is our choice of proximity measure. The Wasserstein distance between the distributions of two point processes is:

W(P_r, P_g) = inf_{ψ ∈ Ψ(P_r, P_g)} E_{(ξ,ρ)∼ψ}[‖ξ − ρ‖⋆],    (1)
where Ψ(P_r, P_g) denotes the set of all joint distributions ψ(ξ, ρ) whose marginals are P_r and P_g. The distance between two sequences, ‖ξ − ρ‖⋆, is tricky and needs further attention. Take ξ = {x_1, x_2, . . . , x_n} and ρ = {y_1, . . . , y_m}, where for simplicity we first consider the case m = n. The two sequences can be thought of as discrete distributions µ^ξ = ∑_{i=1}^{n} (1/n) δ_{x_i} and µ^ρ = ∑_{i=1}^{n} (1/n) δ_{y_i}. Then, the distance between these two is an optimal transport problem argmin_{π∈Σ} ⟨π, C⟩, where Σ is the set of doubly stochastic matrices (rows and columns sum to one), ⟨·, ·⟩ is the Frobenius dot product, and C is the cost matrix. C_{ij} captures the energy needed to move a probability mass from x_i to y_j. We take C_{ij} = ‖x_i − y_j‖◦, where ‖ · ‖◦ is the norm on S. It can be seen that the optimal solution is attained at extreme points and, by Birkhoff's theorem, the extreme points of the set of doubly stochastic matrices are permutation matrices [28]. In other words, the mass is transferred from a unique source event to a unique target event. Therefore, we have ‖ξ − ρ‖⋆ = min_σ ∑_{i=1}^{n} ‖x_i − y_{σ(i)}‖◦, where the minimum is taken over all n! permutations of 1 . . . n. For the case m ≠ n, without loss of generality we assume n ≤ m and define the distance as follows:
‖ξ − ρ‖⋆ = min_σ ∑_{i=1}^{n} ‖x_i − y_{σ(i)}‖◦ + ∑_{i=n+1}^{m} ‖s − y_{σ(i)}‖◦,    (2)

where s is a fixed limiting point on the border of the compact space S and the minimum is over all permutations of 1 . . . m. The second term penalizes unmatched points in a very particular way, which will be clarified later. Appendix B proves that this is indeed a valid distance measure.
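A hedged sketch of Eq. (2) in Python: padding the shorter sequence with copies of the anchor s turns the definition into a standard assignment problem, which we solve here with SciPy's Hungarian-algorithm routine. The helper name is ours, not code from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def star_distance(xs, ys, s):
    """||xi - rho||_* of Eq. (2): pad the shorter sequence with the anchor s,
    then solve the optimal assignment (by Birkhoff's theorem the optimum of
    the transport problem is a permutation)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    if len(xs) > len(ys):
        xs, ys = ys, xs                        # w.l.o.g. len(xs) <= len(ys)
    xs = np.concatenate([xs, np.full(len(ys) - len(xs), float(s))])
    cost = np.abs(xs[:, None] - ys[None, :])   # |.|_o on the real line
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum()
```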
Interestingly, in the case of a temporal point process on [0, T), the distance between ξ = {t_1, . . . , t_n} and ρ = {τ_1, . . . , τ_m} reduces to

‖ξ − ρ‖⋆ = ∑_{i=1}^{n} |t_i − τ_i| + (m − n) × T − ∑_{i=n+1}^{m} τ_i,    (3)

where the time points are ordered increasingly, s = T is chosen as the anchor point, and | · | is the Lebesgue measure on the real line. A proof is given in Appendix C. This choice of distance is significant in two senses. First, it is computationally efficient and involves no excessive computation. Second, in terms of point processes, it is interpreted as the volume by which the two counting measures differ. Figure 1-b demonstrates this intuition and justifies our choice of metric on Ξ; Appendix D contains the proof. The distance used in the current work is the simplest effective distance that exhibits high interpretability and efficient computability. More robust distances, such as local alignment distance and dynamic time warping [29], are promising avenues for future work.
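The closed form (3) is straightforward to implement. The sketch below (our naming) sorts both sequences, matches them index-by-index, and charges each unmatched point its distance to the anchor s = T, using (m − n)T − ∑_{i>n} τ_i = ∑_{i>n} (T − τ_i). On sorted one-dimensional inputs it agrees with the assignment-based version above.

```python
import numpy as np

def star_distance_1d(xi, rho, T):
    """Closed form of Eq. (3) for event times on [0, T), anchor s = T."""
    xi, rho = np.sort(np.asarray(xi, float)), np.sort(np.asarray(rho, float))
    if len(xi) > len(rho):
        xi, rho = rho, xi                      # ensure n <= m
    n = len(xi)
    # Matched pairs contribute |t_i - tau_i|; unmatched points contribute T - tau_i.
    return np.abs(xi - rho[:n]).sum() + (T - rho[n:]).sum()
```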
Equation (1) is computationally highly intractable and its dual form is usually utilized [25]:
W(P_r, P_g) = sup_{‖f‖_L ≤ 1} E_{ξ∼P_r}[f(ξ)] − E_{ρ∼P_g}[f(ρ)],    (4)
where the supremum is taken over all Lipschitz functions f : Ξ → R, i.e., functions that assign a value to a sequence of events (points) and satisfy |f(ξ) − f(ρ)| ≤ ‖ξ − ρ‖⋆ for all ξ and ρ. However, solving the dual form is still highly nontrivial: enumerating all Lipschitz functions over point process realizations is impossible. Instead, we choose a parametric family of functions f_w to approximate the search space and consider solving the problem

max_{w∈W, ‖f_w‖_L ≤ 1} E_{ξ∼P_r}[f_w(ξ)] − E_{ρ∼P_g}[f_w(ρ)],    (5)
where w ∈ W is the parameter. The more flexible f_w is, the more accurate the approximation will be. It is notable that the W-distance leverages the geometry of the space of event sequences in terms of their distance, which is not the case for an MLE-based approach. It in turn requires functions of event sequences f(x_1, x_2, . . .), rather than functions of the individual time stamps f(x_i). Furthermore, Stein's method for approximating Poisson processes [30, 31] is also relevant, as it defines distances between a Poisson process and an arbitrary point process.
2.4 WGAN for Temporal Point Processes
Equipped with a way to approximately compute the Wasserstein distance, we look for a model P_g that is close to the distribution of real sequences. Again, we choose a sufficiently flexible parametric family of models g_θ parameterized by θ. Inspired by GAN [20], this generator takes noise and turns it into a sample that mimics the real samples. In a conventional GAN or WGAN, a Gaussian or uniform distribution is chosen. For point processes, a homogeneous Poisson process plays the role of a
non-informative and uniform-like distribution: the probability of events in every region is independent of the rest and is proportional to its volume. Define the noise process as (Ω_z, F_z, P_z) → (Ξ, C); then ζ ∼ P_z is a sample from a Poisson process on S = [0, T) with constant rate λ_z > 0. Therefore, g_θ : Ξ → Ξ is a transformation in the space of counting measures. Note that λ_z is part of the prior knowledge and belief about the problem domain. The objective of learning the generative model can then be written as min W(P_r, P_g), or equivalently:
min_θ max_{w∈W, ‖f_w‖_L ≤ 1} E_{ξ∼P_r}[f_w(ξ)] − E_{ζ∼P_z}[f_w(g_θ(ζ))]    (6)
In GAN terminology, f_w is called the discriminator and g_θ is known as the generator model. We estimate the generative model by enforcing that sample sequences from the model have the same distribution as the training sequences. Given L sample sequences from the real data D_r = {ξ_1, . . . , ξ_L} and from the noise D_z = {ζ_1, . . . , ζ_L}, the two expectations are estimated empirically: E_{ξ∼P_r}[f_w(ξ)] ≈ (1/L) ∑_{l=1}^{L} f_w(ξ_l) and E_{ζ∼P_z}[f_w(g_θ(ζ))] ≈ (1/L) ∑_{l=1}^{L} f_w(g_θ(ζ_l)).
2.5 Ingredients of WGANTPP
To proceed with our point-process-based WGAN, we need the generator function g_θ : Ξ → Ξ, the discriminator function f_w : Ξ → R, and a way to enforce the Lipschitz constraint on f_w. Figure 4 in Appendix A illustrates the data flow for WGANTPP.
The generator transforms a given sequence into another sequence. Similar to [32, 33], we use Recurrent Neural Networks (RNNs) to model the generator. For clarity, we use the vanilla RNN below to illustrate the computational process; an LSTM is used in our experiments for its capacity to capture long-range dependencies. If the input and output sequences are ζ = {z_1, . . . , z_n} and ρ = {t_1, . . . , t_n}, then the generator g_θ(ζ) = ρ works according to
h_i = φ_g^h(A_g^h z_i + B_g^h h_{i−1} + b_g^h),    t_i = φ_g^x(B_g^x h_i + b_g^x)    (7)

Here h_i is the k-dimensional history embedding vector and φ_g^h and φ_g^x are the activation functions. The parameter set of the generator is θ = {(A_g^h)_{k×1}, (B_g^h)_{k×k}, (b_g^h)_{k×1}, (B_g^x)_{1×k}, (b_g^x)_{1×1}}.
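A minimal NumPy sketch of the generator recurrence (7) follows. The naming and shapes are ours; the output activation is left unspecified at this point in the paper, so we use elu + 1, as in the experimental setup of Sec. 3.2, to keep the outputs positive.

```python
import numpy as np

def rnn_generator(z, Ah, Bh, bh, Bx, bx, phi_h=np.tanh):
    """Vanilla-RNN generator of Eq. (7): maps a noise sequence z_1..z_n to a
    positive output sequence. elu(x) + 1 is an assumed output activation."""
    elu1 = lambda x: np.where(x > 0, x + 1.0, np.exp(x))   # elu(x) + 1 > 0
    h = np.zeros(Bh.shape[0])
    out = []
    for zi in z:
        h = phi_h(Ah * zi + Bh @ h + bh)        # Ah: (k,), Bh: (k,k), bh: (k,)
        out.append(float(elu1(Bx @ h + bx)))    # Bx: (k,), bx: scalar
    return np.array(out)

k, rng = 8, np.random.default_rng(0)
params = dict(Ah=rng.normal(size=k), Bh=rng.normal(size=(k, k)) / k,
              bh=np.zeros(k), Bx=rng.normal(size=k), bx=0.0)
gaps = rnn_generator(rng.exponential(1.0, size=10), **params)
times = np.cumsum(gaps)   # accumulate positive gaps into event times
```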
Similarly, we define the discriminator function, which assigns a scalar value f_w(ρ) = ∑_{i=1}^{n} a_i to the sequence ρ = {t_1, . . . , t_n} according to

h_i = φ_d^h(A_d^h t_i + B_d^h h_{i−1} + b_d^h),    a_i = φ_d^a(B_d^a h_i + b_d^a)    (8)

where the parameter set is comprised of w = {(A_d^h)_{k×1}, (B_d^h)_{k×k}, (b_d^h)_{k×1}, (B_d^a)_{1×k}, (b_d^a)_{1×1}}. Note that both the generator and discriminator RNNs are causal networks: each event is only influenced by the previous events. To enforce the Lipschitz constraint, the original WGAN paper [25] adopts weight clipping. However, our initial experiments show inferior performance with weight clipping, as also reported by the same authors in their follow-up paper [27]. The poor performance of weight clipping for enforcing 1-Lipschitzness can also be seen theoretically: consider a simple neural network with one input, one neuron, and one output, f(x) = σ(wx + b), with weight clipping |w| < c. Then,

|f′(x)| ≤ 1 ⟺ |wσ′(wx + b)| ≤ 1 ⟺ |w| ≤ 1/|σ′(wx + b)|    (9)

It is clear that when 1/|σ′(wx + b)| < c, which is quite likely to happen, the Lipschitz constraint is not necessarily satisfied. In our work, we use a novel approach for enforcing the Lipschitz constraint, avoiding the computation of gradients, which can be costly and difficult for point processes: we add the Lipschitz constraint as a regularization term to the empirical loss of the RNN.
min_θ max_{w∈W, ‖f_w‖_L ≤ 1} (1/L) ∑_{l=1}^{L} f_w(ξ_l) − (1/L) ∑_{l=1}^{L} f_w(g_θ(ζ_l)) − ν ∑_{l,m=1}^{L} | |f_w(ξ_l) − f_w(g_θ(ζ_m))| / ‖ξ_l − g_θ(ζ_m)‖⋆ − 1 |    (10)
We could take each of the C(2L, 2) pairs of real and generated sequences and regularize based on them; however, we have observed that only a small portion of pairs (O(L)), randomly selected, is sufficient. The procedure for WGANTPP learning is given in Algorithm 1.
Remark. The significance of the Lipschitz constraint and regularization (or, more generally, any capacity control) is more apparent when we consider the connection between the W-distance and the optimal transport problem [28]. Basically, minimizing the W-distance between the empirical distribution and the model distribution is equivalent to a semidiscrete optimal transport [28]. Without capacity control on the generator and discriminator, the optimal solution simply maps a partition of the sample space to the set of data points, in effect memorizing the data points.
Algorithm 1 WGANTPP for Temporal Point Processes. Default values: α = 1e−4, β1 = 0.5, β2 = 0.9, m = 256, ncritic = 5.
Require: the regularization coefficient ν for the direct Lipschitz constraint; the batch size m; the number of critic iterations per generator iteration, ncritic; Adam hyper-parameters α, β1, β2.
Require: w0, initial critic parameters; θ0, initial generator parameters.
1: Set the prior λz to the expected event rate of the real data.
2: while θ has not converged do
3:   for t = 0, . . . , ncritic do
4:     Sample point process realizations {ξ^(i)}_{i=1}^{m} ∼ Pr from real data.
5:     Sample {ζ^(i)}_{i=1}^{m} ∼ Pz from a Poisson process with rate λz.
6:     L′ ← [(1/m) ∑_{i=1}^{m} f_w(g_θ(ζ^(i))) − (1/m) ∑_{i=1}^{m} f_w(ξ^(i))] + ν ∑_{i,j=1}^{m} | |f_w(ξ_i) − f_w(g_θ(ζ_j))| / ‖ξ_i − g_θ(ζ_j)‖⋆ − 1 |
7:     w ← Adam(∇_w L′, w, α, β1, β2)
8:   end for
9:   Sample {ζ^(i)}_{i=1}^{m} ∼ Pz from a Poisson process with rate λz.
10:  θ ← Adam(−∇_θ (1/m) ∑_{i=1}^{m} f_w(g_θ(ζ^(i))), θ, α, β1, β2)
11: end while
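The following PyTorch sketch illustrates one critic-plus-generator update of Algorithm 1 under simplifying assumptions: all sequences have equal length n (so Eq. (3) reduces to a sum of coordinate-wise gaps), the regularizer is computed over aligned pairs only (the O(L) subsample mentioned above), and all hyper-parameter values, module names, and the toy real data are ours, not from the released code. Noise is drawn as exponential inter-arrival times with rate λz, i.e., a homogeneous Poisson process.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeqNet(nn.Module):
    """Shared recurrent body for the generator and critic (cf. Eqs. (7)-(8))."""
    def __init__(self, hidden=32, positive=False):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
        self.positive = positive

    def forward(self, x):                       # x: (batch, n, 1)
        h, _ = self.rnn(x)
        y = self.head(h)                        # (batch, n, 1)
        return F.elu(y) + 1 if self.positive else y  # positive inter-arrivals

def f_w(critic, times):                         # f_w(rho) = sum_i a_i
    return critic(times).sum(dim=(1, 2))

def star_dist(a, b):                            # Eq. (3), equal-length case
    return (a - b).abs().sum(dim=(1, 2))

gen, critic = SeqNet(positive=True), SeqNet()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4, betas=(0.5, 0.9))
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-4, betas=(0.5, 0.9))
nu, lam_z, m, n = 0.1, 2.0, 64, 20              # hypothetical settings

real_gaps = torch.rand(m, n, 1)                 # stand-in for real duration data
for _ in range(5):                              # ncritic inner critic steps
    noise = torch.distributions.Exponential(lam_z).sample((m, n, 1))
    fake_gaps = gen(noise).detach()             # hold the generator fixed
    t_r, t_f = real_gaps.cumsum(1), fake_gaps.cumsum(1)   # event times
    reg = ((f_w(critic, t_r) - f_w(critic, t_f)).abs()
           / (star_dist(t_r, t_f) + 1e-8) - 1).abs().mean()
    loss_c = f_w(critic, t_f).mean() - f_w(critic, t_r).mean() + nu * reg
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()

noise = torch.distributions.Exponential(lam_z).sample((m, n, 1))
loss_g = -f_w(critic, gen(noise).cumsum(1)).mean()
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```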
3 Experiments
The current work aims at exploring the feasibility of modeling point processes without prior knowledge of the underlying generating mechanism. To this end, we compare against the most widely used parametrized point processes, namely self-exciting, self-correcting, and inhomogeneous Poisson processes, as well as one flexible neural network model, the neural point process. We use the most general forms of these models for a simpler and clearer exposition, and propose the very first adversarially trained point process model, in contrast to likelihood-based models.
3.1 Datasets and Protocol
Synthetic datasets. We simulate 20,000 sequences over the time window [0, T) with T = 15 for each of the inhomogeneous process (IP), self-exciting process (SE), self-correcting process (SC), and recurrent neural point process (NN). We also create another 4 (= C(4, 3)) datasets from the above 4 synthetic datasets as uniform mixtures over the triplets. The new datasets IP+SE+SC, IP+SE+NN, IP+SC+NN, and SE+SC+NN are created to test for the mode dropping problem of learning a generative model. The parameter settings are as follows:
i) Inhomogeneous process. The intensity function is independent of history and given in Sec. 2.2, where k = 3, α = [3, 7, 11], c = [1, 1, 1], σ = [2, 3, 2].
ii) Self-exciting process. Past events increase the rate of future events. The conditional intensity function is given in Sec. 2.2, where µ = 1.0, β = 0.8, and the decaying kernel is g(t − t_i) = e^{−(t−t_i)}.
iii) Self-correcting process. The conditional intensity function is defined in Sec. 2.2; it increases with time and decreases with event occurrences. We set η = 1.0, γ = 0.2.
iv) Recurrent neural network process. The conditional intensity is given in Sec. 2.2, where the neural network's parameters are set randomly and fixed. We first feed a random variable drawn uniformly from [0, 1], and then iteratively sample events from the intensity and feed the output back into the RNN to obtain the intensity for the next step.
Real datasets. We collect sequences from four publicly available datasets: health care (MIMIC-III), public media (MemeTracker), NYSE stock exchanges, and publication citations. The time scale of all real data is rescaled to [0, 15], and the details are as follows:
i) MIMIC. MIMIC-III (Medical Information Mart for Intensive Care III) is a large, publicly available dataset containing de-identified health-related data from 2001 to 2012 for more than 40,000 patients. We worked with patients who appear at least 3 times, which yields 2,246 patients. Their visiting timestamps are collected as the sequences.
ii) Meme. MemeTracker tracks meme diffusion over public media and contains more than 172 million news articles and blog posts. The memes are sentences, such as ideas and proverbs, and the time is recorded when a meme spreads to certain websites. We randomly sample 22,000 cascades.
iii) MAS. Microsoft Academic Search provides access to its data, including publication venues, times, citations, etc. We collect citation records for 50,000 papers.
iv) NYSE. We use 0.7 million high-frequency transaction records from the NYSE for one stock on one day. The transactions are evenly divided into 3,200 sequences of equal duration.
3.2 Experimental Setup
Details. We could feed the temporal sequences to the generator and discriminator directly. In practice, all temporal sequences are transformed into durations between consecutive events, i.e., the sequence ξ = {t_1, . . . , t_n} is transformed into {τ_1, . . . , τ_{n−1}}, where τ_i = t_{i+1} − t_i. This makes the model easier to train and more robust. The transformed sequences are statistically identical to the original sequences and can be used as inputs to our neural network. To make sure that the times are increasing, we use the elu + 1 activation function to produce positive inter-arrival times in the generator and accumulate the intervals to obtain the sequence. The Adam optimization method with learning rate 1e−4, β1 = 0.5, β2 = 0.9 is applied. The code is available online.²
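A tiny sketch of this preprocessing (our own helpers): event times are mapped to inter-arrival durations for training, and positive generated durations are accumulated back into event times.

```python
import numpy as np

def to_durations(times):              # {t_1, ..., t_n} -> {tau_1, ..., tau_{n-1}}
    return np.diff(np.sort(times))

def to_times(durations, t0=0.0):      # accumulate positive durations into times
    return t0 + np.cumsum(durations)

elu_plus_one = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1 > 0
```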
Baselines. We compare the proposed method for learning point processes (i.e., minimizing the sample distance) with maximum-likelihood-based methods for point processes. To use MLE inference for a point process, we have to specify its parametric model. The parametric models used are the inhomogeneous Poisson process (mixture of Gaussians), the self-exciting process, the self-correcting process, and the RNN. For
2https://github.com/xiaoshuai09/Wasserstein-Learning-For-Point-Process
each dataset, we use all of the above solvers to learn a model and generate new sequences, and then we compare the generated sequences with the real ones.
Evaluation metrics. Although our model is an intensity-free approach, we evaluate performance using metrics that are computed via the intensity; for all models, we work with the empirical intensity. Note that our objective measures are in sharp contrast with the best practices in GANs, where performance is usually evaluated subjectively, e.g., by visual quality assessment. We evaluate the performance of the different methods at learning the underlying processes via two measures. 1) The first is the well-known QQ plot of sequences generated from the learned model. The quantile-quantile (QQ) plot is the graphical representation of the quantiles of one data set against the quantiles of another. By the time-change property [10] of point processes, if the sequences come from the point process λ(t), then the integrals Λ = ∫_{t_i}^{t_{i+1}} λ(s) ds between consecutive events should be exponentially distributed with parameter 1. Therefore, the QQ plot of Λ against the exponential distribution with rate 1 should fall approximately along a 45-degree reference line. The evaluation procedure is as follows: i) the ground-truth data is generated from a model, say IP; ii) all 5 methods are used to learn the unknown process from the ground-truth data; iii) each learned model is used to generate a sequence; iv) the sequence is tested against the theoretical quantiles of the model to see whether it really comes from the ground-truth generator; v) the deviation from slope 1 is visualized or reported as a performance measure. 2) The second metric is the deviation between the empirical intensity of the learned model and the ground-truth intensity. We can estimate the empirical intensity λ′(t) = E[N(t + δt) − N(t)]/δt from a sufficient number of realizations of the point process by counting the average number of events during [t, t + δt], where N(t) is the counting process for λ(t). The L1 distance between the ground-truth empirical intensity and the learned empirical intensity is reported as a performance measure.
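Both metrics are easy to compute. The sketch below (our helpers, with a simple trapezoidal integration and a vectorized intensity callable lam) produces the time-rescaled residuals Λ_i used for the QQ plot and a binned estimate of the empirical intensity λ′(t).

```python
import numpy as np

def rescaled_gaps(times, lam, grid=1000):
    """Time-change residuals Lambda_i = integral of lam over [t_i, t_{i+1}];
    for a correctly specified lam they are i.i.d. Exponential(1)."""
    times = np.sort(np.asarray(times, float))
    out = []
    for a, b in zip(times[:-1], times[1:]):
        s = np.linspace(a, b, grid)
        v = lam(s)
        out.append(np.sum((v[:-1] + v[1:]) / 2 * np.diff(s)))  # trapezoid rule
    return np.array(out)

def empirical_intensity(seqs, T, bins=50):
    """lambda'(t) ~ E[N(t + dt) - N(t)] / dt, averaged over realizations."""
    edges = np.linspace(0.0, T, bins + 1)
    counts = np.mean([np.histogram(s, bins=edges)[0] for s in seqs], axis=0)
    return counts / (edges[1] - edges[0]), edges
```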
3.3 Results and Discussion
Synthetic data. Figure 2 presents the learning ability of WGANTPP when the ground-truth data is generated via different types of point processes. We first compare the QQ plots in the top row, which take the micro-perspective view: the QQ plot describes the dependency between events. Red dots labeled Real give the optimal QQ distribution, where the intensity function that generated the sequences is known. We can observe that even though WGANTPP has no prior information about the ground-truth point process, it estimates the model better than all estimators except the one that knows the parametric form of the data. This is quite expected: when we train a model and know the parametric form of the generating model, we can fit it better. However, whenever the model is misspecified (i.e., we do not know the parametric form a priori), WGANTPP outperforms the other parametric forms and the RNN approach. The middle row of Figure 2 compares the empirical intensities. The Real line is the optimal empirical intensity estimated from the real data. An estimator can recover the empirical intensity well when the parametric form the data comes from is known; otherwise, the estimated intensity degrades considerably under misspecification. We can observe that our WGANTPP reproduces the empirical intensity better and performs robustly across the different types of point process data. For MLE-IP, different numbers of kernels were tested; the empirical intensity results improve but the QQ plot results degrade as the number of kernels increases, so only the result for 3 kernels is shown, mainly for clarity of presentation. The fact that the empirical intensities estimated by the MLE-IP method are good while the QQ plots are very bad indicates that the inhomogeneous Poisson process can capture the average intensity (macro dynamics) accurately but is incapable of capturing the dependency between events (micro dynamics).

To test whether WGANTPP can cope with mode dropping, we generate mixtures of data from three different point processes and use this data to train the different models. Models with a specified form can handle limited types of data and fail to learn from diverse data sources. The last row of Figure 2 shows the intensities learned from mixtures of data. WGANTPP produces better empirical intensities than the alternatives, which fail to capture the heterogeneity in the data. To verify the robustness of WGANTPP, we randomly initialize the generator parameters and run 10 rounds to obtain the mean and standard deviation of the deviations from the ground truth for both the empirical intensity and the QQ plot. For the empirical intensity, we compute the integral of the difference between the learned and ground-truth intensities. For each estimator, we obtain the slope of the regression line for its QQ plot. Table 1 reports the means and standard deviations of the intensity deviations and of the QQ-plot slopes. Compared to the MLE estimators, WGANTPP consistently outperforms, even without prior knowledge about the parametric form of the true underlying generative point process. Note that for mixture models the QQ plot is not feasible.

Real-world data. We evaluate WGANTPP on diverse real-world processes from health care, public media, scientific activities, and stock exchanges. For these real-world data, the underlying
generative process is unknown; previous works usually assume a certain type of point process based on domain knowledge. Figure 3 shows the intensities learned by the different models, where Real is estimated from the real-world data itself. Table 2 reports the intensity deviations. When no model has prior knowledge about the true generative process, WGANTPP recovers the intensity better than all the other models across the data sets.
Analysis. We have observed that when the generating model is misspecified, WGANTPP outperforms the other methods without leveraging a priori knowledge of the parametric form. When the exact parametric form of the data is known and utilized to learn the parameters, MLE with this full knowledge performs better; however, this is generally a strong assumption. As we have observed in the real-world experiments, WGANTPP is superior in terms of performance. Somewhat surprising is the observation that WGANTPP tends to outperform the MLE-NN approach, which uses the same RNN architecture but is trained using MLE. The superior performance of our approach compared to MLE-NN is another witness to the benefits of using the W-distance for finding a generator that fits the observed sequences well. Even though the expressive power of the estimators is the same for WGANTPP and MLE-NN, MLE-NN may suffer from mode dropping or get stuck in an inferior local minimum, since maximizing likelihood is asymptotically equivalent to minimizing the Kullback-Leibler (KL) divergence between the data and model distributions. The inherent weaknesses of the KL divergence [25] make MLE-NN perform unstably, and the large variances of the deviations empirically demonstrate this point.
4 Conclusion and Future Work
We have presented a novel approach for Wasserstein learning of deep generative point processes that requires no prior knowledge about the underlying true process and can estimate it accurately across a wide range of theoretical and real-world processes. For future work, we would like to explore the connection between the WGAN and the optimal transport problem. We will also explore other possible distance metrics over realizations of point processes, and more sophisticated transforms of point processes, particularly causal ones. Extending the current work to marked point processes and processes over structured spaces is another interesting avenue for future work.
Acknowledgements. This project was supported in part by NSF (IIS-1639792, IIS-1218749, IIS-1717916, CMMI-1745382), NIH BIGDATA 1R01GM108341, NSF CAREER IIS-1350983, NSF CNS-1704701, ONR N00014-15-1-2340, NSFC 61602176, Intel ISTC, NVIDIA and Amazon AWS.
1. What is the main contribution of the paper in the field of point process estimation?
2. How does the proposed approach utilize Wasserstein-GAN, and what advantages does it offer over traditional maximum likelihood methods?
3. Can you provide more information about the distance metric used between two realizations of a point process, as well as the family of Lipschitz functions defined in the paper?
4. How effective is the proposed method in comparison to maximum likelihood approaches, especially when dealing with unknown underlying intensity models?
5. Could you clarify how the generator function is modeled in the paper, particularly regarding the issue of non-increasing generated sequences? | Review | Review
This paper proposes to perform estimation of a point process using the Wasserstein-GAN approach.
More precisely, given data that has been generated by a point process on the real line, the goal is to build a model of this point process. Instead of using maximum likelihood, the authors proposed to use WGAN.
This requires to:
- define a distance between 2 realizations of a point process
- define a family of Lipschitz functions with respect to this distance
- define a generative model which transforms "noise" into a point process
The contribution of the paper is to propose a particular way of addressing these three points and thus demonstrate how to use WGAN in this setting.
The resulting approach is compared on a variety of point processes (both synthetic and real) with maximum likelihood approaches and shown to compare favorably (especially when the underlying intensity model is unknown).
I must admit that I am not very familiar with estimation of point processes and the corresponding applications and thus cannot judge the potential impact and relevance of the proposed method. However, I feel that the adaptation of WGAN (which is becoming increasingly popular in a variety of domains) to
the estimation of point processes is not so obvious and the originality of the contribution comes from proposing a reasonable approach to do this adaptation, along with some insights regarding the implementation of the Lipschitz constraint which I find interesting.
One aspect that could be further clarified is the modeling of the generator function: from the definition in equation (7) there is no guarantee that the generated sequence t_i will be increasing. Is it the case that the weights are constrained to be positive, for example? Or is it the case that the algorithm works even when the generated sequence is not increasing, since the discriminator function would discriminate against such sequences and thus encourage the generator to produce increasing ones?
NIPS | Title
Wasserstein Learning of Deep Generative Point Process Models
Abstract
Point processes are becoming very popular in modeling asynchronous sequential data due to their sound mathematical foundation and strength in modeling a variety of real-world phenomena. Currently, they are often characterized via intensity function which limits model’s expressiveness due to unrealistic assumptions on its parametric form used in practice. Furthermore, they are learned via maximum likelihood approach which is prone to failure in multi-modal distributions of sequences. In this paper, we propose an intensity-free approach for point processes modeling that transforms nuisance processes to a target one. Furthermore, we train the model using a likelihood-free leveraging Wasserstein distance between point processes. Experiments on various synthetic and real-world data substantiate the superiority of the proposed point process model over conventional ones.
1 Introduction
Event sequences are ubiquitous in areas such as e-commerce, social networks, and health informatics. For example, events in e-commerce are the times a customer purchases a product from an online vendor such as Amazon. In social networks, event sequences are the times a user signs on or generates posts, clicks, and likes. In health informatics, events can be the times when a patient exhibits symptoms or receives treatments. Bidding and asking orders also comprise events in the stock market. In all of these applications, understanding and predicting user behaviors exhibited by the event dynamics are of great practical, economic, and societal interest.
Temporal point processes [1] is an effective mathematical tool for modeling events data. It has been applied to sequences arising from social networks [2, 3, 4], electronic health records [5], ecommerce [6], and finance [7]. A temporal point process is a random process whose realization consists of a list of discrete events localized in (continuous) time. The point process representation of sequence data is fundamentally different from the discrete time representation typically used in time series analysis. It directly models the time period between events as random variables, and allows temporal events to be modeled accurately, without requiring the choice of a time window to aggregate events, which may cause discretization errors. Moreover, it has a remarkably extensive theoretical foundation [8].
∗Authors contributed equally. Work completed at Georgia Tech.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
However, conventional point process models often make strong unrealistic assumptions about the generative processes of the event sequences. In fact, a point process is characterized by its conditional intensity function – a stochastic model for the time of the next event given all the times of previous events. The functional form of the intensity is often designed to capture the phenomena of interests [9]. Some examples are homogeneous and non-homogeneous Poisson processes [10], self-exciting point processes [11], self-correcting point process models [12], and survival processes [8]. Unfortunately, they make various parametric assumptions about the latent dynamics governing the generation of the observed point patterns. As a consequence, model misspecification can cause significantly degraded performance using point process models, which is also shown by our experimental results later.
To address the aforementioned problem, the authors in [13, 14, 15] propose to learn a general representation of the underlying dynamics from the event history without assuming a fixed parametric form in advance. The intensity function of the temporal point process is viewed as a nonlinear function of the history of the process and is parameterized using a recurrent neural network. Attenional mechanism is explored to discover the underlying structure [16]. Apparently this line of work still relies on explicit modeling of the intensity function. However, in many tasks such as data generation or event prediction, knowledge of the whole intensity function is unnecessary. On the other hand, sampling sequences from intensity-based models is usually performed via a thinning algorithm [17], which is computationally expensive; many sample events might be rejected because of the rejection step, especially when the intensity exhibits high variation. More importantly, most of the methods based on intensity function are trained by maximizing log likelihood or a lower bound on it. They are asymptotically equivalent to minimizing the Kullback-Leibler (KL) divergence between the data and model distributions, which suffers serious issues such as mode dropping [18, 19]. Alternatively, Generative Adversarial Networks (GAN) [20] have proven to be a promising alternative to traditional maximum likelihood approaches [21, 22].
In this paper, for the first time, we bypass the intensity-based modeling and likelihood-based estimation of temporal point processes and propose a neural network-based model with a generative adversarial learning scheme for point processes. In GANs, two models are used to solve a minimax game: a generator which samples synthetic data from the model, and a discriminator which classifies the data as real or synthetic. Theoretically speaking, these models are capable of modeling an arbitrarily complex probability distribution, including distributions over discrete events. They achieve state-of-the-art results on a variety of generative tasks such as image generation, image super-resolution, 3D object generation, and video prediction [23, 24].
The original GAN in [20] minimizes the Jensen-Shannon (JS) and is regarded as highly unstable and prone to miss modes. Recently, Wasserstein GAN (WGAN) [25] is proposed to use the Earth Moving distance (EM) as an objective for training GANs. Furthermore it is shown that the EM objective, as a metric between probability distributions [26] has many advantages as the loss function correlates with the quality of the generated samples and reduces mode dropping [27]. Moreover, it leverages the geometry of the space of event sequences in terms of their distance, which is not the case for an MLE-based approach. In this paper we extend the notion of WGAN for temporal point processes and adopt a Recurrent Neural Network (RNN) for training. Importantly, we are able to demonstrate that Wasserstein distance training of RNN point process models outperforms the same architecture trained using MLE.
In a nutshell, the contributions of the paper are: i) We propose the first intensity-free generative model for point processes and introduce the first (to our best knowledge) likelihood-free corresponding learning methods; ii) We extend WGAN for point processes with Recurrent Neural Network architecture for sequence generation learning; iii) In contrast to the usual subjective measures of evaluating GANs we use a statistical and a quantitative measure to compare the performance of the model to the conventional ones. iv) Extensive experiments involving various types of point processes on both synthetic and real datasets show the promising performance of our approach.
2 Proposed Framework
In this section, we define Point Processes in a way that is suitable to be combined with the WGANs.
2.1 Point Processes
Let S be a compact space equipped with a Borel σ-algebra B. Take Ξ as the set of counting measures on S with C as the smallest σ-algebra on it. Let (Ω,F ,P) be a probability space. A point process on S is a measurable map ξ : Ω→ Ξ from the probability space (Ω,F ,P) to the measurable space (Ξ, C). Figure 1-a illustrates this mapping.
Every realization of a point process ξ can be written as ξ = ∑n i=1 δXi where δ is the Dirac measure, n is an integer-valued random variable and Xi’s are random elements of S or events. A point process can be equivalently represented by a counting process: N(B) := ∫ B ξ(x)dx, which basically is the number of events in each Borel subset B ∈ B of S. The mean measure M of a point process ξ is a measure on S that assigns to every B ∈ B the expected number of events of ξ in B, i.e., M(B) := E[N(B)] for all B ∈ B. For inhomogeneous Poisson process, M(B) = ∫ B λ(x)dx, where the intensity function λ(x) yields a positive measurable function on S. Intuitively speaking, λ(x)dx is the expected number of events in the infinitesimal dx. For the most common type of point process, a Homogeneous Poisson process, λ(x) = λ and M(B) = λ|B|, where | · | is the Lebesgue measure on (S,B). More generally, in Cox point processes, λ(x) can be a random density possibly depending on the history of the process. For any point process, given λ(·), N(B) ∼ Poisson( ∫ B λ(x)dx). In addition, if B1, . . . , Bk ∈ B are disjoint, then N(B1), . . . , N(Bk) are independent conditioning on λ(·). For the ease of exposition, we will present the framework for the case where the events are happening in the real half-line of time. But the framework is easily extensible to the general space.
2.2 Temporal Point Processes
A particularly interesting case of point processes is given when S is the time interval [0, T ), which we will call a temporal point process. Here, a realization is simply a set of time points: ξ = ∑n i=1 δti . With a slight notation abuse we will write ξ = {t1, . . . , tn} where each ti is a random time before T . Using a conditional intensity (rate) function is the usual way to characterize point processes.
For Inhomogeneous Poisson process (IP), the intensity λ(t) is a fixed non-negative function supported in [0, T ). For example, it can be a multi-modal function comprised of k Gaussian kernels: λ(t) =∑k i=1 αi(2πσ 2 i ) −1/2 exp ( −(t− ci)2/σ2i ) , for t ∈ [0, T ), where ci and σi are fixed center and standard deviation, respectively, and αi is the weight (or importance) for kernel i.
A self-exciting (Hawkes) process (SE) is a cox process where the intensity is determined by previous (random) events in a special parametric form: λ(t) = µ+β ∑ ti<t
g(t− ti), where g is a nonnegative kernel function, e.g., g(t) = exp(−ωt) for some ω > 0. This process has an implication that the occurrence of an event will increase the probability of near future events and its influence will (usually) decrease over time, as captured by (the usually) decaying fixed kernel g. µ is the exogenous rate of firing events and α is the coefficient for the endogenous rate.
In contrast, in self-correcting processes (SC), an event will decrease the probability of an event: λ(t) = exp(ηt− ∑ ti<t
γ). The exp ensures that the intensity is positive, while η and γ are exogenous and endogenous rates.
We can utilize more flexible ways to model the intensity, e.g., by a Recurrent Neural Network (RNN): λ(t) = gw(t, hti), where hti is the feedback loop capturing the influence of previous events (last updated at the latest event) and is updated by hti = hv(ti, hti−1). Here w, v are network weights.
2.3 Wasserstein-Distance for Temporal Point Processes
Given samples from a point process, one way to estimate the process is to find a model (Ωg,Fg,Pg)→ (Ξ, C) that is close enough to the real data (Ωr,Fr,Pr)→ (Ξ, C). As mentioned in the introduction, Wasserstein distance [25] is our choice as the proximity measure. The Wasserstein distance between distribution of two point processes is:
W (Pr,Pg) = inf ψ∈Ψ(Pr,Pg) E(ξ,ρ)∼ψ[‖ξ − ρ‖?], (1)
where Ψ(Pr,Pg) denotes the set of all joint distributions ψ(ξ, ρ) whose marginals are Pr and Pg . The distance between two sequences ‖ξ − ρ‖?, is tricky and need further attention. Take ξ = {x1, x2, . . . , xn} and ρ = {y1, . . . , ym}, where for simplicity we first consider the case m = n. The two sequences can be thought as discrete distributions µξ = ∑n i=1 1 nδxi and µ ρ = ∑n i=1 1 nδyi . Then, the distance between these two is an optimal transport problem argminπ∈Σ〈π,C〉, where Σ is the set of doubly stochastic matrices (rows and columns sum up to one), 〈·, ·〉 is the Frobenius dot product, and C is the cost matrix. Cij captures the energy needed to move a probability mass from xi to yj . We take Cij = ‖xi − yj‖◦ where ‖ · ‖◦ is the norm in S. It can be seen that the optimal solution is attained at extreme points and, by Birkhoff’s theorem, the extreme points of the set of doubly stochastic matrices is a permutation [28]. In other words, the mass is transfered from a unique source
event to a unique target event. Therefore, we have: ‖ξ − ρ‖? = minσ
∑n
i=1 ‖xi − yσ(i)‖◦, where the minimum is taken among all n! permutations of 1 . . . n. For the case m 6= n, without loss of generality we assume n ≤ m and define the distance as follows:
‖ξ − ρ‖? = min σ ∑n i=1 ‖xi − yσ(i)‖◦ + ∑m i=n+1 ‖s− yσ(i)‖, (2)
where s is a fixed limiting point in border of the compact space S and the minimum is over all permutations of 1 . . .m. The second term penalizes unmatched points in a very special way which will be clarified later. Appendix B proves that it is indeed a valid distance measure.
Interestingly, in the case of temporal point process in [0, T ) the distance between ξ = {t1, . . . , tn} and ρ = {τ1, . . . , τm} is reduced to
‖ξ − ρ‖? = ∑n
i=1 |ti − τi|+ (m− n)× T − ∑m i=n+1 τi, (3)
where the time points are ordered increasingly, s = T is chosen as the anchor point, and | · | is the Lebesgue measure in the real line. A proof is given in Appendix C. This choice of distance is significant in two senses. First, it is computationally efficient and no excessive computation is involved. Secondly, in terms of point processes, it is interpreted as the volume by which the two counting measures differ. Figure 1-b demonstrates this intuition and justifies our choice of metric in Ξ and Appendix D contains the proof. The distance used in our current work is the simplest yet effective distance that exhibits high interpretability and efficient computability. More robust distance like local alignment distance and dynamic time warping [29] should be more robust and are great venues for future work.
Equation (1) is computationally highly intractable and its dual form is usually utilized [25]:
W (Pr,Pg) = sup ‖f‖L≤1 Eξ∼Pr [f(ξ)]− Eρ∼Pg [f(ρ)], (4)
where the supremum is taken over all Lipschitz functions f : Ξ → R, i.e., functions that assign a value to a sequence of events (points) and satisfy |f(ξ)− f(ρ)| ≤ ‖ξ − ρ‖? for all ξ and ρ. However, solving the dual form is still highly nontrivial. Enumerating all Lipschitz functions over point process realizations is impossible. Instead, we choose a parametric family of functions to approximate the search space fw and consider solving the problem
max w∈W,‖fw‖L≤1 Eξ∼Pr [fw(ξ)]− Eρ∼Pg [fw(ρ)] (5)
where w ∈ W is the parameter. The more flexible fw, the more accurate will be the approximation. It is notable that W-distance leverages the geometry of the space of event sequences in terms of their distance, which is not the case for MLE-based approach. It in turn requires functions of event sequences f(x1, x2, ...), rather than functions of the time stamps f(xi). Furthermore, Stein’s method to approximate Poisson processes [30, 31] is also relevant as they are defining distances between a Poisson process and an arbitrary point process.
2.4 WGAN for Temporal Point Processes
Equipped with a way to approximately compute the Wasserstein distance, we will look for a model Pr that is close to the distribution of real sequences. Again, we choose a sufficiently flexible parametric family of models, gθ parameterized by θ. Inspired by GAN [20], this generator takes a noise and turns it into a sample to mimic the real samples. In conventional GAN or WGAN, Gaussian or uniform distribution is chosen. In point processes, a homogeneous Poisson process plays the role of a
non-informative and uniform-like distribution: the probability of events in every region is independent of the rest and is proportional to its volume. Define the noise process as (Ωz,Fz,Pz)→ (Ξ, C), then ζ ∼ Pz is a sample from a Poisson process on S = [0, T ) with constant rate λz > 0. Therefore, gθ : Ξ→ Ξ is a transformation in the space of counting measures. Note that λz is part of the prior knowledge and belief about the problem domain. Therefore, the objective of learning the generative model can be written as minW (Pr,Pg) or equivalently:
min θ max w∈W,‖fw‖L≤1 Eξ∼Pr [fw(ξ)]− Eζ∼Pz [fw(gθ(ζ))] (6)
In GAN terminology fw is called the discriminator and gθ is known as the generator model. We estimate the generative model by enforcing that the sample sequences from the model have the same distribution as training sequences. Given L samples sequences from real data Dr = {ξ1, . . . , ξL} and from the noise Dz = {ζ1, . . . , ζL} the two expectations are estimated empirically: Eξ∼Pr [fw(ξ)] = 1 L ∑L l=1 fw(ξl) and Eζ∼Pz [fw(gθ(ζ))] = 1 L ∑L l=1 fw(gθ(ζl)).
2.5 Ingredients of WGANTPP
To proceed with our point process based WGAN, we need the generator function gθ : Ξ→ Ξ, the discriminator function fw : Ξ→ R, and enforce Lipschitz constraint on fw. Figure 4 in Appendix A illustrates the data flow for WGANTPP.
The generator transforms a given sequence to another sequence. Similar to [32, 33] we use Recurrent Neural Networks (RNN) to model the generator. For clarity, we use the vanilla RNN to illustrate the computational process as below. The LSTM is used in our experiments for its capacity to capture long-range dependency. If the input and output sequences are ζ = {z1, . . . , zn} and ρ = {t1, . . . , tn} then the generator gθ(ζ) = ρ works according to
hi = φ h g (A h gzi +B h g hi−1 + b h g ), ti = φ x g(B x ghi + b x g) (7)
Here hi is the k-dimensional history embedding vector and φhg and φ x g are the activation functions. The parameter set of the generator is θ = {( Ahg ) k×1 , ( Bhg ) k×k , ( bhg ) k×1 , ( Bxg ) 1×k , ( bxg ) 1×1 } .
Similarly, we define the discriminator function who assigns a scalar value fw(ρ) = ∑n i=1 ai to the sequence ρ = {t1, . . . , tn} according to hi = φ h d(A h dti +B h g hi−1 + b h g ) ai = φ a d(B a dhi + b a d) (8)
where the parameter set is comprised of w = {( Ahd ) k×1 , ( Bhd ) k×k , ( bhd ) k×1 , (B a d )1×k , (b a d)1×1 } . Note that both generator and discriminator RNNs are causal networks. Each event is only influenced by the previous events. To enforce the Lipschitz constraints the original WGAN paper [18] adopts weight clipping. However, our initial experiments shows an inferior performance by using weight clipping. This is also reported by the same authors in their follow-up paper [27] to the original work. The poor performance of weight clipping for enforcing 1-Lipschitz can be seen theoretically as well: just consider a simple neural network with one input, one neuron, and one output: f(x) = σ(wx+ b) and the weight clipping w < c. Then,
|f ′(x)| ≤ 1⇐⇒ |wσ′(wx+ b)| ≤ 1⇐⇒ |w| ≤ 1/|σ′(wx+ b)| (9) It is clear that when 1/|σ′(wx+ b)| < c, which is quite likely to happen, the Lipschitz constraint is not necessarily satisfied. In our work, we use a novel approach for enforcing the Lipschitz constraints, avoiding the computation of the gradient which can be costly and difficult for point processes. We add the Lipschitz constraint as a regularization term to the empirical loss of RNN.
min θ max w∈W,‖fw‖L≤1
1
L L∑ l=1 fw(ξl)− L∑ l=1 fw(gθ(ζl))− ν L∑ l,m=1 | |fw(ξl)− fw(gθ(ζm))| |ξl − gθ(ζm)|? − 1| (10)
We can take each of the (
2L 2
) pairs of real and generator sequences, and regularize based on them;
however, we have seen that only a small portion of pairs (O(L)), randomly selected, is sufficient. The procedure of WGANTPP learning is given in Algorithm 1
Remark The significance of Lipschitz constraint and regularization (or more generally any capacity control) is more apparent when we consider the connection of W-distance and optimal transport problem [28]. Basically, minimizing the W-distance between the empirical distribution and the model distribution is equivalent to a semidiscrete optimal transport [28]. Without capacity control for the generator and discriminator, the optimal solution simply maps a partition of the sample space to the set of data points, in effect, memorizing the data points.
Algorithm 1 WGANTPP for Temporal Point Process. The default values α = 1e − 4, β1 = 0.5, β2 = 0.9, m = 256, ncritic = 5. Require: : the regularization coefficient ν for direct Lipschitz constraint. the batch size, m. the
number of iterations of the critic per generator iteration, ncritic. Adam hyper-parameters α, β1, β2. Require: : w0, initial critic parameters. θ0, initial generator’s parameters.
1: set prior λz to the expectation of event rate for real data. 2: while θ has not converged do 3: for t = 0, ..., ncritic do 4: Sample point process realizations {ξ(i)}mi=1 ∼ Pr from real data. 5: Sample {ζ(i)}mi=1 ∼ Pz from a Poisson process with rate λz . 6: L′ ← [ 1 m ∑m i=1 fw(gθ(ζ (i)))− 1m ∑m i=1 fw(ξ (i)) ] + ν ∑m i,j=1 | |fw(ξi)−fw(gθ(ζj))| |ξi−gθ(ζj)|? − 1| 7: w ← Adam(∇wL′, w, α, β1, β2) 8: end for 9: Sample {ζ(i)}mi=1 ∼ Pz from a Poisson process with rate λz .
10: θ ← Adam(−∇θ 1m ∑m i=1 fw(gθ(ζ
(i))), θ, α, β1, β2) 11: end while
3 Experiments
The current work aims at exploring the feasibility of modeling point process without prior knowledge of its underlying generating mechanism. To this end, most widely-used parametrized point processes, e.g., self-exciting and self-correcting, and inhomogeneous Poisson processes and one flexible neural network model, neural point process are compared. In this work we use the most general forms for simpler and clear exposition to the reader and propose the very first model in adversarial training of point processes in contrast to likelihood based models.
3.1 Datasets and Protocol
Synthetic datasets. We simulate 20,000 sequences over time [0, T ) where T = 15, for inhomogeneous process (IP), self-exciting (SE), and self-correcting process (SC), recurrent neural point process (NN). We also create another 4 (= C34 ) datasets from the above 4 synthetic data by a uniform mixture
from the triplets. The new datasets IP+SE+SC, IP+SE+NN, IP+SC+NN, SE+SC+NN are created to testify the mode dropping problem of learning a generative model. The parameter setting follows:
i) Inhomogeneous process. The intensity function is independent from history and given in Sec. 2.2, where k = 3, α = [3, 7, 11], c = [1, 1, 1], σ = [2, 3, 2]. ii) Self-exciting process. The past events increase the rate of future events. The conditional intensity function is given in Sec. 2.2 where µ = 1.0, β = 0.8 and the decaying kernel g(t− ti) = e−(t−ti). iii) Self-correcting process. The conditional intensity function is defined in Sec. 2.2. It increases with time and decreases by events occurrence. We set η = 1.0, γ = 0.2. iv) Recurrent Neural Network process. The conditional intensity is given in Sec. 2.2, where the neural network’s parameters are set randomly and fixed. We first feed random variable from [0,1] uniform distribution, and then iteratively sample events from the intensity and feed the output into the RNN to get the new intensity for the next step.
Real datasets. We collect sequences separately from four public available datasets, namely, healthcare MIMIC-III, public media MemeTracker, NYSE stock exchanges, and publications citations. The time scale for all real data are scaled to [0,15], and the details are as follows:
i) MIMIC. MIMIC-III (Medical Information Mart for Intensive Care III) is a large, publicly available dataset, which contains de-identified health-related data during 2001 to 2012 for more than 40,000 patients. We worked with patients who appear at least 3 times, which renders 2246 patients. Their visiting timestamps are collected as the sequences. ii) Meme. MemeTracker tracks the meme diffusion over public media, which contains more than 172 million news articles or blog posts. The memes are sentences, such as ideas, proverbs, and the time is recorded when it spreads to certain websites. We randomly sample 22,000 cascades. iii) MAS. Microsoft Academic Search provides access to its data, including publication venues, time, citations, etc. We collect citations records for 50,000 papers. iv) NYSE. We use 0.7 million high-frequency transaction records from NYSE for a stock in one day. The transactions are evenly divided into 3,200 sequences with equal durations.
3.2 Experimental Setup
Details. The temporal sequences could be fed to the generator and discriminator directly. In practice, all temporal sequences are transformed into the durations between consecutive events, i.e., the sequence ξ = {t_1, . . . , t_n} is transformed into {τ_1, . . . , τ_{n−1}}, where τ_i = t_{i+1} − t_i. This approach makes the model easier to train and more robust. The transformed sequences are statistically equivalent to the original ones and are used as the inputs of our neural network. To make sure that the output times are increasing, we use an elu + 1 activation function to produce positive inter-arrival times in the generator and accumulate the intervals to recover the sequence. The Adam optimization method with learning rate 1e-4, β_1 = 0.5, β_2 = 0.9 is applied. The code is available at https://github.com/xiaoshuai09/Wasserstein-Learning-For-Point-Process.
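A small sketch of this pre- and post-processing in PyTorch (the function names are ours, not taken from the released code):

```python
import torch
import torch.nn.functional as F

def timestamps_to_intervals(t):
    """{t_1, ..., t_n} -> {tau_1, ..., tau_{n-1}} with tau_i = t_{i+1} - t_i."""
    return t[..., 1:] - t[..., :-1]

def raw_outputs_to_timestamps(raw, t0=0.0):
    """Generator head: elu(x) + 1 > 0 yields strictly positive inter-arrival
    times, and their cumulative sum rebuilds an increasing event sequence."""
    tau = F.elu(raw) + 1.0
    return t0 + torch.cumsum(tau, dim=-1)
```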
Baselines. We compare the proposed method of learning point processes (i.e., minimizing sample distance) with maximum-likelihood-based methods for point processes. To use MLE inference for a point process, we have to specify its parametric model. The parametric models used are the inhomogeneous Poisson process (mixture of Gaussians), the self-exciting process, the self-correcting process, and the RNN. For
each dataset, we use all of the above solvers to learn the model and generate new sequences, and then compare the generated sequences with the real ones.
Evaluation metrics. Although our model is an intensity-free approach, we evaluate its performance with metrics that are computed via the intensity; for all models, we work with the empirical intensity. Note that our objective measures are in sharp contrast with the best practices in GANs, in which performance is usually evaluated subjectively, e.g., by visual quality assessment. We evaluate how well the different methods learn the underlying processes via two measures: 1) The first is the well-known QQ plot of sequences generated from the learned model. The quantile-quantile (QQ) plot is the graphical representation of the quantiles of one dataset against the quantiles of another. By the time-change property [10] of point processes, if the sequences come from the point process λ(t), then the integrals Λ_i = ∫_{t_i}^{t_{i+1}} λ(s) ds between consecutive events should follow an exponential distribution with parameter 1. Therefore, the QQ plot of the Λ_i against the exponential distribution with rate 1 should fall approximately along a 45-degree reference line. The evaluation procedure is as follows: i) the ground-truth data is generated from a model, say IP; ii) all 5 methods are used to learn the unknown process from the ground-truth data; iii) the learned model is used to generate a sequence; iv) the sequence is checked against the theoretical quantiles of the model to see whether it could really have come from the ground-truth generator; v) the deviation from slope 1 is visualized or reported as a performance measure. 2) The second metric is the deviation between the empirical intensity of the learned model and the ground-truth intensity. We can estimate the empirical intensity λ′(t) = E[N(t + δt) − N(t)]/δt from a sufficient number of realizations of the point process by counting the average number of events during [t, t + δt], where N(t) is the counting process for λ(t). The L1 distance between the ground-truth and learned empirical intensities is reported as a performance measure.
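Both measures can be computed directly from sampled sequences; a NumPy sketch follows, in which the bin width, the integration grid, and the vectorized `intensity` callable are our own choices:

```python
import numpy as np

def empirical_intensity(seqs, T=15.0, dt=0.5):
    """lambda'(t) ~ E[N(t + dt) - N(t)] / dt, averaged over realizations."""
    bins = np.arange(0.0, T + dt, dt)
    counts = np.stack([np.histogram(s, bins=bins)[0] for s in seqs])
    return counts.mean(axis=0) / dt                    # one estimate per bin

def intensity_l1(emp_a, emp_b, dt=0.5):
    """L1 deviation between two empirical intensity curves."""
    return np.abs(emp_a - emp_b).sum() * dt

def time_change_residuals(seq, intensity, n_grid=200):
    """Lambda_i = int_{t_i}^{t_{i+1}} lambda(s) ds via the trapezoidal rule.
    If `intensity` is the true conditional rate, these are Exp(1) samples,
    whose quantiles feed the QQ plot against the unit exponential."""
    out = []
    for a, b in zip(seq[:-1], seq[1:]):
        grid = np.linspace(a, b, n_grid)
        out.append(np.trapz(intensity(grid), grid))
    return np.asarray(out)
```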
3.3 Results and Discussion
Synthetic data. Figure 2 presents the learning ability of WGANTPP when the ground-truth data is generated by different types of point processes. We first compare the QQ plots in the top row from the micro perspective, where the QQ plot captures the dependency between events. Red dots labeled Real show the optimal QQ distribution, for which the intensity function that generated the sequences is known. We observe that even though WGANTPP has no prior information about the ground-truth point process, it estimates the model better than every method except the estimator that knows the parametric form of the data. This is expected: when we train a model knowing the parametric form of the generating process, we can fit it better. However, whenever the model is misspecified (i.e., the parametric form is not known a priori), WGANTPP outperforms the other parametric forms and the RNN approach. The middle row of Figure 2 compares the empirical intensities. The Real line is the optimal empirical intensity estimated from the real data. An estimator recovers the empirical intensity well when it knows the parametric form the data comes from; otherwise, the estimated intensity degrades considerably under misspecification. WGANTPP reproduces the empirical intensity better and performs robustly across the different types of point process data. For MLE-IP, different numbers of kernels were tested: the empirical intensity improves but the QQ plot degrades as the number of kernels increases, so only the result with 3 kernels is shown, mainly for clarity of presentation. The fact that the empirical intensities estimated by MLE-IP are good while its QQ plots are very poor indicates that the inhomogeneous Poisson process captures the average intensity (macro dynamics) accurately but is incapable of capturing the dependency between events (micro dynamics).

To verify that WGANTPP can cope with mode dropping, we generate mixtures of data from three different point processes and use this data to train the different models. Models with a specified form can handle limited types of data and fail to learn from diverse data sources. The last row of Figure 2 shows the intensities learned from the mixtures: WGANTPP produces a better empirical intensity than the alternatives, which fail to capture the heterogeneity in the data. To verify the robustness of WGANTPP, we randomly initialize the generator parameters and run 10 rounds to obtain the mean and standard deviation of the deviations of both the empirical intensity and the QQ plot from the ground truth. For the empirical intensity, we compute the integral of the difference between the learned and ground-truth intensities. Table 1 reports the mean and standard deviation of these intensity deviations. For each estimator, we obtain the slope of the regression line of its QQ plot; Table 1 also reports the mean and standard deviation of the deviations of this slope. Compared to the MLE estimators, WGANTPP consistently outperforms even without prior knowledge of the parametric form of the true underlying generative point process. Note that for the mixture models the QQ plot is not feasible.

Real-world data. We evaluate WGANTPP on diverse real-world data from healthcare, public media, scientific activity, and stock exchanges. For these real-world data, the underlying generative process is unknown; previous works usually assume a certain type of point process based on domain knowledge. Figure 3 shows the intensities learned by the different models, where Real is estimated from the real-world data itself. Table 2 reports the intensity deviations. When no model has prior knowledge of the true generative process, WGANTPP recovers the intensity better than all the other models across the datasets.
Analysis. We have observed that when the generating model is misspecified, WGANTPP outperforms the other methods without leveraging a priori knowledge of the parametric form. When the exact parametric form of the data is known and used to learn the parameters, MLE with this full knowledge performs better — but this is generally a strong assumption, and in the real-world experiments WGANTPP is superior in terms of performance. Somewhat surprising is the observation that WGANTPP tends to outperform the MLE-NN approach, which uses essentially the same RNN architecture but is trained with MLE. The superior performance of our approach over MLE-NN is further evidence of the benefits of using the W-distance to find a generator that fits the observed sequences well. Even though WGANTPP and MLE-NN have the same expressive power, MLE-NN may suffer from mode dropping or get stuck in an inferior local minimum, since maximizing likelihood is asymptotically equivalent to minimizing the Kullback-Leibler (KL) divergence between the data and model distributions. The inherent weakness of the KL divergence [25] makes MLE-NN unstable, and the large variances of its deviations empirically demonstrate this point.
4 Conclusion and Future Work
We have presented a novel approach for Wasserstein learning of deep generative point processes, which requires no prior knowledge about the true underlying process and can estimate it accurately across a wide range of theoretical and real-world processes. In future work, we would like to explore the connection between the WGAN and the optimal transport problem. We will also explore other possible distance metrics over the realizations of point processes, as well as more sophisticated transforms of point processes, particularly causal ones. Extending the current work to marked point processes and processes over structured spaces is another interesting avenue for future work.
Acknowledgements. This project was supported in part by NSF (IIS-1639792, IIS-1218749, IIS1717916, CMMI-1745382), NIH BIGDATA 1R01GM108341, NSF CAREER IIS-1350983, NSF CNS-1704701, ONR N00014-15-1-2340, NSFC 61602176, Intel ISTC, NVIDIA and Amazon AWS. | 1. What is the focus of the paper regarding point processes?
2. What are the strengths and weaknesses of the proposed method, particularly in comparison to other approaches?
3. Are there any limitations or challenges in applying the method to certain types of point processes?
4. How does the reviewer assess the effectiveness and efficiency of the proposed model?
5. What are some potential alternatives or comparisons to consider for evaluating the performance of the proposed method? | Review | Review
This paper presents a method for learning predictive (one-dimensional) point process models through modelling the count density using a W-GAN.
Detailed comments:
* Abstract: there are several approaches for doing full (or approximate) Bayesian inference on (Cox or renewal) point processes. E.g. see arxiv.org/pdf/1411.0254.pdf or www.gatsby.ucl.ac.uk/~vrao/nips2011_raoteh.pdf.
* To me it appears that the approach will only work for 1D point processes, as otherwise it is hard to represent them via the count density? If this is the case, it would be good to see this more explicitly stated.
* This is the case for most inhomogeneous point processes, but given only a single realisation of the point process, it would seem very hard to characterise a low-variance estimate of the distance from the generated count measure to the data? Perhaps this is the main reason that the W-GAN performs so well---much like the GP based intensity-explicit models, a heavy regularisation is applied to the generator/intensity-proxy.
* I don't understand the origin of 'real' in the real world evaluation metrics e.g. Figure 3? How do you arrive at this ground truth?
* A discussion of how easy/difficult these models are to train would have been interesting.
Finally I am very interested to know how simple models compare to this: e.g. KDE with truncation, simple parametric Hawkes etc? My main concern with this work would be that these models are all horrendously over-complex for the signal-to-noise available, and that therefore while the W-GAN does outperform other NN/RNN based approaches, a more heavily regularised (read simpler) intensity based approach would empirically outperform in most cases. |
NIPS | Title
Deep Transformation-Invariant Clustering
Abstract
Recent advances in image clustering typically focus on learning better deep representations. In contrast, we present an orthogonal approach that does not rely on abstract features but instead learns to predict transformations and performs clustering directly in pixel space. This learning process naturally fits in the gradient-based training of K-means and Gaussian mixture model, without requiring any additional loss or hyper-parameters. It leads us to two new deep transformation-invariant clustering frameworks, which jointly learn prototypes and transformations. More specifically, we use deep learning modules that enable us to resolve invariance to spatial, color and morphological transformations. Our approach is conceptually simple and comes with several advantages, including the possibility to easily adapt the desired invariance to the task and a strong interpretability of both cluster centers and assignments to clusters. We demonstrate that our novel approach yields competitive and highly promising results on standard image clustering benchmarks. Finally, we showcase its robustness and the advantages of its improved interpretability by visualizing clustering results over real photograph collections.
1 Introduction
Gathering collections of images on a topic of interest is getting easier every day: simple tools can aggregate data from social media, web search, or specialized websites and filter it using hashtags, GPS coordinates, or semantic labels. However, identifying visual trends in such image collections remains difficult and usually involves manually organizing images or designing an ad hoc algorithm. Our goal in this paper is to design a clustering method which can be applied to such image collections, output a visual representation for each cluster and show how it relates to every associated image.
Directly comparing image pixels to decide if they belong to the same cluster leads to poor results because they are strongly impacted by factors irrelevant to clustering, such as exact viewpoint or lighting. Approaches to obtain clusters invariant to these transformations can be broadly classified into two groups. A first set of methods extracts invariant features and performs clustering in feature space. The features can be manually designed, but most state-of-the-art methods learn them directly from data. This is challenging because images are high-dimensional and learning relevant invariances thus requires huge amounts of data. For this reason, while recent approaches perform well on simple datasets like MNIST, they still struggle with real images. Another limitation of these approaches is that learned features are hard to interpret and visualize, making clustering results difficult to analyze. A second set of approaches, following the seminal work of Frey and Jojic on transformation-invariant clustering [11, 12, 13], uses explicit transformation models to align images before comparing them. These approaches have several potential advantages: (i) they enable direct control of the invariances to consider; (ii) because they do not need to discover invariances, they are potentially less data-hungry; (iii) since images are explicitly aligned, clustering process and results can easily be visualized. However, transformation-invariant approaches require solving a difficult joint optimization problem. In practice, they are thus often limited to small datasets and simple transformations, such as affine
transformations, and to the best of our knowledge they have never been evaluated on large standard image clustering datasets.
[Figure 1: (b) learned transformation modules — including morphological T^{mor}_{β_mor} and thin plate spline T^{tps}_{β_tps} — align prototype c_k to sample x_i; (c) examples of interpretable prototypes discovered from large image sets (15k each) associated with Instagram hashtags using our DTI clustering with 40 clusters. Each cluster contains from 200 to 800 images.]
In this paper, we propose a deep transformation-invariant (DTI) framework that performs transformation-invariant clustering at scale with complex transformations. Our main insight is to jointly learn deep alignment and clustering parameters with a single loss, relying on the gradient-based adaptations of K-means [38] and GMM optimization [9]. Not only is predicting transformations more computationally efficient than optimizing them, but it enables us to use complex color, thin plate spline and morphological transformations without any specific regularization. Because it is pixel-based, our deep transformation-invariant clustering is also easy to interpret: cluster centers and image alignments can be visualized to understand assignments. Despite its apparent simplicity, we demonstrate that our DTI clustering framework leads to results on par with the most recent feature learning approaches on standard benchmarks. We also show it is capable of discovering meaningful modes in real photograph collections, which we see as an important step to bridge the gap between theoretically well-grounded clustering approaches and semi-automatic tools relying on hand-designed features for exploring image collections, such as AverageExplorer [52] or ShadowDraw [32].
We first briefly discuss related works in Section 2. Section 3 then presents our DTI framework (Fig. 1a). Section 4 introduces our deep transformation modules and architecture (Fig. 1b) and discusses training details. Finally, Section 5 presents and analyzes our results (Fig. 1c).
Contributions. In this paper we present:
– a deep transformation-invariant clustering approach that jointly learns to cluster and align images,
– a deep image transformation module to learn spatial alignment, color modifications and, for the first time, morphological transformations,
– an experimental evaluation showing that our approach is competitive on standard image clustering benchmarks, improving over the state of the art on Fashion-MNIST and SVHN, and providing highly interpretable qualitative results even on challenging web image collections.
Code, data, models, as well as more visual results, are available on our project webpage: http://imagine.enpc.fr/~monniert/DTIClustering/.
2 Related work
Most recent approaches to image clustering focus on learning deep image representations, or features, on which clustering can be performed. Common strategies include autoencoders [48, 10, 25, 28], contrastive approaches [49, 5, 44], GANs [6, 51, 41] and mutual information based strategies [22, 18, 24]. Especially related to our work is [28], which leverages the idea of capsules [20] to learn equivariant image features, in a similar fashion to equivariant models [33, 45]. However, our method aims at being invariant to transformations, not at learning a representation. Another type of approach is to align images in pixel space using a relevant family of transformations, such as translations, rotations, or affine transformations, to obtain more meaningful pixel distances before clustering them. Frey and Jojic first introduced transformation-invariant clustering [11, 12, 13] by integrating pixel permutations as a discrete latent variable within an Expectation Maximization (EM) [9] procedure for a mixture of Gaussians. Their approach was however limited to a finite set of discrete transformations. Congealing generalized the idea to continuous parametric transformations, and in particular affine transformations, initially by using entropy minimization [40, 30]. A later version using least square costs [7, 8] demonstrated the relation of this approach to the classical Lucas-Kanade image alignment algorithm [37]. In its classical version, congealing only allows aligning all dataset images together, but the idea was extended to clustering [36, 39, 34], for example using a Bayesian model [39], or in a spectral clustering framework [34]. These works typically formulate difficult joint optimization problems and solve them by alternating between clustering and transformation optimization for each sample. They are thus limited to relatively small datasets and, to the best of our knowledge, were never compared to modern deep approaches on large benchmarks. Deep learning was recently used to scale the idea of congealing for global alignment of a single class of images [1] or time series [46]. Both works build on the idea of Spatial Transformer Networks [23] (STN) that spatial transformations are differentiable and can be learned by deep networks. We also build upon STN, but go beyond single-class alignment to jointly perform clustering. Additionally, we extend the idea to color and morphological transformations. We believe our work is the first to use deep learning to perform clustering in pixel space by explicitly aligning images.
3 Deep Transformation-Invariant clustering
In this section, we first discuss a generic formulation of our deep transformation-invariant clustering approach, then derive two algorithms based on K-means [38] and Gaussian mixture model [9]. Notation: In all the rest of the paper, we use the notation a1:n to refer to the set {a1, . . . , an}.
3.1 DTI framework
Contrary to most recent image clustering methods which rely on feature learning, we propose to perform clustering in pixel space by making the clustering invariant to a family of transformations. We consider N image samples x_{1:N} and aim at grouping them in K clusters using a prototype method. More specifically, each cluster k is defined by a prototype c_k, which can also be seen as an image, and prototypes are optimized to minimize a loss L which typically evaluates how well they represent the samples. We further assume that L can be written as a sum of a loss l computed over each sample:
L(c_{1:K}) = \sum_{i=1}^{N} l(x_i, \{c_1, \dots, c_K\}). \quad (1)
Once the problem is solved, each sample xi will be associated to the closest prototype.
Our key assumption is that in addition to the data, we have access to a group of parametric transformations {T_β, β ∈ B} to which we want to make the clustering invariant. For example, one can consider β ∈ R^6 and T_β the 2D affine transformation parametrized by β. Other transformations are discussed in Section 4.1. Instead of finding clusters by minimizing the loss of Equation 1, one can
minimize the following transformation-invariant loss:
L_{\mathrm{TI}}(c_{1:K}) = \sum_{i=1}^{N} \min_{\beta_{1:K}} l(x_i, \{T_{\beta_1}(c_1), \dots, T_{\beta_K}(c_K)\}). \quad (2)
In this equation, the minimum over β1:K is taken for each sample independently. This loss is invariant to transformations of the prototypes (see proof in Appendix B). Also note there is not a single optimum since the loss is the same if any prototype ck is replaced by Tβ(ck) for any β ∈ B. If necessary, for example for visualization purposes, this ambiguity can easily be resolved by adding a small regularization on the transformations. The optimization problem associated to LTI is of course difficult. A natural approach, which we use as baseline (noted TI), is to alternatively minimize over transformations and clustering parameters. We show that performing such optimization using a gradient descent can already lead to improved results over standard clustering but is computationally expensive. We experimentally show it is faster and actually better to instead learn K (deep) predictors f1:K for each prototype, which aim at associating to each sample xi the transformation parameters f1:K(xi) minimizing the loss, i.e. to minimize the following loss:
L_{\mathrm{DTI}}(c_{1:K}, f_{1:K}) = \sum_{i=1}^{N} l(x_i, \{T_{f_1(x_i)}(c_1), \dots, T_{f_K(x_i)}(c_K)\}), \quad (3)
where the predictors f_{1:K} are now shared across all samples. We found that using deep parameter predictors not only enables more efficient training but also leads to better clustering results, especially with more complex transformations. Indeed, the structure and optimization of the predictors naturally regularize the parameters for each sample, without requiring any specific regularization loss, especially when the number N of samples and the number of transformation parameters are high. In the next section we present concrete losses and algorithms. We then describe differentiable modules for relevant transformations and discuss the parameter predictor architecture as well as training in Section 4.
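As a concrete instance of Equation 3 — here with the squared distance that will define DTI K-means in the next subsection — a short PyTorch sketch; `transform` and the per-prototype `predictors` stand in for the modules described in Section 4:

```python
import torch

def dti_kmeans_loss(x, prototypes, predictors, transform):
    """x: (B, D) batch of flattened images; prototypes: K tensors of shape (D,).
    predictors[k](x) returns the transformation parameters for prototype k."""
    per_cluster = []
    for c_k, f_k in zip(prototypes, predictors):
        beta = f_k(x)                                        # (B, n_params)
        aligned = transform(c_k.unsqueeze(0).expand(x.size(0), -1), beta)
        per_cluster.append(((x - aligned) ** 2).sum(dim=1))  # squared distances
    dists = torch.stack(per_cluster, dim=1)                  # (B, K)
    return dists.min(dim=1).values.mean()                    # min over clusters
```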
3.2 Application to K-means and GMM
K-means. The goal of the K-means algorithm [38] is to find a set of prototypes c_{1:K} such that the average Euclidean distance between each sample and the closest prototype is minimized. Following the reasoning of Section 3.1, the loss optimized in K-means can be transformed into a transformation-invariant loss:
L_{\mathrm{DTI\,K\text{-}means}}(c_{1:K}, f_{1:K}) = \sum_{i=1}^{N} \min_{k} \| x_i - T_{f_k(x_i)}(c_k) \|^2. \quad (4)
Following batch gradient-based trainings [3] of K-means, we can then simply jointly minimize L_{DTI K-means} over prototypes c_{1:K} and deep transformation parameter predictors f_{1:K} using a batch gradient descent algorithm. In practice, we initialize prototypes c_{1:K} with random samples and predictors f_{1:K} such that ∀k, ∀x, T_{f_k(x)} = Id.
Gaussian mixture model. We now consider that the data are observations of a mixture of K multivariate normal random variables X_{1:K}, i.e. X = \sum_k \delta_{k,\Delta} X_k, where δ is the Kronecker function and ∆ ∈ {1, . . . , K} is a random variable defined by P(∆ = k) = π_k, with ∀k, π_k > 0 and \sum_k π_k = 1. We write µ_k and Σ_k for the mean and covariance of X_k, and G(· ; µ_k, Σ_k) for the associated probability density function. The transformation-invariant negative log-likelihood can then be written:
L_{\mathrm{DTI\,GMM}}(\mu_{1:K}, \Sigma_{1:K}, \pi_{1:K}, f_{1:K}) = -\sum_{i=1}^{N} \log \Big( \sum_{k=1}^{K} \pi_k \, G\big(x_i \,;\, T_{f_k(x_i)}(\mu_k),\, T^{*}_{f_k(x_i)}(\Sigma_k)\big) \Big), \quad (5)
where T^* is a slightly modified version of T. Indeed, T may include transformations that one can apply to the covariance, such as spatial transformations, and others that would not make sense, such as additive color transformations. We jointly minimize L_{DTI GMM} over Gaussian parameters, mixing probabilities, and deep transformation parameters f_{1:K} using a batch gradient-based EM procedure similar to [21, 15, 14] and detailed in Algorithm 1. In practice, we assume that pixels are independent, resulting in diagonal covariance matrices.
In such gradient-based procedures, two constraints have to be enforced, namely the positivity and normalization of mixing probabilities πk and the non-negativeness of the diagonal covariance terms.
Algorithm 1: Deep Transformation-Invariant Gaussian Mixture Model
Input: data X, number of clusters K, transformation T
Output: cluster assignments, Gaussian parameters µ_{1:K}, Σ_{1:K}, deep predictors f_{1:K}
Initialization: µ_{1:K} with random samples, Σ_{1:K} = 0.5, η_{1:K} = 1 and ∀k, ∀x, T_{f_k(x)} = Id
while not converged do
  i. sample a batch of data points x_{1:N}
  ii. compute mixing probabilities: π_{1:K} = softmax(η_{1:K})
  iii. compute per-sample transformed Gaussian parameters:
      ∀k, ∀i, µ̃_{ki} = T_{f_k(x_i)}(µ_k) and Σ̃_{ki} = T^{*}_{f_k(x_i)}(Σ_k) + diag(σ²_min)
  iv. compute responsibilities (E-step):
      ∀k, ∀i, γ_{ki} = π_k G(x_i ; µ̃_{ki}, Σ̃_{ki}) / Σ_j π_j G(x_i ; µ̃_{ji}, Σ̃_{ji})
  v. minimize the expected negative log-likelihood w.r.t. {µ_{1:K}, Σ_{1:K}, η_{1:K}, f_{1:K}} (M-step):
      E[L_{DTI GMM}] = − Σ_{i=1}^{N} Σ_{k=1}^{K} γ_{ki} ( log G(x_i ; µ̃_{ki}, Σ̃_{ki}) + log π_k )
end while
For the mixing probability constraints, we adopt the approach used in [21] and [14], which optimizes mixing parameters η_k used to compute the probabilities π_k through a softmax, instead of directly optimizing π_k; we write π_{1:K} = softmax(η_{1:K}). For the variance non-negativeness, we introduce a fixed minimal variance value σ²_min which is added to the variances when evaluating the probability density function. This approach differs from the one in [14], which instead uses clipping, because we found training with clipped values harder. In practice, we take σ_min = 0.25.
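The following sketch shows one batch iteration of Algorithm 1 under these parameterizations; the `log_prob` helper — which should return the log-density of the k-th transformed component with the σ_min floor already applied — and the tensor shapes are our conventions:

```python
import torch

def em_step(x, eta, log_prob, opt):
    """x: (B, D); eta: (K,) unconstrained mixing parameters.
    log_prob(x, k) -> (B,) values of log G(x; mu~_k, Sigma~_k)."""
    K = eta.numel()
    log_pi = torch.log_softmax(eta, dim=0)                       # positive, sums to 1
    log_gk = torch.stack([log_prob(x, k) for k in range(K)], dim=1)  # (B, K)
    with torch.no_grad():                                        # E-step
        gamma = torch.softmax(log_gk + log_pi, dim=1)            # responsibilities
    loss = -(gamma * (log_gk + log_pi)).sum(dim=1).mean()        # M-step objective
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```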
4 Learning image transformations
4.1 Architecture and transformation modules
We consider a set of prototypes c_{1:K} we would like to transform to match a given sample x. To do so, we propose to learn for each prototype c_k a separate deep predictor which predicts transformation parameters β. We model the family of transformations T_β as a sequence of M parametric transformations such that, writing β = (β_1, . . . , β_M), T_β = T^{M}_{β_M} ∘ · · · ∘ T^{1}_{β_1}. In the following, we describe the architecture of the transformation parameter predictors f_{1:K}, as well as each family of parametric transformation modules we use. Figure 1b shows our learned transformation process on a MNIST example.
Parameter prediction network. For all experiments, we use the same parameter predictor network architecture, composed of a shared ResNet [19] backbone truncated after the global average pooling, followed by K × M Multi-Layer Perceptrons (MLPs), one for each prototype and each transformation module. For the ResNet backbone, we use ResNet-20 for images smaller than 64 × 64 and ResNet-18 otherwise. Each MLP has the same architecture, with two hidden layers of 128 units.
Spatial transformer module. To model spatial transformations of the prototypes, we follow the spatial transformers developed by Jaderberg et al. [23]. The key idea is to model spatial transformations as a differentiable image sampling of the input using a deformed sampling grid. We use affine T^{aff}_β, projective T^{proj}_β and thin plate spline T^{tps}_β [2] transformations, which respectively correspond to 6, 8 and 16 (a 4×4 grid of control points) parameters.
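In the affine case, this reduces to the standard differentiable grid-sampling primitives; a minimal sketch with prototypes stored as image tensors:

```python
import torch.nn.functional as F

def spatial_transform(proto, theta):
    """proto: (B, C, H, W) prototype repeated along the batch dimension;
    theta: (B, 6) predicted affine parameters, reshaped to 2x3 matrices.
    Differentiable bilinear sampling as in Spatial Transformer Networks."""
    grid = F.affine_grid(theta.view(-1, 2, 3), proto.size(), align_corners=False)
    return F.grid_sample(proto, grid, align_corners=False)
```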
Color transformation module. We model color transformation with a channel-wise diagonal affine transformation on the full image, which we write T^{col}_β. It has 2 parameters for greyscale images and 6 parameters for colored images. We first used a full affine transformation with 12 parameters; however, the network was able to hide several patterns in the different color channels of a single prototype (Appendix C.4). Note that a similar transformation was theoretically introduced in capsules [28], but with the different goal of obtaining a color-invariant feature representation. Deep feature-based approaches often handle color images with a pre-processing step such as Sobel filtering [4, 24, 28]. We believe the way we align the colors of the prototypes to obtain color invariance in pixel space is novel, and it enables us to directly work with colored images without using any pre-processing or specific invariant features.
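The channel-wise diagonal affine model amounts to one multiplicative and one additive parameter per channel; a sketch (the clamp to a valid pixel range is our addition):

```python
def color_transform(proto, beta):
    """proto: (B, C, H, W); beta: (B, 2C) -> per-channel scale and shift,
    i.e. 2 parameters for greyscale and 6 for RGB, shared by all pixels."""
    C = proto.size(1)
    scale = beta[:, :C].view(-1, C, 1, 1)
    shift = beta[:, C:].view(-1, C, 1, 1)
    return (proto * scale + shift).clamp(0.0, 1.0)  # keep a valid image (our choice)
```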
Morphological transformation module. We introduce a new transformation module to learn morphological operations [16] such as dilation and erosion. We consider a greyscale image x ∈ R^D of size U × V = D, and write x[u, v] for the value of the pixel (u, v), for u ∈ {1, . . . , U} and v ∈ {1, . . . , V}. Given a 2D region A, the dilation of x by A, D_A(x) ∈ R^D, is defined by D_A(x)[u, v] = max_{(u′,v′)∈A} x[u + u′, v + v′], and its erosion by A, E_A(x) ∈ R^D, by E_A(x)[u, v] = min_{(u′,v′)∈A} x[u + u′, v + v′]. Directly learning the region A which parametrizes these transformations is challenging; we thus propose to learn parameters (α, a) for the following soft version of these transformations:
T^{\mathrm{mor}}_{(\alpha,a)}(x)[u, v] = \frac{\sum_{(u',v') \in W} x[u+u', v+v'] \cdot a[u', v'] \cdot e^{\alpha x[u+u', v+v']}}{\sum_{(u',v') \in W} a[u', v'] \cdot e^{\alpha x[u+u', v+v']}}, \quad (6)
where W is a fixed set of 2D positions, α is a softmax (positive values) or softmin (negative values) parameter, and a is a set of parameters with values between 0 and 1 defined for every position (u′, v′) ∈ W. The parameters a can be interpreted as an image, or as a soft version of the region A used for morphological operations. Note that if a[u′, v′] = 1_{(u′,v′)∈A}, then when α → +∞ (resp. −∞), the module successfully emulates D_A (resp. E_A). In practice, we use a 7 × 7 grid of integer positions around the origin for W. Note that since morphological transformations do not form a group, the transformation-invariant denomination is slightly abusive.
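Equation 6 is a weighted soft-max/soft-min pooling over a fixed window, which can be written with an unfold; a sketch for greyscale images in PyTorch, where the shape conventions and the stabilizing epsilon are ours:

```python
import torch
import torch.nn.functional as F

def soft_morphology(x, a, alpha, k=7):
    """x: (B, 1, H, W) image; a: (B, k*k) soft structuring element in [0, 1];
    alpha: (B,) softmax (alpha > 0, dilation-like) or softmin (alpha < 0,
    erosion-like) parameter. Implements the weighted pooling of Eq. 6."""
    patches = F.unfold(x, k, padding=k // 2)                   # (B, k*k, H*W)
    w = a.unsqueeze(-1) * torch.exp(alpha.view(-1, 1, 1) * patches)
    out = (patches * w).sum(dim=1) / (w.sum(dim=1) + 1e-8)     # ratio of Eq. 6
    return out.view_as(x)
```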
4.2 Training
We found that two key elements were critical to obtain good results: empty cluster reassignment and curriculum learning. We then discuss further implementation details and computational cost.
Empty cluster reassignment. Similar to [4], we adopt an empty cluster reassignment strategy during our clustering optimization. We reinitialize both prototype and deep predictor of "tiny" clusters using the parameters of the largest cluster with a small added noise. In practice, the size of balanced clusters being N/K, we define "tiny" as less than 20% of N/K.
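A sketch of this rule (the copy-plus-noise details are our reading of the text):

```python
import copy
import torch

def reassign_tiny_clusters(sizes, prototypes, predictors, frac=0.2, noise=1e-3):
    """Reinitialize clusters smaller than frac * (N / K) from the largest one."""
    threshold = frac * sum(sizes) / len(sizes)                 # 20% of N / K
    largest = max(range(len(sizes)), key=sizes.__getitem__)
    for k, n_k in enumerate(sizes):
        if k != largest and n_k < threshold:
            with torch.no_grad():
                prototypes[k].copy_(prototypes[largest]
                                    + noise * torch.randn_like(prototypes[largest]))
            predictors[k].load_state_dict(copy.deepcopy(predictors[largest].state_dict()))
```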
Curriculum learning. Learning to predict transformations is a hard task, especially when the number of parameters is high. To ease learning, we thus adopt a curriculum learning strategy by gradually adding more complex transformation modules to the training. Given a target sequence of transformations to learn, we first train our model without any transformation - or equivalently with an identity module - then iteratively add subsequent modules once convergence has been reached. We found this is especially important when modeling local deformations with complex transformations with many parameters, such as TPS and morphological transformations. Intuitively, prototypes should first be coarsely aligned before attempting to refine the alignment with more complex transformations.
Implementation details. Both clustering parameters and parameter prediction networks are learned jointly and end-to-end using Adam optimizer [27] with a 10−6 weight decay on the neural network parameters. We sequentially add transformation modules at a constant learning rate of 0.001 then divide the learning rate by 10 after convergence - corresponding to different numbers of epochs depending on the dataset characteristics - and train for a few more epochs with the smaller learning rate. We use a batch size of 64 for real photograph collections and 128 otherwise.
Computational cost. Training DTI K-means or DTI GMM on MNIST takes approximately 50 minutes on a single Nvidia GeForce RTX 2080 Ti GPU and full dataset inference takes 30 seconds. We found it to be much faster than directly optimizing transformation parameters (TI clustering) for which convergence took more than 10 hours of training.
5 Experiments
In this section, we first analyze our approach and compare it to state-of-the-art, then showcase its interest for image collection analysis and visualization.
5.1 Analysis and comparisons
Similar to previous work on image clustering, we evaluate our approach with global classification accuracy (ACC), where a cluster-to-class mapping is computed using the Hungarian algorithm [29],
and Normalized Mutual Information (NMI). Datasets and corresponding transformation modules we used are described in Appendix A.
Comparison on standard benchmarks. In Table 1, we report our results on standard image clustering benchmarks, i.e. digit datasets (MNIST [31], USPS [17]), a clothing dataset (Fashion-MNIST [47]) and a face dataset (FRGC [43]). We also report results for SVHN [42], where concurrent methods use pre-processing to remove color bias. In the table, we separate representation-based from pixel-based methods and mark results using data augmentation or manually selected features as input. Note that our results depend on initialization; we provide detailed statistics in Appendix C.1. Our DTI clustering is fully unsupervised and does not require any data augmentation, ad hoc features, or hyper-parameters, while performing clustering directly in pixel space. We report average performances and the performance of the minimal-loss run, which we found to correlate well with high performance (Appendix C.2). Because this non-trivial criterion allows us to automatically select a run in a fully unsupervised way, we argue it can be compared to average results from competing methods, which do not provide such a criterion. First, DTI clustering achieves competitive results on all datasets, in particular improving the state of the art by a significant margin on SVHN and Fashion-MNIST. For SVHN, we first found that the prototype quality was harmed by digits on the sides of the image. To pay more attention to the center digit, we weighted the clustering loss by a Gaussian weight (σ = 7). This led to better prototypes and allowed us to improve over all concurrent methods by a large margin. Compared to representation-based methods, our pixel-based clustering is highly interpretable. Figure 2a shows standard GMM prototypes and our prototypes learned with DTI GMM, which appear to be much sharper than the standard ones. This directly stems from the quality of the learned transformations, visualized in Figure 2b. Our transformation modules can successfully align the prototype, adapt the thickness and apply local elastic deformations. More alignment results are available on our project webpage.
Augmented and specific datasets. DTI clustering also works on small, colored and misaligned datasets. In Table 2, we highlight these strengths on specific datasets generated from MNIST: MNIST-1k is a 1,000-image subset, MNIST-color is obtained by randomly selecting a color for the foreground and background, and affNIST-test2 is the result of random affine transformations. We used an online implementation3 for VaDE [25] and official ones for IMSAT [22] and IIC [24] to obtain baselines. Our results show that the performance of DTI clustering is barely affected by spatial and color transformations, while baseline performances drop on affNIST-test and are almost at chance level on MNIST-color. Figure 2a shows the quality and interpretability of our cluster centers on affNIST-test and MNIST-color. DTI clustering also seems more data-efficient than the baselines we tested.
Ablation on MNIST. In Table 3, we conduct an ablation study on MNIST of our full model trained following Section 4.2 with affine, morphological and TPS transformations. We first explore the effect of transformation modules. Their order is not crucial, as shown by similar minLoss performances, but can greatly affect the stability of the training, as can be seen in the average results. Each module contributes to the final performance, affine transformations being the most important. We then validate our training strategy showing that both empty cluster reassignment and curriculum learning for the different modules are necessary. Finally, we directly optimize the loss of Equation 2 (TI clustering) by optimizing the transformation parameters for each sample at each iteration of the batch clustering algorithm, without using our parameter predictors. With rich transformations which have many parameters, such as TPS and morphological ones, this approach fails completely. Using only affine transformations, we obtain results clearly superior to standard clustering, but worse than ours.
5.2 Application to web images
One of the main interests of our DTI clustering is that it allows us to discover trends in real image collections. All images are resized and center-cropped to 128×128. The selection of the number of clusters is a difficult problem and is discussed in Appendix C.3. In Figure 1c, we show examples of prototypes discovered in very large unfiltered sets (15k each) of Instagram images associated with different hashtags4 using DTI GMM applied with 40 clusters. While many images are noise and are associated with prototypes which are not easily interpretable, we show prototypes where iconic photos and poses can be clearly identified. To the best of our knowledge, we are the first to demonstrate this type of result from raw social network image collections.
2 https://www.cs.toronto.edu/~tijmen/affNIST/
3 https://github.com/GuHongyang/VaDE-pytorch
4 https://github.com/arc298/instagram-scraper was used to scrape photographs
Comparable results in AverageExplorer [52], e.g. on Santa images, could be obtained using ad hoc features and user interactions, while our results are produced fully automatically. Figure 3 shows qualitative clustering results on MegaDepth [35] and WikiPaintings [26]. Similar to our results on image clustering benchmarks, our learned prototypes are more relevant and accurate than the ones obtained from standard clustering. Note that some of our prototypes are very sharp: they typically correspond to sets of photographs between which we can accurately model deformations, e.g. scenes that are mostly planar, with little perspective effects. On the contrary, more unique photographs and photographs with strong 3D effects that we cannot model will be associated to less interpretable and blurrier prototypes, such as the ones in the last two columns of Figure 3b. In Figure 3b, in addition to the prototypes discovered, we show examples of images contained in each cluster as well as the aligned prototype. Even for such complex images, the simple combination of our color and spatial modules manages to model real image transformations like illumination variations and viewpoint changes. More web image clustering results are shown on our project webpage.
6 Conclusion
We have introduced an efficient deep transformation-invariant clustering approach in raw input space. Our key insight is the online optimization of a single clustering objective over clustering parameters and deep image transformation modules. We demonstrate competitive results on standard image clustering benchmarks, including improvements over state-of-the-art on SVHN and Fashion-MNIST. We also demonstrate promising results for real photograph collection clustering and visualization. Finally, note that our DTI clustering framework is not specific to images and can be extended to other types of data as long as appropriate transformation modules are designed beforehand.
Acknowledgements
This work was supported in part by ANR project EnHerit ANR-17-CE23-0008, project Rapid Tabasco, gifts from Adobe and HPC resources from GENCI-IDRIS (Grant 2020-AD011011697). We thank Bryan Russell, Vladimir Kim, Matthew Fisher, François Darmon, Simon Roburin, David Picard, Michaël Ramamonjisoa, Vincent Lepetit, Elliot Vincent, Jean Ponce, William Peebles and Alexei Efros for inspiring discussions and valuable feedback.
Broader Impact
The impact of clustering mainly depends on the data it is applied on. For instance, adding structure in user data can raise ethical concerns when users are assimilated to their cluster, and receive targeted advertisement and newsfeed. However, this is not specific to our method and can be said of any clustering algorithm. Also note that while our clustering can be applied for example to data from social media, the visual interpretation of the clusters it returns via the cluster centers respects privacy much better than showing specific examples from each cluster.
Because our method provides highly interpretable results, it might bring increased understanding of clustering algorithm results for the broader public, which we think may be a significant positive impact. | 1. What is the focus and contribution of the paper on image clustering?
2. What are the strengths of the proposed approach, particularly in terms of its ability to jointly learn clustering and alignment?
3. What are the weaknesses of the paper, especially regarding the transformation parameters and the optimization problem?
4. Do you have any concerns or questions about the empirical results presented in the paper?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The paper presents an approach to image clustering that computes the distance of each input image to a cluster prototype image by first appropriately transforming the prototype to match the image and then computing the distance of the transformed prototype to the input image. The main idea is the training of deep neural networks (one for each transformation) that take an image as input and provide as output the corresponding transformation parameters.
Strengths
An image clustering approach is presented that jointly learns to cluster and align images. The method provides good empirical results on challenging web image collections.
Weaknesses
The main drawback of the paper is the lack of important details concerning the transformation parameters that are predicted (Section 4.1). No information is provided about the outputs of the networks that predict the transformation parameters (e.g. how many parameters are to be predicted). The optimization problem solved in the M-step seems to be hard. Performance depends on cluster initialization and network initialization. There is no comment on these issues.
NIPS | Title
Deep Transformation-Invariant Clustering
Abstract
Recent advances in image clustering typically focus on learning better deep representations. In contrast, we present an orthogonal approach that does not rely on abstract features but instead learns to predict transformations and performs clustering directly in pixel space. This learning process naturally fits in the gradient-based training of K-means and Gaussian mixture model, without requiring any additional loss or hyper-parameters. It leads us to two new deep transformation-invariant clustering frameworks, which jointly learn prototypes and transformations. More specifically, we use deep learning modules that enable us to resolve invariance to spatial, color and morphological transformations. Our approach is conceptually simple and comes with several advantages, including the possibility to easily adapt the desired invariance to the task and a strong interpretability of both cluster centers and assignments to clusters. We demonstrate that our novel approach yields competitive and highly promising results on standard image clustering benchmarks. Finally, we showcase its robustness and the advantages of its improved interpretability by visualizing clustering results over real photograph collections.
1 Introduction
Gathering collections of images on a topic of interest is getting easier every day: simple tools can aggregate data from social media, web search, or specialized websites and filter it using hashtags, GPS coordinates, or semantic labels. However, identifying visual trends in such image collections remains difficult and usually involves manually organizing images or designing an ad hoc algorithm. Our goal in this paper is to design a clustering method which can be applied to such image collections, output a visual representation for each cluster and show how it relates to every associated image.
Directly comparing image pixels to decide if they belong to the same cluster leads to poor results because they are strongly impacted by factors irrelevant to clustering, such as exact viewpoint or lighting. Approaches to obtain clusters invariant to these transformations can be broadly classified into two groups. A first set of methods extracts invariant features and performs clustering in feature space. The features can be manually designed, but most state-of-the-art methods learn them directly from data. This is challenging because images are high-dimensional and learning relevant invariances thus requires huge amounts of data. For this reason, while recent approaches perform well on simple datasets like MNIST, they still struggle with real images. Another limitation of these approaches is that learned features are hard to interpret and visualize, making clustering results difficult to analyze. A second set of approaches, following the seminal work of Frey and Jojic on transformation-invariant clustering [11, 12, 13], uses explicit transformation models to align images before comparing them. These approaches have several potential advantages: (i) they enable direct control of the invariances to consider; (ii) because they do not need to discover invariances, they are potentially less data-hungry; (iii) since images are explicitly aligned, clustering process and results can easily be visualized. However, transformation-invariant approaches require solving a difficult joint optimization problem. In practice, they are thus often limited to small datasets and simple transformations, such as affine
transformations, and to the best of our knowledge they have never been evaluated on large standard image clustering datasets.
[Figure 1: (b) learned transformation modules — including morphological T^{mor}_{β_mor} and thin plate spline T^{tps}_{β_tps} — align prototype c_k to sample x_i; (c) examples of interpretable prototypes discovered from large image sets (15k each) associated with Instagram hashtags using our DTI clustering with 40 clusters. Each cluster contains from 200 to 800 images.]
In this paper, we propose a deep transformation-invariant (DTI) framework that performs transformation-invariant clustering at scale with complex transformations. Our main insight is to jointly learn deep alignment and clustering parameters with a single loss, relying on the gradient-based adaptations of K-means [38] and GMM optimization [9]. Not only is predicting transformations more computationally efficient than optimizing them, but it enables us to use complex color, thin plate spline and morphological transformations without any specific regularization. Because it is pixel-based, our deep transformation-invariant clustering is also easy to interpret: cluster centers and image alignments can be visualized to understand assignments. Despite its apparent simplicity, we demonstrate that our DTI clustering framework leads to results on par with the most recent feature learning approaches on standard benchmarks. We also show it is capable of discovering meaningful modes in real photograph collections, which we see as an important step to bridge the gap between theoretically well-grounded clustering approaches and semi-automatic tools relying on hand-designed features for exploring image collections, such as AverageExplorer [52] or ShadowDraw [32].
We first briefly discuss related works in Section 2. Section 3 then presents our DTI framework (Fig. 1a). Section 4 introduces our deep transformation modules and architecture (Fig. 1b) and discusses training details. Finally, Section 5 presents and analyzes our results (Fig. 1c).
Contributions. In this paper we present:
– a deep transformation-invariant clustering approach that jointly learns to cluster and align images,
– a deep image transformation module to learn spatial alignment, color modifications and, for the first time, morphological transformations,
– an experimental evaluation showing that our approach is competitive on standard image clustering benchmarks, improving over the state of the art on Fashion-MNIST and SVHN, and providing highly interpretable qualitative results even on challenging web image collections.
Code, data, models, as well as more visual results, are available on our project webpage: http://imagine.enpc.fr/~monniert/DTIClustering/.
2 Related work
Most recent approaches to image clustering focus on learning deep image representations, or features, on which clustering can be performed. Common strategies include autoencoders [48, 10, 25, 28], contrastive approaches [49, 5, 44], GANs [6, 51, 41] and mutual information based strategies [22, 18, 24]. Especially related to our work is [28], which leverages the idea of capsules [20] to learn equivariant image features, in a similar fashion to equivariant models [33, 45]. However, our method aims at being invariant to transformations, not at learning a representation. Another type of approach is to align images in pixel space using a relevant family of transformations, such as translations, rotations, or affine transformations, to obtain more meaningful pixel distances before clustering them. Frey and Jojic first introduced transformation-invariant clustering [11, 12, 13] by integrating pixel permutations as a discrete latent variable within an Expectation Maximization (EM) [9] procedure for a mixture of Gaussians. Their approach was however limited to a finite set of discrete transformations. Congealing generalized the idea to continuous parametric transformations, and in particular affine transformations, initially by using entropy minimization [40, 30]. A later version using least square costs [7, 8] demonstrated the relation of this approach to the classical Lucas-Kanade image alignment algorithm [37]. In its classical version, congealing only allows aligning all dataset images together, but the idea was extended to clustering [36, 39, 34], for example using a Bayesian model [39], or in a spectral clustering framework [34]. These works typically formulate difficult joint optimization problems and solve them by alternating between clustering and transformation optimization for each sample. They are thus limited to relatively small datasets and, to the best of our knowledge, were never compared to modern deep approaches on large benchmarks. Deep learning was recently used to scale the idea of congealing for global alignment of a single class of images [1] or time series [46]. Both works build on the idea of Spatial Transformer Networks [23] (STN) that spatial transformations are differentiable and can be learned by deep networks. We also build upon STN, but go beyond single-class alignment to jointly perform clustering. Additionally, we extend the idea to color and morphological transformations. We believe our work is the first to use deep learning to perform clustering in pixel space by explicitly aligning images.
3 Deep Transformation-Invariant clustering
In this section, we first discuss a generic formulation of our deep transformation-invariant clustering approach, then derive two algorithms based on K-means [38] and Gaussian mixture model [9]. Notation: In all the rest of the paper, we use the notation a1:n to refer to the set {a1, . . . , an}.
3.1 DTI framework
Contrary to most recent image clustering methods which rely on feature learning, we propose to perform clustering in pixel space by making the clustering invariant to a family of transformations. We consider N image samples x_{1:N} and aim at grouping them in K clusters using a prototype method. More specifically, each cluster k is defined by a prototype c_k, which can also be seen as an image, and prototypes are optimized to minimize a loss L which typically evaluates how well they represent the samples. We further assume that L can be written as a sum of a loss l computed over each sample:
L(c_{1:K}) = \sum_{i=1}^{N} l(x_i, \{c_1, \dots, c_K\}). \quad (1)
Once the problem is solved, each sample xi will be associated to the closest prototype.
Our key assumption is that in addition to the data, we have access to a group of parametric transformations {T_β, β ∈ B} to which we want to make the clustering invariant. For example, one can consider β ∈ R^6 and T_β the 2D affine transformation parametrized by β. Other transformations are discussed in Section 4.1. Instead of finding clusters by minimizing the loss of Equation 1, one can
minimize the following transformation-invariant loss:
L_{\mathrm{TI}}(c_{1:K}) = \sum_{i=1}^{N} \min_{\beta_{1:K}} l(x_i, \{T_{\beta_1}(c_1), \dots, T_{\beta_K}(c_K)\}). \quad (2)
In this equation, the minimum over β1:K is taken for each sample independently. This loss is invariant to transformations of the prototypes (see proof in Appendix B). Also note there is not a single optimum since the loss is the same if any prototype ck is replaced by Tβ(ck) for any β ∈ B. If necessary, for example for visualization purposes, this ambiguity can easily be resolved by adding a small regularization on the transformations. The optimization problem associated to LTI is of course difficult. A natural approach, which we use as baseline (noted TI), is to alternatively minimize over transformations and clustering parameters. We show that performing such optimization using a gradient descent can already lead to improved results over standard clustering but is computationally expensive. We experimentally show it is faster and actually better to instead learn K (deep) predictors f1:K for each prototype, which aim at associating to each sample xi the transformation parameters f1:K(xi) minimizing the loss, i.e. to minimize the following loss:
L_{\mathrm{DTI}}(c_{1:K}, f_{1:K}) = \sum_{i=1}^{N} l(x_i, \{T_{f_1(x_i)}(c_1), \dots, T_{f_K(x_i)}(c_K)\}), \quad (3)
where the predictors f_{1:K} are now shared across all samples. We found that using deep parameter predictors not only enables more efficient training but also leads to better clustering results, especially with more complex transformations. Indeed, the structure and optimization of the predictors naturally regularize the parameters for each sample, without requiring any specific regularization loss, especially when the number N of samples and the number of transformation parameters are high. In the next section we present concrete losses and algorithms. We then describe differentiable modules for relevant transformations and discuss the parameter predictor architecture as well as training in Section 4.
3.2 Application to K-means and GMM
K-means. The goal of the K-means algorithm [38] is to find a set of prototypes c_{1:K} such that the average Euclidean distance between each sample and the closest prototype is minimized. Following the reasoning of Section 3.1, the loss optimized in K-means can be transformed into a transformation-invariant loss:
L_{\mathrm{DTI\,K\text{-}means}}(c_{1:K}, f_{1:K}) = \sum_{i=1}^{N} \min_{k} \| x_i - T_{f_k(x_i)}(c_k) \|^2. \quad (4)
Following batch gradient-based trainings [3] of K-means, we can then simply jointly minimize LDTI K-means over prototypes c1:K and deep transformation parameter predictors f1:K using a batch gradient descent algorithm. In practice, we initialize prototypes c1:K with random samples and predictors f1:K such that ∀k, ∀x, Tfk(x) = Id. Gaussian mixture model. We now consider that data are observations of a mixture of K multivariate normal random variables X1:K , i.e. X = ∑ k δk,∆Xk where δ is the Kronecker function and
Gaussian mixture model. We now consider that the data are observations of a mixture of $K$ multivariate normal random variables $X_{1:K}$, i.e. $X = \sum_k \delta_{k,\Delta} X_k$, where $\delta$ is the Kronecker delta and $\Delta \in \{1, \dots, K\}$ is a random variable defined by $P(\Delta = k) = \pi_k$, with $\pi_k > 0$ for all $k$ and $\sum_k \pi_k = 1$. We write $\mu_k$ and $\Sigma_k$ for the mean and covariance of $X_k$, and $G(\,\cdot\,; \mu_k, \Sigma_k)$ for the associated probability density function. The transformation-invariant negative log-likelihood can then be written:
$$L_{\text{DTI GMM}}(\mu_{1:K}, \Sigma_{1:K}, \pi_{1:K}, f_{1:K}) = -\sum_{i=1}^{N} \log\Big(\sum_{k=1}^{K} \pi_k\, G\big(x_i\,;\, T_{f_k(x_i)}(\mu_k),\ T^*_{f_k(x_i)}(\Sigma_k)\big)\Big), \qquad (5)$$
where $T^*$ is a slightly modified version of $T$. Indeed, $T$ may include transformations that one can apply to the covariance, such as spatial transformations, and others that would not make sense, such as additive color transformations. We jointly minimize $L_{\text{DTI GMM}}$ over the Gaussian parameters, the mixing probabilities, and the deep transformation parameter predictors $f_{1:K}$ using a batch gradient-based EM procedure similar to [21, 15, 14] and detailed in Algorithm 1. In practice, we assume that pixels are independent, resulting in diagonal covariance matrices.
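With the diagonal covariance assumption, the Gaussian term inside Equation 5 reduces to a simple closed form. The sketch below is a hedged illustration of that term (also used in the E-step of Algorithm 1); the flattened shapes are assumptions.

```python
import math
import torch

def diag_gaussian_log_prob(x, mu, var):
    """log G(x; mu, diag(var)) for flattened images.

    x, mu, var: tensors of shape (B, D), where var holds the transformed
    diagonal covariance terms. Returns log-densities of shape (B,).
    """
    return -0.5 * (
        ((x - mu) ** 2 / var).sum(-1)        # Mahalanobis term
        + var.log().sum(-1)                  # log-determinant of diag(var)
        + x.size(-1) * math.log(2 * math.pi)
    )
```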
In such gradient-based procedures, two constraints have to be enforced, namely the positivity and normalization of the mixing probabilities $\pi_k$ and the non-negativity of the diagonal covariance terms.
Algorithm 1: Deep Transformation-Invariant Gaussian Mixture Model
Input: data $X$, number of clusters $K$, transformation $T$
Output: cluster assignments, Gaussian parameters $\mu_{1:K}, \Sigma_{1:K}$, deep predictors $f_{1:K}$
Initialization: $\mu_{1:K}$ with random samples, $\Sigma_{1:K} = 0.5$, $\eta_{1:K} = 1$ and $\forall k, \forall x,\ T_{f_k(x)} = \mathrm{Id}$
while not converged do
  i. sample a batch of data points $x_{1:N}$
  ii. compute mixing probabilities: $\pi_{1:K} = \mathrm{softmax}(\eta_{1:K})$
  iii. compute per-sample transformed Gaussian parameters: $\forall k, \forall i,\ \tilde\mu_{ki} = T_{f_k(x_i)}(\mu_k)$ and $\tilde\Sigma_{ki} = T^*_{f_k(x_i)}(\Sigma_k) + \mathrm{diag}(\sigma^2_{\min})$
  iv. (E-step) compute responsibilities: $\forall k, \forall i,\ \gamma_{ki} = \pi_k\, G(x_i\,;\, \tilde\mu_{ki}, \tilde\Sigma_{ki}) \,/\, \sum_j \pi_j\, G(x_i\,;\, \tilde\mu_{ji}, \tilde\Sigma_{ji})$
  v. (M-step) minimize the expected negative log-likelihood w.r.t. $\{\mu_{1:K}, \Sigma_{1:K}, \eta_{1:K}, f_{1:K}\}$:
     $\mathbb{E}[L_{\text{DTI GMM}}] = -\sum_{i=1}^{N} \sum_{k=1}^{K} \gamma_{ki} \big( \log G(x_i\,;\, \tilde\mu_{ki}, \tilde\Sigma_{ki}) + \log \pi_k \big)$
end
For the mixing probability constraints, we adopt the approach used in [21] and [14]: instead of directly optimizing $\pi_k$, we optimize mixing parameters $\eta_k$ from which the probabilities are computed with a softmax, which we write $\pi_{1:K} = \mathrm{softmax}(\eta_{1:K})$. For the variance non-negativity, we introduce a fixed minimal variance value $\sigma^2_{\min}$ which is added to the variances when evaluating the probability density function. This approach differs from the one in [14], which instead uses clipping, because we found training with clipped values harder. In practice, we take $\sigma_{\min} = 0.25$.
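In code, both constraints amount to a reparametrization of the learnable tensors. The snippet below is a sketch under assumed shapes and an assumed parametrization of the raw variances (the paper does not specify it):

```python
import torch

K, D = 10, 28 * 28                # e.g. 10 clusters of flattened 28x28 images
SIGMA_MIN = 0.25

# Unconstrained learnable parameters.
eta = torch.ones(K, requires_grad=True)            # mixing logits, eta_k = 1 at init
raw_sigma = torch.full((K, D), 0.5, requires_grad=True)

pi = torch.softmax(eta, dim=0)                     # pi_k > 0 and sum_k pi_k = 1
var = raw_sigma ** 2 + SIGMA_MIN ** 2              # variances floored at sigma_min^2
```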
4 Learning image transformations
4.1 Architecture and transformation modules
We consider a set of prototypes $c_{1:K}$ that we would like to transform to match a given sample $x$. To do so, we propose to learn, for each prototype $c_k$, a separate deep predictor which predicts transformation parameters $\beta$. We propose to model the family of transformations $T_\beta$ as a sequence of $M$ parametric transformations such that, writing $\beta = (\beta_1, \dots, \beta_M)$, $T_\beta = T^M_{\beta_M} \circ \dots \circ T^1_{\beta_1}$. In the following, we describe the architecture of the transformation parameter predictors $f_{1:K}$, as well as each family of parametric transformation modules we use. Figure 1b shows our learned transformation process on a MNIST example.
Parameter prediction network. For all experiments, we use the same parameter predictor network architecture, composed of a shared ResNet [19] backbone truncated after the global average pooling, followed by $K \times M$ Multi-Layer Perceptrons (MLPs), one for each prototype and each transformation module. For the ResNet backbone, we use ResNet-20 for images smaller than $64 \times 64$ and ResNet-18 otherwise. Each MLP has the same architecture, with two hidden layers of 128 units.
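The following PyTorch sketch illustrates this architecture; it is an assumption-laden illustration (the torchvision ResNet-18 stands in for the paper's backbones, and `params_per_module` lists the parameter count of each transformation module):

```python
import torch.nn as nn
import torchvision

class ParameterPredictor(nn.Module):
    """Shared backbone followed by one MLP per (prototype, module) pair."""

    def __init__(self, n_prototypes, params_per_module):  # e.g. [6, 16] for affine + TPS
        super().__init__()
        backbone = torchvision.models.resnet18()
        # Keep everything up to (and including) the global average pooling.
        self.backbone = nn.Sequential(*list(backbone.children())[:-1])
        feat_dim = 512
        self.heads = nn.ModuleList([
            nn.ModuleList([
                nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                              nn.Linear(128, 128), nn.ReLU(),
                              nn.Linear(128, p))
                for p in params_per_module
            ])
            for _ in range(n_prototypes)
        ])

    def forward(self, x):
        feat = self.backbone(x).flatten(1)                  # (B, 512)
        # betas[k][m]: parameters of module m for prototype k, shape (B, p_m)
        return [[head(feat) for head in proto_heads] for proto_heads in self.heads]
```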
Spatial transformer module. To model spatial transformations of the prototypes, we follow the spatial transformers developed by Jaderberg et al. [23]. The key idea is to model spatial transformations as a differentiable image sampling of the input using a deformed sampling grid. We use affine $T^{\text{aff}}_\beta$, projective $T^{\text{proj}}_\beta$ and thin plate spline $T^{\text{tps}}_\beta$ [2] transformations, which respectively correspond to 6, 8 and 16 (a $4 \times 4$ grid of control points) parameters.
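As an illustration, a minimal affine module can be written with PyTorch's differentiable grid sampling; treating the predicted parameters as an offset from the identity (so that $\beta = 0$ gives $T = \mathrm{Id}$, matching the initialization above) is our assumption:

```python
import torch
import torch.nn.functional as F

def affine_transform(prototype, beta):
    """Apply batched 2D affine transformations to a single prototype.

    prototype: (C, H, W); beta: (B, 6) predicted parameters.
    """
    B = beta.size(0)
    identity = torch.tensor([1., 0., 0., 0., 1., 0.], device=beta.device)
    theta = (identity + beta).view(B, 2, 3)
    proto = prototype.unsqueeze(0).expand(B, *prototype.shape)
    grid = F.affine_grid(theta, proto.shape, align_corners=False)
    return F.grid_sample(proto, grid, align_corners=False)
```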
Color transformation module. We model color transformations with a channel-wise diagonal affine transformation on the full image, which we write $T^{\text{col}}_\beta$. It has 2 parameters for greyscale images and 6 parameters for colored images. We first used a full affine transformation with 12 parameters; however, the network was able to hide several patterns in the different color channels of a single prototype (Appendix C.4). Note that a similar transformation was theoretically introduced in capsules [28], but with the different goal of obtaining a color-invariant feature representation. Deep feature-based approaches often handle color images with a pre-processing step such as Sobel filtering [4, 24, 28]. We believe the way we align the colors of the prototypes to obtain color invariance in pixel space is novel, and it enables us to directly work with colored images without using any pre-processing or specific invariant features.
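A diagonal color module then reduces to a per-channel scale and bias; the sketch below again assumes the parameters are offsets from the identity:

```python
def color_transform(prototype, beta):
    """Channel-wise diagonal affine color transformation T_col.

    prototype: (B, C, H, W); beta: (B, 2 * C) split into per-channel
    scale and bias (2 parameters for greyscale, 6 for RGB).
    """
    C = prototype.size(1)
    scale = 1.0 + beta[:, :C].view(-1, C, 1, 1)
    bias = beta[:, C:].view(-1, C, 1, 1)
    return scale * prototype + bias
```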
Morphological transformation module. We introduce a new transformation module to learn morphological operations [16] such as dilation and erosion. We consider a greyscale image $x \in \mathbb{R}^D$ of size $U \times V = D$, and write $x[u, v]$ the value of the pixel $(u, v)$ for $u \in \{1, \dots, U\}$ and $v \in \{1, \dots, V\}$. Given a 2D region $A$, the dilation of $x$ by $A$, $D_A(x) \in \mathbb{R}^D$, is defined by $D_A(x)[u, v] = \max_{(u', v') \in A} x[u + u', v + v']$, and its erosion by $A$, $E_A(x) \in \mathbb{R}^D$, is defined by $E_A(x)[u, v] = \min_{(u', v') \in A} x[u + u', v + v']$. Directly learning the region $A$ which parametrizes these transformations is challenging; we thus propose to learn parameters $(\alpha, a)$ for the following soft version of these transformations:
$$T^{\text{mor}}_{(\alpha, a)}(x)[u, v] = \frac{\sum_{(u', v') \in W}\ x[u + u', v + v']\; a[u', v']\; e^{\alpha x[u + u', v + v']}}{\sum_{(u', v') \in W}\ a[u', v']\; e^{\alpha x[u + u', v + v']}}, \qquad (6)$$
where $W$ is a fixed set of 2D positions, $\alpha$ is a softmax (positive values) or softmin (negative values) parameter, and $a$ is a set of parameters with values between 0 and 1 defined for every position $(u', v') \in W$. The parameters $a$ can be interpreted as an image, or as a soft version of the region $A$ used for morphological operations. Note that if $a[u', v'] = 1_{\{(u', v') \in A\}}$, then when $\alpha \to +\infty$ (resp. $-\infty$), this successfully emulates $D_A$ (resp. $E_A$). In practice, we use a grid of integer positions around the origin of size $7 \times 7$ for $W$. Note that since morphological transformations do not form a group, the transformation-invariant denomination is slightly abusive.
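Equation 6 can be implemented compactly with an unfold over local windows. A hedged sketch (for clarity it omits the usual max-subtraction that would stabilize the exponentials):

```python
import torch
import torch.nn.functional as F

def soft_morphology(x, a, alpha, window=7):
    """Soft dilation/erosion of greyscale images (Eq. 6).

    x:     (B, 1, H, W) images
    a:     (window * window,) weights in [0, 1], soft structuring region
    alpha: scalar; large positive values emulate dilation, large negative erosion
    """
    pad = window // 2
    patches = F.unfold(x, kernel_size=window, padding=pad)    # (B, window^2, H*W)
    w = a.view(1, -1, 1) * torch.exp(alpha * patches)         # soft max/min weights
    out = (patches * w).sum(dim=1) / (w.sum(dim=1) + 1e-8)
    return out.view_as(x)
```

Materializing all $7 \times 7$ windows with `unfold` is cheap at these window sizes and keeps the operation fully differentiable in $(\alpha, a)$.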
4.2 Training
We found that two key elements were critical to obtain good results: empty cluster reassignment and curriculum learning. We then discuss further implementation details and computational cost.
Empty cluster reassignment. Similar to [4], we adopt an empty cluster reassignment strategy during our clustering optimization. We reinitialize both prototype and deep predictor of "tiny" clusters using the parameters of the largest cluster with a small added noise. In practice, the size of balanced clusters being N/K, we define "tiny" as less than 20% of N/K.
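A sketch of this reassignment logic follows (the copying of the deep predictor, mentioned above, is omitted; the noise scale is an assumption):

```python
import torch

def reassign_empty_clusters(counts, prototypes, threshold=0.2, noise_std=1e-4):
    """Reinitialize prototypes of tiny clusters from the largest cluster.

    counts: (K,) number of samples currently assigned to each cluster.
    """
    K = counts.numel()
    tiny = counts < threshold * counts.sum() / K      # less than 20% of N / K
    largest = counts.argmax()
    with torch.no_grad():
        for k in torch.nonzero(tiny).flatten():
            noise = noise_std * torch.randn_like(prototypes[largest])
            prototypes[k] = prototypes[largest] + noise
    return tiny
```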
Curriculum learning. Learning to predict transformations is a hard task, especially when the number of parameters is high. To ease learning, we thus adopt a curriculum learning strategy by gradually adding more complex transformation modules to the training. Given a target sequence of transformations to learn, we first train our model without any transformation - or equivalently with an identity module - then iteratively add subsequent modules once convergence has been reached. We found this to be especially important when modeling local deformations with transformations that have many parameters, such as TPS and morphological ones. Intuitively, prototypes should first be coarsely aligned before attempting to refine the alignment with more complex transformations.
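In pseudocode, the schedule is a simple loop; the sketch below passes a hypothetical `train_until_convergence` callable, and the module order shown is one assumed example, not prescribed by the paper:

```python
def curriculum_train(model, train_until_convergence,
                     curriculum=("identity", "affine", "morphological", "tps")):
    """Add transformation modules one at a time, training to convergence
    after each addition; "identity" denotes training with no transformation.

    train_until_convergence: callable(model, active_modules) supplied by the
    surrounding training loop (hypothetical helper).
    """
    active = []
    for name in curriculum:
        active.append(name)
        train_until_convergence(model, active)
```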
Implementation details. Both the clustering parameters and the parameter prediction networks are learned jointly and end-to-end using the Adam optimizer [27] with a $10^{-6}$ weight decay on the neural network parameters. We sequentially add transformation modules at a constant learning rate of 0.001, then divide the learning rate by 10 after convergence - corresponding to different numbers of epochs depending on the dataset characteristics - and train for a few more epochs with the smaller learning rate. We use a batch size of 64 for real photograph collections and 128 otherwise.
Computational cost. Training DTI K-means or DTI GMM on MNIST takes approximately 50 minutes on a single Nvidia GeForce RTX 2080 Ti GPU and full dataset inference takes 30 seconds. We found it to be much faster than directly optimizing transformation parameters (TI clustering) for which convergence took more than 10 hours of training.
5 Experiments
In this section, we first analyze our approach and compare it to state-of-the-art, then showcase its interest for image collection analysis and visualization.
5.1 Analysis and comparisons
Similar to previous work on image clustering, we evaluate our approach with global classification accuracy (ACC), where a cluster-to-class mapping is computed using the Hungarian algorithm [29],
and Normalized Mutual Information (NMI). The datasets and the corresponding transformation modules we used are described in Appendix A.
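For reference, the ACC metric can be computed as follows; this is a standard implementation sketch using SciPy's Hungarian solver, not code from the paper:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Accuracy under the best one-to-one cluster-to-class mapping."""
    K = max(y_true.max(), y_pred.max()) + 1
    contingency = np.zeros((K, K), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        contingency[p, t] += 1
    rows, cols = linear_sum_assignment(-contingency)  # maximize matched samples
    return contingency[rows, cols].sum() / len(y_true)
```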
Comparison on standard benchmarks. In Table 1, we report our results on standard image clustering benchmarks, i.e. digit datasets (MNIST [31], USPS [17]), a clothing dataset (Fashion-MNIST [47]) and a face dataset (FRGC [43]). We also report results for SVHN [42], where concurrent methods use pre-processing to remove color bias. In the table, we separate representation-based from pixel-based methods and mark results using data augmentation or manually selected features as input. Note that our results depend on initialization; we provide detailed statistics in Appendix C.1. Our DTI clustering is fully unsupervised and does not require any data augmentation, ad hoc features, or any hyper-parameter, while performing clustering directly in pixel space. We report average performances and the performance of the minimal-loss run, which we found to correlate well with high performance (Appendix C.2). Because this non-trivial criterion allows us to automatically select a run in a fully unsupervised way, we argue it can be compared to average results from competing methods, which do not provide such a criterion. First, DTI clustering achieves competitive results on all datasets, in particular improving state-of-the-art by a significant margin on SVHN and Fashion-MNIST. For SVHN, we first found that the
prototype quality was harmed by digits on the side of the image. To pay more attention to the center digit, we weighted the clustering loss by a Gaussian weight ($\sigma = 7$). This led to better prototypes and allowed us to improve over all concurrent methods by a large margin. Compared to representation-based methods, our pixel-based clustering is highly interpretable. Figure 2a shows standard GMM prototypes and our prototypes learned with DTI GMM, which appear to be much sharper than the standard ones. This directly stems from the quality of the learned transformations, visualized in Figure 2b. Our transformation modules can successfully align the prototype, adapt the thickness and apply local elastic deformations. More alignment results are available on our project webpage.
Augmented and specific datasets. DTI clustering also works on small, colored and misaligned datasets. In Table 2, we highlight these strengths on specific datasets generated from MNIST: MNIST-1k is a 1000-image subset, MNIST-color is obtained by randomly selecting a color for the foreground and background, and affNIST-test2 is the result of random affine transformations. We used an online implementation3 for VaDE [25] and official ones for IMSAT [22] and IIC [24] to obtain baselines. Our results show that the performance of DTI clustering is barely affected by spatial and color transformations, while baseline performances drop on affNIST-test and are almost at chance level on MNIST-color. Figure 2a shows the quality and interpretability of our cluster centers on affNIST-test and MNIST-color. DTI clustering also seems more data-efficient than the baselines we tested.
Ablation on MNIST. In Table 3, we conduct an ablation study on MNIST of our full model trained following Section 4.2 with affine, morphological and TPS transformations. We first explore the effect of transformation modules. Their order is not crucial, as shown by similar minLoss performances, but can greatly affect the stability of the training, as can be seen in the average results. Each module contributes to the final performance, affine transformations being the most important. We then validate our training strategy showing that both empty cluster reassignment and curriculum learning for the different modules are necessary. Finally, we directly optimize the loss of Equation 2 (TI clustering) by optimizing the transformation parameters for each sample at each iteration of the batch clustering algorithm, without using our parameter predictors. With rich transformations which have many parameters, such as TPS and morphological ones, this approach fails completely. Using only affine transformations, we obtain results clearly superior to standard clustering, but worse than ours.
5.2 Application to web images
One of the main interests of our DTI clustering is that it allows us to discover trends in real image collections. All images are resized and center-cropped to 128×128. The selection of the number of clusters is a difficult problem and is discussed in Appendix C.3. In Figure 1c, we show examples of prototypes discovered in very large unfiltered sets (15k each) of Instagram images associated with different hashtags4 using DTI GMM applied with 40 clusters. While many images are noise and are associated with prototypes that are not easily interpretable, we show prototypes where iconic photos and poses can be clearly identified. To the best of our knowledge, we are the first to demonstrate this type of results from raw social network image collections.
2 https://www.cs.toronto.edu/~tijmen/affNIST/
3 https://github.com/GuHongyang/VaDE-pytorch
4 https://github.com/arc298/instagram-scraper was used to scrape photographs
Comparable results in AverageExplorer [52], e.g. on Santa images, could be obtained using ad hoc features and user interactions, while our results are produced fully automatically. Figure 3 shows qualitative clustering results on MegaDepth [35] and WikiPaintings [26]. Similar to our results on image clustering benchmarks, our learned prototypes are more relevant and accurate than the ones obtained from standard clustering. Note that some of our prototypes are very sharp: they typically correspond to sets of photographs between which we can accurately model deformations, e.g. scenes that are mostly planar, with little perspective effects. On the contrary, more unique photographs and photographs with strong 3D effects that we cannot model will be associated with less interpretable and blurrier prototypes, such as the ones in the last two columns of Figure 3b. In Figure 3b, in addition to the prototypes discovered, we show examples of images contained in each cluster as well as the aligned prototype. Even for such complex images, the simple combination of our color and spatial modules manages to model real image transformations like illumination variations and viewpoint changes. More web image clustering results are shown on our project webpage.
6 Conclusion
We have introduced an efficient deep transformation-invariant clustering approach in raw input space. Our key insight is the online optimization of a single clustering objective over clustering parameters and deep image transformation modules. We demonstrate competitive results on standard image clustering benchmarks, including improvements over state-of-the-art on SVHN and Fashion-MNIST. We also demonstrate promising results for real photograph collection clustering and visualization. Finally, note that our DTI clustering framework is not specific to images and can be extended to other types of data as long as appropriate transformation modules are designed beforehand.
Acknowledgements
This work was supported in part by ANR project EnHerit ANR-17-CE23-0008, project Rapid Tabasco, gifts from Adobe and HPC resources from GENCI-IDRIS (Grant 2020-AD011011697). We thank Bryan Russell, Vladimir Kim, Matthew Fisher, François Darmon, Simon Roburin, David Picard, Michaël Ramamonjisoa, Vincent Lepetit, Elliot Vincent, Jean Ponce, William Peebles and Alexei Efros for inspiring discussions and valuable feedback.
Broader Impact
The impact of clustering mainly depends on the data it is applied to. For instance, adding structure to user data can raise ethical concerns when users are assimilated to their cluster and receive targeted advertisements and newsfeeds. However, this is not specific to our method and can be said of any clustering algorithm. Also note that while our clustering can be applied, for example, to data from social media, the visual interpretation of the clusters it returns via the cluster centers respects privacy much better than showing specific examples from each cluster.
Because our method provides highly interpretable results, it might bring increased understanding of clustering algorithm results for the broader public, which we think may be a significant positive impact.

1. What is the main contribution of the paper regarding clustering in image space?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and empirical evaluation?
3. What are the weaknesses of the paper, especially regarding the experimental evaluation and comparison with other works?
4. How does the reviewer assess the significance and relevance of the paper's content to the NeurIPS community?

Summary and Contributions
***Post rebuttal update*** I have read the author's rebuttal and thank the author for answering my questions. I am in favour of the paper's acceptance.

This paper introduces a new method for clustering directly in image space. Existing methods build features on which to perform clustering in feature space, or use explicit image transformations to align the images before clustering in a joint optimisation manner. This paper also learns the transformations while clustering, with a single loss and a joint optimisation algorithm, for both K-means and Gaussian Mixture Model (GMM). However, the authors propose to predict the transformations of each data point instead of optimising them, with the use of a neural network. It thus builds on Spatial Transformer Networks and integrates the method into the clustering problem. Experiments are performed on standard benchmarks and more challenging real images (web images) to validate the relevance of their method.
Strengths
- This work is not a theoretical contribution, yet all claims are supported by strong empirical evaluation (an ablation study is performed and an extended comparison with existing methods is provided) and proof.
- The method is novel, albeit comparisons with relevant methods are missing (see below).
- The significance of the paper is good, improving on minLoss over the different datasets, and on interpretability.
- This paper is of high relevance to the NeurIPS community as it is simple to implement, leads to interpretable results, and is shown to work on real web images.
Weaknesses
- The paper's experimental evaluation lacks an analysis (unless I missed it) of the effect of the number of clusters K, which is an important parameter of the model, especially on real images where the number of clusters is unknown. This is an important point when deciding on the applicability of the method.
- The related work section makes no comparison with the literature on equivariant models, that is, models that learn to encode the natural transformations of the data (e.g. rotations, translations), either with or without prior knowledge. See for example: https://arxiv.org/pdf/1901.11399.pdf (and references therein), https://arxiv.org/abs/1411.5908, https://arxiv.org/pdf/2002.06991.pdf
NIPS

Title
Deep Transformation-Invariant Clustering
Abstract
Recent advances in image clustering typically focus on learning better deep representations. In contrast, we present an orthogonal approach that does not rely on abstract features but instead learns to predict transformations and performs clustering directly in pixel space. This learning process naturally fits in the gradient-based training of K-means and Gaussian mixture model, without requiring any additional loss or hyper-parameters. It leads us to two new deep transformation-invariant clustering frameworks, which jointly learn prototypes and transformations. More specifically, we use deep learning modules that enable us to resolve invariance to spatial, color and morphological transformations. Our approach is conceptually simple and comes with several advantages, including the possibility to easily adapt the desired invariance to the task and a strong interpretability of both cluster centers and assignments to clusters. We demonstrate that our novel approach yields competitive and highly promising results on standard image clustering benchmarks. Finally, we showcase its robustness and the advantages of its improved interpretability by visualizing clustering results over real photograph collections.
1 Introduction
Gathering collections of images on a topic of interest is getting easier every day: simple tools can aggregate data from social media, web search, or specialized websites and filter it using hashtags, GPS coordinates, or semantic labels. However, identifying visual trends in such image collections remains difficult and usually involves manually organizing images or designing an ad hoc algorithm. Our goal in this paper is to design a clustering method which can be applied to such image collections, output a visual representation for each cluster and show how it relates to every associated image.
Directly comparing image pixels to decide if they belong to the same cluster leads to poor results because they are strongly impacted by factors irrelevant to clustering, such as exact viewpoint or lighting. Approaches to obtain clusters invariant to these transformations can be broadly classified into two groups. A first set of methods extracts invariant features and performs clustering in feature space. The features can be manually designed, but most state-of-the-art methods learn them directly from data. This is challenging because images are high-dimensional and learning relevant invariances thus requires huge amounts of data. For this reason, while recent approaches perform well on simple datasets like MNIST, they still struggle with real images. Another limitation of these approaches is that learned features are hard to interpret and visualize, making clustering results difficult to analyze. A second set of approaches, following the seminal work of Frey and Jojic on transformation-invariant clustering [11, 12, 13], uses explicit transformation models to align images before comparing them. These approaches have several potential advantages: (i) they enable direct control of the invariances to consider; (ii) because they do not need to discover invariances, they are potentially less data-hungry; (iii) since images are explicitly aligned, clustering process and results can easily be visualized. However, transformation-invariant approaches require solving a difficult joint optimization problem. In practice, they are thus often limited to small datasets and simple transformations, such as affine
transformations, and to the best of our knowledge they have never been evaluated on large standard image clustering datasets.

[Figure 1, caption fragments: (b) learned transformations - e.g. morphological $T^{\text{mor}}_{\beta_{\text{mor}}}$ and thin plate spline $T^{\text{tps}}_{\beta_{\text{tps}}}$ - align prototype $c_k$ to $x_i$. (c) Examples of interpretable prototypes discovered from large image sets (15k each) associated with hashtags on Instagram using our DTI clustering with 40 clusters. Each cluster contains from 200 to 800 images.]
In this paper, we propose a deep transformation-invariant (DTI) framework that enables to perform transformation-invariant clustering at scale and uses complex transformations. Our main insight is to jointly learn deep alignment and clustering parameters with a single loss, relying on the gradient-based adaptations of K-means [38] and GMM optimization [9]. Not only is predicting transformations more computationally efficient than optimizing them, but it enables us to use complex color, thin plate spline and morphological transformations without any specific regularization. Because it is pixel-based, our deep transformation-invariant clustering is also easy to interpret: cluster centers and image alignments can be visualized to understand assignments. Despite its apparent simplicity, we demonstrate that our DTI clustering framework leads to results on par with the most recent feature learning approaches on standard benchmarks. We also show it is capable of discovering meaningful modes in real photograph collections, which we see as an important step to bridge the gap between theoretically well-grounded clustering approaches and semi-automatic tools relying on hand-designed features for exploring image collections, such as AverageExplorer [52] or ShadowDraw [32].
We first briefly discuss related works in Section 2. Section 3 then presents our DTI framework (Fig. 1a). Section 4 introduces our deep transformation modules and architecture (Fig. 1b) and discuss training details. Finally, Section 5 presents and analyzes our results (Fig. 1c).
Contributions. In this paper we present:
– a deep transformation-invariant clustering approach that jointly learns to cluster and align images,
– a deep image transformation module to learn spatial alignment, color modifications and, for the first time, morphological transformations,
– an experimental evaluation showing that our approach is competitive on standard image clustering benchmarks, improving over state-of-the-art on Fashion-MNIST and SVHN, and provides highly interpretable qualitative results even on challenging web image collections.
Code, data, and models, as well as more visual results, are available on our project webpage: http://imagine.enpc.fr/~monniert/DTIClustering/.
2 Related work
Most recent approaches to image clustering focus on learning deep image representations, or features, on which clustering can be performed. Common strategies include autoencoders [48, 10, 25, 28], contrastive approaches [49, 5, 44], GANs [6, 51, 41] and mutual information based strategies [22, 18, 24]. Especially related to our work is [28], which leverages the idea of capsules [20] to learn equivariant image features, in a similar fashion to equivariant models [33, 45]. However, our method aims at being invariant to transformations, not at learning a representation. Another type of approach is to align images in pixel space using a relevant family of transformations, such as translations, rotations, or affine transformations, to obtain more meaningful pixel distances before clustering them. Frey and Jojic first introduced transformation-invariant clustering [11, 12, 13] by integrating pixel permutations as a discrete latent variable within an Expectation Maximization (EM) [9] procedure for a mixture of Gaussians. Their approach was however limited to a finite set of discrete transformations. Congealing generalized the idea to continuous parametric transformations, and in particular affine transformations, initially by using entropy minimization [40, 30]. A later version using least squares costs [7, 8] demonstrated the relation of this approach to the classical Lucas-Kanade image alignment algorithm [37]. In its classical version, congealing only enables aligning all dataset images together, but the idea was extended to clustering [36, 39, 34], for example using a Bayesian model [39], or in a spectral clustering framework [34]. These works typically formulate difficult joint optimization problems and solve them by alternating between clustering and transformation optimization for each sample. They are thus limited to relatively small datasets and, to the best of our knowledge, were never compared to modern deep approaches on large benchmarks. Deep learning was recently used to scale the idea of congealing for global alignment of a single class of images [1] or time series [46]. Both works build on the idea of Spatial Transformer Networks [23] (STN), namely that spatial transformations are differentiable and can be learned by deep networks. We also build upon STN, but go beyond single-class alignment to jointly perform clustering. Additionally, we extend the idea to color and morphological transformations. We believe our work is the first to use deep learning to perform clustering in pixel space by explicitly aligning images.
3 Deep Transformation-Invariant clustering
In this section, we first discuss a generic formulation of our deep transformation-invariant clustering approach, then derive two algorithms based on K-means [38] and the Gaussian mixture model [9]. Notation: in all the rest of the paper, we use the notation $a_{1:n}$ to refer to the set $\{a_1, \dots, a_n\}$.
1. What is the focus and contribution of the paper regarding transformation-invariant clustering?
2. What are the strengths of the proposed approach, particularly in terms of learning image transformations and clustering?
3. What are the weaknesses of the paper, especially regarding its limitation to image data?
4. How does the reviewer assess the clarity and quality of the paper's content?

Summary and Contributions
This paper presents a novel approach for transformation-invariant clustering, called Deep Transformation-Invariant (DTI). The main idea is to jointly learn image transformations (to align images) and to cluster them (previous work learns to cluster with explicit transformations). The main novelty of this work is to learn the transformation from the pixels while learning to cluster. The deep image transformation module is designed to learn image alignments. The module can model three types of transformation: spatial transforms (as in [34, 38]), color transforms, and morphological ones (dilation, erosion). Experiments are conducted on standard benchmarks for image clustering, as well as web image benchmarks, with strong results. The written presentation is clear and easy to understand.
Strengths
- The proposed method can simultaneously learn to align images and cluster them, which is new and interesting.
- The design of the transformation module is interesting and includes some new aspects of color & morphological transforms.
- Experiments are strong compared with current methods.
Weaknesses
- The transformations are specifically applied to image data, while the DTI framework can be more generic.
NIPS | Title
Deep Transformation-Invariant Clustering
Abstract
Recent advances in image clustering typically focus on learning better deep representations. In contrast, we present an orthogonal approach that does not rely on abstract features but instead learns to predict transformations and performs clustering directly in pixel space. This learning process naturally fits in the gradient-based training of K-means and Gaussian mixture model, without requiring any additional loss or hyper-parameters. It leads us to two new deep transformation-invariant clustering frameworks, which jointly learn prototypes and transformations. More specifically, we use deep learning modules that enable us to resolve invariance to spatial, color and morphological transformations. Our approach is conceptually simple and comes with several advantages, including the possibility to easily adapt the desired invariance to the task and a strong interpretability of both cluster centers and assignments to clusters. We demonstrate that our novel approach yields competitive and highly promising results on standard image clustering benchmarks. Finally, we showcase its robustness and the advantages of its improved interpretability by visualizing clustering results over real photograph collections.
1 Introduction
Gathering collections of images on a topic of interest is getting easier every day: simple tools can aggregate data from social media, web search, or specialized websites and filter it using hashtags, GPS coordinates, or semantic labels. However, identifying visual trends in such image collections remains difficult and usually involves manually organizing images or designing an ad hoc algorithm. Our goal in this paper is to design a clustering method which can be applied to such image collections, output a visual representation for each cluster and show how it relates to every associated image.
Directly comparing image pixels to decide if they belong to the same cluster leads to poor results because they are strongly impacted by factors irrelevant to clustering, such as exact viewpoint or lighting. Approaches to obtain clusters invariant to these transformations can be broadly classified into two groups. A first set of methods extracts invariant features and performs clustering in feature space. The features can be manually designed, but most state-of-the-art methods learn them directly from data. This is challenging because images are high-dimensional and learning relevant invariances thus requires huge amounts of data. For this reason, while recent approaches perform well on simple datasets like MNIST, they still struggle with real images. Another limitation of these approaches is that learned features are hard to interpret and visualize, making clustering results difficult to analyze. A second set of approaches, following the seminal work of Frey and Jojic on transformation-invariant clustering [11, 12, 13], uses explicit transformation models to align images before comparing them. These approaches have several potential advantages: (i) they enable direct control of the invariances to consider; (ii) because they do not need to discover invariances, they are potentially less data-hungry; (iii) since images are explicitly aligned, clustering process and results can easily be visualized. However, transformation-invariant approaches require solving a difficult joint optimization problem. In practice, they are thus often limited to small datasets and simple transformations, such as affine
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
βmor
, and thin plate spline T tpsβtps - to align prototype ck to xi. (c) Examples
of interpretable prototypes discovered from large images sets (15k each) associated to hashtags in Instagram using our DTI clustering with 40 clusters. Each cluster contains from 200 to 800 images.
transformations, and to the best of our knowledge they have never been evaluated on large standard image clustering datasets.
In this paper, we propose a deep transformation-invariant (DTI) framework that enables to perform transformation-invariant clustering at scale and uses complex transformations. Our main insight is to jointly learn deep alignment and clustering parameters with a single loss, relying on the gradient-based adaptations of K-means [38] and GMM optimization [9]. Not only is predicting transformations more computationally efficient than optimizing them, but it enables us to use complex color, thin plate spline and morphological transformations without any specific regularization. Because it is pixel-based, our deep transformation-invariant clustering is also easy to interpret: cluster centers and image alignments can be visualized to understand assignments. Despite its apparent simplicity, we demonstrate that our DTI clustering framework leads to results on par with the most recent feature learning approaches on standard benchmarks. We also show it is capable of discovering meaningful modes in real photograph collections, which we see as an important step to bridge the gap between theoretically well-grounded clustering approaches and semi-automatic tools relying on hand-designed features for exploring image collections, such as AverageExplorer [52] or ShadowDraw [32].
We first briefly discuss related works in Section 2. Section 3 then presents our DTI framework (Fig. 1a). Section 4 introduces our deep transformation modules and architecture (Fig. 1b) and discusses training details. Finally, Section 5 presents and analyzes our results (Fig. 1c).
Contributions. In this paper we present:
– a deep transformation-invariant clustering approach that jointly learns to cluster and align images,
– a deep image transformation module to learn spatial alignment, color modifications and, for the first time, morphological transformations,
– an experimental evaluation showing that our approach is competitive on standard image clustering benchmarks, improving over the state of the art on Fashion-MNIST and SVHN, and provides highly interpretable qualitative results even on challenging web image collections.
Code, data, models, as well as more visual results, are available on our project webpage (http://imagine.enpc.fr/~monniert/DTIClustering/).
2 Related work
Most recent approaches to image clustering focus on learning deep image representations, or features, on which clustering can be performed. Common strategies include autoencoders [48, 10, 25, 28], contrastive approaches [49, 5, 44], GANs [6, 51, 41] and mutual information based strategies [22, 18, 24]. Especially related to our work is [28], which leverages the idea of capsules [20] to learn equivariant image features, similar in spirit to equivariant models [33, 45]. However, our method aims at being invariant to transformations, not at learning a representation.

Another type of approach is to align images in pixel space using a relevant family of transformations, such as translations, rotations, or affine transformations, to obtain more meaningful pixel distances before clustering. Frey and Jojic first introduced transformation-invariant clustering [11, 12, 13] by integrating pixel permutations as a discrete latent variable within an Expectation Maximization (EM) [9] procedure for a mixture of Gaussians. Their approach was however limited to a finite set of discrete transformations. Congealing generalized the idea to continuous parametric transformations, in particular affine transformations, initially by using entropy minimization [40, 30]. A later version using least-squares costs [7, 8] demonstrated the relation of this approach to the classical Lucas-Kanade image alignment algorithm [37]. In its classical version, congealing only aligns all dataset images together, but the idea was extended to clustering [36, 39, 34], for example using a Bayesian model [39] or a spectral clustering framework [34]. These works typically formulate difficult joint optimization problems and solve them by alternating between clustering and transformation optimization for each sample. They are thus limited to relatively small datasets and, to the best of our knowledge, were never compared to modern deep approaches on large benchmarks.

Deep learning was recently used to scale the idea of congealing for global alignment of a single class of images [1] or time series [46]. Both works build on the idea of Spatial Transformer Networks (STN) [23], namely that spatial transformations are differentiable and can be learned by deep networks. We also build upon STN, but go beyond single-class alignment to jointly perform clustering. Additionally, we extend the idea to color and morphological transformations. We believe our work is the first to use deep learning to perform clustering in pixel space by explicitly aligning images.
3 Deep Transformation-Invariant clustering
In this section, we first discuss a generic formulation of our deep transformation-invariant clustering approach, then derive two algorithms based on K-means [38] and Gaussian mixture models [9]. Notation: in the rest of the paper, we use $a_{1:n}$ to refer to the set $\{a_1, \dots, a_n\}$.
3.1 DTI framework
Contrary to most recent image clustering methods, which rely on feature learning, we propose to perform clustering in pixel space by making the clustering invariant to a family of transformations. We consider $N$ image samples $x_{1:N}$ and aim at grouping them into $K$ clusters using a prototype method. More specifically, each cluster $k$ is defined by a prototype $c_k$, which can itself be seen as an image, and the prototypes are optimized to minimize a loss $\mathcal{L}$ which typically evaluates how well they represent the samples. We further assume that $\mathcal{L}$ can be written as a sum of a loss $l$ computed over each sample:
$$\mathcal{L}(c_{1:K}) = \sum_{i=1}^{N} l(x_i, \{c_1, \dots, c_K\}). \qquad (1)$$
Once the problem is solved, each sample xi will be associated to the closest prototype.
Our key assumption is that in addition to the data, we have access to a group of parametric transformations {Tβ , β ∈ B} to which we want to make the clustering invariant. For example, one can consider β ∈ R6 and Tβ the 2D affine transformation parametrized by β. Other transformations are discussed in Section 4.1. Instead of finding clusters by minimizing the loss of Equation 1, one can
minimize the following transformation-invariant loss:
$$\mathcal{L}_{TI}(c_{1:K}) = \sum_{i=1}^{N} \min_{\beta_1, \dots, \beta_K} l(x_i, \{T_{\beta_1}(c_1), \dots, T_{\beta_K}(c_K)\}). \qquad (2)$$
In this equation, the minimum over $\beta_{1:K}$ is taken for each sample independently. This loss is invariant to transformations of the prototypes (see proof in Appendix B). Also note that there is no single optimum, since the loss is unchanged if any prototype $c_k$ is replaced by $T_\beta(c_k)$ for any $\beta \in \mathcal{B}$. If necessary, for example for visualization purposes, this ambiguity can easily be resolved by adding a small regularization on the transformations. The optimization problem associated to $\mathcal{L}_{TI}$ is of course difficult. A natural approach, which we use as a baseline (denoted TI), is to alternately minimize over transformations and clustering parameters. We show that performing such optimization with gradient descent can already improve over standard clustering, but it is computationally expensive. We experimentally show it is faster and actually better to instead learn $K$ (deep) predictors $f_{1:K}$, one per prototype, which aim to associate to each sample $x_i$ the transformation parameters $f_{1:K}(x_i)$ minimizing the loss, i.e. to minimize the following loss:
$$\mathcal{L}_{DTI}(c_{1:K}, f_{1:K}) = \sum_{i=1}^{N} l(x_i, \{T_{f_1(x_i)}(c_1), \dots, T_{f_K(x_i)}(c_K)\}), \qquad (3)$$
where the predictors $f_{1:K}$ are now shared across all samples. We found that using deep parameter predictors not only enables more efficient training but also leads to better clustering results, especially with more complex transformations. Indeed, the structure and optimization of the predictors naturally regularize the parameters for each sample, without requiring any specific regularization loss, especially when the number $N$ of samples and the number of transformation parameters are high. In the next section we present concrete losses and algorithms. We then describe differentiable modules for relevant transformations and discuss the parameter predictor architecture as well as training in Section 4.
3.2 Application to K-means and GMM
K-means. The goal of the K-means algorithm [38] is to find a set of prototypes $c_{1:K}$ such that the average Euclidean distance between each sample and the closest prototype is minimized. Following the reasoning of Section 3.1, the loss optimized in K-means can be turned into a transformation-invariant loss:
$$\mathcal{L}_{DTI\ K\text{-}means}(c_{1:K}, f_{1:K}) = \sum_{i=1}^{N} \min_k \|x_i - T_{f_k(x_i)}(c_k)\|^2. \qquad (4)$$
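To make the joint optimization concrete, here is a minimal PyTorch-style sketch of the objective in Equation (4). All names (`predictors`, `transforms`, tensor shapes) are illustrative assumptions rather than the authors' released code; the concrete transformation modules are described in Section 4.

```python
import torch

def dti_kmeans_loss(x, prototypes, predictors, transforms):
    """Sketch of Eq. (4): sum_i min_k ||x_i - T_{f_k(x_i)}(c_k)||^2.

    x          : (N, C, H, W) batch of images
    prototypes : list of K tensors of shape (C, H, W)
    predictors : list of K networks mapping a batch of images to parameters
    transforms : list of K callables (batched prototype, params) -> aligned prototype
    """
    distances = []
    for c_k, f_k, T_k in zip(prototypes, predictors, transforms):
        beta = f_k(x)                                         # per-sample parameters
        proto = T_k(c_k.expand(x.shape[0], *c_k.shape), beta)
        distances.append(((x - proto) ** 2).flatten(1).sum(1))
    distances = torch.stack(distances, dim=1)                 # (N, K)
    return distances.min(dim=1).values.sum()                  # closest transformed prototype
```

Minimizing this scalar with any stochastic gradient optimizer updates the prototypes and the predictor weights jointly, which is the single-loss training described above.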
Following batch gradient-based trainings of K-means [3], we can then simply jointly minimize $\mathcal{L}_{DTI\ K\text{-}means}$ over prototypes $c_{1:K}$ and deep transformation parameter predictors $f_{1:K}$ using batch gradient descent. In practice, we initialize the prototypes $c_{1:K}$ with random samples and the predictors $f_{1:K}$ such that $\forall k, \forall x, T_{f_k(x)} = \mathrm{Id}$.

Gaussian mixture model. We now consider that the data are observations of a mixture of $K$ multivariate normal random variables $X_{1:K}$, i.e. $X = \sum_k \delta_{k,\Delta} X_k$, where $\delta$ is the Kronecker function and $\Delta \in \{1, \dots, K\}$ is a random variable defined by $P(\Delta = k) = \pi_k$, with $\forall k, \pi_k > 0$ and $\sum_k \pi_k = 1$. We write $\mu_k$ and $\Sigma_k$ for the mean and covariance of $X_k$, and $G(\,\cdot\,; \mu_k, \Sigma_k)$ for the associated probability density function. The transformation-invariant negative log-likelihood can then be written:
$$\mathcal{L}_{DTI\ GMM}(\mu_{1:K}, \Sigma_{1:K}, \pi_{1:K}, f_{1:K}) = -\sum_{i=1}^{N} \log\Big( \sum_{k=1}^{K} \pi_k \, G\big(x_i \,;\, T_{f_k(x_i)}(\mu_k), T^*_{f_k(x_i)}(\Sigma_k)\big) \Big), \qquad (5)$$

where $T^*$ is a slightly modified version of $T$. Indeed, $T$ may include transformations that one can apply to the covariance, such as spatial transformations, and others that would not make sense, such as additive color transformations. We jointly minimize $\mathcal{L}_{DTI\ GMM}$ over Gaussian parameters, mixing probabilities, and deep transformation parameters $f_{1:K}$ using a batch gradient-based EM procedure similar to [21, 15, 14] and detailed in Algorithm 1. In practice, we assume that pixels are independent, resulting in diagonal covariance matrices.
In such gradient-based procedures, two constraints have to be enforced: the positivity and normalization of the mixing probabilities $\pi_k$, and the non-negativity of the diagonal covariance terms.
Algorithm 1: Deep Transformation-Invariant Gaussian Mixture Model
Input: data $X$, number of clusters $K$, transformation $T$
Output: cluster assignments, Gaussian parameters $\mu_{1:K}, \Sigma_{1:K}$, deep predictors $f_{1:K}$
Initialization: $\mu_{1:K}$ with random samples, $\Sigma_{1:K} = 0.5$, $\eta_{1:K} = 1$ and $\forall k, \forall x, T_{f_k(x)} = \mathrm{Id}$
while not converged do
    i. sample a batch of data points $x_{1:N}$
    ii. compute mixing probabilities: $\pi_{1:K} = \mathrm{softmax}(\eta_{1:K})$
    iii. compute per-sample transformed Gaussian parameters:
        $\forall k, \forall i, \quad \tilde\mu_{ki} = T_{f_k(x_i)}(\mu_k) \quad \text{and} \quad \tilde\Sigma_{ki} = T^*_{f_k(x_i)}(\Sigma_k) + \mathrm{diag}(\sigma^2_{min})$
    iv. compute responsibilities (E-step): $\forall k, \forall i, \quad \gamma_{ki} = \dfrac{\pi_k G(x_i ; \tilde\mu_{ki}, \tilde\Sigma_{ki})}{\sum_j \pi_j G(x_i ; \tilde\mu_{ji}, \tilde\Sigma_{ji})}$
    v. minimize the expected negative log-likelihood w.r.t. $\{\mu_{1:K}, \Sigma_{1:K}, \eta_{1:K}, f_{1:K}\}$ (M-step):
        $\mathbb{E}[\mathcal{L}_{DTI\ GMM}] = -\sum_{i=1}^{N} \sum_{k=1}^{K} \gamma_{ki} \big( \log G(x_i ; \tilde\mu_{ki}, \tilde\Sigma_{ki}) + \log \pi_k \big)$
end
For the mixing probability constraints, we adopt the approach of [21] and [14], which optimize mixing parameters $\eta_k$ used to compute the probabilities via a softmax instead of directly optimizing $\pi_k$; we write $\pi_{1:K} = \mathrm{softmax}(\eta_{1:K})$. For the variance non-negativity, we introduce a fixed minimal variance $\sigma^2_{min}$ which is added to the variances when evaluating the probability density function. This differs from [14], which instead uses clipping; we found training with clipped values harder. In practice, we take $\sigma_{min} = 0.25$.
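A minimal sketch of how these two constraints can be enforced by reparameterization, with illustrative sizes K and D; this is an assumption-laden sketch, not the authors' implementation:

```python
import torch

K, D = 10, 784        # illustrative numbers of clusters and pixel dimensions
sigma_min = 0.25

eta = torch.zeros(K, requires_grad=True)           # free mixing parameters
log_sigma = torch.zeros(K, D, requires_grad=True)  # free log standard deviations

pi = torch.softmax(eta, dim=0)                     # positive, sums to one by construction
var = log_sigma.exp() ** 2 + sigma_min ** 2        # diagonal variances floored at sigma_min^2
```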
4 Learning image transformations
4.1 Architecture and transformation modules
We consider a set of prototypes $c_{1:K}$ that we would like to transform to match a given sample $x$. To do so, we learn, for each prototype $c_k$, a separate deep predictor which predicts transformation parameters $\beta$. We model the family of transformations $T_\beta$ as a sequence of $M$ parametric transformations such that, writing $\beta = (\beta_1, \dots, \beta_M)$, $T_\beta = T^M_{\beta_M} \circ \dots \circ T^1_{\beta_1}$. In the following, we describe the architecture of the transformation parameter predictors $f_{1:K}$, as well as each family of parametric transformation modules we use. Figure 1b shows our learned transformation process on an MNIST example.
Parameter prediction network. For all experiments, we use the same parameter predictor architecture: a shared ResNet [19] backbone truncated after the global average pooling, followed by $K \times M$ multi-layer perceptrons (MLPs), one for each prototype and each transformation module. For the ResNet backbone, we use ResNet-20 for images smaller than 64 × 64 and ResNet-18 otherwise. Each MLP has the same architecture, with two hidden layers of 128 units.
Spatial transformer module. To model spatial transformations of the prototypes, we follow the spatial transformers of Jaderberg et al. [23]. The key idea is to model spatial transformations as differentiable image sampling of the input with a deformed sampling grid. We use affine $T^{aff}_\beta$, projective $T^{proj}_\beta$ and thin plate spline $T^{tps}_\beta$ [2] transformations, which respectively correspond to 6, 8 and 16 (a 4×4 grid of control points) parameters.
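For concreteness, here is a minimal sketch of an affine spatial transformer step built on PyTorch's `affine_grid`/`grid_sample`; the offset-from-identity parameterization of the predicted parameters is our assumption:

```python
import torch
import torch.nn.functional as F

def affine_transform(proto, beta):
    """Warp a batch of prototypes with predicted 2D affine parameters.

    proto : (N, C, H, W) prototype repeated over the batch
    beta  : (N, 6) predicted parameters, treated here as an offset from identity
    """
    identity = torch.tensor([1., 0., 0., 0., 1., 0.], device=beta.device)
    theta = (identity + beta).view(-1, 2, 3)                  # (N, 2, 3) matrices
    grid = F.affine_grid(theta, list(proto.shape), align_corners=False)
    return F.grid_sample(proto, grid, align_corners=False)
```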
Color transformation module. We model color transformation with a channel-wise diagonal affine transformation on the full image, which we write $T^{col}_\beta$. It has 2 parameters for greyscale images and 6 parameters for colored images. We first used a full affine transformation with 12 parameters; however, the network was able to hide several patterns in the different color channels of a single prototype (Appendix C.4). Note that a similar transformation was theoretically introduced in capsules [28], but with the different goal of obtaining a color-invariant feature representation. Deep feature-based approaches often handle color images with a pre-processing step such as Sobel filtering [4, 24, 28]. We believe the way we align the colors of the prototypes to obtain color invariance in pixel space is novel, and it enables us to work directly with colored images without any pre-processing or specific invariant features.
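A sketch of the channel-wise diagonal affine color transformation; again, the offset-from-identity parameterization is our assumption:

```python
def color_transform(proto, beta):
    """Channel-wise diagonal affine color transformation (T^col sketch).

    proto : (N, C, H, W) prototypes; beta : (N, 2*C) per-channel scale and shift,
    parameterized as an offset from the identity transformation.
    """
    C = proto.shape[1]
    scale = 1.0 + beta[:, :C].view(-1, C, 1, 1)
    shift = beta[:, C:].view(-1, C, 1, 1)
    return scale * proto + shift
```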
Morphological transformation module. We introduce a new transformation module to learn morphological operations [16] such as dilation and erosion. We consider a greyscale image $x \in \mathbb{R}^D$ of size $U \times V = D$ and write $x[u, v]$ for the value of the pixel $(u, v)$, with $u \in \{1, \dots, U\}$ and $v \in \{1, \dots, V\}$. Given a 2D region $A$, the dilation of $x$ by $A$, $D_A(x) \in \mathbb{R}^D$, is defined by $D_A(x)[u, v] = \max_{(u', v') \in A} x[u + u', v + v']$, and its erosion by $A$, $E_A(x) \in \mathbb{R}^D$, by $E_A(x)[u, v] = \min_{(u', v') \in A} x[u + u', v + v']$. Directly learning the region $A$ which parametrizes these transformations is challenging; we thus propose to learn parameters $(\alpha, a)$ for the following soft version of these transformations:
$$T^{mor}_{(\alpha, a)}(x)[u, v] = \frac{\sum_{(u', v') \in W} x[u + u', v + v'] \cdot a[u', v'] \cdot e^{\alpha x[u + u', v + v']}}{\sum_{(u', v') \in W} a[u', v'] \cdot e^{\alpha x[u + u', v + v']}}, \qquad (6)$$
where $W$ is a fixed set of 2D positions, $\alpha$ is a softmax (positive values) or softmin (negative values) parameter, and $a$ is a set of parameters with values between 0 and 1 defined for every position $(u', v') \in W$. The parameters $a$ can be interpreted as an image, or as a soft version of the region $A$ used for morphological operations. Note that if $a[u', v'] = \mathbb{1}_{\{(u', v') \in A\}}$, then as $\alpha \to +\infty$ (resp. $-\infty$) the transformation emulates $D_A$ (resp. $E_A$). In practice, we use a 7 × 7 grid of integer positions around the origin for $W$. Since morphological transformations do not form a group, the transformation-invariant denomination is slightly abusive here.
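Equation (6) can be implemented efficiently by extracting k × k patches with `unfold` and taking a weighted softmax over each window; the sketch below assumes per-sample parameters predicted by the network and illustrative shapes:

```python
import torch
import torch.nn.functional as F

def soft_morphology(x, alpha, a, k=7):
    """Soft dilation/erosion of Eq. (6) as a weighted softmax over k x k windows.

    x     : (N, 1, H, W) greyscale images
    alpha : (N, 1) scale (alpha -> +inf emulates dilation, -inf erosion)
    a     : (N, k*k) soft structuring element with values in [0, 1]
    """
    N, _, H, W = x.shape
    patches = F.unfold(x, kernel_size=k, padding=k // 2)      # (N, k*k, H*W)
    weights = a.unsqueeze(-1) * torch.exp(alpha.unsqueeze(-1) * patches)
    out = (patches * weights).sum(1) / weights.sum(1).clamp(min=1e-8)
    return out.view(N, 1, H, W)
```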
4.2 Training
We found that two key elements were critical to obtain good results: empty cluster reassignment and curriculum learning. We then discuss further implementation details and computational cost.
Empty cluster reassignment. Similar to [4], we adopt an empty cluster reassignment strategy during clustering optimization. We reinitialize both the prototype and the deep predictor of "tiny" clusters using the parameters of the largest cluster with a small added noise. Since balanced clusters have size $N/K$, we define "tiny" as less than 20% of $N/K$.
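A minimal sketch of this reassignment step, with all names (`sizes`, `prototypes`, `predictors`) illustrative:

```python
import copy
import torch

def reassign_tiny_clusters(sizes, prototypes, predictors, N, K, noise=1e-3):
    """Reinitialize clusters with fewer than 20% of N/K members from the largest
    cluster, adding small noise to the prototype (all names illustrative)."""
    largest = max(range(K), key=lambda k: sizes[k])
    for k in range(K):
        if k != largest and sizes[k] < 0.2 * N / K:
            prototypes[k].data.copy_(
                prototypes[largest].data
                + noise * torch.randn_like(prototypes[largest]))
            predictors[k].load_state_dict(
                copy.deepcopy(predictors[largest].state_dict()))
```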
Curriculum learning. Learning to predict transformations is a hard task, especially when the number of parameters is high. To ease learning, we thus adopt a curriculum strategy by gradually adding more complex transformation modules during training. Given a target sequence of transformations, we first train our model without any transformation (or equivalently with an identity module), then iteratively add subsequent modules once convergence has been reached. We found this especially important when modeling local deformations with transformations that have many parameters, such as TPS and morphological transformations. Intuitively, prototypes should first be coarsely aligned before the alignment is refined with more complex transformations.
Implementation details. Clustering parameters and parameter prediction networks are learned jointly and end-to-end using the Adam optimizer [27] with a $10^{-6}$ weight decay on the neural network parameters. We sequentially add transformation modules at a constant learning rate of 0.001, then divide the learning rate by 10 after convergence (corresponding to different numbers of epochs depending on the dataset characteristics) and train for a few more epochs with the smaller learning rate. We use a batch size of 64 for real photograph collections and 128 otherwise.
Computational cost. Training DTI K-means or DTI GMM on MNIST takes approximately 50 minutes on a single Nvidia GeForce RTX 2080 Ti GPU and full dataset inference takes 30 seconds. We found it to be much faster than directly optimizing transformation parameters (TI clustering) for which convergence took more than 10 hours of training.
5 Experiments
In this section, we first analyze our approach and compare it to state-of-the-art, then showcase its interest for image collection analysis and visualization.
5.1 Analysis and comparisons
Similar to previous work on image clustering, we evaluate our approach with global classification accuracy (ACC), where a cluster-to-class mapping is computed using the Hungarian algorithm [29], and Normalized Mutual Information (NMI). The datasets and the corresponding transformation modules we use are described in Appendix A.
Comparison on standard benchmarks. In Table 1, we report our results on standard image clustering benchmarks: digit datasets (MNIST [31], USPS [17]), a clothing dataset (Fashion-MNIST [47]) and a face dataset (FRGC [43]). We also report results on SVHN [42], where concurrent methods use pre-processing to remove color bias. In the table, we separate representation-based from pixel-based methods and mark results using data augmentation or manually selected features as input. Note that our results depend on initialization; we provide detailed statistics in Appendix C.1. Our DTI clustering is fully unsupervised and requires no data augmentation, ad hoc features, or hyper-parameters, while performing clustering directly in pixel space. We report average performances and the performance of the minimal-loss run, which we found to correlate well with high performance (Appendix C.2). Because this non-trivial criterion allows a run to be selected automatically in a fully unsupervised way, we argue it can be compared to average results from competing methods which do not provide such a criterion.

First, DTI clustering achieves competitive results on all datasets, in particular improving the state of the art by a significant margin on SVHN and Fashion-MNIST. For SVHN, we first found that the prototype quality was harmed by digits on the side of the image. To pay more attention to the center digit, we weighted the clustering loss by a Gaussian weight (σ = 7). This led to better prototypes and allowed us to improve over all concurrent methods by a large margin. Compared to representation-based methods, our pixel-based clustering is highly interpretable. Figure 2a shows standard GMM prototypes and our prototypes learned with DTI GMM, which appear much sharper than the standard ones. This directly stems from the quality of the learned transformations, visualized in Figure 2b. Our transformation modules can successfully align the prototype, adapt the thickness and apply local elastic deformations. More alignment results are available on our project webpage.
Augmented and specific datasets. DTI clustering also works on small, colored and misaligned datasets. In Table 2, we highlight these strengths on specific datasets generated from MNIST: MNIST-1k is a 1000-image subset, MNIST-color is obtained by randomly selecting a color for the foreground and the background, and affNIST-test2 is the result of random affine transformations. We used an online implementation3 for VaDE [25] and the official ones for IMSAT [22] and IIC [24] to obtain baselines. Our results show that the performance of DTI clustering is barely affected by spatial and color transformations, while baseline performances drop on affNIST-test and are almost at chance level on MNIST-color. Figure 2a shows the quality and interpretability of our cluster centers on affNIST-test and MNIST-color. DTI clustering also appears more data-efficient than the baselines we tested.
Ablation on MNIST. In Table 3, we conduct an ablation study on MNIST of our full model trained following Section 4.2 with affine, morphological and TPS transformations. We first explore the effect of the transformation modules. Their order is not crucial, as shown by similar minimal-loss performances, but it can greatly affect the stability of training, as can be seen in the average results. Each module contributes to the final performance, affine transformations being the most important. We then validate our training strategy, showing that both empty cluster reassignment and curriculum learning of the different modules are necessary. Finally, we directly optimize the loss of Equation 2 (TI clustering) by optimizing the transformation parameters for each sample at each iteration of the batch clustering algorithm, without our parameter predictors. With rich transformations that have many parameters, such as TPS and morphological ones, this approach fails completely. Using only affine transformations, we obtain results clearly superior to standard clustering, but worse than ours.
5.2 Application to web images
One of the main interests of our DTI clustering is that it allows discovering trends in real image collections. All images are resized and center-cropped to 128×128. The selection of the number of clusters is a difficult problem and is discussed in Appendix C.3. In Figure 1c, we show examples of prototypes discovered in very large unfiltered sets (15k images each) of Instagram images associated to different hashtags4, using DTI GMM with 40 clusters. While many images are noise and are associated to prototypes which are not easily interpretable, we show prototypes where iconic photos and poses can be clearly identified. To the best of our knowledge, we are the first to demonstrate this type of result from raw social network image collections.
2 https://www.cs.toronto.edu/~tijmen/affNIST/
3 https://github.com/GuHongyang/VaDE-pytorch
4 https://github.com/arc298/instagram-scraper was used to scrape photographs
Comparable results in AverageExplorer [52], e.g. on Santa images, could be obtained using ad hoc features and user interactions, while our results are produced fully automatically. Figure 3 shows qualitative clustering results on MegaDepth [35] and WikiPaintings [26]. Similar to our results on image clustering benchmarks, our learned prototypes are more relevant and accurate than the ones obtained from standard clustering. Note that some of our prototypes are very sharp: they typically correspond to sets of photographs between which we can accurately model deformations, e.g. scenes that are mostly planar, with little perspective effects. On the contrary, more unique photographs and photographs with strong 3D effects that we cannot model will be associated to less interpretable and blurrier prototypes, such as the ones in the last two columns of Figure 3b. In Figure 3b, in addition to the prototypes discovered, we show examples of images contained in each cluster as well as the aligned prototype. Even for such complex images, the simple combination of our color and spatial modules manages to model real image transformations like illumination variations and viewpoint changes. More web image clustering results are shown on our project webpage.
6 Conclusion
We have introduced an efficient deep transformation-invariant clustering approach in raw input space. Our key insight is the online optimization of a single clustering objective over clustering parameters and deep image transformation modules. We demonstrate competitive results on standard image clustering benchmarks, including improvements over state-of-the-art on SVHN and Fashion-MNIST. We also demonstrate promising results for real photograph collection clustering and visualization. Finally, note that our DTI clustering framework is not specific to images and can be extended to other types of data as long as appropriate transformation modules are designed beforehand.
Acknowledgements
This work was supported in part by ANR project EnHerit ANR-17-CE23-0008, project Rapid Tabasco, gifts from Adobe and HPC resources from GENCI-IDRIS (Grant 2020-AD011011697). We thank Bryan Russell, Vladimir Kim, Matthew Fisher, François Darmon, Simon Roburin, David Picard, Michaël Ramamonjisoa, Vincent Lepetit, Elliot Vincent, Jean Ponce, William Peebles and Alexei Efros for inspiring discussions and valuable feedback.
Broader Impact
The impact of clustering mainly depends on the data it is applied on. For instance, adding structure in user data can raise ethical concerns when users are assimilated to their cluster, and receive targeted advertisement and newsfeed. However, this is not specific to our method and can be said of any clustering algorithm. Also note that while our clustering can be applied for example to data from social media, the visual interpretation of the clusters it returns via the cluster centers respects privacy much better than showing specific examples from each cluster.
Because our method provides highly interpretable results, it might bring increased understanding of clustering algorithm results to the broader public, which we think may be a significant positive impact.

1. What is the main contribution of the paper in the field of deep image clustering?
2. What are the strengths of the proposed approach compared to existing methods?
3. What are the weaknesses of the paper regarding its claims and comparisons with other works?
4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?

Summary and Contributions
The paper proposes a novel approach towards deep image clustering which, unlike previous approaches, does not aim at learning suitable latent space representations but at learning to predict image transformations in order to cluster in image space. The proposed approach is a deep transformation-invariant clustering approach that jointly learns to cluster and align images. The transformations, such as spatial alignment, color modifications or morphological transformations, are learned in an image transformation module. The paper provides a comparison to state-of-the-art image clustering approaches on MNIST, Fashion-MNIST and USPS and shows on-par performance or small improvements.
Strengths
The proposed approach is conceptually different from existing approaches and performs well. The idea to use transformers in this context is novel and interesting. The paper is clearly written and well illustrated. The proposed approach is evaluated in the context of K-means clustering and Gaussian mixture models and performs well in both cases. The proposed approach provides interpretable qualitative results.
Weaknesses
The results depend on the initialization (K-means), yet they are reported without standard deviation. Mean, median and standard deviation over several runs should be reported. The improvement over the state of the art is small.
Abstract
A fundamental question in the theory of reinforcement learning is: suppose the optimal Q-function lies in the linear span of a given d dimensional feature mapping, is sample-efficient reinforcement learning (RL) possible? The recent and remarkable result of Weisz et al. (2020) resolves this question in the negative, providing an exponential (in d) sample size lower bound, which holds even if the agent has access to a generative model of the environment. One may hope that such a lower can be circumvented with an even stronger assumption that there is a constant gap between the optimal Q-value of the best action and that of the second-best action (for all states); indeed, the construction in Weisz et al. (2020) relies on having an exponentially small gap. This work resolves this subsequent question, showing that an exponential sample complexity lower bound still holds even if a constant gap is assumed. Perhaps surprisingly, this result implies an exponential separation between the online RL setting and the generative model setting, where sample-efficient RL is in fact possible in the latter setting with a constant gap. Complementing our negative hardness result, we give two positive results showing that provably sample-efficient RL is possible either under an additional low-variance assumption or under a novel hypercontractivity assumption.
1 Introduction
There has been substantial recent theoretical interest in understanding the means by which we can avoid the curse of dimensionality and obtain sample-efficient reinforcement learning (RL) methods [Wen and Van Roy, 2017, Du et al., 2019b,a, Wang et al., 2019, Yang and Wang, 2019, Lattimore et al., 2020, Yang and Wang, 2020, Jin et al., 2020, Cai et al., 2020, Zanette et al., 2020, Weisz et al., 2020, Du et al., 2020, Zhou et al., 2020b,a, Modi et al., 2020, Jia et al., 2020, Ayoub et al., 2020]. Here, the extant body of literature largely focuses on sufficient conditions for efficient reinforcement learning. Our understanding of the necessary conditions for efficient reinforcement learning is far more limited. With regards to the latter, arguably the most natural assumption is linear realizability: we assume that the optimal Q-function lies in the linear span of a given feature map. The goal is to obtain polynomial sample complexity under this linear realizability assumption alone.
This “linear Q∗ problem” was a major open problem (see Du et al. [2019a] for discussion), and a recent hardness result by Weisz et al. [2020] provides a negative answer. In particular, the result shows that even with access to a generative model, any algorithm requires an exponential number
of samples (in the dimension d of the feature mapping) to find a near-optimal policy, provided the action space has exponential size.
With this question resolved, one may naturally ask what is the source of hardness for the construction in Weisz et al. [2020] and if there are additional assumptions that can serve to bypass the underlying source of this hardness. Here, arguably, it is most natural to further examine the suboptimality gap in the problem, which is the gap between the optimal Q-value of the best action and that of the second-best action; the construction in Weisz et al. [2020] does in fact fundamentally rely on having an exponentially small gap. Instead, if we assume the gap is lower bounded by a constant for all states, we may hope that the problem becomes substantially easier since with a finite number of samples (appropriately obtained), we can identify the optimal policy itself (i.e., the gap assumption allows us to translate value-based accuracy to the identification of the optimal policy itself). In fact, this intuition is correct in the following sense: with a generative model, it is not difficult to see that polynomial sample complexity is possible under the linear realizability assumption plus the suboptimality gap assumption, since the suboptimality gap assumption allows us to easily identify an optimal action for all states, thus making the problem tractable (see Section C in Du et al. [2019a] for a formal argument).
More generally, the suboptimality gap assumption is widely discussed in the bandit literature [Dani et al., 2008, Audibert and Bubeck, 2010, Abbasi-Yadkori et al., 2011] and the reinforcement learning literature [Simchowitz and Jamieson, 2019, Yang et al., 2020] to obtain fine-grained sample complexity upper bounds. More specifically, under the realizability assumption and the suboptimality gap assumption, it has been shown that polynomial sample complexity is possible if the transition is nearly deterministic [Du et al., 2019b, 2020] (also see Wen and Van Roy [2017]). However, it remains unclear whether the suboptimality gap assumption is sufficient to bypass the hardness result in Weisz et al. [2020], or the same exponential lower bound still holds even under the suboptimality gap assumption, when the transition could be stochastic and the generative model is unavailable. For the construction in Weisz et al. [2020], at the final stage, the gap between the value of the optimal action and its non-optimal counterparts will be exponentially small, and therefore the same construction does not imply an exponential sample complexity lower bound under the suboptimality gap assumption.
Our contributions. In this work, we significantly strengthen the hardness result in Weisz et al. [2020]. In particular, we show that in the online RL setting (where a generative model is unavailable) with an exponential-sized action space, the exponential sample complexity lower bound still holds even under the suboptimality gap assumption. Complementing this hardness result, we show that under the realizability and suboptimality gap assumptions, the lower bound can be bypassed if one further assumes the low-variance assumption in Du et al. [2019b]1 or a hypercontractivity assumption. Hypercontractive distributions include Gaussian distributions (with arbitrary covariance matrices), uniform distributions over hypercubes, and strongly log-concave distributions [Kothari and Steinhardt, 2017]. This condition has been shown to be powerful for outlier-robust linear regression [Kothari and Steurer, 2017], but had not previously been introduced for reinforcement learning with linear function approximation.
Our results have several interesting implications, which we discuss in detail in Section 6. Most notably, our results imply an exponential separation between the standard reinforcement learning setting and the generative model setting. Moreover, our construction enjoys greater simplicity, making it more suitable to be generalized for other RL problems or to be presented for pedagogical purposes.
1 We note that the sample complexity of the algorithm in Du et al. [2019b] has at least linear dependency on the number of actions, which is not sufficient for bypassing our hardness result, which assumes an exponential-sized action space.
2 Related work
Previous hardness results. Existing exponential lower bounds in RL [Krishnamurthy et al., 2016, Chen and Jiang, 2019] usually construct unstructured MDPs with an exponentially large state space. Du et al. [2019a] prove that under the approximate version of the realizability assumption, i.e., the optimal Q-function lies in the linear span of a given feature mapping approximately, any algorithm requires an exponential number of samples to find a near-optimal policy. The main idea in Du et al. [2019a] is to use the Johnson-Lindenstrauss lemma [Johnson and Lindenstrauss, 1984] to construct a large set of near-orthogonal feature vectors. Such idea is later generalized to other settings, including those in Wang et al. [2020a], Kumar et al. [2020], Van Roy and Dong [2019], Lattimore et al. [2020]. Whether the exponential lower bound still holds under the exact version of the realizability assumption is left as an open problem in Du et al. [2019a].
The above open problem is recently solved by Weisz et al. [2020]. They show that under the exact version of the realizability assumption, any algorithm requires an exponential number of samples to find a near-optimal policy assuming an exponential-sized action space. The construction in Weisz et al. [2020] also uses the Johnson-Lindenstrauss lemma to construct a large set of near-orthogonal feature vectors, with additional subtleties to ensure exact realizability.
Very recently, under the exact realizability assumption, strong lower bounds are proved in the offline setting [Wang et al., 2020b, Zanette, 2020, Amortila et al., 2020]. These work focus on the offline RL setting, where a fixed data distribution with sufficient coverage is given and the agent cannot interact with the environment in an online manner. Instead, we focus on the online RL setting in this paper.
Existing upper bounds. For RL with linear function approximation, most existing upper bounds require representation conditions stronger than realizability. For example, the algorithms in Yang and Wang [2019, 2020], Jin et al. [2020], Cai et al. [2020], Zhou et al. [2020b,a], Modi et al. [2020], Jia et al. [2020], Ayoub et al. [2020] assume that the transition model lies in the linear span of a given feature mapping, and the algorithms in Wang et al. [2019], Lattimore et al. [2020], Zanette et al. [2020] assume completeness properties of the given feature mapping. In the remaining part of this section, we mostly focus on previous upper bounds that require only realizability as the representation condition.
For deterministic systems, under the realizability assumption, Wen and Van Roy [2017] provide an algorithm that achieves polynomial sample complexity. Later, under the realizability assumption and the suboptimality gap assumption, polynomial sample complexity upper bounds are shown if the transition is deterministic [Du et al., 2020], a generative model is available [Du et al., 2019a], or a low-variance condition holds [Du et al., 2019b]. Compared to the original algorithm in Du et al. [2019b], our modified algorithm in Section 5 works under a similar low-variance condition. However, the sample complexity in Du et al. [2019b] has at least linear dependency on the number of actions, whereas our sample complexity in Section 5 has no dependency on the size of the action space. Finally, Shariff and Szepesvári [2020] obtain a polynomial upper bound under the realizability assumption when the features for all state-action pairs are inside the convex hull of a polynomial-sized coreset and the generative model is available to the agent.
3 Preliminaries
3.1 Markov decision process (MDP) and reinforcement learning
An MDP is specified by $(\mathcal{S}, \mathcal{A}, H, P, \{R_h\}_{h \in [H]})$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space with $|\mathcal{A}| = A$, $H \in \mathbb{Z}_+$ is the planning horizon, $P : \mathcal{S} \times \mathcal{A} \to \Delta_{\mathcal{S}}$ is the transition function and $R_h : \mathcal{S} \times \mathcal{A} \to \Delta_{\mathbb{R}}$ is the reward distribution. Throughout the paper, we occasionally abuse notation and use a scalar $a$ to denote the single-point distribution at $a$.
A (stochastic) policy takes the form $\pi = \{\pi_h\}_{h \in [H]}$, where each $\pi_h : \mathcal{S} \to \Delta_{\mathcal{A}}$ assigns a distribution over actions to each state. We assume that the initial state is drawn from a fixed distribution, i.e. $s_1 \sim \mu$. Starting from the initial state, a policy $\pi$ induces a random trajectory $s_1, a_1, r_1, \dots, s_H, a_H, r_H$ via the process $a_h \sim \pi_h(s_h)$, $r_h \sim R_h(\cdot \mid s_h, a_h)$ and $s_{h+1} \sim P(\cdot \mid s_h, a_h)$. For a policy $\pi$, we denote the distribution of $s_h$ in its induced trajectory by $D^\pi_h$.
Given a policy π, the Q-function (action-value function) is defined as
$$Q^\pi_h(s, a) := \mathbb{E}\Big[\sum_{h'=h}^{H} r_{h'} \,\Big|\, s_h = s, a_h = a, \pi\Big],$$

while $V^\pi_h(s) := \mathbb{E}_{a \sim \pi_h(s)}[Q^\pi_h(s, a)]$. We denote the optimal policy by $\pi^*$, and the associated optimal Q-function and value function by $Q^*$ and $V^*$ respectively. Note that $Q^*$ and $V^*$ can also be defined via the Bellman optimality equation (with the convention $V^*_{H+1}(s) = 0$ for all $s \in \mathcal{S}$):

$$V^*_h(s) = \max_{a \in \mathcal{A}} Q^*_h(s, a), \qquad Q^*_h(s, a) = \mathbb{E}\big[R_h(s, a) + V^*_{h+1}(s_{h+1}) \,\big|\, s_h = s, a_h = a\big].$$
The online RL setting. In this paper, we aim to prove lower bound and upper bound in the online RL setting. In this setting, in each episode, the agent interacts with the unknown environment using a policy and observes rewards and the next states. We remark that the hardness result by Weisz et al. [2020] operates in the setting where a generative model is available to the agent so that the agent can transit to any state. Also, it is known that with a generative model, under the linear realizability assumption plus the suboptimality gap assumption, one can find a near-optimal policy with polynomial number of samples (see Section C in Du et al. [2019a] for a formal argument).
3.2 Linear Q? function approximation
When the state space is large or infinite, structure on the state space is necessary for efficient reinforcement learning. In this work we consider linear function approximation. Specifically, there exists a feature map $\phi : \mathcal{S} \times \mathcal{A} \to \mathbb{R}^d$, and we use linear functions of $\phi$ to represent Q-functions of the MDP. To ensure that such function approximation is viable, we assume that the optimal Q-function is realizable.

Assumption 1 (Realizability). For all $h \in [H]$, there exists $\theta^*_h \in \mathbb{R}^d$ such that for all $(s, a) \in \mathcal{S} \times \mathcal{A}$, $Q^*_h(s, a) = \phi(s, a)^\top \theta^*_h$.
This assumption is widely used in the existing reinforcement learning and contextual bandit literature [Du et al., 2019b, Foster and Rakhlin, 2020]. However, even for linear function approximation, realizability alone is not sufficient for sample-efficient reinforcement learning [Weisz et al., 2020]. In this work, we also impose the regularity condition that $\|\theta^*_h\|_2 = O(1)$ and $\|\phi(s, a)\|_2 = O(1)$, which can always be achieved via rescaling.
Another assumption that we will use is that the minimum suboptimality gap is bounded below. As mentioned in the introduction, this assumption is common in the bandit and reinforcement learning literature.

Assumption 2 (Minimum Gap). For any state $s \in \mathcal{S}$ and action $a \in \mathcal{A}$, the suboptimality gap is defined as $\Delta_h(s, a) := V^*_h(s) - Q^*_h(s, a)$. We assume that $\min_{h \in [H], s \in \mathcal{S}, a \in \mathcal{A}} \{\Delta_h(s, a) : \Delta_h(s, a) > 0\} \geq \Delta_{min}$.
4 Hard Instance with Constant Suboptimality Gap
We now present our main hardness result:

Theorem 1. Consider an arbitrary online RL algorithm that takes the feature mapping $\phi : \mathcal{S} \times \mathcal{A} \to \mathbb{R}^d$ as input. In the online RL setting, there exists an MDP with a feature mapping $\phi$ satisfying Assumption 1 and Assumption 2 with $\Delta_{min} = \Omega(1)$, such that the algorithm requires $\min\{2^{\Omega(d)}, 2^{\Omega(H)}\}$ samples to find a policy $\pi$ with
$$\mathbb{E}_{s_1 \sim \mu} V^\pi(s_1) \geq \mathbb{E}_{s_1 \sim \mu} V^*(s_1) - 0.05$$
with probability 0.1.
The remainder of this section provides the construction of a hard family of MDPs where $Q^*$ is linearly realizable and has constant suboptimality gap, and where it takes exponentially many samples to learn a near-optimal policy. Each of these hard MDPs can roughly be seen as a "leaking complete graph" (see the detailed transition probabilities below). Information about the optimal policy can only be gained by: (1) taking the optimal action; (2) reaching a non-terminal state at level $H$. We will show that when there are exponentially many actions, both events happen with negligible probability unless exponentially many trajectories are played.
4.1 Construction of the MDP family
In this section we describe the construction of the hard instance (the hard MDP family) in detail. Let $m$ be an integer to be determined. The state space is $\{\bar{1}, \dots, \bar{m}, f\}$. The special state $f$ is called the terminal state. At state $\bar{i}$, the set of available actions is $[m] \setminus \{i\}$; at the terminal state $f$, the set of available actions is $[m - 1]$. (For simplicity we allow different states to have different sets of available actions; the Supplementary Material provides another construction where all states share the same action set.) In other words, there are $m - 1$ actions available at each state. Each MDP in this family is specified by an index $a^* \in [m]$ and denoted by $\mathcal{M}_{a^*}$; in other words, there are $m$ MDPs in this family.
In order to construct the MDP family, we first find a set of approximately orthogonal vectors by leveraging the Johnson-Lindenstrauss lemma [Johnson and Lindenstrauss, 1984].

Lemma 1 (Johnson-Lindenstrauss). For any $\gamma > 0$, if $m \leq \exp(\tfrac{1}{8}\gamma^2 d')$, there exist $m$ unit vectors $\{v_1, \dots, v_m\}$ in $\mathbb{R}^{d'}$ such that for all $i, j \in [m]$ with $i \neq j$, $|\langle v_i, v_j \rangle| \leq \gamma$.
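In practice, random unit vectors already have pairwise inner products of order $1/\sqrt{d}$, so a probabilistic construction suffices. Here is a numpy sketch (illustrative, not the construction used in the proof):

```python
import numpy as np

def near_orthogonal_vectors(m, d, gamma=0.25, seed=0, max_tries=100):
    """Sample m random unit vectors in R^d and verify |<v_i, v_j>| <= gamma.

    Random directions have inner products of order 1/sqrt(d), so this succeeds
    with high probability in the regime m <= exp(gamma^2 d / 8) of Lemma 1.
    """
    rng = np.random.default_rng(seed)
    for _ in range(max_tries):
        V = rng.standard_normal((m, d))
        V /= np.linalg.norm(V, axis=1, keepdims=True)    # project onto the sphere
        G = np.abs(V @ V.T)
        np.fill_diagonal(G, 0.0)
        if G.max() <= gamma:
            return V
    raise RuntimeError("no near-orthogonal set found; increase d or max_tries")
```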
We will set $\gamma = \frac{1}{4}$ and $m = \lfloor \exp(\frac{1}{8}\gamma^2 d) \rfloor$. By Lemma 1, we can find such a set of $d$-dimensional unit vectors $\{v_1, \dots, v_m\}$. For clarity of presentation, we will use $v_i$ and $v(i)$ interchangeably. The construction of $\mathcal{M}_{a^*}$ is specified below.
Features. The feature map, which maps state-action pairs to $d$-dimensional vectors, is defined as
$$\phi(a_1, a_2) := \big(\langle v(a_1), v(a_2) \rangle + 2\gamma\big) \cdot v(a_2), \quad \forall a_1, a_2 \in [m],\ a_1 \neq a_2,$$
$$\phi(f, \cdot) := 0.$$
Note that the feature map is independent of $a^*$ and is shared across the MDP family.
Rewards. For $1 \leq h < H$, the rewards are defined as
$$R_h(a_1, a^*) := \langle v(a_1), v(a^*) \rangle + 2\gamma,$$
$$R_h(a_1, a_2) := -2\gamma\big[\langle v(a_1), v(a_2) \rangle + 2\gamma\big], \quad (a_2 \neq a^*,\ a_2 \neq a_1)$$
$$R_h(f, \cdot) := 0.$$
For $h = H$, $R_H(s, a) := \langle \phi(s, a), v(a^*) \rangle$ for every state-action pair.
Transitions. The initial state distribution $\mu$ is set as the uniform distribution over $\{\bar{1}, \dots, \bar{m}\}$. The transition probabilities are set as follows:
$$\Pr[f \mid a_1, a^*] = 1,$$
$$\Pr[\,\cdot \mid a_1, a_2] = \begin{cases} a_2 : & \langle v(a_1), v(a_2) \rangle + 2\gamma \\ f : & 1 - \langle v(a_1), v(a_2) \rangle - 2\gamma \end{cases} \quad (a_2 \neq a^*,\ a_2 \neq a_1),$$
$$\Pr[f \mid f, \cdot] = 1.$$
After taking action $a_2$, the next state is either $\bar{a}_2$ or $f$. Thus this MDP looks roughly like a "leaking complete graph": starting from a state $\bar{a}$, it is possible to visit any other state (except for $\bar{a}^*$); however, there is always at least $1 - 3\gamma$ probability of going to the terminal state $f$. The transition probabilities are indeed valid, because
$$0 < \gamma \leq \langle v(a_1), v(a_2) \rangle + 2\gamma \leq 3\gamma < 1.$$
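To make the dynamics concrete, here is a minimal numpy sketch of one transition of $\mathcal{M}_{a^*}$ for a level $h < H$; names and interfaces are illustrative, and the terminal-level reward at $h = H$ is not covered:

```python
import numpy as np

def step(state, action, V, a_star, gamma=0.25, rng=None):
    """One transition of M_{a*} at a level h < H ('f' denotes the terminal state).

    V is the (m, d) array of near-orthogonal unit vectors; state and action are
    indices in [m] with state != action. At h = H the reward is instead
    <phi(s, a), v(a*)>, which this sketch does not cover.
    """
    rng = rng or np.random.default_rng()
    if state == 'f':
        return 'f', 0.0                                   # absorbing, zero reward
    p = float(V[state] @ V[action]) + 2 * gamma           # "leak" probability <= 3*gamma
    if action == a_star:
        return 'f', p                                     # R_h(a1, a*) = <v(a1), v(a*)> + 2*gamma
    next_state = int(action) if rng.random() < p else 'f'
    return next_state, -2 * gamma * p                     # R_h(a1, a2) = -2*gamma*p
```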
We now verify that realizability, i.e. Assumption 1, is satisfied. In particular, we claim the following.
Lemma 2. In the MDP $\mathcal{M}_{a^*}$, for all $h \in [H]$ and any state-action pair $(s, a)$, $Q^*_h(s, a) = \langle \phi(s, a), v(a^*) \rangle$.
The lemma can be proved via induction, with the hypothesis being that for all $a_1 \in [m]$ and $a_2 \neq a_1$,
$$Q^*_h(a_1, a_2) = \big(\langle v(a_1), v(a_2) \rangle + 2\gamma\big) \cdot \langle v(a_2), v(a^*) \rangle, \qquad (1)$$
and that for all $a_1 \neq a^*$,
$$V^*_h(a_1) = Q^*_h(a_1, a^*) = \langle v(a_1), v(a^*) \rangle + 2\gamma. \qquad (2)$$
From Eq. (1) and (2), it is easy to see that at a state $\bar{a}_1$ with $a_1 \neq a^*$, for $a_2 \neq a^*$, the suboptimality gap is
$$\Delta_h(a_1, a_2) := V^*_h(a_1) - Q^*_h(a_1, a_2) > \gamma - 3\gamma^2 \geq \tfrac{1}{4}\gamma.$$
Thus in this MDP, Assumption 2 is satisfied with $\Delta_{min} \geq \frac{1}{4}\gamma = \Omega(1)$. (Here we ignore the terminal state $f$ and the essentially unreachable state $\bar{a}^*$ for simplicity; this issue is handled rigorously in the Supplementary Material.)
4.2 The information-theoretic argument
Now we are ready to state and prove our main technical lemma.

Lemma 3. For any algorithm, there exists $a^* \in [m]$ such that in order to output $\pi$ with
$$\mathbb{E}_{s_1 \sim \mu} V^\pi(s_1) \geq \mathbb{E}_{s_1 \sim \mu} V^*(s_1) - 0.05$$
with probability at least 0.1 for $\mathcal{M}_{a^*}$, the number of samples required is $2^{\Omega(\min\{d, H\})}$.
We provide a proof sketch for the lower bound below. The full proof can be found in the Supplementary Material. Our main result, Theorem 1, is a direct consequence of Lemma 3.
Proof sketch. Observe that the feature map of $\mathcal{M}_{a^*}$ does not depend on $a^*$, and that for $h < H$ and $a_2 \neq a^*$, the reward $R_h(a_1, a_2)$ also contains no information about $a^*$. The transition probabilities are likewise independent of $a^*$, unless the action $a^*$ is taken. Moreover, the reward at state $f$ is always 0. Thus, to receive information about $a^*$, the agent either needs to take the action $a^*$, or be at a non-terminal state at the final time step ($h = H$).
However, note that the probability of remaining at a non-terminal state at the next layer is at most
$$\sup_{a_1 \neq a_2} \langle v(a_1), v(a_2) \rangle + 2\gamma \leq 3\gamma \leq \tfrac{3}{4}.$$
Thus for any algorithm, $\Pr[s_H \neq f] \leq (\frac{3}{4})^H$, which is exponentially small. In other words, any algorithm that does not know $a^*$ either needs to "be lucky" so that $s_H \neq f$, or needs to take $a^*$ "by accident". Since the number of actions is $m = 2^{\Theta(d)}$, neither event can happen with constant probability unless the number of episodes is exponential in $\min\{d, H\}$. In order to make this claim rigorous, we can construct a reference MDP $\mathcal{M}_0$ as follows. The state space, action space, and features of $\mathcal{M}_0$ are the same as those of $\mathcal{M}_{a^*}$. The transitions are defined as follows:
$$\Pr[\,\cdot \mid a_1, a_2] = \begin{cases} a_2 : & \langle v(a_1), v(a_2) \rangle + 2\gamma \\ f : & 1 - \langle v(a_1), v(a_2) \rangle - 2\gamma \end{cases} \quad (\forall a_1, a_2 \text{ s.t. } a_1 \neq a_2),$$
$$\Pr[f \mid f, \cdot] = 1.$$
The rewards are defined as follows:
$$R_h(a_1, a_2) := -2\gamma\big[\langle v(a_1), v(a_2) \rangle + 2\gamma\big] \quad (\forall a_1, a_2 \text{ s.t. } a_1 \neq a_2),$$
$$R_h(f, \cdot) := 0.$$
Note that $\mathcal{M}_0$ is identical to $\mathcal{M}_{a^*}$, except when $a^*$ is taken, or when a trajectory ends at a non-terminal state. Since the latter event happens with exponentially small probability, we can show that for any algorithm, the probability of taking $a^*$ in $\mathcal{M}_{a^*}$ is close to the probability of taking $a^*$ in $\mathcal{M}_0$. Since $\mathcal{M}_0$ is independent of $a^*$, unless an exponential number of samples is used, for any algorithm there exists $a^* \in [m]$ such that the probability of taking $a^*$ in $\mathcal{M}_0$ is $o(1)$. It then follows that the probability of taking $a^*$ in $\mathcal{M}_{a^*}$ is $o(1)$. Since $a^*$ is the optimal action at every state, such an algorithm cannot output a near-optimal policy for $\mathcal{M}_{a^*}$.
5 Upper Bounds under Further Assumptions
Theorem 1 suggests that Assumption 1 and Assumption 2 are not sufficient for sample-efficient RL when the number of actions can be exponential, and that additional assumptions are needed to achieve polynomial sample complexity. One style of assumption posits a global representation property of the features, such as completeness [Zanette et al., 2020].
In this section, we consider two assumptions on additional structure of the transitions of the MDP, rather than on the feature representation, that enable good rates for linear regression with sparse bias. The first condition is a variant of the low variance condition in Du et al. [2019b].

Assumption 3 (Low variance condition). There exists a constant $1 \leq C_{var} < \infty$ such that for any $h \in [H]$ and any policy $\pi$,
$$\mathbb{E}_{s \sim D^\pi_h}\big[|V^\pi(s) - V^*(s)|^2\big] \leq C_{var} \cdot \big(\mathbb{E}_{s \sim D^\pi_h}\big[|V^\pi(s) - V^*(s)|\big]\big)^2.$$
The second assumption is that the feature distribution is hypercontractive.

Assumption 4. There exists a constant $1 \leq C_{hyper} < \infty$ such that for any $h \in [H]$ and any policy $\pi$, the distribution of $\phi(s, a)$ with $(s, a) \sim D^\pi_h$ is $(C_{hyper}, 4)$-hypercontractive. In other words, for all $\pi$, all $h \in [H]$ and all $v \in \mathbb{R}^d$,
$$\mathbb{E}_{(s,a) \sim D^\pi_h}\big[(\phi(s, a)^\top v)^4\big] \leq C_{hyper} \cdot \big(\mathbb{E}_{(s,a) \sim D^\pi_h}\big[(\phi(s, a)^\top v)^2\big]\big)^2.$$
Intuitively, hypercontractivity characterizes the anti-concentration of a distribution. A broad class of distributions are hypercontractive with $C_{hyper} = O(1)$, including Gaussian distributions (of arbitrary covariance matrices), uniform distributions over the hypercube and sphere, and strongly log-concave distributions [Kothari and Steurer, 2017]. Hypercontractivity has previously been used for outlier-robust linear regression [Klivans et al., 2018, Bakshi and Prasad, 2020] and moment estimation [Kothari and Steurer, 2017].
We show that under Assumptions 1, 2 and 3, or Assumptions 1, 2 and 4, a modified version of the Difference Maximization Q-learning (DMQ) algorithm [Du et al., 2019b] is able to learn a near-optimal policy using a polynomial number of trajectories, with no dependency on the number of actions.
5.1 Optimal experiment design
Given a set of d-dimensional vectors, G-optimal experiment design aims at finding a distribution ρ over the vectors such that when sampling from this distribution, the maximum prediction variance over the set via linear regression is minimized. The following lemma on G-optimal design is a direct corollary of the Kiefer-Wolfowitz theorem [Kiefer and Wolfowitz, 1960]. Lemma 4 (Existence of G-optimal design). For any set X ⊆ Rd, there exists a distribution ρX supported on X , known as the G-optimal design, such that
$$\max_{x \in X} x^\top \big(\mathbb{E}_{z \sim \rho_X} z z^\top\big)^{-1} x \leq d.$$
Efficient algorithms for finding such a distribution can be found in Todd [2016].
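One classical way to compute such a design is a Frank-Wolfe (Fedorov-Wynn) exchange scheme; a numpy sketch under the assumption of a finite feature set given as the rows of a matrix:

```python
import numpy as np

def g_optimal_design(X, iters=1000):
    """Frank-Wolfe (Fedorov-Wynn) sketch for a G-optimal design over rows of X.

    X : (n, d) matrix whose rows are the candidate feature vectors.
    Returns weights rho with max_i x_i^T Sigma(rho)^{-1} x_i driven towards d.
    """
    n, d = X.shape
    rho = np.full(n, 1.0 / n)
    for _ in range(iters):
        Sigma = X.T @ (X * rho[:, None])                           # weighted covariance
        g = np.einsum('ij,jk,ik->i', X, np.linalg.pinv(Sigma), X)  # x^T Sigma^{-1} x
        i = int(np.argmax(g))
        step = (g[i] / d - 1.0) / max(g[i] - 1.0, 1e-12)           # exact line search
        step = float(np.clip(step, 0.0, 1.0))
        rho *= 1.0 - step
        rho[i] += step
    return rho
```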
In the context of reinforcement learning, the set X corresponds to the set of all features, which is inaccessible. Instead, one can only observe one state s at a time, and choose a ∈ A based on the features {φ(s, a)}a∈A. Such a problem is closer to the distributional optimal design problem described by Ruan et al. [2020]. For our purpose, the following simple approach suffices: given a state
s, perform exploration by sampling from the G-optimal design on {φ(s, a)}a∈A. The performance of this exploration strategy is guaranteed by the following lemma, which will be used in the analysis of Algorithm 1.
Lemma 5 (Lemma 4 in Ruan et al. [2020]). For any state $s$, denote the G-optimal design over its features by $\rho_s(\cdot) \in \Delta_{\mathcal{A}}$, and the corresponding covariance matrix by $\Sigma_s := \sum_a \rho_s(a) \phi(s, a) \phi(s, a)^\top$. Given a distribution $\nu$ over states, denote the average covariance matrix by $\Sigma := \mathbb{E}_{s \sim \nu} \Sigma_s$. Then
$$\mathbb{E}_{s \sim \nu}\Big[\max_{a \in \mathcal{A}} \phi(s, a)^\top \Sigma^{-1} \phi(s, a)\Big] \leq d^2.$$
Note that the performance of this strategy is only worse by a factor of d (compared to the case where one can query all features), and has no dependency on the number of actions.
5.2 The modified DMQ algorithm
Overview. During the execution of the Difference Maximization Q-learning (DMQ) algorithm, for each level $h \in [H]$ we maintain three variables: the estimated linear coefficients $\theta_h \in \mathbb{R}^d$, a set of exploratory policies $\Pi_h$, and the empirical feature covariance matrix $\Sigma_h$ associated with $\Pi_h$. We initialize $\theta_h = 0 \in \mathbb{R}^d$, $\Sigma_h := \lambda_r I_{d \times d}$, and $\Pi_h$ as a single purely random exploration policy, i.e., $\Pi_h = \{\pi\}$ where $\pi$ chooses an action uniformly at random for all states. (We also define a special $\Pi_0$ in the same manner; the choice of $\lambda_r$ and other parameters can be found in the Supplementary Material.)
Each time we execute Algorithm 1, the goal is to update the estimated linear coefficients $\theta_h \in \mathbb{R}^d$ so that for all $\pi \in \Pi_h$, $\theta_h$ is a good estimate of $\theta^*_h$ with respect to the distribution induced by $\pi$. We run ridge regression on the data distribution induced by the policies in $\Pi_h$, with regression targets collected by invoking the greedy policy induced by $\{\theta_{h'}\}_{h' > h}$. However, there are two apparent issues with such an approach. First, for levels $h' > h$, $\theta_{h'}$ is guaranteed to achieve low estimation error only with respect to the distributions induced by the policies in $\Pi_{h'}$. It is possible that for some $\pi \in \Pi_h$, the estimation error of $\theta_{h'}$ is high for the distribution induced by $\pi$ (followed by the greedy policy). To resolve this issue, the main idea in Du et al. [2019b] is to explicitly check whether $\theta_{h'}$ also predicts well on the new distribution (see Line 5 in Algorithm 1). If not, we add the new policy to $\Pi_{h'}$ and invoke Algorithm 1 recursively. The analysis in Du et al. [2019b] upper bounds the total number of recursive calls by a potential function argument, which also gives an upper bound on the sample complexity of the algorithm.
Second, the exploratory policies Πh only induce a distribution over states at level h, and the algorithm still needs to decide an exploration strategy to choose actions at level h. To this end, the algorithm in Du et al. [2019b] explores all actions uniformly at random, and therefore the sample complexity has at least linear dependency on the number of actions. We note that similar issues also appear in the linear contextual bandit literature [Lattimore and Szepesvári, 2020, Ruan et al., 2020], and indeed our solution here is to explore by sampling from the G-optimal design over the features at a single state. As shown by Lemma 5, for all possible roll-in distributions, such an exploration strategy achieves a nice coverage over the feature space, and is therefore sufficient for eliminating the dependency on the size of the action space.
The algorithm. The formal description of the algorithm is given in Algorithm 1. The algorithm should be run by calling LearnLevel on input h = 0.
Here, for a policy $\pi_h \in \Pi_h$, the associated exploratory policy $\tilde\pi_h$ is defined as
$$\tilde\pi_h(s_{h'}) = \begin{cases} \pi(s_{h'}) & \text{if } h' < h \\ \text{sample from } \rho_{s_h}(\cdot) & \text{if } h' = h \\ \arg\max_a \phi(s_{h'}, a)^\top \theta_{h'} & \text{if } h' > h \end{cases} \qquad (3)$$
Here $\rho_s(\cdot)$ is the G-optimal design on the set of vectors $\{\phi(s, a)\}_{a \in \mathcal{A}}$, as defined by Lemma 4. Note that when $h = 0$, $\tilde\pi_h$ is always the greedy policy with respect to $\{\theta_h\}_{h \in [H]}$. The choice of the algorithmic parameters $(\beta, \lambda_r, \lambda_{ridge})$ can be found in the proof of Theorem 2.
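A minimal sketch of $\tilde\pi_h$ as a decision rule; `pi`, `rho`, `phi` and `theta` are illustrative callables and arrays standing in for the roll-in policy, the G-optimal design weights, the feature map and the current estimates:

```python
import numpy as np

def exploratory_action(h_cur, h_exp, s, pi, rho, theta, phi, actions, rng):
    """Decision rule of the exploratory policy pi_tilde in Eq. (3).

    pi(h, s) is the roll-in policy, rho(s, a) the G-optimal design weight,
    phi(s, a) the feature vector and theta[h] the current estimate at level h.
    """
    if h_cur < h_exp:
        return pi(h_cur, s)                       # roll in with the base policy
    if h_cur == h_exp:
        probs = np.array([rho(s, a) for a in actions])
        return actions[rng.choice(len(actions), p=probs)]
    scores = [phi(s, a) @ theta[h_cur] for a in actions]
    return actions[int(np.argmax(scores))]        # greedy on the current estimates
```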
Algorithm 1: LearnLevel(h)
Input: a level $h \in \{0, \dots, H\}$
1  for $\pi_h \in \Pi_h$ do
2      for $h' = H, H-1, \dots, h+1$ do
3          collect $N$ samples $\{(s^j_{h'}, a^j_{h'})\}_{j \in [N]}$ with $s^j_{h'} \sim D^{\tilde\pi_h}_{h'}$ and $a^j_{h'} \sim \rho_{s^j_{h'}}$ ($\tilde\pi_h$ defined in (3))
4          $\hat\Sigma_{h'} \leftarrow \frac{1}{N} \sum_{j=1}^{N} \phi(s^j_{h'}, a^j_{h'}) \phi(s^j_{h'}, a^j_{h'})^\top$
5          if $\|\Sigma_{h'}^{-1/2} \hat\Sigma_{h'} \Sigma_{h'}^{-1/2}\|_2 > \beta |\Pi_{h'}|$ then
6              $\Pi_{h'} \leftarrow \Pi_{h'} \cup \{\tilde\pi_h\}$
7              LearnLevel($h'$)
8              LearnLevel($h$)
9  if $h = 0$ then
10     output the greedy policy with respect to $\{\theta_h\}_{h \in [H]}$ and exit
11 $\Sigma_h \leftarrow \frac{\lambda_r}{|\Pi_h|} I$, $\quad w_h \leftarrow 0 \in \mathbb{R}^d$
12 for $i = 1, \dots, N|\Pi_h|$ do
13     sample $\pi$ from the uniform distribution over $\Pi_h$
14     execute $\tilde\pi_h$ (see (3)) to collect $(s^i_h, a^i_h, y_i)$, where $y_i := \sum_{h' \geq h} r^i_{h'}$ is the on-the-go reward
15     $\Sigma_h \leftarrow \Sigma_h + \frac{1}{N|\Pi_h|} \phi(s^i_h, a^i_h) \phi(s^i_h, a^i_h)^\top$
16     $w_h \leftarrow w_h + \frac{1}{N|\Pi_h|} \phi(s^i_h, a^i_h) y_i$
17 $\theta_h \leftarrow \big((\lambda_{ridge} - \frac{\lambda_r}{|\Pi_h|}) I + \Sigma_h\big)^{-1} w_h$
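Unrolling lines 11-17, the $\lambda_r$ terms cancel and the estimate reduces to an ordinary ridge regression on the on-the-go rewards; a numpy sketch with illustrative names:

```python
import numpy as np

def ridge_estimate(Phi, y, lam_ridge=1.0):
    """Ridge regression step at the end of LearnLevel (lines 11-17), where the
    lambda_r term in Sigma_h cancels against the correction in line 17.

    Phi : (n, d) features phi(s_h^i, a_h^i); y : (n,) on-the-go rewards.
    """
    n, d = Phi.shape
    A = lam_ridge * np.eye(d) + (Phi.T @ Phi) / n    # regularized empirical covariance
    b = (Phi.T @ y) / n
    return np.linalg.solve(A, b)                     # theta_h estimate
```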
5.3 Analysis
We show the following theorem regarding the modified algorithm.

Theorem 2. Assume that Assumptions 1 and 2 and one of Assumptions 3 and 4 hold. Also assume that
$$\epsilon \leq \mathrm{poly}(\Delta_{min}, 1/C_{var}, 1/d, 1/H) \quad \text{(under Assumption 3)}$$
or
$$\epsilon \leq \mathrm{poly}(\Delta_{min}, 1/C_{hyper}, 1/d, 1/H) \quad \text{(under Assumption 4)}.$$
Let $\mu$ be the initial state distribution. Then with probability $1 - \epsilon$, running Algorithm 1 on input 0 returns a policy $\pi$ which satisfies $\mathbb{E}_{s_1 \sim \mu} V^\pi(s_1) \geq \mathbb{E}_{s_1 \sim \mu} V^*(s_1) - \epsilon$ using $\mathrm{poly}(1/\epsilon)$ trajectories.
Note that neither the algorithm nor the theorem has any dependence on the number of actions $A$. The proof of the theorem under Assumption 3 is largely based on the analysis in Du et al. [2019b]. The largest difference is that we use Lemma 5 instead of the original union bound argument when controlling $\Pr\big[\sup_a |\theta_h^\top \phi(s, a) - Q^*_h(s, a)| > \frac{\gamma}{2}\big]$. The proof under Assumption 4 relies on a novel analysis of least squares regression under hypercontractivity. The full proof can be found in the Supplementary Material.
6 Discussion
Exponential separation between the generative model and the online setting. When a generative model (also known as a simulator) is available, Assumption 1 and Assumption 2 are sufficient for designing an algorithm with poly(1/ε, 1/∆min, d, H) sample complexity [Du et al., 2019a, Theorem C.1]. As shown by Theorem 1, under the standard online RL setting (i.e., without access to a generative model), the sample complexity is lower bounded by 2Ω(min{d,H}) when ∆min = Θ(1), under the same set of assumptions. This implies that the generative model is exponentially more powerful than the standard online RL setting.
Although the generative model is conceptually much stronger than the online RL model, previously little was known about the extent to which the former is more powerful. In tabular RL, for instance, the known sample complexity bounds with or without access to generative models are nearly the same [Zhang et al., 2020, Agarwal et al., 2020]. To the best of our knowledge, the only existing example of such a separation is shown by Wang et al. [2020a] under the following set of conditions: (i)
deterministic system; (ii) realizability (Assumption 1); (iii) no reward feedback (a.k.a. reward-free exploration). In comparison, our separation result holds under fewer restrictions (it allows stochasticity) and for the usual RL environment (instead of reward-free exploration), and is thus far more natural.
Connecting Theorem 1 and Theorem 2. Our hardness result in Theorem 1 shows that under Assumption 1 and Assumption 2, any algorithm requires an exponential number of samples to find a near-optimal policy, and therefore sample-efficient RL is impossible without further assumptions (e.g., Assumption 3 or 4 assumed in Theorem 2). Indeed, Theorem 1 and Theorem 2 imply that the coefficients Cvar and Chyper in Assumptions 3 and 4 are at least exponential for the hard MDP family used in Theorem 1, which can also be verified easily.
Open problems. The first open problem is whether a sample complexity lower bound under Assumption 1 can be shown with a polynomial number of actions. This would further rule out poly(A, d, H)-style upper bounds, which are still possible with the current results. Another open problem is whether Assumption 3 or 4 can be replaced by, or understood as, more natural characterizations of the complexity of the MDP.
Acknowledgments and Disclosure of Funding
The authors would like to thank Kefan Dong and Dean Foster for helpful discussions. Sham M. Kakade acknowledges funding from the ONR award N00014-18-1-2247 and from the National Science Foundation under award #CCF-1703574. Ruosong Wang was supported in part by the NSF IIS1763562, US Army W911NF1920104, and ONR Grant N000141812861.

1. What is the main contribution of the paper regarding online RL with linear realizability?
2. What are the strengths and weaknesses of the paper's lower bound construction compared to Weisz et al. 2020's construction?
3. What are the necessary assumptions for the upper bound, and how do they compare to Du et al 2019b's assumptions?
4. Are there any inaccuracies or missing details in the paper's arguments, particularly regarding the upper bound?
5. How does the paper's result relate to sample-efficient RL in other settings, such as RL with a generative model?
6. Can the upper bound work under a near-realizability assumption, and how would this affect the result?
Summary Of The Paper
This paper considers online RL with linear realizability of Q* and a suboptimality gap assumption. It shows an exponential sample complexity lower bound holds for this setting, but under an additional assumption of either (a) or (b), there is an algorithm (modified DMQ) that achieves a polynomial upper bound, showing sample-efficient RL is possible in this setting. The additional assumption required for this is either (a) low variance of value differences between some policy pi and the optimal policy, or (b) hypercontractivity of the distribution of features encountered along any policy.
The negative result is a modification of Weisz et al 2020’s construction of hard MDPs into a “leaky” graph that allows for a large suboptimality gap. The positive result is similar in algorithm and analysis to Du et al 2019b, with the novelties that (1) a number-of-actions factor in the sample complexity is avoided by sampling actions from an optimal design, and (2) the method is shown to work for assumption (b).
(1) is important to make the separation between the lower and upper bounds clear as the lower bound uses exponentially many actions. As the authors point out, these results also imply that there is an exponential separation between online RL and RL with a generative model, as in the latter, sample-efficient learning is possible in the same setting as this paper’s lower bound.
Review
LOWER BOUND:
The construction for the bound seems to be largely inspired by Weisz et al. 2020’s construction: the crucial difference to achieve the suboptimality gap is to construct a "leaky” graph instead of downscaling the features. This way, the downscaling is essentially achieved by the transitions. The two techniques allow for a similar information theoretic argument to finish the bound. The result is both stronger and weaker: the suboptimality gap is a stronger assumption, but the lower bound only works in the online RL setting (as opposed to the generative model). I was happy to see a hard MDP construction similar to Weisz et al 2020’s presented (in my opinion) much more cleanly.
I am missing the definition of Pr_M at appendix line 61.
I am unsure why the appendix "addressing footnote 3" is necessary: could we not just bijectively remap the m-1 actions of each state to [m-1]?
UPPER BOUND:
I would like to see some assumptions clarified here, as there might be inaccuracies. Below is my best attempt at understanding them. Whether I am right or wrong here, the effort required to address my issues in the paper should be fairly small.
First, in line 147, I believe the precise assumption on theta* and phi's 2-norms is that they are <=1 (instead of O(1); see eg appendix line 189). I would like to see this stated as a separate assumption. The trouble with the argument that this "can always be achieved via rescaling" is that one would have to rescale the rewards, and consequently epsilon too, to achieve the same guarantee. This means that the final bound will scale polynomially with these 2-norms, instead of (what might be possible) logarithmically. I believe for this reason the assumption should be made explicit, as it is in Du et al 2019b. It would be nice for this assumption to be referenced in the proof too, (eg appendix lines 103, 189, lemma 7, etc.).
Second, in lemma 7, Du et al 2019b's relevant lemma is A.3 instead of A.2 (at least in the versions available to me), and it is stated slightly differently. I would like to see these differences proved or explained. These are:
a condition on eta<=1
|\xi|<=1 (I'm not sure what n refers to in appendix line 149)
eta^2 in the bound instead of eta
Third, Du et al 2019b uses an additional assumption, that rewards are nonnegative and the sum of rewards on any trajectory is bounded by 1. I believe this assumption is missing from this paper, but seems to be implicitly used at places like (other than possibly at lemma 7): appendix line 98 (for the value bound), and line 107 (to bound \xi almost surely; here this bound would be <=1, not sure where the 2 comes from). Both of these arguments feel somewhat unfinished to me.
For the low variance and hypercontractivity assumptions introduced at the beginning of section 5, it would be nice to see examples or descriptions of the kinds of MDPs whose value functions satisfy this assumption. The current description focuses on the value functions, not on the MDPs that might induce such value functions. Having this could help interpreting the generality of these assumptions (and hence the corresponding upper bound too).
Does the upper bound work under a near-realizability assumption (eg. ||Q*-phi^T theta*||_\infty <= something very small)?
Line 315: is known -> was known
Line 7: such a lower bound (add bound)
Appendix line 119: consider better justifying line 3 to line 4, bound of theta-theta* by including the calculation

1. What is the focus of the paper regarding lower bounds in reinforcement learning?
2. What are the strengths of the proposed approach, particularly in terms of novel ideas and clarity?
3. Do you have any concerns or questions regarding the hypercontractivity assumption in the upper bound section?
4. How does the reviewer assess the significance and impact of the presented results in the context of prior works?
5. What is the overall opinion of the reviewer regarding the suitability of the paper for publication at NeurIPS 2021?
The authors present exponential lower bounds under perfect realizability when the model must be learned online even with constant action-value function gaps. An upper bound is presented under a new hypercontractivity assumption.
Review
I thank the authors for their submission, I liked the work very much.
Strengths:
the work has some genuinely new ideas in the construction (although it leverages prior techniques like JL lemma)
it clarifies what’s achievable in the far more interesting online setting (i.e., the separation with the generative model setting)
the construction is easier than Weisz et al ’20, making it more amenable to explanation
I read the upper bound section a bit faster, but I’d like the authors to clarify a bit better the assumption about hypercontractivity (in particular, its meaning in the RL setting) in the final version of the work.
The work makes a non-trivial technical contribution, but most importantly, it presents a key new result: while Weisz et al ’20 also give MDPs that are hard to learn in the online setting considered here, the construction there is more contrived / pathological (this is by necessity). By making the setting more realistic (i.e., online), learning becomes harder and the authors leverage a less pathological construction to still get ~ e^d complexity, further supporting the idea that such hardness is more `` `realistic'.
I support the work for publication to NeurIPS 2021. |
NIPS | Title
An Exponential Lower Bound for Linearly Realizable MDP with Constant Suboptimality Gap
Abstract
A fundamental question in the theory of reinforcement learning is: suppose the optimal Q-function lies in the linear span of a given d dimensional feature mapping, is sample-efficient reinforcement learning (RL) possible? The recent and remarkable result of Weisz et al. (2020) resolves this question in the negative, providing an exponential (in d) sample size lower bound, which holds even if the agent has access to a generative model of the environment. One may hope that such a lower can be circumvented with an even stronger assumption that there is a constant gap between the optimal Q-value of the best action and that of the second-best action (for all states); indeed, the construction in Weisz et al. (2020) relies on having an exponentially small gap. This work resolves this subsequent question, showing that an exponential sample complexity lower bound still holds even if a constant gap is assumed. Perhaps surprisingly, this result implies an exponential separation between the online RL setting and the generative model setting, where sample-efficient RL is in fact possible in the latter setting with a constant gap. Complementing our negative hardness result, we give two positive results showing that provably sample-efficient RL is possible either under an additional low-variance assumption or under a novel hypercontractivity assumption.
1 Introduction
There has been substantial recent theoretical interest in understanding the means by which we can avoid the curse of dimensionality and obtain sample-efficient reinforcement learning (RL) methods [Wen and Van Roy, 2017, Du et al., 2019b,a, Wang et al., 2019, Yang and Wang, 2019, Lattimore et al., 2020, Yang and Wang, 2020, Jin et al., 2020, Cai et al., 2020, Zanette et al., 2020, Weisz et al., 2020, Du et al., 2020, Zhou et al., 2020b,a, Modi et al., 2020, Jia et al., 2020, Ayoub et al., 2020]. Here, the extant body of literature largely focuses on sufficient conditions for efficient reinforcement learning. Our understanding of what are the necessary conditions for efficient reinforcement learning is far more limited. With regards to the latter, arguably, the most natural assumption is linear realizability: we assume that the optimal Q-function lies in the linear span of a given feature map. The goal is to the obtain polynomial sample complexity under this linear realizability assumption alone.
This “linear Q∗ problem” was a major open problem (see Du et al. [2019a] for discussion), and a recent hardness result by Weisz et al. [2020] provides a negative answer. In particular, the result shows that even with access to a generative model, any algorithm requires an exponential number
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
of samples (in the dimension d of the feature mapping) to find a near-optimal policy, provided the action space has exponential size.
With this question resolved, one may naturally ask what is the source of hardness for the construction in Weisz et al. [2020] and if there are additional assumptions that can serve to bypass the underlying source of this hardness. Here, arguably, it is most natural to further examine the suboptimality gap in the problem, which is the gap between the optimal Q-value of the best action and that of the second-best action; the construction in Weisz et al. [2020] does in fact fundamentally rely on having an exponentially small gap. Instead, if we assume the gap is lower bounded by a constant for all states, we may hope that the problem becomes substantially easier since with a finite number of samples (appropriately obtained), we can identify the optimal policy itself (i.e., the gap assumption allows us to translate value-based accuracy to the identification of the optimal policy itself). In fact, this intuition is correct in the following sense: with a generative model, it is not difficult to see that polynomial sample complexity is possible under the linear realizability assumption plus the suboptimality gap assumption, since the suboptimality gap assumption allows us to easily identify an optimal action for all states, thus making the problem tractable (see Section C in Du et al. [2019a] for a formal argument).
More generally, the suboptimality gap assumption is widely discussed in the bandit literature [Dani et al., 2008, Audibert and Bubeck, 2010, Abbasi-Yadkori et al., 2011] and the reinforcement learning literature [Simchowitz and Jamieson, 2019, Yang et al., 2020] to obtain fine-grained sample complexity upper bounds. More specifically, under the realizability assumption and the suboptimality gap assumption, it has been shown that polynomial sample complexity is possible if the transition is nearly deterministic [Du et al., 2019b, 2020] (also see Wen and Van Roy [2017]). However, it remains unclear whether the suboptimality gap assumption is sufficient to bypass the hardness result in Weisz et al. [2020], or the same exponential lower bound still holds even under the suboptimality gap assumption, when the transition could be stochastic and the generative model is unavailable. For the construction in Weisz et al. [2020], at the final stage, the gap between the value of the optimal action and its non-optimal counterparts will be exponentially small, and therefore the same construction does not imply an exponential sample complexity lower bound under the suboptimality gap assumption.
Our contributions. In this work, we significantly strengthen the hardness result in Weisz et al. [2020]. In particular, we show that in the online RL setting (where a generative model is unavailable) with exponential-sized action space, the exponential sample complexity lower bound still holds even under the suboptimality gap assumption. Complementing our hardness result, we show that under the realizability assumption and the suboptimality gap assumption, our hardness result can be bypassed if one further assumes the low variance assumption in Du et al. [2019b] 1, or a hypercontractivity assumption. Hypercontractive distributions include Gaussian distributions (with arbitrary covariance matrices), uniform distributions over hypercubes and strongly log-concave distributions [Kothari and Steinhardt, 2017]. This condition has been shown powerful for outlier-robust linear regression [Kothari and Steurer, 2017], but has not yet been introduced for reinforcement learning with linear function approximation.
Our results have several interesting implications, which we discuss in detail in Section 6. Most notably, our results imply an exponential separation between the standard reinforcement learning setting and the generative model setting. Moreover, our construction enjoys greater simplicity, making it more suitable to be generalized for other RL problems or to be presented for pedagogical purposes.
1We note that the sample complexity of the algorithm in Du et al. [2019b] has at least linear dependency on the number of actions, which is not sufficient for bypassing our hardness results which assumes an exponential-sized action space.
2 Related work
Previous hardness results. Existing exponential lower bounds in RL [Krishnamurthy et al., 2016, Chen and Jiang, 2019] usually construct unstructured MDPs with an exponentially large state space. Du et al. [2019a] prove that under the approximate version of the realizability assumption, i.e., the optimal Q-function lies in the linear span of a given feature mapping approximately, any algorithm requires an exponential number of samples to find a near-optimal policy. The main idea in Du et al. [2019a] is to use the Johnson-Lindenstrauss lemma [Johnson and Lindenstrauss, 1984] to construct a large set of near-orthogonal feature vectors. Such idea is later generalized to other settings, including those in Wang et al. [2020a], Kumar et al. [2020], Van Roy and Dong [2019], Lattimore et al. [2020]. Whether the exponential lower bound still holds under the exact version of the realizability assumption is left as an open problem in Du et al. [2019a].
The above open problem is recently solved by Weisz et al. [2020]. They show that under the exact version of the realizability assumption, any algorithm requires an exponential number of samples to find a near-optimal policy assuming an exponential-sized action space. The construction in Weisz et al. [2020] also uses the Johnson-Lindenstrauss lemma to construct a large set of near-orthogonal feature vectors, with additional subtleties to ensure exact realizability.
Very recently, under the exact realizability assumption, strong lower bounds are proved in the offline setting [Wang et al., 2020b, Zanette, 2020, Amortila et al., 2020]. These work focus on the offline RL setting, where a fixed data distribution with sufficient coverage is given and the agent cannot interact with the environment in an online manner. Instead, we focus on the online RL setting in this paper.
Existing upper bounds. For RL with linear function approximation, most existing upper bounds require representation conditions stronger than realizability. For example, the algorithms in Yang and Wang [2019, 2020], Jin et al. [2020], Cai et al. [2020], Zhou et al. [2020b,a], Modi et al. [2020], Jia et al. [2020], Ayoub et al. [2020] assume that the transition model lies in the linear span of a given feature mapping, and the algorithms in Wang et al. [2019], Lattimore et al. [2020], Zanette et al. [2020] assume completeness properties of the given feature mapping. In the remaining part of this section, we mostly focus on previous upper bounds that require only realizability as the representation condition.
For deterministic systems, under the realizability assumption, Wen and Van Roy [2017] provide an algorithm that achieves polynomial sample complexity. Later, under the realizability assumption and the suboptimality gap assumption, polynomial sample complexity upper bounds are shown if the transition is deterministic [Du et al., 2020], a generative model is available [Du et al., 2019a], or a low-variance condition holds [Du et al., 2019b]. Compared to the original algorithm in Du et al. [2019b], our modified algorithm in Section 5 works under a similar low-variance condition. However, the sample complexity in Du et al. [2019b] has at least linear dependency on the number of actions, whereas our sample complexity in Section 5 has no dependency on the size of the action space. Finally, Shariff and Szepesvári [2020] obtain a polynomial upper bound under the realizability assumption when the features for all state-action pairs are inside the convex hull of a polynomial-sized coreset and the generative model is available to the agent.
3 Preliminaries
3.1 Markov decision process (MDP) and reinforcement learning
An MDP is specified by (S,A, H, P, {Rh}h∈[H]), where S is the state space, A is the action space with |A| = A, H ∈ Z+ is the planning horizon, P : S × A → ∆S is the transition function and Rh : S ×A → ∆R is the reward distribution. Throughout the paper, we occasionally abuse notation and use a scalar a to denote the single-point distribution at a.
A (stochastic) policy takes the form π = {πh}h∈[H], where each πh : S → ∆A assigns a distribution over actions for each state. We assume that the initial state is drawn from a fixed distribution, i.e. s1 ∼ µ. Starting from the initial state, a policy π induces a random trajectory s1, a1, r1, · · · , sH , aH , rH via the process ah ∼ πh(·), rh ∼ R(·|sh, ah) and sh+1 ∼ P (·|sh, ah). For a policy π, denote the distribution of sh in its induced trajectory by Dπh .
Given a policy π, the Q-function (action-value function) is defined as
Qπh(s, a) := E
[ H∑
h′=h
rh′ |sh = s, ah = a, π ] ,
while V πh (s) := Ea∼πh(s)[Qπh(s, a)]. We denote the optimal policy by π∗, and the associated optimal Q-function and value function by Q∗ and V ∗ respectively. Note that Q∗ and V ∗ can also be defined via the Bellman optimality equation2:
V ∗h (s) = max a∈A Q∗h(s, a), Q∗h(s, a) = E [ Rh(s, a) + V ∗ h+1(sh+1)|sh = s, ah = a ] .
The online RL setting. In this paper, we aim to prove lower bound and upper bound in the online RL setting. In this setting, in each episode, the agent interacts with the unknown environment using a policy and observes rewards and the next states. We remark that the hardness result by Weisz et al. [2020] operates in the setting where a generative model is available to the agent so that the agent can transit to any state. Also, it is known that with a generative model, under the linear realizability assumption plus the suboptimality gap assumption, one can find a near-optimal policy with polynomial number of samples (see Section C in Du et al. [2019a] for a formal argument).
3.2 Linear Q? function approximation
When the state space is large or infinite, structures on the state space are necessary for efficient reinforcement learning. In this work we consider linear function approximation. Specifically, there exists a feature map φ : S ×A → Rd, and we will use linear functions of φ to represent Q-functions of the MDP. To ensure that such function approximation is viable, we assume that the optimal Q-function is realizable. Assumption 1 (Realizability). For all h ∈ [H], there exists θ∗h ∈ Rd such that for all (s, a) ∈ S ×A, Q∗h(s, a) = φ(s, a) >θ∗h.
This assumption is widely used in existing reinforcement learning and contextual bandit literature [Du et al., 2019b, Foster and Rakhlin, 2020]. However, even for linear function approximation, realizability alone is not sufficient for sample-efficient reinforcement learning [Weisz et al., 2020]. In this work, we also impose the regularity condition that ‖θ∗h‖2 = O(1) and ‖φ(s, a)‖2 = O(1), which can always be achieved via rescaling.
Another assumption that we will use is that the minimum suboptimality gap is lower bounded. As mentioned in the introduction, this assumption is common in the bandit and reinforcement learning literature.
Assumption 2 (Minimum Gap). For any state s ∈ S and action a ∈ A, the suboptimality gap is defined as $\Delta_h(s, a) := V^*_h(s) - Q^*_h(s, a)$. We assume that $\min_{h \in [H], s \in \mathcal{S}, a \in \mathcal{A}} \{\Delta_h(s, a) : \Delta_h(s, a) > 0\} \ge \Delta_{\min}$.
4 Hard Instance with Constant Suboptimality Gap
We now present our main hardness result:
Theorem 1. Consider an arbitrary online RL algorithm that takes the feature mapping φ : S × A → R^d as input. In the online RL setting, there exists an MDP with a feature mapping φ satisfying Assumption 1 and Assumption 2 with Δ_min = Ω(1), such that the algorithm requires $\min\{2^{\Omega(d)}, 2^{\Omega(H)}\}$ samples to find a policy π with $\mathbb{E}_{s_1 \sim \mu} V^\pi(s_1) \ge \mathbb{E}_{s_1 \sim \mu} V^*(s_1) - 0.05$ with probability 0.1.
The remainder of this section provides the construction of a hard family of MDPs where Q∗ is linearly realizable and has constant suboptimality gap, and where it takes exponential samples to learn a near-optimal policy. Each of these hard MDPs can roughly be seen as a "leaking complete graph" (see the detailed transition probabilities below). Information about the optimal policy can only be gained by: (1) taking the optimal action; (2) reaching a non-terminal state at level H. We will show that when there are exponentially many actions, both events happen with negligible probability unless exponentially many trajectories are played.
²We additionally define $V_{H+1}(s) = 0$ for all s ∈ S.
4.1 Construction of the MDP family
In this section we describe the construction of the hard instance (the hard MDP family) in detail. Let m be an integer to be determined. The state space is {1̄, · · · , m̄, f}. The special state f is called the terminal state. At state ī, the set of available actions is [m] \ {i}; at the terminal state f, the set of available actions is [m − 1].³ In other words, there are m − 1 actions available at each state. Each MDP in this family is specified by an index a∗ ∈ [m] and denoted by M_{a∗}. In other words, there are m MDPs in this family.
In order to construct the MDP family, we first find a set of approximately orthogonal vectors by leveraging the Johnson-Lindenstrauss lemma [Johnson and Lindenstrauss, 1984].
Lemma 1 (Johnson-Lindenstrauss). For any γ > 0, if $m \le \exp(\frac{1}{8}\gamma^2 d')$, then there exist m unit vectors {v₁, · · · , v_m} in $\mathbb{R}^{d'}$ such that for all i, j ∈ [m] with i ≠ j, $|\langle v_i, v_j \rangle| \le \gamma$.
We will set γ = 1/4 and $m = \lfloor \exp(\frac{1}{8}\gamma^2 d) \rfloor$. By Lemma 1, we can find such a set of d-dimensional unit vectors {v₁, · · · , v_m}. For clarity of presentation, we will use v_i and v(i) interchangeably. The construction of M_{a∗} is specified below.
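A standard probabilistic way to realize Lemma 1 is to draw random unit vectors, which are nearly orthogonal in high dimension with high probability. The sketch below (illustrative; the bound holds with high probability over the draw, not deterministically) samples m Gaussian unit vectors and checks the pairwise inner products against γ = 1/4.

```python
import numpy as np

rng = np.random.default_rng(0)
d, gamma = 512, 0.25
m = int(np.exp(gamma**2 * d / 8))              # m = floor(exp(gamma^2 d / 8)), here 54

V = rng.standard_normal((m, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)  # m random unit vectors in R^d

G = np.abs(V @ V.T)                            # |<v_i, v_j>| for all pairs
np.fill_diagonal(G, 0.0)                       # ignore the trivial diagonal <v_i, v_i> = 1
print(f"m = {m}, max off-diagonal |<v_i, v_j>| = {G.max():.3f} (target <= {gamma})")
```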
Features. The feature map, which maps state-action pairs to d-dimensional vectors, is defined as
$$\phi(a_1, a_2) := \big(\langle v(a_1), v(a_2)\rangle + 2\gamma\big) \cdot v(a_2), \quad \forall a_1, a_2 \in [m],\ a_1 \ne a_2,$$
$$\phi(f, \cdot) := 0.$$
Note that the feature map is independent of a∗ and is shared across the MDP family.
Rewards. For 1 ≤ h < H, the rewards are defined as
$$R_h(a_1, a^*) := \langle v(a_1), v(a^*)\rangle + 2\gamma,$$
$$R_h(a_1, a_2) := -2\gamma\big[\langle v(a_1), v(a_2)\rangle + 2\gamma\big], \quad (a_2 \ne a^*,\ a_2 \ne a_1)$$
$$R_h(f, \cdot) := 0.$$
For h = H, $r_H(s, a) := \langle \phi(s, a), v(a^*)\rangle$ for every state-action pair.
Transitions. The initial state distribution µ is set as the uniform distribution over {1̄, · · · , m̄}. The transition probabilities are set as follows:
$$\Pr[f \mid a_1, a^*] = 1,$$
$$\Pr[\,\cdot \mid a_1, a_2] = \begin{cases} a_2 : & \langle v(a_1), v(a_2)\rangle + 2\gamma \\ f : & 1 - \langle v(a_1), v(a_2)\rangle - 2\gamma \end{cases} \quad (a_2 \ne a^*,\ a_2 \ne a_1),$$
$$\Pr[f \mid f, \cdot] = 1.$$
After taking action a₂, the next state is either a₂ or f. Thus this MDP looks roughly like a "leaking complete graph": starting from state ā, it is possible to visit any other state (except for a∗); however, there is always at least 1 − 3γ probability of going to the terminal state f. The transition probabilities are indeed valid, because
$$0 < \gamma \le \langle v(a_1), v(a_2)\rangle + 2\gamma \le 3\gamma < 1.$$
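To ground the construction, here is a hedged sketch (illustrative code, not the authors' implementation; the helper name build_hard_mdp is ours) that assembles the features, the rewards for h < H, and the transition probabilities of M_{a∗} from a set of near-orthogonal unit vectors, and checks that the transition probabilities above are valid.

```python
import numpy as np

def build_hard_mdp(V, a_star, gamma=0.25):
    """Features, rewards (h < H), and transitions of M_{a*} for unit vectors V (m x d)."""
    m, _ = V.shape
    G = V @ V.T                                        # G[i, j] = <v_i, v_j>
    phi = (G + 2 * gamma)[:, :, None] * V[None, :, :]  # phi(a1, a2) = (<.,.> + 2g) v(a2)
    R = -2 * gamma * (G + 2 * gamma)                   # R_h(a1, a2) for a2 != a*
    R[:, a_star] = G[:, a_star] + 2 * gamma            # R_h(a1, a*) = <v_a1, v_a*> + 2g
    P_stay = G + 2 * gamma                             # Pr[a2 | a1, a2]; rest goes to f
    P_stay[:, a_star] = 0.0                            # taking a* always leads to f
    return phi, R, P_stay

rng = np.random.default_rng(0)
gamma = 0.25
V = rng.standard_normal((54, 512))
V /= np.linalg.norm(V, axis=1, keepdims=True)
phi, R, P_stay = build_hard_mdp(V, a_star=7, gamma=gamma)

off_diag = ~np.eye(54, dtype=bool)                     # action i is unavailable at state i
p = (V @ V.T + 2 * gamma)[off_diag]                    # candidate stay probabilities
print("transition probabilities valid:", bool(((p > 0) & (p < 1)).all()))
```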
We now verify that realizability, i.e., Assumption 1, is satisfied. In particular, we claim the following.
³Note that for simplicity we assume different states can have different sets of available actions. In the Supplementary Material we provide another construction where all states have the same set of available actions.
Lemma 2. In the MDP M_{a∗}, for all h ∈ [H] and any state-action pair (s, a), $Q^*_h(s, a) = \langle \phi(s, a), v(a^*)\rangle$.
The lemma can be proved via induction, with the hypothesis being that for all a₁ ∈ [m], a₂ ≠ a₁,
$$Q^*_h(a_1, a_2) = \big(\langle v(a_1), v(a_2)\rangle + 2\gamma\big) \cdot \langle v(a_2), v(a^*)\rangle, \qquad (1)$$
and that for all a₁ ≠ a∗,
$$V^*_h(a_1) = Q^*_h(a_1, a^*) = \langle v(a_1), v(a^*)\rangle + 2\gamma. \qquad (2)$$
From Eq. (1) and (2), it is easy to see that at state a₁ ≠ a∗, for a₂ ≠ a∗, the suboptimality gap is
$$\Delta_h(a_1, a_2) := V^*_h(a_1) - Q^*_h(a_1, a_2) > \gamma - 3\gamma^2 \ge \tfrac{1}{4}\gamma.$$
Thus in this MDP, Assumption 2 is satisfied with $\Delta_{\min} \ge \tfrac{1}{4}\gamma = \Omega(1)$.⁴
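Lemma 2 and the gap bound can be sanity-checked numerically by running backward induction on M_{a∗} and comparing Q∗_h against ⟨φ, v(a∗)⟩. Below is a hedged, self-contained sketch (ours, not the authors'; random unit vectors stand in for the JL construction, so the checks hold with high probability over the draw, and, following footnote 4, the unreachable state a∗ is excluded from the gap computation).

```python
import numpy as np

def check_realizability(V, a_star, H=6, gamma=0.25):
    m, _ = V.shape
    G = V @ V.T
    phi_dot = (G + 2 * gamma) * (V @ V[a_star])[None, :]   # <phi(a1, a2), v(a*)>
    R = -2 * gamma * (G + 2 * gamma)
    R[:, a_star] = G[:, a_star] + 2 * gamma
    P_stay = G + 2 * gamma
    P_stay[:, a_star] = 0.0                                # action a* leads to f (value 0)

    V_next = np.zeros(m)                                   # V*_{H+1} = 0; f has value 0
    max_err, min_gap = 0.0, np.inf
    for h in reversed(range(H)):
        r = phi_dot if h == H - 1 else R                   # r_H(s, a) = <phi(s, a), v(a*)>
        Q = r + P_stay * V_next[None, :]                   # next state is a2, or f (value 0)
        np.fill_diagonal(Q, -np.inf)                       # action i unavailable at state i
        max_err = max(max_err, np.abs(np.where(np.isfinite(Q), Q - phi_dot, 0.0)).max())
        V_next = Q.max(axis=1)
        gaps = V_next[:, None] - Q                         # suboptimality gaps Delta_h(s, a)
        mask = np.isfinite(Q) & (gaps > 1e-9)
        mask[a_star, :] = False                            # footnote 4: skip unreachable a*
        min_gap = min(min_gap, gaps[mask].min())
    print(f"max |Q* - <phi, v(a*)>| = {max_err:.2e}, min positive gap = {min_gap:.3f}")

rng = np.random.default_rng(0)
V = rng.standard_normal((54, 512))
V /= np.linalg.norm(V, axis=1, keepdims=True)
check_realizability(V, a_star=7)                           # the gap should exceed gamma/4
```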
4.2 The information-theoretic argument
Now we are ready to state and prove our main technical lemma.
Lemma 3. For any algorithm, there exists a∗ ∈ [m] such that in order to output π with $\mathbb{E}_{s_1 \sim \mu} V^\pi(s_1) \ge \mathbb{E}_{s_1 \sim \mu} V^*(s_1) - 0.05$ with probability at least 0.1 for M_{a∗}, the number of samples required is $2^{\Omega(\min\{d, H\})}$.
We provide a proof sketch for the lower bound below. The full proof can be found in the Supplementary Material. Our main result, Theorem 1, is a direct consequence of Lemma 3.
Proof sketch. Observe that the feature map of M_{a∗} does not depend on a∗, and that for h < H and a₂ ≠ a∗, the reward R_h(a₁, a₂) also contains no information about a∗. The transition probabilities are also independent of a∗, unless the action a∗ is taken. Moreover, the reward at state f is always 0. Thus, to receive information about a∗, the agent either needs to take the action a∗, or be at a non-terminal state at the final time step (h = H).
However, note that the probability of remaining at a non-terminal state at the next layer is at most
$$\sup_{a_1 \ne a_2} \langle v(a_1), v(a_2)\rangle + 2\gamma \le 3\gamma \le \tfrac{3}{4}.$$
Thus for any algorithm, $\Pr[s_H \ne f] \le (3/4)^H$, which is exponentially small. In other words, any algorithm that does not know a∗ either needs to "be lucky" so that $s_H \ne f$, or needs to take a∗ "by accident". Since the number of actions is $m = 2^{\Theta(d)}$, neither event can happen with constant probability unless the number of episodes is exponential in min{d, H}. In order to make this claim rigorous, we can construct a reference MDP M₀ as follows. The state space, action space, and features of M₀ are the same as those of M_{a∗}. The transitions are defined as follows:
$$\Pr[\,\cdot \mid a_1, a_2] = \begin{cases} a_2 : & \langle v(a_1), v(a_2)\rangle + 2\gamma \\ f : & 1 - \langle v(a_1), v(a_2)\rangle - 2\gamma \end{cases} \quad (\forall a_1, a_2 \text{ s.t. } a_1 \ne a_2),$$
$$\Pr[f \mid f, \cdot] = 1.$$
The rewards are defined as follows:
$$R_h(a_1, a_2) := -2\gamma\big[\langle v(a_1), v(a_2)\rangle + 2\gamma\big] \quad (\forall a_1, a_2 \text{ s.t. } a_1 \ne a_2),$$
$$R_h(f, \cdot) := 0.$$
⁴Here we ignored the terminal state f and the essentially unreachable state a∗ for simplicity. This issue will be handled rigorously in the Supplementary Material.
Note that M₀ is identical to M_{a∗}, except when a∗ is taken, or when a trajectory ends at a non-terminal state. Since the latter event happens with an exponentially small probability, we can show that for any algorithm, the probability of taking a∗ in M_{a∗} is close to the probability of taking a∗ in M₀. Since M₀ is independent of a∗, unless an exponential number of samples is used, for any algorithm there exists a∗ ∈ [m] such that the probability of taking a∗ in M₀ is o(1). It then follows that the probability of taking a∗ in M_{a∗} is o(1). Since a∗ is the optimal action at every state, such an algorithm cannot output a near-optimal policy for M_{a∗}.
5 Upper Bounds under Further Assumptions
Theorem 1 suggests that Assumption 1 and Assumption 2 are not sufficient for sample-efficient RL when the number of actions could be exponential, and that additional assumptions are needed to achieve polynomial sample complexity. One style of assumption posits a global representation property of the features, such as completeness [Zanette et al., 2020].
In this section, we consider two assumptions that impose additional structure on the transitions of the MDP, rather than on the feature representation, and that enable good rates for linear regression with sparse bias. The first condition is a variant of the low variance condition in Du et al. [2019b].
Assumption 3 (Low variance condition). There exists a constant $1 \le C_{\mathrm{var}} < \infty$ such that for any h ∈ [H] and any policy π,
$$\mathbb{E}_{s \sim D^\pi_h}\big[|V^\pi(s) - V^*(s)|^2\big] \le C_{\mathrm{var}} \cdot \big(\mathbb{E}_{s \sim D^\pi_h}[|V^\pi(s) - V^*(s)|]\big)^2.$$
The second assumption is that the feature distribution is hypercontractive.
Assumption 4. There exists a constant $1 \le C_{\mathrm{hyper}} < \infty$ such that for any h ∈ [H] and any policy π, the distribution of φ(s, a) with (s, a) ∼ D^π_h is (C_hyper, 4)-hypercontractive. In other words, for all π, all h ∈ [H], and all v ∈ R^d,
$$\mathbb{E}_{(s,a) \sim D^\pi_h}\big[(\phi(s, a)^\top v)^4\big] \le C_{\mathrm{hyper}} \cdot \big(\mathbb{E}_{(s,a) \sim D^\pi_h}[(\phi(s, a)^\top v)^2]\big)^2.$$
Intuitively, hypercontractivity characterizes the anti-concentration of a distribution. A broad class of distributions is hypercontractive with C_hyper = O(1), including Gaussian distributions (with arbitrary covariance matrices), uniform distributions over the hypercube and sphere, and strongly log-concave distributions [Kothari and Steurer, 2017]. Hypercontractivity has previously been used for outlier-robust linear regression [Klivans et al., 2018, Bakshi and Prasad, 2020] and moment estimation [Kothari and Steurer, 2017].
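As a quick illustration of Assumption 4 (ours, not from the paper), one can estimate the (C, 4)-hypercontractivity ratio by Monte Carlo for a candidate feature distribution; for any Gaussian the ratio E[(φ⊤v)⁴]/(E[(φ⊤v)²])² equals exactly 3 in every direction v.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 200_000
A = rng.standard_normal((d, d))
phi = rng.standard_normal((n, d)) @ A.T        # Gaussian features, arbitrary covariance

# Estimate the (C, 4)-hypercontractivity ratio E[(phi^T v)^4] / (E[(phi^T v)^2])^2
# over random directions v; a distribution is (C, 4)-hypercontractive if the
# supremum of this ratio over all v is at most C.
ratios = []
for _ in range(200):
    v = rng.standard_normal(d)
    z = phi @ v
    ratios.append(np.mean(z**4) / np.mean(z**2) ** 2)
print(f"max ratio over sampled v: {max(ratios):.2f} (Gaussian ground truth: 3)")
```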
We show that under Assumptions 1, 2, 3 or 1, 2, 4, a modified version of the Difference Maximization Q-learning (DMQ) algorithm [Du et al., 2019b] is able to learn a near-optimal policy using a polynomial number of trajectories, with no dependency on the number of actions.
5.1 Optimal experiment design
Given a set of d-dimensional vectors, G-optimal experiment design aims at finding a distribution ρ over the vectors such that, when sampling from this distribution, the maximum prediction variance over the set via linear regression is minimized. The following lemma on G-optimal design is a direct corollary of the Kiefer-Wolfowitz theorem [Kiefer and Wolfowitz, 1960].
Lemma 4 (Existence of G-optimal design). For any set $\mathcal{X} \subseteq \mathbb{R}^d$, there exists a distribution $\rho_{\mathcal{X}}$ supported on $\mathcal{X}$, known as the G-optimal design, such that
$$\max_{x \in \mathcal{X}} x^\top \big(\mathbb{E}_{z \sim \rho_{\mathcal{X}}} z z^\top\big)^{-1} x \le d.$$
Efficient algorithms for finding such a distribution can be found in Todd [2016].
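For concreteness, below is a hedged sketch of one classical scheme of this kind, a Fedorov-Wynn (Frank-Wolfe-style) iteration for approximate G-optimal design; it is among the methods surveyed by Todd [2016], and the iteration count and problem sizes here are illustrative. By the Kiefer-Wolfowitz theorem, the worst-case prediction variance should approach d.

```python
import numpy as np

def g_optimal_design(X, iters=2000):
    """Approximate G-optimal design over the rows of X (n x d) via Fedorov-Wynn."""
    n, d = X.shape
    rho = np.full(n, 1.0 / n)                                 # start from the uniform design
    for _ in range(iters):
        Sigma = X.T @ (rho[:, None] * X)                      # E_{z ~ rho}[z z^T]
        g = np.einsum("ij,jk,ik->i", X, np.linalg.inv(Sigma), X)  # x^T Sigma^{-1} x per row
        i = int(np.argmax(g))                                 # worst-covered point
        step = (g[i] - d) / (d * (g[i] - 1.0))                # classic Fedorov-Wynn step
        rho *= 1.0 - step
        rho[i] += step
    Sigma = X.T @ (rho[:, None] * X)
    g = np.einsum("ij,jk,ik->i", X, np.linalg.inv(Sigma), X)
    return rho, float(g.max())

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
rho, worst = g_optimal_design(X)
print(f"max_x x^T Sigma^-1 x = {worst:.3f}  (Kiefer-Wolfowitz optimum: d = 5)")
```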
In the context of reinforcement learning, the set X corresponds to the set of all features, which is inaccessible. Instead, one can only observe one state s at a time, and choose a ∈ A based on the features {φ(s, a)}_{a∈A}. Such a problem is closer to the distributional optimal design problem described by Ruan et al. [2020]. For our purpose, the following simple approach suffices: given a state s, perform exploration by sampling from the G-optimal design on {φ(s, a)}_{a∈A}. The performance of this exploration strategy is guaranteed by the following lemma, which will be used in the analysis of Algorithm 1.
Lemma 5 (Lemma 4 in Ruan et al. [2020]). For any state s, denote the G-optimal design over its features by $\rho_s(\cdot) \in \Delta_{\mathcal{A}}$, and the corresponding covariance matrix by $\Sigma_s := \sum_a \rho_s(a)\,\phi(s, a)\phi(s, a)^\top$. Given a distribution ν over states, denote the average covariance matrix by $\Sigma := \mathbb{E}_{s \sim \nu}\Sigma_s$. Then
$$\mathbb{E}_{s \sim \nu}\Big[\max_{a \in \mathcal{A}} \phi(s, a)^\top \Sigma^{-1} \phi(s, a)\Big] \le d^2.$$
Note that the performance of this strategy is only worse by a factor of d (compared to the case where one can query all features), and has no dependency on the number of actions.
5.2 The modified DMQ algorithm
Overview. During the execution of the Difference Maximization Q-learning (DMQ) algorithm, for each level h ∈ [H], we maintain three variables: the estimated linear coefficients θ_h ∈ R^d, a set of exploratory policies Π_h, and the empirical feature covariance matrix Σ_h associated with Π_h. We initialize θ_h = 0 ∈ R^d, Σ_h := λ_r I_{d×d}, and Π_h as a set containing a single purely random exploration policy, i.e., Π_h = {π} where π chooses an action uniformly at random at every state.⁵
Each time we execute Algorithm 1, the goal is to update the estimated linear coefficients θ_h ∈ R^d, so that for all π ∈ Π_h, θ_h is a good estimate of θ∗_h with respect to the distribution induced by π. We run ridge regression on the data distribution induced by policies in Π_h, and the regression targets are collected by invoking the greedy policy induced by {θ_{h′}}_{h′>h}. However, there are two apparent issues with such an approach. First, for levels h′ > h, θ_{h′} is guaranteed to achieve low estimation error only with respect to the distributions induced by policies in Π_{h′}. It is possible that for some π ∈ Π_h, the estimation error of θ_{h′} is high for the distribution induced by π (followed by the greedy policy). To resolve this issue, the main idea in Du et al. [2019b] is to explicitly check whether θ_{h′} also predicts well on the new distribution (see Line 5 in Algorithm 1). If not, we add the new policy into Π_{h′} and invoke Algorithm 1 recursively. The analysis in Du et al. [2019b] upper bounds the total number of recursive calls by a potential function argument, which also gives an upper bound on the sample complexity of the algorithm.
Second, the exploratory policies Πh only induce a distribution over states at level h, and the algorithm still needs to decide an exploration strategy to choose actions at level h. To this end, the algorithm in Du et al. [2019b] explores all actions uniformly at random, and therefore the sample complexity has at least linear dependency on the number of actions. We note that similar issues also appear in the linear contextual bandit literature [Lattimore and Szepesvári, 2020, Ruan et al., 2020], and indeed our solution here is to explore by sampling from the G-optimal design over the features at a single state. As shown by Lemma 5, for all possible roll-in distributions, such an exploration strategy achieves a nice coverage over the feature space, and is therefore sufficient for eliminating the dependency on the size of the action space.
The algorithm. The formal description of the algorithm is given in Algorithm 1. The algorithm should be run by calling LearnLevel on input h = 0.
Here, for a policy π_h ∈ Π_h, the associated exploratory policy π̃_h is defined as
$$\tilde{\pi}_h(s_{h'}) = \begin{cases} \pi_h(s_{h'}) & \text{if } h' < h \\ \text{sample from } \rho_{s_h}(\cdot) & \text{if } h' = h \\ \arg\max_a \phi(s_{h'}, a)^\top \theta_{h'} & \text{if } h' > h \end{cases} \qquad (3)$$
Here ρ_s(·) is the G-optimal design on the set of vectors {φ(s, a)}_{a∈A}, as defined by Lemma 4. Note that when h = 0, π̃_h is always the greedy policy on {θ_h}_{h∈[H]}. The choice of the algorithmic parameters (β, λ_r, λ_ridge) can be found in the proof of Theorem 2.
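A small dispatch function makes Eq. (3) concrete; all names and signatures below are hypothetical stand-ins rather than the paper's code (pi for π_h, rho for the per-state G-optimal design weights, phi for the feature map, theta for the current coefficient estimates).

```python
import numpy as np

def exploratory_action(hp, h, s, pi, rho, theta, phi, n_actions, rng):
    """Eq. (3) as a dispatch rule; pi(hp, s) plays pi_h, rho(s) returns G-optimal
    design weights over actions, phi(s, a) is the feature vector, and theta[hp]
    is the coefficient estimate at level hp. All of these are assumed interfaces."""
    if hp < h:
        return pi(hp, s)                             # roll in with pi_h
    if hp == h:
        return int(rng.choice(n_actions, p=rho(s)))  # explore via the G-optimal design
    scores = np.array([phi(s, a) @ theta[hp] for a in range(n_actions)])
    return int(np.argmax(scores))                    # act greedily on current estimates
```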
⁵We also define a special Π₀ in the same manner. The choice of λ_r and other parameters can be found in the Supplementary Material.
Algorithm 1: LearnLevel(h)
Input: a level h ∈ {0, · · · , H}
1  for π_h ∈ Π_h do
2    for h′ = H, H − 1, · · · , h + 1 do
3      Collect N samples {(s^j_{h′}, a^j_{h′})}_{j∈[N]} with s^j_{h′} ∼ D^{π̃_h}_{h′} and a^j_{h′} ∼ ρ_{s^j_{h′}} (π̃_h defined in (3))
4      Σ̂_{h′} ← (1/N) Σ_{j=1}^{N} φ(s^j_{h′}, a^j_{h′}) φ(s^j_{h′}, a^j_{h′})^⊤
5      if ‖Σ_{h′}^{−1/2} Σ̂_{h′} Σ_{h′}^{−1/2}‖₂ > β/|Π_{h′}| then
6        Π_{h′} ← Π_{h′} ∪ {π̃_h}
7        LearnLevel(h′)
8        LearnLevel(h)
9  if h = 0 then
10   Output the greedy policy with respect to {θ_h}_{h∈[H]} and exit
11 Σ_h ← (λ_r/|Π_h|) I, w_h ← 0 ∈ R^d
12 for i = 1, · · · , N|Π_h| do
13   Sample π from the uniform distribution over Π_h
14   Execute π̃_h (see (3)) to collect (s^i_h, a^i_h, y_i), where y_i := Σ_{h′≥h} r^i_{h′} is the on-the-go reward
15   Σ_h ← Σ_h + (1/(N|Π_h|)) φ(s^i_h, a^i_h) φ(s^i_h, a^i_h)^⊤
16   w_h ← w_h + (1/(N|Π_h|)) φ(s^i_h, a^i_h) y_i
17 θ_h ← ((λ_ridge − λ_r/|Π_h|) I + Σ_h)^{−1} w_h
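The regression step in lines 11-17 reduces to plain ridge regression of the on-the-go returns on the features: Σ_h is initialized at (λ_r/|Π_h|)I and line 17 subtracts that term back out, so the net regularizer is λ_ridge · I. Below is a hedged, self-contained sketch of this step in isolation (synthetic data stands in for the rollouts; the λ value is illustrative).

```python
import numpy as np

def regress_theta(Phi, y, lam_ridge=1e-2):
    """Lines 11-17 of Algorithm 1 in isolation: ridge regression of the on-the-go
    returns y_i on the features phi(s_i, a_i), which form the rows of Phi."""
    n, d = Phi.shape
    Sigma = Phi.T @ Phi / n                          # empirical feature covariance
    w = Phi.T @ y / n                                # empirical feature-return correlation
    return np.linalg.solve(lam_ridge * np.eye(d) + Sigma, w)

rng = np.random.default_rng(0)
n, d = 5000, 8
theta_star = rng.standard_normal(d)
Phi = rng.standard_normal((n, d)) / np.sqrt(d)
y = Phi @ theta_star + 0.1 * rng.standard_normal(n)  # synthetic stand-in for rollouts
theta = regress_theta(Phi, y)
print(f"||theta - theta*||_2 = {np.linalg.norm(theta - theta_star):.3f}")
```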
5.3 Analysis
We show the following theorem regarding the modified algorithm.
Theorem 2. Assume that Assumptions 1, 2, and one of Assumptions 3 and 4 hold. Also assume that
$$\epsilon \le \mathrm{poly}(\Delta_{\min}, 1/C_{\mathrm{var}}, 1/d, 1/H) \quad \text{(under Assumption 3)}$$
or
$$\epsilon \le \mathrm{poly}(\Delta_{\min}, 1/C_{\mathrm{hyper}}, 1/d, 1/H) \quad \text{(under Assumption 4)}.$$
Let µ be the initial state distribution. Then with probability 1 − ε, running Algorithm 1 on input 0 returns a policy π which satisfies $\mathbb{E}_{s_1 \sim \mu} V^\pi(s_1) \ge \mathbb{E}_{s_1 \sim \mu} V^*(s_1) - \epsilon$ using poly(1/ε) trajectories.
Note that here both the algorithm and the theorem have no dependence on the number of actions A. The proof of the theorem under Assumption 3 is largely based on the analysis in Du et al. [2019b]. The largest difference is that we use Lemma 5 instead of the original union bound argument when controlling $\Pr\big[\sup_a |\theta_h^\top \phi(s, a) - Q^*_h(s, a)| > \frac{\gamma}{2}\big]$. The proof under Assumption 4 relies on a novel analysis of least squares regression under hypercontractivity. The full proof can be found in the Supplementary Material.
6 Discussion
Exponential separation between the generative model and the online setting. When a generative model (also known as a simulator) is available, Assumption 1 and Assumption 2 are sufficient for designing an algorithm with $\mathrm{poly}(1/\epsilon, 1/\Delta_{\min}, d, H)$ sample complexity [Du et al., 2019a, Theorem C.1]. As shown by Theorem 1, under the standard online RL setting (i.e., without access to a generative model), the sample complexity is lower bounded by $2^{\Omega(\min\{d, H\})}$ when $\Delta_{\min} = \Theta(1)$, under the same set of assumptions. This implies that the generative model is exponentially more powerful than the standard online RL setting.
Although the generative model is conceptually much stronger than the online RL model, previously little was known about the extent to which the former is more powerful. In tabular RL, for instance, the known sample complexity bounds with or without access to generative models are nearly the same [Zhang et al., 2020, Agarwal et al., 2020]. To the best of our knowledge, the only existing example of such a separation is shown by Wang et al. [2020a] under the following set of conditions: (i) deterministic system; (ii) realizability (Assumption 1); (iii) no reward feedback (a.k.a. reward-free exploration). In comparison, our separation result holds under fewer restrictions (it allows stochasticity) and for the usual RL environment (instead of reward-free exploration), and is thus far more natural.
Connecting Theorem 1 and Theorem 2. Our hardness result in Theorem 1 shows that under Assumption 1 and Assumption 2, any algorithm requires an exponential number of samples to find a near-optimal policy; therefore, sample-efficient RL is impossible without further assumptions (e.g., Assumption 3 or 4 assumed in Theorem 2). Indeed, Theorem 1 and Theorem 2 imply that the coefficients C_var and C_hyper in Assumptions 3 and 4 are at least exponential for the hard MDP family used in Theorem 1, which can also be verified easily.
Open problems. The first open problem is whether a sample complexity lower bound under Assumption 1 can be shown with a polynomial number of actions. This would further rule out poly(A, d, H)-style upper bounds, which are still possible given the current results. Another open problem is whether Assumption 3 or 4 can be replaced by, or understood as, more natural characterizations of the complexity of the MDP.
Acknowledgments and Disclosure of Funding
The authors would like to thank Kefan Dong and Dean Foster for helpful discussions. Sham M. Kakade acknowledges funding from the ONR award N00014-18-1-2247 and from the National Science Foundation under award #CCF-1703574. Ruosong Wang was supported in part by the NSF IIS1763562, US Army W911NF1920104, and ONR Grant N000141812861. | 1. What is the main contribution of the paper regarding linear-Q* setting with a gap>0 assumption?
2. How does the result of the paper establish a separation in sample complexity compared to the online setting under the same assumptions?
3. Can you explain how the authors propose algorithms that break the exponential sample complexity lower bounds in this setting under either of (I) a low variance assumption or (II) a (C, 4)-hypercontractivity assumption?
4. How does the proof under the latter assumption follow from a novel analysis of least squares regression under hypercontractivity assumptions?
5. Are there any suggestions for improving the presentation of the paper, such as including a figure detailing a rough description of the lower bound or discussing the current construction compared to the previous lower bound of Weisz et al.?
6. Is it possible to provide a more explicit description of the number of trajectories consumed by the algorithm instead of poly(1/ε)?
7. Is there any conjectured approach for a lower bound in the case where the action space is constrained to be poly(d)?
8. Do you believe the flexibility of the next-state distribution is sufficient to get such a lower bound instance to work when the action space is only polynomially large? | Summary Of The Paper
Review | Summary Of The Paper
This paper studies lower bounds for the linear-
Q
∗
setting with a gap>0 assumption. Recently Weisz et al prove a milestone result that the linear-
Q
∗
setting requires exponential in d sample complexity. However the gap in these instances, the gap is exponentially small too. It is known that with linear-
Q
∗
assumption + gap>0 , an optimal policy can be learned with polynomial (in all parameters) sample complexity given access to a generative model. The result of this paper thus establishes a separation in the sample complexity compared to the online setting under the same assumptions. In addition, the authors also propose algorithms that break the exponential sample complexity lower bounds in this setting under either of (I) a low variance assumption on the variance of the value of any policy from
V
∗
under its own state distribution, or (II) a
(
C
,
4
)
-hypercontractivity assumption on the feature vector distribution under any policy. The proof under the latter assumption follows from a novel analysis of least squares regression under hypercontractivity assumptions.
Review
Overall I think the contribution of the paper is significant and the presentation is quite clear. To the best of my knowledge most of the related works have been sufficiently addressed. Here are my suggestions for the paper:
In the paper, it would help (subject to space constraints) to (i) include a figure detailing a rough description of the lower bound in the paper and (ii) a discussion comparing the current construction to the previous lower bound of Weisz et al.
In section 4.1 it would help to add a line that the total number of states in the MDP is also equal to $m$.
In the presentation, I sometimes got confused with notations such as $\bar{a}_1$, $\bar{a}_2$, etc., which are in fact used to refer to states and not actions. This is a bit jarring at times. I think in general it may help to only index the states by $\bar{i}$ and $\bar{j}$ and avoid using the notation $a_\cdot$ entirely here.
In Theorem 2, it might be helpful for a reader to see a more explicit description of the number of trajectories consumed by the algorithm instead of $\mathrm{poly}(1/\epsilon)$.
In the case the action space is constrained to be $\mathrm{poly}(d)$, is there any conjectured approach for a lower bound? A natural approach is to split each state in the current (exponential action space) MDP into an exponential number of states with a constant number of actions each, and inducing the uniform distribution over these states upon taking any action in the previous state. However, this approach fails because even the value of the optimal policy becomes exponentially small. But of course, there is a lot more room to work around here. So, I wonder: do you believe the flexibility of the next-state distribution here is sufficient to get such a lower bound instance to work when the action space is only polynomially large? |
NIPS | Title
An Exponential Lower Bound for Linearly Realizable MDP with Constant Suboptimality Gap
Abstract
A fundamental question in the theory of reinforcement learning is: suppose the optimal Q-function lies in the linear span of a given d dimensional feature mapping, is sample-efficient reinforcement learning (RL) possible? The recent and remarkable result of Weisz et al. (2020) resolves this question in the negative, providing an exponential (in d) sample size lower bound, which holds even if the agent has access to a generative model of the environment. One may hope that such a lower bound can be circumvented with an even stronger assumption that there is a constant gap between the optimal Q-value of the best action and that of the second-best action (for all states); indeed, the construction in Weisz et al. (2020) relies on having an exponentially small gap. This work resolves this subsequent question, showing that an exponential sample complexity lower bound still holds even if a constant gap is assumed. Perhaps surprisingly, this result implies an exponential separation between the online RL setting and the generative model setting, where sample-efficient RL is in fact possible in the latter setting with a constant gap. Complementing our negative hardness result, we give two positive results showing that provably sample-efficient RL is possible either under an additional low-variance assumption or under a novel hypercontractivity assumption.
1 Introduction
There has been substantial recent theoretical interest in understanding the means by which we can avoid the curse of dimensionality and obtain sample-efficient reinforcement learning (RL) methods [Wen and Van Roy, 2017, Du et al., 2019b,a, Wang et al., 2019, Yang and Wang, 2019, Lattimore et al., 2020, Yang and Wang, 2020, Jin et al., 2020, Cai et al., 2020, Zanette et al., 2020, Weisz et al., 2020, Du et al., 2020, Zhou et al., 2020b,a, Modi et al., 2020, Jia et al., 2020, Ayoub et al., 2020]. Here, the extant body of literature largely focuses on sufficient conditions for efficient reinforcement learning. Our understanding of what are the necessary conditions for efficient reinforcement learning is far more limited. With regards to the latter, arguably, the most natural assumption is linear realizability: we assume that the optimal Q-function lies in the linear span of a given feature map. The goal is to obtain polynomial sample complexity under this linear realizability assumption alone.
This “linear Q∗ problem” was a major open problem (see Du et al. [2019a] for discussion), and a recent hardness result by Weisz et al. [2020] provides a negative answer. In particular, the result shows that even with access to a generative model, any algorithm requires an exponential number
of samples (in the dimension d of the feature mapping) to find a near-optimal policy, provided the action space has exponential size.
With this question resolved, one may naturally ask what is the source of hardness for the construction in Weisz et al. [2020] and if there are additional assumptions that can serve to bypass the underlying source of this hardness. Here, arguably, it is most natural to further examine the suboptimality gap in the problem, which is the gap between the optimal Q-value of the best action and that of the second-best action; the construction in Weisz et al. [2020] does in fact fundamentally rely on having an exponentially small gap. Instead, if we assume the gap is lower bounded by a constant for all states, we may hope that the problem becomes substantially easier since with a finite number of samples (appropriately obtained), we can identify the optimal policy itself (i.e., the gap assumption allows us to translate value-based accuracy to the identification of the optimal policy itself). In fact, this intuition is correct in the following sense: with a generative model, it is not difficult to see that polynomial sample complexity is possible under the linear realizability assumption plus the suboptimality gap assumption, since the suboptimality gap assumption allows us to easily identify an optimal action for all states, thus making the problem tractable (see Section C in Du et al. [2019a] for a formal argument).
More generally, the suboptimality gap assumption is widely discussed in the bandit literature [Dani et al., 2008, Audibert and Bubeck, 2010, Abbasi-Yadkori et al., 2011] and the reinforcement learning literature [Simchowitz and Jamieson, 2019, Yang et al., 2020] to obtain fine-grained sample complexity upper bounds. More specifically, under the realizability assumption and the suboptimality gap assumption, it has been shown that polynomial sample complexity is possible if the transition is nearly deterministic [Du et al., 2019b, 2020] (also see Wen and Van Roy [2017]). However, it remains unclear whether the suboptimality gap assumption is sufficient to bypass the hardness result in Weisz et al. [2020], or the same exponential lower bound still holds even under the suboptimality gap assumption, when the transition could be stochastic and the generative model is unavailable. For the construction in Weisz et al. [2020], at the final stage, the gap between the value of the optimal action and its non-optimal counterparts will be exponentially small, and therefore the same construction does not imply an exponential sample complexity lower bound under the suboptimality gap assumption.
Our contributions. In this work, we significantly strengthen the hardness result in Weisz et al. [2020]. In particular, we show that in the online RL setting (where a generative model is unavailable) with exponential-sized action space, the exponential sample complexity lower bound still holds even under the suboptimality gap assumption. Complementing our hardness result, we show that under the realizability assumption and the suboptimality gap assumption, our hardness result can be bypassed if one further assumes the low variance assumption in Du et al. [2019b] 1, or a hypercontractivity assumption. Hypercontractive distributions include Gaussian distributions (with arbitrary covariance matrices), uniform distributions over hypercubes and strongly log-concave distributions [Kothari and Steinhardt, 2017]. This condition has been shown powerful for outlier-robust linear regression [Kothari and Steurer, 2017], but has not yet been introduced for reinforcement learning with linear function approximation.
Our results have several interesting implications, which we discuss in detail in Section 6. Most notably, our results imply an exponential separation between the standard reinforcement learning setting and the generative model setting. Moreover, our construction enjoys greater simplicity, making it more suitable to be generalized for other RL problems or to be presented for pedagogical purposes.
1We note that the sample complexity of the algorithm in Du et al. [2019b] has at least linear dependency on the number of actions, which is not sufficient for bypassing our hardness results which assumes an exponential-sized action space.
2 Related work
Previous hardness results. Existing exponential lower bounds in RL [Krishnamurthy et al., 2016, Chen and Jiang, 2019] usually construct unstructured MDPs with an exponentially large state space. Du et al. [2019a] prove that under the approximate version of the realizability assumption, i.e., the optimal Q-function lies in the linear span of a given feature mapping approximately, any algorithm requires an exponential number of samples to find a near-optimal policy. The main idea in Du et al. [2019a] is to use the Johnson-Lindenstrauss lemma [Johnson and Lindenstrauss, 1984] to construct a large set of near-orthogonal feature vectors. Such idea is later generalized to other settings, including those in Wang et al. [2020a], Kumar et al. [2020], Van Roy and Dong [2019], Lattimore et al. [2020]. Whether the exponential lower bound still holds under the exact version of the realizability assumption is left as an open problem in Du et al. [2019a].
The above open problem is recently solved by Weisz et al. [2020]. They show that under the exact version of the realizability assumption, any algorithm requires an exponential number of samples to find a near-optimal policy assuming an exponential-sized action space. The construction in Weisz et al. [2020] also uses the Johnson-Lindenstrauss lemma to construct a large set of near-orthogonal feature vectors, with additional subtleties to ensure exact realizability.
Very recently, under the exact realizability assumption, strong lower bounds are proved in the offline setting [Wang et al., 2020b, Zanette, 2020, Amortila et al., 2020]. These work focus on the offline RL setting, where a fixed data distribution with sufficient coverage is given and the agent cannot interact with the environment in an online manner. Instead, we focus on the online RL setting in this paper.
Existing upper bounds. For RL with linear function approximation, most existing upper bounds require representation conditions stronger than realizability. For example, the algorithms in Yang and Wang [2019, 2020], Jin et al. [2020], Cai et al. [2020], Zhou et al. [2020b,a], Modi et al. [2020], Jia et al. [2020], Ayoub et al. [2020] assume that the transition model lies in the linear span of a given feature mapping, and the algorithms in Wang et al. [2019], Lattimore et al. [2020], Zanette et al. [2020] assume completeness properties of the given feature mapping. In the remaining part of this section, we mostly focus on previous upper bounds that require only realizability as the representation condition.
| 1. What is the focus of the paper regarding online RL with linear function approximation?
2. What are the strengths of the proposed algorithm, particularly in removing the linear dependency on the number of actions?
3. Do you have any concerns or questions regarding the negative result, especially in terms of its applicability to practical problems with finite action spaces?
4. Could you provide further explanations or examples regarding Assumptions 3 and 4, specifically in the context of RL, and how they contribute to overcoming the impossibility result?
5. How broadly applicable are Assumptions 3 and 4 in practice, and what are some examples of systems that satisfy these assumptions?
6. Are there any minor suggestions or improvements that could enhance the clarity and impact of the paper? | Summary Of The Paper
Review | Summary Of The Paper
This is a technical paper that considers online RL with linear function approximation under two assumptions: 1) Q^* is realizable; 2) the gap ∆(s,a) = V^*(s) - Q^*(s,a) is strictly positive for all suboptimal actions a. The first result is an unfortunate impossibility result that gives an exponential sample-complexity lower bound without further assumptions. Then, under additional assumptions on the system dynamics, a variation of an existing algorithm (Difference Maximization Q-Learning) returns a near-optimal policy with polynomial sample complexity.
Review
The paper considers an interesting open question on online RL with linear function approximation. Realizability and a constant minimum gap are commonly used and reasonable assumptions for this setting; nevertheless, the paper shows that the problem remains hard without further assumptions. This negative result is a good addition to the current literature. For the upper bound, a major contribution is removing the linear dependency on the number of actions with a nice adaptation of G-optimal experimental design. Overall, the paper is very well-written and presents solid improvements/contributions to RL theory, hence I recommend acceptance.
Main Comments
It seems that there is a caveat in this negative result: there is no limit on the action space, and thus there could be exponentially many possible actions to choose from. It would help to clarify whether the same negative result can be obtained when the number of actions is finite (as in many practical problems).
Could you explain more explicitly what Assumption 4 (hypercontractivity) means in the RL setting? Unlike in robust statistics, where hypercontractivity is crucial for the success of specific moment-based methods, for the proposed algorithm it is hard to see why it helps to overcome the impossibility result. Is Assumption 4 some sort of uniform ergodicity, or is it some variation of (nearly) deterministic systems?
How broadly can Assumption 3 or 4 be applied in practice? What are some practical examples whose transition kernels are not completely deterministic but still satisfy these assumptions?
Minor Comments
Line 225 - 226: One style of assumption ... [Zanette et al., 2020] -> I don't understand how this work is relevant to your assumptions.
Section 5.1: It might be nicer to state and emphasize the major challenge / target of improvement at the beginning of the section.
NIPS | Title
Characterizing Generalization under Out-Of-Distribution Shifts in Deep Metric Learning
Abstract
Deep Metric Learning (DML) aims to find representations suitable for zero-shot transfer to a priori unknown test distributions. However, common evaluation protocols only test a single, fixed data split in which train and test classes are assigned randomly. More realistic evaluations should consider a broad spectrum of distribution shifts with potentially varying degree and difficulty. In this work, we systematically construct train-test splits of increasing difficulty and present the ooDML benchmark to characterize generalization under out-of-distribution shifts in DML. ooDML is designed to probe the generalization performance on much more challenging, diverse train-to-test distribution shifts. Based on our new benchmark, we conduct a thorough empirical analysis of state-of-the-art DML methods. We find that while generalization tends to consistently degrade with difficulty, some methods are better at retaining performance as the distribution shift increases. Finally, we propose few-shot DML as an efficient way to consistently improve generalization in response to unknown test shifts presented in ooDML1.
1 Introduction
Image representations that generalize well are the foundation of numerous computer vision tasks, such as image and video retrieval [61, 71, 54, 38, 1], face (re-)identification [57, 34, 8] and image classification [65, 4, 19, 40, 37]. Ideally, these representations should not only capture data within the training distribution, but also transfer to new, out-of-distribution (OOD) data. However, in practice, achieving effective OOD generalization is more challenging than in-distribution generalization [28, 12, 21, 49, 31, 55]. In the case of zero-shot generalization, where train and test classes are completely distinct, Deep Metric Learning (DML) is used to learn metric representation spaces that capture and transfer visual similarity to unseen classes, constituting a priori unknown test distributions with unspecified shift. To approximate such a setting, current DML benchmarks use single, predefined and fixed data splits of disjoint train and test classes, which are assigned arbitrarily [71, 8, 61, 24, 11, 33, 74, 51, 54, 42, 26, 64, 58]. This means that (i) generalization is only evaluated on a fixed problem difficulty, (ii)
1Code available here: https://github.com/CompVis/Characterizing_Generalization_in_DML ∗ Equal contribution, alphabetical order, † equal supervision, x now at University of Tuebingen.
generalization difficulty is only implicitly defined by the arbitrary data split, (iii) the distribution shift is not measured and (iv) cannot be changed. As a result, proposed models can overfit to these singular evaluation settings, which puts into question the true zero-shot generalization capabilities of proposed DML models.
In this work, we first construct a new benchmark ooDML to characterize generalization under outof-distribution shifts in DML. We systematically build ooDML as a comprehensive benchmark for evaluating OOD generalization in changing zero-shot learning settings which covers a much larger variety of zero-shot transfer learning scenarios potentially encountered in practice. We systematically construct training and testing data splits of increasing difficulty as measured by their Frechet-Inception Distance [23] and extensively evaluate the performance of current DML approaches.
Our experiments reveal that the standard evaluation splits are often close to i.i.d. evaluation settings. In contrast, our novel benchmark continually evaluates models on significantly harder learning problems, providing a more complete perspective into OOD generalization in DML. Second, we perform a large-scale study of representative DML methods on ooDML, and study the actual benefit of underlying regularizations such as self-supervision [38], knowledge distillation [53], adversarial regularization [59] and specialized objective functions [71, 70, 8, 26, 54]. We find that conceptual differences between DML approaches play a more significant role as the distribution shift to the test split becomes harder. Finally, we present a study on few-shot DML as a simple extension to achieve systematic and consistent OOD generalization. As the transfer learning problem becomes harder, even very little in-domain knowledge effectively helps to adjust learned metric representation spaces to novel test distributions. We publish our code and train-test splits on three established benchmark sets, CUB200-2011 [68], CARS196 [30] and Stanford Online Products (SOP) [43]. Similarly, we provide training and evaluation episodes for further research into few-shot DML. Overall, our contributions can be summarized as:
• Proposing the ooDML benchmark to create a set of more realistic train-test splits that evaluate DML generalization capabilities under increasingly more difficult zero-shot learning tasks.
• Analyzing the current DML method landscape under ooDML to characterize benefits and drawbacks of different conceptual approaches to DML.
• Introducing and examining few-shot DML as a potential remedy for systematically improved OOD generalization, especially when moving to larger train-test distribution shifts.
2 Related Work
DML has become essential for many applications, especially in zero-shot image and video retrieval [61, 71, 51, 24, 1, 36]. Proposed approaches most commonly rely on a surrogate ranking task over tuples during training [62], ranging from simple pairs [17] and triplets [57] to higher-order quadruplets [5] and more generic n-tuples [61, 43, 22, 70]. These ranking tasks can also leverage additional context such as geometrical embedding structures [69, 8]. However, due to the exponentially increased complexity of tuple sampling spaces, these methods are usually also combined with tuple sampling objectives, relying on predefined or learned heuristics to avoid training over tuples that are too easy or too hard [57, 72] or reducing tuple redundancy encountered during training [71, 15, 18, 52]. More recent work has tackled sampling complexity through the usage of proxy-representations utilized as sample stand-ins during training, following an NCA [16] objective [41, 26, 64], leveraging softmax-style training through class-proxies [8, 73] or simulating intraclass structures [46].
Unfortunately, the true benefit of these proposed objectives has been put into question recently, with [54] and [42] highlighting high levels of performance saturation of these discriminative DML objectives on default benchmark splits under fair comparison. Instead, orthogonal work extending the standard DML training paradigm through multi-task approaches [56, 51, 39], boosting [44, 45], attention [27], sample generation [11, 33, 74], multi-feature learning [38] or self-distillation [53] have shown more promise with strong relative improvements under fair comparison [54, 38], however still only in single split benchmark settings. It thus remains unclear how well these methods generalize in more realistic settings [28] under potentially much more challenging, different train-to-test distribution shifts, which we investigate in this work.
3 ooDML: Constructing a Benchmark for OOD Generalization in DML
An image representation ϕ(x) learned on samples $x \in \mathcal{X}_{\mathrm{train}}$ drawn from some training distribution generalizes well if it can transfer to test data $\mathcal{X}_{\mathrm{test}}$ that are not observed during training. In the particular case of OOD generalization, the learned representation ϕ is supposed to transfer to samples $\mathcal{X}_{\mathrm{test}}$ which are not independently and identically distributed (i.i.d.) with respect to $\mathcal{X}_{\mathrm{train}}$. A successful approach to learning such representations is DML, which is evaluated for the special case of zero-shot generalization, i.e. the transfer of ϕ to distributions of unknown classes [57, 71, 24, 8, 54, 42]. DML models aim to learn an embedding ϕ mapping datapoints x into an embedding space Φ, which allows measuring the similarity between $x_i$ and $x_j$ as $g(\phi(x_i), \phi(x_j))$. Typically, g is a predefined metric, such as the Euclidean or cosine distance, and ϕ is parameterized by a deep neural network.
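As an illustration of the similarity computation $g(\phi(x_i), \phi(x_j))$, the sketch below builds a pairwise cosine-similarity matrix for a batch of embeddings. The linear layer standing in for ϕ and all dimensions are hypothetical placeholders, not the networks used in the paper.

```python
import torch
import torch.nn.functional as F

def cosine_similarity_matrix(embeddings: torch.Tensor) -> torch.Tensor:
    """Pairwise g(phi(x_i), phi(x_j)) for a batch of embeddings of shape (n, d)."""
    z = F.normalize(embeddings, dim=1)  # project embeddings onto the unit sphere
    return z @ z.T

# phi is any embedding network; here a random linear map stands in for it.
phi = torch.nn.Linear(512, 128)
x = torch.randn(4, 512)                 # 4 datapoints with 512-dim features
sims = cosine_similarity_matrix(phi(x))
print(sims.shape)                       # torch.Size([4, 4])
```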
In realistic zero-shot learning scenarios, test distributions are not specified a priori. Thus, their respective distribution shifts relative to the training distribution, which indicate the difficulty of the transfer learning problem, are unknown as well. To determine the generalization capabilities of ϕ, we would ideally measure its performance on different test distributions covering a large spectrum of distribution shifts, which we will also refer to as “problem difficulties" in this work. Unfortunately, standard evaluation protocols test the generalization of ϕ on a single and fixed train-test data split of predetermined difficulty, and hence allow only limited conclusions about zero-shot generalization.
To thoroughly assess and compare zero-shot generalization of DML models, we aim to build an evaluation protocol that reflects the undetermined nature of the transfer learning problem. In order to achieve this, we need to be able to change, measure and control the difficulty of train-test data splits. To this end, we present an approach for constructing multiple train-test splits of measurably increasing difficulty to investigate out-of-distribution generalization in DML; these splits make up the ooDML benchmark. Our generated train-test splits build on the established DML benchmark sets and are subsequently used in Sec. 4 to thoroughly analyze the current state-of-the-art in DML. For future research, this approach is also easily applicable to other datasets and transfer learning problems.
3.1 Measuring the gap between train and test distributions
To create our train-test data splits, we need a way of measuring the distance between image datasets. This is a difficult task due to the high dimensionality of and natural noise in the images. Recently, the Frechet Inception Distance (FID) [23] was proposed to measure the distance between two image distributions using the neural embeddings of an Inception-v3 network trained for classification on the ImageNet dataset. FID assumes that the embeddings of the penultimate layer follow a Gaussian distribution, with mean $\mu_{\mathcal{X}}$ and covariance $\Sigma_{\mathcal{X}}$ for a distribution of images $\mathcal{X}$. The FID between two data distributions $\mathcal{X}_1$ and $\mathcal{X}_2$ is defined as:
$$d(\mathcal{X}_1, \mathcal{X}_2) \triangleq \|\mu_{\mathcal{X}_1} - \mu_{\mathcal{X}_2}\|_2^2 + \mathrm{Tr}\big(\Sigma_{\mathcal{X}_1} + \Sigma_{\mathcal{X}_2} - 2(\Sigma_{\mathcal{X}_1}\Sigma_{\mathcal{X}_2})^{1/2}\big). \quad (1)$$
In this paper, instead of the Inception network, we use the embeddings of a ResNet-50 classifier (Frechet ResNet Distance) for consistency with most DML studies (see e.g. [71, 64, 26, 56, 51, 38, 54, 58]). For simplicity, in the following sections we will still use the abbreviation FID.
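A minimal implementation of Eq. (1) might look as follows. The synthetic features stand in for the ResNet-50 embeddings used in the paper, and the handling of the matrix square root is an assumption of this sketch rather than the paper's exact numerical procedure.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats1: np.ndarray, feats2: np.ndarray) -> float:
    """FID (Eq. 1) between two sets of backbone embeddings, shape (n_i, d) each."""
    mu1, mu2 = feats1.mean(axis=0), feats2.mean(axis=0)
    cov1 = np.cov(feats1, rowvar=False)
    cov2 = np.cov(feats2, rowvar=False)
    covmean = linalg.sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):        # numerical noise can yield tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(cov1 + cov2 - 2 * covmean))

# Example with synthetic embeddings standing in for ResNet-50 features.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(1000, 64))
b = rng.normal(0.5, 1.0, size=(1000, 64))
print(frechet_distance(a, b))
```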
3.2 On the issue with default train-test splits in DML
To motivate the need for more comprehensive OOD evaluation protocols, we look at the split difficulty as measured by FID of typically used train-test splits and compare to i.i.d. sampling of training and test sets from the same benchmark. Empirical results in Tab. 1 show that commonly utilized DML
train-test splits are very close to in-distribution learning problems when compared to more out-of-distribution splits in CARS196 and SOP (see Fig. 1). This indicates that semantic differences due to disjoint train and test classes do not necessarily translate into significant distribution shifts between the train and test set. This also explains the consistently lower zero-shot retrieval performance on CUB200-2011 as compared to both CARS196 and SOP in the literature [71, 70, 24, 54, 42, 38], despite SOP containing significantly more classes with fewer examples per class. In addition to the previously discussed issues of DML evaluation protocols, this further questions conclusions drawn from these protocols about the OOD generalization of representations ϕ.
3.3 Creating train-test splits of increasing difficulty
Let $\mathcal{X}_{\mathrm{train}}$ and $\mathcal{X}_{\mathrm{test}}$ denote the original train and test set of a given benchmark dataset $\mathcal{D} = \mathcal{X}_{\mathrm{train}} \cup \mathcal{X}_{\mathrm{test}}$. To generate train-test splits of increasing difficulty while retaining the available data $\mathcal{D}$ and keeping the split sizes balanced, we exchange samples between them. To ensure and emphasize semantic consistency and unbiased data distributions with respect to image context unrelated to the target object categories, we swap entire classes instead of individual samples. Measuring distribution similarity based on FID, the goal is then to identify classes $C_{\mathrm{train}} \subset \mathcal{X}_{\mathrm{train}}$ and $C_{\mathrm{test}} \subset \mathcal{X}_{\mathrm{test}}$ whose exchange yields a higher FID $d(\mathcal{X}_{\mathrm{train}}, \mathcal{X}_{\mathrm{test}})$. To this end, similar to other works [33, 51, 38], we find an unimodal approximation of the intraclass distributions sufficient and approximate the FID by considering only the class means, neglecting the covariance term in Eq. (1). We select $C_{\mathrm{train}}$ and $C_{\mathrm{test}}$ as
$$C^*_{\mathrm{train}} = \operatorname*{argmax}_{C_{\mathrm{train}} \subset \mathcal{X}_{\mathrm{train}}} \ \|\mu_{C_{\mathrm{train}}} - \mu_{\mathcal{X}_{\mathrm{train}}}\|_2 - \|\mu_{C_{\mathrm{train}}} - \mu_{\mathcal{X}_{\mathrm{test}}}\|_2 \quad (2)$$
$$C^*_{\mathrm{test}} = \operatorname*{argmax}_{C_{\mathrm{test}} \subset \mathcal{X}_{\mathrm{test}}} \ \|\mu_{C_{\mathrm{test}}} - \mu_{\mathcal{X}_{\mathrm{test}}}\|_2 - \|\mu_{C_{\mathrm{test}}} - \mu_{\mathcal{X}_{\mathrm{train}}}\|_2 \quad (3)$$
where we measure distances to the mean class-representations $\mu_{\mathcal{X}_C}$. By iteratively exchanging classes between data splits, i.e. $\mathcal{X}^{t+1}_{\mathrm{train}} = (\mathcal{X}^{t}_{\mathrm{train}} \setminus C^*_{\mathrm{train}}) \cup C^*_{\mathrm{test}}$ and vice versa, we obtain a more difficult train-test split $(\mathcal{X}^{t+1}_{\mathrm{train}}, \mathcal{X}^{t+1}_{\mathrm{test}})$ at iteration step $t$. Hence, we obtain a sequence of train-test splits $\mathcal{X}_{\mathcal{D}} = ((\mathcal{X}^{0}_{\mathrm{train}}, \mathcal{X}^{0}_{\mathrm{test}}), \ldots, (\mathcal{X}^{t}_{\mathrm{train}}, \mathcal{X}^{t}_{\mathrm{test}}), \ldots, (\mathcal{X}^{T}_{\mathrm{train}}, \mathcal{X}^{T}_{\mathrm{test}}))$, with $\mathcal{X}^{0}_{\mathrm{train}} \triangleq \mathcal{X}_{\mathrm{train}}$ and $\mathcal{X}^{0}_{\mathrm{test}} \triangleq \mathcal{X}_{\mathrm{test}}$. Fig. 1 (columns 1-3) indeed shows that our FID approximation yields data splits with gradually increasing approximate FID scores with each swap, until the scores cannot be increased further by swapping classes.
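One greedy swap iteration following Eqs. (2)-(3) can be sketched as below, assuming per-class mean embeddings have been precomputed; the variable names and the toy data are hypothetical.

```python
import numpy as np

def pick_swap_class(class_means: dict, mu_own: np.ndarray, mu_other: np.ndarray):
    """Greedy selection per Eqs. (2)-(3): the class far from its own split mean
    and close to the other split's mean maximizes the approximate FID gain."""
    def score(mu_c):
        return np.linalg.norm(mu_c - mu_own) - np.linalg.norm(mu_c - mu_other)
    return max(class_means, key=lambda c: score(class_means[c]))

# class_means maps a class id to the mean backbone embedding of that class.
rng = np.random.default_rng(0)
train_means = {c: rng.normal(size=16) for c in range(10)}
test_means = {c: rng.normal(size=16) for c in range(10, 20)}
mu_train = np.mean(list(train_means.values()), axis=0)
mu_test = np.mean(list(test_means.values()), axis=0)
c_train = pick_swap_class(train_means, mu_train, mu_test)
c_test = pick_swap_class(test_means, mu_test, mu_train)
print("swap", c_train, "<->", c_test)
```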
UMAP visualizations in the supplementary verify that the increase corresponds to larger OOD shifts. For CUB200-2011 and CARS196, we swap two classes per iteration, while for Stanford Online Products we swap 1000 classes due to its significantly higher class count. Moreover, to cover the overall spectrum of distribution shifts and ensure comparability between benchmarks, we also reverse the iteration procedure on CUB200-2011 to generate splits minimizing the approximate FID while still maintaining disjoint train and test classes.
To further increase $d(\mathcal{X}^{T}_{\mathrm{train}}, \mathcal{X}^{T}_{\mathrm{test}})$ beyond the convergence of the swapping procedure (see Fig. 1), we subsequently also identify and remove classes from both $\mathcal{X}^{T}_{\mathrm{train}}$ and $\mathcal{X}^{T}_{\mathrm{test}}$. More specifically, we remove classes $C_{\mathrm{train}}$ from $\mathcal{X}^{T}_{\mathrm{train}}$ that are closest to the mean of $\mathcal{X}^{T}_{\mathrm{test}}$ and vice versa. For $k$ steps, we successively repeat class removal as long as 50% of the original data is still maintained in these
additional train-test splits. Fig. 1 (rightmost) shows how splits generated through class removal progressively increase the approximate FID beyond what was achieved by swapping alone. To analyze whether the generated data splits are inherently biased toward the backbone network used for FID computation, we also repeat this procedure based on representations from different architectures, pretraining methods and datasets in the supplementary. Note that comparisons of absolute FID values between datasets may not be meaningful; we are mainly interested in distribution shifts within a given dataset distribution. Overall, using class swapping and removal we select splits that cover the broadest FID range possible while still maintaining sufficient data. Hence, our splits are significantly harder and more diverse than the default splits.
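The subsequent class-removal step admits an equally short sketch; as before, names and data are illustrative only.

```python
import numpy as np

def pick_removal_class(class_means: dict, mu_other: np.ndarray):
    """Class-removal step: drop the class whose mean is closest to the mean
    of the opposite split, pushing the approximate FID further up."""
    return min(class_means, key=lambda c: np.linalg.norm(class_means[c] - mu_other))

# Example reusing the per-class-mean convention from the swap sketch above.
rng = np.random.default_rng(1)
train_means = {c: rng.normal(size=16) for c in range(10)}
mu_test = rng.normal(size=16)
print("remove class", pick_removal_class(train_means, mu_test))
```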
4 Assessing the State of Generalization in Deep Metric Learning
This section assesses the state of zero-shot generalization in DML via a large experimental study of representative DML methods on our ooDML benchmark, offering a much more complete and thorough perspective on zero-shot generalization in DML as compared to previous DML studies [13, 54, 42, 39].
For our experiments we use the three most widely used benchmarks in DML: CUB200-2011 [68], CARS196 [30] and Stanford Online Products [43]. Implementation and training details not explicitly stated in the respective sections are listed in the supplementary. Moreover, to measure generalization performance, we resort to the most widely used metric for image retrieval in DML, Recall@k [25]. Additionally, we also evaluate results with mean average precision (mAP@1000) [54, 42], but provide the respective tables and visualizations in the supplementary when the interpretation of results is not impacted.
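For reference, a common way to compute Recall@k from embeddings and labels is sketched below. It assumes cosine similarity and excludes each query from its own retrieval set, which matches standard practice but is an assumption with respect to the paper's exact evaluation code.

```python
import numpy as np

def recall_at_k(embeddings: np.ndarray, labels: np.ndarray, k: int = 1) -> float:
    """Fraction of queries whose k nearest neighbors (excluding themselves)
    contain at least one sample of the same class."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = z @ z.T
    np.fill_diagonal(sims, -np.inf)          # a query may not retrieve itself
    topk = np.argsort(-sims, axis=1)[:, :k]  # indices of the k nearest neighbors
    hits = (labels[topk] == labels[:, None]).any(axis=1)
    return float(hits.mean())

rng = np.random.default_rng(0)
z = rng.normal(size=(200, 32))
y = rng.integers(0, 10, size=200)
print(recall_at_k(z, y, k=1))
```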
The exact training and test splits ultimately utilized throughout this work are selected based on Fig. 1 to ensure approximately uniform coverage of the spectrum of distribution shifts, ranging from the lowest (near i.i.d. splits) to the highest generated shift achieved with class removal. For CARS196 and Stanford Online Products, eight splits were investigated in total, including the original default benchmark split. For CUB200-2011, we select nine splits to also account for benchmark additions with reduced distribution shifts. The exact FID ranges are provided in the supplementary. Training on CARS196 and CUB200-2011 was done for a maximum of 200 epochs following the standard training protocols of [54], while 150 epochs were used for the much larger SOP dataset. Additional training details not directly stated in the respective sections can be found in the supplementary.
4.1 Zero-shot generalization under varying distribution shifts
Many different concepts have been proposed in DML to learn embedding functions ϕ that generalize from the training distribution to differently distributed test data. To analyze the zero-shot transfer capabilities of DML models, we consider representative approaches making use of the following concepts: (i) surrogate ranking tasks and tuple mining heuristics (Margin loss with distance-based sampling [71] and Multisimilarity loss [70]), (ii) geometric constraints or class proxies (ArcFace [8], ProxyAnchor [26]), (iii) learning of semantically diverse features (R-Margin [54]), self-supervised training (DiVA [38]) and adversarial regularization (Uniform Prior [59]), and (iv) knowledge self-distillation (S2SD [53]).
Fig. 2 (top) analyzes these methods for their generalization to distribution shifts of the varying degrees represented in ooDML. The top row shows absolute zero-shot retrieval performance measured by Recall@1 (results for mAP@1000 can be found in the supplementary) with respect to the FID between train and test sets. Additionally, Fig. 2 (bottom) examines the relative differences between each method's performance and the mean performance over all methods for each train-test split. Based on these experiments, we make the following main observations:
(i) Performance deteriorates monotonically with the distribution shifts. Independent of dataset, approach or evaluation metric, performance drops steadily as the distribution shift increases.
(ii) Relative performance differences are affected by train-test split difficulty. We see that the overall ranking between approaches oftentimes remains stable on the CARS196 and CUB200-2011 datasets. However, we also see that particularly on a large-scale dataset (SOP), established proxy-based approaches ArcFace [8] (which incorporates additional geometric constraints) and ProxyAnchor [26] are surprisingly susceptible to more difficult distribution shifts. Both methods perform poorly compared to the more consistent general trend of the other approaches. Hence, conclusions on the generality of methods solely based on the default benchmarks need to be handled with care, as at least for SOP, performance comparisons reported on single (e.g. the standard) data splits do not translate to more general train-test scenarios.
(iii) Conceptual differences matter at larger distribution shifts. While the ranking between most methods is largely consistent on CUB200-2011 and CARS196, their differences in performance become more prominent with increasing distribution shifts. The relative changes (deviation from the mean of all methods at each stage) depicted in Fig. 2 (bottom) clearly indicate that methods based on machine learning techniques such as self-supervision, feature diversity (DiVA, R-Margin) and self-distillation (S2SD) are among the best at generalizing in DML on more challenging splits, while retaining strong performance on more i.i.d. splits as well.
While directly reporting performance as a function of the individual distribution shifts offers a detailed overview, the overall comparison of approaches is typically based on single benchmark scores. To provide a single metric of comparison, we utilize the well-known Area-under-Curve (AUC) score to condense performance (based on either Recall@1 or mAP@1000) over all available distribution shifts into a single aggregated score indicating general zero-shot capabilities. This Aggregated Generalization Score (AGS) is computed after normalizing the FID scores of our splits to the interval [0, 1]. As Recall@k and mAP@k scores are naturally bounded to [0, 1], AGS is similarly bounded to [0, 1], with higher values indicating a better model. Our corresponding results are visualized in Fig. 3. Indeed, we see that AGS reflects our observations from Fig. 2, with self-supervision (DiVA)
and self-distillation (S2SD) generally performing best when facing unknown train-test shifts. Exact scores are provided in the supplementary.
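A possible implementation of AGS is sketched below: normalize the per-split FID values to [0, 1] and integrate the performance curve with the trapezoid rule. The split difficulties and Recall@1 values in the example are hypothetical.

```python
import numpy as np

def aggregated_generalization_score(fids, recalls) -> float:
    """AUC of performance over shift: normalize split FIDs to [0, 1] and
    integrate the Recall@1 curve with the trapezoid rule."""
    fids = np.asarray(fids, dtype=float)
    recalls = np.asarray(recalls, dtype=float)
    order = np.argsort(fids)
    x = (fids[order] - fids.min()) / (fids.max() - fids.min())
    r = recalls[order]
    return float(np.sum((x[1:] - x[:-1]) * (r[1:] + r[:-1]) / 2.0))

# Hypothetical split difficulties (FID) and Recall@1 values for one method.
print(aggregated_generalization_score(
    fids=[5, 20, 45, 80, 130], recalls=[0.68, 0.63, 0.55, 0.47, 0.38]))
```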
4.2 Consistency of structural representation properties on ooDML
Roth et al. [54] attempt to identify potential drivers of generalization in DML by measuring the following structural properties of a representation ϕ: (i) the mean distance $\pi_{\mathrm{inter}}$ between the centers of the embedded samples of each class, (ii) the mean distance $\pi_{\mathrm{intra}}$ between the embedded samples within a class, (iii) the ‘embedding space density’, measured as the ratio $\pi_{\mathrm{ratio}} = \pi_{\mathrm{intra}} / \pi_{\mathrm{inter}}$, and (iv) the ‘spectral decay’ $\rho(\Phi)$, measuring the degree of uniformity of the singular values obtained by singular value decomposition of the training sample representations, which indicates the number of significant directions of variance. For a more detailed description, we refer to [54]. These metrics are indeed empirically shown to exhibit a certain correlation with generalization performance on the default benchmark splits. In contrast, we are now interested in whether these observations hold when measuring generalization performance on the ooDML train-test splits of varying difficulty.
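The sketch below approximates these four structural metrics from embeddings and labels. The exact definitions in [54] may differ in detail (we take the spectral decay to be a KL divergence from the uniform distribution to the normalized singular-value spectrum); treat this as an illustrative approximation.

```python
import numpy as np

def mean_pairwise_distance(x: np.ndarray) -> float:
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return d[np.triu_indices(len(x), k=1)].mean()

def structural_metrics(embeddings: np.ndarray, labels: np.ndarray):
    """Approximate pi_inter, pi_intra, pi_ratio and spectral decay rho [54]."""
    classes = np.unique(labels)
    centers = np.stack([embeddings[labels == c].mean(0) for c in classes])
    pi_inter = mean_pairwise_distance(centers)
    pi_intra = np.mean([mean_pairwise_distance(embeddings[labels == c]) for c in classes])
    sv = np.linalg.svd(embeddings - embeddings.mean(0), compute_uv=False)
    p = sv / sv.sum()                      # normalized singular value spectrum
    u = np.full_like(p, 1.0 / len(p))
    rho = float(np.sum(u * np.log(u / (p + 1e-12))))  # KL(uniform || spectrum)
    return float(pi_inter), float(pi_intra), float(pi_intra / pi_inter), rho

rng = np.random.default_rng(0)
z = rng.normal(size=(300, 16))
y = rng.integers(0, 5, size=300)
print(structural_metrics(z, y))
```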
We visualize our results in Fig. 4 for CUB200-2011 and SOP, with CARS196 provided in the supplementary. For better visualization, we normalize all values obtained for the metrics (i)-(iv) and the recall performance (Recall@1) to the interval [0, 1] for each train-test split. Thus, the relation between structural properties and generalization performance becomes comparable across all train-test splits, allowing us to examine whether superior generalization is still correlated with the structural properties of the learned representation ϕ, i.e. whether the correlation is independent of the underlying distribution shifts. For a perfectly descriptive metric, one would expect strong correlation between the normalized metric and the normalized generalization performance jointly across shifts. Unfortunately, our results show little indication of any structural metric being consistently correlated with generalization performance over varying distribution shifts. This is also observed when evaluating only basic, purely discriminative DML objectives as was done in [54] for the default split, as well as when incorporating methods that extend and change the base DML training setup (such as DiVA [38] or adversarial regularization [59]).
This not only demonstrates that experimental conclusions derived from the analysis of only a single benchmark split may not hold for overall zero-shot generalization, but also that future research should consider more general learning problems and difficulties to better understand the conceptual impact of various regularization approaches. To this end, our benchmark protocol offers more comprehensive experimental ground for future studies to find potential drivers of zero-shot generalization in DML.
4.3 Network capacity and pretrained representations
A common way to improve generalization, as also highlighted in [54] and [42], is to select a stronger backbone architecture for feature extraction. In this section, we look at how changes in network capacity can influence OOD generalization across distribution shifts. Moreover, we also analyze the zero-shot performance of a diverse set of state-of-the-art pretraining approaches.
Influence of network capacity. In Fig. 5, we compare different members of the ResNet architecture family [20] with increasing capacity, each of which achieves increasingly higher performance on i.i.d. test benchmarks such as ImageNet [7], going from a small ResNet18 (R18) over ResNet50 (R50) to ResNet101 (R101) variants. As can be seen, while larger network capacity helps to some extent, performance saturates in zero-shot transfer settings, regardless of the DML approach and dataset (in particular also on the large-scale SOP dataset). Interestingly, we also observe that the performance drops with increasing distribution shifts are consistent across network capacities, suggesting that zero-shot generalization is driven less by network capacity and more by conceptual choices in the learning formulation (compare Fig. 2).
Generic representations versus Deep Metric Learning. Recently, self-supervised representation learning has taken great leaps with ever stronger models trained on huge amounts of image [29, 47] and language data [9, 35, 2]. These approaches are designed to learn expressive, well-transferring features, and methods like CLIP [47] even prove surprisingly useful for zero-shot classification. We now evaluate and compare such representations against state-of-the-art DML models to understand whether generic representations that are readily available nowadays actually pose an alternative to explicit application of DML. We select the state-of-the-art self-supervision model SwAV [3] (ResNet50 backbone), CLIP [47] trained via natural language supervision on a large dataset of 400 million image and sentence pairs (VisionTransformer [10] backbone), BiT(-M) [29], which trains a ResNet50-V2 [29] on both the standard ImageNet [7] (1 million training samples) and the ImageNet-21k dataset [7, 50] with 14 million training samples and over 21 thousand classes, an EfficientNet-B0 [63] trained on ImageNet, and a standard baseline ResNet50 network trained on ImageNet. Note that none of these representations has been additionally adapted to the benchmark sets, in contrast to the DML approaches, which have been trained on the respective train splits.
The results presented in Fig. 6 show large performance differences of the pretrained representations, which are largely dependent on the test dataset. While BiT outperforms the DML state-of-the-art on CUB200-2011 without any finetuning, it significantly trails behind the DML models on the other
two datasets. On CARS196, only CLIP comes close to the DML approaches when the distribution shift is sufficiently large. Finally, on SOP, none of these models comes even close to the adapted DML methods. This shows that although representations learned by extensive pretraining can offer strong zero-shot generalization, their performance heavily depends on the target dataset and the specific model. Furthermore, the generalization abilities notably depend on the size of the pretraining dataset (compare e.g. BiT-1k vs. BiT-21k or CLIP), which is significantly larger than the number of training images seen by the DML methods. We see that only actual training on these datasets provides sufficiently reliable performance.
4.4 Few-shot adaptation boosts generalization performance in DML
Since distribution shifts can be arbitrarily large, the zero-shot transfer of ϕ can be ill-posed. Features learned on a training set $\mathcal{X}_{\mathrm{train}}$ will not meaningfully transfer to test samples $\mathcal{X}_{\mathrm{test}}$ once they are sufficiently far from $\mathcal{X}_{\mathrm{train}}$, as already indicated by Fig. 2. As a remedy, few-shot learning [60, 67, 14, 48, 32, 6, 66] assumes a few samples of the test distribution to be available for adjusting a previously learned representation. While these approaches are typically explicitly trained for fast adaptation to novel classes, we are now interested in whether similar adaptation of DML representations ϕ helps to bridge increasingly large distribution shifts.
To investigate this hypothesis, we follow the evaluation protocol of few-shot learning and use k representatives (also referred to as shots) of each class from a test set $\mathcal{X}_{\mathrm{test}}$ as a support set for finetuning the penultimate embedding network layer. The remaining test samples then represent the new test set used to evaluate retrieval performance, also referred to as the query set. For evaluation we perform 10 episodes, i.e. we repeat and average the adaptation of ϕ over 10 different, randomly sampled support and corresponding query sets. Independent of the DML model used for learning the original representation ϕ on $\mathcal{X}_{\mathrm{train}}$, adaptation to the support data is conducted using the Margin loss [71] objective with distance-based sampling [71] due to its faster convergence. This ensures fair comparison of the adaptation benefit to ϕ across methods, and also allows adapting complex approaches like self-supervision (DiVA [38]) to the small number of samples in the support sets.
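A minimal sketch of this episode-style adaptation is given below. It freezes all but the final embedding layer and uses a simple margin-style loss as a stand-in for the Margin loss with distance-based sampling used in the paper; the network, loss details and hyperparameters are assumptions of the sketch.

```python
import torch

def adapt_embedding_head(phi: torch.nn.Sequential, support_x, support_y,
                         epochs: int = 10, lr: float = 1e-3):
    """Finetune only the last layer of phi on a k-shot support set with a
    simple margin-style loss (a stand-in for Margin loss [71])."""
    for p in phi[:-1].parameters():       # freeze everything but the head
        p.requires_grad_(False)
    opt = torch.optim.Adam(phi[-1].parameters(), lr=lr)
    eye = torch.eye(len(support_x), dtype=torch.bool)
    for _ in range(epochs):
        z = torch.nn.functional.normalize(phi(support_x), dim=1)
        dists = torch.cdist(z, z)
        same = support_y[:, None] == support_y[None, :]
        pos = dists[same & ~eye].mean()   # pull same-class samples together
        neg = dists[~same].mean()         # push different classes apart
        loss = torch.relu(pos - neg + 0.2)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return phi

# Hypothetical 2-shot support set: 10 classes with 2 samples each.
phi = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ReLU(), torch.nn.Linear(32, 16))
x, y = torch.randn(20, 64), torch.arange(10).repeat_interleave(2)
adapt_embedding_head(phi, x, y)
```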
Fig. 7 shows 2- and 5-shot results on CUB200-2011, with CARS196 available in the supplementary. SOP is not considered since each of its classes is already composed of a small number of samples. As we see, even very limited in-domain data can significantly improve generalization performance, with the benefit becoming stronger for larger distribution shifts. Moreover, we observe that weaker approaches like ArcFace [8] seem to benefit more than state-of-the-art methods like S2SD [53] or DiVA [38]. We presume this is because the underlying concepts of the latter already encourage learning of more robust and general features. To conclude, few-shot learning provides a substantial and reliable benefit when facing OOD learning settings where the shift is not known a priori.
5 Conclusion
In this work we analyzed zero-shot transfer of image representations learned by Deep Metric Learning (DML) models. We proposed a systematic construction of train-test data splits of increasing
difficulty, as opposed to standard evaluation protocols that test out-of-distribution generalization only on single data splits of fixed difficulty. Based on this, we presented the novel benchmark ooDML and thoroughly assessed current DML methods. Our study reveals the following main findings:
Standard evaluation protocols are insufficient to probe general out-of-distribution transfer: Prevailing train-test splits in DML are often close to i.i.d. evaluation settings. Hence, they only provide limited insights into the impact of train-test distribution shift on generalization performance. Our benchmark ooDML alleviates this issue by evaluating a large, controllable and measurable spectrum of problem difficulty to facilitate future research.
Larger distribution shifts show the impact of conceptual differences in DML approaches: Our study reveals that generalization performance degrades consistently with increasing problem difficulty for all DML methods. However, certain concepts underlying the approaches are shown to be more robust to shifts than others, such as semantic feature diversity and knowledge distillation.
Generic, self-supervised representations without finetuning can surpass dedicated data adaptation: When facing large distribution shifts, representations learned only by self-supervision on large amounts of unlabelled data are competitive with explicit DML training without any finetuning. However, their performance is heavily dependent on the data distribution and the models themselves.
Few-shot adaptation consistently improves out-of-distribution generalization in DML: Even very few examples from a target data distribution effectively help to adapt DML representations. The benefit becomes even more prominent with increasing train-test distribution shifts, and encourages further research into few-shot adaptation in DML.
Funding transparency statement This research has been funded by the German Federal Ministry for Economic Affairs and Energy within the project “KI-Absicherung – Safe AI for automated driving” and by the German Research Foundation (DFG) within projects 371923335 and 421703927. Moreover, it was funded in part by a CIFAR AI Chair at the Vector Institute, Microsoft Research, and an NSERC Discovery Grant. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute www.vectorinstitute.ai/#partners.
Acknowledgements We thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting K.R; K.R. acknowledges his membership in the European Laboratory for Learning and Intelligent Systems (ELLIS) PhD program. | 1. What is the focus of the paper regarding deep metric learning?
2. What are the strengths of the proposed approach, particularly in quantifying distribution shifts?
3. What are the weaknesses of the paper, especially regarding its contributions to the problem and insights into method performances?
4. How does the reviewer assess the usefulness of the proposed benchmark for deep metric learning? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a new benchmark for deep metric learning under varying degrees of distribution shift. First, it introduces a method for creating train/test splits with increasing FID, thereby increasing the distribution shift between train and test. Second, various metric learning methods are compared under benchmarks of varying difficulty.
Review
[Strengths]
It is interesting and novel to quantify (a) the degree of distribution shift & (b) the effect of such shift.
The paper is largely well-written and easy to understand.
[Weaknesses]
The paper introduces the problem, but it does not tell us much about the problem itself. From machine learning theory, we know that out-of-distribution generalization is hard. Introducing a simple method to mitigate the problem would be a plus for this paper.
It does not provide much insight into why different methods perform better or worse under out-of-distribution scenarios.
I'm not sure how useful this benchmark is. Standard machine learning methods are not designed to work under out-of-distribution shift. Also, I think this benchmark is meaningful mainly when the ranking between methods under the i.i.d. benchmark differs from that under the out-of-distribution benchmark. From what I observed, the relative ranking is largely preserved across data splits of different difficulty.
NIPS | Title
Characterizing Generalization under Out-Of-Distribution Shifts in Deep Metric Learning
Abstract
Deep Metric Learning (DML) aims to find representations suitable for zero-shot transfer to a priori unknown test distributions. However, common evaluation protocols only test a single, fixed data split in which train and test classes are assigned randomly. More realistic evaluations should consider a broad spectrum of distribution shifts with potentially varying degree and difficulty. In this work, we systematically construct train-test splits of increasing difficulty and present the ooDML benchmark to characterize generalization under out-of-distribution shifts in DML. ooDML is designed to probe the generalization performance on much more challenging, diverse train-to-test distribution shifts. Based on our new benchmark, we conduct a thorough empirical analysis of state-of-the-art DML methods. We find that while generalization tends to consistently degrade with difficulty, some methods are better at retaining performance as the distribution shift increases. Finally, we propose few-shot DML as an efficient way to consistently improve generalization in response to unknown test shifts presented in ooDML1.
1 Introduction
Image representations that generalize well are the foundation of numerous computer vision tasks, such as image and video retrieval [61, 71, 54, 38, 1], face (re-)identification [57, 34, 8] and image classification [65, 4, 19, 40, 37]. Ideally, these representations should not only capture data within the training distribution, but also transfer to new, out-of-distribution (OOD) data. However, in practice, achieving effective OOD generalization is more challenging than in-distribution [28, 12, 21, 49, 31, 55]. In the case of zero-shot generalization, where train and test classes are completely distinct, Deep Metric Learning (DML) is used to learn metric representation spaces that capture and transfer visual similarity to unseen classes, constituting a priori unknown test distributions with unspecified shift. To approximate such a setting, current DML benchmarks use single, predefined and fixed data splits of disjoint train and test classes, which are assigned arbitrarily [71, 8, 61, 24, 11, 33, 74, 51, 54, 42, 26, 64, 58]. This means that (i) generalization is only evaluated on a fixed problem difficulty, (ii)
1Code available here: https://github.com/CompVis/Characterizing_Generalization_in_DML ∗ Equal contribution, alphabetical order, † equal supervision, x now at University of Tuebingen.
35th Conference on Neural Information Processing Systems (NeurIPS 2021), virtual.
generalization difficulty is only implicitly defined by the arbitrary data split, (iii) the distribution shift is not measured and (iv) cannot be not changed. As a result, proposed models can overfit to these singular evaluation settings, which puts into question the true zero-shot generalization capabilities of proposed DML models.
In this work, we first construct a new benchmark ooDML to characterize generalization under outof-distribution shifts in DML. We systematically build ooDML as a comprehensive benchmark for evaluating OOD generalization in changing zero-shot learning settings which covers a much larger variety of zero-shot transfer learning scenarios potentially encountered in practice. We systematically construct training and testing data splits of increasing difficulty as measured by their Frechet-Inception Distance [23] and extensively evaluate the performance of current DML approaches.
Our experiments reveal that the standard evaluation splits are often close to i.i.d. evaluation settings. In contrast, our novel benchmark continually evaluates models on significantly harder learning problems, providing a more complete perspective into OOD generalization in DML. Second, we perform a large-scale study of representative DML methods on ooDML, and study the actual benefit of underlying regularizations such as self-supervision [38], knowledge distillation [53], adversarial regularization [59] and specialized objective functions [71, 70, 8, 26, 54]. We find that conceptual differences between DML approaches play a more significant role as the distribution shift to the test split becomes harder. Finally, we present a study on few-shot DML as a simple extension to achieve systematic and consistent OOD generalization. As the transfer learning problem becomes harder, even very little in-domain knowledge effectively helps to adjust learned metric representation spaces to novel test distributions. We publish our code and train-test splits on three established benchmark sets, CUB2002011 [68], CARS196 [30] and Stanford Online Products (SOP) [43]. Similarly, we provide training and evaluation episodes for further research into few-shot DML. Overall, our contributions can be summarized as:
• Proposing the ooDML benchmark to create a set of more realistic train-test splits that evaluate DML generalization capabilities under increasingly more difficult zero-shot learning tasks.
• Analyzing the current DML method landscape under ooDML to characterize benefits and drawbacks of different conceptual approaches to DML.
• Introducing and examining few-shot DML as a potential remedy for systematically improved OOD generalization, especially when moving to larger train-test distribution shifts.
2 Related Work
DML has become essential for many applications, especially in zero-shot image and video retrieval [61, 71, 51, 24, 1, 36]. Proposed approaches most commonly rely on a surrogate ranking task over tuples during training [62], ranging from simple pairs [17] and triplets [57] to higher-order quadruplets [5] and more generic n-tuples [61, 43, 22, 70]. These ranking tasks can also leverage additional context such as geometrical embedding structures [69, 8]. However, due to the exponentially increased complexity of tuple sampling spaces, these methods are usually also combined with tuple sampling objectives, relying on predefined or learned heuristics to avoid training over tuples that are too easy or too hard [57, 72] or reducing tuple redudancy encountered during training [71, 15, 18, 52]. More recent work has tackled sampling complexity through the usage of proxy-representations utilized as sample stand-ins during training, following a NCA [16] objective [41, 26, 64], leveraging softmax-style training through class-proxies [8, 73] or simulating intraclass structures [46].
Unfortunately, the true benefit of these proposed objectives has been put into question recently, with [54] and [42] highlighting high levels of performance saturation of these discriminative DML objectives on default benchmark splits under fair comparison. Instead, orthogonal work extending the standard DML training paradigm through multi-task approaches [56, 51, 39], boosting [44, 45], attention [27], sample generation [11, 33, 74], multi-feature learning [38] or self-distillation [53] have shown more promise with strong relative improvements under fair comparison [54, 38], however still only in single split benchmark settings. It thus remains unclear how well these methods generalize in more realistic settings [28] under potentially much more challenging, different train-to-test distribution shifts, which we investigate in this work.
3 ooDML: Constructing a Benchmark for OOD Generalization in DML
An image representation ϕ(x) learned on samples x ∈ Xtrain drawn from some training distribution generalizes well if can transfer to test data Xtest that are not observed during training. In the particular case of OOD generalization, the learned representation ϕ is supposed to transfer to samples Xtest which are not independently and identically distributed (i.i.d.) to Xtrain. A successful approach to learning such representations is DML, which is evaluated for the special case of zero-shot generalization, i.e. the transfer of ϕ to distributions of unknown classes [57, 71, 24, 8, 54, 42]. DML models aim to learn an embedding ϕ mapping datapoints x into an embedding space Φ, which allows to measure similarity between xi and xj as g(ϕ(xi), ϕ(xj)). Typically, g is a predefined metric, such as the Euclidean or Cosine distance and ϕ is parameterized by a deep neural network.
In realistic zero-shot learning scenarios, test distributions are not specified a priori. Thus, their respective distribution shifts relative to the training, which indicates the difficulty of the transfer learning problem, is unknown as well. To determine the generalization capabilities of ϕ, we would ideally measure its performance on different test distributions covering a large spectrum of distribution shifts, which we will also refer to as “problem difficulties" in this work. Unfortunately, standard evaluation protocols test the generalization of ϕ on a single and fixed train-test data split of predetermined difficulty, hence only allow for limited conclusions about zero-shot generalization.
To thoroughly assess and compare zero-shot generalization of DML models, we aim to build an evaluation protocol that resembles the undetermined nature of the transfer learning problem. In order to achieve this, we need to be able to change, measure and control the difficulty of train-test data splits. To this end, we present an approach to construct multiple train-test splits of measurably increasing difficulty to investigate out-of-distribution generalization in DML, which make up the ooDML benchmark. Our generated train-test splits resort to the established DML benchmark sets, and are subsequently used in Sec. 4 to thoroughly analyze the current state-of-the-art in DML. For future research, this approach is also easily applicable to other datasets and transfer learning problems.
3.1 Measuring the gap between train and test distributions
To create our train-test data splits, we need a way of measuring the distance between image datasets. This is a difficult task due to high dimensionality and natural noise in the images. Recently, Frechet Inception Distance (FID) [23] was proposed to measure the distance between two image distributions by using the neural embeddings of an Inception-v3 network trained for classification on the ImageNet dataset. FID assumes that the embeddings of the penultimate layer follow a Gaussian distribution, with a given mean µX and covariance ΣX for a distribution of images X . The FID between two data distributions X1 and X2 is defined as:
d(X1,X2) ≜ ∥µX1 − µX2∥ 2 2 + Tr(ΣX1 +ΣX2 − 2(ΣX1ΣX2) 1 2 ) , (1)
In this paper, instead of the Inception network, we use the embeddings of a ResNet-50 classifier (Frechet ResNet Distance) for consistency with most DML studies (see e.g. [71, 64, 26, 56, 51, 38, 54, 58]). For simplicity, in the following sections we will still use the abbreviation FID.
3.2 On the issue with default train-test splits in DML
To motivate the need for more comprehensive OOD evaluation protocols, we look at the split difficulty as measured by FID of typically used train-test splits and compare to i.i.d. sampling of training and test sets from the same benchmark. Empirical results in Tab. 1 show that commonly utilized DML
train-test splits are very close to in-distribution learning problems when compared to more out-ofdistribution splits in CARS196 and SOP (see Fig. 1). This indicates that semantic differences due to disjoint train and test classes, do not necessarily relate to actual significant distribution shifts between the train and test set. This also explains the consistently lower zero-shot retrieval performance on CUB200-2011 as compared to both CARS196 and SOP in literature [71, 70, 24, 54, 42, 38], despite SOP containing significantly more classes with fewer examples per class. In addition to the previously discussed issues of DML evaluation protocols, this further questions conclusions drawn from these protocols about the OOD generalization of representations ϕ.
3.3 Creating train-test splits of increasing difficulty
Let Xtrain and Xtest denote the original train and test set of a given benchmark dataset D = Xtrain ∪ Xtest. To generate train-test splits of increasing difficulty while retaining the available data D and maintaining balance of their sizes, we exchange samples between them. To ensure and emphasize semantic consistency and unbiased data distributions with respect to image context unrelated to the target object categories, we swap entire classes instead of individual samples. Measuring distribution similarity based on FID, the goal is then to identify classes Ctrain ⊂ Xtrain and Ctest ⊂ Xtest whose exchange yields higher FID d(Xtrain,Xtest). To this end, similar to other works [33, 51, 38], we find resorting to an unimodal approximation of the intraclass distributions sufficient and approximate FID by only considering the class means and neglect the covariance in Eq. 1. We select Ctrain and Ctest as
C∗train = argmax Ctrain∈Xtrain ∥µCtrain − µXtrain∥2 − ∥µCtrain − µXtest∥2 (2)
C∗test = argmax Ctest∈Xtest ∥µCtest − µXtest∥2 − ∥µCtest − µXtrain∥2 (3)
where we measure distance to mean class-representations µXC . By iteratively exchanging classes between data splits, i.e. X t+1train = (X ttrain \ C∗train) ∪ C∗test and vice versa, we obtain a more difficult train-test split (X t+1train ,X t+1 test ) at iteration step t. Hence, we obtain a sequence of train-test splits XD = ((X 0train,X 0test), . . . , (X ttrain,X ttest), . . . , (X Ttrain,X Ttest)), with X 0train ≜ Xtrain and X 0test ≜ Xtest. Fig. 1 (columns 1-3) indeed shows that our FID approximation yields data splits with gradually increasing approximate FID scores with each swap until the scores cannot be further increased by swapping classes.
UMAP visualizations in the supplementary verify that the increase corresponds to larger OOD shifts. For CUB200-2011 and CARS196, we swap two classes per iteration, while for Stanford Online Products we swap 1000 classes due to a significantly higher class count. Moreover, to cover the overall spectrum of distribution shifts and ensure comparability between benchmarks we also reverse the iteration procedure on CUB200-2011 to generate splits minimizing the approximate FID while still maintaining disjunct train and test classes.
To further increase d(X Ttrain,X Ttest) beyond convergence (see Fig. 1) of the swapping procedure, we subsequently also identify and remove classes from both X Ttrain and X Ttest. More specifically, we remove classes Ctrain from X Ttrain that are closest to the mean of X Ttest and vice versa. For k steps, we successively repeat class removal as long as 50% of the original data is still maintained in these
additional train-test splits. Fig. 1 (rightmost) shows how splits generated through class removal progressively increase the approximate FID beyond what was achieved only by swapping. To analyze if the generated data splits are not inherently biased to the used backbone network for FID computation, we also repeat this procedure based on representations from different architectures, pretraining methods and datasets in the supplementary. Note, that comparison of absolute FID values between datasets may not be meaningful and we are mainly interested in distribution shifts within a given dataset distribution. Overall, using class swapping and removal we select splits that cover the broadest FID range possible, while still maintaining sufficient data. Hence, our splits are significantly harder and more diverse than the default splits.
4 Assessing the State of Generalization in Deep Metric Learning
This section assesses the state of zero-shot generalization in DML via a large experimental study of representative DML methods on our ooDML benchmark, offering a much more complete and thorough perspective on zero-shot generalization in DML as compared to previous DML studies [13, 54, 42, 39].
For our experiments we use the three most widely used benchmarks in DML, CUB200-2011[68], CARS196[30] and Stanford Online Products[43]. For a complete list of implementation and training details see the supplementary if not explicitly stated in the respective sections. Moreover, to measure generalization performance, we resort to the most widely used metric for image retrieval in DML, Recall@k [25]. Additionally, we also evaluate results over mean average precision (mAP@1000) [54, 42], but provide respective tables and visualizations in the supplementary when the interpretation of results is not impacted.
The exact training and test splits ultimately utilized throughout this work are selected based on Fig. 1 to ensure approximately uniform coverage of the spectrum of distribution shifts within intervals ranging from the lowest (near i.i.d. splits) to the highest generated shift achieved with class removal. For experiments on CARS196 and Stanford Online Products, eight total splits were investigated, included the original default benchmark split. For CUB200-2011, we select nine splits to also account for benchmark additions with reduced distributional shifts. The exact FID ranges are provided in the supplementary. Training on CARS196 and CUB200-2011 was done for a maximum of 200 epochs following standard training protocols utilized in [54], while 150 epochs were used for the much larger SOP dataset. Additional training details if not directly stated in the respective sections can be found in the supplementary.
4.1 Zero-shot generalization under varying distribution shifts
Many different concepts have been proposed in DML to learn embedding functions ϕ that generalize from the training distribution to differently distributed test data. To analyze the zero-shot transfer capabilities of DML models, we consider representative approaches making use of the following concepts: (i) surrogate ranking tasks and tuple mining heuristics (Margin loss with distance-based sampling [71] and Multisimilarity loss [70]), (ii) geometric constraints or class proxies (ArcFace [8], ProxyAnchor [26]), (iii) learning of semantically diverse features (R-Margin [54]) and selfsupervised training (DiVA [38]), adversarial regularization (Uniform Prior [59]) and (iv) knowledge self-distillation (S2SD [53]).
Fig. 2 (top) analyzes these methods for their generalization to distribution shifts the varying degrees represented in ooDML. The top row shows absolute zero-shot retrieval performance measured on Recall@1 (results for mAP@1000 can be found in the supplementary) with respect to the FID between train and test sets. Additionally, Fig. 2 (bottom) examines the relative differences of performance to the performance mean over all methods for each train-test split. Based on these experiments, we make the following main observations:
(i) Performance deteriorates monotonically with the distribution shifts. Independent of dataset, approach or evaluation metric, performance drops steadily as the distribution shift increases.
(ii) Relative performance differences are affected by train-test split difficulty. We see that the overall ranking between approaches oftentimes remains stable on the CARS196 and CUB2002011 datasets. However, we also see that particularly on a large-scale dataset (SOP), established proxy-based approaches ArcFace [8] (which incorporates additional geometric constraints) and ProxyAnchor [26] are surprisingly susceptible to more difficult distribution shifts. Both methods perform poorly compared to the more consistent general trend of the other approaches. Hence, conclusions on the generality of methods solely based on the default benchmarks need to be handled with care, as at least for SOP, performance comparisons reported on single (e.g. the standard) data splits do not translate to more general train-test scenarios.
(iii) Conceptual differences matter at larger distribution shifts. While the ranking between most methods is largely consistent on CUB200-2011 and CARS196, their differences in performance become more prominent with increasing distribution shifts. The relative changes (deviation from the mean of all methods at each stage) depicted in Fig. 2 (bottom) clearly indicate that methods based on techniques such as self-supervision and feature diversity (DiVA, R-Margin) and self-distillation (S2SD) are among the best at generalizing in DML on more challenging splits, while retaining strong performance on more i.i.d. splits as well.
While directly stating performance in dependence on the individual distribution shifts offers a detailed overview, the overall comparison of approaches is typically based on single benchmark scores. To provide a single metric of comparison, we utilize the well-known Area-under-Curve (AUC) score to condense performance (either based on Recall@1 or mAP@1000) over all available distribution shifts into a single aggregated score indicating general zero-shot capabilities. This Aggregated Generalization Score (AGS) is computed after normalizing the FID scores of our splits to the interval [0, 1]. As Recall@k and mAP@k scores are naturally bounded to [0, 1], AGS is similarly bounded to [0, 1], with higher values indicating better models. Our corresponding results are visualized in Fig. 3. Indeed, we see that AGS reflects our observations from Fig. 2, with self-supervision (DiVA)
and self-distillation (S2SD) generally performing best when facing unknown train-test shifts. Exact scores are provided in the supplementary.
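Under the definition above, AGS can be computed with a simple trapezoidal integration. The sketch below is a minimal illustration of this aggregation, assuming per-split FID values and performance scores are already available.

```python
import numpy as np

def aggregated_generalization_score(fids, scores):
    """AUC of performance (Recall@1 or mAP@1000, each in [0, 1]) over
    FID values min-max normalized to [0, 1]; higher is better."""
    fids = np.asarray(fids, dtype=float)
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(fids)
    f, s = fids[order], scores[order]
    f_norm = (f - f.min()) / (f.max() - f.min())
    return np.trapz(s, f_norm)   # bounded to [0, 1] like its inputs
```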
4.2 Consistency of structural representation properties on ooDML
Roth et al. [54] attempt to identify potential drivers of generalization in DML by measuring the following structural properties of a representation ϕ: (i) the mean distance πinter between the centers of the embedded samples of each class, (ii) the mean distance πintra between the embedded samples within a class, (iii) the ‘embedding space density’ measured as the ratio πratio = πintra/πinter, and (iv) the ‘spectral decay’ ρ(Φ) measuring the degree of uniformity of the singular values obtained by singular value decomposition of the training sample representations, which indicates the number of significant directions of variance. For a more detailed description, we refer to [54]. These metrics are indeed empirically shown to exhibit a certain correlation with generalization performance on the default benchmark splits. In contrast, we are now interested in whether these observations hold when measuring generalization performance on the ooDML train-test splits of varying difficulty.
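For illustration, the following sketch computes these four quantities from a matrix of embeddings and class labels. It follows the descriptions above, with the spectral decay realized here as a KL divergence between the normalized singular value spectrum and a uniform distribution; the exact normalizations used in [54] may differ.

```python
import numpy as np

def structural_metrics(embeddings, labels):
    labels = np.asarray(labels)
    classes = np.unique(labels)
    centers = np.stack([embeddings[labels == c].mean(0) for c in classes])
    # (i) mean distance between class centers
    center_dists = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    pi_inter = center_dists[np.triu_indices(len(classes), k=1)].mean()
    # (ii) mean pairwise distance within each class
    intra = []
    for c in classes:
        e = embeddings[labels == c]
        if len(e) < 2:
            continue
        d = np.linalg.norm(e[:, None] - e[None, :], axis=-1)
        intra.append(d[np.triu_indices(len(e), k=1)].mean())
    pi_intra = float(np.mean(intra))
    # (iii) embedding space density
    pi_ratio = pi_intra / pi_inter
    # (iv) spectral decay: KL(spectrum || uniform); lower = more uniform
    s = np.linalg.svd(embeddings, compute_uv=False)
    p = s / s.sum()
    rho = float(np.sum(p * np.log(p * len(p) + 1e-12)))
    return pi_inter, pi_intra, pi_ratio, rho
```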
We visualize our results in Fig. 4 for CUB200-2011 and SOP, with CARS196 provided in the supplementary. For better visualization, we normalize all measured values obtained for the metrics (i)-(iv) and the recall performance (Recall@1) to the interval [0, 1] for each train-test split. Thus, the relation between structural properties and generalization performance becomes comparable across all train-test splits, allowing us to examine whether superior generalization is still correlated with the structural properties of the learned representation ϕ, i.e. whether the correlation is independent of the underlying distribution shifts. For a perfectly descriptive metric, one should expect a strong correlation between the normalized metric and normalized generalization performance jointly across shifts. Unfortunately, our results show only a small indication of any structural metric being consistently correlated with generalization performance over varying distribution shifts. This is also observed when evaluating only against basic, purely discriminative DML objectives as was done in [54] for the default split, as well as when incorporating methods that extend and change the base DML training setup (such as DiVA [38] or adversarial regularization [59]).
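A sketch of this consistency check — per-split min-max normalization followed by a rank correlation pooled over all splits — is given below; the function names are illustrative and scipy is assumed to be available.

```python
import numpy as np
from scipy.stats import spearmanr

def normalize_per_split(values):
    v = np.asarray(values, dtype=float)
    return (v - v.min()) / (v.max() - v.min() + 1e-12)

def consistency(metric_by_split, recall_by_split):
    """metric_by_split, recall_by_split: (n_splits, n_methods) arrays of one
    structural metric and Recall@1 per method and train-test split."""
    m = np.concatenate([normalize_per_split(row) for row in metric_by_split])
    r = np.concatenate([normalize_per_split(row) for row in recall_by_split])
    return spearmanr(m, r).correlation
```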
This not only demonstrates that experimental conclusions derived from the analysis of only a single benchmark split may not hold for overall zero-shot generalization, but also that future research should consider more general learning problems and difficulties to better understand the conceptual impact of various regularization approaches. To this end, our benchmark protocol offers a more comprehensive experimental ground for future studies to find potential drivers of zero-shot generalization in DML.
4.3 Network capacity and pretrained representations
A common way to improve generalization, as also highlighted in [54] and [42], is to select a stronger backbone architecture for feature extraction. In this section, we look at how changes in network capacity can influence OOD generalization across distribution shifts. Moreover, we also analyze the zero-shot performance of a diverse set of state-of-the-art pretraining approaches.
Influence of network capacity. In Fig. 5, we compare different members of the ResNet architecture family [20] with increasing capacity, each of which achieves increasingly higher performance on i.i.d. test benchmarks such as ImageNet [7], going from a small ResNet18 (R18) over ResNet50 (R50) to ResNet101 (R101). As can be seen, while larger network capacity helps to some extent, performance actually saturates in zero-shot transfer settings, regardless of the DML approach and dataset (in particular also the large-scale SOP dataset). Interestingly, we also observe that the performance drops with increasing distribution shifts are consistent across network capacities, suggesting that zero-shot generalization is driven less by network capacity than by conceptual choices of the learning formulation (compare Fig. 2).
Generic representations versus Deep Metric Learning. Recently, self-supervised representation learning has taken great leaps with ever stronger models trained on huge amounts of image [29, 47] and language data [9, 35, 2]. These approaches are designed to learn expressive, well-transferring features, and methods like CLIP [47] even prove surprisingly useful for zero-shot classification. We now evaluate and compare such representations against state-of-the-art DML models to understand whether generic representations that are readily available nowadays actually pose an alternative to explicit application of DML. We select the state-of-the-art self-supervision model SwAV [3] (ResNet50 backbone); CLIP [47], trained via natural language supervision on a large dataset of 400 million image and sentence pairs (Vision Transformer [10] backbone); BiT(-M) [29], which trains a ResNet50-V2 [29] on both the standard ImageNet [7] (1 million training samples) and the ImageNet-21k dataset [7, 50] with 14 million training samples and over 21 thousand classes; an EfficientNet-B0 [63] trained on ImageNet; and a standard baseline ResNet50 network trained on ImageNet. Note that none of these representations has been additionally adapted to the benchmark sets, in contrast to the DML approaches, which have been trained on the respective train splits.
The results presented in Fig. 6 show large performance differences of the pretrained representations, which are largely dependent on the test dataset. While BiT outperforms the DML state-of-the-art on CUB200-2011 without any finetuning, it significantly trails behind the DML models on the other
two datasets. On CARS196, only CLIP comes close to the DML approaches when the distribution shift is sufficiently large. Finally, on SOP, none of these models comes even close to the adapted DML methods. This shows that although representations learned by extensive pretraining can offer strong zero-shot generalization, their performance heavily depends on the target dataset and the specific model. Furthermore, the generalization abilities notably depend on the size of the pretraining dataset (compare e.g. BiT-1k vs. BiT-21k or CLIP), which is significantly larger than the number of training images seen by the DML methods. We see that only actual training on these datasets provides sufficiently reliable performance.
4.4 Few-shot adaptation boosts generalization performance in DML
Since distribution shifts can be arbitrarily large, the zero-shot transfer of ϕ can be ill-posed. Features learned on a training set Xtrain will not meaningfully transfer to test samples Xtest once they are sufficiently far from Xtrain, as already indicated by Fig. 2. As a remedy, few-shot learning [60, 67, 14, 48, 32, 6, 66] assumes a few samples of the test distribution to be available during training, allowing a previously learned representation to be adjusted. While these approaches are typically explicitly trained for fast adaptation to novel classes, we are now interested in whether similar adaptation of DML representations ϕ helps to bridge increasingly large distribution shifts.
To investigate this hypothesis, we follow the evaluation protocol of few-shot learning and use k representatives (also referred to as shots) of each class from a test set Xtest as a support set for finetuning the penultimate embedding network layer. The remaining test samples then represent the new test set used to evaluate retrieval performance, also referred to as the query set. For evaluation we perform 10 episodes, i.e. we repeat and average the adaptation of ϕ over 10 different, randomly sampled support and corresponding query sets. Independent of the DML model used for learning the original representation ϕ on Xtrain, adaptation to the support data is conducted using the Margin loss [71] objective with distance-based sampling [71] due to its faster convergence. This ensures fair comparison of the adaptation benefit to ϕ across methods and also allows complex approaches like self-supervision (DiVA [38]) to be adapted to the small number of samples in the support sets.
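A single episode of this protocol can be sketched as follows in PyTorch. The attribute name `embedding` and the `loss_fn` signature are illustrative placeholders for the Margin loss with distance-based sampling described above, and results would be averaged over 10 such episodes.

```python
import copy
import random
from collections import defaultdict

import torch

def run_episode(model, test_images, test_labels, k_shot, loss_fn,
                embed_attr="embedding", steps=100, lr=1e-5):
    """One few-shot episode: sample k shots per class from the test set as
    support, finetune only the penultimate embedding layer on them, and
    return the adapted model plus the held-out query indices."""
    by_class = defaultdict(list)
    for i, y in enumerate(test_labels.tolist()):
        by_class[y].append(i)
    support = [i for idx in by_class.values() for i in random.sample(idx, k_shot)]
    query = sorted(set(range(len(test_labels))) - set(support))

    model = copy.deepcopy(model)  # keep the original representation intact
    opt = torch.optim.Adam(getattr(model, embed_attr).parameters(), lr=lr)
    for _ in range(steps):
        emb = model(test_images[support])
        loss = loss_fn(emb, test_labels[support])  # e.g. a margin-based loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model, query
```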
Fig. 7 shows 2- and 5-shot results on CUB200-2011, with CARS196 available in the supplementary. SOP is not considered since each class is already composed of a small number of samples. As we see, even very limited in-domain data can significantly improve generalization performance, with the benefit becoming stronger for larger distribution shifts. Moreover, we observe that weaker approaches like ArcFace [8] seem to benefit more than state-of-the-art methods like S2SD [53] or DiVA [38]. We presume this is because the underlying concepts of the latter already encourage learning of more robust and general features. To conclude, few-shot learning provides a substantial and reliable benefit when facing OOD learning settings where the shift is not known a priori.
5 Conclusion
In this work we analyzed zero-shot transfer of image representations learned by Deep Metric Learning (DML) models. We proposed a systematic construction of train-test data splits of increasing
difficulty, as opposed to standard evaluation protocols that test out-of-distribution generalization only on single data splits of fixed difficulty. Based on this, we presented the novel benchmark ooDML and thoroughly assessed current DML methods. Our study reveals the following main findings:

Standard evaluation protocols are insufficient to probe general out-of-distribution transfer: Prevailing train-test splits in DML are often close to i.i.d. evaluation settings. Hence, they only provide limited insights into the impact of train-test distribution shift on generalization performance. Our benchmark ooDML alleviates this issue by evaluating a large, controllable and measurable spectrum of problem difficulty to facilitate future research.

Larger distribution shifts show the impact of conceptual differences in DML approaches: Our study reveals that generalization performance degrades consistently with increasing problem difficulty for all DML methods. However, certain concepts underlying the approaches, such as semantic feature diversity and knowledge distillation, are shown to be more robust to shifts than others.

Generic, self-supervised representations without finetuning can surpass dedicated data adaptation: When facing large distribution shifts, representations learned only by self-supervision on large amounts of unlabelled data are competitive with explicit DML training without any finetuning. However, their performance is heavily dependent on the data distribution and the models themselves.

Few-shot adaptation consistently improves out-of-distribution generalization in DML: Even very few examples from a target data distribution effectively help to adapt DML representations. The benefit becomes even more prominent with increasing train-test distribution shifts, and encourages further research into few-shot adaptation in DML.
Funding transparency statement This research has been funded by the German Federal Ministry for Economic Affairs and Energy within the project “KI-Absicherung – Safe AI for automated driving” and by the German Research Foundation (DFG) within projects 371923335 and 421703927. Moreover, it was funded in part by a CIFAR AI Chair at the Vector Institute, Microsoft Research, and an NSERC Discovery Grant. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute www.vectorinstitute.ai/#partners.
Acknowledgements We thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting K.R; K.R. acknowledges his membership in the European Laboratory for Learning and Intelligent Systems (ELLIS) PhD program. | 1. What is the focus of the paper regarding deep metric learning?
2. What are the strengths and weaknesses of the proposed approach?
3. Do you have any concerns or questions regarding the experimental setup and results?
4. How does the reviewer assess the clarity, quality, originality, and significance of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
The authors of this paper investigate how generalization is affected by varying the training and testing data splits in deep metric learning (DML). Specifically, they create train/test splits (splitting on classes) which increase in difficulty (meaning that the distributions are more different) and evaluate DML generalization. To measure the gap between train and test distributions, they use FID with ResNet-50. Experimentally, they use popular benchmarks and the Recall@k metric to show that performance decreases as the train/test FID distances become larger. This, and other experiments, show that using a fixed train/test split can lead to misleading conclusions about generalization.
Review
Originality:
The idea is very interesting and original. It makes sense that different train/test splits will make the training harder and this should be investigated.
Quality:
Overall, the paper is of high quality and is relatively easy to understand.
There are some mistakes in the paper. For example, on line 107, the authors state that FID uses Inception-v2 when it actually is Inception-v3.
Some things are not well justified. For example, the approximation of FID using only the mean is not well justified. Just because the mean-FID is monotonically increasing per iteration does not mean that the FID will behave the same. More justification is needed here. Another justification that is needed is why ResNet-50 was used and why the authors think that FID with this network will work well on the CUB200-2011, CARS196, and Stanford Online Products datasets, assuming that it was trained on ImageNet.
There might be an issue with the FID calculation as well. If the train/test splits have varying (or small) sizes then FID will become biased. See “Effectively Unbiased FID and Inception Score and where to find them” for more details.
Clarity:
Overall, the paper is easy to read and understand.
The fonts in the figures are too small, making them hard to read. The captions and descriptions are also hard to understand. Moreover, several figures do not print well in black and white.
Significance:
I am not sure about the significance, because I don’t specialize in deep metric learning. However, the authors make a great point that if the train/test splits make a huge difference, then they should be investigated. However, the authors also split on classes, which may be causing the huge difference in performance. |
NIPS | Title
Characterizing Generalization under Out-Of-Distribution Shifts in Deep Metric Learning
Abstract
Deep Metric Learning (DML) aims to find representations suitable for zero-shot transfer to a priori unknown test distributions. However, common evaluation protocols only test a single, fixed data split in which train and test classes are assigned randomly. More realistic evaluations should consider a broad spectrum of distribution shifts with potentially varying degree and difficulty. In this work, we systematically construct train-test splits of increasing difficulty and present the ooDML benchmark to characterize generalization under out-of-distribution shifts in DML. ooDML is designed to probe the generalization performance on much more challenging, diverse train-to-test distribution shifts. Based on our new benchmark, we conduct a thorough empirical analysis of state-of-the-art DML methods. We find that while generalization tends to consistently degrade with difficulty, some methods are better at retaining performance as the distribution shift increases. Finally, we propose few-shot DML as an efficient way to consistently improve generalization in response to unknown test shifts presented in ooDML1.
1 Introduction
Image representations that generalize well are the foundation of numerous computer vision tasks, such as image and video retrieval [61, 71, 54, 38, 1], face (re-)identification [57, 34, 8] and image classification [65, 4, 19, 40, 37]. Ideally, these representations should not only capture data within the training distribution, but also transfer to new, out-of-distribution (OOD) data. However, in practice, achieving effective OOD generalization is more challenging than in-distribution [28, 12, 21, 49, 31, 55]. In the case of zero-shot generalization, where train and test classes are completely distinct, Deep Metric Learning (DML) is used to learn metric representation spaces that capture and transfer visual similarity to unseen classes, constituting a priori unknown test distributions with unspecified shift. To approximate such a setting, current DML benchmarks use single, predefined and fixed data splits of disjoint train and test classes, which are assigned arbitrarily [71, 8, 61, 24, 11, 33, 74, 51, 54, 42, 26, 64, 58]. This means that (i) generalization is only evaluated on a fixed problem difficulty, (ii) generalization difficulty is only implicitly defined by the arbitrary data split, (iii) the distribution shift is not measured and (iv) cannot be changed. As a result, proposed models can overfit to these singular evaluation settings, which puts into question the true zero-shot generalization capabilities of proposed DML models.

1Code available here: https://github.com/CompVis/Characterizing_Generalization_in_DML ∗ Equal contribution, alphabetical order, † equal supervision, x now at University of Tuebingen.

35th Conference on Neural Information Processing Systems (NeurIPS 2021), virtual.
In this work, we first construct a new benchmark, ooDML, to characterize generalization under out-of-distribution shifts in DML. We build ooDML as a comprehensive benchmark for evaluating OOD generalization in changing zero-shot learning settings, covering a much larger variety of zero-shot transfer learning scenarios potentially encountered in practice. We systematically construct training and testing data splits of increasing difficulty as measured by their Frechet-Inception Distance [23] and extensively evaluate the performance of current DML approaches.
Our experiments reveal that the standard evaluation splits are often close to i.i.d. evaluation settings. In contrast, our novel benchmark continually evaluates models on significantly harder learning problems, providing a more complete perspective into OOD generalization in DML. Second, we perform a large-scale study of representative DML methods on ooDML, and study the actual benefit of underlying regularizations such as self-supervision [38], knowledge distillation [53], adversarial regularization [59] and specialized objective functions [71, 70, 8, 26, 54]. We find that conceptual differences between DML approaches play a more significant role as the distribution shift to the test split becomes harder. Finally, we present a study on few-shot DML as a simple extension to achieve systematic and consistent OOD generalization. As the transfer learning problem becomes harder, even very little in-domain knowledge effectively helps to adjust learned metric representation spaces to novel test distributions. We publish our code and train-test splits on three established benchmark sets, CUB200-2011 [68], CARS196 [30] and Stanford Online Products (SOP) [43]. Similarly, we provide training and evaluation episodes for further research into few-shot DML. Overall, our contributions can be summarized as:
• Proposing the ooDML benchmark to create a set of more realistic train-test splits that evaluate DML generalization capabilities under increasingly more difficult zero-shot learning tasks.
• Analyzing the current DML method landscape under ooDML to characterize benefits and drawbacks of different conceptual approaches to DML.
• Introducing and examining few-shot DML as a potential remedy for systematically improved OOD generalization, especially when moving to larger train-test distribution shifts.
2 Related Work
DML has become essential for many applications, especially in zero-shot image and video retrieval [61, 71, 51, 24, 1, 36]. Proposed approaches most commonly rely on a surrogate ranking task over tuples during training [62], ranging from simple pairs [17] and triplets [57] to higher-order quadruplets [5] and more generic n-tuples [61, 43, 22, 70]. These ranking tasks can also leverage additional context such as geometrical embedding structures [69, 8]. However, due to the exponentially increased complexity of tuple sampling spaces, these methods are usually also combined with tuple sampling objectives, relying on predefined or learned heuristics to avoid training over tuples that are too easy or too hard [57, 72] or reducing tuple redundancy encountered during training [71, 15, 18, 52]. More recent work has tackled sampling complexity through the usage of proxy-representations utilized as sample stand-ins during training, following an NCA [16] objective [41, 26, 64], leveraging softmax-style training through class proxies [8, 73] or simulating intraclass structures [46].
Unfortunately, the true benefit of these proposed objectives has been put into question recently, with [54] and [42] highlighting high levels of performance saturation of these discriminative DML objectives on default benchmark splits under fair comparison. Instead, orthogonal work extending the standard DML training paradigm through multi-task approaches [56, 51, 39], boosting [44, 45], attention [27], sample generation [11, 33, 74], multi-feature learning [38] or self-distillation [53] have shown more promise with strong relative improvements under fair comparison [54, 38], however still only in single split benchmark settings. It thus remains unclear how well these methods generalize in more realistic settings [28] under potentially much more challenging, different train-to-test distribution shifts, which we investigate in this work.
3 ooDML: Constructing a Benchmark for OOD Generalization in DML
An image representation ϕ(x) learned on samples x ∈ Xtrain drawn from some training distribution generalizes well if it can transfer to test data Xtest that are not observed during training. In the particular case of OOD generalization, the learned representation ϕ is supposed to transfer to samples Xtest which are not independently and identically distributed (i.i.d.) with respect to Xtrain. A successful approach to learning such representations is DML, which is evaluated for the special case of zero-shot generalization, i.e. the transfer of ϕ to distributions of unknown classes [57, 71, 24, 8, 54, 42]. DML models aim to learn an embedding ϕ mapping datapoints x into an embedding space Φ, which allows measuring the similarity between xi and xj as g(ϕ(xi), ϕ(xj)). Typically, g is a predefined metric, such as the Euclidean or cosine distance, and ϕ is parameterized by a deep neural network.
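As a small illustration of this setup, the similarity g could be realized, for instance, as the cosine similarity between batches of embeddings produced by ϕ; this is a generic sketch, not tied to any particular DML codebase.

```python
import torch.nn.functional as F

def g(phi_xi, phi_xj):
    """Cosine similarity between two batches of embeddings ϕ(x)."""
    return F.normalize(phi_xi, dim=-1) @ F.normalize(phi_xj, dim=-1).T
```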
In realistic zero-shot learning scenarios, test distributions are not specified a priori. Thus, their respective distribution shifts relative to the training data, which indicate the difficulty of the transfer learning problem, are unknown as well. To determine the generalization capabilities of ϕ, we would ideally measure its performance on different test distributions covering a large spectrum of distribution shifts, which we will also refer to as “problem difficulties” in this work. Unfortunately, standard evaluation protocols test the generalization of ϕ on a single and fixed train-test data split of predetermined difficulty, and hence only allow for limited conclusions about zero-shot generalization.
To thoroughly assess and compare zero-shot generalization of DML models, we aim to build an evaluation protocol that resembles the undetermined nature of the transfer learning problem. In order to achieve this, we need to be able to change, measure and control the difficulty of train-test data splits. To this end, we present an approach to construct multiple train-test splits of measurably increasing difficulty to investigate out-of-distribution generalization in DML; these splits make up the ooDML benchmark. Our generated train-test splits build on the established DML benchmark sets and are subsequently used in Sec. 4 to thoroughly analyze the current state-of-the-art in DML. For future research, this approach is also easily applicable to other datasets and transfer learning problems.
3.1 Measuring the gap between train and test distributions
To create our train-test data splits, we need a way of measuring the distance between image datasets. This is a difficult task due to high dimensionality and natural noise in the images. Recently, Frechet Inception Distance (FID) [23] was proposed to measure the distance between two image distributions by using the neural embeddings of an Inception-v3 network trained for classification on the ImageNet dataset. FID assumes that the embeddings of the penultimate layer follow a Gaussian distribution, with a given mean µX and covariance ΣX for a distribution of images X . The FID between two data distributions X1 and X2 is defined as:
d(X1,X2) ≜ ∥µX1 − µX2∥ 2 2 + Tr(ΣX1 +ΣX2 − 2(ΣX1ΣX2) 1 2 ) , (1)
In this paper, instead of the Inception network, we use the embeddings of a ResNet-50 classifier (Frechet ResNet Distance) for consistency with most DML studies (see e.g. [71, 64, 26, 56, 51, 38, 54, 58]). For simplicity, in the following sections we will still use the abbreviation FID.
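For reference, Eq. 1 can be computed from backbone features with a few lines of numpy/scipy. This is a standard, minimal sketch (in our setting the features would come from the penultimate layer of a ResNet-50), not our exact implementation.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats1, feats2):
    """FID (Eq. 1) between two sets of backbone features, arrays (N, D)."""
    mu1, mu2 = feats1.mean(0), feats2.mean(0)
    sigma1 = np.cov(feats1, rowvar=False)
    sigma2 = np.cov(feats2, rowvar=False)
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):      # numerical noise can introduce
        covmean = covmean.real        # small imaginary components
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```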
3.2 On the issue with default train-test splits in DML
To motivate the need for more comprehensive OOD evaluation protocols, we look at the split difficulty, as measured by FID, of typically used train-test splits and compare it to i.i.d. sampling of training and test sets from the same benchmark. Empirical results in Tab. 1 show that commonly utilized DML train-test splits are very close to in-distribution learning problems when compared to more out-of-distribution splits in CARS196 and SOP (see Fig. 1). This indicates that semantic differences due to disjoint train and test classes do not necessarily relate to actual significant distribution shifts between the train and test set. This also explains the consistently lower zero-shot retrieval performance on CUB200-2011 as compared to both CARS196 and SOP in the literature [71, 70, 24, 54, 42, 38], despite SOP containing significantly more classes with fewer examples per class. In addition to the previously discussed issues of DML evaluation protocols, this further questions conclusions drawn from these protocols about the OOD generalization of representations ϕ.
3.3 Creating train-test splits of increasing difficulty
Let Xtrain and Xtest denote the original train and test set of a given benchmark dataset D = Xtrain ∪ Xtest. To generate train-test splits of increasing difficulty while retaining the available data D and maintaining the balance of their sizes, we exchange samples between them. To ensure and emphasize semantic consistency and unbiased data distributions with respect to image context unrelated to the target object categories, we swap entire classes instead of individual samples. Measuring distribution similarity based on FID, the goal is then to identify classes Ctrain ⊂ Xtrain and Ctest ⊂ Xtest whose exchange yields a higher FID d(Xtrain, Xtest). To this end, similar to other works [33, 51, 38], we find resorting to an unimodal approximation of the intraclass distributions sufficient and approximate FID by only considering the class means, neglecting the covariance in Eq. 1. We select Ctrain and Ctest as
$$\mathcal{C}^*_{\mathrm{train}} = \underset{\mathcal{C}_{\mathrm{train}} \subset \mathcal{X}_{\mathrm{train}}}{\arg\max}\; \lVert \mu_{\mathcal{C}_{\mathrm{train}}} - \mu_{\mathcal{X}_{\mathrm{train}}} \rVert_2 - \lVert \mu_{\mathcal{C}_{\mathrm{train}}} - \mu_{\mathcal{X}_{\mathrm{test}}} \rVert_2 \quad (2)$$

$$\mathcal{C}^*_{\mathrm{test}} = \underset{\mathcal{C}_{\mathrm{test}} \subset \mathcal{X}_{\mathrm{test}}}{\arg\max}\; \lVert \mu_{\mathcal{C}_{\mathrm{test}}} - \mu_{\mathcal{X}_{\mathrm{test}}} \rVert_2 - \lVert \mu_{\mathcal{C}_{\mathrm{test}}} - \mu_{\mathcal{X}_{\mathrm{train}}} \rVert_2 \quad (3)$$
where we measure distance to the mean class-representations $\mu_{\mathcal{C}}$. By iteratively exchanging classes between data splits, i.e. $\mathcal{X}^{t+1}_{\mathrm{train}} = (\mathcal{X}^{t}_{\mathrm{train}} \setminus \mathcal{C}^*_{\mathrm{train}}) \cup \mathcal{C}^*_{\mathrm{test}}$ and vice versa, we obtain a more difficult train-test split $(\mathcal{X}^{t+1}_{\mathrm{train}}, \mathcal{X}^{t+1}_{\mathrm{test}})$ at iteration step $t$. Hence, we obtain a sequence of train-test splits $\mathcal{X}_D = ((\mathcal{X}^{0}_{\mathrm{train}}, \mathcal{X}^{0}_{\mathrm{test}}), \ldots, (\mathcal{X}^{t}_{\mathrm{train}}, \mathcal{X}^{t}_{\mathrm{test}}), \ldots, (\mathcal{X}^{T}_{\mathrm{train}}, \mathcal{X}^{T}_{\mathrm{test}}))$, with $\mathcal{X}^{0}_{\mathrm{train}} \triangleq \mathcal{X}_{\mathrm{train}}$ and $\mathcal{X}^{0}_{\mathrm{test}} \triangleq \mathcal{X}_{\mathrm{test}}$. Fig. 1 (columns 1-3) indeed shows that our FID approximation yields data splits with gradually increasing approximate FID scores with each swap, until the scores cannot be further increased by swapping classes.
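One iteration of this greedy swap can be sketched as follows, operating directly on precomputed backbone features and class labels (Eqs. 2 and 3 under the mean-based FID approximation); the function names are illustrative.

```python
import numpy as np

def swap_step(train_feats, train_labels, test_feats, test_labels):
    """Select one class per split to exchange, maximizing Eqs. 2 and 3."""
    def best_class(feats, labels, mu_own, mu_other):
        scores = {}
        for c in np.unique(labels):
            mu_c = feats[labels == c].mean(0)
            scores[c] = (np.linalg.norm(mu_c - mu_own)
                         - np.linalg.norm(mu_c - mu_other))
        return max(scores, key=scores.get)

    mu_train, mu_test = train_feats.mean(0), test_feats.mean(0)
    c_train = best_class(train_feats, np.asarray(train_labels), mu_train, mu_test)
    c_test = best_class(test_feats, np.asarray(test_labels), mu_test, mu_train)
    return c_train, c_test   # classes to move across the split boundary
```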
UMAP visualizations in the supplementary verify that the increase corresponds to larger OOD shifts. For CUB200-2011 and CARS196, we swap two classes per iteration, while for Stanford Online Products we swap 1000 classes due to its significantly higher class count. Moreover, to cover the overall spectrum of distribution shifts and ensure comparability between benchmarks, we also reverse the iteration procedure on CUB200-2011 to generate splits minimizing the approximate FID while still maintaining disjoint train and test classes.
To further increase $d(\mathcal{X}^{T}_{\mathrm{train}}, \mathcal{X}^{T}_{\mathrm{test}})$ beyond the convergence of the swapping procedure (see Fig. 1), we subsequently also identify and remove classes from both $\mathcal{X}^{T}_{\mathrm{train}}$ and $\mathcal{X}^{T}_{\mathrm{test}}$. More specifically, we remove classes $\mathcal{C}_{\mathrm{train}}$ from $\mathcal{X}^{T}_{\mathrm{train}}$ that are closest to the mean of $\mathcal{X}^{T}_{\mathrm{test}}$ and vice versa. For $k$ steps, we successively repeat class removal as long as 50% of the original data is still maintained in these
additional train-test splits. Fig. 1 (rightmost) shows how splits generated through class removal progressively increase the approximate FID beyond what was achieved only by swapping. To verify that the generated data splits are not inherently biased toward the backbone network used for FID computation, we also repeat this procedure with representations from different architectures, pretraining methods and datasets in the supplementary. Note that comparison of absolute FID values between datasets may not be meaningful, and we are mainly interested in distribution shifts within a given dataset distribution. Overall, using class swapping and removal, we select splits that cover the broadest FID range possible while still maintaining sufficient data. Hence, our splits are significantly harder and more diverse than the default splits.
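Continuing the sketch above, one removal step under the same mean-based approximation could look as follows (again an illustrative sketch rather than our exact code):

```python
import numpy as np

def removal_step(train_feats, train_labels, test_feats, test_labels):
    """Return the train class closest to the test mean and vice versa;
    removing both further increases the approximate FID."""
    mu_train, mu_test = train_feats.mean(0), test_feats.mean(0)

    def closest_class(feats, labels, mu_other):
        labels = np.asarray(labels)
        return min(np.unique(labels),
                   key=lambda c: np.linalg.norm(feats[labels == c].mean(0)
                                                - mu_other))

    return (closest_class(train_feats, train_labels, mu_test),
            closest_class(test_feats, test_labels, mu_train))
```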
4 Assessing the State of Generalization in Deep Metric Learning
This section assesses the state of zero-shot generalization in DML via a large experimental study of representative DML methods on our ooDML benchmark, offering a much more complete and thorough perspective on zero-shot generalization in DML as compared to previous DML studies [13, 54, 42, 39].
| 1. What is the focus of the paper regarding deep metric learning algorithms?
2. What are the strengths of the proposed benchmark for evaluating few-shot tasks?
3. Do you have concerns about the choice of FID metric?
4. How does the reviewer assess the quality and significance of the paper's content?
5. Are there any suggestions for improving the method or its applications? | Summary Of The Paper
Review | Summary Of The Paper
The authors present a new benchmark for deep metric learning algorithms, particularly for zero-shot tasks, in order to reflect their generalization performance under out-of-distribution shifts. They propose a novel way of splitting train and test data with increasing difficulty (the gap (distribution shift) between train and test gets higher). The FID score is chosen to determine the distribution shift between train and test. After an initial split is chosen, classes are iteratively exchanged between the train and test data to obtain a higher FID measure at each step. The authors suggest using the ROC curve to obtain one single criterion, while keeping the results for each split in order to observe the performance based on task difficulty. They include experimental analysis on their proposed criteria and show results for some existing methods and architectures. The authors also analyze the query-support framework in few-shot learning and its effect on generalization performance.
Review
Originality:
There are a few recent attempts to analyze and quantify the difficulty of few/zero-shot tasks, and it is still an important open question. Progressively increasing the difficulty of the train-test split and obtaining a range of tasks as a benchmark is novel as far as I know. They used the ResNet FID distance to achieve this in order to cover many of the existing methods.
The related work contains sufficient citations of the previous contributions (especially 52, 40, 27); however, the authors could explain them in a couple more sentences to create a better setup for the current progress and their contribution.
Suggested citations: Two Sides of Meta-Learning Evaluation: In vs. Out of Distribution (Amrith Setlur et al.). Huang, Gabriel, Hugo Larochelle, and Simon Lacoste-Julien. "Are Few-shot Learning Benchmarks Too Simple?." (2019).
The paper also switches from the zero-shot case to few-shot by including some class samples from the test split, and shows that generalization improves (as expected), as another novel contribution.
Quality
Choosing the FID metric can be unreliable, as the authors are aware. Figure S2 is sufficient proof of concept for me at this stage. Although a comparison between the datasets might not be reliable (Table 1 suggests otherwise), it can still be a relative measure for a given fixed dataset.
Why didn't you simultaneously decrease FID to obtain simpler/simplest tasks given that your initialization is random? Is it guaranteed that random initialization is simple enough to cover a wider difficulty range?
The methods seem to act similarly in the recall criterion (Figure 2); is it necessary to use splitting via FID in this case? The authors could provide better insights on why mAP and recall behave very differently (Figure 2).
The authors mention their weaknesses in having a similar max FID value for different datasets and being flawed in descriptiveness (Figure 4), and provide sufficient explanations for the possible reasons.
Can this method be used in domain adaptation? Also, might the method be costly for large datasets?
Adapting the method directly might cause a bias, for example if minimizing the FID measure alone helps to get better results on more difficult data splits.
Clarity:
The paper is clearly written, the figures are very helpful.
As I mentioned above, the related work section could be more informative.
Significance:
The proposed method is applicable to any zero/few-shot dataset (as long as it is not very large; checking FID scores before using it as a benchmark might be needed). This makes the method widely usable.
The problem itself is important and hard to solve; I like the idea of creating different distribution shifts for a given dataset. The method has the potential to be further improved or to inspire future work, both experimental and theoretical.
NIPS | Title
Characterizing Generalization under Out-Of-Distribution Shifts in Deep Metric Learning
Abstract
Deep Metric Learning (DML) aims to find representations suitable for zero-shot transfer to a priori unknown test distributions. However, common evaluation protocols only test a single, fixed data split in which train and test classes are assigned randomly. More realistic evaluations should consider a broad spectrum of distribution shifts with potentially varying degree and difficulty. In this work, we systematically construct train-test splits of increasing difficulty and present the ooDML benchmark to characterize generalization under out-of-distribution shifts in DML. ooDML is designed to probe the generalization performance on much more challenging, diverse train-to-test distribution shifts. Based on our new benchmark, we conduct a thorough empirical analysis of state-of-the-art DML methods. We find that while generalization tends to consistently degrade with difficulty, some methods are better at retaining performance as the distribution shift increases. Finally, we propose few-shot DML as an efficient way to consistently improve generalization in response to unknown test shifts presented in ooDML1.
1 Introduction
Image representations that generalize well are the foundation of numerous computer vision tasks, such as image and video retrieval [61, 71, 54, 38, 1], face (re-)identification [57, 34, 8] and image classification [65, 4, 19, 40, 37]. Ideally, these representations should not only capture data within the training distribution, but also transfer to new, out-of-distribution (OOD) data. However, in practice, achieving effective OOD generalization is more challenging than in-distribution [28, 12, 21, 49, 31, 55]. In the case of zero-shot generalization, where train and test classes are completely distinct, Deep Metric Learning (DML) is used to learn metric representation spaces that capture and transfer visual similarity to unseen classes, constituting a priori unknown test distributions with unspecified shift. To approximate such a setting, current DML benchmarks use single, predefined and fixed data splits of disjoint train and test classes, which are assigned arbitrarily [71, 8, 61, 24, 11, 33, 74, 51, 54, 42, 26, 64, 58]. This means that (i) generalization is only evaluated on a fixed problem difficulty, (ii)
1Code available here: https://github.com/CompVis/Characterizing_Generalization_in_DML ∗ Equal contribution, alphabetical order, † equal supervision, x now at University of Tuebingen.
35th Conference on Neural Information Processing Systems (NeurIPS 2021), virtual.
generalization difficulty is only implicitly defined by the arbitrary data split, (iii) the distribution shift is not measured and (iv) cannot be not changed. As a result, proposed models can overfit to these singular evaluation settings, which puts into question the true zero-shot generalization capabilities of proposed DML models.
In this work, we first construct a new benchmark ooDML to characterize generalization under outof-distribution shifts in DML. We systematically build ooDML as a comprehensive benchmark for evaluating OOD generalization in changing zero-shot learning settings which covers a much larger variety of zero-shot transfer learning scenarios potentially encountered in practice. We systematically construct training and testing data splits of increasing difficulty as measured by their Frechet-Inception Distance [23] and extensively evaluate the performance of current DML approaches.
Our experiments reveal that the standard evaluation splits are often close to i.i.d. evaluation settings. In contrast, our novel benchmark continually evaluates models on significantly harder learning problems, providing a more complete perspective on OOD generalization in DML. Second, we perform a large-scale study of representative DML methods on ooDML, and study the actual benefit of underlying regularizations such as self-supervision [38], knowledge distillation [53], adversarial regularization [59] and specialized objective functions [71, 70, 8, 26, 54]. We find that conceptual differences between DML approaches play a more significant role as the distribution shift to the test split becomes harder. Finally, we present a study on few-shot DML as a simple extension to achieve systematic and consistent OOD generalization. As the transfer learning problem becomes harder, even very little in-domain knowledge effectively helps to adjust learned metric representation spaces to novel test distributions. We publish our code and train-test splits on three established benchmark sets, CUB200-2011 [68], CARS196 [30] and Stanford Online Products (SOP) [43]. Similarly, we provide training and evaluation episodes for further research into few-shot DML. Overall, our contributions can be summarized as:
• Proposing the ooDML benchmark to create a set of more realistic train-test splits that evaluate DML generalization capabilities under increasingly more difficult zero-shot learning tasks.
• Analyzing the current DML method landscape under ooDML to characterize benefits and drawbacks of different conceptual approaches to DML.
• Introducing and examining few-shot DML as a potential remedy for systematically improved OOD generalization, especially when moving to larger train-test distribution shifts.
2 Related Work
DML has become essential for many applications, especially in zero-shot image and video retrieval [61, 71, 51, 24, 1, 36]. Proposed approaches most commonly rely on a surrogate ranking task over tuples during training [62], ranging from simple pairs [17] and triplets [57] to higher-order quadruplets [5] and more generic n-tuples [61, 43, 22, 70]. These ranking tasks can also leverage additional context such as geometrical embedding structures [69, 8]. However, due to the exponentially increased complexity of tuple sampling spaces, these methods are usually also combined with tuple sampling objectives, relying on predefined or learned heuristics to avoid training over tuples that are too easy or too hard [57, 72] or reducing tuple redundancy encountered during training [71, 15, 18, 52]. More recent work has tackled sampling complexity through the usage of proxy-representations utilized as sample stand-ins during training, following an NCA [16] objective [41, 26, 64], leveraging softmax-style training through class-proxies [8, 73] or simulating intraclass structures [46].
Unfortunately, the true benefit of these proposed objectives has been put into question recently, with [54] and [42] highlighting high levels of performance saturation of these discriminative DML objectives on default benchmark splits under fair comparison. Instead, orthogonal work extending the standard DML training paradigm through multi-task approaches [56, 51, 39], boosting [44, 45], attention [27], sample generation [11, 33, 74], multi-feature learning [38] or self-distillation [53] have shown more promise with strong relative improvements under fair comparison [54, 38], however still only in single split benchmark settings. It thus remains unclear how well these methods generalize in more realistic settings [28] under potentially much more challenging, different train-to-test distribution shifts, which we investigate in this work.
3 ooDML: Constructing a Benchmark for OOD Generalization in DML
An image representation ϕ(x) learned on samples x ∈ Xtrain drawn from some training distribution generalizes well if it can transfer to test data Xtest that are not observed during training. In the particular case of OOD generalization, the learned representation ϕ is supposed to transfer to samples Xtest which are not independently and identically distributed (i.i.d.) with respect to Xtrain. A successful approach to learning such representations is DML, which is evaluated for the special case of zero-shot generalization, i.e. the transfer of ϕ to distributions of unknown classes [57, 71, 24, 8, 54, 42]. DML models aim to learn an embedding ϕ mapping datapoints x into an embedding space Φ, which allows measuring similarity between xi and xj as g(ϕ(xi), ϕ(xj)). Typically, g is a predefined metric, such as the Euclidean or cosine distance, and ϕ is parameterized by a deep neural network.
In realistic zero-shot learning scenarios, test distributions are not specified a priori. Thus, their respective distribution shifts relative to the training, which indicates the difficulty of the transfer learning problem, is unknown as well. To determine the generalization capabilities of ϕ, we would ideally measure its performance on different test distributions covering a large spectrum of distribution shifts, which we will also refer to as “problem difficulties" in this work. Unfortunately, standard evaluation protocols test the generalization of ϕ on a single and fixed train-test data split of predetermined difficulty, hence only allow for limited conclusions about zero-shot generalization.
To thoroughly assess and compare zero-shot generalization of DML models, we aim to build an evaluation protocol that resembles the undetermined nature of the transfer learning problem. In order to achieve this, we need to be able to change, measure and control the difficulty of train-test data splits. To this end, we present an approach to construct multiple train-test splits of measurably increasing difficulty to investigate out-of-distribution generalization in DML, which together make up the ooDML benchmark. Our generated train-test splits build on the established DML benchmark sets, and are subsequently used in Sec. 4 to thoroughly analyze the current state-of-the-art in DML. For future research, this approach is also easily applicable to other datasets and transfer learning problems.
3.1 Measuring the gap between train and test distributions
To create our train-test data splits, we need a way of measuring the distance between image datasets. This is a difficult task due to high dimensionality and natural noise in the images. Recently, Frechet Inception Distance (FID) [23] was proposed to measure the distance between two image distributions by using the neural embeddings of an Inception-v3 network trained for classification on the ImageNet dataset. FID assumes that the embeddings of the penultimate layer follow a Gaussian distribution, with a given mean µX and covariance ΣX for a distribution of images X . The FID between two data distributions X1 and X2 is defined as:
$$d(\mathcal{X}_1, \mathcal{X}_2) \triangleq \|\mu_{\mathcal{X}_1} - \mu_{\mathcal{X}_2}\|_2^2 + \mathrm{Tr}\!\left(\Sigma_{\mathcal{X}_1} + \Sigma_{\mathcal{X}_2} - 2\,(\Sigma_{\mathcal{X}_1}\Sigma_{\mathcal{X}_2})^{\frac{1}{2}}\right). \quad (1)$$
In this paper, instead of the Inception network, we use the embeddings of a ResNet-50 classifier (Frechet ResNet Distance) for consistency with most DML studies (see e.g. [71, 64, 26, 56, 51, 38, 54, 58]). For simplicity, in the following sections we will still use the abbreviation FID.
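To make Eq. 1 concrete, the following is a minimal sketch of computing this Fréchet distance between two sets of backbone embeddings with NumPy/SciPy; the function name and interface are our own illustrative choices, not taken from the released code.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats1: np.ndarray, feats2: np.ndarray) -> float:
    """Fréchet distance between two embedding sets (Eq. 1).

    feats1, feats2: arrays of shape (n_samples, feat_dim), e.g. penultimate-layer
    activations of a ResNet-50, assumed to follow Gaussian distributions.
    """
    mu1, mu2 = feats1.mean(axis=0), feats2.mean(axis=0)
    sigma1 = np.cov(feats1, rowvar=False)
    sigma2 = np.cov(feats2, rowvar=False)

    diff = mu1 - mu2
    # Matrix square root of the covariance product; small imaginary parts
    # arising from numerical error are discarded.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```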
3.2 On the issue with default train-test splits in DML
To motivate the need for more comprehensive OOD evaluation protocols, we look at the split difficulty as measured by FID of typically used train-test splits and compare to i.i.d. sampling of training and test sets from the same benchmark. Empirical results in Tab. 1 show that commonly utilized DML
train-test splits are very close to in-distribution learning problems when compared to more out-of-distribution splits in CARS196 and SOP (see Fig. 1). This indicates that semantic differences due to disjoint train and test classes do not necessarily relate to actual significant distribution shifts between the train and test set. This also explains the consistently lower zero-shot retrieval performance on CUB200-2011 as compared to both CARS196 and SOP in the literature [71, 70, 24, 54, 42, 38], despite SOP containing significantly more classes with fewer examples per class. In addition to the previously discussed issues of DML evaluation protocols, this further questions conclusions drawn from these protocols about the OOD generalization of representations ϕ.
3.3 Creating train-test splits of increasing difficulty
Let Xtrain and Xtest denote the original train and test set of a given benchmark dataset D = Xtrain ∪ Xtest. To generate train-test splits of increasing difficulty while retaining the available data D and maintaining balance of their sizes, we exchange samples between them. To ensure and emphasize semantic consistency and unbiased data distributions with respect to image context unrelated to the target object categories, we swap entire classes instead of individual samples. Measuring distribution similarity based on FID, the goal is then to identify classes Ctrain ⊂ Xtrain and Ctest ⊂ Xtest whose exchange yields a higher FID d(Xtrain, Xtest). To this end, similar to other works [33, 51, 38], we find an unimodal approximation of the intraclass distributions sufficient and approximate FID by only considering the class means, neglecting the covariance term in Eq. 1. We select Ctrain and Ctest as
$$C^*_{\text{train}} = \underset{C_{\text{train}} \in \mathcal{X}_{\text{train}}}{\arg\max}\; \|\mu_{C_{\text{train}}} - \mu_{\mathcal{X}_{\text{train}}}\|_2 - \|\mu_{C_{\text{train}}} - \mu_{\mathcal{X}_{\text{test}}}\|_2 \quad (2)$$
$$C^*_{\text{test}} = \underset{C_{\text{test}} \in \mathcal{X}_{\text{test}}}{\arg\max}\; \|\mu_{C_{\text{test}}} - \mu_{\mathcal{X}_{\text{test}}}\|_2 - \|\mu_{C_{\text{test}}} - \mu_{\mathcal{X}_{\text{train}}}\|_2 \quad (3)$$
where $\mu_C$ denotes the mean class representation. By iteratively exchanging classes between data splits, i.e. $\mathcal{X}^{t+1}_{\text{train}} = (\mathcal{X}^{t}_{\text{train}} \setminus C^*_{\text{train}}) \cup C^*_{\text{test}}$ and vice versa, we obtain a more difficult train-test split $(\mathcal{X}^{t+1}_{\text{train}}, \mathcal{X}^{t+1}_{\text{test}})$ at iteration step $t$. Hence, we obtain a sequence of train-test splits $\mathcal{X}_D = ((\mathcal{X}^{0}_{\text{train}}, \mathcal{X}^{0}_{\text{test}}), \ldots, (\mathcal{X}^{t}_{\text{train}}, \mathcal{X}^{t}_{\text{test}}), \ldots, (\mathcal{X}^{T}_{\text{train}}, \mathcal{X}^{T}_{\text{test}}))$, with $\mathcal{X}^{0}_{\text{train}} \triangleq \mathcal{X}_{\text{train}}$ and $\mathcal{X}^{0}_{\text{test}} \triangleq \mathcal{X}_{\text{test}}$. Fig. 1 (columns 1-3) indeed shows that our FID approximation yields data splits with gradually increasing approximate FID scores with each swap, until the scores cannot be further increased by swapping classes; a sketch of one such iteration follows.
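As an illustration of one swap iteration under this class-mean approximation of Eqs. 2-3, consider the sketch below; `swap_step` and its dictionary-based interface are our own naming, and class labels are assumed disjoint across splits:

```python
import numpy as np

def swap_step(train_means: dict, test_means: dict):
    """Select and exchange one class per split following Eqs. 2-3 (a sketch).

    train_means / test_means map class labels to mean embeddings, i.e. the
    unimodal approximation of FID that ignores covariances.
    """
    mu_train = np.mean(list(train_means.values()), axis=0)
    mu_test = np.mean(list(test_means.values()), axis=0)

    # Eq. 2: train class far from the train mean but close to the test mean.
    c_train = max(train_means, key=lambda c: np.linalg.norm(train_means[c] - mu_train)
                  - np.linalg.norm(train_means[c] - mu_test))
    # Eq. 3: test class far from the test mean but close to the train mean.
    c_test = max(test_means, key=lambda c: np.linalg.norm(test_means[c] - mu_test)
                 - np.linalg.norm(test_means[c] - mu_train))

    # Exchange the selected classes to obtain the next, harder split.
    train_means[c_test] = test_means.pop(c_test)
    test_means[c_train] = train_means.pop(c_train)
    return c_train, c_test
```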
UMAP visualizations in the supplementary verify that the increase corresponds to larger OOD shifts. For CUB200-2011 and CARS196, we swap two classes per iteration, while for Stanford Online Products we swap 1000 classes due to a significantly higher class count. Moreover, to cover the overall spectrum of distribution shifts and ensure comparability between benchmarks, we also reverse the iteration procedure on CUB200-2011 to generate splits minimizing the approximate FID while still maintaining disjoint train and test classes.
To further increase $d(\mathcal{X}^{T}_{\text{train}}, \mathcal{X}^{T}_{\text{test}})$ beyond the convergence of the swapping procedure (see Fig. 1), we subsequently also identify and remove classes from both $\mathcal{X}^{T}_{\text{train}}$ and $\mathcal{X}^{T}_{\text{test}}$. More specifically, we remove classes $C_{\text{train}}$ from $\mathcal{X}^{T}_{\text{train}}$ that are closest to the mean of $\mathcal{X}^{T}_{\text{test}}$, and vice versa. For $k$ steps, we successively repeat class removal as long as 50% of the original data is still maintained in these additional train-test splits. Fig. 1 (rightmost) shows how splits generated through class removal progressively increase the approximate FID beyond what was achieved by swapping alone. To analyze whether the generated data splits are inherently biased towards the backbone network used for FID computation, we also repeat this procedure based on representations from different architectures, pretraining methods and datasets in the supplementary. Note that comparison of absolute FID values between datasets may not be meaningful, and we are mainly interested in distribution shifts within a given dataset distribution. Overall, using class swapping and removal we select splits that cover the broadest FID range possible, while still maintaining sufficient data. Hence, our splits are significantly harder and more diverse than the default splits.
4 Assessing the State of Generalization in Deep Metric Learning
This section assesses the state of zero-shot generalization in DML via a large experimental study of representative DML methods on our ooDML benchmark, offering a much more complete and thorough perspective on zero-shot generalization in DML as compared to previous DML studies [13, 54, 42, 39].
For our experiments we use the three most widely used benchmarks in DML: CUB200-2011 [68], CARS196 [30] and Stanford Online Products [43]. Unless explicitly stated in the respective sections, implementation and training details are listed in the supplementary. Moreover, to measure generalization performance, we resort to the most widely used metric for image retrieval in DML, Recall@k [25]. Additionally, we also evaluate results over mean average precision (mAP@1000) [54, 42], but provide the respective tables and visualizations in the supplementary when the interpretation of results is not impacted.
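For reference, Recall@k is commonly computed from pairwise embedding similarities; a sketch with our own naming follows, assuming L2-normalized embeddings:

```python
import numpy as np

def recall_at_k(embeddings: np.ndarray, labels: np.ndarray, k: int = 1) -> float:
    """Recall@k for image retrieval (a common implementation sketch).

    embeddings: (n, d) array of L2-normalized test embeddings
    labels:     (n,) class labels; each sample queries all others
    """
    sims = embeddings @ embeddings.T
    np.fill_diagonal(sims, -np.inf)             # exclude the query itself
    topk = np.argsort(-sims, axis=1)[:, :k]     # k nearest neighbors per query
    hits = (labels[topk] == labels[:, None]).any(axis=1)
    return float(hits.mean())
```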
The exact training and test splits ultimately utilized throughout this work are selected based on Fig. 1 to ensure approximately uniform coverage of the spectrum of distribution shifts, ranging from the lowest (near-i.i.d. splits) to the highest generated shift achieved with class removal. For experiments on CARS196 and Stanford Online Products, eight total splits were investigated, including the original default benchmark split. For CUB200-2011, we select nine splits to also account for benchmark additions with reduced distribution shifts. The exact FID ranges are provided in the supplementary. Training on CARS196 and CUB200-2011 was done for a maximum of 200 epochs following the standard training protocols utilized in [54], while 150 epochs were used for the much larger SOP dataset. Additional training details, if not directly stated in the respective sections, can be found in the supplementary.
4.1 Zero-shot generalization under varying distribution shifts
Many different concepts have been proposed in DML to learn embedding functions ϕ that generalize from the training distribution to differently distributed test data. To analyze the zero-shot transfer capabilities of DML models, we consider representative approaches making use of the following concepts: (i) surrogate ranking tasks and tuple mining heuristics (Margin loss with distance-based sampling [71] and Multisimilarity loss [70]), (ii) geometric constraints or class proxies (ArcFace [8], ProxyAnchor [26]), (iii) learning of semantically diverse features (R-Margin [54]), self-supervised training (DiVA [38]) and adversarial regularization (Uniform Prior [59]), and (iv) knowledge self-distillation (S2SD [53]).
Fig. 2 (top) analyzes these methods for their generalization to distribution shifts of the varying degrees represented in ooDML. The top row shows absolute zero-shot retrieval performance measured by Recall@1 (results for mAP@1000 can be found in the supplementary) with respect to the FID between train and test sets. Additionally, Fig. 2 (bottom) examines the relative differences of performance to the mean performance over all methods for each train-test split. Based on these experiments, we make the following main observations:
(i) Performance deteriorates monotonically with the distribution shifts. Independent of dataset, approach or evaluation metric, performance drops steadily as the distribution shift increases.
(ii) Relative performance differences are affected by train-test split difficulty. We see that the overall ranking between approaches oftentimes remains stable on the CARS196 and CUB200-2011 datasets. However, we also see that particularly on a large-scale dataset (SOP), established proxy-based approaches ArcFace [8] (which incorporates additional geometric constraints) and ProxyAnchor [26] are surprisingly susceptible to more difficult distribution shifts. Both methods perform poorly compared to the more consistent general trend of the other approaches. Hence, conclusions on the generality of methods solely based on the default benchmarks need to be handled with care, as at least for SOP, performance comparisons reported on single (e.g. the standard) data splits do not translate to more general train-test scenarios.
(iii) Conceptual differences matter at larger distribution shifts. While the ranking between most methods is largely consistent on CUB200-2011 and CARS196, their differences in performance become more prominent with increasing distribution shifts. The relative changes (deviation from the mean of all methods at each stage) depicted in Fig. 2 (bottom) clearly indicate that methods based on techniques such as self-supervision and feature diversity (DiVA, R-Margin) and self-distillation (S2SD) are among the best at generalizing in DML on more challenging splits, while retaining strong performance on more i.i.d. splits as well.
While directly stating performance as a function of the individual distribution shifts offers a detailed overview, the overall comparison of approaches is typically based on single benchmark scores. To provide a single metric of comparison, we utilize the well-known Area-under-Curve (AUC) score to condense performance (either based on Recall@1 or mAP@1000) over all available distribution shifts into a single aggregated score indicating general zero-shot capabilities. This Aggregated Generalization Score (AGS) is computed over the FID scores of our splits normalized to the interval [0, 1]. As Recall@k and mAP@k scores are naturally bounded to [0, 1], AGS is similarly bounded to [0, 1], with higher values indicating a better model. Our corresponding results are visualized in Fig. 3. Indeed, we see that AGS reflects our observations from Fig. 2, with self-supervision (DiVA)
and self-distillation (S2SD) generally performing best when facing unknown train-test shifts. Exact scores are provided in the supplementary.
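A sketch of computing such an AUC-style aggregate with the trapezoidal rule is given below; the function name is ours, and the exact integration scheme used for AGS is an assumption:

```python
import numpy as np

def aggregated_generalization_score(fids, scores):
    """AGS: area under the performance-vs-shift curve (a sketch, our naming).

    fids:   FID values of the evaluated train-test splits.
    scores: Recall@1 or mAP@1000 at each split, each in [0, 1].
    """
    fids, scores = np.asarray(fids, float), np.asarray(scores, float)
    order = np.argsort(fids)
    fids, scores = fids[order], scores[order]
    # Normalize the shift axis to [0, 1] so AGS is also bounded to [0, 1].
    x = (fids - fids.min()) / (fids.max() - fids.min())
    return float(np.trapz(scores, x))
```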
4.2 Consistency of structural representation properties on ooDML
Roth et al. [54] attempt to identify potential drivers of generalization in DML by measuring the following structural properties of a representation ϕ: (i) the mean distance πinter between the centers of the embedded samples of each class, (ii) the mean distance πintra between the embedded samples within a class, (iii) the ‘embedding space density’ measured as the ratio πratio = πintra/πinter, and (iv) the ‘spectral decay’ ρ(Φ) measuring the degree of uniformity of singular values obtained by singular value decomposition of the training sample representations, which indicates the number of significant directions of variance. For a more detailed description, we refer to [54]. These metrics are indeed empirically shown to exhibit a certain correlation with generalization performance on the default benchmark splits. In contrast, we are now interested in whether these observations hold when measuring generalization performance on the ooDML train-test splits of varying difficulty.
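For concreteness, a sketch of these four structural metrics follows; the function name is ours, and the exact form of the spectral decay (here, a divergence of the normalized singular-value spectrum from uniformity) is one plausible reading of [54]:

```python
import numpy as np

def embedding_structure_metrics(embeddings, labels):
    """pi_inter, pi_intra, pi_ratio and spectral decay rho, following [54] (sketch).

    embeddings: (n, d) array of training-sample representations; labels: (n,).
    Classes are assumed to contain at least two samples each.
    """
    classes = np.unique(labels)
    centers = np.stack([embeddings[labels == c].mean(0) for c in classes])

    # (i) mean distance between class centers
    inter = np.linalg.norm(centers[:, None] - centers[None], axis=-1)
    pi_inter = float(inter[np.triu_indices(len(classes), k=1)].mean())

    # (ii) mean pairwise distance within each class
    intra = []
    for c in classes:
        e = embeddings[labels == c]
        d = np.linalg.norm(e[:, None] - e[None], axis=-1)
        intra.append(d[np.triu_indices(len(e), k=1)].mean())
    pi_intra = float(np.mean(intra))

    # (iv) spectral decay: divergence of the normalized singular-value spectrum
    # from a uniform distribution (more uniform => more directions of variance).
    s = np.linalg.svd(embeddings, compute_uv=False)
    p = s / s.sum()
    rho = float(np.sum(p * np.log(p * len(p) + 1e-12)))

    return pi_inter, pi_intra, pi_intra / pi_inter, rho
```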
We visualize our results in Fig. 4 for CUB200-2011 and SOP, with CARS196 provided in the supplementary. For better visualization, we normalize all measured values obtained for metrics (i)-(iv) and the recall performances (Recall@1) to the interval [0, 1] for each train-test split. Thus, the relation between structural properties and generalization performance becomes comparable across all train-test splits, allowing us to examine whether superior generalization is still correlated with the structural properties of the learned representation ϕ, i.e. whether the correlation is independent of the underlying distribution shifts. For a perfectly descriptive metric, one should expect heavy correlation between the normalized metric and normalized generalization performance jointly across shifts. Unfortunately, our results show only little indication of any structural metric being consistently correlated with generalization performance over varying distribution shifts. This is also observed when evaluating only against basic, purely discriminative DML objectives as was done in [54] for the default split, as well as when incorporating methods that extend and change the base DML training setup (such as DiVA [38] or adversarial regularization [59]).
This not only demonstrates that experimental conclusions derived from the analysis of only a single benchmark split may not hold for overall zero-shot generalization, but also that future research should consider more general learning problems and difficulties to better understand the conceptual impact of various regularization approaches. To this end, our benchmark protocol offers more comprehensive experimental ground for future studies to find potential drivers of zero-shot generalization in DML.
4.3 Network capacity and pretrained representations
A common way to improve generalization, as also highlighted in [54] and [42], is to select a stronger backbone architecture for feature extraction. In this section, we look at how changes in network capacity can influence OOD generalization across distribution shifts. Moreover, we also analyze the zero-shot performance of a diverse set of state-of-the-art pretraining approaches.
Influence of network capacity. In Fig. 5, we compare different members of the ResNet architecture family [20] with increasing capacity, each of which achieves increasingly higher performance on i.i.d. test benchmarks such as ImageNet [7], going from a small ResNet18 (R18) over ResNet50 (R50) to ResNet101 (R101) variants. As can be seen, while larger network capacity helps to some extent, performance actually saturates in zero-shot transfer settings, regardless of the DML approach and dataset (in particular also the large-scale SOP dataset). Interestingly, we also observe that the performance drops with increasing distribution shifts are consistent across network capacities, suggesting that zero-shot generalization is less driven by network capacity than by conceptual choices of the learning formulation (compare Fig. 2).
Generic representations versus Deep Metric Learning. Recently, self-supervised representation learning has taken great leaps with ever stronger models trained on huge amounts of image [29, 47] and language data [9, 35, 2]. These approaches are designed to learn expressive, well-transferring features, and methods like CLIP [47] even prove surprisingly useful for zero-shot classification. We now evaluate and compare such representations against state-of-the-art DML models to understand whether generic representations that are readily available nowadays actually pose an alternative to the explicit application of DML. We select state-of-the-art self-supervision models SwAV [3] (ResNet50 backbone), CLIP [47] trained via natural language supervision on a large dataset of 400 million image and sentence pairs (VisionTransformer [10] backbone), BiT(-M) [29], which trains a ResNet50-V2 [29] on both the standard ImageNet [7] (1 million training samples) and the ImageNet-21k dataset [7, 50] with 14 million training samples and over 21 thousand classes, an EfficientNet-B0 [63] trained on ImageNet, and a standard baseline ResNet50 network trained on ImageNet. Note that none of these representations has been additionally adapted to the benchmark sets, in contrast to the DML approaches, which have been trained on the respective train splits.
The results presented in Fig. 6 show large performance differences of the pretrained representations, which are largely dependent on the test dataset. While BiT outperforms the DML state-of-the-art on CUB200-2011 without any finetuning, it significantly trails behind the DML models on the other
two datasets. On CARS196, only CLIP comes close to the DML approaches when the distribution shift is sufficiently large. Finally, on SOP, none of these models comes even close to the adapted DML methods. This shows that although representations learned by extensive pretraining can offer strong zero-shot generalization, their performance heavily depends on the target dataset and the specific model. Furthermore, the generalization abilities notably depend on the size of the pretraining dataset (compare e.g. BiT-1k vs BiT-21k or CLIP), which is significantly larger than the number of training images seen by the DML methods. We see that only actual training on these datasets provides sufficiently reliable performance.
4.4 Few-shot adaption boosts generalization performance in DML
Since distribution shifts can be arbitrarily large, the zero-shot transfer of ϕ can be ill-posed. Features learned on a training set Xtrain will not meaningfully transfer to test samples Xtest once they are sufficiently far from Xtrain, as already indicated by Fig. 2. As a remedy, few-shot learning [60, 67, 14, 48, 32, 6, 66] assumes a few samples of the test distribution to be available during training for adjusting a previously learned representation. While these approaches are typically explicitly trained for fast adaptation to novel classes, we are now interested in whether a similar adaptation of DML representations ϕ helps to bridge increasingly large distribution shifts.
To investigate this hypothesis, we follow the evaluation protocol of few-shot learning and use k representatives (also referred to as shots) of each class from a test set Xtest as a support set for finetuning the penultimate embedding network layer. The remaining test samples then represent the new test set used to evaluate retrieval performance, also referred to as the query set. For evaluation we perform 10 episodes, i.e. we repeat and average the adaptation of ϕ over 10 different, randomly sampled support and corresponding query sets; a sketch of this protocol is given below. Independent of the DML model used for learning the original representation ϕ on Xtrain, adaptation to the support data is conducted using the Margin loss [71] objective with distance-based sampling [71] due to its faster convergence. This also ensures fair comparison of the adaptation benefit to ϕ and allows adapting complex approaches like self-supervision (DiVA [38]) to the small number of samples in the support sets.
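A sketch of this episodic protocol is given below; `embed_finetune` and `evaluate` are hypothetical callbacks standing in for the finetuning and retrieval-evaluation steps:

```python
import numpy as np

def few_shot_episodes(embed_finetune, evaluate, test_set, k=2, episodes=10, seed=0):
    """Few-shot DML evaluation protocol (Sec. 4.4), a sketch with our naming.

    embed_finetune(support) -> phi': finetunes the penultimate layer on the support set.
    evaluate(phi', query)   -> Recall@1 on the held-out query set.
    test_set: mapping class -> list of samples.
    """
    rng = np.random.default_rng(seed)
    results = []
    for _ in range(episodes):
        support, query = [], []
        for cls, samples in test_set.items():
            idx = rng.permutation(len(samples))
            support += [samples[i] for i in idx[:k]]   # k shots per class
            query += [samples[i] for i in idx[k:]]     # remaining samples
        phi = embed_finetune(support)
        results.append(evaluate(phi, query))
    return float(np.mean(results)), float(np.std(results))
```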
Fig. 7 shows 2- and 5-shot results on CUB200-2011, with CARS196 available in the supplementary. SOP is not considered since each class is already composed of a small number of samples. As we see, even very limited in-domain data can significantly improve generalization performance, with the benefit becoming stronger for larger distribution shifts. Moreover, we observe that weaker approaches like ArcFace [8] seem to benefit more than state-of-the-art methods like S2SD [53] or DiVA [38]. We presume this is because the underlying concepts of the latter already encourage learning of more robust and general features. To conclude, few-shot learning provides a substantial and reliable benefit when facing OOD learning settings where the shift is not known a priori.
5 Conclusion
In this work we analyzed zero-shot transfer of image representations learned by Deep Metric Learning (DML) models. We proposed a systematic construction of train-test data splits of increasing
difficulty, as opposed to standard evaluation protocols that test out-of-distribution generalization only on single data splits of fixed difficulty. Based on this, we presented the novel benchmark ooDML and thoroughly assessed current DML methods. Our study reveals the following main findings:

Standard evaluation protocols are insufficient to probe general out-of-distribution transfer: Prevailing train-test splits in DML are often close to i.i.d. evaluation settings. Hence, they only provide limited insights into the impact of train-test distribution shift on generalization performance. Our benchmark ooDML alleviates this issue by evaluating a large, controllable and measurable spectrum of problem difficulty to facilitate future research.

Larger distribution shifts show the impact of conceptual differences in DML approaches: Our study reveals that generalization performance degrades consistently with increasing problem difficulty for all DML methods. However, certain concepts underlying the approaches are shown to be more robust to shifts than others, such as semantic feature diversity and knowledge distillation.

Generic, self-supervised representations without finetuning can surpass dedicated data adaptation: When facing large distribution shifts, representations learned only by self-supervision on large amounts of unlabelled data are competitive with explicit DML training without any finetuning. However, their performance is heavily dependent on the data distribution and the models themselves.

Few-shot adaptation consistently improves out-of-distribution generalization in DML: Even very few examples from a target data distribution effectively help to adapt DML representations. The benefit becomes even more prominent with increasing train-test distribution shifts, and encourages further research into few-shot adaptation in DML.
Funding transparency statement This research has been funded by the German Federal Ministry for Economic Affairs and Energy within the project “KI-Absicherung – Safe AI for automated driving” and by the German Research Foundation (DFG) within projects 371923335 and 421703927. Moreover, it was funded in part by a CIFAR AI Chair at the Vector Institute, Microsoft Research, and an NSERC Discovery Grant. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute www.vectorinstitute.ai/#partners.
Acknowledgements We thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting K.R; K.R. acknowledges his membership in the European Laboratory for Learning and Intelligent Systems (ELLIS) PhD program. | 1. What is the focus and contribution of the paper regarding OOD performance measurement?
2. What are the strengths of the proposed proxy-metric for ranking dataset generalization difficulty?
3. Do you have any concerns or questions about the experimental design or methodology?
4. How does the reviewer assess the significance and impact of the paper's findings on the few-shot learning community?
5. Are there any suggestions for additional experiments or analysis that could enhance the paper's contributions? | Summary Of The Paper
Review | Summary Of The Paper
This work proposes a new way to measure OOD performance of metric-learning algorithms. It introduces a proxy-metric for measuring ranking dataset generalization difficulty based on FID score. They construct several train/test splits of increasing difficulty based on this score. They show that splits constructed in such a way are well correlated with performance, thus validating the proposed proxy-metric of OOD generalization difficulty. Several experiments are performed to assess the performance of modern metric learning methods. An AUC type score over all splits (difficulties) is proposed as an overall score for various methods. Finally, additional experiments that include increasing network capacity, using self-supervised pre-trained models, and fine-tuning with few-shot learning are presented.
Review
The proposed proxy-metric of difficulty makes sense, and is nicely empirically validated. I am glad to see that the splits/code will be open sourced because these will be very valuable to the community, particularly for moving beyond the recent negative results [52, 40]. Because this paper presents a new benchmark it may be well suited to the Benchmarks & Datasets track at NeurIPS (although I am happy to have it accepted via the main submission route as well). Although some of the conclusions are not particularly surprising (e.g. few-shot learning on the target distribution helps, self-supervised learning doesn't always work), they are nevertheless useful experiments for the community. Some questions:
1. How were the error bars in Figure 3 computed; aren't the splits deterministic?
2. According to Figure 1 there is a wide but limited region for each dataset where swapping affects the FID. Do you only use these dataset splits in your experiments, as the early/late ones lead to saturated FID scores? I know that these are further extended using removals.
3. Would the methods presented in the paper have impact on the few-shot learning community as well (e.g. MetaDataset uses fixed splits)?
4. It would be nice to see other methods that are known to increase generalization in the analyses; data augmentation is an obvious one that comes to mind.
NIPS | Title
Bridging Imagination and Reality for Model-Based Deep Reinforcement Learning
Abstract
Sample efficiency has been one of the major challenges for deep reinforcement learning. Recently, model-based reinforcement learning has been proposed to address this challenge by performing planning on imaginary trajectories with a learned world model. However, world model learning may suffer from overfitting to training trajectories, and thus model-based value estimation and policy search will be prone to getting stuck in an inferior local policy. In this paper, we propose a novel model-based reinforcement learning algorithm, called BrIdging Reality and Dream (BIRD). It maximizes the mutual information between imaginary and real trajectories so that the policy improvement learned from imaginary trajectories can be easily generalized to real trajectories. We demonstrate that our approach improves the sample efficiency of model-based planning, and achieves state-of-the-art performance on challenging visual control benchmarks.
1 Introduction
Reinforcement learning (RL) has been proposed as a general-purpose learning framework for artificial intelligence problems, and has led to tremendous progress in a variety of domains [1, 2, 3, 4]. Model-free RL adopts a trial-and-error paradigm, which directly learns a mapping function from observations to values or actions through interactions with environments. It has achieved remarkable performance in certain video games and continuous control tasks because of its simplicity and minimal assumptions about environments. However, model-free approaches are not yet sample efficient and require several orders of magnitude more training samples than human learning, which limits their applications to real-world tasks [5].
A promising direction for improving sample efficiency is to explore model-based RL, which first builds an action-conditioned world model and then performs planning or policy search based on the learned model. The world model encodes the representations and dynamics of an environment and is then used as a “dreamer” to perform multi-step lookaheads for planning or policy search. Recently, world models based on deep neural networks were developed to handle dynamics in complex high-dimensional environments, which offers opportunities for learning model-based policies with visual observations [6, 7, 8, 9, 10, 11, 12, 13].
⇤Equal Contribution
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
Model-based frameworks can be roughly grouped into four categories. First, Dyna-style algorithms alternate between building the world model from interactions with environments and performing policy optimization on simulated data generated by the learned model [14, 15, 16, 17, 11]. Second, model predictive control (MPC) and shooting algorithms alternate model learning, planning and action execution [18, 19, 20]. Third, model-augmented value expansion algorithms use model-based rollouts to improve targets for model-free temporal difference (TD) updates or policy gradients [21, 9, 6, 10]. Fourth, analytic-gradient algorithms leverage the gradients of the model-based imaginary returns with respect to the policy and directly propagate such gradients through a differentiable world model to the policy network [22, 23, 24, 25, 26, 27, 13]. Compared to conventional planning algorithms that generate numerous rollouts to select the highest-performing action sequence, analytic-gradient algorithms are more computationally efficient, especially in complex domains with deep neural networks. Dreamer [13], a landmark of analytic-gradient model-based RL, achieves state-of-the-art performance on visual control tasks.
However, most existing breakthroughs on analytic gradients focus on optimizing the policy on imaginary trajectories and leave the discrepancy between imagination and reality largely unstudied, which often bottlenecks their performance on real trajectories. In practice, a learning-based world model is not perfect, especially in complex environments. Unrolling an imperfect model for multiple steps generates a large accumulative error, leaving a gap between the generated trajectories and reality. If we directly optimize the policy based on the analytic gradients through the imaginary trajectories, the policy will tend to deviate from reality and get stuck in an inferior local solution.
Evidence from human cognition and learning in the physical world suggests that humans naturally have the capacity for self-reflection and introspection. In everyday life, we track and review our past thoughts and imaginations, introspect to further understand our internal states and interactions with the external world, and change our values and behavior patterns accordingly [28, 29]. Inspired by this insight, our basic idea is to leverage information from real trajectories to endow policy improvement on imaginations with awareness of the discrepancy between imagination and reality. We propose a novel reality-aware model-based framework, called BrIdging Reality and Dream (BIRD), which performs differentiable planning on imaginary trajectories and enables adaptive generalization to reality for the learned policy by optimizing mutual information between imaginary and real trajectories. Our model-based policy optimization framework naturally unifies confidence-aware analytic gradients, entropy regularization maximization, and model learning. We conduct experiments on challenging visual control benchmarks (DeepMind Control Suite with image inputs [30]) and the results demonstrate that BIRD achieves state-of-the-art performance in terms of sample efficiency. Our ablation study further verifies that the superiority of BIRD comes from mutual information maximization rather than from an increase in policy entropy.
2 Related Work
Model-Based Reinforcement Learning Model-based RL exhibits high sample efficiency and has been widely used in several real-world control tasks, such as robotics [31, 32, 7]. Dyna-style algorithms [14, 15, 16, 17, 11] optimize policies with samples generated from a learned world model. Model predictive control (MPC) and shooting methods [18, 19, 20] leverage planning to select actions, but suffer from expensive computation. In model-augmented value expansion approaches, MVE [21], VPN [6] and STEVE [9] use model-based rollouts to improve targets for model-free TD updates. MuZero [10] further incorporates Monte-Carlo tree search (MCTS) and achieves remarkable performance on Atari and board games. To manage visual control tasks, VisualMPC [33] introduces a visual prediction model to keep track of entities through occlusion by temporal skip connections. PlaNet [12] improves the model learning by combining deterministic and stochastic latent dynamics models. [34] presents a summary of model-based approaches and benchmarks popular algorithms for comparisons and extensions.
Analytic Value Gradients If a differentiable world model is available, analytic value gradients are proposed to directly update the policy by gradients that flow through the world model. PILCO [24] and iLQR [25] compute an analytic gradient by assuming Gaussian processes and linear functions for the dynamics model, respectively. Guided policy search (GPS) [26, 35, 36, 37, 38] uses deep neural networks to distill behaviors from the iLQR controller. Value Gradients (VG) [22] and Stochastic Value Gradients (SVG) [23] provide a new direction to calculate analytic value gradients through a generic differentiable world model. Dreamer [13] and IVG [27] further extend SVG by
generating imaginary rollouts in the latent space. However, these works focus on improving the policy in imaginations, leaving the discrepancy between imagination and reality largely unstudied. Our approach enables policy generalization to real-world interactions by maximizing mutual information between imagination and real trajectories, while optimizing the policy on imaginary trajectories. In addition, alternative end-to-end planning methods [39, 40] leverage analytic gradients, but they either focus on online planning in simple tasks [39] or require goal images and distance metrics for the reward function [40].
Information-Based Optimization In addition to maximizing the expected return objective, a reliable RL agent may exhibit more characteristics, like meaningful representations, strong generalization, and efficient exploration. Deep information-based methods [41, 42, 43, 44] recently show progress towards this direction. [45, 46, 47] are proposed to learn more efficient representations. Maximum entropy RL maximizes the entropy regularized return to obtain a robust policy [48, 49] and [50, 51] further connect policy optimization under such regularization with value based RL. [52] learns a goal-conditioned policy with information bottleneck to identify decision states. IDS [53] estimates the information gain for a sampling-based exploration strategy. These algorithms mainly focus on facilitating policy learning in the model-free setting, while BIRD aims at bridging imagination and reality by mutual information maximization in the context of model-based RL.
3 Preliminaries
3.1 Reinforcement Learning
A reinforcement learning agent aims at learning a policy to maximize cumulative rewards by exploring a Markov Decision Process (MDP) [54]. We denote the time step as $t$ and introduce a state $s_t \in \mathcal{S}$, an action $a_t \in \mathcal{A}$, a reward function $r(s_t, a_t)$, a policy $\pi_\theta(s)$, and a transition probability $p(s_{t+1} \mid s_t, a_t)$ to characterize the process of interacting with the environment. The goal of the agent is to find a policy parameter $\theta$ that maximizes the long-horizon summed rewards, represented
by a value function $v_\psi(s_t) \doteq \mathbb{E}\big(\sum_{i=t}^{t+H} \gamma^{i-t} r_i\big)$ parameterized by $\psi$. In model-based RL, the agent builds a world model $p_\phi$, parameterized by $\phi$, for the environmental dynamics $p$ and the reward function $r$, and then performs planning or policy search based on this model.
3.2 World Model
Considering that several complex tasks (e.g., visual control tasks [30]) are partially observable Markov decision processes (POMDPs), this paper adopts a world model similar to PlaNet [12] and Dreamer [13], which learns latent states from the history of visual observations and models the latent dynamics by LSTM-like recurrent networks. Specifically, the world model consists of the following modules:
$$\begin{aligned} \text{Representation model:}\quad & s_t \sim p_\phi(s_t \mid s_{t-1}, a_{t-1}, o_t) \\ \text{Transition model:}\quad & s_t \sim p_\phi(s_t \mid s_{t-1}, a_{t-1}) \\ \text{Observation model:}\quad & o_t \sim p_\phi(o_t \mid s_t) \\ \text{Reward model:}\quad & r_t \sim p_\phi(r_t \mid s_t). \end{aligned} \quad (1)$$
The representation model encodes the image input into a compact latent space, and the long-horizon dynamics on latent states are captured by a latent transition model. We use RSSM [12] as our transition model, which combines deterministic and stochastic transitions in order to learn dynamics more accurately and efficiently. For each latent state on the predicted trajectories, the observation model learns to reconstruct its visual observation, and the reward model predicts the immediate reward. The entire world model is optimized by a VAE-like objective $J^{\text{Model}}_\phi$ [55]:
$$J^{\text{Model}}_\phi(\tau^{\text{img}}, \tau^{\text{real}}) = \sum_{(a_{t-1}, o_t, r_t) \sim \tau^{\text{real}}} \Big[ \ln p_\phi(o_t \mid s_t) + \ln p_\phi(r_t \mid s_t) - D_{\mathrm{KL}}\big( p_\phi(s_t \mid s_{t-1}, a_{t-1}, o_t) \,\|\, p_\phi(s_t \mid s_{t-1}, a_{t-1}) \big) \Big]. \quad (2)$$
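For illustration, a minimal PyTorch-style sketch of evaluating the objective in Eq. 2 for one batch is shown below; the function signature and names are our own, and the distribution arguments are assumed to be `torch.distributions` objects (e.g. an Independent Normal over pixels):

```python
import torch
from torch import distributions as D

def world_model_loss(obs_dist, rew_dist, posterior, prior, obs, rew):
    """Negative of the objective in Eq. 2 for one batch (a sketch, our naming).

    obs_dist, rew_dist: torch.distributions objects for p(o_t|s_t), p(r_t|s_t)
    posterior, prior:   p(s_t|s_{t-1}, a_{t-1}, o_t) and p(s_t|s_{t-1}, a_{t-1})
    """
    recon = obs_dist.log_prob(obs).mean()       # ln p(o_t | s_t)
    reward = rew_dist.log_prob(rew).mean()      # ln p(r_t | s_t)
    kl = D.kl_divergence(posterior, prior).mean()
    return -(recon + reward - kl)               # maximize Eq. 2 => minimize negative
```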
3.3 Stochastic Value Gradients
Given a differentiable world model, stochastic value gradients (SVG) [22, 23] can be applied to directly compute the policy gradient on the whole imaginary trajectory, which is a recursive composition of the policy, transition, reward, and value functions. According to the stochastic Bellman equation, we have:
$$v(s) = \mathbb{E}_{\rho(\eta)}\Big[ r(s, \pi_\theta(s, \eta)) + \gamma\, \mathbb{E}_{\rho(\xi)}\big( v(p(s, \pi_\theta(s, \eta), \xi)) \big) \Big], \quad (3)$$
where $\eta \sim \rho(\eta)$ and $\xi \sim \rho(\xi)$ are noise variables drawn from fixed distributions for re-parameterization. So the gradients through trajectories can be iteratively computed as:
$$\frac{\partial v}{\partial s} = \mathbb{E}_{\rho(\eta)}\left( \frac{\partial r}{\partial s} + \frac{\partial r}{\partial a}\frac{\partial \pi}{\partial s} + \gamma\, \mathbb{E}_{\rho(\xi)}\left( \frac{\partial v}{\partial s'}\left( \frac{\partial p}{\partial s} + \frac{\partial p}{\partial a}\frac{\partial \pi}{\partial s} \right) \right) \right)$$
$$\frac{\partial v}{\partial \theta} = \mathbb{E}_{\rho(\eta)}\left( \frac{\partial r}{\partial a}\frac{\partial \pi}{\partial \theta} + \gamma\, \mathbb{E}_{\rho(\xi)}\left( \frac{\partial v}{\partial s'}\frac{\partial p}{\partial a}\frac{\partial \pi}{\partial \theta} + \frac{\partial v}{\partial \theta}\Big|_{s'} \right) \right), \quad (4)$$
where $s'$ denotes the next state given by the transition function. Intuitively, the policy can be improved by propagating analytic gradients with respect to the policy network through the imaginary trajectories.
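To illustrate how such analytic gradients are obtained in practice with automatic differentiation, below is a minimal sketch of an imagined-rollout loss; `policy`, `model`, `reward_fn` and `value_fn` are assumed differentiable modules, and all names are our own:

```python
import torch

def svg_policy_loss(policy, model, reward_fn, value_fn, s0, horizon, gamma=0.99):
    """Sketch of stochastic value gradients (Eqs. 3-4), assuming differentiable
    modules: model(s, a) -> s', reward_fn(s, a) -> r, value_fn(s) -> v.
    """
    s, total, discount = s0, 0.0, 1.0
    for _ in range(horizon):
        a = policy(s)                        # re-parameterized stochastic action
        total = total + discount * reward_fn(s, a)
        s = model(s, a)                      # gradients flow through the model
        discount *= gamma
    total = total + discount * value_fn(s)   # bootstrap beyond the horizon
    return -total.mean()                     # minimize negative imagined return
```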
4 Reality-Aware Model-Based Policy Improvement
In this section, we present a novel model-based RL framework, called BrIdging Reality and Dream (BIRD), as shown in Figure 1. The agent represents its policy function with a policy network. To estimate the future effects of its policy and enable potential policy improvement, it unrolls trajectories based on its world model using the current policy and optimizes the accumulative rewards on the imaginary trajectories. The policy network and the differentiable world model connect to one another, forming a larger trainable network, which supports differentiable planning and allows the analytic gradients of accumulative rewards with respect to the policy to flow through the world model. In the meantime, the agent also interacts with the real world and generates real trajectories. BIRD maximizes the mutual information between real and imaginary trajectories to endow both the policy network and the world model with adaptive generalization to real-world interactions. In summary, BIRD maximizes the total objective function:
$$J^{\text{BIRD}} = J^{\text{SVG}}_\theta(\tau^{\text{img\_roll}}) - L^{\text{TD}}_\psi(\tau^{\text{img\_roll}}) + w\, I_{\theta,\phi}(\tau^{\text{img}}, \tau^{\text{real}}), \quad (5)$$
where $\tau^{\text{real}}$ and $\tau^{\text{img}}$ indicate the real trajectories and the corresponding imaginary trajectories under the same policy, and $\tau^{\text{img\_roll}}$ indicates the imaginary trajectories rolled out during the optimization of policy improvement. $\theta$, $\psi$, and $\phi$ are the parameters of the policy network $\pi_\theta$, the value network $v_\psi$, and the world model $p_\phi$, respectively. The first two terms $J^{\text{SVG}}_\theta(\tau^{\text{img\_roll}}) - L^{\text{TD}}_\psi(\tau^{\text{img\_roll}})$ account for policy improvement on imaginations, the last term $I_{\theta,\phi}(\tau^{\text{img}}, \tau^{\text{real}})$ optimizes the mutual information, and $w$ is a weighting factor between them.
In conventional model-based RL approaches, real-world trajectories are normally used to optimize the model prediction error, which is quite different from BIRD. In complex domains, optimizing model prediction error cannot guarantee a perfect predictive model. Unrolling such an imperfect model for multiple steps will generate a large accumulative error, leaving a large gap between the generated trajectories and real ones. Thus, a policy optimized by such a model may overfit undesirable imaginations and have low generalization ability to reality, as also shown in our experiments (Figure 3). This problem is further exacerbated in analytic-gradient RL, which performs differentiable planning by gradient-based local search, because even a small gradient step along the imperfect model can easily reach a non-generalizable neighbourhood and lead in a direction of incorrect policy improvement. To address this problem, our method optimizes mutual information with respect to both the model and the policy, which makes policy improvement aware of the discrepancy between real and imaginary trajectories. Intuitively, BIRD optimizes the world model to be more realistic and reinforces the actions whose resulting imaginations not only have large accumulative rewards, but also resemble real trajectories. As a result, BIRD learns a policy from imaginations that generalizes more easily to the real-world environment.
4.1 Policy Improvement on Imaginations
As a model-based RL algorithm, BIRD improves the policy by maximizing the accumulative rewards of the imaginary trajectories unrolled by the world model. Conventional model-based approaches [18, 7, 11] perform policy improvement by selecting the optimal action sequence that maximizes the expected planning reward, that is $\max_{a_{t:t+H}} \mathbb{E}_{s_x \sim p_\phi} \sum_{x=t}^{t+H} r(s_x, a_x)$. If the world model is differentiable, we use stochastic value gradients (SVG) to directly leverage the gradients through the world model for policy improvement. Similar to Dreamer [13], our objective of maximizing the model-based value expansion within horizon $H$ is given by:
$$J^{\text{SVG}}_\theta(\tau^{\text{img}}) = \max_\theta \sum_{x=t}^{t+H} V(s_x),$$
$$V(s_x) = \mathbb{E}_{a_i \sim \pi_\theta,\, s_i \sim p_\phi(s_i \mid s_{i-1}, a_{i-1})} \sum_{k=1}^{H} \lambda_k \left[ \left( \sum_{i=x}^{h-1} \gamma^{i-x} r_i \right) + \gamma^{h-x} v_\psi(s_h) \right], \quad (6)$$
where $r_i$ represents the immediate reward at timestep $i$ predicted by the world model $\phi$. For each expansion length $k$, we expand the expected value from the current timestep $x$ to timestep $h-1$, with $h = \min(x+k, t+H)$, and use the learned value function $v_\psi(s_h)$ to estimate returns beyond $h$ steps, i.e., $v_\psi(s_h) = \mathbb{E}\big(\sum_{i=h}^{H} \gamma^{i-h} r_i\big)$. Here, we use an exponentially-weighted average of the estimates for different values of $k$ to balance bias and variance, with the exponential weighting factor denoted by $\lambda_k$. As shown in Equation 6, we alternate the policy network $\pi_\theta$ and the differentiable world model $p_\phi$, connect them to one another to form a large end-to-end trainable network, and then back-propagate the gradients of expected values with respect to the policy parameters $\theta$ through this large network. Intuitively, a gradient step of the policy network encourages the world model to produce a gradient step of new states, which in turn affects future values. As a result, the states and policy are optimized sequentially based on the feedback on future values. To optimize the value network, we use TD updates as in actor-critic algorithms [54, 56, 21], instead of Monte Carlo estimation:
$$L^{\text{TD}}_\psi(\tau^{\text{img}}) = \sum_{x=t}^{t+H} \left\| v_\psi(s_x) - V(s_x) \right\|^2. \quad (7)$$
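One common recursive implementation of such an exponentially-weighted value expansion (a lambda-return, as popularized by Dreamer) is sketched below; the shapes and names are our own assumptions:

```python
import torch

def lambda_returns(rewards, values, gamma=0.99, lam=0.95):
    """Exponentially-weighted multi-step value expansion (a lambda-return sketch).

    rewards: imagined rewards r_t .. r_{t+H-1}, shape (H, batch)
    values:  bootstrap values v(s_{t+1}) .. v(s_{t+H}), shape (H, batch)
    """
    H = rewards.shape[0]
    last = values[-1]
    outputs = []
    # Recursion: G_i = r_i + gamma * ((1 - lam) * v(s_{i+1}) + lam * G_{i+1}),
    # bootstrapped by the learned value at the imagination horizon.
    for i in reversed(range(H)):
        last = rewards[i] + gamma * ((1 - lam) * values[i] + lam * last)
        outputs.append(last)
    return torch.stack(outputs[::-1])
```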
4.2 Bridge Imagination and Reality by Mutual Information Maximization
To ensure that the policy improvement based on the learned world model is equally effective in the real world, we introduce an information-theoretic objective that optimizes mutual information between
real and imaginary trajectories with respect to the policy network and the world model:
$$\begin{aligned} I_{\theta,\phi}(\tau^{\text{img}}, \tau^{\text{real}}) &= H(\tau^{\text{real}}) - H(\tau^{\text{real}} \mid \tau^{\text{img}}) \\ &= H(\tau^{\text{real}}) - \sum_u P(u)\, H(\tau^{\text{real}} \mid \tau^{\text{img}} = u) \\ &= H(\tau^{\text{real}}) + \sum_u P(u) \sum_v P(v \mid u) \log P(\tau^{\text{real}} = v \mid u) \\ &= H(\tau^{\text{real}}) + \sum_{u,v} P(u, v) \log P(v \mid u). \end{aligned} \quad (8)$$
To reduce computational complexity, we alternately optimize the total mutual information with respect to the world model and the policy network. First, we fix the policy parameters $\theta$ and only optimize the world model parameters $\phi$ to maximize the total mutual information $I_{\theta,\phi}(\tau^{\text{img}}, \tau^{\text{real}})$. Since the first term $H(\tau^{\text{real}})$ measures the entropy of real trajectories generated by policy $\pi_\theta$ in the real MDP, it does not depend on the world model parameters and we can remove this term. As for the second term $\sum_{u,v} P(u, v) \log P(v \mid u)$, note that our world model in conjunction with the policy network can be regarded as a predictor of real trajectories, and this term serves as the log likelihood of a real trajectory given an imagined one. Thus, optimizing this term is equivalent to minimizing the prediction error on training pairs of imagined and real trajectories $(u, v)$. When the policy is fixed, $P(u, v)$ is tractable and we can directly approximate it by sampling data from the replay buffer $\mathcal{B}$ (i.e., a collection of experienced trajectories). Thus, the second term becomes $\sum_{u,v \sim \mathcal{B}} \log P(v \mid u; \phi)$, which is equivalent to the conventional model prediction error $L^{\text{Model}}_\phi$. In summary, we can get the gradient,
$$\nabla_\phi I_{\theta,\phi}(\tau^{\text{img}}, \tau^{\text{real}}) = -\nabla_\phi L^{\text{Model}}_\phi(\tau^{\text{img}}, \tau^{\text{real}}). \quad (9)$$
Second, we fix the model parameters $\phi$ and only optimize the policy parameters $\theta$ to maximize the total mutual information $I_{\theta,\phi}(\tau^{\text{img}}, \tau^{\text{real}})$. The first term of the mutual information becomes a maximization of the entropy of the current policy. In some sense, this term encourages exploration and also yields a robust policy. We use a Gaussian distribution $\mathcal{N}(m_\theta(s_t), v_\theta(s_t))$ to model the stochastic policy $\pi_\theta$, and can thus analytically compute its entropy on real data as $\mathbb{E}_{s_t \sim \tau^{\text{real}}} \frac{1}{2} \log 2\pi e\, v^2_\theta(s_t)$.
Then we consider how to optimize the second term, $\sum_{u,v} P(u, v) \log P(v \mid u)$. The joint distribution of real and imagined trajectories $P(u, v)$ is determined by the policy $\pi_\theta$. When the updates of the world model are stopped, the log likelihood of a real trajectory given an imagined one, $\log P(v \mid u)$, is fixed and can be regarded as a weight for optimizing the distribution $P(u, v)$ through the policy. Thus, the essential effect of maximizing $\sum_{u,v} P(u, v) \log P(v \mid u)$ with respect to the policy parameters $\theta$ is to guide the policy towards the space with high model prediction confidence (i.e., high log likelihood $\log P(v \mid u)$). Specifically, we implement it by a confidence-aware policy optimization, which reweights the degree of learning by the prediction confidence $\log P(\tau^{\text{img\_roll}} \mid \tau^{\text{img}})$ during the policy improvement process. The new objective of reweighted policy improvement is written as $\log P(\tau^{\text{img\_roll}} \mid \tau^{\text{img}})\, J^{\text{SVG}}_\theta(\tau^{\text{img\_roll}})$. In addition, we normalize the confidence weight for each batch to make training stable. In summary, the gradient of policy optimization is rewritten as:
$$\nabla_\theta \left( I_{\theta,\phi}(\tau^{\text{img}}, \tau^{\text{real}}) + J^{\text{SVG}}_\theta(\tau^{\text{img}}) \right) = \nabla_\theta \left( \mathbb{E}_{s_t \sim \tau^{\text{real}}} \tfrac{1}{2} \log 2\pi e\, v^2_\theta(s_t) + \log P(\tau^{\text{img\_roll}} \mid \tau^{\text{img}})\, J^{\text{SVG}}_\theta(\tau^{\text{img\_roll}}) \right). \quad (10)$$
From Equations 9 and 10, we can see that our total objective of optimizing mutual information between real and imaginary trajectories yields three terms: model error minimization, policy entropy maximization, and confidence-aware policy optimization. We have the same model error loss as Dreamer, and thus the main differences from Dreamer are the policy entropy maximization and the confidence-aware policy optimization. Intuitively, the entropy maximization term increases the search space of SVG-based policy search as in Dreamer and thus explores more possibilities. The confidence-aware optimization term then reweights the search results by confidence, which improves search quality and makes sure the additional search results enabled by larger entropy are reliable enough. This approach has strong connections to distributional shift refinement in the offline RL setting and may be beneficial to the batch RL community [57]. In addition, considering that $\tau^{\text{real}}$, $\tau^{\text{img}}$ and $\tau^{\text{img\_roll}}$ are trajectories under the current policy, we use a first-in-first-out replay buffer with limited capacity to mimic an approximately on-policy data stream.
Algorithm 1 summarizes our entire algorithm for optimizing the mutual information and the policy.

Algorithm 1 BIRD Algorithm
Initialize buffer $\mathcal{B}$ with a random agent. Initialize parameters $\theta$, $\psi$, $\phi$ randomly.
Set hyper-parameters: imagination horizon $H$, learning steps $C$, interaction steps $T$, batch size $B$, batch length $L$.
while not converged do
    for $i = 1 \ldots C$ do
        Draw $B$ data sequences $\{(o_t, a_t, r_t)\}_{t}^{t+L}$ from $\mathcal{B}$.
        Compute latent states $s_t \sim p_\phi(s_t \mid s_{t-1}, a_{t-1}, o_t)$ and imaginary trajectories $\{(s_x, a_x)\}_{x=t}^{t+H}$.
        For each $s_x$, predict rewards $p_\phi(r_x \mid s_x)$ and values $v_\psi(s_x)$.    ▷ Calculate imaginary returns
        Update $\theta$, $\psi$, $\phi$ using Equation 5.    ▷ Optimize policy and mutual information
    end for
    Reset $o_1$ in the real world.
    for $t = 1 \ldots T$ do
        Compute latent state $s_t \sim p_\phi(s_t \mid s_{t-1}, a_{t-1}, o_t)$.
        Compute $a_t \sim \pi_\theta(a_t \mid s_t)$ using the policy network and add exploration noise.
        Take action $a_t$ and get $r_t$, $o_{t+1}$ from the real world.    ▷ Interact with real world
    end for
    Add experience $\{(o_t, a_t, r_t)\}_{t=1}^{T}$ to $\mathcal{B}$.
end while
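For illustration, the reweighted policy objective of Eq. 10 might be implemented along the following lines; the softmax normalization of the confidence weights is our assumption (the paper only states that weights are normalized per batch), and all names are ours:

```python
import torch

def bird_policy_loss(svg_returns, log_conf, entropy, w=1e-8):
    """Confidence-aware policy objective (Eq. 10), a sketch.

    svg_returns: per-trajectory imagined returns J_SVG, differentiable w.r.t. the policy
    log_conf:    log P(tau_img_roll | tau_img), one prediction confidence per trajectory
    entropy:     policy entropy estimated on real states
    """
    # Normalize confidence weights per batch to stabilize training (Sec. 4.2);
    # the weights only reweight the objective, so no gradients flow through them.
    weights = torch.softmax(log_conf, dim=0).detach()
    reweighted = (weights * svg_returns).sum()
    return -(reweighted + w * entropy)  # maximize objective => minimize its negative
```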
4.3 Policy Optimization with Entropy Maximization
In the context of model-free RL, maximum entropy deep RL [49, 58] contributes to learning policies that are robust to estimation errors, raising a question: if we simply add a maximization objective for policy entropy in the context of model-based RL with stochastic value gradients, can we also obtain policies from imaginations that generalize well to the real environment? To answer this, we design an ablation version of BIRD, Soft-BIRD, which just adds an entropy-augmented objective to the return objective:
$$\pi^*_\theta = \underset{\theta}{\arg\max} \sum_t \mathbb{E}\big( r_t + \alpha H(\pi(\cdot \mid s_t)) \big), \quad (11)$$
where $\alpha$ is a hyper-parameter. We use a soft Bellman equation for the value function $v'_\psi(s_t)$, as in SAC [49], and rewrite the policy improvement objective $J'^{\text{SVG}}_\theta$ as:
$$v'_\psi(s_t) = \mathbb{E}\big( r_t + \alpha H(\pi_\theta(\cdot \mid s_t)) + \gamma v'_\psi(s_{t+1}) \big),$$
$$J'^{\text{SVG}}_\theta(\tau^{\text{img}}) = \mathbb{E}_{a_i \sim \pi_\theta,\, s_i \sim p_\phi(s_i \mid s_{i-1}, a_{i-1})} \sum_{k=1}^{H} \lambda_k \left[ \left( \sum_{i=x}^{h-1} \gamma^{i-x} \big( r_i + \alpha H(\pi_\theta(\cdot \mid s_i)) \big) \right) + \gamma^{h-x} v'_\psi(s_h) \right]. \quad (12)$$
Compared to BIRD, Soft-BIRD only maximizes the entropy of the policy instead of optimizing the mutual information between the real and imaginary trajectories generated by the policy, which provides further insight into the contribution of BIRD.
5 Experiments
We evaluate BIRD on the DeepMind Control Suite (https://github.com/deepmind/dm_control) [30], a standard benchmark for continuous control. In Section 5.2, we compare BIRD with both model-free and model-based RL methods. For model-free baselines, we compare with D4PG [59], a distributed extension of DDPG [2], and A3C [56], the distributed actor-critic approach. We include the scores for D4PG with pixel inputs and A3C with state inputs, which are also used as baselines in Dreamer. For model-based baselines, we use PlaNet [12] and Dreamer [13], two state-of-the-art model-based RL methods. Some popular model-based RL papers [60, 61, 62, 63] are not included in our experiments since they use MPC for sampling-based planning and have not shown effectiveness on RL tasks with image inputs. Compared to MPC-based approaches that generate many rollouts to select the highest-performing action sequence, our paper builds upon analytic value gradients that directly propagate gradients through a differentiable world model, which is more computationally efficient on domains that require learning from pixels. Our paper focuses on visual control tasks, and thus we only compare with the state-of-the-art algorithms for these tasks (i.e., PlaNet and Dreamer).
In addition, we conduct an ablation experiment in Section 5.3 to illustrate the contribution of mutual information maximization. In Section 5.4, we further study cases and visualize BIRD’s generalization to real-world information.
5.1 Experiment Setting
We mainly follow the experiment settings of Dreamer. Across all environments, observations are 64 × 64 × 3 images, rewards are scaled to 0 to 1, and the dimensions of the action space vary from 1 to 12. Action repeat is fixed at 2 for all tasks. We implement Dreamer using its released code (https://github.com/google-research/dreamer) and all hyper-parameters remain the same as reported. Since our model loss term in Equation 9 has the same form as Dreamer, we directly use the same model learning component as Dreamer, which adopts multi-step prediction and removes the latent overshooting used in PlaNet. We also use the same architecture for the neural networks and thus have the same computational complexity as Dreamer. Specifically, CNN layers are employed to compress observations into a latent state space and a GRU [64] is used for learning latent dynamics. The policy network, reward network, and value network are all implemented with multi-layer perceptrons (MLPs), each trained with the Adam optimizer [65]. For all experiments, we use a discount factor of 0.99 and a mutual information coefficient of 1e-8. The buffer size is 100k. We train BIRD with a single Nvidia 2080Ti and a single CPU, and it takes 8 hours to run 1 million samples.
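For reference, the settings stated above can be gathered into a single configuration; the values below are the ones reported in this section, while the key names are ours:

```python
bird_config = {
    "observation_shape": (64, 64, 3),  # image observations
    "action_repeat": 2,                # fixed for all tasks
    "discount": 0.99,                  # discount factor gamma
    "mi_coefficient": 1e-8,            # weight w on the mutual-information term
    "buffer_size": 100_000,            # FIFO replay capacity in steps
    "optimizer": "adam",               # for policy, reward, and value MLPs
}
```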
5.2 Results on DeepMind Control Suite
Learning policy from raw visual observations has always been a challenging problem for RL algorithms. We significantly improve upon the state-of-the-art visual control approach on tasks from the DeepMind Control Suite, which provides a promising avenue for model-based policy learning from pixels. Figure 5 shows the training curves on 6 tasks, and additional results are placed in the supplementary materials. Comparison results demonstrate that BIRD significantly outperforms baselines in terms of sample efficiency. We observe that BIRD can use half the training samples to obtain the same score as PlaNet and Dreamer in Hopper Stand and Hopper Hop. Among all tasks, BIRD achieves performance comparable to D4PG and A3C, which are trained with 1,000 times more samples. In addition, BIRD achieves convergence scores higher than or similar to the baselines in all tasks. Here, we provide insights into the superiority of BIRD. As the mutual information between real and imaginary trajectories increases, the behaviors that BIRD learns using the world model can be adapted to the real environment more appropriately and faster, while other model-based methods require a slower adaptation process. Besides, although world models usually tend to overfit poor policies in the early stage, BIRD will not be tempted by greedy policy optimization on the poor trajectories generated by such an imperfect model, because the entropy maximization term in Equation 10 endows the agent with a stronger exploration ability, and the confidence-aware policy optimization term encourages it to re-estimate all the gathered trajectories and focus on optimizing high-confidence ones.
5.3 Ablation Study
In order to verify that the outperformance of BIRD is not simply due to increasing the entropy of the policy, we conduct an ablation study that compares BIRD with Soft-BIRD (Section 4.3). Figure 5 shows that even the best performance of Soft-BIRD still leaves a big gap to BIRD. As shown in Walker Run of Figure 5, we find that the score of Soft-BIRD first rises for a while, but eventually falls. The failure of Soft-BIRD suggests that policy improvement in model-based RL with analytic gradients is bottlenecked by the discrepancy between reality and imagination, so improving the entropy of the policy alone will not help.
5.4 Case Study: Predictions on Key Actions
Our algorithm learns a world model with better generalization to real trajectories, especially on key actions that matter for long-horizon behavior learning. We visualize some predictions on key actions, such as the explosive force for standing up and jumping in Hopper Stand and Hopper Hop, stomping with the front leg to prevent a tumble in Walker Run, and throwing the pole up to keep stable in Cartpole Swingup. As shown in Figure 3, BIRD makes more accurate predictions compared to Dreamer. For example, in Hopper Hop, Dreamer wrongly predicts that the agent will fall down at the takeoff moment, while BIRD has the accurate foresight that the agent will leap from the ground. Precise forecasting of the key actions implicitly suggests that the imaginary trajectories generated by our learned policy indeed possess more real-world information.
6 Conclusion
Generalization from imagination to reality is a crucial yet challenging problem in the context of model-based RL. In this paper, we propose a novel model-based framework, called BrIdging Reality and Dream (BIRD), which not only performs differentiable planning on imaginary trajectories, but also encourages adaptive generalization to reality by optimizing mutual information between imaginary and real trajectories. Results on challenging visual control tasks demonstrate that our algorithm achieves state-of-the-art performance in terms of sample efficiency. Our ablation study further shows that the superiority is attributed to maximizing mutual information rather than simply increasing the entropy of the policy. In the future, we will explore directions to further improve the generalization of imaginations, such as generalizable representations and reusable skill discovery.
Broader Impact
Model-free RL requires a large amount of samples, which limits its applications to real-world tasks. For example, the trial-and-error training process of a robot requires substantial manpower and financial resources, and certain harmful actions can greatly reduce the life of the robot. Building a world model and learning behaviors by imagination provides a broader prospect for real-world applications. This paper is situated in model-based RL and further improves sample efficiency over existing work, which will accelerate the development of real-world applications in automatic control, such as robotics and autonomous driving. In addition, this paper tackles a valuable problem about generalization, from imagination to reality, and thus is also of great interest to researchers in generalizable machine learning.
In the long run, this paper will improve the efficiency of factory operations, spare humans the repetition of difficult or dangerous work, save costs, and reduce risks in industry and agriculture. For daily life, it will create a more intelligent lifestyle and improve the quality of life.
Our algorithm is a generic framework that does not leverage biases in data. We evaluated our model on a popular benchmark of visual control tasks. However, similar to a majority of deep learning approaches, our algorithm has a common disadvantage: the learned knowledge and policy are not easily interpretable by humans, and it is hard for us to know why the agent learns to act so well. Interpretability has always been a challenging open question, and in the future we are interested in incorporating recent deep learning progress on causal inference into RL.
Acknowledgments and Disclosure of Funding
This work is supported in part by Science and Technology Innovation 2030 – “New Generation Artificial Intelligence” Major Project (No. 2018AAA0100904), and a grant from the Institute of Guo Qiang, Tsinghua University. | 1. What is the main contribution of the paper, and how does it address the issue in current MBRL methods?
2. What are the strengths of the proposed algorithm, particularly in its derivation and results?
3. What are the weaknesses of the paper, especially regarding its novelty and comparison with other works?
4. How does the reviewer assess the clarity and thoroughness of the paper's content, including its related work section, flow diagrams, pseudocode, and figures?
5. What additional experiments or clarifications would help address the reviewer's concerns about the novelty and differences between the proposed method and prior works? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This work proposes an algorithm, BIRD, that addresses the issue in current MBRL methods where the policy or value function is updated with incorrect trajectories from a learned model. They propose to incorporate an additional objective that maximizes the mutual information between the model predicted trajectory and the real on-policy trajectory to fix this issue. This objective consists of two terms, one which maximizes the entropy of the policy, and another that minimizes model error.
Strengths
The authors show the derivation of the objective in a very clear way, of incorporating a mutual information maximization between the model output trajectories and true trajectories. There are compelling results comparing performance in 6 DMC tasks, showing that BIRD performs better than baselines like Planet and Dreamer. The flow diagrams and pseudocode make very clear the modifications to the Dreamer algorithm, and there is a nice, very thorough related work section that breaks down the different types of MBRL methods. Figure 3 is a good, qualitative example of how their method learns a more accurate model for key actions compared to Dreamer.
Weaknesses
While I find this paper reasonably thorough, I'm skeptical of the novelty. It seems the two components that differentiate it from Dreamer come from this mutual information maximization objective, which is to maximize the policy entropy and minimize the model loss. While there is an ablation showing what happens if you remove the model loss component, there is no ablation showing what happens if you remove the entropy maximization. My assumption is that the core reason for improvement is the model loss, which is not a surprising result. Doing this ablation would address this concern. It is also unclear to me how the model loss component differs from the original Dreamer objective to train the latent transition model? Is it that it is matching an entire trajectory of data instead of 1-step transitions pulled from a replay buffer? Would this make it more similar to the latent overshooting used in the Planet paper? More clarification here would be helpful. |
NIPS | Title
Bridging Imagination and Reality for Model-Based Deep Reinforcement Learning
Abstract
Sample efficiency has been one of the major challenges for deep reinforcement learning. Recently, model-based reinforcement learning has been proposed to address this challenge by performing planning on imaginary trajectories with a learned world model. However, world model learning may suffer from overfitting to training trajectories, and thus model-based value estimation and policy search will be prone to getting stuck in an inferior local policy. In this paper, we propose a novel model-based reinforcement learning algorithm, called BrIdging Reality and Dream (BIRD). It maximizes the mutual information between imaginary and real trajectories so that the policy improvement learned from imaginary trajectories can be easily generalized to real trajectories. We demonstrate that our approach improves sample efficiency of model-based planning, and achieves state-of-the-art performance on challenging visual control benchmarks.
1 Introduction
Reinforcement learning (RL) is proposed as a general-purpose learning framework for artificial intelligence problems, and has led to tremendous progress in a variety of domains [1, 2, 3, 4]. Model-free RL adopts a trial-and-error paradigm, which directly learns a mapping function from observations to values or actions through interactions with environments. It has achieved remarkable performance in certain video games and continuous control tasks because of its simplicity and minimal assumptions about environments. However, model-free approaches are not yet sample efficient and require several orders of magnitude more training samples than human learning, which limits their applications to real-world tasks [5].
A promising direction for improving sample efficiency is to explore model-based RL, which first builds an action-conditioned world model and then performs planning or policy search based on the learned model. The world model, which encodes the representations and dynamics of an environment, is then used as a “dreamer” to do multi-step lookaheads for planning or policy search. Recently, world models based on deep neural networks were developed to handle dynamics in complex high-dimensional environments, which offers opportunities for learning model-based policies with visual observations [6, 7, 8, 9, 10, 11, 12, 13].
*Equal Contribution
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
Model-based frameworks can be roughly grouped into four categories. First, Dyna-style algorithms alternate between building the world model from interactions with environments and performing policy optimization on simulated data generated by the learned model [14, 15, 16, 17, 11]. Second, model predictive control (MPC) and shooting algorithms alternate model learning, planning, and action execution [18, 19, 20]. Third, model-augmented value expansion algorithms use model-based rollouts to improve targets for model-free temporal difference (TD) updates or policy gradients [21, 9, 6, 10]. Fourth, analytic-gradient algorithms leverage the gradients of the model-based imaginary returns with respect to the policy and directly propagate such gradients through a differentiable world model to the policy network [22, 23, 24, 25, 26, 27, 13]. Compared to conventional planning algorithms that generate numerous rollouts to select the highest performing action sequence, analytic-gradient algorithms are more computationally efficient, especially in complex domains with deep neural networks. Dreamer [13], as a landmark of analytic-gradient model-based RL, achieves state-of-the-art performance on visual control tasks.
However, most existing breakthroughs on analytic gradients focus on optimizing the policy on imaginary trajectories and leave the discrepancy between imagination and reality largely unstudied, which often bottlenecks their performance on real trajectories. In practice, a learning-based world model is not perfect, especially in complex environments. Unrolling with an imperfect model for multiple steps generates a large accumulative error, leaving a gap between the generated trajectories and reality. If we directly optimize the policy based on the analytic gradients through the imaginary trajectories, the policy will tend to deviate from reality and get stuck in an inferior local solution.
Evidence from humans’ cognition and learning in the physical world suggests that humans naturally have the capacity for self-reflection and introspection. In everyday life, we track and review our past thoughts and imaginations, introspect to further understand our internal states and interactions with the external world, and change our values and behavior patterns accordingly [28, 29]. Inspired by this insight, our basic idea is to leverage information from real trajectories to endow policy improvement on imaginations with awareness of the discrepancy between imagination and reality. We propose a novel reality-aware model-based framework, called BrIdging Reality and Dream (BIRD), which performs differentiable planning on imaginary trajectories, as well as enables adaptive generalization to reality for the learned policy by optimizing mutual information between imaginary and real trajectories. Our model-based policy optimization framework naturally unifies confidence-aware analytic gradients, entropy regularization maximization, and model learning. We conduct experiments on challenging visual control benchmarks (DeepMind Control Suite with image inputs [30]) and the results demonstrate that BIRD achieves state-of-the-art performance in terms of sample efficiency. Our ablation study further verifies that the superiority of BIRD comes from mutual information maximization rather than from the increase of policy entropy.
2 Related Work
Model-Based Reinforcement Learning Model-based RL exhibits high sample efficiency and has been widely used in several real-world control tasks, such as robotics [31, 32, 7]. Dyna-style algorithms [14, 15, 16, 17, 11] optimize policies with samples generated from a learned world model. Model predictive control (MPC) and shooting methods [18, 19, 20] leverage planning to select actions, but suffer from expensive computation. In model-augmented value expansion approaches, MVE [21], VPN [6] and STEVE [9] use model-based rollouts to improve targets for model-free TD updates. MuZero [10] further incorporates Monte-Carlo tree search (MCTS) and achieves remarkable performance on Atari and board games. To manage visual control tasks, VisualMPC [33] introduces a visual prediction model to keep track of entities through occlusion by temporal skip connections. PlaNet [12] improves the model learning by combining deterministic and stochastic latent dynamics models. [34] presents a summary of model-based approaches and benchmarks popular algorithms for comparisons and extensions.
Analytic Value Gradients If a differentiable world model is available, analytic value gradients are proposed to directly update the policy by gradients that flow through the world model. PILCO [24] and iLQR [25] compute an analytic gradient by assuming Gaussian processes and linear functions for the dynamics model, respectively. Guided policy search (GPS) [26, 35, 36, 37, 38] uses deep neural networks to distill behaviors from the iLQR controller. Value Gradients (VG) [22] and Stochastic Value Gradients (SVG) [23] provide a new direction to calculate analytic value gradients through a generic differentiable world model. Dreamer [13] and IVG [27] further extend SVG by
generating imaginary rollouts in the latent space. However, these works focus on improving the policy in imaginations, leaving the discrepancy between imagination and reality largely unstudied. Our approach enables policy generalization to real-world interactions by maximizing mutual information between imagination and real trajectories, while optimizing the policy on imaginary trajectories. In addition, alternative end-to-end planning methods [39, 40] leverage analytic gradients, but they either focus on online planning in simple tasks [39] or require goal images and distance metrics for the reward function [40].
Information-Based Optimization In addition to maximizing the expected return objective, a reliable RL agent may exhibit more characteristics, like meaningful representations, strong generalization, and efficient exploration. Deep information-based methods [41, 42, 43, 44] recently show progress towards this direction. [45, 46, 47] are proposed to learn more efficient representations. Maximum entropy RL maximizes the entropy regularized return to obtain a robust policy [48, 49] and [50, 51] further connect policy optimization under such regularization with value based RL. [52] learns a goal-conditioned policy with information bottleneck to identify decision states. IDS [53] estimates the information gain for a sampling-based exploration strategy. These algorithms mainly focus on facilitating policy learning in the model-free setting, while BIRD aims at bridging imagination and reality by mutual information maximization in the context of model-based RL.
3 Preliminaries
3.1 Reinforcement Learning
A reinforcement learning agent aims at learning a policy to maximize cumulative rewards by exploring in a Markov Decision Process (MDP) [54]. We denote the time step as t and introduce the state s_t ∈ S, action a_t ∈ A, reward function r(s_t, a_t), a policy π_θ(s), and a transition probability p(s_{t+1}|s_t, a_t) to characterize the process of interacting with the environment. The goal of the agent is to find a policy parameter θ that maximizes the long-horizon summed rewards, represented by a value function v_\psi(s_t) \doteq \mathbb{E}\left( \sum_{i=t}^{t+H} \gamma^{i-t} r_i \right) parameterized with ψ. In model-based RL, the agent builds a world model p_φ, parameterized by φ, for the environmental dynamics p and reward function r, and then performs planning or policy search based on this model.
3.2 World Model
Considering that several complex tasks (e.g., visual control tasks [30]) are partially observable Markov decision processes (POMDPs), this paper adopts a world model similar to PlaNet [12] and Dreamer [13], which learns latent states from the history of visual observations and models the latent dynamics by LSTM-like recurrent networks. Specifically, the world model consists of the following modules:
Representation model: s_t ∼ p_φ(s_t | s_{t-1}, a_{t-1}, o_t)
Transition model: s_t ∼ p_φ(s_t | s_{t-1}, a_{t-1})
Observation model: o_t ∼ p_φ(o_t | s_t)
Reward model: r_t ∼ p_φ(r_t | s_t).   (1)
The representation model encodes the image input into a compact latent space, and the long-horizon dynamics on latent states are captured by a latent transition model. We use RSSM [12] as our transition model, which combines deterministic and stochastic transition models in order to learn dynamics more accurately and efficiently. For each latent state on the predicted trajectories, the observation model learns to reconstruct its visual observation, and the reward model predicts the immediate reward. The entire world model J^Model_φ is optimized by a VAE-like objective [55]:
J^{Model}_\phi(\tau^{img}, \tau^{real}) = \sum_{(a_{t-1}, o_t, r_t) \sim \tau^{real}} \Big[ \ln p_\phi(o_t|s_t) + \ln p_\phi(r_t|s_t) - D_{KL}\big( p_\phi(s_t|s_{t-1}, a_{t-1}, o_t) \,\|\, p_\phi(s_t|s_{t-1}, a_{t-1}) \big) \Big].   (2)
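To make the structure of Equations 1 and 2 concrete, here is a minimal PyTorch-style sketch. The layer sizes, unit-variance likelihoods, and diagonal-Gaussian heads are our simplifying assumptions, not the exact RSSM architecture or Dreamer's loss.

```python
import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence

class TinyWorldModel(nn.Module):
    """Schematic of the four modules in Equation 1 (sizes are placeholders)."""
    def __init__(self, obs_dim=32, act_dim=4, state_dim=30, hidden=200):
        super().__init__()
        self.gru = nn.GRUCell(state_dim + act_dim, hidden)            # deterministic path
        self.prior = nn.Linear(hidden, 2 * state_dim)                 # p(s_t | s_{t-1}, a_{t-1})
        self.posterior = nn.Linear(hidden + obs_dim, 2 * state_dim)   # p(s_t | s_{t-1}, a_{t-1}, o_t)
        self.obs_head = nn.Linear(state_dim, obs_dim)                 # p(o_t | s_t)
        self.reward_head = nn.Linear(state_dim, 1)                    # p(r_t | s_t)

    def step(self, state, action, h, obs=None):
        h = self.gru(torch.cat([state, action], -1), h)
        stats = self.prior(h) if obs is None else self.posterior(torch.cat([h, obs], -1))
        mean, log_std = stats.chunk(2, -1)
        return mean, log_std.exp(), h

def model_objective(obs, reward, obs_pred, reward_pred, post, prior):
    """J^Model of Equation 2 (to be maximized): reconstruction log-likelihoods
    minus the KL between posterior and prior latents; unit-variance likelihoods assumed.
    post and prior are (mean, std) tuples of diagonal Gaussians."""
    recon = Normal(obs_pred, 1.0).log_prob(obs).sum(-1) \
          + Normal(reward_pred, 1.0).log_prob(reward).sum(-1)
    kl = kl_divergence(Normal(*post), Normal(*prior)).sum(-1)
    return (recon - kl).mean()
```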
3.3 Stochastic Value Gradients
Given a differentiable world model, stochastic value gradients (SVG) [22, 23] can be applied to directly compute the policy gradient on the whole imaginary trajectory, which is a recursive composition of policy, transition, reward, and value function. According to the stochastic Bellman equation, we have:
v(s) = \mathbb{E}_{\rho(\eta)} \big[ r(s, \pi_\theta(s, \eta)) + \gamma\, \mathbb{E}_{\rho(\xi)} \big( v(p(s, \pi_\theta(s, \eta), \xi)) \big) \big],   (3)
where η ∼ ρ(η) and ξ ∼ ρ(ξ) are noises from a fixed noise distribution for re-parameterization. So the gradients through trajectories can be iteratively computed as:
\frac{\partial v}{\partial s} = \mathbb{E}_{\rho(\eta)} \left[ \frac{\partial r}{\partial s} + \frac{\partial r}{\partial a} \frac{\partial \pi}{\partial s} + \gamma\, \mathbb{E}_{\rho(\xi)} \left( \frac{\partial v}{\partial s'} \left( \frac{\partial p}{\partial s} + \frac{\partial p}{\partial a} \frac{\partial \pi}{\partial s} \right) \right) \right]

\frac{\partial v}{\partial \theta} = \mathbb{E}_{\rho(\eta)} \left[ \frac{\partial r}{\partial a} \frac{\partial \pi}{\partial \theta} + \gamma\, \mathbb{E}_{\rho(\xi)} \left( \frac{\partial v}{\partial s'} \frac{\partial p}{\partial a} \frac{\partial \pi}{\partial \theta} + \frac{\partial v}{\partial \theta}\Big|_{s'} \right) \right],   (4)

where s' denotes the next state given by the transition function. Intuitively, the policy can be improved by propagating analytic gradients with respect to the policy network through the imaginary trajectories.
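Equation 4 is exactly what reverse-mode automatic differentiation computes when the model, reward, and policy are differentiable. Below is a toy, self-contained example; the linear dynamics and quadratic reward are invented purely for illustration and are not the paper's world model.

```python
import torch

# Toy differentiable pieces: linear dynamics, quadratic reward, linear Gaussian policy.
A, B_mat = torch.eye(2) * 0.9, torch.ones(2, 1) * 0.1
theta = torch.zeros(1, 2, requires_grad=True)         # policy parameters

def rollout_return(s, horizon=10, gamma=0.99):
    ret = torch.zeros(())
    for t in range(horizon):
        a = s @ theta.t() + 0.01 * torch.randn(1, 1)  # reparameterized stochastic policy
        r = -(s ** 2).sum() - 0.1 * (a ** 2).sum()    # differentiable reward
        ret = ret + (gamma ** t) * r
        s = s @ A.t() + a @ B_mat.t()                 # differentiable transition
    return ret

ret = rollout_return(torch.ones(1, 2))
ret.backward()        # analytic value gradient dv/dtheta through the whole rollout
print(theta.grad)
```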
4 Reality-Aware Model-Based Policy Improvement
In this section, we present a novel model-based RL framework, called BrIdging Reality and Dream (BIRD), as shown in Figure 1. The agent represents its policy function with a policy network. To estimate the future effects of its policy and enable potential policy improvement, it unrolls trajectories based on its world model using the current policy and optimizes the accumulative rewards on the imaginary trajectories. The policy network and differentiable world model connect to one another, forming a larger trainable network, which supports differentiable planning and allows the analytic gradients of accumulative rewards with respect to the policy to flow through the world model. In the meantime, the agent also interacts with the real world and generates real trajectories. BIRD maximizes the mutual information between real and imaginary trajectories to endow both the policy network and the world model with adaptive generalization to real-world interactions. In summary, BIRD maximizes the total objective function:
J_{BIRD} = J^{SVG}_\theta(\tau^{img\_roll}) - L^{TD}_\psi(\tau^{img\_roll}) + w\, I_{\theta,\phi}(\tau^{img}, \tau^{real}),   (5)
where τ^real and τ^img indicate the real trajectories and the corresponding imaginary trajectories under the same policy, and τ^img_roll indicates the imaginary trajectories rolled out during the optimization of policy improvement. θ, ψ, and φ are the parameters of the policy network π_θ, value network v_ψ, and world model p_φ, respectively. The first two terms J^SVG_θ(τ^img_roll) − L^TD_ψ(τ^img_roll) account for policy improvement on imaginations, the last term I_{θ,φ}(τ^img, τ^real) optimizes the mutual information, and w is a weighting factor between them.
In conventional model-based RL approaches, real-world trajectories are normally used to optimize the model prediction error, which is quite different from BIRD. In complex domains, optimizing model prediction error cannot guarantee a perfect predictive model. Unrolling with such an imperfect model for multiple steps will generate a large accumulative error, leaving a large gap between the generated trajectories and real ones. Thus, a policy optimized by such a model may overfit undesirable imaginations and have low generalization ability to reality, which is also shown in our experiments (Figure 3). This problem is further exacerbated in analytic-gradient RL, which performs differentiable planning by gradient-based local search: even a small gradient step along the imperfect model can easily reach a non-generalizable neighbourhood and lead to a direction of incorrect policy improvement. To address this problem, our method optimizes mutual information with respect to both the model and the policy, which makes policy improvement aware of the discrepancy between real and imaginary trajectories. Intuitively, BIRD optimizes the world model to be more real and reinforces the actions whose resulting imaginations not only have large accumulative rewards, but also resemble real trajectories. As a result, BIRD learns a policy from imaginations with easier generalization to the real-world environment.
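In implementation terms, Equation 5 becomes one scalar loss. A minimal sketch, assuming the three components are computed elsewhere and flipping the sign so a standard gradient-descent optimizer can be used:

```python
def bird_loss(j_svg, l_td, mi_term, w=1e-8):
    """Negative of J_BIRD (Equation 5): maximize imaginary returns and the
    mutual-information term while minimizing the TD error."""
    return -(j_svg - l_td + w * mi_term)
```

Here w corresponds to the mutual information coefficient of 1e-8 reported in Section 5.1.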
4.1 Policy Improvement on Imaginations
As a model-based RL algorithm, BIRD improves the policy by maximizing the accumulative rewards of the imaginary trajectories unrolled by the world model. Conventional model-based approaches [18, 7, 11] perform policy improvement by selecting the optimal action sequence that maximizes the expected planning reward, that is, max_{a_{t:t+H}} E_{s_x ∼ p_φ} Σ_{x=t}^{t+H} r(s_x, a_x). If the world model is differentiable, we use stochastic value gradients (SVG) to directly leverage the gradients through the world model for policy improvement. Similar to Dreamer [13], our objective of maximizing the model-based value expansion within horizon H is given by:
J^{SVG}_\theta(\tau^{img}) = \max_\theta \sum_{x=t}^{t+H} V_\lambda(s_x),

V_\lambda(s_x) = \mathbb{E}_{a_i \sim \pi_\theta,\, s_i \sim p_\phi(s_i|s_{i-1}, a_{i-1})} \sum_{k=1}^{H} \lambda_k \left[ \left( \sum_{i=t}^{h-1} \gamma^{i-t} r_i \right) + \gamma^{h-t} v_\psi(s_h) \right],   (6)

where r_i represents the immediate reward at timestep i predicted by the world model φ. For each expansion length k, we expand the expected value from the current timestep x to timestep h − 1 (h = min(x + k, t + H)) and use the learned value function v_ψ(s_h) to estimate returns beyond h steps, i.e., v_\psi(s_h) = \mathbb{E}\left( \sum_{i=h}^{H} \gamma^{i-h} r_i \right). Here, we use the exponentially-weighted average of the estimates for different values of k to balance bias and variance, and the exponential weighting factor is denoted by λ_k. As shown in Equation 6, we alternate the policy network π_θ and the differentiable world model p_φ, connecting them to one another to form a large end-to-end trainable network, and then back-propagate the gradients of the expected values with respect to the policy parameters θ through this large network. Intuitively, a gradient step of the policy network encourages the world model to produce a gradient step of new states, which in turn affects future values. As a result, the states and policy will be optimized sequentially based on the feedback on future values. To optimize the value network, we use TD updates as in actor-critic algorithms [54, 56, 21], instead of Monte Carlo estimation:
L^{TD}_\psi(\tau^{img}) = \sum_{x=t}^{t+H} \left\| v_\psi(s_x) - V_\lambda(s_x) \right\|^2.   (7)
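A small numeric sketch of the exponentially-weighted value expansion V_λ in Equation 6, with predicted rewards and values as plain lists. The geometric weights λ_k = (1 − λ)λ^{k−1}, with the remaining mass on the full horizon, are a common choice that we assume here rather than take from the paper.

```python
def value_expansion(rewards, values, gamma=0.99, lam=0.95):
    """Exponentially weighted average of k-step value expansions (Equation 6).
    rewards[i] is the predicted reward at imagination step i; values[i] is the
    predicted value of the state reached after i + 1 steps (our indexing)."""
    H = len(rewards)
    weights = [(1 - lam) * lam ** (k - 1) for k in range(1, H + 1)]
    weights[-1] = lam ** (H - 1)           # remaining mass goes to the full horizon
    total = 0.0
    for k, w in zip(range(1, H + 1), weights):
        partial = sum(gamma ** i * rewards[i] for i in range(k))
        total += w * (partial + gamma ** k * values[k - 1])
    return total

print(value_expansion([1.0, 0.9, 0.8], [5.0, 4.5, 4.0]))
```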
4.2 Bridge Imagination and Reality by Mutual Information Maximization
To ensure the policy improvement based on the learned world model is equally effective in the real world, we introduce an information-theoretic objective that optimizes mutual information between real and imaginary trajectories with respect to the policy network and the world model:

I_{\theta,\phi}(\tau^{img}, \tau^{real}) = H(\tau^{real}) - H(\tau^{real} \mid \tau^{img})
 = H(\tau^{real}) - \sum_u P(u)\, H(\tau^{real} \mid \tau^{img} = u)
 = H(\tau^{real}) + \sum_u P(u) \sum_v P(v|u) \log P(\tau^{real} = v \mid u)
 = H(\tau^{real}) + \sum_{u,v} P(u, v) \log P(v|u).   (8)
To reduce computational complexity, we alternately optimize the total mutual information with respect to the world model and the policy network. First, we fix the policy parameters θ and only optimize the parameters φ of the world model to maximize the total mutual information I_{θ,φ}(τ^img, τ^real). Since the first term H(τ^real) measures the entropy of real trajectories generated by policy π_θ on the real MDP, it is not related to the parameters of the world model and we can remove this term. As for the second term Σ_{u,v} P(u, v) log P(v|u), we note that our world model, in conjunction with the policy network, can be regarded as a predictor for real trajectories, and the second term serves as the log likelihood of a real trajectory given an imagined one. Thus, optimizing this term is equivalent to minimizing the prediction error on training pairs of imagined and real trajectories (u, v). When the policy is fixed, P(u, v) is tractable and we can directly approximate it by sampling data from the replay buffer B (i.e., a collection of experienced trajectories). Thus, the second term becomes Σ_{u,v ∼ B} log P(v|u; φ), which is equivalent to the conventional model prediction error L^Model_φ. In summary, we can get the gradient,
\nabla_\phi I_{\theta,\phi}(\tau^{img}, \tau^{real}) = -\nabla_\phi L^{Model}_\phi(\tau^{img}, \tau^{real}).   (9)

Second, we fix the model parameters φ and only optimize the parameters θ of the policy network to maximize the total mutual information I_{θ,φ}(τ^img, τ^real). The first term of the mutual information becomes maximizing the entropy of the current policy. In some sense, this term encourages exploration and also learns a robust policy. We use a Gaussian distribution N(m_θ(s_t), v_θ(s_t)) to model the stochastic policy π_θ, and thus can analytically compute its entropy on real data as \mathbb{E}_{s_t \sim \tau^{real}} \frac{1}{2} \log 2\pi e v^2_\theta(s_t). Then we consider how to optimize the second term, Σ_{u,v} P(u, v) log P(v|u). The joint distribution of real and imagined trajectories P(u, v) is determined by the policy π_θ. When the updates of the world model are stopped, the log likelihood of a real trajectory given an imagined one, log P(v|u), is fixed and can be regarded as the weight for optimizing the distribution P(u, v) by the policy. Thus, the essential objective of maximizing Σ_{u,v} P(u, v) log P(v|u) with respect to the policy parameters θ is to guide the policy to the space with high confidence of model prediction (i.e., high log likelihood log P(v|u)). Specifically, we implement it by a confidence-aware policy optimization, which reweights the degree of learning by the prediction confidence log P(τ^img_roll | τ^img) during the policy improvement process. The new objective of reweighted policy improvement is written as log P(τ^img_roll | τ^img) · J^SVG_θ(τ^img_roll). In addition, we normalize the confidence weight for each batch to make training stable. In summary, the gradient of policy optimization is rewritten as:

\nabla_\theta \big( I_{\theta,\phi}(\tau^{img}, \tau^{real}) + J^{SVG}_\theta(\tau^{img}) \big) = \nabla_\theta \left( \mathbb{E}_{s_t \sim \tau^{real}} \frac{1}{2} \log 2\pi e v^2_\theta(s_t) + \log P(\tau^{img\_roll} \mid \tau^{img})\, J^{SVG}_\theta(\tau^{img\_roll}) \right).   (10)
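A sketch of the batch-normalized confidence weighting described above: `log_conf` stands for log P(τ^img_roll | τ^img) per trajectory, and the softmax normalization is one plausible way to normalize per batch (our choice, not specified in the paper).

```python
import torch

def confidence_weighted_svg_loss(svg_returns, log_conf):
    """Reweight per-trajectory SVG returns by normalized prediction confidence.
    svg_returns: (B,) imaginary returns to maximize; log_conf: (B,) log-likelihoods."""
    with torch.no_grad():                     # confidence acts as a weight, not a gradient path
        w = torch.softmax(log_conf, dim=0) * log_conf.numel()  # batch-normalized, mean ~1
    return -(w * svg_returns).mean()          # negative because optimizers minimize

# Example with made-up numbers:
loss = confidence_weighted_svg_loss(torch.tensor([3.0, 2.0]), torch.tensor([-1.0, -4.0]))
print(loss)
```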
From Equations 9 and 10, we can see there are three terms, namely model error minimization, policy entropy maximization, and confidence-aware policy optimization, derived from our total objective of optimizing mutual information between real and imaginary trajectories. We have the same model error loss as Dreamer, so the main difference from Dreamer lies in the policy entropy maximization and the confidence-aware policy optimization. Intuitively, the entropy maximization term aims at increasing the search space of SVG-based policy search like Dreamer and thus can explore more possibilities. The confidence-aware optimization term then reweights the search results by confidence, which improves the search quality and ensures that the additional search results yielded by the larger entropy are reliable enough. This approach has strong connections to distributional shift refinement in the offline RL setting and may be beneficial to the community of batch RL [57]. In addition, considering that τ^real, τ^img, and τ^img_roll are trajectories under the current policy, we use a first-in-first-out replay buffer with limited capacity to mimic an approximately on-policy data stream.
Algorithm 1 summarizes our entire algorithm of optimizing mutual information and policy.
Algorithm 1 BIRD Algorithm
Initialize buffer B with random agent. Initialize parameters θ, φ, ψ randomly. Set hyper-parameters: imagination horizon H, learning step C, interacting step T, batch size B, batch length L.
while not converged do
  for i = 1 . . . C do
    Draw B data sequences {(o_t, a_t, r_t)}_t^{t+L} from B.
    Compute latent states s_t ∼ p_φ(s_t|s_{t-1}, a_{t-1}, o_t) and imaginary trajectories {(s_x, a_x)}_{x=t}^{t+H}.
    For each s_x, predict rewards p_φ(r_x|s_x) and values v_ψ(s_x). ▷ Calculate imaginary returns
    Update θ, φ, ψ using Equation 5. ▷ Optimize policy and mutual information
  end for
  Reset o_1 in real world.
  for t = 1 . . . T do
    Compute latent state s_t ∼ p_φ(s_t|s_{t-1}, a_{t-1}, o_t).
    Compute a_t ∼ π_θ(a_t|s_t) using policy network and add exploration noise.
    Take action a_t and get r_t, o_{t+1} from real world. ▷ Interact with real world
  end for
  Add experience {(o_t, a_t, r_t)}_{t=1}^T to B.
end while
4.3 Policy Optimization with Entropy Maximization
In the context of model-free RL, maximum entropy deep RL [49, 58] contributes to learning robust policies under estimation errors, raising a question: if we simply add a maximization objective for policy entropy in the context of model-based RL with stochastic value gradients, can we also obtain policies from imaginations that generalize well to the real environment? Thus, we design an ablation version of BIRD, Soft-BIRD, which just adds an entropy-augmented objective to the return objective:
\pi^*_\theta = \arg\max_\theta \sum_t \mathbb{E}\left[ r_t + \alpha \mathcal{H}(\pi(\cdot|s_t)) \right],   (11)
where α is a hyper-parameter. We use a soft Bellman equation for the value function v'_ψ(s_t) like SAC [49] and rewrite the objective of policy improvement J'^SVG_θ as:

v'_\psi(s_t) = \mathbb{E}\left[ r_t + \alpha \mathcal{H}(\pi_\theta(\cdot|s_t)) + \gamma v'_\psi(s_{t+1}) \right],

J'^{SVG}_\theta(\tau^{img}) = \mathbb{E}_{a_i \sim \pi_\theta,\, s_i \sim p_\phi(s_i|s_{i-1},a_{i-1})} \sum_{k=1}^{H} \lambda_k \left[ \left( \sum_{i=t}^{h-1} \gamma^{i-t} \big( r_i + \alpha \mathcal{H}(\pi_\theta(\cdot|s_i)) \big) \right) + \gamma^{h-t} v'_\psi(s_h) \right].   (12)
Compared to BIRD, Soft-BIRD only maximizes the entropy of the policy instead of optimizing the mutual information between real and imaginary trajectories generated from the policy, which provides further insight into the contribution of BIRD.
5 Experiments
We evaluate BIRD on the DeepMind Control Suite (https://github.com/deepmind/dm_control) [30], a standard benchmark for continuous control. In Section 5.2, we compare BIRD with both model-free and model-based RL methods. For model-free baselines, we compare with D4PG [59], a distributed extension of DDPG [2], and A3C [56], the distributed actor-critic approach. We include the scores for D4PG with pixel inputs and A3C with state inputs, which are also used as baselines in Dreamer. For model-based baselines, we use PlaNet [12] and Dreamer [13], two state-of-the-art model-based RL methods. Some popular model-based RL papers [60, 61, 62, 63] are not included in our experiments since they use MPC for sampling-based planning and do not show effectiveness on RL tasks with image inputs. Compared to the MPC-based approaches that generate many rollouts to select the highest performing action sequence, our paper builds upon analytic value gradients that can directly propagate gradients through a differentiable world model and is more computationally efficient on domains that require learning from pixels. Our paper focuses on visual control tasks, and thus we only compare with state-of-the-art algorithms for these tasks (i.e., PlaNet and Dreamer).
In addition, we conduct an ablation experiment in Section 5.3 to illustrate the contribution of mutual information maximization. In Section 5.4, we further study cases and visualize BIRD’s generalization to real-world information.
5.1 Experiment Setting
We mainly follow the experiment settings of Dreamer. Across all environments, observations are 64 × 64 × 3 images, rewards are scaled to 0 to 1, and the dimensions of the action space vary from 1 to 12. Action repeat is fixed at 2 for all tasks. We implement Dreamer using its released code (https://github.com/google-research/dreamer) and all hyper-parameters remain the same as reported. Since our model loss term in Equation 9 has the same form as Dreamer, we directly use the same model learning component as Dreamer, which adopts multi-step prediction and removes the latent overshooting used in PlaNet. We also use the same architecture for the neural networks and thus have the same computational complexity as Dreamer. Specifically, CNN layers are employed to compress observations into a latent state space and a GRU [64] is used for learning latent dynamics. The policy network, reward network, and value network are all implemented with multi-layer perceptrons (MLPs), each trained with the Adam optimizer [65]. For all experiments, we use a discount factor of 0.99 and a mutual information coefficient of 1e-8. The buffer size is 100k. We train BIRD with a single Nvidia 2080Ti and a single CPU, and it takes 8 hours to run 1 million samples.
5.2 Results on DeepMind Control Suite
Learning policy from raw visual observations has always been a challenging problem for RL algorithms. We significantly improve upon the state-of-the-art visual control approach on tasks from the DeepMind Control Suite, which provides a promising avenue for model-based policy learning from pixels. Figure 5 shows the training curves on 6 tasks, and additional results are placed in the supplementary materials. Comparison results demonstrate that BIRD significantly outperforms baselines in terms of sample efficiency. We observe that BIRD can use half the training samples to obtain the same score as PlaNet and Dreamer in Hopper Stand and Hopper Hop. Among all tasks, BIRD achieves performance comparable to D4PG and A3C, which are trained with 1,000 times more samples. In addition, BIRD achieves convergence scores higher than or similar to the baselines in all tasks. Here, we provide insights into the superiority of BIRD. As the mutual information between real and imaginary trajectories increases, the behaviors that BIRD learns using the world model can be adapted to the real environment more appropriately and faster, while other model-based methods require a slower adaptation process. Besides, although world models usually tend to overfit poor policies in the early stage, BIRD will not be tempted by greedy policy optimization on the poor trajectories generated by such an imperfect model, because the entropy maximization term in Equation 10 endows the agent with a stronger exploration ability, and the confidence-aware policy optimization term encourages it to re-estimate all the gathered trajectories and focus on optimizing high-confidence ones.
5.3 Ablation Study
In order to verify that the outperformance of BIRD is not simply due to increasing the entropy of the policy, we conduct an ablation study that compares BIRD with Soft-BIRD (Section 4.3). Figure 5 shows that even the best performance of Soft-BIRD still leaves a big gap to BIRD. As shown in Walker Run of Figure 5, we find that the score of Soft-BIRD first rises for a while, but eventually falls. The failure of Soft-BIRD suggests that policy improvement in model-based RL with analytic gradients is bottlenecked by the discrepancy between reality and imagination, so improving the entropy of the policy alone will not help.
5.4 Case Study: Predictions on Key Actions
Our algorithm learns a world model with better generalization to real trajectories, especially on key actions that matter for long-horizon behavior learning. We visualize some predictions on key actions, such as the explosive force for standing up and jumping in Hopper Stand and Hopper Hop, stomping with the front leg to prevent a tumble in Walker Run, and throwing the pole up to keep stable in Cartpole Swingup. As shown in Figure 3, BIRD makes more accurate predictions compared to Dreamer. For example, in Hopper Hop, Dreamer wrongly predicts that the agent will fall down at the takeoff moment, while BIRD has the accurate foresight that the agent will leap from the ground. Precise forecasting of the key actions implicitly suggests that the imaginary trajectories generated by our learned policy indeed possess more real-world information.
6 Conclusion
Generalization from imagination to reality is a crucial yet challenging problem in the context of model-based RL. In this paper, we propose a novel model-based framework, called BrIdging Reality and Dream (BIRD), which not only performs differentiable planning on imaginary trajectories, but also encourages adaptive generalization to reality by optimizing mutual information between imaginary and real trajectories. Results on challenging visual control tasks demonstrate that our algorithm achieves state-of-the-art performance in terms of sample efficiency. Our ablation study further shows that the superiority is attributed to maximizing mutual information rather than simply increasing the entropy of the policy. In the future, we will explore directions to further improve the generalization of imaginations, such as generalizable representations and reusable skill discovery.
Broader Impact
Model-free RL requires a large amount of samples, which limits its applications to real-world tasks. For example, the trial-and-error training process of a robot requires substantial manpower and financial resources, and certain harmful actions can greatly reduce the life of the robot. Building a world model and learning behaviors by imagination provides a broader prospect for real-world applications. This paper is situated in model-based RL and further improves sample efficiency over existing work, which will accelerate the development of real-world applications in automatic control, such as robotics and autonomous driving. In addition, this paper tackles a valuable problem about generalization, from imagination to reality, and thus is also of great interest to researchers in generalizable machine learning.
In the long run, this paper will improve the efficiency of factory operations, spare humans the repetition of difficult or dangerous work, save costs, and reduce risks in industry and agriculture. For daily life, it will create a more intelligent lifestyle and improve the quality of life.
Our algorithm is a generic framework that does not leverage biases in data. We evaluated our model on a popular benchmark of visual control tasks. However, similar to a majority of deep learning approaches, our algorithm has a common disadvantage: the learned knowledge and policy are not easily interpretable by humans, and it is hard for us to know why the agent learns to act so well. Interpretability has always been a challenging open question, and in the future we are interested in incorporating recent deep learning progress on causal inference into RL.
Acknowledgments and Disclosure of Funding
This work is supported in part by Science and Technology Innovation 2030 – “New Generation Artificial Intelligence” Major Project (No. 2018AAA0100904), and a grant from the Institute of Guo Qiang, Tsinghua University. | 1. What is the main contribution of the paper in model-based reinforcement learning?
2. What are the strengths of the proposed approach, particularly in its simplicity and effectiveness?
3. What are the weaknesses of the paper regarding its limited scope and lack of comparison with related works?
4. How does the reviewer assess the significance of the proposed method in improving the prediction of imagined trajectories?
5. Are there any questions or concerns regarding the applicability and efficiency of the mutual information maximization objective in various model-based RL algorithms? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper considers the issue of discrepancies between imagined and real trajectories in model-based value estimation and policy search. The authors propose to overcome this by using a mutual information maximization objective to improve the prediction of imagine trajectories such that they are close to real trajectories. The experiments in the paper show that adding this mutual information maximization objective to the state-of-the-art model-based RL algorithm Dreamer results in improved performance.
Strengths
- The proposed objective is simple and can be easily used in existing model-based RL algorithms. - The experiments demonstrate that the proposed objective leads to better performance. - The ablation studies demonstrate that the proposed objective leads to qualitatively better predictions.
Weaknesses
- Even though the experiments are performed on top of Dreamer, the paper only presents results for a subset of the tasks considered in the Dreamer paper. The results for the higher-dimensional tasks such as cheetah and quadruped are not presented. - Regularization of model-based RL so that the imagined and real trajectories are similar has also been identified and considered in several other papers, for example ensembling [1], DAE regularization [2], energy-based models [3], and Section 3.6 in [4]. The paper does not compare to any of them.
NIPS | Title
Bridging Imagination and Reality for Model-Based Deep Reinforcement Learning
Abstract
Sample efficiency has been one of the major challenges for deep reinforcement learning. Recently, model-based reinforcement learning has been proposed to address this challenge by performing planning on imaginary trajectories with a learned world model. However, world model learning may suffer from overfitting to training trajectories, and thus model-based value estimation and policy search will be prone to getting stuck in an inferior local policy. In this paper, we propose a novel model-based reinforcement learning algorithm, called BrIdging Reality and Dream (BIRD). It maximizes the mutual information between imaginary and real trajectories so that the policy improvement learned from imaginary trajectories can be easily generalized to real trajectories. We demonstrate that our approach improves sample efficiency of model-based planning, and achieves state-of-the-art performance on challenging visual control benchmarks.
1 Introduction
Reinforcement learning (RL) is proposed as a general-purpose learning framework for artificial intelligence problems, and has led to tremendous progress in a variety of domains [1, 2, 3, 4]. Model-free RL adopts a trial-and-error paradigm, which directly learns a mapping function from observations to values or actions through interactions with environments. It has achieved remarkable performance in certain video games and continuous control tasks because of its simplicity and minimal assumptions about environments. However, model-free approaches are not yet sample efficient and require several orders of magnitude more training samples than human learning, which limits their applications to real-world tasks [5].
A promising direction for improving sample efficiency is to explore model-based RL, which first builds an action-conditioned world model and then performs planning or policy search based on the learned model. The world model, which encodes the representations and dynamics of an environment, is then used as a “dreamer” to do multi-step lookaheads for planning or policy search. Recently, world models based on deep neural networks were developed to handle dynamics in complex high-dimensional environments, which offers opportunities for learning model-based policies with visual observations [6, 7, 8, 9, 10, 11, 12, 13].
*Equal Contribution
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
Model-based frameworks can be roughly grouped into four categories. First, Dyna-style algorithms alternate between building the world model from interactions with environments and performing policy optimization on simulated data generated by the learned model [14, 15, 16, 17, 11]. Second, model predictive control (MPC) and shooting algorithms alternate model learning, planning, and action execution [18, 19, 20]. Third, model-augmented value expansion algorithms use model-based rollouts to improve targets for model-free temporal difference (TD) updates or policy gradients [21, 9, 6, 10]. Fourth, analytic-gradient algorithms leverage the gradients of the model-based imaginary returns with respect to the policy and directly propagate such gradients through a differentiable world model to the policy network [22, 23, 24, 25, 26, 27, 13]. Compared to conventional planning algorithms that generate numerous rollouts to select the highest performing action sequence, analytic-gradient algorithms are more computationally efficient, especially in complex domains with deep neural networks. Dreamer [13], as a landmark of analytic-gradient model-based RL, achieves state-of-the-art performance on visual control tasks.
However, most existing breakthroughs on analytic gradients focus on optimizing the policy on imaginary trajectories and leave the discrepancy between imagination and reality largely unstudied, which often bottlenecks their performance on real trajectories. In practice, a learning-based world model is not perfect, especially in complex environments. Unrolling with an imperfect model for multiple steps generates a large accumulative error, leaving a gap between the generated trajectories and reality. If we directly optimize the policy based on the analytic gradients through the imaginary trajectories, the policy will tend to deviate from reality and get stuck in an inferior local solution.
Evidence from humans’ cognition and learning in the physical world suggests that humans naturally have the capacity for self-reflection and introspection. In everyday life, we track and review our past thoughts and imaginations, introspect to further understand our internal states and interactions with the external world, and change our values and behavior patterns accordingly [28, 29]. Inspired by this insight, our basic idea is to leverage information from real trajectories to endow policy improvement on imaginations with awareness of the discrepancy between imagination and reality. We propose a novel reality-aware model-based framework, called BrIdging Reality and Dream (BIRD), which performs differentiable planning on imaginary trajectories, as well as enables adaptive generalization to reality for the learned policy by optimizing mutual information between imaginary and real trajectories. Our model-based policy optimization framework naturally unifies confidence-aware analytic gradients, entropy regularization maximization, and model learning. We conduct experiments on challenging visual control benchmarks (DeepMind Control Suite with image inputs [30]) and the results demonstrate that BIRD achieves state-of-the-art performance in terms of sample efficiency. Our ablation study further verifies that the superiority of BIRD comes from mutual information maximization rather than from the increase of policy entropy.
2 Related Work
Model-Based Reinforcement Learning Model-based RL exhibits high sample efficiency and has been widely used in several real-world control tasks, such as robotics [31, 32, 7]. Dyna-style algorithms [14, 15, 16, 17, 11] optimize policies with samples generated from a learned world model. Model predictive control (MPC) and shooting methods [18, 19, 20] leverage planning to select actions, but suffer from expensive computation. In model-augmented value expansion approaches, MVE [21], VPN [6] and STEVE [9] use model-based rollouts to improve targets for model-free TD updates. MuZero [10] further incorporates Monte-Carlo tree search (MCTS) and achieves remarkable performance on Atari and board games. To manage visual control tasks, VisualMPC [33] introduces a visual prediction model to keep track of entities through occlusion by temporal skip connections. PlaNet [12] improves the model learning by combining deterministic and stochastic latent dynamics models. [34] presents a summary of model-based approaches and benchmarks popular algorithms for comparisons and extensions.
Analytic Value Gradients If a differentiable world model is available, analytic value gradients are proposed to directly update the policy by gradients that flow through the world model. PILCO [24] and iLQR [25] compute an analytic gradient by assuming Gaussian processes and linear functions for the dynamics model, respectively. Guided policy search (GPS) [26, 35, 36, 37, 38] uses deep neural networks to distill behaviors from the iLQR controller. Value Gradients (VG) [22] and Stochastic Value Gradients (SVG) [23] provide a new direction to calculate analytic value gradients through a generic differentiable world model. Dreamer [13] and IVG [27] further extend SVG by
generating imaginary rollouts in the latent space. However, these works focus on improving the policy in imaginations, leaving the discrepancy between imagination and reality largely unstudied. Our approach enables policy generalization to real-world interactions by maximizing mutual information between imagination and real trajectories, while optimizing the policy on imaginary trajectories. In addition, alternative end-to-end planning methods [39, 40] leverage analytic gradients, but they either focus on online planning in simple tasks [39] or require goal images and distance metrics for the reward function [40].
Information-Based Optimization In addition to maximizing the expected return objective, a reliable RL agent may exhibit more characteristics, like meaningful representations, strong generalization, and efficient exploration. Deep information-based methods [41, 42, 43, 44] recently show progress towards this direction. [45, 46, 47] are proposed to learn more efficient representations. Maximum entropy RL maximizes the entropy regularized return to obtain a robust policy [48, 49] and [50, 51] further connect policy optimization under such regularization with value based RL. [52] learns a goal-conditioned policy with information bottleneck to identify decision states. IDS [53] estimates the information gain for a sampling-based exploration strategy. These algorithms mainly focus on facilitating policy learning in the model-free setting, while BIRD aims at bridging imagination and reality by mutual information maximization in the context of model-based RL.
3 Preliminaries
3.1 Reinforcement Learning
A reinforcement learning agent aims at learning a policy to maximize cumulative rewards by exploring in a Markov Decision Process (MDP) [54]. We denote the time step as t and introduce the state s_t ∈ S, action a_t ∈ A, reward function r(s_t, a_t), a policy π_θ(s), and a transition probability p(s_{t+1}|s_t, a_t) to characterize the process of interacting with the environment. The goal of the agent is to find a policy parameter θ that maximizes the long-horizon summed rewards, represented by a value function v_\psi(s_t) \doteq \mathbb{E}\left( \sum_{i=t}^{t+H} \gamma^{i-t} r_i \right) parameterized with ψ. In model-based RL, the agent builds a world model p_φ, parameterized by φ, for the environmental dynamics p and reward function r, and then performs planning or policy search based on this model.
3.2 World Model
Considering that several complex tasks (e.g., visual control tasks [30]) are partially observable Markov decision processes (POMDPs), this paper adopts a world model similar to PlaNet [12] and Dreamer [13], which learns latent states from the history of visual observations and models the latent dynamics by LSTM-like recurrent networks. Specifically, the world model consists of the following modules:
Representation model: s_t ∼ p_φ(s_t | s_{t-1}, a_{t-1}, o_t)
Transition model: s_t ∼ p_φ(s_t | s_{t-1}, a_{t-1})
Observation model: o_t ∼ p_φ(o_t | s_t)
Reward model: r_t ∼ p_φ(r_t | s_t).   (1)
The representation model encodes the image input into a compact latent space, and the long-horizon dynamics on latent states are captured by a latent transition model. We use RSSM [12] as our transition model, which combines deterministic and stochastic transition models in order to learn dynamics more accurately and efficiently. For each latent state on the predicted trajectories, the observation model learns to reconstruct its visual observation, and the reward model predicts the immediate reward. The entire world model J^Model_φ is optimized by a VAE-like objective [55]:
J^{Model}_\phi(\tau^{img}, \tau^{real}) = \sum_{(a_{t-1}, o_t, r_t) \sim \tau^{real}} \Big[ \ln p_\phi(o_t|s_t) + \ln p_\phi(r_t|s_t) - D_{KL}\big( p_\phi(s_t|s_{t-1}, a_{t-1}, o_t) \,\|\, p_\phi(s_t|s_{t-1}, a_{t-1}) \big) \Big].   (2)
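The KL term in Equation 2 has a closed form when both distributions are diagonal Gaussians, as in the latent model here. A minimal sketch of the standard formula (our derivation, not code from the paper):

```python
import math

def diag_gauss_kl(mu_q, std_q, mu_p, std_p):
    """KL( N(mu_q, std_q^2) || N(mu_p, std_p^2) ) for matching lists of diagonal entries."""
    kl = 0.0
    for mq, sq, mp, sp in zip(mu_q, std_q, mu_p, std_p):
        kl += math.log(sp / sq) + (sq ** 2 + (mq - mp) ** 2) / (2 * sp ** 2) - 0.5
    return kl

print(diag_gauss_kl([0.0], [1.0], [0.0], [1.0]))   # 0.0: identical distributions
```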
3.3 Stochastic Value Gradients
Given a differentiable world model, stochastic value gradients (SVG) [22, 23] can be applied to directly compute the policy gradient on the whole imaginary trajectory, which is a recursive composition of policy, transition, reward, and value function. According to the stochastic Bellman equation, we have:
$$
v(s) = \mathbb{E}_{\rho(\eta)} \Big[ r\big(s, \pi_\theta(s, \eta)\big) + \mathbb{E}_{\rho(\xi)} \big[ v\big(p(s, \pi_\theta(s, \eta), \xi)\big) \big] \Big], \quad (3)
$$
where $\eta \sim \rho(\eta)$ and $\xi \sim \rho(\xi)$ are noises from fixed noise distributions used for re-parameterization. The gradients through trajectories can then be computed iteratively as:
$$
\frac{\partial v}{\partial s} = \mathbb{E}_{\rho(\eta)}\left( \frac{\partial r}{\partial s} + \frac{\partial r}{\partial a}\frac{\partial \pi}{\partial s} + \mathbb{E}_{\rho(\xi)}\left( \frac{\partial v}{\partial s'}\left( \frac{\partial p}{\partial s} + \frac{\partial p}{\partial a}\frac{\partial \pi}{\partial s} \right) \right) \right)
$$
$$
\frac{\partial v}{\partial \theta} = \mathbb{E}_{\rho(\eta)}\left( \frac{\partial r}{\partial a}\frac{\partial \pi}{\partial \theta} + \mathbb{E}_{\rho(\xi)}\left( \frac{\partial v}{\partial s'}\frac{\partial p}{\partial a}\frac{\partial \pi}{\partial \theta} + \frac{\partial v}{\partial \theta} \right) \right), \quad (4)
$$
where $s'$ denotes the next state given by the transition function. Intuitively, the policy can be improved by propagating analytic gradients with respect to the policy network through the imaginary trajectories.
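To make the backpropagation-through-the-model idea concrete, here is a minimal PyTorch sketch with a hand-written linear "world model" and quadratic reward, both invented for illustration: gradients of a 5-step imagined return flow through the dynamics to the Gaussian policy parameters, as in Equation 4.

```python
import torch

# Toy differentiable "world model" and reward: linear dynamics, quadratic reward.
A, B = torch.eye(2), torch.tensor([[1.0], [0.5]])

def dynamics(s, a):  # s' = p(s, a, xi), with reparameterized Gaussian noise
    return s @ A.T + a @ B.T + 0.01 * torch.randn_like(s)

def reward(s, a):
    return -(s ** 2).sum(-1) - 0.1 * (a ** 2).sum(-1)

# Gaussian policy a = m_theta(s) + sigma * eta, eta ~ N(0, I) (reparameterized).
W = torch.zeros(2, 1, requires_grad=True)
log_std = torch.zeros(1, requires_grad=True)

s, total_reward = torch.randn(16, 2), 0.0
for _ in range(5):                       # 5-step imaginary rollout
    a = s @ W + log_std.exp() * torch.randn(16, 1)
    total_reward = total_reward + reward(s, a).mean()
    s = dynamics(s, a)                   # gradients flow through the model

(-total_reward).backward()               # analytic value gradients w.r.t. policy
print(W.grad.shape, log_std.grad.shape)  # torch.Size([2, 1]) torch.Size([1])
```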
4 Reality-Aware Model-Based Policy Improvement
In this section, we present a novel model-based RL framework, called BrIdging Reality and Dream (BIRD), as shown in Figure 1. The agent represents its policy function with a policy network. To estimate the future effects of its policy and enable potential policy improvement, it unrolls trajectories based on its world model using the current policy and optimizes the cumulative rewards on the imaginary trajectories. The policy network and differentiable world model connect to one another, forming a larger trainable network that supports differentiable planning and allows the analytic gradients of cumulative rewards with respect to the policy to flow through the world model. In the meantime, the agent also interacts with the real world and generates real trajectories. BIRD maximizes the mutual information between real and imaginary trajectories to endow both the policy network and the world model with adaptive generalization to real-world interactions. In summary, BIRD maximizes the total objective function:
$$
J^{\text{BIRD}} = J^{\text{SVG}}_\theta(\tau^{\text{img\_roll}}) - L^{\text{TD}}_\psi(\tau^{\text{img\_roll}}) + w\, I_{\theta,\phi}(\tau^{\text{img}}, \tau^{\text{real}}), \quad (5)
$$
where $\tau^{\text{real}}$ and $\tau^{\text{img}}$ denote the real trajectories and the corresponding imaginary trajectories under the same policy, and $\tau^{\text{img\_roll}}$ denotes the imaginary trajectories rolled out during policy improvement. $\theta$, $\psi$, and $\phi$ are the parameters of the policy network $\pi_\theta$, the value network $v_\psi$, and the world model $p_\phi$, respectively. The first two terms $J^{\text{SVG}}_\theta(\tau^{\text{img\_roll}}) - L^{\text{TD}}_\psi(\tau^{\text{img\_roll}})$ account for policy improvement on imaginations, the last term $I_{\theta,\phi}(\tau^{\text{img}}, \tau^{\text{real}})$ optimizes the mutual information, and $w$ is a weighting factor between them.
In conventional model-based RL approaches, real-world trajectories are normally used to optimize the model prediction error, which is quite different from BIRD. In complex domains, optimizing the model prediction error cannot guarantee a perfect predictive model. Unrolling such an imperfect model for multiple steps generates a large accumulated error, leaving a large gap between the generated trajectories and real ones. Thus, a policy optimized with such a model may overfit undesirable imaginations and generalize poorly to reality, which is also shown in our experiments (Figure 3). This problem is further exacerbated in analytic-gradient RL, which performs differentiable planning by gradient-based local search: even a small gradient step along the imperfect model can easily reach a non-generalizable neighbourhood and steer policy improvement in an incorrect direction. To address this problem, our method optimizes the mutual information with respect to both the model and the policy, which makes policy improvement aware of the discrepancy between real and imaginary trajectories. Intuitively, BIRD optimizes the world model to be more faithful to reality and reinforces the actions whose resulting imaginations not only yield large cumulative rewards but also resemble real trajectories. As a result, BIRD learns a policy from imaginations that generalizes more easily to the real-world environment.
4.1 Policy Improvement on Imaginations
As a model-based RL algorithm, BIRD improves the policy by maximizing the cumulative rewards of the imaginary trajectories unrolled by the world model. Conventional model-based approaches [18, 7, 11] perform policy improvement by selecting the optimal action sequence that maximizes the expected planning reward, i.e., $\max_{a_{t:t+H}} \mathbb{E}_{s_x \sim p_\phi} \sum_{x=t}^{t+H} r(s_x, a_x)$. If the world model is differentiable, we can instead use stochastic value gradients (SVG) to directly leverage the gradients through the world model for policy improvement. Similar to Dreamer [13], our objective of maximizing the model-based value expansion within horizon $H$ is given by:
$$
\begin{aligned}
J^{\text{SVG}}_\theta(\tau^{\text{img}}) &= \max_\theta \sum_{x=t}^{t+H} V(s_x), \\
V(s_x) &= \mathbb{E}_{a_i \sim \pi_\theta,\, s_i \sim p_\phi(s_i \mid s_{i-1}, a_{i-1})} \sum_{k=1}^{H} \lambda_k \left[ \left( \sum_{i=t}^{h-1} \gamma^{i-t} r_i \right) + \gamma^{h-t} v_\psi(s_h) \right],
\end{aligned} \quad (6)
$$
where $r_i$ represents the immediate reward at timestep $i$ predicted by the world model $p_\phi$. For each expansion length $k$, we expand the expected value from the current timestep $x$ to timestep $h-1$, where $h = \min(x+k, t+H)$, and use the learned value function $v_\psi(s_h)$ to estimate returns beyond $h$ steps, i.e., $v_\psi(s_h) = \mathbb{E}\big( \sum_{i=h}^{H} \gamma^{i-h} r_i \big)$. Here, we use an exponentially-weighted average of the estimates for different values of $k$ to balance bias and variance, with the exponential weighting factor denoted by $\lambda_k$. As shown in Equation 6, we interleave the policy network $\pi_\theta$ and the differentiable world model $p_\phi$, connecting them to one another to form a large end-to-end trainable network, and then back-propagate the gradients of the expected values with respect to the policy parameters $\theta$ through this large network. Intuitively, a gradient step of the policy network encourages the world model to produce a gradient step of new states, which in turn affects the future value. As a result, the states and policy are optimized sequentially based on the feedback on future values. To optimize the value network, we use TD updates as in actor-critic algorithms [54, 56, 21], instead of Monte Carlo estimation:
$$
L^{\text{TD}}_\psi(\tau^{\text{img}}) = \sum_{x=t}^{t+H} \big\| v_\psi(s_x) - V(s_x) \big\|^2. \quad (7)
$$
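A minimal sketch of this TD regression in PyTorch (the network shape and sizes are arbitrary choices of ours): the value-expansion target is detached so that Equation 7 only updates the value network.

```python
import torch

def td_loss(value_net, states, targets):
    """Eq. 7: squared error between v_psi(s_x) and the value-expansion target V(s_x).

    Targets are detached so TD updates only move the value network.
    """
    return ((value_net(states) - targets.detach()) ** 2).sum()

value_net = torch.nn.Sequential(
    torch.nn.Linear(4, 64), torch.nn.ELU(), torch.nn.Linear(64, 1))
states, targets = torch.randn(32, 4), torch.randn(32, 1)
td_loss(value_net, states, targets).backward()
```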
4.2 Bridge Imagination and Reality by Mutual Information Maximization
To ensure that the policy improvement based on the learned world model is equally effective in the real world, we introduce an information-theoretic objective that optimizes the mutual information between
real and imaginary trajectories with respect to the policy network and the world model:
$$
\begin{aligned}
I_{\theta,\phi}(\tau^{\text{img}}, \tau^{\text{real}}) &= H(\tau^{\text{real}}) - H(\tau^{\text{real}} \mid \tau^{\text{img}}) \\
&= H(\tau^{\text{real}}) - \sum_{u} P(u)\, H(\tau^{\text{real}} \mid \tau^{\text{img}} = u) \\
&= H(\tau^{\text{real}}) + \sum_{u} P(u) \sum_{v} P(v \mid u) \log P(\tau^{\text{real}} = v \mid u) \\
&= H(\tau^{\text{real}}) + \sum_{u,v} P(u, v) \log P(v \mid u).
\end{aligned} \quad (8)
$$
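The identity in Equation 8 can be checked numerically. The toy joint distribution below is ours; the script verifies that $H(\tau^{\text{real}}) - H(\tau^{\text{real}} \mid \tau^{\text{img}})$ equals $H(\tau^{\text{real}}) + \sum_{u,v} P(u,v) \log P(v \mid u)$.

```python
import numpy as np

# Joint distribution P(u, v) over (imagined, real) trajectory outcomes.
P = np.array([[0.4, 0.1],
              [0.1, 0.4]])
Pu, Pv = P.sum(axis=1), P.sum(axis=0)

H_real = -np.sum(Pv * np.log(Pv))                    # H(tau_real)
H_cond = -np.sum(P * np.log(P / Pu[:, None]))        # H(tau_real | tau_img)
mi_a = H_real - H_cond                               # first line of Eq. 8
mi_b = H_real + np.sum(P * np.log(P / Pu[:, None]))  # last line of Eq. 8
print(np.isclose(mi_a, mi_b), mi_a)                  # True, about 0.193 nats
```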
To reduce computational complexity, we alternately optimize the total mutual information with respect to the world model and the policy network. First, we fix the policy parameters $\theta$ and only optimize the parameters $\phi$ of the world model to maximize the total mutual information $I_{\theta,\phi}(\tau^{\text{img}}, \tau^{\text{real}})$. Since the first term $H(\tau^{\text{real}})$ measures the entropy of the real trajectories generated by policy $\pi_\theta$ on the real MDP, it does not depend on the parameters of the world model and we can drop it. As for the second term $\sum_{u,v} P(u, v) \log P(v \mid u)$, note that our world model, in conjunction with the policy network, can be regarded as a predictor for real trajectories, so this term serves as the log-likelihood of a real trajectory given an imagined one. Thus, optimizing this term is equivalent to minimizing the prediction error on training pairs of imagined and real trajectories $(u, v)$. When the policy is fixed, $P(u, v)$ is tractable and we can directly approximate it by sampling data from a replay buffer $\mathcal{B}$ (i.e., a collection of experienced trajectories). The second term then becomes $\sum_{u,v \sim \mathcal{B}} \log P(v \mid u; \phi)$, which is the negative of the conventional model prediction error $L^{\text{Model}}_\phi$. In summary, we obtain the gradient
$$
\nabla_\phi I_{\theta,\phi}(\tau^{\text{img}}, \tau^{\text{real}}) = -\nabla_\phi L^{\text{Model}}_\phi(\tau^{\text{img}}, \tau^{\text{real}}). \quad (9)
$$
Second, we fix the model parameters $\phi$ and only optimize the parameters $\theta$ of the policy network to maximize the total mutual information $I_{\theta,\phi}(\tau^{\text{img}}, \tau^{\text{real}})$. The first term of the mutual information becomes the entropy of the current policy; maximizing it encourages exploration and also yields a robust policy. We use a Gaussian distribution $\mathcal{N}(m_\theta(s_t), v_\theta(s_t))$ to model the stochastic policy $\pi_\theta$, and can thus analytically compute its entropy on real data as $\mathbb{E}_{s_t \sim \tau^{\text{real}}} \frac{1}{2} \log 2\pi e\, v^2_\theta(s_t)$.
Then we consider how to optimize the second term, $\sum_{u,v} P(u, v) \log P(v \mid u)$. The joint distribution of real and imagined trajectories $P(u, v)$ is determined by the policy $\pi_\theta$. When the updates of the world model are stopped, the log-likelihood of a real trajectory given an imagined one, $\log P(v \mid u)$, is fixed and can be regarded as a weight for optimizing the distribution $P(u, v)$ through the policy. Thus, the essential objective of maximizing $\sum_{u,v} P(u, v) \log P(v \mid u)$ with respect to the policy parameters $\theta$ is to guide the policy toward regions with high model-prediction confidence (i.e., high log-likelihood $\log P(v \mid u)$). Specifically, we implement it by a confidence-aware policy optimization, which reweights the degree of learning by the prediction confidence $\log P(\tau^{\text{img\_roll}} \mid \tau^{\text{img}})$ during the policy improvement process. The new objective of reweighted policy improvement is written as $\log\big(P(\tau^{\text{img\_roll}} \mid \tau^{\text{img}})\big)\, J^{\text{SVG}}_\theta(\tau^{\text{img\_roll}})$. In addition, we normalize the confidence weight within each batch to make training stable. In summary, the gradient of policy optimization is rewritten as:
$$
\begin{aligned}
&\nabla_\theta \left( I_{\theta,\phi}(\tau^{\text{img}}, \tau^{\text{real}}) + J^{\text{SVG}}_\theta(\tau^{\text{img}}) \right) \\
&\quad = \nabla_\theta \left( \mathbb{E}_{s_t \sim \tau^{\text{real}}} \tfrac{1}{2} \log 2\pi e\, v^2_\theta(s_t) + \log\big(P(\tau^{\text{img\_roll}} \mid \tau^{\text{img}})\big)\, J^{\text{SVG}}_\theta(\tau^{\text{img\_roll}}) \right).
\end{aligned} \quad (10)
$$
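The following sketch shows one way to assemble the two terms of Equation 10; it is an illustration, not the released implementation. In particular, the paper normalizes the confidence weights per batch without specifying how, so the softmax normalization here is our assumption.

```python
import torch

def bird_policy_objective(mean, log_std, svg_returns, confidences):
    """Sketch of Eq. 10: policy entropy on real states plus a
    confidence-reweighted SVG objective on imagined rollouts.

    mean, log_std: Gaussian policy head outputs on real states.
    svg_returns: per-rollout value-expansion estimates J_SVG.
    confidences: per-rollout log-likelihoods log P(img_roll | img).
    """
    entropy = torch.distributions.Normal(mean, log_std.exp()).entropy().mean()
    # Normalize confidence weights within the batch for stable training.
    w = torch.softmax(confidences, dim=0).detach()
    return entropy + (w * svg_returns).sum()

obj = bird_policy_objective(torch.zeros(32, 6), torch.zeros(32, 6),
                            torch.randn(10), torch.randn(10))
```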
From Equations 9 and 10, we can see that three terms, model error minimization, policy entropy maximization, and confidence-aware policy optimization, are derived from our total objective of optimizing the mutual information between real and imaginary trajectories. We have the same model error loss as Dreamer, and thus the main difference from Dreamer is the policy entropy maximization and the confidence-aware policy optimization. Intuitively, the entropy maximization term aims at enlarging the search space of SVG-based policy search relative to Dreamer and can thus explore more possibilities. The confidence-aware optimization term then reweights the search results by confidence, which helps improve the search quality and ensures that the additional candidates obtained from the larger entropy are sufficiently reliable. This approach has strong connections to distributional shift refinement in the offline RL setting and may be beneficial to the batch RL community [57]. In addition, considering that $\tau^{\text{real}}$, $\tau^{\text{img}}$, and $\tau^{\text{img\_roll}}$ are trajectories under the current policy, we use a first-in-first-out replay buffer with limited capacity to mimic an approximately on-policy data stream.
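A minimal sketch of such a limited-capacity FIFO buffer (the class and method names are ours):

```python
import random
from collections import deque

class FIFOReplayBuffer:
    """First-in-first-out buffer with limited capacity, mimicking an
    approximately on-policy data stream of recent episodes."""

    def __init__(self, capacity):
        self.episodes = deque(maxlen=capacity)  # old episodes drop out automatically

    def add(self, episode):
        self.episodes.append(episode)

    def sample(self, batch_size):
        pool = list(self.episodes)
        return random.sample(pool, min(batch_size, len(pool)))

buffer = FIFOReplayBuffer(capacity=100)
buffer.add([("o0", "a0", 0.0), ("o1", "a1", 1.0)])
batch = buffer.sample(1)
```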
Algorithm 1 summarizes our entire procedure for optimizing the mutual information and the policy.
Algorithm 1 BIRD Algorithm
Initialize buffer $\mathcal{B}$ with a random agent. Initialize parameters $\theta, \psi, \phi$ randomly.
Set hyper-parameters: imagination horizon $H$, learning steps $C$, interaction steps $T$, batch size $B$, batch length $L$.
while not converged do
  for $i = 1 \ldots C$ do
    Draw $B$ data sequences $\{(o_t, a_t, r_t)\}_{t}^{t+L}$ from $\mathcal{B}$.
    Compute latent states $s_t \sim p_\phi(s_t \mid s_{t-1}, a_{t-1}, o_t)$ and imaginary trajectories $\{(s_x, a_x)\}_{x=t}^{t+H}$.
    For each $s_x$, predict rewards $p_\phi(r_x \mid s_x)$ and values $v_\psi(s_x)$.  ▷ Calculate imaginary returns
    Update $\theta, \psi, \phi$ using Equation 5.  ▷ Optimize policy and mutual information
  end for
  Reset $o_1$ in the real world.
  for $t = 1 \ldots T$ do
    Compute latent state $s_t \sim p_\phi(s_t \mid s_{t-1}, a_{t-1}, o_t)$.
    Compute $a_t \sim \pi_\theta(a_t \mid s_t)$ using the policy network and add exploration noise.
    Take action $a_t$ and get $r_t, o_{t+1}$ from the real world.  ▷ Interact with real world
  end for
  Add experience $\{(o_t, a_t, r_t)\}_{t=1}^{T}$ to $\mathcal{B}$.
end while
4.3 Policy Optimization with Entropy Maximization
In the context of model-free RL, maximum entropy deep RL [49, 58] contributes to learning policies that are robust to estimation errors, raising a question: if we simply add a policy-entropy maximization objective in the context of model-based RL with stochastic value gradients, can we also obtain policies from imaginations that generalize well to the real environment? To answer it, we design an ablation version of BIRD, Soft-BIRD, which just adds an entropy-augmented objective to the return objective:
$$
\pi^{*}_\theta = \arg\max_\theta \sum_t \mathbb{E} \big( r_t + \alpha H(\pi(\cdot \mid s_t)) \big), \quad (11)
$$
where $\alpha$ is a hyper-parameter. We use a soft Bellman equation for the value function $v'_\psi(s_t)$, as in SAC [49], and rewrite the objective of policy improvement $J'^{\text{SVG}}_\theta$ as:
$$
\begin{aligned}
v'_\psi(s_t) &= \mathbb{E} \big( r_t + \alpha H(\pi_\theta(\cdot \mid s_t)) + \gamma v'_\psi(s_{t+1}) \big), \\
J'^{\text{SVG}}_\theta(\tau^{\text{img}}) &= \mathbb{E}_{a_i \sim \pi_\theta,\, s_i \sim p_\phi(s_i \mid s_{i-1}, a_{i-1})} \sum_{k=1}^{H} \lambda_k \left[ \left( \sum_{i=t}^{h-1} \gamma^{i-t} \big( r_i + \alpha H(\pi_\theta(\cdot \mid s_i)) \big) \right) + \gamma^{h-t} v'_\psi(s_h) \right].
\end{aligned} \quad (12)
$$
Compared to BIRD, Soft-BIRD only maximizes the entropy of the policy instead of optimizing the mutual information between the real and imaginary trajectories generated by the policy, which provides further insight into the contribution of BIRD.
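A small PyTorch sketch of the entropy-augmented return inside Equations 11 and 12, with a diagonal-Gaussian policy; the function name, $\alpha$ value, and toy inputs are illustrative only.

```python
import torch
from torch.distributions import Normal

def soft_return(rewards, means, log_stds, alpha=0.1, gamma=0.99):
    """Entropy-augmented return: sum_t gamma^t * (r_t + alpha * H(pi(.|s_t)))."""
    total = torch.zeros(())
    for t, (r, m, ls) in enumerate(zip(rewards, means, log_stds)):
        entropy = Normal(m, ls.exp()).entropy().sum()  # diagonal-Gaussian entropy
        total = total + gamma ** t * (r + alpha * entropy)
    return total

ret = soft_return(rewards=[1.0, 0.5], means=[torch.zeros(2)] * 2,
                  log_stds=[torch.zeros(2)] * 2)
```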
5 Experiments
We evaluate BIRD on the DeepMind Control Suite (https://github.com/deepmind/dm_control) [30], a standard benchmark for continuous control. In Section 5.2, we compare BIRD with both model-free and model-based RL methods. For model-free baselines, we compare with D4PG [59], a distributed extension of DDPG [2], and A3C [56], a distributed actor-critic approach. We include the scores for D4PG with pixel inputs and A3C with state inputs, which are also used as baselines in Dreamer. For model-based baselines, we use PlaNet [12] and Dreamer [13], two state-of-the-art model-based RL methods. Some popular model-based RL papers [60, 61, 62, 63] are not included in our experiments since they use MPC for sampling-based planning and have not shown effectiveness on RL tasks with image inputs. Compared to MPC-based approaches that generate many rollouts to select the highest-performing action sequence, our paper builds upon analytic value gradients that directly propagate gradients through a differentiable world model, which is more computationally efficient in domains that require learning from pixels. Our paper focuses on visual control tasks, and thus we only compare with the state-of-the-art algorithms for these tasks (i.e., PlaNet and Dreamer).
In addition, we conduct an ablation experiment in Section 5.3 to illustrate the contribution of mutual information maximization. In Section 5.4, we further present case studies and visualize BIRD's generalization to real-world information.
5.1 Experiment Setting
We mainly follow the experiment settings of Dreamer. Across all environments, observations are 64 × 64 × 3 images, rewards are scaled to [0, 1], and the dimensionality of the action space varies from 1 to 12. Action repeat is fixed at 2 for all tasks. We implement Dreamer using its released code (https://github.com/google-research/dreamer), and all hyper-parameters remain the same as reported. Since our model loss term in Equation 9 has the same form as Dreamer's, we directly use the same model-learning component as Dreamer, which adopts multi-step prediction and removes the latent overshooting used in PlaNet. We also use the same neural-network architecture, so we have the same computational complexity as Dreamer. Specifically, CNN layers are employed to compress observations into the latent state space, and a GRU [64] is used for learning the latent dynamics. The policy network, reward network, and value network are all implemented as multi-layer perceptrons (MLPs), each trained with the Adam optimizer [65]. For all experiments, we use a discount factor of 0.99 and a mutual-information coefficient of 1e-8. The buffer size is 100k. We train BIRD with a single Nvidia 2080Ti GPU and a single CPU, and it takes 8 hours to run 1 million samples.
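For reference, the hyper-parameters stated in this section can be gathered into a single configuration; network sizes are not specified in the text and are therefore omitted here.

```python
# Hyper-parameters from this section, collected into one config dict.
config = {
    "image_size": (64, 64, 3),
    "action_repeat": 2,
    "discount": 0.99,
    "mutual_info_coef": 1e-8,
    "buffer_size": 100_000,
    "optimizer": "Adam",
}
```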
5.2 Results on DeepMind Control Suite
Learning policies from raw visual observations has always been a challenging problem for RL algorithms. We significantly improve on the state-of-the-art visual control approach on the visual control tasks from the DeepMind Control Suite, which provides a promising avenue for model-based policy learning from pixels. Figure 5 shows the training curves on 6 tasks, and additional results are placed in the supplementary materials. The comparison results demonstrate that BIRD significantly outperforms the baselines in terms of sample efficiency. We observe that BIRD uses half the training samples to reach the same score as PlaNet and Dreamer in Hopper Stand and Hopper Hop. Across all tasks, BIRD achieves performance comparable to D4PG and A3C, which are trained with 1,000 times more samples. In addition, BIRD achieves higher or similar convergence scores than the baselines in all tasks. Here, we provide insights into the superiority of BIRD. As the mutual information between real and imaginary trajectories increases, the behaviors that BIRD learns using the world model can be adapted to the real environment more appropriately and more quickly, while other model-based methods require a slower adaptation process. Besides, although the world model usually tends to overfit poor policies in the early stage, BIRD is not tempted by greedy policy optimization on the poor trajectories generated by such an imperfect model, because the entropy maximization term in Equation 10 endows the agent with a stronger exploration ability, and the confidence-aware policy optimization term encourages it to re-estimate all the gathered trajectories and focus on optimizing high-confidence ones.
5.3 Ablation Study
To verify that the outperformance of BIRD is not simply due to increasing the entropy of the policy, we conduct an ablation study that compares BIRD with Soft-BIRD (Section 4.3). Figure 5 shows that even the best performance of Soft-BIRD still leaves a large gap to BIRD. As shown in Walker Run of Figure 5, we find that the score of Soft-BIRD first rises for a while but eventually falls. The failure of Soft-BIRD suggests that policy improvement in model-based RL with analytic gradients is bottlenecked by the discrepancy between reality and imagination, so improving only the entropy of the policy does not help.
5.4 Case Study: Predictions on Key Actions
Our algorithm learns a world model with better generalization to real trajectories, especially on key actions, which matter for long-horizon behavior learning. We visualize some predictions on key actions, such as the explosive force for standing up and jumping in Hopper Stand and Hopper Hop, stomping with the front leg to prevent tumbling in Walker Run, and throwing the pole up to keep it stable in Cartpole Swingup. As shown in Figure 3, BIRD makes more accurate predictions than Dreamer. For example, in Hopper Hop, Dreamer wrongly predicts that the agent falls down at the takeoff moment, while BIRD has the accurate foresight that the agent will leap from the ground. Precise forecasting of the key actions implicitly suggests that the imaginary trajectories generated by the learned policy indeed carry more real-world information.
6 Conclusion
Generalization from imagination to reality is a crucial yet challenging problem in the context of model-based RL. In this paper, we propose a novel model-based framework, called BrIdging Reality and Dream (BIRD), which not only performs differentiable planning on imaginary trajectories, but also encourages adaptive generalization to reality by optimizing mutual information between imaginary and real trajectories. Results on challenging visual control tasks demonstrate that our algorithm achieves state-of-the-art performance in terms of sample efficiency. Our ablation study further shows that the superiority is attributed to maximizing mutual information rather than simply increasing the entropy of the policy. In the future, we will explore directions to further improve the generalization of imaginations, such as generalizable representations and reusable skill discovery.
Broader Impact
Model-free RL requires a large number of samples, which limits its applications to real-world tasks. For example, the trial-and-error training process of a robot requires substantial manpower and financial resources, and certain harmful actions can greatly reduce the lifetime of the robot. Building a world model and learning behaviors through imagination provides a broader prospect for real-world applications. This paper is situated in model-based RL and further improves sample efficiency over existing work, which will accelerate the development of real-world applications in automatic control, such as robotics and autonomous driving. In addition, this paper tackles a valuable problem of generalization from imagination to reality, so it is also of great interest to researchers in generalizable machine learning.
In the long run, this work may improve the efficiency of factory operations, avoid artificial repetition of difficult or dangerous work, save costs, and reduce risks in industry and agriculture. For daily life, it may enable a more intelligent lifestyle and improve the quality of life.
Our algorithm is a generic framework that does not leverage biases in the data. We evaluated our model on a popular benchmark of visual control tasks. However, like a majority of deep learning approaches, our algorithm has a common disadvantage: the learned knowledge and policy are not human-interpretable, and it is hard to know why the agent learns to act so well. Interpretability has always been a challenging open question, and in the future we are interested in incorporating recent deep-learning progress on causal inference into RL.
Acknowledgments and Disclosure of Funding
This work is supported in part by Science and Technology Innovation 2030 – "New Generation Artificial Intelligence" Major Project (No. 2018AAA0100904), and a grant from the Institute of Guo Qiang, Tsinghua University. | 1. What is the main contribution of the paper in the field of deep reinforcement learning?
2. What are the strengths of the proposed algorithm, particularly in its novelty and performance improvement?
3. What are the weaknesses of the paper regarding the importance of SVG and D4PG and the lack of certain experiments? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
[Bridging Imagination and Reality for Model-Based Deep Reinforcement Learning] In this paper, the authors propose an interesting pipeline where model-free policy updates and model-based updates can be combined. More specifically, SVG is used as the model-based part and D4PG is used as the model-free part. An additional loss is used to constrain the imaginary trajectories from deviating from the real trajectories.
Strengths
1) The algorithm is novel and interesting. While the idea of combining model-free and model-based training is straightforward, it has been unclear how to do it efficiently and effectively. I believe it can be beneficial to the reinforcement learning community. It is also neat from both a mathematical and an engineering perspective, making it reproducible and without the worry of heavy tuning. 2) The performance is improved with the proposed algorithm. Some state-of-the-art baselines, including PlaNet and Dreamer, are used as comparisons. 3) The paper is well-written and easy to understand.
Weaknesses
1) It is unclear how important SVG and D4PG are to the proposed algorithm; an ablation on the choice of sub-modules is missing. 2) There are also no experiments that take states as input. Some such experiments are quite challenging, for example Humanoid. Additionally, what happens if we do not include the mutual information term?
| 1. What is the main contribution of the paper in the field of deep reinforcement learning?
2. What are the strengths of the proposed method, particularly in its ability to increase sample efficiency?
3. How does the reviewer assess the performance of the proposed method compared to other recent approaches, such as DREAMER?
4. Are there any concerns regarding the empirical nature of the paper's contributions? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
The authors present a method for using rollouts from a learned model to help increase the sample efficiency of online deep reinforcement learning. In contrast to some prior approaches, the authors explicitly use a model-fitting term that prioritizes maximizing the mutual information between rollouts under the learned model and trajectories obtained from rollouts in the real world. The authors compare to Dreamer on 6 of the DeepMind Control Suite tasks and see encouraging performance.
Strengths
Using models can be a promising way to increase sample efficiency in deep reinforcement learning, but if the models are inaccurate it has the potential to yield worse performance. The authors propose an objective that both tries to maximize the policy performance and maximize the mutual information between the trajectories generated by running the policy in real life and the trajectories that would be generated under the learned model. The empirical results on a number of common benchmarks in the DeepMind Control suite show their approach has equal or better performance than other benchmarks. Their algorithm did better on Hopper than Dreamer in terms of sample efficiency, and slightly better elsewhere (though the confidence intervals looked to overlap). These are much more efficient than some other prior approaches. The authors present some nice ablation studies to help understand if the mutual information significantly impacts the algorithm, which it does.
Weaknesses
The primary contribution is empirical. Therefore I’d expect a bit more significant improvement over other recent approaches: the empirical performance seems good but not substantially better than Dreamer except on Hopper. |
NIPS | Title
Information-Theoretic Safe Exploration with Gaussian Processes
Abstract
We consider a sequential decision making task where we are not allowed to evaluate parameters that violate an a priori unknown (safety) constraint. A common approach is to place a Gaussian process prior on the unknown constraint and allow evaluations only in regions that are safe with high probability. Most current methods rely on a discretization of the domain and cannot be directly extended to the continuous case. Moreover, the way in which they exploit regularity assumptions about the constraint introduces an additional critical hyperparameter. In this paper, we propose an information-theoretic safe exploration criterion that directly exploits the GP posterior to identify the most informative safe parameters to evaluate. Our approach is naturally applicable to continuous domains and does not require additional hyperparameters. We theoretically analyze the method and show that we do not violate the safety constraint with high probability and that we explore by learning about the constraint up to arbitrary precision. Empirical evaluations demonstrate improved data-efficiency and scalability.
1 Introduction
In sequential decision making problems, we iteratively select parameters in order to optimize a given performance criterion. However, real-world applications such as robotics (Berkenkamp et al., 2021), mechanical systems (Schillinger et al., 2017) or medicine (Sui et al., 2015) are often subject to additional safety constraints that we cannot violate during the exploration process (Dulac-Arnold et al., 2019). Since it is a priori unknown which parameters lead to constraint violations, we need to actively and carefully learn about the constraints without violating them. That is, we need to learn about the safety of parameters by only evaluating parameters that are currently known to be safe.
Existing methods by Schreiter et al. (2015); Sui et al. (2015) tackle this problem by placing a Gaussian process (GP) prior over the constraint and only evaluate parameters that do not violate the constraint with high probability. To learn about the safety of parameters, they evaluate the parameter with the largest posterior variance. This process is made more efficient by SAFEOPT, which restricts its exploration component for safe-set expansion to parameters that are close to the boundary of the current set of safe parameters (Sui et al., 2015), at the cost of an additional tuning hyperparameter (a Lipschitz constant). However, uncertainty about the constraint is only a proxy objective that only indirectly learns about the safety of parameters. Consequently, data-efficiency could be improved with an exploration criterion that directly maximizes the information gained about the safety of parameters.
Our contribution In this paper, we propose Information-Theoretic Safe Exploration (ISE), a safe exploration algorithm that directly exploits the information gain about the safety of parameters in order to expand the region of the parameter space that we can classify as safe with high confidence. By directly optimizing for safe information gain, ISE is more data-efficient than existing approaches without manually restricting evaluated parameters to be on the boundary of the safe set, particularly
in scenarios where the posterior variance alone is not enough to identify good evaluation candidates, as in the case of heteroskedastic observation noise. This exploration criterion also means that we do not require additional hyperparameters beyond the GP posterior and that ISE is directly applicable to continuous domains. We theoretically analyze our method and prove that it learns about the safety of reachable parameters to arbitrary precision.
Related work Information-based selection criteria with Gaussian process models are successfully used in the context of unconstrained Bayesian optimization (BO, Shahriari et al. (2016); Bubeck and Cesa-Bianchi (2012)), where the goal is to find the parameters that maximize an a priori unknown function. Hennig and Schuler (2012); Hernández-Lobato et al. (2014); Wang and Jegelka (2017) select parameters that provide the most information about the optimal parameters, while Fröhlich et al. (2020) consider the information under noisy parameters. The success of these information-based approaches is also reflected in the superior data efficiency they demonstrate. We draw inspiration from these methods when defining an information-based criterion w.r.t. the safety of parameters to guide safe exploration.
In the presence of constraints that the final solution needs to satisfy, but which we can violate during exploration, Gelbart et al. (2014) propose to combine typical BO acquisition functions with the probability of satisfying the constraint. Instead, Gotovos et al. (2013) propose an uncertainty-based criterion that learns about the feasible region of parameters. When we are not allowed to ever evaluate unsafe parameters, safe exploration is a necessary sub-routine of BO algorithms to learn about the safety of parameters. To safely explore, Schreiter et al. (2015) globally learn about the constraint by evaluating the most uncertain parameters. SAFEOPT by Sui et al. (2015) extends this to joint exploration and optimization and makes it more efficient by explicitly restricting safe exploration to the boundary of the safe set. Sui et al. (2018) propose STAGEOPT, which additionally separates the exploration and optimization phases. Both of these algorithms assume access to a Lipschitz constant to define parameters close to the boundary of the safe set, which is a difficult tuning parameter in practice. These methods have been extended to multiple constraints by Berkenkamp et al. (2021), while Kirschner et al. (2019) scale them to higher dimensions with LINEBO, which explores in low-dimensional sub-spaces. To reduce computational costs, Duivenvoorden et al. (2017) suggest a continuous approximation to SAFEOPT without providing exploration guarantees. All of these methods rely on function uncertainty to drive exploration, while we directly maximize the information gained about the safety of parameters.
Safe exploration also arises in the context of Markov decision processes (MDP), (Moldovan and Abbeel, 2012; Hans et al., 2008). In particular, Turchetta et al. (2016, 2019) traverse the MDP to learn about the safety of parameters using methods that, at their core, explore using the same ideas as SAFEOPT and STAGEOPT to select parameters to evaluate. Consequently, our proposed method for safe exploration is also directly applicable to their setting.
2 Problem Statement
In this section, we introduce the problem and the notation used throughout the paper. We are given an unknown, expensive-to-evaluate safety constraint f : X → R such that parameters satisfying f(x) ≥ 0 are classified as safe, while all others are unsafe. To start exploring safely, we also have access to an initial safe parameter x0 that satisfies the safety constraint, f(x0) ≥ 0. We sequentially select safe parameters xn ∈ X at which to evaluate f in order to learn about the safety of parameters beyond x0. At each iteration n, we obtain a noisy observation yn := f(xn) + νn that is corrupted by additive homoskedastic Gaussian noise νn ∼ N(0, σ²ν). We illustrate the task in Figure 1a: starting from x0, we aim to safely explore the domain so that we ultimately classify as safe all the safe parameters that are reachable from x0.
As f is unknown and the evaluations yn are noisy, it is not feasible to select parameters that are safe with certainty, and we provide high-probability safety guarantees instead. To this end, we assume that the safety constraint f has bounded norm in the Reproducing Kernel Hilbert Space (RKHS) (Schölkopf and Smola, 2002) Hk associated with some kernel k : X × X → R with k(x, x′) ≤ 1. This assumption allows us to model f as a Gaussian process (GP) (Srinivas et al., 2010).
Gaussian Processes A GP is a stochastic process specified by a mean function µ : X → R and a kernel k (Rasmussen and Williams, 2006). It defines a probability distribution over real-valued functions on X, such that any finite collection of function values at parameters [x1, . . . , xn] is distributed as a multivariate normal distribution. The GP prior can then be conditioned on (noisy) function evaluations $D_n = \{(x_i, y_i)\}_{i=1}^{n}$. If the noise is Gaussian, then the resulting posterior is also a GP, with posterior mean and variance

$$\mu_n(x) = \mu(x) + k(x)^\top \big(K + I\sigma_\nu^2\big)^{-1}(y - \mu), \qquad \sigma_n^2(x) = k(x, x) - k(x)^\top \big(K + I\sigma_\nu^2\big)^{-1} k(x), \tag{1}$$

where µ := [µ(x1), . . . , µ(xn)] is the mean vector at the parameters xi ∈ Dn and [y]i := y(xi) the corresponding vector of observations. We have [k(x)]i := k(x, xi), the kernel matrix has entries [K]ij := k(xi, xj), and I is the identity matrix. In the following, we assume without loss of generality that the prior mean is identically zero: µ(x) ≡ 0.

Safe set Using the previous assumptions, we can construct high-probability confidence intervals on the function values f(x). Concretely, for any δ > 0 it is possible to find a sequence of positive numbers {βn} such that f(x) ∈ [µn(x) ± βnσn(x)] with probability at least 1 − δ, jointly for all x ∈ X and n ≥ 1. For a proof and more details see (Chowdhury and Gopalan, 2017). We use these confidence intervals to define a safe set

$$S_n := \{x \in X : \mu_n(x) - \beta_n \sigma_n(x) \geq 0\} \cup \{x_0\}, \tag{2}$$

which contains all parameters whose βn-lower confidence bound is above the safety threshold, together with the initial safe parameter x0. Consequently, all parameters in Sn are safe, f(x) ≥ 0 for all x ∈ Sn, with probability at least 1 − δ jointly over all iterations n.

Safe exploration Given the safe set Sn, the next question is which parameters in Sn to evaluate in order to efficiently expand it. Most existing safe exploration methods rely on uncertainty sampling over subsets of Sn. SAFEOPT-like approaches, for example, use the Lipschitz assumption on f to identify parameters in Sn that could expand the safe set, and then select, among those, the parameter with the largest uncertainty. In the next sections, we present and analyze our safe exploration strategy, ISE, which instead uses an information gain measure to identify the parameters that allow us to efficiently learn about the safety of parameters outside of Sn.
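To make the posterior update and the safe set concrete, here is a minimal NumPy sketch of Eqs. (1) and (2); the RBF kernel, its lengthscale, the noise level, and β = 2 are illustrative stand-ins rather than choices prescribed by the paper.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel k(a, b) = exp(-||a - b||^2 / (2 l^2))."""
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * lengthscale ** 2))

def gp_posterior(X_train, y_train, X_query, noise_var=1e-2):
    """Posterior mean and variance of Eq. (1), with zero prior mean."""
    K = rbf_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    L = np.linalg.cholesky(K)
    k_star = rbf_kernel(X_query, X_train)                  # rows are k(x)^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = k_star @ alpha
    v = np.linalg.solve(L, k_star.T)
    var = rbf_kernel(X_query, X_query).diagonal() - np.sum(v ** 2, axis=0)
    return mu, np.maximum(var, 1e-12)                      # guard tiny negatives

def safe_set_mask(mu, var, beta=2.0):
    """Membership in the safe set of Eq. (2): mu_n - beta * sigma_n >= 0."""
    return mu - beta * np.sqrt(var) >= 0.0

# Example: posterior from a single safe-seed observation at x0 = 0.
X0, y0 = np.array([[0.0]]), np.array([0.5])
grid = np.linspace(-3.0, 3.0, 61)[:, None]
mu, var = gp_posterior(X0, y0, grid)
print(grid[safe_set_mask(mu, var)].ravel())                # parameters deemed safe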
3 Information-Theoretic Safe Exploration
We present Information-Theoretic Safe Exploration (ISE), which guides the safe exploration by using an information-theoretic criterion. Our goal is to design an exploration strategy that directly exploits the properties of GPs to learn about the safety of parameters outside of Sn. We draw inspiration from Hennig and Schuler (2012); Wang and Jegelka (2017) who exploit information-theoretic insights to design data-efficient BO acquisition functions for their respective optimization objectives.
Information gain measure In our case, we want to evaluate f at safe parameters that are maximally informative about the safety of other parameters, in particular of those where we are uncertain about whether they are safe or not. To this end, we need a corresponding measure of information gain (Algorithm 1 summarizes the resulting method).
Algorithm 1 Information-Theoretic Safe Exploration
1: Input: GP prior (µ0, k, σν), safe seed x0
2: for n = 0, . . . , N do
3:     xn+1 ← arg max_{x ∈ Sn} max_{z ∈ X} Î_n({x, y}; Ψ(z))
4:     yn+1 ← f(xn+1) + ν
5:     Update GP posterior with (xn+1, yn+1)
We define such a measure using the binary variable Ψ(x) = 1{f(x) ≥ 0}, which equals one iff f(x) ≥ 0. Its entropy is given by

$$H_n\big[\Psi(z)\big] = -p_n^-(z)\ln\big(p_n^-(z)\big) - \big(1 - p_n^-(z)\big)\ln\big(1 - p_n^-(z)\big), \tag{3}$$

where p⁻n(z) is the probability of z being unsafe,

$$p_n^-(z) = \frac{1}{2} + \frac{1}{2}\,\operatorname{erf}\!\left(-\frac{1}{\sqrt{2}}\,\frac{\mu_n(z)}{\sigma_n(z)}\right).$$

The random variable Ψ(z) has high entropy when we are uncertain whether a parameter is safe or not; that is, its entropy decreases monotonically as |µn(z)| increases and the GP posterior moves away from the safety threshold. It also decreases monotonically as σn(z) decreases and we become more certain about the constraint. This behavior also implies that the entropy goes to zero as the confidence about the safety of z increases, as desired.
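As a sketch, the unsafe probability p⁻n(z) and the exact entropy (3) can be computed directly from the posterior mean and standard deviation (SciPy's erf supplies the Gaussian error function):

```python
import numpy as np
from scipy.special import erf

def p_unsafe(mu, sigma):
    """p_n^-(z) = 1/2 + 1/2 erf(-mu / (sqrt(2) * sigma))."""
    return 0.5 + 0.5 * erf(-mu / (np.sqrt(2.0) * sigma))

def entropy_psi(mu, sigma):
    """Exact binary entropy of Psi(z) from Eq. (3), in nats."""
    p = np.clip(p_unsafe(mu, sigma), 1e-12, 1.0 - 1e-12)  # avoid log(0)
    return -p * np.log(p) - (1.0 - p) * np.log(1.0 - p)
```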
Given our definition of Ψ, we consider the mutual information I({x, y}; Ψ(z)) between an observation y at a parameter x and the value of Ψ at another parameter z. Since Ψ is the indicator function of the safe regions of the parameter space, the quantity In({x, y}; Ψ(z)) measures how much information about the safety of z we gain by evaluating the safety constraint f at x at iteration n, averaged over all possible observed values y. This interpretation follows directly from the definition of mutual information:

$$I_n\big(\{x, y\}; \Psi(z)\big) = H_n\big[\Psi(z)\big] - \mathbb{E}_y\Big[H_{n+1}\big[\Psi(z) \,\big|\, \{x, y\}\big]\Big],$$

where Hn[Ψ(z)] is the entropy of Ψ(z) under the GP posterior at iteration n, while Hn+1[Ψ(z) | {x, y}] is its entropy at iteration n + 1, conditioned on a measurement y at x at iteration n. Intuitively, In({x, y}; Ψ(z)) is negligible whenever the confidence about the safety of z is high or, more generally, whenever an evaluation at x does not have the potential to substantially change our belief about the safety of z. The mutual information is large whenever an evaluation at x on average causes the confidence about the safety of z to increase significantly. As an example, in Figure 1 we plot In({x, y}; Ψ(z)) as a function of x ∈ Sn for a specific choice of z and for an RBF kernel. As one would expect, we see that the closer x gets to z, the bigger the mutual information becomes, and that it vanishes in the neighborhood of previously evaluated parameters, where the posterior variance is negligible.
To compute In({x, y}; Ψ(z)), we need to average the entropy (3) conditioned on an evaluation y over all possible values of y. However, the resulting integral is intractable given the expression of Hn[Ψ(z)] in (3). In order to obtain a tractable result, we derive a close approximation of (3),

$$H_n\big[\Psi(z)\big] \approx \hat{H}_n\big[\Psi(z)\big] \doteq \ln(2)\,\exp\left\{-\frac{1}{\pi \ln(2)}\left(\frac{\mu_n(z)}{\sigma_n(z)}\right)^{2}\right\}. \tag{4}$$
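The approximation (4) is a single exponential in the squared posterior z-score µn(z)/σn(z); a direct transcription:

```python
import numpy as np

C1 = 1.0 / (np.pi * np.log(2.0))   # the paper's constant c1 = 1 / (pi ln 2)

def entropy_psi_approx(mu, sigma):
    """H_hat_n[Psi(z)] from Eq. (4): ln(2) * exp(-c1 * (mu / sigma)^2)."""
    return np.log(2.0) * np.exp(-C1 * (mu / sigma) ** 2)

# Both the exact entropy and the approximation peak at ln(2) when mu = 0,
# i.e. when the posterior mean sits exactly on the safety threshold.
```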
The approximation in (4) is obtained by truncating the Taylor expansion of Hn[Ψ(z)] at the second order, and it recovers its true behavior almost exactly (see Appendix B for details). Since the posterior mean at z after an evaluation at x depends linearly on µn(x), and since the probability density of y depends exponentially on −µ²n(x), using (4) reduces the conditional entropy Ey[Ĥn+1[Ψ(z) | {x, y}]] to a Gaussian integral with the exact solution

$$\mathbb{E}_y\Big[\hat{H}_{n+1}\big[\Psi(z)\,\big|\,\{x, y\}\big]\Big] = \ln(2)\,\sqrt{\frac{\sigma_\nu^2 + \sigma_n^2(x)\big(1 - \rho_n^2(x, z)\big)}{\sigma_\nu^2 + \sigma_n^2(x)\big(1 + c_2\,\rho_n^2(x, z)\big)}}\;\exp\left\{-c_1\,\frac{\mu_n^2(z)}{\sigma_n^2(z)}\,\frac{\sigma_\nu^2 + \sigma_n^2(x)}{\sigma_\nu^2 + \sigma_n^2(x)\big(1 + c_2\,\rho_n^2(x, z)\big)}\right\}, \tag{5}$$

where ρn(x, z) is the linear correlation coefficient between f(x) and f(z), and c1 and c2 are given by c1 := 1/(π ln 2) and c2 := 2c1 − 1. This result allows us to analytically calculate the approximated mutual information

$$\hat{I}_n\big(\{x, y\}; \Psi(z)\big) \doteq \hat{H}_n\big[\Psi(z)\big] - \mathbb{E}_y\Big[\hat{H}_{n+1}\big[\Psi(z)\,\big|\,\{x, y\}\big]\Big],$$

which we use to define the ISE acquisition function and which we analyze theoretically in Section 4.
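Combining (4) and (5) gives the approximate mutual information Î in closed form. The sketch below transcribes the reconstructed formulas; since the source equations above were recovered from a garbled copy, treat the exact grouping as an assumption to be checked against Appendix B of the paper.

```python
import numpy as np

C1 = 1.0 / (np.pi * np.log(2.0))   # c1
C2 = 2.0 * C1 - 1.0                # c2 (slightly negative)

def expected_entropy_after(mu_z, sigma_z, sigma_x, rho, noise_var):
    """Closed form of Eq. (5): E_y[H_hat_{n+1}[Psi(z) | {x, y}]]."""
    num = noise_var + sigma_x ** 2 * (1.0 - rho ** 2)
    den = noise_var + sigma_x ** 2 * (1.0 + C2 * rho ** 2)
    scale = (noise_var + sigma_x ** 2) / den
    return (np.log(2.0) * np.sqrt(num / den)
            * np.exp(-C1 * (mu_z / sigma_z) ** 2 * scale))

def ise_information_gain(mu_z, sigma_z, sigma_x, rho, noise_var):
    """Approximate mutual information I_hat({x, y}; Psi(z))."""
    h_now = np.log(2.0) * np.exp(-C1 * (mu_z / sigma_z) ** 2)   # Eq. (4)
    return h_now - expected_entropy_after(mu_z, sigma_z, sigma_x, rho, noise_var)
```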
ISE acquisition function Now that we have defined a way to measure and compute the information gain about the safety of parameters, we can use it to design an exploration strategy that selects the next parameters to evaluate. The natural choice for such a selection criterion is to select the parameter that maximizes the information gain; that is, we select xn+1 according to

$$x_{n+1} \in \operatorname*{arg\,max}_{x \in S_n}\; \max_{z \in X}\; \hat{I}_n\big(\{x, y\}; \Psi(z)\big), \tag{6}$$

where we jointly optimize over x in the safe set Sn and an unconstrained second parameter z. Evaluating f at xn+1 chosen according to (6) maximizes the information gained about the safety of some parameter z ∈ X, which allows us to efficiently learn about parameters that are not yet known to be safe. While z may lie anywhere in the domain, the parameters where we are most uncertain about the safety constraint lie outside the safe set. By leaving z unconstrained, we show in our theoretical analysis in Section 4 that, once we have learned about the safety of parameters outside the safe set, (6) resorts to learning about the constraint function inside Sn as well. An overview of ISE can be found in Algorithm 1, and we show an example run of a one-dimensional illustration of the algorithm in Figure 2.
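For illustration, one step of Algorithm 1 can be run on a finite grid, reusing gp_posterior, safe_set_mask, and ise_information_gain from the sketches above; the paper itself optimizes Eq. (6) over a continuous domain, so the grid is purely didactic.

```python
import numpy as np

def posterior_cov(X_train, Xq, noise_var=1e-2):
    """Full posterior covariance over query points Xq (zero prior mean)."""
    K = rbf_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    Ks = rbf_kernel(Xq, X_train)
    return rbf_kernel(Xq, Xq) - Ks @ np.linalg.solve(K, Ks.T)

def ise_step(X_grid, X_train, y_train, noise_var=1e-2, beta=2.0):
    """One iteration of Algorithm 1, with Eq. (6) maximized over a grid.
    Assumes at least one grid point is safe (e.g. the safe seed)."""
    mu, var = gp_posterior(X_train, y_train, X_grid, noise_var)
    sigma = np.sqrt(var)
    safe = safe_set_mask(mu, var, beta)
    cov = posterior_cov(X_train, X_grid, noise_var)
    rho = np.clip(cov / np.outer(sigma, sigma), -1.0, 1.0)
    # gain[i, j] = I_hat for an evaluation at x = grid[i] about z = grid[j].
    gain = ise_information_gain(mu[None, :], sigma[None, :],
                                sigma[:, None], rho, noise_var)
    gain[~safe, :] = -np.inf            # only safe parameters may be evaluated
    i, _ = np.unravel_index(np.argmax(gain), gain.shape)
    return X_grid[i]                    # the next parameter x_{n+1} to evaluate
```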
4 Theoretical Results
In this section, we study the expression for Î_n({x, y}; Ψ(z)) obtained using (4) and (5) and analyze the properties of the ISE exploration criterion (6). By construction of Sn in (2) and the assumptions on f in Section 2, we know that any parameter selected according to (6) is safe with high probability; see Appendix A for details. To show that we also learn about the safe set, we first need to define what it means to successfully explore starting from x0. The main challenge is that it is difficult to analyze how a GP generalizes based on noisy observations, so it is difficult to define a notion of convergence that does not depend on the specific run. SAFEOPT avoids this issue by expanding the safe set not based on the GP, but only using the Lipschitz constant L. Contrary to their approach, we depend on the GP to generalize from the safe set. In this case, the natural notion of convergence is provided by the posterior variance. In particular, we say that at iteration n we have explored the safe set up to ε-accuracy if σ²n(x) ≤ ε for all parameters x in Sn. In the following, we show that ISE asymptotically leads either to ε-accurate exploration of the safe set or to indefinite expansion of the safe set. In future work it will be interesting to further investigate the notion of generalization and to derive convergence results similar to those obtained by Sui et al. (2015).
Theorem 1. Assume that xn+1 is chosen according to (6), and that there exists n̂ such that Sn+1 ⊆ Sn for all n ≥ n̂. Moreover, assume that |µn(x)| ≤ M for some M > 0, for all n ≥ n̂ and all x ∈ Sn. Then, for every ε > 0 there exists Nε such that σ²n(x) ≤ ε for every x ∈ Sn whenever n ≥ n̂ + Nε.

The smallest such Nε is given by

$$N_\varepsilon = \min\left\{ N \in \mathbb{N} : b^{-1}\!\left(\frac{C\,\gamma_N}{N}\right) \leq \varepsilon \right\}, \tag{7}$$

where

$$b(\eta) := \ln(2)\,\exp\left\{-\frac{c_1 M^2}{\eta}\right\}\left[1 - \sqrt{\frac{\sigma_\nu^2}{2 c_1 \eta + \sigma_\nu^2}}\,\right],$$

$\gamma_N = \max_{D \subset X,\, |D| = N} I\big(f(D); y(D)\big)$ is the maximum information capacity of the chosen kernel (Srinivas et al., 2010; Contal et al., 2014), and $C = \ln(2)/\big(\sigma_\nu^2 \ln(1 + \sigma_\nu^{-2})\big)$.
Proof. See Appendix A.
Theorem 1 tells us that if at some point the set of safe parameters Sn stops expanding, then the posterior variance over the safe set eventually vanishes. The intuition behind Theorem 1 is that if there were a parameter x in the safe set whose posterior mean remained finite and whose posterior variance remained bounded from below, then an evaluation of f at x would yield a non-negligible average information gain about the safety of x; since x is in the safe set, ISE would therefore eventually be forced to evaluate x, reducing its posterior variance. This result guarantees that, should the safe set stop expanding, ISE will asymptotically explore the safe set up to an arbitrary ε-accuracy. In practice, we observe that ISE first focuses on reducing the uncertainty in areas of the safe set that are most informative about parameters whose classification is still uncertain (e.g., the boundary of the safe set), and only eventually turns to learning about the inside of the safe set. This behavior is what ultimately leads the posterior variance to decay over the whole of Sn. Therefore, even if in general it is not always possible to say whether or not the safe set will ever stop expanding, we can read Theorem 1 as an exploration guarantee for ISE, as it rules out the possibility that the proposed acquisition function forever leaves the uncertainty high in areas of the safe set that, if better understood, could lead to an expansion of the safe set.
Theorem 1 requires a bound on the GP posterior mean function, which is always satisfied with high probability under our assumptions about f. Specifically, we have that |µn(x)| ≤ 2βn with probability at least 1 − δ for all n (see Appendix A for details). Therefore, it does not represent an additional restrictive assumption on f. Finally, we also note that the constant Nε defined by (7) always exists, since the function b is monotonically increasing, as long as γN grows sublinearly in N. Srinivas et al. (2010) prove that this is the case for commonly used kernels and, more generally, it is a prerequisite for data-efficient learning with GP models.
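Numerically, Nε from Eq. (7) can be evaluated by inverting the monotone function b with bisection. In the sketch below, the capacity bound γN and the constant C are user-supplied inputs, since the exact grouping of C in this copy of the theorem is ambiguous.

```python
import numpy as np

C1 = 1.0 / (np.pi * np.log(2.0))

def b(eta, M, noise_var):
    """b(eta) from Theorem 1; increases monotonically from 0 to ln(2)."""
    return (np.log(2.0) * np.exp(-C1 * M ** 2 / eta)
            * (1.0 - np.sqrt(noise_var / (2.0 * C1 * eta + noise_var))))

def b_inverse(y, M, noise_var, lo=1e-12, hi=1e12, iters=200):
    """Invert b by bisection on a log scale; assumes 0 < y < ln(2)."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if b(mid, M, noise_var) < y:
            lo = mid
        else:
            hi = mid
    return hi

def n_epsilon(eps, M, noise_var, gamma, C, n_max=10 ** 6):
    """Smallest N with b^{-1}(C * gamma(N) / N) <= eps, as in Eq. (7);
    `gamma` is a user-supplied bound on the information capacity gamma_N."""
    for N in range(1, n_max + 1):
        if b_inverse(C * gamma(N) / N, M, noise_var) <= eps:
            return N
    return None
```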
5 Discussion and Limitations
ISE drives exploration of the parameter space by selecting the parameters to evaluate according to (6). An alternative but conceptually similar approach to this criterion would be to consider the parameter that yields the biggest information gain on average over the domain, i.e., substituting the inner max in (6) with an average over X . The resulting integral, however, is intractable and would require further approximations. Moreover, the parameter found by solving (6) will also yield a high average information gain over the domain, due to the regularity of all involved objects.
Being able to work in a continuous domain, ISE can deal with higher dimensional domains better than algorithms requiring a discrete parameter space. However, as noted in Section 4, finding xn+1 as in (6) means solving a non-convex optimization problem with twice the dimension of the parameter space, which can become computationally challenging as the dimension grows. In a high-dimensional setting, we follow LINEBO by Kirschner et al. (2019), which at each iteration selects a random one-dimensional subspace to which it restricts the optimization of the acquisition function.
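A LINEBO-style restriction can be sketched as follows: draw a random direction through a point of the safe set and maximize the acquisition along that one-dimensional line only; the acquisition callable and the anchor point are placeholders.

```python
import numpy as np

def optimize_on_random_line(acquisition, anchor, half_width=2.0, n_grid=201, rng=None):
    """Maximize `acquisition` along a random 1-D line through `anchor`,
    a LINEBO-style restriction of the search to a random subspace."""
    rng = np.random.default_rng() if rng is None else rng
    direction = rng.standard_normal(anchor.shape[0])
    direction /= np.linalg.norm(direction)
    ts = np.linspace(-half_width, half_width, n_grid)
    candidates = anchor[None, :] + ts[:, None] * direction[None, :]
    values = np.array([acquisition(c) for c in candidates])
    return candidates[np.argmax(values)]
```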
In Sections 2 and 3, we assumed the observation process to be homoskedastic. However, this need not be the case, and the results extend to heteroskedastic Gaussian noise. The observation noise at a parameter x appears explicitly in the ISE acquisition function, since it crucially affects the amount of information that we can gain by evaluating the constraint f at x. In contrast, STAGEOPT-like methods do not consider the observation noise in their acquisition functions. As a consequence, ISE can perform significantly better in a heteroskedastic setting, as we also show in Section 6.
Lastly, we reiterate that the theoretical safety guarantees offered by ISE are derived under the assumption that f is a bounded-norm element of the RKHS associated with the GP's kernel. In applications, therefore, the choice of the kernel function becomes even more crucial when safety is an issue. For details on how to construct and choose kernels, see (Garnett, 2022). The safety guarantees also depend on the choice of βn. Typical expressions for βn include the RKHS norm of the constraint f (Chowdhury and Gopalan, 2017; Fiedler et al., 2021), which is in general difficult to estimate in practice. Because of this, a constant value of βn is usually used in practice instead.
6 Experiments
In this section we empirically evaluate ISE. Additional details about the experiments and setup can be found in Appendix C. As commonly done in the literature (see Section 5), we set βn = 2 for all experiments. This choice guarantees safety per iteration, rather than jointly for all n, and it allows for a less conservative bound than the one needed for the joint guarantees.
GP samples For the first part of the experiments, we evaluate ISE on constraint functions f obtained by sampling a GP prior at a finite number of points. This allows us to test ISE under the assumptions of the theory, and we compare its performance to that of the exploration part of STAGEOPT (Sui et al., 2018). STAGEOPT is a modified version of SAFEOPT in which the exploration and optimization parts are performed separately: first the SAFEOPT exploration strategy is used to expand the safe set as much as possible, then the objective function is optimized within the discovered safe set. We further modify the version of STAGEOPT used in the experiment by defining the safe set in the same way ISE does, i.e., by means of the GP posterior, as done, for example, by Berkenkamp et al. (2016). We select 100 samples from a two-dimensional GP with RBF kernel defined on [−2.5, 2.5] × [−2.5, 2.5] and run ISE and STAGEOPT for 100 iterations on each sample. As STAGEOPT requires a discretization of the domain, we use this discretization to compare the sample efficiency of the two methods by computing, at each iteration, the percentage of the discretized domain that is classified as safe. Moreover, we also compare with the heuristic acquisition inspired by SAFEOPT proposed by Berkenkamp et al. (2016). This method works exactly like STAGEOPT, with the difference that the set of expanders is computed directly from the GP posterior rather than from the Lipschitz constant. More precisely, a parameter x is considered an expander if observing a value of µn(x) + βnσn(x) at x would enlarge the safe set. For the STAGEOPT runs, we use the kernel metric to compute the set of potential expanders, for different values of the Lipschitz constant L. From the results shown in Figure 3a, we see not only that ISE performs as well as or better than all tested instances of STAGEOPT, but also how strongly the choice of L affects the performance of the latter. The plot also makes evident how crucial the choice of the Lipschitz constant is for STAGEOPT and SAFEOPT-like algorithms in general. In Table 1 in Appendix C, we report the average percentage of safety violations per run achieved by ISE and STAGEOPT. As expected, the percentage of safety violations is comparable among all algorithms.
To show that not only overestimating but also underestimating the Lipschitz constant can negatively impact STAGEOPT's exploration, we consider the simple one-dimensional constraint function f(x) = e^{−x} + 0.05 and run the safe exploration for multiple values of the Lipschitz constant. This function moves increasingly away from the safety threshold as x → −∞, while it asymptotically approaches the threshold as x → ∞, so that a good exploration algorithm would, ideally, quickly classify the domain region x < 0 as safe and then keep exploring the boundary of the safe set for x > 0. The results plotted in Figure 3b show how both a too high and a too low Lipschitz constant can lead to sub-optimal exploration. With too small a constant, STAGEOPT considers almost all parameters in the domain to be expanders, leading to additional evaluations in the region x < 0 that are unlikely to cause expansion of the safe set. On the other hand, a too high value of the Lipschitz constant can cause the set of expanders to become empty as soon as the posterior mean gets close to the safety threshold for x > 0.
OpenAI Gym Control After investigating the performance of ISE under the hypotheses of the theory, we apply it to two classic control tasks from the OpenAI Gym framework (Brockman et al., 2016), with the goal of finding the set of controller parameters that satisfy a given safety constraint. In particular, we consider linear controllers for the inverted pendulum and cart pole tasks.
For the inverted pendulum task, the linear controller is given by ut = α1θt + α2θ̇t, where ut is the control signal at time t, while θt and θ̇t are, respectively, the angular position and the angular velocity of the pendulum. Starting from a position close to the upright equilibrium, the controller's task is to stabilize the pendulum, subject to a safety constraint on the maximum velocity reached within one episode. For a given initial controller configuration $\alpha^0 := (\alpha_1^0, \alpha_2^0)$, we want to explore the controller's parameter space while avoiding configurations that make the pendulum swing with too high a velocity. We apply ISE to explore the α-space with $x_0 = \alpha^0$, the safety constraint being the maximum angular velocity reached by the pendulum in an episode of fixed length. In this case, the safety threshold is not at zero but at some finite value θ̇M, and the safe parameters are those for which the maximum velocity stays below θ̇M. The formalism developed in the previous sections applies to this scenario if we consider $f(\alpha) = -(\max_t \dot{\theta}_t(\alpha) - \dot{\theta}_M)$. In Figure 4a we show the true safe set for this problem, while Figures 4b–4d show how ISE safely explores it. These plots show how the ISE acquisition function (6) selects parameters that are close to the current safe set boundary and hence most informative about the safety of parameters outside the safe set. This behavior eventually leads the GP posterior to classify the full true safe set as safe, as Figure 4d shows.
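To illustrate how such a constraint could be evaluated, here is a crude Euler rollout of a pendulum under the linear controller; the dynamics constants (gravity, length, mass, step size, episode length) and the use of |θ̇| as the episode speed are made-up stand-ins, not the Gym parameters.

```python
import numpy as np

def max_angular_velocity(alpha1, alpha2, theta0=0.1, n_steps=500, dt=0.01,
                         g=9.81, length=1.0, mass=1.0):
    """Roll out u_t = alpha1*theta_t + alpha2*theta_dot_t on a crude pendulum
    model (theta measured from upright) and return the largest speed seen."""
    theta, theta_dot, max_speed = theta0, 0.0, 0.0
    for _ in range(n_steps):
        u = alpha1 * theta + alpha2 * theta_dot
        # Gravity destabilizes the upright equilibrium; the controller torque u
        # acts directly on the joint.
        theta_ddot = (g / length) * np.sin(theta) + u / (mass * length ** 2)
        theta_dot += dt * theta_ddot
        theta += dt * theta_dot
        max_speed = max(max_speed, abs(theta_dot))
    return max_speed

def safety_constraint(alpha, theta_dot_max=2.0):
    """f(alpha) = -(max_t theta_dot_t(alpha) - theta_dot_max); f >= 0 is safe."""
    return -(max_angular_velocity(alpha[0], alpha[1]) - theta_dot_max)
```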
The cart pole task is similar to the inverted pendulum one, but the parameter space has three dimensions. The controller we consider is given by ut = α1θt + α2θ̇t + α3ṡt, where θt and θ̇t are, respectively, the angular position and angular velocity of the pole at time t, while ṡt is the cart's velocity. We set the initial state to zero angular and linear velocity, with the pole close to the vertical position, the controller's goal being to keep the pole stable in the upright position. A combination of the three parameters α1, α2 and α3 is considered safe if the angle of the pole does not exceed a given threshold within the episode. Again, we can easily cast this safety constraint in the formalism developed in the paper: $f(\alpha) = -(\max_t \theta_t(\alpha) - \theta_M)$, where θM is the maximum allowed angle. Figure 5a shows the expansion of the cart pole α-space promoted by ISE, compared with STAGEOPT for different values of the Lipschitz constant. Both methods achieve comparable sample efficiency, and both eventually classify the full true safe set as safe.
High dimensional domains Many interesting applications have a high dimensional parameter space. While SAFEOPT-like methods are difficult to apply already in more than three dimensions due to the discretization of the domain, ISE also performs well in four or five dimensions. To see this, we apply ISE to the constraint function $f(x) = e^{-x^2} + 2e^{-(x - x_1)^2} + 5e^{-(x - x_2)^2} - 0.2$. Figure 5b shows the ISE performance in dimension 5. We see that ISE is able to promote the expansion of the safe set, leading to an increasingly large portion of the true safe set being classified as safe.
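For reference, the constraint can be written in code as follows, assuming the squared terms denote squared Euclidean norms and with the centers x1, x2 supplied by the caller:

```python
import numpy as np

def constraint_5d(x, x1, x2):
    """f(x) = exp(-||x||^2) + 2 exp(-||x - x1||^2) + 5 exp(-||x - x2||^2) - 0.2."""
    x, x1, x2 = (np.asarray(v, dtype=float) for v in (x, x1, x2))
    sq = lambda v: float(np.sum(v ** 2))
    return np.exp(-sq(x)) + 2.0 * np.exp(-sq(x - x1)) + 5.0 * np.exp(-sq(x - x2)) - 0.2
```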
Heteroskedastic noise domains For even higher dimensions, we can follow an approach similar to LINEBO, limiting the optimization of the acquisition function to a randomly selected one-dimensional subspace of the domain. Moreover, as discussed in Section 5, it is also interesting to test ISE in the case of heteroskedastic observation noise, since the noise is a critical quantity for the ISE acquisition function, while it does not affect the selection criterion of STAGEOPT-like methods. Therefore, in this experiment we combine a high dimensional problem with heteroskedastic noise. In particular, we apply a LINEBO version of ISE to the constraint function $f(x) = \frac{1}{2}e^{-x^2} + e^{-(x \pm x_1)^2} + 3e^{-(x \pm x_2)^2} + 0.2$ in dimensions nine and ten, with the safe seed being the origin. This function has two symmetric global optima at ±x2, and we set two different noise levels in the two symmetric domain halves containing the optima. To assess the exploration performance, we use the simple regret, defined as the difference between the current safe optimum and the true safe optimum. As the results in Figure 6 show, ISE achieves greater sample efficiency than the other STAGEOPT-like methods. Namely, for a given number of iterations, by explicitly exploiting knowledge about the observation noise, ISE is able to classify as safe regions of the domain further away from the origin, in which the constraint function assumes its largest values, resulting in smaller regret. On the other hand, SAFEOPT-like methods only focus on the posterior variance, so that the higher observation noise causes them to remain stuck in a smaller neighborhood of the origin, resulting in bigger regret.
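Since σν enters Î explicitly, adapting the earlier sketch to heteroskedastic noise only requires evaluating the noise variance at the candidate x; the noise model below is hypothetical:

```python
import numpy as np

def noise_var_at(x, low=1e-3, high=1e-1):
    """Hypothetical heteroskedastic noise: larger variance in one half-space."""
    return high if np.asarray(x).ravel()[0] > 0 else low

# ISE accounts for this automatically: when scoring a candidate x with
# ise_information_gain, replace the constant noise_var by noise_var_at(x).
# Variance-based criteria ignore this term, which is consistent with the
# gap observed in Figure 6.
```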
7 Conclusion and Societal Impact
We have introduced Information-Theoretic Safe Exploration (ISE), a novel approach to safely explore a space in a sequential decision task where the safety constraint is a priori unknown. ISE efficiently and safely explores by evaluating only parameters that are safe with high probability and by choosing those parameters that yield the greatest information gain about the safety of other parameters. We theoretically analyzed ISE and showed that it leads to arbitrary reduction of the uncertainty in the largest reachable safe set containing the starting parameter. Our experiments support these theoretical results and demonstrate an increased sample efficiency and scalability of ISE compared to SAFEOPT-based approaches.
In many safety-sensitive applications the shape of the safety constraints is unknown, so an important prerequisite for any kind of process is to identify which parameters are safe to evaluate. By providing a principled way to do this, the contributions of this paper make it possible to address safety in a broad range of applications, which can encourage the use of ML approaches in safety-sensitive settings. On the other hand, misuse of the proposed method cannot be prevented in general.
2. What are the strengths and weaknesses of the proposed approach, particularly in its theoretical analysis and practical demonstration?
3. Do you have any questions or concerns regarding the paper's content, such as the approximation used in the analysis, the scenario analyzed in Theorem 1, or the definition of the safe set?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any minor suggestions or questions regarding the presentation of the method, such as highlighting ISE using a bold line in figures or discussing the choice of beta in simulations? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
Optimization of an unknown function under unknown constraints is a problem that has often been addressed with Gaussian process surrogate models in the past. Due to the unknown constraints, existing approaches typically start with a given set of safe parameters, which is subsequently expanded. In order to ensure the safety of the approach, this expansion must not lead to constraint violations. State-of-the-art methods determine evaluation points for the expansion of the safe set by finding the most uncertain point in discretized sets. However, this leads to additional hyperparameters and tends to scale poorly to high dimensional input domains. This paper aims to avoid the issues resulting from discretization by employing the information gain of the binary random variable describing the constraint violation at a given input point. Since this information gain cannot be computed exactly in practice, an efficient approximation is proposed. It is shown that selecting evaluation points with the proposed criterion is guaranteed to drive the posterior GP variance below any threshold within a finite number of evaluations if the safe set is not expanded anymore. The practical strengths of the method are demonstrated in simulations with GP sample functions and control examples. In particular in high dimensional problems, a significantly better exploration of the true safe region can be observed.
Strengths And Weaknesses
The paper is on an interesting and relevant topic. The proposed ideas are novel to the best of my knowledge and nicely relate to entropy search and similar approaches in unconstrained Bayesian optimization. The paper combines theoretical analysis and demonstration of practical advantages well. In general, the paper is well-structured and the presentation of the method is excellent in my opinion. Merely in the supplemental material I have noticed that everything is proven for the approximation proposed by the authors, while it is stated that the analysis is performed for the mutual information itself. It took me some time to realize this discrepancy, so I would recommend clarifying it. The derivation of the theoretical results looks quite straightforward, but the result is certainly interesting and fits well into the overall paper. However, it does not become clear whether the scenario analyzed in Theorem 1 ever actually occurs. The reason for this is that it requires the safe set to stop growing at some point in time, which could potentially never happen if the increase in the size of the safe set becomes smaller and smaller every iteration but never reaches zero. This case is also not discussed in the paper, so the significance of the theoretical results is strongly diminished in my opinion. I did not check the theoretical results in detail, but they seem to be correct to the best of my knowledge. The simulations are properly executed with sufficiently many random seeds, including functions satisfying the assumptions of the approach and functions for which this is not clear. The effect of the dimensionality on the performance of the different approaches is illustrated nicely. I merely have a small technical comment regarding the statement in Section 6 that the ISE approach is evaluated on samples from a GP because this would allow testing ISE under the assumptions of the theory. While this is true for the exploration, the usage of GP sample functions poses the theoretical challenge that samples from a GP are well known to almost surely have an unbounded RKHS norm. Therefore, the definition of the safe set based on (Chowdhury and Gopalan, 2017) does not work anymore because it requires a bound for the RKHS norm.
Questions
Apart from my comments mentioned above, I only have minor suggestions/questions:
Highlighting ISE using a bold line in Figs. 3 and 6 might be a good idea
How is beta chosen in the simulations?
Line 47: "in" → "on"
Lines 231-232: exploration exploration
Lines 262-263: the true safe for this problem
The y axis label of Fig. 6 seems wrong. I think it should be the found optimum.
Line 280: I think it should be referred to Fig. 5b instead of 6b here.
Limitations
I think the limitations of the method and potential societal impact are generally well discussed. While the choice of the kernel function is highlighted as a potential challenge, it is not sufficiently discussed that the RKHS bound also needs to be known for the theory to hold. This bound is generally difficult to obtain in practice, which crucially limits practical safety guarantees. In my opinion, this should be discussed in the limitations section. |
NIPS | Title
Information-Theoretic Safe Exploration with Gaussian Processes
Abstract
We consider a sequential decision making task where we are not allowed to evaluate parameters that violate an a priori unknown (safety) constraint. A common approach is to place a Gaussian process prior on the unknown constraint and allow evaluations only in regions that are safe with high probability. Most current methods rely on a discretization of the domain and cannot be directly extended to the continuous case. Moreover, the way in which they exploit regularity assumptions about the constraint introduces an additional critical hyperparameter. In this paper, we propose an information-theoretic safe exploration criterion that directly exploits the GP posterior to identify the most informative safe parameters to evaluate. Our approach is naturally applicable to continuous domains and does not require additional hyperparameters. We theoretically analyze the method and show that we do not violate the safety constraint with high probability and that we explore by learning about the constraint up to arbitrary precision. Empirical evaluations demonstrate improved data-efficiency and scalability.
1 Introduction
In sequential decision making problems, we iteratively select parameters in order to optimize a given performance criterion. However, real-world applications such as robotics (Berkenkamp et al., 2021), mechanical systems (Schillinger et al., 2017) or medicine (Sui et al., 2015) are often subject to additional safety constraints that we cannot violate during the exploration process (Dulac-Arnold et al., 2019). Since it is a priori unknown which parameters lead to constraint violations, we need to actively and carefully learn about the constraints without violating them. That is, we need to learn about the safety of parameters by only evaluating parameters that are currently known to be safe.
Existing methods by Schreiter et al. (2015); Sui et al. (2015) tackle this problem by placing a Gaussian process (GP) prior over the constraint and only evaluate parameters that do not violate the constraint with high probability. To learn about the safety of parameters, they evaluate the parameter with the largest posterior variance. This process is made more efficient by SAFEOPT, which restricts its safe set expansion exploration component to parameters that are close to the boundary of the current set of safe parameters (Sui et al., 2015) at the cost of an additional tuning hyperparameter (Lipschitz constant). However, uncertainty about the constraint is only a proxy objective that only indirectly learns about the safety of parameters. Consequently, data-efficiency could be improved with an exploration criterion that directly maximizes the information gained about the safety of parameters.
Our contribution In this paper, we propose Information-Theoretic Safe Exploration (ISE), a safe exploration algorithm that directly exploits the information gain about the safety of parameters in order to expand the region of the parameter space that we can classify as safe with high confidence. By directly optimizing for safe information gain, ISE is more data-efficient than existing approaches without manually restricting evaluated parameters to be on the boundary of the safe set, particularly
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
in scenarios where the posterior variance alone is not enough to identify good evaluation candidates, as in the case of heteroskedastic observation noise. This exploration criterion also means that we do not require additional hyperparameters beyond the GP posterior and that ISE is directly applicable to continuous domains. We theoretically analyze our method and prove that it learns about the safety of reachable parameters to arbitrary precision.
Related work Information-based selection criteria with Gaussian processes models are successfully used in the context of unconstrained Bayesian optimization (BO, Shahriari et al. (2016); Bubeck and Cesa-Bianchi (2012)), where the goal is to find the parameters that maximize an a priori unknown function. Hennig and Schuler (2012); Henrández-Lobato et al. (2014); Wang and Jegelka (2017) select parameters that provide the most information about the optimal parameters, while Fröhlich et al. (2020) consider the information under noisy parameters. The success of these information-based approaches also relies on the superior data efficiency that they demonstrated. We draw inspiration from these methods when defining an information-based criterion w.r.t. the safety of parameters to guide safe exploration.
In the presence of constraints that the final solution needs to satisfy, but which we can violate during exploration, Gelbart et al. (2014) propose to combine typical BO acquisition functions with the probability of satisfying the constraint. Instead, Gotovos et al. (2013) propose an uncertainty-based criterion that learns about the feasible region of parameters. When we are not allowed to ever evaluate unsafe parameters, safe exploration is a necessary sub-routine of BO algorithms to learn about the safety of parameters. To safely explore, Schreiter et al. (2015) globally learn about the constraint by evaluating the most uncertain parameters. SAFEOPT by Sui et al. (2015) extends this to joint exploration and optimization and makes it more efficient by explicitly restricting safe exploration to the boundary of the safe set. Sui et al. (2018) proposes STAGEOPT, which additionally separates the exploration and optimization phases. Both of these algorithms assume access to a Lipschitz constant to define parameters close to the boundary of the safe set, which is a difficult tuning parameter in practice. These methods have been extended to multiple constraints by Berkenkamp et al. (2021), while Kirschner et al. (2019) scale them to higher dimensions with LINEBO, which explores in low-dimensional sub-spaces. To improve computational costs, Duivenvoorden et al. (2017) suggest a continuous approximation to SAFEOPT without providing exploration guarantees. All of these methods rely on function uncertainty to drive exploration, while we directly maximize the information gained about the safety of parameters.
Safe exploration also arises in the context of Markov decision processes (MDP), (Moldovan and Abbeel, 2012; Hans et al., 2008). In particular, Turchetta et al. (2016, 2019) traverse the MDP to learn about the safety of parameters using methods that, at their core, explore using the same ideas as SAFEOPT and STAGEOPT to select parameters to evaluate. Consequently, our proposed method for safe exploration is also directly applicable to their setting.
2 Problem Statement
In this section, we introduce the problem and notation that we use throughout the paper. We are given an unknown and expensive to evaluate safety constraint f : X → R s.t. parameters that satisfy f(x) ≥ 0 are classified as safe, while others are unsafe. To start exploring safely, we also have access to an initial safe parameter x0 that satisfies the safety constraint, f(x0) ≥ 0. We sequentially select safe parameters xn ∈ X where to evaluate f in order to learn about the safety of parameters beyond x0. At each iteration n, we obtain a noisy observation of yn := f(xn) + νn that is corrupted by additive homoscedastic Gaussian noise νn ∼ N ( 0, σ2ν ) . We illustrate the task in Figure 1a, where starting from x0 we aim to safely explore the domain so that we ultimately classify as safe all the safe parameters that are reachable from x0.
As f is unknown and the evaluations yn are noisy, it is not feasible to select parameters that are safe with certainty and we provide high-probability safety guarantees instead. To this end, we assume that the safety constraint f has bounded norm in the Reproducing Kernel Hilbert Space (RKHS) (Schölkopf and Smola, 2002)Hk associated to some kernel k : X ×X → R with k(x,x′) ≤ 1. This assumption allows us to to model f as a Gaussian process (GP) (Srinivas et al., 2010).
Gaussian Processes A GP is a stochastic process specified by a mean function µ : X → R and a kernel k (Rasmussen and Williams, 2006). It defines a probability distribution over real-valued functions on X , such that any finite collection of function values at parameters [x1, . . . ,xn] is
distributed as a multivariate normal distribution. The GP prior can then be conditioned on (noisy) function evaluations Dn = {(xi, yi)}ni=1. If the noise is Gaussian, then the resulting posterior is also a GP and with posterior mean and variance
µn(x) = µ(x) + k(x) >(K + Iσ2ν) −1(y − µ), σ2n(x) = k(x,x)− k(x)>(K + Iσ2ν)−1k(x),
(1)
where µ := [µ(x1), . . . µ(xn)] is the mean vector at parameters xi ∈ Dn and [y]i := y(xi) the corresponding vector of observations. We have [ k(x) ] i
:= k(x,xi), the kernel matrix has entries [K]ij := k(xi,xj), and I is the identity matrix. In the following, we assume without loss of generality that the prior mean is identically zero: µ(x) ≡ 0. Safe set Using the previous assumptions, we can construct high-probability confidence intervals on the function values f(x). Concretely, for any δ > 0 it is possible to find a sequence of positive numbers {βn} such that f(x) ∈ [ µn(x)± βnσn(x) ] with probability at least 1− δ, jointly for all x ∈ X and n ≥ 1. For a proof and more details see (Chowdhury and Gopalan, 2017). We use these confidence intervals to define a safe set
Sn := {x ∈ X : µn(x)− βnσn(x) ≥ 0} ∪ {x0}, (2) which contains all parameters whose βn-lower confidence bound is above the safety threshold and the initial safe parameter x0. Consequently, we know that all parameters in Sn are safe, f(x) ≥ 0 for all x ∈ Sn, with probability at least 1− δ jointly over all iterations n. Safe exploration Given the safe set Sn, the next question is which parameters in Sn to evaluate in order to efficiently expand it. Most existing safe exploration methods rely on uncertainty sampling over subsets of Sn. SAFEOPT-like approaches, for example, use the Lipschitz assumption on f to identify parameters in Sn that could expand the safe set and then select the parameter that has the biggest uncertainty among those. In the next sections, we present and analyze our safe exploration strategy, ISE, that instead uses an information gain measure to identify the parameters that allow us to efficiently learn about the safety of parameters outside of Sn.
3 Information-Theoretic Safe Exploration
We present Information-Theoretic Safe Exploration (ISE), which guides the safe exploration by using an information-theoretic criterion. Our goal is to design an exploration strategy that directly exploits the properties of GPs to learn about the safety of parameters outside of Sn. We draw inspiration from Hennig and Schuler (2012); Wang and Jegelka (2017) who exploit information-theoretic insights to design data-efficient BO acquisition functions for their respective optimization objectives.
Information gain measure In our case, we want to evaluate f at safe parameters that are maximally informative about the safety of other parameters, in particular of those where we are uncertain
Algorithm 1 Information-Theoretic Safe Exploration 1: Input: GP prior (µ0, k, σν), Safe seed x0 2: for n = 0, . . . , N do 3: xn+1← arg maxx∈Sn maxz∈X În ( {x, y}; Ψ(z)
) 4: yn+1← f(xn+1) + ν 5: Update GP posterior with (xn+1, yn+1)
about whether they are safe or not. To this end, we need a corresponding measure of information gain. We define such a measure using the binary variable Ψ(x) = If(x)≥0, which is equal to one iff f(x) ≥ 0. Its entropy is given by
Hn [ Ψ(z) ] = −p−n (z) ln ( p−n (z) ) − ( 1− p−n (z) ) ln ( 1− p−n (z) ) (3)
where p−n (z) is the probability of z being unsafe: p − n (z) = 1 2 + 1 2 erf
( − 1√
2 µn(z) σn(z)
) . The random
variable Ψ(z) has high-entropy when we are uncertain whether a parameter is safe or not; that is, its entropy decreases monotonically as |µn(z)| increases and the GP posterior moves away from the safety threshold. It also decreases monotonically as σn(z) decreases and we become more certain about the constraint. This behavior also implies that the entropy goes to zero as the confidence about the safety of z increases, as desired.
Given our definition of Ψ, we consider the mutual information I ( {x, y}; Ψ(z) ) between an observation y at a parameter x and the value of Ψ at another parameter z. Since Ψ is the indicator function of the safe regions of the parameter space, the quantity In ( {x, y}; Ψ(z) ) measures how much information about the safety of z we gain by evaluating the safety constraint f at x at iteration n, averaged over all possible observed values y. This interpretation follows directly from the definition of mutual information: In ( {x, y}; Ψ(z) ) = Hn [ Ψ(z) ] − Ey [ Hn+1 [ Ψ(z)
∣∣{x, y}]], where Hn[Ψ(z)] is the entropy of Ψ(z) according to the GP posterior at iteration n, while Hn+1 [ Ψ(z)
∣∣{x, y}] is its entropy at iteration n+ 1, conditioned on a measurement y at x at iteration n. Intuitively, In ( {x, y}; Ψ(z)
) is negligible whenever the confidence about the safety of z is high or, more generally, whenever an evaluation at x does not have the potential to substantially change our belief about the safety of z. The mutual information is large whenever an evaluation at x on average causes the confidence about the safety of z to increase significantly. As an example, in Figure 1 we plot In ( {x, y}; Ψ(z) ) as a function of x ∈ Sn for a specific choice of z and for an RBF kernel. As one would expect, we see that the closer it gets to z, the bigger the mutual information becomes, and that it vanishes in the neighborhood of previously evaluated parameters, where the posterior variance is negligible.
To compute In ( {x, y}; Ψ(z) ) , we need to average (3) conditioned on an evaluation y over all possible values of y. However, the resulting integral is intractable given the expression of Hn[Ψ(z)] in (3). In order to get a tractable result, we derive a close approximation of (3),
Hn [ Ψ(z) ] ≈ Ĥn [ Ψ(z) ] . = ln(2) exp { − 1 π ln(2) ( µn(z)
σn(z)
)2} . (4)
The approximation in (4) is obtained by truncating the Taylor expansion of Hn[Ψ(z)] at the second order, and it recovers almost exactly its true behavior (see Appendix B for details). Since the posterior mean at z after an evaluation at x depends linearly on µn(x), and since the probability density of y depends exponentially on−µ2n(x), using (4) reduces the conditional entropy Ey [ Ĥn+1 [ Ψ(z)
∣∣{x, y}]] to a Gaussian integral with the exact solution
Ey [ Ĥn+1 [ Ψ(z) ∣∣{x, y}]] = ln(2) √ σ2ν + σ 2 n(x)(1− ρ2n(x, z))
σ2ν + σ 2 n(x)(1 + c2ρ 2 n(x, z))
exp { −c1 µ2n(z)
σ2n(z)
σ2ν + σ 2 n(x)
σ2ν + σ 2 n(x)(1 + c2ρ 2 n(x, z))
} , (5)
where ρn(x, z) is the linear correlation coefficient between f(x) and f(z), and with c1 and c2 given by c1 := 1/ ln(2)π and c2 := 2c1−1. This result allows us to analytically calculate the approximated
mutual information În ( {x, y}; Ψ(z) ) . = Ĥn [ Ψ(z) ] − Ey [ Ĥn+1 [ Ψ(z) ∣∣{x, y}]], which we use to define the ISE acquisition function, and which we analyze theoretically in Section 4.
ISE acquisition function Now that we have defined a way to measure and compute the information gain about the safety of parameters, we can use it to design an exploration strategy that selects the next parameters to evaluate. The natural choice for such selection criterion is to select the parameter that maximizes the information gain; that is, we select xn+1 according to
xn+1 ∈ arg max x∈Sn max z∈X
În ( {x, y}; Ψ(z) ) , (6)
where we jointly optimize over x in the safe set Sn and an unconstrained second parameter z. Evaluating f at xn+1 according to (6) maximizes the information gained about the safety of some parameter z ∈ X , so that it allows us to efficiently learn about parameters that are not yet known to be safe. While z can lie in the whole domain, the parameters where we are the most uncertain about the safety constraint lie outside the safe set. By leaving z unconstrained, we show in our theoretical analysis in Section 4 that, once we have learned about the safety of parameters outside the safe set, (6) resorts to learning about the constraint function also inside Sn. An overview of ISE can be found in Algorithm 1 and we show an example run of a one-dimensional illustration of the algorithm in Figure 2.
4 Theoretical Results
In this section, we study the expression for În ( {x, y}; Ψ(z) ) obtained using (4) and (5) and analyze the properties of the ISE exploration criterion (6). By construction of Sn in (2) and the assumptions on f in Section 2, we know that any parameter selected according to (6) is safe with high probability, see Appendix A for details. To show that we also learn about the safe set, we first need to define what it means to successfully explore starting from x0. The main challenge is that it is difficult to analyze how a GP generalizes based on noisy observations, so that it is difficult to define a notion of convergence that is not dependent on the specific run. SAFEOPT avoids this issue by expanding the safe set not based on the GP, but only using the Lipschitz constant L. Contrary to their approach, we depend on the GP to generalize from the safe set. In this case, the natural notion of convergence is provided by the the posterior variance. In particular, we say that at iteration n we have explored the safe set up to ε-accuracy if σ2n(x) ≤ ε for all parameters x in Sn. In the following, we show that ISE asymptotically leads either to ε-accurate exploration of the safe set or to indefinite expansion of the safe set. In future work it will be interesting to further investigate the notion of generalization and to derive a similar convergence result as those obtained by Sui et al. (2015).
Theorem 1. Assume that xn+1 is chosen according to (6), and that there exists n̂ such that Sn+1 ⊆ Sn for all n ≥ n̂. Moreover, assume that for all n ≥ n̂, |µn(x)| ≤ M for some M > 0 and all x ∈ Sn. Then, for all ε > 0 there exists Nε such that σn²(x) ≤ ε for every x ∈ Sn if n ≥ n̂ + Nε.
The smallest of such Nε is given by
N_\varepsilon = \min\left\{ N \in \mathbb{N} : b^{-1}\!\left( \frac{C \gamma_N}{N} \right) \le \varepsilon \right\}, \qquad (7)

where b(\eta) := \ln(2)\, \exp\left\{ -\frac{c_1 M^2}{\eta} \right\} \left[ 1 - \sqrt{\frac{\sigma_\nu^2}{2 c_1 \eta + \sigma_\nu^2}} \right], \gamma_N = \max_{D \subset X,\, |D| = N} I(f(D); y(D)) is the maximum information capacity of the chosen kernel (Srinivas et al., 2010; Contal et al., 2014), and C = \ln(2) / \big( \sigma_\nu^2 \ln(1 + \sigma_\nu^{-2}) \big).
Proof. See Appendix A.
Theorem 1 tells us that if at some point the set of safe parameters Sn stops expanding, then the posterior variance over the safe set eventually vanishes. The intuition behind Theorem 1 is that if there were a parameter x in the safe set whose posterior mean remained finite and whose posterior variance remained bounded from below, then an evaluation of f at such x would yield a non-negligible average information gain about the safety of x, so that, since x is in the safe set, at some point ISE will be forced to evaluate x, reducing its posterior variance. This result guarantees that, should the safe set stop expanding, ISE will asymptotically explore the safe set up to an arbitrary ε-accuracy. In practice, we observe that ISE first focuses on reducing the uncertainty in areas of the safe set that are most informative about parameters whose classification is still uncertain (e.g. the boundary of the safe set), and only eventually turns to learning about the inside of the safe set. This behavior is what ultimately leads the posterior variance to decay over the whole of Sn. Therefore, even if it is not always possible to say whether or not the safe set will ever stop expanding, we can read Theorem 1 as an exploration guarantee for ISE, as it rules out the possibility that the proposed acquisition function forever leaves the uncertainty high in areas of the safe set that, if better understood, could lead to an expansion of the safe set.
Theorem 1 requires a bound on the GP posterior mean function, which is always satisfied with high probability based on our assumptions about f. Specifically, we have that |µn(x)| ≤ 2βn with probability at least 1 − δ for all n (see Appendix A for details). Therefore, it does not represent an additional restrictive assumption on f. Finally, we also note that the constant Nε defined by (7) always exists, since the function b is monotonically increasing, as long as γN grows sublinearly in N. Srinivas et al. (2010) prove that this is the case for commonly used kernels and, more generally, it is a prerequisite for data-efficient learning with GP models.
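As a worked example, since b is monotonically increasing, the condition b⁻¹(CγN/N) ≤ ε in (7) is equivalent to CγN/N ≤ b(ε), so Nε can be computed without numerically inverting b. In the sketch below the capacity γN (RBF-like growth, up to constants) and the values of M, σν², and ε are illustrative assumptions.

```python
import numpy as np

def b(eta, M, noise_var):
    """The monotone function from Theorem 1 (our transcription)."""
    c1 = 1.0 / (np.pi * np.log(2.0))
    return (np.log(2.0) * np.exp(-c1 * M**2 / eta)
            * (1.0 - np.sqrt(noise_var / (2.0 * c1 * eta + noise_var))))

def n_eps(eps, M, noise_var, gamma, n_max=10**6):
    """Smallest N in (7); uses b^{-1}(t) <= eps  <=>  t <= b(eps)."""
    C = np.log(2.0) / (noise_var * np.log(1.0 + 1.0 / noise_var))
    target = b(eps, M, noise_var)
    for N in range(1, n_max + 1):
        if C * gamma(N) / N <= target:
            return N
    return None  # bound not reached within n_max

# RBF-like information capacity in d = 2, up to constants (assumed for illustration)
print(n_eps(eps=0.5, M=0.5, noise_var=0.01, gamma=lambda N: np.log(N + 1.0) ** 3))
```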
5 Discussion and Limitations
ISE drives exploration of the parameter space by selecting the parameters to evaluate according to (6). An alternative but conceptually similar approach to this criterion would be to consider the parameter that yields the biggest information gain on average over the domain, i.e., substituting the inner max in (6) with an average over X . The resulting integral, however, is intractable and would require further approximations. Moreover, the parameter found by solving (6) will also yield a high average information gain over the domain, due to the regularity of all involved objects.
Being able to work in a continuous domain, ISE can deal with higher-dimensional domains better than algorithms requiring a discrete parameter space. However, as noted in Section 4, finding xn+1 as in (6) means solving a non-convex optimization problem with twice the dimension of the parameter space, which can become computationally challenging as the dimension grows. In a high-dimensional setting, we follow LINEBO by Kirschner et al. (2019), which at each iteration selects a random one-dimensional subspace to which it restricts the optimization of the acquisition function.
In Sections 2 and 3, we assumed the observation process to be homoskedastic. However, this need not be the case, and the results can be extended to heteroskedastic Gaussian noise. The observation noise at a parameter x explicitly appears in the ISE acquisition function, since it crucially affects the amount of information that we can gain by evaluating the constraint f at x. On the contrary, STAGEOPT-like methods do not consider the observation noise in their acquisition functions. As a consequence, ISE can perform significantly better in a heteroskedastic setting, as we also show in Section 6.
Lastly, we reiterate that the theoretical safety guarantees offered by ISE are derived under the assumption that f is a bounded-norm element of the RKHS associated with the GP’s kernel. In applications, therefore, the choice of the kernel function becomes even more crucial when safety is an issue. For details on how to construct and choose kernels see (Garnett, 2022). The safety guarantees also depend on the choice of βn. Typical expressions for βn include the RKHS norm of the constraint f (Chowdhury and Gopalan, 2017; Fiedler et al., 2021), which is in general difficult to estimate in practice. Because of this, a constant value of βn is usually used in practice instead.
6 Experiments
In this section we empirically evaluate ISE. Additional details about the experiments and setup can be found in Appendix C. As commonly done in the literature (see Section 5), we set βn = 2 for all experiments. This choice guarantees safety per iteration, rather than jointly for all n, and allows for a less conservative bound than the one needed for the joint guarantees.
GP samples For the first part of the experiments, we evaluate ISE on constraint functions f that we obtain by sampling a GP prior at a finite number of points. This allows us to test ISE under the assumptions of the theory, and we compare its performance to that of the exploration part of STAGEOPT (Sui et al., 2018). STAGEOPT is a modified version of SAFEOPT in which the exploration and optimization parts are performed separately: first the SAFEOPT exploration strategy is used to expand the safe set as much as possible, then the objective function is optimized within the discovered safe set. We further modify the version of STAGEOPT that we use in the experiment by defining the safe set in the same way ISE does, i.e., by means of the GP posterior, as done, for example, by Berkenkamp et al. (2016). We select 100 samples from a two-dimensional GP with RBF kernel, defined on [−2.5, 2.5] × [−2.5, 2.5], and run ISE and STAGEOPT for 100 iterations for each sample. As STAGEOPT requires a discretization of the domain, we use this discretization to compare the sample efficiency of the two methods by computing, at each iteration, what percentage of the discretized domain is classified as safe. Moreover, we also compare with the heuristic acquisition inspired by SAFEOPT proposed by Berkenkamp et al. (2016). This method works exactly as STAGEOPT, with the difference that the set of expanders is computed directly using the GP posterior rather than the Lipschitz constant. More precisely, a parameter x is considered an expander if observing a value of µn(x) + βnσn(x) at x would enlarge the safe set. For the STAGEOPT runs, we use the kernel metric to compute the set of potential expanders, for different values of the Lipschitz constant L. From the results shown in Figure 3a, we see not only that ISE performs as well as or better than all tested instances of STAGEOPT, but also how the choice of L affects the performance of the latter. This plot also makes evident how crucial the choice of the Lipschitz constant is for STAGEOPT and SAFEOPT-like algorithms in general. In Table 1 in Appendix C, we report the average percentage of safety violations per run achieved by ISE and STAGEOPT. As expected, we see that the percentage of safety violations is comparable among all algorithms.
To show that for STAGEOPT exploration not only overestimating the Lipschitz constant but also underestimating it can negatively impact performance, we consider the simple one-dimensional constraint function f(x) = e^{−x} + 0.05 and run the safe exploration for multiple values of the Lipschitz constant. This function moves increasingly far from the safety threshold as x → −∞, while it asymptotically approaches the threshold as x → ∞, so that a good exploration algorithm would, ideally, quickly classify as safe the domain region for x < 0 and then keep exploring the boundary of the safe set for x > 0. The results plotted in Figure 3b show how both a too high and a too low Lipschitz constant can lead to sub-optimal exploration. In the case of a too small constant, this is because STAGEOPT considers almost all parameters in the domain to be expanders, leading to additional evaluations in the region x < 0 that are unlikely to cause expansion of the safe set. On the other hand, a too high value of the Lipschitz constant can lead to the set of expanders being empty as soon as the posterior mean gets close to the safety threshold for x > 0. A toy illustration of this sensitivity is sketched below.
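The sketch counts Lipschitz-based expanders on a grid for several values of L, using the true function as a stand-in posterior mean with a constant posterior standard deviation; the expander rule is our simplified transcription of the STAGEOPT-style definition, and all numerical choices are illustrative.

```python
import numpy as np

def expanders(grid, mu, sd, beta, L):
    """x in S_n is an expander if its upper bound u(x) - L * |x - z| could
    certify the safety of some z not currently classified as safe."""
    u, l = mu + beta * sd, mu - beta * sd
    safe = l >= 0.0
    out = []
    for i in np.flatnonzero(safe):
        if np.any(u[i] - L * np.abs(grid - grid[i])[~safe] >= 0.0):
            out.append(grid[i])
    return np.array(out)

grid = np.linspace(-2.0, 6.0, 400)
mu = np.exp(-grid) + 0.05              # stand-in posterior mean
sd = 0.1 * np.ones_like(grid)          # stand-in posterior std
for L in (0.1, 1.0, 10.0):
    # a tiny L marks nearly every safe point an expander; a large L leaves very few
    print(L, len(expanders(grid, mu, sd, beta=2.0, L=L)))
```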
OpenAI Gym Control After investigating the performance of ISE under the hypothesis of the theory, we apply it to two classic control tasks from the OpenAI Gym framework (Brockman et al., 2016), with the goal of finding the set of parameters of a controller that satisfy some safety constraint. In particular we consider linear controllers for the inverted pendulum and cart pole tasks.
For the inverted pendulum task, the linear controller is given by ut = α1θt + α2θ̇t, where ut is the control signal at time t, while θt and θ̇t are, respectively, the angular position and the angular velocity
of the pendulum. Starting from a position close to the upright equilibrium, the controller’s task is the stabilization of the pendulum, subject to a safety constraint on the maximum velocity reached within one episode. For some given initial controller configuration α^0 := (α^0_1, α^0_2), we want to explore the controller’s parameter space while avoiding configurations that lead the pendulum to swing at too high a velocity. We apply ISE to explore the α-space with x0 = α^0 and the safety constraint being the maximum angular velocity reached by the pendulum in an episode of fixed length. In this case, the safety threshold is not at zero, but rather at some finite value θ̇M, and the safe parameters are those for which the maximum velocity is below θ̇M. The formalism developed in the previous sections can be easily applied to this scenario if we consider f(α) = −(max_t θ̇_t(α) − θ̇_M). In Figure 4a we show the true safe set for this problem, while in Figures 4b–4d one can see how ISE safely explores the true safe set. These plots show how the ISE acquisition function (6) selects parameters that are close to the current safe set boundary and, hence, most informative about the safety of parameters outside of the safe set. This behavior eventually leads to the full true safe set being classified as safe by the GP posterior, as Figure 4d shows.
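A sketch of how the pendulum constraint f(α) = −(max_t θ̇_t(α) − θ̇_M) might be implemented on top of the gym API: the threshold value, the episode length, and the use of |θ̇_t| are assumptions of ours, the paper's near-upright initial state would require a customized reset, and the reset/step signatures follow recent gym versions (older versions return different tuples).

```python
import numpy as np
import gym

THETA_DOT_MAX = 4.0  # assumed safety threshold on the angular velocity

def constraint(alpha, episode_len=200, seed=0):
    """f(alpha): positive iff u_t = a1*theta_t + a2*thetadot_t keeps the
    maximum angular velocity below THETA_DOT_MAX over one episode."""
    env = gym.make("Pendulum-v1")
    obs, _ = env.reset(seed=seed)          # the paper fixes a near-upright start
    worst = 0.0
    for _ in range(episode_len):
        cos_th, sin_th, theta_dot = obs    # Pendulum-v1 observation layout
        theta = np.arctan2(sin_th, cos_th)
        u = np.clip(alpha[0] * theta + alpha[1] * theta_dot, -2.0, 2.0)
        obs, _, terminated, truncated, _ = env.step(np.array([u], dtype=np.float32))
        worst = max(worst, abs(theta_dot))
        if terminated or truncated:
            break
    env.close()
    return -(worst - THETA_DOT_MAX)
```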
The cart pole task is similar to the inverted pendulum one, but the parameter space has three dimensions. The controller we consider is given by ut = α1θt + α2θ̇t + α3ṡt, where θt and θ̇t are, respectively, the angular position and angular velocity of the pole at time t, while ṡt is the cart’s velocity. We set the initial state to zero angular and linear velocity, with the pole close to the vertical position; the controller’s goal is to keep the pole stable in the upright position. A combination of the three parameters α1, α2 and α3 is considered safe if the angle of the pole does not exceed a given threshold within the episode. Again, we can easily cast this safety constraint in terms of the formalism developed in the paper: f(α) = −(max_t θ_t(α) − θ_M), where θ_M is the maximum allowed angle. Figure 5a shows the expansion of the cart pole α-space promoted by ISE, compared with STAGEOPT for different values of the Lipschitz constant. Both methods achieve comparable sample efficiency and both lead to the full true safe set being classified as safe.
High dimensional domains Many interesting applications have a high-dimensional parameter space. While SAFEOPT-like methods are difficult to apply already in dimension > 3 due to the discretization of the domain, ISE also performs well in four or five dimensions. To see this, we apply ISE to the constraint function f(x) = e^{-x^2} + 2e^{-(x - x_1)^2} + 5e^{-(x - x_2)^2} - 0.2. Figure 5b shows the ISE performance in dimension 5. We see that ISE is able to promote the expansion of the safe set, leading to an increasingly large portion of the true safe set being classified as safe.
Heteroskedastic noise domains For even higher dimensions, we can follow a similar approach to LINEBO, limiting the optimization of the acquisition function to a randomly selected one-dimensional subspace of the domain. Moreover, as discussed in Section 5, it is also interesting to test ISE in the case of heteroskedastic observation noise, since the noise is a critical quantity for the ISE acquisition function, while it does not affect the selection criterion of STAGEOPT-like methods. Therefore, in
this experiment we combine a high-dimensional problem with heteroskedastic noise. In particular, we apply a LINEBO version of ISE to the constraint function f(x) = \frac{1}{2}e^{-x^2} + e^{-(x \pm x_1)^2} + 3e^{-(x \pm x_2)^2} + 0.2 in dimensions nine and ten, with the safe seed being the origin. This function has two symmetric global optima at ±x2, and we set two different noise levels in the two symmetric domain halves containing the optima. To assess the exploration performance, we use the simple regret, defined as the difference between the current safe optimum and the true safe optimum. As the results in Figure 6 show, ISE achieves a greater sample efficiency than the other STAGEOPT-like methods. Namely, for a given number of iterations, by explicitly exploiting knowledge about the observation noise, ISE is able to classify as safe regions of the domain further away from the origin, in which the constraint function assumes its largest values, resulting in a smaller regret. On the other hand, SAFEOPT-like methods only focus on the posterior variance, so that the higher observation noise causes them to remain stuck in a smaller neighborhood of the origin, resulting in bigger regret.
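A sketch of one plausible reading of this setup; the offsets x1 and x2, the interpretation of the ± terms as mirrored bumps, and the two noise levels are all illustrative assumptions of ours. The only change needed in the ISE acquisition is to replace the constant σν² in (5) with the location-dependent σν²(x).

```python
import numpy as np

X1, X2 = 1.5, 3.0    # bump offsets (not specified in the text; assumed)

def f(x):
    """Symmetric constraint with two global optima near +/- x2 (one reading of the text)."""
    pair = lambda c: (np.exp(-np.sum((x - c) ** 2, axis=-1))
                      + np.exp(-np.sum((x + c) ** 2, axis=-1)))
    return 0.5 * np.exp(-np.sum(x**2, axis=-1)) + pair(X1) + 3.0 * pair(X2) + 0.2

def noise_var(x):
    """Heteroskedastic noise: two levels in the two half-spaces (assumed values)."""
    return np.where(x[..., 0] >= 0.0, 0.3, 0.05) ** 2
```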
7 Conclusion and Societal Impact
We have introduced Information-Theoretic Safe Exploration (ISE), a novel approach to safely explore a space in a sequential decision task where the safety constraint is a priori unknown. ISE efficiently and safely explores by evaluating only parameters that are safe with high probability and by choosing those parameters that yield the greatest information gain about the safety of other parameters. We theoretically analyzed ISE and showed that it leads to arbitrary reduction of the uncertainty in the largest reachable safe set containing the starting parameter. Our experiments support these theoretical results and demonstrate an increased sample efficiency and scalability of ISE compared to SAFEOPT-based approaches.
In many safety-sensitive applications the shape of the safety constraints is unknown, so that an important prerequisite for any kind of process is to identify which parameters are safe to evaluate. By providing a principled way to do this, the contributions of this paper make it possible to deal with safety in a broad range of applications, which can favor the usage of ML approaches in safety-sensitive settings as well. On the other hand, misuse of the proposed method cannot be prevented in general. | 1. What is the main contribution of the paper regarding safe exploration algorithms for GP safe optimization?
2. What are the strengths and weaknesses of the proposed approach compared to other works in the field?
3. How does the paper evaluate the performance of the proposed method, and what are the results?
4. What are some limitations of the paper regarding scalability and negative societal impacts?
5. Are there any questions or concerns regarding the experiment section, such as the choice of Lipschitz values or the lack of analysis of safety specifications? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
The paper focuses on an important stage of GP safe exploration algorithms: safely expanding the size of the set of parameters believed to be safe, in a data-efficient manner. To do this, the authors propose an information-theoretic approach inspired by other non-safety-constrained Bayesian Optimization (BO) literature. Compared to standard SafeOpt (Sui et al. "Safe exploration for optimization with Gaussian processes." PMLR15), the method is shown to be more efficient at expanding the safe set.
Strengths And Weaknesses
Strengths:
The originality and contribution of the paper are good:
It adapts mutual-information-based methods to the safe set expansion setting, which no other work (that I was able to find) does. Usually information theoretic measures for BO simplify to just sampling at the parameters with maximum variance (Schillinger et al., 2017) or highest UCB, but in this case it is more interesting and the authors are able to come up with a function to optimise to find the maximum information gain about expanding the safe set.
Their acquisition function simplification to make optimization tractable seems justified and well explained in the appendix. This is not the only work to avoid the inclusion of a Lipschitz constant hyperparameter (Berkenkamp et al. "Safe controller optimization for quadrotors with Gaussian processes.", ICRA16) (they use hypothesised GP observations for safe set expansion) or to avoid SafeOpt-style discretisation (Schillinger et al., 2017) (this is another method that works in continuous parameter space). However, these two problems are solved here in an elegant way. In particular, it avoids the basic heuristic/arbitrariness of the expander state definition in SafeOpt.
I believe the authors have covered all the relevant related work – although I am not sure the paper experimentally compares to enough related work (see below).
I believe the clarity of the paper is generally good until the experiments section.
Weaknesses:
The acquisition function simplification should be explained in more detail in the main body of the paper. Furthermore, it would be good to have further discussion of the approximation in the rest of the paper, e.g. Theorem 1.
The evaluation with GP samples, shown in Figure 3, is confusing. The only alternative method comparison is standard StageOpt with differing Lipschitz (L) parameter values. The justification for the choice of plotted L-values isn’t clear, and furthermore there is a clear trend of decreasing L leading to better performance. From this graph, it would suggest that reducing L further below 1 might result in it outperforming ISE. For those with background knowledge, it is clear why it’s not possible to keep reducing L: eventually, reducing L enough will result in StageOpt classifying unsafe states as safe and then failing the safety specification. However, this section contains no discussion of whether/how any algorithms fail the safety specification, and only shows the first few percent of safe set exploration. Evaluation should include plots of how many times the safety specification is broken, as that is a crucial part of the problem.
I think the evaluation might have benefited from comparing to SafeOpt variants that do not require the Lipschitz constant, such as (Berkenkamp et al. "Safe controller optimization for quadrotors with Gaussian processes.", ICRA16). That would better differentiate which performance improvements come from more intelligent expansion behaviour versus tuning of the Lipschitz constant. Figure 6 does not have enough baseline comparisons. Only one StageOpt plot is shown in 6 (a) and it is not clear which Lipschitz value was used for this. There is no StageOpt plot in 6 (b) and it is unclear why. It's also unclear why the plot only shows the first 5% of exploration.
There should be some experiments on scalability, e.g. it would be nice to see a scalability plot over the number of dimensions.
Other points:
The last paragraph of related work should also mention Turchetta, Matteo, Felix Berkenkamp, and Andreas Krause. "Safe exploration in finite markov decision processes with gaussian processes." Advances in Neural Information Processing Systems 29 (2016).
“This process is made more efficient by SafeOpt, which restricts exploration to parameters that are close to the boundary of the current set of safe parameters (Sui et al., 2015)”: I think it is more accurate to say that StageOpt does this restriction (in its expansion phase). SafeOpt can also choose to sample potential maximisers in M_t, not just from the expander set G_t.
Clarity-wise, I don’t fully understand how Theorem 1 justifies the sentence “in practice it will first focus on reducing the uncertainty in areas of the safe set that are most informative about parameters whose classification is still uncertain (e.g. areas close to the boundary of the safe set), and only eventually turns to learning about the inside of the safe set”. If it doesn’t justify that sentence, the theorem doesn’t seem to actually prove anything about safe set expansion.
Questions
why didn't the experimental section analyse the safety spec?
In figure 2 (c), the left-most green evaluation cross is clearly under the orange safety bound line, which suggests that the agent has broken the safety specification in this run. Is this a formatting error or representative of an actual failed run?
Please clarify the point about Theorem 1 above.
Limitations
Limitations on scalability are mentioned in the text but not analysed empirically. Negative societal impacts are addressed with a boilerplate sentence at the end of the paper, which is a bit unsatisfactory; it would probably be better to say nothing at all. |
NIPS | Title
Information-Theoretic Safe Exploration with Gaussian Processes
Abstract
We consider a sequential decision making task where we are not allowed to evaluate parameters that violate an a priori unknown (safety) constraint. A common approach is to place a Gaussian process prior on the unknown constraint and allow evaluations only in regions that are safe with high probability. Most current methods rely on a discretization of the domain and cannot be directly extended to the continuous case. Moreover, the way in which they exploit regularity assumptions about the constraint introduces an additional critical hyperparameter. In this paper, we propose an information-theoretic safe exploration criterion that directly exploits the GP posterior to identify the most informative safe parameters to evaluate. Our approach is naturally applicable to continuous domains and does not require additional hyperparameters. We theoretically analyze the method and show that we do not violate the safety constraint with high probability and that we explore by learning about the constraint up to arbitrary precision. Empirical evaluations demonstrate improved data-efficiency and scalability.
1 Introduction
In sequential decision making problems, we iteratively select parameters in order to optimize a given performance criterion. However, real-world applications such as robotics (Berkenkamp et al., 2021), mechanical systems (Schillinger et al., 2017) or medicine (Sui et al., 2015) are often subject to additional safety constraints that we cannot violate during the exploration process (Dulac-Arnold et al., 2019). Since it is a priori unknown which parameters lead to constraint violations, we need to actively and carefully learn about the constraints without violating them. That is, we need to learn about the safety of parameters by only evaluating parameters that are currently known to be safe.
Existing methods by Schreiter et al. (2015); Sui et al. (2015) tackle this problem by placing a Gaussian process (GP) prior over the constraint and evaluating only parameters that do not violate the constraint with high probability. To learn about the safety of parameters, they evaluate the parameter with the largest posterior variance. This process is made more efficient by SAFEOPT, which restricts its safe set expansion exploration component to parameters that are close to the boundary of the current set of safe parameters (Sui et al., 2015) at the cost of an additional tuning hyperparameter (Lipschitz constant). However, uncertainty about the constraint is only a proxy objective that only indirectly learns about the safety of parameters. Consequently, data-efficiency could be improved with an exploration criterion that directly maximizes the information gained about the safety of parameters.
Our contribution In this paper, we propose Information-Theoretic Safe Exploration (ISE), a safe exploration algorithm that directly exploits the information gain about the safety of parameters in order to expand the region of the parameter space that we can classify as safe with high confidence. By directly optimizing for safe information gain, ISE is more data-efficient than existing approaches without manually restricting evaluated parameters to be on the boundary of the safe set, particularly
in scenarios where the posterior variance alone is not enough to identify good evaluation candidates, as in the case of heteroskedastic observation noise. This exploration criterion also means that we do not require additional hyperparameters beyond the GP posterior and that ISE is directly applicable to continuous domains. We theoretically analyze our method and prove that it learns about the safety of reachable parameters to arbitrary precision.
Related work Information-based selection criteria with Gaussian process models are successfully used in the context of unconstrained Bayesian optimization (BO, Shahriari et al. (2016); Bubeck and Cesa-Bianchi (2012)), where the goal is to find the parameters that maximize an a priori unknown function. Hennig and Schuler (2012); Hernández-Lobato et al. (2014); Wang and Jegelka (2017) select parameters that provide the most information about the optimal parameters, while Fröhlich et al. (2020) consider the information under noisy parameters. The success of these information-based approaches also rests on the superior data efficiency that they have demonstrated. We draw inspiration from these methods when defining an information-based criterion w.r.t. the safety of parameters to guide safe exploration.
In the presence of constraints that the final solution needs to satisfy, but which we can violate during exploration, Gelbart et al. (2014) propose to combine typical BO acquisition functions with the probability of satisfying the constraint. Instead, Gotovos et al. (2013) propose an uncertainty-based criterion that learns about the feasible region of parameters. When we are not allowed to ever evaluate unsafe parameters, safe exploration is a necessary sub-routine of BO algorithms to learn about the safety of parameters. To safely explore, Schreiter et al. (2015) globally learn about the constraint by evaluating the most uncertain parameters. SAFEOPT by Sui et al. (2015) extends this to joint exploration and optimization and makes it more efficient by explicitly restricting safe exploration to the boundary of the safe set. Sui et al. (2018) propose STAGEOPT, which additionally separates the exploration and optimization phases. Both of these algorithms assume access to a Lipschitz constant to define parameters close to the boundary of the safe set, which is a difficult tuning parameter in practice. These methods have been extended to multiple constraints by Berkenkamp et al. (2021), while Kirschner et al. (2019) scale them to higher dimensions with LINEBO, which explores in low-dimensional sub-spaces. To improve computational costs, Duivenvoorden et al. (2017) suggest a continuous approximation to SAFEOPT without providing exploration guarantees. All of these methods rely on function uncertainty to drive exploration, while we directly maximize the information gained about the safety of parameters.
Safe exploration also arises in the context of Markov decision processes (MDPs) (Moldovan and Abbeel, 2012; Hans et al., 2008). In particular, Turchetta et al. (2016, 2019) traverse the MDP to learn about the safety of parameters using methods that, at their core, explore using the same ideas as SAFEOPT and STAGEOPT to select parameters to evaluate. Consequently, our proposed method for safe exploration is also directly applicable to their setting.
2 Problem Statement
In this section, we introduce the problem and notation that we use throughout the paper. We are given an unknown and expensive-to-evaluate safety constraint f : X → R s.t. parameters that satisfy f(x) ≥ 0 are classified as safe, while others are unsafe. To start exploring safely, we also have access to an initial safe parameter x0 that satisfies the safety constraint, f(x0) ≥ 0. We sequentially select safe parameters xn ∈ X where to evaluate f in order to learn about the safety of parameters beyond x0. At each iteration n, we obtain a noisy observation yn := f(xn) + νn that is corrupted by additive homoskedastic Gaussian noise νn ∼ N(0, σν²). We illustrate the task in Figure 1a, where starting from x0 we aim to safely explore the domain so that we ultimately classify as safe all the safe parameters that are reachable from x0.
As f is unknown and the evaluations yn are noisy, it is not feasible to select parameters that are safe with certainty, and we provide high-probability safety guarantees instead. To this end, we assume that the safety constraint f has bounded norm in the Reproducing Kernel Hilbert Space (RKHS) (Schölkopf and Smola, 2002) Hk associated to some kernel k : X × X → R with k(x, x′) ≤ 1. This assumption allows us to model f as a Gaussian process (GP) (Srinivas et al., 2010).
Gaussian Processes A GP is a stochastic process specified by a mean function µ : X → R and a kernel k (Rasmussen and Williams, 2006). It defines a probability distribution over real-valued functions on X , such that any finite collection of function values at parameters [x1, . . . ,xn] is
distributed as a multivariate normal distribution. The GP prior can then be conditioned on (noisy) function evaluations Dn = {(xi, yi)}_{i=1}^n. If the noise is Gaussian, then the resulting posterior is also a GP, with posterior mean and variance

\mu_n(x) = \mu(x) + k(x)^\top (K + I\sigma_\nu^2)^{-1}(y - \mu), \qquad \sigma_n^2(x) = k(x, x) - k(x)^\top (K + I\sigma_\nu^2)^{-1} k(x), \qquad (1)

where µ := [µ(x1), . . . , µ(xn)] is the mean vector at parameters xi ∈ Dn and [y]i := y(xi) the corresponding vector of observations. We have [k(x)]i := k(x, xi), the kernel matrix has entries [K]ij := k(xi, xj), and I is the identity matrix. In the following, we assume without loss of generality that the prior mean is identically zero: µ(x) ≡ 0.

Safe set Using the previous assumptions, we can construct high-probability confidence intervals on the function values f(x). Concretely, for any δ > 0 it is possible to find a sequence of positive numbers {βn} such that f(x) ∈ [µn(x) ± βnσn(x)] with probability at least 1 − δ, jointly for all x ∈ X and n ≥ 1. For a proof and more details see (Chowdhury and Gopalan, 2017). We use these confidence intervals to define a safe set

S_n := \{x \in X : \mu_n(x) - \beta_n \sigma_n(x) \ge 0\} \cup \{x_0\}, \qquad (2)

which contains all parameters whose βn-lower confidence bound is above the safety threshold and the initial safe parameter x0. Consequently, we know that all parameters in Sn are safe, f(x) ≥ 0 for all x ∈ Sn, with probability at least 1 − δ jointly over all iterations n.

Safe exploration Given the safe set Sn, the next question is which parameters in Sn to evaluate in order to efficiently expand it. Most existing safe exploration methods rely on uncertainty sampling over subsets of Sn. SAFEOPT-like approaches, for example, use the Lipschitz assumption on f to identify parameters in Sn that could expand the safe set and then select the parameter that has the biggest uncertainty among those. In the next sections, we present and analyze our safe exploration strategy, ISE, which instead uses an information gain measure to identify the parameters that allow us to efficiently learn about the safety of parameters outside of Sn.
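A minimal NumPy sketch of the posterior (1) and the safe-set test (2) for an RBF kernel with zero prior mean; the lengthscale, noise level, and βn below are placeholder values.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """RBF kernel matrix with k(x, x) = 1, as assumed in Section 2."""
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / ls**2)

def posterior(X, y, Xq, noise_var=0.01):
    """Posterior mean and variance of Eq. (1) at query points Xq (zero prior mean)."""
    K = rbf(X, X) + noise_var * np.eye(len(X))
    kq = rbf(Xq, X)
    mu = kq @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum("ij,ji->i", kq, np.linalg.solve(K, kq.T))
    return mu, var

def in_safe_set(mu, var, beta=2.0):
    """Membership test of Eq. (2); the safe seed x0 is included by definition."""
    return mu - beta * np.sqrt(var) >= 0.0
```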
3 Information-Theoretic Safe Exploration
We present Information-Theoretic Safe Exploration (ISE), which guides the safe exploration by using an information-theoretic criterion. Our goal is to design an exploration strategy that directly exploits the properties of GPs to learn about the safety of parameters outside of Sn. We draw inspiration from Hennig and Schuler (2012); Wang and Jegelka (2017) who exploit information-theoretic insights to design data-efficient BO acquisition functions for their respective optimization objectives.
Algorithm 1 Information-Theoretic Safe Exploration
1: Input: GP prior (µ0, k, σν), safe seed x0
2: for n = 0, . . . , N do
3:   xn+1 ← arg max_{x ∈ Sn} max_{z ∈ X} În({x, y}; Ψ(z))
4:   yn+1 ← f(xn+1) + ν
5:   Update GP posterior with (xn+1, yn+1)

Information gain measure In our case, we want to evaluate f at safe parameters that are maximally informative about the safety of other parameters, in particular of those where we are uncertain
about whether they are safe or not. To this end, we need a corresponding measure of information gain. We define such a measure using the binary variable Ψ(x) = \mathbb{I}_{f(x) \ge 0}, which is equal to one iff f(x) ≥ 0. Its entropy is given by

H_n[\Psi(z)] = -p_n^-(z) \ln\!\big(p_n^-(z)\big) - \big(1 - p_n^-(z)\big) \ln\!\big(1 - p_n^-(z)\big), \qquad (3)

where p_n^-(z) is the probability of z being unsafe: p_n^-(z) = \frac{1}{2} + \frac{1}{2} \operatorname{erf}\!\left( -\frac{1}{\sqrt{2}} \frac{\mu_n(z)}{\sigma_n(z)} \right). The random variable Ψ(z) has high entropy when we are uncertain whether a parameter is safe or not; that is, its entropy decreases monotonically as |µn(z)| increases and the GP posterior moves away from the safety threshold. It also decreases monotonically as σn(z) decreases and we become more certain about the constraint. This behavior also implies that the entropy goes to zero as the confidence about the safety of z increases, as desired.
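The entropy (3) is cheap to evaluate directly; a possible transcription is sketched below (the clipping inside the log is our numerical safeguard at points whose classification is already certain).

```python
import numpy as np
from scipy.special import erf

def p_unsafe(mu, sigma):
    """p_n^-(z): posterior probability that f(z) < 0."""
    return 0.5 + 0.5 * erf(-mu / (np.sqrt(2.0) * sigma))

def entropy_psi(mu, sigma):
    """Exact Bernoulli entropy of Psi(z), Eq. (3)."""
    p = np.clip(p_unsafe(mu, sigma), 1e-12, 1.0 - 1e-12)
    return -p * np.log(p) - (1.0 - p) * np.log(1.0 - p)
```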
Given our definition of Ψ, we consider the mutual information I({x, y}; Ψ(z)) between an observation y at a parameter x and the value of Ψ at another parameter z. Since Ψ is the indicator function of the safe regions of the parameter space, the quantity In({x, y}; Ψ(z)) measures how much information about the safety of z we gain by evaluating the safety constraint f at x at iteration n, averaged over all possible observed values y. This interpretation follows directly from the definition of mutual information: In({x, y}; Ψ(z)) = Hn[Ψ(z)] − Ey[Hn+1[Ψ(z) | {x, y}]], where Hn[Ψ(z)] is the entropy of Ψ(z) according to the GP posterior at iteration n, while Hn+1[Ψ(z) | {x, y}] is its entropy at iteration n + 1, conditioned on a measurement y at x at iteration n. Intuitively, In({x, y}; Ψ(z)) is negligible whenever the confidence about the safety of z is high or, more generally, whenever an evaluation at x does not have the potential to substantially change our belief about the safety of z. The mutual information is large whenever an evaluation at x on average causes the confidence about the safety of z to increase significantly. As an example, in Figure 1 we plot In({x, y}; Ψ(z)) as a function of x ∈ Sn for a specific choice of z and for an RBF kernel. As one would expect, we see that the closer x gets to z, the bigger the mutual information becomes, and that it vanishes in the neighborhood of previously evaluated parameters, where the posterior variance is negligible.
To compute In({x, y}; Ψ(z)), we need to average (3) conditioned on an evaluation y over all possible values of y. However, the resulting integral is intractable given the expression of Hn[Ψ(z)] in (3). In order to get a tractable result, we derive a close approximation of (3),

H_n[\Psi(z)] \approx \hat{H}_n[\Psi(z)] := \ln(2)\, \exp\left\{ -\frac{1}{\pi \ln(2)} \left( \frac{\mu_n(z)}{\sigma_n(z)} \right)^{\!2} \right\}. \qquad (4)
The approximation in (4) is obtained by truncating the Taylor expansion of Hn[Ψ(z)] at the second order, and it recovers almost exactly its true behavior (see Appendix B for details). Since the posterior mean at z after an evaluation at x depends linearly on µn(x), and since the probability density of y depends exponentially on −µn(x)², using (4) reduces the conditional entropy Ey[Ĥn+1[Ψ(z) | {x, y}]] to a Gaussian integral with the exact solution

E_y\big[\hat{H}_{n+1}[\Psi(z) \mid \{x, y\}]\big] = \ln(2)\, \sqrt{\frac{\sigma_\nu^2 + \sigma_n^2(x)\,(1 - \rho_n^2(x, z))}{\sigma_\nu^2 + \sigma_n^2(x)\,(1 + c_2 \rho_n^2(x, z))}}\; \exp\left\{ -c_1\, \frac{\mu_n^2(z)}{\sigma_n^2(z)}\, \frac{\sigma_\nu^2 + \sigma_n^2(x)}{\sigma_\nu^2 + \sigma_n^2(x)\,(1 + c_2 \rho_n^2(x, z))} \right\}, \qquad (5)
where ρn(x, z) is the linear correlation coefficient between f(x) and f(z), and with c1 and c2 given by c1 := 1/(π ln 2) and c2 := 2c1 − 1. This result allows us to analytically calculate the approximated mutual information În({x, y}; Ψ(z)) := Ĥn[Ψ(z)] − Ey[Ĥn+1[Ψ(z) | {x, y}]], which we use to define the ISE acquisition function, and which we analyze theoretically in Section 4.
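As a quick numerical check of the claim that the truncated Taylor expansion (4) recovers (3) almost exactly, one can compare the two over a range of µ/σ ratios; this assumes the entropy_psi helper sketched after (3) above.

```python
import numpy as np

r = np.linspace(-4.0, 4.0, 401)                          # ratios mu_n(z) / sigma_n(z)
exact = entropy_psi(r, np.ones_like(r))                  # Eq. (3)
approx = np.log(2.0) * np.exp(-(r**2) / (np.pi * np.log(2.0)))  # Eq. (4)
print(np.max(np.abs(exact - approx)))                    # small over the whole range (order 1e-3)
```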
ISE acquisition function Now that we have defined a way to measure and compute the information gain about the safety of parameters, we can use it to design an exploration strategy that selects the next parameters to evaluate. The natural choice for such a selection criterion is to select the parameter that maximizes the information gain; that is, we select xn+1 according to
x_{n+1} \in \arg\max_{x \in S_n} \max_{z \in X} \hat{I}_n(\{x, y\}; \Psi(z)), \qquad (6)
where we jointly optimize over x in the safe set Sn and an unconstrained second parameter z. Evaluating f at xn+1 according to (6) maximizes the information gained about the safety of some parameter z ∈ X, so that it allows us to efficiently learn about parameters that are not yet known to be safe. While z can lie in the whole domain, the parameters where we are the most uncertain about the safety constraint lie outside the safe set. By leaving z unconstrained, we show in our theoretical analysis in Section 4 that, once we have learned about the safety of parameters outside the safe set, (6) turns to learning about the constraint function inside Sn as well. An overview of ISE can be found in Algorithm 1, and we show an example run of a one-dimensional illustration of the algorithm in Figure 2.
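For concreteness, the following self-contained one-dimensional toy run implements Algorithm 1 on a grid, with (4)–(6) written out inline; the constraint, kernel, grid, and constants are all illustrative choices of ours, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
noise_var, beta = 0.01, 2.0
c1 = 1.0 / (np.pi * np.log(2.0)); c2 = 2.0 * c1 - 1.0
kern = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)  # 1-D RBF
f = lambda x: 1.0 - 0.4 * x**2                                      # toy constraint

X = np.array([0.0])                                                 # safe seed x0
y = f(X) + rng.normal(0.0, np.sqrt(noise_var), 1)
grid = np.linspace(-3.0, 3.0, 121)
for n in range(25):
    Kinv = np.linalg.inv(kern(X, X) + noise_var * np.eye(len(X)))
    kg = kern(grid, X)
    mu = kg @ Kinv @ y
    cov = kern(grid, grid) - kg @ Kinv @ kg.T
    sd = np.sqrt(np.clip(np.diag(cov), 1e-12, None))
    rho = np.clip(cov / np.outer(sd, sd), -1.0, 1.0)
    H = np.log(2.0) * np.exp(-c1 * (mu / sd) ** 2)                  # Eq. (4) at each z
    den = noise_var + sd[:, None] ** 2 * (1.0 + c2 * rho**2)        # x on axis 0, z on axis 1
    num = noise_var + sd[:, None] ** 2 * (1.0 - rho**2)
    EH = (np.log(2.0) * np.sqrt(num / den)
          * np.exp(-c1 * (mu[None, :] / sd[None, :]) ** 2
                   * (noise_var + sd[:, None] ** 2) / den))         # Eq. (5)
    gain = H[None, :] - EH                                          # I_hat(x, z)
    gain[mu - beta * sd < 0.0, :] = -np.inf                         # restrict x to S_n (non-empty here)
    i, _ = np.unravel_index(np.argmax(gain), gain.shape)
    X = np.append(X, grid[i])
    y = np.append(y, f(grid[i]) + rng.normal(0.0, np.sqrt(noise_var)))
print("fraction of grid classified safe:", np.mean(mu - beta * sd >= 0.0))
```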
4 Theoretical Results
In this section, we study the expression for În({x, y}; Ψ(z)) obtained using (4) and (5) and analyze the properties of the ISE exploration criterion (6). By construction of Sn in (2) and the assumptions on f in Section 2, we know that any parameter selected according to (6) is safe with high probability; see Appendix A for details. To show that we also learn about the safe set, we first need to define what it means to successfully explore starting from x0. The main challenge is that it is difficult to analyze how a GP generalizes based on noisy observations, so that it is difficult to define a notion of convergence that is not dependent on the specific run. SAFEOPT avoids this issue by expanding the safe set not based on the GP, but only using the Lipschitz constant L. Contrary to their approach, we depend on the GP to generalize from the safe set. In this case, the natural notion of convergence is provided by the posterior variance. In particular, we say that at iteration n we have explored the safe set up to ε-accuracy if σn²(x) ≤ ε for all parameters x in Sn. In the following, we show that ISE asymptotically leads either to ε-accurate exploration of the safe set or to indefinite expansion of the safe set. In future work it will be interesting to further investigate the notion of generalization and to derive convergence results similar to those obtained by Sui et al. (2015).
Theorem 1. Assume that xn+1 is chosen according to (6), and that there exists n̂ such that Sn+1 ⊆ Sn for all n ≥ n̂. Moreover, assume that for all n ≥ n̂, |µn(x)| ≤ M for some M > 0 and all x ∈ Sn. Then, for all ε > 0 there exists Nε such that σn²(x) ≤ ε for every x ∈ Sn if n ≥ n̂ + Nε.
The smallest of such Nε is given by
N_\varepsilon = \min\left\{ N \in \mathbb{N} : b^{-1}\!\left( \frac{C \gamma_N}{N} \right) \le \varepsilon \right\}, \qquad (7)

where b(\eta) := \ln(2)\, \exp\left\{ -\frac{c_1 M^2}{\eta} \right\} \left[ 1 - \sqrt{\frac{\sigma_\nu^2}{2 c_1 \eta + \sigma_\nu^2}} \right], \gamma_N = \max_{D \subset X,\, |D| = N} I(f(D); y(D)) is the maximum information capacity of the chosen kernel (Srinivas et al., 2010; Contal et al., 2014), and C = \ln(2) / \big( \sigma_\nu^2 \ln(1 + \sigma_\nu^{-2}) \big).
Proof. See Appendix A.
Theorem 1 tells us that if at some point the set of safe parameters Sn stops expanding, then the posterior variance over the safe set eventually vanishes. The intuition behind Theorem 1 is that if there were a parameter x in the safe set whose posterior mean remained finite and whose posterior variance remained bounded from below, then an evaluation of f at such x would yield a non-negligible average information gain about the safety of x, so that, since x is in the safe set, at some point ISE will be forced to evaluate x, reducing its posterior variance. This result guarantees that, should the safe set stop expanding, ISE will asymptotically explore the safe set up to an arbitrary ε-accuracy. In practice, we observe that ISE first focuses on reducing the uncertainty in areas of the safe set that are most informative about parameters whose classification is still uncertain (e.g. the boundary of the safe set), and only eventually turns to learning about the inside of the safe set. This behavior is what ultimately leads the posterior variance to decay over the whole of Sn. Therefore, even if it is not always possible to say whether or not the safe set will ever stop expanding, we can read Theorem 1 as an exploration guarantee for ISE, as it rules out the possibility that the proposed acquisition function forever leaves the uncertainty high in areas of the safe set that, if better understood, could lead to an expansion of the safe set.
Theorem 1 requires a bound on the GP posterior mean function, which is always satisfied with high probability based on our assumptions about f. Specifically, we have that |µn(x)| ≤ 2βn with probability at least 1 − δ for all n (see Appendix A for details). Therefore, it does not represent an additional restrictive assumption on f. Finally, we also note that the constant Nε defined by (7) always exists, since the function b is monotonically increasing, as long as γN grows sublinearly in N. Srinivas et al. (2010) prove that this is the case for commonly used kernels and, more generally, it is a prerequisite for data-efficient learning with GP models.
5 Discussion and Limitations
ISE drives exploration of the parameter space by selecting the parameters to evaluate according to (6). An alternative but conceptually similar approach to this criterion would be to consider the parameter that yields the biggest information gain on average over the domain, i.e., substituting the inner max in (6) with an average over X . The resulting integral, however, is intractable and would require further approximations. Moreover, the parameter found by solving (6) will also yield a high average information gain over the domain, due to the regularity of all involved objects.
Being able to work in a continuous domain, ISE can deal with higher-dimensional domains better than algorithms requiring a discrete parameter space. However, as noted in Section 4, finding xn+1 as in (6) means solving a non-convex optimization problem with twice the dimension of the parameter space, which can become computationally challenging as the dimension grows. In a high-dimensional setting, we follow LINEBO by Kirschner et al. (2019), which at each iteration selects a random one-dimensional subspace to which it restricts the optimization of the acquisition function.
In Sections 2 and 3, we assumed the observation process to be homoskedastic. However, this need not be the case, and the results can be extended to heteroskedastic Gaussian noise. The observation noise at a parameter x explicitly appears in the ISE acquisition function, since it crucially affects the amount of information that we can gain by evaluating the constraint f at x. On the contrary, STAGEOPT-like methods do not consider the observation noise in their acquisition functions. As a consequence, ISE can perform significantly better in a heteroskedastic setting, as we also show in Section 6.
Lastly, we reiterate that the theoretical safety guarantees offered by ISE are derived under the assumption that f is a bounded-norm element of the RKHS associated with the GP’s kernel. In applications, therefore, the choice of the kernel function becomes even more crucial when safety is an issue. For details on how to construct and choose kernels see (Garnett, 2022). The safety guarantees also depend on the choice of βn. Typical expressions for βn include the RKHS norm of the constraint f (Chowdhury and Gopalan, 2017; Fiedler et al., 2021), which is in general difficult to estimate in practice. Because of this, a constant value of βn is usually used in practice instead.
6 Experiments
In this section we empirically evaluate ISE. Additional details about the experiments and setup can be found in Appendix C. As commonly done in the literature (see Section 5), we set βn = 2 for all experiments. This choice guarantees safety per iteration, rather than jointly for all n, and allows for a less conservative bound than the one needed for the joint guarantees.
GP samples For the first part of the experiments, we evaluate ISE on constraint functions f that we obtain by sampling a GP prior at a finite number of points. This allows us to test ISE under the assumptions of the theory, and we compare its performance to that of the exploration part of STAGEOPT (Sui et al., 2018). STAGEOPT is a modified version of SAFEOPT in which the exploration and optimization parts are performed separately: first the SAFEOPT exploration strategy is used to expand the safe set as much as possible, then the objective function is optimized within the discovered safe set. We further modify the version of STAGEOPT that we use in the experiment by defining the safe set in the same way ISE does, i.e., by means of the GP posterior, as done, for example, by Berkenkamp et al. (2016). We select 100 samples from a two-dimensional GP with RBF kernel, defined on [−2.5, 2.5] × [−2.5, 2.5], and run ISE and STAGEOPT for 100 iterations for each sample. As STAGEOPT requires a discretization of the domain, we use this discretization to compare the sample efficiency of the two methods by computing, at each iteration, what percentage of the discretized domain is classified as safe. Moreover, we also compare with the heuristic acquisition inspired by SAFEOPT proposed by Berkenkamp et al. (2016). This method works exactly as STAGEOPT, with the difference that the set of expanders is computed directly using the GP posterior rather than the Lipschitz constant. More precisely, a parameter x is considered an expander if observing a value of µn(x) + βnσn(x) at x would enlarge the safe set. For the STAGEOPT runs, we use the kernel metric to compute the set of potential expanders, for different values of the Lipschitz constant L. From the results shown in Figure 3a, we see not only that ISE performs as well as or better than all tested instances of STAGEOPT, but also how the choice of L affects the performance of the latter. This plot also makes evident how crucial the choice of the Lipschitz constant is for STAGEOPT and SAFEOPT-like algorithms in general. In Table 1 in Appendix C, we report the average percentage of safety violations per run achieved by ISE and STAGEOPT. As expected, we see that the percentage of safety violations is comparable among all algorithms.
To show that for STAGEOPT exploration not only overestimating the Lipschitz constant but also underestimating it can negatively impact performance, we consider the simple one-dimensional constraint function f(x) = e^{−x} + 0.05 and run the safe exploration for multiple values of the Lipschitz constant. This function moves increasingly far from the safety threshold as x → −∞, while it asymptotically approaches the threshold as x → ∞, so that a good exploration algorithm would, ideally, quickly classify as safe the domain region for x < 0 and then keep exploring the boundary of the safe set for x > 0. The results plotted in Figure 3b show how both a too high and a too low Lipschitz constant can lead to sub-optimal exploration. In the case of a too small constant, this is because STAGEOPT considers almost all parameters in the domain to be expanders, leading to additional evaluations in the region x < 0 that are unlikely to cause expansion of the safe set. On the other hand, a too high value of the Lipschitz constant can lead to the set of expanders being empty as soon as the posterior mean gets close to the safety threshold for x > 0.
OpenAI Gym Control After investigating the performance of ISE under the hypothesis of the theory, we apply it to two classic control tasks from the OpenAI Gym framework (Brockman et al., 2016), with the goal of finding the set of parameters of a controller that satisfy some safety constraint. In particular we consider linear controllers for the inverted pendulum and cart pole tasks.
For the inverted pendulum task, the linear controller is given by ut = α1θt + α2θ̇t, where ut is the control signal at time t, while θt and θ̇t are, respectively, the angular position and the angular velocity
of the pendulum. Starting from a position close to the upright equilibrium, the controller’s task is the stabilization of the pendulum, subject to a safety constraint on the maximum velocity reached within one episode. For some given initial controller configuration α^0 := (α^0_1, α^0_2), we want to explore the controller’s parameter space while avoiding configurations that lead the pendulum to swing at too high a velocity. We apply ISE to explore the α-space with x0 = α^0 and the safety constraint being the maximum angular velocity reached by the pendulum in an episode of fixed length. In this case, the safety threshold is not at zero, but rather at some finite value θ̇M, and the safe parameters are those for which the maximum velocity is below θ̇M. The formalism developed in the previous sections can be easily applied to this scenario if we consider f(α) = −(max_t θ̇_t(α) − θ̇_M). In Figure 4a we show the true safe set for this problem, while in Figures 4b–4d one can see how ISE safely explores the true safe set. These plots show how the ISE acquisition function (6) selects parameters that are close to the current safe set boundary and, hence, most informative about the safety of parameters outside of the safe set. This behavior eventually leads to the full true safe set being classified as safe by the GP posterior, as Figure 4d shows.
The cart pole task is similar to the inverted pendulum one, but the parameter space has three dimensions. The controller we consider is given by ut = α1θt + α2θ̇t + α3ṡt, where θt and θ̇t are, respectively, the angular position and angular velocity of the pole at time t, while ṡt is the cart’s velocity. We set the initial state to zero angular and linear velocity, with the pole close to the vertical position; the controller’s goal is to keep the pole stable in the upright position. A combination of the three parameters α1, α2 and α3 is considered safe if the angle of the pole does not exceed a given threshold within the episode. Again, we can easily cast this safety constraint in terms of the formalism developed in the paper: f(α) = −(max_t θ_t(α) − θ_M), where θ_M is the maximum allowed angle. Figure 5a shows the expansion of the cart pole α-space promoted by ISE, compared with STAGEOPT for different values of the Lipschitz constant. Both methods achieve comparable sample efficiency and both lead to the full true safe set being classified as safe.
High dimensional domains Many interesting applications have a high-dimensional parameter space. While SAFEOPT-like methods are difficult to apply already in dimension > 3 due to the discretization of the domain, ISE also performs well in four or five dimensions. To see this, we apply ISE to the constraint function f(x) = e^{-x^2} + 2e^{-(x - x_1)^2} + 5e^{-(x - x_2)^2} - 0.2. Figure 5b shows the ISE performance in dimension 5. We see that ISE is able to promote the expansion of the safe set, leading to an increasingly large portion of the true safe set being classified as safe.
Heteroskedastic noise domains For even higher dimensions, we can follow a similar approach to LINEBO, limiting the optimization of the acquisition function to a randomly selected one-dimensional subspace of the domain. Moreover, as discussed in Section 5, it is also interesting to test ISE in the case of heteroskedastic observation noise, since the noise is a critical quantity for the ISE acquisition function, while it does not affect the selection criterion of STAGEOPT-like methods. Therefore, in
this experiment we combine a high-dimensional problem with heteroskedastic noise. In particular, we apply a LINEBO version of ISE to the constraint function f(x) = \frac{1}{2}e^{-x^2} + e^{-(x \pm x_1)^2} + 3e^{-(x \pm x_2)^2} + 0.2 in dimensions nine and ten, with the safe seed being the origin. This function has two symmetric global optima at ±x2, and we set two different noise levels in the two symmetric domain halves containing the optima. To assess the exploration performance, we use the simple regret, defined as the difference between the current safe optimum and the true safe optimum. As the results in Figure 6 show, ISE achieves a greater sample efficiency than the other STAGEOPT-like methods. Namely, for a given number of iterations, by explicitly exploiting knowledge about the observation noise, ISE is able to classify as safe regions of the domain further away from the origin, in which the constraint function assumes its largest values, resulting in a smaller regret. On the other hand, SAFEOPT-like methods only focus on the posterior variance, so that the higher observation noise causes them to remain stuck in a smaller neighborhood of the origin, resulting in bigger regret.
7 Conclusion and Societal Impact
We have introduced Information-Theoretic Safe Exploration (ISE), a novel approach to safely explore a space in a sequential decision task where the safety constraint is a priori unknown. ISE efficiently and safely explores by evaluating only parameters that are safe with high probability and by choosing those parameters that yield the greatest information gain about the safety of other parameters. We theoretically analyzed ISE and showed that it leads to arbitrary reduction of the uncertainty in the largest reachable safe set containing the starting parameter. Our experiments support these theoretical results and demonstrate an increased sample efficiency and scalability of ISE compared to SAFEOPT-based approaches.
In many safety-sensitive applications the shape of the safety constraints is unknown, so that an important prerequisite for any kind of process is to identify which parameters are safe to evaluate. By providing a principled way to do this, the contributions of this paper make it possible to deal with safety in a broad range of applications, which can favor the usage of ML approaches in safety-sensitive settings as well. On the other hand, misuse of the proposed method cannot be prevented in general. | 1. What is the focus and contribution of the paper regarding safe exploration in Bayesian optimization?
2. What are the strengths of the proposed approach, particularly in its principled nature and use of information gain criteria?
3. Do you have any concerns or questions regarding the paper's theoretical analysis and guarantees?
4. How does the reviewer assess the novelty and distinction of the paper compared to prior works in the field?
5. Are there any suggestions for improving the paper's comparison with other related works or providing more specific bounds on the exploration process? | Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations | Summary Of The Paper
This paper provides a newer approach to explore and uncover the safe set of parameters. The exploration is principled and is guaranteed to be safe with high probability. The developments in the theoretical section provide closed-form approximations for the information-gain relevant quantities. The experiments show that their approach can provide the maximally safe set faster than existing methods. The novelty with respect to prior work is that the information gain criteria is used for bayesian optimization with an emphasis on safety. As an illustration of the algorithm, the GP prior is known beforehand with at least one point in the safe set. The next point is chosen to be in the region that is known to be safe such that decreasing the variance around this region will provide maximal information about the safety of any point beyond the safe set. Rather than the Lipschitz constant, the use of a smoothness prior does seem interesting and novel.
Strengths And Weaknesses
I believe this is overall a good paper, as it takes a principled approach to Bayesian optimization and safety. It improves on Lipschitz-constant-based methods for sampling-based safety verification.
Questions
From a theory perspective, [A] proves a finite-time regret bound, while this paper only proves an asymptotic guarantee. The first type of bound is stronger and provides more information about the rate of convergence. In practice, the method actually has an advantage, but this is lost in the theory. What is proven here is that once we have found the maximal safe set, the posterior variances of the points inside the safe set will eventually become small. The exploration for uncovering the safe set is optimal, at least assuming the prior is correct. Would it be relevant to bound the number of samples it takes to uncover the safe set, or, say, the regret with respect to an algorithm that knows the optimal prior?
The comparison to previous related work is at a very high level. It would help to provide a deeper theoretical comparison to the 2-3 most related papers. For example, can you place the theorem being proven in the context of the literature on this topic?
[A] Max-value Entropy Search for Efficient Bayesian Optimization - Wang and Jegelka
Limitations
There is a great discussion on limitations of the work. |
NIPS | Title
Fast deep reinforcement learning using online adjustments from the past
Abstract
We propose Ephemeral Value Adjustments (EVA): a means of allowing deep reinforcement learning agents to rapidly adapt to experience in their replay buffer. EVA shifts the value predicted by a neural network with an estimate of the value function found by planning over experience tuples from the replay buffer near the current state. EVA combines a number of recent ideas on incorporating episodic-memory-like structures into reinforcement learning agents: slot-based storage, content-based retrieval, and memory-based planning. We show that EVA is performant on a demonstration task and Atari games.
1 Introduction
Complementary learning systems [McClelland et al., 1995, CLS] combine two mechanisms for learning: one fast to learn and highly adaptive but poor at generalising; the other slow to learn and consequently better at generalising across many examples. The need for two systems reflects the typical trade-off between the sample efficiency and the computational complexity of a learning algorithm. We argue that the majority of contemporary deep reinforcement learning systems fall into the latter category: slow, gradient-based updates combined with incremental updates from Bellman backups result in systems that are good at generalising, as evidenced by many successes [Mnih et al., 2015, Silver et al., 2016, Moravčík et al., 2017], but take many steps in an environment to achieve this feat.
RL methods are often categorised as either model-free methods or model-based RL methods [Sutton and Barto, 1998]. In practice, model-free methods are typically fast at acting time but computationally expensive to update from experience, whilst model-based methods can be quick to update but expensive to act with (as on-the-fly planning is required). Recently there has been interest in incorporating episodic-memory-like structures into reinforcement learning algorithms [Blundell et al., 2016a, Pritzel et al., 2017], potentially providing increases in flexibility and learning speed, driven by motivations from the neuroscience literature known as Episodic Control [Dayan and Daw, 2008, Gershman and Daw, 2017]. Episodic Control uses episodic memory in lieu of a learnt model of the environment, aiming for a different computational trade-off to model-free and model-based approaches.
We will be interested in a hybrid approach, motivated by the observations of CLS [McClelland et al., 1995], where we will build an agent with two systems: one slow and general (model-free) and the other fast and adaptive (episodic control-like). Similar to previous proposals for agents, the fast, adaptive subsystem of our agent uses episodic memories to remember and later mimic previously experienced rewarding sequences of states and actions. This can be seen as a memory-based form of planning [Silver et al., 2008], in which related experiences are recalled to inform decisions. Planning
in this context can be thought of as the re-evaluation of past experience using current knowledge to improve model-free value estimates.
Critical to many approaches to deep reinforcement learning is the replay buffer [Mnih et al., 2015, Espeholt et al., 2018]. The replay buffer stores previously seen tuples of experience: state, action, reward, and next state. These stored experience tuples are then used to train a value function approximator using gradient descent. Typically one step of gradient descent on data from the replay buffer is taken per action in the environment, as (with the exception of [Barth-Maron et al., 2018]) a greater reliance on replay data leads to unstable performance. Consequently, we propose that the replay buffer may frequently contain information that could significantly improve the policy of an agent but never be fully integrated into the decision making of an agent. We posit that this happens for three reasons: (i) the slow, global gradient updates to the value function due to noisy gradients and the stability of learning dynamics, (ii) the replay buffer is of limited size and experience tuples are regularly removed (thus limiting the opportunity for gradient descent to learn from it), (iii) training from experience tuples neglects the trajectory nature of an agent's experience: one tuple occurs after another and so information about the value of the next state should be quickly integrated into the value of the current state.
In this work we explore a method of allowing deep reinforcement learning agents to simultaneously: (i) learn the parameters of the value function approximation slowly, and (ii) adapt the value function quickly and locally within an episode. Adaptation of the value function is achieved by planning over previously experienced trajectories (sequences of temporally adjacent tuples) that are grounded in estimates from the value function approximation. This process provides a complementary way of estimating the value function.
Interestingly, our approach requires very little modification of existing replay-based deep reinforcement learning agents: in addition to storing the current state and next state (which are typically large: full inputs to the network), we propose to also store trajectory information (pointers to successor tuples) and one layer of current hidden activations (typically much smaller than the state). Using this information our method adapts the value function prediction using memory-based rollouts of previous experience based on the hidden representation. The adjustment to the value function is not stored after it is used to take an action (thus it is ephemeral). We call our method Ephemeral Value Adjustment (EVA).
2 Background
The action-value function of a policy π is defined as Qπ(s, a) = Eπ [ ∑t γ^t rt | s, a ] [Sutton and Barto, 1998], where s and a are the initial state and action respectively, γ ∈ [0, 1] is a discount factor, and the expectation denotes that π is followed thereafter. Similarly, the value function under the policy π at state s is given by V π(s) = Eπ [ ∑t γ^t rt | s ] and is simply the expected return for following policy π starting at state s.
In value-based model-free reinforcement learning methods, the action-value function is represented using a function approximator. Deep Q-Network agents [Mnih et al., 2015, DQN] use Q-learning [Watkins and Dayan, 1992] to learn an action-value function Qθ(st, at) to rank which action at is best to take in each state st at step t. Qθ is parameterised by a convolutional neural network (CNN), with parameters collectively denoted by θ, that takes a 2D pixel representation of the state st as input, and outputs a vector containing the value of each action at that state. The agent executes an ε-greedy policy to trade off exploration and exploitation: with probability ε the agent picks an action uniformly at random, otherwise it picks the action at = argmaxa Q(st, a).
When the agent observes a transition, DQN stores the (st, at, rt, st+1) tuple in a replay buffer, the contents of which are used for training. This neural network is trained by minimizing the squared error between the network’s output and the Q-learning target yt = rt + γ maxa Q̃(st+1, a), for a subset of transitions sampled at random from the replay buffer. The target network Q̃(st+1, a) is an older version of the value network that is updated periodically. It was shown by Mnih et al. [2015] that both the use of a target network and the sampling of uncorrelated transitions from the replay buffer are critical for stable training.
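To make the update rule concrete, here is a minimal NumPy sketch of the ε-greedy rule and the Q-learning target described above; the batch shapes and the terminal-state masking convention are assumptions made for illustration, not details of the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy(q_values, epsilon):
    # With probability epsilon act uniformly at random, else greedily.
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def q_learning_targets(rewards, q_next_target, gamma, terminal):
    # y_t = r_t + gamma * max_a Q~(s_{t+1}, a); no bootstrap at terminal states.
    return rewards + gamma * (1.0 - terminal) * q_next_target.max(axis=1)

# Toy batch: 3 transitions, 4 actions.
rewards = np.array([1.0, 0.0, -0.01])
q_next = rng.normal(size=(3, 4))        # target-network outputs Q~(s', .)
terminal = np.array([0.0, 1.0, 0.0])
print(q_learning_targets(rewards, q_next, gamma=0.99, terminal=terminal))
print(epsilon_greedy(q_next[0], epsilon=0.1))
```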
3 Ephemeral Value Adjustments
Ephemeral value adjustments are a way to augment an arbitrary value-based off-policy agent. This is accomplished through a trace computation algorithm, which rapidly produces value estimates by combining previously encountered trajectories with parametric estimates. Our agent consists of three components: a standard parametric reinforcement learner, with its replay buffer augmented to maintain trajectory information; a trace computation algorithm that periodically plans over subsets of data in the replay buffer; and a small value buffer which stores the value estimates resulting from the planning process. The overall policy of EVA is dictated by the action-value function,
Q(s, a) = λQθ(s, a) + (1− λ)QNP(s, a) (1)
Qθ is the value estimate from the parametric model and QNP is the value estimate from the trace computation algorithm (non-parametric). Figure 1 (Right) shows a block diagram of the method. The parametric component of EVA consists of the standard DQN-style architecture, Qθ, a feedforward convolutional neural network: several convolution layers followed by two linear layers that ultimately produce action-value function estimates. Training is done exactly as in DQN, briefly reviewed in Section 2 and fully described in [Mnih et al., 2015].
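As a minimal sketch of the blend in Equation (1); the toy arrays below are placeholders, not values from the paper:

```python
import numpy as np

def eva_q(q_theta, q_np, lam):
    # Equation (1): lam weights the parametric estimate Q_theta,
    # (1 - lam) the non-parametric, trace-based estimate Q_NP.
    return lam * q_theta + (1.0 - lam) * q_np

print(eva_q(np.array([0.2, 0.5, 0.1]), np.array([0.9, 0.4, 0.0]), lam=0.4))
```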
3.1 Trajectory selection and planning
The second-to-final layer of the DQN network is used to embed the currently observed state (pixels) into a lower dimensional space. Note that similarity in this space has been optimised for action-value estimation by the parametric model. Periodically (every 20 steps in all the reported experiments), the k nearest neighbours in the global buffer are queried from the current state embedding (on the basis of their ℓ2 distance). Using the stored trajectory information, the 50 subsequent steps are also retrieved for each neighbour. Each of these k trajectories are passed to a trace computation algorithm (described below), and all of the resulting Q values are stored into the value buffer alongside their embedding. Figure 1 (Left) shows a diagram of this procedure. The non-parametric nature of this process means that while these estimates are less reliant on the accuracy of the parametric model, they are more relevant locally. This local buffer is meant to cache the results of the trace computation for states that are likely to be nearby the current state.
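A minimal sketch of the retrieval step just described, under assumed data layouts: an ℓ2 nearest-neighbour query over stored embeddings, followed by walking stored successor pointers to rebuild each trajectory. The chain-structured successor array is a toy stand-in for a real replay buffer.

```python
import numpy as np

def knn_indices(query_h, buffer_h, k):
    # k nearest neighbours of the query embedding under L2 distance.
    d = np.linalg.norm(buffer_h - query_h, axis=1)
    return np.argsort(d)[:k]

def retrieve_trajectories(start_indices, successor, length=50):
    # Follow stored successor pointers; -1 marks the end of a trajectory.
    trajs = []
    for i in start_indices:
        traj = [int(i)]
        while len(traj) < length and successor[traj[-1]] >= 0:
            traj.append(int(successor[traj[-1]]))
        trajs.append(traj)
    return trajs

rng = np.random.default_rng(0)
buffer_h = rng.normal(size=(100, 8))                # stored embeddings
successor = np.arange(1, 101); successor[-1] = -1   # toy chain of pointers
idx = knn_indices(buffer_h[3], buffer_h, k=2)
print(retrieve_trajectories(idx, successor, length=5))
```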
3.2 Computing value estimates on memory traces
By having the replay buffer maintain trajectory information, values can be propagated through time to produce trajectory-centric value estimates QNP(s, a). Figure 1 (Right) shows how the value buffer is used to derive the action-value estimate. There are several methods for estimating this value function; we shall describe the n-step, trajectory-centric planning (TCP), and kernel-based RL (KBRL) trace computation algorithms. N-step estimates for trajectories from the replay buffer are calculated as follows,
VNP(st) = { maxa Qθ(st, a)      if t = T
          { rt + γ VNP(st+1)    otherwise,
(2)
where T is the length of the trajectory and st, rt are the states and rewards of the trajectory. These estimates utilise information in the replay buffer that might not be consolidated into the parametric model, and thus should be complementary to the purely parametric estimates.
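A minimal sketch of the n-step trace of Equation (2), computed backwards from the parametric bootstrap at the trajectory's final state; the toy rewards and Q-values are placeholders.

```python
import numpy as np

def n_step_values(rewards, q_final, gamma):
    # Equation (2): bootstrap V_NP(s_T) = max_a Q_theta(s_T, a), then
    # accumulate the stored rewards backwards through the trace.
    v = float(np.max(q_final))
    values = [v]
    for r in reversed(rewards):
        v = r + gamma * v
        values.append(v)
    return list(reversed(values))   # V_NP(s_0), ..., V_NP(s_T)

print(n_step_values(rewards=[0.0, 0.0, 1.0],
                    q_final=np.array([0.1, 0.3]), gamma=0.99))
```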
Algorithm 1: Ephemeral Value Adjustments
Input: replay buffer D; value buffer L; mixing hyper-parameter λ; maximum roll-out hyper-parameter τ
for e := 1, ∞ do
    for t := 1, T do
        Receive observation st from the environment, with embedding ht
        Collect trace-computed values from the k nearest neighbours: QNP(sk, ·) | h(sk) ∈ KNN(h(st), L)
        QEVA(st, ·) := λ Qθ(st, ·) + (1 − λ) (1/K) ∑k QNP(sk, ·)
        at ← ε-greedy policy based on QEVA(st, ·)
        Take action at, receive reward rt+1
        Append (st, at, rt+1, ht, e) to D
        Tm := (st:t+τ , at:t+τ , rt+1:t+τ+1, ht:t+τ , et:t+τ ) | h(sm) ∈ KNN(h(st), D)
        QNP ← trace computation on Tm via the TCP algorithm
        Append (ht, QNP) to L
    end
end
While this process will serve as a useful baseline, the n-step return just evaluates the policy defined by the sampled trajectory; only the initial parametric bootstrap involves an estimate of the optimal value function. Ideally, the values at all time-steps should estimate the optimal value function,

Q(s, a) ← r(s, a) + γ maxa′ Q(s′, a′).
(3)
Thus another way to estimate QNP(s, a) is to apply the Bellman policy improvement operator at each time step, as shown in (3). While (2) could be applied recursively, traversing the trajectory backwards, this improvement operator requires knowing the value of the counter-factual actions. We propose using the parametric model for these off-trajectory value estimates, constructing the complete set of action-conditional value estimates; we call this trajectory-centric planning (TCP):
QNP(st, a) = { rt + γ VNP(st+1)   if at = a
             { Qθ(st, a)          otherwise.
(4)
This allows for the same recursive application as before,
VNP(st) = { maxa Qθ(st, a)    if t = T
          { maxa QNP(st, a)   otherwise.
(5)
The trajectory-centric estimates for the k nearest neighbours are then averaged with the parametric estimate on the basis of a hyper-parameter λ, as shown in Algorithm 1 and represented graphically in Figure 1 (Left). Refer to the supplementary material for a detailed algorithm.
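A minimal sketch of the TCP backup of Equations (4) and (5) over a single stored trace; passing the parametric estimates in as a precomputed array is a simplification made here for brevity.

```python
import numpy as np

def tcp_values(actions, rewards, q_theta, gamma):
    # Equations (4)-(5): counter-factual actions take the parametric
    # estimate; the on-trajectory action is backed up, and a Bellman
    # improvement (max) is applied at every step, traversed backwards.
    v = float(np.max(q_theta[-1]))       # V_NP(s_T) = max_a Q_theta(s_T, a)
    q_np = None
    for t in reversed(range(len(actions))):
        q_np = q_theta[t].copy()
        q_np[actions[t]] = rewards[t] + gamma * v
        v = float(np.max(q_np))          # V_NP(s_t)
    return q_np                          # Q_NP(s_0, .)

rng = np.random.default_rng(0)
q_theta = rng.normal(size=(4, 3))        # T = 3 steps, 3 actions
print(tcp_values(actions=[0, 2, 1], rewards=[0.0, 0.0, 1.0],
                 q_theta=q_theta, gamma=0.99))
```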
3.3 From trajectory-centric to kernel-based planning
The above method may seem ad hoc – why trust the on-trajectory samples completely and only utilise the parametric estimates for the counter-factual actions? Why not analyse the trajectories together, rather than treating them independently? To address these concerns, we propose a generalisation of the trajectory-centric method which extends kernel-based reinforcement learning (KBRL) [Ormoneit and Sen, 2002]. KBRL is a non-parametric approach to planning with strong theoretical guarantees.²
For each action a, KBRL stores experience tuples (st, rt, st+1) ∈ Sa. Since Sa is finite (equal to the number of stored transitions), and these states have known transitions, we can perform value iteration to obtain value estimates for all resultant states st+1 (the values of the origin states st are not needed, as the Bellman equation only evaluates states after a transition). We can obtain an approximate version of the Bellman equation by using the kernel to compare all resultant states to all origin states, as shown in Equation 6. We define a similarity kernel on states (in fact, on embeddings of the current state, as described above), κ(s, s′), typically a Gaussian kernel. The action-value function of KBRL is then estimated using:
QNP(st, at) = ∑(s,r,s′)∈Sa κ(st, s) [ r + γ maxa′ QNP(s′, a′) ]
(6)

²Convergence to a global optimum, assuming that the underlying MDP dynamics are Lipschitz continuous and the kernel is appropriately shrunk as a function of data.
In effect, the stored ‘origin’ states (s ∈ S) transition to some ‘resultant state’ (s′ ∈ S′) and get the stored reward. By using a similarity kernel κ(x0, x1), we can map resultant states to a distribution over the origin states. This makes the state transitions go from S to S instead of from S to S′, meaning that all transitions only involve states that have been previously encountered.
In the context of trajectory-centric planning, KBRL can be seen as an alternative way of dealing with counter-factual actions: estimate their effects using nearby transitions. Additionally, KBRL is not constrained to dealing with individual trajectories, since it treats all transitions independently.
We propose to add an absorbing pseudo-state ŝ to KBRL’s model whose similarity to the other pseudo-states is fixed, that is, κ(st, ŝ) = C for some C > 0 for all st. Using this definition we can make KBRL softly blend similarity-based and parametric counter-factual action evaluation. This is accomplished by setting the pseudo-state’s value to be equal to the parametric value function evaluated at the state under comparison: when st is being evaluated, QNP(ŝ, a) ≈ Qθ(st, a); thus by setting C appropriately, we can guarantee that the parametric estimates will dominate when data density is low. Note that this is in addition to the blending of value functions described in Equation 1.
KBRL can be made numerically identical to trajectory-centric planning by shrinking the kernel bandwidth (i.e., the length scale of the Gaussian kernel) and pseudo-state similarity.³ With the appropriate values, this will result in value estimates being dominated by exact matches (on-trajectory) and parametric estimates when none are found. This reduction is of interest as KBRL is significantly more expensive than trajectory-centric planning. KBRL’s computational complexity is O(AN²) and trajectory-centric planning has a complexity of O(N), where N is the number of stored transitions and A is the cardinality of the action space. We can thus think of this parametrically augmented version of KBRL as the theoretical foundation for trajectory-centric planning. In practice, we use the TCP trace computation algorithm (Equations 4 and 5) unless otherwise noted.
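A minimal single-action sketch of parametrically augmented KBRL, i.e., Equation (6) plus the absorbing pseudo-state; normalising the kernel rows and running a fixed number of value-iteration sweeps are illustrative simplifications rather than details taken from the paper.

```python
import numpy as np

def kbrl_values(origins, rewards, results, q_theta_fn, bandwidth, c, gamma,
                sweeps=50):
    # Gaussian similarity from every resultant state to every origin state,
    # with one extra column for the pseudo-state of fixed similarity c.
    d2 = ((results[:, None, :] - origins[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2.0 * bandwidth ** 2))
    k = np.concatenate([k, np.full((len(results), 1), c)], axis=1)
    k /= k.sum(axis=1, keepdims=True)

    v = np.zeros(len(results))              # V(s') for each resultant state
    for _ in range(sweeps):                 # value iteration, Equation (6)
        backed = rewards + gamma * v        # r + gamma * V at each transition
        pseudo = np.array([q_theta_fn(s) for s in results])
        v = k[:, :-1] @ backed + k[:, -1] * pseudo
    return v

rng = np.random.default_rng(0)
origins = rng.normal(size=(20, 4))
results = origins + 0.1 * rng.normal(size=(20, 4))
rewards = rng.uniform(size=20)
print(kbrl_values(origins, rewards, results, q_theta_fn=lambda s: 0.0,
                  bandwidth=0.5, c=1e-2, gamma=0.9)[:3])
```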
4 Related work
There has been a lot of recent work on using memory-augmented neural networks as function approximators for RL agents: using LSTMs [Bakker et al., 2003, Hausknecht and Stone, 2015], or more sophisticated architectures [Graves et al., 2016, Oh et al., 2016, Wayne et al., 2018]. However, the motivation behind these works is to obtain a better state representation in partially observable or non-Markovian environments, in which feed-forward models would not be appropriate. The focus of this work is on data efficiency, which is improved in a representation-agnostic manner.
The main use of long-term episodic memory is the replay buffer introduced by DQN. While it is central to stable training, it also significantly improves the data efficiency of the method compared with online counterparts that achieve stable training by having several actors [Mnih et al., 2016]. The replay frequency is a hyper-parameter that has been carefully tuned in DQN. Learning cannot be sped up by increasing the frequency of replay without harming end performance. The problem is that the network would overfit to the content of the replay buffer, affecting its ability to learn a better policy. An alternative approach is prioritised experience replay [Schaul et al., 2015], which changes the data distribution used during training by biasing it toward transitions with high temporal-difference error. These works use the replay buffer during training time only. Our approach aims at leveraging the replay buffer at decision time and thus is complementary to prioritisation, as it impacts the behaviour policy but not how the replay buffer is sampled from (see the supplementary material for a preliminary comparison).
³Modulo the fact that KBRL would still be able to find ‘shortcuts’ between or within trajectories, owing to its exhaustive similarity comparisons between states.
Using previous experience at decision time is closely related to non-parametric approaches for Q-function approximation [Santamaría et al., 1997, Munos and Moore, 1998, Gabel and Riedmiller, 2005]. Our work is particularly related to techniques following the ideas of episodic control. Blundell et al. [2016b, MFEC] recently used local regression for Q-function estimation using the mean of the k-nearest neighbours, searched over random projections of the pixel inputs. Pritzel et al. [2017] extended this line of work with NEC, using the reward signal to learn an embedding space in which to perform the local regression. These works showed dramatic improvements in data efficiency, especially in early stages of training. This work differs from these approaches in that, rather than using memory for local regression, memory is used as a form of local planning, which is made possible by exploiting the trajectory structure of the memories in the replay buffer. Furthermore, the memory requirements of NEC are significantly larger than those of EVA. NEC uses a large memory buffer per action in addition to a replay buffer. Our work only adds a small overhead over the standard DQN replay buffer and needs to query a single replay buffer once every several acting steps (20 in our experiments) during training. In addition, NEC and MFEC fundamentally change the structure of the model, whereas EVA is strictly supplemental. More recent works have looked at including NEC-type architectures to aid the learning of a parametric model [Nishio and Yamane, 2018, Jain and Lindsey, 2018], sharing the memory requirements of NEC.
The memory-based planning aspect of our approach also has precedent in the literature. Brea [2017] explicitly compares a local regression approach (NEC) to prioritised sweeping and finds that the latter is preferable, but fails to show a scalable result. Savinov et al. [2018] build a memory-based graph and plan over it, but rely on a fixed exploration policy. Xiao et al. [2018] combine MCTS planning with NEC, but rely on a built-in model of the environment.
In the context of supervised learning, several works have looked at using non-parametric approaches to improve the performance of models using neural networks. Kaiser et al. [2016] introduced a differentiable layer of key-value pairs that can be plugged into a neural network to help it remember rare events. Works in the context of language modelling have augmented prediction with attention over recent examples to account for the distributional shift between training and testing settings, such as the neural cache [Grave et al., 2016] and pointer sentinel networks [Merity et al., 2016]. The work by Sprechmann et al. [2018] is also motivated by the CLS framework. However, they use an episodic memory to improve a parametric model in the context of supervised learning and do not consider reinforcement learning.
5 Experiments
5.1 A simple example
We begin the experimental section by showing how EVA works on a simple “gridworld” environment implemented with the pycolab game engine [Stepleton, 2017]. The task is to collect a given number of coins in the minimum number of steps possible, which can be thought of as a very simple variant of the travelling salesman problem. At the beginning of each episode, the agent and the coins are placed at a random location of a grid of size 5 × 13; see the supplementary material for a screenshot. The agent can take four possible actions {left, right, up, down} and receives a reward of 1 when collecting a coin and a reward of −0.01 at every step. If the agent takes an action that would move it into a wall, it stays at its current position. We restrict the maximum length of an episode to 500 steps. We use an agent featuring a two-layer convolutional neural network, followed by a fully connected layer producing a 64-dimensional embedding, which is then used for the look-ups in the replay buffer of size 50K. The input is an RGB image of the maze. Results are reported in Figure 2.
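A minimal sketch, assuming the rules just described, of the gridworld's step logic; the pycolab engine itself is not used here, and the coordinates are placeholders.

```python
# Assumed reward/transition rules: +1 per coin, -0.01 per step,
# blocked moves leave the agent in place.
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step(pos, action, coins, walls, h=5, w=13):
    r, c = pos[0] + MOVES[action][0], pos[1] + MOVES[action][1]
    if not (0 <= r < h and 0 <= c < w) or (r, c) in walls:
        r, c = pos
    reward = -0.01
    if (r, c) in coins:
        coins.remove((r, c))
        reward += 1.0
    return (r, c), reward, not coins   # done when all coins are collected

pos, coins = (2, 2), {(0, 0), (4, 12)}
pos, reward, done = step(pos, "up", coins, walls=set())
print(pos, reward, done)
```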
Evaluation of a single episode We use the same pre-trained network (with its corresponding replay buffer) and run a single episode with and without using EVA; see Figure 2 (Left). We can see that, by leveraging the trajectories in the replay buffer, EVA immediately boosts performance over the baseline. Note that the weights of the network are exactly the same in both cases. The benefits saturate around λ = 0.4, which suggests that the policy of the non-parametric component alone is unable to generalise properly.
Evaluation of the full EVA algorithm Figure 2 (Center, Left) shows the performance of EVA on full episodes using one and two coins, evaluating different values of the mixing parameter λ. λ = 0 corresponds to the standard DQN baseline. We show the hyper-parameters that lead to the highest end performance of the baseline DQN. We can see that EVA provides a significant boost in data efficiency. For the single-coin case, it requires slightly more than half of the data to obtain final performance, and a higher value of λ is better. This is likely due to the fact that there are only 4K unique states, thus all states are likely to be in the replay buffer. In the two-coin setting, however, the number of possible states is approximately 195K, which is significantly larger than the replay buffer size. Again here, performance saturates around λ = 0.4.
5.2 EVA and Atari games
In order to validate whether EVA leads to gains in complex domains, we evaluated our approach on the Arcade Learning Environment (ALE; Bellemare et al., 2013). We used the set of 55 Atari games; please see the supplementary material for details. The hyper-parameters were tuned using a subset of 5 games (Pong, H.E.R.O., Frostbite, Ms Pacman and Qbert). The hyper-parameters shared between the baseline and EVA (e.g. learning rate) were chosen to maximise the performance of the baseline (λ = 0) on a run over 20M frames on the selected subset of games. The influence of these hyper-parameters on EVA and the baseline is highly correlated. Performance saturates around λ = 0.4, as in the simple example. We chose the lowest update frequency that would not harm performance (20 steps), the rollout length was set to 50, and the number of neighbours used for estimating QNP was set to 5. We observed that performance decreases as the number of neighbours increases. See the supplementary material for details on all hyper-parameters used.
We compared the absolute performance of agents according to the human-normalised score, as in Mnih et al. [2015]. Figure 3 summarises the obtained results, where we ran three random seeds for λ = 0 (which is our version of DQN) and EVA with λ = 0.4. In order to obtain uncertainty estimates, we report the mean and standard deviation per time step of the curves obtained by randomly selecting one random seed per game (that is, one out of the three possible seeds for each of the 55 games). For reference, we also included the original DQN results from [Mnih et al., 2015]. EVA is able to improve the learning speed as well as the final performance level using exactly the same architecture and learning parameters as our baseline. It is able to achieve the end performance of the baseline in 40 million frames.
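For reference, a minimal sketch of the human-normalised score convention of Mnih et al. [2015]; the per-game random and human reference scores below are placeholders.

```python
def human_normalised_score(agent, random_score, human_score):
    # 0 corresponds to random play and 1 to the human reference score.
    return (agent - random_score) / (human_score - random_score)

print(human_normalised_score(agent=3200.0, random_score=200.0, human_score=4500.0))
```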
Effect of trace computation To understand how EVA helps performance, we compare three different versions of the trace computation at the core of the EVA approach. The standard (trajectory-centric) trace computation can be simplified by removing the parametric evaluations of counter-factual actions. This ablation results in the n-step trace computation (as shown in (2)). Since the standard trace computation can be seen as a special case of parametrically augmented KBRL, we also consider this trace computation. Due to the increased computation of this trace computation, these experiments are only run for 40 million frames. For parametrically augmented KBRL, a Gaussian similarity kernel is used with a bandwidth parameter of 10−4 and a parametric similarity of 10−2.
EVA is significantly worse than the baseline with the n-step trace computation. This can be seen as evidence for the importance of the parametric evaluation of counter-factual actions. Without this additional computation, EVA’s policy is too dependent on the quality of the policy expressed in the trajectories, a negative feedback loop that results in divergence on several games. Interestingly, the standard trace computation is as good as, if not better than, the much more costly KBRL method. While KBRL is capable of merging the data from the different trajectories into a global plan, it does not give on-trajectory information a privileged status without an extremely small bandwidth⁴. In near-deterministic environments like Atari, this privileged status is appropriate and acts as a strong prior, as can be seen in the lower variance of this method.
Consolidation EVA relies on TCP at decision time. However, one would expect that after training, the parametric model would be able to consolidate the information available in the episodic memory and be capable of acting without relying on the planning process. We verified that annealing the value of λ to zero over two million steps leads to no degradation in performance in our Atari experiments. Note that when λ = 0 our agent reduces to the standard DQN agent.
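A minimal sketch of such an annealing schedule; the linear shape is an assumption here, since the text does not specify the schedule's form.

```python
def annealed_lambda(step, start=0.4, anneal_steps=2_000_000):
    # Anneal the mixing weight to zero over two million steps.
    return max(0.0, start * (1.0 - step / anneal_steps))

print([round(annealed_lambda(s), 3) for s in (0, 1_000_000, 2_000_000)])
```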
⁴To achieve this privileged status for on-trajectory information, the minimum off-trajectory similarity must be known, which typically results in a bandwidth so small as to be numerically unstable.
6 Discussion
Despite only changing the value function underlying the behaviour policy, EVA improves the overall rate of learning. This is due to two factors. The first is that the adjusted policy should be closer to the optimal policy by better exploiting the information in the replay data. The second is that this improved policy should fill the replay buffer with more useful data. This means that the ephemeral adjustments indirectly impact the parametric value function by changing the distribution of data that it is trained on.
During the training process, as the agent explores the environment, knowledge about the value function is extracted gradually from the interactions with the environment. Since the value function drives the data acquisition process, the ability to quickly incorporate highly rewarded experiences could significantly boost the sample efficiency of the learning process.
Acknowledgments
The authors would like to thank Melissa Tan, Paul Komarek, Volodymyr Mnih, Alistair Muldal, Adrià Badia, Hado van Hasselt, Yotam Doron, Ian Osband, Daan Wierstra, Demis Hassabis, Dharshan Kumaran, Siddhant Jayakumar, Razvan Pascanu, and Oriol Vinyals. Finally, we thank the anonymous reviewers for their comments and suggestions to improve the paper. | 1. What is the key contribution of the paper in the field of deep reinforcement learning?
2. How does the proposed Ephemeral Value Adjustment method improve the learning process and performance in Atari games?
3. What is the significance of storing one layer of current hidden activations in the method?
4. How does the reviewer assess the clarity and convincing nature of the paper's presentation and results? | Review | Review
This paper proposes the Ephemeral Value Adjustment method to allow deep reinforcement learning agents to simultaneously (1) learn the parameters of the value function approximation, and (2) adapt the value function quickly and locally within an episode. This is achieved by additionally storing one layer of current hidden activations. Experiments on Atari games show that this method significantly improves both the learning speed and performance using the same architecture. The presentation is clear and the results are convincing. I am not familiar with this topic but the basic idea of this paper makes sense. |
NIPS | Title
Fast deep reinforcement learning using online adjustments from the past
Abstract
We propose Ephemeral Value Adjusments (EVA): a means of allowing deep reinforcement learning agents to rapidly adapt to experience in their replay buffer. EVA shifts the value predicted by a neural network with an estimate of the value function found by planning over experience tuples from the replay buffer near the current state. EVA combines a number of recent ideas around combining episodic memory-like structures into reinforcement learning agents: slot-based storage, content-based retrieval, and memory-based planning. We show that EVA is performant on a demonstration task and Atari games.
1 Introduction
Complementary learning systems [McClelland et al., 1995, CLS] combine two mechanisms for learning: one, fast learning and highly adaptive but poor at generalising, the other, slow at learning and consequentially better at generalising across many examples. The need for two systems reflects the typical trade-off between the sample efficiency and the computational complexity of a learning algorithm. We argue that the majority of contemporary deep reinforcement learning systems fall into the latter category: slow, gradient-based updates combined with incremental updates from Bellman backups result in systems that are good at generalising, as evidenced by many successes [Mnih et al., 2015, Silver et al., 2016, Moravčík et al., 2017], but take many steps in an environment to achieve this feat.
RL methods are often categorised as either model-free methods or model-based RL methods [Sutton and Barto, 1998]. In practice, model-free methods are typically fast at acting time, but computationally expensive to update from experience, whilst model-based methods can be quick to update but expensive to act with (as on-the-fly planning is required). Recently there has been interest in incorporating episodic memory-like into reinforcement learning algorithms [Blundell et al., 2016a, Pritzel et al., 2017], potentially providing increases in flexibility and learning speed, driven by motivations from the neuroscience literature known as Episodic Control [Dayan and Daw, 2008, Gershman and Daw, 2017]. Episodic Control use episodic memory in lieu of a learnt model of the environment, aiming for a different computational trade-off to model-free and model-based approaches.
We will be interested in a hybrid approach, motivated by the observations of CLS [McClelland et al., 1995], where we will build an agent with two systems: one slow and general (model-free) and the other fast and adaptive (episodic control-like). Similar to previous proposals for agents, the fast, adaptive subsystem of our agent uses episodic memories to remember and later mimic previously experienced rewarding sequences of states and actions. This can be seen as a memory-based form of planning [Silver et al., 2008], in which related experiences are recalled to inform decisions. Planning
∗denotes equal contribution.
32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.
in this context can be thought as the re-evaluation of the past experience using current knowledge to improve model-free value estimates.
Critical to many approaches to deep reinforcement learning is the replay buffer [Mnih et al., 2015, Espeholt et al., 2018]. The replay buffer stores previously seen tuples of experience: state, action, reward, and next state. These stored experience tuples are then used to train a value function approximator using gradient descent. Typically one step of gradient descent on data from the replay buffer is taken per action in the environment, as (with the exception of [Barth-Maron et al., 2018]) a greater reliance on replay data leads to unstable performance. Consequently, we propose that the replay buffer may frequently contain information that could significantly improve the policy of an agent but never be fully integrated into the decision making of an agent. We posit that this happens for three reasons: (i) the slow, global gradient updates to the value function due to noisy gradients and the stability of learning dynamics, (ii) the replay buffer is of limited size and experience tuples are regularly removed (thus limiting the opportunity for gradient descent to learn from it), (iii) training from experience tuples neglects the trajectory nature of an agents experience: one tuple occurs after another and so information about the value of the next state should be quickly integrated into the value of the current state.
In this work we explore a method of allowing deep reinforcement learning agents to simultaneously: (i) learn the parameters of the value function approximation slowly, and (ii) adapt the value function quickly and locally within an episode. Adaptation of the value function is achieved by planning over previously experienced trajectories (sequences of temporally adjacent tuples) that are grounded in estimates from the value function approximation. This process provides a complementary way of estimating the value function.
Interestingly our approach requires very little modification of existing replay-based deep reinforcement learning agents: in addition to storing the current state and next state (which are typically large: full inputs to the network), we propose to also store trajectory information (pointers to successor tuples) and one layer of current hidden activations (typically much smaller than the state). Using this information our method adapts the value function prediction using memory-based rollouts of previous experience based on the hidden representation. The adjustment to the value function is not stored after it is used to take an action (thus it is ephemeral). We call our method Ephemeral Value Adjustment (EVA).
2 Background
The action-value function of a policy π is defined as Qπ(s, a) = Eπ [ ∑ t γ
trt | s, a] [Sutton and Barto, 1998], where s and a are the initial state and action respectively, γ ∈ [0, 1] is a discount factor, and the expectation denotes that the π is followed thereafter. Similarly, the value function under the policy π at state s is given by V π(s) = Eπ [ ∑ t γ
trt | s] and is simply the expected return for following policy π starting at state s.
In value-based model-free reinforcement learning methods, the action-value function is represented using a function approximator. Deep Q-Network agents [Mnih et al., 2015, DQN] use Q-learning [Watkins and Dayan, 1992] to learn an action-value function Qθ(st, at) to rank which action at is best to take in each state st at step t. Qθ is parameterised by a convolutional neural network (CNN), with parameters collectively denoted by θ, that takes a 2D pixel representation of the state st as input, and outputs a vector containing the value of each action at that state. The agent executes an -greedy policy to trade-off exploration and exploitation: with probability the agent picks an action uniformly at random, otherwise it picks the action at = argmaxaQ(st, a).
When the agent observes a transition, DQN stores the (st, at, rt, st+1) tuple in a replay buffer, the contents of which are used for training. This neural network is trained by minimizing the squared error between the network’s output and the Q-learning target yt = rt + γmaxa Q̃(st+1, a), for a subset of transitions sampled at random from the replay buffer. The target network Q̃(st+1, a) is an older version of the value network that is updated periodically. It was shown by Mnih et al. [2015] that both, the use of a target network and sampling uncorrelated transitions from the replay buffer, are critical for stable training.
3 Ephemeral Value Adjustments
Ephemeral value adjustments are a way to augment an arbitrary value-based off-policy agent. This is accomplished through a trace computation algorithm, which rapidly produces value estimates by combining previously encountered trajectories with parametric estimates. Our agent consists of three components: a standard parametric reinforcement learner with its replay buffer augmented to maintains trajectory information, a trace computation algorithm that periodically plans over subsets of data in the replay buffer, a small value buffer which stores the value estimates resulting from the planning process. The overall policy of EVA is dictated by the action-value function,
Q(s, a) = λQθ(s, a) + (1− λ)QNP(s, a) (1)
Qθ is the value estimate from the parametric model and QNP is the value estimate from the trace computation algorithm (non-parametric). Figure 1 (Right) shows a block diagram of the method. The parametric component of EVA consists of the standard DQN-style architecture, Qθ, a feedforward convolutional neural network: several convolution layers followed by two linear layers that ultimately produce action-value function estimates. Training is done exactly as in DQN, briefly reviewed in Section 2 and fully described in [Mnih et al., 2015].
3.1 Trajectory selection and planning
The second to final layer of the DQN network is used to embed the currently observed state (pixels) into a lower dimensional space. Note that similarity in this space has been optimised for action-value estimation by the parametric model. Periodically (every 20 steps in all the reported experiments), the k nearest neighbours in the global buffer are queried from the current state embedding (on the basis of their `2 distance). Using the stored trajectory information, the 50 subsequent steps are also retrieved for each neighbour. Each of these k trajectories are passed to a trace computation algorithm (described below), and all of the resulting Q values are stored into the value buffer alongside their embedding. Figure 1 (Left) shows a diagram of this procedure. The non-parametric nature of this process means that while these estimates are less reliant on the accuracy of the parametric model, they are more relevant locally. This local buffer is meant to cache the results of the trace computation for states that are likely to be nearby the current state.
3.2 Computing value estimates on memory traces
By having the replay buffer maintain trajectory information, values can be propagated through time to produce trajectory-centric value estimates QNP(s, a). Figure 1 (Right) shows how the value buffer is used to derive the action-value estimate. There are several methods for estimating this value function, we shall describe n-step, trajectory-centric planning (TCP) and kernel-based RL (KBRL) trace computation algorithms. N-step estimates for trajectories from the replay buffer are calculated as follows,
VNP(st) = { maxaQθ(st, a) if t = T rt + γVNP(st+1) otherwise,
(2)
where T is the length of the trajectory and st, rtt are the states and rewards of the trajectory. These estimates utilise information in the replay buffer that might not be consolidated into the parametric model, and thus should be complementary to the purely parametric estimates. While this process will
Algorithm 1: Ephemerally Value Adjustments Input : Replay buffer D
Value buffer L Mixing hyper-parameter λ Maximum roll-out hyper-parameter τ
for e := 1,∞ do for t := 1, T do
Receive observation st from environment with embedding ht Collect trace computed values from k nearest neighbours QNP(sk, ·)|h(sk) ∈ KNN(h(st),L) QEVA(st, ·) := λQθ(ŝ, ·) + (1− λ) ∑K k=0QNP(sk,·) K at ← -greedy policy based on QEVA(st, ·) Take action at, receive reward rt+1 Append (st, at, rt+1, ht, e) to D Tm := (st:t+τ , at:t+τ , rt+1:t+τ+1, ht:t+τ , et:t+τ )|h(sm) ∈ KNN(h(st),D)) QNP ← using Tm via the TCP algorithm Append (ht, QNP) to L
end end
serve as a useful baseline, the n-step return just evaluates the policy defined by the sampled trajectory; only the initial parametric bootstrap involves an estimate of the optimal value function. W Ideally, the values at all time-steps should estimate the optimal value function,
Q(s, a)← r(s, a) + γmax a′
Q(s′, a′). (3)
Thus another way to estimate QNP(s, a) is to apply the Bellman policy improvement operator at each time step, as shown in (3). While (2) could be applied recursively, traversing the trajectory backwards, this improvement operator requires knowing the value of the counter-factual actions. We call this trajectory-centric planning. We propose using the parametric model for these off-trajectory value estimates, constructing the complete set of action-conditional value-estimates, called this trajectory-centric planning (TCP):
QNP(st, a) = { rt + γVNP(st+1) if at = a Qθ(st, a) otherwise.
(4)
This allows for the same recursive application as before,
VNP(st) = { maxaQθ(st, a) if t = T maxaQNP(st, a) otherwise,
(5)
The trajectory-centric estimates for the k nearest neighbours are then averaged with the parametric estimate on the basis of a hyper-parameter λ, as shown in Algorithm 1 and represented graphically on Figure 1 (Left). Refer to the supplementary material for a detailed algorithm.
3.3 From trajectory-centric to kernel-based planning
The above method may seem ad hoc – why trust the on-trajectory samples completely and only utilise the parametric estimates for the counter-factual actions? Why not analyse the trajectories together, rather than treating them independently? To address these concerns, we propose a generalisation of the trajectory-centric method which extends kernel-based reinforcement learning (KBRL)[Ormoneit and Sen, 2002]. KBRL is a non-parametric approach to planning with strong theoretical guarantees.2
For each action a, KBRL stores experience tuples (st, rt, st+1) ∈ Sa. Since Sa is finite (equal to the number of stored transitions), and these states have known transitions, we can perform value iteration
2Convergence to a global optima assuming that underlying MDP dynamics are Lipschitz continuous, and the kernel is appropriately shrunk as a function of data.
to obtain value estimates for all resultant states st+1 (the values of the origin states st are not needed, as the Bellman equation only evaluates states after a transition). We can obtain an approximate version of the Bellman equation by using the kernel to compare all resultant states to all origin states, as shown in Equation 6. We define a similarity kernel on states (in fact, embeddings of the current state, as described above), κ(s, s′), typically a Gaussian kernel. The action-value function of KBRL is then estimated using:
QNP(st, at) = ∑
(s,r,s′)∈Sa
κ(st, s) [ r + γmax
a′ QNP(s
′, a′) ]
(6)
In effect, the stored ‘origin’ states (s ∈ S) transition to some ‘resultant state’ (s ∈ S′) and get the stored reward. By using a similarity kernel κ(x0, x1), we can map resultant states to a distribution over the origin states. This makes the state transitions from S → S instead of S → S′, meaning that all transitions only involve states that have been previously encountered.
In the context of trajectory-centric planning, KBRL can be seen as an alternative way of dealing with counter-factual actions: estimate their effects using nearby transitions. Additionally, KBRL is not constrained to dealing with individual trajectories, since it treats all transitions independently.
We propose to add an absorbing pseudo-state ŝ to KBRL’s model whose similarity to the other pseudostates is fixed, that is, κ(st, ŝ) = C for some C > 0 for all st. Using this definition we can make KBRL softly blend similarity and parametric counter-factual action evaluation. This is accomplished by setting the pseudo-state’s value to be equal to the parametric value function evaluated at the state under comparison: when st is being evaluated, QNP(ŝ, a) ≈ Qθ(ŝ, a) thus by setting C appropriately, we can guarantee that the parametric estimates will dominate when data density is low. Note that this is in addition to the blending of value functions described in Equation 1.
KBRL can be made numerically identical to trajectory-centric planning by shrinking the kernel bandwidth (i.e., the length scale of the Gaussian kernel) and pseudo-state similarity.3 With the appropriate values, this will result in value estimates being dominated by exact matches (on-trajectory) and parametric estimates when none are found. This reduction is of interest as KBRL is significantly more expensive than trajectory-centric planning. KBRL’s computational complexity is O(AN2) and trajectory-centric planning has a complexity of O(N), where N is the number of stored transitions and A is the cardinality of the action space. We can thus think of this parametrically augmented version of KBRL as the theoretical foundation for trajectory-centric planning. In practice, we use the TCP trace computation algorithm (Equations 4 and 5) unless otherwise noted.
4 Related work
There has been a lot of recent work on using memory-augmented neural networks as a function approximation for RL agents: using LSTMs [Bakker et al., 2003, Hausknecht and Stone, 2015], or more sophisticated architectures [Graves et al., 2016, Oh et al., 2016, Wayne et al., 2018]. However, the motivation behind these works is to obtain a better state representation in partially observable or non-Markovian environments, in which feed-forward models would not be appropriate. The focus of this work is on data efficiency, which is improved in a representation agnostic manner.
The main use of long term episodic memory is the replay buffer introduced by DQN.
While it is central to stable training, it also allows to significantly improve the data efficiency of the method, compare with the online counterparts that achieve stable training by having several actors [Mnih et al., 2016]. The replay frequency is hyper-parameter that has been carefully tuned in DQN. Learning cannot be sped-up by increasing the frequency of replay without harming end performance. The problem is that the network would overfit to the content of the replay buffer affecting its ability to learn a better policy. An alternative approach is prioritised experience replay [Schaul et al., 2015], which changes the data distribution used during training by biasing it toward transitions with high temporal difference error. These works use the replay buffer during training time only. Our approach aims at leveraging the replay buffer at decision time and thus is complementary to prioritisation, as it impacts the behaviour policy but not how the replay buffer is sampled from (the supplementary materials for a preliminary comparison).
3Modulo the fact that KBRL would still be able to find ‘shortcuts’ between or within trajectories owing to its exhaustive similarity comparisons between states
Using previous experience at decision time is closely related to non-parametric approaches for Qfunction approximation [Santamaría et al., 1997, Munos and Moore, 1998, Gabel and Riedmiller, 2005]. Our work is particularly related to techniques following the ideas of episodic control. Blundell et al. [2016b, MFEC] recently used local regression for Q-function estimation using the mean of the k-nearest neighbours searched over random projections of the pixel inputs. Pritzel et al. [2017] extended this line of work with NEC, using the reward signal to learn an embedding space in which to perform the local-regression. These works showed dramatic improvements in data efficiency, specially in early stages of training. This work differs from these approaches in that, rather than using memory for local regression, memory is used as a form of local planning, which is made possible by exploiting the trajectory structure of the memories in the replay buffer. Furthermore, the memory requirements of NEC is significantly larger than that of EVA. NEC uses a large memory buffer per action in addition to a replay buffer. Our work only adds a small overhead over the standard DQN replay buffer and needs to query a single replay buffer one time every several acting steps (20 in our experiments) during training. In addition, NEC and MFEC fundamentally change the structure of the model, whereas EVA is strictly supplemental. More recent works have looked at including NEC type of architecture to aid the learning of a parametric model [Nishio and Yamane, 2018, Jain and Lindsey, 2018], sharing memory requirements with NEC.
The memory-based planning aspect of our approach also has precedent in the literature. Brea [2017] explicitly compare a local regression approach (NEC) to prioritised sweeping and find that the latter is preferable, but fail to show scalable result. Savinov et al. [2018] build a memory-based graph and plan over it, but rely on a fixed exploration policy. Xiao et al. [2018] combine MCTS planning with NEC, but relies on a built-in model of the environment.
In the context of supervised learning, several works have looked at using non-parametric type of approaches to improve the performance of models using neural networks. Kaiser et al. [2016] introduced a differentiable layer of key-value pairs that can be plugged into a neural network to help it remember rare events. Works in the context of language modelling have augmented prediction with attention over recent examples to account for the distributional shift between training and testing settings, such as neural cache [Grave et al., 2016] and pointer sentinel networks [Merity et al., 2016]. The work by Sprechmann et al. [2018] is also motivated by the CLS framework. However, they use an episodic memory to improve a parametric model in the context of supervised learning and do not consider reinforcement learning.
5 Experiments
5.1 A simple example
We begin the experimental section by showing how EVA works on a simple “gridworld” environment implemented with the pycolab game engine [Stepleton, 2017]. The task is to collect a given number of coins in the minimum number of steps possible, that can be thought as a very simple variant of the travel salesman problem. At the beginning of each episode, the agent and the coins are placed at a
random location of a grid with size 5× 13, see the supplementary material for a screen-shot. The agent can take four possible actions {left, right, up, down} and receives a reward of 1 when collecting a coin and a reward of −0.01 at every step. If the agent takes an action that would it move into a wall, it stays at its current position. We restrict the maximum length of an episode to 500 steps. We use an agent featuring a two-layer convolutional neural network, followed by a fully connected layer producing a 64-dimensional embedding which is then used for the look-ups in the replay buffer of size 50K. The input is an RGB image of the maze. Results are reported in Figure 2.
Evaluation of a single episode We use the same pre-trained network (with its corresponding replay buffer) and run a single episode with and without using EVA, see Figure 2 (Left). We can see that, by leveraging the trajectories in the replay buffer, EVA immediately boosts performance of the baseline. Note that the weights of the network are exactly the same in both cases. The benefits saturate around λ = 0.4, which suggests that the policy of the non-parametric component alone is unable to generalise properly.
Evaluation of the full EVA algorithm Figure 2 (Center, Left) shows the performance of EVA on ful episodes using one and two coins evaluating different values of the mixing parameter λ. λ = 0 corresponds to the standard DQN baseline. We show the hyper-parameters that lead to the highest end performance of the baseline DQN. We can see that EVA provides a significant boost in data efficiency. For the single coin case, it requires slightly more than half of the data to obtain final performance and higher value of lambda is better. This is likely due to the fact that there are only 4K unique states, thus all states are likely to be in the replay buffer. On the two case setting, however, the number of possible states for the two coin case is approximately 195K, which is significantly larger than the replay buffer size. Again here, performance saturates around λ = 0.4.
5.2 EVA and Atari games
In order to validate whether EVA leads to gains in complex domains we evaluated our approach on the Atari Learning Environment(ALE; Bellemare et al., 2013). We used the set of 55 Atari Games, please see the supplementary material for details. The hyper-parameters were tuned using a subset of 5 games (Pong, H.E.R.O., Frostbite, Ms Pacman and Qbert). The hyper-parameters shared between the baseline and EVA (e.g. learning rate) were chosen to maximise the performance of the baseline (λ = 0) on a run over 20M frames on the selected subset of games. The influence of these hyper-parameters on EVA and the baseline are highly correlated. Performance saturates around λ = 0.4 as in the simple example. We chose the lowest frequency that would not harm performance (20 steps), the rollout length was set to 50 and the number of neighbours used for estimating QNP was set to 5. We observed that performance decreases as the number of neighbours increases. See the supplementary material for details on all hyper-parameters used.
We compared absolute performance of agents according to human normalised score as in Mnih et al. [2015]. Figure 3 summarises the obtained results, where we ran three random seeds for λ = 0 (which is our version of DQN) and EVA with λ = 0.4. In order to obtain uncertainty estimates, we report the mean and standard deviation per time step of the curves obtained by randomly selecting one random seed per game (this is, one out of three possible seeds for each of the 55 games). For reference, we also included the original DQN results from [Mnih et al., 2015]. EVA is able to improve the learning speed as well as the final performance level using exactly the same architecture and learning parameters as our baseline. It is able to achieve the end performance of the baseline in 40 million frames.
Effect of trace computation To understand how EVA helps performance, we compare three different versions of the trace computation at the core of the EVA approach. The standard (trajectory-centric) trace computation can be simplified by removing the parametric evaluations of counter-factual actions; this ablation results in the n-step trace computation (as shown in Equation 2). Since the standard trace computation can be seen as a special case of parametrically-augmented KBRL, we also consider that trace computation. Due to its increased computational cost, these experiments are only run for 40 million frames. For parametrically-augmented KBRL, a Gaussian similarity kernel is used with a bandwidth parameter of 10⁻⁴ and a parametric similarity of 10⁻².
EVA is significantly worse than the baseline with the n-step trace computation. This can be seen as evidence for the importance of the parametric evaluation of counter-factual actions. Without this additional computation, EVA's policy is too dependent on the quality of the policy expressed in the trajectories, creating a negative feedback loop that results in divergence on several games. Interestingly, the standard trace computation is as good as, if not better than, the much more costly KBRL method. While KBRL is capable of merging the data from the different trajectories into a global plan, it does not give on-trajectory information a privileged status without an extremely small bandwidth.4 In near-deterministic environments like Atari, this privileged status is appropriate and acts as a strong prior, as can be seen in the lower variance of this method.
Consolidation EVA relies on TCP at decision time. However, one would expect that after training, the parametric model would be able to consolidate the information available in the episodic memory and be capable of acting without relying on the planning process. We verified that annealing the value of λ to zero over two million steps leads to no degradation in performance on our Atari experiments. Note that when λ = 0 our agent reduces to the standard DQN agent.
4To achieve this privileged status for on-trajectory information, the minimum off-trajectory similarity must be known, which typically results in a bandwidth so small as to be numerically unstable.
6 Discussion
Despite only changing the value function underlying the behaviour policy, EVA improves the overall rate of learning. This is due to two factors. The first is that the adjusted policy should be closer to the optimal policy by better exploiting the information in the replay data. The second is that this improved policy should fill the replay buffer with more useful data. This means that the ephemeral adjustments indirectly impact the parametric value function by changing the distribution of data that it is trained on.
During the training process, as the agent explores the environment, knowledge about the value function is extracted gradually from the interactions with the environment. Since the value function drives the data acquisition process, the ability to quickly incorporate highly rewarded experiences could significantly boost the sample efficiency of the learning process.
Acknowledgments
The authors would like to thank Melissa Tan, Paul Komarek, Volodymyr Mnih, Alistair Muldal, Adrià Badia, Hado van Hasselt, Yotam Doron, Ian Osband, Daan Wierstra, Demis Hassabis, Dharshan Kumaran, Siddhant Jayakumar, Razvan Pascanu, and Oriol Vinyals. Finally, we thank the anonymous reviewers for their comments and suggestions to improve the paper. | 1. What is the primary idea behind the proposed method in the paper?
2. How does the method improve the performance of DQN?
3. What is the issue with using standard DQN as a baseline for evaluating the approach's effectiveness?
4. What are some alternative methods for enhancing DQN's performance that could be used as a comparison point?
5. Does the paper adequately demonstrate the benefits of the proposed technique over other approaches? | Review | Review
This paper presents a method for improving the performance of DQN by mixing the standard model-based value estimates with a locally-adapted "non-parametric" value estimate. The basic idea is to combine the main Q network with trajectories from the replay buffer that pass near the current state, and fit a value estimate that is (hopefully) more precise/accurate around the current state. Selecting actions based on improved value estimates in the current state (hopefully) leads to higher rewards than using the main Q network. I thought the idea presented in this paper was reasonable, and the empirical support provided was alright. Using standard DQN as a baseline for measuring the value of the approach is questionable. A stronger baseline should be used, e.g. Double DQN with prioritized experience replay. And maybe some distributional Q learning too. Deciding whether the proposed algorithm, in its current form, makes a useful contribution would require showing that it offers more benefits than alternate approaches to improving standard DQN. --- I have read the author rebuttal. |
NIPS | Title
Fast deep reinforcement learning using online adjustments from the past
Abstract
We propose Ephemeral Value Adjustments (EVA): a means of allowing deep reinforcement learning agents to rapidly adapt to experience in their replay buffer. EVA shifts the value predicted by a neural network with an estimate of the value function found by planning over experience tuples from the replay buffer near the current state. EVA combines a number of recent ideas around incorporating episodic memory-like structures into reinforcement learning agents: slot-based storage, content-based retrieval, and memory-based planning. We show that EVA is performant on a demonstration task and Atari games.
1 Introduction
Complementary learning systems [McClelland et al., 1995, CLS] combine two mechanisms for learning: one fast and highly adaptive but poor at generalising, the other slow to learn and consequently better at generalising across many examples. The need for two systems reflects the typical trade-off between the sample efficiency and the computational complexity of a learning algorithm. We argue that the majority of contemporary deep reinforcement learning systems fall into the latter category: slow, gradient-based updates combined with incremental updates from Bellman backups result in systems that are good at generalising, as evidenced by many successes [Mnih et al., 2015, Silver et al., 2016, Moravčík et al., 2017], but take many steps in an environment to achieve this feat.
RL methods are often categorised as either model-free methods or model-based RL methods [Sutton and Barto, 1998]. In practice, model-free methods are typically fast at acting time, but computationally expensive to update from experience, whilst model-based methods can be quick to update but expensive to act with (as on-the-fly planning is required). Recently there has been interest in incorporating episodic memory-like structures into reinforcement learning algorithms [Blundell et al., 2016a, Pritzel et al., 2017], potentially providing increases in flexibility and learning speed, driven by motivations from the neuroscience literature known as Episodic Control [Dayan and Daw, 2008, Gershman and Daw, 2017]. Episodic Control uses episodic memory in lieu of a learnt model of the environment, aiming for a different computational trade-off than model-free and model-based approaches.
We will be interested in a hybrid approach, motivated by the observations of CLS [McClelland et al., 1995], where we will build an agent with two systems: one slow and general (model-free) and the other fast and adaptive (episodic control-like). Similar to previous proposals for agents, the fast, adaptive subsystem of our agent uses episodic memories to remember and later mimic previously experienced rewarding sequences of states and actions. This can be seen as a memory-based form of planning [Silver et al., 2008], in which related experiences are recalled to inform decisions. Planning
in this context can be thought of as the re-evaluation of past experience using current knowledge to improve model-free value estimates.
Critical to many approaches to deep reinforcement learning is the replay buffer [Mnih et al., 2015, Espeholt et al., 2018]. The replay buffer stores previously seen tuples of experience: state, action, reward, and next state. These stored experience tuples are then used to train a value function approximator using gradient descent. Typically one step of gradient descent on data from the replay buffer is taken per action in the environment, as (with the exception of [Barth-Maron et al., 2018]) a greater reliance on replay data leads to unstable performance. Consequently, we propose that the replay buffer may frequently contain information that could significantly improve the policy of an agent but never be fully integrated into the decision making of an agent. We posit that this happens for three reasons: (i) the slow, global gradient updates to the value function due to noisy gradients and the stability of learning dynamics, (ii) the replay buffer is of limited size and experience tuples are regularly removed (thus limiting the opportunity for gradient descent to learn from them), (iii) training from experience tuples neglects the trajectory nature of an agent's experience: one tuple occurs after another and so information about the value of the next state should be quickly integrated into the value of the current state.
In this work we explore a method of allowing deep reinforcement learning agents to simultaneously: (i) learn the parameters of the value function approximation slowly, and (ii) adapt the value function quickly and locally within an episode. Adaptation of the value function is achieved by planning over previously experienced trajectories (sequences of temporally adjacent tuples) that are grounded in estimates from the value function approximation. This process provides a complementary way of estimating the value function.
Interestingly our approach requires very little modification of existing replay-based deep reinforcement learning agents: in addition to storing the current state and next state (which are typically large: full inputs to the network), we propose to also store trajectory information (pointers to successor tuples) and one layer of current hidden activations (typically much smaller than the state). Using this information our method adapts the value function prediction using memory-based rollouts of previous experience based on the hidden representation. The adjustment to the value function is not stored after it is used to take an action (thus it is ephemeral). We call our method Ephemeral Value Adjustment (EVA).
2 Background
The action-value function of a policy π is defined as Qπ(s, a) = Eπ[∑t γ^t rt | s, a] [Sutton and Barto, 1998], where s and a are the initial state and action respectively, γ ∈ [0, 1] is a discount factor, and the expectation denotes that π is followed thereafter. Similarly, the value function under the policy π at state s is given by V π(s) = Eπ[∑t γ^t rt | s] and is simply the expected return for following policy π starting at state s.
In value-based model-free reinforcement learning methods, the action-value function is represented using a function approximator. Deep Q-Network agents [Mnih et al., 2015, DQN] use Q-learning [Watkins and Dayan, 1992] to learn an action-value function Qθ(st, at) to rank which action at is best to take in each state st at step t. Qθ is parameterised by a convolutional neural network (CNN), with parameters collectively denoted by θ, that takes a 2D pixel representation of the state st as input, and outputs a vector containing the value of each action at that state. The agent executes an ε-greedy policy to trade off exploration and exploitation: with probability ε the agent picks an action uniformly at random, otherwise it picks the action at = argmaxa Q(st, a).
When the agent observes a transition, DQN stores the (st, at, rt, st+1) tuple in a replay buffer, the contents of which are used for training. This neural network is trained by minimizing the squared error between the network's output and the Q-learning target yt = rt + γ maxa Q̃(st+1, a), for a subset of transitions sampled at random from the replay buffer. The target network Q̃(st+1, a) is an older version of the value network that is updated periodically. It was shown by Mnih et al. [2015] that both the use of a target network and the sampling of uncorrelated transitions from the replay buffer are critical for stable training.
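As a minimal illustration of this update (an editorial sketch in Python, not code from the paper; the helper name and toy minibatch are our own), the target computation for a batch of replayed transitions might look as follows, with terminal transitions masking out the bootstrap term, a standard detail not spelled out in the equation above:

import numpy as np

def dqn_targets(rewards, next_q, terminal, gamma=0.99):
    # y_t = r_t + gamma * max_a Q_tilde(s_{t+1}, a); terminal transitions drop the bootstrap
    return rewards + gamma * (1.0 - terminal) * next_q.max(axis=1)

# toy minibatch: 3 transitions, 4 actions
y = dqn_targets(np.array([0.0, 1.0, -0.01]), np.random.randn(3, 4), np.array([0.0, 1.0, 0.0]))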
3 Ephemeral Value Adjustments
Ephemeral value adjustments are a way to augment an arbitrary value-based off-policy agent. This is accomplished through a trace computation algorithm, which rapidly produces value estimates by combining previously encountered trajectories with parametric estimates. Our agent consists of three components: a standard parametric reinforcement learner whose replay buffer is augmented to maintain trajectory information, a trace computation algorithm that periodically plans over subsets of data in the replay buffer, and a small value buffer which stores the value estimates resulting from the planning process. The overall policy of EVA is dictated by the action-value function,
Q(s, a) = λQθ(s, a) + (1 − λ)QNP(s, a)     (1)
Qθ is the value estimate from the parametric model and QNP is the value estimate from the trace computation algorithm (non-parametric). Figure 1 (Right) shows a block diagram of the method. The parametric component of EVA consists of the standard DQN-style architecture, Qθ, a feedforward convolutional neural network: several convolution layers followed by two linear layers that ultimately produce action-value function estimates. Training is done exactly as in DQN, briefly reviewed in Section 2 and fully described in [Mnih et al., 2015].
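To make the mixing concrete, here is a minimal sketch (ours, not the released implementation; the function name and toy shapes are assumptions) combining Equation 1 with the neighbour averaging used in Algorithm 1 below:

import numpy as np

def eva_q(q_theta, q_np_neighbours, lam=0.4):
    # Equation 1, with Q_NP(s, .) taken as the mean over the K nearest
    # value-buffer entries, as in Algorithm 1
    return lam * q_theta + (1.0 - lam) * q_np_neighbours.mean(axis=0)

action = int(np.argmax(eva_q(np.random.randn(4), np.random.randn(5, 4))))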
3.1 Trajectory selection and planning
The second-to-final layer of the DQN network is used to embed the currently observed state (pixels) into a lower-dimensional space. Note that similarity in this space has been optimised for action-value estimation by the parametric model. Periodically (every 20 steps in all the reported experiments), the k nearest neighbours in the global buffer are queried from the current state embedding (on the basis of their ℓ2 distance). Using the stored trajectory information, the 50 subsequent steps are also retrieved for each neighbour. Each of these k trajectories is passed to a trace computation algorithm (described below), and all of the resulting Q values are stored in the value buffer alongside their embedding. Figure 1 (Left) shows a diagram of this procedure. The non-parametric nature of this process means that while these estimates are less reliant on the accuracy of the parametric model, they are more relevant locally. This local buffer is meant to cache the results of the trace computation for states that are likely to be near the current state.
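A minimal sketch of this lookup (our own illustration; real code must also respect episode boundaries via the stored episode index, which we omit here):

import numpy as np

def nearest_trajectories(query_h, embeddings, k=10, rollout=50):
    # k stored embeddings closest to the query under the l2 distance
    neighbours = np.argsort(np.linalg.norm(embeddings - query_h, axis=1))[:k]
    n = len(embeddings)
    # tuples are stored in trajectory order, so tuple i+1 succeeds tuple i;
    # expand each neighbour to its `rollout` successor tuples
    return [np.arange(i, min(i + rollout, n)) for i in neighbours]

spans = nearest_trajectories(np.random.randn(64), np.random.randn(1000, 64))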
3.2 Computing value estimates on memory traces
By having the replay buffer maintain trajectory information, values can be propagated through time to produce trajectory-centric value estimates QNP(s, a). Figure 1 (Right) shows how the value buffer is used to derive the action-value estimate. There are several methods for estimating this value function; we shall describe the n-step, trajectory-centric planning (TCP), and kernel-based RL (KBRL) trace computation algorithms. N-step estimates for trajectories from the replay buffer are calculated as follows,
VNP(st) = { maxa Qθ(st, a)      if t = T
          { rt + γ VNP(st+1)    otherwise,     (2)
where T is the length of the trajectory and st, rt are the states and rewards along the trajectory. These estimates utilise information in the replay buffer that might not be consolidated into the parametric model, and thus should be complementary to the purely parametric estimates. While this process will
Algorithm 1: Ephemeral Value Adjustments
Input : Replay buffer D
        Value buffer L
        Mixing hyper-parameter λ
        Maximum roll-out hyper-parameter τ
for e := 1, ∞ do
    for t := 1, T do
        Receive observation st from environment with embedding ht
        Collect trace-computed values from the k nearest neighbours: QNP(sk, ·) | h(sk) ∈ KNN(h(st), L)
        QEVA(st, ·) := λ Qθ(st, ·) + (1 − λ) (1/K) ∑k QNP(sk, ·)
        at ← ε-greedy policy based on QEVA(st, ·)
        Take action at, receive reward rt+1
        Append (st, at, rt+1, ht, e) to D
        Tm := (st:t+τ, at:t+τ, rt+1:t+τ+1, ht:t+τ, et:t+τ) | h(sm) ∈ KNN(h(st), D)
        QNP ← computed from Tm via the TCP algorithm
        Append (ht, QNP) to L
    end
end
serve as a useful baseline, the n-step return just evaluates the policy defined by the sampled trajectory; only the initial parametric bootstrap involves an estimate of the optimal value function. Ideally, the values at all time-steps should estimate the optimal value function,
Q(s, a) ← r(s, a) + γ maxa′ Q(s′, a′).     (3)
Thus another way to estimate QNP(s, a) is to apply the Bellman policy improvement operator at each time step, as shown in (3). While (2) could be applied recursively, traversing the trajectory backwards, this improvement operator requires knowing the value of the counter-factual actions. We propose using the parametric model for these off-trajectory value estimates, constructing the complete set of action-conditional value estimates; we call this trajectory-centric planning (TCP):
QNP(st, a) = { rt + γ VNP(st+1)    if at = a
             { Qθ(st, a)           otherwise.     (4)
This allows for the same recursive application as before,
VNP(st) = { maxa Qθ(st, a)     if t = T
          { maxa QNP(st, a)    otherwise.     (5)
The trajectory-centric estimates for the k nearest neighbours are then averaged with the parametric estimate on the basis of a hyper-parameter λ, as shown in Algorithm 1 and represented graphically in Figure 1 (Left). Refer to the supplementary material for a detailed algorithm.
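The backward recursion of Equations 4 and 5 for a single retrieved trajectory can be sketched as follows (an illustrative implementation under our own naming, not the authors' code):

import numpy as np

def tcp_values(q_theta, rewards, actions, gamma=0.99):
    # q_theta: (T+1, A) parametric estimates Q_theta(s_t, .) along s_0..s_T
    # rewards, actions: (T,) transitions of the retrieved trajectory
    v_next = q_theta[-1].max()                          # Equation 5 at t = T
    q_np = q_theta[-1]
    for t in range(len(actions) - 1, -1, -1):
        q_np = q_theta[t].copy()                        # counter-factual actions (Equation 4)
        q_np[actions[t]] = rewards[t] + gamma * v_next  # on-trajectory action (Equation 4)
        v_next = q_np.max()                             # V_NP(s_t) (Equation 5)
    return q_np                                         # Q_NP(s_0, .)

q0 = tcp_values(np.random.randn(51, 4), np.random.randn(50), np.random.randint(4, size=50))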
3.3 From trajectory-centric to kernel-based planning
The above method may seem ad hoc – why trust the on-trajectory samples completely and only utilise the parametric estimates for the counter-factual actions? Why not analyse the trajectories together, rather than treating them independently? To address these concerns, we propose a generalisation of the trajectory-centric method which extends kernel-based reinforcement learning (KBRL) [Ormoneit and Sen, 2002]. KBRL is a non-parametric approach to planning with strong theoretical guarantees.2
For each action a, KBRL stores experience tuples (st, rt, st+1) ∈ Sa. Since Sa is finite (its size equals the number of stored transitions) and these states have known transitions, we can perform value iteration
2Convergence to a global optimum, assuming that the underlying MDP dynamics are Lipschitz continuous and that the kernel is appropriately shrunk as a function of the data.
to obtain value estimates for all resultant states st+1 (the values of the origin states st are not needed, as the Bellman equation only evaluates states after a transition). We can obtain an approximate version of the Bellman equation by using the kernel to compare all resultant states to all origin states, as shown in Equation 6. We define a similarity kernel on states (in fact, embeddings of the current state, as described above), κ(s, s′), typically a Gaussian kernel. The action-value function of KBRL is then estimated using:
QNP(st, at) = ∑(s,r,s′)∈Sa κ(st, s) [ r + γ maxa′ QNP(s′, a′) ]     (6)
In effect, the stored ‘origin’ states (s ∈ S) transition to some ‘resultant state’ (s′ ∈ S′) and yield the stored reward. By using a similarity kernel κ(x0, x1), we can map resultant states to a distribution over the origin states. This makes the state transitions go from S to S instead of from S to S′, meaning that all transitions only involve states that have been previously encountered.
In the context of trajectory-centric planning, KBRL can be seen as an alternative way of dealing with counter-factual actions: estimate their effects using nearby transitions. Additionally, KBRL is not constrained to dealing with individual trajectories, since it treats all transitions independently.
We propose to add an absorbing pseudo-state ŝ to KBRL's model whose similarity to the other pseudo-states is fixed, that is, κ(st, ŝ) = C for some C > 0 for all st. Using this definition we can make KBRL softly blend similarity-based and parametric counter-factual action evaluation. This is accomplished by setting the pseudo-state's value to be equal to the parametric value function evaluated at the state under comparison: when st is being evaluated, QNP(ŝ, a) ≈ Qθ(st, a); thus by setting C appropriately, we can guarantee that the parametric estimates will dominate when data density is low. Note that this is in addition to the blending of value functions described in Equation 1.
KBRL can be made numerically identical to trajectory-centric planning by shrinking the kernel bandwidth (i.e., the length scale of the Gaussian kernel) and the pseudo-state similarity.3 With the appropriate values, this will result in value estimates being dominated by exact matches (on-trajectory) and by parametric estimates when none are found. This reduction is of interest as KBRL is significantly more expensive than trajectory-centric planning. KBRL's computational complexity is O(AN²) and trajectory-centric planning has a complexity of O(N), where N is the number of stored transitions and A is the cardinality of the action space. We can thus think of this parametrically-augmented version of KBRL as the theoretical foundation for trajectory-centric planning. In practice, we use the TCP trace computation algorithm (Equations 4 and 5) unless otherwise noted.
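For concreteness, the following sketch implements the backup of Equation 6 for a single action, with normalised kernel weights (standard in KBRL, though left implicit in Equation 6) and the fixed-similarity pseudo-state; it is our own simplified rendering under assumed names, not the paper's implementation:

import numpy as np

def kbrl_values(origins, results, rewards, v_param, bandwidth=1e-4,
                pseudo_sim=1e-2, gamma=0.99, iters=50):
    # origins, results: (N, D) embeddings of stored origin/resultant states
    # v_param: (N,) parametric values of the resultant states, used as the
    #          value of the absorbing pseudo-state when each state is evaluated
    d2 = ((results[:, None, :] - origins[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / bandwidth)                        # Gaussian similarities kappa
    w = np.concatenate([w, np.full((len(results), 1), pseudo_sim)], axis=1)
    w /= w.sum(axis=1, keepdims=True)                  # normalised kernel weights
    v = v_param.copy()
    for _ in range(iters):                             # value iteration over the finite model
        v = w[:, :-1] @ (rewards + gamma * v) + w[:, -1] * v_param
    return v                                           # V_NP for each resultant state

n, d = 32, 8
v = kbrl_values(np.random.randn(n, d), np.random.randn(n, d),
                np.random.randn(n), np.zeros(n), bandwidth=1.0)

Shrinking the bandwidth drives all off-trajectory weights towards zero, recovering the TCP behaviour described above.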
4 Related work
There has been a lot of recent work on using memory-augmented neural networks as function approximators for RL agents: using LSTMs [Bakker et al., 2003, Hausknecht and Stone, 2015], or more sophisticated architectures [Graves et al., 2016, Oh et al., 2016, Wayne et al., 2018]. However, the motivation behind these works is to obtain a better state representation in partially observable or non-Markovian environments, in which feed-forward models would not be appropriate. The focus of this work is on data efficiency, which is improved in a representation-agnostic manner.
The main use of long-term episodic memory is the replay buffer introduced by DQN. While it is central to stable training, it also significantly improves the data efficiency of the method compared with online counterparts, which achieve stable training by having several actors [Mnih et al., 2016]. The replay frequency is a hyper-parameter that has been carefully tuned in DQN. Learning cannot be sped up by increasing the frequency of replay without harming end performance: the network would overfit to the content of the replay buffer, affecting its ability to learn a better policy. An alternative approach is prioritised experience replay [Schaul et al., 2015], which changes the data distribution used during training by biasing it toward transitions with high temporal-difference error. These works use the replay buffer during training time only. Our approach aims at leveraging the replay buffer at decision time and thus is complementary to prioritisation, as it impacts the behaviour policy but not how the replay buffer is sampled from (see the supplementary material for a preliminary comparison).
3Modulo the fact that KBRL would still be able to find ‘shortcuts’ between or within trajectories owing to its exhaustive similarity comparisons between states
Using previous experience at decision time is closely related to non-parametric approaches for Q-function approximation [Santamaría et al., 1997, Munos and Moore, 1998, Gabel and Riedmiller, 2005]. Our work is particularly related to techniques following the ideas of episodic control. Blundell et al. [2016b, MFEC] recently used local regression for Q-function estimation using the mean of the k-nearest neighbours searched over random projections of the pixel inputs. Pritzel et al. [2017] extended this line of work with NEC, using the reward signal to learn an embedding space in which to perform the local regression. These works showed dramatic improvements in data efficiency, especially in early stages of training. This work differs from these approaches in that, rather than using memory for local regression, memory is used as a form of local planning, which is made possible by exploiting the trajectory structure of the memories in the replay buffer. Furthermore, the memory requirements of NEC are significantly larger than those of EVA. NEC uses a large memory buffer per action in addition to a replay buffer. Our work only adds a small overhead over the standard DQN replay buffer and needs to query a single replay buffer once every several acting steps (20 in our experiments) during training. In addition, NEC and MFEC fundamentally change the structure of the model, whereas EVA is strictly supplemental. More recent works have looked at including NEC-style architectures to aid the learning of a parametric model [Nishio and Yamane, 2018, Jain and Lindsey, 2018], sharing memory requirements with NEC.
The memory-based planning aspect of our approach also has precedent in the literature. Brea [2017] explicitly compares a local regression approach (NEC) to prioritised sweeping and finds that the latter is preferable, but fails to show scalable results. Savinov et al. [2018] build a memory-based graph and plan over it, but rely on a fixed exploration policy. Xiao et al. [2018] combine MCTS planning with NEC, but rely on a built-in model of the environment.
In the context of supervised learning, several works have looked at using non-parametric approaches to improve the performance of models using neural networks. Kaiser et al. [2016] introduced a differentiable layer of key-value pairs that can be plugged into a neural network to help it remember rare events. Works in the context of language modelling have augmented prediction with attention over recent examples to account for the distributional shift between training and testing settings, such as the neural cache [Grave et al., 2016] and pointer sentinel networks [Merity et al., 2016]. The work by Sprechmann et al. [2018] is also motivated by the CLS framework. However, they use an episodic memory to improve a parametric model in the context of supervised learning and do not consider reinforcement learning.
5 Experiments
5.1 A simple example
We begin the experimental section by showing how EVA works on a simple “gridworld” environment implemented with the pycolab game engine [Stepleton, 2017]. The task is to collect a given number of coins in the minimum number of steps possible, which can be thought of as a very simple variant of the travelling salesman problem. At the beginning of each episode, the agent and the coins are placed at a
random location of a grid of size 5 × 13; see the supplementary material for a screenshot. The agent can take four possible actions {left, right, up, down} and receives a reward of 1 when collecting a coin and a reward of −0.01 at every step. If the agent takes an action that would move it into a wall, it stays at its current position. We restrict the maximum length of an episode to 500 steps. We use an agent featuring a two-layer convolutional neural network, followed by a fully connected layer producing a 64-dimensional embedding which is then used for the look-ups in the replay buffer of size 50K. The input is an RGB image of the maze. Results are reported in Figure 2.
Evaluation of a single episode We use the same pre-trained network (with its corresponding replay buffer) and run a single episode with and without using EVA, see Figure 2 (Left). We can see that, by leveraging the trajectories in the replay buffer, EVA immediately boosts performance of the baseline. Note that the weights of the network are exactly the same in both cases. The benefits saturate around λ = 0.4, which suggests that the policy of the non-parametric component alone is unable to generalise properly.
Evaluation of the full EVA algorithm Figure 2 (Center, Right) shows the performance of EVA on full episodes using one and two coins, evaluating different values of the mixing parameter λ. λ = 0 corresponds to the standard DQN baseline. We show the hyper-parameters that lead to the highest end performance of the baseline DQN. We can see that EVA provides a significant boost in data efficiency. For the single-coin case, it requires slightly more than half of the data to reach the final performance of the baseline, and a higher value of λ is better. This is likely due to the fact that there are only 4K unique states, so all states are likely to be in the replay buffer. In the two-coin setting, however, the number of possible states is approximately 195K, which is significantly larger than the replay buffer size. Again, performance saturates around λ = 0.4.
5.2 EVA and Atari games
In order to validate whether EVA leads to gains in complex domains, we evaluated our approach on the Arcade Learning Environment (ALE; Bellemare et al., 2013). We used a set of 55 Atari games; please see the supplementary material for details. The hyper-parameters were tuned using a subset of 5 games (Pong, H.E.R.O., Frostbite, Ms Pacman and Qbert). The hyper-parameters shared between the baseline and EVA (e.g. learning rate) were chosen to maximise the performance of the baseline (λ = 0) on a run over 20M frames on the selected subset of games. The influences of these hyper-parameters on EVA and on the baseline are highly correlated. Performance saturates around λ = 0.4, as in the simple example. We chose the lowest update frequency that would not harm performance (20 steps), the rollout length was set to 50, and the number of neighbours used for estimating QNP was set to 5. We observed that performance decreases as the number of neighbours increases. See the supplementary material for details on all hyper-parameters used.
We compared absolute performance of agents according to human normalised score as in Mnih et al. [2015]. Figure 3 summarises the obtained results, where we ran three random seeds for λ = 0 (which is our version of DQN) and EVA with λ = 0.4. In order to obtain uncertainty estimates, we report the mean and standard deviation per time step of the curves obtained by randomly selecting one random seed per game (that is, one out of three possible seeds for each of the 55 games). For reference, we also included the original DQN results from [Mnih et al., 2015]. EVA is able to improve the learning speed as well as the final performance level using exactly the same architecture and learning parameters as our baseline. It is able to achieve the end performance of the baseline in 40 million frames.
Effect of trace computation To understand how EVA helps performance, we compare three different versions of the trace computation at the core of the EVA approach. The standard (trajectory-centric) trace computation can be simplified by removing the parametric evaluations of counter-factual actions; this ablation results in the n-step trace computation (as shown in Equation 2). Since the standard trace computation can be seen as a special case of parametrically-augmented KBRL, we also consider that trace computation. Due to its increased computational cost, these experiments are only run for 40 million frames. For parametrically-augmented KBRL, a Gaussian similarity kernel is used with a bandwidth parameter of 10⁻⁴ and a parametric similarity of 10⁻².
EVA is significantly worse than the baseline with the n-step trace computation. This can be seen as evidence for the importance of the parametric evaluation of counter-factual actions. Without this additional computation, EVA's policy is too dependent on the quality of the policy expressed in the trajectories, creating a negative feedback loop that results in divergence on several games. Interestingly, the standard trace computation is as good as, if not better than, the much more costly KBRL method. While KBRL is capable of merging the data from the different trajectories into a global plan, it does not give on-trajectory information a privileged status without an extremely small bandwidth.4 In near-deterministic environments like Atari, this privileged status is appropriate and acts as a strong prior, as can be seen in the lower variance of this method.
Consolidation EVA relies on TCP at decision time. However, one would expect that after training, the parametric model would be able to consolidate the information available in the episodic memory and be capable of acting without relying on the planning process. We verified that annealing the value of λ to zero over two million steps leads to no degradation in performance on our Atari experiments. Note that when λ = 0 our agent reduces to the standard DQN agent.
4To achieve this privileged status for on-trajectory information, the minimum off-trajectory similarity must be known, which typically results in a bandwidth so small as to be numerically unstable.
6 Discussion
Despite only changing the value function underlying the behaviour policy, EVA improves the overall rate of learning. This is due to two factors. The first is that the adjusted policy should be closer to the optimal policy by better exploiting the information in the replay data. The second is that this improved policy should fill the replay buffer with more useful data. This means that the ephemeral adjustments indirectly impact the parametric value function by changing the distribution of data that it is trained on.
During the training process, as the agent explores the environment, knowledge about the value function is extracted gradually from the interactions with the environment. Since the value function drives the data acquisition process, the ability to quickly incorporate highly rewarded experiences could significantly boost the sample efficiency of the learning process.
Acknowledgments
The authors would like to thank Melissa Tan, Paul Komarek, Volodymyr Mnih, Alistair Muldal, Adrià Badia, Hado van Hasselt, Yotam Doron, Ian Osband, Daan Wierstra, Demis Hassabis, Dharshan Kumaran, Siddhant Jayakumar, Razvan Pascanu, and Oriol Vinyals. Finally, we thank the anonymous reviewers for their comments and suggestions to improve the paper. | 1. What is the focus and contribution of the paper regarding rapid adaptation in RL agents?
2. What are the strengths of the proposed approach, particularly in combining model-free and episodic memories?
3. Do you have any concerns or questions regarding the methodology, such as the slight change in the replay buffer?
4. How does the reviewer assess the clarity and quality of the paper's content, especially in sections 3.1 and 3.2?
5. Are there any variables or concepts used without proper definition, such as 'e' in Figure 1 and 'h_t' in Algorithm 1?
6. What is the purpose of storing trajectory information, and how does it relate to the 50 subsequent steps retrieved for each neighbor?
7. Are there any typos or minor issues in the paper, such as "W Idealy"? | Review | Review
Summary: This paper proposes a method that can help an RL agent to rapidly adapt to experience in the replay buffer. The method is the combination of a slow and general component (i.e. model-free) and episodic memories which are fast and adaptive. An interesting part of this proposed approach is that it slightly changes the replay buffer by adding trajectory information but gets a good boost in performance. In addition, an extensive number of experiments have been conducted in order to verify the claim of this paper. Comments and Questions - This paper, in general, is well-written (especially the related works, they actually talk about the relation and difference with previous works) except for the followings: -- Section 3.1 and 3.2 do not have smooth flow. The message of these parts wasn't clear to me at first; it took me multiple iterations to find out what the main takeaways from these parts are. - Some variables or concepts are used without prior definition; the paper should be self-contained: 'e' in figure 1. Is it the same as 'e' in Algorithm 1? if yes, why are different formats used? if not, define. Is it a scalar? vector? - Define the counter-factual action. Again the paper should be self-contained. - Line 62: 'The adjustment to the value function is not stored after it is... ' is very confusing to me. Can you clarify? - Line 104-108: 'Since trajectory information is also stored, the 50 subsequent steps are also retrieved for each neighbour.' I am not sure I understood this one, can you explain? - Why in Algorithm 1 is 'h_t' appended to the replay buffer, when it is not something that was mentioned in the text, equations, or figure? Can you explain? In general, I like the idea of this paper and I wish the text was more clear (refer to above comments) Minor: - Line 123: 'W Idealy' is a typo? - In Algorithm 1: ' ...via the TCP algorithm', referring to the TCP equation(s) would be a better idea
NIPS | Title
Instance-Conditioned GAN
Abstract
Generative Adversarial Networks (GANs) can generate near photo-realistic images in narrow domains such as human faces. Yet, modeling complex distributions of datasets such as ImageNet and COCO-Stuff remains challenging in unconditional settings. In this paper, we take inspiration from kernel density estimation techniques and introduce a non-parametric approach to modeling distributions of complex datasets. We partition the data manifold into a mixture of overlapping neighborhoods described by a datapoint and its nearest neighbors, and introduce a model, called instance-conditioned GAN (IC-GAN), which learns the distribution around each datapoint. Experimental results on ImageNet and COCO-Stuff show that IC-GAN significantly improves over unconditional models and unsupervised data partitioning baselines. Moreover, we show that IC-GAN can effortlessly transfer to datasets not seen during training by simply changing the conditioning instances, and still generate realistic images. Finally, we extend IC-GAN to the class-conditional case and show semantically controllable generation and competitive quantitative results on ImageNet, while improving over BigGAN on ImageNet-LT. Code and trained models to reproduce the reported results are available at https://github.com/facebookresearch/ic_gan.
1 Introduction
Generative Adversarial Networks (GANs) [18] have shown impressive results in unconditional image generation [27, 29]. Despite their success, GANs present optimization difficulties and can suffer from mode collapse, resulting in the generator not being able to obtain a good distribution coverage, and often producing poor quality and/or low diversity generated samples. Although many approaches attempt to mitigate this problem – e.g. [20, 32, 35, 38] –, complex data distributions such as the one in ImageNet [45] remain a challenge for unconditional GANs [33, 36]. Class-conditional GANs [5, 39, 40, 56] ease the task of learning the data distribution by conditioning on class labels, effectively partitioning the data. Although they provide higher quality samples than their unconditional counterparts, they require labelled data, which may be unavailable or costly to obtain.
Several recent approaches explore the use of unsupervised data partitioning to improve GANs [2, 14, 17, 23, 33, 42]. While these methods are promising and yield visually appealing samples, their quality is still far from that obtained with class-conditional GANs. These methods make use of relatively coarse and non-overlapping data partitions, which oftentimes contain data points from different types of objects or scenes. This diversity of data points may result in a manifold with low
density regions, which degrades the quality of the generated samples [11]. Using finer partitions, however, tends to deteriorate results [33, 36, 42] because the clusters may contain too few data points for the generator and discriminator to properly model their data distribution.
In this work, we introduce a new approach, called instance-conditioned GAN (IC-GAN), which extends the GAN framework to model a mixture of local data densities. More precisely, IC-GAN learns to model the distribution of the neighborhood of a data point, also referred to as instance, by providing a representation of the instance as an additional input to both the generator and discriminator, and by using the neighbors of the instance as real samples for the discriminator. By choosing a sufficiently large neighborhood around the conditioning instance, we avoid the pitfall
of excessively partitioning the data into small clusters. Given the overlapping nature of these clusters, increasing the number of partitions does not come at the expense of having fewer samples in each of them. Moreover, unlike when conditioning on discrete cluster indices, conditioning on instance representations naturally leads the generator to produce similar samples for similar instances. Interestingly, once trained, our IC-GAN can be used to effortlessly transfer to other datasets not seen during training by simply swapping out the conditioning instances at inference time.
IC-GAN bears similarities with kernel density estimation (KDE), a non-parametric density estimator in the form of a mixture of parametrized kernels modeling the density around each training data point – see e.g. [4]. Similar to KDE, IC-GAN can be seen as a mixture density estimator, where each component is obtained by conditioning on a training instance. Unlike KDE, however, we do not model the data likelihood explicitly, but take an adversarial approach in which we model the local density implicitly with a neural network that takes as input the conditioning instance as well as a noise vector. Therefore, the kernel in IC-GAN is no longer independent of the data point on which we condition, and instead of a kernel bandwidth parameter, we control the smoothness by choosing the size of the neighborhood of an instance from which we sample the real samples to be fed to the discriminator.
We validate our approach on two image generation tasks: (1) unlabeled image generation where there is no class information available, and (2) class-conditional image generation. For the unlabeled scenario, we report results on the ImageNet and COCO-Stuff datasets. We show that IC-GAN outperforms previous approaches in unlabeled image generation on both datasets. Additionally, we perform a series of transfer experiments and demonstrate that an IC-GAN trained on ImageNet achieves better generation quality and diversity when testing on COCO-Stuff than the same model trained on COCO-Stuff. In the class-conditional setting, we show that IC-GAN can generate images with controllable semantics – by adapting both class and instance –, while achieving competitive sample quality and diversity on the ImageNet dataset. Finally, we test IC-GAN on ImageNet-LT, a variant of ImageNet with a long-tailed class distribution, highlighting the benefits of non-parametric density estimation in datasets with unbalanced classes. Figure 1 shows IC-GAN unlabeled ImageNet generations (a), IC-GAN class-conditional ImageNet generations (b), and IC-GAN transfer generations both in the unlabeled (c) and controllable class-conditional (d) settings.
2 Instance-conditioned GAN
The key idea of IC-GAN is to model the distribution of a complex dataset by leveraging fine-grained overlapping clusters in the data manifold, where each cluster is described by a datapoint xi – referred to as instance – and its nearest neighbors set Ai in a feature space. Our objective is to model the underlying data distribution p(x) as a mixture of conditional distributions p(x|hi) around each of M instance feature vectors hi in the dataset, such that p(x) ≈ (1/M) ∑i p(x|hi).
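Viewed this way, sampling from IC-GAN is ancestral sampling from the mixture; a minimal sketch follows (our own naming; `generator` stands in for the trained generator introduced below):

import numpy as np

def sample_icgan(generator, stored_h, num_samples, z_dim=128, rng=np.random):
    # ancestral sampling from p(x) ≈ (1/M) Σ_i p(x|h_i): draw a conditioning
    # instance uniformly, then draw from the implicit conditional via the generator
    samples = []
    for _ in range(num_samples):
        h = stored_h[rng.randint(len(stored_h))]
        samples.append(generator(rng.randn(z_dim), h))
    return samples

# stand-in generator just to make the sketch executable
xs = sample_icgan(lambda z, h: (z, h), np.random.randn(1000, 2048), 4)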
More precisely, given an unlabeled dataset D = {x1, . . . , xM} with M data samples xi and an embedding function fφ parametrized by φ, we start by extracting instance features hi = fφ(xi) ∀ xi ∈ D, where fφ(·) is learned in an unsupervised or self-supervised manner. We then define the set Ai of k nearest neighbors for each data sample using the cosine similarity – as is common in nearest neighbor classifiers, e.g. [53, 54] – over the features hi. Figure 2a depicts a sample xi and its nearest neighbors.
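A brute-force sketch of this neighborhood construction (our own illustration; at ImageNet scale one would use approximate nearest-neighbor search rather than the dense M×M similarity matrix):

import numpy as np

def build_neighbourhoods(h, k=50):
    # h: (M, D) instance features; returns (M, k) indices of the k most
    # cosine-similar datapoints per instance (the instance itself included)
    h_norm = h / np.linalg.norm(h, axis=1, keepdims=True)
    return np.argsort(-(h_norm @ h_norm.T), axis=1)[:, :k]

A = build_neighbourhoods(np.random.randn(1000, 2048), k=50)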
We are interested in implicitly modelling the conditional distributions p(x|hi) with a generator GθG(z, hi), implemented by a deep neural network with parameters θG. The generator transforms samples from a unit Gaussian prior z ∼ N(0, I) into samples x from the conditional distribution p(x|hi), where hi is the feature vector of an instance xi sampled from the training data. In IC-GAN, we adopt an adversarial approach to train the generator GθG. Therefore, our generator is jointly trained with a discriminator DθD(x, hi) that discerns between real and generated neighbors of hi, as shown in Figure 2b. Note that for each hi, real neighbors are sampled uniformly from Ai. Both G and D engage in a two-player min-max game where they try to find the Nash equilibrium of the following equation:
minG maxD  Exi∼p(x), xn∼U(Ai)[log D(xn, fφ(xi))] + Exi∼p(x), z∼p(z)[log(1 − D(G(z, fφ(xi)), fφ(xi)))].     (1)
Note that when training IC-GAN we use all available training datapoints to condition the model. At inference time, as in non-parametric density estimation methods such as KDE, the generator of IC-GAN also requires instance features, which may come from the training distribution or a different one.
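For illustration, the two loss terms of Equation 1 can be written as follows (a sketch assuming a sigmoid discriminator output; the BigGAN and StyleGAN2 instantiations described later use their own losses, e.g. hinge):

import numpy as np

def ic_gan_losses(d_real, d_fake, eps=1e-8):
    # d_real: D(x_n, f(x_i)) on sampled neighbours; d_fake: D(G(z, f(x_i)), f(x_i))
    d_loss = -(np.log(d_real + eps) + np.log(1.0 - d_fake + eps)).mean()
    g_loss = np.log(1.0 - d_fake + eps).mean()  # generator minimises this term of Equation 1
    return d_loss, g_loss

d_loss, g_loss = ic_gan_losses(np.random.uniform(size=16), np.random.uniform(size=16))

In practice the non-saturating generator loss (maximising log D on fakes) is a common substitute for the second term.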
Extension to class-conditional generation. We extend IC-GAN for class-conditional generation by additionally conditioning the generator and discriminator on a class label y. More precisely, given a labeled dataset Dl = {(x1, y1), . . . , (xM, yM)} with M data sample pairs (xi, yi) and an embedding function fφ, we extract instance features hi = fφ(xi) ∀ xi ∈ Dl, where fφ(·) is learned in an unsupervised, self-supervised, or supervised manner. We then define the set Ai of k nearest neighbors for each data sample using the cosine similarity over the features hi, where neighbors may be from different classes. This results in neighborhoods where the number of neighbors belonging to the same class as the instance hi is often smaller than k. During training, real neighbors xj and their respective labels yj are sampled uniformly from Ai for each hi. In the class-conditional case, we model p(x|hi, yj) with a generator GθG(z, hi, yj) trained jointly with a discriminator DθD(x, hi, yj).
3 Experimental evaluation
We describe our experimental setup in Section 3.1, followed by results presented in the unlabeled setting in Section 3.2, dataset transfer in Section 3.3 and class-conditional generation in Section 3.4. We analyze the impact of the number of stored instances and neighborhood size in Section 3.5.
3.1 Experimental setup
Datasets. We evaluate our model in the unlabeled scenario on ImageNet [45] and COCO-Stuff [6]. The ImageNet dataset contains 1.2M and 50k images for training and evaluation, respectively. COCO-Stuff is a very diverse and complex dataset which contains multi-object images and has been widely used for complex scene generation. We use the train and evaluation splits of [8], and the (un)seen subsets of the evaluation images with only class combinations that have (not) been seen during training. These splits contain 76k, 2k, 675 and 1.3k images, respectively. For the class-conditional image generation, we use ImageNet as well as ImageNet-LT [34]. The latter is a long-tail variant of ImageNet that contains a subset of 115k samples, where the 1,000 classes have between 5 and 1,280 samples each. Moreover, we use some samples of four additional datasets to highlight the transfer abilities of IC-GAN: Cityscapes [10], MetFaces [28], PACS [31] and Sketches [15].
Evaluation protocol. We report Fréchet Inception Distance (FID) [22], Inception Score (IS) [47], and LPIPS [57]. LPIPS computes the distance between the AlexNet activations of two images generated with two different latent vectors and the same conditioning. On ImageNet, we follow [5], and
compute FID over 50k generated images, using the 50k real validation samples as reference. On COCO-Stuff and ImageNet-LT, we compute the FID for each of the splits using all images in the split as reference, and sample the same number of images. Additionally, in ImageNet-LT we stratify the FID by grouping classes based on the number of train samples: more than 100 (many-shot FID), between 20 and 100 (med-shot FID), and less than 20 (few-shot FID). For the reference set, we split the validation images along these three groups of classes, and generate a matching number of samples per group. In order to compute all above-mentioned metrics, IC-GAN requires instance features for sampling. Unless stated otherwise, we store 1,000 training set instances by applying k-means clustering to the training set and selecting the features of the data point that is closest to each centroid. All quantitative metrics for IC-GAN are reported over five random seeds for the input noise when sampling from the model.
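A sketch of this instance selection step (our own code; at full ImageNet scale one would use mini-batch k-means and chunked distance computations rather than the dense distance tensor below):

import numpy as np
from sklearn.cluster import KMeans

def select_instances(features, n=1000, seed=0):
    # fit k-means, then return the index of the datapoint closest to each centroid
    km = KMeans(n_clusters=n, random_state=seed).fit(features)
    d2 = ((features[:, None, :] - km.cluster_centers_[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=0)                    # one training index per centroid

idx = select_instances(np.random.randn(5000, 64), n=10)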
Network architectures and hyperparameters. As feature extractor fφ, we use a ResNet50 [21] trained in a self-supervised way with SwAV [7] for the unlabeled scenario; for the class-conditional IC-GAN, we use a ResNet50 trained for the classification task on either ImageNet or ImageNet-LT [26]. For ImageNet experiments, we use BigGAN [5] as a baseline architecture, given its superior image quality and ubiquitous use in conditional image generation. For IC-GAN, we replace the class embedding layers in the generator by a fully connected layer that takes the instance features as input and reduces their dimensionality from 2,048 to 512; the same approach is followed to adapt the discriminator. For COCO-Stuff, we additionally include the state-of-the-art unconditional StyleGAN2 architecture [29], as it has shown good generation quality and diversity in the lower data regime [28, 29]. We follow its class-conditional version [28] to extend it to IC-GAN by replacing the input class embedding by the instance features. Unless stated otherwise, we set the size of the neighborhoods to k=50 for ImageNet and k=5 for both COCO-Stuff and ImageNet-LT. See the supplementary material for details on the architecture and optimization hyperparameters.
3.2 Unlabeled setting
ImageNet. We start by comparing IC-GAN against previous work in Table 1. Note that the unconditional BigGAN baseline is trained by setting all labels in the training set to zero, following [36, 42]. IC-GAN surpasses all previous approaches at both 64×64 and 128×128 resolutions in both FID and IS scores. At 256×256 resolution, IC-GAN outperforms the concurrent unconditional diffusion-based model of [12]; the only other result we are aware of in this setting. Additional results in terms of precision and recall can be found in Table 8 in the supplementary material.
As shown in Figure 1a, IC-GAN generates high quality images preserving most of the appearance of the conditioning instance. Note that generated images are not mere training memorizations; as shown in the supplementary material, generated images differ substantially from the nearest training samples.
COCO-Stuff. We proceed with the evaluation of IC-GAN on COCO-Stuff in Table 2. We also compare to state-of-the-art complex scene generation pipelines which rely on labeled bounding box annotations as conditioning – LostGANv2 [49] and OC-GAN [50]. Both of
these approaches use tailored architectures for complex scene generation, which have at least twice the number of parameters of IC-GAN. Our IC-GAN matches or improves upon the unconditional version of the same backbone architecture in terms of FID in all cases, except for the training FID with the StyleGAN2 backbone at 256×256 resolution. Overall, the StyleGAN2 backbone is superior to BigGAN on this dataset, and StyleGAN2-based IC-GAN achieves state-of-the-art FID scores, even when compared to the bounding-box-conditioned LostGANv2 and OC-GAN. IC-GAN exhibits notably higher LPIPS than LostGANv2 and OC-GAN, which could be explained by the fact that the latter only leverage one real sample per input conditioning during training, whereas IC-GAN uses multiple real neighboring samples per instance, naturally favouring diversity in the generated images. As shown in figures 3b and 3c, IC-GAN generates high quality diverse images given the input instance. A qualitative comparison between LostGANv2, OC-GAN and IC-GAN can be found in Section E of the supplementary material.
3.3 Off-the-shelf transfer to other datasets
extractor and generator. When we replace the conditioning instances from COCO-Stuff with those of ImageNet, we obtain a train FID score of 43.5, underlining the important distribution shift that can be implemented by changing the conditioning instances.
Interestingly, the transferred IC-GAN also outperforms LostGANv2 and OC-GAN which condition on labeled bounding box annotations. Transferring the model from ImageNet boosts diversity w.r.t. the model trained on COCO-Stuff (see LPIPS in Table 2), which may be in part due to the larger k=50 used for ImageNet training, compared to k=5 when training on COCO-Stuff. Qualitative results of COCO-Stuff generations from the ImageNet pre-trained IC-GAN can be found in Figure 1c (top row) and Figure 3d. These generations suggest that IC-GAN is able to effectively leverage the large scale training on ImageNet to improve the quality and diversity of the COCO-Stuff scene generation, which contains significantly less data to train.
We further explore how the ImageNet-trained IC-GAN transfers to conditioning on other datasets using Cityscapes, MetFaces, and PACS in Figure 1c. Generated images still preserve the semantics and style of the conditioning images for all datasets, although their quality degrades compared to the samples in Figure 1a, as the instances in these datasets – in particular MetFaces and PACS – are very different from the ImageNet ones. See Section F in the supplementary material for more discussion, additional evaluations, and more qualitative examples of dataset transfer.
3.4 Class-conditional setting
ImageNet. In Table 3, we show that the class-conditioned IC-GAN outperforms BigGAN in terms of both FID and IS across all resolutions, except for the FID at 128×128 resolution. It is worth mentioning that, unlike BigGAN, IC-GAN can control the semantics of the generated images by either fixing the instance features and swapping the class conditioning, or by fixing the class conditioning and swapping the instance features; see Figure 1b. As shown in the figure, generated images preserve the semantics of both the class label and the instance, generating different dog breeds on similar backgrounds, or generating camels in the snow, an unseen scenario in ImageNet to the best of our knowledge. Moreover, in
Figure 1d, we show the transfer capabilities of our class-conditional IC-GAN trained on ImageNet and conditioned on instances from other datasets, generating camels in the grass, zebras in the city, and husky dogs with the style of MetFaces and PACS instances. These controllable conditionings enable the generation of images that are not present or very rare in the ImageNet dataset, e.g. camels surrounded by snow or zebras in the city. Additional qualitative transfer results which either fix the class label and swap the instance features, or vice-versa, can be found in Section F of the supplementary material.
ImageNet-LT. Due to the class imbalance in ImageNet-LT, selecting a subset of instances with either k-means or uniform sampling can easily result in ignoring rare classes, and penalizing their generation. Therefore, for this dataset we use all available 115k training instances to sample from the model and compute the metrics. In Table 4 we compare to BigGAN, showing that IC-GAN is better in terms of FID and IS for modeling this long-tailed distribution. Note that the improvement is noticeable for each of the three groups of classes with different numbers of samples (see the many/med/few columns). In Section G of the supplementary material we present experiments using class balancing to train BigGAN, showing that it improves neither the quality nor the diversity of generated samples. We
hypothesize that oversampling some classes may result in overfitting for the discriminator, leading to low quality image generations.
3.5 Selection of stored instances and neighborhood size
In this section, we empirically justify the k-means procedure used to select the instances to sample from the model, consider the effect of the number of instances used to sample from the model, as well as the effect of the size k of the neighborhoods Ai used during training. The impact of different choices for the instance embedding function fφ(x) is evaluated in the supplementary material.
Selecting instances to sample from the model. In Figure 4 (left), we compare two instance selection methods in terms of FID: uniform sampling (Random) and k-means (Clustered), where we select the closest instance to each cluster centroid, using k = 50 neighbors during training (solid and dotted green lines). Random selection is consistently outperformed by k-means; selecting only 1,000 instances with k-means results in better FID than randomly selecting 5,000 instances. Moreover, storing more than 1,000 instances selected with k-means does not result in noticeable improvements in FID. Additionally, we computed FID metrics for the 1,000 ground truth images that are closest to the k-means cluster centers, obtaining 41.8 ± 0.2 FID, which is considerably higher (worse) than the 10.4 ± 0.1 FID we obtain with IC-GAN (k = 50) when using the same 1,000 cluster centers. This supports the idea that IC-GAN is generating data points that go beyond the stored instances, better recovering the data distribution.
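As an illustration, a minimal sketch of this k-means selection step, assuming the instance features are available as a NumPy array (the helper name select_instances is ours, not from the released code):

    import numpy as np
    from sklearn.cluster import KMeans

    def select_instances(features, n_store=1000, seed=0):
        # Cluster L2-normalized instance features with k-means and store,
        # for each centroid, the feature of the closest real training point.
        feats = features / np.linalg.norm(features, axis=1, keepdims=True)
        km = KMeans(n_clusters=n_store, random_state=seed).fit(feats)
        stored = []
        for c in range(n_store):
            members = np.flatnonzero(km.labels_ == c)
            d = np.linalg.norm(feats[members] - km.cluster_centers_[c], axis=1)
            stored.append(members[np.argmin(d)])
        return feats[np.array(stored)]  # the n_store conditioning features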
We consider precision (P) and recall (R) [30] (using an InceptionV3 [51] as feature extractor and sampling 10,000 generated and real images) to disentangle the factors driving the improvement in FID, namely image quality and diversity (coverage) – see Figure 4 (right). We see that augmenting the number of stored instances results in slightly worse precision (image quality) but notably better recall (coverage). Intuitively, this suggests that by increasing the number of stored instances, we can better recover the data density at the expense of slightly degraded image quality in lower density regions of the manifold – see e.g. [11].
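For illustration, precision and recall in this family of metrics can be computed from InceptionV3 features with the prdc package; this is an assumption for the sketch below, and not necessarily the exact implementation used for the reported numbers:

    import numpy as np
    from prdc import compute_prdc  # pip install prdc

    # Placeholders for InceptionV3 pool features of 10,000 real and 10,000
    # generated images; in practice, extract them with torchvision's
    # inception_v3 as described in the text.
    real_feats = np.random.randn(10000, 2048).astype(np.float32)
    fake_feats = np.random.randn(10000, 2048).astype(np.float32)

    metrics = compute_prdc(real_features=real_feats,
                           fake_features=fake_feats, nearest_k=5)
    print(metrics['precision'], metrics['recall'])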
Neighborhood size. In Figure 4 (both panels) we analyze the interplay between the neighborhood size and the number of instances used to recover the data distribution. For small numbers of stored instances, we observe that larger neighborhoods lead to better (lower) FID scores (left-hand side of left panel). For recall, we also observe improvements for large neighborhoods when storing few instances (left-hand side of right panel), suggesting that larger neighborhoods are more effective in recovering the data distribution from few instances. This trend is reversed for large numbers of stored instances, where smaller values of k are more effective. This supports the idea that the neighborhood size acts as a bandwidth parameter, similar to the bandwidth in KDE, that controls the smoothness of the implicitly learnt conditional distributions around instances. For example, k = 500 leads to smoother conditional distributions than k = 5, and as a result requires fewer stored instances to recover the data distribution. Moreover, as expected, we notice that the value of k does not significantly affect precision (right panel). Overall, k = 50 offers a good compromise, exhibiting top performance across all metrics when using at least 500 stored instances. We visualize the smoothness effect by means of a qualitative comparison across samples from different neighborhood sizes in Section K of the supplementary material. Using (very) small neighborhoods (e.g. k = 5) results in lower diversity in the generated images.
4 Related work
Data partitioning for GANs. Previous works have attempted to improve the image generation quality and diversity of GANs by partitioning the data manifold through clustering techniques [2, 19, 33, 36, 42, 46], or by leveraging mixture models in their design [14, 17, 23]. In particular, [36, 46] apply k-means on representations from a pre-trained feature extractor to cluster the data, and then use cluster indices to condition the generator network. Then, [19, 33] introduce an alternating two-stage approach where the first stage applies k-means to the discriminator feature space and the second stage trains a GAN conditioned on the cluster indices. Similarly, [42] proposes to train a clustering network, which outputs pseudolabels, in cooperation with the generator. Further, [2] trains a feature extractor with self-supervised pre-training tasks, and creates a k-nearest neighbor graph in the learned representation space to cluster connected points into the same sub-manifold. In this case, a different generator is then trained for each identified sub-manifold. By contrast, IC-GAN uses fine-grained overlapping data neighborhoods in tandem with conditioning on rich feature embeddings (instances) to learn a localized distribution around each data point.
Mitigating mode collapse in GANs. Works which attempt to mitigate mode collapse may also bear some similarities to ours. In [32], the discriminator takes into consideration multiple random samples from the same class to output a decision. In [35], a mixed batch of generated and real samples is fed to the discriminator with the goal of predicting the ratio of real samples in the batch. Other works use a mixture of generators [17, 23] and encourage each generator to focus on generating samples from a different mode. Similarly, in [14], the discriminator is pushed to form clusters in its representation space, where each cluster is represented by a Gaussian kernel. In turn, the generator tends to learn to generate samples covering all clusters, hence mitigating mode collapse. By contrast, we focus on discriminating between real and generated neighbors of an instance conditioning, by using a single generator network trained following the GAN formulation.
Conditioning on feature vectors. Very recent work [37] uses image self-supervised feature representations to condition a generative model whose objective is to produce a good input reconstruction; this requires storing the features of all training samples. In contrast, our objective is to learn a localized distribution (as captured by nearest neighboring images) around each conditioning instance, and we only need to save a very small subset of the dataset features to approximately recover the training distribution.
Kernel density estimation and adversarial training. Connections between adversarial training and nonparametric density estimation have been made in prior work [1]. However, to the best of our knowledge, no prior work models the dataset density in a nonparametric fashion, with a localized distribution around each data point, using a single conditional generation network.
Complex scene generation. Existing methods for complex scene generation, where natural looking scenes contain multiple objects, most often aim at controllability and rely on detailed conditionings such as scene graphs [3, 25], bounding box layouts [48–50, 58], semantic segmentation masks [9, 43, 44, 52, 55] or, more recently, freehand sketches [16]. All these methods leverage intricate pipelines to generate complex scenes and require labeled datasets. By contrast, our approach
relies on instance conditionings which control the global semantics of the generation process, and does not require any dataset labels. It is worth noting that complex scene generation is often characterized by unbalanced, strongly long-tailed datasets. Long-tail class distributions negatively affect class-conditional GANs, as they struggle to generate visually appealing samples for classes in the tail [8]. However, to the best of our knowledge, no other previous work tackles this problem for GANs.
5 Discussion
Contributions. We presented instance-conditioned GAN (IC-GAN), which models dataset distributions in a non-parametric way by conditioning both generator and discriminator on instance features. We validated our approach in the unlabeled setting, showing consistent improvements over baselines on ImageNet and COCO-Stuff. Moreover, we showed through transfer experiments, where we condition the ImageNet-trained model on instances of other datasets, the ability of IC-GAN to produce compelling samples from different data distributions. Finally, we validated IC-GAN in the class-conditional setting, obtaining competitive results on ImageNet and surpassing the BigGAN baseline on the challenging ImageNet-LT; and we showed compelling controllable generations by swapping the class conditioning given a fixed instance, or the instance given a fixed class.
Limitations. IC-GAN showed excellent image quality for labeled (class-conditional) and unlabeled image generation. However, as any machine learning tool, it has some limitations. First, like kernel density estimation approaches, IC-GAN requires storing training instances to use the model. Experimentally, we noticed that for complex datasets, such as ImageNet, using 1,000 instances is enough to approximately cover the dataset distribution. Second, the instance feature vectors used to condition the model are obtained with a pre-trained feature extractor (self-supervised in the unlabeled case) and depend on it. We speculate that this limitation might be mitigated if the feature extractor and the generator were trained jointly, and leave this as future work. Third, although we highlighted the excellent transfer potential of our approach to unseen datasets, we observed that, in the case of transfer to datasets that are very different from ImageNet, the quality of generated images degrades.
Broader impacts. IC-GAN brings with it several benefits, such as excellent image quality in labeled (class-conditional) and unlabeled image generation tasks, and the potential to transfer to unseen datasets, enabling the use of our model on a variety of datasets without the need for fine-tuning or re-training. Moreover, in the case of class-conditional image generation, IC-GAN enables controllable generation of content by adapting either the style (by changing the instance) or the semantics (by altering the class). Thus, we expect that our model can positively affect the workflow of creative content generators. That being said, with improving image quality in generative modeling, there is some potential for misuse. A common example is deepfakes, where a generative model is used to manipulate images or videos well enough that humans cannot distinguish real from fake, with the intent to misinform. We believe, however, that open research on generative image models also contributes to better understanding such synthetic content, and to detecting it where it is undesirable. Recently, the community has also started to undertake explicit efforts towards detecting manipulated content by organizing challenges such as the Deepfake Detection Challenge [13].

1. What is the focus and contribution of the paper on instance-conditioned GANs?
2. What are the strengths of the proposed approach, particularly in improving unconditional image performance baselines?
3. Do you have any concerns or suggestions regarding the diversity metrics used in the evaluation?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. What are the limitations or potential improvements regarding the transfer learning aspect of the IC-GAN approach?
Summary Of The Paper
This paper introduces Instance-Conditioned GAN (IC-GAN), which aims to model complex multi-modal distributions in an unconditional manner. The model partitions the target distribution into sub-distributions learned by conditioning on a single training point and its nearest neighbors. IC-GAN improves on unconditional image generation baselines on ImageNet and COCO-Stuff with a variety of architectures. The authors extend the model to perform class-conditional generation and transfer learning.
Review
Unconditional generation for datasets like ImageNet is a very hard problem, and this paper proposes a novel way of partitioning the dataset. The authors draw parallels to kernel density estimation (KDE) to motivate the approach, which to my knowledge has not been done before. The ease of performing transfer learning adds to the novelty of the approach.
The paper is very well written and easy to understand. The authors have conducted a thorough set of experiments and ablation studies to back up their claims.
Some observations:
The authors claim their method produces a diverse set of images but they do not share recall values anywhere apart from Fig 4. I think adding some metric of diversity in addition to FID/IS to at least some experiments would help.
In the transfer learning section, I would have liked a little more discussion about what the authors think is being transferred. For example, if we condition on an image from a different dataset, what property of that dataset is discarded and what property is preserved/transferred?
In all, I think the authors have performed a comprehensive set of experiments which help push forward our understanding of unconditional generation, and therefore I recommend acceptance.
NIPS | Title
Instance-Conditioned GAN
Abstract
Generative Adversarial Networks (GANs) can generate near photo-realistic images in narrow domains such as human faces. Yet, modeling complex distributions of datasets such as ImageNet and COCO-Stuff remains challenging in unconditional settings. In this paper, we take inspiration from kernel density estimation techniques and introduce a non-parametric approach to modeling distributions of complex datasets. We partition the data manifold into a mixture of overlapping neighborhoods described by a datapoint and its nearest neighbors, and introduce a model, called instance-conditioned GAN (IC-GAN), which learns the distribution around each datapoint. Experimental results on ImageNet and COCO-Stuff show that IC-GAN significantly improves over unconditional models and unsupervised data partitioning baselines. Moreover, we show that IC-GAN can effortlessly transfer to datasets not seen during training by simply changing the conditioning instances, and still generate realistic images. Finally, we extend IC-GAN to the class-conditional case and show semantically controllable generation and competitive quantitative results on ImageNet, while improving over BigGAN on ImageNet-LT. Code and trained models to reproduce the reported results are available at https://github.com/facebookresearch/ic_gan.
1 Introduction
Generative Adversarial Networks (GANs) [18] have shown impressive results in unconditional image generation [27, 29]. Despite their success, GANs present optimization difficulties and can suffer from mode collapse, resulting in the generator not being able to obtain a good distribution coverage, and often producing poor quality and/or low diversity generated samples. Although many approaches attempt to mitigate this problem (e.g. [20, 32, 35, 38]), complex data distributions such as the one in ImageNet [45] remain a challenge for unconditional GANs [33, 36]. Class-conditional GANs [5, 39, 40, 56] ease the task of learning the data distribution by conditioning on class labels, effectively partitioning the data. Although they provide higher quality samples than their unconditional counterparts, they require labelled data, which may be unavailable or costly to obtain.
Several recent approaches explore the use of unsupervised data partitioning to improve GANs [2, 14, 17, 23, 33, 42]. While these methods are promising and yield visually appealing samples, their quality is still far from that obtained with class-conditional GANs. These methods make use of relatively coarse and non-overlapping data partitions, which oftentimes contain data points from different types of objects or scenes. This diversity of data points may result in a manifold with low density regions, which degrades the quality of the generated samples [11]. Using finer partitions, however, tends to deteriorate results [33, 36, 42] because the clusters may contain too few data points for the generator and discriminator to properly model their data distribution.
In this work, we introduce a new approach, called instance-conditioned GAN (IC-GAN), which extends the GAN framework to model a mixture of local data densities. More precisely, IC-GAN learns to model the distribution of the neighborhood of a data point, also referred to as instance, by providing a representation of the instance as an additional input to both the generator and discriminator, and by using the neighbors of the instance as real samples for the discriminator. By choosing a sufficiently large neighborhood around the conditioning instance, we avoid the pitfall
of excessively partitioning the data into small clusters. Given the overlapping nature of these clusters, increasing the number of partitions does not come at the expense of having fewer samples in each of them. Moreover, unlike when conditioning on discrete cluster indices, conditioning on instance representations naturally leads the generator to produce similar samples for similar instances. Interestingly, once trained, our IC-GAN can be used to effortlessly transfer to other datasets not seen during training by simply swapping out the conditioning instances at inference time.
IC-GAN bears similarities with kernel density estimation (KDE), a non-parametric density estimator in the form of a mixture of parametrized kernels modeling the density around each training data point; see e.g. [4]. Similar to KDE, IC-GAN can be seen as a mixture density estimator, where each component is obtained by conditioning on a training instance. Unlike KDE, however, we do not model the data likelihood explicitly, but take an adversarial approach in which we model the local density implicitly with a neural network that takes as input the conditioning instance as well as a noise vector. Therefore, the kernel in IC-GAN is no longer independent of the data point on which we condition, and instead of a kernel bandwidth parameter, we control the smoothness by choosing the size of the neighborhood of an instance from which we sample the real samples to be fed to the discriminator.
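To make the analogy concrete, the following minimal sketch shows classical KDE with scikit-learn; the explicit bandwidth argument plays the role that the neighborhood size k plays in IC-GAN:

    import numpy as np
    from sklearn.neighbors import KernelDensity

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 2))                   # toy training data

    # Classical KDE: an explicit mixture with one fixed Gaussian kernel per
    # training point; smoothness is set by the bandwidth.
    kde = KernelDensity(kernel='gaussian', bandwidth=0.3).fit(X)
    samples = kde.sample(5)                          # draw from the density
    log_p = kde.score_samples(samples)               # explicit log-likelihood
    # IC-GAN instead learns an instance-dependent "kernel" G(z, h_i) and has
    # no explicit likelihood; only sampling is available.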
We validate our approach on two image generation tasks: (1) unlabeled image generation where there is no class information available, and (2) class-conditional image generation. For the unlabeled scenario, we report results on the ImageNet and COCO-Stuff datasets. We show that IC-GAN outperforms previous approaches in unlabeled image generation on both datasets. Additionally, we perform a series of transfer experiments and demonstrate that an IC-GAN trained on ImageNet achieves better generation quality and diversity when testing on COCO-Stuff than the same model trained on COCO-Stuff. In the class-conditional setting, we show that IC-GAN can generate images with controllable semantics (by adapting both class and instance), while achieving competitive sample quality and diversity on the ImageNet dataset. Finally, we test IC-GAN on ImageNet-LT, a version of ImageNet with a long-tailed class distribution, highlighting the benefits of non-parametric density estimation on datasets with unbalanced classes. Figure 1 shows IC-GAN unlabeled ImageNet generations (a), IC-GAN class-conditional ImageNet generations (b), and IC-GAN transfer generations both in the unlabeled (c) and controllable class-conditional (d) setting.
2 Instance-conditioned GAN
The key idea of IC-GAN is to model the distribution of a complex dataset by leveraging fine-grained overlapping clusters in the data manifold, where each cluster is described by a datapoint xi, referred to as an instance, and its nearest neighbor set Ai in a feature space. Our objective is to model the underlying data distribution p(x) as a mixture of conditional distributions p(x|hi) around each of the M instance feature vectors hi in the dataset, such that p(x) ≈ (1/M) Σ_i p(x|hi).
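Concretely, sampling from this mixture amounts to ancestral sampling, as in the sketch below (G and stored_h are assumed handles to a trained generator and the selected instance features):

    import torch

    def sample_icgan(G, stored_h, n_samples, z_dim=128):
        # p(x) ≈ (1/M) Σ_i p(x|h_i): pick a stored instance feature
        # uniformly, then sample its learned conditional p(x|h_i) through G.
        idx = torch.randint(len(stored_h), (n_samples,))
        h = stored_h[idx]                        # (n_samples, feat_dim)
        z = torch.randn(n_samples, z_dim)        # z ~ N(0, I)
        return G(z, h)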
More precisely, given an unlabeled dataset D = {x1, . . . , xM} with M data samples xi and an embedding function fφ parametrized by φ, we start by extracting instance features hi = fφ(xi) for all xi ∈ D, where fφ(·) is learned in an unsupervised or self-supervised manner. We then define the set Ai of k nearest neighbors for each data sample using the cosine similarity (as is common in nearest neighbor classifiers, e.g. [53, 54]) over the features hi. Figure 2a depicts a sample xi and its nearest neighbors.
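A brute-force sketch of this neighborhood construction follows; at ImageNet scale one would use an approximate nearest-neighbor library instead (the helper name build_neighborhoods is ours):

    import numpy as np

    def build_neighborhoods(features, k=50):
        # A_i: indices of the k nearest neighbors of each instance under
        # cosine similarity. After L2 normalization, the dot product equals
        # the cosine similarity.
        feats = features / np.linalg.norm(features, axis=1, keepdims=True)
        sim = feats @ feats.T
        # Sort by descending similarity; whether x_i itself belongs to A_i is
        # a convention choice; here column 0 (the point itself) is kept.
        return np.argsort(-sim, axis=1)[:, :k]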
We are interested in implicitly modelling the conditional distributions p(x|hi) with a generator GθG(z, hi), implemented by a deep neural network with parameters θG. The generator transforms samples from a unit Gaussian prior z ∼ N(0, I) into samples x from the conditional distribution p(x|hi), where hi is the feature vector of an instance xi sampled from the training data. In IC-GAN, we adopt an adversarial approach to train the generator GθG. Therefore, our generator is jointly trained with a discriminator DθD(x, hi) that discerns between real neighbors and generated neighbors of hi, as shown in Figure 2b. Note that for each hi, real neighbors are sampled uniformly from Ai. Both G and D engage in a two-player min-max game where they try to find the Nash equilibrium of the following equation:
min_G max_D  E_{xi∼p(x), xn∼U(Ai)} [ log D(xn, fφ(xi)) ] + E_{xi∼p(x), z∼p(z)} [ log(1 − D(G(z, fφ(xi)), fφ(xi))) ].   (1)
Note that when training IC-GAN we use all available training datapoints to condition the model. At inference time, as in non-parametric density estimation methods such as KDE, the generator of IC-GAN also requires instance features, which may come from the training distribution or a different one.
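As an illustration, a simplified training step for Eq. (1) is sketched below with the standard non-saturating GAN losses; the actual BigGAN backbone uses a hinge loss and a more elaborate data pipeline, so this in-memory version is only a sketch, and the tensor shapes (e.g. D returning one logit per sample) are assumptions:

    import torch
    import torch.nn.functional as F

    def icgan_step(G, D, images, feats, neighbors, opt_g, opt_d,
                   batch_size=64, z_dim=128):
        # Conditioning instances x_i ~ p(x): sample a batch of indices.
        i = torch.randint(len(feats), (batch_size,))
        h = feats[i]
        # Real samples x_n ~ U(A_i): one uniformly drawn neighbor each.
        j = torch.randint(neighbors.shape[1], (batch_size,))
        x_real = images[neighbors[i, j]]
        z = torch.randn(batch_size, z_dim)
        ones = torch.ones(batch_size, 1)
        zeros = torch.zeros(batch_size, 1)

        # Discriminator: tell real neighbors from generated neighbors of h.
        opt_d.zero_grad()
        x_fake = G(z, h).detach()
        d_loss = (F.binary_cross_entropy_with_logits(D(x_real, h), ones) +
                  F.binary_cross_entropy_with_logits(D(x_fake, h), zeros))
        d_loss.backward()
        opt_d.step()

        # Generator: non-saturating form of the second term of Eq. (1).
        opt_g.zero_grad()
        g_loss = F.binary_cross_entropy_with_logits(D(G(z, h), h), ones)
        g_loss.backward()
        opt_g.step()
        return d_loss.item(), g_loss.item()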
Extension to class-conditional generation. We extend IC-GAN for class-conditional generation by additionally conditioning the generator and discriminator on a class label y. More precisely, given a labeled dataset Dl = {(x1, y1), . . . , (xM, yM)} with M data sample pairs (xi, yi) and an embedding function fφ, we extract instance features hi = fφ(xi) for all xi ∈ Dl, where fφ(·) is learned in an unsupervised, self-supervised, or supervised manner. We then define the set Ai of k nearest neighbors for each data sample using the cosine similarity over the features hi, where neighbors may be from different classes. This results in neighborhoods where the number of neighbors belonging to the same class as the instance hi is often smaller than k. During training, real neighbors xj and their respective labels yj are sampled uniformly from Ai for each hi. In the class-conditional case, we model p(x|hi, yj) with a generator GθG(z, hi, yj) trained jointly with a discriminator DθD(x, hi, yj).
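The only change to the sampling of real examples is that labels travel with the neighbors, as in this sketch (labels and neighbors are assumed to be tensors of per-image class labels and precomputed Ai indices):

    import torch

    def sample_neighbor_pairs(labels, neighbors, inst_idx):
        # Class-conditional IC-GAN: a real sample is a pair (x_j, y_j) with
        # x_j ~ U(A_i); y_j may differ from the class of the instance x_i.
        j = torch.randint(neighbors.shape[1], (len(inst_idx),))
        nbr = neighbors[inst_idx, j]
        return nbr, labels[nbr]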
3 Experimental evaluation
We describe our experimental setup in Section 3.1, followed by results presented in the unlabeled setting in Section 3.2, dataset transfer in Section 3.3 and class-conditional generation in Section 3.4. We analyze the impact of the number of stored instances and neighborhood size in Section 3.5.
3.1 Experimental setup
Datasets. We evaluate our model in the unlabeled scenario on ImageNet [45] and COCO-Stuff [6]. The ImageNet dataset contains 1.2M and 50k images for training and evaluation, respectively. COCO-Stuff is a very diverse and complex dataset which contains multi-object images and has been widely used for complex scene generation. We use the train and evaluation splits of [8], and the (un)seen subsets of the evaluation images with only class combinations that have (not) been seen during training. These splits contain 76k, 2k, 675 and 1.3k images, respectively. For class-conditional image generation, we use ImageNet as well as ImageNet-LT [34]. The latter is a long-tailed variant of ImageNet that contains a subset of 115k samples, where the 1,000 classes have between 5 and 1,280 samples each. Moreover, we use some samples from four additional datasets to highlight the transfer abilities of IC-GAN: Cityscapes [10], MetFaces [28], PACS [31] and Sketches [15].
Evaluation protocol. We report Fréchet Inception Distance (FID) [22], Inception Score (IS) [47], and LPIPS [57]. LPIPS computes the distance between the AlexNet activations of two images generated with two different latent vectors and the same conditioning. On ImageNet, we follow [5], and
compute FID over 50k generated images, with the 50k real validation samples used as reference. On COCO-Stuff and ImageNet-LT, we compute the FID for each of the splits using all images in the split as reference, and sample the same number of images. Additionally, on ImageNet-LT we stratify the FID by grouping classes based on the number of train samples: more than 100 (many-shot FID), between 20 and 100 (med-shot FID), and less than 20 (few-shot FID). For the reference set, we split the validation images along these three groups of classes, and generate a matching number of samples per group. In order to compute all above-mentioned metrics, IC-GAN requires instance features for sampling. Unless stated otherwise, we store 1,000 training set instances by applying k-means clustering to the training set and selecting the features of the data point that is the closest to each one of the centroids. All quantitative metrics for IC-GAN are reported over five random seeds for the input noise when sampling from the model.
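A sketch of this stratified protocol is given below; compute_fid is a hypothetical callable standing in for any standard FID implementation (e.g. a wrapper around pytorch-fid):

    import numpy as np

    def stratified_fid(train_counts, real_by_class, fake_by_class, compute_fid):
        # Group classes by their number of training samples (ImageNet-LT
        # protocol) and report one FID per group.
        groups = {'many': [], 'med': [], 'few': []}
        for c, n in train_counts.items():
            key = 'many' if n > 100 else ('med' if n >= 20 else 'few')
            groups[key].append(c)
        return {g: compute_fid(np.concatenate([real_by_class[c] for c in cs]),
                               np.concatenate([fake_by_class[c] for c in cs]))
                for g, cs in groups.items() if cs}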
Network architectures and hyperparameters. As feature extractor fφ, we use a ResNet50 [21] trained in a self-supervised way with SwAV [7] for the unlabeled scenario; for the class-conditional IC-GAN, we use a ResNet50 trained for the classification task on either ImageNet or ImageNet-LT [26]. For ImageNet experiments, we use BigGAN [5] as a baseline architecture, given its superior image quality and ubiquitous use in conditional image generation. For IC-GAN, we replace the class embedding layers in the generator by a fully connected layer that takes the instance features as input and reduces their dimensionality from 2,048 to 512; the same approach is followed to adapt the discriminator. For COCO-Stuff, we additionally include the state-of-the-art unconditional StyleGAN2 architecture [29], as it has shown good generation quality and diversity in the lower data regime [28, 29]. We follow its class-conditional version [28] to extend it to IC-GAN by replacing the input class embedding by the instance features. Unless stated otherwise, we set the size of the neighborhoods to k=50 for ImageNet and k=5 for both COCO-Stuff and ImageNet-LT. See the supplementary material for details on the architecture and optimization hyperparameters.
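The architectural change is small; the following sketch shows the conditioning module that could replace BigGAN's class-embedding lookup, with the dimensions stated above (the module name is ours):

    import torch.nn as nn

    class InstanceConditioning(nn.Module):
        # Projects the 2048-d instance feature h to the 512-d conditioning
        # vector consumed by the generator (and, analogously, the
        # discriminator) in place of a learned class embedding.
        def __init__(self, in_dim=2048, out_dim=512):
            super().__init__()
            self.proj = nn.Linear(in_dim, out_dim)

        def forward(self, h):
            return self.proj(h)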
3.2 Unlabeled setting
ImageNet. We start by comparing IC-GAN against previous work in Table 1. Note that the unconditional BigGAN baseline is trained by setting all labels in the training set to zero, following [36, 42]. IC-GAN surpasses all previous approaches at both 64×64 and 128×128 resolutions in both FID and IS scores. At 256×256 resolution, IC-GAN outperforms the concurrent unconditional diffusion-based model of [12], the only other result we are aware of in this setting. Additional results in terms of precision and recall can be found in Table 8 in the supplementary material.
As shown in Figure 1a, IC-GAN generates high quality images preserving most of the appearance of the conditioning instance. Note that generated images are not mere training memorizations; as shown in the supplementary material, generated images differ substantially from the nearest training samples.
COCO-Stuff. We proceed with the evaluation of IC-GAN on COCO-Stuff in Table 2. We also compare to state-of-the-art complex scene generation pipelines which rely on labeled bounding box annotations as conditioning: LostGANv2 [49] and OC-GAN [50]. Both of
these approaches use tailored architectures for complex scene generation, which have at least twice the number of parameters of IC-GAN. Our IC-GAN matches or improves upon the unconditional version of the same backbone architecture in terms of FID in all cases, except for training FID with the StyleGAN2 backbone at 256×256 resolution. Overall, the StyleGAN2 backbone is superior to BigGAN on this dataset, and StyleGAN2-based IC-GAN achieves the state-of-the-art FID scores, even when compared to the bounding-box conditioned LostGANv2 and OC-GAN. IC-GAN exhibits notably higher LPIPS than LostGANv2 and OC-GAN, which could be explained by the fact that the latter only leverage one real sample per input conditioning during training, whereas IC-GAN uses multiple real neighboring samples per instance, naturally favouring diversity in the generated images. As shown in figures 3b and 3c, IC-GAN generates high quality diverse images given the input instance. A qualitative comparison between LostGANv2, OC-GAN and IC-GAN can be found in Section E of the supplementary material.
3.3 Off-the-shelf transfer to other datasets
IC-GAN can be transferred to a new dataset by simply swapping the conditioning instances at inference time, while keeping the same feature extractor and generator. When we replace the conditioning instances from COCO-Stuff with those of ImageNet, we obtain a train FID score of 43.5, underlining the important distribution shift that can be implemented by changing the conditioning instances.
1. What is the main contribution of the paper, and how does it differ from previous works on GANs?
2. How effective are the proposed Instance-conditioned GANs compared to other approaches, and what advantages do they offer?
3. What are some potential limitations or areas for improvement regarding the method's reliance on feature vectors and density estimation?
4. How might the approach be applied to other domains or tasks beyond image generation?
5. What further research directions could be explored based on the ideas presented in the paper?
Summary Of The Paper
The authors propose a new way of training GANs, which they call instance-conditioned GANs. This method is similar to conditional GANs, but instead of using a label, it uses a feature vector extracted from some feature function (a ResNet50 in the experiments). This bypasses the need for labels in conditional GANs. However, there is another difference: the features are assumed to be locally similar, so when evaluating the loss over batches, real images are chosen to be neighbors of the given instance. This creates a kernel-density-estimation type of model. Experimentally, these instance-conditioned GANs perform very well, beating self-supervised and unconditional GANs. Moreover, the conditional version of instance-conditioned GANs outperforms conditional GANs as well in some smaller-resolution settings.
Review
Originality:
This paper is very original and interesting.
Quality:
The paper is a bit confusing (see clarity below) but the research does seem to be well developed. The experiments are very impressive as well!
Clarity:
The overall clarity of this paper is only okay. Some of the terms are a bit confusing. For example, on line 71, the authors say “overlapping partitions”. What are overlapping partitions of a manifold? Typically, given a set A, we partition it, meaning that there is a single partition of disjoint sets whose union is the whole set A. I can only conclude that overlapping partitions means a collection of sets whose union is A but whose members are not disjoint.
It is also confusing that they call a datapoint an instance. In fact, this is defined on line 72 but used without definition on line 38, causing more confusion.
Line 79 says that Figure 2a shows 7 neighbors, which is confusing because there are only 4 images, although there are 7 darker shapes. The different shapes in the figure also make it confusing, because one would expect neighboring shapes to be similar.
Significance:
This is a very interesting idea and significant because it can be used on completely unsupervised data.
NIPS | Title
Instance-Conditioned GAN
Abstract
Generative Adversarial Networks (GANs) can generate near photo realistic images in narrow domains such as human faces. Yet, modeling complex distributions of datasets such as ImageNet and COCO-Stuff remains challenging in unconditional settings. In this paper, we take inspiration from kernel density estimation techniques and introduce a non-parametric approach to modeling distributions of complex datasets. We partition the data manifold into a mixture of overlapping neighborhoods described by a datapoint and its nearest neighbors, and introduce a model, called instance-conditioned GAN (IC-GAN), which learns the distribution around each datapoint. Experimental results on ImageNet and COCO-Stuff show that IC-GAN significantly improves over unconditional models and unsupervised data partitioning baselines. Moreover, we show that IC-GAN can effortlessly transfer to datasets not seen during training by simply changing the conditioning instances, and still generate realistic images. Finally, we extend IC-GAN to the class-conditional case and show semantically controllable generation and competitive quantitative results on ImageNet; while improving over BigGAN on ImageNet-LT. Code and trained models to reproduce the reported results are available at https://github.com/facebookresearch/ic_gan.
1 Introduction
Generative Adversarial Networks (GANs) [18] have shown impressive results in unconditional image generation [27, 29]. Despite their success, GANs present optimization difficulties and can suffer from mode collapse, resulting in the generator not being able to obtain a good distribution coverage, and often producing poor quality and/or low diversity generated samples. Although many approaches attempt to mitigate this problem – e.g. [20, 32, 35, 38] –, complex data distributions such as the one in ImageNet [45] remain a challenge for unconditional GANs [33, 36]. Classconditional GANs [5, 39, 40, 56] ease the task of learning the data distribution by conditioning on class labels, effectively partitioning the data. Although they provide higher quality samples than their unconditional counterparts, they require labelled data, which may be unavailable or costly to obtain.
Several recent approaches explore the use of unsupervised data partitioning to improve GANs [2, 14, 17, 23, 33, 42]. While these methods are promising and yield visually appealing samples, their quality is still far from those obtained with class-conditional GANs. These methods make use of relatively coarse and non-overlapping data partitions, which oftentimes contain data points from different types of objects or scenes. This diversity of data points may result in a manifold with low
⇤Equal contribution.
35th Conference on Neural Information Processing Systems (NeurIPS 2021).
density regions, which degrades the quality of the generated samples [11]. Using finer partitions, however, tends to deteriorate results [33, 36, 42] because the clusters may contain too few data points for the generator and discriminator to properly model their data distribution.
In this work, we introduce a new approach, called instance-conditioned GAN (IC-GAN), which extends the GAN framework to model a mixture of local data densities. More precisely, IC-GAN learns to model the distribution of the neighborhood of a data point, also referred to as instance, by providing a representation of the instance as an additional input to both the generator and discriminator, and by using the neighbors of the instance as real samples for the discriminator. By choosing a sufficiently large neighborhood around the conditioning instance, we avoid the pitfall
of excessively partitioning the data into small clusters. Given the overlapping nature of these clusters, increasing the number of partitions does not come at the expense of having less samples in each of them. Moreover, unlike when conditioning on discrete cluster indices, conditioning on instance representations naturally leads the generator to produce similar samples for similar instances. Interestingly, once trained, our IC-GAN can be used to effortlessly transfer to other datasets not seen during training by simply swapping-out the conditioning instances at inference time.
IC-GAN bears similarities with kernel density estimation (KDE), a non-parametric density estimator in the form of a mixture of parametrized kernels modeling the density around each training data point – see e.g. [4]. Similar to KDE, IC-GAN can be seen as a mixture density estimator, where each component is obtained by conditioning on a training instance. Unlike KDE, however, we do not model the data likelihood explicitly, but take an adversarial approach in which we model the local density implicitly with a neural network that takes as input the conditioning instance as well as a noise vector. Therefore, the kernel in IC-GAN is no longer independent on the data point on which we condition, and instead of a kernel bandwidth parameter, we control the smoothness by choosing the neighborhood size of an instance from which we sample the real samples to be fed to the discriminator.
We validate our approach on two image generation tasks: (1) unlabeled image generation where there is no class information available, and (2) class-conditional image generation. For the unlabeled scenario, we report results on the ImageNet and COCO-Stuff datasets. We show that IC-GAN outperforms previous approaches in unlabeled image generation on both datasets. Additionally, we perform a series of transfer experiments and demonstrate that an IC-GAN trained on ImageNet achieves better generation quality and diversity when testing on COCO-Stuff than the same model trained on COCO-Stuff. In the class-conditional setting, we show that IC-GAN can generate images with controllable semantics – by adapting both class and instance–, while achieving competitive sample quality and diversity on the ImageNet dataset. Finally, we test IC-GAN in ImageNetLT, a long-tail class distribution ablated version of ImageNet, highlighting the benefits of nonparametric density estimation in datasets with unbalanced classes. Figure 1 shows IC-GAN unlabeled ImageNet generations (a), IC-GAN class-conditional ImageNet generations (b), and IC-GAN transfer generations both in the unlabeled (c) and controllable class-conditional (d) setting.
2 Instance-conditioned GAN
The key idea of IC-GAN is to model the distribution of a complex dataset by leveraging fine-grained overlapping clusters in the data manifold, where each cluster is described by a datapoint xi – referred to as instance – and its nearest neighbors set Ai in a feature space. Our objective is to model the underlying data distribution p(x) as a mixture of conditional distributions p(x|hi) around each of M instance feature vectors hi in the dataset, such that p(x) ⇡ 1M P i p(x|hi).
More precisely, given an unlabeled dataset D = {xi}Mi=1 with M data samples xi and an embedding function f parametrized by , we start by extracting instance features hi = f (xi) 8xi 2 D, where f (·) is learned in an unsupervised or self-supervised manner. We then define the set Ai of k nearest neighbors for each data sample using the cosine similarity – as is common in nearest neighbor classifiers, e.g. [53, 54] – over the features hi. Figure 2a depicts a sample xi and its nearest neighbors.
We are interested in implicitly modelling the conditional distributions p(x|hi) with a generator G✓G(z,hi), implemented by a deep neural network with parameters ✓G. The generator transforms samples from a unit Gaussian prior z ⇠ N (0, I) into samples x from the conditional distribution p(x|hi), where hi is the feature vector of an instance xi sampled from the training data. In IC-GAN, we adopt an adversarial approach to train the generator G✓G . Therefore, our generator is jointly trained with a discriminator D✓D (x,hi) that discerns between real neighbors and generated neighbors of hi, as shown in Figure 2b. Note that for each hi, real neighbors are sampled uniformly from Ai. Both G and D engage in a two player min-max game where they try to find the Nash equilibrium for the following equation:
min G max D Exi⇠p(x),xn⇠U(Ai)[logD(xn, f (xi))] +
Exi⇠p(x),z⇠p(z)[log(1 D(G(z, f (xi)), f (xi)))]. (1)
Note that when training IC-GAN we use all available training datapoints to condition the model. At inference time, as in non-parametric density estimation methods such as KDE, the generator of ICGAN also requires instance features, which may come from the training distribution or a different one.
Extension to class-conditional generation. We extend IC-GAN for class-conditional generation by additionally conditioning the generator and discriminator on a class label y. More precisely, given a labeled dataset Dl = {(xi,yi)}Mi=1 with M data sample pairs (xi,yi) and an embedding function f , we extract instance features hi = f (xi) 8xi 2 Dl, where f (·) is learned in an unsupervised, self-supervised, or supervised manner. We then define the set Ai of k nearest neighbors for each data sample using the cosine similarity over the features hi, where neighbors may be from different classes. This results in neighborhoods, where the number of neighbors belonging to the same class as the instance hi is often smaller than k. During training, real neighbors xj and their respective labels yj are sampled uniformly from Ai for each hi. In the class-conditional case, we model p(x|hi,yj) with a generator G✓G(z,hi,yj) trained jointly with a discriminator D✓D (x,hi,yj).
3 Experimental evaluation
We describe our experimental setup in Section 3.1, followed by results presented in the unlabeled setting in Section 3.2, dataset transfer in Section 3.3 and class-conditional generation in Section 3.4. We analyze the impact of the number of stored instances and neighborhood size in Section 3.5.
3.1 Experimental setup
Datasets. We evaluate our model in the unlabeled scenario on ImageNet [45] and COCO-Stuff [6]. The ImageNet dataset contains 1.2M and 50k images for training and evaluation, respectively. COCOStuff is a very diverse and complex dataset which contains multi-object images and has been widely used for complex scene generation. We use the train and evaluation splits of [8], and the (un)seen subsets of the evaluation images with only class combinations that have (not) been seen during training. These splits contain 76k, 2k, 675 and 1.3k images, respectively. For the class-conditional image generation, we use ImageNet as well as ImageNet-LT [34]. The latter is a long-tail variant of ImageNet that contains a subset of 115k samples, where the 1,000 classes have between 5 and 1,280 samples each. Moreover, we use some samples of four additional datasets to highlight the transfer abilities of IC-GAN: Cityscapes [10], MetFaces [28], PACS [31] and Sketches [15].
Evaluation protocol. We report Fréchet Inception Distance (FID) [22], Inception Score (IS) [47], and LPIPS [57]. LPIPS computes the distance between the AlexNet activations of two images generated with two different latent vectors and same conditioning. On ImageNet, we follow [5], and
compute FID over 50k generated images, using the 50k real validation samples as reference. On COCO-Stuff and ImageNet-LT, we compute the FID for each of the splits, using all images in the split as reference and sampling the same number of generated images. Additionally, on ImageNet-LT we stratify the FID by grouping classes based on the number of training samples: more than 100 (many-shot FID), between 20 and 100 (med-shot FID), and fewer than 20 (few-shot FID). For the reference set, we split the validation images along these three groups of classes, and generate a matching number of samples per group. In order to compute all above-mentioned metrics, IC-GAN requires instance features for sampling. Unless stated otherwise, we store 1,000 training set instances by applying k-means clustering to the training set and selecting the features of the data point that is closest to each centroid. All quantitative metrics for IC-GAN are reported over five random seeds for the input noise when sampling from the model.
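The instance-selection step can be sketched as follows (assuming scikit-learn; function and variable names are illustrative): cluster the training features with k-means and store, for each centroid, the features of the closest training point.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_stored_instances(features: np.ndarray, n_store: int = 1000,
                            seed: int = 0) -> np.ndarray:
    """features: (M, d) instance embeddings of the training set."""
    km = KMeans(n_clusters=n_store, random_state=seed).fit(features)
    dists = km.transform(features)       # (M, n_store) distances to centroids
    closest = dists.argmin(axis=0)       # nearest training point per centroid
    return features[closest]             # the stored instance features
```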
Network architectures and hyperparameters. As feature extractor f, we use a ResNet50 [21] trained in a self-supervised way with SwAV [7] for the unlabeled scenario; for the class-conditional IC-GAN, we use a ResNet50 trained for the classification task on either ImageNet or ImageNet-LT [26]. For ImageNet experiments, we use BigGAN [5] as a baseline architecture, given its superior image quality and ubiquitous use in conditional image generation. For IC-GAN, we replace the class embedding layers in the generator by a fully connected layer that takes the instance features as input and reduces their dimensionality from 2,048 to 512; the same approach is followed to adapt the discriminator. For COCO-Stuff, we additionally include the state-of-the-art unconditional StyleGAN2 architecture [29], as it has shown good generation quality and diversity in the lower data regime [28, 29]. We follow its class-conditional version [28] to extend it to IC-GAN by replacing the input class embedding by the instance features. Unless stated otherwise, we set the size of the neighborhoods to k=50 for ImageNet and k=5 for both COCO-Stuff and ImageNet-LT. See the supplementary material for details on the architecture and optimization hyperparameters.
3.2 Unlabeled setting
ImageNet. We start by comparing IC-GAN against previous work in Table 1. Note that the unconditional BigGAN baseline is trained by setting all labels in the training set to zero, following [36, 42]. IC-GAN surpasses all previous approaches at both 64×64 and 128×128 resolutions in both FID and IS scores. At 256×256 resolution, IC-GAN outperforms the concurrent unconditional diffusion-based model of [12], the only other result we are aware of in this setting. Additional results in terms of precision and recall can be found in Table 8 in the supplementary material.
As shown in Figure 1a, IC-GAN generates high quality images preserving most of the appearance of the conditioning instance. Note that generated images are not mere training memorizations; as shown in the supplementary material, generated images differ substantially from the nearest training samples.
COCO-Stuff. We proceed with the evaluation of IC-GAN on COCO-Stuff in Table 2. We also compare to state-of-the-art complex scene generation pipelines which rely on labeled bounding box annotations as conditioning: LostGANv2 [49] and OC-GAN [50]. Both of
these approaches use tailored architectures for complex scene generation, which have at least twice the number of parameters of IC-GAN. Our IC-GAN matches or improves upon the unconditional version of the same backbone architecture in terms of FID in all cases, except for the training FID with the StyleGAN2 backbone at 256×256 resolution. Overall, the StyleGAN2 backbone is superior to BigGAN on this dataset, and StyleGAN2-based IC-GAN achieves state-of-the-art FID scores, even when compared to the bounding-box-conditioned LostGANv2 and OC-GAN. IC-GAN exhibits notably higher LPIPS than LostGANv2 and OC-GAN, which could be explained by the fact that the latter only leverage one real sample per input conditioning during training, whereas IC-GAN uses multiple real neighboring samples per instance, naturally favouring diversity in the generated images. As shown in Figures 3b and 3c, IC-GAN generates high quality, diverse images given the input instance. A qualitative comparison between LostGANv2, OC-GAN and IC-GAN can be found in Section E of the supplementary material.
3.3 Off-the-shelf transfer to other datasets
To transfer IC-GAN off-the-shelf, we condition the ImageNet-trained model on instances from COCO-Stuff at inference time, reusing the same feature extractor and generator. When we replace the conditioning instances from COCO-Stuff with those of ImageNet, we obtain a train FID score of 43.5, underlining the important distribution shift that can be implemented by changing the conditioning instances.
Interestingly, the transferred IC-GAN also outperforms LostGANv2 and OC-GAN, which condition on labeled bounding box annotations. Transferring the model from ImageNet boosts diversity w.r.t. the model trained on COCO-Stuff (see LPIPS in Table 2), which may be in part due to the larger k=50 used for ImageNet training, compared to k=5 when training on COCO-Stuff. Qualitative results of COCO-Stuff generations from the ImageNet pre-trained IC-GAN can be found in Figure 1c (top row) and Figure 3d. These generations suggest that IC-GAN is able to effectively leverage large-scale training on ImageNet to improve the quality and diversity of COCO-Stuff scene generation, for which significantly less training data is available.
We further explore how the ImageNet-trained IC-GAN transfers to conditioning on other datasets using Cityscapes, MetFaces, and PACS in Figure 1c. Generated images still preserve the semantics and style of the conditioning images for all datasets, although their quality degrades compared to the samples in Figure 1a, as the instances in these datasets, in particular MetFaces and PACS, are very different from the ImageNet ones. See Section F in the supplementary material for more discussion, additional evaluations, and more qualitative examples of dataset transfer.
3.4 Class-conditional setting
ImageNet. In Table 3, we show that the class-conditioned IC-GAN outperforms BigGAN in terms of both FID and IS across all resolutions, except the FID at 128×128 resolution. It is worth mentioning that, unlike BigGAN, IC-GAN can control the semantics of the generated images by either fixing the instance features and swapping the class conditioning, or by fixing the class conditioning and swapping the instance features; see Figure 1b. As shown in the figure, generated images preserve the semantics of both the class label and the instance, generating different dog breeds on similar backgrounds, or generating camels in the snow, an unseen scenario in ImageNet to the best of our knowledge. Moreover, in
Figure 1d, we show the transfer capabilities of our class-conditional IC-GAN trained on ImageNet and conditioned on instances from other datasets, generating camels in the grass, zebras in the city, and husky dogs with the style of MetFaces and PACS instances. These controllable conditionings enable the generation of images that are not present or very rare in the ImageNet dataset, e.g. camels surrounded by snow or zebras in the city. Additional qualitative transfer results which either fix the class label and swap the instance features, or vice-versa, can be found in Section F of the supplementary material.
ImageNet-LT. Due to the class imbalance in ImageNet-LT, selecting a subset of instances with either k-means or uniform sampling can easily result in ignoring rare classes, and penalizing their generation. Therefore, for this dataset we use all available 115k training instances to sample from the model and compute the metrics. In Table 4 we compare to BigGAN, showing that IC-GAN is better in terms of FID and IS at modeling this long-tailed distribution. Note that the improvement is noticeable for each of the three groups of classes with different numbers of samples (see the many/med/few columns). In Section G of the supplementary material we present experiments using class balancing to train BigGAN, showing that it improves neither the quality nor the diversity of generated samples. We
hypothesize that oversampling some classes may result in overfitting for the discriminator, leading to low quality image generations.
3.5 Selection of stored instances and neighborhood size
In this section, we empirically justify the k-means procedure used to select the instances to sample from the model, and consider both the effect of the number of instances used to sample from the model and the effect of the size k of the neighborhoods A_i used during training. The impact of different choices for the instance embedding function f(x) is evaluated in the supplementary material.
Selecting instances to sample from the model. In Figure 4 (left), we compare two instance selection methods in terms of FID: uniform sampling (Random) and k-means (Clustered), where we select the closest instance to each cluster centroid, using k = 50 neighbors during training (solid and dotted green lines). Random selection is consistently outperformed by k-means; selecting only 1,000 instances with k-means results in better FID than randomly selecting 5,000 instances. Moreover, storing more than 1,000 instances selected with k-means does not result in noticeable improvements in FID. Additionally, we computed FID metrics for the 1,000 ground truth images that are closest to the k-means cluster centers, obtaining 41.8 ± 0.2 FID, which is considerably higher than the 10.4 ± 0.1 FID we obtain with IC-GAN (k = 50) when using the same 1,000 cluster centers. This supports the idea that IC-GAN generates data points that go beyond the stored instances, better recovering the data distribution.
We consider precision (P) and recall (R) [30] (using an InceptionV3 [51] as feature extractor and sampling 10,000 generated and real images) to disentangle the factors driving the improvement in FID, namely image quality and diversity (coverage) – see Figure 4 (right). We see that augmenting the number of stored instances results in slightly worse precision (image quality) but notably better recall (coverage). Intuitively, this suggests that by increasing the number of stored instances, we can better recover the data density at the expense of slightly degraded image quality in lower density regions of the manifold – see e.g. [11].
Neighborhood size. In Figure 4 (both panels) we analyze the interplay between the neighborhood size and the number of instances used to recover the data distribution. For small numbers of stored instances, we observe that larger neighborhoods lead to better (lower) FID scores (left-hand side of left panel). For recall, we also observe improvements for large neighborhoods when storing few instances (left-hand side of right panel), suggesting that larger neighborhoods are more effective in recovering the data distribution from few instances. This trend reverses for large numbers of stored instances, where smaller values of k are more effective. This supports the idea that the neighborhood size acts as a bandwidth parameter, similar to KDE, that controls the smoothness of the implicitly learnt conditional distributions around instances. For example, k = 500 leads to smoother conditional distributions than k = 5, and as a result requires fewer stored instances to recover the data distribution. Moreover, as expected, we notice that the value of k does not significantly affect precision (right panel). Overall, k = 50 offers a good compromise, exhibiting top performance across all metrics when using at least 500 stored instances. We visualize the smoothness effect by means of a qualitative comparison across samples from different neighborhood sizes in Section K of the supplementary material. Using (very) small neighborhoods (e.g. k = 5) results in lower diversity in the generated images.
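The bandwidth analogy can be made concrete with a toy kernel density estimate (a sketch assuming scikit-learn; the data is synthetic and only serves to illustrate the smoothing role of the bandwidth, mirroring the role of k in IC-GAN):

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
# Synthetic bimodal data standing in for a dataset with two "modes".
x = np.concatenate([rng.normal(-2, 0.3, 200), rng.normal(2, 0.3, 200)])[:, None]
grid = np.linspace(-4, 4, 200)[:, None]

for bw in (0.05, 0.3, 2.0):  # small bandwidth ~ small k, large bandwidth ~ large k
    kde = KernelDensity(kernel="gaussian", bandwidth=bw).fit(x)
    density = np.exp(kde.score_samples(grid))
    # Small bw: spiky estimate hugging the samples; large bw: oversmoothed modes.
```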
4 Related work
Data partitioning for GANs. Previous works have attempted to improve the image generation quality and diversity of GANs by partitioning the data manifold through clustering techniques [2, 19, 33, 36, 42, 46], or by leveraging mixture models in their design [14, 17, 23]. In particular, [36, 46] apply k-means on representations from a pre-trained feature extractor to cluster the data, and then use cluster indices to condition the generator network. Then, [19, 33] introduce an alternating two-stage approach where the first stage applies k-means to the discriminator feature space and the second stage trains a GAN conditioned on the cluster indices. Similarly, [42] proposes to train a clustering network, which outputs pseudolabels, in cooperation with the generator. Further, [2] trains a feature extractor with self-supervised pre-training tasks, and creates a k-nearest neighbor graph in the learned representation space to cluster connected points into the same sub-manifold. In this case, a different generator is then trained for each identified sub-manifold. By contrast, IC-GAN uses fine-grained overlapping data neighborhoods in tandem with conditioning on rich feature embeddings (instances) to learn a localized distribution around each data point.
Mitigating mode collapse in GANs. Works which attempt to mitigate mode collapse may also bear some similarities to ours. In [32], the discriminator takes into consideration multiple random samples from the same class to output a decision. In [35], a mixed batch of generated and real samples is fed to the discriminator with the goal of predicting the ratio of real samples in the batch. Other works use a mixture of generators [17, 23] and encourage each generator to focus on generating samples from a different mode. Similarly, in [14], the discriminator is pushed to form clusters in its representation space, where each cluster is represented by a Gaussian kernel. In turn, the generator tends to learn to generate samples covering all clusters, hence mitigating mode collapse. By contrast, we focus on discriminating between real and generated neighbors of an instance conditioning, by using a single generator network trained following the GAN formulation.
Conditioning on feature vectors. Very recent work [37] uses image self-supervised feature representations to condition a generative model whose objective is to produce a good input reconstruction; this requires storing the features of all training samples. In contrast, our objective is to learn a localized distribution (as captured by nearest neighboring images) around each conditioning instance, and we only need to save a very small subset of the dataset features to approximately recover the training distribution.
Kernel density estimation and adversarial training. Connections between adversarial training and nonparametric density estimation have been made in prior work [1]. However, to the best of our knowledge, no prior work models the dataset density in a nonparametric fashion with a localized distribution around each data point with a single conditional generation network.
Complex scene generation. Existing methods for complex scene generation, where natural looking scenes contain multiple objects, most often aim at controllability and rely on detailed conditionings such as a scene graphs [3, 25], bounding box layouts [48–50, 58], semantic segmentation masks [9, 43, 44, 52, 55] or more recently, freehand sketches [16]. All these methods leverage intricate pipelines to generate complex scenes and require labeled datasets. By contrast, our approach
relies on instance conditionings which control the global semantics of the generation process, and does not require any dataset labels. It is worth noting that complex scene generation is often characterized by unbalanced, strongly long-tailed datasets. Long-tail class distributions negatively affect class-conditional GANs, as they struggle to generate visually appealing samples for classes in the tail [8]. However, to the best of our knowledge, no other previous work tackles this problem for GANs.
5 Discussion
Contributions. We presented the instance-conditioned GAN (IC-GAN), which models dataset distributions in a non-parametric way by conditioning both generator and discriminator on instance features. We validated our approach in the unlabeled setting, showing consistent improvements over baselines on ImageNet and COCO-Stuff. Moreover, we showed through transfer experiments, where we condition the ImageNet-trained model on instances of other datasets, the ability of IC-GAN to produce compelling samples from different data distributions. Finally, we validated IC-GAN in the class-conditional setting, obtaining competitive results on ImageNet and surpassing the BigGAN baseline on the challenging ImageNet-LT, and showed compelling controllable generations by swapping the class conditioning given a fixed instance, or the instance given a fixed class.
Limitations. IC-GAN showed excellent image quality for labeled (class-conditional) and unlabeled image generation. However, like any machine learning tool, it has some limitations. First, like kernel density estimation approaches, IC-GAN requires storing training instances to use the model. Experimentally, we noticed that for complex datasets such as ImageNet, using 1,000 instances is enough to approximately cover the dataset distribution. Second, the instance feature vectors used to condition the model are obtained with a pre-trained feature extractor (self-supervised in the unlabeled case) and depend on it. We speculate that this limitation might be mitigated if the feature extractor and the generator were trained jointly, and leave this as future work. Third, although we highlighted the excellent transfer potential of our approach to unseen datasets, we observed that, in the case of transfer to datasets that are very different from ImageNet, the quality of generated images degrades.
Broader impacts. IC-GAN brings with it several benefits, such as excellent image quality in labeled (class-conditional) and unlabeled image generation tasks, and the potential to transfer to unseen datasets, enabling the use of our model on a variety of datasets without the need for fine-tuning or re-training. Moreover, in the case of class-conditional image generation, IC-GAN enables controllable generation of content by adapting either the style (by changing the instance) or the semantics (by altering the class). Thus, we expect that our model can positively affect the workflow of creative content generators. That being said, with improving image quality in generative modeling, there is some potential for misuse. A common example are deepfakes, where a generative model is used to manipulate images or videos well enough that humans cannot distinguish real from fake, with the intent to misinform. We believe, however, that open research on generative image models also contributes to better understanding such synthetic content, and to detecting it where it is undesirable. Recently, the community has also started to undertake explicit efforts towards detecting manipulated content by organizing challenges such as the Deepfake Detection Challenge [13].
Review questions
1. What is the focus of the paper regarding dataset modeling?
2. What are the strengths of the proposed approach in terms of data distribution and generator training?
3. What are the weaknesses of the paper regarding its claims and contributions?
4. Can the reviewer think of any practical applications where the proposed method might be useful?
5. Is the approach truly considered an unconditional setting, or is it more closely related to image-to-image translation?

Summary Of The Paper
To model complex dataset distributions, this paper proposes to partition datasets into a mixture of overlapping neighborhoods, each described by a datapoint and its nearest neighbors. An instance-conditioned generator is trained to learn the distribution around each datapoint, i.e., the generator learns to generate images similar to the conditioning image. The proposed approach is evaluated on ImageNet and COCO-Stuff.
Review
Pros:
The paper is well written and easy to read.
Comprehensive experiments are conducted, and multiple metrics are adopted to evaluate the proposed method.
Cons:
The contribution is marginal. The authors claim the proposed method can learn complex distributions; however, the generator can only generate images similar to the given conditioning image, and the number of modes displayed at test time largely depends on the conditioning images provided.
Can the authors provide several real-life applications where this model could be helpful?
Is the proposed approach really an unconditional setting? It seems closer to image-to-image translation than to unconditional generation.
NIPS | Title
A Probabilistic U-Net for Segmentation of Ambiguous Images
Abstract
Many real-world vision problems suffer from inherent ambiguities. In clinical applications for example, it might not be clear from a CT scan alone which particular region is cancer tissue. Therefore a group of graders typically produces a set of diverse but plausible segmentations. We consider the task of learning a distribution over segmentations given an input. To this end we propose a generative segmentation model based on a combination of a U-Net with a conditional variational autoencoder that is capable of efficiently producing an unlimited number of plausible hypotheses. We show on a lung abnormalities segmentation task and on a Cityscapes segmentation task that our model reproduces the possible segmentation variants as well as the frequencies with which they occur, doing so significantly better than published approaches. These models could have a high impact in real-world applications, such as being used as clinical decision-making algorithms accounting for multiple plausible semantic segmentation hypotheses to provide possible diagnoses and recommend further actions to resolve the present ambiguities.
1 Introduction
The semantic segmentation task assigns a class label to each pixel in an image. While in many cases the context in the image provides sufficient information to resolve the ambiguities in this mapping, there exists an important class of images where even the full image context is not sufficient to resolve all ambiguities. Such ambiguities are common in medical imaging applications, e.g., in lung abnormalities segmentation from CT images. A lesion might be clearly visible, but the information about whether it is cancer tissue or not might not be available from this image alone. Similar ambiguities are also present in photos. E.g. a part of fur visible under the sofa might belong to a cat or a dog, but it is not possible from the image alone to resolve this ambiguity². Most existing segmentation algorithms either provide only one likely consistent hypothesis (e.g., “all pixels belong to a cat”) or a pixel-wise probability (e.g., “each pixel is 50% cat and 50% dog”).
Especially in medical applications where a subsequent diagnosis or a treatment depends on the segmentation map, an algorithm that only provides the most likely hypothesis might lead to misdiagnoses
∗ Work done during an internship at DeepMind.
² In [1] this is defined as ambiguous evidence, in contrast to implicit class confusion, that stems from an ambiguous class definition (e.g. the concepts of desk vs. table). For the presented work this differentiation is not required.
32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.
and sub-optimal treatment. Providing only pixel-wise probabilities ignores all co-variances between the pixels, which makes a subsequent analysis much more difficult if not impossible. If multiple consistent hypotheses are provided, these can be directly propagated into the next step in a diagnosis pipeline, they can be used to suggest further diagnostic tests to resolve the ambiguities, or an expert with access to additional information can select the appropriate one(s) for the subsequent steps.
Here we present a segmentation framework that provides multiple segmentation hypotheses for ambiguous images (Fig. 1a). Our framework combines a conditional variational auto encoder (CVAE) [2, 3, 4, 5] which can model complex distributions, with a U-Net [6] which delivers state-of-the-art segmentations in many medical application domains. A low-dimensional latent space encodes the possible segmentation variants. A random sample from this space is injected into the U-Net to produce the corresponding segmentation map. One key feature of this architecture is the ability to model the joint probability of all pixels in the segmentation map. This results in multiple segmentation maps, where each of them provides a consistent interpretation of the whole image. Furthermore our framework is able to also learn hypotheses that have a low probability and to predict them with the corresponding frequency. We demonstrate these features on a lung abnormalities segmentation task, where each lesion has been segmented independently by four experts, and on the Cityscapes dataset, where we artificially flip labels with a certain frequency during training.
A body of work with different approaches towards probabilistic and multi-modal segmentation exists. The most common approaches provide independent pixel-wise probabilities [7, 8]. These models induce a probability distribution by using dropout over spatial features. Whereas this strategy fulfills this line of work’s objective of quantifying the pixel-wise uncertainty, it produces inconsistent outputs. A simple way to produce plausible hypotheses is to learn an ensemble of (deep) models [9]. While the outputs produced by ensembles are consistent, they are not necessarily diverse and ensembles are typically not able to learn the rare variants as their members are trained independently. In order to overcome this, several approaches train models jointly using the oracle set loss [10], i.e. a loss that only accounts for the closest prediction to the ground truth. This has been explored in [11] and [1] using an ensemble of deep networks, and in [12] and [13] using one common deep network with M heads. While multi-head approaches may have the capacity to capture a diverse set of variants, they are not equipped to learn the occurrence frequencies of individual variants. Two common disadvantages of both ensembles and M heads models are their ungraceful scaling to large numbers of hypotheses, and their requirement of fixing the number of allowed hypotheses at training time. Another set of approaches to produce multiple diverse solutions relies on graphical models, such as junction chains [14], and more generally Markov Random Fields [15, 16, 17, 18]. While many of the
previous approaches are guaranteed to find the best diverse solutions, these are confined to structured problems whose dependencies can be described by tractable graphical models.
The task of image-to-image translation [19] tackles a very similar problem: an under-constrained domain transfer of images needs to be learned. Many of the recent approaches employ generative adversarial networks (GANs) which are known to suffer from challenges such as ‘mode-collapse’ [20]. In an attempt to solve the mode-collapse problem, the ‘bicycleGAN’ [21] involves a component that is similar in architecture to ours. In contrast to our proposed architecture, their model encompasses a fixed prior distribution and during training their posterior distribution is only conditioned on the output image. Very recent work on generating appearances given a shape encoding [22] also combines a U-Net with a VAE, and was developed concurrently to ours. In contrast to our proposal, their training requires an additional pretrained VGG-net that is employed as a reconstruction loss. Finally, in [23] is proposed a probabilistic model for structured outputs based on optimizing the dissimilarity coefficient [24] between the ground truth and predicted distributions. The resultant approach is assessed on the task of hand pose estimation, that is, predicting the location of 14 joints, arguably a simpler space compared to the space of segmentations we consider here. Similarly to the approach presented below, they inject latent variables at a later stage of the network architecture.
The main contributions of this work are: (1) Our framework provides consistent segmentation maps instead of pixel-wise probabilities and can therefore give a joint likelihood of modes. (2) Our model can induce arbitrarily complex output distributions including the occurrence of very rare modes, and is able to learn calibrated probabilities of segmentation modes. (3) Sampling from our model is computationally cheap. (4) In contrast to many existing applications of deep generative models that can only be qualitatively evaluated, our application and datasets allow quantitative performance evaluation including penalization of missing modes.
2 Network Architecture and Training Procedure
Our proposed network architecture is a combination of a conditional variational auto encoder [2, 3, 4, 5] with a U-Net [6], with the objective of learning a conditional density model over segmentations, conditioned on the image.
Sampling. The central component of our architecture (Fig. 1a) is a low-dimensional latent space ℝ^N (e.g., N = 6, which performed best in our experiments). Each position in this space encodes a segmentation variant. The ‘prior net’, parametrized by weights ω, estimates the probability of these variants for a given input image X. This prior probability distribution (called P in the following) is modelled as an axis-aligned Gaussian with mean µ_prior(X; ω) ∈ ℝ^N and variance σ_prior(X; ω) ∈ ℝ^N. To predict a set of m segmentations we apply the network m times to the same input image (only a small part of the network needs to be re-evaluated in each iteration, see below). In each iteration i ∈ {1, . . . , m}, we draw a random sample z_i ∈ ℝ^N from P:

z_i ∼ P(·|X) = N(µ_prior(X; ω), diag(σ_prior(X; ω))),   (1)
broadcast the sample to an N-channel feature map with the same shape as the segmentation map, and concatenate this feature map to the last activation map of a U-Net (the U-Net is parameterized by weights θ). A function f_comb composed of three subsequent 1×1 convolutions (ψ being the set of their weights) combines the information and maps it to the desired number of classes. The output, S_i, is the segmentation map corresponding to point z_i in the latent space:

S_i = f_comb(f_U-Net(X; θ), z_i; ψ).   (2)
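A minimal sketch, assuming PyTorch, of the sampling path in Eqs. (1) and (2): z_i is drawn from the image-conditional prior, broadcast to an N-channel spatial map, and combined with the U-Net's final activation map by three 1×1 convolutions. unet and prior_net are placeholder modules, not the authors' exact implementation:

```python
import torch
import torch.nn as nn

class FComb(nn.Module):
    """Three subsequent 1x1 convolutions combining U-Net features with z."""
    def __init__(self, feat_ch: int, z_dim: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(feat_ch + z_dim, feat_ch, 1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 1), nn.ReLU(),
            nn.Conv2d(feat_ch, n_classes, 1))

    def forward(self, unet_features, z):
        b, _, h, w = unet_features.shape
        z_map = z[:, :, None, None].expand(b, z.size(1), h, w)  # broadcast z
        return self.net(torch.cat([unet_features, z_map], dim=1))

def sample_segmentations(unet, prior_net, f_comb, x, m):
    feats = unet(x)                  # U-Net activations, computed once for all m
    mu, log_sigma = prior_net(x)     # axis-aligned Gaussian prior parameters
    samples = []
    for _ in range(m):
        z = mu + log_sigma.exp() * torch.randn_like(mu)  # z_i ~ P(.|X), Eq. (1)
        samples.append(f_comb(feats, z))                 # S_i, Eq. (2)
    return samples
```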
Notice that when drawing m samples for the same input image, we can reuse the output of the prior net and the feature activations of the U-Net. Only the function f_comb needs to be re-evaluated m times.
Training. The networks are trained with the standard training procedure for conditional VAEs (Fig. 1b), i.e. by minimizing the variational lower bound (Eq. 4). The main difference with respect to training a deterministic segmentation model is that the training process additionally needs to find a useful embedding of the segmentation variants in the latent space. This is solved by introducing a ‘posterior net’, parametrized by weights ν, that learns to recognize a segmentation variant (given the raw image X and the ground truth segmentation Y) and to map this to a position µ_post(X, Y; ν) ∈ ℝ^N with some uncertainty σ_post(X, Y; ν) ∈ ℝ^N in the latent space. The output is denoted as the posterior distribution Q. A sample z from this distribution,
z ∼ Q(·|X, Y) = N(µ_post(X, Y; ν), diag(σ_post(X, Y; ν))),   (3)
combined with the activation map of the U-Net (Eq. 2), must result in a predicted segmentation S identical to the ground truth segmentation Y provided in the training example. A cross-entropy loss penalizes differences between S and Y (the cross-entropy loss arises from treating the output S as the parameterization of a pixel-wise categorical distribution P_c). Additionally there is a Kullback-Leibler divergence D_KL(Q ∥ P) = E_{z∼Q}[log Q − log P], which penalizes differences between the posterior distribution Q and the prior distribution P. Both losses are combined as a weighted sum with a weighting factor β, as done in [25]:
L(Y, X) = E_{z∼Q(·|Y,X)}[−log P_c(Y | S(X, z))] + β · D_KL(Q(z|Y, X) ∥ P(z|X)).   (4)
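A sketch, assuming PyTorch, of the loss in Eq. (4); the module names (unet, prior_net, posterior_net, f_comb) are placeholders mirroring the architecture described above, not the authors' exact implementation:

```python
import torch.nn.functional as F
from torch.distributions import Normal, Independent, kl_divergence

def probabilistic_unet_loss(unet, prior_net, posterior_net, f_comb, x, y, beta=1.0):
    mu_p, log_sig_p = prior_net(x)             # parameters of P(z | X)
    mu_q, log_sig_q = posterior_net(x, y)      # parameters of Q(z | X, Y)
    P = Independent(Normal(mu_p, log_sig_p.exp()), 1)   # axis-aligned Gaussians
    Q = Independent(Normal(mu_q, log_sig_q.exp()), 1)

    z = Q.rsample()                            # reparameterized posterior sample
    logits = f_comb(unet(x), z)                # S(X, z), per-pixel class logits
    recon = F.cross_entropy(logits, y)         # -log Pc(Y | S(X, z))
    kl = kl_divergence(Q, P).mean()            # DKL(Q || P), averaged over batch
    return recon + beta * kl
```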
The training is done from scratch with randomly initialized weights. During training, this KL loss “pulls” the posterior distribution (which encodes a segmentation variant) and the prior distribution towards each other. On average (over multiple training examples) the prior distribution will be modified in a way such that it “covers” the space of all presented segmentation variants for a specific input image³.
3 Performance Measures and Baseline Methods
In this section we first present the metric used to assess the performance of all approaches, and then describe each competitor approach used in the comparisons.
3.1 Performance measures
As it is common in the semantic segmentation literature, we employ the intersection over union (IoU) as a measure to compare a pair of segmentations. However, in the present case, we not only want to compare a deterministic prediction with a unique ground truth, but rather we are interested in comparing distributions of segmentations. To do so, we use the generalized energy distance [26, 27, 28], which leverages distances between observations:
D²_GED(P_gt, P_out) = 2 E[d(S, Y)] − E[d(S, S′)] − E[d(Y, Y′)],   (5)
where d is a distance measure, Y and Y′ are independent samples from the ground truth distribution P_gt, and similarly, S and S′ are independent samples from the predicted distribution P_out. The energy distance D_GED is a metric as long as d is also a metric [29]. In our case we choose d(x, y) = 1 − IoU(x, y), which, as proved in [30, 31], is a metric. In practice, we only have access to samples from the distributions that models induce, so we rely on the sample statistic of Eq. 5, denoted D̂²_GED. The details of its computation for each experiment are presented in Appendix B.
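A sketch, assuming NumPy, of the plug-in estimate of Eq. (5) for a single image, with d = 1 − IoU. Binary masks are assumed for brevity, and two empty masks are treated as identical (distance 0), a convention adopted here for the sketch rather than one stated in the paper:

```python
import numpy as np

def one_minus_iou(a: np.ndarray, b: np.ndarray) -> float:
    """d(a, b) = 1 - IoU for boolean masks; empty-vs-empty counts as identical."""
    union = np.logical_or(a, b).sum()
    return 1.0 - (np.logical_and(a, b).sum() / union if union > 0 else 1.0)

def ged_squared(samples, ground_truths):
    """Sample statistic of Eq. (5) from m model samples and n ground truths."""
    d_sy = np.mean([one_minus_iou(s, y) for s in samples for y in ground_truths])
    d_ss = np.mean([one_minus_iou(s, t) for s in samples for t in samples])
    d_yy = np.mean([one_minus_iou(y, w) for y in ground_truths for w in ground_truths])
    return 2.0 * d_sy - d_ss - d_yy
```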
3.2 Baseline methods
With the aim of providing context for the performance of our proposed approach we compare against a range of baselines. To the best of our knowledge there exists no other work that has considered capturing a distribution over multi-modal segmentations and has measured the agreement with such a distribution. For fair comparison, we train the baseline models whose architectures are depicted in Fig. 2 in the exact same manner as we train ours. The baseline methods all involve the same U-Net architecture, i.e. they share the same core component and thus employ comparable numbers of learnable parameters in the segmentation tasks.
Dropout U-Net (Fig. 2a). Our ‘Dropout U-Net’ baselines follow the Bayesian SegNet’s [7] proposition: we drop out the activations of the respective incoming layers of the three inner-most encoder and decoder blocks with a dropout probability of p = 0.5, during training as well as when sampling.
³ An open source re-implementation of our approach can be found at https://github.com/SimonKohl/probabilistic_unet.
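A sketch, assuming PyTorch, of the Monte-Carlo dropout sampling used by the Dropout U-Net baseline above: dropout layers are kept active at test time so that repeated forward passes yield different segmentations. `model` is a placeholder U-Net whose inner blocks contain nn.Dropout(p=0.5) layers:

```python
import torch
import torch.nn as nn

def mc_dropout_samples(model: nn.Module, x: torch.Tensor, m: int):
    model.eval()
    for module in model.modules():      # re-enable only the dropout layers
        if isinstance(module, nn.Dropout):
            module.train()
    with torch.no_grad():
        return [model(x) for _ in range(m)]
```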
U-Net Ensemble (Fig. 2b). We report results for ensembles with the number of members matching the required number of samples (referred to as ‘U-Net Ensemble’). The original deterministic variant of the U-Net is the 1-sample corner case of an ensemble.
M-Heads (Fig. 2c). Aiming for diverse semantic segmentation outputs, the works of [12] and [13] propose to branch off M heads after the last layer of a deep net, each of which contributes one output variant. An adjusted cross-entropy loss that adaptively assigns heads to ground-truth hypotheses is employed to promote diversity while reducing the risk of idle heads: the loss of the best performing head is weighted with a factor of 1 − ε, while the remaining heads each contribute with a weight of ε/(M − 1) to the loss. For our ‘M-Heads’ baselines we again employ a U-Net core and set ε = 0.05 as proposed by [12]. In order to allow for the evaluation of 4, 8 and 16 samples, we train M-Heads models with the corresponding number of heads.
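A sketch, assuming PyTorch, of this relaxed winner-take-all loss (names are illustrative; for brevity the best head is chosen per batch, whereas a per-example assignment is the more faithful variant):

```python
import torch
import torch.nn.functional as F

def m_heads_loss(head_logits, y, eps=0.05):
    """head_logits: list of M per-head segmentation logits, each (B, C, H, W)."""
    losses = torch.stack([F.cross_entropy(logits, y) for logits in head_logits])
    m = len(head_logits)
    weights = torch.full_like(losses, eps / (m - 1))   # idle heads: eps / (M - 1)
    weights[losses.argmin()] = 1.0 - eps               # best head: 1 - eps
    return (weights * losses).sum()
```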
Image2Image VAE (Fig. 2d). In [21] the authors propose a U-Net VAE-GAN hybrid for multimodal image-to-image translation, which owes its stochasticity to normally distributed latents that are broadcast and fed into the encoder path of the U-Net. In order to deal with the complex solution space in image-to-image translation tasks, they employ an adversarial discriminator as additional supervision alongside a reconstruction loss. In the fully supervised setting of semantic segmentation such an additional learning signal is however not necessary, and we therefore train with a cross-entropy loss only. In contrast to our proposition, this baseline, which we refer to as the ‘Image2Image VAE’, employs a prior that is not conditioned on the input image (a fixed normal distribution) and a posterior net that is not conditioned on the input either.
In all cases we examine the models’ performance when drawing different numbers of samples (1, 4, 8 and 16) from each of them.
4 Results
A quantitative evaluation of multiple segmentation predictions per image requires annotations from multiple labelers. Here we consider two datasets: The LIDC-IDRI dataset [32, 33, 34] which contains 4 annotations per input, and the Cityscapes dataset [35], which we artificially modify by adding synonymous classes to introduce uncertainty in the way concepts are labelled.
4.1 Lung abnormalities segmentation
The LIDC-IDRI dataset [32, 33, 34] contains 1018 lung CT scans from 1010 lung patients with manual lesion segmentations from four experts. This dataset is a good representation of the typical ambiguities that appear in CT scans. For each scan, 4 radiologists (from a total of 12) provided annotation masks for lesions that they independently detected and considered to be abnormal. We use the masks resulting from a second reading in which the radiologists were shown the anonymized annotations of the others and were allowed to make adjustments to their own masks.
For our experiments we split this dataset into a training set composed of 722 patients, a validation set composed of 144 patients, and a test set composed of the remaining 144 patients. We then resampled
the CT scans to 0.5 mm × 0.5 mm in-plane resolution (the original resolution is between 0.461 mm and 0.977 mm, 0.688 mm on average) and cropped 2D images (180 × 180 pixels) centered at the lesion positions. The lesion positions are those where at least one of the experts segmented a lesion. By cropping the scans, the resulting task is, in isolation, not directly clinically relevant. However, this allows us to ignore the vast areas in which all labelers agree, in order to focus on those where there is uncertainty. This resulted in 8882 images in the training set, 1996 images in the validation set and 1992 images in the test set. Because the experts can disagree on whether the lesion is abnormal tissue, up to 3 masks per image can be empty. Fig. 3a shows an example of such lesion-centered images and the masks provided by 4 graders.
As all models share the same U-Net core component and for fairness and ease of comparability, we let all models undergo the same training schedule, which is detailed in subsection H.1.
In order to grasp some intuition about the kind of samples produced by each model, we show in Fig. 3a, as well as in Appendix F, representative results for the baseline methods and our proposed Probabilistic U-Net. Fig. 4a shows the squared generalized energy distance D̂²_GED for all models as a function of the number of samples. The data accumulations visible as horizontal stripes are due to the existence of empty ground-truth masks. The energy distance on the lung abnormalities test set of 1992 images decreases for all models as more samples are drawn, indicating an improved matching of the ground-truth distribution as well as enhanced sample diversity. Our proposed
Probabilistic U-Net outperforms all baselines when sampling 4, 8 and 16 times. The performance at 16 samples is significantly better than that of the baselines (p-value on the order of 10⁻¹³), according to the Wilcoxon signed-rank test. Finally, in Appendix E we show the results of an experiment regarding the capacity different models have to distinguish between unambiguous and ambiguous instances (i.e. instances where graders disagree on the presence of a lesion).
4.2 Cityscapes semantic segmentation
As a second dataset we use the Cityscapes dataset [35]. It contains images of street scenes taken from a car with corresponding semantic segmentation maps. A total of 19 different semantic classes are labelled. Based on this dataset we designed a task that allows full control of the ambiguities: we create ambiguities by artificial random flips of five classes to newly introduced classes. We flip ‘sidewalk’ to ‘sidewalk 2’ with a probability of 8/17, ‘person’ to ‘person 2’ with a probability of 7/17, ‘car’ to ‘car 2’ with 6/17, ‘vegetation’ to ‘vegetation 2’ with 5/17 and ‘road’ to ‘road 2’ with probability 4/17. This choice yields distinct probabilities for the ensuing 2⁵ = 32 discrete modes, with probabilities ranging from 10.9% (all unflipped) down to 0.5% (all flipped). The official training dataset with fine-grained annotation labels comprises 2975 images and the validation dataset contains 500 images. We employ this official validation set as a test set to report results on, and split off 274 images (corresponding to the 3 cities of Darmstadt, Mönchengladbach and Ulm) from the official training set as our internal validation set. As in the previous experiment, we use a similar setting for the training processes of all approaches, which we present in detail in subsection H.2.
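A sketch, assuming NumPy, of this synthetic ambiguity construction: each of the five classes is independently flipped, per training sample, to its synonymous ‘class 2’ with the stated probability. The class-id mapping is an illustrative placeholder:

```python
import numpy as np

FLIP_PROBS = {"sidewalk": 8/17, "person": 7/17, "car": 6/17,
              "vegetation": 5/17, "road": 4/17}

def flip_labels(seg: np.ndarray, class_ids: dict, rng: np.random.Generator):
    """class_ids maps a class name to (original id, flipped 'class 2' id)."""
    out = seg.copy()
    for name, p in FLIP_PROBS.items():
        if rng.random() < p:            # flip the entire class in this sample
            src, dst = class_ids[name]
            out[seg == src] = dst
    return out

# Sanity check of the stated mode probabilities: all classes unflipped has
# probability prod(1 - p) ~ 10.9%; all flipped has probability prod(p) ~ 0.5%.
```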
Fig. 3b shows samples of each approach in the comparison given one input image. In Appendix G we show further samples of other images, produced by our approach. Fig. 4b shows that the Probabilistic U-Net on the Cityscapes task outperforms the baseline methods when sampling 4, 8 and 16 times in terms of the energy distance. This edge in segmentation performance at 16 samples is highly significant according to the Wilcoxon signed-rank test (p-value on the order of 10⁻⁷⁷). We have also conducted ablation experiments in order to explore which elements of our architecture contribute to its performance: (1) fixing the prior, (2) fixing the prior and not using the context in the posterior, and (3) injecting the latent features at the beginning of the U-Net. Each of these variations resulted in lower performance. Detailed results can be found in Appendix D.
Reproducing the segmentation probabilities. In the Cityscapes segmentation task, we can provide further analysis by leveraging our knowledge of the underlying conditional distribution that we have set by design. In particular, we compare the frequency with which every model predicts each mode to the corresponding ground truth probability of that mode. To compute the frequency of each mode for each model, we draw 16 samples from that model for all images in the test set. Then we count the number of those samples that have that mode as the closest (using 1 − IoU as the distance function).
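A sketch, assuming NumPy, of this mode-frequency estimate: each sample is assigned to the closest of the 32 ground-truth modes under d = 1 − IoU, and the counts are normalized into frequencies. Binary masks are assumed for brevity; names are illustrative:

```python
import numpy as np

def one_minus_iou(a: np.ndarray, b: np.ndarray) -> float:
    union = np.logical_or(a, b).sum()
    return 1.0 - (np.logical_and(a, b).sum() / union if union > 0 else 1.0)

def mode_frequencies(samples_per_image, modes_per_image, n_modes=32):
    """samples_per_image[i]: 16 model samples; modes_per_image[i]: the 32 modes."""
    counts = np.zeros(n_modes)
    for samples, modes in zip(samples_per_image, modes_per_image):
        for s in samples:
            dists = [one_minus_iou(s, mode) for mode in modes]
            counts[int(np.argmin(dists))] += 1   # closest mode wins the count
    return counts / counts.sum()
```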
In Fig. 5 (and Figs. 8, 9, 10 in Appendix C) we report the mode-wise frequencies for all 32 modes in the Cityscapes task and show that the Probabilistic U-Net is the only model in this comparison that is able to closely capture the frequencies of a large combinatorial space of hypotheses, including very rare modes, thus supplying calibrated likelihoods of modes. The Image2Image VAE is the only
model among competitors that picks up on all variants, but the frequencies are far off as can be seen in its deviation from the bisector line in blue. The other baselines perform worse still in that all of them fail to represent modes and the modes they do capture do not match the expected frequencies.
4.3 Analysis of the Latent Space
The embedding of the segmentation variants in a low-dimensional latent space allows a qualitative analysis of the internal representation of our model. For a 2D or 3D latent space we can directly visualize where the segmentation variants get assigned. See Appendix A for details.
5 Discussion and conclusions
Our first set of experiments demonstrates that our proposed architecture provides consistent segmentation maps that closely match the multi-modal ground-truth distributions given by the expert graders in the lung abnormalities task and by the combinatorial ground-truth segmentation modes in the Cityscapes task. The employed IoU-based energy distance measures both whether the models’ individual samples are coherent and whether they are produced with the expected frequencies. It not only penalizes predicted segmentation variants that are far away from the ground truth, but also penalizes missing variants. On this task the Probabilistic U-Net is able to significantly outperform the considered baselines, indicating its capability to model the joint likelihood of segmentation variants.
The second type of experiments demonstrates that our model scales to complex output distributions including the occurrence of very rare modes. With 32 discrete modes of largely differing occurrence likelihoods (0.5% to 10.9%), the Cityscapes task requires the ability to closely match complex data distributions. Here too our model performs best and picks the segmentation modes very close to the expected frequencies, all the way into the regime of very unlikely modes, thus defying mode-collapse and exhibiting excellent probability calibration. As an additional advantage our model scales to such large numbers of modes without requiring any prior assumptions on the number of modes or hypotheses.
The lower performance of the baseline models relative to our proposition can be attributed to design choices of these models. While the Dropout U-Net successfully models the pixel-wise data distribution (Fig. 8a bottom right, in the Appendix), such pixel-wise mixtures of variants cannot be valid hypotheses in themselves (see Fig. 3). The U-Net Ensemble’s members are trained independently and each of them can only learn the most likely segmentation variant, as attested to by Fig. 8b. In contrast, the closely related M-Heads model can pick up on multiple discrete segmentation modes, due to the joint training procedure that enables diversity. The training does, however, not allow the model to correctly represent frequencies, and it requires knowledge of the number of present variants (see Fig. 9a, in the Appendix). Furthermore, neither the U-Net Ensemble nor the M-Heads can deal with the combinatorial explosion of segmentation variants when multiple aspects vary independently of each other. The Image2Image VAE shares similarities with our model, but as its prior is fixed and not conditioned on the input image, it cannot learn to capture variant frequencies by allocating corresponding probability mass to the respective latent space regions. Fig. 17 in the Appendix shows a severe miscalibration of variant likelihoods on the lung abnormalities task that is also reflected in its corresponding energy distance. Furthermore, in this architecture, the latent samples are fed into the U-Net’s encoder path, while we feed in the samples just after the decoder path. This design choice in the Image2Image VAE requires the model to carry the latent information all the way through the U-Net core, while simultaneously performing the recognition required for segmentation, which might additionally complicate training (see analysis in Appendix D). Besides that, our design choice of late injection has the additional advantage that we can produce a large set of samples for a given image at a very low computational cost: for each new sample from the latent space, only the network part after the injection needs to be re-executed to produce the corresponding segmentation map (this bears similarity to the approach taken in [23], where a generative model is employed to model hand pose estimation).
Aside from the ability to capture arbitrary modes with their corresponding probability conditioned on the input, our proposed Probabilistic U-Net allows to inspect its latent space. This is because as opposed to e.g. GAN-based approaches, VAE-like models explicitly parametrize distributions, a characteristic that grants direct access to the corresponding likelihood landscape. Appendix A discusses how the Probabilistic U-Net chooses to structure its latent spaces.
Compared to aforementioned concurrent work for image-to-image tasks [22], our model disentangles the prior and the segmentation net. This can be of particular relevance in medical imaging, where processing 3D scans is common. In this case it is desirable to condition on the entire scan, while retaining the possibility to process the scan tile by tile in order to be able to process large volumes with large models with a limited amount of GPU memory.
On a more general note, we would like to remark that current image-to-image translation tasks only allow subjective (and expensive) performance evaluations, as it is typically intractable to assess the entire solution space. For this reason surrogate metrics such as the inception score based on the evaluation via a separately trained deep net are employed [36]. The task of multi-modal semantic segmentation, which we consider here, allows for a direct and thus perhaps more meaningful manner of performance evaluation and could help guide the design of future generative architectures.
All in all we see a large field where our proposed Probabilistic U-Net can replace the currently applied deterministic U-Nets. Especially in the medical domain, with its often ambiguous images and highly critical decisions that depend on the correct interpretation of the image, our model’s segmentation hypotheses and their likelihoods could 1) inform diagnosis/classification probabilities or 2) guide steps to resolve ambiguities. Our method could prove useful beyond explicitly multi-modal tasks, as the inspectability of the Probabilistic U-Net’s latent space could yield insights for many segmentation tasks that are currently treated as a uni-modal problem.
6 Acknowledgements
The authors would like to thank Mustafa Suleyman, Trevor Back and the whole DeepMind team for their exceptional support, and Shakir Mohamed and Andrew Zisserman for very helpful comments and discussions. The authors acknowledge the National Cancer Institute and the Foundation for the National Institutes of Health, and their critical role in the creation of the free publicly available LIDC/IDRI Database used in this study.
Review questions
1. What is the main contribution of the paper regarding its application and experiments?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the clarity and positioning of the manuscript relative to prior work?
4. What are the specific concerns regarding the methodological innovation and contribution of the paper?
5. How does the reviewer evaluate the quality and significance of the experimental results?

Review
Post rebuttal: Authors have responded well to the issues raised, and I champion publication of this work.

Main idea: Use a conditional variational auto-encoder to produce well-calibrated segmentation hypotheses for a given input.

Strengths: The application is well motivated and experiments are convincing and state of the art. Convincing baselines, good discussion and conclusion. Nice illustrations in the appendix.

Weaknesses: No significant theoretical contribution. Possibly in response, the manuscript is a little vague in its positioning relative to prior work. While relevant prior work is cited, the reader is left with some ambiguity and, if not familiar with this prior work, might be misled to think that there is methodological innovation beyond the specifics of architecture and application. Based on its solid and nontrivial experimental contribution I advocate acceptance; but the manuscript would profit from a clearer enunciation of the fact that / what prior work is being built on.

Comments:
- Line 50: "treat pixels independently"; this is unclear (or faulty?), as the quoted papers also use an encoder/decoder structure. As a consequence, contribution 1 (line 74) is dubious or needs clarification.
- Contribution (2) is fine.
- Contribution (3): The statement is true, but the contribution is unclear. If the claim refers to the fact that the latent z is concatenated only at the fully connected layers, then this has been done before (e.g. in DISCO Nets by Bouchacourt et al., NIPS 2016).
- Contribution (4): The claim is vague. If we take generative models in their generality, then that encompasses e.g. most of Bayesian statistics, and the claim of only qualitative evaluation is obviously wrong. If we only consider generative models in the deep learning world, the statement is correct insofar as many papers only contain qualitative evaluation of "My pictures are prettier than yours"; but there are nevertheless quantitative metrics, such as the inception score, or the metrics used in the Wasserstein AE.
- Section 2: It is not made very obvious to the reader which parts of this VAE structure are novel and which are not. The paper does follow [4] and especially [5] closely (the latter should also be cited in line 82). So the only really novel part here is the U-Net structure of P(y|z,x). Concatenating z after the U-Net in (2) is new in this formulation, but not in general (e.g. as already mentioned, in DISCO Nets by Bouchacourt et al., NIPS 2016). Finally, there is no justification for the appearance of β in (4), but it is, up to the parameter name, identical to what Higgins et al. (ICLR 2017) do with their β-VAE. Especially since the authors choose β ≥ 1, which follows the Higgins et al. disentangling argument, and not the usual β_t ≤ 1, in which case it would be a time-dependent downscaling of the KL term to avoid too much regularization in the beginning of the training (but then again the references to earlier work would be missing).
NIPS | Title
A Probabilistic U-Net for Segmentation of Ambiguous Images
Abstract
Many real-world vision problems suffer from inherent ambiguities. In clinical applications for example, it might not be clear from a CT scan alone which particular region is cancer tissue. Therefore a group of graders typically produces a set of diverse but plausible segmentations. We consider the task of learning a distribution over segmentations given an input. To this end we propose a generative segmentation model based on a combination of a U-Net with a conditional variational autoencoder that is capable of efficiently producing an unlimited number of plausible hypotheses. We show on a lung abnormalities segmentation task and on a Cityscapes segmentation task that our model reproduces the possible segmentation variants as well as the frequencies with which they occur, doing so significantly better than published approaches. These models could have a high impact in real-world applications, such as being used as clinical decision-making algorithms accounting for multiple plausible semantic segmentation hypotheses to provide possible diagnoses and recommend further actions to resolve the present ambiguities.
1 Introduction
The semantic segmentation task assigns a class label to each pixel in an image. While in many cases the context in the image provides sufficient information to resolve the ambiguities in this mapping, there exists an important class of images where even the full image context is not sufficient to resolve all ambiguities. Such ambiguities are common in medical imaging applications, e.g., in lung abnormalities segmentation from CT images. A lesion might be clearly visible, but the information about whether it is cancer tissue or not might not be available from this image alone. Similar ambiguities are also present in photos. E.g. a part of fur visible under the sofa might belong to a cat or a dog, but it is not possible from the image alone to resolve this ambiguity2. Most existing segmentation algorithms either provide only one likely consistent hypothesis (e.g., “all pixels belong to a cat”) or a pixel-wise probability (e.g., “each pixel is 50% cat and 50% dog”).
Especially in medical applications where a subsequent diagnosis or a treatment depends on the segmentation map, an algorithm that only provides the most likely hypothesis might lead to misdiagnoses
∗work done during an internship at DeepMind. 2In [1] this is defined as ambiguous evidence in contrast to implicit class confusion, that stems from an ambiguous class definition (e.g. the concepts of desk vs. table). For the presented work this differentiation is not required.
32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.
and sub-optimal treatment. Providing only pixel-wise probabilities ignores all co-variances between the pixels, which makes a subsequent analysis much more difficult if not impossible. If multiple consistent hypotheses are provided, these can be directly propagated into the next step in a diagnosis pipeline, they can be used to suggest further diagnostic tests to resolve the ambiguities, or an expert with access to additional information can select the appropriate one(s) for the subsequent steps.
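To make the distinction concrete, the following toy sketch (ours, not from the paper) contrasts sampling consistent hypotheses from a joint distribution with sampling each pixel independently from the matching pixel-wise marginals; the two-pixel "image" and the 50/50 class probabilities are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two equally likely, internally consistent hypotheses for a 2-pixel "image":
# label 0 = cat, label 1 = dog (hypothetical classes).
modes = np.array([[0, 0],   # "all pixels belong to a cat"
                  [1, 1]])  # "all pixels belong to a dog"
joint_samples = modes[rng.integers(0, 2, size=1000)]

# The pixel-wise marginals of the above are 50% cat / 50% dog per pixel.
# Sampling each pixel independently from those marginals also produces the
# inconsistent maps [0, 1] and [1, 0], which are never valid hypotheses.
indep_samples = rng.integers(0, 2, size=(1000, 2))

print(np.unique(joint_samples, axis=0))  # only [0 0] and [1 1]
print(np.unique(indep_samples, axis=0))  # all four combinations
```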
Here we present a segmentation framework that provides multiple segmentation hypotheses for ambiguous images (Fig. 1a). Our framework combines a conditional variational auto encoder (CVAE) [2, 3, 4, 5] which can model complex distributions, with a U-Net [6] which delivers state-of-the-art segmentations in many medical application domains. A low-dimensional latent space encodes the possible segmentation variants. A random sample from this space is injected into the U-Net to produce the corresponding segmentation map. One key feature of this architecture is the ability to model the joint probability of all pixels in the segmentation map. This results in multiple segmentation maps, where each of them provides a consistent interpretation of the whole image. Furthermore our framework is able to also learn hypotheses that have a low probability and to predict them with the corresponding frequency. We demonstrate these features on a lung abnormalities segmentation task, where each lesion has been segmented independently by four experts, and on the Cityscapes dataset, where we artificially flip labels with a certain frequency during training.
A body of work with different approaches towards probabilistic and multi-modal segmentation exists. The most common approaches provide independent pixel-wise probabilities [7, 8]. These models induce a probability distribution by using dropout over spatial features. Whereas this strategy fulfills this line of work’s objective of quantifying the pixel-wise uncertainty, it produces inconsistent outputs. A simple way to produce plausible hypotheses is to learn an ensemble of (deep) models [9]. While the outputs produced by ensembles are consistent, they are not necessarily diverse and ensembles are typically not able to learn the rare variants as their members are trained independently. In order to overcome this, several approaches train models jointly using the oracle set loss [10], i.e. a loss that only accounts for the closest prediction to the ground truth. This has been explored in [11] and [1] using an ensemble of deep networks, and in [12] and [13] using one common deep network with M heads. While multi-head approaches may have the capacity to capture a diverse set of variants, they are not equipped to learn the occurrence frequencies of individual variants. Two common disadvantages of both ensembles and M heads models are their ungraceful scaling to large numbers of hypotheses, and their requirement of fixing the number of allowed hypotheses at training time. Another set of approaches to produce multiple diverse solutions relies on graphical models, such as junction chains [14], and more generally Markov Random Fields [15, 16, 17, 18]. While many of the
previous approaches are guaranteed to find the best diverse solutions, these are confined to structured problems whose dependencies can be described by tractable graphical models.
The task of image-to-image translation [19] tackles a very similar problem: an under-constrained domain transfer of images needs to be learned. Many of the recent approaches employ generative adversarial networks (GANs), which are known to suffer from challenges such as ‘mode-collapse’ [20]. In an attempt to solve the mode-collapse problem, the ‘bicycleGAN’ [21] involves a component that is similar in architecture to ours. In contrast to our proposed architecture, their model encompasses a fixed prior distribution and during training their posterior distribution is only conditioned on the output image. Very recent work on generating appearances given a shape encoding [22] also combines a U-Net with a VAE, and was developed concurrently to ours. In contrast to our proposal, their training requires an additional pretrained VGG-net that is employed as a reconstruction loss. Finally, [23] proposes a probabilistic model for structured outputs based on optimizing the dissimilarity coefficient [24] between the ground truth and predicted distributions. The resultant approach is assessed on the task of hand pose estimation, that is, predicting the location of 14 joints, arguably a simpler output space than the space of segmentations we consider here. Similarly to the approach presented below, they inject latent variables at a later stage of the network architecture.
The main contributions of this work are: (1) Our framework provides consistent segmentation maps instead of pixel-wise probabilities and can therefore give a joint likelihood of modes. (2) Our model can induce arbitrarily complex output distributions including the occurrence of very rare modes, and is able to learn calibrated probabilities of segmentation modes. (3) Sampling from our model is computationally cheap. (4) In contrast to many existing applications of deep generative models that can only be qualitatively evaluated, our application and datasets allow quantitative performance evaluation including penalization of missing modes.
2 Network Architecture and Training Procedure
Our proposed network architecture is a combination of a conditional variational auto encoder [2, 3, 4, 5] with a U-Net [6], with the objective of learning a conditional density model over segmentations, conditioned on the image.
Sampling. The central component of our architecture (Fig. 1a) is a low-dimensional latent space R^N (e.g., N = 6, which performed best in our experiments). Each position in this space encodes a segmentation variant. The ‘prior net’, parametrized by weights ω, estimates the probability of these variants for a given input image X. This prior probability distribution (called P in the following) is modelled as an axis-aligned Gaussian with mean µ_prior(X;ω) ∈ R^N and variance σ_prior(X;ω) ∈ R^N. To predict a set of m segmentations we apply the network m times to the same input image (only a small part of the network needs to be re-evaluated in each iteration, see below). In each iteration i ∈ {1, . . . , m}, we draw a random sample z_i ∈ R^N from P
z_i ∼ P(·|X) = N(µ_prior(X;ω), diag(σ_prior(X;ω))),   (1)
broadcast the sample to an N-channel feature map with the same shape as the segmentation map, and concatenate this feature map to the last activation map of a U-Net (the U-Net is parameterized by weights θ). A function f_comb. composed of three subsequent 1×1 convolutions (ψ being the set of their weights) combines the information and maps it to the desired number of classes. The output, S_i, is the segmentation map corresponding to point z_i in the latent space:
S_i = f_comb.(f_U-Net(X;θ), z_i; ψ).   (2)
Notice that when drawing m samples for the same input image, we can reuse the output of the prior net and the feature activations of the U-Net. Only the function f_comb. needs to be re-evaluated m times.
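A minimal PyTorch-style sketch of this sampling step, under our own naming; the FComb module and draw_samples helper below are stand-ins for illustration, not the authors' released code:

```python
import torch
import torch.nn as nn

class FComb(nn.Module):
    """Three subsequent 1x1 convolutions combining U-Net features and z."""
    def __init__(self, channels, latent_dim, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels + latent_dim, channels, 1), nn.ReLU(),
            nn.Conv2d(channels, channels, 1), nn.ReLU(),
            nn.Conv2d(channels, num_classes, 1))

    def forward(self, unet_features, z):
        B, _, H, W = unet_features.shape
        z_map = z[:, :, None, None].expand(-1, -1, H, W)       # broadcast z to (B, N, H, W)
        return self.net(torch.cat([unet_features, z_map], 1))  # Eq. (2)

def draw_samples(unet_features, mu_prior, sigma_prior, f_comb, m=16):
    # The U-Net features and the prior net's (mu, sigma) are computed once;
    # only f_comb is re-run for each of the m samples.
    outs = []
    for _ in range(m):
        z = mu_prior + sigma_prior * torch.randn_like(mu_prior)  # Eq. (1)
        outs.append(f_comb(unet_features, z))
    return outs
```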
Training. The networks are trained with the standard training procedure for conditional VAEs (Fig. 1b), i.e. by minimizing the variational lower bound (Eq. 4). The main difference with respect to training a deterministic segmentation model is that the training process additionally needs to find a useful embedding of the segmentation variants in the latent space. This is solved by introducing a ‘posterior net’, parametrized by weights ν, that learns to recognize a segmentation variant (given the raw image X and the ground truth segmentation Y) and to map this to a position µ_post(X,Y;ν) ∈ R^N with some uncertainty σ_post(X,Y;ν) ∈ R^N in the latent space. The output is denoted as the posterior distribution Q. A sample z from this distribution,
z ∼ Q(·|X,Y) = N(µ_post(X,Y;ν), diag(σ_post(X,Y;ν))),   (3)
combined with the activation map of the U-Net (Eq. 2) must result in a predicted segmentation S identical to the ground truth segmentation Y provided in the training example. A cross-entropy loss penalizes differences between S and Y (the cross-entropy loss arises from treating the output S as the parameterization of a pixel-wise categorical distribution P_c). Additionally there is a Kullback-Leibler divergence D_KL(Q‖P) = E_{z∼Q}[log Q − log P] which penalizes differences between the posterior distribution Q and the prior distribution P. Both losses are combined as a weighted sum with a weighting factor β, as done in [25]:
L(Y,X) = E_{z∼Q(·|Y,X)}[−log P_c(Y|S(X, z))] + β · D_KL(Q(z|Y,X) ‖ P(z|X)).   (4)
The training is done from scratch with randomly initialized weights. During training, this KL loss “pulls” the posterior distribution (which encodes a segmentation variant) and the prior distribution towards each other. On average (over multiple training examples) the prior distribution will be modified in a way such that it “covers” the space of all presented segmentation variants for a specific input image³.
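A sketch of the training objective in Eq. (4), assuming the axis-aligned Gaussians for P and Q described above; this is our illustrative code, not the released implementation, and the expectation over z ∼ Q is approximated with a single reparameterized sample as is standard for VAEs:

```python
import torch.nn.functional as F
from torch.distributions import Normal, kl_divergence

def probabilistic_unet_loss(logits, y, mu_post, sigma_post,
                            mu_prior, sigma_prior, beta=1.0):
    # logits: (B, C, H, W) from f_comb given a posterior sample z ~ Q;
    # y: (B, H, W) integer ground-truth labels.
    recon = F.cross_entropy(logits, y)            # -log Pc(Y | S(X, z))
    q = Normal(mu_post, sigma_post)               # Q(z | X, Y)
    p = Normal(mu_prior, sigma_prior)             # P(z | X)
    kl = kl_divergence(q, p).sum(dim=1).mean()    # sum latent dims, mean over batch
    return recon + beta * kl
```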
3 Performance Measures and Baseline Methods
In this section we first present the metric used to assess the performance of all approaches, and then describe each competitor approach used in the comparisons.
3.1 Performance measures
As it is common in the semantic segmentation literature, we employ the intersection over union (IoU) as a measure to compare a pair of segmentations. However, in the present case, we not only want to compare a deterministic prediction with a unique ground truth, but rather we are interested in comparing distributions of segmentations. To do so, we use the generalized energy distance [26, 27, 28], which leverages distances between observations:
D²_GED(P_gt, P_out) = 2 E[d(S, Y)] − E[d(S, S′)] − E[d(Y, Y′)],   (5)
where d is a distance measure, Y and Y′ are independent samples from the ground truth distribution P_gt, and similarly, S and S′ are independent samples from the predicted distribution P_out. The energy distance D_GED is a metric as long as d is also a metric [29]. In our case we choose d(x, y) = 1 − IoU(x, y), which, as proved in [30, 31], is a metric. In practice, we only have access to samples from the distributions that the models induce, so we rely on statistics of Eq. 5, D̂²_GED. The details about its computation for each experiment are presented in Appendix B.
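For orientation, a naive plug-in estimator of Eq. (5) from finite sample sets, with d = 1 − IoU on binary masks; the paper's exact computation, including its handling of empty masks, is in Appendix B, so the IoU convention for two empty masks below is our assumption:

```python
import numpy as np

def d_iou(a, b):
    # a, b: boolean masks. Assumed convention: IoU of two empty masks is 1,
    # i.e. their distance is 0 (the paper's exact handling is in Appendix B).
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0
    return 1.0 - np.logical_and(a, b).sum() / union

def ged_squared(samples, gts, d=d_iou):
    # samples: model predictions; gts: ground-truth masks for one image.
    # Note this naive estimator includes the zero diagonal terms d(x, x).
    cross = np.mean([d(s, y) for s in samples for y in gts])
    diversity_s = np.mean([d(s, t) for s in samples for t in samples])
    diversity_y = np.mean([d(y, t) for y in gts for t in gts])
    return 2.0 * cross - diversity_s - diversity_y
```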
3.2 Baseline methods
With the aim of providing context for the performance of our proposed approach we compare against a range of baselines. To the best of our knowledge there exists no other work that has considered capturing a distribution over multi-modal segmentations and has measured the agreement with such a distribution. For fair comparison, we train the baseline models whose architectures are depicted in Fig. 2 in the exact same manner as we train ours. The baseline methods all involve the same U-Net architecture, i.e. they share the same core component and thus employ comparable numbers of learnable parameters in the segmentation tasks.
Dropout U-Net (Fig. 2a). Our ‘Dropout U-Net’ baselines follow the Bayesian SegNet’s [7] proposition: we drop out the activations of the respective incoming layers of the three inner-most encoder and decoder blocks with a dropout probability of p = 0.5 during training as well as when sampling.
³ An open source re-implementation of our approach can be found at https://github.com/SimonKohl/probabilistic_unet.
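For reference, a minimal sketch of how such a Dropout U-Net produces different samples at test time, by keeping dropout active during the forward pass; the layer placement follows our reading of [7] and the helper name is ours:

```python
import torch.nn.functional as F

def mc_dropout(x, p=0.5):
    # training=True keeps dropout active even at inference, so repeated
    # forward passes through the network yield different segmentation samples.
    return F.dropout(x, p=p, training=True)
```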
U-Net Ensemble (Fig. 2b). We report results for ensembles with the number of members matching the required number of samples (referred to as ‘U-Net Ensemble’). The original deterministic variant of the U-Net is the 1-sample corner case of an ensemble.
M-Heads (Fig. 2c). Aiming for diverse semantic segmentation outputs, the works of [12] and [13] propose to branch off M heads after the last layer of a deep net, each of which contributes one output variant. An adjusted cross-entropy loss that adaptively assigns heads to ground-truth hypotheses is employed to promote diversity while reducing the risk of idle heads: the loss of the best performing head is weighted with a factor of 1 − ε, while the remaining heads each contribute with a weight of ε/(M − 1) to the loss. For our ‘M-Heads’ baselines we again employ a U-Net core and set ε = 0.05 as proposed by [12]. In order to allow for the evaluation of 4, 8 and 16 samples, we train M-Heads models with the corresponding number of heads.
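A sketch of this relaxed winner-take-all loss; as a simplification of ours, the best head is selected per batch rather than per example:

```python
import torch
import torch.nn.functional as F

def m_heads_loss(head_logits, y, eps=0.05):
    # head_logits: list of M tensors (B, C, H, W); y: (B, H, W) integer labels.
    losses = torch.stack([F.cross_entropy(h, y) for h in head_logits])  # (M,)
    M = len(head_logits)
    weights = torch.full((M,), eps / (M - 1))
    weights[losses.argmin()] = 1.0 - eps   # best head is weighted with 1 - eps
    return (weights * losses).sum()
```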
Image2Image VAE (Fig. 2d). In [21] the authors propose a U-Net VAE-GAN hybrid for multimodal image-to-image translation that owes its stochasticity to normally distributed latents that are broadcast and fed into the encoder path of the U-Net. In order to deal with the complex solution space in image-to-image translation tasks, they employ an adversarial discriminator as additional supervision alongside a reconstruction loss. In the fully supervised setting of semantic segmentation such an additional learning signal is however not necessary, and we therefore train with a cross-entropy loss only. In contrast to our proposition, this baseline, which we refer to as the ‘Image2Image VAE’, employs a prior that is not conditioned on the input image (a fixed normal distribution) and a posterior net that is not conditioned on the input either.
In all cases we examine the models’ performance when drawing a different number of samples (1, 4, 8 and 16) from each of them.
4 Results
A quantitative evaluation of multiple segmentation predictions per image requires annotations from multiple labelers. Here we consider two datasets: The LIDC-IDRI dataset [32, 33, 34] which contains 4 annotations per input, and the Cityscapes dataset [35], which we artificially modify by adding synonymous classes to introduce uncertainty in the way concepts are labelled.
4.1 Lung abnormalities segmentation
The LIDC-IDRI dataset [32, 33, 34] contains 1018 lung CT scans from 1010 lung patients with manual lesion segmentations from four experts. This dataset is a good representation of the typical ambiguities that appear in CT scans. For each scan, 4 radiologists (from a total of 12) provided annotation masks for lesions that they independently detected and considered to be abnormal. We use the masks resulting from a second reading in which the radiologists were shown the anonymized annotations of the others and were allowed to make adjustments to their own masks.
For our experiments we split this dataset into a training set composed of 722 patients, a validation set composed of 144 patients, and a test set composed of the remaining 144 patients. We then resampled
the CT scans to 0.5 mm × 0.5 mm in-plane resolution (the original resolution is between 0.461 mm and 0.977 mm, 0.688 mm on average) and cropped 2D images (180 × 180 pixels) centered at the lesion positions. The lesion positions are those where at least one of the experts segmented a lesion. By cropping the scans, the resultant task is, in isolation, not directly clinically relevant. However, this allows us to ignore the vast areas in which all labelers agree, in order to focus on those where there is uncertainty. This resulted in 8882 images in the training set, 1996 images in the validation set and 1992 images in the test set. Because the experts can disagree whether the lesion is abnormal tissue, up to 3 masks per image can be empty. Fig. 3a shows an example of such lesion-centered images and the masks provided by 4 graders.
As all models share the same U-Net core component and for fairness and ease of comparability, we let all models undergo the same training schedule, which is detailed in subsection H.1.
In order to grasp some intuition about the kind of samples produced by each model, we show in Fig. 3a, as well as in Appendix F, representative results for the baseline methods and our proposed Probabilistic U-Net. Fig. 4a shows the squared generalized energy distance D̂²_GED for all models as a function of the number of samples. The data accumulations visible as horizontal stripes are due to the existence of empty ground-truth masks. The energy distance on the lung abnormalities test set (1992 images) decreases for all models as more samples are drawn, indicating an improved matching of the ground-truth distribution as well as enhanced sample diversity. Our proposed
Probabilistic U-Net outperforms all baselines when sampling 4, 8 and 16 times. The performance at 16 samples is significantly better than that of the baselines (p-value ∼ O(10^−13)), according to the Wilcoxon signed-rank test. Finally, in Appendix E we show the results of an experiment regarding the capacity different models have to distinguish between unambiguous and ambiguous instances (i.e. instances where graders disagree on the presence of a lesion).
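Such a significance test can be run with SciPy's paired Wilcoxon signed-rank test on per-image energy distances; the arrays below are placeholders of ours, not the paper's data:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
ged_ours = rng.uniform(0.2, 0.5, size=1992)                    # placeholder per-image GED
ged_baseline = ged_ours + rng.uniform(0.0, 0.1, size=1992)     # placeholder competitor

stat, p_value = wilcoxon(ged_ours, ged_baseline)  # paired, non-parametric
print(p_value)
```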
4.2 Cityscapes semantic segmentation
As a second dataset we use the Cityscapes dataset [35]. It contains images of street scenes taken from a car with corresponding semantic segmentation maps. A total of 19 different semantic classes are labelled. Based on this dataset we designed a task that allows full control of the ambiguities: we create ambiguities by artificial random flips of five classes to newly introduced classes. We flip ‘sidewalk’ to ‘sidewalk 2’ with a probability of 8/17, ‘person’ to ‘person 2’ with a probability of 7/17, ‘car’ to ‘car 2’ with 6/17, ‘vegetation’ to ‘vegetation 2’ with 5/17 and ‘road’ to ‘road 2’ with probability 4/17. This choice yields distinct probabilities for the ensuing 2^5 = 32 discrete modes, with probabilities ranging from 10.9% (all unflipped) down to 0.5% (all flipped). The official training dataset with fine-grained annotation labels comprises 2975 images and the validation dataset contains 500 images. We employ this official validation set as a test set to report results on, and split off 274 images (corresponding to the 3 cities of Darmstadt, Mönchengladbach and Ulm) from the official training set as our internal validation set. As in the previous experiment, in this task we use a similar setting for the training processes of all approaches, which we present in detail in subsection H.2.
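This kind of artificial ambiguity amounts to one independent coin flip per class and image; in the sketch below only the flip probabilities come from the text, while the numeric label ids are hypothetical:

```python
import numpy as np

IDS  = {'road': 0, 'sidewalk': 1, 'person': 2, 'car': 3, 'vegetation': 4}   # hypothetical ids
NEW  = {'road': 19, 'sidewalk': 20, 'person': 21, 'car': 22, 'vegetation': 23}
PROB = {'road': 4/17, 'sidewalk': 8/17, 'person': 7/17, 'car': 6/17, 'vegetation': 5/17}

def flip_labels(label_map, rng):
    out = label_map.copy()
    for name, p in PROB.items():
        if rng.random() < p:                  # one coin flip per class, per image
            out[out == IDS[name]] = NEW[name]
    return out

rng = np.random.default_rng(0)
flipped = flip_labels(np.zeros((8, 8), dtype=int), rng)  # toy 8x8 label map
```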
Fig. 3b shows samples of each approach in the comparison given one input image. In Appendix G we show further samples of other images, produced by our approach. Fig. 4b shows that the Probabilistic U-Net on the Cityscapes task outperforms the baseline methods when sampling 4, 8 and 16 times in terms of the energy distance. This edge in segmentation performance at 16 samples is highly significant according to the Wilcoxon signed-rank test (p-value ∼ O(10^−77)). We have also conducted ablation experiments in order to explore which elements of our architecture contribute to its performance. These were: (1) fixing the prior, (2) fixing the prior and not using the context in the posterior, and (3) injecting the latent features at the beginning of the U-Net. Each of these variations resulted in a lower performance. Detailed results can be found in Appendix D.
Reproducing the segmentation probabilities. In the Cityscapes segmentation task, we can provide further analysis by leveraging our knowledge of the underlying conditional distribution that we have set by design. In particular we compare the frequency with which every model predicts each mode, to the corresponding ground truth probability of that mode. To compute the frequency of each mode by each model, we draw 16 samples from that model for all images in the test set. Then we count the number of those samples that have that mode as the closest (using 1-IoU as the distance function).
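A sketch of this mode-frequency count, using a mean-IoU-based distance over classes as a stand-in for the paper's 1 − IoU; both helper functions are our simplifications:

```python
import numpy as np

def d_multiclass(a, b, classes):
    # 1 - mean IoU over the given classes (a simplified multi-class distance).
    ious = []
    for c in classes:
        union = np.logical_or(a == c, b == c).sum()
        if union > 0:
            ious.append(np.logical_and(a == c, b == c).sum() / union)
    return 1.0 - (np.mean(ious) if ious else 1.0)

def mode_frequencies(samples_per_image, modes_per_image, classes, n_modes=32):
    # modes_per_image: for each test image, its 32 ground-truth variants.
    counts = np.zeros(n_modes)
    for samples, modes in zip(samples_per_image, modes_per_image):
        for s in samples:  # e.g. 16 samples per test image
            counts[np.argmin([d_multiclass(s, m, classes) for m in modes])] += 1
    return counts / counts.sum()
```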
In Fig. 5 (and Figs. 8, 9, 10 in Appendix C) we report the mode-wise frequencies for all 32 modes in the Cityscapes task and show that the Probabilistic U-Net is the only model in this comparison that is able to closely capture the frequencies of a large combinatorial space of hypotheses including very rare modes, thus supplying calibrated likelihoods of modes. The Image2Image VAE is the only model among competitors that picks up on all variants, but the frequencies are far off, as can be seen in its deviation from the bisector line in blue. The other baselines perform worse still, in that all of them fail to represent some modes, and the modes they do capture do not match the expected frequencies.
4.3 Analysis of the Latent Space
The embedding of the segmentation variants in a low-dimensional latent space allows a qualitative analysis of the internal representation of our model. For a 2D or 3D latent space we can directly visualize where the segmentation variants get assigned. See Appendix A for details.
5 Discussion and conclusions
Our first set of experiments demonstrates that our proposed architecture provides consistent segmentation maps that closely match the multi-modal ground-truth distributions given by the expert graders in the lung abnormalities task and by the combinatorial ground-truth segmentation modes in the Cityscapes task. The employed IoU-based energy distance measures both whether the models’ individual samples are coherent and whether they are produced with the expected frequencies. It not only penalizes predicted segmentation variants that are far away from the ground truth, but also penalizes missing variants. On this task the Probabilistic U-Net is able to significantly outperform the considered baselines, indicating its capability to model the joint likelihood of segmentation variants.
The second type of experiments demonstrates that our model scales to complex output distributions including the occurrence of very rare modes. With 32 discrete modes of largely differing occurrence likelihoods (0.5% to 10.9%), the Cityscapes task requires the ability to closely match complex data distributions. Here too our model performs best and picks the segmentation modes very close to the expected frequencies, all the way into the regime of very unlikely modes, thus defying mode-collapse and exhibiting excellent probability calibration. As an additional advantage our model scales to such large numbers of modes without requiring any prior assumptions on the number of modes or hypotheses.
The lower performance of the baseline models relative to our proposition can be attributed to design choices of these models. While the Dropout U-Net successfully models the pixel-wise data distribution (Fig. 8a bottom right, in the Appendix), such pixel-wise mixtures of variants can not be valid hypotheses in themselves (see Fig. 3). The U-Net Ensemble’s members are trained independently and each of them can only learn the most likely segmentation variant, as attested to by Fig. 8b. In contrast, the closely related M-Heads model can pick up on multiple discrete segmentation modes, due to the joint training procedure that enables diversity. The training does, however, not allow it to correctly represent frequencies, and it requires knowledge of the number of present variants (see Fig. 9a, in the Appendix). Furthermore, neither the U-Net Ensemble nor the M-Heads can deal with the combinatorial explosion of segmentation variants when multiple aspects vary independently of each other. The Image2Image VAE shares similarities with our model, but as its prior is fixed and not conditioned on the input image, it can not learn to capture variant frequencies by allocating corresponding probability mass to the respective latent space regions. Fig. 17 in the Appendix shows a severe miscalibration of variant likelihoods on the lung abnormalities task that is also reflected in its corresponding energy distance. Furthermore, in this architecture, the latent samples are fed into the U-Net’s encoder path, while we feed in the samples just after the decoder path. This design choice in the Image2Image VAE requires the model to carry the latent information all the way through the U-Net core, while simultaneously performing the recognition required for segmentation, which might additionally complicate training (see analysis in Appendix D). Besides that, our design choice of late injection has the additional advantage that we can produce a large set of samples for a given image at a very low computational cost: for each new sample from the latent space, only the network part after the injection needs to be re-executed to produce the corresponding segmentation map (this bears similarity to the approach taken in [23], where a generative model is employed to model hand pose estimation).
Aside from the ability to capture arbitrary modes with their corresponding probability conditioned on the input, our proposed Probabilistic U-Net allows one to inspect its latent space. This is because, as opposed to e.g. GAN-based approaches, VAE-like models explicitly parametrize distributions, a characteristic that grants direct access to the corresponding likelihood landscape. Appendix A discusses how the Probabilistic U-Net chooses to structure its latent spaces.
Compared to aforementioned concurrent work for image-to-image tasks [22], our model disentangles the prior and the segmentation net. This can be of particular relevance in medical imaging, where processing 3D scans is common. In this case it is desirable to condition on the entire scan, while retaining the possibility to process the scan tile by tile in order to be able to process large volumes with large models with a limited amount of GPU memory.
On a more general note, we would like to remark that current image-to-image translation tasks only allow subjective (and expensive) performance evaluations, as it is typically intractable to assess the entire solution space. For this reason surrogate metrics such as the inception score based on the evaluation via a separately trained deep net are employed [36]. The task of multi-modal semantic segmentation, which we consider here, allows for a direct and thus perhaps more meaningful manner of performance evaluation and could help guide the design of future generative architectures.
All in all we see a large field where our proposed Probabilistic U-Net can replace the currently applied deterministic U-Nets. Especially in the medical domain, with its often ambiguous images and highly critical decisions that depend on the correct interpretation of the image, our model’s segmentation hypotheses and their likelihoods could 1) inform diagnosis/classification probabilities or 2) guide steps to resolve ambiguities. Our method could prove useful beyond explicitly multi-modal tasks, as the inspectability of the Probabilistic U-Net’s latent space could yield insights for many segmentation tasks that are currently treated as a uni-modal problem.
6 Acknowledgements
The authors would like to thank Mustafa Suleyman, Trevor Back and the whole DeepMind team for their exceptional support, and Shakir Mohamed and Andrew Zisserman for very helpful comments and discussions. The authors acknowledge the National Cancer Institute and the Foundation for the National Institutes of Health, and their critical role in the creation of the free publicly available LIDC/IDRI Database used in this study. | 1. What is the main contribution of the paper in image segmentation?
2. What are the strengths of the proposed approach, particularly in addressing ambiguity in medical images?
3. How does the reviewer assess the effectiveness of the experimental validation?
4. Are there any suggestions for improving the practical applicability of the method?
5. What are the limitations of the current approach regarding its ability to handle complex scenarios? | Review | Review
This paper focuses on the problem of image segmentation, addressing the specific issue of segmenting ambiguous images for which multiple interpretations may be consistent with the image evidence. This type of problem may arise in medical domains, in which case awareness of this ambiguity would allow for subsequent testing/refinement, as opposed to simply predicting a single best segmentation hypothesis. With this motivation, this paper proposes a method for producing multiple segmentation hypotheses for a given potentially ambiguous image, where each hypothesis is a globally consistent segmentation.

The approach taken is a combination of a conditional variational auto-encoder (CVAE) and a U-Net CNN. Specifically: a prior net is used to model a latent distribution conditioned on an input image. Samples from this distribution are concatenated with the final activation layers of a U-Net, and used to produce a segmentation map for each sample. During training, a posterior network is used to produce a latent distribution conditioned on the input image and a given ground-truth segmentation. The full system is then trained by minimizing the cross-entropy between the predicted and ground-truth segmentations and the KL divergence between the prior and posterior latent distributions.

The proposed method is evaluated on two different datasets: a lung abnormalities dataset in which each image has 4 associated ground-truth segmentations from different radiologists, and a synthetic test on the Cityscapes dataset where new classes are added through random flips (e.g. the sidewalk class becomes a "sidewalk 2" class with probability 8/17). Comparison is done against existing baseline methods for producing multiple segmentations, such as U-Net Ensemble and M-Heads (branching off the last layer of the network). Experiments show consistently improved performance using the proposed method, evaluating the predicted and ground-truth segmentation distributions with the generalized energy distance. Additional analysis shows that the proposed method is able to recover lower-probability modes of the underlying ground-truth distribution with the correct frequency, unlike the baseline methods.

Overall, I found this to be a well-written paper with a nice method for addressing an important problem. Experimental validation is detailed and convincing. One small possible suggestion: part of the stated motivation for the paper is to allow for some indication of ambiguity to guide subsequent analysis/testing. It could be nice, as an additional performance metric, to have some rough evaluation in this practical context, as a gauge for an application-specific improvement. For instance, a criterion could be that lung images should be flagged for additional study if 1 or more of the experts disagree on whether the lesion is abnormal tissue; how often would this be correctly produced using the multiple predicted segmentations?
NIPS | Title
A Probabilistic U-Net for Segmentation of Ambiguous Images
Abstract
Many real-world vision problems suffer from inherent ambiguities. In clinical applications for example, it might not be clear from a CT scan alone which particular region is cancer tissue. Therefore a group of graders typically produces a set of diverse but plausible segmentations. We consider the task of learning a distribution over segmentations given an input. To this end we propose a generative segmentation model based on a combination of a U-Net with a conditional variational autoencoder that is capable of efficiently producing an unlimited number of plausible hypotheses. We show on a lung abnormalities segmentation task and on a Cityscapes segmentation task that our model reproduces the possible segmentation variants as well as the frequencies with which they occur, doing so significantly better than published approaches. These models could have a high impact in real-world applications, such as being used as clinical decision-making algorithms accounting for multiple plausible semantic segmentation hypotheses to provide possible diagnoses and recommend further actions to resolve the present ambiguities.
1 Introduction
The semantic segmentation task assigns a class label to each pixel in an image. While in many cases the context in the image provides sufficient information to resolve the ambiguities in this mapping, there exists an important class of images where even the full image context is not sufficient to resolve all ambiguities. Such ambiguities are common in medical imaging applications, e.g., in lung abnormalities segmentation from CT images. A lesion might be clearly visible, but the information about whether it is cancer tissue or not might not be available from this image alone. Similar ambiguities are also present in photos. E.g. a part of fur visible under the sofa might belong to a cat or a dog, but it is not possible from the image alone to resolve this ambiguity2. Most existing segmentation algorithms either provide only one likely consistent hypothesis (e.g., “all pixels belong to a cat”) or a pixel-wise probability (e.g., “each pixel is 50% cat and 50% dog”).
Especially in medical applications where a subsequent diagnosis or a treatment depends on the segmentation map, an algorithm that only provides the most likely hypothesis might lead to misdiagnoses
∗work done during an internship at DeepMind. 2In [1] this is defined as ambiguous evidence in contrast to implicit class confusion, that stems from an ambiguous class definition (e.g. the concepts of desk vs. table). For the presented work this differentiation is not required.
32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada.
and sub-optimal treatment. Providing only pixel-wise probabilities ignores all co-variances between the pixels, which makes a subsequent analysis much more difficult if not impossible. If multiple consistent hypotheses are provided, these can be directly propagated into the next step in a diagnosis pipeline, they can be used to suggest further diagnostic tests to resolve the ambiguities, or an expert with access to additional information can select the appropriate one(s) for the subsequent steps.
Here we present a segmentation framework that provides multiple segmentation hypotheses for ambiguous images (Fig. 1a). Our framework combines a conditional variational auto encoder (CVAE) [2, 3, 4, 5] which can model complex distributions, with a U-Net [6] which delivers state-of-the-art segmentations in many medical application domains. A low-dimensional latent space encodes the possible segmentation variants. A random sample from this space is injected into the U-Net to produce the corresponding segmentation map. One key feature of this architecture is the ability to model the joint probability of all pixels in the segmentation map. This results in multiple segmentation maps, where each of them provides a consistent interpretation of the whole image. Furthermore our framework is able to also learn hypotheses that have a low probability and to predict them with the corresponding frequency. We demonstrate these features on a lung abnormalities segmentation task, where each lesion has been segmented independently by four experts, and on the Cityscapes dataset, where we artificially flip labels with a certain frequency during training.
A body of work with different approaches towards probabilistic and multi-modal segmentation exists. The most common approaches provide independent pixel-wise probabilities [7, 8]. These models induce a probability distribution by using dropout over spatial features. Whereas this strategy fulfills this line of work’s objective of quantifying the pixel-wise uncertainty, it produces inconsistent outputs. A simple way to produce plausible hypotheses is to learn an ensemble of (deep) models [9]. While the outputs produced by ensembles are consistent, they are not necessarily diverse and ensembles are typically not able to learn the rare variants as their members are trained independently. In order to overcome this, several approaches train models jointly using the oracle set loss [10], i.e. a loss that only accounts for the closest prediction to the ground truth. This has been explored in [11] and [1] using an ensemble of deep networks, and in [12] and [13] using one common deep network with M heads. While multi-head approaches may have the capacity to capture a diverse set of variants, they are not equipped to learn the occurrence frequencies of individual variants. Two common disadvantages of both ensembles and M heads models are their ungraceful scaling to large numbers of hypotheses, and their requirement of fixing the number of allowed hypotheses at training time. Another set of approaches to produce multiple diverse solutions relies on graphical models, such as junction chains [14], and more generally Markov Random Fields [15, 16, 17, 18]. While many of the
previous approaches are guaranteed to find the best diverse solutions, these are confined to structured problems whose dependencies can be described by tractable graphical models.
The task of image-to-image translation [19] tackles a very similar problem: an under-constrained domain transfer of images needs to be learned. Many of the recent approaches employ generative adversarial networks (GANs) which are known to suffer from challenges such as ‘mode-collapse’ [20]. In an attempt to solve the mode-collapse problem, the ‘bicycleGAN’ [21] involves a component that is similar in architecture to ours. In contrast to our proposed architecture, their model encompasses a fixed prior distribution and during training their posterior distribution is only conditioned on the output image. Very recent work on generating appearances given a shape encoding [22] also combines a U-Net with a VAE, and was developed concurrently to ours. In contrast to our proposal, their training requires an additional pretrained VGG-net that is employed as a reconstruction loss. Finally, in [23] is proposed a probabilistic model for structured outputs based on optimizing the dissimilarity coefficient [24] between the ground truth and predicted distributions. The resultant approach is assessed on the task of hand pose estimation, that is, predicting the location of 14 joints, arguably a simpler space compared to the space of segmentations we consider here. Similarly to the approach presented below, they inject latent variables at a later stage of the network architecture.
The main contributions of this work are: (1) Our framework provides consistent segmentation maps instead of pixel-wise probabilities and can therefore give a joint likelihood of modes. (2) Our model can induce arbitrarily complex output distributions including the occurrence of very rare modes, and is able to learn calibrated probabilities of segmentation modes. (3) Sampling from our model is computationally cheap. (4) In contrast to many existing applications of deep generative models that can only be qualitatively evaluated, our application and datasets allow quantitative performance evaluation including penalization of missing modes.
2 Network Architecture and Training Procedure
Our proposed network architecture is a combination of a conditional variational auto encoder [2, 3, 4, 5] with a U-Net [6], with the objective of learning a conditional density model over segmentations, conditioned on the image.
Sampling. The central component of our architecture (Fig. 1a) is a low-dimensional latent space RN (e.g., N = 6, which performed best in our experiments). Each position in this space encodes a segmentation variant. The ‘prior net’, parametrized by weights ω, estimates the probability of these variants for a given input image X . This prior probability distribution (called P in the following) is modelled as an axis-aligned Gaussian with meanµprior(X;ω) ∈ RN and varianceσprior(X;ω) ∈ RN . To predict a set of m segmentations we apply the network m times to the same input image (only a small part of the network needs to be re-evaluated in each iteration, see below). In each iteration i ∈ {1, . . . ,m}, we draw a random sample zi ∈ RN from P
zi ∼ P (·|X) = N ( µprior(X;ω), diag(σprior(X;ω)) ) , (1)
broadcast the sample to an N -channel feature map with the same shape as the segmentation map, and concatenate this feature map to the last activation map of a U-Net (the U-Net is parameterized by weights θ). A function fcomb. composed of three subsequent 1× 1 convolutions (ψ being the set of their weights) combines the information and maps it to the desired number of classes. The output, Si, is the segmentation map corresponding to point zi in the latent space:
Si = fcomb. ( fU-Net(X; θ), zi;ψ ) . (2)
Notice that when drawing m samples for the same input image, we can reuse the output of the prior net and the feature activations of the U-Net. Only the function fcomb. needs to be re-evaluated m times.
Training. The networks are trained with the standard training procedure for conditional VAEs (Fig. 1b), i.e. by minimizing the variational lower bound (Eq. 4). The main difference with respect to training a deterministic segmentation model, is that the training process additionally needs to find a useful embedding of the segmentation variants in the latent space. This is solved by introducing a ‘posterior net’, parametrized by weights ν, that learns to recognize a segmentation variant (given the raw imageX and the ground truth segmentation Y ) and to map this to a positionµpost(X,Y ; ν) ∈ RN
with some uncertainty σpost(X,Y ; ν) ∈ RN in the latent space. The output is denoted as posterior distribution Q. A sample z from this distribution,
z ∼ Q(·|X,Y ) = N ( µpost(X,Y ; ν), diag(σpost(X,Y ; ν)) ) , (3)
combined with the activation map of the U-Net (Eq. 1) must result in a predicted segmentation S identical to the ground truth segmentation Y provided in the training example. A cross-entropy loss penalizes differences between S and Y (the cross-entropy loss arises from treating the output S as the parameterization of a pixel-wise categorical distribution Pc). Additionally there is a Kullback-Leibler divergence DKL(Q||P ) = Ez∼Q [ log Q− log P ] which penalizes differences between the posterior distribution Q and the prior distribution P . Both losses are combined as a weighted sum with a weighting factor β, as done in [25]:
L(Y,X) = Ez∼Q(·|Y,X) [ − log Pc(Y |S(X, z)) ] + β ·DKL ( Q(z|Y,X)||P (z|X) ) . (4)
The training is done from scratch with randomly initialized weights. During training, this KL loss “pulls” the posterior distribution (which encodes a segmentation variant) and the prior distribution towards each other. On average (over multiple training examples) the prior distribution will be modified in a way such that it “covers” the space of all presented segmentation variants for a specific input image3.
3 Performance Measures and Baseline Methods
In this section we first present the metric used to assess the performance of all approaches, and then describe each competitor approach used in the comparisons.
3.1 Performance measures
As it is common in the semantic segmentation literature, we employ the intersection over union (IoU) as a measure to compare a pair of segmentations. However, in the present case, we not only want to compare a deterministic prediction with a unique ground truth, but rather we are interested in comparing distributions of segmentations. To do so, we use the generalized energy distance [26, 27, 28], which leverages distances between observations:
D2GED(Pgt, Pout) = 2E [ d(S, Y ) ] − E [ d(S, S ′ ) ] − E [ d(Y, Y ′ ) ] , (5)
where d is a distance measure, Y and Y ′
are independent samples from the ground truth distribution Pgt, and similarly, S and S ′ are independent samples from the predicted distribution Pout. The energy distance DGED is a metric as long as d is also a metric [29]. In our case we choose d(x, y) = 1− IoU(x, y), which as proved in [30, 31], is a metric. In practice, we only have access to samples from the distributions that models induce, so we rely on statistics of Eq. 5, D̂2GED. The details about its computation for each experiment are presented in Appendix B.
3.2 Baseline methods
With the aim of providing context for the performance of our proposed approach we compare against a range of baselines. To the best of our knowledge there exists no other work that has considered capturing a distribution over multi-modal segmentations and has measured the agreement with such a distribution. For fair comparison, we train the baseline models whose architectures are depicted in Fig. 2 in the exact same manner as we train ours. The baseline methods all involve the same U-Net architecture, i.e. they share the same core component and thus employ comparable numbers of learnable parameters in the segmentation tasks.
Dropout U-Net (Fig. 2a). Our ‘Dropout U-Net’ baselines follow the Bayesian segnet’s [7] proposition: we dropout the activations of the respective incoming layers of the three inner-most encoder and decoder blocks with a dropout probability of p = 0.5 during training as well as when sampling.
3An open source re-implementation of our approach can be found at https://github.com/SimonKohl/ probabilistic_unet.
U-Net Ensemble (Fig. 2b). We report results for ensembles with the number of members matching the required number of samples (referred to as ‘U-Net Ensemble’). The original deterministic variant of the U-Net is the 1-sample corner case of an ensemble.
M-Heads (Fig. 2c). Aiming for diverse semantic segmentation outputs, the works of [12] and [13] propose to branch off M heads after the last layer of a deep net each of which contributes one output variant. An adjusted cross-entropy loss that adaptively assigns heads to ground-truth hypotheses is employed to promote diversity while reducing the risk of idle heads: the loss of the best performing head is weighted with a factor of 1− , while the remaining heads each contribute with a weight of /(M − 1) to the loss. For our ‘M-Heads’ baselines we again employ a U-Net core and set = 0.05 as proposed by [12]. In order to allow for the evaluation of 4, 8 and 16 samples, we train M-Heads models with the corresponding number of heads.
Image2Image VAE (Fig. 2d). In [21] the authors propose a U-Net VAE-GAN hybrid for multimodal image-to-image translation, that owes its stochasticity to normal distributed latents that are broadcasted and fed into the encoder path of the U-Net. In order to deal with the complex solution space in image-to-image translation tasks, they employ an adversarial discriminator as additional supervision alongside a reconstruction loss. In the fully supervised setting of semantic segmentation such an additional learning signal is however not necessary and we therefore train with a cross-entropy loss only. In contrast to our proposition, this baseline, which we refer to as the ‘Image2Image VAE’, employs a prior that is not conditioned on the input image (a fixed normal distribution) and a posterior net that is not conditioned on the input either.
In all cases we examine the models’ performance when drawing a different number of samples (1, 4, 8 and 16) from each of them.
4 Results
A quantitative evaluation of multiple segmentation predictions per image requires annotations from multiple labelers. Here we consider two datasets: The LIDC-IDRI dataset [32, 33, 34] which contains 4 annotations per input, and the Cityscapes dataset [35], which we artificially modify by adding synonymous classes to introduce uncertainty in the way concepts are labelled.
4.1 Lung abnormalities segmentation
The LIDC-IDRI dataset [32, 33, 34] contains 1018 lung CT scans from 1010 lung patients with manual lesion segmentations from four experts. This dataset is a good representation of the typical ambiguities that appear in CT scans. For each scan, 4 radiologists (from a total of 12) provided annotation masks for lesions that they independently detected and considered to be abnormal. We use the masks resulting from a second reading in which the radiologists were shown the anonymized annotations of the others and were allowed to make adjustments to their own masks.
For our experiments we split this dataset into a training set composed of 722 patients, a validation set composed of 144 patients, and a test set composed of the remaining 144 patients. We then resampled
the CT scans to 0.5mm× 0.5mm in-plane resolution (the original resolution is between 0.461mm and 0.977mm, 0.688mm on average) and cropped 2D images (180 × 180 pixels) centered at the lesion positions. The lesion positions are those where at least one of the experts segmented a lesion. By cropping the scans, the resultant task is in isolation not directly clinically relevant. However, this allows us to ignore the vast areas in which all labelers agree, in order to focus on those where there is uncertainty. This resulted in 8882 images in the training set, 1996 images in the validation set and 1992 images in the test set. Because the experts can disagree whether the lesion is abnormal tissue, up to 3 masks per image can be empty. Fig. 3a shows an example of such lesion-centered images and the masks provided by 4 graders.
As all models share the same U-Net core component and for fairness and ease of comparability, we let all models undergo the same training schedule, which is detailed in subsection H.1.
In order to grasp some intuition about the kind of samples produced by each model, we show in Fig. 3a, as well as in Appendix F, representative results for the baseline methods and our proposed Probabilistic U-Net. Fig. 4a shows the squared generalized energy distance D̂2GED for all models as a function of the number of samples. The data accumulations visible as horizontal stripes are owed to the existence of empty ground-truth masks. The energy distance on the 1992 images large lung abnormalities test set, decreases for all models as more samples are drawn indicating an improved matching of the ground-truth distribution as well as enhanced sample diversity. Our proposed
Probabilistic U-Net outperforms all baselines when sampling 4, 8 and 16 times. The performance at 16 samples is found significantly higher than that of the baselines (p-value ∼ O(10−13)), according to the Wilcoxon signed-rank test. Finally, in Appendix E we show the results of an experiment regarding the capacity different models have to distinguish between unambiguous and ambiguous instances (i.e. instances where graders disagree on the presence of a lesion).
4.2 Cityscapes semantic segmentation
As a second dataset we use the Cityscapes dataset [35]. It contains images of street scenes taken from a car with corresponding semantic segmentation maps. A total of 19 different semantic classes are labelled. Based on this dataset we designed a task that allows full control of the ambiguities: we create ambiguities by artificial random flips of five classes to newly introduced classes. We flip ‘sidewalk’ to ‘sidewalk 2’ with a probability of 8/17, ‘person’ to ‘person 2’ with a probability of 7/17, ‘car’ to ‘car 2’ with 6/17, ‘vegetation’ to ‘vegetation 2’ with 5/17 and ‘road’ to ‘road 2’ with probability 4/17. This choice yields distinct probabilities for the ensuing 25 = 32 discrete modes with probabilities ranging from 10.9% (all unflipped) down to 0.5% (all flipped). The official training dataset with fine-grained annotation labels comprises 2975 images and the validation dataset contains 500 images. We employ this offical validation set as a test set to report results on, and split off 274 images (corresponding to the 3 cities of Darmstadt, Mönchengladbach and Ulm) from the official training set as our internal validation set. As in the previous experiment, in this task we use a similar setting for the training processes of all approaches, which we present in detail in subsection H.2.
Fig. 3b shows samples of each approach in the comparison given one input image. In Appendix G we show further samples of other images, produced by our approach. Fig. 4b shows that the Probabilistic U-Net on the Cityscapes task outperforms the baseline methods when sampling 4, 8 and 16 times in terms of the energy distance. This edge in segmentation performance at 16 samples is highly significant according to the Wilcoxon signed-rank test (p-value ∼ O(10−77)). We have also conducted ablation experiments in order to explore which elements of our architecture contribute to its performance. These were (1) Fixing the prior, (2) Fixing the prior, and not using the context in the posterior and (3) Injecting the latent features at the beginning of the U-Net. Each of these variations resulted in a lower performance. Detailed results can be found in Appendix D.
Reproducing the segmentation probabilities. In the Cityscapes segmentation task, we can provide further analysis by leveraging our knowledge of the underlying conditional distribution that we have set by design. In particular we compare the frequency with which every model predicts each mode, to the corresponding ground truth probability of that mode. To compute the frequency of each mode by each model, we draw 16 samples from that model for all images in the test set. Then we count the number of those samples that have that mode as the closest (using 1-IoU as the distance function).
In Fig. 5 (and Figs. 8, 9, 10 in Appendix C) we report the mode-wise frequencies for all 32 modes in the Cityscape task and show that the Probabilistic U-Net is the only model in this comparison that is able to closely capture the frequencies of a large combinatorial space of hypotheses including very rare modes, thus supplying calibrated likelihoods of modes. The Image2Image VAE is the only
model among competitors that picks up on all variants, but the frequencies are far off as can be seen in its deviation from the bisector line in blue. The other baselines perform worse still in that all of them fail to represent modes and the modes they do capture do not match the expected frequencies.
4.3 Analysis of the Latent Space
The embedding of the segmentation variants in a low-dimensional latent space allows a qualitative analysis of the internal representation of our model. For a 2D or 3D latent space we can directly visualize where the segmentation variants get assigned. See Appendix A for details.
5 Discussion and conclusions
Our first set of experiments demonstrates that our proposed architecture provides consistent segmentation maps that closely match the multi-modal ground-truth distributions given by the expert graders in the lung abnormalities task and by the combinatorial ground-truth segmentation modes in the Cityscapes task. The employed IoU-based energy distance measures whether the models’ individual samples are both coherent as well as whether they are produced with the expected frequencies. It not only penalizes predicted segmentation variants that are far away from the ground truth, but also penalizes missing variants. On this task the Probabilistic U-Net is able to significantly outperform the considered baselines, indicating its capability to model the joint likelihood of segmentation variants.
The second type of experiments demonstrates that our model scales to complex output distributions including the occurrence of very rare modes. With 32 discrete modes of largely differing occurrence likelihoods (0.5% to 10.9%), the Cityscapes task requires the ability to closely match complex data distributions. Here too our model performs best and picks the segmentation modes very close to the expected frequencies, all the way into the regime of very unlikely modes, thus defying mode-collapse and exhibiting excellent probability calibration. As an additional advantage our model scales to such large numbers of modes without requiring any prior assumptions on the number of modes or hypotheses.
The lower performance of the baseline models relative to our proposition can be attributed to design choices of these models. While the Dropout U-Net successfully models the pixel-wise data distribution (Fig. 8a bottom right, in the Appendix), such pixel-wise mixtures of variants can not be valid hypotheses in themselves (see Fig. 3). The U-Net Ensemble’s members are trained independently and each of them can only learn the most likely segmentation variant as attested to by Fig. 8b. In contrast to that the closely related M-Heads model can pick up on multiple discrete segmentation modes, due to the joint training procedure that enables diversity. The training does however not allow to correctly represent frequencies and requires knowledge of the number of present variants (see Fig. 9a, in the Appendix). Furthermore neither the U-Net Ensemble, nor the M-Heads can deal with the combinatorial explosion of segmentation variants when multiple aspects vary independently of each other. The Image2Image VAE shares similarities with our model, but as its prior is fixed and not conditioned on the input image, it can not learn to capture variant frequencies by allocating corresponding probability mass to the respective latent space regions. Fig. 17 in the Appendix shows a severe miss-calibration of variant likelihoods on the lung abnormalities task that is also reflected in its corresponding energy distance. Furthermore, in this architecture, the latent samples are fed into the U-Net’s encoder path, while we feed in the samples just after the decoder path. This design choice in the Image2Image VAE requires the model to carry the latent information all the way through the U-Net core, while simultaneously performing the recognition required for segmentation, which might additionally complicate training (see analysis in Appendix D). Beside that, our design choice of late injection has the additional advantage that we can produce a large set of samples for a given image at a very low computational cost: for each new sample from the latent space only the network part after the injection needs to be re-executed to produce the corresponding segmentation map (this bears similarity to the approach taken in [23], where a generative model is employed to model hand pose estimation).
Aside from the ability to capture arbitrary modes with their corresponding probability conditioned on the input, our proposed Probabilistic U-Net allows one to inspect its latent space. This is because, as opposed to e.g. GAN-based approaches, VAE-like models explicitly parametrize distributions, a characteristic that grants direct access to the corresponding likelihood landscape. Appendix A discusses how the Probabilistic U-Net chooses to structure its latent spaces.
Compared to the aforementioned concurrent work for image-to-image tasks [22], our model disentangles the prior and the segmentation net. This can be of particular relevance in medical imaging, where processing 3D scans is common. In this case it is desirable to condition on the entire scan, while retaining the possibility to process the scan tile by tile in order to handle large volumes with large models under a limited amount of GPU memory.
On a more general note, we would like to remark that current image-to-image translation tasks only allow subjective (and expensive) performance evaluations, as it is typically intractable to assess the entire solution space. For this reason, surrogate metrics such as the inception score, which relies on evaluation by a separately trained deep net, are employed [36]. The task of multi-modal semantic segmentation, which we consider here, allows for a direct and thus perhaps more meaningful manner of performance evaluation and could help guide the design of future generative architectures.
All in all, we see a large field where our proposed Probabilistic U-Net can replace the currently applied deterministic U-Nets. Especially in the medical domain, with its often ambiguous images and highly critical decisions that depend on the correct interpretation of the image, our model's segmentation hypotheses and their likelihoods could 1) inform diagnosis/classification probabilities or 2) guide steps to resolve ambiguities. Our method could prove useful beyond explicitly multi-modal tasks, as the inspectability of the Probabilistic U-Net's latent space could yield insights for many segmentation tasks that are currently treated as uni-modal problems.
6 Acknowledgements
The authors would like to thank Mustafa Suleyman, Trevor Back and the whole DeepMind team for their exceptional support, and Shakir Mohamed and Andrew Zisserman for very helpful comments and discussions. The authors acknowledge the National Cancer Institute and the Foundation for the National Institutes of Health, and their critical role in the creation of the free publicly available LIDC/IDRI Database used in this study. | 1. What is the main contribution of the paper in the field of generative segmentation models?
2. How does the proposed method differ from other approaches in terms of its ability to produce plausible segmentation hypotheses?
3. What are some potential limitations or areas for improvement in the proposed approach?
4. How does the reviewer assess the quality and impact of the paper's contributions and findings? | Review | Review
This paper deals with the problem of learning a distribution over segmentations given an input image. For that, the authors propose a generative segmentation model based on a combination of a U-Net with a conditional variational autoencoder that is capable of efficiently producing an unlimited number of plausible segmentation hypotheses. The problem is challenging, well-motivated and well presented. The related work is properly presented and the novelty of the proposed method is clear. The proposed method is original and constitutes a sufficient contribution. The validation of the proposal and comparison with baseline methods is correct, and the experimental results are quite convincing. In my opinion, the paper can be accepted as it is.
NIPS | Title
Learning to Mutate with Hypergradient Guided Population
Abstract
Computing the gradient of model hyperparameters, i.e., hypergradient, enables a promising and natural way to solve the hyperparameter optimization task. However, gradient-based methods could lead to suboptimal solutions due to the non-convex nature of optimization in a complex hyperparameter space. In this study, we propose a hyperparameter mutation (HPM) algorithm to explicitly consider a learnable trade-off between using global and local search, where we adopt a population of student models to simultaneously explore the hyperparameter space guided by hypergradient and leverage a teacher model to mutate the underperforming students by exploiting the top ones. The teacher model is implemented with an attention mechanism and is used to learn a mutation schedule for different hyperparameters on the fly. Empirical evidence on synthetic functions is provided to show that HPM outperforms hypergradient significantly. Experiments on two benchmark datasets are also conducted to validate the effectiveness of the proposed HPM algorithm for training deep neural networks compared with several strong baselines.
1 Introduction
Hyperparameter optimization (HPO) [4, 11] is one of the fundamental research problems in the field of automated machine learning. It aims to maximize the model performance by tuning model hyperparameters automatically, which could be achieved either by searching a fixed hyperparameter configuration setting [3, 22, 32, 9] from the predefined hyperparameter space or by learning a hyperparameter schedule along with the training process [17, 25]. Among existing methods, hypergradient [2, 26] forms a promising direction, as it naturally enables gradient descent on hyperparameters.
Hypergradient is usually defined as the gradient of a validation loss function w.r.t. hyperparameters. Previous methods mainly focus on computing hypergradients by using reverse-mode differentiation [2, 6, 26], or on designing a differentiable response function [12, 25] for hyperparameters, yet without explicitly considering the non-convex nature of optimization in a complex hyperparameter space. Thus, while hypergradient methods can deliver highly efficient local search solutions, they may easily get stuck in local minima and achieve suboptimal performance. This can be clearly observed on synthetic functions whose landscapes resemble those of the HPO problem (see Sec. 4.1). It also leads to the question: can we find a way to help hypergradient with global information?
The population based hyperparameter search methods serve as a good complement to hypergradient, such as evolutionary search [27, 5], particle swarm optimization [8], and the population based ∗Work done when Zhiqiang Tao interned at Alibaba Group and worked at Northeastern University.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
training [17, 21, 14], which generally employ a population of agent models to search different hyperparameter configurations and update hyperparameters with a mutation operation. The population can provide sufficient diversity to explore the hyperparameter space globally. However, it is non-trivial to incorporate hypergradients in the population based methods due to a possible conflict between the hand-crafted mutation operation (e.g., random perturbation) and the direction of hypergradient descent.
To address the above challenges, we propose a novel hyperparameter mutation (HPM) scheduling algorithm in this study, which adopts a population based training framework to explicitly learn a trade-off (i.e., a mutation schedule) between the hypergradient-guided local search and the mutation-driven global search. We develop the proposed framework by alternately performing model training and hyperparameter mutation, where the former jointly optimizes model parameters and hyperparameters upon gradients, while the latter leverages a student-teaching schema for exploration. Particularly, HPM treats the population as a group of student models and employs a teacher model to mutate the hyperparameters of underperforming students. We instantiate our teacher model as a neural network with an attention mechanism and learn the mutation direction towards minimizing the validation loss. Benefiting from learning-to-mutate, the mutation is adaptively scheduled for the population based training with hypergradient.
In the experiments, we extensively discuss the properties of the proposed HPM algorithm and show that HPM significantly outperforms hypergradient and global search methods on synthetic functions. We also employ the HPM scheduler in training deep neural networks on two benchmark datasets, where experimental results validate the effectiveness of HPM compared with several strong baselines.
2 Related Work
Roughly, we divide the existing HPO methods into two categories, namely, hyperparameter configuration search and hyperparameter schedule search. Hyperparameter configuration search methods assume that the optimal hyperparameter is a set of fixed values, whereas hyperparameter schedule search methods relax this assumption and allow hyperparameters to change within a single trial.
Hyperparameter configuration. For hyperparameter configuration search methods, we may divide existing methods into three subcategories: model-free, Bayesian optimization, and gradient-based methods. The first subcategory includes grid search [31], random search [3], successive halving [18], Hyperband [22], etc. Grid search adopts an exhaustive strategy to select hyperparameter configurations on pre-defined grids, and the random search method randomly selects hyperparameters from the configuration space within a given budget. Inspired by the amazing success of random search, successive halving [18] and Hyperband [22] are further designed with multi-armed bandit strategies to adjust the computation resource of each hyperparameter configuration upon their performance.
All the above HPO methods are model-free as they do not make any distributional assumption about the hyperparameters. Differently, Bayesian optimization methods [32, 16, 7] assume the existence of a distribution of the model performance over the hyperparameter search space. This category of methods estimates the model performance distribution based on the tested hyperparameter configurations, and predicts the next hyperparameter configuration by maximizing an acquisition function. However, due to the distribution estimation, the computation cost of Bayesian optimization methods can be high, and thus the hyperparameter search is time-consuming. Recently, BOHB [32, 9] utilizes model-free methods such as Hyperband to improve the efficiency of Bayesian optimization.
The gradient-based HPO method is closely related to this work. Pioneering works [2, 6] propose to employ the reverse-mode differentiation (RMD) to calculate hypergradients on the validation loss based on the minimizer given by a number of model training iterations. Following this line, research efforts [26] have been made to reduce the memory complexity of RMD to handle the large-scale HPO problem. A forward-mode differentiation algorithm is proposed in [12] to further improve the efficiency of computing hypergradients based on the chain rule and a dynamic system formulation.
Hyperparameter Schedule. Two representative ways of changing hyperparameters are gradient-based methods such as self-tuning networks (STN) [25] and mutation-based methods such as population based training (PBT) [17, 21, 14]. STN employs hypernetworks [24] as a response function to map hyperparameters to model parameters so that it can obtain hypergradients by backpropagating the validation error through the hypernetworks. PBT performs an evolutionary search over the
hyperparameter space with a population of agent models. It provides a discrete mutation schedule via random perturbation. Two other interesting works related to this regime are hypergradient descent [1] and online meta-optimization [35]. However, both of these works focus more on online learning-rate adaptation than on the generic HPO problem. The proposed HPM algorithm belongs to the category of hyperparameter schedule. Different from existing methods, HPM explicitly learns suitable mutations when optimizing with hypergradient in a complex hyperparameter space.
3 Hyperparameter Mutation (HPM)
3.1 Preliminary
Given input space $\mathcal{X}$ and output space $\mathcal{Y}$, we define $f(\cdot\,; \theta, h): \mathcal{X} \to \mathcal{Y}$ as a model parameterized by $\theta$ and $h$, where $\theta \in \mathbb{R}^D$ represents the model parameters and $h \in \mathbb{R}^N$ vectorizes $N$ hyperparameters sampled from the hyperparameter configuration space $\mathcal{H} = \mathcal{H}_1 \times \cdots \times \mathcal{H}_N$. $\mathcal{H}_i$ is a set of configuration values for the $i$-th hyperparameter. Let $\mathcal{D}_{trn}, \mathcal{D}_{val}: \{(x, y)\}$ be the training and validation sets. We define $L(\theta, h): \mathbb{R}^D \times \mathbb{R}^N \to \mathbb{R}$ as a function of parameters and hyperparameters by
$$L(\theta, h) = \sum_{(x, y) \in \mathcal{D}} \ell(f(x; \theta, h), y), \qquad (1)$$
where $\ell(\cdot, \cdot)$ denotes a loss function and $\mathcal{D}$ refers to $\mathcal{D}_{trn}$ or $\mathcal{D}_{val}$. Upon Eq. (1), we further define $L_{trn}$ and $L_{val}$ as the training and validation loss functions by computing $L(\theta, h)$ on $\mathcal{D}_{trn}$ and $\mathcal{D}_{val}$, respectively. Generally, we train the model $f$ on $\mathcal{D}_{trn}$ with a fixed hyperparameter $h$ or a human-crafted schedule strategy, and peek at the model performance through $L_{val}$ with the learned parameter $\theta$. Thus, the validation loss is tightly bound to the hyperparameter selection.
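As a small illustration of Eq. (1), the following sketch accumulates the loss over a dataset in PyTorch; it assumes a classification setting with cross-entropy and a model that accepts $h$ as an extra input (e.g., a hypernetwork-style model), which are our own illustrative choices.

```python
import torch.nn.functional as F

# Sketch of Eq. (1): the loss L(theta, h) accumulated over a dataset D.
# We assume `model(x, h)` evaluates f(x; theta, h); in a plain network, h
# would instead configure dropout rates, augmentation strengths, etc.
def dataset_loss(model, h, dataset):
    total = 0.0
    for x, y in dataset:
        logits = model(x, h)
        total = total + F.cross_entropy(logits, y, reduction="sum")
    return total

# L_trn and L_val are this same function evaluated on D_trn and D_val.
```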
Hyperparameter optimization (HPO) solves the above issue, and it could be formulated as
$$\min_{h \in \mathcal{H}} L_{val}(\theta^*, h) \quad \text{s.t.} \quad \theta^* = \operatorname*{arg\,min}_{\theta} L_{trn}(\theta, h), \qquad (2)$$
which seeks an optimal hyperparameter configuration $h^*$ or an optimal hyperparameter schedule. Hypergradient [2, 26, 30, 12] provides a natural way to solve Eq. (2) by performing gradient descent. However, due to the non-convex nature of a hyperparameter space, this kind of method may get stuck in local minima and thus lead to suboptimal performance. In contrast, the population based methods utilize a mutation-driven strategy to search the hyperparameter space thoroughly, which provides the potential to help hypergradient escape from local valleys. In this study, we focus on developing a trade-off solution between hypergradient and the mutation-driven search.
3.2 Population Based Hyperparameter Search
We adopt a similar population based training framework as proposed in [17]. Let $S_t = \{S_t^k\}_{k=1}^{K}$ be a population of agent models w.r.t. $f(\cdot\,; \theta, h)$ at the $t$-th training step, where $S_t^k$ refers to the $k$-th agent model, $T$ represents the total number of training steps, and $K$ denotes the population size. Generally, an iterative optimization method (e.g., stochastic gradient descent) is used to optimize the model weights of each agent. Hence, for $\forall k$, one training step can be described as
$$\theta_{t+1}^{k} \leftarrow S_t^{k}(\theta_t^{k}, h_t^{k}), \qquad (3)$$
where $S_t^k$ updates the model parameters from $\theta_t^k$ to $\theta_{t+1}^k$ with a fixed hyperparameter $h_t^k$ during the training step. The population based hyperparameter search is given by
$$k^* = \operatorname*{arg\,min}_{k}\, \{L_{val}(\theta_T^{k}, h_T^{k})\}_{k=1}^{K}. \qquad (4)$$
In Eq. (4), $\theta_T^k = S_{T-1}^k(S_{T-2}^k(\ldots S_0^k(\theta_0^k, h_0^k) \ldots, h_{T-2}^k), h_{T-1}^k)$ is obtained by chaining a sequence of update steps with Eq. (3), and the hyperparameters are updated through some pre-defined or rule-based mutation operations (e.g., random perturbation). More specifically, we summarize the searching process with population based training [17] as follows.
• Train step updates $\theta_{t-1}^k$ to $\theta_t^k$ and evaluates the validation loss $L_{val}(\theta_t^k, h_t^k)$ for each $k$. One training step could be one epoch or a fixed number of iterations. An agent model is ready to be exploited and explored after one step.
• Exploit $S_t$ by selection methods, e.g., truncation selection, which divides $S_t$ into three sets of top, middle, and bottom agents in terms of validation performance. The agent models in bottom exploit the top ones by cloning their model parameters and hyperparameters, i.e., $(\theta_t^k, h_t^k) \leftarrow (\theta_t^*, h_t^*)$, where $k \in$ bottom and $*$ represents the index of a top performer.
• Explore the hyperparameters with a mutation operation, denoted as $\Phi$. As in [17], $\Phi$ keeps non-bottom agents unchanged and randomly perturbs a bottom agent's hyperparameters.
The population based training (PBT) methods [17, 21] simultaneously explore the hyperparameter space with a group of agent models. PBT inherits the merits of random search and leverages an exploit & explore strategy to alternately optimize the model parameter $\theta$ (by training steps) and the hyperparameter $h$ (by mutation). This leads to a joint optimization over $\theta$ and $h$, and eventually provides an optimal hyperparameter schedule, i.e., $h_0^{k^*}, \ldots, h_{T-1}^{k^*}$ given by Eq. (4), among the population of agents (a minimal sketch of this loop is given below). However, PBT has two limitations. 1) For each training step, the joint optimization stays at a coarse level since $S_t(\theta_t, h_t)$ updates $\theta_t$ while fixing $h_t$. 2) The hyperparameters are mainly updated by the mutation operation, yet a learnable mutation is under-explored.
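The sketch referenced above (an illustration rather than the reference implementation) is as follows; `train_one_step` and `validate` are caller-supplied helpers, and the perturbation range matches the $[0.8, 1.2]$ setting used for random mutation later in Sec. 4.1.

```python
import copy
import random
from dataclasses import dataclass

# Sketch of the PBT loop: train, then truncation-select, then exploit & explore.
@dataclass
class Agent:
    theta: object                 # model parameters
    h: list                       # hyperparameter vector
    score: float = float("inf")   # latest validation loss

def pbt(agents, train_one_step, validate, num_steps, frac=0.2):
    for _ in range(num_steps):
        for a in agents:
            train_one_step(a)                     # Eq. (3): update theta, h fixed
            a.score = validate(a)
        agents.sort(key=lambda a: a.score)        # lower validation loss is better
        n = max(1, int(frac * len(agents)))
        for a in agents[-n:]:                     # bottom agents
            src = random.choice(agents[:n])       # exploit: clone a top performer
            a.theta = copy.deepcopy(src.theta)
            a.h = [v * random.uniform(0.8, 1.2) for v in src.h]  # explore: perturb
    return min(agents, key=lambda a: a.score)     # Eq. (4)
```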
3.3 Hypergradient Guided Population
We propose to use hypergradient to guide the population based hyperparameter search. To obtain the hypergradient, we define $\theta(h): \mathbb{R}^N \to \mathbb{R}^D$ as a response function of the hyperparameter $h$ to approximate the model parameter $\theta$. By using $\theta(h)$, we can extend the agent model to $S_t(\theta_t(h_t), h_t)$ and formulate our hyperparameter mutation (HPM) scheduling algorithm as
$$\min_{h_T}\, \{L_{val}(\theta_T^{k}(h_T^{k}), h_T^{k})\}_{k=1}^{K}, \qquad (5)$$
where $(\theta_T^k(h_T^k), h_T^k)$ is obtained by alternately proceeding with one hypertraining step and one learnable mutation step, as shown in Fig. 1. It is worth noting that $h_T$ is optimized over the population in a sequential update manner, i.e., $(\theta_{t-1}^k(h_{t-1}^k), h_{t-1}^k) \to (\theta_t^k(h_t^k), h_t^k)$, where $h_t^k$ is updated by hypergradient and mutation at each step $t$. Thus, optimizing $h_T$ in Eq. (5) is equivalent to optimizing the hyperparameter schedule $h_0 \to \cdots \to h_t \to \cdots \to h_T$. Hypertraining jointly optimizes $\theta$ and $h$ with hypergradients. Specifically, $(\theta, h)$ is updated by
$$\theta_t = \theta_{t-1}(h_{t-1}) - \eta_\theta \nabla_\theta, \qquad h_t = h_{t-1} - \eta_h \nabla_h, \qquad (6)$$
where $\nabla_\theta = \partial L_{trn}/\partial \theta$ is the gradient of the model parameter and $\nabla_h$ is the hypergradient computed by
$$\nabla_h = \frac{\partial L_{val}(\theta(h), h)}{\partial \theta} \frac{\partial \theta}{\partial h} + \frac{\partial L_{val}(\theta(h), h)}{\partial h}. \qquad (7)$$
The computation of the hypergradient in Eq. (7) mainly depends on the response function $\theta(h)$. In this work, $\theta(h)$ is implemented with hypernetworks [24, 25], which provide a flexible and efficient way to compute hypergradients.
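As a toy illustration of Eq. (7), the sketch below uses a linear response function $\theta(h) = Wh + b$ as a stand-in for the layer-wise hypernetworks of [24, 25], together with a made-up validation loss; autograd then delivers both terms of the hypergradient in a single backward pass.

```python
import torch

# Toy sketch of Eq. (7): a linear response theta(h) = W h + b stands in for
# the hypernetworks of [24, 25]; the validation loss is a placeholder that
# depends on both theta(h) and h directly, so both terms of Eq. (7) appear.
W = torch.randn(4, 2, requires_grad=True)
b = torch.randn(4, requires_grad=True)
h = torch.tensor([0.3, 0.7], requires_grad=True)

theta = W @ h + b                                  # response function theta(h)
loss = (theta ** 2).sum() + 0.1 * (h ** 2).sum()   # placeholder L_val(theta(h), h)
loss.backward()
print(h.grad)                                      # the hypergradient of Eq. (7)
```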
Algorithm 1 Hyperparameter Optimization via HPM
Let $S$ be a set of student models, and $T$ be the given budget
for $t = 1$ to $T$ do
    for $S_{t-1}^k \in S_{t-1}$ (could be parallelized) do
        Update $S_{t-1}^k(\theta_{t-1}^k, h_{t-1}^k)$ to $S_t^k(\theta_t^k, h_t^k)$ by one hypertraining step with Eq. (6) and Eq. (7)
    Divide $S_t$ into top, middle, and bottom students by the truncation selection method
    for $S_t^k \in$ bottom do
        Clone model parameters as $\theta_t^k \leftarrow \theta_t^*$, where $(\theta_t^*, h_t^*) \in$ top
        Train the teacher network $g_\phi(h_t^k)$ with Eq. (10) conditioning on $(\theta_t^*, h_t^*)$
        Mutate the hyperparameter with Eq. (8) as $h_t^k \leftarrow g_\phi(h_t^k) \odot h_t^*$
return $\{h_0^*, \ldots, h_{T-1}^*\}$, $\theta_T^*$
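For readers who prefer an executable form, the following compact Python sketch mirrors Algorithm 1 (an illustrative reading, not the authors' released code): `hypertrain_step` stands for the joint update of Eqs. (6)-(7) and returns the validation loss, `teacher.fit` and `teacher.mutate` wrap the $g_\phi$ update of Eq. (10), and hyperparameters are NumPy arrays so Eq. (8)'s Hadamard product is elementwise `*`.

```python
import copy

# Compact sketch of Algorithm 1 (illustrative; helper APIs are assumptions).
def hpm(students, hypertrain_step, teacher, budget, frac=0.2):
    schedule = []
    for _ in range(budget):
        for s in students:
            s.score = hypertrain_step(s)          # Eqs. (6)-(7): joint (theta, h) step
        students.sort(key=lambda s: s.score)      # ascending validation loss
        n = max(1, int(frac * len(students)))
        top, bottom = students[:n], students[-n:]
        for s in bottom:
            best = top[0]
            s.theta = copy.deepcopy(best.theta)   # exploit: clone a top student
            teacher.fit(best.theta, best.h, s.h)  # Eq. (10): learn the mutation
            s.h = teacher.mutate(s.h) * best.h    # Eq. (8): alpha ⊙ h*
        schedule.append(students[0].h.copy())
    return schedule, students[0].theta
```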
Learnable mutation employs a similar exploit strategy as in Section 3.2 (without $h_t^k \leftarrow h_t^*$) and develops a student-teaching schema [10, 34] for exploration. Particularly, after updating $S_{t-1}$ to $S_t$ via one hypertraining step, we treat each agent $S_t^k \in S_t$ as a student model and learn a teacher model to mutate the underperforming students' hyperparameters. The mutation module $\Phi$ is developed as
$$h_t^k = \Phi(h_t^k, h_t^*) = \alpha \odot h_t^*, \qquad (8)$$
where $h_t^k \in$ bottom, $h_t^* \in$ top, $\odot$ is the Hadamard product, and $\alpha \in \mathbb{R}^N$ denotes the mutation weights. In the following, we will show how to learn $\alpha$ with the teacher network.
3.4 Learning to Mutate
We formulate our teacher model $g_\phi$ as a neural network with an attention mechanism parameterized by $\phi = \{W, V\}$, where $W \in \mathbb{R}^{N \times M}$ and $V \in \mathbb{R}^{N \times M}$ are two learnable parameters and $M$ represents the number of attention units, as shown in Fig. 2. It takes as input a bottom student's hyperparameter $h_t^k$ and computes the mutation weights by
$$\alpha = g_\phi(h_t^k) = 1 + \tanh(c), \qquad c = W \,\mathrm{softmax}(V^\top h_t^k), \qquad (9)$$
where $\alpha \in [0, 2]^N$ and $c \in \mathbb{R}^N$ is a mass vector that characterizes the mutation degree for each dimension of $h$. The benefits of using the attention mechanism are two-fold. 1) It provides sufficient model capacity with a key-value architecture, which uses the key slots stored in $V$ to address different underperforming hyperparameters and assigns the mutations with the corresponding memory slots in $W$. 2) $g_\phi$ enables a learnable way to adaptively mutate hyperparameters along with the training process, where $\alpha \to 1$ gives a mild mutation for a small exploration (update) step, and $\alpha \to 0$ or $\alpha \to 2$ encourages an aggressive exploration of the hyperparameter space.
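A minimal PyTorch sketch of the key-value teacher in Eq. (9) (a reading of the description above, not the authors' code) could look as follows; the 64 slots match the setting reported in Sec. 4.2.

```python
import torch
import torch.nn as nn

# Sketch of the teacher g_phi of Eq. (9): keys V address the hyperparameter
# vector, values W emit the mass vector c, and alpha = 1 + tanh(c) in [0, 2]^N.
class Teacher(nn.Module):
    def __init__(self, num_hparams, num_slots=64):
        super().__init__()
        self.V = nn.Parameter(torch.randn(num_hparams, num_slots) * 0.01)  # keys
        self.W = nn.Parameter(torch.randn(num_hparams, num_slots) * 0.01)  # values

    def forward(self, h):                             # h: (N,) hyperparameters
        attn = torch.softmax(self.V.t() @ h, dim=0)   # (M,) attention over slots
        c = self.W @ attn                             # (N,) mass vector
        return 1.0 + torch.tanh(c)                    # alpha, the mutation weights

teacher = Teacher(num_hparams=15)     # e.g. the 15 CIFAR-10 hyperparameters
alpha = teacher(torch.rand(15))
h_mutated = alpha * torch.rand(15)    # Eq. (8): alpha ⊙ h*
```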
We aim to learn the mutation direction towards minimizing $L_{val}$. To this end, we train our teacher model $g_\phi$ conditioning on $(\theta_t^*, h_t^*)$ by
$$\min_{\phi = \{W, V\}} L_{val}(\theta_t^*(h_t'), h_t'), \qquad (10)$$
where $h_t' = \alpha \odot h_t^* = g_\phi(h_t^k) \odot h_t^*$. The parameters of $g_\phi$ are updated by backpropagating the hypergradients given in Eq. (7) through the chain rule. By freezing the cloned model parameters and hyperparameters $(\theta_t^*, h_t^*)$, $g_\phi$ can focus on learning the mutations that minimize $L_{val}$. Please refer to the supplementary material for more details about training the teacher model.
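One teacher update per Eq. (10) could then be sketched as below, assuming the `Teacher` module above plus caller-supplied `theta_of` (the frozen response function) and `L_val` helpers; these names are illustrative assumptions.

```python
import torch

# Sketch of one teacher step for Eq. (10); the cloned student stays frozen,
# so the hypergradient of L_val flows only into the teacher parameters (W, V).
def teacher_step(teacher, optimizer, h_bottom, h_top, theta_of, L_val):
    h_prime = teacher(h_bottom) * h_top        # h' = g_phi(h^k) ⊙ h*
    loss = L_val(theta_of(h_prime), h_prime)   # objective of Eq. (10)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# opt = torch.optim.Adam(teacher.parameters(), lr=1e-3)  # the setting of Sec. 4.2
```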
Algorithm 1 summarizes the entire HPM scheduling algorithm. Particularly, HPM computes hypergradients with hypernetworks [24, 25], which add a layer-wise linear transformation between hyperparameters and model parameters. The hypernetwork can be efficiently computed via feed-forward and backpropagation operations. Moreover, since the teacher network is trained with the frozen student model, the additional computing cost it brings in is much less than training a student model. Thus, the time complexity of HPM is mainly subject to the population size $K$. While the hypertraining step could be parallelized, the whole population cannot be asynchronously updated due to the centralized teaching process. This can be effectively addressed by introducing an asynchronous HPM, similar to [17]. We leave it as future work and focus on learning to mutate in this study.
4 Experiments
4.1 Synthetic Functions
One common strategy for exploring the properties of hyperparameters is to perform hyperparameter optimization on synthetic loss functions [36]. These loss functions usually have many local minima and diverse shapes, and thus can well simulate the optimization behavior of real hyperparameters, yet serve as much computationally cheaper testbeds than real-world datasets.
Experimental Settings. We employ the Branin and Hartmann6D functions provided by the HPOlib2 library, where Branin is defined on a two-dimensional space with three global minima ($f(h^*) = 0.39787$) and Hartmann6D is defined over a hypercube of $[0, 1]^6$ with one global minimum ($f(h^*) = -3.32237$). We compare the proposed HPM with three baseline methods, including 1) random search [3], 2) population based training (PBT) [17], and 3) hypergradient. We also compare HPM with HPM w/o T, the ablated HPM model without a teacher network; it uses a random perturbation ($\alpha$ randomly chosen from $[0.8, 1.2]$) for mutation instead. We ran the random search algorithm in the HPOlib library and implemented the PBT scheduler according to [17]. Note that, as we use the synthetic function $f$ to mimic the loss function of the hyperparameters $h$, the hypergradient is directly given by $\partial f / \partial h$ and is optimized with the gradient descent algorithm.
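For concreteness, the sketch below runs plain gradient descent on the standard Branin function (our own implementation; the HPOlib version referenced above should agree up to its default domain). The initialization and step size are arbitrary illustrative choices; as discussed next, such a purely local update can stall in one of Branin's valleys.

```python
import numpy as np

# Standard Branin function and its analytic gradient; three global minima
# with value ~0.39787, matching the figure quoted in the text.
def branin(h):
    x, y = h
    b, c, t = 5.1 / (4 * np.pi**2), 5.0 / np.pi, 1.0 / (8 * np.pi)
    return (y - b * x**2 + c * x - 6.0) ** 2 + 10.0 * (1 - t) * np.cos(x) + 10.0

def grad_branin(h):
    x, y = h
    b, c, t = 5.1 / (4 * np.pi**2), 5.0 / np.pi, 1.0 / (8 * np.pi)
    inner = y - b * x**2 + c * x - 6.0
    dx = 2 * inner * (-2 * b * x + c) - 10.0 * (1 - t) * np.sin(x)
    return np.array([dx, 2 * inner])

h = np.array([0.0, 5.0])             # illustrative initialization
for _ in range(30):                  # the 30-iteration budget used in Fig. 4
    h -= 0.01 * grad_branin(h)       # pure hypergradient descent
print(branin(h))                     # may be far from the global optimum 0.39787
```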
Hyperparameter Optimization Performance. Fig. 3a and Fig. 3b compare the performance of different HPO methods on the Branin and Hartmann6D functions, respectively, where we have several interesting observations. 1) The hypergradient method generally performs better than the global search methods (e.g., random search and PBT) on Hartmann6D rather than Branin, which is consistent with the fact that Hartmann6D has fewer global minima than Branin. 2) There should be a trade-off between hypergradient and global search methods (e.g., PBT) according to their opposite performance on these two test functions. 3) The proposed teacher network leads to more stable and faster convergence for HPM compared with HPM w/o T.
Mutation Schedule. Fig. 4 shows the optimization steps of three methods on the Branin function, where we run PBT, hypergradient, and HPM from the same random initialization point with a budget of 30 iterations. As can be seen, the hypergradient method descends well along the gradient direction yet may get stuck in local minima. In contrast, while the PBT method can fully explore the hyperparameter space, it cannot reach the global minimum without gradient guidance. Guided by the teacher network and hypergradient information, the proposed HPM moves towards the
2https://github.com/automl/HPOlib
global optimum adaptively, where HPM skips over several areas quickly and takes smaller steps toward the end. Interestingly, this is consistent with the mutation schedule on Branin shown in Fig. 3c, where HPM employs larger mutations in the first three steps ($\alpha \to 0$ or $\alpha \to 2$) and mild mutations ($\alpha \to 1$) in the last two. Hence, benefiting from the learned mutation schedule, the proposed HPM achieves a good trade-off between the hypergradient and mutation-driven updates.
4.2 Benchmark Datasets
We validate the effectiveness of HPM for tuning hyperparameters of deep neural networks on two representative tasks, including image classification with CNN and language modeling with LSTM.
Experimental Settings. For a fair comparison to hypergradient, all the experiments in this section follow the same setting as in self-tuning networks [25], which is specifically designed for optimizing hyperparameters of deep neural networks with hypergradients. Particularly, we tune 15 hyperparameters, including 8 dropout rates and 7 data augmentation hyperparameters for AlexNet [20] in the CIFAR10 image dataset [19], and 7 RNN regularization hyperparameters [13, 33, 29] for LSTM [15] model in the Penn Treebank (PTB) [28] corpus dataset. We compare our approach with two groups of HPO methods as 1) fixed hyperparameter and 2) hyperparameter schedule methods. The first group tries to find a fixed hyperparameter configuration over the hyperparameter space, including grid search, random search, Bayesian Optimization3 and Hyperband [22]. The second group learns a dynamical hyperparameter schedule along with the training process, such as population based training (PBT) [17] and self-tuning network (STN) [25]. Our HPM belongs to the second category.
Implementation Details. We implement PBT with the different baseline networks (e.g., AlexNet and LSTM) and use truncation selection with random perturbation for exploitation and exploration according to [17]. For STN, we directly run the authors' code. We implement our HPM algorithm by using STN as the student model to perform hypertraining. HPM employs the same exploit strategy as PBT and performs learnable mutation with a teacher model (i.e., an attention neural network) for exploration. For both PBT and HPM, we take one training epoch as one training step and perform the exploit & explore operation after each step. The teacher model in HPM is trained for one epoch on the validation set each time it is called by an underperforming student model. We also implement a strong baseline model, HPM w/o T, which incorporates hypergradient in the population based training without using a teacher network.
All the code for the benchmark datasets was implemented with the PyTorch library. We set the population size to 20 and the truncation selection ratio to 20% for PBT, HPM w/o T, and HPM. We employed the recommended optimizers and learning rates for all the baseline networks and STN models following [25]. Our teacher network was implemented with 64 key slots and was trained with the Adam optimizer with a learning rate of 0.001. For the fixed hyperparameter methods, we used the Hyperband [22] implementation provided in [23] and report the results of the others as given in [25]. For all the hyperparameter schedule methods, we ran the experiments in the same computing environment. STN usually converges within 250 (150) epochs on the CIFAR-10 (PTB) dataset. Thus, we set $T$ to 250 and 150 for all the population based methods on CIFAR-10 and PTB, respectively.
3https://github.com/HIPS/Spearmint
Image Classification. Table 1 reports the performance of the fixed hyperparameter and hyperparameter schedule methods on the CIFAR-10 dataset in terms of validation and test loss, respectively. As can be seen, the hyperparameter schedule methods generally perform better than the fixed ones, and the proposed HPM scheduler achieves the best performance, which demonstrates the effectiveness of using HPM in tuning deep neural networks. Fig. 5a shows the best validation loss of different methods over training epochs, where the loss of HPM is consistently lower than PBT and STN. We also show the hyperparameter and mutation schedules learned by HPM in Fig. 5b and Fig. 5c. Specifically, we select four hyperparameters, including the dropout rates of the input and of the third and fourth layer activations, and the rate of adding noise to the hue of an image. We observe that the mutation behaves consistently with the hyperparameter. For example, HPM schedules the dropout rate of Layer 3 with a high variance at the early training stage and assigns it a stable small value after the 150-th epoch. Accordingly, the mutation $\alpha$ of Layer 3 oscillates within $[0.5, 1.75]$ before 150 epochs and then tends to 1. For another example, as Hue and Input have relatively stable schedules, their mutation weights spread around 1 with a small variance. These observations indicate that HPM can learn a meaningful mutation schedule during the training process.
Language Modeling. We summarize the validation and test perplexity of all the methods on the PTB corpus dataset in Table 1, where HPM also outperforms all the compared methods. One may note that HPM w/o T performs much worse than PBT and STN. This might be due to the conflict between hypergradient and the random-perturbation exploration, which shows that HPM is not a trivial combination of PBT and STN and supports that the proposed teacher network plays a key role in finding the mutation schedule. Fig. 6 shows the best validation perplexity of different methods over training epochs on the PTB dataset, as well as the hyperparameter and mutation schedules given by HPM, where observations similar to the image classification experiment can be made.
4.3 Ablation Study
The proposed HPM method adopts a population-based training framework and learns the hyperparameter schedule by alternately performing hypertraining and learnable mutation steps. To investigate the impact of different components in HPM, we provide more ablated models other than HPM w/o T, as follows: 1) RS+STN combines STN [25] and random search (RS). We ran RS with the same budget as the population size in HPM, i.e., $K = 20$. 2) HPM w/o H freezes hyperparameters in the hypertraining step and only updates hyperparameters with learnable mutations. Thus, it can be treated as a PBT model with hypergradient-guided mutations. 3) HPM w/o M disables the mutation operation in HPM and, instead, performs one more hypergradient descent step on the cloned hyperparameters for the exploration purpose. 4) In HPM, the mutation is learned by a teacher model implemented with attention networks. Here HPM (T-MLP) employs a different implementation of the teacher model. Specifically, it implements the teacher model as $g_\phi(h) = 1 + \tanh(W \sigma(V^\top h))$ by setting $\sigma$ to LeakyReLU rather than the softmax function in Eq. (9), in which case the attention network turns into a multilayer perceptron (MLP) network (a sketch of this variant follows below).
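A sketch of this T-MLP variant (an assumption-level illustration mirroring the `Teacher` sketch above) is:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# HPM (T-MLP) ablation: replacing the softmax addressing in Eq. (9) with
# LeakyReLU turns the key-value teacher into a plain two-layer MLP.
class TeacherMLP(nn.Module):
    def __init__(self, num_hparams, num_slots=64):
        super().__init__()
        self.V = nn.Parameter(torch.randn(num_hparams, num_slots) * 0.01)
        self.W = nn.Parameter(torch.randn(num_hparams, num_slots) * 0.01)

    def forward(self, h):
        hidden = F.leaky_relu(self.V.t() @ h)   # sigma = LeakyReLU
        return 1.0 + torch.tanh(self.W @ hidden)
```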
Table 2 shows the ablation study results on the two benchmark datasets, where our full model HPM consistently outperforms all the ablated models. On the one hand, RS+STN achieves performance similar to STN [25], indicating that, without leveraging an effective exploit & explore strategy, a simple combination of local gradient and global search may not boost the performance significantly. On the other hand, while HPM w/o H adopts a learnable mutation, it only performs hypergradient descent through the teacher model, so the hyperparameters are updated slowly and cannot be seamlessly tuned along with the model parameters. Hence, both hypertraining and learnable mutations are useful for optimizing hyperparameters.
We further compare HPM with the two ablated models without mutations (HPM w/o M) and without the teacher network (HPM w/o T). Particularly, HPM w/o M degrades the performance due to over-optimizing hyperparameters and the lack of mutation-driven search; HPM w/o T underperforms due to the potential conflict between hypergradient descent and the random-perturbation based mutation. Hence, the ablation studies in Table 2 demonstrate the effectiveness of learning mutations with a teacher model. Moreover, we also provide an alternative implementation of the teacher model with MLP networks, i.e., HPM (T-MLP), which delivers performance comparable to the proposed HPM.
5 Conclusions
We proposed a novel hyperparameter mutation (HPM) algorithm for solving the hyperparameter optimization task, where we developed a hypergradient-guided population based training framework and designed a student-teaching schema to deliver adaptive mutations for the underperforming student models. We implemented a learning-to-mutate algorithm with an attention mechanism to learn a mutation schedule towards minimizing the validation loss, which provides a trade-off solution between the hypergradient-guided local search and the mutation-driven global search. Experimental results on both synthetic and benchmark datasets clearly demonstrated the benefit of the proposed HPM over hypergradient and the population based methods.
Broader Impact
The proposed HPM algorithm addresses the challenge of combining local gradient and global search for solving the hyperparameter optimization problem. The proposed framework could be incorporated in many automated machine learning systems to provide an effective hyperparameter schedule solution. The outcome of this work will benefit both the academic and industry communities by liberating researchers from the tedious hyperparameter tuning work.
Acknowledgments
We would like to thank the anonymous reviewers for their insightful comments and valuable suggestions. This work was supported by Alibaba DAMO Academy and the SMILE Lab (https://web.northeastern.edu/smilelab/) at Northeastern University. | 1. What is the main contribution of the paper, and how does it relate to previous works?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its novel combination of hypergradient and population-guided search?
3. How does the reviewer assess the paper's experimental results and ablation studies?
4. What are the reviewer's concerns regarding computational efficiency and wall clock time, and how could they be addressed?
5. Are there any suggestions or recommendations for future work related to this research? | Summary and Contributions
Strengths
Weaknesses | Summary and Contributions
This paper proposes a novel hypergradient-guided population-based training framework for hyperparameter optimization. The framework utilizes hypergradients to jointly optimize the model and hyperparameters in a population of student networks, while also training a teacher network that learns to better mutate the hyperparameters of the student networks via hypergradients from all the students. They show empirically that this performs better than hypergradient optimization or population-based search on several synthetic functions, and on deep networks trained on CIFAR10 (image classification) and PTB (language modeling).
Strengths
The work proposes a novel combination of hypergradient and population-guided search which provides some benefits of both local hypergradient descent and global population-based search. They demonstrate the performance of the algorithm both on synthetic functions and on optimizing deep networks. They do ablation experiments with and without the teacher network learning the mutation operator and show that it improves performance. They include the code in the submission.
Weaknesses
It might benefit from an analysis of the additional cost of computing the hypergradients and of whether optimization of the teacher network has a significant impact on the wall-clock runtime of the hyperparameter optimization. In addition, it may benefit from some more discussion of the variance in performance of the framework compared to other methods. _______________ AFTER REBUTTAL I thank the authors for the author response, which partially addressed my concerns, and the additional ablation experiments make the paper significantly stronger, especially the GB-HPO + RS baseline and the experiments showing the stability of the algorithm. Unfortunately, after considering the other reviews, I believe that the paper is borderline, partially due to concerns about the computational efficiency and wall-clock time which were not sufficiently addressed in the rebuttal. While I agree updating the teacher network is significantly cheaper due to freezing the student model, in the current algorithm it must be done sequentially for half the population, and training of the population is partially blocked by the teacher. The paper would benefit from significant analysis of the computational cost and parallelizability of the algorithms and a wall-time comparison with the baselines.
NIPS | Title
Learning to Mutate with Hypergradient Guided Population
Abstract
Computing the gradient of model hyperparameters, i.e., hypergradient, enables a promising and natural way to solve the hyperparameter optimization task. However, gradient-based methods could lead to suboptimal solutions due to the non-convex nature of optimization in a complex hyperparameter space. In this study, we propose a hyperparameter mutation (HPM) algorithm to explicitly consider a learnable trade-off between using global and local search, where we adopt a population of student models to simultaneously explore the hyperparameter space guided by hypergradient and leverage a teacher model to mutate the underperforming students by exploiting the top ones. The teacher model is implemented with an attention mechanism and is used to learn a mutation schedule for different hyperparameters on the fly. Empirical evidence on synthetic functions is provided to show that HPM outperforms hypergradient significantly. Experiments on two benchmark datasets are also conducted to validate the effectiveness of the proposed HPM algorithm for training deep neural networks compared with several strong baselines.
1 Introduction
Hyperparameter optimization (HPO) [4, 11] is one of the fundamental research problems in the field of automated machine learning. It aims to maximize the model performance by tuning model hyperparameters automatically, which could be achieved either by searching a fixed hyperparameter configuration setting [3, 22, 32, 9] from the predefined hyperparameter space or by learning a hyperparameter schedule along with the training process [17, 25]. Among existing methods, hypergradient [2, 26] forms a promising direction, as it naturally enables gradient descent on hyperparameters.
Hypergradient is usually defined as the gradient of a validation loss function w.r.t hyperparameters. Previous methods mainly focus on computing hypergradients by using reverse-mode differentiation [2, 6, 26], or designing a differentiable response function [12, 25] for hyperparameters, yet without explicitly considering the non-convex optimization nature in a complex hyperparameter space. Thus, while hypergradient methods could deliver highly-efficient local search solutions, they may easily get stuck in local minima and achieve suboptimal performance. This can be clearly observed on some synthetic functions which share a similar shape of parameter space to the HPO problem (see Sec. 4.1). It also leads to the question: can we find a way to help hypergradient with global information?
The population based hyperparameter search methods work as a good complementary to the hypergradient, such as evolutionary search [27, 5], particle swarm optimization [8], and the population based ∗Work done when Zhiqiang Tao interned at Alibaba Group and worked at Northeastern University.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
training [17, 21, 14], which generally employ a population of agent models to search different hyperparameter configurations and update hyperparameters with a mutation operation. The population could provide sufficient diversity to globally explore hypergradients throughout the hyperparameter space. However, it is non-trivial to incorporate hypergradients in the population based methods due to a possible conflict between the hand-crafted mutation operation (e.g., random perturbation) and the direction of hypergradient descent.
To address the above challenges, we propose a novel hyperparameter mutation (HPM) scheduling algorithm in this study, which adopts a population based training framework to explicitly learn a trade-off (i.e., a mutation schedule) between using the hypergradient-guided local search and the mutation-driven global search. We develop the proposed framework by alternatively proceeding model training and hyperparameter mutation, where the former jointly optimizes model parameters and hyperparameters upon gradients, while the latter leverages a student-teaching schema for the exploration. Particularly, HPM treats the population as a group of student models and employs a teacher model to mutate the hyperparameters of underperforming students. We instantiate our teacher model as a neural network with attention mechanism and learn the mutation direction towards minimizing the validation loss. Benefiting from learning-to-mutate, the mutation is adaptively scheduled for the population based training with hypergradient.
In the experiments, we extensively discuss the properties of the proposed HPM algorithm and show that HPM significantly outperforms hypergradient and global search methods on synthetic functions. We also employ the HPM scheduler in training deep neural networks on two benchmark datasets, where experimental results validate the effectiveness of HPM compared with several strong baselines.
2 Related Work
Roughly we divide the existing HPO methods into two categories, namely, hyperparameter configuration search and hyperparameter schedule search. Hyperparameter configuration search methods assume that the optimal hyperparameter is a set of fixed values, whereas hyperparameter schedule search methods relax this assumption and allow hyperparameters to change in a single trail.
Hyperparameter configuration. For hyperparameter configuration search methods, we may divide existing methods into three subcategories: model-free, Bayesian optimization, and the gradientbased methods. The first subcategory includes grid search [31], random search [3], successive halving [18], Hyperband [22], etc. Grid search adopts an exhausting strategy to select hyperparameter configurations in pre-defined grids, and the random search method randomly selects hyperparameters from the configuration space with a given budget. Inspired by the amazing success of random search, successive halving [18] and Hyperband [22] are further designed with multi-arm bandit strategies to adjust the computation resource of each hyperparameter configuration upon their performance.
All the above HPO methods are model-free as they do not have any distribution assumption about the hyperparameters. Differently, Bayesian optimization methods [32, 16, 7]) assume the existence of a distribution about the model performance over the hyperparameter search space. This category of methods estimates the model performance distribution based on the tested hyperparameter configurations, and predicts the next hyperparameter configuration by maximizing an acquisition function. However, due to the distribution estimation, the computation cost of Bayesian optimization methods could be high, and thus the hyperparameter searching is time-consuming. Recently, BOHB [32, 9] utilizes model-free methods such as Hyperband to improve the efficiency of Bayesian optimization.
The gradient-based HPO method is closely related to this work. Pioneering works [2, 6] propose to employ the reverse-mode differentiation (RMD) to calculate hypergradients on the validation loss based on the minimizer given by a number of model training iterations. Following this line, research efforts [26] have been made to reduce the memory complexity of RMD to handle the large-scale HPO problem. A forward-mode differentiation algorithm is proposed in [12] to further improve the efficiency of computing hypergradients based on the chain rule and a dynamic system formulation.
Hyperparameter Schedule. Two representative ways of changing hyperparameters are gradientbased methods such as self-tuning networks (STN) [25] and mutation-based methods such as population based training (PBT) [17, 21, 14]. STN employs hypernetworks [24] as a response function to map hyperparameters to model parameters so that it could obtain hypergradient by backpropagating the validation error through the hypernetworks. PBT performs an evolutionary search over the
hyperparameter space with a population of agent models. It provides a discrete mutation schedule via random perturbation. The other two interesting works related to this regime include hypergradient descent [1] and online meta-optimization [35]. However, these two works both focus more on online learning rate adaptation rather than a generic HPO problem. The proposed HPM algorithm belongs to the category of hyperparameter schedule. Different from existing methods, HPM explicitly learns suitable mutations when optimizing hypergradient in a complex hyperparameter space.
3 Hyperparameter Mutation (HPM)
3.1 Preliminary
Given input space X and output space Y , we define f(·; θ, h) : X → Y as a model parameterized by θ and h, where θ ∈ RD represents model parameters and h ∈ RN vectorizes N hyperparameters sampled from the hyperparameter configuration spaceH = H1×· · ·×HN . Hi is a set of configuration values for the i-th hyperparameter. Let Dtrn,Dval : {(x, y)} be the training and validation set. We define L(θ, h) : RD × RN → R as a function of parameter and hyperparameter by
L(θ, h) = ∑
(x,y)∈D
`(f(x; θ, h), y), (1)
where `(·, ·) denotes a loss function and D refers to Dtrn or Dval. Upon Eq. (1), we further define Ltrn and Lval as the training and validation loss functions by computing L(θ, h) on Dtrn and Dval, respectively. Generally, we train the model f on Dtrn with the fixed hyperparameter h or a humancrafted schedule strategy, and peek at the model performance by Lval with the learned parameter θ. Thus, the validation loss is usually bounded to the hyperparameter selection.
Hyperparameter optimization (HPO) solves the above issue, and it could be formulated as
min h∈H Lval(θ∗, h) s.t. θ∗ = argmin θ Ltrn(θ, h), (2)
which seeks for an optimal hyperparameter configuration h∗ or an optimal hyperparameter schedule. Hypergradient [2, 26, 30, 12] provides a natural way to solve Eq. (2) by performing gradient descent. However, due to the non-convex nature of a hyperparameter space, this kind of method may get stuck in local minima and thus lead to suboptimal performance. In contrast, the population based methods utilize a mutation-driven strategy to search the hyperparameter space thoroughly, which provides the potential to help hypergradient escape from local valleys. In this study, we focus on developing a trade-off solution between using hypergradient and the mutation-driven search.
3.2 Population Based Hyperparameter Search
We adopt a similar population based training framework as proposed in [17]. Let St = {Skt }Kk=1 be a population of agent models w.r.t f(·; θ, h) at the t-th training step, where Skt refers to the k-th agent model, T represents the total training steps, and K denotes the population size. Generally, the iterative optimization method (e.g., stochastic gradient decent) is used to optimize model weights for each agent. Hence, for ∀k, one training step could be described as
θkt+1 ← Skt (θkt , hkt ), (3)
where Skt updates model parameters from θ k t to θ k t+1 with a fixed hyperparameter h k t during the training step. The population based hyperparameter search is given by
k∗ = argmin k {Lval(θkT , hkT )}Kk=1. (4)
In Eq. (4), θkT = S k T−1(S k T−2(. . . S k 0 (θ k 0 , h k 0) . . . , h k T−2), h k T−1) is obtained by chaining a sequence of update steps with Eq. (3) and the hyperparameters are updated through some pre-defined or rule-based mutation operations (e.g., random perturbation). More specifically, we summarize the searching process with population based training [17] as follows.
• Train step updates θkt−1 to θkt and evaluates the validation loss Lval(θkt , hkt ) for each k. One training step could be one epoch or a fixed number of iterations. An agent model is ready to be exploited and explored after one step.
• Exploit St by selection methods, e.g., the truncation selection, which divides St into three sets of top, middle, and bottom agents in terms of validation performance. The agent models in bottom exploit the top ones by cloning their model parameters and hyperparameters, i.e., (θkt , h k t )← (θ∗t , h∗t ), where k ∈ bottom and ∗ represents the index of a top performer.
• Explore the hyperparameters with a mutation operation, denoted as Φ. As in [17], Φ keeps non-bottom agents unchanged, and randomly perturbs a bottom agent’s hyperparameter.
The population based training (PBT) methods [17, 21] simultaneously explore the hyperparameter space with a group of agent models. PBT inherits the merits of random search and leverages exploit & explore strategy to alternatively optimize the model parameter θ (by training step) and hyperparameter h (by mutation). This leads to a joint optimization over θ and h, and eventually provides an optimal hyperparameter schedule, i.e., hk ∗ 0 , . . . , h k∗
T−1 given by Eq. (4), among the population of agents. However, PBT has two limitations. 1) For each training step, the joint optimization stays at a coarse level since St(θt, ht) updates θt by fixing ht. 2) The hyperparameters are mainly updated by the mutation operation, yet a learnable mutation is under-explored.
3.3 Hypergradient Guided Population
We propose to use hypergradient to guide the population based hyperparameter search. To obtain hypergradient, we define θ(h) : RN → RD as a response function of hyperparameter h to approximate the model parameter θ. By using θ(h), we could extend the agent model to St(θt(ht), ht), and formulate our hyperparameter mutation (HPM) scheduling algorithm as
min hT {Lval(θkT (hkT ), hkT )}Kk=1, (5)
where (θkT (h k T ), h k T ) is obtained by alternatively proceeding with one hypertraining step and one learnable mutation step as shown in Fig. 1. It is worth noting that, hT is optimized over the population in a sequential update way, i.e., (θkt−1(h k t−1), h k t−1)→ (θkt (hkt ), hkt ), where hkt is updated by hypergradient and mutation at each step t. Thus, optimizing hT in Eq. (5) is equivalent to optimize the hyperparameter schedule: h0 → · · ·ht · · · → hT . Hypertraining jointly optimizes θ and h with hypergradients. Specifically, (θ, h) is updated by
θt = θt−1(ht−1)− ηθ∇θ, ht = ht−1 − ηh∇h,
(6)
where∇θ = ∂Ltrn/∂θ is the gradient of model parameter and∇h is the hypergradient computed by
∇h = ∂Lval(θ(h), h) ∂θ ∂θ ∂h + ∂Lval(θ(h), h) ∂h . (7)
The computation of hypergradient in Eq. (7) is mainly depended on the response function θ(h). In this work, θ(h) is implemented by hypernetworks [24, 25], which provide a flexible and efficient way to compute hypergradients.
Algorithm 1 Hyperparameter Optimization via HPM Let S be a set of student models, and T be the given budget for t = 1 to T do
for Skt−1 ∈ St−1 (could be parallelized) do Update Skt−1(θ k t−1, h k t−1) to S k t (θ k t , h k t ) by one hypertraining step with Eq. (6) and Eq. (7) Divide St into top, middle, bottom students by the truncation section method for Skt ∈ bottom do
Clone model parameters as θkt ← θ∗t where (θ∗t , h∗t ) ∈ top Train the teacher network gφ(hkt ) with Eq. (10) conditioning on (θ ∗ t , h ∗ t )
Mutate the hyperparameter with Eq. (8) as hkt ← gφ(hkt ) h∗t return {h∗0, . . . , h∗T−1}, θ∗T
Learnable mutation employs a similar exploit strategy as in Section 3.2 (without hkt ← h∗t ) and develops a student-teaching schema [10, 34] for exploration. Particularly, after updating St−1 to St via one hypertraining step, we treat each agent Skt ∈ St as a student model and learn a teacher model to mutate the underperforming student’s hyperparameters. The mutation module Φ is developed as
hkt = Φ(h k t , h ∗ t ) = α h∗t , (8)
where hkt ∈ bottom, h∗t ∈ top, is the hadamard product, and α ∈ RN denotes the mutation weights. In the following, we will show how to learn α with the teacher network.
3.4 Learning to Mutate
We formulate our teacher model gφ as a neural network with attention mechanism parameterized by φ = {W,V }, where W ∈ RN×M , V ∈ RN×M are two learnable parameters and M represents the number of attention units, as shown in Fig. 2. It takes input as a bottom student’s hyperparameter hkt and computes the mutation weights by
α = gφ(h k t ) = 1 + tanh(c), c = W softmax(V Thkt ), (9)
where α ∈ [0, 2]N and c ∈ RN is a mass vector that tries to characterize the mutation degree for each dimension of h. The benefits of using attention mechanism lie in two folds. 1) It provides sufficient model capability with a key-value architecture, which uses the key slots stored in V to address different underperforming hyperparameters and assign the mutations with the corresponding memory slots in W . 2) gφ enables a learnable way to adaptively mutate hyperparameters along with the training process, where α→ 1 gives a mild mutation for a small exploration (update) step, and α→ 0 or α→ 2 encourages an aggressive exploration to the hyperparameter space.
We aim to learn the mutation direction towards minimizing Lval. To this end, we train our teacher model gφ conditioning on (θ∗t , h ∗ t ) by
min φ={W,V }
Lval(θ∗t (h′t), h′t), (10)
where h′t = α h∗t = gφ(hkt ) h∗t . The parameters of gφ are updated by backpropagating the
hypergradients given in Eq. (7) through the chain rule. By freezing the cloned model parameters and hyperparameters (θ∗t , h ∗ t ), gφ could be focused on learning the mutations to minimize Lval. Please refer to the supplementary material for more details about training the teacher model.
Algorithm 1 summarizes the entire HPM scheduling algorithm. Particularly, HPM computes hypergradients with hypernetworks [24, 25], which add a linear transformation between hyperparameters and model parameters layer-wisely. The hypernetwork can be efficiently computed via feed-forward and backpropagation operations. Moreover, since the teacher network is trained with the frozen student model, the additional computing cost it brings in is much less than training a student model. Thus, the time complexity of HPM is mainly subject to the population size K. While the hypertraining step could be parallelized, the whole population cannot be asynchronously updated due to the centralized teaching process. This can be effectively addressed by introducing an asynchronous HPM, similar to [17]. We leave it as future work and focus on learning to mutate in this study.
4 Experiments
4.1 Synthetic Functions
One common strategy for exploring the properties of hyperparameters is to perform hyperparameter optimization on synthetic loss functions [36]. These loss functions usually have many local minima and different shapes, and thus could well simulate the optimizing behavior of the real hyperparameters, yet work as much computationally cheaper testbeds than real-world datasets.
Experimental Settings. We employ the Branin and Hartmann6D function provided by the HPOlib2 library, where Branin is defined in a two-dimensional space with three global minima (f(h∗) = 0.39787) and Hartmann6D is defined over a hypercube of [0, 1]6 with one global minima (f(h∗) = −3.32237). We compare the proposed HPM with three baseline methods, including 1) random search [3], 2) population based training (PBT) [17], and 3) Hypergradient. We also compare HPM with HPM w/o T, which is the ablated HPM model without using a teacher network. It uses a random perturbation (α is randomly chosen from [0.8, 1.2]) for mutation instead. We ran the random search algorithm in HPOlib library and implement the PBT scheduler according to [17]. Note that, as we use the synthetic function f to mimic the loss function of hyperparameters h, the hypergradient is directly given by ∂f/∂h and is optimized with the gradient descent algorithm.
Hyperparameter Optimization Performance. Fig. 3a and Fig. 3b compare the performance of different HPO methods on the Branin and Hartmann6D functions, respectively, where we make several interesting observations. 1) The hypergradient method generally performs better than the global search methods (e.g., random search and PBT) on Hartmann6D rather than on Branin, which is consistent with the fact that Hartmann6D has fewer global minima than Branin. 2) There should be a trade-off between using hypergradient and global search methods (e.g., PBT), given their opposite performance on these two test functions. 3) The proposed teacher network leads to more stable and faster convergence for HPM compared with HPM w/o T.
Mutation Schedule. Fig. 4 shows the optimization steps of three methods on the Branin function, where we run PBT, hypergradient, and HPM from the same random initialization point with a budget of 30 iterations. As can be seen, the hypergradient method descends well along the gradient direction yet may get stuck in local minima. In contrast, while PBT can fully explore the hyperparameter space, it cannot reach the global minimum without gradient guidance. Guided by the teacher network and hypergradient information, the proposed HPM moves towards the global optimum adaptively, skipping over several areas quickly and taking mild half-steps near the end. Interestingly, this is consistent with the mutation schedule shown in Fig. 3c on Branin, where HPM employs a larger mutation in the first three steps ($\alpha \to 0$ or $\alpha \to 2$) and mild mutations ($\alpha \to 1$) in the last two. Hence, benefiting from the learned mutation schedule, the proposed HPM strikes a good trade-off between hypergradient and mutation-driven updates.

2https://github.com/automl/HPOlib
4.2 Benchmark Datasets
We validate the effectiveness of HPM for tuning hyperparameters of deep neural networks on two representative tasks: image classification with a CNN and language modeling with an LSTM.
Experimental Settings. For a fair comparison to hypergradient, all the experiments in this section follow the same setting as in self-tuning networks [25], which are specifically designed for optimizing hyperparameters of deep neural networks with hypergradients. In particular, we tune 15 hyperparameters, including 8 dropout rates and 7 data augmentation hyperparameters, for AlexNet [20] on the CIFAR-10 image dataset [19], and 7 RNN regularization hyperparameters [13, 33, 29] for an LSTM [15] model on the Penn Treebank (PTB) [28] corpus. We compare our approach with two groups of HPO methods: 1) fixed hyperparameter and 2) hyperparameter schedule methods. The first group tries to find a fixed hyperparameter configuration over the hyperparameter space, including grid search, random search, Bayesian Optimization3, and Hyperband [22]. The second group learns a dynamic hyperparameter schedule along with the training process, such as population based training (PBT) [17] and the self-tuning network (STN) [25]. Our HPM belongs to the second category.
Implementation Details. We implement PBT with different baseline networks (e.g., AlexNet and LSTM) and use truncation selection with random perturbation for exploitation and exploration, following [17]. For STN, we directly run the authors' code. We implement our HPM algorithm by using STN as the student model to carry out the hypertraining. HPM employs the same exploit strategy as PBT and performs learnable mutation with a teacher model (i.e., an attention neural network) for exploration. For both PBT and HPM, we take one training epoch as one training step and perform the exploit & explore operation after each step. The teacher model in HPM is trained for one epoch on the validation set each time it is called by an underperforming student model. We also implement a strong baseline, HPM w/o T, which incorporates hypergradient into population based training without using a teacher network.
All the code for the benchmark datasets was implemented with the PyTorch library. We set the population size to 20 and the truncation selection ratio to 20% for PBT, HPM w/o T, and HPM. We employed the recommended optimizers and learning rates for all the baseline networks and STN models, following [25]. Our teacher network was implemented with 64 key slots and trained with the Adam optimizer with a learning rate of 0.001. For the fixed hyperparameter methods, we used the Hyperband [22] implementation provided in [23] and report the results of the other methods from [25]. All the hyperparameter schedule methods were run in the same computing environment. STN usually converges within 250 (150) epochs on CIFAR-10 (PTB). Thus, we set $T$ to 250 and 150 for all the population based methods on CIFAR-10 and PTB, respectively.
3https://github.com/HIPS/Spearmint
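Collected from the settings above, a plain-dict summary of the benchmark configuration might look as follows; this is a reference sketch, not the authors' configuration format.

```python
# Benchmark configuration as described in the text above.
HPM_CONFIG = {
    "population_size": 20,
    "truncation_ratio": 0.2,          # bottom/top 20% for exploit & explore
    "teacher": {"key_slots": 64, "optimizer": "Adam", "lr": 1e-3},
    "steps_T": {"CIFAR-10": 250, "PTB": 150},  # one step = one training epoch
}
```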
Image Classification. Table 1 reports the performance of the fixed hyperparameter and hyperparameter schedule methods on the CIFAR-10 dataset in terms of validation and test loss. As can be seen, the hyperparameter schedule methods generally perform better than the fixed ones, and the proposed HPM scheduler achieves the best performance, which demonstrates the effectiveness of HPM in tuning deep neural networks. Fig. 5a shows the best validation loss of different methods over training epochs, where the loss of HPM is consistently lower than that of PBT and STN. We also show the hyperparameter and mutation schedules learned by HPM in Fig. 5b and Fig. 5c. Specifically, we select four hyperparameters: the dropout rates of the input and of the third and fourth layer activations, and the rate of adding noise to the hue of an image. We observe that the mutation behaves consistently with the hyperparameter. For example, HPM schedules the dropout rate of Layer 3 with high variance at the early training stage and assigns it a stable small value after the 150-th epoch. Accordingly, the mutation $\alpha$ of Layer 3 oscillates within $[0.5, 1.75]$ before 150 epochs and then tends to 1. As another example, since Hue and Input have relatively stable schedules, their mutation weights spread around 1 with small variance. These observations indicate that HPM can learn a meaningful mutation schedule during the training process.
Language Modeling. We summarize the validation and test perplexity of all the methods on the PTB corpus in Table 1, where HPM also outperforms all the compared methods. One may note that HPM w/o T performs much worse than PBT and STN. This is likely due to the conflict between the hypergradient and the random-perturbation exploration, which shows that HPM is not a trivial combination of PBT and STN and supports that the proposed teacher network plays a key role in finding the mutation schedule. Fig. 6 shows the best validation perplexity of different methods over training epochs on the PTB dataset, as well as the hyperparameter and mutation schedules given by HPM, where observations similar to the image classification experiment can be made.
4.3 Ablation Study
The proposed HPM method adopts a population-based training framework and learns the hyperparameter schedule by alternately proceeding with the hypertraining and learnable mutation steps. To investigate the impact of different components in HPM, we provide more ablated models in addition to HPM w/o T, as follows. 1) RS+STN combines STN [25] and random search (RS). We run RS with the same budget as the population size in HPM, i.e., $K = 20$. 2) HPM w/o H freezes hyperparameters in the hypertraining step and only updates hyperparameters with learnable mutations. Thus, it can be treated as a PBT model with hypergradient-guided mutations. 3) HPM w/o M disables the mutation operation in HPM and, instead, performs one more hypergradient descent step on the cloned hyperparameters for exploration. 4) In HPM, the mutation is learned by a teacher model implemented with attention networks. Here, HPM (T-MLP) employs a different implementation of the teacher model: it implements $g_\phi(h) = 1 + \tanh(W\sigma(V^\top h))$ with $\sigma$ set to LeakyReLU rather than the softmax in Eq. (9), which turns the attention network into a multilayer perceptron (MLP). A minimal sketch of this variant is given below.
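A sketch of the T-MLP teacher under the same assumptions as the TeacherNet sketch above; the module name is illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TeacherMLP(nn.Module):
    """HPM (T-MLP) variant: replacing softmax with LeakyReLU turns the
    attention read-out into a two-layer MLP."""
    def __init__(self, n_hparams: int, n_hidden: int = 64):
        super().__init__()
        self.V = nn.Parameter(torch.randn(n_hparams, n_hidden) * 0.01)
        self.W = nn.Parameter(torch.randn(n_hparams, n_hidden) * 0.01)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        z = F.leaky_relu(self.V.t() @ h)      # sigma = LeakyReLU, not softmax
        return 1.0 + torch.tanh(self.W @ z)   # alpha still lies in [0, 2]^N
```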
Table 2 shows the ablation study results on the two benchmark datasets, where our full model HPM consistently outperforms all the ablated models. On the one hand, RS+STN achieves performance similar to STN [25], indicating that, without an effective exploit & explore strategy, simply combining local gradient and global search may not boost performance significantly. On the other hand, while HPM w/o H adopts a learnable mutation, it performs hypergradient descent only within the teacher model, so hyperparameters are updated slowly and cannot be seamlessly tuned along with model parameters. Hence, both hypertraining and learnable mutations are useful for optimizing hyperparameters.
We further compare HPM with the two ablated models without mutations (HPM w/o M) and without the teacher network (HPM w/o T). In particular, HPM w/o M degrades the performance due to over-optimizing hyperparameters and the lack of mutation-driven search; HPM w/o T underperforms due to the potential conflict between hypergradient descent and the random-perturbation based mutation. Hence, the ablation studies in Table 2 demonstrate the effectiveness of learning mutations with a teacher model. Moreover, we also provide an alternative implementation of the teacher model with MLP networks, i.e., HPM (T-MLP), which delivers performance comparable to the proposed HPM.
5 Conclusions
We proposed a novel hyperparameter mutation (HPM) algorithm for solving the hyperparameter optimization task, where we developed a hypergradient-guided population based training framework and designed a student-teaching schema to deliver adaptive mutations for the underperforming student models. We implemented a learning-to-mutate algorithm with the attention mechanism to learn a mutation schedule towards minimizing the validation loss, which provides a trade-off solution between using the hypergradient-guided local search and the mutation-driven global search. Experimental results on both synthetic and benchmark datasets clearly demonstrated the benefit of using the proposed HPM over hypergradient and the population based methods.
Broader Impact
The proposed HPM algorithm addresses the challenge of combining local gradient and global search for solving the hyperparameter optimization problem. The proposed framework could be incorporated in many automated machine learning systems to provide an effective hyperparameter schedule solution. The outcome of this work will benefit both the academic and industry communities by liberating researchers from the tedious hyperparameter tuning work.
Acknowledgments
We would like to thank the anonymous reviewers for their insightful comments and valuable suggestions. This work was supported by Alibaba DAMO Academy and the SMILE Lab (https://web.northeastern.edu/smilelab/) at Northeastern University.

1. What is the focus and contribution of the paper on hyperparameter optimization?
2. What are the strengths of the proposed approach, particularly in its novelty and combination of local and global search strategies?
3. What are the weaknesses of the paper regarding clarity, experimentation, and limited scope?
4. Do you have any concerns or suggestions for improving the proposed method, such as separating the effects of different components or comparing with additional baselines?
5. How does the reviewer assess the overall quality and impact of the work in the field of hyperparameter optimization?

Summary and Contributions
This work explores the possibility of combining gradient-based (GB-HPO) and population-based hyperparameter optimization (PB-HPO). The authors propose a novel method, called Hypergradient Mutation (HPM) that builds on the work on self-tuning networks by MacKay et al. while adding a component of evolutionary search by keeping a population of student models and learning a mutation operation to change the hyperparameter values of underperforming models. The authors present experiments on test functions and two benchmark datasets to demonstrate the effectiveness of their approach.
Strengths
The main strengths of the paper lie in its novelty:
- The idea of combining a local search strategy with a global one in the context of HPO seems a natural direction; and, to the best of my knowledge, this is the first work that attempts to do so with GB-HPO and PB-HPO.
- Also, learning mutation operations by gradient descent is a novel and interesting research direction.
Weaknesses
My main concern is about clarity, as detailed below. Unfortunately, I do not think this issue is limited to poor exposition; rather, it also impacts the quality of the presentation and the derivations necessary to properly understand the proposed method.

On the experimental side, while the authors provide comparisons with other methods, I find it difficult to put the results in perspective ((*) could you please also report accuracy?); also, the choice of datasets, but especially of neural models, is rather limited.

Regarding the method:
- (*) There are quite a number of components at work, and I don't think the reader is in the position to judge the effectiveness of each of them with the material presented in this work. While I appreciate the presented ablation, I would really like to see two other very natural ways of proceeding: 1) GB-HPO with a global search strategy, like random search (essentially a multi-start method in the context of HPO); 2) only learning to mutate (without ``hyper-training'').
1. What is the main contribution of the paper regarding hyperparameter optimization?
2. What are the strengths of the proposed approach, particularly in combining different optimization methods?
3. What are the weaknesses of the paper, especially regarding the choice of functions and structures, and the explanation of hypergradient directed mutation?
4. Do you have any concerns or suggestions for improving the experimental results, such as conducting experiments on larger datasets like Imagenet?

Summary and Contributions
This paper mainly introduces the use of a group of agent models to search over different hyperparameter configurations and to update the hyperparameters by a mutation operation. The authors argue that considering the direction of the hypergradient when mutating not only avoids conflicts between the direction of the hand-crafted mutation operation and that of gradient descent, but also brings global information into hypergradient descent. The idea is natural and interesting.
Strengths
-- This paper combines the two methods of hypergradient optimization and population based optimization, which is innovative to a certain degree.
-- The experimental part specifically shows the trajectories of different optimization methods, which supports the viewpoint of this paper well.
Weaknesses
-- In this paper, the parameters of the attention-mechanism network need to be retrained every time, which is time-consuming.
-- Tanh and softmax functions are used in the $g_{\phi}(h_t^k)$ network, but there is no comparative experiment on why these two functions and structures were chosen.
-- Why can replacing the mutation parameter with a network be regarded as hypergradient-directed mutation? Please give a more specific explanation.
-- Due to the lack of experiments on the large-scale ImageNet dataset, supplementary experiments are needed to prove the effectiveness of the method.
NIPS | Title
Learning to Mutate with Hypergradient Guided Population
Abstract
Computing the gradient of model hyperparameters, i.e., hypergradient, enables a promising and natural way to solve the hyperparameter optimization task. However, gradient-based methods could lead to suboptimal solutions due to the non-convex nature of optimization in a complex hyperparameter space. In this study, we propose a hyperparameter mutation (HPM) algorithm to explicitly consider a learnable trade-off between using global and local search, where we adopt a population of student models to simultaneously explore the hyperparameter space guided by hypergradient and leverage a teacher model to mutate the underperforming students by exploiting the top ones. The teacher model is implemented with an attention mechanism and is used to learn a mutation schedule for different hyperparameters on the fly. Empirical evidence on synthetic functions is provided to show that HPM outperforms hypergradient significantly. Experiments on two benchmark datasets are also conducted to validate the effectiveness of the proposed HPM algorithm for training deep neural networks compared with several strong baselines.
1 Introduction
Hyperparameter optimization (HPO) [4, 11] is one of the fundamental research problems in the field of automated machine learning. It aims to maximize the model performance by tuning model hyperparameters automatically, which could be achieved either by searching a fixed hyperparameter configuration setting [3, 22, 32, 9] from the predefined hyperparameter space or by learning a hyperparameter schedule along with the training process [17, 25]. Among existing methods, hypergradient [2, 26] forms a promising direction, as it naturally enables gradient descent on hyperparameters.
Hypergradient is usually defined as the gradient of a validation loss function w.r.t hyperparameters. Previous methods mainly focus on computing hypergradients by using reverse-mode differentiation [2, 6, 26], or designing a differentiable response function [12, 25] for hyperparameters, yet without explicitly considering the non-convex optimization nature in a complex hyperparameter space. Thus, while hypergradient methods could deliver highly-efficient local search solutions, they may easily get stuck in local minima and achieve suboptimal performance. This can be clearly observed on some synthetic functions which share a similar shape of parameter space to the HPO problem (see Sec. 4.1). It also leads to the question: can we find a way to help hypergradient with global information?
The population based hyperparameter search methods work as a good complementary to the hypergradient, such as evolutionary search [27, 5], particle swarm optimization [8], and the population based ∗Work done when Zhiqiang Tao interned at Alibaba Group and worked at Northeastern University.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
training [17, 21, 14], which generally employ a population of agent models to search different hyperparameter configurations and update hyperparameters with a mutation operation. The population could provide sufficient diversity to globally explore hypergradients throughout the hyperparameter space. However, it is non-trivial to incorporate hypergradients in the population based methods due to a possible conflict between the hand-crafted mutation operation (e.g., random perturbation) and the direction of hypergradient descent.
To address the above challenges, we propose a novel hyperparameter mutation (HPM) scheduling algorithm in this study, which adopts a population based training framework to explicitly learn a trade-off (i.e., a mutation schedule) between using the hypergradient-guided local search and the mutation-driven global search. We develop the proposed framework by alternatively proceeding model training and hyperparameter mutation, where the former jointly optimizes model parameters and hyperparameters upon gradients, while the latter leverages a student-teaching schema for the exploration. Particularly, HPM treats the population as a group of student models and employs a teacher model to mutate the hyperparameters of underperforming students. We instantiate our teacher model as a neural network with attention mechanism and learn the mutation direction towards minimizing the validation loss. Benefiting from learning-to-mutate, the mutation is adaptively scheduled for the population based training with hypergradient.
In the experiments, we extensively discuss the properties of the proposed HPM algorithm and show that HPM significantly outperforms hypergradient and global search methods on synthetic functions. We also employ the HPM scheduler in training deep neural networks on two benchmark datasets, where experimental results validate the effectiveness of HPM compared with several strong baselines.
2 Related Work
Roughly we divide the existing HPO methods into two categories, namely, hyperparameter configuration search and hyperparameter schedule search. Hyperparameter configuration search methods assume that the optimal hyperparameter is a set of fixed values, whereas hyperparameter schedule search methods relax this assumption and allow hyperparameters to change in a single trail.
Hyperparameter configuration. For hyperparameter configuration search methods, we may divide existing methods into three subcategories: model-free, Bayesian optimization, and the gradientbased methods. The first subcategory includes grid search [31], random search [3], successive halving [18], Hyperband [22], etc. Grid search adopts an exhausting strategy to select hyperparameter configurations in pre-defined grids, and the random search method randomly selects hyperparameters from the configuration space with a given budget. Inspired by the amazing success of random search, successive halving [18] and Hyperband [22] are further designed with multi-arm bandit strategies to adjust the computation resource of each hyperparameter configuration upon their performance.
All the above HPO methods are model-free as they do not have any distribution assumption about the hyperparameters. Differently, Bayesian optimization methods [32, 16, 7]) assume the existence of a distribution about the model performance over the hyperparameter search space. This category of methods estimates the model performance distribution based on the tested hyperparameter configurations, and predicts the next hyperparameter configuration by maximizing an acquisition function. However, due to the distribution estimation, the computation cost of Bayesian optimization methods could be high, and thus the hyperparameter searching is time-consuming. Recently, BOHB [32, 9] utilizes model-free methods such as Hyperband to improve the efficiency of Bayesian optimization.
The gradient-based HPO method is closely related to this work. Pioneering works [2, 6] propose to employ the reverse-mode differentiation (RMD) to calculate hypergradients on the validation loss based on the minimizer given by a number of model training iterations. Following this line, research efforts [26] have been made to reduce the memory complexity of RMD to handle the large-scale HPO problem. A forward-mode differentiation algorithm is proposed in [12] to further improve the efficiency of computing hypergradients based on the chain rule and a dynamic system formulation.
Hyperparameter Schedule. Two representative ways of changing hyperparameters are gradientbased methods such as self-tuning networks (STN) [25] and mutation-based methods such as population based training (PBT) [17, 21, 14]. STN employs hypernetworks [24] as a response function to map hyperparameters to model parameters so that it could obtain hypergradient by backpropagating the validation error through the hypernetworks. PBT performs an evolutionary search over the
hyperparameter space with a population of agent models. It provides a discrete mutation schedule via random perturbation. The other two interesting works related to this regime include hypergradient descent [1] and online meta-optimization [35]. However, these two works both focus more on online learning rate adaptation rather than a generic HPO problem. The proposed HPM algorithm belongs to the category of hyperparameter schedule. Different from existing methods, HPM explicitly learns suitable mutations when optimizing hypergradient in a complex hyperparameter space.
3 Hyperparameter Mutation (HPM)
3.1 Preliminary
Given input space X and output space Y , we define f(·; θ, h) : X → Y as a model parameterized by θ and h, where θ ∈ RD represents model parameters and h ∈ RN vectorizes N hyperparameters sampled from the hyperparameter configuration spaceH = H1×· · ·×HN . Hi is a set of configuration values for the i-th hyperparameter. Let Dtrn,Dval : {(x, y)} be the training and validation set. We define L(θ, h) : RD × RN → R as a function of parameter and hyperparameter by
L(θ, h) = ∑
(x,y)∈D
`(f(x; θ, h), y), (1)
where `(·, ·) denotes a loss function and D refers to Dtrn or Dval. Upon Eq. (1), we further define Ltrn and Lval as the training and validation loss functions by computing L(θ, h) on Dtrn and Dval, respectively. Generally, we train the model f on Dtrn with the fixed hyperparameter h or a humancrafted schedule strategy, and peek at the model performance by Lval with the learned parameter θ. Thus, the validation loss is usually bounded to the hyperparameter selection.
Hyperparameter optimization (HPO) solves the above issue, and it could be formulated as
min h∈H Lval(θ∗, h) s.t. θ∗ = argmin θ Ltrn(θ, h), (2)
which seeks for an optimal hyperparameter configuration h∗ or an optimal hyperparameter schedule. Hypergradient [2, 26, 30, 12] provides a natural way to solve Eq. (2) by performing gradient descent. However, due to the non-convex nature of a hyperparameter space, this kind of method may get stuck in local minima and thus lead to suboptimal performance. In contrast, the population based methods utilize a mutation-driven strategy to search the hyperparameter space thoroughly, which provides the potential to help hypergradient escape from local valleys. In this study, we focus on developing a trade-off solution between using hypergradient and the mutation-driven search.
3.2 Population Based Hyperparameter Search
We adopt a population based training framework similar to [17]. Let S_t = {S_t^k}_{k=1}^K be a population of agent models w.r.t. f(·; θ, h) at the t-th training step, where S_t^k refers to the k-th agent model, T represents the total number of training steps, and K denotes the population size. Generally, an iterative optimization method (e.g., stochastic gradient descent) is used to optimize the model weights of each agent. Hence, for every k, one training step can be described as
θ_{t+1}^k ← S_t^k(θ_t^k, h_t^k),    (3)
where S_t^k updates the model parameters from θ_t^k to θ_{t+1}^k with a fixed hyperparameter h_t^k during the training step. The population based hyperparameter search is given by
k* = argmin_k {L_val(θ_T^k, h_T^k)}_{k=1}^K.    (4)
In Eq. (4), θ_T^k = S_{T−1}^k(S_{T−2}^k(. . . S_0^k(θ_0^k, h_0^k) . . . , h_{T−2}^k), h_{T−1}^k) is obtained by chaining a sequence of update steps with Eq. (3), and the hyperparameters are updated through pre-defined or rule-based mutation operations (e.g., random perturbation). More specifically, the search process of population based training [17] is summarized as follows (a compact sketch of this loop appears after the list).
• Train step updates θ_{t−1}^k to θ_t^k and evaluates the validation loss L_val(θ_t^k, h_t^k) for each k. One training step can be one epoch or a fixed number of iterations. An agent model is ready to be exploited and explored after one step.
• Exploit S_t by a selection method, e.g., truncation selection, which divides S_t into top, middle, and bottom agents in terms of validation performance. The agent models in the bottom set exploit the top ones by cloning their model parameters and hyperparameters, i.e., (θ_t^k, h_t^k) ← (θ_t^*, h_t^*), where k ∈ bottom and * denotes the index of a top performer.
• Explore the hyperparameters with a mutation operation, denoted Φ. As in [17], Φ keeps non-bottom agents unchanged and randomly perturbs a bottom agent's hyperparameters.
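The sketch below condenses the exploit & explore round described above into one function; the 20% truncation fraction matches the setting reported later in Section 4.2, while the perturbation range and the dict-based agent representation are illustrative assumptions rather than values prescribed by [17].

```python
import copy
import random

def exploit_and_explore(population, val_losses, frac=0.2, perturb=(0.8, 1.2)):
    """One PBT round: bottom agents clone a top agent's (theta, h), then
    randomly perturb the cloned hyperparameters. `population` is a list of
    dicts {'theta': ..., 'h': [...]}; lower val_losses[k] means better."""
    order = sorted(range(len(population)), key=lambda k: val_losses[k])
    n = max(1, int(frac * len(population)))
    top, bottom = order[:n], order[-n:]
    for k in bottom:
        src = population[random.choice(top)]
        population[k] = copy.deepcopy(src)            # exploit: clone (theta*, h*)
        population[k]['h'] = [v * random.uniform(*perturb)
                              for v in population[k]['h']]  # explore: mutate h
```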
Population based training (PBT) methods [17, 21] simultaneously explore the hyperparameter space with a group of agent models. PBT inherits the merits of random search and leverages an exploit & explore strategy to alternately optimize the model parameters θ (by training steps) and the hyperparameters h (by mutation). This leads to a joint optimization over θ and h, and eventually provides an optimal hyperparameter schedule among the population of agents, i.e., h_0^{k*}, . . . , h_{T−1}^{k*} given by Eq. (4). However, PBT has two limitations. 1) Within each training step, the joint optimization stays at a coarse level, since S_t(θ_t, h_t) updates θ_t while keeping h_t fixed. 2) The hyperparameters are updated mainly by the mutation operation, yet a learnable mutation is under-explored.
3.3 Hypergradient Guided Population
We propose to use hypergradients to guide the population based hyperparameter search. To obtain hypergradients, we define θ(h) : ℝ^N → ℝ^D as a response function of the hyperparameter h that approximates the model parameters θ. Using θ(h), we can extend the agent model to S_t(θ_t(h_t), h_t), and formulate our hyperparameter mutation (HPM) scheduling algorithm as
min_{h_T} {L_val(θ_T^k(h_T^k), h_T^k)}_{k=1}^K,    (5)
where (θ_T^k(h_T^k), h_T^k) is obtained by alternately proceeding with one hypertraining step and one learnable mutation step, as shown in Fig. 1. It is worth noting that h_T is optimized over the population in a sequential-update fashion, i.e., (θ_{t−1}^k(h_{t−1}^k), h_{t−1}^k) → (θ_t^k(h_t^k), h_t^k), where h_t^k is updated by hypergradient and mutation at each step t. Thus, optimizing h_T in Eq. (5) is equivalent to optimizing the hyperparameter schedule h_0 → · · · → h_t → · · · → h_T. Hypertraining jointly optimizes θ and h with hypergradients. Specifically, (θ, h) is updated by
θ_t = θ_{t−1}(h_{t−1}) − η_θ ∇θ,   h_t = h_{t−1} − η_h ∇h,    (6)
where ∇θ = ∂L_trn/∂θ is the gradient of the model parameters and ∇h is the hypergradient computed by
∇h = (∂L_val(θ(h), h)/∂θ) (∂θ/∂h) + ∂L_val(θ(h), h)/∂h.    (7)
The computation of the hypergradient in Eq. (7) depends mainly on the response function θ(h). In this work, θ(h) is implemented with hypernetworks [24, 25], which provide a flexible and efficient way to compute hypergradients.
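The toy sketch below illustrates Eqs. (6)-(7) with a single linear response function, loosely in the spirit of the layer-wise linear hypernetworks of [24, 25]; since h flows into the loss through θ(h), PyTorch autograd returns the total derivative of Eq. (7) without manual chain-rule bookkeeping. The sizes, the linear response form, and the quadratic stand-in loss are all assumptions for illustration.

```python
import torch

N, D = 3, 5                         # toy sizes: N hyperparameters, D model params
h = torch.rand(N, requires_grad=True)
theta0 = torch.randn(D)             # base model parameters
W_hyper = 0.1 * torch.randn(D, N)   # linear hypernetwork weights

def theta_of_h(h):
    # Response function theta(h): here a simple linear map theta0 + W_hyper h.
    return theta0 + W_hyper @ h

def L_val(theta, h):
    # Stand-in validation loss; any differentiable function of (theta, h) works.
    return (theta ** 2).sum() + (h ** 2).sum()

loss = L_val(theta_of_h(h), h)
grad_h, = torch.autograd.grad(loss, h)  # Eq. (7): dLval/dtheta * dtheta/dh + dLval/dh
eta_h = 0.01
with torch.no_grad():
    h -= eta_h * grad_h                 # hyperparameter step of Eq. (6)
```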
Algorithm 1 Hyperparameter Optimization via HPM
Let S be a set of student models, and T be the given budget
for t = 1 to T do
    for S_{t−1}^k ∈ S_{t−1} do (could be parallelized)
        Update S_{t−1}^k(θ_{t−1}^k, h_{t−1}^k) to S_t^k(θ_t^k, h_t^k) by one hypertraining step with Eq. (6) and Eq. (7)
    Divide S_t into top, middle, and bottom students by the truncation selection method
    for S_t^k ∈ bottom do
        Clone model parameters as θ_t^k ← θ_t^*, where (θ_t^*, h_t^*) ∈ top
        Train the teacher network g_φ(h_t^k) with Eq. (10), conditioning on (θ_t^*, h_t^*)
        Mutate the hyperparameters with Eq. (8) as h_t^k ← g_φ(h_t^k) ⊙ h_t^*
return {h_0^*, . . . , h_{T−1}^*}, θ_T^*
Learnable mutation employs a similar exploit strategy as in Section 3.2 (without h_t^k ← h_t^*) and develops a student-teaching schema [10, 34] for exploration. Specifically, after updating S_{t−1} to S_t via one hypertraining step, we treat each agent S_t^k ∈ S_t as a student model and learn a teacher model to mutate the underperforming students' hyperparameters. The mutation module Φ is defined as
h_t^k = Φ(h_t^k, h_t^*) = α ⊙ h_t^*,    (8)
where h_t^k ∈ bottom, h_t^* ∈ top, ⊙ is the Hadamard product, and α ∈ ℝ^N denotes the mutation weights. In the following, we show how to learn α with the teacher network.
3.4 Learning to Mutate
We formulate our teacher model g_φ as a neural network with an attention mechanism parameterized by φ = {W, V}, where W ∈ ℝ^{N×M} and V ∈ ℝ^{N×M} are two learnable parameters and M represents the number of attention units, as shown in Fig. 2. It takes a bottom student's hyperparameter h_t^k as input and computes the mutation weights by
α = g_φ(h_t^k) = 1 + tanh(c),   c = W softmax(V^T h_t^k),    (9)
where α ∈ [0, 2]^N and c ∈ ℝ^N is a mass vector that characterizes the mutation degree of each dimension of h. The benefits of the attention mechanism are twofold. 1) It provides sufficient model capacity with a key-value architecture, which uses the key slots stored in V to address different underperforming hyperparameters and assigns mutations from the corresponding memory slots in W. 2) g_φ enables adaptive, learnable mutation along the training process, where α → 1 gives a mild mutation for a small exploration (update) step, while α → 0 or α → 2 encourages aggressive exploration of the hyperparameter space.
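A direct PyTorch transcription of Eq. (9) follows; the 64-slot default mirrors the implementation detail reported in Section 4.2, while the initialization scale and the 15-hyperparameter usage example are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Teacher(nn.Module):
    """Attention-style teacher g_phi of Eq. (9): keys V address the incoming
    hyperparameter, memory W emits the mass vector c, and
    alpha = 1 + tanh(c) lies in [0, 2]^N."""
    def __init__(self, n_hparams, n_slots=64):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(n_hparams, n_slots))
        self.V = nn.Parameter(0.01 * torch.randn(n_hparams, n_slots))

    def forward(self, h):                              # h: (N,)
        attn = torch.softmax(self.V.t() @ h, dim=0)    # softmax(V^T h), shape (M,)
        c = self.W @ attn                              # mass vector, shape (N,)
        return 1.0 + torch.tanh(c)                     # alpha in [0, 2]^N

teacher = Teacher(n_hparams=15)
alpha = teacher(torch.rand(15))
h_mutated = alpha * torch.rand(15)   # Eq. (8): h_t^k <- alpha (Hadamard) h_t^*
```

The T-MLP variant discussed in Section 4.3 would only swap the softmax for a LeakyReLU in the forward pass.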
We aim to learn mutation directions that minimize L_val. To this end, we train our teacher model g_φ conditioning on (θ_t^*, h_t^*) by
min_{φ={W,V}} L_val(θ_t^*(h_t'), h_t'),    (10)

where h_t' = α ⊙ h_t^* = g_φ(h_t^k) ⊙ h_t^*. The parameters of g_φ are updated by backpropagating the hypergradients given in Eq. (7) through the chain rule. By freezing the cloned model parameters and hyperparameters (θ_t^*, h_t^*), g_φ can focus on learning mutations that minimize L_val. Please refer to the supplementary material for more details on training the teacher model.
Algorithm 1 summarizes the entire HPM scheduling algorithm. In particular, HPM computes hypergradients with hypernetworks [24, 25], which add a layer-wise linear transformation between hyperparameters and model parameters. The hypernetwork can be computed efficiently via feed-forward and backpropagation operations. Moreover, since the teacher network is trained with the student model frozen, the additional computing cost it introduces is much smaller than that of training a student model. Thus, the time complexity of HPM is mainly determined by the population size K. While the hypertraining step can be parallelized, the whole population cannot be updated asynchronously due to the centralized teaching process. This could be addressed by introducing an asynchronous HPM, similar to [17]; we leave it as future work and focus on learning to mutate in this study.
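Pulling the pieces together, here is a high-level sketch of one HPM step from Algorithm 1, reusing the Teacher module sketched above; `hypertrain_step`, `val_loss`, and `train_teacher` are trivial stubs standing in for Eqs. (6)-(7), L_val, and Eq. (10), so this is an illustration of the control flow, not the paper's implementation.

```python
import copy
import torch

# Illustrative stubs for the real operations:
def hypertrain_step(theta, h):            # one step of Eqs. (6)-(7)
    return theta - 0.01 * theta, h - 0.01 * h

def val_loss(s):                          # stand-in for L_val of a student
    return float((s['theta'] ** 2).sum() + (s['h'] ** 2).sum())

def train_teacher(teacher, theta_star, h_star):   # Eq. (10); omitted here
    pass

def hpm_step(students, teacher, frac=0.2):
    """One HPM step over students ({'theta': tensor, 'h': tensor})."""
    for s in students:                    # hypertraining for every student
        s['theta'], s['h'] = hypertrain_step(s['theta'], s['h'])
    order = sorted(range(len(students)), key=lambda k: val_loss(students[k]))
    n = max(1, int(frac * len(students)))
    top, bottom = order[:n], order[-n:]
    for k in bottom:                      # exploit top, explore by learnable mutation
        star = students[top[0]]
        students[k]['theta'] = copy.deepcopy(star['theta'])  # clone top weights
        train_teacher(teacher, star['theta'], star['h'])     # train g_phi on L_val
        with torch.no_grad():
            alpha = teacher(students[k]['h'])                # Eq. (9)
            students[k]['h'] = alpha * star['h']             # Eq. (8)
```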
4 Experiments
4.1 Synthetic Functions
A common strategy for studying the properties of hyperparameter optimizers is to run them on synthetic loss functions [36]. These functions typically have many local minima and diverse shapes, so they simulate the optimization behavior of real hyperparameters well while serving as much cheaper testbeds than real-world datasets.
Experimental Settings. We employ the Branin and Hartmann6D functions provided by the HPOlib² library, where Branin is defined on a two-dimensional space with three global minima (f(h*) = 0.39787) and Hartmann6D is defined over the hypercube [0, 1]^6 with one global minimum (f(h*) = −3.32237). We compare the proposed HPM with three baseline methods: 1) random search [3], 2) population based training (PBT) [17], and 3) hypergradient descent. We also compare HPM with HPM w/o T, an ablated HPM model without the teacher network that instead uses random perturbation (α drawn uniformly from [0.8, 1.2]) for mutation. We ran the random search algorithm from the HPOlib library and implemented the PBT scheduler according to [17]. Note that, since the synthetic function f mimics the loss function of the hyperparameters h, the hypergradient is directly given by ∂f/∂h and is optimized with gradient descent.
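For reference, the standard Branin function with its usual constants is shown below; the printed value matches the f(h*) ≈ 0.39787 quoted above (the HPOlib version may differ in minor parameterization details).

```python
import math

def branin(x1, x2):
    """Standard Branin function on [-5, 10] x [0, 15]; three global minima
    with f* ~= 0.397887 at (-pi, 12.275), (pi, 2.275), (9.42478, 2.475)."""
    a, b, c = 1.0, 5.1 / (4 * math.pi ** 2), 5.0 / math.pi
    r, s, t = 6.0, 10.0, 1.0 / (8 * math.pi)
    return a * (x2 - b * x1 ** 2 + c * x1 - r) ** 2 + s * (1 - t) * math.cos(x1) + s

print(round(branin(math.pi, 2.275), 5))   # ~0.39789
```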
Hyperparameter Optimization Performance. Fig. 3a and Fig. 3b compare the performance of different HPO methods on the Branin and Hartmann6D functions, respectively, yielding several observations. 1) The hypergradient method generally outperforms the global search methods (e.g., random search and PBT) on Hartmann6D rather than Branin, which is consistent with the fact that Hartmann6D has fewer global minima than Branin. 2) The opposite behavior of hypergradient and global search methods (e.g., PBT) on the two test functions suggests a trade-off between them. 3) The proposed teacher network leads to more stable and faster convergence for HPM compared with HPM w/o T.
Mutation Schedule. Fig. 4 shows the optimization trajectories of three methods on the Branin function, where we run PBT, hypergradient descent, and HPM from the same random initialization with a budget of 30 iterations. As can be seen, hypergradient descent makes steady progress along the gradient direction yet may get stuck in local minima. In contrast, while PBT can explore the hyperparameter space thoroughly, it does not reach the global minimum without gradient guidance. Guided by the teacher network and the hypergradient information, the proposed HPM moves towards the
2https://github.com/automl/HPOlib
global optimum adaptively: HPM skips over several regions quickly and takes smaller steps towards the end. Interestingly, this is consistent with the mutation schedule on Branin shown in Fig. 3c, where HPM applies large mutations in the first three steps (α → 0 or α → 2) and mild mutations (α → 1) in the last two. Hence, benefiting from the learned mutation schedule, the proposed HPM achieves a good trade-off between hypergradient descent and mutation-driven updates.
4.2 Benchmark Datasets
We validate the effectiveness of HPM for tuning the hyperparameters of deep neural networks on two representative tasks: image classification with a CNN and language modeling with an LSTM.
Experimental Settings. For a fair comparison with hypergradient methods, all experiments in this section follow the same setting as self-tuning networks [25], which are specifically designed for optimizing the hyperparameters of deep neural networks with hypergradients. In particular, we tune 15 hyperparameters (8 dropout rates and 7 data augmentation hyperparameters) for AlexNet [20] on the CIFAR-10 image dataset [19], and 7 RNN regularization hyperparameters [13, 33, 29] for an LSTM [15] model on the Penn Treebank (PTB) [28] corpus. We compare our approach with two groups of HPO methods: 1) fixed-hyperparameter and 2) hyperparameter schedule methods. The first group seeks a fixed hyperparameter configuration over the hyperparameter space, and includes grid search, random search, Bayesian optimization³, and Hyperband [22]. The second group learns a dynamic hyperparameter schedule along the training process, such as population based training (PBT) [17] and self-tuning networks (STN) [25]. Our HPM belongs to the second category.
Implementation Details. We implement PBT with the corresponding baseline networks (e.g., AlexNet and LSTM) and use truncation selection with random perturbation for exploitation and exploration, following [17]. For STN, we directly run the authors' code. We implement our HPM algorithm by using STN as the student model to perform hypertraining. HPM employs the same exploit strategy as PBT and performs learnable mutation with a teacher model (an attention neural network) for exploration. For both PBT and HPM, we take one training epoch as one training step and apply the exploit & explore operations after each step. The teacher model in HPM is trained for one epoch on the validation set each time it is invoked by an underperforming student model. We also implement a strong baseline, HPM w/o T, which incorporates hypergradients into population based training without using a teacher network.
All the code for the benchmark datasets was implemented with the PyTorch library. We set the population size to 20 and the truncation selection ratio to 20% for PBT, HPM w/o T, and HPM. We used the recommended optimizers and learning rates for all baseline networks and STN models following [25]. Our teacher network was implemented with 64 key slots and trained with the Adam optimizer with a learning rate of 0.001. For the fixed-hyperparameter methods, we used the Hyperband [22] implementation provided in [23] and report the results of the other methods from [25]. All the hyperparameter schedule methods were run in the same computing environment. STN usually converges within 250 (150) epochs on CIFAR-10 (PTB). Thus, we set T to 250 and 150 for all population based methods on CIFAR-10 and PTB, respectively.
3https://github.com/HIPS/Spearmint
Image Classification. Table 1 reports the performance of the fixed-hyperparameter and hyperparameter schedule methods on the CIFAR-10 dataset in terms of validation and test loss. As can be seen, the hyperparameter schedule methods generally perform better than the fixed ones, and the proposed HPM scheduler achieves the best performance, which demonstrates the effectiveness of HPM for tuning deep neural networks. Fig. 5a shows the best validation loss of different methods over training epochs, where the loss of HPM is consistently lower than those of PBT and STN. We also show the hyperparameter and mutation schedules learned by HPM in Fig. 5b and Fig. 5c. Specifically, we select four hyperparameters: the dropout rates of the input and of the third- and fourth-layer activations, and the rate of adding noise to the hue of an image. We observe that the mutation behaves consistently with the hyperparameter. For example, HPM schedules the dropout rate of Layer 3 with high variance in the early training stage and assigns it a stable small value after the 150-th epoch; accordingly, the mutation α of Layer 3 oscillates within [0.5, 1.75] before 150 epochs and then tends to 1. As another example, since Hue and Input have relatively stable schedules, their mutation weights stay around 1 with small variance. These observations indicate that HPM learns a meaningful mutation schedule during training.
Language Modeling. Table 1 also summarizes the validation and test perplexity of all methods on the PTB corpus, where HPM again outperforms all compared methods. Notably, HPM w/o T performs much worse than PBT and STN. This might be due to a conflict between the hypergradient and the random-perturbation exploration, which indicates that HPM is not a trivial combination of PBT and STN and supports that the proposed teacher network plays a key role in finding the mutation schedule. Fig. 6 shows the best validation perplexity of different methods over training epochs on PTB, as well as the hyperparameter and mutation schedules given by HPM, from which observations similar to the image classification experiment can be drawn.
4.3 Ablation Study
The proposed HPM method adopts a population based training framework and learns the hyperparameter schedule by alternately proceeding with hypertraining and learnable mutation steps. To investigate the impact of the different components in HPM, we consider additional ablated models besides HPM w/o T: 1) RS+STN combines STN [25] with random search (RS); we ran RS with the same budget as the population size in HPM, i.e., K = 20. 2) HPM w/o H freezes the hyperparameters in the hypertraining step and only updates them with learnable mutations; it can thus be viewed as a PBT model with hypergradient-guided mutations. 3) HPM w/o M disables the mutation operation in HPM and instead performs one more hypergradient descent step on the cloned hyperparameters for exploration. 4) In HPM, the mutation is learned by a teacher model implemented with attention networks. HPM (T-MLP) employs a different implementation of the teacher model: it implements g_φ(h) = 1 + tanh(W σ(V^T h)) with σ set to LeakyReLU rather than the softmax in Eq. (9), which turns the attention network into a multilayer perceptron (MLP).
Table 2 shows the ablation study results on the two benchmark datasets, where our full HPM model consistently outperforms all ablated models. On the one hand, RS+STN achieves performance similar to STN [25], indicating that, without an effective exploit & explore strategy, a simple combination of local gradient and global search may not boost performance significantly. On the other hand, although HPM w/o H adopts a learnable mutation, it performs hypergradient descent only through the teacher model, so the hyperparameters are updated slowly and cannot be seamlessly tuned along with the model parameters. Hence, both hypertraining and learnable mutations are useful for optimizing hyperparameters.
We further compare HPM with the two ablated models without mutations (HPM w/o M) and without the teacher network (HPM w/o T). HPM w/o M degrades performance due to over-optimizing the hyperparameters and the lack of mutation-driven search; HPM w/o T underperforms due to the potential conflict between hypergradient descent and random-perturbation mutation. Hence, the ablation studies in Table 2 demonstrate the effectiveness of learning mutations with a teacher model. Moreover, the alternative implementation of the teacher model with MLP networks, HPM (T-MLP), delivers performance comparable to the proposed HPM.
5 Conclusions
We proposed a novel hyperparameter mutation (HPM) algorithm for hyperparameter optimization, in which we developed a hypergradient-guided population based training framework and designed a student-teaching schema to deliver adaptive mutations for underperforming student models. We implemented a learning-to-mutate algorithm with an attention mechanism to learn a mutation schedule that minimizes the validation loss, providing a trade-off between hypergradient-guided local search and mutation-driven global search. Experimental results on both synthetic and benchmark datasets clearly demonstrate the benefit of the proposed HPM over hypergradient and population based methods.
Broader Impact
The proposed HPM algorithm addresses the challenge of combining local gradient and global search for hyperparameter optimization. The framework can be incorporated into many automated machine learning systems to provide an effective hyperparameter schedule. The outcome of this work will benefit both the academic and industrial communities by relieving researchers of tedious hyperparameter tuning.
Acknowledgments
We would like to thank the anonymous reviewers for their insightful comments and valuable suggestions. This work was supported by Alibaba DAMO Academy and the SMILE Lab (https://web.northeastern.edu/smilelab/) at Northeastern University.
Summary and Contributions
This work proposes the hyperparameter mutation (HPM) algorithm to combine the benefits of global search, as in PBT, with local search, as in hypergradient methods. Specifically, it uses a population of student models as in PBT and interleaves a hypertraining step that uses hypergradients with a learnable mutation step that clones and mutates the top students. Additionally, the mutations are guided by a "teacher" model that learns to generate better mutations through hypergradients on the validation set. The proposed method is evaluated on synthetic functions and on tuning hyperparameters of deep neural networks on CIFAR-10 and PTB, and performs better than baselines such as PBT and STN.
Strengths
(1) The proposed method (HPM), which combines PBT and hypergradient, is intuitive, well motivated, and performs better than both PBT and hypergradient methods. (2) The ablation study (HPM w/o T) showed the benefits of the learnable mutations. (3) The evaluation is performed on both synthetic and real benchmarks.
Weaknesses
(1) The reason behind the gain from learnable mutations is a bit unclear. From Algorithm 1 and Eq. (10), it seems the teacher network is trained on a given h_t^k before computing the mutation over it, so the mutation is guided by the hypergradient. If that is the main reason, perhaps the teacher network and learnable mutations are not even needed; instead, one could simply add one more hypergradient update step over the hyperparameters after cloning the top student models. A simpler baseline like this should be compared against to justify introducing the additional complexity of a teacher network. Another minor issue is the fairness of comparison with other methods, since training the teacher network also requires computation, which should be counted as part of the budget used by HPM. (2) The exact form of the teacher model is not very well motivated or justified. Attention is usually applied in situations where a query attends to a number of items, for example, a query word attending to other words in a sentence. However, there are no other items to attend to in this case, and W and V are just parameter matrices. It would be helpful to compare against simpler forms, for example, a multilayer feedforward network, to justify the advantage of using the attention mechanism here. ==================== Thanks for the author response, which addressed some of my concerns. I have increased my score accordingly.